
News AMD responds to RX 480 'Powergate' issues

Discussion in 'Article Discussion' started by Gareth Halfacree, 4 Jul 2016.

  1. Gareth Halfacree

    Gareth Halfacree WIIGII! Staff Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    10,600
    Likes Received:
    807
  2. Guest-16

    Guest-16 Guest

  3. Impatience

    Impatience Active Member

    Joined:
    6 Apr 2014
    Posts:
    1,146
    Likes Received:
    15
    I definitely feel that powering a high-end GPU should ONLY be done via the PCIe power connectors. There have been a few GPUs with this "issue", and it's not just AMD at fault here! My 750 Ti sits at about 80W average through the PCIe slot alone (according to reviews), so I've had to avoid any overclocks, even though I know the card can manage it!
     
  4. Harlequin

    Harlequin Well-Known Member

    Joined:
    4 Jun 2004
    Posts:
    7,071
    Likes Received:
    179
    GTX 950 slot-powered only? GTX 960 with a 6-pin? Both are above (and sometimes way, way above) the 75W limit.
     
  5. Corky42

    Corky42 What did walle eat for breakfast?

    Joined:
    30 Oct 2012
    Posts:
    8,396
    Likes Received:
    186
    You'd think, given this is apparently a common problem (unknown to me until now), that a PCIe slot would have a hard limit on power draw, i.e. something that prevents the slot from being pushed beyond spec.
     
  6. Elledan

    Elledan New Member

    Joined:
    4 Feb 2009
    Posts:
    948
    Likes Received:
    34
    Don't be silly, that'd make sense and cost money as well. Money better spent on more fancy LED lighting. D'oh ;)
     
  7. Broadwater06

    Broadwater06 Member

    Joined:
    10 Apr 2016
    Posts:
    200
    Likes Received:
    3
    You can see the difference clearly in the graphs, though, between the GTX 960 and the RX 480.
     
  8. Maki role

    Maki role Dale you're on a roll... Staff

    Joined:
    9 Jan 2012
    Posts:
    1,587
    Likes Received:
    60
    Does everything have to be a "gate" nowadays?
     
  9. gupsterg

    gupsterg Member

    Joined:
    29 Sep 2015
    Posts:
    56
    Likes Received:
    1
    I reckon we also need a "PCI-SIG gate" ;) .

     
  10. DbD

    DbD Member

    Joined:
    13 Dec 2007
    Posts:
    411
    Likes Received:
    4
    AMD have somehow managed to mess up yet another release. They can't just magic away the power usage, or the card wouldn't be drawing that much power in the first place. Clearly you shouldn't buy a stock card, but should wait for aftermarket ones to come out with 2x 6-pin or 1x 8-pin, irrespective of what AMD say.
     
  11. faugusztin

    faugusztin I *am* the guy with two left hands

    Joined:
    11 Aug 2008
    Posts:
    6,790
    Likes Received:
    240
    The issue is not the spikes above 75W or 150W; those are fine and won't hurt anything. The problem is sustained high load, and that is the problem with AMD's card. A spike above 75W every few tens of milliseconds won't hurt the board, but a sustained 90W load instead of 75W can heat up the pins in the slot and cause all kinds of problems. And that is without overclocking.
     
  12. Vault-Tec

    Vault-Tec Green Plastic Watering Can

    Joined:
    30 Aug 2015
    Posts:
    7,674
    Likes Received:
    364
    I still (from what I have seen) don't think it's that bad.

    People have been knocking together cheap PCs for ages, and cards like the 750 and 950 don't have a power connector at all, so when overclocked they *always* draw that current from the slot. So we're talking up to around 100W constant through the PCIe slot.

    I'm pretty confident people have burned up PCIe slots overclocking cards like that, but they would probably think "I shouldn't have overclocked so much" and just return the damaged part for RMA and get it swapped out.

    The only real problem here is that AMD are pushing a new tool that overclocks and overvolts the 480, and of course if they're saying that's OK then pretty much everyone will want to do it. That was their only crime IMO.

    Anyway, from what I hear this new driver should fix the problem, and then the Nvidia boys can move on to finding something else to rant about.

    Maybe they can go back to complaining about what a serious disappointment the performance is. It was only really the guys on Nvidia hardware who were building it up so high, and they were probably doing that because they knew it would give them an angle of attack when the card didn't deliver. I mean seriously, at one point on OCUK people were expecting 980 Ti performance, but all of the swines doing the aggravating had already bought 1080s.

    I guess with no direct competition for the 1080 for them to roast into the ground, they've moved on to the next best thing: something which won't affect them in the slightest.

    Ridiculous. Still, if the rumours are correct they have more than a year before they see the next high-end GPU, so hopefully they'll get really bored and realise that without AMD... oh, what am I saying, they couldn't give a toss, the fools.
     
  13. faugusztin

    faugusztin I *am* the guy with two left hands

    Joined:
    11 Aug 2008
    Posts:
    6,790
    Likes Received:
    240
    Of course it is mostly older parts that have the problem. For example, one user reported the problem with only one of his motherboards, an old one.

    But that shows where the problem will be: in computers with somewhat older components, whose designers never anticipated a manufacturer abusing the slot the way the RX 480 does. Which is, ironically, one of the segments the RX 480 aims at, namely budget gamers.
     
  14. Vault-Tec

    Vault-Tec Green Plastic Watering Can

    Joined:
    30 Aug 2015
    Posts:
    7,674
    Likes Received:
    364
    True, but even I didn't realise there would be people out there putting these cards in ten-year-old PCs. When they said budget gamer I figured something like a Skylake i3, not a dual-core CPU worth about 2p.

    Ah well, it should be fixed soon :)
     
  15. gupsterg

    gupsterg Member

    Joined:
    29 Sep 2015
    Posts:
    56
    Likes Received:
    1
    Just as an update for members here: The Stilt, who is a pro overclocker with good AMD product knowledge, has released a method to reprogram the IR3567B to shift more of the loading onto the PCI-E plug, but because the VRM is split 50/50 between the PCI-E slot and plug, the redistribution yields only about a 10W decrease in PCI-E slot usage.

    1. The post explaining the reference PCB.

    2. The thread with the i2c command fix, and a BIOS fix to be released soon.

    3. Anyone interested in a photo showing the phase distribution, plus a link to Buildzoid's video testing the RX 480's PCI-E slot/plug setup, should view this post by McSteel on TPU.

    Given the mere 10W reduction on the PCI-E slot and how the PCB is designed, I'd assume that headroom would easily be used up when overclocking. Hopefully, when AMD release a fix, it won't hamper performance to achieve lower PCI-E slot power usage. If there were another controller that could change the PCI-E slot/plug power distribution, or if PowerPlay in the ROM could, I know The Stilt would have done it.
     
    Last edited: 5 Jul 2016
  16. Bladestorm

    Bladestorm New Member

    Joined:
    14 Dec 2005
    Posts:
    698
    Likes Received:
    0
    I spent most of yesterday and some of today reading everything I could find on this, mostly out of interest, but also because I'm in the market to eventually buy a new GPU, so this affects my eventual decision.

    The PC Perspective article and discussion were particularly interesting, because you have a knowledgeable engineer running tests and a motherboard engineer chipping in. From that:

    PCI-E is rated for 75W, but that is across all the pins. When only the 12V pins are used, as in this case, the number we care about is actually 66W (five pins at 1.1A each, apparently). 8% was given as the margin for error within the specification, though a little more may be possible.
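    The arithmetic above can be checked directly. A minimal sketch, assuming the figures quoted in that discussion (five 12V pins, 1.1A per pin, ~8% tolerance), not verified against the specification text itself:

```python
# 12V power budget of the PCIe slot, using the figures quoted in the
# PC Perspective discussion (assumptions, not spec-verified).
PINS_12V = 5        # 12V supply pins actually used by the card
AMPS_PER_PIN = 1.1  # rated current per pin, as quoted
VOLTS = 12.0
TOLERANCE = 0.08    # ~8% margin for error, as quoted

nominal_12v_limit = PINS_12V * AMPS_PER_PIN * VOLTS   # 66.0 W
with_tolerance = nominal_12v_limit * (1 + TOLERANCE)  # ~71.3 W

print(f"Nominal 12V slot budget: {nominal_12v_limit:.1f} W")
print(f"With ~8% tolerance:      {with_tolerance:.2f} W")
```

    So even with the tolerance, the usable 12V budget sits around 71W, well short of the headline 75W figure.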

    The main point of failure/worry is the pins in the socket itself, because if the contact points make anything less than a perfect connection, they become a point of particular resistance, causing power dissipation and heat generation. The socket design means they get very little airflow and mostly have to shed heat through the physical structure of the socket. Spikes weren't considered a big worry, because they don't last long enough to add significant heat, but consistent over-draw is, because it pushes the heat up and up the longer it runs, risking burning out the socket.
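    The worry about imperfect contacts comes down to I²R heating: dissipation at the pin scales with the square of the current. A rough sketch, where the 10 mΩ contact resistance is purely a hypothetical illustration, not a measured value:

```python
# Per-pin I^2*R dissipation in the slot. The 10 mOhm contact
# resistance is a hypothetical value for illustration only.
def pin_heat_watts(slot_power_w, pins=5, volts=12.0, contact_res_ohm=0.010):
    current_per_pin = slot_power_w / volts / pins  # amps through each 12V pin
    return current_per_pin ** 2 * contact_res_ohm  # watts dissipated per pin

for draw in (66, 75, 90):
    print(f"{draw} W slot draw -> {pin_heat_watts(draw) * 1000:.1f} mW per pin")
```

    Going from 66W to 90W raises per-pin current by about 36% but per-pin heat by about 86%, which is why sustained over-draw matters far more than brief spikes.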

    6-pin cables theoretically have a 75W limit, but it is trivial for them to include much larger contact areas and heavier-duty wiring, which means in practice they can almost always cope with much higher draw.


    From Buildzoid's Twitch stream comes the following on how the RX 480 is wired:

    Three VRM phases draw 50% of the core power through the PCI-E socket.
    Three VRM phases draw 50% of the core power through the 6-pin plug.
    A minor memory rail is also wired to the 6-pin plug (pulling around 3W).


    Put together, if the RX 480 were running at the 150W they originally stated, it would pull around 73W through the socket flat out and just about hit the PCI-E specification, albeit within the margin for error. Pulling ~73W through the socket continually would still be a questionable move for long-term reliability, but one they could probably get away with.

    Many reviews show it actually averaging 160-170W in power-draw tests, meaning it runs about 79-83W through the 66W socket at stock speeds, and overclocks, including factory ones, push that up quite quickly, in some cases resulting in 90W consistently going through that 66W socket. The board engineer seemed to think that at that rate it wasn't a matter of if it would blow the board/socket, just a question of when.
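    Combining the 50/50 split with measured board power roughly reproduces those numbers. A quick sketch, taking the ~3W memory rail on the 6-pin as stated above:

```python
# Estimate PCIe slot draw from total board power, per the 50/50 VRM
# split described for the RX 480. The ~3W memory rail riding on the
# 6-pin is as quoted; the rest is simple arithmetic.
SLOT_12V_BUDGET = 66.0  # usable 12V watts through the slot pins

def slot_draw_watts(board_power_w, memory_rail_w=3.0):
    core_power = board_power_w - memory_rail_w  # memory sits on the 6-pin
    return core_power / 2  # half the core phases hang off the slot

for total in (150, 160, 170):
    slot = slot_draw_watts(total)
    over = slot - SLOT_12V_BUDGET
    print(f"{total} W board -> {slot:.1f} W via slot ({over:+.1f} W vs 66 W)")
```

    At the stated 150W that gives ~73W via the slot, and at the measured 160-170W it lands around 79-84W, matching the figures above.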


    So right now the best advice for RX 480 owners is to make sure they are running at the base reference clocks and, if possible, to follow the advice of Legit Reviews and try reducing the top power profiles a little, which will reduce the power going through the slot. They even saw 1-3% performance gains when doing this, so it shouldn't mean a noticeable performance loss as long as it's stable.

    In future, board partners should be able to redesign the board, shifting power draw away from the socket and onto 6- or 8-pin connectors, which will definitely solve the problem, but reference board designs might need quite a bit of limiting if they are to be stable in the long run.
     
  17. gupsterg

    gupsterg Member

    Joined:
    29 Sep 2015
    Posts:
    56
    Likes Received:
    1
    Yep, the 6-pin on the RX 480 is wired as an 8-pin.

    Most PSUs are 6+2, so the 6-pin definitely has 3x 12V wires and can easily give 3 x 12V x 8A = 288W; the PCI-SIG spec for the 6 & 8 pin is well below the hardware limit. Some PSUs, like my CM V850 (Seasonic KM3 OEM platform), even have no OCP on the PCI-E rails, so you can draw a pretty high current/wattage.
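    The 288W figure follows from treating the connector as fully populated with three 12V wires at 8A each; the 8A-per-contact rating is the poster's quoted terminal figure, taken here as an assumption:

```python
# Spec rating vs rough physical capability of a PCIe 6-pin connector.
# The 8A-per-contact figure is the quoted terminal rating (assumption),
# with all three 12V wires populated, as described for the RX 480.
def connector_capacity_w(wires_12v, amps_per_wire=8.0, volts=12.0):
    return wires_12v * amps_per_wire * volts

six_pin_spec = 75.0                         # W, PCI-SIG rating
six_pin_physical = connector_capacity_w(3)  # 288 W

print(f"6-pin: {six_pin_spec:.0f} W by spec, ~{six_pin_physical:.0f} W physically")
```

    This is why a 6-pin plug has far more headroom than the slot: the spec number, not the copper, is the binding limit.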

    Yep, the PCI-E slot is being hammered compared with past AMD cards; see the averages in this data from THG on the 390X/Nano/Fury X/RX 480.

    Yep, the 50/50 split is the six GPU phases divided between the PCI-E slot and plugs.

    The current reference PCB RX 480 is a fail IMO, and I'm waiting until either revised or AIB cards are available before getting an RX 480 for tinkering.

    The Stilt's fix shifts loading from the three phases supplied by the PCI-E slot to the three phases supplied by the PCI-E plug. The IR3567B can't change the supply source for the three phases which are connected to the PCI-E slot.

    This proves there is no programmable chip on the PCB that can change the VRM's supply source to lower the PCI-E slot loading, as on past cards. In the ROM, PowerPlay has PowerTune/Limit; this sets parameters for the GPU-only TDP/TDC/MPDL and cannot differentiate supply source.

    AMD's driver fix will most probably tighten PowerTune/Limit IMO, and they may change the EVV VID-per-DPM calculation to set the VID lower. Even this will not dramatically cut power usage on the PCI-E slot IMO, or bring it in line with past cards. There is ASIC profiling done per GPU on the PCB, taking into account leakage (and other properties) to set the VID per DPM; ASIC profiling is done so a ROM does not have to be tailored per GPU.
     
