
News Rumour - AMD Radeon HD 7000 supports PCI-E 3

Discussion in 'Article Discussion' started by arcticstoat, 29 Jul 2011.

  1. arcticstoat
  2. bowman
    Um.

    This is like platter-based hard drives supporting SATA 3.

    Neat, but pointless.
     
  3. misterd77
    500 watt GPUs?
     
  4. Tattysnuc
    With the way power supplies have evolved, I don't believe the motherboard will ever be able to power the top GPU models. There must be all sorts of issues with running that much power over printed circuit board tracks - how much can you carry before you get problems with arcing, pin burnout and so on? It makes sense to keep them isolated and use the 80/20 rule: surely 80% of the GPUs around the world are covered by this power envelope, especially bearing in mind that most boards now have an "APU", to misuse AMD's acronym...
     
  5. TAG
    They're skipping 32nm and going straight to 28nm.
    Next gen GPUs will hopefully use a lot less power.
     
  6. Goty
    Why use less power when you can increase performance? ;)
     
  7. BrightCandle
    300W is not an unreasonable maximum for a GPU. There comes a point where it's impossible to cool one even with the additional slot they currently use. Noise on the 300W cards is already higher than is reasonable.
     
  8. Evildead666
    300W is probably the max that should be allowed for graphics cards anyway.
    PCI-E 3 will mean better multi-card support, and higher upload/download bandwidth for cards that may talk directly to the CPU.
     
  9. azazel1024
    I certainly don't think this is pointless. For actual graphics card performance I don't think it'll do a lick of good.

    HOWEVER, for the bus itself and small add-on cards it'll be enormous. A lot of current SATA3 and USB3 add-on cards are limited by being connected through a PCI-E 2 x1 lane. Bump that to PCI-E 3 and you have enough bandwidth to saturate SATA3 and USB3. You can also now get 4-port GbE NICs, or 10GbE NICs, on a single lane. Go with Intel's suggested/recommended new x2 lane setup and you have the equivalent of a full PCI-E 2 x4 slot for things like 4-port RAID cards, etc. You could also start implementing new graphics card standards that run off a lowly x4 PCI-E port if you wanted to (most lower-end cards currently don't even really saturate an x8 PCI-E slot).

    You also have twice the total bandwidth on the bus. That's especially useful for entry-level Intel CPUs/chipsets that only provide 16 lanes of PCI-E; you use most of that with a single high-end discrete GPU. Obviously you aren't going to be using much of that bandwidth for other things while you're gaming, but think of a couple of high-end discrete GPUs in Crossfire, combined with a GbE NIC (or worse, a 10GbE NIC) passing large amounts of data and an x4 RAID card making a backup over the network. You'd be mashing a god-awful amount of data through a "restrictive" bus.
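    The per-lane numbers behind this are easy to check. A minimal sketch (raw transfer rates and line-code efficiencies are the ones published in the PCI-E specs; packet/protocol overhead is ignored, so real-world throughput is somewhat lower):

    ```python
    # Effective PCI-E bandwidth per lane, per generation.
    # Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b.
    GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0}             # raw giga-transfers/s per lane
    ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # line-code efficiency

    def lane_bandwidth_mb(gen: int, lanes: int = 1) -> float:
        """Effective one-direction bandwidth in MB/s (1 MB = 1e6 bytes)."""
        bits_per_s = GT_PER_S[gen] * 1e9 * ENCODING[gen]
        return bits_per_s / 8 / 1e6 * lanes

    # A Gen 2 x1 link (~500 MB/s) can't feed SATA 6Gb/s (~600 MB/s after
    # encoding); a Gen 3 x1 link (~985 MB/s) can.
    print(round(lane_bandwidth_mb(2)))     # 500
    print(round(lane_bandwidth_mb(3)))     # 985
    print(round(lane_bandwidth_mb(3, 2)))  # 1969 - Gen 3 x2 ~= Gen 2 x4
    ```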
     
  10. Hakuren
    PCI-E 3.0, yes, it is useful now, but certainly not for graphics cards. Not one currently available graphics card can saturate an x16 slot; today's cards are barely capable of saturating an x8 slot (and only the top-of-the-line products at that). So a Gen 3 x16 slot is way too soon for graphics.

    Storage is another matter. It is pretty much the only major branch of the IT industry that will benefit from the extra bandwidth introduced with Gen 3. It will simplify manufacturing by reducing the number of x16 slots, since a Gen 3 x8 slot will provide as much bandwidth. And for entry-level servers and workstations I can see a move back to x4 slots, which will provide the same amount of bandwidth as Gen 2 x8.

    The 300W limit, hmm, that is a tough one. I would gladly see the limit reduced. There is no hope in hell of any major breakthrough in the way the modern PC is built. Of course, like everybody, I would love to see a CPU with 30x the power of an i7, or a GPU with GTX 580 performance multiplied by 30, both draining 5W. But that won't happen for the next 50 years or so, and at that point I won't care too much about PCs! :D Anyway, I think the tendency should be towards bigger PSUs feeding power directly instead of stressing motherboard circuitry.
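    The slot-equivalence claim is close but not exact, because Gen 3 trades 8b/10b encoding for the leaner 128b/130b scheme rather than doubling the raw rate. A quick back-of-envelope check (protocol overhead ignored, as with any such figure):

    ```python
    # Per-lane effective rates (MB/s), from the spec raw rates and encodings:
    # Gen 2: 5 GT/s * 8/10, Gen 3: 8 GT/s * 128/130, both divided by 8 bits.
    gen2_lane = 5.0e9 * (8 / 10) / 8 / 1e6     # 500 MB/s
    gen3_lane = 8.0e9 * (128 / 130) / 8 / 1e6  # ~985 MB/s

    gen2_x16 = gen2_lane * 16  # 8000 MB/s
    gen3_x8 = gen3_lane * 8    # ~7877 MB/s

    # A Gen 3 x8 slot delivers about 98% of a Gen 2 x16 slot's bandwidth.
    print(round(gen3_x8 / gen2_x16, 3))  # 0.985
    ```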
     
  11. schmidtbag
    I completely agree. If there are any devices that actually use 100% of the PCIE 2 bandwidth, there are probably fewer than three such products altogether. I would be interested to see how today's top-end GPUs and SSDs perform on PCIE 3, but I'm sure it'd probably be a 1FPS difference.

    As for PCIE 3 not upping the wattage - good. Both AMD and Nvidia should not have made cards that exceed that limit. I think it's stupid that they're making behemoth cards that are so big they can't fit in the average case, spew so much heat that you need to water-cool them, probably consume more power than all of your other electronic devices combined, and in games give you far more FPS than it's physically possible to notice.

    The power limitation on PCIE helps keep products practical.
     
  12. the_kille4
    I would love to see an affordable way of having SSDs in PCI-E slots... because the current ones are usually meant for servers, and using the extra bandwidth can definitely help their performance.
     
  13. edzieba
    Question: is 300W the maximum bus power draw (i.e. how much you can draw via the card edge connector before needing additional power connectors), or the rated total power draw (i.e. if you draw more than this amount of power, from all sources to a single card, that card cannot be certified as "PCI-E" compliant)?
    The former seems unlikely, but is the latter really an issue? Other than not being able to write PCI-E anywhere on your box, documentation, website, etc. (I'm sure some creative workarounds could be found), is there something else that prevents manufacturers from breaking this part of the spec? Some sort of patent agreement whereby non-compliance with any part opens you up to being sued for even using something that is partially compatible with PCI-E?
     
  14. borandi
    There were two 600W GPUs on show at Computex, one of which was definitely being put into production.
     
  15. Psy-UK
    Glad they've kept the power limit. Stops things getting silly and out of hand.
     
  16. HourBeforeDawn
    Sure, it won't use what's offered in PCI-E 3.0, but it will help pave the way for other devices that will.
     
  17. law99
    It would be interesting if they just threw power consumption to the wind for a single product. But otherwise I'm pleased they want to keep sensible limits.
     
  18. Action_Parsnip
    EVERY OTHER process change ever says you're wrong.
     
  19. TAG
    40nm saw vapor chambers becoming the norm.
    They pretty much ran out of better ways to cool smaller surfaces.

    I'm really curious as to what's gonna happen next. What will be used to increase cooling power? Artificially making chips larger by compartmentalising them into blocks deliberately spread apart from each other?
     
  20. TAG

    Also, look at laptops.
    They got more powerful AND use less power.
     