Graphics ATI 5970 vs nVidia GT300 (Fermi) [Not Rants - Only Speculation/Intelligent Discussion]

Discussion in 'Hardware' started by Jasio, 18 Nov 2009.

  1. Jasio

    Jasio Made in Canada

    Joined:
    27 Jun 2008
    Posts:
    810
    Likes Received:
    13
    Okay,

    Let me make something clear about what I am asking:

    1) I would prefer that the thread not disintegrate into an ATI vs. nVidia fanboy rant thread.
    2) GT300 is *not* out, hence this is only speculation.
    3) I am curious as to what people think nVidia/ATI will do next.
    4) What the future of GPUs will be.

    Okay, with that out of the way, I am interested in one particular aspect mentioned by Anandtech about the 5970 card:

    "Officially, AMD and its vendors can only sell a card that consumes up to 300W of power. That’s all the ATX spec allows for; anything else would mean they would be selling a non-compliant card. AMD would love to sell a more powerful card, but between breaking the spec and the prospect of running off users who don’t have an appropriate power supply (more on this later), they can’t.

    But there’s nothing in the rulebook about building a more powerful card, and simply selling it at a low enough speed that it’s not breaking the spec. This is what AMD has done.

    As a 300W TDP card, the 5970 is entirely overbuilt. The vapor chamber cooling system is built to dissipate 400W, and the card is equipped entirely with high-end electronics components, including solid caps and high-end VRMs. "


    So to sum it up, the 5970 runs at 300W peak "officially" but can be easily pushed over the ATX spec.

    We all know that both nVidia and ATI flagship cards are power hungry, and the 5xxx series has very aggressive power saving features.

    I would like to speculate on how nVidia's GT300 flagship will pan out vs. ATI's. I am assuming that nVidia will also stick to the ATX spec officially, and that their new design will reflect extensive power saving features. However, I am also quite sure that the card will come very close to a 300 watt TDP.

    Seeing as unofficially the ATI cooler can dissipate 400 watts, do you think something similar will happen with nVidia? As long as the ATX spec remains unchanged at 300 watts, are we going to enter a phase where high-end/flagship cards are all released as "300 watt TDP" compliant products but engineered to perform far better out of the box with the flip of a switch (by the end-user)?

    Personally, I wouldn't be surprised if nVidia did something similar: release a 300 watt TDP product with far more overhead *engineered* into it, with the full expectation that the end-user will make use of it when they get home. If ATI fits a 400 watt TDP cooler on the stock/reference card, nVidia probably won't do any worse; possibly better.

    Unless nVidia somehow produces an amazing product that gives off little heat and doesn't draw much power, I think we might temporarily see this kind of high-end product until the ATX spec is adjusted. I have no idea how long it could take to change the spec; that might prove to be a minor setback, but it might not...

    Discuss.
     
  2. GoodBytes

    GoodBytes How many wifi's does it have?

    Joined:
    20 Jan 2007
    Posts:
    12,300
    Likes Received:
    710
    [request to delete my post.]
     
  3. pimonserry

    pimonserry sounds like a party.

    Joined:
    20 Dec 2008
    Posts:
    2,113
    Likes Received:
    75
    I never knew 300W was the limit for the ATX spec. That's a shame :( Otherwise we could get the 5870X3 ;)

    Still, I wish ATI's 5800 series were a little smaller. I've a 4870 at the moment, and I can't fit a 5870 in my case :(

    Anyway, on topic, the 5970 is a stonker; it'll take some serious work by nVidia to conquer the current 5800 series (at a reasonable price), I think.
     
  4. pimonserry

    pimonserry sounds like a party.

    Joined:
    20 Dec 2008
    Posts:
    2,113
    Likes Received:
    75
    Edit: my browser bugged and posted twice. Sorry :(
     
  5. tonpal

    tonpal What's a Dremel?

    Joined:
    27 Jul 2008
    Posts:
    621
    Likes Received:
    32
    Historically Nvidia have been driven by performance and I don't expect Fermi to be any different. The information being released on the ability of Fermi to be used for GPGPU applications seems to back that up. Nvidia are looking to innovate in the performance of the chip rather than economise as AMD have done with their 4xxx and 5xxx series.

    What does that mean in terms of the GT300? I would expect a large, expensive-to-produce GPU that is less power efficient than the competition but with great performance in both graphics and GPGPU applications.

    In terms of the manufacturer providing thermal headroom to "overclock" a GPU, thought needs to be given to the ability to supply power to the graphics card. A PCI-E 2.0 motherboard slot delivers a maximum of 75W, each 6-pin PCI-E power connector delivers a maximum of 75W, and each 8-pin PCI-E power connector delivers a maximum of 150W. The AMD 5970, which has the motherboard connection (obviously), a 6-pin power connector and an 8-pin power connector, can therefore draw a maximum of 300W within spec. If Nvidia follow the same path, could we see the GT300 being released with two 8-pin power connectors, giving it a 375W budget?
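    For illustration, here's a quick back-of-the-envelope sketch of that connector budget in Python; the function name and the dual 8-pin example are hypothetical, only the per-connector limits above are taken as given:

        # Per-card power budget from the limits above:
        # 75 W from the PCI-E slot, 75 W per 6-pin, 150 W per 8-pin connector.
        SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

        def board_power_limit(six_pins, eight_pins):
            # Maximum in-spec board power for a given connector loadout.
            return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

        print(board_power_limit(1, 1))  # 6-pin + 8-pin, as on the 5970: 300 W
        print(board_power_limit(0, 2))  # hypothetical dual 8-pin card: 375 W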
     
  6. Jasio

    Jasio Made in Canada

    Joined:
    27 Jun 2008
    Posts:
    810
    Likes Received:
    13
    The 5970 board is actually laid out for 2 x 8-pin, but the second connector has two pins shrouded off so it ships as a 6-pin. I would hope that one of the manufacturers ends up using it and offering 4GB (2GB per GPU) -- similar to the 4870 2GB. But the question is *how* quick the GT300 can be within the 300W TDP limit; there are going to be certain memory/GPU clock frequency limitations at some point.

    It's worth mentioning that the ATX spec is 300 watts per card, so it's reasonable to assume that a parallel solution (as separate cards) may become a necessary reality, hopefully pushing both companies to better their multi-GPU performance (SLI/CrossFire), be it in 2, 3 or 4 card configurations.
     
  7. Sh0cKeR

    Sh0cKeR a=2(s-ut)/t²

    Joined:
    21 Aug 2009
    Posts:
    477
    Likes Received:
    11
    It seems that with the 5970, you're getting two artificially limited HD5870s in CrossFire with the voltages dropped. In one review, they pushed the card quite easily over 300W with overclocking and just about matched the CrossFire equivalent, give or take a few percent, while being cheaper. ATI obviously designed the card to run at these overvolted/overclocked settings, as only then does the cooler start kicking in at higher RPM and managing to dispel the heat. As far as GT300 goes, I'd imagine the single-GPU version will easily pull 250+ watts and, if history is anything to go by, fit snugly between the HD5870 and HD5970 in the performance charts.
     
  8. barndoor101

    barndoor101 Bring back the demote thread!

    Joined:
    25 Oct 2009
    Posts:
    1,694
    Likes Received:
    110
    Just remember guys, the actual Fermi chip is huge (which is why they will have worse yield problems than ATI - the chip is almost twice the size of Cypress), so a) it will give off more heat, and b) a dual-GPU card is going to be tough (at least on the same PCB).
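    To see why die size hurts yield so badly, here's a minimal sketch assuming a simple Poisson defect model; the defect density and die areas are made-up illustrative numbers, not published figures for TSMC's 40nm process:

        import math

        def die_yield(area_cm2, defects_per_cm2=0.4):
            # Poisson model: chance a die of this area contains zero defects.
            return math.exp(-defects_per_cm2 * area_cm2)

        cypress_area = 3.3             # ~330 mm^2, assumed for illustration
        fermi_area = 2 * cypress_area  # "almost twice the size", per the post above

        print(f"Cypress-sized die: {die_yield(cypress_area):.0%} yield")
        print(f"Fermi-sized die:   {die_yield(fermi_area):.0%} yield")

    Doubling the area squares the (already below one) per-die survival probability rather than just halving it, which is why a chip twice the size of Cypress is hit disproportionately hard.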
     
  9. GoodBytes

    GoodBytes How many wifi's does it have?

    Joined:
    20 Jan 2007
    Posts:
    12,300
    Likes Received:
    710
    Give it a few months after its release, then the B version will be out on a much smaller die... probably 45nm.
     
  10. barndoor101

    barndoor101 Bring back the demote thread!

    Joined:
    25 Oct 2009
    Posts:
    1,694
    Likes Received:
    110
    It's already on a 40nm process, so the next step down is 32nm. Not even Intel have migrated their stuff to 32nm yet (I believe Samsung have some memory chips at 32nm, but those are prototypes).

    There is no scope for a die shrink for at least a year, maybe even two years.
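    For a rough sense of what a full-node shrink would buy, ideal geometric scaling shrinks die area by the square of the feature-size ratio; the starting die size below is just an assumed ballpark, and real shrinks rarely hit the ideal:

        def shrink_area(area_mm2, old_nm, new_nm):
            # Ideal geometric scaling: area goes with the square of the node ratio.
            return area_mm2 * (new_nm / old_nm) ** 2

        die_40nm = 500.0  # assumed Fermi-class die size in mm^2, illustration only
        print(f"Same design at 32nm: ~{shrink_area(die_40nm, 40, 32):.0f} mm^2")  # ~320 mm^2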
     
  11. GoodBytes

    GoodBytes How many wifi's does it have?

    Joined:
    20 Jan 2007
    Posts:
    12,300
    Likes Received:
    710
    So if it's 40nm... and no one has done 32nm... why do you say that the G300 die is huge?
     
  12. unknowngamer

    unknowngamer here

    Joined:
    3 Apr 2009
    Posts:
    1,200
    Likes Received:
    98
    Not quite right.

    GlobalFoundries (AMD's old foundry) are at 32 and 28nm.

    http://www.arm.com/news/26063.html
    They've got 32nm chips in production for mobile devices and are developing 28nm as well, expected Q1/Q2 2010.
     
  13. okenobi

    okenobi What's a Dremel?

    Joined:
    3 Nov 2009
    Posts:
    1,231
    Likes Received:
    35
    Historically, ATi partners have offered a greater variety of custom cooler solutions than nVidia's. Also, as has been mentioned, the ATi 4 and 5 series focused on power efficiency. In other threads, there's been talk of how we're all playing 360 games on a PC, and since I tend to think that's largely true, I like ATi's approach lately. They've worked on a good compromise between affordability, power consumption and performance. The 5 series seems to be an evolution of this philosophy, and is no doubt priced slightly high, because ATi can get away with it.

    Given all that, I think Fermi is likely to outperform the top-end 5 series in nVidia-biased games and possibly in other games. It's also likely to fold well and be good at other peripheral tasks. However, I'm doubtful it will be as efficient, quiet or cool as the 5 series. Also, it'll be interesting to see what they do with the sub-£150 market, because let's face it, that's what really moves these days.
     
  14. Krikkit

    Krikkit All glory to the hypnotoad! Super Moderator

    Joined:
    21 Jan 2003
    Posts:
    23,935
    Likes Received:
    658
    For the past few generations ATi's partners had to be more inventive with the coolers because the stock ones were horrid.
     
  15. unknowngamer

    unknowngamer here

    Joined:
    3 Apr 2009
    Posts:
    1,200
    Likes Received:
    98
    My concern for Fermi is nVidia veering off towards GPGPU compute and forgetting about graphics.

    As in, it's great at Folding but not as good at graphics.

    So 50% faster in Crysis/3DMark, but 100% faster at Folding.
     
  16. Aracos

    Aracos What's a Dremel?

    Joined:
    11 Feb 2009
    Posts:
    1,338
    Likes Received:
    47
    Forgive me for going slightly off topic, but where is the nm limit? Like, how small can you get before you can no longer get any smaller? Considering my CPU is 130nm and 32nm CPUs are currently being worked on, I really don't see 1nm being THAT far away, maybe 10 years? But is there a limit? Also, what measurement comes after nm?
     
  17. barndoor101

    barndoor101 Bring back the demote thread!

    Joined:
    25 Oct 2009
    Posts:
    1,694
    Likes Received:
    110
    Picometre, or pm, is the next one. I think I remember a paper written on this, saying there is a theoretical limit before you get too much voltage leakage across the silicon; I can't remember what that limit is, but we're quite a way off yet.
     
  18. unknowngamer

    unknowngamer here

    Joined:
    3 Apr 2009
    Posts:
    1,200
    Likes Received:
    98
    A limit for the current technology is ultimately the atomic scale.

    When we create the technology to get there is a different question.

    Also, don't forget there are other technologies waiting in the wings:
    quantum computers, mechanical nano systems, optical switches working at light speed, bio-chemical systems, DNA computing.

    Then the question becomes how we assimilate the technology and information.

    Crystal ball time...
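    To put the atomic-scale limit in rough numbers: silicon's lattice constant is about 0.54nm, so a quick sketch (the node sizes are just examples) shows how few atoms today's features already span:

        SILICON_LATTICE_NM = 0.543  # silicon lattice constant, roughly 0.54 nm

        def lattice_cells_across(feature_nm):
            # Rough number of silicon lattice cells spanning a feature this wide.
            return feature_nm / SILICON_LATTICE_NM

        for node in (40, 32, 1):
            print(f"{node} nm is about {lattice_cells_across(node):.0f} lattice cells wide")

    A 1nm feature would be only about two lattice cells wide, which is why the atomic scale is the hard wall for conventional silicon.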
     
  19. Aracos

    Aracos What's a Dremel?

    Joined:
    11 Feb 2009
    Posts:
    1,338
    Likes Received:
    47
    That went completely over my head, but I gather my worries are completely unfounded. Cool beans.
     
  20. unknowngamer

    unknowngamer here

    Joined:
    3 Apr 2009
    Posts:
    1,200
    Likes Received:
    98
    I know those are technologies that are years away from being useful, if ever viable.


    But 20 years ago, when the x86 architecture was in its infancy, 40nm was unthinkable.


    In another 20 years, what will be available will be unthinkable to us.
     
