Discussion in 'Article Discussion' started by Sifter3000, 31 Mar 2010.
Cooler-running, less power-hungry GPUs are the way forward, at least as far as the mainstream consumer is concerned. The less power we use, the more money we save. Unless Nvidia can make a massive jump in performance, their cards are just pointless in most day-to-day rigs. Unless you fold or use CUDA-optimised programs, it's doubly a waste of money.
Makes me chuckle... only Nvidia could make a 40nm-based card more power-hungry and hotter than its predecessor.
"Like so many of us, I never want to see PC gaming die, but in my opinion the days of multi-billion transistor single chip graphics cards are practically over"
What has a change to multi-chip GPUs got to do with PC gaming dying? If all they make are multi-chip GPUs, wouldn't the support for them just get better?
As CrossFire and SLI have matured, the multi-GPU argument that ATI put forward with the launch of the 3870 X2 is making more sense. High-end cards rarely come close to 100% scaling due to CPU and memory limitations, especially in non-overclocked systems, but lower-end cards in CrossFire and SLI systems with an overclocked CPU can often challenge the top-end single GPU and cost less: a pair of 5770s can compete with a 5870, and three nip at the heels of the 5970. Cards like the 4870 X2 and GTX 295 have forced games developers to ensure their products make good use of multi-GPU systems, which is beginning to erode outdated prejudices against SLI and CrossFire, in the same way that the prevalence of multi-threading in games slowly eroded the fast-dual-core-vs-quad-core argument.
More money has to be put into driver development and more money has to be put into graphics card design. With the cards stacked against PC gaming already in some respects, and each generation of graphics cards having less and less of a performance jump from the last - are we already hitting a wall?
Pretty sure I remember people saying the same thing about the big G80 core when that came out too...
Haven't CPUs come back from putting two chips in one package and returned to producing large, single chips again?
And why is it Global Foundries to Nvidia's rescue? (ATI'd rejoice, I guess; GloFo is still full of "old" AMD people.)
What node is GloFo at? 45nm? (Opterons, mostly, I guess.)
Even Intel is only at 32nm for something as complex as a processor (which a GPU is... if not more complex).
Really? That's a shame; they were always the cards I looked for first.
'tis a shame, multi-GPU is way too dependent on drivers.
How about specifically designing a bus for multi-GPU? I like the 4870 X2's side-bus (??) and I think that's the way forward for higher/better performance.
Where is the impetus to keep designing faster and faster GPUs? Unless and until developers bring us games and applications that stretch and even surpass current GPU capabilities, all we will see, for quite a while, are Fermi-small steps in better graphics performance. There need to be at least a dozen, if not more, games like Crysis that make GPUs bleed, so that even a basic £100 card has to give a 5870 level of quality to keep games playable.
It is unfortunate that we are unlikely to see any of this until after a new generation of consoles comes out and raises the floor under games graphics. PC gaming isn't dying, it just no longer leads.
I don't understand how the disappointment of the Fermi core equates to the death of single-chip GPUs. Dual-GPU cards are total overkill in today's gaming market; the average gamer simply does not need one. Why would anyone in their right mind spend £400-500 on a dual-GPU graphics card when single-GPU cards are perfectly adequate? I'm playing on a year-old GTX 275 and still cranking every game to the max.
I hope not. I'd much rather have a single GPU than need to rely on drivers to get good SLI/CF performance.
The problem is that if Nvidia focuses on CUDA, as most expect (it makes up 50-60% of its profits, according to reports), then ATI will have no competition, and without it no real will to push the graphics boundary further.
Yes, I prefer a single GPU and always will, but like the blob person I think we have seen the last of them.
Question: could Microsoft build general multi-GPU support into the next API (DirectX 12)? That would be much better than needing specific game profiles in the driver.
Or just use a single 5870 and get 85% of the performance....
Anyway, no, Nvidia won't learn from this, because they still think Fermi is the best thing since sliced bread. Jen-Hsun Huang is borderline delusional in his own-brand fanboyism; hell, he still thinks Nvidia "makes the best chipsets in the world"...
I think there is a change of focus coming; perhaps Nvidia is going to concentrate more on the HPC market, and this will drive development, with a derivative of the HPC product used for gaming... very much as Fermi is.
Tbh, perhaps that's the way it should be... it has always struck me as a little frivolous that "gaming" should be a major driving force in computer hardware development.
In fact it may even be advantageous: regardless of the ebb and flow of demand for PC gaming hardware, there will always be demand from the HPC sector, so provided there is sufficient demand to make a gaming derivative of an HPC product, PC gaming hardware will continue to develop.
That would certainly be nice. When/if multi-GPU moves forward and becomes more popular, it would seem that there would be more support for it and therefore fewer issues. Look at 64-bit operating systems: running one used to cause some pretty big problems, but now that modern PCs are hitting RAM capacity limitations there is a huge push to use a 64-bit OS, and they are quite standard, so people make new products to work with them. In much the same way, I assume game developers would start making games better suited to dual GPUs, APIs would be changed, and drivers could be developed with the sole intent of supporting dual-GPU cards.
I'm pretty sure Bindi, and others, are onto something. The days of huge monolithic GPUs are numbered. The graphics card industry is about to learn the same lesson the processor industry learned a few years back (mostly Intel with the P4): you can only go so far with huge single-core designs.
The future of GPUs clearly lies in smaller and more efficient multi-core designs, just as it does for CPUs. Yes, it'll take a paradigm shift, and the learning curve for optimising code for multi-core solutions might be a bit steep, but it's clearly the way ahead.
Seriously, folks... you gotta ask: why?
Graphics cards with gaming as their main raison d'être? Games are (with the honourable exception of Crysis) eaten alive by most mid-range-and-above cards of the last two years (think 4890/275 or higher). It is game developers that have to develop games that DRIVE users to want to upgrade... ATI/Nvidia are on a lost cause if they think people will continually upgrade just to have the latest card if their existing card remains more than adequate. Nvidia have bought games to show off PhysX. Wasted effort! ATI (and Nvidia) should be paying developers to make games only properly playable on the best of the last-gen cards, so people buy the next gen and the next gen is worth developing.
Some of the issues with games and drivers come down to the way the drivers are coded; the rest lies in the hands of the developers and the code they write. Far too many people are too keen to point to bad drivers when a lot of the time the problem lies within the software itself, which is why on the PC platform we need patches, something the consoles don't get because they spend more time testing for problems.
It is a misconception that the sheer variety of hardware on the PC platform is to blame; in essence this simply comes down to drivers and the software that uses them. For a simplified example, consider Adobe's Flash, which is capable of running on any PC regardless of hardware.
I do agree that multi-core GPUs are going to be the way forward, though ATI have shown that there is still some life left in single-core GPUs for a little while longer yet.