Laptop parts. People can have their 680s if they wish, but the rest of us shall be enjoying desktop goodness.
Yep, they skipped 300 for laptop components. My theory: 200 to 400 was an architecture change, so a number got skipped; 400 to 500 was a refinement of the same architecture, so no skip; 500 to 700 is an architecture change again. Yep, I realised nVidia only support 3 monitors, and then only in dual-card setups (unless that changes with Kepler). I only plan on using 3 monitors anyway, all exactly the same model at 1920x1080. I know I could always go for an AMD card now, but then I'd kick myself if nVidia were better. I'll wait until they release them and then make my decision. My second 570 broke, so I'm on 2 monitors at the moment, just waiting to go surround.
8800 -> 9800: refinement (or going backwards, in the case of the 8800 GTX vs 9800 GTX). 100 series skipped. 200 series: new architecture. 300 series skipped. 400 -> 500: refinement. 600 series going to be skipped. 700 series: new architecture. I predict real-world performance of around 160% of a 580. Remember, they're dropping shader clock speeds down to match the core clock. Having had bad experiences with ATI/AMD (X800, 5870), I think I'll stick with nVidia for the near future. Will be watching the 780 with close interest. No current single GPU can power 2560x1440 comfortably (maybe the 7970, but only just).
Indeed. I'd be pleased if I could just run three screens on a single card (single GPU, that is. No 790 kthxbai). I think Nvidia could really take home the cheesecake if they implemented that. Just with a big warning saying "Look. This might not actually be powerful enough to run your three 2560x1600 screens. Take it with a pinch of salt." or something to that effect.
It may be that Nvidia is moving towards the AMD approach of having more shaders. Considering how well optimized their drivers are for fewer shaders, the sheer performance increase of doubling the shader count alone could lead to some fun times ahead. Combine it with a nice wide memory bus and some quick memory and you've got a good card already, assuming they now put at least 2GB on the high end. The 580 did seem a little taxed for memory at higher resolutions. I wish I wasn't so broke right now.
+1 to that. I'd like to see everything at the 760 Ti level and above be capable of supporting 3 screens. They might not have the grunt to run them, but I'd like the option should I want to play with surround. Plus if the 780 is really as fast as they say, it should have more than enough kick to happily run three 1080p screens without any issues.
As much as I want to, I don't believe any of this. The numbers have most certainly been twisted for marketing purposes. Twice as fast? No way! 160%? Pffft... If they really are, I would expect to see quite heavy price tags on 560 Ti equivalents and above. Sure, they would have lower models at a lower price point to match AMD, but still, anything that is that much of an improvement will have a price tag to suit. I still call BS. Also, I was hoping the 28nm process would bring far lower power consumption on these new cards. I hope Nvidia can do better than AMD after seeing the stats on the 7970. That's my 2 cents anyway. I would love to eat my own hat on this one.
Well, they have had well over a year to bring something new and much more powerful to the table. So all this extra time spent tweaking should yield some staggering results... Well, one would hope!
Apparently they don't need to. Seems a "confidential slide" of dubious origins (and I don't necessarily mean 3DCenter here) is enough to shift focus from a competitor's product which is actually going on sale in less than a week.
The 600 series are going straight to OEMs, just like the 300 series did. - I really hope they have at least 3GB of dedicated VRAM on the new cards!
1500 shaders is plausible. The 580 had 512 shaders running at twice the core clock. If the 780 has ~1500 shaders, but at the same speed as the core, that puts them at roughly half the shader speed of the previous generation. That means (without taking clock-speed or architecture changes into account) the 780 would have roughly 146% of the 580's raw shader throughput (1500 × 0.5 ÷ 512 ≈ 1.46). So with clock-speed increases and architecture changes on top, ~160% of a 580 in the real world seems about right for ~1500 shaders.
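The back-of-the-envelope arithmetic above can be sketched out like this (purely illustrative: the 580's 772 MHz core / 1544 MHz shader clocks are its actual specs, but the ~1500-shader "780" at core clock is just this thread's speculation):

```python
# Rough shader-throughput comparison: shader count x shader clock.
# The GTX 780 figures here are the thread's guesses, not confirmed specs.

def shader_throughput(shader_count, shader_clock_mhz):
    """Raw shader work per unit time, ignoring architecture differences."""
    return shader_count * shader_clock_mhz

# GTX 580: 512 shaders, shader clock at 2x the 772 MHz core clock.
gtx580 = shader_throughput(512, 2 * 772)

# Hypothetical GTX 780: ~1500 shaders running at core clock
# (assuming the same 772 MHz core, since nothing is confirmed).
gtx780 = shader_throughput(1500, 772)

print(f"Raw throughput ratio: {gtx780 / gtx580:.2f}x")  # ~1.46x
```

That ~1.46x is before any clock-speed bump or per-shader architectural improvement, which is why a real-world figure around 160% isn't implausible.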
Thinking ever so slightly outside of the box... who says that chart is showing framerates? We're moving to 28nm, so double the performance per watt is not impossible.
That's actually a fair point. Being able to run a high-end Nvidia GPU at only ~160 watts rather than 300 would be very fun indeed.
Well, yeah, that should be obvious, but everyone's discussing it as if it came straight from Nvidia themselves. So why haven't we heard an announcement that they're bringing the release forward to compete with the 7xxx, if it's that ready? To be fair, lower power consumption probably won't be the target; better performance per watt is, which the 7970 has achieved pretty well (as can be seen here). If low power were what we truly wanted, people would not hail Sandy Bridge for its power consumption but chastise it, and praise Atom instead. Performance has to come into it. The 7970 uses less power than the 6970, and about the same as the 5870/6950, yet is an awful lot faster. 28nm was predicted to double performance, or halve power consumption (not quite, as lithography scaling isn't that good), and AMD have gone for somewhere between the two. I'd expect Nvidia to do likewise. Where they choose to place themselves is what will be interesting.