It's not clickbait. Going Samsung was Nvidia's dumbest decision. It's worse than Fermi. Nuff said. TSMC could have handled Ampere easily enough. They've had no issues really with supplying plenty of 7nm wafers to AMD. Nvidia used Samsung to try and get TSMC to cut them a cheap deal, but TSMC had plenty of "work" so refused. Which backfired on Nvidia, who were then stuck with Samsung. It's just a simple case of Nvidia once again pissing off people they deal with. Like that time they pissed off Intel, so Intel refused to license them any more sockets which killed Nforce stone dead.
Seeing as how there's a lack of Titans, it must be "best card except Titans", which would be the 680 — also the flagship when released and the best card that generation that wasn't a Titan. Then they used the 780 later, as well as the 780 Ti, so it should be 580 vs 680, then 680 vs 780, then 680 vs 780 Ti. So you have performance figures of an Ampere card made on the TSMC process to confirm this, then?
The 680 was the top single card for the 6 series I thought? Even if it wasn't, why wasn't there a different card from that range then?
The 680 was the mid-range Kepler die. The 780 was big Kepler. They started segmenting their releases then to pull in the fanboys. They did it with the 9 and 10 series too (980 and 1080, when the real big cards were the 980 Ti and 1080 Ti).
It wasn't the flagship Kepler. It was the flagship 600 card, if you want to call it that, but it was still Kepler, and it was tiny compared to the actual flagship, the 780 (and then the Ti, as well as the Titans etc.).

GTX 680: Kepler, 294 mm², 28nm
GTX 780: Kepler, 561 mm², 28nm

Seems you are getting confused by Nvidia's branding and numbering. They basically got two numbers out of one technology. Had the 7970 been better than it was, the 780 would likely have happened first. But they realised the 680 could keep pace, so they launched the mid-range silicon first.
At least read George's post properly before jumping down people's throats Andy mate... He asked if the 680 was the top single card for the 6 series, which indeed it was. The probability that they held Kepler back for 6 series to allow another generational performance leap for the 7 series with minimal R&D is another question - but it still stands that the 680 was the flagship card of that series.
I mean I wasn't saying the flagship Kepler card. I'd personally think people tend to think in terms of the 6xx or 7xx series rather than Pascal vs Maxwell etc., but happy to be wrong on that. In this instance I meant the 6xx series: there's a top 4 series card (although only compared to a 2 series, not the 5 series), then a 5 series, then no 6 series, then multiple 7 series, then 9, 10, 20, 30 series. Seems weird to leave the 6 series out, was my point. There was about a year between the 680 and 780 I think, so I'd have thought the 680 was the flagship card for a while.
https://www.overclock3d.net/news/so..._rtx_3080_and_rtx_3090_shortages_until_2021/1 From the sounds of it they don't have much choice. This is seeming more and more like a yield issue as every day passes. It reminds me of the gap between the 200 series and Fermi. They stopped making any 200 series parts and the shelves were literally empty for six months. Like, you couldn't buy Nvidia if you wanted to.
A price that may be well worth paying in the long run... Because having TSMC with an absolute monopoly on the production of top end GPUs (as has been the case for far too long) is extremely dangerous as they get to dictate everything from what can be manufactured to how much it costs.
Covid is also a major factor in supply at the moment, unless you're paying for air freight, and Nvidia is not doing this by all accounts. Sony and Microsoft have had to pay big money for air freight for their latest consoles.
All I see is what was mentioned in that video I posted on the previous page. Samsung was a third cheaper than TSMC, but with Nvidia thinking they could dictate a better price to TSMC. Nvidia only cut off their nose to spite their face, turning to Samsung for full production in the end. We as the consumer are left with Fermi-type GPUs all over again... I just hope AMD pull it out of the bag this time around, and bring to the table a GPU as fast as a 3080 and more efficient, at an undercutting price.
Serious in the fact that I'm fed up of clowns like Nvidia and Intel taking the piss with hiked-up prices, and marketing that convinces you you're getting the best deal... There must be a reason why AMD have been chosen again for the next generation of consoles. Also, with the current spec leaks for Navi 21, it seems possible. Time will tell. P.S. I'm no fanboy of either company, I just want fair competition to bring prices down.
So why don't you apply that same principle to TSMC and Samsung? Because that is the angle I'm coming from... competition to hopefully improve prices in the future.
If AMD have done what they should have done a few gens ago, this round will be good for gaming. When AMD switched to GCN, Nvidia laid off the big fat GPUs and started making gaming GPUs like Kepler and Maxwell. Which were cut back, yet could clock balls. This was great for gaming (Pascal was even better). However, they wanted to return to the big heifers, and this is the end result (similar to Fermi). AMD, on the other hand, have gone back in the other direction. RDNA 2 should be much more power frugal with higher clocks. It should also have something left in the tank for overclocking. Which is all better for gaming. Just like their last awesome launch, the 5000 series. Like, the original 5000 series, not Navi 1.
Because with Nvidia, it's just a bigger profit margin using Samsung. Regardless, if they had used TSMC for the 3xxx series, we would still be paying $699 for a 3080 and $1,399 for a 3090, though the coolers would not have needed to be so big and expensive to manufacture to compensate for the heat output the Samsung node produces.
People keep referring to Fermi like it was some god-awful thing. The only thing awful about it was the reference cooler and the massive slab of steel it had on top that acted like a storage heater. On a decent cooler it was really tame and could be clocked really well. My Gigabyte 480 Super Overclock did not have a monstrous cooler — it was a slim card, barely 2 slots and standard PCI card height, and was damn near silent. And I had a case with no roof or side panels at the time, so I mean silent.

It may have been power hungry in comparison to previous gens, but sometimes that's needed to overcome barriers to the next level of performance. Graphics cards haven't always had additional power connectors, but that's slowly crept up to two 8-pins as the norm. The more complex they become and the more transistors they have to cram on, that's only going to increase. And everyone's talking like the 6000 series from AMD is going to sip power like the Queen sips tea, like their track record of power consumption has been so frugal....