Discussion in 'Article Discussion' started by bit-tech, 1 Jun 2021.
Here we go again - regardless of launch pricing or ETH-nerfing, these will just increase the height of the pile of cards few people can get hold of. BTW, I'm not convinced the LHR thing will be a long or even mid-term concern, although persistence of that perception could still impact resale down the line.
It's weird - if you already have your card, you can watch these proceedings with bemused disbelief. However, if you're in the market for one or have been trying for months, you'll be chewing on your keyboard and spitting pure acid rage. Sucks.
$1199 equates to £850-ish at today's exchange rate.
Adding 20% VAT brings it up to £1020 or so.
I strongly doubt that FE cards will arrive at less than £1100.
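The price arithmetic above can be sketched in a few lines of Python (the ~1.41 USD/GBP exchange rate is an assumption matching the "£850-ish" figure quoted, not an official rate):

```python
# Rough UK launch-price estimate: convert USD MSRP to GBP, then add VAT.
usd_price = 1199
usd_per_gbp = 1.41   # assumed exchange rate at the time of the post
vat_rate = 0.20      # UK VAT

gbp_pre_vat = usd_price / usd_per_gbp
gbp_inc_vat = gbp_pre_vat * (1 + vat_rate)

print(round(gbp_pre_vat))  # ~850
print(round(gbp_inc_vat))  # ~1020
```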
I'm betting £1800 - £2000 on some AIB models.
Massive bump up from what a 3080 was supposed to be - a cash grab by Nvidia. There will still be no 3080s, and any dies that may have been earmarked for 3080s will now drip out as 3080 Tis so they can get their cut of the price gouging. Sure, if you look at it as a cut-price 3090 it's not so bad, but that was never a good value proposition.
Will I ever replace the 1080Ti.....I'm just not that desperate, it's funny how current prices of GPUs have skewed things, I saw a 6900XT for sale the other day for £1400 and I found myself saying, hmm that's not bad....I think I'm broken
Still on Samsung? Not interested.
Nvidia established that as the new normal for flagships with Turing, after they trialled pushing prices up with Maxwell and Pascal.
Why is that?
What puzzles me is that they say ray tracing performance is better with this new architecture. But the 1.5x performance increase I'm seeing is across the board: rasterised and ray-traced games are both around 1.5x.
10 check(stores)
20 if !available goto 10
30 buy_all_of_them = true
40 goto 10
The bots F5’ing already.
Because I spent some time understanding the current production nodes at TSMC, Samsung, and Intel. And I spent time understanding why Nvidia would use Samsung instead of TSMC. After that I decided (quite some time before ampere launched) that I'd skip this generation if it was built on an inferior process node. And yes, it is.
what makes the node inferior?
You got me worried now
Care to elaborate? It kicks Turing's TSMC FinFET GPUs up and down the street and, regardless of process, AMD GPUs are still playing catch-up.
Why do you think it's inferior, and what are its perceived disadvantages?
The only alternative to the 3080TI / 3090 is the Radeon 6900XT.
And if you buy the 6900XT instead then you have to give up NVENC, Cuda and Raytracing.
(Or suffer the laughable performance of old cards)
Can do, but not now since I'm on mobile and writing technical stuff on gboard is a nightmare.
I have a 1080 Ti and a 2070S here (the wife and I are gamers). I'll just skip Ampere and AMD's 6x00 series completely and might upgrade once Ada Lovelace arrives (doubt it, tbh) or with Hopper. AMD has very good rasterisation performance but are not a viable option for me until they match Nvidia with at least a DLSS equivalent, maybe in RT performance.
Mmm tasty numbers. Or are they? Oh ghost numbers..right. Tasty ghost numbers. I see your totally uniform numbers Nvidia. Yes they are SO accurate... *cough*
First, there is nothing to be worried about. The three main issues I have with Nvidia using Samsung's '8nm' process instead of using TSMC's 7nm process are:
The only reason they had to use Samsung instead of TSMC is that Jensen tried to bully TSMC into lowering their prices. His stubbornness led to TSMC selling their very-much-in-demand capacity to other companies while he sat on his hands trying to show how big and important Nvidia is.
Samsung's 8nm process is actually a 10nm process with tiny improvements. The density is almost exactly equal to Samsung's 3+ year old 10nm process. It is also quite a bit less efficient than TSMC's 7nm process, requiring more power to achieve worse clocks. From what I can understand, there are two reasons Ampere draws that much power: 1) it is designed to scale with power draw much more than previous architectures, because they knew they couldn't get the jump they needed for marketing reasons (basically because they had already told everyone to expect xx% improvement over last gen way too early) without pumping A LOT of power through this chip. 2) the process node is just not capable of delivering anywhere near the performance/watt of TSMC's offering, which they counted on using when prematurely giving out performance-increase numbers for Ampere. As far as I know, there were some design changes in the architecture which would not have been necessary with TSMC's process node.
Nvidia moved every GPU down the ladder one rung shortly before release. They didn't end the TITAN line because they really wanted to; they had to in order to reach whatever targets they had made analysts believe they could reach. The 3090 is a TITAN-class card in all but name and was supposed to be the TITAN of this generation, but to make analysts happy they had to make it a normal GeForce product. In short, the now-3080 was supposed to be the 3090, and the now-3070 was supposed to be the 3080. THAT is the real level of improvement we would've seen if Nvidia hadn't decided to rename/restructure their line-up very close to launch.
Ever wonder why all of Nvidia's professional cards (A100 et al) are produced by TSMC while all consumer cards are manufactured by Samsung? Getting margins up, that's why. They simply would not get away with such an inefficient product/architecture in data centres or AI/ML/DL cards, hence they paid whatever TSMC wanted to make sure businesses are happy. Consumers on the other hand can buy the 'crappy' Samsung-based products because it doesn't really matter to them (which, tbh, is true to a very high degree, I'm just 'special') and they wouldn't know or understand or care about the difference anyway.
I could never ever look at Ampere and be happy with what it is, because it could've been so much better if only Nvidia had used TSMC and designed the whole chip a bit differently. The only thing I was really impressed with when Ampere released was the FE cooler design. Both in the looks and performance department.
TL;DR: if Nvidia had not tried to bully TSMC to lower prices we would have a superior product with less power and/or better performance, definitely with better performance/watt. I really don't want to support a company making business-critical decisions based on the already famous sense of grandeur Jensen seems to have.
They use Samsung for capacity - everyone and their dog is using TSMC 7nm, so there isn't much space, and not surprisingly Nvidia uses what capacity they have there on their much more expensive business cards. The only way Nvidia could produce a decent number of gaming cards was to use Samsung. I am not sure why this is a problem - pick a GPU and see how it performs; if it's good enough buy it, if it's not, don't. It doesn't really matter which fab made the chip.
Just gonna drop in a reminder that the nm number is a load of marketing w*nkery: