Discussion in 'Hardware' started by Spraduke, 12 Sep 2020.
Clear as mud! Gotta love tech naming conventions
It's going to be the same silicon, only possibly a better bin. So it may clock better, kinda like the 50th Anniversary "Lisa Su" 5700 XT. I doubt it will do much of anything else better, but as Nvidia have proven, 10% at that part of the market allows you to extract the most piss.
So if it's 5% faster than the 3090 they can slap on Titan-like branding and totally take the piss. And fair play to them if that's what it is.
It will be cheaper because it has to be. AMD have had driver issues, a broken GPU die that needed a patch to stop it black-screening (RDNA 1, aka Navi) and so on. The RT performance will also be way lower (the card shown barely keeps up with the 2080 Ti in RT), so Nvidia will just bang the RT drum louder, which will force AMD to price smartly. There is no way on earth they will dare to overtake the price of the 3070 or 3080 with competing cards. They simply wouldn't sell any, as they are always considered the worse option. And in lots of cases recently they have been: the Vega 64 was a dog to live with and should have been clocked much lower than it was, only then the performance would have been laughable.
AMD will never get away with charging Nvidia-like prices. And in a way that is kind of fair, given that their R&D budget is proportionally smaller than Nvidia's, who charge what they do because they can. Until, of course, Nvidia have a **** round and AMD have a good one.
If this is all true, then Dr Lisa Su has done an amazing job turning AMD around as CEO of the company.
I guess having a CEO who is an engineer, and strong-willed about sticking to their road maps and predictions over the last five years, has proven anything is possible.
Guess you gotta hit rock bottom to be able to pick yourself up again and succeed. AMD as a company almost went bankrupt, with the share price bottoming out too.
There are hints in the PR from back on 8th Oct that Zen 4 is already in design on 5nm.
AMD have a very bright future with strong partnerships with the likes of Amazon AWS, Google, Microsoft, HP, just to name a few. Let alone AMD and Cray building the world's fastest supercomputer.
As long as they keep prices reasonable and bring the fat cats back in line, I am happy to support them.
Of course Zen 4 is already in design. They don't launch and then sit around smiling; it's continuous. When I used to work on an emulator, the public versions we released were always months, and sometimes years, behind what we were working on.
Lisa Su has stopped them talking BS. Mainly. However, from what I hear the poison was Raja. Bit of a big head apparently (as has been rumoured with the issues within Intel).
This could turn out exactly like Ryzen vs Intel: hare and tortoise all over again. Nvidia decide they are so far in the lead they can change how they do things and move to a new fab, assuming none of it will matter because AMD are so far behind. And then, wham, they get a nasty surprise like Intel got.
It won't continue like that though. It will teach Nvidia a lesson, they will go back to providing the best quality GPUs they can and the prices will just continue to soar.
Don't expect AMD to do you any favours. Their prices will be as close to Nvidia's as they dare. The better they get the more they will want paying for it.
I'll wait for real benchmarks in actual games, FakeMark numbers mean squat in real life.
Regarding DLSS - Games have to be coded to take advantage of these features, if a developer cares to do so.
I imagine AMD will use the tricks of the consoles, and since games will mainly be console ports, we will see the same lack of support for DLSS from game developers as we did with PhysX. That's my opinion on it.
That is obvious, but no one could have said, set in stone, back then that Zen 4 was going to be made on TSMC 5nm...
The point of that post is to show they have stayed true to their word on the road map from when it was all first announced.
Bit like the old days of 3DMark06, where the ATI HD 2900 XT was favoured over the GeForce 8800 GTX. Yet the 8800 GTX destroyed it in all games...
So I agree, I don't compare 3DMark scores to determine which is faster either.
3DMark scales perfectly. Absolutely perfectly. If it didn't, no one would bother to use it. Yet it is frequently used as a "who can piss the highest" benchmark, and that is why so many people run it. If you want to talk about a nonsensical benchmark, look at Catzilla.
All you have seen of Zen 3 so far is benchmarks. No real gaming figures, yet you pre-ordered a motherboard in anticipation.
Now granted, 3DMark numbers do not always translate into games. However, that is the games' fault, not the benchmark's. IIRC the Radeon VII did very well in benchmarks yet was crap in games, but that was down to it not being a gaming card, just like Ampere.
AMD have already shown what I believe to be the gaming figures for this card: the three benchmarks that put it head to head with the 3080. I think they are holding back their best card, but that remains to be seen.
To discount one benchmark and accept another is a bit daft. At least with 3DMark it is a set bench: you can't cheat, and you can't alter the settings away from the defaults without the result being completely inadmissible. In a game, you can alter whatever you like to fool people.
Sorry, I really can't be bothered to argue over a benchmark that is for epeen reasons.
What we have all seen so far is game benchmarks which show the 3080 around 5-10% faster than whatever GPU SKU AMD used. The fact these results are not pulled out of someone's arse to make AMD look good makes me believe more in what is being shown.
There is an argument over the fact AMD used a Ryzen 5900X to do the benchmarks, but at 4K that will not make a difference versus 3080 results on a 10900K.
Correct, all we have seen of Zen 3 is benchmarks in a PR clip. But I find myself without a main PC, and rather than buy a current-gen CPU I have decided to wait it out for the release of Zen 3. (Intel's Rocket Lake does not look appealing with only up to 8 cores.)
What I also see is more cores and threads for a CPU that is priced within reason of the competition, with a good bump in performance over the current gen and architecture tweaks to vastly improve latency, over an already competitive 3xxx-series Zen 2 (which dominates in some areas and is not that far behind the competition in others).
The benchmarks that have been shown prove AMD are ahead of Intel in the likes of Cinebench. So it is a clear, proven fact that the IPC has been greatly improved upon, which will benefit gaming, given that games tend to rely on a dominant core.
So that is why I decided to buy this product.
For the upcoming GPU's, I doubt I will buy anything until next year, not unless things are in the shops and prices back to being reasonable, rather than hiked up by certain e-tailers.
So you've never run it whilst overclocking to validate the performance uplift, and you never use it because it's for epeen?
Do me a favour. It's called a benchmark for a reason.
I've only run it back in the days of my Tri- and Quad-SLI setups, for epeen points, getting myself into the top 10.
Other than that, I don't use it.
Benchmarks I do use to see gains are in-game ones, as they are closer to real-world performance.
If the performance holds up and the benchmarks are true, we just need to see whether there will be good water blocks. Has anybody heard rumours about that from the usual brands?
Up to 2.58 GHz on a Navi 21 SKU, apparently:
Wondering if we'll find out anything new on Wednesday at this rate.
This got me wondering: if we have 5 GHz CPUs coming off the same wafers, what is it about GPU chips that stops them reaching similar clocks?
A CPU has anything between two and, what, 32 cores. The 3090 has 10,496 shading units, each of which is basically a core (albeit massively simplified compared to a CPU core). Plus 328 texture mapping units, 112 ROPs, 328 Tensor cores and 82 RT cores.
Could you have a 5GHz GPU? Sure. You'd probably have, say, between two and 32 cores and it'd be rubbish.
You're basically limited by the slowest core/shader unit of the bunch. The more there are, the higher the likelihood that at least one of them doesn't hit the frequency you want.
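To put rough numbers on that intuition, here's a toy simulation (entirely made-up figures, just assuming each unit's maximum stable clock is normally distributed around 5 GHz): since the chip has to run at the pace of its slowest unit, the worst draw out of ten thousand units is far lower than the worst draw out of eight.

```python
import random

def max_common_clock(n_units, mean=5.0, sd=0.25, seed=0):
    """Toy binning model: each unit's max stable clock (GHz) is an
    independent normal draw; the whole chip must clock at the
    slowest unit's speed, so we take the minimum."""
    rng = random.Random(seed)
    return min(rng.gauss(mean, sd) for _ in range(n_units))

# The more units on the die, the lower the achievable common clock.
for n in (8, 128, 10_496):
    print(f"{n:>6} units -> ~{max_common_clock(n):.2f} GHz")
```

With these assumed numbers an 8-core part lands close to 5 GHz while a 10,000-unit part loses a big chunk of that, purely from the statistics of the worst unit, before you even consider power and heat.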
If the price to performance isn't terrible and waterblock support isn't the worst, I could see myself dipping a toe in 'current gen' hardware again with the 6000 series.
When is AMD's announcement presentation today?
Also, I just read, AMD buying Xilinx!