Discussion in 'Article Discussion' started by bit-tech, 23 Oct 2018.
It would be interesting to see how my Gigabyte Aorus Z390 Master stacks up.
I'm yet to overclock my 9900K, but at stock, running the AIDA64 stress test with a HSF, it peaks at 85°C (4.7GHz all cores). (I noticed the vcore was shooting up to 1.45V at stock.)
I wish motherboard manufacturers would stop covering up heatsinks with plastic bits (that's not directed at MSI specifically, as they're all at it), and stop cooling the NAND memory while you're at it.
Given the current inflation on Intel CPU prices there is no way I'd use them.
i7 7700K: (yes, that's almost 375 Euros... *sigh*)
Simon, truly, sometimes, fella, you have more money than sense - forking out that sort of cash for a product that is still inherently flawed with numerous vulnerabilities, and that, at the resolutions you game at, is not much faster than the 2700X...
Remember that PCs are basically infinitely reusable... reassign to HTPC duties, give away parts to friends and family etc, just because some people here upgrade a bit too often doesn't mean the old stuff won't get used any more.
Whilst I appreciate all of what you’ve said, nah, I stand by my original post.
It's quite wide ranging. 8350K, which was awesome at £150 (basically a 7600K but for £60 less), is now well over £200. I'm sure they'll settle down but I'd definitely wait. We'll have our 9600K and 9700K reviews up soon too.
Next on the list!!!
The upgrade was more for VR, as the 2700X was holding me back to 45fps, whereas I get 90fps in the games I play. I also have to say, as I stream and use the CPU to encode, the 9900K seems to let me stream at 4K 30 without taxing my GPU in VR.
Anyway, I also managed to get 5GHz all-core stable at 1.260V, with temps not peaking past 90°C on a HSF using the AIDA64 stress test.
Full system power draw under stress was below 500W, and the CPU's max temp in the benchmarks was 68°C.
Also, results from my 2700X @ 4.2GHz in 3DMark 11
Results from the 9900K @ 5GHz in 3DMark 11
3DMark, 9900K @ 5GHz
Whilst I applaud your enthusiasm and synthetic bench numbers, nah, sorry: the premium you pay for a few hours per day of virtual enjoyment isn't worth it to me. Personally, I've drawn the line at a set point with both the current CPU and GPU tax (yes, Intel and Nvidia, I'm talking about you).
Yes, we get it, you have a hate-on for Intel and Nvidia (and ignore the markup at retailer level). What you determine is good value for you is not what everyone else determines is good value for them.
It has nothing to do with a "hate-on", as you put it. Nvidia have introduced as-yet-unproven hardware to the market (nothing software-wise can, or will be able to, take advantage of it for months or years to come) at an inflated premium. Intel, on the other hand, are only reacting to AMD's new share of the market, after declaring years ago that it couldn't be done, again at a premium.
I fail to see how hate comes into the equation, seeing through the veiled bs of tech manufacturers isn't hate...
Surely the thing holding you back in VR is the GPU? VR is fairly low-res, isn't it? My gen-1 Ryzen is pretty much on par with the results you've posted for the 9900K, despite being generations behind and having a ~20% clock-speed disadvantage, as we have the same GPU.
After owning three Ryzen Processors from the 1600X, 1800X, and 2700X, the 9900K just feels more snappy on the desktop, and is noticeably faster in games, especially VR.
VR is not really low-res when you consider it has to render 1920x1200, x2 eyes, x90fps.
GPU usage never seems to go above 50%, as my 2700X couldn't keep above 90fps. So due to ASW, I was limited to 45fps; either that, or I had to drop the quality right down.
The 9900K is capable of giving me 90fps in VR, so for me it's a win.
VR rendering: 1920x1200 x2 x90 = 414,720,000 pixels per second. (More GPU dependent.)
VR rendering: 1920x1200 x2 x45 = 207,360,000 pixels per second. (More CPU dependent.)
4K rendering: 3840x2160 x60 = 497,664,000 pixels per second.
So that is why the 9900K is better for VR.
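For anyone wanting to sanity-check the arithmetic above, here's a quick sketch (resolutions and frame rates taken from the post; the helper name `pixels_per_second` is just illustrative):

```python
def pixels_per_second(width, height, fps, eyes=1):
    """Pixel throughput: per-eye resolution x eye count x frame rate."""
    return width * height * eyes * fps

vr_90 = pixels_per_second(1920, 1200, 90, eyes=2)   # 414,720,000
vr_45 = pixels_per_second(1920, 1200, 45, eyes=2)   # 207,360,000
uhd_60 = pixels_per_second(3840, 2160, 60)          # 497,664,000

print(f"VR @90fps: {vr_90:,} px/s")
print(f"VR @45fps: {vr_45:,} px/s")
print(f"4K @60fps: {uhd_60:,} px/s")
```

Note the 90fps VR figure is only ~17% below a 4K/60 feed, which is why a 45fps ASW fallback halves the load so dramatically.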
On top of that, VR is latency sensitive rather than throughput sensitive. That means it responds better to operations being completed more quickly, so single core speed has an outsized benefit as game engines are extremely lightly threaded for the core render loop (out of necessity).
Plus the render resolution is higher than the panel resolution to account for the non-rectilinear optics, leaving you with a 2700x1600 eye-buffer render target for the Rift CV1 without supersampling, or 2700x1600x2x90 = 777,600,000 pixels/s.
Supersampling is highly encouraged and has an excellent perceptual effect out to 2x* (though diminishing returns past ~1.5x) due to aliasing causing distant objects to 'shimmer' noticeably.
At 1.5x SS, that's 4050x2400x2x90 = 1,749,600,000 pixels/s.
At 2x SS that's 5400x3200x2x90 = 3,110,400,000 pixels/s.
These huge pixel counts that make UHD cry are why techniques like Lens Matched Shading, Variable Rate shading, etc are useful. If you can only apply a single scaling factor per image, you get a big benefit in the centre of the view but waste pixels around the periphery.
* I'm using Oculus' SS factor notation, which is linear per axis, meaning 2x takes 100x100 to 200x200. Valve use total-pixel-count SS factors, so to go from 100x100 to 200x200 would be denoted as 4x.
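Since the two notations are easy to confuse, a small sketch of the conversion and the resulting pixel rates (eye-buffer size and factors from the posts above; function names are illustrative):

```python
def oculus_to_valve(linear_factor):
    """Oculus SS factors are linear per axis; Valve's count total pixels.
    Doubling each axis (2x Oculus) quadruples the pixels (4x Valve)."""
    return linear_factor ** 2

def ss_pixels_per_second(base_w, base_h, linear_ss, fps=90, eyes=2):
    """Apply a linear per-axis SS factor to the eye-buffer resolution."""
    return round(base_w * linear_ss) * round(base_h * linear_ss) * eyes * fps

# Rift CV1 eye buffer from the post: 2700x1600 per eye, 90Hz
print(oculus_to_valve(2.0))                           # 4.0
print(f"{ss_pixels_per_second(2700, 1600, 1.5):,}")   # 1,749,600,000
print(f"{ss_pixels_per_second(2700, 1600, 2.0):,}")   # 3,110,400,000
```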
Whilst I don't disagree with what you are saying, VR res is little more than 1080p. On average, with a 1080 Ti, a 5GHz chip will provide a ~9-10% uplift over Ryzen, but at 1080p a 2080 Ti will provide on average a 20% uplift, so ditching the 1080 for a 2080 would, to me, make more sense for similar money.
That MEG seems like a nice enough board, but blimey, that price is headed to HEDT level while seemingly offering nothing more? I guess that is what the review means by stepping on toes.
A HEDT platform on an older-gen chip will give you quad-channel RAM, more PCIe lanes, etc., and will overclock and perform similarly, will it not? (I'll be honest, Intel's line-up confuses me, so I could be wrong here.)
That simply is not even close to being true. At 1.5x SS, it's out by a factor of 10.
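A rough check of that factor, comparing the 1.5x-SS VR figure to a single 1080p image at the same 90Hz refresh (all resolutions as quoted in the posts above):

```python
vr_15ss = 4050 * 2400 * 2 * 90   # 1.5x SS, both eyes, 90Hz: 1,749,600,000 px/s
flat_1080p = 1920 * 1080 * 90    # single 1080p image at 90Hz: 186,624,000 px/s

print(round(vr_15ss / flat_1080p, 1))  # prints 9.4
```

So "little more than 1080p" understates the rendering load by roughly an order of magnitude.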
It will not, as every benchmark of the last decade of HEDT chips will tell you.
OK so you are saying it is more pixels to push, which means you will need more GPU rather than CPU as at higher rendering resolutions you are GPU limited?
See the first part of my previous post.