Discussion in 'Article Discussion' started by bit-tech, 12 Feb 2020 at 13:00.
It really is amazing that in the space of a year or two we have gone from quad cores (sometimes without HT) being the 'norm' for enthusiasts and hexacores being fairly exotic, to 6 or 8 cores being the new 'norm' for the enthusiast market and a 64-core CPU being available at a not totally insane figure (<£10k).
I really think we will now see an epoch in software making better use of multi-threading, because the potential for a company's software to run 64 times faster than the single-threaded version (in an ideal world) is too much to ignore.
Kudos to AMD for pushing the envelope.
I'm not so sure, because isn't the problem that it's super hard to implement parallelism for many workloads? We've had dual and quad core CPUs for a long, long time now but it feels like the majority of software still doesn't scale with the number of cores.
Obviously there are scenarios where this happens, but the point is that it's already happening where it can. Just because more computers have more cores doesn't mean all software will start taking advantage of them.
So a lot of stuff is hard to scale across cores. I've been programming for 4-5 years and I've only recently learnt how to do it properly for specific workloads, and I'm talking maybe 5% at the very upper limit of the stuff I develop here. I do think some software doesn't really care about a 2x speedup given the work required to make stuff scale, but when you say OK, what about a 64x speedup, the argument becomes more compelling. Languages such as Go are also making it easier, IIRC, so this may help as well.
...also most of the work that was easy to parallelise got offloaded to GPUs.
We can also look at the mobile realm to see the situation at an accelerated pace and without a pile of legacy cruft: SoC CPUs rapidly expanded from single cores to octa-cores, but the objectively fastest chips are Apple's, with a 'mere' 2 fast cores and 4 slow cores. There are vanishingly few workloads that scale to more than a small handful of threads yet do not scale so much that they are better suited to offloading to GPGPU or another coprocessor, and most that do only spawn additional threads for relatively simple tasks, because Amdahl's Law scaling dominates for client computing tasks. Gustafson's Law scaling does not apply here: client workloads rarely scale with throughput (e.g. if you're editing an image, you don't care that you could edit four in parallel in twice the time for a theoretical 2x throughput gain, because you only have one pair of eyeballs, so you'd end up editing one photo in twice the time) and instead almost exclusively scale with execution time.
The two oft-cited examples of multi-core CPU performance - video encoding and raytrace rendering - are both better suited to offloading to dedicated coprocessors (fixed-function encoders and BVH-traversal-accelerated GPUs respectively).
It would be really interesting to see a game make use of these cores ...
Funnily enough didn’t we say that at 2 cores, 4 cores and 6?
The most ironic and hilarious thing is that as soon as AMD released Ryzen, Intel spent bumloads of cash on getting devs to use more cores, without realising they were signing their own death warrant*.
* OK, so I am exaggerating: I acknowledge we'll never see Intel close, because they're good at talking and selling crap.