Discussion in 'Article Discussion' started by bit-tech, 20 Mar 2019.
I am a very firm advocate of having as many cores as I can afford. I'd always forsake clock speed for some extra cores and threads. I'm not a hardcore gamer, but my CPU does need to push the graphics card as fast as the resolution/refresh rate can manage.
I built a Ryzen 1700 system last year, and it won't break a sweat on anything.
Given there were at least 12-core CPUs back on Ivy Bridge, I would say this is long overdue.
I think the lasting effect of the core count wars for the vast majority of people will be increased prices of top-end parts.
A decade and a half on from the first consumer multi-core CPUs, and six years into multi-core x86 consoles, the majority of applications are still single-threaded or very lightly threaded. This isn't just 'lazy developers': most tasks either do not parallelise well at all, or parallelise so well that they would be better offloaded to GPGPU. The remaining niche for high core count CPUs is heavy parallel number crunching, and that's just not a common desktop task at all. Commonly cited tasks are: video encoding for livestreaming (fixed-function hardware in CPUs and GPUs beats that up and down the street), compositing/rendering video for production (a relatively small market) and 3DCG rendering (also a small market, and about to be eaten by ray acceleration on dedicated hardware).
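The ceiling being described here is just Amdahl's law in action — a quick sketch in plain Python, with purely illustrative numbers, showing why a "lightly threaded" app barely benefits from extra cores while an embarrassingly parallel one keeps scaling (and those are exactly the jobs that tend to move to GPGPU anyway):

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n)
# where p is the parallelisable fraction of the work and n is the core count.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# 50% parallel (typical 'lightly threaded' desktop app): tops out below 2x
for n in (2, 4, 8, 16, 64):
    print(f"{n:>2} cores, p=0.50: {amdahl_speedup(0.50, n):.2f}x")

# 95% parallel (heavy number crunching): still scaling at 64 cores
for n in (2, 4, 8, 16, 64):
    print(f"{n:>2} cores, p=0.95: {amdahl_speedup(0.95, n):.2f}x")
```

Even at 64 cores the 50%-parallel case never reaches a 2x speedup, which is the whole argument in one number.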
May I remind people of a story from last month?
With the implication being that in the future not all cores will be created equal any way.
Which has long been the case outside the world of x86, of course: Arm unveiled DynamIQ back in March 2017 based on the big.LITTLE concept originally announced in October 2011 alongside the Arm Cortex-A7 (apparently I didn't cover that here, tho'), and Nvidia's Tegra 3 as unveiled in February 2011 had a fifth 'Shadow Core' for background tasks. 'bout time the big-box boys caught up!
You forgot to include program compiling, and one that is close to my heart and can never have enough cores: audio production.
Also, what do you mean increased price of top-end parts? Most people don't go anywhere near top-end desktop parts, and for the people who need them, can they really be considered expensive?
The whole point of this article is really about how people who currently need to go for, say, a Threadripper platform just for the cores could be better served by high-core-count desktop parts, which would ultimately save them a huge chunk of money on platform costs.
Well, what you want is specialist cores for specialist jobs - you cite video creation as a reason to have loads of general-purpose cores, which is true for x86 today, but if it was important to many people they could probably add some specialist hardware to do it, which would be much faster again at a fraction of the power. That's exactly what phones do - they might have a load of faster/slower Arm cores, but they also have video encode/decode hardware which is orders of magnitude more efficient than using the Arm cores to do it.
Phones/tablets really show how dated x86 is - hardly anything happens to those CPUs over the years other than small core count increases and a few new instructions. In the meantime, Arm chips have all sorts of goodies added with each generation (e.g. last year they all started getting specialist AI cores for image processing).
I... I do?
Sorry, the article writer does
If you can't remember what you've written, might want to ease off on the whisky
[yes... I know @Combatus wrote the article, not Gareth]
See, x86 is really slipping behind here. The main problem is that a lot of workloads are still single-threaded, so you *need* the single-thread speed as well as plenty of cores.
Or you could go to ARM, where this happens:
ARM (or for that matter, POWER or RISC-V) on the desktop is even less likely than Linux on the desktop. And not just due to decades of software inertia: while headlines of "New [Apple AwhateverX] faster than [desktop CPU]!" are good for clicks, once you dig down it ends up being about geekbench numbers (or at best, a decade old version of SPEC) rather than in real-world tasks.
Though Apple are also a lesson in multi-thread performance: instead of piling on cores as many others have, to... questionable results *coughExynoscough*, they've built SoCs with two or at most four cores, focussing on maximum single-thread performance.
I'm not suggesting ARM on the desktop, but Intel were experimenting with massively wide CPUs (the 80-core Polaris research chip) and quickly abandoned the approach because single-core performance was terrible.
Does high-end parts becoming more expensive really matter to most people, if more mainstream and affordable parts are becoming much more capable?
Given the reception of more expensive high end parts in the GPU world, it does indeed appear to matter.
That's different though isn't it? The difference between a low and high end CPU is mostly time, the difference between a low and high end GPU is mostly visual (resolution, quality, FPS).
There is little question that Intel in particular could have pushed higher core counts earlier for some segments. However, this must be seen in the context of seeking maximum performance for a given system, which for common consumer tasks meant single- or dual-thread performance until recently.
Adding more cores means more heat and power, and tends to impact single-core performance for a given power budget and TDP. It also tends to mean more chip area (fewer dies per wafer) and requires a fully-working die (i.e. no chips with disabled cores being salvaged), both of which drive up production costs. And motherboards need to be equipped to provide more power, as well as the cache and memory bandwidth to feed the cores (as relatively few use cases require high CPU power over a small dataset).
Ultimately the software was the slowest part. Multi-threaded software is hard, and it is only now that browsers (arguably the main application for most users) are fully benefitting from significant numbers of cores.
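The "parallelises easily vs doesn't" split above can be made concrete. A minimal sketch (standard-library Python, illustrative workloads only): independent chunks split cleanly across a thread pool, while a computation where each step depends on the previous result cannot be split at all, no matter how many cores you throw at it.

```python
from concurrent.futures import ThreadPoolExecutor

# Embarrassingly parallel: each chunk is independent, so more workers help.
def checksum(chunk):
    return sum(chunk) & 0xFF

data = [list(range(i, i + 100)) for i in range(0, 1000, 100)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_results = list(pool.map(checksum, data))

# Inherently serial: every round depends on the previous one, so the
# dependency chain sets the runtime regardless of core count.
def hash_chain(seed, rounds):
    h = seed
    for _ in range(rounds):
        h = (h * 31 + 7) % 1_000_003
    return h

final = hash_chain(1, 1000)
```

This is the hard part of multi-threaded software in miniature: the work has to be *structured* as independent pieces before extra cores buy you anything.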
Improved core-boosting mechanisms also help limit the cost of extra threads, and so encourage putting them in just in case they get used - much as cylinder deactivation can reduce the running cost of a high-powered engine. But it also means you can see big drops in clock speed on some CPUs once more than one or two cores are loaded, which may be entirely acceptable for rendering but undesirable for other applications.
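That trade-off is easy to see as a boost table. A small sketch - the table below uses made-up numbers, not any real CPU's specification - showing how per-core clocks fall as more cores load up, while aggregate throughput still rises:

```python
# Hypothetical boost table (illustrative numbers, not a real CPU's spec):
# fewer active cores -> higher clocks within the same power/thermal budget.
BOOST_TABLE_GHZ = {1: 5.0, 2: 4.9, 4: 4.5, 8: 4.0, 16: 3.6}

def boost_clock_ghz(active_cores: int) -> float:
    """Clock for the smallest table entry covering this many active cores."""
    for threshold in sorted(BOOST_TABLE_GHZ):
        if active_cores <= threshold:
            return BOOST_TABLE_GHZ[threshold]
    return min(BOOST_TABLE_GHZ.values())  # beyond the table: all-core clock

def aggregate_ghz(active_cores: int) -> float:
    """Total clock-throughput across all active cores."""
    return active_cores * boost_clock_ghz(active_cores)

for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} active: {boost_clock_ghz(n):.1f} GHz/core, "
          f"{aggregate_ghz(n):.1f} GHz total")
```

A renderer cares only about the total (which keeps climbing); a lightly threaded app sits on one or two cores and feels the per-core drop directly.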