
Is the move to high core count mainstream CPUs long overdue?

Discussion in 'Article Discussion' started by bit-tech, 20 Mar 2019.

  1. bit-tech

    bit-tech Supreme Overlord Lover of bit-tech Administrator

    Joined:
    12 Mar 2001
    Posts:
    3,676
    Likes Received:
    138
     
  2. Zak33

    Zak33 Staff Lover of bit-tech Administrator

    Joined:
    12 Jun 2017
    Posts:
    263
    Likes Received:
    54
    I am a very firm advocate of having as many cores as I can afford. I'd always forsake some clock speed for extra cores and threads :) I'm not a hardcore gamer, but the CPU does need to push the graphics card as fast as the resolution/refresh rate can manage.

    I built a Ryzen 1700 last year, and it won't break a sweat on anything.
     
  3. Vault-Tec

    Vault-Tec Green Plastic Watering Can

    Joined:
    30 Aug 2015
    Posts:
    15,495
    Likes Received:
    4,066
    Given there were at least 12-core CPUs back on Ivy Bridge, I would say this is long overdue.
     
  4. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    I think the lasting effect of the core count wars for the vast majority of people will be increased prices of top-end parts.

    A decade and a half on from the first consumer multi-core CPUs, and six years into multi-core x86 consoles, the majority of applications are still single-threaded or very lightly threaded. This isn't just 'lazy developers'; it's simply that most tasks either do not parallelise well at all, or parallelise so well that they would be better offloaded to GPGPU. The remaining niche for high core count CPUs is heavy parallel number crunching, and that's just not a common desktop task at all. The commonly cited tasks are video encoding for livestreaming (fixed-function hardware in CPUs and GPUs beats that up and down the street), compositing/rendering video for production (a relatively small market) and 3DCG rendering (also a small market, and about to be eaten by ray-tracing acceleration on dedicated hardware).
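    To put a number on that: Amdahl's law caps the speedup from extra cores by whatever fraction of the work stays serial. A minimal sketch below, with purely illustrative parallel fractions rather than measurements of any real application:

```python
# Amdahl's law: speedup is capped by the fraction of work that cannot be
# parallelised. The parallel fractions below are illustrative, not measured.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Theoretical speedup on `cores` cores for a given parallel fraction."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for p in (0.50, 0.90, 0.99):          # how much of the task parallelises
    for n in (4, 8, 16, 64):          # core counts
        print(f"parallel={p:.0%} cores={n:<3} speedup={amdahl_speedup(p, n):.2f}x")
```

    A task that is only 50 per cent parallel never quite reaches a 2x speedup however many cores are thrown at it, which is roughly where a lot of desktop software sits.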
     
  5. Anfield

    Anfield Multimodder

    Joined:
    15 Jan 2010
    Posts:
    7,089
    Likes Received:
    983
  6. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,381
    Likes Received:
    7,215
    Which has long been the case outside the world of x86, of course: Arm unveiled DynamIQ back in March 2017, based on the big.LITTLE concept originally announced in October 2011 alongside the Arm Cortex-A7 (apparently I didn't cover that here, tho'), and Nvidia's Tegra 3, as unveiled in February 2011, had a fifth 'Shadow Core' for background tasks. 'Bout time the big-box boys caught up!
     
    Anfield likes this.
  7. IamSoulRider

    IamSoulRider Minimodder

    Joined:
    24 Aug 2016
    Posts:
    137
    Likes Received:
    11
    You forgot to include program compiling, and one that is close to my heart and can never have enough cores: audio production.

    Also, what do you mean, increased prices of top-end parts? Most people don't go anywhere near top-end desktop parts, and for the people who need them, can they really be considered expensive?

    The whole point of the article is really that people who currently have to go for, say, a Threadripper platform just for the cores could be better served by high core count desktop parts, which would ultimately save them a huge chunk of money on platform costs.
     
    Last edited: 20 Mar 2019
  8. DbD

    DbD Minimodder

    Joined:
    13 Dec 2007
    Posts:
    519
    Likes Received:
    14
    Well, what you want is specialist cores for specialist jobs. You cite video creation as a reason to have loads of general-purpose cores, which is true for x86 today, but if it were important to many people, specialist hardware could probably be added to do it, which would be much faster again at a fraction of the power. That's exactly what phones do: they might have a load of faster/slower Arm cores, but they also have video encode/decode hardware that is orders of magnitude more efficient than using the Arm cores for the job.

    Phones/tablets really show how dated x86 is. Hardly anything happens to those CPUs over the years other than small core count increases and a few new instructions. In the meantime, Arm chips have all sorts of goodies added with each generation (e.g. last year they all started getting specialist AI cores for image processing).
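    For anyone wanting to see the gap for themselves, a quick sketch: run the same encode on the general-purpose cores and on a fixed-function encoder and time both. This assumes ffmpeg is installed and built with NVENC support, and 'input.mp4' is a placeholder file; swap h264_nvenc for h264_qsv (Intel) or h264_videotoolbox (Apple) depending on which hardware block is available.

```python
# Compares software encoding (general-purpose cores) with fixed-function
# hardware encoding via ffmpeg. Assumes ffmpeg with NVENC support is installed
# and 'input.mp4' exists; both are placeholders for illustration only.
import subprocess, time

def encode(codec: str, outfile: str) -> float:
    """Encode input.mp4 with the given video codec and return elapsed seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-y", "-i", "input.mp4",
         "-c:v", codec, "-b:v", "6M", outfile],
        check=True, capture_output=True)
    return time.perf_counter() - start

print(f"libx264 (CPU, all cores):   {encode('libx264', 'cpu.mp4'):.1f}s")
print(f"h264_nvenc (fixed-function): {encode('h264_nvenc', 'gpu.mp4'):.1f}s")
```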
     
  9. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    4 Dec 2007
    Posts:
    17,381
    Likes Received:
    7,215
    I... I do?
     
  10. DbD

    DbD Minimodder

    Joined:
    13 Dec 2007
    Posts:
    519
    Likes Received:
    14
    Sorry, the article writer does :)
     
  11. RedFlames

    RedFlames ...is not a Belgian football team

    Joined:
    23 Apr 2009
    Posts:
    15,682
    Likes Received:
    3,161
    If you can't remember what you've written, you might want to ease off on the whisky ;)

    [yes... I know @Combatus wrote the article, not Gareth]
     
  12. Anakha

    Anakha Minimodder

    Joined:
    6 Sep 2002
    Posts:
    587
    Likes Received:
    7
    See, x86 is really slipping behind here. The main problem is that a lot of workloads are still single-threaded, so you *need* the single-thread speed as well as wide core availability.

    Or you could go to ARM, where this happens:
     
  13. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    ARM (or, for that matter, POWER or RISC-V) on the desktop is even less likely than Linux on the desktop. And not just due to decades of software inertia: while headlines of "New [Apple AwhateverX] faster than [desktop CPU]!" are good for clicks, once you dig down it ends up being about Geekbench numbers (or, at best, a decade-old version of SPEC) rather than real-world tasks.
    Apple are also a lesson in multi-thread performance, though: instead of chasing triple-, quad-, hexa-, octa- and dodeca-core designs as many others have, to... questionable results (*cough* Exynos *cough*), they've built SoCs with two or at most four cores, focussing on maximum single-thread performance.
     
  14. Anakha

    Anakha Minimodder

    Joined:
    6 Sep 2002
    Posts:
    587
    Likes Received:
    7
    I'm not suggesting ARM on the desktop, but Intel did experiment with massively wide CPUs (the 80-core Project Polaris), and quickly abandoned it because single-core performance was terrible.
     
  15. Mr_Mistoffelees

    Mr_Mistoffelees The Bit-Tech Cat. New Improved Version.

    Joined:
    26 Aug 2014
    Posts:
    5,504
    Likes Received:
    2,710
    Does high-end parts becoming more expensive really matter to most people, if more mainstream and affordable parts are becoming much more capable?
     
  16. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    Given the reception of more expensive high-end parts in the GPU world, it does indeed appear to matter.
     
  17. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    That's different though, isn't it? The difference between a low-end and a high-end CPU is mostly time; the difference between a low-end and a high-end GPU is mostly visual (resolution, quality, FPS).
     
  18. GreenReaper

    GreenReaper Rambling Norn

    Joined:
    21 Mar 2019
    Posts:
    3
    Likes Received:
    0
    There is little question that Intel in particular could have pushed higher core counts earlier for some segments. However, this must be seen in the context of seeking maximum performance for a given system, which for common consumer tasks meant single- or dual-thread performance until recently.

    Adding more cores means more heat and power, and tends to impact single-core performance for a given power budget and TDP. It also tends to mean more chip area (fewer dies per wafer) and a need for fully-working silicon (i.e. no salvaging of chips with disabled cores), both of which drive up production costs. And motherboards need to be equipped to provide more power, while the cores need the cache and memory bandwidth to feed them (as relatively few use cases require high CPU power over a small dataset).
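    A back-of-the-envelope sketch of the area/yield point, using the standard gross-dies-per-wafer approximation and a Poisson yield model; every number here is an illustrative assumption, not a real foundry figure:

```python
# Rough illustration of why more cores (more die area) costs more per chip.
# Wafer cost and defect density are illustrative assumptions only.
from math import pi, sqrt, exp

WAFER_D = 300.0        # wafer diameter, mm
WAFER_COST = 6000.0    # assumed cost per wafer, USD
DEFECT_D0 = 0.001      # assumed defect density, defects per mm^2

def cost_per_good_die(die_area_mm2: float) -> float:
    # Standard gross-dies-per-wafer approximation
    dies = pi * (WAFER_D / 2) ** 2 / die_area_mm2 - pi * WAFER_D / sqrt(2 * die_area_mm2)
    yield_ = exp(-DEFECT_D0 * die_area_mm2)     # Poisson yield model
    return WAFER_COST / (dies * yield_)

for area in (80, 160, 320):   # e.g. quad-core vs 8-core vs 16-core class dies
    print(f"{area:>3} mm^2 die -> ~${cost_per_good_die(area):.2f} per good die")
```

    With these toy numbers, quadrupling the die area raises the cost per good die by roughly 5.6x: there are fewer candidate dies per wafer, and a larger share of them are lost to defects.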

    Ultimately, the software was the slowest part. Multi-threaded software is hard, and it is only now that browsers (arguably the main application for most users) are fully benefiting from significant numbers of cores.

    Improved core-boosting mechanisms also help limit the cost of extra threads, and so encourage putting them in just in case they get used, much as cylinder deactivation can reduce the running cost of a high-powered engine. But it means you can see big drops in clock speed on some CPUs once more than one or two cores are loaded, which may be entirely acceptable for rendering but undesirable for other applications.
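    That trade-off is easy to picture with a toy boost table (the figures below are hypothetical, not taken from any real CPU): per-core clocks fall as more cores load up, so aggregate throughput keeps climbing while lightly threaded, latency-sensitive work sees each core get slower.

```python
# Illustrates the boost trade-off: per-active-core clocks drop as load spreads,
# so aggregate throughput rises while per-core speed falls.
# The boost table below is hypothetical, not taken from any real CPU.

BOOST_GHZ = {1: 4.7, 2: 4.6, 4: 4.3, 8: 4.0, 16: 3.7}   # active cores -> clock

for active, clock in BOOST_GHZ.items():
    aggregate = active * clock            # total GHz available to a parallel job
    print(f"{active:>2} active cores @ {clock} GHz "
          f"-> {aggregate:5.1f} GHz aggregate, {clock/BOOST_GHZ[1]:.0%} of peak per core")
```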
     