
The stage is set for the CPU Core War 2018

Discussion in 'Article Discussion' started by bit-tech, 12 Jun 2018.

  1. bit-tech

    bit-tech Supreme Overlord Lover of bit-tech Administrator

    Joined:
    12 Mar 2001
    Posts:
    3,676
    Likes Received:
    138
     
  2. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I get the distinct impression Intel are using smoke and mirrors when it comes to the 8086 and that 28c HEDT part: the 8086 only hits 5GHz on a single core, and as mentioned in the article the 28c HEDT was nothing more than a Xeon SKU on steroids.

    I think AMD really caught them on the hop with Zen, and their reaction seems to indicate they're in a bit of a fluster. For a long time Intel have been focusing on clock speed over core counts, *arguably to our detriment. I'm hoping we'll see all these extra cores being put to good use now, as there's not really been much innovation in how the software on our systems runs for over a decade; we've been stuck with four cores slowly getting faster.

    *Depending on your usage.
     
  3. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    On the other hand, Intel have at least demonstrated actually achieving all-core clocks with the 28-core XCC die (albeit with HPC cooling), while the 32-core Threadripper remains a paper chip (though likely based on the Epyc 7601) with unknown actual performance.

    For consumer workloads, the core-count race is worryingly akin to the megapixel race. Consumer workloads are Amdahl's Law scalers, not Gustafson's: returns drop off rapidly, and even a decade of quad-core CPUs has not resulted in widespread threaded workloads. This isn't just 'developer laziness'; it's that parallelism is not always even possible.
    We've already seen this play out in the mobile SoC market in accelerated time: a race to OCTA COOOOOORE resulted in no real performance gain but an increase in power demand, followed by core counts dropping far back down, with a handful of faster cores (usually two fast cores plus two to four low-power cores) becoming the preferred route, e.g. the Apple A series.

    For HPC and the small number of workloads that DO scale well, moar cores = better. But those are often embarrassingly parallel workloads anyway, where GPUs eat the lunch of high-core-count CPUs.
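
    The diminishing returns described above fall straight out of Amdahl's Law. A minimal sketch (the 80% parallel fraction is an illustrative assumption, not a figure from the thread):

```python
# Amdahl's Law: with a fraction p of the work parallelisable,
# the speedup on n cores is 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even at 80% parallel, the ceiling is 1 / (1 - 0.8) = 5x:
for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} cores: {amdahl_speedup(0.8, n):.2f}x")
```

    Doubling from 16 to 32 cores here adds barely 10% more speedup, which is the consumer-workload wall being described.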
     
  4. sandys

    sandys Multimodder

    Joined:
    26 Mar 2006
    Posts:
    4,907
    Likes Received:
    722
    Eh? AMD showed the 32-core Threadripper running air-cooled; how is that more of a paper chip than Intel's effort?

    Intel may as well have got der8auer in and said they're launching a 7GHz i7; it would be as legit. :D

    One is going to be out in a couple of months, dropping into a consumer TR4 socket; the other only exists in server land, and is significantly slower than 5GHz.

    Not sure where AMD's 4GHz comes from; everything I have seen and read says 3GHz base, 3.4GHz all-core boost, though they do have WIP next to the 3.4. No mention of anything higher, although you might expect it to be with XFR2 and PB2.
     
    Last edited: 12 Jun 2018
  5. Guest-56605

    Guest-56605 Guest

    Call me cynical, mention Intel and I think Brian Krzanich, corruption and years of ****ing price gouging.
     
    adidan likes this.
  6. MLyons

    MLyons 70% Dev, 30% Doge. DevDoge. Software Dev @ Corsair Lover of bit-tech Administrator Super Moderator Moderator

    Joined:
    3 Mar 2017
    Posts:
    4,174
    Likes Received:
    2,732
    [image]
     
    adidan likes this.
  7. Anfield

    Anfield Multimodder

    Joined:
    15 Jan 2010
    Posts:
    7,058
    Likes Received:
    969
    The 32-core (and the almost certain 24-core as well) Ripper is not going to be an entirely new product category; instead it is designed to be the annihilator of the 7980XE.
    The 28-core from Intel, however, is in effect creating a completely new Ultra HEDT XTX PE category, with a rebranded server socket and so on, separate from the regular peasant HEDT.


    (Ultra HEDT XTX PE was made up on the spot; Intel hasn't actually announced the marketing name yet.)

    Luckily for CPUs, a lot of what could be moved to GPUs hasn't been.

    But yeah, there are definitely limits to the core wars. Plus, what you said about mobile CPUs will eventually happen in desktop CPUs as well; the next big thing will be not all cores being created equal any more.
     
    Last edited: 12 Jun 2018
  8. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    True, but in my eyes all Intel achieved was showing us how they can overclock a Skylake Xeon part that they released almost a year ago. There's no information on when, or even if, they'll release a 28c 5GHz part (I highly doubt they will, btw).

    No, and I wasn't implying that they are; they're probably under time and financial constraints, but they're not lazy. However, I question whether the lack of widespread threaded workloads is because there have only been four cores to play with, so it's considered not worth investing in, or whether it's truly because parallelism is not always even possible.
     
    MLyons likes this.
  9. MrJay

    MrJay You are always where you want to be

    Joined:
    20 Sep 2008
    Posts:
    1,290
    Likes Received:
    36
    Can Intel even scale much further core-wise with a single-die approach?
     
  10. Anfield

    Anfield Multimodder

    Joined:
    15 Jan 2010
    Posts:
    7,058
    Likes Received:
    969
    Yes and no.
    There are physical limits to how big a die can be, but die shrinks keep them away from that limit.
    However there is another aspect to consider, is it worth it?
    If you push manufacturing too far, the yield will inevitably hit a wall from where it drops exponentially, at which point you can save tons of money by backing off a step and going the multi-die glue route.
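
    The yield trade-off described above can be sketched with a simple Poisson defect model (the defect density and die areas below are made-up illustrative numbers, not real process data):

```python
import math

# Poisson yield model: the fraction of good dies falls off
# exponentially with die area for a given defect density D.
def die_yield(area_cm2, defects_per_cm2):
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.2  # hypothetical defects per cm^2

big = die_yield(7.0, D)    # one big monolithic die: ~25% good
small = die_yield(1.75, D) # a quarter-size chiplet: ~70% good
```

    Because chiplets are tested before packaging, only known-good small dies get glued together, which is why backing off from one huge die can save so much money.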
     
  11. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    Adding more cores gives severely diminishing returns. If your workload is parallelisable, then just going from single to dual core (way back in 2005) would give you a doubling of performance, and 'just' four cores would give a further doubling (a quadrupling overall). If your workload is not easily parallelisable, then adding more cores is no incentive, no matter whether it's 8 or 32.
    Up to at least 28 cores, clearly yes, as those are shipping. Whether they can just keep growing XCC depends on how close to reticle size they want to push.
    It may be a moot point anyway with EMIB: the bandwidth of a monolithic silicon interposer, but without the cost, assembly (one failed bond kills the assembly), or manufacturing (no TSVs) issues.
     
  12. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    I know that's how it works; what I was questioning was whether there could be some way around that. Theoretically a sequential job can be broken down into smaller parts and worked on individually, hence the mention of innovation.

    Leaving aside the recent security issues surrounding branch prediction, isn't that in essence breaking down what should be a sequential job and working on what it predicts the outcome will be before the sequential job has finished?
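
    The branch-prediction analogy can be illustrated with a toy eager-evaluation sketch (purely illustrative: real CPUs speculate at the instruction level inside one core, not with threads):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def slow_condition():
    time.sleep(0.1)  # stands in for a long dependency chain
    return True

def taken_path():
    return "taken"

def not_taken_path():
    return "not taken"

# Speculation: start both branch arms before the condition has
# resolved, then commit the right result and discard the other.
with ThreadPoolExecutor() as pool:
    cond = pool.submit(slow_condition)
    a = pool.submit(taken_path)
    b = pool.submit(not_taken_path)
    result = a.result() if cond.result() else b.result()
```

    The catch is that the discarded arm's work is wasted, so this only pays off when the arms are cheap relative to the dependency you are waiting on.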
     
  13. Anfield

    Anfield Multimodder

    Joined:
    15 Jan 2010
    Posts:
    7,058
    Likes Received:
    969
  14. sandys

    sandys Multimodder

    Joined:
    26 Mar 2006
    Posts:
    4,907
    Likes Received:
    722
    Software needs to be built with parallelism in mind in the first place. The problem we have now is the problem we will have going forward: people rarely build new engines, because building one that works takes time, whether it's games or workplace software. Often new features are kludged into old frameworks even when there are clear reasons why more cores would help, with routines updated here and there, but the overall effect does not always get you where you need to be, as more often than not there is part of the pipeline, fundamental to the software framework, that can't be split up, because it was never designed to be. Multicore support is often just a band-aid, in the same way that giving your granny a new hip doesn't turn her into an athlete, as there's still the rest of her to contend with.
     
  15. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    It's not even a problem of 'old engines': in the years since multi-core CPUs became popular we've gone through three versions of Unreal Engine and the entire development of Unity. Most consumer workloads are as parallel as they're going to get.
     
  16. bawjaws

    bawjaws Multimodder

    Joined:
    5 Dec 2010
    Posts:
    4,266
    Likes Received:
    865
    The thing is that even if you can parallelise (is that even a word?) workloads, the whole thing is generally only as fast as the slowest part, and if that can't be parallelised then you're boned. Some stuff just can't be split up across threads/cores.
     