News: JEDEC boosts HBM densities, performance

Discussion in 'Article Discussion' started by bit-tech, 18 Dec 2018.

  1. bit-tech

    bit-tech Supreme Overlord Lover of bit-tech Administrator

  2. The_Crapman

    The_Crapman World's worst stuntman. Lover of bit-tech

    Will HBM ever be used as system RAM? Why don't we see faster memory like GDDR5/6 used, when it's far faster than DDR4?
     
  3. mi1ez

    mi1ez Modder

    I believe the main reasons are down to latency and bus widths, but I'm not 100% on the details.
     
  4. adidan

    adidan Guesswork is still work

    Is it used in anything much?

    I know AMD have thrown it about every now and then in some GPUs, but besides that, is it mainly used in things other than the desktop PC field?
     
  5. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator Moderator

    There are a couple of reasons that high-bandwidth GDDR/HBM isn't typically used as system RAM. For GDDR, the reason is latency: GDDR is specifically optimised to provide high bandwidth at the cost of increased latency, which doesn't matter a jot when you're throwing around gigabytes of texture data but would absolutely kill general-purpose performance if you tried to use it as system RAM. (That said, there are techniques for using a graphics card's framebuffer as swap space in Linux, but they're hacky and have a tendency to crash the system.)
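    To make the latency point concrete, here's a minimal C sketch contrasting a latency-bound access pattern with a bandwidth-bound one. It's a toy under arbitrary assumptions (the array size, rand() for the shuffle, simple wall-clock timing), not a rigorous benchmark: the pointer chase serialises every load on the one before it, so it runs at the speed of memory latency, while the sequential sum lets the prefetcher stream data, so it runs at the speed of memory bandwidth. General-purpose code looks a lot more like the former, which is why trading latency away for bandwidth makes GDDR poor system RAM.

    ```c
    /* Latency-bound vs bandwidth-bound memory access - a toy sketch.
       POSIX timing; build with something like: cc -O2 chase.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24) /* 16M elements - enough to spill well out of cache */

    static double seconds(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        size_t *chain = malloc((size_t)N * sizeof *chain);
        if (!chain) return 1;

        /* Sattolo's algorithm: shuffle the identity permutation into one
           big cycle, so the chase below visits every element exactly once. */
        for (size_t i = 0; i < N; i++) chain[i] = i;
        srand(42);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i; /* crude RNG, fine for a sketch */
            size_t tmp = chain[i]; chain[i] = chain[j]; chain[j] = tmp;
        }

        /* Latency-bound: every load depends on the previous load's result. */
        double t0 = seconds();
        size_t next = 0;
        for (size_t i = 0; i < N; i++) next = chain[next];
        double chase = seconds() - t0;

        /* Bandwidth-bound: independent sequential loads the hardware
           prefetcher can stream ahead of the loop. */
        t0 = seconds();
        size_t sum = 0;
        for (size_t i = 0; i < N; i++) sum += chain[i];
        double stream = seconds() - t0;

        printf("pointer chase %.3fs, sequential sum %.3fs (%zu %zu)\n",
               chase, stream, next, sum);
        free(chain);
        return 0;
    }
    ```

    On typical desktop hardware the chase comes out an order of magnitude slower or worse, and every extra nanosecond of memory latency widens that gap.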

    For HBM specifically, one of its key design features is that it sits on the same interposer as the processor (CPU, GPU, FPGA, whatever you're building) using looooooads of pins - like, 2,860 pins per channel, compared to 380 for DDR4. So many pins that you couldn't route them out to a slot-in board, and if you did the signal would degrade too much anyway. So, while you could use HBM as system RAM, you'd have to put it on the same chip as the CPU - which means if you bought a Core i17 with 1TB HBM4 and wanted to upgrade it to 2TB in the future you'd be throwing away the CPU as well.
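    To put rough numbers on those bus widths, peak bandwidth is just width times per-pin data rate, divided by eight for bytes. A quick back-of-the-envelope in C, using the commonly quoted figures (a 1,024-bit interface per HBM2 stack at 2.0 Gb/s per pin, rising to 2.4 Gb/s under the boosted spec this article covers, against a 64-bit DDR4-3200 channel) rather than anything straight from the spec sheets:

    ```c
    /* Peak memory bandwidth, back-of-the-envelope:
       bus width (bits) x per-pin rate (Gb/s) / 8 = GB/s.
       Figures are the commonly quoted ballpark ones, not spec quotes. */
    #include <stdio.h>

    static double peak_gb_per_s(int bus_bits, double gbps_per_pin) {
        return bus_bits * gbps_per_pin / 8.0;
    }

    int main(void) {
        printf("HBM2 stack, 1024-bit @ 2.0 Gb/s: %6.1f GB/s\n", peak_gb_per_s(1024, 2.0));
        printf("HBM2 stack, 1024-bit @ 2.4 Gb/s: %6.1f GB/s\n", peak_gb_per_s(1024, 2.4));
        printf("DDR4-3200 channel, 64-bit:       %6.1f GB/s\n", peak_gb_per_s(64, 3.2));
        return 0;
    }
    ```

    That works out to 256 GB/s per stack (307.2 GB/s at the boosted rate) against 25.6 GB/s per DDR4 channel - the bus width is doing almost all the work, which is exactly why you need the interposer.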

    The bigger problem, though, and this applies to both GDDR and HBM, is cost. DDR4 is, comparatively speaking, super-cheap in price-per-gigabyte - it's the cheapest form of high-performance memory, in fact. Anything else - even LPDDR4, the low-power variant - is more expensive, and HBM2 is the most expensive of them all. It's also harder to make, which means lower volumes, which drives prices up again - and you can't design a system to use HBM and DDR interchangeably depending on what's available, leaving you designing an entirely novel platform for a very small number of sales.

    Now, there was a third tech in the mix: Hybrid Memory Cube, or HMC. Micron developed it back in 2011 as a competitor to HBM, but despite offering an order-of-magnitude performance improvement over DDR3 it never caught on: the HMC Consortium has seen HP and Microsoft drop out since its formation, and even Micron itself has discontinued HMC production as of this year. The tech kinda-sorta lives on, though, in that JEDEC's HMC-inspired Wide I/O is now part of the new, improved HBM spec.
    Both AMD (no surprise, considering AMD invented it) and Nvidia use it on their discrete GPU products, while Intel uses it in its Stratix 10 FPGAs and Xeon Phi coprocessor boards.
     
  6. perplekks45

    perplekks45 LIKE AN ANIMAL!

    Sounds like the pitch for a next-gen Apple product.
     
  7. edzieba

    edzieba Virtual Realist

    The Xeon Phi is using HMC (referred to as 'MCDRAM'), not HBM. Intel was Micron's partner in developing HMC.
     
  8. The_Crapman

    The_Crapman World's worst stuntman. Lover of bit-tech

    Yeah, that's what I thought; I just wanted to check other people knew that :worried:
     