Discussion in 'Article Discussion' started by bit-tech, 18 Dec 2018.
Will HBM ever be used as system RAM? Why don't we see faster memory like GDDR5/6 used when it's far faster than DDR4?
I believe the main reasons are down to latency and bus widths, but I'm not 100% on the details.
Is it used in anything much?
I know AMD have thrown it about every now and then in some GPUs, but besides that is it mainly used in things other than the desktop PC field?
There are a couple of reasons that high-bandwidth GDDR/HBM isn't typically used as system RAM. For GDDR, the reason was latency: GDDR is specifically optimised to provide high bandwidth at the cost of increased latency, which doesn't matter a jot when you're throwing around gigabytes of texture data but would absolutely kill general-purpose performance if you tried to use it as system RAM. (That said, there are techniques for using a graphics card's framebuffer as swap space in Linux, but they're hacky and have a tendency to crash the system.)
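That latency-vs-bandwidth tradeoff can be sketched with some back-of-the-envelope arithmetic. The figures below (latencies and bandwidths) are illustrative assumptions, not measured values; the point is just that dependent accesses pay the full round-trip latency every time, while streaming reads only care about bandwidth:

```python
# Toy model: pointer-chasing (each access waits on the previous one)
# vs streaming the same volume of data sequentially.
# All memory figures here are illustrative assumptions, not spec values.

def dependent_access_time_us(n_accesses, latency_ns):
    """Dependent accesses serialise: total time = count x latency."""
    return n_accesses * latency_ns / 1000

def streaming_time_us(total_bytes, bandwidth_gb_s):
    """Sequential reads are limited by bandwidth, not latency."""
    return total_bytes / (bandwidth_gb_s * 1e9) * 1e6

# Hypothetical parts: a low-latency DDR-style memory vs a
# high-bandwidth but higher-latency GDDR-style memory.
ddr  = {"latency_ns": 60,  "bandwidth_gb_s": 25.6}
gddr = {"latency_ns": 120, "bandwidth_gb_s": 256.0}

n = 1_000_000           # one million dependent 64-byte accesses
total = n * 64          # the same data volume, read as one stream

for name, mem in (("DDR-like", ddr), ("GDDR-like", gddr)):
    chase  = dependent_access_time_us(n, mem["latency_ns"])
    stream = streaming_time_us(total, mem["bandwidth_gb_s"])
    print(f"{name}: pointer-chase {chase:.0f} us, stream {stream:.0f} us")
```

With these made-up numbers the GDDR-style part wins the streaming case by 10x but loses the pointer-chasing case by 2x, which is exactly the shape of workload a general-purpose CPU throws at its system RAM.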
For HBM specifically, one of its key design features is that it sits on the same interposer as the processor (CPU, GPU, FPGA, whatever you're building) using looooooads of pins - like, 2,860 pins per channel, compared to 380 for DDR4. So many pins that you couldn't route them out to a slot-in board, and if you did the signal would degrade too much anyway. So, while you could use HBM as system RAM, you'd have to put it on the same chip as the CPU - which means if you bought a Core i17 with 1TB HBM4 and wanted to upgrade it to 2TB in the future you'd be throwing away the CPU as well.
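The payoff for all those pins is bus width, and peak bandwidth is just width times transfer rate. As a rough sketch, using nominal spec-sheet figures (a single 64-bit DDR4-3200 channel versus a single 1024-bit HBM2 stack at 2.0 GT/s):

```python
# Rough peak-bandwidth arithmetic showing why HBM's enormous
# pin/bus-width budget matters. Figures are nominal spec numbers
# used illustratively: DDR4-3200 (64-bit channel) vs one HBM2 stack
# (1024-bit interface at 2.0 GT/s).

def peak_bandwidth_gb_s(bus_width_bits, transfer_rate_gt_s):
    """Peak bandwidth = bus width in bytes x transfers per second."""
    return bus_width_bits / 8 * transfer_rate_gt_s

ddr4 = peak_bandwidth_gb_s(64, 3.2)     # 64-bit channel at 3200 MT/s
hbm2 = peak_bandwidth_gb_s(1024, 2.0)   # 1024-bit stack at 2.0 GT/s

print(f"DDR4-3200 channel: {ddr4:.1f} GB/s")   # 25.6 GB/s
print(f"HBM2 stack:        {hbm2:.1f} GB/s")   # 256.0 GB/s
```

Even at a lower per-pin rate, the 16x wider bus buys HBM2 an order of magnitude more bandwidth per stack, and that width is exactly what you can't route out to a slot-in DIMM.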
The bigger problem, though, and this applies to both GDDR and HBM, is cost. DDR4 is, comparatively speaking, super-cheap in price-per-gigabyte - it's the cheapest form of high-performance memory, in fact. Anything else - even LPDDR4, the low-power variant - is more expensive, and HBM2 is the most expensive of them all. It's also harder to make, which means lower volumes, which drives prices up again - and you can't design a system to use HBM and DDR interchangeably depending on what's available, so you'd be designing an entirely novel platform for a very small number of sales.
Now, there was a third tech in the mix: Hybrid Memory Cube, or HMC. Micron developed it back in 2011 as a competitor to HBM, but despite offering an order-of-magnitude performance improvement over DDR3 it never caught on: the HMC Consortium has seen HP and Microsoft drop out since its formation, and even Micron itself has discontinued HMC production as of this year. The tech kinda-sorta lives on, though, in that JEDEC's HMC-inspired Wide I/O is now part of the new, improved HBM spec.
Both AMD (no surprise, considering AMD invented it) and Nvidia use it on their discrete GPU products, while Intel uses it in its Stratix 10 FPGAs and Xeon Phi coprocessor boards.
Sounds like the pitch for a next-gen Apple product.
The Xeon Phi is using HMC (referred to as 'MCDRAM'), not HBM. Intel was Micron's partner in developing HMC.
Yeh that's what I thought, I just wanted to check other people knew that