Discussion in 'Article Discussion' started by bit-tech, 11 Jan 2018.
The elephant in the room is cost. A single HBM2 stack gives you 8GB of VRAM at 307GB/s, at the cost of an expensive HBM stack plus an interposer assembly. Matching that with GDDR6 using Samsung's numbers (16Gb/s per pin, 32 pins per package, so 64GB/s per package) means five GDDR6 packages, which gets you 320GB/s of total bandwidth and 10GB of capacity (2GB per package), while skipping interposer production and assembly entirely. You can also scale up and down by adding or removing a handful of GDDR6 packages, whereas with HBM2 the smallest step is a whole extra stack.
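The arithmetic above can be sketched as a quick script. A minimal back-of-envelope calculation, assuming the figures quoted in the post (307GB/s per HBM2 stack; 16Gb/s per pin, 32 pins, and 2GB per GDDR6 package):

```python
import math

# Assumed figures from the post, not official spec sheets.
HBM2_BW_GBPS = 307          # GB/s per HBM2 stack
GDDR6_PIN_RATE = 16         # Gb/s per pin (note: gigabits)
GDDR6_PINS = 32             # pins per package
GDDR6_CAP_GB = 2            # GB capacity per package

# Per-package bandwidth: 16 Gb/s x 32 pins = 512 Gb/s = 64 GB/s
per_pkg_bw = GDDR6_PIN_RATE * GDDR6_PINS / 8

# Packages needed to at least match one HBM2 stack's bandwidth
packages = math.ceil(HBM2_BW_GBPS / per_pkg_bw)
total_bw = packages * per_pkg_bw
total_cap = packages * GDDR6_CAP_GB

print(f"{packages} packages -> {total_bw:.0f} GB/s, {total_cap} GB")
# 5 packages -> 320 GB/s, 10 GB
```

This reproduces the post's numbers: five packages beat the single stack on both bandwidth (320 vs 307GB/s) and capacity (10 vs 8GB), with no interposer.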
As you scale up (terabytes per second of bandwidth, capacities well into the double digits of GB), HBM2 makes great sense over laying down vast fields of GDDR packages, but for consumer cards the big up-front cost of having an interposer at all is a killer.
Yes, there are issues with HBM2, which is why Samsung is batting for both teams.