
Other PCIe Bandwidth

Discussion in 'Hardware' started by dec, 21 Jun 2019.

  1. dec

    dec

    10 Jan 2009
    After reading the announcement of the PCIe 6.0 spec I noticed some interesting numbers. The frankly silly amounts of bandwidth available in newer PCIe revisions (4.0 onward) may end up being used more effectively by SSDs than GPUs. My PC has a peak memory bandwidth of about 35 GB/s, which is comparable to the ~32 GB/s available in each direction over PCIe 4.0 x16. That's half of what PCIe 5.0 can do and a quarter of what PCIe 6.0 offers.
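    Those per-generation numbers can be sanity-checked with a quick back-of-envelope sketch. The transfer rates and line encodings below are the published per-generation figures; the script itself is just an illustration and ignores packet/protocol overhead beyond the line coding:

    ```python
    # Rough per-direction PCIe throughput by generation (illustrative;
    # ignores protocol/packet overhead beyond the line encoding).
    GENS = {
        # generation: (transfer rate in GT/s per lane, encoding efficiency)
        "3.0": (8,  128 / 130),   # 128b/130b encoding
        "4.0": (16, 128 / 130),
        "5.0": (32, 128 / 130),
        "6.0": (64, 242 / 256),   # PAM4 signalling, FLIT-based encoding
    }

    def x16_bandwidth_gbs(gen: str, lanes: int = 16) -> float:
        """Approximate one-direction bandwidth in GB/s for a given link width."""
        gt_per_s, efficiency = GENS[gen]
        bits_per_s = gt_per_s * 1e9 * efficiency   # payload bits per lane
        return bits_per_s / 8 * lanes / 1e9        # bytes/s for the whole link

    for gen in GENS:
        print(f"PCIe {gen} x16: ~{x16_bandwidth_gbs(gen):.0f} GB/s per direction")
    ```

    That lines up with the numbers above: roughly 32 GB/s for 4.0 x16, double that for 5.0, and double again (less FLIT overhead) for 6.0.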

    Would it be possible to replace RAM entirely with a team of SSDs connected via PCIe 4.0 or newer? You'd have to clear some tall hurdles to boot and configure a PC without RAM, on both the hardware and software side. Since the memory controller and the PCIe links both live on the northbridge/IO chiplet/CPU die, the physical links are already present (though not optimised for this in the slightest from an electrical engineering perspective), and I'm sure combined PCIe and SSD latency far exceeds RAM latency. I'm not certain about the latency, though, since AMD, I believe, uses Infinity Fabric to talk to both RAM and PCIe devices directly from the CPU. The upcoming DDR5 standard is likely to increase RAM transfer rates again and may well outpace PCIe, so this would only ever be a thought experiment.

    Optane may or may not become this someday.
  2. Gareth Halfacree

    Gareth Halfacree WIIGII! Lover of bit-tech Administrator Super Moderator

    4 Dec 2007
    You're basically describing "universal memory," which 3D XPoint (Optane) is having a bloody good go at becoming. It's already possible to use the data centre version as pseudo-RAM, and the DIMM versions can be used as actual RAM. Intel and Micron have been pushing that vision since it was publicly unveiled in 2015.

    There's still a way to go, though, and the latency and bandwidth aren't yet up there as a day-to-day RAM replacement. Give it time, though, and the idea of a computer where there's no longer any distinction between RAM and storage could come true. Use the same stuff for cache memory, and you've got a system which could theoretically survive any length of power outage - servers that previously needed a UPS for resilience rather than availability wouldn't any more, and laptops could literally cut the power completely at idle to boost battery life - and which could run significantly faster by not having to copy data from storage to RAM and back again every time it's processed. (Some servers do this already - it's called in-memory computing, and involves having a massive wodge of RAM you can load your whole data set into - while others are working on adding processing capabilities to the RAM itself so you don't have to go storage-RAM-cache-CPU-RAM-storage all the time.)
  3. edzieba

    edzieba Virtual Realist

    14 Jan 2009
    PCIe latency is too high for it to be suitable as main system memory - a couple of orders of magnitude worse than DDR. By the time you trim the overhead and add implicit assumptions about endpoint behaviour to PCIe to make it suitable for main memory, you've lost its utility as a general-purpose bus, and vice versa (hence why it was so hard to get Optane DIMMs working over DDR).
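    The "couple of orders of magnitude" gap can be illustrated with rough ballpark access latencies. The figures below are assumed typical values for illustration, not measurements from this thread - real numbers vary widely by platform:

    ```python
    # Rough order-of-magnitude access latencies in nanoseconds (assumed
    # ballpark figures for illustration; actual values are platform-dependent).
    LATENCY_NS = {
        "DDR4 DRAM":       100,      # ~100 ns typical load-to-use
        "Optane DIMM":     350,      # persistent memory, a few times DRAM
        "PCIe round trip": 1_000,    # ~1 us of transaction overhead
        "NVMe SSD read":   80_000,   # tens of microseconds for flash
    }

    for name, ns in LATENCY_NS.items():
        # Express each as a multiple of the DRAM baseline
        print(f"{name:16s} ~{ns / LATENCY_NS['DDR4 DRAM']:>6.0f}x DRAM latency")
    ```

    Even before the SSD's own flash latency enters the picture, the PCIe transaction overhead alone puts you an order of magnitude behind DRAM, which is the crux of the problem for using it as main memory.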
