Discussion in 'Article Discussion' started by Da Dego, 15 Feb 2007.
I didn't really know SRAM was used for cache; to be honest I'm kind of surprised. And the article makes it seem like this could have come years ago... why hasn't it happened before now?
It said somewhere in there that the technology to squeeze it onto the die has only recently been developed, or possibly the way IBM have implemented it has only just become viable.
I don't know if this is a good thing. SRAM has many advantages over DRAM. Size isn't everything.
Haven't ATI already done this with the Xbox 360 graphics chip? I'm pretty sure it has 10 MB of eDRAM to use as a frame buffer.
SRAM has been way faster than DRAM for ages, which is why they used it. It's also 'nicer' because you don't have to periodically refresh its contents the way you do with DRAM.
I guess DRAM has got to the point where the speed difference isn't big enough to warrant sticking with the (much more expensive) SRAM.
So does this mean larger but slower cache is the future? Out of curiosity, does anyone know how much of a performance hit you take if you move the cache off the CPU and put it on, say, a separate chip?
dr. strangelove: I guess so! BTW, moving the cache off the CPU is exactly why the Slot design was introduced with the Pentium II: to reduce the cost of the CPU die they used off-die cache and put it on a daughterboard packaged up with the CPU. You increased the latency by a fair margin, but it was cheaper. Then they rejigged things and got the cache back into the core again. Hello, socket!
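To put rough numbers on the performance-hit question, here's a quick back-of-envelope sketch using the usual average-memory-access-time formula. The latencies and hit rates are made-up illustrative figures, not measurements of any real chip:
[code]
# Back-of-envelope average memory access time (AMAT) comparison.
# All latencies (in CPU cycles) and hit rates are illustrative assumptions,
# not measurements of any real CPU.

def amat(l1_lat, l1_hit, l2_lat, l2_hit, mem_lat):
    # AMAT = L1 latency + L1 miss rate * (L2 latency + L2 miss rate * memory latency)
    return l1_lat + (1 - l1_hit) * (l2_lat + (1 - l2_hit) * mem_lat)

# Assumed: on-die L2 at around 10 cycles vs. a slot/daughterboard L2 at around 30 cycles.
on_die  = amat(l1_lat=3, l1_hit=0.95, l2_lat=10, l2_hit=0.90, mem_lat=200)
off_die = amat(l1_lat=3, l1_hit=0.95, l2_lat=30, l2_hit=0.90, mem_lat=200)

print(f"on-die L2 : {on_die:.2f} cycles on average")
print(f"off-die L2: {off_die:.2f} cycles on average")
[/code]
Even a modest bump in L2 latency shows up in the average, which is roughly why the cache ended up back on the die.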
I'm wondering how long before we see a Via chip with all the RAM built in.
I'm guessing SRAM will still be used for L1 cache at least, because it is hella faster. Also, SRAM is a latching technology - that is why it requires six transistors per cell instead of DRAM's one transistor plus one capacitor. So SRAM is bigger and arguably more complex. However, unless this is some special kind of DRAM, SRAM is considerably more power efficient in 'standby' modes - it doesn't require refreshing, and its power draw is minuscule except when being read or written. This is why battery-backed SRAM was used for years to hold PC BIOS settings - it could be left powered by a tiny button battery for ages and didn't lose its contents.
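To get a rough feel for the density side of it: a 6T SRAM cell takes up several times the area of a 1T1C DRAM cell, so the same slice of die holds a lot more eDRAM. The cell sizes in this sketch are just illustrative placeholders, not figures from IBM's process:
[code]
# Rough density comparison: how much cache fits in a fixed slice of die area.
# Cell areas are illustrative placeholders, not figures from any real process.

AREA_MM2       = 10.0    # assumed die area set aside for cache
SRAM_CELL_UM2  = 0.60    # assumed area of one 6T SRAM cell, in um^2 per bit
EDRAM_CELL_UM2 = 0.15    # assumed area of one 1T1C eDRAM cell, in um^2 per bit

def capacity_mb(area_mm2, cell_um2):
    bits = (area_mm2 * 1_000_000) / cell_um2   # 1 mm^2 = 1,000,000 um^2
    return bits / (8 * 1024 * 1024)            # bits -> megabytes

print(f"SRAM in {AREA_MM2} mm^2 : {capacity_mb(AREA_MM2, SRAM_CELL_UM2):.1f} MB")
print(f"eDRAM in {AREA_MM2} mm^2: {capacity_mb(AREA_MM2, EDRAM_CELL_UM2):.1f} MB")
[/code]
With those (assumed) cell sizes you get roughly four times the capacity in the same area, which is the whole attraction.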
Interesting move, fair enough, it means more on the chip, but it depends on the access times of the two. I know DRAM's speed has increased dramatically over the last few years, but size isn't everything, so I am not convinced; I will need to see it in action against a chip with SRAM cache!
Most BIOSes will run a diagnostic on the processor's cache; if you look at the POST screen you will see one of the first entries will read 'xxxx SRAM', where xxxx is the amount of cache in kB. At least on motherboards for the K8...
ATI's Xenos GPU in the Xbox 360 has the DRAM on the same package, but it's not embedded on the chip itself.
Whatever gives us more speed and performance... Chips are now at a stage where I am not waiting for the next speed generation before I upgrade, and it has been that way for the last year or two. So if they can reduce the cost and deliver the same speeds in a smaller, cooler, less power-hungry package, it sounds good.
QFT. It still seems like DRAM is slower, as it needs to be refreshed periodically (each row rewritten every few milliseconds rather than every few clocks), plus the addressing scheme is worse (slower access times).
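For what it's worth, the raw time lost to refresh is tiny; here's a toy estimate where the refresh interval, row count, cycles per refresh and clock speed are all just assumed round numbers:
[code]
# Toy estimate of how much time a DRAM array spends on refresh.
# Every parameter below is an assumption for illustration, not a datasheet value.

REFRESH_INTERVAL_MS = 64.0   # assumed: each row must be refreshed once per 64 ms
ROWS                = 8192   # assumed number of rows in the array
CYCLES_PER_REFRESH  = 10     # assumed cycles to refresh one row
CLOCK_GHZ           = 2.0    # assumed clock frequency

cycles_per_interval = REFRESH_INTERVAL_MS * 1e-3 * CLOCK_GHZ * 1e9
cycles_refreshing   = ROWS * CYCLES_PER_REFRESH
overhead            = cycles_refreshing / cycles_per_interval

print(f"time spent refreshing: {overhead * 100:.3f}%")
[/code]
So the bigger worry is presumably the slower access time and the complexity of hiding refreshes, not the refresh bandwidth itself.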
SRAM has been used for a long time for a reason: it's fast and it's easy to connect up in circuits.
I would have thought it might make more sense to have something like a 256 kB L2 cache made of SRAM, and then a much larger L3 cache that can store far more; if it's all on the processor, the access time will be minimal anyway.
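Extending the back-of-envelope AMAT sketch from earlier in the thread to three levels shows why that could work out; again, every number here is just an assumption for illustration:
[code]
# Three-level AMAT sketch: small, fast SRAM L2 plus a large, slower eDRAM L3.
# Hit rates and latencies (in cycles) are assumptions for illustration only.

def amat3(l1_lat, l1_hit, l2_lat, l2_hit, l3_lat, l3_hit, mem_lat):
    m1, m2, m3 = 1 - l1_hit, 1 - l2_hit, 1 - l3_hit
    return l1_lat + m1 * (l2_lat + m2 * (l3_lat + m3 * mem_lat))

# Baseline: no L3, so L2 misses go straight to main memory
# (modelled as an L3 with zero latency that never hits).
no_l3   = amat3(3, 0.95, 12, 0.80,  0, 0.00, mem_lat=200)
# Added: a big eDRAM L3 that is slower than L2 but catches most of what L2 misses.
with_l3 = amat3(3, 0.95, 12, 0.80, 40, 0.85, mem_lat=200)

print(f"L1 + SRAM L2 only      : {no_l3:.2f} cycles on average")
print(f"L1 + SRAM L2 + eDRAM L3: {with_l3:.2f} cycles on average")
[/code]
The slower L3 still wins on average as long as it catches a decent share of the traffic that would otherwise go all the way to main memory.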
There goes my lecturer's solemn promise that only SRAM would ever be used for processor cache... oh well, I get to put him down next time I'm in his class!!
PS: Isn't computer architecture fun, boys and girls!?
I can see an advantage with the switch if...
they keep the L2 cache the same size (in MB) and use the freed-up space (in mm²) for more SRAM L1 cache.
I believe we will see this DRAM used in systems where consumers do not need high performance, say the $1000-and-below category, or in devices like handhelds where additional cache is worth more than raw speed.
Also, at its announcement AMD's Fusion was supposed to be the discrete GPU killer, and then, after some of the smarter geek sites pondered how it would actually work, ATI scaled it back from a replacement for discrete graphics to a much more sober replacement for laptop integrated graphics. I believe the same will happen to DRAM for cache: it will end up designed for the lower end of the market, where saving a dollar or two off the cost of a CPU is important.
If I am wrong and it turns out to be nearly as fast as SRAM and smaller per MB, then AMD will be able to benefit: when they added the on-die memory controller they lost die real estate, and denser memory might help them fit an additional MB or two of cache on a dual-core CPU. That is a big 'if', though; if this were so good, Intel would have announced their own work on this kind of DRAM, since both companies seem to announce things almost simultaneously lately.
It will be good to see IBM back in the chip market to give more competition to AMD and Intel. I still have a few old IBM chips in my drawer.