
News AMD hUMA introduced: Heterogeneous Unified Memory Access

Discussion in 'Article Discussion' started by Meanmotion, 30 Apr 2013.

  1. Meanmotion

    Meanmotion bleh Moderator

    Joined:
    16 Nov 2003
    Posts:
    1,652
    Likes Received:
    19
  2. bowman

    bowman Minimodder

    Joined:
    7 Apr 2008
    Posts:
    363
    Likes Received:
    10
    This is what the promise of Fusion really is. Putting a CPU and a GPU in the same MCM, or heck, even on the same die, is not revolutionary, and hardly even evolutionary. Thus far it has worked mostly as a cost-saving measure. The system architecture is still the same as a regular computer with separate CPU and GPU.

    Now we're talking, though. Unfortunately, this is more likely to mean the GPU will be a low-end one shackled to the inadequacies of DDR3 memory, rather than getting the amazing opportunity of letting a CPU and GPU share some horrendously fast GDDR5 memory.

    Oh well, at least AMD will implement that in the silly Sony box.
     
  3. mi1ez

    mi1ez Modder

    Joined:
    11 Jun 2009
    Posts:
    1,650
    Likes Received:
    120
    hUMAn after all...
     
  4. will_123

    will_123 Small childs brain in a big body

    Joined:
    2 Feb 2011
    Posts:
    1,060
    Likes Received:
    15
    devug..? Little typo.
     
  5. Meanmotion

    Meanmotion bleh Moderator

    Joined:
    16 Nov 2003
    Posts:
    1,652
    Likes Received:
    19
    Ta, fixed.
     
  6. will_123

    will_123 Small childs brain in a big body

    Joined:
    2 Feb 2011
    Posts:
    1,060
    Likes Received:
    15
    Also, judging by the PS4 specs, will this really be the case? It's got 8GB of GDDR5 unified memory.
     
  7. azazel1024

    azazel1024 What's a Dremel?

    Joined:
    3 Jun 2010
    Posts:
    487
    Likes Received:
    10
    Nice.

    Though, unless I greatly misunderstand it, Haswell brings unified CPU/GPU memory to Intel chips in...uh...a month. So, another "Intel beating AMD to the market" thingie.
     
  8. mi1ez

    mi1ez Modder

    Joined:
    11 Jun 2009
    Posts:
    1,650
    Likes Received:
    120
    Haven't AMD been first with most recent CPU techs?
     
  9. schmidtbag

    schmidtbag What's a Dremel?

    Joined:
    30 Jul 2010
    Posts:
    1,082
    Likes Received:
    10
    This is pretty fantastic IMO; it will give AMD a considerable performance gain and, as stated earlier, makes the term Fusion much more accurate. What I find interesting about this is that you could potentially have several GB of memory go toward the GPU. With a little overclocking, this could probably easily handle six monitors that aren't doing anything GPU-intensive (such as HD video or 3D). If you want a multi-seat office or school computer, this would be ideal. Many people overestimate the needs of office computers.
     
  10. Guest-16

    Guest-16 Guest

    Technically AMD already have it in the PS4 and probably the Xbox 720/Next/whatever too. It may not be on the consumer market, but the tech is there and working.
     
  11. SAimNE

    SAimNE What's a Dremel?

    Joined:
    23 Oct 2012
    Posts:
    122
    Likes Received:
    0
    GDDR5 isn't actually "faster" than DDR3; it's just optimized for graphics (pretty sure it handles higher-volume transfers better at the cost of a bit of added latency, but I could be wrong, haven't looked too far into it). Anyway, if they make it GDDR5 memory, the processor side of things will suffer while the graphics would improve... so the best outcome would probably just be DDR4 coming out in time for the APUs.
     
  12. Adnoctum

    Adnoctum Kill_All_Humans

    Joined:
    27 Apr 2008
    Posts:
    486
    Likes Received:
    31
    GDDR5 isn't better than DDR3; it IS DDR3, but optimised for the parallel workloads of GPUs. GDDR5 gets its high bandwidth because it can have multiple (high-latency/high-bandwidth) controllers per channel (while also reading AND writing during the same cycle), whereas DDR3 has a single (low-latency/low-bandwidth) controller per channel (and can only read OR write during a cycle).

    CPUs want DDR3 because they prefer low latency: they have multiple workloads all needing quick access so as not to hold up the current thread.
    GPUs want GDDR5 because they want high bandwidth and care less about latency: they need to move a lot of data, but it is less time-critical.

    These are two competing requirements. On the desktop you'll want DDR3 because you will have multiple workloads running simultaneously. Consoles such as the PS4 will be able to get away with GDDR5 because they will be undertaking a single, mainly GPU-related workload for which GDDR5 will suffice.

    It should be noted that it is not GDDR5 itself that has high latency but the controllers, as high bandwidth and low latency are competing requirements.
    Low-latency GDDR5 controllers should be doable; it's just that they haven't been needed for past, current or future AMD/nVidia GPUs, which require high bandwidth. Perhaps a controller for APUs that can switch between high-bandwidth and low-latency modes is the answer.
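    A rough back-of-envelope sketch of that trade-off (the latency and bandwidth figures are made-up illustrative assumptions, not real DDR3/GDDR5 specs):

        # Illustrative only: why low latency suits CPU-style access patterns
        # and high bandwidth suits GPU-style streaming. Numbers are invented.
        DDR3  = {"latency_ns": 50,  "bandwidth_gbs": 25}    # lower latency, modest bandwidth
        GDDR5 = {"latency_ns": 100, "bandwidth_gbs": 170}   # higher latency, much more bandwidth

        def access_time_us(mem, num_accesses, bytes_per_access):
            """Total time for a workload: per-access latency plus raw transfer time."""
            latency_us  = num_accesses * mem["latency_ns"] / 1e3
            transfer_us = num_accesses * bytes_per_access / (mem["bandwidth_gbs"] * 1e3)
            return latency_us + transfer_us

        # CPU-like workload: a million small, dependent 64-byte accesses
        # GPU-like workload: one big 256 MB streaming transfer
        for name, mem in (("DDR3", DDR3), ("GDDR5", GDDR5)):
            print(name, "CPU-like:", round(access_time_us(mem, 1_000_000, 64)), "us")
            print(name, "GPU-like:", round(access_time_us(mem, 1, 256 * 1024**2)), "us")

    With these assumed figures, latency dominates the CPU-like case and bandwidth dominates the GPU-like case, which is the whole reason the two memory types diverge.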
     
    Last edited: 1 May 2013
  13. jb0

    jb0 Minimodder

    Joined:
    8 Apr 2012
    Posts:
    555
    Likes Received:
    93
    GDDR5, DDR3... it's still all DRAM. Slow, power-hungry, complex-to-interface DRAM.

    Wake me when we're using SRAM for more than cache again.
     
  14. yougotkicked

    yougotkicked A.K.A. YGKtech

    Joined:
    3 Jan 2010
    Posts:
    251
    Likes Received:
    9
    If the integrated GPU has any real number-crunching power to it, this could be a huge deal. It won't need to be a GTX Titan; something with a few hundred compute cores that can significantly outperform a CPU for basic parallel tasks would do the trick. I can imagine computing clusters with a dozen APUs per blade server, offering huge throughput with relatively low power demands.

    Of course, this is all marketing BS if the integrated GPU isn't big enough. Careful programming can mitigate the data transfer overhead, which isn't so bad if you don't need to constantly load new gigabyte-scale blocks of data onto the GPU (bear in mind that the 'bottleneck' is the PCI-E bus, which is slow compared to the bus between the CPU and RAM, but it's not like we're moving 10 gigs onto a USB drive either).

    I sure would love it if this turns out to be as good as it sounds: over the summer I'll be teaching researchers how to do GPGPU computing, and eliminating the data transfer step would make things way simpler when coding.
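    For a rough sense of the scale of that transfer overhead (the PCIe bandwidth and kernel time below are assumed ballpark figures, not measurements):

        # Ballpark cost of staging a large block onto a discrete GPU over PCIe,
        # versus a shared-memory APU where the staging copy disappears.
        # All figures are assumptions for illustration.
        PCIE_GBS  = 6.0     # assumed effective PCIe bandwidth, GB/s
        DATA_GB   = 1.0     # a gigabyte-scale input block
        KERNEL_MS = 40.0    # assumed GPU compute time for that block

        copy_ms = DATA_GB / PCIE_GBS * 1000     # time to push the data across the bus
        print(f"PCIe copy:           {copy_ms:.0f} ms")
        print(f"Discrete GPU total:  {copy_ms + KERNEL_MS:.0f} ms")
        print(f"Shared-memory total: {KERNEL_MS:.0f} ms")

    With numbers like these the copy can easily dwarf the compute, which is why removing the staging step would simplify (and speed up) GPGPU code.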
     
  15. will_123

    will_123 Small childs brain in a big body

    Joined:
    2 Feb 2011
    Posts:
    1,060
    Likes Received:
    15
    Didn't say it was faster, just said that the PlayStation 4 specs showed unified memory.
     