Other TSMC Details Silicon Road Map - Great read on EETimes

Discussion in 'Hardware' started by Guest-16, 21 Mar 2016.

  1. Guest-16 (Guest)

    http://www.eetimes.com/document.asp?_mc=RSS_EET_EDT&doc_id=1329217

    Some real gems of info in here:

    Q117

    That's seriously impressive if it can pull it off.

    Could be non-GPU devices using HBM2, but more likely it's an admission that Nvidia's Pascal with HBM2 (GP200) was taped out in February with 16GB of memory. So no GP200 until Q4, provided it's an HBM2-exclusive chip. If Nvidia is making GP200 with GDDR5/X, it'll be ~June.

    EDIT: I was just told it could be a networking CPU - but why reference that when NV should have taped out already? And why 2x HBM2 rather than 4? So, maybe not.

    Intel is pushing for EUV at 7nm, AFAIK. TSMC also claims to be working on it, but they're usually a step behind Intel's adoption. Either way, the cost is going to be astronomical and production speed sloooooow [compared to 2x-nm nodes]. EUV speeds things up, but prices will have to increase. I read recently that a 10nm chip design will cost around $150M, so 7nm is likely to be similar. That's a HUGE investment, so we'll be seeing less variety being made and more software-defined FPGAs, or just recycling of existing designs.

    Page 4 is about 5nm and beyond. This is basically where silicon runs out so it gets very exotic. We'll probably be stuck on 7nm as long as we were on 28nm.

    [And for reference on the GPU side in the next few months, this is what AMD might be doing: http://forums.bit-tech.net/showpost.php?p=4024310&postcount=61]
     
    Last edited by a moderator: 21 Mar 2016
  2. play_boy_2000 (It was funny when I was 12)

    Intriguing, I wonder what sort of networking beast requires a bleeding(ish) edge process node and two stacks of HBM2 (4Tbit/s memory bandwidth??). Cisco or Juniper must be cooking up one hell of a router.
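    For a rough sanity check of that figure (a sketch; the 256 GB/s per-stack number is the published HBM2 spec figure, not something from the article):

    ```python
    # Sanity check of the "4 Tbit/s" figure for two HBM2 stacks.
    # Assumes the published HBM2 spec of 256 GB/s peak bandwidth per stack.
    GB_PER_STACK_BW = 256                 # GB/s per HBM2 stack (spec figure)
    stacks = 2

    total_GBps = stacks * GB_PER_STACK_BW  # 512 GB/s aggregate
    total_Tbps = total_GBps * 8 / 1000     # GB/s -> Tbit/s (8 bits per byte)

    print(f"{total_GBps} GB/s = {total_Tbps:.3f} Tbit/s")
    # -> 512 GB/s = 4.096 Tbit/s, i.e. roughly the quoted 4 Tbit/s
    ```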
     
  3. edzieba

    edzieba Virtual Realist

    HBM2 doubles the bandwidth of HBM and quadruples to octuples the storage per stack. For the moment, 512 GB/s looks sufficient for at least current GPU workloads, so doubling the total storage over HBM1 at the same bandwidth and at a lower cost (smaller interposer, fewer HBM stacks) seems like a sound design.
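    A quick per-stack comparison backing that up (using the published spec figures for each generation, not numbers from the thread):

    ```python
    # Per-stack HBM1 vs HBM2 comparison (published JEDEC-era spec figures).
    hbm1_bw_GBps, hbm1_cap_GB = 128, 1        # HBM1: 128 GB/s, 1 GB per stack
    hbm2_bw_GBps, hbm2_cap_GB = 256, (4, 8)   # HBM2: 256 GB/s, 4-8 GB per stack

    bw_ratio = hbm2_bw_GBps / hbm1_bw_GBps                    # "doubles the bandwidth"
    cap_lo = hbm2_cap_GB[0] / hbm1_cap_GB                     # "quadruples..."
    cap_hi = hbm2_cap_GB[1] / hbm1_cap_GB                     # "...to octuples"

    print(f"bandwidth x{bw_ratio:.0f}, capacity x{cap_lo:.0f} to x{cap_hi:.0f} per stack")
    # So two HBM2 stacks match the 512 GB/s of four HBM1 stacks
    # with 8-16 GB instead of 4 GB, on a smaller interposer.
    ```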
     