Graphics Pascal and Polaris Discussion Thread

Discussion in 'Hardware' started by Parge, 25 Aug 2015.

  1. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    One more thing to add on consumer NVLink: there's a good chance the first time it turns up in a consumer card will be as the link between the two chips on a dual-GPU card. That avoids all the messiness of trying to implement an ultra-high-bandwidth physical interconnect between cards. One of Nvidia's goals with NVLink is unified memory, so that would also tie in with keeping everything on one card and being able to share memory even after HBM2 has moved the DRAM onto GPU-local interposers rather than leaving a pile of unassociated dies on a shared PCB.
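
    For anyone wondering what unified memory actually looks like from the software side, here's a minimal sketch using CUDA's existing managed-memory API (the calls are real CUDA 6+ ones, but the dual-GPU setup, sizes and kernel are purely illustrative). Today the runtime migrates the pages over PCIe; NVLink's pitch is a much faster, coherent path for exactly this pattern:

        // Minimal sketch: one managed allocation touched by two GPUs.
        // Illustrative only - device count, sizes and kernel are made up.
        #include <cuda_runtime.h>
        #include <cstdio>

        __global__ void scale(float *data, int n, float k) {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) data[i] *= k;
        }

        int main() {
            const int n = 1 << 20;
            float *buf = nullptr;

            // One allocation, one pointer, visible to the host and every GPU.
            cudaMallocManaged(&buf, n * sizeof(float));
            for (int i = 0; i < n; ++i) buf[i] = 1.0f;

            cudaSetDevice(0);
            scale<<<(n + 255) / 256, 256>>>(buf, n, 2.0f);  // GPU 0 writes
            cudaDeviceSynchronize();  // finish before another device touches the pages

            // GPU 1 then works on the very same pointer; the runtime migrates
            // pages over whatever interconnect exists (PCIe today, NVLink later).
            cudaSetDevice(1);
            scale<<<(n + 255) / 256, 256>>>(buf, n, 0.5f);
            cudaDeviceSynchronize();

            printf("buf[0] = %.1f\n", buf[0]);  // back to 1.0 after both GPUs
            cudaFree(buf);
            return 0;
        }

    The point is there's a single pointer and no explicit cudaMemcpy between devices - the quality of the interconnect decides whether that's a toy or genuinely usable.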
     
  2. Guest-16

    Guest-16 Guest

    TSMC 16FF is confirmed
    NVLink GPU to GPU is confirmed for unified memory/coherency modes.
    HBM2 confirmed
    Updated Maxwell arch is confirmed (+ mixed precision)

    http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Pascal-GPU_Compute-Performance.jpg
    http://cdn.wccftech.com/wp-content/uploads/2014/11/NVIDIA-Pascal-GPU-NVLINK.png

    This is all GP100, and Nvidia usually ships Tesla cards first to big customers for big $$$, so this could all happen in Q1, with consumer parts pushed back into Q2. You don't need mixed precision on gaming cards, so it'll likely be left out of GP104 to save die size - and GP104's tape-out has yet to be confirmed. They'll start setting aside less-than-Tesla-quality dies for the next-gen Titan later in the year.

    If SLI gives extra memory it'll make the upsell so much easier, and single cards sold with less memory will be cheaper in order to push people into buying a second and doubling the pool. Will 2x 'GTX 1060 4GB' be worth it over 1x 'GTX 1080 8GB'?
     
  3. Harlequin

    Harlequin Modder

    Joined:
    4 Jun 2004
    Posts:
    7,131
    Likes Received:
    194
    http://wccftech.com/nvidia-pascal-g...hip-single-chip-card-feature-16-gb-hbm2-vram/

    don't think we'll see GP104 before H2 - possibly right at the end of Q2. More so given Nv have gone to TSMC when Samsung were expected to get the 14nm FinFET contract.
     
  4. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    Inter-GPU NVLink on x86 architectures was already confirmed through the slides available on the NVLink website; the question was whether it would ever appear in consumer cards. HBM2 and mixed precision were also known from March's GTC presentation. Doesn't look like the GTC Japan keynote session has been posted yet, so the slides aren't available to read through (and no session video yet).
     
  5. Guest-16

    Guest-16 Guest

    2H is too late. It needs to be a year after the GTX 980, and a consumer part needs to arrive in Q2 at the latest. TSMC will require volume commitments because, between losing the A9 contract to Samsung and the mobile slowdown, they need to recoup CAPEX ASAP ahead of 10nm, and NV will want to be as far ahead of AMD as possible to scoop up as much HBM2 as they can. Big buys win that.

    edzieba - fair dos, missed it before. NVLink is required for unified memory. It would be a win for consumer sales rather than just compute.
     
  7. Harlequin

    Harlequin Modder

    Joined:
    4 Jun 2004
    Posts:
    7,131
    Likes Received:
    194
    TSMC and bulk on a new process have historically never gone well. I personally do not expect 16nm bulk to be on time or on budget.
     
  8. Guest-16

    Guest-16 Guest

    Well, their first parts may be shipping now for Apple, although we don't know. The high-performance nodes are different to the mobile ones though, but the proof will definitely be there in Q4 either way.

    Technically they're already 'a year' late though, as they were meant to match Samsung at 14nm.
     
  9. Harlequin

    Harlequin Modder

    Joined:
    4 Jun 2004
    Posts:
    7,131
    Likes Received:
    194
    Whilst shipping an SoC to Apple is one thing, it's not high performance in BULK ;) The common consensus is Q2 next year for retail, and late Q2 at that - June.

    It comes down to new tech needing to be in quantity from two manufacturers - not just new tech but a new process. A lot of risk being carried.
     
  10. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Rumors being rumors, this could turn out to be nothing more than BS, but I thought it might be worth posting.

    NVIDIA rumored to use GDDR5X on next-gen Pascal-based GPUs
    http://www.tweaktown.com/news/47976/nvidia-rumored-use-gddr5x-next-gen-pascal-based-gpus/index.html
     
  11. Parge

    Parge the worst Super Moderator

    Joined:
    16 Jul 2010
    Posts:
    13,022
    Likes Received:
    618
    I guess that's what they'll use on the lower-end/mid-range cards then.
     
  12. Guest-16

    Guest-16 Guest

    Never even heard of GDDR5X before, but it's not surprising that there's a lower-cost alternative coming - we're long overdue a GDDR5 replacement/upgrade. I need to poke some people in JEDEC/the memory industry.

    Even though it's still GDDR5-based by the sounds of it, I would imagine greater density is certainly on the cards, plus some sort of revised interface to reach those speeds AND keep the PCB cost down. Otherwise it'll never make the price point.

    GP100/104 for HBM2, GP106/7 for GDDR5/5X, and maybe DDR4 for the very low-end x40s or if they update the 610/730.
     
  13. Corky42

    Corky42 Where's walle?

    Joined:
    30 Oct 2012
    Posts:
    9,648
    Likes Received:
    388
    Yea I didn't have a clue what GDDR5X was either. :)

    Tom's did a single-page synopsis of what's changed back in September; the even shorter version is that they've doubled the prefetch of GDDR5 from eight data words per memory access to 16.
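
    Back-of-envelope, assuming the simple per-pin rate = prefetch × core clock model that synopsis implies (illustrative figures, not confirmed card specs):

        $\text{per-pin rate} = \text{prefetch} \times f_{\text{core}}$
        $\text{GDDR5:}\quad 8 \times 875\,\text{MHz} = 7\,\text{Gbps}$
        $\text{GDDR5X:}\quad 16 \times 875\,\text{MHz} = 14\,\text{Gbps}$

    So the doubled prefetch doubles the per-pin rate without the DRAM core having to clock any higher.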
     
  14. Guest-16

    Guest-16 Guest

    Oh that flew under the radar!

    So the core clock needed to hit 7Gbps today is 875MHz; on GDDR5X, 10-12Gbps = 625-750MHz, 14Gbps is a similar 875MHz, and the proposed 16Gbps is 1GHz.

    From a marketing standpoint they're stupid to call it GDDR5X. It needs to be GDDR6. New numbers sell. Bloody engineers doing product naming again :rolleyes:
     
  15. Guest-16

    Guest-16 Guest

    Let's dig this up for ASCII's evaluation: http://ascii.jp/elem/000/001/109/1109096/
    Grab your salt for pinching
    http://ascii.jp/elem/000/001/109/1109103/map_2000x831.png

    Titan will apparently drop first in ~April. A single Nvidia design; no AIB redesigns, as always for Nvidia. Very limited supply given 16FF+/HBM2 availability, but all they really want is the brand exposure, profit margin and fulfilling key accounts. Despite the inevitable multi-$000 cost per unit it will sell out in high-end systems worldwide. They predict the 980 Ti replacement won't arrive 'til 2017 (!), so 99% of people upgrading (not considering AMD) will have nothing 'til the summer.

    Sadly the *80/*70 (GP104) is expected to be GDDR5X not HBM2, available around Computex - that timing is good for launching 3rd-party designs from all the usual TW AIBs. Others support this assumption as well. No SFF cards then, but we can hope AMD provides where Nvidia won't. The guess comes down to volume: Nvidia's 970 was its best-selling card, so to meet that kind of demand it can't risk HBM2 capacity not being up to market needs, and moving from GDDR5 to GDDR5X is comparatively simple. Probably the *80 will be 8GB and the *70 6GB, which is possible with GDDR5X's higher densities without altering the bus width.

    *60/*50 (GP106) will be Q3. It suggests the *60 will be 256-bit but I don't see that. Better to offer GDDR5X and GDDR5 versions on a 128-bit bus for the additional performance: that leaves the physical GPU chip the same and only the PCB layout changes. You'll certainly see 4GB GDDR5 options, maybe even 6/8GB on 'slower-end' 5X depending on volume.
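
    Rough numbers on that 128-bit trade-off, assuming the usual bandwidth = (bus width / 8) × per-pin rate relation - the Gbps grades are the proposed GDDR5X ones above, nothing confirmed for any actual card:

        $\text{BW} = \tfrac{\text{bus width}}{8} \times \text{per-pin rate}$
        $128\text{-bit GDDR5 @ }7\,\text{Gbps}:\ 16 \times 7 = 112\ \text{GB/s}$
        $128\text{-bit GDDR5X @ }10\,\text{Gbps}:\ 16 \times 10 = 160\ \text{GB/s}$
        $256\text{-bit GDDR5 @ }7\,\text{Gbps}:\ 32 \times 7 = 224\ \text{GB/s}$

    And a 14Gbps 5X grade on 128-bit would match 256-bit GDDR5 exactly (16 × 14 = 224 GB/s) with half the memory traces and a cheaper PCB.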
     
    Last edited by a moderator: 26 Jan 2016
  16. edzieba

    edzieba Virtual Realist

    Joined:
    14 Jan 2009
    Posts:
    3,909
    Likes Received:
    591
    I'd take a truckload of salt with that estimated timeline. It implies that Nvidia would reverse their entire bottom-up development philosophy with Pascal, which seems pretty unlikely given their recent emphasis on in-car systems (which need mid/low-end GPUs, the -PX vision module aside) and their slowly increasing Tegra sales. It's a development process that has had them consistently beating AMD on perf/watt too, so I can't see them abandoning it.
    Personally, I'd bet on seeing mid/low-end cores using GDDR5X first, before we see anything that uses HBM2.
     
  17. Parge

    Parge the worst Super Moderator

    Joined:
    16 Jul 2010
    Posts:
    13,022
    Likes Received:
    618
    Lovely find Bindi. Not much of that seems unreasonable, but equally it could all be educated guesswork.

    If the above is the case, there are two elements of disappointment for me personally. Firstly, the lack of HBM2, which is a shame because, as you say, it limits the likelihood of small cards. Secondly, the xx80 Ti release schedule! Not many of us can afford a Titan, so I was hoping for a card that could compete in 2016. We all know what terrible value the 980 (and Titan) turned out to be when the 980 Ti came around (and even before that, when it was just up against the 970).

    Early indications suggest, then, that we may have to look to AMD to provide us with a top-tier part at a reasonable price. Hopefully they deliver. :eyebrow:

     
  18. Guest-16

    Guest-16 Guest

    Tegra in consumer devices is dead. It became their automotive stuff.

    Launching with Pascal GP100 is the biggest potential PR win and the smallest risk: low volume, highest ASP, their own design, and it will drive demand for the GP104 releases in mid-year. Short term, only having to satisfy the few gives valuable learning and driver development ahead of the higher-volume releases, and being able to quip "we shipped 16nm parts early in Q2 and they won X awards, with increasing ASPs" looks great on investor relations calls.

    Parge: could be that the *80 will still be a significant enough upgrade. AMD... who knows. With them going 14nm GloFo, no idea when they'll arrive. If they can't match Nvidia's launch - whatever it is - that'll be a huge, huge loss in itself. I hope they're not counting on Fiji to be the stop-gap.
     
  19. rollo

    rollo Modder

    Joined:
    16 May 2008
    Posts:
    7,887
    Likes Received:
    131
    Titan will launch before any Ti version of the card - that's not really guesswork, it's been that way for a while now: they milk it for six months and then we get a Ti version.

    Really, though, it depends on what performance gains come from two process shrinks. A 780 Ti was faster than the 980 in most titles.

    There's no guarantee, with the change of process node, that a 980 Ti will beat a 1080 or whatever it's called.

    Most are expecting a doubling of performance at the top end.

    If AMD are first to market, expect price gouging - either way, no matter who launches first, we are going to be ripped off. It's only when both are out that prices will settle.

    It'll maybe be early next year before prices settle to a level most are willing to pay. Most assume the 970 equivalent will be the best price/performance Nvidia card.

    I wonder what price points we'll see this generation. I'd think a Ti version will be above £600, with the Titan around the £1000 mark.

    The Pixel C launched recently with Nvidia's CPU/GPU in it, which is Tegra. To say it's dead is weird when it's the fastest-performing Android tablet out there.
     
  20. Guest-16

    Guest-16 Guest

    OK, very limited projects. Certainly nothing mobile. Even its own tablets are tiny, tiny, tiny volume. I haven't seen a roadmap beyond current parts.
     
