After Twitter discussions, the conclusion is that this is maybe GP206, if it's really a Pascal GPU and not a Maxwell stand-in. In that case:
>It confirms continued use of GDDR5(X) for mainstream, not HBM2. Unlikely to be optional/switchable due to the massive difference in materials and construction.
>TFLOPs will double for the x50/x60 mainstream range of GPUs, from ~2 to 4 (going by Nvidia stating their product has 8 TFLOPs over two chips). If Nvidia is spec'ing a GTX 970 as a minimum for VR now, you'll be able to do it on a x60 or maybeee x50 next gen. A x60 could be baseline 4K capable too, certainly in SLI.
>8 (maybe 16) memory chips, likely still a 128-bit bus but could potentially be 256.
>"250W" could mean GPU only or GPU+CPU; unknown. Either way it means 125W or less per GPU.
Original article/picture source: http://www.engadget.com/2016/01/04/nvidia-drive-px2/
Also we're speculating that these could be GPU-less or GPU-disabled Tegras. Not sure yet - Nvidia didn't detail. The Tegra/Pascal combo could be used in Tesla accelerators as an IO/OS host controller, like Intel is doing with Phi.
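For what it's worth, the napkin maths behind the TFLOP and power points above works out like this (all figures are the thread's estimates and Nvidia's quoted board numbers, not confirmed per-GPU specs):

```python
# Napkin maths from the speculation above - estimates, not confirmed specs.
drive_px2_tflops = 8.0    # Nvidia's quoted figure for the two-chip board
gpus_on_board = 2
per_gpu_tflops = drive_px2_tflops / gpus_on_board    # ~4 TFLOPs per GPU

current_x60_tflops = 2.0  # rough ballpark for a current mainstream x60 part
uplift = per_gpu_tflops / current_x60_tflops         # ~2x generational jump

board_power_w = 250       # quoted "250W"; unclear if GPU only or GPU+CPU
per_gpu_power_w = board_power_w / gpus_on_board      # 125W or less per GPU

print(per_gpu_tflops, uplift, per_gpu_power_w)
```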
Even though it's Pascal, I don't think any conclusions can be drawn, as it's being used for a very specific purpose. From my understanding, both AMD's and Nvidia's new GPUs are going to be capable of being used with either GDDR or HBM.
Looks like both Nvidia/RTG are doing the same thing then - GDDR5 for mainstream, HBM2 for high end. And perhaps a Q3 launch for both high-end parts too. RTG have already confirmed that it'll be a mid-year launch for the lower-tier GDDR5 parts, with the high end to follow. How long was the Fury X after the 380/390 etc.?
I'm hoping AMD are first to the punch this time. They've been beaten ever since the 5870. Corky - IIRC it was about 4-6 weeks. It was delayed ever so slightly and was about another month before I got mine.
To be honest, I don't see anything in that article that would suggest the next lineup of GPUs will be limited to GDDR5X for mainstream cards. The whole article, from what I can see, just talks about the use of nVidia GPU cores for self-driving cars and not for gaming cards. I mean, the headline etc. kind of points at what it's about.
I see the fact that they're using laptop-style AIBs as an indication that not only is this solution flexible, they want to save development and QC time by vetting fewer SKUs. Using the same component in as many markets as possible saves costs. AMD/NV will vary depending on TSMC or GloFo, but both AMD's and NV's high end will rely on HBM2 availability to some extent. Volume manufacturing and time to market will dictate the product, I'm sure. Damien - there's nothing out there regarding gaming cards. I was extrapolating theories with some other analysts based on what we know is technically possible and business-likely. I was just using the article for the picture. You won't get HBM2 on mainstream; I will put all my chips behind that.
Yea, fully agree that we won't get HBM on mainstream - at least I would be shocked if it wasn't reserved for high end only. That doesn't mean they're going to have two separate fabrication processes, one for HBM and another for GDDR. What seems more likely is that they've both designed the silicon with either two memory controllers or a single MC able to address both types of RAM; that way they can divide up each wafer as appropriate. That's unless I've got it wrong on when the chip/silicon meets the interposer.
Given the assumption that HBM/2 will not be mainstream, I fully expect a new top tier and a massive re-branding of the current line-up to suit.
I think AMD are throwing HBM2 at the next gen of APUs mainly - it would make sense, as the current APUs are bandwidth starved (and show linear scaling with faster RAM).
AMD have priority access to SK Hynix's HBM2 chips - but SK Hynix are not the only ones making them now
Sorry, I jumped to the conclusion that you were using the article to say there will not be HBM2 on the mainstream cards - my mistake, apologies. I think you are right though; I don't think HBM2 will be on anything below whatever replaces the GTX 980 Ti & Titan X, but we could all be surprised and nVidia may put it on whatever replaces the GTX 970 and just use a lower amount of it on that card. Either way, I won't be upgrading to either AMD's or nVidia's next cards, but will be happy with my GTX 980 Tis tomorrow when they arrive.
Technically it could be the same memory controller, but the physical design is different. The interposer is part of the packaging process, which in turn is (AFAIK) closely aligned to fabrication due to the layout of the microchannels. It's a 1024-bit bus for HBM vs 256-512 in GDDR. It costs a lot to make two versions of the same thing, so HBM2 would have to be in verrrry short supply to push them to develop a second run without an interposer. *If* NV follow their current naming split: 'GP204'/'GP1x0' will be HBM2 and 'GP206' should be GDDR5(X). The new X spec will offer ~30-50% more bandwidth on the same bus, but these should certainly be interchangeable at the AIB level between GDDR5 and 5X (or maybe Micron rebrands it to GDDR6?).
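To put rough numbers on the bus-width point: peak bandwidth is bus width times per-pin data rate. A sketch with ballpark per-pin rates (the 7/10/2 Gbps figures are my assumptions for illustration, not confirmed specs):

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps_per_pin):
    """Peak memory bandwidth: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps_per_pin / 8

# Ballpark per-pin rates - assumptions for illustration only:
gddr5  = bandwidth_gb_s(256, 7)    # 256-bit GDDR5 @ ~7 Gbps  -> 224 GB/s
gddr5x = bandwidth_gb_s(256, 10)   # same 256-bit bus, GDDR5X @ ~10 Gbps -> 320 GB/s
hbm2   = bandwidth_gb_s(1024, 2)   # one 1024-bit HBM2 stack @ ~2 Gbps -> 256 GB/s

print(gddr5, gddr5x, hbm2)
```

That's how GDDR5X lands in the ~30-50% uplift range on the same bus, and why a single wide HBM2 stack at a much lower clock can still match or beat a full GDDR5 setup.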
Not disagreeing, but can anyone provide more details on when and how the GPU, interposer, and HBM come together? I have a rough understanding of it, so maybe someone can shed some light. To me it seems like all three main parts are made separately and assembled in the later stages - maybe I've got that wrong though, so I'll do some more reading.
Bummer. http://www.anandtech.com/show/9903/nvidia-announces-drive-px-2-pascal-power-for-selfdriving-cars
How does this impact you if you have a 980 Ti? Will there be a 980 Ti killer released this year that owns it in high-end VR performance? I'm getting used to buying and selling GPUs, and losing £200 each time, so why stop this year?