https://wccftech.com/nvidias-upcoming-lineup-will-be-called-geforce-gtx-11-series/ To be revealed at their GTC 2018 keynote, starting at 9am PDT (5pm BST). (10-series GPUs are now all out of stock at Nvidia.com.)
Bar an hour or two every few weeks, the GPUs for sale on Nvidia's website are always out of stock, simply because it's the only place to buy them at RRP (hence the limit per customer). I look forward to the 11 series, however. Glad they're calling it the 11 series instead of 20. 20 just seemed silly.
Ah, good old WCCFTech. As long as you regularly keep announcing "X will be revealed tomorrow" then eventually X will be announced and you can point to your article from the previous day and yell "Hah-HAH! We were right!".
To be fair, it was Tweaktown who allegedly confirmed it, although I can neither confirm nor allege that.
Firstly, I'm beginning to think consumer-grade GeForce products are way down Nvidia's to-do list just now. Secondly, if they're released in small quantities, the miners will just send prices straight into the stratosphere all over again.
Yup, you can forget it for the foreseeable future. I would imagine they want to launch their next gen on GDDR6, which Samsung has only just started producing, according to something I read today. I would say summer now. Which, hey, I'm not complaining too much about, because this is the first time *EVER* I have bought a GPU and had it still be worth the same or more than I paid for it a year later.
Yes. They own the market, AMD have already publicly said next-gen Vega isn't coming before Q4, and everyone is selling everything they can make, so why bother? Both of them are tiptoeing on GPU orders in case the mining market suddenly bottoms out and they're left with a shitton of unsold stock competing with all the second-hand, ragged GPUs flooding the market. In the commercial and HPC markets they make an absolute ****ton of profit, so that's where they are focusing their talent now. For them, as for Intel, the PC market is just for harvesting.
Err... Intel made $34 billion revenue from its Client Computing Group compared to $19.1 billion from its Data Centre Group in FY2017, which if you scroll down becomes $12.919 billion CCG and $8.395 billion DCG operating income. CCG is Intel's bread and butter; it makes significantly less from DCG. It's the same for Nvidia, too: sure, the margins in data centre are higher, but the volume's in consumer. It's even worse for AMD, which barely has any skin in the data centre game anyway relative to Intel and Nvidia - consumer's the lifeblood for AMD.
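A quick sanity check of the FY2017 figures quoted above bears this out: data centre margins are indeed higher, but CCG still brings in more absolute operating income. A minimal sketch (figures from the post above; the margin percentages are my own derived rounding):

```python
# Back-of-envelope check of Intel's FY2017 segment figures quoted above.
# Operating margin = operating income / revenue, per segment (all $bn).
ccg_revenue, ccg_income = 34.0, 12.919   # Client Computing Group
dcg_revenue, dcg_income = 19.1, 8.395    # Data Centre Group

ccg_margin = ccg_income / ccg_revenue    # roughly 38%
dcg_margin = dcg_income / dcg_revenue    # roughly 44%

# Higher margin in DCG, but more absolute operating income from CCG.
print(f"CCG margin: {ccg_margin:.0%}, DCG margin: {dcg_margin:.0%}")
```

So the margin gap is real, but it doesn't come close to offsetting the volume gap.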
If I were Nvidia, I'd be holding off on releasing any cards until it was actually needed, and instead spend time and money stockpiling as much GDDR6 as the supply chain can pump out. That's the critical-path limiting factor at the moment - even over fabbing the GPU dies themselves - so having more on hand means a larger launch volume can be produced even if they hold off on actually fabbing any GPU dies for a few more months. Why not demand more GDDR5/5X instead? Because that would end up locking you into using it for future cards, or you'd end up with a stockpile of RAM nobody wants. It would also mean fabs keep lines open making RAM you don't want to use in the future (delaying the shift of production lines to GDDR6), whereas if you can get them to move to GDDR6 early, you can to some extent amortise the line startup cost over a larger batch of orders rather than having a high initial price that drops off over time.
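The amortisation argument is just fixed cost spread over batch size. A minimal sketch with entirely hypothetical numbers (the $50m startup cost and $5 marginal cost are illustrative, not from any supplier):

```python
# Illustrative amortisation of a memory production line's startup cost.
# per-chip price = fixed startup cost spread over the batch + marginal cost.

def per_unit_cost(startup_cost, marginal_cost, units):
    """Effective cost per chip for a batch of the given size."""
    return startup_cost / units + marginal_cost

# Hypothetical: $50m to bring a GDDR6 line up, $5 marginal cost per chip.
small_batch = per_unit_cost(50e6, 5.0, 10e6)   # $10.00 per chip
large_batch = per_unit_cost(50e6, 5.0, 50e6)   # $6.00 per chip
print(small_batch, large_batch)
```

A 5x larger committed order nearly halves the effective per-chip price in this toy model, which is the incentive to place big early GDDR6 orders.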
Even worse, next-gen Vega in Q4 will not be a consumer product; that is just when the enterprise version will start sampling to select customers in small numbers. Miners will probably not slobber too much over anything new from either Nvidia or AMD: https://www.cnbc.com/2018/03/26/ana...ew-cryptocurrency-mining-chip-from-china.html While there are plenty of non-Eth coins to mine on GPUs, the GPU-mining refugees from Eth will drive difficulty into the sky and render them unprofitable to mine.
It's a harvest industry that's stable-to-declining. PC is not growth, it's not fashionable, so don't invest in it - that's why since Sandy Bridge the platform has only had iterative updates. Revenue is not ASP, and investors will only ask about margin because it yields dividends. All big volume does is fill your fab utilization, which in turn helps fund the growth projects. Nvidia is doing the same with its consumer cards while it focuses on HPC and newer markets like automotive. MediaTek had its best ever year by revenue in 2016 and the markets hated it because its margins dropped from 40+% in 2014 to the low 30s. AMD literally cannot make enough product; that's its biggest issue. It has second-tier status by volume at TSMC and really should move its GPU manufacturing over to GloFo, but GloFo isn't cutting it at the same pace as TSMC and they can't afford to be behind (well, they already are...). This is why Lisa Su is playing down AMD's exposure to miners by saying it's a single-digit % of buyers, although most people expect it to be much higher.
It's just the continued cycle of thick-client-thin-client that's been going on since the inception of computing. We're in a 'thin client' swing right now (OMG CLOUD! PUT ALL THE THINGS IN THE CLOUD! GAMING IN THE CLOUD! RENDER IN THE CLOUD!), but I expect we're nearing the end from a combination of bandwidth (and latency) starting to hit limits of existing installed capacity, societal pushback of "we put all our data in the cloud, but who is this cloud anyway?", and a new spread of high-workload-latency-sensitive applications perfect for thick clients (VR and AR).
Not this time, because 5G will specifically see to that. Plus the investments in AI are being made in edge (IoT, smartphone, automotive and hybrid edge/cloud), not PC. PC will remain the creative area for professionals, business and prosumers. Unfortunately gaming is still a small % of total sales. Game streaming and the next gen of SaaS will begin to take off with 5G as e2e latencies come down and the entry-barrier cost gets lower, especially if this trend of high component costs doesn't break. I co-authored a whitepaper on 5G use cases. There's a lot of positive expectation cross-industry. In some cases, where there's line-of-sight and poor broadband, you might even get 5G mmWave replacing your landline entirely.
Yeah, all this is dependent on data caps. Actual session bandwidth is no more or less than Netflix etc., because downstream it's basically a compressed video stream, and upstream is little more than controller input (~USB 2-era rates) and a mono microphone. It's e2e latency that's the issue, so it certainly won't displace the global console roll-out quickly. 5G is long-term: 10 years. 4G started ~8 years ago.
The big problem is that many mobile providers are also home broadband providers and they will not be willing to cut into their own flesh, so I fear we will see a continuation of the trend of mobile broadband being artificially crippled.
5G is not even CLOSE to sufficient for upcoming applications. To put things in context, current wired dedicated networks are not sufficient for adequate VR streaming; it's just too latency-sensitive a task. The latency issue already precludes any sort of even vaguely efficient video encoding (you're stuck with schemes that compress line-by-line rather than frame-by-frame). And that ignores bandwidth. For context, current HMDs could saturate a 10GbE link even if latency were acceptable. For upcoming HMDs with higher refresh rates and higher resolutions, you're looking at multiple tens of gigabits flying about. And if you look ~5-10 years out, where AR starts to become viable, acceptable round-trip latency drops from 20 milliseconds to a handful of microseconds. That is already a challenge for anything other than dedicated hardware sitting right next to the display, let alone expecting to stream it over a shared network.
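The bandwidth claim is easy to sanity-check with raw pixel arithmetic. A minimal sketch, assuming a 2016-era HMD panel (2160x1200 combined, 90 Hz, 24-bit colour) and a hypothetical next-gen headset (4K per eye, 120 Hz, 30-bit colour) - these figures are illustrative assumptions, not specs from the thread:

```python
# Rough uncompressed-display bandwidth estimate for head-mounted displays.
# Ignores blanking intervals and link overhead, so real links need headroom.

def display_gbps(h_px, v_px, refresh_hz, bits_per_px):
    """Raw pixel bandwidth in gigabits per second, uncompressed."""
    return h_px * v_px * refresh_hz * bits_per_px / 1e9

# 2016-era HMD: 2160x1200 combined panel, 90 Hz, 24-bit colour.
current = display_gbps(2160, 1200, 90, 24)      # ~5.6 Gbit/s raw

# Hypothetical next-gen HMD: 4K per eye, 120 Hz, 30-bit colour.
future = display_gbps(3840 * 2, 2160, 120, 30)  # ~60 Gbit/s raw

print(f"current-gen: {current:.1f} Gbit/s, next-gen: {future:.1f} Gbit/s")
```

Even the current-gen figure leaves little headroom on a 10GbE link once link overhead and blanking are added, and the next-gen estimate lands squarely in the "multiple tens of gigabits" range.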