Edit: moved from my last post, as it's not fair to make it look like I'm directing this ire at @ModSquid!

A single B200 chip has been priced at $30,000 to $40,000; there's also the GB200 "superchip", which pairs two B200 GPUs with a Grace CPU. Nvidia are far more interested in selling complete "AI" systems than individual cards. Their DGX GB200 NVL72 is a full rack containing 36 GB200 "superchips" (72 B200 GPUs plus 36 Grace CPUs in total), with an overall power budget of up to 120kW. It is competitively priced, of course, at $3,000,000 per system. Or, if sir prefers, up to 8 of these racks can be combined into a single "SuperPOD" - and if sir has to enquire as to the cost of such a "SuperPOD", then I'm afraid sir may not be quite the discerning customer Nvidia is looking for.

Not hard to see why they're quite happy to shaft gamers in favour of selling frankly insane megawatt-scale, $24,000,000+ systems by the boatload to the likes of Microsoft, Google, Amazon, Facebook, etc. This is what "AI" really means: rack systems that cost $3,000,000 and consume 120kW of power, in data centres stuffed with hundreds of such racks, so that OpenAI and their cohorts can sell you an algorithm that lies and makes sh*t up. And Nvidia are at the heart of it all, making amounts of money that frankly no one can truly comprehend.
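For a sense of scale, here's a quick back-of-the-envelope sketch using the per-rack figures above (the per-rack price and power budget are the quoted estimates, not official Nvidia specs, so treat the totals as rough):

```python
# Illustrative back-of-the-envelope arithmetic, using the per-rack figures
# quoted above (the post's own estimates, not official Nvidia specs).
RACK_PRICE_USD = 3_000_000   # quoted price per DGX GB200 NVL72 rack
RACK_POWER_KW = 120          # quoted power budget per rack
RACKS_PER_SUPERPOD = 8       # up to 8 racks per "SuperPOD"

superpod_price_usd = RACK_PRICE_USD * RACKS_PER_SUPERPOD
superpod_power_kw = RACK_POWER_KW * RACKS_PER_SUPERPOD

print(f"SuperPOD price: ${superpod_price_usd:,}")   # $24,000,000
print(f"SuperPOD power: {superpod_power_kw} kW (~{superpod_power_kw / 1000:.1f} MW)")  # 960 kW (~1.0 MW)
```

So a full 8-rack SuperPOD works out at roughly $24 million and around a megawatt of IT load (before cooling and the rest of the facility overhead).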
They are hardly shafting gamers; there's just very little need for more GPU power than what already exists in the gaming space. Most people use sub-1440p, mid-range gear, and the high-end gamer isn't really something you want to sustain your business on, is it? Every company wants growth, and where's the growth driver in the gaming market? The last big driver of GPUs wasn't games, it was mining. Still, 2.6 billion is hardly small change - they're not doing badly with it - but data centre is going gangbusters right now because it has a growth driver. I'd say it's somewhat overblown, but hey, make hay while the sun shines etc. I'm in optical comms, so long may it continue.
Interestingly, Battlemage support has begun landing in Linux - https://www.phoronix.com/news/Intel-Battlemage-Goes-Mesa-24.2
I CBA with the point-by-point stuff these days, but this...: ... is the heart of the issue. It's not just about making an obscene amount of money, it's about this year's obscene amount of money being even bigger than last year's obscene amount of money. It's the governing principle of the vast majority of publicly listed companies: making a profit simply isn't enough if that profit doesn't exceed last year's profit. Nvidia know where the profit is, and that's in hawking obscenely priced systems in the "AI" market. Who cares about innovating and improving your products when you've got racks costing $24,000,000+ to sell? Which is why, as I keep saying, the market needs competitors, like Intel. That's good to see, even though it's very, very early days. I do respect the amount of work Intel's Arc software/driver teams have had to put in; they're going up against competitors with, quite literally, decades of experience in graphics drivers.
Line go up - good. Line go down - bad. Hoping for Intel to help break a monopolistic industry stranglehold... who'd have ever thought...
Asrock have put Nvidia's 12V-2x6 power connector onto a Radeon GPU https://www.techpowerup.com/323054/...00-graphics-card-with-12v-2x6-power-connector
Nvidia finally admit that GPUs are getting too chonky and release an "SFF-Ready" standard (but it's still quite chonky!) https://www.techpowerup.com/323092/...enthusiast-geforce-rtx-graphics-card-standard
Not sure which definition of SFF they're using; it certainly looks chonky in every dimension. 2.5 slots is a non-starter for many cases, notably the much-appreciated Dan A4-SFX.
I was agreeing with you and questioning their SFF-Ready status, particularly as the article says they're not aiming for ITX and it's for the 4070 upwards. As someone else pointed out, there were small 1080 Ti models, which still take some beating.