Discussion in 'Article Discussion' started by bit-tech, 3 Dec 2018.
It appears that, as with the Titan V, the 'GeForce' moniker has been omitted and 'Titan' is now its own distinct line (alongside GeForce, Quadro, etc.).
So it has. So, that gives us:
Titan: PEOPLE WITH MORE MONEY THAN SENSE.
Titan: For when GeForce isn't fast (read: MOAR RAM) enough for your tasks, but Quadro/Tesla's ECC features are unnecessary and you want to save a few bucks.
Doesn't Titan still come with artificially-crippled double-precision float compute, just like GeForce? Not to mention the EULA which says you can't use them in data centres...
The Quadro RTX cards have the 1:32 FP64 rate too. You wouldn't be putting Quadros in a datacentre either; that's what Teslas are for. Quadro (and now Titan) are for 'desktop' (or Big Box Under Desk) workstations or local render boxes.
£400 less than the Titan V? Colour me surprised...
(not like I'd buy one, but good to see Nvidia has not jacked the price up even further)
Now they’re seriously just taking the P. As if the price tags of the new cards weren’t ridiculous enough already.
f**k April 1st, for comedy we have Nvidia !
Definitely getting 2 then. Be rude not to and a snip at £5k.
Over two grand. Hahahahahahaha
For example: nearline unbiased path-tracing previews. You're working on a scene for a feature film that will take x hours per frame to render in full unbiased pathtracing on your render farm. You want to make sure your shot looks good, doesn't have any weird reflections or highlights in awkward places, etc. Your choices are:
- Offload frame to render farm and render it 'for real'. Long round trip time between tweaks and results
- Render locally using the pathtracer (e.g. on a non-RTX GeForce, Quadro, or Tesla) at a drastically reduced resolution to bring render times down. Miss almost all detail
- Render locally using a rasteriser (e.g. on a non-RTX GeForce, Quadro, or Tesla). Don't get accurate results compared to the path tracer
- Render locally using a GeForce RTX and a small subset of the scene to get it to fit into memory. Don't get accurate results compared to the full scene.
- Render locally using a Quadro RTX and the full scene, using denoising to bring render times down. Even 1fps gives you a real-time preview for scene changes.
- Render locally using a Titan RTX and the full scene, using denoising to bring render times down. Even 1fps gives you a real-time preview for scene changes.
The latter two are where the Titan makes sense, as the Quadro costs ~£4k more for no actual benefit. A hypothetical as-yet-unannounced Tesla RTX (let's call it the Nvidia PWNs the Cinematic Render Market Edition) would have a similar issue for nearline use: cost increase without performance gain.
I thought the 2080Ti was priced so heavily because it was the new Titan?
The Titan is dead, long live the Titan.
Oh nvidia, by the way, jog on.
Edit: Want people to take a new technology on, release the Titan and drop the price on the rest of the 20** series.
I took your comment to mean you can use the Titan where you would normally use a Tesla, so long as you don't care about ECC; now you're saying it only makes sense as a replacement for the Quadro.
Oh, god, can you imagine how the early adopters who have actually bought the 2000 Series would react? There was enough salt surrounding the post-launch free game bundle, ne'er mind a price cut!
Maybe a tl;dr would help:
Workstations/nearline: You could find here GeForce, Quadro, Titan, or Tesla (depending on application)
Datacentre: Here be Teslas, unless ye be Doing It Wrong (or doing something very silly like game streaming).
The same as every other time the release of a new top-end card coincides with a drop in price of the existing cards from launch pricing? That is, well-practised wailing, met with little sympathy.
The only people using Teslas in workstations are using them for GPGPU compute stuff where double-precision performance is king, which is why they're not using Quadros or GeForces (or, indeed, Titans.)
I pointed out that there's more to switching from a Tesla to a Titan than losing ECC, which there very much is. Feel free to continue to argue the point, but I'll warn you now you'll be doing it to the equivalent of a dial tone 'cos I'm done.
From what I've read it doesn't (re: the crippled FP); it's basically a workstation version of the Quadro RTX 6000, from what I can tell.
Apart from the Teslas without high FP64 performance, e.g. the Turing T4 (which I'd forgotten had been announced now), the Maxwell Teslas, the 'low end' Pascal Teslas, etc.: basically anything other than GP100 and GV100, where beastly half-rate FP64 was introduced.
Heck, I have a Tesla K80 sitting on my desk as an ornament (can't leave the building other than via the shred bin) pulled by my own hands from a decommissioned workstation. Teslas do indeed end up outside the datacentre for non FP64-heavy workloads.
Pretty much just a Quadro RTX 6000 with a Goldfinger-plated cooler from a 2080Ti.
A quick Google found me more detailed specs than Nvidia's giving out, which say 0.51 TFLOPS double-precision: that's considerably lower than the Tesla V100 (7 TFLOPS) and even the Titan V (6.9 TFLOPS), only a bit better than the GeForce RTX 2080 Ti (0.44 TFLOPS), and way, *way* less than the 16.3 TFLOPS given here for the Quadro RTX 6000 - my bad, I was looking at FP32, not FP64: the Quadro RTX 6000 is 0.51 TFLOPS FP64, same as the Titan.
So, looks like it does have the same artificial compute performance limitation range -
unless I'm missing something, which given I've only spent a couple of minutes looking into it while I'm supposed to be doing something else is always more than possible!
EDIT: The thing I was missing was reading the FP32 figure for the Quadro, not the FP64 figure! It is indeed the same as the Quadro 6000, which is considerably slower than the Volta-based Teslas.
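For anyone sanity-checking those figures, the quoted FP64 numbers are just the published FP32 peaks divided by Turing's 1:32 rate. A quick sketch (the FP32 TFLOPS values below are approximate published figures, so treat them as assumptions):

```python
# Rough check that the quoted FP64 figures match Turing's 1:32 FP64:FP32 rate.
# FP32 peak TFLOPS below are approximate published figures (assumptions).
cards = {
    "GeForce RTX 2080 Ti": 14.2,
    "Quadro RTX 6000": 16.3,
    "Titan RTX": 16.3,
}

FP64_RATIO = 1 / 32  # Turing's artificially limited FP64 rate

for name, fp32 in cards.items():
    fp64 = fp32 * FP64_RATIO
    print(f"{name}: {fp64:.2f} TFLOPS FP64")
```

That reproduces the ~0.44 TFLOPS for the 2080 Ti and ~0.51 TFLOPS for the Quadro RTX 6000 and Titan RTX mentioned above.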