I think we could discuss RTX forever, but I imagine we're both gonna see things from different points of view. I do see where you're coming from though, and I understand what you're getting at with your analogies. I still think they've left it ambiguous (maybe deliberately so?), both in their presentation and dev page. Grab yourself all the chicken soup Gareth!
Oh, aye, they're being deliberately cheeky, no question about that. Chicken soup for the soul! L'chaim!
Doesn't make having a presentation where half the list of RTX games only uses a tiny fraction of it any less shady, especially when mixed with the whole preorders and no reviews yet thing....
I agree with you, but AMD did the same with Ryzen: pre-orders before reviews. I think it's just the way business is handled now. With this price hike, though, I'm officially boycotting Nvidia. I was considering a new build based around an 8700K and a 2080 Ti, but a £200 hike over the 1080 Ti with no reviews yet..... it's roughly a 25% price increase. They did it with the 1080 too, and I said as much then; the problem then was I didn't want to sell my G-Sync monitor. Well, now I have, and I would rather CrossFire two Vega 64s for less than a 2080 Ti costs than give Nvidia any more of my money.
Not really. Compute Unified Device Architecture 'cores' is just Nvidia's name for the programmable parallel processors. In the GeForce range they're mainly used for graphics-related arithmetic; in the professional range of cards they can be tasked with computational work other than graphics. The general-purpose processing that graphics processor units get used for outside of graphics is just a variation on the arithmetic needed for graphics work. CUDA is just Nvidia's fancy name for programmable parallel processors; there was 3D acceleration before CUDA, after all, and programming the parallel processors to do other stuff came afterwards. The requirement for the programmable parallel processors to do more than FP32 calculations adds complexity but isn't much use for graphics workloads, hence why it seems to me that all these extra features have been added to graphics processors so they can work better on non-graphics work. That is, unless, as I suspect RTX is attempting, those extra but underused (in the GeForce range) features can be put to good use in graphics workloads. That doesn't sound good, hope you feel better soon.
I'm entirely happy to assume I'm being stupid owing to either my illness or just the fact that I'm stupid, but as far as I'm aware this isn't right. The CUDA cores are what Nvidia called "unified shaders" or "scalar processors", which replaced the single-purpose vertex and fragment shader hardware of everything up to and including the GeForce 7 series. When CUDA came around, the cores didn't stop being unified shaders; CUDA simply repurposes them for new workloads. Because Nvidia knows the power of branding, and because it's the only one that offers CUDA support, the "unified shaders" became "CUDA cores." The same thing applies, though: take the CUDA cores away from any Nvidia graphics card from the GeForce 8 series upwards and it'll no longer do any 3D acceleration, because it won't have any unified shaders any more - and it already lost its vertex and fragment shaders, so it's now trying to do 3D rendering with no shaders at all. Which, y'know, won't work. (If you want to get even more technical, no current Nvidia graphics card has any shaders at all: it has scalar processors which can run shader or CUDA workloads, and which Nvidia calls "CUDA cores.")
I get the impression we're talking about the same thing. Yes, it went from vertex and fragment processors, to unified shaders, to CUDA 'cores', but in my eyes they all trace their heritage back to accelerating 3D graphics, which mainly uses specific arithmetical computations; along the way it seems to have picked up a fair amount of computational baggage unrelated to 3D graphics acceleration. That's fair enough, but making a single chip where two-thirds of it isn't being used (*am I using the right word?) probably isn't great. *When I say "isn't being used" I don't mean it's not doing anything at all: if I design an ASIC to perform FP64 calculations and only send it an FP32 calculation every clock cycle, then while it's calculating that FP32 it uses the same resources (power, silicon area, memory bandwidth, etc.) as if I had asked it to do an FP64. Would "underutilised" be more correct?
When the venerable GeForce 256 launched, nobody was doing transform and lighting in hardware, so all that die area was 'wasted', with all existing games using software T&L. PBR didn't kill off texture artists, it just changed the skillset required. Lighting designers exist for IRL lighting of buildings, structures, spaces, etc. Similar skills will be needed for games too.
I've thought about it more on the way home, and I really didn't do a good job of getting what was inside my head out onto the screen. "Wasted" may be the wrong word. Basically, what I was trying to say is that a 'core' inside a GPU, however we want to define that, is designed to process X bits of data per clock cycle, be it 8, 16, 32, or 64 bits; did I get that right? If I design each core to handle 32 bits and I send it 16 bits each clock cycle, then I'm processing half the data but using the same resources as if I had processed 32 bits. I could use rapid packed maths, but that comes with drawbacks. I wonder if I'm making any sense anymore.
Closest I've got is an Ace Rimmer T-shirt, I'm afraid! I think I understand now (the rum's probably helping): you're not saying the CUDA cores are Quadro/Tesla-specific, you're saying that over the years they've been modified to be better suited to GPGPU workloads at the potential expense of traditional GPU workloads. Aye, it's hard to argue against that one, I reckon.
I knew I was being infuriating and more than a bit Tim Nice-But-Dim, but OMG, I've driven you to drink.
When hardware journalism gets as dumb as games journalism: https://www.tomshardware.com/news/nvidia-rtx-gpus-worth-the-money,37689.html
Gamers Nexus tore that article apart on Saturday too. New senior editor, apparently. I don't think he'll be at Tom's much longer...
No tears would be shed if he was kicked to the curb; we certainly don't need the excessive feeding of hype and preorder culture that infests gaming spreading into hardware.
I've got someone at work taking my 1080 for $400, which I will upgrade to a 1080 Ti for an additional $100. They're crazy 'cheap' now. The lack of competition from AMD is really showing us that NVIDIA can charge whatever they'd like, and that they'll sell in the boatloads. It's sad, but that's the way the industry goes. Just look at Intel...