Discussion in 'Article Discussion' started by bit-tech, 18 Mar 2019.
Someone needs to make a version of the old Spanish laughing man about RTX cards and RT cores.
Nvidia: 10 years in the making...
Crytek: WTF were you doing?
I still laugh at Nvidia's "FIRST EVER REAL-TIME RAY TRACING CARDS," with poor Imagination sat in the corner going "err, excuse me?"
Isn't the point of the RTX cards that they have hardware particularly suited to real-time raytracing, not that only RTX cards can do real-time raytracing? If engines have been written so that RT is viable on non-RTX cards then that's great, providing the performance is okay. If RTX cards give better RT performance than equivalent non-RTX cards then they'll justify their existence; if not, they'll be a failure. Either way, we'll all be winners.
So the million dollar question is: how does performance on RTX cards differ from performance on equivalent non-RTX cards?
Nvidia launched the Turing architecture in the Quadro RTX range as 'the world's first ray tracing GPU' - which is wrong 'cos professional-grade users could buy Caustic cards in 2010 and Siliconarts announced its RayCore IP in the same year. Gaming ray tracing, meanwhile, has been 'just around the corner' for a couple of decades now - but Nvidia was the first to get it anything close to mainstream.
If Crytek's telling the truth and it can do RTX-style ray tracing on non-RTX hardware, then that's way more mainstream than Nvidia's offering. At the moment, if you want to use a hybrid ray tracing renderer in a commercial game, you'll be using DXR - and if you're using DXR, you're using an Nvidia RTX card and its RT Cores. It simply won't work on anything else yet.
With the caveat that performance needs to be any good, of course, I'm very intrigued to see how non-RTX raytracing performs.
30fps at 4K, if the video's not a pile of horse apples.
Intel showed off real-time raytracing a decade ago. The hard part is doing a useful amount of raytracing at an acceptable quality without a performance impact on other parts of the rendering pipeline. The same rendering tricks Crytek are applying can also be applied to RTX-accelerated raytracing to further increase performance.
Take a close look at the demo and you can see where Crytek are scavenging performance: ray count drops as local curvature increases (e.g. flat mirrors and puddles get denser rays where the angle is common across all rays); SUPER heavy temporal sampling, as can be seen with the spinning mirrors (like, 7+ frames at least!), with little to no post-filtering, resulting in heavy ghosting; and lots of flat surfaces with overlaid decals (e.g. rain on vertical windows).
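For anyone curious what that heavy temporal sampling actually means: reusing previous frames' ray results usually boils down to an exponential moving average over a history buffer. Here's a minimal toy sketch of the idea - the buffer layout, the alpha value, and the noise model are my own illustrative assumptions, not Crytek's actual implementation:

```python
import random

def temporal_accumulate(history, current, alpha=0.125):
    """Blend the current frame's noisy ray-traced values into the
    accumulated history. A small alpha averages over roughly
    1/alpha (~8) frames: smoother output, but more ghosting when
    the scene changes (e.g. a spinning mirror)."""
    return [(1 - alpha) * h + alpha * c for h, c in zip(history, current)]

# Toy demo: a 4-pixel buffer where the true reflection value is 1.0,
# but each frame's one-ray-per-pixel estimate is noisy.
random.seed(0)
history = [0.0, 0.0, 0.0, 0.0]
for frame in range(60):
    noisy = [1.0 + random.uniform(-0.5, 0.5) for _ in history]
    history = temporal_accumulate(history, noisy)

print(history)  # hovers near 1.0, with the per-frame noise smoothed out
```

The ghosting in the demo is exactly the downside of this trade-off: when the true value changes suddenly, the blend takes several frames to catch up.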
The record-speed turnaround from "nobody even wants Raytracing, it's too slow!" to "Raytracing is easy, any GPU can do it!" is pretty amusing though.
As I understand it, the Crytek demo uses a technique (SVOGI) invented by an Nvidia employee (Cyril Crassin) that was originally planned as the default lighting technique in UE4 but pulled because performance wasn't good enough. So this demo comes out of that same 10 years' work - it just isn't the solution Nvidia chose to go with.