
News Nvidia points to Turing's TSS for VR acceleration

Discussion in 'Article Discussion' started by bit-tech, 26 Oct 2018.

  1. bit-tech

    bit-tech Supreme Overlord Staff Administrator

  2. edzieba

    edzieba Virtual Realist

    It's one of the more interesting techniques, and one that can be applied (albeit without hardware acceleration) to any GPU. Super simplified version:
    In normal rendering, for a pixel covering an object you use that object's UV map to look up the location on the object's texture that the pixel covers, then use the position and angle of the object relative to the lighting in the scene to shade that texture sample (texel), and paste the shaded result onto the screen. When you render the next frame, you repeat the whole texel-lookup-then-shade chain for every pixel.
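A minimal sketch of that conventional path, in Python for illustration only (all names here — `uv_map`, `texture`, `shade`, `render_frame` — are made up for the sketch, not a real graphics API). Every pixel, every frame, does the full texel-lookup-then-shade chain:

```python
def shade(texel, n_dot_l):
    """Toy diffuse shading: scale the texel colour by a single light term."""
    return tuple(c * n_dot_l for c in texel)

def render_frame(width, height, uv_map, texture, n_dot_l):
    framebuffer = {}
    for y in range(height):
        for x in range(width):
            u, v = uv_map[(x, y)]        # which point on the object this pixel covers
            texel = texture[(u, v)]      # texel lookup
            # Full shading runs for every pixel, every frame
            framebuffer[(x, y)] = shade(texel, n_dot_l)
    return framebuffer
```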
    With TSS, once you calculate that shaded pixel you don't just store it in the screen buffer: you also pass it 'back' into a new texture for that object, using the existing UV map. On the next frame, alongside the usual texel lookup you do a second lookup into this new texture. If a shaded texel is already available, the shading stages can be skipped entirely and the existing texel pasted into the pixel buffer. In effect, lighting calculated in advance can be reused for subsequent frames.
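The TSS idea can be sketched by adding a shaded-texel cache keyed by UV (again in illustrative Python; `shaded_cache` standing in for the per-object texture-space buffer). A second render sharing the cache — say, the second eye in VR — skips shading entirely:

```python
def render_frame_tss(width, height, uv_map, texture, n_dot_l, shaded_cache):
    framebuffer = {}
    shades_run = 0
    for y in range(height):
        for x in range(width):
            uv = uv_map[(x, y)]
            if uv in shaded_cache:
                # Second lookup hit: paste the existing shaded texel, skip shading
                framebuffer[(x, y)] = shaded_cache[uv]
            else:
                shaded = tuple(c * n_dot_l for c in texture[uv])
                shaded_cache[uv] = shaded   # pass the result 'back' into the new texture
                framebuffer[(x, y)] = shaded
                shades_run += 1
    return framebuffer, shades_run
```

With a shared cache, rendering the same view twice shades on the first pass and reuses everything on the second.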

    There are some obvious downsides: if you only shade once, that shading is only truly correct for the camera and object (and light, for moving lights) positions it was shaded from. Every other combination of positions will be incorrect to varying degrees. For world lights (or static shading like cubemaps) that incorrectness is going to be fairly low, but for things like specular highlights it will be very obvious the shading is wrong even with small displacements. This is why it can't just be retrofitted to every texture lookup: you want some shading stages to be able to offload their results to TSS (e.g. world lights) and others forced to render every frame (e.g. specular highlights).
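That split can be sketched as caching only the view-independent term while recomputing the view-dependent one each frame (hypothetical names throughout; `n_dot_h` is the half-vector term of a toy Blinn-Phong specular):

```python
def diffuse_term(texel, n_dot_l):
    """View-independent diffuse: safe to offload into texture space."""
    return tuple(c * n_dot_l for c in texel)

def specular_term(n_dot_h, shininess=16):
    """View-dependent specular highlight: wrong if reused from another viewpoint."""
    s = max(n_dot_h, 0.0) ** shininess
    return (s, s, s)

def shade_pixel(uv, texture, n_dot_l, n_dot_h, diffuse_cache):
    if uv not in diffuse_cache:
        diffuse_cache[uv] = diffuse_term(texture[uv], n_dot_l)  # cached once
    d = diffuse_cache[uv]
    s = specular_term(n_dot_h)   # forced to render every frame
    return tuple(dc + sc for dc, sc in zip(d, s))
```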
    It's not just applicable to VR (shading once per frame instead of once per eye): you can also use it to decouple shading rate from frame rate, e.g. shading only every other frame. For complex scenes with relatively static viewing angles (e.g. an RTS, or a fixed-camera side-scroller) this could let you increase shading complexity (e.g. add SSAA) while still keeping the same geometry update rate.
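Decoupling shading rate from frame rate can be sketched by only invalidating the shaded-texel cache every Nth frame, so shading cost is paid at a reduced rate while the frame loop still runs every frame (illustrative Python, assumed names):

```python
def run_frames(num_frames, width, height, uv_map, texture, n_dot_l, shade_every=2):
    cache = {}
    shades_per_frame = []
    for frame in range(num_frames):
        if frame % shade_every == 0:
            cache.clear()            # re-shade only at the reduced rate
        shades = 0
        for y in range(height):
            for x in range(width):
                uv = uv_map[(x, y)]
                if uv not in cache:
                    cache[uv] = tuple(c * n_dot_l for c in texture[uv])
                    shades += 1
        shades_per_frame.append(shades)
    return shades_per_frame
```

With `shade_every=2`, shading work lands on alternate frames only; the in-between frames reuse the texture-space results.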

    If this all sounds familiar, it's because baking lighting calculations into your textures was how things were done before calculating lighting in real time (to allow for dynamic lights) became feasible. That means a lot of the old tricks used to hide this 'hack' could be applicable again, though on shorter timescales: your hacks may only need to hold up for a few frames rather than for the entire game.