
News Nvidia announces Ampere technology

Discussion in 'Article Discussion' started by bit-tech, 18 May 2020.

  1. bit-tech

    bit-tech Supreme Overlord Lover of bit-tech Administrator

     
  2. Monkfish

    Monkfish What's a Dremel?

    Back in 2007, when Nvidia first exposed the number-crunching power of its GPUs for general-purpose computing (GPGPU), it was certainly a wise move, and it opened up a raft of new business opportunities alongside the traditional graphics business. Until recently the two businesses could be satisfied by a single hardware strategy, but moving forward I'm not sure that can continue. The future of A.I. is massive, and although there are some graphical crossovers, such as DLSS, A.I. has far wider applications than graphics alone. Computer graphics, on the other hand, has just moved down a new path of its own with raytracing, which has nothing to do with A.I. What gamers want in the next GeForce GPU is far superior raytracing performance, not far superior A.I. performance. So can Nvidia continue to serve both markets with the same hardware design philosophy? I'm not sure they can.
     
  3. edzieba

    edzieba Virtual Realist

    On the contrary, ML is critical to making real-time raytracing viable. ML is used to filter sparse rays into an acceptable image (denoising), and it is used in path tracing to reduce the ray count by predicting which rays will provide the most useful information before casting them (biased path tracing).
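    To make the denoising point concrete, here's a toy sketch of the idea, and emphatically not Nvidia's actual pipeline: the scene and noise level are made up, and a hand-written feature-guided filter stands in for the learned network that the tensor cores would run.

    ```python
    # Toy illustration of RT denoising: reconstruct a clean image from a
    # noisy, sparse-sample estimate. A cross-bilateral filter stands in
    # here for the learned (ML) filter; everything is invented for
    # illustration.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 64x64 "render": smooth ground-truth radiance plus heavy
    # per-pixel noise, mimicking a 1-sample-per-pixel path-traced estimate.
    x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    clean = 0.5 + 0.5 * np.sin(8 * x) * np.cos(8 * y)
    noisy = clean + rng.normal(0.0, 0.3, clean.shape)
    guide = clean  # stand-in for a noise-free feature (albedo/normals)

    def guided_denoise(img, guide, radius=3, sigma_s=2.0, sigma_g=0.1):
        """Average each pixel's neighbourhood, weighted by spatial distance
        and by similarity in the noise-free guide feature."""
        h, w = img.shape
        out = np.zeros_like(img)
        for i in range(h):
            for j in range(w):
                i0, i1 = max(0, i - radius), min(h, i + radius + 1)
                j0, j1 = max(0, j - radius), min(w, j + radius + 1)
                di, dj = np.meshgrid(np.arange(i0, i1) - i,
                                     np.arange(j0, j1) - j, indexing="ij")
                w_s = np.exp(-(di**2 + dj**2) / (2 * sigma_s**2))
                w_g = np.exp(-(guide[i0:i1, j0:j1] - guide[i, j])**2
                             / (2 * sigma_g**2))
                wgt = w_s * w_g
                out[i, j] = (wgt * img[i0:i1, j0:j1]).sum() / wgt.sum()
        return out

    denoised = guided_denoise(noisy, guide)
    print("MSE noisy   :", float(((noisy - clean)**2).mean()))
    print("MSE denoised:", float(((denoised - clean)**2).mean()))
    ```

    The printed errors should show the filtered estimate landing well below the raw one, which is the whole point: filtering a sparse render is far cheaper than casting enough rays to get a clean image directly.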
     
  4. Monkfish

    Monkfish What's a Dremel?

    I stand corrected. I thought it was the RT cores that were responsible for raytracing performance; I didn't realise the tensor cores played an important role in the denoising. I was under the impression that Nvidia's ML business didn't really need RT cores, and that their graphics business didn't need tensor cores to the same degree as their ML business, hence the increasing bifurcation in hardware strategy.
     
  5. edzieba

    edzieba Virtual Realist

    The RT cores accelerate the actual tracing of rays (BVH tree traversal), while the tensor cores accelerate the filtering of those calculated rays, turning a grainy image made up of the raw samples into a useful one.
    Their datacentre business could probably get away with dropping the RT cores, but doing so would remove the A100 as an option for the offline render market (movie and TV 3DCG), so it makes sense to leave them in. BVH traversal acceleration may also be useful for any HPC task that involves raycasting (e.g. thermal and nuclear simulation).
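    For anyone wondering what "BVH tree traversal" actually involves, here's a minimal CPU-side sketch of it; the structures and names are hypothetical illustrations rather than anything resembling the fixed-function units, but the slab test and the subtree-skipping are exactly the work the RT cores accelerate.

    ```python
    # Minimal BVH traversal sketch: walk a tree of axis-aligned bounding
    # boxes, pruning every subtree whose box the ray misses. Hypothetical
    # structures for illustration only.
    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class AABB:
        lo: Tuple[float, float, float]  # minimum corner
        hi: Tuple[float, float, float]  # maximum corner

    @dataclass
    class Node:
        box: AABB
        left: Optional["Node"] = None   # inner node: children
        right: Optional["Node"] = None
        prim: Optional[int] = None      # leaf: index of its primitive

    def ray_hits_box(origin, direction, box):
        """Slab test: clip the ray against each pair of parallel planes.
        (Production code precomputes 1/direction per ray instead.)"""
        tmin, tmax = 0.0, float("inf")
        for o, d, lo, hi in zip(origin, direction, box.lo, box.hi):
            if d == 0.0:
                if o < lo or o > hi:  # parallel to the slab and outside it
                    return False
                continue
            t1, t2 = (lo - o) / d, (hi - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
        return tmin <= tmax

    def traverse(node, origin, direction, hits):
        """Depth-first walk; a missed box prunes its whole subtree."""
        if node is None or not ray_hits_box(origin, direction, node.box):
            return
        if node.prim is not None:
            hits.append(node.prim)  # leaf: candidate for an exact test
            return
        traverse(node.left, origin, direction, hits)
        traverse(node.right, origin, direction, hits)

    # Two unit boxes under a root; a ray along +x at y = z = 0.5 can only
    # hit the first.
    leaf0 = Node(AABB((0, 0, 0), (1, 1, 1)), prim=0)
    leaf1 = Node(AABB((0, 2, 0), (1, 3, 1)), prim=1)
    root = Node(AABB((0, 0, 0), (1, 3, 1)), left=leaf0, right=leaf1)
    hits = []
    traverse(root, (-1.0, 0.5, 0.5), (1.0, 0.0, 0.0), hits)
    print(hits)  # -> [0]
    ```

    The win is the early-out: a ray that misses a node's box never touches anything beneath it, so traversal cost grows roughly logarithmically with scene size, and that same pruning is what would make the hardware useful for the HPC raycasting workloads mentioned above.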
     
