
News Intel’s vision for ray tracing exposed

Discussion in 'Article Discussion' started by Guest-16, 3 Apr 2008.

  1. Guest-16

    Guest-16 Guest

  2. Flibblebot

    Flibblebot Smile with me

    Joined:
    19 Apr 2005
    Posts:
    4,658
    Likes Received:
    152
    What are the realistic timescales before we see a mainstream graphics chipset capable of raytracing, or even a hybrid raster/ray tracer? Is it likely to be within the next decade, or are we talking long-term research here?
     
  3. TreeDude

    TreeDude New Member

    Joined:
    12 May 2007
    Posts:
    270
    Likes Received:
    0
    Well, more creative freedom is great, but it also means much more work must be done to get the desired result. This is why OpenGL is not used as much as DirectX: it requires more time and effort to get what you want. But OpenGL is more robust and allows for more creative freedom than DirectX (plus you can still use DX alongside OpenGL).

    I don't see ray tracing taking over as the main way to create games any time soon. I think within the coming years we will see the big budget games start using it though. Maybe Crysis 2.....
     
  4. johnmustrule

    johnmustrule New Member

    Joined:
    12 Jan 2006
    Posts:
    345
    Likes Received:
    3
    Although I know it's not a very good comparison, the image quality in 3ds Max when using ray tracing is simply so much higher than the alternatives that it's always been worth it to me to wait a week for a scene to render rather than just a day.
     
  5. zero0ne

    zero0ne Member

    Joined:
    19 Jul 2004
    Posts:
    117
    Likes Received:
    0
    In 5 years we will have processors clocked at ~30GHz going by Moore's law (doubling every 18 months, so 3.3 doublings starting from 3GHz).
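
The doubling arithmetic above can be checked with a short sketch (assuming, as the post does, that clock speed itself doubles every 18 months, which later posts point out it historically did not):

```python
# Project a clock speed forward under a naive "doubles every 18 months"
# reading of Moore's law. This is an illustration of the post's arithmetic,
# not a real forecast.
def projected_clock(start_ghz, years, doubling_months=18):
    doublings = years * 12 / doubling_months
    return start_ghz * 2 ** doublings

print(round(projected_clock(3.0, 5), 1))  # ~30.2 GHz after ~3.33 doublings
```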

    Of course this will probably be slowed down since the major companies are looking less towards clock speed and more towards multi core processors.

    In 20 years we will have a quantum computer (since Moore's law will be failing us at about that point; quantum mechanics suggests this WILL happen), and in 50, that quantum computer could theoretically run at the speed our brain does (though our brain is a learning neural network, and trying to simulate it probably won't happen).

    Check out Michio Kaku for more on these further-reaching points I made; he is one of the better geniuses of our time, able to explain all these crazy topics gracefully in layman's terms. (And don't confuse crazy with untrue: every idea of his comes from one research paper or another on quantum mechanics, string theory, and so on.)
     
  6. wuyanxu

    wuyanxu still wants Homeworld 3

    Joined:
    15 Aug 2007
    Posts:
    10,610
    Likes Received:
    234
    So, in about 10 years' time, we'll probably have to cough up lots of money for a ray-tracing CPU to play ray-traced games smoothly, while also paying through the nose for nVidia's monster graphics card to play traditionally rendered games properly?
     
  7. TreeDude

    TreeDude New Member

    Joined:
    12 May 2007
    Posts:
    270
    Likes Received:
    0
    By that logic we should already have 5GHz+ CPUs, yet we do not. Speed isn't everything; efficiency means a lot more.
     
  8. knyghtryda

    knyghtryda New Member

    Joined:
    2 Jan 2006
    Posts:
    101
    Likes Received:
    0
    The number 1 misconception about Moore's law: it has NOTHING to do with speed. The original observation is that transistor density doubles roughly every 18 months. By doubling density one can normally get ~2x performance, but that doesn't mean clock speed, memory speed, or any other speed will go up in the process.

    As for ray tracing... a hybrid render path would be interesting (more realistic reflections, shadows, and particles come to mind), but remember, games like Crysis are currently very CPU-bound as well as GPU-bound, so speeds will need to increase significantly to really make use of a hybrid render path.
     
  9. Flibblebot

    Flibblebot Smile with me

    Joined:
    19 Apr 2005
    Posts:
    4,658
    Likes Received:
    152
    Moore's law doesn't say anything about speed: it states that the number of transistors on a similar-sized chip will double roughly every 18 months.

    But it's not a law in the strict scientific sense, it's more of an observation.

    EDIT:
    Damn, got there before me!
    Not necessarily true. The trend for processors at the moment is towards multiple cores, so it's not inconceivable to image CPUs with 8 or 16 cores within the next 5 years or so. If that's the case, you could quite easily use one or more of those cores solely for managing your render path.

    Parallelism Cheesecake :D
     
    Last edited: 4 Apr 2008
  10. Hamish

    Hamish New Member

    Joined:
    25 Nov 2002
    Posts:
    3,649
    Likes Received:
    4
    http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3276 (the table on the first page shows this quite nicely):
    original Pentium: 3.1M transistors in 294mm^2
    Penryn: 410M transistors in 107mm^2
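
Working out the density improvement from those two figures (a quick check of the numbers quoted above):

```python
# Transistor counts and die areas from the Anandtech table quoted above.
pentium_transistors, pentium_mm2 = 3.1e6, 294
penryn_transistors, penryn_mm2 = 410e6, 107

pentium_density = pentium_transistors / pentium_mm2  # transistors per mm^2
penryn_density = penryn_transistors / penryn_mm2

print(round(penryn_density / pentium_density))  # ~363x the density
```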
     
  11. wuyanxu

    wuyanxu still wants Homeworld 3

    Joined:
    15 Aug 2007
    Posts:
    10,610
    Likes Received:
    234
    Moore's Law is a marketing tool generated by Intel; Intel put the 18 months there. All Moore did was make a very, very rough observation, and Intel's marketing people did the rest.
    (IET magazine and Wiki both agree on this)
     
  12. E.E.L. Ambiense

    E.E.L. Ambiense Acrylic Heretic

    Joined:
    26 Jul 2007
    Posts:
    2,957
    Likes Received:
    68
    [sarcasm] Didn't you know? By then PC gaming will be dead and it'll all be console-based droll. [/sarcasm]

    :rolleyes: :hehe:
     
  13. Phil Rhodes

    Phil Rhodes Hypernobber

    Joined:
    27 Jul 2006
    Posts:
    1,415
    Likes Received:
    10
    This is a good idea if it can be made to work well - imagine graphics cards with arrays of raycasting processors rather than stream processors.

    The DirectX/OpenGL triangle renderers are very lovely, but the only reason they look as good as they do is that an awful lot of work has been put into a limited number of goals, and that's why computer games are all starting to look the same. I can't be the only person around who remembers the variety of gaming available on the Amiga: top-down games, side scrollers, platformers, and even the beginnings of first-person shooters; it went on and on. These days you have FPS or FPS, and they all have the same effects and tricks because that's what the systems will do.

    In short, they aren't very general. Every time someone wanted to do effect X, be it refraction or anisotropy or glows or whatever, an extension was put in for it. You can't do anything that someone else hasn't thought of. Everything is basically being faked, and there are a million examples of how that can become rather obvious. The best one that comes to mind is the stack-of-planes effect you can get when smoke intersects other geometry. DX10 tries to solve this and is partially successful, but it's a hack on top of a hack to get away with not really doing it properly in the first place. I think you would certainly get fewer of these sorts of glitches (depth-sorting problems with distant objects, intersection issues) with a raytracer.
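
A minimal sketch (my own illustration, not from the post) of why a raytracer sidesteps depth-sorting glitches: each pixel simply keeps the nearest ray hit, so intersections between objects come out right by construction, with no sorting pass over the geometry.

```python
import math

def ray_sphere_t(origin, direction, center, radius):
    """Distance t along a unit-direction ray to the nearest sphere hit, or None."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # quadratic with a == 1 for a normalised direction
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def nearest_hit(origin, direction, spheres):
    # Per pixel, keep only the closest intersection: correct depth falls out
    # of the min(), even for interpenetrating objects.
    hits = [(ray_sphere_t(origin, direction, centre, r), name)
            for name, centre, r in spheres]
    hits = [h for h in hits if h[0] is not None]
    return min(hits) if hits else None
```

For example, a ray down the z-axis against two overlapping spheres always returns the genuinely closer surface, which is exactly the property rasterisers have to approximate with sorting and z-buffer tricks.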

    If Intel are saying that this is not the way to go in the future, I fully agree with them.

    P
     