Discussion in 'Article Discussion' started by Da Dego, 7 Aug 2006.
Crysis doesn't use raytracing
The problem with real time ray tracing has always been that it is totally unsuited to a pipeline architecture. But with multi-core processors arriving, and hopefully multi-core GPUs on the horizon, it would be damn interesting to see what kind of realism could be achieved in games in the future.
Actually, DeX, if you read Intel's whitepaper on the subject, raytracing works VERY well with how processors pipe information. In particular, there is a strong benefit from the way memory is organized into hierarchies on a computer that makes the calculation particularly suitable... far more so than the algorithmic guessing that is today's pixel and vertex GPU pipeline!
Hmm, maybe I should have said unsuited to a GPU pipeline architecture. I'm not sure how CPUs compare to GPUs at ray tracing, but I've heard there's quite a difference between the sort of processing a GPU does to render raster graphics and what would be needed to render ray traced graphics.
Raytracing sounds yummy.
Sounds like Brett knows a thing or two about this, so I'm going to trust my expert and side with him
B.Thomas Inc share price ++
So Mr. Da Dego sah, is this the perfect answer to the question of what we're gonna do with all these parallel processors we're about to end up with? As Mr. Pope says, you seem to know your stuff, so I'm hoping you can gimme an answer on that.
If so, the future of gaming seems extremely simple: a move to raytracing within the next 24 months, with core 3 and up being used specifically for the purposes of the raytracing. I'm assuming that in doing so we could mostly remove the need for a graphics processor? Or at least change the specifics of what it does. My understanding of raytracing is pretty loose, but so many people seem to be talking about how perfect it is for lots of processors. It just seems like what's naturally going to happen is we're going to end up using the cores for spangly graphics.
Expect my answer in a column soon. Needless to say, though, I'm quite excited. The idea of raytracing being soooooo close makes a lot of the recent shake-ups in the industry make a lot more sense. But as both Bindi and Biggles can attest, raytracing has been my hope for the industry for a long time.
Yup, and here's a demo and, soon, the real time ray tracing code: http://blogs.zdnet.com/OverTheHorizon/?p=10
This is confusing to me cuz I would wonder how you mix traditional game lighting techniques with pure raytracing. For example, how would you light a large outdoor scene using this technique? Use standard pre-calculated lighting baked into the textures and just apply raytracing to some items? Or just rain rays from the sun and let the raytracing calculate everything? But wouldn't that kill performance completely? Also, wouldn't you get strange effects if you had the bounces limited to 1 or 2 or so? Items reflected in water, for example, wouldn't show the sun's light on them.
Does it also mean that as a by-product of doing pure raytracing we can get realtime ambient occlusion/color bleeding as the light bounces?
You could do the standard thing of calculating a texture beforehand and painting it on, but it wouldn't be that bad a performance hit to just add the sun as another light source; it's fairly easy to do. In raytracing, for every intersection point with geometry that you want to draw, you calculate shadow rays to every light source, so adding the sun would just be adding one extra shadow ray per intersection. Also, the beauty of raytracing is that large outdoor scenes can actually be more resource efficient than smaller, less complicated scenes: slower to draw, yes, but the cost grows far more gently with scene complexity. This is unlike rasterization, where things get linearly slower and less efficient as scene complexity increases.
Also yes, if you limit bounces like that you may end up with strange effects, but generally the accepted method is to limit it to around 7 or 8 bounces, or until the contribution of the calculated colour of that ray to the final colour of the pixel in question won't be noticeable, whichever comes first.
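To make those two stopping rules concrete, here's a deliberately tiny sketch. It is not a real renderer: the 1-D "scene" where every bounce hits the same mirror, the constant direct-light term, and the names `MAX_BOUNCES`/`MIN_CONTRIBUTION` are all illustrative assumptions, but the termination logic is exactly the "bounce cap or negligible contribution, whichever comes first" idea described above.

```python
# Toy sketch (not a real renderer): a 1-D "scene" where every bounce hits
# the same mirror and picks up one unit of direct light. It only shows
# the two stopping rules: a hard bounce cap, and cutting off once the
# ray's accumulated weight can no longer visibly change the pixel.

MAX_BOUNCES = 8          # "around 7 or 8 bounces"
MIN_CONTRIBUTION = 0.01  # below this, the bounce won't be noticeable

def trace(reflectivity, depth=0, weight=1.0):
    # Terminate on whichever rule triggers first.
    if depth >= MAX_BOUNCES or weight < MIN_CONTRIBUTION:
        return 0.0
    direct = 1.0  # stand-in for the shadow-ray lighting at this hit point
    # Each further bounce is attenuated by the surface reflectivity.
    return weight * direct + trace(reflectivity, depth + 1, weight * reflectivity)
```

With a perfect mirror (`reflectivity=1.0`) the weight never decays, so the hard cap of 8 bounces is what stops the recursion; with `reflectivity=0.5` the weight halves each bounce and drops below the contribution threshold after 7 levels, so the second rule fires first.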
Lastly, no, we do not get free ambient occlusion/colour bleeding. That technique is called global illumination and is usually done with either photon mapping, light transport, or radiosity. It's also MUCH slower than both raytracing and rasterization. Still, one can dream. Can you tell I'm doing my thesis on computer graphics?
Holy crud, thread necromancy!
Haha, I definitely can.
Sounds kinda cool, tho we both probably know it takes a lot more than straight raytracing to make CGI look really, really nice. I'd be lost without area lights, but they wouldn't be possible unless you've got GI etc., right?
Interesting point you made about the GI. I was aware it used photon mapping, but I didn't know that was much different from just standard raytracing. Is it similar, just the light rays make a light spot on the photon map and you get a lit effect from it? Cuz I've done renders with low ray counts and you get a kinda spotty effect; I guess when you have enough rays this disappears?
"Standard" raytracing is a bit of a misnomer. My understanding is that "raytracing" is a generic term for rendering algorithms that literally trace rays - i.e. trace the path that light beams would take through the image.
The ultimate solution would, I suppose, be to plot a very large number of rays leaving the light source and trace them until they reach the "camera", leave the scene, or reduce in intensity to below a very low threshold. Provided you accurately modelled the objects in the scene and were working to a suitable degree of precision, that would give you a very accurate facsimile of the real world. If you modelled the internals of the camera as well (lenses, aperture and capture surface), you would also get realistic depth of field, lens flare etc. This, however, would require an enormous amount of computation, as the vast majority of the rays you traced would never reach the camera and their work would be wasted, so compromises have to be found.
The simplest ray-tracer will trace one ray per pixel, from the camera out into the scene. If the ray hits an object, you then measure angles and distances to light sources and check whether the path to each light source is clear by tracing a ray from there, perform some simple calculations and add the results together to give the colour of that pixel. This is trivially easy on modern CPUs at even high resolutions, but while you get pixel accurate shadowing, you don't get any kind of ambient light, so with a single light source, anything in shadow is totally black. It is simple to add a global ambient light, but the effect is pretty poor. You also don't get focusing effects, anti-aliasing, specular highlights, subsurface scattering, partial or total transmission of light, reflections or any of the other myriad effects that make a scene convincing.
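Here's roughly what that simplest one-ray-per-pixel tracer looks like as a self-contained sketch. Everything in it is made up for illustration (the two-sphere scene, the light position, the 0.05 ambient fudge, the pinhole camera): it just traces a primary ray per pixel, finds the nearest hit, fires a shadow ray at the light, and does Lambertian shading, exactly the pipeline described above, with none of the fancier effects.

```python
import math

def sub(a, b):  return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b):  return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def norm(v):
    l = math.sqrt(dot(v, v))
    return (v[0]/l, v[1]/l, v[2]/l)

def hit_sphere(origin, direction, centre, radius):
    """Distance t along a unit-length ray to the sphere, or None.
    Only the near root is used; a small epsilon avoids self-hits."""
    oc = sub(origin, centre)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c          # direction is unit length, so a == 1
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

# Made-up scene: a unit ball plus a huge sphere standing in for a floor.
SPHERES = [((0.0, 0.0, -3.0), 1.0), ((0.0, -101.0, -3.0), 100.0)]
LIGHT = (5.0, 5.0, 0.0)             # single point light

def shade(origin, direction):
    # Nearest intersection along the primary ray.
    best = None
    for centre, radius in SPHERES:
        t = hit_sphere(origin, direction, centre, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, centre)
    if best is None:
        return 0.0                  # ray left the scene: black background
    t, centre = best
    point = tuple(origin[i] + direction[i] * t for i in range(3))
    normal = norm(sub(point, centre))
    to_light = norm(sub(LIGHT, point))
    # Shadow ray: is the path from the hit point to the light blocked?
    for c2, r2 in SPHERES:
        if hit_sphere(point, to_light, c2, r2) is not None:
            return 0.05             # shadowed: crude global ambient only
    return 0.05 + 0.95 * max(0.0, dot(normal, to_light))  # Lambertian

def render(width, height):
    """One ray per pixel through a simple pinhole camera at the origin."""
    return [[shade((0.0, 0.0, 0.0),
                   norm(((x + 0.5) / width - 0.5,
                         0.5 - (y + 0.5) / height, -1.0)))
             for x in range(width)]
            for y in range(height)]
```

Note how the post's caveat shows up directly in the code: with one light and only a constant ambient term, anything the shadow ray can't reach the light from is nearly black, and there are no reflections, highlights, or anti-aliasing.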
Photon mapping is an excellent solution - you trace a large number of rays from the light source into the scene, tracking them for a number of bounces, bend them through refractive objects, scatter them where they hit matt surfaces, bounce them where they hit shiny surfaces. You also split the rays where necessary - where a photon hits a glass object, for example, you will have a reflected and a refracted ray.
Once you have done this you have a photon map - a model of the illumination of the scene. You then run your "camera" function, tracing rays from the point of view into the photon mapped scene. You may use one ray per pixel, but you will probably want to use more where there are object edges (for anti-aliasing) or you may supersample the entire image and then downsample it. You might use several rays per pixel passing through your camera's virtual lens at different points, to get a focus / depth of field effect.
One great bonus is you could do the photon mapping once and then, provided the scene is static, you could image it from any location relatively cheaply (in terms of computer resource required) because you'd only be re-running the second stage of the render.
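The two-pass structure described above can be sketched in miniature. This is a heavily simplified toy, not how a production photon mapper works: the light, the flat floor at y = 0, and the function names are all assumptions, refraction and bounces are skipped entirely, and the "map" is a flat list rather than a kd-tree. But it shows the split: pass one shoots photons from the light and stores where they land; pass two answers camera-side queries by density estimation over the stored photons, and can be re-run from any viewpoint without redoing pass one.

```python
import math, random

def build_photon_map(n_photons, light=(0.0, 4.0, 0.0), seed=0):
    """Pass 1: shoot photons from the light and record where each one
    lands on the floor plane y = 0. Every photon carries equal power."""
    rng = random.Random(seed)
    photons = []
    for _ in range(n_photons):
        dx, dz = rng.uniform(-1, 1), rng.uniform(-1, 1)
        # Ray light + t*(dx, -1, dz) meets y = 0 at t = light height,
        # so the landing point is spread over an 8x8 patch of floor.
        photons.append((light[0] + light[1] * dx, light[2] + light[1] * dz))
    return photons

def radiance_estimate(photon_map, x, z, radius=1.0, power_per_photon=1.0):
    """Pass 2 (the 'camera' side): estimate brightness at a point by
    gathering photons within `radius` and dividing their total power
    by the area of the gather disc."""
    r2 = radius * radius
    total = sum(power_per_photon for px, pz in photon_map
                if (px - x) ** 2 + (pz - z) ** 2 <= r2)
    return total / (math.pi * r2)
```

The "great bonus" from the post is visible here: `build_photon_map` is the expensive part and runs once; `radiance_estimate` can then be called for any number of camera rays, from any viewpoint, against the same static map.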
Damn, where do I start replying to all that.
The photon mapping sounds like a great method to use on current static scenes. This is probably already what's going on in games like Half-Life 2 during the map compilation process, right? Then its results are just "baked" onto the map, and it looks like realtime photon-esque visuals.
It's mind-boggling stuff. I wish I understood the maths so I could have a stab at writing a raytracer or something like it.