Real? Fake? Either way my mind is blown. More info at http://unlimiteddetailtechnology.com/ , although for a company supposedly about to revolutionise graphics, their website is rubbish.
Reminds me of those PowerVR Kyro cards that only rendered what you could see on screen at any one time, instead of the whole scene. Looks good, but they'll need to get someone big on their side (Intel might like this if it's done in software like he said).
Sounds interesting, but wow, that video was irritating... It waffled for pretty much 7 of the 8 minutes about how awesome it was, (IMHO) talking down to you, before finally offering an explanation at the end. And if I hear the word 'data' pronounced as 'daaarh-taaarh' once more I think I will have to scream.
Hmm... Very interesting, and it also makes sense the way it's described: you only ever need as many points as you have pixels on your screen, since one point = one pixel. It does seem to me it would be immensely difficult to work out exactly which points to fetch at any given moment, depending on which direction whatever you're controlling is moving in, especially at larger monitor resolutions. If they've got it figured out, more power to 'em... The demonstration looks impressive, and actually being able to use curved surfaces, instead of polygons arranged so things merely look curved, is definitely the future of graphics.
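The "one point per pixel" idea above can be sketched roughly like this: project the point cloud and keep only the nearest point that lands on each pixel, so the per-frame work is bounded by screen resolution rather than scene size. This is just a guess at the principle, not their actual algorithm, and all the names are made up for illustration.

```python
# Hypothetical sketch: keep only the nearest point per pixel, so the
# frame costs O(pixels shaded), however huge the point cloud is.

def render_points(points, width, height):
    """points: iterable of (x, y, z, colour) already in screen space,
    with larger z meaning further from the camera."""
    depth = {}   # pixel -> nearest z seen so far
    frame = {}   # pixel -> colour of that nearest point
    for x, y, z, colour in points:
        px, py = int(x), int(y)
        if 0 <= px < width and 0 <= py < height and z > 0:
            key = (px, py)
            if z < depth.get(key, float("inf")):
                depth[key] = z   # this point is nearer: keep it
                frame[key] = colour
    return frame

cloud = [(1.2, 1.8, 5.0, "red"),    # lands on pixel (1, 1) but far away
         (1.4, 1.6, 2.0, "green"),  # same pixel, nearer: wins
         (9.0, 9.0, 1.0, "blue")]   # off a 4x4 screen: culled
print(render_points(cloud, 4, 4))   # -> {(1, 1): 'green'}
```

The hard part the comment identifies — deciding *which* points to even look at — is exactly what this brute-force loop skips; a real system would need a spatial index so it never touches the points it doesn't draw.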
I still don't know what the big deal with making curves is... then again, I know nothing about 3D graphics; it just doesn't seem difficult. Isn't the 'only render what you see' thing already being done? It's not ambient occlusion, but I think the name is similar or something. Blah, if it makes games look pretty I'm all for it.
It's not fake; other companies have shown this type of tech, with the possible exception of their claim that it's all done on the CPU, but I think that was just a misspeak by the very bad voiceover guy. The only thing I take issue with in the demo is them calling it 'unlimited'; it speaks to me of a very ill-thought-out launch plan. Also, when you do some digging into the company, it raises some questions.
Z-buffering, which has been done since the original Quake IIRC. And there's another one, but I can't remember the name atm. I just had to write a 2,000-word report on the impact of graphics evolution on game graphics.
I had Delta Force; great game, but the voxels were pants at low resolutions, and nothing had enough real grunt to make it look nice at the time. PS: your beloved Crysis uses voxel-based rendering for terrain.
'There are two polygon companies, ATI and Nvidia, who don't like each other very much' - I chuckled a little there. Looks good but will we need a dedicated GPU to run it?
They should put some higher-res images on their site... I don't think anyone would mind getting rid of anti-aliasing; that's a big leap forward if true. Those pyramids looked a lot like DX10, which can take the same model and clone it without a performance hit... so I dunno, could be a fake.
I also know almost nothing about 3D graphics on the technical side, so my simple mind needs some explaining. Take AutoCAD for example: I can draw a circle which is, according to the program, perfectly round, and it will be displayed with as much circular precision as my monitor's resolution allows. The math used to describe the circle is universal and can scale to any machine's capabilities; it simply requires some more code to render it as curvy lines. Right now we're at about the level of a graphing calculator, but with enough complicated math anything can be described, even things that aren't physically possible. It seems to me that something like a tree could be written as a function describing large circles at the bottom and small circles at the top, with a texture wrapped around the space in between like the label on a soup can; scaling to different machines is simply a matter of feeding more or fewer variables into the function. Is this method just too slow?
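The AutoCAD circle idea is basically this: store the shape as math (centre + radius) and tessellate it into however many segments the target machine can afford. A rough sketch, with made-up names:

```python
# A circle stored as an equation, rendered at whatever level of detail
# the machine/resolution can handle. Illustrative sketch only.
import math

def tessellate_circle(cx, cy, r, segments):
    """Approximate a perfect circle with `segments` straight edges;
    more segments = smoother curve, chosen per machine."""
    return [(cx + r * math.cos(2 * math.pi * i / segments),
             cy + r * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]

low  = tessellate_circle(0, 0, 1, 8)    # graphing-calculator octagon
high = tessellate_circle(0, 0, 1, 256)  # smooth at monitor resolution
print(len(low), len(high))              # same math, different detail
```

Every vertex in both lists sits exactly on the true circle; the only difference between the cheap and expensive versions is how many you ask for, which is the scaling-by-variables point the comment is making.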
Some other discussion I read pointed out the lack of dynamic meshes, dynamic lighting, and any general movement other than the camera. Taking static geometry, partitioning it, and storing it in a data structure for a fast visibility-based search is not a new problem. I know for a fact Doom used a similar idea ages ago (it used a binary space partition tree). Now, if that foliage were swaying in a breeze, that would be something to get a little excited about.
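For anyone who hasn't met a BSP tree: the Doom-era trick is to split static geometry recursively with planes, so a visibility query walks down the tree instead of scanning every wall. A toy 1D version (all names illustrative):

```python
# Toy binary space partition over wall x-positions: split on the
# median recursively, then locate the camera's leaf in O(log n).

def build_bsp(segments):
    """segments: list of x-positions of static walls."""
    if len(segments) <= 1:
        return {"leaf": segments}
    segments = sorted(segments)
    mid = len(segments) // 2
    return {"split": segments[mid],          # the partitioning 'plane'
            "front": build_bsp(segments[:mid]),
            "back":  build_bsp(segments[mid:])}

def find_leaf(tree, x):
    """Descend to the leaf containing camera position x."""
    while "leaf" not in tree:
        tree = tree["front"] if x < tree["split"] else tree["back"]
    return tree["leaf"]

tree = build_bsp([1, 5, 9, 13])
print(find_leaf(tree, 6))  # -> [5], the wall nearest this camera slot
```

The catch the comment raises is right there in the code: the tree is built once from *static* geometry, which is why swaying foliage or moving meshes would be the genuinely impressive part.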
What you are describing is the difference between vector graphics (images derived from equations) and raster graphics (images built from pixels). The idea the video was putting across is that even with high-quality textures there will always be a point where either a) the texture looks low quality when zoomed in, or b) the performance hit from using such a high-res texture becomes noticeable. I believe quite a few games do use generated trees (Oblivion is the one I know of), but unless you can describe the texture itself as a vector you will get the same level-of-detail problems we have now.
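The vector-vs-raster distinction can be shown in a few lines: zooming into a raster just repeats its stored pixels, while a vector shape is re-evaluated from its equation at the new scale. A purely illustrative sketch:

```python
# Raster zoom repeats baked pixels; vector zoom re-samples the math.

def zoom_raster(pixels, factor):
    """Nearest-neighbour zoom: no new information, just bigger blocks."""
    return [p for p in pixels for _ in range(factor)]

def zoom_vector(edge_fn, width):
    """Re-sample the underlying equation at the new resolution."""
    return [edge_fn(i / width) for i in range(width)]

edge = lambda t: 1 if t >= 0.3 else 0        # the 'true' shape edge
raster4 = [edge(i / 4) for i in range(4)]    # baked at 4 pixels: [0, 0, 1, 1]

print(zoom_raster(raster4, 2))  # [0, 0, 0, 0, 1, 1, 1, 1] - edge drifts to 0.5
print(zoom_vector(edge, 8))     # [0, 0, 0, 1, 1, 1, 1, 1] - edge stays near 0.3
```

The baked raster carries its sampling error into every zoom level, while the vector version places the edge correctly at any resolution — which is exactly the level-of-detail problem the comment says textures can't escape.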