I did some more research into the company Euclideon after I read that it supposedly got a $2 million grant. What I found pretty much confirms (at least in my mind) that a company named Euclideon does exist for the purpose of researching 3D graphics technology, and it did receive a substantial grant from the Australian government. So at least we know it's not one guy making a YouTube video to troll the world. source 1 source 2 However, as to the validity of their claims or the existence of their software, I couldn't find anything. Still, if this is entirely a prank and nothing at all exists behind Euclideon, then it's the most elaborate prank I've ever heard of. I think there's a better chance that the software will be released and people will find that it's not as good as it's claimed to be. This entire thing being a hoax just doesn't seem all that plausible, although there is still that chance...
I'd love to take the credit for that but I'm afraid I can't. http://www.rockpapershotgun.com/2011/08/02/notch-vs-unlimited-detail/
just copying off reddit: Ahh, sparse voxel octrees. Carmack intends to use them in the id Tech 6 engine (post-Rage). Here's what the video didn't mention: you can't animate these. They are as utterly inflexible as sprites. The closest you can come is to define every frame; you can still have amazing object detail, but it can't be procedurally animated (e.g. by ragdoll physics) and will move at a fixed framerate with no obvious tweening method. You can have this voxel world and it will look awesome, but it will be almost completely static. By the sound of things this company is using a binary search like the PULS 256-byte raytracing demo, so at least bounding volumes will work and moving static objects around won't be a huge performance issue. But you probably already knew/read/thought of that.
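For anyone unfamiliar with the data structure being discussed, here is a minimal sparse voxel octree sketch in Python (the class and function names are mine, purely for illustration). The key property is that empty space costs nothing: unoccupied children are simply absent.

```python
# Minimal sparse voxel octree sketch: empty regions have no node at all,
# which is where the memory savings over a dense voxel grid come from.
class OctreeNode:
    __slots__ = ("children", "color")

    def __init__(self, color=None):
        self.children = [None] * 8  # None = empty subspace, not stored further
        self.color = color          # leaf payload, e.g. a baked (r, g, b)

def insert(node, x, y, z, size, color):
    """Insert one unit voxel into a cube of side `size` rooted at the origin."""
    if size == 1:
        node.color = color
        return
    half = size // 2
    idx = (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)
    if node.children[idx] is None:
        node.children[idx] = OctreeNode()
    insert(node.children[idx], x % half, y % half, z % half, half, color)

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children if c is not None)

root = OctreeNode()
insert(root, 3, 0, 7, 8, (200, 180, 150))  # one occupied voxel in an 8^3 cube
# A dense 8^3 grid would store 512 cells; the sparse tree stores only the
# path down to the single occupied voxel (root plus log2(8) = 3 levels).
print(count_nodes(root))  # 4
```

This is also why the "can't animate" complaint bites: the whole structure is keyed on fixed spatial positions, so deforming the model invalidates the tree.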
Notch makes a mistake in that he calculates the amount of space needed to store every single data point, but the whole point of sparse voxel octrees is that they do not include data for empty points, such as air and interior spaces. (Everything else Notch says is pretty much on point, though. No reason to discount his entire post for one little mistake.) Although the number of points that would have to be stored (and the space required) is still far too large to be feasible with our current technology, it's not quite as large as Notch makes it sound; elsewhere on the internet, I've seen estimates of 1-2 orders of magnitude more than the storage required for equivalent polygon data, but there's no evidence to support that speculation.

Another showstopping problem with using point clouds/voxels in a game engine is animation. Notice that Euclideon shows zero animation of any sort in any of their demos; this is because animating point clouds is prohibitively complex due to the need to calculate movement for every single point. With current polygon technology, animation deforms a skeleton which is linked to the vertices of the model, and the rest of the model's movement is interpolated, which is a fairly fast operation. Consider that a model with 5,000 triangles could have at most 15,000 vertices; thus, animating that model requires calculating the movement of at most 15,000 objects. Euclideon's technology would require doing that math for every single one of the millions, billions, or trillions of data points that have to move. Then you would have to store each keyframe of animation data, easily increasing the storage needed for an animated Infinite Detail model by another order of magnitude for every 10 keyframes required. The other alternative is to create groups of points and move whole groups at a time instead of individual points. Think one group for the head, one for the torso, upper arm, lower arm, etc.
Too bad this style of animation looks terrible. Dynamic lighting is another issue for this engine. In the demos, lighting is precalculated and the light value of each point is stored along with its other data. Calculating lighting in real time for every single point in a large scene (like a game world) is another processing challenge that is impossible to overcome; again, it would require lighting operations on trillions of individual objects. Euclideon would have us playing in a game world with zero dynamic lighting: no cast shadows, no flashlights, no explosions... and no immersion. Their novel search algorithm may make it possible to efficiently generate and display scenes from huge sparse voxel octree datasets, but you can't call their engine a game engine. With infinite storage we could store as much data as needed for highly detailed models and animations, and with infinite processing power we could calculate real-time lighting for their incredibly detailed voxel models, but we have neither infinite space nor infinite power. Please, don't waste your time on this stuff.
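To put rough numbers on both objections (animation and lighting), here's a quick back-of-envelope calculation. The point counts and light count are my own illustrative assumptions based on the figures thrown around in these posts, not anything Euclideon has published.

```python
# Rough per-frame work estimates: moving/shading every point individually
# versus skeletal skinning of a polygon model. All counts below are
# illustrative assumptions, not measured numbers from Euclideon's engine.
triangles = 5_000
max_vertices = triangles * 3             # upper bound: no shared vertices
print(max_vertices)                       # 15000 positions to update per frame

cloud_points = 10**9                      # assumed "billions of atoms" model
print(f"{cloud_points / max_vertices:,.0f}x more positions than skinning")

# Dynamic lighting: at least one shading evaluation per point per light.
scene_points = 10**12                     # assumed "trillions" for a game world
lights = 4
print(f"{scene_points * lights:.1e} shading evaluations per frame")
```

Even if each per-point operation were dirt cheap, the sheer counts are several orders of magnitude beyond what the skeletal approach asks of current hardware, which is the crux of both arguments above.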
One of my steam friends just shared this with me and at first I thought "wow, nice"... but after reading this I'm not so sure. The geometry is very nice but the lack of dynamic lighting makes it look like crap. As for its authenticity or plausibility, I couldn't possibly say because I don't know enough about 3D game design.
This 'we can do it all with the magic of voxels' line is a VERY old story, and EVERY TIME it goes nowhere. It's just the 'ray tracing is the future' crap again, wrapped up in different hype. Make a sentence out of these words: No such thing, Lunch, Free, As a
I remember the hype surrounding that game and its voxel engine; in fact, I've still got the CD somewhere (along with Quake I & Quake II). All I remember is that the game looked terrible - even on powerful hardware.
I remain sceptical. Very sceptical. What I've seen is a tech demo with some pretty rocks and the same patch of grass used over and over again. Nothing moved, there was one static light source, and there were no reflections whatsoever. It was basically just a static 3D image you can circle around - and not even a really good one. Certainly not impressed here. I find Atomontage much more interesting. There are quite a few videos on YouTube, I believe, that actually show the technology doing something (unlike this one). The car leaving tracks in the sand and getting stuck is amazing. In the end I think this technology could be useful when combined with polygons. Think of a hybrid engine with pretty voxel sand and rocks and animated polygon people like in Crysis, for example. That would be graphics heaven. Also: obligatory Art Style > Graphics remark
They did say that. However, those conversions wouldn't have increased detail; that would be (as far as I know) impossible. You would just have the identical polygon model, except that it consists of "atoms". You would still need to add real details...
I'm sitting on the fence for now. I've looked at both sides and each has compelling arguments. However, to add some more points to the "it's possible" side, here are my understanding and assumptions about this tech:

+ The core technology is based on algorithms, not a rendering technology per se; I understand it operates like a highly efficient search engine/database. I assume this means you won't have the same memory-requirement problem. Game maps/models are traditionally masses of wireframe polygons with high-resolution pictures (textures) pasted over them [which can consume a lot of memory]. Now imagine that the model itself is a database entry - no more than a reference for the engine to place atoms. A scene will no longer be made of high-res pictures - instead it will be made of particles with RGB values. Have enough of them and an image forms (i.e. how pixels form images). To summarise: the engine places coloured atoms on the model as required. This process would be dynamic, which leads me to the next point:

+ The level of complexity scales to the camera viewpoint. Similar to current game technology, only far more subtle and incremental. Using atoms means only minor additions and reductions are made to the models as the viewpoint changes, so there shouldn't be a feeling of lag or low frame rates. Traditional polygon-based models have to load from a pool of increasingly complex pre-made models the closer you get in game, and this can cause games to lag as they load content (think GTA skyscrapers). Now imagine there is no need to load entire pre-fabricated models, only tiny incremental steps to existing ones, making them more or less complex as required.

+ Referencing/indexes. The official video makes a point of saying how many billions of "atoms" are used in their island model - I personally have the feeling that is how many are referenced by their model, and far fewer are actually rendered at any given time.
+ Side points: As long as the engine is optimised and fast, I don't see any problems with the core idea. Just assume that atoms can be plotted onto their correct reference points at a fast enough rate to make it playable. Of course it could all be a hoax; I like to keep my eyes open to new ideas, however. Arguments have also been raised about the difficulties of animation, lighting, etc. I believe a hybrid approach would give us the best of both worlds - e.g. an atom-based world/map could give us incredible detail without sacrificing performance, then we could add high-polygon character models with traditional animation on top. So forget saying "it's impossible, it's all lies LIES I SAY" etc.; if this is a new technology, then there is potential to do things we haven't thought of yet. And remember, in the early days there was a similar backlash against the first pioneers who pushed games into *gasp* 3 dimensions!!!
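The "level of complexity scales to the camera viewpoint" idea maps naturally onto octree traversal depth: you descend the tree only until a node would project to about one pixel on screen. Here's a rough sketch of that reasoning; the pinhole-camera model, field of view, and resolution are all my own assumptions for illustration, not anything from Euclideon.

```python
# Sketch of distance-based level of detail for an octree renderer: stop
# descending once a voxel's projected size falls below one pixel.
# Camera parameters here are assumed values, purely for illustration.
import math

def required_depth(distance, root_size, fov_deg=60.0, screen_px=1080):
    """Octree depth at which a voxel projects to roughly one pixel."""
    # World-space footprint of one pixel at this distance (pinhole model).
    pixel_world = 2 * distance * math.tan(math.radians(fov_deg) / 2) / screen_px
    return max(0, math.ceil(math.log2(root_size / pixel_world)))

# A 1024-unit world: nearby detail needs deep traversal, distant detail
# needs far less, so on-screen work stays roughly constant per pixel.
for d in (1, 10, 100, 1000):
    print(d, required_depth(d, root_size=1024.0))
```

Each extra depth level is up to 8x more potential nodes, so capping depth by distance is what would keep the per-frame "atom" count bounded regardless of how many billions the dataset references - consistent with the hunch above that far fewer atoms are rendered than are stored.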