So, any idea what would be needed to hardware-accelerate this above 20 FPS? And does this mean that tessellation will be redundant?
That is just so fake. It's fakey fake fake McFake Fakealishous bullfake with a side order of fake. So they can store and render an infinite quantity of voxels? In software? At 20 FPS? Er, how exactly? By using an infinitely sized supercomputer with an infinite amount of RAM? Because that's the only way! You can't do voxels without lots of processor grunt and lots of RAM, so you can't render an infinite quantity of them. http://en.wikipedia.org/wiki/Voxel I can't believe I'm wasting my time even commenting on this crap.
It's probably just for low graphics settings... I think this could give DirectX a run for its money. I hope it takes off.
Could just be rendered on a server farm for now, but this tech blows everything out of the water. Seeing as most modern GPUs are basically supercomputers, if they get this optimised on the next gen of GPUs it'll be a real winner!
What "tech"? It's not real! Everything about it screams hoax on so many levels! Even their websites were built with a Plex "3 step wizard" website creator, and everything they have ever said is totally unbelievable, because their claims are... and I don't use this phrase lightly... impossible! I'm not going to argue my point from a technical standpoint, as this hoax, while fun, has no technical merit. There is no element of truth in any of it. It's 100% fake. It's not even good enough to go here: http://en.wikipedia.org/wiki/List_of_hoaxes
That's what people thought about OnLive, and yet that works... And if it's not that, what DID they make it with? I've never seen that level of detail produced by anything...
On their site, if you believe what they say, they aren't using voxels. They claim to be using an algorithm that's unlike any of the 3D rendering techniques used thus far. It's a very convenient claim, I'll admit, but it would explain why this seems impossible with voxels (because it is). From what I gathered from reading the bottom (take note that I have *NO* experience in this subject whatsoever, so I may be totally wrong), I think they're using some type of smart selective algorithm which ignores all the points that aren't necessary. That's what would make "unlimited detail" possible: at every resolution and zoom level the renderer is processing the same number of pixels, based on whichever points the algorithm returns as significant. That's how I think they explained it, but don't take my word for it since I don't really know anything about this or any related technology. That same page claims that the graphics they're running can be done on ANY consumer hardware, which I find very... VERY hard to believe.
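To make the idea above concrete: if each pixel can be answered with a cheap lookup into some spatial index, the per-frame cost is bounded by the screen resolution rather than the number of points in the scene. Here's a toy sketch of that idea; every name here is made up for illustration, and this is certainly not their actual algorithm.

```python
# Hypothetical sketch of a "one lookup per pixel" renderer: frame cost is
# width*height lookups, independent of how many points the scene contains,
# PROVIDED each lookup is cheap (here, a toy dict-based grid index).

def build_index(points, cell=1.0):
    """Bucket 3D points into a coarse grid so a lookup touches few points."""
    index = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        index.setdefault(key, []).append(p)
    return index

def visible_point(index, ray_origin, ray_dir, cell=1.0, max_steps=64):
    """March a ray through grid cells and return the first point found."""
    pos = list(ray_origin)
    for _ in range(max_steps):
        key = tuple(int(c // cell) for c in pos)
        bucket = index.get(key)
        if bucket:
            return bucket[0]          # first hit: one point per pixel
        pos = [c + d * cell for c, d in zip(pos, ray_dir)]
    return None

def render(index, width, height):
    """Work is width*height lookups, no matter how big the point cloud is."""
    frame = []
    for y in range(height):
        for x in range(width):
            ray_dir = (0.0, 0.0, 1.0)      # toy orthographic camera
            frame.append(visible_point(index, (x, y, 0.0), ray_dir))
    return frame

points = [(5.2, 3.1, 7.0), (1.0, 1.0, 2.0)]
idx = build_index(points)
frame = render(idx, 8, 8)   # always 64 lookups, regardless of scene size
```

The catch, of course, is hidden in `build_index`: the index over the whole scene still has to exist somewhere, which is exactly the storage objection raised later in this thread.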
They're not "using" anything, as it's all 100% fake. There's no technology at the heart of it. There's no software. There's not even a real company. FAKE! If you think anything else is even remotely possible, then here's something else for you: http://bit.ly/pXEqyA
You really are trolling this hard... Just repeating 'fake' over and over in bigger and bigger font doesn't make you right...
But he said it in big italic letters! And used capslock?! I thought capslock was just a way of expressing your anger but now I see it also makes your statement more convincing! I TOTALLY THINK THIS IS REAL
I like the point he makes that it only needs to calculate to your screen resolution. That in itself is a very interesting way of looking at their approach. If that's the case, an implementation of this could really tip the scales of how we look at graphics and rendering. And I think when they say infinite, they really do mean infinite: the screen resolution caps what the computer ever has to draw, so the system can keep the data and "fetch" it to the necessary points on the screen as needed, rather than brute-force rendering millions of points that are never seen. Clearly it needs some work, but if their claims are grounded then it's significant. I for one look forward to their work; it looks promising. (Granted, I see why they are NOT spamming their technique all over the place; after all, if you've got something good, it's in your best interest to keep it proprietary and secret.)
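One plausible shape for that "fetch only what the screen needs" idea is a tree walk that stops descending once a node would project to less than a pixel, so finer data is never even fetched. This is a purely illustrative sketch under that assumption; the structure and names are mine, not theirs.

```python
# Sketch of level-of-detail selection: walk a tree of nested nodes and stop
# descending once a node's projected size drops below one pixel, so deeper
# (more detailed) data is never touched. Illustrative only.

def nodes_to_draw(node, distance, pixel_angle=0.001):
    """Collect nodes whose projected size is at most ~one pixel.

    node: dict with 'size' (world units) and optional 'children' list.
    distance: rough distance from the camera to the node.
    pixel_angle: angular size of one pixel (field of view / resolution).
    """
    projected = node["size"] / max(distance, 1e-9)   # small-angle approx
    if projected <= pixel_angle or not node.get("children"):
        return [node]                # fine enough (or a leaf): draw it
    out = []
    for child in node["children"]:
        out.extend(nodes_to_draw(child, distance, pixel_angle))
    return out

# A tiny two-level tree: an 8-unit cube split into eight 4-unit children.
root = {"size": 8.0, "children": [{"size": 4.0} for _ in range(8)]}

far = nodes_to_draw(root, distance=10000.0)   # whole tree is sub-pixel: 1 node
near = nodes_to_draw(root, distance=10.0)     # must descend: 8 leaf nodes
```

Notice the work scales with what is visible at the current distance, not with the total depth of the tree, which is the property being praised above.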
The technology itself is sound: take the scene, find which "atoms" will be shown for each pixel, apply lighting effects, and done! The only problem is storage. How are you going to store trillions of points of data on a modern PC? Assume for a second that each point takes up 1 byte of data (it would probably take more):

1,000 points is a kilobyte
1,000,000 points is a megabyte
1,000,000,000 points is a gigabyte
1,000,000,000,000 points is a terabyte

Thus, to store the geometry for a world of a trillion of these bare 1-byte points, you would need 1 TB of RAM. I think the future of geometry is "vector" geometry. If you think of polygons as raster graphics (made of pixels/points), then vector geometry would be represented by mathematical equations. E.g. it would be extremely easy to build a perfectly smooth pillar: the area of its side is simply

Code: 2*pi*r*h

where r is the radius of the pillar and h is the height.
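The "vector geometry" idea above can be sketched in a few lines: the pillar is stored as just two numbers (r, h), and points on its surface are generated on demand at whatever density the view requires. A minimal sketch, not a real engine:

```python
import math

# "Vector geometry" sketch: a perfectly smooth pillar stored as two floats
# (radius r, height h) instead of trillions of points; any level of detail
# is generated on demand, so storage is O(1) while detail is unbounded.

def pillar_surface_area(r, h):
    """Lateral (side) surface area of the pillar: 2*pi*r*h."""
    return 2 * math.pi * r * h

def pillar_points(r, h, around, up):
    """Generate `around * up` points on the pillar's side, at any density."""
    pts = []
    for i in range(around):
        theta = 2 * math.pi * i / around
        for j in range(up):
            z = h * j / max(up - 1, 1)
            pts.append((r * math.cos(theta), r * math.sin(theta), z))
    return pts

# Same two stored numbers, wildly different detail levels:
coarse = pillar_points(1.0, 5.0, around=8,    up=2)     # 16 points
fine   = pillar_points(1.0, 5.0, around=1000, up=1000)  # a million points
```

The trade-off is that only shapes with a neat closed form compress this well; a scanned elephant statue, like the ones in the demo, has no such equation.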
I'm as skeptical as the next man, but do you have any evidence to back up your claims? If not, then you're just like the rest of us: skeptical.
Evidence? They are claiming to be able to render images with unlimited detail using standard CPUs. Not "more detail": unlimited detail. And the phoney-baloney technobabble compares it to a Google search index where you only pick out the pixels you need. Well, that won't work... or rather, it will work on a small scale, but the techniques that make it work are still ray tracing, ray casting and the like. The big issue is the size of their datasets. Their first demo video contained 8 billion points; even at a bare 24 bytes per point (x, y and z coordinates at 64 bits each, with no colour or material data at all), that's roughly 190 GB. And that was last year's "demo". The 2011 demo island "contains" 21 trillion data points. Each point needs x, y and z coordinates (let's say 64-bit), so that's 192 bits per point. Add another 24 bits for colour and 16 for material data; 232 bits per point is really, really conservative. So how much data are we talking about? 21 trillion times 232 bits is 4,872,000,000,000,000 bits. That's 609,000,000,000,000 bytes, 609,000,000 megabytes, 609,000 gigabytes, 609 terabytes. Read that? SIX HUNDRED TERABYTES, uncompressed, for one island, on the "standard PC" they claim runs it in real time! For comparison, Google's web crawl indexed about 850 terabytes of data, so this one demo dataset is in the same league as the whole of the web content indexed by Google search. Are we agreed that's a bit big? They provide no information on how they claim to have accomplished this. Their websites and videos are amateurish, and their self-referenced technical explanation wouldn't fool anyone with even the smallest amount of technical knowledge on the subject. I don't need to prove it's impossible. It's like someone claiming (via a dodgy web video like this) that they can turn 1 litre of water into an unlimited quantity of water.
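The back-of-envelope arithmetic above is easy to check, using exactly the same assumptions (21 trillion points; 64-bit x/y/z, 24-bit colour, 16-bit material per point):

```python
# Reproduce the dataset-size estimate: 21 trillion points at a conservative
# 232 bits each, converted down to terabytes (decimal units, 1 TB = 10^12 B).
points = 21 * 10**12
bits_per_point = 3 * 64 + 24 + 16           # x, y, z + colour + material = 232
total_bits = points * bits_per_point        # 4.872e15 bits
total_bytes = total_bits // 8
terabytes = total_bytes / 10**12
print(bits_per_point, total_bits, terabytes)   # 232 4872000000000000 609.0
```

So the raw point cloud alone is around 609 TB before any index structures, normals, or duplication for level-of-detail, which is the heart of the storage objection.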
It's not up to the universe to prove they can't do the impossible; it's up to them to prove they can. If there is any truth to this hoax, I will eat my hat. No... I'll spend an "unlimited" amount of time eating an "unlimited" quantity of hats. Google info source: http://static.googleusercontent.com...abs.google.com/en//papers/bigtable-osdi06.pdf