Unlimited Detail

Have you guys checked this out yet?

https://www.popsci.com/technology/article/2010-04/video-new-graphics-tech-promises-unlimited-graphics-power-without-extra-processing

The tech seems pretty legitimate … I don’t know why, but I’m not that skeptical about it. With the proper funding and resources (which they seem to lack at the moment) this could very well be the future of graphics technology.

What do you think?

It’s cool, and I’ve always wondered why this concept wasn’t already tried. I saw a bunch of this on another site, and people were saying they had trouble with animations and stuff, and that it was only possible on supercomputers.

Basically, only Raminator could play a game with this technology.

I think it holds promise and it could be the future of graphics, but we’re probably looking at 10 years down the road before we see anything actually using this technology.

It’s going to be a while until we see this, but I suspect it ultimately will be the future. One man isn’t going to change an entire industry on his own, though.

I’m very intrigued to see how they accomplish this, and whether it’s even possible to render in real time. Obviously they’re making the case that it is, but I do have a hard time believing billions of point-cloud points can be processed per frame.

The most interesting thing to me is that they compare the technology to the Google search engine: finding the points that matter on the screen and only rendering those. Correct me if I’m wrong, but doesn’t Google search take enormous server farms to produce the “600,000,000 results in 0.000001 picoseconds”?

Well, I have a million questions; I guess we’ll just have to wait until he reveals more of it.

Yes; this is why I don’t understand how this would be effective. As he mentions in the video, you supposedly only need to retrieve points for each pixel on the screen… but if you do that, how is it going to render portions of objects whose points are occluded? Don’t you need that information to help draw the shape of something, since the points define its shape? There has to be some sort of pre-rendering or pre-pass calculation happening to speed up the search process… otherwise isn’t it just like rendering something in a 3D package? Anybody who’s done that can attest that it’s definitely not real time. I’m curious to learn more about how his rendering method works, and not just for static scenes, but motion as well (let’s see all of those animals in the pyramids move…)
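
If I had to guess at how the search stays cheap (and this is pure speculation, every name below is mine), occlusion might just fall out of the search order: walk some spatial structure front-to-back for each pixel and stop at the first point you hit, so occluded points are never even visited:

```
// Pure speculation: per-pixel "search" over an octree of points, visiting
// octants front-to-back and stopping at the first hit. Occluded points are
// never visited, so they cost nothing.
#include <array>
#include <cstdint>
#include <cstdio>
#include <memory>
#include <optional>

struct Color { std::uint8_t r, g, b; };

struct OctreeNode {
    std::optional<Color> point;                        // leaf: one stored point
    std::array<std::unique_ptr<OctreeNode>, 8> child;  // interior: 8 octants
};

// 'order' lists the octants nearest-first for the current view ray. A real
// traversal would recompute this per node from the ray; one fixed order is a
// simplification for the sketch.
std::optional<Color> firstHit(const OctreeNode& n,
                              const std::array<int, 8>& order) {
    if (n.point) return n.point;           // front-most point found: stop
    for (int i : order) {
        if (!n.child[i]) continue;         // empty octant: skip
        if (auto c = firstHit(*n.child[i], order))
            return c;                      // everything behind is never touched
    }
    return std::nullopt;                   // ray escaped the scene
}

int main() {
    OctreeNode root;
    root.child[0] = std::make_unique<OctreeNode>();
    root.child[0]->point = Color{200, 60, 60};  // near octant: visible point
    root.child[7] = std::make_unique<OctreeNode>();
    root.child[7]->point = Color{0, 0, 255};    // far octant: occluded, skipped

    std::array<int, 8> nearestFirst{0, 1, 2, 3, 4, 5, 6, 7};
    if (auto c = firstHit(root, nearestFirst))
        std::printf("pixel = (%d, %d, %d)\n", c->r, c->g, c->b);
}
```

That would also explain the 3D-package comparison: an offline renderer shades every sample with full lighting, while a lookup like this just fetches a pre-baked colour per pixel.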

Seriously, this boggles my mind, and is going to keep me awake all night…

Seriously, I’m confused.

I’m convinced I’ve already seen that exact same video (and read all the information regarding it) a few months ago, yet this article was only posted yesterday?

It’s nothing new. It’s just a modern voxel engine with a nifty way of searching for the relevant data.

Here are the problems a voxel engine has compared to a polygon engine:

  • Boxiness
    Voxels look boxy unless you use an insane number of points, especially if they’re to look as good as a modern poly engine.
  • Storage
    You have to hold information for every single point rather than a few vertices like a polygon engine does. We’re talking at least one Blu-ray disc for a game like Half-Life 2 to look just as good (rough numbers in the sketch after this list).
  • Anti-aliasing
    It’s impossible to apply anti-aliasing if fractals are used like in the video; if you want it to look better, you have to add more points.
  • Animation
    All the points would have to be processed, making it more of a system hog than a polygon engine.
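
To put rough numbers on that storage point (my own back-of-envelope, nobody’s published figures):

```
// Back-of-envelope voxel storage math for the list above (my assumptions).
#include <cstdio>

int main() {
    const double GiB = 1024.0 * 1024.0 * 1024.0;

    // A dense 4096^3 grid with 3 bytes of colour per voxel:
    double voxels   = 4096.0 * 4096.0 * 4096.0;   // ~6.9e10 voxels
    double denseGiB = voxels * 3.0 / GiB;

    // Assume only 1% of voxels are occupied (surfaces, not solid volumes),
    // at ~8 bytes per stored point once you add tree overhead:
    double sparseGiB = voxels * 0.01 * 8.0 / GiB;

    std::printf("dense:  %.0f GiB\n", denseGiB);  // ~192 GiB
    std::printf("sparse: %.0f GiB\n", sparseGiB); // ~5 GiB
}
```

And 4096^3 is roughly one large room at millimetre-ish detail, so multiply that ~5 GiB across a whole game and the Blu-ray estimate doesn’t look far-fetched.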

Polygon engines are efficient in a way voxel engines aren’t. Hell, sometimes hybrid approaches are used, like in Crysis, where voxels are used to compute the terrain but it’s rendered as polygons.

The “Unlimited Detail” part is a search engine that attempts to limit the amount of resources (memory) needed and only displays what should be visible. The memory requirements for this to even look like a modern game would be insane, because it has to iterate over and organize millions if not billions of points. If it doesn’t use system memory but instead reads the data straight off the hard drive, well, sucks to be you with a slow drive.

It sounds nice but won’t be practical for ages. Polygon engines are here to stay.

Nerd-gasm if true, indifferent if not.
I just want to know how it could/does affect physics, and the impact on storage requirements.

https://www.youtube.com/watch?v=1sfWYUgxGBE

I could be wrong, but I think the easiest way to explain it may be like this:

For every rendered scene, there is a limited (although very large) number of different possible renderings for any next move you could make. Now that we have powerful computers with large hard drives, your system uses advanced algorithms to calculate the data (pixel information) for all possible renderings. All of this data is stored within a cloud and retrieved as needed.

It literally fills in the scene pixel by pixel versus drawing polygons modeled to resemble objects.

I could be wrong.

…So we have invented the light bulb, but we don’t have electricity yet?

That kinda answered what I was asking about, but I was mostly referring to how things would look, behave, and slow down if they were blown up. :smiley:

Sounds like a backwards step to me.

Whether it’s a GPU processing and drawing billions of polygons or Unlimited Detail pulling pixel data from some cloud, the end result in either case is the pixels on your screen being filled. It’s the computation and rendering process that differ.

With Unlimited Detail there is no GPU processing and drawing billions of polygons per second as you walk around a 3D environment. Instead, individual pixel information is retrieved from a data array (the array could very well match your monitor resolution, in my case 1920x1080) and the scene is rendered accordingly. All the necessary array data is stored within a cloud consisting of the pixel information for every rendition of any possible move within the 3D environment.

Like I’ve said however, I could be wrong.

Carmack said id Tech 5 is going to use this. Animated characters are going to be traditionally polygonal, but terrain/static backgrounds will use it. It’s the natural extension of id Tech 4’s “unlimited texture detail” tech (imagine Google Maps without the download times).

I would like to see this compared with tessellation. I think tessellation is the better way: no model swapping is needed, and you can leave parts of the geometry untessellated where it wouldn’t give any noticeable change.

You mean adaptive subdivision? The deal with a voxel octree system is that it draws at most one polygon per pixel on your screen, so you don’t need model swapping for level of detail in this case either. Subdivision is good for smooth surfaces but not rough ones.
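
The cut-off is simple enough to sketch (hypothetical code, my own names, and I’m cheating by using one distance for a whole subtree):

```
// Hypothetical octree LOD rule: stop descending once a node projects to about
// one pixel, so the screen never gets more than ~one splat per pixel.
#include <cstdio>

struct Node {
    float size;      // world-space edge length of this cube
    Node* child[8];  // null where the octant is empty
};

void drawNode(const Node& n, float distance, float pixelsPerUnit) {
    // Rough projected size in pixels (simplified: one distance for the
    // whole subtree instead of per child).
    float projected = n.size * pixelsPerUnit / distance;

    bool hasChildren = false;
    for (Node* c : n.child)
        if (c) hasChildren = true;

    if (projected <= 1.0f || !hasChildren) {
        std::printf("splat at ~%.2f px\n", projected);  // coarse enough: draw
        return;
    }
    for (Node* c : n.child)
        if (c) drawNode(*c, distance, pixelsPerUnit);   // still > 1 px: refine
}

int main() {
    Node leaf{0.5f, {}};            // half-size cube, no children
    Node root{1.0f, {&leaf}};       // only octant 0 is occupied
    drawNode(root, 10.0f, 100.0f);  // viewer 10 units away
}
```

Tessellation gets to the same place from the other direction: a coarse mesh refined upward instead of a dense tree cut off early.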

I like the concept, but I get the impression that it requires a minimum amount of processing power in order to be at all effective.

Sorta like comparing a y = x line (the current rendering method) with y = x/10 + 100 (the new rendering method), where x is the level of graphical detail you’re trying to achieve and y is the processing power needed to achieve it. That’s just a crude analogy though.
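
For what it’s worth, those two example lines cross where x = x/10 + 100, i.e. at x ≈ 111; below that detail level the “new” method in my analogy would actually cost more, which is the minimum-processing-power hump I mean.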

A resolution of 1600x1200 would “only” need 1,920,000 results in maybe 1 ms; compared to Google, that’s nothing.
