In the last video there was a creature moving in the engine, so it looks like dynamics work there to some extent. I don’t know about destructibility in the engine, but their goal seems to be making the rendering cheap on the CPU. Their algorithm apparently only computes the “visible atoms”, which sounds computationally affordable, if difficult to actually program.
Still, where the hell do they plan to keep the positioning data for trillions of dots…
The only reference point I have for programs doing deformation on data like this is FEM analysis software, and boy is that costly in CPU time…
It seems to me like you haven’t watched the video or read that thing Notch posted. Clearly you have no idea what you’re saying. IT IS possible to do that without straining your CPU; that’s the whole fucking point of the video. The problem is that it would take a shitload of HDD space.
Rendering down to the atom isn’t necessary, per se, for most purposes. Tell me, if there’s a forest of trees in the background, do you need to have veins in the leaves rendered at all or could you get away with a single green pixel for each tree and then, when you get closer to said trees, the amount of detail rises?
Sorry, but if a car is a mile away, I can’t see the license plate number, and the fact that the letters and numbers are raised from the surface is irrelevant until you get close enough, so the computer doesn’t need to expend processing power on those kinds of things. (Same idea as occlusion in video games: you don’t really need to see the computer in another room if there’s a wall blocking your view of it, so why spend processing power rendering it from different angles until you’re inside that room?)
I think you could get away with not rendering those details and use the processing power for other things.
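Just to put rough numbers on the idea (the distances and levels here are completely made up):

```python
# Toy level-of-detail picker: thresholds and level names are invented, purely to
# illustrate "the further away it is, the less detail you bother rendering".
def pick_detail(distance_m):
    if distance_m > 1000:    # a tree a kilometre away is basically one green blob
        return "single blob"
    elif distance_m > 100:   # rough silhouette, no individual leaves
        return "silhouette"
    elif distance_m > 10:    # leaves as simple shapes
        return "leaf clusters"
    else:                    # close enough to care about veins in the leaves
        return "full detail"

for d in (2000, 300, 50, 2):
    print(d, "m ->", pick_detail(d))
```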
^ This is all just a work in progress - the company is young and, as they’ve mentioned several times, a technology company rather than an artistic one. They should have hired artists to create a truly beautiful scene, but it sounds like their budget and time are being pushed to the limit, and they seem to be making slow but steady progress by themselves.
As for the technology itself, it seems sound to me. People are just being unduly harsh in their criticisms - it’s not perfect, but nothing is when it’s this young. Compromises will have to be made, but I think this is, without a doubt, the future of gaming. All they need is 1% of the millions that go into research for new video cards and their problems would probably be solved.
I can’t wait to see what they do.
Yep. I can see the need for conceits in the game’s programming to keep the amount of data down to a manageable level. Take my license plate example.
If a license plate has raised lettering and fine detail such as rusty scratches and dents, then instead of storing where every single point of the plate is, you could program it so that the letters are raised a certain percentage above the plate and the scratches are so many percentage points deep, and have the computer figure out where to place the points from there. Then all an artist has to do is tell the computer to randomize the license plate characters and the number of scratches, and the computer places the points automatically according to that data.
For example, in Black Mesa they are randomizing the faces and textures of NPCs such as guards, scientists, and soldiers (with some limitations, of course) so that they don’t have to individually model each and every guard, scientist, and soldier down to the amount of stubble on Scientist #18’s face. I mean, Half-Life 2 only has a limited pool of textures to draw from, and they’re reused in multiple places (the clumps of grass, for example).
I see something like this being done with this point cloud type stuff.
You tell the tree to have X leaves and that the leaves are generally a Y shape, and the computer figures out where to place them. The rocks in a road’s asphalt could be randomly generated according to a set pattern as well, so you don’t have to individually model every pebble embedded in the asphalt either. It just pulls from a pool of available information and places it on the fly.
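Something like this, roughly - all the names, ranges and numbers here are made up, it’s just a sketch of the seed-plus-parameters idea:

```python
import random

# Hypothetical sketch: instead of storing every scratch on every plate, store a
# seed plus a few parameters and regenerate the exact same points on the fly.
def generate_plate(seed, num_scratches=20, letter_height=0.002, scratch_depth=0.0005):
    rng = random.Random(seed)          # same seed -> same plate every time
    chars = "".join(rng.choice("ABCDEFGHJKLMNPRSTUVWXYZ0123456789") for _ in range(7))
    scratches = [
        (rng.uniform(0, 0.52), rng.uniform(0, 0.11), rng.uniform(0.2, 1.0) * scratch_depth)
        for _ in range(num_scratches)  # (x, y, depth) placed by the computer, not an artist
    ]
    return {"text": chars, "letter_height": letter_height, "scratches": scratches}

# The artist only decides the ranges; every plate in the world just needs one seed.
print(generate_plate(seed=18))
```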
Are you even familiar with the concept and how it works? It’s evident that you are not.
He’s right, but only if people actually believe the ridiculous promises made in the video. Physics simulation at that scale would be impossible on modern computers in real time, but most devs wouldn’t go down to the level of individual dirt for their games anyway; it’s too impractical.
It’s a proof of concept, boobs.
ODB, did you even read this?
Well okay, someone says it’s a voxel scam, that it has been done before (while admitting it’s still impressive), and accuses them of just wanting funding. All of that based on nothing but assumptions. So?
I don’t really give a fuck if it’s Notch. Moreover, Notch is one fishy cunt himself.
Someone says otherwise:
Who should we believe?
https://en.wikipedia.org/wiki/Point_cloud
I get the feeling Notch just senses that Minecraft is fucked. He has a direct interest in this company going down - which shows in his ‘they want funding’ accusations.
https://nwn.blogs.com/nwn/2011/08/is-the-future-of-immersive-3d-in-atoms-euclideoncom.html
https://www.next-gen.biz/features/unlimited-detail-engine-gets-new-airing
https://twitter.com/#!/ID_AA_Carmack/status/98127398683422720
Carmack has something to say too.
I don’t know. And I don’t really give a fuck if it’s a scam. But really, if it isn’t and they are genuinely working towards making it happen - that’s good for everyone.
He’s jelly because Minecraft lags like shit compared to the point cloud.
^ I think that’s pretty much bang on, considering what Notch the Appropriator did to create Minecraft.
Referring you to Infiniminer and the fact that there is still no retail release for Minecraft.
And all he did to ‘uncover’ Euclideon was make some assumptions and allegations.
Hypocrite much?
The storage problem argument still stands, though.
danielsangeo has a point with the detail thing. Look at it this way: say your monitor is 1920x1080, so it only has that many pixels. Ignoring the whole atom argument, the computer can only display that many pixels at a time. Since the monitor screen is flat, the illusion of 3D is created purely by pixel colour, so the computer only needs to render 1920x1080 atoms at any one time, and the algorithm only needs to work out what colour each one should be to create the illusion of a rock or whatever. The problem the company needs to solve is how to create the illusion of more atoms behind the visible ones, especially for animation. Actually, that seems way too simple; someone explain to me why we don’t do this already.
I guess it would be difficult to make it flow well, deleting pixels you can’t see and bringing them back when the character moves.
Ray Casting.
Only the atoms touched by the rays are shown on the screen.
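A very stripped-down sketch of that idea - a plain 3D grid stands in for whatever structure Euclideon actually uses, and the camera is as crude as it gets. The point is just that the work scales with the number of screen pixels, not with how many “atoms” the world holds:

```python
import numpy as np

# Bare-bones ray march through a boolean voxel grid: step each screen ray forward
# until it hits a filled cell, and colour that pixel. One hit per pixel.
# (A real engine would use a hierarchical structure instead of fixed-size steps.)
def render(grid, cam_pos, width=32, height=24, max_dist=64.0, step=0.5):
    image = np.zeros((height, width))
    for py in range(height):
        for px in range(width):
            # crude pinhole camera: one ray per pixel, pointing roughly along +z
            ray = np.array([(px - width / 2) / width, (py - height / 2) / height, 1.0])
            ray /= np.linalg.norm(ray)
            t = 0.0
            while t < max_dist:
                x, y, z = (cam_pos + t * ray).astype(int)
                if (0 <= x < grid.shape[0] and 0 <= y < grid.shape[1]
                        and 0 <= z < grid.shape[2] and grid[x, y, z]):
                    image[py, px] = 1.0 / (1.0 + t)   # nearer hits are brighter
                    break
                t += step
    return image

grid = np.zeros((64, 64, 64), dtype=bool)
grid[20:40, 20:40, 30:50] = True                      # a solid block to look at
print(render(grid, cam_pos=np.array([32.0, 32.0, 0.0])).round(2))
```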
The main issue with voxels is the sheer number of points that need to be rendered.
In a polygon-based system, to create a triangular polygon all you need to know is where the corners are; you then interpolate (fill in the missing data) between them to create the edges of your surface, which can be either straight lines or splines (curves calculated from the angles needed to pass smoothly through every point of a bendy line).
Once this is done you can take a texture, clip it using the polygon as a reference (it’s a bit more complicated than that, but this is the simple version) and lay it on top of the polygon to create an object.
This means that for a pyramid you only need to store 4 points, each just a handful of values (x, y, z, colour), plus a .jpg holding the texture.
This system is also animation-efficient, because you can push the object (or parts of it) through transformation matrices and translate the points cheaply (working out matrices is the main job of the graphics card).
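Rough sketch of what that matrix step looks like (the pyramid’s coordinates and the rotation are arbitrary, just to show how little data moves):

```python
import numpy as np

# Why polygon animation is cheap: the whole object moves by multiplying its
# handful of corner points by one 4x4 transformation matrix (exactly the kind
# of job a graphics card is built for). A point cloud would have to do this
# for every single point in the object.
def rotate_y(angle):
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

# four corners of a triangular pyramid, in homogeneous coordinates
pyramid = np.array([[0.0, 0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0, 1.0],
                    [0.5, 1.0, 0.5, 1.0]])

moved = pyramid @ rotate_y(np.pi / 4).T    # 4 points transformed, job done
print(moved.round(3))
```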
A voxel or point cloud based system needs to store all the points that make up the entire object, not just a few defining points.
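Some back-of-the-envelope numbers, with made-up resolutions and byte counts, just to show how fast that blows up:

```python
# Rough storage comparison (all numbers are illustrative guesses, not real figures).
bytes_per_point = 3 * 4 + 4            # x, y, z as 32-bit floats plus a colour = 16 bytes

# Polygon version of a 1 m cube: 8 corners plus, say, a 512x512 RGB texture.
mesh_bytes = 8 * bytes_per_point + 512 * 512 * 3

# Point-cloud version of the same cube, stored solid, at 1 mm spacing.
points = 1000 ** 3                     # a billion points
cloud_bytes = points * bytes_per_point

print(f"mesh:  {mesh_bytes / 1e6:.1f} MB")   # well under a megabyte
print(f"cloud: {cloud_bytes / 1e9:.1f} GB")  # ~16 GB for one one-metre object
```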
And the problem with what you said about only needing the ~1920x1080 voxels that make up the screen is that a computer isn’t instinctive: it would have to search through every stored voxel and do a z-buffer (depth) check to see whether any other voxel sits between it and the camera. That’s a vast amount of data.
Animation is hard as well, because you have to define the movement for every voxel separately instead of just the defined corner points used in a polygon-based system.
The only way I’ve been able to work out how his engine might work is that at run-time he loads a basic polygon model from memory along with a detailed texture file, then uses the texture to fill in the missing points between the polygon’s defined points (a kind of pseudo-tessellation). Once that’s done, the model is stored in a tree whose depth criterion isn’t position but detail level (worked out from some sort of LOD indicator). When displaying a model, it can work out the distance from the camera and the area of the model that’s visible (from z-buffer checks), and then use the detailed texture map to fill in the spaces between the polygon’s defined points, with the polygon acting as a skeleton to define location in space and the texture taking over to provide the LOD.
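If I had to sketch that detail-tree guess in code it would look something like this - every name and threshold here is my own invention, not anything Euclideon has actually said:

```python
# Loose sketch of a "detail tree": each node stores a coarse point plus children
# that refine it, and you only descend while the node would still cover more
# than roughly a pixel on screen. The pixel-size test is a guess.
class Node:
    def __init__(self, centre, size, children=None):
        self.centre, self.size, self.children = centre, size, children or []

def collect_visible(node, cam_pos, pixel_angle=0.002, out=None):
    out = out if out is not None else []
    dist = max(1e-6, sum((c - p) ** 2 for c, p in zip(node.centre, cam_pos)) ** 0.5)
    if node.size / dist < pixel_angle or not node.children:
        out.append(node.centre)        # already "one pixel's worth" of detail
    else:
        for child in node.children:    # otherwise recurse into the finer points
            collect_visible(child, cam_pos, pixel_angle, out)
    return out

leaf = lambda x, y, z: Node((x, y, z), 0.1)
tree = Node((0, 0, 0), 1.0, [leaf(-0.3, 0, 0), leaf(0.3, 0, 0), leaf(0, 0.3, 0)])
print(len(collect_visible(tree, cam_pos=(0, 0, 2))))     # close up: descends to the leaves
print(len(collect_visible(tree, cam_pos=(0, 0, 2000))))  # far away: one coarse point
```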
What I don’t see is why he doesn’t try to interface his engine with a graphics card so it can make use of the card’s multiple onboard processors.
I did watch the video. And my point was that it’s inefficient, and that physical simulation of the atoms would be impossible without ridiculous amounts of processing power, a very large buffer for the atoms and, as you pointed out, plenty of hard disk space to house all of it.
Looking at this thread, it’s pretty damn evident that people have largely unlearned how to think outside the box. That’s all.