Unlimited Detail

I’ve learned that people tend to blind themselves just so they can think outside the box. No one’s claiming this can’t work, but with present technology and the present evidence of their engine’s abilities, it’s very doubtful it’s going to amount to much right now.

No shit, Sherlock; the current state of technology is geared towards polygonal rendering.

Without hardware improvement (optimisation, in our example) there is no stimulus for software improvement.

I’m not thinking in a box, I’m thinking in terms of current boxes. If you follow.

EDIT: Never mind, Someonerandum beat me to it. Disregard this.

This may work for limited uses in a hybrid engine. That way you could have a super-detailed world built on point data/voxels, with polygons for animated models.

Okay. But if you listen to the vid (especially the previous one), there were quite a few words about “current boxes”. They are criticising current solutions (‘only 25% a year’) and proposing alternatives. It’s just people getting overly excited/sceptical/crying scam in a purely consumerist fashion. Business as usual.
Them being a technology company doesn’t mean they are only proposing software. Okay, they most likely are right now (although I’m not sure) - but no one was talking about the current state of affairs.

Btw, this lot is government-funded, if you didn’t know. That makes Notch a total jelly ass.

I think the best implementation of this sort of tech would come after they figure out a way to perform physics calculations on the atoms. That way destructible cover could be chipped away bullet by bullet, calculated in real time as opposed to scripted wall chipping. Plus, you’d have better-looking footprints on surfaces if you could do a light physics sim on mud or snow. Or you could simulate frag grenades by calculating the individual shrapnel generated instead of a hitbox damage calculation. Hell, you could simulate an NPC trying to keep balance on a slippery area.
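The per-point destruction idea above can be sketched in a few lines. This is a toy illustration only (the name `chip_away` and the flat-set representation are made up for the example; a real engine would cull points through an octree or similar spatial structure):

```python
# Toy sketch of per-point destructible cover: a bullet impact simply
# deletes every "atom" within its blast radius, no scripting involved.

def chip_away(points, impact, radius):
    """Remove every point within `radius` of the bullet impact."""
    r2 = radius ** 2
    return {p for p in points
            if sum((a - b) ** 2 for a, b in zip(p, impact)) > r2}

# A 4x4x1 slab of "atoms":
wall = {(x, y, 0) for x in range(4) for y in range(4)}
wall = chip_away(wall, impact=(0, 0, 0), radius=1.5)
print(len(wall))  # 12 atoms survive; the corner has been chipped off
```

Each shot just repeats the call with a new impact point, which is why this scales with point count rather than with hand-authored damage states.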

When computers with 10-core CPUs and dozens of terabytes of storage are considered shit, maybe this tech will finally see the light of day. :stuck_out_tongue:

10-core CPUs are redundant when one can simply change architecture.
For example, Russian supercomputers (yes, the Soviets had been developing Elbrus since the ’70s) utilise the Explicitly Parallel Instruction Computing (EPIC) architecture, which right now allows 23 operations per cycle, compared to your average superscalar’s 1-8 operations per cycle, which is bound to the number of cores. This also allows for lower frequencies.

They are now working on a home version which isn’t as bulky and is backward compatible with superscalars, as a proof of concept. The only thing stopping them is the loss of production capacity when everything fell apart in the ’90s - the post-Soviet electronics and instrumentation industries were among the first to be gutted.
They are still at >100 nm, but they are slowly recovering.
And of course, no one wants to let go of the multi-core monopoly in today’s computing.

Intel’s Itanium architecture is the Western EPIC supercomputing equivalent, in development since 1989 IIRC. And they look quite successful. But they don’t look like they are developing for home markets.

https://en.wikipedia.org/wiki/Explicitly_parallel_instruction_computing
https://en.wikipedia.org/wiki/Elbrus_(computer)
https://en.wikipedia.org/wiki/VLIW - note the backward-compatibility section; it may well be relevant to the thread topic
https://en.wikipedia.org/wiki/Itanium

P.S. Btw, despite the poor industry capacity and lower frequency, the Russian backward-compatible commercial prototype, made with 1991 tech, already passed the test spec several seconds faster than the Intel machine in IA-32 mode back in 2005. Let alone VLIW mode, where the 300 MHz machine showed the performance of a 2 GHz Pentium - and that on a 130 nm die.
Oh, and in 2008 they made a dual-core on a 90 nm die, with a higher frequency.

Too bad the patent holder is a company registered in the Cayman Islands, and most of the Russian and Soviet brainpower is working for Intel now.
As of today, only the military (AA and missile defence) is using them.

Either way, when a 300 MHz dual-core reaches exactly half the teraflops of a 2.4 GHz Core 2 Duo - it’s pretty remarkable.

tl; dr

Just kidding, but you sure know quite a lot about this stuff.

That’s just… wow O_o

Btw, Itanium gets 20 operations per cycle at 800 MHz as of 2008. That’s basically like a 16 GHz x86 processor.
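That equivalence is just throughput arithmetic. A quick check, under the simplifying assumption (not stated in the post) that the x86 baseline retires one operation per cycle:

```python
# Rough throughput comparison implied above. Simplification: assumes the
# x86 baseline retires exactly 1 operation per cycle, which real
# superscalar x86 chips routinely beat.
itanium_ops_per_sec = 20 * 800e6                   # 20 ops/cycle at 800 MHz
equivalent_x86_clock_hz = itanium_ops_per_sec / 1  # at 1 op/cycle
print(equivalent_x86_clock_hz / 1e9)               # 16.0 (GHz)
```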

Sony has been using a similar approach, IIRC, in the PS2’s Emotion Engine and then the PS3’s Cell. I may be wrong though.

Pardon my ignorance but isn’t it scalar processors that can only do one instruction per cycle? Or do you mean something different by the word ‘operation’? As I understand it modern processors have plenty of math and logic units that can be used simultaneously per their superscalar nature.

At the very least one would think that it makes a lot more sense to get as much work done per cycle than increasing the clock frequency to the point of instability and physical limits of the material.

What does 20 operations per cycle mean anyway? 20 adds or 20 muls per cycle, or what?
I heard that interest in Itanium is very low because of the odd architecture.

The latest processors have 8 cores.
https://www.maximumpc.com/article/news/amd_announces_8-core_bulldozer_cpu

Read the damn wiki article, maybe.
A usual superscalar makes one operation per cycle per core. The higher the clock rate on a superscalar, the more cycles you can achieve in a given time. Furthermore, you can have more cores.
VLIW uses the compiler to bundle several operations into one cycle.
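That compiler step can be sketched as a greedy, in-order bundler. This is a toy model (real VLIW compilers also reorder instructions and software-pipeline loops; all names and the op encoding here are made up):

```python
# Toy VLIW bundler: pack independent operations into bundles that can
# issue in a single cycle. An op is (dest, src1, src2); an op depends on
# an earlier one if it reads that op's destination register.

def bundle(ops, width=4):
    bundles, current, written = [], [], set()
    for dest, a, b in ops:
        # Close the bundle if this op reads a register written in the
        # current bundle (RAW dependency), or the bundle is full.
        if a in written or b in written or len(current) == width:
            bundles.append(current)
            current, written = [], set()
        current.append((dest, a, b))
        written.add(dest)
    bundles.append(current)
    return bundles

ops = [("t1", "a", "b"), ("t2", "c", "d"),   # independent -> same bundle
       ("t3", "t1", "t2"),                   # reads t1 and t2 -> next cycle
       ("t4", "e", "f")]                     # independent -> rides along
print(len(bundle(ops)))  # 2 bundles, i.e. 2 cycles instead of 4
```

The point of the example: the dependency analysis happens once, at compile time, so the chip itself never has to do it.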

Compare Nehalem:

to Itanium:


As for Itanium:

Tukwila prices:

Elbrus is trying to make a backward-compatible home version with a binary translator.
As for the weird (more like efficient) architecture, the monopoly is now in the hands of the multi-core crowd.

As soon as EPIC processors reach mass production with enough capacity at, say, 40 nm, they’re going to lose 70-80% of their current price.

I know. You did say “1-8 operations per cycle” and used the number of cores as the base for it. So I assumed that you were basically saying that a single-core superscalar processor can only do one operation (instruction?) per cycle, which would be bollocks. But I may just be misunderstanding what you originally meant.

Oh shit. I mixed up a lot in my previous posts. Sorry for that, folks.

Scalars are the ones limited to one operation per cycle, not superscalars. Pure superscalars are usually single-core architectures compared to multi-core.

The limitations of superscalar processors are of a different nature. They cannot perform interdependent operations at once, so the CPU needs to do a hell of a lot of work to run checks for dependencies. This also requires additional circuitry within the chip.
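The runtime check described above, which a superscalar does in dedicated issue-logic circuitry every cycle, amounts to something like this in software (a toy model; the instruction encoding is made up for the example):

```python
# Toy sketch of the hazard check a superscalar must perform in hardware
# before issuing two instructions in the same cycle. Each op is
# (dest, (src1, src2)). The pair can co-issue only if neither reads the
# other's result (RAW) and they don't write the same register (WAW).

def can_dual_issue(op1, op2):
    dest1, srcs1 = op1
    dest2, srcs2 = op2
    return dest1 not in srcs2 and dest2 not in srcs1 and dest1 != dest2

print(can_dual_issue(("t1", ("a", "b")), ("t2", ("c", "d"))))   # True
print(can_dual_issue(("t1", ("a", "b")), ("t2", ("t1", "d"))))  # False
```

In a wide superscalar this check has to be done for every pair of in-flight instructions, every cycle, which is where the extra circuitry goes.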

To efficiently perform 24 operations per cycle, as Elbrus does, is a wet dream for a superscalar.
VLIW and EPIC offload that work from the CPU to the compiler - hence more capability. This also allows the burdensome circuitry to be removed and the freed-up space used for more useful purposes. But that needs a different memory architecture, such as ccNUMA. The latest Elbrus uses NUMA.
And hyperthreading/multi-core is another alternative, of course - the cores themselves are not usually pure superscalars in multi-core architectures.

Anyway, here’s an armoured notebook from Russia. It runs a Linux-based Russian OS. -20 to +70 °C operating temperature.

And a 4-processor module ready for mass production:


32 GFLOPS, 8 GB RAM, 32 GB/s, -10 to +50 °C operating temperature. All that on a 120 W supply.

Here’s some other armoured stuff on the right side:

I know this might sound hypocritical coming from me, but save your effort. It’s far too much work trying to explain computer architecture through forum posts.

Aye, in basic terms it looks rather simple to me.

It looks pretty damned cool, and I want to understand it… but I just can’t…

Founded in 2004, Leakfree.org became one of the first online communities dedicated to Valve’s Source engine development. It is more famously known for the formation of Black Mesa: Source under the 'Leakfree Modification Team' handle in September 2004.