Unlimited Detail

Normal maps exist to add lighting detail without increasing polygon count, not to eliminate noise on meshes.
High polygonal density does produce noise if there’s no AA, but that’s easily solved with a clever AA scheme.
LODs serve the purpose of freeing computational power for high-poly meshes, especially when the rendered mesh covers just a handful of pixels and doesn’t need all of the high-poly data to display its shape and light interaction.

Hardly relevant to a discussion on game rendering engines, I’d say.

NVIDIA GPUs (and AMD’s too, for that matter) are rasterizers at their core, not well suited to other algorithms. A new algorithm would need better data structures and acceleration, so it’s accurate to say that a new rendering paradigm would benefit from different hardware.
For now, polygons and rasterization are the best choice for the current hardware design, even if compute is gradually changing the situation as time passes.

Euclideon has been known for some time as a vaporware company, and every piece of research they publish is made from scratch just for videos and to seek financing (which never happens, btw). The company is owned by a crazy guy who is no stranger to this kind of initiative; lately he made himself “known” again by trying to market an Oculus Rift killer (I don’t remember what its name was, just google it) at big electronics events. Needless to say, the product was not only inferior in design, it was vaporware as well.
I’d suggest researching more instead of being progressive just for the sake of it.
Atomontage would have been a better choice to defend, since it’s kind of interesting and it actually exists, but Euclideon is just a joke.

A world of difference actually, since after computation the data must be streamed to the GPU for display. Currently only specialized hardware could stream all that data at interactive framerates; not even a big gaming rig can do it.

Unreal Engine 5 maybe?

NVIDIA GPUs are general-purpose, highly parallelized computing devices at their core. NVIDIA CUDA is one of their primary technologies. For cross-platform (i.e. cross-vendor) applications, we have OpenCL. Those computing technologies are used everywhere nowadays (physics simulations, non-polygonal rendering engines, neural networks, etc.), so saying “they’re not suited for other applications” is nonsense.

When the amount of data is massive, you modify the algorithm to render it without decompressing it. Most of the point cloud will be highly similar data or empty space. There’s already DXT compression for 2D images; something similar should be done for volumetric data too. Voxels compress well with octrees.
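To make the octree-compression point concrete, here is a minimal sketch of a sparse voxel octree node; the type and field names are hypothetical, and it only illustrates the idea that empty space costs nothing and uniform regions collapse into single leaves:

```cpp
#include <array>
#include <cstdint>
#include <memory>

// Minimal sparse-voxel-octree node: empty space costs nothing because
// absent children are simply null pointers, and uniform regions collapse
// into one leaf instead of storing every voxel individually.
struct SvoNode {
    bool isLeaf = false;
    uint32_t material = 0;                          // payload of a uniform region
    std::array<std::unique_ptr<SvoNode>, 8> child;  // null child = empty space

    // If all eight children are identical uniform leaves, replace them
    // with a single leaf; this collapse is where the compression comes from.
    void tryCollapse() {
        for (const auto& c : child)
            if (!c || !c->isLeaf || c->material != child[0]->material)
                return;
        material = child[0]->material;
        isLeaf = true;
        for (auto& c : child) c.reset();
    }
};
```

A renderer can then march rays through the tree directly, skipping whole empty subtrees, which is how the data gets displayed without ever being fully decompressed.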

Frankly, I’d expect you to know/understand that already.

If you take the time to fully read my sentence, you’ll notice I said “it’s not well suited for other applications”. And this is a fact. While compute (and CUDA/OpenCL is a part of it) is shifting the perspective, the chip design still retains a bit of the classic rasterizer layout. Read up on John Carmack’s articles on this.
If what you are saying were true, we would be running a raytracer in Unreal Engine, not a rasterizer like it is now. While compute parallelizes and is very powerful, we’re not yet at the point where we can fully utilize that power without going through a classic pipeline, at least for games.
Also please, try not to distort what people are saying just to prove a point, and if you quote what people say, please do it fully.

At this moment you cannot even run a GI solution at interactive framerates with octrees (look up SVOGI, for instance) because traversal speed is bad, so how well do you expect it to perform against a multi-terabyte asset that also needs to be streamed from disk to card? Bottlenecks are in data transfer most of the time. After that bottleneck you need to traverse the octree, then perform processing. Do you really think all of this would be feasible now?
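To illustrate why traversal speed hurts, here is a hedged, self-contained sketch (hypothetical names, not any engine’s actual code) of a descent into an octree; every iteration waits on the previous pointer fetch, so deep trees become long chains of cache misses, exactly the kind of latency GPUs hide poorly:

```cpp
#include <array>
#include <memory>

struct Node {
    bool leaf = false;
    std::array<std::unique_ptr<Node>, 8> child;  // null child = empty space
};

// Descend to the leaf containing point (px, py, pz), starting from a node
// centered at (cx, cy, cz) with half-extent `half`. Each step is a dependent
// memory read: the next fetch cannot start until the previous one finishes.
const Node* lookup(const Node* n, float px, float py, float pz,
                   float cx, float cy, float cz, float half) {
    while (n && !n->leaf) {
        int oct = (px > cx) | ((py > cy) << 1) | ((pz > cz) << 2);
        half *= 0.5f;
        cx += (px > cx) ? half : -half;
        cy += (py > cy) ? half : -half;
        cz += (pz > cz) ? half : -half;
        n = n->child[oct].get();
    }
    return n;  // null means the query point is in empty space
}
```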

Also, saying “Most of the point cloud will be highly similar data or empty space” is just a wild assumption. There’s not enough data based on games out there to support this statement. You also have no real control over what the users will be creating, and you need to make the solution work for whatever asset your artists are gonna produce, especially in a generalized engine like UE. Look at Minecraft, and see how inventively the human mind can work against your assumptions.

If the whole industry is going in one direction instead of another, there’s bound to be a valid reason, don’t you think? Have you really researched the pros and cons of what you are proposing while considering the state of things as it is now? Or are you gonna be progressive just for the sake of it?
Frankly, I’d avoid this much arrogance in comments and research some more.

@NegInfinity

In any case, I just reread the thread; what’s up with the change? What you said at the beginning of the thread is what I’m saying, so why do I feel we’re discussing for nothing? :slight_smile:

HoloTrek? No!!!

Such technology may still be several centuries ahead of anything existing today.
Inside such a thing, reality would be replicated entirely using strange quantum properties and a kind of electromagnetism.

By then it would be smarter to have the whole game experience in your brain, no need for polygons.

Well, Branislav Siles is doing it with the Atomontage Engine…

http://atomontage.com/

That’s Ad Populum.

The reason the industry is using an old rendering approach (from the nineties) with small cosmetic changes over time is that it worked once and then everyone has been trying to do the same. That’s a “chicken and egg” problem.
However, flat polygonal data is ill-suited for 3D representation in general (destruction and CSG turn into fairly complex problems, which is not the case when you switch to voxels or even simplex data), so at this point this approach has been pushed to the limit, pretty much, and there have been steps away from it (parallax mapping is one example, since it is pretty much volume display).
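To illustrate the destruction/CSG point: on a voxel grid, boolean operations reduce to per-cell logic with no clipping, re-triangulation, or degenerate-geometry edge cases. A minimal sketch under the assumption of two dense grids of identical dimensions (all names made up):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Dense voxel grid: 0 = empty, nonzero = solid material id.
using VoxelGrid = std::vector<uint8_t>;

// Carve b out of a (CSG difference). Assumes both grids have the
// same dimensions; the whole operation is one pass of cell logic.
void csgSubtract(VoxelGrid& a, const VoxelGrid& b) {
    for (std::size_t i = 0; i < a.size(); ++i)
        if (b[i]) a[i] = 0;
}

// Merge b into a (CSG union): equally trivial per cell.
void csgUnion(VoxelGrid& a, const VoxelGrid& b) {
    for (std::size_t i = 0; i < a.size(); ++i)
        if (b[i]) a[i] = b[i];
}
```

The same subtraction on polygonal meshes means boolean operations on triangle soups, with all the robustness problems that entails; that asymmetry is the complexity gap described above.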

So, it would be wise to start researching the alternatives right now.

Also, I’d like to point out that a “realtime GI solution” is a fundamentally different (and more difficult) problem that requires more resources than simple volumetric data display, so it is not a good argument against non-polygonal data. The issue with GI is that any one point in the scene can potentially affect an infinite number of other points in the scene. That is not the case with simple volumetric data display, which only requires a switch away from the rasterization approach and can use the same kinds of algorithms that are used right now for polygonal display.

Speaking of realtime raytracing, this kind of demo was done about 10 years ago and could run on a CPU. Now we have more computing power.

Once again, “realtime raytracing” does not really apply to the issue of volumetric data either, because realtime raytracing of a scene that involves dynamically moving volumes is, once again, a different problem that would require a way to quickly rebuild the scene octree (or an alternative) while a dynamic object passes through it. You do not need this kind of thing when you’re simply switching away from the polygonal approach to a non-polygonal one, because once an object is in the scene, regardless of the way it was visualized, you can use the same kind of tricks on it as you used on polygons, meaning fake reflections and all that stuff.

I believe my initial assumption about point cloud data is relatively safe, because unless someone literally goes nuts, in most scenarios the point cloud will have high-frequency data around the surface and low-frequency detail in the interior, simply because most people won’t be seeing the interior. That’s great compression potential, similar to DXT/JPEG/whatever.

Now, I consider those things to be obvious.

Either way, my initial comment was caused by the annoyance I feel every time people try to shoot down a potentially useful approach “just because it ain’t polygons”. I would appreciate it if people wouldn’t do that and were a bit more open-minded about it.

That’s all there is to it, pretty much.

Firstly, it’s worth considering how much rendering has actually changed over the past 20 years, since the introduction of hardware acceleration. Whilst we may still be using polys, there have been lots of other changes: going from forward to deferred rendering, g-buffers, the introduction and evolution of shaders, etc.

We haven’t failed to move on because we’re happy with what we’ve got, but because we haven’t had hardware suitable for other methods.

There were a few games in the mid-to-late 90s that embraced voxels to create large landscapes, which wouldn’t have been handled so well with polys, but within a couple of years of the first Delta Force being released, GFX hardware could handle landscapes on that scale better with polys than voxels.

What is interesting is to see hardware ray tracing units coming on PowerVR cards. I’d like to know more about their performance against GPUs and CPUs doing those calculations themselves. If they offer a significant gain, then I’d expect to see them crop up on AMD and NVIDIA GPUs in 3-5 years’ time (I assume there is no patent on hardware ray tracing units). Those units have the potential to speed up AI, IK, physics and audio, and we know NVIDIA and AMD are both trying to get their GPUs to do more than just GFX.

That is not “a lot” of changes.

Many technologies were originally conceived/created years ago, and then hardware caught up.

Shaders, for example, were originally created for RenderMan, which is … 25 years old?
Ideas used in 3D rendering can be traced back to fairly old research papers. While the initial release of, say, OpenGL is marked as 1992, IIRC there were articles describing polygonal rendering, stencil buffers, etc. BEFORE that.

It was all about using a tried approach for a quarter of a century and continuously adding more transistors to the graphics chip to make it able to draw more polygons and handle larger textures.

I simply think this is not good enough.

I have not read this thread, but I will tell you that this is snake oil. They claimed it was going to be the biggest thing since sliced bread years ago, but the tech only works if the geometry (the voxels) is stationary. Also, this will only work if you are using a lot of instanced voxel geometry. In many of their videos they only show a few types of voxel geometries, meaning they are utilizing many instances of, say, one or two voxel geometries. The amount of disk space required for this in a game would be ludicrous too, though with good compression it would be fine. It’s really a neat idea, but no one has found a good way to animate this tech yet; once that happens I could see this gaining some merit.

One thing that riles me up, though, is that they claim and act like they are doing something brand new when in fact it’s been done for many, many years now. It’s not new and it’s not what they claim it to be. It’s just a sparse voxel octree…

For a better example of this tech, look at the Atomontage engine, or Voxlap, or the new Voxlap PND3D.

What nobody ever wants to discuss when talking about Sparse Voxel Octrees is animation.

You can’t animate SVOs very well; bone rigging is a nightmare, not to mention collision hulls. So you’d still be using polygonal rendering for all skeletally-driven objects, and you’d need to keep polygon data on hand for all physics/queries/etc.

So even if SVOs can work for making cobblestones look super-good up close without chewing up all your FPS and disk space (and NVIDIA has a tech demo and whitepaper on this that seem interesting), all of the workflows and pipelines will still require old-school polygon-based meshing for most of the critical objects and assets in the game.

And having to simultaneously render both would further compound the processing expense and complicate the workflow… And it’s sort of like, are really detailed cobblestones all that worth it?

Apologies for the deviation everyone.

It’s a lot in the sense that many developers have had to significantly rewrite or create engines from scratch several times per console generation. That costs a significant portion of the dev time and money during development, even when it comes to buying and adapting middleware to suit your game’s needs.

And what experience do you have to put any value, for us, in your belief that chip designers and software engineers haven’t been doing a good enough job? Do you have a sensible idea of what sacrifices would have been made elsewhere if we had ray tracing, and whether the results would be comparable, better or worse than what we have had instead since the late 90s? If you have a solid and educated idea of where we could be right now instead, and how we got there, all backed up by facts, (so many commas!) then I’d like to hear all of it.

Since this thread has assumed sensational proportions around such a very promising “Unlimited Detail” technology…

Maybe in the future I can be enjoying a sensational experience in Augmented Reality in my personal HoloTrek, unfolding myself over a hypercube, using electromagnetic and gravitational forces.

Well, yes. If the people behind this new tech are being serious about what they do, then they can take the source of the UE4 engine and implement it themselves. :smiley:

This game was made in 1999 and uses raycasting (called “voxel” at the time) to render the terrain:
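For context, the “voxel” terrain of that era (Comanche’s Voxel Space, Outcast) boils down to heightmap raycasting: march a ray across a height/color map for each screen column and paint vertical spans that rise above the horizon drawn so far. A minimal sketch of the idea, with made-up names, assuming a power-of-two map and an existing framebuffer:

```cpp
#include <algorithm>
#include <cstdint>

// Classic 90s "voxel terrain" column renderer: per screen column, step a
// ray across the heightmap and fill vertical spans whenever the projected
// terrain rises above everything painted so far (the horizon line).
void renderColumn(const uint8_t* heightMap, const uint32_t* colorMap,
                  int mapSize,                    // power of two, e.g. 1024
                  uint32_t* frameBuffer, int screenW, int screenH,
                  int column, float camX, float camY, float camZ,
                  float rayDX, float rayDY, float maxDist) {
    int horizon = screenH;                        // lowest unpainted row
    for (float dist = 1.0f; dist < maxDist; dist += 1.0f) {
        int mx = static_cast<int>(camX + rayDX * dist) & (mapSize - 1);
        int my = static_cast<int>(camY + rayDY * dist) & (mapSize - 1);
        float h = heightMap[my * mapSize + mx];
        // Perspective projection of the terrain height into a screen row.
        int row = static_cast<int>((camZ - h) / dist * 240.0f + screenH * 0.5f);
        row = std::max(row, 0);
        if (row < horizon) {                      // a newly visible span
            uint32_t c = colorMap[my * mapSize + mx];
            for (int y = row; y < horizon; ++y)
                frameBuffer[y * screenW + column] = c;
            horizon = row;
        }
    }
}
```

No polygons are involved anywhere; occlusion falls out of the rising horizon line, which is why these engines could draw huge terrains on mid-90s CPUs.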

Didn’t read the whole thread, but I saw right away that you mentioned Euclideon, and just wanted to say: it was a scam, a big one. They advertised a new and unique technology able to do all sorts of things, especially animation. This last one (which is the most crucial when it comes to games and all dynamic situations) proved impossible for them, even though they said they had had it done for 1 year+, and unfortunately that brings the whole model down. Plus, doing this in UE4 would require maybe 90% (?) of it being rewritten, so the odds aren’t very big that this will be implemented :slight_smile:

PS: I basically posted to trash Euclideon, hah; it wasn’t intentional, but it was a big disappointment.

I think I’ve listed a few quite obvious issues with the polygonal approach in this thread, and quite a few advantages of the volumetric approach as well.
Repeating it all over again is pointless.

Trying to change your opinion would take more time than I’d like, so I won’t be doing that. You have internet access; look things up and draw your own conclusions. My opinion makes sense to me, and that’s good enough for me.

This is unfortunate. Perhaps someone will eventually come up with something better.

Oh, it was a scam? So that’s the kind of word that gets used to describe an effort to bring out new tech? I wasn’t aware that one could change the world within a year. Kool story…

And no, one doesn’t need animation if one only wants to replace the static part of a world, like terrain. This is called a hybrid approach. Also, the thing is that Euclideon has been around since 2011 and is still working hard to make this tech a reality. And hey, guess what? Those so-called “scammers” made their way into the geospatial business, actually contributed with their technology, and make some money now. Their goal of bringing this tech to the gaming world still exists, too. I read that there are two games in the works. We should learn more about this in 2016 and see the implemented animation as well.

Thanks for saying “called voxel at the time”, because it seems that some don’t realize it ain’t actual voxels in Outcast.