OK, ok guys, I think I opened Pandora's box here.
The tests I had been doing were really small, to keep them manageable, and I want to keep it that way until it's a bit more stable.
In any case, my code doesn't even compile right now; I need to finish updating to 4.7 (4.7.2 now…).
Ok, no problem.
Great stuff! I am really interested in this tech. Some friends and I are trying to come up with an offline-type solution using UE4 in an episodic animation pipeline, and being able to render GI in real time, as opposed to baking it, would go a long way toward improving not just turnaround time but also shading quality throughout the pipeline.
For instance, I want to do a lot of lighting through emissives, not just because it generally looks better, but also to save a lot of setup time, leaving lights to be added to shots where necessary.
I have a thread going here (sorry, not trying to hijack): Lighting and Post Experiments - Film, TV & Animation - Epic Developer Community Forums, which shows the kind of look we are going for, but it also highlights some of the limitations in the current method I am using. For example, the characters, which are skeletal meshes, do not really cast any shadows. Since our goal is to treat UE4 as an offline renderer, we can render scenes at a much higher quality than a game would need, since we are rendering frames to file for later playback and post work.
Again, great stuff!
Hope to see more soon!
Well, the project I was working on is (mostly) finished and (mostly) delivered, so I hope to be able to get back into it by the weekend. Will have more news soon.
Good to hear, man. Can't wait to see it.
Well, the project I was doing is now officially finished, so back to AHR!
It's currently not compiling, as the update to 4.7 broke a few things, but I will try to get it working over the weekend.
Until later, people.
Yes, I think the change to the rendering code is FInstancedStaticMeshSceneProxy::DrawDynamicElements in InstancedStaticMesh.cpp, which has been replaced by a caching method, i.e. they cache the dynamic mesh draw list at the start of the frame and reuse it for all passes. In DeferredShadingRenderer.cpp, you will notice that each section of code which used DrawDynamicElements has been replaced by DrawDynamicMesh. I hope this helps.
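For anyone else porting a scene proxy: as far as I can tell, the proxy-side replacement in 4.7 is GetDynamicMeshElements, which fills an FMeshElementCollector once per frame instead of drawing immediately. A rough sketch of the pattern (simplified from memory, not exact engine code):

```cpp
// Inside your FPrimitiveSceneProxy subclass. This is the 4.7-style method
// that replaces DrawDynamicElements: the collector caches FMeshBatch
// entries, which the renderer then reuses across passes via the
// DrawDynamicMesh calls mentioned above.
virtual void GetDynamicMeshElements(
    const TArray<const FSceneView*>& Views,
    const FSceneViewFamily& ViewFamily,
    uint32 VisibilityMap,
    FMeshElementCollector& Collector) const override
{
    for (int32 ViewIndex = 0; ViewIndex < Views.Num(); ViewIndex++)
    {
        if (VisibilityMap & (1 << ViewIndex))
        {
            // Allocate a mesh batch owned by the collector (cached for the frame)
            FMeshBatch& MeshBatch = Collector.AllocateMesh();
            // ... fill in vertex factory, material proxy, elements, etc. ...
            Collector.AddMesh(ViewIndex, MeshBatch);
        }
    }
}
```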
Quick update.
Got the engine to boot, but I still need to rework the voxelization stage. Will try to get all that working over the week.
I'm also making some changes to the tracing part that should bring some good performance improvements. Preliminary tests place the tracing at about 6 ms for a half-res buffer at 1080p, down from about 25 ms. Still preliminary, though.
Preferably, I would like it if there were an option to adjust as many of the quality settings as possible, so that if we wanted we could have fine lighting detail.
Don’t worry, it’s quite configurable
How does it handle transparency and refraction?
How does it compare with VXGI?
AHR and VXGI are just two different answers to the same problem. Feature-wise, they are mostly the same, but the algorithms are different.
I think both AHR and VXGI are too early in development to compare much else, but like I said before, having two different techniques can't hurt.
Same as VXGI or LPV: it doesn't. For now at least; it could be supported.
[EDIT]
Let me clarify a bit. What it doesn't handle are refractions, caustics, and tracing transparent surfaces. What do I mean by that? When voxelizing, alpha masking is taken into account, but the voxels that end up filled are all considered opaque. Also, you can have a particle system which uses alpha blending and get GI for it (not getting it to affect GI, but getting it to be affected by it), since tracing runs in screen space.
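To make the voxelization point concrete, here is a minimal sketch of the idea (my own illustration, not AHR's actual code): the alpha mask decides whether a voxel gets filled, but the grid only stores occupancy, so every filled voxel reads back as opaque.

```cpp
#include <cstdint>
#include <vector>

struct VoxelGrid
{
    int Res;                    // grid resolution per axis
    std::vector<uint32_t> Bits; // 1 bit per voxel: filled or empty

    explicit VoxelGrid(int InRes)
        : Res(InRes), Bits((size_t(InRes) * InRes * InRes + 31) / 32, 0) {}

    void Fill(int x, int y, int z)
    {
        size_t i = (size_t(z) * Res + y) * Res + x;
        Bits[i / 32] |= 1u << (i % 32);
    }
};

// Alpha-masked voxelization of one sample: the mask decides *whether*
// the voxel is filled, but not *how* transparent it is, so anything
// that lands in the grid is treated as fully opaque by the tracer.
void VoxelizeSample(VoxelGrid& Grid, int x, int y, int z,
                    float Alpha, float MaskThreshold = 0.5f)
{
    if (Alpha >= MaskThreshold) // alpha masking honored here...
        Grid.Fill(x, y, z);     // ...but the stored result is opaque
}
```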
I figured as much about refraction.
About particles: you are saying that particles cannot cast emissive light? (Not a huge problem, since you can attach lights to particles.) They should still show up in reflections though, right?
No, not yet at least. That would require getting the particles into the voxel grid. While it's possible for emissive, for particle reflections you'd need an alpha channel for the voxels, and that's not supported, like I said before.
Edit: they could show up, it's just a matter of getting them into the voxel grid, but they would be treated as opaque. Adding support for transparent objects is easy, but it increases the voxel memory size.
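As a back-of-envelope illustration of that memory cost (my own numbers; the grid resolution here is assumed, not an actual AHR setting): going from a 1-bit occupancy voxel to one with an 8-bit alpha channel multiplies the grid size by roughly 9x.

```cpp
#include <cstdio>

int main()
{
    const long long Res    = 256;             // assumed voxels per axis
    const long long Voxels = Res * Res * Res; // ~16.8M voxels

    const double MiB      = 1024.0 * 1024.0;
    double       BinaryMB = Voxels * 1.0 / 8.0 / MiB;         // 1 bit per voxel
    double       AlphaMB  = Voxels * (1.0 + 8.0) / 8.0 / MiB; // + 8-bit alpha

    std::printf("occupancy only : %.1f MB\n", BinaryMB); // ~2 MB
    std::printf("with alpha     : %.1f MB\n", AlphaMB);  // ~18 MB
    return 0;
}
```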
In our case, we are looking into setting up a GPU cluster for a render farm, probably Titan X's, so memory is less of an issue than it would be for most people. Even if we can't pool memory between cards (which we should be able to), we'd be using the cluster to send shots to dedicated GPUs for rendering so that individual artists would not have to sit and wait for a shot to render. Normally that would not be a major issue if most shots are something like 4 seconds long, but we want to be able to distribute renders as best we can, which would also mean that most workstations would not need a Titan in them if all they are doing is making sure the animation imported right and other things like that.

Ideally, and this is the reason I want to be able to use as much emissive lighting as possible, I don't want to have people taking a lot of time to set up lighting, because it can take a lot of time even in engine, since lights can look very different depending on their angle to the camera, etc. We want to be able to just drop in animation and hit render, and our scripts will handle the rest (ideally). If we can get our import system working the way I want it, we might not even have to worry about placing items in the levels manually, but that depends on the level of effort required to make that happen.
Well, I could certainly add a few cvars for higher-quality voxels and transparency. I'm focusing on getting it to run properly in UE first.
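For what it's worth, UE4's standard mechanism for this is TAutoConsoleVariable; something along these lines would do it (the r.AHR.* names and defaults are hypothetical, just to show the shape):

```cpp
#include "HAL/IConsoleManager.h"

// Hypothetical cvar names/defaults, purely illustrative of the pattern.
static TAutoConsoleVariable<int32> CVarAHRVoxelResolution(
    TEXT("r.AHR.VoxelResolution"),
    256, // assumed default: voxels per axis; higher = finer GI detail, more memory
    TEXT("Voxel grid resolution per axis for AHR GI."),
    ECVF_RenderThreadSafe);

static TAutoConsoleVariable<int32> CVarAHRTransparentVoxels(
    TEXT("r.AHR.TransparentVoxels"),
    0, // off by default, since it adds an alpha channel per voxel
    TEXT("If 1, store an alpha channel per voxel so transparent surfaces can affect GI."),
    ECVF_RenderThreadSafe);

// Read on the render thread, e.g.:
//   int32 Res = CVarAHRVoxelResolution.GetValueOnRenderThread();
```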
No worries, mate, we aren't starting up tomorrow or anything, but we need to have a solid pipeline in place before we start working on our pipeline demo.
Ideally we just need it to do everything that UE can do: all the shaders and their properties, everything. If we have to go another route, we go another route, should this kind of thing limit what we can do or force us to develop our tools further than we would be ready for. At the end of the day, the idea that we could render 90% of our lighting via emissives isn't a deal breaker for us, but baked lighting might be a hindrance, which would mean going fully dynamic with Epic's solution and taking a hit by not getting fine detailed lighting, at least in the beginning. After all, we plan on doing sci-fi episodically, which means lots of particles and explosions, dynamic destruction of environments, etc. We already know that what Epic has will work for us; what I am trying to do is streamline the process and improve overall quality.
So don't sweat the small stuff yet. I am just explaining that we have specific needs and that I am trying to figure out if your solution would work for us. I do, however, very much want to test what you are developing and see what the boundaries might be.
Can't wait to see your work; the video was quite impressive.
Thanks, I posted a second one to that thread the other day. My next step is to look into using Epic's dynamic system and seeing what I can accomplish with that alone.