The RTX 2080 realtime ray tracing hype

I am starting to buy into the RTX hype. I feel it could make my archviz projects look really amazing with the realtime ray tracing stuff (reflections, ambient occlusion, etc.). I know it still uses a mix of rasterization as well, but it still looks very good.

My question is, once this is implemented into the engine, is it something that only Nvidia cards will be able to take advantage of? Will we have to make a separate project for AMD users that falls back on “baked” lighting or the old, standard way of doing things? That seems like double the work… I just have concerns that this is going to end up like VR SLI or the GameWorks stuff, where nobody uses it and it stays exclusive to their specific cards, which sort of kills the hype for me. I want to know the limitations too. Is it limited to smaller scenes? What happens when you have a large open world and want to use even more of the ray tracing features, like the global illumination? Is it still possible to use all of the ray tracing effects in a situation like that?

I’m considering ordering the card tomorrow but am thinking it might be best to wait to see how things go. Any info would be helpful.

One more thing. Basically I’m in a dilemma: I’ve already got a 1080 Ti, and I’m not sure whether upgrading is going to be worth it, since ray tracing seems to be an Nvidia-only feature. Does the DirectX ray tracing API somehow make this all cross-compatible once an AMD solution is released?

Epic only ever implements GPU-agnostic solutions in the main branch, and I don’t see that changing anytime soon (I agree with that; it’s not good to alienate users because of their specific hardware),
so if RTX only works on Nvidia cards, I guess it will stay exclusively in the Nvidia UE4 branch (GameWorks etc.).
Or -maybe- RTX works on non-Nvidia cards but just really slowly (like software rendering), which would kinda be OK to get integrated into the main UE4.

There is a DirectX API for ray tracing that Nvidia supports as well, and that AMD will hopefully support with their future cards, so in the end I think Epic will implement the DirectX ray tracing path and not the GameWorks one.

man… i just wanna bake lightmaps with my gpu… ill buy the quadro ones if i have to…

It runs on AMD now, and on older Nvidia cards. It is a Microsoft API; DX12 just falls back to implementing it using compute shaders if there is no specialized hardware.
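For anyone curious what that looks like at the API level, here’s a minimal sketch of the D3D12 feature check an engine could use to pick a path (the function name and fallback policy are just illustrative, not from any particular engine):

```cpp
#include <windows.h>
#include <d3d12.h>

// Minimal sketch: ask D3D12 whether this adapter exposes hardware-accelerated
// raytracing. If the tier is NOT_SUPPORTED, an app could fall back to
// Microsoft's compute-shader fallback layer or to classic raster techniques.
bool SupportsHardwareRaytracing(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options, sizeof(options))))
        return false;
    return options.RaytracingTier != D3D12_RAYTRACING_TIER_NOT_SUPPORTED;
}
```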

Question: will a Quadro be better for this, or the 2080 Ti? Will there be much of a difference between them?

The best Quadro card, I think, was about US$9-10k… 48 GB of RAM (more than I have in my desktop, lol).

I have a question regarding the RTX cards and VR for any Epic devs here:

So with the announcement today, I found just one article that talks about the new RTX 2080 features for VR. Here: NVIDIA Turing Architecture Propels VR Toward Full Immersion | NVIDIA Blog. It only talked about *Variable Rate Shading* and *Multi-View Rendering*, and discusses the RT core being capable of ray tracing in general, but not whether it’s usable in VR.

No mention of any of the new ray tracing features for VR at all (RT shadows, GI, RT reflections, etc.), which has me slightly concerned. It would be nice to know before I order this thing, as I noticed in the presentation they reported about 45 ms of latency for the AI denoiser to clean up the ray traced lighting effects. That seems too high to work in VR, as I believe the target has always been under 20 ms of motion-to-photon latency (and a 90 Hz headset only leaves about 11 ms per frame). I could imagine that with that sort of latency it could create a kind of smearing effect all over the screen.

Can someone at Epic provide more info on this? I wanted to look into these RTX GPUs for a high-end VR archviz project. I know that these cards would probably not be strong enough to handle all three or four of these effects (RT reflections, shadows, GI, and ambient occlusion) at the same time; I know that ambient occlusion doesn’t even seem to work in VR at all right now… My question is: can it handle even just one of these realtime raytraced effects, or does the latency of the AI denoising kill the entire thing? I am really hoping that it’s only performance constraints that are the problem, and not the underlying technology making this happen (the DLAA stuff Nvidia mentioned).

So there’s a solution from AMD, but when will Epic integrate it?

This is not something that is coming fast, though I was really surprised by the three major titles already announced as using RTX. In the end we will have the following scenarios:

a) a native Nvidia implementation
b) a native AMD implementation
c) both Nvidia and AMD through DirectX, accessing the driver implementation

Previous-generation graphics cards are not obsolete, and as far as I know the new cards will not bring much added performance (if any) in old titles. So what these new cards are really about is whether you want to play games, do archviz, or whatever, with the benefit of ray tracing running as fast as these cards can deliver it.

Also, the new ray tracing features are part of Shader Model 6; the engine only recently became more or less stable on SM5, and doesn’t yet have a good Vulkan implementation, so I would not rush adoption this fast.

Correct me if I’m wrong, anyone!

The Battlefield 5 open beta is happening in early Sept. That seems plenty fast to me.

From what I’m gathering, although NVIDIA has their native implementation, it’s all built in conjunction with Microsoft’s DXR framework. It doesn’t seem like just another VXGI fringe situation. Also, every new NVIDIA card will have this tech built in, and they rebranded everything around it. This ray tracing moment feels like an industry-wide push, and less like another trick added to hype a video card upgrade.

I understand where everyone is coming from: wary of new methods that could cause engine instability, and yearning for support of graphics features seemingly within reach (like GPU lightmap baking or an SVOGI alternative).

I for one couldn’t be more excited for this tech to arrive, and the sooner the better to start working through all the kinks. I’m sure there will be issues with latency, but also with this denoising procedure. I still have to denoise most of my footage rendered out of Arnold, and it introduces a myriad of unwanted effects, similar to what we see with TAA. Also, specular noise is a huge problem with brute-force ray tracing. Curious how they overcome these issues with AI or whatever.

I say bring it on even if it’s a painful transition. I don’t want to bake another light map, add another reflection probe or place a fake shadow to ground a chair leg to the floor.

I would really like to see Epic’s roadmap for feature implementation. Their name and demos are all over everything related to DXR/RTX, but I feel like I haven’t seen a clear official outline of how they will implement it; just a few comments from Epic employees buried in the forums, talk of releasing something by 4.22, etc.

So many questions: Will it work in Forward Shading/VR? Can we raytrace sound (traced rays returning material and sound qualities)? Would be awesome if Epic did one of their learning streams on the subject.

Having said all of this, I am definitely not upgrading until there is a functioning implementation in Unreal or Unity. Also, I’m more of an artist than a developer, so maybe it just seems more enticing on that level.

This is the biggest part of the hype for me, since I actually work in film; and besides that, it will be great for games too, so of course I am all for this!

Yes, right now it’s mostly going to be useful for people that want to render images/videos. But in the meantime, it’ll be a faster alternative to Lightmass, and it might avoid some of the issues that Lightmass has, like the lighting variations between objects.

I was tempted to preorder a card to start hacking on realtime ray tracing in Unreal Engine and compare results against current VXGI and other options. But it looks like even Nvidia’s GameWorks ray tracing integration is currently invite-only:

Although that may open up to all developers by the time the cards release, it’s a gamble.

The closest thing to a timeframe I could find for native UE4 ray tracing support (using GPU-agnostic DXR) was from https://www.eurogamer.net/articles/digitalfoundry-2018-08-20-nvidia-unveil-rtx-2070-rtx-2080-and-rtx-2080-ti-at-gamescom-7006:

Encouraging, but surely a rough estimate at best. There’s no reason not to wait until RTX cards hit the street (including the more affordable RTX 2070 in October), until we see real benchmarks, and until dev tools are in our hands so we can actually use this tech in our own development.

Don’t get me wrong though, I’m all in, can’t believe we’ve finally made this jump!

The “feature” definitely isn’t Nvidia-only. They love to frame it that way because $$$, but it’s based on a newer DX12 API, and next year AMD should show up with support for it as well. I.e., yeah, it’ll be the future. And archviz is definitely something I can see benefiting early from it. If you think it’s worth the cost (for not a huge perf gain otherwise), then go for it. But as others said, maybe wait till it’s out in UE4?

BTW, it is NOT a replacement for lightmapping. Which, yeah, sucks, but ten 2080 Tis aren’t going to get you the same quality GI as Lightmass, let alone one. Nor will you be able to skip placing reflection probes; again, not happening. The hype way overstates things; this isn’t some magic box that can instantly replace precomputed stuff. But if you, say, want a quick preview of different lighting setups, or to never have to deal with planar reflections again, that it will do.

Jeebus… I’m sitting here reading you guys talk about buying this next gen, and I’m still holding on to the 940M GPU in my laptop. Gets the job done. Runs like a furnace, though.

So far I’ve gone and looked at some guys actually working with the tech (not disclosing who), and the thing is that the current titles in development are showing poor performance, 35-48 FPS @ 1080p on the RTX 2080 Ti. But that is mostly because the drivers and SDK are still in alpha, so there’s a long way to go until the final product. The titles targeting RTX will launch with ray tracing disabled, to be added later in a patch.

Is it something that only Nvidia cards will be able to take advantage of?
NVIDIA is just the first to make a GPU with dedicated ray tracing hardware (the RT cores); the others can follow suit, since DirectX DXR (and Vulkan later) is an API that takes advantage of such hardware. Basically, NVIDIA just introduced a new GPU family.

Will we have to make a separate project for AMD users that falls back on “baked” lighting or the old, standard way of doing things?
Note that this whole ray tracing thing is an alternate rendering method to what we usually have in real-time graphics. The one we usually have is called rasterization. So ray tracing only complements, or adds more choice to, the rendering method (and supposedly, because of this, you can switch between the methods at runtime as you like; see the toy sketch below). To answer your question, I don’t think there are any changes required for your project, unless you want to fully optimize your work for ray tracing only.
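To make that “switch at runtime” idea concrete, here’s a toy sketch (all names made up, nothing engine-specific) of how a renderer might pick a reflection technique per frame and fall back to the usual raster path when ray tracing isn’t available:

```cpp
// Toy illustration: pick a reflection technique each frame, falling back to
// the existing raster technique (screen-space reflections) whenever hardware
// support or the user setting rules out the raytraced path.
enum class ReflectionTechnique { ScreenSpace, Raytraced };

ReflectionTechnique ChooseReflectionTechnique(bool hwRaytracingAvailable,
                                              bool raytracingEnabled)
{
    if (hwRaytracingAvailable && raytracingEnabled)
        return ReflectionTechnique::Raytraced;
    return ReflectionTechnique::ScreenSpace;  // same scene, different renderer
}
```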

What happens when you have a large open world and want to use even more of the ray tracing features, like the global illumination? Is it still possible to use all of the ray tracing effects in a situation like that?
The idea of RTX is that it has dedicated RT cores, which means you could say most of the ray tracing work (the crazy rays bouncing around to determine lighting and create shadows) is done by those cores, plus the Tensor cores using deep learning to run NVIDIA’s denoiser and reduce noise faster. So there should be no difference from (or it might even be better than) what we have now. It basically transfers the load from the normal GPU cores to the RT cores, maybe leaving the main cores to deal with only the shaders and whatever, I dunno.

One more thing. Basically I’m in a dilemma: I’ve already got a 1080 Ti, and I’m not sure whether upgrading is going to be worth it, since ray tracing seems to be an Nvidia-only feature. Does the DirectX ray tracing API somehow make this all cross-compatible once an AMD solution is released?
Ray tracing itself is not new, and DirectX provides the DXR API as GPU-agnostic, I believe. The driver’s job is simply to map it onto the RT cores where they exist, that’s all. It’s pretty much like shaders, specifically compute shaders. The one thing people should think about is the denoiser algorithm, which is made by NVIDIA; but even that, I think, is provided as GameWorks (software), which should work with other GPUs as well.
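For what it’s worth, kicking off a DXR dispatch really does look a lot like a compute dispatch. A heavily simplified sketch (it assumes the state object, shader tables and acceleration structures were built elsewhere; the miss/hit-group tables are omitted and the variable names are illustrative):

```cpp
#include <windows.h>
#include <d3d12.h>

// Heavily simplified: launch a DXR dispatch over the full render target,
// analogous to Dispatch() for a compute shader. All resources are assumed
// to have been created and filled in beforehand.
void TraceFrame(ID3D12GraphicsCommandList4* cmdList,
                ID3D12StateObject* rtPipeline,
                D3D12_GPU_VIRTUAL_ADDRESS rayGenTable, UINT64 rayGenSize,
                UINT width, UINT height)
{
    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = rayGenTable;
    desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;
    // Miss and hit-group shader tables omitted for brevity.
    desc.Width  = width;   // one ray-generation invocation per pixel
    desc.Height = height;
    desc.Depth  = 1;

    cmdList->SetPipelineState1(rtPipeline);  // the raytracing state object
    cmdList->DispatchRays(&desc);
}
```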

Hope this helps.

In the Tomb Raider demo, they always move the camera with RTX *off*, so you can’t notice the dropped frames as easily when they’re panning 🙂