I have been reading many pages that demonstrate why and how NVIDIA managed to show a real implementation of the ray tracing feature that is available in DX12 with the latest Windows update, but none of them give a direct answer about what we should do to get the best out of UE4 with our current GPUs. Many of us are still using the NVIDIA 10 series, so to get the DXR-quality look, do we really need to buy the 20 series cards, or will what we have give us the same?
I am only asking because I am an architectural visualizer who has been using UE4 for almost 4 years now, and I need to know: do the UE4 DXR updates require the RTX cards, or can we continue working on the 10 series and get the SAME QUALITY that we have seen in the Porsche video?
What about this article, which included this part:

"DXR Isn't Married to RTX: Microsoft's new DirectX Raytracing API is a big step towards the photorealistic real-time graphics that we've been dreaming of for decades. It gives developers a standardized toolset to work with, which they can use to create the most realistic graphics ever produced.

Currently, Nvidia's the only game in town with hardware that supports real-time ray tracing, but Microsoft's API is not married to Nvidia's RTX technology. Like all DirectX APIs, DirectX Raytracing is hardware agnostic, which means that when new GPUs hit the market from the likes of AMD and eventually Intel, they should be compatible with DXR as well."
NVIDIA's 400 to 700 series support Direct3D 12 at feature level 11_0.
NVIDIA's 900 series through 20 series support Direct3D 12 at feature level 12_1.
I could definitely see Epic artificially limiting UE4 ray tracing to RTX GPUs only while the features are being developed, but I don't see an apparent reason for a DXR implementation not to support all GPUs that meet a minimum Direct3D 12 feature level once it is fully released. There are plenty of hypothetical or potential reasons, such as not enough memory, support needing to be added on a GPU-by-GPU basis, performance simply being too poor, etc.
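For what it's worth, an application doesn't have to rely on spec sheets for this: it can ask the D3D12 runtime directly which feature level an adapter supports. A minimal sketch in plain C++/D3D12 (not UE4 code; QueryMaxFeatureLevel is just an illustrative name):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

// Ask the default adapter for the highest D3D12 feature level it supports.
// Returns (D3D_FEATURE_LEVEL)0 if no D3D12 device can be created at all.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel()
{
    ComPtr<ID3D12Device> device;
    // 11_0 is the minimum level at which a D3D12 device can be created.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return static_cast<D3D_FEATURE_LEVEL>(0);

    const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels        = _countof(requested);
    levels.pFeatureLevelsRequested = requested;

    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS,
                                              &levels, sizeof(levels))))
        return levels.MaxSupportedFeatureLevel;

    return static_cast<D3D_FEATURE_LEVEL>(0);
}
```

Note that a 900 or 10 series card will report 12_1 just like a 20 series card does, so feature level alone tells you nothing about DXR; that is a separate query.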
There are some features of DXR which can be translated to compute workloads that any card manufactured in the last 5 years can handle (slowly, but it can). For this to work, though, the card vendors need to ship a driver with the required DXR entry points that tell the runtime how to execute those workloads, meaning that if NVIDIA and AMD don't provide those driver updates, you won't see anything happening on this front. From what I have read, the fallback solution at the DXR/OS level was removed around the time the RTX cards launched.
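That last point is checkable from code, by the way: if the vendor hasn't shipped a driver with those DXR entry points, the raytracing tier query simply comes back as "not supported". A minimal sketch, assuming the Windows 10 October 2018 SDK or newer (AdapterSupportsDXR is just an illustrative name):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

// True if the default adapter's driver exposes DXR at any raytracing tier.
// A GPU/driver combo without DXR reports D3D12_RAYTRACING_TIER_NOT_SUPPORTED,
// which is exactly the "vendor didn't ship the driver entries" case above.
bool AdapterSupportsDXR()
{
    ComPtr<ID3D12Device5> device; // ID3D12Device5 carries the DXR methods
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return false;

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;

    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```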
Even though NVIDIA has quite a portfolio of RTX cards by now, each use case will call for a different card. If you are developing a game, you might want an RTX 2060 just for the sake of knowing someone out there will have this card, but I hardly see it as future-proof because 6 GB of VRAM seems too low, so an RTX 2070 would be the minimum target for game dev. For archviz, I see a better choice in anything equal to or greater than an RTX 2080 Ti (more VRAM, at 11 GB), including setups with 2x RTX 2080 (more VRAM, 16 GB pooled over NVLink), and why not RTX Quadros.
That's actually the point I was trying to make. It's not just the developers who are facing hard times trying to set aside a budget for a GPU upgrade to best present their work to the public; it's actually the users who are going to face a harder time upgrading their GPU to be able to run a game or application with the new ray tracing feature, for example.
So basically it's fair that DXR is a new feature that can be supported by a driver update for the GPU, with the quality depending on the VRAM size and speed; but it will not be fair if it's going to be applicable only to the new RTX cards that NVIDIA launched recently, as it's not that easy for users to grab the money and just buy a new card.
Imagine someone who saved money to buy a 1080 Ti being told: sorry, you are going to have to sell it for less than half its value and add more money on top to buy a 2080 Ti.
The major problem is that even though, theoretically, a 900 or 10 series card can do the job, it would run at something like 1-5 FPS. The specialized components are there not only to accelerate ray tracing but also to leave the CUDA cores free for the workload they were already doing. This is pretty much a chicken-and-egg problem: RTX cards are selling, but not massively; the jump in performance is not great enough to justify a purchase for 10 series owners; and ray tracing is not yet available in game titles because, c'mon, NVIDIA rushed this thing... We don't even have it ready in Unreal!
I have a 1080 (not even a Ti) in my personal rig, which I purchased really cheap 1.5 years ago, and I am still not sure which model I will buy; I think many people are in the same spot. For a developer, it makes sense to have the low-end model (the 2060) just for the sake of testing stuff and knowing it works; for a competitive gamer, a 2070 or 2080 makes sense; but for film and archviz it only makes sense to have a 2080 Ti, Titan, or Quadro. All of these make sense as a first purchase, though, not for Pascal owners, because again, how do you justify the purchase when there is only one game ready for ray tracing, no engine, and no renderer? So I think the question is more "when" than "if". I know I will only make the purchase once the engine has ray tracing released out of the experimental phase.