The RTX 2080 realtime ray tracing hype

Interestingly enough, the RTX 2080 does not seem all that superior to a GTX 1080 Ti in terms of hardware specs. So unless its core architecture is substantially different, I'm not sure how much of an improvement we will see when using it outside the realm of ray tracing. It seems like we would be paying for a GPU expressly for the purpose of ray tracing. Other than that, I don't see a reason to buy this new card.

IDK, maybe I'm wrong, but that's the way it seems. I'd rather keep my 1080 and put the $1,200 toward my game development.

As an aside, I can’t wait for the new Tomb Raider!

Well, it's almost twice as powerful, and the 2080 Ti has more memory.

It's closer to the Titan Xp than the 1080 Ti.

GTX 1080 Ti: 3584 CUDA cores, 224 texture units, 88 ROPs, 11GB of GDDR5X VRAM at 11Gbps – base clock 1,480MHz, boost clock 1,582MHz.
RTX 2080 Ti: 4352 CUDA cores, 272 texture units, 88 ROPs, 11GB of GDDR6 VRAM at 14Gbps – base clock 1,350MHz, boost clock 1,545MHz.
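For a rough sense of what the faster GDDR6 actually buys, here's a quick back-of-envelope (assuming the 352-bit bus both cards are commonly reported to use; the bus width isn't in the specs quoted above):

```cpp
#include <cstdio>

// Back-of-envelope memory bandwidth from the quoted per-pin rates,
// assuming a 352-bit memory bus on both the 1080 Ti and 2080 Ti.
// bandwidth (GB/s) = bus_width_bits / 8 * per_pin_rate_Gbps
int main() {
    const double bus_bits    = 352.0;  // assumed bus width, not from this thread
    const double gddr5x_gbps = 11.0;   // GTX 1080 Ti per-pin rate (quoted above)
    const double gddr6_gbps  = 14.0;   // RTX 2080 Ti per-pin rate (quoted above)

    const double bw_1080ti = bus_bits / 8.0 * gddr5x_gbps;  // ~484 GB/s
    const double bw_2080ti = bus_bits / 8.0 * gddr6_gbps;   // ~616 GB/s

    std::printf("1080 Ti: %.0f GB/s, 2080 Ti: %.0f GB/s (+%.0f%%)\n",
                bw_1080ti, bw_2080ti, (bw_2080ti / bw_1080ti - 1.0) * 100.0);
    return 0;
}
```

So roughly a quarter more memory bandwidth on paper, independent of the RT and tensor hardware.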

There's a Founders Edition, too, with faster clocks at around 1,600+ MHz, but the selling point is the RT bit, surely.

Can someone in the know (poke @) answer this - does an art studio now have to spend more time making sure things look pretty *with* and *without* RT? We know what a normal pipeline looks like. What happens with RT? Here's a million rays, let them bounce as they may and leave it at that, or…?

You are not wrong at all. To me these era-changing cards are meant for developers, like: "Okay, here, we (Nvidia, or AMD later) finally present you a card that can really do real-time ray tracing. Can you keep up?" Maybe in 4-5 years. So until then, gamers can stick with the 1080 Ti for now, haha. But for developers, this is worth buying, I think.

Just my assumption, but I think the clearest signal for a wholesale shift to ray tracing will be when the gaming consoles finally introduce new machines with RT cores, because developers will want to optimize their games for those machines. Then, when they port to PC, they'd just list a minimum requirement of an RTX 2070 or RTX 2060, if any.

It all boils down to whether you want to be a pioneer right now and pay extra for the privilege, or wait for the cards and the tech to mature, meaning cheaper cards and a stable platform, and follow the pioneers. It's more of a choice than a need in this case. At some point, having a product you can set up and call "ready for RT" might be a differentiator as a selling argument. If you want the tech just to get away from all the time spent baking lights and to speed up your workflow, then you should go for it.

Now, thinking as a gamer: I won't make a purchase based only on the extra CUDA cores + texture units, simply because a GTX 1080 was already enough for me; otherwise I would have acquired a Titan Xp a long time ago.

As a professional: I will still wait for the hype to pass.

Just FYI, a video with some Nvidia benchmarks comparing the GTX 1080 and RTX 2080 (both NOT the Ti models). I don't want to believe they would fabricate something on their own and tell lies; watch and judge for yourself: RTX 2080 TWICE The Performance Of GTX 1080?? - YouTube

Lies? No. But you can manipulate the outcome and send the green bars sky high, sure. These benchmarks often isolate certain scenarios and highlight them; both Nvidia and AMD have been doing it for years. In this case it's 4K resolution with AA cranked up. Not a common use scenario, is it?

Besides, the benchmark is silly. What is this scale from 0 to 2.5? Performance units? Based on what?

Hear my prediction. Judging by the specs and architectural improvements, it's going to be an extra 20-25% of raw oomphs (my own performance unit :slight_smile:) when going from a 1080 to a 2080. That's enough to convince me as an owner of an older-generation GPU. Somehow I feel 1080 owners will stay reluctant.
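For what it's worth, a quick back-of-envelope from the commonly quoted 1080/2080 core counts and boost clocks (numbers not from this thread, so treat them as assumptions) lands in the same ballpark before any Turing per-core gains are counted:

```cpp
#include <cstdio>

// Rough 1080 -> 2080 scaling estimate from published specs alone.
// Any per-core (IPC) improvement in Turing is on top of this.
int main() {
    const double cores_1080 = 2560.0, boost_1080 = 1733.0;  // GTX 1080 (assumed figures)
    const double cores_2080 = 2944.0, boost_2080 = 1710.0;  // RTX 2080 reference (assumed figures)

    const double throughput_ratio =
        (cores_2080 * boost_2080) / (cores_1080 * boost_1080);

    // Prints roughly +13-14%; add the memory jump (10 -> 14 Gbps) and any
    // architectural gains and a 20-25% real-world guess isn't unreasonable.
    std::printf("Raw shader throughput: +%.0f%%\n", (throughput_ratio - 1.0) * 100.0);
    return 0;
}
```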

Anyway, it's the feasibility of the RT tech that I'm more curious about - that's why I poked you. Just wondering how this will change the work pipeline, taking into account what ray tracing does to shadows and reflections.

@Everynone I agree. Even if RT capability on those cards only got it down to 1 second per frame, that would still be preferable to hours or days… After watching several videos, what intrigues me more is how the AI will help improve frame rates… Is this something you can train, gather a set of data from the training, and ship that data along with your code, so that when someone plays the game the data is used and the frame rates improve? So many questions… The same scenario could also apply to film, on a difficult scene with lots of lighting conditions (a huge set of explosions), so I am marveling.

As for the pipeline thoughts: even if I had a card in my hands right now, I would need to wait for access to the SDKs (if I really wanted to deal with the low-level stuff), or wait for Epic to implement it, and that surely can (and probably will) push my purchase decision back. I can't imagine changing a pipeline without knowing beforehand how each piece of tech will work together.

That's Nvidia for you. You notice no one's actually allowed to post benchmarks but them, yes?

Nvidia's been straining at the legal limits of truthful marketing for years. These benchmarks artificially inflate the performance numbers by using improved neural-net upscaling to make you think the 2080 is actually rendering at that resolution when it's not (NN upscaling is cool and all, but anyone can do it on any card). I'm more than glad to see the internet start to mock a lot of Nvidia's claims. The "TDP is important for desktop users!" line was bad enough: higher-wattage power supplies cost maybe $5 more, and the extra electricity over a year could be measured in dimes, but they convinced people it was important anyway.

And they claim that's just the price of the technology "now"… I can't understand that… What if they had spent 10 years (as they claim) and the whole research effort had produced nothing but Pascal? Would they suddenly charge more for the same thing? I think if something is good enough, the volume of sales will pay for the research faster. This seems like an attempt to grab your money fast because AMD is right around the corner… (let's pray Navi is the answer).

A benchmark embargo until every reviewer gets a card and a chance to test it is pretty standard. As long as the embargo ends at least a week before the cards ship, that's normal for computer hardware releases. Who knows, Nvidia might still be doing driver updates for the cards and making sure games get patched/updated.

TDP is really important for laptops, but yeah, as long as the card can keep itself cool, it's a non-issue for the vast majority of people.

I’m really curious about the AI AA, would love to see a tech breakdown of it.

I just hope Epic adds Radeon Rays support; it's cross-platform, and devices that support OpenCL 1.2 can run Radeon Rays…
Unity has started adding support for Radeon Rays.
RR is free, open source, license-free, works with non-AMD hardware, and works on Linux, Windows and macOS <3
So why not add support for RR?

This short video of Battlefield V with RTX enabled shows just how good the reflections can be: Battlefield 5 live gameplay with RTX effects enabled - YouTube

Radeon Rays may be useful for baking, but it is still a few orders of magnitude too slow to be used for realtime rendering in any capacity.

I get my RTX 2080 Ti in 8-10 days :D… So… I just really badly want a way to generate lightmaps quickly and beautifully… replace the current Lightmass already… give me this PLEASE :smiley: AND HURRY

Hmm, well, the interesting thing is that whenever Nvidia announces a new card with improved tech support, it takes a good 5 years for developers to start taking advantage of the new pipeline, as the R&D costs are too expensive to justify until the "new" product reaches saturation. The thinking is: why implement the new tech when the installed base is not high enough to support features added for the advanced video cards?

A good place to start for tracking video card installed base:

https://store.steampowered.com/hwsurvey/directx/

The interesting stat is that the lower-cost Nvidia cards are outselling the high-end cards over the past 4 months.

What is interesting, though, is that at the RTX release event Nvidia was hitting ray tracing really hard. The Titan X, for example, shows an average share of 0.006% over the past 4 months, where the GTX 1060 shows 9.02% over the same period. As a games developer, this raises the question of why go through the additional effort within the current development cycle. It's always the case that a game will be developed for the lowest common denominator, with feature enhancements usually consigned to the feature-creep pile as having no value as to the ROR.
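To put those survey percentages in concrete terms, here's a toy calculation with a completely made-up audience of 1,000,000 players, just to illustrate the difference in reach:

```cpp
#include <cstdio>

// Illustrative only: how Steam survey share translates into reachable players.
// The 1,000,000 audience figure is a made-up example, not real sales data.
int main() {
    const double players       = 1'000'000.0;     // hypothetical audience
    const double share_titan_x = 0.006 / 100.0;   // survey share quoted above
    const double share_1060    = 9.02  / 100.0;   // survey share quoted above

    std::printf("Titan X owners:  ~%.0f\n", players * share_titan_x);  // ~60
    std::printf("GTX 1060 owners: ~%.0f\n", players * share_1060);     // ~90,200
    return 0;
}
```

Sixty potential users versus ninety thousand is the whole lowest-common-denominator argument in one line.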

But

What was of interest was the Star Wars example shown: the ray tracing there was being performed in an Unreal 4 demo, which indicates UE4 already has ray tracing in some form. So my question would be: can RT be performed using static light tracing, improving the visuals without the end user having to kit up with the "expensive" RTX?

My opinion: it's all just hype once again, just like how DX12 was going to change the world of video gaming, yet I have not seen a marked improvement in graphics quality above and beyond what can be obtained with a half-decent video card.

It won't be something that becomes big within games for a while, since the install base is going to be so small.
But for developers, things will be much faster. I'd expect the baked lighting system in UE4 to support acceleration from this at some point soon.
And with things like NVLink, which allows you to combine the memory of multiple GPUs, it's much more viable for light baking since you aren't as limited in what you can render.

So for real time, it'll take a while.
But for developers using it with baked lighting, it will be useful very soon.

How does NVLink work? Are you saying I could somehow combine 3 different computers' GPUs to share memory for something very memory-intensive?

Yes, NVLink allows you to combine GPUs, and in doing so it combines GPU memory. This is only on the high-end cards and the upcoming RTX cards, but it's kind of like SLI, only faster. If you have 3 RTX 2080 cards, it will combine the memory so you'd have 24GB of GPU memory rather than just 8GB.
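If you're curious what that looks like from the programming side, here's a minimal CUDA-runtime sketch of enabling peer-to-peer access between two GPUs; NVLink is essentially what makes that link fast enough to treat another card's VRAM as usable (and note the 24GB is a pooled total across cards, not one contiguous allocation):

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: enable peer-to-peer access between GPU 0 and GPU 1 so a
// kernel running on one device can read memory allocated on the other.
// Over NVLink this link is much faster than PCIe, which is what makes
// spreading a large light-baking scene across several cards practical.
int main() {
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);

    if (canAccess01 && canAccess10) {
        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // device 0 may access device 1's memory
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);  // and vice versa
        std::printf("Peer access enabled between GPU 0 and GPU 1\n");
    } else {
        std::printf("These two GPUs cannot access each other's memory directly\n");
    }
    return 0;
}
```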

What if I have a GTX 980 and an RTX card? Capable?