How to maintain texture clarity at a distance?

So I’m making an indoor basketball arena and will be generating pre-rendered images for a final product. However, the lines on the court are disappearing as the camera gets farther from the court. As the court is the focal point of the image, I need to find a way to prevent this from happening.

So far I’ve tried 3 different methods, and all have yielded similar results.

  1. Use an alpha mask in the wood material to lerp between the wood and the green paint (see the sketch just after this list).
  2. Use a separate plane static mesh with the blend mode set to ‘masked’ with the green lines masked out and floating just barely above the surface of the wood.
  3. Use geometry for the lines.
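
For reference, method 1 boils down to something like this in a material Custom node (HLSL); WoodColor, PaintColor, LineMask (a Texture Object input) and UV are placeholder names:

    // UE4 Custom node sketch of method 1 (input names are placeholders).
    // The mask's red channel drives the blend, like a Lerp node with Alpha.
    float mask = Texture2DSample(LineMask, LineMaskSampler, UV).r;
    return lerp(WoodColor, PaintColor, mask);

In methods 1 and 2 the lines still come from a sampled texture, so they mip away with distance; method 3 instead aliases geometrically once the lines get thinner than a pixel, which would explain all three looking similar.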

I have the engine scalability settings all set to ‘cinematic’, the ‘material quality’ set to ‘high’, and I run as a standalone game set to fullscreen.

Having texture streaming turned off should be enough for rendered images. Please let me know if that works, or if you have further questions.

290071-rendermovie.png
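
In case it helps with testing: streaming can be toggled globally with the standard console variable below, or per asset by checking ‘Never Stream’ on the court textures themselves.

    r.TextureStreaming 0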

Thank you, I will look into this tomorrow at work.
If I may ask you more about this, does this setting apply when using the console command HighResShot? That is the method I am currently using to generate my images.
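
For reference, HighResShot accepts either a screen-size multiplier or an explicit resolution; I’ve been calling it like this:

    HighResShot 2
    HighResShot 3840x2160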

What problem did you run into using geometry for the lines? In your case it could be the way to go, either as pure geometry or as decals.
You can combine it with a signed distance field and/or procedural masking to get a smoother circular mask shape; a texture or geometry alone won’t help you here.
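
To make that concrete, here is a rough Custom node (HLSL) sketch of an analytic ring mask; Center, Radius and LineWidth are placeholder parameters you would expose. Because the mask is computed per pixel from UV math instead of being read from a texture, the edge stays about one pixel wide at any camera distance:

    // Signed distance from this pixel's UV to the circle outline.
    float dist = abs(length(UV - Center) - Radius) - LineWidth * 0.5;
    // fwidth() measures how fast 'dist' changes per screen pixel, so the
    // edge is smoothed over roughly one pixel regardless of distance.
    float aa = fwidth(dist);
    return 1.0 - smoothstep(-aa, aa, dist);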

Have you played around with mip settings and mip bias?
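
A negative bias makes the sampler pick a sharper (higher resolution) mip than the hardware would choose, which keeps thin lines visible longer at the cost of some shimmer. You can set it without code via a TextureSample node’s MipValueMode (MipBias) or the texture asset’s LOD Bias; as a Custom node sketch, with Bias a placeholder scalar input:

    // Standard HLSL biased sample; a negative Bias picks a sharper mip.
    return Tex.SampleBias(TexSampler, UV, Bias);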

I assume it will not apply. These settings are for sequencer rendering. Don’t forget that your image resolution will also affect the “thickness” of the lines.

So I attempted to compare the image quality of an image generated through Sequencer to one generated with HighResShot. It looks as though there are pros and cons: the lines on the court are clearer, though not perfect, but the reflective properties of my scene have changed noticeably. Both of these images were generated at 4K x 4K, and my machine can’t handle making them much larger.

I’m going to try some of the other suggestions people have offered and see how they fare in comparison.

I don’t know if you were asking what problem would prevent me from using geo, or what problem resulted from using geo, but to answer both: There is no reason why I can’t use geo for the lines, I just tried it both ways to compare the differences. Ultimately I experienced the same sort of aliasing issues using both methods.

I will look into the signed distance field / procedural masking things you mentioned, those are new concepts to me.

I have a very basic understanding of mip maps, but I’ve never manipulated how they function in this way. Thanks for the suggestion, I will do some experimenting.

That looks like a difference due to auto exposure (judging by the score and clock). The reflections themselves look mostly fine to me. Forcing the exposure to a set value for every picture should do the trick. Sadly, Sequencer starts adjusting exposure once rendering starts, leading to under- or overexposed frames when auto exposure is used.
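
If you want to rule eye adaptation out entirely while testing, the blunt option is to disable it from the console, which should override whatever the post process volume is doing:

    r.EyeAdaptationQuality 0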

I have the auto exposure turned off, and the min/max exposure settings in the post process volume clamped. Would sequencer override this?

290165-exposure.png

Here’s an example of the same view rendered with either Sequencer or the HighResShot console command. These were taken moments apart, same light bake, no lights modified.

290166-comparison-animated.gif

So I tried to educate myself by doing some Google searches on “ue4 signed distance field” and “ue4 procedural masking”, but I think those might be too broad a topic for me to understand their specific application to my problem.

Aren’t distance fields about how shadows are rendered? And if so, I don’t understand how that would apply to my situation, where textures and geo are disappearing/aliasing at long distances.

Alright, so I played with the mip bias, but it wasn’t clear to me how much of an effect it was having. However, this led me down a rabbit hole that brought me to an acceptable solution. I changed the Anti-Aliasing method of the project to MSAA (from TemporalAA), and as long as I render an image large enough that I can scale it down to reduce the aliasing, I’m getting a much better result.
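
For anyone else making the same switch: in UE4, MSAA is only supported by the forward renderer, so the two settings have to change together (project settings or DefaultEngine.ini, restart required), with r.MSAACount controlling the sample count:

    [/Script/Engine.RendererSettings]
    r.ForwardShading=True
    r.DefaultFeature.AntiAliasing=3
    r.MSAACount=4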

If anyone has similar problems, here’s the video that I thought explained it well and quickly:

290174-antialiasing.jpg

I believe setting the texture to a Texture2D may work better.

Huh. Curious. Sequencer should only override the settings when keyframes are placed, so it must be the Sequencer itself then. Could you do me a favour and test this one more time with a delay before the warm-up? I have not run into the reflection problem myself, as I always record more than I need and cut it apart later, and this seems to occur on the first frame.