BAKING SOLAR DATA LAYER - PROCEDURAL BIOMES

I am trying to figure out how I would bake out a Solar Radiation Map for a landscape, so I can then bake that information into a texture channel given the min and max values (Solar Map would be 0-1 range).

I am thinking I would set up a 1-cycle (i.e. one year) animation of my directional light, and when I run the animation, I would divide it into steps, so that each step represents a specific amount of time (i.e. 1 hour).

I want to render the result to a texture, the result being the amount of light that hit a specific point.

Each step would add to the previous to get a total amount for 1 cycle, and readings at specific intervals so I can do math on that information to get a final result for the image texel.
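In Python-style pseudocode (not Blueprint; `is_lit` is a stand-in for whatever visibility test ends up being used), that accumulation idea looks something like:

```python
def bake_solar_map(width, height, steps_per_cycle, is_lit):
    """Accumulate per-texel light exposure over one cycle.

    is_lit(x, y, step) is a stand-in for the line trace: it returns True
    when the texel's world position can see the directional light at
    that time step.
    """
    texels = [[0.0] * width for _ in range(height)]
    for step in range(steps_per_cycle):
        for y in range(height):
            for x in range(width):
                if is_lit(x, y, step):
                    # Each lit step adds 1/steps, so the total stays in 0-1.
                    texels[y][x] += 1.0 / steps_per_cycle
    return texels
```

A texel lit for every step ends at 1.0, one lit half the time ends at 0.5, so the result is already in the 0-1 range without a separate min/max pass.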

I do not know if there is a direct way of doing this?

I was thinking I was going to have to set up a line trace from each point on the landscape that represents the center of the area covered by the texel. Each frame, do a line trace and see if that spot can "see" the directional light, then output the result to the corresponding texel.

Last, I would render out each result to an image (using something like a render target? I am familiar with render targets in materials, but not with using them like this) and save the image to disk, or store it in a variable for computation (such as an array of hours for the day, to then average the results for the day).

I then plan on using this solar radiation information, along with temperature, altitude, slope, humidity and precipitation, to create a data-packed image for procedural generation, and a blueprint script to determine what foliage, rocks, etc. to spawn.

solar.zip (24.9 MB)
Did a line cast from the calculator down to the terrain to get starting points.
Then basically did a line trace between the sun position and the current point on the ground. If blocked, it draws a dark red pixel; if not, a light red pixel.

From the line trace I also measure the distance and use a clamped remap function to get a dark or light green depending on distance.
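The clamped remap described here is the standard map-range-clamped operation; a minimal sketch, with illustrative names:

```python
def remap_clamped(value, in_min, in_max, out_min, out_max):
    # Normalize into 0-1, clamp, then scale into the output range,
    # e.g. sun distance -> dark/light green intensity.
    t = (value - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)
```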

I then write these values to the texture for every sample point.

It's not perfect but it might be a good starting point.

For the straight-up altitude you would probably need to do a line trace directly from above the terrain and then subtract the resulting distance from the original trace's start/endpoint distance. (Right now I just measure the distance from the sun.)
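That altitude suggestion boils down to a subtraction; a tiny sketch, assuming the downward trace starts at a known height:

```python
def altitude_from_trace(trace_start_z, hit_distance):
    # A trace fired straight down from height trace_start_z travels
    # hit_distance before striking the terrain, so the terrain height
    # at that texel is the remainder.
    return trace_start_z - hit_distance
```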

Edit: scene is in map1


Oh wow, best reply ever! I am excited to check this out! This should be super helpful!

Accumulate light data in the way that 3dRaven described. Run your line trace from the sun every hour, half hour, whatever, and in each pass write a very small value, adding on top of itself over time: a solar-irradiance map.

This will give you an aggregated heatmap of how much solar-radiation a given spot gets.

Put 'better' plants in sunnier spots; put no plants in dark spots, moss, etc.

It's like getting the PrecomputedAOMask, but over time.

If you REALLY wanted to, you could bounce the line trace, effectively re-creating a ray-traced path, and do light-mathy stuff from there. Actually, it seems like the kind of thing that might make a nifty marketplace solution…
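For what it's worth, the "bounce" is just reflecting the trace direction about the surface normal at the hit point; a sketch of that one piece of the math:

```python
def reflect(direction, normal):
    """Mirror a ray direction about a unit surface normal:
    r = d - 2 * (d . n) * n. Each bounce of the line trace would
    restart from the hit point along this reflected direction."""
    dot = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * dot * n for d, n in zip(direction, normal))
```

For example, a ray going straight down onto an upward-facing surface reflects straight back up.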

I did not think of bouncing the line trace. That is a good addition for more realistic sims. I will use the solar radiation map in conjunction with a temperature map, precipitation map, humidity, elevation, etc. to spawn things in proper areas, as well as hopefully add material shaders to make plants look appropriate for the climate and time of year.

I am doing similar things.

Is the engine lighting reliable enough to do this?

If so, one can set up a render target to expose repeatedly, like you would film.

Since the end product you need is a texture, and a SceneCapture2D returns exactly that, it would eliminate some work.

My idea behind it is that - assuming engine lighting is good enough to call this scientific - if you have the light move at the proper RA/declination for your date/time and lon/lat, you can get a pretty accurate solar map.
You just need to refine the way the render target captures, to allow it to fill up over time / expose correctly.
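For the RA/declination part, a common starting point is the day-of-year declination approximation plus the hour angle; a rough sketch (no refraction or equation-of-time correction, so only accurate to about a degree):

```python
import math

def sun_elevation_deg(latitude_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle for a date/time and latitude.

    Uses the common declination approximation
    delta = -23.44 * cos(360/365 * (day + 10))
    and the hour angle (15 degrees per hour from solar noon).
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))
```

Driving the directional light's pitch from this per time step would give the "proper path across the sky" part; azimuth can be derived from the same quantities.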

You can probably remove eye adaptation and give the engine very low light levels to get the final texture - which should already be in 0 to 1 range (unless you configure it otherwise)

I'd start with a fully black/opaque landscape and see if the light catches enough to create white spots on the texture without reflectivity.
Change/alter from there…

I would be curious to discuss more and compare thoughts. What size maps are you doing this for? I am hoping to figure out a good way to make 300 km+ maps. I was thinking maybe I would bake out landscape data layers at certain "levels".

For instance: using a height map where 1 pixel = 1 km, using 4k maps (maybe 8k? But I was thinking 8k textures are slow to process) and baking the solar data for this (scaling the landscape down proportionately to 21 km to fit within the float error range, etc.; however, I think in 5.4 there might be plans to make near-infinite (very, very massive) landscapes and levels possible). Then in areas where I need detail (cities, places of interest, etc.) I would use a scale of something like 1 pixel = 100 m, and finally 1 px = 1 m.

I would then (not sure on the math) use the larger data layers as area averages, and then (I guess?) add the next layer to the large average, and then add each layer after that in the same manner.

This way I can get good results that look and feel correct for large areas, as well as the small areas where they are needed… The center of each map would have a world location in long/lat for positioning when baking, and for knowing when to load the level if that area is rendered.

If you are planning on doing larger maps then I would suggest using asynchronous line traces. You could also scan the terrain in sectors adding to a texture atlas.

If you want to use it for procedural terrains, then baking data from the sun position won't work as well (the map can have offsets and be at different heights).
You would have to bake in information that is relative to the terrain and normalize the data.

You could in theory bake in the sun's path across the sky and set where you could have moss grow, leave snow unmelted, etc.

I am currently brainstorming ways of doing massive-sized maps (as in thousands of kilometers). I was thinking of ways to break down data into hierarchies. For example, I might have a texture where 1 texel represents 1 km; this will give me averages for large areas. Areas of detail/interest would be 1 texel per meter, or 1 texel per 10 meters if a third layer is deemed necessary. Calculations would be done for the separate detail layers, then added or multiplied for the final result (whatever mathematical formula produces correct values). This way I can generate information to procedurally generate large areas when the camera is zoomed out, and then more accurate results can be achieved for the smaller (up to 20 km^2) areas, which will have greater detail for heights and can be "accurately" rendered in detail for precipitation, solar radiation, etc.
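One possible way to combine the layers (this is an assumption, not a settled method from the thread): treat the coarse layer as the broad average, and re-center the fine layer around 0.5 so it nudges that average up or down instead of replacing it:

```python
def sample_layered(coarse_value, detail_value=None, detail_strength=0.5):
    """Combine a coarse-layer average (0-1) with an optional fine-layer
    sample (0-1). The detail is re-centered around 0.5 so it raises or
    lowers the coarse average rather than overwriting it, then the
    result is clamped back into 0-1."""
    if detail_value is None:
        return coarse_value
    result = coarse_value + (detail_value - 0.5) * detail_strength
    return max(0.0, min(1.0, result))
```

Areas with no detail layer just fall back to the 1 km average, so the zoomed-out and zoomed-in views stay consistent with each other.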

I have not gotten into asynchronous work; I will look it up. I have an idea of what it is (like multi-threading?) but do not know how to implement it specifically.

Look into FRunnable tutorials for multithreaded tasks. You can probably also call the async version of the line trace for performance, but you will probably need to pass in object information beforehand, as the Runnable thread is not the same as the game thread.

If threads are too complex (passing in the terrain may be a problem), I'm sure even running async line traces on the main game thread will be better than just firing normal traces.
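FRunnable itself is Unreal C++, but the batching idea can be sketched outside the engine; here Python's thread pool stands in for the worker thread, and `trace` is a stand-in for the actual line-trace call:

```python
from concurrent.futures import ThreadPoolExecutor

def trace_grid_async(points, trace, workers=4):
    """Run an independent occlusion test for every sample point on a
    worker pool and return results in input order. This mirrors the
    async-trace idea: all requests are issued up front, and results
    are gathered as they complete instead of blocking one by one."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(trace, points))
```

The key property is that the traces are independent per texel, which is what makes the whole bake embarrassingly parallel.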

Good to know, this may be a bit above my level and might take too much time for me to figure out. Much appreciate the advice though! I will check into this!

I finally had time to explore this today, but ran into this:


I tried opening in 5.3, do I need to open in a different version, or for some reason, is the code contained in this macro missing from project?

It breaks for some reason, but the macro is in the engine.
Look for the 2D Grid Execution macro and just re-parent the connections.

Oh, ok, that's cool, thanks!
You have given me so much new stuff to look into and learn from this. I am not sure what plugs into the input, and where the output is plugged into, for the End Canvas to Render Target? I have to look up all these nodes in detail to break down how they work, which is great; thank you so much for this! I will learn so much I had no clue existed! The only way I can figure it, the macro plugs into it and the output then goes to the line trace input? The append string for the X and Y values was disconnected; I assume you had a print string in there for debugging when you were putting it all together?

Ok I just remembered how I made this :stuck_out_tongue:
I made my own macro based on the 2D Grid Execution macro, where I added an exec at the end of the loop; that's where the End Draw was hooked up.

I need to rebuild it and most importantly remember it :wink:

The line trace start and end come from the poly center for sure (the end has the offset). Will try to rebuild.

Edit: Version 2 inbound

solar v2.zip (24.9 MB)

Added a larger start offset above the volume.
Moved the macro to the blueprint so it should be intact

(I remember that I duplicated the macro at the time and it was in a toolset, so that's probably why it didn't export properly)

This is fantastic! Started to break it down in detail.
I thought I understood what was going on, but it seems I must not understand what is happening with the line traces. I tried changing the resolution of the render target image (so that it would be the same as my landscape-1), and then mapped the texture 1:1 to the landscape. I was thinking that would give me one trace per pixel, with each pixel representing a square on the landscape. I also switched the blueprint to get the extent bounds of the landscape instead of the Box Collision component in the blueprint.

When I changed the render target to a high resolution, it only partially renders the image, or it seems that way; it is most likely rendering the entire image, and the line traces just are not hitting anything, or are rendered to incorrect parts of the texture?

This might be a bit above my level. I think it's the grid execution I might not be understanding, or the Draw Texture node. I am trying to read up on the Draw Texture and Draw Canvas nodes.

Make sure the collision box encompasses the landscape. You can turn on line trace debug to see if it's hitting the ground (there should be lots of traces from the sky to the landscape).

Yeah, I will probably be spending the next two weeks figuring this out, lol. It seemed pretty straightforward in my head.

I changed the connection you had to the box collision (for getting the box extents) to point at the landscape actor instead, so it gets the box extents directly from the landscape, and I made it so the user can select the landscape. Before I did this, I had changed the box size to encompass the entire landscape.

Just having the correct nodes to work with helps so much; I have only touched render targets when rendering a material to a texture. I have been dying to figure out how to read and write textures, and this gets me going in the right direction for sure! I did a lot of looking around and was not really able to find information on how to do this; actually, some sources stated it couldn't be done through blueprint.

I noticed the directional light was in the center, but then the line traces were using the light's position? I'm thinking I need/should change it to get the direction vector of the light, then go a distance along that vector for the starting point, with the end point at a distance known to be far enough to hit the landscape. Or maybe do it backwards?.. and have the trace go from the center of a landscape tile, in the direction of the light, for X distance; if it does not hit anything, then technically that would mean it's getting light, and if it is blocked, it would not be getting light.
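Since a directional light has no meaningful position (only a direction), the reversed trace described here is the usual approach; a sketch of computing the endpoints, with assumed names (`light_dir` is the direction the light travels, so the trace runs the opposite way):

```python
def sun_trace_endpoints(ground_point, light_dir, trace_length):
    """Start at the terrain point and trace back toward the sun.
    light_dir is the direction the light travels, e.g. (0, 0, -1)
    for straight down, so the trace runs along its negation. If this
    trace hits nothing, the point is lit; if blocked, it is shaded."""
    start = ground_point
    end = tuple(p - d * trace_length for p, d in zip(ground_point, light_dir))
    return start, end
```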

It looks like everything is run in several loops. I have not dissected this yet to fully understand the existing loops, but I was wondering if I should run a loop to create an array of points; then loop over the array to do the rotations; then loop over it again to run the traces and build an array of the information to be written to the texture; and then loop through that array to write each pixel to the texture? Maybe this is how it was set up, but at first glance I thought maybe it was getting the point, line tracing, and writing to the texture in each iteration?