Hi guys,
I would like to know a method to find out how much of a chosen actor is/was visible in the camera view at a given time.
Something like the Pokémon Snap system.
What I have already tried: checking whether the actor is visible at all with Was Recently Rendered and a dot product, and also line and shape tracing, but the results were mediocre.
I think the way it works in a game like Snap is to assign a unique material to the actors you want to track, and when taking the “snap” the scene renders black, the actors render white, and the white pixels are counted.
I have absolutely no clue how to achieve this and would be thankful for any directions and help.
I don’t know what you want to use it for, but I tried an alternative approach some time ago.
I simply sent line traces to every bone of a character and counted how many I hit.
That count divided by the total bone count was the value I used as the visibility.
Your target character normally has a skeletal mesh with bones. When you send traces from your camera to these bones and check how many you hit, it has almost the same effect as building a complex pixel-counting mechanism.
This kind of calculation is often simplified in computer games to lower computation time and increase performance.
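The core of this approach boils down to a simple ratio. A minimal sketch (plain C++, not actual UE trace calls — the per-bone hit results would come from your line traces):

```cpp
#include <cassert>
#include <vector>

// Sketch of the bone-trace idea: given one occlusion result per bone
// (true = the trace from the camera reached the bone unblocked),
// visibility is simply hits divided by total bone count.
double BoneVisibility(const std::vector<bool>& boneTraceHits)
{
    if (boneTraceHits.empty()) return 0.0;
    int hits = 0;
    for (bool hit : boneTraceHits)
        if (hit) ++hits;
    return static_cast<double>(hits) / boneTraceHits.size();
}
```

With three of four bones reachable, `BoneVisibility({true, true, false, true})` gives 0.75.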
I like @HAF-Blade 's solution. It works to determine whether any parts of the mesh (approximately) are obscured by other physical objects. If you want to determine facing, you’ll need to use something like the dot product. Check out this video by the great Mathew Wadstein: WTF Is? Get DOT Product To in Unreal Engine 4 ( UE4 ) - YouTube
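For reference, the facing check itself is one dot product. A hedged sketch with illustrative types (not the UE `FVector` API): the dot of the camera's forward vector with the normalized direction to the target is near +1 when the target is dead ahead and near -1 when it is behind you.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Returns the cosine of the angle between the camera's forward vector
// and the direction from the camera to the target: +1 = directly in
// front, 0 = perpendicular, -1 = directly behind.
// Assumes camForward is already normalized.
double FacingDot(const Vec3& camForward, const Vec3& camPos, const Vec3& targetPos)
{
    Vec3 to{targetPos.x - camPos.x, targetPos.y - camPos.y, targetPos.z - camPos.z};
    double len = std::sqrt(to.x * to.x + to.y * to.y + to.z * to.z);
    if (len == 0.0) return 1.0; // target at camera position: treat as in front
    return (camForward.x * to.x + camForward.y * to.y + camForward.z * to.z) / len;
}
```

You would then threshold the result, e.g. only score targets where the dot exceeds the cosine of half your FOV.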
Welp, if you’re really concerned with granularity such that bone occlusion is not enough, might I suggest vertex occlusion? You’ll need C++ for this, but you can get skeletal mesh vertex locations from the render buffer and transform them to the photo subject’s actor location then send the traces. Those that are not occluded are divided by the total number of vertices to give you a visibility percentage.
For height differences, the dot product still works: use “Get Horizontal Dot Product To”, which ignores Z.
For distance concerns, you only need to get the distance between the start and end vectors of the trace and determine a multiplier for the distance to each bone. For example, if a bone is more than 500 units away, its multiplier is 0.1, while 100 units away gives a multiplier of 1.0. Then average all the multipliers.
FOV changes seem like a design decision that’s in your court rather than a technical consideration, but when taking the actual ‘photo’ you could enforce a certain FOV for scoring before taking the picture, then revert back to the player’s preferred FOV for gameplay.
I am not sure about taking a screenshot and checking pixels in the resulting texture (you will most likely need C++).
However, if that is doable, you can create a post-process material that shows only the Pokémon’s visible parts (white on a black background). It is very similar to any outline material (and you can find tutorials for that). Then take the snapshot/picture with that material applied (you can hide it in a fake flash effect, etc.).
Once you have that scene capture, you can count the white pixels or simply calculate the average brightness of the whole frame.
Thank you for your contribution, I will go through the points and try them out. Maybe I can combine them with my previous results and achieve the desired outcome.
Before you start with post-processing etc., find out how to access the pixels of the scene capture texture.
That is the biggest problem with my idea: getting those pixels out of the texture.
If you get only the visible part of the mesh (from the scene capture) into a texture, make it a black-and-white mask of the visible area only.
You can use: GetAverageBrightness
Knowing your texture size and the average brightness (coming only from the visible pixels), you can use a simple proportion: for a pure black-and-white image, the average brightness is exactly the fraction of white pixels. However, you should check that the reported average brightness is really a linear average.
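The proportion itself is one multiplication — a sketch assuming the brightness really is a linear average over a strictly black/white mask (which is the part that needs verifying):

```cpp
// In a pure black/white mask, average brightness in [0, 1] equals the
// fraction of white pixels, so the visible-pixel count is simply
// brightness * width * height (rounded to the nearest pixel).
long long WhitePixelCount(double avgBrightness, int width, int height)
{
    return static_cast<long long>(avgBrightness * width * height + 0.5);
}
```

For example, an average brightness of 0.25 on a 100x100 capture corresponds to 2500 visible pixels; comparing that against the actor's unoccluded pixel count (from a second capture with occluders hidden) would give the Snap-style visibility percentage.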