I am trying to create an effect that essentially involves the following operation: if I have an image at time instant t, I want to compute the difference between it and the image at time instant (t-1). That way, all the pixels whose values have changed (due to movement or lighting changes) will ‘light up’.
Is it possible to perform this inside a material? I have seen some examples of optical flow being computed inside material nodes as a post-process effect, which seem somewhat relevant, but in my case I’m more interested in pixel intensities than in velocity. Is there any way to keep track of the image rendered in the previous frame?
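To make it concrete, the per-pixel operation I have in mind is roughly the following (just a sketch, not working material code; PreviousFrame is a placeholder for whatever would hold the t-1 image, which is exactly the part I don’t know how to set up):

```hlsl
// Sketch of the per-pixel operation I want (not actual material code).
// CurrentFrame would be the scene colour at time t; PreviousFrame is a
// placeholder for the t-1 image, however that ends up being stored.
Texture2D CurrentFrame;
Texture2D PreviousFrame;
SamplerState LinearSampler;

float3 FrameDifference(float2 UV)
{
    float3 Curr = CurrentFrame.Sample(LinearSampler, UV).rgb;
    float3 Prev = PreviousFrame.Sample(LinearSampler, UV).rgb;

    // Pixels that changed between t-1 and t "light up"; unchanged pixels stay black.
    return abs(Curr - Prev);
}
```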
I don’t fully understand your first question - the change can come from either camera movement or changes in lighting. This change from t-1 to t has already happened by the time we enter the post-process stage, so I need a way to keep the t-1 pixel values in memory.
The reason I am trying this out is to simulate what is known as an ‘event camera’. In an event camera, changes in pixel intensity that exceed a certain threshold are recorded as events (+ if the intensity increases, - if it decreases). So naturally, this is a bit like subtracting an old image from the latest one and thresholding the result (in reality it is not quite that, because event cameras are asynchronous as opposed to frame-based).
The reason I wanted to do this inside the material editor as a post-process effect is that a ReadPixels() call can be really expensive, so I was hoping I could achieve it through HLSL.
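If I could get both frames into the material, the thresholding itself would be straightforward; something like this (again only a sketch, with Threshold an arbitrary parameter and luminance standing in for pixel intensity):

```hlsl
// Sketch of the event-style thresholding on top of the frame difference.
// Returns +1 for a positive event, -1 for a negative event, 0 for no event.
float EventPolarity(float3 Curr, float3 Prev, float Threshold)
{
    // Luminance used here just as a stand-in for "pixel intensity".
    float LumCurr = dot(Curr, float3(0.299, 0.587, 0.114));
    float LumPrev = dot(Prev, float3(0.299, 0.587, 0.114));
    float Diff = LumCurr - LumPrev;

    if (Diff >  Threshold) return  1.0; // intensity increased past the threshold
    if (Diff < -Threshold) return -1.0; // intensity decreased past the threshold
    return 0.0;                         // change too small to count as an event
}
```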
If t-1 is the earlier image, it may be pertinent to use a Subtract math expression between the two frames. But what is actually determining the change in the image? Is the change a smooth transition, or does it depend on the instant it enters the post-process volume / the lighting that changes it?
I think it can be done in the Material editor, but it’s going to require calculating which pixel values are set to change. It’ll also probably involve some sort of post-process volume (PPV) node that gets the pixel values at t-1 and outputs them to be intensified (lit up). If it’s not an actual PPV node, then it’ll have to be an associated setting in the PPV, such as color brightness or a more precise pixel-based node.
So you’re trying to capture and retain pixel intensity changes above certain threshold(s) in order to create events from them? Basically tracking pixel intensity changes based on the object’s position in the scene (via how lighting and anything else influences those pixels). Here’s an idea:
Perhaps you can create a blendable at the threshold levels of pixel intensity change and use a function to generate the event from it. The event would reference the image frame at t and then compute its difference from the image frame at t-1. Then use a variable to set the threshold at which that event is actually recorded, not just generated, and output the recorded events to an array or some other data structure. Probably not the most thorough or efficient idea, but it could be a start.
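As a very rough sketch, the Custom node in such a blendable post-process material might do something like the following (all names here are made up, the t-1 frame would still have to be written into a texture from outside the material every frame, and the red/blue colours are just an arbitrary way to visualize the two event polarities):

```hlsl
// Rough sketch of what a Custom node in a post-process blendable might output.
// PrevFrame has to be filled externally each frame; Threshold is a scalar parameter.
float3 EventVisualization(float3 SceneColor, Texture2D PrevFrame,
                          SamplerState PrevSampler, float2 UV, float Threshold)
{
    float3 Prev = PrevFrame.Sample(PrevSampler, UV).rgb;

    float LumCurr = dot(SceneColor, float3(0.299, 0.587, 0.114));
    float LumPrev = dot(Prev,       float3(0.299, 0.587, 0.114));
    float Diff = LumCurr - LumPrev;

    // Red where intensity rose past the threshold, blue where it fell,
    // black where nothing changed enough to count as an event.
    if (Diff >  Threshold) return float3(1.0, 0.0, 0.0);
    if (Diff < -Threshold) return float3(0.0, 0.0, 1.0);
    return float3(0.0, 0.0, 0.0);
}
```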
Thanks for the info! “The event would reference the image frame at t, and then calculate its difference to the image frame at t-1.” - does retention of the previous image happen automatically in a blendable, or do I have to manually keep track of that frame and pass it somehow?
I don’t know. I don’t think it would be automated or tracked by the blendable itself. Do a search for the different pixel data nodes in Blueprints, then read up on them in the docs. There might also be a YouTube video on asynchronously generating events from pixel values (whether those values are color, brightness, or another type).