Hi. I’m currently trying to apply a post-process material to a SceneCaptureComponent2D. From a different topic I discovered this tutorial, which can get close for a single frame once the variables are adjusted; however, it doesn’t mesh well with the other data that is provided, especially since that data can change frame to frame. (Copy paste nodes here)
Example data:
```json
{
  "aspectRatio": 1.0,
  "center": {
    "x": 959.5,
    "y": 539.5
  },
  "distortion": {
    "harris": 0.3143091945892773,
    "k1": 0.0,
    "k2": 0.0,
    "k3": 0.0,
    "k4": 0.0,
    "k5": 0.0,
    "k6": 0.0,
    "p1": 0.0,
    "p2": 0.0,
    "type": 0
  },
  "focalLength": 2574.9918539983396,
  "height": 1080,
  "width": 1920
}
```
Of note: harris and focalLength are the ones that generally change between frames, harris being the more important one; I need to work out how it can be integrated with the post-process material. For the current data it tends to stay roughly around 0.3–0.5.
Due to the nature of the project, I am unable to use Unreal’s new Lens Calibration plugin, as that requires a direct connection to the camera via LiveLink, and we are still working in 5.0.3 anyway.
So, to ask the question: is there a way to create or adjust a lens distortion effect given the harris distortion value?
Is there a paper on what the harris variable actually does to the lens, or to the final frame in practice?
Can’t really find much more than Harris corner detection results.
And this, which isn’t what you need, but… at least it isn’t a camera.
https://docs.nvidia.com/vpi/algo_harris_corners.html
As far as “are you able to distort the final image with a post process?” goes: yes, you are.
There are many techniques out there, including Panini distortion.
Camera lens manufacturers usually provide specs for it as well, at least Nikon and Canon do.
There are tonnes of tutorials on removing lens distortion from photos.
I would assume you could perform the inverse of those to end up with an accurately distorted render…
Thanks for the reply.
Honestly, if I had a paper I’d hopefully be in a much better position. I did stumble across this one at the end of my day, though; I’ll need to re-read it.
Below is the initial intended use of the harris value: example code for projecting a 3D point in world coordinates into 2D with Harris distortion. It works beautifully.
```cpp
vec2d project(double focalLength,
              double /*aspectRatio*/,
              vec2d center,
              double harrisDist,
              glm::tmat4x3<double> extrinsicsMat,
              vec3d p3d /* in a right-handed coordinate system! */) {
    // transform point from world to camera coordinates
    auto camCoords = extrinsicsMat * vec4d(p3d, 1);
    vec2d p2d;
    {
        // normalize (perspective divide)
        double rInv = 1.0 / camCoords.z;
        p2d = vec2d(rInv * camCoords.x, rInv * camCoords.y);
    }
    {
        // apply harris distortion
        double r2 = p2d.x * p2d.x + p2d.y * p2d.y; // distSq from center
        double s = 1.0 / sqrt(1.0 - harrisDist * r2);
        p2d = s * p2d;
    }
    {
        // convert to image coordinates
        p2d = center + focalLength * p2d;
    }
    return p2d;
}
```
Thanks to some outside help, this ended up being the solution used in the post-process material:
```cpp
{
    // convert image to camera coordinates
    p2d = (p2d - center) / focalLength;
}
{
    // apply inverse of harris distortion
    // (Harris distortion inverts perfectly, which is a useful property)
    double r2 = p2d.x * p2d.x + p2d.y * p2d.y; // distSq from center
    double s = 1.0 / sqrt(1.0 + harrisDist * r2);
    p2d = s * p2d;
}
{
    // convert back to image coordinates
    p2d = center + focalLength * p2d;
}
```
Where p2d is the ScreenPosition (ViewportUV).
And, most likely due to the camera that was used and to how it was implemented in Unreal, I also had to divide the focalLength by 2000.
Only comment I have is that Unreal uses a left-handed coordinate system.
Maybe that is why the function is inverted, but inversion would not be the way to go about changing the coordinate system; you need a swizzle (inverting just the x or the y).
Either way, glad you got it sorted. Even with the code I’m a bit at a loss as to what it actually does in practice…