Given a camera position in world space, a normalized camera direction vector, and a list of points in world space, how can I find which point the camera is most looking at?
I was assuming:
Make a list of objects.
Subtract the camera's position from each point's position to get a direction vector from the camera to that point, e.g. camera direction (0,0,1) vs. point position (1,1,1).
Then get the angle between the original camera direction and that new vector.
Whichever angle is smallest should be the point the camera is most looking at, right?
How could that be written in code?
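For reference, here is a minimal sketch of that approach in Python. The names `camera_pos`, `camera_dir`, and `most_looked_at` are just placeholders, and `camera_dir` is assumed to already be normalized:

```python
import math

def most_looked_at(camera_pos, camera_dir, points):
    """Return the point whose direction from the camera makes the
    smallest angle with camera_dir (assumed to be a unit vector)."""
    best_point, best_angle = None, math.inf
    for p in points:
        # Direction from the camera to the point: point minus camera position.
        to_point = (p[0] - camera_pos[0], p[1] - camera_pos[1], p[2] - camera_pos[2])
        length = math.sqrt(sum(c * c for c in to_point))
        if length == 0.0:
            continue  # point sits exactly on the camera; it has no direction
        # Angle between the two unit vectors via the dot product.
        dot = sum(d * c / length for d, c in zip(camera_dir, to_point))
        angle = math.acos(max(-1.0, min(1.0, dot)))  # clamp against float error
        if angle < best_angle:
            best_point, best_angle = p, angle
    return best_point

camera = (0.0, 0.0, 0.0)
forward = (0.0, 0.0, 1.0)
print(most_looked_at(camera, forward, [(1, 1, 1), (0, 0, 5), (-3, 0, 1)]))  # (0, 0, 5)
```

Note that since acos is monotonically decreasing on [-1, 1], you could skip the trig call entirely and just keep the point with the largest dot product; the winner is the same.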