Hello everyone,
I'm working on a VR project using the HP Reverb G2 Omnicept Edition (with eye tracking) and UE 4.27, C++ only (I'm stuck on 4.27 because of the Omnicept SDK dependency). I've been researching how to estimate gaze direction and depth from the data the SDK/plugin exposes (per-eye gaze unit vectors, combined gaze, per-eye position, pupil dilation, pupil position, etc.). Is there a good way you have found to estimate gaze depth? I understand there are inherent challenges with eye tracking in VR and with gaze depth on 2D screens, but how have you mitigated them? I have been exploring a few approaches:
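Since the SDK provides per-eye gaze unit vectors, one geometric baseline for depth is vergence: find where the two eye rays (nearly) intersect by taking the midpoint of their closest approach. Below is a sketch under that assumption, using a minimal `Vec3` struct in place of UE's `FVector` so it compiles standalone; the 5 m parallel-ray fallback is an arbitrary placeholder:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for UE's FVector so the sketch compiles outside Unreal.
struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double k) const { return {x * k, y * k, z * k}; }
};
double Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Vergence-based gaze point: midpoint of the closest approach between the
// two (usually skew) per-eye gaze rays P(s) = oL + s*dL and Q(t) = oR + t*dR.
Vec3 VergencePoint(const Vec3& oL, const Vec3& dL,
                   const Vec3& oR, const Vec3& dR) {
    Vec3 w0 = oL - oR;
    double a = Dot(dL, dL), b = Dot(dL, dR), c = Dot(dR, dR);
    double d = Dot(dL, w0), e = Dot(dR, w0);
    double denom = a * c - b * b;          // ~0 when the rays are parallel
    if (std::fabs(denom) < 1e-9) {
        // Near-parallel rays (distant gaze): fall back to a fixed distance.
        return oL + dL * 5.0;
    }
    double sL = (b * e - c * d) / denom;   // closest-approach parameter, left ray
    double sR = (a * e - b * d) / denom;   // closest-approach parameter, right ray
    Vec3 pL = oL + dL * sL;
    Vec3 pR = oR + dR * sR;
    return (pL + pR) * 0.5;
}
```

Depth is then just the distance from the eye midpoint to the returned point. Worth noting that vergence gets noisy fast beyond a couple of meters, which may be why you see the constant offsets you mention.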
1. Custom calibration script: place N targets at known locations → track the gaze location in world space → compute the average difference vector (there seems to be a constant offset between my target and the reported gaze location when I draw a circle where I'm "focusing") → save this (potentially) constant vector as an offset for my game → repeat for each subject/player.
2. Using the same procedure as step (1), save data for known gaze targets → map each target to the reported gaze location (based on the unit vector given by the eye-tracking gaze vector) → train an ML model to predict the actual gaze location (specifically gaze depth!) → apply the model in the experiment.
3. Applying the line-trace approach mentioned in the note below, but capped at a few meters rather than infinity, and adding a dwell-time criterion: only treat something as the gaze/focus target if the gaze stays within a given radius of the previous N samples for longer than a time threshold.
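The calibration-offset idea from the first approach could be sketched like this (again with a minimal `Vec3` instead of UE's `FVector` so it runs standalone; function names are mine, not from the SDK):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

struct Vec3 { double x, y, z; };

// Per-subject calibration: average the (target - measured) difference over
// N known calibration targets, then add that constant offset to every
// subsequent gaze sample for this subject.
Vec3 ComputeGazeOffset(const Vec3* targets, const Vec3* measured, std::size_t n) {
    Vec3 sum{0.0, 0.0, 0.0};
    for (std::size_t i = 0; i < n; ++i) {
        sum.x += targets[i].x - measured[i].x;
        sum.y += targets[i].y - measured[i].y;
        sum.z += targets[i].z - measured[i].z;
    }
    return {sum.x / n, sum.y / n, sum.z / n};
}

Vec3 ApplyOffset(const Vec3& gaze, const Vec3& offset) {
    return {gaze.x + offset.x, gaze.y + offset.y, gaze.z + offset.z};
}
```

One caveat: averaging a single world-space vector assumes the offset really is constant across the view; if it grows with eccentricity or distance, a per-target (or regression-based) correction would fit better.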
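On the ML idea: one way to sidestep running Python inside UE 4.27 is to train offline (e.g. scikit-learn), export the coefficients, and do inference in plain C++ in the game. A minimal sketch for a linear model; the features (vergence angle, pupil diameters) and any weights are illustrative assumptions, not anything from the Omnicept SDK:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Runtime inference for an offline-trained linear regression: depth =
// bias + w · features. Weights are exported from the Python training script
// and loaded (or hard-coded) here, so no Python runtime is needed in-engine.
struct DepthModel {
    std::vector<double> weights;  // one per feature
    double bias;

    double Predict(const std::vector<double>& features) const {
        double depth = bias;
        for (std::size_t i = 0; i < weights.size(); ++i)
            depth += weights[i] * features[i];
        return depth;
    }
};
```

If a linear model turns out too weak for depth, the same export-and-reimplement pattern works for small MLPs or gradient-boosted trees; only the `Predict` body changes.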
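The dwell-time idea (radius + time threshold over the previous N samples) could look roughly like this; the radius and duration thresholds are placeholders you would tune per application:

```cpp
#include <cassert>
#include <cmath>
#include <deque>

struct Sample { double x, y, z, t; };  // gaze point in world space + timestamp (s)

// Dwell-based fixation detector: a fixation is in progress once every sample
// in the current window stays within `radius` of the window's first sample
// and the window spans at least `minDuration` seconds.
class FixationDetector {
public:
    FixationDetector(double radius, double minDuration)
        : radius_(radius), minDuration_(minDuration) {}

    // Feed one gaze sample; returns true while a fixation is detected.
    bool AddSample(const Sample& s) {
        if (!window_.empty() && Dist(window_.front(), s) > radius_)
            window_.clear();  // gaze left the radius: start a new window
        window_.push_back(s);
        return window_.back().t - window_.front().t >= minDuration_;
    }

private:
    static double Dist(const Sample& a, const Sample& b) {
        double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }
    std::deque<Sample> window_;
    double radius_, minDuration_;
};
```

Clearing the whole window on a single outlier is the simplest policy; a more robust version would drop only the oldest samples or tolerate a small fraction of outliers before resetting.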
Questions:
1. Can I do step (2) in UE 4.27? I've seen Python models applied in UE 5.3, but because of the Omnicept dependency I cannot move off 4.27.
2. If not, what algorithms have you explored?
Note: I have seen software that creates a line trace basically to infinity and considers the first object hit to be the object of focus (and its distance the gaze depth), but this isn't supported by academic research, so I don't want to go forward with that approach.
Let me know what you think,
cheers,
alex