Hi everyone. I'm trying to compile thoughts and code about a topic that I expect interests a lot of people. There are a lot of concepts involved, so before anything else, please forgive my poor English and lack of expressiveness.
The topic I'm trying to revive is depth perception on a regular screen, as introduced to most of us by the great Johnny Lee with a Wiimote and some cheap hacking, as he shows in this 2007 video.
Time has passed since then and other examples have been made, supposedly easily (demo), but it's still not a common feature on any device. Technology is moving fast in this area, but it looks like Oculus-style headsets are what's going to win. Don't get me wrong, I think that approach is by far the most immersive, but head tracking and face recognition are advancing as well; they just feel hard to implement.
I've been stuck on this post for a while now, trying to reconcile and understand some of these resources:
and maybe others you might propose
For me, apart from being overwhelming, these posts mean two things: first, this has been achievable even with open-source tech for some time; and second, the work looks unfinished, as if every developer disappeared or quit afterwards.
My goal is to mimic this effect and apply it at an indie scale; a 3D pixel-style scrolling platformer would rock with these visuals. I'm trying to get FreeTrack working for the head tracking, but I'm stuck on the Unreal side when it comes to building the off-axis projection matrix. Any help is welcome, because it seems I don't even know what these things are called. Please point me to any info that could put me on the path to achieve this, and if you're interested in the topic or have examples, let me see them, because this is keeping me up at night.
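To share where I currently am: below is a rough sketch of the math I believe is involved, usually called an off-axis or asymmetric frustum projection. It's plain C++ with my own made-up names (Mat4, HeadCoupledProjection) and an OpenGL-style, column-major convention, not Unreal's FMatrix API, so please correct me if I'm wrong; in Unreal the matrix would still need converting to its own row-major, reversed-Z conventions.

```cpp
#include <cstdio>

struct Mat4 {
    float m[16]; // column-major storage, OpenGL-style clip space (assumption)
};

// Asymmetric (off-axis) frustum, equivalent to glFrustum(l, r, b, t, n, f).
Mat4 MakeOffAxisProjection(float left, float right, float bottom, float top,
                           float zNear, float zFar)
{
    Mat4 p = {};
    p.m[0]  = 2.0f * zNear / (right - left);
    p.m[5]  = 2.0f * zNear / (top - bottom);
    p.m[8]  = (right + left) / (right - left);
    p.m[9]  = (top + bottom) / (top - bottom);
    p.m[10] = -(zFar + zNear) / (zFar - zNear);
    p.m[11] = -1.0f;
    p.m[14] = -2.0f * zFar * zNear / (zFar - zNear);
    return p;
}

// headX/headY/headZ: tracked eye position relative to the centre of the
// physical screen, in the same units as screenWidth/screenHeight (e.g. metres),
// with headZ > 0 being the distance from the screen plane.
Mat4 HeadCoupledProjection(float headX, float headY, float headZ,
                           float screenWidth, float screenHeight,
                           float zNear, float zFar)
{
    // Project the physical screen rectangle onto the near plane as seen from
    // the head. Because the head is usually not centred, the frustum becomes
    // asymmetric, which is what creates the "window into the scene" illusion.
    const float scale  = zNear / headZ;
    const float left   = (-0.5f * screenWidth  - headX) * scale;
    const float right  = ( 0.5f * screenWidth  - headX) * scale;
    const float bottom = (-0.5f * screenHeight - headY) * scale;
    const float top    = ( 0.5f * screenHeight - headY) * scale;
    return MakeOffAxisProjection(left, right, bottom, top, zNear, zFar);
}

int main()
{
    // Example: head 10 cm to the right of a 0.5 m x 0.3 m screen, 60 cm away.
    Mat4 p = HeadCoupledProjection(0.10f, 0.0f, 0.60f, 0.50f, 0.30f, 0.1f, 100.0f);
    std::printf("p[0]=%f  p[8]=%f\n", p.m[0], p.m[8]);
    return 0;
}
```

As I understand it, the camera (view) transform must also be moved to the tracked head position each frame, so that the projection plane stays glued to the physical screen; the projection matrix alone is not enough. What I can't figure out is how to feed such a custom matrix into Unreal's camera.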
Thanks for reading.