Ooh, nice! Does it come with pre-built plugins for Windows/iOS/Android? (I don’t see binaries on GitHub.)
AR using video from a drone seems extremely useful for all kinds of architectural projects! (whether to show a real-time presentation or render out offline video from UE4)
Add those shared libraries to Unreal’s Build.cs.
Unfortunately I don’t have experience with building UE projects for Android (I’ve only worked with simple Java-based Android programs). Do you know if adding external libraries in UE works with Android?
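For reference, the Build.cs step mentioned above might look roughly like this on desktop; a minimal sketch, assuming a prebuilt OpenCV sitting in a ThirdParty folder (the module name, paths, and OpenCV library version are all illustrative, not taken from the plugin):

```csharp
using System.IO;
using UnrealBuildTool;

public class MyOpenCVModule : ModuleRules  // hypothetical module name
{
    public MyOpenCVModule(ReadOnlyTargetRules Target) : base(Target)
    {
        PublicDependencyModuleNames.AddRange(
            new string[] { "Core", "CoreUObject", "Engine" });

        // Hypothetical location of a prebuilt OpenCV distribution.
        string OpenCVPath = Path.Combine(ModuleDirectory, "..", "ThirdParty", "OpenCV");
        PublicIncludePaths.Add(Path.Combine(OpenCVPath, "include"));

        if (Target.Platform == UnrealTargetPlatform.Win64)
        {
            // Link against the import library and stage the DLL with the build.
            PublicAdditionalLibraries.Add(
                Path.Combine(OpenCVPath, "lib", "opencv_world340.lib"));
            RuntimeDependencies.Add(
                Path.Combine(OpenCVPath, "bin", "opencv_world340.dll"));
        }
    }
}
```

As far as I know, Android would additionally need the .so files packaged into the APK (UE4 has an APL mechanism for that), which is where it gets trickier.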
BEAUTIFUL WORK!
I tried to compile OpenCV for Unreal myself and failed miserably :<
Hey, did you try any experiments with visual odometry? Is it possible to implement it using your plugin?
Sorry for the lack of reply - it seems I did not receive the notification.
There are many features I have added since the first release: spatial marker configurations, switching video sources, and multiple-marker tracking.
I will try to make a release for the new features soon.
New version released! Please see the original post.
OpenCV has implementations of SURF and SIFT features, and I think I have even seen someone implement positioning on top of OpenCV. So this should be possible.
One difficulty, though, is determining the initial pose of the camera.
Of course, but imagine you want the AR objects to be on your table. With markers, you place them on the table and we know that is the Z=0 plane for example.
With visual SLAM you do not know where the table is. Also there is always some drift and no good way to correct it.
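To illustrate the point about the marker pinning down the table plane, here is a minimal sketch (pure Python, made-up numbers, not the plugin’s code): once a marker on the table defines Z=0, an AR object placed in marker space maps into camera space through the single camera-from-marker transform the tracker estimates.

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform to a 3D point."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Hypothetical camera-from-marker transform: camera 0.5 m above the marker,
# looking straight down (the rotation flips Y and Z so the table faces the
# camera's +Z viewing axis).
T_cam_marker = [
    [1.0,  0.0,  0.0, 0.0],
    [0.0, -1.0,  0.0, 0.0],
    [0.0,  0.0, -1.0, 0.5],
    [0.0,  0.0,  0.0, 1.0],
]

# AR object placed 10 cm along the marker's X axis, on the table (Z=0).
p_marker = (0.1, 0.0, 0.0)
p_camera = mat_vec(T_cam_marker, p_marker)
print(p_camera)  # (0.1, 0.0, 0.5): half a metre in front of the camera
```

With SLAM alone there is no such anchor: the world frame is arbitrary, so nothing tells you where the table surface actually is.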
Yeah, especially with monocular VO. Binocular can recover some depth information, but it seems the only viable approaches are RGB-D cameras like the Kinect (obviously too heavy for head-mounting) and RealSense. I’m eagerly waiting for Intel Tango phones for that matter.
Well, there is much less drift with SLAM with loop closure, and I’ve seen some applications of large-scale SLAM with negligible drift. Here is an example: Large-Scale Direct SLAM with Stereo Cameras (IROS '15) - YouTube.
Easier than performing full SLAM is to use a known object as the marker; the object does not have to be a fiducial marker. There is an implementation in OpenCV.
I would like to know if I can achieve the same result with this plugin as what they showed at GDC with the Blackbird: multiple trackers on one object for better/more stable tracking from multiple angles. Video: https://www.youtube.com/watch?v=VJEoY1JT71c
If so, how can I do this? Thank you!
The plugin uses technology very similar to the one shown at GDC (they also said they used OpenCV, but with Chilitags rather than ArUco).
A multiple-marker configuration can be used: markers are placed in the blueprint editor (as actor components). In the GDC demo they put the markers on the car and then scanned them; in this plugin you need to know the positions of the markers in advance.
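As a rough sketch of why known marker positions are enough (pure Python, made-up numbers, not the plugin’s actual code): if each marker’s pose within the tracked object is known in advance, any single detected marker lets you recover the whole object’s pose via T_cam_object = T_cam_marker * inverse(T_object_marker).

```python
def invert_rigid(T):
    """Invert a 4x4 rigid transform (rotation + translation)."""
    R = [row[:3] for row in T[:3]]
    t = [row[3] for row in T[:3]]
    Rt = [[R[c][r] for c in range(3)] for r in range(3)]  # R transposed
    t_inv = [-sum(Rt[r][c] * t[c] for c in range(3)) for r in range(3)]
    return [Rt[0] + [t_inv[0]], Rt[1] + [t_inv[1]], Rt[2] + [t_inv[2]],
            [0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# Known in advance: this marker sits 0.2 m along the object's X axis,
# with the same orientation as the object.
T_object_marker = [[1.0, 0.0, 0.0, 0.2],
                   [0.0, 1.0, 0.0, 0.0],
                   [0.0, 0.0, 1.0, 0.0],
                   [0.0, 0.0, 0.0, 1.0]]

# Detected pose of that marker in camera space (made up for illustration):
# 1 m straight ahead of the camera.
T_cam_marker = [[1.0, 0.0, 0.0, 0.0],
                [0.0, 1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0, 1.0],
                [0.0, 0.0, 0.0, 1.0]]

T_cam_object = matmul(T_cam_marker, invert_rigid(T_object_marker))
print([row[3] for row in T_cam_object[:3]])  # [-0.2, 0.0, 1.0]
```

With several markers detected at once, each gives such an estimate, and averaging or filtering them is what makes the tracking more stable across viewing angles.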
If you are interested in trying the plugin, please let me know so I can package the current version for you (the last published one is far behind the current state of the plugin).