Creating AR with Unreal Engine.

Hi guys!
Please help me decide:
Some proof of concept (real-time) - YouTube, made with TouchDesigner in just a few hours of continuous work.
(This is not the engine's result, just a proof of concept using a simpler tracking method.)

I'm currently checking options for creating content, mostly UIs and not necessarily games, using a novel augmented reality engine written in C++ to work with Unreal 4 or Unity.

I want the engine or the UI solution to work on mobile devices (iOS/Android) too, i.e. cross-platform.

1. How easy is it to acquire data from mobile sensors such as the camera (resolution/FPS), accelerometer, gyro, magnetometer, sound, etc., or from webcams or other external sensors?

2. How easy is it to create UI with Unreal and wire its inputs to the sensors, the custom-made C++ AR engine, or other external servers/inputs, etc.?

3. Can anyone say how easy it would be to recreate the video I uploaded above on an iPad or Android tablet using Unreal?
How long would it take you to create something similar?

4. How different is the license if I'm not producing games, only UIs?

I can't find many examples of “real” augmented reality content made with Unreal…

Please feel free to share your thoughts.

Barak.

My pepper has been augmented.

This is really cool!

Easy or hard depends on your knowledge.

If you're asking… it means it's going to be hard for you. You need to begin with more basic things and spend a lot of time learning.

And I think UE4 doesn't have a built-in way to access the camera; you'd have to code it yourself.

Hi, thank you for the reply!
I have started to look at the basics, but I have a goal and more C++ programmers will be involved, so I'm pretty optimistic :-).

  1. Depends on the hardware; if it isn't currently supported, you would write a binding, most likely as a plugin (see the sketch after this list).

  2. Related to 1. UI is easy through UMG; you would then use the same API calls to pass the data forward.

  3. Depends on optimization; I don't see why it wouldn't work on modern tablets, though.

  4. The license should be the same.
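
To illustrate points 1 and 2, here is a minimal sketch of what such a binding could look like: a Blueprint function library (which you would ship inside a plugin) exposing a sensor reading that UMG and Blueprints can call. All the names are mine, and ReadNativeAccelerometer() is a hypothetical stand-in for whatever your platform layer or custom AR engine provides:

#include "Kismet/BlueprintFunctionLibrary.h"
#include "SensorBindingLibrary.generated.h"

// Hypothetical native call - swap in your platform API or C++ AR engine here.
static FVector ReadNativeAccelerometer() { return FVector::ZeroVector; }

UCLASS()
class USensorBindingLibrary : public UBlueprintFunctionLibrary
{
    GENERATED_BODY()

public:
    // Exposes the latest accelerometer sample to Blueprints and UMG widgets.
    UFUNCTION(BlueprintCallable, Category = "Sensors")
    static FVector GetAccelerometerReading()
    {
        return ReadNativeAccelerometer();
    }
};

Once the module compiles, the node shows up in any widget Blueprint like a built-in one, which is the "pass the data forward" part of point 2.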

Also wanted to give you a heads-up that an upcoming version of the unofficial Leap plugin will support full pass-through of the IR camera.

So if you have it mounted on an HMD, you will be able to use it for AR, or for a mix/blend of AR and VR, such as replacing your real roof with a sky (UE4 example),

or having floating UI like the Leap examples (Unity example; no widgets are included in the plugin for now, you'll have to create your own):

https://leapmotion-leapdev-production.s3.amazonaws.com/uploads/library/detail_image/bd7f7bf3-b579-4842-95f9-4b632a0f6250.gif

You can try to do it with UMG; we have widget components that render in the world. But you'll have to modify engine source to get them to respond to overlap events as if they were mouse/touch interactions. Currently there is no extension point that would permit a hand in 3D space to act like a cursor correctly.

For now, it would probably be best to do it entirely with 2D/3D meshes that act like UI, or a mix of 2D/3D meshes with collision and UMG intermixed for displaying changes when overlap events are fired on a surrounding mesh/shape.
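
If it helps, here is a rough sketch of that mesh-with-collision approach (all names are mine, not from an engine sample): an actor whose box volume fires overlap events, with a world-space widget component attached for the visuals:

#include "GameFramework/Actor.h"
#include "Components/BoxComponent.h"
#include "Components/WidgetComponent.h"
#include "ARButtonActor.generated.h"

UCLASS()
class AARButtonActor : public AActor
{
    GENERATED_BODY()

public:
    AARButtonActor()
    {
        Trigger = CreateDefaultSubobject<UBoxComponent>(TEXT("Trigger"));
        RootComponent = Trigger;
        Widget = CreateDefaultSubobject<UWidgetComponent>(TEXT("Widget"));
        Widget->SetupAttachment(Trigger);
    }

    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        Trigger->OnComponentBeginOverlap.AddDynamic(this, &AARButtonActor::OnHandEnter);
    }

private:
    UFUNCTION()
    void OnHandEnter(UPrimitiveComponent* OverlappedComp, AActor* OtherActor,
                     UPrimitiveComponent* OtherComp, int32 OtherBodyIndex,
                     bool bFromSweep, const FHitResult& SweepResult)
    {
        // React to the "touch" here: swap the widget's displayed state,
        // play a sound, fire a delegate, etc.
    }

    UPROPERTY() UBoxComponent* Trigger;
    UPROPERTY() UWidgetComponent* Widget;
};

Reacting to the collision volume instead of the widget itself sidesteps the missing cursor extension point mentioned above.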

UI is so hard to do right in AR without haptic feedback :-/

Hi Nick and getnamo, thank you for the insight.
I hope things will go right and in a few months I'll be able to share some output.
It will be a long-distance run, but it has started.
I also hope Unreal will make it easier to create (real) AR in the near future.

Thanks for clarifying, I didn't know about the lack of overlap support. Since UMG is a 2D surface, you could create a custom trigger that translates input back into UMG via a 3D-to-2D projection, though this approach would be limiting in some cases. I agree that using planes, meshes, and composite objects with interactive responses would be the ideal approach; I wonder if some standard widgets could be made to make this easier.
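
For what it's worth, here is a minimal sketch of that 3D-to-2D projection; the plane orientation and the size parameter are my assumptions, not an existing API:

// Map a hand position in world space onto normalized UV coordinates of a
// planar widget surface. Assumes the widget plane spans local Y (width)
// and Z (height) and is centered on the component origin.
FVector2D ProjectHandToWidgetUV(const FTransform& WidgetTransform,
                                const FVector2D& WidgetSize,
                                const FVector& HandWorldPos)
{
    const FVector Local = WidgetTransform.InverseTransformPosition(HandWorldPos);
    const float U = Local.Y / WidgetSize.X + 0.5f;
    const float V = 0.5f - Local.Z / WidgetSize.Y;
    return FVector2D(U, V); // (0,0) = top-left, (1,1) = bottom-right
}

The resulting UV could then drive whatever stands in for a cursor, e.g. a widget hit test or a material parameter.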

Have you tried the Leap widgets demo in Unity? The visual feedback from displacing geometry (e.g. pushing a view inward to scroll) works very well in my opinion; think of the bounce-back from scrolling on a smartphone giving contextual cues. Another very interesting UI demo is the VR planetarium, where you have a wrist display that changes its function and buttons depending on which direction your arm is rotated (akin to turning your wrist to look at a watch). Leap's current experimentation with UI has quite a bit of potential.

Hi,

I'm working on an AR proof of concept right now, which seems to be in the direction you're looking at.

I'll post the source code on GitHub when it's ready, but the idea is to use OpenCV to get the raw video data from a webcam (I use the Logitech C615) and draw it to the background, so that you can draw other objects with Unreal Engine in front of it.
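
In case it's useful before the code is up, here is a rough sketch of the frame-to-background step as I'd approach it; the function name is mine, and VideoTexture is assumed to be a transient texture (e.g. from UTexture2D::CreateTransient with PF_B8G8R8A8) that a background material samples:

#include <opencv2/opencv.hpp>
#include "Engine/Texture2D.h"

void UpdateVideoTexture(UTexture2D* VideoTexture, const cv::Mat& Frame)
{
    // UE4 expects BGRA here; OpenCV frames arrive as BGR by default.
    cv::Mat Bgra;
    cv::cvtColor(Frame, Bgra, cv::COLOR_BGR2BGRA);

    // Copy the pixels into mip 0 and re-upload the texture to the GPU.
    FTexture2DMipMap& Mip = VideoTexture->PlatformData->Mips[0];
    void* Data = Mip.BulkData.Lock(LOCK_READ_WRITE);
    FMemory::Memcpy(Data, Bgra.data, Bgra.total() * Bgra.elemSize());
    Mip.BulkData.Unlock();
    VideoTexture->UpdateResource();
}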

For example, you could draw widgets and then interact with them like you're suggesting. My demo will just use Coherent UI 2D widgets and then use Leap Motion for “touch screen” type interfaces, like in this VR demo that I created:

…so the idea is to implement the same thing but this time in AR with the webcam video in the background.

By the way, my opinion is that it's not necessarily true that hand tracking is the best input for AR applications. Since with AR applications you are not blinded and can actually see your keyboard/mouse/controller/etc., you can still use those devices for input effectively. In fact, in my opinion one of the best inputs you can use in AR is your head as the pointing device, with keyboard or mouse buttons to take actions, for example as in this VR demo that I did:

In this demo I just used a raytrace to detect the hit location on the Coherent UI view surface, and then translated that to mouse coordinates that I could pass to the Coherent UI view for mouse-over or mouse-click events. This type of interface works great with AR as well. For example, what if you want to interact with an object that is (virtually) 5 meters away from you? You can't just reach out your hands to interact with it, but the raytrace / traditional input approach will still work.
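
The raytrace part, as a minimal sketch: Camera is assumed to be the pawn's camera component, and ToViewCoordinates() is a hypothetical helper standing in for the world-hit-to-view-pixels mapping:

#include "Camera/CameraComponent.h"
#include "Engine/World.h"

// Hypothetical helper: maps a world-space hit to the UI view's pixel space.
FVector2D ToViewCoordinates(const FHitResult& Hit);

void TraceHeadPointer(UWorld* World, const UCameraComponent* Camera)
{
    const FVector Start = Camera->GetComponentLocation();
    const FVector End = Start + Camera->GetForwardVector() * 500.f; // 5 m reach

    FHitResult Hit;
    if (World->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
    {
        const FVector2D ViewPos = ToViewCoordinates(Hit);
        // Forward ViewPos to the UI view as a mouse-move or mouse-click event.
    }
}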

Thank you Imalave,

You're doing awesome work! Very interesting.
If you're developing for Microsoft platforms, wouldn't using Media Foundation (to acquire the webcam data) be easier?

Hi,

Thanks for the suggestion! I do a lot of development on the Mac, so I was looking for a solution like OpenCV that works on many platforms. I was also planning to try AR development for Android soon, since I think Android will be the most popular VR platform in the near future, and phones already have a built-in camera.

Yeah, that's a good approach; unfortunately, the ongoing platform chaos is a big pain.

Is there any way, with Leap or something similar, to track the corners of an iPad and use it as an input device? In VR you'd see a plane or box with the exact proportions of the iPad's screen, and when you reached out for it you'd feel it.

Hi,

I finally got around to posting the AR proof of concept that I mentioned earlier. Here is a new thread I posted about it:

It also has a link to the source code, where you can see how easy it is to do the video capture with OpenCV. It's just a few lines of code like the following, once you have the SDK downloaded and your libpath set up correctly:

#include <opencv2/opencv.hpp>

const int CameraIndex = 0; // 0 = the system's default camera
cv::VideoCapture VidCap(CameraIndex);
if (VidCap.isOpened()) {
    cv::Mat Frame;
    VidCap >> Frame; // get a new frame from the camera
    uint8* RawFrameBuffer = (uint8*) Frame.data; // raw image buffer (BGR channel order by default in OpenCV)
}

Hi Dannington. This should definitely be possible with already available AR / computer vision libraries. I like your idea that in VR you would see where the phone or tablet is so you can reach out and grab it.
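
One common way to do the corner tracking (my sketch, not something from this thread): put a fiducial marker on the tablet and let OpenCV's aruco module find its corners in each frame:

#include <opencv2/aruco.hpp>
#include <opencv2/opencv.hpp>
#include <vector>

// Detect marker corners in one frame (call this per captured frame).
void DetectTabletCorners(const cv::Mat& Frame)
{
    std::vector<std::vector<cv::Point2f>> Corners; // four corners per marker
    std::vector<int> Ids;
    cv::Ptr<cv::aruco::Dictionary> Dict =
        cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);

    cv::aruco::detectMarkers(Frame, Dict, Corners, Ids);
    // Corners[i] holds a marker's image-space corners; from these the
    // tablet's pose can be estimated (cv::aruco::estimatePoseSingleMarkers)
    // and mirrored as a correctly proportioned plane in VR.
}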

What's interesting, though, is that once you're holding the device you don't necessarily always need to be looking directly at it. Since in VR you can have a heads-up display, you can hold the device in a comfortable position, as you would a game controller, while still looking straight ahead and seeing your input. So it could be something like the Microsoft SmartGlass concept, where the device becomes just a “dumb” input device acting as a controller that passes input to the Unreal application, rather than running apps on the device itself.
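
A minimal sketch of that "dumb controller" idea, assuming the tablet simply streams touch coordinates over UDP; the port number and the two-float packet layout are made up for illustration, and this uses UE4's Sockets/Networking modules:

#include "Networking.h"

// Create the listener once, then poll it each tick.
FSocket* CreateTabletListener()
{
    return FUdpSocketBuilder(TEXT("TabletInput"))
        .AsNonBlocking()
        .BoundToPort(7777)   // assumed port
        .Build();
}

void PollTabletInput(FSocket* ListenSocket)
{
    uint8 Packet[8];
    int32 BytesRead = 0;
    if (ListenSocket && ListenSocket->Recv(Packet, sizeof(Packet), BytesRead)
        && BytesRead == sizeof(Packet))
    {
        // Assumed layout: two little-endian floats, normalized touch X and Y.
        float X, Y;
        FMemory::Memcpy(&X, Packet, 4);
        FMemory::Memcpy(&Y, Packet + 4, 4);
        // Feed (X, Y) into the game as cursor/touch input, e.g. via the
        // raytrace/mouse-event path described earlier in the thread.
    }
}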

Could you tell us how you integrate Unreal with OpenCV?