Multi Engine

I don’t know if I’m going to express myself clearly, but I’ll do my best.

Actually, I’ve got multiple feature requests in one.

As an interactive designer (I’ve got a little background in interactive/immersive installation design and development and in game/level design, plus a bit of web and app UX), I’m looking for the best tools for my projects.

At school I learned UDK, Max/MSP/Jitter (https://cycling74.com/) and a little bit of Quartz Composer.

What I am looking for is an engine that will allow me to create a video game, but instead of exporting it for a console, PC, Mac, or mobile, I want it projected into an environment/space. I want to mix video game design and development with video mapping, the way Resolume (https://resolume.com/), MadMapper (http://www.madmapper.com/), or Millumin do. If I have to pass through a third-party frame-sharing layer like Wyphon, Syphon (http://syphon.v002.info/), or Spout (http://spout.zeal.co/) I will, but I’d prefer not to: I would rather develop in one tool, to reduce the chance of bugs and mistakes.
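For what it’s worth, the third-party route is not much code. Here is a minimal sketch of what the sending side looks like with the Spout 2.x SDK’s OpenGL sender class; I’m writing this from memory of the 2.x API, so exact class and method names may differ between SDK versions, and the surrounding function names are my own placeholders:

```cpp
// Minimal Spout sender sketch (Spout 2.x SDK, OpenGL side).
// Assumes an existing OpenGL context and a texture the engine renders
// into each frame. InitSharing/OnFrameRendered/Shutdown are placeholder
// hooks of mine, not part of any engine or SDK.
#include "SpoutSender.h"

SpoutSender sender;

void InitSharing(unsigned int width, unsigned int height)
{
    // Register a named sender that Resolume/MadMapper/Millumin can list.
    sender.CreateSender("GameOutput", width, height);
}

void OnFrameRendered(GLuint textureId, unsigned int width, unsigned int height)
{
    // Share the engine's rendered frame with the mapping software.
    sender.SendTexture(textureId, GL_TEXTURE_2D, width, height);
}

void Shutdown()
{
    sender.ReleaseSender();
}
```

The mapping software then treats the shared texture as just another video input and does the warping/masking itself, which is exactly the hand-off I’d like to avoid.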

I want to be able to work in a UI that lets me build an entire project in one tool: something like Unreal Engine with a lot of Max 7 or TouchDesigner (http://www.derivative.ca/) features built in.

Considering that Immersive Design Studios uses Unreal Engine as the technology behind their work (http://immersivedesignstudios.com/), I was wondering if anyone in the community, or from Epic, could tell me whether Epic or third parties are working on new features and plug-ins that would allow more experimental or artistic projects built on gaming technology. It would be nice to see Unreal become some kind of multitasking, “multi-media” engine: for interactive LED installations, Arduino and electronic art or games, the Internet of Things, and so on.

So much potential in UE4

Thank you!

It’s highly doubtful that Epic would add projection-mapping features, since the engine is geared toward games and not many people are asking for it. But it’s something someone else could add: the full source is available, so a programmer could figure out how to do it. The engine is already 3D, so you would just need a way to calibrate your projection and then have the engine set up the correct perspective render to match your environment.
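To make that last point concrete: the standard trick is an off-axis (asymmetric-frustum) projection. You measure the corners of the physical projection surface and the viewer’s (or projector’s) position in one coordinate space, then build the frustum from those points. Here’s a minimal sketch following Kooima’s generalized perspective projection, using GLM for the math; the function name and parameters are my own, not engine API:

```cpp
// Off-axis ("generalized") perspective projection for projection
// mapping, after Kooima's method. Given the physical corners of the
// projection surface and the eye position (all measured in the same
// coordinate space), build a projection*view matrix that keeps the
// perspective correct on that surface. Illustrative only.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 ProjectionForSurface(
    const glm::vec3& pa,  // surface lower-left corner
    const glm::vec3& pb,  // surface lower-right corner
    const glm::vec3& pc,  // surface upper-left corner
    const glm::vec3& pe,  // eye (viewer/projector) position
    float n, float f)     // near/far clip distances
{
    // Orthonormal basis of the surface plane.
    glm::vec3 vr = glm::normalize(pb - pa);            // right
    glm::vec3 vu = glm::normalize(pc - pa);            // up
    glm::vec3 vn = glm::normalize(glm::cross(vr, vu)); // normal, toward eye

    // Vectors from the eye to the surface corners.
    glm::vec3 va = pa - pe, vb = pb - pe, vc = pc - pe;

    // Distance from the eye to the surface plane.
    float d = -glm::dot(va, vn);

    // Frustum extents projected onto the near plane.
    float l = glm::dot(vr, va) * n / d;
    float r = glm::dot(vr, vb) * n / d;
    float b = glm::dot(vu, va) * n / d;
    float t = glm::dot(vu, vc) * n / d;

    glm::mat4 P = glm::frustum(l, r, b, t, n, f);

    // Rotate the world into surface-aligned space, then move the eye
    // to the origin.
    glm::mat4 M(1.0f);
    M[0] = glm::vec4(vr, 0.0f);
    M[1] = glm::vec4(vu, 0.0f);
    M[2] = glm::vec4(vn, 0.0f);
    glm::mat4 T = glm::translate(glm::mat4(1.0f), -pe);

    return P * glm::transpose(M) * T;
}
```

Calibration then boils down to measuring three corners of each surface, and you render one such view per projector/surface.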

Thank you for the quick answer.

For example, here’s a good piece of video mapping done with UDK (2011).

What I would want is to add interactivity on top of that.

Unity looks like it can do some interactive projection mapping, but I prefer Unreal to Unity.

It also looks like there’s a French startup launching some technologies for immersive projection.

It’s the same deal in Unity as it is in UE4: it’s not an important enough feature to be part of the engine by default, so it’s something others have created add-ons for.

Hey Guito1986,

Here’s how I’d potentially tackle this problem; I don’t think something like this would be hard at all. I personally think it’d be pretty easy to do, but that depends on how you want the end result to work for the user, specifically whether or not they’re wearing any sort of tracking device or IR sensor. If you want the user to wear nothing, solutions are still very possible, just much more involved.

Since you can already render the environment and elements you need inside a game, demo, or application, it’s all about how to handle the projection when outputting through a projector instead of a monitor. I spent many years working with the Kinect v1/v2, and if you use them in a multi-Kinect array, combining the data from multiple sensors, you can stabilize the head tracking pretty well (see the sketch below). That’s option #1.
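The fusion step can be as simple as a confidence-weighted average of each sensor’s head estimate plus some exponential smoothing to damp the jitter. A rough sketch, assuming every Kinect has already been calibrated into one shared coordinate space; `Sample` and `HeadFuser` are hypothetical names of mine, not Kinect SDK types:

```cpp
// Naive multi-sensor head-position fusion: weight each sensor's
// estimate by its tracking confidence, then low-pass filter the
// result. Real code would fill Sample from each Kinect's body frame,
// already transformed into one shared space via calibration.
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct Sample {
    Vec3  headPos;     // head joint position in the shared space
    float confidence;  // 0 = not tracked, 1 = fully tracked
};

class HeadFuser {
public:
    explicit HeadFuser(float smoothing = 0.8f) : alpha(smoothing) {}

    // Returns the smoothed, fused head position for this frame.
    Vec3 Update(const std::vector<Sample>& samples)
    {
        Vec3 sum; float total = 0;
        for (const Sample& s : samples) {
            sum.x += s.headPos.x * s.confidence;
            sum.y += s.headPos.y * s.confidence;
            sum.z += s.headPos.z * s.confidence;
            total += s.confidence;
        }
        if (total > 0) {
            // Exponential smoothing: keep alpha of the old estimate.
            state.x = alpha * state.x + (1 - alpha) * sum.x / total;
            state.y = alpha * state.y + (1 - alpha) * sum.y / total;
            state.z = alpha * state.z + (1 - alpha) * sum.z / total;
        }
        // If no sensor sees the user, hold the last known position.
        return state;
    }

private:
    float alpha;
    Vec3  state;
};
```

Each frame you fill `samples` from every sensor and call `Update()`; a sensor that has lost the user simply contributes zero weight.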

Another way is to use a hacked version of Johnny Lee’s Wii head-tracking implementation (https://www.youtube.com/watch?v=Jd3-eiid-Uw), which I’m sure led many companies to use IR for tracking device positions after Nintendo’s Wii. I even ported it to XNA in C# a while back; it’s pretty straightforward code, and it would be a very cheap and reliable option. You could also pick up a TrackIR or one of those products, if they have a reasonably open API for getting at the data. Then it’s just a matter of having some form of custom UCameraComponent that feeds that data into the engine (see the sketch below). But without buying TrackIR or another IR tracking solution, you can build this with a couple of bucks and a trip to Home Depot (as well as Johnny’s source code), which is also great if you need to prove the idea in prototype form first :smiley:
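For the engine side, here’s roughly what I mean by a custom UCameraComponent, sketched against UE4’s UCameraComponent::GetCameraView override point; `GetTrackedHeadOffset()` is a placeholder for whatever reads your TrackIR / IR-webcam / Kinect data, not an engine function:

```cpp
// Hedged sketch of a head-tracked camera component for UE4: shift the
// camera by the user's tracked head position each frame.
#include "CoreMinimal.h"
#include "Camera/CameraComponent.h"
#include "HeadTrackedCameraComponent.generated.h"

UCLASS(ClassGroup = Camera, meta = (BlueprintSpawnableComponent))
class UHeadTrackedCameraComponent : public UCameraComponent
{
    GENERATED_BODY()

public:
    virtual void GetCameraView(float DeltaTime, FMinimalViewInfo& DesiredView) override
    {
        // Start from the normal camera view...
        Super::GetCameraView(DeltaTime, DesiredView);

        // ...then nudge it by the user's head position, converted from
        // tracker space (meters) to Unreal units (centimeters).
        const FVector HeadOffset = GetTrackedHeadOffset() * 100.0f;
        DesiredView.Location += DesiredView.Rotation.RotateVector(HeadOffset);
    }

private:
    // Placeholder: read the latest head position from your tracker
    // (TrackIR API, Wiimote hack, fused Kinect data, etc.).
    FVector GetTrackedHeadOffset() const;
};
```

Feed the fused head position from your tracker into that placeholder and the rendered view shifts with the viewer, which is what sells the Johnny Lee effect on a projected surface.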

But if you already happen to have a Kinect handy, that could be a good starting point; just expect that the results may be a tad jittery at times and may occasionally lose the user.

Here’s a demo of how Microsoft is testing a similar solution for a potential communication app; this one also uses the Kinect to help with masking the user: https://www.youtube.com/watch?v=tRzOqTRxoek. Hope this helps, and best of luck!

Thank you very much, that’s more than a good start for me.

I’m still continuing my research on the video mapping side as well, because my projects aren’t shown on a monitor/screen or in a headset (Oculus or HoloLens), but on walls, boxes, etc.

But I like the solutions you gave me.