Controlling objects with click drag on a tablet

Hi there

This is probably a very noob question coming from a scripting noob :wink: I am looking to control some items by click-dragging on the screen of a tablet, e.g.:

- I need to control the height of a plane by clicking on the screen and dragging up.
- I need to control the direction of the sun by clicking on the screen: if I drag below the horizon line, the sun will set and it will become nighttime (I have the sky set up and controlled by a directional light).
- Lastly, and most complicated, I need to be able to pick up objects in a scene and throw them around.

All three of these examples will have locked-off cameras.


Simple gesture code is not complicated; it is quite simple once you get the idea right. But getting to that simple solution was long and painful.
Unreal does not have gesture support, so you need to collect and process all the touch data yourself.

We have a nice system (we finally came up with a simple blueprint for it last week), and we are currently thinking about sharing it with the community.
But because the code is only a week old, we want it to mature a bit more before we make it public. I know the touch interface is a hot topic here.

Thanks, looking forward to seeing it when it's more mature :slight_smile:

An update on the touch interface:

I had a great idea: create a custom empty widget that remembers its own position and size and reports them to the touch-interface blueprint when asked, so custom touch zones can be defined in the UMG editor. But there is one huge problem: widgets do not know their absolute location in the viewport. To get it, I would need to trace parents of parents of parents, adding up all the anchors, margins, etc. That is too much work to get something as basic as a widget's absolute location in the viewport.
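The "trace parents of parents" workaround can be sketched like this. This is a toy model, not Unreal API: each widget only knows its offset inside its parent, so the absolute position has to be accumulated by walking up the hierarchy (a real UMG version would also have to fold in anchors, margins, alignment and DPI scale at every level, which is exactly why it gets painful).

```python
class Widget:
    """Minimal stand-in for a UMG widget: knows only its parent-relative offset."""

    def __init__(self, local_offset, parent=None):
        self.local_offset = local_offset  # (x, y) relative to parent
        self.parent = parent

    def absolute_position(self):
        # Sum offsets up the parent chain to recover the viewport position.
        x, y = self.local_offset
        node = self.parent
        while node is not None:
            x += node.local_offset[0]
            y += node.local_offset[1]
            node = node.parent
        return (x, y)

root = Widget((0, 0))
panel = Widget((100, 50), parent=root)
button = Widget((20, 30), parent=panel)
print(button.absolute_position())  # → (120, 80)
```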

And there is an even bigger mess from Epic:
When the camera does not exactly match the display's aspect ratio (i.e. when you have black stripes on the sides), UMG covers the whole screen, while the game is rendered only in the area that fits the aspect ratio. So touch coordinates do not match UMG coordinates. On top of all this, converting a world location to a viewport location also involves some weird rescaling and yet another viewport resolution.
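The touch-vs-render-area mismatch boils down to subtracting the black-bar offsets. A minimal sketch of that correction, assuming simple centered letterboxing (no engine-specific DPI or resolution scaling modelled):

```python
def touch_to_game_area(touch_x, touch_y, screen_w, screen_h, game_aspect):
    """Map a raw screen touch into the letterboxed game render area.

    If the display is wider than the game's aspect ratio, black bars
    appear on the left/right (pillarbox); if narrower, on the top/bottom
    (letterbox). Illustrative only.
    """
    screen_aspect = screen_w / screen_h
    if screen_aspect > game_aspect:
        # pillarbox: bars on the sides
        game_w = screen_h * game_aspect
        off_x = (screen_w - game_w) / 2
        off_y = 0.0
    else:
        # letterbox: bars on top and bottom
        game_h = screen_w / game_aspect
        off_x = 0.0
        off_y = (screen_h - game_h) / 2
    return (touch_x - off_x, touch_y - off_y)

# Ultrawide 2560x1080 display running a 2:1 game: 200 px bars each side,
# so a touch at x=200 lands on the game area's left edge.
print(touch_to_game_area(200, 0, 2560, 1080, 2.0))  # → (0.0, 0.0)
```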

And the last, smallest pain is that widget positioning (i.e. coordinates) depends on the anchors you choose.

I hoped this touch interface would be a small, clean blueprint, but countering all of the above needs a lot of code.
This reminds me of "best" MS practices, or even exceeds them by an order of magnitude.

Widgets know their geometry during OnTick and OnPaint; it's the Geometry structure they are passed. The Geometry structure can convert local coordinates to absolute coordinates, or absolute to local.
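Conceptually, that local/absolute conversion is an affine map. A toy sketch of what the geometry gives you, where `geometry` is just an `(absolute_origin, scale)` pair standing in for the real FGeometry (which carries a full layout transform):

```python
def local_to_absolute(geometry, local_pos):
    """Map a point in the widget's local space to absolute space.

    `geometry` = ((origin_x, origin_y), scale) -- a simplified stand-in
    for the geometry structure a widget receives in OnTick/OnPaint.
    """
    (ox, oy), scale = geometry
    return (ox + local_pos[0] * scale, oy + local_pos[1] * scale)

def absolute_to_local(geometry, abs_pos):
    """Inverse of local_to_absolute for the same simplified geometry."""
    (ox, oy), scale = geometry
    return ((abs_pos[0] - ox) / scale, (abs_pos[1] - oy) / scale)

geom = ((100.0, 50.0), 2.0)  # widget at (100, 50), drawn at 2x scale
print(local_to_absolute(geom, (10, 10)))  # → (120.0, 70.0)
```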

World to Screen conversion deals strictly in terms of the deferred scene renderer, which is affected by things like resolution scale and the physical rendering resolution. Slate is unaffected by resolution scale; it also uses a DPI scaler to scale the UI up or down depending on the resolution of the device and whatever custom rules you set. If you're just using the World To Screen blueprint node you won't get a useful point. There are some utility functions for converting world points to a Slate viewport-local position in the layout blueprint function library in UMG. I think the black-bar overlapping is fixed in 4.8, but you need to add widgets to the player's screen instead of the viewport. That takes into account any and all constraints on the player's view, like when you're using split-screen.
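The pixel-vs-Slate-unit mismatch described above can be sketched as a single divide. This is a simplified illustration, not the actual UMG utility function; it ignores resolution scale and the split-screen/player-screen constraints also mentioned:

```python
def screen_pixels_to_slate_units(pixel_pos, dpi_scale):
    """Convert a position in physical render pixels to Slate/UMG units.

    Slate lays widgets out in DPI-scaled units, so a point coming from a
    world-to-screen projection (physical pixels) must be divided by the
    current viewport DPI scale before it can be compared with widget
    coordinates.
    """
    x, y = pixel_pos
    return (x / dpi_scale, y / dpi_scale)

# On a phone rendering at 1920x1080 with a DPI scale of 2.0, the
# bottom-right pixel corresponds to Slate position (960, 540).
print(screen_pixels_to_slate_units((1920, 1080), 2.0))  # → (960.0, 540.0)
```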

I'm not entirely certain why this would be necessary to build a gesture-detecting widget / gesture-pad widget. I would just handle all mouse/touch events coming into the widget, use the geometry structure in the mouse/touch event function to convert to local space, quantize by angles, normalize the gesture, and pipe it through whatever model you're using for detection (SVM, hidden Markov models, etc.) for pattern-match-style gestures. In Slate, we support OS-style continuous gestures on a few platforms, maybe just Mac. Take pinch-to-zoom: if you're on an OS that has built-in gestures and we handle them at the Slate application level, we do pipe them down through the widget hierarchy. If I wanted to use those style gestures I'd probably look at fixing up bindings from Android's/iOS's OS and hooking them up to SlateApplication. Probably dragons there, but that's where I'd add it.
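The "quantize by angles, normalize, then match" pipeline can be shown with a toy direction tokenizer. This is not Slate/Unreal code; it just demonstrates the preprocessing step whose output you would feed into a matcher (template comparison, HMM, SVM, ...):

```python
import math

def quantize_stroke(points, directions=8):
    """Quantize a stroke (list of (x, y) points) into direction tokens.

    Each segment's angle is snapped to one of `directions` buckets
    (0 = right, counting counter-clockwise in math coords, clockwise in
    y-down screen coords), and consecutive duplicate tokens are collapsed
    so the gesture's token string is independent of drawing speed.
    """
    tokens = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)               # -pi .. pi
        bucket = round(angle / (2 * math.pi / directions)) % directions
        if not tokens or tokens[-1] != bucket:             # collapse repeats
            tokens.append(bucket)
    return tokens

# An "L" shape in y-down screen coordinates: straight down, then right.
stroke = [(0, 0), (0, 10), (0, 20), (10, 20), (20, 20)]
print(quantize_stroke(stroke))  # → [2, 0]
```

A recognizer then only has to compare token sequences, e.g. `quantize_stroke(stroke) == [2, 0]` means "down then right", regardless of how many sample points the touch driver delivered.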

Cool, I did not know that widgets know this only during OnTick and OnPaint.

However, how do I break the Geometry structure? I tried everything; it does not have the usual break-struct node.

The first/oldest and simplest reason why we are building our own touch interface:
We could not build a simple fire button with multi-touch support. Starting fire was not a problem. The big problem was detecting when a touch left the fire button's area.
For example, when the user dragged a finger off the screen, the event to stop firing never happened. Then there was the logic for testing each finger everywhere, when we only needed to know whether anything was pressing that button.
Everything together got very (unnecessarily) complex for a simple arcade game. So we thought that with direct (raw) input from the device we could build a far less complex touch interface. We have horizontal and vertical drag gestures, we may need pinch-to-zoom, and that is all. Also, Slate is nice for complicated games like RPGs; for a simple arcade shooter it is mostly overkill.
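The fire-button bookkeeping described above can be sketched like this. This is a toy model, not Unreal API: it tracks which touch IDs are currently inside the button rectangle, so firing stops only when the last of them moves out or is released (a finger dragged off the screen would be handled by the platform delivering a touch-up/cancel, here modelled as `on_touch_up`):

```python
def point_in_rect(p, rect):
    """True if point p = (x, y) lies inside rect = (x, y, width, height)."""
    (x, y), (rx, ry, rw, rh) = p, rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh

class FireButton:
    """Toy multi-touch fire button: fires while any tracked touch is inside."""

    def __init__(self, rect):
        self.rect = rect
        self.touches_inside = set()  # IDs of touches currently on the button

    def on_touch_down(self, touch_id, pos):
        if point_in_rect(pos, self.rect):
            self.touches_inside.add(touch_id)

    def on_touch_move(self, touch_id, pos):
        # A touch can slide onto or off the button; update membership.
        if point_in_rect(pos, self.rect):
            self.touches_inside.add(touch_id)
        else:
            self.touches_inside.discard(touch_id)

    def on_touch_up(self, touch_id):
        # Also covers touch-cancel (finger dragged off the screen edge).
        self.touches_inside.discard(touch_id)

    def is_firing(self):
        return bool(self.touches_inside)
```

Usage: press with finger 1 inside the button and `is_firing()` is true; slide it out and firing stops, without having to poll every finger from every piece of game logic.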

We still use UMG for visual feedback; however, it loves to crash. It looks like the most unstable part of Unreal.

Maybe the biggest culprit here is the lack of tutorials and documentation for UMG and the touch interface. For us it was faster to develop our own touch interface than to go blindly through all those undocumented functions. And it seems that nobody but you Epic guys can answer those UMG questions here. Or it is too much explaining for most of those who know to bother.

There is a quite big limitation for us: we can only use blueprints, because without an Apple computer we cannot compile C++ for iOS.

PS: thanks for the pointers.

You don’t break it; there are just some functions that use it directly for converting absolute to local and local to absolute, and a couple of other functions to, I think, get the size, etc.

Will fix it if I get call-stacks and reproducible test cases :slight_smile:

With the info from you I managed to get exact positioning of widgets when I use a 3D world-space location as the source.

I filed a few crash reports yesterday. So far I am unsure what causes the crashes.

I also tried my custom widget in 3D space (the experimental component). It works fine when I use it in a UMG HUD on all platforms. It is six text blocks in some horizontal bars and borders; quite simple, no fancy things.

While it was fine on Windows and Android, on an iPhone 6 Plus it was mirrored (or I saw its back side, but that should not render), and it also did not clear the area, i.e. it accumulated the bitmaps of all previous calls (or draws). By the way, this thing (the 3D widget component) would be great with an always-face-camera option.

Now that I know that converting locations only works inside events that have a "Geometry" input, I will probably figure it all out.