Looking for help & feedback expanding on the VR editor with some features...

Hello and good day,

So our team of international collaborators in genomics and bioinformatics has been fortunate enough to be granted HTC Vive developer kits. It is exciting because VR has the potential to break down real barriers with 1’s and 0’s, and that keeps us coming back for more. Our team is spread out all over the world, but in VR everyone can be in the same space.

The files are in the computer? The editor is the game! Using Unreal Engine 4 over the past year, we have come to the realization that UE4’s VR editor represents the 21st century’s successor to Lego, and coding is the underlying literacy. But there has to be a way to make it more accessible to everyone on Earth; less overwhelming for digital immigrants, especially since unemployment is projected to hit 50% by 2020 due to the effects of automation.

We would like to develop the VR editor further, adding to the editor itself as a community effort. We are not experts in computer science, though some of us are scientists. We believe in the magic of VR and its potential benefit to humanity in education, research and industry. Our work is intended to be open source, so we would like to share some ideas and add to the existing collaborations. Specifically, we see three main areas for development in the age of #bigDataVR:

  1. The interface of the user with the VR editor: a workflow that reveals its complexity gradually, and voice search on Google or Wikipedia with the results displayed within the VR editor itself.

  2. The interface of the Internet with the VR editor: the ability to search for tutorials (YouTube videos) or download assets (/content) to be implemented or accessed without leaving the editor. *Has anyone tried to get the VR Editor working on Gear VR? I am aware of the current positional tracking problems, but in expectation of stereo cameras in the next wave of smartphones we are looking to Un…

  3. The interface of the data of interest with the user: algorithms like NuPIC, Nengo or other research tools. Currently we are struggling to run any non-trivial examples in UnrealPy and UnrealJS.

https://github.com/mastermind202/GoogleVoice

Finally, being alone in such a magical space could definitely benefit from multiplayer. Can anyone suggest where to begin? I understand there is something called the Live Editor Plugin, but we would like to expand on its functionality.

Thanks for your time and attention, take care and have a wonderful day :smiley:

-eric:cool:

A shame that no one replied. I have never worked with game engines, but have been dreaming for a very long time of creating my own pocket worlds, with magic portals etc. I was unconsciously waiting for the technology to improve and get easier to use. Now that I have a monster rig and an HTC Vive, I really want to play these “Lego” games with the Unreal Editor.
I was wondering: how much editing can you actually do inside the headset? How much of the editing could I do without taking my Vive off?

Hey, thanks for the reply! The VR Editor is in its infancy; as Epic describes it, it is mostly used for placement, immersion and scale. Blueprints cannot be used in the VR Editor. I would say you are limited to about 20 minutes before your face heats up inside the HTC Vive.

To augment such an environment would require coding the VR Editor itself.

Is that something you would be willing to collaborate on/with @Dark-union?

50% unemployment in 4 years time? Where are you pulling those numbers from?

Trying to run the VR editor on the Gear VR would not be possible, since it’s the editor, not a compiled program. Running the whole Unreal editor on an Android phone would not really be doable for another 5 years or so anyway, and that’s if somebody were working on it.

For visualizing data in Unreal, I have done this before using the VaRest plugin to pull information from REST APIs. It might be tough with gigantic datasets though.
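For anyone curious what that data-pull step looks like outside of Blueprints, here is a minimal Python sketch of the same idea: take a JSON payload from a REST API and flatten it into rows you can feed to a visualization. The endpoint shape and the field names (`records`, `gene`, `expression`) are made up for illustration; in-engine you would do the equivalent walk with VaRest’s JSON nodes.

```python
import json

def flatten_records(payload):
    """Flatten a JSON REST response into (name, value) rows --
    the same traversal a VaRest JSON node lets you do in Blueprints.
    The records/gene/expression structure is hypothetical."""
    data = json.loads(payload)
    return [(rec["gene"], rec["expression"]) for rec in data["records"]]

# A canned payload stands in for a live request so this runs offline;
# in practice you would fetch it with urllib.request.urlopen(url).read().
sample = json.dumps({"records": [
    {"gene": "BRCA1", "expression": 7.2},
    {"gene": "TP53", "expression": 3.1},
]})

rows = flatten_records(sample)
print(rows)  # [('BRCA1', 7.2), ('TP53', 3.1)]
```

For the gigantic-dataset case, the usual workaround is to paginate the API and stream chunks rather than pulling everything in one request.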

There was a data viz competition run a year or so ago, I think; you might want to check out some of the entrants to get a better gauge of what they did to achieve their versions.

The numbers come from a CBC show called Spark. These are projections by experts, not numbers pulled out of somewhere. If automation continues as it has, then surely you can see why unemployment will rise. The question is whether humans can innovate new jobs, or new schemes for society.

Running the VR editor is possible, as the editor itself is something you compile (UE4 is indeed that meta!); see compiling the editor from source. Some projections by industry experts say 2017 will bring sufficient mobile computing performance to do this, so that leaves about a year to work on it.

It’s funny you mention the Big Data VR Challenge, as my team and I were one of the groups that participated. Our work has continued since the challenge, and our collaborators have become more numerous. It is an exciting time, as virtually nothing has been invented yet!

All I could find on Google about that for Spark was that 50% of disabled people in Canada are unemployed. Every second person having no job in 3.5 years’ time just doesn’t seem possible to me. Robotics isn’t coming along that fast, self-driving cars aren’t going to take over from taxi drivers for probably 10 years, AI isn’t going to take over from creatives and artists for a fair while, and games and movies are not going to be made by robots for, I would assume, 50-100 odd years.

You can obviously compile the UE4 editor; since it’s written in C++, it has to be compiled to work at all. What I meant was that compiling all of its capabilities to run on an Android device would require a fair amount of rework, regardless of how powerful the device is. You can easily run Android on a desktop-class machine and still not have the editor running on it without a lot of work.

The video you linked is of Woody Norris talking about his electronic stuff? FYI, parametric speakers are awesome fun for ******** around with people :wink:

I’m sorry to hear you couldn’t find more information. However, I can find the exact episode of CBC Spark if you like: automation.

Our team happens to work with artificial intelligence and deep learning algorithms, and I would have to disagree with you, @ZoltanJr. Having attended a few university classes on the subject has helped me better understand the field, and how far it has actually come in the past few years.

Do you have any experience with ML, CLA/HTMs, or other systems like Lucida or IBM Watson, @ZoltanJr?

If you observe global employment numbers you will see a trend of this sort. I can provide additional sources for such information. If you’re still not sold, check out:

“Causal Entropic Forces” for an equation on the force of intelligence.
http://www.alexwg.org/link?url=http%3A%2F%2Fwww.alexwg.org%2Fpublications%2FPhysRevLett_110-168702.pdf
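For readers who don’t want to open the PDF: the paper’s central expression, as best I recall it (please check the paper for the exact statement), defines an entropic force over a causal path entropy:

```latex
F(\mathbf{X}_0, \tau) = T_c \, \nabla_{\mathbf{X}} S_c(\mathbf{X}, \tau) \,\big|_{\mathbf{X}_0}
```

where \(T_c\) is a causal path “temperature” and \(S_c(\mathbf{X}, \tau)\) is the entropy of possible paths through configuration space over a time horizon \(\tau\). The intuition is that intelligence-like behaviour falls out of maximizing future freedom of action.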

Hierarchical Temporal Memory Networks, White Paper by Jeff Hawkins

Seems like a big jump to go from 5% unemployment in a majority of major countries in the world to 50% unemployment in such a short time period though.

I have done plenty of work with AI systems and neuroevolutionary networks for problem solving, home automation and predictive-analysis-type things, but nothing is anywhere near close to being able to craft a great movie script, design cool particle effects on its own, or come up with a beautiful painting to rival a great artist.

We still struggle with CV-type tasks, and it takes hugely powerful machines to be efficient at spatialisation and object recognition, let alone figuring out what to do with all of that and how it relates to the real world.

Computers will always be better at a lot of things compared to humans, but teaching a computer to understand things the way a human does is incredibly difficult.

I would love to agree with you, but then we’d both be wrong :wink:

The thing about technology that makes it dangerous to ignore is that its growth is double exponential, driven by both the acceleration of development and the growth of nodes in the graph.

As for computer vision, Google shared a press release:

Saying ‘this will always be A, and that will always be C’ is an easy way to be wrong. I would resist making such strong statements.

I recommend you check out Nengo and NuPIC, the two platforms we have been experimenting with; both model the neurophysiology and dynamics of brains.
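To make the HTM side of that slightly more concrete, here is a tiny pure-Python sketch of sparse distributed representations (SDRs), the encoding at the heart of NuPIC. This is a conceptual illustration only, not NuPIC’s actual API; the sizes (2048 bits, ~2% active) follow the figures quoted in the HTM white paper.

```python
import random

def make_sdr(size=2048, active=40, seed=None):
    """Generate a random sparse distributed representation (SDR):
    a set of `active` on-bit indices out of `size` total bits
    (40/2048 is roughly the 2% sparsity the HTM white paper uses)."""
    rng = random.Random(seed)
    return set(rng.sample(range(size), active))

def overlap(a, b):
    """Similarity between two SDRs = number of shared on-bits.
    This is the cheap comparison that makes SDRs noise-tolerant."""
    return len(a & b)

a = make_sdr(seed=1)
b = make_sdr(seed=2)
print(overlap(a, a))  # identical SDRs overlap fully: 40
print(overlap(a, b))  # two unrelated random SDRs share almost no bits
```

The point of the sparsity is that two unrelated patterns almost never collide, so a high overlap score is strong evidence of semantic similarity, which is what lets HTM systems recognize noisy or partial inputs.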