
Viability of Architectural Visualization on Android/iOS-powered devices using Unreal Engine

Almost every client I come across is demanding real-time architectural visualization for Android or iOS, which I personally think is not practical. The clientele for this type of technology is very limited and not very diverse: either large-scale developers with million-dollar budgets, or small architecture firms with a keen eye on future technology for product marketing. There is always a dilemma over price quotation and the delivery method of the end product. So here I am trying to present my perspective on this paradoxical problem that I’m facing nowadays.

There was a time when people questioned the future of PC gaming as Xbox and PlayStation started to dominate the gaming market (still a very debatable issue), but PC gaming is still surviving, or I could even say thriving. Then came the age of mobile gaming, and everybody had their views and predictions about how it was going to change the future of gaming (it sure did; now we can play Angry Birds and Candy Crush on our smartphones).

What I am trying to say is that not everything is meant to ultimately go down the smartphone- or tablet-friendly path. There may be a certain amount of flexibility or usability involved, but developing everything around mobility is not conducive to creativity.

Smartphones and tablets are there to give us a second-best option when we are away from home or the office. They were never intended to replace our primary hardware. Who wants to watch their favorite Hollywood flick on a 5.5-inch display? And who would not choose to play Battlefield on their 4K UHD monitor? I know these are tailor-made examples of things smartphones and tablets are not meant for, but real-time architectural visualization is certainly not for these weaker devices as of now.

For real estate developers who are funding super-expensive advertisements containing walkthroughs and animations made in traditional 3D packages, Unreal Engine real-time visualization can certainly cut overall cost while giving a better experience of the actual property.

Why would a million-dollar house buyer want to see his/her property in substandard quality? I think the best place for real-time visualization is in the real estate advertising agency’s office, or on the actual construction site, where potential clients can see the physical place and experience their soon-to-be-constructed house in virtual reality. The other option, which may or may not be viable because of hardware constraints, is making the interactive file available on the cloud so anybody can download and play it. Investing around a thousand dollars in a decent graphics card and a VR device is a meager amount compared to traditional V-Ray animations and renders.

Here I’m not implying that Unreal Engine can replace 3ds Max and V-Ray; they both have their own advantages and can flourish alongside each other.

There are other options which I think are more viable for end-point delivery than Android or iOS. Some of them are as follows:

360 panoramic renders
Amazon web services and similar ones
HTML5

Your criticism and comments are welcome. I need your esteemed views on the same.

Your thoughts are basically the same ones I had while thinking about the best approach to ArchViz without having to downgrade too much.

My first thought was to go with Gear VR, mostly because of the Wi-Fi possibilities, but its current limitations are too severe, and the results, in terms of quality and performance, are not enough to get stunning visual feedback. That is the strongest point in the end, because the customer doesn’t care about polys and shader optimization; they want to see how things will look, so they can see/change/modify/remove/add whatever they want and get visual feedback almost as real as the real thing itself.
Currently the technology is not developed enough for a scene like the ones floating around this forum to run as well on a smartphone (and it needs to be rendered twice!). It will be a couple of years before the hardware is good enough, and even then it will always be behind a desktop PC.

The DK2 and HTC Vive are currently the strongest devices you can use to get the best results, but you’re tied to the cables and the tracking camera… so what’s the catch?

I have already done lots of 360° photos using my DSLR, PTGui and PanoTour, but that is something which can only be done for existing places… so what I was also doing was rendering 360° images from a project within Maya to get “virtual apartments” that can be viewed as 360° pictures with some interactive elements (very few).

The 360° panoramic exporter now available for UE4 is freaking cool and could be a temporary solution…

I would say that 360° pictures may do the job right, but they can’t be compared with a UE4 scene combined with the DK2, which is top notch right now…

The point is that if you’re working on something which doesn’t currently exist but is just an architectural project, you still need to do modeling, UVs, shading and animations, so whether you’re using Maya or UE4, 80% of the job needs to be done either way. Then you can choose to create 360° photos in Maya and build a (limited) virtual walkthrough, or go full VR with the DK2/Vive.
Considering the amount of work you put in to create everything, I would say that the DK2 solution is the most elegant and worthwhile thing to do in order to get visual feedback that can’t be compared to a static render or a 360° photo.

I hope to blow your mind in 1-2 months by showing the project I’m currently working on :slight_smile:

Consumers are infatuated with mobiles and tablets… and mobiles and tablets have superseded desktops and laptops.

There’s a lot of “dumbing down” we have to do as PC/console developers. But with iOS Metal and the latest mobile chips, one can deliver a lot of rich graphics on mobile and tablet.

Personally I avoid mobile and tablet as far as possible.

But depending on the industry it may be absolutely unavoidable.

I am ready for my mind to be blown away…:eek:

Very well said. VR is something to look forward to here.

Yes, I 100% agree. It is just the downgrade in quality that is unacceptable for me. But if clients insist, we have to deliver. It’s really a pain to make them understand that there will be a difference in quality.

The solution has to be streaming. It will remove the problems of clunky hardware and of having to dumb everything down. I think streaming solutions are going to become mainstream, but we just have to wait a little longer (and it sucks).

Like Raghu, I also don’t want to make sub-par ‘‘experiences’’ because of cellphone and tablet hardware. It just sucks.

With good streaming it could be a very high-quality VR experience delivered on a portable device like a Gear VR. That would be the ideal solution, I think (or a Vive/Rift plugged into ANY laptop).

Hi Raghu, I agree with heartlessphil. I did a lot of demos with AppStream; while it’s still not ideal, it is something to look forward to if the technology and the attention on it pick up. ArchViz projects do not have to be re-optimized for tablet hardware, which is a time saver. The only additional thing would be to add navigation based on touch instead of keyboard and mouse.

As for VR on Gear VR… hmm… one day… one day, I hope.

I think X.io is even better. To quickly see how powerful it is, go to https://home.otoy.com/ and at the top right click on the Octane Render 3 demo. It will instantly open a streamed instance of Octane Render. It’s quick, simple and works pretty well, running on an NVIDIA GRID K520. That is mid-range hardware, since all these services are in beta, but imagine when you’ll be able to scale it to use a lot of GPUs at once… it’s going to be limitless, basically.

The biggest problem I faced with X.io, and probably AppStream too, is that you can’t go fullscreen, and if you navigate a scene, turn, and your mouse cursor leaves the windowed app, you lose control of what you were doing in your scene. It kills the immersion. But it’s possible to always show the cursor and use click+drag to move in Unreal. That would remove the window-boundary problem.

That octane render 3 demo…when you think about it…is pretty **** nice!

I guess another possibility would be to develop for PS4/Xbox One… they have capable hardware at a relatively low cost compared to a high-end PC. VR headsets are coming for the new consoles too!!! Every architect, designer or developer could easily borrow the kids’ console to check out their brand new ArchViz, I guess! hehe

I can’t see the streaming solution working for VR the way it works right now… VR is a very latency-sensitive application, and people are struggling to shave off small amounts of latency even with local rendering (to reduce sickness problems). If you bring cloud rendering into that pipeline right now, you’ll be adding, at the very least, unavoidable latency due to server distance (even if you ignore Internet connection issues). Actually, even Wi-Fi streaming of locally rendered frames to the VR device adds too much latency to be considered viable for VR (that’s why the VR headsets are all cabled).

The only way cloud computing could help in this case is if there were an alternative to plain video streaming, where the client device could locally render in real time (with low latency) some pre-rendered content received from the cloud (sharing the rendering task between client and server). And I don’t think a solution like that will appear before VR and ordinary cloud-rendered 3D apps leave the hype zone and become viable tech…
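To put rough numbers on the latency argument above, here is a back-of-the-envelope motion-to-photon budget. All figures are illustrative assumptions (not measurements), but they show why even an optimistic cloud round trip blows past the comfort target on its own:

```python
# Rough, illustrative motion-to-photon latency budget for cloud-rendered VR.
# Every number below is an assumption chosen for the sake of the arithmetic.

BUDGET_MS = 20.0          # commonly cited comfort target for VR

local_costs = {
    "head tracking + input": 2.0,
    "decode video stream": 5.0,
    "display scan-out": 5.0,
}
network_rtt_ms = 30.0     # optimistic round trip to a nearby data center
render_time_ms = 11.0     # one ~90 fps frame rendered on the server

total = sum(local_costs.values()) + network_rtt_ms + render_time_ms

print(f"total: {total:.0f} ms vs budget: {BUDGET_MS:.0f} ms")
print("over budget" if total > BUDGET_MS else "within budget")
```

Even with the network term set to zero, the remaining terms already eat most of the budget, which matches the point about cabled headsets.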

Yeah, you are right. I would use the cloud for a regular real-time app, but that’s pretty much it. VR isn’t even there yet, so it’s not going to be adopted by the masses anytime soon anyway. I could see using the cloud for streaming a regular real-time app, though.

Yep, VR needs to be more consumer friendly (low-res screen and high system demands; when I first tested my DK2 I thought: it’s not going to impress quality freaks, the resolution is too low). Mobile platforms were the main concern, and I think that is totally out of the question then?

Future is looking promising :cool:

Can those streaming solutions cache the data onto the iPad? That would solve the “PC-Quality ArchiViz on tablet” issue.

Other than that, if the infrastructure and server reliability are good enough, cloud gaming, app streaming, etc. could really give tablets “PC-quality” abilities.

Currently cloud in general is great… except when it (that particular service you really need) is down.

Unfortunately streaming capability is not good at the moment, so I will honestly avoid that solution for now.

The new MSI with the GTX 980 will probably be my choice in the near future; it looks like there is finally a laptop capable of running VR properly :slight_smile:

I want to share some of my experience in this field. About 8 years of my professional career have been dedicated to creating virtual scale models: interactive real-time applications for the real-estate market. Some of this time was spent on visualization, and for the majority of projects, both a visualization and a real-time application were produced.

One of the things you have to keep in mind all the time is: what is the purpose of your final product? I’ve seen it happen many times that both client and developer don’t truly understand the end purpose of the product. There is a huge difference in approach and process depending on whether what you are building is going to facilitate technical communication or create an emotional response.

From my experience, real-time applications shine in technical communication. They help discussion progress between real-estate developers and investors. There is very little room for emotions at this stage. Visual quality is not so important; it’s nice to have and might attract some clients, but what matters at the end of the day is dexterity: your ability to build and modify the application really fast. We are talking about just a few weeks for everything. By everything I mean getting reference materials, modelling, getting content into the engine, coding and testing. The only way you can do this is with a very efficient pipeline, one focused on maximizing the amount of work not done. There is no time to make new materials, no time to animate characters, no time to optimize existing models, no time to nicely unwrap everything.

In my honest opinion, I don’t see UE4 as a good basis for such a pipeline unless you spend a considerable amount of time and money building custom tools for your artists. Simple things like assigning the proper UE4 material purely on the basis of the 3ds Max or Maya material should be automated. Your actors (birds, pedestrians, cars and similar animated props) have to be universal and automated; you won’t have time to sit and draw a walking or driving path for each of your actors. If you have to use lightmaps because of performance or other reasons, do it outside the engine; there are many ways it can be done much faster and in an automated way using 3ds Max or Maya instead of UE4.
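The material-assignment automation described above could be as simple as a naming-convention lookup. Here is a minimal sketch of the idea; the keywords, asset paths and material names are all hypothetical, and a real tool would hook this into the engine’s import step:

```python
# Sketch: map DCC (3ds Max/Maya) material names to pre-made engine materials
# by keyword. The convention, paths and material list are hypothetical.

MATERIAL_MAP = {
    "glass": "/Game/Materials/M_Glass",
    "concrete": "/Game/Materials/M_Concrete",
    "metal": "/Game/Materials/M_Metal",
}
FALLBACK = "/Game/Materials/M_Default"

def resolve_material(dcc_name: str) -> str:
    """Pick an engine material based on keywords in the exported name."""
    lowered = dcc_name.lower()
    for keyword, engine_material in MATERIAL_MAP.items():
        if keyword in lowered:
            return engine_material
    return FALLBACK

# An import script would loop over the incoming meshes, e.g.:
assignments = {name: resolve_material(name)
               for name in ["Glass_Facade", "Concrete_01", "trim_metal", "misc"]}
```

The point is not this exact mapping but that the artist never touches material slots by hand, which is where the “maximize the amount of work not done” time savings come from.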
The reason why this is important: minimizing cost. Before you say “I have rich clients”, I can tell you that even if you work for clients from the UAE and Saudi Arabia, cost will be one of the reasons they come back to you for another project. This has nothing to do with your competition either; trust me, most of the clients who want a Crysis 3-quality real-time application of Burj Dubai have never done this kind of project before. This might sound discouraging, but this is a business where you produce products to help with communication; it is not an art installation, and aesthetic beauty means very little outside the group of your peers.

For emotional impact it’s better to go with good CG: it’s easier to produce, much easier to convey your message exactly, the production cycle is much shorter, and there are plenty of studios which can do it really well. The only showstopper I can see is if the client is planning to go through many iterations of the design, but again, that is supposed to happen in the previous steps of development. Emotional materials are the last stage, when the client starts marketing to the end users, not when they are still deciding how many floors the shopping mall is going to have.

So to answer the OP’s question: I don’t really see a good reason to have high-quality AAA content in an architectural application in the first place, regardless of which platform it is running on. If you happen to be in such a niche market, then your tools of choice would be something like RT products rather than game development engines.
Otherwise my concern would be not the technology but the amount of time you have to spend building a good, extremely low-poly version of the model that you could actually run decently on a smartphone. From there, the choice of engine is rather irrelevant as long as it has an efficient native implementation.

You can’t cache data from those app-streaming solutions the way they work right now, because they are just a video stream being sent to your device. And it’s all about interactivity here, so the user input on the tablet/smartphone must travel to the server (where your Unreal app is being rendered) and come back to your device as a video stream so fast that you couldn’t tell the application is not installed and running on your device (caching the video would cause huge delays). If you are going to stream a video that has no interaction at all, it makes more sense to upload it to YouTube or something…

A possible solution would be a kind of “shared rendering” where the server helps the client render the application in an asynchronous way. But there is no solution like that on the market. Maybe in the future…

I think it can be used right now in some controlled situations… like if you know it will be running on a server not too far from the client, and that the client itself has a good and stable Internet connection…

Really interesting positions, man… Thanks for sharing that! :smiley:

Ah yes. The “killer streaming app” for graphics could be one where the mobile/tablet app can cache certain “portions” for non-connected viewing. For example, 360-degree viewing from a fixed position in each room (or “camera point”). Then cache some interactive pathways (“rails”) moving from one portion to another.

One would say at this point why not just make a video.

However, if the “killer streaming app” for mobile/tablet could do “standard” cloud streaming for 100% interactivity, and cache the “offline” portions as above, then it would deliver extremely high-quality graphics on mobile/tablet without us poor developers having to take high-end stuff and make it “mobile-friendly”, a process I’m sure many of us feel is like taking a peacock and compressing it into a pigeon (OK, weird analogy).

Anyway, in the above “killer streaming app” proposal, what the offline caching would do is basically liaise with the server to respond to certain interactions and cache those video portions, including 360-degree video, which as YouTube shows is quite feasible.

So let’s say the property sales team has good Wi-Fi or 3G/4G/5G: they could present the full deal in all its glory. But if there are connectivity issues, or if they are going somewhere remote, they would have preloaded the cached portions of the ArchiViz.

At the home office/ headquarters the development team would iterate and update the “back-end” and roll that out accordingly to the sales team in the field.

This could be a best-of-both-worlds kinda thing.

Confusion regarding the place of UE in ArchViz is getting deeper and deeper without any apparent reason. Let me share what I was thinking when I first turned my attention towards UE: it was the capability to render 4K videos on my current machine without constraints on frame rate or video length. I am still part of a great team working for esteemed studios worldwide, and I can safely say that UE is not considered part of the production pipeline, but rather an extension with varied uses, like VR, RT and a fast solution for videos.

@Boredengineer suggests the use of UE in technical communication is irrelevant, and I think the same. AutoCAD, Revit and 3ds Max/V-Ray are the way to go here because of the technical aspects and the involvement of architects and draftsmen, but all of this is considered phase one of development. In the second phase things get much more interesting, as interior designers and marketing people jump in. Then come rounds of test renders and changes. Here again the traditional tools shine, but only to a certain extent. After the designs are finalized, if you have the file converted to UE (conversion to Unreal is not that time-consuming if it’s being done for the Windows platform) with close-to-V-Ray quality, then changing materials and textures becomes a breeze, not to mention real-time movement through interior and exterior spaces and video renders in minutes (at 4K quality). This gives a tremendous boost to the marketing team.

On the concept of rich clients: UE can target clients with limited budgets and big firms without money constraints equally. It becomes especially useful for small players in ArchViz who want to create videos of their small 2BHK-3BHK flats/spaces, because rendering 2K videos in V-Ray at decent quality is not an easy task and the prices are exorbitant.