Unreal Engine Livestream - 4.18 Preview - Oct 5 - Live from Epic HQ

The release of 4.18 is just around the corner with fantastic updates to volumetric lightmaps, the clothing tool, the revamped Physics Asset Editor, Media Framework 3.0 and even improved support for both Google’s ARCore and Apple’s ARKit. Nick and James join the livestream to discuss these changes and talk about what else is in store for 4.18. If you haven’t already, you can check out the Preview on the Epic Games Launcher!

Feel free to post any questions you have to this thread. Do know, however, that time will be tight since we have tons of information to provide, but we’ll try to get to any questions if possible.

Thursday, October 5th @ 2:00PM ET


Nick Penwarden - Director of Engine Development
James Golding - Lead Programmer - @EpicJamesG
Amanda Bott - Community Manager - @amandambott


Can we expect any info regarding ARKit implementation? ARKit seems broken in the Preview, and there's apparently no info whatsoever on how to approach ARKit development.

QUESTION: Which 4.18 features are already used by Fortnite and Paragon? Are they using a near-standard engine or a heavily customized one?

Question: Is it better to have a higher-res cubemap when using the 4.18 skylight lightmap baking with directionality? If so, what's the ideal resolution?

Sweet! I’m pumped about anything volumetric… march those rays!

What about snapping object to object (pivot/rotation/etc.), just like in 3ds Max (even with power-of-two modeling)?
A much-needed basic operation to save time!


Question: Will the clothing tool support painting latchToNearest values in a subsequent release? Without it I can’t avoid self-penetration for clothing that has thickness.

you should add teamcreate

The new blog post answers some of my questions about Fortnite: Unreal Engine Improvements for Fortnite: Battle Royale - Unreal Engine

Yes please make it possible to snap vertex/pivot/socket to vertex/pivot/socket with/without attachment. It would greatly improve workflow.
I got socket-to-socket snapping to work with Blueprints, and it would be great to have that built in. It makes a huge difference, and it is not crazy difficult to put in.

Is it possible to get the Niagara plug-in to work (without code changes) in 4.18? If so, how?
I assume this is a work-in-progress/silent-beta kind of release - lol, good luck with that XD - so I'd like to know what the intention is.

  • For those who have not found it yet: it does not seem to emit particles and it crashes a lot, so do not get your hopes up.

Were any improvements/changes made to the destruction system when it was moved to a plug-in?

  • It used to be that when you import a fractured mesh into the destruction system via FBX, you have to assign a separate material to each chunk (instead of one material slot per distinct material), which made me think each chunk gets its own draw call. Could you fix that? I am aware of the APEX tools for creating fractured meshes, but they make me vomit (a little) every time I see them. I want to author the fractures entirely in Houdini. Can I export some kind of additional attributes/data via FBX to avoid using the APEX tools? Could you make that possible, please?

Question: How is the better filtering method (PCSS or something) for shadow mapping coming along?

Do you have a **roadmap** for mobile performance & optimization in the near future? Especially regarding the aesthetics of light & shadow. Things I miss a lot as of now:

  • The Emit Light module of particles not working on mobile
  • Dynamic shadows of movable Point Lights

Things along these lines, you get the idea :slight_smile:

Thanks a lot in advance!

^^^ This please!

Question: Will the live link with Maya showcased at GDC be part of 4.18 as an experimental feature?

Question: ARKit and ARCore let you place objects on a plane that is detected in the physical world, but they don’t have any ability to align to a specific location on that plane in the physical world. Are there any plans to incorporate any other feature recognition into the more general AR framework? Alternatively, is it possible to process camera or media stream frames such that feature or color detection could be implemented by blueprints or C++? Thanks!

It would be nice if you could take a look at this and merge it; it's a very small change (just exposing the functionality to any custom components) that will allow Koderz to add support for dynamic distance field generation to the RMC:

I wish I could make the streams these days, but alas. I do have one question about Sequencer: a pretty big feature that is surprisingly missing from it is audio export. When can we expect to be able to export audio?

The audio is very out of sync on this vid. Is it possible to get a reupload?