[4.4 Localization] Localization Gather Text Completes, Manifests/Archives created, but not all BP FText are present in the archives!

Dear Friends at Epic,

As you might know I am working with Hourences, and he is reporting to me that a lot of his FText are not showing in the localized text archives I am generating.

Here are my BP settings in my Localization file:

;Gather text from assets

;Include Paths

;Exclude Paths

;Asset Extensions

;Exclude Classes


Do you see anything missing in the above, or present that shouldn't be? I copied my settings from the Engine.ini localization file.
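For reference, the engine's own localization config organizes asset gathering into numbered `GatherTextStep` sections. A filled-in asset-gather step looks roughly like the sketch below; every path and value here is an illustrative placeholder modeled on the stock Engine.ini layout, not my actual settings:

```ini
; Illustrative sketch of a GatherTextFromAssets step (placeholder values)
[GatherTextStep0]
CommandletClass=GatherTextFromAssets

;Include Paths
IncludePaths=./Content/

;Exclude Paths
ExcludePaths=./Content/Localization/*

;Asset Extensions
PackageExtensions=umap
PackageExtensions=uasset

;Exclude Classes (classes whose FText should never be gathered)
;ExcludeClasses=SomeClassName
```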

#Audio Subtitle Bug

Hourences has reported that none of his audio subtitles are being gathered.

Here’s a pic!

#Blueprint Text Literal Bug

Hourences reported that his literals are also not getting gathered.


A lot of the Blueprint-related FText items seem to not be getting gathered!


Help! I am not sure what else to do on my end :slight_smile:


For localized subtitles, please use the Dialogue system. It is not yet complete, but it is at least as functional as the Sound system. Dialogue Wave text is gathered for localization and the localized text will be used. The Dialogue system presents a specialized workflow and pipeline for spoken dialogue to be localized, both in terms of subtitles and in terms of (eventually) playing localized sound files.

As for the literal text not being gathered from your blueprint, I made a very quick test case here (master, not using 4.4) and found no issue. I’ll edit this post when I’ve tested using 4.4 from Github.

Update: Tested with 4.4 from GitHub. The behavior was reproduced. Some change made after 4.4 fixes the issue. I’ll try to track it down.

Okay thanks for looking into this Sheevok!

I am forwarding your response to Hourences :slight_smile:

We’ve tried 3 times back and forth to get the Literal Text to work without success so far.

Are my INI settings okay?


Your INI settings do not seem to be the cause of the issue. I’ve updated my answer with additional details.

Yes. But you should still use the Dialogue system for any localization of spoken dialogue in the project. 4.5 should have the fix for the literal text nodes.

Thanks for the update Sheevok!

So does this mean I can expect things to work again in the 4.5 release?

Thanks for the continued updates!


Thanks Sheevok!



I am working together with Rama, and we’ve been looking into the dialogue system but we hit new issues doing so.

Would it not be possible to simply let the engine gather subtitle text from sound waves as well, in case someone does want to go that route? I take it that would not be difficult programming-wise?

I can’t make subtitles appear with the dialogue system. My setup looks like this - the AAAAAAA does not appear anywhere when played. Regular subtitles do appear (the ones set up in sound waves directly).


I have some serious workflow problems also with the dialogue system.

Sound waves can be right-clicked → Create Sound Cue, but there is no equivalent for dialogues; the whole process of creating them has to be done one by one. I have hundreds of waves and this is going to take me ages.

Each of those waves already has its subtitles set up as well. That means the subtitle text exists in both the wave and the dialogue asset, which is confusing for the user.

Furthermore, sound waves have their subtitle text embedded within them. Because I now also need to make a dialogue asset for each sound wave, I will have twice the number of files to deal with and organize, cluttering up the content browser.

It would IMO be much cleaner if the subtitle could only be entered in one of the two (preferably the sound wave, so it reduces the number of files), plus a way of generating dialogues from selected waves, like with sound cues. I would also prefer a simple "Should Localize" checkbox on each wave to allow it to be localized. That would be much more streamlined IMO.

It also feels like overkill. I just want the player to say a one-liner, but I am forced to deal with and spend time on setting up a speaker, an audience, dialogue voices, plurality, gender, etc. I just want to play a sound. I understand you need this for more complex setups, but could the extra options not be made optional, or automatically filled in with default setups that work for most simple cases? That would also save a lot of setup time.

Hey Hourence,

You’ve unfortunately wandered onto something that is still very much in development. While the basics more or less work, as you’ve pointed out it clearly lacks the workflow polish needed to be embraced.

We have plans for significant work on these systems, especially since you can’t actually localize audio in the engine at all at the moment. Unfortunately, we have only one person working on all things localization/internationalization, and some other highly important tasks are blocking progress on the audio systems.

The current plan is to remove subtitling from SoundWaves and only support subtitling through DialogueWaves. You are not technically forced to create a SoundWave to play a DialogueWave (though I do not know whether the subtitles appear on screen as they are supposed to). Creating a development-only SoundWave for a DialogueWave will be dramatically smoothed out; we would support quick audio import or an automatically generated robot voice.

I said development-only because official recordings would come through the localization pipeline and don’t require any pre-existing SoundWaves. This is ideal for development, as you only need to create a DialogueWave for each line and don’t have to worry about hooking up X number of SoundWaves, which vary based on the culture the audio is recorded for.

I’m truly sorry for the trouble. I’ve been pushing very hard to get the resources to fix these systems properly, but it looks like you are going to have to wait a while longer.

We do have this system scheduled to get some love before the end of the year.


I am curious Hourence, how are you attempting to play the DialogueWaves?

You mention you want to create a SoundCue out of them, but that actually reduces some of the flexibility of the system, and I rather dislike how everyone seems to need to create a SoundCue and a SoundWave for every piece of audio.

There probably isn’t a better way to do what you are trying at the moment, but I want to make sure we understand your use-case to ensure it’s addressed in the future.

Hi Justin,

Thanks a lot for the info!
I am concerned that if work on this only begins by the end of the year, it will probably take until February or March until it all works properly. That has the potential to have a real impact on our release plans for next year, so I will be sure to let others at Epic know about it as well. Maybe you will get a bit more resources then.
Plus your non-subscriber licensees must be hitting this stuff pretty hard as well, as it is one of those things that effectively makes it impossible to release a big game with standard UE4 right now…

I don’t actually have 700 sound cues; I have multiple waves in one cue, and then I make it randomly pick one of those. However, with the dialogue system I would truly need one dialogue asset for every wave, given that the subtitle text is in there.

We have a player who continuously says one liners. Each line has 4 or so variations, which are combined in 1 cue. The cue then randomly picks one of the variations and plays it.

If I have to start using the dialogue system, I would use the dialogue player in sound cues, so that I could keep all of my cues, with the randomization still active, but link them to the dialogues instead. So the ability to add dialogue into cues, as you can now, is crucial for us; otherwise I’d have to redo even more.


A quick way to filter out any dialogue that does not have text set would be great. This is missing for sound waves as well. If one of the 700 files lacks subtitles, I have no way of knowing right now other than opening them up one by one.
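The check being asked for is simple in principle. The sketch below is plain, self-contained C++, not actual UE4 editor code; the `WaveInfo` struct and all names are invented for illustration. It just walks a list of wave records and reports the ones with an empty subtitle field, which is the kind of report a "filter assets missing subtitles" feature would produce:

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for the relevant fields of a sound wave or
// dialogue asset; not a real UE4 type.
struct WaveInfo {
    std::string Name;
    std::string Subtitle; // empty means no subtitle has been set
};

// Returns the names of all waves that have no subtitle text set.
std::vector<std::string> FindWavesMissingSubtitles(const std::vector<WaveInfo>& Waves) {
    std::vector<std::string> Missing;
    for (const WaveInfo& Wave : Waves) {
        if (Wave.Subtitle.empty()) {
            Missing.push_back(Wave.Name);
        }
    }
    return Missing;
}
```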

I believe the current subtitles from sound waves do not appear correctly on the Oculus Rift. I did not look into it too much, but I believe only one eye sees the text. Getting that solid would also be crucial for us.

Unfortunately, we haven’t gotten pressure from our non-subscriber licensees to complete this system. Most big companies opt to swap in their own audio system and/or audio localization system, and Epic internally isn’t yet approaching a time when we would need these features for our own games.

That said I totally agree with your sentiment and see this as a pretty crucial part of the engine.

As for the SoundCues what are you using to play them? Blueprints, code, something else?

Also, I haven’t tried it yet, but you may be able to multi-select all your SoundWaves/DialogueWaves and use the Property Matrix in the right-click menu of the content browser. It allows you to inspect, edit, and sort properties within any UObject. You might just have to pin the Subtitle property as a column to quickly get the information you want.

Ah yes, the matrix viewer does work, for both dialogues and waves. Thanks for that.

We built a queue system in code for our one-liners. A Blueprint function inputs a Sound Cue into the queue, and then code takes the highest entry in the queue and plays it when it determines there is nothing preventing it from playing (you are dead, underwater, already talking, cooldown delays, etc.).

So the code does the actual playing of the sound cues, but Blueprint enters the cue to play into our queuing system.
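As a rough illustration of the pattern described above (not Hourences’ actual code; every type and name here is invented, and cue assets are stood in for by strings), a minimal one-liner queue in plain C++ could look like this: gameplay enqueues cues, and a per-frame tick plays the front entry only when no gating condition (dead, underwater, talking, cooldown) blocks it:

```cpp
#include <algorithm>
#include <deque>
#include <optional>
#include <string>

// Invented stand-in for the gating state; a real system would query
// the player and audio state instead of holding flags here.
struct GateState {
    bool bIsDead = false;
    bool bIsUnderwater = false;
    bool bIsTalking = false;
    float CooldownRemaining = 0.f; // seconds until the next line may play
};

class OneLinerQueue {
public:
    // Blueprint-facing entry point: enqueue a cue by name.
    void Enqueue(const std::string& CueName) { Pending.push_back(CueName); }

    // Called every frame; returns the cue to play this tick, if any.
    std::optional<std::string> Tick(GateState& Gate, float DeltaSeconds,
                                    float CooldownAfterLine) {
        Gate.CooldownRemaining = std::max(0.f, Gate.CooldownRemaining - DeltaSeconds);
        if (Pending.empty() || Gate.bIsDead || Gate.bIsUnderwater ||
            Gate.bIsTalking || Gate.CooldownRemaining > 0.f) {
            return std::nullopt; // something withholds playback this tick
        }
        std::string Next = Pending.front();
        Pending.pop_front();
        Gate.CooldownRemaining = CooldownAfterLine; // gate the next line
        return Next;
    }

private:
    std::deque<std::string> Pending; // highest-priority entry at the front
};
```

The cooldown is decremented before the gate check, so a long-enough frame delta naturally unblocks the queue without any extra bookkeeping.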

Hi people!

It seems Epic Games fixed the gather text commandlet and now it indeed catches the soundwaves and soundcues :wink: