Hi, we’ve just switched to Unreal 4 and I’m currently playing around with a test project to work out how localization is supposed to work.
I found the documentation to be very fragmented but was finally able to get the text localization to work, i.e. localizable content defined in Blueprints, C++ code and external text files is gathered correctly and the correct translation is output both in my Blueprint test nodes and the UI.
However, I am running into problems when it comes to localizing sounds. All I could find when searching the documentation was that it’s supposed to be fully supported and to “use the dialogue system” (?). I remember we had trouble getting localized sounds in Unreal 3 to work (though it worked fine in the end), but as far as I can tell, the system works completely differently now.
Here’s what I’m trying to do:
I have two speech files in wav format. They have identical names, but one is English and the other one German.
I’ve imported both wav files into the appropriate sub-folders “de” and “en” within a new Content folder “Sounds”.
I then created a SoundCue with the same name + “_cue” and a DialogueWave with the suffix “_dwave”.
For the DialogueWave, I created a dummy DialogueVoice (neutral, singular) and referenced the English wav file.
For the SoundCue, I referenced this new DialogueWave asset as a property of a Dialogue Player node.
I then placed the SoundCue directly into my test level, so it starts playing immediately.
No matter what culture I set in the Standalone settings, the sound that plays is always the English one.
I also tried making the following changes to my initial set-up, none of which had any effect on the outcome:
- renaming the imported wav files to use the appropriate locale suffix (“de”, “en”), as per the Unreal 3 system
- using a WavePlayer node in the SoundCue instead of a DialogueWave
- triggering the sound via a collision box instead
Obviously I’m missing something here, but I was unable to find any documentation on how this is supposed to work. FWIW, I can tell that the culture itself is correct because a “Print String” node triggered on BeginPlay outputs the correctly translated message.
A related question is the localization of subtitles. Both SoundCues and DialogueWaves have properties which are supposed to be localized and indeed all entered strings are listed in the gathered localization files. Yet for some reason when I use PlayDialogueAtLocation, the displayed subtitle shows the untranslated localization source text. Does this happen because the sound file itself is not localized correctly?
Currently, UE4 does not have a functioning audio localization pipeline. We are actively working on the system though and you can probably expect initial releases to appear in 4.10 or 4.11 editor builds.
You can, however, set your game up so it is ready for localization when the pipeline comes online. To do that, all you need to do is use the DialogueWave and DialogueVoice assets. You don’t have to manually import the final audio for the dialogue waves, as the localization pipeline will do that for you; all you need is some temporary placeholder audio to associate with each DialogueWave during development.
One important thing to remember when using DialogueWaves is to set up your “contexts” appropriately. Contexts are not only how you specify which sound to play; they are also the hooks the audio localization pipeline uses to swap in the appropriate audio files. For example, say you have the DialogueWave “Hello” with these contexts:
Dan → Sally
Sally → Dan
Dan → Jeff
Sally → Mary
In English, all of Sally’s speaking contexts will be hooked up to the same SoundWave (her saying hello), and all of Dan’s speaking contexts will be hooked up to the same SoundWave (him saying hello). In other languages, however, it may be appropriate for the greeting to vary based on the gender of the person you are talking to (or perhaps your familiarity with that person). In those languages, each context would be associated with its own SoundWave and translation. I hope this makes sense.
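If it helps to picture it, here is a toy sketch of that mapping in plain Python. This is not engine code, and every file name and the “XX” language are invented for illustration; it only shows how English can reuse one recording per speaker while another language needs one per context.

```python
# Toy model of DialogueWave contexts: NOT engine code.
# A "context" is a (speaker, listener) pair; per language, each
# context resolves to a sound wave. All names here are invented.

HELLO_CONTEXTS = [
    ("Dan", "Sally"),
    ("Sally", "Dan"),
    ("Dan", "Jeff"),
    ("Sally", "Mary"),
]

# English: one recording per speaker, shared across all their contexts.
EN_WAVES = {ctx: f"hello_{ctx[0].lower()}_en.wav" for ctx in HELLO_CONTEXTS}

# Hypothetical language "XX" where the greeting varies with the
# listener: every context gets its own recording.
XX_WAVES = {ctx: f"hello_{ctx[0].lower()}_to_{ctx[1].lower()}_xx.wav"
            for ctx in HELLO_CONTEXTS}

def resolve_wave(waves, speaker, listener):
    """Pick the wave for a speaking context, as the dialogue system would."""
    return waves[(speaker, listener)]
```

Because the contexts are the lookup key, the localization pipeline can later swap the table for a given language without touching the DialogueWave itself.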
As for subtitles, the option will be removed from SoundWaves and only exist on DialogueWaves in the future.
In regards to text localization, did you happen to go the route of setting up commandlets or did you use the new experimental localization dashboard tool? Just looking for feedback on the tool.
Thanks for the explanation. Can you give an estimate of when the system will go live?
I did have a look at the dashboard but by that time I had already set up the commandlets, which personally I consider more convenient. We’ve already got a translation pipeline of our own, so all we need is a way to hook the translated text into our game. I figure that converting our custom translation format into Portable Object format and running the script whenever we get a batch of new translations will be much easier than using an integrated system we don’t really need. Sorry.
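Roughly, the conversion script would look something like this. The tab-separated input layout (key, source text, translation) is just a stand-in for our format, and note that UE4’s own PO import/export has its own msgctxt conventions, so the exact layout should be checked against a file produced by the engine’s export step:

```python
# Sketch: convert a hypothetical tab-separated translation export
# into Portable Object (.po) format. The input layout
# (key \t source \t translation) is an illustration, not our real format.

def po_escape(s):
    """Escape backslashes, quotes and newlines for a PO string literal."""
    return s.replace("\\", "\\\\").replace('"', '\\"').replace("\n", "\\n")

def tsv_to_po(tsv_text):
    """Return PO file contents built from tab-separated lines."""
    # PO header entry: empty msgid, header fields as msgstr continuations.
    lines = ['msgid ""', 'msgstr ""',
             '"Content-Type: text/plain; charset=UTF-8\\n"', ""]
    for row in tsv_text.strip().splitlines():
        key, source, translation = row.split("\t")
        lines.append(f"#. Key: {key}")
        lines.append(f'msgctxt "{po_escape(key)}"')
        lines.append(f'msgid "{po_escape(source)}"')
        lines.append(f'msgstr "{po_escape(translation)}"')
        lines.append("")
    return "\n".join(lines)

sample = "Greeting\tHello\tHallo\nFarewell\tGoodbye\tAuf Wiedersehen"
print(tsv_to_po(sample))
```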
All I can say is that we are actively developing the audio localization pipeline and that the first phase will be released with the 4.10 or 4.11 editor version.
As for the dashboard, it doesn’t actually force you to translate your content in a specific way (I don’t recommend using the built-in translation editor except for debugging, or if you have very minimal text). The dashboard mostly just gives you convenient buttons to click instead of having to run the commandlets from the command line.
Though you’ll still need the command line if you are trying to automate your localization process.
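As a sketch of what that automation can look like, a script typically just shells out to the editor with the GatherText commandlet and a localization config file. The executable path, project name, and config path below are placeholders for your own setup:

```python
# Sketch of driving a localization commandlet run from a script.
# Paths and the project name are placeholders for your own project.
import subprocess

def build_loc_command(editor_cmd, uproject, loc_config):
    """Assemble the command line for a localization commandlet run."""
    return [editor_cmd, uproject,
            "-run=GatherText",
            f"-config={loc_config}"]

def run_localization(editor_cmd, uproject, loc_config):
    """Run the commandlet headlessly and fail loudly on a non-zero exit."""
    subprocess.run(build_loc_command(editor_cmd, uproject, loc_config),
                   check=True)

# Example invocation (placeholder paths):
cmd = build_loc_command("UE4Editor-Cmd.exe", "MyGame.uproject",
                        "Config/Localization/Game.ini")
```

The config file passed via -config is the same one the dashboard generates, so you can start with the dashboard and move to scripts later.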
Correct, it is not ready yet. Part of audio localization is “package localization”, which will certainly be available in the 4.10 release, even if the full audio localization pipeline isn’t.
Hi, we are currently looking into how to implement asset localization with the editor. Is there any update on the development of your localization pipeline for audio, textures, and meshes? Is DialogueWave ready to be used for localization in Unreal 4.12?