Okay, that’s some useful feedback. I can say that Dialogue Waves themselves are not a full dialogue system, but they are the primitive component that you’d use to build one.
In the example you gave, I imagine the idea would be that you’d cut your dialogue up into separate waves and then queue those to play as needed. I say “I imagine” because I didn’t actually design Dialogue Waves, but you’re right that they don’t really seem designed to handle large chunks of text (like you might find in a cut-scene) out-of-the-box. Cutting up the dialogue is also not ideal, as it will make it harder for your VA to perform the lines unless you do the cutting in post.
With regards to adding offsets to the existing subtitle data within a Dialogue Wave, you also have to be aware of the fact that those offsets will change on a per-language basis, and we currently don’t have a good way of dealing with localized meta-data like that (Sound Wave assets do have offsets on their subtitles, but they’re not localization friendly).
We do actually have a newer system (called “overlays”, see UBasicOverlays) that supports offsets and localization, and can be imported from SRT files. I think these could be a reasonable solution for complex Dialogue Wave subtitles: you could provide a subtitle override in the form of an overlays asset, which we’d then queue into the subtitles system.
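To make that concrete, here’s a rough sketch of sampling an overlays asset for the subtitle to display at the current playback time. Treat this as a sketch under assumptions, not a definitive usage: the free function and variable names are mine, and you should check the UOverlays header in your engine version for the exact per-time query signature and FOverlayItem fields.

```cpp
// Sketch: querying an overlays asset for the active subtitle cue.
// ASSUMPTIONS: the per-time query shown here (GetOverlaysForTime) and the
// FOverlayItem::Text field; verify against your engine's Overlay module.
#include "BasicOverlays.h"
#include "Misc/Timespan.h"

FText GetCurrentSubtitle(const UBasicOverlays* SubtitleOverlays, float PlaybackTimeSeconds)
{
    TArray<FOverlayItem> ActiveOverlays;
    SubtitleOverlays->GetOverlaysForTime(
        FTimespan::FromSeconds(PlaybackTimeSeconds), ActiveOverlays);

    // SRT cues rarely overlap, so showing the first active cue is usually enough.
    return ActiveOverlays.Num() > 0
        ? FText::FromString(ActiveOverlays[0].Text)
        : FText::GetEmpty();
}
```

Because the overlay text lives in an asset, it can go through the normal localization pipeline rather than being baked into per-language audio offsets.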
In the current subtitle manager code we do have an OnSetSubtitleText function that you can bind a delegate to. This lets you override the subtitle rendering with a UMG widget, which is what Fortnite does. It is, however, marked with the comment “HACK”, so I imagine someone intended to clean it up at some point.
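For reference, binding to that delegate might look roughly like the sketch below. The handler signature (a single FText) is what I’d expect but you should confirm it against SubtitleManager.h in your engine version; the HUD actor, widget class, and SetSubtitleText call are hypothetical names for illustration.

```cpp
// Sketch: routing engine subtitles into a UMG widget via OnSetSubtitleText.
// ASSUMPTIONS: the delegate broadcasts the subtitle as a single FText;
// AMySubtitleHUD, SubtitleWidget, and SetSubtitleText are hypothetical.
#include "SubtitleManager.h"

void AMySubtitleHUD::BeginPlay()
{
    Super::BeginPlay();

    // While a delegate is bound here, the subtitle manager hands the text to
    // us instead of drawing it itself (this is the "HACK" path mentioned above).
    FSubtitleManager::GetSubtitleManager()->OnSetSubtitleText().AddUObject(
        this, &AMySubtitleHUD::HandleSubtitleText);
}

void AMySubtitleHUD::HandleSubtitleText(const FText& SubtitleText)
{
    if (SubtitleWidget != nullptr)
    {
        SubtitleWidget->SetSubtitleText(SubtitleText); // hypothetical UMG call
    }
}
```

Remember to unbind (e.g. in EndPlay) so the manager falls back to its default canvas rendering when your widget goes away.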