Getting some spatialized audio for VOIP is something I’m very excited about, so seeing this mentioned in the 4.19 preview notes had me wanting to try it as soon as possible. I’m having a hard time understanding how to get it going, however. Is the component supposed to act like a VOIP version of the audio capture component? Is it supposed to extend the functionality of the audio capture component? Or am I supposed to create the VOIP functionality in C++ and use the VOIPTalker as a kind of output to apply attenuation and effects? Or maybe some other implementation I’m not grasping?
I’ve been trying a few different things within blueprints. It’s tough for me to test, though. I’m hearing myself through the audio capture component, which gets in the way a bit when trying to hear whether the other player is projecting audio. I’m also getting hang-ups when playing in the editor; about half of the time I’ll get a completely unresponsive editor and have to kill UE4Editor through Task Manager.
There is the CreateTalkerForPlayer static function, which returns a valid VOIPTalker component. And then there is the RegisterWithPlayerState function, which registers an existing VOIPTalker with the provided player so that their audio component is linked.
There is also currently a VOIP BP library that contains SetMicThreshold, which sets the input threshold when it’s not push-to-talk.
There are a few other useful functions in the voice library, but they aren’t exposed to BP yet.
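For example, a minimal sketch of how those calls fit together (going from memory of the 4.19 Net/VoiceConfig.h header, so double-check the exact names in your build):

```cpp
#include "Net/VoiceConfig.h"
#include "GameFramework/PlayerState.h"

void SetupTalkerFor(APlayerState* State)
{
    // Option A: have the engine create and register the component in one call.
    UVOIPTalker* Talker = UVOIPTalker::CreateTalkerForPlayer(State);

    // Option B: register a talker you created yourself (e.g. on a custom
    // PlayerState) so it gets linked to that player's voice audio.
    // MyTalker->RegisterWithPlayerState(State);

    // Open-mic input threshold (only relevant when push-to-talk is off).
    UVOIPStatics::SetMicThreshold(0.2f);
}
```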
Oh, sorry I wasn’t clear enough about what steps I took. I did register the player state. I actually did it a couple of ways, but I ended up creating a custom player state and adding the VOIPTalker in there and registering it, with the idea that when one is assigned to a player, it comes with a VOIPTalker. I’m not sure whether that’s a valid approach.
Is creating the VOIP component, creating an audio capture component, and registering the VOIPTalker all one has to do to get it started?
The audio component is part of the VoiceEngine; the VOIPTalker is a middleman component. You register it with a player state because that lets it be linked to the correct audio component in the back end through the player ID.
If you are spawning one outside of the player, then you’ll need to attach it to them so it has the correct location.
In the VoiceEngine, it checks whether there is a VOIPTalker for a player.
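For example, a rough sketch (FVoiceSettings and its ComponentToAttachTo field are assumed from Net/VoiceConfig.h):

```cpp
#include "Net/VoiceConfig.h"
#include "GameFramework/Pawn.h"

// Attach a talker spawned outside the player to the pawn, so the spatialized
// voice plays from the correct location.
void AttachTalkerToPawn(UVOIPTalker* Talker, APawn* Pawn)
{
    Talker->Settings.ComponentToAttachTo = Pawn->GetRootComponent();
}
```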
I’m trying to set up the VOIP talker, but I’m not getting any sound from the players. I created the talker for the player and then registered the VOIPTalker. Am I missing a step?
You have to go through the normal VOIP setup as well; this just adds proximity and filtering to the normal VOIP workflow.
I’ll also note that this is for letting simulated clients hear another player with effects, which is why you pass it a player state. It’s for non-owning clients.
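For reference, the config people usually mean by the “normal VOIP setup” looks something like this (collected from community threads; the exact keys can vary by online subsystem, so treat it as a starting point):

```ini
; DefaultEngine.ini
[OnlineSubsystem]
bHasVoiceEnabled=true

[Voice]
bEnabled=true

; DefaultGame.ini
[/Script/Engine.GameSession]
bRequiresPushToTalk=true
```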
Thanks. Okay, so I probably just messed up the normal VOIP setup. Just wondering: will Voice Level show a value higher than 0 when it works correctly, or do I need a player to join the server for Voice Level to return a value?
I’m interested in this “normal VOIP setup” you’re referring to. I only found fragments of it online.
What I did was:
Create a third-person example project.
In DefaultGame.ini I added:
[/Script/Engine.GameSession]
bRequiresPushToTalk=false
In DefaultEngine.ini I added:
[Voice]
bEnabled=true
Edited the character blueprint with BeginPlay -> Delay(1.0) -> RegisterWithPlayerState, with my inputs being a VOIPTalker component and the player’s state.
I added a branch that made sure I wasn’t the server before registering the VOIPTalker (a rough C++ equivalent of this step is sketched below).
So my question is: which of these steps was unnecessary, and which did I miss?
For example, if I need an additional audio component, I’d like to know what to do with it, and which audio component are we specifically talking about (Audio, AudioCurveSource, AudioCapture, or something completely different)?
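For reference, here is a rough C++ equivalent of that Blueprint graph (my own sketch, not official setup code; the timer mirrors the Delay node, and VOIPTalker is a UVOIPTalker* component assumed to be created on the character):

```cpp
void AMyCharacter::BeginPlay()
{
    Super::BeginPlay();

    // PlayerState may not be replicated yet at BeginPlay, hence the delay.
    FTimerHandle Handle;
    GetWorldTimerManager().SetTimer(Handle, this, &AMyCharacter::RegisterTalker, 1.0f, false);
}

void AMyCharacter::RegisterTalker()
{
    // Mirrors the branch: only register on clients, never on the server.
    if (!HasAuthority() && PlayerState != nullptr)
    {
        VOIPTalker->RegisterWithPlayerState(PlayerState);
    }
}
```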
The Audio Capture plugin is not related to the VOIP stuff, even if they seem like they should be. Audio capture is a knock-on of the mic-recording feature in Sequencer (for mo-cap recording); it was an ask from one of our prototype teams for a way for microphone input to drive gameplay. It’s local only and not currently set up for VOIP. We’d like to unify the two things in the future so there aren’t multiple code paths for getting mic data!
As for the VOIP stuff, I’m not entirely sure what the minimal steps are to get VOIP working in your project. I’d use the ShooterGame sample project as a starting point, as it has VOIP set up out of the box.
I’m currently investigating how to get mic data, and there seem to be a few routes to do that in 4.19. VOIPTalker has some traction (including some console support), but I’m not interested in VOIP; I only need local access to the audio stream.
Media Framework 3.0 capture (the IMediaModule interface) is another route.
AudioCapture looks like a simpler route than Media Framework, but the current implementation is heavily focused on Windows, and I’ll need to hack in support for additional platforms.
Can you provide any insight on which path to pursue? I hate to put too much effort into something that will be going away. Audio in UE4 seems to be in such flux right now.
This should be a different thread since it’s not directly related to VOIP.
If you want mic data, the best way to get it is to use the Audio Capture plugin and make a mic component.
As for what you mean by “want”: if you want the raw PCM data for processing, analysis, or recording, you’ll want to write a source effect (see the existing source effects in the Synthesis plugin for examples) that gives you access to the mic audio data. You can do whatever you need to with the audio stream.
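A skeleton of that source-effect approach might look like this (a sketch only; the ProcessAudio signature changed across engine versions, so match it to the Sound/SoundEffectSource.h header in your build):

```cpp
#include "Sound/SoundEffectSource.h"

// Pass-through source effect that taps the raw source (mic) audio.
class FSourceEffectMicTap : public FSoundEffectSource
{
public:
    virtual void Init(const FSoundEffectSourceInitData& InitData) override
    {
        // Cache stream info (sample rate, channel count) for your consumer.
    }

    virtual void OnPresetChanged() override
    {
        // Read updated settings from the preset here.
    }

    virtual void ProcessAudio(const FSoundEffectSourceInputData& InData, float* OutAudioBufferData) override
    {
        // InData.InputSourceEffectBufferPtr is this block of raw source audio.
        // Hand it to your own analysis/recording code, then pass it through.
        FMemory::Memcpy(OutAudioBufferData, InData.InputSourceEffectBufferPtr,
                        InData.NumSamples * sizeof(float));
    }
};
```

You’d pair this with a USoundEffectSourcePreset subclass (see the EFFECT_PRESET_METHODS macro used by the Synthesis plugin’s effects) and put that preset in the source effect chain of the sound you want to tap.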
If you want the mic data to drive gameplay through some envelope-follower/amplitude mechanism, that’s now supported by default in 4.19. Just assign the envelope-follower delegate in BP to something and you’ll get the envelope of the sound.
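In C++ the binding looks roughly like this (the delegate name and parameters are assumed from 4.19’s AudioComponent.h, so verify them in your version; the handler must be declared as a UFUNCTION() in the header for AddDynamic to work):

```cpp
void AMyActor::BindEnvelope(UAudioComponent* AudioComp)
{
    AudioComp->OnAudioSingleEnvelopeValue.AddDynamic(this, &AMyActor::HandleEnvelope);
}

void AMyActor::HandleEnvelope(const USoundWave* PlayingSoundWave, const float EnvelopeValue)
{
    // EnvelopeValue is the followed amplitude of the sound; use it to drive
    // gameplay, e.g. trigger an event when the player is loud enough.
}
```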
As for things being “in flux”: nothing is going away in the audio mixer. It’s in early access, not “experimental”. There are UE4 games shipping with the audio mixer right now, and we are about to turn it on for Fortnite. Fortnite is a live game, so I’ve been a bit conservative about switching it on; swapping an audio renderer for a live game played by millions of players is obviously a risky proposition!
Edit: We are adding a BUNCH of new features which are audio-mixer only, since there’s a ton of low-hanging fruit with real-time DSP/effects, etc. So it seems like it’s in flux, but it’s not. Also, since it’s new, there are a lot of bug fixes, optimizations, etc.
I’ve followed the instructions in the other thread, but I’m having no luck with spatialized VOIP.
I’m using the same “VRAttenuation” settings I use for my other sound effects, but the sound still reaches the other player unmodified. I’m calling Create Talker For Player after a one-second delay on my Pawn’s BeginPlay. Get VOIP Level also seems to always return 0, even when the player in question is speaking. (Using the AdvancedSessions Blueprint plugin, IsRemotePlayerTalking correctly returns True or False.)
I’m using Steam Audio, for what it’s worth, to do the rest of my spatialization. Steam Audio seems to be behaving correctly. I’m also using OnlineSubsystemSteam.
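For what it’s worth, the only place I’ve found where the attenuation could plug in is the talker’s Settings struct. This is my assumption, based on FVoiceSettings in Net/VoiceConfig.h (VRAttenuationAsset stands in for my own USoundAttenuation property):

```cpp
#include "Net/VoiceConfig.h"
#include "Sound/SoundAttenuation.h"

void AMyPawn::SetupVoice()
{
    UVOIPTalker* Talker = UVOIPTalker::CreateTalkerForPlayer(PlayerState);
    // Attach to the pawn and assign the same attenuation asset used elsewhere.
    Talker->Settings.ComponentToAttachTo = GetRootComponent();
    Talker->Settings.AttenuationSettings = VRAttenuationAsset;
}
```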
I know this is quite an old thread, but I want to point out that nothing mentioned here relates to other platforms. When I set up getting raw data from the mic using VOIP, it works fine on Android; however, when the app closes it does not shut down the AudioRecord driver correctly, even though I stop and shut down everything on exit. That means it only works the first time you try the app. There is a bug I already reported.
Is there really any future for the AudioCapture plugin to be multi-platform rather than Windows-only? This is really important for me to know.
You already did all of the other steps properly. It seems to route through the OSS (e.g. Oculus or Steam), so they must have their own BP objects and configurations. I’m checking on that myself now.
*Update: I just found this tutorial. It goes into much more detail about everything that needs to be set up for VOIP: