Inside Unreal: Protecting Player Safety with Modulate

This week on Inside Unreal we’ll be sitting down with the team from Modulate to discuss ToxMod, their proactive voice moderation technology that makes voice chat safe for players. We’ll get a look into their innovative machine learning processes and how this tech can be used to create a safe and fun environment in multiplayer games, keeping the focus on the game with better inter-player dynamics and communication.

Thursday, August 11th @ 2:00 PM ET



Mike Pappas (he/him) - CEO / cofounder of Modulate - @mpappas74
Carter Huffman (he/him) - CTO / cofounder of Modulate
Zach Neveu (he/him) - Senior Core Engineer - LinkedIn
Tina Wisdom (she/her) - Community Manager - @TheUnWiseTina

If you’re unable to make the livestream, all episodes of Inside Unreal can be viewed afterwards on-demand.


This (we/I) will watch! :rofl:


Honestly, I don’t like this. As a non-developer, I think moderating voice chat has so so so so so many potential problems. I’m sorry to say, but trash talking is just part of internet culture. I’m all for giving parents the option to disable voice chat to protect their kids, 100%, but if the player is over the age of 18 and can’t handle some rude words thrown at them by a complete stranger then… I don’t know what to say. They aren’t mature enough to be in a voice chat, let alone function in the adult world. We can put all the protections up we can for people, but inevitably, there will be a situation where their feelings get hurt and I think we should be focusing more on equipping people to handle those moments than creating this bubble of “safety”. Practice makes perfect, even if the practice sucks. Personally, online chats have definitely helped me learn how to handle these situations and not value strangers’ opinions of me so much. It wasn’t fun, but it was necessary for my growth as a person.

Honestly, I don’t know a lot about this particular group and what their end goals are, they very well might agree with me completely, but I don’t know. All I know is more moderation is not always a good thing.

Voice chat moderation is just a bad idea in general, not to mention the legal issues that could be encountered (freedom of speech and all that).

How Ironic would it be if this comment is censored lmao. If it isn’t, props to the mods.


Protecting safety, a.k.a. censorship. Frankly, I would never play a game that uses this technology. They even cite the ADL on their website. There is plenty more I have to say, but I expect to get banned if I do so, and I wish to obviate the inconvenience of having to make a new account. This is called the chilling effect.


Yeah, I’m around 20 minutes in, and stuff like this is the reason why people stick with services like TeamSpeak.


So the pitch of this product is to introduce mass surveillance on the players’ voice chat and censor anything the “AdVanCeD aI” deems “dangerous”?

This sounds like an extremely over-complicated solution for what can easily be achieved with a simple mute button…


Wow, they are also creating a “Social Rating” system with this tool.
From their website:

…ToxMod does track player history - i.e. how many times they’ve committed offenses - in order to help prioritize repeat or escalating offenders, but this only determines how urgently it flags new offenses after the player misbehaves again.



Additionally, the founders appeared on a podcast with none other than Anita Sarkeesian. CARTER HUFFMAN - Should This Exist? Make of that what you will.


I’m around 50 mins in and I’ll share my thoughts about it later on, today. But I really don’t think that we should grab our torches and pitchforks over this. Sure, it’s a means of retaliating against people who say the wrong things in chat; but it’s kind of surreal that people would actually need something like this. I’ve had my share of abuse slung at me while I was playing League of Legends and I just laughed it off because it’s just a video game. Of course, I never made it to Ranked.

But anyways, the most that we can do is boycott their products, or just set up a third-party server for voice chat instead of risking being banned for sounding like a TeamHeadKick video.

Censorship has never been a working solution in human history.

Imo, this tool could be helpful to turn off players broadcasting music (annoyance, copyright strike) or prevent spam ads / scams etc. in voice chat.
Ofc including toggles for each and not mandatory.


More censorship tech. You’d figure that when the internet was conceived, it was all about freedom. For the new gen, it’s all about tyranny.

Only my thoughts; this isn’t about censorship or ‘Big Brother’, or Sauron. Maybe not this tool, but there must be a way to stop hate speech, death threats and similar. I’m not writing about rude people in chat, name calling, or even brow beating. There’s a difference between what is considered proper online etiquette, the breaking of such rules, and then to the extreme of hate speech. That extreme is where we need to have the discussion. This is beyond freedom of speech. Those people who feel they have the right to destroy the humanity of another for some completely uneducated rationale, need to not only be blocked but reported to the authorities. We need tools to do this. This tool is not for some people who are not mature enough to lose at a game. This is for those cretins who specifically come to a social platform to cause damage.

You say it isn’t about censorship, then in the next sentence you talk about stopping certain kinds of speech. Stopping speech (“hate speech” or otherwise) is the very definition of censorship.
You say there must be a way to stop certain kinds of speech, but you never explain why this is. Clearly you expect everyone to agree with you, but I do not agree.
Additionally, you give a very loose definition of what should be censored. For example, the always poorly defined “hate speech”, and the addition of “and similar”.
You say “we” need tools, but who is “we”?
Finally, you resort to emotive language such as “destroy the humanity of another”. These kinds of abstract phrases serve to muddy the waters, not to provide clarity. It’s just rhetoric.
Let’s be honest here, do you really think this technology will be employed exclusively to crack down on threats of violence and terrorist recruitment? I don’t. If you give people the power to control conversation like that, it will be abused. That is simply human nature. It already happens on social media, it happens in the real world, and it’s now happening in games.
You say it is only for those who seek to cause damage, but this tool can (and will) be configured to flag up whatever the end-user wishes.