4 min read · Apr 21, 2021

In-game chats are a staple of many gaming experiences. Players love to chat with each other, but it's no secret that many game chats suffer from colorful language that often includes profanity and personal attacks.

Even though players may tolerate this language to some degree, game studios must take action to keep it in check. This has become even more important as the gaming community diversifies in age, gender, and nationality.

The good news is that most game studios try to keep their chats clean. However, the players who post the verbal attacks are usually quick to adapt: they can easily defeat solutions based on keyword filtering, and can even outmaneuver smarter solutions that require extensive model training or labeling.

So the question is, how can you offer real-time chat that is welcoming to all?

In this post, we will show you how to build a solution that does just that. This solution not only provides a real-time, scalable chat solution (PubNub), but also keeps the chats clean of abuse (Tisane).

Introducing PubNub

PubNub empowers developers to build low-latency, real-time applications that perform reliably and securely at a global scale. Among other things, PubNub provides an extensible and powerful in-app chat SDK. The chat SDK is used in a variety of scenarios, from virtual events like the ones hosted by vFairs to in-game chats. It can also be used to create chat rooms in Unity 3D.

Introducing Tisane

Tisane is a powerful NLU API focused on detecting abusive content. It detects different types of problematic content, including but not limited to hate speech, cyberbullying, attempts to lure users off the platform, and sexual advances. Tisane classifies the actual issue and pinpoints the offending text fragment, and can supply an optional explanation for sanity checks or audits, so neither a user nor a moderator has to figure out what part of the message caused the alert, or why. Tisane is a RESTful API that uses JSON for its requests and responses.
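For illustration, here is how a client might read the abuse annotations in a Tisane response. The field names follow the shape Tisane documents for its parse endpoint, but treat the exact values as illustrative; the helper function is our own, not part of the API:

```javascript
// Extract the flagged fragments from a Tisane response. Tisane reports
// problems in an "abuse" array; each entry carries the issue type, a
// severity, and the offset/length of the offending span of text.
function extractAbuse(tisaneResponse) {
  return (tisaneResponse.abuse || []).map((item) => ({
    type: item.type,          // e.g. "personal_attack", "bigotry", "profanity"
    severity: item.severity,  // e.g. "low" through "extreme"
    fragment: item.text,      // the flagged span of the original message
  }));
}

// Abridged example of the response shape (illustrative values):
const response = {
  text: "you are a moron",
  abuse: [
    { type: "personal_attack", severity: "high", offset: 0, length: 15, text: "you are a moron" },
  ],
};

console.log(extractAbuse(response));
```

Because the offset and length are included, a UI can highlight exactly the fragment that triggered the alert rather than the whole message.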

Tisane is integrated with PubNub, and just like with other PubNub blocks, Tisane can be easily integrated in PubNub-powered applications. That includes applications using the in-app chat SDK.

Using Tisane with PubNub

Now, the fun part: let's start coding! Follow these steps to set up your Tisane-powered PubNub account.

  • Sign up for a Tisane Labs account here.

  • Log in to your account and obtain an API key, either primary or secondary. This key goes in the Ocp-Apim-Subscription-Key header with every request.

  • Now go back to the PubNub portal and import the Tisane block by following the Try it now link, or clicking here.

  • Add your Tisane API key as tisaneApiKey to the PubNub Functions Vault using the My Secrets button in the editor.

  • Create a project and a module inside.

Tisane labs image 1

  • Add a function triggered by the Before Publish or Fire event.

  • Use this GitHub Gist for the code, modifying it as needed.

  • If the language is different from English, modify en to the standard code of the input language. For unknown languages, use * (asterisk) to request that Tisane detect the language automatically.

Tisane labs image 2
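The steps above boil down to sending each message to Tisane before it is published. As a rough sketch of the request the function assembles (the helper name is ours; the endpoint URL, header, and body fields follow Tisane's API, but verify them against the Gist and the Tisane docs):

```javascript
// Build the HTTP request a PubNub Function would send to Tisane's parse
// endpoint before publishing a message. In a real PubNub Function, the key
// would come from the Functions Vault (tisaneApiKey) and the request would
// be sent with the xhr module; here we only show the request's shape.
function buildTisaneRequest(messageText, language, apiKey) {
  return {
    url: 'https://api.tisane.ai/parse', // assumed endpoint; check Tisane's docs
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Ocp-Apim-Subscription-Key': apiKey, // key obtained from the Tisane portal
    },
    body: JSON.stringify({
      language: language, // 'en', another standard code, or '*' to auto-detect
      content: messageText,
      settings: {},       // optional per-request Tisane settings
    }),
  };
}

const req = buildTisaneRequest('hello world', '*', 'YOUR_TISANE_KEY');
console.log(req.method, JSON.parse(req.body).language);
```

The function inspects the abuse annotations in Tisane's response and can then annotate, hold, or block the message before PubNub delivers it.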

Session-level moderation privileges

Statistically, the vast majority of violations are personal attacks (attacks targeting another participant in the chat). When the chat is structured with responses and threads, it's easy to figure out who is the target of the attack: it's usually the poster of the comment being replied to. If the chat is not structured, it is likely the poster of one of the recent messages.

Human moderators usually don't like moderating game chats: it's less about community building and more about banning users who are nasty to each other. And you know what? Maybe they don't have to. Since Tisane tags personal attacks, the privilege to moderate a post with a suspected personal attack can be delegated to its suspected target(s). If it is not a personal attack, nothing happens. If it is indeed an insult, the offended party has the power to discard the message, or even ban the poster. Note that this happens only when Tisane suspects a personal attack.

The community saves on moderator manpower, and the potential offenders behave differently knowing that the person they attack will have the power to punish them. How often does one attack a moderator? 

(More info on this 2-factor moderation, as we call it, can be found here.)
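The delegation logic described above can be sketched as follows. Everything here is illustrative: the function and field names are our own, not part of the Tisane or PubNub APIs, and the fallback of picking recent posters is one possible policy among many:

```javascript
// 2-factor moderation sketch: when Tisane flags a message as a personal
// attack, grant moderation rights over that message to its suspected
// target(s) instead of routing it to a human moderator.
function assignModerationRights(message, tisaneResult) {
  const isPersonalAttack = (tisaneResult.abuse || [])
    .some((item) => item.type === 'personal_attack');

  // Not a personal attack: publish normally, nothing happens.
  if (!isPersonalAttack) {
    return { action: 'publish', moderators: [] };
  }

  // Threaded chat: the target is the author being replied to.
  // Flat chat: fall back to the authors of the most recent messages.
  const suspectedTargets = message.replyTo
    ? [message.replyTo.author]
    : message.recentAuthors.slice(0, 3);

  return { action: 'hold-for-target', moderators: suspectedTargets };
}
```

The suspected target can then discard the message or escalate; if the flag was a false positive, they simply let it through.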

Of course, Tisane is not a panacea. It is merely a tool to maintain good user experience. Like any natural language processing software, it can make mistakes. It is also intended to be a part of the filtering process, not to substitute the whole process.

We recommend that gaming communities combine Tisane with low-tech, tried-and-true methods, such as considering whether the user whose post was flagged is a long-standing member of the community with a good record. Where possible, drastic actions should also be subject to human review.

We also recommend taking different actions depending on the context. Death threats or suicidal behavior should be treated differently than profanity. Scammers and potential sexual predators should not be asked to rephrase their posts; they should be reported to a moderator.
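One way to sketch such context-dependent routing is a simple policy table keyed on the issue type. The type strings below loosely follow Tisane's taxonomy (check the documentation for the exact values), and the routing policy itself is only an example:

```javascript
// Route a Tisane abuse annotation to a handling policy. Mild issues get a
// rephrase prompt, personal attacks go through target moderation, and
// grooming or off-platform luring is escalated straight to a human.
function routeViolation(abuseItem) {
  switch (abuseItem.type) {
    case 'profanity':
      return 'ask-to-rephrase';
    case 'personal_attack':
      return 'delegate-to-target';
    case 'sexual_advances':
    case 'external_contact': // attempts to lure users off the platform
      return 'report-to-moderator';
    default:
      // Unknown types: escalate only the most severe ones.
      return abuseItem.severity === 'extreme'
        ? 'report-to-moderator'
        : 'ask-to-rephrase';
  }
}
```

Keeping the policy in one place like this makes it easy to tune per community without touching the Tisane integration itself.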

When using Tisane integrated within a PubNub-powered chat, in-game moderators can be assured that their community will be safe, secure, and welcoming to all of their users.
