Chat and content moderation

PubNub provides multiple content moderation approaches that can work together or independently, depending on your needs. Content moderation helps protect users from harmful, inappropriate, or unwanted content in real-time conversations.

Moderation approaches

Moderation can be proactive or reactive. The key to choosing the right approach is whether you want to prevent harmful content from being published (proactive) or to detect and remove it after it has been published (reactive).

| Attribute | Auto Moderation (Chat SDK/Core SDK) | Channel Monitor (Chat SDK/Core SDK) | Custom Functions |
| --- | --- | --- | --- |
| Access method | Admin Portal (BizOps Workspace - Auto Moderation) | Admin Portal (BizOps Workspace - Channel Monitor) | PubNub Functions (UGC Module) |
| When messages are checked | Before publish (proactive) | After publish (reactive) | Configurable |
| Best for | Automated filtering at scale | Admin/moderator teams | Advanced custom logic |
| Engineering effort | Low (wizard setup) | Low (UI-based) | Medium (code) |
| Availability | Beta (contact sales) | Paid plans | All plans |
| Automation level | Fully automated | Manual review | Custom logic |
| AI-powered | Yes | No | Optional |
| Word masking | Yes | No | Custom |
| Spam detection | Yes | No | Custom |
| Message editing | No | Yes | Yes |
| Message deletion | Yes (blocking) | Yes | Yes |
| User mute/ban | No | Yes | Custom |
| Real-time monitoring | Automatic | Live preview (5 channels) | Event-based |

Auto moderation (AI-powered)

Auto Moderation uses AI and word lists to automatically filter inappropriate content before messages are published to channels. It's best for applications that need automated, scalable content filtering as the primary moderation method.

Works with both Chat SDK and Core SDK

Auto Moderation operates at the message level, filtering content regardless of which PubNub SDK (Chat SDK or Core SDK) is used to publish messages.

Key features:

  • PubNub-hosted AI model detects and blocks or reports spam messages
  • Define word lists to automatically mask profanity with asterisks
  • Messages are checked before delivery, ensuring inappropriate content never reaches users
  • Configuration wizard in the Admin Portal
  • Apply different moderation rules to different channels
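Word-list masking of the kind described above can be sketched in a few lines. The following is an illustrative example only, not PubNub's implementation; the `maskWords` helper and the sample word list are assumptions for this sketch:

```javascript
// Illustrative sketch of word-list masking (not PubNub's implementation).
// Each listed word is replaced with asterisks of the same length,
// matching whole words case-insensitively. Assumes the word list
// contains plain words (no regex metacharacters).
function maskWords(text, wordList) {
  return wordList.reduce((masked, word) => {
    const pattern = new RegExp(`\\b${word}\\b`, 'gi');
    return masked.replace(pattern, (match) => '*'.repeat(match.length));
  }, text);
}

// Example: mask two listed words in a chat message.
const clean = maskWords('You are a jerk and a fool', ['jerk', 'fool']);
// → 'You are a **** and a ****'
```

Because the check runs before delivery, subscribers would only ever see the masked text.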

For more information, refer to Auto Moderation.

Beta feature

Auto Moderation is in beta and available upon request. Contact PubNub Support or Sales.

Channel monitor

Channel Monitor is a web-based tool in the Admin Portal for real-time manual moderation of live conversations. It's best for chat moderators and administrators who need to actively monitor and respond to issues in real-time.

Key features:

  • View messages on up to 5 channels simultaneously
  • Edit or delete inappropriate messages in real-time
  • Mute or ban users from specific channels
  • Highlight messages containing specific keywords
  • Review messages flagged by users
  • Review and provide feedback on AI-moderated content

For more information, refer to Channel Monitor.

Chat SDK moderation features

Chat SDK moderation features are programmatic methods for implementing user-level moderation controls. They're best for developers who want to build moderation directly into their chat application.

Key features:

  • Users can report offensive content with a reason
  • Remove write access (users can read but not send messages)
  • Remove read and write access from channels
  • Verify if a user is muted or banned on a channel
  • Integrated directly into Chat SDKs (JavaScript, Swift, Kotlin, Unity, Unreal)
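To make the mute/ban semantics above concrete, here is a minimal in-memory model of per-channel restrictions. This is an illustrative sketch, not the Chat SDK API; the `RestrictionStore` class and its method names are invented for this example:

```javascript
// Illustrative in-memory model of per-channel user restrictions.
// Not the Chat SDK API: class and method names are invented for this sketch.
class RestrictionStore {
  constructor() {
    this.restrictions = new Map(); // key: `${channel}:${userId}`
  }

  // Mute removes write access; ban removes read and write access.
  set(channel, userId, { mute = false, ban = false } = {}) {
    this.restrictions.set(`${channel}:${userId}`, { mute, ban });
  }

  get(channel, userId) {
    return this.restrictions.get(`${channel}:${userId}`) || { mute: false, ban: false };
  }

  canWrite(channel, userId) {
    const { mute, ban } = this.get(channel, userId);
    return !mute && !ban;
  }

  canRead(channel, userId) {
    return !this.get(channel, userId).ban;
  }
}

// A muted user can still read but can no longer send messages.
const store = new RestrictionStore();
store.set('support', 'user-123', { mute: true });
// store.canWrite('support', 'user-123') → false
// store.canRead('support', 'user-123')  → true
```

In the actual SDKs, restriction state is managed server-side per channel and user; this model only illustrates the read/write distinction between mute and ban.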

For more information, refer to the Moderation documentation for each Chat SDK.

Custom Functions implementation

PubNub Functions allow you to build your own moderation logic and integrate with third-party services. It's best for developers who need to implement custom moderation logic in their chat application.

Key features:

  • Use the ugc.moderateMessage API to invoke Auto Moderation from custom Functions
  • Modify or block messages before delivery with custom logic
  • Integrate with third-party services like TwoHat Community Sift, Neutrino, or Tisane
  • Implement any moderation rules specific to your use case
  • Rate-limit message publishing by IP or user

For more information, refer to the PubNub Functions documentation.

Combining moderation approaches

You can use multiple moderation approaches together for comprehensive protection.

| Goal | Approach |
| --- | --- |
| Automated filtering with manual moderator review for edge cases | Auto Moderation + Channel Monitor |
| AI filtering plus in-app user reporting | Auto Moderation + Chat SDK moderation features |
| Manual moderation combined with in-app user reporting | Channel Monitor + Chat SDK moderation features |
| Add custom moderation logic to supplement built-in tools | Custom Functions + any approach |
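Layering works because each approach can act as a stage in a pipeline: automated checks run first, and human review handles whatever they flag. A minimal sketch of that ordering, with stage names invented for this example (they are not PubNub APIs):

```javascript
// Illustrative moderation pipeline: run stages in order; the first stage
// that returns a verdict other than 'allow' decides the outcome.
// Stage names are invented for this sketch, not PubNub APIs.
function moderate(message, stages) {
  for (const stage of stages) {
    const verdict = stage(message); // 'allow' | 'block' | 'flag'
    if (verdict !== 'allow') return verdict;
  }
  return 'allow';
}

// Automated filter blocks listed words; anything a user reported is
// flagged for manual review (e.g. in Channel Monitor).
const blockBannedWords = (msg) =>
  ['spamword'].some((w) => msg.text.includes(w)) ? 'block' : 'allow';
const flagUserReports = (msg) => (msg.reported ? 'flag' : 'allow');

const verdict = moderate(
  { text: 'hello there', reported: true },
  [blockBannedWords, flagUserReports]
);
// → 'flag'
```

Putting the cheap automated stage first means moderators only ever see messages the filters could not decide on their own.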

Common use cases

| Use case | Description | Solution | Learn more |
| --- | --- | --- | --- |
| Live event moderation | Moderate fan conversations during live sports, concerts, or streaming events | Auto Moderation word lists + Channel Monitor for real-time review | Game chat moderation guide |
| Community chat safety | Maintain safe, welcoming communities by filtering harmful content automatically | Auto Moderation for automated filtering + Chat SDK moderation features for user reports | Auto Moderation documentation |
| Customer support chat | Ensure professional interactions in customer support conversations | Channel Monitor for supervisor oversight + Custom Functions for compliance logging | Channel Monitor documentation |
| Gaming chat | Protect players from toxic behavior and maintain fair play standards | Auto Moderation + Chat SDK mute/ban features + Anti-spam functions | Chat SDK moderation |