PhoneGap V4 SDK Soft & Hard Limitations

Function | Soft Limit (Best Practices) | Hard Limit | If hard limit is exceeded?
Publish Rate

10-15 messages/sec per channel

NOTE: It is a best practice to limit the publish rate not because PubNub can't keep up - you should be able to publish as fast as your network/hardware allows and PubNub will absorb it - but because the subscriber may miss messages if the subscribe rate is too slow. A slow subscribe rate causes messages to overflow the server-side channel message queue before the subscriber can receive them.

The queue only overflows if you publish more than 100 messages during the reconnect window. The average reconnect time is around 150 ms, which works out to roughly 600 messages/second before overflow becomes possible. Keep in mind the queue could also overflow during a brief internet disconnection.

However, if you have a multiplexed connection, the 100-message limit applies to the total received across all of the channels combined.

If it takes 230 ms to reconnect, you can only actually get ~400 messages per second (about 4 reconnects per second at 230 ms each, 100 messages per reconnect). That means if you publish 500 messages per second, the subscriber will miss some of them. This also does not take into account the download time for 100 messages (potentially 32KB × 100).

Typically, reconnection is faster than that. If you are on AWS, your reconnect might be as low as 30-50 ms. With a 50 ms reconnect time, you can publish close to 2,000 messages per second, and the subscriber won't miss messages due to overflow.

The most complicated part is multiplexed subscriber connections: the limit is 100 messages total across all the channels, so you can't publish more than 100 messages within the reconnect window across the n channels in the multiplex channel list. It's hard to give an exact recommendation here. This also applies to channel group and wildcard subscriptions, not just multiplexing.

Increasing the queue size for the sub-key (to 300 or 500, for example) allows subscribers to keep up with more messages.

Also, the queue life (TTL) must be considered in addition to the queue size; see the Message Buffer Cache section of this doc. (A paced publish sketch follows this row.)

No throttling on keys in good standing (any key on a paid plan, or Free Tier keys whose usage stays within the allowed limits).

Use HTTP Pipelining for higher throughput.

Free Tier keys exceeding usage quotas are subject to deactivation without warning, though for a first overage we will attempt to get in contact before disabling keys. All other keys: N/A.

Error message (Channel quota exceeded)
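As a rough illustration of the soft limit above, here is a minimal sketch of paced publishing with the V4 JavaScript SDK. The demo keys, channel name, and 100 ms spacing (~10 messages/sec) are placeholder assumptions - tune them to your own subscribers' capacity.

    // Minimal sketch: paced publishing with the PubNub V4 JavaScript SDK.
    // In PhoneGap/Web, PubNub is available as a global from the <script> include.
    var PubNub = require('pubnub');

    var pubnub = new PubNub({
      publishKey: 'demo',      // placeholder - replace with your publish key
      subscribeKey: 'demo',    // placeholder - replace with your subscribe key
      uuid: 'publisher-1'
    });

    var queue = [];            // messages waiting to go out

    function publishNext() {
      if (queue.length === 0) return;
      pubnub.publish({ channel: 'my_channel', message: queue.shift() },
        function (status) {
          if (status.error) console.error('publish failed', status);
        });
    }

    // Pace publishes at ~10/sec so a slow subscriber's 100-message
    // queue is unlikely to overflow during a reconnect.
    setInterval(publishNext, 100);

    for (var i = 0; i < 50; i++) queue.push({ seq: i, text: 'hello ' + i });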

Subscribe Rate

Most of the considerations mentioned above under the publish rate soft limits actually exist because of limitations on the subscriber end (queue limit, roundtrips, etc). A minimal subscribe sketch follows this row.

No limit

Use HTTP Streaming for maximum throughput.

N/A
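For completeness, a minimal V4 subscribe loop looks like the sketch below; the SDK manages the streaming connection itself, so the client code just attaches a listener. Key and channel names are placeholders.

    // Minimal sketch: subscribing with the V4 JavaScript SDK.
    var PubNub = require('pubnub');

    var pubnub = new PubNub({
      publishKey: 'demo',
      subscribeKey: 'demo',
      uuid: 'subscriber-1'
    });

    pubnub.addListener({
      message: function (event) {
        // Process quickly; long-running work here slows the subscribe
        // loop and makes queue overflow more likely.
        console.log(event.channel, event.message);
      },
      status: function (event) {
        if (event.category === 'PNConnectedCategory' ||
            event.category === 'PNReconnectedCategory') {
          console.log('connected');
        }
      }
    });

    pubnub.subscribe({ channels: ['my_channel'] });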
# of Channels | Unlimited | N/A
# of Subscribers per channel or keyset | Unlimited | N/A
# of Publishers per channel or keyset | Unlimited | N/A
Message Size | Less than 30KB to be safe. | Limited by the length of the GET request, including HTTP headers & URI encoding overhead (~32KB) | HTTP 400 error
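A quick client-side guard against the ~32KB hard limit, assuming JSON payloads; the 30KB threshold mirrors the soft limit in the row above, and safePublish is a hypothetical helper name.

    // Minimal sketch: reject payloads likely to exceed the URI length
    // limit before publishing. Channel name is a placeholder.
    var MAX_BYTES = 30 * 1024;

    function safePublish(pubnub, channel, message, callback) {
      // Rough size estimate: serialized JSON plus URI-encoding overhead.
      var encoded = encodeURIComponent(JSON.stringify(message));
      if (encoded.length > MAX_BYTES) {
        callback(new Error('message too large: ' + encoded.length + ' bytes'));
        return;
      }
      pubnub.publish({ channel: channel, message: message }, function (status) {
        callback(status.error ? new Error('publish failed') : null);
      });
    }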
Message Buffer Cache

Size: 100 message queue

TTL: 16 minutes maximum before messages are cleared from the cache, but can be shorter (no guarantees).

If a disconnect lasts longer than 8-10 minutes, use Storage & Playback.

NOTE: Each socket connection has an in-memory message queue (FIFO) that holds recently published messages for the cache duration (12-16 minutes) and is limited to the most recent 100 messages. Consequently, publishing over 100 messages within the subscribe reconnect window inevitably results in older messages overflowing the queue and getting discarded. For long-term, reliable persistence and retrieval of missed messages, enable the Storage & Playback add-on with a retention duration of up to 30 days (or even unlimited) and use the history API to retrieve those messages (a catch-up sketch follows this row).

Size: Configurable on our end; additional costs may apply.

TTL: Currently defined with a hard limit of 20 minutes and a configurable effective percentage that controls the behavior of the network. A percentage of 90 would mean an effective TTL of 18 minutes; it is currently set to 80%, or 16 minutes.
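When a disconnect outlasts the buffer cache, the note above points to Storage & Playback. A minimal catch-up using the V4 history API might look like the sketch below; the channel name and the catchUp helper are placeholder assumptions.

    // Minimal sketch: fetching missed messages with the history API
    // (requires the Storage & Playback add-on).
    function catchUp(pubnub, channel, onMessage) {
      pubnub.history({
        channel: channel,
        count: 100            // history returns at most 100 messages per call
      }, function (status, response) {
        if (status.error) return console.error('history failed', status);
        response.messages.forEach(function (m) {
          onMessage(m.entry, m.timetoken);
        });
        // If exactly 100 came back, there may be more; page with the
        // returned timetokens as needed.
      });
    }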

Catch-Up | 100 messages (or size of buffer cache) | N/A
Channel Name Length | 64 characters | 92 characters (97 characters including padding for base64 encoding)
# of Sockets/Instances per client

PubNub does not limit the number of sockets/instances you can create, but TCP connections are typically limited by the device/platform (e.g. some browsers only allow up to 40 TCP connections).

Keep in mind each PubNub client instance creates 2 TCP socket connections: one for subscribes and the other for non-subscribe operations.

Unlimited | N/A
Subscribe Keep-Alive
# of API keys per PubNub account | An API is available if you need to manage a large number of keys. Contact us at support@pubnub.com if you need access to the Key Provisioning API. | Unlimited | N/A
Function | Soft Limit | Hard Limit | If hard limit is exceeded?
Data Retention | 1, 3, 7, 15, or 30 days, or Forever | N/A
Function | Soft Limit | Hard Limit | If hard limit is exceeded?
Multiplexing (available without Stream Controller)

10-50 channels

When subscribing to many channels, channel groups allow for the persistence of channel lists (a multiplexed subscribe sketch follows this row).

No limit in client SDKs, but we advertise a hard limit of 100 channels

No server-side limit; only limited by URI length.
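A multiplexed subscribe is simply a subscribe call with several channels in one list, kept within the 10-50 channel soft limit above. Channel names are placeholders.

    // Minimal sketch: multiplexing several channels on one subscribe
    // connection. The 100-message queue is shared across the whole list.
    pubnub.subscribe({
      channels: ['orders', 'inventory', 'alerts']
    });

    pubnub.addListener({
      message: function (event) {
        // event.channel identifies which multiplexed channel fired.
        console.log(event.channel, event.message);
      }
    });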

Channel Groups | Up to 10 channel groups, each with 2,000 channels, for a total of 20,000 channels | If subscribing to more than 10 channel groups, you receive a 400 HTTP status code with the description "Maximum channel registry count exceeded"
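For larger channel lists, the row above points at channel groups. A minimal sketch, assuming the Stream Controller add-on is enabled on the keyset; group and channel names are placeholders.

    // Minimal sketch: maintaining a channel list in a channel group,
    // then subscribing to the group.
    pubnub.channelGroups.addChannels({
      channelGroup: 'user_12345_channels',
      channels: ['orders', 'inventory', 'alerts']
    }, function (status) {
      if (status.error) return console.error('addChannels failed', status);
      // Subscribing to the group delivers messages from every channel in it.
      pubnub.subscribe({ channelGroups: ['user_12345_channels'] });
    });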
Wildcard Subscribe

Depends on subscribe rate

NOTE: see note for Publish Rate

3 levels (2 dots) of wildcarding:

  • a.*
  • a.b.*

No limit to the number of channels a wildcard subscribe can match.

Wildcard Channel Names are not allowed in Channel Groups.
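A wildcard subscribe is expressed as a channel name ending in .*; the sketch below assumes a news.* namespace as a placeholder and requires Stream Controller.

    // Minimal sketch: wildcard subscribe. At most two dots / three levels
    // of wildcarding are supported, per the limits above.
    pubnub.subscribe({ channels: ['news.*'] });

    pubnub.addListener({
      message: function (event) {
        // Publishes to news.sports, news.politics, etc. all arrive here.
        console.log(event.channel, event.message);
      }
    });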

Channel Group Name Length | 92 characters (97 characters including padding for base64 encoding)
Function | Soft Limit | Hard Limit | If hard limit is exceeded?
# of Tokens | No limits | N/A
Grant latency

Server to server (grant performed by a server for its own use): allow 1 second between the grant and its use.

Separate process (server grants, client uses): wait for the grant callback before returning the auth key to the client (a grant sketch follows these rows).

Channels per grant | 200
Auth-keys per grant | 200
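A server-side grant sketch that respects the 200-channel / 200-auth-key per-call limits and waits for the callback before handing the auth key to a client. The key values, channel names, and 60-minute TTL are placeholder assumptions; the secretKey must only live on the server.

    // Minimal sketch: server-side PAM grant with the V4 SDK.
    var server = new PubNub({
      publishKey: 'demo',
      subscribeKey: 'demo',
      secretKey: 'demo-secret',   // required for grants; server-side only
      uuid: 'auth-server'
    });

    function grantAndReply(channels, authKey, reply) {
      // Keep each grant call at or below 200 channels / 200 auth keys.
      server.grant({
        channels: channels.slice(0, 200),
        authKeys: [authKey],
        read: true,
        write: true,
        ttl: 60                   // minutes
      }, function (status) {
        if (status.error) return reply(new Error('grant failed'));
        // Only return the auth key after the grant callback fires.
        reply(null, { authKey: authKey });
      });
    }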
Granting on Wildcard Levels

Can only grant one level deep:

  • a.* - you can grant on this
  • a.b.* - grant will not work on this
If you grant on a.b.*, the grant will treat a.b.* as a single channel with the literal name a.b.*.
Forbidden Cache Time
Function | Soft Limit | Hard Limit | If hard limit is exceeded?
Heartbeat

Minimum: 1 minute heartbeat & 29 second interval

Minimum: heartbeat: 10; interval: 4 (lowest common denominator across 3.x SDKs; each has different hard limits)

4.x SDKs do not currently have limitations.
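In the V4 JavaScript SDK, heartbeat behavior is set at initialization; the sketch below mirrors the 1-minute heartbeat / 29-second interval soft limit above, and the key values are placeholders.

    // Minimal sketch: presence heartbeat tuning in the V4 JavaScript SDK.
    // presenceTimeout is how long the server waits before marking the
    // client offline; heartbeatInterval is how often the client pings.
    var pubnub = new PubNub({
      publishKey: 'demo',
      subscribeKey: 'demo',
      uuid: 'client-1',
      presenceTimeout: 60,      // seconds (the "1 minute heartbeat")
      heartbeatInterval: 29     // seconds
    });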

Presence Announce Max | 100 | We can adjust this limit, but there are considerations.
Webhook Retries

Presence webhooks will try to POST to your URL endpoint a maximum of 4 times, each with a 5-second timeout.

Ensure that the customer's server (REST endpoint) returns 200 (a minimal endpoint sketch follows this row).

NOTE: If a channel has reached the Presence Announce Max limit, we will not send webhook requests for that channel

If the webhook has reached the maximum number of retries, the request is lost.
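A minimal Node.js receiver that acknowledges presence webhooks with a 200 well inside the 5-second timeout; the port and logging are placeholder assumptions, and only the built-in http module is used.

    // Minimal sketch: a presence-webhook receiver that always answers 200.
    var http = require('http');

    http.createServer(function (req, res) {
      var body = '';
      req.on('data', function (chunk) { body += chunk; });
      req.on('end', function () {
        // Acknowledge immediately; do heavy processing asynchronously so
        // the response never approaches the 5-second webhook timeout.
        res.writeHead(200);
        res.end('OK');
        setImmediate(function () {
          try { console.log('presence event', JSON.parse(body)); }
          catch (e) { console.error('bad payload', e); }
        });
      });
    }).listen(8080);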
Global Here Now | For a very large user base (100k+), chunk out to smaller global presence channels and make individual here_now calls on those (no more than 10k per global presence channel) | No limit, but latency can get quite high since there is no pagination
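Chunking global presence into several smaller channels and issuing individual here_now calls might look like the sketch below; the channel names and chunk count are placeholder assumptions.

    // Minimal sketch: here_now across several smaller presence channels
    // instead of one giant global channel.
    var presenceChannels = ['global_presence_0', 'global_presence_1',
                            'global_presence_2'];

    presenceChannels.forEach(function (channel) {
      pubnub.hereNow({
        channels: [channel],
        includeUUIDs: false     // occupancy only keeps the response small
      }, function (status, response) {
        if (status.error) return console.error('hereNow failed', status);
        console.log(channel, 'occupancy:', response.totalOccupancy);
      });
    });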
Function | Soft Limit | Hard Limit | If hard limit is exceeded?
# of Push Certificates | 1 APNS certificate & 1 GCM key per set of PubNub keys | N/A
Push Notification Message Size

Maximum: 2KB for APNS, 4KB for GCM (a push publish sketch follows this section)

(support for a 4KB payload with Apple's HTTP/2 APIs is in the backlog)

PAM - Best Practice: don't grant manage to clients

Add/Remove channels - Best Practice: don't implement add/remove channels on your clients, because if you use Access Manager you shouldn't grant manage to client apps.

Keep in mind base64 encoding can add up to 30% bloat.

However, you can gzip after base64 encoding to reduce the bloat. See: http://googo.me/fpVd

Sometimes base64 + gzip is actually smaller than the original binary!
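A push-enabled publish carries the APNS/GCM payloads alongside the realtime message under the reserved pn_apns and pn_gcm keys; keep each payload under the 2KB/4KB limits noted above. The channel name and payload contents below are placeholders.

    // Minimal sketch: publishing a message with mobile push payloads.
    pubnub.publish({
      channel: 'user_12345_push',
      message: {
        text: 'You have a new order',                  // realtime subscribers
        pn_apns: { aps: { alert: 'You have a new order', badge: 1 } },
        pn_gcm:  { data: { message: 'You have a new order' } }
      }
    }, function (status) {
      if (status.error) console.error('push publish failed', status);
    });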