Insights

The Scaling Challenge in App Dev with Real-time Features

Developer Relations Team on Aug 5, 2019

Apps, and the connected experiences they deliver, have changed the way we connect, get around, eat, play, meet, and everything in between. It’s very apparent how important they’ve become to our everyday lives. But with significant growth come two major challenges: increased user expectations and increased data intensity.

Location updates, chat messages, data streaming – apps are sending and consuming massive amounts of data at all times of the day. Users are demanding more of the apps they use, with competition just a click or finger tap away. More data, more users, and more interactivity make scale the biggest threat to the success of any app.

Here is everything you need to know about scaling real-time features in mobile apps.

What is Scale?

Scale is a fairly broad term, but in the world of real-time app functionality, it means maintaining reliability and performance as your user base grows, which depends on the number of users, how heavily they use the app, and where they’re located across the globe.

With real-time features in apps, the effects of scale are very apparent and tangible. Because real-time features deliver data and experiences instantaneously, fluctuations in efficiency and performance are easy to spot and incredibly frustrating to users.

Scale Comes Down to the Infrastructure

Scale may not be top of mind in the early stages of app development, but as apps and features grow in adoption and usage, the old adage applies: what works in the lab is not guaranteed to work in the wild. That’s because a ton of engineering goes into real-time infrastructure.

So, as a developer, how well your real-time app scales depends a lot on the build vs. buy decision, which is more of a spectrum than a black-and-white choice. How much infrastructure do you want to build and maintain yourself, and how much do you want to rely on hosted services and vendors?

Let’s walk through what it takes to build a scalable real-time infrastructure.

  • Spinning up multiple testing, staging, and production environments.
  • Coordinating provisioning for those multiple environments (from straight-up rack-and-stack in a data center to Kubernetes containers).
  • Deploying your application code to the environments.
  • Data replication for multiple points of presence and automatic failover to ensure that messages are delivered 100% of the time (and actually in real time).
  • Message “catch-up” in case of connection dropout (if a user is in a tunnel, for example, they’ll receive the message when they come out the other side) – sketched after this list.
  • Setting up service management, system monitoring, and ops alerting.
  • Creating a load balancing scheme (like HAProxy).
  • Implementing a scheme to segment data by channels or topics (see the channel-naming sketch after this list).
  • Finding a store-and-forward solution for signal recovery, like in-memory caching (also covered in the catch-up sketch after this list).
  • Implementing a method to connect individual clients to the ideal data center and port (broadly speaking, global server load balancing) – see the latency-probing sketch after this list.
  • Computing which channels/topics to send/receive for a given client.
  • Building orchestration between data centers/cloud regions to ensure data reliability between endpoints.
  • Deciding which platforms and languages to support.
  • Creating universal data serialization (see the envelope sketch after this list).
  • Customizing code to detect data uplink that works across device types.
  • Determining Quality of Service and level of loss boundaries, and developing a data recovery scheme.
  • Building a custom load testing service that can simulate a real audience (sketched after this list).
  • Creating an update protocol and continuously modifying your network to support new products and services.
  • Paying for socket server costs, QA systems, and hot failovers.
  • Ongoing ops monitoring and the additional headcount it requires.
  • Building a load distribution system.
  • Identifying error messages.
  • Building a log system.
  • Knowing when faults occur and developing a playbook of responses.
  • Building service management (like PagerDuty).
  • Developing multi-datacenter deployment.
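
A couple of these items are easier to picture with code. Below is a minimal sketch of the message “catch-up” and store-and-forward ideas: the server keeps a short in-memory history per channel, and the client remembers the last sequence number it saw so it can replay anything it missed after a dropout. The class names and sequence-number scheme are illustrative, not any specific vendor’s API; a production system would bound memory, persist history, and replicate it across regions.

```typescript
// Hypothetical store-and-forward buffer: the server keeps a short history
// per channel so clients that drop offline (e.g. in a tunnel) can catch up.
interface Message {
  channel: string;
  seq: number;         // monotonically increasing per channel
  payload: unknown;
  publishedAt: number; // epoch millis
}

class ChannelHistory {
  private buffers = new Map<string, Message[]>();
  constructor(private maxPerChannel = 1000) {}

  publish(msg: Message): void {
    const buf = this.buffers.get(msg.channel) ?? [];
    buf.push(msg);
    // Drop the oldest entries once the buffer is full (bounded memory).
    if (buf.length > this.maxPerChannel) buf.splice(0, buf.length - this.maxPerChannel);
    this.buffers.set(msg.channel, buf);
  }

  // Return everything the client missed since its last-seen sequence number.
  catchUp(channel: string, lastSeenSeq: number): Message[] {
    return (this.buffers.get(channel) ?? []).filter(m => m.seq > lastSeenSeq);
  }
}

// Client side: remember the last sequence per channel and replay on reconnect.
class CatchUpClient {
  private lastSeen = new Map<string, number>();

  onMessage(msg: Message): void {
    this.lastSeen.set(msg.channel, msg.seq);
    // ...hand the payload to the app...
  }

  onReconnect(history: ChannelHistory, channels: string[]): void {
    for (const channel of channels) {
      for (const msg of history.catchUp(channel, this.lastSeen.get(channel) ?? 0)) {
        this.onMessage(msg);
      }
    }
  }
}
```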
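
Segmenting data by channels or topics, and computing which channels a given client should send and receive on, is largely a naming-and-routing convention. A rough sketch, assuming a hypothetical hierarchical app.feature.scope scheme:

```typescript
// Hypothetical channel-naming scheme: segment data hierarchically so clients
// only subscribe to the slices they actually need.
function channelFor(app: string, feature: string, scope: string): string {
  return [app, feature, scope].map(encodeURIComponent).join(".");
}

// Compute which channels a given client should send/receive on.
function channelsForUser(userId: string, chatRooms: string[]): string[] {
  return [
    channelFor("rideshare", "location", `driver-${userId}`),          // per-user location updates
    ...chatRooms.map(room => channelFor("rideshare", "chat", room)),  // shared chat rooms
    channelFor("rideshare", "alerts", "global"),                      // broadcast announcements
  ];
}

console.log(channelsForUser("42", ["room-7"]));
// -> [ "rideshare.location.driver-42", "rideshare.chat.room-7", "rideshare.alerts.global" ]
```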
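
For steering clients to the ideal data center (global server load balancing), one simple client-side fallback is latency probing: measure round-trip time to each region’s endpoint and connect to the fastest. The region names and health-check URLs below are placeholders; real deployments typically layer this on top of GeoDNS or anycast.

```typescript
// Hypothetical region endpoints; in practice these would come from DNS or config.
const REGIONS: Record<string, string> = {
  "us-east": "https://us-east.example.com/health",
  "eu-west": "https://eu-west.example.com/health",
  "ap-south": "https://ap-south.example.com/health",
};

// Probe each region and pick the one with the lowest round-trip time.
async function pickNearestRegion(): Promise<string> {
  const probes = Object.entries(REGIONS).map(async ([region, url]) => {
    const start = Date.now();
    try {
      await fetch(url, { method: "HEAD" });
      return { region, rtt: Date.now() - start };
    } catch {
      return { region, rtt: Number.POSITIVE_INFINITY }; // region unreachable
    }
  });
  const results = await Promise.all(probes);
  results.sort((a, b) => a.rtt - b.rtt);
  return results[0].region;
}

pickNearestRegion().then(region => console.log(`connecting to ${region}`));
```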
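
Universal data serialization usually boils down to a small, versioned message envelope that every platform can produce and parse. A minimal JSON-based sketch (the field names are illustrative, not a standard):

```typescript
// Hypothetical versioned envelope so every platform (iOS, Android, web, server)
// serializes and parses messages the same way.
interface Envelope<T> {
  v: 1;           // schema version, bumped on breaking changes
  type: string;   // e.g. "chat.message", "location.update"
  sentAt: number; // epoch millis
  data: T;
}

function encode<T>(type: string, data: T): string {
  const envelope: Envelope<T> = { v: 1, type, sentAt: Date.now(), data };
  return JSON.stringify(envelope);
}

function decode<T>(raw: string): Envelope<T> {
  const parsed = JSON.parse(raw) as Envelope<T>;
  if (parsed.v !== 1) throw new Error(`unsupported envelope version: ${parsed.v}`);
  return parsed;
}

// Usage
const wire = encode("chat.message", { from: "alice", text: "hello" });
console.log(decode<{ from: string; text: string }>(wire).data.text); // "hello"
```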
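
And for load testing that simulates a real audience, the basic idea is to open many concurrent connections and publish at a realistic rate while measuring latency. A toy sketch using Node.js and the “ws” WebSocket package (the endpoint URL is a placeholder, and it assumes the server echoes messages back):

```typescript
import WebSocket from "ws"; // assumes the "ws" npm package is installed

// Spin up N simulated clients that connect and publish at a steady rate.
function simulateAudience(url: string, clients: number, messagesPerSecond: number): void {
  for (let i = 0; i < clients; i++) {
    const ws = new WebSocket(url);
    ws.on("open", () => {
      setInterval(() => {
        ws.send(JSON.stringify({ clientId: i, sentAt: Date.now() }));
      }, 1000 / messagesPerSecond);
    });
    ws.on("message", raw => {
      // Crude round-trip metric, assuming the server echoes the message back.
      const { sentAt } = JSON.parse(raw.toString());
      console.log(`client ${i} round trip: ${Date.now() - sentAt}ms`);
    });
    ws.on("error", err => console.error(`client ${i} error:`, err.message));
  }
}

simulateAudience("wss://realtime.example.com/socket", 500, 2);
```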

So, as you can tell, a ton of engineering and expertise goes into designing, deploying, and orchestrating a scalable real-time infrastructure. If you’re thinking of building and maintaining your own backend infrastructure with open source technologies and resources, you’ll be faced with these challenges. To do this well, expertise in DevOps, server-side technologies, and more is essential.

Not to say it’s impossible, but for teams both small and large, hosted services provide reliable and scalable real-time backend infrastructure, relieving you of the stress and responsibility of delivering a seamless, reliable experience to your end users.