In 1968, Robert B. Miller published his classic paper "Response time in man-computer conversational transactions," in which he described three different orders of magnitude of computer mainframe responsiveness:
- A response time of 100ms is perceived as instantaneous.
- Response times of 1 second or less are fast enough for users to feel they are interacting freely with the information.
- Response times greater than 10 seconds completely lose the user’s attention.
From this, Miller concluded that a consistent 2-second response would be ideal. Years later, this same value of 2 seconds was adopted as a performance target for web-based applications. Today's realtime applications, however, require near-instantaneous responsiveness. Does even 100 ms cut it? The answer depends on the context.
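Miller's thresholds can be sketched as a tiny classifier. This is illustrative only: the cutoffs come from the figures above, but the function name and the label for the otherwise unnamed 1–10 second band are assumptions.

```python
def miller_category(response_ms: float) -> str:
    """Map a response time in milliseconds to Miller's responsiveness bands."""
    if response_ms <= 100:
        return "instantaneous"       # perceived as immediate
    if response_ms <= 1000:
        return "free interaction"    # user still feels in control
    if response_ms <= 10000:
        return "attention strained"  # assumed label for the 1-10 s band
    return "attention lost"          # user's focus is gone
```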
How Fast Can a Human Process Input?
As human beings, we have the curious inborn ability to observe and experience the persistent passage of time. The architecture of the human brain, however, limits our sensory perception in a way that prevents us from reacting to a stimulus within a certain short timeframe. This timeframe is commonly known as reaction time.
Human Reaction Time
The average human reaction time is on the order of a quarter of a second (250 milliseconds). Don’t believe it? You can test your own reaction time with this little test.
As you know, some humans have better reaction times than others. Fighter pilots, Formula One drivers, and championship video game players fall into the 100–120 ms bucket on the left side of the curve.
How much of that time is spent receiving data versus mentally processing and physically reacting?
Realtime Latency: From Eye to Brain
Reaction time is a complex subject and includes several different components of mental processing including:
- Sensory perception
- Receipt of input into our consciousness
- Context applied to the input
- Decision made based on processing output
To really understand how fast realtime is to the human brain we’ll focus on the Sensory Perception phase. This is where our senses receive the incoming data from the outside world whether that be visual or auditory.
For example, the time it takes for the image of a tiger arriving on your retina to travel down your optic nerve into the visual cortex is incredibly fast. New studies show that humans can interpret visual cues seen for as little as 13 ms (roughly one frame at 75 frames per second).
As the brain receives the incoming data stream, an asynchronous process acknowledges the input and admits it into our consciousness. Now aware of the incoming data stream, another part of the brain applies context to the stream so that a decision can be made about how to react. All this happens very quickly. (Cats are nearly twice as fast.)
How Does Unwanted Latency Impact Human Performance?
While there is more involved in human reaction time than just mental processing, the important concepts here are:
1. The fastest rate at which humans appear to be able to process incoming visual stimuli is about 13 ms. Receiving a stream of data faster than this will only underscore the limits of our perception.
2. Increasing latency above 13 ms has an increasingly negative impact on human performance for a given task. While imperceptible at first, added latency continues to degrade a human’s processing ability until it approaches 75 to 100 ms. At that point we become very conscious that input has become too slow, and we must adapt by anticipating input rather than simply reacting to it.
In a duel, for example, a 100 ms lag matters, especially if it is random and cannot be anticipated.
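The duel example reduces to simple arithmetic: network lag adds directly to the time between an on-screen event and the player's visible reaction. A minimal sketch using the 250 ms average reaction time cited earlier (the helper name is illustrative):

```python
BASE_REACTION_MS = 250  # average human reaction time from the article

def effective_reaction_ms(added_latency_ms: float) -> float:
    """Total time from an event occurring to the player's reaction, once
    network lag is stacked on top of human reaction time."""
    return BASE_REACTION_MS + added_latency_ms
```

A 100 ms lag stretches a 250 ms reaction to 350 ms, a 40% penalty before the player has even moved.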
Implications for Realtime Application Developers
Realtime applications have varying tolerances to data stream latency. Applications with very demanding latency targets typically include multiplayer gaming, voice communication, and realtime collaboration.
It is these types of applications in which realtime human perception and interaction is required. Given the resources required to build and maintain a realtime data stream network to support these types of applications, many developers make the strategic decision to outsource the messaging layer in order to focus more on the application itself.
Turn-based, role-playing, and strategy games typically do not rely on realtime movements or actions, so they can tolerate latencies of 500 ms or more. For Massively Multiplayer Online Gaming (MMOG), however, realtime is a requirement.
As online gaming matures, players flock to games with more immersive and lifelike experiences. To satisfy this demand, developers now need to produce games with very realistic environments that have very strict data stream latency requirements:
- Above 300 ms: game is unplayable
- Above 150 ms: game play degraded
- Above 100 ms: player performance affected
- Below 50 ms: target performance
- 13 ms: lower detectable limit
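The tiers above can be expressed as a simple lookup. The cutoffs mirror the list, but the label chosen for the 50–100 ms gap between "target" and "performance affected" is an assumption:

```python
def playability(latency_ms: float) -> str:
    """Rough playability tier for a given data stream latency."""
    if latency_ms > 300:
        return "unplayable"
    if latency_ms > 150:
        return "degraded"
    if latency_ms > 100:
        return "performance affected"
    if latency_ms > 50:
        return "noticeable"  # assumed label for the 50-100 ms gap
    return "target"
```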
A delay of even 100 ms measurably reduces player performance in twitch games. It becomes noticeably difficult to track targets effectively, forcing players to predict movements instead.
Overall game enjoyment continues to decrease as latency increases and players experience jerky playback, ghosting and out-of-sync behavior that ultimately ruin the game for all players involved.
Given these parameters, a successful MMOG architecture must treat network performance as a fundamental requirement to ensure Quality of Experience for gamers. The architecture needs to be capable of delivering thousands of simultaneous data streams with latencies of 50 ms or better, and, to make it even more challenging, it must do so at scale for players in different geographic regions, on different access networks, using a range of devices.
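One way to verify a requirement like this is to check a high percentile of measured round-trip samples against the 50 ms target rather than the average, which can hide regional outliers. A naive nearest-rank sketch (function names are illustrative, not part of any real API):

```python
def p95_ms(samples: list[float]) -> float:
    """95th-percentile latency using a naive nearest-rank method."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[idx]

def meets_target(samples: list[float], target_ms: float = 50.0) -> bool:
    """True if 95% of measured latencies stay at or under the target."""
    return p95_ms(samples) <= target_ms
```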
“PubNub allows us to focus on our application, rather than the backbone network that supports it and the worries that accompany that. Knowing that we don’t have to setup a whole monitoring system to make sure our backbone network is running and sending messages is amazing; no crashing, no hardware reboots, and no worries,” said James Ross, co-founder and Operations Manager of NodeCraft Hosting.
In Voice over IP (VoIP) communication, delay is measured as the latency between a message being spoken and the listener’s ear receiving it. To guarantee call quality, most VoIP providers target a maximum one-way latency of 150 ms for a voice call. Exceeding this threshold degrades call quality to the point that it becomes impossible to communicate.
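The 150 ms figure is a mouth-to-ear budget, so it is naturally split across the components of the path. A sketch of such a budget check; the component breakdown and names are assumptions, and only the 150 ms target comes from the text:

```python
VOIP_TARGET_MS = 150  # maximum mouth-to-ear latency from the text

def mouth_to_ear_ms(codec_ms: float, jitter_buffer_ms: float,
                    network_ms: float) -> float:
    """One-way delay as the sum of its (assumed) components."""
    return codec_ms + jitter_buffer_ms + network_ms

def call_quality_ok(total_ms: float) -> bool:
    """True if the one-way delay stays within the 150 ms target."""
    return total_ms <= VOIP_TARGET_MS
```

For example, 20 ms of codec delay plus a 40 ms jitter buffer leaves only 90 ms for the network before the call quality target is blown.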
Rebtel, for example, chose PubNub as its Data Stream Network to solve the challenge of building a global network for IP voice communications that meets critical standards for reliability and performance:
“In the telephony world, milliseconds matter,” said Daniel Forsman, Head of R&D at Rebtel. “PubNub has allowed us to drive down our infrastructure and development costs by allowing us to message devices anywhere in the world via a single API in fractions of a second.”
Another interesting example of realtime data streams is collaboration. In the online classroom, reliable realtime communication between devices is essential. When dealing with a classroom full of students with short attention spans, devices need to signal one another as quickly as possible; otherwise, you lose the attention of the entire class. However, realtime data stream network design is outside the core competency of most teams.
“We didn’t know whether we should hire people to do that, whether we’d have to increase the size of the team, and eventually we just sat down and thought, ‘this isn’t what the core of our business is about. We shouldn’t really be spending loads of money and time trying to make real-time work, when we should be focusing on our own business challenges. Real-time was a requirement for our business, but not a business challenge that we should have to solve,” said Liam Don, Co-founder and CTO of ClassDojo.