What is HTTP/3?

HTTP/3 (or HTTP-over-QUIC) is the third major version of the Hypertext Transfer Protocol (HTTP). It is an application layer protocol for communication between web browsers and servers. HTTP/3 is designed to improve the performance and security of web communications.

One of the key features of HTTP/3 is that it is built on top of the User Datagram Protocol (UDP) instead of the Transmission Control Protocol (TCP), which was used in previous versions of HTTP. UDP is a connectionless protocol that offers lower latency and better performance for real-time applications.

HTTP/3 runs over a transport protocol called QUIC (originally short for Quick UDP Internet Connections). QUIC provides several benefits over TCP, including reduced connection-setup latency, improved loss recovery, and better congestion control. It also has TLS 1.3 encryption built in, strengthening the security of web pages and communications.

Another important feature of HTTP/3 is its support for multiplexing. This means multiple requests and responses can be sent and received concurrently over a single connection, improving data transfer efficiency.

HTTP/3 also includes other optimizations to improve performance, such as header compression and stream prioritization. These optimizations help reduce the overhead and improve the overall speed of web communications.

Overall, HTTP/3 offers significant improvements over its predecessor, HTTP/2, in both performance and security. It is particularly beneficial for developers building real-time chat and messaging applications, as its lower latency and better loss recovery make it well suited to use cases that require delivering real-time data fast.

Let's step back to see where it all began with the first HTTP version.

A Brief History of HTTP

The HTTP (Hypertext Transfer Protocol) protocol has a rich history that spans several decades. It was first introduced in the early 1990s as a means of communication between clients and servers on the World Wide Web.

HTTP was initially developed by Tim Berners-Lee and his team at CERN (European Organization for Nuclear Research) to facilitate the exchange of hypertext documents. The first version, HTTP/0.9, was a simple protocol that only supported GET requests for retrieving HTML documents.

In 1996, the Internet Engineering Task Force (IETF) published the HTTP/1.0 specification as RFC 1945. This version introduced several important features, including support for POST requests, response status codes, and headers. HTTP/1.0 also allowed the transmission of different media types, such as images and videos, alongside HTML documents.

However, as the web became more complex and interactive, the limitations of HTTP/1.0 became apparent. It was designed around a request-response model, where each request required a separate connection to the server. This resulted in high latency and inefficient use of network resources.

The HTTP/1.1 protocol addressed these issues.


HTTP/1.1

HTTP/1.1 is the second major version of the Hypertext Transfer Protocol (HTTP). It was first specified in 1997 (RFC 2068), revised in 1999 (RFC 2616), and remains widely used on the internet today.

One of the key features of HTTP/1.1 is its support for persistent connections, also known as keep-alive connections. In previous versions of HTTP, a new TCP connection had to be established for each request and response, which resulted in increased latency and overhead. With HTTP/1.1, multiple requests and responses can be sent over a single connection, reducing the need for establishing new connections and improving performance.
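The difference is easy to see in code. Below is a minimal, illustrative sketch using only Python's standard library (the local test server and the `/` path are invented for the demo): three requests travel over a single HTTP/1.1 keep-alive connection instead of opening a new socket for each one.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the connection open by default

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # required for keep-alive
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

# Start a throwaway local server on a random free port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# All three requests reuse the SAME underlying TCP connection.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
responses = []
for _ in range(3):
    conn.request("GET", "/")
    responses.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(responses)
```

Under HTTP/1.0 semantics, each of those three requests would have paid for a fresh TCP handshake; here the handshake cost is paid once.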

Another important feature of HTTP/1.1 is its support for pipelining, which allows multiple requests to be sent without waiting for the corresponding responses. In practice, however, responses still had to be returned in order, so pipelining suffered from head-of-line blocking, and most browsers ultimately shipped with it disabled.

HTTP/1.1 also introduced the concept of caching. Caching allows web browsers to store and reuse previously accessed resources, such as images and stylesheets, which can significantly improve page load times and reduce bandwidth usage.
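As a rough illustration of the idea (not a real browser cache), the sketch below stores responses with a freshness lifetime, similar in spirit to HTTP's `Cache-Control: max-age` directive. The URL and lifetime are made-up values for the demo.

```python
import time

class SimpleCache:
    """Toy freshness-based cache, in the spirit of Cache-Control: max-age."""

    def __init__(self):
        self._store = {}  # url -> (body, expires_at)

    def put(self, url, body, max_age):
        # Record when this entry stops being "fresh".
        self._store[url] = (body, time.monotonic() + max_age)

    def get(self, url):
        entry = self._store.get(url)
        if entry and time.monotonic() < entry[1]:
            return entry[0]  # fresh: serve without any network round trip
        return None  # stale or absent: the caller must refetch

cache = SimpleCache()
cache.put("/style.css", b"body{}", max_age=60)
print(cache.get("/style.css"))  # served from cache while fresh
```

Real HTTP caching adds validators (`ETag`, `Last-Modified`) so stale entries can be revalidated cheaply instead of refetched, but the freshness check above is the core mechanism.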

However, despite these improvements, HTTP/1.1 has some limitations. It can be inefficient for handling multiple requests and responses concurrently, as it requires strict ordering of the messages. This can result in performance issues, especially for real-time applications that require low latency and high concurrency.

Furthermore, HTTP/1.1 does not support header compression, so the same verbose headers are retransmitted with every request and response, which increases overhead and slows data transfer.


HTTP/2

HTTP/2 was introduced in 2015 (RFC 7540) as an improvement over the previous HTTP/1.1 version.

HTTP/2 was designed to address the limitations of HTTP/1.1 and provide better performance, efficiency, and security for web applications. It introduces several key features that aim to optimize the way data is transmitted between clients and servers.

One of the main improvements of HTTP/2 is its support for multiplexing. With HTTP/2, multiple requests and responses can simultaneously be sent over a single connection. This eliminates the need for strict ordering of messages and allows for better concurrency, reducing latency and improving the overall efficiency of data transfer.
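Conceptually, multiplexing means the connection carries interleaved frames, each tagged with a stream ID. The toy Python sketch below (invented stream IDs and payloads, simple round-robin scheduling rather than HTTP/2's real priority scheme) shows how frames from three in-flight requests can share one connection:

```python
from collections import deque

# Three concurrent "requests", each a queue of frames waiting to be sent.
# Stream IDs and payloads are illustrative, not real HTTP/2 traffic.
streams = {
    1: deque([b"GET /a part 1", b"GET /a part 2"]),
    3: deque([b"GET /b part 1"]),
    5: deque([b"GET /c part 1", b"GET /c part 2", b"GET /c part 3"]),
}

wire = []  # what actually goes onto the single connection, in order
while any(streams.values()):
    for stream_id, frames in streams.items():
        if frames:
            # One frame per stream per turn: no stream monopolizes the link.
            wire.append((stream_id, frames.popleft()))

print([stream_id for stream_id, _ in wire])
```

No stream has to wait for another to finish before its frames appear on the wire, which is exactly the property HTTP/1.1 lacked.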

Additionally, HTTP/2 introduces header compression. In HTTP/1.1, headers were sent in plaintext for each request and response, which resulted in significant overhead, especially for large headers. In HTTP/2, headers are compressed using the HPACK compression algorithm, reducing the amount of data that needs to be transmitted and improving performance.
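The core trick of HPACK is replacing frequently seen header fields with small table indices. The following toy codec sketches only that indexing idea; it is not the real HPACK algorithm (which also uses a dynamic table and Huffman coding), and the table contents here are illustrative.

```python
# A tiny stand-in for HPACK's static table of common header fields.
STATIC_TABLE = [
    (":method", "GET"),
    (":path", "/"),
    (":scheme", "https"),
    ("accept-encoding", "gzip, deflate"),
]
INDEX = {pair: i for i, pair in enumerate(STATIC_TABLE)}

def encode(headers):
    out = []
    for pair in headers:
        if pair in INDEX:
            out.append(("idx", INDEX[pair]))  # one small integer replaces the full text
        else:
            out.append(("lit", pair))         # unknown fields are sent literally
    return out

def decode(encoded):
    return [STATIC_TABLE[v] if kind == "idx" else v for kind, v in encoded]

headers = [(":method", "GET"), (":path", "/"), ("user-agent", "demo")]
wire = encode(headers)
print(wire)  # the two common fields collapse to indices
```

Because real request headers repeat heavily across a session (cookies, user agents, accept lists), this indexing saves far more bytes in practice than the toy table suggests.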

HTTP/2 also includes other performance optimizations, such as stream prioritization, which allows clients to prioritize certain requests over others, and flow control, which helps prevent congestion and ensure optimal performance.

In terms of security, the HTTP/2 specification does not strictly require encryption, but all major browsers only implement HTTP/2 over TLS. In practice, therefore, data transmitted between the client and server is encrypted, providing better protection against eavesdropping and tampering.

HTTP/2 is well-suited for real-time chat and messaging applications requiring low latency and high concurrency. By leveraging multiplexing, header compression, and server push, developers can build scalable and secure applications that provide a seamless and responsive user experience.

What is HTTP/2 Push?

HTTP/2 Push is a feature introduced in the HTTP/2 protocol that allows the server to proactively send resources to the client before they are requested. This means that instead of waiting for the client to request individual resources, such as images or stylesheets, the server can push these resources to the client without a specific request.

When a client sends a request to the server, the server can examine the request and identify additional resources the client will likely need. It can then push these resources along with the initial response to the client. The client can choose to use these pushed resources or ignore them if they are unnecessary.

HTTP/2 Push can significantly improve page load times and reduce the number of round trips between the client and server. By proactively pushing resources, the server can decrease the overall latency of the application, providing a faster and more responsive user experience.
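For instance, nginx added an `http2_push` directive in version 1.13.9 (since removed in recent releases as browsers dropped push support). Where available, a push configuration looked like the hedged sketch below; the file paths are hypothetical:

```nginx
# Illustrative only: push two assets alongside the HTML response.
# Paths are invented; adjust to your own site layout.
location = /index.html {
    http2_push /styles/main.css;
    http2_push /scripts/app.js;
}
```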

However, it is important to note that HTTP/2 Push should be used judiciously and cautiously. If too many resources are pushed to the client, it can result in unnecessary data transfer and potentially impact the application's performance. In practice, adoption proved so limited that some browsers, including Chrome, have since removed support for HTTP/2 Push in favor of alternatives such as 103 Early Hints. Developers should therefore carefully analyze which resources, if any, should be pushed and when.

Overall, HTTP/2 Push is a powerful feature that can enhance the performance of web applications, particularly real-time chat and messaging applications that require low latency. By pushing resources to the client, developers can optimize the data transfer process and provide a seamless browsing experience for users.

And that brings us back to HTTP/3.

HTTP/3 vs. HTTP/2

HTTP/3 differs from HTTP/2 in several ways.

  • Protocol: HTTP/3 is based on the QUIC (Quick UDP Internet Connections) protocol, while HTTP/2 is based on TCP (Transmission Control Protocol). QUIC is designed to improve performance by cutting connection-setup latency and handling packet loss more gracefully, using techniques such as independent stream multiplexing and an integrated TLS 1.3 handshake.

  • Transport Layer: HTTP/3 uses UDP (User Datagram Protocol) as its transport layer protocol, a lightweight, connectionless protocol with lower overhead than the TCP used by HTTP/2. QUIC implements its own reliability, ordering, and congestion control on top of UDP, so it retains TCP's delivery guarantees while avoiding some of TCP's performance limitations.

  • Multiplexing: HTTP/3 supports improved multiplexing compared to HTTP/2. In HTTP/2, multiple streams are multiplexed over a single TCP connection, but a delay or loss in one stream can affect others. HTTP/3, on the other hand, uses QUIC's multiplexing capabilities, allowing for independent streams that are less affected by delays or losses.

  • Security: While HTTP/2 and HTTP/3 support encryption, HTTP/3 has enhanced security features. QUIC incorporates TLS 1.3 encryption by default, providing improved security and privacy for communications.

  • Head-of-Line Blocking: In HTTP/2, if a packet is lost or delayed at the TCP layer, it causes head-of-line blocking: subsequent packets must wait for the missing packet to be retransmitted, even when they belong to unrelated streams. HTTP/3 avoids this through QUIC, which delivers each stream independently, so a loss delays only the stream it belongs to.
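The head-of-line difference can be shown with a small simulation. The sketch below is purely illustrative (not a real transport implementation): packets carry a (stream ID, sequence) tag, and the packet at wire position 1 is "lost".

```python
# Four packets from two streams share the wire; position 1 goes missing.
packets = [(1, 0), (2, 0), (1, 1), (2, 1)]
lost_positions = {1}  # stream 2's first packet is dropped in transit

# TCP-style delivery: one ordered byte stream, so everything after a gap waits.
tcp_delivered = []
for pos, pkt in enumerate(packets):
    if pos in lost_positions:
        break  # ALL later packets stall until the retransmission arrives
    tcp_delivered.append(pkt)

# QUIC-style delivery: each stream is ordered independently, so only the
# stream with the gap stalls; other streams keep making progress.
quic_delivered, stalled_streams = [], set()
for pos, (stream_id, seq) in enumerate(packets):
    if pos in lost_positions:
        stalled_streams.add(stream_id)  # only this stream waits
    elif stream_id not in stalled_streams:
        quic_delivered.append((stream_id, seq))

print(tcp_delivered)   # only the packet before the gap got through
print(quic_delivered)  # stream 1 is unaffected by stream 2's loss
```

One lost packet stalls everything in the TCP model but only one stream in the QUIC model, which is the whole argument for moving multiplexing below the loss-recovery layer.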

What security improvements does HTTP/3 provide?

HTTP-over-QUIC provides several security improvements compared to its predecessor, HTTP/2. Some of the notable security enhancements offered by HTTP/3 are:

  1. Transport Layer Security (TLS) Encryption: HTTP/3 is designed to work exclusively over TLS 1.3, ensuring that all data transmitted between the client and the server is encrypted. This encryption prevents unauthorized access and eavesdropping, enhancing the security of the communication.

  2. Reduced Attack Surface: HTTP/3 uses the QUIC (Quick UDP Internet Connections) transport protocol, which runs over the User Datagram Protocol (UDP). Because QUIC encrypts almost all of its transport-layer metadata, it eliminates certain attacks that exploit TCP's plaintext headers, such as connection reset (RST) injection, reducing the overall attack surface.

  3. Connection Migration: HTTP/3 allows for seamless connection migration between different network interfaces, such as switching between Wi-Fi and cellular networks. This feature helps to maintain secure connections even when the network conditions change, preventing potential security risks associated with connection interruptions.

  4. Improved Resistance to Denial-of-Service (DoS) Attacks: HTTP/3 incorporates mechanisms to mitigate the impact of DoS attacks. QUIC requires address validation and limits how much data a server may send to an unverified client address, reducing the risk of reflection and amplification attacks.

  5. Zero-RTT (Round Trip Time) Handshake: HTTP/3 supports 0-RTT connection resumption, allowing a returning client to send data immediately while the secure connection is re-established. This reduces latency and improves the overall performance of real-time chat and messaging applications. (Because 0-RTT data can be replayed by an attacker, it should only carry idempotent requests.)

  6. Enhanced Privacy: QUIC encrypts far more of a connection's metadata than TCP with TLS does, including packet numbers and most transport headers, so on-path observers learn less about the traffic. This enhances the privacy and confidentiality of user data during transmission.

  7. Loss Recovery: Early experimental versions of Google's QUIC included forward error correction (FEC) to recover lost packets, but that mechanism was removed before standardization. IETF QUIC instead relies on fast, per-stream retransmission and modern loss detection to minimize the impact of packet loss, helping maintain the communication's reliability even when network conditions are less than optimal.

  8. Compatibility: HTTP/3 preserves the semantics of HTTP/2 (the same methods, status codes, and header fields), allowing developers to migrate their existing applications to the new protocol without significant changes. Servers typically advertise HTTP/3 support via the Alt-Svc response header, and clients fall back to HTTP/2 or HTTP/1.1 when QUIC is unavailable.
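Discovery usually happens over an existing HTTP/1.1 or HTTP/2 connection: the server includes an Alt-Svc response header, and the client may then retry over QUIC. A sketch of such a response (the port and max-age are illustrative values):

```http
HTTP/1.1 200 OK
Alt-Svc: h3=":443"; ma=86400
```

A client that understands the header can attempt subsequent requests over HTTP/3 on UDP port 443, falling back transparently if the attempt fails.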

Overall, HTTP/3 offers significant security improvements over previous versions, making it a preferred choice for developers building real-time chat and messaging applications. By leveraging the features provided by HTTP/3, developers can ensure their applications' scalability, speed, and security, enhancing the overall user experience.

What protocols are used with HTTP/3?

The protocols used with HTTP/3 are:

QUIC (Quick UDP Internet Connections)

QUIC, which originally stood for Quick UDP Internet Connections, is a transport layer protocol developed at Google and later standardized by the IETF as RFC 9000. It is the transport that HTTP/3 runs over, improving the performance and security of web communication. Unlike the TCP-based transports of earlier HTTP versions, QUIC is built on top of the User Datagram Protocol (UDP).

QUIC combines capabilities traditionally split across TCP (reliable, ordered delivery), TLS (encryption), and HTTP/2 (stream multiplexing) into a single transport. It provides reliable, secure, and low-latency communication over the Internet.

One of its key advantages is its ability to establish connections faster than TCP: the transport and cryptographic handshakes are combined into a single round trip, and resumed connections can send data with zero round trips. QUIC also includes built-in congestion control and loss recovery mechanisms, enhancing reliability and performance.
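A back-of-the-envelope comparison makes the handshake saving concrete. The 50 ms round-trip time below is an assumed, illustrative number, not a measurement:

```python
rtt_ms = 50  # assumed network round-trip time for this sketch

# TCP + TLS 1.3: one round trip for the TCP handshake,
# then one more for the TLS 1.3 handshake, before any request can be sent.
tcp_tls_setup_ms = rtt_ms + rtt_ms

# QUIC: the transport and TLS 1.3 handshakes are combined into one round trip.
quic_setup_ms = rtt_ms

# Resumed QUIC connection: 0-RTT lets the client send data immediately.
quic_0rtt_setup_ms = 0

print(tcp_tls_setup_ms, quic_setup_ms, quic_0rtt_setup_ms)  # 100 50 0
```

On high-latency links (mobile networks, intercontinental paths), that saved round trip is often the most visible benefit of QUIC.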

Overall, QUIC is designed to provide a more efficient and secure communication protocol for real-time applications such as chat and messaging. Its integration with HTTP/3 makes it a preferred choice for developers looking to build scalable and secure applications.

TLS version 1.3 - used for encryption and secure communication

TLS stands for Transport Layer Security, the standard protocol for encrypting internet traffic. A datagram variant, DTLS (Datagram Transport Layer Security), exists for securing plain UDP-based protocols, but HTTP/3 does not use DTLS: instead, QUIC integrates the TLS 1.3 handshake directly into its own transport, as specified in RFC 9001.

TLS 1.3 is the specific (and only) TLS version used with HTTP/3. It offers significant improvements over previous versions, including a simpler and more secure handshake, improved performance, and reduced latency. It also protects against attacks such as replay attacks and downgrade attacks.

In HTTP/3, TLS 1.3 encrypts and secures the communication between the client and the server. It ensures that the data transmitted over the network remains confidential and cannot be intercepted or tampered with by unauthorized parties.

TLS 1.3 thus plays a crucial role in ensuring the security and integrity of the data exchanged in real-time chat and messaging applications built on top of HTTP/3.

Does http/3 support encryption?

Yes, HTTP/3 does support encryption. HTTP/3 is the latest version of the Hypertext Transfer Protocol (HTTP) and is designed to improve the performance and security of web communications. It is based on the QUIC (Quick UDP Internet Connections) protocol, a transport layer protocol that runs over UDP (User Datagram Protocol).

Encryption is a critical component of HTTP/3. It encrypts the transmitted data and provides secure communication between the client and the server. This helps protect sensitive information, such as user credentials, personal data, and other confidential information, from unauthorized access or interception.

The encryption in HTTP/3 is achieved using the Transport Layer Security (TLS) 1.3 protocol, which QUIC integrates directly into its handshake. TLS ensures that the data exchanged between the client and the server cannot be easily deciphered by unauthorized parties: data is encrypted by the sender, transmitted securely over the network, and decrypted only by the intended endpoint.

By supporting encryption, HTTP/3 enhances the security of web communications and helps protect against various security threats, such as eavesdropping, data tampering, and impersonation attacks. It also improves the privacy and integrity of user data, ensuring that it remains confidential and unaltered during transmission.

In summary, HTTP/3 does support encryption through the use of the TLS protocol, thereby providing a secure and reliable communication channel for real-time chat and messaging applications.

What is the current status of http/3 implementation?

HTTP/3 is now a finished standard: the IETF published it as RFC 9114 in June 2022, after standardizing the underlying QUIC transport as RFC 9000 in May 2021. HTTP/3 is the latest major revision of the HTTP protocol, aiming to improve performance and security over its predecessor, HTTP/2.

HTTP/3 is based on the QUIC (Quick UDP Internet Connections) transport protocol, designed to provide low latency and reliable communication over the Internet. QUIC uses the User Datagram Protocol (UDP) instead of the Transmission Control Protocol (TCP) used by previous versions of HTTP.

The Internet Engineering Task Force (IETF) developed and standardized HTTP/3. The specification (RFC 9114) defines the details of the protocol, including its frame format, error handling, and security considerations.

Major web browsers and server software now support HTTP/3. Google, which originally developed QUIC, has deployed it widely across services such as Google Search and YouTube, and Chrome, Firefox, and Safari all ship HTTP/3 support.

However, it's important to note that deployment maturity still varies across different software and platforms. It is recommended to refer to the official documentation from the IETF and relevant software vendors for the most up-to-date information on HTTP/3 support.

If it’s about real-time apps, it’s about PubNub. Whether you’re looking for Real-Time APIs, Chat APIs, Javascript SDKs, or an edge messaging solution to broker real-time communication and data exchange closer to your endpoints, PubNub has you covered.

Sign up for a free trial and get up to 200 MAUs or 1M total transactions per month included.