HTTP/3

It's widely understood that a site's performance plays a crucial role in its popularity. Studies link slow-loading sites to higher visitor drop-off, and a website's performance also affects shopper loyalty throughout the shopping experience.

HTTP/3 is a new standard set to change the way web browsers and servers interact, promising substantial improvements in performance, reliability, and security, and delivering a low-latency, high-performance experience to end users.

A good understanding of the HTTP protocols is crucial to understanding HTTP/3.

HTTP follows a client/server communication model: the client initiates a request to a server, and the server responds to that message. The original HTTP lacked a version designation; it was retroactively labelled 0.9 to distinguish it from subsequent versions.

  • HTTP/0.9 was very simple: a request consisted of a single line, starting with the only possible method, GET, followed by the path to the resource. The complete URL wasn't needed, as the protocol, server, and service port were already known once the connection was established. There were no HTTP headers and no status or error codes.
  • In HTTP/1.0, a status code line was transmitted at the beginning of the response, enabling the browser to determine the success or failure of a request on its own and adjust its behaviour accordingly. HTTP headers were introduced for both requests and responses, so metadata could be transmitted, greatly increasing the protocol's flexibility and extensibility, and documents other than plain HTML files could be served. The main cause of latency within HTTP/1.0 is the Head-of-Line (HOL) blocking problem: a performance-limiting phenomenon in which a line of packets is held up in a queue by the first packet. In HTTP/1.0 the browser must finish processing a request, including fully receiving the response, before initiating the next one; everything happens in sequence. Since a web page needs various resources such as HTML, CSS, images, and JavaScript, the browser has to make many requests to the server, one after another.
  • HTTP/1.1 brought several enhancements aimed at addressing this issue. The main one is pipelining, in which a client sends multiple HTTP requests to a server without waiting for each response. However, the server must return the responses in the same sequence the requests were received, so a slow response still holds up everything queued behind it, potentially causing a bottleneck at the head of the line.
  • HTTP/2 resolved the issue of connection limits by introducing multiplexing, which enables multiple files to be transferred simultaneously over a single connection. The other major improvement was better header compression, alongside a few other features. A browser can now initiate a new request at any point, and responses can arrive in any order, eliminating blocking at the application level. In HTTP/2, browsers and servers communicate through bidirectional streams, each containing multiple messages composed of numerous frames. HTTP/2 runs over TCP, with far fewer TCP connections than earlier HTTP versions. TCP is a protocol that ensures reliable, in-order delivery: whatever is sent from one end eventually reaches the other end in the same order. So if a single packet is dropped or lost between two endpoints communicating via HTTP/2, the entire TCP connection comes to a standstill while the lost packet is retransmitted and makes its way to the destination. This is a TCP-level head-of-line (HOL) block.
  • HTTP/3 no longer uses TCP as its transport. Instead, it introduces a new protocol layer originally developed by Google called QUIC (Quick UDP Internet Connections), which takes the place of both the TCP and TLS protocols. The User Datagram Protocol (UDP) is an alternative to TCP. The main features of QUIC are as follows:
  • A substantial decrease in the latency for establishing connections
  • Connection migration, which enables the client to switch seamlessly from LTE (cellular network) to Wi-Fi without establishing a new session
  • Multiplexing without head-of-line (HOL) blocking
  • QUIC generates its own connection ID, so the connection survives network transitions under the same identifier.
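The single-line HTTP/0.9 request described earlier is simple enough to sketch directly. This is an illustrative toy, not a real client or server; the helper names are invented:

```python
# A minimal sketch of the HTTP/0.9 request format: a single line, the only
# possible method is GET, and only the path is sent (no full URL, no
# headers, no status codes).

def build_http09_request(path):
    """An HTTP/0.9 request is just 'GET <path>' terminated by CRLF."""
    return f"GET {path}\r\n"

def parse_http09_request(raw):
    """Return the requested path, or raise if the line is not 'GET <path>'."""
    method, _, path = raw.strip().partition(" ")
    if method != "GET" or not path:
        raise ValueError("HTTP/0.9 supports only 'GET <path>'")
    return path

request = build_http09_request("/index.html")
print(repr(request))                  # 'GET /index.html\r\n'
print(parse_http09_request(request))  # /index.html
```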
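The HTTP/1.1 pipelining constraint — responses must come back in request order — can be modelled with a toy timing function (an illustration with invented timings, not a benchmark):

```python
# Toy model of HTTP/1.1 pipelining: requests go out back to back, but the
# server must deliver responses in request order, so a response cannot be
# delivered before every earlier response is done, even if it is ready.

def pipelined_delivery_times(processing_times):
    """Delivery time of response i = max processing time among responses 0..i."""
    delivered, latest = [], 0
    for t in processing_times:
        latest = max(latest, t)  # held back by anything earlier in the queue
        delivered.append(latest)
    return delivered

# A slow first resource (hypothetical times, in ms) blocks two fast ones:
print(pipelined_delivery_times([300, 20, 20]))  # [300, 300, 300]
```

The two fast responses are ready after 20 ms but cannot be delivered until the 300 ms response ahead of them completes — the head-of-line bottleneck described above.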
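HTTP/2's stream multiplexing can be sketched the same way. This is a simplified model of the idea, not the real HTTP/2 framing layer: messages are split into frames tagged with a stream ID, interleaved on one connection, and reassembled per stream:

```python
# Simplified sketch of HTTP/2-style multiplexing: frames from several
# streams share one connection, and the receiver reassembles each stream
# by its stream ID, so no stream blocks another at the application level.

from collections import defaultdict
from itertools import zip_longest

def frames(stream_id, payload, size=4):
    """Split a payload into (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + size]) for i in range(0, len(payload), size)]

def interleave(*streams):
    """Round-robin frames from several streams onto one 'connection'."""
    wire = []
    for group in zip_longest(*streams):
        wire.extend(f for f in group if f is not None)
    return wire

def reassemble(wire):
    """Rebuild each stream's payload from the interleaved frames."""
    out = defaultdict(str)
    for stream_id, chunk in wire:
        out[stream_id] += chunk
    return dict(out)

wire = interleave(frames(1, "<html>...</html>"), frames(3, "body{margin:0}"))
print(reassemble(wire))  # {1: '<html>...</html>', 3: 'body{margin:0}'}
```

The frames on the wire alternate between streams, yet each stream reassembles independently — which is why a stalled stream need not stall the others (until a TCP packet loss stalls the whole connection, as noted above).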

Previously, the client's IP address changed when it switched between LTE and Wi-Fi networks, and in HTTP/1.1 and HTTP/2 a request sent from IP address A had to receive its response at IP address A, because the underlying TCP connection is tied to that address. QUIC instead identifies the connection by its connection ID, enabling the client to reuse the established reliability and security settings rather than starting the connection from scratch.
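The connection-ID idea can be sketched in a few lines. Everything here is invented for illustration (class name, session contents, addresses); the point is only that the server looks sessions up by connection ID, not by the client's IP address:

```python
# Minimal sketch of QUIC-style connection migration: TCP identifies a
# connection by the (source IP, source port, dest IP, dest port) 4-tuple,
# so a new IP means a new connection. Here the server keys sessions by a
# connection ID instead, so the session survives an address change.

import secrets

class QuicServerSketch:
    def __init__(self):
        self.sessions = {}  # connection ID -> negotiated session state

    def handshake(self):
        cid = secrets.token_hex(8)  # connection ID issued at handshake
        self.sessions[cid] = {"keys": "negotiated-once"}
        return cid

    def receive(self, cid, client_ip):
        # The client's current IP plays no part in finding the session.
        session = self.sessions[cid]
        return f"keys={session['keys']} continue from {client_ip}"

server = QuicServerSketch()
cid = server.handshake()
print(server.receive(cid, "10.0.0.5"))     # packet arrives over LTE
print(server.receive(cid, "192.168.1.7"))  # same session after Wi-Fi switch
```

Both packets reach the same session state even though the source address changed — no new handshake, no renegotiated keys.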

HTTP/3 (H3) appears to have all the necessary components to replace existing protocols. It promises to roughly halve connection-establishment time and enable seamless IP address transitions, all while reducing the number of handshakes between server and client during HTTP requests and responses.

Apart from the performance enhancements, the smooth transition from HTTP/2 to HTTP/3 will likely make it an obvious choice for organisations to adopt the latest HTTP version.
