What is HTTP/3?

Tram Ho

Introduction

This article is translated and synthesized from the Cloudflare Blog and Medium; suggestions are welcome 🙇

HTTP-over-QUIC is an experimental protocol that will soon be renamed HTTP/3. The IETF released the draft on March 3, 2020.

There has been a big step forward from HTTP/1.1 (released in 1999) to HTTP/2 (officially standardized in 2015), and things continue to evolve with HTTP/3, which was expected to be completed in 2019. This article makes many comparisons between HTTP/2 and HTTP/3, so if you do not know what HTTP/2 is, you can read about it here.

Road to QUIC

QUIC (Quick UDP Internet Connections) is an internet transport protocol that brings many design improvements to speed up HTTP traffic and make it more secure, with the ultimate goal of gradually replacing TCP and TLS on the web. In this blog post, let's look at some of QUIC's main features and advantages, as well as the challenges encountered in supporting this new protocol. HTTP/3 is essentially an evolution of the QUIC protocol developed by Google, all starting from the following suggestion by Mark Nottingham.

There are actually two similar protocols called QUIC:

  • “Google QUIC” (abbreviated gQUIC) is the original protocol designed by Google engineers years ago; after many years of testing, it is being turned into a common standard by the IETF (Internet Engineering Task Force).
  • “IETF QUIC” (from here on simply QUIC) is based on gQUIC but has changed so much that it can be considered a different protocol. From packet format to handshaking and HTTP mapping, QUIC improves many of gQUIC's original designs thanks to open cooperation among many organizations and individuals sharing a common goal: making the Internet faster and more secure.

In short, QUIC can be thought of as TCP + TLS + HTTP/2 implemented on top of UDP. Because TCP is built deep into operating-system kernels and middlebox hardware, making major changes to TCP is nearly impossible. QUIC, being built on UDP, has no such restriction.
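Because QUIC rides on plain UDP, all of its logic lives in user space; nothing in the kernel or in middleboxes has to change. A minimal sketch (illustrative only, with an arbitrary made-up framing byte, not real QUIC) of a user-space process exchanging a datagram:

```python
import socket

# Two UDP sockets on loopback stand in for a QUIC client and server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # let the OS pick a free port
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# User space decides the packet format entirely: here a fake 1-byte
# "header" (0x01) followed by a payload -- the kernel just ships bytes.
client.sendto(b"\x01hello", server.getsockname())
payload, addr = server.recvfrom(2048)
print(payload)  # b'\x01hello'
```

This freedom is exactly why browsers can ship a new transport protocol in an application update, as the article discusses below.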

So what improvements does QUIC bring?

Security by default (and performance too)

One of QUIC's advantages over TCP is that security was a goal of the protocol's design from the start. QUIC achieves this by providing security features such as authentication and encryption (normally handled by a higher-layer protocol such as TLS) within the transport protocol itself.

QUIC's initial handshake combines a connection setup like TCP's three-way handshake with a TLS 1.3 handshake, providing endpoint authentication and negotiation of cryptographic parameters. For those familiar with TLS: QUIC replaces the TLS record layer with its own frame format while keeping the TLS handshake messages.

The result is that not only is the connection always authenticated and encrypted, but connection establishment is also faster: a typical QUIC handshake completes in just one round trip between client and server, instead of the two needed today: one for TCP and one for TLS 1.3.

HTTP Request Over TCP + TLS

HTTP Request Over QUIC
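The difference between the two figures comes down to round trips. A hedged bit of arithmetic, assuming a 100 ms round-trip time (the delay value is an assumption for illustration):

```python
# Round trips before the first HTTP byte can flow, at an assumed RTT.
rtt_ms = 100

tcp_tls13 = 1 * rtt_ms + 1 * rtt_ms   # one RTT for TCP setup, one for TLS 1.3
quic = 1 * rtt_ms                     # QUIC folds transport + TLS 1.3 into one RTT

print(tcp_tls13, quic)  # 200 100
```

On a high-latency mobile link the saved round trip is a directly user-visible improvement.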

However, QUIC goes even further, encrypting even connection metadata that intermediaries could exploit to interfere with the connection. For example, packet numbers could be used by an on-path attacker to correlate a user's activity across different networks when connection migration (see below) takes place. By encrypting them, QUIC ensures that no one other than the endpoints of the connection can use this information to correlate activity.

Encryption is also an effective cure for ossification, a phenomenon in protocol design: if a protocol has a flexible structure (such as the ability to negotiate different versions) but that flexibility is rarely exercised, it stops working in practice, the way the hinge of a door that is rarely opened gradually rusts. In fact, ossification is the reason the deployment of TLS 1.3 was delayed so long; it became possible only after many changes designed to prevent middleboxes from misidentifying and blocking the new version.

Head-of-line blocking

One of the improvements HTTP/2 brought is the ability to multiplex multiple HTTP requests over the same TCP connection. This allows applications using HTTP/2 to process requests concurrently and make better use of the available bandwidth.

This was a big improvement over the previous generation, which required the application to open multiple TCP + TLS connections to handle multiple HTTP/1.1 requests concurrently (for example, when the browser needs to fetch JavaScript and CSS at the same time to render a page). Creating new connections means repeating the handshake again and again, plus going through a warm-up period, resulting in slow page rendering. Multiplexing HTTP exchanges avoids all of that.

However, a disadvantage remains. Because multiple requests/responses are carried over the same TCP connection, all of them are affected by packet loss (for example, due to network congestion), even if the lost data concerns only one request. This phenomenon is called “head-of-line blocking”.

QUIC goes a step further and supports multiplexing in which different HTTP streams are mapped onto different QUIC transport streams that still share the same QUIC connection, so no additional handshakes are needed. And although congestion control is shared, QUIC streams are delivered independently, so in most cases the loss of one packet does not affect the others.

This can reduce the time it takes to fully render a web page (with its CSS, JavaScript, images, and other media), especially over congested networks with high packet-loss rates.
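The head-of-line effect can be made concrete with a toy model (not real QUIC): six packets carry data for three streams, and one packet is lost. Counting what each transport can hand to the application shows the difference:

```python
# Each packet carries data for one of three streams A, B, C; packet 2 is lost.
packets = [(0, "A"), (1, "B"), (2, "C"), (3, "A"), (4, "B"), (5, "C")]
lost = {2}

# TCP: strict in-order delivery, so everything behind the gap stalls.
tcp_delivered = []
for seq, stream in packets:
    if seq in lost:
        break
    tcp_delivered.append((seq, stream))

# QUIC (simplified): only the stream that owned the lost packet stalls;
# the other streams' packets are delivered as they arrive.
blocked_streams = {stream for seq, stream in packets if seq in lost}
quic_delivered = [(seq, stream) for seq, stream in packets
                  if seq not in lost and stream not in blocked_streams]

print(len(tcp_delivered), len(quic_delivered))  # 2 4
```

In this toy run TCP delivers only the two packets before the gap, while QUIC delivers everything except stream C's data.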

It sounds simple, right?

To deliver on that promise, QUIC has to break some assumptions that many network applications still rely on, which makes QUIC harder to implement and deploy.

QUIC is therefore designed to run on top of UDP, to ease deployment and avoid the problem of network devices dropping packets from an unknown protocol, since essentially all network devices already support UDP. This also lets QUIC be brought quickly into user-level applications: browsers, for example, can implement the new protocol and deliver it to users without waiting for operating-system updates.

Although one of QUIC's original design goals was to avoid breaking existing infrastructure, this design also makes it more challenging to ensure that packets are routed correctly to endpoints.

The trouble with NAT

Basically, NAT routers track TCP connections passing through them by their 4-tuple (source IP address, source port, destination IP address, destination port); by watching the TCP SYN, ACK, and FIN packets on the wire, a router can tell when new connections are created and torn down. This also lets the router manage NAT binding lifetimes precisely (a NAT binding is the mapping between internal and external IP addresses and ports).
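This 4-tuple tracking can be sketched as follows (the table and function names are illustrative, not from any real router implementation):

```python
# A NAT-style middlebox observing TCP control flags to manage bindings.
bindings = {}

def on_packet(src_ip, src_port, dst_ip, dst_port, flags):
    key = (src_ip, src_port, dst_ip, dst_port)   # the 4-tuple identifies the flow
    if "SYN" in flags:
        bindings[key] = "established"            # new connection observed
    elif "FIN" in flags or "RST" in flags:
        bindings.pop(key, None)                  # teardown observed: free the binding

on_packet("10.0.0.2", 51000, "93.184.216.34", 443, {"SYN"})
print(len(bindings))  # 1
on_packet("10.0.0.2", 51000, "93.184.216.34", 443, {"FIN"})
print(len(bindings))  # 0
```

With UDP there are no SYN/FIN markers to observe, which is precisely why the router must fall back to crude timeouts, as described next.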

With QUIC this is not yet feasible, because today's NAT routers do not understand what QUIC is. When handling it, they fall back to the default treatment of plain UDP packets, which is inconsistent and arbitrary, with short timeouts that can hurt connections meant to stay open for long periods.

When a NAT rebinding occurs after such a timeout, the endpoint outside the NAT sees packets arriving from a different port than the one used when the connection was initiated, so tracking connections by the 4-tuple alone no longer works.

And NAT is not the only problem: another QUIC feature runs into the same situation. That feature is connection migration, which allows a connection to move between networks; for example, when a mobile device on cellular data comes into range of a better Wi-Fi network and switches over to it, the connection survives even though its IP address and port may change.

QUIC solves this problem with a new concept: the Connection ID, an arbitrary-length piece of data inside a QUIC packet that identifies a connection. Endpoints can use this ID to find the connection state a packet belongs to without consulting the 4-tuple as above. In fact, a single connection may have multiple IDs (this happens during connection migration, to avoid linking activity across different network paths), but these are managed by the endpoints rather than by intermediary devices, so that is not a problem.
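A toy lookup table shows the idea (purely illustrative, not wire-accurate): packets are demultiplexed by Connection ID, so a changed source address or port does not lose the connection state.

```python
# Server-side connection state indexed by Connection ID, not by 4-tuple.
connections = {}

def accept(conn_id, four_tuple):
    connections[conn_id] = {"tuple": four_tuple, "packets": 0}

def on_packet(conn_id, four_tuple):
    conn = connections[conn_id]   # lookup ignores the source address entirely
    conn["tuple"] = four_tuple    # just remember the latest observed path
    conn["packets"] += 1

accept(b"\xaa\xbb", ("203.0.113.5", 40000))
on_packet(b"\xaa\xbb", ("203.0.113.5", 40000))   # original path
on_packet(b"\xaa\xbb", ("198.51.100.7", 41234))  # client migrated networks
print(connections[b"\xaa\xbb"]["packets"])  # 2
```

Both packets reach the same connection state even though the 4-tuple changed between them.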

However, this can be a problem for operators using anycast and ECMP routing, where a single destination IP address can identify many servers behind it. Because the edge routers in these networks do not know how to handle QUIC, UDP packets belonging to the same QUIC connection (that is, the same Connection ID) but carrying different 4-tuples (due to NAT rebinding or connection migration) may be routed to a different server, breaking the connection.

To solve this, operators need a smarter layer-4 load-balancing solution, which can be done in software without touching the routers (see Facebook's Katran project).

QPACK

Another advantage introduced with HTTP/2 is header compression (HPACK), which lets HTTP/2 endpoints reduce the amount of data transmitted over the network by eliminating redundancy across HTTP requests and responses.

Specifically, HPACK maintains dynamic tables containing headers sent (or received) in previous HTTP requests (or responses), allowing endpoints to refer back to previously seen headers in new requests (or responses) instead of retransmitting them.
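The dynamic-table idea can be sketched like this (a simplification, not the real HPACK wire encoding):

```python
# Shared dynamic table: the first occurrence of a header is sent literally
# and added to the table; repeats become small index references.
table = []

def encode(headers):
    out = []
    for h in headers:
        if h in table:
            out.append(("index", table.index(h)))   # cheap back-reference
        else:
            table.append(h)
            out.append(("literal", h))              # full header, added to the table
    return out

r1 = encode([("user-agent", "demo/1.0"), (":method", "GET")])
r2 = encode([("user-agent", "demo/1.0"), (":method", "GET")])
print(r2)  # [('index', 0), ('index', 1)]
```

The second request transmits only two tiny indices instead of the full header bytes, which is where the compression win comes from.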

HPACK's dynamic tables must stay synchronized between the encoder (the sender of an HTTP request or response) and the decoder (the receiver); otherwise the decoder cannot decode what it receives.

With HTTP/2 over TCP this synchronization is straightforward, because the TCP layer already delivers HTTP requests and responses in the order they were sent. The encoder can then send table-update instructions as part of a request (or response), which keeps the encoding very simple. With QUIC, things are more complicated.

QUIC can carry multiple HTTP requests (or responses) on multiple independent streams. Within a single stream QUIC still delivers data in order, but once multiple streams are involved, ordering across them is no longer guaranteed.

For example, if a client sends HTTP request A on stream A and request B on stream B, packet reordering may cause request B to reach the server before request A. If request B was encoded by the client with a reference to a header from request A, the server cannot decode it, because it has not yet received request A.
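The failure can be sketched with a tiny decoder (illustrative, not real QPACK): request B carries only an index reference into a table entry that request A was supposed to create, and B arrives first.

```python
# The decoder's dynamic table is empty because request A hasn't arrived.
decoder_table = []

def decode(encoded):
    out = []
    for kind, value in encoded:
        if kind == "literal":
            decoder_table.append(value)   # literal headers populate the table
            out.append(value)
        else:                             # kind == "index": a back-reference
            if value >= len(decoder_table):
                raise LookupError("referenced header not received yet")
            out.append(decoder_table[value])
    return out

request_b = [("index", 0)]    # refers to a header that request A defines
try:
    decode(request_b)         # B arrived before A: decoding cannot proceed
except LookupError as e:
    print(e)
```

A real decoder would have to block stream B until A arrives, reintroducing a form of head-of-line blocking at the compression layer, which is exactly what QPACK is designed to avoid.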

In the gQUIC protocol, this problem was solved simply by serializing all HTTP request and response headers (headers only, not bodies) onto a single gQUIC stream, so headers are delivered in order in all cases. This very simple mechanism allowed a lot of HTTP/2 code to be reused, but it aggravates the very head-of-line blocking that QUIC was designed to reduce. The IETF QUIC working group therefore designed a new mapping between HTTP and QUIC (“HTTP/QUIC”) and a new header-compression mechanism called “QPACK”.

In the latest drafts of the HTTP/QUIC mapping and QPACK, each HTTP request/response exchange uses its own bidirectional QUIC stream, so head-of-line blocking does not occur between exchanges. In addition, to support QPACK, each side of the connection creates two additional unidirectional QUIC streams: one to send QPACK table updates to the peer, and one to acknowledge them. That way, a QPACK encoder may reference a dynamic-table entry only after the decoder has acknowledged it.

Defending against reflection attacks

A common problem with UDP-based protocols (see more here and here) is their susceptibility to reflection attacks. In this form of attack, an attacker tricks a server into sending large amounts of data to a victim by spoofing the source IP address of packets sent to the server, making them appear to come from the victim. This technique is heavily used in DDoS attacks.

It is especially effective when the response the server sends is much larger than the request it receives (hence the name “amplification” attack).

TCP is not commonly used for this kind of attack because the packets exchanged during its handshake (SYN, SYN+ACK, …) are of similar length, leaving no potential for amplification.

QUIC's handshake, by contrast, is highly asymmetric: as with TLS, in its first flight the QUIC server typically sends back its certificate chain, which can be very large, while the client only needs to send a few bytes (a TLS ClientHello message inside a QUIC packet). For this reason, the client's initial QUIC packet must be padded to a minimum length even though its actual content is much smaller. Even so, this measure is not enough, because the server's response is typically spread over multiple packets and can still be far larger than the padded client packet.
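The arithmetic is worth making concrete. A hedged illustration with assumed sizes (a 60-byte spoofed request, a 3000-byte response, and the 1200-byte minimum Initial-packet size from the IETF QUIC drafts):

```python
# Amplification factor = bytes the server sends / bytes the attacker spends.
request_bytes = 60
response_bytes = 3000
print(response_bytes / request_bytes)        # 50.0  (attractive to an attacker)

# QUIC forces the client's first packet up to a minimum size via padding,
# which shrinks the achievable ratio for the same response.
padded_request = max(request_bytes, 1200)
print(response_bytes / padded_request)       # 2.5
```

Padding does not eliminate the asymmetry, as the article notes, but it cuts the multiplier enough to make QUIC far less attractive as a reflection vector.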

The QUIC protocol also defines a mechanism for validating the source IP address: instead of sending its long response, the server sends only a much smaller “retry” packet containing a special encrypted token, which the client must echo back to the server in a new packet. This way, the server can be sure the client is not spoofing its source IP address (because the client really did receive the retry packet) and can continue the handshake. The disadvantage of this method is that it increases the normal handshake from one client-server round trip to two.
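The retry-token exchange can be sketched with an HMAC-based token (an illustrative scheme, not QUIC's actual token format): the token is bound to the claimed source address, so only a client that genuinely received it at that address can echo it back.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(16)   # server-side key, never leaves the server

def make_token(addr):
    # Bind the token to the claimed source address.
    return hmac.new(SECRET, addr.encode(), hashlib.sha256).digest()

def validate(addr, token):
    return hmac.compare_digest(token, make_token(addr))

token = make_token("203.0.113.5")        # sent to the client in a Retry packet
print(validate("203.0.113.5", token))    # True: the client really owns the address
print(validate("198.51.100.9", token))   # False: a spoofed source never saw the token
```

A victim whose address was spoofed never receives the small retry packet, so the attacker cannot produce a valid echo and the expensive response is never sent.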

Another solution is to shrink the server's response to the point where a reflection attack becomes ineffective, for example by using ECDSA certificates (which are much smaller than equivalent RSA certificates). Cloudflare has also been experimenting with TLS certificate compression using algorithms such as zlib and brotli (a feature originally introduced by gQUIC but not available in TLS at the time).

Forward error correction

Forward error correction is a technique that detects and corrects certain errors occurring during data transmission without requiring retransmission. It was experimented with in gQUIC and is discussed as a candidate feature for future QUIC versions; read more at https://http3-explained.haxx.se/en/quic-v2#forward-error-correction
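A minimal XOR-parity scheme illustrates the principle (generic FEC for illustration, not the actual gQUIC scheme): one extra parity packet lets the receiver rebuild any single lost packet without a retransmission round trip.

```python
# XOR all packets together byte-by-byte to form one parity packet.
def xor_parity(packets):
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

data = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(data)           # transmitted alongside the data packets

# Suppose the second packet is lost: XOR the survivors with the parity
# packet and the missing bytes fall out.
recovered = xor_parity([data[0], data[2], parity])
print(recovered)  # b'efgh'
```

The trade-off is constant bandwidth overhead (the parity packet) in exchange for avoiding a retransmission delay when a single packet is lost.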

UDP performance

A recurring problem for QUIC is that current hardware and software do not yet understand the new protocol. Above, we looked at how QUIC copes with network devices and routers along the path, but there is another potential issue: the performance of sending and receiving data over UDP at the endpoints that use QUIC. Over the years, a great deal of work has gone into optimizing TCP as much as possible, from building off-loading capabilities into software (operating systems) and hardware (network cards); for UDP, little of that exists yet.

However, it is only a matter of time before UDP benefits from similar efforts, and there are already a few examples: the implementation of Generic Segmentation Offload for UDP on Linux, which lets applications hand multiple UDP segments from user space to the kernel network stack at the same cost (or nearly so) as sending a single segment, and the addition of zerocopy socket support on Linux, which lets applications avoid the cost of copying memory from user space to kernel space.

Conclusion

Like HTTP/2 and TLS 1.3, QUIC brings many new features to improve the performance and security of websites and other internet-based systems. The IETF working group aimed to release the first version of the QUIC spec by the end of 2018.

  • Cloudflare has also been working hard to bring the benefits of QUIC to all of its customers. Update: Cloudflare completed its HTTP/3 support by building quiche, an open-source HTTP/3 implementation written in Rust.
  • Google says nearly half of all requests from the Chrome browser to Google's servers are already made over QUIC, and it expects to increase that share so that QUIC becomes the default transport from Google's clients (browsers and mobile devices alike) to Google's servers.
  • Support for HTTP/3 was added to Chrome (Canary builds) in September 2019, and although HTTP/3 is not yet enabled by default in any browser, as of 2020 it is supported behind flags in both Chrome and Firefox. You can experiment with HTTP/3 by following the instructions here. We can look forward to a bright future with QUIC!



Source: Viblo