
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, the technologies used, and how to join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 19:30:02 GMT</lastBuildDate>
        <item>
            <title><![CDATA[QUIC action: patching a broadcast address amplification vulnerability]]></title>
            <link>https://blog.cloudflare.com/mitigating-broadcast-address-attack/</link>
            <pubDate>Mon, 10 Feb 2025 14:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare was recently contacted by researchers who discovered a broadcast amplification vulnerability through their QUIC Internet measurement research. We've implemented a mitigation. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare was recently contacted by a group of anonymous security researchers who discovered a broadcast amplification vulnerability through their <a href="https://blog.cloudflare.com/tag/quic"><u>QUIC</u></a> Internet measurement research. Our team collaborated with these researchers through our Public Bug Bounty program, and worked to fully patch a dangerous vulnerability that affected our infrastructure.</p><p>Since being notified about the vulnerability, we've implemented a mitigation to help secure our infrastructure. According to our analysis, we have fully patched this vulnerability and the amplification vector no longer exists. </p>
    <div>
      <h3>Summary of the amplification attack</h3>
      <a href="#summary-of-the-amplification-attack">
        
      </a>
    </div>
    <p>QUIC is an Internet transport protocol that is encrypted by default. It offers equivalent features to <a href="https://www.cloudflare.com/learning/ddos/glossary/tcp-ip/"><u>TCP</u></a> (Transmission Control Protocol) and <a href="https://www.cloudflare.com/learning/ssl/transport-layer-security-tls/"><u>TLS</u></a> (Transport Layer Security), while using a shorter handshake sequence that helps reduce connection establishment times. QUIC runs over <a href="https://www.cloudflare.com/en-gb/learning/ddos/glossary/user-datagram-protocol-udp/"><u>UDP</u></a> (User Datagram Protocol).</p><p>The researchers found that a single client QUIC <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-17.2.2"><u>Initial packet</u></a> targeting a broadcast IP destination address could trigger a large response of initial packets. This manifested as both a server CPU amplification attack and a reflection amplification attack.</p>
    <div>
      <h3>Transport and security handshakes</h3>
      <a href="#transport-and-security-handshakes">
        
      </a>
    </div>
    <p>When using TCP and TLS, there are two handshake interactions. First is the TCP 3-way transport handshake: a client sends a SYN packet to a server, the server responds with a SYN-ACK, and the client responds with an ACK. This process validates the client IP address. Second is the TLS security handshake: a client sends a ClientHello to a server, the server carries out some cryptographic operations and responds with a ServerHello containing a server certificate. The client verifies the certificate, confirms the handshake, and sends application traffic such as an HTTP request.</p><p><a href="https://datatracker.ietf.org/doc/html/rfc9000#section-7"><u>QUIC</u></a> follows a similar process; however, the sequence is shorter because the transport and security handshakes are combined. A client sends an Initial packet containing a ClientHello to a server; the server carries out some cryptographic operations and responds with an Initial packet containing a ServerHello with a server certificate. The client verifies the certificate and then sends application data.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wsMcFwy8xMRYwQyFNm6oC/5d131543e7704794776dfc3ed89c1693/image2.png" />
          </figure><p>The QUIC handshake does not require client IP address validation before starting the security handshake. This means there is a risk that an attacker could spoof a client IP and cause a server to do cryptographic work and send data to a target victim IP (aka a <a href="https://blog.cloudflare.com/reflections-on-reflections/"><u>reflection attack</u></a>). <a href="https://datatracker.ietf.org/doc/html/rfc9000"><u>RFC 9000</u></a> is careful to describe the risks this poses and provides mechanisms to reduce them (for example, see Sections <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-8"><u>8</u></a> and <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-9.3.1"><u>9.3.1</u></a>). Until a client address is verified, a server employs an anti-amplification limit, sending a maximum of 3x as many bytes as it has received. Furthermore, a server can initiate address validation before engaging in the cryptographic handshake by responding with a <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-8.1.2"><u>Retry packet</u></a>. The retry mechanism, however, adds an additional round-trip to the QUIC handshake sequence, negating some of its benefits compared to TCP. Real-world QUIC deployments use a range of strategies and heuristics to detect traffic loads and enable different mitigations.</p><p>In order to understand how the researchers triggered an amplification attack despite these QUIC guardrails, we first need to dive into how IP broadcast works.</p>
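<p>Before diving into broadcast, here is a sketch of the anti-amplification accounting described above. This is an illustrative model only, not code from any real QUIC stack; the class and method names are invented:</p>

```python
class AntiAmplificationLimiter:
    """Toy model of RFC 9000 Section 8's pre-validation send limit."""

    LIMIT_FACTOR = 3  # a server may send at most 3x what it has received

    def __init__(self):
        self.bytes_received = 0
        self.bytes_sent = 0
        self.address_validated = False

    def on_datagram_received(self, size):
        self.bytes_received += size

    def sendable_bytes(self):
        # The limit lifts once the client address has been validated.
        if self.address_validated:
            return float("inf")
        return self.LIMIT_FACTOR * self.bytes_received - self.bytes_sent

    def on_datagram_sent(self, size):
        assert size <= self.sendable_bytes(), "would exceed the 3x limit"
        self.bytes_sent += size


limiter = AntiAmplificationLimiter()
limiter.on_datagram_received(1200)  # a padded client Initial
print(limiter.sendable_bytes())     # 3600
```

<p>A padded 1,200-byte client Initial therefore lets an unvalidated server send at most 3,600 bytes before it must pause or validate the client address.</p>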
    <div>
      <h3>Broadcast addresses</h3>
      <a href="#broadcast-addresses">
        
      </a>
    </div>
    <p>In Internet Protocol version 4 (IPv4) addressing, the final address in any given <a href="https://www.cloudflare.com/learning/network-layer/what-is-a-subnet/"><u>subnet</u></a> is a special broadcast IP address used to send packets to every node within the IP address range. Every node that is within the same subnet receives any packet that is sent to the broadcast address, enabling one sender to send a message that can be “heard” by potentially hundreds of adjacent nodes. This behavior is enabled by default in most network-connected systems and is critical for discovery of devices within the same IPv4 network.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/49zGjFbeIv7RxZMM6W2i5V/9e9e5f2f3bd8401467887d488930f476/image3.png" />
          </figure><p>The broadcast address by nature poses a risk of DDoS amplification; for every one packet sent, hundreds of nodes have to process the traffic. </p>
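<p>For the curious, the broadcast address of any IPv4 subnet can be computed directly with Python's standard ipaddress module (the prefixes here are documentation ranges, not Cloudflare addresses):</p>

```python
import ipaddress

# The broadcast address is simply the final address of an IPv4 subnet.
net = ipaddress.ip_network("192.0.2.0/24")
print(net.broadcast_address)   # 192.0.2.255
print(net.num_addresses)       # 256

# Smaller subnets work the same way: a /29 covers 8 addresses.
small = ipaddress.ip_network("192.0.2.96/29")
print(small.broadcast_address) # 192.0.2.103
```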
    <div>
      <h3>Dealing with the expected broadcast</h3>
      <a href="#dealing-with-the-expected-broadcast">
        
      </a>
    </div>
    <p>To combat the risk posed by broadcast addresses, most routers by default reject packets originating from outside an IP subnet that are targeted at the broadcast address of networks to which they are locally connected. Broadcast packets may only be forwarded within the same IP subnet, preventing attackers on the Internet from targeting servers across the world.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5TU3GO26KOJgzLHcS9Uxiu/6cd334afc3925b1713b7e706decc7269/image1.png" />
          </figure><p>The same techniques are not generally applied when a given router is not directly connected to a given subnet. So long as an address is not locally treated as a broadcast address, <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol</u></a> (BGP) or other routing protocols will continue to route traffic from external IPs toward the last IPv4 address in a subnet. Essentially, this means a “broadcast address” is only relevant within a local scope of routers and hosts connected together via Ethernet. To routers and hosts across the Internet, a broadcast IP address is routed in the same way as any other IP.</p>
    <div>
      <h3>Binding IP address ranges to hosts</h3>
      <a href="#binding-ip-address-ranges-to-hosts">
        
      </a>
    </div>
    <p>Each Cloudflare server is expected to be capable of serving content from every website on the Cloudflare network. Because our network utilizes <a href="https://www.cloudflare.com/learning/cdn/glossary/anycast-network/"><u>Anycast</u></a> routing, each server necessarily needs to be listening on (and capable of returning traffic from) every Anycast IP address in use on our network.</p><p>To do so, we take advantage of the loopback interface on each server. Unlike with a physical network interface, when an IP address range is bound to the loopback interface, every address within that range is made available to the host (and processed locally by the kernel).</p><p>The mechanism by which this works is straightforward. In a traditional routing environment, <a href="https://en.wikipedia.org/wiki/Longest_prefix_match"><u>longest prefix matching</u></a> is employed to select a route. Under longest prefix matching, routes towards more specific blocks of IP addresses (such as 192.0.2.96/29, a range of 8 addresses) are selected over routes to less specific blocks of IP addresses (such as 192.0.2.0/24, a range of 256 addresses).</p><p>While Linux utilizes longest prefix matching, it first consults an additional structure, the Routing Policy Database (RPDB), before searching for a match. The RPDB is a prioritized list of routing tables, each of which can contain routes. The default RPDB looks like this:</p>
            <pre><code>$ ip rule show
0:	from all lookup local
32766:	from all lookup main
32767:	from all lookup default</code></pre>
            <p>Linux consults each routing table in ascending numerical order to find a matching route. Once one is found, the search terminates and that route is used immediately.</p><p>If you’ve previously worked with routing rules on Linux, you are likely familiar with the contents of the main table. Despite the existence of a table named “default”, the “main” table generally functions as the default lookup table. It also contains what we traditionally associate with routing table information:</p>
            <pre><code>$ ip route show table main
default via 192.0.2.1 dev eth0 onlink
192.0.2.0/24 dev eth0 proto kernel scope link src 192.0.2.2</code></pre>
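<p>The longest-prefix selection described above can be illustrated in a few lines of Python. This is a toy lookup over prefixes resembling the table above, not how the kernel actually implements it:</p>

```python
import ipaddress

# A default route plus two overlapping prefixes of different lengths.
PREFIXES = ["0.0.0.0/0", "192.0.2.0/24", "192.0.2.96/29"]

def longest_prefix_match(dest, prefixes):
    """Return the most specific prefix containing dest, or None."""
    addr = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(p) for p in prefixes
               if addr in ipaddress.ip_network(p)]
    return str(max(matches, key=lambda n: n.prefixlen)) if matches else None

print(longest_prefix_match("192.0.2.100", PREFIXES))   # 192.0.2.96/29
print(longest_prefix_match("198.51.100.7", PREFIXES))  # 0.0.0.0/0
```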
            <p>This is, however, not the first routing table that will be consulted for a given lookup. Instead, that task falls to the local table:</p>
            <pre><code>$ ip route show table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
local 192.0.2.2 dev eth0 proto kernel scope host src 192.0.2.2
broadcast 192.0.2.255 dev eth0 proto kernel scope link src 192.0.2.2</code></pre>
            <p>Looking at the table, we see two new types of routes: local and broadcast. As their names suggest, they serve two distinct functions. Local routes provide the desired behavior: every IP address in a prefix with a local route is processed by the kernel. Broadcast routes cause a packet to be broadcast to all IP addresses within the given range. Both types of routes are added automatically when an IP address is bound to an interface (and, when a range is bound to the loopback (lo) interface, the range itself is added as a local route).</p>
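<p>To make the distinction concrete, here is a toy model of a lookup against a local table like the one above. The entries and the helper function are illustrative only; as in the real table, the longest prefix wins, so the /32 broadcast entry beats the wider local entry for a range's final address:</p>

```python
import ipaddress

# (route type, prefix) pairs mirroring the example local table.
LOCAL_TABLE = [
    ("local", "127.0.0.0/8"),
    ("broadcast", "127.255.255.255/32"),
    ("local", "192.0.2.2/32"),
    ("broadcast", "192.0.2.255/32"),
]

def classify(dest):
    """Return how the kernel would treat dest: 'local', 'broadcast', or None."""
    addr = ipaddress.ip_address(dest)
    matches = [(kind, ipaddress.ip_network(p)) for kind, p in LOCAL_TABLE
               if addr in ipaddress.ip_network(p)]
    if not matches:
        return None  # falls through to the "main" table
    return max(matches, key=lambda m: m[1].prefixlen)[0]

print(classify("127.0.0.1"))        # local
print(classify("127.255.255.255"))  # broadcast
print(classify("198.51.100.1"))     # None
```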
    <div>
      <h3>Vulnerability discovery</h3>
      <a href="#vulnerability-discovery">
        
      </a>
    </div>
    <p>Deployments of QUIC are highly dependent on the load-balancing and packet forwarding infrastructure that they sit on top of. Although QUIC’s RFCs describe risks and mitigations, there can still be attack vectors depending on the nature of server deployments. The reporting researchers studied QUIC deployments across the Internet and discovered that sending a QUIC Initial packet to one of Cloudflare’s broadcast addresses triggered a flood of responses. The aggregate amount of response data exceeded the RFC's 3x amplification limit.</p><p>Taking a look at the local routing table of an example Cloudflare system, we see a potential culprit:</p>
            <pre><code>$ ip route show table local
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
local 192.0.2.2 dev eth0 proto kernel scope host src 192.0.2.2
broadcast 192.0.2.255 dev eth0 proto kernel scope link src 192.0.2.2
local 203.0.113.0 dev lo proto kernel scope host src 203.0.113.0
local 203.0.113.0/24 dev lo proto kernel scope host src 203.0.113.0
broadcast 203.0.113.255 dev lo proto kernel scope link src 203.0.113.0</code></pre>
            <p>On this example system, the anycast prefix 203.0.113.0/24 has been bound to the loopback interface (lo) through the use of standard tooling. Acting dutifully under the standards of IPv4, the tooling has assigned both special types of routes — a local one for the IP range itself and a broadcast one for the final address in the range — to the interface.</p><p>While traffic to the broadcast address of our router’s directly connected subnet is filtered as expected, broadcast traffic targeting our routed anycast prefixes still arrives at our servers themselves. Normally, broadcast traffic arriving at the loopback interface does little to cause problems. Services bound to a specific port across an entire range will receive data sent to the broadcast address and continue as normal. Unfortunately, this simple behavior breaks down when the usual assumptions no longer hold.</p><p>Cloudflare’s frontend consists of several worker processes, each of which independently binds to the entire anycast range on UDP port 443. In order to enable multiple processes to bind to the same port, we use the SO_REUSEPORT socket option. While SO_REUSEPORT <a href="https://blog.cloudflare.com/the-sad-state-of-linux-socket-balancing/"><u>has additional benefits</u></a>, it also causes traffic sent to the broadcast address to be copied to every listener.</p><p>Each QUIC server worker operates in isolation, so every one of them reacts to the same client Initial, duplicating the work on the server side and generating response traffic to the client's IP address. A single packet could thus trigger significant amplification. While specifics vary by implementation, a typical one-listener-per-core stack (which sends retries in response to presumed timeouts) on a 128-core system could generate and send 384 replies for each packet sent to the broadcast address.</p><p>Although the researchers demonstrated this attack on QUIC, the underlying vulnerability can affect other UDP request/response protocols that use sockets in the same way.</p>
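<p>The multi-listener pattern at the heart of the issue can be reproduced in miniature with Python on Linux. The helper name and loopback binding are arbitrary; the point is that several independent sockets share one UDP port, and a datagram sent to a broadcast address is delivered to every one of them:</p>

```python
import socket

def make_listener(port=0):
    """Bind a UDP socket with SO_REUSEPORT, as each worker process would."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    return s

first = make_listener()  # let the kernel pick a free port
port = first.getsockname()[1]
workers = [first] + [make_listener(port) for _ in range(3)]
print(len(workers), "listeners sharing UDP port", port)
for s in workers:
    s.close()
```

<p>With one listener per core on a 128-core machine, three response packets per listener account for the 384 replies mentioned above.</p>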
    <div>
      <h3>Mitigation</h3>
      <a href="#mitigation">
        
      </a>
    </div>
    <p>As a communication methodology, broadcast is not generally desirable for anycast prefixes. Thus, the easiest method to mitigate the issue was simply to disable broadcast functionality for the final address in each range.</p><p>Ideally, this would be done by modifying our tooling to only add the local routes in the local routing table, skipping the inclusion of the broadcast ones altogether. Unfortunately, the only practical mechanism to do so would involve patching and maintaining our own internal fork of the iproute2 suite, a rather heavy-handed solution for the problem at hand.</p><p>Instead, we decided to focus on removing the route itself. Similar to any other route, it can be removed using standard tooling:</p>
            <pre><code>$ sudo ip route del 203.0.113.255 table local</code></pre>
            <p>To do so at scale, we made a relatively minor change to our deployment system:</p>
            <pre><code>  {%- for lo_route in lo_routes %}
    {%- if lo_route.type == "broadcast" %}
        # All broadcast addresses are implicitly ipv4
        {%- do remove_route({
        "dev": "lo",
        "dst": lo_route.dst,
        "type": "broadcast",
        "src": lo_route.src,
        }) %}
    {%- endif %}
  {%- endfor %}</code></pre>
            <p>In doing so, we effectively ensure that all broadcast routes attached to the loopback interface are removed, mitigating the risk by ensuring that the specification-defined broadcast address is treated no differently than any other address in the range.</p>
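<p>As a sanity check after a change like this, one could parse the local table and confirm that no broadcast routes remain on loopback. The following is a hypothetical audit script, not part of Cloudflare's tooling; the sample text mirrors the example table from earlier in the post:</p>

```python
# Sample output in the shape of `ip route show table local`.
SAMPLE = """\
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 192.0.2.2 dev eth0 proto kernel scope host src 192.0.2.2
broadcast 192.0.2.255 dev eth0 proto kernel scope link src 192.0.2.2
local 203.0.113.0/24 dev lo proto kernel scope host src 203.0.113.0
broadcast 203.0.113.255 dev lo proto kernel scope link src 203.0.113.0
"""

def loopback_broadcast_routes(table_text):
    """Return destinations of broadcast routes attached to the lo interface."""
    leftovers = []
    for line in table_text.splitlines():
        fields = line.split()
        # fields[0] is the route type, fields[1] the destination;
        # the token after "dev" names the interface.
        if fields and fields[0] == "broadcast" and "dev" in fields:
            if fields[fields.index("dev") + 1] == "lo":
                leftovers.append(fields[1])
    return leftovers

print(loopback_broadcast_routes(SAMPLE))  # ['203.0.113.255']
```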
    <div>
      <h3>Next steps </h3>
      <a href="#next-steps">
        
      </a>
    </div>
    <p>While the vulnerability specifically affected broadcast addresses within our anycast range, it likely expands past our infrastructure. Anyone with infrastructure that meets the relatively narrow criteria (a multi-worker, multi-listener UDP-based service that is bound to all IP addresses on a machine with routable IP prefixes attached in such a way as to expose the broadcast address) will be affected unless mitigations are in place. We encourage network administrators and security professionals to assess their systems for configurations that may present a local amplification attack vector.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Network]]></category>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[DDoS]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Bug Bounty]]></category>
            <guid isPermaLink="false">6ZaxgQxDACeIF6MZAquLPV</guid>
            <dc:creator>Josephine Chow</dc:creator>
            <dc:creator>June Slater</dc:creator>
            <dc:creator>Bryton Herdes</dc:creator>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Open sourcing h3i: a command line tool and library for low-level HTTP/3 testing and debugging]]></title>
            <link>https://blog.cloudflare.com/h3i/</link>
            <pubDate>Mon, 30 Dec 2024 14:00:00 GMT</pubDate>
            <description><![CDATA[ h3i is a command line tool and Rust library designed for low-level testing and debugging of HTTP/3, which runs over QUIC. ]]></description>
            <content:encoded><![CDATA[ <p>Have you ever built a piece of IKEA furniture, or put together a LEGO set, by following the instructions closely and only at the end realized at some point you didn't <i>quite</i> follow them correctly? The final result might be close to what was intended, but there's a nagging thought that maybe, just maybe, it's not as rock steady or functional as it could have been.</p><p>Internet protocol specifications are instructions designed for engineers to build things. Protocol designers take great care to ensure the documents they produce are clear. The standardization process gathers consensus and review from experts in the field, to further ensure document quality. Any reasonably skilled engineer should be able to take a specification and produce a performant, reliable, and secure implementation. The Internet is central to everyone's lives, and we depend on these implementations. Any deviations from the specification can put us at risk. For example, mishandling of malformed requests can allow attacks such as <a href="https://en.wikipedia.org/wiki/HTTP_request_smuggling"><u>request smuggling</u></a>.</p><p>h3i is a binary command line tool and Rust library designed for low-level testing and debugging of HTTP/3, which runs over QUIC. <a href="https://crates.io/crates/h3i"><u>h3i</u></a> is free and open source as part of Cloudflare's <a href="https://github.com/cloudflare/quiche"><u>quiche</u></a> project. In this post we'll explain the motivation behind developing h3i, how we use it to help develop robust and safe standards-compliant software and production systems, and how you can similarly use it to test your own software or services. If you just want to jump into how to use h3i, go to the <a href="#the-h3i-command-line-tool"><u>h3i command line tool</u></a> section.</p>
    <div>
      <h2>A recap of QUIC and HTTP/3</h2>
      <a href="#a-recap-of-quic-and-http-3">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/http3-the-past-present-and-future/"><u>QUIC</u></a> is a secure-by-default transport protocol that provides performance advantages compared to TCP and TLS via a more efficient handshake, along with stream multiplexing that provides <a href="https://en.wikipedia.org/wiki/Head-of-line_blocking"><u>head-of-line blocking</u></a> avoidance. <a href="https://www.cloudflare.com/en-gb/learning/performance/what-is-http3/"><u>HTTP/3</u></a> is an application protocol that maps HTTP semantics to QUIC, such as defining how HTTP requests and responses are assigned to individual QUIC streams.</p><p>Cloudflare has supported QUIC on our global network in some shape or form <a href="https://blog.cloudflare.com/http3-the-past-present-and-future/"><u>since 2018</u></a>. We started while the <a href="https://ietf.org/"><u>Internet Engineering Task Force (IETF)</u></a> was earnestly standardizing the protocol, working through early iterations and using interoperability testing and experience to help provide feedback for the standards process. We <a href="https://blog.cloudflare.com/quic-version-1-is-live-on-cloudflare/"><u>launched support</u></a> for QUIC version 1 and HTTP/3 as soon as <a href="https://datatracker.ietf.org/doc/html/rfc9000"><u>RFC 9000</u></a> (and its accompanying specifications) were published in 2021.</p><p>We work on the Protocols team, who own the ingress proxy into the Cloudflare network. This is essentially Cloudflare’s “front door” — HTTP requests that come to Cloudflare from the Internet pass through us first. The majority of requests are passed onwards to things like rulesets, workers, caches, or a customer origin. However, you might be surprised that many requests don't ever make it that far because they are, in some way, invalid or malformed. 
Servers listening on the Internet have to be robust to traffic that is not RFC compliant, whether caused by accident or malicious intent.</p><p>The Protocols team actively participates in IETF standardization work and has also helped build and maintain other Cloudflare services that leverage quiche for QUIC and HTTP/3, from the proxies that help <a href="https://blog.cloudflare.com/icloud-private-relay/"><u>iCloud Private Relay</u></a> via <a href="https://blog.cloudflare.com/unlocking-quic-proxying-potential/"><u>MASQUE proxying</u></a>, to replacing <a href="https://blog.cloudflare.com/zero-trust-warp-with-a-masque/"><u>WARP's use of Wireguard with MASQUE</u></a>, and beyond.</p><p>Throughout all of these different use cases, it is important for us to extensively test all aspects of the protocols. A deep dive into protocol details is a blog post (or three) in its own right. So let's take a thin slice across HTTP to help illustrate the concepts.</p><p><a href="https://www.rfc-editor.org/rfc/rfc9110.html"><u>HTTP Semantics</u></a> are common to all versions of HTTP — the overall architecture, terminology, and protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, and much more. Each individual HTTP version defines how semantics are transformed into a "wire format" for exchange over the Internet. You can read more about HTTP/1.1 and HTTP/2 in some of our previous <a href="https://blog.cloudflare.com/a-primer-on-proxies/"><u>blog</u></a> <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/"><u>posts</u></a>.</p><p>With HTTP/3, HTTP request and response messages are split into a series of binary frames. <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-7.2.2"><u>HEADERS</u></a> frames carry a representation of HTTP metadata (method, path, status code, field lines). 
    The payload of the frame is the encoded <a href="https://datatracker.ietf.org/doc/html/rfc9204"><u>QPACK</u></a> compression output. <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-7.2.1"><u>DATA</u></a> frames carry <a href="https://datatracker.ietf.org/doc/html/rfc9110#section-6.4.1"><u>HTTP content</u></a> (aka "message body"). In order to exchange these frames, HTTP/3 relies on QUIC <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-2"><u>streams</u></a>. These provide an ordered and reliable byte stream, and each has an identifier (ID) that is unique within the scope of a connection. There are <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-2.1"><u>four different stream types</u></a>, denoted by the two least significant bits of the ID.</p><p>As a simple example, assuming a QUIC connection has already been established, a client can make a GET request and receive a 200 OK response with an HTML body using the following sequence:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7vVfQ5CYaaVPVmGloRUnkI/88bd727c3526e540bd493bc15fbe904a/unnamed.png" />
          </figure><ol><li><p>Client allocates the first available client-initiated bidirectional QUIC stream. (The IDs start at 0, then 4, 8, 12 and so on)</p></li><li><p>Client sends the request HEADERS frame on the stream and sets the stream's <a href="https://datatracker.ietf.org/doc/html/rfc9000#section-19.8"><u>FIN bit</u></a> to mark the end of stream.</p></li><li><p>Server receives the request HEADERS frame and validates it against <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-4.1.2"><u>RFC 9114 rules</u></a>. If accepted, it processes the request and prepares the response.</p></li><li><p>Server sends the response HEADERS frame on the same stream.</p></li><li><p>Server sends the response DATA frame on the same stream and sets the FIN bit.</p></li><li><p>Client receives the response frames and validates them. If accepted, the content is presented to the user.</p></li></ol><p>At the QUIC layer, stream data is split into STREAM frames, which are sent in QUIC packets over UDP. QUIC deals with any loss detection and recovery, helping to ensure stream data is reliable. The layer cake diagram below provides a handy comparison of how HTTP/1.1, HTTP/2 and HTTP/3 use TCP, UDP and IP.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4049UpKGn4BJcYcEXSFgWz/32143a5ba3672786639908ad96851225/image2.png" />
          </figure>
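<p>The stream-type rule mentioned above can be written down directly. This snippet simply illustrates RFC 9000's encoding of the two least significant bits of a stream ID:</p>

```python
# RFC 9000, Section 2.1: the low two bits of a stream ID encode who
# opened the stream and whether it is bidirectional.
STREAM_TYPES = {
    0b00: "client-initiated, bidirectional",
    0b01: "server-initiated, bidirectional",
    0b10: "client-initiated, unidirectional",
    0b11: "server-initiated, unidirectional",
}

def stream_type(stream_id):
    return STREAM_TYPES[stream_id & 0b11]

# Client-initiated bidirectional streams get IDs 0, 4, 8, 12, ...
print(stream_type(0))  # client-initiated, bidirectional
print(stream_type(8))  # client-initiated, bidirectional
print(stream_type(3))  # server-initiated, unidirectional
```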
    <div>
      <h2>Background on testing QUIC and HTTP/3 at Cloudflare</h2>
      <a href="#background-on-testing-quic-and-http-3-at-cloudflare">
        
      </a>
    </div>
    <p>The Protocols team has a diverse set of automated test tools that exercise our ingress proxy software in order to ensure it can stand up to the deluge that the Internet can throw at it. Just like a bouncer at a nightclub front door, we need to prevent as much bad traffic as possible before it gets inside and potentially causes damage.</p><p>HTTP/2 and HTTP/3 share several concepts. When we started developing early HTTP/3 support, we'd already learned a lot from production experience with HTTP/2. While HTTP/2 addressed many issues with HTTP/1.1 (especially problems like <a href="https://www.cgisecurity.com/lib/HTTP-Request-Smuggling.pdf"><u>request smuggling</u></a>, caused by its ASCII-based message delineation), HTTP/2 also added complexity and new avenues for attack. Security is an ongoing process, and the Protocols team continually hardens our software and systems against threats. For example, mitigating the range of <a href="https://blog.cloudflare.com/on-the-recent-http-2-dos-attacks/"><u>denial-of-service attacks</u></a> identified by Netflix in 2019, or the <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/"><u>HTTP/2 Rapid Reset</u></a> attacks of 2023.</p><p>For testing HTTP/2, we rely on the Python <a href="https://pypi.org/project/requests/"><u>Requests</u></a> library for conventional HTTP exchanges. However, that mostly only exercises HEADERS and DATA frames. There are eight other frame types and a plethora of ways that they can interact (hence the new attack vectors mentioned above). In order to get full testing coverage, we have to drop down to the lower-layer <a href="https://pypi.org/project/h2/"><u>h2</u></a> library, which allows exact frame-by-frame control. However, even that is not always enough. Libraries tend to want to follow the RFC rules and prevent their users from doing "the wrong thing". This is entirely logical for most purposes. 
For our needs though, we need to take off the safety guards just like any potential attackers might do. We have a few cases where the best way to exercise certain traffic patterns is to handcraft HTTP/2 frames in a hex editor, store that as binary, and replay it with a tool such as <a href="https://docs.openssl.org/1.0.2/man1/s_client/"><u>OpenSSL s_client</u></a>.</p><p>We knew we'd need similar testing approaches for HTTP/3. However, when we started in 2018, there weren't many other suitable client implementations. The rate of iteration on the specifications also meant it was hard to always keep in sync. So we built tests on quiche, using a mix of our <a href="https://github.com/cloudflare/quiche/blob/master/apps/src/client.rs"><u>quiche-client</u></a> and <a href="https://github.com/cloudflare/quiche/tree/master/tools/http3_test"><u>http3_test</u></a>. Over time, the python library <a href="https://github.com/aiortc/aioquic"><u>aioquic</u></a> has matured, and we have used it to add a range of lower-layer tests that break or bend HTTP/3 rules, in order to prove our proxies are robust.</p><p>Finally, we would be remiss not to mention that all the tests in our ingress proxy are <b>in addition to </b>the suite of over 500 integration tests that run on the quiche project itself.</p>
    <div>
      <h2>Making HTTP/3 testing more accessible and maintainable with h3i</h2>
      <a href="#making-http-3-testing-more-accessible-and-maintainable-with-h3i">
        
      </a>
    </div>
    <p>While we are happy with the coverage of our current tests, the smorgasbord of test tools makes it hard to know what to reach for when adding new tests. For example, we've had cases where aioquic's safety guards prevent us from doing something, and it has needed a patch or workaround. This sort of thing requires a time investment just to debug/develop the tests.</p><p>We believe it shouldn't take a protocol or code expert to develop tests that are often very simple to describe. While it is important to provide guide rails for the majority of conventional use cases, it is also important to provide accessible methods for taking them off.</p><p>Let's consider a simple example. In HTTP/3 there is something called the control stream. It's used to exchange frames such as SETTINGS, which affect the HTTP/3 connection. RFC 9114 <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-6.2.1"><u>Section 6.2.1</u></a> states:</p><blockquote><p><i>Each side MUST initiate a single control stream at the beginning of the connection and send its SETTINGS frame as the first frame on this stream. If the first frame of the control stream is any other frame type, this MUST be treated as a connection error of type H3_MISSING_SETTINGS. Only one control stream per peer is permitted; receipt of a second stream claiming to be a control stream MUST be treated as a connection error of type H3_STREAM_CREATION_ERROR. The sender MUST NOT close the control stream, and the receiver MUST NOT request that the sender close the control stream. If either control stream is closed at any point, this MUST be treated as a connection error of type H3_CLOSED_CRITICAL_STREAM. 
Connection errors are described in Section 8.</i></p></blockquote><p>There are many tests we can conjure up just from that paragraph:</p><ol><li><p>Send a non-SETTINGS frame as the first frame on the control stream.</p></li><li><p>Open two control streams.</p></li><li><p>Open a control stream and then close it with a FIN bit.</p></li><li><p>Open a control stream and then reset it with a RESET_STREAM QUIC frame.</p></li><li><p>Wait for the peer to open a control stream and then ask for it to be reset with a STOP_SENDING QUIC frame.</p></li></ol><p>All of the above actions should cause a remote peer that has implemented the RFC properly to close the connection. Therefore, it is not in the interest of the local client or server applications to ever do these actions.</p><p>Many QUIC and HTTP/3 implementations are developed as libraries that are integrated into client or server applications. There may be an extensive set of unit or integration tests of the library checking RFC rules. However, it is also important to run the same tests on the integrated assembly of library and application, since it's all too common that an unhandled/mishandled library error can cascade to cause issues in upper layers. For instance, the HTTP/2 Rapid Reset attacks affected Cloudflare due to their <a href="https://blog.cloudflare.com/technical-breakdown-http2-rapid-reset-ddos-attack/#impact-on-customers"><u>impact on how one service spoke to another</u></a>.</p><p>We've developed h3i, a command line tool and library, to make testing more accessible and maintainable for all. We started with a client that can exercise servers, since that's what our focus has been. Future developments could support the opposite, a server that behaves in unusual ways in order to exercise clients.</p><p><b>Note: </b>h3i is <i>not</i> intended to be a production client! Its flexibility may cause issues that are not observed in other production-oriented clients. 
It is also not intended to be used for any type of performance testing and measurement.</p>
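<p>To make the first of those tests concrete, here is a rough sketch of what the rule-breaking bytes look like on the wire (our own illustration of the encoding, not h3i internals). HTTP/3 encodes stream types, frame types, and frame lengths as QUIC variable-length integers (RFC 9000, Section 16):</p>

```rust
/// Encode a QUIC variable-length integer (RFC 9000, Section 16).
/// The two high bits of the first byte select a 1-, 2-, 4-, or
/// 8-byte encoding.
fn varint(v: u64) -> Vec<u8> {
    match v {
        0..=63 => vec![v as u8],
        64..=16_383 => (0x4000u16 | v as u16).to_be_bytes().to_vec(),
        16_384..=1_073_741_823 => (0x8000_0000u32 | v as u32).to_be_bytes().to_vec(),
        _ => (0xc000_0000_0000_0000u64 | v).to_be_bytes().to_vec(),
    }
}

fn main() {
    // Test 1: open the control stream (unidirectional stream type 0x00)
    // but make the first frame DATA (type 0x00, length 0) rather than
    // SETTINGS (type 0x04). Per RFC 9114, a compliant server must close
    // the connection with H3_MISSING_SETTINGS (0x010a).
    let mut payload = Vec::new();
    payload.extend(varint(0x00)); // stream type: control
    payload.extend(varint(0x00)); // frame type: DATA, not SETTINGS
    payload.extend(varint(0)); // frame length: 0
    assert_eq!(payload, [0x00, 0x00, 0x00]);
    println!("{:02x?}", payload);
}
```

<p>Three bytes on a fresh unidirectional stream are enough to trigger the connection error. h3i's value is in making sequences like this easy to express as actions rather than hand-rolled byte strings.</p>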
    <div>
      <h2>The h3i command line tool</h2>
      <a href="#the-h3i-command-line-tool">
        
      </a>
    </div>
    <p>The primary purpose of the h3i command line tool is quick low-level debugging and exploratory testing. Rather than worrying about writing code or a test script, users can quickly run an ad-hoc client test against a target, guided by interactive prompts.</p><p>In the simplest case, you can think of h3i a bit like <a href="https://curl.se/"><u>curl</u></a> but with access to some extra HTTP/3 parameters. In the example below, we issue a request to <a href="https://cloudflare-quic.com"><u>https://cloudflare-quic.com</u></a>/ and receive a response.</p><div>
  
</div><p>Walking through a simple GET with h3i step by step:</p><ol><li><p>Grab a copy of the h3i binary either by running <code>cargo install h3i</code> or cloning the quiche source repo at <a href="https://github.com/cloudflare/quiche/"><u>https://github.com/cloudflare/quiche/</u></a>. Both methods assume you have some familiarity with Rust and Cargo. See the cargo <a href="https://doc.rust-lang.org/book/ch14-04-installing-binaries.html"><u>documentation</u></a> for more information.</p><ol><li><p><code>cargo install</code> will place the binary on your path, so you can then just run it by executing <code>h3i</code>.</p></li><li><p>If running from source, navigate to the quiche/h3i directory and then use <code>cargo run</code>.</p></li></ol></li><li><p>Run the binary and provide the name and port of the target server. If the port is omitted, the default value 443 is assumed. E.g., <code>cargo run cloudflare-quic.com</code></p></li><li><p>h3i then enters the action prompting phase. A series of one or more HTTP/3 actions can be queued up, such as sending frames, opening or terminating streams, or waiting on data from the server. The full set of options is documented in the <a href="https://github.com/cloudflare/quiche/blob/master/h3i/README.md#command-line-tool"><u>readme</u></a>.</p><ol><li><p>The prompting interface adapts to keyboard inputs and supports tab completion.</p></li><li><p>In the example above, the <code>headers</code> action is selected, which walks through populating the fields in a HEADERS frame. It includes <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-4.3.1"><u>mandatory fields</u></a> from RFC 9114 for convenience. If a test requires omitting these, the <code>headers_no_pseudo</code> action can be used instead.</p></li></ol></li><li><p>The <code>commit</code> prompt choice finalizes the action list and moves to the connection phase. h3i initiates a QUIC connection to the server identified in step 2. 
Once connected, actions are executed in order.</p></li><li><p>By default, h3i reports some limited information about the frames the server sent. To get more detailed information, the <code>RUST_LOG</code> environment variable can be set to either the <code>debug</code> or <code>trace</code> level.</p></li></ol>
    <div>
      <h2>Instant record and replay, powered by qlog</h2>
      <a href="#instant-record-and-replay-powered-by-qlog">
        
      </a>
    </div>
    <p>It can be fun to play around with the h3i command line tool to see how different servers respond to different combinations or sequences of actions. Occasionally, you'll find a certain set that you want to run over and over again, or share with a friend or colleague. Having to manually enter the prompts repeatedly, or share screenshots of the h3i input, quickly becomes tedious. Fortunately, h3i records all the actions in a log file by default — the file path is printed immediately after h3i starts. The format of this file is based on <a href="https://datatracker.ietf.org/doc/html/draft-ietf-quic-qlog-main-schema"><u>qlog</u></a>, a standard in development at the IETF for network protocol logging. It’s a perfect fit for our low-level needs.</p><p>Here's an example h3i qlog file:</p>
            <pre><code>{"qlog_version":"0.3","qlog_format":"JSON-SEQ","title":"h3i","description":"h3i","trace":{"vantage_point":{"type":"client"},"title":"h3i","description":"h3i","configuration":{"time_offset":0.0}}}
{
  "time": 0.172783,
  "name": "http:frame_created",
  "data": {
    "stream_id": 0,
    "frame": {
      "frame_type": "headers",
      "headers": [
        {
          "name": ":method",
          "value": "GET"
        },
        {
          "name": ":authority",
          "value": "cloudflare-quic.com"
        },
        {
          "name": ":path",
          "value": "/"
        },
        {
          "name": ":scheme",
          "value": "https"
        },
        {
          "name": "user-agent",
          "value": "h3i"
        }
      ]
    }
  },
  "fin_stream": true
}</code></pre>
            <p>h3i logs can be replayed using the <code>--qlog-input</code> option. You can change the target server host and port, and keep all the same actions. However, most servers will validate the <code>:authority</code> pseudo-header or Host header contained in a HEADERS frame. The <code>--replay-host-override</code> option allows changing these fields without needing to modify the file by hand.</p><p>And yes, qlog files are human-readable text in the JSON-SEQ format. So you can also just write these by hand in the first place if you like! However, if you're going to start writing things, maybe Rust is your preferred option…</p>
    <div>
      <h2>Using the h3i library to send a malformed request with Rust</h2>
      <a href="#using-the-h3i-library-to-send-a-malformed-request-with-rust">
        
      </a>
    </div>
    <p>In our previous example, we just sent a valid request so there wasn't anything interesting to observe. Where h3i really shines is in generating traffic that isn't RFC compliant, such as malformed HTTP messages, invalid frame sequences, or other actions on streams. This helps determine if a server is acting robustly and defensively.</p><p>Let's explore this more with an example of HTTP content-length mismatch. RFC 9114 <a href="https://datatracker.ietf.org/doc/html/rfc9114#section-4.1.2"><u>section 4.1.2</u></a> specifies:</p><blockquote><p><i>A request or response that is defined as having content when it contains a Content-Length header field (Section 8.6 of [HTTP]) is malformed if the value of the Content-Length header field does not equal the sum of the DATA frame lengths received. A response that is defined as never having content, even when a Content-Length is present, can have a non-zero Content-Length header field even though no content is included in DATA frames.</i></p><p><i>Intermediaries that process HTTP requests or responses (i.e., any intermediary not acting as a tunnel) MUST NOT forward a malformed request or response. Malformed requests or responses that are detected MUST be treated as a stream error of type H3_MESSAGE_ERROR.</i></p><p><i>For malformed requests, a server MAY send an HTTP response indicating the error prior to closing or resetting the stream.</i></p></blockquote><p>There are good reasons that the RFC is so strict about handling mismatched content lengths. They can be a vector for <a href="https://portswigger.net/research/http2"><u>desynchronization attacks</u></a> (similar to request smuggling), especially when a proxy is converting inbound HTTP/3 to outbound HTTP/1.1.</p><p>We've provided an <a href="https://github.com/cloudflare/quiche/blob/master/h3i/examples/content_length_mismatch.rs"><u>example</u></a> of how to use the h3i Rust library to write a tailor-made test client that sends a mismatched content length request. 
It sends a Content-Length header of 5, but its body payload is “test”, which is only 4 bytes. It then waits for the server to respond, after which it explicitly closes the connection by sending a QUIC CONNECTION_CLOSE frame.</p><p>When running low-level tests, it can be interesting to also take a packet capture (<a href="https://en.wikipedia.org/wiki/Pcap"><u>pcap</u></a>) and observe what is happening on the wire. Since QUIC is an encrypted transport, we'll need to use the <code>SSLKEYLOGFILE</code> environment variable to capture the session keys so that tools like Wireshark can <a href="https://wiki.wireshark.org/TLS#using-the-pre-master-secret"><u>decrypt and dissect</u></a>.</p><p>To follow along at home, clone a copy of the quiche repository, start a packet capture on the appropriate network interface and then run:</p>
            <pre><code>cd quiche/h3i
SSLKEYLOGFILE="h3i-example.keys" cargo run --example content_length_mismatch</code></pre>
            <p>In our decrypted capture, we see the expected sequence of handshake, request, response, and then closure.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Qkdd3h0x826tH95S61u92/5de829e018b9d3ef409a2452362fa81e/image1.png" />
          </figure>
    <div>
      <h2>Surveying the example code</h2>
      <a href="#surveying-the-example-code">
        
      </a>
    </div>
    <p>The <a href="https://github.com/cloudflare/quiche/blob/master/h3i/examples/content_length_mismatch.rs"><u>example</u></a> is a simple binary app with a <code>main()</code> entry point. Let's survey the key elements.</p><p>First, we set up an h3i configuration to a target server:</p>
            <pre><code>let config = Config::new()
        .with_host_port("cloudflare-quic.com".to_string())
        .with_idle_timeout(2000)
        .build()
        .unwrap();</code></pre>
            <p>The idle timeout is a QUIC mechanism that tells each endpoint to close the connection after a period of inactivity, preventing endpoints from waiting indefinitely on a peer that never closes the connection cleanly. h3i’s default is 30 seconds, which can be too long for tests, so we set ours to 2 seconds here.</p><p>Next, we define a set of request headers and encode them with QPACK compression, ready to put in a HEADERS frame. Note that h3i does provide a <a href="https://docs.rs/h3i/latest/h3i/actions/h3/fn.send_headers_frame.html"><u>send_headers_frame</u></a> helper method which does this for you, but the example does it manually for clarity:</p>
            <pre><code>let headers = vec![
        Header::new(b":method", b"POST"),
        Header::new(b":scheme", b"https"),
        Header::new(b":authority", b"cloudflare-quic.com"),
        Header::new(b":path", b"/"),
        // We say that we're going to send a body with 5 bytes...
        Header::new(b"content-length", b"5"),
    ];

    let header_block = encode_header_block(&amp;headers).unwrap();</code></pre>
            <p>Then, we define the set of h3i actions that we want to execute in order: send HEADERS, send a too-short DATA frame, wait for the server's HEADERS, then close the connection.</p>
            <pre><code>let actions = vec![
        Action::SendHeadersFrame {
            stream_id: STREAM_ID,
            fin_stream: false,
            headers,
            frame: Frame::Headers { header_block },
        },
        Action::SendFrame {
            stream_id: STREAM_ID,
            fin_stream: true,
            frame: Frame::Data {
                // ...but, in actuality, we only send 4 bytes. This should yield a
                // 400 Bad Request response from an RFC-compliant
                // server: https://datatracker.ietf.org/doc/html/rfc9114#section-4.1.2-3
                payload: b"test".to_vec(),
            },
        },
        Action::Wait {
            wait_type: WaitType::StreamEvent(StreamEvent {
                stream_id: STREAM_ID,
                event_type: StreamEventType::Headers,
            }),
        },
        Action::ConnectionClose {
            error: quiche::ConnectionError {
                is_app: true,
                error_code: quiche::h3::WireErrorCode::NoError as u64,
                reason: vec![],
            },
        },
    ];</code></pre>
            <p>Finally, we'll set things in motion with <code>connect()</code>, which sets up the QUIC connection, executes the actions list and collects the summary.</p>
            <pre><code>let summary =
        sync_client::connect(config, &amp;actions).expect("connection failed");

    println!(
        "=== received connection summary! ===\n\n{}",
        serde_json::to_string_pretty(&amp;summary).unwrap_or_else(|e| e.to_string())
    );</code></pre>
            <p><a href="https://docs.rs/h3i/latest/h3i/client/connection_summary/struct.ConnectionSummary.html"><u>ConnectionSummary</u></a> provides data about the connection, including the frames h3i received, details about why the connection closed, and connection statistics. The example prints the summary out, but you can also inspect it programmatically; that's how we write our own internal automation tests.</p><p>If you're running the example, it should print something like the following:</p>
            <pre><code>=== received connection summary! ===

{
  "stream_map": {
    "0": [
      {
        "UNKNOWN": {
          "raw_type": 2471591231244749708,
          "payload": ""
        }
      },
      {
        "UNKNOWN": {
          "raw_type": 2031803309763646295,
          "payload": "4752454153452069732074686520776f7264"
        }
      },
      {
        "enriched_headers": {
          "header_block_len": 75,
          "headers": [
            {
              "name": ":status",
              "value": "400"
            },
            {
              "name": "server",
              "value": "cloudflare"
            },
            {
              "name": "date",
              "value": "Sat, 07 Dec 2024 00:34:12 GMT"
            },
            {
              "name": "content-type",
              "value": "text/html"
            },
            {
              "name": "content-length",
              "value": "155"
            },
            {
              "name": "cf-ray",
              "value": "8ee06dbe2923fa17-ORD"
            }
          ]
        }
      },
      {
        "DATA": {
          "payload_len": 104
        }
      },
      {
        "DATA": {
          "payload_len": 51
        }
      }
    ]
  },
  "stats": {
    "recv": 10,
    "sent": 5,
    "lost": 0,
    "retrans": 0,
    "sent_bytes": 1712,
    "recv_bytes": 4178,
    "lost_bytes": 0,
    "stream_retrans_bytes": 0,
    "paths_count": 1,
    "reset_stream_count_local": 0,
    "stopped_stream_count_local": 0,
    "reset_stream_count_remote": 0,
    "stopped_stream_count_remote": 0,
    "path_challenge_rx_count": 0
  },
  "path_stats": [
    {
      "local_addr": "0.0.0.0:64418",
      "peer_addr": "104.18.29.7:443",
      "active": true,
      "recv": 10,
      "sent": 5,
      "lost": 0,
      "retrans": 0,
      "rtt": 0.008140072,
      "min_rtt": 0.004645536,
      "rttvar": 0.004238173,
      "cwnd": 13500,
      "sent_bytes": 1712,
      "recv_bytes": 4178,
      "lost_bytes": 0,
      "stream_retrans_bytes": 0,
      "pmtu": 1350,
      "delivery_rate": 247720
    }
  ],
  "error": {
    "local_error": {
      "is_app": true,
      "error_code": 256,
      "reason": ""
    },
    "timed_out": false
  }
}
</code></pre>
            <p>Let’s walk through the output. Up first is the <a href="https://docs.rs/h3i/latest/h3i/client/connection_summary/struct.StreamMap.html"><u>StreamMap</u></a>, which is a record of all frames received on each stream. We can see that we received five frames on stream 0: two UNKNOWN frames, one <a href="https://docs.rs/h3i/latest/h3i/frame/struct.EnrichedHeaders.html"><u>EnrichedHeaders</u></a> frame, and two DATA frames.</p><p>The UNKNOWN frames are extension frames that are unknown to h3i; the server under test is sending what are known as <a href="https://datatracker.ietf.org/doc/draft-edm-protocol-greasing/"><u>GREASE</u></a> frames to help exercise the protocol and ensure that clients do not error when they receive something unexpected, per <a href="https://datatracker.ietf.org/doc/html/rfc9114#extensions"><u>RFC 9114 requirements</u></a>.</p><p>The EnrichedHeaders frame is essentially an HTTP/3 HEADERS frame, but with some small helpers, like one to get the response status code. The server under test sent a 400 as expected.</p><p>The DATA frames carry response body bytes. In this case, the body is the HTML required to render the Cloudflare Bad Request page (you can peek at the HTML yourself in Wireshark). We chose to omit the raw bytes from the ConnectionSummary since they may not be representable safely as text. A future improvement could be to encode the bytes in base64 or hex, in order to support tests that need to check response content.</p>
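<p>The shape of such a programmatic check can be sketched with plain Rust. Here a vector of name/value pairs stands in for the EnrichedHeaders frame, and the <code>status_code</code> helper is our own simplified illustration rather than h3i's API:</p>

```rust
/// Find the `:status` pseudo-header in a response header list and
/// parse it as a numeric status code. This is a simplified stand-in
/// for the status helper an EnrichedHeaders-style type would expose.
fn status_code(headers: &[(String, String)]) -> Option<u16> {
    headers
        .iter()
        .find(|(name, _)| name.as_str() == ":status")
        .and_then(|(_, value)| value.parse().ok())
}

fn main() {
    // The response headers from the summary above, as recorded on stream 0.
    let headers = vec![
        (":status".to_string(), "400".to_string()),
        ("server".to_string(), "cloudflare".to_string()),
    ];
    // The mismatched Content-Length must produce a 400 Bad Request.
    assert_eq!(status_code(&headers), Some(400));
}
```

<p>An automated test would build the header list from the real ConnectionSummary and fail if the status were anything other than 400.</p>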
    <div>
      <h2>h3i for test automation</h2>
      <a href="#h3i-for-test-automation">
        
      </a>
    </div>
    <p>We believe h3i is a great library for building automated tests on. You can take the above example and modify it to fit within various types of (continuous) integration tests.</p><p>We outlined earlier how the Protocols team's HTTP/3 testing has organically grown to use three different frameworks. Even with those, we still lacked flexibility and ease of use. Over the last year we've been building h3i itself and reimplementing our suite of ingress proxy test cases using the Rust library. This has helped us improve test coverage with a range of new tests that were not previously possible. It also, somewhat surprisingly, identified problems with the old tests, particularly in edge cases where it wasn't clear how the old test code was behaving under the hood.</p>
    <div>
      <h2>Bake offs, interop, and wider testing of HTTP</h2>
      <a href="#bake-offs-interop-and-wider-testing-of-http">
        
      </a>
    </div>
    <p><a href="https://datatracker.ietf.org/doc/html/rfc1025"><u>RFC 1025</u></a> was published in 1987. Authored by <a href="https://icannwiki.org/Jon_Postel"><u>Jon Postel</u></a>, it discusses bake offs:</p><blockquote><p><i>In the early days of the development of TCP and IP, when there were very few implementations and the specifications were still evolving, the only way to determine if an implementation was "correct" was to test it against other implementations and argue that the results showed your own implementation to have done the right thing.  These tests and discussions could, in those early days, as likely change the specification as change the implementation.</i></p><p><i>There were a few times when this testing was focused, bringing together all known implementations and running through a set of tests in hopes of demonstrating the N squared connectivity and correct implementation of the various tricky cases.  These events were called "Bake Offs".</i></p></blockquote><p>While nearly 4 decades old, the concept of exercising Internet protocol implementations and seeing how they compare to the specification still holds true. The QUIC WG made heavy use of interoperability testing through its standardization process. We started off sitting in a room and running tests manually by hand (or with some help from scripts). Then <a href="https://seemann.io/"><u>Marten Seemann</u></a> developed the <a href="https://interop.seemann.io/"><u>QUIC Interop Runner</u></a>, which runs regular automated testing and collects and renders all the results. This has proven to be incredibly useful.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OGnVUbatoX8Ya2IO5RdCl/754316e004a8e658ac089e10e70b72ca/image6.png" />
          </figure><p>The state of HTTP/3 interoperability testing is not quite as mature. Although there are tools such as <a href="https://kazu-yamamoto.hatenablog.jp/"><u>Kazu Yamamoto's</u></a> excellent <a href="https://github.com/kazu-yamamoto/h3spec"><u>h3spec</u></a> (in Haskell) for testing conformance, there isn't a similar continuous integration process of collection and rendering of results. While h3i shares similarities with h3spec, we felt it important to focus on the framework capabilities rather than creating a corpus of tests and assertions. Cloudflare is a big fan of Rust and as several teams move to Rust-based proxies, having a consistent ecosystem provides advantages (such as developer velocity).</p><p>We certainly feel there is a great opportunity for continued collaboration and cross-pollination between projects in the QUIC and HTTP space. For example, h3i might provide a suitable basis to build another tool (or set of scripts) to run bake offs or interop tests. Perhaps it even makes sense to have a common collection of test cases owned by the community, that can be specialized to the most appropriate or preferred tooling. This topic was recently presented at the <a href="https://github.com/HTTPWorkshop/workshop2024/blob/main/talks/5.%20Testing/testing.pdf"><u>HTTP Workshop 2024</u></a> by Mohammed Al-Sahaf, and it excites us to see <a href="https://www.caffeinatedwonders.com/2024/12/18/towards-validated-http-implementation/"><u>new potential directions</u></a> of testing improvements.</p><p>When using any tools or methods for protocol testing, we encourage responsible handling of security-related matters. If you believe you may have identified a vulnerability in an IETF Internet protocol itself, please follow the IETF's <a href="https://www.ietf.org/standards/rfcs/vulnerabilities/"><u>reporting guidance</u></a>. 
If you believe you may have discovered an implementation vulnerability in a product, open source project, or service using QUIC or HTTP, then you should report these directly to the responsible party. Implementers or operators often provide their own publicly-available guidance and contact details to send reports. For example, the Cloudflare quiche <a href="https://github.com/cloudflare/quiche/security/policy"><u>security policy</u></a> is available in the Security tab of the GitHub repository.</p>
    <div>
      <h2>Summary and outlook</h2>
      <a href="#summary-and-outlook">
        
      </a>
    </div>
    <p>Cloudflare takes testing very seriously. While h3i has a limited feature set as a test HTTP/3 client, we believe it provides a strong framework that can be extended to a wider range of cases and protocols. For example, we'd like to add support for low-level HTTP/2.</p><p>We've designed h3i to integrate into a wide range of testing methodologies, from manual ad-hoc testing, to native Rust tests, to conformance testbenches built with scripting languages. We've had great success migrating our existing zoo of test tools to a single one that is more accessible and easier to maintain.</p><p>Now that you've read about h3i's capabilities, it's left as an exercise for the reader to go back to the example of HTTP/3 control streams and consider how you could write tests to exercise a server.</p><p>We encourage the community to experiment with h3i, provide feedback, and propose ideas or contributions to the <a href="https://github.com/cloudflare/quiche"><u>GitHub repository</u></a> as issues or pull requests.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rp4YDTbXm37OxK7dtjiKF/816c0eed08926b7d34842f4769808277/image4.png" />
          </figure><p></p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Testing]]></category>
            <category><![CDATA[Protocols]]></category>
            <category><![CDATA[Rust]]></category>
            <guid isPermaLink="false">2yX9ADcaKBprzyI9BaBoqN</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Evan Rittenhouse</dc:creator>
        </item>
        <item>
            <title><![CDATA[Zero Trust WARP: tunneling with a MASQUE]]></title>
            <link>https://blog.cloudflare.com/zero-trust-warp-with-a-masque/</link>
            <pubDate>Wed, 06 Mar 2024 14:00:15 GMT</pubDate>
            <description><![CDATA[ This blog discusses the introduction of MASQUE to Zero Trust WARP and how Cloudflare One customers will benefit from this modern protocol ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gjB6Xaz5umz7Thed17Fb8/831d6d87a94f651c4f4803a6444d0f5c/image5-11.png" />
            
            </figure>
    <div>
      <h2>Slipping on the MASQUE</h2>
      <a href="#slipping-on-the-masque">
        
      </a>
    </div>
    <p>In June 2023, we <a href="/masque-building-a-new-protocol-into-cloudflare-warp/">told you</a> that we were building a new protocol, <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE</a>, into WARP. MASQUE is a fascinating protocol that extends the capabilities of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> and leverages the unique properties of the QUIC transport protocol to efficiently proxy IP and UDP traffic without sacrificing performance or privacy.</p><p>At the same time, we’ve seen a rising demand from <a href="https://www.cloudflare.com/learning/security/glossary/what-is-zero-trust/">Zero Trust</a> customers for features and solutions that only MASQUE can deliver. All customers want WARP traffic to look like HTTPS to avoid detection and blocking by firewalls, while a significant number of customers also require FIPS-compliant encryption. We have something good here, and it’s been proven elsewhere (more on that below), so we are building MASQUE into Zero Trust WARP and will be making it available to all of our Zero Trust customers — at WARP speed!</p><p>This blog post highlights some of the key benefits our Cloudflare One customers will realize with MASQUE.</p>
    <div>
      <h2>Before the MASQUE</h2>
      <a href="#before-the-masque">
        
      </a>
    </div>
    <p>Cloudflare is on a mission to help build a better Internet. And it is a journey we’ve been on with our device client and WARP for almost five years. The precursor to WARP was the 2018 launch of <a href="/announcing-1111/">1.1.1.1</a>, the Internet’s fastest, privacy-first consumer DNS service. WARP was introduced in 2019 with the <a href="/1111-warp-better-vpn/">announcement</a> of the 1.1.1.1 service with WARP, a high performance and secure consumer DNS and VPN solution. Then in 2020, we <a href="/introducing-cloudflare-for-teams">introduced</a> Cloudflare’s Zero Trust platform and the Zero Trust version of WARP to help any IT organization secure their environment, featuring a suite of tools we first built to protect our own IT systems. Zero Trust WARP with MASQUE is the next step in our journey.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zi7uOkKEYkgp6dpBwQRo4/cb0147f0558ed92bb83a0f61a4ebbacc/image4-14.png" />
            
            </figure>
    <div>
      <h2>The current state of WireGuard</h2>
      <a href="#the-current-state-of-wireguard">
        
      </a>
    </div>
    <p><a href="https://www.wireguard.com/">WireGuard</a> was the perfect choice for the 1.1.1.1 with WARP service in 2019. WireGuard is fast, simple, and secure. It was exactly what we needed at the time to guarantee our users’ privacy, and it has met all of our expectations. If we went back in time to do it all over again, we would make the same choice.</p><p>But the other side of the simplicity coin is a certain rigidity. We find ourselves wanting to extend WireGuard to deliver more capabilities to our Zero Trust customers, but WireGuard is not easily extended. Capabilities such as better session management, advanced congestion control, or simply the ability to use FIPS-compliant cipher suites are not options within WireGuard; these capabilities would have to be added on as proprietary extensions, if it were even possible to do so.</p><p>Plus, while WireGuard is popular in VPN solutions, it is not standards-based, and therefore not treated like a first-class citizen in the world of the Internet, where non-standard traffic can be blocked, sometimes intentionally, sometimes not. WireGuard uses a non-standard port, port 51820, by default. Zero Trust WARP changes this to use port 2408 for the WireGuard tunnel, but it’s still a non-standard port. For our customers who control their own firewalls, this is not an issue; they simply allow that traffic. But many of the large number of public Wi-Fi locations, or the approximately 7,000 ISPs in the world, don’t know anything about WireGuard and block these ports. We’ve also faced situations where the ISP does know what WireGuard is and blocks it intentionally.</p><p>This can wreak havoc for roaming Zero Trust WARP users at their local coffee shop, in hotels, on planes, or other places where there are captive portals or public Wi-Fi access, and even sometimes with their local ISP. 
The user is expecting reliable access with Zero Trust WARP, and is frustrated when their device is blocked from connecting to Cloudflare’s global network.</p><p>Now we have another proven technology — MASQUE — which uses and extends HTTP/3 and QUIC. Let’s do a quick review of these to better understand why Cloudflare believes MASQUE is the future.</p>
    <div>
      <h2>Unpacking the acronyms</h2>
      <a href="#unpacking-the-acronyms">
        
      </a>
    </div>
    <p>HTTP/3 and QUIC are among the most recent advancements in the evolution of the Internet, enabling faster, more reliable, and more secure connections to endpoints like websites and APIs. Cloudflare worked closely with industry peers through the <a href="https://www.ietf.org/">Internet Engineering Task Force</a> on the development of <a href="https://datatracker.ietf.org/doc/html/rfc9000">RFC 9000</a> for QUIC and <a href="https://datatracker.ietf.org/doc/html/rfc9114">RFC 9114</a> for HTTP/3. The technical background on the basic benefits of HTTP/3 and QUIC is reviewed in our 2019 blog post where we announced <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3 availability</a> on Cloudflare’s global network.</p><p>Most relevant for Zero Trust WARP, QUIC delivers better performance on low-latency or high packet loss networks thanks to packet coalescing and multiplexing. QUIC packets in separate contexts during the handshake can be coalesced into the same UDP datagram, thus reducing the number of receive and system interrupts. With multiplexing, QUIC can carry multiple HTTP sessions over a single QUIC connection. Zero Trust WARP also benefits from QUIC’s high level of privacy, with TLS 1.3 designed into the protocol.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ARWf5TO9CaOucOU527M2X/b53da149e40b8c28fc812552cfcaca26/image2-11.png" />
            
            </figure><p>MASQUE unlocks QUIC’s potential for proxying by providing the application layer building blocks to support efficient tunneling of TCP and UDP traffic. In Zero Trust WARP, MASQUE will be used to establish a tunnel over HTTP/3, delivering the same capability as WireGuard tunneling does today. In the future, we’ll be in position to add more value using MASQUE, leveraging Cloudflare’s ongoing participation in the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>. This blog post is a good read for those interested in <a href="/unlocking-quic-proxying-potential/">digging deeper into MASQUE</a>.</p><p>OK, so Cloudflare is going to use MASQUE for WARP. What does that mean to you, the Zero Trust customer?</p>
    <div>
      <h2>Proven reliability at scale</h2>
      <a href="#proven-reliability-at-scale">
        
      </a>
    </div>
    <p>Cloudflare’s network today spans more than 310 cities in over 120 countries, and interconnects with over 13,000 networks globally. HTTP/3 and QUIC were introduced to the Cloudflare network in 2019; the HTTP/3 standard was <a href="/cloudflare-view-http3-usage/">finalized in 2022</a>, and HTTP/3 represented about <a href="https://radar.cloudflare.com/adoption-and-usage?dateStart=2023-01-01&amp;dateEnd=2023-12-31#http-1x-vs-http-2-vs-http-3">30% of all HTTP traffic on our network in 2023</a>.</p><p>We are also using MASQUE for <a href="/icloud-private-relay/">iCloud Private Relay</a> and other Privacy Proxy partners. The services that power these partnerships, from our Rust-based <a href="/introducing-oxy/">proxy framework</a> to our open source <a href="https://github.com/cloudflare/quiche">QUIC implementation</a>, are already deployed globally in our network and have proven to be fast, resilient, and reliable.</p><p>Cloudflare is already operating MASQUE, HTTP/3, and QUIC reliably at scale. So we want you, our Zero Trust WARP users and Cloudflare One customers, to benefit from that same reliability and scale.</p>
    <div>
      <h2>Connect from anywhere</h2>
      <a href="#connect-from-anywhere">
        
      </a>
    </div>
    <p>Employees need to be able to connect from anywhere that has an Internet connection. But that can be a challenge as many security engineers will configure firewalls and other networking devices to block all ports by default, and only open the most well-known and common ports. As we pointed out earlier, this can be frustrating for the roaming Zero Trust WARP user.</p><p>We want to fix that for our users, and remove that frustration. HTTP/3 and QUIC deliver the perfect solution. QUIC is carried on top of UDP (<a href="https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml">protocol number 17</a>), while HTTP/3 uses <a href="https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml">port 443</a> for encrypted traffic. Both of these are well known, widely used, and are very unlikely to be blocked.</p><p>We want our Zero Trust WARP users to reliably connect wherever they might be.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/53RZc92rNIUWscFuLuA13w/098b18464be4ee893d51786ff74a5bc4/image1-13.png" />
            
            </figure>
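<p>Part of why this blends in so well: a MASQUE tunnel is set up with an HTTP/3 extended CONNECT request using the <code>connect-udp</code> protocol from RFC 9298, so on the wire it is ordinary HTTP/3 over UDP port 443. The sketch below builds the pseudo-headers for such a request; the proxy hostname, target host, and port are hypothetical, and a real proxy may advertise a different path template than the well-known default shown here.</p>

```python
# Illustrative sketch (not Cloudflare's implementation): pseudo-headers
# for a MASQUE connect-udp extended CONNECT request, per RFC 9298.
# "masque-proxy.example", "example.org", and 4443 are hypothetical values.
def connect_udp_headers(proxy_authority: str, target_host: str, target_port: int):
    # /.well-known/masque/udp/{host}/{port}/ is the default URI template;
    # a proxy may advertise its own template instead.
    path = f"/.well-known/masque/udp/{target_host}/{target_port}/"
    return [
        (":method", "CONNECT"),
        (":protocol", "connect-udp"),  # extended CONNECT (RFC 8441 / RFC 9220)
        (":scheme", "https"),
        (":authority", proxy_authority),
        (":path", path),
    ]

headers = connect_udp_headers("masque-proxy.example", "example.org", 4443)
```

<p>To a middlebox, the resulting stream is indistinguishable from any other HTTPS-over-QUIC session to port 443.</p>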
    <div>
      <h2>Compliant cipher suites</h2>
      <a href="#compliant-cipher-suites">
        
      </a>
    </div>
    <p>MASQUE leverages <a href="https://datatracker.ietf.org/doc/html/rfc8446">TLS 1.3</a> with QUIC, which provides a number of cipher suite choices. WireGuard also uses standard cipher suites. But some standards are more, let’s say, standard than others.</p><p>NIST, the <a href="https://www.nist.gov/">National Institute of Standards and Technology</a> and part of the US Department of Commerce, does a tremendous amount of work across the technology landscape. Of interest to us is the NIST research into network security that results in <a href="https://csrc.nist.gov/pubs/fips/140-2/upd2/final">FIPS 140-2</a> and similar publications. NIST studies individual cipher suites and publishes lists of those they recommend for use, recommendations that become requirements for US Government entities. Many other customers, both government and commercial, use these same recommendations as requirements.</p><p>Our first MASQUE implementation for Zero Trust WARP will use <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a> and FIPS compliant cipher suites.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25Qc8qdJd78bngZqpH0Pv7/1541929144b5ed4d85ccca36e0787957/image3-12.png" />
            
            </figure>
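<p>To make the cipher suite angle concrete, here is a small sketch using Python's standard <code>ssl</code> module (not our production stack): it builds a TLS 1.3-only client context and keeps the AES-GCM suites, which are the TLS 1.3 suites on NIST's approved list, while dropping ChaCha20-Poly1305, which is not FIPS-approved:</p>

```python
import ssl

# Sketch: a TLS 1.3-only client context, filtered down to the AES-GCM
# suites (the TLS 1.3 suites NIST approves; ChaCha20-Poly1305 is not
# FIPS-approved).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# TLS 1.3 suite names all start with "TLS_", e.g. TLS_AES_128_GCM_SHA256.
tls13_suites = {c["name"] for c in ctx.get_ciphers()
                if c["name"].startswith("TLS_")}
fips_suites = {name for name in tls13_suites if "CHACHA20" not in name}
print(sorted(fips_suites))
```

<p>A FIPS-constrained deployment would then offer only those remaining suites during the handshake.</p>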
    <div>
      <h2>How can I get Zero Trust WARP with MASQUE?</h2>
      <a href="#how-can-i-get-zero-trust-warp-with-masque">
        
      </a>
    </div>
    <p>Cloudflare engineers are hard at work implementing MASQUE for the mobile apps, the desktop clients, and the Cloudflare network. Progress has been good, and we will open this up for beta testing early in the second quarter of 2024 for Cloudflare One customers. Your account team will be reaching out with participation details.</p>
    <div>
      <h2>Continuing the journey with Zero Trust WARP</h2>
      <a href="#continuing-the-journey-with-zero-trust-warp">
        
      </a>
    </div>
    <p>Cloudflare launched WARP five years ago, and we’ve come a long way since. This introduction of MASQUE to Zero Trust WARP is a big step, one that will immediately deliver the benefits noted above. But there will be more — we believe MASQUE opens up new opportunities to leverage the capabilities of QUIC and HTTP/3 to build innovative <a href="https://www.cloudflare.com/zero-trust/solutions/">Zero Trust solutions</a>. And we’re also continuing to work on other new capabilities for our Zero Trust customers.</p><p>Cloudflare is committed to continuing our mission to help build a better Internet, one that is more private and secure, scalable, reliable, and fast. And if you would like to join us in this exciting journey, check out our <a href="https://www.cloudflare.com/careers/jobs/">open positions</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Access]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <guid isPermaLink="false">5sDoFBGGZJbT4D9pftVhXY</guid>
            <dc:creator>Dan Hall</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing HTTP/3 Prioritization]]></title>
            <link>https://blog.cloudflare.com/better-http-3-prioritization-for-a-faster-web/</link>
            <pubDate>Tue, 20 Jun 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ Today, Cloudflare is very excited to announce full support for HTTP/3 Extensible Priorities, a new standard that speeds the loading of webpages by up to 37% ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5WMquEFDro1TjSvsKvdb2X/1ebf5bacb443b2c7c611b2600cfe1352/image4-9.png" />
            
            </figure><p>Today, Cloudflare is very excited to announce full support for HTTP/3 Extensible Priorities, a new standard that speeds the loading of webpages by up to 37%. Cloudflare worked closely with standards builders to help form the specification for HTTP/3 priorities and is excited to help push the web forward. HTTP/3 Extensible Priorities is available on all plans on Cloudflare. For paid users, there is an enhanced version available that improves performance even more.</p><p>Web pages are made up of many objects that must be downloaded before they can be processed and presented to the user. Not all objects have equal importance for web performance. The role of HTTP prioritization is to load the right bytes at the most opportune time, to achieve the best results. Prioritization is most important when there are multiple objects all competing for the same constrained resource. In <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, this resource is the QUIC connection. In most cases, bandwidth is the bottleneck from server to client. Picking what objects to dedicate bandwidth to, or share bandwidth amongst, is a critical foundation to web performance. When it goes askew, the other optimizations we build on top can suffer.</p><p>Today, we're announcing support for prioritization in HTTP/3, using the full capabilities of the HTTP Extensible Priorities (<a href="https://www.rfc-editor.org/rfc/rfc9218.html">RFC 9218)</a> standard, augmented with Cloudflare's knowledge and experience of enhanced HTTP/2 prioritization. This change is compatible with all mainstream web browsers and can improve key metrics such as <a href="https://web.dev/lcp/">Largest Contentful Paint</a> (LCP) by up to 37% in our test. Furthermore, site owners can apply server-side overrides, using Cloudflare Workers or directly from an origin, to customize behavior for their specific needs.</p>
    <div>
      <h3>Looking at a real example</h3>
      <a href="#looking-at-a-real-example">
        
      </a>
    </div>
    <p>The ultimate question when it comes to features like HTTP/3 Priorities is: how well does this work and should I turn it on? The details are interesting, and we'll explain all of those shortly, but first let's see some demonstrations.</p><p>In order to evaluate prioritization for HTTP/3, we have been running many simulations and tests. Each web page is unique. Loading a web page can require many TCP or QUIC connections, each of them idiosyncratic. These all affect how prioritization works and how effective it is.</p><p>To evaluate the effectiveness of priorities, we ran a set of tests measuring Largest Contentful Paint (LCP). As an example, we benchmarked blog.cloudflare.com to see how much we could improve performance:</p>
<p>As a film strip, this is what it looks like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1nUzro5TRNUdT66D48SAX9/eea3706754ab1adcdcf2de1520b4e8b2/unnamed.png" />
            
            </figure><p>In terms of actual numbers, we see Largest Contentful Paint drop from 2.06 seconds down to 1.29 seconds. Let’s look at why that is. To analyze exactly what’s going on we have to look at a waterfall diagram of how this web page is loading. A waterfall diagram is a way of visualizing how assets are loading. Some may be loaded in parallel whilst some might be loaded sequentially. Without smart prioritization, the waterfall for loading assets for this web page looks as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7GEe1xRnxvauMVfOKy5KH2/ab43c1561d3d512589a6de0b063419a1/BLOG-1879-waterfall-analysis-2.png" />
            
            </figure><p>There are several interesting things going on here so let's break it down. The LCP image at request 21 is for 1937-1.png, weighing 30.4 KB. Although it is the LCP image, the browser requests it as priority u=3,i, which informs the server to put it in the same round-robin bandwidth-sharing bucket with all of the other images. Ahead of the LCP image is index.js, a JavaScript file that is loaded with a "defer" attribute. This JavaScript is non-blocking and shouldn't affect key aspects of page layout.</p><p>What appears to be happening is that the browser gives index.js the priority u=3,i=?0, which places it ahead of the images group on the server-side. Therefore, the 217 KB of index.js is sent in preference to the LCP image. Far from ideal. Not only that, once the script is delivered, it needs to be processed and executed. This saturates the CPU and prevents the LCP image from being painted, for about 300 milliseconds, even though it was delivered already.</p><p>The waterfall with prioritization looks much better:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Y2mU3vMrf4DVgI6Cq7CSv/980f93870fee27984d40dd310ee38c8d/BLOG-1879-waterfall-analysis-1.png" />
            
            </figure><p>We used a server-side override to promote the priority of the LCP image 1937-1.png from u=3,i to u=2,i. This has the effect of making it leapfrog the "defer" JavaScript. We can see at around 1.2 seconds, transmission of index.js is halted while the image is delivered in full. And because it takes another couple of hundred milliseconds to receive the remaining JavaScript, there is no CPU competition for the LCP image paint. These factors combine together to drastically improve LCP times.</p>
    <div>
      <h3>How Extensible Priorities actually works</h3>
      <a href="#how-extensible-priorities-actually-works">
        
      </a>
    </div>
    <p>First of all, you don't need to do anything yourself to make it work. Out of the box, browsers will send Extensible Priorities signals alongside HTTP/3 requests, which we'll feed into our priority scheduling decision-making algorithms. We'll then decide the best way to send HTTP/3 response data to ensure speedy page loads.</p><p>Extensible Priorities has a similar interaction model to HTTP/2 priorities: clients send priorities and servers act on them to schedule response data. We'll explain exactly how that works in a bit.</p><p>HTTP/2 priorities used a dependency tree model. While this was very powerful, it turned out to be hard to implement and use. When the IETF came to try and port it to HTTP/3 during the standardization process, we hit major issues. If you are interested in all that background, go and read my blog post describing why we adopted a <a href="/adopting-a-new-approach-to-http-prioritization/">new approach to HTTP/3 prioritization</a>.</p><p>Extensible Priorities is a far simpler scheme. HTTP/2's dependency tree with 255 weights and dependencies (that can be mutual or exclusive) is complex, hard to use as a web developer, and could not work for HTTP/3. Extensible Priorities has just two parameters: urgency and incremental, and these are capable of achieving exactly the same web performance goals.</p><p>Urgency is an integer value in the range 0-7. It indicates the importance of the requested object, with 0 being most important and 7 being the least. The default is 3. Urgency is comparable to HTTP/2 weights. However, it's simpler to reason about 8 possible urgencies rather than 255 weights. This makes developers' lives easier when trying to pick a value and predicting how it will work in practice.</p><p>Incremental is a boolean value. The default is false. A true value indicates the requested object can be processed as parts of it are received and read - commonly referred to as streaming processing. 
A false value indicates the object must be received in whole before it can be processed.</p><p>Let's consider some example web objects to put these parameters into perspective:</p><ul><li><p>An HTML document is the most important piece of a webpage. It can be processed as parts of it arrive. Therefore, urgency=0 and incremental=true is a good choice.</p></li><li><p>A CSS stylesheet is important for page rendering and could block visual completeness. It needs to be processed in whole. Therefore, urgency=1 and incremental=false is suitable; this would mean it doesn't interfere with the HTML.</p></li><li><p>An image file that is outside the browser viewport is not very important and it can be processed and painted as parts arrive. Therefore, urgency=3 and incremental=true is appropriate to stop it from interfering with sending other objects.</p></li><li><p>An image file that is the "hero image" of the page is the Largest Contentful Paint element. An urgency of 1 or 2 will help it avoid being mixed in with other images. The choice of incremental value is a little subjective and either might be appropriate.</p></li></ul><p>When making an HTTP request, clients decide the Extensible Priority value composed of the urgency and incremental parameters. These are sent either as an HTTP header field in the request (meaning inside the HTTP/3 HEADERS frame on a request stream), or separately in an HTTP/3 PRIORITY_UPDATE frame on the control stream. HTTP headers are sent once at the start of a request; a client might change its mind, so the PRIORITY_UPDATE frame allows it to reprioritize at any point in time.</p><p>For both the header field and PRIORITY_UPDATE, the parameters are exchanged using the Structured Fields Dictionary format (<a href="https://www.rfc-editor.org/info/rfc8941">RFC 8941</a>) and serialization rules. 
In order to save bytes on the wire, the parameters are shortened – urgency to 'u', and incremental to 'i'.</p><p>Here's how the HTTP header looks alongside a GET request for important HTML, using HTTP/3 style notation:</p>
            <pre><code>HEADERS:
    :method = GET
    :scheme = https
    :authority = example.com
    :path = /index.html
     priority = u=0,i</code></pre>
            <p>The PRIORITY_UPDATE frame only carries the serialized Extensible Priority value:</p>
            <pre><code>PRIORITY_UPDATE:
    u=0,i</code></pre>
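            <p>To make the serialization tangible, here is a toy parser (an illustrative sketch, not a full RFC 8941 Structured Fields implementation) that extracts the two parameters from a priority value like the ones above and applies the RFC 9218 defaults:</p>

```python
# Toy parser for Extensible Priorities values such as "u=0,i".
# Not a full RFC 8941 Structured Fields implementation.
def parse_priority(value: str) -> dict:
    # Defaults per RFC 9218: urgency 3, incremental false.
    result = {"urgency": 3, "incremental": False}
    for member in value.split(","):
        member = member.strip()
        if member.startswith("u="):
            result["urgency"] = int(member[2:])
        elif member in ("i", "i=?1"):  # bare "i" means boolean true
            result["incremental"] = True
        elif member == "i=?0":
            result["incremental"] = False
    return result
```

            <p>For example, <code>parse_priority("u=0,i")</code> yields urgency 0 with incremental true, while an empty or absent header falls back to the defaults.</p>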
            <p>Structured Fields has some other neat tricks. If you want to indicate the use of a default value, then that can be done via omission. Recall that the urgency default is 3, and incremental default is false. A client could send "u=1" alongside our important CSS request (urgency=1, incremental=false). For our lower-priority image it could send just "i=?1" (urgency=3, incremental=true). There's even another trick, where boolean true dictionary parameters are sent as just "i". You should expect all of these formats to be used in practice, so it pays to be mindful about their meaning.</p><p>Extensible Priority servers need to decide how best to use the available connection bandwidth to schedule the response data bytes. When servers receive priority client signals, they get one form of input into a decision-making process. RFC 9218 provides a set of <a href="https://www.rfc-editor.org/rfc/rfc9218.html#name-server-scheduling">scheduling recommendations</a> that are pretty good at meeting a broad set of needs. These can be distilled down to some golden rules.</p><p>For starters, the order of requests is crucial. Clients are very careful about asking for things at the moment they want them. Serving things in request order is good. In HTTP/3, because there is no strict ordering of stream arrival, servers can use stream IDs to determine this. Assuming the order of the requests is correct, the next most important thing is urgency ordering. Serving according to urgency values is good.</p><p>Be wary of non-incremental requests, as they mean the client needs the object in full before it can be used at all. An incremental request means the client can process things as and when they arrive.</p><p>With these rules in mind, the scheduling then becomes broadly: for each urgency level, serve non-incremental requests in whole serially, then serve incremental requests in round-robin fashion in parallel. 
What this achieves is dedicated bandwidth for very important things, and shared bandwidth for less important things that can be processed or rendered progressively.</p><p>Let's look at some examples to visualize the different ways the scheduler can work. These are generated by using <a href="https://github.com/cloudflare/quiche">quiche's</a> <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-qlog-main-schema/">qlog</a> support and running it via the <a href="https://qvis.quictools.info/">qvis</a> analysis tool. These diagrams are similar to a waterfall chart; the y-dimension represents stream IDs (0 at the top, increasing as we move down) and the x-dimension shows reception of stream data.</p><p>Example 1: all streams have the same urgency and are non-incremental so get served in serial order of stream ID.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2mzTrM4iI9h7uEJXa2TOuT/40d1ba7c1d13949107d68a2e1fb5398f/u-same.png" />
            
            </figure><p>Example 2: the streams have the same urgency and are incremental so get served in round-robin fashion.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1V44aLuoHvqxR4gpETKJ2x/fdb8ddb148353333b4aaceff11858ff6/u-same-i.png" />
            
            </figure><p>Example 3: the streams have all different urgency, with later streams being more important than earlier streams. The data is received serially but in a reverse order compared to example 1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2koT4ij8bzGMc06OVNIFGZ/26572e5bffa63b588860050e005fe64e/u-reversed.png" />
            
            </figure><p>Beyond the Extensible Priority signals, a server might consider other things when scheduling, such as file size, content encoding, how the application and content origins are configured, and so on. This was true for HTTP/2 priorities, but Extensible Priorities introduces a neat new trick: a priority signal can also be sent as a response header to override the client signal.</p><p>This works especially well in a proxying scenario where your HTTP/3-terminating proxy sits in front of some backend such as Workers. The proxy can pass the request headers through to the backend, which can inspect them and, if it wants something different, return response headers to the proxy. This allows powerful tuning possibilities, and because we operate on a semantic request basis (rather than an HTTP/2 priorities dependency basis), we don't have all the complications and dangers. Proxying isn't the only use case. Often, one form of "API" to your local server is setting response headers, e.g., via configuration. Leveraging that approach means we don't have to invent new APIs.</p><p>Let's consider an example where server overrides are useful. Imagine we have a webpage with multiple images that are referenced via &lt;img&gt; tags near the top of the HTML. The browser will process these quite early in the page load and want to issue requests. At this point, <b>it might not know enough</b> about the page structure to determine if an image is in the viewport or outside the viewport. It can guess, but that might turn out to be wrong if the page is laid out a certain way. Guessing wrong means that something is misprioritized and might be taking bandwidth away from something that is more important. 
While it is possible to reprioritize things mid-flight using the PRIORITY_UPDATE frame, this action is "laggy", and by the time the server realizes things, it might be too late to make much difference.</p><p>Fear not: the web developer who built the page knows exactly how it is supposed to be laid out and rendered. They can overcome client uncertainty by overriding the Extensible Priority when they serve the response. For instance, if a client guesses wrong and requests the LCP image at a low priority in a shared bandwidth bucket, the image will load more slowly and web performance metrics will be adversely affected. Here's how it might look and how we can fix it:</p>
            <pre><code>Request HEADERS:
    :method = GET
    :scheme = https
    :authority = example.com
    :path = /lcp-image.jpg
     priority = u=3,i</code></pre>
            
            <pre><code>Response HEADERS:
:status = 200
content-length: 10000
content-type: image/jpeg
priority = u=2</code></pre>
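            <p>The combination is per parameter: values in the response header take precedence, and request parameters the server does not mention carry over. Here is a toy sketch of that merge (illustrative only, not a full Structured Fields implementation):</p>

```python
# Illustrative sketch: combine a client's Priority request header with a
# server's Priority response header. Per RFC 9218, parameters in the
# response override those from the request; the rest carry over.
def _params(value: str) -> dict:
    out = {}
    for item in value.split(","):
        item = item.strip()
        if item:
            key, _, val = item.partition("=")
            out[key] = val
    return out

def merge_priority(request_value: str, response_value: str) -> str:
    merged = {**_params(request_value), **_params(response_value)}
    return ",".join(k if v == "" else f"{k}={v}" for k, v in merged.items())

# The scenario above: the client guessed u=3,i, the server answers u=2.
final = merge_priority("u=3,i", "u=2")  # "u=2,i"
```

            <p>Applied to the example above, the client's u=3,i combined with the server's u=2 yields u=2,i: the image jumps an urgency level while staying in the incremental group.</p>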
            <p>Priority response headers are one tool to tweak client behavior and they are complementary to other web performance techniques. Methods like efficiently ordering elements in HTML, using attributes like "async" or "defer", augmenting HTML links with Link headers, or using more descriptive link relationships like “<a href="https://html.spec.whatwg.org/multipage/links.html#link-type-preload">preload</a>” all help to improve a browser's understanding of the resources comprising a page. A website that optimizes these things provides a better chance for the browser to make the best choices for prioritizing requests.</p><p>More recently, a new attribute called “<a href="https://web.dev/fetch-priority/">fetchpriority</a>” has emerged that allows developers to tune some of the browser behavior, by boosting or dropping the priority of an element relative to other elements of the same type. The attribute can help the browser do two important things for Extensible priorities: first, the browser might send the request earlier or later, helping to satisfy our golden rule #1 - ordering. Second, the browser might pick a different urgency value, helping to satisfy rule #2. However, "fetchpriority" is a nudge mechanism and it doesn't allow for directly setting a desired priority value. The nudge can be a bit opaque. Sometimes the circumstances benefit greatly from just knowing plainly what the values are and what the server will do, and that's where the response header can help.</p>
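    <p>Pulling the earlier golden rules together, the server-side scheduling can be sketched as a short routine (illustrative pseudo-logic, not our production scheduler): walk the urgency levels from most to least urgent, serve non-incremental responses whole in request order, then round-robin the incremental ones:</p>

```python
from collections import deque

# Illustrative sketch of RFC 9218-style scheduling. Requests are
# (stream_id, urgency, incremental) tuples; each response is "chunks"
# units long. Returns the order in which chunks go on the wire.
def schedule(requests, chunks=2):
    order = []
    for u in sorted({urgency for _, urgency, _ in requests}):
        # Request order is approximated by stream ID within a level.
        level = sorted((r for r in requests if r[1] == u), key=lambda r: r[0])
        # Non-incremental responses are sent in whole, serially.
        for sid, _, incremental in level:
            if not incremental:
                order.extend([sid] * chunks)
        # Incremental responses share bandwidth round-robin.
        queue = deque(sid for sid, _, incremental in level if incremental)
        left = {sid: chunks for sid in queue}
        while queue:
            sid = queue.popleft()
            order.append(sid)
            left[sid] -= 1
            if left[sid]:
                queue.append(sid)
    return order
```

    <p>With equal urgencies and incremental false, the order is serial by stream ID (example 1); flipping incremental to true interleaves the streams (example 2); and reversed urgencies reverse the serial order (example 3).</p>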
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>We’re excited about bringing this new standard into the world. Working with standards bodies has always been an amazing partnership and we’re very pleased with the results. We’ve seen great results with HTTP/3 priorities, reducing Largest Contentful Paint by up to 37% in our test. We’ll be rolling this feature out over the next few weeks as part of the HTTP Priorities feature for HTTP/2 that’s already available today.</p> ]]></content:encoded>
            <category><![CDATA[Speed Week]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <guid isPermaLink="false">3sxwiYeGEwXXvE9ltToeUB</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>Achiel van der Mandele</dc:creator>
        </item>
        <item>
            <title><![CDATA[The state of HTTP in 2022]]></title>
            <link>https://blog.cloudflare.com/the-state-of-http-in-2022/</link>
            <pubDate>Fri, 30 Dec 2022 14:00:00 GMT</pubDate>
            <description><![CDATA[ So what happened at all of those working group meetings, specification documents, and side events in 2022? What are implementers and deployers of the web’s protocol doing? And what’s coming next? ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6YG57TW3Ue3Z2iGZU7dO4u/853a3d2f76588ca638c4e24727f5eca5/http3-tube_2x-2.png" />
            
            </figure><p>At over thirty years old, HTTP is still the foundation of the web and one of the Internet’s most popular protocols—not just for browsing, watching videos and listening to music, but also for apps, machine-to-machine communication, and even as a basis for building other protocols, forming what some refer to as a “second waist” in the classic Internet hourglass diagram.</p><p>What makes HTTP so successful? One answer is that it hits a “sweet spot” for most applications that need an application protocol. “<a href="https://httpwg.org/specs/rfc9205.html">Building Protocols with HTTP</a>” (published in 2022 as a Best Current Practice RFC by the <a href="https://httpwg.org/">HTTP Working Group</a>) argues that HTTP’s success can be attributed to factors like:</p><ul><li><p>familiarity by implementers, specifiers, administrators, developers, and users;</p></li><li><p>availability of a variety of client, server, and proxy implementations;</p></li><li><p>ease of use;</p></li><li><p>availability of web browsers;</p></li><li><p>reuse of existing mechanisms like authentication and encryption;</p></li><li><p>presence of HTTP servers and clients in target deployments; and</p></li><li><p>its ability to traverse firewalls.</p></li></ul><p>Another important factor is the community of people using, implementing, and standardising HTTP. We work together to maintain and develop the protocol actively, to assure that it’s interoperable and meets today’s needs. If HTTP stagnates, another protocol will (justifiably) replace it, and we’ll lose all the community’s investment, shared understanding and interoperability.</p><p>Cloudflare and many others do this by sending engineers to <a href="/cloudflare-and-the-ietf/">participate in the IETF</a>, where most Internet protocols are discussed and standardised. 
We also attend and sponsor community events like the <a href="https://httpwork.shop">HTTP Workshop</a> to have conversations about what problems people have, what they need, and understand what changes might help them.</p><p>So what happened at all of those working group meetings, specification documents, and side events in 2022? What are implementers and deployers of the web’s protocol doing? And what’s coming next?</p>
    <div>
      <h3>New Specification: HTTP/3</h3>
      <a href="#new-specification-http-3">
        
      </a>
    </div>
    <p>Specification-wise, the biggest thing to happen in 2022 was the publication of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, because it was an enormous step towards keeping up with the requirements of modern applications and sites by using the network more efficiently to unblock web performance.</p><p>Way back in the 90s, HTTP/0.9 and HTTP/1.0 used a new TCP connection for each request—an astoundingly inefficient use of the network. HTTP/1.1 introduced persistent connections (which were backported to HTTP/1.0 with the `Connection: Keep-Alive` header). This was an improvement that helped servers and the network cope with the explosive popularity of the web, but even back then, the community knew it had significant limitations—in particular, head-of-line blocking (where one outstanding request on a connection blocks others from completing).</p><p>That didn’t matter so much in the 90s and early 2000s, but today’s web pages and applications place demands on the network that make these limitations performance-critical. Pages often have hundreds of assets that all compete for network resources, and HTTP/1.1 wasn’t up to the task. After some <a href="https://www.w3.org/Protocols/HTTP-NG/">false starts</a>, the community finally <a href="/http-2-for-web-developers/">addressed these issues with HTTP/2 in 2015</a>.</p><p>However, removing head-of-line blocking in HTTP exposed the same problem one layer lower, in TCP. Because TCP is an in-order, reliable delivery protocol, loss of a single packet in a flow can block access to those after it—even if they’re sitting in the operating system’s buffers. This turns out to be a real issue for HTTP/2 deployment, especially on less-than-optimal networks.</p><p>The answer, of course, was to replace TCP—the venerable transport protocol that so much of the Internet is built upon. 
After much discussion and many drafts in the <a href="https://quicwg.org/">QUIC Working Group</a>, <a href="/quic-version-1-is-live-on-cloudflare/">QUIC version 1 was published as that replacement</a> in 2021.</p><p>HTTP/3 is the version of HTTP that uses QUIC. While the working group effectively finished it in 2021 along with QUIC, its publication was held until 2022 to synchronise with the publication of other documents (see below). 2022 was also a <a href="/cloudflare-view-http3-usage/">milestone year for HTTP/3 deployment</a>; Cloudflare saw <a href="https://radar.cloudflare.com/adoption-and-usage?range=28d">increasing adoption and confidence</a> in the new protocol.</p><p>While there was only a brief gap of a few years between HTTP/2 and HTTP/3, there isn’t much appetite for working on HTTP/4 in the community soon. QUIC and HTTP/3 are both new, and the world is still learning how best to implement the protocols, operate them, and build sites and applications using them. While we can’t rule out a limitation that will force a new version in the future, the IETF built these protocols based upon broad industry experience with modern networks, and have significant extensibility available to ease any necessary changes.</p>
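    <p>To make the head-of-line blocking issue described above concrete, here is a toy model (an illustrative sketch, not a protocol implementation): three packets are sent and the first is lost, arriving last after retransmission. A single ordered TCP byte stream can deliver nothing to the application until that packet arrives, while QUIC's independent streams deliver data as it lands:</p>

```python
# Toy model of transport head-of-line blocking. Packets are numbered 1-3;
# packet 1 is lost and retransmitted, so it arrives last.
def tcp_delivery(arrivals):
    # TCP: one in-order byte stream; later packets sit in the OS buffer
    # until the gap is filled.
    delivered, buffered, expected = [], set(), 1
    for pkt in arrivals:
        buffered.add(pkt)
        while expected in buffered:
            delivered.append(expected)
            expected += 1
    return delivered

def quic_delivery(arrivals):
    # QUIC: each packet carries data for an independent stream, so the
    # application can consume it in arrival order.
    return list(arrivals)
```

    <p>With arrivals <code>[2, 3, 1]</code>, the TCP model delivers nothing until the retransmitted packet lands, while the QUIC model hands packets 2 and 3 to the application immediately.</p>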
    <div>
      <h3>New specifications: HTTP “core”</h3>
      <a href="#new-specifications-http-core">
        
      </a>
    </div>
    <p>The other headline event for HTTP specs in 2022 was the publication of its “core” documents, the heart of HTTP’s specification. The core comprises:</p><ul><li><p><a href="https://httpwg.org/specs/rfc9110.html">HTTP Semantics</a>: things like methods, headers, status codes, and the message format</p></li><li><p><a href="https://httpwg.org/specs/rfc9111.html">HTTP Caching</a>: how HTTP caches work</p></li><li><p><a href="https://httpwg.org/specs/rfc9112.html">HTTP/1.1</a>: mapping semantics to the wire, using the format everyone knows and loves</p></li></ul><p>Additionally, <a href="https://httpwg.org/specs/rfc9113.html">HTTP/2 was republished</a> to properly integrate with the Semantics document, and to fix a few outstanding issues.</p><p>This is the latest in a long series of revisions for these documents—in the past, we’ve had the RFC 723x series, the (perhaps most well-known) RFC 2616, RFC 2068, and the grandparent of them all, RFC 1945. Each revision has aimed to improve readability, fix errors, explain concepts better, and clarify protocol operation. Poorly specified (or implemented) features are deprecated; new features that improve protocol operation are added. See the ‘Changes from...’ appendix in each document for the details. And, importantly, always refer to the latest revisions linked above; the older RFCs are now obsolete.</p>
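For the curious, the "format everyone knows and loves" that the HTTP/1.1 document specifies is a plain-text wire format: a request line, header fields, and a blank line, each terminated by CRLF. A minimal sketch (example.com is a placeholder host):

```python
# A minimal HTTP/1.1 request in the RFC 9112 wire format.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"   # Host is mandatory in HTTP/1.1
    "\r\n"                    # blank line ends the header section
)

# Parsing the request line back out:
method, target, version = request.split("\r\n")[0].split(" ")
print(method, target, version)  # GET /index.html HTTP/1.1
```

HTTP/2 and HTTP/3 carry the same semantics (the method, target, and fields above) but serialise them in binary framing instead of this text form.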
    <div>
      <h3>Deploying Early Hints</h3>
      <a href="#deploying-early-hints">
        
      </a>
    </div>
    <p>HTTP/2 included <i>server push</i>, a feature designed to allow servers to “push” a request/response pair to clients when they knew the client was going to need something, so it could avoid the latency penalty of making a request and waiting for the response.</p><p>After HTTP/2 was finalised in 2015, Cloudflare and many other HTTP implementations soon <a href="/announcing-support-for-http-2-server-push-2/">rolled out server push</a> in anticipation of big performance wins. Unfortunately, it turned out to be harder than it looks; server push effectively requires the server to predict the future—not only what requests the client will send but also what the network conditions will be. And, when the server gets it wrong (“over-pushing”), the pushed requests directly compete with the real requests that the browser is making, representing a significant opportunity cost with real potential for harming performance, rather than helping it. The impact is even worse when the browser already has a copy in cache, so it doesn’t need the push at all.</p><p>As a result, <a href="https://developer.chrome.com/blog/removing-push/">Chrome removed HTTP/2 server push in 2022</a>. Other browsers and servers might still support it, but the community seems to agree that it’s only suitable for specialised uses currently, like the browser notification-specific <a href="https://www.rfc-editor.org/rfc/rfc8030.html">Web Push Protocol</a>.</p><p>That doesn’t mean that we’re giving up, however. The <a href="https://httpwg.org/specs/rfc8297.html">103 (Early Hints)</a> status code was published as an Experimental RFC by the HTTP Working Group in 2017. It allows a server to send <i>hints</i> to the browser in a non-final response, before the “real” final response. 
That’s useful if you know that the content is going to include some links to resources that the browser will fetch, but need more time to get the response to the client (because it will take more time to generate, or because the server needs to fetch it from somewhere else, like a CDN does).</p><p>Early Hints can be used in many situations that server push was designed for: for example, when you have CSS and JavaScript that a page is going to need to load. In theory, they’re not as optimal as server push, because they only allow hints to be sent when there’s an outstanding request, and because getting the hints to the client and acting on them adds some latency.</p><p>In practice, however, Cloudflare and our partners (like Shopify and Google) spent 2022 experimenting with Early Hints and found them much safer to use, with <a href="/early-hints-performance/">promising performance benefits</a> that include significant reductions in key web performance metrics.</p><p>We’re excited about the potential that Early Hints show; so excited that we’ve <a href="/early-hints-on-cloudflare-pages/">integrated them into Cloudflare Pages</a>. We’re also evaluating new ways to improve performance using this new capability in the protocol.</p>
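On the wire, an Early Hints exchange is just a 103 interim response carrying `Link` headers, sent before the final response. A sketch of the HTTP/1.1 serialisation (the asset names are illustrative, and the `Link` parsing is simplified rather than a full RFC 8288 parser):

```python
# An interim 103 response hinting two assets, sent while the server is
# still generating the final 200 response.
early_hints = (
    "HTTP/1.1 103 Early Hints\r\n"
    "Link: </style.css>; rel=preload; as=style\r\n"
    "Link: </script.js>; rel=preload; as=script\r\n"
    "\r\n"
)

# A client can start preloading every resource named in a Link header:
hinted = []
for line in early_hints.split("\r\n"):
    if line.startswith("Link:"):
        target = line.split(";")[0]            # e.g. "Link: </style.css>"
        hinted.append(target.split("<")[1].rstrip(">"))

print(hinted)  # ['/style.css', '/script.js']
```

The browser can fetch both assets while the server is still busy producing the final response, which is where the latency win comes from.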
    <div>
      <h3>Privacy-focused intermediation</h3>
      <a href="#privacy-focused-intermediation">
        
      </a>
    </div>
    <p>For many, the most exciting HTTP protocol extensions in 2022 focused on intermediation—the ability to insert proxies, gateways, and similar components into the protocol to achieve specific goals, often with the aim of improving privacy.</p><p>The <a href="https://ietf-wg-masque.github.io">MASQUE Working Group</a>, for example, is an effort to add new tunneling capabilities to HTTP, so that an intermediary can pass the tunneled traffic along to another server.</p><p>While CONNECT has enabled TCP tunnels for a long time, MASQUE enables <a href="https://datatracker.ietf.org/doc/html/rfc9298">UDP tunnels</a>, allowing more protocols to be tunneled more efficiently, including QUIC and HTTP/3.</p><p>At Cloudflare, we’re enthusiastic to be working with Apple to use MASQUE to implement <a href="/icloud-private-relay/">iCloud Private Relay</a> and enhance their customers’ privacy without relying solely on one company. We’re also very interested in the Working Group’s future work, including <a href="/unlocking-quic-proxying-potential/">IP tunneling</a> that will enable MASQUE-based VPNs.</p><p>Another intermediation-focused specification is <a href="https://www.ietf.org/archive/id/draft-ietf-ohai-ohttp-06.html">Oblivious HTTP</a> (or OHTTP). OHTTP uses sets of intermediaries to prevent the server from using connections or IP addresses to track clients, giving greater privacy assurances for things like collecting telemetry or other sensitive data. 
This specification is just finishing the standards process, and we’re using it to build an important new product, <a href="/building-privacy-into-internet-standards-and-how-to-make-your-app-more-private-today/">Privacy Gateway</a>, to protect the privacy of our customers’ customers.</p><p>We and many others in the Internet community believe that this is just the start, because intermediation can partition communication, a <a href="https://intarchboard.github.io/draft-obliviousness/draft-kpw-iab-privacy-partitioning.html">valuable tool for improving privacy</a>.</p>
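For a flavour of how the UDP tunneling extension works: RFC 9298 identifies the tunnel target with a URI template, whose default form is `/.well-known/masque/udp/{target_host}/{target_port}/`; the client issues an extended CONNECT request for that path. A minimal sketch of the path construction only (the rest of the exchange, extended CONNECT and the capsule protocol, is omitted):

```python
from urllib.parse import quote

def connect_udp_path(target_host: str, target_port: int) -> str:
    """Expand RFC 9298's default connect-udp URI template."""
    # Percent-encode the host so IPv6 literals like "2001:db8::1"
    # survive as a single path segment.
    return f"/.well-known/masque/udp/{quote(target_host, safe='')}/{target_port}/"

print(connect_udp_path("192.0.2.6", 443))
# /.well-known/masque/udp/192.0.2.6/443/
```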
    <div>
      <h3>Protocol security</h3>
      <a href="#protocol-security">
        
      </a>
    </div>
    <p>Finally, 2022 saw a lot of work on security-related aspects of HTTP. The <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-digest-headers.html">Digest Fields</a> specification is an update to the now-ancient `Digest` header field, allowing integrity digests to be added to messages. The <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-message-signatures.html">HTTP Message Signatures</a> specification enables cryptographic signatures on requests and responses: something that has widespread ad hoc deployment, but until now has lacked a standard. Both specifications are in the final stages of standardisation.</p><p>A <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-rfc6265bis.html">revision of the Cookie specification</a> also saw a lot of progress in 2022, and should be final soon. Since it’s not possible to get rid of cookies completely any time soon, much work has taken place to limit how they operate to improve privacy and security, including a new `SameSite` attribute.</p><p>Another set of security-related specifications that Cloudflare has <a href="/cloudflare-supports-privacy-pass/">invested in for many years</a> is <a href="https://www.ietf.org/archive/id/draft-ietf-privacypass-architecture-09.html">Privacy Pass</a>, also known as “Private Access Tokens.” These are cryptographic tokens that can attest that clients are real people, not bots, without using an intrusive CAPTCHA, and without tracking the user’s activity online. In HTTP, they take the form of a <a href="https://www.ietf.org/archive/id/draft-ietf-privacypass-auth-scheme-07.html">new authentication scheme</a>.</p><p>While Privacy Pass is still not quite through the standards process, 2022 saw its <a href="/eliminating-captchas-on-iphones-and-macs-using-new-standard/">broad deployment by Apple</a>, a huge step forward. 
And since <a href="/turnstile-private-captcha-alternative/">Cloudflare uses it in Turnstile</a>, our CAPTCHA alternative, your users can have a better experience today.</p>
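As an example of what the Digest Fields work enables: a sender can attach a `Content-Digest` field whose value is an algorithm key and a base64 byte sequence wrapped in colons (Structured Fields notation). A sketch of computing one, following the draft's syntax at the time of writing:

```python
import base64
import hashlib

def content_digest(body: bytes) -> str:
    """Build a Content-Digest value per the Digest Fields draft:
    an algorithm key and a colon-delimited base64 byte sequence."""
    digest = base64.b64encode(hashlib.sha256(body).digest()).decode()
    return f"sha-256=:{digest}:"

print(content_digest(b'{"hello": "world"}'))
```

A recipient recomputes the digest over the received content and compares; a mismatch means the body was corrupted or altered in transit.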
    <div>
      <h3>What about 2023?</h3>
      <a href="#what-about-2023">
        
      </a>
    </div>
    <p>So, what’s next? Besides the specifications above that aren’t quite finished, the HTTP Working Group has a few other works in progress, including a <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-safe-method-w-body.html">QUERY method</a> (think GET but with a body), <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-resumable-upload.html">Resumable Uploads</a> (based on <a href="https://tus.io">tus</a>), <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-variants.html">Variants</a> (an improved Vary header for caching), <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-sfbis.html">improvements to Structured Fields</a> (including a new Date type), and a way to <a href="https://httpwg.org/http-extensions/draft-ietf-httpbis-retrofit.html">retrofit existing headers into Structured Fields</a>. We’ll write more about these as they progress in 2023.</p><p>At the <a href="https://github.com/HTTPWorkshop/workshop2022/blob/main/report.md">2022 HTTP Workshop</a>, the community also talked about what new work we can take on to improve the protocol. Some ideas discussed included improving our shared protocol testing infrastructure (right now we have a <a href="https://github.com/httpwg/wiki/wiki/HTTP-Testing-Resources">few resources</a>, but it could be much better), improving (or replacing) <a href="https://httpwg.org/specs/rfc7838.html">Alternative Services</a> to allow more intelligent and correct connection management, and more radical changes, like <a href="https://mnot.github.io/I-D/draft-nottingham-binary-structured-headers.html">alternative, binary serialisations of headers</a>.</p><p>There’s also a continuing discussion in the community about whether HTTP should accommodate pub/sub, or whether it should be standardised to work over WebSockets (and soon, WebTransport). 
Although it’s hard to say now, adjacent work on <a href="https://datatracker.ietf.org/group/moq/about/">Media over QUIC</a> that just started <i>might</i> provide an opportunity to push this forward.</p><p>Of course, that’s not everything, and what happens to HTTP in 2023 (and beyond) remains to be seen. HTTP is still evolving, even as it stays compatible with the largest distributed hypertext system ever conceived—the World Wide Web.</p> ]]></content:encoded>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Privacy]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">2ChQ8N8Cg6LwDWafsFjWJN</guid>
            <dc:creator>Mark Nottingham</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP/3 inspection on Cloudflare Gateway]]></title>
            <link>https://blog.cloudflare.com/cloudflare-gateway-http3-inspection/</link>
            <pubDate>Fri, 24 Jun 2022 13:30:05 GMT</pubDate>
            <description><![CDATA[ Today we’re excited to announce the ability for administrators to apply Zero Trust inspection policies to HTTP/3 traffic ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7bX5Iv8MoIXNxPcE67Ki4c/0d0b18fe43468e09e23536cf549acfef/pasted-image-0--1-.png" />
            
            </figure><p>Today, we’re excited to announce upcoming support for <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> inspection through Cloudflare Gateway, our comprehensive <a href="https://www.cloudflare.com/learning/access-management/what-is-a-secure-web-gateway/">secure web gateway</a>. HTTP/3 currently powers 25% of the Internet and delivers a faster browsing experience, without compromising security. Until now, administrators seeking to filter and inspect HTTP/3-enabled websites or APIs needed to either compromise on performance by falling back to HTTP/2 or lose visibility by bypassing inspection. With HTTP/3 support in Cloudflare Gateway, you can have full visibility on all traffic and provide the fastest browsing experience for your users.</p>
    <div>
      <h3>Why is the web moving to HTTP/3?</h3>
      <a href="#why-is-the-web-moving-to-http-3">
        
      </a>
    </div>
    <p>HTTP is one of the oldest technologies that power the Internet. All the way back in 1996, security and performance were afterthoughts and encryption was left to the transport layer to manage. This model doesn’t scale to the performance needs of the modern Internet and has led to HTTP being <a href="/http3-the-past-present-and-future/">upgraded to HTTP/2 and now HTTP/3</a>.</p><p>HTTP/3 accelerates browsing activity by using QUIC, a modern transport protocol that is always encrypted by default. This delivers faster performance by reducing round-trips between the user and the web server, and is more performant for users with unreliable connections. For further information about HTTP/3’s performance advantages, take a look at our previous blog post <a href="/http-3-vs-http-2/">here</a>.</p>
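The round-trip saving is easy to put numbers on. A back-of-envelope sketch, assuming TLS 1.3 and a fresh connection (real deployments vary, and both stacks can do better with session resumption):

```python
RTT_MS = 100  # assumed client-to-server round-trip time

# TCP + TLS 1.3: one round trip for the TCP handshake, then one more
# for the TLS handshake, before the first HTTP request can be sent.
tcp_tls_setup = 2 * RTT_MS

# QUIC combines the transport and TLS 1.3 handshakes into one exchange.
quic_setup = 1 * RTT_MS

print(tcp_tls_setup, quic_setup)  # 200 100
```

On a 100 ms path, that is 100 ms shaved off every fresh connection before any application data flows, which is why the gains are most visible on high-latency or unreliable networks.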
    <div>
      <h3>HTTP/3 development and adoption</h3>
      <a href="#http-3-development-and-adoption">
        
      </a>
    </div>
    <p>Cloudflare’s mission is to help build a better Internet. We see HTTP/3 as an important building block to make the Internet faster and more secure. We worked closely with the IETF to iterate on the HTTP/3 and QUIC standards documents. These efforts, combined with progress made by popular browsers like Chrome and Firefox to enable QUIC by default, have translated into HTTP/3 now being used by over 25% of all websites (see our recent <a href="/cloudflare-view-http3-usage/">analysis</a> for a more thorough look).</p><p>We’ve advocated for HTTP/3 extensively over the past few years. We first introduced support for the underlying transport layer QUIC in <a href="/the-quicening/">September 2018</a> and then from there worked to introduce HTTP/3 support for our reverse proxy services the following year in <a href="/http3-the-past-present-and-future/">September of 2019</a>. Since then our efforts haven’t slowed down, and today we support the latest revision of HTTP/3, using the final “h3” identifier matching RFC 9114.</p>
    <div>
      <h3>HTTP/3 inspection hurdles</h3>
      <a href="#http-3-inspection-hurdles">
        
      </a>
    </div>
    <p>But while there are many advantages to HTTP/3, its introduction has created deployment complexity and security tradeoffs for administrators seeking to filter and inspect HTTP traffic on their networks. HTTP/3 offers familiar HTTP request and response semantics, but the use of QUIC changes how it looks and behaves "on the wire". Since QUIC runs atop UDP, it is architecturally distinct from legacy TCP-based protocols and has poor support from legacy secure web gateways. The combination of these two factors has made it challenging for administrators to keep up with the evolving technological landscape while meeting users’ performance expectations and ensuring visibility and control over Internet traffic.</p><p>Without proper secure web gateway support for HTTP/3, administrators have needed to choose whether to compromise on security and/or performance for their users. Security tradeoffs include not inspecting UDP traffic or, even worse, forgoing critical security capabilities such as inline anti-virus scanning, data-loss prevention, <a href="https://www.cloudflare.com/learning/access-management/what-is-browser-isolation/">browser isolation</a> and/or traffic logging. Naturally, for any security-conscious organization, discarding security and visibility is not an acceptable approach, and this has led administrators to proactively disable HTTP/3 on their end user devices. This introduces deployment complexity and sacrifices performance, as it requires disabling QUIC support within the users’ web browsers.</p>
    <div>
      <h3>How to enable HTTP/3 Inspection</h3>
      <a href="#how-to-enable-http-3-inspection">
        
      </a>
    </div>
    <p>Once support for HTTP/3 inspection is available for select browsers later this year, you’ll be able to enable HTTP/3 inspection through the dashboard. Once logged into the Zero Trust dashboard, you will need to toggle on proxying, click the box for UDP traffic, and enable TLS decryption under <b>Settings &gt; Network &gt; Firewall.</b> Once these settings have been enabled, AV scanning, remote browser isolation, DLP, and HTTP filtering can be applied via HTTP policies to all of your organization’s proxied HTTP traffic.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4OeWgTcKJHjJoKJ7GmCJC5/2af6f099a4b93c9d396507ee2734b186/pasted-image-0--2-.png" />
            
            </figure>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Administrators will no longer need to make security tradeoffs based on the evolving technological landscape and can focus on protecting their organization and teams. We’ll reach out to all Cloudflare One customers once HTTP/3 inspection is available and are excited to simplify secure web gateway deployments for administrators.</p><p>HTTP/3 traffic inspection will be available to administrators on all plan types; if you have not already signed up, <a href="https://dash.cloudflare.com/sign-up/teams">click here</a> to get started.</p>
            <category><![CDATA[Cloudflare One Week]]></category>
            <category><![CDATA[Cloudflare Gateway]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Cloudflare One]]></category>
            <category><![CDATA[Cloudflare Zero Trust]]></category>
            <category><![CDATA[Zero Trust]]></category>
            <guid isPermaLink="false">63ahcyUqUmiwPyDOIZFyi8</guid>
            <dc:creator>Ankur Aggarwal</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP RFCs have evolved: A Cloudflare view of HTTP usage trends]]></title>
            <link>https://blog.cloudflare.com/cloudflare-view-http3-usage/</link>
            <pubDate>Mon, 06 Jun 2022 20:49:17 GMT</pubDate>
            <description><![CDATA[ HTTP/3 is now RFC 9114. We explore Cloudflare's view of how it is being used ]]></description>
            <content:encoded><![CDATA[ <p>Today, a cluster of Internet standards were published that rationalize and modernize the definition of HTTP - the application protocol that underpins the web. This work includes updates to, and <a href="https://www.cloudflare.com/learning/cloud/how-to-refactor-applications/">refactoring</a> of, HTTP semantics, HTTP caching, HTTP/1.1, HTTP/2, and the brand-new <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. Developing these specifications has been no mean feat and today marks the culmination of efforts far and wide, in the Internet Engineering Task Force (IETF) and beyond. We thought it would be interesting to celebrate the occasion by sharing some analysis of Cloudflare's view of HTTP traffic over the last 12 months.</p><p>However, before we get into the traffic data, for quick reference, here are the new RFCs that you should make a note of and start using:</p><ul><li><p>HTTP Semantics - <a href="https://www.rfc-editor.org/rfc/rfc9110.html">RFC 9110</a></p><ul><li><p>HTTP's overall architecture, common terminology and shared protocol aspects such as request and response messages, methods, status codes, header and trailer fields, message content, representation data, content codings and much more. 
Obsoletes RFCs <a href="https://www.rfc-editor.org/rfc/rfc2818.html">2818</a>, <a href="https://www.rfc-editor.org/rfc/rfc7231.html">7231</a>, <a href="https://www.rfc-editor.org/rfc/rfc7232.html">7232</a>, <a href="https://www.rfc-editor.org/rfc/rfc7233.html">7233</a>, <a href="https://www.rfc-editor.org/rfc/rfc7235.html">7235</a>, <a href="https://www.rfc-editor.org/rfc/rfc7538.html">7538</a>, <a href="https://www.rfc-editor.org/rfc/rfc7615.html">7615</a>, <a href="https://www.rfc-editor.org/rfc/rfc7694.html">7694</a>, and portions of <a href="https://www.rfc-editor.org/rfc/rfc7230.html">7230</a>.</p></li></ul></li><li><p>HTTP Caching - <a href="https://www.rfc-editor.org/rfc/rfc9111.html">RFC 9111</a></p><ul><li><p>HTTP caches and related header fields to control the behavior of response caching. Obsoletes RFC <a href="https://www.rfc-editor.org/rfc/rfc7234.html">7234</a>.</p></li></ul></li><li><p>HTTP/1.1 - <a href="https://www.rfc-editor.org/rfc/rfc9112.html">RFC 9112</a></p><ul><li><p>A syntax, aka "wire format", of HTTP that uses a text-based format. Typically used over TCP and TLS. Obsoletes portions of RFC <a href="https://www.rfc-editor.org/rfc/rfc7230.html">7230</a>.</p></li></ul></li><li><p>HTTP/2 - RFC <a href="https://www.rfc-editor.org/rfc/rfc9113.html">9113</a></p><ul><li><p>A syntax of HTTP that uses a binary framing format, which provides streams to support concurrent requests and responses. Message fields can be compressed using HPACK. Typically used over TCP and TLS. Obsoletes RFCs <a href="https://www.rfc-editor.org/rfc/rfc7540.html">7540</a> and <a href="https://www.rfc-editor.org/rfc/rfc8740.html">8740</a>.</p></li></ul></li><li><p>HTTP/3 - RFC <a href="https://www.rfc-editor.org/rfc/rfc9114.html">9114</a></p><ul><li><p>A syntax of HTTP that uses a binary framing format optimized for the QUIC transport protocol. 
Message fields can be compressed using QPACK.</p></li></ul></li><li><p>QPACK - RFC <a href="https://www.rfc-editor.org/rfc/rfc9204.html">9204</a></p><ul><li><p>A variation of HPACK field compression that is optimized for the QUIC transport protocol.</p></li></ul></li></ul><p>On May 28, 2021, we <a href="/quic-version-1-is-live-on-cloudflare/">enabled</a> QUIC version 1 and HTTP/3 for all Cloudflare customers, using the final "h3" identifier that matches RFC 9114. So although today's publication is an occasion to celebrate, for us nothing much has changed, and it's business as usual.</p><p><a href="https://caniuse.com/http3">Support for HTTP/3 in the stable release channels of major browsers</a> came in November 2020 for Google Chrome and Microsoft Edge and April 2021 for Mozilla Firefox. In Apple Safari, HTTP/3 support currently needs to be <a href="https://developer.apple.com/forums/thread/660516">enabled</a> in the “Experimental Features” developer menu in production releases.</p><p>A browser and web server typically automatically negotiate the highest HTTP version available. Thus, HTTP/3 takes precedence over HTTP/2. We looked back over the last year to understand HTTP/3 usage trends across the Cloudflare network, as well as analyzing HTTP versions used by traffic from leading browser families (Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari), major search engine indexing bots, and bots associated with some popular social media platforms. The graphs below are based on aggregate HTTP(S) traffic seen globally by the Cloudflare network, and include requests for website and application content across the Cloudflare customer base between May 7, 2021, and May 7, 2022. We used <a href="https://developers.cloudflare.com/bots/concepts/bot-score/">Cloudflare bot scores</a> to restrict analysis to “likely human” traffic for the browsers, and to “likely automated” and “automated” for the search and social bots.</p>
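That "automatic negotiation" starts with discovery: a server advertises its HTTP/3 endpoint with an `Alt-Svc` response header, which the client caches and may use for subsequent requests. A sketch of emitting and reading such a header (the parsing here is deliberately simplified, not a full RFC 7838 parser):

```python
# Advertise an HTTP/3 ("h3") endpoint on UDP port 443, cacheable by the
# client for 24 hours (the "ma" / max-age parameter, in seconds).
alt_svc = 'h3=":443"; ma=86400'

def parse_alt_svc(value: str) -> dict:
    """Simplified parse of a single Alt-Svc alternative into its parts."""
    params = {}
    for part in value.split(";"):
        key, _, val = part.strip().partition("=")
        params[key] = val.strip('"')
    return params

print(parse_alt_svc(alt_svc))  # {'h3': ':443', 'ma': '86400'}
```

On a later request, a client holding this entry can attempt QUIC to port 443 and fall back to TCP if the UDP path fails.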
    <div>
      <h3>Traffic by HTTP version</h3>
      <a href="#traffic-by-http-version">
        
      </a>
    </div>
    <p>Overall, HTTP/2 still comprises the majority of the request traffic for Cloudflare customer content, as clearly seen in the graph below. After remaining fairly consistent through 2021, HTTP/2 request volume increased by approximately 20% heading into 2022. HTTP/1.1 request traffic remained fairly flat over the year, aside from a slight drop in early December. And while HTTP/3 traffic initially trailed HTTP/1.1, it surpassed it in early July, growing steadily and roughly doubling in twelve months.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UKNCgWJPAocsCrvTmqKOG/6c1d9ff45b8c4430f4663f4fe8a41964/image13-1.png" />
            
            </figure>
    <div>
      <h3>HTTP/3 traffic by browser</h3>
      <a href="#http-3-traffic-by-browser">
        
      </a>
    </div>
    <p>Digging into just HTTP/3 traffic, the graph below shows the trend in daily aggregate request volume over the last year for HTTP/3 requests made by the surveyed browser families. Google Chrome (orange line) is far and away the leading browser, with request volume far outpacing the others.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fOBxNVQis3KRP9qMJtN0h/07df569e787dcfd3b918124a9c324b30/image6-21.png" />
            
            </figure><p>Below, we remove Chrome from the graph to allow us to more clearly see the trending across other browsers. Likely because it is also based on the Chromium engine, the trend for Microsoft Edge closely mirrors Chrome. As noted above, Mozilla Firefox first enabled production support in <a href="https://hacks.mozilla.org/2021/04/quic-and-http-3-support-now-in-firefox-nightly-and-beta/">version 88</a> in April 2021, making it available by default by the end of May. The increased adoption of that updated version during the following month is clear in the graph as well, as HTTP/3 request volume from Firefox grew rapidly. HTTP/3 traffic from Apple Safari increased gradually through April, suggesting growth in the number of users enabling the experimental feature or running a Technology Preview version of the browser. However, Safari’s HTTP/3 traffic has subsequently dropped over the last couple of months. We are not aware of any specific reasons for this decline, but our most recent observations indicate HTTP/3 traffic is recovering.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Mupv6iXQ195JfJkFLJQjX/cb0cc4153c043740e92e93fb2e041626/image2-57.png" />
            
            </figure><p>Looking at the lines for Chrome, Edge, and Firefox, a weekly cycle is clearly visible in the graph, suggesting greater usage of these browsers during the work week. This same pattern is absent from Safari usage.</p><p>Across the surveyed browsers, Chrome ultimately accounts for approximately 80% of the HTTP/3 requests seen by Cloudflare, as illustrated in the graphs below. Edge is responsible for around another 10%, with Firefox just under 10%, and Safari responsible for the balance.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Yph7V9e1W31PWkSry6pCy/6c874447bfa49392244e587dfb3d35fe/image1-64.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1zPj3TMsZuiirtoljldJYy/12f92447d2d0b3c26afd0b5754c510f1/image8-10.png" />
            
            </figure><p>We also wanted to look at how the mix of HTTP versions has changed over the last year across each of the leading browsers. Although the percentages vary between browsers, it is interesting to note that the trends are very similar across Chrome, Firefox and Edge. (After Firefox turned on default HTTP/3 support in May 2021, of course.) These trends are largely customer-driven – that is, they are likely due to changes in Cloudflare customer configurations.</p><p>Most notably, we see an increase in HTTP/3 during the last week of September, and a decrease in HTTP/1.1 at the beginning of December. For Safari, the HTTP/1.1 drop in December is also visible, but the HTTP/3 increase in September is not. We expect that over time, once Safari supports HTTP/3 by default, its trends will become more similar to those seen for the other browsers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4P1m2PJH7GBq9kBUL4vH0/fd19391109337e16a255967b54120392/image7-12.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5Fj8pBr6Z5tpMV9XUTC1lJ/68b10faf3b1f840844d1cf97f8204b64/image9-6.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6iMw3Aj3IXWpjn4LyxBAsG/7b59629b937fb39d352a126a5bd178d3/image12-1.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2dtxJLKq2N23CBwcZlEz8s/64bdc5275320e16bd0b8780db49e0ffc/image11-2.png" />
            
            </figure>
    <div>
      <h3>Traffic by search indexing bot</h3>
      <a href="#traffic-by-search-indexing-bot">
        
      </a>
    </div>
    <p>Back in 2014, Google <a href="https://developers.google.com/search/blog/2014/08/https-as-ranking-signal">announced</a> that it would start to consider HTTPS usage as a ranking signal as it indexed websites. However, it does not appear that Google, or any of the other major search engines, currently consider support for the latest versions of HTTP as a ranking signal. (At least not directly – the performance improvements associated with newer versions of HTTP could theoretically influence rankings.) Given that, we wanted to understand which versions of HTTP the indexing bots themselves were using.</p><p>Despite leading the charge around the development of QUIC, and integrating HTTP/3 support into the Chrome browser early on, it appears that on the indexing/crawling side, Google still has quite a long way to go. The graph below shows that requests from GoogleBot are still predominantly being made over HTTP/1.1, although use of HTTP/2 has grown over the last six months, gradually approaching HTTP/1.1 request volume. (A <a href="https://developers.google.com/search/blog/2020/09/googlebot-will-soon-speak-http2">blog post</a> from Google provides some potential insights into this shift.) Unfortunately, the volume of requests from GoogleBot over HTTP/3 has remained extremely limited over the last year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gTr2C26AB8SF6aK0CaiK/9b555a912b3428ad9e15572936bf4fb1/image4-32.png" />
            
            </figure><p>Microsoft’s BingBot also fails to use HTTP/3 when indexing sites, with near-zero request volume. However, in contrast to GoogleBot, BingBot prefers to use HTTP/2, with a wide margin developing in mid-May 2021 and remaining consistent across the rest of the past year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/444sdNtnh5h0LNsGtUWmuV/b4d3a2f76ec4a5b8fc579371c9f005a2/image10-5.png" />
            
            </figure>
    <div>
      <h3>Traffic by social media bot</h3>
      <a href="#traffic-by-social-media-bot">
        
      </a>
    </div>
    <p>Major social media platforms use custom bots to retrieve metadata for shared content, <a href="https://developers.facebook.com/docs/sharing/bot/">improve language models for speech recognition technology</a>, or otherwise index website content. We also surveyed the HTTP version preferences of the bots deployed by three of the leading social media platforms.</p><p>Although <a href="https://http3check.net/?host=www.facebook.com">Facebook supports HTTP/3</a> on their main website (and presumably their mobile applications as well), their back-end FacebookBot crawler does not appear to support it. Over the last year, on the order of 60% of the requests from FacebookBot have been over HTTP/1.1, with the balance over HTTP/2. Heading into 2022, it appeared that HTTP/1.1 preference was trending lower, with request volume over the 25-year-old protocol dropping from near 80% to just under 50% during the fourth quarter. However, that trend was abruptly reversed, with HTTP/1.1 growing back to over 70% in early February. The reason for the reversal is unclear.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6upA1FtAbR6TWhexL8CkxT/87b3f1d676e1f9189ad5b7dc1d869e4a/image3-44.png" />
            
            </figure><p>Similar to FacebookBot, it appears TwitterBot’s use of HTTP/3 is, unfortunately, pretty much non-existent. However, TwitterBot clearly has a strong and consistent preference for HTTP/2, accounting for 75-80% of its requests, with the balance over HTTP/1.1.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2c9sz97ViywLHaRd4vxCwE/9c981e7c39f8c894957447b4a3337c1a/image14-1.png" />
            
            </figure><p>In contrast, LinkedInBot has, over the last year, been firmly committed to making requests over HTTP/1.1, aside from the apparently brief anomalous usage of HTTP/2 last June. However, in mid-March, it appeared to tentatively start exploring the use of other HTTP versions, with around 5% of requests now being made over HTTP/2, and around 1% over HTTP/3, as seen in the upper right corner of the graph below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ozCJpCXILw6ulzIAxDyXn/70f9a1c95d76f4fd1f6f9e70e4d3e270/image5-23.png" />
            
            </figure>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>We're happy that HTTP/3 has, at long last, been published as <a href="https://www.rfc-editor.org/rfc/rfc9114.html">RFC 9114</a>. More than that, we're super pleased to see that regardless of the wait, browsers have steadily been enabling support for the protocol by default. This allows end users to seamlessly gain the advantages of HTTP/3 whenever it is available. On Cloudflare's global network, we've seen continued growth in the share of traffic speaking HTTP/3, demonstrating continued interest from customers in enabling it for their sites and services. In contrast, we are disappointed to see bots from the major search and social platforms continuing to rely on aging versions of HTTP. We'd like to build a better understanding of how these platforms chose particular HTTP versions and welcome collaboration in exploring the advantages that HTTP/3, in particular, could provide.</p><p>Current statistics on HTTP/3 and QUIC adoption at a country and autonomous system (ASN) level can be found on <a href="https://radar.cloudflare.com/">Cloudflare Radar</a>.</p><p>Running HTTP/3 and QUIC on the edge for everyone has allowed us to monitor a wide range of aspects related to interoperability and performance across the Internet. Stay tuned for future blog posts that explore some of the technical developments we've been making.</p><p>And this certainly isn't the end of protocol innovation, as HTTP/3 and QUIC provide many exciting new opportunities. The IETF and wider community are already underway building new capabilities on top, such as <a href="/unlocking-quic-proxying-potential/">MASQUE</a> and <a href="https://datatracker.ietf.org/wg/webtrans/documents/">WebTransport</a>. Meanwhile, in the last year, the QUIC Working Group has adopted new work such as <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-v2/">QUIC version 2</a>, and the <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-multipath/">Multipath Extension to QUIC</a>.</p>
    <div>
      <h3>Watch on Cloudflare TV</h3>
      <a href="#watch-on-cloudflare-tv">
        
      </a>
    </div>
    ]]></content:encoded>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4Dd2QedroFWYvUMb5Ba3ha</guid>
            <dc:creator>Lucas Pardue</dc:creator>
            <dc:creator>David Belson</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Last Call for QUIC, a giant leap for the Internet]]></title>
            <link>https://blog.cloudflare.com/last-call-for-quic/</link>
            <pubDate>Thu, 22 Oct 2020 14:08:51 GMT</pubDate>
            <description><![CDATA[ QUIC and HTTP/3 are open standards that have been under development in the IETF for almost exactly 4 years. On October 21, 2020, following two rounds of Working Group Last Call, draft 32 of the family of documents that describe QUIC and HTTP/3 was put into IETF Last Call. ]]></description>
            <content:encoded><![CDATA[ <p>QUIC is a new Internet transport protocol for secure, reliable and multiplexed communications. <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> builds on top of QUIC, leveraging the new features to fix performance problems such as Head-of-Line blocking. This enables web pages to load faster, especially over troublesome networks.</p><p>QUIC and HTTP/3 are open standards that have been under development in the IETF <a href="/http-3-from-root-to-tip">for almost exactly 4 years</a>. On October 21, 2020, following two rounds of Working Group Last Call, draft 32 of the family of documents that describe QUIC and HTTP/3 was put into <a href="https://mailarchive.ietf.org/arch/msg/quic/ye1LeRl7oEz898RxjE6D3koWhn0/">IETF Last Call</a>. This is an important milestone for the group. We are now telling the entire IETF community that we think we're almost done and that we'd welcome their final review.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78vaeSkXoriOtIbPyOU5rw/8ce2942b6542d94b5d0d42fc9e91d7b7/image2-24.png" />
            
            </figure><p>Speaking personally, I've been involved with QUIC in some shape or form for many years now. Earlier this year I was honoured to be asked to help co-chair the Working Group. I'm pleased to help shepherd the documents through this important phase, and grateful for the efforts of everyone involved in getting us there, especially the editors. I'm also excited about future opportunities to evolve on top of QUIC v1 to help build a better Internet.</p><p>There are two aspects to protocol development. One aspect involves writing and iterating upon the documents that describe the protocols themselves. Then, there's implementing, deploying and testing libraries, clients and/or servers. These aspects operate hand in hand, helping the Working Group move towards satisfying the goals listed in its charter. IETF Last Call marks the point that the group and their responsible Area Director (in this case Magnus Westerlund) believe the job is almost done. Now is the time to solicit feedback from the wider IETF community for review. At the end of the Last Call period, the stakeholders will take stock, address feedback as needed and, fingers crossed, go onto the next step of requesting the documents be published as RFCs on the Standards Track.</p><p>Although specification and implementation work hand in hand, they often progress at different rates, and that is totally fine. The QUIC specification has been mature and deployable for a long time now. HTTP/3 has been <a href="/http3-the-past-present-and-future/">generally available</a> on the Cloudflare edge since September 2019, and we've been delighted to see support roll out in user agents such as Chrome, Firefox, Safari, curl and so on. Although draft 32 is the latest specification, the community has for the time being settled on draft 29 as a solid basis for interoperability. This shouldn't be surprising: as foundational aspects crystallize, the scope of changes between iterations decreases. For the average person in the street, there's not really much difference between 29 and 32.</p><p>So today, if you visit a website with HTTP/3 enabled—such as <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>—you’ll probably see response headers that contain Alt-Svc: h3-29="… . And in a while, once Last Call completes and the RFCs ship, you'll start to see websites simply offer Alt-Svc: h3="… (note, no draft version!).</p>
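<p>As a rough illustration (the authority and "ma" values below are invented for the example, and this is not how any particular client implements it), pulling the advertised protocol identifiers out of an Alt-Svc header value might look like this:</p>

```python
# Illustrative sketch only: a minimal parser that extracts the advertised
# protocol identifiers from an Alt-Svc header value. A real client follows
# RFC 7838 (quoting rules, parameters, caching via "ma", and so on).

def alt_svc_protocols(header_value):
    """Return protocol IDs (e.g. 'h3-29', 'h3', 'h2') advertised in Alt-Svc."""
    protocols = []
    for entry in header_value.split(","):
        entry = entry.strip()
        if not entry or entry == "clear":
            continue  # "clear" invalidates all alternative services
        # Each entry looks like: protocol-id="authority"; param=value
        protocols.append(entry.split("=", 1)[0].strip())
    return protocols

# A draft-29 deployment advertises h3-29; post-RFC deployments plain h3.
print(alt_svc_protocols('h3-29=":443"; ma=86400, h3=":443"'))
```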
    <div>
      <h3>Need a deep dive?</h3>
      <a href="#need-a-deep-dive">
        
      </a>
    </div>
    <p>We've collected a bunch of resource links at <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>. If you're more of an interactive visual learner, you might be pleased to hear that I've also been hosting a series on <a href="https://cloudflare.tv/live">Cloudflare TV</a> called "Levelling up Web Performance with HTTP/3". There are over 12 hours of content including the basics of QUIC, ways to measure and debug the protocol in action using tools like Wireshark, and several deep dives into specific topics. I've also been lucky to have some guest experts join me along the way. The table below gives an overview of the episodes that are available on demand.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/41Oavd19lBk474V1BOr1ZQ/3b6d0466c42b3940e6329c754c63863d/image1-36.png" />
            
            </figure>
            <table>
              <thead>
                <tr><th>Episode</th><th>Description</th></tr>
              </thead>
              <tbody>
                <tr><td><a href="https://cloudflare.tv/event/6jJjzbBoFwvARsoaNiUt9i">1</a></td><td>Introduction to QUIC.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/5rcGVibHCKs9l9xUUMdJqg">2</a></td><td>Introduction to HTTP/3.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3OM7upT7p3vpAdzphFdhnx">3</a></td><td>QUIC &amp; HTTP/3 logging and analysis using qlog and qvis. Featuring Robin Marx.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/45tQd4UPkZGULg59BZPl1p">4</a></td><td>QUIC &amp; HTTP/3 packet capture and analysis using Wireshark. Featuring Peter Wu.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/4YgvMrif2yma7pM6Srv6wi">5</a></td><td>The roles of Server Push and Prioritization in HTTP/2 and HTTP/3. Featuring Yoav Weiss.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/7ufIyfjZfn2aQ2K635EH3t">6</a></td><td>"After dinner chat" about curl and QUIC. Featuring Daniel Stenberg.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/6vMyFU2jyx2iKXZVp7YjHW">7</a></td><td>Qlog vs. Wireshark. Featuring Robin Marx and Peter Wu.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3miIPtXnktpzjslJlnkD9c">8</a></td><td>Understanding protocol performance using WebPageTest. Featuring Pat Meenan and Andy Davies.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/6Qv7zmY2oi6j28M5HZNZmV">9</a></td><td>Handshake deep dive.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3gqUUBcl40LvThxO7UQH0T">10</a></td><td>Getting to grips with quiche, Cloudflare's QUIC and HTTP/3 library.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/3Mrq6DHoA9fy4ATT3Wigrv">11</a></td><td>A review of SIGCOMM's EPIQ workshop on evolving QUIC.</td></tr>
                <tr><td><a href="https://cloudflare.tv/event/CHrSpig5nqKeFGFA3fzLq">12</a></td><td>Understanding the role of congestion control in QUIC. Featuring Junho Choi.</td></tr>
              </tbody>
            </table>
    <div>
      <h3>Whither QUIC?</h3>
      <a href="#whither-quic">
        
      </a>
    </div>
    <p>So does Last Call mean QUIC is "done"? Not by a long shot. The new protocol is a giant leap for the Internet, because it enables new opportunities and innovation. QUIC v1 is basically the set of documents that have gone into Last Call. We'll continue to see people gain experience deploying and testing this, and no doubt cool blog posts about tweaking parameters for efficiency and performance are on the radar. But QUIC and HTTP/3 are extensible, so we'll see people interested in trying new things like multipath, different congestion control approaches, or new ways to carry data unreliably such as the <a href="https://datatracker.ietf.org/doc/draft-ietf-quic-datagram/">DATAGRAM frame</a>.</p><p>We're also seeing people interested in using QUIC for other use cases. Mapping other application protocols like DNS to QUIC is a rapid way to get its improvements. We're seeing people that want to use QUIC as a substrate for carrying other transport protocols, hence the formation of the <a href="https://datatracker.ietf.org/wg/masque/about/">MASQUE Working Group</a>. There's folks that want to use QUIC and HTTP/3 as a "supercharged WebSocket", hence the formation of the <a href="https://datatracker.ietf.org/wg/webtrans/documents/">WebTransport Working Group</a>.</p><p>Whatever the future holds for QUIC, we're just getting started, and I'm excited.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">4kkjRctgxi0uvF46ddKnp6</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Speeding up HTTPS and HTTP/3 negotiation with... DNS]]></title>
            <link>https://blog.cloudflare.com/speeding-up-https-and-http-3-negotiation-with-dns/</link>
            <pubDate>Wed, 30 Sep 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ A look at a new DNS resource record intended to speed-up negotiation of HTTP security and performance features and how it will help make the web faster. ]]></description>
            <content:encoded><![CDATA[ <p>In late June 2019, Cloudflare's resolver team noticed a spike in DNS requests for the 65479 Resource Record thanks to data exposed through <a href="/introducing-cloudflare-radar/">our new Radar service</a>. We began investigating and found these to be a part of <a href="https://developer.apple.com/videos/play/wwdc2020/10111/">Apple’s iOS14 beta release</a> where they were testing out a new SVCB/HTTPS record type.</p><p>Once we saw that Apple was requesting this record type, and while the iOS 14 beta was still on-going, we rolled out support across the Cloudflare customer base.</p><p>This blog post explains what this new record type does and its significance, but there’s also a deeper story: Cloudflare customers get automatic support for new protocols like this.</p><p>That means that today if you’ve enabled HTTP/3 on an Apple device running iOS 14, when it needs to talk to a Cloudflare customer (say you browse to a Cloudflare-protected website, or use an app whose API is on Cloudflare) it can find the best way of making that connection automatically.</p><p>And if you’re a Cloudflare customer you have to do… absolutely nothing… to give Apple users the best connection to your Internet property.</p>
    <div>
      <h3>Negotiating HTTP security and performance</h3>
      <a href="#negotiating-http-security-and-performance">
        
      </a>
    </div>
    <p>Whenever a user types a URL in the browser box without specifying a scheme (like “https://” or “http://”), the browser cannot assume, without prior knowledge such as a <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security">Strict-Transport-Security (HSTS)</a> cache or preload list entry, whether the requested website supports HTTPS or not. The browser will first try to fetch the resources using plaintext HTTP, and only if the website redirects to an HTTPS URL, or specifies an HSTS policy in the initial HTTP response, will it fetch the resource again over a secure connection.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iz2JFuI19whcuN931y8pk/8d46dd114b946940a6cdcd49502d7b07/image4.gif" />
            
            </figure><p>This means that the latency incurred in fetching the initial resource (say, the index page of a website) is doubled, due to the fact that the browser needs to re-establish the connection over TLS and request the resource all over again. But worse still, the initial request is leaked to the network in plaintext, which could potentially be modified by malicious on-path attackers (think of all those unsecured public WiFi networks) to redirect the user to a completely different website. In practical terms, this weakness is sometimes used by said unsecured public WiFi network operators to sneak advertisements into people’s browsers.</p><p>Unfortunately, that’s not the full extent of it. This problem also impacts <a href="/http3-the-past-present-and-future/">HTTP/3</a>, the newest revision of the HTTP protocol that provides increased performance and security. <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is advertised using the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Alt-Svc">Alt-Svc</a> HTTP header, which is only returned after the browser has already contacted the origin using a different and potentially less performant HTTP version. The browser ends up missing out on using faster HTTP/3 on its first visit to the website (although it does store the knowledge for later visits).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ulWkZdFUIrgAg7WSKuOwc/acf044f1633f1116567dfd1a0c3d15e0/image2-18.png" />
            
            </figure><p>The fundamental problem comes from the fact that negotiation of HTTP-related parameters (such as whether HTTPS or HTTP/3 can be used) is done through HTTP itself (either via a redirect, HSTS and/or Alt-Svc headers). This leads to a chicken and egg problem where the client needs to use the most basic HTTP configuration that has the best chance of succeeding for the initial request. In most cases this means using plaintext HTTP/1.1. Only after it learns of parameters can it change its configuration for the following requests.</p><p>But before the browser can even attempt to connect to the website, it first needs to resolve the website’s domain to an IP address via DNS. This presents an opportunity: what if additional information required to establish a connection could be provided, in addition to IP addresses, with DNS?</p><p>That’s what we’re excited to be announcing today: Cloudflare has rolled out initial support for HTTPS records to our edge network. Cloudflare’s DNS servers will now automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone.</p>
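<p>As a rough sketch of the prior-knowledge logic described above (the host lists and names here are invented for the example; real browsers also consult redirect caches and per-site settings), the scheme selection boils down to:</p>

```python
# Sketch of how a client picks the scheme for a bare, scheme-less URL:
# with no prior knowledge it must start with plaintext HTTP and upgrade;
# an HSTS preload entry or cached HSTS policy lets it skip that first hop.
# The host sets below are invented examples.

HSTS_PRELOAD = {"cloudflare.com"}   # shipped with the browser
hsts_cache = {"example.org"}        # hosts previously seen with an HSTS header

def initial_scheme(host):
    if host in HSTS_PRELOAD or host in hsts_cache:
        return "https"  # prior knowledge: go secure immediately
    return "http"       # no knowledge: plaintext first, upgrade via redirect/HSTS

print(initial_scheme("cloudflare.com"))  # https
print(initial_scheme("unknown.test"))    # http
```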
    <div>
      <h3>Service Bindings via DNS</h3>
      <a href="#service-bindings-via-dns">
        
      </a>
    </div>
    <p><a href="https://tools.ietf.org/html/draft-ietf-dnsop-svcb-https-01">The new proposal</a>, currently under discussion at the Internet Engineering Task Force (IETF), defines a family of DNS resource record types (“SVCB”) that can be used to negotiate parameters for a variety of application protocols.</p><p>The generic DNS record “SVCB” can be instantiated into records specific to different protocols. The draft specification defines one such instance called “HTTPS”, specific to the HTTP protocol, which can be used not only to signal to the client that it can connect over a secure connection (skipping the initial unsecured request), but also to advertise the different HTTP versions supported by the website. In the future, potentially even more features could be advertised.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2"</code></pre>
            <p>The DNS record above advertises support for the HTTP/3 and HTTP/2 protocols for the example.com origin.</p><p>This is best used alongside DNS over HTTPS or DNS over TLS, and DNSSEC, to again prevent malicious actors from manipulating the record.</p><p>The client will need to fetch not only the typical A and AAAA records to get the origin’s IP addresses, but also the HTTPS record. It can of course do these lookups in parallel to avoid additional latency at the start of the connection, but this could potentially lead to A/AAAA and HTTPS responses diverging from each other. For example, in cases where the origin makes use of <a href="https://www.cloudflare.com/learning/performance/what-is-dns-load-balancing/">DNS load-balancing</a>: if an origin can be served by multiple <a href="https://www.cloudflare.com/learning/cdn/what-is-a-cdn/">CDNs</a> it might happen that the responses for A and/or AAAA records come from one CDN, while the HTTPS record comes from another. In some cases this can lead to failures when connecting to the origin (say, if the HTTPS record from one of the CDNs advertises support for HTTP/3, but the CDN the client ends up connecting to doesn’t support it).</p><p>This is solved by the SVCB and HTTPS records by providing the IP addresses directly, without the need for the client to look at A and AAAA records. This is done via the “ipv4hint” and “ipv6hint” parameters that can optionally be added to these records, which provide lists of IPv4 and IPv6 addresses that can be used by the client in lieu of the addresses specified in A and AAAA records. Of course clients will still need to query the A and AAAA records, to support cases where no SVCB or HTTPS record is available, but these IP hints provide an additional layer of robustness.</p>
            <pre><code>example.com 3600 IN HTTPS 1 . alpn="h3,h2" ipv4hint="192.0.2.1" ipv6hint="2001:db8::1"</code></pre>
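<p>To make the record's shape concrete, here is a simplified sketch that splits the presentation format above into its priority, target name and parameters. This is illustration only: a real parser has to handle quoting, escaping and unknown keys as laid out in the SVCB draft.</p>

```python
# Simplified sketch: split an HTTPS/SVCB record's presentation format,
# e.g. '1 . alpn="h3,h2" ipv4hint="192.0.2.1"', into its parts.
# Not a validating parser; real values may contain escapes and spaces.

def parse_https_rdata(rdata):
    fields = rdata.split()
    priority = int(fields[0])   # 0 = alias form, >0 = service form
    target = fields[1]          # '.' means the record's own owner name
    params = {}
    for field in fields[2:]:
        key, _, value = field.partition("=")
        params[key] = value.strip('"').split(",")
    return {"priority": priority, "target": target, "params": params}

rec = parse_https_rdata('1 . alpn="h3,h2" ipv4hint="192.0.2.1" ipv6hint="2001:db8::1"')
print(rec["params"]["alpn"])      # HTTP versions the client may try directly
print(rec["params"]["ipv4hint"])  # addresses usable without waiting on A lookups
```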
            <p>In addition to all this, SVCB and HTTPS can also be used to define alternative endpoints that are authoritative for a service, in a similar vein to SRV records:</p>
            <pre><code>example.com 3600 IN HTTPS 1 example.net alpn="h3,h2"
example.com 3600 IN HTTPS 2 example.org alpn="h2"</code></pre>
            <p>In this case the “example.com” HTTPS service can be provided by both “example.net” (which supports both HTTP/3 and HTTP/2, in addition to HTTP/1.x) as well as “example.org” (which only supports HTTP/2 and HTTP/1.x). The client will first need to fetch A and AAAA records for “example.net” or “example.org” before being able to connect, which might increase the connection latency, but the service operator can make use of the IP hint parameters discussed above in this case as well, to reduce the amount of required DNS lookups the client needs to perform.</p><p>This means that SVCB and HTTPS records might finally provide a way for SRV-like functionality to be supported by popular browsers and other clients that have historically not supported SRV records.</p>
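<p>As a rough sketch (not a complete implementation of the draft's selection rules), a client could choose between those endpoints by sorting on the priority field, lower values first, and skipping any endpoint whose advertised protocols it cannot speak:</p>

```python
# Sketch of client-side selection across service-form HTTPS records:
# sort by priority (lower is preferred) and take the first endpoint
# advertising a protocol the client supports. Simplified illustration.

def pick_endpoint(records, client_protocols):
    for record in sorted(records, key=lambda r: r["priority"]):
        if client_protocols & set(record["alpn"]):
            return record
    return None  # fall back to ordinary A/AAAA connection logic

records = [
    {"priority": 2, "target": "example.org.", "alpn": ["h2"]},
    {"priority": 1, "target": "example.net.", "alpn": ["h3", "h2"]},
]

# Both an h3-capable and an h2-only client land on example.net (priority 1),
# since it advertises both protocols; example.org is only reached if the
# client can't speak anything example.net offers.
print(pick_endpoint(records, {"h3", "h2"})["target"])  # example.net.
```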
    <div>
      <h3>There is always room at the top apex</h3>
      <a href="#there-is-always-room-at-the-top-apex">
        
      </a>
    </div>
    <p>When setting up a website on the Internet, it’s common practice to use a “www” subdomain (like in “<a href="http://www.cloudflare.com">www.cloudflare.com</a>”) to identify the site, as well as the “apex” (or “root”) of the domain (in this case, “cloudflare.com”). In order to avoid duplicating the DNS configuration for both domains, the “www” subdomain can typically be configured as a <a href="/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/#cnamesforthewin">CNAME (Canonical Name) record</a>, that is, a record that maps to a different DNS record.</p>
            <pre><code>cloudflare.com.   3600 IN A 192.0.2.1
cloudflare.com.   3600 IN AAAA 2001:db8::1
www               3600 IN CNAME cloudflare.com.</code></pre>
            <p>This way the list of IP addresses of the websites won’t need to be duplicated all over again, but clients requesting A and/or AAAA records for “<a href="http://www.cloudflare.com">www.cloudflare.com</a>” will still get the same results as “cloudflare.com”.</p><p>However, there are some cases where using a CNAME might seem like the best option, but ends up subtly breaking the DNS configuration for a website. For example when setting up services such as <a href="https://docs.gitlab.com/ee/user/project/pages/">GitLab Pages</a>, <a href="https://docs.github.com/en/github/working-with-github-pages">GitHub Pages</a> or <a href="https://www.netlify.com/">Netlify</a> with a custom domain, the user is generally asked to add an A (and sometimes AAAA) record to the DNS configuration for their domain. Those IP addresses are hard-coded in users’ configurations, which means that if the provider of the service ever decides to change the addresses (or add new ones), even if just to provide some form of load-balancing, all of their users will need to manually change their configuration.</p><p>Using a CNAME to a more stable domain which can then have variable A and AAAA records might seem like a better option, and some of these providers do support that, but it’s important to note that this generally only works for subdomains (like “www” in the previous example) and not apex records. This is because the DNS specification that defines CNAME records states that when a CNAME is defined on a particular target, there can’t be any other records associated with it. This is fine for subdomains, but apex records will need to have additional records defined, such as SOA and NS, for the DNS configuration to work properly and could also have records such as MX to make sure emails get properly delivered. 
In practical terms, this means that defining a CNAME record at the apex of a domain might appear to be working fine in some cases, but be subtly broken in ways that are not immediately apparent.</p><p>But what does this all have to do with SVCB and HTTPS records? Well, it turns out that those records can also solve this problem, by defining an alternative format called “alias form” that behaves in the same manner as a CNAME in all the useful ways, but without the annoying historical baggage. A domain operator will be able to define a record such as:</p>
            <pre><code>example.com. 3600 IN HTTPS 0 example.org.</code></pre>
            <p>and expect it to work as if a CNAME was defined, but without the subtle side-effects.</p>
    <div>
      <h3>One more thing</h3>
      <a href="#one-more-thing">
        
      </a>
    </div>
    <p><a href="/encrypted-sni/">Encrypted SNI</a> is an extension to TLS intended to improve privacy of users on the Internet. You might remember how it makes use of a custom DNS record to advertise the server’s public key share used by clients to then derive the secret key necessary to actually encrypt the SNI. In newer revisions of the specification (which is now called “Encrypted ClientHello” or “ECH”) the custom TXT record used previously is simply replaced by a new parameter, called “echconfig”, for the SVCB and HTTPS records.</p><p>This means that SVCB/HTTPS are a requirement to support newer revisions of Encrypted SNI/Encrypted ClientHello. More on this later this year.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4EBQrn9Y1mOFSs0PtAWtri/f77c27a9e29ef4492e11e0b055e4eacf/image1-28.png" />
            
            </figure>
    <div>
      <h3>What now?</h3>
      <a href="#what-now">
        
      </a>
    </div>
    <p>This all sounds great, but what does it actually mean for Cloudflare customers? As mentioned earlier, we have enabled initial support for HTTPS records across our edge network. Cloudflare’s DNS servers will automatically generate HTTPS records on the fly to advertise whether a particular zone supports HTTP/3 and/or HTTP/2, based on whether those features are enabled on the zone, and we will later also add Encrypted ClientHello support.</p><p>Thanks to Cloudflare’s large network that spans millions of web properties (<a href="https://w3techs.com/technologies/history_overview/dns_server">we happen to be one of the most popular DNS providers</a>), serving these records on our customers' behalf will help build a more secure and performant Internet for anyone that is using a supporting client.</p><p>Adopting new protocols requires cooperation between multiple parties. We have been working with various browsers and clients to increase the support and adoption of HTTPS records. Over the last few weeks, Apple’s iOS 14 release has included <a href="https://mailarchive.ietf.org/arch/msg/dnsop/eeP4H9fli712JPWnEMvDg1sLEfg/">client support for HTTPS records</a>, allowing connections to be upgraded to QUIC when the HTTP/3 parameter is returned in the DNS record. Apple has reported that so far, of the population that has manually enabled HTTP/3 on iOS 14, 8% of the QUIC connections had the HTTPS record response.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tcR9Q2DOynyF2kGZDNwKz/47cb2bc54fdd25f8778e3cd6f94d773e/image3-15.png" />
            
            </figure><p>Other browser vendors, such as <a href="https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/brZTXr6-2PU/g0g8wWwCAwAJ">Google</a> and <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1634793">Mozilla</a>, are also working on shipping support for HTTPS records to their users, and we hope to be hearing more on this front soon.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[HTTPS]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7KRAY9gQONDIoKm8SZ3a8y</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[How to test HTTP/3 and QUIC with Firefox Nightly]]></title>
            <link>https://blog.cloudflare.com/how-to-test-http-3-and-quic-with-firefox-nightly/</link>
            <pubDate>Tue, 30 Jun 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Now that Firefox Nightly supports HTTP/3 we thought we'd share some instructions to help you enable and test it yourselves. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LFuqXEde60ewdAIKCGpBU/bf9372291e6c9ed483d3df9b8d1c2018/HTTP3-partnership-nightly-_3x-1.png" />
            
            </figure><p><a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is the third major version of the Hypertext Transfer Protocol, which takes the bold step of moving away from TCP to the new transport protocol QUIC in order to provide performance and security improvements.</p><p>During Cloudflare's Birthday Week 2019, we were <a href="/http3-the-past-present-and-future/">delighted to announce</a> that we had enabled QUIC and HTTP/3 support on the Cloudflare edge network. This was joined by support from Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all. A big part of developing new standards is interoperability, which typically means different people analysing, implementing and testing a written specification in order to prove that it is precise, unambiguous, and actually implementable.</p><p>At the time of our announcement, Chrome Canary had experimental HTTP/3 support and we were eagerly awaiting a release of Firefox Nightly. Now that Firefox supports HTTP/3 we thought we'd share some instructions to help you enable and test it yourselves.</p>
    <div>
      <h3>How do I enable HTTP/3 for my domain?</h3>
      <a href="#how-do-i-enable-http-3-for-my-domain">
        
      </a>
    </div>
    <p>Simply go to the Cloudflare dashboard and flip the switch from the "Network" tab manually:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/zkM5pyAf9MD0HhWqueVP3/e4d9d2d8ae7a37b5da5c0bda7e9447e5/http3-toggle-1.png" />
            
            </figure>
    <div>
      <h3>Using Firefox Nightly as an HTTP/3 client</h3>
      <a href="#using-firefox-nightly-as-an-http-3-client">
        
      </a>
    </div>
    <p>Firefox Nightly has experimental support for HTTP/3. In our experience things are pretty good but be aware that you might experience some teething issues, so bear that in mind if you decide to enable and experiment with HTTP/3. If you're happy with that responsibility, you'll first need to download and install the <a href="https://www.mozilla.org/firefox/channel/desktop/">latest Firefox Nightly build</a>. Then open Firefox and enable HTTP/3 by visiting "about:config" and setting "network.http.http3.enabled" to true. There are some other parameters that can be tweaked but the defaults should suffice.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UGnSGsahi5Qm5bqtL5qho/dfa340c0825d974ae1164bc1e6eaedda/firefox-h3-about-config.png" />
            
            </figure><p>about:config can be filtered by using a search term like "http3".</p><p>Once HTTP/3 is enabled, you can visit your site to test it out. A straightforward way to check if HTTP/3 was negotiated is to check the Developer Tools "Protocol" column in the "Network" tab (on Windows and Linux the Developer Tools keyboard shortcut is Ctrl+Shift+I, on macOS it's Command+Option+I). This "Protocol" column might not be visible at first, so to enable it right-click one of the column headers and check "Protocol" as shown below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/61omvb6CpN2nUSd6mFvqiB/c0a23a5a0551f5583efa1a5d76f7a94f/firefox-h3-protocol-column.png" />
            
            </figure><p>Then reload the page and you should see that "HTTP/3" is reported.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Q8evveLC4cyU0PGHISWmJ/58930682b8a9751208a2a6c3845db588/firefox-h3-success.png" />
            
            </figure><p>The aforementioned teething issues might cause HTTP/3 not to show up initially. When you enable HTTP/3 on a zone, we add a header field such as <code><i>alt-svc: h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400</i></code> to all responses for that zone. Clients see this as an advertisement to try HTTP/3 out and will take up the offer on the <b>next</b> request. To make this happen you can reload the page, but make sure that you bypass the local browser cache (via the "Disable Cache" checkbox, or the Shift-F5 key combo) or else you'll just see the protocol used to fetch the resource the first time around. Finally, Firefox's "about:networking" page lists the visited zones and the HTTP version that was used to load them; for example, this very blog.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/72pladWtRSe3QS38rhRR0F/59b97837d711a7f5bad8cf94f1d9cb43/firefox-h3-about-networking.png" />
            
            </figure><p>about:networking contains a table of all visited zones and the connection properties.</p><p>Sometimes browsers stick to an existing HTTP connection and refuse to start an HTTP/3 connection. This is hard to detect by eye, so sometimes the best option is to close the browser completely and reopen it. Finally, we've also seen some interactions with Service Workers that make it appear that a resource was fetched from the network using HTTP/1.1, when in fact it was fetched from the local Service Worker cache. In such cases, if you're keen to see HTTP/3 in action then you'll need to deregister the Service Worker. If you're in doubt about what is happening on the network, it is often useful to verify things independently, for example by capturing a packet trace and dissecting it with Wireshark.</p>
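    <p>As an aside, the alt-svc advertisement described earlier is easy to inspect programmatically. The following Python sketch is ours, purely illustrative (the full grammar is defined in RFC 7838), and parses a header value like the one Cloudflare sends:</p>
            <pre><code># Minimal Alt-Svc parser -- an illustrative sketch, not a full RFC 7838
# implementation.
def parse_alt_svc(value):
    services = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.split(";")]
        protocol, authority = parts[0].split("=", 1)
        params = dict(p.split("=", 1) for p in parts[1:] if "=" in p)
        services.append({
            "protocol": protocol.strip(),
            "authority": authority.strip().strip('"'),
            "max_age": int(params.get("ma", 86400)),  # RFC 7838 default: 24h
        })
    return services

header = 'h3-27=":443"; ma=86400, h3-28=":443"; ma=86400, h3-29=":443"; ma=86400'
for svc in parse_alt_svc(header):
    print(svc["protocol"], "->", svc["authority"])</code></pre>
    <p>Each entry names a protocol version (for example h3-29), the authority to connect to, and how long the advertisement may be cached.</p>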
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>The QUIC Working Group recently <a href="https://mailarchive.ietf.org/arch/msg/quic/F7wvKGnA1FJasmaE35XIxsc2Tno/">announced a "Working Group Last Call"</a>, which marks an important milestone in the continued maturity of the standards. From the announcement:</p><blockquote><p><i>After more than three and a half years and substantial discussion, all 845 of the design issues raised against the QUIC protocol drafts have gained consensus or have a proposed resolution. In that time the protocol has been considerably transformed; it has become more secure, much more widely implemented, and has been shown to be interoperable. Both the Chairs and the Editors feel that it is ready to proceed in standardisation.</i></p></blockquote><p>The coming months will see the specifications settle and we anticipate that implementations will continue to improve their QUIC and HTTP/3 support, eventually enabling it in their stable channels. We're pleased to continue working with <a href="https://www.cloudflare.com/case-studies/mozilla/">industry partners such as Mozilla</a> to help build a better Internet together.</p><p>In the meantime, you might want to <a href="https://developers.cloudflare.com/http3/intro">check out our guides</a> to testing with other implementations such as Chrome Canary or curl. As compatibility becomes proven, implementations will shift towards optimizing their performance; you can read about Cloudflare's efforts on <a href="/http-3-vs-http-2/">comparing HTTP/3 to HTTP/2</a> and the work we've done to improve performance by <a href="/cubic-and-hystart-support-in-quiche/">adding support for CUBIC and HyStart++</a> to our congestion control module.</p> ]]></content:encoded>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[HTTP3]]></category>
            <guid isPermaLink="false">3vxyDdvWdI550GPFsqj0D5</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
        <item>
            <title><![CDATA[Comparing HTTP/3 vs. HTTP/2 Performance]]></title>
            <link>https://blog.cloudflare.com/http-3-vs-http-2/</link>
            <pubDate>Tue, 14 Apr 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ We announced support for HTTP/3, the successor to HTTP/2, during Cloudflare’s birthday week last year. Our goal is and has always been to help build a better Internet. Even though HTTP/3 is still in draft status, we've seen a lot of interest from our users. ]]></description>
            <content:encoded><![CDATA[ <p>We announced <a href="/http3-the-past-present-and-future/">support for HTTP/3</a>, the successor to HTTP/2, during Cloudflare’s <a href="/birthday-week-2019/">birthday week</a> last year. Our goal is and has always been to help build a better Internet. Collaborating on standards is a big part of that, and we're very fortunate to do that here.</p><p>Even though <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> is still in draft status, we've seen a lot of interest from our users. So far, over 113,000 zones have activated HTTP/3 and, if you are using an experimental browser, those zones can be accessed using the new protocol! It's been great seeing so many people enable HTTP/3: having real websites accessible through HTTP/3 means browsers have more diverse properties to test against.</p><p>When we <a href="/http3-the-past-present-and-future/">launched support</a> for HTTP/3, we did so in partnership with Google, who simultaneously launched experimental support in Google Chrome. Since then, we've seen more browsers add experimental support: Firefox to their nightly <a href="https://bugzilla.mozilla.org/show_bug.cgi?id=1581637">builds</a>, other Chromium-based browsers such as Opera and Microsoft Edge through the underlying Chrome browser engine, and Safari via their <a href="https://developer.apple.com/safari/technology-preview/release-notes/">technology preview</a>. We closely follow these developments and partner wherever we can help; having a large network with many sites that have HTTP/3 enabled gives browser implementers an excellent testbed against which to try out code.</p>
    <div>
      <h3>So, what's the status and where are we now?</h3>
      <a href="#so-whats-the-status-and-where-are-we-now">
        
      </a>
    </div>
    <p>The IETF standardization process develops protocols as a series of document draft versions with the ultimate aim of producing a final draft version that is ready to be marked as an RFC. The members of the QUIC Working Group collaborate on analyzing, implementing and interoperating the specification in order to find things that don't work quite right. We launched with support for <a href="https://tools.ietf.org/html/draft-ietf-quic-http-23">Draft-23 for HTTP/3</a> and have since kept up with each new draft, with <a href="https://tools.ietf.org/html/draft-ietf-quic-http-27">27</a> being the latest at the time of writing. With each draft the group improves the quality of the QUIC definition and gets closer to "rough consensus" about how it behaves. In order to avoid a perpetual state of analysis paralysis and endless tweaking, the bar for proposing changes to the specification has been increasing with each new draft. This means that changes between versions are smaller, and that a final RFC should closely match the protocol that we've been running in production.</p>
    <div>
      <h3>Benefits</h3>
      <a href="#benefits">
        
      </a>
    </div>
    <p>One of the main touted advantages of HTTP/3 is increased performance, specifically around fetching multiple objects simultaneously. With HTTP/2, any interruption (packet loss) in the TCP connection blocks all streams (head-of-line blocking). Because HTTP/3 is UDP-based, if a packet gets dropped, it only interrupts that one stream, not all of them.</p><p>In addition, HTTP/3 offers <a href="/even-faster-connection-establishment-with-quic-0-rtt-resumption/">0-RTT</a> support, which means that subsequent connections can start up much faster by eliminating the TLS acknowledgement from the server when setting up the connection. The client can therefore start requesting data much faster than with a full TLS negotiation, so the website starts loading earlier.</p><p>The following illustrates packet loss and its impact, starting with HTTP/2 multiplexing two requests. A request comes over HTTP/2 from the client to the server requesting two resources (we’ve colored the requests and their associated responses green and yellow). The responses are broken up into multiple packets and, alas, a packet is lost, so both requests are held up.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5MQCKq4WKwM4fCqDGYJ8Ui/1f9306b608618fe78e3e57a1392a9b11/image1-1.gif" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DSF88gTgVSsmiVExIa3Ry/5e26b2b32f6bd00dda851caa4c9c46e6/image4-1.gif" />
            
            </figure><p>The above shows HTTP/3 multiplexing two requests. A packet is lost that affects the yellow response, but the green one proceeds just fine.</p><p>Improvements in session startup mean that ‘connections’ to servers start much faster, which means the browser starts to see data more quickly. We were curious to see how big the improvement would be, so to measure the gain from 0-RTT support we ran benchmarks measuring <i>time to first byte</i> (TTFB). On average, with HTTP/3 we see the first byte appearing after 176ms. With HTTP/2 we see 201ms, meaning HTTP/3 is already performing 12.4% better!</p>
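    <p>For the curious, the 12.4% figure is simply the relative reduction in average TTFB; a quick Python check:</p>
            <pre><code># Relative TTFB improvement, using the averages quoted above.
h2_ttfb_ms, h3_ttfb_ms = 201, 176
improvement = (h2_ttfb_ms - h3_ttfb_ms) / h2_ttfb_ms
print(f"{improvement:.1%}")  # prints 12.4%</code></pre>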
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5slXFJulfUbmW5NsZqm8r/f7087c07f55ab903735524e70dd04bb0/image5-6.png" />
            
            </figure><p>Interestingly, not every aspect of the protocol is governed by the drafts or RFC. Implementation choices can affect performance, such as efficient <a href="/accelerating-udp-packet-transmission-for-quic/">packet transmission</a> and choice of congestion control algorithm. Congestion control is a technique your computer and server use to adapt to overloaded networks: by dropping packets, transmission is subsequently throttled. Because QUIC is a new protocol, getting the congestion control design and implementation right requires experimentation and tuning.</p><p>In order to provide a safe and simple starting point, the Loss Detection and Congestion Control specification recommends the <a href="https://en.wikipedia.org/wiki/TCP_congestion_control#TCP_Tahoe_and_Reno">Reno</a> algorithm but allows endpoints to choose any algorithm they might like.  We started with <a href="https://en.wikipedia.org/wiki/TCP_congestion_control#TCP_New_Reno">New Reno</a> but we know from experience that we can get better performance with something else. We have recently moved to <a href="https://en.wikipedia.org/wiki/CUBIC_TCP">CUBIC</a> and on our network with larger size transfers and packet loss, CUBIC shows improvement over New Reno. Stay tuned for more details in future.</p><p>For our existing HTTP/2 stack, we currently support <a href="https://github.com/google/bbr">BBR v1</a> (TCP). This means that in our tests, we’re not performing an exact apples-to-apples comparison as these congestion control algorithms will behave differently for smaller vs larger transfers. That being said, we can already see a speedup in smaller websites using HTTP/3 when compared to HTTP/2. With larger zones, the improved congestion control of our tuned HTTP/2 stack shines in performance.</p><p>For a small test page of 15KB, HTTP/3 takes an average of 443ms to load compared to 458ms for HTTP/2. 
However, once we increase the page size to 1MB that advantage disappears: HTTP/3 is just slightly slower than HTTP/2 on our network today, taking 2.33s to load versus 2.30s.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UrCQVwplKQxT7aSzqwMl6/4ff62d1bdb67e54c29b553d260f11aa8/image2-11.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2KPuZxBl9m6TOyvrGZfMFw/5d22e954c62325af32c99c5783f62560/image6-4.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/JDw2bdaT0Vcmd4Jyg7Yhl/af2af92b3bdc2dd1891062844027fe70/image3-11.png" />
            
            </figure><p>Synthetic benchmarks are interesting, but we wanted to know how HTTP/3 would perform in the real world.</p><p>To measure, we wanted a third party that could load websites on our network, mimicking a browser. WebPageTest is a common framework that is used to measure the page load time, with nice waterfall charts. For analyzing the backend, we used our in-house <a href="https://support.cloudflare.com/hc/en-us/articles/360033929991-Cloudflare-Browser-Insights">Browser Insights</a> to capture timings as our edge sees them. We then tied both pieces together with bits of automation.</p><p>As a test case we decided to use <a href="/">this very blog</a> for our <a href="https://www.cloudflare.com/application-services/solutions/app-performance-monitoring/">performance monitoring</a>. We configured our own instances of WebPageTest spread over the world to load these sites over both HTTP/2 and HTTP/3. We also enabled HTTP/3 and Browser Insights. So, every time our test scripts kick off a webpage test with an HTTP/3-capable browser loading the page, browser analytics report the data back. Rinse and repeat for HTTP/2 to be able to compare.</p><p>The following graph shows the page load time for a real-world page, <a href="/">blog.cloudflare.com</a>, comparing the performance of HTTP/3 and HTTP/2. We have these performance measurements running from different geographical locations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Xwp5YGAE5hCP7m0TQeGeE/52a42c9ad4a53be01e22fba4888c27de/image7-5.png" />
            
            </figure><p>As you can see, HTTP/3 performance still trails HTTP/2 performance by about 1-4% on average in North America, and similar results are seen in Europe, Asia and South America. We suspect this could be due to the difference in congestion algorithms: HTTP/2 on BBR v1 vs. HTTP/3 on CUBIC. In the future, we’ll work to support the same congestion algorithm on both to get a more accurate apples-to-apples comparison.</p>
    <div>
      <h3>Conclusion</h3>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Overall, we’re very excited to help push this standard forward. Our implementation is holding up well, offering better performance in some cases and at worst similar to HTTP/2. As the standard finalizes, we’re looking forward to seeing browsers add support for HTTP/3 in mainstream versions. As for us, we continue to support the latest drafts while at the same time looking for more ways to leverage HTTP/3 to get even better performance, be it congestion tuning, <a href="/adopting-a-new-approach-to-http-prioritization/">prioritization</a> or system capacity (CPU and raw network throughput).</p><p>In the meantime, if you’d like to try it out, just enable HTTP/3 on our dashboard and download a nightly version of one of the major browsers. Instructions on how to enable HTTP/3 can be found on our <a href="https://developers.cloudflare.com/http3/intro/">developer documentation</a>.</p>
            <category><![CDATA[HTTP2]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Performance]]></category>
            <guid isPermaLink="false">7uJaKjNWQi13PNs4O9ylJB</guid>
            <dc:creator>Sreeni Tellakula</dc:creator>
        </item>
        <item>
            <title><![CDATA[A cost-effective and extensible testbed for transport protocol development]]></title>
            <link>https://blog.cloudflare.com/a-cost-effective-and-extensible-testbed-for-transport-protocol-development/</link>
            <pubDate>Tue, 14 Jan 2020 16:07:15 GMT</pubDate>
            <description><![CDATA[ At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on QUIC and HTTP/3, which are still in IETF draft, but gaining a lot of interest. ]]></description>
            <content:encoded><![CDATA[ <p><i>This was originally published on </i><a href="https://calendar.perfplanet.com/2019/how-to-develop-a-practical-transport-protocol/"><i>Perf Planet's 2019 Web Performance Calendar</i></a><i>.</i></p><p>At Cloudflare, we develop protocols at multiple layers of the network stack. In the past, we focused on HTTP/1.1, HTTP/2, and TLS 1.3. Now, we are working on <a href="/http3-the-past-present-and-future/">QUIC and HTTP/3</a>, which are still in IETF draft, but gaining a lot of interest.</p><p>QUIC is a secure and multiplexed transport protocol that aims to perform better than TCP under some network conditions. It is specified in a family of documents: a transport layer which specifies packet format and basic state machine, recovery and congestion control, security based on TLS 1.3, and an HTTP application layer mapping, which is now called <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>.</p><p>Let’s focus on the transport and recovery layer first. This layer provides a basis for what is sent on the wire (the packet binary format) and how we send it reliably. It includes how to open the connection, how to handshake a new secure session with the help of TLS, how to send data reliably and how to react when there is packet loss or reordering of packets. It also includes flow control and congestion control to interact well with other transport protocols in the same network. With confidence in the basic transport and recovery layer, we can take a look at higher application layers such as HTTP/3.</p><p>To develop such a transport protocol, we need a development environment with multiple stages. Since this is a network protocol, it’s best to test in an actual physical network to see how it works on the wire. We may start the development using localhost, but after some time we may want to send and receive packets with other hosts. 
We can build a lab with a couple of virtual machines, using VirtualBox, VMware or even Docker. We also have a local testing environment with a Linux VM. But sometimes these have a limited network (localhost only) or are noisy due to other processes in the same host or virtual machines.</p><p>The next step is to have a test lab, typically an isolated network focused solely on protocol analysis and consisting of dedicated x86 hosts. Lab configuration is particularly important for testing various cases - there is no one-size-fits-all scenario for protocol testing. For example, EDGE is still running in production mobile networks but LTE is dominant and 5G deployment is in its early stages. WiFi is very common these days. We want to test our protocol in all those environments. Of course, we can't buy every type of machine or have a very expensive network simulator for every type of environment, so using cheap hardware and an open source OS where we can configure similar environments is ideal.</p>
    <div>
      <h2>The QUIC Protocol Testing lab</h2>
      <a href="#the-quic-protocol-testing-lab">
        
      </a>
    </div>
    <p>The goal of the QUIC testing lab is to aid transport layer protocol development. To develop a transport protocol we need to have a way to control our network environment and a way to get as many different types of debugging data as possible. Also we need to get metrics for comparison with other protocols in production.</p><p>The QUIC Testing Lab has the following goals:</p><ul><li><p><b><i>Help with multiple transport protocol development</i></b>: Developing a new transport layer requires many iterations, from building and validating packets as per protocol spec, to making sure everything works fine under moderate load, to very harsh conditions such as low bandwidth and high packet loss. We need a way to run tests with various network conditions reproducibly in order to catch unexpected issues.</p></li><li><p><b><i>Debugging multiple transport protocol development</i></b>: Recording as much debugging info as we can is important for fixing bugs. Looking into packet captures definitely helps but we also need a detailed debugging log of the server and client to understand the what and why for each packet. For example, when a packet is sent, we want to know why. Is this because there is an application which wants to send some data? Or is this a retransmit of data previously known as lost? Or is this a loss probe which is not an actual packet loss but sent to see if the network is lossy?</p></li><li><p><b><i>Performance comparison between each protocol</i></b>: We want to understand the performance of a new protocol by comparison with existing protocols such as TCP, or with a previous version of the protocol under development. 
Also we want to test with varying parameters such as changing the congestion control mechanism, changing various timeouts, or changing the buffer sizes at various levels of the stack.</p></li><li><p><b><i>Finding a bottleneck or errors easily</i></b>: Running tests, we may see an unexpected error - a transfer that timed out, ended with an error, or was corrupted at the client side - so we need to make sure every test ran correctly, by comparing a checksum of the original file with what was actually downloaded, or by checking error codes at the protocol or API level.</p></li></ul><p>When we have a test lab with separate hardware, we get the following benefits:</p><ul><li><p>We can configure the testing lab without public Internet access - safe and quiet.</p></li><li><p>We have handy access to the hardware and its console for maintenance purposes, or for adding or updating hardware.</p></li><li><p>We can try other CPU architectures. For clients we use the Raspberry Pi for regular testing because it uses the ARM architecture (32-bit or 64-bit), similar to modern smartphones, so testing on ARM helps with compatibility before moving to a smartphone OS.</p></li><li><p>We can add a real smartphone for testing, such as an Android phone or iPhone. We can test with WiFi, but these devices also support Ethernet, so we can test them on a wired network for better consistency.</p></li></ul>
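    <p>The integrity check mentioned above can be as simple as comparing hashes of the original test object and the downloaded copy. A Python sketch (the file paths here are hypothetical):</p>
            <pre><code>import hashlib

def sha256_of(path, chunk_size=1 << 16):
    """Stream the file so large test objects don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def transfer_ok(original_path, downloaded_path):
    """True if the downloaded copy matches the original byte for byte."""
    return sha256_of(original_path) == sha256_of(downloaded_path)</code></pre>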
    <div>
      <h2>Lab Configuration</h2>
      <a href="#lab-configuration">
        
      </a>
    </div>
    <p>Here is a diagram of our QUIC Protocol Testing Lab:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1O1CdD682XoQE9Q68bPVkh/a63bd6b8a1bafa516719cfcf0a82c033/Screenshot-2019-07-01-00.35.06.png" />
            
            </figure><p>This is a conceptual diagram and we need to configure a switch for connecting each machine. Currently, we have Raspberry Pis (2 and 3) as an Origin and a Client, and small Intel x86 boxes for the Traffic Shaper and Edge server, plus Ethernet switches for interconnectivity.</p><ul><li><p>Origin simply serves HTTP and HTTPS test objects using a web server. The Client may download a file from Origin directly to simulate a download straight from a customer's origin server.</p></li><li><p>Client downloads a test object from Origin or Edge, using a different protocol. In a typical configuration the Client connects to Edge instead of Origin, simulating an edge server in the real world. For TCP/HTTP we are using the curl command line client and for QUIC, <a href="https://github.com/cloudflare/quiche">quiche’s</a> http3_client with some modification.</p></li><li><p>Edge runs Cloudflare's web server to serve HTTP/HTTPS via TCP and also the QUIC protocol using quiche. The Edge server is installed with the same Linux kernel used on Cloudflare's production machines in order to have the same low-level network stack.</p></li><li><p>Traffic Shaper sits between Client and Edge (and Origin), controlling network conditions. Currently we are using FreeBSD and ipfw + dummynet. Traffic shaping can also be done using Linux's netem, which provides additional network simulation features.</p></li></ul><p>The goal is to run tests with various network conditions, such as bandwidth, latency and packet loss upstream and downstream. The lab is able to run plaintext HTTP tests, but currently our focus is HTTPS over TCP and HTTP/3 over QUIC. Since QUIC runs over UDP, both TCP and UDP traffic need to be controlled.</p>
    <div>
      <h2>Test Automation and Visualization</h2>
      <a href="#test-automation-and-visualization">
        
      </a>
    </div>
    <p>In the lab, we have a script installed on the Client which can run a batch of tests with various configuration parameters - for each test combination, we can define a test configuration, including:</p><ul><li><p>Network Condition - Bandwidth, Latency, Packet Loss (upstream and downstream)</p></li></ul><p>For example, using the netem traffic shaper we can simulate an LTE network as below (<a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">RTT</a>=50ms, BW=22Mbps upstream and downstream, with a BDP-sized queue):</p>
            <pre><code>$ tc qdisc add dev eth0 root handle 1:0 netem delay 25ms
$ tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 22mbit buffer 68750 limit 70000</code></pre>
            <ul><li><p>Test Object sizes - 1KB, 8KB, … 32MB</p></li><li><p>Test Protocols: HTTPS (TCP) and QUIC (UDP)</p></li><li><p>Number of runs and number of requests in a single connection</p></li></ul><p>The test script outputs a CSV file of results for importing into other tools for data processing and visualization - such as Google Sheets, Excel or even a Jupyter notebook. It can also post the results to a database (ClickHouse in our case), so we can query and visualize them.</p><p>Sometimes a whole test combination takes a long time - the current standard test set with simulated 2G, 3G, LTE, WiFi and various object sizes repeated 10 times for each request may take several hours to run. Large object testing on a slow network takes most of the time, so sometimes we also need to run a limited test (e.g. testing LTE-like conditions only for a smoke test) for quick debugging.</p>
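    <p>The buffer and limit values in the tc example above follow from the bandwidth-delay product (22 Mbit/s × 25 ms of one-way delay = 68750 bytes). The following Python sketch derives the commands from a target profile; the small headroom added to the limit is our assumption, chosen to match the example:</p>
            <pre><code># Derive netem/tbf shaping commands from a target network profile.
# BDP = bandwidth (bytes/s) x one-way delay (s); illustrative sketch only.
def shaper_commands(dev, rtt_ms, bw_mbit, headroom=1250):
    delay_ms = rtt_ms // 2  # netem delays each direction by half the RTT
    bdp = int(bw_mbit * 1e6 / 8 * delay_ms / 1000)
    return [
        f"tc qdisc add dev {dev} root handle 1:0 netem delay {delay_ms}ms",
        f"tc qdisc add dev {dev} parent 1:1 handle 10: "
        f"tbf rate {bw_mbit}mbit buffer {bdp} limit {bdp + headroom}",
    ]

for cmd in shaper_commands("eth0", rtt_ms=50, bw_mbit=22):
    print(cmd)</code></pre>
    <p>For the LTE-like profile this reproduces the 68750-byte buffer shown earlier.</p>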
    <div>
      <h3>Chart using Google Sheets:</h3>
      <a href="#chart-using-google-sheets">
        
      </a>
    </div>
    <p>The following comparison chart shows the total transfer time in msec for TCP vs QUIC under different network conditions. The QUIC protocol used here is a development version.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2umJkG3YBHNnxyD2tJRHQA/c96ef4601d8d20c9760b45ad321e6135/Screen-Shot-2020-01-13-at-3.09.41-PM.png" />
            
            </figure>
    <div>
      <h2>Debugging and performance analysis using a smartphone</h2>
      <a href="#debugging-and-performance-analysis-using-of-a-smartphone">
        
      </a>
    </div>
    <p>Mobile devices have become a crucial part of our day-to-day life, so testing the new transport protocol on them is critically important for mobile app performance. To facilitate that, we need a mobile test app which will proxy data over the new transport protocol under development. With this we have the ability to analyze protocol functionality and performance on mobile devices under different network conditions.</p><p>Adding a smartphone to the testbed mentioned above gives an advantage in terms of understanding real performance issues. The major smartphone operating systems, iOS and Android, have quite different networking stacks. Adding a smartphone to the testbed gives us the ability to understand these operating systems' network stacks in depth, which aids new protocol designs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RLA6wO7vjRol9o6nlzj34/7c1a7904a379b1e8079853c35597173c/Screen-Shot-2020-01-13-at-3.52.03-PM.png" />
            
            </figure><p>The above figure shows the network block diagram of another, similar lab testbed used for protocol testing, where a smartphone is connected both wired and wirelessly. A Linux netem-based traffic shaper sits between the client and server, shaping the traffic. Various networking profiles are fed to the traffic shaper to mimic real-world scenarios. The client can be either an Android or iOS smartphone; the server is a vanilla web server serving static files. Client, server and traffic shaper are all connected to the Internet along with the private lab network for management purposes.</p><p>The lab has both Android and iOS devices installed with a test app built with proprietary client proxy software for proxying data over the new transport protocol under development. The test app can also make HTTP requests over TCP for comparison purposes.</p><p>The test app can be used to issue multiple HTTPS requests of different object sizes, sequentially and concurrently, using TCP and QUIC as the underlying transport protocol. The TTOTAL (total transfer time) of each HTTPS request is then used to compare TCP and QUIC performance over different network conditions. One such comparison is shown below:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1Eh2Kl4C9RI40oKf8Z3CJY/a318afd23e177895137ff481bba2dfe1/Screen-Shot-2020-01-13-at-4.08.23-PM.png" />
            
            </figure><p>The table above shows the total transfer time for TCP and QUIC requests over an LTE network profile, fetching different objects at different concurrency levels using the test app. Here TCP goes over the native OS network stack, and QUIC goes over the Cloudflare QUIC stack.</p><p>Debugging network performance issues is hard when it comes to mobile devices. By adding an actual smartphone to the testbed itself, we can take packet captures at different layers. These are critical in analyzing and understanding protocol performance.</p><p>It's easy and straightforward to capture packets and analyze them with the tcpdump tool on x86 boxes, but capturing packets on iOS and Android devices is a challenge. On iOS devices, ‘rvictl’ lets us capture packets on an external interface, but it has drawbacks, such as inaccurate timestamps. Since we are dealing with millisecond-level events, timestamps need to be accurate to analyze the root cause of a problem.</p><p>We can capture packets on internal loopback interfaces on jailbroken iPhones and rooted Android devices. Jailbreaking a recent iOS device is nontrivial. We also need to make sure that auto-update of any sort is disabled on such a phone, otherwise it would undo the jailbreak and the whole process would have to start again. With a jailbroken phone we have root access to the device, which lets us take packet captures as needed using tcpdump.</p><p>Packet captures taken on jailbroken iOS devices or rooted Android devices connected to the lab testbed help us analyze performance bottlenecks and improve protocol performance.</p><p>iOS and Android have different network stacks in their core operating systems. These packet captures also help us understand the network stacks of these mobile devices; for example, on iOS devices, packets punted through the loopback interface had a mysterious delay of 5 to 7 ms.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Cloudflare is actively involved in helping to drive the QUIC and HTTP/3 standards forward by testing and optimizing these new protocols in simulated real-world environments. By simulating a wide variety of networks, we are working on our mission of helping build a better Internet. For everyone, everywhere.</p><p><i>We would like to thank SangJo Lee, Hiren Panchasara, Lucas Pardue and Sreeni Tellakula for their contributions.</i></p> ]]></content:encoded>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[TLS 1.3]]></category>
            <category><![CDATA[TCP]]></category>
            <guid isPermaLink="false">58abpUfUPAE7n3X9TDOpyt</guid>
            <dc:creator>Lohith Bellad</dc:creator>
            <dc:creator>Junho Choi</dc:creator>
        </item>
        <item>
            <title><![CDATA[Accelerating UDP packet transmission for QUIC]]></title>
            <link>https://blog.cloudflare.com/accelerating-udp-packet-transmission-for-quic/</link>
            <pubDate>Wed, 08 Jan 2020 17:08:00 GMT</pubDate>
            <description><![CDATA[ Significant work has gone into optimizing TCP, UDP hasn't received as much attention, putting QUIC at a disadvantage. Let's explore a few tricks that help mitigate this. ]]></description>
            <content:encoded><![CDATA[ <p><i>This was originally published on </i><a href="https://calendar.perfplanet.com/2019/accelerating-udp-packet-transmission-for-quic/"><i>Perf Planet's 2019 Web Performance Calendar</i></a><i>.</i></p><p><a href="/the-road-to-quic/">QUIC</a>, the new Internet transport protocol designed to accelerate HTTP traffic, is delivered on top of <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP datagrams</a>, to ease deployment and avoid interference from network appliances that drop packets from unknown protocols. This also allows QUIC implementations to live in user-space, so that, for example, browsers will be able to implement new protocol features and ship them to their users without having to wait for operating system updates.</p><p>But while a lot of work has gone into optimizing TCP implementations as much as possible over the years, including building offloading capabilities in both software (like in operating systems) and hardware (like in network interfaces), UDP hasn't received quite as much attention as TCP, which puts QUIC at a disadvantage. In this post we'll look at a few tricks that help mitigate this disadvantage for UDP, and by association QUIC.</p><p>For the purposes of this blog post, we will concentrate only on measuring the throughput of QUIC connections, which, while necessary, is not enough to paint an accurate overall picture of the performance of the QUIC protocol (or its implementations) as a whole.</p>
    <div>
      <h3>Test Environment</h3>
      <a href="#test-environment">
        
      </a>
    </div>
    <p>The client used in the measurements is h2load, <a href="https://github.com/nghttp2/nghttp2/tree/quic">built with QUIC and HTTP/3 support</a>, while the server is NGINX, built with <a href="/experiment-with-http-3-using-nginx-and-quiche/">the open-source QUIC and HTTP/3 module provided by Cloudflare</a>, which is based on quiche (<a href="https://github.com/cloudflare/quiche">github.com/cloudflare/quiche</a>), Cloudflare's own <a href="/enjoy-a-slice-of-quic-and-rust/">open-source implementation of QUIC and HTTP/3</a>.</p><p>The client and server are run on the same host (my laptop) running Linux 5.3, so the numbers don’t necessarily reflect what one would see in a production environment over a real network, but it should still be interesting to see how much of an impact each of the techniques has.</p>
    <div>
      <h3>Baseline</h3>
      <a href="#baseline">
        
      </a>
    </div>
    <p>Currently the code that implements QUIC in NGINX uses the <code>sendmsg()</code> system call to send a single UDP packet at a time.</p>
            <pre><code>ssize_t sendmsg(int sockfd, const struct msghdr *msg,
    int flags);</code></pre>
            <p>The <code>struct msghdr</code> carries a <code>struct iovec</code> which can in turn carry multiple buffers. However, all of the buffers within a single iovec will be merged together into a single UDP datagram during transmission. The kernel will then take care of encapsulating the buffer in a UDP packet and sending it over the wire.</p>
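<p>To make this concrete, here is a minimal sketch (not the actual NGINX code; the helper name is ours, and a connected UDP socket is assumed) of transmitting one QUIC packet with <code>sendmsg()</code>:</p>

```c
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

/* Sketch: transmit one QUIC packet as a single UDP datagram.
 * The iovec could carry several buffers, but they would all be
 * merged into this one datagram during transmission. */
ssize_t send_one_packet(int fd, const uint8_t *pkt, size_t len)
{
    struct iovec iov = {
        .iov_base = (void *)pkt, /* packet bytes to send */
        .iov_len  = len,
    };
    struct msghdr msg = {
        .msg_iov    = &iov, /* one msghdr -> one UDP datagram */
        .msg_iovlen = 1,
        /* msg_name/msg_namelen left zeroed: connected socket */
    };
    return sendmsg(fd, &msg, 0);
}
```

<p>Sending a flight of N packets this way costs N system calls, one per packet.</p>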
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/774r0FpU47qMQ5bbxIOPL5/3560c09b55949e3c406ad958498e7fd4/sendmsg.png" />
            
            </figure><p>The throughput of this particular implementation tops out at around 80-90 MB/s, as measured by h2load when performing 10 sequential requests for a 100 MB resource.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4V8JtNxw1ogFryQ7xBVcgk/74bf3551df4837143a4cb2dbc53e3f84/sendmsg-chart.png" />
            
            </figure>
    <div>
      <h3>sendmmsg()</h3>
      <a href="#sendmmsg">
        
      </a>
    </div>
    <p>Because <code>sendmsg()</code> only sends a single UDP packet at a time, it needs to be invoked a great many times in order to transmit all of the QUIC packets required to deliver the requested resources, as illustrated by the following bpftrace command:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 904539</code></pre>
            <p>Each of those system calls causes an expensive context switch between the application and the kernel, thus impacting throughput.</p><p>But while <code>sendmsg()</code> only transmits a single UDP packet at a time for each invocation, its close cousin <code>sendmmsg()</code> (note the additional “m” in the name) is able to batch multiple packets per system call:</p>
            <pre><code>int sendmmsg(int sockfd, struct mmsghdr *msgvec,
    unsigned int vlen, int flags);</code></pre>
            <p>Multiple <code>struct mmsghdr</code> structures can be passed to the kernel as an array, each in turn carrying a single <code>struct msghdr</code> with its own <code>struct iovec</code>, with each element in the <code>msgvec</code> array representing a single UDP datagram.</p>
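<p>As a hedged sketch (the helper name and fixed batch size are our own choices, and a connected UDP socket is assumed), batching packets with <code>sendmmsg()</code> could look like this:</p>

```c
#define _GNU_SOURCE /* sendmmsg() is a GNU/Linux extension */
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define MAX_BATCH 64

/* Sketch: send up to MAX_BATCH packets in a single system call.
 * Each msgvec element becomes its own UDP datagram. Returns the
 * number of datagrams actually transmitted, or -1 on error. */
int send_batch(int fd, uint8_t *pkts[], size_t lens[], unsigned int count)
{
    struct mmsghdr msgs[MAX_BATCH];
    struct iovec iovs[MAX_BATCH];

    if (count > MAX_BATCH)
        count = MAX_BATCH;

    memset(msgs, 0, sizeof(msgs[0]) * count);

    for (unsigned int i = 0; i < count; i++) {
        iovs[i].iov_base = pkts[i];
        iovs[i].iov_len  = lens[i];
        msgs[i].msg_hdr.msg_iov    = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    return sendmmsg(fd, msgs, count, 0);
}
```

<p>The caller still has to check the return value, since the kernel may transmit fewer datagrams than requested.</p>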
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5QO4EdUwmU9MJhSCY2wwC1/ac6bc505e3b1f3b4d571910f13905131/sendmmsg.png" />
            
            </figure><p>Let's see what happens when NGINX is updated to use <code>sendmmsg()</code> to send QUIC packets:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 2437
@[tracepoint:syscalls:sys_enter_sendmmsg]: 15676</code></pre>
            <p>The number of system calls went down dramatically, which translates into an increase in throughput, though not quite as big as the decrease in syscalls:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fv3ZaxYLZmteczFchjfiO/e0056c40dacea0422ed53edaf8158869/sendmmsg-chart.png" />
            
            </figure>
    <div>
      <h3>UDP segmentation offload</h3>
      <a href="#udp-segmentation-offload">
        
      </a>
    </div>
    <p>With <code>sendmsg()</code> as well as <code>sendmmsg()</code>, the application is responsible for separating each QUIC packet into its own buffer in order for the kernel to be able to transmit it. The implementation in NGINX uses static buffers, so there is no allocation overhead, but all of these buffers still need to be traversed by the kernel during transmission, which can add significant overhead.</p><p>Linux supports a feature, Generic Segmentation Offload (GSO), which allows the application to pass a single "super buffer" to the kernel, which will then take care of segmenting it into smaller packets. The kernel will try to postpone the segmentation as much as possible to reduce the overhead of traversing outgoing buffers (some NICs even support hardware segmentation, but it was not tested in this experiment due to lack of capable hardware). Originally GSO was only supported for TCP, but support for UDP GSO was added in Linux 4.18 as well.</p><p>This feature can be controlled using the <code>UDP_SEGMENT</code> socket option:</p>
            <pre><code>setsockopt(fd, SOL_UDP, UDP_SEGMENT, &amp;gso_size, sizeof(gso_size));</code></pre>
            <p>As well as via ancillary data, to control segmentation for each <code>sendmsg()</code> call:</p>
            <pre><code>cm = CMSG_FIRSTHDR(&amp;msg);
cm-&gt;cmsg_level = SOL_UDP;
cm-&gt;cmsg_type = UDP_SEGMENT;
cm-&gt;cmsg_len = CMSG_LEN(sizeof(uint16_t));
*((uint16_t *) CMSG_DATA(cm)) = gso_size;</code></pre>
            <p>Here <code>gso_size</code> is the size of each segment that forms the "super buffer" passed to the kernel by the application. Once configured, the application can provide one contiguous large buffer containing a number of packets of <code>gso_size</code> length (as well as a final smaller packet), which will then be segmented by the kernel (or by the NIC, if hardware segmentation offloading is supported and enabled).</p>
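<p>Putting this together, a minimal sketch of a GSO send (the helper name is ours; a connected UDP socket and a kernel with UDP GSO support, i.e. Linux 4.18 or later, are assumed) might look like:</p>

```c
#include <stdint.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/udp.h> /* SOL_UDP */
#include <sys/socket.h>

#ifndef UDP_SEGMENT
#define UDP_SEGMENT 103 /* from linux/udp.h, for older headers */
#endif

/* Sketch: send one "super buffer" of len bytes that the kernel will
 * segment into datagrams of gso_size bytes each (the final one may
 * be shorter). Assumes a connected UDP socket. */
ssize_t send_gso(int fd, const uint8_t *buf, size_t len, uint16_t gso_size)
{
    struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
    char ctrl[CMSG_SPACE(sizeof(uint16_t))] = { 0 };

    struct msghdr msg = {
        .msg_iov        = &iov,
        .msg_iovlen     = 1,
        .msg_control    = ctrl,
        .msg_controllen = sizeof(ctrl),
    };

    /* Attach the segment size as ancillary data for this call. */
    struct cmsghdr *cm = CMSG_FIRSTHDR(&msg);
    cm->cmsg_level = SOL_UDP;
    cm->cmsg_type  = UDP_SEGMENT;
    cm->cmsg_len   = CMSG_LEN(sizeof(uint16_t));
    memcpy(CMSG_DATA(cm), &gso_size, sizeof(gso_size));

    return sendmsg(fd, &msg, 0);
}
```

<p>On kernels without UDP GSO support, the call fails and the application has to fall back to per-packet transmission.</p>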
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3vQo11I0RupCQ4msUqj0Ve/dcec6ac6c0bea7c737aa9fa822e69d0a/sendmsg-gso.png" />
            
            </figure><p><a href="https://github.com/torvalds/linux/blob/80a0c2e511a97e11d82e0ec11564e2c3fe624b0d/include/linux/udp.h#L94">Up to 64 segments</a> can be batched with the <code>UDP_SEGMENT</code> option.</p><p>GSO with plain <code>sendmsg()</code> already delivers a significant improvement:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4q2OxEsgZcsw2JXc8JcAfk/a64c9b48cad41378122e7d7c5a88e67a/gso-chart.png" />
            
            </figure><p>And indeed the number of syscalls also went down significantly compared to plain <code>sendmsg()</code>:</p>
            <pre><code>% sudo bpftrace -p $(pgrep nginx) -e 'tracepoint:syscalls:sys_enter_sendm* { @[probe] = count(); }'
Attaching 2 probes...
 
 
@[tracepoint:syscalls:sys_enter_sendmsg]: 18824</code></pre>
            <p>GSO can also be combined with <code>sendmmsg()</code> to deliver an even bigger improvement. The idea is that each <code>struct msghdr</code> can be segmented in the kernel by setting the <code>UDP_SEGMENT</code> option using ancillary data, allowing an application to pass multiple “super buffers”, each carrying up to 64 segments, to the kernel in a single system call.</p><p>The improvement is again fairly significant:</p>
    <div>
      <h3>Evolving from AFAP</h3>
      <a href="#evolving-from-afap">
        
      </a>
    </div>
    <p>Transmitting packets as fast as possible is easy to reason about, and there's much fun to be had in optimizing applications for that, but in practice this is not always the best strategy when optimizing protocols for the Internet.</p><p>Bursty traffic is more likely to cause or be affected by congestion on any given network path, which will inevitably defeat any optimization implemented to increase transmission rates.</p><p>Packet pacing is an effective technique to squeeze out more performance from a network flow. The idea is that adding a short delay between each outgoing packet will smooth out bursty traffic and reduce the chance of congestion and packet loss. For TCP this was originally implemented in Linux via the fq packet scheduler, and later by the BBR congestion control algorithm implementation, which implements its own pacer.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1PkaZcKDkzzjUDhFLRT1jw/a4247010827f1763bd7560894e30938e/afap.png" />
            
            </figure><p>Due to the nature of current QUIC implementations, which reside entirely in user-space, pacing of QUIC packets conflicts with any of the techniques explored in this post, because pacing each packet separately during transmission will prevent any batching on the application side, and in turn batching will prevent pacing, as batched packets will be transmitted as fast as possible once received by the kernel.</p><p>However, Linux provides some facilities to offload the pacing to the kernel and give back some control to the application:</p><ul><li><p><b>SO_MAX_PACING_RATE</b>: an application can define this socket option to instruct the fq packet scheduler to pace outgoing packets up to the given rate. This works for UDP sockets as well, but it is yet to be seen how this can be integrated with QUIC, as a single UDP socket can be used for multiple QUIC connections (unlike TCP, where each connection has its own socket). In addition, this is not very flexible, and might not be ideal when implementing the BBR pacer.</p></li><li><p><b>SO_TXTIME / SCM_TXTIME</b>: an application can use these options to schedule transmission of specific packets at specific times, essentially instructing fq to delay packets until the provided timestamp is reached. This gives the application a lot more control, and can be easily integrated into <code>sendmsg()</code> as well as <code>sendmmsg()</code>. But it does not yet support specifying different times for each packet when GSO is used, as there is no way to define multiple timestamps for packets that need to be segmented (each segmented packet essentially ends up being sent at the same time anyway).</p></li></ul><p>While the performance gains achieved by using the techniques illustrated here are fairly significant, there are still open questions around how any of this will work with pacing, so more experimentation is required.</p> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[UDP]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[HTTP3]]></category>
            <guid isPermaLink="false">3pwKBhG2s8cT4COiXLHTyT</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Experiment with HTTP/3 using NGINX and quiche]]></title>
            <link>https://blog.cloudflare.com/experiment-with-http-3-using-nginx-and-quiche/</link>
            <pubDate>Thu, 17 Oct 2019 14:00:00 GMT</pubDate>
            <description><![CDATA[ Just a few weeks ago we announced the availability on our edge network of HTTP/3, the new revision of HTTP intended to improve security and performance on the Internet. Everyone can now enable HTTP/3 on their Cloudflare zone ]]></description>
            <content:encoded><![CDATA[ <p>Just a few weeks ago <a href="/http3-the-past-present-and-future/">we announced</a> the availability on our edge network of <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, the new revision of HTTP intended to improve security and performance on the Internet. Everyone can now enable HTTP/3 on their Cloudflare zone and experiment with it using <a href="/http3-the-past-present-and-future/#using-google-chrome-as-an-http-3-client">Chrome Canary</a> as well as <a href="/http3-the-past-present-and-future/#using-curl">curl</a>, among other clients.</p><p>We have previously made available <a href="https://github.com/cloudflare/quiche/blob/master/examples/http3-server.rs">an example HTTP/3 server as part of the quiche project</a> to allow people to experiment with the protocol, but it’s quite limited in the functionality that it offers, and was never intended to replace other general-purpose web servers.</p><p>We are now happy to announce that <a href="/enjoy-a-slice-of-quic-and-rust/">our implementation of HTTP/3 and QUIC</a> can be integrated into your own installation of NGINX as well. This is made available <a href="https://github.com/cloudflare/quiche/tree/master/extras/nginx">as a patch</a> to NGINX that can be applied and built directly with the upstream NGINX codebase.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/RGn7FpVUT1wQ1v5yu74c3/d6db309a2e2d99da184b3bbb123f3fb5/quiche-banner-copy_2x.png" />
            
            </figure><p>It’s important to note that <b>this is not officially supported or endorsed by the NGINX project</b>, it is just something that we, Cloudflare, want to make available to the wider community to help push adoption of QUIC and HTTP/3.</p>
    <div>
      <h3>Building</h3>
      <a href="#building">
        
      </a>
    </div>
    <p>The first step is to <a href="https://nginx.org/en/download.html">download and unpack the NGINX source code</a>. Note that the HTTP/3 and QUIC patch only works with the 1.16.x release branch (the latest stable release being 1.16.1).</p>
            <pre><code> % curl -O https://nginx.org/download/nginx-1.16.1.tar.gz
 % tar xvzf nginx-1.16.1.tar.gz</code></pre>
            <p>As well as quiche, the underlying implementation of HTTP/3 and QUIC:</p>
            <pre><code> % git clone --recursive https://github.com/cloudflare/quiche</code></pre>
            <p>Next you’ll need to apply the patch to NGINX:</p>
            <pre><code> % cd nginx-1.16.1
 % patch -p01 &lt; ../quiche/extras/nginx/nginx-1.16.patch</code></pre>
            <p>And finally build NGINX with HTTP/3 support enabled:</p>
            <pre><code> % ./configure                          	\
   	--prefix=$PWD                       	\
   	--with-http_ssl_module              	\
   	--with-http_v2_module               	\
   	--with-http_v3_module               	\
   	--with-openssl=../quiche/deps/boringssl \
   	--with-quiche=../quiche
 % make</code></pre>
            <p>The above command instructs the NGINX build system to enable HTTP/3 support (<code>--with-http_v3_module</code>) by using the quiche library found in the path it was previously downloaded into (<code>--with-quiche=../quiche</code>), as well as TLS and HTTP/2. Additional build options can be added as needed.</p><p>You can check out the full instructions <a href="https://github.com/cloudflare/quiche/tree/master/extras/nginx#readme">here</a>.</p>
    <div>
      <h3>Running</h3>
      <a href="#running">
        
      </a>
    </div>
    <p>Once built, NGINX can be configured to accept incoming HTTP/3 connections by adding the <code>quic</code> and <code>reuseport</code> options to the <a href="https://nginx.org/en/docs/http/ngx_http_core_module.html#listen">listen</a> configuration directive.</p><p>Here is a minimal configuration example that you can start from:</p>
            <pre><code>events {
    worker_connections  1024;
}

http {
    server {
        # Enable QUIC and HTTP/3.
        listen 443 quic reuseport;

        # Enable HTTP/2 (optional).
        listen 443 ssl http2;

        ssl_certificate      cert.crt;
        ssl_certificate_key  cert.key;

        # Enable all TLS versions (TLSv1.3 is required for QUIC).
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        
        # Add Alt-Svc header to negotiate HTTP/3.
        add_header alt-svc 'h3-23=":443"; ma=86400';
    }
}</code></pre>
            <p>This will enable both HTTP/2 and HTTP/3 on the TCP/443 and UDP/443 ports respectively.</p><p>You can then use one of the available HTTP/3 clients (such as <a href="/http3-the-past-present-and-future/#using-google-chrome-as-an-http-3-client">Chrome Canary</a>, <a href="/http3-the-past-present-and-future/#using-curl">curl</a> or even the <a href="/http3-the-past-present-and-future/#using-quiche-s-http3-client">example HTTP/3 client provided as part of quiche</a>) to connect to your NGINX instance using HTTP/3.</p><p>We are excited to make this available for everyone to experiment and play with HTTP/3, but it’s important to note that <b>the implementation is still experimental</b> and it’s likely to have bugs as well as limitations in functionality. Feel free to submit a ticket to the <a href="https://github.com/cloudflare/quiche">quiche project</a> if you run into problems or find any bugs.</p> ]]></content:encoded>
            <category><![CDATA[NGINX]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[HTTP3]]></category>
            <guid isPermaLink="false">2M0hyPXVNiYWjSUGQRypv2</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
        </item>
        <item>
            <title><![CDATA[Birthday Week 2019 Wrap-up]]></title>
            <link>https://blog.cloudflare.com/birthday-week-2019-wrap-up/</link>
            <pubDate>Fri, 27 Sep 2019 19:00:00 GMT</pubDate>
            <description><![CDATA[ This week we celebrated Cloudflare’s 9th birthday by launching a variety of new offerings that support our mission: to help build a better Internet.  Below is a summary recap of how we celebrated Birthday Week 2019. ]]></description>
            <content:encoded><![CDATA[ <p>This week we celebrated Cloudflare’s 9th birthday by launching a variety of new offerings that support our mission: to help build a better Internet.  Below is a summary recap of how we celebrated Birthday Week 2019.</p>
    <div>
      <h2><a href="/cleaning-up-bad-bots/">Cleaning up bad bots</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Every day Cloudflare protects over 20 million Internet properties from malicious bots, and this week you were invited to join in the fight!  Now you can enable “bot fight mode” in the Firewall settings of the Cloudflare Dashboard and we’ll start deploying CPU intensive code to traffic originating from malicious bots.  This wastes the bots’ CPU resources and makes it more difficult and costly for perpetrators to deploy malicious bots at scale. We’ll also share the IP addresses of malicious bot traffic with our <a href="https://www.cloudflare.com/bandwidth-alliance/">Bandwidth Alliance partners</a>, who can help kick malicious bots offline. Join us in the battle against bad bots – and, as you can read <a href="/cleaning-up-bad-bots/">here</a> – you can help the climate too!</p>
    <div>
      <h2><a href="/introducing-browser-insights/">Browser Insights</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Speed matters, and if you manage a website or app, you want to make sure that you’re delivering a high performing website to all of your global end users. Now you can enable Browser Insights in the Speed section of the Cloudflare Dashboard to analyze website performance from the perspective of your users’ web browsers.  </p>
    <div>
      <h2><a href="/announcing-warp-plus/">WARP, the wait is over</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Several months ago <a href="/1111-warp-better-vpn/">we announced WARP</a>, a free mobile app purpose-built to address the security and performance challenges of the mobile Internet, while also respecting user privacy.  After months of testing and development, this week we (finally) rolled out WARP to approximately 2 million wait-list customers.  We also <a href="/announcing-warp-plus/">enabled WARP+</a>, a WARP experience that uses Argo routing technology to route your mobile traffic across faster, less-congested, routes through the Internet.  WARP and WARP+ are now available in the iOS and Android App stores and we can’t wait for you to give it a try!</p>
    <div>
      <h2><a href="/http3-the-past-present-and-future/">HTTP/3 Support</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Last year we announced early support for QUIC, a UDP based protocol that aims to make everything on the Internet work faster, with built-in encryption. The IETF subsequently decided that QUIC should be the foundation of the next generation of the HTTP protocol, HTTP/3. This week, Cloudflare was the first to introduce support for HTTP/3 in partnership with Google Chrome and Mozilla.</p>
    <div>
      <h2><a href="/workers-sites">Workers Sites</a></h2>
      <a href="#">
        
      </a>
    </div>
    <p>Finally, to wrap up our birthday week announcements, we announced Workers Sites. The Workers serverless platform continues to grow and evolve, and every day we discover new and innovative ways to help developers build and optimize their applications. Workers Sites enables developers to easily deploy lightweight static sites across Cloudflare’s global cloud platform without having to build out the traditional backend server infrastructure to support these sites.</p><p>We look forward to Birthday Week every year, as a chance to showcase some of our exciting new offerings — but we all know building a better Internet is about more than one week.  It’s an effort that takes place all year long, and requires the help of our partners, employees and especially you — our customers. Thank you for being a customer, providing valuable feedback and helping us stay focused on our mission to help build a better Internet.</p><p>Can’t get enough of this week’s announcements, or want to learn more? Register for next week’s Birthday Week Recap webinar to get the inside scoop on every announcement.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Analytics]]></category>
            <category><![CDATA[1.1.1.1]]></category>
            <category><![CDATA[WARP]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[Speed & Reliability]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">ZnBLrvPCUe0SJ84CB5Hm8</guid>
            <dc:creator>Jake Anderson</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP/3: the past, the present, and the future]]></title>
            <link>https://blog.cloudflare.com/http3-the-past-present-and-future/</link>
            <pubDate>Thu, 26 Sep 2019 13:00:00 GMT</pubDate>
            <description><![CDATA[ We are now happy to announce that QUIC and HTTP/3 support is available on the Cloudflare edge network. We’re excited to be joined in this announcement by Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all. ]]></description>
            <content:encoded><![CDATA[ <p>During last year’s Birthday Week <a href="/the-quicening/">we announced preliminary support for QUIC and HTTP/3</a> (or “HTTP over QUIC” as it was known back then), the new standard for the web, enabling faster, more reliable, and more secure connections to web endpoints like websites and <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">APIs</a>. We also let our customers join a waiting list to try QUIC and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a> as soon as they became available.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5vl1aHmtQdfvJSyQEDaHJu/1f07700e6de58e6928debfc3e502fb6a/http3-tube_2x.png" />
            
            </figure><p>Since then, we’ve been working with industry peers through the <a href="https://ietf.org/">Internet Engineering Task Force</a>, including Google Chrome and Mozilla Firefox, to iterate on the HTTP/3 and QUIC standards documents. In parallel with the standards maturing, we’ve also worked on <a href="/enjoy-a-slice-of-quic-and-rust/">improving support</a> on our network.</p><p><b>We are now happy to announce that QUIC and HTTP/3 support is available on the Cloudflare edge network.</b> We’re excited to be joined in this announcement by Google Chrome and Mozilla Firefox, two of the leading browser vendors and partners in our effort to make the web faster and more reliable for all.</p><p>In the words of Ryan Hamilton, Staff Software Engineer at Google, “HTTP/3 should make the web better for everyone. The Chrome and Cloudflare teams have worked together closely to bring HTTP/3 and QUIC from nascent standards to widely adopted technologies for improving the web. Strong partnership between industry leaders is what makes Internet standards innovations possible, and we look forward to our continued work together.”</p><p>What does this mean for you, a Cloudflare customer who uses our services and edge network to make your web presence faster and more secure? Once HTTP/3 support is <a href="#how-do-i-enable-http-3-for-my-domain">enabled for your domain in the Cloudflare dashboard</a>, your customers can interact with your websites and APIs using HTTP/3. We’ve been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we’ll make the feature available to everyone.</p><p>What does this announcement mean if you’re a user of the Internet interacting with sites and APIs through a browser and other clients? Starting today, you can <a href="#using-google-chrome-as-an-http-3-client">use Chrome Canary</a> to interact with Cloudflare and other servers over HTTP/3. 
For those of you looking for a command line client, <a href="#using-curl">curl also provides support for HTTP/3</a>. Instructions for using Chrome and curl with HTTP/3 follow later in this post.</p>
    <div>
      <h2>The Chicken and the Egg</h2>
      <a href="#the-chicken-and-the-egg">
        
      </a>
    </div>
    <p>Standards innovation on the Internet has historically been difficult because of a chicken-and-egg problem: which needs to come first, server support (like Cloudflare, or other large sources of response data) or client support (like browsers, operating systems, etc)? Both sides of a connection need to support a new communications protocol for it to be any use at all.</p><p>Cloudflare has a long history of driving web standards forward, from <a href="/introducing-http2/">HTTP/2</a> (the version of HTTP preceding HTTP/3), to <a href="https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/">TLS 1.3</a>, to things like <a href="https://www.cloudflare.com/learning/ssl/what-is-encrypted-sni/">encrypted SNI</a>. We’ve pushed standards forward by partnering with like-minded organizations who share in our desire to help build a better Internet. Our efforts to move HTTP/3 into the mainstream are no different.</p><p>Throughout the HTTP/3 standards development process, we’ve been working closely with industry partners to build and validate client HTTP/3 support compatible with our edge support. We’re thrilled to be joined by Google Chrome and curl, both of which can be used today to make requests to the Cloudflare edge over HTTP/3. Mozilla Firefox expects to ship support in a nightly release soon as well.</p><p>Bringing this all together: today is a good day for Internet users; widespread rollout of HTTP/3 will mean a faster web experience for all, and today’s support is a large step toward that.</p><p>More importantly, today is a good day for the Internet: Chrome, curl, Cloudflare, and soon Mozilla, rolling out experimental but functional support for HTTP/3 in quick succession shows that the Internet standards creation process works. 
Coordinated by the Internet Engineering Task Force, industry partners, competitors, and other key stakeholders can come together to craft standards that benefit the entire Internet, not just the behemoths.</p><p>Eric Rescorla, CTO of Firefox, summed it up nicely: “Developing a new network protocol is hard, and getting it right requires everyone to work together. Over the past few years, we've been working with Cloudflare and other industry partners to test TLS 1.3 and now HTTP/3 and QUIC. Cloudflare's early server-side support for these protocols has helped us work the interoperability kinks out of our client-side Firefox implementation. We look forward to advancing the security and performance of the Internet together.”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6GH3p4lKpUIwOiWKDsgamN/6ab9009925489ca8a06ff935108401cc/HTTP3-partnership_2x-1.png" />
            
            </figure>
    <div>
      <h2>How did we get here?</h2>
      <a href="#how-did-we-get-here">
        
      </a>
    </div>
    <p>Before we dive deeper into HTTP/3, let’s have a quick look at the <a href="/http-3-from-root-to-tip/">evolution of HTTP over the years</a> in order to better understand why HTTP/3 is needed.</p><p>It all started back in 1996 with the publication of the <a href="https://tools.ietf.org/html/rfc1945">HTTP/1.0 specification</a> which defined the basic HTTP textual wire format as we know it today (for the purposes of this post I’m pretending HTTP/0.9 never existed). In HTTP/1.0 a new TCP connection is created for each request/response exchange between clients and servers, meaning that all requests incur a latency penalty as the TCP and <a href="https://www.cloudflare.com/learning/ssl/what-happens-in-a-tls-handshake/">TLS handshakes</a> are completed before each request.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/oo0toRnPpU4dLMrkXUBk7/cc8d46c65edd60f5d9fc18c282000b97/http-request-over-tcp-tls_2x.png" />
            
            </figure><p>Worse still, rather than sending all outstanding data as fast as possible once the connection is established, TCP enforces a warm-up period called “slow start”, which allows the TCP congestion control algorithm to determine the amount of data that can be in flight at any given moment before congestion on the network path occurs, and avoid flooding the network with packets it can’t handle. But because new connections have to go through the slow start process, they can’t use all of the network bandwidth available immediately.</p><p>The <a href="https://tools.ietf.org/html/rfc2616">HTTP/1.1 revision of the HTTP specification</a> tried to solve these problems a few years later by introducing the concept of “keep-alive” connections, which allow clients to reuse TCP connections, and thus amortize the cost of the initial connection establishment and slow start across multiple requests. But this was no silver bullet: while multiple requests could share the same connection, they still had to be serialized one after the other, so a client and server could only execute a single request/response exchange at any given time for each connection.</p><p>As the web evolved, browsers found themselves needing more and more concurrency when fetching and rendering web pages as the number of resources (CSS, JavaScript, images, …) required by each web site increased over the years. But since HTTP/1.1 only allowed clients to do one HTTP request/response exchange at a time, the only way to gain concurrency at the network layer was to use multiple TCP connections to the same origin in parallel, thus losing most of the benefits of keep-alive connections. 
While connections would still be reused to a certain (but lesser) extent, we were back at square one.</p><p>Finally, more than a decade later, came SPDY and then <a href="https://tools.ietf.org/html/rfc7540">HTTP/2</a>, which, among other things, introduced the concept of HTTP “streams”: an abstraction that allows HTTP implementations to concurrently multiplex different HTTP exchanges onto the same TCP connection, allowing browsers to more efficiently reuse TCP connections.</p>
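<p>For contrast, here is what the HTTP/1.1 limitation looks like on the wire: a minimal sketch of two requests sent back-to-back on a single keep-alive connection (the host and paths are placeholders). The server must answer the first request in full before it can respond to the second; there is no way to interleave the responses.</p>
<pre><code># Two HTTP/1.1 requests sharing one keep-alive connection (illustrative only;
# example.com is a placeholder host). The exchanges are strictly serialized.
reqs=$(printf 'GET /style.css HTTP/1.1\r\nHost: example.com\r\n\r\nGET /app.js HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n')

# Count the request lines queued on this single connection.
printf '%s\n' "$reqs" | grep -c '^GET'
# prints: 2</code></pre>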
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/gBysPThlyWyj8a339vwWI/c616564cc352fccac4eb7e1977ddfe28/Screen-Shot-2019-09-25-at-7.43.01-PM.png" />
            
            </figure><p>But, yet again, this was no silver bullet! HTTP/2 solves the original problem — inefficient use of a single TCP connection — since multiple requests/responses can now be transmitted over the same connection at the same time. However, all requests and responses are equally affected by packet loss (e.g. due to network congestion), even if the data that is lost only concerns a single request. This is because while the HTTP/2 layer can segregate different HTTP exchanges on separate streams, TCP has no knowledge of this abstraction, and all it sees is a stream of bytes with no particular meaning.</p><p>The role of TCP is to deliver the entire stream of bytes, in the correct order, from one endpoint to the other. When a TCP packet carrying some of those bytes is lost on the network path, it creates a gap in the stream and TCP needs to fill it by resending the affected packet when the loss is detected. While doing so, none of the successfully delivered bytes that follow the lost ones can be delivered to the application, even if they were not themselves lost and belong to a completely independent HTTP request. So they end up getting unnecessarily delayed as TCP cannot know whether the application would be able to process them without the missing bits. This problem is known as “head-of-line blocking”.</p>
    <div>
      <h2>Enter HTTP/3</h2>
      <a href="#enter-http-3">
        
      </a>
    </div>
    <p>This is where HTTP/3 comes into play: instead of using TCP as the transport layer for the session, it uses <a href="/the-road-to-quic/">QUIC, a new Internet transport protocol</a>, which, among other things, introduces streams as first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no additional handshakes and slow starts are required to create new ones, but QUIC streams are delivered independently such that in most cases packet loss affecting one stream doesn't affect others. This is possible because QUIC packets are encapsulated on top of <a href="https://www.cloudflare.com/learning/ddos/glossary/user-datagram-protocol-udp/">UDP datagrams</a>.</p><p>Using UDP allows much more flexibility compared to TCP, and enables QUIC implementations to live fully in user-space — updates to the protocol’s implementations are not tied to operating systems updates as is the case with TCP. With QUIC, HTTP-level streams can be simply mapped on top of QUIC streams to get all the benefits of HTTP/2 without the head-of-line blocking.</p><p>QUIC also combines the typical 3-way TCP handshake with <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a>'s handshake. Combining these steps means that encryption and authentication are provided by default, and also enables faster connection establishment. In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1fHLYlTE6rQeewwbb11hVH/1e56b0a3ad747f02222b96ebac3d37a3/http-request-over-quic_2x.png" />
            
            </figure><p>But why not just use HTTP/2 on top of QUIC, instead of creating a whole new HTTP revision? After all, HTTP/2 also offers the stream multiplexing feature. As it turns out, it’s somewhat more complicated than that.</p><p>While it’s true that some of the HTTP/2 features can be mapped on top of QUIC very easily, that’s not true for all of them. One in particular, <a href="/hpack-the-silent-killer-feature-of-http-2/">HTTP/2’s header compression scheme called HPACK</a>, heavily depends on the order in which different HTTP requests and responses are delivered to the endpoints. QUIC enforces delivery order of bytes within single streams, but does not guarantee ordering among different streams.</p><p>This behavior required the creation of a new HTTP header compression scheme, called QPACK, which fixes the problem but requires changes to the HTTP mapping. In addition, some of the features offered by HTTP/2 (like per-stream flow control) are already offered by QUIC itself, so they were dropped from HTTP/3 in order to remove unnecessary complexity from the protocol.</p>
    <div>
      <h2>HTTP/3, powered by a delicious quiche</h2>
      <a href="#http-3-powered-by-a-delicious-quiche">
        
      </a>
    </div>
    <p>QUIC and HTTP/3 are very exciting standards, promising to address many of the shortcomings of previous standards and ushering in a new era of performance on the web. So how do we go from exciting standards documents to working implementation?</p><p>Cloudflare's QUIC and HTTP/3 support is powered by quiche, <a href="/enjoy-a-slice-of-quic-and-rust/">our own open-source implementation written in Rust</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3SjyP0JlJLJAUAGnLJmLm/07b8ae667df06953b1ee5e16014ecf3f/Screen-Shot-2019-09-25-at-7.39.59-PM.png" />
            
            </figure><p>You can find it on GitHub at <a href="https://github.com/cloudflare/quiche">github.com/cloudflare/quiche</a>.</p><p>We announced quiche a few months ago and since then have added support for the HTTP/3 protocol, on top of the existing QUIC support. We have designed quiche in such a way that it can now be used to implement HTTP/3 clients and servers or just plain QUIC ones.</p>
    <div>
      <h2>How do I enable HTTP/3 for my domain?</h2>
      <a href="#how-do-i-enable-http-3-for-my-domain">
        
      </a>
    </div>
    <p>As mentioned above, we have started onboarding customers who signed up for the waiting list. If you are on the waiting list and have received an email from us communicating that you can now enable the feature for your websites, you can simply go to the <a href="https://dash.cloudflare.com/?to=/:account/:zone/network">Cloudflare dashboard</a> and flip the switch from the "Network" tab manually:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DADHjrJrzR3HCwKXT8G9m/7cefedde11594b83a23845c92359be0f/http3-toggle-1.png" />
            
            </figure><p>We expect to make the HTTP/3 feature available to all customers in the near future.</p><p>Once enabled, you can experiment with HTTP/3 in a number of ways:</p>
    <div>
      <h3>Using Google Chrome as an HTTP/3 client</h3>
      <a href="#using-google-chrome-as-an-http-3-client">
        
      </a>
    </div>
    <p>In order to use the Chrome browser to connect to your website over HTTP/3, you first need to download and install the <a href="https://www.google.com/chrome/canary/">latest Canary build</a>. Then all you need to do to enable HTTP/3 support is to start Chrome Canary with the “--enable-quic” and “--quic-version=h3-23” <a href="https://www.chromium.org/developers/how-tos/run-chromium-with-flags">command-line arguments</a>.</p><p>Once Chrome is started with the required arguments, you can just type your domain in the address bar and see it loaded over HTTP/3 (you can use the Network tab in Chrome’s Developer Tools to check which protocol version was used). Note that due to how HTTP/3 is negotiated between the browser and the server, HTTP/3 might not be used for the first few connections to the domain, so you should try to reload the page a few times.</p><p>If this seems too complicated, don’t worry: as the HTTP/3 support in Chrome becomes more stable over time, enabling it will become easier.</p><p>This is what the Network tab in the Developer Tools shows when browsing this very blog over HTTP/3:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5doD9EStpvkaCUGlV8iCyx/7615f6b8f52b126c7ac12028c2891444/Screen-Shot-2019-09-20-at-1.27.34-PM.png" />
            
            </figure><p>Note that due to the experimental nature of the HTTP/3 support in Chrome, the protocol is actually identified as “http2+quic/99” in Developer Tools, but don’t let that fool you: it is indeed HTTP/3.</p>
    <div>
      <h3>Using curl</h3>
      <a href="#using-curl">
        
      </a>
    </div>
    <p>The curl command-line tool also <a href="https://daniel.haxx.se/blog/2019/09/11/curl-7-66-0-the-parallel-http-3-future-is-here/">supports HTTP/3 as an experimental feature</a>. You’ll need to download the <a href="https://github.com/curl/curl">latest version from git</a> and <a href="https://github.com/curl/curl/blob/master/docs/HTTP3.md#quiche-version">follow the instructions on how to enable HTTP/3 support</a>.</p><p>If you're running macOS, we've also made it easy to install an HTTP/3 equipped version of curl via Homebrew:</p>
            <pre><code> % brew install --HEAD -s https://raw.githubusercontent.com/cloudflare/homebrew-cloudflare/master/curl.rb</code></pre>
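            <p>Once installed, you can confirm that the curl binary was built with HTTP/3 support by looking for "HTTP3" in the "Features" line printed by "curl --version". Here is a sketch of that check, run against a captured sample line, since the exact feature list varies between builds:</p>
            <pre><code># Sample "Features:" line from an HTTP/3-enabled curl build (illustrative;
# on a real installation, pipe the output of `curl --version` instead).
features='Features: AsynchDNS HTTP2 HTTP3 HTTPS-proxy IPv6 libz NTLM SSL'

if printf '%s\n' "$features" | grep -qw 'HTTP3'; then
  echo 'HTTP/3 support: yes'
else
  echo 'HTTP/3 support: no'
fi
# prints: HTTP/3 support: yes</code></pre>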
            <p>In order to perform an HTTP/3 request all you need is to add the “--http3” command-line flag to a normal curl command:</p>
            <pre><code> % ./curl -I https://blog.cloudflare.com/ --http3
HTTP/3 200
date: Tue, 17 Sep 2019 12:27:07 GMT
content-type: text/html; charset=utf-8
set-cookie: __cfduid=d3fc7b95edd40bc69c7d894d296564df31568723227; expires=Wed, 16-Sep-20 12:27:07 GMT; path=/; domain=.blog.cloudflare.com; HttpOnly; Secure
x-powered-by: Express
cache-control: public, max-age=60
vary: Accept-Encoding
cf-cache-status: HIT
age: 57
expires: Tue, 17 Sep 2019 12:28:07 GMT
alt-svc: h3-23=":443"; ma=86400
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
server: cloudflare
cf-ray: 517b128df871bfe3-MAN</code></pre>
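            <p>The "alt-svc" header in the response above is how a server advertises HTTP/3 to clients via the Alt-Svc mechanism: it names the advertised protocol version ("h3-23"), the UDP port to connect to, and how long the advertisement may be cached ("ma", in seconds). As a rough sketch, those fields can be pulled out of the header value shown above like this:</p>
            <pre><code># The Alt-Svc value from the curl output above.
altsvc='h3-23=":443"; ma=86400'

# Extract the advertised protocol token, port, and max-age.
proto=$(printf '%s' "$altsvc" | sed -n 's/^\([^=]*\)=.*/\1/p')
port=$(printf '%s' "$altsvc" | sed -n 's/.*":\([0-9]*\)".*/\1/p')
maxage=$(printf '%s' "$altsvc" | sed -n 's/.*ma=\([0-9]*\).*/\1/p')

echo "protocol=$proto port=$port max-age=$maxage"
# prints: protocol=h3-23 port=443 max-age=86400</code></pre>
            <p>This is also why a client's first request to a site typically does not use HTTP/3: the client only learns that HTTP/3 is available from this header on an earlier response.</p>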
            
    <div>
      <h3>Using quiche’s http3-client</h3>
      <a href="#using-quiches-http3-client">
        
      </a>
    </div>
    <p>Finally, we also provide an example <a href="https://github.com/cloudflare/quiche/blob/master/examples/http3-client.rs">HTTP/3 command-line client</a> (as well as a command-line server) built on top of quiche, which you can use to experiment with HTTP/3.</p><p>To get it running, first clone quiche’s GitHub repository:</p>
            <pre><code>$ git clone --recursive https://github.com/cloudflare/quiche</code></pre>
            <p>Then build it. You need a working Rust and Cargo installation for this to work (we recommend using <a href="https://rustup.rs/">rustup</a> to easily set up a working Rust development environment).</p>
            <pre><code>$ cargo build --examples</code></pre>
            <p>And finally you can execute an HTTP/3 request:</p>
            <pre><code>$ RUST_LOG=info target/debug/examples/http3-client https://blog.cloudflare.com/</code></pre>
            
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>In the coming months we’ll be working on improving and optimizing our QUIC and <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3 implementation</a>, and will eventually allow everyone to enable this new feature without having to go through a waiting list. We'll continue updating our implementation as standards evolve, which <b>may result in breaking changes</b> between draft versions of the standards.</p><p>Here are a few new features on our roadmap that we're particularly excited about:</p>
    <div>
      <h3>Connection migration</h3>
      <a href="#connection-migration">
        
      </a>
    </div>
    <p>One important feature that QUIC enables is seamless and transparent migration of connections between different networks (such as your home WiFi network and your carrier’s mobile network as you leave for work in the morning) without requiring a whole new connection to be created.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7KaBR6EaZ00sko7yvIPavs/182d187326680c8c92d34486acea0de1/Screen-Shot-2019-09-25-at-7.39.44-PM.png" />
            
            </figure><p>This feature will require some additional changes to our infrastructure, but it’s something we are excited to offer our customers in the future.</p>
    <div>
      <h3>Zero Round Trip Time Resumption</h3>
      <a href="#zero-round-trip-time-resumption">
        
      </a>
    </div>
    <p>Just like TLS 1.3, QUIC supports a <a href="/introducing-0-rtt/">mode of operation that allows clients to start sending HTTP requests before the connection handshake has completed</a>. We don’t yet support this feature in our QUIC deployment, but we’ll be working on making it available, just like we already do for our TLS 1.3 support.</p>
    <div>
      <h2>HTTP/3: it's alive!</h2>
      <a href="#http-3-its-alive">
        
      </a>
    </div>
    <p>We are excited to support HTTP/3 and allow our customers to experiment with it while efforts to standardize QUIC and HTTP/3 are still ongoing. We'll continue working alongside other organizations, including Google and Mozilla, to finalize the QUIC and HTTP/3 standards and encourage broad adoption.</p><p>Here's to a faster, more reliable, more secure web experience for all.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4cfvya4KDyDXaX5DdNkv9x</guid>
            <dc:creator>Alessandro Ghedini</dc:creator>
            <dc:creator>Rustam Lalkaka</dc:creator>
        </item>
        <item>
            <title><![CDATA[HTTP/3: From root to tip]]></title>
            <link>https://blog.cloudflare.com/http-3-from-root-to-tip/</link>
            <pubDate>Thu, 24 Jan 2019 17:57:09 GMT</pubDate>
            <description><![CDATA[ Explore HTTP/3 from root to tip and discover the backstory of this new HTTP syntax that works on top of the IETF QUIC transport. ]]></description>
            <content:encoded><![CDATA[ <p>HTTP is the application protocol that powers the Web. It began life as the so-called HTTP/0.9 protocol in 1991, and by 1999 had evolved to HTTP/1.1, which was standardised within the IETF (Internet Engineering Task Force). HTTP/1.1 was good enough for a long time but the ever changing needs of the Web called for a better suited protocol, and HTTP/2 emerged in 2015. More recently it was announced that the IETF is intending to deliver a new version - <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>. To some people this is a surprise and has caused a bit of confusion. If you don't track IETF work closely it might seem that HTTP/3 has come out of the blue. However,  we can trace its origins through a lineage of experiments and evolution of Web protocols; specifically the QUIC transport protocol.</p><p>If you're not familiar with QUIC, my colleagues have done a great job of tackling different angles. John's <a href="/the-quicening/">blog</a> describes some of the real-world annoyances of today's HTTP, Alessandro's <a href="/the-road-to-quic/">blog</a> tackles the nitty-gritty transport layer details, and Nick's blog covers <a href="/head-start-with-quic/">how to get hands on</a> with some testing. We've collected these and more at <a href="https://cloudflare-quic.com">https://cloudflare-quic.com</a>. And if that tickles your fancy, be sure to check out <a href="/enjoy-a-slice-of-quic-and-rust/">quiche</a>, our own open-source implementation of the QUIC protocol written in Rust.</p><p>HTTP/3 is the HTTP application mapping to the QUIC transport layer. This name was made official in the recent draft version 17 (<a href="https://tools.ietf.org/html/draft-ietf-quic-http-17">draft-ietf-quic-http-17</a>), which was proposed in late October 2018, with discussion and rough consensus being formed during the IETF 103 meeting in Bangkok in November. 
HTTP/3 was previously known as HTTP over QUIC, which itself was previously known as HTTP/2 over QUIC. Before that we had HTTP/2 over gQUIC, and way back we had SPDY over gQUIC. The fact of the matter, however, is that HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport.</p><p>In this blog post we'll explore the history behind some of HTTP/3's previous names and present the motivation behind the most recent name change. We'll go back to the early days of HTTP and touch on all the good work that has happened along the way. If you're keen to get the full picture you can jump to the end of the article or open this <a href="/content/images/2019/01/web_timeline_large1.svg">highly detailed SVG version</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1rKByCE0o19Q1zSD9Huliu/7f3540be1ff8f02da311c4df909def42/http3-stack.png" />
            
            </figure><p>An HTTP/3 layer cake</p>
    <div>
      <h2>Setting the scene</h2>
      <a href="#setting-the-scene">
        
      </a>
    </div>
    <p>Just before we focus on HTTP, it is worth reminding ourselves that there are two protocols that share the name QUIC. As we explained <a href="/the-road-to-quic/">previously</a>, gQUIC is commonly used to identify Google QUIC (the original protocol), and QUIC is commonly used to represent the IETF standard-in-progress version that diverges from gQUIC.</p><p>Since its early days in the 90s, the web’s needs have changed. We've had new versions of HTTP and added user security in the shape of Transport Layer Security (TLS). We'll only touch on TLS in this post, our other <a href="/tag/tls/">blog posts</a> are a great resource if you want to explore that area in more detail.</p><p>To help me explain the history of HTTP and TLS, I started to collate details of protocol specifications and dates. This information is usually presented in a textual form such as a list of bullets points stating document titles, ordered by date. However, there are branching standards, each overlapping in time and a simple list cannot express the real complexity of relationships. In HTTP, there has been parallel work that refactors core protocol definitions for easier consumption, extends the protocol for new uses, and redefines how the protocol exchanges data over the Internet for performance. When you're trying to join the dots over nearly 30 years of Internet history across different branching work streams you need a visualisation. So I made one -  the Cloudflare Secure Web Timeline. (NB: Technically it is a <a href="https://en.wikipedia.org/wiki/Cladogram">Cladogram</a>, but the term timeline is more widely known).</p><p>I have applied some artistic license when creating this, choosing to focus on the successful branches in the IETF space. 
Some of the things not shown include efforts in the W3 Consortium <a href="https://www.w3.org/Protocols/HTTP-NG/">HTTP-NG</a> working group, along with some exotic ideas whose authors are keen to explain how to pronounce: <a href="https://blog.jgc.org/2012/12/speeding-up-http-with-minimal-protocol.html">HMURR (pronounced 'hammer')</a> and <a href="https://github.com/HTTPWorkshop/workshop2017/blob/master/talks/waka.pdf">WAKA (pronounced “wah-kah”)</a>.</p><p>In the next few sections I'll walk this timeline to explain critical chapters in the history of HTTP. To enjoy the takeaways from this post, it helps to have an appreciation of why standardisation is beneficial, and how the IETF approaches it. Therefore we'll start with a very brief overview of that topic before returning to the timeline itself. Feel free to skip the next section if you are already familiar with the IETF.</p>
    <div>
      <h2>Types of Internet standard</h2>
      <a href="#types-of-internet-standard">
        
      </a>
    </div>
    <p>Generally, standards define common terms of reference, scope, constraint, applicability, and other considerations. Standards exist in many shapes and sizes, and can be informal (aka de facto) or formal (agreed/published by a Standards Defining Organisation such as IETF, ISO or MPEG). Standards are used in many fields; there is even a formal British Standard for making tea - BS 6008.</p><p>The early Web used HTTP and SSL protocol definitions that were published outside the IETF; these are marked as <b>red lines</b> on the Secure Web Timeline. The uptake of these protocols by clients and servers made them de facto standards.</p><p>At some point, it was decided to formalise these protocols (some motivating reasons are described in a later section). Internet standards are commonly defined in the IETF, which is guided by the informal principle of "rough consensus and running code". This is grounded in experience of developing and deploying things on the Internet. This is in contrast to a "clean room" approach of trying to develop perfect protocols in a vacuum.</p><p>IETF Internet standards are commonly known as RFCs. This is a complex area to explain so I recommend reading the blog post "<a href="https://www.ietf.org/blog/how-read-rfc/">How to Read an RFC</a>" by the QUIC Working Group Co-chair Mark Nottingham. A Working Group, or WG, is more or less just a mailing list.</p><p>Each year the IETF holds three meetings that provide the time and facilities for all WGs to meet in person if they wish. The agenda for these weeks can become very congested, with limited time available to discuss highly technical areas in depth. To overcome this, some WGs choose to also hold interim meetings in the months between the general IETF meetings. This can help to maintain momentum on specification development. 
The QUIC WG has held several interim meetings since 2017; a full list is available on their <a href="https://datatracker.ietf.org/wg/quic/meetings/">meeting page</a>.</p><p>These IETF meetings also provide the opportunity for other IETF-related collections of people to meet, such as the <a href="https://www.iab.org/">Internet Architecture Board</a> or <a href="https://irtf.org/">Internet Research Task Force</a>. In recent years, an <a href="https://www.ietf.org/how/runningcode/hackathons/">IETF Hackathon</a> has been held during the weekend preceding the IETF meeting. This provides an opportunity for the community to develop running code and, importantly, to carry out interoperability testing in the same room with others. This helps to find issues in specifications that can be discussed in the following days.</p><p>For the purposes of this blog, the important thing to understand is that RFCs don't just spring into existence. Instead, they go through a process that usually starts with an IETF Internet Draft (I-D) format that is submitted for consideration of adoption. In the case where there is already a published specification, preparation of an I-D might just be a simple reformatting exercise. I-Ds have a 6-month active lifetime from the date of publication. To keep them active, new versions need to be published. In practice, there is not much consequence to letting an I-D elapse and it happens quite often. The documents continue to be hosted on the <a href="https://datatracker.ietf.org/doc/recent">IETF documents website</a> for anyone who wants to read them.</p><p>I-Ds are represented on the Secure Web Timeline as <b>purple lines</b>. Each one has a unique name that takes the form of <i>draft-{author name}-{working group}-{topic}-{version}</i>. The working group field is optional; it might predict the IETF WG that will work on the piece, and sometimes this changes. 
If an I-D is adopted by the IETF, or if the I-D was initiated directly within the IETF, the name is <i>draft-ietf-{working group}-{topic}-{version}</i>. I-Ds may branch, merge or die on the vine. The version starts at 00 and increases by 1 each time a new draft is released. For example, the 4th draft of an I-D will have the version 03. Any time that an I-D changes name, its version resets back to 00.</p><p>It is important to note that anyone can submit an I-D to the IETF; you should not consider these as standards. But if the IETF standardisation process of an I-D does reach consensus, and the final document passes review, we finally get an RFC. The name changes again at this stage. Each RFC gets a unique number, e.g. <a href="https://tools.ietf.org/html/rfc7230">RFC 7230</a>. These are represented as <b>blue lines</b> on the Secure Web Timeline.</p><p>RFCs are immutable documents. This means that changes to the RFC require a completely new number. Changes might be done in order to incorporate fixes for errata (editorial or technical errors that were found and reported) or simply to refactor the specification to improve layout. RFCs may <b>obsolete</b> older versions (complete replacement), or just <b>update</b> them (substantively change).</p><p>All IETF documents are openly available on <a href="http://tools.ietf.org">http://tools.ietf.org</a>. Personally I find the <a href="https://datatracker.ietf.org">IETF Datatracker</a> a little more user-friendly because it provides a visualisation of a document's progress from I-D to RFC.</p><p>Below is an example that shows the development of <a href="https://tools.ietf.org/html/rfc1945">RFC 1945</a> - HTTP/1.0, which was a clear source of inspiration for the Secure Web Timeline.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5SlEeIaoaU7r9PkklUSIOj/c24f0ba70885244920a29bea1daffd68/RFC-1945-datatracker.png" />
            
</figure><p>IETF Datatracker view of RFC 1945</p><p>Interestingly, in the course of my work I found that the above visualisation is incorrect. It is missing <a href="https://tools.ietf.org/html/draft-ietf-http-v10-spec-05">draft-ietf-http-v10-spec-05</a> for some reason. Since the I-D lifetime is 6 months, there appears to be a gap before it became an RFC, whereas in reality draft 05 remained active until August 1996.</p>
    <div>
      <h2>Exploring the Secure Web Timeline</h2>
      <a href="#exploring-the-secure-web-timeline">
        
      </a>
    </div>
    <p>With a small appreciation of how Internet standards documents come to fruition, we can start to walk the Secure Web Timeline. In this section are a number of excerpt diagrams that each show an important part of the timeline. Each dot represents the date that a document or capability was made available. For IETF documents, draft numbers are omitted for clarity. However, if you want to see all that detail, please check out the <a href="/content/images/2019/01/web_timeline_large1.svg">complete timeline</a>.</p><p>HTTP began life as the so-called HTTP/0.9 protocol in 1991, and in 1994 the I-D <a href="https://tools.ietf.org/html/draft-fielding-http-spec-00">draft-fielding-http-spec-00</a> was published. This was adopted by the IETF soon after, causing the name change to <a href="https://tools.ietf.org/html/draft-ietf-http-v10-spec-00">draft-ietf-http-v10-spec-00</a>. The I-D went through 6 draft versions before being published as <a href="https://tools.ietf.org/html/rfc1945">RFC 1945</a> - HTTP/1.0 in 1996.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6fSsHSEtXc1HA38jJorxhO/d1a86966735f27d3b2ccfbcc1c8ec38d/http11-standardisation.png" />
            
</figure><p>However, even before the HTTP/1.0 work completed, a separate activity started on HTTP/1.1. The I-D <a href="https://tools.ietf.org/html/draft-ietf-http-v11-spec-00">draft-ietf-http-v11-spec-00</a> was published in November 1995 and was formally published as <a href="https://tools.ietf.org/html/rfc2068">RFC 2068</a> in 1997. The keen-eyed will spot that the Secure Web Timeline doesn't quite capture that sequence of events; this is an unfortunate side effect of the tooling used to generate the visualisation. I tried to minimise such problems where possible.</p><p>An HTTP/1.1 revision exercise was started in mid-1997 in the form of <a href="https://tools.ietf.org/html/draft-ietf-http-v11-spec-rev-00">draft-ietf-http-v11-spec-rev-00</a>. This completed in 1999 with the publication of <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>. Things went quiet in the IETF HTTP world until 2007. We'll come back to that shortly.</p>
    <div>
      <h2>A History of SSL and TLS</h2>
      <a href="#a-history-of-ssl-and-tls">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6pnAQiXiXCFpQpSkznYI1T/80a5516f4dafa64d1a5b60733b14c913/ssl-tls-standardisation.png" />
            
</figure><p>Switching tracks to SSL, we see that the SSL 2.0 specification was released sometime around 1995, and that SSL 3.0 was released in November 1996. Interestingly, SSL 3.0 is described by <a href="https://tools.ietf.org/html/rfc6101">RFC 6101</a>, which was released in August 2011. This sits in the <b>Historic</b> category, which "is usually done to document ideas that were considered and discarded, or protocols that were already historic when it was decided to document them," according to the <a href="https://www.ietf.org/blog/iesg-statement-designating-rfcs-historic/?primary_topic=7&amp;">IETF</a>. In this case it is advantageous to have an IETF-owned document that describes SSL 3.0 because it can be used as a canonical reference elsewhere.</p><p>Of more interest to us is how SSL inspired the development of TLS, which began life as <a href="https://tools.ietf.org/html/draft-ietf-tls-protocol-00">draft-ietf-tls-protocol-00</a> in November 1996. This went through 6 draft versions and was published as <a href="https://tools.ietf.org/html/rfc2246">RFC 2246</a> - TLS 1.0 at the start of 1999.</p><p>Between 1995 and 1999, the SSL and TLS protocols were used to secure HTTP communications on the Internet. This worked just fine as a de facto standard. It wasn't until January 1998 that the formal standardisation process for HTTPS was started with the publication of I-D <a href="https://tools.ietf.org/html/draft-ietf-tls-https-00">draft-ietf-tls-https-00</a>. That work concluded in May 2000 with the publication of <a href="https://tools.ietf.org/html/rfc2818">RFC 2818</a> - HTTP over TLS.</p><p>TLS continued to evolve between 2000 and 2007, with the standardisation of TLS 1.1 and 1.2. 
There was a gap of 7 years until work began on the next version of TLS, which was adopted as <a href="https://tools.ietf.org/html/draft-ietf-tls-tls13-00">draft-ietf-tls-tls13-00</a> in April 2014 and, after 28 drafts, completed as <a href="https://tools.ietf.org/html/rfc8446">RFC 8446</a> - TLS 1.3 in August 2018.</p>
    <div>
      <h2>Internet standardisation process</h2>
      <a href="#internet-standardisation-process">
        
      </a>
    </div>
    <p>After taking a small look at the timeline, I hope you can build a sense of how the IETF works. One generalisation for the way that Internet standards take shape is that researchers or engineers design experimental protocols that suit their specific use case. They experiment with protocols, in public or private, at various levels of scale. The data helps to identify improvements or issues. The work may be published to explain the experiment, to gather wider input or to help find other implementers. Take-up of this early work by others may make it a de facto standard; eventually there may be sufficient momentum that formal standardisation becomes an option.</p><p>The status of a protocol can be an important consideration for organisations that may be thinking about implementing, deploying or in some way using it. A formal standardisation process can make a de facto standard more attractive because it tends to provide stability. The stewardship and guidance are provided by an organisation, such as the IETF, that reflects a wider range of experiences. However, it is worth highlighting that not all formal standards succeed.</p><p>The process of creating a final standard is almost as important as the standard itself. Taking an initial idea and inviting contribution from people with wider knowledge, experience and use cases can help produce something that will be of more use to a wider population. However, the standardisation process is not always easy. There are pitfalls and hurdles. Sometimes the process takes so long that the output is no longer relevant.</p><p>Each Standards Defining Organisation tends to have its own process that is geared around its field and participants. Explaining all of the details about how the IETF works is well beyond the scope of this blog. The IETF's "<a href="https://www.ietf.org/how/">How we work</a>" page is an excellent starting point that covers many aspects. 
The best way to form an understanding, as usual, is to get involved yourself. This can be as easy as joining an email list or adding to the discussion on a relevant GitHub repository.</p>
    <div>
      <h2>Cloudflare's running code</h2>
      <a href="#cloudflares-running-code">
        
      </a>
    </div>
    <p>Cloudflare is proud to be an early adopter of new and evolving protocols. We have a long record of adopting new standards early, such as <a href="/introducing-http2/">HTTP/2</a>. We also test features that are experimental or yet to be final, like <a href="/introducing-tls-1-3/">TLS 1.3</a> and <a href="/introducing-spdy/">SPDY</a>.</p><p>In relation to the IETF standardisation process, deploying this running code on real networks across a diverse body of websites helps us understand how well the protocol will work in practice. We combine our existing expertise with experimental information to help improve the running code and, where it makes sense, feed back issues or improvements to the WG that is standardising a protocol.</p><p>Testing new things is not the only priority. Part of being an innovator is knowing when it is time to move forward and put older innovations in the rear-view mirror. Sometimes this relates to security-oriented protocols; for example, Cloudflare <a href="/sslv3-support-disabled-by-default-due-to-vulnerability/">disabled SSLv3 by default</a> due to the POODLE vulnerability. In other cases, protocols are superseded by more technologically advanced ones; Cloudflare <a href="/deprecating-spdy/">deprecated SPDY</a> support in favour of HTTP/2.</p><p>The introduction and deprecation of relevant protocols are represented on the Secure Web Timeline as <b>orange lines</b>. Dotted vertical lines help correlate Cloudflare events to relevant IETF documents. For example, Cloudflare introduced TLS 1.3 support in September 2016, with the final document, <a href="https://tools.ietf.org/html/rfc8446">RFC 8446</a>, being published almost two years later in August 2018.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/ptcxVRf8P4wmKMz6uSzAk/4d37f0581a865bc4f120282e7d9d5ebf/cf-events.png" />
            
            </figure>
    <div>
      <h2>Refactoring in HTTPbis</h2>
      <a href="#refactoring-in-httpbis">
        
      </a>
    </div>
    <p>HTTP/1.1 is a very successful protocol and the timeline shows that there wasn't much activity in the IETF after 1999. However, the truer picture is that years of active use gave implementation experience that unearthed latent issues with <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>, which caused some interoperability problems. Furthermore, the protocol was extended by other RFCs like 2817 and 2818. It was decided in 2007 to kickstart a new activity to improve the HTTP protocol specification. This was called HTTPbis (where "bis" stems from Latin meaning "two", "twice" or "repeat") and it took the form of a new Working Group. The original <a href="https://tools.ietf.org/wg/httpbis/charters?item=charter-httpbis-2007-10-23.txt">charter</a> does a good job of describing the problems that the group set out to solve.</p><p>In short, HTTPbis decided to refactor <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>. It would incorporate errata fixes and bring in some aspects of other specifications that had been published in the meantime. It was decided to split the document up into parts. This resulted in 6 I-Ds published in December 2007:</p><ul><li><p>draft-ietf-httpbis-p1-messaging</p></li><li><p>draft-ietf-httpbis-p2-semantics</p></li><li><p>draft-ietf-httpbis-p4-conditional</p></li><li><p>draft-ietf-httpbis-p5-range</p></li><li><p>draft-ietf-httpbis-p6-cache</p></li><li><p>draft-ietf-httpbis-p7-auth</p></li></ul>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cCDzKbc2DBLJgD1cdLCor/c928470939df5acd6112503c41c78db3/http11-refactor.png" />
            
</figure><p>The diagram shows how this work progressed through a lengthy drafting process of 7 years, with 27 draft versions being released, before final standardisation. In June 2014, the so-called RFC 723x series was released (where x ranges from 0 to 5). The Chair of the HTTPbis WG celebrated this achievement with the acclamation "<a href="https://www.mnot.net/blog/2014/06/07/rfc2616_is_dead">RFC2616 is Dead</a>". If it wasn't clear, these new documents obsoleted the older <a href="https://tools.ietf.org/html/rfc2616">RFC 2616</a>.</p>
    <div>
      <h2>What does any of this have to do with HTTP/3?</h2>
      <a href="#what-does-any-of-this-have-to-do-with-http-3">
        
      </a>
    </div>
    <p>While the IETF was busy working on the RFC 723x series, the world didn't stop. People continued to enhance, extend and experiment with HTTP on the Internet. Among them was Google, which had started to experiment with something called SPDY (pronounced speedy). This protocol was touted as improving the performance of web browsing, a principal use case for HTTP. At the end of 2009 SPDY v1 was announced, and it was quickly followed by SPDY v2 in 2010.</p><p>I want to avoid going into the technical details of SPDY. That's a topic for another day. What is important is to understand that SPDY took the core paradigms of HTTP and modified the interchange format slightly in order to gain improvements. With hindsight, we can see that HTTP has clearly delimited semantics and syntax. Semantics describe the concept of request and response exchanges including: methods, status codes, header fields (metadata) and bodies (payload). Syntax describes how to map semantics to bytes on the wire.</p><p>HTTP/0.9, 1.0 and 1.1 share many semantics. They also share syntax in the form of character strings that are sent over TCP connections. SPDY took HTTP/1.1 semantics and changed the syntax from strings to binary. This is a really interesting topic but we will go no further down that rabbit hole today.</p><p>Google's experiments with SPDY showed that there was promise in changing HTTP syntax, and value in keeping the existing HTTP semantics. For example, keeping the https:// URL format avoided many problems that could have affected adoption.</p><p>Having seen some of the positive outcomes, the IETF decided it was time to consider what HTTP/2.0 might look like. The <a href="https://github.com/httpwg/wg-materials/blob/gh-pages/ietf83/HTTP2.pdf">slides</a> from the HTTPbis session held during IETF 83 in March 2012 show the requirements, goals and measures of success that were set out. They also clearly state that "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x".</p>
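<p>To make the semantics/syntax split concrete, here is a rough Python sketch (my own simplification; the binary framing is invented for illustration and is not real SPDY or HTTP/2 framing) showing the same request semantics rendered in two different syntaxes:</p>

```python
import struct

# The same HTTP semantics: a GET request for /index.html with one header.
method, path, headers = "GET", "/index.html", {"host": "example.com"}

# Syntax 1: HTTP/1.1 maps the semantics to character strings on the wire.
text_syntax = (
    f"{method} {path} HTTP/1.1\r\n"
    + "".join(f"{k}: {v}\r\n" for k, v in headers.items())
    + "\r\n"
)

# Syntax 2: a toy binary framing carrying identical semantics as
# length-prefixed fields instead of text (illustrative only).
def field(value: str) -> bytes:
    data = value.encode()
    return struct.pack("!H", len(data)) + data  # 2-byte length, then bytes

binary_syntax = field(method) + field(path) + b"".join(
    field(k) + field(v) for k, v in headers.items()
)

print(repr(text_syntax))
print(binary_syntax.hex())
```

Both byte sequences express the identical request; only the mapping to the wire differs, which is the essence of what SPDY changed.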
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3EQMui9QKRGR8Vzcu5wMZA/229e6a1cb1cc1fa78da96dc34f59c220/http2-standardisation.png" />
            
</figure><p>During that meeting the community was invited to share proposals. I-Ds that were submitted for consideration included <a href="https://tools.ietf.org/html/draft-mbelshe-httpbis-spdy-00">draft-mbelshe-httpbis-spdy-00</a>, <a href="https://tools.ietf.org/html/draft-montenegro-httpbis-speed-mobility-00">draft-montenegro-httpbis-speed-mobility-00</a> and <a href="https://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00">draft-tarreau-httpbis-network-friendly-00</a>. Ultimately, the SPDY draft was adopted and in November 2012 work began on <a href="https://tools.ietf.org/html/draft-ietf-httpbis-http2-00">draft-ietf-httpbis-http2-00</a>. After 18 drafts across a period of just over 2 years, <a href="https://tools.ietf.org/html/rfc7540">RFC 7540</a> - HTTP/2 was published in 2015. During this specification period, the precise syntax of HTTP/2 diverged just enough to make HTTP/2 and SPDY incompatible.</p><p>These years were a very busy period for HTTP-related work at the IETF, with the HTTP/1.1 refactor and HTTP/2 standardisation taking place in parallel. This is in stark contrast to the many years of quiet in the early 2000s. Be sure to check out the full timeline to really appreciate the amount of work that took place.</p><p>Although HTTP/2 was in the process of being standardised, there was still benefit to be had from using and experimenting with SPDY. Cloudflare <a href="/spdy-now-one-click-simple-for-any-website/">introduced support for SPDY</a> in August 2012 and only deprecated it in February 2018, when our statistics showed that less than 4% of Web clients continued to want SPDY. Meanwhile, we <a href="/introducing-http2/">introduced HTTP/2</a> support in December 2015, not long after the RFC was published, when our analysis indicated that a meaningful proportion of Web clients could take advantage of it.</p><p>Web clients that supported the SPDY and HTTP/2 protocols preferred the secure option of running them over TLS. 
The introduction of <a href="/introducing-universal-ssl/">Universal SSL</a> in September 2014 helped ensure that all websites signed up to Cloudflare were able to take advantage of these new protocols as we introduced them.</p>
    <div>
      <h3>gQUIC</h3>
      <a href="#gquic">
        
      </a>
    </div>
    <p>Google continued to experiment: between 2012 and 2015 they released SPDY v3 and v3.1. They also started working on gQUIC (pronounced, at the time, as quick), and the initial public specification was made available in early 2012.</p><p>The early versions of gQUIC made use of the SPDY v3 form of HTTP syntax. This choice made sense because HTTP/2 was not yet finished. The SPDY binary syntax was packaged into QUIC packets that could be sent in UDP datagrams. This was a departure from the TCP transport that HTTP traditionally relied on. Stacked up all together, this looked like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/40oAdMSmePG37lMEb8Odfi/b5cf02bbe9256889bd0cc103a34484a9/gquic-stack.png" />
            
</figure><p>SPDY over gQUIC layer cake</p><p>gQUIC used clever tricks to achieve performance. One of these was to break the clear layering between application and transport. What this meant in practice was that gQUIC only ever supported HTTP. So much so that gQUIC, termed "QUIC" at the time, was synonymous with being the next candidate version of HTTP. Despite the continued changes to QUIC over the last few years, which we'll touch on momentarily, to this day many people understand the term QUIC to mean that initial HTTP-only variant. Unfortunately, this is a regular source of confusion when discussing the protocol.</p><p>gQUIC continued to evolve and eventually switched over to a syntax much closer to HTTP/2. So close, in fact, that most people simply called it "HTTP/2 over QUIC". However, because of technical constraints there were some very subtle differences. One example relates to how the HTTP headers were serialised and exchanged. It is a minor difference but in effect means that HTTP/2 over gQUIC was incompatible with the IETF's HTTP/2.</p><p>Last but not least, we always need to consider the security aspects of Internet protocols. gQUIC opted not to use TLS to provide security. Instead, Google developed a different approach called QUIC Crypto. One of the interesting aspects of this was a new method for speeding up security handshakes. A client that had previously established a secure session with a server could reuse information to do a "zero <a href="https://www.cloudflare.com/learning/cdn/glossary/round-trip-time-rtt/">round-trip time</a>", or 0-RTT, handshake. 0-RTT was later incorporated into TLS 1.3.</p>
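<p>The round-trip saving behind 0-RTT can be sketched with a toy model (my own, not QUIC Crypto or TLS 1.3): a first-time client must complete a handshake before it can send encrypted application data, while a returning client reuses cached server parameters and sends data in its very first flight:</p>

```python
# Client-side cache of server parameters learned in earlier handshakes.
# In real protocols this holds keys/configs; here it is just a marker.
known_servers: dict[str, str] = {}

def round_trips_before_data(server: str) -> int:
    """Return how many round trips happen before application data flows."""
    if server in known_servers:
        return 0  # 0-RTT: encrypt the first flight with cached parameters
    known_servers[server] = "cached-server-config"  # learned on this visit
    return 1      # a full handshake costs (at least) one round trip first

assert round_trips_before_data("example.com") == 1  # first contact
assert round_trips_before_data("example.com") == 0  # resumption: 0-RTT
```

On a high-latency path that saved round trip can dominate page-load time, which is why both QUIC Crypto and, later, TLS 1.3 adopted the idea.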
    <div>
      <h2>Are we at the point where you can tell me what HTTP/3 is yet?</h2>
      <a href="#are-we-at-the-point-where-you-can-tell-me-what-http-3-is-yet">
        
      </a>
    </div>
    <p>Almost.</p><p>By now you should be familiar with how standardisation works and gQUIC is not much different. There was sufficient interest that the Google specifications were written up in I-D format. In June 2015 <a href="https://tools.ietf.org/html/draft-tsvwg-quic-protocol-00">draft-tsvwg-quic-protocol-00</a>, entitled "QUIC: A UDP-based Secure and Reliable Transport for HTTP/2" was submitted. Keep in mind my earlier statement that the syntax was almost-HTTP/2.</p><p>Google <a href="https://groups.google.com/a/chromium.org/forum/#!topic/proto-quic/otGKB4ytAyc">announced</a> that a Bar BoF would be held at IETF 93 in Prague. For those curious about what a "Bar BoF" is, please consult <a href="https://tools.ietf.org/html/rfc6771">RFC 6771</a>. Hint: BoF stands for Birds of a Feather.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/364tlrFLOtgAYBYxHrOXy8/18f5cebace8c29778e06109fe878c3b8/quic-standardisation.png" />
            
            </figure><p>The outcome of this engagement with the IETF was, in a nutshell, that QUIC seemed to offer many advantages at the transport layer and that it should be decoupled from HTTP. The clear separation between layers should be re-introduced. Furthermore, there was a preference for returning back to a TLS-based handshake (which wasn't so bad since TLS 1.3 was underway at this stage, and it was incorporating 0-RTT handshakes).</p><p>About a year later, in 2016, a new set of I-Ds were submitted:</p><ul><li><p><a href="https://tools.ietf.org/html/draft-hamilton-quic-transport-protocol-00">draft-hamilton-quic-transport-protocol-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-thomson-quic-tls-00">draft-thomson-quic-tls-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-iyengar-quic-loss-recovery-00">draft-iyengar-quic-loss-recovery-00</a></p></li><li><p><a href="https://tools.ietf.org/html/draft-shade-quic-http2-mapping-00">draft-shade-quic-http2-mapping-00</a></p></li></ul><p>Here's where another source of confusion about HTTP and QUIC enters the fray. <a href="https://tools.ietf.org/html/draft-shade-quic-http2-mapping-00">draft-shade-quic-http2-mapping-00</a> is entitled "HTTP/2 Semantics Using The QUIC Transport Protocol" and it describes itself as "a mapping of HTTP/2 semantics over QUIC". However, this is a misnomer. HTTP/2 was about changing syntax while maintaining semantics. Furthermore, "HTTP/2 over gQUIC" was never an accurate description of the syntax either, for the reasons I outlined earlier. Hold that thought.</p><p>This IETF version of QUIC was to be an entirely new transport protocol. That's a large undertaking and before diving head-first into such commitments, the IETF likes to gauge actual interest from its members. To do this, a formal <a href="https://www.ietf.org/how/bofs/">Birds of a Feather</a> meeting was held at the IETF 96 meeting in Berlin in 2016. 
I was lucky enough to attend the session in person, and the <a href="https://datatracker.ietf.org/meeting/96/materials/slides-96-quic-0">slides</a> don't do it justice. The meeting was attended by hundreds, as shown by Adam Roach's <a href="https://www.flickr.com/photos/adam-roach/28343796722/in/photostream/">photograph</a>. At the end of the session consensus was reached: QUIC would be adopted and standardised at the IETF.</p><p>The first IETF QUIC I-D for mapping HTTP to QUIC, <a href="https://tools.ietf.org/html/draft-ietf-quic-http-00">draft-ietf-quic-http-00</a>, took the Ronseal approach and simplified its name to "HTTP over QUIC". Unfortunately, it didn't finish the job completely and there were many instances of the term HTTP/2 throughout the body. Mike Bishop, the I-D's new editor, identified this and started to fix the HTTP/2 misnomer. In the 01 draft, the description changed to "a mapping of HTTP semantics over QUIC".</p><p>Gradually, over time and versions, the use of the term "HTTP/2" decreased and the instances became mere references to parts of <a href="https://tools.ietf.org/html/rfc7540">RFC 7540</a>. Roll forward two years to October 2018 and the I-D is now at version 16. While HTTP over QUIC bears similarity to HTTP/2, it is ultimately an independent, non-backwards-compatible HTTP syntax. However, to those that don't track IETF development very closely (a very, very large percentage of the Earth's population), the document name doesn't capture this difference. One of the main points of standardisation is to aid communication and interoperability. Yet a simple thing like naming is a major contributor to confusion in the community.</p><p>Recall what was said in 2012: "HTTP/2.0 only signifies that the wire format isn't compatible with that of HTTP/1.x". The IETF followed that existing cue. After much deliberation in the lead-up to, and during, IETF 103, consensus was reached to rename "HTTP over QUIC" to HTTP/3. 
The world is now in a better place and we can move on to more important debates.</p>
    <div>
      <h2>But RFC 7230 and 7231 disagree with your definition of semantics and syntax!</h2>
      <a href="#but-rfc-7230-and-7231-disagree-with-your-definition-of-semantics-and-syntax">
        
      </a>
    </div>
    <p>Sometimes document titles can be confusing. The present HTTP documents that describe syntax and semantics are:</p><ul><li><p><a href="https://tools.ietf.org/html/rfc7230">RFC 7230</a> - Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing</p></li><li><p><a href="https://tools.ietf.org/html/rfc7231">RFC 7231</a> - Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content</p></li></ul><p>It is possible to read too much into these names and believe that fundamental HTTP semantics are specific to versions of HTTP, i.e. HTTP/1.1. However, this is an unintended side effect of the HTTP family tree. The good news is that the HTTPbis Working Group are trying to address this. Some brave members are going through another round of document revision, as Roy Fielding put it, "one more time!". This work is underway right now and is known as the HTTP Core activity (you may also have heard of this under the monikers HTTPtre or HTTPter; naming things is hard). This will condense the six drafts down to three:</p><ul><li><p>HTTP Semantics (draft-ietf-httpbis-semantics)</p></li><li><p>HTTP Caching (draft-ietf-httpbis-caching)</p></li><li><p>HTTP/1.1 Message Syntax and Routing (draft-ietf-httpbis-messaging)</p></li></ul><p>Under this new structure, it becomes more evident that HTTP/2 and HTTP/3 are syntax definitions for the common HTTP semantics. This doesn't mean they don't have their own features beyond syntax, but it should help frame discussion going forward.</p>
    <div>
      <h2>Pulling it all together</h2>
      <a href="#pulling-it-all-together">
        
      </a>
    </div>
    <p>This blog post has taken a shallow look at the standardisation process for HTTP in the IETF across the last three decades. Without touching on many technical details, I've tried to explain how we have ended up with HTTP/3 today. If you skipped the good bits in the middle and are looking for a one-liner, here it is: HTTP/3 is just a new HTTP syntax that works on IETF QUIC, a UDP-based multiplexed and secure transport. There are many interesting technical areas to explore further but that will have to wait for another day.</p><p>In the course of this post, we explored important chapters in the development of HTTP and TLS, but did so in isolation. We close out the blog by pulling them all together into the complete Secure Web Timeline presented below. You can use this to investigate the detailed history at your leisure. And for the super sleuths, be sure to check out the <a href="/content/images/2019/01/web_timeline_large1.svg">full version including draft numbers</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3eL8vJkYylVmR4T5Aa1Zdf/2f2929308ee42e450917639874835c1d/cf-secure-web-timeline-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[QUIC]]></category>
            <category><![CDATA[TLS]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[HTTP3]]></category>
            <category><![CDATA[IETF]]></category>
            <guid isPermaLink="false">1upTpaZ3pXyoDXxMvZoEC8</guid>
            <dc:creator>Lucas Pardue</dc:creator>
        </item>
    </channel>
</rss>