
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 19:33:45 GMT</lastBuildDate>
        <item>
            <title><![CDATA[A framework for measuring Internet resilience]]></title>
            <link>https://blog.cloudflare.com/a-framework-for-measuring-internet-resilience/</link>
            <pubDate>Tue, 28 Oct 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We present a data-driven framework to quantify cross-layer Internet resilience. We also share a list of measurements with which to quantify facets of Internet resilience for geographical areas. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>On July 8, 2022, a massive outage at Rogers, one of Canada's largest telecom providers, knocked out Internet and mobile services for over 12 million users. Why did this single event have such a catastrophic impact? And more importantly, why do some networks crumble in the face of disruption while others barely stumble?</p><p>The answer lies in a concept we call <b>Internet resilience</b>: a network's ability not just to stay online, but to withstand, adapt to, and rapidly recover from failures.</p><p>It’s a quality that goes far beyond simple "uptime." True resilience is a multi-layered capability, built on everything from the diversity of physical subsea cables to the security of BGP routing and the health of a competitive market. It's an emergent property much like <a href="https://en.wikipedia.org/wiki/Psychological_resilience"><u>psychological resilience</u></a>: while each individual network must be robust, true resilience only arises from the collective, interoperable actions of the entire ecosystem. In this post, we'll introduce a data-driven framework to move beyond abstract definitions and start quantifying what makes a network resilient. All of our work is based on public data sources, and we're sharing our metrics to help the entire community build a more reliable and secure Internet for everyone.</p>
    <div>
      <h2>What is Internet resilience?</h2>
      <a href="#what-is-internet-resilience">
        
      </a>
    </div>
    <p>In networking, we often talk about "reliability" (does it work under normal conditions?) and "robustness" (can it handle a sudden traffic surge?). But resilience is more dynamic. It's the ability to gracefully degrade, adapt, and most importantly, recover. For our work, we've adopted a pragmatic definition:</p><p><b><i>Internet resilience</i></b><i> is the measurable capability of a national or regional network ecosystem to maintain diverse and secure routing paths in the face of challenges, and to rapidly restore connectivity following a disruption.</i></p><p>This definition links the abstract goal of resilience to the concrete, measurable metrics that form the basis of our analysis.</p>
    <div>
      <h3>Local decisions have global impact</h3>
      <a href="#local-decisions-have-global-impact">
        
      </a>
    </div>
    <p>The Internet is a global system but is built out of thousands of local pieces. Every country depends on the global Internet for economic activity, communication, and critical services, yet most of the decisions that shape how traffic flows are made locally by individual networks.</p><p>In most national infrastructures like water or power grids, a central authority can plan, monitor, and coordinate how the system behaves. The Internet works very differently. Its core building blocks are Autonomous Systems (ASes), which are networks like ISPs, universities, cloud providers or enterprises. Each AS controls autonomously how it connects to the rest of the Internet, which routes it accepts or rejects, how it prefers to forward traffic, and with whom it interconnects. That’s why they’re called Autonomous Systems in the first place! There’s no global controller. Instead, the Internet’s routing fabric emerges from the collective interaction of thousands of independent networks, each optimizing for its own goals.</p><p>This decentralized structure is one of the Internet’s greatest strengths: no single failure can bring the whole system down. But it also makes measuring resilience at a country level tricky. National statistics can hide local structures that are crucial to global connectivity. For example, a country might appear to have many international connections overall, but those connections could be concentrated in just a handful of networks. If one of those fails, the whole country could be affected.</p><p>For resilience, the goal isn’t to isolate national infrastructure from the global Internet. In fact, the opposite is true: healthy integration with diverse partners is what makes both local and global connectivity stronger. 
When local networks invest in secure, redundant, and diverse interconnections, they improve their own resilience and contribute to the stability of the Internet as a whole.</p><p>This perspective shapes how we design and interpret resilience metrics. Rather than treating countries as isolated units, we look at how well their networks are woven into the global fabric: the number and diversity of upstream providers, the extent of international peering, and the richness of local interconnections. These are the building blocks of a resilient Internet.</p>
    <div>
      <h3>Route hygiene: Keeping the Internet healthy</h3>
      <a href="#route-hygiene-keeping-the-internet-healthy">
        
      </a>
    </div>
<p>The Internet is constructed according to a <i>layered</i> model, by design, so that different Internet components and features can evolve independently of the others. The Physical layer stores, carries, and forwards all the bits and bytes transmitted in packets between devices. It consists of cables, routers and switches, but also buildings that house interconnection facilities. The Application layer sits above all others and has virtually no information about the network, so that applications can communicate without having to worry about the underlying details, for example, whether a network is Ethernet or Wi-Fi. The application layer includes web browsers and web servers, as well as caching, security, and other features provided by Content Distribution Networks (CDNs). Between the physical and application layers is the Network layer, responsible for Internet routing. It is ‘logical’, consisting of software that learns about interconnections and routes, and makes (local) forwarding decisions that deliver packets to their destinations. </p><p>Good route hygiene works like personal hygiene: it prevents problems before they spread. The Internet relies on the <a href="https://www.cloudflare.com/learning/security/glossary/what-is-bgp/"><u>Border Gateway Protocol</u></a> (BGP) to exchange routes between networks, but BGP wasn’t built with security in mind. A single bad route announcement, whether by mistake or attack, can send traffic the wrong way or cause widespread outages.</p><p>Two practices help stop this: the <b>RPKI</b> (Resource Public Key Infrastructure) lets networks publish cryptographic proof that they’re allowed to announce specific IP prefixes. 
<b>ROV</b> (Route Origin Validation) checks those proofs before accepting routes.</p><p>Together, they act like passports and border checks for Internet routes, helping filter out hijacks and leaks early.</p><p>Hygiene doesn’t just happen in the routing table – it spans multiple layers of the Internet’s architecture, and weaknesses in one layer can ripple through the rest. At the physical layer, having multiple, geographically diverse cable routes ensures that a single cut or disaster doesn’t isolate an entire region. For example, distributing submarine landing stations along different coastlines can protect international connectivity when one corridor fails. At the network layer, practices like multi-homing and participation in Internet Exchange Points (IXPs) give operators more options to reroute traffic during incidents, reducing reliance on any single upstream provider. At the application layer, CDNs and caching keep popular content close to users, so even if upstream routes are disrupted, many services remain accessible. Finally, policy and market structure also play a role: open peering policies and competitive markets foster diversity, while dependence on a single ISP or cable system creates fragility.</p><p>Resilience emerges when these layers work together. If one layer is weak, the whole system becomes more vulnerable to disruption.</p><p>The more networks adopt these practices, the stronger and more resilient the Internet becomes. We actively support the deployment of RPKI, ROV, and diverse routing to keep the global Internet healthy.</p>
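<p>As a minimal sketch of how ROV classifies announcements (per RFC 6811), the snippet below checks a prefix and origin AS against a hypothetical ROA table; the prefixes and ASNs are illustrative documentation values, not real ROAs:</p>

```python
from ipaddress import ip_network

# Hypothetical ROA table: (authorized prefix, max length, authorized origin ASN).
# Real ROAs are published in the RPKI repositories of the five RIRs.
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]

def rov_state(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement: valid / invalid / not-found."""
    net = ip_network(prefix)
    covering = [roa for roa in ROAS if net.subnet_of(roa[0])]
    if not covering:
        return "not-found"  # no ROA covers this prefix at all
    for _, max_len, asn in covering:
        if net.prefixlen <= max_len and asn == origin_asn:
            return "valid"  # a matching ROA authorizes this origin
    return "invalid"        # covered, but wrong origin or too-specific prefix
```

<p>A route that is "covered but wrong" (e.g. a more specific announcement beyond the ROA’s max length, a classic hijack pattern) comes back <code>invalid</code>, which is exactly what ROV-deploying networks drop.</p>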
    <div>
      <h2>Measuring resilience is harder than it sounds</h2>
      <a href="#measuring-resilience-is-harder-than-it-sounds">
        
      </a>
    </div>
    <p>The biggest hurdle in measuring resilience is data access. The most valuable information, like internal network topologies, the physical paths of fiber cables, or specific peering agreements, is held by private network operators. This is the ground truth of the network.</p><p>However, operators view this information as a highly sensitive competitive asset. Revealing detailed network maps could expose strategic vulnerabilities or undermine business negotiations. Without access to this ground truth data, we're forced to rely on inference, approximation, and the clever use of publicly available data sources. Our framework is built entirely on these public sources to ensure anyone can reproduce and build upon our findings.</p><p>Projects like RouteViews and RIPE RIS collect BGP routing data that shows how networks connect. <a href="https://www.cloudflare.com/en-in/learning/network-layer/what-is-mtr/"><u>Traceroute</u></a> measurements reveal paths at the router level. IXP and submarine cable maps give partial views of the physical layer. But each of these sources has blind spots: peering links often don’t appear in BGP data, backup paths may remain hidden, and physical routes are hard to map precisely. This lack of a single, complete dataset means that resilience measurement relies on combining many partial perspectives, a bit like reconstructing a city map from scattered satellite images, traffic reports, and public utility filings. It’s challenging, but it’s also what makes this field so interesting.</p>
    <div>
      <h3>Translating resilience into quantifiable metrics</h3>
      <a href="#translating-resilience-into-quantifiable-metrics">
        
      </a>
    </div>
<p>Once we understand why resilience matters and what makes it hard to measure, the next step is to translate these ideas into concrete metrics. These metrics give us a way to evaluate how well different parts of the Internet can withstand disruptions and to identify where the weak points are. No single metric can capture Internet resilience on its own. Instead, we look at it from multiple angles: physical infrastructure, network topology, interconnection patterns, and routing behavior. Below are some of the key dimensions we use. Some of these metrics are inspired by existing research, like the <a href="https://pulse.internetsociety.org/en/resilience/"><u>ISOC Pulse</u></a> framework. All described methods rely on public data sources and are fully reproducible. As a result, in our visualizations we intentionally omit country and region names to maintain focus on the methodology and interpretation of the results. </p>
    <div>
      <h3>IXPs and colocation facilities</h3>
      <a href="#ixps-and-colocation-facilities">
        
      </a>
    </div>
<p>Networks primarily interconnect in two types of physical facilities: colocation facilities (colos) and Internet Exchange Points (IXPs), often housed within the colos. Although symbiotically linked, they serve distinct functions in a nation’s digital ecosystem. A colocation facility provides the foundational infrastructure – secure space, power, and cooling – for network operators to place their equipment. The IXP builds upon this physical base to provide the logical interconnection fabric, a role that is transformative for a region’s Internet development and resilience. The networks that connect at an IXP are its members. </p><p>Metrics that reflect resilience include:</p><ul><li><p><b>Number and distribution of IXPs</b>, normalized by population or geography. A higher IXP count, weighted by population or geographic coverage, is associated with improved local connectivity.</p></li><li><p><b>Peering participation rates</b> — the percentage of local networks connected to domestic IXPs. This metric reflects the extent to which local networks rely on regional interconnection rather than routing traffic through distant upstream providers.</p></li><li><p><b>Diversity of IXP membership</b>, including ISPs, CDNs, and cloud providers, which indicates how much critical content is available locally, making it accessible to domestic users even if international connectivity is severely degraded.</p></li></ul><p>Resilience also depends on how well local networks connect globally:</p><ul><li><p>How many <b>local networks peer at international IXPs</b>, increasing their routing options</p></li><li><p>How many <b>international networks peer at local IXPs</b>, bringing content closer to users</p></li></ul><p>A balanced flow in both directions strengthens resilience by ensuring multiple independent paths in and out of a region.</p><p>The geographic distribution of IXPs further enhances resilience. 
A resilient IXP ecosystem should be geographically dispersed to serve different regions within a country effectively, reducing the risk that a localized infrastructure failure affects the connectivity of an entire country. Spatial distribution metrics help evaluate how infrastructure is spread across a country’s geography or its population. Key spatial metrics include:</p><ul><li><p><b>Infrastructure per Capita</b>: This metric – inspired by <a href="https://en.wikipedia.org/wiki/Telephone_density"><u>teledensity</u></a> – measures infrastructure relative to the population size of a sub-region, providing a per-person availability indicator. A low IXP-per-population ratio in a region suggests that users there rely on distant exchanges, increasing the bit-risk miles.</p></li><li><p><b>Infrastructure per Area (Density)</b>: This metric evaluates how infrastructure is distributed per unit of geographic area, highlighting spatial coverage. Such area-based metrics are crucial for critical infrastructures to ensure remote areas are not left inaccessible.</p></li></ul><p>These metrics can be summarized using the <a href="https://www.bls.gov/k12/students/economics-made-easy/location-quotients.pdf"><u>Location Quotient (LQ)</u></a>. The location quotient is a widely used geographic index that measures a region’s share of infrastructure relative to its share of a baseline (such as population or area).</p>
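<p>The LQ reduces to a ratio of two shares. A minimal sketch, with made-up facility and population counts:</p>

```python
def location_quotient(region_infra: float, total_infra: float,
                      region_base: float, total_base: float) -> float:
    """LQ = (region's share of infrastructure) / (region's share of baseline).
    LQ > 1: more infrastructure than the baseline (population or area)
    would predict for that region; LQ < 1: less."""
    return (region_infra / total_infra) / (region_base / total_base)

# A state with 30 of 300 national facilities (a 10% share) but only 5% of
# the population hosts twice what its population share would predict:
lq = location_quotient(30, 300, 5_000_000, 100_000_000)  # -> 2.0
```

<p>The same function works for area-based baselines: pass the region’s and country’s land area instead of population.</p>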
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4S52jlwCpQ8WVS6gRSdNqp/4722abb10331624a54b411708f1e576b/image5.png" />
</figure><p>For example, the figure above shows, for US states, whether a region hosts more or less infrastructure than would be expected for its population, based on its LQ score. It illustrates that even in the states with the highest number of facilities, that number is <i>still</i> lower than would be expected given their population size.</p>
    <div>
      <h4>Economic-weighted metrics</h4>
      <a href="#economic-weighted-metrics">
        
      </a>
    </div>
    <p>While spatial metrics capture the physical distribution of infrastructure, economic and usage-weighted metrics reveal how infrastructure is actually used. These account for traffic, capacity, or economic activity, exposing imbalances that spatial counts miss. <b>Infrastructure Utilization Concentration</b> measures how usage is distributed across facilities, using indices like the <b>Herfindahl–Hirschman Index (HHI)</b>. HHI sums the squared market shares of entities, ranging from 0 (competitive) to 10,000 (highly concentrated). For IXPs, market share is defined through operational metrics such as:</p><ul><li><p><b>Peak/Average Traffic Volume</b> (Gbps/Tbps): indicates operational significance</p></li><li><p><b>Number of Connected ASNs</b>: reflects network reach</p></li><li><p><b>Total Port Capacity</b>: shows physical scale</p></li></ul><p>The chosen metric affects results. For example, using connected ASNs yields an HHI of 1,316 (unconcentrated) for a Central European country, whereas using port capacity gives 1,809 (moderately concentrated).</p><p>The <b>Gini coefficient</b> measures inequality in resource or traffic distribution (0 = equal, 1 = fully concentrated). The <b>Lorenz curve</b> visualizes this: a straight 45° line indicates perfect equality, while deviations show concentration.</p>
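<p>Both concentration indices are straightforward to compute from raw per-facility values (traffic, port capacity, or connected-ASN counts). A minimal sketch:</p>

```python
def hhi(values):
    """Herfindahl-Hirschman Index from raw per-entity values.
    Shares are expressed in percent, so the index ranges from near 0
    (fragmented market) to 10,000 (a single entity)."""
    total = sum(values)
    return sum((100.0 * v / total) ** 2 for v in values)

def gini(values):
    """Sample Gini coefficient: 0 = perfectly equal distribution,
    values approaching 1 = fully concentrated."""
    xs = sorted(values)
    n = len(xs)
    weighted_sum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted_sum) / (n * sum(xs)) - (n + 1.0) / n
```

<p>Feeding the same IXPs into <code>hhi</code> with different inputs (ASN counts vs. port capacity) reproduces exactly the kind of divergence described above, which is why the choice of market-share metric must be reported alongside the index.</p>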
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/30bh4nVHRX5O3HMKvGRYh7/e0c5b3a7cb8294dfe3caaec98a0557d0/Screenshot_2025-10-27_at_14.10.57.png" />
          </figure><p>The chart on the left suggests substantial geographical inequality in colocation facility distribution across the US states. However, the population-weighted analysis in the chart on the right demonstrates that much of that geographic concentration can be explained by population distribution.</p>
    <div>
      <h3>Submarine cables</h3>
      <a href="#submarine-cables">
        
      </a>
    </div>
<p>Internet resilience, in the context of undersea cables, is defined by the global network’s capacity to withstand physical infrastructure damage and to recover swiftly from faults, thereby ensuring the continuity of intercontinental data flow. The metrics for quantifying this resilience are multifaceted, encompassing the frequency and nature of faults, the efficiency of repair operations, and the inherent robustness of both the network’s topology and its dedicated maintenance resources. Such metrics include:</p><ul><li><p>Number of <b>landing stations</b>, cable corridors, and operators. The goal is to ensure that national connectivity can withstand single failure events, be they natural disasters, targeted attacks, or major power outages. A lack of diversity creates single points of failure, as highlighted by <a href="https://www.theguardian.com/news/2025/sep/30/tonga-pacific-island-internet-underwater-cables-volcanic-eruption"><u>incidents in Tonga</u></a>, where damage to the only available cable led to a total outage.</p></li><li><p><b>Fault rates</b> and <b>mean time to repair (MTTR)</b>, which indicate how quickly service can be restored. These metrics measure a country’s ability to prevent, detect, and recover from cable incidents, focusing on downtime reduction and protection of critical assets. Repair times hinge on <b>vessel mobilization</b> and <b>government permits</b>, the latter often being the main bottleneck.</p></li><li><p>Availability of <b>satellite backup capacity</b> as an emergency fallback. While cable diversity is essential, resilience planning must also cover worst-case outages. The Non-Terrestrial Backup System Readiness metric measures a nation’s ability to sustain essential connectivity during major cable disruptions. LEO and MEO satellites, though costlier and lower capacity than cables, offer proven emergency backup during conflicts or disasters. Projects like HEIST explore hybrid space-submarine architectures to boost resilience. 
Key indicators include available satellite bandwidth, the number of NGSO providers under contract (for diversity), and the deployment of satellite terminals for public and critical infrastructure. Tracking these shows how well a nation can maintain command, relief operations, and basic connectivity if cables fail.</p></li></ul>
    <div>
      <h3>Inter-domain routing</h3>
      <a href="#inter-domain-routing">
        
      </a>
    </div>
    <p>The network layer above the physical interconnection infrastructure governs how traffic is routed across the Autonomous Systems (ASes). Failures or instability at this layer – such as misconfigurations, attacks, or control-plane outages – can disrupt connectivity even when the underlying physical infrastructure remains intact. In this layer, we look at resilience metrics that characterize the robustness and fault tolerance of AS-level routing and BGP behavior.</p><p><b>AS Path Diversity</b> measures the number and independence of AS-level routes between two points. High diversity provides alternative paths during failures, enabling BGP rerouting and maintaining connectivity. Low diversity leaves networks vulnerable to outages if a critical AS or link fails. Resilience depends on upstream topology.</p><ul><li><p>Single-homed ASes rely on one provider, which is cheaper and simpler but more fragile.</p></li><li><p>Multi-homed ASes use multiple upstreams, requiring BGP but offering far greater redundancy and performance at higher cost.</p></li></ul><p>The <b>share of multi-homed ASes</b> reflects an ecosystem’s overall resilience: higher rates signal greater protection from single-provider failures. This metric is easy to measure using <b>public BGP data</b> (e.g., RouteViews, RIPE RIS, CAIDA). Longitudinal BGP monitoring helps reveal hidden backup links that snapshots might miss.</p><p>Beyond multi-homing rates, <b>the distribution of single-homed ASes per transit provider</b> highlights systemic weak points. For each provider, counting customer ASes that rely exclusively on it reveals how many networks would be cut off if that provider fails. </p>
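<p>Both the multi-homing share and the per-provider single-homed counts fall out of a simple customer-to-upstream mapping. A sketch with a hypothetical mapping (documentation-range ASNs) of the kind one would infer from public BGP data such as CAIDA’s AS relationships:</p>

```python
from collections import Counter

# Hypothetical customer ASN -> set of upstream transit provider ASNs,
# as inferred from public BGP data (ASNs are illustrative).
upstreams = {
    64496: {64510},          # single-homed
    64497: {64510, 64511},   # multi-homed
    64498: {64510},          # single-homed
    64499: {64511, 64512},   # multi-homed
}

single_homed = {asn for asn, ups in upstreams.items() if len(ups) == 1}
multi_homed_share = 1 - len(single_homed) / len(upstreams)

# How many customers would be cut off if each provider failed
# (only its single-homed customers lose all transit):
exposure = Counter(next(iter(upstreams[asn])) for asn in single_homed)
```

<p>Here <code>exposure</code> is exactly the y-axis of the provider scatter plot discussed below: the number of networks fully dependent on each transit provider.</p>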
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ECZveUVwyM6TmGa1SaZnl/1222c7579c81fd62a5d8d80d63000ec3/image1.png" />
          </figure><p>The figure above shows Canadian transit providers for July 2025: the x-axis is total customer ASes, the y-axis is single-homed customers. Canada’s overall single-homing rate is 30%, with some providers serving many single-homed ASes, mirroring vulnerabilities seen during the <a href="https://en.wikipedia.org/wiki/2022_Rogers_Communications_outage"><u>2022 Rogers outage</u></a>, which disrupted over 12 million users.</p><p>While multi-homing metrics provide a valuable, static view of an ecosystem’s upstream topology, a more dynamic and nuanced understanding of resilience can be achieved by analyzing the characteristics of the actual BGP paths observed from global vantage points. These path-centric metrics move beyond simply counting connections to assess the diversity and independence of the routes to and from a country’s networks. These metrics include:</p><ul><li><p><b>Path independence</b> measures whether those alternative routes truly avoid shared bottlenecks. Multi-homing only helps if upstream paths are truly distinct. If two providers share upstream transit ASes, redundancy is weak. Independence can be measured with the Jaccard distance between AS paths. A stricter <b>path disjointness score</b> calculates the share of path pairs with no common ASes, directly quantifying true redundancy.</p></li><li><p><b>Transit entropy</b> measures how evenly traffic is distributed across transit providers. High Shannon entropy signals a decentralized, resilient ecosystem; low entropy shows dependence on few providers, even if nominal path diversity is high.</p></li><li><p><b>International connectivity ratios</b> evaluate the share of domestic ASes with direct international links. High percentages reflect a mature, distributed ecosystem; low values indicate reliance on a few gateways.</p></li></ul><p>The figure below encapsulates the aforementioned AS-level resilience metrics into single polar pie charts. 
For the purpose of exposition, we plot the metrics for two nations with very different resilience profiles.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/PKxDcl4m1XXCAuvFUcTdZ/d0bce797dcbd5e1baf39ca66e7ac0056/image4.png" />
</figure><p>To pinpoint critical ASes and potential single points of failure, graph centrality metrics can provide useful insights. <b>Betweenness Centrality (BC)</b> identifies nodes lying on many shortest paths, but applying it to BGP data suffers from vantage point bias. ASes that provide BGP data to the RouteViews and RIS collectors appear falsely central. <b>AS Hegemony</b>, developed by <a href="https://dl.acm.org/doi/10.1145/3123878.3131982"><u>Fontugne et al.</u></a>, corrects this by filtering biased viewpoints, producing a 0–1 score that reflects the true fraction of paths crossing an AS. It can be applied globally or locally to reveal Internet-wide or AS-specific dependencies.</p><p><b>Customer cone size</b>, developed by <a href="https://asrank.caida.org/about#customer-cone"><u>CAIDA</u></a>, offers another perspective, capturing an AS’s economic and routing influence via the set of networks it serves through customer links. Large cones indicate major transit hubs whose failure affects many downstream networks. However, global cone rankings can obscure regional importance, so <a href="https://www.caida.org/catalog/papers/2023_on_importance_being_as/on_importance_being_as.pdf"><u>country-level adaptations</u></a> give more accurate resilience assessments.</p>
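<p>The path-centric metrics introduced above (Jaccard distance between AS paths, the disjointness score, and transit entropy) can each be sketched in a few lines; the AS paths and provider counts here are illustrative inputs, not real measurements:</p>

```python
from itertools import combinations
from math import log2

def jaccard_distance(path_a, path_b):
    """1 - |intersection| / |union| of the ASes on two paths:
    0 = identical AS sets, 1 = no AS in common."""
    a, b = set(path_a), set(path_b)
    return 1 - len(a & b) / len(a | b)

def disjointness_score(paths):
    """Stricter measure: the share of path pairs with no AS in common."""
    pairs = list(combinations(paths, 2))
    disjoint = sum(1 for p, q in pairs if not set(p) & set(q))
    return disjoint / len(pairs)

def transit_entropy(routes_per_provider):
    """Shannon entropy (bits) of the route share per transit provider.
    Higher = routes spread evenly; lower = dependence on few providers."""
    total = sum(routes_per_provider.values())
    return -sum((c / total) * log2(c / total)
                for c in routes_per_provider.values() if c)
```

<p>For example, two paths that share two of their four distinct ASes have a Jaccard distance of 0.5, and two providers carrying equal route shares yield the maximum entropy of 1 bit for two entities.</p>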
    <div>
      <h4>Impact-Weighted Resilience Assessment</h4>
      <a href="#impact-weighted-resilience-assessment">
        
      </a>
    </div>
<p>Not all networks have the same impact when they fail. A small hosting provider going offline affects far fewer people than a national ISP going offline does. Traditional resilience metrics treat all networks equally, which can mask where the real risks are. To address this, we use impact-weighted metrics that factor in a network’s user base or infrastructure footprint. For example, by weighting multi-homing rates or path diversity by user population, we can see how many people actually benefit from redundancy — not just how many networks have it. Similarly, weighting by the number of announced prefixes highlights networks that carry more traffic or control more address space.</p><p>This approach helps separate theoretical resilience from practical resilience. A country might have many multi-homed networks, but if most users rely on just one single-homed ISP, its resilience is weaker than it looks. Impact weighting helps surface these kinds of structural risks so that operators and policymakers can prioritize improvements where they matter most.</p>
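<p>A toy example (with made-up user counts) shows how far the naive and impact-weighted views can diverge:</p>

```python
# Hypothetical per-network stats: (is_multi_homed, estimated users).
networks = [
    (True, 50_000),      # small multi-homed ISP
    (True, 80_000),      # small multi-homed ISP
    (False, 5_000_000),  # one dominant single-homed ISP
]

# Naive rate: 2 of 3 networks are multi-homed (~67%).
naive_rate = sum(m for m, _ in networks) / len(networks)

# Impact-weighted rate: the share of *users* behind a multi-homed
# network, here under 3% despite the healthy-looking naive rate.
weighted_rate = (sum(u for m, u in networks if m)
                 / sum(u for _, u in networks))
```

<p>Swapping user counts for announced prefix counts gives the address-space-weighted variant with the same code.</p>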
    <div>
      <h3>Metrics of network hygiene</h3>
      <a href="#metrics-of-network-hygiene">
        
      </a>
    </div>
    <p>Large Internet outages aren’t always caused by cable cuts or natural disasters — sometimes, they stem from routing mistakes or security gaps. Route hijacks, leaks, and spoofed announcements can disrupt traffic on a national scale. How well networks protect themselves against these incidents is a key part of resilience, and that’s where network hygiene comes in.</p><p>Network hygiene refers to the security and operational practices that make the global routing system more trustworthy. This includes:</p><ul><li><p><b>Cryptographic validation</b>, like RPKI, to prevent unauthorized route announcements. <b>ROA Coverage</b> measures the share of announced IPv4/IPv6 space with valid Route Origin Authorizations (ROAs), indicating participation in the RPKI ecosystem. <b>ROV Deployment</b> gauges how many networks drop invalid routes, but detecting active filtering is difficult. Policymakers can improve visibility by supporting independent measurements, data transparency, and standardized reporting.</p></li><li><p><b>Filtering and cooperative norms</b>, where networks block bogus routes and follow best practices when sharing routing information.</p></li><li><p><b>Consistent deployment across both domestic networks and their international upstreams</b>, since traffic often crosses multiple jurisdictions.</p></li></ul><p>Strong hygiene practices reduce the likelihood of systemic routing failures and limit their impact when they occur. We actively support and monitor the adoption of these mechanisms, for instance through <a href="https://isbgpsafeyet.com/"><u>crowd-sourced measurements</u></a> and public advocacy, because every additional network that validates routes and filters traffic contributes to a safer and more resilient Internet for everyone.</p><p>Another critical aspect of Internet hygiene is mitigating DDoS attacks, which often rely on IP address spoofing to amplify traffic and obscure the attacker’s origin. 
<a href="https://datatracker.ietf.org/doc/bcp38/"><u>BCP-38</u></a>, the IETF’s network ingress filtering recommendation, addresses this by requiring operators to block packets with spoofed source addresses, reducing a region’s role as a launchpad for global attacks. While BCP-38 does not prevent a network from being targeted, its deployment is a key indicator of collective security responsibility. Measuring compliance requires active testing from inside networks, which is carried out by the <a href="https://spoofer.caida.org/summary.php"><u>CAIDA Spoofer Project</u></a>. Although the global sample remains limited, these metrics offer valuable insight into both the technical effectiveness and the security engagement of a nation’s network community, complementing RPKI in strengthening the overall routing security posture.</p>
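<p>ROA Coverage, as defined above, is a ratio of address space rather than of prefixes, since a single /16 with a ROA matters more than many /24s. A minimal sketch with illustrative (documentation-range) prefixes:</p>

```python
from ipaddress import ip_network

# Announced prefixes for a hypothetical country, and the subset whose
# origin is authorized by a valid ROA (prefixes are illustrative).
announced = [ip_network(p) for p in
             ["192.0.2.0/24", "198.51.100.0/22", "203.0.113.0/24"]]
with_roa = [ip_network(p) for p in
            ["192.0.2.0/24", "198.51.100.0/22"]]

def address_count(nets):
    """Total number of IP addresses across a list of prefixes."""
    return sum(n.num_addresses for n in nets)

# Share of announced address space covered by ROAs (~83% here).
roa_coverage = address_count(with_roa) / address_count(announced)
```
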
    <div>
      <h3>Measuring the collective security posture</h3>
      <a href="#measuring-the-collective-security-posture">
        
      </a>
    </div>
    <p>Beyond securing individual networks through mechanisms like RPKI and BCP-38, strengthening the Internet’s resilience also depends on collective action and visibility. While origin validation and anti-spoofing reduce specific classes of threats, broader frameworks and shared measurement infrastructures are essential to address systemic risks and enable coordinated responses.</p><p>The <a href="https://manrs.org/"><u>Mutually Agreed Norms for Routing Security (MANRS)</u></a> initiative promotes Internet resilience by defining a clear baseline of best practices. It is not a new technology but a framework fostering collective responsibility for global routing security. MANRS focuses on four key actions: filtering incorrect routes, anti-spoofing, coordination through accurate contact information, and global validation using RPKI and IRRs. While many networks implement these independently, MANRS participation signals a public commitment to these norms and to strengthening the shared security ecosystem.</p><p>Additionally, a region’s participation in public measurement platforms reflects its Internet observability, which is essential for fault detection, impact assessment, and incident response. <a href="https://atlas.ripe.net/"><u>RIPE Atlas</u></a> and <a href="https://www.caida.org/projects/ark/"><u>CAIDA Ark</u></a> provide dense data-plane measurements; <a href="https://www.routeviews.org/routeviews/"><u>RouteViews</u></a> and <a href="https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris/"><u>RIPE RIS</u></a> collect BGP routing data to detect anomalies; and <a href="https://www.peeringdb.com/"><u>PeeringDB</u></a> documents interconnection details, reflecting operational maturity and integration into the global peering fabric. 
Together, these platforms underpin observatories like <a href="https://ioda.inetintel.cc.gatech.edu/"><u>IODA</u></a> and <a href="https://grip.oie.gatech.edu/home"><u>GRIP</u></a>, which combine BGP and active data to detect outages and routing incidents in near real time, offering critical visibility into Internet health and security.</p>
    <div>
      <h2>Building a more resilient Internet, together</h2>
      <a href="#building-a-more-resilient-internet-together">
        
      </a>
    </div>
    <p>Measuring Internet resilience is complex, but it's not impossible. By using publicly available data, we can create a transparent and reproducible framework to identify strengths, weaknesses, and single points of failure in any network ecosystem.</p><p>This isn't just a theoretical exercise. For policymakers, this data can inform infrastructure investment and pro-competitive policies that encourage diversity. For network operators, it provides a benchmark to assess their own resilience and that of their partners. And for everyone who relies on the Internet, it's a critical step toward building a more stable, secure, and reliable global network.</p><p><i>For more details of the framework, including a full table of the metrics and links to source code, please refer to the full paper: </i> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5376106"><u>Regional Perspectives for Route Resilience in a Global Internet: Metrics, Methodology, and Pathways for Transparency</u></a> published at <a href="https://www.tprcweb.com/tprc23program"><u>TPRC23</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Better Internet]]></category>
            <category><![CDATA[Routing Security]]></category>
            <category><![CDATA[Insights]]></category>
            <guid isPermaLink="false">48ry6RI3JhA9H3t280EWUX</guid>
            <dc:creator>Vasilis Giotsas</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Marwan Fayed</dc:creator>
        </item>
        <item>
            <title><![CDATA[Harnessing chaos in Cloudflare offices]]></title>
            <link>https://blog.cloudflare.com/harnessing-office-chaos/</link>
            <pubDate>Fri, 08 Mar 2024 14:00:24 GMT</pubDate>
            <description><![CDATA[ This blog post will cover the new sources of “chaos” that have been added to LavaRand and how you can make use of that harnessed chaos in your next application ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6VAXGAjHjvvY5IAEG63gPu/4c199f8bb127b03fe613ab8dc6c0016f/image12-1.png" />
            
</figure><p>In the children’s book <a href="https://en.wikipedia.org/wiki/The_Snail_and_the_Whale">The Snail and the Whale</a>, after an unexpectedly far-flung adventure, the principal character returns to declarations of “How time’s flown” and “Haven’t you grown?” It has been about four years since we last wrote about LavaRand, and during that time the story of how Cloudflare uses physical sources of entropy to add to the security of the Internet has continued to travel and be a source of interest to many. What was initially just a single species of physical entropy source – lava lamps – has grown and diversified. We want to catch you up a little on the story of LavaRand. This blog post will cover the new sources of “chaos” that have been added to LavaRand and how you can make use of that harnessed chaos in your next application. We’ll cover how publicly trusted randomness can open up new applications — imagine not needing to take the holders of a “random draw” at their word when they claim the outcome is not manipulated in some way. And finally, we’ll discuss timelock encryption, which is a way to ensure that a message cannot be decrypted until some chosen time in the future.</p>
    <div>
      <h2>LavaRand origins</h2>
      <a href="#lavarand-origins">
        
      </a>
    </div>
    <p>The entropy sourced from our wall of lava lamps in San Francisco has long played its part in the randomness that secures connections made through Cloudflare.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5XdIuQkEWKat2c9YanaCY0/aa873b127b5eea8cea19982f3552ccc2/image11-3.png" />
            
            </figure><p>Lava lamps with flowing wax.</p><p>Cloudflare’s servers collectively handle upwards of 55 million HTTP requests per second, the <a href="https://radar.cloudflare.com/adoption-and-usage#http-vs-https">vast majority of which are secured via the TLS protocol</a> to ensure authenticity and confidentiality. Under the hood, cryptographic protocols like TLS require an underlying source of secure randomness – otherwise, the security guarantees fall apart.</p><p>Secure randomness used in cryptography needs to be computationally indistinguishable from “true” randomness. For this, it must both pass <a href="https://en.wikipedia.org/wiki/Randomness_test">statistical randomness tests</a>, and the output needs to be unpredictable to any computationally-bounded adversary, no matter how much previous output they’ve already seen. The typical way to achieve this is to take some random ‘seed’ and feed it into a <a href="https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator"><i>Cryptographically Secure Pseudorandom Number Generator</i></a> (CSPRNG) that can produce an essentially-endless stream of unpredictable bytes upon request. The properties of a CSPRNG ensure that all outputs are practically indistinguishable from truly random outputs to anyone that does not know its internal state. However, this all depends on having a secure random seed to begin with. Take a look at <a href="/lavarand-in-production-the-nitty-gritty-technical-details">this blog</a> for more details on true randomness versus pseudorandomness, and this blog for some great examples of <a href="/why-randomness-matters">what can go wrong with insecure randomness</a>.</p><p>For many years, Cloudflare’s servers relied on local sources of entropy (such as the precise timing of packet arrivals or keyboard events) to seed their entropy pools. 
While there’s no reason to believe that the local entropy sources on those servers are insecure or could be easily compromised, we wanted to hedge our bets against that possibility. Our solution was to set up a system where our servers could periodically refresh their entropy pools with true randomness from an external source.</p><p>That brings us to LavaRand. “Lavarand” has long been the name given to <a href="https://en.wikipedia.org/wiki/Lavarand">systems used for the generation of randomness</a> (first by Silicon Graphics in 1997). Cloudflare <a href="/randomness-101-lavarand-in-production/">launched its instantiation of a LavaRand</a> system in 2017 as a system that collects entropy from the wall of lava lamps in our San Francisco office and makes it available via an internal API. Our servers then periodically query the API to retrieve fresh randomness from LavaRand and incorporate it into their entropy pools. The contributions made by LavaRand can be considered spice added to the entropy pool mix! (For more technical details on <a href="/lavarand-in-production-the-nitty-gritty-technical-details">contributions made by LavaRand</a>, read our previous blog post.)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1IPp9Lizp0pL83clLWGGNa/19d397787b5a5adbb337f581d9639fce/image10.jpg" />
            
            </figure><p>Lava lamps in Cloudflare’s San Francisco office.</p>
    <div>
      <h2>Adding to the office chaos</h2>
      <a href="#adding-to-the-office-chaos">
        
      </a>
    </div>
    <p>Our lava lamps in San Francisco have been working tirelessly for years to supply fresh entropy to our systems, but they now have siblings across the world to help with their task! As Cloudflare has grown, so has the variety of entropy sources found in and sourced from our offices. <a href="/cloudflare-top-100-most-loved-workplaces-in-2022">Cloudflare’s Places team works hard</a> to ensure that our offices reflect aspects of our values and culture. Several of our larger office locations include installations of physical systems of entropy, and it is these installations that we have worked to incorporate into LavaRand over time. The tangible and exciting draw of these systems is their basis in physical mechanics that we intuitively consider random. The gloops of warmed ascending “lava” floating past cooler sinking blobs within lava lamps attract our attention just as other unpredictable (and often beautiful) dynamic systems capture our interest.</p>
    <div>
      <h3>London’s unpredictable pendulums</h3>
      <a href="#londons-unpredictable-pendulums">
        
      </a>
    </div>
    <p>Visible to visitors of our London office is a wall of double pendulums whose beautiful swings translate to another source of entropy to LavaRand and to the pool of randomness that Cloudflare’s servers pull from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1JjgKso6GgfvLX74LEyYsE/7688dcdd10f3f3219f0c569724cb42ab/image8.jpg" />
            
            </figure><p>Close-up of double pendulum display in Cloudflare’s London office.</p><p>To the untrained eye the shadows of the pendulum stands and those cast by the rotating arms on the rear wall might seem chaotic. If so, then this installation should be labeled a success! Different light conditions and those shadows add to the chaos that is captured from this entropy source.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2JWrKSqoaPJQC2VSbHygj/e87c2936282e55730a1db4af7d4f7e7f/Screenshot-2024-03-08-at-13.13.12.png" />
            
</figure><p>Double pendulum display in Cloudflare’s London office with changing light conditions.</p><p>Indeed, even with these arms restricted to motion in two dimensions, the path traced by the arms is mesmerizingly varied, and can be shown to be <a href="https://en.wikipedia.org/wiki/Double_pendulum">mathematically chaotic</a>. Even if we ignore air resistance, temperature, and the wider environment, and assume that the motion is completely deterministic, the resulting long-term motion is still hard to predict. In particular, the system is very sensitive to initial conditions: the initial state – how the arms are set in motion – paired with deterministic behavior produces a unique path that is traced until the pendulum comes to rest and a Cloudflare employee in London sets the system in motion once again.</p>
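<p>That sensitivity to initial conditions can be demonstrated numerically. The toy simulation below uses the standard equations of motion for an idealized, frictionless double pendulum with unit masses and arm lengths (our own illustrative parameters, not a model of the actual installation): two runs whose starting angles differ by one billionth of a radian diverge dramatically within seconds.</p>

```python
from math import sin, cos, pi

G, M, L = 9.81, 1.0, 1.0  # gravity, masses, arm lengths (toy values)

def deriv(s):
    """Equations of motion for an idealized double pendulum with equal
    masses and lengths. State s = (theta1, omega1, theta2, omega2)."""
    t1, w1, t2, w2 = s
    d = t1 - t2
    den = M * (3.0 - cos(2.0 * d))
    dw1 = (-3.0 * G * M * sin(t1) - M * G * sin(t1 - 2.0 * t2)
           - 2.0 * sin(d) * M * (w2 * w2 * L + w1 * w1 * L * cos(d))) / (L * den)
    dw2 = (2.0 * sin(d) * (w1 * w1 * L * 2.0 * M + G * 2.0 * M * cos(t1)
           + w2 * w2 * L * M * cos(d))) / (L * den)
    return (w1, dw1, w2, dw2)

def rk4_step(s, dt):
    """One classic fourth-order Runge-Kutta integration step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

def simulate(theta1, steps=2000, dt=0.01):
    s = (theta1, 0.0, pi / 2, 0.0)  # release both arms from horizontal, at rest
    for _ in range(steps):
        s = rk4_step(s, dt)
    return s

a = simulate(pi / 2)
b = simulate(pi / 2 + 1e-9)  # perturb the first arm by a billionth of a radian
# After 20 simulated seconds the tiny perturbation has been amplified
# by many orders of magnitude.
print(abs(a[0] - b[0]))
```

Since no camera, clock, or human resets the display with perfectly identical initial conditions, each run of the physical system traces a path that is, for practical purposes, impossible to reproduce.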
    <div>
      <h3>Austin’s mesmerizing mobiles</h3>
      <a href="#austins-mesmerizing-mobiles">
        
      </a>
    </div>
    <p>The beautiful new Cloudflare office in Austin, Texas recently celebrated its first year since opening. This office contributes its own spin on physical entropy: suspended above the entrance of the Cloudflare office in downtown Austin is an installation of translucent rainbow mobiles. These twirl, reflecting the changing light, and cast coloured patterns on the enclosing walls. The display of hanging mobiles and their shadows are very sensitive to a physical environment which includes the opening and closing of doors, HVAC changes, and ambient light. This chaotic system’s mesmerizing and changing scene is captured periodically and fed into the stream of LavaRand randomness.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5mfXP2V8pX0C0CoheE369Q/83fe1b4bdba232b8c8c722bc49987bfe/Screenshot-2024-03-08-at-13.14.22.png" />
            
            </figure><p>Hanging rainbow mobiles in Cloudflare’s Austin office.</p>
    <div>
      <h2>Mixing new sources into LavaRand</h2>
      <a href="#mixing-new-sources-into-lavarand">
        
      </a>
    </div>
    <p>We incorporated the new sources of office chaos into the LavaRand system (still called LavaRand despite including much more than lava lamps) in the same way as the existing lava lamps, which we’ve previously <a href="/lavarand-in-production-the-nitty-gritty-technical-details">described in detail</a>.</p><p>To recap, at repeated intervals, a camera captures an image of the current state of the randomness display. Since the underlying system is truly random, the produced image contains true randomness. Even shadows and changing light conditions play a part in producing something unique and unpredictable! There is another secret that we should share: at a base level, image sensors in the real world are often a source of sufficient noise that even images taken with the lens cap still on could work well as a source of entropy! We consider this added noise to be a serendipitous addition to the beautiful chaotic motion of these installations.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7zrgpZA2xosqvTzU6dk8V/6e1f061640192f7de4585d7f2959f4a7/Screenshot-2024-03-08-at-13.16.23.png" />
            
            </figure><p>Close-up of hanging rainbow mobiles in Cloudflare’s Austin office.</p><p>Once we have a still image that captures the state of the randomness display at a particular point in time, we compute a compact representation – a hash – of the image to derive a fixed-sized output of truly random bytes.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VjuWFkK83t3EkTjPxYGc6/2ddc9da8c2553a8a1dbb04513de6acbd/image4-26.png" />
            
            </figure><p>Process of converting physical entropy displays into random byte strings.</p><p>The random bytes are then used as an input (along with the previous seed and some randomness from the system’s local entropy sources) to a <i>Key Derivation Function</i> (KDF) to compute a new randomness seed that is fed into a <a href="https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator"><i>Cryptographically Secure Pseudorandom Number Generator</i></a> (CSPRNG) that can produce an essentially-endless stream of unpredictable bytes upon request. The properties of a CSPRNG ensure that all outputs are practically indistinguishable from truly random outputs to anyone that does not know its internal state. LavaRand then exposes this stream of randomness via a simple internal API where clients can request fresh randomness.</p>
            <pre><code>seed = KDF(new image || previous seed || system randomness)
rng = CSPRNG(seed)
…
rand1 = rng.random()
rand2 = rng.random()</code></pre>
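<p>The pseudocode above can be made concrete with standard hash-based building blocks. The sketch below is our own illustration, not Cloudflare’s production code: it uses HMAC-SHA-256 in an HKDF-extract style as the KDF, and SHA-256 in counter mode as a toy CSPRNG.</p>

```python
import hashlib
import hmac
import os

def kdf(image: bytes, previous_seed: bytes, system_randomness: bytes) -> bytes:
    """Derive a fresh 32-byte seed from the camera frame plus existing
    entropy (an HKDF-extract-style step; a sketch, not the production KDF)."""
    return hmac.new(previous_seed, image + system_randomness,
                    hashlib.sha256).digest()

class ToyCSPRNG:
    """SHA-256 in counter mode: a stream of bytes unpredictable to anyone
    who doesn't know the seed. Illustrative only -- real systems use vetted
    constructions such as HMAC-DRBG or ChaCha20-based generators."""
    def __init__(self, seed: bytes):
        self.seed, self.counter = seed, 0

    def random(self, n: int = 32) -> bytes:
        out = b""
        while len(out) < n:
            out += hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")).digest()
            self.counter += 1
        return out[:n]

image = b"\x89PNG..."  # stand-in for a captured frame of the display
seed = kdf(image, os.urandom(32), os.urandom(32))
rng = ToyCSPRNG(seed)
rand1 = rng.random()
rand2 = rng.random()
assert rand1 != rand2  # successive outputs differ
```

Note that mixing the previous seed and local system randomness into the KDF means the output stays strong even if one input (say, a stuck camera) temporarily produces predictable data.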
            
    <div>
      <h2>How can I use LavaRand?</h2>
      <a href="#how-can-i-use-lavarand">
        
      </a>
    </div>
    <p>Applications typically use secure randomness in one of two flavors: private and public.</p><p><b>Private randomness</b> is used for generating passwords, cryptographic keys, user IDs, and other values that are meant to stay secret forever. As we’ve <a href="/lavarand-in-production-the-nitty-gritty-technical-details">previously described</a>, our servers periodically request fresh private randomness from LavaRand to help to update their entropy pools. Because of this, randomness from LavaRand is essentially available to the outside world! One easy way for developers to tap into private randomness from LavaRand is to use the <a href="https://developers.cloudflare.com/workers/runtime-apis/web-crypto/#methods">Web Crypto API’s getRandomValues function</a> from a Cloudflare Worker, or use one that someone has already built, like <a href="https://csprng.xyz/">csprng.xyz</a> (<a href="https://github.com/ejcx/csprng.xyz">source</a>).</p><p><b>Public randomness</b> consists of unpredictable and unbiased random values that are made available to everyone once they are published, and for this reason <b><i>should not be used for generating cryptographic keys</i></b>. The winning lottery numbers and the coin flip at the start of a sporting event are some examples of public random values. A double-headed coin would <i>not</i> be an unbiased and unpredictable source of entropy and would have drastic impacts on the sports betting world.</p><p>In addition to being unpredictable and unbiased, it’s also desirable for public randomness to be <i>trustworthy</i> so that consumers of the randomness are assured that the values were faithfully produced. Not many people would buy lottery tickets if they believed that the winning ticket was going to be chosen unfairly! 
Indeed, there are known cases of corrupt insiders subverting public randomness for personal gain, like the <a href="https://www.nytimes.com/interactive/2018/05/03/magazine/money-issue-iowa-lottery-fraud-mystery.html">state lottery employee</a> who co-opted the lottery random number generator, allowing his friends and family to win millions of dollars.</p><p>A fundamental challenge of public randomness is that one must trust the authority producing the random outputs. Trusting a well-known authority like <a href="https://beacon.nist.gov/home">NIST</a> may suffice for many applications, but could be problematic for others (especially for applications where decentralization is important).</p>
    <div>
      <h2>drand: distributed and verifiable public randomness</h2>
      <a href="#drand-distributed-and-verifiable-public-randomness">
        
      </a>
    </div>
    <p>To help solve this problem of trust, Cloudflare joined forces with seven other independent and geographically distributed organizations back in 2019 to form the <a href="/league-of-entropy/">League of Entropy</a> to launch a public randomness beacon using the <a href="/inside-the-entropy">drand</a> (pronounced dee-rand) protocol. Each organization contributes its own unique source of randomness into the joint pool of entropy used to seed the drand network – with Cloudflare using randomness from LavaRand, of course!</p><p>While the League of Entropy started out as an experimental network, with the guidance and support from the drand team at <a href="https://protocol.ai/">Protocol Labs</a>, it’s become a reliable and production-ready core Internet service, relied upon by applications ranging from <a href="https://spec.filecoin.io/libraries/drand/">distributed file storage</a> to <a href="https://twitter.com/etherplay/status/1734875536608882799">online gaming</a> to <a href="https://medium.com/tierion/tierion-joins-the-league-of-entropy-replaces-nist-randomness-beacon-with-drand-in-chainpoint-9f3c32f0cd9b">timestamped proofs</a> to <a href="https://drand.love/docs/timelock-encryption/">timelock encryption</a> (discussed further below). The League of Entropy has also grown, and there are now 18 organizations across four continents participating in the drand network.</p><p>The League of Entropy’s drand beacons (each of which runs with different parameters, such as how frequently random values are produced and whether the randomness is <i>chained</i> – more on this below) have two important properties that contribute to their trustworthiness: they are <i>decentralized</i> and <i>verifiable</i>. 
Decentralization ensures that one or two bad actors cannot subvert or bias the randomness beacon, and verifiability allows anyone to check that the random values are produced according to the drand protocol and with participation from a threshold (at least half, but usually more) of the participants in the drand network. Thus, with each new member, the trustworthiness and reliability of the drand network continues to increase.</p><p>We give a brief overview of how drand achieves these properties using distributed key generation and threshold signatures below, but for an in-depth dive see our <a href="/inside-the-entropy">previous blog post</a> and some of the <a href="https://drand.love/blog/">excellent posts</a> from the drand team.</p>
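<p>The threshold property rests on secret sharing: a secret is split so that any <i>t</i> shares reconstruct it, while fewer reveal nothing. Here is a minimal Shamir secret sharing sketch over a prime field, purely for intuition — drand’s actual distributed key generation and BLS threshold signing are considerably more involved.</p>

```python
import random

P = 2**61 - 1  # a Mersenne prime; the working field for this toy example

def split(secret: int, n: int, t: int):
    """Split `secret` into n shares; any t of them reconstruct it.
    Shares are points on a random degree-(t-1) polynomial f with f(0)=secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

secret = 123456789
shares = split(secret, n=9, t=7)
assert reconstruct(shares[:7]) == secret   # any 7 of the 9 shares suffice
assert reconstruct(shares[2:9]) == secret  # a different 7 work just as well
# Fewer than 7 shares yield a value unrelated to the secret.
```

In drand, the analogous “shares” are keypairs produced by the DKG, and “reconstruction” happens implicitly when partial BLS signatures are aggregated — the group secret key itself never exists in one place.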
    <div>
      <h3>Distributed key generation and threshold signatures</h3>
      <a href="#distributed-key-generation-and-threshold-signatures">
        
      </a>
    </div>
    <p>During the initial setup of a drand beacon, nodes in the network run a distributed key generation (DKG) protocol based on the <a href="https://en.wikipedia.org/wiki/Distributed_key_generation">Pedersen commitment scheme</a>, the result of which is that each node holds a “share” (a keypair) for a distributed group key, which remains fixed for the lifetime of the beacon. In order to do something useful with the group secret key like signing a message, at least a threshold (for example 7 out of 9) of nodes in the network must participate in constructing a <a href="https://en.wikipedia.org/wiki/BLS_digital_signature">BLS threshold signature</a>. The group information for the <a href="https://drand.love/blog/2023/10/16/quicknet-is-live/">quicknet</a> beacon on the League of Entropy’s mainnet drand network is shown below:</p>
            <pre><code>curl -s https://drand.cloudflare.com/52db9ba70e0cc0f6eaf7803dd07447a1f5477735fd3f661792ba94600c84e971/info | jq
{
  "public_key": "83cf0f2896adee7eb8b5f01fcad3912212c437e0073e911fb90022d3e760183c8c4b450b6a0a6c3ac6a5776a2d1064510d1fec758c921cc22b0e17e63aaf4bcb5ed66304de9cf809bd274ca73bab4af5a6e9c76a4bc09e76eae8991ef5ece45a",
  "period": 3,
  "genesis_time": 1692803367,
  "hash": "52db9ba70e0cc0f6eaf7803dd07447a1f5477735fd3f661792ba94600c84e971",
  "groupHash": "f477d5c89f21a17c863a7f937c6a6d15859414d2be09cd448d4279af331c5d3e",
  "schemeID": "bls-unchained-g1-rfc9380",
  "metadata": {
    "beaconID": "quicknet"
  }
}</code></pre>
            <p>(The hex value 52db9b… in the URL above is the hash of the beacon’s configuration. Visit <a href="https://drand.cloudflare.com/chains">https://drand.cloudflare.com/chains</a> to see all beacons supported by our mainnet drand nodes.)</p><p>The nodes in the network are configured to periodically (every 3s for quicknet) work together to produce a signature over some agreed-upon message, like the current round number and previous round signature (more on this below). Each node uses its share of the group key to produce a partial signature over the current round message, and broadcasts it to other nodes in the network. Once a node has enough partial signatures, it can aggregate them to produce a group signature for the given round.</p>
            <pre><code>curl -s https://drand.cloudflare.com/52db9ba70e0cc0f6eaf7803dd07447a1f5477735fd3f661792ba94600c84e971/public/13335 | jq
{
  "round": 13335,
  "randomness": "f4eb2e59448d155b1bc34337f2a4160ac5005429644ba61134779a8b8c6087b6",
  "signature": "a38ab268d58c04ce2d22b8317e4b66ecda5fa8841c7215bf7733af8dbaed6c5e7d8d60b77817294a64b891f719bc1b40"
}</code></pre>
            <p>The group signature for a round <i>is</i> the randomness (in the output above, the randomness value is simply the sha256 hash of the signature, for applications that prefer a shorter, fixed-size output). The signature is unpredictable in advance as long as enough of the nodes in the drand network (at least a majority, though the threshold can be configured to be higher) are honest and do not collude. Further, anyone can validate the signature for a given round using the beacon’s group public key. It’s recommended that developers use the drand client <a href="https://drand.love/developer/clients/">libraries</a> or <a href="https://drand.love/developer/drand-client/">CLI</a> to perform verification on every value obtained from the beacon.</p>
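<p>You can check the relationship between the signature and the randomness field yourself: hashing the hex-decoded signature from the round above with SHA-256 should reproduce its randomness value.</p>

```python
import hashlib

# Signature from quicknet round 13335, as returned by the API call above.
signature_hex = (
    "a38ab268d58c04ce2d22b8317e4b66ecda5fa8841c7215bf7733af8dbaed6c5e"
    "7d8d60b77817294a64b891f719bc1b40"
)
randomness = hashlib.sha256(bytes.fromhex(signature_hex)).hexdigest()
# Should equal the "randomness" field of the same round.
print(randomness)
```

(Note this only recomputes the hash; verifying that the signature itself is authentic still requires a BLS verification against the group public key, which the drand client libraries handle for you.)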
    <div>
      <h3>Chained vs unchained randomness</h3>
      <a href="#chained-vs-unchained-randomness">
        
      </a>
    </div>
    <p>When the League of Entropy launched its first generation of drand beacons in 2019, the per-round message over which the group signature was computed included the previous round’s signature. This creates a chain of randomness rounds all the way to the first “genesis” round. Chained randomness provides some nice properties for single-source randomness beacons, and is included as a requirement in <a href="https://csrc.nist.gov/projects/interoperable-randomness-beacons">NIST’s spec for interoperable public randomness beacons</a>.</p><p>However, back in 2022 the drand team introduced the notion of <a href="https://drand.love/blog/2022/02/21/multi-frequency-support-and-timelock-encryption-capabilities/#unchained-randomness-timed-encryption">unchained randomness</a>, where the message to be signed is <i>predictable</i> and doesn’t depend on any randomness from previous rounds, and showed that it provides the same security guarantees as chained randomness for the drand network (both require an honest threshold of nodes). In the implementation of unchained randomness in the <a href="https://drand.love/blog/2023/10/16/quicknet-is-live/">quicknet</a>, the message to be signed simply consists of the round number.</p>
            <pre><code># chained randomness
signature = group_sign(round || previous_signature)

# unchained randomness
signature = group_sign(round)</code></pre>
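<p>Concretely, drand nodes sign a digest of the round metadata rather than the raw values. In the sketch below we follow the pseudocode above and assume an 8-byte big-endian round number hashed with SHA-256; treat the exact concatenation order and encoding as illustrative, and consult the drand documentation for the authoritative format.</p>

```python
import hashlib

def chained_message(round_number: int, previous_signature: bytes) -> bytes:
    """Digest signed in the original chained scheme: it depends on the
    previous round's signature, so it cannot be known in advance."""
    return hashlib.sha256(
        round_number.to_bytes(8, "big") + previous_signature).digest()

def unchained_message(round_number: int) -> bytes:
    """Digest signed in unchained schemes like quicknet: a pure function
    of the round number, so anyone can compute it for any future round."""
    return hashlib.sha256(round_number.to_bytes(8, "big")).digest()

# The unchained message for a far-future round is computable today --
# the property that makes timelock encryption possible.
future = unchained_message(10_000_000)
assert unchained_message(10_000_000) == future  # deterministic
```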
            <p>Unchained randomness provides some powerful properties and usability improvements. In terms of usability, a consumer of the randomness beacon does not need to reconstruct the full chain of randomness back to the genesis round to fully validate a particular round – the only information needed is the current round number and the group public key. This provides much more flexibility for clients, as they can choose how frequently they consume randomness rounds without needing to continuously follow the randomness chain.</p><p>Since the messages to be signed are known in advance (they’re just the round number), unchained randomness also unlocks a powerful new property: timelock encryption.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PVw1hyLALNYG3p20U2f2D/eeac0fd2fe805cabc1b75055cc0b0076/image7-7.png" />
            
            </figure><p>Rotating double pendulums.</p>
    <div>
      <h2>Timelock encryption</h2>
      <a href="#timelock-encryption">
        
      </a>
    </div>
    <p>Timelock (or “timed-release”) encryption is a method for encrypting a message such that it cannot be decrypted until a certain amount of time has passed. Two basic approaches to timelock encryption were described by <a href="https://dspace.mit.edu/bitstream/handle/1721.1/149822/MIT-LCS-TR-684.pdf">Rivest, Shamir, and Wagner</a>:</p><p>There are two natural approaches to implementing timed release cryptography:</p><ul><li>Use “time-lock puzzles” – computational problems that cannot be solved without running a computer continuously for at least a certain amount of time.</li><li>Use trusted agents who promise not to reveal certain information until a specified date.</li></ul><p>Using trusted agents has the obvious problem of ensuring that the agents are trustworthy. Secret sharing approaches can be used to alleviate this concern.</p><p>The drand network is a group of independent agents using secret sharing for trustworthiness, and the ‘certain information’ not to be revealed until a specified date sounds a lot like the per-round randomness! We describe next how timelock encryption can be implemented on top of a drand network with unchained randomness, and finish with a practical demonstration. While we don’t delve into the bilinear groups and pairings-based cryptography that make this possible, if you’re interested we encourage you to read <a href="https://eprint.iacr.org/2023/189">tlock: Practical Timelock Encryption from Threshold BLS</a> by Nicolas Gailly, Kelsey Melissaris, and Yolan Romailler.</p>
    <div>
      <h3>How to timelock your secrets</h3>
      <a href="#how-to-timelock-your-secrets">
        
      </a>
    </div>
    <p>First, identify the randomness round that, once revealed, will allow your timelock-encrypted message to be decrypted. An important observation is that since drand networks produce randomness at fixed intervals, each round in a drand beacon is closely tied to a specific timestamp (modulo small delays for the network to actually produce the beacon), which can be easily computed by taking the beacon’s genesis timestamp and adding the round number multiplied by the beacon’s period.</p><p>Once the round is decided upon, the properties of bilinear groups allow you to encrypt your message to some round with the drand beacon’s group public key.</p>
            <pre><code>ciphertext = EncryptToRound(msg, round, beacon_public_key)</code></pre>
            <p>After the nodes in the drand network cooperate to derive the randomness for the round (really, just the signature on the round number using the beacon’s group secret key), <i>anyone</i> can decrypt the ciphertext (this is where the magic of bilinear groups comes in).</p>
            <pre><code>random = Randomness(round)
message = Decrypt(ciphertext,random)</code></pre>
            <p>To make this practical, the timelocked message is actually the secret key for a symmetric scheme: we encrypt the message with a symmetric key and timelock-encrypt only that key, allowing the full message to be decrypted in the future.</p><p>Now, for a practical demonstration of timelock encryption, we use a tool that one of our own engineers built on top of Cloudflare Workers. The <a href="https://github.com/thibmeu/tlock-worker">source code</a> is publicly available if you’d like to take a look under the hood at how it works.</p>
            <pre><code># 1. Create a file
echo "A message from the past to the future..." &gt; original.txt

# 2. Get the drand round 1 minute into the future (20 rounds) 
BEACON="52db9ba70e0cc0f6eaf7803dd07447a1f5477735fd3f661792ba94600c84e971"
ROUND=$(curl "https://drand.cloudflare.com/$BEACON/public/latest" | jq ".round+20")

# 3. Encrypt and require that round number
curl -X POST --data-binary @original.txt --output encrypted.pem https://tlock-worker.crypto-team.workers.dev/encrypt/$ROUND

# 4. Try to decrypt it (and only succeed 20 rounds x 3s later)
curl -X POST --data-binary @encrypted.pem --fail --show-error https://tlock-worker.crypto-team.workers.dev/decrypt</code></pre>
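<p>Step 2 above adds 20 rounds to the latest round to land roughly one minute in the future. Because the round schedule is fixed, the same arithmetic can be done offline from the beacon parameters shown earlier (genesis_time 1692803367, period 3 seconds). A sketch, assuming the convention that rounds are numbered from 1 at genesis — check the drand client libraries for the authoritative calculation:</p>

```python
GENESIS_TIME = 1692803367  # quicknet genesis timestamp, from the beacon info
PERIOD = 3                 # seconds between rounds

def time_of_round(round_number: int) -> int:
    """Earliest time at which this round's randomness can exist
    (assuming rounds are numbered from 1 at genesis)."""
    return GENESIS_TIME + (round_number - 1) * PERIOD

def round_at_time(timestamp: int) -> int:
    """Latest round whose randomness should be published by `timestamp`."""
    return (timestamp - GENESIS_TIME) // PERIOD + 1

latest = 13335        # e.g. the round fetched in step 2 of the demo
target = latest + 20  # 20 rounds x 3 s/round = 1 minute ahead
assert time_of_round(target) - time_of_round(latest) == 60
```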
            
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We hope you’ve enjoyed revisiting the tale of LavaRand as much as we have, and are inspired to visit one of Cloudflare’s offices in the future to see the randomness displays first-hand, and to use verifiable public randomness and timelock encryption from drand in your next project.</p><p>Chaos is required by the encryption that secures the Internet. LavaRand at Cloudflare will continue to turn the chaotic beauty of our physical world into a randomness stream – even as new sources are added – for novel uses all of us explorers – just like that snail – have yet to dream up.</p><p>And she gazed at the sky, the sea, the land,<br/>The waves and the caves and the golden sand.<br/>She gazed and gazed, amazed by it all,<br/>And she said to the whale, “I feel so small.”</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/aUx8oEz7t6W649nYlAmzD/f4658fe8a6b467804f2e6c21c9dec2cb/image1-30.png" />
            
            </figure><p>A snail on a whale.</p><div>
  
</div><p>Tune in for more news, announcements and thought-provoking discussions! Don't miss the full <a href="https://cloudflare.tv/shows/security-week">Security Week hub page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Security Week]]></category>
            <category><![CDATA[Randomness]]></category>
            <category><![CDATA[LavaRand]]></category>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2V4nElKOJ2taKnxH7Q9pw6</guid>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Thibault Meunier</dc:creator>
        </item>
        <item>
            <title><![CDATA[Privacy Pass: upgrading to the latest protocol version]]></title>
            <link>https://blog.cloudflare.com/privacy-pass-standard/</link>
            <pubDate>Thu, 04 Jan 2024 16:07:22 GMT</pubDate>
            <description><![CDATA[ In this post, we explore the latest changes to the Privacy Pass protocol. We are also excited to introduce a public implementation of the latest IETF draft of the Privacy Pass protocol — including a set of open-source templates that can be used to implement Privacy Pass Origins, Issuers, and Attesters. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2LZJxp89GI8PxGwGSPRQJL/9cfe61e756369dcad6cb78f5ad89ec1f/image9.png" />
            
            </figure>
    <div>
      <h2>Enabling anonymous access to the web with privacy-preserving cryptography</h2>
      <a href="#enabling-anonymous-access-to-the-web-with-privacy-preserving-cryptography">
        
      </a>
    </div>
    <p>The challenge of telling humans and bots apart is almost as old as the web itself. From online ticket vendors to dating apps, to ecommerce and finance — there are many legitimate reasons why you'd want to know if it's a person or a machine knocking on the front door of your website.</p><p>Unfortunately, the tools available to the web have traditionally been clunky and sometimes involved a bad user experience. None more so than the CAPTCHA — an irksome solution that humanity wastes a <a href="/introducing-cryptographic-attestation-of-personhood/">staggering</a> amount of time on. A more subtle, but still intrusive, approach is IP tracking, which uses IP addresses to identify and take action on suspicious traffic, but that too can come with <a href="/consequences-of-ip-blocking/">unforeseen consequences</a>.</p><p>And yet, the problem of distinguishing legitimate human requests from automated bots remains as vital as ever. This is why for years Cloudflare has invested in the Privacy Pass protocol — a novel approach to establishing a user’s identity by relying on cryptography, rather than crude puzzles — all while providing a streamlined, privacy-preserving, and often frictionless experience to end users.</p><p>Cloudflare began <a href="/cloudflare-supports-privacy-pass/">supporting Privacy Pass</a> in 2017, with the release of browser extensions for Chrome and Firefox. Web admins with their sites on Cloudflare would have Privacy Pass enabled in the Cloudflare Dash; users who installed the extension in their browsers would see fewer CAPTCHAs on websites they visited that had Privacy Pass enabled.</p><p>Since then, Cloudflare <a href="/end-cloudflare-captcha/">stopped issuing CAPTCHAs</a>, and Privacy Pass has come a long way. Apple uses a version of Privacy Pass for its <a href="https://developer.apple.com/news/?id=huqjyh7k">Private Access Tokens</a> system, which works in tandem with a device’s secure enclave to attest to a user’s humanity. 
And Cloudflare uses Privacy Pass as an important signal in our Web Application Firewall and Bot Management products — which means millions of websites natively offer Privacy Pass.</p><p>In this post, we explore the latest changes to the Privacy Pass protocol. We are also excited to introduce a public implementation of the latest IETF draft of the <a href="https://www.ietf.org/archive/id/draft-ietf-privacypass-protocol-16.html">Privacy Pass protocol</a> — including a <a href="https://github.com/cloudflare?q=pp-&amp;type=all&amp;language=&amp;sort=#org-repositories">set of open-source templates</a> that can be used to implement Privacy Pass <a href="https://github.com/cloudflare/pp-origin"><i>Origins</i></a><i>,</i> <a href="https://github.com/cloudflare/pp-issuer"><i>Issuers</i></a>, and <a href="https://github.com/cloudflare/pp-attester"><i>Attesters</i></a>. These are based on Cloudflare Workers, and are the easiest way to get started with a new deployment of Privacy Pass.</p><p>To complement the updated implementations, we are releasing a new version of our Privacy Pass browser extensions (<a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>, <a href="https://chromewebstore.google.com/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a>), which are rolling out under the name <i>Silk - Privacy Pass Client</i>. Users of these extensions can expect to see fewer bot-checks around the web, and will be contributing to research about privacy-preserving signals via a set of trusted attesters, which can be configured in the extension’s settings panel.</p><p>Finally, we will discuss how Privacy Pass can be used for an array of scenarios beyond differentiating bot from human traffic.</p><p><b>Notice to our users</b></p><ul><li><p>If you use the Privacy Pass API that controls Privacy Pass configuration on Cloudflare, you can remove these calls. 
This API is no longer needed since Privacy Pass is now included by default in our Challenge Platform. Out of an abundance of caution for our customers, we are doing a <a href="https://developers.cloudflare.com/fundamentals/api/reference/deprecations/">four-month deprecation notice</a>.</p></li><li><p>If you have the Privacy Pass extension installed, it should automatically update to <i>Silk - Privacy Pass Client</i> (<a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>, <a href="https://chromewebstore.google.com/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a>) over the next few days. We have renamed it to keep the distinction clear between the protocol itself and a client of the protocol.</p></li></ul>
    <div>
      <h2>Brief history</h2>
      <a href="#brief-history">
        
      </a>
    </div>
    <p>In the last decade, we've seen the <a href="/next-generation-privacy-protocols/">rise of protocols</a> with privacy at their core, including <a href="/building-privacy-into-internet-standards-and-how-to-make-your-app-more-private-today/">Oblivious HTTP (OHTTP)</a>, <a href="/deep-dive-privacy-preserving-measurement/">Distributed aggregation protocol (DAP)</a>, and <a href="/unlocking-quic-proxying-potential/">MASQUE</a>. These protocols improve privacy when browsing and interacting with services online. By protecting users' privacy, these protocols also ask origins and website owners to revise their expectations around the data they can glean from user traffic. This might lead them to reconsider existing assumptions and mitigations around suspicious traffic, such as <a href="/consequences-of-ip-blocking/">IP filtering</a>, which often has unintended consequences.</p><p>In 2017, Cloudflare announced <a href="/cloudflare-supports-privacy-pass/">support for Privacy Pass</a>. At launch, this meant improving content accessibility for web users who would see a lot of interstitial pages (such as <a href="https://www.cloudflare.com/learning/bots/how-captchas-work/">CAPTCHAs</a>) when browsing websites protected by Cloudflare. Privacy Pass tokens provide a signal about the user’s capabilities to website owners while protecting their privacy by ensuring each token redemption is unlinkable to its issuance context. Since then, the technology has turned into a <a href="https://datatracker.ietf.org/wg/privacypass/documents/">fully fledged protocol</a> used by millions thanks to academic and industry effort. The existing browser extension accounts for hundreds of thousands of downloads. 
Over the same period, Cloudflare has dramatically evolved the way it allows customers to challenge their visitors, becoming <a href="/end-cloudflare-captcha/">more flexible about the signals</a> it receives, and <a href="/turnstile-ga/">moving away from CAPTCHA</a> as a binary legitimacy signal.</p><p>Deployments of this research have broadened the use cases, opening the door to different kinds of attestation. An attestation is a cryptographically signed data point vouching for some fact. This can be a signed token indicating that the user has successfully solved a CAPTCHA, a statement from a user’s hardware that it is untampered, or a piece of data that an attester can verify against another data source.</p><p>For example, in 2022, Apple hardware devices began to offer Privacy Pass tokens to websites that wanted to reduce how often they show CAPTCHAs, by using the hardware itself as an attestation factor. Before showing images of buses and fire hydrants to users, CAPTCHA providers can request a <a href="https://developer.apple.com/news/?id=huqjyh7k">Private Access Token</a> (PAT). This native support requires no extension installation or user action, and gives a smoother and more private web browsing experience.</p><p>Below is a brief overview of the protocol changes we participated in:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3YImfph78oDPj3kgEcyvV6/37bcd89ffcfff8b636b00c8e931f3218/image8.png" />
            
            </figure><p>The timeline presents cryptographic changes, community inputs, and industry collaborations. These changes helped shape better standards for the web, such as VOPRF (<a href="https://www.rfc-editor.org/rfc/rfc9497">RFC 9497</a>) and RSA Blind Signatures (<a href="https://www.rfc-editor.org/rfc/rfc9474">RFC 9474</a>). In the next sections, we dive into the Privacy Pass protocol to understand its ins and outs.</p>
    <div>
      <h2>Anonymous credentials in real life</h2>
      <a href="#anonymous-credentials-in-real-life">
        
      </a>
    </div>
    <p>Before explaining the protocol in more depth, let's use an analogy. You are at a music festival. You bought your ticket online with a student discount. When you arrive at the gates, an agent scans your ticket, checks your student status, and gives you a yellow wristband and two drink tickets.</p><p>During the festival, you go in and out by showing your wristband. When a friend asks you to grab a drink, you pay with your tickets. One for your drink and one for your friend. You give your tickets to the bartender, they check the tickets, and give you a drink. The characteristic that makes this interaction private is that the drink tickets cannot be traced back to you or your payment method, yet they can still be verified as unused and valid for the purchase of a drink.</p><p>In the web use case, the Internet is a festival. When you arrive at the gates of a website, an agent scans your request, and gives you a session cookie as well as two Privacy Pass tokens. They could have given you just one token, or more than two, but in our example ‘two tokens’ is the given website’s policy. You can use these tokens to attest your humanity, to authenticate on certain websites, or even to confirm the legitimacy of your hardware.</p><p>Now, you might wonder: if this is a technique we have been using for years, why do we need fancy cryptography and standardization efforts? Well, unlike at a real-world music festival where most people don’t carry around photocopiers, on the Internet it is pretty easy to copy tokens. For instance, how do we stop people using a token twice? We could put a unique number on each token, and check it is not spent twice, but that would allow the gate attendant to tell the bartender which numbers were linked to which person. So, we need cryptography.</p><p>When another website presents a challenge to you, you provide your Privacy Pass token and are then allowed to view a gallery of beautiful cat pictures. 
The difference from the festival is that this challenge might be interactive, similar to the bartender giving you a numbered ticket that has to be signed by the agent before you get a drink. The website owner can verify that the token is valid but has no way of tracing or connecting the user back to the action that provided them with the Privacy Pass tokens. In Privacy Pass terminology, you are a Client, the website is an Origin, the agent is an Attester, and the bar is an Issuer. The next section goes through these in more detail.</p>
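<p>The naive fix described above, numbering each token and rejecting reuse, can be sketched to show exactly where it leaks. All names here are illustrative:</p>

```javascript
// Naive numbered tokens from the festival analogy: double-spends are caught,
// but the issuer's ledger links every token to a person. That linkability is
// exactly what Privacy Pass's cryptography removes.
class NaiveTicketBooth {
  constructor() {
    this.issuedTo = new Map(); // token number -> person: the privacy leak
    this.spent = new Set();
  }
  issue(person, tokenNumber) {
    this.issuedTo.set(tokenNumber, person);
  }
  redeem(tokenNumber) {
    if (!this.issuedTo.has(tokenNumber) || this.spent.has(tokenNumber)) {
      return false; // unknown token or double-spend
    }
    this.spent.add(tokenNumber);
    return true;
  }
}

const booth = new NaiveTicketBooth();
booth.issue('alice', 42);
console.log(booth.redeem(42)); // true: first spend succeeds
console.log(booth.redeem(42)); // false: double-spend rejected
// The leak: anyone holding the ledger can ask who held token 42.
console.log(booth.issuedTo.get(42)); // 'alice'
```

<p>Privacy Pass replaces this ledger with cryptography: tokens remain verifiable and single-use without being traceable back to their issuance.</p>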
    <div>
      <h2>Privacy Pass protocol</h2>
      <a href="#privacy-pass-protocol">
        
      </a>
    </div>
    <p>Privacy Pass specifies an extensible protocol for creating and redeeming anonymous and transferable tokens. In fact, Apple has its own implementation with Private Access Tokens (PAT), and later we will describe another implementation with the Silk browser extension. Because PAT was the first implementation of the IETF-defined protocol, Privacy Pass is sometimes referred to as PAT in the literature.</p><p>The protocol is generic, and defines four components:</p><ul><li><p>Client: Web user agent with a Privacy Pass enabled browser. This could be your <a href="/eliminating-captchas-on-iphones-and-macs-using-new-standard/">Apple device with PAT</a>, or your web browser with <a href="https://github.com/cloudflare/pp-browser-extension">the Silk extension installed</a>. Typically, this is the actor who is requesting content and is asked to share some attribute of themselves.</p></li><li><p>Origin: Serves content requested by the Client. The Origin trusts one or more Issuers, and presents Privacy Pass challenges to the Client. For instance, Cloudflare Managed Challenge is a Privacy Pass origin serving two Privacy Pass challenges: one for the Apple PAT Issuer, one for the Cloudflare Research Issuer.</p></li><li><p>Issuer: Signs Privacy Pass tokens upon request from a trusted party, either an Attester or a Client depending on the deployment model. Different Issuers have their own set of trusted parties, depending on the security level they are looking for, as well as their privacy considerations. An Issuer validating device integrity should accept multiple attestation methods that vouch for this attribute, to account for the diversity of Client configurations.</p></li><li><p>Attester: Verifies an attribute of the Client and, when satisfied, requests a signed Privacy Pass token from the Issuer to pass back to the Client. Before vouching for the Client, an Attester may ask the Client to complete a specific task. 
This task could be a CAPTCHA, a location check, age verification, or some other check that yields a single binary result. The Privacy Pass token then conveys this one bit of information in an unlinkable manner.</p></li></ul><p>They interact as illustrated below.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7tX1xRQv6Ltif1NRj2fCOa/eeb412fa39d73e2232f4b062d95cd708/Frame-699-1-.png" />
            
            </figure><p>Let's dive into what's really happening with an example. The User wants to access an Origin, say store.example.com. This website has suffered attacks or abuse in the past, and the site is using Privacy Pass to help avoid these going forward. To that end, the Origin returns <a href="https://www.rfc-editor.org/rfc/rfc9110#field.www-authenticate">an authentication request</a> to the Client: <code>WWW-Authenticate: PrivateToken challenge="A==",token-key="B=="</code>. In this way, the Origin signals that it accepts tokens from the Issuer with public key “B==” to satisfy the challenge. That Issuer in turn trusts reputable Attesters to vouch for the Client not being an attacker by means of a cookie, a CAPTCHA, a Turnstile challenge, or a <a href="/introducing-cryptographic-attestation-of-personhood/">CAP challenge</a>, for example. For accessibility reasons in our example, let us say that the Client prefers the Turnstile method. The User’s browser prompts them to solve a Turnstile challenge. On success, it contacts the Issuer “B==” with that solution, and then replays the initial request to store.example.com, this time sending along the token header <code>Authorization: PrivateToken token="C=="</code>, which the Origin accepts before returning the desired content to the Client. And that’s it.</p><p>We’ve described the Privacy Pass authentication protocol. While Basic authentication (<a href="https://www.rfc-editor.org/rfc/rfc7617">RFC 7617</a>) asks you for a username and a password, the PrivateToken authentication scheme allows the browser to be more flexible on the type of check, while retaining privacy. The Origin store.example.com does not know your attestation method; it just knows you are reputable according to the token issuer. In the same spirit, the Issuer "B==" does not see your IP address, nor the website you are visiting. 
This separation between issuance and redemption, also referred to as unlinkability, is what <a href="https://www.ietf.org/archive/id/draft-ietf-privacypass-architecture-16.html">makes Privacy Pass private</a>.</p>
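<p>To make the header exchange concrete, here is a minimal, illustrative parser for the PrivateToken challenge shown above. It is regex-based and deliberately simplified; a real client should follow the full auth-param grammar of RFC 9110:</p>

```javascript
// Parse a WWW-Authenticate value like:
//   PrivateToken challenge="A==",token-key="B=="
// Returns the challenge and the Issuer public key the Origin accepts,
// or null if this is not a PrivateToken challenge.
function parsePrivateTokenChallenge(headerValue) {
  const m = headerValue.match(
    /PrivateToken\s+challenge="([^"]*)",\s*token-key="([^"]*)"/
  );
  if (!m) return null;
  return { challenge: m[1], tokenKey: m[2] };
}

const parsed = parsePrivateTokenChallenge(
  'PrivateToken challenge="A==",token-key="B=="'
);
console.log(parsed.challenge); // 'A=='
console.log(parsed.tokenKey); // 'B==': identifies the Issuer to use
```

<p>The token-key is what lets the Client work out, via its Attesters, which Issuer to request a token from.</p>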
    <div>
      <h2>Demo time</h2>
      <a href="#demo-time">
        
      </a>
    </div>
    <p>To put the above into practice, let’s see how the protocol works with Silk, a browser extension providing Privacy Pass support. First, download the relevant <a href="https://chromewebstore.google.com/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a> or <a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a> extension.</p><p>Then, head to <a href="https://demo-pat.research.cloudflare.com/login">https://demo-pat.research.cloudflare.com/login</a>. The page returns a <code>401 Privacy Pass Token not presented</code> response: the Origin expects you to perform a PrivateToken authentication. If you don’t have the extension installed, the flow stops here. If you do, the extension orchestrates the flow required to get you a token requested by the Origin.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZPDrhytZNVoB81Q7RILu5/7c115c9ed069aa09694373ec1adcc4d0/image10.png" />
            
            </figure><p>With the extension installed, you are directed to a new tab <a href="https://pp-attester-turnstile.research.cloudflare.com/challenge">https://pp-attester-turnstile.research.cloudflare.com/challenge</a>. This is a page provided by an Attester able to deliver you a token signed by the Issuer requested by the Origin. In this case, the Attester checks you’re able to solve a Turnstile challenge.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fmDWo3548oMK8jgZ7V0Kd/94ee9ab9bc1df6fee6e6a76dc4fb3e02/image2.png" />
            
            </figure><p>You click, and that’s it. The Turnstile challenge solution is sent to the Attester which, upon validation, sends back a token from the requested Issuer. This page appears only briefly: once the extension has the token, the challenge page is no longer needed.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3KROIlp9njiXlfceDzRU7W/d1e306da3012c949e3fa5b80934f83a4/image11.png" />
            
            </figure><p>The extension, now holding a token requested by the Origin, sends your initial request a second time, with an Authorization header containing a valid PrivateToken from the Issuer. Upon validation, the Origin allows you in with a <code>200 Privacy Pass Token valid</code>!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3qOSkMc5wIqS50CuNNNoZY/b36b88ba01ffa1c5f4d78727e602062f/image3.png" />
            
            </figure><p>If you want to check behind the scenes, you can right-click on the extension logo and go to the preference/options page. It contains a list of attesters trusted by the extension, one per line. You can add your own attestation method (API described below). This allows the Client to decide on their preferred attestation methods.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/78BCHYQuOBC2aFlnPshu83/c6ee6b54d1d24b6f92f34577267a1146/image7.png" />
            
            </figure>
    <div>
      <h2>Privacy Pass protocol — extended</h2>
      <a href="#privacy-pass-protocol-extended">
        
      </a>
    </div>
    <p>The Privacy Pass protocol is new and not a standard yet, which means it’s not yet uniformly supported across platforms. To improve flexibility beyond the existing standard proposal, we are introducing two mechanisms: an API for Attesters, and a replay API for web clients. The API for Attesters allows developers to build new attestation methods, which only need to expose their URL to interface with the Silk browser extension. The replay API for web clients is a mechanism that enables websites to cooperate with the extension to make PrivateToken authentication work in Chrome-based browsers.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2TLz1CPx9OHczqLabCRmyc/c54b0b4bb637a97812c637ca0eebc78c/image12.png" />
            
            </figure><p>Because more than one Attester may be supported on your machine, your Client needs to understand which Attester to use depending on the requested Issuer. As mentioned before, you as the Client do not communicate directly with the Issuer: you don’t necessarily know its relation with the Attester, so you cannot retrieve its public key. To this end, the Attester API exposes all Issuers reachable by that Attester via an endpoint: <code>/v1/private-token-issuer-directory</code>. This way, your Client selects an appropriate Attester - one in relation with an Issuer that the Origin trusts - before triggering a validation.</p><p>In addition, we propose a replay API. Its goal is to allow clients to fetch a resource a second time if the first response presented a Privacy Pass challenge. Some platforms do this automatically, like Silk on Firefox, but some don’t. That’s the case with the Silk Chrome extension for instance, which in its support of <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/manifest.json/manifest_version">manifest v3</a> cannot block requests and only supports Basic authentication in the onAuthRequired extension event. The Privacy Pass authentication scheme requires the request to be sent once to get a challenge, and then a second time to get the actual resource. Between these requests to the Origin, the platform orchestrates the issuance of a token. To keep clients informed about the state of this process, we introduce a <code>private-token-client-replay: UUID</code> header alongside WWW-Authenticate. Using a platform-defined endpoint, this UUID informs web clients of the current state of authentication: pending, fulfilled, or not-found.</p><p>To learn more about how you can use these today, and to deploy your own attestation method, read on.</p>
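<p>The directory lookup can be sketched as follows. The Attester URLs are hypothetical, and the directory objects use the JSON shape served at /v1/private-token-issuer-directory; in practice each directory would be fetched from its Attester:</p>

```javascript
// Sketch: pick an Attester able to reach the Issuer the Origin asked for.
// `directories` maps an Attester URL to the JSON it serves at
// /v1/private-token-issuer-directory (fetched beforehand).
function selectAttester(directories, requestedTokenKey) {
  for (const [attesterUrl, directory] of Object.entries(directories)) {
    const issuers = Object.values(directory);
    const canReach = issuers.some((issuer) =>
      issuer['token-keys'].some((k) => k['token-key'] === requestedTokenKey)
    );
    if (canReach) return attesterUrl;
  }
  return null; // no configured Attester is in relation with that Issuer
}

const directories = {
  'https://attester-a.example': {
    issuer1: { 'token-keys': [{ 'token-type': 2, 'token-key': 'A==' }] },
  },
  'https://attester-b.example': {
    issuer2: { 'token-keys': [{ 'token-type': 2, 'token-key': 'B==' }] },
  },
};
// The challenge's token-key "B==" is only reachable via attester-b.
console.log(selectAttester(directories, 'B==')); // 'https://attester-b.example'
```

<p>Once an Attester is selected, the platform drives attestation and issuance, and the replay API's pending/fulfilled/not-found states tell the web client when to retry the original request.</p>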
    <div>
      <h2>How to use Privacy Pass today?</h2>
      <a href="#how-to-use-privacy-pass-today">
        
      </a>
    </div>
    <p>As seen in the section above, Privacy Pass is structured around four components: Origin, Client, Attester, Issuer. That’s why we created four repositories: <a href="https://github.com/cloudflare/pp-origin">cloudflare/pp-origin</a>, <a href="https://github.com/cloudflare/pp-browser-extension">cloudflare/pp-browser-extension</a>, <a href="https://github.com/cloudflare/pp-attester">cloudflare/pp-attester</a>, <a href="https://github.com/cloudflare/pp-issuer">cloudflare/pp-issuer</a>. In addition, the underlying cryptographic libraries are available at <a href="https://github.com/cloudflare/privacypass-ts">cloudflare/privacypass-ts</a>, <a href="https://github.com/cloudflare/blindrsa-ts">cloudflare/blindrsa-ts</a>, and <a href="https://github.com/cloudflare/voprf-ts">cloudflare/voprf-ts</a>. In this section, we dive into how to use each one of these depending on your use case.</p><blockquote><p>Note: All examples below are written in JavaScript and target Cloudflare Workers. Privacy Pass is also implemented in <a href="https://github.com/ietf-wg-privacypass/base-drafts#existing-implementations">other languages</a> and can be deployed with a configuration that suits your needs.</p></blockquote>
    <div>
      <h3>As an Origin - website owners, service providers</h3>
      <a href="#as-an-origin-website-owners-service-providers">
        
      </a>
    </div>
    <p>You are an online service that people critically rely upon (health or messaging for instance). You want to provide private payment options to users to maintain your users’ privacy. You only have one subscription tier at $10 per month. You have <a href="https://datatracker.ietf.org/doc/html/draft-davidson-pp-architecture-00#autoid-60">heard</a> people are making privacy preserving apps, and want to use the latest version of Privacy Pass.</p><p>To access your service, users are required to prove they've paid for the service through a payment provider of their choosing (that you deem acceptable). This payment provider acknowledges the payment and requests a token for the user to access the service. As a sequence diagram, it looks as follows:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3CDt5NsDY4c2DuYbggdleT/c2084b1b7cb141a8b528de78392833b3/image4.png" />
            
            </figure><p>To implement it in Workers, we rely on the <a href="https://www.npmjs.com/package/@cloudflare/privacypass-ts"><code>@cloudflare/privacypass-ts</code></a> library, which can be installed by running:</p>
            <pre><code>npm i @cloudflare/privacypass-ts</code></pre>
            <p>This section focuses on the Origin. We assume you have an Issuer up and running, as described in a later section.</p><p>The Origin defines two flows:</p><ol><li><p>User redeeming a token</p></li><li><p>User requesting a token issuance</p></li></ol>
            <pre><code>import { TokenResponse } from '@cloudflare/privacypass-ts'

// Placeholder: in practice, configure the Issuer's public key material
const issuer = { publicKey: 'static issuer key' }

const handleRedemption = (req) =&gt; {
    const token = TokenResponse.parse(req.headers.get('authorization'))
    const isValid = token.verify(issuer.publicKey)
    return isValid
        ? new Response('Privacy Pass Token valid')
        : new Response('Invalid token', { status: 401 })
}

const handleIssuance = (req) =&gt; {
    return new Response('Please pay to access the service', {
        status: 401,
        headers: { 'www-authenticate': 'PrivateToken challenge=, token-key=, max-age=300' }
    })
}

const handleAuth = (req) =&gt; {
    const authorization = req.headers.get('authorization')
    if (authorization?.startsWith(`PrivateToken token=`)) {
        return handleRedemption(req)
    }
    return handleIssuance(req)
}

export default {
    fetch(req: Request) {
        return handleAuth(req)
    }
}</code></pre>
            <p>From the user’s perspective, the overhead is minimal. Their client (possibly the Silk browser extension) receives a WWW-Authenticate header with the information required for token issuance. Then, depending on their client configuration, they are taken to the payment provider of their choice to validate their access to the service.</p><p>With a successful response to the PrivateToken challenge, a session is established, and the traditional web service flow continues.</p>
    <div>
      <h3>As an Attester - CAPTCHA providers, authentication providers</h3>
      <a href="#as-an-attester-captcha-providers-authentication-provider">
        
      </a>
    </div>
    <p>You are the author of a new attestation method, such as <a href="/introducing-cryptographic-attestation-of-personhood/">CAP</a>, a new CAPTCHA mechanism, or a new way to validate cookie consent. You know that website owners already use Privacy Pass to trigger such challenges on the user side, and an Issuer is willing to trust your method because it guarantees a high security level. In addition, because of the Privacy Pass protocol, you never see which website your attestation is being used for.</p><p>So you decide to expose your attestation method as a Privacy Pass Attester. An Issuer with public key B== trusts you, and that's the Issuer you are going to request a token from. You can check that with the Yes/No Attester below, whose code is on the <a href="https://cloudflareworkers.com/#eedc5a7a6560c44b23a24cc1414b29d7:https://tutorial.cloudflareworkers.com/v1/challenge">Cloudflare Workers playground</a>.</p>
            <pre><code>const ISSUER_URL = 'https://pp-issuer-public.research.cloudflare.com/token-request'

const b64ToU8 = (b) =&gt;  Uint8Array.from(atob(b), c =&gt; c.charCodeAt(0))

const handleGetChallenge = (req) =&gt; {
    return new Response(`
    &lt;html&gt;
    &lt;head&gt;
      &lt;title&gt;Challenge Response&lt;/title&gt;
    &lt;/head&gt;
    &lt;body&gt;
    	&lt;button onclick="sendResponse('Yes')"&gt;Yes&lt;/button&gt;
		&lt;button onclick="sendResponse('No')"&gt;No&lt;/button&gt;
	&lt;/body&gt;
	&lt;script&gt;
	function sendResponse(choice) {
		fetch(location.href, { method: 'POST', headers: { 'private-token-attester-data': choice } })
	}
	&lt;/script&gt;
	&lt;/html&gt;
	`, { status: 401, headers: { 'content-type': 'text/html' } })
}

const handlePostChallenge = (req) =&gt; {
    const choice = req.headers.get('private-token-attester-data')
    if (choice !== 'Yes') {
        return new Response('Unauthorised', { status: 401 })
    }

    // hardcoded token request
    // debug here https://pepe-debug.research.cloudflare.com/?challenge=PrivateToken%20challenge=%22AAIAHnR1dG9yaWFsLmNsb3VkZmxhcmV3b3JrZXJzLmNvbSBE-oWKIYqMcyfiMXOZpcopzGBiYRvnFRP3uKknYPv1RQAicGVwZS1kZWJ1Zy5yZXNlYXJjaC5jbG91ZGZsYXJlLmNvbQ==%22,token-key=%22MIIBUjA9BgkqhkiG9w0BAQowMKANMAsGCWCGSAFlAwQCAqEaMBgGCSqGSIb3DQEBCDALBglghkgBZQMEAgKiAwIBMAOCAQ8AMIIBCgKCAQEApqzusqnywE_3PZieStkf6_jwWF-nG6Es1nn5MRGoFSb3aXJFDTTIX8ljBSBZ0qujbhRDPx3ikWwziYiWtvEHSLqjeSWq-M892f9Dfkgpb3kpIfP8eBHPnhRKWo4BX_zk9IGT4H2Kd1vucIW1OmVY0Z_1tybKqYzHS299mvaQspkEcCo1UpFlMlT20JcxB2g2MRI9IZ87sgfdSu632J2OEr8XSfsppNcClU1D32iL_ETMJ8p9KlMoXI1MwTsI-8Kyblft66c7cnBKz3_z8ACdGtZ-HI4AghgW-m-yLpAiCrkCMnmIrVpldJ341yR6lq5uyPej7S8cvpvkScpXBSuyKwIDAQAB%22
    const body = b64ToU8('AALoAYM+fDO53GVxBRuLbJhjFbwr0uZkl/m3NCNbiT6wal87GEuXuRw3iZUSZ3rSEqyHDhMlIqfyhAXHH8t8RP14ws3nQt1IBGE43Q9UinwglzrMY8e+k3Z9hQCEw7pBm/hVT/JNEPUKigBYSTN2IS59AUGHEB49fgZ0kA6ccu9BCdJBvIQcDyCcW5LCWCsNo57vYppIVzbV2r1R4v+zTk7IUDURTa4Mo7VYtg1krAWiFCoDxUOr+eTsc51bWqMtw2vKOyoM/20Wx2WJ0ox6JWdPvoBEsUVbENgBj11kB6/L9u2OW2APYyUR7dU9tGvExYkydXOfhRFJdKUypwKN70CiGw==')
    // You can perform some check here to confirm the body is a valid token request

    console.log('requesting token for tutorial.cloudflareworkers.com')
    return fetch(ISSUER_URL, {
      method: 'POST',
      headers: { 'content-type': 'application/private-token-request' },
      body: body,
    })
}

const handleIssuerDirectory = async () =&gt; {
    // These are fake issuers
    // Issuer data can be fetched from https://pp-issuer-public.research.cloudflare.com/.well-known/private-token-issuer-directory
    const TRUSTED_ISSUERS = {
        "issuer1": { "token-keys": [{ "token-type": 2, "token-key": "A==" }] },
        "issuer2": { "token-keys": [{ "token-type": 2, "token-key": "B==" }] },
    }
    return new Response(JSON.stringify(TRUSTED_ISSUERS), { headers: { "content-type": "application/json" } })
}

const handleRequest = (req) =&gt; {
    const pathname = new URL(req.url).pathname
    console.log(pathname, req.url)
    if (pathname === '/v1/challenge') {
        if (req.method === 'POST') {
            return handlePostChallenge(req)
        }
        return handleGetChallenge(req)
    }
    if (pathname === '/v1/private-token-issuer-directory') {
        return handleIssuerDirectory()
    }
    return new Response('Not found', { status: 404 })
}

addEventListener('fetch', event =&gt; {
    event.respondWith(handleRequest(event.request))
})</code></pre>
            <p>The validation method above simply checks whether the user selected Yes. Your method might be more complex, but the wrapping stays the same.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PnBuinoRKUpYjrBsHQbn/966c266e7de411503c5bf9a5dc9a184d/Screenshot-2024-01-04-at-10.30.04.png" />
            
            </figure><p><i>Screenshot of the Yes/No Attester example</i></p><p>Because users might have multiple Attesters configured for a given Issuer, we recommend that your Attester implement one additional endpoint exposing the keys of the Issuers it is in contact with. You can try this code on the <a href="https://cloudflareworkers.com/#4eeeef2fa895e519addb3ae442ee351d:https://tutorial.cloudflareworkers.com/v1/private-token-issuer-directory">Cloudflare Workers playground</a>.</p>
            <pre><code>const handleIssuerDirectory = () =&gt; {
    const TRUSTED_ISSUERS = {
        "issuer1": { "token-keys": [{ "token-type": 2, "token-key": "A==" }] },
        "issuer2": { "token-keys": [{ "token-type": 2, "token-key": "B==" }] },
    }
    return new Response(JSON.stringify(TRUSTED_ISSUERS), { headers: { "content-type": "application/json" } })
}

export default {
    fetch(req: Request) {
        const pathname = new URL(req.url).pathname
        if (pathname === '/v1/private-token-issuer-directory') {
            return handleIssuerDirectory()
        }
        return new Response('Not found', { status: 404 })
    }
}</code></pre>
            <p>Et voilà. You have an Attester that can be used directly with the Silk browser extension (<a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>, <a href="https://chromewebstore.google.com/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a>). As you progress through your deployment, it can also be directly integrated into your applications.</p><p>If you would like a more advanced Attester and deployment pipeline, look at the <a href="https://github.com/cloudflare/pp-attester">cloudflare/pp-attester</a> template.</p>
    <div>
      <h3>As an Issuer - foundation, consortium</h3>
      <a href="#as-an-issuer-foundation-consortium">
        
      </a>
    </div>
    <p>We've mentioned the Issuer multiple times already. The role of an Issuer is to select a set of Attesters it wants to operate with, and communicate its public key to Origins. The whole cryptographic behavior of an Issuer is specified by the <a href="https://www.ietf.org/archive/id/draft-ietf-privacypass-protocol-16.html">IETF draft</a>. In contrast to the Client and Attesters, which have discretionary behavior, the Issuer is fully standardized. Its opportunity is to choose a signal that is strong enough for the Origin while preserving the privacy of Clients.</p><p>Cloudflare Research operates a public Issuer for experimental purposes at <a href="https://pp-issuer-public.research.cloudflare.com">https://pp-issuer-public.research.cloudflare.com</a>. It is the simplest way to start experimenting with Privacy Pass today. Once your deployment matures, you can consider joining a production Issuer, or deploying your own.</p><p>To deploy your own, you should:</p>
            <pre><code>git clone https://github.com/cloudflare/pp-issuer</code></pre>
            <p>Update wrangler.toml with your Cloudflare Workers account ID and zone ID. The open source Issuer API works as follows:</p><ul><li><p><code>/.well-known/private-token-issuer-directory</code> returns the issuer configuration. Note that it does not expose the non-standard token-key-legacy</p></li><li><p><code>/token-request</code> returns a token. This endpoint should be gated (by Cloudflare Access, for instance) to only allow trusted Attesters to call it</p></li><li><p><code>/admin/rotate</code> generates a new public key. It should only be accessible by your team, and be called before the issuer becomes available.</p></li></ul><p>Then run <code>wrangler publish</code>, and you're good to onboard Attesters.</p>
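<p>As a quick sanity check after publishing, a client can fetch the directory and parse it. The sketch below is illustrative, with deliberately minimal validation; a real client would also verify the key encodings:</p>

```javascript
// Minimal sketch: parse the JSON served at
// /.well-known/private-token-issuer-directory into a usable key list.
// Token type 2 is the publicly verifiable (blind RSA) token variant.
function parseIssuerDirectory(directory) {
  if (!Array.isArray(directory['token-keys'])) {
    throw new Error('directory is missing the token-keys array')
  }
  return directory['token-keys'].map((entry) => ({
    tokenType: entry['token-type'],
    tokenKey: entry['token-key'],
  }))
}

// Usage against a live issuer (hostname from the text above):
// const res = await fetch('https://pp-issuer-public.research.cloudflare.com/.well-known/private-token-issuer-directory')
// const keys = parseIssuerDirectory(await res.json())
```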
    <div>
      <h2>Development of Silk extension</h2>
      <a href="#development-of-silk-extension">
        
      </a>
    </div>
    <p>Just like the protocol, the browser technology on which Privacy Pass was proven viable has changed as well. For five years, the protocol was deployed along with a browser extension for Chrome and Firefox. In 2021, Chrome released a new version of its extension configuration, usually referred to as <a href="https://developer.chrome.com/docs/extensions/mv3/intro/platform-vision/">Manifest version 3</a> (MV3). Chrome also started enforcing this new configuration for all newly released extensions.</p><p>Privacy Pass <i>the extension</i> is based on the agreed-upon Privacy Pass <a href="https://datatracker.ietf.org/doc/draft-ietf-privacypass-auth-scheme/"><i>authentication protocol</i></a>. Briefly looking at <a href="https://developer.chrome.com/docs/extensions/reference/webRequest/">Chrome’s API documentation</a>, we should be able to use the onAuthRequired event. However, because PrivateToken authentication is not yet standard, browsers provide no hooks for extensions to add logic to this event.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1iQsRopHuLfmHqjsppwImc/1a379a0cdd3de3e17de04811b1c08ac0/Screenshot-2024-01-04-at-10.32.44.png" />
            
            </figure><p><i>Image available under CC-BY-SA 4.0 provided by</i> <a href="https://developer.chrome.com/docs/extensions/reference/webRequest/"><i>Google For Developers</i></a></p><p>The approach we decided to use is to define a client-side replay API. When a response comes back with a 401 status and a WWW-Authenticate: PrivateToken header, the browser lets it through, but triggers the private token redemption flow. The original page is notified once a token has been retrieved, and replays the request. For this second request, the browser is able to attach an authorization token, and the request succeeds. This is an active replay performed by the client, rather than a transparent replay done by the platform. A specification is available on <a href="https://github.com/cloudflare/pp-browser-extension#chrome-support-via-client-replay-api">GitHub</a>.</p><p>We look forward to the standard progressing and simplifying this part of the project. This should improve diversity in attestation methods. As we will see in the next section, this is key to identifying new signals that can be leveraged by Origins.</p>
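<p>The replay flow described above can be sketched as follows. <code>retrieveToken</code> is a hypothetical stand-in for the extension's token redemption logic, and the header formats follow the PrivateToken authentication scheme draft:</p>

```javascript
// Sketch of the client-side replay flow: a first request that comes back
// 401 with a PrivateToken challenge triggers redemption, then a replay
// of the same request carrying the token. retrieveToken is a placeholder.
async function fetchWithPrivateToken(url, retrieveToken, fetchFn = fetch) {
  // First attempt: send the request as-is.
  let res = await fetchFn(url)
  const challenge = res.headers.get('WWW-Authenticate')
  if (res.status === 401 && challenge && challenge.startsWith('PrivateToken')) {
    // The origin asked for a token: run the redemption flow, then replay
    // the request with the token attached.
    const token = await retrieveToken(challenge)
    res = await fetchFn(url, {
      headers: { Authorization: `PrivateToken token="${token}"` },
    })
  }
  return res
}
```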
    <div>
      <h2>A standard for anonymous credentials</h2>
      <a href="#a-standard-for-anonymous-credentials">
        
      </a>
    </div>
    <p>IP addresses remain a key identifier in anti-abuse systems. At the same time, IP fingerprinting techniques have become a bigger concern, and platforms have started to remove some of these ways of tracking users. By enabling anti-abuse systems to not rely on IP while still ensuring user privacy, Privacy Pass offers a reasonable alternative for dealing with potentially abusive or suspicious traffic. Attestation methods vary and can be chosen as needed for a particular deployment. For example, Apple decided to back its attestation with hardware when using Privacy Pass as the authorization technology for iCloud Private Relay. Another example is Cloudflare Research, which deployed a Turnstile Attester to signal a successful solve for Cloudflare’s challenge platform.</p><p>In all these deployments, Privacy Pass-like technology has allowed for specific bits of information to be shared. Instead of sharing your location, past traffic, and possibly your name and phone number simply by connecting to a website, your device is able to prove specific information to a third party in a privacy-preserving manner. Which user information and attestation methods are sufficient to prevent abuse is an open question. With the release of this software, we are looking to empower researchers to help find these answers. This could be via new experiments such as testing out new attestation methods, or fostering other privacy protocols by providing a framework for specific information sharing.</p>
    <div>
      <h2>Future recommendations</h2>
      <a href="#future-recommendations">
        
      </a>
    </div>
    <p>Just as we expect this latest version of Privacy Pass to lead to new applications and ideas, we also expect further evolution of the standard and the clients that use it. Future development of Privacy Pass promises to cover topics like batch token issuance and rate limiting. From our work building and deploying this version of Privacy Pass, we have encountered limitations that we expect to be resolved in the future as well.</p><p>The division of labor between Attesters and Issuers, and the clear direction of the trust relationships between the Origin and Issuer, and between the Issuer and Attester, make it straightforward to reason about the implications of a breach of trust. Issuers can trust more than one Attester, but since many current deployments of Privacy Pass do not identify the Attester that led to issuance, a breach of trust in one Attester would render untrusted all tokens issued by any Issuer that trusts that Attester. This is because it would not be possible to tell which Attester was involved in the issuance process. Time will tell if this promotes a 1:1 correspondence between Attesters and Issuers.</p><p>The process of developing a browser extension supported by both Firefox and Chrome-based browsers can at times require quite baroque (and brittle) code paths. Privacy Pass the protocol seems a good fit for an extension of the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/webRequest/onAuthRequired">webRequest.onAuthRequired</a> browser event. Just as Privacy Pass appears as an alternate authentication message in the WWW-Authenticate HTTP header, browsers could fire the onAuthRequired event for PrivateToken authentication too, and allow request blocking within the onAuthRequired event. This seems a natural evolution of the use of this event, which is currently limited to the now rather long-in-the-tooth Basic authentication.</p>
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Privacy Pass provides a solution to one of the longstanding challenges of the web: anonymous authentication. By leveraging cryptography, the protocol allows websites to get the information they need from users, and solely this information. It's already used by millions to help distinguish human requests from automated bots in a manner that is privacy protective and often seamless. We are excited by the protocol’s broad and growing adoption, and by the novel use cases that are unlocked by this latest version.</p><p>Cloudflare’s Privacy Pass implementations are available on GitHub, and are compliant with the standard. We have open-sourced a <a href="https://github.com/cloudflare?q=pp-&amp;type=all&amp;language=&amp;sort=#org-repositories">set of templates</a> that can be used to implement Privacy Pass <a href="https://github.com/cloudflare/pp-origin"><i>Origins</i></a><i>,</i> <a href="https://github.com/cloudflare/pp-issuer"><i>Issuers</i></a>, and <a href="https://github.com/cloudflare/pp-attester"><i>Attesters</i></a>, which leverage Cloudflare Workers to get up and running quickly.</p><p>For those looking to try Privacy Pass out for themselves right away, download the <i>Silk - Privacy Pass Client</i> browser extensions (<a href="https://addons.mozilla.org/en-US/firefox/addon/privacy-pass/">Firefox</a>, <a href="https://chromewebstore.google.com/detail/privacy-pass/ajhmfdgkijocedmfjonnpjfojldioehi">Chrome</a>, <a href="https://github.com/cloudflare/pp-browser-extension">GitHub</a>) and start browsing a web with fewer bot checks today.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Privacy Pass]]></category>
            <category><![CDATA[Firefox]]></category>
            <category><![CDATA[Chrome]]></category>
            <category><![CDATA[Privacy]]></category>
            <guid isPermaLink="false">47vZ5BZfqt5cU38XabKyUA</guid>
            <dc:creator>Thibault Meunier</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Armando Faz-Hernández</dc:creator>
        </item>
        <item>
            <title><![CDATA[Defending against future threats: Cloudflare goes post-quantum]]></title>
            <link>https://blog.cloudflare.com/post-quantum-for-all/</link>
            <pubDate>Mon, 03 Oct 2022 13:01:00 GMT</pubDate>
            <description><![CDATA[ The future of a private and secure Internet is at stake; that is why today we have enabled post-quantum cryptography support for all our customers ]]></description>
            <content:encoded><![CDATA[ <p>There is an expiration date on the cryptography we use every day. It’s not easy to read, but somewhere <a href="https://globalriskinstitute.org/download/quantum-threat-timeline-report-2021-full-report/">between 15 and 40 years</a> from now, a sufficiently powerful quantum computer is expected to be built that will be <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">able to decrypt</a> essentially any encrypted data on the Internet today.</p><p>Luckily, there is a solution: <a href="https://www.cloudflare.com/learning/ssl/quantum/what-is-post-quantum-cryptography/">post-quantum (PQ) cryptography</a> has been designed to be secure against the threat of quantum computers. Just three months ago, in July 2022, after a six-year worldwide competition, the US National Institute of Standards and Technology (NIST), known for AES and SHA2, <a href="/nist-post-quantum-surprise/">announced</a> which post-quantum cryptography it will standardize. NIST plans to publish the final standards in 2024, but we want to help drive early adoption of post-quantum cryptography.</p><p>Starting today, as a beta service, <b>all</b> websites and APIs served through Cloudflare support post-quantum hybrid key agreement. This is on by default<sup>1</sup>; no need for an opt-in. This means that if your browser/app supports it, the connection to our network is also secure against any future quantum computer.</p><p>We offer this post-quantum cryptography free of charge: we believe that post-quantum security should be the new baseline for the Internet.</p><p>Deploying post-quantum cryptography seems like a no-brainer with quantum computers on the horizon, but it’s not without risks. To start, this is new cryptography: even with years of scrutiny, it is not inconceivable that a catastrophic attack might still be discovered. 
That is why we are deploying <i>hybrids</i>: a combination of a tried and tested key agreement together with a new one that adds post-quantum security.</p><p>We are primarily worried about what might seem mere practicalities. Even though the protocols used to secure the Internet are designed to allow smooth transitions like this, in reality there is a lot of buggy code out there: trying to create a post-quantum secure connection might fail for many reasons: for example, a middlebox confused by the larger post-quantum keys, or other reasons we have yet to observe because these post-quantum key agreements are brand new. It’s because of these issues that we feel it is important to deploy post-quantum cryptography early, so that together with browsers and other clients we can find and work around these issues.</p><p>In this blog post we will explain how TLS, the protocol used to secure the Internet, is designed to allow a smooth and secure migration of the cryptography it uses. Then we will discuss the technical details of the post-quantum cryptography we have deployed, and how, in practice, this migration might not be that smooth at all. We finish this blog post by explaining how you can build a better, post-quantum secure, Internet by helping us test this new generation of cryptography.</p>
    <div>
      <h2>TLS: Transport Layer Security</h2>
      <a href="#tls-transport-layer-security">
        
      </a>
    </div>
    <p>When you’re browsing a website using a <i>secure connection</i>, whether that’s using HTTP/1.1 or <a href="/quic-version-1-is-live-on-cloudflare/">QUIC</a>, you are using the Transport Layer Security (<b>TLS</b>) protocol under the hood. There are two major versions of TLS <a href="https://radar.cloudflare.com/adoption-and-usage">in common use today</a>: the new <a href="/rfc-8446-aka-tls-1-3/">TLS 1.3</a> (~90%) and the older TLS 1.2 (~10%), which is on the decline.</p><p>TLS 1.3 is a <a href="/rfc-8446-aka-tls-1-3/">huge improvement</a> over TLS 1.2: it’s faster, more secure, simpler and more flexible in just the right places. This makes it easier to add post-quantum security to TLS 1.3 compared to 1.2. For the moment, we will leave it at that: we’ve only added post-quantum support to TLS 1.3.</p><p>So, what is TLS all about? The goal is to set up a connection between a browser and website such that:</p><ul><li><p><b>Confidentiality and integrity</b>: no one can read along or tamper with the data undetected.</p></li><li><p><b>Authenticity</b>: you know you’re connected to the right website, not an imposter.</p></li></ul>
    <div>
      <h3>Building blocks: AEAD, key agreement and signatures</h3>
      <a href="#building-blocks-aead-key-agreement-and-signatures">
        
      </a>
    </div>
    <p>Three different types of cryptography are used in TLS to reach this goal.</p><ul><li><p><b>Symmetric encryption</b>, or more precisely <i>Authenticated Encryption With Associated Data</i> (AEAD), is the workhorse of cryptography: it’s used to ensure confidentiality and integrity. This is a straightforward kind of encryption: there is a <i>single key</i> that is used to encrypt and decrypt the data. Without the right key, you cannot decrypt the data, and any tampering with the encrypted data results in an error while decrypting.</p></li></ul><p>In TLS 1.3, <a href="/do-the-chacha-better-mobile-performance-with-cryptography/">ChaCha20-Poly1305</a> and AES128-GCM are in common use today. What about quantum attacks? At first glance, it looks like we need to switch to 256-bit symmetric keys to defend against <a href="https://en.wikipedia.org/wiki/Grover%27s_algorithm">Grover’s algorithm</a>. In practice, however, Grover’s algorithm <a href="/nist-post-quantum-surprise/#post-quantum-security-levels">doesn’t parallelize well</a>, so the currently deployed AEADs will serve just fine.</p><p>So if we can agree on a shared key to use with symmetric encryption, we’re golden. But how to get to a shared key? You can’t just pick a key and send it to the server: anyone listening in would know the key as well. One might think it’s an impossible task, but this is where the magic of asymmetric cryptography helps out:</p><ul><li><p>A <b>key agreement</b>, also called <i>key exchange</i> or <i>key distribution</i>, is a cryptographic protocol with which two parties can agree on a shared key without an eavesdropper being able to learn anything. Today the <a href="https://cr.yp.to/ecdh.html">X25519</a> Elliptic Curve <a href="https://developers.cloudflare.com/internet/protocols/tls#ephemeral-diffie-hellman-handshake">Diffie–Hellman</a> protocol (ECDH) is the de facto standard key agreement used in TLS 1.3. 
The security of X25519 is based on the <a href="https://en.wikipedia.org/wiki/Discrete_logarithm">discrete logarithm problem</a> for elliptic curves, which is vulnerable to quantum attacks, as it is easily solved by a cryptographically relevant quantum computer using <a href="https://en.wikipedia.org/wiki/Shor%27s_algorithm">Shor’s algorithm</a>. The solution is to use a post-quantum key agreement, such as <a href="https://pq-crystals.org/kyber/index.shtml">Kyber</a>.</p></li></ul><p>A key agreement only protects against a passive attacker. An active attacker, one that can intercept and modify messages (<a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack">MitM</a>), can establish separate shared keys with both the server and the browser, re-encrypting all data passing through. To solve this problem, we need the final piece of cryptography.</p><ul><li><p>With a <b>digital</b> <b>signature</b> algorithm, such as <a href="https://en.wikipedia.org/wiki/RSA_(cryptosystem)">RSA</a> or <a href="https://www.cloudflare.com/learning/dns/dnssec/ecdsa-and-dnssec/">ECDSA</a>, there are two keys: a <i>public</i> and a <i>private key</i>. Only with the private key can one create a <i>signature</i> for a message. Anyone with the corresponding public key can check whether a signature is indeed valid for a given message. These digital signatures are at the heart of <a href="https://www.cloudflare.com/application-services/products/ssl/"><i>TLS certificates</i></a> that are used to authenticate websites. Both RSA and ECDSA are vulnerable to quantum attacks. We haven’t replaced those with post-quantum signatures, yet. The reason is that authentication is less urgent: we only need to have them replaced by the time a sufficiently large quantum computer is built, whereas any data secured by a vulnerable key agreement today can be stored and decrypted in the future. 
Even though we have more time, deploying post-quantum authentication will be <a href="/sizing-up-post-quantum-signatures/">quite challenging</a>.</p></li></ul><p>So, how do these building blocks come together to create TLS?</p><h2>High-level overview of TLS 1.3</h2><p>A TLS connection starts with a <b>handshake</b> which is used to authenticate the server and derive a shared key. The browser (client) starts by sending a <i>ClientHello</i> message that contains a list of the AEADs, signature algorithms, and key agreement methods it supports. To remove a roundtrip, the client is allowed to make a guess of what the server supports and start the key agreement by sending one or more <i>client keyshares</i>. That guess might be correct (on the left in the diagram below) or the client has to retry (on the right).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4jeNtG1jP4LRmICiNeBUIG/87c30c89aeef2ce319bb25c0c5cddc2d/image4.png" />
            
            </figure><p>Protocol flow for server-authenticated TLS 1.3 with a supported client keyshare on the left and a HelloRetryRequest on the right.</p>
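<p>In pseudocode terms, the server-side choice between these two flows can be sketched as follows. This is a simplification with illustrative group names, not our production logic:</p>

```javascript
// Sketch of the server's key-agreement negotiation in TLS 1.3:
// prefer a mutually supported group; if the client sent no keyshare
// for it, ask for one with a HelloRetryRequest; abort on no overlap.
function negotiate(serverPreference, clientSupported, clientKeyshares) {
  for (const group of serverPreference) {
    if (!clientSupported.includes(group)) continue
    // Client already sent a share for this group: proceed directly.
    if (clientKeyshares.includes(group)) return { action: 'ServerHello', group }
    // Mutually supported, but no share yet: request one.
    return { action: 'HelloRetryRequest', group }
  }
  return { action: 'abort' } // no overlap at all
}

// A post-quantum-preferring server, and a client that supports the
// hybrid but only sent a classical keyshare:
console.log(negotiate(
  ['X25519Kyber768Draft00', 'X25519'],
  ['X25519Kyber768Draft00', 'X25519'],
  ['X25519'],
))
// → { action: 'HelloRetryRequest', group: 'X25519Kyber768Draft00' }
```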
    <div>
      <h4><b>Key agreement</b></h4>
      <a href="#key-agreement">
        
      </a>
    </div>
    <p>Before we explain the rest of this interaction, let’s dig into the key agreement: what is a keyshare? The way the key agreements for Kyber and X25519 work <a href="/nist-post-quantum-surprise/#kem-versus-diffie-hellman">is different</a>: the former is a Key Encapsulation Mechanism (KEM), while the latter is a Diffie–Hellman (DH) style agreement. DH is more flexible, but for TLS it doesn’t make a difference.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/25zN3n7EymPx0ZLTOKBEO6/71928fb621191c1de4883f77b1e9cae5/image3.png" />
            
            </figure><p>The shape of a KEM and Diffie–Hellman key agreement in a TLS-compatible handshake is the same.</p><p>In both cases the client sends a <i>client keyshare</i> to the server. From this <i>client keyshare</i> the server generates the <i>shared key</i>. The server then returns a <i>server keyshare</i> with which the client can also compute the shared key.</p><p>Going back to the TLS 1.3 flow: when the server receives the <i>ClientHello</i> message, it picks an AEAD (cipher), a signature algorithm, and a client keyshare that it supports. It replies with a <i>ServerHello</i> message that contains the chosen AEAD and the <i>server keyshare</i> for the selected key agreement. With the AEAD and shared key locked in, the server starts encrypting data (shown with blue boxes).</p>
    <div>
      <h4><b>Authentication</b></h4>
      <a href="#authentication">
        
      </a>
    </div>
    <p>Together with the AEAD and server keyshare, the server sends a signature, the <i>handshake signature</i>, on the transcript of the communication so far together with a <a href="https://www.cloudflare.com/learning/ssl/what-is-an-ssl-certificate/"><i>certificate</i></a><i> (chain)</i> for the public key that it used to create the signature. This allows the client to authenticate the server: it checks whether it trusts the <i>certificate authority</i> (e.g. <a href="https://letsencrypt.org/">Let’s Encrypt</a>) that certified the public key and whether the signature verifies for the messages it sent and received so far. This not only authenticates the server, but it also protects against downgrade attacks.</p>
    <div>
      <h4><b>Downgrade protection</b></h4>
      <a href="#downgrade-protection">
        
      </a>
    </div>
    <p>We cannot upgrade all clients and servers to post-quantum cryptography at once. Instead, there will be a transition period where only some clients and some servers support post-quantum cryptography. The key agreement negotiation in TLS 1.3 allows this: during the transition, servers and clients will still support non-post-quantum key agreements, and can fall back to them if necessary.</p><p>This flexibility is great, but also scary: if both client and server support post-quantum key agreement, we want to be sure that they also negotiate the post-quantum key agreement. This is the case in TLS 1.3, but it is not obvious: the keyshares, the chosen keyshare and the list of supported key agreements are all sent in plain text. Isn’t it possible for an attacker in the middle to remove the post-quantum key agreements? This is called a <i>downgrade attack</i>.</p><p>This is where the transcript comes in: the handshake signature is taken over all messages received and sent by the server so far. This includes the supported key agreements and the key agreement that was picked. If an attacker changes the list of supported key agreements that the client sends, then the server will not notice. However, the client checks the server’s handshake signature against the list of supported key agreements it has actually sent and thus will detect the mischief.</p><p>The downgrade attack problems are <a href="https://eprint.iacr.org/2018/298">much</a> <a href="https://eprint.iacr.org/2016/072.pdf">more</a> <a href="https://www.rfc-editor.org/rfc/rfc7627">complicated</a> for TLS 1.2, which is one of the reasons we’re hesitant to retrofit post-quantum security in TLS 1.2.</p>
    <div>
      <h4><b>Wrapping up the handshake</b></h4>
      <a href="#wrapping-up-the-handshake">
        
      </a>
    </div>
    <p>The last part of the server’s response is <i>“server finished”,</i> a <i>message authentication code</i> (MAC) on the whole transcript so far. Most of the work has been done by the handshake signature, but in other operating modes of TLS that lack a handshake signature, such as session resumption, it is important.</p><p>With the chosen AEAD and server keyshare, the client can compute the shared key and decrypt and verify the certificate chain, handshake signature and handshake MAC. We did not mention it before, but the shared key is not used directly for encryption. Instead, for good measure, <a href="https://www.rfc-editor.org/rfc/rfc8446.html#page-93">it’s mixed together</a> with communication transcripts, to derive several specific keys for use during the handshake and the main connection afterwards.</p><p>To wrap up the handshake, the client sends its own handshake MAC, and can then proceed to send application-specific data encrypted with the keys derived during the handshake.</p>
    <div>
      <h4><b>Hello! Retry Request?</b></h4>
      <a href="#hello-retry-request">
        
      </a>
    </div>
    <p>What we just sketched is the desirable flow, where the client sends a keyshare that the server supports. That might not be the case. If the server doesn’t accept any of the key agreements advertised by the client, it will tell the client and abort the connection.</p><p>If there is a key agreement that both support, but for which the client did not send a keyshare, the server will respond with a <i>HelloRetryRequest</i> (HRR) message requesting a keyshare for a specific key agreement that the client supports, as shown <a href="#tls-anchor">on the diagram on the right</a>. In turn, the client responds with a new ClientHello containing the selected keyshare.</p>
<p>This is not the whole story: a server is also allowed to send a <i>HelloRetryRequest</i> to request a different key agreement that it prefers over those for which the client sent shares. For instance, a server can send a <i>HelloRetryRequest</i> to a post-quantum key agreement if the client supports it, but didn’t send a keyshare for it.</p><p><i>HelloRetryRequest</i>s are rare today. Almost every server supports the X25519 key agreement, and almost every client (98% today) sends an X25519 keyshare. Earlier, P-256 was the de facto standard, and for a long time many browsers would send both a P-256 and an X25519 keyshare to prevent a HelloRetryRequest. As we will discuss later, we might not have the luxury of sending two post-quantum keyshares.</p>
    <div>
      <h4><b>That’s the theory</b></h4>
      <a href="#thats-the-theory">
        
      </a>
    </div>
    <p>TLS 1.3 is designed to be flexible in the cryptography it uses without sacrificing security or performance, which is convenient for our migration to post-quantum cryptography. That is the theory, but there are some serious issues in practice — we’ll go into detail later on. But first, let’s check out the post-quantum key agreements we’ve deployed.</p>
    <div>
      <h3>What we deployed</h3>
      <a href="#what-we-deployed">
        
      </a>
    </div>
    <p>Today we have enabled support for the <b>X25519Kyber512Draft00</b> and <b>X25519Kyber768Draft00</b> key agreements using TLS identifiers 0xfe30 and 0xfe31, respectively. These are exactly the same key agreements <a href="/experiment-with-pq/">we enabled</a> on a limited number of zones this July.</p><p>These two key agreements are a combination, a <a href="https://datatracker.ietf.org/doc/draft-stebila-tls-hybrid-design/"><b>hybrid</b></a>, of the classical <a href="https://www.rfc-editor.org/rfc/rfc8410">X25519</a> and the new post-quantum Kyber512 and Kyber768, respectively, in that order. That means that even if Kyber turns out to be insecure, the connection remains as secure as X25519.</p><p><a href="https://pq-crystals.org/kyber/index.shtml">Kyber</a>, for now, is the only key agreement that NIST <a href="/nist-post-quantum-surprise/">has selected</a> for standardization. Kyber is very light on the CPU: it is faster than X25519, which is already known for its speed. On the other hand, its keyshares are much bigger:</p>
<table>
<thead>
  <tr>
    <th><span>Algorithm</span></th>
    <th><span>PQ</span></th>
    <th colspan="2"><span>Size keyshares (in bytes)</span></th>
    <th colspan="2"><span>Ops/sec (higher is better)</span></th>
  </tr>
  <tr>
    <th></th>
    <th></th>
    <th><span>Client</span></th>
    <th><span>Server</span></th>
    <th><span>Client</span></th>
    <th><span>Server</span></th>
  </tr>
</thead>
<tbody>
  <tr>
    <td><span>Kyber512</span></td>
    <td><span>✅</span></td>
    <td><span>800</span></td>
    <td><span>768</span></td>
    <td><span>50,000</span></td>
    <td><span>100,000</span></td>
  </tr>
  <tr>
    <td><span>Kyber768</span></td>
    <td><span>✅</span></td>
    <td><span>1,184</span></td>
    <td><span>1,088</span></td>
    <td><span>31,000</span></td>
    <td><span>70,000</span></td>
  </tr>
  <tr>
    <td><span>X25519</span></td>
    <td><span>❌</span></td>
    <td><span>32</span></td>
    <td><span>32</span></td>
    <td><span>17,000</span></td>
    <td><span>17,000</span></td>
  </tr>
</tbody>
</table><p><i>Size and CPU performance compared between X25519 and Kyber. Performance varies considerably by hardware platform and implementation constraints and should be taken as a rough indication only.</i></p><p>Kyber is expected to change in minor but backwards-incompatible ways before final standardization by NIST in 2024. Also, the integration with TLS, including the choice and details of the hybrid key agreement, is not yet finalized by the TLS working group. Once it is, we will adopt it promptly.</p><p>Because of this, we will not support the preliminary key agreements announced today for the long term; they’re provided as a beta service. We will post updates on our deployment on <a href="http://pq.cloudflareresearch.com">pq.cloudflareresearch.com</a> and announce it on the <a href="https://mailman3.ietf.org/mailman3/lists/pqc.ietf.org/">IETF PQC mailing list</a>.</p><p>Now that we know how TLS negotiation works in theory, and which key agreements we’re adding, how could it fail?</p>
    <div>
      <h2>Where things might break in practice</h2>
      <a href="#where-things-might-break-in-practice">
        
      </a>
    </div>
    
    <div>
      <h3>Protocol ossification</h3>
      <a href="#protocol-ossification">
        
      </a>
    </div>
<p>Protocols are often designed with flexibility in mind, but if that flexibility is not exercised in practice, it’s often lost. This is called <i>protocol ossification</i>. The roll-out of TLS 1.3 <a href="/why-tls-1-3-isnt-in-browsers-yet/">was difficult</a> because of several instances of ossification. One poignant example is TLS’ version negotiation: there is a version field in the ClientHello message that indicates the latest version supported by the client. A new version number was assigned to TLS 1.3, but in testing it turned out that many servers would not fall back properly to TLS 1.2, but would instead break the connection. How do we deal with ossification?</p>
    <div>
      <h4><b>Workaround</b></h4>
      <a href="#workaround">
        
      </a>
    </div>
<p>Today, TLS 1.3 masquerades as TLS 1.2, down to including many legacy fields in the <i>ClientHello</i>. The actual version negotiation is moved into a new <i>extension</i> to the message. A TLS 1.2 server ignores the new extension and obliviously continues with TLS 1.2, while a TLS 1.3 server picks up on the extension and continues with TLS 1.3 proper.</p>
    <div>
      <h4><b>Protocol grease</b></h4>
      <a href="#protocol-grease">
        
      </a>
    </div>
<p>How do we prevent ossification? Having learnt from this experience, browsers now regularly advertise dummy versions in this new version field, so that misbehaving servers are caught early on. This is done not only for the new version field, but in many other places in the TLS handshake, and presciently also for the key agreement identifiers. Today, 40% of browsers send two client keyshares: one real X25519 keyshare and one bogus one-byte keyshare, to keep key agreement flexibility.</p><p>This behavior is standardized in <a href="https://datatracker.ietf.org/doc/html/rfc8701">RFC 8701</a>: <i>Generate Random Extensions And Sustain Extensibility</i> (GREASE), and we call it protocol <i>greasing</i>, as in “greasing the joints”, from Adam Langley’s metaphor of <a href="https://www.imperialviolet.org/2016/05/16/agility.html">protocols having rusty joints</a> in need of oil.</p><p>This keyshare grease helps, but it is not perfect: in our case it is the size of the keyshare that causes the most concern.</p>
    <div>
      <h3>Fragmented ClientHello</h3>
      <a href="#fragmented-clienthello">
        
      </a>
    </div>
<p>Post-quantum keyshares are big. The two Kyber hybrids are 832 and 1,216 bytes. Compared to that, X25519 is tiny at only 32 bytes. It is not unlikely that some implementations will fail when seeing such large keyshares.</p><p>Our biggest concern is with the larger Kyber768-based keyshare. A ClientHello with the smaller 832-byte Kyber512-based keyshare will just barely fit in a typical network packet. On the other hand, the larger 1,216-byte Kyber768 keyshare will typically fragment the ClientHello into two packets.</p><p>Assembling packets together isn’t free: it requires keeping the partial messages around. Usually this is done transparently by the operating system’s TCP stack, but optimized middleboxes and load balancers that look at each packet separately have to (and might not) keep track of the connections themselves.</p>
    <div>
      <h3><b>QUIC</b></h3>
      <a href="#quic">
        
      </a>
    </div>
<p>The situation for <a href="https://www.cloudflare.com/learning/performance/what-is-http3/">HTTP/3</a>, which is built on <a href="/quic-version-1-is-live-on-cloudflare/">QUIC</a>, is particularly interesting. Instead of a simple port number chosen by the client (as in TCP), a QUIC packet from the client contains a <i>connection ID</i> that is chosen by the server. Think of it as “your reference” and “our reference” in snail mail. This allows a QUIC load balancer to encode the particular machine handling the connection into the connection ID.</p><p>When opening a connection, the QUIC client doesn’t know which connection ID the server would like and sends a random one instead. If the client needs multiple initial packets, such as with a big ClientHello, then the client will use the same random connection ID. Even though multiple initial packets are allowed by the QUIC standard, a QUIC load balancer might not expect this, and, since QUIC runs over UDP, there is no underlying TCP connection for it to fall back on to keep those packets together.</p>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
<p>Aside from these hard failures, <i>soft</i> failures, such as performance degradation, are also a concern: if it’s too slow to load, a website might as well have been broken to begin with.</p><p>Back in 2019, in a joint experiment with Google, we deployed two post-quantum key agreements: CECPQ2, based on NTRU-HRSS, and CECPQ2b, based on SIKE. NTRU-HRSS is very similar to Kyber: it’s a bit larger and slower. The <a href="/the-tls-post-quantum-experiment/">results from 2019</a> were very promising: X25519+NTRU-HRSS (orange line) is hard to distinguish from X25519 on its own (blue line).</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3aqgMsHQUv4sMeIF3eHDOK/ac2d93d5156efd813d007997eb712c5f/image2-2.png" />
            
            </figure><p>We will continue to keep a close eye on performance, especially on the tail performance: we want a smooth transition for everyone, from the fastest to the slowest clients on the Internet.</p>
    <div>
      <h2>How to help out</h2>
      <a href="#how-to-help-out">
        
      </a>
    </div>
<p>The Internet is a very heterogeneous system. To find all issues, we need sufficient numbers of diverse testers. We are working with browsers to add support for these key agreements, but browsers alone will not reach every network.</p><p>So, to help the Internet out, try switching a small part of your traffic to Cloudflare domains over to these new key agreement methods. We have open-sourced forks of <a href="https://github.com/cloudflare/boringssl-pq">BoringSSL</a>, <a href="https://github.com/cloudflare/go">Go</a> and <a href="https://github.com/cloudflare/qtls-pq">quic-go</a>. For BoringSSL and Go, check out <a href="/experiment-with-pq/#boringssl">the sample code here</a>. If you run into any issues, please let us know at <a href="#">ask-research@cloudflare.com</a>. We will be discussing issues and workarounds at the IETF <a href="https://datatracker.ietf.org/group/tls/about/">TLS working group</a>.</p>
    <div>
      <h2>Outlook</h2>
      <a href="#outlook">
        
      </a>
    </div>
    <p>The transition to a post-quantum secure Internet is urgent, but not without challenges. Today we have deployed a preliminary post-quantum key agreement on all our servers — a sizable portion of the Internet — so that we can all start testing the big migration today. We hope that come 2024, when NIST puts a bow on Kyber, we will all have laid the groundwork for a smooth transition to a Post-Quantum Internet.</p><p>.....</p><p><sup>1</sup>We only support these post-quantum key agreements in protocols based on TLS 1.3 including HTTP/3. There is one exception: for the moment we disable these hybrid key exchanges for websites in FIPS-mode.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Post-Quantum]]></category>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Cryptography]]></category>
            <category><![CDATA[Security]]></category>
            <guid isPermaLink="false">4UeqNREMkVNSaC66RE5DEo</guid>
            <dc:creator>Bas Westerbaan</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
        </item>
        <item>
            <title><![CDATA[Privacy-Preserving Compromised Credential Checking]]></title>
            <link>https://blog.cloudflare.com/privacy-preserving-compromised-credential-checking/</link>
            <pubDate>Thu, 14 Oct 2021 12:59:53 GMT</pubDate>
            <description><![CDATA[ Announcing a public demo and open-sourced implementation of a privacy-preserving compromised credential checking service ]]></description>
<content:encoded><![CDATA[ <p></p><p>Today we’re announcing a <a href="https://migp.cloudflare.com">public demo</a> and an <a href="https://github.com/cloudflare/migp-go">open-sourced Go implementation</a> of a next-generation, privacy-preserving compromised credential checking protocol called MIGP (“Might I Get Pwned”, a nod to Troy Hunt’s “<a href="https://haveibeenpwned.com/About">Have I Been Pwned</a>”). Compromised credential checking services are used to alert users when their credentials might have been exposed in data breaches. Critically, the ‘privacy-preserving’ property of the MIGP protocol means that clients can check for leaked credentials without leaking <i>any</i> information to the service about the queried password, and only a small amount of information about the queried username. Thus, not only can the service inform you when one of your usernames and passwords may have become compromised, but it does so without exposing any unnecessary information, keeping credential checking from becoming a vulnerability itself. The ‘next-generation’ property comes from the fact that MIGP advances upon the current state of the art in credential checking services by allowing clients to not only check if their <i>exact</i> password is present in a data breach, but to check if <i>similar</i> passwords have been exposed as well.</p><p>For example, suppose your password last year was amazon20$, and you change your password each year (so your current password is amazon21$). If last year’s password got leaked, MIGP could tell you that your current password is weak and guessable as it is a simple variant of the leaked password.</p><p>The MIGP protocol was designed by researchers at Cornell Tech and the University of Wisconsin-Madison, and we encourage you to <a href="https://arxiv.org/pdf/2109.14490.pdf">read the paper</a> for more details. 
In this blog post, we provide motivation for why compromised credential checking is important for security hygiene, and how the MIGP protocol improves upon the current generation of credential checking services. We then describe our implementation and the deployment of MIGP within Cloudflare’s infrastructure.</p><p>Our MIGP demo and public API are not meant to replace existing credential checking services today, but rather demonstrate what is possible in the space. We aim to push the envelope in terms of privacy and are excited to employ some cutting-edge cryptographic primitives along the way.</p>
    <div>
      <h2>The threat of data breaches</h2>
      <a href="#the-threat-of-data-breaches">
        
      </a>
    </div>
<p>Data breaches are rampant. The <a href="https://lmddgtfy.net/?q=million%20customer%20records">regularity of news articles</a> detailing how tens or hundreds of millions of customer records have been compromised has made us almost numb to the details. Perhaps we all hope to stay safe just by being a small fish in the middle of a very large school of similar fish that is being preyed upon. But we can do better than just hope that our particular authentication credentials are safe. We can actually check those credentials against known databases of the very same compromised user information we learn about from the news.</p><p>Many of the security breaches we read about involve leaked databases containing user details. In the worst cases, user data entered during account registration on a particular website is made available (often offered for sale) after a data breach. Think of the addresses, password hints, credit card numbers, and other private details you have submitted via an online form. We rely on the care taken by the online services in question to protect those details. On top of this, consider that the same (or quite similar) usernames and passwords are commonly used on more than one site. Our information across all of those sites may be as vulnerable as the site with the weakest security practices. Attackers take advantage of this fact to actively compromise accounts and exploit users every day.</p><p><a href="https://www.cloudflare.com/learning/bots/what-is-credential-stuffing/">Credential stuffing</a> is an attack in which malicious parties use leaked credentials from an account on one service to attempt to log in to a variety of <i>other</i> services. These attacks are effective because of the prevalence of reused credentials across services and domains. After all, who hasn’t at some point had a favorite password they used for everything? 
(Quick plug: please use a password manager like LastPass to generate unique and complex passwords for each service you use.)</p><p>Website operators have (or should have) a vested interest in making sure that users of their service are using secure and non-compromised credentials. Given the sophistication of techniques employed by malevolent actors, the standard requirement to “include uppercase, lowercase, digit, and special characters” really is not enough (and can be actively harmful according to <a href="https://pages.nist.gov/800-63-3/sp800-63b.html#appA">NIST’s latest guidance</a>). We need to offer better options to users that keep them safe and preserve the privacy of vulnerable information. Dealing with account compromise and recovery is an expensive process for all parties involved.</p><p>Users and organizations need a way to know if their credentials have been compromised, but how can they do it? One approach is to scour dark web forums for data breach torrent links, download and parse gigabytes or terabytes of archives to your laptop, and then search the dataset to see if their credentials have been exposed. This approach is not workable for the majority of Internet users and website operators, but fortunately there’s a better way — have someone with terabytes to spare do it for you!</p>
    <div>
      <h2>Making compromise checking fast and easy</h2>
      <a href="#making-compromise-checking-fast-and-easy">
        
      </a>
    </div>
    <p>This is exactly what compromised credential checking services do: they aggregate breach datasets and make it possible for a client to determine whether a username and password are present in the breached data. <a href="https://haveibeenpwned.com/">Have I Been Pwned</a> (HIBP), launched by Troy Hunt in 2013, was the first major public breach alerting site. It provides a service, Pwned Passwords, where users can <a href="https://www.troyhunt.com/i-wanna-go-fast-why-searching-through-500m-pwned-passwords-is-so-quick/">efficiently check</a> if their passwords have been compromised. The initial version of Pwned Passwords required users to send the full password hash to the service to check if it appears in a data breach. In a <a href="/validating-leaked-passwords-with-k-anonymity/">2018 collaboration</a> with Cloudflare, the service was upgraded to allow users to run range queries over the password dataset, leaking only the salted hash prefix rather than the entire hash. Cloudflare <a href="https://haveibeenpwned.com/Passwords">continues to support</a> the HIBP project by providing CDN and security support for organizations to download the raw Pwned Password datasets.</p><p>The HIBP approach was replicated by <a href="https://www.usenix.org/system/files/sec19-thomas.pdf">Google Password Checkup</a> (GPC) in 2019, with the primary difference that GPC alerts are based on username-password pairs instead of passwords alone, which limits the rate of false positives. <a href="https://www.enzoic.com/">Enzoic</a> and <a href="https://www.microsoft.com/en-us/research/blog/password-monitor-safeguarding-passwords-in-microsoft-edge/">Microsoft Password Monitor</a> are two other similar services. 
This year, Cloudflare also released <a href="https://developers.cloudflare.com/waf/exposed-credentials-check">Exposed Credential Checks</a> as part of our <a href="https://www.cloudflare.com/learning/ddos/glossary/web-application-firewall-waf/">Web Application Firewall (WAF)</a> to help inform opted-in website owners when login attempts to their sites use compromised credentials. In fact, we use MIGP on the backend for this service to ensure that plaintext credentials <a href="/account-takeover-protection/">never leave the edge server</a> on which they are being processed.</p><p>Most standalone credential checking services work by having a user submit the hash prefix of their password or username-password pair. However, this leaks some information to the service, which could be problematic if the service turns out to be malicious or is compromised. In a collaboration with researchers at Cornell Tech published at <a href="https://dl.acm.org/doi/pdf/10.1145/3319535.3354229">CCS’19</a>, we showed just how damaging this leaked information can be. Malevolent actors with access to the data shared with most credential checking services can drastically improve the effectiveness of password-guessing attacks. This left open the question: how can you do compromised credential checking without sharing (leaking!) vulnerable credentials to the service provider itself?</p>
    <div>
      <h3>What does a privacy-preserving credential checking service look like?</h3>
      <a href="#what-does-a-privacy-preserving-credential-checking-service-look-like">
        
      </a>
    </div>
    <p>In the aforementioned <a href="https://dl.acm.org/doi/pdf/10.1145/3319535.3354229">CCS'19 paper</a>, we proposed an alternative system in which only the hash prefix of the <i>username</i> is exposed to the MIGP server (<a href="https://www.usenix.org/system/files/sec19-thomas.pdf">independent work out of Google and Stanford</a> proposed a similar system). No information about the password leaves the user device, alleviating the risk of password-guessing attacks. These credential checking services help to preserve password secrecy, but still have a limitation: they can only alert users if the <i>exact</i> queried password appears in the breach.</p><p>The present evolution of this work, <a href="https://arxiv.org/pdf/2109.14490.pdf">Might I Get Pwned (MIGP)</a>, proposes a next-generation <i>similarity-aware</i> compromised credential checking service that supports checking if a password <i>similar</i> to the one queried has been exposed in the data breach. This approach supports the detection of <i>credential tweaking</i> attacks, an advanced version of credential stuffing.</p><p>Credential tweaking takes advantage of the fact that many users, when forced to change their password, use simple variants of their original password. Rather than just attempting to log in using an exact leaked password, say ‘password123’, a credential tweaking attacker might also attempt to log in with easily-predictable variants of the password such as ‘password124’ and ‘password123!’.</p><p>There are two main mechanisms described in the MIGP paper to add password variant support: client-side generation and server-side precomputation. With client-side generation, the client simply applies a series of transform rules to the password to derive the set of variants (e.g., truncating the last letter or adding a ‘!’ at the end), and runs multiple queries to the MIGP service with each username and password variant pair. 
The second approach is server-side precomputation, where the server applies the transform rules to generate the password variants when encrypting the dataset, essentially treating the password variants as additional entries in the breach dataset. The MIGP paper describes tradeoffs between the two approaches and techniques for generating variants in more detail. Our demo service includes variant support via server-side precomputation.</p>
    <div>
      <h3>Breach extraction attacks and countermeasures</h3>
      <a href="#breach-extraction-attacks-and-countermeasures">
        
      </a>
    </div>
<p>One challenge for credential checking services is <i>breach extraction</i> attacks, in which an adversary attempts to learn username-password pairs that are present in the breach dataset (which might not be publicly available) so that they can attempt to use them in future credential stuffing or tweaking attacks. Similarity-aware credential checking services like MIGP can make these attacks more effective, since adversaries can potentially check for more breached credentials per API query. Fortunately, additional measures can be incorporated into the protocol to help counteract these attacks. For example, if it is problematic to leak the number of ciphertexts in a given bucket, dummy entries and padding can be employed, or an alternative length-hiding bucket format can be used. <a href="https://arxiv.org/pdf/2109.14490.pdf">Slow hashing and API rate limiting</a> are other common countermeasures that credential checking services can deploy to slow down breach extraction attacks. For instance, our demo service applies the memory-hard slow hash algorithm scrypt to credentials as part of the key derivation function to slow down these attacks.</p><p>Let’s now get into the nitty-gritty of how the MIGP protocol works. For readers not interested in the cryptographic details, feel free to skip to the demo below!</p>
    <div>
      <h2>MIGP protocol</h2>
      <a href="#migp-protocol">
        
      </a>
    </div>
    <p>There are two parties involved in the MIGP protocol: the client and the server. The server has access to a dataset of plaintext breach entries (username-password pairs), and a secret key used for both the precomputation and the online portions of the protocol. In brief, the client performs some computation over the username and password and sends the result to the server; the server then returns a response that allows the client to determine if their password (or a similar password) is present in the breach dataset.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7wWJeFPjrWKw67UctL5CPa/86bd7f838d10f71329227ba85daac96f/image8-11.png" />
            
            </figure><p>Full protocol description from the <a href="https://arxiv.org/pdf/2109.14490.pdf">MIGP paper</a>: clients learn if their credentials are in the breach dataset, leaking only the hash prefix of the queried username to the server</p>
    <div>
      <h3>Precomputation</h3>
      <a href="#precomputation">
        
      </a>
    </div>
    <p>At a high level, the MIGP server partitions the breach dataset into <i>buckets</i> based on the hash prefix of the username (the <i>bucket identifier</i>), which is usually 16-20 bits in length.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2t6QnE6TyphfMbfBOAPMnc/def2d8eb92120a6af7ee0745663d19da/unnamed--1--2.png" />
            
</figure><p>During the precomputation phase of the MIGP protocol, the server derives password variants, encrypts entries, and stores them in buckets based on the hash prefix of the username</p><p>We use server-side precomputation as the variant generation mechanism in our implementation. The server derives one ciphertext for each exact username-password pair in the dataset, and an additional ciphertext per password variant. A bucket consists of the set of ciphertexts for all breach entries and variants with the same username hash prefix. For instance, suppose there are n breach entries assigned to a particular bucket. If we compute m variants per entry, counting the original entry as one of the variants, there will be n*m ciphertexts stored in the bucket. This introduces a large expansion in the size of the processed dataset, so in practice it is necessary to limit the number of variants computed per entry. Our demo server stores 10 ciphertexts per breach entry in the input: the exact entry, eight variants (see <a href="https://arxiv.org/pdf/2109.14490.pdf">Appendix A of the MIGP paper</a>), and a special variant for allowing username-only checks.</p><p>Each ciphertext is the encryption of a username-password (or password variant) pair along with some associated metadata. The metadata describes whether the entry corresponds to an exact password appearing in the breach, or a variant of a breached password. The server derives a per-entry secret key pad using a key derivation function (KDF) with the username-password pair and server secret as inputs, and uses XOR encryption to derive the entry ciphertext. The bucket format also supports storing optional encrypted metadata, such as the date the breach was discovered.</p>
            <pre><code>Input:
  Secret sk       // Server secret key
  String u        // Username
  String w        // Password (or password variant)
  Byte mdFlag     // Metadata flag
  String mdString // Optional metadata string

Output:
  String C        // Ciphertext

function Encrypt(sk, u, w, mdFlag, mdString):
  padHdr  = KDF1(u, w, sk)      // pad hiding the header (key check + flag)
  padBody = KDF2(u, w, sk)      // pad hiding the optional metadata string
  zeros   = [0] * KEY_CHECK_LEN // all-zeros key check block
  C = XOR(padHdr, zeros || mdFlag) || mdString.length || XOR(padBody, mdString)</code></pre>
            <p>The precomputation phase only needs to be done rarely, such as when the MIGP parameters are changed (in which case the entire dataset must be re-processed), or when new breach datasets are added (in which case the new data can be appended to the existing buckets).</p>
    <div>
      <h3>Online phase</h3>
      <a href="#online-phase">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/38tgoXWJDA2F7AXTWK63y6/606db165b48b09a953044b2e216af88a/image1-39.png" />
            
            </figure><p>During the online phase of the MIGP protocol, the client requests a bucket of encrypted breach entries corresponding to the queried username, and with the server’s help derives a key that allows it to decrypt an entry corresponding to the queried credentials</p><p>The online phase of the MIGP protocol allows a client to check if a username-password pair (or variant) appears in the server’s breach dataset, while only leaking the hash prefix of the username to the server. The client and server engage in an <a href="https://datatracker.ietf.org/doc/html/draft-irtf-cfrg-voprf">OPRF</a> protocol message exchange to allow the client to derive the per-entry decryption key, without leaking the username and password to the server, or the server’s secret key to the client. The client then computes the bucket identifier from the queried username and downloads the corresponding bucket of entries from the server. Using the decryption key derived in the previous step, the client scans through the entries in the bucket attempting to decrypt each one. If the decryption succeeds, this signals to the client that their queried credentials (or a variant thereof) are in the server’s dataset. The decrypted metadata flag indicates whether the entry corresponds to the exact password or a password variant.</p><p>The MIGP protocol solves many of the shortcomings of existing credential checking services with its solution that avoids leaking <i>any</i> information about the client’s queried password to the server, while also providing a mechanism for checking for similar password compromise. Read on to see the protocol in action!</p>
    <div>
      <h2>MIGP demo</h2>
      <a href="#migp-demo">
        
      </a>
    </div>
<p>As the state of the art in attack methodologies evolves with new techniques such as credential tweaking, so must the defenses. To that end, we’ve collaborated with the designers of the MIGP protocol to prototype and deploy the MIGP protocol within Cloudflare’s infrastructure.</p><p>Our MIGP demo server is deployed at <a href="https://migp.cloudflare.com">migp.cloudflare.com</a>, and runs entirely on top of <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>. We use <a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a> for efficient storage and retrieval of buckets of encrypted breach entries, capping each bucket’s size at the current <a href="https://developers.cloudflare.com/workers/platform/limits#kv">KV value limit</a> of 25MB. In our instantiation, we set the username hash prefix length to 20 bits, so that there are a total of 2^20 (or just over 1 million) buckets.</p><p>There are currently two ways to interact with the demo MIGP service: via the browser client at <a href="https://migp.cloudflare.com">migp.cloudflare.com</a>, or via the Go client included in our <a href="https://github.com/cloudflare/migp-go">open-sourced MIGP library</a>. As shown in the screenshots below, the browser client displays the request from your device and the response from the MIGP service. Take care not to enter sensitive credentials into a third-party service (feel free to use the test credentials <a href="#">username1@example.com</a> and password1 for the demo).</p><p>Keep in mind that “absence of evidence is not evidence of absence”, especially in the context of data breaches. We intend to periodically update the breach datasets used by the service as new public breaches become available, but no breach alerting service will be able to provide 100% accuracy in assuring that your credentials are safe.</p><p>See the MIGP demo in action in the attached screenshots. 
Note that in all cases, the username (<a href="#">username1@example.com</a>) and corresponding username prefix hash (000f90f4) remain the same, so the client retrieves the exact same bucket contents from the server each time. However, the blindElement parameter in the client request differs per request, allowing the client to decrypt different bucket elements depending on the queried credentials.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/14k0Waq8epRl1bJ2ixvalS/2da5c4cb0bd12ff46190a3e7e3fb502c/image7-10.png" />
            
            </figure><p>Example query in which the credentials are exposed in the breach dataset</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39Efa4f8dTFJvEyh1nviV7/500bbd1a954cbb5ec7f96d2d3b1886ea/image4-23.png" />
            
            </figure><p>Example query in which similar credentials were exposed in the breach dataset</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7xtaESNMejE9ZJrEZMYZZB/30d679b03c5e027413ab7eac02e4db22/image2-25.png" />
            
            </figure><p>Example query in which the username is present in the breach dataset</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/IpsaGiNzjKSmZcjJ9fao9/7110906f0c3cff82af25d09a324dd0ec/image3-23.png" />
            
            </figure><p>Example query in which the credentials are not found in the dataset</p>
    <div>
      <h2>Open-sourced MIGP library</h2>
      <a href="#open-sourced-migp-library">
        
      </a>
    </div>
    <p>We are open-sourcing our implementation of the MIGP library under the BSD-3 License. The code is written in Go and is available at <a href="https://github.com/cloudflare/migp-go">https://github.com/cloudflare/migp-go</a>. Under the hood, we use Cloudflare’s <a href="https://github.com/cloudflare/circl">CIRCL library</a> for OPRF support and Go’s supplementary cryptography library for <a href="https://pkg.go.dev/golang.org/x/crypto/scrypt">scrypt</a> support. Check out the repository for instructions on setting up the MIGP client to connect to Cloudflare’s demo MIGP service. Community contributions and feedback are welcome!</p>
    <div>
      <h2>Future directions</h2>
      <a href="#future-directions">
        
      </a>
    </div>
    <p>In this post, we announced our open-sourced implementation and demo deployment of MIGP, a next-generation breach alerting service. Our deployment is intended to lead the way for other credential compromise checking services to migrate to a more privacy-friendly model, but is not itself currently meant for production use. However, we identify several concrete steps that can be taken to improve our service in the future:</p><ul><li><p>Add more breach datasets to the database of precomputed entries</p></li><li><p>Increase the number of variants in server-side precomputation</p></li><li><p>Add library support in more programming languages to reach a broader developer base</p></li><li><p>Hide the number of ciphertexts per bucket by padding with dummy entries</p></li><li><p>Add support for efficient client-side variant checking by batching API calls to the server</p></li></ul><p>For exciting future research directions that we are investigating, including one proposal to remove the transmission of plaintext passwords from client to server entirely, take a look at <a href="/research-directions-in-password-security">https://blog.cloudflare.com/research-directions-in-password-security</a>.</p><p>We are excited to share and build upon these ideas with the wider Internet community, and hope that our efforts effect positive change in the password security ecosystem. We are particularly interested in collaborating with stakeholders in the space to develop, test, and deploy next-generation protocols to improve user security and privacy. You can reach us with questions, comments, and research ideas at <a href="#">ask-research@cloudflare.com</a>. For those interested in joining our team, please visit our <a href="https://www.cloudflare.com/careers/jobs/?department=Technology%20Research&amp;location=default">Careers Page</a>.</p> ]]></content:encoded>
            <category><![CDATA[Research]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">4eEeycGCnihnf3zSNxUYLY</guid>
            <dc:creator>Luke Valenta</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
            <dc:creator>Christopher Wood</dc:creator>
        </item>
        <item>
            <title><![CDATA[Announcing Cloudflare Research Hub]]></title>
            <link>https://blog.cloudflare.com/announcing-cloudflare-research-hub/</link>
            <pubDate>Mon, 11 Oct 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Announcing a new landing page where you can learn more about our research and additional resources. ]]></description>
            <content:encoded><![CDATA[ <p>As highlighted <a href="/cloudflare-research-two-years-in">yesterday</a>, research efforts at Cloudflare, and their scope, have been growing over the years. Cloudflare Research is proud to support computer science research to help build a better Internet, and we want to tell you where you can learn more about our efforts and how to get in touch.</p>
    <div>
      <h3>Why are we announcing a website for Cloudflare Research?</h3>
      <a href="#why-are-we-announcing-a-website-for-cloudflare-research">
        
      </a>
    </div>
    <p>Cloudflare is built on a foundation of open standards, which are the result of community consensus and research. Research is integral to Cloudflare’s mission, as is the commitment to contribute back to the research and standards communities by establishing and maintaining a growing number of collaborations.</p><p>Throughout the years we have cherished many collaborations and one-on-one relationships, but we have probably been missing a lot of interesting work happening elsewhere. This is our main motivation for this Research Hub: to help us build further collaborations with industrial and academic research groups, and with individuals across the world. We are eager to interface more effectively with the wider research and standards communities: practitioners, researchers, and educators. And as for you, dear reader, we encourage you to recognize that you are our audience too: we often hear that Cloudflare’s commitment to sharing technical writing and resources is very attractive to many. This site also hopes to serve as a starting point for engagement with the research that underpins the development of the Internet.</p><p>We welcome you to reach out to us and share your ideas!</p>
    <div>
      <h3>How we arrived at the site as it is</h3>
      <a href="#how-we-arrived-at-the-site-as-it-is">
        
      </a>
    </div>
    <p>The opportunity to create a new website to share our growing library of information led us to an almost reflexive search for the right blog hosting system to fit the need. For our first prototype, we gave the <a href="https://docusaurus.io/">Docusaurus</a> project a try. A few questions led us to evaluate our needs more carefully: did a static site need to use much JavaScript? Was an SPA (Single Page Application) the best fit, and did we need to use a generic CSS framework?</p><p>Having this conversation led us to re-examine the necessity of using client-side scripts for the site at all. Why not remove the dependency on JavaScript? Cloudflare's business model is based on making websites faster, not on tracking users, so why would we require JavaScript when we do not need much client-side dynamism? Could we build such an informational site simply, use tools easily inspectable by developers, and deploy with <a href="https://pages.cloudflare.com/">Cloudflare Pages</a> from GitHub?</p><p>We have avoided frameworks, kept our HTML simple, and skipped minification, since it is not really necessary here. We appreciate being able to peek under the hood, and these choices allow the browser’s “View Page Source” right-click option on site pages to reveal human-readable code!</p><p>We did not want the HTML and CSS to be difficult to follow. Instead of something like:</p>
            <pre><code>&lt;article class="w-100 w-50-l mb4 ph3 bb b--gray8 bn-l"&gt;
  &lt;p class="f3 fw5 gray5 my"&gt;September 17, 2021 2:00PM&lt;/p&gt;
  &lt;a href="/tag/another/" class="dib pl2 pr2 pt1 pb1 mb2 bg-gray8 no-underline blue3 f2"&gt;Another&lt;/a&gt;
...
&lt;/article&gt;</code></pre>
            <p>where CSS classes are repeated again and again in the source code, we decided to lean on the kind of traditional hierarchical style declarations that put the C for “Cascading” in CSS.</p><p>We questioned whether, in our serving of the site, we needed to force the browser to re-check for content changes on every page visit. For this kind of website, always returning <code>max-age=0, must-revalidate, public</code> didn’t seem necessary.</p><p>The <a href="https://research.cloudflare.com">research.cloudflare.com</a> site is a work in progress and embraces standards-based web development choices. We do not use JavaScript to enable lazy loading of images, but instead lean on the <a href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/img#attr-loading">loading</a> attribute of the <code>img</code> tag. Because we do not have many images that lie beneath the fold, it is okay for us to use this even as some browsers work to add support for this specification. We use the limited <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Using_CSS_custom_properties">standardized CSS variable support</a> to avoid needing a style pre-processor while still using custom colour variables.</p><p>Dynamic frameworks often need to introduce quite complex mechanisms to restore accessibility for users. The standards-based choices we have made for the HTML and CSS that compose this site made a <i>100</i> accessibility score in <a href="https://developers.google.com/web/tools/lighthouse">Lighthouse</a> (a popular performance, accessibility, and best practices measure) more easily achievable.</p>
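<p>To illustrate the contrast (the selectors, class names, and colour values below are invented for this sketch, not taken from the actual site), the same card can be styled once with hierarchical declarations, leaving the markup free of repeated utility classes:</p>

```html
<style>
  /* Hypothetical palette, declared once as CSS custom properties. */
  :root { --gray5: #6a737d; --gray8: #e1e4e8; --blue3: #0366d6; }

  /* Style every article card in one place; the markup stays clean. */
  article { padding: 0 1rem; border-bottom: 1px solid var(--gray8); }
  article p.date { color: var(--gray5); font-weight: 500; }
  article a.tag { background: var(--gray8); color: var(--blue3); text-decoration: none; }
</style>

<article>
  <p class="date">September 17, 2021 2:00PM</p>
  <a class="tag" href="/tag/another/">Another</a>
</article>
```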
    <div>
      <h3>Explore and connect</h3>
      <a href="#explore-and-connect">
        
      </a>
    </div>
    <p>While we wanted this <a href="https://research.cloudflare.com">website</a> to be clean, we certainly didn’t want it to be empty!</p><p>Our research work spans multiple areas, from <a href="https://www.cloudflare.com/network-security/">network security</a>, privacy, cryptography, authentication, and Internet measurement to distributed systems. We have compiled a first set of information about the research projects we have recently been working on, together with a handful of related resources, publications, and additional pointers to help you learn more about our work. We are also sharing results from the experiments we are running and code we have released to the community. In many cases, this research results from collaborations between multiple Cloudflare teams and industry and academic partners.</p><p>And, as will be highlighted during this week, you can learn more about our standardisation efforts, how we engage with standards bodies, and how we contribute to several working groups and to shaping protocol specifications.</p><p>So stay tuned, more is coming! <a href="https://research.cloudflare.com/updates/mailinglist/">Subscribe</a> to our latest updates about research work, and <a href="https://research.cloudflare.com/contact/">reach out</a> if you want to collaborate with us. And if you are interested in joining the team, learn more about our <a href="https://www.cloudflare.com/en-gb/careers/jobs/?department=Technology%20Research&amp;location=default">career</a> and <a href="https://research.cloudflare.com/outreach/interns/">internship</a> opportunities and the <a href="https://research.cloudflare.com/outreach/researchers/">visiting researcher program</a>.</p>
            <category><![CDATA[Research]]></category>
            <guid isPermaLink="false">2gG4RPN19SZJwkslzWJ7H</guid>
            <dc:creator>Vânia Gonçalves</dc:creator>
            <dc:creator>Cefan Daniel Rubin</dc:creator>
        </item>
    </channel>
</rss>