
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 21:17:18 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Protecting GraphQL APIs from malicious queries]]></title>
            <link>https://blog.cloudflare.com/protecting-graphql-apis-from-malicious-queries/</link>
            <pubDate>Mon, 12 Jun 2023 13:00:08 GMT</pubDate>
            <description><![CDATA[ Starting today, Cloudflare’s API Gateway can protect GraphQL APIs against malicious requests that may cause a denial of service to the origin ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Starting today, Cloudflare’s API Gateway can protect GraphQL APIs against malicious requests that may cause a denial of service to the origin. In particular, <a href="https://www.cloudflare.com/application-services/products/api-gateway/">API Gateway</a> will now protect against two of the most common GraphQL abuse vectors: <b>deeply nested queries</b> and <b>queries that request more information</b> than they should.</p><p>Typical RESTful HTTP APIs contain tens or hundreds of endpoints. GraphQL APIs differ by typically only providing a single endpoint for clients to communicate with and offering highly flexible queries that can return variable amounts of data. While GraphQL’s power and usefulness rest on the flexibility to query an API about only the specific data you need, that same flexibility adds an increased risk of abuse. Abusive requests to a single GraphQL API can place disproportionate load on the origin, trigger <a href="https://hygraph.com/blog/graphql-n-1-problem">the N+1 problem</a>, or exploit a recursive relationship between data dimensions. In order to add GraphQL security features to API Gateway, we needed to obtain visibility <i>inside</i> the requests so that we could apply different security settings based on request parameters. To achieve that visibility, we built our own GraphQL query parser. Read on to learn about how we built the parser and the security features it enabled.</p>
    <div>
      <h3>The power of GraphQL</h3>
      <a href="#the-power-of-graphql">
        
      </a>
    </div>
    <p>Unlike a REST API, where the API’s users are limited to what data they can query and change on a per-endpoint basis, a GraphQL API offers users the ability to query and change any data they wish with an open-ended, yet structured request to a single endpoint. This open-endedness makes GraphQL APIs very <a href="https://graphql.org/learn/">powerful</a>. Each user can query for a completely custom set of data and receive their custom response in a single HTTP request. Here are two example queries and their responses. These requests are typically sent via HTTP POST methods to an endpoint at /graphql.</p>
            <pre><code># A query asking for multiple nested subfields of the "hero" object. This query has a depth level of 2.
{
  hero {
    name
    friends {
      name
    }
  }
}

# The corresponding response.
{
  "data": {
    "hero": {
      "name": "R2-D2",
      "friends": [
        {
          "name": "Luke Skywalker"
        },
        {
          "name": "Han Solo"
        },
        {
          "name": "Leia Organa"
        }
      ]
    }
  }
}</code></pre>
            
            <pre><code># A query asking for just one subfield on the same "hero" object. This query has a depth level of 1.
{
  hero {
    name
  }
}

# The corresponding response.
{
  "data": {
    "hero": {
      "name": "R2-D2"
    }
  }
}</code></pre>
            <p>These custom queries give GraphQL endpoints <i>more flexibility than conventional REST endpoints</i>. But this flexibility also means GraphQL APIs can be subject to very different load or security risks based on the requests that they are receiving. For example, an attacker can request the exact same, valid data as a benevolent user would, but exploit the data’s self-referencing structure and ask that an origin return hundreds of thousands of rows replicated over and over again. Let’s consider an example, in which we operate a petitioning platform where our data model contains petitions and signers objects. With GraphQL, an attacker can, in a single request, query for a single petition, then for all people who signed that petition, then for all petitions each of those people have signed, then for all people that signed any of those petitions, then for all petitions that… you see where this is going!</p>
            <pre><code>query {
 petition(ID: 123) {
   signers {
     nodes {
       petitions {
         nodes {
           signers {
             nodes {
               petitions {
                 nodes {
                    ...
                 }
               }
             }
           }
         }
       }
     }
   }
 }
}</code></pre>
            <p>A rate limit won’t protect against such an attack because the entire query fits into a single request.</p><p>So how can we secure GraphQL APIs? There is little agreement in the industry around what makes a GraphQL endpoint secure. For some, this means rejecting invalid queries. Normally, an invalid query refers to a query that a GraphQL server would fail to compile; such a query would not cause any substantial load on the origin, but would still add noise to error logs and reduce operational visibility. For others, this means creating complexity-based rate limits or perhaps flagging <a href="https://owasp.org/API-Security/editions/2023/en/0xa1-broken-object-level-authorization/">broken object-level authorization</a>. Still others want deeper visibility into query behavior and an ability to validate queries against a predefined schema.</p><p>When creating new features in API Gateway, we often start by providing deeper visibility for customers into their traffic behavior related to the feature in question. This way we create value from the large amount of data we see in the Cloudflare network, and can have conversations with customers where we ask: <i>“Now that you have these data insights, what actions would you like to take with them?”</i>. This process puts us in a good position to build a second, more actionable iteration of the feature.</p><p>We decided to follow the same process with GraphQL protection, with parsing GraphQL requests and gathering data as our first goal.</p>
    <div>
      <h3>Parsing GraphQL quickly</h3>
      <a href="#parsing-graphql-quickly">
        
      </a>
    </div>
    <p>As a starting point, we wanted to collect request query size and depth attributes. These attributes offer a surprising amount of insight into the query. If a query requests a single field at depth level 15, is it really innocuous, or is it exploiting some recursive data relationship? If a query asks for hundreds of fields at depth level 3, why wouldn’t it just ask for the entire object at level 2 instead?</p><p>To do this, we needed to parse queries without adding latency to incoming requests. We evaluated multiple open source GraphQL parsers and quickly realized that their performance would put us at risk of adding hundreds of microseconds of latency to the request duration. Our goal was to have a p95 parsing time of under 50 microseconds. Additionally, the infrastructure we were planning to use to ship this functionality has a strict no-heap-allocation policy – this means that any memory allocated by a parser to process a request has to be amortized by being reused when parsing any subsequent requests. Parsing GraphQL in a no-allocation manner is not a fundamental technical requirement for us over the long term, but it was a necessity if we wanted to build something quickly with confidence that the proof of concept would meet our performance expectations.</p><p>Meeting the latency and memory allocation constraints meant that we had to write a parser of our own. Building an entire abstract syntax tree of unpredictable structure requires allocating memory on the heap, and that’s what made conventional parsers unfit for our requirements. What if instead of building a tree, we processed the query in a streaming fashion, token by token? We realized that if we were to write our own GraphQL lexer that produces a list of GraphQL tokens (“comment”, “string”, “variable name”, “opening parenthesis”, etc.), we could use a number of heuristics to infer the query depth and size without actually building a tree or fully validating the query. 
Using this approach meant that we could deliver the new feature fast, both in engineering time and wall clock time – and, most importantly, visualize data insights for our customers.</p><p>To start, we needed to prepare GraphQL queries for parsing. <a href="https://graphql.org/learn/serving-over-http/">Most of the time</a>, GraphQL queries are delivered as <code>HTTP POST</code> requests with <code>application/json</code> or <code>application/graphql Content-Type</code>. Requests with <code>application/graphql</code> content type are easy to work with – they contain the raw query you can just parse. However, JSON-encoded queries present a challenge since JSON objects contain escaped characters – normally, any deserialization library will allocate new memory into which the raw string is copied with escape sequences removed, but we committed to allocating no memory, remember? So to parse GraphQL queries encoded in JSON fields, we used <a href="https://docs.rs/serde_json/latest/serde_json/value/struct.RawValue.html">serde RawValue</a> to locate the JSON field in which the escaped query was placed and then iterated over the constituent bytes one-by-one, feeding them into our tokenizer and removing escape sequences on the fly.</p><p>Once we had our query input ready, we built a simple Rust program that converts raw GraphQL input into a list of <a href="https://spec.graphql.org/October2021/#sec-Appendix-Grammar-Summary.Lexical-Tokens">lexical tokens according to the GraphQL grammar</a>. Tokenization is the first step in any parser – our insight was that this step was all we needed for what we wanted to achieve in the MVP.</p>
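<p>To illustrate the on-the-fly escape handling described above, here is a deliberately simplified sketch (our illustration, not the actual Cloudflare parser): it walks the bytes of a JSON-escaped string and hands each resulting byte to a callback, so no unescaped copy of the query is ever allocated. A real implementation must also handle <code>\uXXXX</code> escapes and surrogate pairs, which are omitted here:</p>

```rust
// Hypothetical sketch: stream the bytes of a JSON-escaped string to a
// callback, resolving simple two-byte escapes without allocating an
// unescaped copy. \uXXXX escapes and surrogate pairs are not handled.
fn for_each_unescaped_byte(escaped: &[u8], mut emit: impl FnMut(u8)) {
    let mut i = 0;
    while i < escaped.len() {
        if escaped[i] == b'\\' && i + 1 < escaped.len() {
            // Two-byte escape sequence: map it to the byte it denotes.
            emit(match escaped[i + 1] {
                b'n' => b'\n',
                b't' => b'\t',
                b'r' => b'\r',
                other => other, // covers \" and \\ as well
            });
            i += 2;
        } else {
            emit(escaped[i]);
            i += 1;
        }
    }
}
```

<p>Each emitted byte can be fed straight into the tokenizer, keeping the whole pipeline allocation-free.</p>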
            <pre><code>mutation CreateMessage($input: MessageInput) {
    createMessage(input: $input) {
        id
    }
}</code></pre>
            <p>For example, the mutation operation above gets converted into the following list of tokens:</p>
            <pre><code>name
name
punctuator (
punctuator $
name
punctuator :
name
punctuator )
punctuator {
name
punctuator (
name
punctuator :
punctuator $
name
punctuator )
punctuator {
name
punctuator }
punctuator }</code></pre>
            <p>With this list of tokens available to us, we built our validation engine and added the ability to calculate query depth and size. Again, everything is done on the fly in a single pass. A limitation of this approach is that we can’t parse 100% of the requests – there are some syntactic features of GraphQL that we have to fail open on; however, a major advantage of this approach is its performance – in our initial trial run against a stream of tens of thousands of requests per second, we achieved a p95 parsing time of 25 microseconds. This is a good starting point to collect some data and to prototype our first GraphQL security features.</p>
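<p>To make the single-pass heuristic concrete, here is a deliberately simplified, allocation-free sketch (our illustration, not Cloudflare’s actual code) that derives a depth and size estimate in one pass. It treats every brace as a selection set and every identifier run as a field; strings, comments, arguments, aliases, fragments, and operation keywords, which a real lexer must distinguish, are ignored:</p>

```rust
// Single-pass depth/size estimate for a GraphQL query. Simplified sketch:
// strings, comments, arguments, aliases, fragments, and operation keywords
// are not handled, so it only approximates what a real lexer computes.
fn depth_and_size(query: &str) -> (u32, u32) {
    let (mut depth, mut max_depth, mut size) = (0u32, 0u32, 0u32);
    let mut in_name = false; // currently inside an identifier run?
    for ch in query.chars() {
        match ch {
            '{' => { depth += 1; max_depth = max_depth.max(depth); in_name = false; }
            '}' => { depth = depth.saturating_sub(1); in_name = false; }
            c if c.is_alphanumeric() || c == '_' => {
                if !in_name { size += 1; } // a new field name starts here
                in_name = true;
            }
            _ => in_name = false,
        }
    }
    // The outermost selection set does not count toward depth, matching the
    // depth-level convention used in the query examples above.
    (max_depth.saturating_sub(1), size)
}
```

<p>Running this on the first example query above yields a depth of 2, matching the annotation in that snippet, along with a size of 4 fields.</p>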
    <div>
      <h3>Getting started</h3>
      <a href="#getting-started">
        
      </a>
    </div>
    <p>Today, any API Gateway customer can use the Cloudflare GraphQL API to retrieve information about depth and size of GraphQL queries we see for them on the edge.</p><p>As an example, we’ve run the analysis below visualizing over 400,000 data points for query sizes and depths for a production domain utilizing API Gateway.</p><p>First let’s look at query sizes in our sample:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5eeTSSluDMta6Plvi8qHZQ/568e15f9b4f325795f9d2511038a1707/image3-1.png" />
            
            </figure><p>It looks like queries almost never request more than 60 fields. Let’s also look at query depths:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kLstN02f8pItr1RHNyyiO/16acff492c433861d1bba2365ec6633e/image2-1.png" />
            
            </figure><p>It looks like queries are never more than seven levels deep.</p><p>These two insights can be converted into security rules: we added three new Wirefilter fields that API Gateway customers can use to protect their GraphQL endpoints:</p>
            <pre><code>1. cf.api_gateway.graphql.query_size
2. cf.api_gateway.graphql.query_depth
3. cf.api_gateway.graphql.parsed_successfully</code></pre>
            <p>For now, we recommend the use of <code>cf.api_gateway.graphql.parsed_successfully</code> in all rules. Rules created with the use of this field will be backwards compatible with future GraphQL protection releases.</p><p>If a customer feels that there is nothing out of the ordinary with the traffic sample and that it represents a meaningful amount of normal usage, they can manually create and deploy the following custom rule to log all queries that were parsed by Cloudflare and that look like outliers:</p>
            <pre><code>cf.api_gateway.graphql.parsed_successfully and
(cf.api_gateway.graphql.query_depth &gt; 7 or 
cf.api_gateway.graphql.query_size &gt; 60)</code></pre>
            <p>Learn more and run your own analysis with our <a href="https://developers.cloudflare.com/api-shield/security/graphql-protection/configure/">documentation</a>.</p>
    <div>
      <h3>What’s next?</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We are already receiving feedback from our first customers and are planning out the next iteration of this feature. These are the features we will build next:</p><ul><li><p>Integrating GraphQL security with <a href="https://developers.cloudflare.com/waf/rate-limiting-rules/request-rate#complexity-based-rate-limiting">complexity-based rate limiting</a> such that we automatically calculate query <i>cost</i> and let customers rate limit eyeballs based on the total query execution cost the eyeballs use during their entire session.</p></li><li><p>Allowing customers to configure specifically which endpoints GraphQL security features run on.</p></li><li><p>Creating data insights on the relationship between query complexity and the time it takes the customer origin to respond to the query.</p></li><li><p>Creating automatic GraphQL threshold recommendations based on historical trends.</p></li></ul><p>If you’re an Enterprise customer that hasn't purchased API Gateway and you’re interested in protecting your GraphQL APIs today, you can get started by <a href="https://dash.cloudflare.com/?to=/:account/:zone/security/api-shield">enabling the API Gateway trial</a> inside the Cloudflare Dashboard or by contacting your account manager. Check out our <a href="https://developers.cloudflare.com/api-shield/security/graphql-protection/">documentation</a> on the feature to get started once you have access.</p> ]]></content:encoded>
            <category><![CDATA[API Shield]]></category>
            <category><![CDATA[API Gateway]]></category>
            <category><![CDATA[API Security]]></category>
            <guid isPermaLink="false">3jtCfJe8BOPYKyIvFdXqhj</guid>
            <dc:creator>John Cosgrove</dc:creator>
            <dc:creator>Ilya Andreev</dc:creator>
        </item>
        <item>
            <title><![CDATA[Cloudflare’s Always Online and the Internet Archive Team Up to Fight Origin Errors]]></title>
            <link>https://blog.cloudflare.com/cloudflares-always-online-and-the-internet-archive-team-up-to-fight-origin-errors/</link>
            <pubDate>Thu, 17 Sep 2020 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today is exciting for all those who want the Internet to be stronger, more resilient, and have important redundancies: Cloudflare is pleased to announce a partnership with the Internet Archive to bring new functionality to our Always Online service.  ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Every day, all across the Internet, something bad but entirely normal happens: thousands of <a href="https://www.cloudflare.com/learning/cdn/glossary/origin-server/">origin servers</a> go down, resulting in connection errors and frustrated users. Cloudflare’s users collectively spend over four and a half years each day waiting for unreachable origin servers to respond with error messages. But visitors don’t want to see error pages, they want to see content!</p><p>Today is exciting for all those who want the Internet to be stronger, more resilient, and have important redundancies: Cloudflare is pleased to announce a partnership with the Internet Archive to bring new functionality to our Always Online service.</p><p>Always Online serves as insurance for our customers’ websites. Should a customer’s origin go offline, timeout, or otherwise break, Always Online is there to step in and serve archived copies of webpages to visitors. The <a href="https://archive.org/">Internet Archive</a> is a nonprofit organization that runs the Wayback Machine, a service which saves snapshots of billions of websites across the Internet. By partnering with the Internet Archive, Cloudflare is able to seamlessly deliver responses for unreachable websites from the Internet Archive, while the Internet Archive can continue their mission of archiving the web to provide access to all knowledge.</p><p>Enabling Always Online in the Cloudflare dashboard allows us to share your hostname with the Wayback Machine so that they can archive your website. When a website’s origin is down, Cloudflare will go to the Internet Archive to retrieve the most recently archived version of the site, so that visitors will still be able to view the site’s content.</p>
    <div>
      <h3>Trying to reach a busted origin</h3>
      <a href="#trying-to-reach-a-busted-origin">
        
      </a>
    </div>
    <p>When a person visits a Cloudflare website, a request is made from their laptop/phone/tablet/smart fridge to Cloudflare’s edge. Our edge first looks to see if we can respond with cached content; if the requested content is not in cache, or is determined to be expired, we then obtain a fresh copy from the origin. As part of fulfilling an uncached/expired origin fetch, we also update our cache to allow subsequent requests to be served to visitors faster and more securely. If we are unable to reach the origin, our edge tries a few more times to connect before marking the origin as being down and serving an error page to the visitor. Receiving an error page is not ideal for anyone, so we try really hard to ensure that visitors to websites using Cloudflare can get <i>some</i> content, even if an origin is struggling.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4UQvDJTgMjVKUNJC4CMool/823ee42b3a3faad9910cb90193a10d4c/image3-8.png" />
            
            </figure>
    <div>
      <h3>A brief history of Always Online</h3>
      <a href="#a-brief-history-of-always-online">
        
      </a>
    </div>
    <p>When Cloudflare started 10 years ago, most of our customers were small and running on hosts that were subject to frequent downtime. These early customers feared that their host may go down at the same time a search engine was indexing their site. The search engine’s crawler would report the downed site as non-responsive and the site would drop in their search ranking. Always Online was born from that concern.</p><p>Through operating Always Online over the <a href="/always-online-because-downtime-sucks/">past 10 years</a>, we’ve learned that fighting Internet downtime with simple, unobtrusive tools was something that our customers and their users deeply value. Though some features have undergone rewrite upon rewrite, other parts of our code have remained relatively untouched by the sands of time, a testament to their robustness. For example, Always Online clearly shows a banner indicating that it is serving an archived version of the page due to the origin being unreachable, and this transparency is well-received by both website owners and visitors.</p><p>We recently set out to make Always Online even better. We wanted to preserve what customers loved — as seamless an experience as possible for their users when their origin servers are down — while increasing the amount of content available through Always Online, ensuring it is as fresh as possible, and performing this archiving in a way that helps make the Internet a better place.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5PdftwORCurkDNOT3fF0pd/6c4a7657f984ade2be754d04b6eb8587/image1-10.png" />
            
            </figure><p>What a visitor will see with Always Online. </p>
    <div>
      <h3>Enter the Internet Archive</h3>
      <a href="#enter-the-internet-archive">
        
      </a>
    </div>
    <p>Partnering with the Internet Archive’s Wayback Machine to power the next generation of Always Online accomplishes all of these goals. The Internet Archive’s mission is to provide universal access to all knowledge. Since 1996, the Internet Archive’s Wayback Machine has been archiving much of the public Web: preserving and making available millions of websites and pages that would otherwise be lost. In pursuit of that mission, they have archived more than 468 billion <a href="http://archive.org/web/">web pages</a>, amounting to more than 45 petabytes of information.</p><p>Always Online’s integration with the Internet Archive will help the Archive expand their record of the Internet; many of the domains that opt-in to Always Online functionality may not have been otherwise discovered by the Archive’s crawler. And for Cloudflare customers, the Archive will seamlessly provide visitors access to content that would otherwise be errors.</p><p>In other words, Cloudflare partnering with the Internet Archive makes the Internet better, stronger, and more available to everyone.</p><blockquote><p><i>“Through our partnership with Cloudflare, we are learning about, and archiving, webpages we might not have otherwise known about, and by integrating with Cloudflare’s Always Online service, archives of those pages are available to people trying to access them if they become unavailable via the live web”</i><i>—</i><b><i>Mark Graham</i></b><i>, Director of the Internet Archive’s Wayback Machine</i></p></blockquote><blockquote><p><i>“We are excited to work with Cloudflare and expect this partnership to bring important redundancy to the Internet and allow for us to advance our ongoing efforts to make the Internet more useful and reliable.”</i><i>—</i><b><i>Brewster Kahle</i></b><i>, Founder and Digital Librarian of the Internet Archive</i></p></blockquote>
    <div>
      <h3>How does the new Always Online work behind the scenes?</h3>
      <a href="#how-does-the-new-always-online-work-behind-the-scenes">
        
      </a>
    </div>
    <p>Upgrading to the new Always Online in the Cloudflare dashboard allows us to share some basic information about your website with the Internet Archive (like hostname and popular URLs), so they can begin to crawl and archive your website at regular intervals. This information sharing and crawling ensures content is available to Always Online and also serves to deepen the library of content available directly through the Archive.</p><p>If your origin goes down or is unreachable, Cloudflare’s edge will return a status code in the <a href="https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors">520 to 527</a> range, indicating an issue connecting to the origin. When this happens, Cloudflare will first look to the local edge datacenter to see if there is a stale or expired version of content we can serve to the website visitor. If there isn’t a version in the local cache, Cloudflare will then go to the Internet Archive and fetch the most recently archived version of the site to serve to your visitors. When that happens, Always Online serves the archived content with a banner to let your visitors know that your origin is having problems. The banner lets your visitors check, with a single click, whether your origin is back online. While dynamic content that requires communication with an origin server will still show an error to visitors (e.g. web applications or shopping carts), basic content will often be available with Always Online.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3rEChK9jt95qwILX9qsuki/bb442dd4bb47a9aef0352c84819a76de/image5-5.png" />
            
            </figure>
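<p>The fallback order described above can be summarized in a short sketch. This is purely illustrative pseudologic, not Cloudflare’s actual implementation; the status range and ordering come from the description above:</p>

```rust
// Illustrative sketch of the Always Online fallback order: origin first,
// then a stale copy in the local edge cache, then the most recent
// Internet Archive snapshot (served with a banner), else the error page.
#[derive(Debug, PartialEq)]
enum Served { Origin, StaleCache, ArchiveWithBanner, ErrorPage(u16) }

fn always_online(origin_status: u16, stale_copy: bool, archived_copy: bool) -> Served {
    if !(520..=527).contains(&origin_status) {
        return Served::Origin; // origin reachable: serve normally
    }
    if stale_copy {
        return Served::StaleCache; // expired content beats an error page
    }
    if archived_copy {
        return Served::ArchiveWithBanner; // Wayback Machine snapshot
    }
    Served::ErrorPage(origin_status)
}
```
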
    <div>
      <h3>Enabling the new Always Online</h3>
      <a href="#enabling-the-new-always-online">
        
      </a>
    </div>
    <p>For now, the old Always Online service will still be available, but we plan to fully transition to the Internet Archive-backed version soon.</p><p>Cloudflare customers can enable Always Online in the dashboard:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bPl4B1VGluBuQxJrfgyeE/b2f2c5bee301cf3ff98342c4c8f59a7b/J42AtNZv8xNcyQPPefVywiAGEhJyNbtqEE1pclEjIEwdDKdrNri9IFx8QK21QKo5wZD1B96CplW24pAEXJlpIZJ-LQesFDkcCeyIF3ITXhv-owHH63Az-zIptuJk.png" />
            
            </figure>
    <div>
      <h3>Learn More</h3>
      <a href="#learn-more">
        
      </a>
    </div>
    <ul><li><p>For more about Always Online, and how it works, please check out our <a href="https://support.cloudflare.com/hc/articles/200168436-Understanding-Cloudflare-Always-Online">documentation</a>.</p></li><li><p>To get started using Always Online, please log into your Cloudflare dashboard and toggle it on.</p></li><li><p>Please see the Internet Archive’s announcement of our partnership <a href="http://blog.archive.org/2020/09/17/internet-archive-partners-with-cloudflare-to-help-make-the-web-more-useful-and-reliable/">here</a>.</p></li><li><p>To help improve Always Online, or other parts of our slice of the Internet, drop us a <a href="https://www.cloudflare.com/careers/jobs/">line</a>.</p></li></ul> ]]></content:encoded>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Always Online]]></category>
            <guid isPermaLink="false">1bK0cMIe0TV2hYW1lGYef5</guid>
            <dc:creator>Alex Krivit</dc:creator>
            <dc:creator>Ilya Andreev</dc:creator>
        </item>
        <item>
            <title><![CDATA[Helping sites get back online: the origin monitoring intern project]]></title>
            <link>https://blog.cloudflare.com/helping-sites-get-back-online-the-origin-monitoring-intern-project/</link>
            <pubDate>Mon, 13 Apr 2020 11:00:00 GMT</pubDate>
            <description><![CDATA[ Over the course of ten weeks, our team of three interns (two engineering, one product management) went from a problem statement to a new feature, which is still working in production for all Cloudflare customers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>The most impactful internship experiences involve building something meaningful from scratch and learning along the way. Those can be tough goals to accomplish during a short summer internship, but our experience with Cloudflare’s 2019 intern program met both of them and more! Over the course of ten weeks, our team of three interns (two engineering, one product management) went from a problem statement to a new feature, which is still working in production for all Cloudflare customers.</p>
    <div>
      <h2>The project</h2>
      <a href="#the-project">
        
      </a>
    </div>
    <p>Cloudflare sits between customers’ origin servers and end users. This means that all traffic to the origin server runs through Cloudflare, so we know when something goes wrong with a server and sometimes reflect that status back to users. For example, if an origin is refusing connections and there’s no cached version of the site available, Cloudflare will display a <a href="https://support.cloudflare.com/hc/en-us/articles/115003011431-Troubleshooting-Cloudflare-5XX-errors#521error">521 error</a>. If customers don’t have monitoring systems configured to detect and notify them when failures like this occur, their websites may go down silently, and they may hear about the issue for the first time from angry users.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kbCMvc5jyJkluEFbAIT3b/0134bb038cbb7958001da92f67435a66/image4-11.png" />
            
            </figure><p>When a customer’s origin server is unreachable, Cloudflare sends a 5xx error back to the visitor.‌‌</p><p>This problem became the starting point for our summer internship project: since Cloudflare knows when customers' origins are down, let’s send them a notification when it happens so they can take action to get their sites back online and reduce the impact to their users! This work became Cloudflare’s <a href="https://support.cloudflare.com/hc/en-us/articles/360037465932-Preventing-site-downtime#5Y1o5Sk9v44PGWnjnfvgh">passive origin monitoring</a> feature, which is currently available on all Cloudflare plans.</p><p>Over the course of our internship, we ran into lots of interesting technical and product problems, like:</p>
    <div>
      <h3>Making big data small</h3>
      <a href="#making-big-data-small">
        
      </a>
    </div>
    <p>Working with data from all requests going through Cloudflare’s 26 million+ Internet properties to look for unreachable origins is unrealistic from a data volume and performance perspective. Figuring out what datasets were available to analyze for the errors we were looking for, and how to adapt our whiteboarded algorithm ideas to use this data, was a challenge in itself.</p>
    <div>
      <h3>Ensuring high alert quality</h3>
      <a href="#ensuring-high-alert-quality">
        
      </a>
    </div>
    <p>Because only a fraction of requests show up in the sampled timing and error dataset we chose to use, false positives/negatives were disproportionately likely to occur for low-traffic sites. These are the sites that are least likely to have sophisticated monitoring systems in place (and therefore are most in need of this feature!). In order to make the notifications as accurate and actionable as possible, we analyzed patterns of failed requests throughout different types of Cloudflare Internet properties. We used this data to determine thresholds that would maximize the number of true positive notifications, while making sure they weren’t so sensitive that we end up spamming customers with emails about sporadic failures.</p>
    <div>
      <h3>Designing actionable notifications</h3>
      <a href="#designing-actionable-notifications">
        
      </a>
    </div>
    <p>Cloudflare has lots of different kinds of customers, from people running personal blogs with interest in DDoS mitigation to large enterprise companies with extremely sophisticated monitoring systems and global teams dedicated to incident response. We wanted to make sure that our notifications were understandable and actionable for people with varying technical backgrounds, so we enabled the feature for small samples of customers and tested many variations of the “origin monitoring email”. Customers responded right back to our notification emails, sent in support questions, and posted on our community forums. These were all great sources of feedback that helped us improve the message’s clarity and actionability.</p><p>We frontloaded our internship with lots of research (both digging into request data to understand patterns in origin unreachability problems and talking to customers/poring over support tickets about origin unreachability) and then spent the next few weeks iterating. We enabled passive origin monitoring for all customers with some time remaining before the end of our internships, so we could spend time improving the supportability of our product, documenting our design decisions, and working with the team that would be taking ownership of the project.</p><p>We were also able to develop some smaller internal capabilities that built on the work we’d done for the customer-facing feature, like notifications on origin outage events for larger sites to help our account teams provide proactive support to customers. It was super rewarding to see our work in production, helping Cloudflare users get their sites back online faster after receiving origin monitoring notifications.</p>
    <div>
      <h2>Our internship experience</h2>
    </div>
    <p>The Cloudflare internship program was a whirlwind ten weeks, with each day presenting new challenges and lessons! Some factors that led to our productive and memorable summer included:</p>
    <div>
      <h3>A well-scoped project</h3>
    </div>
    <p>It can be tough to find a project that’s meaningful enough to make an impact but still doable within the short time period available for summer internships. We’re grateful to our managers and mentors for identifying an interesting problem that was the perfect size for us to work on, and for keeping us on the rails if the technical or product scope started to creep beyond what would be realistic for the time we had left.</p>
    <div>
      <h3>Working as a team of interns</h3>
    </div>
    <p>The immediate team working on the origin monitoring project consisted of three interns: Annika in product management and Ilya and Zhengyao in engineering. Having a dedicated team with similar goals and perspectives on the project helped us stay focused and work together naturally.</p>
    <div>
      <h3>Quick, agile cycles</h3>
    </div>
    <p>Since our project faced strict time constraints and our team was distributed across two offices (Champaign and San Francisco), it was critical for us to communicate frequently and work in short, iterative sprints. Daily standups, weekly planning meetings, and frequent feedback from customers and internal stakeholders helped us stay on track.</p>
    <div>
      <h3>Great mentorship &amp; lots of freedom</h3>
    </div>
    <p>Our managers challenged us, but also gave us room to explore our ideas and develop our own work process. Their trust encouraged us to set ambitious goals for ourselves and enabled us to accomplish far more than we would have under strict process requirements.</p>
    <div>
      <h2>After the internship</h2>
    </div>
    <p>In the last week of our internships, the engineering interns, who were based in the Champaign, IL office, visited the San Francisco office to meet with the team that would be taking over the project when we left and to present our work to the company at our all-hands meeting. The most exciting aspect of the visit: our presentation was preempted by Cloudflare’s co-founders announcing the company’s public S-1 filing at the all-hands! :)</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Fo3hKDoRLIKjjEmB6HHSf/30dc74250dd5588e9fb20d528c9187db/image5-5.png" />
            
            </figure>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7q6XuSukWKdp5CmOEieU9/3762c15a4fb0438a54f14aefbf76c34c/image1-11.png" />
            
            </figure><p>Over the next few months, Cloudflare added a notifications page for easy configurability and <a href="/new-tools-to-monitor-your-server-and-avoid-downtime/">announced</a> the availability of passive origin monitoring along with some other tools to help customers monitor their servers and avoid downtime.</p><p>Ilya is working for Cloudflare part-time during the school semester and heading back for another internship this summer, and Annika is joining the team full-time after graduation this May. We’re excited to keep working on tools that help make the Internet a better place!</p><p>Also, Cloudflare is <a href="/cloudflare-doubling-size-of-2020-summer-intern-class/">doubling the size of the 2020 intern class</a>—if you or someone you know is interested in an internship like this one, check out the <a href="https://boards.greenhouse.io/cloudflare/jobs/2156436?gh_jid=2156436&amp;gh_src=d193c1b71us">open positions</a> in software engineering, security engineering, and product management.</p> ]]></content:encoded>
            <category><![CDATA[Internship Experience]]></category>
            <category><![CDATA[Monitoring]]></category>
            <category><![CDATA[Life at Cloudflare]]></category>
            <guid isPermaLink="false">4LahDyjkddXNdKsN2e2q9r</guid>
            <dc:creator>Annika Garbers</dc:creator>
            <dc:creator>Ilya Andreev</dc:creator>
            <dc:creator>Zhengyao Lin</dc:creator>
        </item>
    </channel>
</rss>