
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 03:52:22 GMT</lastBuildDate>
        <item>
            <title><![CDATA[How We Design Features for Wrangler, the Cloudflare Workers CLI]]></title>
            <link>https://blog.cloudflare.com/how-we-design-features-for-wrangler/</link>
            <pubDate>Tue, 17 Sep 2019 15:55:00 GMT</pubDate>
            <description><![CDATA[ The most recent update to Wrangler, version 1.3.1, introduces important new features for developers building Cloudflare Workers — from built-in deployment environments to first-class support for Workers KV. ]]></description>
            <content:encoded><![CDATA[ <p>The most recent update to Wrangler, <a href="https://github.com/cloudflare/wrangler/releases/tag/v1.3.1">version 1.3.1</a>, introduces important new features for developers building <a href="https://workers.cloudflare.com">Cloudflare Workers</a> — from built-in deployment environments to first-class support for <a href="https://developers.cloudflare.com/workers/reference/storage/overview/">Workers KV</a>. Wrangler is Cloudflare’s first officially supported CLI. Branching into this field of software has been a novel experience for us engineers and product folks on the Cloudflare Workers team.</p><p>As part of the 1.3.1 release, the folks on the Workers Developer Experience team dove into the thought process that goes into building out features for a CLI and thinking like users. Because while we wish building a CLI were as easy as our teammate Avery tweeted...</p><blockquote><p>If I were programming a CLI, I would simply design it in a way that is not controversial and works for every type of user.</p><p>— avery harnish (@SmoothAsSkippy) <a href="https://twitter.com/SmoothAsSkippy/status/1166842441711980546?ref_src=twsrc%5Etfw">August 28, 2019</a></p></blockquote><p>… it brings design challenges that many of us have never encountered. To overcome these challenges successfully requires deep empathy for users across the entire team, as well as the ability to address ambiguous questions related to how developers write Workers.</p>
    <div>
      <h2>Wrangler, meet Workers KV</h2>
      <a href="#wrangler-meet-workers-kv">
        
      </a>
    </div>
    <p>Our new KV functionality introduced a host of new features, from creating KV namespaces to bulk uploading key-value pairs for use within a Worker. This new functionality primarily consisted of logic for interacting with the Workers KV API, meaning that the technical work “under the hood” was relatively straightforward. Figuring out how to cleanly represent these new features to Wrangler users, however, became the fundamental question of this release.</p><p>Designing the invocations for new KV functionality unsurprisingly required multiple iterations, and taught us a lot about usability along the way!</p>
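    <p>Under the hood, most of this work amounts to calls against the Workers KV REST API. As a rough sketch (in JavaScript for illustration; Wrangler itself is written in Rust), creating a namespace boils down to a single authenticated POST to the Cloudflare API. The account ID and auth values here are placeholders:</p>
            <pre><code>// Placeholders: substitute your own account ID and API credentials
const accountId = 'YOUR_ACCOUNT_ID'
const headers = {
  'X-Auth-Email': 'you@example.com',
  'X-Auth-Key': 'YOUR_API_KEY',
  'Content-Type': 'application/json',
}

// Builds the KV namespaces endpoint for an account
function kvNamespacesURL(accountId) {
  return `https://api.cloudflare.com/client/v4/accounts/${accountId}/storage/kv/namespaces`
}

// POSTing a title to that endpoint creates the namespace
async function createNamespace(title) {
  const response = await fetch(kvNamespacesURL(accountId), {
    method: 'POST',
    headers,
    body: JSON.stringify({ title }),
  })
  return response.json()
}</code></pre>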
    <div>
      <h2>Attempt 1</h2>
      <a href="#attempt-1">
        
      </a>
    </div>
    <p>For our initial pass, the path seemed obvious. (Narrator: It really, really wasn’t.) We hypothesized that having Wrangler support familiar commands like <code>ls</code> and <code>rm</code> would be a reasonable mapping of familiar command line tools to Workers KV, and ended up with the following set of invocations:</p>
            <pre><code># creates a new KV Namespace
$ wrangler kv add myNamespace

# sets a string key that doesn't expire
$ wrangler kv set myKey="someStringValue"

# sets many keys
$ wrangler kv set myKey="someStringValue" myKey2="someStringValue2" ...

# sets a volatile (expiring) key that expires in 60 s
$ wrangler kv set myVolatileKey=path/to/value --ttl 60s

# deletes three keys
$ wrangler kv rm myNamespace myKey1 myKey2 myKey3

# lists all your namespaces
$ wrangler kv ls

# lists all the keys for a namespace
$ wrangler kv ls myNamespace

# removes all keys from a namespace, then removes the namespace
$ wrangler kv rm -r myNamespace</code></pre>
            <p>While these commands invoked familiar shell utilities, they made interacting with your KV namespace feel a lot more like interacting with a filesystem than a key-value store. The juxtaposition of a well-known command like <code>ls</code> with a non-command, <code>set</code>, was confusing. Additionally, preexisting command line tools did not map one-to-one onto KV actions (especially <code>rm -r</code>; there is no need to recursively delete a KV namespace like a directory if you can just delete the namespace!).</p><p>This draft also surfaced use cases we needed to support: namely, easy bulk uploads from a file. This draft required users to enter every key-value pair on the command line instead of reading them from a file; that was a non-starter.</p><p>Finally, these KV subcommands caused confusion about actions on different resources. For example, the command for listing your Workers KV namespaces looked a lot like the command for listing keys within a namespace.</p><p>Going forward, we needed to meet these newly identified needs.</p>
    <div>
      <h2>Attempt 2</h2>
      <a href="#attempt-2">
        
      </a>
    </div>
    <p>Our next attempt shed the shell utilities in favor of simple, declarative subcommands like <code>create</code>, <code>list</code>, and <code>delete</code>. It also addressed the need for easy-to-use bulk uploads by allowing users to pass a JSON file of keys and values to Wrangler.</p>
            <pre><code># create a namespace
$ wrangler kv create namespace &lt;title&gt;

# delete a namespace
$ wrangler kv delete namespace &lt;namespace-id&gt;

# list namespaces
$ wrangler kv list namespace

# write key-value pairs to a namespace, with an optional expiration flag
$ wrangler kv write key &lt;namespace-id&gt; &lt;key&gt; &lt;value&gt; --ttl 60s

# delete a key from a namespace
$ wrangler kv delete key &lt;namespace-id&gt; &lt;key&gt;

# list all keys in a namespace
$ wrangler kv list key &lt;namespace-id&gt;

# write bulk KV pairs from a JSON file or a directory; for a directory,
# keys are the file paths from the root and values are the file contents
$ wrangler kv write bulk ./path/to/assets

# delete bulk pairs; same input functionality as above
$ wrangler kv delete bulk ./path/to/assets</code></pre>
            <p>Given the breadth of new functionality we planned to introduce, we also built out a taxonomy of new subcommands to ensure that invocations for different resources — namespaces, keys, and bulk sets of key-value pairs — were consistent:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2UyT0JmQEJSq3ejZTtQkaO/e12c5b3fd7b93b48004067ef096a30cf/Attempt2Diagram.png" />
            
            </figure><p>Designing invocations with taxonomies became a crucial part of our development process going forward, and gave us a clear look at the “big picture” of our new KV features.</p><p>This approach was closer to what we wanted. It offered bulk put and bulk delete operations that would read multiple key-value pairs from a JSON file. After specifying an action subcommand (e.g. <code>delete</code>), users now explicitly stated which resource the action applied to (<code>namespace</code>, <code>key</code>, <code>bulk</code>), which reduced confusion about which action applied to which KV component.</p><p>This draft, however, was still not as explicit as we wanted it to be. The distinction between operations on <code>namespaces</code> versus <code>keys</code> remained easy to miss, and we still feared the possibility of different <code>delete</code> operations accidentally producing unwanted deletes (a potentially disastrous outcome!).</p>
    <div>
      <h2>Attempt 3</h2>
      <a href="#attempt-3">
        
      </a>
    </div>
    <p>We really wanted to help differentiate where in the hierarchy of resources a user was operating at any given time. Were they operating on <code>namespaces</code>, <code>keys</code>, or <code>bulk</code> sets of keys in a given operation, and how could we make that as clear as possible? We looked around, comparing how CLIs from kubectl to Heroku handled commands affecting different objects. We landed on a pleasing pattern inspired by Heroku’s CLI: colon-delimited command namespacing:</p>
            <pre><code>plugins:install PLUGIN    # installs a plugin into the CLI
plugins:link [PATH]       # links a local plugin to the CLI for development
plugins:uninstall PLUGIN  # uninstalls or unlinks a plugin
plugins:update            # updates installed plugins</code></pre>
            <p>So we adopted <code>kv:namespace</code>, <code>kv:key</code>, and <code>kv:bulk</code> to semantically separate our commands:</p>
            <pre><code># namespace commands operate on namespaces
$ wrangler kv:namespace create &lt;title&gt; [--env]
$ wrangler kv:namespace delete &lt;binding&gt; [--env]
$ wrangler kv:namespace rename &lt;binding&gt; &lt;new-title&gt; [--env]
$ wrangler kv:namespace list [--env]
# key commands operate on individual keys
$ wrangler kv:key write &lt;binding&gt; &lt;key&gt;=&lt;value&gt; [--env | --ttl | --exp]
$ wrangler kv:key delete &lt;binding&gt; &lt;key&gt; [--env]
$ wrangler kv:key list &lt;binding&gt; [--env]
# bulk commands take a user-generated JSON file as an argument
$ wrangler kv:bulk write &lt;binding&gt; ./path/to/data.json [--env]
$ wrangler kv:bulk delete &lt;binding&gt; ./path/to/data.json [--env]</code></pre>
            <p>And ultimately ended up with this topology:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1lL9YrgjMs0VX3vXm31lcP/fd71a9cee1e23752913113bed2960d5b/Attempt3Diagram.png" />
            
            </figure><p>We were even closer to our desired usage pattern; the object acted upon was explicit to users, and the action applied to the object was also clear.</p><p>There was one usage issue left. Supplying a <code>namespace-id</code> (the field that specifies which Workers KV namespace to perform an action on) required users to dig up their clunky KV namespace ID (a string like <code>06779da6940b431db6e566b4846d64db</code>) and provide it on the command line under the <code>namespace-id</code> option. This <code>namespace-id</code> value is what our Workers KV API expects in requests, but it would be cumbersome for users to find and provide, let alone use frequently.</p><p>The solution we came to takes advantage of the <code>wrangler.toml</code> present in every Wrangler-generated Worker. To publish a Worker that uses a Workers KV store, the following field is needed in the Worker’s <code>wrangler.toml</code>:</p>
            <pre><code>kv-namespaces = [
	{ binding = "TEST_NAMESPACE", id = "06779da6940b431db6e566b4846d64db" }
]</code></pre>
            <p>This field specifies a Workers KV namespace that is bound to the name <code>TEST_NAMESPACE</code>, such that a Worker script can access it with logic like:</p>
            <pre><code>TEST_NAMESPACE.get("my_key");</code></pre>
            <p>We also decided to take advantage of this <code>wrangler.toml</code> field to allow users to specify a KV binding name instead of a KV namespace ID. Upon being given a KV binding name, Wrangler could look up the associated <code>id</code> in <code>wrangler.toml</code> and use it for Workers KV API calls.</p><p>Wrangler users performing actions on KV namespaces could simply provide <code>--binding TEST_NAMESPACE</code> for their KV calls and let Wrangler retrieve the corresponding ID from <code>wrangler.toml</code>. Users can still specify <code>--namespace-id</code> directly if they do not have namespaces specified in their <code>wrangler.toml</code>.</p><p>Finally, we reached our happy point: Wrangler’s new KV subcommands were explicit, offered functionality for both individual and bulk actions with Workers KV, and felt ergonomic for Wrangler users to integrate into their day-to-day operations.</p>
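            <p>To make that lookup concrete, here is a sketch in JavaScript (Wrangler itself is written in Rust, and the helper name here is made up for illustration) of resolving a binding name against the parsed <code>kv-namespaces</code> entries from <code>wrangler.toml</code>:</p>
            <pre><code>// Hypothetical helper: resolve a binding name to its namespace ID
function resolveNamespaceId(kvNamespaces, binding) {
  const match = kvNamespaces.find(function (ns) {
    return ns.binding === binding
  })
  if (!match) {
    throw new Error('no kv-namespace with binding "' + binding + '" in wrangler.toml')
  }
  return match.id
}

// Parsed form of the kv-namespaces field shown above
const kvNamespaces = [
  { binding: 'TEST_NAMESPACE', id: '06779da6940b431db6e566b4846d64db' },
]

resolveNamespaceId(kvNamespaces, 'TEST_NAMESPACE')
// '06779da6940b431db6e566b4846d64db'</code></pre>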
    <div>
      <h2>Lessons Learned</h2>
      <a href="#lessons-learned">
        
      </a>
    </div>
    <p>Throughout this design process, we identified the following takeaways to carry into future Wrangler work:</p><ol><li><p><b>Taxonomies of your CLI’s subcommands and invocations are a great way to ensure consistency and clarity</b>. CLI users tend to anticipate similar semantics and workflows within a CLI, so visually documenting all paths for the CLI can greatly help with identifying where new work can be consistent with older semantics. Drawing out these taxonomies can also expose missing features that seem like a fundamental part of the “big picture” of a CLI’s functionality.</p></li><li><p><b>Use other CLIs for inspiration and validation</b>. Drawing logic from popular CLIs helped us confirm our assumptions about what users like, and learn established patterns for complex CLI invocations.</p></li><li><p><b>Avoid logic that requires passing in raw ID strings</b>. Testing CLIs a lot means that remembering and re-pasting ID values gets very tedious very quickly. Emphasizing a set of purely human-readable CLI commands and arguments makes for a far more intuitive experience. When possible, taking advantage of configuration files (like we did with <code>wrangler.toml</code>) offers a straightforward way to provide mappings of human-readable names to complex IDs.</p></li></ol><p>We’re excited to continue using these design principles we’ve learned and documented as we grow Wrangler into a one-stop <a href="https://workers.cloudflare.com">Cloudflare Workers</a> shop.</p><p>If you’d like to try out Wrangler, <a href="https://github.com/cloudflare/wrangler">check it out on GitHub</a> and let us know what you think! We would love your feedback.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5CFtKpaVci96ko4xuzfFSW/1ccc3afb9f1fa313c89ed82769bd0743/WranglerCrab-1.png" />
            
            </figure> ]]></content:encoded>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Cloudflare Workers KV]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">4ezmggl8GGox1Mv6DWV62H</guid>
            <dc:creator>Ashley M Lewis</dc:creator>
            <dc:creator>Gabbi Fisher</dc:creator>
        </item>
        <item>
            <title><![CDATA[Writing an API at the Edge with Workers and Cloud Firestore]]></title>
            <link>https://blog.cloudflare.com/api-at-the-edge-workers-and-firestore/</link>
            <pubDate>Thu, 21 Mar 2019 15:00:00 GMT</pubDate>
            <description><![CDATA[ We’re super stoked about bringing you Workers.dev, and we’re even more stoked at every opportunity we have to dogfood Workers. Using what we create keeps us tuned in to the developer experience, which takes a good deal of guesswork out of drawing our roadmaps. ]]></description>
            <content:encoded><![CDATA[ <p>We’re super stoked about bringing you <a href="https://workers.dev">Workers.dev</a>, and we’re even more stoked at every opportunity we have to <a href="/dogfooding-edge-workers/">dogfood Workers</a>. Using what we create keeps us tuned in to the developer experience, which takes a good deal of guesswork out of drawing our roadmaps.</p><p><a href="/announcing-workers-dev/">Our goal with Workers.dev</a> is to provide a way to deploy JavaScript code to our network of 165 data centers without requiring developers to <a href="https://www.cloudflare.com/learning/dns/glossary/what-is-a-domain-name-registrar/">register a domain</a> with Cloudflare first. While we gear up for general availability, we wanted to provide users an opportunity to reserve their favorite subdomain in a fair and consistent way, so we built a system to allow visitors to reserve a subdomain where their Workers will live once Workers.dev is released. This is the story of how we wrote the system backing that submission process.</p>
    <div>
      <h3>Requirements</h3>
      <a href="#requirements">
        
      </a>
    </div>
    <p>Of course, we always want to use the best tool for the job, so designing the Workers that would back <a href="https://workers.dev">Workers.dev</a> started with an inventory of constraints and user experience expectations:</p>
    <div>
      <h3>Constraints</h3>
      <a href="#constraints">
        
      </a>
    </div>
    <ol><li><p>We want to limit reservations to one per email address. It’s no fun if someone writes a bot to claim every good Workers subdomain in ten seconds; they wouldn’t be able to claim them without creating a Cloudflare account for every single one anyway!</p></li><li><p>We only want to allow a single reservation per subdomain to avoid awkward “sorry” messages later on; so, we need a reliable uniqueness constraint within the datastore on write.</p></li><li><p>We want to blocklist a few key subdomains, and be able to detect and blocklist more as we continue on.</p></li></ol>
    <div>
      <h3>User Flow</h3>
      <a href="#user-flow">
        
      </a>
    </div>
    <p>At a high, procedural level, our little system needed to handle the following roadmap:</p><ol><li><p>Visitor submits a form with their desired subdomain and email address.</p></li><li><p>The form is sent off to a worker, whose job is to:</p><ol type="a"><li><p>Sanitize the inputs (make sure that subdomain is valid! The email too!)</p></li><li><p>Check Cloud Firestore for existing reservations matching either the subdomain or email address</p></li><li><p>Add the user’s email address to Cloud Firestore and shoot off an email with an auto-generated link</p></li><li><p>Return the results to the landing page.</p></li></ol></li><li><p>Display feedback to the visitor:</p><ol type="a"><li><p>If the reservation cannot be made (subdomain or email address has already been used), display an error and clear the form.</p></li><li><p>If the reservation can be made, indicate success and direct the visitor to their email.</p></li></ol></li><li><p>Visitor receives verification email, clicks through to Workers.dev again.</p></li><li><p>The page slurps in data from the URL and shoots off a request to another worker, whose job is to:</p><ol type="a"><li><p>Retrieve the email address associated with the link</p></li><li><p>Check again that the email address is not already associated with a subdomain</p></li><li><p>Attempt to create a new reservation. If this request comes back with a 409 error, the subdomain is already reserved</p></li><li><p>Return the results to the landing page.</p></li></ol></li><li><p>Display feedback to the visitor:</p><ol type="a"><li><p>If the reservation cannot be made (subdomain or email address has already been used), display an error and clear the form.</p></li><li><p>If the reservation was successful, display a message and celebrate!</p></li></ol></li></ol>
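    <p>The input sanitization in step 2a, for example, reduces to a couple of small checks. A sketch of the kind of validation the worker might perform (the exact rules here are illustrative; workers.dev subdomains follow the usual DNS-label restrictions):</p>
            <pre><code>// A workers.dev subdomain must be a valid DNS label: 1-63 characters,
// lowercase letters, digits, and hyphens, neither starting nor ending
// with a hyphen
function isValidSubdomain(subdomain) {
  return /^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$/.test(subdomain)
}

// A deliberately loose email check; real verification happens via the
// link emailed to the visitor
function isPlausibleEmail(email) {
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)
}</code></pre>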
    <div>
      <h3>Design</h3>
      <a href="#design">
        
      </a>
    </div>
    <p>I was tasked with handling the back end operations. A few characteristics make this system an ideal fit to use a Worker. For one, the service is ephemeral; we are only offering reservations for a limited period before official registration. This feature wasn’t going to be permanent, didn’t require access to the existing database, and didn’t depend on another service running on our private network. Also, being able to develop and deploy independently of our larger API was a pretty big bonus, so it seemed like a great job for a Worker.</p>
    <div>
      <h3>Workers</h3>
      <a href="#workers">
        
      </a>
    </div>
    <p>Workers, to me, embody the Single Responsibility Principle. I believe they should be as small as possible while still encompassing an atomic operation. This kind of composability makes them well-suited as autonomous components of both larger systems and ad-hoc projects like this one.</p><p>The first thing that jumped out at me was a clear split in the logic of our roadmap: verifying emails and reserving subdomains. With those two distinct operations, we’d use two distinct Worker scripts on two distinct routes. Another valid implementation could use a single worker with branching logic based on the request that calls it, but two independent scripts allow more granular management. At the same time, a lot of the logic is shared, specifically the need to hit the datastore, which means the need for a client module. <a href="https://serverless.com/">Serverless Framework</a> made this not only possible, but relatively painless with minimal configuration.</p>
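    <p>In Workers terms, each of those scripts is just a fetch handler deployed on its own route. A minimal sketch of the shape both scripts share (the handler body is elided; each script fills in its own logic):</p>
            <pre><code>// Each script registers a single fetch handler; which route it runs on
// (email verification vs. subdomain reservation) is configured per-script
addEventListener('fetch', function (event) {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // script-specific logic goes here: verify an email, or reserve a subdomain
  return new Response('ok')
}</code></pre>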
    <div>
      <h3>Cloud Firestore</h3>
      <a href="#cloud-firestore">
        
      </a>
    </div>
    <p>For persistence, the project required a datastore that was both performant and consistent, to prevent double-reserving the same subdomain. <a href="https://www.cloudflare.com/products/workers-kv/">Workers KV</a>, while an excellent store for reading data, is eventually consistent (i.e. updates to the store are not immediately available to every node on the edge). Enter Google Cloud Platform’s Cloud <a href="https://cloud.google.com/firestore/">Firestore</a>, which <a href="https://firebase.googleblog.com/2019/01/cloud-firestore-in-general-availability.html">just went GA</a> as we were developing this project.</p><blockquote><p>Cloud Firestore is a flexible, scalable database for mobile, web, and server development from Firebase and Google Cloud Platform. Like Firebase Realtime Database, it keeps your data in sync across client apps through real time listeners and offers offline support for mobile and web so you can build responsive apps that work regardless of network latency or Internet connectivity.</p></blockquote><p>Using Cloud Firestore gave us a relatively simple solution to our data storage problem: it is immediately consistent, which allows us to avoid reservation collisions. It can be accessed via a REST API with simple JWT authentication for a service account, meaning our worker can interface with it using the Fetch API. Once I had a client written in JavaScript, it could also be used from the CLI with node-fetch for a few simple data-gathering scripts.</p>
    <div>
      <h3>Building the API</h3>
      <a href="#building-the-api">
        
      </a>
    </div>
    <p>While Cloud Firestore is excellent for handling requests from many users, in this case we want to restrict access to just our worker instances, and for that we need to create a Service Account. You can do this from the IAM console for your Google Cloud project. We’ll build one with the role “Cloud Datastore User”, which is the role recommended for our use case.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5r2uRna3dKkKS6Nvl80tgc/69b46d1d04d677417df1a66038310bf3/serviceaccount.png" />
            
            </figure><p>I used a service account for making API calls from a Worker</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2OheId26j2uftzDzzjdZG8/cf0a1ba880c118ec1334506c81d9e676/serviceaccount2.png" />
            
            </figure><p>Adding a name and description helps track users of the project</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1yXA0Uux7VfoZF1hyJ1fRr/7d2d84d78ffa9eb45a64d43cd6c43615/serviceaccount3.png" />
            
            </figure><p>'Cloud Datastore User' is the appropriate Role for Service Accounts</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6DofJvrlAkH8EZk7o3ryCn/747cd1a94d040105b1e1bdfa80c976f8/serviceaccount4.png" />
            
            </figure><p>I then saved my key as a JSON file for use in building JWTs for authenticating requests</p><p>I clicked <code>Create key</code> to save my Service Account configuration as a JSON file. This file is used in the next step to build the JWT for authenticating requests to the Firestore API.</p>
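            <p>For reference, the downloaded file contains (among other fields) the values the scripts below rely on; a trimmed sketch with placeholder values:</p>
            <pre><code>{
  "project_id": "my-project",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "my-service-account@my-project.iam.gserviceaccount.com"
}</code></pre>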
    <div>
      <h3>Firestore Authentication</h3>
      <a href="#firestore-authentication">
        
      </a>
    </div>
    <p>Because we are planning on running this client in the Workers runtime, and we don’t have access to either the DOM or the Node runtime, we’ll have to rely on the REST API rather than either of the JavaScript client libraries. That’s okay though; the REST API is quite robust, and <a href="https://developers.google.com/identity/protocols/OAuth2ServiceAccount#jwt-auth">authentication is possible using just a JWT</a>, rather than the full OAuth 2.0 handshake procedure.</p>
    <div>
      <h3>Generating JWTs</h3>
      <a href="#generating-jwts">
        
      </a>
    </div>
    <p>To keep the configuration out of the source code, I wrote a node script for assembling the various pieces of the configuration into a JSON blob.</p>
            <pre><code>const fs = require('fs')
const path = require('path')
const YAML = require('yaml-js')

// Service Definition for Cloud Firestore can be found here:
// https://github.com/googleapis/googleapis/blob/master/google/firestore/firestore_v1.yaml
// Service Account Config should be the JSON file you saved in the last step
let [serviceDefinitionPath, serviceAccountConfigPath] = process.argv.slice(2)

let serviceDefinition = YAML.load(fs.readFileSync(serviceDefinitionPath))
let serviceAccountConfig = require(path.resolve(serviceAccountConfigPath))

// JWT spec at https://developers.google.com/identity/protocols/OAuth2ServiceAccount#jwt-auth
let payload = {
  aud: `https://${serviceDefinition.name}/${serviceDefinition.apis[0].name}`,
  iss: serviceAccountConfig.client_email,
  sub: serviceAccountConfig.client_email,
}

let privateKey = serviceAccountConfig.private_key
let privateKeyID = serviceAccountConfig.private_key_id
let algorithm = 'RS256'
let url = `https://firestore.googleapis.com/v1beta1/projects/${serviceAccountConfig.project_id}/databases/(default)/documents`

// The object we want to send to KV
let FIREBASE_JWT_CONFIG = {
  payload,
  privateKey,
  privateKeyID,
  algorithm,
  url,
}

// Write out to a JSON file to upload to KV
const metadataFilename = './config/metadata.json'
fs.writeFileSync(metadataFilename, JSON.stringify(FIREBASE_JWT_CONFIG))

console.log('Worker metadata file created at', metadataFilename)</code></pre>
            <p>I leaned on <code>node-jose</code> for JWT generation, and wrapped it into a small token function that adds the required timestamps to the payload, and generates a JWT:</p>
            <pre><code>import jose from 'node-jose';

/**
 * Generate a Google Cloud API JWT
 *
 * @param config - the JWT configuration
 */
export default async function generateJWT(config) {
  const iat = new Date().getTime() / 1000;
  let payload = {
    ...config.payload,
    iat: iat,
    exp: iat + 3600
  };

  const signingKey = await jose.JWK.asKey(
    config.privateKey.replace(/\\n/g, '\n'),
    'pem'
  );

  const sign = await jose.JWS.createSign(
    { fields: { alg: config.algorithm, kid: config.privateKeyID } },
    signingKey
  )
    .update(JSON.stringify(payload), 'utf8')
    .final();

  const signature = sign.signatures[0];
  return [signature.protected, sign.payload, signature.signature].join('.');
}</code></pre>
            
    <div>
      <h3>Adding JWT Configuration to KV</h3>
      <a href="#adding-jwt-configuration-to-kv">
        
      </a>
    </div>
    <p>We needed somewhere to keep this configuration that is easily accessible by our Workers but not simply committed in our source code. KV works well in this situation, so I created a namespace and added the JSON-stringified value under the key <code>config</code>.</p>
            <pre><code>const fetch = require('node-fetch')

const {
  CLOUDFLARE_AUTH_KEY,
  CLOUDFLARE_AUTH_EMAIL,
  CLOUDFLARE_ACCOUNT_ID,
} = process.env

const JWT_CONFIG_NAMESPACE = 'gcpAuth'

// URL and headers for KV API calls https://api.cloudflare.com/#workers-kv-namespace-properties
const kvURI = `https://api.cloudflare.com/client/v4/accounts/${CLOUDFLARE_ACCOUNT_ID}/storage/kv/namespaces`
const headers = {
  'X-Auth-Email': CLOUDFLARE_AUTH_EMAIL,
  'X-Auth-Key': CLOUDFLARE_AUTH_KEY,
  'Content-Type': 'application/json',
}

async function setUpKV() {
  // Add a KV namespace
  // note: if you are using serverless framework, you can skip this step and
  // set kv namespace bindings in serverless.yaml;
  // if not, you'll want to add logic here to get the list of namespaces
  // and update only if the namespace you want is not already set.
  let namespaceId = await fetch(kvURI, {
    method: 'POST',
    headers,
    body: JSON.stringify({ title: JWT_CONFIG_NAMESPACE })
  }).then(response =&gt; response.json()).then(data =&gt; {
    if (!data.success) throw new Error(JSON.stringify(data.errors))

    return data.result.id
  })

  // set the config variable to the json blob with our jwt settings
  await fetch(`${kvURI}/${namespaceId}/values/config`, {
    method: 'PUT',
    headers,
    body: JSON.stringify(require('../config/metadata.json'))
  }).then(response =&gt; response.json()).then(data =&gt; {
    if (!data.success) {
      throw new Error(JSON.stringify(data.errors))
    }
  })
}

setUpKV()
  .catch(console.error)</code></pre>
            <p>A note: this method for handling secrets is better than committing them in scripts, but still not the greatest. We’re working on improving support for secrets.</p>
    <div>
      <h3>Querying Cloud Firestore</h3>
      <a href="#querying-cloud-firestore">
        
      </a>
    </div>
    <p>After that, I spent a significant amount of time fiddling with <a href="https://developers.google.com/apis-explorer/#p/">Google’s API Explorer</a> to figure out precisely how to make the requests I needed and parse the responses appropriately. <a href="https://firebase.google.com/docs/firestore/reference/rest/">The docs</a> are pretty comprehensive, but the API Explorer is key to navigating requests against your own datastore.</p><p>I built a small client that can create and retrieve documents from GCP; it is bare bones, and you can add methods that follow this pattern for updating and other operations as you wish. The Worker can then pull the config out of KV and use it to initialize the client.</p>
            <pre><code>import generateJWT from './generateJWT'

export class GCPClient {
  constructor(config) {
    this.url = config.url
    this.config = config
  }

  async authHeaders() {
    let token = await generateJWT(this.config)
    return { Authorization: `Bearer ${token}` }
  }

  async getDocument(collection, documentId) {
    let headers = await this.authHeaders()
    return fetch(`${this.url}/${collection}/${documentId}`, {
      headers,
    })
  }

  async postDocument(collection, doc, documentId = null) {
    let headers = await this.authHeaders()
    let fields = Object.entries(doc).reduce((acc, [k, v]) =&gt; ({ ...acc, [k]: { stringValue: v } }), {})
    let qs = ''
    if (documentId) {
      qs = `?documentId=${encodeURIComponent(documentId)}`
    }
    return fetch(`${this.url}/${collection}${qs}`, {
      headers,
      method: 'POST',
      body: JSON.stringify({
        fields,
      })
    })
  }

  async listDocuments(collection, nextPageToken) {
    let headers = await this.authHeaders()
    let qs = new URLSearchParams({
      fields: 'documents(fields,name),nextPageToken',
    })
    if (nextPageToken) qs.append('pageToken', nextPageToken)
    return fetch(`${this.url}/${collection}?${qs.toString()}`, {
      method: 'GET',
      headers,
    })
  }
}

// firebaseConfig is the KV namespace binding holding our JWT configuration
export async function buildGCPClient() {
  let config = await firebaseConfig.get('config', 'json')
  return new GCPClient(config)
}</code></pre>
            <p>I can also use this same client locally to run queries against the store. In that case, rather than grabbing the config from KV, I construct the client using the configuration file I created above. I also bind <code>node-fetch</code> to <code>global.fetch</code> in these scripts.</p>
            <pre><code>import fs from 'fs'
import nodeFetch from 'node-fetch'
import { GCPClient } from './lib/GCPClient'

// Firestore collection holding the reservations
const RESERVATIONS = 'reservations'

// Use the listDocuments endpoint to count all current reservations
async function getReservations(client) {
  let nextPageToken
  let count = 0
  do {
    let reservations = await client.listDocuments(RESERVATIONS, nextPageToken).then(response =&gt; response.json())
    count += (reservations.documents || []).length
    nextPageToken = reservations.nextPageToken
  } while (nextPageToken)

  return count
}

// Bind node-fetch so the client's fetch calls work outside the Workers runtime
global.fetch = nodeFetch

let config = JSON.parse(fs.readFileSync('./config/metadata.json'))
let client = new GCPClient(config)

getReservations(client)
    .then(console.log)
    .catch(console.error)</code></pre>
            
    <div>
      <h3>Conclusions</h3>
      <a href="#conclusions">
        
      </a>
    </div>
    <p>Specifically for this project, Workers fit the use case really well for a few reasons:</p><ul><li><p>We only intend to use this during the run-up to registration, so being able to re-deploy a function completely independent of the main configuration API is incredibly freeing, especially for smaller tweaks.</p></li><li><p>The lessons learned during this prototyping experience will prove extremely valuable as we implement the more permanent registration system.</p></li><li><p>Finally, even though our datastore is effectively centralized, using Workers means that all the requests to various APIs (our email service, logging service, and of course GCP) are made from the edge. Running at the edge leverages our network and keeps our auth data where we want it, while using Cloud Firestore guarantees immediate consistency and performant querying for our Workers running around the world.</p></li></ul><p>Building out this API using Workers was an eye-opening experience. We love any opportunity to use our own products, keeping us in touch with the experience and guiding our roadmap for future development. We’re also extremely excited to see what all of you do on Workers.dev!</p><hr /><p>Interested in deploying a Cloudflare Worker without setting up a domain on Cloudflare? We’re making it easier to get started building serverless applications with custom subdomains on <a href="https://workers.dev">workers.dev</a>. <i>If you’re already a Cloudflare customer, you can add Workers to your existing website</i> <a href="https://dash.cloudflare.com/workers"><i>here</i></a>.</p><p><a href="https://workers.dev">Reserve a workers.dev subdomain</a></p><hr /> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[API]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Programming]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3BNAxexzxUY0DMhpYjJllX</guid>
            <dc:creator>Ashley M Lewis</dc:creator>
        </item>
    </channel>
</rss>