
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Wed, 15 Apr 2026 19:47:41 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Announcing Pages support for monorepos, wrangler.toml, database integrations and more!]]></title>
            <link>https://blog.cloudflare.com/pages-workers-integrations-monorepos-nextjs-wrangler/</link>
            <pubDate>Thu, 04 Apr 2024 13:00:16 GMT</pubDate>
            <description><![CDATA[ Today, we’re launching four improvements to Pages that bring functionality previously restricted to Workers, with the goal of unifying the development experience between the two.  Support for monorepos, wrangler.toml, new additions to Next.js support and database integrations ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Pages <a href="/cloudflare-pages-ga">launched</a> in 2021 with the goal of empowering developers to go seamlessly from idea to production. With <a href="https://developers.cloudflare.com/pages/get-started/git-integration/#configure-your-deployment">built-in CI/CD</a>, <a href="https://developers.cloudflare.com/pages/configuration/preview-deployments/">Preview Deployments</a>, <a href="https://developers.cloudflare.com/pages/configuration/git-integration/">integration with GitHub and GitLab</a>, and support for all the most popular <a href="https://developers.cloudflare.com/pages/framework-guides/">JavaScript frameworks</a>, Pages lets you build and deploy both static and full-stack apps globally to our network in seconds.</p><p>Pages has superpowers like these that Workers does not have, and vice versa. Today you have to choose upfront whether to build a Worker or a Pages project, even though the two products largely overlap. That’s why during 2023’s <a href="/pages-and-workers-are-converging-into-one-experience">Developer Week</a>, we started bringing both products together to give developers the benefit of the best of both worlds. 
And it’s why we announced that, like Workers, Pages projects can now directly access <a href="https://developers.cloudflare.com/workers/configuration/bindings/">bindings</a> to Cloudflare services — using <a href="https://github.com/cloudflare/workerd">workerd</a> under the hood — even when using the local development server provided by a full-stack framework like <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-an-astro-site/">Astro</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/deploy-a-nextjs-site/">Next.js</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-nuxt-site/">Nuxt</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-qwik-site/">Qwik</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-remix-site/">Remix</a>, <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-solid-site/">SolidStart</a>, or <a href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-svelte-site/">SvelteKit</a>. Today, we’re thrilled to be launching some new improvements to Pages that bring functionality previously restricted to Workers. Welcome to the stage: monorepos, wrangler.toml, new additions to Next.js support, and database integrations!</p>
    <div>
      <h3>Pages now supports monorepos</h3>
      <a href="#pages-now-supports-monorepos">
        
      </a>
    </div>
    <p>Many development teams use monorepos – repositories that contain multiple apps, with each residing in its own subdirectory. This approach is extremely helpful when these apps share code.</p><p>Previously, the Pages CI/CD setup limited users to one repo per project. To use a monorepo with Pages, you had to <a href="https://developers.cloudflare.com/pages/get-started/direct-upload/">directly upload it</a> on your own, using the Wrangler CLI. If you did this, you couldn’t use Pages’ integration with GitHub or GitLab, or have Pages CI/CD handle builds and deployments. With Pages support for monorepos, development teams can trigger builds for their various projects with each push.</p><p><b>Manage builds and move fast</b>You can now include and exclude specific paths to watch for in each of your projects to avoid unnecessary builds from commits to your repo.</p><p>Let’s say a monorepo contains four subdirectories – a marketing app, an ecommerce app, a design library, and a package. The marketing app depends on the design library, while the ecommerce app depends on the design library and the package.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/435MhqmGH7CJ4dXIvMEbN2/2ec159aecf92ea18f5b686b75024484d/image3-7.png" />
            
            </figure><p>Updates to the design library should rebuild and redeploy both applications, but an update to the marketing app shouldn’t rebuild and deploy the ecommerce app. However, by default, any push you make to my-monorepo triggers a build for both projects regardless of which apps were changed. Using the include/exclude build controls, you can specify paths to build and ignore for your project to help you track dependencies and build more efficiently.</p><p><b>Bring your own tools</b>Already using tools like <a href="https://turbo.build/">Turborepo</a>, <a href="https://nx.dev/">NX</a>, and <a href="https://lerna.js.org/">Lerna</a>? No problem! You can also bring your favorite <a href="https://developers.cloudflare.com/pages/configuration/monorepos/#monorepo-management-tools">monorepo management tooling</a> to Pages to help manage your dependencies quickly and efficiently.</p><p>Whatever your tooling and however you’re set up, check out our <a href="https://developers.cloudflare.com/pages/configuration/monorepos/">documentation</a> to get started with your monorepo right out of the box.</p>
    <div>
      <h3>Configure Pages projects with wrangler.toml</h3>
      <a href="#configure-pages-projects-with-wrangler-toml">
        
      </a>
    </div>
    <p>Today, we’re excited to announce that you can now configure Pages projects using wrangler.toml — the same configuration file format that is already used for configuring Workers.</p><p>Previously, Pages projects had to be configured exclusively in the dashboard. This forced you to context switch from your development environment any time you made a configuration change, like adding an environment variable or <a href="https://developers.cloudflare.com/workers/configuration/bindings/">binding</a>. It also separated configuration from code, making it harder to know things like what bindings are being used in your project. If you were developing as a team, all the users on your team had to have access to your account to make changes – even if they had access to make changes to the source code via your repo.</p><p>With wrangler.toml, you can:</p><ul><li><p><b>Store your configuration file in source control.</b> Keep your configuration in your repo alongside the rest of your code.</p></li><li><p><b>Edit your configuration via your code editor.</b> Remove the need to switch back and forth between interfaces.</p></li><li><p><b>Write configuration that is shared across environments.</b> Define bindings and environment variables for local, preview, and production in one file.</p></li><li><p><b>Ensure better access control.</b> By using a configuration file in your repo, you can control who has access to make changes without giving access to your Cloudflare dashboard.</p></li></ul><p><b>Migrate existing projects</b>If you have an existing Pages project, we’ve added a new Wrangler CLI command that downloads your existing configuration and provides you with a valid <code>wrangler.toml</code> file.</p>
            <pre><code>$ npx wrangler@latest pages download config &lt;PROJECT_NAME&gt;</code></pre>
            <p>Run this command, add the wrangler.toml file that it generates to your project’s root directory, and then when you deploy, your project will be configured based on this configuration file.</p><p>If you are already using wrangler.toml to define your local development configuration, you can continue doing so. By default, your existing wrangler.toml file will continue to only apply to local development. When you run <code>wrangler pages deploy</code>, Wrangler will show you the additional fields that you must add in order for your configuration to apply to production and preview environments. Add these fields to your wrangler.toml, and then when you deploy your changes, the configuration you’ve defined in wrangler.toml will be used by your Pages project.</p><p>Refer to the <a href="https://developers.cloudflare.com/pages/functions/wrangler-configuration/">documentation</a> for more information on exactly what’s supported and how to leverage wrangler.toml in your development workflows.</p>
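            <p>For reference, a minimal Pages <code>wrangler.toml</code> might look something like the sketch below. The project name, output directory, and binding values are placeholders; <code>pages_build_output_dir</code> is the field that marks the file as Pages (rather than Workers) configuration.</p>
            <pre><code># wrangler.toml (hypothetical example)
name = "my-pages-project"
pages_build_output_dir = "./dist"
compatibility_date = "2024-01-01"

# Environment variables available to your Pages Functions
[vars]
API_HOST = "example.com"

# A KV namespace binding, applied to local development by default and to
# preview/production once the environment-specific fields are added
[[kv_namespaces]]
binding = "MY_KV"
id = "&lt;NAMESPACE_ID&gt;"</code></pre>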
    <div>
      <h3>Integrate Pages projects with your favorite database</h3>
      <a href="#integrate-pages-projects-with-your-favorite-database">
        
      </a>
    </div>
    <p>You can already connect to <a href="https://developers.cloudflare.com/d1/">D1</a>, Cloudflare’s <a href="https://www.cloudflare.com/developer-platform/products/d1/">serverless SQL database</a>, directly from Pages projects. And you can connect directly to your existing PostgreSQL database using <a href="https://developers.cloudflare.com/hyperdrive/">Hyperdrive</a>. Today, we’re making it even easier for you to connect third-party databases to Pages with just a couple of clicks. Pages now integrates directly with <a href="https://developers.cloudflare.com/workers/databases/native-integrations/neon/">Neon</a>, <a href="https://developers.cloudflare.com/workers/databases/native-integrations/planetscale/">PlanetScale</a>, <a href="https://developers.cloudflare.com/workers/databases/native-integrations/supabase/">Supabase</a>, <a href="https://developers.cloudflare.com/workers/databases/native-integrations/turso/">Turso</a>, <a href="https://developers.cloudflare.com/workers/databases/native-integrations/upstash/">Upstash</a>, and <a href="https://developers.cloudflare.com/workers/databases/native-integrations/xata/">Xata</a>!</p><p>Simply navigate to your Pages project’s settings, select your database provider, and we’ll automatically add <a href="https://developers.cloudflare.com/pages/functions/bindings/#environment-variables">environment variables</a> with the credentials needed to connect, as well as a <a href="https://developers.cloudflare.com/pages/functions/bindings/#secrets">secret</a> containing the API key from the provider.</p>
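            <p>Once connected, your Pages Functions can read the injected credentials from the environment. Here’s a minimal sketch (the variable name <code>DATABASE_URL</code> is an assumption; the exact names depend on the provider you select):</p>
            <pre><code>// functions/api/health.js (hypothetical example)
export async function onRequest({ env }) {
  // The integration injects connection credentials as environment
  // variables and secrets on the env object
  const configured = typeof env.DATABASE_URL === "string";
  return new Response(JSON.stringify({ databaseConfigured: configured }), {
    headers: { "Content-Type": "application/json" },
  });
}</code></pre>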
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2seOXZ1bXjWnBbHTV7zs4U/6e180084ddfcbcc3a661afdacb8b1dc9/image1-4.png" />
            
            </figure><p>Not ready to ship to production yet? You can deploy your changes to Pages’ preview environment alongside your staging database and test your deployment with its unique preview URL.</p><p><b>What’s coming up for integrations?</b>We’re just getting started with database integrations, with many more providers to come. In the future, we’re also looking to expand our integrations platform to include seamless set up when building other components of your app – think authentication and observability!</p><p>Want to bring your favorite tools to Cloudflare but don’t see the integration option? Want to build out your own integration?</p><p>Not only are we looking for <a href="https://docs.google.com/forms/d/e/1FAIpQLScUzm1bpWzR0SlJLGI80HchcAz9emPWG2lIXO107KNZTcfo-w/viewform">user input on new integrations</a> to add, but we’re also opening up the integrations platform to builders who want to submit their own products! We’ll be releasing step-by-step documentation and tooling to easily build and publish your own integration. If you’re interested in submitting your own integration, please fill out our <a href="https://docs.google.com/forms/d/e/1FAIpQLScUzm1bpWzR0SlJLGI80HchcAz9emPWG2lIXO107KNZTcfo-w/viewform">integration intake form</a> and we’ll be in touch!</p>
    <div>
      <h3>Improved Next.js Support for Pages</h3>
      <a href="#improved-next-js-support-for-pages">
        
      </a>
    </div>
    <p>With <a href="https://github.com/cloudflare/next-on-pages/releases">30 minor and patch releases</a> since the 1.0 launch of <a href="https://github.com/cloudflare/next-on-pages">next-on-pages</a> during Dev Week 2023, our <a href="https://nextjs.org/">Next.js</a> integration has been continuously maturing and keeping up with the evolution of Next.js. In addition to performance, compatibility, and bug fixes, we released three significant improvements.</p><p>First, the <a href="https://eslint.org/">ESLint</a> plugin <a href="https://www.npmjs.com/package/eslint-plugin-next-on-pages">eslint-plugin-next-on-pages</a> is a great way to catch and fix compatibility issues as you are writing your code, before you build and deploy applications. The plugin contains <a href="https://github.com/cloudflare/next-on-pages/tree/main/packages/eslint-plugin-next-on-pages/docs/rules">several rules</a> for the most common coding mistakes we see developers make, with more being added as we identify problematic scenarios.</p><p>Another noteworthy change is the addition of the <a href="https://github.com/cloudflare/next-on-pages/blob/3846730c4a0d12/packages/next-on-pages/README.md#cloudflare-platform-integration">getRequestContext()</a> API, which provides you with access to Cloudflare-specific resources and metadata about the request currently being processed by your application, allowing you, for example, to take a client’s location or browser preferences into account when generating a response.</p><p>Last but not least, we have completely <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/deploy-a-nextjs-site/">overhauled the local development workflow for Next.js</a> as well as other full-stack frameworks. 
Thanks to the new <a href="https://github.com/cloudflare/next-on-pages/tree/main/internal-packages/next-dev">setupDevPlatform()</a> API, you can now use the default development server <code>next dev</code>, with support for instant edit &amp; refresh experience, while also using D1, <a href="https://www.cloudflare.com/developer-platform/r2/">R2</a>, KV and other resources provided by the Cloudflare development platform. Want to take it for a quick spin? Use <a href="https://developers.cloudflare.com/pages/get-started/c3/">C3</a> to scaffold a new Next.js application with just one command.</p><p>To learn more about our Next.js integration, check out our <a href="https://developers.cloudflare.com/pages/framework-guides/nextjs/deploy-a-nextjs-site/">Next.js framework guide</a>.</p>
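            <p>Wiring this up is a one-time change to your Next.js configuration. As a sketch, assuming a <code>next.config.mjs</code> at the root of your project:</p>
            <pre><code>// next.config.mjs (hypothetical example)
import { setupDevPlatform } from "@cloudflare/next-on-pages/next-dev";

// Only set up the workerd-backed dev platform (D1, R2, KV bindings, etc.)
// when running the default `next dev` server
if (process.env.NODE_ENV === "development") {
  await setupDevPlatform();
}

/** @type {import('next').NextConfig} */
const nextConfig = {};

export default nextConfig;</code></pre>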
    <div>
      <h3>What’s next for the convergence of Workers and Pages?</h3>
      <a href="#whats-next-for-the-convergence-of-workers-and-pages">
        
      </a>
    </div>
    <p>While today’s launch represents just a few of the many upcoming additions to converge Pages and Workers, we also wanted to share a few milestones that are on the horizon, planned for later in 2024.</p><p><b>Pages features coming soon to Workers</b></p><ul><li><p><b>Workers CI/CD.</b> Later this year, we plan to bring the <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">CI/CD system</a> from Cloudflare Pages to Cloudflare Workers. Connect your repositories to Cloudflare and trigger builds for your Workers with every commit.</p></li><li><p><b>Serve static assets from Workers.</b> You will be able to deploy and serve static assets as part of Workers – just like you can with Pages today – and build Workers using full-stack frameworks! This will also extend to Workers for Platforms, allowing you to build platforms that let your customers deploy complete, full-stack applications that serve both dynamic and static assets.</p></li><li><p><b>Workers</b> <a href="https://developers.cloudflare.com/pages/configuration/preview-deployments"><b>preview URLs</b></a><b>.</b> Preview versions of your Workers with every change and share a unique URL with your team for testing.</p></li></ul><p><b>Workers features coming soon to Pages</b></p><ul><li><p><b>Add</b> <a href="https://developers.cloudflare.com/workers/observability/logging/tail-workers/"><b>Tail Workers</b></a> <b>to Pages projects.</b> Get observability into your Pages Functions by capturing <code>console.log()</code> messages, unhandled exceptions, and request metadata, and then forward the information to external destinations.</p></li><li><p><a href="https://developers.cloudflare.com/workers/observability/logging/logpush/"><b>Workers Trace Events Logpush</b></a><b>.</b> Push your Pages Functions logs to supported destinations like <a href="https://developers.cloudflare.com/r2/">R2</a>, Datadog, or any HTTP destination for long-term storage, auditing, and compliance.</p></li><li><p><a 
href="https://developers.cloudflare.com/workers/configuration/versions-and-deployments/gradual-deployments/"><b>Gradual Deployments</b></a><b>.</b> Gradually deploy new versions of your Pages Function to reduce risk when making changes to critical applications.</p></li></ul><p>You might also notice that the Pages and Workers interfaces in the Cloudflare dashboard will begin to look more similar through the rest of this year. These changes aren’t just superficial, or us porting over functionality from one product to another. Under the hood, we are unifying the way that Workers and Pages projects are composed and then deployed to our network, ensuring that as we add new products and features, they can work with both Pages and Workers on day one.</p><p>In the meantime, bring your monorepo, a wrangler.toml, and your favorite databases to Pages and let’s rock! Be sure to show off what you’ve built in the <a href="https://discord.cloudflare.com/">Cloudflare Developer Discord</a> or by giving us a shout at <a href="https://twitter.com/CloudflareDev">@CloudflareDev</a>.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">4P2K139AXugqOLE4sR5wIu</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
            <dc:creator>Igor Minar</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improved Cloudflare Workers testing via Vitest and workerd]]></title>
            <link>https://blog.cloudflare.com/workers-vitest-integration/</link>
            <pubDate>Fri, 15 Mar 2024 14:00:35 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce a new Workers Vitest integration - allowing you to write unit and integration tests via the popular testing framework, Vitest, that execute directly in our runtime, workerd ]]></description>
            <content:encoded><![CDATA[ <p>Today, we’re excited to announce a new Workers Vitest integration, allowing you to write unit and integration tests via the popular testing framework, <a href="https://vitest.dev/">Vitest</a>, that execute directly in our runtime, <a href="https://github.com/cloudflare/workerd">workerd</a>!</p><p>This integration provides you with the ability to test <b><i>anything</i></b> related to your Worker!</p><p>For the first time, you can write unit tests that run within the same <a href="https://github.com/cloudflare/workerd">runtime</a> that Cloudflare Workers run on in production, providing greater confidence that the behavior of your Worker in tests will be the same as when deployed to production. For integration tests, you can now write tests for Workers that are triggered by <a href="https://developers.cloudflare.com/workers/configuration/cron-triggers/">Cron Triggers</a> in addition to traditional <code>fetch()</code> events. You can also more easily test complex applications that interact with <a href="https://developers.cloudflare.com/kv/">KV</a>, <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a>, <a href="https://developers.cloudflare.com/d1/">D1</a>, <a href="https://developers.cloudflare.com/queues/">Queues</a>, <a href="https://developers.cloudflare.com/workers/configuration/bindings/about-service-bindings/">Service Bindings</a>, and more Cloudflare products.</p><p>For all of your tests, you have access to <a href="https://vitest.dev/guide/features.html">Vitest features</a> like snapshots, mocks, timers, and spies.</p><p>In addition to this increased testing functionality, you’ll also notice other developer experience improvements like hot-module-reloading, watch mode on by default, and per-test isolated storage. As you develop and edit your tests, they’ll automatically re-run without you having to restart your test runner.</p>
    <div>
      <h2>Get started testing Workers with Vitest</h2>
      <a href="#get-started-testing-workers-with-vitest">
        
      </a>
    </div>
    <p>The easiest way to get started with testing your Workers via Vitest is to start a new Workers project via our create-cloudflare tool:</p>
            <pre><code>npm create cloudflare@latest hello-world -- --type=hello-world</code></pre>
            <p>Running this command will scaffold a new project for you with the Workers Vitest integration already set up. An example unit test and integration test are also included.</p>
    <div>
      <h3>Manual install and setup instructions</h3>
      <a href="#manual-install-and-setup-instructions">
        
      </a>
    </div>
    <p>If you prefer to manually install and set up the Workers Vitest integration, begin by installing <code>@cloudflare/vitest-pool-workers</code> from npm:</p>
            <pre><code>$ npm install --save-dev @cloudflare/vitest-pool-workers</code></pre>
            <p><code>@cloudflare/vitest-pool-workers</code> has a peer dependency on a specific version of <code>vitest</code>. Modern versions of <code>npm</code> will install this automatically, but we recommend you install it explicitly too. Refer to the <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/write-your-first-test/">getting started guide</a> for the current supported version. If you’re using TypeScript, add <code>@cloudflare/vitest-pool-workers</code> to your <code>tsconfig.json</code>’s <code>types</code> to get types for the <code>cloudflare:test</code> module:</p>
            <pre><code>{
  "compilerOptions": {
    "module": "esnext",
    "moduleResolution": "bundler",
    "lib": ["esnext"],
    "types": [
      "@cloudflare/workers-types/experimental",
      "@cloudflare/vitest-pool-workers"
    ]
  }
}</code></pre>
            <p>Then, enable the pool in your Vitest configuration file:</p>
            <pre><code>// vitest.config.js
import { defineWorkersConfig } from "@cloudflare/vitest-pool-workers/config";

export default defineWorkersConfig({
  test: {
    poolOptions: {
      workers: {
        wrangler: { configPath: "./wrangler.toml" },
      },
    },
  },
});</code></pre>
            <p>After that, define a compatibility date of “2022-10-31” or later and enable the <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/#nodejs-compatibility-flag"><code>nodejs_compat</code> compatibility flag</a> in your <code>wrangler.toml</code>:</p>
            <pre><code># wrangler.toml
main = "src/index.ts"
compatibility_date = "2024-01-01"
compatibility_flags = ["nodejs_compat"]</code></pre>
            
    <div>
      <h2>Test anything exported from a Worker</h2>
      <a href="#test-anything-exported-from-a-worker">
        
      </a>
    </div>
    <p>With the new Workers Vitest Integration, you can test anything exported from your Worker in both unit and integration-style tests. Within these tests, you can also test connected resources like R2, KV, and Durable Objects, as well as applications involving multiple Workers.</p>
    <div>
      <h3>Writing unit tests</h3>
      <a href="#writing-unit-tests">
        
      </a>
    </div>
    <p>In a Workers context, a unit test imports and directly calls functions from your Worker, then asserts on their return values. Let’s say you have a Worker that looks like this:</p>
            <pre><code>export function add(a, b) {
  return a + b;
}

export default {
  async fetch(request) {
    const url = new URL(request.url);
    const a = parseInt(url.searchParams.get("a"));
    const b = parseInt(url.searchParams.get("b"));
    return new Response(add(a, b));
  }
}</code></pre>
            <p>After you’ve installed and set up the Workers Vitest integration, you can unit test this Worker by creating a new test file called <code>index.spec.js</code> with the following code:</p>
            <pre><code>import { env, createExecutionContext, waitOnExecutionContext } from "cloudflare:test";
import { describe, it, expect } from "vitest";
import worker, { add } from "./src";

describe("Hello World worker", () =&gt; {
  it("adds two numbers", async () =&gt; {
    expect(add(2, 3)).toBe(5);
  });
  it("sends request (unit style)", async () =&gt; {
    const request = new Request("http://example.com/?a=3&amp;b=4");
    const ctx = createExecutionContext();
    const response = await worker.fetch(request, env, ctx);
    await waitOnExecutionContext(ctx);
    expect(await response.text()).toMatchInlineSnapshot(`"7"`);
  });
});</code></pre>
            <p>Using the Workers Vitest integration, you can write unit tests like these for any of your Workers.</p>
    <div>
      <h3>Writing integration tests</h3>
      <a href="#writing-integration-tests">
        
      </a>
    </div>
    <p>While unit tests are great for testing individual parts of your application, integration tests assess multiple units of functionality, ensuring that workflows and features work as expected. These are usually more complex than unit tests, but provide greater confidence that your app works as expected. In the Workers context, an integration test sends HTTP requests to your Worker and asserts on the HTTP responses.</p><p>With the Workers Vitest Integration, you can run integration tests by importing <code>SELF</code> from the new <code>cloudflare:test</code> utility like this:</p>
            <pre><code>// test/index.spec.ts
import { SELF } from "cloudflare:test";
import { it, expect } from "vitest";
import "../src";

// an integration test using SELF
it("sends request (integration style)", async () =&gt; {
   const response = await SELF.fetch("http://example.com/?a=3&amp;b=4");
   expect(await response.text()).toMatchInlineSnapshot(`"7"`);
});</code></pre>
            <p>When using <code>SELF</code> for integration tests, your Worker code runs in the same context as the test runner. This means you can use mocks to control your Worker.</p>
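            <p>For example, the <code>cloudflare:test</code> module also exposes a declarative fetch mock with an undici-style API. A sketch of what controlling outbound requests from your tests might look like (the upstream URL here is illustrative):</p>
            <pre><code>// test/mock.spec.ts (hypothetical example)
import { fetchMock } from "cloudflare:test";
import { beforeAll, afterEach, it, expect } from "vitest";

beforeAll(() =&gt; {
  fetchMock.activate();
  // Fail fast on any outbound request that hasn't been mocked
  fetchMock.disableNetConnect();
});
afterEach(() =&gt; fetchMock.assertNoPendingInterceptors());

it("mocks an upstream API", async () =&gt; {
  fetchMock
    .get("https://api.example.com")
    .intercept({ path: "/data" })
    .reply(200, "mocked response");

  const response = await fetch("https://api.example.com/data");
  expect(await response.text()).toBe("mocked response");
});</code></pre>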
    <div>
      <h3>Testing different scenarios</h3>
      <a href="#testing-different-scenarios">
        
      </a>
    </div>
    <p>Whether you’re writing unit or integration tests, if your application uses Cloudflare Developer Platform products (e.g. KV, R2, <a href="https://www.cloudflare.com/developer-platform/products/d1/">D1</a>, Queues, or Durable Objects), you can test them. To demonstrate this, we have created a set of <a href="https://github.com/cloudflare/workers-sdk/tree/main/fixtures/vitest-pool-workers-examples">examples</a> to help get you started testing.</p>
    <div>
      <h2>Better testing experience === better testing</h2>
      <a href="#better-testing-experience-better-testing">
        
      </a>
    </div>
    <p>Having better testing tools makes it easier to test your projects right from the start, which leads to better overall quality and experience for your end users. The Workers Vitest integration provides that better experience, not just in terms of developer experience, but in making it easier to test your entire application.</p><p>The rest of this post will focus on <i>how</i> we built this new testing integration, diving into the internals of how Vitest works, the problems we encountered trying to get a framework to work within our runtime, and ultimately how we solved it and the improved DX that it unlocked.</p>
    <div>
      <h2>How Vitest traditionally works</h2>
      <a href="#how-vitest-traditionally-works">
        
      </a>
    </div>
    <p>When you start Vitest’s CLI, it first collects and sequences all your test files. By default, Vitest uses a “threads” pool, which spawns <a href="https://nodejs.org/api/worker_threads.html">Node.js worker threads</a> for isolating and running tests in parallel. Each thread gets a test file to run, dynamically requesting and evaluating code as needed. When the test runner imports a module, it sends a request to the host’s “Vite Node Server” which will either return raw JavaScript code transformed by Vite, or an external module path. If raw code is returned, it will be executed using the <a href="https://nodejs.org/api/vm.html#vmruninthiscontextcode-options"><code>node:vm</code> <code>runInThisContext()</code> function</a>. If a module path is returned, it will be imported using <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/import">dynamic <code>import()</code></a>. Transforming user code with Vite allows hot-module-reloading (HMR) — when a module changes, it’s invalidated in the module cache and a new version will be returned when it’s next imported.</p>
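            <p>The core of this mechanism can be sketched in a few lines of Node.js. The transformed source string below stands in for what the Vite Node Server would actually return:</p>
            <pre><code>// sketch.mjs (hypothetical example)
import vm from "node:vm";

// Pretend this string came back from the host's Vite transform step
const transformedCode = `(function (exports) {
  exports.add = (a, b) =&gt; a + b;
  return exports;
})({})`;

// Evaluate the raw code in the current context, as the test runner does
const mod = vm.runInThisContext(transformedCode);
console.log(mod.add(2, 3)); // 5</code></pre>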
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/bmF35IKzPZ9mTXK7lnTA8/c6924d8fd533ea835035fb67dbf0e0c4/Untitled-1.png" />
            
            </figure><p>Miniflare is a fully-local simulator for Cloudflare's Developer Platform. <a href="/miniflare/">Miniflare v2</a> provided a <a href="https://miniflare.dev/testing/vitest">custom environment</a> for Vitest that allowed you to run your tests <i>inside</i> the Workers sandbox. This meant you could import and call any function using Workers runtime APIs in your tests. You weren’t restricted to integration tests that just sent and received HTTP requests. In addition, this environment provided per-test isolated storage, automatically undoing any changes made at the end of each test. In Miniflare v2, this environment was relatively simple to implement. We’d already reimplemented Workers Runtime APIs in a Node.js environment, and could inject them using Vitest’s APIs into the global scope of the test runner.</p><p>By contrast, Miniflare v3 runs your Worker code <a href="/miniflare-and-workerd">inside the same <code>workerd</code> runtime</a> that Cloudflare uses in production. Running tests directly in <a href="https://github.com/cloudflare/workerd"><code>workerd</code></a> presented a challenge — <code>workerd</code> runs in its own process, separate from the Node.js worker thread, and it’s not possible to reference JavaScript classes across a process boundary.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3Ngw1zLJWN4jfTko19ZhOn/e052b51d4bf2f00d89826cd02d2a2ad5/Untitled--1--1.png" />
            
            </figure>
    <div>
      <h2>Solving the problem with custom pools</h2>
      <a href="#solving-the-problem-with-custom-pools">
        
      </a>
    </div>
    <p>Instead, we use <a href="https://vitest.dev/advanced/pool.html">Vitest’s custom pools</a> feature to run the test runner in Cloudflare Workers running locally with <a href="/workerd-open-source-workers-runtime"><code>workerd</code></a>. A pool receives test files to run and decides how to execute them. By executing the runner inside <code>workerd</code>, tests have direct access to Workers runtime APIs as they’re running in a Worker. WebSockets are used to send and receive serialisable RPC messages between the Node.js host and the <code>workerd</code> process. Note that here we’re running the exact same test runner code, originally designed for a Node.js context, inside a Worker. This means our Worker needs to provide Node’s built-in modules, support for dynamic code evaluation, and loading of arbitrary modules from disk with <a href="https://nodejs.org/api/esm.html#resolution-algorithm-specification">Node-resolution behavior</a>. The <a href="/workers-node-js-asynclocalstorage/"><code>nodejs_compat</code> compatibility flag</a> provides support for some of Node’s built-in modules, but does not solve our other problems. For that, we had to get creative…</p>
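The RPC layer can be pictured with a toy message shape like the following. The real protocol used by the pool differs; the essential idea is the id-matched pairing of serialisable requests and responses travelling over the WebSocket:

```typescript
// Hypothetical shape of the serialisable RPC messages exchanged between
// the Node.js host and the workerd process. Every request carries an id;
// the response with the matching id carries the result or an error.
type RpcRequest = { id: number; method: string; args: unknown[] };
type RpcResponse = { id: number; result?: unknown; error?: string };

function handle(req: RpcRequest): RpcResponse {
  // e.g. the host resolving a module specifier on behalf of the runner
  if (req.method === "resolveModule") {
    return { id: req.id, result: `/resolved/${req.args[0]}` };
  }
  return { id: req.id, error: `unknown method: ${req.method}` };
}

const res = handle({ id: 1, method: "resolveModule", args: ["lodash"] });
console.log(res.result); // "/resolved/lodash"
```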
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2xktF5P98mL5puD3UaQnZx/0a66f121c5cc631009df97c3b0f33dd7/Untitled--2--1.png" />
            
            </figure>
    <div>
      <h2>Dynamic code evaluation</h2>
      <a href="#dynamic-code-evaluation">
        
      </a>
    </div>
    <p>For <a href="https://developers.cloudflare.com/workers/runtime-apis/web-standards/#javascript-standards">security reasons</a>, the Cloudflare Workers runtime does not allow dynamic code evaluation via <code>eval()</code> or <code>new Function()</code>. It also requires all modules to be defined ahead-of-time before execution starts. The test runner doesn't know what code to run until we start executing tests, so without lifting these restrictions, we have no way of executing the raw JavaScript code transformed by Vite nor importing arbitrary modules from disk. Fortunately, code that is only meant to run locally – like tests – has a much more relaxed security model than deployed code. To support local testing and other development-specific use-cases such as <a href="https://vitejs.dev/guide/api-vite-runtime">Vite’s new Runtime API</a>, we added <a href="https://github.com/cloudflare/workerd/pull/1338">“unsafe-eval bindings”</a> and <a href="https://github.com/cloudflare/workerd/pull/1423">“module-fallback services”</a> to <code>workerd</code>.</p><p>Unsafe-eval bindings provide local-only access to the <code>eval()</code> function, and <code>new Function()</code>/<code>new AsyncFunction()</code>/<code>new WebAssembly.Module()</code> constructors. By exposing these through a binding, we retain control over which code has access to these features.</p>
            <pre><code>// Type signature for unsafe-eval bindings
interface UnsafeEval {
  eval(script: string, name?: string): unknown;
  newFunction(script: string, name?: string, ...args: string[]): Function;
  newAsyncFunction(script: string, name?: string, ...args: string[]): AsyncFunction;
  newWasmModule(src: BufferSource): WebAssembly.Module;
}</code></pre>
            <p>Using the unsafe-eval binding <code>eval()</code> method, we were able to implement a <a href="https://github.com/cloudflare/workers-sdk/blob/main/packages/vitest-pool-workers/src/worker/lib/node/vm.ts">polyfill for the required <code>vm.runInThisContext()</code></a> function. While we could also implement loading of arbitrary modules from disk using unsafe-eval bindings, this would require us to rebuild <code>workerd</code>’s module resolution system in JavaScript. Instead, we allow workers to be configured with module fallback services. If enabled, imports that cannot be resolved by <code>workerd</code> become HTTP requests to the fallback service. These include the specifier, referrer, and whether it was an <code>import</code> or <code>require</code>. The service may respond with a module definition, or a redirect to another location if the resolved location doesn’t match the specifier. Requests originating from synchronous <code>require</code>s will block the main thread until the module is resolved. The Workers Vitest pool’s <a href="https://github.com/cloudflare/workers-sdk/blob/main/packages/vitest-pool-workers/src/pool/module-fallback.ts">fallback service</a> implements <a href="https://nodejs.org/api/esm.html#resolution-algorithm">Node-like resolution</a> with Node-style <a href="https://nodejs.org/api/esm.html#interoperability-with-commonjs">interoperability between CommonJS and ES modules</a>.</p>
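As a rough illustration of that polyfill, here is how <code>vm.runInThisContext()</code> can be bridged onto the binding’s <code>eval()</code> method. The stand-in binding lets the sketch run outside <code>workerd</code>; it is not the actual workers-sdk implementation:

```typescript
// Bridging vm.runInThisContext() onto an unsafe-eval binding: the
// binding's eval() gives us what a global eval() would, but gated behind
// a capability the runtime controls.
interface UnsafeEval {
  eval(script: string, name?: string): unknown;
}

function makeRunInThisContext(unsafeEval: UnsafeEval) {
  return function runInThisContext(
    code: string,
    options?: { filename?: string },
  ): unknown {
    return unsafeEval.eval(code, options?.filename);
  };
}

// Stand-in binding for illustration; in workerd this comes from the env.
const fakeBinding: UnsafeEval = { eval: (script) => (0, eval)(script) };
const runInThisContext = makeRunInThisContext(fakeBinding);
console.log(runInThisContext("1 + 2")); // 3
```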
    <div>
      <h2>Durable Objects as test runners</h2>
      <a href="#durable-objects-as-test-runners">
        
      </a>
    </div>
    <p>Now that we can run and import arbitrary code, the next step is to get Vitest’s thread worker running inside <code>workerd</code>. Every incoming request has its own request context. To improve overall performance, I/O objects such as streams, request/response bodies and WebSockets created in one request context cannot be used from another. This means if we want to use a WebSocket for RPC between the pool and our <code>workerd</code> processes, we need to make sure the WebSocket is only used from one request context. To coordinate this, we define a singleton Durable Object that accepts the RPC connection and runs the tests. Functions using RPC such as resolving modules, reporting results and console logging will always use this singleton. We use <a href="https://github.com/cloudflare/miniflare/pull/639">Miniflare’s “magic proxy” system</a> to get a reference to the singleton’s stub in Node.js, and send a WebSocket upgrade request directly to it. After adding a few more Node.js polyfills, and a basic <code>cloudflare:test</code> module to provide access to bindings and a function for creating <code>ExecutionContext</code>s, we’re able to write basic Workers unit tests! 🎉</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2vRactoOowJXYwInIONXC9/68880e0b835a82a299d434f859607a1c/Vitest-Pool-Workers-Architecture--4-.png" />
            
            </figure>
    <div>
      <h2>Integration tests with hot-module-reloading</h2>
      <a href="#integration-tests-with-hot-module-reloading">
        
      </a>
    </div>
    <p>In addition to unit tests, we support integration testing with a special <code>SELF</code> service binding in the <code>cloudflare:test</code> module. This points to a special <code>export default { fetch(...) {...} }</code> handler which uses Vite to import your Worker’s <code>main</code> module.</p><p>Using Vite’s transformation pipeline here means your handler gets hot-module-reloading (HMR) for free! When code is updated, the module cache is invalidated, tests are rerun, and subsequent requests will execute with new code. The same approach of wrapping user code handlers applies to Durable Objects too, providing the same HMR benefits.</p><p>Integration tests can be written by calling <code>SELF.fetch()</code>, which will dispatch a <code>fetch()</code> event to your user code in the same global scope as your test, but under a different request context. This means global mocks apply to your Worker’s execution, as do request context lifetime restrictions. In particular, if you forget to call <code>ctx.waitUntil()</code>, you’ll see an appropriate error message. This wouldn’t be the case if you called your Worker’s handler directly in a unit test, as you’d be running under the runner singleton’s Durable Object request context, whose lifetime is automatically extended.</p>
            <pre><code>// test/index.spec.ts
import { SELF } from "cloudflare:test";
import { it, expect } from "vitest";
import "../src/index";

it("sends request", async () =&gt; {
  const response = await SELF.fetch("https://example.com");
  expect(await response.text()).toMatchInlineSnapshot(`"body"`);
});</code></pre>
            
    <div>
      <h2>Isolated per-test storage</h2>
      <a href="#isolated-per-test-storage">
        
      </a>
    </div>
    <p>Most Workers applications will have at least one binding to a Cloudflare storage service, such as KV, R2 or D1. Ideally, tests should be self-contained and runnable in any order or on their own. To make this possible, writes to storage need to be undone at the end of each test, so reads by other tests aren’t affected. Whilst it’s possible to do this manually, it can be tricky to keep track of all writes and undo them in the correct order. For example, take the following two functions:</p>
            <pre><code>// helpers.ts
interface Env {
  NAMESPACE: KVNamespace;
}
// Get the current list stored in a KV namespace
export async function get(env: Env, key: string): Promise&lt;string[]&gt; {
  return await env.NAMESPACE.get(key, "json") ?? [];
}
// Add an item to the end of the list
export async function append(env: Env, key: string, item: string) {
  const value = await get(env, key);
  value.push(item);
  await env.NAMESPACE.put(key, JSON.stringify(value));
}</code></pre>
            <p>If we wanted to test these functions, we might write something like below. Note we have to keep track of all the keys we might write to, and restore their values at the end of tests, even if those tests fail.</p>
            <pre><code>// helpers.spec.ts
import { env } from "cloudflare:test";
import { beforeAll, beforeEach, afterEach, it, expect } from "vitest";
import { get, append } from "./helpers";

let startingList1: string | null;
let startingList2: string | null;
beforeEach(async () =&gt; {
  // Store values before each test
  startingList1 = await env.NAMESPACE.get("list 1");
  startingList2 = await env.NAMESPACE.get("list 2");
});
afterEach(async () =&gt; {
  // Restore starting values after each test
  if (startingList1 === null) {
    await env.NAMESPACE.delete("list 1");
  } else {
    await env.NAMESPACE.put("list 1", startingList1);
  }
  if (startingList2 === null) {
    await env.NAMESPACE.delete("list 2");
  } else {
    await env.NAMESPACE.put("list 2", startingList2);
  }
});

beforeAll(async () =&gt; {
  await append(env, "list 1", "one");
});

it("appends to one list", async () =&gt; {
  await append(env, "list 1", "two");
  expect(await get(env, "list 1")).toStrictEqual(["one", "two"]);
});

it("appends to two lists", async () =&gt; {
  await append(env, "list 1", "three");
  await append(env, "list 2", "four");
  expect(await get(env, "list 1")).toStrictEqual(["one", "three"]);
  expect(await get(env, "list 2")).toStrictEqual(["four"]);
});</code></pre>
            <p>This is slightly easier with the recently introduced <a href="https://vitest.dev/api/#ontestfinished"><code>onTestFinished()</code> hook</a>, but you still need to remember which keys were written to, or enumerate them at the start/end of tests. You’d also need to manage this for KV, R2, Durable Objects, caches and any other storage service you used. Ideally, the testing framework should just manage this all for you.</p><p>That’s exactly what the Workers Vitest pool does with the <code>isolatedStorage</code> option which is enabled by default. Any writes to storage performed in a test are automagically undone at the end of the test. To support seeding data in <code>beforeAll()</code> hooks, including those in nested <code>describe()</code>-blocks, a stack is used. Before each suite or test, a new frame is pushed to the storage stack. All writes performed by the test or associated <code>beforeEach()</code>/<code>afterEach()</code> hooks are written to the frame. After each suite or test, the top frame is popped from the storage stack, undoing any writes.</p>
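The stack behaviour can be sketched with a toy in-memory store. The real implementation snapshots <code>.sqlite</code> files on disk, as described below, but the push/pop bookkeeping is the same idea:

```typescript
// Toy sketch of the isolated-storage stack: each frame snapshots the
// store before a suite/test, and popping a frame restores that snapshot,
// undoing any writes made in between.
class StackedStorage {
  private store = new Map<string, string>();
  private frames: Map<string, string>[] = [];

  push(): void {
    this.frames.push(new Map(this.store)); // snapshot current state
  }
  pop(): void {
    this.store = this.frames.pop()!; // restore snapshot, undoing writes
  }
  put(key: string, value: string): void {
    this.store.set(key, value);
  }
  get(key: string): string | undefined {
    return this.store.get(key);
  }
}

const storage = new StackedStorage();
storage.push();                        // frame for beforeAll() seeding
storage.put("list 1", '["one"]');
storage.push();                        // frame for an individual test
storage.put("list 1", '["one","two"]');
storage.pop();                         // test ends: its write is undone
console.log(storage.get("list 1"));    // '["one"]'
```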
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4Ixe0KPm6lrn7dvt7N1AhY/525418edee96350e76dffebf7c95895d/Untitled--3--1.png" />
            
            </figure><p>Miniflare implements simulators for storage services <a href="https://github.com/cloudflare/miniflare/pull/656">on top of Durable Objects</a> with <a href="https://github.com/cloudflare/miniflare/discussions/525">a separate blob store</a>. When running locally, <code>workerd</code> uses SQLite for Durable Object storage. To implement isolated storage, we implement an on-disk stack of <code>.sqlite</code> database files by backing up the databases when “pushing”, and restoring backups when “popping”. Blobs stored in the separate store are retained through stack operations, and cleaned up at the end of each test run. Whilst this works, it involves copying lots of <code>.sqlite</code> files. Looking ahead, we’d like to explore using SQLite <a href="https://www.sqlite.org/lang_savepoint.html"><code>SAVEPOINTS</code></a> for a more efficient solution.</p>
    <div>
      <h2>Declarative request mocking</h2>
      <a href="#declarative-request-mocking">
        
      </a>
    </div>
    <p>In addition to storage, most Workers will make outbound <code>fetch()</code> requests. For tests, it’s often useful to mock responses to these requests. Miniflare already allows you to specify an <a href="https://undici.nodejs.org/#/docs/api/MockAgent"><code>undici</code> <code>MockAgent</code></a> to route all requests through. The <code>MockAgent</code> class provides a declarative interface for specifying requests to mock and the corresponding responses to return. This API is relatively simple, whilst being flexible enough for advanced use cases. We provide an instance of <code>MockAgent</code> as <code>fetchMock</code> in the <code>cloudflare:test</code> module.</p>
            <pre><code>import { fetchMock } from "cloudflare:test";
import { beforeAll, afterEach, it, expect } from "vitest";

beforeAll(() =&gt; {
  // Enable outbound request mocking...
  fetchMock.activate();
  // ...and throw errors if an outbound request isn't mocked
  fetchMock.disableNetConnect();
});
// Ensure we matched every mock we defined
afterEach(() =&gt; fetchMock.assertNoPendingInterceptors());

it("mocks requests", async () =&gt; {
  // Mock the first request to `https://example.com`
  fetchMock
    .get("https://example.com")
    .intercept({ path: "/" })
    .reply(200, "body");

  const response = await fetch("https://example.com/");
  expect(await response.text()).toBe("body");
});</code></pre>
            <p>To implement this, we bundled a stripped down version of <code>undici</code> containing just the <code>MockAgent</code> code. We then <a href="https://github.com/cloudflare/workers-sdk/blob/main/packages/vitest-pool-workers/src/worker/fetch-mock.ts">built a custom <code>undici</code> <code>Dispatcher</code></a> that used the Worker’s global <code>fetch()</code> function instead of <code>undici</code>’s built-in HTTP implementation based on <a href="https://github.com/nodejs/llhttp"><code>llhttp</code></a> and <a href="https://nodejs.org/api/net.html"><code>node:net</code></a>.</p>
    <div>
      <h2>Testing Durable Objects directly</h2>
      <a href="#testing-durable-objects-directly">
        
      </a>
    </div>
    <p>Finally, Miniflare v2’s custom Vitest environment provided support for accessing the instance methods and state of Durable Objects in tests directly. This allowed you to unit test Durable Objects like any other JavaScript class—you could mock particular methods and properties, or immediately call specific handlers like <code>alarm()</code>. To implement this in <code>workerd</code>, we rely on our existing wrapping of user Durable Objects for Vite transforms and hot-module reloading. When you call the <code>runInDurableObject(stub, callback)</code> function from <code>cloudflare:test</code>, we store <code>callback</code> in a global cache and send a special <code>fetch()</code> request to <code>stub</code> which is intercepted by the wrapper. The wrapper executes the <code>callback</code> in the request context of the Durable Object, and stores the result in the same cache. <code>runInDurableObject()</code> then reads from this cache, and returns the result.</p><p>Note that this assumes the Durable Object is running in the same isolate as the <code>runInDurableObject()</code> call. While this is true for same-Worker Durable Objects running locally, it means Durable Objects defined in auxiliary workers can’t be accessed directly.</p>
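The callback-cache trick can be sketched with a plain class standing in for a real Durable Object and its stub. This is illustrative only, not the actual workers-sdk implementation:

```typescript
// Sketch of the runInDurableObject() mechanism: stash the callback in a
// shared cache, send a marker request to the object, and have the
// wrapper run the callback with the instance in scope.
type Callback = (instance: Counter) => unknown;
const callbackCache = new Map<number, { callback?: Callback; result?: unknown }>();
let nextId = 0;

class Counter {
  value = 0;
  // The "wrapper" fetch handler: intercepts marker requests and runs the
  // cached callback against `this` (the live instance).
  fetch(request: { markerId?: number }): void {
    if (request.markerId !== undefined) {
      const entry = callbackCache.get(request.markerId)!;
      entry.result = entry.callback!(this);
    }
  }
}

function runInDurableObject(stub: Counter, callback: Callback): unknown {
  const id = nextId++;
  callbackCache.set(id, { callback });
  stub.fetch({ markerId: id }); // the "special fetch() request"
  const { result } = callbackCache.get(id)!;
  callbackCache.delete(id);
  return result;
}

const stub = new Counter();
const result = runInDurableObject(stub, (instance) => {
  instance.value += 5;
  return instance.value;
});
console.log(result); // 5
```

The shared cache is what requires the Durable Object to live in the same isolate as the caller, which is why objects in auxiliary workers can’t be accessed this way.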
    <div>
      <h2>Try it out!</h2>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>We are excited to release the <code>@cloudflare/vitest-pool-workers</code> package on npm, and to provide an improved testing experience for you.</p><p>Make sure to read the <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/write-your-first-test/">Write your first test guide</a> and begin writing unit and integration tests today! If you’ve been writing tests using one of our previous options, our <code>unstable_dev</code> <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/migrate-from-unstable-dev/">migration guide</a> or our Miniflare 2 <a href="https://developers.cloudflare.com/workers/testing/vitest-integration/get-started/migrate-from-miniflare-2/">migration guide</a> should explain key differences and help you move your tests over quickly.</p><p>If you run into issues or have suggestions for improvements, please <a href="https://github.com/cloudflare/workers-sdk/issues/new/choose">file an issue</a> in our GitHub repo or reach out via our <a href="https://discord.com/invite/cloudflaredev">Developer Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Testing]]></category>
            <guid isPermaLink="false">P0mpqczsiU6cJvsOQWpbi</guid>
            <dc:creator>Brendan Coll</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Better debugging for Cloudflare Workers, now with breakpoints]]></title>
            <link>https://blog.cloudflare.com/debugging-cloudflare-workers/</link>
            <pubDate>Tue, 28 Nov 2023 14:00:20 GMT</pubDate>
            <description><![CDATA[ We provide many tools to help you debug Cloudflare Workers; from your local environment all the way into production. In this post, we highlight some of the tools we currently offer, and do a deep dive into one specific area - breakpoint debugging - a tool we recently added into our workerd runtime ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2kmdQgGjQDnMeJ0amrvTDS/9577ec3eefd6e91d3e9a3aeb76e9f299/Debugging-1.png" />
            
            </figure><p>As developers, we’ve all experienced times when our code doesn’t work like we expect it to. Whatever the root cause is, being able to quickly dive in, diagnose the problem, and ship a fix is invaluable.</p><p>If you’re developing with Cloudflare Workers, we provide many tools to help you debug your applications; from your local environment all the way into production. This additional insight helps save you time and resources and provides visibility into how your application actually works — which can help you optimize and refactor code even before it goes live.</p><p>In this post, we’ll explore some of the tools we currently offer, and do a deep dive into one specific area — breakpoint debugging — looking at not only how to use it, but how we recently implemented it in our runtime, <a href="https://github.com/cloudflare/workerd">workerd</a>.</p>
    <div>
      <h2>Available Debugging Tools</h2>
      <a href="#available-debugging-tools">
        
      </a>
    </div>
    
    <div>
      <h3>Logs</h3>
      <a href="#logs">
        
      </a>
    </div>
    <p><code>console.log</code>. It might be the simplest debugging tool available to a developer, but don’t underestimate it. Built into the Cloudflare runtime is Node-like logging, which provides detailed, color-coded logs. Locally, you can view these logs in a terminal window, and they will look like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6FOAbByfvb94W0iinSF3dp/47b7086fcbf1ff54a39ae39fb09e6f95/image5-2.png" />
            
            </figure><p>Outside local development, once your Worker is deployed, <code>console.log</code> statements are visible via the Real-time Logs interface in the Cloudflare Dashboard or via the Workers CLI tool, <a href="https://developers.cloudflare.com/workers/wrangler/install-and-update/">Wrangler</a>, using the <a href="https://developers.cloudflare.com/workers/wrangler/commands/#tail"><code>wrangler tail</code></a> command. Each log that comes through <code>wrangler tail</code> is structured JSON, and the command has options to filter and search incoming logs to make results as relevant as possible.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PA8wVYsgB1hkhz6zyhk97/17410babcdcbfef6bc8ef74a261b7c12/image2-4.png" />
            
            </figure><p>If you’d like to send these logs to third-parties for processing and storage, you can leverage <a href="https://developers.cloudflare.com/workers/observability/logpush/">Workers Trace Events Logpush</a> which supports a variety of <a href="https://developers.cloudflare.com/logs/get-started/enable-destinations/">destinations</a>.</p>
    <div>
      <h3>DevTools</h3>
      <a href="#devtools">
        
      </a>
    </div>
    <p>In addition to logging, you can also leverage <a href="https://github.com/cloudflare/workers-sdk/tree/main/packages/wrangler-devtools">our implementation</a> of <a href="https://developer.chrome.com/docs/devtools/overview/">Chrome’s DevTools</a> to do things like view and debug network requests, take memory heap snapshots, and monitor CPU usage.</p><p>This interactive tool provides even further insight and information about your Cloudflare Workers, and can be started from within Wrangler by running <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev"><code>wrangler dev</code></a> and pressing <b>[d]</b> once the dev server is spun up. It can also be accessed by the editor that is built into the <a href="https://dash.cloudflare.com/login?redirect_uri=https%3A%2F%2Fdash.cloudflare.com%2F%3Faccount%3Dworkers">Cloudflare Dashboard</a> or the <a href="https://workers.new">Workers Playground</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5FjYb3g7ceyAbIa5gmgPPb/3f94eda9e0ccb498daed4c1fc01b40bb/image6-1.png" />
            
            </figure>
    <div>
      <h3>Breakpoints</h3>
      <a href="#breakpoints">
        
      </a>
    </div>
    <p>Breakpoints allow developers to pause code execution at specific points (lines) to evaluate what is happening. This is great for situations like race conditions, or times when your code isn’t behaving as expected and you don’t know exactly why. Breakpoints let you walk through your code line by line to see how it behaves.</p><p>You can get started with breakpoint debugging from within the Wrangler CLI by running <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev"><code>wrangler dev</code></a> and pressing <b>[d]</b> to open up a DevTools debugger session. If you prefer to debug via your IDE, we support VSCode and WebStorm.</p><p><b>Setting up VSCode</b></p><p>To set up VSCode to debug Cloudflare Workers with breakpoints, you’ll need to create a new <code>.vscode/launch.json</code> file with the following content:</p>
            <pre><code>{
  "configurations": [
    {
      "name": "Wrangler",
      "type": "node",
      "request": "attach",
      "port": 9229,
      "cwd": "/",
      "resolveSourceMapLocations": null,
      "attachExistingChildren": false,
      "autoAttachChildProcesses": false
    }
  ]
}</code></pre>
            <p>Once you’ve created this configuration in <code>launch.json</code>, open your project in VSCode. Open a new terminal window from VSCode, and run <code>npx wrangler dev</code> to start a local dev server.</p><p>At the top of the <b>Run &amp; Debug</b> panel, you should see an option to select a configuration. Choose <b>Wrangler</b>, and select the play icon. You should see <b>Wrangler: Remote Process [0]</b> show up in the Call Stack panel on the left. Go back to a <b>.js</b> or <b>.ts</b> file in your project and add at least one breakpoint.</p><p>Open your browser and go to the Worker’s local URL (default <a href="http://127.0.0.1:8787">http://127.0.0.1:8787</a>). The breakpoint should be hit, and you should see details about your code at the specified line.</p><p><b>Setting up WebStorm</b></p><p>To set up WebStorm with breakpoint debugging, create a new “Attach to Node.js/Chrome” Debug Configuration, setting the port to <code>9229</code>:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4DTxZCPd3CPizjkeZoSeKv/17b9aeb346b798d02755b7a463485a90/image4-2.png" />
            
            </figure><p>Run <code>npx wrangler dev</code> to start a local dev server, then start the Debug Configuration:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4TDefT0FNe3cWNuPvTzxSK/7656e30ced53a2f15d504c74703d438f/Screenshot-2023-11-28-at-10.07.57.png" />
            
            </figure><p>Add a breakpoint, then open your browser and go to the Worker’s local URL (default <a href="http://127.0.0.1:8787">http://127.0.0.1:8787</a>). The breakpoint should be hit, and you should see details about your code at the specified line.</p>
    <div>
      <h2>How we enabled breakpoint debugging via workerd</h2>
      <a href="#how-we-enabled-breakpoint-debugging-via-workerd">
        
      </a>
    </div>
    <p>Both <a href="/workerd-open-source-workers-runtime/">workerd</a> and Cloudflare Workers embed <a href="https://v8.dev/">V8</a> to run workers code written in JavaScript and WASM. V8 is a component of the world’s most widely used web browser today, <a href="https://www.google.com/chrome/">Google Chrome</a>, and it is also widely embedded into open source projects like <a href="https://nodejs.org/">Node.js</a>.</p><p>The Google Chrome team has created a set of web developer tools, <a href="https://developer.chrome.com/docs/devtools/">Chrome DevTools</a>, that are built directly into the browser. These provide a wide range of features for inspecting, debugging, editing, and optimizing web pages. Chrome DevTools are exposed through a UI in Chrome that talks to the components of the browser, such as V8, using the <a href="https://chromedevtools.github.io/devtools-protocol/">Chrome DevTools Protocol</a> (CDP). The protocol uses JSON-RPC transmitted over a WebSocket to exchange messages and notifications between clients, like the DevTools UI, and the components of Chrome. Within the Chrome DevTools Protocol are domains (DOM, Debugger, Media) that group related commands by functionality; each domain can be implemented by a different component of Chrome.</p><p>V8 supports the following CDP domains:</p><ul><li><p>Runtime</p></li><li><p>Debugger</p></li><li><p>Profiler</p></li><li><p>HeapProfiler</p></li></ul><p>These domains are available to all projects that embed V8, including workerd, so long as the embedding application is able to route messages between a DevTools client and V8. DevTools clients use the Debugger domain to implement debugging functionality. The Debugger domain exposes all the commands to debug an application, such as setting breakpoints. 
It also sends debugger events, like hitting a breakpoint, up to DevTools clients, so they can present the state of the script in a debugger UI.</p><p>While workerd has supported CDP since its first release, support for the Debugger domain is new. The Debugger domain differs from the other domains exposed by V8 because it requires the ability to suspend the execution of a script whilst it is being debugged. This presents a complication for introducing breakpoint debugging in workerd, because workerd runs each Worker in a V8 isolate in which there is just a single thread that receives incoming requests and runs the scripts associated with them.</p><p>Why is this a problem? Workerd uses an event-driven programming model, and its single thread is responsible both for responding to incoming requests and for running JavaScript / WASM code. In practice, this is implemented via an event loop that sits at the bottom of the call stack, sending and receiving network messages and calling event handlers that run JavaScript code. The thread needs to fall back into the event loop after running event handlers to be able to process network messages. However, the V8 API for handling breakpoints expects execution to be suspended within a method implemented by the embedder that is called from V8 when a breakpoint is hit. This method is called from the event handler that is running JavaScript in V8. Unfortunately, this prevents the workerd thread from falling back into the event loop and processing any incoming network events, including all CDP commands relating to debugging. So if a client asks to resume execution by sending a CDP command, it cannot be relayed to the executing thread, because that thread is unable to fall back into the event loop whilst paused at a breakpoint.</p><p>We solved this event processing problem by adding an I/O thread to workerd. 
The I/O thread handles sending and receiving CDP messages, because the thread executing JavaScript can be suspended due to hitting a breakpoint or a JavaScript <code>debugger</code> statement. The I/O thread wakes the JavaScript thread when CDP commands arrive and also handles sending responses back to the CDP client. Conceptually, this was not difficult, but it required some careful synchronization to avoid dropped messages.</p>
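For concreteness, this is the shape of the JSON-RPC traffic involved. <code>Debugger.enable</code> and <code>Debugger.setBreakpointByUrl</code> are real CDP methods; a real client would send these over the inspector WebSocket rather than just serialising them:

```typescript
// CDP commands a debugger client sends when attaching and setting a
// breakpoint. Each command carries an id; the response with the matching
// id tells the client whether the command succeeded.
const enable = { id: 1, method: "Debugger.enable" };
const setBreakpoint = {
  id: 2,
  method: "Debugger.setBreakpointByUrl",
  // lineNumber is zero-based; the url identifies the parsed script
  params: { lineNumber: 41, url: "file:///absolute/path/to/generated.js" },
};

// In a real client: ws.send(JSON.stringify(setBreakpoint));
const wire = [enable, setBreakpoint].map((c) => JSON.stringify(c));
console.log(JSON.parse(wire[1]).method); // "Debugger.setBreakpointByUrl"
```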
    <div>
      <h2>Use the Source</h2>
      <a href="#use-the-source">
        
      </a>
    </div>
    <p>When debugging, JavaScript developers expect to see their source code in the debugger. For this to work, the embedded V8 needs to be able to locate sources. It is common for JavaScript code to be generated either by combining and minifying multiple JavaScript sources, or by transpiling to JavaScript from another language, such as <a href="https://www.typescriptlang.org/">TypeScript</a>, <a href="https://dart.dev/">Dart</a>, <a href="https://coffeescript.org/">CoffeeScript</a>, or <a href="https://elm-lang.org/">Elm</a>. To render the source code in the debugger in its original form, the embedded V8 needs to know 1) where the source code came from and 2) how any given line of JavaScript visible to V8 maps back to the original sources before any transformation was applied. The standard solution to this problem is to embed a <a href="https://firefox-source-docs.mozilla.org/devtools-user/debugger/how_to/use_a_source_map/index.html">source map</a> into the JavaScript code that the JavaScript engine runs. The embedding is performed through a special comment in the JavaScript itself:</p><p><code>//# sourceMappingURL=generated.js.map</code></p><p>This source map’s URL is resolved relative to the source URL. This can be set when instantiating a source file with the V8 API, or via another special comment:</p><p><code>//# sourceURL=file:///absolute/path/to/generated.js</code></p><p>An example source map looks something like this:</p>
            <pre><code>{
  "version": 3,
  "sources": ["../src/index.ts"],
  "sourcesContent": ["interface Env { ... }\n\nexport default ..."],
  "mappings": ";AAIA,IAAO,mBAA8B;AAAA,EACjC,MAAM,MAAM,SAAS,KAAK,KAAK;...",
  "names": []
}</code></pre>
            <p>Each of the relative paths in <code>sources</code> is resolved relative to the source map’s fully-qualified URL. When DevTools connects to V8 and enables the Debugger domain, V8 will send information on all parsed scripts, including the source map’s fully-qualified URL. In our example, this would be <code>file:///absolute/path/to/generated.js.map</code>. DevTools needs to fetch this URL along with source URLs to perform source mapping. Unfortunately, our patched version of DevTools is hosted at <a href="https://devtools.devprod.cloudflare.dev/">https://devtools.devprod.cloudflare.dev/</a>, and browsers prohibit fetching <code>file://</code> URLs from non-<code>file://</code> origins for security reasons. However, we need to use <code>file://</code> URLs so IDEs like Visual Studio Code can match up source files from source maps to files on disk. To get around this, we used Wrangler's inspector proxy to rewrite the CDP script-parsed messages sent by V8 to use a different protocol if the <code>User-Agent</code> of the inspector WebSocket handshake is a browser.</p><p>Now that we can set breakpoints and fetch source maps, DevTools works as normal. When a user tries to set a breakpoint in an original source file, DevTools will use the map’s <code>mappings</code> to find the location in the generated JavaScript file and set a breakpoint there. This is the inverse of the problem of source mapping error stack traces. When V8 hits this JavaScript breakpoint, DevTools will pause on the location in the original source file. Stepping through the source file requires mapping the stepped-over segment to generated code, sending the step-over command to V8, then mapping back the new paused location in generated code to the original source file.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4biqz5FNB0gTzSFbvM3O9B/5a9d60f06cab849c4b56a14b164600d4/image3-5.png" />
            
            </figure><p><i>(sequence diagram showing process of setting, hitting, and stepping over breakpoints) (</i><a href="https://mermaid.live/edit#pako:eNq1VUtvEzEQ_iuWTyClG1DUA6sqh6pCQmpLlRJxqHNw1pONidde_AitVvnvjOPdNgmbEgTsyfbMfN88dxpaGAE0pw6-B9AFXEleWl4xTfDjhTeWTB3YdK-59bKQNdeeXMH6izHK_Sr5arkuVZ_ND2NXYEUSdABn43FnkePjPJQl2Aw0nytImp0YNVuEnCRJez3rx3CFlbW_49aBuJjbcUOcCbaAG15PJ9eIQRdSQT4cZlk2lFrAY_bNZRWvGR0QfCObA36k6bz-I5rkZkt0EYIU4wOK3mTcgo-WmTJctOcJJOzEE6w6DX4ngy8BNMTDo48ADWOMrsE6aXQ85qOtNaOdfWyBPdt78OTSAl_VRmofvXmTEuhd_uHtKQV24F8ALp-mVu3F1FOZGJOSGm5DNY9Q56NjMR7pkl0_GjJ_Zv8UNen7_HyUv8t7iV_NYxJp44FYWS49MYttwmKWtHBkEkfL-VM6tubhuYkKrtRHHEVAigd0ZzYgUbCUO3mLotc9n3Wu4yzLNUcf94awv7UP6oeyFM_d1r1jLeGhJp_X3dz_pvqoHHXbeVnJ-lo67MUHvHj8Y8QC7deabNpqz04veD-3BReqLgwB_yIxk13Ivyjx_yrWsSiZpgNaga24FLgEmvjKqF9CBfgXwKOABQ_KM8r0BlV58Ob-SRc09zbAgIZaIGa7M7pHEBLXxk3aK9v1svkJ1EsiBw"><i>mermaid URL</i></a><i>)</i></p>
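<p>Circling back to the <code>file://</code> workaround above: a rough sketch of the proxy-side rewrite might look like the following. The message shape follows CDP’s <code>Debugger.scriptParsed</code> event, but the function name and the replacement scheme here are our own assumptions, not Wrangler’s actual code.</p>

```javascript
// Hypothetical sketch of the inspector proxy rewrite: when the client is a
// browser, swap the file:// scheme in script URLs and source map URLs for a
// custom scheme the browser is allowed to fetch. The "proxied-file:" scheme
// name is made up for illustration.
function rewriteScriptParsed(message, clientIsBrowser) {
  if (!clientIsBrowser || message.method !== "Debugger.scriptParsed") {
    return message;
  }
  const params = { ...message.params };
  for (const key of ["url", "sourceMapURL"]) {
    if (typeof params[key] === "string" && params[key].startsWith("file://")) {
      params[key] = params[key].replace(/^file:/, "proxied-file:");
    }
  }
  return { ...message, params };
}
```

<p>Non-browser clients such as Visual Studio Code are left untouched, so they still receive the <code>file://</code> URLs they need to match sources to files on disk.</p>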
    <div>
      <h2>Future Work</h2>
      <a href="#future-work">
        
      </a>
    </div>
    <p>Both the Visual Studio Code and WebStorm configurations for breakpoint debugging require attaching to an <i>existing</i> dev server. It would be great if your IDE could <i>launch</i> the dev server too, and automatically attach to it.</p><p>When you debug a Node.js program in Visual Studio Code or WebStorm, an additional <a href="https://nodejs.org/api/cli.html#-r---require-module"><code>--require</code></a> hook is added to the <a href="https://nodejs.org/api/cli.html#node_optionsoptions"><code>NODE_OPTIONS</code></a> environment variable. This hook registers the process’s inspector URL with the editor over a well-known socket. This means if your Node.js process spawns another Node.js child process, your editor will debug that child process too. This is how Visual Studio Code’s <a href="https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_javascript-debug-terminal">JavaScript Debug Terminal</a> works, and is how editors can debug Node.js processes started by npm scripts.</p><p>Our plan is to detect this <code>--require</code> hook, and register <code>workerd</code> child processes started by Wrangler and Miniflare. This will mean you can debug <code>npm</code> launch tasks, without having to worry about starting the dev server and then attaching to it.</p>
    <div>
      <h2>Start debugging!</h2>
      <a href="#start-debugging">
        
      </a>
    </div>
    <p>All the debugging tools listed above are ready to be used today. Logs and DevTools can be accessed either by logging into the Cloudflare dashboard or by downloading <a href="https://www.npmjs.com/package/wrangler">Wrangler</a>, the command-line tool for the Cloudflare Developer Platform. Breakpoint debugging and Node-style logging are built into the latest version of Wrangler, and can be accessed by running <code>npx wrangler@latest dev</code> in a terminal window. Let us know what you think in the #wrangler channel on the <a href="https://discord.gg/cloudflaredev">Cloudflare Developers Discord</a>, and please <a href="https://github.com/cloudflare/workers-sdk/issues/new/choose">open a GitHub issue</a> if you hit any unexpected behavior.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">1wZwoTGB5bvoUK93vzeTWx</guid>
            <dc:creator>Adam Murray</dc:creator>
            <dc:creator>Brendan Coll</dc:creator>
        </item>
        <item>
            <title><![CDATA[Re-introducing the Cloudflare Workers Playground]]></title>
            <link>https://blog.cloudflare.com/workers-playground/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:43 GMT</pubDate>
            <description><![CDATA[ Today, we’re excited to announce an updated Cloudflare Workers playground, where users can develop and test Workers before sharing or deploying them ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Since the very <a href="/introducing-cloudflare-workers/">initial announcement</a> of Cloudflare Workers, we’ve provided a playground. The motivation behind it was a belief that users should have a convenient, low-commitment way to play around with and learn more about Workers.</p><p>Over the last few years, while <a href="https://www.cloudflare.com/developer-platform/workers/">Cloudflare Workers</a> and our <a href="https://www.cloudflare.com/developer-platform/products/">Developer Platform</a> have changed and grown, the original playground has not. Today, we’re proud to announce a revamp of the playground that demonstrates the power of Workers, along with new development tooling, and the ability to share your playground code and deploy instantly to Cloudflare’s global network.</p>
    <div>
      <h3>A focus on origin Workers</h3>
      <a href="#a-focus-on-origin-workers">
        
      </a>
    </div>
    <p>When Workers was first introduced, many of the examples and use-cases centered around middleware, where a Worker intercepts a request to an origin and does something before returning a response. This includes things like modifying headers, redirecting traffic, helping with A/B testing, or caching. Ultimately the Worker isn’t acting as an origin in these cases; it sits between the user and the destination.</p><p>While Workers are still great for these types of tasks, for the updated playground, we decided to focus on the Worker-as-origin use-case. This is where the Worker receives a request and is responsible for returning the full response. In this case, the Worker is the destination, not middleware. This is a great way for you to develop more complex use-cases like user interfaces or APIs.</p>
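<p>For contrast, here’s a minimal sketch of the middleware pattern described above (our own example, with a made-up header name): the Worker forwards the request to the origin, then returns a modified copy of the origin’s response.</p>

```javascript
// Minimal middleware-style Worker sketch (illustrative; the header name is
// made up). The Worker sits between the user and the origin: it forwards
// the request, then modifies the response on the way back.
async function handleRequest(request, originFetch = fetch) {
  const originResponse = await originFetch(request);
  // Responses returned by fetch are immutable; make a mutable copy first.
  const response = new Response(originResponse.body, originResponse);
  response.headers.set("X-Served-By", "middleware-worker");
  return response;
}

export default {
  fetch: (request) => handleRequest(request),
};
```

<p>An origin Worker, by comparison, constructs the entire <code>Response</code> itself instead of forwarding the request anywhere.</p>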
    <div>
      <h3>A new editor experience</h3>
      <a href="#a-new-editor-experience">
        
      </a>
    </div>
    <p>During Developer Week in May, we <a href="/improved-quick-edit/">announced</a> a new, authenticated dashboard editor experience powered by VSCode. Now, this same experience is available to users in the playground.</p><p>Users now have a more robust IDE experience that supports: multi-module Workers, type-checking via JSDoc comments and the <a href="https://www.npmjs.com/package/@cloudflare/workers-types">`workers-types` package</a>, pretty error pages, and real previews that update as you edit code. The new editor only supports <a href="/workers-javascript-modules/">Module syntax</a>, which is the preferred way for users to develop new Workers.</p><p>When the playground first loads, it looks like this:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6mfWVX3KmQjEM1YQ7W0sjN/2d0cc6ec11fd81a59ef151a912aa5a2a/image4-18.png" />
            
            </figure><p>The content you see on the right is coming from the code on the left. You can modify this just as you would in a code editor. Once you make an edit, it will be updated shortly on the right as demonstrated below:</p><div>
  
</div>
<p></p><p>You’re not limited to the starter demo. Feel free to edit and remove those files to create APIs, user interfaces, or any other application that you come up with.</p>
    <div>
      <h3>Updated developer tooling</h3>
      <a href="#updated-developer-tooling">
        
      </a>
    </div>
    <p>Along with the updated editor, the new playground also contains numerous developer tools to help give you visibility into the Worker.</p><p>Playground users have access to the same Chrome DevTools technology that we use in the Wrangler CLI and the Dashboard. Within this view, you can: view logs, view network requests, and profile your Worker among other things.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5nlF1OQyvv1NhGcf6cNKZI/c51f5a2d0980f545cc8519d59ae51c03/Screenshot-2023-09-12-at-4.12.10-PM.png" />
            
            </figure><p>At the top of the playground, you’ll also see an “HTTP” tab which you can use to test your Worker against various HTTP methods.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4wjS7nrqnVdcfqVSk7DI0w/9d7b5ad7a29c027ab1b68cb665b073a7/Screenshot-2023-09-12-at-4.08.49-PM.png" />
            
            </figure>
    <div>
      <h3>Share what you create</h3>
      <a href="#share-what-you-create">
        
      </a>
    </div>
    <p>With all these improvements, we haven’t forgotten the core use of a playground—to share Workers with other people! Whatever your use-case, whether you’re building a demo to showcase the power of Workers or sending someone an example of how to fix a specific issue, all you need to do is click “Copy Link” in the top right of the Playground, then paste the URL in any URL bar.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6LpxCAHIMkHpuc3UDMcHAJ/019a198ee37f89ab9eb9d3eb0c19a355/Screenshot-2023-09-28-at-13.35.41.png" />
            
            </figure><p>The unique URL remains shareable and deployable for as long as you keep it. This means you can create quick demos by building various Workers in the Playground and bookmarking them to share later. They won’t expire.</p>
    <div>
      <h3>Deploying to the Supercloud</h3>
      <a href="#deploying-to-the-supercloud">
        
      </a>
    </div>
    <p>We also wanted to make it easier to go from writing a Worker in the Playground to deploying that Worker to Cloudflare’s global network. We’ve included a “Deploy” button that will help you quickly deploy the Worker you’ve just created.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6PxsN7ji6lsDeLmbG6fZmV/00ce6d09999333fe2f7321cb61f3d5c9/image5-10.png" />
            
            </figure><p>If you don’t already have a Cloudflare account, you will also be guided through the onboarding process.</p>
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>This is now available to all users in Region:Earth. Go to <a href="https://workers.cloudflare.com/playground">https://workers.cloudflare.com/playground</a> and give it a go!</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Dashboard]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7GiFEdoyaqaiVfqZzzyF6g</guid>
            <dc:creator>Adam Murray</dc:creator>
            <dc:creator>Samuel Macleod</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improving Worker Tail scalability]]></title>
            <link>https://blog.cloudflare.com/improving-worker-tail-scalability/</link>
            <pubDate>Fri, 01 Sep 2023 13:00:46 GMT</pubDate>
            <description><![CDATA[ We’re excited to announce improvements to Workers Tail that means it can now be enabled for Workers at any size and scale ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56gEEDPmOntJoXI16u8qU5/0ed51f115308dc4941b6553fbdaed0f0/image2-15.png" />
            
            </figure><p>Being able to get real-time information from applications in production is extremely important. Software often passes local testing and automation, only for users to report that something isn’t working correctly. Being able to quickly see what is happening, and how often, is critical to debugging.</p><p>This is why we originally developed the Workers Tail feature - to give developers the ability to view requests, exceptions, and log information for their Workers, providing a window into what’s happening in real time. When we developed it, we also took the opportunity to build it on top of our own Workers technology using products like Trace Workers and Durable Objects. Over the last couple of years, we’ve continued to iterate on this feature - allowing users to quickly access logs <a href="/introducing-workers-dashboard-logs/">from the Dashboard</a> and via the <a href="/10-things-i-love-about-wrangler/">Wrangler CLI</a>.</p><p>Today, we’re excited to announce that Tail can now be enabled for Workers of any size and scale! In addition to telling you about the new and improved scalability, we wanted to share how we built it, and the changes we made to enable it to scale better.</p>
    <div>
      <h3>Why Tail was limited</h3>
      <a href="#why-tail-was-limited">
        
      </a>
    </div>
    <p>Tail leverages <a href="https://developers.cloudflare.com/workers/runtime-apis/durable-objects/#durable-objects">Durable Objects</a> to handle coordination between the Worker producing messages and consumers like <code>wrangler</code> and the Cloudflare dashboard, and Durable Objects are a great choice for handling real-time communication like this. However, when a single Durable Object instance starts to receive a very high volume of traffic - like the kind that can come with tailing live Workers - it can see some performance issues.</p><p>As a result, Workers with a high volume of traffic could not be supported by the original Tail infrastructure. Tail had to be limited to Workers receiving 100 requests/second (RPS) or less. This was a significant limitation that resulted in many users with large, high-traffic Workers having to turn to their own tooling to get proper observability in production.</p><p>Believing that every feature we provide should scale with users during their development journey, we set out to improve Tail's performance at high loads.</p>
    <div>
      <h3>Updating the way filters work</h3>
      <a href="#updating-the-way-filters-work">
        
      </a>
    </div>
    <p>The first improvement was to the existing filtering feature. When starting a Tail with <a href="https://developers.cloudflare.com/workers/wrangler/commands/#tail"><code>wrangler tail</code></a> (and now with the Cloudflare dashboard), users have the ability to filter out messages based on information in the requests or logs. Previously, this filtering was handled within the Durable Object, which meant that even if a user was filtering out the majority of their traffic, the Durable Object would still have to handle every message. Often, users with high-traffic Tails were using many filters to better interpret their logs, but wouldn’t be able to start a Tail due to the 100 RPS limit.</p><p>We moved filtering out of the Durable Object and into the Tail message producer, preventing any filtered messages from reaching the Tail Durable Object and thereby reducing its load. Moving the filtering out of the Durable Object was the first step in improving Tail’s performance at scale.</p>
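<p>A simplified sketch of what producer-side filtering means in practice (the filter shapes here are hypothetical, loosely modeled on <code>wrangler tail</code>’s method, status, and search filters): the predicate runs before a message is ever sent to the Durable Object.</p>

```javascript
// Hypothetical producer-side filter check (filter shapes are illustrative,
// not the actual Tail implementation). Only messages matching every filter
// are forwarded, so filtered traffic never reaches the Durable Object.
function matchesFilters(event, filters) {
  return filters.every((filter) => {
    switch (filter.type) {
      case "method": // e.g. { type: "method", values: ["POST"] }
        return filter.values.includes(event.request.method);
      case "status": // e.g. { type: "status", values: [500] }
        return filter.values.includes(event.response.status);
      case "search": // substring match anywhere in the serialized event
        return JSON.stringify(event).includes(filter.query);
      default:
        return true; // unknown filters don't drop messages
    }
  });
}
```
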
    <div>
      <h3>Sampling logs to keep Tails within Durable Object limits</h3>
      <a href="#sampling-logs-to-keep-tails-within-durable-object-limits">
        
      </a>
    </div>
    <p>After moving log filtering outside of the Durable Object, there was still the issue of deciding when a Tail could be started: there was no way to know in advance how much a given Tail’s filters would reduce traffic, and simply starting the Durable Object back up would more than likely mean it hit the 100 RPS limit immediately.</p><p>The solution was to add a safety mechanism for the Durable Object while the Tail was running.</p><p>We created a simple controller to track the RPS hitting a Durable Object and sample messages until the desired volume of 100 RPS is reached. As shown below, sampling keeps the Tail Durable Object RPS below the target of 100.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7a0f6KNAQCBfodwsGwzlEs/340c5f194ea626f41552f090787257a0/image4-12.png" />
            
            </figure><p>When messages are sampled, the following message appears every five seconds to let the user know that they are in sampling mode:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7jSffNOJOWuhVam9g2kjo2/daafee133ea30515a5b1ca30fbe559e7/image3-9.png" />
            
            </figure><p>This message goes away once the Tail is stopped or filters are applied that drop the RPS below 100.</p>
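<p>The sampling controller described above can be sketched as a simple per-second counter (our own simplified illustration, not the production code):</p>

```javascript
// Simplified sketch of a sampling controller (illustrative, not the
// production implementation). It counts forwarded messages per second and,
// once the target rate is reached, samples out the rest of that second's
// messages so the Durable Object stays at or below the target RPS.
class SamplingController {
  constructor(targetRps = 100) {
    this.targetRps = targetRps;
    this.currentSecond = -1;
    this.forwardedThisSecond = 0;
  }

  // Returns true if this message should be forwarded to the Durable Object.
  shouldForward(nowMs) {
    const second = Math.floor(nowMs / 1000);
    if (second !== this.currentSecond) {
      // New one-second window: reset the counter.
      this.currentSecond = second;
      this.forwardedThisSecond = 0;
    }
    if (this.forwardedThisSecond < this.targetRps) {
      this.forwardedThisSecond++;
      return true;
    }
    return false; // sampled out
  }
}
```
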
    <div>
      <h3>A final failsafe</h3>
      <a href="#a-final-failsafe">
        
      </a>
    </div>
    <p>Finally, as a last resort, a failsafe mechanism was added in case the Durable Object becomes fully overloaded. Since RPS tracking is done within the Durable Object, if the Durable Object is overwhelmed by an extremely large amount of traffic, the sampling mechanism will fail.</p><p>When an overload is detected, all messages forwarded to the Durable Object are periodically stopped to prevent any issues with Workers infrastructure.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7FXCXIYrmzHVJHFlRF1wXQ/d454b9bffab6eeddb19ec5718636113e/image1-24.png" />
            
            </figure><p>Here we can see a user who had a large amount of traffic that started to become sampled. As the traffic increased, the number of sampled messages grew. Since the traffic was too fast for the sampling mechanism to handle, the Durable Object got overloaded. However, soon excess messages were blocked and the overload stopped.</p>
    <div>
      <h3>Try it out</h3>
      <a href="#try-it-out">
        
      </a>
    </div>
    <p>These new improvements are now in place and available to all users 🎉</p><p>To tail a Worker via the Dashboard, log in, navigate to your Worker, and click on the Logs tab. You can then start a log stream via the default view.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/F5Z80VaPFuPW71NWvfA3g/27e7fb654ac3eac556611c81a2ad0a1d/image5-9.png" />
            
            </figure><p>If you’re using the Wrangler CLI, you can start a new Tail by running <code>wrangler tail</code>.</p>
    <div>
      <h3>Beyond Worker tail</h3>
      <a href="#beyond-worker-tail">
        
      </a>
    </div>
    <p>While we’re excited that Tail can now reach new limits and scale, we also recognize users may want to go beyond the live logs provided by Tail.</p><p>For example, if you’d like to push log events to additional destinations for a historical view of your application’s performance, we offer <a href="https://developers.cloudflare.com/workers/observability/logpush/">Logpush</a>. If you’d like more insight into and control over log messages and events themselves, we offer <a href="https://developers.cloudflare.com/workers/observability/tail-workers/">Tail Workers</a>.</p><p>These products, and others, can be read about in our <a href="https://developers.cloudflare.com/logs/">Logs documentation</a>. All of them are available for use today.</p>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Logs]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">7JrxctC3rnpXqgmtN5ffYL</guid>
            <dc:creator>Joshua Johnson</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[A whole new Quick Edit in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/improved-quick-edit/</link>
            <pubDate>Wed, 17 May 2023 13:00:57 GMT</pubDate>
            <description><![CDATA[ We’re proud to announce an improved dashboard editor experience that allows users to edit multimodule Workers, use real edge previews, and debug their Workers more easily - all powered by Workers and VSCode for the Web. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2B1L3aFtxDPZhKGtqfkFxy/f35e5927cfc211894c5fed5c17466eeb/image1-42.png" />
            
            </figure><p>Quick Edit is a development experience for Cloudflare Workers, embedded right within the Cloudflare dashboard. It’s the fastest way to get up and running with a new worker, and lets you quickly preview and deploy changes to your code.</p><p>We’ve recently spent a lot of time upgrading the <i>local</i> development experience to be as <a href="/miniflare-and-workerd/">useful as possible</a>, but the Quick Edit experience for editing Workers has stagnated since the release of <a href="/just-write-code-improving-developer-experience-for-cloudflare-workers/">workers.dev</a>. It’s time to give Quick Edit some love and bring it in line with the expectations of today's developers.</p><p>Before diving into what’s changed, here’s a quick overview of the current Quick Edit experience:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68vBJYkfVP0kh5oNFxCXFX/a977ab2c917072d0b7cbd359b95bc486/download-11.png" />
            
            </figure><p>We used the robust <a href="https://microsoft.github.io/monaco-editor/">Monaco editor</a>, which took us pretty far—it’s even what VSCode uses under the hood! However, Monaco is fairly limited in what it can do. Developers are used to the full power of their local development environment, with advanced IntelliSense support and all the features of a full-fledged IDE. Compared to that, a single-file text editor is a step down in expressiveness and functionality.</p>
    <div>
      <h2>VSCode for Web</h2>
      <a href="#vscode-for-web">
        
      </a>
    </div>
    <p>Today, we’re rolling out a new Quick Edit experience for Workers, powered by <a href="https://code.visualstudio.com/docs/editor/vscode-web">VSCode for Web</a>. This is a huge upgrade, allowing developers to work in a familiar environment. This isn’t just about familiarity though—using VSCode for Web to power Quick Edit unlocks significant new functionality that was previously only possible with a local development setup using <a href="/10-things-i-love-about-wrangler/">Wrangler</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7Dg5Pph7cGcx3fKb4zMiYZ/f26625e62400119738e40a0d366d57c0/download--1--7.png" />
            
            </figure>
    <div>
      <h3>Support for multiple modules!</h3>
      <a href="#support-for-multiple-modules">
        
      </a>
    </div>
    <p>Cloudflare Workers released support for the <a href="/workers-javascript-modules/">Modules syntax</a> in 2021, which is the recommended way to write Workers. It leans into modern JavaScript by leveraging the ES Module syntax, and lets you define Workers by exporting a default object containing event handlers.</p>
            <pre><code>export default {
 async fetch(request, env) {
   return new Response("Hello, World!")
 }
}</code></pre>
            <p>There are two sides of the coin when it comes to ES Modules though: exports <i>and imports</i>. Until now, if you wanted to organise your worker in multiple modules you had to use Wrangler and a local development setup. Now, you’ll be able to write multiple modules in the dashboard editor, and import them, just as you can locally. We haven’t enabled support for importing modules from npm yet, but that’s something we’re actively exploring—stay tuned!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7iWt9JogEsGbMwwlZgI9dt/0ca18de31eba4518ec230707211cceb6/download--2--6.png" />
            
            </figure>
    <div>
      <h3>Edge Preview</h3>
      <a href="#edge-preview">
        
      </a>
    </div>
    
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2BCsQLD1rEt5zDC2FMVN58/06ac07ce324b0a3e1ba9ddfd41c009dc/download--3--4.png" />
            
            </figure><p>When editing a worker in the dashboard, Cloudflare spins up a preview of your worker, deployed from the code you’re currently working on. This helps speed up the feedback loop when developing a worker, and makes it easy to test changes without impacting production traffic (see also, <a href="/announcing-wrangler-dev-the-edge-on-localhost/">wrangler dev</a>).</p><p>However, the in-dashboard preview hasn’t historically been a high-fidelity match for the deployed Workers runtime. There were various differences in behaviour between the dashboard preview environment and a deployed worker, and it was difficult to have full confidence that a worker that worked in the preview would work in the deployed environment.</p><p>That changes today! We’ve changed the dashboard preview environment to use the same system that powers <a href="/announcing-wrangler-dev-the-edge-on-localhost/"><code>wrangler dev</code></a>. This means that your preview worker will be run on Cloudflare's global network, the same environment as your deployed workers.</p>
    <div>
      <h3>Helpful error messages</h3>
      <a href="#helpful-error-messages">
        
      </a>
    </div>
    <p>In the previous dashboard editor, the experience when your code threw an error wasn’t great. Unless you wrapped your worker code in a try-catch handler, the preview would show a blank page when your worker threw an error. This could make it really tricky to debug your worker, and was pretty frustrating. With the release of the new Quick Editor, we now wrap your worker with error handling code that shows helpful error pages, complete with error stack traces and detailed descriptions.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/39MmsVyio0cM7lPDy4JtEg/570a8fed7b3aa76cd4ef547bff819224/download--4--4.png" />
            
            </figure>
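<p>Conceptually, the error-handling wrapper works something like this (an illustrative sketch of the idea, not Cloudflare’s actual wrapper code):</p>

```javascript
// Illustrative sketch of wrapping a worker's fetch handler so that uncaught
// exceptions produce a readable error page instead of a blank response.
// (The real wrapper is more elaborate; this just shows the shape.)
function withErrorPage(handler) {
  return async (request, env, ctx) => {
    try {
      return await handler(request, env, ctx);
    } catch (err) {
      const body = `<h1>Worker threw an exception</h1><pre>${err.stack}</pre>`;
      return new Response(body, {
        status: 500,
        headers: { "Content-Type": "text/html" },
      });
    }
  };
}
```
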
    <div>
      <h3>Typechecking</h3>
      <a href="#typechecking">
        
      </a>
    </div>
    <p>TypeScript is incredibly popular, and developers are more and more used to writing their workers in TypeScript. While the dashboard editor still only allows JavaScript files (and you’re unable to write TypeScript directly) we wanted to support modern typed JavaScript development as much as we could. To that end, the new dashboard editor has full support for <a href="https://www.typescriptlang.org/docs/handbook/type-checking-javascript-files.html">JSDoc TypeScript syntax</a>, with <a href="https://www.npmjs.com/package/@cloudflare/workers-types">the TypeScript environment for workers preloaded</a>. This means that writing code with type errors will show a familiar squiggly red line, and Cloudflare APIs like HTMLRewriter will be autocompleted.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4vXG2YZQvTAdXe9hghyD67/164bb87679259a6a27bb6bb92dcb3cea/download--5--4.png" />
            
            </figure><div></div>
<p></p>
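<p>For example (the handler itself is our own illustration), annotating plain JavaScript with JSDoc types is enough for the editor to typecheck it:</p>

```javascript
// A JSDoc-typed handler in plain JavaScript (example is ours). With the
// workers type environment preloaded, the editor can typecheck this
// without the file being TypeScript.
/**
 * @param {Request} request
 * @returns {Response}
 */
function handle(request) {
  const { pathname } = new URL(request.url);
  return new Response(`You requested ${pathname}`);
}

export default { fetch: handle };
```

<p>Passing, say, a number where <code>request</code> is expected would show the familiar squiggly red line, just as it would in a local TypeScript project.</p>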
    <div>
      <h2>How we built it</h2>
      <a href="#how-we-built-it">
        
      </a>
    </div>
    <p>It wouldn’t be a Cloudflare blog post without a deep dive into the nuts and bolts of what we’ve built!</p><p>First, an overview—how does this work at a high level? We embed VSCode for Web in the Cloudflare dashboard as an <code>iframe</code>, and communicate with it over a <a href="https://developer.mozilla.org/en-US/docs/Web/API/MessageChannel"><code>MessageChannel</code></a>. When the <code>iframe</code> is loaded, the Cloudflare dashboard sends over the contents of your worker to a VSCode for Web extension. This extension seeds an in-memory filesystem from which VSCode for Web reads. When you edit files in VSCode for Web, the updated files are sent back over the same <code>MessageChannel</code> to the Cloudflare dashboard, where they’re uploaded as a preview worker to Cloudflare's global network.</p><p>As with any project of this size, the devil is in the details. Let’s focus on a specific area: how we communicate with VSCode for Web’s <code>iframe</code> from the Cloudflare dashboard.</p><p>The <a href="https://developer.mozilla.org/en-US/docs/Web/API/MessageChannel"><code>MessageChannel</code></a> browser API enables relatively easy cross-frame communication—in this case, from an iframe embedder to the iframe itself. To use it, you construct an instance and access the <code>port1</code> and <code>port2</code> properties:</p>
            <pre><code>const channel = new MessageChannel()

// The MessagePort you keep a hold of
channel.port1

// The MessagePort you send to the iframe
channel.port2</code></pre>
            <p>We store a reference to the <code>MessageChannel</code> to use across component renders with <code>useRef()</code>, since React would otherwise create a new <code>MessageChannel</code> instance with every render.</p><p>With that out of the way, all that remains is to send <code>channel.port2</code> to VSCode for Web’s iframe, via a call to <code>postMessage()</code>.</p>
            <pre><code>// A reference to the iframe embedding VSCode for Web
const editor = document.getElementById("vscode")

// Wait for the iframe to load
editor.addEventListener('load', () =&gt; {
  // Send over the MessagePort
  editor.contentWindow.postMessage('PORT', '*', [
    channel.port2
  ]);
});</code></pre>
            <p>An interesting detail here is how the <code>MessagePort</code> is sent over to the iframe. The third argument to <code>postMessage()</code> indicates a sequence of <a href="https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Transferable_objects">Transferable objects</a>. This <i>transfers</i> ownership of <code>port2</code> to the iframe, which means that any attempts to access it in the original context will throw an exception.</p><p>At this stage the dashboard has loaded an iframe containing VSCode for Web, initialised a <code>MessageChannel</code>, and sent over a <code>MessagePort</code> to the iframe. Let’s switch context—the iframe now needs to catch the <code>MessagePort</code> and start using it to communicate with the embedder (Cloudflare’s dashboard).</p>
            <pre><code>window.onmessage = (e) =&gt; {
  if (e.data === "PORT") {
    // An instance of a MessagePort
    const port = e.ports[0];
  }
};</code></pre>
            <p>Relatively straightforward! With not <i>that</i> much code, we’ve set up communication and can start sending more complex messages across. Here’s an example of how we send over the initial worker content from the dashboard to the VSCode for Web iframe:</p>
            <pre><code>// In the Cloudflare dashboard

// The modules that make up your worker
const files = [
  {
    path: 'index.js',
    contents: `
      import { hello } from "./world.js"
      export default {
        fetch(request) {
          return new Response(hello)
        }
      }`
  },
  {
    path: 'world.js',
    contents: `export const hello = "Hello World"`
  }
];

channel.port1.postMessage({
  type: 'WorkerLoaded',
  // The worker name
  name: 'your-worker-name',
  // The worker's main module
  entrypoint: 'index.js',
  // The worker's modules
  files: files
});</code></pre>
            <p>If you’d like to learn more about our approach, you can explore the code we’ve open sourced as part of this project, including the <a href="https://github.com/cloudflare/workers-sdk/tree/main/packages/quick-edit-extension">VSCode extension</a> we’ve written to load data from the Cloudflare dashboard, our <a href="https://github.com/cloudflare/workers-sdk/tree/main/packages/quick-edit">patches to VSCode</a>, and our <a href="https://github.com/cloudflare/workers-sdk/tree/main/packages/solarflare-theme">VSCode theme</a>.</p>
    <div>
      <h2>We’re not done!</h2>
      <a href="#were-not-done">
        
      </a>
    </div>
    <p>This is a huge overhaul of the dashboard editing experience for Workers, but we’re not resting on our laurels! We know there’s a long way to go before developing a worker in the browser will offer the same experience as developing a worker locally with Wrangler, and we’re working on ways to close that gap. In particular, we’re working on adding TypeScript support to the editor, and on syncing to external Git providers like GitHub and GitLab.</p><p>We’d love to hear any feedback from you on the new editing experience—come say hi and ask us any questions you have on the <a href="https://discord.cloudflare.com">Cloudflare Discord</a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">72WrH18NXIGrxYukjwVtxr</guid>
            <dc:creator>Samuel Macleod</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Improved local development with wrangler and workerd]]></title>
            <link>https://blog.cloudflare.com/wrangler3/</link>
            <pubDate>Wed, 17 May 2023 13:00:15 GMT</pubDate>
            <description><![CDATA[ We’re proud to announce the release of Wrangler v3 – the first version of Wrangler with local-by-default development, powered by Miniflare v3 and the open-source Workers `workerd` runtime. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iSHHNyNYsb21AEw80h3lW/953b982411c18fcb0721afc8dd83ac7e/image5-7.png" />
            
            </figure><p>For over a year now, we’ve been working to improve the Workers local development experience. Our goal has been to improve parity between users' local and production environments. This is important because it provides developers with a fully-controllable and easy-to-debug local testing environment, which leads to increased developer efficiency and confidence.</p><p>To start, we integrated <a href="https://github.com/cloudflare/miniflare">Miniflare</a>, a fully-local simulator for Workers, directly <a href="/miniflare/">into Wrangler</a>, the Workers CLI. This allowed users to develop locally with Wrangler by running <code>wrangler dev --local</code>. Compared to the <code>wrangler dev</code> default, which relied on remote resources, this represented a significant step forward in local development. As good as it was, it couldn’t leverage the actual Workers runtime, which led to some inconsistencies and behavior mismatches.</p><p>Last November, we <a href="/miniflare-and-workerd/">announced the experimental version of Miniflare v3,</a> powered by the newly open-sourced <a href="https://github.com/cloudflare/workerd"><code>workerd</code> runtime</a>, the same runtime used by Cloudflare Workers. Since then, we’ve continued to improve upon that experience both in terms of accuracy with the real runtime and in cross-platform compatibility.</p><p>As a result of all this work, we are proud to announce the release of Wrangler v3 – the first version of Wrangler with local-by-default development.</p>
    <div>
      <h2>A new default for Wrangler</h2>
      <a href="#a-new-default-for-wrangler">
        
      </a>
    </div>
    <p>Starting with Wrangler v3, running <code>wrangler dev</code> leverages Miniflare v3 to run your Worker locally. This local development environment is effectively as accurate as a production Workers environment, letting you test every aspect of your application before deploying. It provides the same runtime and bindings, but has its own simulators for KV, R2, D1, Cache and Queues. Because you’re running everything on your machine, you won’t be billed for operations on KV namespaces or R2 buckets during development, and you can try out paid features like Durable Objects for free.</p><p>In addition to a more accurate developer experience, you should notice performance differences. Compared to remote mode, we’re seeing a 10x reduction in startup times and a 60x reduction in script reload times with the new local-first implementation. This massive reduction in reload times drastically improves developer velocity!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iwuiO3yvOuQQLFa9DbQZU/21379c38d9ce9a73e0f7a1c6eb8fbfb4/image4-12.png" />
            
            </figure><p>Remote development isn’t going anywhere. We recognise many developers still prefer to test against real data, or want to test Cloudflare services like <a href="https://developers.cloudflare.com/images/image-resizing/resize-with-workers">image resizing</a> that aren’t implemented locally yet. To run <code>wrangler dev</code> on Cloudflare’s network, just like previous versions, use the new <code>--remote</code> flag.</p>
    <div>
      <h2>Deprecating Miniflare v2</h2>
      <a href="#deprecating-miniflare-v2">
        
      </a>
    </div>
    <p>For users of Miniflare, there are two important changes when updating from v2 to v3. First, if you’ve been using Miniflare’s CLI directly, you’ll need to switch to <code>wrangler dev</code>, as Miniflare v3 no longer includes a CLI. Second, if you’re using Miniflare’s API directly, upgrade to <code>miniflare@3</code> and follow the <a href="https://miniflare.dev/get-started/migrating">migration guide</a>.</p>
    <div>
      <h2>How we built Miniflare v3</h2>
      <a href="#how-we-built-miniflare-v3">
        
      </a>
    </div>
    <p>Miniflare v3 is now built using <code>workerd</code>, the open-source Cloudflare Workers runtime. As <code>workerd</code> is a server-first runtime, every configuration defines at least one socket to listen on. Each socket is configured with a service, which can be an external server, disk directory or most importantly for us, a Worker! To start a <code>workerd</code> server running a Worker, create a <code>worker.capnp</code> file as shown below, run <code>npx workerd serve worker.capnp</code> and visit <a href="http://localhost:8080">http://localhost:8080</a> in your browser:</p>
            <pre><code>using Workerd = import "/workerd/workerd.capnp";


const helloConfig :Workerd.Config = (
 services = [
   ( name = "hello-worker", worker = .helloWorker )
 ],
 sockets = [
   ( name = "hello-socket", address = "*:8080", http = (), service = "hello-worker" )
 ]
);


const helloWorker :Workerd.Worker = (
 modules = [
   ( name = "worker.mjs",
     esModule =
       `export default {
       `  async fetch(request, env, ctx) {
       `    return new Response("Hello from workerd! 👋");
       `  }
       `}
   )
 ],
 compatibilityDate = "2023-04-04",
);</code></pre>
            <p>If you’re interested in what else <code>workerd</code> can do, check out the <a href="https://github.com/cloudflare/workerd/tree/main/samples">other samples</a>. Whilst <code>workerd</code> provides the runtime and bindings, it doesn’t provide the underlying implementations for the other products in the Developer Platform. This is where Miniflare comes in! It provides simulators for KV, R2, D1, Queues and the Cache API.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44vdcPO7sdaTtBi0u0HMgE/85e6510a0f8689284add4318e09c4c2c/image1-43.png" />
            
            </figure>
    <div>
      <h3>Building a flexible storage system</h3>
      <a href="#building-a-flexible-storage-system">
        
      </a>
    </div>
    <p>As you can see from the diagram above, most of Miniflare’s job is now providing different interfaces for data storage. In Miniflare v2, we used a custom key-value store to back these, but this had <a href="https://github.com/cloudflare/miniflare/issues/167">a</a> <a href="https://github.com/cloudflare/miniflare/issues/247">few</a> <a href="https://github.com/cloudflare/miniflare/issues/530">limitations</a>. For Miniflare v3, we’re now using the industry-standard <a href="https://sqlite.org/index.html">SQLite</a>, with a separate blob store for KV values, R2 objects, and cached responses. Using SQLite gives us much more flexibility in the queries we can run, allowing us to support future unreleased storage solutions. 👀</p><p>A separate blob store allows us to provide efficient, ranged, <a href="https://streams.spec.whatwg.org/#example-rbs-pull">streamed access</a> to data. Blobs have unguessable identifiers, can be deleted, but are otherwise immutable. These properties make it possible to perform atomic updates with the SQLite database. No other operations can interact with the blob until it's committed to SQLite, because the ID is not guessable, and we don't allow listing blobs. For more details on the rationale behind this, check out the <a href="https://github.com/cloudflare/miniflare/discussions/525">original GitHub discussion</a>.</p>
    <div>
      <h3>Running unit tests inside Workers</h3>
      <a href="#running-unit-tests-inside-workers">
        
      </a>
    </div>
    <p>One of Miniflare’s primary goals is to provide a great local testing experience. Miniflare v2 provided <a href="https://miniflare.dev/testing/vitest">custom environments</a> for popular Node.js testing frameworks that allowed you to run your tests <i>inside</i> the Miniflare sandbox. This meant you could import and call any function using Workers runtime APIs in your tests. You weren’t restricted to integration tests that just send and receive HTTP requests. In addition, these environments provide per-test isolated storage, automatically undoing any changes made at the end of each test.</p><p>In Miniflare v2, these environments were relatively simple to implement. We’d already reimplemented Workers Runtime APIs in a Node.js environment, and could inject them using Jest and Vitest’s APIs into the global scope.</p>
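    <p>To make the v2 approach concrete, here’s a hedged sketch of the injection idea, with a deliberately tiny fake KV namespace standing in for Miniflare’s real Node.js reimplementations (the class and binding name are hypothetical, chosen for illustration):</p>

```javascript
// A toy version of the Miniflare v2 approach: reimplement a Workers API
// in plain Node.js, then inject it into the global scope so test code can
// use it as if it were a real binding. FakeKVNamespace and TEST_NAMESPACE
// are illustrative names, not Miniflare's own.
class FakeKVNamespace {
  #store = new Map();

  async put(key, value) {
    this.#store.set(key, value);
  }

  async get(key) {
    return this.#store.get(key) ?? null;
  }
}

// A custom Jest/Vitest environment would do roughly this before each
// test file runs, giving every test its own isolated storage.
globalThis.TEST_NAMESPACE = new FakeKVNamespace();
```

    <p>Per-test isolation then falls out naturally: reset the global to a fresh instance (or snapshot and restore the underlying store) between tests.</p>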
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6UrAr6zl1SsIbG2Qyvq1p2/40113914d049e7ad928c16373dab7fa7/image3-13.png" />
            
            </figure><p>For Miniflare v3, this is much trickier. The runtime APIs are implemented in a separate <code>workerd</code> process, and you can’t reference JavaScript classes across a process boundary. So we needed a new approach…</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/44s7Z0ZD3r5hQpdCxr1atZ/8b1d395872b13a1123cf77e25ab07533/image7-7.png" />
            
            </figure><p>Many test frameworks like Vitest use Node’s built-in <a href="https://nodejs.org/api/worker_threads.html"><code>worker_threads</code></a> module for running tests in parallel. This module spawns new operating system threads running Node.js and provides a <code>MessageChannel</code> interface for communicating between them. What if instead of spawning a new OS thread, we spawned a new <code>workerd</code> process, and used WebSockets for communication between the Node.js host process and the <code>workerd</code> “thread”?</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5jGThLJfIpVq47I9KaxaS3/86a1ff8b9739f21e6629eb8f8b793ffa/image8-8.png" />
            
            </figure><p>We have a proof of concept using Vitest showing this approach can work in practice. Existing Vitest IDE integrations and the Vitest UI continue to work without any additional work. We aren’t quite ready to release this yet, but will be working on improving it over the next few months. Importantly, the <code>workerd</code> “thread” needs access to Node.js built-in modules, which we recently started <a href="/workers-node-js-asynclocalstorage/">rolling out support for</a>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/HqXRYKmlaionjMJK7PW6i/974cb97c5b694db5c3f567f628d861ed/image2-23.png" />
            
            </figure>
    <div>
      <h3>Running on every platform</h3>
      <a href="#running-on-every-platform">
        
      </a>
    </div>
    <p>We want developers to have this great local testing experience, regardless of which operating system they’re using. Before open-sourcing, the Cloudflare Workers runtime was originally only designed to run on Linux. For Miniflare v3, we needed to add support for macOS and Windows too. macOS and Linux are both Unix-based, making porting between them relatively straightforward. Windows on the other hand is an entirely different beast… 😬</p><p>The <code>workerd</code> runtime uses <a href="https://github.com/capnproto/capnproto/tree/master/c%2B%2B/src/kj">KJ</a>, an alternative C++ base library, which is already cross-platform. We’d also migrated to the <a href="https://bazel.build/">Bazel</a> build system in preparation for open-sourcing the runtime, which has good Windows support. When compiling our C++ code for Windows, we use LLVM's MSVC-compatible compiler driver <a href="https://llvm.org/devmtg/2014-04/PDFs/Talks/clang-cl.pdf"><code>clang-cl</code></a>, as opposed to using Microsoft’s Visual C++ compiler directly. This enables us to use the "same" compiler frontend on Linux, macOS, and Windows, massively reducing the effort required to compile <code>workerd</code> on Windows. Notably, this provides proper support for <code>#pragma once</code> when using symlinked virtual includes produced by Bazel, <code>__atomic_*</code> functions, a standards-compliant preprocessor, GNU statement expressions used by some KJ macros, and understanding of the <code>.c++</code> extension by default. After switching out <a href="https://github.com/mrbbot/workerd/blob/5e10e308e6683f8f88833478801c07da4fe01063/src/workerd/server/workerd.c%2B%2B#L802-L808">unix API calls for their Windows equivalents</a> using <code>#if _WIN32</code> preprocessor directives, and fixing a bunch of segmentation faults caused by execution order differences, we were finally able to get <code>workerd</code> running on Windows! No WSL or Docker required! 🎉</p>
    <div>
      <h2>Let us know what you think!</h2>
      <a href="#let-us-know-what-you-think">
        
      </a>
    </div>
    <p>Wrangler v3 is now generally available! Upgrade by running <code>npm install --save-dev wrangler@3</code> in your project. Then run <code>npx wrangler dev</code> to try out the new local development experience powered by Miniflare v3 and the open-source Workers runtime. Let us know what you think in the <code>#wrangler</code> channel on the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord</a>, and please <a href="https://github.com/cloudflare/workers-sdk/issues/new/choose">open a GitHub issue</a> if you hit any unexpected behavior.</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Wrangler]]></category>
            <category><![CDATA[Miniflare]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6XGVmk1ZbylTfVULuFs2jk</guid>
            <dc:creator>Brendan Coll</dc:creator>
            <dc:creator>Adam Murray</dc:creator>
        </item>
        <item>
            <title><![CDATA[Introducing Rollbacks for Workers Deployments]]></title>
            <link>https://blog.cloudflare.com/introducing-rollbacks-for-workers-deployments/</link>
            <pubDate>Mon, 03 Apr 2023 13:00:00 GMT</pubDate>
            <description><![CDATA[ Deployment rollbacks provide users the ability to quickly visualize and deploy past versions of their Workers, providing even more confidence in the deployment pipeline ]]></description>
            <content:encoded><![CDATA[ <p>In November 2022, we introduced <a href="/deployments-for-workers/">deployments for Workers</a>. Deployments are created as you make changes to a Worker, and each one is unique. They let you track changes to your Workers over time, seeing who made each change and where it came from.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kYW6tf3Hrwpv3PKkmMOa3/0be12df468ecd4a8a67c08f51104b314/image5.png" />
            
            </figure><p>When we made the announcement, we also said our intention was to build more functionality on top of deployments.</p><p>Today, we’re proud to release rollbacks for deployments.</p>
    <div>
      <h2>Rollbacks</h2>
      <a href="#rollbacks">
        
      </a>
    </div>
    <p>As nice as it would be to know that every deployment is perfect, it’s not always possible, for various reasons. Rollbacks provide a quick way to deploy past versions of a Worker, providing another layer of confidence when developing and deploying with Workers.</p>
    <div>
      <h3>Via the dashboard</h3>
      <a href="#via-the-dashboard">
        
      </a>
    </div>
    <p>In the dashboard, you can navigate to the <b>Deployments</b> tab. For each deployment other than the most recent, you’ll see a new icon on the far right. Hovering over that icon displays the option to roll back to that deployment.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5U6DDnyubvT7UXnR7sRUDk/cf343fc7b07b51962c90647c62c557d2/image3.png" />
            
            </figure><p>Clicking it brings up a confirmation dialog, where you can enter a reason for the rollback. This provides another mechanism for record-keeping and helps give more context for why the rollback was necessary.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4A9laQMfOq2KcbFvnMHeey/5a5dae8225559b93da09d0be590a241d/image2.png" />
            
            </figure><p>Once you enter a reason and confirm, a new rollback deployment will be created. This deployment has its own ID, but is a duplicate of the one you rolled back to. A message appears with the new deployment ID, as well as an icon showing the rollback message you entered above.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1tXWsG9Yhh00HF8D2P0EJY/fe9ff085d320216613bf3d9d215612cd/image6.png" />
            
            </figure>
    <div>
      <h3>Via Wrangler</h3>
      <a href="#via-wrangler">
        
      </a>
    </div>
    <p>With Wrangler version 2.13, you can roll back deployments with a new command, <code>wrangler rollback</code>. This command takes an optional ID to roll back to a specific deployment, but can also be run without an ID to roll back to the previous deployment. This provides an even faster way to roll back when you know the previous deployment is the one you want.</p><p>Just like in the dashboard, when you initiate a rollback you will be prompted to add a rollback reason and to confirm the action.</p><p>In addition to <code>wrangler rollback</code>, we’ve refactored the <code>wrangler deployments</code> command. Now you can run <code>wrangler deployments list</code> to view up to the last 10 deployments.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2ZdZipCnby1lHSs3wuPMSy/6aa6aee246c7238bf9e3173aa5c0f098/image7-1.png" />
            
            </figure><p>Here, you can see two new annotations: <b>rollback from</b> and <b>message</b>. These match the dashboard experience, and provide more visibility into your deployment history.</p><p>To view an individual deployment, you can run <code>wrangler deployments view</code>. This displays the last deployment made, which is the active deployment. If you would like to see a specific deployment, you can run <code>wrangler deployments view [ID]</code>.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lNq1vG8GBcmNpC634bmvB/2e0f0f7544f2572a0a17d93f7c802249/image4.png" />
            
            </figure><p>We’ve updated this command to display more data, like the compatibility date, usage model, and bindings. This additional data helps you quickly visualize changes to a Worker, or see more about a specific deployment, without having to open your editor and go through source code.</p>
    <div>
      <h2>Keep deploying!</h2>
      <a href="#keep-deploying">
        
      </a>
    </div>
    <p>We hope this feature provides even more confidence in deploying Workers, and encourages you to try it out! If you leverage the Cloudflare dashboard to manage deployments, you should have access immediately. Wrangler users will need to update to version 2.13 to see the new functionality.</p><p>Make sure to check out our updated <a href="https://developers.cloudflare.com/workers/platform/deployments/">deployments docs</a> for more information, as well as information on limitations to rollbacks. If you have any feedback, please let us know via <a href="https://docs.google.com/forms/d/e/1FAIpQLSfVRtmYOlzp6hJG50-8OfqpZameR2fd_5ySlmTlSeW5SSAzZw/viewform">this form</a>.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Edge]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">5q5y2KunPqLJ3nnjxrIqRH</guid>
            <dc:creator>Adam Murray</dc:creator>
        </item>
    </channel>
</rss>