
Changelog

New updates and improvements at Cloudflare.


  1. Hyperdrive is now available on the Free plan of Cloudflare Workers, enabling you to build Workers that connect to PostgreSQL or MySQL databases without compromise.

    Low-latency access to SQL databases is critical to building full-stack Workers applications. We want you to be able to build fast, global apps on Workers, regardless of the tools you use. So we made Hyperdrive available for all, making it easier to build Workers that connect to PostgreSQL and MySQL.

    If you want to learn more about how Hyperdrive works, read the deep dive on how Hyperdrive can make your database queries up to 4x faster.

    Hyperdrive provides edge connection setup and global connection pooling for optimal latencies.

    Visit the docs to get started with Hyperdrive for PostgreSQL or MySQL.
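
    As a minimal sketch, connecting to PostgreSQL from a Worker might look like the following (this assumes a Hyperdrive binding named HYPERDRIVE and the postgres driver):

    import postgres from "postgres";

    export default {
      async fetch(request, env, ctx) {
        // Hyperdrive exposes a connection string that routes through its
        // global connection pool (binding name assumed to be HYPERDRIVE)
        const sql = postgres(env.HYPERDRIVE.connectionString);

        const results = await sql`SELECT * FROM pg_tables LIMIT 10`;

        // Clean up the connection after the response is returned
        ctx.waitUntil(sql.end());

        return Response.json(results);
      },
    };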

  2. Hyperdrive now supports connecting to MySQL and MySQL-compatible databases, including Amazon RDS and Aurora MySQL, Google Cloud SQL for MySQL, Azure Database for MySQL, PlanetScale, and MariaDB.

    Hyperdrive makes your regional, MySQL databases fast when connecting from Cloudflare Workers. It eliminates unnecessary network roundtrips during connection setup, pools database connections globally, and can cache query results to provide the fastest possible response times.

    Best of all, you can connect using your existing drivers, ORMs, and query builders with Hyperdrive's secure credentials, no code changes required.

    import { createConnection } from "mysql2/promise";

    export interface Env {
      HYPERDRIVE: Hyperdrive;
    }

    export default {
      async fetch(request, env, ctx): Promise<Response> {
        const connection = await createConnection({
          host: env.HYPERDRIVE.host,
          user: env.HYPERDRIVE.user,
          password: env.HYPERDRIVE.password,
          database: env.HYPERDRIVE.database,
          port: env.HYPERDRIVE.port,
          disableEval: true, // Required for Workers compatibility
        });

        const [results, fields] = await connection.query("SHOW tables;");

        ctx.waitUntil(connection.end());

        return new Response(JSON.stringify({ results, fields }), {
          headers: {
            "Content-Type": "application/json",
            "Access-Control-Allow-Origin": "*",
          },
        });
      },
    } satisfies ExportedHandler<Env>;

    Learn more about how Hyperdrive works and get started building Workers that connect to MySQL with Hyperdrive.

  3. When using a Worker with the nodejs_compat compatibility flag enabled, the following Node.js APIs are now available:

    This makes it easier to reuse existing Node.js code in Workers or to use npm packages that depend on these APIs.

    node:crypto

    The full node:crypto API is now available in Workers.

    You can use it to verify and sign data:

    import { sign, verify } from "node:crypto";
    const signature = sign("sha256", "-data to sign-", env.PRIVATE_KEY);
    const verified = verify("sha256", "-data to sign-", env.PUBLIC_KEY, signature);

    Or, to encrypt and decrypt data:

    import { publicEncrypt, privateDecrypt } from "node:crypto";
    const encrypted = publicEncrypt(env.PUBLIC_KEY, "some data");
    const plaintext = privateDecrypt(env.PRIVATE_KEY, encrypted);

    See the node:crypto documentation for more information.

    node:tls

    APIs from node:tls are now available, including connect (shown below).

    This enables secure connections over TLS (Transport Layer Security) to external services.

    import { connect } from "node:tls";

    // ... in a request handler ...
    const connectionOptions = { key: env.KEY, cert: env.CERT };
    const socket = connect(url, connectionOptions, () => {
      if (socket.authorized) {
        console.log("Connection authorized");
      }
    });

    socket.on("data", (data) => {
      console.log(data);
    });

    socket.on("end", () => {
      console.log("server ends connection");
    });

    See the node:tls documentation for more information.

  4. The Agents SDK now includes built-in support for building remote MCP (Model Context Protocol) servers directly as part of your Agent. This allows you to easily create and manage MCP servers, without the need for additional infrastructure or configuration.

    The SDK includes a new McpAgent class that extends the Agent class and allows you to expose resources and tools over the MCP protocol, as well as authentication and authorization support for building remote MCP servers.

    import { McpAgent } from "agents/mcp";
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { z } from "zod";

    export class MyMCP extends McpAgent {
      server = new McpServer({
        name: "Demo",
        version: "1.0.0",
      });

      async init() {
        this.server.resource(`counter`, `mcp://resource/counter`, (uri) => {
          // ...
        });

        this.server.tool(
          "add",
          "Add two numbers together",
          { a: z.number(), b: z.number() },
          async ({ a, b }) => {
            // ...
          },
        );
      }
    }

    See the example for the full code to use as the basis for building your own MCP servers, and the client example for how to build an Agent that acts as an MCP client.

    To learn more, review the announcement blog as part of Developer Week 2025.

    Agents SDK updates

    We've made a number of improvements to the Agents SDK, including:

    • Support for building MCP servers with the new McpAgent class.
    • The ability to access the current agent, request, and WebSocket connection context using import { context } from "agents", allowing you to minimize or avoid direct dependency injection when calling tools.
    • Fixed a bug that prevented query parameters from being sent to the Agent server from the useAgent React hook.
    • Automatically converting the agent name in useAgent or useAgentChat to kebab-case to ensure it matches the naming convention expected by routeAgentRequest.

    To install or update the Agents SDK, run npm i agents@latest in an existing project, or explore the agents-starter project:

    npm create cloudflare@latest -- --template cloudflare/agents-starter

    See the full release notes and changelog on the Agents SDK repository.

  5. Workflows is now Generally Available (or "GA"): in short, it's ready for production workloads. Alongside marking Workflows as GA, we've introduced a number of changes during the beta period, including:

    • A new waitForEvent API that allows a Workflow to wait for an event to occur before continuing execution.
    • Increased concurrency: you can run up to 4,500 Workflow instances concurrently — and this will continue to grow.
    • Improved observability, including new CPU time metrics that allow you to better understand which Workflow instances are consuming the most resources and/or contributing to your bill.
    • Support for vitest for testing Workflows locally and in CI/CD pipelines.

    Workflows also supports the new increased CPU limits that apply to Workers, allowing you to run more CPU-intensive tasks (up to 5 minutes of CPU time per instance), not including the time spent waiting on network calls, AI models, or other I/O bound tasks.

    Human-in-the-loop

    The new step.waitForEvent API allows a Workflow instance to wait on events and data, enabling human-in-the-loop interactions, such as approving or rejecting a request, directly handling webhooks from other systems, or pushing event data to a Workflow while it's running.

    Because Workflows are just code, you can conditionally execute code based on the result of a waitForEvent call, and/or call waitForEvent multiple times in a single Workflow based on what the Workflow needs.

    For example, if you wanted to implement a human-in-the-loop approval process, you could use waitForEvent to wait for a user to approve or reject a request, and then conditionally execute code based on the result.

    import { WorkflowEntrypoint } from "cloudflare:workers";

    export class MyWorkflow extends WorkflowEntrypoint {
      async run(event, step) {
        // Other steps in your Workflow

        let webhookEvent = await step.waitForEvent(
          "receive invoice paid webhook from Stripe",
          { type: "stripe-webhook", timeout: "1 hour" },
        );

        // Rest of your Workflow
      }
    }

    You can then send a Workflow an event from an external service via HTTP or from within a Worker using the Workers API for Workflows:

    export default {
      async fetch(req, env) {
        const instanceId = new URL(req.url).searchParams.get("instanceId");
        const webhookPayload = await req.json();

        let instance = await env.MY_WORKFLOW.get(instanceId);

        // Send our event, with `type` matching the event type defined in
        // our step.waitForEvent call
        await instance.sendEvent({
          type: "stripe-webhook",
          payload: webhookPayload,
        });

        return Response.json({
          status: await instance.status(),
        });
      },
    };

    Read the GA announcement blog to learn more about what landed as part of the Workflows GA.

  6. AutoRAG is now in open beta, making it easy for you to build fully-managed retrieval-augmented generation (RAG) pipelines without managing infrastructure. Just upload your docs to R2, and AutoRAG handles the rest: embeddings, indexing, retrieval, and response generation via API.

    AutoRAG open beta demo

    With AutoRAG, you can:

    • Customize your pipeline: Choose from Workers AI models, configure chunking strategies, edit system prompts, and more.
    • Instant setup: AutoRAG provisions everything you need, from Vectorize and AI Gateway to pipeline logic, so you can go from zero to a working RAG pipeline in seconds.
    • Keep your index fresh: AutoRAG continuously syncs your index with your data source to ensure responses stay accurate and up to date.
    • Ask questions: Query your data and receive grounded responses via a Workers binding or API.

    Whether you're building internal tools, AI-powered search, or a support assistant, AutoRAG gets you from idea to deployment in minutes.
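
    As a minimal sketch, querying from a Worker via the AI binding might look like the following (this assumes an AutoRAG instance named my-autorag; see the docs for the exact API):

    export default {
      async fetch(request, env) {
        // Ask a question against the indexed documents; aiSearch retrieves
        // relevant chunks and generates a grounded response
        const answer = await env.AI.autorag("my-autorag").aiSearch({
          query: "How do I reset my password?",
        });

        return Response.json(answer);
      },
    };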

    Get started in the Cloudflare dashboard or check out the guide for instructions on how to build your RAG pipeline today.

  7. We’re excited to announce Browser Rendering is now available on the Workers Free plan, making it even easier to prototype and experiment with web search and headless browser use cases when building applications on Workers.

    The Browser Rendering REST API is now Generally Available, allowing you to control browser instances from outside of Workers applications. We've added three new endpoints to help automate more browser tasks:

    • Extract structured data – Use /json to retrieve structured data from a webpage.
    • Retrieve links – Use /links to pull all links from a webpage.
    • Convert to Markdown – Use /markdown to convert webpage content into Markdown format.

    For example, to fetch the Markdown representation of a webpage:

    curl -X 'POST' 'https://api.cloudflare.com/client/v4/accounts/<accountId>/browser-rendering/markdown' \
      -H 'Content-Type: application/json' \
      -H 'Authorization: Bearer <apiToken>' \
      -d '{
        "url": "https://example.com"
      }'

    For the full list of endpoints, check out our REST API documentation. You can also interact with Browser Rendering via the Cloudflare TypeScript SDK.

    We also recently landed support for Playwright in Browser Rendering for browser automation from Cloudflare Workers, in addition to Puppeteer, giving you more flexibility to test across different browser environments.
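
    As a rough sketch, using Playwright from a Worker might look like the following (this assumes a browser binding named MYBROWSER and the @cloudflare/playwright package):

    import { launch } from "@cloudflare/playwright";

    export default {
      async fetch(request, env) {
        // Launch a headless browser session through the Browser Rendering binding
        const browser = await launch(env.MYBROWSER);
        const page = await browser.newPage();

        await page.goto("https://example.com");
        const title = await page.title();

        await browser.close();
        return new Response(title);
      },
    };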

    Visit the Browser Rendering docs to learn more about how to use headless browsers in your applications.

  8. Durable Objects can now be used with zero commitment on the Workers Free plan, allowing you to build AI agents with the Agents SDK, collaboration tools, and real-time applications like chat or multiplayer games.

    Durable Objects let you build stateful, serverless applications with millions of tiny coordination instances that run your application code alongside (in the same thread!) your durable storage. Each Durable Object can access its own SQLite database through a Storage API. A Durable Object class is defined in a Worker script, encapsulating the Durable Object's behavior when accessed from a Worker. For example:


    import { DurableObject } from "cloudflare:workers";

    // Durable Object
    export class MyDurableObject extends DurableObject {
      // ...

      async sayHello(name) {
        return `Hello, ${name}!`;
      }
    }

    // Worker
    export default {
      async fetch(request, env) {
        // Every unique ID refers to an individual instance of the Durable Object class
        const id = env.MY_DURABLE_OBJECT.idFromName("foo");

        // A stub is a client used to invoke methods on the Durable Object
        const stub = env.MY_DURABLE_OBJECT.get(id);

        // Methods on the Durable Object are invoked via the stub
        const greeting = await stub.sayHello("world");

        return new Response(greeting);
      },
    };

    Free plan limits apply to Durable Objects compute and storage usage. The limits allow developers to build real-world applications, and every Worker request on the Free plan can call a Durable Object.

    For more information, check out the Durable Objects documentation.

  9. SQLite in Durable Objects is now generally available (GA), with a 10 GB SQLite database per Durable Object. Since the public beta in September 2024, we've brought the SQLite storage backend to feature parity with the preexisting key-value (KV) storage backend for Durable Objects and improved its robustness.

    SQLite-backed Durable Objects are recommended for all new Durable Object classes, using the new_sqlite_classes Wrangler configuration. Only SQLite-backed Durable Objects have access to the Storage API's SQL and point-in-time recovery methods, which provide relational data modeling, SQL querying, and better data management.

    import { DurableObject } from "cloudflare:workers";

    export class MyDurableObject extends DurableObject {
      sql: SqlStorage;

      constructor(ctx: DurableObjectState, env: Env) {
        super(ctx, env);
        this.sql = ctx.storage.sql;
      }

      async sayHello() {
        let result = this.sql
          .exec("SELECT 'Hello, World!' AS greeting")
          .one();
        return result.greeting;
      }
    }

    KV-backed Durable Objects remain for backwards compatibility, and a migration path from key-value storage to SQL storage for existing Durable Object classes will be offered in the future.

    For more details on SQLite storage, check out the Zero-latency SQLite storage in every Durable Object blog.

  10. You can now capture a maximum of 256 KB of log events per Workers invocation, helping you gain better visibility into application behavior.

    All console.log() statements, exceptions, request metadata, and headers are automatically captured during the Worker invocation and emitted as a JSON object. Workers Logs deserializes this object before indexing the fields and storing them. You can also capture, transform, and export the JSON object in a Tail Worker, as sketched below.
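
    As a minimal sketch, a Tail Worker receives these objects in a tail() handler (the ingest URL below is a hypothetical example):

    export default {
      async tail(events, env, ctx) {
        // Each event carries the logs, exceptions, and metadata captured
        // from a single Worker invocation
        ctx.waitUntil(
          fetch("https://logs.example.com/ingest", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(events),
          }),
        );
      },
    };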

    256 KB is a 2x increase from the previous 128 KB limit. After you exceed this limit, further context associated with the request will not be recorded in your logs.

    This limit is automatically applied to all Workers.