Cloudflare Workers are compute instances that run at the edge of the internet. They’re purposely built to be lightweight, isolated, and distributed – qualities meant to deliver speed, security, and economy. They run user-defined applications, such as code that augments web requests, at an intermediate point between the origin server and the user.

In this post I’ll go over the What, the How, and the basic code structure, building up to a fleshed-out use case in a future post.

What Workers does

Cloudflare Workers provides a serverless execution environment that allows you to create entirely new applications or augment existing ones without configuring or maintaining infrastructure.

How Workers works

Cloudflare Workers behave similarly to JavaScript in the browser or in Node.js… but the differences show up at runtime. Rather than running on your local machine or on a centralized server, Workers functions run on Cloudflare’s Edge Network – a global network of thousands of machines distributed across hundreds of locations.

Each of these machines hosts an instance of the Workers runtime, and each of those runtimes is capable of running thousands of user-defined apps.

Now into detail…

The precondition is that you sign up for a Cloudflare account and set up your domain by either 1) using Cloudflare’s workers.dev subdomain or 2) changing your domain to use Cloudflare’s nameservers. Now Cloudflare’s edge network is set up to receive your domain’s requests.
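
If you go the workers.dev route, the Wrangler CLI takes care of most of this setup. Here’s a minimal sketch of a wrangler.toml for that path – the name and account_id values are placeholders, and the keys assume Wrangler’s v1-style configuration:

# wrangler.toml (sketch – placeholder values)
name = "songs-api"
type = "javascript"
account_id = "YOUR_ACCOUNT_ID"
workers_dev = true

Running wrangler publish would then push the script to a URL under your workers.dev subdomain.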

When a request to your domain is received, it triggers an event that your Workers script handles with an event handler. The handler can use any of the following to control what happens next. I focus on the responding method today.

🛵 respondWith() intercepts the request and lets the Worker send back a custom response.

Most Workers scripts are a variation on the default Workers flow:

/*
What a Workers flow looks like
*/

addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  return new Response("Hello world")
}

The event is a FetchEvent object. Its respondWith() call hands the request off to handleRequest(), which creates and returns the actual Response. Splitting the work this way keeps each function to a single responsibility: the listener wires up the event, and handleRequest() builds the response.
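
For contrast, the same "Hello world" could be returned without a separate handler, since respondWith() also accepts a Response directly, but then the listener is doing both jobs at once:

/*
Inlined version, shown only for contrast –
the listener now both handles the event and builds the Response
*/

addEventListener("fetch", event => {
  event.respondWith(new Response("Hello world"))
})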

In the example below, I create a barebones API endpoint that returns a list of songs when a GET /songs request comes in. You can also check it out live in the Workers playground and read the console log to follow the event handling.

The first thing that happens when the URL is requested is that the fetch event fires. The listener responds by calling handleRequest(), which filters the request and returns the proper response. I first check that the request method is GET, then that the URL ends with the /songs endpoint. For simplicity, I only check the last path segment and don’t validate the rest of the path back to the home URL in this example. The getData() function builds and returns a JSON response.

addEventListener("fetch", event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
    /* Returns json data for valid endpoint
    Input: Fetch request object
    Output: Response object
    */
    if (request.method != "GET") {
        return new Response("Expected GET", { status: 500 })      
    }
    
    const urlParts = request.url.split('/')

    let response    
    if (urlParts[urlParts.length-1] == 'songs') {
        response = getData()
    } else {
        response = new Response("Requested unknown endpoint", { status: 400 })
    }       
    return response
}

function getData() {
    const data = [
        { "song": "Don't Go Jason Waterfalls ", "year": "1995" },
        { "song": "Happy Birthday remix", "year": "2000" },
        { "song": "Wikipedia theme song", "year": "2020" }
    ]
    
    const json = JSON.stringify(data, null, 2)
    return new Response(json, {
        headers: {
            "content-type": "application/json;charset=UTF-8",
        }
    })  
}

If I were to deploy this endpoint, my application would reside among other applications on the edge network, but its context and memory are isolated. A single runtime can run hundreds or thousands of “isolates”, seamlessly switching between them. Each piece of code is protected from other untrusted or user-written code on the runtime.
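
Once deployed, the endpoint would be callable like any other HTTP API. Here’s a quick client-side sketch – the songs-api.example.workers.dev URL is just a hypothetical placeholder for wherever the Worker ends up living:

/*
Client-side sketch – the workers.dev URL below is a hypothetical placeholder
*/

fetch("https://songs-api.example.workers.dev/songs")
    .then(response => response.json())
    .then(songs => console.log(songs)) // logs the array of song objects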

But wait, there’s more…

The Workers API has a few other examples that show its range of functionality. The next post in this series will go through another one. 🐱‍👓