Frontend Is Edge Computing

Part 1 of 8 — Series index

Somewhere along the way, we stopped noticing what the browser had become.

The vocabulary never updated. We still call it "the client." We still draw it as a rectangle at the edge of our architecture diagrams, usually labelled "UI." We still treat frontend work as the layer that turns data into pixels, and we still allocate engineering effort accordingly — a frontend team, a frontend roadmap, frontend tickets.

Meanwhile, the actual thing running in the browser has quietly become one of the most interesting distributed systems nodes in our stack. It caches. It reconciles. It orchestrates. It holds state across sessions, tabs, and device restarts. It ships out to hardware we don't own, runs on operating systems we can't upgrade, and communicates over networks that routinely fail. It keeps humans productive while the rest of our infrastructure pages through incidents.

That's edge computing. We just haven't been calling it that.

What "edge" actually means

The phrase "edge computing" gets used in a few overlapping ways. CDN engineers mean one thing. Telco folks mean another. IoT people mean a third. The common thread is useful: an edge node is a compute environment that sits close to the user, operates semi-autonomously from the central system, and has to cope with limited resources and unreliable connectivity.

By that definition, your browser tab is textbook edge. It runs on the user's own device — sometimes a laptop, sometimes a six-year-old phone on a flaky connection. It can't assume the central services are reachable. It holds enough state to be useful even when they aren't. It deals with the messy reality that the central system doesn't — real networks, real devices, real humans clicking real buttons.

And it does all of this on the node's terms, not yours. You don't get to provision it. You don't get to scale it horizontally. You can't SSH into it when it misbehaves. If it runs out of memory, it just dies.

The old frame: frontend as presentation

The presentation-layer frame came from a world where interactivity meant submitting a form and reading the next page. In that world, the client genuinely was thin. Fetch HTML, render HTML, repeat.

That's not the world anymore, and hasn't been for years. A non-trivial web application today:

- orchestrates concurrent network calls and reconciles their results;
- maintains caches with staleness and invalidation semantics;
- persists state across sessions, tabs, and device restarts;
- retries, degrades, and recovers when the network fails;
- does all of this within a strict local resource budget.

None of that is presentation. That's a distributed systems node doing distributed systems work.

What changes when you take the frame seriously

The useful thing about naming something correctly is that all the literature suddenly applies.

If the browser is an edge node, then the questions you already know how to ask about distributed systems are the right questions. How does this node behave when the central API is slow? What's its backpressure strategy? How does it reconcile after a partition heals? What's the cache hierarchy? Where's the observability? What's the failure mode when the node runs out of local resources?

These are not new problems. The patterns are well-known. Circuit breakers, bulkheads, exponential backoff, idempotency keys, conflict-free replicated data types, cache-aside, write-through — the whole vocabulary of distributed systems is available to you the moment you admit where you actually are.
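To make that vocabulary concrete: here is what one of those primitives looks like ported to the client. A minimal sketch of exponential backoff with full jitter — the function name `retry` and the default timings are illustrative, not a recommendation.

```typescript
// Sketch: exponential backoff with full jitter for an edge node's retries.
// Defaults are illustrative; tune them against your API's rate limits.
async function retry<T>(
  fn: () => Promise<T>,
  { attempts = 5, baseMs = 200, capMs = 5_000 } = {},
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Full jitter: sleep a random duration in [0, min(cap, base * 2^attempt)).
      // Randomness keeps a fleet of browsers from retrying in lockstep.
      const ceiling = Math.min(capMs, baseMs * 2 ** attempt);
      await new Promise((resolve) => setTimeout(resolve, Math.random() * ceiling));
    }
  }
  throw lastError;
}
```

The jitter matters more on the edge than it does server-side: you don't control how many of these nodes exist, so synchronized retries from every open tab are a thundering herd you can't throttle centrally.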

Some of the most important work in modern frontend has been exactly this: porting distributed systems primitives into the browser. React Query and SWR are cache-aside layers with staleness semantics. Service workers are on-host reverse proxies. IndexedDB is a local database. AbortController is a cancellation token. BroadcastChannel is pub/sub for tabs. Transferable ArrayBuffer is zero-copy IPC. You've been doing edge computing for a while. Naming it gives you access to the parts you haven't yet discovered.
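The cancellation-token point is worth seeing in code. AbortController and AbortSignal are standard browser (and modern Node) APIs; the `sleep` helper below is my own sketch of how you thread a signal through async work so a deadline or a navigation can cancel it.

```typescript
// AbortController as a cancellation token: a cancellable sleep, the
// building block for client-side deadlines. `sleep` is a hypothetical
// helper; AbortController/AbortSignal are the standard APIs.
function sleep(ms: number, signal?: AbortSignal): Promise<void> {
  return new Promise((resolve, reject) => {
    if (signal?.aborted) return reject(signal.reason);
    const timer = setTimeout(resolve, ms);
    signal?.addEventListener(
      "abort",
      () => {
        clearTimeout(timer); // release the resource, then propagate
        reject(signal.reason);
      },
      { once: true },
    );
  });
}

// Cancel in-flight work the way a server would on client disconnect.
const controller = new AbortController();
sleep(10_000, controller.signal).catch((e) =>
  console.log("cancelled:", (e as Error).message),
);
controller.abort(new Error("user navigated away"));
```

The same signal plugs directly into `fetch(url, { signal })`, which is what makes it a genuine cancellation token rather than a one-off callback.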

A case I lived through

A few years ago I was asked to build a product that gave customers a single view of their infrastructure across every region of a major cloud provider — every virtual machine, every network, every security group, every routing table, everywhere.

The interesting constraint: there was no global API. The cloud's APIs were regional by design. A customer with resources in thirty-plus regions needed thirty-plus round trips to see their own fleet. Doing that server-side would have meant building a new central service — a fleet of aggregators, a global cache, a new team to operate it, new failure modes, new cost.

We did it at the edge instead. The browser orchestrates roughly a thousand concurrent API calls across every region, applies adaptive throttling so the client doesn't get itself rate-limited, persists the results to local durable storage so the second visit is instant, progressively renders as regions respond so users see value in under a second, and gracefully degrades when specific regions are slow or unreachable. The product serves over a million customers a month. It has zero backend infrastructure cost, because there is no backend. The edge is the aggregation layer.
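The shape of that fan-out can be sketched in a few lines. This is a simplification, not the production code: the real system used adaptive throttling, while this sketch uses a fixed concurrency ceiling, and `fetchRegion`, the callback names, and the pool size are all hypothetical.

```typescript
// Sketch of the edge-side fan-out: many regional calls, a concurrency
// ceiling so the client doesn't rate-limit itself, and progressive
// delivery as each region responds. Names and numbers are illustrative.
async function fanOut<T>(
  regions: string[],
  fetchRegion: (region: string) => Promise<T>,
  onResult: (region: string, result: T | Error) => void,
  concurrency = 8,
): Promise<void> {
  const queue = [...regions];
  async function worker(): Promise<void> {
    while (queue.length > 0) {
      const region = queue.shift()!;
      try {
        onResult(region, await fetchRegion(region)); // render as it lands
      } catch (err) {
        onResult(region, err as Error); // degrade per-region, don't fail the page
      }
    }
  }
  // A fixed pool of workers rather than one promise per call:
  // the pool size is the backpressure.
  await Promise.all(Array.from({ length: concurrency }, worker));
}
```

The two design choices that matter are visible here: the worker pool bounds how hard the client hits the APIs, and routing errors through the same `onResult` channel is what makes a slow region a partial result instead of a failed page.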

That wasn't a frontend feature. That was a distributed system whose compute happens to live in JavaScript. And it works because we treated it that way — we designed it as we'd design any other edge-deployed service.

Why this matters for how we hire, staff, and build

If the browser is an edge node, the "frontend vs. backend" split stops being a useful organizational boundary. The skills that matter on the edge — distributed systems intuition, performance engineering, operational thinking, API design, caching strategy, resilience patterns — are exactly the skills we've been locating on backend teams for twenty years.

The consequence: frontend work deserves the same architectural attention and the same seniority distribution as service work. Treating it as a pixel-pushing layer leaves your edge tier in the hands of people you haven't asked to think about it. The results are predictable — memory leaks that kill tabs, thundering-herd fetches that DDoS your own APIs, cache strategies that were never cache strategies, retry loops with no circuit breaker. All the failures you'd never tolerate in a service, silently shipping to production in your client.
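For contrast, here is how little it takes to not ship that last failure. A minimal client-side circuit breaker — a sketch of the pattern, with illustrative thresholds, not a production implementation.

```typescript
// A minimal circuit breaker for client-side calls. Thresholds and
// cooldown are illustrative. Open circuit = fail fast instead of
// hammering an API that is already down.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;
  constructor(private threshold = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.threshold) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open"); // reject locally, spare the API
      }
      this.failures = this.threshold - 1; // half-open: allow one probe
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      this.failures++;
      if (this.failures >= this.threshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```

Wrap your fetch layer in one of these and a dead upstream costs your users one fast local error instead of a tab full of stalled retries.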

The fix isn't to make frontend engineers learn backend. It's to recognize that good frontend engineers are distributed systems engineers, and the work they're doing is systems work.

Where this series is going

The next seven posts are the rest of the argument. Each one takes a specific assumption that backend and platform engineers bring into frontend work and replaces it with the mental model the edge actually demands.

We'll start at the lowest layer — why the browser runs on a single thread, and why that's a feature — and work our way up through engines, patterns, rendering, parallelism, memory, and architecture. By the end, I want you to have a coherent vocabulary for treating the browser as what it is: the compute node closest to your user, doing the hardest kind of work in our industry.

Welcome to the edge.


New posts in this series publish weekly.