The pitch for the Edge Runtime sounds irresistible: your code runs in 300+ cities, the cold start is under 50ms, and your users always hit a server within a few hundred miles of them. Latency disappears.
The reality, after building several apps that targeted Edge from day one and a few that migrated back: the Edge Runtime is a meaningful win for a narrow set of use cases, and a meaningful regression for everything else. This article is the rubric for telling them apart.
What the Edge Runtime Actually Is
The Edge Runtime is a V8-based JavaScript runtime, not Node. It runs at the CDN edge (Cloudflare Workers, Vercel Edge Functions, and similar platforms), close to the user. It boots in milliseconds because it doesn't load Node's full standard library.
The trade-off is constraints:
- No Node APIs: `fs`, `child_process`, native `crypto`, and native `path` are gone. You get Web APIs (`fetch`, `URL`, Web Crypto) only.
- No native modules: anything with a `.node` binary won't load. This rules out most database drivers (`pg`, `mysql2`, `mongodb`), most image-processing libs (`sharp`), and a chunk of the npm ecosystem.
- Smaller bundle size limits: Vercel's Edge limit is 1MB compressed; Cloudflare's is 10MB. Node serverless functions have a far higher ceiling.
- Shorter execution limits: typically 30-50 seconds maximum, vs 5-15 minutes for Node functions.
These constraints are why Edge cold starts are fast: less code to load.
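As a concrete example of working inside those constraints: hashing a value on Edge means Web Crypto's `crypto.subtle`, not `node:crypto`. A minimal sketch:

```ts
// Edge-compatible hashing via Web Crypto (node:crypto is unavailable).
// Returns the SHA-256 digest of a string as hex.
async function sha256Hex(input: string): Promise<string> {
  const data = new TextEncoder().encode(input);
  const digest = await crypto.subtle.digest("SHA-256", data);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```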
Where Edge Wins
Geo-aware routing and personalisation. A middleware that reads the user's country from the request and routes them to the right localised content is a perfect fit for Edge. The work is tiny, the latency saving is large.
```ts
// middleware.ts - runs on Edge by default
import { NextResponse } from "next/server";

export function middleware(request: Request) {
  const country = request.headers.get("x-vercel-ip-country") ?? "US";
  if (country === "DE" && !request.url.includes("/de")) {
    return NextResponse.redirect(new URL("/de", request.url));
  }
  return NextResponse.next();
}
```
A/B test variant assignment. Read a cookie, decide which variant the user gets, set a header. Sub-millisecond work. Doing this from Node would add 100ms of cold-start tax for nothing.
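A minimal sketch of that assignment as Next.js middleware; the cookie name and the 50/50 split are illustrative:

```ts
// Assign an A/B variant at the edge: read a cookie, pick a bucket, persist it.
import { NextRequest, NextResponse } from "next/server";

export function middleware(request: NextRequest) {
  const existing = request.cookies.get("ab-variant")?.value;
  const variant = existing ?? (Math.random() < 0.5 ? "control" : "treatment");

  const response = NextResponse.next();
  if (!existing) {
    // Persist the assignment so the user stays in the same bucket.
    response.cookies.set("ab-variant", variant, { path: "/" });
  }
  return response;
}
```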
Auth token validation (without DB lookup). If your auth is JWT-based and the public key is in env, Edge is a perfect fit. Verify the token with Web Crypto, attach the user ID to the request, hand off.
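A sketch of that check, assuming RS256 JWTs and the `jose` library (which runs on Web Crypto and works in the Edge Runtime); the env var, cookie, and header names are illustrative:

```ts
// Edge middleware sketch: verify a JWT, attach the user ID, hand off.
import { importSPKI, jwtVerify } from "jose";
import { NextRequest, NextResponse } from "next/server";

export async function middleware(request: NextRequest) {
  const token = request.cookies.get("session")?.value;
  if (!token) {
    return NextResponse.redirect(new URL("/login", request.url));
  }

  try {
    const publicKey = await importSPKI(process.env.JWT_PUBLIC_KEY!, "RS256");
    const { payload } = await jwtVerify(token, publicKey);

    // Forward the verified user ID to the downstream handler.
    const headers = new Headers(request.headers);
    headers.set("x-user-id", String(payload.sub));
    return NextResponse.next({ request: { headers } });
  } catch {
    return NextResponse.redirect(new URL("/login", request.url));
  }
}
```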
Static asset transformation and image optimisation, using the platform's built-in image services rather than native libraries. When the work is per-byte and per-request, running it close to the user matters.
Where Edge Loses
Anything that talks to a real database. Most database drivers are native modules, which Edge can't run. The workaround is to use HTTP-based clients (Neon's `@neondatabase/serverless`, PlanetScale's `@planetscale/database`, Supabase's REST/PostgREST API), but you've now added 30-80ms of cold connection time per request to a service that may not be near your edge.
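For reference, the HTTP-driver workaround looks roughly like this (Neon's driver shown; the connection string and table are illustrative):

```ts
// app/api/products/route.ts - Edge route using an HTTP-based Postgres driver.
import { neon } from "@neondatabase/serverless";

export const runtime = "edge";

export async function GET() {
  const sql = neon(process.env.DATABASE_URL!);
  // Each query is an HTTP round-trip to wherever the database actually lives.
  const products = await sql`select id, name from products limit 10`;
  return Response.json(products);
}
```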
If your DB is in us-east-1 and your user is in Sydney, Edge runs in Sydney but every query goes to us-east-1. The round-trip dominates. You'd have been faster running everything in us-east-1.
Anything that processes images, video, or large payloads. No `sharp`. No `ffmpeg`. No `pdfkit`. Edge isn't the right runtime.
Long-running work. A Server Action that takes 90 seconds to send batch emails or generate a report blows straight past Edge's execution limits. Node is the answer.
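On Next.js that just means leaving the route on the default Node runtime, optionally with a longer ceiling. A sketch; the `maxDuration` value is illustrative and plan-dependent:

```ts
// app/api/reports/route.ts - keep heavy, slow work on Node.
export const runtime = "nodejs"; // the default, made explicit for contrast
export const maxDuration = 120;  // seconds; provider plan limits apply

export async function POST() {
  // ...generate the report, send the batch emails, etc.
  return Response.json({ started: true });
}
```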
Heavy npm dependencies. If your route handler pulls in 800KB of code, Edge cold starts are no longer free. The bundle has to be parsed and compiled on every cold start, and bundles over 500KB start to feel sluggish.
The Hybrid Pattern (What Most Production Apps End Up With)
The cleanest production architecture is Edge for the front door, Node for the work.
```ts
// middleware.ts → Edge
// Auth, geo routing, A/B assignment, request rewriting

// app/api/* → Node (default)
// Database queries, file processing, anything heavy

// app/page.tsx → Node Server Component
// Database fetches, complex rendering

// app/api/personalise/route.ts → Edge (opt-in)
export const runtime = "edge";

export async function GET(request: Request) {
  // Just reads cookies + does math, no DB
  // (illustrative: bucket the user by a cookie flag)
  const returning = request.headers.get("cookie")?.includes("returning=1") ?? false;
  return Response.json({ segment: returning ? "returning" : "new" });
}
```
This gives you the best of both: a fast, distributed front door for the lightweight per-request decisions, and a powerful Node backend for the actual work.
Cost Considerations
Edge functions are billed per request (with a small CPU time component); Node serverless is billed per invocation + duration. For high-volume, fast-completing requests, Edge can be 2-5× cheaper. For long-running requests, Node is significantly cheaper because Edge billing penalises wall-clock time more aggressively.
Run your numbers before committing to a runtime. The provider pricing pages are the only source of truth.
When to Default to Edge, When to Default to Node
My current default rules:
- Default to Node for new applications. Migrate routes to Edge only when you have a measured latency win.
- Default to Edge for middleware, image transformations, and anything that fits within the constraints by design.
- Never put Edge on the critical path of database-heavy work unless your DB is also globally distributed (Cloudflare D1, Turso, distributed Postgres setups).
The Edge Runtime is a tool with sharp edges. Used well, it's a meaningful performance and cost win. Used as a default for everything, it's a regression that takes weeks to dig out of.