Supabase is the open-source Firebase alternative built on Postgres. It gives you a hosted Postgres database, REST and GraphQL APIs auto-generated from your schema, real-time subscriptions, built-in authentication, file storage, and serverless Edge Functions, all with a generous free tier and a clean TypeScript SDK.
Unlike Firebase (which locks you into Firestore's document model), Supabase gives you a real relational database. You write SQL. Your data has proper schemas and foreign keys. And if you ever want to self-host or migrate away, you can; it's Postgres underneath.
This guide covers everything: project setup, schema design, the JavaScript/TypeScript client, Row Level Security, authentication, realtime, storage, Edge Functions, and deployment.
Why Supabase?
The Firebase comparison comes up constantly, so let's address it directly:
| Feature | Supabase | Firebase |
|---|---|---|
| Database | PostgreSQL (relational) | Firestore (document) |
| SQL | Yes | No |
| Joins | Yes | No (client-side only) |
| Migrations | SQL-based, versionable | Schema-less |
| Auth | Built-in + Social | Built-in + Social |
| Realtime | Postgres CDC | Firestore listeners |
| Self-hostable | Yes | No |
| Open source | Yes | No |
| Vendor lock-in | Low (just Postgres) | High |
Choose Supabase when: your data is relational, you value SQL, you want self-hosting options, or you're concerned about vendor lock-in.
Choose Firebase when: you're building a prototype fast, your team doesn't know SQL, or you're in the Google Cloud ecosystem.
Step 1: Create a Project
- Go to supabase.com and sign up
- Create a new project (choose a region close to your users)
- Note your Project URL and anon key from Settings → API
Install the client SDK:
npm install @supabase/supabase-js
Initialize the client:
// lib/supabase.ts
import { createClient } from "@supabase/supabase-js";
import type { Database } from "./database.types"; // generated by Supabase CLI
const supabaseUrl = process.env.NEXT_PUBLIC_SUPABASE_URL!;
const supabaseAnonKey = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!;
export const supabase = createClient<Database>(supabaseUrl, supabaseAnonKey);
Always type your client with generated Database types (covered below). Untyped Supabase client queries are a common source of bugs.
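To see what the generated types buy you, here is a sketch using a hand-written stand-in for the CLI output (the real `Database` type comes from `supabase gen types` and covers every table; this one is trimmed to `posts` for illustration):

```typescript
// Hand-written stand-in for the generated database.types.ts
// (assumption: the real file is produced by the Supabase CLI).
type Database = {
  public: {
    Tables: {
      posts: {
        Row: { id: string; title: string; body: string; published: boolean };
        Insert: { id?: string; title: string; body: string; published?: boolean };
        Update: { id?: string; title?: string; body?: string; published?: boolean };
      };
    };
  };
};

// Aliases so application code never spells out the full path.
type Post = Database["public"]["Tables"]["posts"]["Row"];
type NewPost = Database["public"]["Tables"]["posts"]["Insert"];

// id and published are optional on Insert because the database supplies defaults.
const draft: NewPost = { title: "Hello", body: "World" };
```

With the client typed this way, `supabase.from("posts").insert(...)` rejects misspelled columns at compile time instead of failing at runtime.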
Step 2: Generate TypeScript Types
Install the Supabase CLI:
npm install --save-dev supabase
npx supabase login
npx supabase gen types typescript --project-id YOUR_PROJECT_ID > src/lib/database.types.ts
This generates a Database type that maps exactly to your schema: table names, column names, types. Your editor will autocomplete table and column names in queries.
Add this to package.json scripts:
"db:types": "supabase gen types typescript --project-id YOUR_PROJECT_ID > src/lib/database.types.ts"
Run it after every schema change.
Step 3: Schema Design
Create tables in the Supabase Table Editor or write SQL migrations. SQL migrations are strongly recommended for production:
-- supabase/migrations/20260402000000_init.sql
CREATE TABLE IF NOT EXISTS profiles (
id UUID PRIMARY KEY REFERENCES auth.users(id) ON DELETE CASCADE,
username TEXT UNIQUE NOT NULL,
full_name TEXT,
avatar_url TEXT,
updated_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE TABLE IF NOT EXISTS posts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
user_id UUID NOT NULL REFERENCES profiles(id) ON DELETE CASCADE,
title TEXT NOT NULL,
body TEXT NOT NULL,
published BOOLEAN DEFAULT FALSE,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX ON posts(user_id);
CREATE INDEX ON posts(created_at DESC) WHERE published = TRUE;
-- Auto-update updated_at
CREATE OR REPLACE FUNCTION update_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER posts_updated_at
BEFORE UPDATE ON posts
FOR EACH ROW EXECUTE FUNCTION update_updated_at();
Apply migrations:
npx supabase db push
Step 4: CRUD with the JavaScript Client
// Reading data
const { data: posts, error } = await supabase
.from("posts")
.select("id, title, created_at, profiles(username, avatar_url)")
.eq("published", true)
.order("created_at", { ascending: false })
.limit(20);
// Creating
const { data: post, error } = await supabase
.from("posts")
.insert({ title: "My First Post", body: "Hello world", user_id: userId })
.select()
.single();
// Updating
const { error } = await supabase
.from("posts")
.update({ published: true })
.eq("id", postId)
.eq("user_id", userId); // Ensure ownership
// Deleting
const { error } = await supabase
.from("posts")
.delete()
.eq("id", postId);
The .select() call with foreign table names enables joins: profiles(username, avatar_url) performs a JOIN to the profiles table and returns those columns nested in the result.
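The nested result for the read query above has roughly this shape, and a small helper can flatten it for UI code (field names are taken from the schema in Step 3; the `FeedRow` name and the flattened shape are my own, not part of the SDK):

```typescript
// Approximate shape of one row from the nested select in Step 4.
// (Assumption: profiles embeds as a single object because posts.user_id
// references exactly one profile.)
type FeedRow = {
  id: string;
  title: string;
  created_at: string;
  profiles: { username: string; avatar_url: string | null } | null;
};

// Flatten the nested profile so components don't reach into row.profiles.
function flattenFeedRow(row: FeedRow) {
  return {
    id: row.id,
    title: row.title,
    createdAt: row.created_at,
    username: row.profiles?.username ?? "unknown",
    avatarUrl: row.profiles?.avatar_url ?? null,
  };
}

const flat = flattenFeedRow({
  id: "1",
  title: "Hi",
  created_at: "2026-01-01T00:00:00Z",
  profiles: { username: "ada", avatar_url: null },
});
```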
Step 5: Authentication
Supabase Auth supports email/password, magic links, and 20+ OAuth providers (Google, GitHub, Apple, etc.):
// Sign up
const { data, error } = await supabase.auth.signUp({
email: "user@example.com",
password: "strongpassword",
});
// Sign in
const { data, error } = await supabase.auth.signInWithPassword({
email: "user@example.com",
password: "strongpassword",
});
// OAuth (Google)
const { data, error } = await supabase.auth.signInWithOAuth({
provider: "google",
options: { redirectTo: "https://yourapp.com/auth/callback" },
});
// Get current user
const { data: { user } } = await supabase.auth.getUser();
// Sign out
await supabase.auth.signOut();
// Listen to auth state changes
supabase.auth.onAuthStateChange((event, session) => {
if (event === "SIGNED_IN") handleSignIn(session);
if (event === "SIGNED_OUT") handleSignOut();
});
Creating a profile on sign-up
Automatically create a profile row when a new user signs up, using a database trigger:
CREATE FUNCTION handle_new_user()
RETURNS TRIGGER AS $$
BEGIN
INSERT INTO public.profiles (id, username, full_name)
VALUES (
NEW.id,
split_part(NEW.email, '@', 1), -- Default username from email
NEW.raw_user_meta_data ->> 'full_name'
);
RETURN NEW;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
CREATE TRIGGER on_auth_user_created
AFTER INSERT ON auth.users
FOR EACH ROW EXECUTE FUNCTION handle_new_user();
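For reference, the `split_part(NEW.email, '@', 1)` default-username logic is equivalent to this TypeScript (useful if you ever derive the same value client-side):

```typescript
// Mirror of Postgres split_part(email, '@', 1): everything before the
// first '@'. Like split_part, returns the whole string if there is no '@'.
function defaultUsername(email: string): string {
  return email.split("@")[0];
}
```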
Step 6: Row Level Security
Row Level Security (RLS) is Supabase's superpower. It lets you define access rules directly in Postgres that are enforced on every query, regardless of whether the query comes from your application, the Supabase client, or a direct database connection.
-- Enable RLS on the posts table
ALTER TABLE posts ENABLE ROW LEVEL SECURITY;
-- Policy: Anyone can read published posts
CREATE POLICY "Published posts are publicly readable"
ON posts FOR SELECT
USING (published = TRUE);
-- Policy: Users can read their own drafts
CREATE POLICY "Users can read own posts"
ON posts FOR SELECT
USING (auth.uid() = user_id);
-- Policy: Users can insert their own posts
CREATE POLICY "Users can create posts"
ON posts FOR INSERT
WITH CHECK (auth.uid() = user_id);
-- Policy: Users can update their own posts
CREATE POLICY "Users can update own posts"
ON posts FOR UPDATE
USING (auth.uid() = user_id);
-- Policy: Users can delete their own posts
CREATE POLICY "Users can delete own posts"
ON posts FOR DELETE
USING (auth.uid() = user_id);
With RLS enabled, a logged-in user querying posts will automatically only see their own drafts and all published posts, without any application-level filtering. The database enforces it. See the companion deep-dive for multi-tenant RLS patterns.
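Multiple policies for the same command combine with OR, so the two SELECT policies above are equivalent to this predicate (a TypeScript mirror for illustration only; you never need this in app code, since Postgres evaluates it for you):

```typescript
type PostRow = { user_id: string; published: boolean };

// Mirrors the combined SELECT policies: published posts are public,
// and authors can always see their own rows. uid plays the role of
// auth.uid(), which is null for anonymous requests.
function canSelectPost(post: PostRow, uid: string | null): boolean {
  return post.published || (uid !== null && uid === post.user_id);
}
```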
Step 7: Realtime Subscriptions
// Subscribe to new posts
const subscription = supabase
.channel("public:posts")
.on(
"postgres_changes",
{ event: "INSERT", schema: "public", table: "posts", filter: "published=eq.true" },
(payload) => {
console.log("New post:", payload.new);
addPostToList(payload.new);
}
)
.subscribe();
// Clean up
return () => supabase.removeChannel(subscription);
Realtime works via Postgres logical replication. Changes to the database are streamed to connected clients in real time. You can filter by table, event type (INSERT, UPDATE, DELETE), and column values.
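When feeding realtime payloads into client state, it's worth guarding against duplicates (for example, after a reconnect you may refetch rows you already received via the channel). A defensive sketch, with my own naming:

```typescript
type PostLike = { id: string };

// Insert a post at the front of the list unless its id is already present;
// if present, refresh that entry instead. Returns a new array so it works
// cleanly with React state setters.
function upsertById<T extends PostLike>(list: T[], incoming: T): T[] {
  const idx = list.findIndex((p) => p.id === incoming.id);
  if (idx === -1) return [incoming, ...list]; // newest first
  const next = list.slice();
  next[idx] = incoming;
  return next;
}

const afterNew = upsertById([{ id: "1" }], { id: "2" });
const afterDup = upsertById([{ id: "1" }], { id: "1" });
```

In the subscription callback from Step 7, `addPostToList(payload.new)` would then be `setPosts((prev) => upsertById(prev, payload.new))`.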
Step 8: Storage
// Upload a file
const { data, error } = await supabase.storage
.from("avatars")
.upload(`${userId}/avatar.jpg`, file, {
cacheControl: "3600",
upsert: true,
contentType: "image/jpeg",
});
// Get a public URL
const { data: { publicUrl } } = supabase.storage
.from("avatars")
.getPublicUrl(`${userId}/avatar.jpg`);
// Download a file (for private buckets)
const { data, error } = await supabase.storage
.from("private-files")
.download("document.pdf");
Storage buckets can be public (direct URL access) or private (requires a signed URL). Storage policies use the same RLS syntax as database policies.
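For private buckets, generate a time-limited signed URL with `createSignedUrl`, which takes the object path and an expiry in seconds. A sketch, assuming the `supabase` client from Step 1 is in scope (the `declare` stands in for importing it from `lib/supabase`):

```typescript
// Assumption: the real client from lib/supabase.ts is imported here;
// this loose declaration only keeps the sketch self-contained.
declare const supabase: {
  storage: {
    from(bucket: string): {
      createSignedUrl(path: string, expiresIn: number): Promise<any>;
    };
  };
};

// createSignedUrl expects seconds, not milliseconds.
const ONE_HOUR = 60 * 60;

async function privateFileUrl(path: string): Promise<string | null> {
  const { data, error } = await supabase.storage
    .from("private-files")
    .createSignedUrl(path, ONE_HOUR);
  if (error) return null;
  return data.signedUrl; // valid for one hour
}
```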
Step 9: Edge Functions
For server-side logic (webhooks, payment processing, sending emails), use Supabase Edge Functions: Deno-based TypeScript functions deployed globally.
// supabase/functions/send-welcome-email/index.ts
import { serve } from "https://deno.land/std@0.177.0/http/server.ts";
import { createClient } from "https://esm.sh/@supabase/supabase-js@2";
serve(async (req) => {
const { userId } = await req.json();
const supabase = createClient(
Deno.env.get("SUPABASE_URL")!,
Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!,
);
const { data: user } = await supabase.auth.admin.getUserById(userId);
// Send welcome email via Resend/SendGrid...
return new Response(JSON.stringify({ success: true }), {
headers: { "Content-Type": "application/json" },
});
});
Deploy: npx supabase functions deploy send-welcome-email
Call from your app: await supabase.functions.invoke("send-welcome-email", { body: { userId } })
Common Pitfalls
1. Not enabling RLS: all tables without RLS are accessible to anyone with the anon key. Enable RLS on every table from day one.
2. Using service_role key in the browser: this key bypasses RLS entirely. It must only be used server-side. The anon key is the browser-safe key.
3. Forgetting the realtime filter: without a filter, a realtime subscription receives all changes to a table. Add filters to limit traffic and avoid leaking data.
4. N+1 queries via nested selects: Supabase's nested select syntax performs a JOIN, not a separate query per row. Use it freely.
5. Schema changes without type regeneration: after any migration, run npm run db:types to keep TypeScript types in sync.
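For pitfall 2, a cheap safety net is a module-load assertion so service-role code can never silently run in a browser (a sketch; it relies on `window` existing in browsers but not in Node or Deno):

```typescript
// Throw immediately if this module is ever bundled into browser code.
// (Assumption: server code runs in Node/Deno, where globalThis.window
// is undefined.)
function assertServerOnly(context: string): void {
  if (typeof (globalThis as any).window !== "undefined") {
    throw new Error(`${context} must only run on the server`);
  }
}

assertServerOnly("admin client module");
// ...only past this point is it safe to construct a client with the
// service_role key.
```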
FAQ
Q: Can I use Supabase with Next.js App Router?
Yes. Use the @supabase/ssr package for Server Components and Route Handlers; it handles cookie-based auth sessions correctly in the App Router.
Q: Is Supabase suitable for production? Yes. Supabase runs managed Postgres infrastructure (backed by AWS/Fly) with daily backups, connection pooling via PgBouncer, and a 99.9% uptime SLA on Pro+ plans.
Q: Can I run SQL queries directly?
Yes: via the Supabase dashboard SQL editor, via the Supabase CLI's supabase db commands, or via the rpc() client method for stored procedures.
Q: How does pricing compare to Firebase? Supabase's free tier is generous (500MB database, 1GB storage, 50k monthly active users). Pro is $25/month. Firebase pricing can be unpredictable with complex rules around read/write counts.
Q: What about migrations in production?
Use Supabase CLI migrations. supabase db push applies pending migrations; supabase db diff generates a migration from schema changes.
Conclusion
Supabase is the right BaaS for teams that want the velocity of Firebase without giving up SQL, proper data modeling, or the option to self-host. The combination of Postgres, auto-generated APIs, RLS, built-in auth, and realtime makes it a complete backend for most SaaS and app projects.
Next: Supabase RLS Patterns for Multi-Tenant SaaS, a deep dive into advanced RLS patterns.