How to Migrate From Supabase — Complete Step-by-Step Guide
This guide walks through migrating from Supabase to another platform — step by step, with actual commands and scripts. We'll cover database export, auth migration, storage transfer, RLS policy translation, and zero-downtime cutover.
Target audience: developers who've decided to move off Supabase (whether due to the India block, pricing, or wanting more control).
Step 0: Decide Your Target
Before migrating, know where you're going:
- Neon — Serverless Postgres. Easiest migration from Supabase because it's still Postgres. Your SQL, schemas, and most queries work as-is.
- Self-hosted Supabase — Same stack, your server. Zero code changes but you manage infrastructure.
- Firebase — Google's BaaS. Full rewrite of your data layer (NoSQL vs SQL).
- Custom stack — Postgres (Neon/Railway) + Clerk/Auth0 + S3/R2. Most work upfront, most flexibility long-term.
Not sure? Take our free migration assessment — it recommends the best target based on your project.
Step 1: Export Your Database
Supabase gives you full Postgres access, so you can use standard tools:
Get your connection string
Go to your Supabase dashboard → Settings → Database → Connection string. Copy the URI format.
Export schema + data
# Export everything (schema + data)
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --no-privileges \
  -F c \
  -f supabase_backup.dump

# Or for plain SQL (easier to inspect/edit):
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --no-privileges \
  --schema=public \
  -f supabase_backup.sql
The --schema=public flag excludes Supabase's internal schemas (auth, storage, realtime, etc.); you'll migrate those separately.
Export auth users separately
# Export the auth.users table
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --schema=auth \
  --table=auth.users \
  -f auth_users.sql
Step 2: Import to Your New Database
For Neon / Railway / Any Postgres
# Restore from dump
pg_restore -d "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  --no-owner \
  --no-privileges \
  supabase_backup.dump

# Or from SQL file
psql "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  -f supabase_backup.sql
Verify your tables are there:
psql "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  -c "\dt"
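Beyond eyeballing \dt, it's worth comparing row counts table by table. A minimal sketch, assuming you've already collected counts from each side (e.g. with SELECT count(*) per table); the table names here are examples:

```javascript
// Compare per-table row counts between the old and new databases.
function diffRowCounts(oldCounts, newCounts) {
  const mismatches = [];
  for (const [table, count] of Object.entries(oldCounts)) {
    if (newCounts[table] !== count) {
      // Record any table whose count differs (or is missing entirely)
      mismatches.push({ table, old: count, new: newCounts[table] ?? 0 });
    }
  }
  return mismatches;
}

// Example: posts is off by one row, profiles matches
const mismatches = diffRowCounts(
  { profiles: 120, posts: 3400 },
  { profiles: 120, posts: 3399 }
);
```

Anything this flags means your final sync missed data; re-run the export/import for that table before cutover.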
For Firebase (Firestore)
There's no direct SQL → Firestore migration. You need to:
- Export your Supabase data as JSON (use psql with \copy or a script)
- Write a migration script that maps your relational data to Firestore's document model
- Denormalize joins into nested documents or subcollections
- Import using Firebase Admin SDK
This is the hardest part of a Firebase migration. Budget 1-3 days depending on schema complexity.
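The mapping step above can be sketched as a pure function. This is a minimal sketch: the posts/comments schema and field names are hypothetical, and real migrations will have deeper joins:

```javascript
// Denormalize a one-to-many join into Firestore-style documents.
function toFirestoreDocs(posts, comments) {
  // Group child rows by their foreign key
  const byPost = new Map();
  for (const c of comments) {
    if (!byPost.has(c.post_id)) byPost.set(c.post_id, []);
    byPost.get(c.post_id).push({ author: c.author, body: c.body });
  }
  // One document per parent row, children embedded as an array
  return posts.map(p => ({
    id: String(p.id),
    data: { title: p.title, comments: byPost.get(p.id) || [] },
  }));
}

const docs = toFirestoreDocs(
  [{ id: 1, title: 'Hello' }],
  [{ post_id: 1, author: 'ana', body: 'First!' }]
);
```

Each returned object maps to one Firestore document; write them with the Admin SDK (batched writes keep the import fast).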
Step 3: Migrate Authentication
Supabase uses GoTrue for auth. Your user data is in the auth.users table.
To Clerk
// Export users from Supabase auth.users
// Then import via the Clerk Backend API:
const { users } = require('@clerk/clerk-sdk-node');
for (const user of supabaseUsers) {
  await users.createUser({
    emailAddress: [user.email],
    // Import the bcrypt hash as a digest; passing it as a plaintext
    // password would lock everyone out
    passwordDigest: user.encrypted_password,
    passwordHasher: 'bcrypt',
    firstName: user.raw_user_meta_data?.first_name,
    lastName: user.raw_user_meta_data?.last_name,
  });
}
To Firebase Auth
// Use Firebase Admin SDK bulk import
const admin = require('firebase-admin');
const users = supabaseUsers.map(u => ({
  uid: u.id,
  email: u.email,
  // Supabase uses bcrypt, which embeds the salt in the hash itself,
  // so no separate passwordSalt is needed
  passwordHash: Buffer.from(u.encrypted_password),
}));
// Firebase supports bcrypt password hashes
await admin.auth().importUsers(users, {
  hash: { algorithm: 'BCRYPT' }
});
Step 4: Migrate Storage
Supabase Storage is built on S3-compatible APIs. Your files are in buckets.
Download all files
// Using the Supabase JS client
const fs = require('fs');
const { data: buckets } = await supabase.storage.listBuckets();
for (const bucket of buckets) {
  // Note: list() is non-recursive and paginated; loop with an offset
  // (and recurse into folders) if a bucket holds more than 1,000 objects
  const { data: files } = await supabase.storage
    .from(bucket.name)
    .list('', { limit: 1000 });
  fs.mkdirSync(`./backup/${bucket.name}`, { recursive: true });
  for (const file of files) {
    const { data } = await supabase.storage
      .from(bucket.name)
      .download(file.name);
    // download() returns a Blob; convert it to a Buffer before writing
    fs.writeFileSync(
      `./backup/${bucket.name}/${file.name}`,
      Buffer.from(await data.arrayBuffer())
    );
  }
}
Upload to new storage
For Cloudflare R2 (S3-compatible, free egress):
// Using AWS SDK (R2 is S3-compatible)
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');
const r2 = new S3Client({
  region: 'auto',
  endpoint: 'https://[ACCOUNT_ID].r2.cloudflarestorage.com',
  credentials: { accessKeyId: R2_KEY, secretAccessKey: R2_SECRET }
});

// Upload each file
await r2.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: file.name,
  Body: fileBuffer,
  ContentType: file.contentType
}));
Step 5: Translate RLS Policies
Supabase's Row-Level Security policies run in Postgres. If your new database is Postgres (Neon, self-hosted), your RLS policies migrate as-is.
If you're moving to a non-Postgres platform, you need to translate RLS to application-level security:
// Supabase RLS policy (Postgres):
// CREATE POLICY "Users can only see their own data"
// ON public.profiles FOR SELECT
// USING (auth.uid() = user_id);
// Translated to application code (e.g., Next.js API route):
async function getProfile(userId) {
  // Verify the requesting user matches the profile owner
  const session = await getSession();
  if (session.userId !== userId) {
    throw new Error('Unauthorized');
  }
  return db.query('SELECT * FROM profiles WHERE user_id = $1', [userId]);
}

Run SELECT * FROM pg_policies; on your Supabase database to list every policy you need to translate.
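SELECT policies aren't the only ones to translate: FOR UPDATE and FOR DELETE policies become the same ownership check, run before each write. A minimal sketch (the session/row shapes are illustrative):

```javascript
// App-level equivalent of USING (auth.uid() = user_id) for writes:
// only the row's owner may modify it
function canModify(session, row) {
  return Boolean(session && session.userId === row.user_id);
}

const owner = canModify({ userId: 'u1' }, { user_id: 'u1' }); // true
const other = canModify({ userId: 'u2' }, { user_id: 'u1' }); // false
```

Centralizing checks like this in one module makes it much easier to audit that every policy from pg_policies has an application-level counterpart.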
Step 6: Update Your App
Replace the Supabase client with your new stack's client. This is where the effort varies dramatically:
If you moved to Neon (still Postgres)
// Before (Supabase)
import { createClient } from '@supabase/supabase-js'
const supabase = createClient(URL, KEY)
const { data } = await supabase.from('users').select('*')

// After (Neon serverless driver; Drizzle or Prisma work on top of it too)
import { neon } from '@neondatabase/serverless'
const sql = neon(DATABASE_URL)
// The driver is called as a tagged template, which also parameterizes queries
const data = await sql`SELECT * FROM users`
If you moved to Firebase
// Before (Supabase)
const { data } = await supabase.from('users').select('*').eq('id', userId)

// After (Firebase)
const snap = await getDoc(doc(db, 'users', userId)) // don't shadow doc()
const data = snap.data()

Every query needs rewriting. Joins become subcollection queries or denormalized reads.
Step 7: Zero-Downtime Cutover
- Set up the new stack in parallel — don't touch your existing Supabase config yet
- Run a final data sync — export + import the latest data from Supabase
- Switch your app's config — update environment variables to point to the new database/auth/storage
- Deploy — one deployment switches everything
- Monitor for 24 hours — watch for auth failures, missing data, broken queries
- Keep Supabase running for 1 week as a rollback option, then decommission
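The config switch in step 3 usually amounts to swapping environment variables. A sketch assuming a Neon + Clerk + R2 target; every name and value below is a placeholder:

```shell
# Before (Supabase)
SUPABASE_URL=https://[PROJECT_REF].supabase.co
SUPABASE_ANON_KEY=[ANON_KEY]

# After (example: Neon + Clerk + R2)
DATABASE_URL=postgresql://[USER]:[PASSWORD]@[HOST]/[DB]
CLERK_SECRET_KEY=[CLERK_SECRET]
R2_ENDPOINT=https://[ACCOUNT_ID].r2.cloudflarestorage.com
```

Keeping both sets of variables defined in staging lets you flip back to Supabase with a single redeploy if monitoring turns up problems.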
Migration Checklist
- ☐ Database schema exported and imported
- ☐ Data verified in new database (row counts match)
- ☐ Auth users migrated (test login with existing credentials)
- ☐ Storage files transferred (verify file counts and access)
- ☐ RLS policies translated to new platform or app-level security
- ☐ All Supabase client calls replaced
- ☐ Environment variables updated
- ☐ Staging test passed end-to-end
- ☐ Final data sync completed
- ☐ Production deployed and monitoring active
Want Us to Handle the Migration?
Take our free assessment, get your complexity score, and choose DIY ($99) or done-for-you ($299).
Free Migration Assessment →