How to Migrate From Supabase — Complete Step-by-Step Guide

March 31, 2026 · 12 min read · Updated regularly

⚠️ Before you migrate: If you just need Supabase working in India right now, a proxy fix takes 60 seconds. Migration takes days. Use the proxy as a stopgap while following this guide.

This guide walks through migrating from Supabase to another platform — step by step, with actual commands and scripts. We'll cover database export, auth migration, storage transfer, RLS policy translation, and zero-downtime cutover.

Target audience: developers who've decided to move off Supabase (whether due to the India block, pricing, or wanting more control).

Step 0: Decide Your Target

Before migrating, know where you're going. The common targets are another Postgres host (Neon, Railway, or self-hosted) or a platform with its own data model (Firebase).

Not sure? Take our free migration assessment — it recommends the best target based on your project.

💡 Recommendation: If you're staying on Postgres (Neon, Railway, or self-hosted), this guide is mostly copy-paste. If you're moving to Firebase or a non-Postgres platform, expect 2-3x the effort because you're rewriting your data layer.

Step 1: Export Your Database

Supabase gives you full Postgres access, so you can use standard tools:

Get your connection string

Go to your Supabase dashboard → Settings → Database → Connection string. Copy the URI format.

Export schema + data

# Export everything (schema + data)
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --no-privileges \
  -F c \
  -f supabase_backup.dump

# Or for plain SQL (easier to inspect/edit):
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --no-privileges \
  --schema=public \
  -f supabase_backup.sql
💡 Tip: Use --schema=public to exclude Supabase's internal schemas (auth, storage, realtime, etc.). You'll migrate those separately.

Export auth users separately

# Export auth.users table
pg_dump "postgresql://postgres:[PASSWORD]@db.[PROJECT_REF].supabase.co:5432/postgres" \
  --no-owner \
  --schema=auth \
  --table=auth.users \
  -f auth_users.sql

Step 2: Import to Your New Database

For Neon / Railway / Any Postgres

# Restore from dump
pg_restore -d "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  --no-owner \
  --no-privileges \
  supabase_backup.dump

# Or from SQL file
psql "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  -f supabase_backup.sql

Verify your tables are there:

psql "postgresql://[USER]:[PASSWORD]@[HOST]/[DB]" \
  -c "\dt"
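Beyond listing tables, it's worth comparing per-table row counts between the old and new databases before you cut over. A minimal sketch of that comparison step — the diffCounts helper and the count objects are hypothetical; in practice you'd populate them from SELECT count(*) queries against each database:

```javascript
// Compare per-table row counts from the old and new databases.
// Returns the tables whose counts don't match, so you can re-sync them.
function diffCounts(oldCounts, newCounts) {
  const mismatches = [];
  for (const [table, count] of Object.entries(oldCounts)) {
    if (newCounts[table] !== count) {
      mismatches.push({ table, old: count, new: newCounts[table] ?? 0 });
    }
  }
  return mismatches;
}

// Example: counts gathered via SELECT count(*) on each side
const before = { users: 1204, posts: 5310, comments: 80211 };
const after = { users: 1204, posts: 5310, comments: 80198 };
console.log(diffCounts(before, after));
// → [ { table: 'comments', old: 80211, new: 80198 } ]
```

An empty result means every table made it across intact; anything else tells you exactly what to re-export.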

For Firebase (Firestore)

There's no direct SQL → Firestore migration. You need to:

  1. Export your Supabase data as JSON (use psql with \copy or a script)
  2. Write a migration script that maps your relational data to Firestore's document model
  3. Denormalize joins into nested documents or subcollections
  4. Import using Firebase Admin SDK

This is the hardest part of a Firebase migration. Budget 1-3 days depending on schema complexity.

Step 3: Migrate Authentication

Supabase uses GoTrue for auth. Your user data is in the auth.users table.

To Clerk

// Export users from Supabase auth.users,
// then import via the Clerk Backend API:
const clerk = require('@clerk/clerk-sdk-node');

for (const user of supabaseUsers) {
  await clerk.users.createUser({
    emailAddress: [user.email],
    // Pass the bcrypt hash as a digest — not as `password`,
    // which Clerk would treat as a new plaintext password
    passwordDigest: user.encrypted_password,
    passwordHasher: 'bcrypt',
    firstName: user.raw_user_meta_data?.first_name,
    lastName: user.raw_user_meta_data?.last_name,
  });
}

To Firebase Auth

// Use the Firebase Admin SDK bulk import
const admin = require('firebase-admin');

const users = supabaseUsers.map(u => ({
  uid: u.id,
  email: u.email,
  // bcrypt embeds its salt in the hash, so no separate salt is needed
  passwordHash: Buffer.from(u.encrypted_password),
}));

// Firebase supports bcrypt password hashes
const result = await admin.auth().importUsers(users, {
  hash: { algorithm: 'BCRYPT' }
});
console.log(`imported: ${result.successCount}, failed: ${result.failureCount}`);
⚠️ Important: Supabase uses bcrypt for password hashing. Most auth providers (Clerk, Firebase, Auth0) can import bcrypt hashes directly — users won't need to reset passwords. Verify your target supports bcrypt import before migrating.

Step 4: Migrate Storage

Supabase Storage is built on S3-compatible APIs. Your files are in buckets.

Download all files

// Using the Supabase JS client (plus Node's fs for the local backup)
const fs = require('fs');

const { data: buckets } = await supabase.storage.listBuckets();

for (const bucket of buckets) {
  // Note: list() is paginated and only returns the top level —
  // recurse into folders and use limit/offset for larger buckets
  const { data: files } = await supabase.storage
    .from(bucket.name)
    .list('', { limit: 1000 });

  fs.mkdirSync(`./backup/${bucket.name}`, { recursive: true });

  for (const file of files) {
    const { data } = await supabase.storage
      .from(bucket.name)
      .download(file.name);

    // Save to disk (or stream straight to your new storage provider)
    fs.writeFileSync(`./backup/${bucket.name}/${file.name}`, Buffer.from(await data.arrayBuffer()));
  }
}

Upload to new storage

For Cloudflare R2 (S3-compatible, free egress):

// Using AWS SDK (R2 is S3-compatible)
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const r2 = new S3Client({
  region: 'auto',
  endpoint: 'https://[ACCOUNT_ID].r2.cloudflarestorage.com',
  credentials: { accessKeyId: R2_KEY, secretAccessKey: R2_SECRET }
});

// Upload each file
await r2.send(new PutObjectCommand({
  Bucket: 'my-bucket',
  Key: file.name,
  Body: fileBuffer,
  ContentType: file.contentType
}));

Step 5: Translate RLS Policies

Supabase's Row-Level Security policies run in Postgres. If your new database is Postgres (Neon, self-hosted), the policies themselves migrate as-is — though any that call Supabase helpers like auth.uid() will need those functions recreated (or the check rewritten), since they live in Supabase's auth schema.

If you're moving to a non-Postgres platform, you need to translate RLS to application-level security:

// Supabase RLS policy (Postgres):
// CREATE POLICY "Users can only see their own data"
//   ON public.profiles FOR SELECT
//   USING (auth.uid() = user_id);

// Translated to application code (e.g., Next.js API route):
async function getProfile(userId) {
  // Verify the requesting user matches the profile owner
  const session = await getSession();
  if (session.userId !== userId) {
    throw new Error('Unauthorized');
  }
  return db.query('SELECT * FROM profiles WHERE user_id = $1', [userId]);
}
💡 Tip: List all your RLS policies before migrating: SELECT * FROM pg_policies; — this gives you every policy to translate.
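If you have more than a handful of policies, it helps to centralize the ownership check most of them express instead of repeating it in every route. A hypothetical helper (the requireOwner name is ours, not part of any framework):

```javascript
// Application-level equivalent of the common RLS pattern
// `USING (auth.uid() = user_id)`: reject any request where the
// session user doesn't own the row being accessed.
function requireOwner(session, rowUserId) {
  if (!session || session.userId !== rowUserId) {
    throw new Error('Unauthorized');
  }
}

// Usage inside a data-access function:
// requireOwner(await getSession(), profile.user_id);
// ...then run the query only if no error was thrown.
```

One helper per policy pattern keeps the translated rules auditable — you can diff them against the output of pg_policies to confirm nothing was missed.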

Step 6: Update Your App

Replace the Supabase client with your new stack's client. This is where the effort varies dramatically:

If you moved to Neon (still Postgres)

// Before (Supabase)
import { createClient } from '@supabase/supabase-js'
const supabase = createClient(URL, KEY)
const { data } = await supabase.from('users').select('*')

// After (Neon serverless driver — or put Drizzle/Prisma on top)
import { neon } from '@neondatabase/serverless'
const sql = neon(DATABASE_URL)
const data = await sql`SELECT * FROM users`

If you moved to Firebase

// Before (Supabase)
const { data } = await supabase.from('users').select('*').eq('id', userId)

// After (Firebase)
const snap = await getDoc(doc(db, 'users', userId))
const data = snap.data()

Every query needs rewriting. Joins become subcollection queries or denormalized reads.

Step 7: Zero-Downtime Cutover

  1. Set up the new stack in parallel — don't touch your existing Supabase config yet
  2. Run a final data sync — export + import the latest data from Supabase
  3. Switch your app's config — update environment variables to point to the new database/auth/storage
  4. Deploy — one deployment switches everything
  5. Monitor for 24 hours — watch for auth failures, missing data, broken queries
  6. Keep Supabase running for 1 week — as a rollback option. Then decommission.
⚠️ Data sync timing matters. Between your final export and the config switch, any writes to Supabase are lost. For high-traffic apps, consider a maintenance window (even 5 minutes) or implement dual-write during the transition.
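The dual-write approach mentioned above can be as simple as a wrapper around your write path: the primary write must succeed, while the mirror is best-effort so a lagging copy never fails user-facing requests. A minimal sketch — primaryWrite and secondaryWrite are placeholders for your two real clients:

```javascript
// During the transition, send every write to both databases.
// The primary write throws on failure; the secondary is best-effort,
// logged for later reconciliation instead of failing the request.
async function dualWrite(primaryWrite, secondaryWrite, payload) {
  const result = await primaryWrite(payload);
  try {
    await secondaryWrite(payload);
  } catch (err) {
    console.error('secondary write failed:', err.message);
  }
  return result;
}

// Usage (clients are placeholders for your real drivers):
// await dualWrite(
//   p => newDb.insertUser(p),
//   p => supabase.from('users').insert(p),
//   { email: 'a@example.com' }
// );
```

Run this only for the transition window, then delete it — long-lived dual-write paths are a common source of silent drift.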

Migration Checklist

  1. Database exported (pg_dump with --schema=public)
  2. Auth users exported (auth.users)
  3. Data imported and row counts verified on the new database
  4. Auth users imported (bcrypt hashes confirmed working)
  5. Storage files downloaded and re-uploaded
  6. RLS policies translated (or carried over if you stayed on Postgres)
  7. App client code updated and deployed
  8. Final data sync completed, monitored for 24 hours
  9. Supabase kept as rollback for 1 week, then decommissioned

Want Us to Handle the Migration?

Take our free assessment, get your complexity score, and choose DIY ($99) or done-for-you ($299).

Free Migration Assessment →
