This guide walks you through migrating users from a legacy authentication provider (Clerk, Auth0, Firebase, a homegrown system, etc.) to the Prelude Auth API. The process is provider-agnostic; throughout the guide we call your existing system the legacy system and Prelude the Auth system. The migration runs in five phases. Each phase is safe to roll out behind a feature flag, and the source of truth shifts from the legacy system to Auth only at the very end.
Documentation Index
Fetch the complete documentation index at: https://docs.prelude.so/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
Before you start, make sure you have:
- A Prelude account with access to the Auth API.
- An Application ID (`$APP_ID`) — see Applications.
- Your Management API key (`$PRELUDE_API_KEY`) for backend calls.
- Base URL: `https://api.prelude.dev`
- Auth header: `Authorization: Bearer $PRELUDE_API_KEY`

Throughout the guide, `$USER_ID` refers to the Auth user ID (`usr_…`).
Step 1 — Mirror user changes to Auth
Before flipping the source of truth or running the backfill, you need a way to keep Auth in sync with whatever happens on your side in real time. You already have code paths in your own backend that update or delete users — when a profile is edited, when an email is added, when a user is offboarded. Mirror each of those changes into Auth by calling the Management API right next to the write you already do. You don’t need to subscribe to anything from the legacy provider for this: your own system is where these mutations originate (or where they’re observed and persisted), so it’s the right place to fan out an Auth call. At this stage your system is the source of truth, the replication is one-way (your system → Auth), and Auth is a downstream replica — never write to it directly.
Where to add the calls
Find every site in your backend that mutates a user — admin endpoints, profile-update handlers, identifier add/remove flows, account-deletion logic, etc. Each of them needs one extra step: after the write to your own DB (and, if applicable, the legacy provider), mirror the same change to Auth.
Mirroring an update
- Resolve the Auth user by `external_id` = your internal `user_id`. If no Auth user exists yet (the user hasn’t been migrated lazily or by the backfill), skip silently — there’s nothing to mirror.
- Patch profile fields (first name, last name, picture, locale, …) on the Auth user via the Management API.
- Reconcile identifiers as a diff, not a replacement:
  - Compute `desired = {identifiers on your side}` and `existing = {Auth identifiers}`.
  - Drop everything in `existing \ desired`.
  - Add everything in `desired \ existing`.
  - Skip everything in the intersection.
  - Apply the same logic separately for emails, OAuth identifiers (keyed by `provider:provider_user_id`), phone numbers, etc.
- Treat `NotFound` and `AlreadyExists` errors on individual identifier operations as success — they describe the state you wanted anyway.
- Don’t short-circuit on the first failure: collect errors across the profile patch and each identifier op, run them all, then join at the end. Partial progress beats an all-or-nothing retry.
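The diff-and-apply logic above can be sketched in TypeScript. This is a sketch with illustrative names (`diffIdentifiers`, `applyOps`, and the string-key scheme are assumptions, not the Management API client itself):

```typescript
// Identifiers are compared by a stable, deduplicated key, e.g.
// "email:a@x.com" or "oauth:google:12345" for OAuth accounts.
type IdentifierOp = { key: string; action: "add" | "remove" };

function diffIdentifiers(desired: string[], existing: string[]): IdentifierOp[] {
  const desiredSet = new Set(desired);
  const existingSet = new Set(existing);
  const ops: IdentifierOp[] = [];
  for (const key of existing) {
    if (!desiredSet.has(key)) ops.push({ key, action: "remove" }); // existing \ desired
  }
  for (const key of desired) {
    if (!existingSet.has(key)) ops.push({ key, action: "add" }); // desired \ existing
  }
  // Keys in the intersection are skipped entirely.
  return ops;
}

// Run every op, collecting errors instead of short-circuiting. The
// caller should already have mapped NotFound (on remove) and
// AlreadyExists (on add) to success before an error reaches here.
async function applyOps(
  ops: IdentifierOp[],
  apply: (op: IdentifierOp) => Promise<void>,
): Promise<Error[]> {
  const errors: Error[] = [];
  for (const op of ops) {
    try {
      await apply(op);
    } catch (e) {
      errors.push(e as Error); // keep going; join errors at the end
    }
  }
  return errors;
}
```

Run the same diff once per identifier class (emails, OAuth accounts, phone numbers) rather than mixing them into one set.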
Mirroring a delete
- Resolve the Auth user by `external_id` = your internal `user_id`.
- Call the Management API to delete the Auth user. `NotFound` → treat as success (already gone, which is the desired end state).
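A minimal sketch of the delete mirror, assuming a thin client wrapper that surfaces HTTP status codes (`findByExternalId` and `deleteAuthUser` are hypothetical helpers over the Management API calls listed below):

```typescript
// A 404 from the delete means the Auth user is already gone, which is
// the desired end state, so it is treated as success.
async function mirrorDelete(
  internalUserId: string,
  findByExternalId: (id: string) => Promise<{ authUserId: string } | null>,
  deleteAuthUser: (authUserId: string) => Promise<void>,
): Promise<void> {
  const user = await findByExternalId(internalUserId);
  if (user === null) return; // never provisioned on Auth — nothing to mirror
  try {
    await deleteAuthUser(user.authUserId);
  } catch (e) {
    if ((e as { status?: number }).status === 404) return; // NotFound → already deleted
    throw e; // anything else is a real failure
  }
}
```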
Management API calls used in this step
| Operation | Method + path | Reference |
|---|---|---|
| Find the Auth user by external_id | GET /v2/session/apps/{appID}/users | list-users |
| Patch profile fields | PATCH /v2/session/apps/{appID}/users/{userID}/profile | update-user-profile |
| Add an identifier | POST /v2/session/apps/{appID}/users/{userID}/identifiers | create-identifier |
| Remove an identifier | DELETE /v2/session/apps/{appID}/users/{userID}/identifiers | delete-identifier |
| Delete the Auth user | DELETE /v2/session/apps/{appID}/users/{userID} | delete-user |
Step 2 — Migrate active users via the migration hook
Active users are migrated lazily: each time a user signs in through the legacy provider, the frontend exchanges their legacy token for an Auth token via `session.migrate({ token: legacyToken })`. Auth forwards the legacy token to a webhook you own, which validates it and returns the user identity payload Auth needs to provision the user on its side.
Configure the migration hook
Register the URL Auth forwards legacy tokens to. Until this is configured, `session.migrate()` has nowhere to validate against.
Register the migration hook
Reference: create-migration-config. To rotate the URL later use update-migration-config; to turn migration off once the rollout is done use delete-migration-config.
Implement the validation endpoint
Your hook must:
- Validate the legacy token against your legacy provider (configure JWKS / token issuer so your backend can verify it).
- Resolve it to your internal user (email, user ID, org, etc.).
- Return the user identity payload Auth expects, so Auth can provision a user on its side mapped to your internal ID.
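The hook’s core logic might look like the following sketch. The exact identity payload Auth expects is defined in the migration-hook reference, so the field names here are illustrative assumptions; `verifyLegacyToken` stands in for your legacy provider’s JWKS-based JWT verification:

```typescript
type MigrationIdentity = {
  external_id: string; // your internal user_id (see the tip below)
  emails: string[];
  first_name?: string;
  last_name?: string;
};

async function handleMigrationHook(
  legacyToken: string,
  verifyLegacyToken: (t: string) => Promise<{ sub: string }>,
  loadInternalUser: (
    legacyId: string,
  ) => Promise<{ id: string; emails: string[]; firstName?: string; lastName?: string } | null>,
): Promise<MigrationIdentity> {
  // Throws on invalid/expired tokens — Auth then rejects the migration.
  const claims = await verifyLegacyToken(legacyToken);
  const user = await loadInternalUser(claims.sub);
  if (user === null) throw new Error("unknown legacy user");
  return {
    external_id: user.id, // internal user_id → Auth external_id
    emails: user.emails,
    first_name: user.firstName,
    last_name: user.lastName,
  };
}
```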
Use your internal `user_id` as the Auth `external_id` so backend code doesn’t need a translation table. Make sure the migration hook returns it.
Add a feature flag and initialize the SDK
Add a feature flag (e.g. `isSessionMigrationEnabled()`) so you can roll the migration out gradually and roll back instantly. Initialize the Auth SDK client once at app startup (a singleton such as `getSessionClient()`), and keep the legacy auth client wired in parallel — both must coexist during the migration window.
Frontend flow (auth interceptor)
The interceptor decides which token to attach to each outgoing request:
- Flag off → use the legacy token, behavior unchanged.
- Flag on → try `session.refresh()` first. The SDK returns a cached access token if still valid; otherwise it hits `/refresh`.
- No Auth session yet → fetch a token from the legacy provider, call `session.migrate({ token: legacyToken })` to exchange it (this method is thread-safe with a lock), then `session.refresh()` to get the fresh Auth access token. Cache that the user is now on Auth.
- Migration fails → fall back to the legacy token so the user is never locked out.
- `401` retry → if the request used an Auth token and got `Unauthenticated`, invalidate the local cache, refresh once, and retry the request a single time. This handles tokens that look valid client-side but were revoked server-side. Keep this rule even after the migration is fully done.
- Don’t loop: retry only once on `401`, and only when the failing token was an Auth token.
- Don’t double-migrate: after `migrate()` succeeds, subsequent calls naturally hit the cached Auth token first, so they skip the legacy path.
- Errors during legacy fetch must be surfaced (e.g. `LegacyUnavailableError`) so the UI can react — never silently send an unauthenticated request.
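The decision tree above can be condensed into a single token-selection function. This is a sketch: the assumption that `refresh()` resolves to `null` when no Auth session exists, and the helper names, are illustrative rather than the SDK’s actual contract:

```typescript
type TokenResult = { token: string; source: "session" | "legacy" };

async function pickToken(
  flagOn: boolean,
  session: {
    refresh(): Promise<string | null>;
    migrate(args: { token: string }): Promise<void>;
  },
  fetchLegacyToken: () => Promise<string>,
): Promise<TokenResult> {
  if (!flagOn) {
    return { token: await fetchLegacyToken(), source: "legacy" };
  }
  // Already migrated → cached (or freshly refreshed) Auth token.
  const cached = await session.refresh();
  if (cached !== null) return { token: cached, source: "session" };
  // Not migrated yet → exchange the legacy token, then refresh.
  const legacyToken = await fetchLegacyToken();
  try {
    await session.migrate({ token: legacyToken });
    const authToken = await session.refresh();
    if (authToken !== null) return { token: authToken, source: "session" };
  } catch {
    // Migration failed — fall through to the legacy token so the
    // user is never locked out.
  }
  return { token: legacyToken, source: "legacy" };
}
```

The `401`-retry rule lives in the response path, not here: on an `Unauthenticated` response to an Auth token, invalidate the cache, call `pickToken` again, and retry once.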
Backend: accept both tokens
The backend needs to validate either token format during the migration window:
- Inspect the `Authorization: Bearer <token>` header.
- Detect the token type — typically by the `iss` claim of a JWT, a token prefix (`sess_…` vs the legacy format), or by trying Auth first and falling back. Pick one detection strategy and stick to it; don’t validate against both signers blindly.
- Route validation accordingly:
  - Auth token → verify against the Auth JWKS endpoint.
  - Legacy token → verify against the legacy provider as before.
- Resolve to the same internal user identity in both branches. The user ID, org/tenant, and permissions must come out identical regardless of which token was presented — downstream code shouldn’t care which path it took.
- Log which path was used with a low-cardinality counter (`auth_path{type="session|legacy"}`). This is how you know when legacy traffic has drained and the flag can be removed.
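A sketch of the routing logic, assuming the prefix-based detection strategy (`sess_…`); the verifier callbacks stand in for real JWKS-based verification:

```typescript
type Principal = { userId: string; orgId: string };

async function authenticate(
  authorizationHeader: string | undefined,
  verifyAuthToken: (t: string) => Promise<Principal>,
  verifyLegacyToken: (t: string) => Promise<Principal>,
): Promise<{ principal: Principal; path: "session" | "legacy" }> {
  const token = authorizationHeader?.replace(/^Bearer\s+/i, "");
  if (!token) throw new Error("missing bearer token");
  // One detection strategy, applied consistently: token prefix.
  if (token.startsWith("sess_")) {
    return { principal: await verifyAuthToken(token), path: "session" };
  }
  return { principal: await verifyLegacyToken(token), path: "legacy" };
}
```

Increment your `auth_path` counter with the returned `path` so you can watch legacy traffic drain.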
Source of truth at this stage
| Action | Owner |
|---|---|
| User creation | Your system → Auth |
| User update | Your system → Auth |
| User deletion | Your system → Auth |
Step 3 — Backfill dormant users
Lazy migration via `session.migrate()` only covers users who actually log in during the rollout window. After Auth traffic stabilizes at ~100% of active users, you still have dormant users that exist on the legacy provider and in your own database but were never provisioned on Auth. A one-shot backfill closes the gap.
The backfill is safe to run now: any user who logs in during the backfill still flows through `migrate()` and gets provisioned on Auth that way. The backfill is idempotent — it skips anyone already provisioned — so the two paths can run concurrently without conflict.
Building the job
Write a CLI tool you can run from a developer machine or a one-shot task, with credentials for your local DB, your legacy provider, and Auth:
- Iterate your own user table. Your DB is the source of truth for which users you care about. Use keyset pagination ordered by (e.g.) `(created_at, id)` — `LIMIT/OFFSET` drifts when rows are inserted mid-scan and can’t resume after a crash. Filter to users that (a) have a legacy provider ID and (b) are not soft-deleted.
- Batch-fetch the legacy provider. For each DB page, collect the legacy IDs and fetch them from the legacy provider in chunks (most providers cap at ~100 IDs per call).
- Idempotency check first. For each user, look up Auth by `external_id` (your internal `user_id`). If it already resolves to an Auth profile, skip — that user was migrated lazily by the interceptor or by an earlier run of this job.
- Map legacy account → Auth identifiers. Pull every email and every OAuth external account from the legacy user. Normalize the provider name to whatever Auth expects (some legacy providers prefix it, e.g. `oauth_google` → `google`). Map the legacy profile → Auth profile (given name, family name, picture, locale, whatever your app reads). Use the same internal `user_id` as `external_id` that the migration hook uses — this is what guarantees the lazy and backfill paths converge on the same Auth user.
- Write atomically. Send the profile + identifiers to Auth in one call. Treat `IdentifierAlreadyExists` / `ExternalIDAlreadyExists` as benign — that’s the race with a concurrent `migrate()` call. Any other error → log and count as failed; do not halt the job.
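The job’s outer loop, sketched with the DB, legacy-provider, and Auth clients abstracted as callbacks (shapes and names are illustrative):

```typescript
type Row = { id: string; createdAt: string; legacyId: string };
type Cursor = { createdAt: string; id: string } | null;

async function backfill(
  // Keyset pagination: fetch the next page strictly after the cursor.
  fetchPage: (after: Cursor, limit: number) => Promise<Row[]>,
  alreadyOnAuth: (externalId: string) => Promise<boolean>,
  provisionOnAuth: (row: Row) => Promise<void>,
): Promise<{ created: number; skipped: number; failed: number }> {
  const stats = { created: 0, skipped: 0, failed: 0 };
  let cursor: Cursor = null;
  for (;;) {
    const page = await fetchPage(cursor, 100);
    if (page.length === 0) break;
    for (const row of page) {
      if (await alreadyOnAuth(row.id)) {
        stats.skipped++; // migrated lazily or by an earlier run
        continue;
      }
      try {
        await provisionOnAuth(row); // AlreadyExists is mapped to success inside
        stats.created++;
      } catch {
        stats.failed++; // log and continue — never halt the job
      }
    }
    // Advance the cursor to the last row of the page; safe to resume here
    // after a crash.
    const last = page[page.length - 1];
    cursor = { createdAt: last.createdAt, id: last.id };
  }
  return stats;
}
```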
Create the Auth user
Reference: create-user.
After this step, every user on the legacy provider also exists on Auth. You can now use Auth for sign-in.
Step 4 — Use Auth for sign-in and sign-up
Up to this point, users still sign in through the legacy provider’s UI; Auth only sees them via `migrate()` after the fact. Now that every user has an Auth profile (backfill done, lazy migration covering active users), you can flip the actual login surface to Auth and stop sending users through the legacy sign-in flow at all.
Before you start
- Backfill must be complete. If a user reaches the new Auth login page and has no Auth profile yet, they cannot sign in.
- Identifier coverage. Audit a sample: do users have at least one identifier per sign-in method you intend to support (email + password, each OAuth provider)? If you’re enabling Google sign-in but only 70% of users have a Google identifier, the other 30% will hit “no account found”.
- Passwords. Most legacy providers won’t let you export password hashes. If yours does and the hash format is compatible with Auth’s verifier, import them with the backfill so existing users keep their passwords. If not, password users will have to use a “forgot password” reset flow on first Auth sign-in — communicate this in advance.
Build the new login UI
- Build new routes powered by the Auth SDK: `/login`, `/signup`, `/forgot-password`, `/verify`, OAuth callback, etc. Keep them behind a feature flag (`isSessionLoginEnabled()`) separate from `isSessionMigrationEnabled()` so you can ramp them independently.
- Wire each sign-in method Auth-side:
  - Email + password → `session.signIn({ email, password })`. See Password Authentication.
  - OTP → Auth sends a code, the user clicks or enters it, and the SDK exchanges it for a session. See OTP Login.
  - OAuth → redirect to Auth’s OAuth start URL; Auth handles the provider round-trip and drops the user back on your callback. See Social Login.
- Preserve the `external_id` invariant. Whichever method the user picks, the Auth profile they end up authenticated as must be the one whose `external_id` equals your internal user ID. The backfill already guarantees this — your job is not to break it.
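A minimal sketch of gating the login surface behind its own flag; the legacy sign-in callback and the return value are illustrative, and `session.signIn` is the SDK call named above:

```typescript
async function handleLogin(
  email: string,
  password: string,
  sessionLoginEnabled: boolean, // isSessionLoginEnabled()
  session: { signIn(args: { email: string; password: string }): Promise<void> },
  legacySignIn: (email: string, password: string) => Promise<void>,
): Promise<"session" | "legacy"> {
  if (!sessionLoginEnabled) {
    await legacySignIn(email, password); // unchanged legacy flow
    return "legacy";
  }
  await session.signIn({ email, password }); // Auth is now the login surface
  return "session";
}
```

Ramping this flag independently of the migration flag lets you roll back the login UI without touching the token interceptor.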
Bootstrap an Auth-native sign-up
New users who sign up via Auth exist on the Auth side but not in your system. After each Auth sign-up:
- The frontend now holds an Auth access token whose JWT claims identify the user (email, OAuth identifiers, etc.), but `external_id` is empty.
- The frontend checks for the absence of `external_id` in the token and calls a single bootstrap endpoint on your backend — e.g. `CreateUser` — with no body, just the Auth bearer token.
- The backend verifies the Auth token with one important relaxation: for the `CreateUser` endpoint only, accept tokens whose `external_id` claim is missing. Every other endpoint must continue to require it; otherwise an attacker could call any endpoint with a fresh Auth token and skip user creation entirely.
- The backend provisions the internal user in your own DB and triggers whatever side effects new sign-ups normally do (workspace creation, welcome email, analytics event, etc.). If a user with the same email already exists — e.g. a teammate invited them earlier — link to the existing row instead of creating a duplicate.
- The backend writes the new internal ID back to Auth via update-external-id. This closes the loop: from this point on, the Auth profile has `external_id` set, every subsequent token contains it, and the standard auth path applies.
- Once `CreateUser` returns, the frontend calls `refresh()` with cache invalidation to get a fresh token with `external_id`.
Reference: update-external-id.
Things to watch for:
- Idempotency on the backend. The frontend may retry `CreateUser` on transient failures. Make `CreateUser` idempotent: if `external_id` is already set on the Auth profile, or if a user row already exists for this Auth profile ID / email, return the existing user — don’t insert a second row.
- `update-external-id` is the source of truth, not your DB write. If the DB insert succeeds but `update-external-id` fails, the user is half-provisioned. Order the writes so this is recoverable: write the DB row, then `update-external-id`; on the next bootstrap attempt, detect the existing row by Auth profile ID and retry only the `update-external-id` step.
- Authorization gate on the bootstrap. It accepts tokens without `external_id`, so make sure it does nothing except provision-or-link the user. No tenant-scoped reads, no admin actions, no side effects that depend on caller identity beyond the Auth profile itself.
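The bootstrap’s idempotency and write ordering can be sketched as follows (helper names and the claim shape are illustrative assumptions; `updateExternalId` wraps the update-external-id call):

```typescript
type AuthClaims = { profileId: string; email: string; externalId?: string };

async function createUserBootstrap(
  claims: AuthClaims, // already verified; externalId may be absent here only
  findByAuthProfileOrEmail: (
    profileId: string,
    email: string,
  ) => Promise<{ id: string } | null>,
  insertUser: (profileId: string, email: string) => Promise<{ id: string }>,
  updateExternalId: (profileId: string, externalId: string) => Promise<void>,
): Promise<{ id: string }> {
  if (claims.externalId !== undefined) {
    // external_id already set on the Auth profile — fully provisioned.
    return { id: claims.externalId };
  }
  // Link to an existing row (e.g. an invited teammate, or a retry after a
  // half-provisioned attempt) instead of inserting a duplicate.
  let user = await findByAuthProfileOrEmail(claims.profileId, claims.email);
  if (user === null) {
    user = await insertUser(claims.profileId, claims.email);
  }
  // update-external-id closes the loop; if it failed on a previous
  // attempt, only this step re-runs.
  await updateExternalId(claims.profileId, user.id);
  return user;
}
```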
Source of truth at this stage
| Action | Owner |
|---|---|
| User creation | Your system → Auth and Auth → Your system |
| User update | Your system → Auth |
| User deletion | Your system → Auth |
Step 5 — Re-point backend writes to Auth
Step 1 mirrored your backend’s user mutations into Auth while the legacy system remained the source of truth. But your backend probably also writes to the legacy provider in many places — admin endpoints, internal scripts, system-driven flows like “rotate email after a security event”, “deactivate user on offboarding”, invitation acceptance, etc. While Auth was a replica, all of those calls correctly targeted the legacy provider. Now that Auth is the source of truth, they all need to be re-pointed at Auth. Audit and migrate every backend write site:
- Find every caller of the legacy provider’s SDK. Grep for the legacy client’s import path and inventory each call: who calls it, what does it write, what triggers it? Common categories: admin user CRUD, invitation flows, password resets.
- Replace the call with the equivalent Auth API, not a back-write request. Writing to the legacy provider and waiting for the webhook to push the change to Auth would be slow, indirect, and would re-introduce loops. Write to Auth directly; the back-write pipeline will mirror the change to the legacy provider asynchronously.
- Look users up in Auth by `external_id` (your internal user ID).
- Mixed-write transactions. If a single endpoint used to write the legacy provider and your DB in the same flow, make sure the new Auth write happens first (or in the right order) so failure modes are unchanged.
- Latency assumptions. Some callers may have implicitly relied on the legacy provider’s response time or webhook delivery; the Auth API may be faster or slower. Re-check anything that loops or polls.
- Permission scopes. Your Auth backend credentials need every write scope the legacy credentials had. Audit before flipping, not after.
Subscribe to Auth webhooks
At the same time, stop the webhook from the legacy system to Auth and start consuming Auth webhooks instead.
Implement POST /webhooks/session/events
Expose the endpoint on the same backend service that handles your authenticated traffic. It needs no auth in the conventional sense — Auth signs the requests, your handler verifies. Implement the activation handshake as a separate code path from the lifecycle handler — easier to test, harder to accidentally break. See Webhooks for the full handshake and signature verification details.
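The lifecycle handler’s shape can be sketched as follows. Signature verification, dedupe storage, and the queue are abstracted behind callbacks, since the real signing scheme is specified in the Webhooks reference:

```typescript
// Verify → dedupe → enqueue → 2xx. Real work happens in a worker.
async function handleWebhook(
  rawBody: string,
  signatureHeader: string,
  verifySignature: (body: string, sig: string) => boolean,
  seenBefore: (eventId: string) => Promise<boolean>,
  enqueue: (event: { id: string; type: string }) => Promise<void>,
): Promise<number> {
  if (!verifySignature(rawBody, signatureHeader)) return 401; // reject unsigned/forged
  const event = JSON.parse(rawBody) as { id: string; type: string };
  if (await seenBefore(event.id)) return 200; // duplicate delivery — ack, don't re-enqueue
  await enqueue(event); // hand off; keep the handler fast
  return 200; // 2xx quickly so Auth doesn't retry
}
```

Keep the activation-handshake path separate from this handler, as noted above.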
Register the webhook
Subscribe only to the events you actually need. Every extra event type is more dedupe surface and more code paths to keep idempotent.
Reference: create-webhook. The response returns a `web_…` webhook ID — capture it as `$WEBHOOK_ID` for the next step.
Activate the subscription
Once Auth’s verification handshake on your endpoint passes, activate the subscription so events start flowing.
Reference: activate-webhook. To add or drop a single event later without recreating the subscription, use add-webhook-event and delete-webhook-event; to pause it, use deactivate-webhook.
Enqueue, then 2xx fast
Webhook handlers are not the place to do work. Verify, dedupe, push to a queue, return `200`. If you do real work in-handler you’ll either time out (Auth retries → duplicate work) or drop events under load.
Final source of truth
| Action | Owner |
|---|---|
| User creation | Auth → Legacy |
| User update | Auth → Legacy |
| User deletion | Auth → Legacy (you can make this optional to reduce the risk of accidental data loss) |
What’s next?
- Set up Webhooks on the Auth side so your backend reacts to lifecycle events in real time.
- Explore the full Management API for advanced user management.