

This guide walks you through migrating users from a legacy authentication provider (Clerk, Auth0, Firebase, a homegrown system, etc.) to the Prelude Auth API. The process is provider-agnostic; throughout the guide we call your existing system the legacy system and Prelude the Auth system. The migration runs in five phases. Each phase is safe to roll out behind a feature flag, and the source of truth shifts from the legacy system to Auth only at the very end.

Prerequisites

Before you start, make sure you have:
  • A Prelude account with access to the Auth API.
  • An Application ID ($APP_ID) — see Applications.
  • Your Management API key ($PRELUDE_API_KEY) for backend calls.
Every Management API call below uses:
  • Base URL: https://api.prelude.dev
  • Auth: Authorization: Bearer $PRELUDE_API_KEY
  • $USER_ID refers to the Auth user ID (usr_…).

Step 1 — Mirror user changes to Auth

Before flipping the source of truth or running the backfill, you need a way to keep Auth in sync with whatever happens on your side in real time. You already have code paths in your own backend that update or delete users — when a profile is edited, when an email is added, when a user is offboarded. Mirror each of those changes into Auth by calling the Management API right next to the write you already do. You don’t need to subscribe to anything from the legacy provider for this: your own system is where these mutations originate (or where they’re observed and persisted), so it’s the right place to fan out an Auth call. At this stage your system is the source of truth, the replication is one-way (your system → Auth), and Auth is a downstream replica — never write to it directly.

Where to add the calls

Find every site in your backend that mutates a user — admin endpoints, profile-update handlers, identifier add/remove flows, account-deletion logic, etc. Each of them needs one extra step: after the write to your own DB (and, if applicable, the legacy provider), mirror the same change to Auth.

Mirroring an update

  1. Resolve the Auth user by external_id = your internal user_id. If no Auth user exists yet (the user hasn’t been migrated lazily or by the backfill), skip silently — there’s nothing to mirror.
  2. Patch profile fields (first name, last name, picture, locale, …) on the Auth user via the Management API.
  3. Reconcile identifiers as a diff, not a replacement:
    • Compute desired = {identifiers on your side} and existing = {Auth identifiers}.
    • Drop everything in existing \ desired.
    • Add everything in desired \ existing.
    • Skip everything in the intersection.
    • Apply the same logic separately for emails, OAuth identifiers (keyed by provider:provider_user_id), phone numbers, etc.
  4. Treat NotFound and AlreadyExists errors on individual identifier operations as success — they describe the state you wanted anyway.
  5. Don’t short-circuit on the first failure: collect errors across the profile patch and each identifier op, run them all, then join at the end. Partial progress beats an all-or-nothing retry.
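The reconciliation in step 3 is a plain set diff. A minimal sketch — the function name and shapes are illustrative, not part of the Management API client; keys must already be normalized (lowercased emails, `provider:provider_user_id` for OAuth identifiers):

```typescript
// Reconcile one identifier class (e.g. emails) as a diff against Auth.
interface IdentifierDiff {
  toAdd: string[];    // desired \ existing → create on Auth
  toRemove: string[]; // existing \ desired → delete on Auth
}

function diffIdentifiers(desired: string[], existing: string[]): IdentifierDiff {
  const want = new Set(desired);
  const have = new Set(existing);
  return {
    toAdd: desired.filter((id) => !have.has(id)),     // everything in desired \ existing
    toRemove: existing.filter((id) => !want.has(id)), // everything in existing \ desired
    // the intersection is implicitly skipped
  };
}
```

Run it once per identifier class (emails, OAuth identifiers, phone numbers) rather than mixing classes into one set.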

Mirroring a delete

  1. Resolve the Auth user by external_id = your internal user_id.
  2. Call the Management API to delete the Auth user.
  3. NotFound → treat as success (already gone, which is the desired end state).
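The NotFound / AlreadyExists rules from both mirroring flows can be centralized in one predicate. A sketch — the error codes are assumptions; match whatever your Auth client actually raises:

```typescript
// Decide whether a mirroring error already describes the desired end state.
type MirrorOp = "add_identifier" | "remove_identifier" | "delete_user";

function isBenignMirrorError(op: MirrorOp, code: string): boolean {
  if (code === "not_found") {
    // Removing something already gone, or deleting an already-deleted user.
    return op === "remove_identifier" || op === "delete_user";
  }
  if (code === "already_exists") {
    // Adding an identifier that is already attached.
    return op === "add_identifier";
  }
  return false; // anything else is a real failure: collect it, don't halt
}
```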

Management API calls used in this step

Operation | Method + path | Reference
Find the Auth user by external_id | GET /v2/session/apps/{appID}/users | list-users
Patch profile fields | PATCH /v2/session/apps/{appID}/users/{userID}/profile | update-user-profile
Add an identifier | POST /v2/session/apps/{appID}/users/{userID}/identifiers | create-identifier
Remove an identifier | DELETE /v2/session/apps/{appID}/users/{userID}/identifiers | delete-identifier
Delete the Auth user | DELETE /v2/session/apps/{appID}/users/{userID} | delete-user

Step 2 — Migrate active users via the migration hook

Active users are migrated lazily: each time a user signs in through the legacy provider, the frontend exchanges their legacy token for an Auth token via session.migrate({ token: legacyToken }). Auth forwards the legacy token to a webhook you own, which validates it and returns the user identity payload Auth needs to provision the user on its side.

Configure the migration hook

Register the URL Auth forwards legacy tokens to. Until this is configured, session.migrate() has nowhere to validate against.
1. Register the migration hook

curl -X POST https://api.prelude.dev/v2/session/apps/$APP_ID/config/migration \
  -H "Authorization: Bearer $PRELUDE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "session_validation_url": "https://api.example.com/session/validate"
  }'
Reference: create-migration-config. To rotate the URL later use update-migration-config; to turn migration off once the rollout is done use delete-migration-config.
2. Implement the validation endpoint

Your hook must:
  1. Validate the legacy token against your legacy provider (configure JWKS / token issuer so your backend can verify it).
  2. Resolve it to your internal user (email, user ID, org, etc.).
  3. Return the user identity payload Auth expects, so Auth can provision a user on its side mapped to your internal ID.
Most teams reuse the existing internal user_id as the Auth external_id so backend code doesn’t need a translation table. Make sure the migration hook returns it.
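A sketch of the mapping the hook performs once the legacy token is verified. The payload shape, claim names, and `lookupInternalUserId` are assumptions — check the migration hook reference for the exact fields Auth expects:

```typescript
interface LegacyClaims {
  sub: string;          // legacy provider user ID
  email?: string;
  given_name?: string;
  family_name?: string;
}

interface IdentityPayload {
  external_id: string;  // your internal user_id, NOT the legacy provider's ID
  identifiers: { type: string; value: string }[];
  profile: { first_name?: string; last_name?: string };
}

// lookupInternalUserId is a stand-in for your own DB lookup.
function buildIdentity(
  claims: LegacyClaims,
  lookupInternalUserId: (legacyId: string) => string | undefined,
): IdentityPayload {
  const internalId = lookupInternalUserId(claims.sub);
  if (!internalId) throw new Error("unknown legacy user"); // reject → migration fails, interceptor falls back
  return {
    external_id: internalId, // reuse internal user_id so no translation table is needed
    identifiers: claims.email ? [{ type: "email_address", value: claims.email }] : [],
    profile: { first_name: claims.given_name, last_name: claims.family_name },
  };
}
```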
3. Add a feature flag and initialize the SDK

Add a feature flag (e.g. isSessionMigrationEnabled()) so you can roll the migration out gradually and roll back instantly. Initialize the Auth SDK client once at app startup (singleton like getSessionClient()), and keep the legacy auth client wired in parallel — both must coexist during the migration window.

Frontend flow (auth interceptor)

The interceptor decides which token to attach to each outgoing request:
  1. Flag off → use the legacy token, behavior unchanged.
  2. Flag on → try session.refresh() first. The SDK returns a cached access token if still valid, otherwise it will hit /refresh.
  3. No Auth session yet → fetch a token from the legacy provider, call session.migrate({ token: legacyToken }) to exchange it (the SDK serializes concurrent calls with a lock, so parallel requests won’t double-migrate), then session.refresh() to get the fresh Auth access token. Cache that the user is now on Auth for this request.
  4. Migration fails → fall back to the legacy token so the user is never locked out.
  5. 401 retry → if the request used an Auth token and got Unauthenticated, invalidate the local cache, refresh once, and retry the request a single time. This handles tokens that look valid client-side but were revoked server-side. Keep this rule even after the migration is fully done.
Key points to handle:
  • Don’t loop: retry only once on 401, and only when the failing token was an Auth token.
  • Don’t double-migrate: after migrate() succeeds, subsequent calls naturally hit the cached Auth token first, so they skip the legacy path.
  • Errors during legacy fetch must be surfaced (e.g. LegacyUnavailableError) so the UI can react — never silently send an unauthenticated request.
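The decision tree above (minus the 401 retry, which lives where the response is handled) can be sketched with injected dependencies so the control flow is testable. The client shapes are stand-ins for your real SDK singletons, not the actual SDK surface:

```typescript
interface Deps {
  flagOn: boolean; // isSessionMigrationEnabled()
  session: {
    refresh: () => Promise<string | null>; // cached/refreshed Auth token, null → no Auth session yet
    migrate: (args: { token: string }) => Promise<void>;
  };
  legacy: { getToken: () => Promise<string> };
}

async function pickToken(deps: Deps): Promise<string> {
  if (!deps.flagOn) return deps.legacy.getToken();  // 1. flag off → legacy, behavior unchanged

  const cached = await deps.session.refresh();      // 2. cached or refreshed Auth token first
  if (cached) return cached;

  try {
    const legacyToken = await deps.legacy.getToken();
    await deps.session.migrate({ token: legacyToken }); // 3. exchange legacy → Auth
    const fresh = await deps.session.refresh();
    return fresh ?? legacyToken;                    // 4. migration didn't stick → fall back
  } catch {
    // 4. migration failed → fall back to legacy so the user is never locked out.
    // If the legacy fetch itself fails, this rethrows and the error surfaces to the UI.
    return deps.legacy.getToken();
  }
}
```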

Backend: accept both tokens

The backend needs to validate either token format during the migration window:
  1. Inspect the Authorization: Bearer <token> header.
  2. Detect the token type — typically by the iss claim of a JWT, or a token prefix (sess_… vs legacy format), or by trying Auth first and falling back. Pick one detection strategy and stick to it; don’t validate against both signers blindly.
  3. Route validation accordingly:
    • Auth token → verify against the Auth JWKS endpoint.
    • Legacy token → verify against the legacy provider as before.
  4. Resolve to the same internal user identity in both branches. The user ID, org/tenant, and permissions must come out identical regardless of which token was presented — downstream code shouldn’t care which path it took.
  5. Log which path was used with a low-cardinality counter (auth_path{type="session|legacy"}). This is how you know when legacy traffic has drained and the flag can be removed.
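A sketch of step 2's detection, using the prefix first and the JWT iss claim as fallback. The sess_ prefix and issuer URL here are assumptions — substitute whatever your tokens actually carry:

```typescript
// One detection strategy, applied consistently: prefix, then `iss` claim.
function detectTokenType(
  token: string,
  authIssuer = "https://auth.prelude.dev", // illustrative issuer
): "session" | "legacy" {
  if (token.startsWith("sess_")) return "session";
  const parts = token.split(".");
  if (parts.length === 3) {
    try {
      // Decode only the payload; signature verification happens in the routed branch.
      const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
      if (payload.iss === authIssuer) return "session";
    } catch {
      // not a parsable JWT → treat as legacy
    }
  }
  return "legacy";
}
```

Detection only routes the request; each branch still does full signature verification against its own JWKS.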

Source of truth at this stage

Action | Owner
User creation | Your system → Auth
User update | Your system → Auth
User deletion | Your system → Auth
Auth is just a view of the users. Don’t make updates and deletes directly on Auth; everything is mirrored from your own system, which remains the source of truth.

Step 3 — Backfill dormant users

Lazy migration via session.migrate() only covers users who actually log in during the rollout window. After Auth traffic stabilizes at ~100% of active users, you still have dormant users that exist on the legacy provider and in your own database but were never provisioned on Auth. A one-shot backfill closes the gap. The backfill is safe to run now: any user who logs in during the backfill still flows through migrate() and gets provisioned on Auth that way. The backfill is idempotent — it skips anyone already provisioned — so the two paths can run concurrently without conflict.

Building the job

Write a CLI tool you can run from a developer machine or a one-shot task with credentials for your local DB, your legacy provider, and Auth:
  1. Iterate your own user table. Your DB is the source of truth for which users you care about. Use keyset pagination ordered by, e.g., (created_at, id); LIMIT/OFFSET drifts when rows are inserted mid-scan and can’t resume after a crash. Filter to users that (a) have a legacy provider ID and (b) are not soft-deleted.
  2. Batch-fetch the legacy provider. For each DB page, collect the legacy IDs and fetch them from the legacy provider in chunks (most providers cap at ~100 IDs per call).
  3. Idempotency check first. For each user, look up Auth by external_id (your internal user_id). If it already resolves to an Auth profile, skip — that user was migrated lazily by the interceptor or by an earlier run of this job.
  4. Map legacy account → Auth identifiers. Pull every email and every OAuth external account from the legacy user. Normalize the provider name to whatever Auth expects (some legacy providers prefix it, e.g. oauth_google → google). Map the legacy profile → Auth profile (given name, family name, picture, locale, whatever your app reads). Use the same internal user_id as external_id that the migration hook uses — this is what guarantees the lazy and backfill paths converge on the same Auth user.
  5. Write atomically. Send the profile + identifiers to Auth in one call. Treat IdentifierAlreadyExists / ExternalIDAlreadyExists as benign — that’s the race with a concurrent migrate() call. Any other error → log and count as failed, do not halt the job.
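The keyset pagination from step 1 can be sketched in memory; in production the sort and filters are a single query with a row-wise comparison, roughly WHERE (created_at, id) > (:created_at, :id), and the row shape below is illustrative:

```typescript
interface UserRow { id: string; createdAt: number; legacyId: string | null; deletedAt: number | null }
type Cursor = { createdAt: number; id: string } | null;

// Returns one page of backfill-eligible users strictly after the cursor,
// plus the cursor to resume from. A crashed run restarts from the last
// persisted cursor; inserts mid-scan cannot shift already-seen rows.
function nextPage(rows: UserRow[], after: Cursor, limit: number): { page: UserRow[]; cursor: Cursor } {
  const page = [...rows]
    .sort((a, b) => a.createdAt - b.createdAt || a.id.localeCompare(b.id))
    .filter((r) => !after || r.createdAt > after.createdAt || (r.createdAt === after.createdAt && r.id > after.id))
    .filter((r) => r.legacyId !== null && r.deletedAt === null) // has legacy ID, not soft-deleted
    .slice(0, limit);
  const last = page[page.length - 1];
  return { page, cursor: last ? { createdAt: last.createdAt, id: last.id } : null }; // null cursor → done
}
```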

Create the Auth user

curl -X POST https://api.prelude.dev/v2/session/apps/$APP_ID/users \
  -H "Authorization: Bearer $PRELUDE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "internal-user-42",
    "profile": {
      "first_name": "Jane",
      "last_name": "Doe"
    },
    "identifiers": [
      { "type": "email_address", "value": "jane@example.com" },
      { "type": "phone_number",  "value": "+15551234567" }
    ]
  }'
Reference: create-user. After this step, every user on the legacy provider also exists on Auth. You can now use Auth for sign-in.

Step 4 — Use Auth for sign-in and sign-up

Up to this point, users still sign in through the legacy provider’s UI; Auth only sees them via migrate() after the fact. Now that every user has an Auth profile (backfill done, lazy migration covering active users), you can flip the actual login surface to Auth and stop sending users through the legacy sign-in flow at all.

Before you start

  • Backfill must be complete. If a user reaches the new Auth login page and has no Auth profile yet, they cannot sign in.
  • Identifier coverage. Audit a sample: do users have at least one identifier per sign-in method you intend to support (email + password, each OAuth provider)? If you’re enabling Google sign-in but only 70% of users have a Google identifier, the other 30% will hit “no account found”.
  • Passwords. Most legacy providers won’t let you export password hashes. If yours does and the hash format is compatible with Auth’s verifier, import them with the backfill so existing users keep their passwords. If not, password users will have to use a “forgot password” reset flow on first Auth sign-in — communicate this in advance.

Build the new login UI

  1. Build new routes powered by the Auth SDK: /login, /signup, /forgot-password, /verify, OAuth callback, etc. Keep them behind a feature flag (isSessionLoginEnabled()) separate from isSessionMigrationEnabled() so you can ramp them independently.
  2. Wire each sign-in method Auth-side:
    • Email + password → session.signIn({ email, password }). See Password Authentication.
    • OTP → Auth sends, user clicks/enters code, SDK exchanges for a session. See OTP Login.
    • OAuth → redirect to Auth’s OAuth start URL; Auth handles the provider round-trip and drops the user back on your callback. See Social Login.
  3. Preserve the external_id invariant. Whichever method the user picks, the Auth profile they end up authenticated as must be the one whose external_id equals your internal user ID. The backfill already guarantees this — your job is not to break it.

Bootstrap an Auth-native sign-up

New users who sign up via Auth exist on the Auth side but not in the legacy system or your own DB. After each Auth sign-up:
  1. The frontend now holds an Auth access token whose JWT claims identify the user (email, OAuth identifiers, etc.) but external_id is empty.
  2. The frontend checks for the absence of external_id in the token and calls a single bootstrap on your backend — e.g. CreateUser — with no body, just the Auth bearer token.
  3. The backend verifies the Auth token with one important relaxation: for the CreateUser endpoint only, accept tokens whose external_id claim is missing. Every other endpoint must continue to require it; otherwise an attacker could call any endpoint with a fresh Auth token and skip user creation entirely.
  4. The backend provisions the internal user in your own DB and triggers whatever side effects new sign-ups normally do (workspace creation, welcome email, analytics event, etc.). If a user with the same email already exists — e.g. a teammate invited them earlier — link to the existing row instead of creating a duplicate.
  5. The backend writes the new internal ID back to Auth via update-external-id. This closes the loop: from this point on, the Auth profile has external_id set, every subsequent token contains it, and the standard auth path applies.
  6. Once CreateUser returns, the frontend calls refresh() with cache invalidation to get a fresh token with external_id.
curl -X PUT https://api.prelude.dev/v2/session/apps/$APP_ID/users/$USER_ID/external_id \
  -H "Authorization: Bearer $PRELUDE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "external_id": "internal-user-42"
  }'
Reference: update-external-id. Things to watch for:
  • Idempotency on the backend. The frontend may retry CreateUser on transient failures. Make CreateUser idempotent: if external_id is already set on the Auth profile, or if a user row already exists for this Auth profile ID / email, return the existing user — don’t insert a second row.
  • update-external-id is the source of truth, not your DB write. If the DB insert succeeds but update-external-id fails, the user is half-provisioned. Order the writes so this is recoverable: write the DB row, then update-external-id; on the next bootstrap attempt detect the existing row by Auth profile ID and only retry the update-external-id step.
  • Authorization gate on the bootstrap. It accepts tokens without external_id, so make sure it does nothing except provision-or-link the user. No tenant-scoped reads, no admin actions, no side effects that depend on caller identity beyond the Auth profile itself.
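The idempotency and write-ordering rules above reduce to a small decision function. States and names here are illustrative; the point is the ordering rule: a DB row with no external_id on the Auth profile means a previous attempt died between the two writes, so only the update-external-id step is retried:

```typescript
interface BootstrapState {
  authExternalId: string | null; // external_id already set on the Auth profile?
  existingUserId: string | null; // DB row matched by Auth profile ID or email
}

type BootstrapAction =
  | { kind: "done"; userId: string }              // fully provisioned → idempotent no-op
  | { kind: "retry_external_id"; userId: string } // row exists, Auth not yet updated → retry write-back only
  | { kind: "create_and_link" };                  // first attempt: insert row, then update-external-id

function planBootstrap(s: BootstrapState): BootstrapAction {
  if (s.authExternalId) return { kind: "done", userId: s.authExternalId };
  if (s.existingUserId) return { kind: "retry_external_id", userId: s.existingUserId };
  return { kind: "create_and_link" };
}
```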

Source of truth at this stage

Action | Owner
User creation | Your system → Auth and Auth → Your system
User update | Your system → Auth
User deletion | Your system → Auth
Both Auth and your own system can create users (new sign-ups go via Auth and bootstrap into your DB; legacy-flow sign-ups that haven’t been flipped to Auth yet still originate in your system). Updates and deletes still originate from your system and are mirrored to Auth — don’t make them directly on Auth yet.

Step 5 — Re-point backend writes to Auth

Changes that originate inside Auth (a user editing their profile in the Auth UI) will reach your system via the Auth webhooks you subscribe to below. But your backend itself probably also writes to the legacy provider in many places — admin endpoints, internal scripts, system-driven flows like “rotate email after security event”, “deactivate user on offboarding”, invitation acceptance, etc. While Auth was a replica, all of those calls correctly targeted the legacy provider. Now that Auth becomes the source of truth, they all need to be re-pointed at Auth. Audit and migrate every backend write site:
  • Find every caller of the legacy provider’s SDK. Grep for the legacy client’s import path and inventory each call: who calls it, what does it write, what triggers it? Common categories: admin user CRUD, invitation flows, password resets.
  • Replace the call with the equivalent Auth API, not a back-write request. Writing to the legacy provider and waiting for the webhook to push the change to Auth would be slow, indirect, and would re-introduce loops. Write to Auth directly; the back-write pipeline will mirror the change to the legacy provider asynchronously.
  • Look users up in Auth by external_id (your internal user ID).
Watch out for:
  • Mixed-write transactions. If a single endpoint used to write the legacy provider and your DB in the same flow, make sure the new Auth write happens first (or in the right order) so failure modes are unchanged.
  • Latency assumptions. Some callers may have implicitly relied on the legacy provider’s response time or webhook delivery; the Auth API may be faster or slower. Re-check anything that loops or polls.
  • Permission scopes. Your Auth backend credentials need every write scope the legacy credentials had. Audit before flipping, not after.

Subscribe to Auth webhooks

At the same time, stop mirroring changes from your system into Auth (the step 1 pipeline) and start consuming Auth webhooks instead.
1. Implement POST /webhooks/session/events

Expose the endpoint on the same backend service that handles your authenticated traffic. It needs no auth in the conventional sense — Auth signs the requests, your handler verifies. Implement the activation handshake as a separate code path from the lifecycle handler — easier to test, harder to accidentally break. See Webhooks for the full handshake and signature verification details.
2. Register the webhook

Subscribe only to the events you actually need. Every extra event type is more dedupe surface and more code paths to keep idempotent.
curl -X POST https://api.prelude.dev/v2/session/apps/$APP_ID/webhooks \
  -H "Authorization: Bearer $PRELUDE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://api.example.com/webhooks/session/events",
    "verification_token": "a-shared-secret",
    "events": [
      "user.updated",
      "user.profile.updated",
      "user.identifier.created",
      "user.identifier.deleted",
      "user.deleted"
    ]
  }'
Reference: create-webhook. The response returns a web_… webhook ID — capture it as $WEBHOOK_ID for the next step.
3. Activate the subscription

Once Auth’s verification handshake on your endpoint passes, activate the subscription so events start flowing:
curl -X POST https://api.prelude.dev/v2/session/apps/$APP_ID/webhooks/$WEBHOOK_ID/activate \
  -H "Authorization: Bearer $PRELUDE_API_KEY"
Reference: activate-webhook. To add or drop a single event later without recreating the subscription, use add-webhook-event and delete-webhook-event; to pause it, use deactivate-webhook.
4. Enqueue, then 2xx fast

Webhook handlers are not the place to do work. Verify, dedupe, push to a queue, return 200. If you do real work in-handler you’ll either time out (Auth retries → duplicate work) or drop events under load.
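A sketch of that handler shape. Signature verification is elided (see Webhooks), and the in-memory dedupe set stands in for a shared TTL’d store (e.g. Redis SETNX) in production:

```typescript
// Verify → dedupe → enqueue → 200. No real work on the request path.
const seen = new Set<string>(); // illustrative; use a shared store with a TTL in production

function handleWebhook(
  event: { id: string; type: string },
  enqueue: (event: { id: string; type: string }) => void,
): number {
  if (seen.has(event.id)) return 200; // duplicate delivery (retry) → ack, don't re-enqueue
  seen.add(event.id);
  enqueue(event);                     // real processing happens off the request path
  return 200;                         // 2xx fast so Auth doesn't retry
}
```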
5. Reconcile on lag

If you suspect the system is lagging, list everything on the legacy system and the Auth system and compute the diff. Update only what changed.
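A sketch of that reconciliation diff, keyed by external_id. Comparing serialized snapshots is illustrative — compare only the fields you actually mirror:

```typescript
// Diff the source of truth against the replica; touch only what changed.
function reconcile<T>(
  source: Map<string, T>,  // source-of-truth users, keyed by external_id
  replica: Map<string, T>, // replica users, same key
): { create: string[]; update: string[]; remove: string[] } {
  const create: string[] = [], update: string[] = [], remove: string[] = [];
  for (const [id, v] of source) {
    if (!replica.has(id)) create.push(id);                                       // missing on replica
    else if (JSON.stringify(replica.get(id)) !== JSON.stringify(v)) update.push(id); // drifted
  }
  for (const id of replica.keys()) if (!source.has(id)) remove.push(id);         // gone from source
  return { create, update, remove };
}
```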

Final source of truth

Action | Owner
User creation | Auth → Legacy
User update | Auth → Legacy
User deletion | Auth → Legacy (optionally skip mirroring deletions to the legacy system if you want to limit risk)

What’s next?

  • Set up Webhooks on the Auth side so your backend reacts to lifecycle events in real time.
  • Explore the full Management API for advanced user management.