
Lever to Ashby Migration: The CTO's Technical Guide

A technical guide for CTOs migrating from Lever to Ashby. Covers data model mapping, API rate limits, the Ashby 200 OK error trap, and migration approaches.

Raaj Raaj · 20 min read

Planning a migration?

Get a free 30-min call with our engineers. We'll review your setup and map out a custom migration plan — no obligation.

Schedule a free call
  • 1,200+ migrations completed
  • Zero downtime guaranteed
  • Transparent, fixed pricing
  • Project success responsibility
  • Post-migration support included

Migrating from Lever to Ashby is a data-model translation problem, not a CSV drag-and-drop. Lever is opportunity-centric — a single Contact can have multiple Opportunities, each representing a candidacy tied to a Posting. Ashby separates this into distinct Candidate, Application, and Job records, where Applications link Candidates to Jobs through structured interview plans. A naive CSV export flattens your Opportunity history, silently drops interview feedback, and leaves you with orphaned candidate records on the other side.

The core architectural challenge: Lever treats every interaction as a fluid Opportunity flowing through a pipeline. Ashby enforces structured evaluation with rigid interview stages, feedback forms, and job-specific applications. Translating between these models requires splitting Lever records, deduplicating contacts, re-associating historical interview data, and navigating two very different API architectures — one REST, one RPC.

This guide covers the object-mapping decisions you need to make, every viable migration method and its trade-offs, the API constraints that will bottleneck your ETL scripts, and the edge cases that break most DIY attempts.

Ashby documents three migration routes on its side: API migration, file-based migration, and self-serve bulk import. For Lever, Ashby's historical file-based import can bring over jobs, candidates, applications, resumes, users, notes, interview feedback, and interview stages. The self-serve bulk import is much narrower — Ashby explicitly says it is not a comprehensive view of the candidate lifecycle. (docs.ashbyhq.com)

For related ATS migration context, see our coverage on Lever to Greenhouse migration and common ATS migration gotchas.

Warning

Keep two immutable source keys throughout the project: one for the person (contactId in Lever) and one for the candidacy (opportunityId). Ashby requires separate candidate.create and application.create calls — if you flatten them into a single row, you cannot rebuild the relationship chain.

Why Companies Migrate from Lever to Ashby

The drivers typically fall into three categories:

  • All-in-one consolidation. Lever requires separate tools for scheduling, CRM/sourcing, and analytics. Ashby bundles ATS, CRM, scheduling, and analytics into a single platform, reducing vendor sprawl and per-seat add-on costs.
  • Built-in analytics. Lever's reporting is functional but limited. Ashby ships with out-of-the-box dashboards across all jobs, automated Slack/email reporting, and custom alerts for SLA enforcement — capabilities that require third-party tools or manual exports in Lever.
  • Modern UX and AI features. Ashby's interface includes natural-language candidate search, AI-personalized outreach tokens, and AI-summarized interview feedback as native features, not bolt-on integrations.

None of these benefits matter if the migration corrupts your candidate history. The rest of this guide focuses on making sure it doesn't.

Data Model Mapping: Lever vs. Ashby

Understanding the structural mismatch is the prerequisite for every migration decision.

Lever's data model is opportunity-centric. A Contact represents a person. Each Contact can have multiple Opportunities, where each Opportunity represents a distinct candidacy moving through your pipeline. Each Opportunity can be linked to a Posting (a job). Opportunities carry their own notes, feedback, interview schedules, and forms. The deprecated Candidates endpoints still exist, but the Opportunities endpoints are the canonical source. (hire.lever.co)

Ashby's data model separates concerns differently. A Candidate is a person record. An Application links a Candidate to a Job and carries the pipeline stage, interview plan, and feedback. Application status values include Lead, Active, Hired, and Archived, which helps when you translate sourced leads, active pipelines, rejections, and hires.

Core Object Mapping

| Lever Object | Ashby Equivalent | Notes |
|---|---|---|
| Contact | Candidate | 1:1 mapping. Deduplicate by email before import. |
| Opportunity | Application | Each Lever Opportunity becomes an Ashby Application linked to a Candidate and Job. |
| Posting | Job | Map Posting IDs to Job IDs. Create Jobs in Ashby first. |
| Stage | Interview Stage | Lever stages are per-pipeline; Ashby stages are per-job interview plan. |
| Feedback / Scorecard | Application Feedback | Use applicationFeedback.submit in Ashby. Form structure must be pre-configured. |
| Notes | Candidate Notes | Use candidate.createNote. Supports HTML formatting. |
| Tags | Candidate Tags | Use candidate.addTag. |
| Sources | Candidate Source | Map Lever's origin and sources fields to Ashby's source tracking. |
| Resume / Files | Candidate Files | Lever allows file download via API; Ashby file upload requires candidate.uploadResume or applicationForm.submit. |
| Archived Reason | Application Archive Reason | Map Lever's archive reasons to Ashby's configured reasons. |

Field-Level Mapping

| Lever Field | Type | Ashby Field | Type | Transformation |
|---|---|---|---|---|
| contact (ID) | String | candidate.id | UUID | Generate new; maintain lookup table |
| name | String | candidate.name | String | Direct map |
| emails [] | Array | candidate.emailAddresses [] | Array | Flatten to primary + additional |
| phones [] | Array | candidate.phoneNumbers [] | Array | Direct map |
| headline | String | candidate.title | String | Direct map |
| location | String | candidate.location | Object | May require structured parsing |
| tags [] | Array | Candidate Tags | Endpoint | Call candidate.addTag per tag |
| sources [] | Array | candidate.sourceId | UUID | Map to pre-created Ashby sources |
| origin | Enum | candidate.creditedToUser | UUID | Map origin type to Ashby user |
| stage.text | String | application.currentInterviewStageId | UUID | Map by stage name to Ashby stage ID |
| archived.reason | String | application.archiveReasonId | UUID | Pre-create archive reasons in Ashby |
| createdAt | Timestamp | candidate.createdAt | ISO 8601 | Convert epoch ms → ISO 8601 |
Info

Lever stores timestamps as Unix epoch milliseconds. Ashby expects ISO 8601 strings. Every date field needs explicit conversion during the transform phase.
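A minimal conversion helper for the transform phase — a sketch assuming UTC and that a Z-suffixed ISO 8601 string is acceptable on the Ashby side:

```python
from datetime import datetime, timezone

def epoch_ms_to_iso8601(epoch_ms):
    """Convert a Lever epoch-milliseconds timestamp to an ISO 8601 UTC string."""
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
    # isoformat() emits "+00:00"; swap for the "Z" suffix (assumption: Ashby accepts it).
    return dt.isoformat().replace("+00:00", "Z")
```

Run this on every date field during transform, not ad hoc during load.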

Handling Custom Fields

Both platforms support custom fields, but their data types and validation rules differ. Lever allows flexible custom fields on Opportunities, while Ashby requires strict schema definitions for custom fields on Applications or Candidates. Before migration:

  1. Export all custom field definitions from Lever via GET /opportunities?expand=applications
  2. Create matching custom fields in Ashby's admin panel (Admin → Custom Fields)
  3. Build a mapping table of Lever custom field keys → Ashby custom field IDs
  4. Handle type mismatches (e.g., Lever multi-select → Ashby single-select requires a transformation decision)
  5. Pre-load picklist options using Ashby's customField.updateSelectableValues before writing records (developers.ashbyhq.com)
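The mapping table from step 3 can be a plain dict; the sketch below (field keys and Ashby IDs are hypothetical) also shows one way to settle the multi-select → single-select compromise from step 4 while flagging unmapped keys instead of dropping them silently:

```python
# Hypothetical Lever custom field key -> Ashby custom field ID mapping.
FIELD_MAP = {
    "desired_salary": "cf_salary_id",   # illustrative IDs, not real Ashby values
    "visa_status": "cf_visa_id",
}

def transform_custom_fields(lever_fields, field_map=FIELD_MAP):
    """Translate Lever custom field keys to Ashby field IDs; report unmapped keys."""
    mapped, unmapped = {}, []
    for key, value in lever_fields.items():
        if key in field_map:
            # Type compromise: Lever multi-select -> Ashby single-select keeps the
            # first option; log the discarded remainder in your audit trail.
            if isinstance(value, list):
                value = value[0] if value else None
            mapped[field_map[key]] = value
        else:
            unmapped.append(key)
    return mapped, unmapped
```

Unmapped keys should fail the dry run, not disappear.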

If your team uses CRM-style labels like Accounts, Leads, or custom agency/client entities, treat those as extension work. Ashby's extensibility is centered on custom fields on existing objects, not arbitrary top-level custom records. Most source-only objects need to be remodeled into candidate/application/job fields, notes, or an external archive. (developers.ashbyhq.com)

For more context on mapping complex fields during an ATS transition, see our guide on common ATS migration gotchas.

Evaluating Migration Approaches

There are five viable paths. Each has hard trade-offs.

Approach 1: Native CSV Export/Import

How it works: Export candidate data from Lever as CSV files. Import into Ashby via their self-serve bulk import tool.

When to use it: Fewer than ~500 candidates, no need to preserve interview history or feedback, and you're okay losing relational data.

Limitations:

  • CSV exports flatten the Contact → Opportunity → Posting hierarchy into rows. Multi-opportunity candidates become duplicate rows.
  • Interview feedback, scorecards, and notes are not included in standard CSV exports.
  • File attachments (resumes, cover letters) are not included.
  • Ashby's CSV import does not support setting custom fields, tags, or sources programmatically.
  • Ashby explicitly says self-serve bulk import is not a comprehensive candidate-lifecycle migration. (docs.ashbyhq.com)

For a deeper analysis of CSV-based migration limitations, see Using CSVs for SaaS Data Migrations: Pros and Cons.

Approach 2: Direct API Migration (Lever REST → Ashby RPC)

How it works: Build scripts that extract data from Lever's REST API, transform it to match Ashby's schema, and load it via Ashby's RPC API.

When to use it: You need full relational integrity, historical feedback, and custom field mapping — and you have engineering bandwidth to build and maintain the pipeline.

Limitations:

  • You must handle two very different API architectures (REST vs. RPC)
  • Lever's 10 req/sec rate limit and Ashby's 1,000 req/min limit constrain throughput
  • Ashby's HTTP 200 OK error trap requires custom response validation (see Edge Cases section)
  • Token management: Lever uses OAuth with 1-hour token expiry; Ashby uses long-lived Basic Auth keys
  • Expect 2–4 weeks of engineering time for a production-quality pipeline with error handling, retry logic, and validation

Approach 3: Third-Party Migration Tools or Services

How it works: Use a migration service or unified API vendor instead of integrating both APIs yourself.

When to use it: Low engineering bandwidth, multi-ATS roadmap, or you need full data fidelity without building the pipeline.

Caveats: Coverage is the main problem. Unified API vendors like Merge expose normalized ATS models with passthrough and deleted-data detection, but abstraction layers can miss edge cases. Apideck's Lever connector maps Applicants to Opportunities but marks Applications as not supported — a serious gap for historical Lever migrations.

Ashby itself supports API migrations, file-based migrations, and self-serve bulk import. The file-based migration is the richest vendor-assisted option, supporting jobs, candidates, applications, resumes, users, notes, interview feedback, and interview stages. (docs.ashbyhq.com)

Approach 4: Custom ETL Pipeline

How it works: Land raw Lever data in staging, maintain immutable source IDs, transform into canonical tables, and use replayable workers to load Ashby.

When to use it: Enterprise volume, audit-heavy projects, long coexistence windows, or when you need full replay and reconciliation capabilities.

This differs from Approach 2 in its emphasis on staging, idempotency, and replayability. Ashby's syncToken model fits well once you switch from backfill to delta sync. (developers.ashbyhq.com)

Approach 5: Middleware / Integration Platforms (Zapier, Make)

How it works: Use low-code platforms for event-driven sync after the historical data move, not as the primary migration engine.

When to use it: Lightweight coexistence, notifications, downstream enrichment, or ongoing delta sync post-cutover.

Zapier's Ashby app exposes triggers like Application Changed Stage and Candidate Hired, plus actions like Create Candidate and Create Candidate Note. Lever supports webhooks for applicationCreated, candidateHired, and candidateStageChange, and retries failed deliveries up to five times. Make currently lists Ashby apps as community/partner connectors rather than first-party modules. (zapier.com)

These tools are weak for scorecards, attachments, and bulk history. Use them for sync, not migration.

Comparison Table

| Approach | Best Fit | Relationship Fidelity | Ongoing Sync | Internal Engineering | Complexity |
|---|---|---|---|---|---|
| CSV import/export | Active pipeline basics | Low | No | Low | Low |
| Direct API migration | One-time high-fidelity move | High | Possible | High | High |
| Third-party tool/service | Low-bandwidth teams | Medium–High | Varies | Low | Low–Medium |
| Custom ETL pipeline | Enterprise, audit-heavy | Highest | Yes | Highest | High |
| Middleware / iPaaS | Delta sync after cutover | Low–Medium | Yes | Low | Low–Medium |

Decision Matrix

| Scenario | Recommended Approach |
|---|---|
| < 500 candidates, no historical data needed | CSV or vendor-led file migration |
| Full history, dedicated eng team, 2–4 weeks available | Direct API or Custom ETL |
| Full history, limited eng bandwidth | Managed migration service |
| Enterprise scale (10k+ candidates) | Custom ETL or managed service |
| Ongoing sync during parallel run | Custom ETL with sync tokens, or middleware post-backfill |

For more on why AI-generated migration scripts often miss critical edge cases, see Why DIY AI Scripts Fail.

Extracting Data from Lever: API Quirks and Constraints

The Lever API is REST-based, available at https://api.lever.co/v1. Here's what you need to know before writing extraction code. (hire.lever.co)

Authentication

Lever uses OAuth 2.0 with tokens that expire after one hour. Your extraction script must handle token refresh automatically. API keys are also available for simpler integrations, using HTTP Basic Auth with the key as the username.

Warning

Lever API keys created without confidential data access cannot retrieve confidential postings, opportunities, or candidates. You must grant confidential access at key creation time — it cannot be added later. If your extraction script misses confidential records, you'll have a silent gap in your migration.

Rate Limits

Lever enforces a steady-state rate limit of 10 requests per second per API key, with bursts up to 20 requests per second. Application POST requests via the Postings API have a stricter limit of 2 requests per second. (github.com) Exceeding these limits returns a 429 Too Many Requests status code.

Implement a token bucket or leaky bucket rate limiter in your extraction script. Exponential backoff on 429s is mandatory — if you do not handle 429 responses correctly, your script will silently drop candidate records during extraction.
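A minimal token bucket plus backoff helper, sketched against Lever's documented 10 req/sec steady-state and 20 req/sec burst limits:

```python
import time

class TokenBucket:
    """Token bucket: refills at `rate` tokens/sec, allows bursts up to `capacity`."""
    def __init__(self, rate=10, capacity=20):
        self.rate, self.capacity = rate, capacity
        self.tokens, self.last = float(capacity), time.monotonic()

    def acquire(self):
        """Block until one token is available, then consume it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)  # wait for the bucket to refill
            self.tokens = 1.0
        self.tokens -= 1

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff schedule for 429 responses, capped at `cap` seconds."""
    return min(cap, base * (2 ** attempt))
```

Call `bucket.acquire()` before every Lever request, and sleep `backoff_delay(attempt)` (or the Retry-After header, if present) on each 429 before retrying.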

Pagination

Lever uses offset-token pagination. Each paginated response includes a next attribute containing an offset token for the next page. The default page size is 100 results, configurable between 1 and 100. You cannot construct offset tokens manually — you must use the token returned in the previous response.

What to Extract

Extract in this order to maintain referential integrity:

  1. Users (GET /users) — needed to map owners, followers, interviewers
  2. Postings (GET /postings) — needed to map to Ashby Jobs
  3. Stages (GET /stages) — needed to map pipeline stages
  4. Opportunities (GET /opportunities?expand=applications,stage,sourcedBy,owner) — the core data
  5. Notes (GET /opportunities/{id}/notes) — per-opportunity
  6. Feedback (GET /opportunities/{id}/feedback) — interview feedback per opportunity
  7. Resumes/Files (GET /opportunities/{id}/resumes) — binary file downloads
Tip

Use Lever's expand parameter aggressively. Expanding applications, stage, sourcedBy, and owner inline reduces the total number of API calls by 4–5x compared to fetching each relationship separately.

import requests
import time
 
BASE_URL = "https://api.lever.co/v1"
API_KEY = "your_lever_api_key"
 
def extract_all_opportunities():
    """Extract all opportunities with pagination and rate limiting."""
    opportunities = []
    offset = None
    
    while True:
        params = {"limit": 100, "expand": "applications,stage,sourcedBy,owner"}
        if offset:
            params["offset"] = offset
        
        response = requests.get(
            f"{BASE_URL}/opportunities",
            auth=(API_KEY, ""),
            params=params
        )
        
        if response.status_code == 429:
            retry_after = int(response.headers.get("Retry-After", 5))
            time.sleep(retry_after)
            continue
        
        response.raise_for_status()
        data = response.json()
        
        opportunities.extend(data.get("data", []))
        
        if data.get("hasNext") and data.get("next"):
            offset = data["next"]
            time.sleep(0.1)  # Respect 10 req/sec limit
        else:
            break
    
    return opportunities

Loading Data into Ashby: Navigating the RPC Architecture

Ashby's API is architecturally different from Lever's. Understanding these differences before you write import code will save you days of debugging.

RPC-Style Endpoints

Ashby uses an RPC-style API where endpoints follow the form /CATEGORY.method. Most endpoints take POST requests, even for what would typically be a GET in a REST API. All request parameters are sent in JSON bodies with Content-Type: application/json. (developers.ashbyhq.com)

# List all candidates — this is a POST, not a GET
curl -X POST https://api.ashbyhq.com/candidate.list \
  -u YOUR_API_KEY: \
  -H "Content-Type: application/json" \
  -d '{"limit": 100}'

Authentication

Ashby uses HTTP Basic Auth with a long-lived API key as the username and an empty password. Unlike Lever's OAuth tokens, Ashby keys don't expire — but each key has scoped permissions configured at creation time.

Ensure your API key has these permissions at minimum:

  • candidatesRead and candidatesWrite
  • interviewsRead
  • jobsRead
  • reportsRead (if using report endpoints for validation)

Rate Limits

Ashby enforces a rate limit of 1,000 requests per minute per API key. Report endpoints have a stricter limit of 15 requests per minute per organization with a maximum of 3 concurrent report operations.

At 1,000 requests/minute, importing 10,000 candidates with their applications, notes, tags, and feedback could require 50,000+ API calls — roughly 50 minutes of sustained throughput at maximum rate. Build in buffer for retries.
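That arithmetic is worth scripting into your planning doc — a back-of-envelope estimator (the five-calls-per-candidate figure mirrors the example above; tune it to your own object counts):

```python
def estimate_load_minutes(candidates, calls_per_candidate=5,
                          rate_per_min=1000, retry_buffer=1.2):
    """Estimate load-phase wall time at Ashby's documented 1,000 req/min limit.

    `retry_buffer` pads for retries and validation reads (assumed 20% here).
    """
    return candidates * calls_per_candidate * retry_buffer / rate_per_min
```

With 10,000 candidates and no buffer this reproduces the ~50 minutes quoted above; a realistic retry buffer pushes it to about an hour.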

Pagination and Sync Tokens

Ashby's list endpoints use cursor-based pagination with a nextCursor and moreDataAvailable flag. They also support sync tokens for incremental syncs — useful if you need a parallel run where both systems operate simultaneously. If you use incremental sync, run it at least weekly so syncToken values do not expire. (developers.ashbyhq.com)
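A drain loop for that cursor scheme, with the HTTP call injected so the sketch stays testable — field names follow the nextCursor / moreDataAvailable / syncToken convention described above; in practice, `fetch_page` wraps a `requests.post` against e.g. candidate.list:

```python
def paginate_ashby(fetch_page, sync_token=None):
    """Drain an Ashby-style cursor-paginated list endpoint.

    `fetch_page` is any callable taking a params dict and returning the
    parsed JSON body. Returns (all results, syncToken for the next delta run).
    """
    results, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        if sync_token:
            params["syncToken"] = sync_token
        body = fetch_page(params)
        results.extend(body.get("results", []))
        if body.get("moreDataAvailable"):
            cursor = body["nextCursor"]
        else:
            return results, body.get("syncToken")
```

Persist the returned syncToken after each run so weekly delta syncs pick up where the backfill stopped.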

Load Order

Load data in dependency order:

  1. Jobs — job.create (or verify existing jobs match Lever Postings)
  2. Candidates — candidate.create
  3. Applications — application.create (links Candidate to Job)
  4. Notes — candidate.createNote
  5. Tags — candidate.addTag
  6. Custom Fields — customField.setValues
  7. Feedback — applicationFeedback.submit (requires pre-configured feedback forms)
  8. Resumes — candidate.uploadResume
Danger

Always load in dependency order. If you try to create an Application before its Job exists in Ashby, the call will return 200 OK with success: false and a requested_job_not_found error — which your script won't catch unless you're checking the response body.

import requests
 
ASHBY_BASE = "https://api.ashbyhq.com"
ASHBY_KEY = "your_ashby_api_key"
 
def ashby_request(endpoint, payload):
    """Make an Ashby API request with proper error handling."""
    response = requests.post(
        f"{ASHBY_BASE}/{endpoint}",
        auth=(ASHBY_KEY, ""),
        headers={"Content-Type": "application/json"},
        json=payload
    )
    
    # Standard HTTP errors (401, 403) still use proper status codes
    if response.status_code in (401, 403):
        raise Exception(f"Auth error: {response.status_code}")
    
    result = response.json()
    
    # THE TRAP: 200 OK with success: false
    if not result.get("success"):
        error_info = result.get("errorInfo", {})
        code = error_info.get("code", "unknown")
        message = error_info.get("message", "No message")
        request_id = error_info.get("requestId", "N/A")
        raise Exception(
            f"Ashby error [{code}]: {message} (requestId: {request_id})"
        )
    
    return result.get("results")
 
def create_candidate(candidate_data):
    """Create a candidate in Ashby."""
    return ashby_request("candidate.create", {
        "name": candidate_data["name"],
        "emailAddresses": candidate_data.get("emails", []),
        "phoneNumbers": candidate_data.get("phones", []),
        "socialLinks": candidate_data.get("links", []),
    })
 
def create_application(candidate_id, job_id, stage_id=None):
    """Link a candidate to a job via an application."""
    payload = {
        "candidateId": candidate_id,
        "jobId": job_id,
    }
    if stage_id:
        payload["interviewStageId"] = stage_id
    
    return ashby_request("application.create", payload)

Edge Cases That Silently Corrupt Migrations

The Ashby 200 OK Error Trap

This is the single most dangerous behavior in the Ashby API for migration scripts. Ashby returns HTTP 200 OK status codes even when a request fails. The actual success/failure is buried in the response body's success field. (developers.ashbyhq.com)

What would be 4XX errors return 200 with success: false, plus an errorInfo object containing the error code and message:

{
  "success": false,
  "errorInfo": {
    "code": "application_not_found",
    "message": "Application not found - are you lacking permissions to edit candidates?",
    "requestId": "01JRVWPBWZ40S39G83ZETPXF2E"
  }
}

If your migration script checks only HTTP status codes — standard practice for REST APIs — every failed write will appear to succeed. You'll finish the migration thinking everything worked, only to discover missing records days later. This is exactly why DIY AI migration scripts fail.

The fix: Every single Ashby API call must parse the response body and check success === true before proceeding. Wrap this in a utility function and use it everywhere.

Duplicate Candidates

Lever deduplicates candidates by email address automatically. Ashby does not enforce deduplication on candidate.create — you can create multiple candidate records with the same email. Your migration script must deduplicate before loading, or you'll end up with phantom duplicates that break reporting.

Build a local lookup table: {email → ashby_candidate_id}. Before every candidate.create, check the table. If the email exists, reuse the existing ID.
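A dedup pass over the staged contacts might look like this — keying on the first email, lowercased, with email-less contacts falling back to their Lever ID so they are never merged by accident:

```python
def dedup_contacts(lever_contacts):
    """Group Lever contacts by primary email (case-insensitive).

    Returns {dedup_key: [contacts...]}; groups with more than one entry are
    the duplicates that must collapse into a single Ashby Candidate.
    """
    by_email = {}
    for contact in lever_contacts:
        emails = contact.get("emails") or []
        key = emails[0].lower() if emails else f"no-email:{contact['id']}"
        by_email.setdefault(key, []).append(contact)
    return by_email
```

Feed each group's merged record into candidate.create once, then record the resulting ID in your {email → ashby_candidate_id} table.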

Multi-Opportunity Candidates

In Lever, a single Contact can have 5, 10, or even 20+ Opportunities (one per job they were considered for). In Ashby, this translates to:

  • One Candidate record
  • Multiple Application records, each linked to a different Job

The correct migration flow for a multi-opportunity contact:

  1. Create the Candidate once using contact-level data (name, email, phone)
  2. For each Opportunity, create a separate Application linking the Candidate to the corresponding Job
  3. Attach notes, feedback, and tags to the correct Application or Candidate

If you create a new Candidate per Opportunity, you'll have duplicates. If you skip Opportunities, you'll lose hiring history. (hire.lever.co)
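The correct flow above, sketched with the creator calls injected as callables (reuse wrappers like the create_candidate / create_application helpers shown earlier; this assumes each Opportunity row has been flattened in staging to carry a postingId, which is a simplification of Lever's nested shape):

```python
def migrate_contact(contact, opportunities, job_map, candidate_cache,
                    create_candidate, create_application):
    """Create one Ashby Candidate per Lever Contact, one Application per Opportunity.

    `job_map` maps Lever posting IDs -> Ashby job IDs; `candidate_cache` maps
    primary email -> existing Ashby candidate ID (the dedup lookup table).
    """
    email = (contact.get("emails") or [None])[0]
    key = email.lower() if email else None
    if key and key in candidate_cache:
        candidate_id = candidate_cache[key]          # dedup: reuse the existing record
    else:
        candidate_id = create_candidate(contact)["id"]
        if key:
            candidate_cache[key] = candidate_id
    application_ids = []
    for opp in opportunities:
        job_id = job_map[opp["postingId"]]           # KeyError = unmapped job, fail loudly
        application_ids.append(create_application(candidate_id, job_id)["id"])
    return candidate_id, application_ids
```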

Candidates Without Job Context

Lever allows candidates that are not applied to a specific posting. Decide upfront whether these become candidate-only records in Ashby, lead-stage applications, or archive-only records. There's no single right answer — it depends on your team's reporting needs. (hire.lever.co)

Interview Feedback and Scorecards

Lever stores feedback as structured objects per Opportunity. Ashby's applicationFeedback.submit endpoint requires a pre-configured feedback form ID and expects field values that match the form's schema. You cannot dump Lever feedback text directly into Ashby — you must either:

  • Create a generic "Migrated Feedback" form in Ashby and map all Lever feedback into a single text field
  • Pre-create matching feedback forms in Ashby and map Lever feedback fields to the new form fields

The first option preserves content; the second preserves structure. Most migrations choose the first for speed. If scorecard fidelity matters, plan for API recreation — Ashby's file-based import may convert some forms and interviews into notes. (developers.ashbyhq.com)
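For the first option, a flattener like this turns a Lever feedback object into a single text blob for the generic form (the `fields` / `text` / `value` shape is a simplified assumption — verify it against your actual Lever export):

```python
def flatten_lever_feedback(feedback):
    """Render one Lever feedback object as a text blob for a 'Migrated Feedback' form."""
    lines = [f"Migrated from Lever feedback {feedback.get('id', 'unknown')}"]
    for field in feedback.get("fields", []):
        # Keep the original question text so the blob stays readable to recruiters.
        lines.append(f"{field.get('text', 'Field')}: {field.get('value', '')}")
    return "\n".join(lines)
```

The blob then goes into the single text field of your pre-created form via applicationFeedback.submit.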

Attachments and Resumes

Lever's API lets you download resumes via GET /opportunities/{id}/resumes/{resumeId}/download. Lever can return 422 Unprocessable Entity for file downloads that were not processed correctly. Ashby's file upload capabilities through the API are more limited — plan for resume migration to be a separate, independently monitored stream. (hire.lever.co)

Confidential Data

Lever postings, opportunities, and requisitions can be marked as confidential. This data is only accessible if the API key was granted confidential access at creation time. If your extraction script misses confidential records, you'll have a silent gap in your migration. Always create a Lever API key with confidential data access for migration purposes.

Migration Architecture: The Full Pipeline

Here's the end-to-end data flow for an API-based migration:

┌─────────────┐     ┌───────────────┐     ┌──────────────┐
│   EXTRACT   │────▶│   TRANSFORM   │────▶│     LOAD     │
│  Lever API  │     │  Local Store  │     │  Ashby API   │
│  (REST)     │     │  (JSON/DB)    │     │  (RPC)       │
└─────────────┘     └───────────────┘     └──────────────┘
     │                      │                     │
     ▼                      ▼                     ▼
  10 req/sec          Deduplicate           1000 req/min
  Offset tokens       Map objects           POST for reads
  OAuth tokens        Convert types         Check success:true
  Expand params       Build lookup tables   Cursor pagination

Do not transform directly from one API response into the next API request. Stage raw data locally so the pipeline is replayable and auditable.

Extract Phase

  1. Authenticate with Lever (OAuth 2.0 or API key)
  2. Extract Users → Postings → Stages → Opportunities (with expansions) → Notes → Feedback → Files
  3. Store raw JSON locally or in a staging database
  4. Maintain source IDs for every record

Transform Phase

  1. Deduplicate Contacts by email → produce a unique Candidate list
  2. Map Lever Postings to Ashby Jobs (create Jobs in Ashby if they don't exist)
  3. Map Lever Stages to Ashby Interview Stages (by name matching)
  4. Convert timestamps from Unix epoch ms to ISO 8601
  5. Map custom field keys to Ashby custom field IDs
  6. Build lookup tables: {lever_contact_id → ashby_candidate_id}, {lever_posting_id → ashby_job_id}, {lever_stage_id → ashby_stage_id}

Load Phase

  1. Create Jobs in Ashby (if not already existing)
  2. Create Candidates (deduplicated)
  3. Create Applications (linking Candidates to Jobs)
  4. Attach Notes to Candidates
  5. Submit Feedback to Applications
  6. Add Tags to Candidates
  7. Set Custom Fields
  8. Upload Resumes
  9. Validate record counts against source

At minimum, log: source object type, source ID, target ID, payload hash, attempt count, final status, and last error message.
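A minimal audit-row helper covering exactly those fields (sketched with an in-memory list; swap in your staging DB):

```python
import hashlib
import json
import time

def log_write(log, source_type, source_id, target_id, payload, status,
              attempt, error=None):
    """Append one load-phase audit row: what was written, where, and how it went."""
    log.append({
        "source_type": source_type,
        "source_id": source_id,
        "target_id": target_id,
        # Hash the payload so replays can detect silently changed inputs.
        "payload_hash": hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest(),
        "attempt": attempt,
        "status": status,
        "last_error": error,
        "logged_at": time.time(),
    })
```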

Validation and Testing

Migration without validation is just data loss you haven't discovered yet.

Record Count Comparison

After migration, compare counts between source and target:

| Record Type | Lever Count | Ashby Count | Delta | Action |
|---|---|---|---|---|
| Unique Contacts | SELECT COUNT(DISTINCT contact_id) | candidate.list total | Must be 0 | Investigate any mismatch |
| Opportunities | SELECT COUNT(*) | application.list total | Must be 0 | Check for skipped archived records |
| Postings/Jobs | GET /postings count | job.list total | May differ | Only active postings may be migrated |
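The comparison automates well — a sketch that flags every record type whose delta must be zero, while tolerating types (like jobs) that may legitimately differ:

```python
def compare_counts(lever_counts, ashby_counts, tolerated=("jobs",)):
    """Return {record_type: (source, target)} for every disallowed mismatch."""
    mismatches = {}
    for record_type, source_count in lever_counts.items():
        target_count = ashby_counts.get(record_type, 0)
        if source_count != target_count and record_type not in tolerated:
            mismatches[record_type] = (source_count, target_count)
    return mismatches
```

Run it after every load phase; a non-empty result should halt the pipeline, not just log a warning.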

Field-Level Validation

Sample 5–10% of migrated records and compare field-by-field:

  • Candidate name, email, phone
  • Application stage, source, created date
  • Notes content and timestamps
  • Tag assignments
  • Custom field values

Sampling Strategy

  • Random sample: 5% of total records, minimum 50
  • Edge case sample: Candidates with 3+ Opportunities, candidates with confidential data, candidates with custom fields
  • Boundary sample: First and last records by creation date (catches pagination bugs)
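The random and boundary samples can be drawn like this (edge-case records get appended separately; the fixed seed makes validation reruns reproducible):

```python
import random

def build_validation_sample(records, rate=0.05, minimum=50, seed=42):
    """Draw the random sample plus first/last-by-createdAt boundary records."""
    by_created = sorted(records, key=lambda r: r["createdAt"])
    size = max(minimum, int(len(records) * rate))
    rng = random.Random(seed)
    sample = rng.sample(records, min(size, len(records)))
    # Boundary records catch pagination bugs at either end of the extract.
    for boundary in (by_created[0], by_created[-1]):
        if boundary not in sample:
            sample.append(boundary)
    return sample
```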

UAT Process

  1. Migrate into a test workspace first (Ashby doesn't provide a public sandbox, so use a separate workspace or restricted API key)
  2. Have recruiters spot-check 20–30 candidate profiles against Lever
  3. Verify pipeline stage assignments match expected state
  4. Confirm interview feedback is readable and attributed correctly

Treat rollback as a re-run plan, not as undoing every API write. Ashby does not have a built-in "undo migration" function. Keep Lever active until Ashby validation is complete — do not cancel your Lever subscription until you've confirmed data integrity.

Pre-Migration Planning Checklist

Ashby explicitly recommends setting up departments, teams, and jobs before importing active candidates. That is also the right order for API-based migrations because applications depend on resolved job and stage metadata. (docs.ashbyhq.com)

Before writing a single line of migration code:

  • Data audit: Count all Contacts, Opportunities (active + archived), Postings, Notes, Feedback entries, and Files in Lever
  • Scope definition: Decide what to migrate. Archived opportunities older than 3 years? Draft postings? Confidential roles?
  • Custom field inventory: List all custom fields in Lever, their types, and whether they have Ashby equivalents
  • User mapping: Map Lever users to Ashby users (for ownership, followers, and credited-to fields)
  • Stage mapping: Map Lever pipeline stages to Ashby interview plan stages for each job
  • Source mapping: Map Lever sources/origins to Ashby source categories
  • Archive reason mapping: Map Lever archive reasons to Ashby archive reasons
  • API key creation: Create Lever API key with confidential data access; create Ashby API key with all necessary read/write permissions
  • Migration strategy: Choose between big bang (simplest operationally, highest freeze risk), phased (lower cutover shock, coexistence complexity), or incremental (least interruption, hardest sync logic)
  • Timeline: Plan for a test migration, validation, fixes, and a final production migration
  • Parallel run window: Determine how long both systems will operate simultaneously

Post-Migration Tasks

After data lands in Ashby:

  1. Rebuild interview plans. Lever's pipeline stages don't automatically map to Ashby's structured interview plans. Configure interview stages, feedback forms, and scorecard templates in Ashby for each job.
  2. Recreate automations. Lever's workflow rules (auto-archive, auto-advance, email triggers) don't transfer. Rebuild them using Ashby's automation builder.
  3. Reconnect integrations. Job board feeds, HRIS syncs, Slack notifications, and calendar connections all need to be set up fresh in Ashby.
  4. Verify permissions. Confirm recruiter permissions and confidential job access are correct in Ashby.
  5. Train your team. Ashby's UI paradigm and candidate/application terminology differ from Lever. Schedule hands-on training sessions covering candidate search, application review, scheduling, and reporting.
  6. Monitor for 2 weeks. Watch for missing data, broken reports, or candidate experience issues. Have a point person checking daily.

For broader guidance on ATS migration compliance, see GDPR & CCPA Compliance When Migrating Candidate Data.

Best Practices

  • Backup everything first. Extract all Lever data to local JSON files before starting. This is your safety net.
  • Run test migrations. Never migrate directly into production Ashby. Run at least two dry runs against a test workspace.
  • Validate incrementally. Don't wait until the end to check data. Validate after each phase (candidates, then applications, then notes).
  • Log every API call. Store request payloads, response bodies, and Ashby's requestId for every write operation. When something breaks — and it will — logs are your forensic tool.
  • Automate deduplication. Build email-based dedup into the pipeline. Manual dedup after migration is brutal.
  • Preserve source IDs. Store Lever's contactId and opportunityId in a mapping table alongside Ashby's candidateId and applicationId. You'll need this for post-migration debugging.
  • Separate transport errors from business-rule errors. A 429 from Lever is a retry. A success: false from Ashby with application_not_found is a data integrity problem. Handle them differently.
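That last distinction is worth making explicit in code. The sketch below is a minimal, hypothetical classifier (the function name and return labels are ours, not from either vendor's SDK): it assumes Ashby's documented response shape of a top-level success boolean with an errors array, and treats HTTP 429 as the only automatic retry.

```python
def classify_ashby_response(status_code: int, body: dict) -> str:
    """Decide how to handle one Ashby RPC response.

    Ashby returns business-rule failures (e.g. application_not_found)
    as HTTP 200 with success: false in the JSON body, so checking the
    status code alone is not enough.
    """
    if status_code == 429:
        return "retry"            # rate limited: back off, then retry
    if status_code >= 400:
        return "transport_error"  # other HTTP failure: retry or alert
    if not body.get("success", False):
        return "business_error"   # 200 OK but the write was rejected:
                                  # log it, do NOT retry blindly
    return "ok"
```

A retry loop wraps "retry" and "transport_error"; a "business_error" goes to a dead-letter log for manual reconciliation, since replaying it would just fail again on the same data problem.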

Why ClonePartner for Lever to Ashby Migrations

The cheap part of this migration is the script. The expensive part is discovering every dependent object, building idempotent loaders, handling Lever's 10 req/sec cap and Ashby's success: false / 200 OK behavior, preloading metadata, testing with recruiters, and reconciling the edge cases that only surface after the first dry run.

ClonePartner has handled complex ATS migrations where these exact problems — Ashby's 200 OK error trap, Lever's rate throttling, multi-opportunity candidate splitting — are solved infrastructure, not open questions. Our migration pipeline includes:

  • Pre-built response validation that catches Ashby's non-standard error pattern on every API call
  • Automated rate limit handling for Lever's 10 req/sec and Ashby's 1,000 req/min, including 429 retry logic
  • Relationship-chain preservation that keeps Candidate → Application → Feedback links intact
  • Custom field mapping and stage matching with explicit documentation when the target schema forces a compromise
  • Post-migration validation with record-count reconciliation and field-level sampling

We handle the full extract-transform-load pipeline — typically completed in days, not weeks. Your recruiting team keeps working in Lever until the cutover; your engineering team stays focused on product work.

Frequently Asked Questions

How long does a Lever to Ashby migration take?
A CSV-based migration for under 500 candidates can be done in a day. A full API-based migration preserving interview history, feedback, and relationships typically takes 2-4 weeks for a custom ETL build, or days with a managed migration service.
What are the Lever and Ashby API rate limits?
Lever enforces 10 requests per second per API key (2 req/sec for application POSTs). Ashby allows 1,000 requests per minute per API key. Report endpoints in Ashby are further limited to 15 requests per minute.
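A client-side throttle keeps your ETL under both caps without waiting for 429s. This is a minimal sliding-window sketch (the RateLimiter class is our own illustration, not a vendor SDK); instantiate it with 10 calls per second for Lever or 1000 per 60 seconds for Ashby.

```python
import time

class RateLimiter:
    """Allow at most max_calls per period seconds (sliding window)."""

    def __init__(self, max_calls: int, period: float):
        self.max_calls = max_calls
        self.period = period
        self.timestamps = []  # monotonic times of recent calls

    def acquire(self) -> None:
        """Block until a call is permitted, then record it."""
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        self.timestamps = [t for t in self.timestamps if now - t < self.period]
        if len(self.timestamps) >= self.max_calls:
            # Sleep until the oldest call in the window expires.
            time.sleep(self.period - (now - self.timestamps[0]))
        self.timestamps.append(time.monotonic())

# Example: lever_limiter = RateLimiter(10, 1.0)  -> call acquire()
# before every Lever API request.
```

Even with a throttle, keep the 429 retry path: other clients may share the same API key, and clocks drift.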
Can I migrate interview feedback from Lever to Ashby?
Yes, but not via CSV. You need to extract feedback per Opportunity via Lever's API, then load it into Ashby via the applicationFeedback.submit endpoint. Ashby requires pre-configured feedback forms, so you must create matching forms before importing — or create a generic 'Migrated Feedback' form and map all Lever feedback into a text field.
Why can't I use a CSV export to migrate from Lever to Ashby?
CSV exports flatten relational data. You lose the links between candidates, applications, interview notes, and scorecards. Multi-opportunity candidates become duplicate rows. Ashby explicitly says its self-serve CSV import is not a comprehensive candidate-lifecycle migration.
How do I avoid duplicate candidates during the migration?
Ashby does not enforce deduplication on candidate.create. Build a local lookup table keyed by canonical email address. Before every candidate.create call, check whether that email already has an Ashby candidate ID. One Lever Contact with multiple Opportunities should produce one Candidate and multiple Applications.
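The lookup table described above can be sketched as follows. This is an illustrative snippet, not production code: canonical_email and get_or_create_candidate are hypothetical helper names, and the Gmail dot/plus folding is an assumption you should enable only if it fits your data.

```python
def canonical_email(raw: str) -> str:
    """Normalize an email for dedup: trim, lowercase, and (as an
    assumption) fold Gmail-style dots and plus-tags."""
    email = raw.strip().lower()
    local, _, domain = email.partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

email_to_candidate_id = {}  # canonical email -> Ashby candidateId

def get_or_create_candidate(contact: dict, create_fn) -> str:
    """One Lever Contact yields one Ashby Candidate, no matter how
    many Opportunities reference it. create_fn wraps the actual
    candidate.create API call and returns the new candidateId."""
    key = canonical_email(contact["email"])
    if key not in email_to_candidate_id:
        email_to_candidate_id[key] = create_fn(contact)
    return email_to_candidate_id[key]
```

Persist the mapping table (e.g. to SQLite) so a restarted migration run does not re-create candidates it already loaded.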
