
Lever to Greenhouse Migration: The CTO's Technical Guide

Technical guide for CTOs on migrating from Lever to Greenhouse. Covers Opportunity-to-Application mapping, API rate limits, scorecard limits, and ETL architecture.

Raaj Raaj · 22 min read


Migrating from Lever to Greenhouse is a data-model translation problem, not a CSV upload. Lever is opportunity-centric — a single Contact can have multiple Opportunities, each representing a candidacy through your pipeline. Greenhouse separates this into distinct Candidate and Application records, where Applications link Candidates to Jobs. A naive export flattens your Opportunity history, silently drops interview feedback, and leaves you with zero scorecard data on the other side.

The fundamental challenge: Lever was designed for sourcing-heavy teams, treating every interaction as a fluid Opportunity. Greenhouse is built for structured evaluation pipelines with rigid stages, scorecards, and job-specific applications. When you move data between these systems, you're translating a relationship-first model into a process-enforcement model. This requires splitting Lever records, deduplicating contacts, re-associating historical interview data, and navigating strict API rate limits on both sides.

This guide covers the object-mapping decisions you need to make, every viable migration method and its trade-offs, the API constraints that will bottleneck your ETL scripts, and the edge cases that break most DIY attempts.

If you're dealing with other ATS migrations, see our coverage on common ATS migration gotchas and GDPR/CCPA compliance during candidate data transfers.

Warning

Do not collapse Lever's contact and opportunity into a single flat row. Keep two immutable source keys throughout the project: one for the person (contact) and one for the candidacy (opportunityId).
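One way to enforce this in an ETL script is to attach both keys to every staged record at extraction time. A minimal sketch (the record shape here is illustrative, not a Lever API contract):

```javascript
// Carry both source keys on every staged row so a person and a candidacy
// can always be traced separately, even after deduplication.
function stageRow(leverOpportunity) {
  const { contact, id } = leverOpportunity; // Lever: contact = person, id = opportunityId
  if (!contact || !id) throw new Error('Row is missing a source key');
  return {
    sourcePersonKey: contact,   // survives dedup into one Greenhouse Candidate
    sourceCandidacyKey: id,     // maps 1:1 to a Greenhouse Application
    raw: leverOpportunity,      // keep the untouched payload for auditing
  };
}
```

Rows that fail this check should go to an error queue, not be silently dropped.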

Why Companies Migrate from Lever to Greenhouse

The drivers typically fall into three categories:

  • Structured hiring at scale. Greenhouse enforces consistent, auditable interview processes across departments and geographies. Its scorecard system, approval workflows, and configurable interview plans are more rigid than Lever's — by design.
  • Integration ecosystem. Greenhouse integrates with over 1,000 job boards and has a broader partner network for background checks, assessments, and HRIS connectors. Companies scaling past ~200 employees often find Greenhouse's ecosystem covers more of their tool stack.
  • Compliance and DEI. Greenhouse offers structured EEOC data collection, anonymized interview scoring, and built-in diversity analytics. For companies in regulated industries or with formal DEI mandates, these features reduce the compliance engineering burden.

Lever's strengths — its native CRM, candidate nurture workflows, and relationship-first design — serve teams that prioritize candidate engagement. The migration typically happens when the organization outgrows that model and needs process enforcement over relationship flexibility.

Data Model & Object Mapping: Opportunities vs. Applications

This is the most important section of this guide. Get the object mapping wrong and every downstream step — scripting, validation, UAT — inherits the error.

The Core Structural Mismatch

Lever's model: A Contact represents a unique person. Each Contact can have multiple Opportunities, where each Opportunity represents a specific candidacy for a role. An Opportunity contains the application, notes, feedback, interview panels, offers, and stage history. The contact field is the unique person identifier; the opportunityId is the specific candidacy. (hire.lever.co)

Greenhouse's model: A Candidate represents a unique person. Each Candidate can have multiple Applications, where each Application ties the Candidate to a specific Job. Candidate applications always have exactly one job; prospect applications can have zero or more jobs. Scorecards, scheduled interviews, and offers live as children of the Application. Activity feed items (notes, emails) live on the Candidate record. (developers.greenhouse.io)

Lever
  contact (person)
    └─ opportunity (candidacy)
         ├─ stageChanges
         ├─ notes
         ├─ feedback
         ├─ files
         └─ offers
 
Greenhouse
  candidate (person)
    └─ application (candidacy)
         ├─ current_stage
         ├─ answers / custom_fields
         ├─ attachments
         └─ scorecards / offers

Object Mapping Table

| Lever Object | Greenhouse Equivalent | Notes |
| --- | --- | --- |
| Contact | Candidate | 1:1 mapping. Use Lever's contact field as the dedup key. |
| Opportunity | Application | Each Lever Opportunity maps to one Greenhouse Application on a Job. |
| Posting | Job + Job Post | Lever combines the internal job config and public posting. Greenhouse separates them. |
| Feedback (per Opportunity) | Scorecard | Read-only in Greenhouse API. Cannot create scorecards via API — must import as notes. |
| Notes | Activity Feed (Notes) | Map to POST /candidates/{id}/activity_feed/notes. |
| Interview Panels | Scheduled Interviews | Greenhouse requires an application_id and valid interviewers. |
| Offers | Offers | Field-level mapping required. Offer custom fields only available on Enterprise tier. |
| Requisition | Job Opening / Requisition | Lever requisitions map to Greenhouse openings + requisition IDs on jobs. |
| Tags | Candidate Tags | Direct mapping. |
| Archive Reasons | Rejection Reasons | No 1:1 mapping. Build a lookup table. |
| Sources | Sources | Create matching sources in Greenhouse before import. |
| Resume Files | Attachments | Lever resume download URLs are temporary. Download immediately during extraction. |

Field-Level Mapping Reference

| Lever Field | Greenhouse Field | Transform Required |
| --- | --- | --- |
| contact.name | candidate.first_name + candidate.last_name | Split on first space |
| contact.emails[] | candidate.email_addresses[] | Restructure to {value, type} |
| contact.phones[] | candidate.phone_numbers[] | Restructure to {value, type} |
| contact.links[] | candidate.website_addresses[] | Map LinkedIn to website_addresses |
| opportunity.sources[] | application.source.id | Look up source ID in Greenhouse |
| opportunity.stage | application.current_stage | Map stage names → Greenhouse stage IDs |
| opportunity.tags[] | candidate.tags[] | Direct copy |
| opportunity.archived.reason | application.rejection_reason | Map via rejection reason lookup table |
| opportunity.createdAt | application.applied_at | Convert from Unix ms → ISO 8601 |
| opportunity.headline | candidate.title | Direct copy |
| opportunity.company | candidate.company | Direct copy |
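The mechanical transforms in this table are small pure functions. Sketches of three of them (field shapes follow the mapping above; verify exact names against your own extract):

```javascript
// opportunity.createdAt → application.applied_at (Unix ms → ISO 8601)
const unixMsToIso = (ms) => new Date(ms).toISOString();

// contact.name → first_name / last_name (split on first space)
function splitName(fullName) {
  const parts = (fullName || '').trim().split(/\s+/).filter(Boolean);
  return {
    first_name: parts[0] || 'Unknown',
    last_name: parts.slice(1).join(' ') || 'Unknown',
  };
}

// contact.emails[] (flat strings) → candidate.email_addresses[] ({value, type})
const toEmailObjects = (emails) =>
  (emails || []).map((value) => ({ value, type: 'personal' }));
```

Keeping these as standalone functions makes them trivially unit-testable before any API call is made.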
Warning

Scorecard limitation: Greenhouse's Harvest API provides GET endpoints for scorecards but does not expose a POST endpoint to create them. Interview feedback from Lever cannot be migrated as native Greenhouse scorecards. The standard workaround is to import feedback as structured notes on the candidate's activity feed, preserving the interviewer name, rating, and per-attribute feedback as formatted text. (developers.greenhouse.io)

Import Order: Dependency Chain

Greenhouse requires parent objects to exist before children can reference them. Create records in this sequence:

  1. Users — Map Lever users to Greenhouse users (required for On-Behalf-Of headers and recruiter/coordinator assignments)
  2. Departments and Offices — Set up organizational structure
  3. Jobs — Create jobs referencing Greenhouse template jobs (the API requires a template)
  4. Sources — Create custom sources matching Lever source names
  5. Candidates — Create candidate records with contact info and custom fields
  6. Applications — Create applications linking candidates to jobs
  7. Notes and Emails — Add activity feed items to candidates
  8. Attachments — Upload resumes and documents (load after the application exists)
  9. Offers — Create offers on applications (if migrating historical offers)

Migration Approaches: Native, API, Middleware, or Managed

There are four realistic ways to move your data. The right choice depends on data volume, fidelity requirements, and engineering bandwidth.

1. CSV Export/Import

How it works: Export candidates from Lever as CSV, then use Greenhouse's bulk import spreadsheet feature. Greenhouse's own support guidance recommends separate imports for current candidates, rejected history, and hired history, and limits each import to 8,000 rows. (support.greenhouse.io)

When to use it: Small teams (<500 candidates) doing a one-time migration where you only need basic profile data.

What you get: Candidate contact info, basic notes (if manually added to the spreadsheet), and resumes (as a .zip upload, up to 5 GB).

What you lose: Application-to-job relationships, pipeline stage history, interview scorecards as native objects, source attribution, custom field mappings, and any data requiring relational integrity. Greenhouse notes that interviews can't be backdated, though scorecards can be. Historical data may have reporting limitations. (support.greenhouse.io)

Complexity: Low.

Info

Greenhouse's bulk import supports custom candidate and application fields as additional columns, but the fields must already exist in Greenhouse and the values must pass validation. Historical imports can trigger GDPR or CCPA communications depending on your tenant configuration. (support.greenhouse.io)

2. Custom ETL (API-Based)

How it works: Write scripts that extract data from the Lever API (GET /opportunities, GET /opportunities/{id}/feedback, etc.), transform it to match Greenhouse's schema, and load it via the Greenhouse Harvest API (POST /candidates, POST /candidates/{id}/applications, etc.). This can range from straightforward extraction/loading scripts for medium datasets to a full staged architecture with canonical tables, checkpoint management, and resumable batches for enterprise volumes.

For larger datasets, land raw Lever JSON and files in a staging layer, normalize into canonical tables (person, candidacy, job, feedback, file), pre-create Greenhouse lookups, then load with checkpoints and audit logs.

When to use it: Any migration with >1,000 candidates, custom fields, feedback history, or the need to preserve relationships.

Pros: Full control over mapping logic. Can preserve relationships, stage history, and notes. Repeatable for test migrations. Highest fidelity when architected as a full pipeline.

Cons: You own rate limiting, idempotency, user mapping, and error handling. Greenhouse scorecard creation not supported via API. Attachment migration requires downloading Lever files immediately (URLs are temporary). All writes require the On-Behalf-Of header. Job creation requires a pre-existing template job. A full staged ETL pipeline takes 80–200 hours of engineering; simpler scripts take 40–80 hours.

Complexity: Medium to High.

3. iPaaS Platforms (Zapier, Make, Tray.ai)

How it works: Use drag-and-drop workflow builders to connect Lever triggers/actions to Greenhouse actions.

When to use it: Ongoing sync of new candidates between systems during a transition period. Not suitable for bulk historical migration.

Limitations:

  • Zapier's Greenhouse triggers use polling, not instant webhooks — this introduces latency and risks missed events at high volume
  • Neither Zapier nor Make handles the Greenhouse On-Behalf-Of header natively in their pre-built actions
  • Bulk historical data migration isn't feasible — these tools are designed for event-driven, one-at-a-time record processing
  • Tray.ai can handle more complex workflows but still requires significant configuration for nested objects

Complexity: Medium (for sync). Not viable for bulk migration.

4. Managed Migration Service

How it works: A migration specialist builds and operates the ETL pipeline, handles edge cases, runs test migrations, and validates the output.

When to use it: When your engineering team's time is better spent on product work, when you have complex data (custom fields, >10k candidates, attachments, feedback), or when zero-downtime is a requirement.

Complexity for your team: Low.

Approach Comparison

| Factor | CSV Import | Custom ETL | iPaaS (Zapier/Make) | Managed Service |
| --- | --- | --- | --- | --- |
| Candidate profiles | Yes | Yes | Yes | Yes |
| Application-job relationships | Partial | Yes | No | Yes |
| Interview feedback | No | As notes only | No | As notes only |
| Attachments/resumes | Manual | Yes | No | Yes |
| Custom fields | Limited | Yes | Limited | Yes |
| Stage history | No | Partial–Full | No | Yes |
| Ongoing sync | No | If extended | Yes | Optional |
| Scale | <8k rows/import | Medium–Enterprise | <100/day | Enterprise |
| Engineering effort | None | 40–200 hours | 10–20 hours | None |
| Risk of data loss | High | Low–Medium | High | Low |

Recommendations by scenario:

  • Small team, basic data: CSV import gets you running in hours. Accept some manual cleanup.
  • Mid-size, full history needed: Custom ETL or managed service.
  • Enterprise, zero downtime: Managed service — handling every edge case in-house is an engineering investment that rarely pays off.
  • Ongoing sync during transition: Webhooks + API writes for real-time forwarding, API scripts for nightly backfill.

Pre-Migration Planning & Data Audit

Before writing a single line of code, audit your Lever instance and decide what actually needs to move. Migrating garbage data into a fresh Greenhouse instance defeats the purpose of the upgrade.

Data Audit Checklist

  • Active candidates — How many Opportunities are in active pipeline stages?
  • Archived candidates — Do you need rejected/hired candidate history in Greenhouse? (Compliance may require it.)
  • Confidential postings — Lever requires a special API key permission to access confidential data. This permission must be granted at key creation time — you cannot add it retroactively. (hire.lever.co)
  • Custom fields — Document every custom field in Lever. Determine whether each maps to a Greenhouse candidate field, application field, or job field.
  • Feedback forms — Count unique feedback forms and questions. These become the basis for your note-formatting template.
  • Attachments — How many resumes and files? This affects migration time significantly due to download/upload bandwidth.
  • Integrations — Which third-party tools (background checks, assessments, HRIS) reference Lever candidate IDs? These will break post-migration.
  • Users — Map Lever users to Greenhouse users. The Greenhouse API requires valid user IDs for On-Behalf-Of headers and hiring team assignments.
  • Compliance — Review data retention policies. Identify candidates who requested deletion in Lever and ensure they are not accidentally resurrected in Greenhouse.

Define Migration Scope

Not everything needs to move. Common exclusions:

  • Candidates archived more than 2–3 years ago (unless compliance requires retention)
  • Test/dummy candidates
  • Duplicate contacts (Lever deduplicates by email, but inconsistent data entry creates duplicates)
  • Confidential postings that are no longer active

For reference-only historical records, Greenhouse recommends a container job such as HISTORICAL DATA — this preserves old history without contaminating active recruiting reports. (support.greenhouse.io)

Migration Strategy

| Strategy | When to Use | Risk Level |
| --- | --- | --- |
| Big bang | Small dataset, short hiring freeze acceptable | Medium |
| Phased | Large dataset, migration by department or job family | Low |
| Parallel run | Zero downtime required, sync both systems during transition | Low (but high complexity) |

For most organizations, a phased approach works best: migrate historical data first, validate, then cut over active pipelines during a low-hiring window.

GDPR/CCPA Compliance

Lever stores data protection consent status per Opportunity. Greenhouse has its own EEOC and data privacy model. During migration:

  • Preserve or re-collect candidate consent status
  • Exclude candidates who requested data deletion in Lever
  • Exclude candidates who opted out of data storage
  • Be aware that Greenhouse historical imports can trigger GDPR or CCPA notifications depending on tenant configuration

For a deep dive, see our GDPR & CCPA compliance guide for ATS migrations.

Migration Architecture: API Constraints That Will Bottleneck Your Scripts

If you're building a custom pipeline, the APIs — not the data mapping — will be your biggest operational challenge.

Lever API (Extraction Side)

  • Base URL: https://api.lever.co/v1
  • Authentication: Basic Auth (API key as username, blank password) or OAuth 2.0
  • Rate limit: 10 requests/second steady state, bursts up to 20 req/sec per API key. Application POST requests are limited to 2 req/sec. (hire.lever.co)
  • Pagination: Offset-token based. Each response includes hasNext and next fields. Page size configurable from 1–100.
  • Key endpoints for extraction:
    • GET /opportunities — Use expand=applications,stage,owner,followers,sourcedBy,contact to hydrate in fewer calls
    • GET /opportunities/{id}/feedback — Interview feedback per opportunity
    • GET /opportunities/{id}/notes — Notes per opportunity
    • GET /opportunities/{id}/offers — Offers per opportunity
    • GET /opportunities/{id}/resumes — Resume metadata and download URLs
    • GET /opportunities/{id}/files — Other attached files
Warning

Lever resume URLs are temporary. Download resume and file content immediately during extraction. Do not store URLs for later retrieval — they will expire. Lever can also return a 422 when a file could not be processed correctly. (hire.lever.co)

Greenhouse Harvest API (Loading Side)

  • Base URL: https://harvest.greenhouse.io/v1 (v1/v2) or https://harvest.greenhouse.io/v3 (v3)
  • Authentication: v1/v2 uses Basic Auth (API token as username, blank password). v3 uses OAuth 2.0 with JWT Bearer tokens.
  • Rate limit (v1/v2): Specified in the X-RateLimit-Limit header — typically 50 requests per 10-second window. Exceeding returns HTTP 429.
  • Rate limit (v3): Fixed 30-second window. 429 response includes a Retry-After header.
  • Pagination: RFC-5988 Link headers. per_page up to 500 records for v1/v2. Requests exceeding 500 return a 422 Unprocessable Entity error. v3 uses cursor-based pagination.
  • On-Behalf-Of header: Required for all write operations (POST, PATCH, DELETE). Must contain a valid, active Greenhouse user ID. If you omit this, or if the user ID belongs to a deactivated employee, the write will fail. (developers.greenhouse.io)
  • Endpoint permissions: Each API key can be configured with granular permissions per endpoint. Access is binary — everything or nothing for a given endpoint.

Rate limit behavior on 429: Official documentation indicates that 429 responses include X-RateLimit-Reset and Retry-After headers. However, X-RateLimit-Limit and X-RateLimit-Remaining may be absent on 429 responses. Your backoff logic should not depend on those two headers being present — use Retry-After or fall back to time-based exponential backoff. (developers.greenhouse.io)
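A backoff helper that follows this guidance — prefer Retry-After when present, otherwise exponential backoff with jitter — can be kept as a pure delay calculation so it's testable without any HTTP calls:

```javascript
// Compute the delay before retrying a 429. Uses Retry-After when the header
// is present and numeric; otherwise falls back to capped exponential backoff
// with "equal jitter" to avoid thundering-herd retries.
function backoffMs(attempt, retryAfterHeader, baseMs = 1000, capMs = 60_000) {
  const retryAfter = Number(retryAfterHeader);
  if (Number.isFinite(retryAfter) && retryAfter > 0) return retryAfter * 1000;
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2);
}
```

Call it as `await sleep(backoffMs(attempt, response.headers.get('Retry-After')))` inside your retry loop.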

Danger

Harvest v1/v2 API deprecation deadline: August 31, 2026. If you're building a migration pipeline today, build against v3 (OAuth). The auth model is fundamentally different — v3 requires OAuth token lifecycle management instead of static API keys.

Key Greenhouse API Constraints

| Constraint | Impact |
| --- | --- |
| No scorecard creation endpoint | Interview feedback migrates as notes only |
| Job creation requires a template job | Must pre-create template jobs in the Greenhouse UI |
| Application custom fields require Enterprise tier | Mid-market plans lose application-level metadata |
| Create responses may be truncated | Poll with GET after POST until the full record is available (developers.greenhouse.io) |
| Attachment uploads reject shareable cloud links | Google Drive / Dropbox URLs can corrupt uploads — stage files yourself and use base64 or machine-accessible URLs (developers.greenhouse.io) |
| Webhook deliveries retry up to 7 times over ~15 hours | Useful for resilience, not a substitute for observability (developers.greenhouse.io) |

Step-by-Step Migration Process

An effective migration pipeline follows four phases: Extract, Transform, Load, and Validate. Each phase needs checkpointing and error handling.

Phase 1: Extract from Lever

Use Lever's /opportunities endpoint with expand parameters to pull all candidate journeys. For each opportunity, hydrate feedback, notes, resumes, and offers. Download binary files immediately — Lever's download URLs are temporary.

Store the pagination cursor in a durable store (Postgres, Redis) so the script can resume after a crash.

```javascript
// Node.js: Extract opportunities from Lever
const LEVER_API_KEY = process.env.LEVER_API_KEY;
const BASE_URL = 'https://api.lever.co/v1';

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function extractOpportunities() {
  const allOpportunities = [];
  let hasNext = true;
  let offset;

  while (hasNext) {
    const params = new URLSearchParams({
      expand: 'applications,stage,owner,sourcedBy,contact',
      limit: '100',
    });
    if (offset) params.set('offset', offset);

    const response = await rateLimitedFetch(
      `${BASE_URL}/opportunities?${params}`,
      { headers: { Authorization: `Basic ${btoa(LEVER_API_KEY + ':')}` } }
    );
    const body = await response.json();
    allOpportunities.push(...body.data);

    hasNext = body.hasNext;
    offset = body.next;
  }

  // Hydrate each opportunity with sub-resources.
  // fetchSubResource() and downloadFile() are thin wrappers around
  // rateLimitedFetch for the /opportunities/{id}/* endpoints (omitted here).
  for (const opp of allOpportunities) {
    opp._feedback = await fetchSubResource(opp.id, 'feedback');
    opp._notes = await fetchSubResource(opp.id, 'notes');
    opp._resumes = await fetchSubResource(opp.id, 'resumes');
    opp._offers = await fetchSubResource(opp.id, 'offers');

    // Download resume files immediately — URLs expire
    for (const resume of opp._resumes) {
      resume._fileContent = await downloadFile(resume.downloadUrl);
    }
  }

  return allOpportunities;
}

async function rateLimitedFetch(url, options) {
  const response = await fetch(url, options);
  if (response.status === 429) {
    await sleep(2000);
    return rateLimitedFetch(url, options);
  }
  return response;
}
```

Phase 2: Transform

Do not transform directly into Greenhouse payloads. Build a staging model first:

  • person table keyed by Lever contact
  • candidacy table keyed by Lever opportunityId
  • person_to_candidate_id mapping
  • opportunity_to_application_id mapping
  • file_manifest and note_manifest
  • error_queue for retryable and terminal failures

The transform layer handles:

  • Name splitting: Lever stores name as a single string. Greenhouse requires first_name and last_name.
  • Email restructuring: Lever's emails [] is a flat string array. Greenhouse expects {value, type} objects.
  • Timestamp conversion: Lever uses Unix millisecond timestamps. Greenhouse uses ISO 8601.
  • Stage mapping: Build a lookup table mapping Lever stage IDs → Greenhouse stage IDs.
  • Source mapping: Match Lever sources to Greenhouse source IDs. Create missing sources first.
  • Feedback → Notes: Format Lever feedback as structured text notes, preserving interviewer name, overall recommendation, and per-attribute ratings.
  • Custom fields: Map Lever custom field values to Greenhouse custom field IDs. For select fields, match against exact option IDs or option names — Greenhouse will reject mismatches.

```javascript
// Transform one hydrated Lever opportunity into Greenhouse-shaped payloads.
// mapCustomFields() and formatFeedbackAsNotes() are mapping helpers (omitted).
function transformOpportunity(leverOpp, mappings) {
  const nameParts = (leverOpp.name || '').split(' ');
  const firstName = nameParts[0] || 'Unknown';
  const lastName = nameParts.slice(1).join(' ') || 'Unknown';

  return {
    greenhouse_candidate: {
      first_name: firstName,
      last_name: lastName,
      company: leverOpp.company || null,
      title: leverOpp.headline || null,
      phone_numbers: (leverOpp.phones || []).map(p => ({
        value: p.value, type: p.type || 'other'
      })),
      email_addresses: (leverOpp.emails || []).map(e => ({
        value: e, type: 'personal'
      })),
      tags: leverOpp.tags || [],
      custom_fields: mapCustomFields(leverOpp, mappings.customFields),
    },
    greenhouse_application: {
      job_id: mappings.jobMap[leverOpp.applications?.[0]?.postingId],
      source_id: mappings.sourceMap[leverOpp.sources?.[0]],
    },
    notes: formatFeedbackAsNotes(leverOpp._feedback),
    attachments: (leverOpp._resumes || []).map(r => ({
      filename: r.name,
      type: 'resume',
      content: r._fileContent.toString('base64'),
      content_type: `application/${r.ext || 'pdf'}`,
    })),
  };
}
```

Phase 3: Load into Greenhouse

Create records in dependency order: Candidates → Applications → Notes → Attachments. Use a queue or rate limiter to stay under the 50 requests/10-second ceiling. On enterprise loads, cap at ~45 calls per 10 seconds to leave headroom for retries.

Greenhouse's documentation notes that create responses for candidates and applications may be truncated. Poll with a follow-up GET until the full record is available before creating child objects. (developers.greenhouse.io)

```javascript
import PQueue from 'p-queue';

const GH_API_KEY = process.env.GREENHOUSE_API_KEY;
const GH_USER_ID = process.env.GREENHOUSE_ON_BEHALF_OF_USER_ID;
const GH_BASE = 'https://harvest.greenhouse.io/v1';

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const ghWrites = new PQueue({
  interval: 10_000,
  intervalCap: 45,
  concurrency: 4,
});

async function loadCandidate(transformed) {
  // 1. Create candidate
  const candidateRes = await ghWrites.add(() =>
    ghPost('/candidates', {
      ...transformed.greenhouse_candidate,
      activity_feed_notes: transformed.notes,
    })
  );
  if (!candidateRes) return null;
  const candidateId = candidateRes.id;

  // 2. Create application on job
  if (transformed.greenhouse_application.job_id) {
    await ghWrites.add(() =>
      ghPost(`/candidates/${candidateId}/applications`, {
        job_id: transformed.greenhouse_application.job_id,
        source_id: transformed.greenhouse_application.source_id,
      })
    );
  }

  // 3. Upload attachments
  for (const att of transformed.attachments) {
    await ghWrites.add(() =>
      ghPost(`/candidates/${candidateId}/attachments`, att)
    );
  }

  return candidateId;
}

async function ghPost(path, body) {
  const response = await fetch(`${GH_BASE}${path}`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Basic ${btoa(GH_API_KEY + ':')}`,
      'On-Behalf-Of': GH_USER_ID, // Required for all writes
    },
    body: JSON.stringify(body),
  });

  if (response.status === 429) {
    const retryAfter = Number(response.headers.get('Retry-After')) || 10;
    await sleep(retryAfter * 1000);
    return ghPost(path, body);
  }

  if (response.status === 422) {
    const error = await response.json();
    // logError() records the failure with full request context (omitted).
    logError(path, body, error); // Log the mapping defect and skip — don't crash
    return null;
  }

  if (!response.ok) throw new Error(`HTTP ${response.status} on ${path}`);
  return response.json();
}
```

Tip

Always run a test migration first. Use Greenhouse's sandbox environment. Object IDs differ between sandbox and production — don't hardcode IDs from test runs. Run at least two dry runs: one happy-path job and one ugly job with feedback, files, duplicates, and custom fields.

Edge Cases That Break DIY Migrations

The standard fields — name, email, phone, resume — are trivial. These are the edge cases that consume engineering time.

1. Scorecards Cannot Be Created via API

This is the single biggest limitation. Greenhouse exposes GET /scorecards and GET /applications/{id}/scorecards for reading, but there is no POST endpoint. Your options:

  • Import as structured notes — Format feedback with interviewer name, overall recommendation, and per-attribute ratings. Attach to the candidate's activity feed.
  • Manual re-entry — For active candidates with in-progress interviews, have interviewers re-submit feedback in Greenhouse. Only practical for a handful of records.
  • Accept the loss — For archived historical data, decide whether interview feedback adds enough value to justify the effort. Preserve the raw data in an external archive for compliance.
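For the structured-notes option, a formatter along these lines works — the field names on the feedback object are illustrative (verify them against your Lever extract, not this sketch):

```javascript
// Render one piece of Lever interview feedback as a plain-text note that
// preserves the interviewer, overall recommendation, and per-attribute ratings.
function feedbackToNote(fb) {
  const lines = [
    `[Migrated from Lever] Interview feedback: ${fb.formTitle}`,
    `Interviewer: ${fb.interviewerName}`,
    `Overall: ${fb.overallRecommendation}`,
    ...(fb.attributes || []).map((a) => `- ${a.name}: ${a.rating}`),
  ];
  return lines.join('\n');
}
```

A consistent prefix like `[Migrated from Lever]` makes migrated notes easy to find (and bulk-audit) later.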

2. Duplicate Candidates and Multi-Opportunity Contacts

A single Lever Contact may have applied to 5 different roles. In Greenhouse, this must become one Candidate with 5 Applications. Problems start when CSV imports or low-code tools create a new Candidate per Opportunity.

Your script must:

  1. Check if the Candidate already exists in Greenhouse (by email, case-insensitive)
  2. If yes, add a new Application to the existing Candidate
  3. If no, create the Candidate first, then add the Application

Before loading, deduplicate your extracted dataset by primary email address and use the Lever contact ID to consolidate all Opportunities for the same person. Getting this wrong creates duplicate profiles that must be manually merged — Greenhouse has no bulk merge feature.
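The three steps above amount to a find-or-create upsert keyed on email. A sketch, where `findCandidateByEmail`, `createCandidate`, and `createApplication` are hypothetical wrappers around your Harvest client:

```javascript
// One Candidate per person, one Application per Lever Opportunity.
// idMap caches email → candidate ID within the run so repeated
// opportunities for the same contact reuse the same Greenhouse record.
async function upsertCandidacy(record, gh, idMap) {
  const email = record.primaryEmail.toLowerCase(); // case-insensitive dedup key
  let candidateId =
    idMap.get(email) ?? (await gh.findCandidateByEmail(email))?.id;
  if (!candidateId) {
    candidateId = (await gh.createCandidate(record.candidate)).id;
  }
  idMap.set(email, candidateId);
  await gh.createApplication(candidateId, record.application);
  return candidateId;
}
```

Run all opportunities for the same contact through the same `idMap` so the second, third, and fifth candidacies attach to the first candidate created.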

3. Attachment Migration

Lever hosts attachments and provides temporary download URLs via the API. Greenhouse accepts either base64-encoded content or a publicly accessible URL during upload. Greenhouse warns that shareable cloud links (Google Drive, Dropbox) can corrupt uploads — use machine-accessible URLs or stage the file contents yourself. (developers.greenhouse.io)

Since Lever URLs expire, you must:

  1. Download all files during the extraction phase
  2. Store them locally or in cloud storage
  3. Base64-encode and upload during the load phase

For large migrations (>50k files), this is the most bandwidth-intensive part of the process. Build a file manifest and reconcile it separately from the person/application load.
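Building the upload payload from a file staged during extraction is straightforward; the payload shape below mirrors the field names used in the load example earlier, and the MIME lookup is a simplified assumption:

```javascript
// Convert a staged file into a base64 attachment payload. Unknown
// extensions fall back to a generic binary content type.
const MIME = {
  pdf: 'application/pdf',
  doc: 'application/msword',
  docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
};

function toAttachmentPayload(filename, buffer) {
  const ext = filename.split('.').pop().toLowerCase();
  return {
    filename,
    type: 'resume',
    content: buffer.toString('base64'),
    content_type: MIME[ext] || 'application/octet-stream',
  };
}
```

Record each converted file in the manifest with its byte size and checksum so reconciliation can verify nothing was dropped in transit.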

4. Custom Fields Mismatch

Lever's custom fields live at the Opportunity level. Greenhouse splits custom fields across multiple object types: candidate, application, job, offer, opening, rejection_question, and referral_question. You must decide where each Lever custom field lives in Greenhouse — and some may not have a natural home.

If a Lever custom field is a multi-select dropdown, Greenhouse will reject the payload if the incoming values don't match predefined options. Build a translation dictionary in your ETL script. Application-level custom fields are only available to Greenhouse Enterprise customers — mid-market plans lose this metadata.
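A translation dictionary for select-type fields can be as simple as a normalized lookup that surfaces misses instead of sending a payload Greenhouse will reject:

```javascript
// Map a Lever option string to the exact Greenhouse option. Unmapped
// values are reported (route them to your error_queue) rather than
// passed through to a guaranteed 422.
function translateSelectValue(leverValue, dictionary) {
  const hit = dictionary[leverValue.trim().toLowerCase()];
  if (hit === undefined) {
    return { ok: false, error: `Unmapped option: "${leverValue}"` };
  }
  return { ok: true, value: hit };
}
```

Build the dictionary once from a joint export of Lever option values and Greenhouse option names, and treat every miss as a mapping defect to resolve before the production run.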

Greenhouse custom field types include: short_text, long_text, yes_no, single_select, multi_select, currency, currency_range, number, number_range, date, url, and user.

For more on this, see our guide on 5 'Gotchas' in ATS Migration.

5. Confidential Postings

Lever requires a special API key permission (granted at key creation time) to access confidential postings and opportunities. If your migration key wasn't granted this permission, confidential data will silently return access errors during extraction. You cannot retroactively add this permission — you must create a new key. (hire.lever.co)

6. Historical and Deactivated Users

If an interviewer left the company two years ago, their Lever account is inactive. If you try to assign them as the author of a Greenhouse note or as a recruiter on an application, Greenhouse will reject the write unless that user exists and is active. You must either create "legacy" integration users in Greenhouse or append the original author's name to the body of the note and post it via an active system user.

7. Idempotency and Resumability

Build your pipeline to be idempotent. For each record:

  • Log the Lever source ID and Greenhouse target ID after successful creation
  • On failure, log the error with full context (Lever ID, endpoint, request body, response)
  • Support resuming from the last successful record, not from the beginning
  • Use exponential backoff with jitter for 429 responses
  • Treat 422 as a mapping or validation defect — log and skip, don't retry indefinitely

Store original Lever IDs in Greenhouse custom fields so reconciliation is deterministic and future support teams can trace any record back to its source.
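The bullet points above reduce to a small wrapper around every create call — a sketch:

```javascript
// Idempotent create: skip records already mapped, record the
// Lever → Greenhouse ID pair on success, and keep failures with context
// so a rerun resumes instead of duplicating or restarting.
async function idempotentCreate(leverId, idMap, errorQueue, createFn) {
  if (idMap.has(leverId)) return idMap.get(leverId); // already loaded: no-op
  try {
    const ghId = await createFn();
    idMap.set(leverId, ghId);
    return ghId;
  } catch (err) {
    errorQueue.push({ leverId, error: String(err) }); // resume later from here
    return null;
  }
}
```

Persist `idMap` and `errorQueue` to durable storage (not memory) so the guarantee survives a process crash.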

Validation & Testing

Validation is where migrations are won or lost. Use three layers.

Record Count Reconciliation

After migration, compare counts across every object type:

| Object | Lever Count (Source) | Greenhouse Count (Target) | Match? |
| --- | --- | --- | --- |
| Candidates (unique contacts) | | | |
| Applications | | | |
| Jobs/Postings | | | |
| Notes | | | |
| Attachments | | | |
| Tags | | | |
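This comparison can be checked mechanically. A small sketch, assuming you have already pulled per-object counts from each API:

```python
def reconcile_counts(lever_counts, greenhouse_counts):
    """Compare per-object record counts; returns a report keyed by
    object type with a match flag for each."""
    report = {}
    for obj, src in lever_counts.items():
        tgt = greenhouse_counts.get(obj, 0)
        report[obj] = {"source": src, "target": tgt, "match": src == tgt}
    return report
```

Any `match: False` row is a lead to chase before sign-off, not a footnote.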

Field-Level Spot Check

Sample 50–100 records across categories (active, archived, hired, rejected) and verify:

  • Name, email, phone are correct
  • Application is linked to the correct job
  • Source attribution matches
  • Custom field values carried over
  • Tags are present
  • Notes/feedback content is intact
  • Resume attachment is downloadable

UAT Process

  1. Recruiting team review — Have 2–3 recruiters manually check their pipeline candidates in Greenhouse
  2. Search validation — Verify that candidate search returns expected results for common queries
  3. Report comparison — Run pipeline reports in both systems and compare aggregate numbers
  4. Integration testing — If other tools connect to Greenhouse (HRIS, background check providers), verify they can access migrated data

Rollback Plan

Have a rollback plan before the first test load. Greenhouse does not have a "bulk delete" feature in the UI:

  • Use the Harvest API DELETE /applications/{id} endpoint (only works on candidate applications, not prospect applications)
  • Candidate deletion is not available via API — you must contact Greenhouse support
  • Always keep your Lever instance active and unchanged until validation is complete
  • Keep raw exports, file manifests, lookup snapshots, and rerunnable load scripts
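For the API rollback path, a sketch that builds the delete request without sending it (Harvest uses HTTP Basic auth with the API key as the username; the HTTP client and error handling are up to you):

```python
import base64

def build_delete_request(app_id, api_key, on_behalf_of_user_id):
    """Build the request pieces for deleting one candidate application
    via the Harvest API. Prospect applications cannot be deleted this
    way, so filter them out before queuing deletes."""
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return {
        "method": "DELETE",
        "url": f"https://harvest.greenhouse.io/v1/applications/{app_id}",
        "headers": {
            "Authorization": f"Basic {token}",  # API key as Basic-auth username
            "On-Behalf-Of": str(on_behalf_of_user_id),
        },
    }
```

Run deletes through the same rate limiter and logging as the loads — a rollback that gets throttled halfway through is its own incident.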

Post-Migration Tasks

The data landing in Greenhouse is not the end of the project.

Rebuild Automations

Nothing from Lever's automation rules, nurture campaigns, or auto-advance logic carries over. You must manually recreate:

  • Interview plans and scorecard templates per job
  • Auto-advance rules and approval workflows
  • Email templates
  • Job board integrations

Train Users

Greenhouse and Lever have fundamentally different UX philosophies. Lever's relationship-first, CRM-style interface is freeform. Greenhouse's structured hiring approach requires interviewers to use scorecards, follow defined stages, and submit feedback within the system. Plan for at least one training session per user role: recruiters, hiring managers, interviewers, and admins.

Monitor for 30 Days

  • Watch for error rate spikes in the Greenhouse API usage dashboard
  • Check for orphaned applications (not linked to jobs)
  • Verify candidate counts in pipeline reports match expectations
  • Watch for duplicate candidate creation from integrations that haven't been updated to point at Greenhouse
  • Monitor webhook delivery if running both systems during transition. Greenhouse disables a webhook if the initial ping on create or update fails, and retries failed deliveries up to 7 times over ~15 hours. (developers.greenhouse.io)

Best Practices That Hold Up in Production

  • Back up everything. Export a full Lever data dump before starting — raw JSON and binary files, not just CSVs.
  • Run at least two test migrations before the production run. Use Greenhouse's sandbox.
  • Map users first. Every Greenhouse write needs an On-Behalf-Of user ID. Build your user mapping table before anything else.
  • Create template jobs before migration. The Greenhouse API requires referencing existing jobs to create new ones.
  • Pre-create sources. Build a Lever-source → Greenhouse-source lookup table and create missing sources in Greenhouse first.
  • Process in batches. Don't load 50k candidates in one run. Batch by job, department, or date range. Validate each batch before proceeding.
  • Store immutable source IDs in Greenhouse custom fields. This makes reconciliation and future support deterministic.
  • Log everything. Every API call, response code, and record mapping should be logged. When something fails, you need to know exactly where to resume.
  • Run below the rate ceiling. A 45-calls-per-10-second cap leaves headroom for retries and operator actions.
  • Automate repetitive work, not judgment. Deduplicate with rules, but let humans review edge cases like merged candidate histories or confidential requisitions.
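The 45-per-10-second guidance can be enforced with a small sliding-window limiter. A sketch — clock and sleep are injectable so it can be tested without waiting:

```python
import collections
import time

class WindowRateLimiter:
    """Block until a call fits under max_calls per `window` seconds.
    Defaulting to 45/10s keeps headroom under Greenhouse's
    50-per-10-second Harvest limit."""

    def __init__(self, max_calls=45, window=10.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls, self.window = max_calls, window
        self.clock, self.sleep = clock, sleep
        self.calls = collections.deque()  # timestamps of recent calls

    def acquire(self):
        while True:
            now = self.clock()
            # Drop timestamps that have aged out of the window.
            while self.calls and now - self.calls[0] >= self.window:
                self.calls.popleft()
            if len(self.calls) < self.max_calls:
                self.calls.append(now)
                return
            # Budget spent: sleep until the oldest call ages out.
            self.sleep(self.window - (now - self.calls[0]))
```

Call `limiter.acquire()` before every API request; the 429 backoff logic then becomes a rare fallback instead of the steady state.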

When to Use a Managed Migration Service

Build in-house when:

  • Your dataset is small (<1,000 candidates)
  • You have dedicated engineering bandwidth (40–120 hours)
  • You don't need feedback history, attachments, or complex custom fields
  • You have experience with both APIs

Don't build in-house when:

  • Your engineering team is already allocated to product work
  • You have >10,000 candidates with rich history (feedback, offers, attachments)
  • You need zero downtime — recruiting can't pause during migration
  • You have GDPR/CCPA compliance requirements that demand auditable data handling
  • Your Lever instance uses confidential postings, complex custom fields, or multi-stage approval workflows

The hidden cost of DIY isn't the first script — it's the debugging. Rate limit edge cases, malformed data in specific records, undocumented API behaviors, and the iterative test-fix-retest cycle eat engineering time fast. We've seen teams estimate "two weeks" and ship in eight.

At ClonePartner, we've handled ATS migrations scoped at months and delivered in days. Our pipeline handles Greenhouse's rate limits, the On-Behalf-Of header requirement, the scorecard-to-notes conversion, and the resume download timing issue out of the box. We run test migrations, field-level validation, and staged cutover so your recruiting team never has to pause hiring.

Let your engineers build your product. Let us move your data.

Frequently Asked Questions

How does Lever's data model map to Greenhouse?
A Lever Contact maps to a Greenhouse Candidate (1:1). Each Lever Opportunity maps to one Greenhouse Application tied to a specific Job. A single Contact with multiple Opportunities becomes one Candidate with multiple Applications. Keep both source IDs as immutable external keys for reconciliation.
Can you migrate interview scorecards from Lever to Greenhouse?
Not as native scorecards. Greenhouse's Harvest API does not expose a POST endpoint for creating scorecards — only GET endpoints for reading them. The standard workaround is to import Lever feedback as structured notes on the candidate's activity feed, preserving interviewer name, overall recommendation, and per-attribute ratings.
What is the Greenhouse Harvest API rate limit?
For Harvest v1/v2, the rate limit is typically 50 requests per 10-second window. Exceeding this returns an HTTP 429. Official docs say 429 responses include Retry-After and X-RateLimit-Reset headers, but X-RateLimit-Limit and X-RateLimit-Remaining may be absent. Harvest v3 uses a fixed 30-second window. Note that v1/v2 will be deprecated on August 31, 2026.
Can Zapier or Make handle a Lever to Greenhouse migration?
They can help with low-volume delta syncs during a transition period, but they are not viable for bulk historical migration. Zapier's Greenhouse triggers use polling rather than instant webhooks, neither tool natively handles the On-Behalf-Of header required for Greenhouse writes, and they lack the state management needed for large backfills.
How long does a Lever to Greenhouse migration take?
It depends on dataset size and complexity. A small team (<500 candidates) using CSV import can finish in hours. API-based migrations of 10,000–50,000 candidates with feedback, attachments, and custom fields typically take 40–200 hours of engineering time for DIY, or 2–5 days with a managed migration service.

More from our Blog

ATS

5 "Gotchas" in ATS Migration: Tackling Custom Fields, Integrations, and Compliance

Don't get derailed by hidden surprises. This guide uncovers the 5 critical "gotchas" that derail most projects, from mapping tricky custom fields and preventing broken integrations to navigating complex data compliance rules. Learn how to tackle these common challenges before they start and ensure your migration is a seamless success, not a costly failure.

Raaj Raaj · 14 min read
Ensuring GDPR & CCPA Compliance When Migrating Candidate Data
ATS

Ensuring GDPR & CCPA Compliance When Migrating Candidate Data

This is your essential guide to ensuring full compliance with GDPR and CCPA. We provide a 7-step, compliance-first plan to manage your ATS data migration securely. Learn to handle lawful basis, data retention policies, DSARs, and secure transfers to avoid massive fines and protect sensitive candidate privacy.

Raaj Raaj · 10 min read
SuccessFactors to Greenhouse Migration: The CTO's Guide
Migration Guide/Greenhouse/SuccessFactors

SuccessFactors to Greenhouse Migration: The CTO's Guide

A technical guide to migrating from SAP SuccessFactors Recruiting to Greenhouse — covering OData extraction, Harvest v3 rate limits, entity mapping, and edge cases for CTOs.

Raaj Raaj · 21 min read