---
title: "Greenhouse to Lever Migration: The CTO's Technical Guide"
slug: greenhouse-to-lever-migration-the-ctos-technical-guide
date: 2026-04-21
author: Raaj
categories: [Migration Guide, Greenhouse, Lever]
excerpt: "Technical guide to migrating from Greenhouse to Lever: API rate limits, scorecard mapping, data model translation, and the constraints that break DIY attempts."
tldr: "Greenhouse-to-Lever migration requires translating an application-centric schema into an opportunity-centric CRM — scorecards, attachments, and rate limits are where DIY attempts break."
canonical: https://clonepartner.com/blog/greenhouse-to-lever-migration-the-ctos-technical-guide/
---

# Greenhouse to Lever Migration: The CTO's Technical Guide


Migrating from Greenhouse to Lever is a data-model translation problem disguised as a vendor switch. Greenhouse is **application-centric** — Candidates link to Applications, Applications link to Jobs, and Scorecards hang off Applications with structured attribute ratings. Lever is **opportunity-centric** — a single Contact can spawn multiple Opportunities, each representing a candidacy through your pipeline.

A naive CSV export flattens this relational structure. It silently drops scorecard attribute ratings, breaks multi-application candidate histories, and collapses the Candidate → Application → Job → Scorecard chain into unusable rows.

The core engineering challenge: Greenhouse enforces structured evaluation — rigid stages, mandatory scorecards, job-specific applications. Lever was designed for sourcing-heavy teams where every interaction is a fluid Opportunity tied to a Contact. Moving data between these systems means merging Greenhouse Candidates into Lever Contacts, converting Applications into Opportunities, serializing Scorecards into Lever's note/feedback structures, and reconciling two fundamentally different ID systems (Greenhouse uses integer IDs; Lever uses UUIDs).

This guide covers the object-mapping decisions, every viable migration method and its trade-offs, the API constraints that will bottleneck your ETL scripts, and the edge cases that break most DIY attempts.

For the reverse direction, see our [Lever to Greenhouse migration guide](https://clonepartner.com/blog/lever-to-greenhouse-migration-the-ctos-technical-guide/). For broader ATS migration pitfalls, check [5 "Gotchas" in ATS Migration](https://clonepartner.com/blog/ats-migration-gotchas/).

> [!WARNING]
> Greenhouse Harvest API v1 and v2 will be deprecated and unavailable after **August 31, 2026**. If you're planning a migration, build your extraction pipeline against Harvest v3 (OAuth 2.0) from day one — don't build on v1/v2 only to rewrite months later.

## Why Companies Migrate from Greenhouse to Lever

The drivers typically break down into three categories:

- **Sourcing workflow.** Lever was built around a CRM-like sourcing model. Teams that rely heavily on passive candidate nurturing — agency recruiters, executive search firms, sourcing-heavy tech companies — often find Lever's Opportunity-per-candidacy model more natural than Greenhouse's rigid application gates.
- **Cost consolidation.** Greenhouse's tiered pricing (Core, Plus, Pro) can escalate at scale, especially for teams that need structured interview kits on Plus/Pro tiers. Lever bundles analytics, nurture campaigns, and advanced pipeline management into fewer tiers.
- **Operational simplicity.** Greenhouse's power is its configurability — custom scorecard attributes, multi-stage approval workflows, granular user permissions. For teams that don't need that depth, Lever's flatter workflow model reduces admin overhead.

## Greenhouse vs. Lever: Architecture Differences That Matter

Before writing a single line of migration code, understand where the data models diverge:

| Concept | Greenhouse | Lever |
|---|---|---|
| **Person record** | Candidate (integer ID) | Contact (UUID) |
| **Candidacy** | Application (links Candidate to Job) | Opportunity (links Contact to Posting) |
| **Job definition** | Job (with Openings, Stages, Interview Plans) | Posting (with pipeline Stages) |
| **Interview evaluation** | Scorecard (structured attributes + ratings + questions) | Feedback Form (free-form or template-based) |
| **Pipeline tracking** | Application moves through Job Stages | Opportunity moves through Pipeline Stages |
| **Prospect/sourced** | Prospect Application (0+ Jobs) | Opportunity without a Posting |
| **Custom data** | Custom Fields on Candidates, Applications, Jobs, Offers | Custom Fields on Contacts, Opportunities, Requisitions |
| **ID format** | Integer IDs | UUIDs |
| **Attachments** | On Candidate profile | On Contact/Opportunity |

The relationship chain that matters most: **Candidate → Application → Job → Scorecard** in Greenhouse must become **Contact → Opportunity → Posting** in Lever, with Scorecards serialized into Lever's feedback or notes structure.

A useful mental model: **one Greenhouse Application becomes one Lever Opportunity**. That single decision determines your deduplication logic, stage mapping, and how you preserve history. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html))
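
That mental model can be sketched as a single transform. This is a simplified illustration, not Lever's official payload contract: the `posting_map` and `stage_map` lookup tables are hypothetical structures you build before loading, and the Greenhouse payload shape is trimmed to the fields that matter here.

```python
from datetime import datetime

def application_to_opportunity(app, posting_map, stage_map):
    """Translate one Greenhouse Application into one Lever Opportunity payload."""
    applied = datetime.fromisoformat(app["applied_at"].replace("Z", "+00:00"))
    return {
        "postingId": posting_map[app["job_id"]],           # integer job ID -> posting UUID
        "stage": stage_map[app["current_stage"]["name"]],  # stage name -> stage UUID
        "createdAt": int(applied.timestamp() * 1000),      # ISO-8601 -> epoch ms
        "sources": [app["source"]["public_name"]] if app.get("source") else [],
    }
```

Every downstream decision (dedup, stage mapping, history) hangs off this one function's inputs, which is why the lookup tables have to exist before the first record loads.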

## Migration Approaches: CSV vs. API vs. Managed Service

There are five viable paths. Each has hard constraints that determine whether it works for your situation.

### 1. Native CSV Export/Import

**How it works:** Export Greenhouse data as CSV files via in-app reporting tools. Clean and reformat to match Lever's import templates. Upload into Lever through their implementation team or bulk import tooling.

**When to use it:** Small datasets (<5,000 candidates), minimal custom fields, no need for historical scorecard data.

**Limitations:**
- CSV exports flatten relational data. You lose the Candidate → Application → Scorecard chain.
- No structured scorecard attribute data. Ratings, interview step names, and per-attribute notes are either missing or collapsed into a single text blob.
- Attachments (resumes, cover letters) cannot be exported via CSV and must be handled separately.
- Custom field picklist values need manual mapping between Greenhouse field types and Lever's equivalents.

**Complexity:** Low

### 2. API-Based Migration (Greenhouse Harvest → Lever Data API)

**How it works:** Extract data programmatically from Greenhouse's Harvest API. Transform the JSON payloads to match Lever's data model. Load into Lever via the Opportunities API (`POST /opportunities`), Notes API, and Feedback API. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html))

**When to use it:** Enterprise datasets, need to preserve historical interview data, or ongoing sync requirements.

**Strengths:**
- Full-fidelity extraction. Scorecards come through with structured attributes, ratings, and interviewer assignments.
- Attachments can be downloaded via signed S3 URLs from Greenhouse and re-uploaded to Lever.
- Relationship chains stay intact because you control the transformation logic.
- You can filter by date ranges, statuses, and other parameters to scope the migration precisely.

**Constraints:**
- **Greenhouse Harvest v3 rate limits** use a fixed window. The `X-RateLimit-Limit` header returns the allowed requests per window (commonly 75 for custom integrations). Exceeding this returns HTTP 429 with a `Retry-After` header.
- **Lever API rate limits** allow a steady state of 10 requests per second per API key, with bursts up to 20 req/sec. Application POST requests have a stricter cap of 2 req/sec — and Lever warns this limit may change without notice.
- Greenhouse API keys require **explicit, per-endpoint permissions**. If your key doesn't have Scorecards permission enabled, those requests return 403. Enable every object type in Configure → Dev Center → API Credential Management.
- Lever deduplicates contacts by email when creating Opportunities. If contacts already exist in Lever, the API links the new Opportunity to the existing Contact rather than creating a duplicate.

**Complexity:** High

> [!NOTE]
> Lever's own documentation states that custom integrations typically take **8 to 12 weeks** to develop. That's Lever's scoping estimate, not a vendor scare tactic. Factor it into your build-vs-buy decision realistically. ([lever-old.zendesk.com](https://lever-old.zendesk.com/hc/en-us/articles/4543957810957-Scoping-custom-integrations))

### 3. Third-Party Migration Tools

**How it works:** Use a prebuilt migration engine or specialist service that already knows both schemas. The trade-off is coverage. As of April 2026, Ambrstack's public Greenhouse → Lever page markets migrations starting at $500 and marks 9 of 17 objects as fully compatible. Apideck's normalized Lever ATS surface exposes only Jobs and Applicants. That tells you where off-the-shelf tooling lands: fast for standard data, thin for deep history and platform-specific objects. ([ambrstack.com](https://ambrstack.com/switcher/migrations/migrate-from-greenhouse-to-lever))

**When to use it:** Standard schemas, low engineering bandwidth, moderate history needs.

**Limitations:**
- Custom Greenhouse objects, unusual workflows, and historical feedback often need manual handling.
- Variable object coverage — verify exactly which objects the tool supports before committing.

**Complexity:** Low to Medium

### 4. Middleware/iPaaS (Zapier, Make, Tray.io, Workato)

**How it works:** Use a visual integration platform to build workflows that read from Greenhouse APIs and write to Lever APIs. Greenhouse on Zapier is a premium app using polling triggers; Make's Greenhouse connector is verified but Enterprise-plan only, while its Lever connector is community-developed rather than vendor-maintained. ([zapier.com](https://zapier.com/apps/greenhouse/integrations))

**When to use it:** Ongoing bidirectional sync during phased rollouts, incremental delta sync after cutover, or teams with iPaaS expertise but no custom development capacity. Not a serious historical backfill engine.

**Limitations:**
- iPaaS platforms add abstraction that obscures API error handling. When Lever returns a 429, you're dependent on the platform's built-in retry logic — which may not implement exponential backoff with jitter correctly.
- Historical bulk migration is not what these tools are designed for. Most are optimized for event-driven real-time flows, not batch extraction of 100,000+ candidate records.
- Licensing costs for enterprise iPaaS (Tray.io, Workato) can exceed the cost of a managed migration service for a one-time job.

**Complexity:** Medium

### 5. Managed Migration Service

**How it works:** A dedicated migration team handles extraction, transformation, loading, validation, and rollback planning. They've already solved the Greenhouse ↔ Lever mapping problem and have production-tested ETL scripts with proper rate-limit handling.

**When to use it:** Enterprise datasets (10,000+ candidates), need to preserve full historical data including scorecards and attachments, or engineering team cannot absorb an 8–12 week project.

**Complexity:** Low (for your team)

### Comparison Table

| Approach | Fidelity | Scalability | Engineering Effort | Timeline | Best For |
|---|---|---|---|---|---|
| CSV Export/Import | Low | Small only | Low | 1–2 weeks | <5K candidates, no scorecard history |
| Third-Party Tools | Varies | Small–Medium | Low–Medium | 1–4 weeks | Standard setups, moderate history |
| iPaaS/Middleware | Medium | Medium | Medium | 4–8 weeks | Ongoing sync, existing iPaaS investment |
| API-Based (DIY) | High | Enterprise | Very High | 8–12 weeks | Dedicated dev team, full control needed |
| Managed Service | High | Enterprise | Minimal | Days–2 weeks | Complex data, tight timelines, limited eng bandwidth |

**Recommendation by scenario:**

- **Small team, active-only data, low eng bandwidth:** Specialist service or tightly scoped CSV import.
- **Enterprise, full historical archive:** API-led ETL or managed migration service.
- **Ongoing sync after go-live:** Webhooks plus API loaders; iPaaS only for light deltas or adjacent automation.
- **Dedicated dev team:** Build a custom ETL only if you'll reuse it for long-term sync, reporting, or repeated acquisitions. For a one-time move, it's typically an engineering distraction.

## Data Model and Object Mapping

This section determines whether your migration preserves context or destroys it.

### Core Object Mapping

| Greenhouse Object | Lever Equivalent | Notes |
|---|---|---|
| **Candidate** | **Contact** | 1:1 mapping. Greenhouse integer ID → Lever UUID. Store a cross-reference table. |
| **Application** | **Opportunity** | Each Application becomes an Opportunity linked to the Contact. Preserve `applied_at` as `createdAt`. |
| **Job** | **Posting** | Map Job title, department, location, status. Interview plans don't transfer — rebuild in Lever. |
| **Scorecard** | **Feedback / Note** | No direct equivalent. See the scorecard section below. |
| **Offer** | **Offer** (on Opportunity) | Map offer fields. Lever offers are simpler — custom offer fields may need to be flattened. |
| **Prospect** | **Opportunity** (without Posting) | Greenhouse prospects with 0 jobs become Lever Opportunities not linked to any Posting. |
| **Source** | **Source** (on Opportunity) | Map Greenhouse source names to Lever source values. Create missing sources in Lever first. |
| **Rejection Reason** | **Archive Reason** | Map Greenhouse rejection reasons to Lever's archive reasons. Create custom archive reasons as needed. |
| **Tags** | **Tags** | Direct mapping. Bulk-create in Lever before import. |
| **Activity Feed** (Notes, Emails) | **Notes** | Greenhouse notes map to Lever notes on the Opportunity. Preserve author and timestamp. |
| **Attachments** (Resumes, Cover Letters) | **Files** (on Contact/Opportunity) | Download from Greenhouse S3 URLs, re-upload via Lever's multipart/form-data endpoint. |
| **Custom Fields** | **Custom Fields** | Requires manual mapping. Greenhouse supports 12+ custom field types; verify Lever supports the equivalent data type. Greenhouse application custom fields may only be accessible via API on Enterprise accounts. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html)) |
| **Prospect Pool / Stage** | Lead-stage Opportunity, Tag, or Nurture segment | No exact 1:1 object in Lever. |
| **Company / Agency metadata** | Tags, contact headline, note, or external table | Usually not a first-class ATS object on either side. |

### The Scorecard Problem

This is the hardest mapping decision. Greenhouse Scorecards are structured objects with:
- An `overall_recommendation` (yes/no/definitely_not/strong_yes/mixed/no_decision)
- An array of `attributes`, each with a `name`, `type` (Skills, Qualifications, etc.), `rating`, and optional `note`
- An array of `questions` with answers (supporting basic HTML formatting)
- `interviewer` and `submitted_by` user references
- `interviewed_at` and `submitted_at` timestamps

Lever's feedback forms can capture structured data, but the schema is template-driven. Your options:

**1. Map to Lever Feedback** using `POST /opportunities/:opportunity/feedback`. This requires creating feedback templates in Lever that mirror your Greenhouse scorecard structure, then populating field values programmatically. Lever's structured feedback creation requires a `baseTemplateId` and field values that match the template. This preserves the most structure but requires upfront template creation for every unique scorecard type. ([hire.lever.co](https://hire.lever.co/developer/documentation))

**2. Serialize to Notes.** Convert each scorecard into a formatted text note with the interviewer name, date, overall recommendation, per-attribute ratings, and free-text answers. Loses structured queryability but preserves the historical record for human review.

Most migrations use a hybrid: feedback forms for recent/active candidates, serialized notes for archived historical data. If you want scorecards to remain queryable after cutover, invest in prebuilding Lever feedback templates. If you don't, serialize and accept the loss of field-level reporting.
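
For the serialize-to-notes path, a minimal serializer might look like the following. The note layout is a design choice, not a Lever requirement, and the input shape assumes the scorecard fields listed above.

```python
def scorecard_to_note(sc):
    """Flatten a Greenhouse scorecard into a readable Lever note body."""
    lines = [
        "Interview feedback (migrated from Greenhouse)",
        f"Interviewer: {sc['interviewer']['name']} | Interviewed: {sc['interviewed_at']}",
        f"Overall: {sc['overall_recommendation']}",
    ]
    for attr in sc.get("attributes", []):
        rating = attr.get("rating") or "no rating"
        lines.append(f"- {attr['type']} / {attr['name']}: {rating}")
        if attr.get("note"):
            lines.append(f"  {attr['note']}")
    return "\n".join(lines)
```

Keeping the interviewer, date, and per-attribute ratings on separate lines preserves enough structure for a human to scan, even though field-level reporting is gone.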

### Field-Level Mapping

| Greenhouse Field | Type | Lever Field | Transformation |
|---|---|---|---|
| `candidate.first_name` + `last_name` | string | `contact.name` | Concatenate first and last name |
| `candidate.emails[].value` | string | `contact.emails[]` | Direct array mapping; lowercase and dedupe |
| `candidate.phone_numbers[].value` | string | `contact.phones[]` | Direct array mapping; normalize to E.164 if possible |
| `candidate.addresses[].value` | string | `contact.location` | Use primary address |
| `application.applied_at` | ISO-8601 | `opportunity.createdAt` | Convert ISO-8601 to epoch milliseconds |
| `application.source.public_name` | string | `opportunity.sources[]` | Map to Lever source string |
| `application.current_stage.name` | string | `opportunity.stage` | Map by semantics to Lever stage UUID (requires pre-lookup) |
| `application.status` | enum | `opportunity.archived` / active | `hired`/`rejected` → archived with reason; `active` → active |
| `scorecard.overall_recommendation` | enum | Feedback or Note | See scorecard section above |
| `candidate.tags[].name` | string | `opportunity.tags[]` | Direct mapping |
| `candidate.custom_fields` | varies | `contact/opportunity custom fields` | Type-by-type transformation |

Three field-level rules matter more than the rest. **First**, convert Greenhouse's ISO-8601 timestamps to Lever's millisecond epoch timestamps. **Second**, map stages by meaning, not label — Lever stages are customer-defined and rarely match Greenhouse stage names 1:1. **Third**, maintain a migration ledger of source IDs to target IDs; Lever deduplicates contacts by email, so names are never a safe primary key. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html))
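
The timestamp and email rules reduce to two small helpers. Function names here are assumptions for illustration; the conversion logic itself is standard library only.

```python
from datetime import datetime

def to_epoch_ms(iso_ts):
    """ISO-8601 timestamp -> the epoch milliseconds Lever expects."""
    return int(datetime.fromisoformat(iso_ts.replace("Z", "+00:00")).timestamp() * 1000)

def normalize_emails(emails):
    """Lowercase, strip, and dedupe while preserving order.
    Lever dedupes contacts by email, so this is effectively your primary key."""
    seen, out = set(), []
    for e in emails:
        key = e.strip().lower()
        if key not in seen:
            seen.add(key)
            out.append(key)
    return out
```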

> [!TIP]
> Before importing anything, extract Lever's stage UUIDs (`GET /stages`), archive reasons (`GET /archive_reasons`), and sources. Build a lookup map so your ETL can translate Greenhouse stage names into Lever stage IDs without manual intervention. Most failed historical migrations are not API failures — they are missing destination metadata.
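
One way to build the lookup the tip describes, once you've fetched Lever's stages. Lever stage objects carry an `id` and display `text`; the `semantic_map` bridging Greenhouse stage names to Lever stage names is your own recruiting-ops config, not anything either API provides.

```python
def build_stage_map(lever_stages, semantic_map):
    """Map Greenhouse stage names to Lever stage UUIDs via a semantic bridge.

    lever_stages: list of {"id": ..., "text": ...} dicts from GET /stages
    semantic_map: {greenhouse_stage_name: lever_stage_text} decided by recruiting ops
    """
    by_text = {s["text"]: s["id"] for s in lever_stages}
    return {gh_name: by_text[lever_text]
            for gh_name, lever_text in semantic_map.items()}
```

A `KeyError` here is a feature: it surfaces an unmapped stage before the ETL runs, instead of silently dropping records mid-load.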

## Handling API Rate Limits and Extraction Challenges

### Greenhouse Harvest API (Extraction Side)

**Authentication:** Harvest v3 uses OAuth 2.0 (client credentials flow). Harvest v1/v2 used Basic Auth with the API key as the username and an empty password. **v1/v2 will be unavailable after August 31, 2026** — build on v3. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html))

**Rate limits:** Harvest v3 uses a fixed window. The `X-RateLimit-Limit` header (commonly 75 for custom integrations) tells you how many requests you can make per window. Monitor `X-RateLimit-Remaining` and respect the `Retry-After` header on 429s.

**Pagination:** Harvest v3 uses cursor-based pagination. The cursor is a Base64-encoded opaque token in the `Link` header. When following the cursor, it must be the **only** query parameter — don't re-attach filters.

**Per-endpoint permissions:** Each Harvest API key must be explicitly granted access to specific endpoints (Candidates, Applications, Scorecards, Offers, etc.). Missing permissions return 403, not empty results. Enable everything you need upfront in the API Credential Management screen.

**Extraction order matters.** Extract in dependency order:
1. Jobs and Stages (no dependencies)
2. Candidates (no dependencies)
3. Applications (depend on Candidates and Jobs)
4. Scorecards (depend on Applications)
5. Offers (depend on Applications)
6. Activity Feed / Notes (depend on Candidates)
7. Attachments (depend on Candidates)

> [!CAUTION]
> Greenhouse attachment links are temporary signed S3 URLs. Download binaries during extraction and store them in your staging bucket. If your ETL doesn't download and re-upload attachments in the same pipeline run, you risk losing them permanently when the URLs expire. ([developers.greenhouse.io](https://developers.greenhouse.io/harvest.html))

```python
# Greenhouse Harvest v3 extraction with rate-limit handling
import requests
import time

BASE_URL = "https://api.greenhouse.io/v3"

def get_access_token(client_id, client_secret):
    resp = requests.post(
        "https://api.greenhouse.io/oauth/token",
        data={"grant_type": "client_credentials"},
        auth=(client_id, client_secret)
    )
    return resp.json()["access_token"]

def parse_next_link(link_header):
    # Pull the rel="next" URL out of an RFC 5988 Link header, if present
    for part in link_header.split(","):
        if 'rel="next"' in part:
            return part.split(";")[0].strip().strip("<>")
    return None

def fetch_all_pages(endpoint, token, params=None):
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{BASE_URL}/{endpoint}"
    results = []
    
    while url:
        resp = requests.get(url, headers=headers, params=params)
        
        if resp.status_code == 429:
            wait = int(resp.headers.get("Retry-After", 30))
            time.sleep(wait)
            continue
        
        if resp.status_code == 403:
            raise PermissionError(
                f"403 Forbidden on {endpoint}. "
                "Enable this endpoint in API Credential Management."
            )
        
        resp.raise_for_status()
        results.extend(resp.json())
        
        # Follow cursor — don't re-attach params
        link = resp.headers.get("Link", "")
        url = parse_next_link(link)  # Extract 'next' rel from Link header
        params = None  # Cursor is embedded in the URL
    
    return results
```

### Lever API (Load Side)

**Authentication:** OAuth 2.0 or API key (Basic Auth). The Super Admin role is required for the OAuth initiator to support the necessary scopes. ([hire.lever.co](https://hire.lever.co/developer/documentation))

**Rate limits:** 10 requests/second steady state per API key, bursts up to 20 req/sec. Application POST requests are capped at 2 req/sec. These limits are **not guaranteed** and may change based on server load.

**Pagination:** Offset token-based. Responses include a `next` token and `hasNext` boolean. Default page size varies by endpoint; max is 100.

**Deduplication behavior:** The `POST /opportunities` endpoint deduplicates by email. If a Contact with the same email already exists, the new Opportunity is linked to the existing Contact. This is helpful during migration — it prevents duplicates if you need to re-run a partial load.

**Concurrent workers:** If your ETL runs multiple processes against the Lever API, you need a **shared rate limiter** — not per-process limits. Default in-memory rate limiters fail when multiple workers consume the same API key concurrently, leading to 429 storms. Use a shared store like Redis to coordinate.
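
The shared-limiter requirement boils down to a token bucket: capacity 20 for burst, refill 10/sec for steady state. The sketch below is in-memory (class and method names are my own) to show the algorithm; in production, the refill-and-take step would run atomically in Redis, for example as a Lua script, so every worker draws from one bucket.

```python
import threading
import time

class TokenBucket:
    """Token-bucket rate limiter matching Lever's published steady/burst limits."""

    def __init__(self, rate=10.0, capacity=20.0):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def acquire(self):
        """Block until a request token is available, then consume it."""
        while True:
            with self.lock:
                now = time.monotonic()
                # Refill proportionally to elapsed time, capped at burst capacity
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1:
                    self.tokens -= 1
                    return
                wait = (1 - self.tokens) / self.rate
            time.sleep(wait)
```

Each worker calls `bucket.acquire()` before every Lever request; moving the same state into Redis is what turns per-process limits into the shared limiter the paragraph above calls for.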

**Career site formatting:** Lever's Postings API serves JSON by default but can return HTML with `mode=html`. This parameter is essential when preserving rich-text job description formatting from Greenhouse.

```python
# Lever Opportunity creation with exponential backoff and jitter
import time
import random
import requests

LEVER_BASE = "https://api.lever.co/v1"

def create_opportunity(api_key, payload, perform_as_user_id, max_retries=5):
    headers = {"Content-Type": "application/json"}
    auth = (api_key, "")  # API key as Basic Auth username, empty password
    
    for attempt in range(max_retries):
        resp = requests.post(
            f"{LEVER_BASE}/opportunities",
            json=payload,
            auth=auth,
            headers=headers,
            params={"perform_as": perform_as_user_id}
        )
        
        # Back off on rate limiting (429) and queue pressure (503)
        if resp.status_code in (429, 503):
            wait = (2 ** attempt) + random.uniform(0, 1)
            time.sleep(wait)
            continue
        
        resp.raise_for_status()
        return resp.json()
    
    raise Exception("Max retries exceeded for Lever API")
```

## Step-by-Step Migration Process

1. **Extract reference data first.** Pull Greenhouse jobs, job posts, stages, users, rejection reasons, sources, and custom field definitions.
2. **Extract core data.** Pull candidates, applications, scorecards, notes/activity, offers, and attachments. Store raw JSON in a staging database — never transform inline during extraction.
3. **Transform in staging.** Map candidate IDs, normalize emails and phones, deduplicate candidates, flatten arrays where Lever requires strings, and convert Greenhouse scorecards into Lever feedback payloads or HTML notes.
4. **Load Lever foundations.** Ensure postings, archive reasons, stage mappings, and feedback templates exist in Lever before loading historical records.
5. **Create contacts, then opportunities.** Use a migration ledger mapping source IDs to target IDs so reruns are idempotent. Push to Lever via `POST /opportunities`, using the `contact` field to link opportunities to existing contacts.
6. **Replay secondary history.** Attach notes, feedback, offers, and files after the primary records exist. Use Lever's multipart/form-data upload endpoints for resume attachments.
7. **Validate.** Compare staging database counts against Lever API responses for every object type. Spot-check records.
8. **Delta replay and cutover.** If running a zero-downtime migration, replay source-side changes made during the migration window before final cutover. Both Greenhouse and Lever expose webhook/event surfaces that make delta sync possible. ([developers.greenhouse.io](https://developers.greenhouse.io/webhooks.html))

```python
# Migration loop pattern (pseudocode)
for app in greenhouse.applications():
    candidate = greenhouse.get_candidate(app['candidate_id'])
    contact_id = lever.upsert_contact(
        email=primary_email(candidate),
        name=full_name(candidate),
        phones=normalize_phones(candidate),
        links=extract_links(candidate)
    )

    opp_id = lever.create_opportunity(
        contact_id=contact_id,
        posting_id=posting_map[app['job_id']],
        created_at=to_epoch_ms(app['applied_at'])
    )

    lever.set_stage(opp_id, stage_map[(app['job_id'], app['current_stage']['id'])])

    if is_closed(app):
        lever.archive_opportunity(
            opp_id,
            reason_id=archive_reason_map[close_reason(app)],
            archived_at=to_epoch_ms(closed_at(app))
        )

    migrate_scorecards(app['id'], opp_id)
    migrate_notes(candidate['id'], opp_id)
    migrate_attachments(app['attachments'], opp_id)

    ledger.write(app['id'], candidate['id'], contact_id, opp_id)
```

Treat `403` from Greenhouse as a permissions bug, not a transient error. Treat `429` and `503` from Lever as queue-pressure signals and back off with jitter. Dead-letter any row that fails schema mapping or lacks required destination references (posting, stage, or archive reason). Log source IDs, target IDs, payload hash, attempt count, and final status for every record.
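
The migration ledger from step 5 can be as simple as a SQLite table keyed on the Greenhouse application ID. This is a minimal sketch; the schema and status values are my assumptions, not a convention from either platform.

```python
import sqlite3

def open_ledger(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS ledger (
        gh_application_id INTEGER PRIMARY KEY,
        gh_candidate_id   INTEGER,
        lever_contact_id  TEXT,
        lever_opp_id      TEXT,
        attempts          INTEGER,
        status            TEXT)""")
    return db

def already_migrated(db, gh_application_id):
    row = db.execute(
        "SELECT status FROM ledger WHERE gh_application_id = ?",
        (gh_application_id,)).fetchone()
    return row is not None and row[0] == "done"

def record(db, gh_application_id, gh_candidate_id, contact_id, opp_id, status):
    # Upsert so reruns bump the attempt counter instead of failing
    db.execute("""INSERT INTO ledger VALUES (?, ?, ?, ?, 1, ?)
                  ON CONFLICT(gh_application_id) DO UPDATE SET
                      lever_contact_id = excluded.lever_contact_id,
                      lever_opp_id = excluded.lever_opp_id,
                      attempts = attempts + 1,
                      status = excluded.status""",
               (gh_application_id, gh_candidate_id, contact_id, opp_id, status))
    db.commit()
```

Checking `already_migrated()` at the top of the migration loop is what makes partial reruns safe: records that landed on a previous pass are skipped, and failed rows keep their attempt history.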

## Where DIY Migrations Break Down

Based on experience across hundreds of ATS migrations, here's where in-house builds consistently fail:

**Scorecard fidelity loss.** Teams export scorecards as flat CSV rows or skip them entirely. Three months later, a hiring manager searches for a previous candidate's interview feedback and finds nothing. The recruiting team loses trust in the new system.

**Relationship chain breaks.** Without careful ID cross-referencing, the link between a Contact and their historical Opportunities — or between an Opportunity and its feedback — breaks silently. Records exist in Lever, but they're orphaned. The data looks fine at record-count level and fails in recruiter UAT.

**Attachment gaps.** Greenhouse serves resume and cover letter files via signed, temporary S3 URLs. If your ETL doesn't download and re-upload attachments in the same pipeline run, you lose them permanently.

**Rate-limit cascade failures.** A naive `sleep(1)` retry strategy clusters requests and repeatedly triggers the rate limiter. Without exponential backoff with jitter, enterprise-scale imports (50,000+ candidates) can take weeks of babysitting.

**The 8–12 week timeline.** Lever's own documentation states that custom integrations typically take 8 to 12 weeks to develop. That's engineering time pulled from product work — and it doesn't include the testing, validation, and rollback planning that a production migration demands. ([lever-old.zendesk.com](https://lever-old.zendesk.com/hc/en-us/articles/4543957810957-Scoping-custom-integrations))

**Greenhouse API deprecation risk.** With Harvest v1/v2 going away August 31, 2026, any extraction code built on the legacy auth and pagination model has a hard expiration date. Teams that started building on v1/v2 are now facing a rewrite.

For a deeper look at whether to build or buy, see our [in-house vs. outsourced data migration analysis](https://clonepartner.com/blog/in-house-vs-outsourced-data-migration/).

## How ClonePartner Executes This Migration

We've migrated ATS data at scale — including [cutting Highsnobiety's 3-month Greenhouse migration down to days](https://clonepartner.com/blog/highsnobiety-greenhouse-to-teamtailor-ats-case-study/). Here's what the Greenhouse-to-Lever path looks like with a dedicated migration team:

1. **Greenhouse API key handoff.** You generate a Harvest v3 credential with all endpoint permissions enabled. We begin extraction within 48 hours.
2. **Full data extraction.** Candidates, Applications, Scorecards (with structured attributes), Offers, Activity Feeds, Tags, Sources, Custom Fields, and Attachments — all via API, preserving the complete relationship graph.
3. **Mapping workshop.** We walk through stage mapping, archive reason mapping, custom field translation, and scorecard handling decisions with your recruiting ops team.
4. **Transformation and load.** Our ETL pipeline handles ID cross-referencing, email-based deduplication, scorecard serialization, attachment re-upload, and rate-limit management — including shared Redis-backed rate limiters for concurrent workers.
5. **Validation.** Record count reconciliation, field-level spot checks, relationship integrity verification (every Opportunity has the correct Contact, every historical note is attached to the right Opportunity).
6. **Cutover.** If you need zero downtime, we design the cutover around delta replay instead of a long freeze window. Your recruiting team doesn't stop working during the migration.

What differentiates this from DIY:
- We handle the nested Candidate → Application → Scorecard chain that CSV imports flatten.
- Historical scorecard attribute data is preserved as structured feedback or detailed notes — not lost.
- Lever's 10 req/sec limit is managed with production-grade queueing, not `time.sleep()`.
- The entire process completes in days, not the 8–12 weeks a custom build requires.
- Your engineers stay focused on Lever configuration, downstream integrations, and recruiter enablement instead of one-off migration plumbing.

## Pre-Migration Checklist

Complete this before extracting a single record.

### Data Audit
- [ ] **Count active vs. archived candidates** in Greenhouse. Decide whether you're migrating all historical data or only candidates from the last N years.
- [ ] **Inventory custom fields.** Export the list from Greenhouse (`GET /custom_fields/{field_type}`) and map each to a Lever equivalent. Flag any Greenhouse field types (e.g., `currency_range`, `number_range`) that don't have a direct Lever match.
- [ ] **Review scorecard templates.** Count unique scorecard structures across your Jobs. Each unique structure may need a corresponding Lever feedback template.
- [ ] **Identify attachment volume.** Estimate total file storage. Large volumes (100,000+ attachments) significantly extend migration time due to download/re-upload overhead.
- [ ] **Audit EEOC/demographic data.** This data has regulatory implications. Decide whether to migrate it and how to handle it in Lever's schema. See our [GDPR/CCPA compliance guide for candidate data](https://clonepartner.com/blog/ats-migration-gdpr-ccpa-compliance/).
- [ ] **Identify unused data.** Archive closed jobs and rejected candidates older than your retention policy to reduce payload size.
- [ ] **Confirm which custom fields are actually used** by recruiters. Many Greenhouse instances accumulate dead custom fields over time.
- [ ] **Identify downstream dependencies.** Which integrations, automations, and reports depend on current Greenhouse IDs?
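The custom-field inventory step can be partly scripted. A sketch of the mapping audit, assuming you've already fetched the field list from `GET /custom_fields/{field_type}` — the Lever-side strategies in the map are illustrative assumptions, not Lever features:

```python
# Hypothetical Greenhouse field type -> Lever handling strategy map.
# None means the type has no direct Lever match and needs a manual decision.
FIELD_TYPE_MAP = {
    "short_text": "tag_or_note",
    "long_text": "note",
    "single_select": "tag",
    "multi_select": "tag",
    "yes_no": "tag",
    "date": "note",
    "currency_range": None,   # flagged in the checklist above
    "number_range": None,
}

def audit_custom_fields(fields: list) -> list:
    """Return names of fields whose type has no mapped Lever strategy."""
    return [
        f["name"] for f in fields
        if FIELD_TYPE_MAP.get(f["field_type"]) is None
    ]
```

Run this against the exported field list and route every flagged field to a recruiting-ops decision before the ETL is written.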

### Scope Decisions
- [ ] **Define cutoff date.** Everything before date X is "historical" (bulk migration). Everything after is "active" (may need manual migration or delta sync).
- [ ] **Big-bang vs. phased vs. incremental.** Big-bang: one cutover weekend, highest coordination risk, simplest execution. Phased: migrate historical data first, run both systems in parallel, then migrate active candidates. Incremental: backfill history, replay deltas until cutover — usually the safest zero-downtime pattern since both platforms expose event/webhook surfaces for change capture.
- [ ] **Who owns stage mapping?** This is a recruiting ops decision, not an engineering one. Greenhouse stages rarely map 1:1 to Lever stages. Get sign-off before the ETL runs.
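The cutoff-date decision above translates directly into a partition step in the extraction pipeline. A minimal sketch — the `last_activity` field name and the cutoff value are illustrative assumptions:

```python
from datetime import datetime, timezone

# Example cutoff only -- pick your own as part of the scope decision.
CUTOFF = datetime(2026, 1, 1, tzinfo=timezone.utc)

def partition(candidates: list) -> tuple:
    """Split candidates into historical (bulk load) vs. active (delta sync)."""
    historical, active = [], []
    for c in candidates:
        ts = datetime.fromisoformat(c["last_activity"])
        (historical if ts < CUTOFF else active).append(c)
    return historical, active
```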

### Risk Mitigation
- [ ] **Backup everything.** Run a full Greenhouse data export (API + CSV) and store it before you start. Greenhouse data is yours — keep an independent copy.
- [ ] **Run a test migration.** Load 500–1,000 candidates into a Lever sandbox. Validate scorecard rendering, attachment availability, stage assignments, and custom field values before touching production.
- [ ] **Define rollback criteria.** What constitutes a failed migration? Missing attachment rate >5%? Orphaned Opportunities? Set thresholds in advance.
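Rollback criteria are only useful if they're enforced mechanically. A sketch of a go/no-go gate using the example thresholds above — the metric names and limits are placeholders for whatever you agree on in advance:

```python
# Example thresholds only -- set and sign off on your own before the run.
THRESHOLDS = {
    "missing_attachment_rate": 0.05,   # >5% attachment loss fails the run
    "orphaned_opportunity_count": 0,   # any orphan fails the run
}

def migration_passes(metrics: dict) -> tuple:
    """Return (passed, list of failure reasons) for a migration run."""
    failures = []
    if metrics["missing_attachment_rate"] > THRESHOLDS["missing_attachment_rate"]:
        failures.append("attachment loss above threshold")
    if metrics["orphaned_opportunity_count"] > THRESHOLDS["orphaned_opportunity_count"]:
        failures.append("orphaned Opportunities present")
    return (not failures, failures)
```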

## Validation and Testing

Validation is not optional. It's the difference between a migration that earns recruiter trust and one that generates a Slack channel full of complaints.

### Record Count Reconciliation

| Object | Greenhouse Count | Lever Count | Expected Delta |
|---|---|---|---|
| Candidates / Contacts | `GET /candidates` count | `GET /contacts` count | Lever ≤ Greenhouse (deduplication expected) |
| Applications / Opportunities | `GET /applications` count | `GET /opportunities` count | Should match |
| Scorecards / Feedback + Notes | `GET /scorecards` count | Count of feedback + notes created | Should match |
| Attachments | Count per candidate | Count per contact | Should match |
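The table above reduces to a small reconciliation check once both counts are collected. A sketch — the dictionary keys are illustrative, not API fields:

```python
def reconcile(gh_counts: dict, lever_counts: dict) -> list:
    """Compare record counts per the reconciliation table.
    Contacts may be fewer than candidates due to deduplication,
    but never more; the other pairs should match exactly."""
    issues = []
    if lever_counts["contacts"] > gh_counts["candidates"]:
        issues.append("more Lever contacts than Greenhouse candidates")
    for src, dst in [("applications", "opportunities"),
                     ("scorecards", "feedback_and_notes")]:
        if gh_counts[src] != lever_counts[dst]:
            issues.append(f"{src} count != {dst} count")
    return issues
```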

### Sampling Strategy

Don't validate every record. Instead:
1. **Random sample:** Pull 50 random candidates. Verify every field, every Opportunity, every note, every attachment.
2. **Edge cases:** Pull candidates with 5+ Applications (should have 5+ Opportunities in Lever). Pull candidates with 10+ scorecards. Pull candidates with custom field values across every type.
3. **Recent hires:** Pull the last 20 hired candidates. These are the ones recruiters will look up first.
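The three sampling buckets above can be drawn programmatically. A sketch, assuming candidate dicts with `applications` and `hired_at` fields (field names are illustrative) — a fixed seed keeps the random sample reproducible across validation reruns:

```python
import random

def pick_validation_sample(candidates: list, seed: int = 42) -> dict:
    """Build the three validation buckets: random, edge cases, recent hires."""
    rng = random.Random(seed)  # fixed seed = reproducible sample
    return {
        "random": rng.sample(candidates, min(50, len(candidates))),
        "edge_cases": [c for c in candidates
                       if len(c.get("applications", [])) >= 5],
        "recent_hires": sorted(
            (c for c in candidates if c.get("hired_at")),
            key=lambda c: c["hired_at"], reverse=True,
        )[:20],
    }
```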

### UAT Process

Give 3–5 recruiters access to Lever with migrated data for 48 hours. Ask them to:
- Find a specific past candidate and verify their history
- Check that a rejected candidate shows the correct archive reason
- Confirm that a historical scorecard/feedback note contains the expected content
- Verify that attachments (resumes) are downloadable

## Post-Migration Tasks

The migration is the data move. The work that follows makes the new system operational.

- **Rebuild interview plans and pipelines.** Greenhouse interview kits, stage-specific scorecard templates, and approval workflows don't export as transferable configuration. These must be manually recreated in Lever.
- **Reconfigure integrations.** Any tool connected to Greenhouse (HRIS, background check, scheduling) needs to be re-pointed to Lever's API. Budget 2–4 weeks depending on how many integrations you have.
- **User training.** The Contact vs. Opportunity distinction is the biggest UX shift from Greenhouse. Schedule focused sessions targeting specific workflow changes — not generic platform overviews.
- **Monitor for data inconsistencies.** Run validation queries weekly for the first month. Watch for orphaned Opportunities, missing tags, and custom field values that didn't translate correctly.
- **Decommission Greenhouse.** Don't delete your Greenhouse account immediately. Keep it in read-only mode for 90 days minimum as a reference. Some ATS vendors charge for data exports after contract termination — confirm your terms before canceling.

## Limitations and Constraints

Some things cannot be migrated perfectly. Be honest with stakeholders about these:

- **Scorecard structure is lossy.** Greenhouse scorecards have typed attributes with categorical ratings. Lever feedback forms can approximate this, but the exact attribute-level rating taxonomy won't transfer 1:1 unless you invest in creating matching templates.
- **No true custom objects in Lever.** Greenhouse's highly customized object data has no direct home in Lever: Greenhouse-only custom entities typically end up as Tags, Profile Links, profile forms, structured Notes, or an external reporting table.
- **Interview scheduling history doesn't migrate.** Calendar events, interviewer assignments, and panel configurations from Greenhouse have no import path in Lever's API.
- **EEOC/demographic data requires special handling.** Greenhouse stores this at the application level. Lever's approach differs. Consult legal before deciding whether and how to migrate this data.
- **Greenhouse approval workflows vanish.** Job approval chains, offer approval workflows, and multi-stage gating don't have a data export path. They need to be rebuilt manually in Lever.
- **Missing emails force manual deduplication.** Candidates without email addresses can't be automatically deduplicated since Lever relies on email for its native deduplication logic.
- **API rate limits create a hard throughput ceiling.** At 10 req/sec on Lever's side (2 req/sec for Application POSTs), importing 50,000 Opportunities — each requiring multiple API calls for the Opportunity, notes, feedback, and attachments — takes a minimum of several hours assuming zero retries. Plan migration windows accordingly.
- **Lever notes are not directly updateable.** Once created, notes are flat text threads. If something is wrong, you can't patch it — you have to create a new note or delete and recreate. Build your note payloads right the first time.
- **Confidential postings cannot be created through the Lever posting API.** If you have confidential roles in Greenhouse, plan to recreate those manually in Lever.
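The throughput-ceiling arithmetic from the rate-limit bullet above is worth running for your own volumes before committing to a migration window. A minimal estimator (a lower bound that ignores retries, backoff, and attachment transfer time):

```python
def estimate_hours(opportunities: int, calls_per_opp: int,
                   rate_per_sec: float = 10.0) -> float:
    """Lower-bound wall-clock hours at a fixed request rate, zero retries."""
    return opportunities * calls_per_opp / rate_per_sec / 3600

# 50,000 Opportunities x 5 calls each at 10 req/sec -> roughly 7 hours minimum.
```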

## Best Practices from Production Migrations

1. **Never migrate without a backup.** Full API extraction + CSV export from Greenhouse, stored independently, before you start.
2. **Run at least two test migrations.** The first catches mapping errors. The second validates your fixes. Production runs should be boring.
3. **Validate incrementally, not at the end.** Check a sample after every 5,000 records loaded. Catching a mapping error after 50,000 records are already in Lever is painful.
4. **Automate everything except mapping decisions.** Extraction, transformation, loading, and validation should be scripted. Stage mapping, scorecard handling strategy, and custom field decisions require human judgment.
5. **Treat the migration as a data project, not an IT ticket.** The people who understand your recruiting data (TA ops, recruiting coordinators) should own mapping decisions. Engineers own the pipeline. Neither can succeed alone.
6. **Use a staged extract → transform → load design.** Extract raw Greenhouse objects exactly as delivered. Transform in your own database. Load Lever in dependency order. Do not transform inline during load — you need replayable state when a batch hits a 429 or a bad field map.
7. **Keep a migration ledger.** Log source IDs, target IDs, payload hashes, and final status for every record. This makes reruns idempotent and post-migration audits possible.
8. **Keep both Greenhouse candidate ID and application ID** as immutable source keys throughout the project. Never make candidate ID the only primary key in Lever.
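As a sketch of the migration ledger in point 7 — SQLite is one convenient backing store; the schema and function names are illustrative. Keying on the source ID makes reruns idempotent: a record already marked loaded is skipped instead of duplicated.

```python
import hashlib
import json
import sqlite3

def open_ledger(path: str = ":memory:") -> sqlite3.Connection:
    """One row per migrated record: source ID, target ID, payload hash, status."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS ledger (
        source_id TEXT PRIMARY KEY,
        target_id TEXT,
        payload_hash TEXT,
        status TEXT)""")
    return db

def record(db, source_id: str, target_id: str, payload: dict, status: str):
    """Upsert a ledger entry; the hash supports post-migration audits."""
    h = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    db.execute("INSERT OR REPLACE INTO ledger VALUES (?, ?, ?, ?)",
               (source_id, target_id, h, status))

def already_loaded(db, source_id: str) -> bool:
    """True if this source record was already loaded -- skip it on rerun."""
    row = db.execute("SELECT status FROM ledger WHERE source_id = ?",
                     (source_id,)).fetchone()
    return row is not None and row[0] == "loaded"
```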

> Migrating from Greenhouse to Lever? Our team has executed complex ATS migrations — including full scorecard preservation and attachment transfers — in days, not months. Book a 30-minute scoping call and we'll map out exactly what your migration involves.
>
> [Talk to us](https://cal.com/clonepartner/meet?duration=30&utm_source=blog&utm_medium=button&utm_campaign=demo_bookings&utm_content=cta_click&utm_term=demo_button_click)

## Frequently asked questions

### How long does a Greenhouse to Lever migration take?

Lever's own documentation estimates 8–12 weeks for custom integrations. A managed migration service like ClonePartner typically completes the full data transfer in days, though total project time including mapping decisions and validation is usually 1–3 weeks.

### Can Greenhouse scorecards be migrated to Lever?

Yes, but with compromises. Greenhouse scorecards are structured objects with typed attributes and ratings. Lever uses feedback forms with a different, template-driven schema. You can map scorecards to Lever feedback (requires pre-creating templates for each unique scorecard type) or serialize them as detailed notes. Either way, the exact attribute-level rating taxonomy won't transfer 1:1 without custom transformation work.

### What are the Lever API rate limits for data migration?

Lever enforces a steady-state limit of 10 requests per second per API key, with bursts up to 20 req/sec. Application POST requests are capped at 2 req/sec. These limits are not guaranteed and may change. Use exponential backoff with jitter and a shared rate limiter (like Redis) for concurrent workers.
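As a sketch of the backoff half of that advice (the shared Redis limiter is out of scope here): exponential backoff with "full jitter," where each retry waits a random amount between zero and a capped exponential ceiling, spreading retries out instead of synchronizing them.

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 0.5,
                   cap: float = 30.0, seed=None) -> list:
    """Delays for successive retries: uniform in [0, min(cap, base * 2^n)]."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * (2 ** n)))
            for n in range(max_retries)]
```

On each 429 response, sleep for the next delay in the list before retrying; the cap keeps late retries from waiting unboundedly long.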

### Will a CSV export from Greenhouse work for migrating to Lever?

Only for small, simple migrations under roughly 5,000 candidates. CSV exports flatten relational data — you lose the Candidate → Application → Scorecard relationship chain, structured interview attribute ratings, and attachments. For anything beyond basic active-only candidate records, API-based migration is the only viable path.

### Is the Greenhouse Harvest API being deprecated?

Harvest API v1 and v2 will be deprecated and unavailable after August 31, 2026. All new integrations should use Harvest v3, which uses OAuth 2.0 instead of Basic Auth and cursor-based pagination instead of page/per_page parameters.
