BambooHR to Lever Migration: The CTO's Technical Guide
A technical guide to migrating from BambooHR to Lever. Learn how to map opportunity-centric data models, work within API rate limits, and ensure zero data loss.
Planning a migration?
Get a free 30-min call with our engineers. We'll review your setup and map out a custom migration plan — no obligation.
Schedule a free call

- 1,200+ migrations completed
- Zero downtime guaranteed
- Transparent, fixed pricing
- Full responsibility for project success
- Post-migration support included
Migrating from BambooHR's built-in ATS to Lever is a data-model translation problem disguised as a vendor switch. BambooHR is an HRIS with a functional, application-centric ATS. It handles inbound applicants well but lacks proactive sourcing tools, CRM features, and advanced pipeline customization. When scaling engineering and talent teams move to Lever, they are adopting an opportunity-centric data model.
A naive CSV export from BambooHR flattens this relational structure. It drops historical interview notes, breaks multi-application candidate histories, and collapses the candidate lifecycle into unusable rows.
This guide covers the object-mapping decisions you must make, the API constraints that will bottleneck your ETL scripts, and the edge cases that break most DIY migration attempts.
The Architectural Shift: Application-Centric vs. Opportunity-Centric Data Models
Before writing a single line of extraction code, you must understand how both systems store talent data.
BambooHR's ATS uses a flat, application-centric model. A candidate applies, and their record is tied directly to that application. If a candidate applies to three different roles over two years, BambooHR often treats these as disconnected applications or requires manual merging that breaks historical context.
Lever utilizes an opportunity-centric data model. Every candidate in your Lever environment has exactly one Candidate Profile. That single profile acts as a central source of truth for contact information. When a candidate applies for a role, Lever creates an Opportunity—representing the consideration of that candidate for a specific job.
Moving data between these systems requires splitting BambooHR's flat application records into Lever's distinct Candidate and Opportunity objects. If you fail to separate these entities, you will flood Lever with duplicate profiles and ruin the reporting metrics your talent team relies on.
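As a concrete sketch, here is how a flat BambooHR-style application record splits into the two Lever-side payloads. The field names below are illustrative stand-ins, not the exact schemas of either API; verify both against the official docs before building on this.

```python
def split_application(record: dict) -> tuple[dict, dict]:
    """Split one flat application record into candidate and opportunity payloads.

    Field names are illustrative; consult the BambooHR and Lever API docs
    for the real schemas.
    """
    candidate = {
        "name": f"{record['firstName']} {record['lastName']}",
        "emails": [record["email"]],
        "phones": [{"value": record["phone"]}],
    }
    opportunity = {
        "headline": record["jobTitle"],
        "stage": record["status"],
        "postingId": record["jobId"],
    }
    return candidate, opportunity

# Example flat record, as it might come out of a BambooHR export
flat = {
    "firstName": "Ada", "lastName": "Lovelace", "email": "ada@example.com",
    "phone": "555-0100", "jobTitle": "Staff Engineer",
    "status": "Interviewing", "jobId": "J-42",
}
candidate, opportunity = split_application(flat)
```

If the same person applies again later, only a second opportunity payload should be produced; the candidate payload must resolve to the one existing profile.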
Evaluating Migration Approaches: CSV vs. API vs. Middleware
You have three primary paths for moving your historical candidate data.
Native CSV Export/Import
BambooHR allows you to export standard reports to CSV, which you can then format for Lever's bulk import tool.
- How it works: You export applicant data from BambooHR, map the columns in a spreadsheet to Lever's required fields, and upload the file.
- When to use it: Small datasets (under 1,000 records) where historical context does not matter.
- Pros and cons: It is free and requires no engineering bandwidth. However, it is highly manual and prone to breaking relational data. BambooHR's native exports often miss candidate notes and specific permissions due to API and reporting limitations.
- Complexity: Low.
For a deeper dive into why flat files fail relational data, see our guide on using CSVs for SaaS data migrations.
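If you do go the CSV route, the work is mostly column renaming. A minimal sketch, assuming hypothetical header names on both sides (your actual BambooHR export headers and Lever bulk-import headers will differ):

```python
import csv
import io

# Hypothetical BambooHR export headers -> hypothetical Lever import headers
COLUMN_MAP = {
    "First Name": "name_first",
    "Last Name": "name_last",
    "Email": "email",
    "Job Applied For": "opportunity_title",
}

def remap_csv(bamboohr_csv: str) -> str:
    """Rename mapped columns from a BambooHR export; drop everything else."""
    reader = csv.DictReader(io.StringIO(bamboohr_csv))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(COLUMN_MAP.values()))
    writer.writeheader()
    for row in reader:
        writer.writerow({COLUMN_MAP[k]: v for k, v in row.items() if k in COLUMN_MAP})
    return out.getvalue()

sample = (
    "First Name,Last Name,Email,Job Applied For\n"
    "Ada,Lovelace,ada@example.com,Staff Engineer\n"
)
remapped = remap_csv(sample)
```

Note what this cannot do: it silently drops every unmapped column, which is exactly how notes and history get lost in CSV migrations.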
Middleware (Zapier, Make, Bindbee)
Integration platforms are built for point-to-point triggers, not historical bulk data movement.
- How it works: You set up a trigger (e.g., "New Applicant in BambooHR") and an action ("Create Opportunity in Lever").
- When to use it: Excellent for ongoing syncs—such as syncing hired Lever candidates back to BambooHR to create an employee record.
- Pros and cons: Great for forward-looking automation. Terrible for historical data. Middleware lacks the architecture to handle bulk pagination, complex data transformations, and historical timestamp preservation.
- Complexity: Medium.
DIY Custom ETL Scripts
Building a custom extraction, transformation, and load (ETL) pipeline using Python or Node.js.
- How it works: Your script authenticates with BambooHR's API, paginates through all historical records, transforms the JSON payloads into Lever's schema, and POSTs them to Lever's API.
- When to use it: When you have dedicated engineering bandwidth and strict requirements for data fidelity.
- Pros and cons: Offers total control over the data mapping. The downside is the engineering cost. You must handle pagination, OAuth, and strict rate limits without dropping records.
- Complexity: High.
Pre-Migration Planning & Data Mapping Strategy
A successful migration requires mapping BambooHR's rigid statuses to Lever's customizable pipeline stages.
- Audit your data: Identify active candidates, rejected applicants, hired employees, and archived jobs.
- Define the scope: Decide if you are migrating all historical data or only candidates active in the last 24 months.
- Map the fields: Document exactly how BambooHR fields translate to Lever.
Sample Data Mapping Table
| BambooHR Object/Field | Lever Equivalent | Notes |
|---|---|---|
| Candidate Name | Contact: name | Lever uses a single string for name. |
| Job Title | Opportunity: headline | Extracted from the BambooHR application. |
| Status | Opportunity: stage | Requires mapping BambooHR statuses to Lever's custom pipeline stages. |
| Resume Attachment | Opportunity: files | Must be downloaded from BambooHR and uploaded to Lever via API. |
| Candidate Notes | Profile: notes | BambooHR API limitations make notes difficult to extract; requires specific API endpoints. |
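The status-to-stage row deserves its own lookup table in code. The status and stage names below are illustrative: Lever pipeline stages are custom per account, so replace both sides with your real values, and route anything unmapped to a holding stage rather than dropping it.

```python
# Illustrative mapping from BambooHR statuses to Lever pipeline stage names.
# Lever stages are custom per account, so adjust both sides to your pipeline.
STATUS_TO_STAGE = {
    "New": "New applicant",
    "Reviewed": "Recruiter screen",
    "Phone Screen": "Phone interview",
    "Interview": "On-site interview",
    "Offered": "Offer",
    "Hired": "Hired",
    "Not Hired": "Archived",
}

def map_status(bamboohr_status: str) -> str:
    # Fall back to a holding stage so unmapped records are easy to find later
    return STATUS_TO_STAGE.get(bamboohr_status, "Needs review")
```

Auditing the "Needs review" bucket after a sandbox run is a cheap way to catch statuses your mapping missed.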
When handling sensitive candidate data, ensure your extraction methods comply with local regulations. Review our guide on GDPR & CCPA compliance during candidate data transfers.
Migration Architecture & Step-by-Step Execution
If you choose to build a custom ETL pipeline, follow this architectural flow.
1. Extract from BambooHR
BambooHR's API uses basic HTTP authentication where the API key acts as the username. You will need to query the /applicant_tracking/applications endpoint to pull candidate data. Because BambooHR paginates its responses, your script must handle offset logic to retrieve the entire dataset.
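The offset loop is easiest to get right if you separate pagination from HTTP. A sketch, using a fake fetcher in place of the real request; BambooHR's exact pagination parameter names are an assumption here, so confirm them against the API docs:

```python
from typing import Callable, Iterator

def paginate(fetch_page: Callable[[int], list[dict]],
             page_size: int = 100) -> Iterator[dict]:
    """Yield every record by advancing an offset until a short page comes back.

    `fetch_page(offset)` wraps the actual HTTP call; BambooHR's real
    pagination parameters may differ, so verify them against the docs.
    """
    offset = 0
    while True:
        page = fetch_page(offset)
        yield from page
        if len(page) < page_size:
            break  # short (or empty) page means we've reached the end
        offset += page_size

# Fake fetcher standing in for the HTTP call, for illustration only
def fake_fetch(offset: int) -> list[dict]:
    data = [{"id": i} for i in range(250)]
    return data[offset:offset + 100]

records = list(paginate(fake_fetch))
```

The short-page termination check matters: stopping on an empty page alone costs one extra request per run, while stopping too early silently truncates the dataset.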
2. Transform the Data Model
You must split the BambooHR payload. Extract the candidate's personal information (email, phone, name) to build the Lever Candidate Profile payload. Extract the job-specific information (job ID, status, resume) to build the Lever Opportunity payload.
3. Load into Lever
Lever's API is strict. You must first check if a Candidate Profile exists using the candidate's email. If it does, append the new Opportunity to the existing profile. If it does not, create the profile and the opportunity simultaneously.
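That check-then-create flow is worth isolating from the HTTP layer so you can test it. A sketch with injected callables standing in for the Lever API calls; the endpoints and payloads those callables would wrap are assumptions to verify against Lever's documentation:

```python
from typing import Callable, Optional

def upsert_opportunity(
    email: str,
    opportunity: dict,
    find_candidate: Callable[[str], Optional[str]],
    create_candidate: Callable[[dict], str],
    add_opportunity: Callable[[str, dict], None],
) -> str:
    """Attach an opportunity to an existing profile, or create the profile first.

    The three callables wrap the real Lever API requests; their exact
    endpoints and payloads must be checked against Lever's docs.
    """
    candidate_id = find_candidate(email)
    if candidate_id is None:
        candidate_id = create_candidate({"emails": [email], **opportunity})
    else:
        add_opportunity(candidate_id, opportunity)
    return candidate_id

# In-memory fakes for illustration
profiles: dict[str, list[dict]] = {}

def find_candidate(email):
    return email if email in profiles else None

def create_candidate(payload):
    email = payload["emails"][0]
    profiles[email] = [payload]
    return email

def add_opportunity(candidate_id, opp):
    profiles[candidate_id].append(opp)

upsert_opportunity("ada@example.com", {"headline": "Staff Engineer"},
                   find_candidate, create_candidate, add_opportunity)
upsert_opportunity("ada@example.com", {"headline": "Principal Engineer"},
                   find_candidate, create_candidate, add_opportunity)
```

Running this against fakes first lets you prove the one-profile-many-opportunities invariant before a single real record moves.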
For more context on Lever's API behaviors, see our Greenhouse to Lever migration guide.
Pseudo-Code Example
```python
import os
import time

import requests

# API keys from the environment; fill your company subdomain into the URL
BAMBOOHR_API_KEY = os.environ["BAMBOOHR_API_KEY"]
LEVER_API_KEY = os.environ["LEVER_API_KEY"]

BAMBOOHR_URL = "https://api.bamboohr.com/api/gateway.php/{company}/v1/applicant_tracking/applications"
LEVER_URL = "https://api.lever.co/v1/opportunities"

# Extract from BambooHR (API key as the basic-auth username, blank password)
response = requests.get(
    BAMBOOHR_URL,
    auth=(BAMBOOHR_API_KEY, ""),
    headers={"Accept": "application/json"},
)
response.raise_for_status()
applications = response.json()  # inspect the real payload shape for your account

# Transform and load into Lever
for app in applications:
    payload = {
        "name": f"{app['firstName']} {app['lastName']}",
        "headline": app.get("jobTitle"),
        "emails": [app["email"]],
        "phones": [{"value": app["phone"]}],
    }

    # Lever limits POST requests to 2 per second; retry on 429 with backoff
    for attempt in range(5):
        res = requests.post(LEVER_URL, auth=(LEVER_API_KEY, ""), json=payload)
        if res.status_code != 429:
            break
        time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...

    # Proactively stay under the 2 requests/second limit
    time.sleep(0.6)
```

Handling Edge Cases, API Limits, & Constraints
Migrations fail in the edge cases. Here is what will break your scripts if you do not account for it.
Lever's Strict API Rate Limits
Lever strictly rate-limits application POST requests to 2 per second. If your custom job site or migration script issues more than 2 POST requests per second, Lever will return a 429 TOO MANY REQUESTS status code. Your pipeline must include intelligent queueing, exponential backoff, and retry logic to prevent silent data loss during bulk uploads.
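Here is a minimal backoff-and-retry sketch, with the HTTP call injected so the retry logic stays testable. The delay parameters (base, cap, attempt count) are illustrative defaults, not values Lever prescribes:

```python
import random
import time

def backoff_delay(attempt: int, base: float = 0.5, cap: float = 30.0) -> float:
    """Exponential backoff with jitter: ~0.5s, ~1s, ~2s, ... capped at 30s."""
    return min(cap, base * (2 ** attempt)) * (0.5 + random.random() / 2)

def post_with_retry(do_post, payload, max_attempts: int = 6) -> int:
    """Retry a POST on 429 responses. `do_post` wraps the real HTTP call
    and returns the status code. Raises if the record still won't load,
    so the failure is loud rather than a silent drop."""
    for attempt in range(max_attempts):
        status = do_post(payload)
        if status != 429:
            return status
        time.sleep(backoff_delay(attempt))
    raise RuntimeError("rate-limited after retries; record not loaded")
```

The jitter matters in bulk loads: without it, every worker that hit the limit retries at the same instant and hits it again.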
BambooHR API Limitations
Extracting historical data from BambooHR is not always straightforward. BambooHR's API and export tools have limitations around extracting specific historical data. Candidate notes and specific permissions are often unsupported in standard exports, requiring you to hit undocumented endpoints or write custom scraping logic to preserve interviewer feedback.
Multi-Level Relationships
If a candidate applied to three jobs, they should have one Lever Candidate Profile and three Lever Opportunities. If your script creates three separate Candidate Profiles, you will break Lever's reporting and force your recruiters to manually merge duplicate profiles for months. Getting this data model right is also critical for future portability—as we detail in our Lever to Greenhouse migration guide, improperly merged Lever records are notoriously difficult to untangle later.
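The cheapest defense is to group the flat records by a normalized email before loading, so each email becomes exactly one profile with N opportunities. A sketch (normalizing only by lowercased email; real deduplication may also need phone or name matching):

```python
from collections import defaultdict

def group_by_candidate(applications: list[dict]) -> dict[str, list[dict]]:
    """Bucket flat application records by normalized email, so each
    bucket maps to one Lever profile with one opportunity per record."""
    grouped: dict[str, list[dict]] = defaultdict(list)
    for app in applications:
        grouped[app["email"].strip().lower()].append(app)
    return dict(grouped)

apps = [
    {"email": "Ada@Example.com", "jobTitle": "Staff Engineer"},
    {"email": "ada@example.com", "jobTitle": "Principal Engineer"},
    {"email": "grace@example.com", "jobTitle": "Engineering Manager"},
]
grouped = group_by_candidate(apps)
```

Note the two casings of Ada's address collapse into one bucket; without the normalization they would have become two duplicate profiles.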
For a broader look at common pitfalls, read 5 "Gotchas" in ATS Migration.
Validation, Testing, & Post-Migration Tasks
Do not run a migration without a rollback plan.
- Run a sandbox test: Push 5% of your BambooHR data into a Lever sandbox environment.
- Record count validation: Ensure the number of BambooHR applications matches the number of Lever opportunities.
- Field-level validation: Spot-check 50 random candidates to verify resumes attached correctly and historical notes transferred.
- Rebuild automations: Once the data is live, rebuild your Slack notifications, interview scheduling triggers, and HRIS syncs in Lever.
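Record-count validation is more useful per candidate than in aggregate, since totals can match while individual histories are wrong. A sketch, assuming you can pull email-tagged record lists from both systems:

```python
from collections import Counter

def validate_counts(source_apps: list[dict], lever_opps: list[dict]) -> list[str]:
    """Return the emails whose application/opportunity counts differ
    between the source export and the loaded Lever data. Empty list = pass."""
    src = Counter(a["email"].lower() for a in source_apps)
    dst = Counter(o["email"].lower() for o in lever_opps)
    return sorted(e for e in set(src) | set(dst) if src[e] != dst[e])
```

Running this after the 5% sandbox push tells you not just that records are missing, but exactly whose.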
Why Engineering Teams Choose a Managed Migration Service
Building an ETL pipeline for a one-time migration is a poor use of engineering bandwidth. You will spend weeks reading API documentation, handling OAuth token refreshes, writing retry logic for 429 errors, and mapping custom fields—only to throw the code away when the migration is done.
DIY migrations carry the risk of data loss, broken relationships, and extended downtime for your recruiting team. The hidden costs of pulling senior engineers off core product work far exceed the price of a managed service.
Why ClonePartner stands out:
- Structural transformation: We handle the complex logic of converting BambooHR's flat application records into Lever's split Candidate and Opportunity data model.
- Rate limit management: Our infrastructure automatically works within Lever's strict 2 requests/second POST limit using intelligent batching and retry logic, guaranteeing zero data loss.
- Deep extraction: We pull hard-to-reach historical data, including candidate notes and attachments, that native BambooHR CSV exports drop.
- Zero downtime: We preserve multi-level relationships (Candidate → Opportunity → Interview Feedback) while your team continues to hire without interruption.
Frequently Asked Questions
- How do I export candidate notes from BambooHR?
- BambooHR's native CSV export has limitations and often drops historical candidate notes. You must use the BambooHR API to extract complete candidate histories and attachments programmatically.
- What is the Lever API rate limit for creating opportunities?
- Lever strictly rate-limits application POST requests to 2 per second. Exceeding this limit will result in a 429 TOO MANY REQUESTS error, requiring your migration script to use exponential backoff.
- How does Lever's data model differ from BambooHR?
- BambooHR uses a flat, application-centric model where candidates are tied directly to specific jobs. Lever uses an opportunity-centric model, where a single candidate profile can have multiple distinct opportunities attached to it.