
Bullhorn to JobDiva Migration Guide (2026)

A technical guide to migrating from Bullhorn to JobDiva: API extraction limits, entity mapping, Fair Use Policy constraints, and step-by-step planning.

Raaj Raaj · 22 min read

Migrating from Bullhorn to JobDiva is a data-model translation and API-orchestration problem wrapped inside a vendor-policy minefield. Bullhorn's highly customizable, decoupled entity model — Candidate, ClientCorporation, ClientContact, JobOrder, JobSubmission, Placement — must be mapped into JobDiva's tighter, all-in-one staffing schema where VMS sync, back-office invoicing, and candidate management live under a single roof. Every mismatch between these two architectures is where candidate notes, submittal histories, and placement records silently break.

If you need a fast decision: Bullhorn's native CSV export is designed for small exports and completely drops DHTML/rich-text fields like formatted notes and embedded resumes. The exact export limit depends on how many columns you include — heavy columns like Submissions can multiply rows and cause exports to fail entirely. API-based extraction via the Bullhorn REST API, combined with JobDiva's API endpoints for loading, is the only path that preserves full relational fidelity at enterprise scale. But Bullhorn's API Fair Use Policy explicitly prohibits bulk data transfer to unauthorized solutions without written permission — plan around that constraint from day one. (kb.bullhorn.com)

This guide covers the real extraction limits on both sides, entity-by-entity mapping, every viable migration approach with trade-offs, and the edge cases that silently corrupt data during ATS-to-ATS moves.

For broader ATS migration patterns, see 5 "Gotchas" in ATS Migration. For a deeper look at why CSV-based migrations fail at scale, read Using CSVs for SaaS Data Migrations: Pros and Cons.

Warning

Bullhorn's API Fair Use Policy (updated December 2025) states: "You must not use the API to transfer data beyond the minimum necessary for the permitted purpose and there can be no export of data to… unauthorized solutions… without Bullhorn's explicit written permission." Get written authorization before running any bulk extraction scripts. (bullhorn.github.io)

Why Staffing Firms Migrate from Bullhorn to JobDiva

The move from Bullhorn to JobDiva is almost always driven by one of three factors:

  1. Native VMS synchronization. JobDiva has offered automated VMS job capture and candidate submittal sync as a core, built-in capability for years. Bullhorn's VMS automation is a relatively newer add-on, and many agencies still rely on third-party marketplace integrations to bridge their ATS and VMS portals. For agencies running dozens of VMS programs, JobDiva's native sync removes an entire category of manual work.

  2. Consolidated back-office. JobDiva bundles time-capture, expense management, invoicing, and payroll processing into the platform. Bullhorn typically requires separate back-office tools (like Bullhorn Back Office or third-party add-ons), each with its own integration point and data silo.

  3. Cost consolidation. Agencies running Bullhorn plus three or four marketplace add-ons (VMS sync, back-office, analytics, texting) sometimes find that JobDiva's all-in-one pricing reduces total technology cost.

The trade-off is real: Bullhorn's integration ecosystem is broader, its CRM functionality is frequently rated stronger, and its customization depth (custom objects, configurable field maps) is significantly greater than JobDiva's more rigid schema. You're moving from a compose-your-stack model to a more integrated staffing workflow model. Understand what you're gaining and what you're giving up before committing. (bullhorn.com)

The Data Model Clash: Bullhorn vs. JobDiva

The core engineering challenge is mapping Bullhorn's highly relational, deeply customizable entity model into JobDiva's flatter, staffing-workflow-centric schema. The hard part is not field renaming — it is deciding how to collapse or re-express Bullhorn relationships that have no clean equivalent in JobDiva.

Bullhorn's Entity Architecture

Bullhorn organizes staffing data around these core entities:

  • Candidate — personal info, skills (to-many), work history (to-many), education (to-many), file attachments, and custom fields (customText1–customText20, customDate1–customDate3, etc.)
  • ClientCorporation — the company entity. Owns ClientContact records.
  • ClientContact — a person at a client company. Links to ClientCorporation via a to-one association.
  • JobOrder — an open position. Links to ClientCorporation and ClientContact.
  • JobSubmission — a candidate's submittal to a job. Links Candidate → JobOrder with a status pipeline.
  • Placement — a confirmed placement. Links Candidate → JobOrder with compensation, start/end dates, and billing details.
  • Note — activity records attached to any entity. Contains action (call type), comments (often rich HTML), and personReference.
  • Custom Objects (customObject1s through customObject10s) — flexible, entity-attached records for tracking data Bullhorn doesn't natively model.

JobDiva's Entity Architecture

JobDiva consolidates staffing operations into fewer, broader entities:

  • Candidate — similar to Bullhorn, but with user-defined fields rather than true custom objects.
  • Contact — maps to Bullhorn's ClientContact.
  • Company — maps to Bullhorn's ClientCorporation.
  • Job — maps to Bullhorn's JobOrder. Tightly integrated with VMS feeds.
  • Submittal — maps to Bullhorn's JobSubmission.
  • Start (Placement) — maps to Bullhorn's Placement. Linked to timesheet and invoicing modules.
  • Activity/Note — notes and activities on candidates and contacts.
Info

The critical mismatch: Bullhorn supports up to 10 custom object types per entity, each with its own fields and associations. JobDiva has no equivalent. Custom object data must be flattened into JobDiva's user-defined fields or serialized into notes. This is the single biggest source of data loss in Bullhorn-to-JobDiva migrations. (bullhorn.github.io)

Entity Mapping Table

| Bullhorn Entity | JobDiva Equivalent | Key Differences |
|---|---|---|
| Candidate | Candidate | JobDiva uses user-defined fields instead of typed custom objects |
| ClientCorporation | Company | Structurally similar; preserve parent-child company hierarchies |
| ClientContact | Contact | Bullhorn links via clientCorporation.id; rebuild association in JobDiva |
| JobOrder | Job | JobDiva jobs integrate with VMS feeds; clientCorporation and clientContact links must be rebuilt |
| JobSubmission | Submittal | Map status pipeline values; Bullhorn uses configurable statuses |
| Placement | Start | Map dateBegin, dateEnd, payRate, billRate, salary |
| Note (with action, comments) | Activity/Note | Rich HTML comments must be stripped or converted |
| CandidateWorkHistory | Candidate work history | Map companyName, title, startDate, endDate |
| CandidateEducation | Candidate education | Map school, degree, major, graduationDate |
| customObject1s–customObject10s | User-defined fields or Notes | No direct equivalent; must flatten or serialize |
| File Attachments (resumes, docs) | Candidate attachments | Extract via /file/{entityType}/{entityId}, upload to JobDiva |

Field-Level Mapping (Key Fields)

| Bullhorn Field | Type | JobDiva Field | Transformation |
|---|---|---|---|
| Candidate.firstName | String | firstName | Direct map |
| Candidate.lastName | String | lastName | Direct map |
| Candidate.email | String | email | Direct map |
| Candidate.status | String | Candidate status | Map picklist values |
| Candidate.customText1–customText20 | String | User-defined fields | Map via updateCandidateAttribute |
| Candidate.description | HTML (DHTML) | Candidate notes or profile | Strip HTML or preserve as note |
| Candidate.owner.id | Integer | Recruiter ID | Map Bullhorn user IDs → JobDiva recruiter IDs |
| JobOrder.title | String | Job title | Direct map |
| JobOrder.clientCorporation.id | Integer | Company ID | Requires pre-loaded company ID mapping |
| Placement.dateBegin | Timestamp | Start date | Convert from Unix millis to MM/dd/yyyy |
| Placement.payRate | BigDecimal | Pay rate | Direct map |
| Note.action | String | Activity action | Map Bullhorn action types to JobDiva action types |
| Note.comments | HTML | Note text | Strip HTML tags; rich text formatting is lost |
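To make the date transform concrete, here is a minimal sketch of the Unix-milliseconds-to-MM/dd/yyyy conversion the table above calls for. UTC is assumed; confirm your tenant's timezone conventions before running it at scale.

```python
from datetime import datetime, timezone

def millis_to_jobdiva_date(millis: int) -> str:
    """Convert a Bullhorn Unix-millisecond timestamp to JobDiva's MM/dd/yyyy format (UTC assumed)."""
    dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
    return dt.strftime("%m/%d/%Y")

# Example: 2024-03-15T00:00:00Z
print(millis_to_jobdiva_date(1710460800000))  # → 03/15/2024
```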

Bullhorn Extraction Limits: Why CSVs and Standard APIs Fail

This is where most migration plans break down. Bullhorn has three extraction methods, and each has hard limits that aren't obvious until you're mid-migration.

Method 1: Native CSV Export from the ATS

Bullhorn's ATS list view supports CSV export, but with significant constraints:

  • Record limit: Bullhorn's own help center positions CSV export as best when you need only a few thousand records. The exact count depends on how many columns you include — hyperlinked columns like Submissions expand the result set (exporting the five most recent submissions per candidate) and can cause the export to fail entirely. (kb.bullhorn.com)
  • Limited entity scope: Only candidates and sales contacts are directly exportable from the CSV path — not placements, jobs, or submissions.
  • DHTML Editor fields are dropped. Any field with a dataSpecialization of HTML — including formatted notes, rich-text resumes stored in description, and custom DHTML fields — will not appear in CSV exports.
  • No export history logging. Bullhorn does not log who exported what or when.

Bottom line: CSV export works for a quick list of a few thousand flat records. It is not a viable migration path for any agency with more than trivial data. For full copies including files, Bullhorn points users toward a separate Data Backup process — and only Account or Support Contacts can request it.

Method 2: Bullhorn REST API

The REST API is the only extraction method that can reach all entities and all field types. But it comes with its own constraints:

  • To-many entity cap: When you request to-many associations inline (e.g., fields=categories[10](name)), the default count returned is 5 and the maximum is 10. To-many fields can only appear at the top level with no general nesting. If a candidate has 25 skills, you'll only get 10 in a single entity GET — you must make separate /entity/Candidate/{id}/primarySkills calls to retrieve the full set. (bullhorn.github.io)
  • Search pagination: Search results return a maximum of 500 records per request. You paginate using start and count parameters.
  • Session token expiry: The BhRestToken expires after a short period (10 minutes by default). Your scripts must handle token refresh using the OAuth refresh token flow.
  • Authentication flow: Bullhorn uses OAuth 2.0, a loginInfo step to determine the correct data-center URL, and a BhRestToken for subsequent requests. If your client ignores the data-center redirect logic, you'll get brittle failures before you even start batching. (bullhorn.github.io)
  • Rate limiting: Bullhorn's API documentation lists limits including 50 concurrent sessions, 100,000 calls per month (unless otherwise agreed), and 1,500 requests per minute. Generating excessive 429 errors in a short window can get your API account disabled. (kb.bullhorn.com)
  • Edition-dependent access: API access is edition-dependent. Confirm your Bullhorn edition includes REST API entitlements before scoping an API migration.
  • Fair Use Policy: Bullhorn's API Fair Use Policy explicitly states there can be no bulk export of data to unauthorized solutions without explicit written permission. (bullhorn.github.io)
Danger

Warning: Do not attempt to extract to-many entities by requesting a count greater than 10 — the Bullhorn API will reject the request. For candidates with deep histories (50+ notes, dozens of submittals), your extraction script must handle iterative pagination for every single candidate record, multiplying API call volume into an N+1 pattern per candidate.
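The 500-record search cap means every large extraction is a sequence of (start, count) windows. A small sketch of the paging math only, with no API calls:

```python
def page_windows(total: int, page_size: int = 500):
    """Yield (start, count) pairs covering `total` records at Bullhorn's 500-record search cap."""
    start = 0
    while start < total:
        yield start, min(page_size, total - start)
        start += page_size

# A 1,250-record result set needs three requests:
print(list(page_windows(1250)))  # → [(0, 500), (500, 500), (1000, 250)]
```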

Method 3: Bullhorn Data Loader (Import-Only)

Bullhorn's open-source Data Loader is designed for importing CSV data into Bullhorn, not for extraction. It can export the current state of records being updated (as a pre-update backup), but it is not a general-purpose export tool.

Warning

Before you build a full extractor, confirm Bullhorn edition entitlements, API rights, and whether your migration destination is authorized under Bullhorn's policy. A technically working script can still be the wrong commercial path. (bullhorn.github.io)

JobDiva API: Loading Constraints

On the JobDiva side, the API has its own set of constraints that affect how you load migrated data:

  • createCandidate endpoint: Requires a POST to /apiv2/jobdiva/createCandidate. The request body must include firstName and lastName at minimum. The API supports education, skills, certifications, and user-defined fields in a single call.
  • updateCandidateAttribute endpoint: Used to update specific candidate attributes after creation. Requires candidateId and text in the request body. Returns a boolean for success/failure.
  • createCandidateNote endpoint: Requires candidateId and note as mandatory fields. Optional fields include recruiterId, action, actionDate, and links to jobs or contacts.
  • Date-range mandatory parameters: Endpoints like Get Candidate Application Records require fromDate and toDate in MM/dd/yyyy HH:mm:ss format. You cannot request "all historical applications" — your pipeline must iterate through date windows to retrieve or validate complete histories.
  • Rate limiting: JobDiva imposes per-minute rate limits on API requests. The specific ceilings are not publicly documented in the sources we reviewed — plan for conservative throttling and confirm write behavior during discovery. (jobdiva.com)
  • API access: JobDiva provides API access through a Client ID and API user credentials. Contact JobDiva Support to obtain a Client ID for your integration.
Info

JobDiva's API documentation is not as publicly accessible as Bullhorn's. You'll need to work directly with JobDiva's team or your implementation partner to confirm endpoint coverage, throttling, and attachment behavior in your own tenant before locking the runbook.
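Because date-range parameters are mandatory, a migration pipeline typically iterates fixed-size windows over the historical period. A sketch of a window generator for JobDiva's MM/dd/yyyy HH:mm:ss format — the 30-day window size is an arbitrary choice, not a documented JobDiva limit:

```python
from datetime import datetime, timedelta

FMT = "%m/%d/%Y %H:%M:%S"  # JobDiva's required date format

def date_windows(start: datetime, end: datetime, days: int = 30):
    """Yield (fromDate, toDate) string pairs covering [start, end) in fixed-size windows."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur.strftime(FMT), nxt.strftime(FMT)
        cur = nxt

windows = list(date_windows(datetime(2020, 1, 1), datetime(2020, 3, 1)))
print(windows[0])  # → ('01/01/2020 00:00:00', '01/31/2020 00:00:00')
```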

Evaluating Migration Approaches

There are five ways to move data from Bullhorn to JobDiva. The right method depends less on company size and more on historical depth, relationship complexity, and how much broken history you can tolerate.

Approach 1: Native CSV Export → Manual Import

How it works: Export records from Bullhorn's ATS list view as CSV. Clean and reformat. Import into JobDiva using their CSV import tools or manually.

When to use it: Only for very small datasets (under 5,000 flat records) where you don't need notes, attachments, or relational data.

| Dimension | Assessment |
|---|---|
| Scalability | Very limited |
| Relational data | Lost |
| Rich text / DHTML | Lost |
| Attachments | Not included |
| Complexity | Low |

Bullhorn's own docs position CSV for smaller exports and warn that heavy columns can cause exports to fail. Treat CSV as a sampling tool, not the backbone of an enterprise migration. (kb.bullhorn.com)

Approach 2: Custom API-Based ETL Pipeline

How it works: Write scripts (Python, Node.js) that authenticate to both APIs, extract entities from Bullhorn via REST in dependency order, transform the data, and load it into JobDiva via their API endpoints.

When to use it: When you have a dedicated engineering team with weeks of spare capacity and need full control over transformation logic.

| Dimension | Assessment |
|---|---|
| Scalability | Good with proper batching |
| Relational data | Preserved if you handle dependency ordering |
| Rich text / DHTML | Preserved via API extraction |
| Attachments | Requires separate file download/upload pipeline |
| Complexity | High |

Hidden costs: OAuth session management (including loginInfo routing and BhRestToken renewal), rate-limit handling with exponential backoff, retry logic, to-many pagination (the 10-item cap means separate API calls for each candidate's full note/skill/submittal history), Fair Use Policy approval, and weeks of QA. Easy to underestimate — and the result is a one-off system nobody wants to maintain.

Approach 3: iPaaS / Middleware (Zapier, Make, Jitterbit)

How it works: Use visual workflow tools to map fields between Bullhorn and JobDiva, triggered by schedules or events.

When to use it: For ongoing sync of small data volumes (new records, status updates) — not historical bulk migration.

| Dimension | Assessment |
|---|---|
| Scalability | Poor for bulk historical data |
| Relational data | Limited — most iPaaS tools can't handle multi-hop relationships |
| Attachments | Rarely supported |
| Complexity | Medium |

As of April 2026, Zapier lists Bullhorn CRM and Jitterbit documents a Bullhorn connector. We did not find a vendor-published Bullhorn-to-JobDiva migration utility. Use iPaaS for post-migration automations, not the historical migration engine. (zapier.com)

Approach 4: Unified API Platforms (Merge, Unified.to)

How it works: Third-party platforms that provide a normalized API layer across ATS systems. You read from their unified Candidate, Application, Job objects.

When to use it: When building a product integration, not a one-time migration. These platforms normalize schemas, which means you lose Bullhorn-specific and JobDiva-specific fields — including custom objects and user-defined fields.

| Dimension | Assessment |
|---|---|
| Scalability | Moderate |
| Relational data | Partially preserved (normalized schema) |
| Custom fields | Often lost in normalization |
| Complexity | Medium |

Approach 5: Managed Migration Service

How it works: A specialized team handles the full pipeline: extraction, transformation, loading, relationship rebuilding, validation, and delta sync.

When to use it: When you have more than 100,000 records, complex relational data, custom objects, attachments, or when you can't afford recruiter downtime.

| Dimension | Assessment |
|---|---|
| Scalability | Enterprise-grade |
| Relational data | Fully preserved |
| Rich text / DHTML | Preserved |
| Attachments | Included |
| Complexity | Low for your team |

Comparison Summary

| Approach | Best For | Relational Integrity | Attachments | Engineering Effort | Cost |
|---|---|---|---|---|---|
| CSV Export | < 5K flat records | Lost | Not included | Low | Free |
| Custom ETL | Mid-size, dev team available | Preserved (if built correctly) | Separate pipeline needed | Very High | Engineering time |
| iPaaS | Ongoing sync, small volumes | Limited | Rarely supported | Medium | Platform fees |
| Unified API | Product integration | Normalized (partial) | Limited | Medium | Platform fees |
| Managed Service | Enterprise, complex data | Fully preserved | Included | Minimal | Service fee |

Recommendation by Scenario

  • Small agency (< 50 users, < 50K records, minimal custom objects): Custom ETL is feasible if you have an engineer who can dedicate 2–4 weeks. CSV export is a fallback for flat contact lists only.
  • Mid-market agency (50–200 users, 100K–1M records, custom objects, attachments): Managed migration. The hidden engineering cost of handling Bullhorn's API constraints (to-many caps, rate limits, Fair Use Policy) almost always exceeds the cost of a service.
  • Enterprise (200+ users, 1M+ records, multiple offices, active VMS programs): Managed migration with phased cutover and delta sync.
  • Ongoing sync during transition: iPaaS for new-record sync combined with a managed service for historical data.

For a cost and risk breakdown of building internally versus outsourcing, see In-House vs. Outsourced Data Migration.

When Not to Build In-House

The instinct to build migration scripts in-house is strong — especially at engineering-led agencies. But Bullhorn-to-JobDiva migrations have specific properties that make DIY unusually risky:

  1. Bullhorn's Fair Use Policy requires explicit written permission for bulk data export. This isn't a technical issue — it's a legal and commercial one that can block your project entirely. (bullhorn.github.io)
  2. The to-many entity cap (10 per inline request) means extracting deep candidate histories requires N+1 API calls per candidate. For an agency with 500,000 candidates averaging 15 notes and 8 submittals each, that's millions of API calls — all subject to rate limits and monthly call allotments.
  3. DHTML/HTML fields require special handling. Bullhorn stores rich-text content (notes, descriptions) as HTML. JobDiva's note fields may not accept raw HTML. You need HTML-to-text conversion that preserves meaningful formatting without corrupting the data.
  4. Dependency ordering is non-trivial. You must load Companies before Contacts, Contacts before Jobs, Jobs before Submittals, and Submittals before Placements. A single broken link in this chain orphans downstream records.
  5. Custom objects have no target. Bullhorn's customObject1s–customObject10s must be mapped somewhere — user-defined fields, serialized notes, or dropped entirely. This requires business-logic decisions, not just engineering.
  6. Edition entitlements can block you. Bullhorn's API access is edition-dependent. A technically working script can still be commercially unauthorized.

The hidden costs aren't just coding. They include vendor support cycles, entitlement surprises, staging environment setup, QA scripts, user validation, and post-cutover triage. Bullhorn's policy and rate limits make failed experiments expensive, not just annoying.

For a deeper look at why migration and implementation should be separate workstreams, see Why Data Migration Isn't Implementation.

Migration Architecture and Code

Use a standard ETL pattern, but always stage data before loading into the target:

  1. Extract Bullhorn data into a staging database — not directly into JobDiva.
  2. Transform IDs, owners, picklists, dates (Unix millis → MM/dd/yyyy), HTML/rich text, and duplicates.
  3. Load base objects first (Companies, Contacts, Candidates, Jobs), then child objects (Submittals, Placements), then notes and files, then deltas.
  4. Validate every batch before advancing the dependency chain.
Info

Keep two immutable keys from day one: the Bullhorn source ID and your migration run ID. Do not replace them with names or emails. They let you rebuild parent-child relationships, replay deltas, and prove that Bullhorn record 12345 became JobDiva record 98765.
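One way to keep those keys is a small mapping table in the staging database. A minimal sqlite sketch — the table and column names here are illustrative, not any Bullhorn or JobDiva schema:

```python
import sqlite3

# Hypothetical ID-mapping store: one row per migrated record, keyed by source system ID.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE id_map (
        entity      TEXT NOT NULL,    -- e.g. 'Candidate', 'Placement'
        bullhorn_id INTEGER NOT NULL, -- immutable source key
        jobdiva_id  INTEGER,          -- filled in after a successful load
        run_id      TEXT NOT NULL,    -- migration run that produced this row
        PRIMARY KEY (entity, bullhorn_id)
    )
""")
conn.execute("INSERT INTO id_map VALUES ('Candidate', 12345, 98765, 'run-001')")

# Prove that Bullhorn record 12345 became JobDiva record 98765:
row = conn.execute(
    "SELECT jobdiva_id FROM id_map WHERE entity = 'Candidate' AND bullhorn_id = 12345"
).fetchone()
print(row[0])  # → 98765
```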

On Bullhorn, the extraction layer means OAuth 2.0, loginInfo-based data-center routing, BhRestToken session handling, and paginated entity queries. On JobDiva, plan around dedicated API credentials, Client ID management, and date-windowed endpoints for historical activity.

Handling Bullhorn's Pagination and Rate Limits

# Pseudocode: Paginated extraction from Bullhorn REST API
import requests
import time
 
BASE_URL = "https://rest99.bullhornstaffing.com/rest-services/{corpToken}"  # substitute corpToken from the login response
TOKEN = "your-BhRestToken"
 
def extract_all_candidates():
    global TOKEN  # refreshed in place when the session expires
    start = 0
    count = 500  # Max per search request
    attempt = 0
    all_candidates = []
    
    while True:
        url = f"{BASE_URL}/search/Candidate"
        params = {
            "query": "isDeleted:0",
            "fields": "id,firstName,lastName,email,status,owner,dateAdded",
            "count": count,
            "start": start,
            "BhRestToken": TOKEN
        }
        resp = requests.get(url, params=params)
        
        if resp.status_code == 429:
            time.sleep(2 ** attempt)  # Exponential backoff on rate limit
            attempt += 1
            continue
        if resp.status_code == 401:
            TOKEN = refresh_session_token()  # OAuth refresh flow; BhRestToken expires after inactivity
            continue
        attempt = 0
            
        data = resp.json()
        all_candidates.extend(data.get("data", []))
        
        total = data.get("total", 0)
        start += count
        if start >= total:
            break
    
    return all_candidates
 
def extract_candidate_notes(candidate_id):
    """Separate call needed — to-many entities cap at 10 inline.
    Paginate with start/count if a candidate has more than 500 notes."""
    url = f"{BASE_URL}/entity/Candidate/{candidate_id}/notes"
    params = {
        "fields": "id,action,comments,dateAdded,personReference",
        "count": 500,
        "start": 0,
        "BhRestToken": TOKEN
    }
    return requests.get(url, params=params).json()

What this script needs around it:

  • A Bullhorn client that handles OAuth, loginInfo-based data-center routing, BhRestToken renewal, and exponential backoff
  • A JobDiva client that handles API user credentials, Client ID, date-windowed reads, and idempotent writes
  • A mapping layer with deterministic, versioned transforms
  • A validator that quarantines bad rows without stopping the run
  • An audit store that records source ID, target ID, batch ID, payload hash, and retry count

Build retries, audit logs, and idempotent upserts into the first version of the pipeline, not the cleanup version. Bullhorn's rate limits and JobDiva's date-windowed endpoints make replayability a requirement, not a nice-to-have.
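As one example of the retry discipline described above, exponential backoff with full jitter can be a few lines. The base and cap values here are arbitrary defaults, not vendor-documented limits:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Full-jitter backoff: a random delay between 0 and min(cap, base * 2**attempt) seconds."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

# Delays grow toward the cap but stay randomized, avoiding synchronized retry storms:
for attempt in range(6):
    delay = backoff_delay(attempt)
    assert 0 <= delay <= min(60.0, 2.0 ** attempt)
```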

The Pre-Migration and Go-Live Checklist

Phase 1: Data Audit (2–5 days)

  • Inventory all Bullhorn entities: Count records for Candidates, ClientCorporations, ClientContacts, JobOrders, JobSubmissions, Placements, Notes, and file attachments.
  • Identify custom objects in use: Which of customObject1s–customObject10s are populated? What business data do they hold?
  • Audit custom fields: Document all customText, customDate, customFloat, customInt fields and their business meaning.
  • Flag unused data: Soft-deleted records (isDeleted: true), candidates with no activity in 3+ years, closed jobs older than your retention policy.
  • Document picklist values: Bullhorn status fields, note action types, and category values must be mapped to JobDiva equivalents.
  • Check attachment volumes: Total file count and total size. Plan storage and upload time.

Phase 2: Define Migration Scope

  • Decide what moves and what stays. Not everything needs to migrate. Old, inactive records may be better archived.
  • Choose cutover strategy:
    • Big bang: Everything moves at once over a weekend. Simplest but highest risk.
    • Phased: Move entity types sequentially (Companies → Contacts → Jobs → Candidates → Submittals → Placements). Safer but longer.
    • Incremental with delta sync: Move historical data first, then run delta syncs to catch changes until cutover. Lowest risk.
  • Map user accounts: Bullhorn CorporateUser IDs must map to JobDiva recruiter/user IDs for ownership preservation.
  • Obtain Bullhorn API authorization: Request written permission per the Fair Use Policy. Confirm edition entitlements and OAuth credentials.
  • Address PII handling: Plan for resume files, notes with compensation data, tax info, and work-authorization fields.

Phase 3: Test Migration (1–2 weeks)

  • Run a sample migration with 1,000–5,000 records across all entity types.
  • Validate record counts: Source count vs. target count per entity type.
  • Validate field-level data: Spot-check 50–100 records per entity. Verify names, dates, status values, and associations.
  • Verify relationships: Can you navigate Company → Contact → Job → Submittal → Placement in JobDiva? Are all links intact?
  • Test attachment migration: Are resumes and documents accessible on the correct candidate records?
  • Run at least two full test runs before the real cutover. Each run reveals issues the previous one missed.

Phase 4: Go-Live Cutover

  • Freeze source data (or run final delta sync).
  • Execute final migration run.
  • Run validation suite: Record counts, field spot-checks, relationship verification.
  • UAT with recruiters: Have 2–3 recruiters verify their own candidate records, active jobs, and recent placements.
  • Enable JobDiva for all users.
  • Monitor for 72 hours post-cutover. Watch for missing records, broken links, or data inconsistencies.

Phase 5: Post-Migration

  • Rebuild automations. Bullhorn automation rules don't transfer. Recreate workflows in JobDiva.
  • Reconfigure VMS integrations. Set up JobDiva's native VMS connections.
  • Train users. JobDiva's UI and search paradigm differ from Bullhorn's. Budget for structured training — and train recruiters on the data compromises you made, not just the new interface.
  • Decommission Bullhorn. Keep it read-only during a 30–90 day parallel-run period, then cancel after UAT and finance sign-off.

Expected Timeline

  • Small agency (< 50K records, minimal custom objects): 2–4 weeks including testing.
  • Mid-market (100K–1M records): 4–8 weeks.
  • Enterprise (1M+ records, custom objects, attachments, VMS integrations): 8–12 weeks with phased cutovers.

Validation and Testing Strategy

Do not trust a migration that only counts records. Do not sign off based on import success messages.

Record Count Comparison

Compare total counts per entity type between Bullhorn and JobDiva. Any delta needs investigation.
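Once per-entity counts are collected from both APIs, the comparison itself is a few lines. This sketch assumes you have already gathered the counts into plain dicts:

```python
def count_deltas(source_counts: dict, target_counts: dict) -> dict:
    """Return per-entity deltas between source and target counts; any non-zero entry needs investigation."""
    return {
        entity: source_counts[entity] - target_counts.get(entity, 0)
        for entity in source_counts
        if source_counts[entity] != target_counts.get(entity, 0)
    }

src = {"Candidate": 50000, "Placement": 1200}
tgt = {"Candidate": 49992, "Placement": 1200}
print(count_deltas(src, tgt))  # → {'Candidate': 8}
```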

Field-Level Sampling

For each entity type, randomly sample 2–5% of records (minimum 50) and verify:

  • All mapped fields contain expected values
  • Date fields are correctly converted (Bullhorn uses Unix milliseconds; JobDiva uses MM/dd/yyyy)
  • Picklist values mapped correctly
  • Owner/recruiter associations are correct

Relationship Integrity

Verify end-to-end chains:

  • Pick 20 Placements → verify the linked Candidate, Job, Company, and Contact all exist and are correctly associated
  • Pick 20 Submittals → verify Candidate and Job links
  • Pick 20 Notes → verify they're attached to the correct Candidate or Contact

Attachment Verification

For 50+ records with attachments, verify:

  • File exists on the correct record in JobDiva
  • File is downloadable and not corrupted
  • File name is preserved

Rollback Plan

Before cutover, ensure:

  • Full Bullhorn data backup exists (via API extraction or Bullhorn's Data Backup service)
  • JobDiva can be wiped and reloaded if critical issues are found
  • Parallel-run period is defined (both systems active)

Bullhorn's own docs point to Data Backup for full copies including files. Use that to anchor rollback planning before cutover weekend, not after. (kb.bullhorn.com)

Common Pitfalls and Edge Cases

Duplicate Records

Bullhorn allows duplicate candidates (same email, different records). JobDiva may deduplicate on import. Define your deduplication strategy before migration: merge, skip, or flag. SeekOut's JobDiva integration docs explicitly warn that re-exporting previously exported candidates can create duplicate records in JobDiva. (support.seekout.com)
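Whatever deduplication policy you choose, define the matching key explicitly and apply it consistently in both extraction and validation. A hypothetical normalized key on name plus email might look like:

```python
def dedupe_key(first: str, last: str, email: str) -> tuple:
    """Normalized key for pre-migration duplicate detection (illustrative policy: name + email)."""
    return (first.strip().lower(), last.strip().lower(), email.strip().lower())

a = dedupe_key("Jane", "Doe ", "Jane.Doe@example.com")
b = dedupe_key(" jane", "doe", "jane.doe@EXAMPLE.com")
print(a == b)  # → True
```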

Dependency Ordering Failures

The ClientCorporation → ClientContact → JobOrder → JobSubmission → Placement chain must be loaded in strict dependency order. If a Placement references a JobOrder that hasn't been created yet in JobDiva, the import fails silently or creates an orphan. Never rely on names to reconnect data — use stored source IDs.
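A sketch of how a stored source-ID map supports that ordering: before loading Placements, verify that every referenced JobOrder already has a JobDiva ID, and re-queue any that don't. All names here are illustrative:

```python
# Hypothetical load order and orphan check against a stored source-ID map.
LOAD_ORDER = ["Company", "Contact", "Job", "Submittal", "Placement"]

def find_orphans(placements: list, id_map: dict) -> list:
    """Return placements whose parent JobOrder has no JobDiva ID yet — hold these back or re-queue."""
    return [p for p in placements if p["jobOrderId"] not in id_map.get("Job", {})]

id_map = {"Job": {501: 9001}}  # Bullhorn JobOrder 501 → JobDiva Job 9001
placements = [{"id": 1, "jobOrderId": 501}, {"id": 2, "jobOrderId": 502}]
print(find_orphans(placements, id_map))  # → [{'id': 2, 'jobOrderId': 502}]
```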

Custom Objects Without a Target

Bullhorn's customObject1s–customObject10s have no equivalent in JobDiva. Options:

  • Flatten to user-defined fields (if the data is simple key-value)
  • Serialize as structured notes (if the data is complex)
  • Archive to a separate system (if the data is rarely accessed)

Do not force relational custom data into random text fields — this is a business-logic decision that needs stakeholder input before any code runs.
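If you choose the serialize-as-notes option, keep the payload machine-readable so it can be re-extracted later. A hypothetical sketch — the note field names here are illustrative, not JobDiva's actual API schema:

```python
import json

def custom_object_to_note(entity_id: int, object_name: str, rows: list) -> dict:
    """Serialize a Bullhorn custom object into a structured note payload (field names illustrative)."""
    return {
        "candidateId": entity_id,
        "action": "Migration",
        "note": f"[{object_name}]\n" + json.dumps(rows, indent=2),  # JSON stays parseable later
    }

note = custom_object_to_note(12345, "customObject1s", [{"certName": "PMP", "expires": "2027-01-01"}])
print("PMP" in note["note"])  # → True
```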

Attachments and File Migration

Bullhorn stores files as base64-encoded blobs accessible via /file/{entityType}/{entityId}/{fileId}. Each file requires a separate API call to download. For agencies with 100,000+ attachments, this is a multi-day extraction process that must handle rate limits gracefully.
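A sketch of the decode step, assuming the common Bullhorn file-response shape with a base64 fileContent property — verify the exact response shape against your own tenant before relying on it:

```python
import base64

def decode_bullhorn_file(file_response: dict) -> bytes:
    """Decode a base64-encoded Bullhorn file payload before re-uploading to JobDiva.
    The 'File' → 'fileContent' shape is an assumption to confirm in your tenant."""
    return base64.b64decode(file_response["File"]["fileContent"])

# Simulated response for a PDF resume:
resp = {"File": {"name": "resume.pdf", "fileContent": base64.b64encode(b"%PDF-1.4 ...").decode()}}
print(decode_bullhorn_file(resp)[:8])  # → b'%PDF-1.4'
```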

Rich-Text Data Loss

Bullhorn's description field on Candidates and the comments field on Notes use DHTML (HTML) formatting. When migrating to JobDiva, you'll likely need to strip HTML tags — losing formatting like bold text, bullet lists, and embedded tables. Document this trade-off with stakeholders before migration.
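If you strip HTML yourself, prefer a real parser over regexes. A minimal sketch using Python's standard library:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only the text content of an HTML fragment, discarding all tags."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def strip_html(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return "".join(parser.parts).strip()

print(strip_html("<p><b>Called candidate</b>, left <i>voicemail</i>.</p>"))
# → Called candidate, left voicemail.
```

Note this drops structure entirely; if bullet lists or line breaks matter, map `<br>`, `<li>`, and `<p>` to newlines instead of discarding them.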

API Failures and Throttling

Both platforms will return errors during large migrations. Your pipeline must:

  • Implement exponential backoff for 429 responses
  • Retry on 500/503 errors with jitter
  • Log every failed record with the full request/response for debugging
  • Support resumable runs (don't restart from zero after a failure)
  • Run duplicate detection after each delta pass
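The resumable-runs requirement in particular is worth sketching, since it is the one teams most often skip. A minimal checkpoint pattern, assuming records are dicts with an `id` key and `process(record)` stands in for the transform-and-load step:

```python
import json
import os

def run_resumable(records, process, checkpoint_path="migration.ckpt"):
    """Process records in order, persisting completed source IDs so a
    crashed run resumes where it stopped instead of restarting from zero.
    Returns the list of failed records for later debugging."""
    done_ids = set()
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as f:
            done_ids = set(json.load(f))

    failures = []
    for rec in records:
        if rec["id"] in done_ids:
            continue                      # migrated in a previous run
        try:
            process(rec)
            done_ids.add(rec["id"])
        except Exception as exc:
            # Log the failure and keep going; one bad record must not
            # stall a multi-day migration.
            failures.append({"id": rec["id"], "error": repr(exc)})
        with open(checkpoint_path, "w") as f:
            json.dump(sorted(done_ids), f)
    return failures
```

A production pipeline would checkpoint in batches rather than per record, but the shape is the same: idempotent, resumable, and loud about failures.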

Limitations to Accept

Some things you cannot perfectly preserve in a Bullhorn-to-JobDiva migration:

  • JobDiva has no true custom objects: Bullhorn custom object data must be flattened or serialized
  • JobDiva's API requires date-range parameters for historical queries: complicates validation and reconciliation
  • DHTML/rich-text formatting: lost or degraded in JobDiva's plain-text fields
  • Note action types: may not map 1:1 to JobDiva activity types
  • User permissions model: does not transfer; must be reconfigured in JobDiva
  • Automation rules: cannot be migrated; must be rebuilt in JobDiva
  • Marketplace integrations: must be replaced with JobDiva-native integrations or rebuilt
Warning

Biggest structural compromise: Bullhorn can model custom objects and richer relational sprawl than JobDiva supports. If your agency depends on that flexibility, decide early what stays in JobDiva, what becomes a note or user-defined field, and what belongs in an external reporting store. (bullhorn.github.io)

Best Practices for Bullhorn-to-JobDiva Migration

  1. Back up everything before you start. Extract a full snapshot via the Bullhorn REST API or request a Data Backup. Store it independently of both systems.
  2. Run test migrations — plural. Minimum two full test runs before the real cutover. Each run reveals issues the previous one missed.
  3. Validate incrementally, not just at the end. Check record counts and sample data after each entity type loads, not just after the entire migration completes.
  4. Automate repetitive validation. Write scripts that compare source and target record counts, check for null fields that shouldn't be null, and verify relationship integrity.
  5. Maintain a persistent mapping table. Map every Bullhorn entity ID to its corresponding JobDiva entity ID. You'll need this for rebuilding relationships, replaying deltas, and post-migration debugging.
  6. Keep transformation logic in code, not spreadsheets. Version it. Make it deterministic.
  7. Separate migration from implementation. These are distinct workstreams with different skillsets and timelines. See Why Data Migration Isn't Implementation.
  8. Plan for the long tail. The first 90% of data moves in the first week. The last 10% — edge cases, broken records, unusual custom objects — takes as long as everything else combined.
  9. Use middleware only after the historical migration is done and stable. iPaaS tools are for ongoing operational sync, not bulk backfill.
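The automated-validation practice (point 4) can start as something as simple as a per-entity count reconciliation, run after each entity type loads. A minimal sketch, assuming counts are gathered from API queries on each side:

```python
def reconcile_counts(source_counts, target_counts):
    """Compare per-entity record counts between Bullhorn (source) and
    JobDiva (target). Returns only the mismatches to investigate."""
    report = {}
    for entity, src in source_counts.items():
        tgt = target_counts.get(entity, 0)
        if src != tgt:
            report[entity] = {"source": src, "target": tgt, "delta": src - tgt}
    return report
```

Count parity is necessary but not sufficient; follow it with sampled field-level comparisons and relationship-integrity checks against the mapping table.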

Why Teams Choose ClonePartner for Bullhorn-to-JobDiva Migrations

Building custom ETL pipelines to navigate Bullhorn's Fair Use Policy, pagination limits, and JobDiva's rigid schema requires significant engineering bandwidth — and it's a one-time investment that becomes shelfware the day the migration is done.

ClonePartner is an engineer-led service that treats data migration as a distinct technical discipline:

  • We handle Bullhorn's API extraction at scale — managing OAuth sessions, data-center routing, paginating through to-many associations beyond the 10-item cap, and preserving DHTML/rich-text fields that CSV exports silently drop.
  • Zero-downtime cutovers with unlimited delta syncs. Recruiters continue working in Bullhorn while we sync data to JobDiva in the background. The final cutover is a delta — not a full re-migration.
  • Custom object mapping expertise. We work with your operations team to decide what gets flattened into user-defined fields, what gets serialized as notes, and what gets archived — before any code runs.
  • Fair Use Policy navigation. We've been through the Bullhorn authorization process and know what's required.

To understand how we run migrations end-to-end, see How We Run Migrations at ClonePartner.

Frequently Asked Questions

Can I export all my data from Bullhorn using CSV files?
Bullhorn's CSV export is designed for small exports and drops all fields using DHTML Editor formatting — including rich-text notes and formatted resumes. The exact record limit depends on your column selection, and heavy columns like Submissions can multiply rows and cause exports to fail. For anything beyond a few thousand flat records, API extraction is the only viable method.
Does Bullhorn allow bulk API export for migration?
Bullhorn's REST API can reach all entities and field types, but the API Fair Use Policy (updated December 2025) explicitly prohibits bulk data transfer to unauthorized solutions without Bullhorn's written permission. You must obtain authorization before running extraction scripts. Rate limits (including 100,000 calls per month by default) and the 10-item to-many cap also constrain extraction speed.
How do Bullhorn and JobDiva data models differ?
Bullhorn uses a highly customizable entity model with up to 10 custom object types per entity and configurable field maps. JobDiva uses a flatter, all-in-one staffing schema with user-defined fields instead of custom objects. Custom object data must be flattened into user-defined fields, serialized into notes, or archived externally during migration.
What is the biggest risk in a Bullhorn to JobDiva migration?
The biggest risks are data loss from Bullhorn's custom objects (which have no JobDiva equivalent), broken relational chains (Company → Contact → Job → Submittal → Placement loaded out of order), and loss of rich-text formatting from DHTML fields. These require careful mapping decisions and dependency ordering before any code runs.
How long does a Bullhorn to JobDiva migration take?
For a small agency with under 50,000 records and minimal custom objects, expect 2–4 weeks including testing. Mid-market agencies with 100K–1M records typically need 4–8 weeks. Enterprise migrations with complex custom objects, attachments, and VMS integrations can take 8–12 weeks with phased cutovers.

More from our Blog

ATS

5 "Gotchas" in ATS Migration: Tackling Custom Fields, Integrations, and Compliance

Don't get derailed by hidden surprises. This guide uncovers the 5 critical "gotchas" that derail most projects, from mapping tricky custom fields and preventing broken integrations to navigating complex data compliance rules. Learn how to tackle these common challenges before they start and ensure your migration is a seamless success, not a costly failure.

Raaj Raaj · 14 min read
General

In-House vs. Outsourced Data Migration: A Realistic Cost & Risk Analysis

Choosing between in-house and outsourced data migration? The sticker price is deceptive. An internal team might seem free, but hidden risks like data loss, project delays, and engineer burnout can create massive opportunity costs. This realistic analysis compares the true ROI, security implications, and hidden factors of both approaches, giving you a clear framework to make the right decision for your project.

Raaj Raaj · 8 min read