
Workable to Teamtailor Migration: The CTO's Technical Guide

A CTO-level guide to migrating from Workable to Teamtailor. Covers API rate limits, data model mapping, resume extraction, and custom field handling.

Raaj Raaj · 21 min read


Migrating from Workable to Teamtailor is a schema-translation and API-orchestration problem. Workable treats candidates as flat records attached to job pipelines — one profile per job, with stages, evaluations, and resume URLs bundled into a single entity. Teamtailor separates this into distinct Candidate, Job Application, and Job objects linked through a relational JSON:API model, where creating a single candidate with custom data requires multiple sequential API requests.

A CSV export from Workable will move basic profile fields. It will not move resumes, custom field values, pipeline stage history, or evaluation data. If you need full-fidelity candidate records in Teamtailor, you need an API-based migration path — and you need to account for both platforms' rate limits, Teamtailor's multi-request-per-record architecture, and the resume hosting requirement that trips up most first-time scripts.

This guide covers the data model mapping between both platforms, every viable migration approach with real trade-offs, the API constraints on both sides, a step-by-step architecture for scripted migration, and the edge cases that silently break data integrity.

For related ATS migration topics, see our guide on common ATS migration gotchas, GDPR/CCPA compliance during candidate data transfers, and the Workable to Greenhouse migration guide for a comparison of Workable's export constraints against a different target platform.

Workable vs. Teamtailor: The Architecture Gap

Workable is a full-featured ATS with a flat, job-centric data model. Each job has its own pipeline, and candidate activity is attached to that job-specific profile. If the same person applies to two jobs, Workable keeps two separate profiles — interactions remain isolated per job. (help.workable.com)

Teamtailor is a recruitment marketing and ATS platform built on a relational JSON:API model. A candidate is a standalone object, their application to a job is another object, and answers to custom fields are separate entities entirely. Each object carries its own attributes and links to other objects through relationships, and understanding those relationships is critical to understanding how Teamtailor works.

The key architectural consequence: because candidates, their custom field values, and their answers exist as separate but related objects, and only one new object can be created per API request, each candidate import fans out into several requests.

This means importing a single candidate with 3 custom fields and 2 question answers into Teamtailor requires at minimum 6 API calls: 1 for the candidate, 3 for custom-field-values, and 2 for answers. At Teamtailor's rate limit of 50 requests per 10 seconds, that constrains your throughput to roughly 8 full candidate records per 10-second window — before accounting for job applications or file uploads.
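That arithmetic is worth making explicit when planning a cutover window. A quick sketch (the request counts per candidate are illustrative):

```python
# Back-of-envelope load time against Teamtailor's 50 requests / 10 s limit.
REQUESTS_PER_WINDOW = 50
WINDOW_SECONDS = 10

def load_time_hours(candidates: int, requests_per_candidate: int) -> float:
    total_requests = candidates * requests_per_candidate
    windows = total_requests / REQUESTS_PER_WINDOW
    return windows * WINDOW_SECONDS / 3600

# 1 candidate + 3 custom-field-values + 2 answers = 6 requests each
print(load_time_hours(10_000, 6))  # -> ~3.33 hours at the theoretical maximum
```

Real runs land higher once job applications, file uploads, retries, and request spacing are added on top of the theoretical ceiling.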

Warning

Do not plan this as row-to-row mapping. A row in Workable is often an application-context record, while a record in Teamtailor is a person record plus one or more application-context relations. The real translation is Workable job-candidate profile → Teamtailor candidate + job application + notes/custom-field-values/answers. Resumes widen the gap further: Workable's candidate details CSV excludes them, while Teamtailor's API expects relational follow-up requests for everything beyond the base candidate record.

Why Companies Migrate

The most common reasons we see for Workable-to-Teamtailor moves:

  • Employer branding: Teamtailor's career site builder is significantly stronger than Workable's, which matters for companies that treat their careers page as a marketing channel.
  • Pricing structure: Teamtailor's per-seat pricing can be more favorable for companies with large hiring teams but moderate job volumes. Teamtailor uses quote-based pricing, so a cost comparison needs a real model rather than guesswork.
  • Agency/RPO use cases: Staffing firms and RPOs that need white-label career pages often prefer Teamtailor's multi-brand architecture.
  • Simplicity: Teams that found Workable's feature set overbuilt for their hiring workflow migrate to Teamtailor's more streamlined interface.

Migration Approaches: CSV vs. API vs. Managed Service

There are five viable paths. Each has hard trade-offs.

1. Native CSV Export → Teamtailor Standard Import

How it works: Export the Candidate Details report from Workable as CSV. Reformat the data into Teamtailor's Standard Import Template. Submit it to Teamtailor's import team.

Constraints: Resumes and other files are not included in the Candidate Details report export. The practical way to retrieve them is the Workable API: with your developers' help, set up an automation that downloads candidate information, including resumes, from the /candidates/:id endpoint.

On the Teamtailor side, the fields available for import are fixed, so review them before requesting an import; no customization is possible. The main advantage of this method is that it is free and requires no engineering work, but the standard waiting time for standard imports is 2-3 weeks.

When to use it: Small teams (under 500 candidates) that only need basic profile data — name, email, phone, LinkedIn — and can afford to lose resumes, evaluations, and custom field data.

Complexity: Low

2. Teamtailor Custom Import Service

How it works: A custom import is required when the data you need to import falls outside the scope of a standard import. This method allows greater flexibility, including the ability to add extra data fields and control exactly where your existing data is placed within Teamtailor. To ensure a successful import, a mapping document is required.

Constraints: This is a paid service handled by Teamtailor's team. You still need to get data out of Workable yourself (CSV + API for resumes). Turnaround depends on Teamtailor's queue. You lose direct control over transformation logic.

When to use it: Mid-size migrations where you need custom field mapping but don't have engineering capacity to write API scripts.

Complexity: Low–Medium

3. Custom API-Based ETL Script

How it works: An internal developer writes a script that extracts data from Workable, transforms it into a Teamtailor-ready format, and sends it to Teamtailor's API. The source can be the Workable API or an exported document; what matters is having people capable of understanding data and APIs.

You extract data from Workable's REST API (/spi/v3/candidates, /spi/v3/jobs), transform it into Teamtailor's JSON:API format, and POST it to Teamtailor's endpoints (/v1/candidates, /v1/custom-field-values, /v1/answers, /v1/job-applications).

Constraints: Requires writing an essentially one-off script that translates data from Workable into a format Teamtailor accepts. The public API rate limit of 50 requests per 10 seconds means importing a large database can take a long time.

A production ETL pipeline is more than a script. It typically means a staging store, file bucket, mapping tables, idempotent workers, audit rows, retry queues, and a checkpointed load process.

When to use it: Engineering teams that need full control over mapping, want to preserve resumes and custom fields, and can dedicate 2–4 weeks of developer time.

Complexity: High

4. Middleware Platforms (Zapier / Make)

How it works: Configure triggers in Workable (e.g., new candidate) to push data into Teamtailor via pre-built or custom actions.

Constraints: These tools are designed for ongoing sync, not bulk historical migration. There are no native "migrate all existing candidates" triggers. You'd need to manually trigger or loop through records, which is fragile at scale and still subject to both platforms' rate limits. Custom field mapping is limited by the connector's field support. (zapier.com)

When to use it: Ongoing sync between Workable and Teamtailor during a transition period — not for bulk historical migration.

Complexity: Medium (for sync); impractical for one-time migration

5. Managed Migration Service

How it works: A specialized provider extracts from Workable, prepares the mapping, runs the load, and validates the results.

When to use it: When preserving data integrity is a hard requirement and engineering resources cannot be spared. Vet the provider carefully: ask how they handle resumes, comments, evaluations, duplicate candidates across jobs, and custom field mapping. A service that ultimately relies on CSV may still flatten history.

Complexity: Low (for the internal team)

Migration Approach Comparison

| Approach | Resumes | Custom Fields | Pipeline History | Complexity | Cost |
|---|---|---|---|---|---|
| CSV + Standard Import | ❌ | ❌ | ❌ | Low | Free |
| Teamtailor Custom Import | ✅ (manual) | ✅ | Partial | Low–Med | Paid |
| Custom ETL Script | ✅ | ✅ | Partial | High | Dev time |
| Middleware (Zapier/Make) | ❌ | Partial | ❌ | Medium | Subscription |
| Managed Service (e.g., ClonePartner) | ✅ | ✅ | Partial | Low (for you) | Paid |

Scenario-Based Recommendations

  • Small team, <500 candidates, basic data: CSV Standard Import.
  • Mid-size, needs custom fields, no dev resources: Teamtailor Custom Import.
  • Enterprise, full fidelity, dedicated dev team: Custom ETL script.
  • Any size, no engineering bandwidth, tight deadline: Managed migration service.
  • Transition period with both systems active: Zapier/Make for ongoing sync only.
  • Ongoing sync after cutover: Workable webhook subscriptions plus Teamtailor webhooks — use Zapier/Make only if the field set is intentionally small. Note that Teamtailor webhooks must be activated as an add-on.

For a deeper look at why flat files fail complex relational schemas, see our guide on Using CSVs for SaaS Data Migrations.

Pre-Migration Planning

Before touching an API, create a data audit covering: jobs, departments, locations, recruiters, candidate profiles, comments, ratings/scorecards, events, inbound and outbound messages, resumes and extra files, interview templates, custom fields/questions/answers, and every live integration or webhook connected to Workable. Workable's full export schema is useful here because it tells you what historical objects exist, while Teamtailor's import docs tell you what the target will accept directly and what must end up as notes or custom fields.

Then decide scope. Do not move everything by default. Drop dead jobs, duplicate test candidates, obsolete stages, old custom fields, and low-value historical noise. For privacy-sensitive candidate data, decide up front whether Teamtailor should apply permission settings from the import date or from a supplied creation date — Teamtailor's Standard Import uses the import date as the candidate creation date, while Custom Import can apply your original creation date instead. (support.teamtailor.com)

Migration Strategy

  • Big bang: One cutover window. Simplest if you can freeze writes in Workable.
  • Phased: Move archived jobs or business units first, then active processes.
  • Incremental: Initial backfill plus ongoing deltas, fed by Workable webhook subscriptions and Teamtailor webhooks.

If you request Workable's full account export, the best time to do so is when all your jobs are archived, so no new activity lands after the snapshot. This naturally favors a big-bang cutover rather than an in-flight export.

The Rate Limit Problem: Workable and Teamtailor API Constraints

Both platforms enforce rate limits that directly impact migration throughput.

Workable API Limits

Rate limits are divided into 10-second intervals: Account tokens get 10 requests per 10 seconds, while OAuth 2.0 tokens and Partner tokens each get 50 requests per 10 seconds. Exceed the limit and the API returns a 429 error; when that happens, back off and throttle your requests.

For most migrations, you're using an Account token, which means 10 requests per 10 seconds — that's your extraction ceiling. Workable provides X-Rate-Limit-Remaining and X-Rate-Limit-Reset headers to build throttling logic.

Teamtailor API Limits

You get a bucket of 50 requests every 10 seconds; when the bucket is exhausted, the API responds with HTTP status code 429. Your script must either cool down when a 429 appears or space requests so the limit is never reached; adding a ~300ms delay between requests is normally enough.
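A minimal throttled request wrapper, sketched under the assumption that flat 300ms spacing plus a 10-second cooldown on 429 is acceptable for your volume:

```python
import time
import requests

def tt_request(method, url, *, headers, min_interval=0.3, max_retries=5, **kwargs):
    """Call the Teamtailor API with ~300 ms spacing and a cooldown on HTTP 429."""
    for attempt in range(max_retries):
        resp = requests.request(method, url, headers=headers, **kwargs)
        if resp.status_code != 429:
            time.sleep(min_interval)   # stay under 50 requests / 10 s
            return resp
        time.sleep(10)                 # wait out the rate-limit window
    raise RuntimeError(f"gave up after {max_retries} rate-limit retries: {url}")
```

Funneling every call through one wrapper also gives you a single place to add logging and metrics later.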

Teamtailor's pagination also has constraints: list endpoints return a top-level data array with meta and links objects for pagination. Use page[size] (max 30) and the next/prev links, or the page[after]/page[before] parameters, to page through results. Large unpaginated requests may produce 500 errors.
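Paging can be expressed as a generator that follows the JSON:API links.next pointer until it disappears (error handling omitted for brevity):

```python
import time
import requests

def tt_paginate(url, headers):
    """Yield records from a Teamtailor list endpoint by following
    the JSON:API links.next pointer until it is absent."""
    while url:
        body = requests.get(url, headers=headers).json()
        yield from body.get("data", [])
        url = body.get("links", {}).get("next")  # None on the last page
        time.sleep(0.3)                          # respect the rate limit

# usage sketch: tt_paginate(f"{TT_BASE}/candidates?page[size]=30", tt_headers)
```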

The Math That Matters

For a migration of 10,000 candidates, each with 3 custom fields, 1 resume, and 1 job application:

  • Requests per candidate on Teamtailor side: ~6 (1 candidate + 3 custom-field-values + 1 upload + 1 job-application)
  • Total Teamtailor requests: ~60,000
  • At 50 req/10s with 300ms spacing: ~5 hours for the load phase alone
  • Workable extraction at 10 req/10s: ~2.8 hours just to pull candidate detail records

This is why DIY scripts for migrations above a few thousand records need serious scheduling and error recovery logic. For more on why ad-hoc migration scripts fail at scale, see Why DIY AI Scripts Fail.

Data Model & Object Mapping: Workable → Teamtailor

The mapping is not 1:1. Workable's flatter model must be decomposed into Teamtailor's relational structure.

Core Object Mapping

| Workable Object | Teamtailor Object | Notes |
|---|---|---|
| Candidate | Candidate | Core profile: name, email, phone, LinkedIn, resume. |
| Job | Job | Title, description, department, location. Imported as Archived by default with an import tag. |
| Candidate-in-Job (pipeline position) | Job Application | The join record linking Candidate to Job with stage info. |
| Pipeline Stage | Stage | Teamtailor has stage types (Inbox, InProcess, Hired, Rejected). Custom stage names map to these types. |
| Evaluations / Scorecards | Notes (Comments) | Teamtailor has no native scorecard equivalent. Evaluations must be serialized into note text. |
| Custom Fields | Custom Field Values | Each value is a separate relational object. Requires pre-creating the Custom Field definition. |
| Application Questions & Answers | Answers (linked to Questions) | Questions and answers record what the candidate was asked and how they responded. Both exist as individual objects outside the candidate. |
| Tags | Tags | Direct mapping. Normalize whitespace and case. |
| Comments | Notes | Notes (Comments) require a user and a candidate. If exact author mapping isn't possible, store the original author and timestamp inside the note body. |
| Candidate Resume (file) | Upload (linked to Candidate) | Must be staged at a temporary public URL for API-based import. |
| Departments | Departments | Direct mapping. |
| Offer data | No direct equivalent | Offer details need to be stored as notes or custom fields. |
Warning

CRM-style objects don't have first-class equivalents here. If your Workable account stores agency accounts, client references, or company data, these are usually custom fields or adjacent system data — not native recruiting objects. In Teamtailor, keep them as candidate/job custom fields, agency references, tags, or external-system IDs rather than forcing them into objects that don't exist. (support.teamtailor.com)

Field-Level Mapping

| Workable Field | Teamtailor Field | Transform Required |
|---|---|---|
| candidate.name | candidate.first-name + candidate.last-name | Split on first space (see edge cases below) |
| candidate.email | candidate.email | Lowercase, trim. Primary dedupe key. |
| candidate.phone | candidate.phone | Normalize to E.164 where possible |
| candidate.headline | candidate.pitch | Truncate if >500 chars |
| candidate.tags[] | candidate.tags[] | Normalize whitespace and case |
| candidate.linkedin_url | candidate.linkedin-url | Validate URL format |
| candidate.created_at | candidate.created-at | ISO 8601 |
| candidate.resume_url | Upload → candidate.resume | Must be a publicly accessible URL |
| candidates.id | Store as custom field | Immutable, used for reconciliation |
| Job shortcode | Store in mapping table or internal-id | Preserve for relationship rebuilding |
| Custom field (text) | Custom Field Value (field-value) | POST to /v1/custom-field-values |
| Custom field (dropdown) | Custom Field Select | Map option IDs |
| ratings.score_card | notes.body | Serialize structured scorecard into note-friendly format |
| comments.body | notes.body | Prefix with source timestamp and author metadata |
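One transform from the table above, phone normalization toward E.164, is easy to get wrong by hand. A stdlib-only heuristic sketch; a production pipeline should prefer a dedicated library such as phonenumbers:

```python
import re

def to_e164(raw, default_country_code="1"):
    """Heuristic E.164 normalizer: strip punctuation, keep a leading '+',
    translate the '00' international prefix, else assume a default country.
    A real pipeline should use a library such as `phonenumbers` instead."""
    if not raw:
        return None
    digits = re.sub(r"[^\d+]", "", raw.strip())
    if digits.startswith("+"):
        return "+" + re.sub(r"\D", "", digits)
    if digits.startswith("00"):            # international dialing prefix
        return "+" + digits[2:]
    return f"+{default_country_code}{digits}"

print(to_e164("(415) 555-2671"))  # -> +14155552671
```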
Warning

Teamtailor requires first-name and last-name as separate attributes. Workable stores a single name field. Your transform layer must split this reliably — watch for edge cases like compound last names ("María del Carmen García"), hyphenated names ("Jean-Pierre"), single-name candidates, and empty name fields.

Handling Custom Fields in Teamtailor

Teamtailor's data model is deliberately open-ended: the candidate object carries only the bare-basics attributes, and any additional data must be added as either Custom Fields or Answers.

Before importing custom field values, you must:

  1. Create the Custom Field definitions in Teamtailor (via UI or API at /v1/custom-fields)
  2. Create Custom Field Options for any select/dropdown fields (/v1/custom-field-options)
  3. POST Custom Field Values per candidate (/v1/custom-field-values), linking to both the candidate ID and the custom field ID

This is a multi-step dependency chain. If you POST a custom-field-value referencing a custom field that doesn't exist yet, the request fails silently or returns a 422. Do not rename, repurpose, or delete custom fields during the migration window — deleting a field removes its stored values for every candidate, job, and job offer platform-wide. (support.teamtailor.com)
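A small cache keeps the dependency order honest: resolve or create each field definition exactly once before any values reference it. Attribute names below are illustrative; verify them against the current Teamtailor API reference:

```python
import requests

_FIELD_CACHE = {}  # field name -> Teamtailor custom-field ID

def ensure_custom_field(name, field_type, tt_base, headers):
    """Create the custom-field definition once and reuse its ID thereafter,
    so no custom-field-value POST ever references a missing field."""
    if name not in _FIELD_CACHE:
        payload = {"data": {"type": "custom-fields",
                            # attribute names are illustrative -- verify against
                            # the current Teamtailor API documentation
                            "attributes": {"name": name, "field-type": field_type}}}
        resp = requests.post(f"{tt_base}/custom-fields", json=payload, headers=headers)
        resp.raise_for_status()
        _FIELD_CACHE[name] = resp.json()["data"]["id"]
    return _FIELD_CACHE[name]
```

Calling this before every custom-field-value POST makes the dependency chain explicit and avoids the silent-failure/422 case described above.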

For a deeper look at custom field mapping gotchas across ATS platforms, see 5 "Gotchas" in ATS Migration.

Potential Data Loss Scenarios

Not everything translates cleanly. Know what you're accepting before you start:

Data Type Risk Level Reason
Evaluations / Scorecards High No Teamtailor equivalent. Must serialize to text.
Pipeline stage history High Only current stage is easily mapped. Stage change timeline is lost.
Email correspondence High Workable's email thread data is not accessible via standard API.
Offer details Medium No native offer object in Teamtailor's public API for import.
Source tracking Medium Original candidate source may not map to Teamtailor's source taxonomy.
Candidate creation dates Low Preserved via API but lost in standard CSV import.

Handling Resumes and Attachments at Scale

Resumes are the hardest part of this migration. Both platforms make it difficult.

Getting Resumes Out of Workable

Resumes or other files won't be included in the Candidate details report export.

Your options:

  1. API extraction: Hit /spi/v3/candidates/{id} for each candidate to get the resume_url. Download the file. This is the only automated path, but at 10 req/10s, extracting 10,000 resumes takes ~2.8 hours minimum.
  2. Full account data export: Request a full data export when all your jobs are archived. A .zip file containing a CSV for each area, plus resumes broken out into job folders. Each candidate has a separate sub-folder. This requires contacting Workable support and archiving all active jobs first — disruptive if you're still hiring.

Getting Resumes Into Teamtailor

If a resume is needed, you must first upload it to a publicly accessible server and provide that URL as the resume attribute. Teamtailor fetches the file within roughly 30 seconds of the request, after which the temporary copy can be removed.

This means your migration script must:

  1. Download the resume from Workable
  2. Upload it to temporary public storage (S3 pre-signed URL, GCS, or similar)
  3. POST the candidate to Teamtailor with the temporary public URL as the resume attribute
  4. Wait for Teamtailor to fetch the file (up to 30 seconds)
  5. Clean up the temporary file

This adds significant complexity and infrastructure requirements. You need a temporary file hosting layer that generates publicly accessible URLs, and the timing must be orchestrated carefully. For organizations with GDPR or data residency requirements, this temporary public hosting step needs careful consideration.
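The staging dance can be isolated behind a minimal storage interface so the privacy-sensitive part, how long files stay public, lives in one place. The put/delete interface here is our own abstraction (an S3 presigned-URL wrapper would satisfy it), not part of either API:

```python
import time

def with_staged_resume(post_candidate, payload, resume_bytes, key, storage):
    """Steps 2-5 above: stage the file, POST the candidate with the temporary
    URL, give Teamtailor time to fetch it, then clean up. `storage` is any
    object exposing put(key, data) -> public URL and delete(key)."""
    if resume_bytes is None:
        return post_candidate(payload)            # resume is optional
    url = storage.put(key, resume_bytes)          # temporary public hosting
    payload["data"]["attributes"]["resume"] = url
    try:
        result = post_candidate(payload)
        time.sleep(30)                            # Teamtailor fetches within ~30 s
        return result
    finally:
        storage.delete(key)                       # never leave PII public
```

The try/finally guarantees cleanup even when the candidate POST fails, which matters for GDPR and data-residency reviews.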

Info

Non-resume attachments (extra files, offer letters, work samples) are not first-class citizens in Teamtailor's import flow. Plan a fallback — linked external storage, notes with URLs, or curated historical summaries — for any files that aren't standard resumes.

Step-by-Step API Migration Architecture

Here's the extract → transform → load flow for a script-based migration.

Phase 1: Extract from Workable

import requests
import time
 
WORKABLE_BASE = "https://{subdomain}.workable.com/spi/v3"
WORKABLE_TOKEN = "your_account_token"
headers = {"Authorization": f"Bearer {WORKABLE_TOKEN}"}
 
def extract_candidates():
    candidates = []
    url = f"{WORKABLE_BASE}/candidates?limit=100"
    while url:
        resp = requests.get(url, headers=headers)
        if resp.status_code == 429:
            reset = int(resp.headers.get("X-Rate-Limit-Reset", 10))
            time.sleep(reset)
            continue
        data = resp.json()
        candidates.extend(data.get("candidates", []))
        url = data.get("paging", {}).get("next")
        time.sleep(1.1)  # Stay within 10 req / 10s
    return candidates

For each candidate, fetch the full profile (including resume URL):

def extract_candidate_detail(candidate_id):
    url = f"{WORKABLE_BASE}/candidates/{candidate_id}"
    resp = requests.get(url, headers=headers)
    if resp.status_code == 429:
        time.sleep(10)
        return extract_candidate_detail(candidate_id)
    return resp.json().get("candidate", {})

If you need ratings, messages, comments, interview templates, or extra files at full fidelity, use Workable's full account export as a companion data source rather than trying to infer everything from the API alone.

Phase 2: Transform

def transform_candidate(workable_candidate):
    name_parts = workable_candidate.get("name", "").split(" ", 1)
    return {
        "data": {
            "type": "candidates",
            "attributes": {
                "first-name": name_parts[0],
                "last-name": name_parts[1] if len(name_parts) > 1 else "",
                "email": workable_candidate.get("email"),
                "phone": workable_candidate.get("phone"),
                "linkedin-url": workable_candidate.get("linkedin_url"),
                "tags": workable_candidate.get("tags", []),
                "resume": upload_resume_to_temp_storage(
                    workable_candidate.get("resume_url")
                ),
                "merge": True  # Merge if duplicate email exists
            }
        }
    }

Build a canonical person key — usually normalized email plus controlled fallbacks — to deduplicate across Workable's per-job profiles before loading. Split application-context data away from person-context data. Pre-resolve picklists, date formats, phone formats, and legacy stage names.
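A sketch of the canonical person key and the per-job profile grouping it enables (the fallback rule is illustrative; tune it to your data):

```python
import re
from collections import defaultdict

def person_key(candidate):
    """Canonical dedupe key: normalized email, falling back to name+phone."""
    email = (candidate.get("email") or "").strip().lower()
    if email:
        return ("email", email)
    name = re.sub(r"\s+", " ", (candidate.get("name") or "").strip().lower())
    phone = re.sub(r"\D", "", candidate.get("phone") or "")
    return ("name+phone", f"{name}|{phone}")

def group_profiles(workable_profiles):
    """Collapse Workable's per-job profiles into one person with N applications."""
    people = defaultdict(list)
    for profile in workable_profiles:
        people[person_key(profile)].append(profile)
    return people
```

Each group then yields one Teamtailor candidate plus one job application per source profile.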

Phase 3: Load into Teamtailor

All Teamtailor API calls must include an X-Api-Version header to pin the API version. Omitting this header is one of the most common causes of failed API calls in custom migration scripts.

TT_BASE = "https://api.teamtailor.com/v1"
TT_TOKEN = "your_admin_api_key"
tt_headers = {
    "Authorization": f"Token token={TT_TOKEN}",
    "Content-Type": "application/vnd.api+json",
    "X-Api-Version": "20240904"
}
 
def load_candidate(payload):
    resp = requests.post(f"{TT_BASE}/candidates", 
                         json=payload, headers=tt_headers)
    if resp.status_code == 429:
        time.sleep(10)
        return load_candidate(payload)
    if resp.status_code == 201:
        return resp.json()["data"]["id"]
    else:
        log_error("candidate_create", payload, resp.text)
        return None
 
def load_custom_field_value(candidate_id, field_id, value):
    payload = {
        "data": {
            "type": "custom-field-values",
            "attributes": {"value": value},
            "relationships": {
                "custom-field": {
                    "data": {"type": "custom-fields", "id": field_id}
                },
                "owner": {
                    "data": {"type": "candidates", "id": candidate_id}
                }
            }
        }
    }
    resp = requests.post(f"{TT_BASE}/custom-field-values",
                         json=payload, headers=tt_headers)
    time.sleep(0.3)  # 300ms spacing
    return resp.status_code == 201

Orchestration Pattern

The high-level flow iterates jobs first, then candidates per job, creating relational objects in dependency order:

for job in workable_list_jobs():
    for stub in workable_list_candidates(shortcode=job["shortcode"]):
        candidate = workable_get_candidate(stub["id"])
        activities = workable_get_candidate_activities(stub["id"])
 
        person, application, notes, cfvs, answers = transform(
            candidate, activities
        )
 
        tt_candidate_id = load_candidate(person)
        tt_application_id = ensure_job_application(
            tt_candidate_id, application
        )
 
        for cfv in cfvs:
            load_custom_field_value(
                tt_candidate_id, cfv["field_id"], cfv["value"]
            )
        for answer in answers:
            load_answer(tt_candidate_id, tt_application_id, answer)
        for note in notes:
            load_note(tt_candidate_id, service_user_id, note)
 
        write_audit_row(
            source_candidate_id=stub["id"],
            source_job_id=job["id"],
            target_candidate_id=tt_candidate_id,
            target_application_id=tt_application_id,
        )

Replace ensure_job_application with the exact endpoint pattern supported in your current Teamtailor API version. Teamtailor doesn't offer API implementation support. If something is not listed in the documentation, it is most likely not available for implementation.

Error Logging

Every API call must be logged. At minimum, capture:

  • Workable source record ID and job ID
  • Teamtailor target record ID (if created)
  • HTTP status code and error response body
  • Operation name and attempt count
  • Timestamp
import json
from datetime import datetime
 
def log_error(operation, payload, error_response):
    with open("migration_errors.jsonl", "a") as f:
        f.write(json.dumps({
            "timestamp": datetime.utcnow().isoformat(),
            "operation": operation,
            "payload_summary": str(payload)[:500],
            "error": error_response
        }) + "\n")

Edge Cases That Break Migrations

Duplicate Records

Workable can hold multiple job-specific profiles for the same person. You need a deterministic dedupe rule before load, not after.

Teamtailor merges incoming duplicates when the merge attribute is set to true, so set merge: true on candidate POST requests. Without it, candidates who already exist in your Teamtailor account are added as separate profiles.

Keep an exception queue for alias emails, shared inboxes, and agency-submitted candidates where email-based dedup fails.

Multi-Level Relationship Chains

The Candidate → Job Application → Job chain must be created in dependency order:

  1. Create or identify the Job in Teamtailor
  2. Create the Candidate
  3. Create the Job Application linking both

If you reverse the order or skip a step, the relationship data is lost.

Name Splitting

Workable's name field is a single string. Candidates named "María del Carmen García" or "Jean-Pierre" will break naive split(" ", 1) logic. Handle:

  • Compound names with particles (de, del, van, von)
  • Hyphenated names
  • Single-name candidates (some cultures use a single name)
  • Empty name fields

Keep the full original name as a backup custom field if your parsing is imperfect.
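A heuristic splitter covering those cases might look like this (the particle list is illustrative and deliberately incomplete; keep the original string regardless):

```python
PARTICLES = {"de", "del", "della", "van", "von", "da", "dos", "la", "le", "bin"}

def split_name(full_name):
    """Split a single `name` string into (first, last), keeping particles
    attached to the last name. Heuristic only -- store the original string
    in a backup custom field."""
    parts = (full_name or "").split()
    if not parts:
        return ("", "")           # empty name field
    if len(parts) == 1:
        return (parts[0], "")     # single-name candidate
    # walk back while the token before the surname is a particle
    i = len(parts) - 1
    while i > 1 and parts[i - 1].lower() in PARTICLES:
        i -= 1
    return (" ".join(parts[:i]), " ".join(parts[i:]))

print(split_name("María del Carmen García"))  # -> ('María del Carmen', 'García')
```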

Missing Resume URLs

Not every Workable candidate has a resume. Your script must handle resume_url: null gracefully — don't fail the entire candidate import because one optional field is missing.

Custom Field Schema Drift

Do not rename, repurpose, or delete Teamtailor custom fields during the migration window. Changing field definitions mid-run introduces bad data, and deleting a custom field removes its stored values for every candidate, job, and job offer platform-wide.

API Version Header Omission

Missing or incorrect X-Api-Version in request headers is one of the most common Teamtailor API errors. All requests must now include: X-Api-Version: 20240904. Pin this in your code rather than relying on defaults.

Imported Record Defaults

Two defaults catch teams off guard:

  • Imported candidates will, by default, have the Sourced candidate status and an import tag. You cannot set the original application source.
  • When jobs are imported into Teamtailor, they will automatically have Archived as job status and an import tag. The creation date will reflect the date the import was completed. Historical job creation dates are lost.

Workable's Full Export Timing

If you're still actively hiring in Workable during migration, any candidates added after your export will be missed. The full export is a point-in-time snapshot. Plan for this gap with either a delta sync or a defined freeze window.

Validation, Testing, and Post-Migration Checklist

Pre-Go-Live Validation

  1. Record count comparison: Total candidates, jobs, and job applications in Workable vs. Teamtailor. If counts don't match, investigate before proceeding.
  2. Field-level spot check: Pull 50+ random candidate records and verify every mapped field value. Pay special attention to custom fields and phone number formatting.
  3. Resume verification: Confirm resume files are accessible on candidate profiles — not just linked, but actually downloadable.
  4. Relationship integrity: Verify that job applications correctly link candidates to jobs. Check 20+ records across different jobs. Confirm that one person with multiple applications behaves correctly.
  5. Duplicate check: Search for duplicate candidate entries using email address as the primary key.
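The count comparison in step 1 is worth automating so it can run after every batch, not just at the end. A minimal sketch; note that legitimate merges of per-job duplicates will show up as expected drift:

```python
def reconcile_counts(workable_counts, teamtailor_counts):
    """Compare source vs target record counts per object type; any non-zero
    delta needs an explanation (e.g. intentional merges) before go-live."""
    drift = {}
    for obj in set(workable_counts) | set(teamtailor_counts):
        delta = workable_counts.get(obj, 0) - teamtailor_counts.get(obj, 0)
        if delta:
            drift[obj] = delta
    return drift

print(reconcile_counts({"candidates": 10_000, "jobs": 120},
                       {"candidates": 9_985, "jobs": 120}))  # -> {'candidates': 15}
```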

UAT Process

  • Have 3–5 recruiters from your team review their most recent candidates in Teamtailor
  • Verify that notes/comments appear in the correct chronological order
  • Confirm custom field values display correctly in candidate cards
  • Test the candidate search/filter functionality with migrated data
  • Confirm that candidate-wide vs. job-specific data renders correctly in the Teamtailor interface

Rollback Plan

Teamtailor does not have a native bulk-delete API. If migration data is bad, cleanup is manual and painful. Always run a test migration on a sandbox or secondary Teamtailor instance first. If Teamtailor doesn't offer a sandbox, use a separate test account.

Post-Migration Tasks

  • Rebuild pipelines: Teamtailor's stage configuration likely differs from Workable's. Reconfigure your hiring workflow stages.
  • Rebuild automations: Any Workable auto-actions (auto-reject, auto-advance, email templates) need to be recreated in Teamtailor's trigger system.
  • Reconnect integrations: HRIS connections, background check providers, assessment tools, and job board integrations must be reconfigured.
  • User training: Teamtailor's UX is fundamentally different from Workable's. In Teamtailor, some data is candidate-wide and some is job-specific. Budget 1–2 sessions per hiring team.
  • Monitor for 2 weeks: Watch for data inconsistencies, missing records, and broken candidate-job links that surface during daily use.

Best Practices From Production Migrations

  1. Back up everything first. Request a full Workable data export before you begin, regardless of your migration method. Store it permanently.
  2. Run a test migration with production-like volume. Validate the full pipeline with at least 100 records — including duplicates, multi-job candidates, comments, and files — before committing to the full dataset.
  3. Build an ID mapping table. Maintain a persistent mapping of Workable IDs → Teamtailor IDs for every record type. You'll need this for relationship rebuilding and post-migration debugging.
  4. Validate incrementally. Don't wait until the end to check data quality. Validate after every 1,000 records.
  5. Make your script idempotent. Re-running it should not create duplicates. Use Teamtailor's merge: true flag and check for existing records before creating new ones.
  6. Account for both rate limits simultaneously. The Workable extraction limit (10 req/10s) and Teamtailor load limit (50 req/10s) are independent constraints. Design your pipeline so extraction and loading can run concurrently with separate throttling.
  7. Quarantine bad records. Don't force them through. Flag them for manual review.
  8. Keep the source system read-only or delta-synced until final sign-off.
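Practice #6 — independent throttling for the two rate limits — can be implemented with one sliding-window throttle per API. A minimal sketch (the 10-per-10s and 50-per-10s figures come from this guide; confirm them against each vendor's current docs before relying on them):

```python
import time
from collections import deque

class WindowThrottle:
    """Sliding-window throttle: at most `max_requests` per `window` seconds.

    Instantiate one per API, e.g. WindowThrottle(10, 10.0) for Workable
    extraction and WindowThrottle(50, 10.0) for Teamtailor loading, so
    the two limits are enforced independently.
    """

    def __init__(self, max_requests, window, clock=time.monotonic):
        self.max_requests = max_requests
        self.window = window
        self.clock = clock      # injectable for deterministic testing
        self.sent = deque()     # timestamps of requests inside the window

    def wait_time(self):
        """Seconds to wait before the next request is allowed (0.0 if none)."""
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) < self.max_requests:
            return 0.0
        return self.window - (now - self.sent[0])

    def record(self):
        """Call immediately after each request is sent."""
        self.sent.append(self.clock())
```

Call `time.sleep(throttle.wait_time())` before each request and `throttle.record()` after it; because each API gets its own instance, extraction and loading can run concurrently without one limit starving the other.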

When to Use a Managed Migration Service

Build in-house if you have dedicated engineering capacity, a timeline longer than 4 weeks, and fewer than 5,000 candidates with simple data.

Don't build in-house if:

  • You have >10,000 candidates with custom fields and resumes
  • Your engineering team is already committed to other projects
  • You need the migration done in days, not weeks
  • You can't afford to debug rate limit failures and broken relationship chains at 2am
  • GDPR/data residency requirements complicate the temporary file hosting needed for resume migration

The hidden cost of DIY isn't the script — it's the weeks of debugging edge cases, handling API failures, and manually cleaning up duplicate records when something goes wrong on record #7,432. Teamtailor offers documentation but explicitly does not provide API implementation support. You're on your own for troubleshooting.

ClonePartner has handled ATS migrations where the source platform's API constraints made DIY impractical — including Highsnobiety's Greenhouse-to-Teamtailor migration, which was scoped at 3 months and completed in days. We handle the rate-limit orchestration, the temporary resume hosting, the multi-request relational loading for Teamtailor's API, and the validation — so your engineering team stays focused on your core product.

Frequently Asked Questions

Can I export resumes from Workable via CSV?
No. Workable's Candidate Details CSV report does not include resume files. You must use the /candidates/:id API endpoint to download resumes individually, or request a full account data export from Workable support (which requires archiving all active jobs first).
What are the Workable and Teamtailor API rate limits?
Workable account tokens are limited to 10 requests per 10 seconds. Teamtailor allows 50 requests per 10 seconds. Exceeding either limit returns HTTP 429. A 300ms delay between Teamtailor requests is recommended, but each candidate with custom fields requires multiple API calls, so effective per-candidate throughput is much lower than the raw limit suggests.
How long does a Workable to Teamtailor migration take?
It depends on volume and method. A standard CSV import through Teamtailor takes 2–3 weeks. A custom API-based migration for 10,000+ candidates typically takes 2–6 weeks of engineering time in-house, or can be completed in days with a managed service like ClonePartner.
What is the X-Api-Version header in the Teamtailor API?
All Teamtailor API requests require an X-Api-Version header (currently 20240904). Omitting it is one of the most common causes of failed API calls in custom migration scripts. The header pins your integration to a specific API version so that backwards-incompatible changes on Teamtailor's side don't silently break it.
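For reference, a sketch of the headers a Teamtailor request needs. The version value comes from the answer above; the `Token token=` authorization scheme and the JSON:API content type are assumptions to verify against Teamtailor's current API documentation:

```python
def teamtailor_headers(api_key, api_version="20240904"):
    """Build the header set for a Teamtailor JSON:API request.

    The Authorization scheme and Content-Type below are assumptions;
    confirm both against the current Teamtailor API docs.
    """
    return {
        "Authorization": f"Token token={api_key}",
        # Omitting X-Api-Version is a common cause of failed calls.
        "X-Api-Version": api_version,
        "Content-Type": "application/vnd.api+json",
    }
```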
How do I preserve Workable scorecards and evaluations in Teamtailor?
Teamtailor has no native scorecard equivalent. Evaluations and ratings must be serialized into note-friendly text format and imported as Notes (Comments) attached to the candidate record. Verify the exact Notes endpoint behavior in your current Teamtailor API docs before building this.
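The serialization step itself is simple string assembly. A sketch, where the field names (`reviewer`, `score`, `created_at`, `comment`) are illustrative placeholders to be mapped from your actual Workable export schema:

```python
def evaluation_to_note(evaluation):
    """Flatten a Workable evaluation dict into plain text for a Teamtailor note.

    Field names here are illustrative; map them from your export schema.
    A fixed prefix marks the note as migrated data for later auditing.
    """
    lines = ["[Migrated Workable evaluation]"]
    for key in ("reviewer", "score", "created_at"):
        if evaluation.get(key) is not None:
            lines.append(f"{key.replace('_', ' ').title()}: {evaluation[key]}")
    if evaluation.get("comment"):
        lines.append("")
        lines.append(evaluation["comment"])
    return "\n".join(lines)
```

The fixed `[Migrated Workable evaluation]` prefix makes migrated notes searchable and distinguishable from notes recruiters write natively in Teamtailor.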
