
Pipedrive to Salesforce Migration: The CTO's Technical Guide

A CTO-level technical guide to migrating from Pipedrive to Salesforce, covering data model mapping, API rate limits, migration methods, and edge cases.

Raaj Raaj · 23 min read

Migrating from Pipedrive to Salesforce is a schema-translation project, not a CSV upload. Pipedrive organizes everything around a linear deal pipeline — Organizations, Persons, Deals, Activities — with a flat custom-field layer and no custom objects. Salesforce is a deeply relational platform with Accounts, Contacts, Leads, Opportunities, Tasks, Events, Products, and Files connected by explicit lookup and master-detail relationships, custom objects, and a configurable schema that can model almost any business process.

A CSV export from Pipedrive will move rows, but it will silently flatten multi-object associations, orphan activities, and collapse the pipeline stage history that gives your CRM data its meaning. (support.pipedrive.com)

This guide covers the structural gaps between the two systems, the object-mapping decisions you need to make, the API constraints on both sides, the five viable migration methods and their trade-offs, and the edge cases that break most DIY attempts — so your engineering team can choose the right approach before writing a single line of migration code.

For teams evaluating HubSpot as an alternative target, the Pipedrive extraction architecture is nearly identical — see Pipedrive to HubSpot Migration: Data Mapping, APIs & Rate Limits. If you're also migrating support operations alongside your sales pipeline, review the Salesforce Service Cloud Migration Checklist.

The Architectural Shift: Why Scaling Teams Move from Pipedrive to Salesforce

Pipedrive is fast because its model is compressed. A Person belongs to one Organization. A Deal sits in a pipeline stage and carries custom fields keyed by 40-character hashes. A Lead uses the deal custom-field schema instead of a separate lead schema. An Activity can attach to a deal, lead, person, and organization — and it can exist without a date or time. That simplicity works for a focused sales team. It becomes limiting when you need many-to-many relationships, custom objects, richer product data, granular automation, or reporting that separates accounts, contacts, opportunities, and activities cleanly. (developers.pipedrive.com)

The drivers we see most often in CTO-led migration projects:

  • Custom objects and relational modeling. Pipedrive does not support custom objects. You get custom fields across standard entities (deals, people, organizations, products, projects), but you cannot create entirely new data categories. When your business needs to model subscriptions, partner programs, or multi-entity deal structures, Pipedrive forces workarounds. Salesforce lets you define custom objects with lookup and master-detail relationships to any standard or custom object.
  • Enterprise reporting and analytics. Salesforce's report builder, dashboards, and Einstein Analytics support cross-object reporting, historical trending, and forecasting that Pipedrive's reporting layer cannot match at scale.
  • Marketing and revenue operations integration. Salesforce natively connects to Marketing Cloud, Pardot, and a deep ecosystem of AppExchange integrations. Teams consolidating their tech stack around Salesforce often find that Pipedrive becomes a bottleneck.
  • Compliance and audit requirements. Salesforce's field-level security, sharing rules, and audit trail capabilities are significantly more granular than Pipedrive's permission model.

Pre-Migration Planning and Data Audit

Before touching any code, inventory your Pipedrive data. Pipedrive exposes organizations, persons, leads, deals, activities, notes, files, and products as separate export or API surfaces, which is why a source audit is mandatory before you estimate effort. (support.pipedrive.com)

| Data Category | What to Count | Watch For |
| --- | --- | --- |
| Organizations | Total active + archived | Duplicates across regions |
| Persons | Total, including unlinked | Persons without org association; multi-value phone/email arrays |
| Deals | Open + Won + Lost + Archived | Stage history, products attached, deal participants |
| Leads | Inbox leads (unconverted) | Leads inherit deal fields, not a separate schema |
| Activities | Calls, meetings, tasks, deadlines | Mixed types in a single entity |
| Notes | Per-deal and per-contact notes | HTML formatting, inline files |
| Files/Attachments | Files linked to deals, contacts | Size, file types; Google Drive files excluded from global export |
| Custom Fields | Per entity, with field types | 40-character hash API keys |
| Products | Product catalog + deal-product links | Price variations, currency |
Warning

Pipedrive exports are visibility-aware. If the exporting user cannot see a record, it will not appear in the file. Activities, notes, and files export separately from core entity exports. Google Drive files are excluded from the global export entirely. Use a top-level admin account for extraction. (support.pipedrive.com)

Identify and exclude dead data. Migrating 50,000 lost deals from 2018 inflates cost and complexity without adding value. Define your migration scope explicitly: which pipelines, which date ranges, which record statuses. Pipedrive accounts can hold up to 300,000 leads and deals, so large extractions need careful scoping and token budgeting.

Migration Strategy: Big Bang vs. Phased vs. Incremental

  • Big bang: Migrate everything in a single cutover window. Works for datasets under ~50K total records with a team that can validate quickly. Risk: if something breaks, everything breaks.
  • Phased by object: Migrate Accounts/Contacts first, validate, then Opportunities, then Activities. Reduces blast radius. Adds 1–2 weeks of dual-system operation.
  • Incremental with sync: Migrate historical data in a batch, then run a real-time or near-real-time sync for records created during the transition period. Required when sales teams can't pause work for a cutover window.

Incremental plans need event design, not just CSVs. Pipedrive webhooks v2 are now the default, but their v2 coverage excludes some metadata objects such as pipeline, stage, and activityType. Webhook delivery is permission-aware — use a top-level admin user's user_id if you want all events. Salesforce Change Data Capture events are retained for only 72 hours, so replay storage matters if subscribers miss events. (developers.pipedrive.com)
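For the incremental path, you subscribe to Pipedrive events per object and action. A minimal sketch of the subscription body — field names (`subscription_url`, `event_action`, `event_object`, `version`) follow Pipedrive's `POST /v1/webhooks` endpoint as we understand it, and the receiver URL is a placeholder; verify against the current webhook docs before relying on this:

```python
# Sketch: building a Pipedrive v2 webhook subscription payload.
# Field names are assumptions based on Pipedrive's webhook endpoint;
# the subscription URL is a placeholder.

def build_webhook_subscription(subscription_url, event_object,
                               event_action="*", version="2.0"):
    """Return the request body for registering a Pipedrive webhook."""
    return {
        "subscription_url": subscription_url,
        "event_action": event_action,   # create, change, delete, or *
        "event_object": event_object,   # deal, person, organization, ...
        "version": version,             # "2.0" selects webhooks v2
    }

payload = build_webhook_subscription(
    "https://example.com/hooks/pipedrive", "deal"
)
```

POST this body once per object you need events for, using an admin user's token so delivery is not filtered by visibility.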

Pipedrive vs. Salesforce Data Model: Mapping the Structural Gaps

The core mapping decisions define your Salesforce architecture. Get these wrong and you'll be cleaning up orphaned records for months.

Do not map by field labels alone. On Pipedrive deals, custom field values are returned under long hash keys. Pull field metadata from the Fields API first and build a field dictionary before you transform any records. (developers.pipedrive.com)

Object Mapping

| Pipedrive Object | Salesforce Object | Key Decisions |
| --- | --- | --- |
| Organizations | Accounts | 1:1 mapping. Pipedrive address sub-fields → Salesforce Billing/Shipping address fields. |
| Persons | Contacts | 1:1 mapping. Link to Account via AccountId. Multi-value phone/email → Salesforce single-value fields or custom handling. |
| Leads (Inbox) | Leads or Contacts + Opportunities | Unconverted Pipedrive Leads can map to Salesforce Leads (if you use the Lead → Opportunity conversion flow) or directly to Contacts + Opportunities. Many teams keep only true pre-qualification records as Salesforce Leads and map later-stage source leads to Contact + Opportunity. (developers.pipedrive.com) |
| Deals | Opportunities | Map pipeline stages → Opportunity Stage picklist values. Deal value → Amount. Expected close date → CloseDate. |
| Deal Participants | OpportunityContactRole | Pipedrive exposes deal participants separately. Do not collapse them into one contact on the Opportunity — use Salesforce's first-class Opportunity Contact Role model. (developers.pipedrive.com) |
| Activities | Tasks and Events | Pipedrive Activities are a single entity encompassing calls, meetings, deadlines, and emails. Salesforce splits these into Tasks (to-dos, calls, deadlines) and Events (meetings with start/end times). Classify each Activity by type and route it to the correct Salesforce object. |
| Notes | ContentNote | Pipedrive notes are plain text or light HTML. Use Salesforce Enhanced Notes (ContentNote) — the legacy Note object is read-only in Lightning. |
| Products | Products + PricebookEntry + OpportunityLineItem | Pipedrive products link to deals with quantity and price. Salesforce requires a Product → PricebookEntry → OpportunityLineItem chain. |
| Files | ContentVersion + ContentDocumentLink | Files must be downloaded from Pipedrive, re-uploaded as ContentVersion records, and linked to parent records via ContentDocumentLink. |
Warning

The Activity split is the most common source of migration errors. Pipedrive treats a "meeting" and a "call" as the same object with a different type field. Salesforce enforces a structural separation: Events require StartDateTime and EndDateTime, while Tasks use ActivityDate and Status. If your migration script treats them identically, every meeting will fail Salesforce validation.

Field-Level Mapping

Pipedrive custom fields are referenced via 40-character hashes in the API (e.g., dcf558aac1ae4e8c4f849ba5e668430d8df9be12). You must call the /dealFields, /personFields, and /organizationFields endpoints to build a human-readable mapping before you start.

Key transformations:

  • Picklist fields: Pipedrive single-option and multi-option fields → Salesforce Picklist / Multi-Select Picklist. Values must match exactly, or Salesforce will reject the record if the picklist is restricted.
  • Monetary fields: Pipedrive stores currency with a currency property per deal. Salesforce uses CurrencyIsoCode on each record (multi-currency orgs) or a single org default.
  • Date fields: Pipedrive uses ISO 8601. Salesforce expects YYYY-MM-DD for Date and ISO 8601 for DateTime. Timezone handling differs.
  • Phone/Email: Pipedrive Persons support multiple phone numbers and emails as arrays with labels (work, home, etc.). Salesforce Contacts have Phone, MobilePhone, HomePhone, Email as distinct flat fields. You need a mapping rule for which Pipedrive value goes where — and a plan for overflow values (custom fields, or logged and dropped).
  • Person-type custom fields: Pipedrive "Person" type fields (linking to another Person) must become a Lookup field on the Salesforce side, which requires creating the custom field on the target object first.
  • Time Range fields: No native Salesforce equivalent. Store as two DateTime fields or a text field.
| Pipedrive Field | Pipedrive Type | Salesforce Object.Field | Salesforce Type | Transformation |
| --- | --- | --- | --- | --- |
| org_name | String | Account.Name | String | Direct |
| org_address | Address (sub-fields) | Account.BillingStreet, BillingCity, etc. | Address | Split sub-fields |
| person_name | String | Contact.FirstName + Contact.LastName | String | Split on space |
| person_email[] | Array | Contact.Email | Email | First value; overflow to custom field |
| person_phone[] | Array | Contact.Phone, Contact.MobilePhone | Phone | Map by label; overflow to custom field |
| deal_title | String | Opportunity.Name | String | Direct |
| deal_value | Monetary | Opportunity.Amount | Currency | Convert currency if needed |
| deal_expected_close_date | Date | Opportunity.CloseDate | Date | Direct (ISO 8601 → YYYY-MM-DD) |
| deal_pipeline_id + stage_id | Integer | RecordType + Opportunity.StageName | Picklist | Requires a transformation table, not a direct map |
| deal_status | Enum (open/won/lost) | Opportunity.StageName / IsClosed + IsWon | Boolean/Picklist | Map won → Closed Won, lost → Closed Lost |
| activity_type | String | Task or Event (object routing) | — | "meeting" → Event; "call", "task", "deadline" → Task |
| activity_due_date | DateTime | Task.ActivityDate / Event.StartDateTime | Date/DateTime | Split by target object |
| activity_note | Text | Task.Description / Event.Description | Text | Direct |
| note_content | HTML/Text | ContentNote.Content | Rich Text | Strip unsupported HTML tags |
| Custom field (hash) | Varies | Custom field (API name) | Varies | Create corresponding SF field first; resolve hash from field metadata |
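The multi-value phone/email flattening is the mapping teams most often get wrong. A minimal sketch, assuming Pipedrive's labeled-array shape (`{"label", "value", "primary"}`); the label-to-field routing table is an illustrative business rule you must define for your own data, not a fixed standard:

```python
# Sketch: flattening Pipedrive's labeled phone arrays onto Salesforce's
# flat Contact fields. LABEL_TO_FIELD is an illustrative routing rule.

LABEL_TO_FIELD = {"work": "Phone", "mobile": "MobilePhone", "home": "HomePhone"}

def flatten_phones(phones):
    """phones: list of {"label": ..., "value": ..., "primary": ...} dicts."""
    out, overflow = {}, []
    for p in phones:
        field = LABEL_TO_FIELD.get(p.get("label", "").lower())
        if field and field not in out:
            out[field] = p["value"]
        else:
            # Park extras in a custom field, or log and drop them
            overflow.append(p["value"])
    return out, overflow

fields, extra = flatten_phones([
    {"label": "work", "value": "+1 555 0100", "primary": True},
    {"label": "work", "value": "+1 555 0199"},
    {"label": "mobile", "value": "+1 555 0111"},
])
```

Whatever rule you pick, make the overflow path explicit — silent drops are unrecoverable after cutover.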

Handling Relationships and Load Order

Migration order matters. Salesforce enforces referential integrity through lookup fields. If you try to create an Opportunity referencing an Account that doesn't exist yet, the insert fails.

Required load order:

  1. Accounts (from Pipedrive Organizations)
  2. Contacts (from Pipedrive Persons) → link to Account via AccountId
  3. Leads (if keeping unconverted Pipedrive Leads as Salesforce Leads)
  4. Opportunities (from Pipedrive Deals) → link to Account via AccountId
  5. OpportunityContactRoles (from Pipedrive Deal Participants) → link to Contact and Opportunity
  6. Products → PricebookEntries → OpportunityLineItems (if migrating products)
  7. Tasks and Events (from Pipedrive Activities) → link to Contact via WhoId, to Opportunity/Account via WhatId
  8. Notes and Files → link to parent records via ContentDocumentLink

You need to maintain a Pipedrive ID → Salesforce ID cross-reference map throughout the migration. Every record inserted into Salesforce returns a new Id. Store these in a lookup table so downstream records can resolve their parent references.
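The cross-reference map can be as simple as a per-entity dictionary persisted to your staging store. A minimal in-memory sketch (the `org_id` key follows Pipedrive's deal payload; the Salesforce Id shown is a placeholder):

```python
# Sketch: Pipedrive ID -> Salesforce ID cross-reference map.
# In production this lives in a durable staging table, not memory.

xref = {"org": {}, "person": {}, "deal": {}}

def record_insert(entity, pipedrive_id, salesforce_id):
    """Store the Salesforce Id returned by each successful insert."""
    xref[entity][pipedrive_id] = salesforce_id

def resolve_account_id(deal):
    """Resolve a deal's Pipedrive org_id to its Salesforce AccountId."""
    return xref["org"].get(deal["org_id"])

record_insert("org", 101, "001XXXXXXXXXXXXAAA")  # placeholder SF Id
account_id = resolve_account_id({"org_id": 101})
```

Every downstream load step (Opportunities, Tasks, files) resolves its parent references through this map before building its CSV.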

Info

Use External IDs to simplify relationship linking. Create a custom External ID field on each Salesforce object (e.g., Pipedrive_Org_ID__c). Populate it with the Pipedrive record ID during import. This lets you reference parent records by External ID instead of needing to know the Salesforce ID first, enables upsert operations for idempotent reruns, and makes rollback straightforward — query all records where the External ID is populated and delete them. (developer.salesforce.com)

One structural constraint to internalize: the migration cannot create relationships that never existed in the source. Pipedrive stores one organization per person. If richer many-to-many account-contact relationships don't exist in your source data, Salesforce cannot infer them. (developers.pipedrive.com)

Evaluating Migration Approaches: CSVs, APIs, and Middleware

There are five viable methods. The dividing line: CSV tools move rows, API/ETL methods move relationships, and middleware keeps deltas flowing after the historical load.

1. Native CSV Export → Salesforce Data Import Wizard / Data Loader

How it works: Export CSVs from Pipedrive (Settings → Export Data, or per-entity exports). Import into Salesforce using the Data Import Wizard (browser-based) or Data Loader (desktop client).

When to use it: Small datasets under 50,000 records with simple, flat structures and limited relationship preservation needs.

Constraints:

  • The Salesforce Data Import Wizard is capped at 50,000 records per import. It handles Accounts, Contacts, Leads, and custom objects, but it does not support Opportunity or Task objects — you need Data Loader or the API for those. (help.salesforce.com)
  • Data Loader supports up to 5 million records per operation and handles all standard and custom objects.
  • CSV exports flatten relationships. A Pipedrive deal export includes org_name as a string, not a foreign key. You'll have to reconstruct Account → Contact → Opportunity links manually using external ID matching or VLOOKUP-style joins before import.
  • Custom field hashes in Pipedrive's CSV export headers are unreadable. You'll need to rename columns manually using the field definitions API.
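The VLOOKUP-style join can be done in a few lines before import. A minimal sketch with illustrative column names (`Account_External_Id` is a stand-in for whatever lookup column your import mapping uses):

```python
import csv
import io

# Sketch: rebuilding the Account -> Opportunity link from two flat CSVs
# by joining on organization name. Column names are illustrative.

accounts_csv = "Pipedrive_Org_ID__c,Name\n101,Acme Inc\n"
deals_csv = "deal_title,org_name\nBig Deal,Acme Inc\n"

# Index accounts by name so each deal row can resolve its parent
accounts = {r["Name"]: r["Pipedrive_Org_ID__c"]
            for r in csv.DictReader(io.StringIO(accounts_csv))}

deals = []
for row in csv.DictReader(io.StringIO(deals_csv)):
    # Attach the parent's external ID so the loader can upsert the lookup
    row["Account_External_Id"] = accounts.get(row["org_name"], "")
    deals.append(row)
```

Name-based joins break on duplicates — which is exactly why the API methods below carry real foreign keys instead.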

Complexity: Low (for flat data) / Medium (with relationships)

For a deeper analysis of CSV-based migration trade-offs, see Using CSVs for SaaS Data Migrations: Pros and Cons.

2. API-Based Custom Migration Script

How it works: Write a script (Python, Node.js) that extracts data from Pipedrive's REST API, transforms it in memory or staging storage, and loads it into Salesforce via REST API or Bulk API 2.0.

When to use it: Mid-to-large datasets where you need full control over relationship preservation, field transformations, and error handling.

Constraints:

  • Pipedrive now uses token-based rate limiting. Each company account gets a daily budget calculated as 30,000 base tokens × plan multiplier × number of seats. Complex endpoints consume more tokens per request. Burst limits apply on a rolling 2-second window per user token. (developers.pipedrive.com)
  • Salesforce Enterprise Edition orgs start at 100,000 REST API requests per 24-hour rolling window, plus 1,000 per user license. For large loads, use Salesforce Bulk API 2.0. (developer.salesforce.com)
  • You must handle pagination on the Pipedrive side (cursor-based in API v2, offset-based in v1) and job lifecycle management on the Salesforce side.

Complexity: High

Tip

Use Pipedrive API v2 for extraction. Pipedrive deprecated selected v1 endpoints for deals, persons, organizations, activities, products, pipelines, stages, and search effective January 1, 2026. The v2 replacements are more performant and consume fewer tokens per request. For any entity with a stable v2 endpoint, prefer it over v1. (developers.pipedrive.com)

If your plan is to have an LLM write this pipeline in a weekend, read Why DIY AI Scripts Fail and How to Engineer Accountability.

3. Third-Party Migration Tools (Trujay, Import2, Skyvia)

How it works: Cloud-based tools that provide a visual mapping interface. Connect both Pipedrive and Salesforce, map fields, and execute the migration through the tool's pipeline.

When to use it: Teams without engineering bandwidth who need a one-time migration with standard object mapping.

Constraints:

  • Most tools handle standard objects well but struggle with multi-level relationship chains (Account → Contact → Opportunity → Activity).
  • Custom object support varies. Some tools only support Salesforce standard objects.
  • You're subject to the tool's own rate limit handling, which may not be optimized for Pipedrive's token-based system.
  • Data transformation capabilities are limited compared to a custom script.
  • Black-box transforms and thin audit trails can create hidden cleanup work.

Complexity: Low–Medium

4. Custom ETL Pipeline

How it works: Build an extract-transform-load pipeline using tools like Apache Airflow, dbt, or a custom orchestrator. Extract to a staging database, apply transformations with SQL or Python, then bulk-load into Salesforce.

When to use it: Large enterprise migrations (100K+ records) where data cleaning, deduplication, and complex transformation logic are required before loading. Also suited for compliance-heavy environments and coexistence periods where both CRMs remain active.

Constraints:

  • Highest upfront engineering investment. Requires pipeline infrastructure, monitoring, and error handling.
  • Best suited for teams with existing data engineering capabilities.
  • Offers the most control over data quality, validation, and idempotency.

Complexity: High

5. Middleware / iPaaS (Zapier, Make, Skyvia)

How it works: Trigger-action tools like Zapier and Make react to record events and create or update corresponding records. Broader iPaaS tools like Skyvia can also handle scheduled synchronization and replication across larger object sets.

When to use it: Ongoing sync between systems during a transition period, or for small-volume incremental migration. Not as the primary mechanism for historical backfill.

Constraints:

  • Not designed for bulk historical migration. Moving 100K records through Zapier is impractical and expensive (task-based pricing).
  • Relationship preservation is manual — you must build multi-step Zaps that look up parent records before creating child records.
  • Error handling is limited. A single failed step can break the chain.
  • Rate limits on both sides still apply, and middleware platforms add their own concurrency limits.
  • Trigger-action tools are poor at reconstructing deep historical relationships; loops, ordering problems, and partial failures are common. (zapier.com)

Complexity: Low (for sync) / Impractical (for bulk migration)

Migration Approach Comparison

| Method | Scalability | Relationship Preservation | Complexity | Best For |
| --- | --- | --- | --- | --- |
| CSV + Data Loader | Up to ~5M records | Manual reconstruction | Low–Med | Simple one-time moves, flat data |
| Custom API Script | 100K+ records | Full control | High | Mid-to-large with complex relationships |
| Third-Party Tools | Varies (typically <100K) | Partial (standard objects) | Low–Med | One-time, standard-object migrations |
| Custom ETL Pipeline | Enterprise scale | Full control | High | Large datasets needing heavy transforms |
| Middleware (Zapier/Make) | <5K records practical | Limited | Low–Med | Ongoing sync, small incremental loads |

Scenario Recommendations

  • Small business (<10K records, simple pipeline): CSV export + Data Loader. Fast, free, sufficient if you don't have complex relationship chains.
  • Small business, but activities/files/history matter: Use a managed migration service or direct API approach. CSV alone will lose context.
  • Mid-market (10K–100K records, multiple pipelines): Custom API script or a third-party tool with manual relationship mapping validation.
  • Enterprise (100K+ records, custom fields, products, multi-level relationships): Custom ETL pipeline or a managed migration service. The engineering investment for DIY at this scale is significant.
  • Ongoing sync during transition: Middleware for real-time sync of new records; batch migration for historical data. Do not ask CSV workflows or trigger-action tools to be your historical migration engine.

API Constraints and Migration Architecture

Pipedrive API: Token-Based Rate Limits

Pipedrive completed the rollout of token-based rate limiting to all existing customers by May 31, 2025. (developers.pipedrive.com)

Key details:

  • Daily budget: 30,000 base tokens × plan multiplier × seats (plus purchased top-ups).
  • Token cost varies by endpoint. Lightweight GETs (single entity fetch) are cheap. Complex searches and list endpoints with filters cost more.
  • Burst limits: Rolling 2-second window per individual token. Even with daily budget remaining, hammering the API in a tight loop triggers 429 errors.
  • Escalation: Repeated 429 violations can escalate to a 403 from Cloudflare, returning an HTML error page instead of JSON. Your script must detect this and back off aggressively.
  • API v2 endpoints consume fewer tokens than v1 equivalents. Selected v1 endpoints were deprecated effective January 1, 2026. New migration code should use v2. (developers.pipedrive.com)
  • Pagination: v2 uses cursor-based pagination (next_cursor); v1 uses offset-based (start, limit with max 500 per page). Some entities like Leads and Files are still v1-only with smaller page caps (100 records for Files).
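The daily budget formula is worth encoding in your extraction planner so you can estimate whether a full extract fits in one day. A trivial sketch of the published formula (the multiplier value used in the example is illustrative; check your plan's actual multiplier):

```python
# Sketch of Pipedrive's daily token budget:
# 30,000 base tokens x plan multiplier x seats.

BASE_TOKENS = 30_000

def daily_token_budget(plan_multiplier, seats):
    """Estimate the account's daily API token budget."""
    return BASE_TOKENS * plan_multiplier * seats

# e.g. a 10-seat account on a plan with an assumed multiplier of 2
budget = daily_token_budget(2, 10)
```

Divide the budget by your per-request token cost to get a hard ceiling on daily extraction volume, then scope the migration window accordingly.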

Salesforce API: Bulk API 2.0 for Load

For any migration involving more than a few thousand records, use Salesforce Bulk API 2.0:

  • Daily API limit (REST/SOAP): 100,000 requests per 24-hour rolling window for Enterprise Edition, plus 1,000 per user license. The limit is soft, but sustained overages can trigger REQUEST_LIMIT_EXCEEDED. (developer.salesforce.com)
  • Bulk API 2.0: Designed for large-volume operations. Salesforce auto-batches internally in chunks of 10,000 records. Upload data as CSV (up to 150 MB per job). No manual batch management — submit data, close the job, and Salesforce handles batching and retries.
  • Concurrency: Up to 25 concurrent bulk jobs.
  • sObject Tree API: For small graph-style inserts (e.g., Account + Contact + Opportunity in one call), the Composite Tree API can insert a parent and its child records in a single request (up to 200 records per call, max 5 levels). Good for testing, not for bulk loads. (developer.salesforce.com)
  • Record locking: Shared parent lookups, workflow/automation, and group membership operations can cause lock exceptions during bulk loads. For child objects that reference the same parent accounts or opportunities, serial mode often beats noisy parallel retries. (developer.salesforce.com)
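One practical way to reduce parent-lock contention is to group child rows by their parent lookup before building load files, so rows that touch the same Account travel together instead of colliding across parallel batches. A minimal sketch (the `AccountId` field name follows the mapping above; the grouping strategy itself is a common mitigation, not a Salesforce requirement):

```python
from collections import defaultdict

# Sketch: grouping child records by parent lookup before loading,
# so same-parent rows are loaded together rather than contending
# for the parent's row lock across parallel batches.

def group_by_parent(records, parent_field="AccountId"):
    groups = defaultdict(list)
    for rec in records:
        groups[rec[parent_field]].append(rec)
    return groups

rows = [{"AccountId": "A1"}, {"AccountId": "A2"}, {"AccountId": "A1"}]
grouped = group_by_parent(rows)
```

If contention persists even after grouping, fall back to serial mode for that object.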

ETL Data Flow

Pipedrive REST API (v2 where available)
  │
  ├── GET /api/v2/organizations (cursor pagination)
  ├── GET /api/v2/persons
  ├── GET /api/v2/deals
  ├── GET /api/v2/activities
  ├── GET /api/v1/notes (no v2 yet)
  ├── GET /api/v1/files (v1, 100-record page cap)
  ├── GET /api/v1/leads (v1-style pagination)
  │
  ▼
Staging Layer (DB or flat files)
  │
  ├── Resolve custom field hashes → human-readable names
  ├── Deduplicate records
  ├── Transform fields (split names, map picklists, classify activities)
  ├── Apply lead routing rules (Lead vs Contact + Opportunity)
  ├── Build cross-reference table (Pipedrive ID → SF External ID)
  │
  ▼
Salesforce Bulk API 2.0
  │
  ├── 1. Upsert Accounts (External ID: Pipedrive_Org_ID__c)
  ├── 2. Upsert Contacts (External ID: Pipedrive_Person_ID__c)
  ├── 3. Upsert Leads (if applicable)
  ├── 4. Upsert Opportunities (External ID: Pipedrive_Deal_ID__c)
  ├── 5. Insert OpportunityContactRoles
  ├── 6. Insert Products / PricebookEntries / OpportunityLineItems
  ├── 7. Insert Tasks / Events (WhoId, WhatId via cross-reference)
  ├── 8. Insert Notes / Files (linked via ContentDocumentLink)
  │
  ▼
Validation Layer
  ├── Record count comparison
  ├── Field-level sampling
  └── Relationship integrity checks

Step-by-Step Migration Process

Step 1: Extract from Pipedrive

Use Pipedrive API v2 endpoints for bulk extraction. Paginate through all records using cursor-based pagination:

import requests
import time
 
PIPEDRIVE_API_TOKEN = "your_api_token"
BASE_URL = "https://api.pipedrive.com/api/v2"
 
def extract_all(entity, params=None):
    """Extract all records for an entity using cursor pagination."""
    records = []
    cursor = None
    while True:
        url = f"{BASE_URL}/{entity}"
        query = {"api_token": PIPEDRIVE_API_TOKEN, "limit": 500}
        if cursor:
            query["cursor"] = cursor
        if params:
            query.update(params)
        
        resp = requests.get(url, params=query)
        
        if resp.status_code == 429:
            retry_after = int(resp.headers.get("Retry-After", 2))
            time.sleep(retry_after)
            continue
        
        resp.raise_for_status()
        data = resp.json()
        
        records.extend(data.get("data", []))
        
        cursor = data.get("additional_data", {}).get("next_cursor")
        if not cursor:
            break
    
    return records
 
orgs = extract_all("organizations")
persons = extract_all("persons")
deals = extract_all("deals")
activities = extract_all("activities")

For v1-only endpoints (notes, files, leads), use offset-based pagination with start and limit parameters. Note that the default deals endpoint returns non-archived deals only; if historical archived records are in scope, you need to request them explicitly.
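The v1 loop differs enough from the cursor version that it deserves its own helper. A minimal sketch with the page fetcher injected so the loop is testable; in production `fetch_page` wraps `requests.get` against `/api/v1/<entity>` with your API token. The pagination keys (`additional_data.pagination.more_items_in_collection`, `next_start`) follow the v1 response shape as we understand it:

```python
# Sketch: offset pagination for v1-only endpoints (notes, files, leads).
# fetch_page(start=..., limit=...) returns one decoded v1 response page.

def extract_all_v1(fetch_page, limit=100):
    """Walk a v1 list endpoint using start/limit offset pagination."""
    records, start = [], 0
    while True:
        page = fetch_page(start=start, limit=limit)
        records.extend(page.get("data") or [])
        pagination = page.get("additional_data", {}).get("pagination", {})
        if not pagination.get("more_items_in_collection"):
            break
        start = pagination.get("next_start", start + limit)
    return records

def _fake_fetch(start, limit):
    """Stand-in fetcher serving two pages, for demonstration only."""
    pages = {
        0: {"data": [1, 2], "additional_data": {"pagination": {
            "more_items_in_collection": True, "next_start": 2}}},
        2: {"data": [3], "additional_data": {"pagination": {
            "more_items_in_collection": False}}},
    }
    return pages[start]

all_records = extract_all_v1(_fake_fetch, limit=2)
```

Keep the Files loop at a 100-record page size; larger limits are rejected on that endpoint.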

Step 2: Resolve Custom Field Hashes

def get_field_map(entity_type):
    """Map 40-char hash keys to human-readable names."""
    url = f"https://api.pipedrive.com/api/v1/{entity_type}Fields"
    resp = requests.get(url, params={"api_token": PIPEDRIVE_API_TOKEN})
    fields = resp.json().get("data", [])
    return {f["key"]: f["name"] for f in fields}
 
deal_fields = get_field_map("deal")
person_fields = get_field_map("person")
org_fields = get_field_map("organization")
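With the field maps in hand, rename hash keys on each raw record before it enters staging, so downstream transforms and staging tables carry readable column names. A minimal sketch (the hash and the "Tier" field name are illustrative):

```python
# Sketch: renaming 40-char hash keys on a raw record using a field map
# like the ones built by get_field_map above. Unknown keys pass through.

def humanize_record(record, field_map):
    return {field_map.get(key, key): value for key, value in record.items()}

raw_deal = {
    "title": "Big Deal",
    "dcf558aac1ae4e8c4f849ba5e668430d8df9be12": "Gold",  # illustrative hash
}
clean = humanize_record(
    raw_deal, {"dcf558aac1ae4e8c4f849ba5e668430d8df9be12": "Tier"}
)
```

Do the rename once at ingestion; carrying hashes into the transform layer guarantees mapping mistakes later.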

Step 3: Transform Data

Apply mapping rules, classify activities, resolve lead routing, and build CSVs for Salesforce Bulk API:

def transform_activity(activity):
    """Route a Pipedrive activity to a Salesforce Task or Event."""
    activity_type = activity.get("type", "").lower()
    due_date = activity.get("due_date")
    
    # Events require StartDateTime and EndDateTime, so only scheduled
    # meetings can become Events. Unscheduled Pipedrive activities
    # (no due_date) must fall through to Task.
    if activity_type in ["meeting", "lunch"] and due_date:
        start_time = activity.get("due_time") or "09:00"
        end_time = activity.get("end_time") or "10:00"
        return {
            "sf_object": "Event",
            "Subject": activity.get("subject", ""),
            # Salesforce DateTimes need seconds; UTC ("Z") is assumed here
            "StartDateTime": f"{due_date}T{start_time}:00Z",
            "EndDateTime": f"{due_date}T{end_time}:00Z",
            "Description": activity.get("note", ""),
            "Pipedrive_Activity_ID__c": activity.get("id")
        }
    return {
        "sf_object": "Task",
        "Subject": activity.get("subject", ""),
        "ActivityDate": due_date,
        "Status": "Completed" if activity.get("done") else "Not Started",
        "Description": activity.get("note", ""),
        "Pipedrive_Activity_ID__c": activity.get("id")
    }

The transform layer is also where you catch duplicates, normalize currencies and dates, and decide business rules for lead routing (keep as Salesforce Lead vs. convert to Contact + Opportunity).
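For the pipeline/stage rule specifically, the field mapping calls for a transformation table rather than a direct map. A minimal sketch — the IDs and stage names below are illustrative, and the table must be built from your actual Pipedrive pipelines and Salesforce Stage picklist:

```python
# Sketch: (pipeline_id, stage_id) -> Opportunity StageName lookup.
# Keys and values are illustrative; build this from real metadata.

STAGE_MAP = {
    (1, 10): "Prospecting",
    (1, 11): "Negotiation/Review",
    (2, 20): "Proposal/Price Quote",
}

def map_stage(deal):
    key = (deal["pipeline_id"], deal["stage_id"])
    if key not in STAGE_MAP:
        # Fail loudly on unmapped stages rather than guessing
        raise ValueError(f"Unmapped Pipedrive stage: {key}")
    return STAGE_MAP[key]

stage = map_stage({"pipeline_id": 1, "stage_id": 10})
```

Failing loudly on unmapped stages during a dry run is cheaper than discovering miscategorized opportunities after cutover.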

Step 4: Load into Salesforce via Bulk API 2.0

Submit CSVs to Salesforce Bulk API 2.0 in dependency order:

def bulk_upsert(sf_conn, object_name, external_id_field, csv_path):
    """Upsert records via Bulk API 2.0 (sf_conn: a simple-salesforce client)."""
    bulk = sf_conn.bulk2
    with open(csv_path, "r") as f:
        results = getattr(bulk, object_name).upsert(
            f.read(),
            external_id_field
        )
    
    # Bulk API 2.0 reports per-batch summaries, not per-record flags;
    # pull failed rows separately for the error log.
    processed = sum(r.get("numberRecordsProcessed", 0) for r in results)
    failed = sum(r.get("numberRecordsFailed", 0) for r in results)
    print(f"{object_name}: {processed - failed} succeeded, {failed} failed")
    return results

The important part is not the amount of code. It's checkpointing cursors and job IDs, writing a durable source-to-target ID map, and making every load safe to rerun.

Step 5: Validate

After loading, run validation queries:

-- Count comparison
SELECT COUNT(Id) FROM Account WHERE Pipedrive_Org_ID__c != null
SELECT COUNT(Id) FROM Contact WHERE Pipedrive_Person_ID__c != null
SELECT COUNT(Id) FROM Opportunity WHERE Pipedrive_Deal_ID__c != null
 
-- Orphan detection
SELECT Id, Name FROM Contact
  WHERE AccountId = null AND Pipedrive_Person_ID__c != null
SELECT Id, Name FROM Opportunity
  WHERE AccountId = null AND Pipedrive_Deal_ID__c != null

Edge Cases, Limitations, and Failure Modes

Duplicate Records

Pipedrive allows duplicate Organizations and Persons with the same name. Salesforce's duplicate rules may block or merge these on insert. Options:

  • Deduplicate in the staging layer before loading.
  • Disable Salesforce duplicate rules during migration and clean up after.
  • Use External ID upsert to ensure idempotency on re-runs.

External IDs stop reruns from duplicating records, but they do not solve business duplicates that already exist in Pipedrive.

Multi-Level Relationship Preservation

The hardest part of any CRM migration is preserving the chain: Account → Contact → Opportunity → OpportunityContactRole → Activity → Note. If any link breaks, you get orphaned records invisible in Salesforce's related lists. The cross-reference map and sequential load order are non-negotiable.

Activity Split Failures

Pipedrive Activities are one object for calls, meetings, lunches, and custom activity types — and they can be unscheduled (no date or time). In Salesforce, Events require StartDateTime and EndDateTime. If a Pipedrive meeting has no time, your script must either assign defaults or route it to a Task instead. Keep the classification rule set simple and documented.

Multi-Value Phone and Email Fields

Pipedrive Persons support arrays of emails and phone numbers with labels. Salesforce Contacts have fixed phone/email fields. You must decide:

  • Which value becomes the primary Email / Phone.
  • Where overflow values go (custom fields, or dropped with a log entry).

Lead Ambiguity

Pipedrive leads don't have their own custom-field schema — they inherit deal fields. Because they already point to a person or organization, many teams treat only true pre-qualification records as Salesforce Leads and map later-stage source leads directly to Contact + Opportunity. That's an implementation choice, but it's usually cleaner than running Lead conversion flows post-migration. (developers.pipedrive.com)

Attachment and File Migration

Pipedrive files must be downloaded individually via the Files API, then re-uploaded to Salesforce as ContentVersion records and linked via ContentDocumentLink. This is slow and rate-limit-intensive on both sides. Google Drive files linked in Pipedrive are excluded from export — handle those separately. For large file volumes, plan for this step to take as long as the rest of the migration combined. (support.pipedrive.com)
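The per-file hop looks like this in outline: download from Pipedrive's Files API, insert a ContentVersion (file bytes base64-encoded), query back its ContentDocumentId, then create the ContentDocumentLink. The endpoints below follow the public docs, but the instance URL, API version, tokens, and record IDs are placeholders; treat this as a sketch, not a hardened loader.

```python
import base64

PD_BASE = "https://api.pipedrive.com/v1"
SF_BASE = "https://yourinstance.my.salesforce.com/services/data/v60.0"

def content_version_payload(filename, data: bytes):
    """ContentVersion insert body: VersionData must be base64 in the JSON API."""
    return {
        "Title": filename,
        "PathOnClient": filename,
        "VersionData": base64.b64encode(data).decode("ascii"),
    }

def migrate_file(pd_file_id, filename, linked_record_id, pd_token, sf_headers):
    import requests  # third-party: pip install requests

    raw = requests.get(f"{PD_BASE}/files/{pd_file_id}/download",
                       params={"api_token": pd_token}).content
    cv = requests.post(f"{SF_BASE}/sobjects/ContentVersion",
                       json=content_version_payload(filename, raw),
                       headers=sf_headers).json()
    # Look up the ContentDocumentId the new version belongs to...
    doc_id = requests.get(
        f"{SF_BASE}/query",
        params={"q": f"SELECT ContentDocumentId FROM ContentVersion WHERE Id = '{cv['id']}'"},
        headers=sf_headers).json()["records"][0]["ContentDocumentId"]
    # ...then link the document to the migrated record (ShareType V = viewer access)
    requests.post(f"{SF_BASE}/sobjects/ContentDocumentLink",
                  json={"ContentDocumentId": doc_id,
                        "LinkedEntityId": linked_record_id,
                        "ShareType": "V"},
                  headers=sf_headers)
```

Four API calls per file, on both rate-limited sides, is why this step dominates the timeline at volume; batch it, checkpoint it, and make it resumable.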

Custom Fields Without Salesforce Equivalents

Pipedrive's field types don't all have direct Salesforce equivalents:

  • "Person" type custom fields (linking to another Person) → must become a Lookup field on the Salesforce side.
  • "Time Range" fields → no native equivalent. Store as two DateTime fields or a text field.
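For the two-DateTime-field option, a minimal transform sketch. Pipedrive exposes range-type custom fields as the field's hash key plus a `_until`-suffixed key for the end value; the target field names (`Window_Start__c`, `Window_End__c`) are assumptions.

```python
def map_time_range(record, field_key):
    """Split a Pipedrive time-range custom field into two Salesforce DateTime values."""
    return {
        "Window_Start__c": record.get(field_key),
        "Window_End__c": record.get(f"{field_key}_until"),  # Pipedrive's end-of-range key
    }

row = {"a1b2c3": "2024-05-01T09:00:00Z", "a1b2c3_until": "2024-05-01T11:00:00Z"}
print(map_time_range(row, "a1b2c3"))
```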

Salesforce-Specific Constraints

  • Data Import Wizard does not support Opportunity or Task objects. Use Data Loader, Bulk API, or REST API for those. (help.salesforce.com)
  • Picklist validation: If Salesforce picklist fields are set to "restrict to defined values," any Pipedrive value that doesn't match causes an insert failure. Audit and pre-create all picklist values.
  • Storage limits: Each org has data storage limits by edition. A large migration can exceed storage, causing all inserts to fail. Check Setup → Storage Usage before starting.
  • Required fields and validation rules: Salesforce objects may have required fields or validation rules that Pipedrive data doesn't satisfy. Temporarily deactivate non-critical validation rules during migration.
  • Bulk API jobs are async with no SLA for completion time. Shared parents and automations can create lock contention. For child objects that reference the same parent accounts, use serial mode.
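Serial concurrency is a Bulk API 1.0 job setting (Bulk API 2.0 manages concurrency itself), declared when the job is created. A sketch of the `jobInfo` payload you would POST to `/services/async/v60.0/job`; the API version is a placeholder and the HTTP call itself is omitted:

```python
def serial_job_xml(sobject, operation="insert"):
    """Build a Bulk API 1.0 job-creation payload with serial concurrency."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
        f"<operation>{operation}</operation>"
        f"<object>{sobject}</object>"
        "<concurrencyMode>Serial</concurrencyMode>"  # batches run one at a time
        "<contentType>CSV</contentType>"
        "</jobInfo>"
    )

print(serial_job_xml("Contact"))
```

Serial mode trades throughput for safety: batches that would otherwise race to lock the same parent Account run one after another instead of failing with `UNABLE_TO_LOCK_ROW`.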
Danger

Loading Opportunities before Accounts and Contacts are stable is how teams create orphaned history and spend a week on cleanup. Load parent objects first, validate, then child objects, then relationship tables, then files.

Validation and Testing

Record Count Comparison

Compare source counts (Pipedrive) against target counts (Salesforce) for every object. Any discrepancy must be investigated — common causes include duplicate rule blocks, validation rule failures, and storage limit errors.
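The comparison itself is trivial once both sides are counted; the value is in running it per object and investigating every non-zero gap. The counts below are hard-coded for illustration; in practice the Salesforce side comes from `SELECT COUNT()` queries and the Pipedrive side from each collection endpoint's pagination metadata.

```python
def count_gaps(source_counts, target_counts):
    """Return {object: (source, target)} for every object where counts disagree."""
    return {
        obj: (source_counts[obj], target_counts.get(obj, 0))
        for obj in source_counts
        if source_counts[obj] != target_counts.get(obj, 0)
    }

pipedrive = {"organizations": 5200, "persons": 14800, "deals": 9100}
salesforce = {"organizations": 5200, "persons": 14790, "deals": 9100}
print(count_gaps(pipedrive, salesforce))  # {'persons': (14800, 14790)}
```

Cross-reference each gap against the Data Loader error CSVs or Bulk API failure results: ten missing persons should correspond to exactly ten logged rejects.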

Field-Level Sampling

Pull a random sample of 50–100 records per object and compare field values side-by-side. Pay special attention to:

  • Currency fields (rounding, currency code)
  • Date fields (timezone shifts)
  • Multi-select picklists (delimiter issues)
  • Custom field data (hash → value resolution)

For large validations, use SOQL aggregates or Bulk API query-style exports rather than hand-checking UI screens. Keep Data Loader success/error CSVs or API job result files as part of your audit set.

Relationship Integrity

Run Salesforce reports or SOQL queries to detect:

  • Contacts without parent Accounts
  • Opportunities without parent Accounts
  • Activities (Tasks/Events) without WhoId or WhatId associations
  • OpportunityContactRoles referencing missing Contacts or Opportunities
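The checks above translate directly into SOQL, one query per failure mode; run them after each load phase, not just at the end:

```sql
-- Contacts loaded without their parent Account link
SELECT Id FROM Contact WHERE AccountId = null
-- Opportunities detached from their Account
SELECT Id FROM Opportunity WHERE AccountId = null
-- Activities with neither a person (WhoId) nor a record (WhatId) association
SELECT Id FROM Task WHERE WhoId = null AND WhatId = null
SELECT Id FROM Event WHERE WhoId = null AND WhatId = null
```

Every query should return zero rows; any hit maps back to a broken entry in your cross-reference map.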

UAT Process

Have sales reps verify 10–20 of their own deals in Salesforce. They'll catch data issues — wrong stage, missing notes, unlinked contacts — that automated checks miss.

Rollback Planning

If you used External ID fields, rollback is straightforward: query all records where the External ID is populated and delete them. Avoid hard deletes until business sign-off. Run at least one full dry run in a Salesforce sandbox before touching production. (help.salesforce.com)
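A rollback sketch under that approach: select only records this migration created, identified by a populated External ID field, and delete children before parents. The External ID field names follow this guide's mapping convention and are assumptions; execute the queries with Data Loader or the API.

```python
EXTERNAL_IDS = {
    "Account": "Pipedrive_Org_ID__c",
    "Contact": "Pipedrive_Person_ID__c",
    "Opportunity": "Pipedrive_Deal_ID__c",
}

def rollback_queries():
    """Delete order: children first, so lookups never dangle mid-rollback."""
    order = ["Opportunity", "Contact", "Account"]
    return [f"SELECT Id FROM {obj} WHERE {EXTERNAL_IDS[obj]} != null" for obj in order]

for q in rollback_queries():
    print(q)
```

Pair each delete with an export of the selected IDs first, so the rollback itself is auditable and reversible until the business signs off.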

Post-Migration Tasks

Do not declare success when the row counts match. The real post-migration work is operational:

  • Rebuild automations. Pipedrive automations do not transfer. Recreate equivalent logic using Salesforce Flows or Apex triggers.
  • Pipeline and stage configuration. Salesforce Opportunity stages live in the Stage picklist, mapped to a Sales Process. Configure these before migration, not after.
  • Retire old integrations. Re-point or shut down integrations that still write to Pipedrive so stale data doesn't flow back into the new system.
  • User training. Salesforce's UI and workflow are fundamentally different from Pipedrive's. Budget time for hands-on training. Users especially need to understand Accounts vs. Contacts vs. Opportunities.
  • Monitor for 30 days. Run weekly validation reports for the first month. Watch for broken automations, duplicate records from parallel data entry, sync drift, and activity logging problems.

Best Practices That Reduce Rework

  1. Back up everything before you start. Export all Pipedrive data to CSV as a baseline. Take a Salesforce sandbox snapshot.
  2. Run at least two test migrations in a Salesforce sandbox before touching production. The first test exposes field mapping errors. The second validates your fixes.
  3. Use External IDs on every object. They enable upsert (idempotent re-runs), simplify relationship linking, and make rollback trivial.
  4. Validate incrementally. Don't wait until the end to check data quality. Validate after each object load.
  5. Disable automations during load. Salesforce Flows, triggers, and workflow rules firing on every inserted record will slow the migration and may produce unintended side effects (emails sent to customers, assignment rule changes).
  6. Log everything. Every API call, every failed record, every transformation decision. When something goes wrong at record 47,000, you need to know exactly what happened.
  7. Control record locking. For child objects that share parent lookups, use serial mode in Bulk API jobs to avoid lock contention from parallel processing.

When to Use a Managed Migration Service

DIY migration makes sense when your dataset is small, your relationships are simple, and you have engineering bandwidth to spare. It stops making sense when:

  • You have 100K+ records with multi-level relationships (Account → Contact → Opportunity → Activity → Note) that must be preserved.
  • Your team is already committed to the Salesforce implementation — building Flows, permission sets, reports — and can't also own the data extraction pipeline. These are different skill sets. See Why Data Migration Isn't Implementation.
  • You need zero downtime — sales can't stop entering deals for a weekend cutover window.
  • You have custom field types or data structures that don't map cleanly and require programmatic transformation logic.
  • Historical activities, files, and notes matter — not just the core pipeline objects.

The hidden cost of DIY isn't the initial script — it's the three weeks of debugging orphaned records, re-running failed batches, and manually fixing relationship chains the script didn't handle.

ClonePartner has executed 1,200+ data migrations, including complex CRM-to-CRM moves involving multi-level relationship preservation, custom object mapping, and activity splitting. Our engineering team handles the full pipeline — extraction, transformation, loading, and validation — with programmatic handling of Pipedrive's token-based rate limits and Salesforce Bulk API optimization. The result is a migration measured in days, not sprints, with zero orphaned records and zero downtime.

Frequently Asked Questions

How do Pipedrive Activities map to Salesforce?
Pipedrive Activities are a single object covering calls, meetings, tasks, and deadlines. Salesforce splits these into two separate objects: Tasks (to-dos, calls, deadlines with ActivityDate) and Events (meetings with required StartDateTime and EndDateTime). Your migration must classify each Activity by type and route it to the correct Salesforce object. Unscheduled Pipedrive meetings need defaults or must be routed to Tasks.
What are Pipedrive's API rate limits for data migration?
Pipedrive uses token-based rate limiting with a daily budget of 30,000 base tokens × plan multiplier × number of seats. Each endpoint consumes a different number of tokens based on complexity. Burst limits apply on a rolling 2-second window. API v2 endpoints are more token-efficient than v1. The rollout completed for all accounts by May 31, 2025.
Can I use the Salesforce Data Import Wizard for a Pipedrive migration?
Only partially. The Data Import Wizard handles Accounts, Contacts, Leads, and custom objects up to 50,000 records per import, but it does not support Opportunity or Task objects. For those, use Salesforce Data Loader (up to 5M records) or the Bulk API. It also cannot preserve cross-object relationships automatically — you must reconstruct links manually.
Should Pipedrive Leads become Salesforce Leads?
Not always. Pipedrive leads inherit deal custom fields and often already reference a person or organization. Many teams keep only true pre-qualification records as Salesforce Leads and map later-stage source leads directly to Contact + Opportunity, which is usually cleaner than running Lead conversion flows post-migration.
How long does a Pipedrive to Salesforce migration take?
For small datasets (<10K records) with simple structures, a CSV-based migration can be done in 1–3 days. For mid-market (10K–100K records) with relationships, expect 1–2 weeks including testing. Enterprise migrations (100K+) with custom fields, products, and multi-level relationships typically take 2–4 weeks with a custom approach, or days with a managed service.
