Salesforce to HubSpot Migration: Pipeline, Tickets & Attachments
A technical guide to migrating Salesforce Cases, pipeline data, and attachments to HubSpot Service Hub — API limits, field mapping, and file workflows.
Migrating from Salesforce to HubSpot is a data-engineering problem, not a connector setup. The native HubSpot-Salesforce integration was designed for ongoing sync between live systems — not for moving years of Cases, pipeline history, ContentVersion files, and custom object associations into a new CRM.
The first decision matters: are you doing a live sync between two coexisting systems, or a historical migration with a cutover date? Those are different jobs. HubSpot's Salesforce integration checks for updates on a recurring sync cycle and relies on records being triggered to sync. (knowledge.hubspot.com) That is useful during a short coexistence window. It is not the same thing as a deterministic, auditable backfill of years of Opportunities, Cases, EmailMessages, files, and custom-object links.
This guide breaks down the exact API constraints on both sides, the structural mismatches between Salesforce Service Cloud and HubSpot Service Hub, the correct workflow for migrating attachments, and why an API-based approach is the only method that preserves full data integrity at scale.
For a pre-launch checklist tailored to HubSpot Service Hub, see the HubSpot Service Hub Migration Checklist. If you are migrating from Zendesk rather than Salesforce, the constraints differ — see Zendesk to HubSpot Migration: The 2026 Technical Guide.
Why the Native Salesforce-to-HubSpot Integration Fails for Full Migrations
The native HubSpot-Salesforce connector is a sync tool, not a migration engine. It keeps two live systems in alignment. When teams repurpose it as a one-time migration path, three constraints break the plan:
Custom object ceiling. The native integration supports up to 10 custom objects per HubSpot Enterprise account. If your Salesforce org uses more than 10 custom objects — common in mature Service Cloud deployments — the connector cannot sync the rest. Custom object sync also requires Enterprise-tier licensing in HubSpot; Professional plans have no custom object sync at all. (knowledge.hubspot.com)
Associations are incomplete. The native connector can sync standard object relationships (Contact → Account, Opportunity → Account), but it cannot handle complex custom object associations. Salesforce lookup fields on custom objects cannot be used in field mappings, and associations only pass between objects that are already syncing. (knowledge.hubspot.com) If your Salesforce data model links Cases to custom objects via lookup or master-detail relationships, those links will not carry over.
No retroactive historical import. When you enable ticket sync between Salesforce Cases and HubSpot Tickets, records only sync when they are created or updated in Salesforce going forward. Existing historical Cases do not flow into HubSpot automatically. HubSpot's documentation states you must run a separate import to bring existing Salesforce Cases into HubSpot.
The native connector also defaults to Salesforce as the source of truth, which can overwrite newer marketing data in HubSpot if conflict resolution rules are not configured perfectly.
The native connector is designed for ongoing bidirectional sync between two live systems. It is not a migration tool. Treating it as one will leave historical data behind, drop custom object relationships, and silently skip file attachments.
The Hidden API Limits: Exporting Data from Salesforce at Scale
Before anything reaches HubSpot, you need to get data out of Salesforce cleanly. There are two paths, and the wrong choice costs weeks.
Salesforce Data Export Tool
This is Salesforce's built-in bulk export (Setup → Data Export). It generates CSV files bundled into ZIP archives. The limits are disqualifying for a full migration:
- ZIP files are capped at roughly 512 MB each. Large exports automatically split across multiple archives.
- Exports are flattened. Relationships between records — parent-child links, lookup fields, master-detail chains — are not preserved in the output. You get disconnected CSV rows, not a relational dataset. Stitching them back together requires manual VLOOKUP operations.
- Frequency is restricted. Enterprise, Performance, and Unlimited editions can export once every 7 days. Professional and Developer editions: once every 29 days.
- Files expire in 48 hours. Download links are deleted after 48 hours. Miss the window and you start over.
Salesforce describes the export as a zip archive of CSV files. (trailhead.salesforce.com) You cannot rebuild associations from flattened CSVs without significant re-linking work, and you cannot iterate quickly when exports are limited to weekly runs.
Salesforce Bulk API with PK Chunking
For high-volume extraction — anything above a few hundred thousand records — the Bulk API with PK Chunking is the correct approach. PK Chunking splits queries on large tables into batches based on record IDs (primary keys), preventing query timeouts and distributing load.
Key parameters:
| Setting | Default | Maximum |
|---|---|---|
| Chunk size | 100,000 records | 250,000 records |
| Batches per 24 hours | — | 15,000 (shared across Bulk API 1.0 and 2.0) |
| Records per batch | — | 10,000 (10 MB payload limit) |
| Daily API request limit (Enterprise) | 100,000 + 1,000/user license | Varies by edition |
To enable PK Chunking on Bulk API 1.0, set the Sforce-Enable-PKChunking header on your query job:
```
Sforce-Enable-PKChunking: chunkSize=100000
```

Each chunk is processed as a separate batch. Results must be downloaded per-batch and combined. For objects with 10+ million records — such as EmailMessage, CaseComment, or ContentVersion — PK Chunking is not optional; without it, queries will hit QUERY_TIMEOUT errors.
Salesforce's 2024 platform guidance notes that Bulk API 2.0 automatically performs PK chunking when supported, while the older Bulk API 1.0 requires the explicit header. (developer.salesforce.com)
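The job setup for Bulk API 1.0 can be sketched as a request builder. This is a minimal Python sketch, not a Salesforce SDK call: `bulk_job_request` is an illustrative name, and the `{SESSION_ID}` placeholder stands in for a session token obtained through your auth flow.

```python
def bulk_job_request(object_name: str, chunk_size: int = 100_000) -> dict:
    """Build headers and XML body for a Bulk API 1.0 query job
    with PK chunking enabled via the Sforce-Enable-PKChunking header."""
    if not 1 <= chunk_size <= 250_000:
        raise ValueError("chunkSize must be between 1 and 250,000")
    headers = {
        "X-SFDC-Session": "{SESSION_ID}",  # placeholder — supply a real session token
        "Content-Type": "application/xml",
        # The header that turns on PK chunking for this job:
        "Sforce-Enable-PKChunking": f"chunkSize={chunk_size}",
    }
    body = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<jobInfo xmlns="http://www.force.com/2009/06/asyncapi/dataload">'
        "<operation>query</operation>"
        f"<object>{object_name}</object>"
        "<contentType>CSV</contentType>"
        "</jobInfo>"
    )
    return {"headers": headers, "body": body}
```

POST this to the job endpoint, then submit the SOQL query as a batch; Salesforce splits it into chunks automatically from there.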
Salesforce's daily API limit is a soft limit — the platform may allow temporary overages, but sustained excess triggers a hard block with HTTP 403 REQUEST_LIMIT_EXCEEDED. Plan your extraction windows to stay well under the ceiling, especially if integrations are still running on the same org.
How to Migrate Salesforce Files and Attachments to HubSpot
This is where most migrations silently fail. Salesforce Files and legacy Attachments cannot be migrated via CSV exports or the native connector. They require API extraction on the Salesforce side and a multi-step upload process on the HubSpot side.
Understanding the Salesforce File Model
Salesforce stores files across three interconnected objects:
- ContentVersion — represents a specific version of a file. Contains the binary data (`VersionData`), file metadata (`Title`, `FileExtension`, `PathOnClient`), and ownership info.
- ContentDocument — the parent container for all versions of a file. Created automatically when a ContentVersion is inserted.
- ContentDocumentLink — the junction object that links a ContentDocument to a record (Case, Account, Opportunity, etc.) via `LinkedEntityId`.
Salesforce notes that ContentVersion does not maintain a direct reference to a related Case; the link back to the Case is recovered through ContentDocumentLink. (developer.salesforce.com)
To extract files with their record associations, you need all three objects. A practical extraction pattern builds a manifest first, then downloads binaries separately:
```sql
-- Get file-to-case associations
SELECT Id, LinkedEntityId, ContentDocumentId, ShareType, Visibility
FROM ContentDocumentLink
WHERE LinkedEntityId IN (SELECT Id FROM Case)

-- Get file metadata and binary references
SELECT Id, ContentDocumentId, Title, VersionData, FileExtension, PathOnClient
FROM ContentVersion
WHERE ContentDocumentId IN ('<list_of_ids>')
AND IsLatest = true
```

The `VersionData` field returns the file's binary content via a REST API call. This is the only way to programmatically extract actual file data from Salesforce.
If your Salesforce org still uses the legacy Attachment object (pre-Lightning), those records live in a different schema with a ParentId field instead of ContentDocumentLink. Your migration script must handle both models. If you do not audit both, your attachment counts can look right at the Case level but still be wrong at the file level.
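A small helper can derive the binary-download path for both file models, so the extraction loop treats modern Files and legacy Attachments uniformly. This is a sketch: `binary_path` is an illustrative name, and the API version is an assumption to replace with your org's.

```python
def binary_path(record_id: str, legacy: bool = False,
                api_version: str = "v60.0") -> str:
    """Return the REST path whose GET response is the raw file bytes.
    Modern Salesforce Files expose the binary at ContentVersion/VersionData;
    legacy pre-Lightning records expose it at Attachment/Body."""
    if legacy:
        return f"/services/data/{api_version}/sobjects/Attachment/{record_id}/Body"
    return f"/services/data/{api_version}/sobjects/ContentVersion/{record_id}/VersionData"
```

Issue an authenticated GET against the instance URL plus this path and stream the response body to disk; do not try to pull binaries through SOQL result CSVs.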
Uploading to HubSpot
HubSpot does not have a direct "attachment" object. Files are managed through the Files API and associated with records via Notes (engagements). A common mistake: teams import a file ID into a HubSpot file property and assume they preserved attachments. They did not. A file property is just a property value. Service agents expect the file to appear on the ticket timeline with the right timestamp and parent association.
The correct workflow per file:
- Upload the file to HubSpot's file manager via `POST /files/v3/files` (multipart/form-data). Set the access level explicitly — HubSpot supports `PRIVATE`, `PUBLIC_NOT_INDEXABLE`, and `PUBLIC_INDEXABLE`. Support attachments containing invoices, contracts, or screenshots should not be left public by default. (developers.hubspot.com)
- Create a Note via the Notes API (`POST /crm/v3/objects/notes`) with the `hs_attachment_ids` property set to the returned file ID and `hs_timestamp` set to the original Salesforce date.
- Associate the Note with the target Ticket (or Contact, Company, Deal) using the appropriate `associationTypeId`.
```json
{
  "associations": [
    {
      "to": { "id": "{ticketId}" },
      "types": [
        {
          "associationCategory": "HUBSPOT_DEFINED",
          "associationTypeId": 202
        }
      ]
    }
  ],
  "properties": {
    "hs_note_body": "Migrated attachment from Salesforce Case #12345",
    "hs_timestamp": "2024-06-15T10:30:00Z",
    "hs_attachment_ids": "{fileId}"
  }
}
```

This is a three-API-call process per file: upload, create note, associate. HubSpot's Files API does not support uploading multiple files in a single request. (developers.hubspot.com) At scale — thousands of Cases with multiple attachments each — this requires careful batching and rate-limit management.
Persist a crosswalk mapping sf_contentdocument_id → hubspot_file_id and sf_parent_id → hs_object_id so reruns are idempotent and QA is possible.
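Building the note body as a pure function keeps the migration loop testable and the crosswalk easy to persist. A sketch, assuming the HubSpot-defined note-to-ticket association type ID 202 shown in the JSON above; `note_payload` is an illustrative name.

```python
def note_payload(file_id: str, ticket_id: str,
                 body: str, original_ts: str) -> dict:
    """Body for POST /crm/v3/objects/notes: attaches the uploaded file
    to a ticket and preserves the original Salesforce timestamp."""
    return {
        "associations": [{
            "to": {"id": ticket_id},
            "types": [{
                "associationCategory": "HUBSPOT_DEFINED",
                "associationTypeId": 202,  # note-to-ticket
            }],
        }],
        "properties": {
            "hs_note_body": body,
            "hs_timestamp": original_ts,       # original date, not import date
            "hs_attachment_ids": file_id,      # file ID from the upload step
        },
    }
```

Record each `(sf_contentdocument_id, hubspot_file_id)` pair as you go, so a rerun can skip files that already landed instead of duplicating them.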
Inline image edge case: If historical email HTML references Salesforce-hosted file URLs, moving the binary alone is not enough. The email or note body must be rewritten to point to HubSpot file URLs, or those inline images will still reference Salesforce after cutover.
For a deeper analysis of why CSV-based file migration fails for relational data, see Using CSVs for SaaS Data Migrations: Pros and Cons.
Mapping Salesforce Service Cloud Cases to HubSpot Service Hub Tickets
Salesforce Cases and HubSpot Tickets are structurally similar — both represent support requests with statuses, owners, and associated contacts. But the mapping has real edge cases that affect agent workflows post-migration.
Core Field Mapping
| Salesforce Case Field | HubSpot Ticket Property | Notes |
|---|---|---|
| `Subject` | `subject` (`hs_ticket_name`) | Direct map |
| `Description` | `content` (`hs_ticket_description`) | Direct map |
| `Status` | `hs_pipeline_stage` | Requires pipeline stage mapping |
| `Priority` | `hs_ticket_priority` | Values may differ (e.g., "Medium" vs "MEDIUM") |
| `Origin` | Custom property | No native equivalent — create a custom property |
| `CaseNumber` | Custom property | Preserve for cross-reference during QA |
| `CreatedDate` | `hs_createdate` | Use `hs_timestamp` for historical imports |
| `ClosedDate` | Custom property | No direct "closed date" in HubSpot — use custom property |
| `ContactId` | Association to Contact | Must resolve Salesforce Contact ID → HubSpot Contact ID |
| `AccountId` | Association to Company | Must resolve Salesforce Account ID → HubSpot Company ID |
| `OwnerId` | `hubspot_owner_id` | Requires user mapping; Salesforce Queue owners have no HubSpot equivalent |
A practical rule that keeps the migration reversible: map into HubSpot's working fields, but also store the original Salesforce values in dedicated backup properties. For example, map Case.Status to hs_pipeline_stage but also write the raw value to a sf_case_status text property. This gives you a clean audit trail and makes reprocessing possible without re-extracting from Salesforce.
Pipeline Stage Mapping
Salesforce Cases use a Status picklist (New, Working, Escalated, Closed) tied to a Support Process. HubSpot Tickets use Pipelines with discrete Pipeline Stages, each categorized as Open or Closed.
You must create your HubSpot pipeline stages before migration and build a mapping table that translates every Salesforce Status value to a specific HubSpot stage ID. HubSpot's Tickets API requires the internal ID of the pipeline and stage, not just the label. Mismatches cause import failures — HubSpot rejects tickets with stage values that do not exist in the target pipeline.
If you use Salesforce Record Types to separate business processes (e.g., different Case types with different Status picklists), you need to decide whether Record Types map to separate HubSpot Pipelines or flatten into a single pipeline with a custom property. HubSpot's record-type mapping guidance notes that when new record types are added in Salesforce, the matching HubSpot property must be manually updated. (knowledge.hubspot.com)
Case Owner edge case: The Salesforce Case Owner field may not appear in HubSpot's field mapping settings for the native integration. For API-based migrations, map OwnerId to hubspot_owner_id manually, ensuring the user exists in both systems. If Cases are owned by Queues (not individual users), the OwnerId resolves to a Queue record that has no HubSpot equivalent — assign to a default user or map to a HubSpot team.
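A defensive way to implement the Status translation is to fail loudly on unmapped values during the transform step, rather than letting HubSpot reject tickets mid-import. The stage IDs below are hypothetical; pull the real internal IDs from your portal's pipelines API before running this.

```python
# Hypothetical stage IDs — fetch real ones from GET /crm/v3/pipelines/tickets.
STAGE_MAP = {
    "New": "1",
    "Working": "2",
    "Escalated": "3",
    "Closed": "4",
}

def map_stage(sf_status: str) -> str:
    """Translate a Salesforce Case Status into a HubSpot stage ID.
    Raising here surfaces gaps in the mapping table before import,
    instead of as per-record import failures."""
    try:
        return STAGE_MAP[sf_status]
    except KeyError:
        raise ValueError(f"Unmapped Salesforce Status: {sf_status!r}") from None
```

Run this over a full export of distinct Status values first; every `ValueError` is a mapping-table row you still owe.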
Ticket History and Thread Context
Salesforce stores Case history across multiple objects: CaseComment (internal/public comments), EmailMessage (email threads), CaseHistory (field change audit trail), and FeedItem (Chatter posts).
HubSpot Tickets use a flat timeline of Notes, Emails, Calls, and Tasks — all modeled as engagements associated with the ticket. There is no native equivalent for Salesforce's CaseHistory audit trail.
To preserve thread context, split case history into three buckets:
- CaseComments → HubSpot Notes with `hs_note_body` containing the comment text and `hs_timestamp` preserving the original date.
- EmailMessages → HubSpot Emails (engagement type `EMAIL`) — requires mapping `FromAddress`, `ToAddress`, `Subject`, `TextBody`/`HtmlBody`. HubSpot says existing emails cannot be updated via import, so getting the data right on first write matters. (knowledge.hubspot.com)
- CaseHistory / system events (reassignments, status transitions, SLA changes) → For orgs that need agent visibility into audit data, use HubSpot's Timeline Events API to publish external events to ticket timelines. (developers.hubspot.com) For less critical audit data, append to a structured Note or custom property.
Each of these requires individual API calls to create and associate. A single Case with 20 comments and 5 email threads generates 25+ API requests just for the conversation history. Without hs_timestamp set to the original dates, HubSpot defaults to the import date — destroying your historical reporting.
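The EmailMessage transform can be sketched as a payload builder. This assumes standard HubSpot email engagement properties (`hs_email_subject`, `hs_email_text`, `hs_email_direction`) and Salesforce's EmailMessage field names; verify both against your portal and org before relying on them.

```python
def email_engagement_payload(msg: dict) -> dict:
    """Body for POST /crm/v3/objects/emails from a Salesforce EmailMessage row.
    hs_timestamp carries the original send date so the timeline sorts correctly;
    without it, HubSpot stamps the import date instead."""
    return {
        "properties": {
            "hs_timestamp": msg["MessageDate"],          # original send date
            "hs_email_direction": "INCOMING_EMAIL" if msg["Incoming"] else "EMAIL",
            "hs_email_subject": msg["Subject"],
            "hs_email_text": msg["TextBody"],
        }
    }
```

The ticket association is a separate step, same pattern as the note-to-ticket association shown earlier.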
Navigating HubSpot's Import and API Rate Limits
HubSpot imposes two separate rate-limiting systems that collide during large migrations: import limits and API rate limits.
Bulk Import Limits
HubSpot's bulk import endpoint (used by both the UI importer and the Imports API) enforces:
- 500 imports per rolling 24-hour window per portal — a hard cap. Exceeding it returns a `429` error with `PORTAL_DAILY_IMPORT_EXCEEDED`. (knowledge.hubspot.com)
- 1,048,576 rows per file (matching Excel's row limit)
- 10,000,000 rows per day via the UI; 80,000,000 rows per day via the Imports API (developers.hubspot.com)
- 3 simultaneous imports maximum; only 2 can exceed 10,000 rows
The 500-import limit catches migration teams first. If you are chunking Salesforce data into small batches to handle associations, you can burn through 500 import jobs in a single afternoon. Once hit, you are locked out for a rolling 24-hour window.
The 429 Error Trap: Ignoring a 429 error and aggressively retrying bulk imports compounds the issue. HubSpot enforces a burst cap of ~190 requests per 10 seconds. Retrying without pacing adds more calls against the same quota, leading to prolonged lockouts.
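The fix for the retry trap is to pace, not hammer. A minimal wrapper that honors a `Retry-After` header when present and otherwise backs off exponentially might look like this; `with_backoff` is an illustrative helper, not a HubSpot SDK function.

```python
import random
import time

def with_backoff(call, max_retries: int = 5):
    """Invoke `call` (a zero-arg function returning a response object with
    .status_code and .headers), retrying on HTTP 429. Honors Retry-After
    when the server sends one, else waits 2**attempt seconds."""
    for attempt in range(max_retries):
        resp = call()
        if resp.status_code != 429:
            return resp
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait + random.uniform(0, 0.5))  # jitter de-synchronizes workers
    raise RuntimeError("still rate-limited after retries")
```

The jitter matters when several migration workers share one portal's quota: without it, they all retry at the same instant and re-trigger the burst cap together.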
API Rate Limits
For individual CRM object API calls (create, update, batch), HubSpot enforces separate limits based on subscription tier:
| Tier | Daily Limit | Burst Limit |
|---|---|---|
| Professional | 625,000 requests/day | 190 requests/10 seconds |
| Enterprise | 1,000,000 requests/day | 190 requests/10 seconds |
| API Add-On | +1,000,000 requests/day | 200 requests/10 seconds |
Batch endpoints are your best tool for migration throughput. HubSpot's batch create/update endpoints accept up to 100 records per call — each batch counts as a single API request. This gives you effectively 100x the throughput compared to individual record creation. (developers.hubspot.com)
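Chunking records into 100-input batch bodies is simple but worth getting right; a sketch, with `ticket_batches` as an illustrative name.

```python
def ticket_batches(records: list, size: int = 100):
    """Yield request bodies for POST /crm/v3/objects/tickets/batch/create.
    HubSpot's batch endpoints cap inputs at 100 per call, and each call
    counts as a single request against the daily quota."""
    for i in range(0, len(records), size):
        yield {"inputs": records[i:i + size]}
```

At 100 records per request, a million tickets fits in 10,000 calls — comfortably inside even a Professional-tier daily limit, which individual record creation would blow through.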
Two other bottlenecks to plan for:
- Association batch reads are limited to 1,000 inputs per request body. (developers.hubspot.com)
- CRM Search API: 4 requests per second across all object types. If your migration script uses search calls for deduplication (matching Salesforce IDs to existing HubSpot records), this becomes your primary chokepoint.
Skip the bulk import endpoint for relational data. Use HubSpot's individual CRM object APIs (/crm/v3/objects/tickets, /crm/v3/objects/contacts) with batch endpoints. This bypasses the 500-import-per-day ceiling and gives you full control over associations, timestamps, and error handling per record.
Migration Methods Compared: CSV vs. Native Sync vs. API
Four approaches exist for moving data from Salesforce to HubSpot. Each has a real use case — and a failure mode.
CSV Export + HubSpot Import Wizard
Best for: small datasets (<10,000 records) with simple structures and no file attachments.
Export objects from Salesforce via Data Export or Data Loader as CSVs. Import into HubSpot using the UI importer or Imports API.
Where it breaks: Relationships are flattened — you must manually re-link records. Multi-file imports cannot use arbitrary join keys; HubSpot requires hs_object_id, a supported secondary identifier like email or domain, or a custom unique-value property. (developers.hubspot.com) File attachments cannot be exported via CSV. No timestamp preservation for historical comments. Picklist mismatches cause silent import failures.
Native HubSpot-Salesforce Connector
Best for: ongoing bidirectional sync during a transition period.
Install the HubSpot Salesforce integration from the App Marketplace. Configure object sync, field mappings, and pipeline mappings.
Where it breaks: Limited to 10 custom objects (Enterprise only). Does not sync custom object associations. Historical records require a separate import. File attachments are not migrated. Case Owner field mapping may not be available in the UI. No control over rate limiting or error handling.
Verdict: Useful as a transition bridge — keeping both systems in sync while you validate data in HubSpot. Not sufficient as the migration mechanism itself.
No-Code Migration Apps
Tools like Import2 and MigrateMyCRM market one-click migration with auto-mapping of contacts, companies, deals, tickets, notes, and attachments. (ecosystem.hubspot.com)
This can work for lower-complexity CRM moves. It does not remove the need to validate custom associations, record-type logic, ticket timeline fidelity, and file-to-record lineage in your specific data model.
API-Based Migration
Best for: preserving relationships, ticket history, and attachments at scale.
Custom scripts extract data from Salesforce (Bulk API + REST API for files), transform and map it, then push it into HubSpot via CRM APIs with explicit associations.
Why it works: Full control over field mapping, associations, and timestamps. Binary file migration (ContentVersion → HubSpot Files → Notes). Batch API usage for throughput. Deduplication logic and ID crosswalks built in. Per-record error handling and retry logic. Can preserve original CreatedDate values.
Trade-off: Requires engineering effort or a migration partner. Must manage rate limits on both Salesforce and HubSpot sides. Testing and QA cycles are non-trivial.
Verdict: The only method that can migrate Cases, file attachments, email threads, and custom object associations with full fidelity. Required for any migration involving more than basic contact/company data.
Edge Cases That Break Salesforce-to-HubSpot Migrations
These are the issues that do not show up in documentation but surface during QA.
Picklist value mismatches. Salesforce allows spaces, special characters, and mixed case in picklist values. HubSpot internal values are typically lowercase with no spaces. A Salesforce Status of "Waiting on Customer" may need to map to waiting_on_customer in HubSpot. Mismatches cause silent drops during import.
Multi-select picklists use different formats. Salesforce uses semicolons (;) to separate multi-select values. HubSpot uses semicolons too, but the internal value format must match exactly. Salesforce stores "Value A;Value B" while HubSpot expects "value_a;value_b".
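A normalization helper covering both the single-value and multi-select cases might look like the sketch below. Blanket lowercasing is an assumption: it only works if your HubSpot internal values were defined that way, so confirm each target property's internal values before trusting it.

```python
def normalize_picklist(value: str) -> str:
    """Map Salesforce picklist labels toward HubSpot-style internal values:
    'Waiting on Customer' -> 'waiting_on_customer'
    'Value A;Value B'     -> 'value_a;value_b'  (multi-select separator kept)"""
    return ";".join(
        part.strip().lower().replace(" ", "_")
        for part in value.split(";")
    )
```

Pair this with a per-property allowlist pulled from the HubSpot properties API, so any normalized value that is not a valid internal value fails at transform time instead of dropping silently at import.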
Record Type polymorphism. Salesforce Cases can have multiple Record Types, each with different field layouts and picklist values. HubSpot Tickets do not have Record Types. You must decide whether to map Record Types to separate Pipelines or flatten them into a single pipeline with a custom property.
Email-to-Case attachment bloat. When Salesforce's Email-to-Case captures inbound emails, signature images and logos are stored as separate ContentVersion records. A single email thread can generate dozens of small image files. Migrating all of them is technically correct but floods HubSpot's file manager. Filter by file size or type before migration.
Salesforce Person Accounts. If your org uses Person Accounts (a hybrid of Contact and Account), HubSpot has no equivalent. Person Accounts must be split into separate Contact and Company records during migration, with associations preserved.
Sharing-model file visibility. Salesforce queries on ContentDocument and ContentVersion without specifying an ID do not necessarily return all files a user can access, depending on how sharing is set up. If your extraction runs under an integration user with restricted sharing, you may silently miss files. (developer.salesforce.com)
What a Zero-Downtime Cutover Looks Like
If you want the move to feel boring on go-live day, the pattern is:
- Model the target first. Create pipelines, stages, custom properties, unique source-ID properties, and owner crosswalks in HubSpot before loading any data.
- Backfill history in layers. Load base objects first (Contacts, Companies), then Tickets with associations, then activities (Notes, Emails), then file attachments. Dependency order matters — you cannot associate a Note with a Ticket that does not exist yet.
- Run a delta pass. Re-extract records changed after the historical snapshot and update only those rows. This keeps both systems current during the validation period.
- QA against business questions, not just counts. Reopen ten old cases. Check whether attachments load, timestamps sort correctly, owners look right, and company/contact associations match what agents expect. A matching record count is not proof of a good migration — if a reopened ticket is missing the screenshot or the original email thread, agents still lost the context they need.
- Cut over with a small freeze window. Freeze admin changes and routing logic, not the whole support team. Validate live writes immediately after switchover. Agents should log into HubSpot with every pipeline stage, ticket thread, and attachment exactly where it belongs.
For more on how we approach zero-downtime migrations, see Zero Downtime Guaranteed. If your source is heavily customized Service Cloud, pair this with the Salesforce Service Cloud Migration Checklist.
Making the Right Call
If your Salesforce org has fewer than 5,000 Cases, no file attachments, and only standard objects — a CSV import may be sufficient. For everything else — historical ticket threads, file attachments, custom object associations, pipeline mapping — an API-based migration is the only approach that preserves data integrity.
The native connector has a role: use it as a transition bridge to keep systems in sync during cutover. Do not rely on it as your primary migration mechanism.
The real cost of a bad migration is not the rework. It is the agent who opens a ticket six months later, finds the attachment missing, and cannot resolve the customer's issue. That is the failure mode worth engineering against.
Frequently Asked Questions
- Can the native HubSpot-Salesforce integration migrate historical Cases and attachments?
- No. The native connector only syncs records when they are created or updated going forward. Historical Cases must be imported separately via CSV or API-based migration. The connector also cannot migrate file attachments or custom object associations.
- How do you migrate Salesforce file attachments to HubSpot?
- Salesforce stores files as ContentVersion/ContentDocument objects with binary data linked via ContentDocumentLink. You must extract files via the Salesforce REST API, upload each file to HubSpot's Files API, then create a Note with hs_attachment_ids and associate it with the target Ticket. This is a three-API-call process per file — CSV exports cannot handle binary data.
- What is the HubSpot import limit per day?
- HubSpot enforces a hard limit of 500 imports per rolling 24-hour window per portal, regardless of subscription level. Exceeding this triggers a 429 error (PORTAL_DAILY_IMPORT_EXCEEDED). For large migrations, use HubSpot's batch CRM object APIs instead — they bypass the import limit and accept up to 100 records per API call.
- How do you map Salesforce Case Status to HubSpot Ticket Pipeline?
- Salesforce Cases use a Status picklist tied to a Support Process. HubSpot Tickets use Pipeline Stages with Open/Closed categories. Create matching pipeline stages in HubSpot before migration and build a mapping table translating each Salesforce Status value to a specific HubSpot stage ID. The Tickets API requires the internal stage ID, not the label. Mismatches cause import failures.
- Why does PK Chunking matter for Salesforce data extraction?
- PK Chunking splits Bulk API queries into batches based on record IDs, preventing query timeouts on large objects. The default chunk size is 100,000 records (max 250,000). Without it, queries against objects like EmailMessage or ContentVersion with 10M+ records will time out. Bulk API 2.0 performs PK chunking automatically; Bulk API 1.0 requires the Sforce-Enable-PKChunking header.
