How to Export Data from Salesforce Service Cloud: Methods & Limits
Learn every method to export Salesforce Service Cloud data — Data Export Service, Data Loader, Bulk API 2.0 — with exact limits, file extraction steps, and SOQL examples.
No single native Salesforce tool exports a complete, relationally intact copy of your Service Cloud data. The Data Export Service gives you raw CSVs with no preserved relationships. Data Loader requires object-by-object SOQL queries. Reports cap at 100,000 rows. And none of them cleanly extract the Case → EmailMessage → ContentDocumentLink → ContentVersion hierarchy that makes Service Cloud data meaningful.
A complete Service Cloud export typically requires combining multiple approaches: the native Data Export Service for a broad org backup, Data Loader or Bulk API 2.0 for targeted high-volume object pulls, a separate workflow for files and attachments, and — if you need configuration alongside data — a Metadata API retrieval for record types, page layouts, flows, and custom fields. (trailhead.salesforce.com)
This guide covers every method, what each one actually returns, what it silently drops, and the exact limits you will hit.
If you are exporting as part of a platform migration, see our Salesforce Service Cloud Migration Checklist for the full pre-migration planning sequence, or our Salesforce to HubSpot migration guide for destination-specific field mapping.
Why Exporting Salesforce Service Cloud Data Is Uniquely Difficult
Standard Salesforce orgs use a flat Account → Contact → Opportunity model. Service Cloud adds a deep, branching object graph designed for support operations. The core chain looks like this:
Account → Contact → Case → EmailMessage → EmailMessageRelation → ContentDocumentLink → ContentVersion → ContentDocument
Each of those objects must be exported individually. No native tool follows those relationships for you.
Here is what makes it harder than a standard Sales Cloud export:
- Cases reference Accounts, Contacts, and Queues. A single Case can be assigned to a queue, owned by an agent, and linked to a Contact under a different Account. Those lookups are raw Salesforce IDs in the CSV.
- EmailMessage stores case email threads. Every inbound and outbound email on a Case is a separate EmailMessage record with its own ParentId (the Case ID). The recipients — To, CC, BCC — live in a separate junction object called EmailMessageRelation.
- Attachments use the Lightning file model. Email attachments are stored as ContentVersion records, linked to EmailMessages via ContentDocumentLink — a junction object with hard SOQL restrictions that prevent unfiltered queries. (salesforce.stackexchange.com)
- Legacy attachments may coexist. Older orgs may have files stored in the classic Attachment object (linked by ParentId) alongside newer Lightning ContentVersion records. You need to export both.
- Permissions hide files silently. Even users with View All Data do not automatically query every file. The separate Query All Files permission is what lets privileged users bypass file-query restrictions. If your export user lacks that permission, a file gap can look like a data gap when it is actually a visibility gap. (developer.salesforce.com)
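As a concrete illustration of the raw-ID problem, even a minimal Case query (standard fields only; your org will add custom fields) returns nothing human-readable for its lookups:

```sql
SELECT Id, CaseNumber, AccountId, ContactId, OwnerId
FROM Case
WHERE Status = 'Closed'
```

AccountId, ContactId, and OwnerId come back as bare 18-character Salesforce IDs, and OwnerId can point at either a User or a Queue, so even this three-lookup query needs post-export resolution against separate Account, Contact, User, and Group exports.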
For a deeper look at the underlying Service Cloud architecture, see our complete Salesforce Service Cloud technical guide.
Method 1: Native Data Export Service (Scheduled Org Backup)
What it is: A built-in Salesforce feature that generates CSV files for all (or selected) objects in your org, packaged into downloadable ZIP archives. (trailhead.salesforce.com)
Best for: Periodic org-wide backups. Not suitable for migration-ready extracts.
How to run it
- Go to Setup → Quick Find → "Data Export"
- Click Export Now (immediate) or Schedule Export (recurring)
- Select encoding (UTF-8 recommended)
- Check "Include images, documents, and attachments" and "Include Salesforce Files and Content" if needed
- Select objects or check Include all data
- Click Start Export and wait for the completion email
- Return to the Data Export page and download each ZIP file within 48 hours
Limits and gotchas
| Constraint | Detail |
|---|---|
| Export frequency | Every 7 days (Enterprise/Performance/Unlimited) or every 29 days (Professional and lower) (trailhead.salesforce.com) |
| ZIP file size | Each archive capped at ~512 MB; large exports split into multiple ZIPs |
| Download window | Files available for 48 hours only — miss it and they are gone |
| Download throttle | One file at a time with a 60-second wait between downloads (xappex.com) |
| Previous export | Starting a new export immediately deletes all files from the prior export |
| Formula/rollup fields | Not included — Salesforce calculates these dynamically |
| Recycle Bin | Records in the Recycle Bin are excluded |
| Metadata | Not included — no record types, page layouts, flows, or custom field definitions |
| Sandbox | Does not run in sandbox orgs |
| Completion SLA | None — export jobs can take hours or days depending on system queue load |
Starting a new export — manual or scheduled — immediately deletes all files from the previous export, even if those files are still within the 48-hour download window. Salesforce retains only one export set at a time. If you try to download too fast or use a concurrent download manager, Salesforce will return an HTTP 429 "Too Many Requests" error.
What you get: Dozens (sometimes hundreds) of individual CSV files inside ZIP archives. Each CSV corresponds to one Salesforce object. Relationships are represented only as raw 18-character Salesforce IDs. Reassembling Case → EmailMessage → Attachment hierarchies requires scripting or manual VLOOKUP work.
What you don't get: No metadata. No relational joins. No file binaries unless you checked the "Include" boxes — and even then, large file volumes can delay the export significantly.
One important upside: the Data Export Service does not consume your org's API call limits.
Method 2: Salesforce Data Loader for Targeted Exports
What it is: A free desktop client application (Windows/Mac) for importing and exporting Salesforce records via CSV. It uses SOQL queries to pull specific objects. (developer.salesforce.com)
Best for: Exporting specific objects (e.g., all Cases from 2024) with control over which fields are returned.
How to export Cases with Data Loader
- Launch Data Loader and click Export (or Export All to include archived/soft-deleted records)
- Log in with your Salesforce credentials
- Select the Case object
- Write your SOQL query:
SELECT Id, CaseNumber, Subject, Description, Status, Priority,
AccountId, ContactId, OwnerId, CreatedDate, ClosedDate
FROM Case
WHERE CreatedDate > 2024-01-01T00:00:00Z

- Choose your output CSV file location and click Finish
Repeat for each object: EmailMessage, EmailMessageRelation, ContentVersion, ContentDocumentLink, Attachment, Account, Contact, etc.
Limits and gotchas
- Record cap: Salesforce's current Data Loader page documents support for up to 5 million records with Bulk API and 150 million with Bulk API 2.0. (developer.salesforce.com)
- One object at a time: There is no way to export Cases with their related EmailMessages in a single query. Each object is a separate export operation.
- Attachment exports are problematic. The official Salesforce Data Loader guide states that Data Loader does not support exporting attachments and recommends using the weekly Data Export Service for Attachment.csv. (resources.docs.salesforce.com) If you query the Body field on Attachment or the VersionData field on ContentVersion via SOQL, Data Loader outputs Base64-encoded strings in a CSV column — not actual files. Your engineering team must write a post-processing script (Python, Node.js, etc.) to parse the CSV, decode the Base64 strings, and write them to disk as binary files. For large datasets, manipulating gigabytes of Base64 strings in memory frequently causes out-of-memory errors.
- Polymorphic fields: Data Loader does not support queries that use polymorphic relationships, which matters when flattening owner-style or activity-style lookups in one pass. (resources.docs.salesforce.com)
- Each export consumes API calls. Data Loader uses REST or Bulk API under the hood, so every export counts against your org's daily API allocation.
- No scheduled exports in GUI mode. The graphical interface is manual-only. Automation requires the command-line interface on Windows, configured with process-conf.xml.
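The Base64 post-processing step can be sketched in a few lines of Python. This is a minimal illustration, not production code: it assumes a Data Loader export with Id, Name, and Body columns, and the sample record here is invented. Reading row-by-row with the csv module (rather than loading the whole file) is what keeps memory bounded on large exports.

```python
import base64
import csv
import io

def decode_attachments(csv_text):
    """Parse a Data Loader Attachment export (Id, Name, Body columns)
    and return {filename: raw_bytes}. Body holds the Base64 string
    Salesforce emits in place of the actual file."""
    files = {}
    # DictReader streams one row at a time, so only one file's Base64
    # payload is in memory at once.
    for row in csv.DictReader(io.StringIO(csv_text)):
        files[row["Name"]] = base64.b64decode(row["Body"])
    return files

# Tiny made-up export: one "attachment" whose decoded body is b"hello".
sample = "Id,Name,Body\n00P000000000001,hello.txt," + base64.b64encode(b"hello").decode()
decoded = decode_attachments(sample)
print(decoded["hello.txt"])  # b'hello'
```

In a real pipeline you would write each decoded payload to disk as it is read, instead of collecting everything in a dict, for exactly the out-of-memory reason described above.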
To export soft-deleted records (Recycle Bin) and archived activities, use Export All instead of Export. This is the only way to get archived EmailMessage records for closed Cases.
Sample SOQL for a full Case email thread export
-- Step 1: Export Cases
SELECT Id, CaseNumber, Subject, Status, AccountId, ContactId,
CreatedDate, ClosedDate
FROM Case
-- Step 2: Export EmailMessages linked to Cases
SELECT Id, ParentId, Subject, TextBody, HtmlBody, FromAddress,
ToAddress, MessageDate, Status, Incoming
FROM EmailMessage
WHERE ParentId != null
-- Step 3: Export EmailMessageRelations (To/CC/BCC per email)
SELECT Id, EmailMessageId, RelationId, RelationType, RelationAddress
FROM EmailMessageRelation

You now have three CSV files that must be joined on ParentId and EmailMessageId to reconstruct threaded conversations.
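The Case-to-EmailMessage half of that join can be sketched with the standard library alone. The CSV contents below are invented stand-ins for the two Data Loader exports; in practice you would read the files from disk.

```python
import csv
import io
from collections import defaultdict

# Illustrative stand-ins for the two exports (IDs are made up).
cases_csv = "Id,CaseNumber,Subject\n500xx0000000001,00001001,Login broken"
emails_csv = ("Id,ParentId,Subject\n"
              "02sxx0000000001,500xx0000000001,Re: Login broken\n"
              "02sxx0000000002,500xx0000000001,Fwd: Login broken")

# Index EmailMessages by ParentId (the Case ID) ...
emails_by_case = defaultdict(list)
for email in csv.DictReader(io.StringIO(emails_csv)):
    emails_by_case[email["ParentId"]].append(email)

# ... then attach each Case's thread to the Case record.
threads = []
for case in csv.DictReader(io.StringIO(cases_csv)):
    threads.append({"case": case, "emails": emails_by_case[case["Id"]]})

print(len(threads[0]["emails"]))  # 2 emails on the first Case
```

EmailMessageRelation joins the same way, indexed on EmailMessageId, which is why the dictionary-index pattern scales better here than spreadsheet VLOOKUPs once you pass a few thousand rows.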
Method 3: Bulk API 2.0 for Programmatic High-Volume Exports
What it is: A RESTful asynchronous API designed for large-scale data operations. You submit a SOQL query as a job, Salesforce processes it in the background, and you retrieve the results as CSV.
Best for: Extracting millions of records programmatically — the right choice for migration scripts, data warehousing, or any export exceeding Data Loader's comfort zone.
How it works
- Create a query job — POST to /services/data/v66.0/jobs/query with your SOQL statement
- Poll for completion — GET the job status until it returns JobComplete
- Retrieve results — GET from /services/data/v66.0/jobs/query/{jobId}/results (paginated via locator)
# Create query job
curl -X POST https://yourInstance.salesforce.com/services/data/v66.0/jobs/query \
-H "Authorization: Bearer $ACCESS_TOKEN" \
-H "Content-Type: application/json" \
-d '{"operation": "query", "query": "SELECT Id, CaseNumber, Subject, Status FROM Case"}'

If you prefer the Salesforce CLI, the sf data query --bulk command routes queries through Bulk API 2.0:
sf data query --query "SELECT Id, CaseNumber, Subject, Status FROM Case" \
--target-org myOrg --result-format csv --bulk > cases.csv

The CLI is a convenience wrapper — it does not bypass any platform limits. The same timeouts and result size constraints apply. (developer.salesforce.com)
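The poll-for-completion step is where most scripts need care. Here is a minimal sketch of the polling loop; the HTTP call is injected as a function so the logic is testable without a live org, and the stubbed responses at the bottom are invented, not real API output. A real get_status would GET /services/data/v66.0/jobs/query/{jobId} with your bearer token and return the parsed JSON body.

```python
import time

def poll_until_complete(get_status, interval_s=5, max_polls=120):
    """Poll a Bulk API 2.0 query job until it finishes.
    get_status() must return the job-status JSON as a dict
    (injected so this sketch runs without network access)."""
    for _ in range(max_polls):
        job = get_status()
        if job["state"] == "JobComplete":
            return job
        if job["state"] in ("Failed", "Aborted"):
            raise RuntimeError(f"Job ended in state {job['state']}")
        time.sleep(interval_s)
    raise TimeoutError("Job did not complete within the polling budget")

# Stubbed status sequence standing in for two real HTTP responses.
responses = iter([{"state": "InProgress"}, {"state": "JobComplete"}])
job = poll_until_complete(lambda: next(responses), interval_s=0)
print(job["state"])  # JobComplete
```

The Failed/Aborted check matters: a query that hits the 2-minute timeout described below surfaces here as a Failed state, not as an HTTP error on job creation.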
Limits you will hit
| Limit | Value |
|---|---|
| Query timeout | 2 minutes per query — exceeding this returns QUERY_TIMEOUT (resources.docs.salesforce.com) |
| Query data volume | Up to 15 GB per query job, split into files of up to 1 GB each |
| Results retention | Query results must be retrieved within 7 days of job completion |
| Concurrent jobs | Maximum 25 concurrent jobs (shared across Bulk API 1.0 and 2.0) |
| Result retrieval retries | If result retrieval exceeds 1 GB or takes more than five minutes, Salesforce retries. After 30 attempts, the job fails. (resources.docs.salesforce.com) |
| SOQL restrictions | Does not support GROUP BY, OFFSET, TYPEOF, aggregate functions, or compound address/geolocation fields (resources.docs.salesforce.com) |
| API call consumption | Each job creation and result retrieval counts against daily API limits |
The 2-minute query timeout is the constraint that catches most teams. Service Cloud queries — especially those joining Cases, Contacts, and EmailMessages — are notoriously slow. If a query does not complete within the timeout, Salesforce fails the job and you need to rewrite it as a simpler, more selective query.
Workaround: Break large queries into date-range or ID-range partitions. Instead of SELECT ... FROM Case, use:
SELECT Id, CaseNumber, Subject FROM Case
WHERE CreatedDate >= 2024-01-01T00:00:00Z
AND CreatedDate < 2024-04-01T00:00:00Z

Run one job per quarter (or month, for very large orgs) to stay under the timeout. Do not rerun the same broad query — make it more selective.
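Generating the quarterly WHERE clauses is easy to script. A small sketch, using the ISO-8601 UTC format SOQL datetime literals expect:

```python
from datetime import datetime

def quarter_clauses(year):
    """Build one SOQL WHERE clause per calendar quarter of `year`."""
    starts = [datetime(year, m, 1) for m in (1, 4, 7, 10)] + [datetime(year + 1, 1, 1)]
    fmt = "%Y-%m-%dT%H:%M:%SZ"  # SOQL datetime literal format
    return [
        f"WHERE CreatedDate >= {a.strftime(fmt)} AND CreatedDate < {b.strftime(fmt)}"
        for a, b in zip(starts, starts[1:])
    ]

for clause in quarter_clauses(2024):
    print("SELECT Id, CaseNumber, Subject FROM Case", clause)
```

Each generated clause becomes one Bulk API job; if a quarter still times out, subdivide further (monthly, or by CreatedDate plus an indexed custom field).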
queryAll for deleted records
Salesforce documents the bulk queryAll operation as returning records that have been deleted by merge or hard delete, as well as archived Task and Event records. Use this when you need a complete extract that includes purged data. (resources.docs.salesforce.com)
Method 4: Reports Export (Quick and Dirty)
What it is: Salesforce's built-in reporting interface, which lets you export report results as Excel or CSV files. (trailhead.salesforce.com)
Best for: One-off exports for business users who need a filtered view — not for full data extraction.
Limits
- Display limit: Reports show a maximum of 2,000 rows in the UI
- Export limit: Up to 100,000 rows and 100 columns when exported as XLSX
- No related object depth: A Case report can include Case fields and parent Account/Contact fields, but cannot include child EmailMessage bodies or attachment content
- No binary data: Attachments and files are not exportable via reports
- Formatting lost: Exported reports drop formatting, groupings, and subtotals
Reports are useful for quick sanity checks — "how many open Cases do we have from Q1?" — but they are not a data extraction tool. If someone hands you an exported Salesforce report and calls it a "data migration source," you are missing conversations, attachments, and most relational context.
The Attachment Trap: How to Export Cases with Files
This is where most Service Cloud exports break.
Salesforce stores files using the Lightning file model: ContentDocument → ContentVersion → ContentDocumentLink. When a customer attaches a PDF to an email that creates a Case via Email-to-Case, Salesforce stores the file as a ContentVersion record, creates a ContentDocument parent, and links it to the EmailMessage via a ContentDocumentLink junction record.
The problem: ContentDocumentLink has a hard SOQL restriction that prevents you from running an unfiltered query.
Attempting SELECT Id, ContentDocumentId, LinkedEntityId FROM ContentDocumentLink without a filter returns:
MALFORMED_QUERY: Implementation restriction: ContentDocumentLink requires a filter
by a single Id on ContentDocumentId or LinkedEntityId using the equals operator
or multiple Id's using the IN operator.
You cannot bulk-export all ContentDocumentLinks in one query. You must:
- First export all Case IDs (and EmailMessage IDs)
- Query ContentDocumentLink filtered by LinkedEntityId IN ('caseId1', 'caseId2', ...)
- Use the resulting ContentDocumentIds to query ContentVersion for file data
- Decode the Base64-encoded VersionData field to reconstruct original files
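Because the IN list cannot be unbounded (SOQL statements have a length ceiling), the second step means batching your IDs into chunks. A sketch of the chunking, where the chunk size of 200 is an illustrative assumption to tune against your org's query-length limits, and the IDs are synthetic:

```python
def chunked_in_queries(linked_entity_ids, chunk_size=200):
    """Yield ContentDocumentLink queries with at most chunk_size IDs
    per IN clause, so no single SOQL statement grows too long."""
    for i in range(0, len(linked_entity_ids), chunk_size):
        chunk = linked_entity_ids[i:i + chunk_size]
        ids = ", ".join(f"'{x}'" for x in chunk)
        yield ("SELECT Id, ContentDocumentId, LinkedEntityId "
               f"FROM ContentDocumentLink WHERE LinkedEntityId IN ({ids})")

# 450 fake Case IDs -> 3 batched queries (200 + 200 + 50).
queries = list(chunked_in_queries([f"500xx{n:010d}" for n in range(450)]))
print(len(queries))  # 3
```

The same helper works for the ContentVersion step by swapping in ContentDocumentId as the filter field.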
There is one workaround: in API versions 59.0 and later, you can bypass the LinkedEntityId filter requirement if the executing user has the "Query All Files" permission. Enabling "Query All Files" also requires "View All Data" permission, which many infosec teams restrict. Even with these permissions granted, you are still bound by query timeouts and result size limits.
A missing-file problem is often a permission problem. View All Data alone is not enough for unrestricted file queries — Query All Files is the separate permission that changes file visibility behavior. Verify this on the export user before assuming files are missing from the org. (developer.salesforce.com)
Step-by-step: Export all files attached to Cases
-- Step 1: Get all Case IDs
SELECT Id FROM Case
-- Step 2: Get all EmailMessage IDs for those Cases
SELECT Id, ParentId FROM EmailMessage WHERE ParentId IN (<case_ids>)
-- Step 3: Get ContentDocumentLinks for Cases and EmailMessages
SELECT Id, ContentDocumentId, LinkedEntityId
FROM ContentDocumentLink
WHERE LinkedEntityId IN (<case_ids_and_email_message_ids>)
-- Step 4: Get ContentVersions (actual file data)
SELECT Id, ContentDocumentId, Title, FileExtension, VersionData
FROM ContentVersion
WHERE ContentDocumentId IN (<content_document_ids>)
AND IsLatest = true

The VersionData field contains the Base64-encoded binary. For a 5 MB PDF, that is ~6.7 MB of Base64 text in your CSV. Multiply by thousands of Case attachments and you are looking at massive files that need programmatic decoding.
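The ~33% inflation comes directly from Base64's encoding ratio (4 output bytes for every 3 input bytes), which you can verify in two lines:

```python
import base64

five_mb = 5 * 1024 * 1024
# b64encode emits 4 bytes per 3 input bytes (padded to a multiple of 4).
encoded_len = len(base64.b64encode(b"\x00" * five_mb))
print(round(encoded_len / 1024 / 1024, 2))  # 6.67
```

Use that ratio when sizing disks and memory for the decode step: a 30 GB file corpus arrives as roughly 40 GB of CSV text.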
Legacy Attachments vs. Lightning Files
| Storage Model | Object | Link Method | Binary Field |
|---|---|---|---|
| Lightning Files (current) | ContentVersion | ContentDocumentLink (junction) | VersionData (Base64) |
| Classic Attachments (legacy) | Attachment | ParentId (direct lookup) | Body (Base64) |
Many Service Cloud orgs running for 5+ years will have both storage models active. A complete export must account for both.
Older Salesforce orgs may have files in the legacy Attachment object alongside newer Lightning ContentVersion records. Query Attachment WHERE ParentId IN (<case_ids>) separately. The Body field on Attachment is also Base64-encoded.
If you are mapping this data to a new CRM, the complexity multiplies. See our guide on Salesforce to HubSpot Migration: Pipeline, Tickets & Attachments for destination-specific file mapping challenges.
Comparison: Which Export Method for Which Use Case?
| Method | Best For | Max Records | Includes Files? | Preserves Relations? | API Calls? |
|---|---|---|---|---|---|
| Data Export Service | Org-wide backup | Entire org | Optional (slows export) | ❌ Raw IDs only | ❌ None |
| Data Loader | Targeted object pulls | 5M (Bulk API) / 150M (Bulk API 2.0) | Not supported natively | ❌ Manual joins | ✅ Yes |
| Bulk API 2.0 | Programmatic large-scale | 15 GB per job | Base64 only | ❌ Manual joins | ✅ Yes |
| Reports | Quick filtered views | 100K rows (XLSX) | ❌ No | ❌ Flat | ❌ None |
| SFDX CLI | Scripted/automated | Same as Bulk API | Base64 only | ❌ Manual joins | ✅ Yes |
DIY Scripts vs. AppExchange Tools vs. Managed Migration Services
Once you understand the limits above, the real question is: who does the work?
DIY (custom scripts against Bulk API 2.0)
Pros: Full control. No per-record licensing fees. Can be tuned for your exact data model.
Cons: You own every failure mode. You must handle:
- Query timeout partitioning
- ContentDocumentLink ID-chunking
- Base64 decoding of all files
- Rate-limit backoff and retry logic
- Relational reassembly across dozens of CSVs
- Error logging and data validation
Realistic engineering time: 40–120 hours for a complete Service Cloud extraction with files, depending on org complexity.
AppExchange file export tools
Tools like Satrang Mass File Download can help with the file extraction piece specifically — downloading ContentVersion binaries without manual Base64 decoding. They do not solve the full relational export problem, but they remove the hardest manual step.
Trade-off: Per-user or per-org licensing cost, and you still need a separate process for structured data (Cases, EmailMessages, etc.). These tools are designed for ad-hoc admin downloads, not for orchestrating automated, high-volume platform migrations where data must be transformed and loaded into a new system.
Managed migration service
A team like ClonePartner handles the full extraction pipeline: Bulk API rate-limit management, ContentDocumentLink ID-chunking, Base64 decoding, relational reassembly, and delivery in your required format (JSON, CSV, or direct API injection into the target platform).
The trade-off is cost vs. time. If your team has the engineering bandwidth and 2–3 weeks of calendar time, DIY is viable. If you need the export done in days with zero data loss, a managed service removes the risk. If the export is on the critical path of a migration and rerunning a failed job is more expensive than engineering it right once, outside help pays for itself.
Pre-Export Checklist for Salesforce Service Cloud
Before running any export, walk through this list:
- Identify all objects in scope. At minimum: Account, Contact, Case, CaseComment, EmailMessage, EmailMessageRelation, ContentDocumentLink, ContentVersion, Attachment, CaseTeamMember, CaseHistory, FeedItem (Chatter on Cases)
- Check for both file storage models. Run SELECT COUNT() FROM Attachment WHERE ParentId IN (SELECT Id FROM Case) and compare with ContentVersion counts
- Audit custom fields and record types. Custom fields on Case, EmailMessage, and Account are unique to your org — make sure your SOQL queries include them
- Estimate data volume. Check record counts in Setup → Storage Usage to plan your export partitioning strategy
- Verify API limits. Go to Setup → System Overview → API Requests to see your remaining daily allocation
- Verify file permissions. Confirm the export user has "Query All Files" if you need unrestricted file access
- Schedule the export window. Run large exports during off-peak hours to minimize API contention with integrations and automations
- Plan for metadata separately. If migrating, you also need record types, page layouts, assignment rules, and escalation rules — these require Metadata API or SFDX retrieval, not data export
For the full migration planning sequence, see our Salesforce Service Cloud Migration Checklist.
Data Export vs. Metadata Retrieval
Data export and metadata retrieval are completely separate operations in Salesforce. The methods above extract records (Cases, Contacts, files). They do not extract configuration: custom objects, fields, validation rules, flows, assignment rules, escalation rules, page layouts, or email templates.
If your export is for migration purposes, you need both. Metadata is retrieved via:
- Metadata API (programmatic, returns XML packages)
- SFDX CLI (sf project retrieve start)
- Change Sets (for org-to-org transfers within the same Salesforce environment)
None of the data export tools — Data Export Service, Data Loader, Bulk API — touch metadata.
When to Call for Help
Most teams underestimate Service Cloud exports for one reason: the export itself is easy until you try to use the output. Downloading CSVs is straightforward. Reassembling 15 interconnected objects with decoded files, preserved threading, and correct relational mappings is a different job entirely.
If your export involves:
- More than 100,000 Cases with email threads
- Thousands of file attachments that need to be decoded and re-linked
- A migration deadline measured in days, not months
- Regulatory requirements for data completeness verification
...then the engineering cost of DIY often exceeds the cost of having it done right the first time.
Frequently Asked Questions
- How often can I export data from Salesforce Service Cloud?
- Using the native Data Export Service, Enterprise, Performance, and Unlimited editions can export every 7 days. Professional and lower editions are limited to every 29 days. Data Loader and Bulk API 2.0 exports can run on-demand at any time, limited only by your org's daily API call allocation.
- Does Salesforce Data Loader export attachments?
- The official Data Loader guide states that Data Loader does not support exporting attachments and recommends using the weekly Data Export Service instead. You can query the Body or VersionData fields via SOQL, but Data Loader outputs Base64-encoded strings in a CSV column — not actual files. You must decode each string programmatically to reconstruct the original PDFs, images, or documents.
- Why does my ContentDocumentLink SOQL query fail with MALFORMED_QUERY?
- Salesforce enforces a hard restriction on ContentDocumentLink: every query must include a WHERE clause filtering on ContentDocumentId or LinkedEntityId using the equals or IN operator. You cannot run an unfiltered query. To bulk-export, first collect your Case IDs or EmailMessage IDs, then query ContentDocumentLink in batches filtered by those IDs. In API version 59.0+, you can bypass this restriction if the user has the 'Query All Files' permission.
- What is the Bulk API 2.0 query timeout limit?
- Salesforce enforces a 2-minute processing limit on Bulk API 2.0 queries. If the query does not complete within 120 seconds, it fails with a QUERY_TIMEOUT error. The workaround is to break large queries into date-range or ID-range partitions to reduce execution time.
- Does the Salesforce Data Export Service include email conversations and file attachments?
- The Data Export Service can include EmailMessage records and file data if you check the relevant options (Include images/documents/attachments and Include Salesforce Files). The output is raw CSVs with no relational joins — you must reassemble Case-to-Email-to-Attachment relationships using Salesforce IDs. Large file volumes can delay the export significantly, and downloads are throttled to one file at a time with a 60-second wait.
