Salesforce to GoHighLevel Migration: The Technical Guide
A technical guide to migrating from Salesforce to GoHighLevel: object mapping, API rate limits, custom object constraints, and step-by-step ETL architecture.
Migrating from Salesforce to GoHighLevel (GHL) is a data-architecture problem, not a drag-and-drop setup. Salesforce uses a deeply relational schema — Accounts contain Contacts, Contacts link to Opportunities, Activities tie to both — while GoHighLevel is contact-centric: everything radiates outward from a flat Contact record, with Opportunities living inside Pipelines and Companies treated as a loosely coupled grouping object.
A CSV export from Salesforce will flatten those relationships, silently duplicate company data across contacts, or collapse multi-contact deals into a single row. Understanding the structural mismatch before you export a single record is what separates a clean migration from weeks of manual cleanup.
This guide covers the object-mapping decisions you need to make, the API constraints on both sides, the viable migration methods and their trade-offs, and the edge cases that break most DIY attempts. If you're migrating from a different CRM to GoHighLevel, see our Close to GoHighLevel migration guide for a comparison of GHL's data model against another sales-centric CRM.
Why Companies Migrate from Salesforce to GoHighLevel
The drivers are almost always a combination of three things:
- Cost consolidation. Salesforce Enterprise Edition licensing adds up fast — per-user costs, add-on fees for CPQ, Marketing Cloud, and Pardot. GoHighLevel bundles CRM, pipeline management, marketing automation, funnels, SMS/email, and appointment scheduling into a single subscription.
- Operational simplicity. Salesforce's power is its configurability, but that configurability creates admin overhead. Teams running fewer than ~50 users often don't need Flows, Process Builder, validation rules, and multi-tier permission sets. GoHighLevel's workflow builder covers 80% of those use cases with less setup.
- Agency model fit. GoHighLevel was built with agencies in mind. Its sub-account architecture, white-label options, and SaaS mode let agencies resell a branded CRM to clients — something Salesforce doesn't natively support without significant custom development.
None of this means GoHighLevel is a drop-in replacement. It is a different kind of tool with a much simpler data model. Acknowledging that gap upfront is the first step toward a migration that doesn't break things.
Pre-Migration Planning
Before writing any migration code or mapping any fields:
- Data audit. Run record counts for all Salesforce objects in scope: Accounts, Contacts, Leads, Opportunities, Activities, Custom Objects. Include owners, queues, assignment rules, and integrations that read or write Salesforce.
- Purge dead data. Delete or archive inactive records, unused custom objects, and orphaned lookup references before export. Do not migrate garbage into a fresh system.
- Define migration scope. Not everything needs to migrate. Determine the cut-off date for historical data (e.g., only closed-won opportunities from the last 3 years). Decide what's operationally active, what can be archived, and what's disposable.
- Create GHL target metadata. Set up Pipelines, stages, Custom Fields, and Custom Objects in GoHighLevel before importing any data. HighLevel lets you map CSV columns to both standard and custom fields, but only to fields that already exist in your account, so create any missing Custom Fields before running the import.
- Choose a migration strategy:
- Big bang: Full load, short freeze, final delta, cutover. Simpler but higher risk.
- Phased: Migrate by object type or business unit. Lower risk, longer timeline.
- Incremental: Historical data first, then repeated delta passes during a coexistence period until cutover.
- Set a configuration freeze on Salesforce. No new fields, objects, or automations during the migration build.
- Back up everything. Run a full Salesforce Data Export before starting. This is your rollback safety net.
- Define rollback criteria. Document what constitutes a failed migration and how you'd recover.
Freeze schema changes while mapping. New custom fields, picklist values, workflows, or routing rules introduced mid-project are a common reason test results do not match production cutover results.
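The data-audit step can be scripted rather than clicked through. A minimal sketch using the Salesforce REST query endpoint; SF_BASE, the token, and the object list are placeholders for your org's values:

```python
import requests

# Placeholders -- substitute your My Domain host and a valid access token.
SF_BASE = "https://yourorg.my.salesforce.com"
SF_TOKEN = "Bearer <access_token>"

OBJECTS_IN_SCOPE = ["Account", "Contact", "Lead", "Opportunity", "Task"]

def count_query(sobject):
    """SOQL COUNT() query for one object -- the response carries only
    totalSize, so each audit call is cheap against the daily API quota."""
    return f"SELECT COUNT() FROM {sobject}"

def audit_record_counts(objects=OBJECTS_IN_SCOPE):
    """Return {object: record_count} via the REST query endpoint."""
    counts = {}
    for obj in objects:
        resp = requests.get(
            f"{SF_BASE}/services/data/v60.0/query",
            headers={"Authorization": SF_TOKEN},
            params={"q": count_query(obj)},
        )
        resp.raise_for_status()
        counts[obj] = resp.json()["totalSize"]
    return counts
```

Run it once before scoping and once after the purge pass; the delta tells you exactly how much dead data you avoided migrating.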
Data Model & Object Mapping: Salesforce → GoHighLevel
This is the most consequential part of the migration plan. Get the mapping wrong here and every downstream step — import, validation, automation rebuild — compounds the error.
Core Object Mapping
| Salesforce Object | GoHighLevel Equivalent | Notes |
|---|---|---|
| Account | Company | GHL Companies can link to Contacts, Opportunities, and Custom Objects. But the Company object has limited functionality compared to Salesforce Accounts — no direct pipeline association, limited workflow triggers. |
| Contact | Contact | 1:1 mapping. GHL's core entity. All communication, pipelines, and automations run through Contacts. |
| Lead | Contact (with tags/custom fields) | GHL has no separate Lead object. Merge Salesforce Leads into GHL Contacts and use tags or a custom field like lifecycle_stage = Lead to segment. |
| Converted Lead | Final Contact/Company/Opportunity state | Do not recreate converted leads as separate live records unless the business actively uses that history operationally. |
| Opportunity | Opportunity (inside a Pipeline) | GHL Opportunities live inside Pipelines with defined stages. Create matching Pipelines and stages before import. |
| Task / Event | Task / Note / Appointment | GHL supports Tasks on Contacts, Opportunities, Companies, and Custom Objects. Historical activities are best stored as Notes. Future-dated events with scheduling meaning map better to Appointments. |
| Custom Objects | Custom Objects (limited) | GoHighLevel supports up to 10 Custom Objects per location across all plans. Major architectural constraint — see section below. |
Field-Level Mapping Rules
Salesforce picklist fields become GHL Dropdown custom fields. Date fields, currency fields, and multi-select picklists all require explicit type matching in GHL's custom field builder.
Key mapping rules:
- Salesforce Record Types → GHL has no equivalent. Use tags or a custom dropdown field to segment.
- Salesforce Lookup/Master-Detail fields → GHL uses associations between objects (Contact ↔ Company, Contact ↔ Opportunity, Custom Object ↔ Contact). Rebuild as associations, not foreign-key lookups.
- Salesforce Formula fields → No equivalent in GHL. Pre-compute the value during the transform step and store it in a text/number custom field.
- Salesforce IDs → Create hidden custom fields (sf_account_id, sf_contact_id, sf_opportunity_id) to preserve Salesforce primary keys. These are your reconciliation keys during validation, rollback, and post-go-live support.
- Owner IDs → Map with a user crosswalk table, not by display name.
- Multi-select picklists → Map to tags or related records. Comma-joined text degrades filtering and reporting.
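The owner-crosswalk and multi-select rules are easy to get wrong by hand. A small sketch of both transforms; the crosswalk entries here are hypothetical and would in practice be loaded from an ops-maintained CSV:

```python
# Hypothetical crosswalk from Salesforce Owner IDs to GHL User IDs.
USER_CROSSWALK = {
    "005XXXXXXXXXXXX": "ghl_user_abc123",
}

def map_owner(sf_owner_id, crosswalk=USER_CROSSWALK, default=None):
    """Resolve owners by ID, never by display name -- names collide
    and change; IDs do not."""
    return crosswalk.get(sf_owner_id, default)

def picklist_to_tags(raw, prefix=""):
    """Split a Salesforce multi-select picklist ('A;B;C') into GHL tags,
    dropping empty entries and stray whitespace."""
    return [f"{prefix}{v.strip()}" for v in raw.split(";") if v.strip()]
```

A prefix like `tier:` keeps migrated tags distinguishable from tags created organically in GHL after cutover.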
Sample Field Mapping Table
| Salesforce Field | GHL Field | Type | Transform Required |
|---|---|---|---|
| Contact.FirstName | contact.firstName | Text | None |
| Contact.LastName | contact.lastName | Text | None |
| Contact.Email | contact.email | Email | None |
| Contact.Phone | contact.phone | Phone | Normalize to E.164 |
| Contact.MailingAddress | contact.address1, contact.city, contact.state, contact.postalCode | Text | Split compound field |
| Account.Name | contact.companyName or Company record | Text | Decide: flat field vs. Company object |
| Account.OwnerId | Company owner or custom field | Lookup | Requires user crosswalk |
| Opportunity.Name | opportunity.name | Text | None |
| Opportunity.StageName | opportunity.pipelineStageId | ID | Map stage names to GHL Pipeline stage IDs |
| Opportunity.Amount | opportunity.monetaryValue | Number | Preserve currency code separately if multi-currency |
| Opportunity.CloseDate | Custom field on Opportunity | Date | GHL Opportunities have no native CloseDate field |
| Lead.LeadSource | contact.source or custom field | Text | None |
| Lead.Status | Tag or custom dropdown | Text | Map picklist values |
| Task.Subject | task.title | Text | None |
| Task.Description | task.body | Text | Truncate if exceeding GHL limits |
| CustomObject__c.ExternalId__c | Custom Object unique Text field | Text | Only certain field types support uniqueness |
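Two of the transforms in the table, phone normalization and compound-address splitting, can be sketched in Python. The NANP default-country assumption is ours; adjust it for your dataset:

```python
import re

def to_e164(raw, default_country="+1"):
    """Normalize a phone number to E.164. Assumes NANP formatting when
    no country code is present -- pass a different default_country for
    non-US data. Returns None to flag the record for manual review."""
    digits = re.sub(r"[^\d+]", "", raw or "")
    if digits.startswith("+"):
        return digits
    if len(digits) == 10:                      # e.g. 415-555-0100
        return f"{default_country}{digits}"
    if len(digits) == 11 and digits.startswith("1"):
        return f"+{digits}"
    return None

def split_mailing_address(record):
    """Map Salesforce compound-address parts to GHL's flat fields."""
    return {
        "address1": record.get("MailingStreet"),
        "city": record.get("MailingCity"),
        "state": record.get("MailingState"),
        "postalCode": record.get("MailingPostalCode"),
        "country": record.get("MailingCountry"),
    }
```

Records where `to_e164` returns None belong in an exceptions file, not in the load queue.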
Custom Objects and Uniqueness Constraints in GoHighLevel
Salesforce orgs with 5, 10, or 50+ custom objects hit a hard wall here.
GoHighLevel supports up to 10 Custom Objects per location across all plans. That's a fixed ceiling — Starter, Unlimited, and Pro all share the same limit. If your Salesforce org has more than 10 custom objects carrying active operational data, you must make architectural compromises: flatten some objects into custom fields on Contacts, archive others externally, or accept that not everything migrates into GHL's native schema.
Supported unique field types today are Single Line Text, Multi Line Text, Number, and Phone, with a limit of up to 10 unique fields per object. This matters when migrating Salesforce objects that rely on lookup relationships for deduplication. Lookup relationships, multi-select fields, and date fields cannot be marked as unique in GHL custom objects.
If your Salesforce custom object uses an external ID (a common pattern for integration keys), you can replicate that as a unique Text field. But if your dedup logic depends on a date or a lookup, handle deduplication in your ETL layer before loading into GHL.
Workaround for relational integrity: When migrating complex Salesforce custom objects that rely on relational lookups, generate a concatenated text string (e.g., AccountID_OpportunityID) during the transformation phase and map it to a unique Text field in GoHighLevel. This preserves referential integrity and prevents duplicates.
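A sketch of that concatenated-key pattern; the separator and the 255-character cap are our assumptions for illustration, not documented GHL requirements:

```python
def composite_key(*parts, sep="_", max_len=255):
    """Build a deterministic unique key like 'AccountID_OpportunityID'.

    Missing parts raise so that broken source relationships fail loudly
    in the transform step instead of loading as silent duplicates."""
    if any(p in (None, "") for p in parts):
        raise ValueError(f"composite_key missing part: {parts!r}")
    key = sep.join(str(p) for p in parts)
    if len(key) > max_len:
        raise ValueError(f"key exceeds {max_len} chars: {key}")
    return key
```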
Custom Objects can be linked to Contacts, Companies, Opportunities, and even other Custom Objects — so relational modeling is possible, but it's shallower than Salesforce's schema. Plan your target data model on paper before writing any migration code.
Older advice that says GoHighLevel has no real custom objects is outdated. Current docs show Custom Objects on all plans with API support, workflow support, associations, and CSV import. The real constraints are the 10-object cap, limited unique field types, and support gaps across some product surfaces — including Email Campaigns, Bulk Email/SMS, Conversations, Calendars, and Payments. (help.gohighlevel.com)
Migration Approaches: CSV, API, Middleware, or Managed Service
Method 1: Native CSV Export → CSV Import
How it works: Export Salesforce data via Data Export, Reports, or Data Loader as CSV files. Clean and transform in a spreadsheet. In HighLevel, go to Contacts → Import Contacts and upload the CSV files exported from Salesforce, mapping fields to ensure accurate data import.
When to use it: Small datasets (under ~5,000 contacts), simple schemas with no custom objects or multi-level relationships.
Pros:
- Zero engineering effort
- Free
- Fast for small datasets
Cons:
- Relationships are destroyed — a flat CSV cannot represent Account → Contact → Opportunity hierarchies
- When importing contact notes via CSV, you can only have one note per contact record with a limit of 5,000 characters. Historical activity threads are lost.
- CSV file size limit is 30 MB — if larger, you must split into smaller files.
- Every object type requires a separate import pass with manual linking afterward
Scalability: Small datasets only. Enterprise Salesforce orgs with 100K+ records across multiple objects will spend more time on spreadsheet cleanup than the migration itself.
For a deeper analysis, see Using CSVs for SaaS Data Migrations: Pros and Cons.
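If you do take the CSV route, the 30 MB cap is easiest to respect with a splitter that repeats the header row in every chunk, so each file imports independently. A sketch:

```python
import csv

def split_csv(path, max_bytes=30 * 1024 * 1024, out_prefix="part"):
    """Split a CSV into chunks under max_bytes, repeating the header in
    every chunk. Byte sizes are approximate (unquoted rows), which is
    fine as long as you leave a little headroom under the limit."""
    parts = []
    with open(path, newline="", encoding="utf-8") as src:
        reader = csv.reader(src)
        header = next(reader)
        out, writer, size, n = None, None, 0, 0
        for row in reader:
            row_bytes = len((",".join(row) + "\n").encode("utf-8"))
            if writer is None or size + row_bytes > max_bytes:
                if out:
                    out.close()
                n += 1
                part_path = f"{out_prefix}_{n:03d}.csv"
                out = open(part_path, "w", newline="", encoding="utf-8")
                writer = csv.writer(out)
                writer.writerow(header)
                size = len((",".join(header) + "\n").encode("utf-8"))
                parts.append(part_path)
            writer.writerow(row)
            size += row_bytes
        if out:
            out.close()
    return parts
```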
Method 2: API-Based Migration (Custom ETL)
How it works: Extract data using Salesforce REST or Bulk API. Transform in a staging layer (custom script, database, or ETL tool). Load into GoHighLevel via the v2 REST API.
When to use it: Any migration involving more than ~5,000 records, custom objects, multi-level relationships, or where you need deterministic, auditable results.
Pros:
- Full control over data transformation, relationship rebuilding, and error handling
- Preserves Account → Contact → Opportunity links by creating records in dependency order and storing GHL IDs for association
- Idempotent runs are possible with proper design
- Handles large datasets via batching and pagination
Cons:
- Requires engineering effort (Python, Node.js, or equivalent)
- Must manage rate limits on both sides
- Needs thorough testing and error logging
Scalability: Enterprise-grade. This is the only method that reliably handles 100K+ records with relational integrity.
Method 3: Middleware (Zapier / Make)
How it works: Using the LeadConnector Zapier app, you authenticate with your HighLevel CRM login, then choose which sub-account to connect so Zaps can create/update contacts, move data, and trigger actions.
When to use it: Ongoing record-level sync between a still-active Salesforce org and GHL. Not suitable for historical bulk migration.
Pros:
- No-code setup
- Good for real-time, record-by-record sync during a coexistence window
Cons:
- Each GoHighLevel account establishes a unique connection with Zapier, involving distinct API keys and connections for every sub-account. Agencies with multiple sub-accounts can't scale this.
- Zapier processes records one at a time per Zap execution. Migrating 50,000 historical Opportunities means 50,000 task executions — expensive and slow.
- No native support for bulk relationship creation
- Duplicate handling is primitive; Zapier won't deduplicate against existing GHL records unless you build that logic into a multi-step Zap
Scalability: Poor for bulk historical migration. Acceptable for ongoing trickle sync.
Zapier is a sync tool, not a migration engine. HighLevel's official Zapier connection uses the LeadConnector app and authenticates to a specific sub-account. That works for narrow live automation. It is a bad fit for replaying years of Salesforce history with cross-object reconciliation.
Method 4: Managed Migration Service
How it works: An external team writes and operates the extraction, transformation, and loading pipeline. They handle API rate limits, relationship mapping, deduplication, and validation.
When to use it: When your engineering team is focused on product work and can't absorb a migration side project. When the Salesforce schema is complex (multiple custom objects, deep relationship hierarchies). When you need guarantees around data fidelity and downtime.
Pros:
- Fastest time-to-completion
- Relationships, custom objects, and edge cases handled by specialists
- Your team stays focused on their actual job
Cons:
- External cost
- Requires sharing data access with a third party
Scalability: Enterprise-grade.
Approach Comparison
| Criteria | CSV Import | API-Based (Custom) | Zapier / Make | Managed Service |
|---|---|---|---|---|
| Engineering effort | None | High | Low | None |
| Relationship preservation | ❌ | ✅ | ❌ (limited) | ✅ |
| Custom object support | Partial | Full | ❌ | Full |
| Scalability | < 5K records | Unlimited | < 1K records | Unlimited |
| Historical data fidelity | Low | High | Low | High |
| Cost | Free | Dev time | Per-task pricing | Service fee |
| Turnaround | Hours (small) | Days–weeks | Ongoing | Days |
Which Approach Fits Your Scenario?
- Small business, simple schema, < 5K contacts, no custom objects: CSV import works if you accept manual cleanup.
- Enterprise, 10K+ records, custom objects, multi-level relationships: API-based or managed service. CSV will fail silently.
- One-time migration with a cutover date: API-based or managed. Zapier is wrong for this.
- Ongoing sync during coexistence: Zapier/Make for real-time record-level sync, but only for new/updated records — not historical backfill.
- Low engineering bandwidth: Managed service. The opportunity cost of pulling developers off product work to debug CSV imports for two weeks is almost always higher than the service fee.
API Limits and Technical Constraints
Both platforms enforce rate limits that directly impact migration throughput. Ignoring them means failed jobs, incomplete data, and wasted time.
Salesforce Side
Salesforce enforces a 100,000 daily API request limit for Enterprise Edition orgs, plus 1,000 additional requests per user license. For an org with 50 users, that's 150,000 REST API calls per rolling 24-hour window.
REST query responses return up to 2,000 records per call and use a query locator for pagination. Query cursors remain available for up to 2 days. (developer.salesforce.com)
For bulk extraction, use Bulk API 2.0. Bulk API 2.0 automatically chunks the data into multiple internal batches. You only need to create one job per operation, upload the data, and check job status. Salesforce handles internal batch creation, execution, error handling, and retries. Current limits for Bulk API 2.0: 10,000 query jobs per 24-hour window, 1 TB of query results per 24 hours, and 7-day result availability. (developer.salesforce.com)
For detailed extraction methods, see How to Export Data from Salesforce Service Cloud: Methods & Limits.
Salesforce API limits operate on a rolling 24-hour window, not a fixed calendar day. If you exhaust your quota at 3 PM, it won't reset at midnight — it resets at 3 PM the next day. Plan extraction runs accordingly.
GoHighLevel Side
GoHighLevel enforces a burst limit of 100 API requests per 10 seconds and a daily limit of 200,000 API requests per day, both scoped per Marketplace app per resource (Location or Company).
The contacts endpoint is a common bottleneck during validation. By default, only 20 records are returned per request unless you set the limit parameter to 100. For a 50,000-record validation pass, that's at minimum 500 paginated requests — factor this into your daily budget.
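Those numbers translate into a rough call budget you can compute before scheduling the run. A sketch under the simplifying assumption of one write per record; associations, notes, and retries add more, so treat the result as a floor:

```python
def migration_request_budget(records, writes_per_record=1,
                             validation_page_size=100,
                             daily_limit=200_000):
    """Rough API-call budget for one GHL location: one create/update
    per record plus paginated validation reads at 100 records/page."""
    load_calls = records * writes_per_record
    validation_calls = -(-records // validation_page_size)  # ceiling division
    total = load_calls + validation_calls
    return {
        "load": load_calls,
        "validate": validation_calls,
        "total": total,
        "days": -(-total // daily_limit),
    }
```

For the 50,000-record example above this yields 500 validation reads and fits the whole run inside a single day's quota.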
Current GHL docs mark GET /contacts/ as deprecated and direct builders to POST /contacts/search. Many older community examples still use the old list endpoint — build against the current API. (marketplace.gohighlevel.com)
GHL API V1 has reached end-of-support. Existing integrations may continue to function, but no support is provided. Build everything against the V2 API.
Authentication options: GoHighLevel supports Private Integration Tokens (PITs) and OAuth 2.0. PITs work for internal, single-sub-account use. OAuth is the better choice for multi-location installs or marketplace behavior. (marketplace.gohighlevel.com)
Duplicate handling is setting-dependent. If Allow Duplicate Contact is disabled, GHL searches by global unique identifier. If enabled, it prioritizes email then phone. Test the duplicate-search and upsert behavior against the location's actual settings before batching production data. (marketplace.gohighlevel.com)
GoHighLevel rate limits are scoped per Marketplace app per Location. If you're migrating multiple sub-accounts, each sub-account gets its own 200K daily quota. For single-account migrations, you have one pool of 200K requests to work with.
Step-by-Step Migration Architecture
The reliable pattern is Extract → Transform → Load, executed in dependency order.
Step 1: Extract from Salesforce
Use Salesforce Bulk API 2.0 for all major object types:
import requests
import time
SF_BASE = "https://yourorg.my.salesforce.com"
SF_TOKEN = "Bearer <access_token>"
def create_bulk_query_job(query):
"""Create a Bulk API 2.0 query job."""
resp = requests.post(
f"{SF_BASE}/services/data/v60.0/jobs/query",
headers={
"Authorization": SF_TOKEN,
"Content-Type": "application/json"
},
json={
"operation": "query",
"query": query
}
)
resp.raise_for_status()
return resp.json()["id"]
def poll_job_status(job_id):
"""Poll until job completes."""
while True:
resp = requests.get(
f"{SF_BASE}/services/data/v60.0/jobs/query/{job_id}",
headers={"Authorization": SF_TOKEN}
)
state = resp.json()["state"]
if state == "JobComplete":
return True
if state in ("Failed", "Aborted"):
raise Exception(f"Job {job_id} failed: {resp.json()}")
time.sleep(5)
def download_results(job_id):
"""Download CSV results."""
resp = requests.get(
f"{SF_BASE}/services/data/v60.0/jobs/query/{job_id}/results",
headers={"Authorization": SF_TOKEN, "Accept": "text/csv"}
)
return resp.text
# Extract in dependency order
account_job = create_bulk_query_job(
"SELECT Id, Name, Industry, Phone, Website, OwnerId FROM Account"
)

Extraction order matters. Accounts first, then Contacts (they reference Accounts), then Opportunities (they reference Contacts and Accounts), then Activities and Custom Objects. Store Salesforce IDs — you need them to rebuild relationships. Keep raw exports immutable so you can re-transform without re-extracting.
Step 2: Transform
This is where the mapping table becomes code. Key transformations:
- Merge Salesforce Leads and Contacts into a single GHL Contact dataset. Add a lifecycle_stage custom field to distinguish them.
- Normalize phone numbers to E.164 format.
- Map Salesforce picklist values to GHL dropdown option IDs (or create matching options first via API).
- Map Opportunity stages to GHL Pipeline stage IDs. Create the Pipeline and stages in GHL first, capture the IDs, then reference them in the transform.
- Pre-compute formula fields. Salesforce formula fields export as the computed value at export time, not the formula. Decide which to keep as static values.
- Flatten compound addresses into GHL's separate address1, city, state, postalCode, and country fields.
- Resolve user ID mappings. Build a crosswalk from Salesforce Owner IDs to GoHighLevel User IDs.
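Stage mapping is the transform most worth failing loudly on. A sketch; the pipeline names and stage IDs below are hypothetical placeholders for the real IDs you capture from GHL after creating the Pipelines:

```python
# Hypothetical crosswalk -- capture the real pipelineStageId values from
# GHL after creating your Pipelines, then freeze this table.
STAGE_CROSSWALK = {
    ("Sales Pipeline", "Qualification"): "stage_abc123",
    ("Sales Pipeline", "Negotiation"): "stage_def456",
    ("Sales Pipeline", "Closed Won"): "stage_ghi789",
}

def map_stage(pipeline, sf_stage_name, crosswalk=STAGE_CROSSWALK):
    """Resolve a Salesforce StageName to a GHL pipelineStageId.
    Unmapped stages raise so they surface during the test migration,
    not as silently misplaced opportunities after cutover."""
    try:
        return crosswalk[(pipeline, sf_stage_name)]
    except KeyError:
        raise ValueError(
            f"No GHL stage mapped for {pipeline!r} / {sf_stage_name!r}"
        )
```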
Step 3: Load into GoHighLevel
Load in dependency order:
- Companies → capture GHL Company IDs
- Contacts → associate with Companies, capture GHL Contact IDs
- Pipelines/Stages → create if not already set up
- Opportunities → associate with Contacts and Pipeline stages
- Custom Objects → create records, build associations
- Notes/Tasks → attach to Contacts
// Node.js: Create contacts in GoHighLevel v2 with rate limiting
const axios = require('axios');
const GHL_BASE = 'https://services.leadconnectorhq.com';
const GHL_TOKEN = 'Bearer <your_token>';
const LOCATION_ID = '<your_location_id>';
async function createContact(contactData) {
try {
const resp = await axios.post(
`${GHL_BASE}/contacts/`,
{
firstName: contactData.firstName,
lastName: contactData.lastName,
email: contactData.email,
phone: contactData.phone,
companyName: contactData.companyName,
locationId: LOCATION_ID,
tags: contactData.tags || [],
customFields: contactData.customFields || []
},
{
headers: {
'Authorization': GHL_TOKEN,
'Content-Type': 'application/json',
'Version': '2021-07-28'
}
}
);
return resp.data.contact.id; // Store for relationship linking
} catch (err) {
console.error(`Failed: ${contactData.email}:`, err.response?.data);
throw err;
}
}
// Rate-limit wrapper: max 100 requests per 10 seconds
const BATCH_SIZE = 90; // Leave headroom below the 100/10s limit
const BATCH_INTERVAL_MS = 10000;
async function batchLoadContacts(contacts) {
for (let i = 0; i < contacts.length; i += BATCH_SIZE) {
const batch = contacts.slice(i, i + BATCH_SIZE);
const results = await Promise.allSettled(
batch.map(c => createContact(c))
);
results.forEach((r, idx) => {
if (r.status === 'rejected') {
        console.error('Load failed:', batch[idx].email, r.reason); // dead-letter for retry
}
});
if (i + BATCH_SIZE < contacts.length) {
await new Promise(resolve => setTimeout(resolve, BATCH_INTERVAL_MS));
}
}
}

Build an ID mapping table. For every Salesforce record loaded into GHL, store a mapping of {salesforceId: ghlId}. You need this to rebuild relationships (e.g., linking an Opportunity to its Contact) and for post-migration validation.
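A minimal sketch of such a mapping table, backed by SQLite so the crosswalk survives script restarts and stays queryable during validation:

```python
import sqlite3

def open_id_map(path="migration_idmap.db"):
    """Open (or create) the Salesforce-to-GHL ID mapping table."""
    con = sqlite3.connect(path)
    con.execute(
        """CREATE TABLE IF NOT EXISTS id_map (
               object_type TEXT NOT NULL,   -- 'contact', 'company', ...
               sf_id       TEXT NOT NULL,   -- Salesforce primary key
               ghl_id      TEXT NOT NULL,   -- ID returned by the GHL API
               PRIMARY KEY (object_type, sf_id)
           )"""
    )
    return con

def record_mapping(con, object_type, sf_id, ghl_id):
    """Idempotent write: re-running a load pass overwrites, never duplicates."""
    con.execute("INSERT OR REPLACE INTO id_map VALUES (?, ?, ?)",
                (object_type, sf_id, ghl_id))
    con.commit()

def lookup_ghl_id(con, object_type, sf_id):
    """Resolve a Salesforce ID during relationship rebuild; None = orphan."""
    row = con.execute(
        "SELECT ghl_id FROM id_map WHERE object_type = ? AND sf_id = ?",
        (object_type, sf_id),
    ).fetchone()
    return row[0] if row else None
```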
Step 4: Rebuild Relationships
After loading Contacts and Opportunities separately, use your ID mapping table to:
- Associate Contacts with Companies via the GHL API
- Link Opportunities to the correct Contact and Pipeline stage
- Create Custom Object associations
This step is impossible with CSV imports. It's the primary reason API-based or managed migrations exist.
Step 5: Validate
See the full Validation section below. At minimum: compare record counts, sample field-level data, and verify relationship integrity before cutover.
Error Handling and Logging
At minimum, log for every API call:
- Source object + source ID
- Target object + target ID
- Request payload hash
- Response code + error body
- Attempt count
- Final status: success / retry / dead-letter
Treat 429 as a retry/backoff signal. Treat most 4xx responses as mapping defects that need human review. Build idempotency into your load script: if a record already exists (matched by email or a Salesforce ID stored in a custom field), update it instead of failing.
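A sketch of that retry policy, written against a generic `send` callable (e.g., a closure over a requests call) so the backoff logic stays testable in isolation:

```python
import time

def backoff_delay(attempt, retry_after=None):
    """Seconds to wait before the next attempt: honor Retry-After when
    the server sends it, otherwise back off exponentially (2, 4, 8, ...)."""
    return float(retry_after) if retry_after is not None else float(2 ** attempt)

def call_with_backoff(send, max_attempts=5, sleep=time.sleep):
    """Run `send` (a zero-arg callable performing one HTTP request,
    returning an object with .status_code and .headers, e.g. a
    requests.Response) until it stops returning 429."""
    for attempt in range(1, max_attempts + 1):
        resp = send()
        if resp.status_code != 429:
            return resp  # 2xx, or a 4xx mapping defect for human review
        sleep(backoff_delay(attempt, resp.headers.get("Retry-After")))
    raise RuntimeError(f"Still rate-limited after {max_attempts} attempts")
```

Non-429 responses are returned to the caller unchanged, which keeps the "4xx means mapping defect" triage out of the transport layer.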
Edge Cases That Break Migrations
Duplicate Records
HighLevel automatically merges contacts based on phone number or email. If a match is found, the system updates existing records instead of creating duplicates. This is helpful during import but dangerous if your Salesforce data has legitimately different contacts sharing an email (e.g., info@company.com). Pre-deduplicate in the transform step and split shared emails into primary/additional email fields.
GHL duplicate behavior is also setting-dependent — the same load logic can behave differently across sub-accounts if the Allow Duplicate Contact setting isn't standardized first. (marketplace.gohighlevel.com)
Multi-Level Relationships
Salesforce's Account → Contact → Opportunity chain doesn't map to a single GHL import. You must:
- Create Companies (from Accounts)
- Create Contacts and associate with Companies
- Create Opportunities and associate with Contacts
Each step depends on IDs from the previous step. A CSV import cannot do this — it processes one object type at a time with no cross-referencing.
Notes and Activity History
Salesforce stores a rich activity timeline: Tasks, Events, EmailMessages, CaseComments. GHL's note model is simpler. CSV imports allow only one note per contact record with a 5,000-character limit. For full activity history, use the API to create multiple notes per contact, or accept that some historical context gets archived externally (e.g., in a PDF or data warehouse).
Attachments and Files
Salesforce stores files as ContentVersion / ContentDocument objects. GHL has no bulk file-import mechanism via API for contact-level attachments. Extracting files from Salesforce requires the standard REST API — the Bulk API does not support base64 file extraction. Your options:
- Upload files to GHL's media library via API and link manually
- Store files in an external system (Google Drive, S3) and add links as custom fields on GHL Contacts
- Accept that file migration is out of scope and archive separately
API Failures and Retries
Both APIs will return errors. Common patterns:
- GHL 429 (Too Many Requests): Implement exponential backoff. Respect the Retry-After header.
- Salesforce REQUEST_LIMIT_EXCEEDED: Pause and retry after the rolling window refreshes.
- GHL validation errors: Missing required fields, invalid phone formats, duplicate unique fields. Log to an error file for manual review.
If you use GoHighLevel webhooks for delta sync, verify X-GHL-Signature on incoming payloads. GHL only retries webhook deliveries on 429 responses, not on 5xx errors. (help.gohighlevel.com)
Limitations You Cannot Work Around
Be honest with stakeholders about what GHL cannot replicate from Salesforce:
- No true relational schema. Salesforce supports unlimited custom objects with complex lookup/master-detail relationships. GHL caps custom objects at 10 per location with limited relationship types. Association limits for one-to-N relationships top out at 1,000. (help.gohighlevel.com)
- No formula fields or roll-up summaries. Any derived data must be pre-computed and stored statically.
- Simplified reporting. Salesforce Reports and Dashboards with cross-object joins, grouped summaries, and scheduled snapshots don't have equivalents in GHL.
- No approval processes or multi-level workflows. Salesforce's approval chains and complex Flow logic must be rebuilt as simpler GHL Workflows or handled externally.
- Opportunities are simpler. The Opportunities module includes both "Status" and "Stage" fields which serve very similar purposes, and the default "Status" field is a locked standard field with predefined values that cannot be modified or removed. GHL Opportunities have no native CloseDate field.
- Custom Object surface gaps. Some product surfaces don't support custom objects — including Email Campaigns, Bulk Email/SMS, Conversations, Calendars, and Payments. (help.gohighlevel.com)
- Unique field constraints are one-way. If you downgrade a unique custom object field to non-unique, current docs indicate you cannot make it unique again. (help.gohighlevel.com)
These are real constraints, not temporary gaps. If your business process depends on Salesforce-grade object relationships or reporting depth, that part of the workflow may need to live in a different tool post-migration.
Validation & Testing
Migration without validation is just copying data and hoping.
Record Count Comparison
For every object type, compare:
- Source count (Salesforce) via SOQL: SELECT COUNT() FROM Contact
- Target count (GHL) via API: paginate through all records and count
Mismatch means something was dropped, duplicated, or filtered incorrectly.
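Once both sides are collected, the comparison reduces to a simple diff. A sketch that reports only the mismatches:

```python
def compare_counts(source_counts, target_counts):
    """Diff per-object record counts between Salesforce and GHL.
    Returns {} when everything matches; otherwise each mismatched
    object with both counts and the signed delta."""
    mismatches = {}
    for obj, src in source_counts.items():
        tgt = target_counts.get(obj, 0)
        if src != tgt:
            mismatches[obj] = {"salesforce": src, "ghl": tgt, "delta": tgt - src}
    return mismatches
```

A negative delta means records were dropped or filtered; a positive one usually means duplicates were created on load.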
Field-Level Validation
Sample 1–5% of records (minimum 50, maximum 500). For each sampled record:
- Compare every mapped field value between source and target
- Flag truncations, encoding issues, or missing values
- Pay special attention to phone number formatting, date fields, and currency values
Prioritize records most likely to break: multi-contact accounts, reopened opportunities, closed-lost deals, converted leads, and custom objects.
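The sampling rule above (1–5%, clamped to 50–500) is worth making deterministic so reruns check the same records. A sketch with a fixed seed:

```python
import random

def sample_for_validation(record_ids, rate=0.02, minimum=50,
                          maximum=500, seed=42):
    """Deterministic sample of record IDs for field-level validation:
    `rate` of the population, clamped to [minimum, maximum] and to the
    population size. The fixed seed makes reruns comparable."""
    k = min(max(int(len(record_ids) * rate), minimum), maximum)
    k = min(k, len(record_ids))
    return sorted(random.Random(seed).sample(record_ids, k))
```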
Relationship Validation
For sampled Accounts with multiple Contacts in Salesforce:
- Verify the same Contacts are associated with the correct Company in GHL
- Verify Opportunities are linked to the correct Contact
- Check Custom Object associations for orphaned records
UAT Process
Have actual users (sales reps, ops managers) work through their typical workflows in GHL:
- Find a known customer → verify data is complete
- Move an opportunity through the pipeline → verify automation fires
- Search by company name → verify grouping works
Rollback Planning
Before going live:
- Take a full GHL export (CSV) as a baseline
- Keep Salesforce active in read-only mode for at least 30 days post-cutover
- Document how to re-import from backup if a critical issue is discovered
Post-Migration Tasks
Data in GHL is only the first half. The operational migration follows:
- Rebuild automations. Salesforce Flows, Process Builder rules, and email alerts must be recreated as GHL Workflows. Document each Salesforce automation, map its logic to GHL's trigger/action model, and rebuild.
- Rebuild reports. GHL's reporting is simpler. Accept that some Salesforce report complexity won't translate. Build what you can natively; export to Google Sheets or a BI tool for anything complex.
- Run a final delta sync. Capture any records created or modified in Salesforce during the migration window.
- Reconnect integrations. Any downstream systems reading from or writing to Salesforce need to be re-pointed to GHL or replaced.
- Train users. GHL's UI is fundamentally different from Salesforce. Schedule hands-on sessions focused on daily workflows: finding contacts, moving opportunities, logging activities.
- Monitor for 30 days. Track data inconsistencies, missing records, broken automations, and user-reported issues. Dedicate a person to triage migration bugs for the first month.
- Don't decommission Salesforce immediately. Keep it in read-only mode until you've confirmed everything works in GHL. Take a final backup before shutting it down.
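The final delta sync above is typically driven by a SOQL query filtered on `SystemModstamp`, Salesforce's system-level last-modified timestamp. A minimal sketch of the query builder, assuming you recorded the snapshot time when the bulk extract started:

```python
from datetime import datetime, timezone

def delta_soql(object_name, fields, since):
    """Build a SOQL query for records created or modified after the
    migration snapshot. SystemModstamp also moves on system-level updates,
    so it is a safer watermark than LastModifiedDate for delta extraction."""
    ts = since.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # SOQL datetime literals are unquoted ISO 8601 values.
    return (
        f"SELECT {', '.join(fields)} FROM {object_name} "
        f"WHERE SystemModstamp > {ts} ORDER BY SystemModstamp ASC"
    )
```

Feed the resulting query to the same Bulk API extraction path used for the initial load, then upsert the delta into GHL keyed on the stored Salesforce IDs.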
Best Practices
- Run a test migration first. Load 100–500 records from each object type into a GHL sandbox sub-account. Validate your mapping logic. Fix issues, then run the full production migration.
- Back up everything. Salesforce full Data Export + GHL CSV export before any import pass.
- Validate incrementally. Don't wait until the end to check data quality. Validate after each object type is loaded.
- Log everything. Every API call, every error, every retry. When something breaks at record 47,392, you need to know exactly what happened.
- Don't migrate what you don't need. Closed-lost opportunities from 2018? Contacts with no email and no activity in 3 years? Leave them in the archive.
- Create custom fields before importing. HighLevel allows you to map CSV columns to both standard and custom fields, but only if those fields are already created in your account.
- Keep Salesforce IDs in GHL. They're your reconciliation keys for validation, rollback, and post-go-live debugging.
- Automate the repeatable parts. Use scripts for extraction, transformation, and loading. Reserve human judgment for mapping decisions and validation.
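The "log everything" and "automate the repeatable parts" practices combine naturally in a retry wrapper around every API call. A sketch with exponential backoff — the function and its defaults are illustrative, and the injectable `sleep` exists so the backoff can be tested without waiting:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("migration")

def with_retry(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Run an API call, logging every attempt and backing off on failure.

    `call` is a zero-argument callable wrapping one API request; keep the
    record ID in the closure so failures at record 47,392 are traceable."""
    for attempt in range(1, max_attempts + 1):
        try:
            result = call()
            log.info("attempt %d succeeded", attempt)
            return result
        except Exception as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
            if attempt == max_attempts:
                raise  # surface the error after exhausting retries
            sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
```

In practice you would also branch on HTTP 429 responses and honor any `Retry-After` header rather than treating all errors identically.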
When to Use a Managed Migration Service
DIY works when your Salesforce schema is simple (no custom objects, < 5K records), you have a developer with API experience and time to spare, and the migration timeline is flexible.
DIY fails when:
- Your Salesforce org has custom objects with complex relationships
- You need to preserve Account → Contact → Opportunity hierarchies
- Your engineering team is already at capacity
- The business can't tolerate extended downtime or data gaps
- You're migrating multiple Salesforce orgs into multiple GHL sub-accounts
The hidden cost of DIY isn't the coding — it's the debugging. Rate limit errors at 2 AM. Duplicate records discovered three weeks after cutover. Opportunities linked to the wrong Contact because the ID mapping had an off-by-one error. These are the failure modes that consume engineering time long after the "migration" was supposedly complete.
At ClonePartner, we handle the API constraints, relationship mapping, custom-object workarounds, and validation so your engineering team doesn't have to context-switch off product work. We build around Salesforce's Bulk API and GHL's rate limits to move data accurately in days, not weeks, with zero downtime.
For a closer look at how we operate, see How We Run Migrations at ClonePartner.
Frequently Asked Questions
- Can I migrate Salesforce custom objects to GoHighLevel?
- GoHighLevel supports up to 10 Custom Objects per location across all plans. If your Salesforce org has more than 10 active custom objects, you'll need to flatten some into custom fields on Contacts or archive them externally. Unique field constraints are limited to Single Line Text, Multi Line Text, Number, and Phone types — lookups and dates cannot be marked unique.
- What are GoHighLevel's API rate limits for migration?
- GoHighLevel's V2 API enforces a burst limit of 100 requests per 10 seconds and a daily limit of 200,000 requests, both scoped per Marketplace app per Location. The contacts endpoint returns a maximum of 100 records per request, requiring cursor-based pagination for larger datasets. The GET /contacts/ endpoint is deprecated — use POST /contacts/search instead.
- Can I use Zapier to migrate data from Salesforce to GoHighLevel?
- Zapier (via the LeadConnector app) works for real-time, record-level sync but is impractical for bulk historical migration. Each sub-account requires separate authentication, records process one at a time, and there's no support for rebuilding relational hierarchies like Account → Contact → Opportunity chains.
- How do Salesforce Leads map to GoHighLevel?
- GoHighLevel has no separate Lead object. Salesforce Leads should be merged into GHL Contacts during migration. Use tags or a custom dropdown field (e.g., lifecycle_stage = 'Lead') to distinguish them from converted Contacts.
- Will a CSV import preserve Salesforce relationships in GoHighLevel?
- No. CSV imports flatten relational data. Account → Contact → Opportunity hierarchies are destroyed because each object type imports independently with no cross-referencing. Use the GoHighLevel API to create records in dependency order and rebuild associations programmatically.
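The pagination described in the rate-limit FAQ can be sketched as a cursor loop over `POST /contacts/search`. This assumes the V2 search contract where each returned contact carries a `searchAfter` cursor array; verify field names against the current GHL API docs. The HTTP client is injected so the loop itself stays testable:

```python
def iter_contacts(post, location_id, page_limit=100):
    """Yield every contact in a location via POST /contacts/search.

    `post` is any callable(path, json=body) -> dict (e.g. a thin wrapper
    around requests with auth headers); inject a stub for testing. The
    page_limit default of 100 matches GHL's per-request maximum."""
    search_after = None
    while True:
        body = {"locationId": location_id, "pageLimit": page_limit}
        if search_after:
            body["searchAfter"] = search_after  # cursor from the last page
        resp = post("/contacts/search", json=body)
        contacts = resp.get("contacts", [])
        if not contacts:
            return  # empty page: done
        yield from contacts
        search_after = contacts[-1].get("searchAfter")
        if not search_after:
            return
```

Wrap the `post` callable with a rate limiter (at most 100 requests per 10 seconds per app per location) before running this against a production account.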