
Why ERP Migrations Fail at the Data Layer: 9 Core Patterns

Gartner predicts that by 2027, more than 70% of ERP initiatives will fail to meet their business case goals. Here are 9 data-layer failure patterns from 1,200+ projects — with detection checklists and fixes.

Raaj Raaj · 15 min read

Most ERP projects don't fail because the software is wrong. They fail because the data underneath it is wrong — dirty, incomplete, untransformed, or never reconciled. The software gets blamed, the integrator gets sued, and the business absorbs hundreds of millions in damage. But the root cause, project after project, is data.

Gartner predicts that by 2027, more than 70% of recently implemented ERP initiatives will fail to fully meet their original business case goals, and as many as 25% will fail catastrophically. (gartner.com) A McKinsey and University of Oxford study of over 5,400 large-scale IT projects found they deliver, on average, 56% less value than predicted. (mckinsey.com)

The business model of a new ERP only becomes real when legacy data is translated into the target system's posting logic, document model, and controls. If that translation is wrong, the software can be configured perfectly and still fail in production. Implementation partners are incentivized to focus on software configuration and business process mapping. Data migration gets treated as a secondary ETL task — a line item near the end of the Gantt chart. The result is a multi-million dollar system populated with corrupted master data, broken relational links, and unreconciled financial ledgers.

This article documents the 9 specific failure patterns we see repeatedly across ERP data migrations — drawn from over 1,200 migration projects and corroborated by the industry's most public disasters. Each pattern includes what it looks like, why it kills projects, how to detect it early, and how to fix it.

Warning

If your project has a testing lead, a training lead, and a change lead — but no named owner for reconciliation, cutover rehearsal, and rollback — the data layer is already underfunded.

The 9 Patterns of ERP Data Migration Failure

Pattern 1: Data Migration Treated as the Last 10%

What it is: The implementation plan spends months on software selection, configuration, and process workshops, then gives data migration a thin final phase — often a single line item that reads "migrate data" with no detail on scope, transformation rules, or validation criteria.

Why it kills projects: Data migration isn't a task. It's a workstream that consumes roughly 30–40% of total project effort when you include profiling, cleansing, mapping, transformation, validation, mock loads, reconciliation, and cutover execution. McKinsey's research groups migration and rollout plans alongside architecture, QA, and scope as core project disciplines — not end-stage admin. (mckinsey.com)

When data migration is squeezed into the final weeks, every defect becomes a go-live blocker with no time to fix it. Your new routing logic can't be tested if the legacy Bill of Materials lacks the dimensional data the new logic requires. The entire testing phase runs on synthetic data. When real data is finally loaded during UAT, the system breaks.

How to detect it early: Look at the project plan. If data migration has fewer than 10 line items or starts after UAT, you're already behind. The same is true if the implementation partner hasn't requested full access to your legacy database by month one.

How to fix it: Separate data migration from implementation. Run it as a parallel workstream with its own timeline, team, and acceptance criteria. For a deeper breakdown of why this split matters, see Why Data Migration Isn't Implementation.

Pattern 2: Master Data Assumed Clean

What it is: The project team assumes existing customer, vendor, product, and chart-of-accounts data is accurate because the business has been running on it for years. Nobody profiles it until the first mock load — at which point 30–60% of records turn out to be duplicates, stale, or structurally incompatible with the target ERP's validation rules.

Why it kills projects: Legacy systems accumulate decades of technical debt. A retailer might discover "Vendor A", "Vendor A Inc", and "VNDR A" are the same entity, tied to different payment terms and historical purchase orders. The target ERP's stricter validation rules reject records the legacy system happily accepted for years.

Target Canada's SAP implementation required entering up to 50 data elements for each of approximately 75,000 products. A court opinion later found the data was accurate only about 30% of the time. (ecf.ca8.uscourts.gov)

How to detect it early: Run a data profiling exercise in week one. Measure duplicate rates, null rates, invalid codes, inactive records, and cross-system mismatches on key entities. If duplicates exceed 15%, mandate a cleansing sprint before any mapping work begins.
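A minimal profiling sketch in Python, assuming the vendor master has been extracted to CSV; the file name and columns (name, status) are hypothetical:

```python
import pandas as pd

# Hypothetical extract of the legacy vendor master; column names are assumptions.
df = pd.read_csv("vendor_master_extract.csv", dtype=str)
total = len(df)

# Duplicate rate on a normalized name: lowercase, strip punctuation and legal suffixes,
# so "Vendor A" and "Vendor A Inc" collapse to the same key.
normalized = (
    df["name"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.replace(r"\b(inc|llc|ltd|corp)\b", "", regex=True)
    .str.strip()
)
duplicate_rate = normalized.duplicated().sum() / total

# Null rate per column and share of inactive records.
null_rates = df.isna().mean()
inactive_rate = (df["status"] != "ACTIVE").mean()

print(f"records: {total}")
print(f"duplicate rate (normalized name): {duplicate_rate:.1%}")
print(f"inactive rate: {inactive_rate:.1%}")
print("null rates:\n", null_rates.to_string())
```

If the duplicate rate this produces clears the 15% threshold above, the cleansing sprint comes first.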

How to fix it: Profile early, cleanse iteratively, and validate against the target schema — not the source schema. Establish a golden source for each master domain and force business approval of cleanup decisions. Don't wait until UAT to discover your vendor master has 40,000 records and only 12,000 are active.

Pattern 3: IT-Only Decision Making

What it is: Data migration decisions — what to migrate, how to map fields, what transformation rules to apply — are made entirely by IT or the systems integrator. Business stakeholders aren't involved until UAT, at which point they discover the data doesn't match how they actually run the business.

Why it kills projects: IT can move data. Only the business knows what the data means. A field called "customer type" in the old system might map to three different fields in the new ERP depending on the business unit. IT maps the status field perfectly, but the sales team relies on a custom text field to track the actual order status. The semantic gap between database schema and business reality destroys data utility. Gartner specifically warns that tech-centric, IT-driven ERP programs miss business expectations. (gartner.com)

How to detect it early: Ask who signed off on the data mapping document. If the answer is only technical leads, no business process owners have validated the logic.

How to fix it: Assign a business data steward to every major entity — customers, products, financials, vendors. They review mapping rules, approve transformation logic, and participate in mock cutover validation. Data mapping is a business decision translated into code, not an IT exercise.

Pattern 4: Generic Migration Tools Used for Non-Generic Data

What it is: The team relies on the ERP vendor's native migration templates — SAP Migration Cockpit, Microsoft FastTrack data packages — or a general-purpose ETL tool. These work for standard objects. They break on custom fields, complex hierarchies, multi-currency ledgers, and any data structure that doesn't match the vendor's predefined templates.

Why it kills projects: Native tools are designed for the vendor's ideal data model, not your actual data. SAP's migration tooling is built around predefined migration objects with template-based extension when the delivered objects aren't enough. (help.sap.com) Dynamics 365 uses data entities that encapsulate standard business concepts. Both expect pre-transformed, largely flat data. They choke on hierarchical asset maintenance records, nested parent-child relationships, and 15 years of accumulated customizations — forcing you to flatten, truncate, or abandon critical information just to get the data to load.

The distinction matters: vendor-native migration tools fit standard objects and standard destinations. Generic ETL and iPaaS tools handle extraction, transformation, and connectivity at scale. Neither category automatically understands your custom approval logic, reporting controls, or composite documents.

How to detect it early: Ask the integrator to demonstrate how they'll handle your top 5 most complex data objects. If the answer is "we'll map it to a custom field" or "we'll handle it in a later phase," push harder. If they hand your team empty CSV templates and tell you to fill them out, that's your signal.

How to fix it: Use native tooling for standard master data and open transactions. Engineer the exceptions separately with purpose-built extraction and transformation logic. Accept that the standard tool handles 60–70% of your data and build custom scripts for the rest. See Dynamics 365 On-Premise Migration: Microsoft FastTrack vs. Partner for a technical breakdown of vendor tool limitations.
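To make "engineer the exceptions" concrete, here is a minimal sketch that flattens a hypothetical parent-child BOM extract into the level-by-level rows a template loader typically expects. Every record and field name is an assumption:

```python
from collections import defaultdict

# Hypothetical legacy BOM rows: (component_id, parent_id, quantity).
# A parent_id of None marks a top-level assembly.
legacy_bom = [
    ("ASM-100", None, 1),
    ("SUB-110", "ASM-100", 2),
    ("PRT-111", "SUB-110", 4),
    ("PRT-112", "SUB-110", 1),
    ("PRT-120", "ASM-100", 8),
]

children = defaultdict(list)
for component, parent, qty in legacy_bom:
    children[parent].append((component, qty))

def flatten(parent=None, level=0, path=""):
    """Walk the hierarchy depth-first, emitting one flat row per edge."""
    for component, qty in children.get(parent, []):
        row_path = f"{path}/{component}" if path else component
        yield {"level": level, "parent": parent, "component": component,
               "quantity": qty, "path": row_path}
        yield from flatten(component, level + 1, row_path)

for row in flatten():
    print(row)
```

The point is not this specific shape; it's that the hierarchy is preserved in a `path` column the target can reconstruct, instead of being truncated to fit a flat template.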

When Reconciliation and Rollbacks Are Ignored (Patterns 5–9)

Pattern 5: No Reconciliation Strategy

What it is: The team loads data into the new ERP but has no systematic process to verify that what went in matches what came out. Trial balances don't tie. Record counts are off. Nobody notices until the first month-end close.

Why it kills projects: A simple row count is not reconciliation. If a currency conversion script fails on a subset of invoices, the general ledger will be off by millions. National Grid's 2012 SAP go-live is the textbook example: unpaid supplier invoices exceeded 15,000, auditors could not rely on the financial data, and the monthly close stretched from 4 working days to 43. (regmedia.co.uk)

How to detect it early: Ask the project team: "What is your reconciliation methodology for the trial balance at cutover?" If the answer is vague or deferred, the go-live trial balance will not match the source. If nobody can show you a reconciliation matrix 90 days before go-live, you don't have one.

How to fix it: Define reconciliation checkpoints for every mock cutover: record counts by entity, financial balances by GL account, subledger-to-GL ties, open transaction totals, and trial balance. Automate the comparison and run it after every mock load. For detailed guidance, see 7 Costly Mistakes to Avoid When Migrating Financial Data.
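A minimal reconciliation sketch, assuming counts and GL balances have already been extracted from both systems; the entities, accounts, and tolerance are illustrative:

```python
# Hypothetical post-load reconciliation: compare counts and balances per checkpoint.
source_counts = {"customers": 41_882, "vendors": 12_047, "open_invoices": 9_310}
target_counts = {"customers": 41_882, "vendors": 12_031, "open_invoices": 9_310}

source_gl = {"1000-Cash": 1_204_551.23, "4000-Revenue": -9_882_310.00}
target_gl = {"1000-Cash": 1_204_551.23, "4000-Revenue": -9_882_309.17}

TOLERANCE = 0.01  # illustrative; set per GL account with finance sign-off

failures = []
for entity, expected in source_counts.items():
    actual = target_counts.get(entity, 0)
    if actual != expected:
        failures.append(f"COUNT {entity}: source={expected} target={actual}")

for account, expected in source_gl.items():
    actual = target_gl.get(account, 0.0)
    if abs(actual - expected) > TOLERANCE:
        failures.append(f"BALANCE {account}: off by {actual - expected:+.2f}")

if failures:
    raise SystemExit("Reconciliation FAILED:\n" + "\n".join(failures))
print("Reconciliation passed on all checkpoints.")
```

Run the same script after every mock load; the failure list should shrink to zero before go-live, not be explained away.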

Pattern 6: Customizations Migrated as Data

What it is: The old ERP has years of customizations — pricing rules, approval workflows, tax logic, commission calculations — embedded in configuration tables or hacked into data fields. The migration team treats these as data to copy rather than process logic to re-implement in the new system's native framework.

Why it kills projects: Copying configuration data from SAP ECC into S/4HANA, or from GP into Business Central, doesn't preserve behavior. The new system's processing engine interprets the data differently. You end up with pricing rules that don't fire, workflows that route to nonexistent roles, and tax calculations that silently produce wrong numbers.

A legacy system might use a "dummy" customer record to park unallocated inventory — the new ERP imports this as a legitimate customer, skewing analytics and triggering automated workflows. The National Grid audit found an overly ambitious SAP design with 636 RICEFW components (reports, interfaces, conversions, enhancements, forms, and workflows). That is what custom-logic carryover looks like at scale. (regmedia.co.uk)

How to detect it early: Look for records with names like "DO NOT USE", "SYSTEM ADMIN", or "PARKING". Check if the mapping workbook is full of custom flags and lookup tables with no documentation explaining the business behavior those fields trigger.
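A minimal scan for placeholder records, assuming a customer master extract; the marker patterns are illustrative and should come from your own data stewards:

```python
import re

# Illustrative marker patterns that often flag "logic hiding in data".
MARKERS = re.compile(r"do not use|dummy|parking|system admin|zzz|test", re.IGNORECASE)

# Hypothetical customer master rows: (customer_id, name).
customers = [
    ("C-0001", "Acme Industrial Ltd"),
    ("C-0002", "ZZZ - DO NOT USE"),
    ("C-0003", "Parking - Unallocated Stock"),
]

suspects = [(cid, name) for cid, name in customers if MARKERS.search(name)]
for cid, name in suspects:
    print(f"review before migrating: {cid} -> {name!r}")
```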

How to fix it: Audit every customization in the source system. Classify each as: (a) rebuild natively in the target, (b) migrate as data with validation, or (c) retire. Never assume a one-to-one copy preserves behavior.

Pattern 7: Single Mock Cutover

What it is: The project plan includes one mock cutover before go-live. That single rehearsal is the first time anyone sees the full migration executed end-to-end — and it inevitably surfaces dozens of issues. With only one rehearsal, there's no time to fix those issues and re-test.

Why it kills projects: A mock cutover is a full-speed test of every extraction script, transformation rule, load sequence, validation check, and rollback procedure. The first one always fails in ways you didn't predict. If the mock takes 48 hours, the team assumes the real cutover will be faster. It takes 72, blowing past the weekend downtime window and bleeding into Monday morning operations.

The National Grid audit documented exactly this: errors were still being found in final test stages, fixes were installed, and there was no time for retesting. (regmedia.co.uk)

How to detect it early: Check the project timeline. If there's only one mock cutover scheduled, or if mocks are scheduled less than 4 weeks before go-live, the real cutover will be the team's actual dress rehearsal.

How to fix it: Schedule a minimum of three mock cutovers, each followed by a formal retrospective. Track defects found in each iteration. The first mock finds surprises. The second proves the fixes. The third proves repeatability — it should be clean enough that you'd be comfortable going live from it. If it's not, the go-live date moves.

Pattern 8: Historical Data Over-Scope

What it is: The project attempts to migrate 10–15 years of transactional history — closed purchase orders, resolved support tickets, archived invoices — into the new ERP. The justification is usually "someone might need it" or "legal requires it."

Why it kills projects: Every year of history multiplies data volume, transformation complexity, and cutover duration. Modern cloud ERPs enforce API rate limits. Attempting to push 50 million closed invoices at 100 API calls per second throttles the migration to a halt, extending the cutover by weeks — not hours. Most of that history is never accessed in the new system. It bloats storage costs and slows reporting queries.
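The arithmetic is easy to check. A back-of-envelope sketch in Python, assuming one API call per record and the illustrative figures above:

```python
# Back-of-envelope cutover duration under an API rate limit.
records = 50_000_000          # closed invoices in scope (illustrative)
calls_per_record = 1          # optimistic: no retries, no dependent lookups
rate_limit = 100              # calls per second (illustrative)

seconds = records * calls_per_record / rate_limit
print(f"{seconds / 86_400:.1f} days of continuous loading")  # ~5.8 days

# With realistic retries, throttling backoff, and dependent lookups,
# 3-5 calls per record is common -- pushing the load past two weeks.
```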

Microsoft's Dynamics guidance says explicitly: don't migrate more data than users need. Data that people need to see doesn't always have to live in the new system. (learn.microsoft.com)

How to detect it early: Ask: "What is the oldest transaction date in the migration scope?" Then ask: "When was the last time anyone accessed a record from that year?" If no one can answer the second question, the scope is too broad.

How to fix it: Migrate only the minimum history required for operational continuity — typically open transactions, current balances, and 1–2 years of closed transactions for audit purposes. Archive everything else in a read-only format accessible outside the ERP. For a detailed scoping framework, see What Data Should You Actually Migrate to Your New ERP?.

Pattern 9: No Rollback Plan

What it is: The team plans a big-bang cutover weekend with no defined abort criteria. When something breaks at hour 36 — and something always breaks — no one knows whether to push forward or roll back. The decision gets made politically, not technically.

Why it kills projects: Without pre-defined abort criteria ("if the trial balance is off by more than $X, we roll back"), the go/no-go decision becomes a negotiation between exhausted project managers and nervous executives. National Grid's 2012 go-live decision was made under heavy operational pressure from Hurricane Sandy recovery. They pushed forward into a system that wasn't ready. The cleanup cost hundreds of millions.

How to detect it early: Ask: "What are the specific conditions under which we abort the cutover and revert to the legacy system? Who has the authority to make that call? How long does a rollback take?" If these questions produce blank stares, you don't have a rollback plan.

How to fix it: Document explicit abort criteria before the first mock cutover. Assign a single decision-maker with authority to call the rollback. Test the rollback procedure during at least one mock cutover. Define reversible and irreversible steps, the last safe abort point, source freeze and unfreeze steps, and the communication tree.
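One way to make abort criteria executable rather than rhetorical is to encode them as data. A minimal sketch with illustrative thresholds; the real numbers belong to finance and the cutover owner:

```python
from dataclasses import dataclass

@dataclass
class CutoverStatus:
    trial_balance_delta: float   # absolute difference, source vs. target
    failed_record_pct: float     # share of records rejected on load
    hours_elapsed: float         # time since cutover start

# Illustrative abort thresholds -- agree on these before the first mock.
MAX_TB_DELTA = 10_000.00    # dollars
MAX_FAILED_PCT = 0.5        # percent
LAST_SAFE_ABORT_HOURS = 40  # past this point, rollback is no longer clean

def abort_decision(s: CutoverStatus) -> list[str]:
    """Return the list of tripped abort criteria; empty means proceed."""
    tripped = []
    if s.trial_balance_delta > MAX_TB_DELTA:
        tripped.append(f"trial balance off by {s.trial_balance_delta:,.2f}")
    if s.failed_record_pct > MAX_FAILED_PCT:
        tripped.append(f"{s.failed_record_pct:.1f}% of records rejected")
    if tripped and s.hours_elapsed >= LAST_SAFE_ABORT_HOURS:
        tripped.append("past last safe abort point -- escalate immediately")
    return tripped

print(abort_decision(CutoverStatus(14_250.00, 0.2, 36.0)))
```

If a criterion trips at hour 36 of a rehearsal, the rollback gets practiced there, not invented at hour 36 of the real cutover.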

Danger

A cutover runbook without explicit abort criteria is not a runbook. It is a hope document.

Case Studies in Catastrophe: Target Canada and National Grid

These aren't obscure failures. They're two of the most expensive and well-documented ERP disasters in the last 15 years, and both trace directly to data-layer failures.

Target Canada (2013)

Target entered Canada by acquiring 220 Zellers leases and planned to open 124 stores by end of 2013, choosing SAP as their supply chain backbone. Because they were entering a new market rather than migrating from a legacy system, they had to enter product data from scratch — up to 50 data elements for each of approximately 75,000 items.

Under extreme time pressure, data entry was outsourced to staff with minimal training and no validation guardrails. A court opinion found the system's data was accurate only about 30% of the time — compared to 98–99% accuracy in Target's US operations. (ecf.ca8.uscourts.gov)

Product dimensions, weights, costs, and currency fields were riddled with errors. The auto-replenishment algorithms failed entirely. The warehouse systems couldn't pack trucks properly because the ERP thought a single toothbrush was the size of a pallet. Store shelves sat empty while distribution centers overflowed with undeliverable inventory.

Target Canada filed for bankruptcy in January 2015, closing all 133 stores and leaving approximately 17,600 people without jobs.

National Grid USA (2012)

National Grid's SAP implementation went live on November 5, 2012, days after Hurricane Sandy devastated their service area. The system immediately failed across payroll, procurement, and financial reporting.

Employees were paid incorrectly — the utility absorbed roughly $8 million in unrecoverable overpayments and paid $12 million in settlements for underpayments. Within two months, over 15,000 vendor invoices were unprocessable. The financial close stretched from 4 working days to 43, cutting the company off from short-term borrowing. (regmedia.co.uk)

A state-sponsored audit cited poor legacy data quality, limited data availability during testing, and an overly ambitious SAP design. The broader program ultimately cost approximately $945.1 million against a sanctioned budget of $383.8 million. (regmedia.co.uk) The remediation effort alone required 850+ contractors at a burn rate of roughly $30 million per month. National Grid sued Wipro (the systems integrator) and settled for $75 million. (upperedge.com)

Both cases share the same DNA: data wasn't profiled, wasn't validated, wasn't reconciled, and wasn't tested under real-world conditions.

The 90-Day Go-Live Detection Checklist

If you're a project sponsor, CIO, or audit committee member, ask these 12 questions exactly 90 days before go-live. Each maps to one or more of the 9 failure patterns above.

  1. What percentage of the project budget is allocated to data migration? (Pattern 1 — if it's under 20%, it's underfunded)
  2. When was the last data profiling report produced, and what were the duplicate and null rates on master data? (Pattern 2)
  3. Which business process owners have signed off on the data mapping document? (Pattern 3)
  4. How are we handling data objects that don't fit the native migration templates? (Pattern 4)
  5. What is the reconciliation methodology for the trial balance at cutover? (Pattern 5)
  6. How many legacy customizations are being migrated as data versus rebuilt natively? (Pattern 6)
  7. How many mock cutovers have been completed, and how many defects were found in each? (Pattern 7)
  8. What is the oldest transaction record in migration scope, and who requested it? (Pattern 8)
  9. What are the specific abort criteria for the cutover weekend? (Pattern 9)
  10. Who has single-point authority to call a rollback, and have they rehearsed it? (Pattern 9)
  11. What is the expected cutover duration, and how does it compare to the last mock's actual duration? (Patterns 7, 9)
  12. Is the data migration team the same team doing implementation, or is it a dedicated workstream? (Pattern 1)

If more than three of these questions produce vague or deferred answers, your project is exhibiting the failure patterns documented above. Escalate before go-live, not after.

What a Well-Run ERP Data Migration Looks Like

A good migration isn't defined by the absence of problems — it's defined by catching them early enough to fix. Here's the sequence that works:

  1. Data profiling and scope definition (weeks 1–3): Profile every source entity. Measure completeness, uniqueness, accuracy. Define what migrates, what archives, what gets retired.
  2. Business-validated mapping (weeks 3–6): Data stewards from each business unit review and sign off on field-level mapping. Transformation rules are documented and testable.
  3. Custom extraction and transformation scripts (weeks 4–8): Build purpose-specific migration logic for complex entities. Don't force non-standard data through standard templates.
  4. Mock cutover #1 (week 8): Full end-to-end run. Expect failures. Capture every defect. Measure duration.
  5. Defect remediation and re-profiling (weeks 9–10): Fix issues from mock #1. Re-profile source data for any new quality issues.
  6. Mock cutover #2 (week 11): Faster, cleaner. Reconciliation checks should pass on most entities. Rollback procedure tested.
  7. Mock cutover #3 (week 13): This should be clean enough to be the real thing. If it's not, the go-live date moves.
  8. Go-live cutover (week 14–15): Execute with continuous sync running to minimize downtime (see the sketch after this list). Reconciliation checks pass before the business is switched over.
  9. Post-go-live validation (week 15–16): Compare source and target across all reconciliation checkpoints. Confirm with business users that operational data matches expectations.
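A watermark-based delta sync is one common way to implement the continuous sync in step 8. A minimal sketch, assuming the source exposes a last-modified timestamp and the target supports idempotent upserts; both functions are stubs to be filled in for your systems:

```python
import time
from datetime import datetime, timezone

def fetch_changed_rows(since: datetime) -> list[dict]:
    """Pull source rows modified after the watermark (stub)."""
    # e.g. SELECT * FROM orders WHERE modified_at > :since
    return []

def upsert_into_target(rows: list[dict]) -> None:
    """Idempotent write keyed on the business key (stub)."""

def sync_loop(poll_seconds: int = 60) -> None:
    watermark = datetime.now(timezone.utc)
    while True:
        # Capture the new watermark BEFORE fetching, so rows modified
        # during the fetch are picked up by the next cycle instead of lost.
        batch_start = datetime.now(timezone.utc)
        rows = fetch_changed_rows(since=watermark)
        if rows:
            upsert_into_target(rows)
        watermark = batch_start
        time.sleep(poll_seconds)
```

Because the upsert is idempotent, re-processing a row after a retry is harmless; that property is what keeps the freeze window short.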

This isn't a luxury timeline. It's a 16-week sprint that prevents the 2-year cleanup.

Warning

The cost of skipping steps is not linear. A data defect caught during profiling costs hours to fix. The same defect caught during cutover costs days. The same defect caught after go-live costs months — and sometimes lawsuits.

How ClonePartner Approaches ERP Data Migration

We've completed over 1,200 migration projects. The patterns above aren't theoretical — they're the specific failure modes we've been hired to prevent or rescue.

Our approach is built around the pain points this article describes:

  • Custom scripts for non-generic data. We don't rely on vendor migration templates for complex entities. We write purpose-built extraction and transformation logic that handles your actual data structures — hierarchical records, multi-currency ledgers, custom relational objects.
  • Reconciliation built into every migration. Trial balance matching, record count verification, and automated comparison reports are standard, not optional add-ons.
  • Multiple mock cutovers. We run iterative mock cutovers with formal defect tracking and retrospectives. The real cutover should never surface surprises.
  • Continuous data sync. We keep legacy and target systems in sync during the transition period, reducing the freeze window for reference and in-flight data. For open financial periods and operational cutover, we maintain a controlled period boundary with a tested rollback path.
  • Defined rollback criteria. Every migration plan includes explicit abort conditions and a tested rollback procedure.

If your ERP project is approaching go-live and the detection checklist above raised concerns, we can assess your data migration readiness and close the gaps — before they become $585 million problems.

Frequently Asked Questions

What is the single most common cause of ERP migration failure?
Treating data migration as the last 10% of the project instead of the first 30–40%. When data profiling, cleansing, and reconciliation are deferred to the final weeks, every defect becomes a go-live blocker with no time to fix. This single scheduling decision cascades into dirty master data, failed reconciliation, and blown cutover windows.
How do we know if our ERP project is already in trouble?
If you're 90 days from go-live and can't produce profiling results, a signed reconciliation matrix, dates for at least two mock cutovers, and written abort criteria, the project is at risk. Use the 12-question detection checklist in this article — if more than three questions produce vague answers, escalate immediately.
Can we recover an ERP migration mid-project?
Usually yes, if you act before go-live. The fastest recovery path: separate data migration into a dedicated workstream, run an emergency data profiling sprint, cut historical data scope, add business owners to data decisions, and make reconciliation and rollback non-negotiable before the next mock cutover. Post-go-live recovery is 10–100x more expensive — National Grid's post-go-live cleanup cost $585 million.
How many mock cutovers should an ERP migration include?
A minimum of three. The first mock cutover surfaces unexpected failures in extraction scripts, transformation rules, and load sequences. The second should be significantly cleaner. The third should be clean enough that you'd be comfortable going live from it. If the third mock still has critical defects, move the go-live date.
Should we migrate all historical data into the new ERP?
No. Migrate only what supports day-one operations, statutory reporting, and near-term reference — typically open transactions, current balances, and 1–2 years of closed transactions for audit. Archive everything older in a read-only format accessible outside the ERP. Migrating 10–15 years of history multiplies cutover duration and transformation complexity without corresponding business value.
