Dynamics GP to Business Central: Migrating 20 Years of History

How to move from Dynamics GP to Business Central without losing audit-critical history: what to migrate live, summarize, and archive.

Raaj Raaj · 15 min read

Moving from Dynamics GP to Business Central is not a lift-and-shift. It is a structural translation — from a segmented, on-premise ERP built on Dexterity to a cloud-native, dimension-based platform built on AL. The migration itself is not what keeps controllers up at night. It is the 15–25 years of GL detail, paid vendor invoices, reconciled bank lines, and check images sitting in GP's SQL database.

The question every finance leader asks: "How do I move without losing access to the past?"

The answer: you do not migrate everything. You tier your data, archive strategically, and build an audit trail that proves chain-of-custody from the old system to the new. This guide covers the exact GP sunset timeline, the three-tier data strategy that works, the technical quirks that trip up every migration, and the reconciliation playbook your auditors will demand.

For broader context on Microsoft's on-premise sunset strategy, see The Ultimate 2026 Guide to Dynamics 365 On-Premise End of Life.

The Dynamics GP sunset timeline: what "end of life" actually means

Microsoft is not flipping a switch. GP is being phased out over a defined timeline, and the distinction between "no longer evolving" and "unsupported" matters for planning.

Here are the dates that matter, sourced from Microsoft's official lifecycle documentation:

  • April 1, 2025: End of new perpetual license sales to new customers
  • April 1, 2026: End of new subscription license sales; GP available only to existing licensed users
  • December 31, 2029: End of product enhancements, regulatory/tax updates, service packs, and technical support
  • April 30, 2031: End of security patches; end of subscription billing and SPLA usage

The December 31, 2029 date is a revision — it replaces the previously announced September 30, 2029. (learn.microsoft.com)

December 2029 is the operational cliff. After that, GP will no longer receive year-end payroll updates, state tax table releases, or regulatory compliance patches. If you process payroll through GP or rely on it for 1099 reporting, your system becomes a compliance liability on January 1, 2030.

April 2031 is the hard stop for subscription and SPLA customers — Microsoft will terminate billing and you must uninstall. Perpetual license holders can technically keep running GP past 2031, but with zero patches and a shrinking pool of Dexterity developers, that is a managed decline, not a strategy. (learn.microsoft.com)

The takeaway for controllers: "end of life" does not mean "broken tomorrow." You still have a support runway, but not an innovation runway.

Warning

If you are running GP 2018 R2 or earlier under the Fixed Lifecycle Policy, your support end date may have already passed. Only versions on the Modern Lifecycle Policy (v18.1+) get the 2029/2031 timeline.

The historical data anxiety: what retention actually requires

Your GP database likely holds every GL entry, every vendor payment, every customer invoice, and every bank reconciliation from the last 10–25 years. GP tables like GL20000 (Year-to-Date Transaction Open) and GL30000 (Account Transaction History) can hold millions of rows across hundreds of gigabytes.

Auditors expect rapid access to this data. But the IRS does not impose one blanket seven-year rule for every record type. The IRS says to keep records as long as the relevant period of limitations remains open — often 3 years, sometimes 6 years, 7 years for certain bad-debt or worthless-securities claims, and indefinitely in no-return or fraud cases. Many finance teams adopt a seven-year retention policy because it is easy to operate, but that is a policy choice, not a universal IRS rule. (stayexempt.irs.gov)

Retention does not mean everything has to live in Business Central. The IRS requires records to be "available for inspection" — not stored inside your active ERP. A queryable SQL archive or Azure Data Lake satisfies this requirement just as well as a live ledger entry. Microsoft's own GP-to-BC migration flow supports copying the entire GP database to Azure Data Lake for future reference and for access to historical data that is not migrated into BC. (learn.microsoft.com)

The real risk is not regulatory — it is operational. Your controller needs to answer questions like "What did we pay Vendor X in Q3 2019?" or "Show me the original invoice for PO-41822." The migration plan must guarantee those answers are still reachable after cutover.
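
To make that concrete, here is a minimal sketch of answering the Vendor X question against a read-only restore of the GP company database. It assumes pyodbc and the standard GP payables history tables (PM00200 for the vendor master, PM30200 for paid transaction history); verify the table and column names against your own schema before relying on it.

```python
# Minimal sketch: query a read-only GP archive for payments to one vendor in Q3 2019.
# Assumes a restored GP company database and the standard GP table layout
# (PM00200 = Vendor Master, PM30200 = PM Paid Transaction History); verify against your schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=gp-archive.example.local;DATABASE=TWO;"   # hypothetical archive server / company DB
    "Trusted_Connection=yes;Encrypt=yes;TrustServerCertificate=yes;"
)

sql = """
SELECT v.VENDORID, v.VENDNAME, h.DOCNUMBR, h.DOCDATE, h.DOCAMNT
FROM PM30200 h
JOIN PM00200 v ON v.VENDORID = h.VENDORID
WHERE h.DOCTYPE = 6                       -- 6 = payment in GP payables document types
  AND h.VOIDED = 0
  AND h.DOCDATE BETWEEN '2019-07-01' AND '2019-09-30'
  AND v.VENDNAME LIKE ?
ORDER BY h.DOCDATE
"""

for row in conn.cursor().execute(sql, "%Vendor X%"):
    print(row.DOCNUMBR, row.DOCDATE.date(), f"{row.DOCAMNT:,.2f}")
```

If finance can run that kind of query (or have IT run it) in minutes, the archive passes the operational test.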

The limitations of the Microsoft Cloud Migration Tool

Microsoft provides a native Cloud Migration Tool designed to move GP data to Business Central. It handles the standard path, but it has architectural limitations that advisory partners often fail to disclose until the project is underway.

The native tool posts summary GL transactions for open and historical years and stores detailed GP Historical Snapshot data in extension tables within Business Central. These extension tables are not part of the live GL — they are accessible through dedicated list pages and can be surfaced via Power BI, Power Apps, or third-party reporting tools. (learn.microsoft.com)

Warning

The native tool limitation: The Microsoft Cloud Migration Tool does not migrate detailed historical GL transactions into the live Business Central ledger. It only posts summary amounts for historical years. The deep transaction detail is pushed to extension tables that cannot be queried like regular ledger entries in the standard BC interface; you need the dedicated snapshot list pages, custom Power BI reports, or Power Apps to reach your own historical data.

Because of this limitation, attempting to force decades of GP history through the standard Microsoft pipeline usually results in broken reporting and frustrated finance teams.

The three-tier historical data strategy for Business Central

Not everything belongs in BC. Not everything belongs in an archive. The tiers are defined by how frequently the data is accessed and whether it needs to be transactionally active.

Tier 1: Migrate live
  • What belongs here: Open AR/AP, active master data, current FY + prior FY GL, bank/checkbook data, open POs, on-hand inventory
  • Where it lives after go-live: Live Business Central company
  • Why: Users need it to transact on day one

Tier 2: Migrate as historical
  • What belongs here: 2–5 years of summary GL balances, closed transaction history, aging snapshots at cutover
  • Where it lives after go-live: Extension tables, historical BC company, or reporting model
  • Why: Finance needs comparatives, not operational posting

Tier 3: Archive, don't migrate
  • What belongs here: Everything older, plus low-trust or low-use legacy detail
  • Where it lives after go-live: SQL Server, Azure Data Lake, document repository, or read-only GP
  • Why: Audit access without contaminating the live ERP

This model aligns with Microsoft's tooling. The native GP migration posts summary GL amounts for open and historical years, lets you choose the oldest historical year to bring into BC, migrates open receivables/payables by their remaining amount, brings over open purchase orders, and stores deeper history in GP Historical Snapshot extension tables meant for Power BI and other reporting tools. (learn.microsoft.com)

Tier 1: Migrate live into Business Central

This is data that users will interact with on Day 1:

  • Open transactions: Unpaid AP invoices, outstanding AR invoices, un-received purchase orders, un-shipped sales orders
  • Current FY + prior FY general ledger: Summary GL balances for the current and immediately prior fiscal year, posted as opening balances — enabling immediate year-over-year comparative reporting
  • All active master data: Customers, vendors, items, chart of accounts, payment terms, shipping methods
  • Bank reconciliation state: The most recent reconciled bank balance and any outstanding items (uncleared checks, deposits in transit)
  • Inventory: On-hand quantities, serial/lot data

Microsoft's native tool supports active/inactive filtering for customers, vendors, items, and checkbooks. Outstanding receivables and payables migrate with the remaining balance only — partially paid invoices come across at net, not gross. (learn.microsoft.com)

Danger

Deposit all posted cash receipts in GP before the final migration run. Undeposited receipts are not migrated by the native tool. If you skip this step, bank reconciliation drifts on day one. (learn.microsoft.com)

Tier 2: Migrate as historical snapshots

This is reference data that finance needs for reporting but does not need to be transactionally active:

  • 2–5 years of summary GL balances by period, broken out by account and dimension
  • Closed AP/AR transactions in summary form (vendor/customer aging snapshots at cutover)
  • Historical sales and purchase transaction headers for lookup purposes

The Cloud Migration Tool's GP Historical Snapshot feature stores this data in extension tables within Business Central. This data does not post to the live GL.

Info

You control how far back the snapshot goes using the Oldest Snapshot Year field in the GP Company Migration Configuration page. Set this deliberately — every additional year increases migration time and extension table size.

Some partners recommend a separate historical or archive company within BC to keep the production company clean. This is a reasonable pattern if your finance team needs frequent access to Tier 2 data but wants to keep the live environment fast.

Tier 3: Archive, don't migrate

Everything older than your Tier 2 window — typically anything beyond 5–7 years — should be extracted and moved to a queryable archive:

  • SQL Server database: Restore the GP backup to a read-only SQL instance. Simplest option.
  • Azure Data Lake or Microsoft Fabric: Cloud-native querying with low storage costs. Microsoft's Data Lake copy option is built for exactly this. (learn.microsoft.com)
  • Document repository: Scanned checks, invoices, and supporting documents.

Keep GP itself available in read-only mode if needed — a dormant GP instance on a locked-down VM costs far less than trying to force decades of history into Business Central. Plan for this archive to remain accessible for at least 7 years post-cutover.
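
If you build the Tier 3 archive yourself instead of using Microsoft's Data Lake copy option, the extraction step can be as simple as exporting the large GP history tables to Parquet and dropping the files into blob or Data Lake storage. The sketch below assumes pandas, SQLAlchemy, and pyarrow; the table list, server name, and paths are illustrative, not prescriptive.

```python
# Minimal sketch: export key GP history tables to Parquet files for a Tier 3 archive.
# Assumes pandas, SQLAlchemy, and pyarrow are installed; table list and paths are illustrative.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://@gp-archive.example.local/TWO"      # hypothetical read-only GP restore
    "?driver=ODBC+Driver+18+for+SQL+Server&trusted_connection=yes"
)

# Large GP history tables commonly worth archiving in full; extend for your own module usage.
tables = ["GL30000", "PM30200", "RM30101", "SOP30200", "SOP30300"]

for table in tables:
    # Stream in chunks so multi-gigabyte tables never have to fit in memory at once.
    chunks = pd.read_sql(f"SELECT * FROM {table}", engine, chunksize=500_000)
    for i, chunk in enumerate(chunks):
        chunk.to_parquet(f"archive/{table}_{i:04d}.parquet", index=False)
    print(f"exported {table}")
```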

The test is simple: can finance search the archive quickly, export evidence from it, and prove where it came from?

Why the "migrate everything" approach fails

The instinct to migrate all 20 years of transactional detail into Business Central is understandable. It feels safer. In practice, it creates three compounding problems.

1. Sub-ledger contamination. Legacy GP systems accumulate messy data over decades. Users make manual journal entries directly to control accounts, causing the GL to drift from AR and AP sub-ledgers. Orphaned transactions from module-level fixes and voided-and-reissued checks create apparent duplicates. If you migrate this uncleaned data, your new Business Central environment starts life out of balance. BC enforces strict relational integrity — it will reject unbalanced entries. Cleaning 20 years of legacy data before migration adds 3–6 months to the project timeline. (archerpoint.com)

See 7 Costly Mistakes to Avoid When Migrating Financial Data for a deeper look at how this pattern derails projects.

2. Performance degradation. Business Central online is a multi-tenant SaaS environment with real operational limits: the environment database has a 3 TB ceiling, list searches time out after 10 seconds, uploads after 65 seconds, and OData/SOAP requests can be throttled with 429 responses. Loading millions of historical GL entries, closed invoices, and voided checks into live tables degrades query performance, slows report generation, and increases environment size. (learn.microsoft.com)

3. License cost implications. GP uses concurrent-user licensing; Business Central uses a named-user model. If you design BC as a giant live archive for occasional lookups, you are paying for named-user licenses for people who only need archive reporting. BC's per-environment storage limits mean that bloated historical data triggers additional storage fees — every gigabyte you push into BC is data you pay to host, back up, and replicate indefinitely. (archerpoint.com)

The three-tier approach avoids all of this. Live data in BC stays clean and fast. Historical reference data lives in extension tables. Deep archive lives outside BC entirely.

GP-specific data quirks: account frameworks, SmartLists, and ISVs

The account framework to dimensions translation

This is the single most consequential architectural difference between GP and BC.

Dynamics GP uses a segmented account structure — up to 10 segments and 66 characters that form a single GL code (e.g., 300-6170-00 = Repairs & Maintenance, Sales Department). The account framework is set at installation and is very difficult to change later. (learn.microsoft.com)

Business Central uses a single-segment account number plus dimensions — flexible tags that categorize transactions (Department, Division, Project, Region). BC supports two Global Dimensions stored directly on ledger entry tables, plus Shortcut Dimensions 3–8 stored in sub-tables.

During migration, the Cloud Migration Tool maps the main GP segment → BC Account Number and remaining segments → Dimensions. You choose which segments become Global Dimension 1 and Global Dimension 2 in the GP Company Migration Configuration.

GP account:      100-4100-02-ATL
BC Account No.:  4100
Global Dim 1:    DEPARTMENT = 100
Global Dim 2:    DIVISION   = 02
Shortcut Dim 3:  LOCATION   = ATL
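
In a custom migration or validation script, that translation is just string parsing plus a segment-to-role assignment. A minimal sketch, assuming the segment order and dimension names from the example above; your own account framework will differ:

```python
# Minimal sketch: split a GP segmented account into a BC account number plus dimensions.
# Segment positions and dimension names match the example above and are assumptions,
# not a universal GP layout -- drive them from your own account framework.
SEGMENT_MAP = {
    0: ("DEPARTMENT", "global1"),   # first GP segment  -> Global Dimension 1
    1: ("ACCOUNT",    "account"),   # main segment      -> BC G/L Account No.
    2: ("DIVISION",   "global2"),   # third GP segment  -> Global Dimension 2
    3: ("LOCATION",   "shortcut3"), # fourth GP segment -> Shortcut Dimension 3
}

def map_gp_account(gp_account: str) -> dict:
    segments = gp_account.split("-")
    result = {"account_no": None, "dimensions": {}}
    for position, value in enumerate(segments):
        name, role = SEGMENT_MAP[position]
        if role == "account":
            result["account_no"] = value
        else:
            result["dimensions"][name] = value
    return result

print(map_gp_account("100-4100-02-ATL"))
# {'account_no': '4100', 'dimensions': {'DEPARTMENT': '100', 'DIVISION': '02', 'LOCATION': 'ATL'}}
```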

Choosing which two GP segments become your Global Dimensions is the most critical architectural decision of the migration. Global Dimensions are indexed on every ledger entry and deliver the fastest query performance. Getting this wrong means rebuilding your reporting layer post-go-live. Changing Global Dimensions later can be time-consuming, affect performance, and may lock tables while entries are updated. (learn.microsoft.com)

Tip

Choose your two highest-volume reporting dimensions as Global Dimensions. If your finance team filters by Department and Division on nearly every report, those should be Global Dimension 1 and 2.

One critical catch: in GP, you control valid account/segment combinations by simply not creating certain GL codes. In BC, dimensions are available on any account by default. Use BC's Allowed Values Filter on GL Account Cards to replicate the same restrictions — otherwise users can post to combinations that never existed in GP.

For a deeper chart-of-accounts planning framework, see Your Chart of Accounts Migration Plan.

SmartList migration and reporting architecture

If you have been on GP for 15+ years, your users rely heavily on SmartLists for ad-hoc reporting. SmartLists do not migrate to Business Central.

In Business Central online, the supported pattern for reading data is APIs and analytics tools — not direct SQL against the live application database. SmartList migration is a reporting-architecture problem, not just a data export problem. (learn.microsoft.com)

  • eOne SmartList Builder → Replaced by Popdock in BC, which can also surface legacy GP data from Azure Data Lake inside BC without migrating it
  • Custom SmartLists → Inventory every high-value list and decide whether it becomes a BC analysis view, Power BI report, custom API, or external reporting layer

You must catalog the underlying SQL views your critical SmartLists reference and rebuild that logic in the new platform.

ISV add-ons and integrations

Most GP environments are heavily customized with Independent Software Vendor (ISV) tools. No ISV add-on migrates automatically. Each one requires a replacement assessment, data extraction, and re-implementation.

Common ISV migration paths:

  • eOne SmartConnect → Cloud version (SmartConnect.com) works with BC, but GP-style SQL staging tables must be rearchitected since direct SQL access is not available in BC cloud. Plan for API rate limits: 429 throttling, request time limits, and OData filter limitations (see the retry sketch after this list). (learn.microsoft.com)
  • Mekorma Payment Hub → Mekorma is actively building a BC-native version; confirm feature parity and availability before cutover. (mekorma.com)
  • Integrity Data (payroll/HR) → Offers BC-compatible solutions, but consider decoupling payroll entirely and moving to a dedicated modern HRIS.
  • Binary Stream (multi-entity management) → Has BC equivalents.
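
Any integration you rebuild against Business Central online has to tolerate that throttling. Here is a minimal sketch of an API call that backs off on 429 responses, assuming the Python requests library; the tenant, environment, company ID, and token are placeholders:

```python
# Minimal sketch: call a Business Central API endpoint and back off on 429 throttling.
# The tenant/environment/company values and the bearer token are placeholders.
import time
import requests

BASE = ("https://api.businesscentral.dynamics.com/v2.0"
        "/{tenant}/{environment}/api/v2.0/companies({company_id})")

def get_with_backoff(path: str, token: str, max_retries: int = 5):
    url = BASE.format(tenant="your-tenant-id",
                      environment="Production",
                      company_id="your-company-guid") + path
    for attempt in range(max_retries):
        resp = requests.get(url, headers={"Authorization": f"Bearer {token}"}, timeout=60)
        if resp.status_code == 429:
            # Respect Retry-After if the service sends it, else back off exponentially.
            wait = int(resp.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        resp.raise_for_status()
        return resp.json()
    raise RuntimeError(f"Still throttled after {max_retries} attempts: {path}")

# Example: pull vendors through the standard API rather than direct SQL.
vendors = get_with_backoff("/vendors?$top=100", token="YOUR_ACCESS_TOKEN")
```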

Microsoft's GP documentation is explicit about the old customization model: modified forms and reports live in Forms.dic and Reports.dic, and additional products are loaded through Dynamics.set. None of that becomes a BC page or report automatically, because Business Central's extension model is built around AL extension objects. (learn.microsoft.com)

Inventory every ISV before scoping the project. This is where timelines blow up. Treat every ISV as a mini-project with its own fit-gap review and test script.

For a comparison of migration tooling approaches including SSIS and Azure Data Factory, see SSIS vs Azure Data Factory vs ClonePartner.

The reconciliation playbook: proving chain-of-custody to auditors

Your first post-migration audit will test whether you can prove that data moved correctly from GP to BC. Auditors will not accept "we ran the migration tool" as evidence. You need artifacts.

Pre-cutover reconciliation steps

Step 1: Freeze GP and run final routines. Halt all user access to Dynamics GP. Post all open batches. Run the Reconcile utility on all financial modules (Financial, Payables, Receivables, Inventory). Deposit all posted cash receipts — undeposited receipts are not migrated.

Step 2: Match the trial balance. Run the Detailed Trial Balance in GP as of the cutover date and export to Excel. This is your baseline. Once Tier 1 data is loaded into Business Central, run the equivalent Trial Balance in BC. Every account must match to the penny. Export both as signed PDFs with timestamps. Note that historical years marked in GP come into BC as open and must be closed in BC after migration. (learn.microsoft.com)
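
The penny-matching itself is easy to automate once both exports exist. A minimal sketch, assuming each trial balance is exported to CSV with an account-number column and an ending-balance column, and that the GP side has already been rolled up to its mapped BC account numbers; the file and column names are placeholders:

```python
# Minimal sketch: compare GP and BC trial balance exports account by account.
# Column names ("Account", "Balance") are placeholders for whatever your exports use.
import pandas as pd

gp = pd.read_csv("gp_trial_balance.csv", dtype={"Account": str})
bc = pd.read_csv("bc_trial_balance.csv", dtype={"Account": str})

merged = gp.merge(bc, on="Account", how="outer",
                  suffixes=("_gp", "_bc")).fillna({"Balance_gp": 0, "Balance_bc": 0})
merged["diff"] = (merged["Balance_gp"] - merged["Balance_bc"]).round(2)

exceptions = merged[merged["diff"] != 0]
print(f"{len(exceptions)} accounts out of balance")
print(exceptions[["Account", "Balance_gp", "Balance_bc", "diff"]].to_string(index=False))
```

The exception list, with zero rows or with documented adjusting entries, becomes part of the audit evidence package described in Step 6.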

Step 3: Match AR and AP aging. Extract the Historical Aged Trial Balance for both Payables and Receivables in GP. Map the open invoices into Business Central. Run the BC Customer and Vendor Aging reports. Document the exact match. If there is a discrepancy — often caused by partially-paid invoice netting (GP migrates remaining balance, not original amount) or legacy sub-ledger drift — document the adjusting journal entry required to balance it. Pay special attention to credit memos and prepayments; these frequently map differently between systems.

Step 4: Verify inventory valuation and bank reconciliation. Extract the Historical Inventory Trial Balance (HITB) from GP. Match the total valuation to the BC inventory load. Watch for rounding differences on average-cost items. Note that GP Unit of Measure Schedules do not have a direct BC equivalent. (learn.microsoft.com)

For bank reconciliation: extract the final GP Bank Reconciliation report. Ensure the uncleared checks and deposits in transit are loaded into BC as open bank ledger entries. The last reconciled bank statement in GP becomes the starting point in BC.

Step 5: Handle tax edge cases. Microsoft's current documentation says the first supported year for migrated 1099 amounts is 2024. Older 1099 history belongs in the archive or reporting layer, not in live BC. (learn.microsoft.com)

Step 6: Generate the audit evidence package. Package the following into a single, reproducible artifact:

  • Checksums on key tables: Row counts and control totals (sum of debit/credit) for GL, AR, AP, and inventory before and after migration. NIST defines a hash value as a string used to substantiate the integrity of digital evidence — hashing extracted tables before and after transfer creates a simple integrity proof (see the sketch after this list). (nist.gov)
  • Data lineage documentation: A mapping document showing which GP tables map to which BC tables/extension tables, with transformation rules
  • Sign-off log: Controller and IT sign-off on each reconciliation checkpoint, with dates
  • Archive access procedure: Written documentation showing how to access Tier 3 archived data, who has credentials, and how to produce records on demand
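
The checksum item is the easiest to automate. A minimal sketch, assuming the GL extracts are available as CSV; the row count, debit/credit control totals, and a SHA-256 fingerprint of the sorted rows give you a before-and-after integrity proof to staple to the sign-off log:

```python
# Minimal sketch: row counts, debit/credit control totals, and a SHA-256 fingerprint
# for a ledger extract. File name and column names are placeholders.
import hashlib
import pandas as pd

def control_totals(path: str, debit_col: str = "Debit", credit_col: str = "Credit") -> dict:
    df = pd.read_csv(path)
    # Hash a canonical, sorted representation so the fingerprint is order-independent.
    canonical = df.sort_values(list(df.columns)).to_csv(index=False).encode("utf-8")
    return {
        "file": path,
        "rows": len(df),
        "total_debit": round(df[debit_col].sum(), 2),
        "total_credit": round(df[credit_col].sum(), 2),
        "sha256": hashlib.sha256(canonical).hexdigest(),
    }

# Run the same function against the pre-migration GP extract and the post-migration
# BC extract; matching totals and row counts (and documented differences) go in the package.
print(control_totals("gl_extract_gp.csv"))
print(control_totals("gl_extract_bc.csv"))
```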

Have the CFO and the lead migration engineer sign off. This document is handed directly to the auditors during your first post-migration audit.

Warning

Auditors will specifically ask: "Can you produce the original source document for a transaction from [random historical year]?" If the answer requires booting up a decommissioned server, you have failed the test. Make sure your archive is online and queryable before decommissioning GP.

One more detail to keep in mind: during cloud migration, GP remains the operative environment until you complete the migration. BC online data that is part of the migration can be overwritten while replication continues. Microsoft explicitly says data entry in BC is limited during this period — which is why the last mock cutover and the disable-migration step matter so much. (learn.microsoft.com)

How ClonePartner engineers your GP to BC data migration

The native Cloud Migration Tool handles the standard path — but it has limits. It does not clean dirty GP data before migration. It does not handle 100GB+ databases without timeout issues. It does not build the Tier 3 archive or produce audit-grade reconciliation reports.

ClonePartner builds custom migration scripts that handle the full three-tier strategy: live data into BC, historical snapshots into extension tables, and deep archive extraction into Azure Data Lake or SQL Server — with checksums, reconciliation reports, and chain-of-custody documentation at every stage.

If your migration partner's plan is "run the wizard and hope," that is not a plan. That is a risk.

What to do next

If you are in the decision-to-implementation phase, do these five things now:

  1. Write your retention rules down. Clarify what must remain retrievable after go-live, who needs access, and for how long.
  2. Pick Tier 1 now. Define exactly which data must be transactionally active in BC on day one.
  3. Inventory GP customizations and integrations. Catalog every SmartList, SQL report, modified form, third-party product, and integration.
  4. Prototype the dimension map. Map GP account segments to BC dimensions before you migrate balances. Test it against your most critical reports.
  5. Run a mock migration with reconciliations. Validate trial balance, AR/AP aging, inventory valuation, bank activity, and tax edge cases in a dry run.

That is the point where a GP to BC migration stops being a fear-driven ERP replacement and becomes a controlled finance data project.

Frequently Asked Questions

When does Microsoft end support for Dynamics GP?
Microsoft ends product enhancements, regulatory/tax updates, and technical support on December 31, 2029 (revised from the earlier September 30, 2029 date). Security patches end April 30, 2031. New subscription license sales ended April 1, 2026. Perpetual license holders can keep running GP past 2031, but with zero patches and no ecosystem support.
Does the Microsoft Cloud Migration Tool move all GP historical data into Business Central?
No. The native tool posts summary GL balances for historical years into the live BC ledger. Detailed historical transactions are stored in GP Historical Snapshot extension tables, which require Power BI, Power Apps, or custom reporting to query. For the oldest detail, Microsoft supports copying the full GP database to Azure Data Lake as a queryable archive.
How do Dynamics GP account segments map to Business Central?
GP's main account segment becomes the BC Account Number. All remaining segments are converted to Dimensions. You choose which two segments become Global Dimensions (indexed on every ledger entry for fast filtering). Additional segments become Shortcut Dimensions 3–8. Choosing the wrong Global Dimensions means rebuilding your reporting layer post-go-live.
What happens to GP ISV add-ons like Mekorma and SmartConnect after migration?
No ISV add-on migrates automatically. eOne SmartList Builder is replaced by Popdock in BC, SmartConnect has a cloud version (SmartConnect.com), and Mekorma is building a BC-native Payment Hub. Each add-on requires a separate replacement assessment and re-implementation. Inventory every ISV before scoping the project — this is where timelines blow up.
How long do I need to keep financial records after migrating off GP?
The IRS requires records as long as the period of limitations remains open — generally 3 years, sometimes 6, and up to 7 years for bad-debt or worthless-securities claims. Many CPA firms recommend a blanket 7-year retention policy. The IRS requires records to be available for inspection, not stored in your active ERP — a queryable SQL archive or Azure Data Lake satisfies this.
