Confluence to Coda Migration: Methods, API Limits & Data Mapping
A technical guide to migrating Confluence to Coda: native importer limits, CSV constraints, API rate limits, macro handling, and data mapping strategies.
Coda has a native Confluence importer that works through a /Confluence slash command. It handles small Cloud workspaces well enough. For anything with deep page hierarchies across dozens of spaces, heavy attachments, dynamic macros, or enterprise-scale data volumes, the native path has hard limits that will stall your migration.
This guide covers the three real approaches to moving from Confluence to Coda — native import, CSV export, and API-led migration — the specific limits that constrain each one, and how to map Confluence's wiki architecture into Coda's doc-centric model without losing your page tree.
For a deep dive into getting data out of Confluence before worrying about getting it into Coda, see our Confluence export guide. For teams evaluating Coda against other modern platforms, our Coda vs Notion comparison covers the structural differences.
How Confluence and Coda Structure Data Differently
Before you export anything, understand what you are translating between. The two platforms use fundamentally different organizational models.
Confluence is structured as a traditional wiki: within your instance, large "spaces" hold all of your information as individual pages and subpages. The hierarchy is rigid: Instance → Space → Page → Child Page. Spaces are the top-level container, and everything nests beneath them. Macros are embedded within pages to pull in dynamic content, but the foundation is always the page. You can explore this architecture in our Notion vs. Confluence (2026) comparison.
Coda's hub model works differently. Hubs start at the workspace level where all of your teammates and their docs live. Docs are the atomic unit of Coda — they house everything you need to know about a team, project, campaign, and more in one place, and they consist of endless pages and subpages.
The practical hierarchy in Coda is: Workspace → Folders → Docs → Pages/Subpages. Tables live inside docs as first-class citizens, not as static embedded elements like Confluence tables.
Coda doesn't recommend endlessly dumping pages into docs like you would a Confluence space. Instead, aim to create a new doc whenever you're creating a new project, product, campaign, or team.
This architectural gap is the root cause of most migration problems. A Confluence space with 500 pages does not cleanly map to a single Coda doc. You need a deliberate mapping strategy — covered below — or you end up with a bloated, slow doc that hits Coda's size limits.
How the Native Coda Importer Works (and Where It Breaks)
You launch the Confluence importer by typing /Confluence on any blank line in the canvas, then selecting Confluence within the Import options. You can connect an existing Confluence account or add a new one, which redirects you to Atlassian to authenticate. Then you select the pages or spaces you want to import — either by pasting a Confluence page link or browsing spaces from a list.
What the importer supports
- A Confluence space maps roughly to a Coda doc. You can import multiple spaces at once into a single Coda doc.
- You can select and import a folder that lives inside a space. The page hierarchy in Coda should automatically match the folder hierarchy from Confluence.
- After import, Coda generates a "Read me: Import next steps" page that lists all issues in your recent import with suggested solutions and links to each issue location.
The browser-session problem
Large files may take up to a few hours to fully import. You can minimize the browser window, but closing it will cancel the import of your Confluence space(s) into Coda.
This is the single biggest risk for enterprise imports. A laptop going to sleep, a browser update restarting Chrome, a VPN disconnect — any of these kills a multi-hour import with no resume capability. You start over from zero.
Username matching for @mentions
To preserve people references (e.g., @Jane Doe), start with a blank Coda doc and share it with everyone referenced in your Confluence space before you start your import. Usernames must match across Confluence and Coda for people references to be preserved.
If your Atlassian accounts use email addresses as usernames and your Coda workspace uses different email domains, every @mention breaks silently. There is no post-import remapping tool — you either set this up correctly beforehand, or you manually fix every reference after the fact.
Cloud-only limitation
The importer is only for Confluence Cloud. For customers on Coda's Enterprise tier, a separate Confluence Data Center / on-premises importer is available. If you are on Confluence Server or Data Center and not on Coda Enterprise, the native importer is not an option.
Pre-import checklist: Share the target Coda doc with all users who are @mentioned in Confluence before starting the import. Ensure their Coda usernames match their Confluence usernames exactly. There is no way to batch-fix broken mentions after import.
The DIY CSV Route: Export Limits and Flat Hierarchies
When the native importer doesn't fit — wrong Confluence deployment type, too many spaces, need for selective extraction — teams fall back to CSV as a universal bridge format.
Confluence CSV export constraints
Space admins can export a space to PDF, CSV, HTML, or XML. Confluence creates a zipped archive of the CSV, HTML, or XML files, or a single downloadable PDF file.
The problem: native CSV and HTML exports flatten or break parent-child relationships. You get a flat dump of page content with no structural metadata indicating which pages are children of which parents. For a 50-page space, this is manageable. For a space with 500+ pages in a deep tree, the navigational structure is gone.
PDF and HTML exports also drop blog posts and comments without warning.
Manual exports are permission-scoped, too: except for CSV or XML exports run by a site admin, only content visible to the exporting user is included. If a non-admin runs the export, restricted pages silently disappear from the output. (support.atlassian.com)
No API for space exports on Cloud
Confluence Cloud does not provide a REST API endpoint for triggering space exports. The feature request (CONFCLOUD-40457) has been open since 2016 and remains unresolved. The old XML-RPC and SOAP APIs that included exportSpace functionality were deprecated in Confluence 5.5. Confluence Data Center 8.3+ has a REST API endpoint for XML exports, but this does not apply to Cloud.
Every CSV export from Confluence Cloud is a manual, click-through-the-UI operation. You cannot script it. You cannot schedule it. For a migration involving 20+ spaces, that is hours of manual clicking and waiting. We cover this limitation in depth in our Confluence export guide.
Coda CSV import limits
Coda supports importing data from CSV files into new tables or existing ones. At this time, Coda supports importing CSVs up to 10,000 rows. If your CSV exceeds 10,000 rows, uploading data in batches is recommended.
This is a hard cap. Users have reported receiving the error "Importing CSV failed because the number of rows exceeded the limit of 10000" even on tables they had previously imported into without issues.
For page-level content where each row is a page, 10,000 rows is rarely a problem. But if you are migrating Confluence tables or databases where individual tables contain tens of thousands of rows, you must split the data into batches — each requiring a separate import operation, each needing manual column mapping.
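Batching for the 10,000-row cap can be scripted rather than done by hand in a spreadsheet. A minimal sketch — the sample data is synthetic, and in practice you would read from your exported file and write each batch back out as its own CSV:

```python
import csv
import io

BATCH_SIZE = 10_000  # Coda's documented per-import row cap

def split_csv(reader, batch_size=BATCH_SIZE):
    """Yield (header, rows) batches of at most batch_size data rows each."""
    rows = list(reader)
    header, body = rows[0], rows[1:]
    for start in range(0, len(body), batch_size):
        yield header, body[start:start + batch_size]

# In-memory example; swap the StringIO for open("export.csv") on a real export.
sample = io.StringIO("id,title\n" + "\n".join(f"{i},Page {i}" for i in range(25_000)))
batches = list(split_csv(csv.reader(sample)))
print(len(batches))          # 3 batches: 10k + 10k + 5k
print(len(batches[-1][1]))   # 5000 rows in the final batch
```

Each yielded batch still needs its own import operation and column mapping in Coda; the script only removes the error-prone manual splitting step.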
Which export format fits which task
Not every piece of your Confluence data should travel in the same format:
| Migration Task | Recommended Format | Why |
|---|---|---|
| Page content with hierarchy | API-to-API (Confluence REST → Coda API) | Only method that preserves the parent-child tree |
| Structured tabular data | CSV | Tabular data maps cleanly to Coda tables; 10k row batch limit |
| Rendered page bodies | HTML export | Stores attachments in folders keyed by page ID; drops comments |
| Attachments | Binary download + re-upload | Direct file transfer, no format conversion |
| Page metadata (labels, authors, dates) | JSON via Confluence API | Structured metadata needs programmatic extraction |
| Read-only archives | PDF | Not for migration; useful for compliance only |
When to use CSV: CSV is the right format when you need to migrate structured tabular data from Confluence (e.g., Confluence Databases or Page Properties reports) into Coda tables. It is the wrong format for migrating page content with hierarchy, because it destroys parent-child relationships.
Handling Attachments, Macros, and API Limits
Attachments
Confluence handles attachments as page-level entities. Coda handles them within canvas blocks or table columns. The native Coda importer handles inline images but not all attachment types consistently. Large attachments (50 MB+) may fail silently during browser-based import. Files embedded via Confluence macros (like the Attachment macro) may not transfer because the macro context is lost.
Relying on HTML exports often results in broken image links because the export references Atlassian's hosted URLs, which expire or require authentication. For attachment-heavy spaces — engineering specs with PDFs, design files, embedded videos — plan for a separate attachment migration pass.
On the Coda side, paid plans allow individual files up to 250 MB, and Team/Enterprise plans have unlimited total attachments per doc. (help.coda.io) But Coda's API does not accept image files directly — it expects Image URL values for Image URL columns. Attachment-heavy migrations typically need a separate service layer or canonical-link strategy rather than pure REST upserts. (help.coda.io)
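For that separate attachment pass, Confluence's `GET /rest/api/content/{id}/child/attachment` endpoint lists a page's attachments with relative download links. A hedged sketch of the inventory step — the Bearer header is a placeholder (Cloud commonly uses Basic auth with an API token), and error handling is omitted:

```python
import json
import urllib.parse
import urllib.request

def extract_attachment(att, base):
    """Flatten one record from Confluence's attachment JSON into a manifest row."""
    return {
        "title": att["title"],
        "mediaType": att["extensions"]["mediaType"],
        # download links come back relative to the wiki base URL
        "downloadUrl": base + att["_links"]["download"],
    }

def list_attachments(base, token, page_id, limit=50):
    """Page through /child/attachment and collect every attachment on a page."""
    rows, start = [], 0
    while True:
        qs = urllib.parse.urlencode({"start": start, "limit": limit})
        req = urllib.request.Request(
            f"{base}/rest/api/content/{page_id}/child/attachment?{qs}",
            headers={"Authorization": f"Bearer {token}"},  # placeholder auth scheme
        )
        with urllib.request.urlopen(req, timeout=30) as resp:
            batch = json.load(resp)["results"]
        rows += [extract_attachment(a, base) for a in batch]
        if len(batch) < limit:
            return rows
        start += limit
```

The resulting manifest rows feed either a re-upload step or a canonical-link table on the Coda side.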
Confluence macros have no Coda equivalent
Instead of using macros, extensions, plug-ins, and other add-ons to connect your wiki to the rest of your workflows, Coda uses native embeds, Packs, and automations to sync to external tools.
Here is what happens to specific macros during import:
| Confluence Macro | What Happens in Coda |
|---|---|
| Expand/Collapse | Flattened to plain text or toggle (if supported) |
| Page Properties + Report | Lost — no Coda equivalent. Must rebuild as table columns |
| Jira Issue/Filter | Replaced with Coda's Jira Pack (requires manual setup post-import) |
| Code Block | Preserved as code block in most cases |
| Table of Contents | Dropped — Coda generates its own TOC from page headings |
| Include Page / Excerpt Include | Broken — Coda does not support transclusion across docs |
| Draw.io / Gliffy diagrams | Dropped — must re-embed or re-create |
The Page Properties macro deserves special attention. It is one of the most heavily used Confluence macros in enterprise wikis: it lets you define structured key-value metadata on any page and aggregate it with Page Properties Report. Users have been exploring alternatives for replicating this pattern in Coda's community forums. The typical Coda solution is to model this data as a table with one row per page — a fundamentally different structure that requires rethinking your information architecture, not just migrating it.
Before you migrate, use CQL to build a macro exception queue so you know exactly which pages need manual treatment:
```text
# Inventory pages using macros that will break on import
GET /wiki/rest/api/content/search?cql=type=page AND macro in ('toc','children','excerpt','page-properties-report')
```
Coda's 125 MB API size limit
Coda API Size Limit: On all plan types, docs with a size of 125 MB or more will no longer be accessible via the Coda API. This limitation is set to preserve the performance of the API. This 125 MB does not include file attachments.
This limit matters for two reasons:
- Post-migration automation breaks. If your imported Confluence space pushes a Coda doc past 125 MB, you cannot use the Coda API to programmatically update, query, or integrate that doc.
- Cross-doc syncing stops. To successfully set up Cross-doc, the source doc can't exceed 125 MB. If you import a large Confluence space into a single doc, you cannot reference that data from other Coda docs.
Community members have reported that docs which previously worked fine over 125 MB suddenly received warnings and blocked behavior for Cross-doc and API access. This is not a theoretical risk — it is being actively enforced.
Coda API rate limits
API rate limits are not plan-dependent. Reading data: 100 requests per 6 seconds. Limits apply per-user across all endpoints.
The limit for API requests is 2 MB per request, with a limit of 85 KB for any given row.
For write operations, the observed limits are significantly lower. Community testing has shown POST request rate limits as low as 10 per minute for row inserts. Coda's own docs show a discrepancy on doc-content write sub-limits: the Help Center lists 3 requests per 10 seconds, while the developer API reference lists 5 requests per 10 seconds. (help.coda.io) Code to the lower number, retry on 429, and remember that many writes return HTTP 202 — the change is accepted and queued, not committed immediately.
The 85 KB row limit also has design implications. If you push rendered HTML or markdown blobs into table rows, large Confluence pages can exceed that limit. Put long-form content on Coda pages; use tables for indexes, mappings, ownership, labels, and reporting.
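A sketch of a write path that respects both constraints. The endpoint and payload shape follow Coda's documented rows API; the retry count, the size pre-check, and measuring row size as serialized JSON bytes are our conventions, not Coda specifications:

```python
import json
import time
import urllib.request
from urllib.error import HTTPError

ROW_LIMIT = 85 * 1024  # documented per-row cap, approximated as JSON bytes

def row_size(row):
    """Approximate a row's payload size as UTF-8 JSON bytes."""
    return len(json.dumps(row).encode("utf-8"))

def insert_rows(doc_id, table_id, rows, token, max_retries=5):
    """POST rows to Coda, retrying on 429 using the server's Retry-After hint."""
    oversized = [r for r in rows if row_size(r) > ROW_LIMIT]
    if oversized:
        raise ValueError(f"{len(oversized)} rows exceed the 85 KB per-row limit")
    body = json.dumps({"rows": rows}).encode("utf-8")
    req = urllib.request.Request(
        f"https://coda.io/apis/v1/docs/{doc_id}/tables/{table_id}/rows",
        data=body, method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    for attempt in range(max_retries):
        try:
            with urllib.request.urlopen(req, timeout=30) as resp:
                return resp.status  # often 202: accepted and queued, not committed
        except HTTPError as e:
            if e.code != 429:
                raise
            time.sleep(int(e.headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("gave up after repeated 429 responses")
```

Rows that fail the pre-check are exactly the long-form pages that belong on Coda pages rather than in table cells.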
Inline comments
Confluence inline comments are tied to specific text nodes within the page's storage format (XHTML). Native exports drop them. HTML export drops page comments. PDF drops all comments. Preserving them requires parsing Confluence's storage format and mapping those text strings to Coda's commenting API — or storing them in a separate annotation table.
If comments matter, scope them explicitly. HTML export drops page comments, PDF drops all comments, and blog posts are missing from several export routes. Preserve comments as data, not as a last-minute afterthought.
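As a starting point for that parsing step, Confluence's storage format wraps commented text in `ac:inline-comment-marker` elements. A regex sketch that recovers the anchors — a production pipeline should use a proper XML parser and join these refs to comment bodies fetched via the API:

```python
import re

# Confluence storage format marks commented spans with ac:inline-comment-marker
MARKER = re.compile(
    r'<ac:inline-comment-marker ac:ref="([^"]+)">(.*?)</ac:inline-comment-marker>',
    re.DOTALL,
)

def extract_inline_anchors(storage_xhtml):
    """Return (comment_ref, anchored_text) pairs for an annotation table."""
    return MARKER.findall(storage_xhtml)

body = ('<p>Deploy via <ac:inline-comment-marker ac:ref="abc-123">'
        'the blue/green pipeline</ac:inline-comment-marker> only.</p>')
print(extract_inline_anchors(body))  # [('abc-123', 'the blue/green pipeline')]
```

Each pair becomes one row in the annotation table, keyed by source page ID so the comment stays traceable after migration.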
Data Mapping: Translating Confluence Spaces to Coda's Hierarchy
This is where most migrations succeed or fail. A bad mapping strategy creates a Coda workspace that is harder to navigate than the Confluence wiki it replaced.
Recommended mapping framework
| Confluence Concept | Coda Equivalent | Notes |
|---|---|---|
| Instance | Workspace | 1:1 mapping |
| Space | Folder or Doc | Small spaces → single Doc. Large spaces → Folder containing multiple Docs |
| Top-level pages | Pages within a Doc | Each top-level Confluence page becomes a Coda page |
| Child pages (2+ levels deep) | Subpages | Coda supports nesting, but deep nesting (5+ levels) degrades UX |
| Page Properties macro | Table rows | One table per "page type" with columns matching property keys |
| Confluence Database | Coda table | Closest 1:1 mapping. Requires separate CSV export |
| Attachments | Embedded files or file columns | Re-upload, link from cloud storage, or use canonical-link tables |
| @Mentions | People Column / Canvas Mention | Requires exact email/username mapping prior to import |
| Blog posts | Separate Doc or page section | Coda has no native "blog" concept |
| Inline comments | Annotation table or appendix section | No 1:1 target; preserve as structured data |
| Labels | Multi-select or text columns in an index table | Enables filtering and landing pages in Coda |
Splitting large spaces
A Confluence space with 1,000 pages should not become a single Coda doc. Based on Coda's own guidance and the 125 MB API limit:
- < 100 pages, < 50 MB content: Single Coda doc. Manageable.
- 100–500 pages: Split by topic or team into 3–5 Coda docs inside one folder.
- 500+ pages: Create a dedicated folder. Use one doc per major section. Use Cross-doc to link shared tables.
If you have over 500 pages in a doc, size issues are likely. Delete old or unnecessary pages, and if a doc holds data sets that don't depend on each other, split it into smaller docs focused on individual use cases.
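Those thresholds can drive a pre-migration sizing pass. A heuristic sketch — the 150-pages-per-doc target is our rule of thumb derived from the guidance above, not a Coda limit:

```python
def split_plan(page_count, pages_per_doc=150):
    """Suggest a target doc structure for a Confluence space of a given size."""
    ceil_docs = -(-page_count // pages_per_doc)  # ceiling division
    if page_count < 100:
        return {"docs": 1, "folder": False}          # single doc is manageable
    if page_count <= 500:
        return {"docs": max(3, min(5, ceil_docs)), "folder": True}
    return {"docs": ceil_docs, "folder": True}       # dedicated folder, doc per section

print(split_plan(80))    # {'docs': 1, 'folder': False}
print(split_plan(400))   # {'docs': 3, 'folder': True}
print(split_plan(1200))  # {'docs': 8, 'folder': True}
```

Running this against a page count per space before migration gives you the folder/doc skeleton to build in Coda ahead of any import.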
Migration manifest
Two fields matter more than teams expect: SourcePageId and OriginalUrl. Keep both in a migration manifest so you can rebuild internal links, run QA against source objects, and rerun only failed items.
```yaml
sourcePageId: 123456
spaceKey: ENG
title: Ingress Runbook
parentPageId: 123400
originalUrl: CONFLUENCE_PAGE_URL
targetFolder: Engineering
targetDoc: Platform KB
targetPath:
  - Runbooks
  - Kubernetes
ownerMap: jane.doe@example.com
macroFlags:
  - toc
  - children
attachmentMode: link
commentMode: annotation-table
```
Permissions
Confluence can be heavily page-restricted. Coda's main sharing surfaces are docs and folders — there is no page-level ACL. A literal page-by-page permission clone is not possible. Most successful migrations redraw boundaries into separate docs or folders for restricted content branches instead of trying to simulate every Confluence restriction.
Identity mapping
Because Coda preserves people references only when usernames match, build a user crosswalk before bulk migration. When a user cannot be resolved cleanly, downgrade the mention to text plus source metadata instead of leaving broken people objects in the destination.
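A minimal sketch of that downgrade logic. The crosswalk is a plain mapping of Confluence emails to Coda emails, and the returned mention/text shapes are illustrative placeholders, not Coda API objects:

```python
def resolve_mention(confluence_email, crosswalk):
    """Return a Coda mention target, or a text fallback for unmappable users.

    crosswalk maps Confluence account emails to Coda account emails.
    """
    coda_email = crosswalk.get(confluence_email.lower())
    if coda_email:
        return {"kind": "person", "email": coda_email}
    # Downgrade: keep the name as text plus source metadata,
    # rather than leaving a broken people object in the destination.
    return {"kind": "text", "value": f"@{confluence_email} (unmapped Confluence user)"}

crosswalk = {"jane.doe@oldcorp.com": "jane@newcorp.com"}
print(resolve_mention("Jane.Doe@oldcorp.com", crosswalk))
print(resolve_mention("ghost@oldcorp.com", crosswalk))
```

Logging every downgrade alongside the migration manifest makes the unresolved users easy to fix in a later pass.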
Approach Comparison: Native Importer vs. CSV vs. API-Led Migration
| | Native Coda Importer | DIY CSV Pipeline | API-Led Migration |
|---|---|---|---|
| Setup effort | Low — slash command | Medium — manual export + import | Low — handled by migration team |
| Confluence Cloud | ✅ | ✅ | ✅ |
| Confluence Data Center | Enterprise tier only | ✅ (manual export) | ✅ |
| Page hierarchy | ✅ Preserved | ❌ Flattened | ✅ Reconstructed |
| @mention preservation | ✅ (if usernames match) | ❌ | ✅ (with user ID mapping) |
| Attachments | Partial | ❌ Not in CSV | ✅ Full binary transfer |
| Macros | Partial (basic only) | ❌ | Mapped to Coda equivalents where possible |
| 10k row limit | N/A | ✅ Hard limit per file | Bypassed via API batching |
| 125 MB doc limit | Risk if space is large | Risk if importing into one doc | Managed via doc splitting strategy |
| Browser must stay open | ✅ Yes | No | No |
| Resume on failure | ❌ Start over | Manual | ✅ Checkpoint-based resume |
| Best for | Small Cloud workspaces (< 100 pages) | Tabular data only | Enterprise, multi-space, or high-fidelity migrations |
Edge Cases That Catch Teams Off Guard
- Confluence Databases are a separate workstream. If your site uses Confluence Databases, whole-site backups show databases in the tree but do not retain their content, data, or functionality. Database exports must be handled separately, per database, in CSV, HTML, or PDF. These need their own migration path into Coda tables.
- Coda Free plan limits are severe. Shared docs on the Free plan have up to 50 objects (pages, tables, views, buttons, formulas) and up to 1,000 table rows per doc. If you are testing a migration on a Free plan, you will hit these limits almost immediately on any real-world Confluence space.
- Pack sync table row limits. Free plan allows a max of 100 rows per sync table. Pro, Team, and Enterprise plans allow a max of 10,000 rows per Pack sync table, including Cross-doc. If you plan to use a Confluence Pack for ongoing sync rather than a one-time migration, these row limits constrain how much data you can keep live.
- Include Page / Excerpt Include macros create invisible dependencies. A Confluence page that transcludes content from other pages looks self-contained to users, but the imported Coda version will have gaps where the included content should be. Audit these before migration.
- Confluence page restrictions don't transfer. Page-level permissions in Confluence have no mapping in Coda's permission model. All imported content lands with the doc's default sharing settings.
- Confluence API constraints on extraction. Atlassian's API is not designed for bulk export. Searches that expand `body.export_view` or `body.styled_view` are limited to 25 results per call, bulk content-body conversion allows only 10 conversions per request, and completed conversion results expire after 5 minutes. (developer.atlassian.com) As of March 2, 2026, Atlassian began enforcing points-based quota limits for Forge, Connect, and OAuth 2.0 apps. Any REST API call can return `429`, so migration scripts need `Retry-After` handling, backoff, and resumable checkpoints. (developer.atlassian.com)
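A minimal sketch of the checkpointing pattern that last point calls for. The checkpoint filename and the `fetch_batch` hook are our conventions, not Atlassian's; the 25-item batch size matches Cloud's cap on body-expanded searches:

```python
import json
import pathlib

CHECKPOINT = pathlib.Path("extract_checkpoint.json")  # local state file (our convention)

def load_checkpoint():
    """Return the next start offset to fetch, 0 on a fresh run."""
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["next_start"]
    return 0

def save_checkpoint(next_start):
    CHECKPOINT.write_text(json.dumps({"next_start": next_start}))

def extract(fetch_batch, batch_size=25):
    """Drive a paginated extraction; fetch_batch(start) does the real API call.

    If fetch_batch raises (e.g. an unrecovered 429), the checkpoint survives
    and the next run resumes from the last saved offset, not from zero.
    """
    start = load_checkpoint()
    while True:
        batch = fetch_batch(start)
        yield from batch
        start += len(batch)
        save_checkpoint(start)
        if len(batch) < batch_size:
            return
```

In a real run, `fetch_batch` wraps the Confluence search call with backoff; here it can be any callable, which also makes the resume logic easy to test offline.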
When Integration Makes More Sense Than a One-Time Import
Sometimes the right answer is coexistence first, migration second. If the team wants Coda for reporting, ownership tracking, page indexes, or workflow layers while Confluence stays the content system for a while, a narrow integration can buy time.
Be clear about the constraints. Coda Pack sync tables are built for synced tabular data, not full wiki reconstruction. On paid plans, Pack sync tables top out at 10,000 rows each, and once a source doc exceeds 125 MB, new Cross-doc syncs from that doc are not supported. (help.coda.io) Packs and custom integrations are useful for selected datasets and staged coexistence — not for mirroring an entire knowledge base indefinitely.
ClonePartner's Approach: API-Led Migration
When a team comes to us with 20+ Confluence spaces and a deadline, we do not point them at the /Confluence slash command.
Parallel API extraction from Confluence. We use the Confluence REST API to pull page content, metadata, labels, attachments, and the full page tree — including parent-child relationships that native exports destroy. On Cloud, we use the atlassian-python-api library and custom scripts that call the content API page-by-page, pulling body content and attachments individually.
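As an illustration of that tree reconstruction, `expand=ancestors` on the Confluence content API returns each page's ancestor chain root-first, which is enough to rebuild the parent-child relationships flat exports destroy. A sketch with the response shape abbreviated to the fields used:

```python
def parent_of(page):
    """Confluence returns ancestors root-first; the last one is the direct parent."""
    ancestors = page.get("ancestors", [])
    return ancestors[-1]["id"] if ancestors else None

def build_tree(pages):
    """Map each parent page id to its children, recovering the hierarchy."""
    children = {}
    for p in pages:
        children.setdefault(parent_of(p), []).append(p["id"])
    return children

# Shape mirrors GET /wiki/rest/api/content?type=page&expand=ancestors
pages = [
    {"id": "1", "ancestors": []},
    {"id": "2", "ancestors": [{"id": "1"}]},
    {"id": "3", "ancestors": [{"id": "1"}, {"id": "2"}]},
]
print(build_tree(pages))  # {None: ['1'], '1': ['2'], '2': ['3']}
```

Pages keyed under `None` are the space's top-level pages; walking the map depth-first gives the page order to recreate in Coda.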
Intelligent doc splitting. Instead of dumping an entire Confluence space into one Coda doc and hitting the 125 MB API limit, we pre-analyze space size and page structure, then split into appropriately sized Coda docs grouped by topic or section. Cross-doc references are set up so tables remain linked.
User ID mapping. We don't rely on username matching. We build a mapping table between Atlassian account IDs and Coda user IDs, ensuring @mentions resolve correctly even when usernames differ between platforms.
No browser dependency. Our scripts run server-side. A 4-hour import won't fail because someone closed their laptop. We use checkpoint-based batching — if a write fails, we resume from the last successful record, not from the beginning.
Macro audit and rebuild plan. Before migration, we audit every macro used across your Confluence spaces. We provide a mapping document showing which macros transfer, which need rebuilding as Coda tables or Packs, and which require architectural changes in Coda. No surprises after go-live.
Delta passes for active workspaces. For teams that cannot afford a long content freeze, we stage the main migration, then replay late changes so the cutover window stays tight.
Read more about our methodology in How We Run Migrations at ClonePartner.
Making the Decision
The right migration method depends on your scale and fidelity requirements:
- Under 100 pages, single Cloud space, minimal macros: Use Coda's native importer. Keep your browser open and verify @mentions afterward.
- Structured tabular data from Confluence Databases: Export as CSV, import into Coda tables in 10,000-row batches.
- Multi-space enterprise migration with hierarchy, attachments, and user mapping requirements: You need an API-led approach that handles extraction, transformation, and loading as a coordinated pipeline.
The common mistake is underestimating how much breaks. The native importer gets your text across. It does not get your macros, your page restrictions, your inline comments, your blog posts (via CSV/HTML), or your cross-page dependencies. The gap between "text is imported" and "our wiki works in Coda" is where the real migration work happens.
Frequently Asked Questions
- Can I import Confluence Data Center into Coda?
- Only if you are on Coda's Enterprise tier. The native Confluence importer is Cloud-only for non-Enterprise plans. Data Center users on other plans must export to CSV or use API-based extraction.
- What happens to Confluence macros when migrating to Coda?
- Most Confluence macros are dropped or flattened during import. Page Properties, Include Page, Excerpt Include, and Jira macros have no automatic Coda equivalent. You must rebuild them using Coda tables, Packs, or automations.
- What is the Coda doc size limit for API access?
- Coda docs that reach 125 MB (excluding attachments) lose API access and cannot be used as Cross-doc sources. This limit applies on all plan types and is actively enforced.
- Can I automate a full Confluence Cloud space export via REST API?
- No. Confluence Cloud does not provide a REST API endpoint for triggering space exports. The feature request (CONFCLOUD-40457) has been open since 2016, so automated migrations rely on page-by-page API extraction.
- What is the Coda CSV import row limit?
- Coda limits CSV imports to 10,000 rows per file. Larger datasets must be split into batches and imported separately, with manual column mapping for each batch.