Slab to Confluence Migration Guide: API Limits & Data Mapping
Technical guide to migrating Slab to Confluence. Covers ADF translation, topic-to-space mapping, API rate limits, and internal link preservation.
There is no native migration path from Slab to Confluence. Confluence's import options don't list Slab as a source — no connector, no plugin, no one-click wizard. Your options: export Markdown files from Slab and manually import them through Confluence's space import (hitting the 200 MB uncompressed XML limit fast), or build a custom pipeline that reads from Slab's GraphQL API and writes to Confluence's REST API using Atlassian Document Format (ADF). For any workspace beyond a handful of posts, the manual path breaks down — and pushing raw Markdown or HTML into Confluence lands your pages in the legacy editor, which Atlassian is fully deprecating on April 1, 2026. (support.atlassian.com)
This guide covers the exact export limitations on the Slab side, the ADF translation challenge on the Confluence side, how to map Slab's flexible topic structure into Confluence's rigid space hierarchy, and the API constraints you'll hit during an automated migration.
Admin does not mean full access. Slab admins can bulk-download published posts and topics, but they cannot export secret topics they are not part of, or drafts. Slab's GraphQL API only returns content the Slab Bot user has access to. If you skip the access audit, your migration will be incomplete before it starts. (help.slab.com)
Why Engineering Teams Migrate from Slab to Confluence
Slab is a focused, well-designed knowledge base. Teams don't leave because Slab is broken — they leave because the rest of their stack is Atlassian.
When engineering is already running Jira for sprint boards, incident tracking, and release management, having docs in a separate wiki creates friction. The most common triggers we see:
- Jira-heavy engineering orgs. Confluence's native Jira macros — embedding issue lists, linking to epics, auto-updating status badges — eliminate the tab-switching tax. Engineers can link PRs, epics, and incident reports directly inside documentation.
- Atlassian ecosystem consolidation. One vendor, one SSO config, one admin console. Companies standardize on Atlassian for compliance or procurement simplicity.
- Permission model requirements. Confluence's space-level and page-level permission system is more granular than Slab's topic-based access. Regulated industries often need this.
- Scale. Slab works well for teams under roughly 200 people. Larger orgs with thousands of pages across dozens of teams often need Confluence's space-based partitioning and advanced search.
This isn't about Slab being inadequate — it's about tool consolidation. If your org is already paying for Confluence Cloud, running a parallel wiki adds cost and cognitive load without clear upside.
How Slab Exports Data (And What's Missing)
Slab offers two extraction paths: a manual bulk export and a GraphQL API.
Manual Bulk Export
Slab admins can download all published content from Team Settings → Import & Export. The export produces a ZIP archive containing individual files — one per post — in either Markdown or Docx format. (help.slab.com)
What the export includes:
- All published posts across all non-secret topics the admin has access to
- Folder structure mirroring Slab's topic hierarchy
What the export does not include:
- Secret topics the admin is not a member of
- Drafts — only published posts are exported
- Comments and discussion threads — completely absent from the export
- Post metadata — creation dates, authors, edit history are stripped
- Embedded integration content — Slab embeds from GitHub, Figma, Loom, etc. export as bare URLs or are dropped entirely
- Images as downloadable files — inline images appear as embedded references, not downloaded assets
We covered these gaps in our Slab to Notion migration guide. The same limitations apply here — Slab's export is designed for portability, not fidelity.
The secret topic trap: An admin can only export content they have permission to view. If engineers created secret topics and did not invite the administrator, those documents silently fail to export. You must conduct an access audit and ensure the migration service account is added to all restricted topics before initiating the export.
Slab's GraphQL API
Slab exposes a GraphQL API, but it's gated: only available on Business or Enterprise plans. The API authenticates via a token generated in Team Settings → Developer Tools.
The critical constraint: the API returns only what the Slab Bot user can access. If Slab Bot isn't added to a private topic, that content is invisible to the API — same blindspot as the manual export, but easier to miss when scripting.
A typical extraction query pulls post titles, content, topic assignments, and timestamps. The API has no bulk export endpoint — you paginate through posts one query at a time.
```graphql
query {
  posts(first: 50) {
    edges {
      node {
        id
        title
        content
        insertedAt
        updatedAt
        topics {
          id
          name
        }
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```

The cleanest migration plan is usually hybrid: use the bulk export for the baseline content payload, and use the API for metadata, validation, and exception handling.
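A pagination loop over that query can be sketched in Python. The endpoint URL and auth header shape below are assumptions (check Slab's developer docs for the exact GraphQL endpoint and token scheme); the cursor-threading logic is the part that carries over to any client.

```python
import json
import urllib.request

SLAB_API_URL = "https://api.slab.com/v1/graphql"  # hypothetical endpoint; verify against Slab's docs

POSTS_QUERY = """
query ($after: String) {
  posts(first: 50, after: $after) {
    edges { node { id title content insertedAt updatedAt topics { id name } } }
    pageInfo { hasNextPage endCursor }
  }
}
"""

def fetch_page(token, after=None):
    """POST one GraphQL query and return the decoded `posts` connection."""
    payload = json.dumps({"query": POSTS_QUERY, "variables": {"after": after}}).encode()
    req = urllib.request.Request(
        SLAB_API_URL,
        data=payload,
        headers={"Authorization": token, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["posts"]

def fetch_all_posts(token, fetch=fetch_page):
    """Walk the cursor until hasNextPage is false; `fetch` is injectable for testing."""
    posts, cursor = [], None
    while True:
        page = fetch(token, cursor)
        posts.extend(edge["node"] for edge in page["edges"])
        if not page["pageInfo"]["hasNextPage"]:
            return posts
        cursor = page["pageInfo"]["endCursor"]
```

Storing each returned node locally (JSON or SQLite) before any Confluence work starts gives you the baseline to validate the bulk export against.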
Current-state reality: Slab's current help docs describe bulk export as Markdown or Docx — not a full HTML or JSON workspace dump. Plan around the formats Slab documents now, not the formats you wish existed. (help.slab.com)
The Confluence API and Atlassian Document Format (ADF) Translation
This is where Slab-to-Confluence migrations get hard. Confluence Cloud doesn't store pages as Markdown or HTML — it uses Atlassian Document Format (ADF), a JSON-based document tree. (developer.atlassian.com)
ADF is a hierarchical structure of typed nodes: paragraphs, headings, lists, code blocks, tables, and media. Each node can carry attributes, marks (inline formatting), and child content. It's more expressive than Markdown but also more verbose and rigid.
A simple Markdown heading like `## Hello World` becomes:

```json
{
  "type": "doc",
  "version": 1,
  "content": [
    {
      "type": "heading",
      "attrs": { "level": 2 },
      "content": [
        { "type": "text", "text": "Hello World" }
      ]
    }
  ]
}
```

Every page you create via the Confluence REST API v2 must be submitted in ADF. There is no server-side Markdown-to-ADF conversion endpoint.
Why You Can't Skip ADF and Push HTML
The Confluence REST API v1 accepts storage format (XHTML-based), which lands pages in the legacy editor. This looks like a shortcut — skip ADF, push HTML, done.
The problem: Atlassian is deprecating the legacy editor on a hard timeline:
- Phase 1 (January 21, 2026): New pages can no longer be created in the legacy editor.
- Phase 2 (January–February 2026): Opening a legacy page auto-converts it to the cloud editor.
- Phase 3 (April 1, 2026): Full deprecation — all content forced into the cloud editor with no revert option.
Pages pushed via the v1 storage format will be auto-converted. That auto-conversion is lossy for complex content: nested macros, custom CSS, and advanced table layouts break. Content that can't be cleanly converted gets wrapped in a Legacy Content Macro — a read-only block that can't be edited, doesn't support inline comments, and is ignored by Atlassian's Rovo AI features.
Many DIY migration scripts take this shortcut. VEVO's open-source slabporter repo, for example, converts Slab content to Confluence Storage Format — a useful reference implementation that also illustrates the real scope of work: translation logic, attachment handling, mention mapping, and a backlog of link exceptions. A repo is not a migration plan. (github.com)
For a deeper look at how legacy editor deprecation impacts migrations, see our Confluence Server to Cloud migration guide.
The HTML Import Alternative (And Its Limits)
Confluence does have an HTML import path. Users with permission to create spaces can upload a .zip containing .html files and related media, then create a new space from that upload. Atlassian's FAQ is explicit about what survives and what doesn't: headings, images, tables, quotes, dividers, inline code, and links can transfer. But iframe content is unsupported, code blocks and equations may degrade to plain text, customized text styling can be lost, and page-to-page links come across as generic hyperlinks — not durable Confluence page references. (support.atlassian.com)
For a small wiki with simple formatting, this path can work. For an engineering knowledge base with code blocks, internal cross-references, and embedded tools, it will create more cleanup work than it saves.
Markdown-to-ADF Translation Options
Several open-source libraries handle Markdown-to-ADF conversion:
| Library | Language | Notes |
|---|---|---|
| `marklassian` | JavaScript/TypeScript | Lightweight (~12kb gzipped), covers common Markdown syntax, supports embedded raw ADF nodes |
| `md-to-adf` | JavaScript | GitHub-flavored Markdown focused, built for Jira/Confluence |
| `marklas` | Python | Bidirectional (MD↔ADF), supports roundtrip metadata preservation |
| `@atlaskit/editor-markdown-transformer` | JavaScript | Official Atlassian library — heavier dependency, requires manual tree shaking |
None of these handle Slab-specific formatting out of the box. Slab's callout blocks, embedded integrations, and custom formatting require post-processing rules that map Slab-specific Markdown extensions to ADF panel macros or info blocks.
In practice, the migration pipeline that holds up is: Slab content → normalized AST → ADF output → page create/update calls. Skipping the normalization layer usually means you end up debugging edge cases page by page.
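As one example of such a post-processing rule, here is a sketch that maps a Slab-style callout block to an ADF panel node. The `> [!WARNING]`-style markers are hypothetical; inspect how callouts actually serialize in your Slab export before writing real rules.

```python
import re

# Hypothetical marker-to-panel-type mapping; verify against your actual export.
CALLOUT_MARKERS = {"[!INFO]": "info", "[!WARNING]": "warning",
                   "[!ERROR]": "error", "[!NOTE]": "note"}

def callout_to_panel(markdown_block):
    """Map a blockquote callout line to an ADF panel node; fall back to a paragraph."""
    match = re.match(r"^>\s*(\S+)\s+(.*)$", markdown_block.strip())
    panel_type = CALLOUT_MARKERS.get(match.group(1)) if match else None
    text = match.group(2) if (match and panel_type) else markdown_block.strip()
    content = [{"type": "paragraph", "content": [{"type": "text", "text": text}]}]
    if panel_type:
        return {"type": "panel", "attrs": {"panelType": panel_type}, "content": content}
    return content[0]
```

A rule like this lives in the normalization layer: it fires on the AST node, not on raw strings, so the same logic covers callouts regardless of surrounding formatting.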
ADF is effectively a superset of Markdown, which makes conversion lossy in only one direction: ADF supports panels, status badges, mentions, and media nodes that have no Markdown equivalent, so converting ADF down to Markdown drops content. Going the other way, from Slab Markdown to ADF, you won't lose content from the Markdown itself, but you also won't gain Confluence-native features without custom mapping rules.
Mapping Slab Topics to Confluence Spaces and Page Trees
This structural mismatch is the single biggest architecture decision in the migration. Do the information architecture work before you write import code.
Slab's model: Topics are flexible groupings that can be hierarchical. A single post can belong to multiple topics. (slab.com)
Confluence's model: Spaces are isolated containers, each with its own page tree, permissions, and settings. A page lives in exactly one space. Within a space, pages form a strict parent-child tree.
That sounds like a small difference until you hit a post that lives in three Slab topics, or a secret topic that needs different access controls in Confluence. The default mapping should be conservative: fewer spaces, more parent pages, and labels for secondary classification.
The three common mapping strategies:
Strategy 1: One Slab Topic → One Confluence Space
Best for teams with well-separated topic domains (e.g., "Engineering," "Product," "HR"). Each major topic becomes a dedicated Confluence space.
Tradeoff: Posts that belong to multiple Slab topics must be assigned to one space. You lose multi-topic tagging. Confluence labels partially compensate, but they're not equivalent to Slab's topic model.
Strategy 2: All Content → One Confluence Space
Best for smaller teams (<500 posts) who want a single searchable space. Slab topics become top-level parent pages, with individual posts nested underneath.
Tradeoff: Permissions become coarser — Confluence space permissions apply to the whole space. If different topics had different access levels in Slab, you'll need page-level restrictions, which are tedious to manage at scale.
Strategy 3: Hybrid — Major Topics as Spaces, Minor Topics as Labels
Best for medium-to-large orgs. Core team topics (Engineering, Design, Ops) each get a space. Cross-cutting topics (e.g., "Onboarding," "Style Guide") become Confluence labels applied to pages in the appropriate spaces.
Tradeoff: Requires upfront taxonomy work. But it's the approach that best preserves Slab's multi-topic model while fitting Confluence's architecture.
Canonical-home rule: Every Slab post gets one true Confluence page. Everything else — secondary topics, cross-cutting tags — becomes metadata or navigation. This single decision prevents most duplicate-page and broken-link cleanup later.
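The hybrid strategy plus the canonical-home rule can be expressed as a small planning function. The space keys and the list of "major" topics below are illustrative assumptions, not real values; the point is that every post resolves to exactly one space, with remaining topics demoted to labels.

```python
# Illustrative taxonomy: the first major topic a post belongs to picks its
# canonical space; every other topic becomes a Confluence label.
MAJOR_TOPIC_TO_SPACE = {"Engineering": "ENG", "Design": "DES", "Ops": "OPS"}
DEFAULT_SPACE = "DOCS"

def plan_page(post):
    """Return (space_key, labels) for one Slab post dict carrying a `topics` list."""
    topic_names = [t["name"] for t in post["topics"]]
    space = next(
        (MAJOR_TOPIC_TO_SPACE[n] for n in topic_names if n in MAJOR_TOPIC_TO_SPACE),
        DEFAULT_SPACE,
    )
    labels = [n.lower().replace(" ", "-") for n in topic_names
              if MAJOR_TOPIC_TO_SPACE.get(n) != space]
    return space, labels
```

Running this over the full extracted post list before any import gives you the taxonomy review artifact to circulate with stakeholders, while the import code stays dumb.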
Separate source visibility from target permissions. If secret Slab topics are missing from the export because the admin or Slab Bot can't see them, that's a source-access problem, not a Confluence mapping problem. Fix it before the migration window. Use restricted spaces or restricted page branches in Confluence when the source topic was secret or audience-limited in Slab.
For a broader comparison of Confluence's architectural patterns, see our Notion vs. Confluence architecture guide.
Preserving Internal Links Between Slab Posts
Internal links are where most Slab migrations silently break. In Slab, posts link to each other using Slab-internal URLs (e.g., https://yourteam.slab.com/posts/some-post-slug). When those posts land in Confluence, they get new page IDs and URLs. Every internal link now points to a dead Slab URL.
If you migrate thousands of engineering documents without updating internal links, you have effectively destroyed the knowledge base. Engineers rely on cross-references between API specs, architectural decision records, and runbooks.
The fix is a three-pass migration strategy:
Pass 1 — Ingest: Create all pages in Confluence as ADF documents. Don't attempt to fix links yet. As the Confluence API responds with new Page IDs for each created document, log them in a mapping table alongside the original Slab URLs.
Pass 2 — Map: Build the complete link-mapping table as a first-class artifact, not a debug byproduct.
```csv
source_url,source_post_id,target_space,target_page_id,target_url,status
/slab/runbooks/deploy,8421,ENGDOCS,987654,/wiki/spaces/ENGDOCS/pages/987654,current
/slab/oncall/rollback,8533,ENGDOCS,987710,/wiki/spaces/ENGDOCS/pages/987710,current
```

Pass 3 — Rewrite: Query the Confluence API to retrieve the ADF JSON of every migrated page. Scan for old Slab URLs, replace them with the corresponding Confluence page links using the mapping table, and issue a PUT request to update the page.
```python
# Pseudocode: link rewriting pass
for page in migrated_pages:
    adf_body = get_page_body(page.confluence_id)
    updated = False
    for link in find_links(adf_body):  # yields link nodes by reference
        if link.url in slab_to_confluence_map:
            link.url = slab_to_confluence_map[link.url]
            updated = True
    if updated:
        update_page(page.confluence_id, adf_body)
```

This multi-pass approach is non-negotiable. If you skip it, every cross-reference in your knowledge base is broken on day one.
Do the same for attachments and embeds. Slab inline images reference Slab's CDN. Download each image, upload it as a Confluence attachment via the Attachments API, and update the ADF media nodes to reference the new attachment ID. Unsupported embeds (iframes, Figma frames, Loom videos) should be downgraded to explicit links rather than left as broken boxes. (support.atlassian.com)
Don't forget anchor links. Slab supports linking to specific sections within a post. Confluence's cloud editor supports anchor macros, but the anchor ID scheme is different. Section-level links need their own mapping rules.
Confluence API Limits That Affect Migration
Three hard constraints shape how fast — and whether — you can run an automated Slab-to-Confluence migration. For a complete reference, see our Confluence import methods and API limits guide.
Pagination: 250 Objects Per Request (Max)
The Confluence REST API v2 uses cursor-based pagination with a maximum of 250 objects per page (default is 25). When listing spaces, pages, or child pages for tree construction, you must paginate through the full result set. Scripts that don't handle the next cursor will truncate at 250 documents. (developer.atlassian.com)
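A cursor-following sketch in Python, assuming the v2 convention of a `_links.next` relative URL on each response page. The site URL is a placeholder, and `get_json` stands in for whatever HTTP client you use.

```python
import json
import urllib.request

BASE = "https://your-domain.atlassian.net"  # placeholder site

def get_json(path, auth_header):
    """GET a relative API path and decode the JSON response."""
    req = urllib.request.Request(BASE + path, headers={"Authorization": auth_header})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def list_all_pages(auth_header, get=get_json):
    """Follow `_links.next` until the API stops returning a cursor."""
    path, results = "/wiki/api/v2/pages?limit=250", []
    while path:
        body = get(path, auth_header)
        results.extend(body.get("results", []))
        path = body.get("_links", {}).get("next")  # relative URL, absent on the last page
    return results
```

The same loop shape works for spaces and child-page listings; the failure mode it prevents is exactly the silent truncation at one page of results.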
Points-Based Rate Limiting
As of March 2, 2026, Confluence Cloud enforces a points-based rate limiting model for Forge, Connect, and OAuth 2.0 apps. Each API call consumes points based on operation complexity — a simple GET costs less than a page creation with embedded media. API-token-based traffic (used in most migration scripts) continues under existing burst rate limits, but you'll still hit HTTP 429 responses during sustained write operations. (developer.atlassian.com)
The practical limit for migration scripts: plan for 50–80 page creates per minute with exponential backoff on 429s. Your scripts must honor Retry-After headers. Attempting to blast hundreds of parallel calls will get you throttled fast.
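A minimal throttling wrapper along those lines, with `request_fn` standing in for whatever HTTP call your pipeline makes. It honors `Retry-After` when the server sends one and falls back to exponential backoff with jitter otherwise.

```python
import time
import random

def call_with_backoff(request_fn, max_retries=6):
    """Retry on HTTP 429, honoring Retry-After when present, otherwise
    exponential backoff with jitter. `request_fn` returns (status, headers, body)."""
    for attempt in range(max_retries):
        status, headers, body = request_fn()
        if status != 429:
            return status, body
        retry_after = headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    raise RuntimeError("still rate limited after %d retries" % max_retries)
```

Wrapping every page create and update in this keeps the migration inside the 50–80 creates-per-minute envelope without hand-tuned sleeps.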
The 200 MB XML Import Wall
Confluence Cloud's native space import enforces a strict 200 MB limit on uncompressed XML (specifically the entities.xml file inside the backup ZIP). Attachments don't count against this limit, but page content, history, and metadata do. A space with 10,000+ pages blows past this easily.
This limit exists as an open issue on Atlassian's tracker — closed as "Won't Fix." The workaround is the API-driven approach: bypass XML import entirely and create pages programmatically via the REST API. The v2 create-page docs also list 413 Request Entity Too Large as a possible error on individual requests, so page payloads with heavy inline content need size checks too. (developer.atlassian.com)
Step-by-Step: The API-Driven Migration Method
For teams consolidating a real engineering wiki, here's the repeatable workflow:
Step 1: Audit Access and Freeze Scope
Add the Slab Bot to all topics — including private ones — you want to migrate. Verify access by running a test query against the GraphQL API to confirm all expected posts are returned. Secret topics, drafts, and inaccessible content need to be resolved before you size the migration.
Step 2: Extract All Posts via Slab's GraphQL API
Paginate through the posts query, pulling ID, title, Markdown content, topic assignments, creation dates, and author info. Store everything locally in a structured format (JSON or SQLite) for quick lookups. Use the bulk export as a cross-reference for the baseline content payload.
Step 3: Create Confluence Spaces
Based on your topic-to-space mapping decision (one topic per space, single space, or hybrid), create the target Confluence spaces via the REST API. Capture space keys.
Step 4: Convert Markdown to ADF
Run each post's Markdown through your ADF translation pipeline. Apply custom rules for Slab-specific formatting: callouts map to panel macros, embeds become smart links or plain URLs, code blocks need correct language attributes. Validate the output ADF against Atlassian's schema.
Step 5: Create Pages via Confluence REST API v2
For each post, create a page in the target space with the ADF body. Set parent page IDs to build the correct tree structure. Explicitly set page position if you need to preserve ordering. Throttle requests to 50–80 per minute. Log every (slab_post_id, confluence_page_id) mapping.
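As a sketch of that create call, assuming the v2 `/wiki/api/v2/pages` endpoint and its documented body shape (`spaceId`, `parentId`, and an `atlas_doc_format` body whose `value` is a serialized JSON string). Verify field names against the current API spec before relying on this.

```python
import json
import urllib.request

def build_page_payload(space_id, title, adf_doc, parent_id=None):
    """Request body for POST /wiki/api/v2/pages. Note the ADF document goes into
    body.value as a serialized JSON string, not as a nested object."""
    payload = {
        "spaceId": space_id,
        "status": "current",
        "title": title,
        "body": {"representation": "atlas_doc_format", "value": json.dumps(adf_doc)},
    }
    if parent_id is not None:
        payload["parentId"] = parent_id  # builds the page tree; omit for top-level pages
    return payload

def create_page(site, auth_header, payload):
    """POST the payload; returns the new page id for the link-mapping table."""
    req = urllib.request.Request(
        site + "/wiki/api/v2/pages",
        data=json.dumps(payload).encode(),
        headers={"Authorization": auth_header, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]
```

Logging `(slab_post_id, returned_page_id)` immediately after each `create_page` call is what makes Pass 2 of the link rewrite possible.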
Step 6: Rewrite Internal Links (Second Pass)
Using the mapping table from Step 5, scan every migrated page's ADF body for Slab URLs and replace them with Confluence page links. Update pages via the API. Emit an exception report for deleted, missing, or access-blocked targets.
Step 7: Migrate Attachments and Images
Slab inline images reference Slab's CDN. Download each image, upload it as a Confluence attachment via the Attachments API, and update the ADF media nodes to reference the new attachment ID. Downgrade unsupported embeds to explicit links.
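Once an image is uploaded via the attachments API, the page's ADF media nodes still point at Slab's CDN. A recursive rewrite pass handles that. The node and attribute names here (external media carrying a `url` attr, file media carrying an attachment `id`) follow common ADF shapes but should be verified against your actual converted documents.

```python
def rewrite_media_nodes(adf_node, cdn_to_attachment):
    """Recursively swap Slab CDN references in ADF media nodes for uploaded
    Confluence attachment ids. Attribute names are illustrative; real file
    media may also need a `collection` attr, per the ADF spec."""
    if isinstance(adf_node, dict):
        if adf_node.get("type") == "media":
            old_url = adf_node.get("attrs", {}).get("url")
            if old_url in cdn_to_attachment:
                adf_node["attrs"] = {"type": "file", "id": cdn_to_attachment[old_url]}
        for child in adf_node.get("content", []):
            rewrite_media_nodes(child, cdn_to_attachment)
    return adf_node
```

Running this as part of the same update pass as link rewriting avoids a third round of PUT requests against the rate limiter.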
Step 8: Validate
Spot-check 10–15% of migrated pages for formatting accuracy, working links, and image rendering. Run an automated link audit to catch any remaining Slab URLs that weren't rewritten. Verify permissions, search indexing, and high-value runbooks in a staging space before the production cutover.
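The automated link audit can be as simple as serializing each page's ADF and regex-scanning it, which catches leftover Slab URLs in link marks, attrs, and plain text alike. The domain below is a placeholder for your workspace.

```python
import json
import re

def audit_slab_links(adf_doc, slab_domain="yourteam.slab.com"):
    """Return any Slab URLs still present anywhere in a page's ADF JSON."""
    blob = json.dumps(adf_doc)
    pattern = r"https://" + re.escape(slab_domain) + r'[^"\s\\]*'
    return sorted(set(re.findall(pattern, blob)))
```

Any non-empty result goes on the exception report from Step 6; a clean run across all migrated pages is a reasonable go/no-go gate for cutover.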
What Breaks: Known Failure Modes
Every Slab-to-Confluence migration has predictable breakage points. Knowing them upfront saves you from discovering them in production.
Slab embeds → bare URLs. Slab's embedded integrations (GitHub PR previews, Figma frames, Loom videos, Google Docs) have no equivalent in Confluence's ADF. They migrate as plain URLs. Confluence Smart Links render previews for some services, but raw iframe embeds from Slab won't translate. Plan downgrade rules, not perfect preservation. (support.atlassian.com)
Comments and discussions are lost. Slab's post comments don't appear in the standard export or in API responses suitable for mapping to Confluence's inline comment system. If your team uses Slab comments heavily for decision documentation, archive them separately before cutover. (help.slab.com)
Slab callout blocks. These must be mapped to Confluence's panel macros (info, note, warning, error). The mapping isn't automatic — your ADF translation pipeline needs explicit rules for each callout type.
Code block language tags. Slab preserves language identifiers on fenced code blocks. ADF code blocks also support language attributes, but the tag format differs. Most Markdown-to-ADF libraries handle common languages, but verify edge cases (tsx, hcl, yaml).
Table formatting. Slab supports basic Markdown tables. ADF tables are structurally more complex — each cell is its own node with content. Simple tables translate cleanly; tables with merged cells or complex formatting don't.
Post ordering within topics. Slab allows manual ordering of posts within a topic. Confluence page trees also support ordering, but you must explicitly set the position when creating pages. If you don't, pages land in creation-time order.
Authorship. Confluence's HTML import path labels all pages and comments as created by the importing user. If preserving original authorship matters for audit or context, you need the API-driven approach with explicit author mapping — and even then, Confluence Cloud does not let you backdate the "created by" field on pages. (support.atlassian.com)
When to Build vs. When to Get Help
If your Slab workspace has fewer than 50 posts, no private topics, and minimal internal linking, you can handle this with the manual export and a Markdown-to-ADF script in a weekend.
If you're dealing with hundreds of posts, a complex topic taxonomy, heavy internal cross-references, and embedded integrations — the edge cases multiply. The ADF translation pipeline alone typically takes 2–3 engineering days to build and debug. Add link rewriting, image re-hosting, and the second-pass update logic, and you're looking at a week or more of focused engineering time.
At ClonePartner, we've built Slab extraction and ADF translation pipelines that handle the full scope: topic-to-space mapping, multi-pass link rewriting, image migration, and formatting preservation — all API-driven to bypass Confluence's XML import limits. We handle the edge cases that break DIY scripts so your engineers keep shipping product.
If you want to understand the full landscape of knowledge base migration planning, our knowledge base migration checklist covers the process end-to-end.
Frequently Asked Questions
- Can I import Slab content directly into Confluence?
- No. Confluence has no native Slab importer. You can export Markdown from Slab and either use Confluence's space import (limited to 200 MB of uncompressed XML) or build an API-driven pipeline that translates Markdown to Atlassian Document Format (ADF) and creates pages via the Confluence REST API. Pushing raw HTML through the v1 API lands pages in the legacy editor, which Atlassian is fully deprecating in April 2026.
- Does Slab have an API for migration?
- Yes, Slab offers a GraphQL API, but it's only available on Business or Enterprise plans. The API returns content based on what the Slab Bot user has access to — secret topics and drafts are excluded unless Slab Bot is explicitly granted access.
- How do I preserve internal links when migrating from Slab to Confluence?
- Build a link-mapping table during migration. First, import all pages to generate new Confluence Page IDs. Then build a mapping table linking old Slab URLs to the new IDs. Finally, run a second pass through every page's ADF body, replacing Slab URLs with the corresponding Confluence page links.
- What is the Confluence Cloud XML import size limit?
- Confluence Cloud enforces a strict 200 MB limit on the uncompressed entities.xml file inside the import archive. Attachments don't count toward this limit, but page content, history, and metadata do. Spaces exceeding 200 MB must be migrated via the REST API instead.
- Will Slab comments and embeds survive the migration?
- Usually not cleanly. Slab's post comments are absent from the standard export and not reliably available via the API for mapping to Confluence's inline comment system. Embedded integrations (GitHub previews, Figma frames, Loom videos) migrate as plain URLs since Confluence does not support iframe content in ADF.