SharePoint to Coda Migration: Methods, API Limits & Data Mapping
No native importer exists for SharePoint to Coda. This guide covers CSV export, Graph API extraction, Coda API limits, and field mapping for lists and pages.
There is no native migration path from SharePoint to Coda. No importer, no connector, no "Move to Coda" button. Coda's built-in importers support CSV, Notion, Confluence, Airtable, Trello, Google Docs, and Markdown — but not SharePoint.
SharePoint stores pages as .aspx files built from complex Web Part JSON structures, lists as typed columns backed by SQL, and documents in libraries with versioned metadata. Coda stores everything as docs containing pages of rich text and relational tables with formulas. These data models are fundamentally incompatible.
The realistic approach is a two-track extraction: pull SharePoint list data via CSV export or Microsoft Graph API, then separately extract Site Page content by parsing the canvasLayout and webParts structures through the Graph API. Once extracted, list data goes into Coda tables via CSV import or API row insertion, while page content requires Markdown conversion and manual rebuild or a managed migration service.
| Method | Fidelity | Lists/Tables | Page Content | Attachments | Best For |
|---|---|---|---|---|---|
| Manual CSV Export → Import | Low–Medium | ✅ (≤30K rows) | ❌ | Manual re-upload | Small lists (<50 columns) |
| Copy-Paste (Excel/Sheets) | Low | Partial | ❌ | ❌ | Quick table transfers |
| Zapier / Make (iPaaS) | Medium | ✅ (row-by-row) | ❌ | ❌ | Ongoing sync, small lists |
| API-Driven Scripts (Graph + Coda API) | High | ✅ | Partial (text only) | ✅ (separate upload) | Full workspace migrations |
| ClonePartner Managed Migration | Highest | ✅ | ✅ | ✅ | Complex sites, zero downtime |
The hard platform constraints behind that summary: Coda's CSV importer is row-based and capped at 10,000 rows per file; SharePoint's CSV export caps at 30,000 rows and doesn't include attachments; modern page bodies only come from the Graph canvasLayout endpoint; and Zapier exposes row-centric actions only.
For a deep dive on getting data out of SharePoint first, see our SharePoint export guide. If you're comparing the same SharePoint-to-block-editor mismatch in another target, our SharePoint to Notion guide covers similar structural issues. For background on when CSV works and when it breaks, see our CSV for SaaS migrations guide.
Understanding the Data Model Mismatch: Lists vs. Tables, Pages vs. Canvas
Before exporting anything, understand what you're translating between. Misunderstanding the structural differences between SharePoint and Coda is the #1 cause of failed migrations.
SharePoint's architecture
SharePoint organizes content into Sites (exposed as `site` resources in the Microsoft Graph API; `SPWeb` in the server-side object model), each containing:
- Lists — Typed columns (Choice, Lookup, Person, Managed Metadata, Calculated, Currency, DateTime) backed by SQL. Each row is a `listItem` with `fields`.
- Document Libraries — File repositories with metadata columns. Each file is a `driveItem` with versioning. The same object matters in two forms: `listItem` for metadata and `driveItem` for the actual file and file versions.
- Site Pages — Modern `.aspx` pages rendered from a `canvasLayout` JSON structure containing `horizontalSections`, `verticalSection`, and `webParts`. These are not standard HTML.
The Graph API returns pages as JSON objects with properties like id, name, webUrl, title, pageLayout, createdBy, and lastModifiedBy, and returns all pages in a site as a paginated collection.
Coda's architecture
Coda organizes content into Docs — the atomic unit. Each doc can contain:
- Pages — Canvas-based rich text with embedded content blocks.
- Tables — Relational data with typed columns (Text, Number, Date, People, Select, Lookup, Formula, Button).
- Views — Filtered/sorted perspectives on a single table.
- Automations — Rules triggered by row changes, schedules, or buttons.
Where the models collide
| SharePoint Concept | Coda Equivalent | Migration Complexity |
|---|---|---|
| Site | Doc (or Folder of Docs) | Low |
| List | Table | Medium — column types need remapping |
| List Item | Table Row | Low |
| Document Library | Table + file attachments | High — files need separate handling |
| Site Page (.aspx) | Page (canvas) | Very High — Web Parts have no Coda equivalent |
| Managed Metadata | Select List or Lookup | Medium — taxonomy flattening required |
| Person column | People column | Medium — email matching required |
| Calculated column | Formula column | High — formula syntax is completely different |
| Lookup column | Relation + Lookup formula | Medium — foreign keys must be re-established |
| Page versioning | Doc version history | Not migratable — Coda versions are per-doc, not per-page |
The hardest part of this migration is Site Pages. SharePoint stores page content as nested Web Part structures — not HTML, not Markdown, not anything Coda can directly ingest. A good SharePoint-to-Coda migration is typically two-pass: move flat entities first, then rebuild page bodies, relations, and internal links after the target objects exist.
Exporting Data from SharePoint: Methods and Limitations
The extraction method depends on what you're migrating: lists (structured data), pages (content), or files (document libraries and attachments).
Exporting SharePoint Lists
Option 1: Native CSV export (simplest, most limited)
SharePoint Online offers a built-in "Export to CSV" option from the list command bar. The maximum number of rows you can export to a CSV file is 30,000. (The same option briefly appeared for document libraries, but Microsoft disabled it after confirming it had been enabled unintentionally.)
Key limitations:
- 30,000 row cap — lists larger than this need pagination or API extraction
- No attachment export — file attachments on list items are not included
- Column type flattening — Person, Lookup, and Managed Metadata columns export as display names, not IDs
- The export feature cannot export SharePoint list color coding and JSON formatting.
- If any single cell exceeds Excel's maximum cell size limit of 32,767 characters, the information that exceeds this limit will be lost.
If your SharePoint article body lives in a multi-line text field and some entries exceed 32,767 characters, do not use Excel as your staging format. Go straight to API extraction.
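A quick pre-flight audit catches oversized cells before you commit to Excel staging. A minimal sketch, assuming the export is a UTF-8 CSV; the helper names are illustrative, not part of any SharePoint tooling:

```python
import csv

EXCEL_CELL_LIMIT = 32767  # Excel silently truncates anything past this per cell

def oversized_cells(rows, limit=EXCEL_CELL_LIMIT):
    """Yield (row_index, column, length) for each cell Excel would truncate.
    `rows` is any iterable of dicts, e.g. csv.DictReader over the export."""
    for i, row in enumerate(rows):
        for col, value in row.items():
            if value is not None and len(str(value)) > limit:
                yield (i, col, len(str(value)))

def audit_export(path):
    """Audit a SharePoint CSV export; a non-empty result means: use the API."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(oversized_cells(csv.DictReader(f)))
```

If `audit_export` returns anything, route that list through Graph API extraction instead.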
Option 2: Export to Excel (.iqy)
The "Export to Excel" option downloads a .iqy file that creates a one-way data connection. This option can sometimes encounter delays or incomplete data transfers because it downloads a .iqy file which dynamically refreshes data from SharePoint during export. Better for ongoing reads than one-time migration exports.
Option 3: Microsoft Graph API (recommended for programmatic extraction)
The Graph API gives full control over list data extraction:
```
GET /sites/{site-id}/lists/{list-id}/items?expand=fields(select=Column1,Column2)
```

This returns paginated results with typed field values. Filter, select specific columns, and handle large lists by iterating through `@odata.nextLink` pagination tokens.
The Graph API heavily throttles bulk requests. Implement exponential backoff and handle 429 Too Many Requests errors when extracting large document libraries.
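A hedged sketch of that pagination-plus-backoff loop. The HTTP call is injected as a callable so the logic is testable offline; in practice you would pass the `.get` of an authenticated `requests.Session`:

```python
import time

def fetch_all(url, get, max_retries=5):
    """Drain a paginated Graph collection, honoring 429 throttling.

    `get` is any callable returning a response object (e.g. the .get of
    an authenticated requests.Session), injected for offline testing.
    """
    items, retries = [], 0
    while url:
        resp = get(url)
        if resp.status_code == 429 and retries < max_retries:
            retries += 1
            # Graph sends Retry-After when throttling; otherwise back off exponentially
            time.sleep(float(resp.headers.get("Retry-After", 2 ** retries)))
            continue
        resp.raise_for_status()
        retries = 0
        data = resp.json()
        items.extend(data.get("value", []))
        url = data.get("@odata.nextLink")  # absent on the last page
    return items
```

Usage would look like `fetch_all(f"https://graph.microsoft.com/v1.0/sites/{site_id}/lists/{list_id}/items?expand=fields", session.get)`.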
Option 4: PowerShell / PnP
For on-premises SharePoint or hybrid environments, PowerShell with PnP modules lets you script bulk extraction to CSV with full control over batching and field selection.
Exporting SharePoint Site Pages
Site Pages are not exportable via CSV — they're stored as Web Part structures inside a canvasLayout.
The Graph API exposes page web parts through endpoints like GET /sites/{site-id}/pages/{page-id}/microsoft.graph.sitePage/canvasLayout/horizontalSections/{id}/columns/{id}/webparts/{index}.
To extract page content:
- List all pages: `GET /sites/{site-id}/pages/microsoft.graph.sitePage`
- Expand canvasLayout: Use the `$expand=canvasLayout` query string parameter to include the content of an item when retrieving the metadata.
- Parse the Web Parts: Each section contains `textWebPart` objects (with `innerHtml`) and `standardWebPart` objects (with structured `data` properties).
Known Graph API instability: The web parts and canvasLayout $expand endpoint returns HTTP 500 for some modern Site Pages. For some pages it works correctly, but roughly half of pages in some tenants fail. Budget time for fallback extraction methods.
Classic SharePoint blogs add another layer of complexity. Microsoft notes that blogs and wikis are complex forms made up of multiple tables — Posts, Categories, Comments, Links, and Other Blogs. If your articles live in a classic blog, plan to migrate Posts, Categories, and Comments as separate entities. (learn.microsoft.com)
If you are working in a large tenant and don't trust site owners' inventories, Graph Search is useful for discovery. Microsoft's search API supports queryString filters like path:"...", isDocument=true, and date restrictions such as (LastModifiedTime > 2021-02-01 AND Created > 2021-02-01), which makes it practical to locate article content outside the expected site. (learn.microsoft.com)
Exporting Files and Attachments
For document libraries, the same object matters in two forms: listItem for metadata and driveItem for the actual file and file versions. If you only export rows, you miss the binaries. If you only export files, you miss column metadata. Pull both.
List item attachments need separate handling — Microsoft's basic export flow does not save attachments. Teams typically use the SharePoint REST AttachmentFiles endpoint for download. Treat list attachments as a separate extraction stream from the main list export. (learn.microsoft.com)
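Assuming you call the REST endpoint with an `Accept: application/json;odata=nometadata` header (so the response carries a flat `value` array of attachment metadata), a small helper can turn one item's attachment listing into download URLs. The function name and response shape here are illustrative:

```python
def attachment_downloads(site_url, list_title, item_id, attachment_json):
    """Given the JSON returned by the AttachmentFiles endpoint, yield
    (filename, absolute download URL) pairs using the $value form.

    Assumes the nometadata response shape: {"value": [{"FileName": ...}, ...]}
    """
    for att in attachment_json.get("value", []):
        name = att["FileName"]
        yield name, (
            f"{site_url}/_api/web/lists/getbytitle('{list_title}')"
            f"/items({item_id})/AttachmentFiles('{name}')/$value"
        )
```

Each yielded URL still needs an authenticated GET to fetch the bytes before re-uploading to Coda.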
For a comprehensive breakdown of every SharePoint export method (including PnP PowerShell, CSOM, and content packages), see our SharePoint export guide.
Formatting and Transforming Data for Coda
SharePoint data doesn't map cleanly into Coda's expected formats. You need a transformation layer between extraction and import.
Transforming list data
SharePoint list exports require cleanup before Coda import:
- Person columns: SharePoint exports display names (e.g., "Jane Smith"). Coda's People column type expects email addresses. You need a user directory lookup to map names → emails. Coda's People choices come from users who already have access to the doc or its folder, so ex-employees and outside contributors often cannot be represented cleanly as People values. (help.coda.io)
- Choice / Managed Metadata columns: Export as semicolon-delimited strings. Coda's Select List accepts comma-delimited values. Reformat the delimiter. For taxonomy-heavy intranets, preserve the original term labels and GUIDs even if you also map a friendly category field in Coda.
- Lookup columns: Export as display values only — the relational link is lost. In Coda, rebuild these as Relation columns pointing to another table, then use Lookup formulas to pull related values. Load parent tables first, then resolve child references in a second pass.
- DateTime columns: SharePoint exports ISO 8601 timestamps. Coda parses most date formats, but verify timezone handling — SharePoint stores UTC, and your CSV export may localize it.
- Calculated columns: These contain display values only, not the formulas. Manually recreate the formula logic in Coda's formula language. There is no automated translation.
- Hyperlink columns: SharePoint stores URL + description as a pair. CSV export may split or concatenate them. Coda's Link column type accepts URLs, but the display text needs a separate column.
For authors, keep two fields during migration: a raw author_email or author_name text column, and an optional Coda People column populated only where the person still has access to the doc or folder.
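Two of the transforms above reduce to small pure functions. A sketch — the `;#` delimiter handling reflects SharePoint's classic multi-value encoding, while modern exports may use a plain semicolon, so both are handled:

```python
def split_choices(raw):
    """Reformat a SharePoint multi-choice export (';#'- or ';'-delimited)
    into the comma-delimited form Coda's Select List expects."""
    parts = raw.replace(";#", ";").split(";")
    return ", ".join(p.strip() for p in parts if p.strip())

def resolve_person(display_name, directory):
    """Map an exported display name to an email via a user-directory dict;
    fall back to the raw name so nothing is silently dropped."""
    return directory.get(display_name, display_name)
```

Run every Person-type cell through `resolve_person` with a directory exported from Entra ID (or a manual mapping), and keep the fallbacks in a text column.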
Field mapping reference
| SharePoint Source | Recommended Coda Target | Notes |
|---|---|---|
| Page title / post title | Page name or Title text column | Keep original source URL and source ID alongside it |
| Multi-line article body | Coda page canvas | Convert webpart output to cleaned HTML or Markdown first |
| Choice | Select list | Safe if values are stable |
| Multi-choice | Multi-select | Normalize capitalization first |
| Lookup | Relation column + Lookup formula | Load referenced table first, then resolve relations |
| Person/Group | People column or text/email column | Use People only when the user exists in the Coda doc/folder |
| Managed metadata / taxonomy | Text + term ID, or taxonomy table + relation | Preserve raw term label and GUID if governance matters |
| Created / Modified | Custom date or date-time columns | Preserve original timestamps explicitly |
| Created By / Modified By | Text/email, optionally People | Do not rely on target system metadata for historical authorship |
| Attachments | File column | Multiple files per row are supported |
| Images | Image column, file column, or page embed | Rewrite references after file upload |
| Source page URL | URL mapping table | Use this to repair internal links in a second pass |
| Version history | Separate audit table or archive | Do not force it into the current-state row |
Microsoft's columnDefinition API surface is still missing some SharePoint field types — in some cases you only get basic column properties without a populated type facet. Plan for manual inspection when mapping complex column types. (learn.microsoft.com)
Transforming page content
SharePoint textWebPart objects contain innerHtml — HTML fragments like <p><b>Hello!</b></p>. The conversion pipeline:
- Iterate through the `canvasLayout`: Read the rows and columns to determine the structural order of the content.
- Extract `innerHtml` from each `textWebPart`. Identify `standardWebPart` objects separately — these may be images, embedded views, or custom widgets.
- Convert HTML → Markdown using a library like `turndown` (JavaScript) or `markdownify` (Python).
- Strip unsupported elements — Coda doesn't render `<iframe>`, `<script>`, or custom Web Part embeds. Sanitize SharePoint-specific div classes and span tags.
- Handle images — `standardWebPart` image references point to SharePoint-hosted URLs. Since SharePoint images are authenticated, download them and re-upload to Coda, or host them at publicly accessible (but secure) temporary URLs and let Coda ingest them.
```python
# Example: Extract text from SharePoint page canvasLayout
from markdownify import markdownify as md

def extract_page_markdown(graph_client, site_id, page_id):
    # graph_client: any authenticated client with a .get() method
    # (e.g. a requests.Session with a bearer token header)
    url = (f"https://graph.microsoft.com/v1.0/sites/{site_id}/pages/{page_id}"
           f"/microsoft.graph.sitePage?$expand=canvasLayout")
    response = graph_client.get(url)
    page = response.json()

    markdown_parts = []
    for section in page.get("canvasLayout", {}).get("horizontalSections", []):
        for column in section.get("columns", []):
            for wp in column.get("webparts", []):
                if wp.get("@odata.type") == "#microsoft.graph.textWebPart":
                    html = wp.get("innerHtml", "")
                    markdown_parts.append(md(html))
    return "\n\n".join(markdown_parts)
```

What you'll lose in page conversion: SharePoint Web Parts like Hero banners, News feeds, Quick Links, Highlighted Content, Weather, embedded Power BI, and custom SPFx components have no Coda equivalent. These will either be dropped entirely or reduced to plain text descriptions. Plan for manual recreation of visual layouts.
Importing into Coda: CSV vs. API vs. iPaaS vs. Managed Migration
Method 1: CSV Import (lists only)
Coda supports importing CSVs up to 10,000 rows. If your CSV exceeds 10,000 rows, Coda recommends uploading this data in batches.
Steps:
- Type `/import` in the Coda doc canvas
- Select CSV from the import options
- Leave the "Use first row as headers" option toggled on and Coda will convert the first row into column names
- If you add the CSV to an existing table, you'll then be prompted to map columns from the file to the existing table's columns
CSV import limitations for SharePoint migrations:
- 10,000 row cap per import — large SharePoint lists need multiple batches
- No rich text — page content can't be imported this way
- No attachments — files must be uploaded separately
- The importer will only import the first workbook (tab) from your spreadsheet. If you export multiple SharePoint lists to a single Excel file, import each tab separately.
- CSV import in Coda does not detect duplicates and is not capable of updating existing pages automatically.
CSV is a good fit for flat tables like article registries, comments, categories, author directories, and redirect maps. It is the wrong tool for page bodies, Site Pages, document libraries, or anything that depends on embedded structure. For more on when CSV helps and when it hurts, see our CSV for SaaS migrations guide.
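Splitting an oversized export into cap-sized batch files is mechanical. A sketch under the 10,000-row cap described above; the `_partN` file-naming scheme is an arbitrary choice:

```python
import csv

def split_csv(path, batch_size=10_000):
    """Split a large CSV export into files under Coda's per-import row cap.
    The header row is repeated in every batch file; returns the paths written."""
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        batch, n, written = [], 0, []
        for row in reader:
            batch.append(row)
            if len(batch) == batch_size:
                written.append(_write_batch(path, n, header, batch))
                batch, n = [], n + 1
        if batch:  # final partial batch
            written.append(_write_batch(path, n, header, batch))
    return written

def _write_batch(path, n, header, rows):
    out = f"{path.rsplit('.', 1)[0]}_part{n}.csv"
    with open(out, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
    return out
```

Import each resulting file into the same Coda table in sequence; the header repetition keeps every batch self-describing.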
Method 2: Coda API (programmatic row insertion)
The Coda API is a RESTful API that lets you programmatically interact with Coda docs: list and search docs, create new docs, discover pages, tables, formulas, and controls, and read, insert, upsert, update, and delete rows.
For SharePoint list data, the Coda API's upsert endpoint is the best fit:
```
POST /docs/{docId}/tables/{tableId}/rows
Content-Type: application/json

{
  "rows": [
    {
      "cells": [
        {"column": "c-Title", "value": "Q1 Planning Doc"},
        {"column": "c-Status", "value": "Active"},
        {"column": "c-Owner", "value": "jane@company.com"}
      ]
    }
  ],
  "keyColumns": ["c-Title"]
}
```

Coda API rate limits are the primary bottleneck for bulk migration:
| Operation | Rate Limit |
|---|---|
| Reading data | 100 requests per 6 seconds |
| Writing data (POST/PUT/PATCH) | 10 requests per 6 seconds |
| Writing doc content data | 5 requests per 10 seconds |
| Listing docs | 4 requests per 6 seconds |
Request payloads are capped at 2 MB, with an additional limit of 85 KB for any given row.
If docs become too large, they can reach a point where they can no longer be supported by the Coda API. On all plan types, docs with a size of 125 MB or more will no longer be accessible via the API.
The Coda API returns 202 Accepted for many write operations because writes are queued for processing rather than applied synchronously. Budget for checking mutation status and be aware that API reads can lag the live doc by a few seconds during heavy import. If you need the freshest snapshot during QA, use the X-Coda-Doc-Version: latest header and handle the possible 400 if the snapshot is not ready. (coda.io)
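A polling helper for that mutation-status check might look like the following. The path matches Coda's documented `GET /mutationStatus/{requestId}`; injecting the HTTP call as a callable is just a testability choice, and in practice you would wrap an authenticated GET returning parsed JSON:

```python
import time

def wait_for_mutation(request_id, get_json, timeout=60, interval=2):
    """Poll Coda's mutation-status endpoint until a queued write is applied.
    `get_json` is a callable returning the parsed JSON body of the response;
    returns False if the mutation does not complete within `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_json(f"https://coda.io/apis/v1/mutationStatus/{request_id}")
        if status.get("completed"):
            return True
        time.sleep(interval)
    return False
```

Call it with the `requestId` returned in each 202 response body before validating the affected rows.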
Critical Coda API limitation for pages: The API's page operations are limited to listing pages in a doc and updating page title/image/icon. This means SharePoint Site Page content cannot be reliably pushed into Coda pages via API alone. For page body content, you must use manual paste, Markdown import, or a managed migration service.
If you are migrating a SharePoint library with thousands of items, implement a queue system with exponential backoff. The Coda API supports batching row inserts (up to 500 rows per request for tables), which helps with throughput. Use IDs everywhere — Coda's API warns that names are fragile and can change, and duplicate names can exist. For more on Coda's API constraints, see our Coda to Confluence migration guide.
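The payload-building side of that queue can be sketched as a pure function. The one-to-one `column_map` and the 500-row batch size are taken from the text above; pacing (roughly one write per 0.6 s to stay under 10 writes per 6 seconds) still has to happen in the sending loop:

```python
def build_upsert_batches(items, column_map, key_columns, batch_size=500):
    """Turn extracted SharePoint field dicts into Coda upsert payloads.

    column_map maps SharePoint field names to Coda column IDs; key_columns
    makes re-runs idempotent (matched rows update instead of duplicating).
    """
    rows = [
        {"cells": [{"column": coda_col, "value": item.get(sp_field)}
                   for sp_field, coda_col in column_map.items()]}
        for item in items
    ]
    return [
        {"rows": rows[i:i + batch_size], "keyColumns": key_columns}
        for i in range(0, len(rows), batch_size)
    ]
```

Each returned dict is one POST body for the rows endpoint shown above; send them sequentially with a sleep between requests and exponential backoff on 429.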
Method 3: Zapier / Make (ongoing sync only)
With an iPaaS connector, whenever a new list item is added in SharePoint, a corresponding row is created in Coda. Zapier supports SharePoint triggers like "New List Item" and "Updated List Item" paired with Coda actions like "Create Row" and "Upsert Row."
This works for:
- Ongoing sync of new SharePoint list items to Coda tables
- Small, incremental data flows during a transition period
This does not work for:
- Bulk historical migration (Zapier processes items one at a time)
- Page content migration (no trigger for page body content)
- Attachment migration (file content isn't passed through Zapier)
- Preserving original timestamps (items are created with the current time)
If you need coexistence instead of a hard freeze during transition, a useful pattern is: backfill once with API scripts, then sync list-item deltas via Zapier until cutover. Microsoft Graph includes a delta method for list items, which makes this practical for structured lists.
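The merge step of that delta pattern can be kept pure: feed each page from Graph's list-item `delta` endpoint into a local snapshot keyed by item ID, and persist the returned `deltaLink` for the next poll. A sketch, assuming the standard delta response shape:

```python
def apply_delta(snapshot, delta_page):
    """Merge one page of Graph delta results into an id-keyed snapshot.

    Items carrying an @removed facet are deleted; everything else is
    upserted. Returns the @odata.deltaLink (if present on this page) to
    persist for the next polling cycle.
    """
    for item in delta_page.get("value", []):
        if "@removed" in item:
            snapshot.pop(item["id"], None)
        else:
            snapshot[item["id"]] = item
    return delta_page.get("@odata.deltaLink")
```

Pairing this with the upsert payloads from the API method gives you incremental sync without replaying the full list each cycle.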
Method 4: ClonePartner Managed Migration
For organizations with complex SharePoint environments — multiple sites, deep page hierarchies, large document libraries, custom Web Parts, and metadata-heavy lists — a managed migration handles the edge cases that break DIY approaches.
ClonePartner's approach to SharePoint-to-Coda migrations:
- Web Part translation: We parse the full `canvasLayout` structure and convert `textWebPart` content into native Coda blocks, preserving heading levels, bold/italic formatting, and embedded images.
- Dynamic URL mapping: We maintain a lookup table that translates every SharePoint `.aspx` URL to its corresponding `coda.io/d/...` URL, then rewrite all internal links in bulk post-migration.
- Metadata preservation: Since the Coda API attributes all API-created content to the token owner, we map historical `createdBy`, `modifiedBy`, and timestamp fields to custom Coda columns so your audit trail survives.
- Attachment extraction: We pull files from SharePoint Document Libraries via the Graph API's `driveItem` endpoints and embed them into the appropriate Coda tables and pages.
Handling Edge Cases: Attachments, Internal Links, and Metadata
These areas cause the most post-migration support tickets. Plan for them before you start.
Broken internal links
SharePoint internal links follow patterns like:
- `/sites/marketing/SitePages/Q1-Plan.aspx`
- `/sites/hr/Lists/Employees/AllItems.aspx`
Coda URLs follow a completely different structure:
coda.io/d/doc-name_d1234/Page-Name_su5678
Every internal link in every migrated page and list cell will break unless you build a URL mapping table during migration:
- Before migration, enumerate all SharePoint pages and lists
- Create the corresponding Coda docs, pages, and tables
- Build a lookup: `{sharepoint_url → coda_url}`
- After content migration, run a find-and-replace pass across all Coda content to swap old URLs for new ones
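The find-and-replace pass is safest as a single regex compiled from the mapping table, matching the longest URL first so a short URL (a site root) can't clobber a longer one (a page) that shares its prefix. A sketch:

```python
import re

def rewrite_links(text, url_map):
    """Swap every mapped SharePoint URL for its Coda URL in one pass.

    Sorting keys longest-first before building the alternation prevents
    prefix collisions between site-level and page-level URLs.
    """
    if not url_map:
        return text
    pattern = re.compile("|".join(
        re.escape(u) for u in sorted(url_map, key=len, reverse=True)))
    return pattern.sub(lambda m: url_map[m.group(0)], text)
```

Run it over every migrated page body and every URL-bearing table cell, then log any SharePoint-looking URLs that survive as candidates for manual fixes.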
Use IDs as the stable key in your mapping table — not names. Your mapping table should store at minimum:
| Column | Purpose |
|---|---|
| `source_page_id` | SharePoint page/item ID |
| `source_url` | SharePoint `webUrl` |
| `target_page_id` | Coda page ID |
| `target_browser_link` | Coda `browserLink` |
| `status` | Migration/validation status |
If the SharePoint content was publicly facing and search ranking matters, you'll also need 301 redirects at the web or reverse-proxy layer. Coda itself cannot serve legacy *.aspx paths.
Lost author and timestamp metadata
When you create rows or pages via the Coda API, the Created By field is set to the API token owner — not the original SharePoint author. There is no way to override this in Coda's native fields.
Workaround: Create custom columns in your Coda tables:
- `Original Author` (Text or People type)
- `Original Created Date` (DateTime type)
- `Original Modified Date` (DateTime type)
Populate these from the SharePoint createdBy.user.email and createdDateTime fields during import. Your team loses the native "Created By" attribution but retains the historical data.
Orphaned attachments and document libraries
SharePoint Document Libraries are file-centric — each file is a first-class object with metadata, versioning, and permissions. Coda treats files as attachments on table rows or embedded objects in pages. The mapping isn't 1:1.
For Document Library migration:
- Extract files via Graph API: `GET /sites/{site-id}/drives/{drive-id}/items/{item-id}/content`
- Upload to Coda via the attachment column type (or embed publicly hosted URLs)
- Map library metadata columns to Coda table columns
On Coda's Free plan, you can store 1GB of media including JPEG, PNG, PDF, and MP4, with files having a maximum size of 10MB. Paid plans have higher limits, but verify your total library size before starting.
For inline page images, do not trust the raw HTML until you inspect it. A SharePoint text web part may include innerHtml, but image references often need rewriting after the actual files are extracted and uploaded. Treat image migration as content transformation, not text copy.
Preventing duplicate content
Use a stable uniqueness key from day one. Good candidates are SharePoint item ID, page ID, source webUrl, or a compound key like site_id + page_id. Coda's API upsert with keyColumns depends on choosing a durable match field. Without that, every test load creates duplicate rows.
Post-Migration QA and Validation Checklist
Do not declare the migration complete until you've verified every item on this list. A successful 200 OK API response does not mean the data is correct.
Row counts and data integrity
- Row count match: Compare total row count in each SharePoint list against the corresponding Coda table. Account for filtered views — SharePoint may show fewer rows than the list actually contains.
- Spot-check 5% of records: Pick random rows and verify all field values match the source. Pay special attention to multi-value fields (Choice, Lookup), date/time values (timezone shifts), currency/number formatting, and rich text fields (HTML stripping artifacts).
- Long-body check: Sample the longest article bodies first, especially anything that would have exceeded Excel's 32,767-character limit.
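The row-count comparison reduces to a dict diff once counts are pulled from both sides (SharePoint via paginated Graph list-item requests; Coda via its table metadata, which reports a row count). A sketch with illustrative names:

```python
def compare_counts(source_counts, target_counts):
    """Diff per-list row counts (SharePoint side) against per-table row
    counts (Coda side). Returns {name: (expected, actual)} for mismatches;
    a table missing on the Coda side is treated as zero rows."""
    mismatches = {}
    for name, expected in source_counts.items():
        actual = target_counts.get(name, 0)
        if actual != expected:
            mismatches[name] = (expected, actual)
    return mismatches
```

An empty result clears the row-count check; anything else names exactly which lists to re-migrate or investigate.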
Page content verification
- Heading structure: Confirm H1/H2/H3 hierarchy survived the HTML → Markdown conversion.
- Images render: Check that embedded images load — not broken links to SharePoint-hosted URLs.
- Tables within pages: SharePoint pages with embedded list views or data tables need separate validation.
Link integrity
- Internal links: Click through 10–20 internal links across migrated pages. Every one should resolve to the correct Coda page, not a dead SharePoint URL.
- External links: Verify that outbound URLs (to external sites) weren't modified during transformation.
Metadata and permissions
- Author attribution: Confirm that custom `Original Author` and `Original Date` columns are populated correctly.
- Sharing settings: Coda's permission model (doc-level, page-level on paid plans) is different from SharePoint's granular item-level permissions. Re-apply access controls manually.
Performance
- Doc load time: Table size in Coda is calculated as row count × column count × column content. Coda has seen docs that run fine with 100,000 rows and others that have trouble at 10,000 rows. Test your largest migrated docs on both desktop and mobile.
- Formula performance: If you recreated SharePoint calculated columns as Coda formulas, verify they compute correctly and don't cause doc slowdowns.
- Async-write verification: If you used the Coda API, wait for queued mutations to settle before signing off. Validate through the mutation-status endpoint.
- Spot diff: Export a small sample of Coda pages and compare them against your normalized source pages.
Split large SharePoint sites into multiple Coda docs. A single SharePoint site with 15 lists and 200 pages will create a doc that's slow and hard to navigate. Map each major list to its own Coda doc, and group related pages into focused docs by topic or team.
Platform Limitations to Plan Around
SharePoint-side constraints
- CSV export capped at 30,000 rows per list
- Graph API for page content (`$expand=canvasLayout`) is unreliable on some page types
- Classic pages (non-modern `.aspx`) are not supported by the Graph Pages API — only modern pages
- Custom SPFx Web Parts have proprietary data structures that no migration tool can automatically translate
- Microsoft's `columnDefinition` API surface is still missing some SharePoint field types
Coda-side constraints
- CSV import limited to 10,000 rows per upload
- Shared docs on the Free plan can have up to 50 objects and up to 1,000 table rows per doc.
- API cannot reliably create rich page body content — only table rows and page metadata
- Docs exceeding 125 MB become inaccessible via the API.
- No native SharePoint importer
- No equivalent to SharePoint's granular item-level permissions (Coda permissions are doc-level, or page-level on Team/Enterprise)
- Formula language is completely different from SharePoint calculated columns — no automated translation
- People column only resolves users who already have access to the doc or folder
When to DIY vs. When to Get Help
DIY is viable when:
- You're migrating fewer than 5 SharePoint lists with under 10,000 rows each
- You have no Site Pages to migrate (or are willing to manually recreate them)
- Your lists use simple column types (Text, Number, Date, Choice)
- You have engineering capacity to write and maintain extraction scripts
- Internal links between pages aren't critical
Get help when:
- You have 100+ Site Pages with rich content and cross-references
- Document Libraries contain thousands of files with metadata
- You need to preserve author attribution and timestamps
- Your lists exceed 30,000 rows or use complex column types (Managed Metadata, multi-value Lookups)
- Zero downtime is a requirement — your team can't tolerate broken links during the transition
- The Graph API's `canvasLayout` endpoint fails on your pages (a known issue with no fix timeline)
The clean version of this migration is simple: CSV for flat tables, Graph for modern pages, a second pass for links and relations, and explicit preservation of historical metadata. The messy version is what most teams actually have: classic blog tables, weird column types, library files, orphaned attachments, and years of links that assume SharePoint URLs will live forever. The right plan acknowledges that mess early.
For teams evaluating the reverse direction or other Coda migration paths, see our Coda to Confluence guide.
Frequently Asked Questions
- Can I import SharePoint data directly into Coda?
- No. Coda has native importers for Notion, Confluence, Airtable, Trello, CSV, and Google Docs — but not SharePoint. You must export SharePoint data to CSV or use the Microsoft Graph API, then import into Coda via CSV upload or the Coda API.
- What is the row limit for CSV import into Coda?
- Coda supports importing CSVs up to 10,000 rows per import. If your SharePoint list exceeds this, you need to split the CSV into batches or use the Coda API for programmatic row insertion.
- How do I migrate SharePoint Site Pages to Coda?
- SharePoint Site Pages are stored as Web Part structures, not HTML. Use the Microsoft Graph API to extract the canvasLayout, parse textWebPart innerHtml content, convert it to Markdown, and then manually paste or import it into Coda pages. The Coda API cannot create rich page body content programmatically.
- Will internal links survive a SharePoint to Coda migration?
- No — all internal links will break unless you build a URL mapping table that translates SharePoint .aspx URLs to Coda coda.io/d/ URLs, then run a find-and-replace pass across all migrated content.
- What are Coda's API rate limits for migration?
- Coda's API allows 100 read requests per 6 seconds, 10 write requests per 6 seconds, and 5 doc content writes per 10 seconds. The maximum request size is 2 MB, with a per-row limit of 85 KB. Docs over 125 MB become inaccessible via the API.