How to Import Data into Coda: Methods, Limits & Data Mapping

A complete guide to importing data into Coda: CSV limits (10k rows), native importers, API rate limits, data mapping for relations, and when each method works or breaks.

Raaj Raaj · 23 min read

Coda supports CSV imports (capped at 10,000 rows per file), native importers for Notion, Confluence, Airtable, Trello, Quip, Google Docs, Asana, and Markdown, a full REST API, and iPaaS connectors through Zapier, Make, and n8n. For flat spreadsheet data under 10k rows, the CSV importer works in under a minute. For relational databases, nested page hierarchies, or anything at scale, you need the API — and you need to understand the hard constraints before you start.

This guide covers every realistic import method, the specific limits that will stall your migration, and the data-mapping decisions that determine whether your import preserves structure or silently drops it.

For teams moving data out of Coda, see our Coda export guide. If you're coming from a specific platform, we have dedicated guides for Notion to Coda, Confluence to Coda, and Quip to Coda.

How Coda Structures Data (And Why It Matters for Imports)

Before importing anything, understand Coda's architecture. Coda is a document-database hybrid. The hierarchy is: Workspace → Folder → Doc → Page → Table/View.

  • Docs are the atomic unit. A single Coda doc can contain rich-text pages, relational tables, formulas, buttons, automations, and views — all woven together.
  • Tables in Coda are fully typed. Columns have explicit types: text, number, date, person, relation (lookup), select, URL, image, and more. This is not a spreadsheet — it's closer to a database.
  • Grids look like tables but are not. Grids are lighter layout blocks that do not support table views, formulas, or advanced database behavior. If your imported content lands as a grid instead of a table, convert it before you try to build views, lookups, or relation columns.
  • Pages provide the document layer. They can contain both free-form rich text and embedded tables or views.
  • Relations are first-class citizens. Tables can reference rows in other tables via lookup columns, enabling relational data models inside a single doc.

This hybrid model is what makes Coda powerful — and what makes importing into it tricky. Most source tools (spreadsheets, wikis, project boards) don't have an equivalent structure, so your import is always a translation, not a copy. If you treat every source export like a flat spreadsheet, you'll land data in Coda, but not a usable system.

All Supported Import Methods

Method Data Types Row Limit Attachments Relations Best For
CSV/TSV Flat tabular data 10,000 per file ❌ (text only) Spreadsheet data, CRM exports
Notion importer Pages, databases ~2 GB export ✅ (via HTML export) ❌ (values only) Notion workspace migrations
Confluence importer Pages, page trees Multiple spaces N/A Confluence Cloud workspaces
Airtable importer Bases, tables Account-connected Partial Partial Airtable base migrations
Trello importer Boards, cards Board-level N/A Project board migrations
Quip importer Docs, folders Folder-level ✅ (most) N/A Quip workspace migrations
Google Docs Document content Single doc N/A Individual doc imports
Asana importer Projects, tasks Project-level Partial N/A Task/project migrations
Markdown .md files Multiple files N/A Dev docs, static content
Copy-paste Spreadsheet data Browser-limited Quick table seeding
Coda API Tables, rows, pages Rate-limited ✅ (separate upload) ✅ (programmatic) Large-scale, relational
Packs (sync tables) Row-by-row sync 10,000 per sync table Ongoing sync, small datasets
Zapier/Make/n8n Row-by-row Rate-limited Event-driven imports

Coda's help docs list native importers for Notion, Airtable, Google Docs, Trello, Confluence, Quip, and Asana, plus CSV, Markdown, manual table paste, Pack sync tables, and Zapier. There is no current official HTML-file or Evernote/ENEX importer — those sources require conversion to Markdown or CSV before import.

Method 1: CSV Import and the 10,000-Row Cap

CSV import is the most common starting point. Type /import in any Coda doc, select CSV, upload your file, and Coda creates a table with columns mapped from your headers.

Step-by-step workflow

  1. Type /import on any blank line in your doc
  2. Select CSV from the import menu
  3. Upload your .csv or .tsv file (multiple files create separate tables)
  4. Toggle Use first row as headers (on by default)
  5. Choose to create a new table or add to an existing table
  6. If adding to an existing table, map columns and optionally select a key column for merge
  7. Click Next and wait for the import to complete

Hard limits

Coda supports importing CSVs of up to 10,000 rows per file; for anything larger, Coda recommends uploading the data in batches. This is a hard cap, not a performance suggestion: files with 10,001+ rows are rejected outright.

The importer only reads the first worksheet (tab) of a spreadsheet file. If you're importing an Excel file with multiple sheets, export each sheet as a separate CSV and import them one at a time.

Duplicate handling

When importing into an existing table, Coda offers a merge option. You can choose to merge rows by selecting a single key column mapping. Coda will use this key column to look for matching data, so that existing data will be updated rather than replaced or duplicated. Just be sure to choose a column that has unique values for each unique data point.

This requires a unique key column to function correctly. If your data doesn't have a natural unique identifier, or if you're creating a new table, "import CSV" in Coda does not detect duplicates and is not capable of updating existing pages automatically. You'll end up with duplicate rows that require manual cleanup or API-based deduplication.

Coda's merge only supports a single key column — no composite-key matching. If deduplication depends on multiple fields, create a composite key column before upload:

migration_key,customer_id,invoice_id,amount
cust_42__inv_1001,42,1001,49.00
cust_42__inv_1002,42,1002,75.00

A precomputed key like this lets the CSV importer do reliable single-column merge behavior, which is much safer than hoping names or labels stay unique.
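One way to precompute that key is a small pre-processing script. The sketch below (stdlib only) prepends a `migration_key` column built from existing fields; the field names and the `__` separator are illustrative assumptions, not Coda requirements.

```python
import csv
import io

def add_composite_key(csv_text, key_fields, key_name="migration_key", sep="__"):
    """Prepend a composite key column built from existing fields.

    Hypothetical helper: key_fields and the separator are assumptions;
    pick whatever combination is unique in your dataset.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=[key_name] + reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # Join the chosen fields into one stable, unique key per row
        row[key_name] = sep.join(row[f] for f in key_fields)
        writer.writerow(row)
    return out.getvalue()
```

Run this over the export before upload, then select `migration_key` as the key column in Coda's merge dialog.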

What CSV imports preserve vs. drop

Preserved Dropped
Column headers → column names Column types (everything imports as text)
Row data → row values Formulas (only values, not expressions)
Basic text formatting Relations/links between tables
Images/attachments
Comments, history, permissions

After import, you'll need to set column types manually (date, number, email, person, etc.). A column of email addresses, for example, may arrive formatted as plain text. Coda may attempt some type inference, but it's unreliable; plan for a column-typing pass after every CSV import.

Warning

Watch your test environment. CSV may work perfectly in testing and still fail your pilot if you are importing into a shared Free-plan doc. Coda's Free-plan limits for shared docs are 1,000 table rows and 50 objects. That's enough for a sample, not enough for a serious migration rehearsal.

For a broader look at when CSVs work for migrations and when they don't, see our guide on using CSVs for SaaS data migrations.

Method 2: Native Platform Importers

Native importers are the right first move when your source app is officially supported, because they understand more structure than CSV. They are still not full-fidelity migration engines. The right expectation is fast first pass, then guided cleanup.

Notion importer

The Notion importer is HTML-based. You export your Notion workspace as HTML (with subpages included), then upload the .zip file into Coda.

HTML exports do not carry formulas, filters, or alternative database views, so those won't come through "as is" in your imports to Coda. However, Coda retains the values and data, allowing you to apply new formulas or filters after importing.

What this means in practice:

  • Notion databases import as Coda tables with values intact, but no formulas
  • Relations between Notion databases become plain text — you must manually rebuild them as Coda lookup columns
  • Rollups are lost entirely (they depend on relations that don't survive)
  • Views (board, calendar, gallery) are not preserved; only the default table view data comes through
  • Cell background colors are lost — the Notion export does not provide information about cell background colors in databases, and these cannot be imported.

If your zipped Notion export file is over 2 GB, unzip it before uploading to Coda. Large exports (2 GB+) may take a few hours to import.

Tip

People references: To preserve @mentions, start with a blank Coda doc and share it with the referenced users before importing. Usernames must match across Notion and Coda.

Other common issues:

  • You may sometimes encounter a "Something's not right" error if your workspace is particularly large. You can try to work around it by importing smaller portions (such as single pages) at a time, rather than the entire workspace at once.
  • Toggle blocks from Notion sometimes flatten into plain text
  • Synced blocks import as static copies, not synced
  • Inline databases may not import if subpages aren't included in the export

For a full deep-dive into Notion-to-Coda structural mapping, see our Notion to Coda migration guide.

Confluence importer

Coda has built a Confluence importer to bring information from Confluence spaces into Coda. It works through a /Confluence slash command and supports both Cloud and On-premises (Enterprise plan only for on-prem).

Key behaviors:

  • A Confluence space maps roughly to a single Coda doc. Coda supports importing multiple spaces at once into a single doc.
  • Page hierarchy (parent/child) is preserved as Coda page nesting
  • Confluence macros are not imported — dynamic content (Jira macros, roadmap macros, status macros) is dropped
  • Large files may take up to a few hours to fully import into Coda.
  • The Confluence On-premises importer is only available to customers on the Enterprise tier.
  • Closing the browser window during import cancels it — plan for long imports accordingly

For people references to carry over, usernames must match across Confluence and Coda, and the destination doc should already be shared with those users.

For the complete approach to Confluence-to-Coda migration, including macro handling, see our Confluence to Coda guide.

Markdown importer

Coda's Markdown importer supports the basic-syntax Markdown flavor, so text colors, highlights, and other extended features are not supported. You can select single or multiple .md files to import.

What the Markdown importer does not handle:

  • Embedded tables (Markdown tables may not render as Coda tables)
  • Local image paths (images must be hosted URLs or re-uploaded)
  • Inline HTML blocks
  • Extended Markdown features (footnotes, definition lists, abbreviations)
  • YAML frontmatter

For dev teams migrating documentation from Git repos, this means a pre-processing step: strip frontmatter, convert local image paths to hosted URLs, and simplify any extended Markdown to basic syntax before import. Markdown is a good staging format for articles and wiki pages, but a poor staging format for anything that needs typed tables or relational links.
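That pre-processing step can be scripted. The sketch below strips a leading YAML frontmatter block and rewrites relative image references to a hosted base URL; `image_base_url` is a hypothetical CDN prefix where you have already uploaded the images, and the patterns assume a typical Git-docs layout.

```python
import re

def preprocess_markdown(text, image_base_url):
    """Strip YAML frontmatter and point local image paths at hosted URLs.

    Sketch under assumptions: frontmatter is a leading `---` block, and
    local images use relative paths like ./img/x.png.
    """
    # Remove a leading --- ... --- frontmatter block, if present
    text = re.sub(r"\A---\n.*?\n---\n", "", text, flags=re.DOTALL)

    # Rewrite relative image references like ![alt](./img/x.png)
    def rewrite(match):
        alt, path = match.group(1), match.group(2)
        filename = path.rsplit("/", 1)[-1]
        return f"![{alt}]({image_base_url}/{filename})"

    return re.sub(r"!\[([^\]]*)\]\((?!https?://)([^)]+)\)", rewrite, text)
```

Already-hosted (`https://…`) images are left untouched; only relative paths are rewritten.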

Other native importers

  • Airtable: Connects via account authentication and pulls bases directly. Preserves table structures reasonably well, but Airtable-specific features (interfaces, extensions, synced views, formulas, rollups, and automations) don't transfer. Large bases may import slowly because Airtable's API returns records 100 at a time.
  • Trello: Imports boards as Coda tables with cards as rows. Labels, descriptions, and comments come through. Checklist items can optionally become a second table linked by a relation column. Power-Ups and automations don't transfer.
  • Quip: Connects with a personal access token and preserves folder structure, comments, and @mentions. User mapping is based on email address. Quip spreadsheets arrive as grids, not fully relational tables — convert to tables if you need views or relation columns. FILE columns do not bring file data one-to-one, and most live apps do not recreate in Coda. Coda recommends splitting very large folders rather than importing more than about 500 files into one doc at once. See our Quip to Coda guide for details.
  • Google Docs: Pulls Google Docs content into Coda pages. Useful for content pages but not for structured data. Does not support Google Sheets — use CSV or a sync method for spreadsheet data.
  • Asana: Not a true workspace importer. Coda's Asana flow starts by exporting one project at a time to CSV, then importing those CSVs into a table.

Method 3: API-Driven Imports

When native importers can't handle your data's complexity — relational structures, large volumes, custom metadata, or specific column types — the Coda API is the right tool.

API rate limits

  • Reading data: 100 requests per 6 seconds
  • Writing data (POST/PUT/PATCH): 10 requests per 6 seconds
  • Writing doc content (POST/PUT/PATCH): 5 requests per 10 seconds
  • Listing docs: 4 requests per 6 seconds

Note: Coda's developer reference lists the doc-content write limit at 5 requests per 10 seconds, while their doc-limits help article lists it at 3 requests per 10 seconds. Engineer for the stricter ceiling.

For robustness, all API scripts should check for HTTP 429 Too Many Requests errors and back off and retry the request. Limits apply per-user across all endpoints that share the same limit and across all docs.

In practice, the 10 writes per 6 seconds limit means you can insert roughly 100 rows per minute if each request inserts a single row. To go faster, use the batch upsert endpoint, which can insert multiple rows per request — but each batch still counts as one write request.
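The batching itself is trivial to script. The sketch below splits a row list into fixed-size batches; the `batch_size` of 100 is a tuning assumption (keep each request under Coda's 2 MB request-size limit), not a Coda constant.

```python
def chunk_rows(rows, batch_size=100):
    """Split rows into batches so each API request carries many rows.

    batch_size is an assumption: size batches to stay under the 2 MB
    request limit and the 85 KB per-row limit.
    """
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

# At 10 write requests per 6 seconds, batches of 100 rows yield roughly
# 100 requests/min * 100 rows = 10,000 rows per minute, versus ~100
# rows per minute with single-row inserts.
```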

Additional API constraints

  • Request size: API requests are limited to 2 MB
  • Row size: Individual rows are limited to 85 KB
  • Date handling: The API returns dates in Pacific time
  • Image uploads: The API does not accept uploaded image files directly. Pass an Image URL into an Image URL column, or handle file uploads separately.
  • URL parameter length: 2,048 characters max
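For the image constraint, the workaround looks like the payload sketch below: pass a hosted URL as the cell value of an Image URL column. The column IDs ("c-name", "c-image") and the CDN URL are placeholders, not real identifiers.

```python
# Hypothetical upsert payload: column IDs and the hosted URL are
# placeholders. An Image URL column accepts a plain URL string as its
# cell value; the file itself must already be hosted somewhere.
payload = {
    "rows": [
        {
            "cells": [
                {"column": "c-name", "value": "Logo asset"},
                {"column": "c-image", "value": "https://cdn.example.com/logo.png"},
            ]
        }
    ]
}
```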
Warning

The 125 MB wall: On all plan types, docs of 125 MB or more are no longer accessible via the Coda API. Attachment size does not count toward this limit. If your doc grows past the threshold mid-import, your entire API pipeline breaks, so plan your doc architecture to stay under it.

Async write behavior

Changes made via the API, such as updating a row, are not immediate. These endpoints all return an HTTP 202 status code, instead of a standard 200, indicating that the edit has been accepted and queued for processing.

This matters for migration scripts that need to read back data they just wrote (e.g., to get the Coda-generated row ID for building relations). Build in a delay between writes and reads, or use the X-Coda-Doc-Version: latest header — but know that if the API's view of the doc is not up to date, the API will return an HTTP 400 response.
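One way to handle this is to poll until the queued write lands. Coda's 202 responses include a request identifier, and the API exposes a mutation-status lookup for it; the sketch below injects the `fetch` callable (e.g. a wrapper around `GET /mutationStatus/{requestId}`) so the loop itself is testable, and the timeout/interval values are assumptions.

```python
import time

def wait_for_mutation(request_id, fetch, timeout=60, interval=2):
    """Poll until a queued Coda write completes, or give up at timeout.

    Sketch: `fetch(request_id)` is assumed to return the parsed JSON of
    Coda's mutation-status endpoint, e.g. {"completed": true}.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if fetch(request_id).get("completed"):
            return True
        time.sleep(interval)  # pace polls to stay under the read limit
    return False
```

Use this before reading back rows you just wrote, instead of a fixed sleep.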

Building relations via API

This is where API imports become essential. To programmatically recreate relations:

  1. Create the parent table first (e.g., Organizations) and insert all rows
  2. Capture the Coda row IDs returned for each parent row
  3. Create the child table (e.g., Contacts) with a relation column pointing to the parent table
  4. Insert child rows with the parent Coda row ID in the relation field

This requires maintaining a mapping between source IDs and Coda row IDs throughout the migration. The concept is straightforward, but with rate limits and async writes, it demands careful orchestration.

import requests
import time
 
API_TOKEN = "your-coda-api-token"
BASE_URL = "https://coda.io/apis/v1"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}", "Content-Type": "application/json"}
 
def upsert_rows(doc_id, table_id, rows, key_columns=None):
    """Insert or update rows, backing off and retrying on HTTP 429."""
    payload = {"rows": rows}
    if key_columns:
        payload["keyColumns"] = key_columns  # enables idempotent upserts
    resp = requests.post(
        f"{BASE_URL}/docs/{doc_id}/tables/{table_id}/rows",
        headers=HEADERS,
        json=payload
    )
    if resp.status_code == 429:
        # Honor the server-suggested backoff interval, then retry
        retry_after = int(resp.headers.get("Retry-After", 6))
        time.sleep(retry_after)
        return upsert_rows(doc_id, table_id, rows, key_columns)
    resp.raise_for_status()  # surface non-429 errors instead of parsing them
    return resp.json()
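The ID mapping in step 2 can be built by reading the parent table back after its rows land. The sketch below assumes you store the source system's key in a plain "Source ID" column (an assumption, not a Coda convention) and injects `list_rows` (e.g. a paginated wrapper around `GET .../rows`) so it can be tested offline.

```python
def build_id_map(list_rows, source_id_column="Source ID"):
    """Map source-system IDs to Coda row IDs by reading back the table.

    Sketch: `list_rows()` yields row dicts shaped like Coda's rows
    response ({"id": ..., "values": {...}}); the "Source ID" column
    name is a hypothetical choice.
    """
    id_map = {}
    for row in list_rows():
        id_map[row["values"].get(source_id_column)] = row["id"]
    return id_map

def relation_cell(column_id, parent_row_id):
    # Cell payload for a child row's relation column, per step 4 above
    return {"column": column_id, "value": parent_row_id}
```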

Method 4: iPaaS Tools and Pack Sync Tables

Pack sync tables

Pack sync tables are the native ongoing-sync option inside Coda. They pull external data into Coda tables, support scheduled refresh, and many Packs support two-way sync so edits in Coda can be written back to the source system. This is ideal when Coda is an operating layer over Jira, HubSpot, Salesforce, or similar tools. It is not the same as a one-time migration.

Pack sync table limits by plan:

  • Free: 100 rows, manual refresh only
  • Pro: 10,000 rows, daily refresh
  • Team/Enterprise: 10,000 rows, up to hourly refresh

Zapier, Make, and n8n

Automation platforms can push data into Coda row-by-row via the API. This works for:

  • Event-driven imports: A new row in Google Sheets triggers a new row in Coda
  • Ongoing sync: Keep a CRM or helpdesk table current in Coda
  • Small-batch imports: Under a few thousand rows where real-time matters more than speed

Tools such as Make, Zapier, n8n, or Pipedream can facilitate automated import processes. A typical workflow: stage data in Google Sheets, connect via the iPaaS platform, and push rows to a Coda table.

Limitations:

  • Row-by-row processing means even 5,000 rows can take hours through Zapier
  • No relation handling — you can't create lookup columns or relational links through standard iPaaS actions
  • Rate limits still apply — the Coda API limits are per-user, so an iPaaS tool using your token shares your quota
  • Zapier upsert limitation: Zapier's upsert flow cannot target a formula-based row ID column
  • Automation quotas on Coda's side: Free plan: 35 time-based and 100 event-based automations per month, per doc. Pro plan: 100 time-based and 500 event-based per month. Team & Enterprise: Unlimited.

Data Mapping: How Coda Interprets Imported Data

This is where most migrations silently break. Understanding how Coda translates incoming data is the difference between a clean import and weeks of manual cleanup.

CSV columns → Coda column types

Every CSV column imports as text by default. Coda may attempt some type inference, but don't rely on it. After import:

Source Data Coda Default What You Need to Do
Dates (2024-01-15) Text Change column type to Date
Numbers (1,234.56) Text Change to Number (watch locale formatting)
Emails Text Change to Email (enables mailto: links)
URLs Text Change to Link
Currency values Text Strip symbols before import, change to Number
Multi-select values Text (comma-separated) Change to Select List, remap values
Relations (foreign keys) Text Create a Relation column, match on display column

In the "Organizations" Coda table, set the organization name as the display column. In the "Contacts" Coda table, change the "Organization" column to a relation column type. Coda automatically turns the values into relational chips wherever the text matches an organization name in the Organizations table.

This is the most practical way to rebuild relations from CSV imports: import both tables, then convert the foreign-key text column into a relation column. Coda will automatically match values if the display column of the target table contains the exact same text.

Notion databases → Coda tables

Coda will import the values from your Notion formulas, but it will not import the equations. This means:

  • Formula columns become static values (numbers, text)
  • Relation columns become plain text (the display name of the related record)
  • Rollup columns are empty or show the last computed value
  • Select/multi-select values come through, but color coding is lost

Source-to-Coda mapping reference

Source Pattern Best Coda Target What to Watch
Folder, space, or wiki tree Doc + pages/subpages Best for navigation and readable content
Flat spreadsheet tab Table Good for structured rows and formulas
Visual matrix or note layout Grid (convert to table later) Grids don't support views or relation columns
Foreign key or linked record Relation column Usually rebuilt after import, not auto-created
Rich article body Page or canvas column Best for text-heavy content
Attachments and images File/Image columns or separate upload Plan separately from row import
Source formulas, rollups, live apps Rebuild in Coda Values may transfer; behavior does not

Nested content and hierarchy

Coda handles hierarchy through pages and subpages within a doc. When importing from hierarchical sources:

  • Confluence page trees → Coda page nesting (generally preserved)
  • Notion page nesting → Coda subpages (preserved for content pages, but database-within-a-page structures can flatten)
  • Quip folders → Coda page organization

What doesn't translate well: deeply nested wiki structures where the hierarchy itself carries semantic meaning (e.g., a 5-level-deep Confluence space tree). Coda docs start to become unwieldy past ~100 pages, and performance degrades with depth.

Limits & Constraints Reference

Plan-based limits

Limit Free (Shared) Pro Team / Enterprise
Rows per doc 1,000 Unlimited Unlimited
Objects per doc 50 Unlimited Unlimited
Attachments per doc 1 GB (10 MB/file) 5 GB (250 MB/file) Unlimited (250 MB/file)
Pack sync table rows 100 10,000 10,000
Pack sync refresh Manual Daily Up to hourly
Automation runs/month 35 time + 100 event 100 time + 500 event Unlimited

On the Free plan, doc size limits depend on sharing. Personal docs that aren't shared with anyone have no size limits: no caps on rows or objects per doc. The moment you share a doc, the caps apply.

Universal limits (all plans)

Limit Value
CSV import rows 10,000 per file
API doc size for access 125 MB (excl. attachments)
API write rate 10 requests / 6 seconds
API doc content write rate 3–5 requests / 10 seconds
API read rate 100 requests / 6 seconds
API request size 2 MB
API row size 85 KB
Cross-doc source doc size 125 MB max
URL parameter length 2,048 characters
Danger

The 125 MB API cutoff is the most dangerous limit for migrations. If your target doc grows past 125 MB during an API-driven import, the API stops responding. Your script fails mid-migration with no automatic recovery. The fix is to architect your Coda workspace to split large datasets across multiple docs before you start importing.

Edge Cases and Real Migration Problems

Encoding issues

CSV imports assume UTF-8 encoding. If your source file is in Latin-1, Windows-1252, or Shift-JIS, special characters will corrupt on import. Symptoms: accented characters like é become Ã©, or Asian characters render as ???.

Fix: Convert your CSV to UTF-8 BOM before uploading. In Python: df.to_csv('output.csv', encoding='utf-8-sig'). Most spreadsheet apps default to locale-specific encoding, not UTF-8.
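If you're not working in pandas, the same conversion is two lines of stdlib Python. The `cp1252` source encoding below is an assumption; check what your source tool actually emits.

```python
def to_utf8_sig(raw_bytes, source_encoding="cp1252"):
    """Re-encode a CSV export as UTF-8 with BOM before uploading.

    source_encoding is an assumption: cp1252 is typical of Windows
    exports; use latin-1 or shift_jis as appropriate.
    """
    text = raw_bytes.decode(source_encoding)
    return text.encode("utf-8-sig")  # BOM helps tools detect UTF-8
```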

Circular references

If Table A links to Table B, and Table B links back to Table A, native importers often can't resolve the dependency. The result is blank relation fields. This requires a two-pass approach: import both tables with plain text references first, then convert to relation columns in the correct order.
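The two-pass ordering can be sketched as a tiny orchestrator. `import_table` and `convert_to_relation` here are hypothetical wrappers around your API client or manual UI steps; the point is that no relation conversion happens until every table exists.

```python
def two_pass_import(import_table, convert_to_relation, tables):
    """Two-pass import for mutually linked tables.

    Pass 1 lands every table with foreign keys as plain text; pass 2
    converts the text columns to relation columns once both sides exist.
    """
    for name, rows in tables.items():   # pass 1: text-only import
        import_table(name, rows)
    for name in tables:                 # pass 2: rebuild the links
        convert_to_relation(name)
```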

Large dataset performance

Documents get slow with lots of data. One user reported a doc with about 5,000 rows distributed across multiple tables where loading sometimes took 10 seconds, scrolling stuttered, and formulas calculated with delay.

Coda is not designed to be a high-volume data warehouse. If you're importing 50,000+ rows, plan to split across multiple docs or use Cross-doc to query between them — keeping in mind the 125 MB source doc limit for Cross-doc.

Duplicate rows on re-import

If you run the same CSV import twice into an existing table without setting a key column for merge, you'll get duplicate rows. Coda doesn't deduplicate by default. For API imports, use the upsert endpoint with keyColumns specified to handle idempotent writes.

Attachment URL expiry

When migrating from platforms like Quip or Notion, exported attachment URLs are often temporary. If your migration script doesn't download and re-upload attachments promptly, the URLs expire and your attachments silently disappear. Always download attachments to local or S3 storage as a first step, then re-upload to Coda separately.
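A minimal staging helper, assuming signed export URLs: derive a stable local path per attachment, download to it immediately after export, and re-upload to Coda in a later pass. The `attachments` directory name is an arbitrary choice.

```python
import os
from urllib.parse import urlparse

def staging_path(url, staging_dir="attachments"):
    """Derive a local path for an exported attachment URL (sketch).

    Download to this path right after export, while the signed URL is
    still valid; re-upload to Coda separately.
    """
    name = os.path.basename(urlparse(url).path) or "unnamed"
    return os.path.join(staging_dir, name)
```

Note that filenames can collide across pages; in a real pipeline, prefix the name with the source row or page ID.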

Broken formatting from Notion imports

  • Toggle blocks sometimes flatten into plain text
  • Synced blocks import as static copies, not synced
  • Inline databases may not import if subpages aren't included in the export

Comparing Import Approaches: Decision Matrix

Factor CSV Native Importer API Script iPaaS (Zapier/Make) Managed Migration
Setup time Minutes Minutes Hours–Days 30–60 min Days (done for you)
Row volume ≤10k per batch Varies Unlimited (rate-limited) Low thousands Unlimited
Relations preserved ❌ (manual rebuild) ❌ (text only)
Formulas preserved ❌ (must rewrite) Translated
Attachments Partial ✅ (separate upload)
Error handling Manual Retry from scratch Custom retry logic Platform retries Handled
Best for Quick flat imports Platform-specific moves Complex/relational data Ongoing sync Enterprise workspaces

When Standard Imports Break

The methods above work for straightforward scenarios. They fail when:

  • Relations must be preserved. If your Notion workspace has 8 interconnected databases with rollups and relations, the native importer drops all of it. You're left rebuilding the relational schema from scratch.
  • Scale exceeds limits. A 50,000-row dataset requires 5+ CSV batches with manual column-type correction for each, or an API script that takes hours under rate limits.
  • Structure is deeply nested. A 200-page Confluence space with macros, nested tables, and embedded Jira issues won't cleanly translate to Coda's page model.
  • Data types are mixed. A workspace with tables, rich-text documents, embedded images, and attached PDFs all referencing each other requires a migration pipeline, not a single import.
  • Composite-key matching is needed. Coda's CSV merge only supports a single key column. If deduplication depends on multiple fields, you need pre-processing or the API.
  • Zero downtime is required. If your team is actively working in the source tool while you migrate, you need delta syncing — not a one-shot import.

The decision is usually not tool vs. no tool. It's where the tool stops. Native import gets you a fast first pass. Automation tools keep live rows moving. API work handles ordering, batching, retries, and validation. The mistake is expecting one layer to do the whole job.

Best Practices Before You Import

1. Audit and clean your source data first

  • Remove orphaned rows, test records, and duplicate entries before export
  • Standardize date formats (ISO 8601: YYYY-MM-DD)
  • Normalize multi-select values (consistent casing, no trailing spaces)
  • Strip currency symbols from monetary values so Coda can format them natively
  • Ensure every table has a unique identifier column for merge/upsert operations
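Several of these cleanup rules are easy to script per cell. The sketch below is a minimal example; the header names ("amount", "tags") are hypothetical, and you would extend the rules for your own columns.

```python
def clean_value(header, value):
    """Normalize one CSV cell before import (sketch; extend per dataset).

    The header names here are hypothetical examples, not a schema.
    """
    value = value.strip()
    if header == "amount":
        # Strip currency symbols and thousands separators so Coda can
        # type the column as Number after import
        return value.replace("$", "").replace(",", "")
    if header == "tags":
        # Normalize multi-select values: consistent casing, no spaces
        return ",".join(t.strip().lower() for t in value.split(","))
    return value
```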

2. Choose the right format for your use case

  • Flat data (CRM contacts, product inventory): CSV → direct import
  • Notion workspace with pages + databases: HTML export → native importer, then manual relation rebuild
  • Relational data with foreign keys: API-driven import with staged table creation
  • Content pages (blog drafts, docs): Markdown import or Google Docs importer

3. Plan your Coda doc architecture before importing

Don't import everything into one doc. Consider:

  • Keep each major data domain in its own doc (CRM data in one doc, project tracking in another)
  • Use Cross-doc for references between docs (but watch the 125 MB limit)
  • Plan column types in advance — create the target table with typed columns before importing, then use the "add to existing table" option with column mapping

4. Treat attachments as a separate migration stream

Don't assume row import will carry attachments. The API does not accept direct image uploads. Exported attachment URLs from platforms like Notion and Quip are often temporary. Download attachments to local or cloud storage first, then upload to Coda via Image URL columns or the file upload flow.

5. Run a test import first

Import 50–100 rows as a test. Verify:

  • Column types are detected or manually correctable
  • Special characters render correctly (encoding test)
  • Date formats parse as expected
  • The import creates the structure you expect
  • You're testing on a paid plan if you need more than 1,000 rows in a shared doc

6. Document expected rebuild work

Write down which source features are known limitations versus bugs. Notion formulas, Airtable rollups, Quip live apps, and Confluence macros are all expected rebuild tasks, not import failures. If stakeholders aren't aware of this ahead of time, they'll mistake known constraints for data loss.

Real-World Migration Scenarios

Scenario 1: Spreadsheet → Coda relational database

Situation: A team has a 15,000-row product catalog in Google Sheets with separate tabs for Products, Categories, and Suppliers. Products reference Categories and Suppliers by name.

Approach:

  1. Export each Sheet tab as a separate CSV
  2. Import Categories first (under 10k rows, single CSV)
  3. Import Suppliers second
  4. Split Products into two 7,500-row batches and import each
  5. Set the display column on Categories and Suppliers tables
  6. Convert the "Category" and "Supplier" text columns in Products to relation columns
  7. Verify relational chips resolve correctly

Watch out for: Name mismatches between the foreign-key text and the display column will break relation resolution. Clean the data first.

Scenario 2: Notion workspace → Coda

Situation: A 40-person team uses Notion with 12 interconnected databases (Tasks, Projects, Clients, Invoices, etc.) and 200+ wiki pages.

Approach:

  1. Export from Notion as HTML with subpages enabled
  2. Import into a fresh Coda doc using the native importer
  3. Review the auto-generated "Import next steps" page for flagged issues
  4. Manually rebuild all relation columns (12 databases × multiple relations = significant work)
  5. Rewrite all Notion formulas in Coda's formula language
  6. Recreate views (board, calendar, gallery) as Coda views

Reality check: For 12 interconnected databases, the manual rebuild of relations alone can take 2–3 days for someone who knows Coda well. If the team can't afford that downtime or doesn't have Coda expertise in-house, this is where an API-driven or managed approach makes more sense.

Scenario 3: Multi-tool consolidation into Coda

Situation: A company wants to move their CRM data from a legacy tool (exported as CSV), their project wiki from Confluence, and their task boards from Trello — all into Coda.

Approach:

  1. Migrate Trello boards first using the native importer (simplest, lowest risk)
  2. Import Confluence spaces using the native importer (review macro loss)
  3. Stage CRM CSV data, clean it, and batch-import with typed columns pre-configured
  4. Build cross-references between the three datasets using Coda's relation and lookup columns
  5. Set up automations to replace workflows from the source tools

Key decision: Do the tools need to reference each other? If your Confluence pages reference Trello cards and CRM contacts, you're building a relational model that no native importer can create. That cross-referencing must be done manually or via API.

When You Need a Migration Partner

The gap between "I imported my CSV" and "my entire workspace is live in Coda with all relations intact" is where most teams get stuck. The native tools handle the first scenario well. The second scenario — relational data, large volumes, mixed content types, zero downtime — requires custom engineering.

At ClonePartner, we've handled migrations into (and out of) Coda for teams dealing with exactly these problems: rebuilding lookup columns programmatically, batching large datasets to stay under the 125 MB API threshold, handling rate limits with exponential backoff, and translating complex formula logic between platforms. Our scripts handle the orchestration — parent tables first, child tables second, relation IDs mapped automatically — so the import preserves your data model instead of flattening it.

If your migration involves relational databases, 10,000+ rows, or multiple source platforms converging into Coda, talk to us before you spend a week discovering the limits the hard way.

Making the Right Import Decision

The right import method depends on three variables: data complexity, volume, and acceptable manual cleanup time.

  • Under 10k rows of flat data? CSV import. Done in minutes.
  • Moving from a supported platform (Notion, Confluence, Trello, Airtable, Quip)? Start with the native importer, budget time for post-import cleanup.
  • Relational data that must stay connected? API import is the only path that preserves structure.
  • Need ongoing sync with an external system? Pack sync tables or iPaaS tools.
  • Large scale, multiple sources, or zero tolerance for data loss? You need a migration pipeline — built in-house or outsourced.

Coda is a powerful destination for your data. But its import tools are designed for getting started, not for enterprise migration. Understanding the difference — and planning accordingly — is what separates a clean migration from a costly rework.

Frequently Asked Questions

What is the row limit for CSV imports in Coda?
Coda's CSV importer has a hard limit of 10,000 rows per file. Files exceeding this are rejected outright. You need to split larger datasets into multiple CSV files and import them in batches, or use the Coda API for higher volumes.
Does importing a CSV into Coda preserve column types like dates and numbers?
No. All CSV columns import as text by default. Coda may attempt some type inference, but it's unreliable. You must manually change each column to the correct type (date, number, email, etc.) after import.
Can I import Notion databases into Coda with relations intact?
No. The Notion importer uses an HTML export that converts relation columns to plain text. Rollups are lost entirely, and formulas import as static values. You need to manually rebuild relations as Coda lookup columns after import, or use an API-driven migration.
What happens when a Coda doc exceeds 125 MB?
Docs over 125 MB (excluding attachments) lose API access completely. This breaks automated imports, Cross-doc syncs, and Pack sync tables. You need to split large datasets across multiple docs to stay under this threshold.
What are Coda's API rate limits for data imports?
Coda limits write requests to 10 per 6 seconds and doc content writes to 3–5 per 10 seconds. Read requests are capped at 100 per 6 seconds. These limits are per-user across all docs and endpoints.
