
How to Import Data into Quip: API Limits & Migration Guide

Quip's native import was retired in 2024. Learn how to import CSVs, spreadsheets, and documents via the Automation API, including rate limits, HTML formatting, and chunking strategies.

Raaj Raaj · 18 min read

The native Upload/Import button in Quip is gone. Salesforce retired both the Compose-menu Upload/Import option and the Import API endpoint on January 19, 2024. If you're searching for how to import a CSV, spreadsheet, or document into Quip, every path now runs through the Quip Automation API — with custom HTML formatting, payload chunking, and careful rate-limit management. (quip.com)

This guide covers the methods that actually work, the hard API constraints that shape every import, and the field-mapping decisions that determine whether your data lands correctly or silently drops.

Danger

Quip End-of-Life (March 2027): Salesforce has announced that all Quip products are being retired. Subscriptions cannot be renewed after March 1, 2027. After your subscription expires, the site enters a 90-day read-only phase, followed by blocked logins, then data deletion. If you're importing data into Quip now, plan your exit strategy at the same time. See our Quip to Coda migration guide or Quip to Notion guide for outbound options.

Every Way to Import Data into Quip (2026)

With the native import gone, here are the methods that still work, ranked by complexity:

| Method | Spreadsheet Data | Rich Text Docs | Attachments | Folder Structure | Automation | Best For |
| --- | --- | --- | --- | --- | --- | --- |
| Quip Automation API | ✅ (HTML tables) | ✅ (HTML/Markdown) | ✅ (blob upload) | ✅ (folder API) | ✅ | Any programmatic import |
| Quip Admin API | ✅ (HTML tables) | ✅ (HTML/Markdown) | — | — | ✅ | Org-wide bulk operations |
| Template copy APIs | ✅ (from template) | ✅ (from template) | — | — | ✅ | Recurring doc structures |
| iPaaS (Zapier / Make) | Row-by-row only | Single docs | — | — | ✅ | Appending rows from triggers |
| Copy & paste | Partial | Partial | — | — | Manual | <10 documents, one-off |
| Managed migration service | ✅ | ✅ | ✅ | ✅ | ✅ | Complex, large-scale imports |

What was retired

Before January 2024, Quip supported direct file upload via the UI — you could drag an Excel, CSV, or DOCX file into the Compose menu and Quip would parse it. That feature and the corresponding Import API endpoint are both permanently gone. Old documentation and blog posts referencing "Import Document" or drag-and-drop import from Word, Dropbox, Google Drive, Evernote, Box, or OpenOffice are now outdated. (quip.com)

Template copy APIs

If you're generating many similar documents — account plans, deal rooms, review packets, project docs with a fixed structure — copying an existing Quip template is often cleaner than rendering every section from scratch. Quip's copy APIs support mail_merge_values for variable substitution, and the legacy copy endpoint supports copy_annotations for copying comments from an existing thread. This gives you a more predictable QA surface because the destination layout starts from a known-good Quip object rather than converter output. (quip.com)
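As a sketch of what that request looks like in code: the field names below follow Quip's documented copy parameters, but the exact endpoint path varies by API version, so verify it against your docs before relying on it.

```python
import json

def build_copy_payload(template_thread_id, merge_values, folder_id):
    """Form fields for a template-copy request. `mail_merge_values` is
    JSON-encoded per Quip's copy API; the endpoint path itself should be
    checked against your API version."""
    return {
        'thread_id': template_thread_id,
        'mail_merge_values': json.dumps(merge_values),
        'folder_ids': folder_id,
    }

payload = build_copy_payload(
    'TEMPLATE_THREAD_ID',
    {'account_name': 'Acme Corp', 'quarter': 'Q2'},
    'TARGET_FOLDER_ID',
)
# POST `payload` to the copy endpoint with your bearer token, then QA the
# resulting thread against the known-good template layout.
```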

iPaaS tools: a deprecation warning

Zapier still lists Quip actions such as Create Document, Add Row to Spreadsheet, Add Item to List, and Send Message. But Quip's July 18, 2025 release notes state that out-of-the-box legacy integrations including Zapier are no longer supported. Treat that as a risk signal before building any migration around it. (zapier.com)

How to Import Spreadsheets and CSVs via the Quip API

This is the most common import scenario: getting tabular data from a CSV or Excel file into a Quip spreadsheet. There is no CSV upload endpoint. You must convert your data to HTML <table> markup and send it to the POST /1/threads/new-document endpoint with type=spreadsheet.

Step 1: Generate an API access token

Visit https://quip.com/dev/token for a personal access token (useful for testing), or create an API key in the Quip Admin Console under Settings > Integrations for production OAuth flows. You need USER_READ and USER_WRITE scopes.

If you're on a VPC Quip instance, swap platform.quip.com for your onquip.com or quip-<customer>.com domain. (quip.com)

Step 2: Choose the right Quip target shape

Use a plain spreadsheet when your data is just rows and columns. Use a project tracker live app when you need typed fields such as TEXT, PERSON, STATUS, DATE, or FILE. Quip's docs show project tracker payloads with those typed columns, while plain spreadsheet operations work on a flat grid.

If you're flattening a relational source into Quip, read our guide on using CSVs for SaaS data migrations first. CSV is a fine interchange format for flat data — it's a bad place to pretend relations still exist.

Step 3: Convert your CSV to an HTML table

Quip's API requires spreadsheet content wrapped in <table> tags. The first <tr> row is automatically interpreted as the column header when more than one row is present. Quip treats <th> tags identically to <td> — don't rely on <thead> semantics.

If you want Quip to assign default headers (A, B, C), provide a first row with empty <td> elements.
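A small helper can enforce that convention (a sketch; the function name and the column-count heuristic are mine):

```python
def with_default_headers(data_rows_html):
    """Prepend a row of empty <td> cells so Quip assigns default A, B, C
    headers instead of consuming the first data row as the header.
    `data_rows_html` is a list of '<tr>...</tr>' strings; the column
    count is inferred from the first row."""
    if not data_rows_html:
        return '<table></table>'
    n_cols = data_rows_html[0].count('<td')
    empty_row = '<tr>' + '<td></td>' * n_cols + '</tr>'
    return '<table>' + empty_row + ''.join(data_rows_html) + '</table>'

# Both rows stay data rows; Quip renders A and B as the column headers.
table = with_default_headers([
    '<tr><td>1</td><td>x</td></tr>',
    '<tr><td>2</td><td>y</td></tr>',
])
```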

Here's a Python example with proper HTML escaping:

import csv
import html
import requests
 
TOKEN = 'YOUR_TOKEN'
TITLE = 'Imported Dataset'
 
def csv_to_html_table(csv_path):
    rows = []
    with open(csv_path, 'r', encoding='utf-8') as f:
        reader = csv.reader(f)
        for row in reader:
            cells = ''.join(f'<td>{html.escape(cell)}</td>' for cell in row)
            rows.append(f'<tr>{cells}</tr>')
    # Join with plain concatenation: a backslash inside an f-string
    # expression is a SyntaxError before Python 3.12.
    return '<table>' + '\n'.join(rows) + '</table>'
 
html_content = csv_to_html_table('data.csv')
 
response = requests.post(
    'https://platform.quip.com/1/threads/new-document',
    headers={'Authorization': f'Bearer {TOKEN}'},
    data={
        'content': html_content,
        'title': TITLE,
        'type': 'spreadsheet',
        'member_ids': 'TARGET_FOLDER_ID'
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()['thread']['id'])

Warning

<th> tags are ignored. The Quip parser treats <th> identically to <td>. The first <tr> becomes the header row by convention, not by tag type. Don't rely on <thead> semantics.

Step 4: Place the spreadsheet in the correct folder

Pass a folder ID in member_ids to file the spreadsheet. Use GET /1/folders/{id} to discover folder IDs, or GET /1/users/current to find the user's desktop, private, and shared folder IDs.

Step 5: Append remaining rows in batches

For large datasets, create the spreadsheet with headers only (or the first batch of rows), then append subsequent batches using POST /1/threads/edit-document with location=2 (AFTER_SECTION) and the section_id of the last row. Stay under both the 1 MB content limit and the 1,000-row-per-call limit.
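The batching logic can be sketched like this (helper names are mine; both ceilings are checked against the serialized UTF-8 payload, since that is what the 1 MB limit applies to):

```python
MAX_ROWS_PER_CALL = 1000
MAX_BYTES_PER_CALL = 1_000_000  # stay under the 1 MB content limit

def chunk_rows(row_html_list, max_rows=MAX_ROWS_PER_CALL,
               max_bytes=MAX_BYTES_PER_CALL):
    """Split '<tr>...</tr>' strings into batches that respect both the
    1,000-row and 1 MB per-request limits."""
    batches, current, current_bytes = [], [], 0
    for row in row_html_list:
        row_bytes = len(row.encode('utf-8'))
        if current and (len(current) >= max_rows
                        or current_bytes + row_bytes > max_bytes):
            batches.append(''.join(current))
            current, current_bytes = [], 0
        current.append(row)
        current_bytes += row_bytes
    if current:
        batches.append(''.join(current))
    return batches

# Each batch then goes to edit-document with location=2 (AFTER_SECTION),
# updating section_id to the last row of the previous batch.
```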

Importing Documents: Rich Text, Markdown, and HTML

For non-tabular content, use the same POST /1/threads/new-document endpoint with type=document. You can pass content as HTML or Markdown (set format=markdown).

curl https://platform.quip.com/1/threads/new-document \
  -d "content=<h1>Q2 Report</h1><p>Summary goes here.</p>" \
  -d "title=Q2 Report" \
  -d "type=document" \
  -d "member_ids=TARGET_FOLDER_ID" \
  -H "Authorization: Bearer YOUR_TOKEN"

Markdown is the faster option when your source is already structured as headings, lists, and simple emphasis. HTML gives you more control over tables, links, inline media, and precise structure.
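For the Markdown path, the request body differs only in the format field. A sketch mirroring the curl example above:

```python
def markdown_doc_payload(title, markdown_text, folder_id):
    """Form fields for POST /1/threads/new-document when the source is
    already Markdown, so no HTML conversion pass is needed."""
    return {
        'content': markdown_text,
        'format': 'markdown',
        'title': title,
        'type': 'document',
        'member_ids': folder_id,
    }

payload = markdown_doc_payload(
    'Q2 Report',
    '# Q2 Report\n\n- Revenue up\n- Churn flat',
    'TARGET_FOLDER_ID',
)
# requests.post('https://platform.quip.com/1/threads/new-document',
#               headers={'Authorization': 'Bearer YOUR_TOKEN'},
#               data=payload, timeout=60)
```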

Two behaviors matter for document imports:

  1. Section-level editing. Quip edits happen at the section level. You cannot change a single sentence in a paragraph — you replace the entire paragraph section that contains it. (quip.com)
  2. Comments are separate. Comments live in Quip's message system, not the document body. Importing an external document does not recreate comment threads. There is no documented endpoint for importing historical revisions.

If your document shape repeats, start from a Quip template and use the copy API with mail_merge_values instead of regenerating every section from scratch.

Importing Attachments and Images

Use the blob API: POST /1/blob/{thread_id} uploads a binary file to an existing thread and returns a url and id. You can then reference that URL in a subsequent edit-document call to embed the image in the document body.

Inline base64 images in HTML are not supported. If your source data has file columns, you need a second pass that uploads binaries and maps returned blob identifiers back into the right row, cell, or document section. Attachments are not imported by passing file paths inside CSV cells.
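A sketch of that two-pass flow (the multipart field name 'blob' and the response shape follow Quip's blob API docs, but verify both against your API version; the helper names are mine):

```python
import os

def upload_blob(session, base_url, thread_id, file_path):
    """First pass: POST the binary to /1/blob/{thread_id} and return the
    URL Quip assigns to it."""
    with open(file_path, 'rb') as f:
        resp = session.post(
            f'{base_url}/1/blob/{thread_id}',
            files={'blob': (os.path.basename(file_path), f)},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()['url']

def embed_image_html(blob_url, alt_text=''):
    """Second pass: HTML fragment referencing the uploaded blob, for a
    follow-up edit-document call."""
    return f'<img src="{blob_url}" alt="{alt_text}"/>'
```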

Quip API Limits: Payload Sizes, Cells, and Rate Limits

These are the hard constraints that shape every import script. Understanding how to export data from Quip and import it requires navigating the same API ceilings.

Request and payload limits

| Constraint | Limit | What happens when exceeded |
| --- | --- | --- |
| Content per request | 1 MB | HTTP 413 Payload Too Large |
| Cells per spreadsheet | 100,000 total | Writes rejected after limit |
| Rows per API call | 1,000 max | Extra rows silently ignored |
| Recommended cells per spreadsheet | 30,000 | Performance degrades above this |
| Items per folder | 4,000 max (1,000 recommended) | HTTP 400 at hard limit |
| Members per thread | 2,500 total (added across multiple requests) | — |
| Folder IDs per request | 100 max (in folder assignment calls) | — |

Rate limits

| Scope | Limit |
| --- | --- |
| Per user (Automation API) | 50 requests/minute, 750 requests/hour |
| Per company | 600 requests/minute |
| Admin API per admin | 100 requests/minute, 1,500 requests/hour |
| Bulk document export | 36,000 documents/hour |

The per-user limit of 50 requests per minute is the binding constraint for most imports. If you're inserting rows one at a time via edit-document, you can process at most 50 rows per minute. For a 10,000-row dataset, that's over 3 hours — assuming zero retries.

When mapping Notion databases to Quip, or dealing with any relational database export, you must proactively calculate columns × rows. If the total exceeds 100,000 cells, your script must split the dataset into multiple Quip spreadsheets before sending any API requests.
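That split calculation is simple enough to do up front (a sketch; the limits come from the tables above):

```python
import math

HARD_CELL_LIMIT = 100_000
RECOMMENDED_CELL_LIMIT = 30_000

def spreadsheets_needed(n_rows, n_cols, cell_limit=RECOMMENDED_CELL_LIMIT):
    """How many Quip spreadsheets a dataset must be split across so each
    stays under the chosen cell ceiling (columns count against it too)."""
    rows_per_sheet = cell_limit // n_cols
    if rows_per_sheet < 1:
        raise ValueError('Too many columns for even one row under this limit')
    return math.ceil(n_rows / rows_per_sheet)

# 15,000 rows x 12 columns = 180,000 cells: two sheets at the 100K hard
# cap, six sheets at the 30K recommended ceiling.
```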

A few less obvious limits: new-document does not support slides. If you rely on search during QA, the Search for Threads endpoint caps results at 50 per call. And multiple parallel loaders under different user tokens can throttle each other if they exceed the 600 requests/minute company limit.

Info

You can request higher limits. Quip's documentation states: if you need to call APIs more frequently than allowed, contact Quip Customer Support.

The HTTP 503 Rate Limit Trap

This is the single most common cause of failed Quip import scripts.

Quip returns HTTP 503 when you hit the rate limit — not just the standard HTTP 429. At least one documented endpoint (Search for Threads) explicitly lists 503 as the rate-limited response code. Most HTTP client libraries, retry middleware, and iPaaS platforms interpret 503 as "server unavailable" and either give up or enter aggressive retry loops that prolong the lockout. (quip.com)

Quip's 503 response does not include a standard Retry-After header. You must read the rate-limit-specific headers:

  • X-Ratelimit-Limit: Requests allowed per minute
  • X-Ratelimit-Remaining: Requests left in the current window
  • X-Ratelimit-Reset: UTC timestamp when the window resets

What breaks in practice

  • Zapier / Make: Treat 503 as a server error. May stop the workflow or retry incorrectly. No built-in way to tell these tools "this is a rate limit, back off."
  • Python requests with default retry: Libraries like urllib3.Retry retry on 503 but with the wrong backoff strategy, potentially causing retry storms.
  • Monitoring/alerting: If your monitoring flags 503s as outage alerts, you'll get false positives during every batch import. Classify 503s from platform.quip.com as rate-limit events in your alerting rules, not outage incidents.

Safe retry implementation

import time
import requests
 
def quip_api_call(url, headers, data, max_retries=5):
    for attempt in range(max_retries):
        # Bound each request so a hung call can't stall the whole batch.
        resp = requests.post(url, headers=headers, data=data, timeout=60)
        
        if resp.status_code == 200:
            return resp.json()
        
        if resp.status_code in (503, 429):
            reset_time = resp.headers.get('X-Ratelimit-Reset')
            if reset_time:
                wait = max(float(reset_time) - time.time(), 1)
            else:
                wait = 2 ** attempt  # Exponential backoff fallback
            print(f"Rate limited (attempt {attempt + 1}). Waiting {wait:.1f}s.")
            time.sleep(wait)
            continue
        
        resp.raise_for_status()
    
    raise Exception(f"Failed after {max_retries} retries")

Danger

503 ≠ server down. Never rely on default retry libraries when building against the Quip API. Explicitly handle both 429 and 503, evaluate X-Ratelimit-Remaining on every response, and apply backoff based on the X-Ratelimit-Reset timestamp.

The safest approach is to throttle before Quip does — run a client-side token bucket below Quip's published limits. Monitor X-Ratelimit-Remaining proactively on every response, not just error responses. Keep batches idempotent so retries don't create duplicates, and checkpoint every batch with source IDs, target thread IDs, and retry counts.
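A minimal token bucket for that client-side throttle might look like this (a sketch; the 45/minute default deliberately sits below Quip's published 50):

```python
import time

class TokenBucket:
    """Client-side throttle that stays under the per-user limit so Quip
    never has to answer with a 503."""

    def __init__(self, rate_per_minute=45, clock=time.monotonic):
        self.capacity = float(rate_per_minute)
        self.tokens = float(rate_per_minute)
        self.fill_rate = rate_per_minute / 60.0  # tokens per second
        self.clock = clock
        self.last = clock()

    def throttle(self):
        """Consume one token and return the seconds the caller should
        sleep before sending the request (0.0 when under the limit)."""
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        wait = 0.0 if self.tokens >= 1 else (1 - self.tokens) / self.fill_rate
        self.tokens -= 1  # may dip below zero; elapsed time refills it
        return wait

# Usage: time.sleep(bucket.throttle()) before every requests.post call.
```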

How Quip Interprets Imported Data

Understanding Quip's internal data model prevents silent data loss during imports.

Quip's content model

  • Thread: The core Quip object. A document or spreadsheet lives inside a thread, and messages/comments live on that thread too.
  • Folders: Quip allows nested folders, but a thread can belong to multiple folders. Access can be inherited from folders. If your source system assumes single-parent hierarchy or inherited page permissions, you need a mapping plan — not a straight copy. (quip.com)
  • Permissions: Access is folder/user based, not row or cell based. Adding a thread to a folder can extend access through folder inheritance. Source ACL models rarely copy 1:1.
  • Links: Threads have permanent IDs and expirable URL suffixes. Build internal links after thread creation and key them off stored thread IDs. Relational links from a source system do not become live Quip relations — that part is custom mapping logic.

Spreadsheet data mapping

  • Columns are untyped. Everything imports as text. There are no date pickers, dropdowns, or number formatting from the import. Formatting is set manually after import, or you use a project tracker live app for typed fields.
  • Formulas: Quip supports 400+ functions, but they use Quip's formula syntax, not Excel's. If you're importing from Excel, formulas need translation.
  • Cell references: =A1+B1 works within a spreadsheet, but cross-spreadsheet references don't exist. Each spreadsheet is an isolated grid.
  • Multi-tab spreadsheets: Quip spreadsheets support multiple tabs, but the API creates a single tab. Additional tabs require separate edit-document calls.

What breaks on import

| Source element | What happens in Quip |
| --- | --- |
| Excel formulas | Dropped or rendered as text |
| Conditional formatting | Lost entirely |
| Cell comments | Not imported (comments are thread-level) |
| Merged cells | Flattened to individual cells |
| Data validation / dropdowns | Not supported |
| Hyperlinks in cells | Preserved if wrapped in <a> tags |
| Relations / foreign keys | No concept — flat grid only |
| Revision history | Not transferable |
| Permissions per row/cell | Not supported — permissions are per-thread |
| Embedded iframes/videos | Stripped entirely |
| Custom CSS / inline styles | Stripped or flattened |
| Multi-column layouts | Collapsed to single column |
| Background colors | Removed |

Document import caveats

  • <h1> through <h3> are preserved. The first heading becomes the document title if none is specified.
  • Ordered and unordered lists import correctly. Checklists require specific Quip section markup.
  • You can embed spreadsheets inside documents — these become interactive Quip spreadsheets, not static HTML tables.
  • Quip's HTML parser is opinionated: complex CSS, <div> nesting, and inline styles are stripped. Custom fonts become Quip defaults. For content migration from tools like Confluence or SharePoint, see our Confluence to Quip guide or SharePoint to Quip guide.

Edge Cases That Break Quip Imports

These are the issues that actually bite in production.

Encoding problems

Quip's HTML parser expects UTF-8. If your CSV was exported from a legacy system using Windows-1252 or ISO-8859-1, special characters (em dashes, curly quotes, accented names) will render as mojibake or cause silent parse errors where entire rows are dropped. Always convert to UTF-8 before generating HTML. Run chardet or file -i on your source file to verify encoding before you start.
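The conversion itself is one stdlib call once you know the source encoding (a sketch; cp1252 is the usual suspect for Excel exports, but confirm with chardet or file -i first):

```python
def to_utf8(raw_bytes, source_encoding='cp1252'):
    """Re-encode legacy bytes to UTF-8 before HTML generation. Decoding
    with the wrong codec is what produces mojibake downstream."""
    return raw_bytes.decode(source_encoding).encode('utf-8')

# 0x97 is an em dash in Windows-1252; after re-encoding it is the
# proper UTF-8 byte sequence for U+2014.
fixed = to_utf8(b'Acme \x97 Q2')
```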

The 1 MB chunking problem

The 1 MB limit applies to the serialized HTML payload, not the raw CSV size. A 500 KB CSV can exceed 1 MB once wrapped in HTML tags. For a spreadsheet with 20 columns and 2,000 rows, the HTML table markup alone can push past 1 MB even with short cell values.

The solution: create the spreadsheet with headers only (one new-document call), then append rows in batches using edit-document with location=2 (AFTER_SECTION), staying under both the 1 MB and 1,000-row limits per call.

Duplicate and missing records

Quip's API has no upsert or deduplication. If your script retries after a timeout (especially on 503 responses), you may insert the same rows twice. Build idempotency into your import: track which batches succeeded (by section_id in the response) and skip them on retry. Keep a source-system primary key in a dedicated column so you can compare source and target counts without guessing which records already landed.

Missing records typically result from:

  • Exceeding 1,000 rows in a single edit-document call (extras are silently dropped)
  • HTML parse errors in individual rows (a malformed <td> tag can break the entire batch)
  • Hitting the 100,000-cell ceiling without an obvious error message
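A checkpoint file is usually enough to make retries idempotent (a sketch; the file format and class name are mine):

```python
import json
from pathlib import Path

class BatchCheckpoint:
    """Record which source batches already landed so a retry after a
    timeout or 503 never re-inserts the same rows."""

    def __init__(self, path):
        self.path = Path(path)
        self.done = {}
        if self.path.exists():
            self.done = json.loads(self.path.read_text())

    def is_done(self, batch_key):
        return batch_key in self.done

    def mark_done(self, batch_key, section_id):
        # section_id comes from the edit-document response; storing it
        # lets the next batch anchor AFTER_SECTION correctly on resume.
        self.done[batch_key] = section_id
        self.path.write_text(json.dumps(self.done))
```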

Zero-width characters from other tools

Exports from tools like Notion can include \u200b (zero-width space) characters in cell values. These are invisible but cause sort and filter issues in Quip spreadsheets. Strip them during pre-processing.
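Stripping them is a one-line translate (a sketch covering the common zero-width code points):

```python
# U+200B zero-width space, U+200C/U+200D joiners, U+FEFF BOM
ZERO_WIDTH = dict.fromkeys(map(ord, '\u200b\u200c\u200d\ufeff'))

def strip_invisible(value):
    """Remove zero-width characters that Notion-style exports hide in
    cell values, which otherwise break sorting and filtering."""
    return value.translate(ZERO_WIDTH)
```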

Large dataset performance

Quip recommends keeping spreadsheets under 30,000 cells for performance. Above that, the web UI slows down and collaborative editing becomes sluggish. At 100,000 cells, the spreadsheet hits the hard cap and no more data can be added.

If you're importing a dataset with 50 columns and 2,000+ rows, you're already at the recommended limit. Split the data across multiple Quip spreadsheets proactively.

Comparing Import Methods: Zapier vs. Custom Scripts vs. Managed Service

iPaaS tools (Zapier, Make)

Best for: Appending individual rows to an existing Quip spreadsheet from a trigger (form submission, CRM update, webhook).

Where they fail: Bulk imports. Zapier's Quip integration supports adding a single row per task execution. To import 5,000 rows, you'd need 5,000 task runs — hitting both Zapier's task limits and Quip's 50 requests/minute rate limit. The 503 rate-limit behavior breaks Zapier's built-in error handling, since Zapier doesn't recognize 503 as a rate-limit signal.

There's no support for creating spreadsheets with pre-defined column structures, importing documents, handling attachments, or organizing content into folder hierarchies. And as noted, Quip's July 2025 release notes classify Zapier as a retired legacy integration.

Custom scripts (Python / quipclient)

Best for: Engineering teams who need full control over data transformation and have time to build and maintain retry logic.

Where they fail: The engineering overhead compounds fast. You need to:

  1. Convert source data to HTML <table> format with proper escaping
  2. Chunk payloads under 1 MB
  3. Batch rows under 1,000 per call
  4. Handle both 503 and 429 rate limits with header-driven backoff
  5. Track successful batches for idempotent retries
  6. Create folder structures via separate API calls
  7. Upload attachments via the blob API, then reference them

For a one-time import of a single spreadsheet, this is manageable. For migrating hundreds of documents with mixed content types, folder hierarchies, and attachments, it becomes a multi-week engineering project.

The official quip-api Python library provides helper methods, but it hasn't seen significant updates. You'll likely write your own rate-limit handling and chunking logic.

Copy and paste

Best for: Fewer than 10 documents with simple formatting.

Where it fails: Everywhere else. Pasting strips most formatting, doesn't create proper spreadsheet structures from tabular data, and can't be automated. Not a viable method for any real migration.

When all standard tools break

The methods above work for simple imports. They start breaking when:

  • You're migrating from another tool (Notion, Confluence, SharePoint, Coda) and need to preserve document structure, folder hierarchies, and internal links. Each source platform has its own data model that needs translation into Quip's thread/folder paradigm.
  • The dataset exceeds 100,000 cells and needs to be intelligently split across multiple Quip spreadsheets with cross-references maintained.
  • You need mixed content — documents with embedded spreadsheets, images, and structured data — in a single automated pipeline.
  • You have compliance or audit requirements that demand verified record counts and data integrity validation post-import.
  • The timeline is measured in days, not weeks, and you can't afford to build, test, and debug custom import scripts from scratch.
  • You're in a VPC Quip environment where endpoint routing differs from standard examples.

For Notion to Quip migrations, the challenge compounds: Notion databases use typed properties (relations, rollups, formulas) that have no equivalent in Quip. The mapping requires custom transformation logic that iPaaS tools can't handle.

Best Practices Before Importing Data into Quip

Clean and structure your data first

  • Remove empty rows and columns. Every empty cell counts toward the 100,000-cell limit.
  • Normalize encoding to UTF-8. Use iconv or Python's chardet library to detect and convert.
  • Strip HTML from cell values if your source exported rich text into CSV fields — Quip will try to parse it as markup.
  • Flatten nested data. Quip spreadsheets are flat grids. JSON arrays in cells or multi-value fields will render as raw text.
  • Keep a source ID in every migrated object. It's the simplest defense against duplicate rows and broken link remaps.
  • Normalize dates, users, and enums before render time. Don't try to solve semantic mapping inside HTML generation.
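Several of those rules fit into a single per-cell pre-processing pass (a sketch; the tag-stripping regex is deliberately crude and the flattening delimiter is arbitrary):

```python
import html
import re

TAG_RE = re.compile(r'<[^>]+>')

def clean_cell(value):
    """Normalize one source cell before HTML rendering: flatten list
    values, strip stray markup from rich-text exports, and unescape
    entities so they get re-escaped exactly once later."""
    if isinstance(value, list):
        value = '; '.join(str(v) for v in value)  # multi-value field
    value = TAG_RE.sub('', str(value))            # crude tag strip
    return html.unescape(value).strip()
```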

Choose the right format

  • Tabular data → HTML table via API with type=spreadsheet. There is no other option.
  • Typed workflow data → project tracker live app when you need fields like PERSON, STATUS, DATE.
  • Rich text → HTML or Markdown via API with type=document. HTML gives more structural control; Markdown is simpler but loses some elements.
  • Mixed content (text + tables) → Document with embedded spreadsheet. Create the document first, then use edit-document to append a spreadsheet section.

Prevent data loss

  • Validate row counts before and after import. The API response includes the thread's HTML — parse it to count rows.
  • Log every API call with request payload hash and response status. This is your audit trail for debugging missing records.
  • Test with a small batch first. Import 100 rows into a test folder, verify formatting, then scale up.
  • Don't rely on Quip as your only copy. Given the EOL timeline, maintain your source data independently.
  • Plan the reverse path. If Quip is an interim stop, keep your staging data clean enough that exporting back out isn't a second reconstruction project.
  • Create folders first and test permissions deliberately. Quip access is folder and user based, not row based. Verify inheritance before loading content.
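For the row-count check in particular, a rough counter over the returned thread HTML is enough (a sketch; whether the markup includes a header row depends on how the spreadsheet was created, hence the flag):

```python
import re

TR_RE = re.compile(r'<tr[\s>]')

def count_rows(thread_html, subtract_header=True):
    """Count <tr> sections in a thread's HTML so source and target row
    counts can be compared after import."""
    n = len(TR_RE.findall(thread_html))
    return max(n - 1, 0) if subtract_header else n
```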

Real-World Import Scenarios

Scenario 1: Excel spreadsheet → Quip

Context: A RevOps team needs to import a 15,000-row pipeline spreadsheet from Excel into Quip for Salesforce integration.

Decision: At 15,000 rows × 12 columns = 180,000 cells, this exceeds Quip's 100,000-cell limit. Split into two spreadsheets: active deals (8,000 rows) and closed/lost (7,000 rows). Each stays under 100K cells.

Process: Export Excel to CSV. Convert CSV to HTML tables using a Python script. Create two spreadsheets via new-document. Append rows in batches of 800 (leaving margin under the 1,000-row limit). Total API calls: roughly 20 across both spreadsheets (one create plus about ten row-append batches each), well within rate limits.

Gotcha: The Excel file used Windows-1252 encoding. Three columns with accented customer names rendered incorrectly until the source was re-encoded to UTF-8.

Scenario 2: Blog content → Quip documents

Context: A marketing team wants 200 blog posts from a CMS in Quip for collaborative editing before a website relaunch.

Process: Export posts as HTML. Strip CMS-specific markup (shortcodes, custom <div> classes). For each post, upload images via the blob API first, replace image URLs with Quip blob URLs, then create documents via new-document. Organize into a folder hierarchy matching the blog's category structure.

Gotcha: Embedded YouTube iframes were stripped entirely. The team replaced them with placeholder text and the video URL as a hyperlink.

Scenario 3: Notion workspace → Quip (tool-to-tool migration)

Context: A company consolidating on Salesforce needs 500 Notion pages and 30 databases moved to Quip.

Decision: Notion relations, rollups, and views have no Quip equivalent. Custom transformation logic is required.

Process: Export Notion workspace. Parse Markdown files for documents, CSV files for databases. Convert to Quip-compatible HTML. Map Notion's page hierarchy to Quip folder tree. Create a URL mapping table and rewrite internal links post-import.

Gotcha: Notion exports include \u200b (zero-width space) characters in cell values, causing sort and filter issues in Quip. Strip them during pre-processing.

For source-specific migration patterns, see our guides on Notion to Quip, Confluence to Quip, and SharePoint to Quip.

How ClonePartner Handles Complex Quip Imports

The constraints above — 503 rate limits, 1 MB payload caps, HTML table formatting, 100K cell ceilings — are the same constraints we've engineered around across hundreds of Quip-related migrations.

What this looks like in practice:

  • Automatic CSV/JSON → HTML table conversion with encoding detection, HTML entity sanitization, and proper first-row header interpretation. Handles edge cases like unescaped HTML entities in cell values that break Quip's parser.
  • Built-in 503 handling with proactive rate-limit header monitoring and adaptive backoff — not the reactive retry-and-pray approach that standard libraries default to.
  • Payload chunking that respects both the 1 MB content limit and the 1,000-row batch limit, with idempotent batch tracking to prevent duplicates on retry.
  • Folder hierarchy reconstruction — creating the target folder tree first, then filing documents and spreadsheets into the correct locations with proper permission inheritance.
  • Data integrity validation — post-import row counts, cell-level verification, and a migration report that proves nothing was dropped or duplicated.

We typically complete Quip imports in days, not weeks. For small one-off imports, an internal script can be enough. For multi-phase or high-risk migrations — especially under deadline — specialist execution is usually faster than building and debugging a throwaway importer in-house.

Frequently Asked Questions

Can you still import CSV files into Quip?
No. Quip retired the native Upload/Import option on January 19, 2024. To import CSV data, you must convert it to HTML markup and use the Quip Automation API's POST /1/threads/new-document endpoint with type=spreadsheet.
What is the Quip API rate limit?
The Quip Automation API defaults to 50 requests per minute and 750 requests per hour per user. There's also a company-wide limit of 600 requests per minute. The Admin API allows 100 requests per minute per admin. You can request higher limits through Quip Customer Support.
What is the maximum spreadsheet size in Quip?
Quip spreadsheets have a hard limit of 100,000 cells total and a recommended maximum of 30,000 cells for performance. You can add a maximum of 1,000 rows per API call, and each API request is capped at 1 MB of content.
Why does Quip return 503 instead of 429 for rate limiting?
Quip returns HTTP 503 when you hit the rate limit, not just the standard 429. It also doesn't include a Retry-After header. This breaks standard retry logic in most HTTP libraries and iPaaS tools, which interpret 503 as a server error. You must implement custom backoff logic using the X-Ratelimit-Reset header.
Is Quip being discontinued?
Yes. Salesforce announced that all Quip products are being retired. Subscriptions cannot be renewed after March 1, 2027. After your subscription expires, there's a 90-day read-only phase, followed by blocked logins, then data deletion.
