
Limits

| Feature | Limit |
| --- | --- |
| Databases | 50,000 (Workers Paid) ¹ / 10 (Free) |
| Maximum database size | 10 GB (Workers Paid) / 500 MB (Free) |
| Maximum storage per account | 1 TB (Workers Paid) ² / 5 GB (Free) |
| Time Travel duration (point-in-time recovery) | 30 days (Workers Paid) / 7 days (Free) |
| Maximum Time Travel restore operations | 10 restores per 10 minutes (per database) |
| Queries per Worker invocation (read subrequest limits) | 1,000 (Workers Paid) / 50 (Free) |
| Maximum number of columns per table | 100 |
| Maximum number of rows per table | Unlimited (excluding per-database storage limits) |
| Maximum string, BLOB or table row size | 2,000,000 bytes (2 MB) |
| Maximum SQL statement length | 100,000 bytes (100 KB) |
| Maximum bound parameters per query | 100 |
| Maximum arguments per SQL function | 32 |
| Maximum characters (bytes) in a LIKE or GLOB pattern | 50 bytes |
| Maximum bindings per Workers script | Approximately 5,000 ³ |
| Maximum SQL query duration | 30 seconds ⁴ |
| Maximum file import (d1 execute) size | 5 GB ⁵ |

Footnotes

1. The maximum number of databases per account can be increased by request on Workers Paid and Enterprise plans, with support for millions to tens of millions of databases (or more) per account. Refer to the guidance on limit increases on this page to request an increase.

2. The maximum storage per account can be increased by request on Workers Paid and Enterprise plans. Refer to the guidance on limit increases on this page to request an increase.

3. A single Worker script can have up to 1 MB of script metadata. A binding is defined as a binding to a resource, such as a D1 database, KV namespace, environment variable, or secret. Each resource binding is approximately 150 bytes; the size of environment variables and secrets depends on the value you provide. Excluding environment variables, you can bind up to ~5,000 D1 databases to a single Worker script.

4. Requests to the Cloudflare API must resolve within 30 seconds. This duration limit therefore also applies to an entire batch call.

5. The imported file is uploaded to R2. Refer to the R2 upload limit.

Cloudflare also offers other storage solutions such as Workers KV, Durable Objects, and R2. Each product has different advantages and limits. Refer to Choose a data or storage product to review which storage option is right for your use case.

Frequently Asked Questions

Frequently asked questions related to D1 limits:

How much work can a D1 database do?

D1 is designed for horizontal scale-out across multiple, smaller (10 GB) databases, such as per-user, per-tenant, or per-entity databases. D1 allows you to build applications with thousands of databases at no extra cost, as pricing is based only on query and storage costs.

Storage

Each D1 database can store up to 10 GB of data.

Concurrency and throughput

Each individual D1 database is inherently single-threaded, and processes queries one at a time.

Your maximum throughput is directly related to the duration of your queries.

  • If your average query takes 1 ms, you can run approximately 1,000 queries per second.
  • If your average query takes 100 ms, you can run 10 queries per second.
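
You can observe the SQL duration of your own queries from a Worker: D1 reports query execution time on the meta object of each result. Below is a minimal sketch; the binding name (DB) and the users table are illustrative assumptions.

```ts
// A minimal sketch: estimate per-database throughput from observed SQL duration.
// The binding name (DB) and the users table are illustrative assumptions.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const result = await env.DB
      .prepare("SELECT name FROM users WHERE id = ?")
      .bind(42)
      .all();

    // D1 reports the SQL execution time, in milliseconds, on result.meta.
    const durationMs = result.meta.duration;

    // Since each database processes queries one at a time, sequential
    // throughput for this query shape is roughly 1000 / durationMs per second.
    const approxQps = 1000 / durationMs;

    return Response.json({ durationMs, approxQps });
  },
};
```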

A database that receives too many concurrent requests will first attempt to queue them. If the queue becomes full, the database will return an "overloaded" error.
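
One common mitigation is to retry with backoff when a query fails with an overload error. The sketch below is illustrative rather than an official pattern: the error-message check and the retry parameters are assumptions to adapt to your own error handling.

```ts
// A hedged sketch: retry a D1 query with exponential backoff on overload.
// The "overloaded" message check and the backoff constants are assumptions.
async function queryWithRetry<T>(
  db: D1Database,
  sql: string,
  maxAttempts = 3,
): Promise<D1Result<T>> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await db.prepare(sql).all<T>();
    } catch (err) {
      const overloaded =
        err instanceof Error && err.message.includes("overloaded");
      if (!overloaded || attempt >= maxAttempts) throw err;
      // Back off before retrying: 100 ms, 200 ms, 400 ms, ...
      await new Promise((resolve) =>
        setTimeout(resolve, 100 * 2 ** (attempt - 1)),
      );
    }
  }
}
```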

Each individual D1 database is backed by a single Durable Object. When using D1 read replication, each replica is a separate Durable Object, and the guidelines above apply to each replica independently.
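
With read replication enabled, reads can be routed through the Sessions API so that related queries observe a consistent point in time. A minimal sketch, assuming the Sessions API as described in D1's read replication documentation; the binding (DB) and the orders table are illustrative:

```ts
// A minimal sketch of the D1 Sessions API with read replication enabled.
// The binding (DB) and the orders table are illustrative assumptions.
export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    // "first-unconstrained" lets the first query run on any replica;
    // later queries in the session see at least that point in time.
    const session = env.DB.withSession("first-unconstrained");

    const { results } = await session
      .prepare("SELECT id, total FROM orders WHERE user_id = ?")
      .bind(123)
      .all();

    return Response.json(results);
  },
};
```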

Query performance

Query performance is the most important factor for throughput. As a rough guideline:

  • Read queries like SELECT name FROM users WHERE id = ? with an appropriate index on id typically take less than a millisecond of SQL duration.
  • Write queries like INSERT or UPDATE can take several milliseconds of SQL duration, depending on the number of rows written. Writes need to be durably persisted across several locations - learn more on how D1 persists data under the hood.
  • Data migrations like a large UPDATE or DELETE affecting millions of rows must be run in batches. A single query that attempts to modify hundreds of thousands of rows or hundreds of MBs of data at once will exceed execution limits. Break the work into smaller chunks (for example, 1,000 rows at a time) to stay within platform limits, as in the sketch after this list.
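
As one way to batch such a migration, the sketch below deletes at most 1,000 rows per statement and loops until no rows match. The events table, created_at column, and batch size are assumptions; a long-running cleanup like this would typically run from a scheduled Worker or a script rather than a request handler, and remains subject to the per-invocation query limits above.

```ts
// A hedged sketch: run a large DELETE in bounded batches.
// The events table, created_at column, and batch size are assumptions.
async function deleteOldRows(db: D1Database, cutoff: string): Promise<number> {
  let deleted = 0;
  for (;;) {
    // Delete at most 1,000 matching rows per statement to stay well
    // within per-query execution limits.
    const result = await db
      .prepare(
        "DELETE FROM events WHERE id IN " +
          "(SELECT id FROM events WHERE created_at < ? LIMIT 1000)",
      )
      .bind(cutoff)
      .run();

    const changes = result.meta.changes ?? 0;
    deleted += changes;
    if (changes === 0) break; // nothing left to delete
  }
  return deleted;
}
```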

To ensure your queries are fast and efficient, use appropriate indexes in your SQL schema.
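
For example, an equality lookup benefits from an index on the filtered column. The table, column, and index names below are illustrative, and in practice the CREATE INDEX belongs in a migration rather than request-time code:

```ts
// A minimal sketch: add an index so an equality lookup avoids a full table scan.
// Table, column, and index names are illustrative assumptions.
async function lookupUserByEmail(db: D1Database, email: string) {
  // In practice, create indexes once in a migration, not per request.
  await db.exec("CREATE INDEX IF NOT EXISTS idx_users_email ON users(email)");

  return db
    .prepare("SELECT id, name FROM users WHERE email = ?")
    .bind(email)
    .first();
}
```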

CPU and memory

Operations on a D1 database, including query execution and result serialization, run within the Workers platform CPU and memory limits.

Exceeding these limits, or hitting other platform limits, will generate errors. Refer to the D1 error list for more details.

How many simultaneous connections can a Worker open to D1?

Each invocation of your Worker can open up to six simultaneous connections to D1.

For more information on a Worker's simultaneous connections, refer to Simultaneous open connections.
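
In practice this means you can issue queries concurrently without counting connections yourself: per the Workers documentation on simultaneous open connections, attempts beyond the limit are queued until an open connection closes. A small illustrative sketch:

```ts
// A minimal sketch: fan out independent D1 queries from one invocation.
// The runtime queues connection attempts beyond the simultaneous limit.
async function fanOut(db: D1Database, ids: number[]) {
  return Promise.all(
    ids.map((id) =>
      db.prepare("SELECT * FROM users WHERE id = ?").bind(id).all(),
    ),
  );
}
```

Where the statements target the same database, db.batch() groups them into a single request instead.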
