
Frequently Asked Questions

Overview

Why should I use DuckLake?

DuckLake provides a lightweight one-stop solution if you need a data lake and catalog.

You can use DuckLake for a “multiplayer DuckDB” setup with multiple DuckDB instances reading and writing the same dataset – a concurrency model not supported by vanilla DuckDB.
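For example, assuming the ducklake extension is installed, two DuckDB sessions on different machines could attach the same DuckLake; this is a sketch with placeholder connection details and bucket names:

```sql
-- Session A and session B can each run this in their own DuckDB process.
-- A shared catalog database (here PostgreSQL) coordinates concurrent access.
ATTACH 'ducklake:postgres:dbname=ducklake_catalog host=catalog.example.com'
    AS shared_lake (DATA_PATH 's3://example-bucket/lake/');
USE shared_lake;

-- Both sessions can now read and write the same tables.
INSERT INTO events VALUES (42, 'login');
```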

Even if you use DuckDB for both your DuckLake entry point and your catalog database, you still benefit from DuckLake: you can run time travel queries, exploit data partitioning, and store your data in multiple files instead of a single (potentially very large) database file.
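As a sketch of these features (the table and column names are made up; the syntax follows the ducklake extension's AT and SET PARTITIONED BY clauses):

```sql
-- Time travel: query the table as of an earlier snapshot ...
SELECT count(*) FROM events AT (VERSION => 3);

-- ... or as of a point in time.
SELECT count(*) FROM events AT (TIMESTAMP => now() - INTERVAL 1 DAY);

-- Partitioning: newly written data is split by this key,
-- allowing files to be pruned at query time.
ALTER TABLE events SET PARTITIONED BY (event_date);
```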

Is DuckLake an open table format?

DuckLake is both a lakehouse format and an open table format. When comparing to other technologies, DuckLake is similar to Delta Lake with Unity Catalog and Iceberg with Lakekeeper or Polaris.

What is “DuckLake”?

“DuckLake” can refer to a number of things:

  1. The DuckLake lakehouse format, which uses a catalog database and Parquet files on a storage layer to store data.
  2. A DuckLake instance storing a dataset with the DuckLake lakehouse format.
  3. The ducklake DuckDB extension, which supports reading/writing datasets using the DuckLake format.
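The extension (sense 3) can be tried out locally with a DuckDB file as the catalog; a minimal sketch, with made-up file and directory names:

```sql
INSTALL ducklake;
LOAD ducklake;

-- Use a local metadata file as the catalog and a local directory for data.
ATTACH 'ducklake:my_lake.ducklake' AS my_lake (DATA_PATH 'my_lake_files/');
USE my_lake;
CREATE TABLE demo (i INTEGER);
```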

Where can I get the DuckLake logo?

You can download the logo package. You can also download individual logos:

  • Dark mode, inline layout: png, svg
  • Dark mode, stacked layout: png, svg
  • Light mode, inline layout: png, svg
  • Light mode, stacked layout: png, svg

Architecture

What are the main components of DuckLake?

DuckLake needs a storage layer and a catalog database, both of which can be picked from a wide range of options. The storage system can be a blob store (object storage), a block storage device, or a file system. For the catalog database, any SQL-compatible database that supports ACID operations and primary keys works.

Does DuckLake work on AWS S3 (or a compatible storage)?

DuckLake can store its data files (Parquet files) on AWS S3 blob storage or on compatible services such as Azure Blob Storage, Google Cloud Storage, or Cloudflare R2. The catalog database can run anywhere, e.g., in an AWS Aurora database.
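A sketch of such a setup (the bucket, host, and credential values are placeholders; S3 credentials are configured via a DuckDB secret):

```sql
-- Credentials for the S3 storage, via DuckDB's secrets manager.
CREATE SECRET (
    TYPE s3,
    KEY_ID 'my_key',
    SECRET 'my_secret',
    REGION 'eu-west-1'
);

-- Catalog in a PostgreSQL-compatible database (e.g., AWS Aurora),
-- data files on S3.
ATTACH 'ducklake:postgres:dbname=ducklake host=aurora.example.com user=ducklake'
    AS prod_lake (DATA_PATH 's3://example-bucket/prod-lake/');
```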

DuckLake in Operation

Is DuckLake production-ready?

While we have tested DuckLake extensively, it is not yet production-ready, as indicated by its version number, 0.3. We expect DuckLake to mature over the course of 2025.

How is authentication implemented in DuckLake?

DuckLake piggybacks on the authentication of the metadata catalog database. For example, if your catalog database is PostgreSQL, you can use PostgreSQL's authentication and authorization methods to protect your DuckLake. This is particularly effective when encryption of DuckLake's data files is enabled.
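For example, with the ducklake extension, encryption can be enabled when attaching, so that the Parquet files are unreadable without the keys stored in the catalog (connection details and paths below are placeholders):

```sql
-- Only clients that can authenticate against the PostgreSQL catalog
-- obtain the keys needed to decrypt the Parquet data files.
ATTACH 'ducklake:postgres:dbname=ducklake host=catalog.example.com'
    AS secure_lake (DATA_PATH 's3://example-bucket/lake/', ENCRYPTED);
```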

How does DuckLake deal with the “small files problem”?

The “small files problem” is a well-known problem in data lake formats and occurs, e.g., when data is inserted in small batches, yielding many small files, each storing only a small amount of data. DuckLake significantly mitigates this problem by storing the metadata in a database system (the catalog database) and making the compaction step simple. DuckLake also harnesses the catalog database to stage data (a technique called “data inlining”) before serializing it into Parquet files. Further improvements are on the roadmap.
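With the ducklake extension, both techniques can be used explicitly; a sketch, assuming the attached catalog is named my_lake and using the extension's documented option and function names:

```sql
-- Data inlining: stage small inserts in the catalog database
-- instead of writing tiny Parquet files.
ATTACH 'ducklake:my_lake.ducklake' AS my_lake
    (DATA_PATH 'my_lake_files/', DATA_INLINING_ROW_LIMIT 1000);

-- Later, flush inlined rows from the catalog into Parquet files ...
CALL ducklake_flush_inlined_data('my_lake');

-- ... and compact many small files into fewer larger ones.
CALL ducklake_merge_adjacent_files('my_lake');
```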

Features

Are constraints such as primary keys and foreign keys supported?

No. Like other lakehouse technologies, DuckLake does not support constraints, keys, or indexes. For more information, see the list of unsupported features.

Can I export my DuckLake into other formats?

Yes. Starting with v0.3, you can copy from DuckLake to Iceberg.
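A sketch of such an export, assuming both the ducklake and iceberg extensions are loaded and the Iceberg catalog's attachment details are filled in (all names and paths here are placeholders):

```sql
-- Attach the source DuckLake and a target Iceberg catalog.
ATTACH 'ducklake:my_lake.ducklake' AS my_lake (DATA_PATH 'my_lake_files/');
ATTACH 'my_warehouse' AS my_iceberg (TYPE iceberg);

-- Copy the tables, including their data, from DuckLake to Iceberg.
COPY FROM DATABASE my_lake TO my_iceberg;
```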

Are DuckDB database files supported as the data files for DuckLake?

The data files of DuckLake must be stored in Parquet format. Using DuckDB database files as storage is not supported at the moment.

Are there any practical limits to the size of data and the number of snapshots?

No. The only limitation is the catalog database's performance, but even with a relatively slow catalog database, you can store terabytes of data and millions of snapshots.
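The snapshot history itself can be inspected through the catalog; for example, with the ducklake extension (assuming the attached catalog is named my_lake):

```sql
-- List all snapshots with their IDs, timestamps, and recorded changes.
SELECT * FROM my_lake.snapshots();
```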

Development

How is DuckLake tested?

DuckLake receives extensive testing, including running the applicable subset of DuckDB's thorough test suite. That said, if you encounter any problems using DuckLake, please submit an issue in the DuckLake issue tracker.

How can I contribute to DuckLake?

If you encounter any problems using DuckLake, please submit an issue in the DuckLake issue tracker. If you have any suggestions or feature requests, please open a ticket in DuckLake's discussion forum. You are also welcome to implement support in other systems for DuckLake following the specification.

Is the documentation available as a single file?

Yes, you can download the documentation as a single Markdown file and as a PDF.

When is the next version of the DuckLake standard released and what features will it include?

The DuckLake 0.4 standard will be released in late 2025. See the roadmap for upcoming features. For past releases, see the release calendar.