Merged
6 changes: 3 additions & 3 deletions CHANGELOG.md
@@ -6,7 +6,7 @@ The format is based on [Keep a Changelog], and this project adheres to [Semantic

Releases prior to 7.0 have been removed from this file to declutter search results; see the [archived copy](https://github.com/dipdup-io/dipdup/blob/8.0.0b5/CHANGELOG.md) for the full list.

## [8.3.4] - 2025-05-19
## [8.4.0] - 2025-05-19

### Added

@@ -734,8 +734,8 @@ Releases prior to 7.0 has been removed from this file to declutter search result
[semantic versioning]: https://semver.org/spec/v2.0.0.html

<!-- Versions -->
[Unreleased]: https://github.com/dipdup-io/dipdup/compare/8.3.4...HEAD
[8.3.4]: https://github.com/dipdup-io/dipdup/compare/8.3.3...8.3.4
[Unreleased]: https://github.com/dipdup-io/dipdup/compare/8.4.0...HEAD
[8.4.0]: https://github.com/dipdup-io/dipdup/compare/8.3.3...8.4.0
[8.3.3]: https://github.com/dipdup-io/dipdup/compare/8.3.2...8.3.3
[8.3.2]: https://github.com/dipdup-io/dipdup/compare/8.3.1...8.3.2
[8.3.1]: https://github.com/dipdup-io/dipdup/compare/8.3.0...8.3.1
2 changes: 0 additions & 2 deletions Makefile
@@ -94,7 +94,5 @@ before_release: ## Prepare for a new release after updating version in pyproject

jsonschemas: ## Dump config JSON schemas
python scripts/docs.py dump-jsonschema
git checkout origin/current schema.json
mv schema.json schemas/dipdup-2.0.json

##
6 changes: 5 additions & 1 deletion docs/1.getting-started/1.installation.md
@@ -40,4 +40,8 @@ If you prefer to use other package manager, edit `configs/replay.yaml` file and

## Docker

For Docker installation, please refer to the [Docker](../5.advanced/1.docker.md) page.
We provide a Docker image for DipDup. It is built automatically and published to [Docker Hub](https://hub.docker.com/r/dipdup/dipdup) and [GitHub Container Registry](https://github.com/dipdup-io/dipdup/pkgs/container/dipdup). The image is based on the `python:3.12-slim` image and contains all dependencies required to run DipDup.

Inside the `deploy/` project directory you can find a Dockerfile to build your own image with the project code and Docker Compose/Swarm manifests to deploy it.

See the [Docker](../5.advanced/1.docker.md) page for more details.
64 changes: 57 additions & 7 deletions docs/1.getting-started/3.config.md
@@ -32,22 +32,72 @@ See [Config reference guide](../7.references/2.config.md) for the full list of a
| | `custom` | Mapping of user-defined values; neither typed nor validated |
| | `logging` | Configure logging verbosity |

## Merging multiple files
## Config merging

DipDup allows you to customize the configuration for a specific environment or workflow. It works similarly to docker-compose anchors but only for top-level sections. If you want to override a nested property, you need to recreate a whole top-level section. To merge several DipDup config files, provide the `-c` command-line option multiple times:
DipDup allows you to customize the configuration for specific environments or workflows. It works similarly to Docker Compose anchors but only for top-level sections. If you want to override a nested property, you need to recreate the entire top-level section.
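For instance, to change a single property of the `database` section in an override file, the whole section has to be repeated. A minimal sketch (field values are illustrative):

```yaml
# configs/dipdup.sqlite.yaml — replaces the entire top-level `database`
# section; nested keys are not merged individually
database:
  kind: sqlite
  path: /tmp/dipdup.sqlite
```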

### Environment-specific configurations

You can organize your configuration files in a `configs/` directory within your project.

The base template contains three environment-specific configurations:

```shell
my-dipdup-project/
├── dipdup.yaml # Root config `database` and `hasura` sections
├── configs/
│ ├── dipdup.sqlite.yaml # SQLite configuration override
│ ├── dipdup.compose.yaml # Docker Compose configuration override
│ └── dipdup.swarm.yaml # Docker Swarm configuration override
```

To merge multiple DipDup config files, provide the `-c` command-line option multiple times. The root config must be specified first.

```shell [Terminal]
dipdup -c dipdup.yaml -c configs/dipdup.sqlite.yaml run
# Merge root config with SQLite-specific settings
dipdup -c . -c configs/dipdup.sqlite.yaml run

# or, using a shorthand
# Using a shorthand
dipdup -C sqlite run

# Combining multiple configuration overrides
dipdup -C sqlite -C mainnet run
```

Use `config export`{lang="shell"} and `config env`{lang="shell"} commands to check the resulting config used by DipDup.
::banner{type="note"}
**Tip:** Avoid adding sections related to database and integrations to the root config unless local configuration is required to run the indexer.
::

### Creating custom configurations

Config merging can be easily extended. Some ideas include:

- `dipdup.testnet.yaml` to override `datasources` and `contracts` sections
- `dipdup.debug.yaml` to override `logging` and `sentry` sections
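As an illustrative sketch (the exact values are assumptions), a `dipdup.debug.yaml` override could be as small as:

```yaml
# dipdup.debug.yaml — recreates the top-level `logging` section
# to raise verbosity during local debugging
logging: DEBUG
```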

### Exporting the merged configuration

You can use the `config export` command to see the merged configuration that will be used by DipDup. This is useful for debugging and understanding how different configurations are combined.

```shell [Terminal]
# Export the merged configuration
dipdup -C sqlite config export

# Show all available options
dipdup config export -h
```

## Environment variables

DipDup supports compose-style variable expansion with an optional default value. Use this feature to store sensitive data outside of the configuration file and make your app fully declarative. If a required variable is not set, DipDup will fail with an error. You can use these placeholders anywhere throughout the configuration file.
DipDup supports compose-style variable expansion in `dipdup.yaml` and other configuration files. You can use placeholders in the format `${VARIABLE_NAME}` or `${VARIABLE_NAME:-default_value}` to substitute environment variables at runtime. If a variable is not set and no default value is provided, DipDup will raise an error.

This feature is useful for:

- Keeping sensitive data like API keys or passwords out of your version-controlled configuration files.
- Customizing deployments for different environments (development, staging, production) without altering the base configuration.
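The placeholder syntax mirrors POSIX shell parameter expansion, so you can preview how a default value kicks in directly in your terminal:

```shell
# Same `${VAR:-default}` semantics as the config placeholders
export POSTGRES_HOST=db
echo "${POSTGRES_HOST:-localhost}"   # prints: db

unset POSTGRES_HOST
echo "${POSTGRES_HOST:-localhost}"   # prints: localhost
```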

You can use these placeholders anywhere throughout the configuration file. For example:

```yaml [dipdup.yaml]
database:
@@ -61,7 +111,7 @@ There are multiple ways to pass environment variables to DipDup:
- Export them in the shell before running DipDup
- Create the env file and pass it to DipDup with the `-e` CLI option

See [Environment Variables](../5.advanced/2.environment-variables.md) page for details.
See [Environment Variables](../5.advanced/2.environment-variables.md) for advanced usage, troubleshooting, and built-in variables.

## Contract typenames

5 changes: 2 additions & 3 deletions docs/1.getting-started/4.package.md
@@ -15,7 +15,7 @@ The structure of the resulting package is the following:
| ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| :file_folder: `abi` | Contract ABIs used to generate typeclasses |
| :file_folder: `configs` | Environment-specific configs to merge with the root one |
| :file_folder: `deploy` | Dockerfiles, compose files, and default env variables for each environment |
| :file_folder: `deploy` | Dockerfiles and Compose manifests |
| :file_folder: `graphql` | Custom GraphQL queries to expose with Hasura engine |
| :file_folder: `handlers` | User-defined callbacks to process contract data |
| :file_folder: `hasura` | Arbitrary Hasura metadata to apply during configuration |
@@ -59,7 +59,7 @@ This approach allows working with complex contract types with nested structures

## Config snippets

`config/` directory contains environment-specific config snippets. They don't contain `package`/`spec_version` fields and can be used to extend/override the root config. See [Merging config files](../1.getting-started/3.config.md#merging-multiple-files) for details.
`config/` directory contains environment-specific config snippets. They don't contain `package`/`spec_version` fields and can be used to extend/override the root config. See [Merging config files](../1.getting-started/3.config.md#config-merging) for details.

### Replay file

@@ -94,7 +94,6 @@ The `deploy` directory contains:

- `Dockerfile`, a recipe to build a Docker image with your project. Usually, you won't need to modify it. See comments inside for details.
- Compose files to run your project locally or in the cloud.
- Default env variables for each environment. See [Environment variables](../1.getting-started/3.config.md#environment-variables) for details.

## Nested packages

4 changes: 4 additions & 0 deletions docs/1.getting-started/5.database.md
@@ -128,6 +128,10 @@ For more information visit the official TimescaleDB documentation:

## Migrations

::banner{type="warning"}
Using migrations is generally not recommended. See [F.A.Q.](../12.faq.md#how-to-perform-database-migrations) for details.
::

::banner{type="note"}
The database migrations feature is optional and is disabled by default. To enable it, you need to install `aerich`, which is available in the `[migrations]` optional dependencies group, and set the `DIPDUP_MIGRATIONS` environment variable.
::
38 changes: 19 additions & 19 deletions docs/1.getting-started/6.models.md
@@ -1,23 +1,23 @@
---
title: "Models"
description: "To store indexed data in the database, you need to define models, that are Python classes that represent database tables. DipDup uses a custom ORM to manage models and transactions."
description: "To store indexed data in the database, you need to define models that are Python classes representing database tables. DipDup uses a customized Tortoise ORM to manage models and transactions."
---

# Models

To store indexed data in the database, you need to define models, that are Python classes that represent database tables. DipDup uses a custom ORM to manage models and transactions.
To store indexed data in the database, you need to define models that are Python classes representing database tables. DipDup uses a customized Tortoise ORM to manage models and transactions.

## DipDup ORM
## DipDup

Our storage layer is based on [Tortoise ORM](https://tortoise.github.io/index.html). This library is fast, flexible, and has a syntax familiar to Django users. We have extended it with some useful features like a copy-on-write rollback mechanism, caching, and more. We plan to make things official and fork Tortoise ORM under a new name, but it's not ready yet. For now, let's call our implementation **DipDup ORM**.
Our storage layer is based on [Tortoise ORM](https://tortoise.github.io/index.html). This library is fast, flexible, and has a syntax familiar to Django users. We have extended it with some useful features like a copy-on-write rollback mechanism, caching, and more.

Before we begin to dive into the details, here's an important note:
Before we dive into the details, here's an important note:

::banner{type="warning"}
Please, don't report DipDup ORM issues to the Tortoise ORM bug tracker! We patch it heavily to better suit our needs, so it's not the same library anymore.
Please don't report DipDup issues to the Tortoise ORM bug tracker! We patch it heavily to better suit our needs, so it's not the same library anymore.
::

You can use [Tortoise ORM docs](https://tortoise.github.io/examples.html) as a reference. We will describe only DipDup-specific features here.
Use [Tortoise ORM docs](https://tortoise.github.io/examples.html) as a reference. We will describe only DipDup-specific features here.

## Defining models

Expand All @@ -29,9 +29,9 @@ Here's an example containing all available fields:
{{ #include ../src/dipdup/templates/models.py }}
```

Pay attention to the imports: field and model classes **must** be imported from `dipdup` package instead of `tortoise` to make our extensions work.
Pay attention to the imports: field and model classes **must** be imported from the `dipdup` package instead of `tortoise` to make our extensions work.

Some limitations are applied to model names and fields to avoid ambiguity in GraphQL API:
Some limitations are applied to model names and fields to avoid ambiguity in the GraphQL API:

- Table names must be in snake_case
- Model fields must be in snake_case
Expand All @@ -45,35 +45,35 @@ Now you can use these models in hooks and handlers.
{{ #include ../src/demo_tezos_dao/handlers/on_propose.py }}
```

Visit [Tortose ORM docs](https://tortoise.github.io/examples.html) for more examples.
Visit [Tortoise ORM docs](https://tortoise.github.io/examples.html) for more examples.

## Caching

::banner{type="warning"}
Caching API is experimental and may change in the future.
::

Some models can be cached to avoid unnecessary database queries. Use `CachedModel` base class for this purpose. It's a drop-in replacement for `dipdup.models.Model`, but with additional methods to manage the cache.
Some models can be cached to avoid unnecessary database queries. Use the `CachedModel` base class for this purpose. It's a drop-in replacement for `dipdup.models.Model` with additional methods to manage the cache.

- `cached_get` — get a single object from the cache or the database
- `cached_get_or_none` — the same, but None result is also cached
- `cached_get_or_none` — the same, but a None result is also cached
- `cache` — cache a single object

See `demo_evm_uniswap` project for real-life examples.
See the `demo_evm_uniswap` project for real-life examples.
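To make the idea concrete, here is a self-contained sketch of the `cached_get` pattern. This is NOT DipDup's implementation — `TinyCache`, `_db_get`, and `db_hits` are hypothetical names for illustration only: look up by primary key in memory first, and hit the database only on a miss.

```python
from typing import Any


class TinyCache:
    """Toy illustration of cached_get-style lookup (not DipDup's code)."""

    def __init__(self) -> None:
        self._cache: dict[Any, Any] = {}
        self.db_hits = 0  # counts how often we had to query the "database"

    def _db_get(self, pk: Any) -> Any:
        # Stand-in for a real database query
        self.db_hits += 1
        return {"pk": pk}

    def cached_get(self, pk: Any) -> Any:
        # Serve from the in-memory cache; fall back to the database on a miss
        if pk not in self._cache:
            self._cache[pk] = self._db_get(pk)
        return self._cache[pk]


cache = TinyCache()
cache.cached_get(1)
cache.cached_get(1)  # served from the cache; no second database query
print(cache.db_hits)  # prints: 1
```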

## Differences from Tortoise ORM

This section describes the differences between DipDup and Tortoise ORM. Most likely won't notice them, but it's better to be aware of them.
This section describes the differences between DipDup and Tortoise ORM. You most likely won't notice them, but it's better to be aware of them.

### Fields

We use different column types for some fields to avoid unnecessary reindexing for minor schema changes. Some fields also behave slightly differently for the sake of performance.

- `TextField` can be indexed and used as a primary key. We can afford this since MySQL is not supported.
- `DecimalField` is stored as `DECIMAL(x,y)` both in SQLite and PostgreSQL. In Tortoise ORM it's `VARCHAR(40)` in SQLite for some reason. DipDup ORM doesn't have an upper bound for precision.
- `EnumField` is stored in `TEXT` column in DipDup ORM. There's no need in `VARCHAR` in SQLite and PostgreSQL. You can still add `max_length` directive for additional validation, but it won't affect the database schema.
- `DecimalField` is stored as `DECIMAL(x,y)` both in SQLite and PostgreSQL. In Tortoise ORM it's `VARCHAR(40)` in SQLite for some reason. DipDup doesn't have an upper bound for precision.
- `EnumField` is stored in a `TEXT` column in DipDup. There's no need for `VARCHAR` in SQLite and PostgreSQL. You can still add the `max_length` directive for additional validation, but it won't affect the database schema.

We also have `ArrayField` for native array support in PostgreSQL.
DipDup also provides `ArrayField` for native array support in PostgreSQL.

### Querysets

Expand All @@ -83,8 +83,8 @@ Querysets are not copied between chained calls. Consider the following example:
await dipdup.models.Index.filter().order_by('-level').first()
```

In Tortoise ORM each subsequent call creates a new queryset using an expensive `copy.`copy()` call. In DipDup ORM it's the same queryset, so it's much faster.
In Tortoise ORM, each subsequent call creates a new queryset using an expensive `copy.copy()` call. In DipDup, it's the same queryset, so it's much faster.

### Transactions

DipDup manages transactions automatically for indexes opening one for each level. You can't open another one. Entering a transaction context manually with `in_transaction()` will return the same active transaction. For hooks, there's the `atomic` flag in the configuration.
DipDup manages transactions automatically for indexes, opening one for each level. You cannot open another one. Entering a transaction context manually with `in_transaction()` will return the same active transaction. For hooks, there's the `atomic` flag in the configuration.
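The re-entrancy behavior can be pictured with a toy sketch (hypothetical names, not DipDup's code): a manager that opens one transaction per level and hands back the already-active one on nested entry.

```python
class Transaction:
    """Minimal stand-in for a level-scoped database transaction."""

    def __init__(self, level: int) -> None:
        self.level = level


class TransactionManager:
    """Toy illustration: nested begin() calls reuse the active transaction."""

    def __init__(self) -> None:
        self._active: Transaction | None = None

    def begin(self, level: int) -> Transaction:
        # If a transaction is already open, return it instead of
        # opening a nested one
        if self._active is None:
            self._active = Transaction(level)
        return self._active


manager = TransactionManager()
outer = manager.begin(100)
inner = manager.begin(100)  # same object, not a new transaction
print(outer is inner)  # prints: True
```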
6 changes: 5 additions & 1 deletion docs/10.supported-networks/0.overview.md
@@ -37,7 +37,11 @@ datasources:
ws_url: ${NODE_WS_URL:-wss://eth-mainnet.g.alchemy.com/v2}/${NODE_API_KEY:-''}
```

To configure datasources for other networks, you need to change URLs and API keys. You can do it in the config file directly, but it's better to use environment variables. Check the `deploy/.env.default` file in your project directory; it contains all the variables used in config.
To configure datasources for other networks, you need to change URLs and API keys. You can do it in the config file directly, but it's better to use environment variables. Run the following command to create a `.env` file with all the necessary variables:

```shell [Terminal]
dipdup config env -o deploy/.env
```

[evm.subsquid](../3.datasources/1.evm_subsquid.md) - Subsquid Network is the main source of historical data for EVM-compatible networks. It's free and available for many networks.
