Merged
9 changes: 7 additions & 2 deletions CHANGELOG.md
@@ -6,7 +6,11 @@ The format is based on [Keep a Changelog], and this project adheres to [Semantic

Releases prior to 7.0 has been removed from this file to declutter search results; see the [archived copy](https://github.com/dipdup-io/dipdup/blob/8.0.0b5/CHANGELOG.md) for the full list.

-## [Unreleased]
+## [8.2.2] - 2025-03-13
+
+### Added
+
+- starknet.node: Added `fetch_block_headers` option to datasource config.
+
### Fixed

@@ -637,7 +641,8 @@ Releases prior to 7.0 has been removed from this file to declutter search result
[semantic versioning]: https://semver.org/spec/v2.0.0.html

<!-- Versions -->
-[Unreleased]: https://github.com/dipdup-io/dipdup/compare/8.2.1...HEAD
+[Unreleased]: https://github.com/dipdup-io/dipdup/compare/8.2.2...HEAD
+[8.2.2]: https://github.com/dipdup-io/dipdup/compare/8.2.1...8.2.2
[8.2.1]: https://github.com/dipdup-io/dipdup/compare/8.2.0...8.2.1
[8.2.0]: https://github.com/dipdup-io/dipdup/compare/8.2.0rc1...8.2.0
[8.2.0rc1]: https://github.com/dipdup-io/dipdup/compare/8.1.4...8.2.0rc1
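For context on the `fetch_block_headers` entry above: the option is toggled on a `starknet.node` datasource in `dipdup.yaml`. A hedged sketch — the datasource name and URL below are placeholders, and the exact schema is defined by DipDup's config reference, not this note:

```yaml
datasources:
  # Hypothetical datasource name; `kind` matches the changelog entry.
  starknet_node:
    kind: starknet.node
    url: https://example.com/starknet-mainnet
    fetch_block_headers: true  # option added in 8.2.2
```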
4 changes: 2 additions & 2 deletions Makefile
@@ -18,10 +18,10 @@ help: ## Show this help (default)
##

install: ## Install dependencies
-uv sync --all-extras --all-groups --locked
+uv sync --all-extras --all-groups --link-mode symlink --locked

update: ## Update dependencies and dump requirements.txt
-uv sync -U --all-extras --all-groups
+uv sync -U --all-extras --all-groups --link-mode symlink
uv export --all-extras --locked --no-group lint --no-group test --no-group docs --no-group perf > requirements.txt


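The new `--link-mode symlink` flag above tells uv to symlink packages from its global cache into the project virtualenv instead of copying or hardlinking them. If you'd rather not repeat the flag on every invocation, uv also reads a `link-mode` setting; a sketch, assuming uv's `[tool.uv]` table in `pyproject.toml`:

```toml
# Project-wide equivalent of the flag used in the Makefile above.
# `link-mode` accepts "clone", "copy", "hardlink", or "symlink".
[tool.uv]
link-mode = "symlink"
```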
101 changes: 7 additions & 94 deletions docs/0.quickstart-evm.md
@@ -7,61 +7,25 @@ network: "ethereum"

# Quickstart

-This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.
-
-A selective blockchain indexer is an application that extracts and organizes specific blockchain data from multiple data sources, rather than processing all blockchain data. It allows users to index only relevant entities, reducing storage and computational requirements compared to full node indexing, and query data more efficiently for specific use cases. Think of it as a customizable filter that captures and stores only the blockchain data you need, making data retrieval faster and more resource-efficient. DipDup is a framework that helps you implement such an indexer.
+{{ #include _quickstart_01_intro.md }}

Let's create an indexer for the [USDt token contract](https://etherscan.io/address/0xdac17f958d2ee523a2206206994597c13d831ec7). Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.

-## Install DipDup
-
-A modern Linux/macOS distribution with Python 3.12 installed is required to run DipDup.
-
-The recommended way to install DipDup CLI is [pipx](https://pipx.pypa.io/stable/). We also provide a convenient helper script that installs all necessary tools. Run the following command in your terminal:
-
-{{ #include _curl-spell.md }}
-
-See the [Installation](../docs/1.getting-started/1.installation.md) page for all options.
-
-## Create a project
-
-DipDup CLI has a built-in project generator. Run the following command in your terminal:
-
-```shell [Terminal]
-dipdup new
-```
+{{ #include _quickstart_02_installation.md }}

Choose `From template`, then `EVM` network and `demo_evm_events` template.

::banner{type="note"}
Want to skip a tutorial and start from scratch? Choose `Blank` at the first step instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
::

Follow the instructions; the project will be created in the new directory.

-## Write a configuration file
-
-In the project root, you'll find a file named `dipdup.yaml`. It's the main configuration file of your indexer. We will discuss it in detail in the [Config](../docs/1.getting-started/3.config.md) section; now it has the following content:
+{{ #include _quickstart_03_config.md }}

```yaml [dipdup.yaml]
{{ #include ../src/demo_evm_events/dipdup.yaml }}
```

-## Generate types and stubs
-
-Now it's time to generate typeclasses and callback stubs based on definitions from config. Examples below use `demo_evm_events` as a package name; yours may differ.
-
-Run the following command:
-
-```shell [Terminal]
-dipdup init
-```
-
-DipDup will create a Python package `demo_evm_events` with everything you need to start writing your indexer. Use `package tree` command to see the generated structure:
+{{ #include _quickstart_04_codegen.md }}

```shell [Terminal]
$ dipdup package tree
-demo_evm_events [.]
+dipdup_indexer [.]
├── abi
│ └── eth_usdt/abi.json
├── configs
@@ -94,23 +58,7 @@ demo_evm_events [.]
└── py.typed
```

-That's a lot of files and directories! But don't worry, we will need only `models` and `handlers` sections in this guide.
-
-## Define data models
-
-DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use modified [Tortoise ORM](https://tortoise.github.io/) library as an abstraction layer.
-
-First, you need to define a model class. DipDup uses model definitions both for database schema and autogenerated GraphQL API. Our schema will consist of a single model `Holder` with the following fields:
-
-| | |
-| ----------- | ----------------------------------- |
-| `address` | account address |
-| `balance` | token amount held by the account |
-| `turnover` | total amount of transfer/mint calls |
-| `tx_count` | number of transfers/mints |
-| `last_seen` | time of the last transfer/mint |
-
-Here's how to define this model in DipDup:
+{{ #include _quickstart_05_models.md }}

```python [models/__init__.py]
{{ #include ../src/demo_evm_events/models/__init__.py }}
@@ -128,39 +76,4 @@ Our task is to index all the balance updates. Put some code to the `on_transfer`
{{ #include ../src/demo_evm_events/handlers/on_transfer.py }}
```

-And that's all! We can run the indexer now.
-
-## Next steps
-
-Run the indexer in memory:
-
-```shell
-dipdup run
-```
-
-Store data in SQLite database (defaults to /tmp, set `SQLITE_PATH` env variable):
-
-```shell
-dipdup -c . -c configs/dipdup.sqlite.yaml run
-```
-
-Or spawn a Compose stack with PostgreSQL and Hasura:
-
-```shell
-cd deploy
-cp .env.default .env
-# Edit .env file before running
-docker-compose up
-```
-
-DipDup will fetch all the historical data and then switch to realtime updates. You can check the progress in the logs.
-
-If you use SQLite, run this query to check the data:
-
-```bash
-sqlite3 /tmp/demo_evm_events.sqlite 'SELECT * FROM holder LIMIT 10'
-```
-
-If you run a Compose stack, open `http://127.0.0.1:8080` in your browser to see the Hasura console (an exposed port may differ). You can use it to explore the database and build GraphQL queries.
-
-Congratulations! You've just created your first DipDup indexer. Proceed to the Getting Started section to learn more about DipDup configuration and features.
+{{ #include _quickstart_06_next_steps.md }}
101 changes: 7 additions & 94 deletions docs/0.quickstart-starknet.md
@@ -7,61 +7,25 @@ network: "starknet"

# Quickstart

-This page will guide you through the steps to get your first selective indexer up and running in a few minutes without getting too deep into the details.
-
-A selective blockchain indexer is an application that extracts and organizes specific blockchain data from multiple data sources, rather than processing all blockchain data. It allows users to index only relevant entities, reducing storage and computational requirements compared to full node indexing, and query data more efficiently for specific use cases. Think of it as a customizable filter that captures and stores only the blockchain data you need, making data retrieval faster and more resource-efficient. DipDup is a framework that helps you implement such an indexer.
+{{ #include _quickstart_01_intro.md }}

Let's create an indexer for the [USDt token contract](https://starkscan.co/contract/0x68f5c6a61780768455de69077e07e89787839bf8166decfbf92b645209c0fb8). Our goal is to save all token transfers to the database and then calculate some statistics of its holders' activity.

-## Install DipDup
-
-A modern Linux/macOS distribution with Python 3.12 installed is required to run DipDup.
-
-The recommended way to install DipDup CLI is [pipx](https://pipx.pypa.io/stable/). We also provide a convenient helper script that installs all necessary tools. Run the following command in your terminal:
-
-{{ #include _curl-spell.md }}
-
-See the [Installation](../docs/1.getting-started/1.installation.md) page for all options.
-
-## Create a project
-
-DipDup CLI has a built-in project generator. Run the following command in your terminal:
-
-```shell [Terminal]
-dipdup new
-```
+{{ #include _quickstart_02_installation.md }}

Choose `From template`, then `Starknet` network and `demo_starknet_events` template.

::banner{type="note"}
Want to skip a tutorial and start from scratch? Choose `Blank` at the first step instead and proceed to the [Config](../docs/1.getting-started/3.config.md) section.
::

Follow the instructions; the project will be created in the new directory.

-## Write a configuration file
-
-In the project root, you'll find a file named `dipdup.yaml`. It's the main configuration file of your indexer. We will discuss it in detail in the [Config](../docs/1.getting-started/3.config.md) section; now it has the following content:
+{{ #include _quickstart_03_config.md }}

```yaml [dipdup.yaml]
{{ #include ../src/demo_starknet_events/dipdup.yaml }}
```

-## Generate types and stubs
-
-Now it's time to generate typeclasses and callback stubs based on definitions from config. Examples below use `demo_starknet_events` as a package name; yours may differ.
-
-Run the following command:
-
-```shell [Terminal]
-dipdup init
-```
-
-DipDup will create a Python package `demo_starknet_events` with everything you need to start writing your indexer. Use `package tree` command to see the generated structure:
+{{ #include _quickstart_04_codegen.md }}

```shell [Terminal]
$ dipdup package tree
-demo_starknet_events [.]
+dipdup_indexer [.]
├── abi
│ └── stark_usdt/cairo_abi.json
├── configs
@@ -94,23 +58,7 @@ demo_starknet_events [.]
└── py.typed
```

-That's a lot of files and directories! But don't worry, we will need only `models` and `handlers` sections in this guide.
-
-## Define data models
-
-DipDup supports storing data in SQLite, PostgreSQL and TimescaleDB databases. We use modified [Tortoise ORM](https://tortoise.github.io/) library as an abstraction layer.
-
-First, you need to define a model class. DipDup uses model definitions both for database schema and autogenerated GraphQL API. Our schema will consist of a single model `Holder` with the following fields:
-
-| | |
-| ----------- | ----------------------------------- |
-| `address` | account address |
-| `balance` | token amount held by the account |
-| `turnover` | total amount of transfer/mint calls |
-| `tx_count` | number of transfers/mints |
-| `last_seen` | time of the last transfer/mint |
-
-Here's how to define this model in DipDup:
+{{ #include _quickstart_05_models.md }}

```python [models/__init__.py]
{{ #include ../src/demo_starknet_events/models/__init__.py }}
@@ -128,39 +76,4 @@ Our task is to index all the balance updates. Put some code to the `on_transfer`
{{ #include ../src/demo_starknet_events/handlers/on_transfer.py }}
```

-And that's all! We can run the indexer now.
-
-## Next steps
-
-Run the indexer in memory:
-
-```shell
-dipdup run
-```
-
-Store data in SQLite database (defaults to /tmp, set `SQLITE_PATH` env variable):
-
-```shell
-dipdup -c . -c configs/dipdup.sqlite.yaml run
-```
-
-Or spawn a Compose stack with PostgreSQL and Hasura:
-
-```shell
-cd deploy
-cp .env.default .env
-# Edit .env file before running
-docker-compose up
-```
-
-DipDup will fetch all the historical data and then switch to realtime updates. You can check the progress in the logs.
-
-If you use SQLite, run this query to check the data:
-
-```bash
-sqlite3 /tmp/demo_starknet_events.sqlite 'SELECT * FROM holder LIMIT 10'
-```
-
-If you run a Compose stack, open `http://127.0.0.1:8080` in your browser to see the Hasura console (an exposed port may differ). You can use it to explore the database and build GraphQL queries.
-
-Congratulations! You've just created your first DipDup indexer. Proceed to the Getting Started section to learn more about DipDup configuration and features.
+{{ #include _quickstart_06_next_steps.md }}