1 change: 0 additions & 1 deletion .github/workflows/test.yml
@@ -37,7 +37,6 @@ jobs:
         uses: actions/setup-python@v4
         with:
           python-version: '3.12'
-          cache: 'pip'

       - name: Run install
         run: make install
10 changes: 3 additions & 7 deletions docs/0.quickstart-substrate.md
@@ -64,12 +64,9 @@ DipDup will create a Python package `demo_substrate_events` with everything you

 ```shell [Terminal]
 $ dipdup package tree
-demo_substrate_events [/home/droserasprout/git/dipdup/src/demo_substrate_events]
+demo_substrate_events [.]
 ├── abi
-│   ├── assethub/v1000000.json
-│   ├── assethub/v1001002.json
-│   ├── ...
-│   └── assethub/v9430.json
+│   └── assethub/v601.json
 ├── configs
 │   ├── dipdup.compose.yaml
 │   ├── dipdup.sqlite.yaml
@@ -98,8 +95,7 @@ demo_substrate_events [/home/droserasprout/git/dipdup/src/demo_substrate_events]
 ├── sql
 ├── types
 │   ├── assethub/substrate_events/assets_transferred/__init__.py
-│   ├── assethub/substrate_events/assets_transferred/v601.py
-│   └── assethub/substrate_events/assets_transferred/v700.py
+│   └── assethub/substrate_events/assets_transferred/v601.py
 └── py.typed
 ```
2 changes: 1 addition & 1 deletion docs/10.supported-networks/2.astar.md
@@ -31,7 +31,7 @@ Explorer: [Blockscout](https://astar-zkevm.explorer.startale.com/)

 ### Astar zKyoto

-Explorer: [Blockscout](https://zkyoto.explorer.startale.com/)
+Explorer: [Blockscout](https://zkyoto.explorer.startale.com/) (🔴 404)

 | datasource | status | URLs |
 | -----------------:|:------------ | ----------------------------------------------------- |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/23.hokum.md
@@ -9,7 +9,7 @@ description: "Hokum network support"

 {{ #include 10.supported-networks/_intro.md }}

-Explorer: [Blockscout](https://explorer.hokum.gg/)
+Explorer: [Blockscout](https://explorer.hokum.gg/) (🔴 408)

 | datasource | status | URLs |
 | -----------------:|:------------- | --------------------------- |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/24.kakarot.md
@@ -9,7 +9,7 @@ description: "Kakarot network support"

 {{ #include 10.supported-networks/_intro.md }}

-See step-by-step instructions on how to get started in [this guide](https://docs.kakarot.org/ecosystem/data-indexers/dipdup)
+See step-by-step instructions on how to get started in [this guide](https://docs.kakarot.org/starknet/ecosystem/data-indexers/dipdup)

 ## Kakarot Sepolia

14 changes: 4 additions & 10 deletions docs/10.supported-networks/37.polygon.md
@@ -19,18 +19,10 @@ Explorer: [Polygonscan](https://polygonscan.com)
 | **evm.etherscan** | 🟢 works | `https://api.polygonscan.com/api` |
 | **evm.node** | 🟢 works | `https://polygon-mainnet.g.alchemy.com/v2` <br> `wss://polygon-mainnet.g.alchemy.com/v2` |

-### Polygon Mumbai
-
-Explorer: [Polygonscan](https://mumbai.polygonscan.com/)
-
-| datasource | status | URLs |
-| -----------------:|:------------- | ------------------------------------------------------- |
-| **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-mumbai` |
-| **evm.etherscan** | 🤔 not tested | `https://api-testnet.polygonscan.com/api` |
-| **evm.node** | 🤔 not tested | |
-
 ### Polygon Amoy Testnet

+Explorer: [Polygonscan](https://amoy.polygonscan.com)
+
 | datasource | status | URLs |
 | -----------------:|:------------- | ------------------------------------------------------------- |
 | **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-amoy-testnet` |
@@ -59,6 +51,8 @@ Explorer: [Polygonscan](https://testnet-zkevm.polygonscan.com/)

 ### Polygon zkEVM Cardona Testnet

+Explorer: [Polygonscan](https://cardona-zkevm.polygonscan.com/)
+
 | datasource | status | URLs |
 | -----------------:|:------------- | ---------------------------------------------------------------------- |
 | **evm.subsquid** | 🤔 not tested | `https://v2.archive.subsquid.io/network/polygon-zkevm-cardona-testnet` |
2 changes: 1 addition & 1 deletion docs/10.supported-networks/43.scale.md
@@ -11,7 +11,7 @@ description: "Scale network support"

 ### Skale Nebula

-Explorers: [Blockscout](https://green-giddy-denebola.explorer.mainnet.skalenodes.com/), [Skalescan](https://skalescan.com/)
+Explorers: [Blockscout](https://green-giddy-denebola.explorer.mainnet.skalenodes.com/), [Skalescan](https://skalescan.com/) (🔴 408)

 | datasource | status | URLs |
 | -----------------:|:-------- | ------------------------------------------------------------------------------------------------------------------------ |
12 changes: 6 additions & 6 deletions docs/4.graphql/1.overview.md
@@ -5,7 +5,7 @@ description: "DipDup provides seamless integration with Hasura GraphQL Engine to

 # Overview

-DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/latest/graphql/core/index.html) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.
+DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/2.0/index/) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.

 Before starting to do client integration, it's good to know the specifics of Hasura GraphQL protocol implementation and the general state of the GQL ecosystem.

@@ -18,17 +18,17 @@ By default, Hasura generates three types of queries for each table in your schem
 - Aggregation query (can be disabled in config)

 All the GQL features such as fragments, variables, aliases, and directives are supported, as well as batching.
-Read more in [Hasura docs](https://hasura.io/docs/latest/graphql/core/databases/postgres/queries/index.html).
+Read more in [Hasura docs](https://hasura.io/docs/2.0/queries/postgres/index/).

 It's important to understand that a GraphQL query is just a [POST request](https://graphql.org/graphql-js/graphql-clients/) with JSON payload, and in some instances, you don't need a complicated library to talk to your backend.

 ### Pagination

-By default, Hasura does not restrict the number of rows returned per request, which could lead to abuses and a heavy load on your server. You can set up limits in the configuration file. See [hasura page](../4.graphql/2.hasura.md?limit-number-of-rows). But then, you will face the need to [paginate](https://hasura.io/docs/latest/graphql/core/databases/postgres/queries/pagination.html) over the items if the response does not fit the limits.
+By default, Hasura does not restrict the number of rows returned per request, which could lead to abuses and a heavy load on your server. You can set up limits in the configuration file. See [hasura page](../4.graphql/2.hasura.md?limit-number-of-rows). But then, you will face the need to [paginate](https://hasura.io/docs/2.0/queries/postgres/pagination/) over the items if the response does not fit the limits.

 ## Subscriptions

-From [Hasura documentation](https://hasura.io/docs/latest/graphql/core/databases/postgres/subscriptions/index.html):
+From [Hasura documentation](https://hasura.io/docs/2.0/subscriptions/postgres/index/):

 Hasura GraphQL engine subscriptions are **live queries**, i.e., a subscription will return the latest result of the query and not necessarily all the individual events leading up to it.

@@ -52,6 +52,6 @@ Please note, that [subscriptions-transport-ws](https://github.com/apollographql/

 The purpose of DipDup is to create indexers, which means you can consistently reproduce the state as long as data sources are accessible. It makes your backend "stateless", meaning tolerant to data loss.

-However, you might need to introduce a non-recoverable state and mix indexed and user-generated content in some cases. DipDup allows marking these UGC tables "immune", protecting them from being wiped. In addition to that, you will need to set up [Hasura Auth](https://hasura.io/docs/latest/graphql/core/auth/index.html) and adjust write permissions for the tables (by default, they are read-only).
+However, you might need to introduce a non-recoverable state and mix indexed and user-generated content in some cases. DipDup allows marking these UGC tables "immune", protecting them from being wiped. In addition to that, you will need to set up [Hasura Auth](https://hasura.io/docs/2.0/auth/overview/) and adjust write permissions for the tables (by default, they are read-only).

-Lastly, you will need to execute GQL mutations to modify the state from the client side. [Read more](https://hasura.io/docs/latest/graphql/core/databases/postgres/mutations/index.html) about how to do that with Hasura.
+Lastly, you will need to execute GQL mutations to modify the state from the client side. [Read more](https://hasura.io/docs/2.0/mutations/postgres/index/) about how to do that with Hasura.
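> Note: to make the overview's point concrete that a GraphQL query is just a POST request with a JSON payload, here is a minimal sketch. The endpoint URL and the `token_holder` table are assumptions for illustration, not part of the docs; the `limit`/`offset` variables show the offset pagination pattern the page recommends.

```python
# Minimal sketch: querying a Hasura GraphQL endpoint with a plain POST request.
# The endpoint and the `token_holder` table are hypothetical.
import requests

HASURA_URL = 'http://127.0.0.1:8080/v1/graphql'  # assumption: local Hasura instance

# `limit`/`offset` variables implement the offset pagination described above.
QUERY = """
query GetHolders($limit: Int!, $offset: Int!) {
  token_holder(limit: $limit, offset: $offset, order_by: {balance: desc}) {
    address
    balance
  }
}
"""

response = requests.post(
    HASURA_URL,
    json={'query': QUERY, 'variables': {'limit': 10, 'offset': 0}},
    timeout=10,
)
response.raise_for_status()
print(response.json()['data']['token_holder'])
```

For one-shot queries like this, plain HTTP is often enough; a WebSocket client is only needed once you move to subscriptions.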
6 changes: 3 additions & 3 deletions docs/4.graphql/2.hasura.md
@@ -5,7 +5,7 @@ description: "DipDup can connect to any Hasura GraphQL Engine instance and confi

 # Hasura GraphQL

-DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/latest/graphql/core/index.html) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.
+DipDup provides seamless integration with [Hasura GraphQL Engine](https://hasura.io/docs/2.0/index/) to expose your data to the client side. It's a powerful tool that allows you to build a GraphQL API on top of your database with minimal effort. It also provides a subscription mechanism to get live updates from the backend. If you don't plan to use GraphQL, you can skip this section.

 DipDup can connect to any Hasura instance, cloud or self-hosted, and configure it to expose your data via GraphQL API. All you need is to enable this integration in the config file:

@@ -15,7 +15,7 @@ hasura:
   admin_secret: ${HASURA_SECRET:-changeme}
 ```

-DipDup will generate Hasura metadata based on your DB schema and apply it using [Metadata API](https://hasura.io/docs/latest/graphql/core/api-reference/metadata-api/index.html).
+DipDup will generate Hasura metadata based on your DB schema and apply it using [Metadata API](https://hasura.io/docs/2.0/api-reference/metadata-api/index/).

 Hasura metadata is all about data representation in GraphQL API. The structure of the database itself is managed solely by DipDup ORM.

@@ -78,6 +78,6 @@ Remember that "camelcasing" is a separate stage performed after all tables are r

 There are some cases where you want to apply custom modifications to the Hasura metadata. For example, assume that your database schema has a view that contains data from the main table, in which case you cannot set a foreign key between them. Then you can place files with a .json extension in the `hasura` directory of your project with the content in Hasura query format, and DipDup will execute them in alphabetical order of file names when the indexing is complete.

-The format of the queries can be found in the [Metadata API](https://hasura.io/docs/latest/api-reference/metadata-api/index/) documentation.
+The format of the queries can be found in the [Metadata API](https://hasura.io/docs/2.0/api-reference/metadata-api/index/) documentation.

 Feature flag `allow_inconsistent_metadata` set in Hasura configuration section allows users to modify the behavior of the requests error handling. By default, this value is `False`.
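> Note: as a hedged illustration of the custom-metadata mechanism described above, this sketch applies one query in Hasura query format to the `/v1/metadata` endpoint by hand, mirroring what DipDup does for `hasura/*.json` files. The file name, table, column, and relationship names are hypothetical.

```python
# Sketch: apply one custom metadata query to Hasura manually.
# All table/column/relationship names below are hypothetical.
import json
import requests

HASURA_METADATA_URL = 'http://127.0.0.1:8080/v1/metadata'  # assumption: local Hasura
ADMIN_SECRET = 'changeme'

# Example contents of `hasura/01_trade_token.json`: create an object
# relationship between two tracked tables.
query = {
    'type': 'pg_create_object_relationship',
    'args': {
        'source': 'default',
        'table': {'schema': 'public', 'name': 'trade'},
        'name': 'token',
        'using': {'foreign_key_constraint_on': 'token_id'},
    },
}

response = requests.post(
    HASURA_METADATA_URL,
    json=query,
    headers={'X-Hasura-Admin-Secret': ADMIN_SECRET},
    timeout=10,
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2))
```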
6 changes: 0 additions & 6 deletions docs/8.examples/2.in-production.md
@@ -49,12 +49,6 @@ StakeNow.fi gives you a 360° view of your investments and lets you manage your

 Mavryk is a DAO-operated financial ecosystem that lets users borrow and earn on their terms while participating in the governance of the platform.

-## Vortex
-
-[Homepage](https://app.vortex.network/) (🔴 404)
-
-Vortez is an all-in-one decentralized finance protocol on Tezos blockchain built by Smartlink. Vortex uses DipDup indexer to track AMM swaps, pools, positions, as well as yield farms, and NFT collections.
-
 ## Versum

 [Homepage](https://versum.xyz/)
1 change: 0 additions & 1 deletion docs/9.release-notes/1.v8.2.md
@@ -44,6 +44,5 @@ DipDup 7.5, our previous major release, has reached end-of-life. We recommend up

 Going forward, we'll focus on supporting only the latest major version to reduce maintenance overhead. Any breaking changes will be introduced gradually and can be enabled using the `DIPDUP_NEXT` environment variable.

-
 {{ #include 9.release-notes/_8.2_changelog.md }}
 {{ #include 9.release-notes/_footer.md }}
2 changes: 1 addition & 1 deletion docs/9.release-notes/2.v8.1.md
@@ -7,7 +7,7 @@ description: DipDup 8.1 release notes

 # Release Notes: 8.1

-This release was created during the [ODHack 8.0](https://app.onlydust.com/hackathons/odhack-80) event by the following participants:
+This release was created during the [ODHack 8.0](https://app.onlydust.com/osw/odhack-80/overview) event by the following participants:

 @bigherc18 contributed support for database migrations using the Aerich tool. This optional integration allows to manage database migrations with the `dipdup schema` commands. See the [Migrations](../1.getting-started/5.database.md#migrations) section to learn to enable and use this integration.

2 changes: 1 addition & 1 deletion docs/9.release-notes/_8.2_changelog.md
@@ -15,8 +15,8 @@
 - cli: Fixed help message on `CallbackError` reporting `batch` handler instead of actual one.
 - database: Don't process internal models twice if imported from the project.
 - evm.subsquid: Fixed event/transaction model deserialization.
-- starknet: Process all data types correctly.
 - starknet.node: Fetch missing block timestamp and txn id when synching with node.
+- starknet: Process all data types correctly.
 - substrate.subsquid: Fixed parsing for `__kind` junctions with multiple keys.
 - substrate.subsquid: Fixed parsing nested structures in response.

24 changes: 12 additions & 12 deletions requirements.txt
@@ -172,9 +172,9 @@ cytoolz==1.0.1 ; implementation_name == 'cpython' \
     --hash=sha256:c8231b9abbd8e368e036f4cc2e16902c9482d4cf9e02a6147ed0e9a3cd4a9ab0 \
     --hash=sha256:fb988c333f05ee30ad4693fe4da55d95ec0bb05775d2b60191236493ea2e01f9 \
     --hash=sha256:fcb8f7d0d65db1269022e7e0428471edee8c937bc288ebdcb72f13eaa67c2fe4
-datamodel-code-generator==0.27.2 \
-    --hash=sha256:1a7655f5fd3a61329b57534904f5c40dd850850e420696fd946ec7a4f59c32b8 \
-    --hash=sha256:efcbfbe6a1488d3411fc588b1ce1af5f854f5107810b1cc9026a6d6333a7c4d8
+datamodel-code-generator==0.27.3 \
+    --hash=sha256:01e928c00b800aec8d2ee77b5d4b47e1bc159a3a1c32f0f405df0a442d9ab5e7 \
+    --hash=sha256:ddef49e66e2b90a4c9b238f6ce42dc5a2a23f6ab1b8370eaca08576777921e43
 dictdiffer==0.9.0 \
     --hash=sha256:17bacf5fbfe613ccf1b6d512bd766e6b21fb798822a133aa86098b8ac9997578 \
     --hash=sha256:442bfc693cfcadaf46674575d2eba1c53b42f5e404218ca2c2ff549f2df56595
@@ -190,9 +190,9 @@ eth-account==0.13.3 \
 eth-hash==0.7.1 \
     --hash=sha256:0fb1add2adf99ef28883fd6228eb447ef519ea72933535ad1a0b28c6f65f868a \
     --hash=sha256:d2411a403a0b0a62e8247b4117932d900ffb4c8c64b15f92620547ca5ce46be5
-eth-keyfile==0.9.0 \
-    --hash=sha256:45d3513b6433ad885370225ba0429ed26493ba23589c5b1ca5da024765020fef \
-    --hash=sha256:8621c35e83cbc05909d2f23dbb8a87633918733caea553ae0e298f6a06291526
+eth-keyfile==0.9.1 \
+    --hash=sha256:9789c3b4fa0bb6e2616cdc2bdd71b8755b42947d78ef1e900a0149480fabb5c2 \
+    --hash=sha256:c7a8bc6af4527d1ab2eb1d1b949d59925252e17663eaf90087da121327b51df6
 eth-keys==0.6.1 \
     --hash=sha256:7deae4cd56e862e099ec58b78176232b931c4ea5ecded2f50c7b1ccbc10c24cf \
     --hash=sha256:a43e263cbcabfd62fa769168efc6c27b1f5603040e4de22bb84d12567e4fd962
@@ -499,9 +499,9 @@ ruamel-yaml-clib==0.2.12 ; platform_python_implementation == 'CPython' \
 scalecodec==1.2.11 \
     --hash=sha256:99a2cdbfccdcaf22bd86b86da55a730a2855514ad2309faef4a4a93ac6cbeb8d \
     --hash=sha256:d15c94965f617caa25096f83a45f5f73031d05e6ee08d6039969f0a64fc35de1
-sentry-sdk==2.20.0 \
-    --hash=sha256:afa82713a92facf847df3c6f63cec71eb488d826a50965def3d7722aa6f0fdab \
-    --hash=sha256:c359a1edf950eb5e80cffd7d9111f3dbeef57994cb4415df37d39fda2cf22364
+sentry-sdk==2.21.0 \
+    --hash=sha256:7623cfa9e2c8150948a81ca253b8e2bfe4ce0b96ab12f8cd78e3ac9c490fd92f \
+    --hash=sha256:a6d38e0fb35edda191acf80b188ec713c863aaa5ad8d5798decb8671d02077b6
 six==1.17.0 \
     --hash=sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274 \
     --hash=sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81
@@ -545,9 +545,9 @@ typing-inspect==0.9.0 \
 tzdata==2025.1 ; sys_platform == 'win32' \
     --hash=sha256:24894909e88cdb28bd1636c6887801df64cb485bd593f2fd83ef29075a81d694 \
     --hash=sha256:7e127113816800496f027041c570f50bcd464a020098a3b6b199517772303639
-tzlocal==5.2 \
-    --hash=sha256:49816ef2fe65ea8ac19d19aa7a1ae0551c834303d5014c6d5a62e4cbda8047b8 \
-    --hash=sha256:8d399205578f1a9342816409cc1e46a93ebd5755e39ea2d85334bea911bf0e6e
+tzlocal==5.3 \
+    --hash=sha256:2fafbfc07e9d8b49ade18f898d6bcd37ae88ce3ad6486842a2e4f03af68323d2 \
+    --hash=sha256:3814135a1bb29763c6e4f08fd6e41dbb435c7a60bfbb03270211bcc537187d8c
 urllib3==2.3.0 \
     --hash=sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df \
     --hash=sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d
2 changes: 1 addition & 1 deletion scripts/docs.py
@@ -465,7 +465,7 @@ def check_links(source: Path, http: bool) -> None:
         green_echo(f'{i+1}/{len(http_links)}: checking link `{link}`')
         try:
             res = subprocess.run(
-                ('curl', '-s', '-L', '-o', '/dev/null', '-w', '%{http_code}', link),
+                ('curl', '-s', '-L', '-o', '/dev/null', '-w', '%{http_code}', '--max-time', '10', link),
                 check=True,
                 capture_output=True,
            )
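> Note: the hunk above caps each `curl` call at 10 seconds so a hanging server can't stall the whole link check. For reference, a rough pure-Python equivalent of that invocation — a sketch under stated assumptions, not the actual `scripts/docs.py` implementation:

```python
# Sketch: follow redirects and return the final HTTP status with a timeout,
# roughly `curl -s -L -o /dev/null -w '%{http_code}' --max-time 10 <link>`.
import urllib.error
import urllib.request


def check_link(link: str, timeout: float = 10.0) -> int:
    request = urllib.request.Request(link, method='GET')
    try:
        # urlopen follows redirects by default, like `curl -L`.
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status
    except urllib.error.HTTPError as e:
        return e.code  # 4xx/5xx responses still carry a status code


print(check_link('https://dipdup.io'))
```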