
Commit 3105756

Add a note about using dimensions (#4691)
* note about dimensions
* latest updates
1 parent: c5a2abf

File tree: 5 files changed (+9, −25 lines)
`_partials/_dimensions_info.md` (1 addition, 1 deletion)

````diff
@@ -8,7 +8,7 @@ to an existing hypertable.
 #### Samples
 
 Hypertables must always have a primary range dimension, followed by an arbitrary number of additional
-dimensions that can be either range or hash, Typically this is just one hash. For example:
+dimensions that can be either range or hash. Typically, this is just one hash. For example:
 
 ```sql
 SELECT add_dimension('conditions', by_range('time'));
````
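The corrected sentence describes a range-plus-hash pattern. As a minimal sketch of that pattern, assuming a hypothetical `conditions` hypertable with `time` and `device_id` columns (the `device_id` column and the partition count of 4 are illustrative, not taken from this commit):

```sql
-- Primary range dimension on the time column (always required).
SELECT create_hypertable('conditions', by_range('time'));

-- Optional additional dimension; "typically just one hash".
-- `device_id` and 4 partitions are illustrative values.
SELECT add_dimension('conditions', by_hash('device_id', 4));
```

Note that per the `create_hypertable.md` change in this same commit, best practice is to avoid additional dimensions, especially on $CLOUD_LONG.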

`api/continuous-aggregates/add_continuous_aggregate_policy.md` (1 addition, 1 deletion)

```diff
@@ -86,4 +86,4 @@ Because each `batch` is an individual transaction, executing a policy in batches
 [concurrent-refresh-policies]: /use-timescale/:currentVersion:/continuous-aggregates/refresh-policies/
 [informational-views]: /api/:currentVersion:/informational-views/jobs/
 [real-time-aggregation]: /use-timescale/:currentVersion:/continuous-aggregates/real-time-aggregates/
-[utc-bucketing]: https://www.tigerdata.com/docs/use-timescale/:currentVersion:/time-buckets/about-time-buckets/
+[utc-bucketing]: /use-timescale/:currentVersion:/time-buckets/about-time-buckets/#timezones
```

`api/hypertable/create_hypertable.md` (5 additions, 6 deletions)

```diff
@@ -167,27 +167,26 @@ Subsequent data insertion and queries automatically leverage the UUIDv7-based pa
 | Name | Type | Default | Required | Description |
 |-------------|------------------|---------|-|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 |`create_default_indexes`| `BOOLEAN` | `TRUE` | ✖ | Create default indexes on time/partitioning columns. |
-|`dimension`| [DIMENSION_INFO][dimension-info] | - | ✔ | To create a `_timescaledb_internal.dimension_info` instance to partition a hypertable, you call [`by_range`][by-range] and [`by_hash`][by-hash]. |
+|`dimension`| `DIMENSION_INFO` | - | ✔ | To create a `_timescaledb_internal.dimension_info` instance to partition a hypertable, you call [`by_range`][by-range] and [`by_hash`][by-hash]. **Note**: best practice is to not use additional dimensions, especially on $CLOUD_LONG.
+|
 |`if_not_exists` | `BOOLEAN` | `FALSE` | ✖ | Set to `TRUE` to print a warning if `relation` is already a hypertable. By default, an exception is raised. |
 |`migrate_data`| `BOOLEAN` | `FALSE` | ✖ | Set to `TRUE` to migrate any existing data in `relation` in to chunks in the new hypertable. Depending on the amount of data to be migrated, setting `migrate_data` can lock the table for a significant amount of time. If there are [foreign key constraints][foreign-key-constraings] to other tables in the data to be migrated, `create_hypertable()` can run into deadlock. A hypertable can only contain foreign keys to another hypertable. `UNIQUE` and `PRIMARY` constraints must include the partitioning key. <br></br> Deadlock may happen when concurrent transactions simultaneously try to insert data into tables that are referenced in the foreign key constraints, and into the converting table itself. To avoid deadlock, manually obtain a [SHARE ROW EXCLUSIVE][share-row-exclusive] lock on the referenced tables before you call `create_hypertable` in the same transaction. <br></br> If you leave `migrate_data` set to the default, non-empty tables generate an error when you call `create_hypertable`. |
 |`relation`| REGCLASS | - | ✔ | Identifier of the table to convert to a hypertable. |
 
 
-<DimensionInfo />
-
 ## Returns
 
 |Column|Type| Description |
 |-|-|-------------------------------------------------------------------------------------------------------------|
 |`hypertable_id`|INTEGER| The ID of the hypertable you created. |
 |`created`|BOOLEAN| `TRUE` when the hypertable is created. `FALSE` when `if_not_exists` is `true` and no hypertable was created. |
 
+[add-dimension]: /api/:currentVersion:/hypertable/add_dimension
 [api-create-hypertable-arguments]: /api/:currentVersion:/hypertable/create_hypertable/#arguments
-[by-hash]: /api/:currentVersion:/hypertable/create_hypertable/#by_hash
-[by-range]: /api/:currentVersion:/hypertable/create_hypertable/#by_range
+[by-range]: /api/:currentVersion:/hypertable/add_dimension/#by_range
+[by-hash]: /api/:currentVersion:/hypertable/add_dimension/#by_hash
 [chunk_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
 [declarative-partitioning]: https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE
-[dimension-info]: /api/:currentVersion:/hypertable/create_hypertable/#dimension-info
 [foreign-key-constraings]: /use-timescale/:currentVersion:/schema-management/about-constraints/
 [hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/
 [hypertables-section]: /use-timescale/:currentVersion:/hypertables/
```
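The `dimension` argument documented in this table takes a `DIMENSION_INFO` value built with `by_range` or `by_hash`. As a minimal sketch of that call pattern, assuming a hypothetical `conditions` table (all column names and the 7-day chunk interval are illustrative):

```sql
-- Hypothetical source table; names are illustrative.
CREATE TABLE conditions (
  time        TIMESTAMPTZ NOT NULL,
  device_id   TEXT,
  temperature DOUBLE PRECISION
);

-- The `dimension` argument is a DIMENSION_INFO instance created by
-- by_range(); the 7-day partition interval is an illustrative choice.
SELECT create_hypertable('conditions', by_range('time', INTERVAL '7 days'));
```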

`integrations/debezium.md` (1 addition, 16 deletions)

```diff
@@ -38,10 +38,6 @@ This page explains how to capture changes in your database and stream them using
 
 ## Configure your database to work with Debezium
 
-<Tabs label="Integrate with Debezium" persistKey="source-database">
-
-<Tab title="Self-hosted TimescaleDB" label="self-hosted">
-
 To set up $SELF_LONG to communicate with Debezium:
 
 <Procedure>
@@ -60,18 +56,7 @@ Set up Kafka Connect server, plugins, drivers, and connectors:
 
 </Procedure>
 
-</Tab>
-
-<Tab title="Tiger Cloud" label="tiger-cloud">
-
-Debezium requires logical replication to be enabled. Currently, this is not enabled by default on $SERVICE_LONGs.
-We are working on enabling this feature as you read. As soon as it is live, these docs will be updated.
-
-</Tab>
-
-</Tabs>
-
-And that is it, you have configured Debezium to interact with $COMPANY products.
+And that is it, you have configured Debezium to interact with $TIMESCALE_DB.
 
 [caggs]: /use-timescale/:currentVersion:/continuous-aggregates/
 [debezium]: https://debezium.io/
```

`migrate/troubleshooting.md` (1 addition, 1 deletion)

```diff
@@ -112,7 +112,7 @@ for live migration to work smoothly.
 
 ## Can I use $CLOUD_LONG instance as source for live migration?
 
-No, $CLOUD_LONG cannot be used as a source database for live migration.
+Yes, but logical replication must be enabled first. [Contact us](mailto:support@tigerdata.com) to enable.
 
 
 ## How can I exclude a schema/table from being replicated in live migration?
```
