Merged
2 changes: 1 addition & 1 deletion about/changelog.md
Original file line number Diff line number Diff line change
@@ -791,7 +791,7 @@ Finding logs just got easier! We've added a date, time, and timezone picker, so
## 📒Faster vector search and improved job information
<Label type="date">April 4, 2025</Label>

- ### pgvectorscale 0.7.0: faster filtered filtered vector search with filtered indexes
+ ### pgvectorscale 0.7.0: faster filtered vector search with filtered indexes

This pgvectorscale release adds label-based filtered vector search to the StreamingDiskANN index.
This enables you to return more precise and efficient results by combining vector
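The label-based filtered search this hunk describes is, conceptually, a nearest-neighbor search restricted to vectors carrying a given label. A brute-force sketch of that semantic (illustrative only; StreamingDiskANN answers it with a filtered index, not a scan, and the names here are made up):

```python
import math

def filtered_knn(vectors, labels, query, label, k=1):
    # Brute-force label-filtered vector search: only vectors tagged with
    # `label` are candidates; the k nearest by Euclidean distance win.
    candidates = [
        (math.dist(vec, query), i)
        for i, vec in enumerate(vectors)
        if label in labels[i]
    ]
    return [i for _, i in sorted(candidates)[:k]]

vectors = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
labels = [{"news"}, {"blog"}, {"news"}]
print(filtered_knn(vectors, labels, query=(1.0, 1.0), label="news", k=1))  # [0]
```

The index avoids this full pass over the data, which is where the release's speedup comes from.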
2 changes: 1 addition & 1 deletion api/_hyperfunctions/tdigest/approx_percentile_rank.md
@@ -15,7 +15,7 @@ hyperfunction:
aggregates:
- tdigest()
api_details:
- summary: Estimate the the percentile at which a given value would be located.
+ summary: Estimate the percentile at which a given value would be located.
signatures:
- language: sql
code: |
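The quantity `approx_percentile_rank` estimates — the percentile at which a value would sit — is just the fraction of values at or below it. The exact version can be computed directly (a sketch of the semantic, not the tdigest algorithm, which answers it from a compact sketch instead of all the raw values):

```python
import bisect

def percentile_rank(values, x):
    # Exact percentile rank: the fraction of values <= x.
    ordered = sorted(values)
    return bisect.bisect_right(ordered, x) / len(ordered)

print(percentile_rank(range(1, 101), 90))  # 0.9
```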
2 changes: 1 addition & 1 deletion api/uuid-functions/generate_uuidv7.md
@@ -14,7 +14,7 @@ products: [cloud, mst, self_hosted]

Generate a UUIDv7 object based on the current time.

- The UUID contains a a UNIX timestamp split into millisecond and sub-millisecond parts, followed by
+ The UUID contains a UNIX timestamp split into millisecond and sub-millisecond parts, followed by
random bits.
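The layout this page describes — a 48-bit millisecond UNIX timestamp in the top bits, version and variant bits, then randomness — can be sketched in Python (an illustration of the standard UUIDv7 bit layout, not the `generate_uuidv7` implementation itself):

```python
import os
import time
import uuid

def generate_uuidv7():
    # 48-bit Unix timestamp in ms, 4 version bits (0b0111), 12 random bits,
    # 2 variant bits (0b10), then 62 more random bits: 128 bits total.
    ms = time.time_ns() // 1_000_000
    rand = int.from_bytes(os.urandom(10), "big") >> 6  # 74 random bits
    value = (
        (ms << 80)
        | (0x7 << 76)
        | ((rand >> 62) << 64)
        | (0b10 << 62)
        | (rand & ((1 << 62) - 1))
    )
    return uuid.UUID(int=value)

def uuidv7_millis(u):
    # Recover the millisecond timestamp from the top 48 bits.
    return u.int >> 80

u = generate_uuidv7()
print(u.version)  # 7
```

Because the timestamp occupies the most significant bits, UUIDv7 values generated later sort after earlier ones, which is what makes them index-friendly.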


2 changes: 1 addition & 1 deletion integrations/apache-kafka.md
@@ -66,7 +66,7 @@ To set up Kafka Connect server, plugins, drivers, and connectors:

1. **Verify Kafka Connect is running**

- In yet another another Terminal window, run the following command:
+ In yet another Terminal window, run the following command:
```bash
curl http://localhost:8083
```
2 changes: 1 addition & 1 deletion integrations/cloudwatch.md
@@ -17,7 +17,7 @@ import ManageDataExporter from "versionContent/_partials/_manage-a-data-exporter

You can export telemetry data from your $SERVICE_LONGs with the time-series and analytics capability enabled to CloudWatch. The available metrics include CPU usage, RAM usage, and storage. This integration is available for [Scale and Enterprise][pricing-plan-features] pricing tiers.

- This pages explains how to export telemetry data from your $SERVICE_LONG into CloudWatch by creating a $CLOUD_LONG data exporter, then attaching it to the $SERVICE_SHORT.
+ This page explains how to export telemetry data from your $SERVICE_LONG into CloudWatch by creating a $CLOUD_LONG data exporter, then attaching it to the $SERVICE_SHORT.

## Prerequisites

2 changes: 1 addition & 1 deletion integrations/power-bi.md
@@ -39,7 +39,7 @@ Use the PostgreSQL ODBC driver to connect Power BI to $CLOUD_LONG.

</Procedure>

- ## Import the data from your your $SERVICE_LONG into Power BI
+ ## Import the data from your $SERVICE_LONG into Power BI

Establish a connection and import data from your $SERVICE_LONG into Power BI:

2 changes: 1 addition & 1 deletion integrations/supabase.md
@@ -92,7 +92,7 @@ To set up a $SERVICE_LONG optimized for analytics to receive data from Supabase:
WITH NO DATA;
```

- 1. Setup a view to recieve the data from Supabase.
+ 1. Setup a view to receive the data from Supabase.

```sql
CREATE VIEW signs_per_minute_delay
2 changes: 1 addition & 1 deletion integrations/telegraf.md
@@ -1,6 +1,6 @@
---
title: Ingest data using Telegraf
- excerpt: Ingest data into a Tiger Cloud service using using the Telegraf plugin
+ excerpt: Ingest data into a Tiger Cloud service using the Telegraf plugin
products: [cloud, self_hosted]
keywords: [ingest, Telegraf]
tags: [insert]
2 changes: 1 addition & 1 deletion self-hosted/multinode-timescaledb/about-multinode.md
@@ -119,7 +119,7 @@ LIMIT 100;

Partitioning on `time` and a space dimension such as `location`, is also best if
you need faster insert performance. If you partition only on time, and your
- inserts are generally occuring in time order, then you are always writing to one
+ inserts are generally occurring in time order, then you are always writing to one
data node at a time. Partitioning on `time` and `location` means your
time-ordered inserts are spread across multiple data nodes, which can lead to
better performance.
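The effect the paragraph above describes can be sketched as: with a space dimension, the target data node is chosen by hashing the space column, so inserts that all share the current time slice still fan out across nodes (illustrative only — `target_node` is a made-up name and TimescaleDB uses its own partitioning hash, not CRC-32):

```python
import zlib

def target_node(location: str, num_nodes: int) -> int:
    # Hash the space-dimension value to pick a data node. With time-only
    # partitioning, time-ordered inserts all hit the node that owns the
    # current time slice; hashing `location` spreads them across nodes.
    return zlib.crc32(location.encode()) % num_nodes

batch = ["nyc", "sfo", "lon", "ber", "tok"]
print({loc: target_node(loc, 3) for loc in batch})
```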
@@ -637,7 +637,7 @@ for visualizing your data analysis
Check out these resources for more about using $TIMESCALE_DB with crypto data:

* [Analyze cryptocurrency market data][crypto-tutorial]
- * [Analyzing Analyzing Bitcoin, Ethereum, and 4100+ other cryptocurrencies using $PG and $TIMESCALE_DB][crypto-blog]
+ * [Analyzing Bitcoin, Ethereum, and 4100+ other cryptocurrencies using $PG and $TIMESCALE_DB][crypto-blog]
* [Learn how $TIMESCALE_DB user Messari uses data to open the crypto economy to everyone][messari]
* [How one $TIMESCALE_DB user built a successful crypto trading bot][trading-bot]

2 changes: 1 addition & 1 deletion tutorials/OLD_nfl-analytics/advanced-analysis.md
@@ -198,7 +198,7 @@ WITH total_yards AS (
FROM player_yards_by_game t
GROUP BY t.player_id, t.gameid
), avg_yards AS (
- -- This table takes the average of the yards run by each player and calls out thier position
+ -- This table takes the average of the yards run by each player and calls out their position
SELECT p.player_id, p.displayname, AVG(yards) AS avg_yards, p."position"
FROM total_yards t
LEFT JOIN player p ON t.player_id = p.player_id
2 changes: 1 addition & 1 deletion tutorials/OLD_nfl-fantasy-league.md
@@ -404,7 +404,7 @@ WITH total_yards AS (
FROM player_yards_by_game t
GROUP BY t.player_id, t.gameid
), avg_yards AS (
- -- This table takes the average of the yards run by each player and calls out thier position
+ -- This table takes the average of the yards run by each player and calls out their position
SELECT p.player_id, p.display_name, AVG(yards) AS avg_yards, p."position"
FROM total_yards t
LEFT JOIN player p ON t.player_id = p.player_id
2 changes: 1 addition & 1 deletion tutorials/_template/_index.md
@@ -32,7 +32,7 @@ Before you begin, make sure you have:
A numbered list of the sub-pages in the tutorial. Remember that this is
curricula content, so these steps must be in order:

- 1. [Set up up your dataset][tutorial-dataset]
+ 1. [Set up your dataset][tutorial-dataset]
1. [Query your dataset][tutorial-query]
1. [More things to try][tutorial-advanced]

2 changes: 1 addition & 1 deletion tutorials/financial-tick-data/financial-tick-dataset.md
@@ -16,7 +16,7 @@ import GrafanaConnect from "versionContent/_partials/_grafana-connect.mdx";
# Ingest data into a $SERVICE_LONG

This tutorial uses a dataset that contains second-by-second trade data for
- the most-traded crypto-assets. You optimize this time-series data in a a hypertable called `assets_real_time`.
+ the most-traded crypto-assets. You optimize this time-series data in a hypertable called `assets_real_time`.
You also create a separate table of asset symbols in a regular $PG table named `assets`.

The dataset is updated on a nightly basis and contains data from the last four
2 changes: 1 addition & 1 deletion use-timescale/hypercore/secondary-indexes.md
@@ -11,7 +11,7 @@ import CreateHypertablePolicyNote from "versionContent/_partials/_create-hyperta

Real-time analytics applications require more than fast inserts and analytical queries. They also need high performance
when retrieving individual records, enforcing constraints, or performing upserts, something that OLAP/columnar databases
- lack. This pages explains how to improve performance by segmenting and ordering data.
+ lack. This page explains how to improve performance by segmenting and ordering data.

To improve query performance using indexes, see [About indexes][about-index] and [Indexing data][create-index].

2 changes: 1 addition & 1 deletion use-timescale/metrics-logging/monitoring.md
@@ -22,7 +22,7 @@ When something doesn't look right, $CLOUD_LONG provides a complete investigation

Want to save some time? Check out [**Recommendations**][recommendations] for alerts that may have already flagged the problem!

- This pages explains what specific data you get at each point.
+ This page explains what specific data you get at each point.

## Metrics

6 changes: 3 additions & 3 deletions use-timescale/tigerlake.md
@@ -163,7 +163,7 @@ To connect a $SERVICE_LONG to your data lake:
`"Principal": { "AWS": "arn:aws:iam::123456789012:root" }` does not mean `root` access. This delegates
permissions to the entire AWS account, not just the root user.

- 1. Replace `<ProjectID>` and `<ServiceID>` with the the [connection details][get-project-id] for your $LAKE_LONG
+ 1. Replace `<ProjectID>` and `<ServiceID>` with the [connection details][get-project-id] for your $LAKE_LONG
$SERVICE_SHORT, then click `Next`.

1. In `Permissions policies`. click `Next`.
@@ -228,7 +228,7 @@ destination Iceberg table. This happens at approximately 30.000 events a second.
can be handled for a certain amount of time and feathered out over time. This depends on duration of the
ingestion burst, and the amount of extra events to be handled.

- Once the snapshot is fully imported, the snapshot and CDC Iceberg table branches are merged. Merging takes from a couple of seconds, to ten minutes for larger tables of 5TB or more. During this time, new events are held on the WAL. Once the merge is completed, events in the WAL are CDC'd to Iceberg. This implies eventual consistency of the Iceberg table after you started the the sync.
+ Once the snapshot is fully imported, the snapshot and CDC Iceberg table branches are merged. Merging takes from a couple of seconds, to ten minutes for larger tables of 5TB or more. During this time, new events are held on the WAL. Once the merge is completed, events in the WAL are CDC'd to Iceberg. This implies eventual consistency of the Iceberg table after you started the sync.

To stream data from a $PG relational table, or a $HYPERTABLE in your $SERVICE_LONG to your data lake, run the following
statement:
@@ -330,7 +330,7 @@ data lake:

**Specify a different namespace**

- By default, tables are created in the the `timescaledb` namespace. To specify a different namespace when you start the sync, use the `tigerlake.iceberg_namespace` property. For example:
+ By default, tables are created in the `timescaledb` namespace. To specify a different namespace when you start the sync, use the `tigerlake.iceberg_namespace` property. For example:

```sql
ALTER TABLE my_hypertable SET (