@@ -175,7 +175,7 @@ instance to a $SERVICE_LONG:
175175 Because the $PG_CONNECTOR runs continuously, best practice is to run it as a detached (daemonized) Docker container.
176176
177177 ``` shell
178- docker run -d --rm --name livesync timescale/live-sync:v0.7.0 run \
178+ docker run -d --rm --name livesync timescale/live-sync:v0.11.2 run \
179179    --publication <publication_name> --subscription <subscription_name> \
180180    --source $SOURCE --target $TARGET --table-map <table_map_as_json>
181181 ```
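The `$SOURCE` and `$TARGET` environment variables are standard PostgreSQL connection URIs. A minimal sketch, with hypothetical hostnames, credentials, and database names:

```shell
# Hypothetical connection strings: replace the user, password, host, and
# database names with the values for your source database and target service.
export SOURCE="postgres://postgres:password@source-host:5432/postgres"
export TARGET="postgres://tsdbadmin:password@your-service.tsdb.cloud.timescale.com:5432/tsdb"
```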
@@ -232,10 +232,11 @@ instance to a $SERVICE_LONG:
232232
233233 | state | description |
234234 | -------| -------------|
235- | d | initial table data sync |
236- | f | initial table data sync completed |
237- | s | catching up with the latest changes |
238- | r | table is ready, syncing live changes |
235+ | i | initial state, table data sync not started |
236+ | d | initial table data sync is in progress |
237+ | f | initial table data sync completed, catching up with incremental changes |
238+ | s | synchronized, waiting for the main apply worker to take over |
239+ | r | table is ready, applying changes in real-time |
239240
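The state codes above correspond to PostgreSQL's `pg_subscription_rel.srsubstate` values. Assuming the connector creates a native logical-replication subscription on the target, one way to inspect the per-table state is a catalog query like the following sketch (run against the TARGET database; the `$TARGET` connection string is a placeholder):

```shell
# Hedged example: list each replicated table with its current sync state code.
psql "$TARGET" -c \
  "SELECT srrelid::regclass AS table_name, srsubstate AS state
   FROM pg_subscription_rel;"
```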
240241 To see the replication lag, run the following against the SOURCE database:
241242
330331 Use the `--drop` flag to remove the replication slots created by the $PG_CONNECTOR on the source database.
331332
332333 ``` shell
333- docker run -it --rm --name livesync timescale/live-sync:v0.7.0 run \
334+ docker run -it --rm --name livesync timescale/live-sync:v0.11.2 run \
334335    --publication <publication_name> --subscription <subscription_name> \
335336    --source $SOURCE --target $TARGET \
336337 --drop