A companion service for Backrest Restic that tracks snapshot activity and storage usage. It aggregates backup metadata into a database and sends scheduled email reports summarizing snapshot events and storage statistics.
- Receive snapshot events from Backrest via webhooks and store their summary data
- Monitor and track connected storage devices
- Compare data and statistics against the previous day, week, and month to analyze trends over time
- Generate and send formatted email reports, highlighting snapshots and storages over a specified date range
Below are the main recurring events.
When Backrest starts or completes a snapshot, forget, check, or prune, it calls the /add-event endpoint. All of the available data provided during the Backrest hook is sent and stored in the API's database.
```mermaid
flowchart TD
    subgraph Backrest
        subgraph Backrest Events
            A1["Snapshot"]
            A2["Forget"]
            A3["Check"]
            A4["Prune"]
        end
        B["Backrest Hook Triggered"]
        C["Call /add-event Endpoint Containing Event Metadata"]
    end
    subgraph Backrest Summary Reporter
        D["Received Event Metadata"]
        E["Store Data in Database"]
    end
    A1 --> B
    A2 --> B
    A3 --> B
    A4 --> B
    B --> C
    C --> D
    D --> E
```
The storage stats update at 0 0 * * * (nightly at 00:00 UTC). All provided storage paths are checked, and their current used/free/total disk space is saved via the /update-storage-statistics endpoint.
The storages that are checked are defined in the .env's STORAGE_PATH_N. See Setting up Storage Mounts for more information.
```mermaid
flowchart TD
    A["Cron Schedule"]
    A -->|"Default: 0 0 * * *"| B["Storage Check Trigger"]
    B --> C["Execute Storage Check"]
    C -->|"/update-storage-statistics Endpoint"| D{"Query and Save Storage Stats"}
    subgraph Storage_Paths
        E["Defined in .env"]
        E --> F["STORAGE_PATH_1"]
        E --> G["STORAGE_PATH_2"]
        E --> H["STORAGE_PATH_N"]
    end
    C --> Storage_Paths
    D -->|Output| I["Used, Free, Total Disk Space"]
```
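Conceptually, each per-path check boils down to asking the filesystem for used, free, and total bytes. The sketch below is an assumption about the mechanism (not the service's actual implementation) and shows the equivalent `df` query for a single path:

```shell
#!/bin/sh
# Sketch of a single storage check: query used/free/total bytes for one path.
# Substitute one of your configured STORAGE_PATH_N values for "/".
path="/"
read -r used free total <<EOF
$(df -B1 --output=used,avail,size "$path" | tail -n 1)
EOF
echo "used=$used free=$free total=$total"
```

The service repeats this for every `STORAGE_PATH_N` and stores the result with a timestamp so trends can be compared later.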
Email reports are sent on the EMAIL_FREQUENCY schedule defined in the .env and use the /generate-and-send-email-report endpoint. They use the provided SMTP settings and the current database to send a formatted report for all the restic events captured in the last STATS_INTERVAL. STATS_INTERVAL is defined in the .env and defaults to 24, which translates to data within the past 24 hours.
Storage statistics are refreshed before querying the latest, previous-day, previous-week, and previous-month statistics.
Email reports can also be manually called via the /generate-and-send-email-report endpoint.
```mermaid
flowchart TD
    subgraph Backrest Summary Reporter
        subgraph Email Reporting Logic
            E1["Scheduled Email Report Trigger"]
            E2["EMAIL_FREQUENCY (.env)"]
            E3["STATS_INTERVAL (.env, default 24h)"]
            E4["Call /generate-and-send-email-report"]
            E5["Use SMTP Settings"]
            E6["Query Events from Last STATS_INTERVAL"]
            E7["Format and Send Report via Email"]
        end
        subgraph Manual Trigger
            M1["Manual API Call"]
            M1 --> E4
        end
        subgraph Storage Statistics
            S1["Refresh Latest Storage Stats"]
            S2["Query Latest, Previous Day, Previous Week, and Previous Month"]
        end
    end
    E1 --> E2
    E1 --> E3
    E2 --> E4
    E3 --> E6
    E4 --> S1
    E4 --> E5
    E4 --> E6
    E6 --> E7
    E5 --> E7
    S1 --> S2
    S2 --> E7
```
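The report window derived from STATS_INTERVAL can be sketched as follows. This is an assumption about the logic, not the service's exact code, but it illustrates how a 24-hour interval maps to a start/end date pair in UTC:

```shell
#!/bin/sh
# Sketch: derive the report window from STATS_INTERVAL -
# "now" back to STATS_INTERVAL hours ago, in UTC.
STATS_INTERVAL="${STATS_INTERVAL:-24}"   # hours; matches the .env default
end_date=$(date -u +%Y-%m-%dT%H:%M:%SZ)
start_date=$(date -u -d "$STATS_INTERVAL hours ago" +%Y-%m-%dT%H:%M:%SZ)
echo "report window: $start_date -> $end_date"
```

These are the same ISO-8601 timestamps accepted by the `start_date`/`end_date` fields of the query endpoints shown later in this document.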
This companion runs via Docker and alongside a Backrest setup. Additional configuration is covered under Backrest Webhooks, Setting up SMTP Settings, Setting up Storage Mounts, and Healthchecks.
The docker-compose.yaml at the root of this project provides a standard setup that utilizes the main API (backrest-reporter) and its database (backrest-reporter-db).
To run this:
- Create a new directory (or use the same one as your Backrest folder).
- Copy the .env.example into a new .env file.
- Modify the settings as needed (see Environment Variables). At a minimum, configure the Required fields, which include database, email, and API key settings. Email settings are described here.
- Run the containers:

```shell
docker compose up -d --build
```

You can test out the configuration by sending a test email via the /send-test-email endpoint to ensure it works smoothly.
At this point, you can test additional endpoints and set up the webhooks in the Backrest settings.
| Variable | Description | Required / Default |
|---|---|---|
| DB_USERNAME | Username used to connect to the PostgreSQL database | Required |
| DB_PASSWORD | Password for the PostgreSQL user | Required |
| AUTH_KEY | Secret key used to authenticate requests to internal endpoints | Required |
| SMTP_HOST | SMTP server hostname (e.g. smtp.gmail.com) | Required |
| SMTP_PORT | SMTP port (commonly 587 for TLS or 465 for SSL) | Required |
| SMTP_USERNAME | SMTP username (usually your email address) | Required |
| SMTP_PASSWORD | SMTP password or app-specific password (never use your main email password) | Required |
| EMAIL_FROM | Email address and display name emails will be sent from (e.g. Your App Name <you@example.com>) | Required |
| EMAIL_TO | Comma-separated list of recipient email addresses | Required |
| SEND_STARTUP_EMAIL | Flag for sending email when system is first online. Set to TRUE or 1 to enable. | Optional • Default: None (False) |
| EMAIL_FREQUENCY | Cron schedule in UTC (e.g. 0 0 0 * * * runs daily at midnight UTC) | Optional • Default: 0 0 0 * * * |
| STATS_INTERVAL | Interval (in hours) of backup data to include in the email (e.g. 24 = last 24 hours) | Optional • Default: 24 |
| NUM_RETAINED_REPORTS | Number of retained reports stored; oldest are deleted first when exceeding this number | Optional • Default: 10 |
| HEALTHCHECK_PING_URL | Optional healthcheck URL (e.g. https://hc-ping.com/ping/...) | Optional |
| RCLONE_REMOTE | Your rclone remote name (must end with a colon, e.g. google_drive:) | Optional |
| RCLONE_TARGET | Path inside the container where the rclone remote is mounted (e.g. /mnt-rclone/google_drive) | Optional |
| STORAGE_PATH_1–N | Inside-the-container paths where backup archives are located (e.g. /mnt/opt, /mnt/mnt) | Optional • At least one path if using storage stats |
| STORAGE_NICK_1–N | Friendly nickname for each storage path shown in reports (e.g. fedserver01-opt, External Drive 01) | Optional • Defaults to path if blank |
| SERVER_NAME | Human-readable name of the server/environment used in email reports | Optional |
| BACKREST_URL | URL to the Backrest backup management UI/API used in email reports (e.g. https://backrest.example.com/) | Optional |
| PGADMIN_URL | URL to the pgAdmin database management interface used in email reports (e.g. https://pgadmin.example.com/) | Optional |
| TZ | Timezone for the application (e.g. UTC, America/New_York) | Optional • Default: container's OS timezone |
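For reference, a minimal `.env` covering just the Required rows above might look like the following. All values here are placeholders to replace with your own:

```shell
# Minimal .env example - Required fields only (placeholder values)
DB_USERNAME=backrest
DB_PASSWORD=change-me
AUTH_KEY=a-long-random-secret
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=you@example.com
SMTP_PASSWORD=your-app-password
EMAIL_FROM="Backrest Reporter <you@example.com>"
EMAIL_TO=you@example.com
```

Optional variables (schedules, storage paths, nicknames, healthchecks) can be layered on afterwards once the basics work.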
The repo contains the source code used to build the Docker image.
See a full example docker-compose.yaml for building from source here.
To provide all the necessary information to the Backrest reporter, go into your Backrest instance and modify the webhook settings of your plans/repos.
For Available Conditions, you should at a minimum include:
- CONDITION_SNAPSHOT_SUCCESS
- CONDITION_SNAPSHOT_ERROR
- CONDITION_SNAPSHOT_START
- CONDITION_SNAPSHOT_END
- CONDITION_FORGET_SUCCESS
- CONDITION_FORGET_ERROR
- CONDITION_FORGET_START
- CONDITION_FORGET_END
Use the following for Script, replacing the endpoint URL with your instance's and the API key with the one in your .env.
```shell
curl -X POST https://your-backrest-reporter-instance/add-event \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  --data-binary @- <<EOF
{
  "task": {{ .JsonMarshal .Task }},
  "time": "{{ .FormatTime .CurTime }}",
  "event": "{{ .EventName .Event }}",
  "repo": {{ .JsonMarshal .Repo.Id }},
  "plan": {{ .JsonMarshal .Plan.Id }},
  "snapshot": {{ .JsonMarshal .SnapshotId }}{{ if .Error }},
  "error": {{ .JsonMarshal .Error }}{{ else if .SnapshotStats }},
  "snapshot_stats": {{ .JsonMarshal .SnapshotStats }}{{ end }}
}
EOF
```

Once saved, future snapshot events will be sent to the Backrest Reporter API.
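For reference, a successful snapshot event rendered from this template might look roughly like the block below. The values are illustrative only, modeled on the sample responses shown later in this document:

```shell
# Hypothetical rendered payload for a successful snapshot (illustrative only)
payload=$(cat <<'EOF'
{
  "task": "backup for plan \"local-fedserver01-opt\"",
  "time": "2025-05-02T15:13:22Z",
  "event": "snapshot success",
  "repo": "backupdrive01",
  "plan": "local-fedserver01-opt",
  "snapshot": "ebacb858b239b0562b7f354db770a83951c88c490dda10d95a40e8bcc3e8e270",
  "snapshot_stats": {"total_files_processed": 207, "total_bytes_processed": 4064450}
}
EOF
)
echo "$payload"
```

Note how the `{{ if .Error }}` branch of the template is absent here: a successful snapshot carries `snapshot_stats` instead of `error`.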
To enable email reporting, you must configure the SMTP settings in the .env file. This section will walk you through using a Gmail account.
Gmail provides SMTP access for sending emails from external applications. Follow these steps to configure it:
If you have 2-Step Verification enabled (highly recommended), you must create an App Password:
- Go to Google Account Security.
- Under Signing in to Google, enable 2-Step Verification (if not already enabled).
- After that, a new App passwords option appears.
- Select Mail and your device name (e.g. Backup Server), then generate the password.
- Copy the 16-character password provided (you will use this instead of your Gmail password in the .env).
Important
For security, do not use your main Gmail password for SMTP; always use an app password.
Update your .env file with the following keys:
```shell
SMTP_HOST=smtp.gmail.com
SMTP_PORT=587
SMTP_USERNAME=your_email@gmail.com
SMTP_PASSWORD=your_email_app_password
EMAIL_FROM=Your App Name <your_email@gmail.com>
EMAIL_TO=receiver_email@example.com
```

- `SMTP_HOST` should be `smtp.gmail.com`
- `SMTP_PORT` should be `587` for TLS (STARTTLS)
- `SMTP_USERNAME` should be your Gmail address
- `SMTP_PASSWORD` is your App Password
- `EMAIL_FROM` sets the address and display name shown in the email "From" field
- `EMAIL_TO` is the comma-separated list of recipient email addresses
Once set up, trigger a test email via the /send-test-email endpoint:
```shell
curl -X POST https://your-backrest-reporter-instance/send-test-email \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV"
```

Check your Gmail inbox (or Spam folder) to verify the report is being sent.
You can also enable SEND_STARTUP_EMAIL in your .env by setting it to TRUE or 1. Doing so enables a startup email when the container is brought online. It contains the container ID and next report generation time. This can be useful for detecting automated updates to the container or unexpected outages.
- Gmail limits the number of emails you can send per day. If you are hitting limits, consider using another SMTP provider (e.g. SendGrid, Mailgun, etc.).
- If you see a "less secure app" warning, verify that you are using an App Password and that 2FA is enabled.
Storage locations are mounted into the main API service container so their statistics can be tracked and reported. Currently, this has been tested to work for local drives, local network drives via SSHFS, and rclone remotes via FUSE.
Once exposed to the API container, storage paths can be tracked and recorded by adding them to the .env and/or docker-compose.yaml.
Simple local drives include the drive the container is running on and any devices physically connected to the host machine (e.g. an external HDD).
For these, the folder or any of its parents must be mounted in the volumes section of the docker-compose.yaml. In the below example, the host /opt is being mounted to /mnt/opt inside the container as read-only.
```yaml
# docker-compose.yaml
volumes:
  # Host bind-mounts for local or remote mount points (read-only since we are only pulling stats)
  - /opt:/mnt/opt:ro
```

Now in the .env, we can specify the path to the volume inside the container (or any of its children) that we want to track. Additionally, we can specify an optional nickname that is used in the emails. If no nickname is provided, the path is used instead.

```shell
# .env
STORAGE_PATH_1=/mnt/opt
STORAGE_NICK_1=fedserver01-opt
```
Tip
The paths are used as the key, meaning you should not re-use the same internal container path for other storage devices. Instead, use a unique path to start with a clean slate. Nicknames can be changed at any time and the most recent one will always be used.
Externally connected drives, such as USB external HDDs, should be mounted via fstab to ensure they are available on system boot.
One way to mount is:

- Connect the drive to the host machine.
- Run `lsblk -f` to list the device names, filesystem types, UUIDs, and mount points.
- Find the drive you want to connect and note its UUID, the desired mount point, and the filesystem type.
- Edit `/etc/fstab` to include a line for your new entry. An example of mounting to `/mnt`:

  ```
  UUID=f6b99246-8780-e989-9bb6-94211a0f0f50 /mnt ext4 defaults 0 2
  ```

- Save the file and apply the changes with `mount -a`.
One way of tracking another machine on the same local network as the host is through SSHFS. First, ensure it is installed.

Fedora:

```shell
sudo dnf -y install sshfs
```

Ubuntu:

```shell
sudo apt-get install sshfs
```

Then, we can similarly auto-mount via fstab as we did with locally connected drives.
- Connect the drive to the machine on the local network and, if it needs to be added to that machine's fstab, follow the external-drive steps above on that machine.
- Obtain the local IP address of the machine using `ifconfig`.
- On the host machine, generate an SSH key if you do not already have one, pressing Enter to accept the default file location (`~/.ssh/id_rsa`):

  ```shell
  ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
  ```

- Copy the public key to Machine 2, replacing `user` with your actual username on Machine 2 and entering the password one last time when prompted:

  ```shell
  ssh-copy-id user@machine2_ip_or_hostname
  ```

- Test the connection:

  ```shell
  ssh user@machine2_ip_or_hostname
  ```

- Edit `/etc/fstab` to include a line for your new entry mounting via SSHFS with your working SSH key. An example of mounting to `/mnt/immich_remote` for the user `user`, IP `192.168.10.44`, and the remote's `/mnt/.immich`:

  ```
  user@192.168.10.44:/mnt/.immich /mnt/immich_remote fuse.sshfs ro,allow_other,_netdev,IdentityFile=/root/.ssh/id_rsa,users,idmap=user,follow_symlinks 0 0
  ```

- Save the file and apply with `mount -a`.
Rclone mounts, such as Google Drive, are supported by an rclone-mounter container that uses FUSE to mount the cloud connection to a shared directory across the rclone container, the host machine, and the API container. To make sure the FUSE mount is properly created and mounted, a healthcheck prevents the API container from starting until it is ready.
See a full example docker-compose.yaml for rclone mounts here.
The below example shows the rclone docker-compose.yaml service that uses a pre-configured Google Drive mount google_drive. The pre-configured rclone.conf is stored in ./rclone/config on the host machine.
Important
For these FUSE mounts to work, the folder must exist on the host machine. In the example configuration, you will need to run sudo mkdir -p /mnt-rclone/google_drive before running it for the first time (if it does not already exist).
To create an rclone config, see the official rclone docs for more information.
Important
If using auto-update services, such as Watchtower, it is recommended to add the rclone-mounter service as an exception or to not use rclone:latest. Auto-updates to rclone will cause the mount to no longer be populated on both backrest and the backrest-reporter until they are restarted. The example below pins it to rclone:1.70.3.
```yaml
# Rclone container that handles mounting Google Drive via FUSE
rclone-mounter:
  image: rclone/rclone:1.70.3
  container_name: rclone-mounter
  restart: unless-stopped
  cap_add:
    - SYS_ADMIN # Required for FUSE
  devices:
    - /dev/fuse # Expose FUSE device
  security_opt:
    - apparmor:unconfined # Unconfine AppArmor to allow FUSE mount
  volumes:
    # Config volume for rclone.conf
    - type: bind
      source: ./rclone/config
      target: /config/rclone
    # Optional: cache directory for VFS (improves stability and performance)
    - type: bind
      source: ./rclone/vfs-cache
      target: /config/rclone/vfs-cache
    # Mountpoint shared with host and other containers (e.g. Google Drive)
    # Ensure the folder exists on the host machine before running
    # e.g. sudo mkdir -p /mnt-rclone/google_drive
    - type: bind
      source: /path/to/desired/host/location # Mounted to the API container
      target: /mnt-rclone/google_drive
      bind:
        propagation: shared # Allow mount propagation between containers
  command: >
    mount google_drive: /mnt-rclone/google_drive
    --config=/config/rclone/rclone.conf
    --allow-other
    --allow-non-empty
    --vfs-cache-mode writes
    --cache-dir /config/rclone/vfs-cache
  healthcheck:
    test: ["CMD-SHELL", "grep -q ' /mnt-rclone/google_drive ' /proc/mounts"]
    interval: 5s
    timeout: 2s
    retries: 5
    start_period: 5s
```

Important
If the main Backrest container will be accessing this mount, it may be best practice to ensure it is included in the compose file. This ensures that restarts to the rclone container do not break accessibility within the Backrest service. An example docker compose with Backrest included can be found here.
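The container healthcheck in the compose snippet above simply looks for the mount point, space-delimited, in /proc/mounts. You can run the same style of check yourself; the sketch below demonstrates the technique against /, which is always mounted on Linux:

```shell
#!/bin/sh
# Same technique as the compose healthcheck: a mount is considered healthy
# if its mount point appears (space-delimited) in /proc/mounts.
check_mounted() {
  if grep -q " $1 " /proc/mounts; then echo "mounted"; else echo "not mounted"; fi
}
result=$(check_mounted /)
echo "$result"
```

Substituting `/mnt-rclone/google_drive` for `/` gives you a quick host-side way to confirm the FUSE mount is up before debugging the containers.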
Healthchecks are recommended and can be set up via the .env using HEALTHCHECK_PING_URL.
This helps catch both failed and successful scheduled runs.
Since the storage stats update runs daily, the recommended period is 1 day with a grace period of 1 hour.
Once you have `cd`ed into the folder with your docker-compose.yaml and your .env (containing DB_USERNAME and DB_PASSWORD), follow the instructions below for backing up.
- Run the database dump (writes a dated backrest-reporter-db SQL file into your `./backups` directory):

  ```shell
  docker exec \
    -e PGPASSWORD="$DB_PASSWORD" \
    backrest-reporter-db \
    pg_dump \
    -U "$DB_USERNAME" \
    -d backrest-reporter-db \
    > ./backups/backrest-reporter-db_$(date +%F).sql
  ```

  - `-e PGPASSWORD=…` injects your password so `pg_dump` will not prompt you.
  - `-U "$DB_USERNAME"` uses the same user you set in your `.env`.
  - `-d backrest-reporter-db` is the database name you set with `POSTGRES_DB`.
  - The `> ./backups/...` on the host side writes the output into your new `backups` folder, with today's date in the filename.

- Verify:

  ```shell
  ls -lh ./backups/backrest-reporter-db_*.sql
  head -n 20 ./backups/backrest-reporter-db_$(date +%F).sql
  ```
Replace step 1 of the previous instructions with:
```shell
docker exec \
  -e PGPASSWORD="$DB_PASSWORD" \
  backrest-reporter-db \
  pg_dump \
  -U "$DB_USERNAME" \
  -d backrest-reporter-db \
  -F c -b -v \
  > ./backups/backrest-reporter-db_$(date +%F).dump
```

- `-F c` → custom (compressed) format
- `-b` → include large objects
- `-v` → verbose logging
- Stop the backrest-reporter container:

  ```shell
  docker stop backrest-reporter
  ```

- Set your environment variables:

  ```shell
  export DB_CONTAINER=backrest-reporter-db
  export DB_NAME=backrest-reporter-db
  export DB_USERNAME=<your_db_username>
  export DB_PASSWORD=<your_db_password>
  export DUMP_FILE=/backups/backrest-reporter-db_YYYY_MM_DD.dump
  ```

- Terminate active connections:

  ```shell
  docker exec \
    -e PGPASSWORD="$DB_PASSWORD" \
    $DB_CONTAINER \
    psql -U "$DB_USERNAME" -d postgres \
    -c "SELECT pg_terminate_backend(pid)
        FROM pg_stat_activity
        WHERE datname = '$DB_NAME'
          AND pid <> pg_backend_pid();"
  ```

- Drop & recreate the database:

  ```shell
  docker exec \
    -e PGPASSWORD="$DB_PASSWORD" \
    $DB_CONTAINER \
    psql -U "$DB_USERNAME" -d postgres \
    -c "DROP DATABASE IF EXISTS \"$DB_NAME\";"

  docker exec \
    -e PGPASSWORD="$DB_PASSWORD" \
    $DB_CONTAINER \
    psql -U "$DB_USERNAME" -d postgres \
    -c "CREATE DATABASE \"$DB_NAME\";"
  ```

- Restore from dump:

  ```shell
  docker exec \
    -e PGPASSWORD="$DB_PASSWORD" \
    $DB_CONTAINER \
    pg_restore \
    -U "$DB_USERNAME" \
    -d "$DB_NAME" \
    "$DUMP_FILE"
  ```

When you need to restore in the future, update the variables and dump filename:
```shell
export DB_CONTAINER=backrest-reporter-db
export DB_NAME=backrest-reporter-db
export DB_USERNAME=<your_db_username>
export DB_PASSWORD=<your_db_password>
export DUMP_FILE=/backups/backrest-reporter-db_YYYY_MM_DD.dump

# (Optional) Clean slate:
docker exec -e PGPASSWORD="$DB_PASSWORD" $DB_CONTAINER psql -U "$DB_USERNAME" -d postgres \
  -c "DROP DATABASE IF EXISTS \"$DB_NAME\";"
docker exec -e PGPASSWORD="$DB_PASSWORD" $DB_CONTAINER psql -U "$DB_USERNAME" -d postgres \
  -c "CREATE DATABASE \"$DB_NAME\";"

# Restore:
docker exec -e PGPASSWORD="$DB_PASSWORD" $DB_CONTAINER pg_restore \
  -U "$DB_USERNAME" \
  -d "$DB_NAME" \
  "$DUMP_FILE"
```

Below are the current endpoints that the API supports. They are listed here for debugging and development.
Send a test email using the configured SMTP settings.
```shell
curl -X POST https://your-backrest-reporter-instance/send-test-email \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV"
```

This inserts snapshot summary and statistics into the database. Backrest and Docker Rsync Cron use this for adding snapshot events.
With `path-to-example.json` containing the mock event data:
```shell
curl -X POST https://your-backrest-reporter-instance/add-event \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d @tests/path-to-example.json
```

Get events takes in a start and end date and returns the snapshot events between the provided times.
Querying the events between 2025-05-02T15:13:00Z and 2025-05-03T15:13:21Z.
```shell
curl -X POST https://your-backrest-reporter-instance/get-events-in-range \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d '{
    "start_date": "2025-05-02T15:13:00Z",
    "end_date": "2025-05-03T15:13:21Z"
  }'
```

All events recorded within the provided time:
```json
[
  {
    "summary_id": 3,
    "created_at": "2025-05-02T15:13:22.272470Z",
    "task": "backup for plan \"local-fedserver01-opt\"",
    "time": "2025-05-02T15:13:22Z",
    "event": "snapshot success",
    "repo": "backupdrive01",
    "plan": "local-fedserver01-opt",
    "snapshot": "ebacb858b239b0562b7f354db770a83951c88c490dda10d95a40e8bcc3e8e270",
    "files_new": 0,
    "files_changed": 0,
    "files_unmodified": 207,
    "dirs_new": 0,
    "dirs_changed": 0,
    "dirs_unmodified": 137,
    "data_blobs": 0,
    "tree_blobs": 0,
    "data_added": 0,
    "total_files_processed": 207,
    "total_bytes_processed": 4064450,
    ...
  },
  {
    "summary_id": 7,
    "created_at": "2025-05-02T15:13:35.403089Z",
    "task": "backup for plan \"local-fedserver01-opt\"",
    "time": "2025-05-02T15:13:35Z",
    "event": "snapshot success",
    "repo": "backupdrive01",
    "plan": "local-fedserver01-opt",
    "snapshot": "b0af97e43e33db94b8223c853895405b77992e2d5b326953b31b09d89361e121",
    ...
  }
]
```

This endpoint takes in a start and end date and returns the summarized event data between the provided times, along with the prior day, prior week, and prior month.
Querying the summaries between 2025-05-02T15:13:00Z and 2025-05-03T15:13:21Z and its historical data for comparison.
```shell
curl -X POST https://your-backrest-reporter-instance/get-events-in-range \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d '{
    "start_date": "2025-05-02T15:13:00Z",
    "end_date": "2025-05-03T15:13:21Z"
  }'
```

Event totals for the provided date range, the previous day, week, and month:
```json
{
  "current": {
    "start_date": "2025-05-02T15:13:00Z",
    "end_date": "2025-05-03T15:13:21Z",
    "total_events": 8,
    "total_snapshot_success": 2,
    "total_forget_success": 2,
    "total_files_processed": 414,
    "total_bytes_processed": 8128900,
    ...
  },
  "previous_day": {
    "start_date": "2025-05-01T15:13:00Z",
    "end_date": "2025-05-02T15:13:21Z",
    "total_events": 13,
    "total_snapshot_success": 12,
    "total_files_new": 8,
    "total_files_changed": 15,
    "total_files_processed": 1359,
    "total_bytes_processed": 149925354,
    ...
  }
}
```

Updates the configured storage mounts with the latest statistics.
```shell
curl -X GET https://your-backrest-reporter-instance/update-storage-statistics \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV"
```

```json
[
  {
    "location": "/mnt/opt",
    "nickname": "fedserver01-opt",
    "used_bytes": 181917847552,
    "total_bytes": 478399168512
  },
  {
    "location": "/mnt/mnt",
    "nickname": "External Drive 01",
    ...
  },
  ...
]
```

This retrieves the latest storage statistics along with its previous day, week, and month.
```shell
curl -X GET https://your-backrest-reporter-instance/get-latest-storage-stats \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV"
```

```json
[
  {
    "location": "/mnt/opt",
    "nickname": "fedserver01-opt",
    "current": {
      "used_bytes": 181917847552,
      "free_bytes": 296481320960,
      "total_bytes": 478399168512,
      "percent_used": 38.02637201854518,
      "time_added": "2025-05-27T23:04:02.323203Z"
    },
    "previous_day": {
      "used_bytes": 183000031232,
      "free_bytes": 295399137280,
      "total_bytes": 478399168512,
      "percent_used": 38.25258137492137,
      "time_added": "2025-05-26T20:10:00.177965Z"
    },
    "previous_week": {
      "used_bytes": 182409072640,
      "free_bytes": 295990095872,
      "total_bytes": 478399168512,
      "percent_used": 38.1290530264424,
      "time_added": "2025-05-20T20:10:00.135573Z"
    },
    "previous_month": null
  },
  {
    "location": "/mnt/mnt",
    "nickname": "fedserver01 External Drive 01",
    "current": {
      "used_bytes": 32433807360,
      ...
    }
    ...
  }
]
```

Retrieves the latest storage statistics at a specific date and its previous day, week, and month.
```shell
curl -X GET https://your-backrest-reporter-instance/get-storage-stats \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d '{
    "end_date": "2025-07-05T12:42:12Z"
  }'
```

```json
[
  {
    "location": "/mnt-backup",
    "nickname": "extdrive02 (Backups)",
    "current": {
      "used_bytes": 207065759744,
      "free_bytes": 1753262551040,
      "total_bytes": 1960328310784,
      "percent_used": 10.56281024994163,
      "time_added": "2025-07-05T11:23:15.042394Z"
    },
    "previous_day": {
      "used_bytes": 206484578304,
      "free_bytes": 1753843732480,
      "total_bytes": 1960328310784,
      "percent_used": 10.533163101716365,
      "time_added": "2025-07-04T11:00:00.493426Z"
    },
    "previous_week": {
      "used_bytes": 203562176512,
      "free_bytes": 1756766134272,
      "total_bytes": 1960328310784,
      "percent_used": 10.384085940716163,
      "time_added": "2025-06-28T12:14:55.262699Z"
    },
    "previous_month": {
      "used_bytes": 191168102400,
      "free_bytes": 1769160208384,
      "total_bytes": 1960328310784,
      "percent_used": 9.751841125201398,
      "time_added": "2025-06-05T11:00:00.364751Z"
    }
  },
  {
    "location": "/mnt-rclone/google_drive",
    "nickname": "rclone01",
    "current": {
      "used_bytes": 28077682688,
      "free_bytes": 79296499712,
      "total_bytes": 107374182400,
      "percent_used": 26.14937973022461,
      "time_added": "2025-07-05T11:23:15.273657Z"
    },
    ...
  },
  ...
]
```

Takes in a start_date and end_date and:
- Returns the event totals between the provided times
- Returns the queried data between the provided times
- Updates the configured storage mounts with the latest statistics
- Returns the latest storage statistics and its previous day, week, and month
```shell
curl -X POST https://your-backrest-reporter-instance/get-events-and-storage-stats \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d '{
    "start_date": "2025-05-02T15:13:00Z",
    "end_date": "2025-05-03T15:13:21Z"
  }'
```

```json
{
  "event_totals": {
    "current": {
      "start_date": "2025-05-02T15:13:00Z",
      "end_date": "2025-05-03T15:13:21Z",
      "total_events": 8,
      ...
    },
    ...
  },
  "snapshot_summaries": [
    {
      "summary_id": 3,
      "created_at": "2025-05-02T15:13:22.272470Z",
      "task": "backup for plan \"local-fedserver01-opt\"",
      "time": "2025-05-02T15:13:22Z",
      "event": "snapshot success",
      "repo": "backupdrive01",
      "plan": "local-fedserver01-opt",
      ...
    },
    ...
  ],
  "storage_statistics": [
    {
      "location": "/mnt/opt",
      "nickname": "fedserver01-opt",
      "current": {
        "used_bytes": 183772540928,
        ...
      }
      ...
    },
    ...
  ]
}
```

Receives a start_date and end_date and:
- Returns the event totals between the provided times
- Returns the queried data between the provided times
- Updates the configured storage mounts with the latest statistics
- Returns the latest storage statistics and its previous day, week, and month
```shell
curl -X POST https://your-backrest-reporter-instance/generate-and-send-email-report \
  -H "Content-Type: application/json" \
  -H "X-API-Key: YOUR_API_KEY_FROM_ENV" \
  -d '{
    "start_date": "2025-05-02T15:13:00Z",
    "end_date": "2025-05-03T15:13:21Z"
  }'
```

Problem: An rclone mount is accessible on the host but not inside the backrest-reporter container. Example error:

```
ls: cannot access '/mnt-rclone/google_drive/': Transport endpoint is not connected
```

Solution: Unmount the FUSE mount point and rebuild the containers:

- Unmount the FUSE mount point:

  ```shell
  fusermount -u /mnt-rclone/google_drive
  ```

- Rebuild the containers:

  ```shell
  docker compose up -d --build
  ```
- Docker Rsync Cron - A Dockerized solution for managing scheduled `rsync` jobs with `cron`. Useful for creating scheduled clones rather than restic backups. Creates snapshot information compatible with the Backrest Summary Reporter.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.





