Performance Benchmarks
- Command: `secure_delete <file> --passes N --pattern random|zeros`
- Timer: shell `time` (real/user/sys)
- File sizes: 1 GiB for the single-pass zeros test; 1 GiB × 3 passes for the random test (3 GiB of total writes)
- Host: ThinkPad P52, Intel i7 @ 4.2 GHz, 64 GB RAM, dual NVMe
```bash
# 1 GiB random file
dd if=/dev/urandom of=testfile.bin bs=1M count=1024 status=progress

# 3 random passes
time secure_delete testfile.bin --passes 3 --pattern random

# 1 zero pass
dd if=/dev/urandom of=testfile.bin bs=1M count=1024 status=none
time secure_delete testfile.bin --passes 1 --pattern zeros
```
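The `dd direct write` / `dd direct read` rows in the results table below are raw direct-I/O baselines. The exact invocations aren't recorded on this page; a plausible equivalent, reusing the 1 GiB test size, looks like this:

```bash
# Direct-I/O baselines (sketch: oflag/iflag=direct are the point; block size and file name are assumptions)
dd if=/dev/zero    of=testfile.bin bs=1M count=1024 oflag=direct status=none   # direct write
dd if=testfile.bin of=/dev/null    bs=1M count=1024 iflag=direct status=none   # direct read
```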
Log results to `benchmarks/results.md` for longitudinal tracking.
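One low-friction way to capture those entries (the format below is a suggestion, not a project convention) is to redirect the timing output straight into the log:

```bash
# Append a timestamped entry to benchmarks/results.md (entry format is an assumption)
{
  echo "### $(date -u +%F) ext4, 3 passes, random"
  /usr/bin/time -p secure_delete testfile.bin --passes 3 --pattern random
} >> benchmarks/results.md 2>&1
```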
Results (ThinkPad P52, i7 @ 4.2 GHz, 64 GB RAM, dual NVMe):
| Filesystem / Path | Command | Passes | Pattern | real | user | sys | Throughput |
| -- | -- | -- | -- | -- | -- | -- | -- |
| WSL → NTFS (`/mnt/d`) | `secure_delete testfile.bin` | 3 | random | 24.84s | 0.11s | 7.60s | ≈ 130 MB/s |
| WSL ext4 (`~/bench`) | same | 3 | random | 10.24s | 0.13s | 8.15s | ≈ 315 MB/s |
| ext4 (zeros pattern) | `--passes 1 --pattern zeros` | 1 | zeros | 1.07s | 0.00s | 0.55s | ≈ 960 MB/s |
| dd direct write | `dd oflag=direct` | — | — | 1.39s | — | — | ≈ 3.1 GB/s |
| dd direct read | `dd iflag=direct` | — | — | 0.79s | — | — | ≈ 5.5 GB/s |
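Throughput appears to be total bytes written divided by wall-clock time (inferred from the numbers, not a documented formula). For example, the 3-pass random run on NTFS pushes 3 × 1 GiB in 24.84 s:

```bash
# 3 passes × 1 GiB, divided by real time, in decimal MB/s
awk 'BEGIN { print 3 * 1024^3 / 24.84 / 1e6, "MB/s" }'   # ≈ 129.7 MB/s, i.e. the ≈ 130 MB/s above
```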
- Native ext4 recovers near-NVMe throughput; WSL → NTFS adds 2–3× latency due to boundary flushes.
- Per-pass `sync_all()` adds roughly 5–10 % latency, which is expected and by design.
- Random × 3 passes writes 3× the bytes; the apparent MB/s drops versus single-pass zeros, trading speed for entropy.
- Bottom line: the Rust shred core is I/O-bound, not CPU-bound. Keep targets on native Linux/ext4 when possible.