Description
Hello,
I've been using bees for a while now to dedupe my filesystems. Recently I started switching from release 0.10 to 0.11 on a few select machines of mine. Yesterday I noticed an issue for the first time where bees was constantly running in the background on 100% CPU utilization (i.e. single core, full load).
At first I suspected my CLI arguments might have been faulty:
--verbose=3 --scan-mode=4 --throttle-factor=0.9 --loadavg-target=10.0 --thread-min=1
My previous usage (with version 0.10) was:
--verbose=3 --scan-mode=1
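For reference, the effective command line on the affected machine looks roughly like this (the mount point is taken from the log below; I start bees through the service/wrapper shipped with the project, so the exact invocation may differ slightly):
# Approximate effective invocation with the 0.11 options above (mount point from the log below)
bees --verbose=3 --scan-mode=4 --throttle-factor=0.9 --loadavg-target=10.0 --thread-min=1 \
    /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89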
So I stripped away everything after --scan-mode=4, but the problem remained. Next I enabled debug logging (--verbose=7) and repeatedly found the following messages in the output:
2025-10-12 09:52:25 1.31<6> extent_255_0_128K: Crawl started BeesCrawlState 255:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.28<6> extent_253_512K_2M: Crawl started BeesCrawlState 253:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.25<6> extent_251_8M_32M: Crawl started BeesCrawlState 251:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.30<6> extent_254_128K_512K: Crawl started BeesCrawlState 254:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.27<6> extent_252_2M_8M: Crawl started BeesCrawlState 252:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.24<6> extent_250_32M_16E: Crawl started BeesCrawlState 250:0 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: extsz datasz point gen_min gen_max this cycle start tm_left next cycle ETA
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: ----- ------- ----- ------- ------- ---------------- ------- ----------------
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: max 68.096G idle 0 70250 2025-10-11 16:37 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: 32M 6.268G idle 72265 72266 2025-10-12 09:52 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: 8M 3.489G idle 72265 72266 2025-10-12 09:52 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: 2M 799.28M idle 72265 72266 2025-10-12 09:52 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: 512K 2.112G idle 72265 72266 2025-10-12 09:52 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: 128K 10.254G idle 72265 72266 2025-10-12 09:52 - -
2025-10-12 09:52:25 1.29<6> crawl_new: PROGRESS: total 91G gen_now 72266 updated 2025-10-12 09:52
2025-10-12 09:52:25 1.26<6> extent_254_128K_512K: Crawl finished BeesCrawlState 254:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.30<6> extent_255_0_128K: Crawl finished BeesCrawlState 255:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.28<6> extent_253_512K_2M: Crawl finished BeesCrawlState 253:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.32<6> extent_252_2M_8M: Crawl finished BeesCrawlState 252:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.25<6> extent_251_8M_32M: Crawl finished BeesCrawlState 251:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:25 1.24<6> extent_250_32M_16E: Crawl finished BeesCrawlState 250:378646495232 offset 0x0 transid 72265..72266 started 2025-10-12-09-52-25 (0s ago)
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.551 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x231000..0x8231000] dst[0x231000..0x8231000]
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: src = 170 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:27 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.628 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x232000..0x8232000] dst[0x232000..0x8232000]
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:36 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.441 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x233000..0x8233000] dst[0x233000..0x8233000]
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:45 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: PERFORMANCE: 8.473 sec: grow constrained = 1 *this = BeesRangePair: 128M src[0x234000..0x8234000] dst[0x234000..0x8234000]
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: src = 171 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/@snapshots/home.20251009T0615/hartan/.local/share/flatpak/repo/objects/9f/43d98473c39960912acac04d9fbf0b471bc3792c40311d40c15760cd1c4e1e.commitmeta
2025-10-12 09:52:53 1.34<6> ref_4298cdd000_189.641M_3: dst = 8 /run/bees/mnt/4a4a680a-0b6c-4260-8398-c79691587c89/home/hartan/.local/share/flatpak/repo/objects/ae/b385d0f92ef6470cd35a0c311dd311a98981b8721697c0adb354c5a3cd192e.commitmeta
I restarted bees multiple times, but the problem remained. Next I removed the bees working directory entirely, including the previous hash table, and let it rescan the filesystem overnight, but it ended up in the same weird state.
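For completeness, resetting the state amounted to roughly this (unit name as shipped with bees; the working-directory path is a placeholder for whatever BEESHOME points at on this setup):
# Stop the instance, wipe its working directory, start it again
systemctl stop beesd@4a4a680a-0b6c-4260-8398-c79691587c89.service
rm -r /path/to/beeshome    # placeholder path; contains beeshash.dat (hash table) and beescrawl.dat (crawl state)
systemctl start beesd@4a4a680a-0b6c-4260-8398-c79691587c89.service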
I tried creating an MWE by setting up a blank, file-backed btrfs and copying in only the apparently "faulty" files (sketched below), but I couldn't reproduce the issue there. Right now I'm rerunning bees with --scan-mode=1 to see whether the issue is independent of the scan mode, but that may take a few more hours to complete.
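The MWE attempt was essentially the following (sizes and file paths are placeholders, not the exact values I used):
# Blank, file-backed btrfs with only the two "faulty" commitmeta files copied in
truncate -s 10G /tmp/bees-test.img
mkfs.btrfs /tmp/bees-test.img
mkdir -p /mnt/bees-test
mount -o loop /tmp/bees-test.img /mnt/bees-test
cp --reflink=never <src.commitmeta> <dst.commitmeta> /mnt/bees-test/
# ...then pointed a fresh bees instance (empty hash table) at /mnt/bees-test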
I have no idea what's happening here. I checked the files and they're both ~190 MB in size, which I assume shouldn't take multiple hours of runtime; the backing storage is an NVMe SSD, after all. Do you have an idea what might be causing this? Is there something I should try?
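If more detail on the two files would help, this is the kind of inspection I can run on them (tool choices are just my suggestion; the full paths are in the log above and abbreviated here):
# Size, extent layout and sharing of the two commitmeta files
ls -l <src.commitmeta> <dst.commitmeta>
filefrag -v <src.commitmeta>                  # physical extent layout
compsize <src.commitmeta> <dst.commitmeta>    # shared/compressed space usage on btrfs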
Thanks in advance!