
REMOVE_ORPHANS deletes valid downloads immediately upon completion (Race Condition?) #322

@PromotheusParis

Description


Describe the bug

I am observing a critical issue where the remove_orphans job deletes a download from the download client (NZBGet) immediately after the network download finishes, but before Usenet post-processing (repair/unpack) can start or complete.

The Facts (Timeline):

  1. 02:55:44: NZBGet reports Successfully downloaded ... part029.rar (download hits 100%).
  2. 02:55:47: NZBGet reports Collection ... deleted from queue (3 seconds later).
  3. Decluttarr Logs: Confirm that the remove_orphans job triggered this specific removal.

Crucially, I found NO failure logs, NO health check errors, and NO critical warnings in the NZBGet history that would justify this removal. The download appeared perfectly successful and valid in NZBGet right before it was wiped by the script.

Logs

1. NZBGet Logs (Showing success followed by immediate deletion)

info  Sun Dec 28 2025 02:55:48  Deleting file /downloads/nzb/The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC.nzb.queued
info  Sun Dec 28 2025 02:55:47  Collection The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC deleted from queue
info  Sun Dec 28 2025 02:55:44  Successfully downloaded The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC/The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC.part029.rar
...
info  Sun Dec 28 2025 02:52:06  Collection The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC added to queue

2. Decluttarr Logs (Identifying the trigger)

INFO    | Job 'remove_orphans' triggered removal: The.Great.Flood.2025.2160p.MULTI.WEB-DL.SDR.H265-AOC

Suspected Cause (Hypothesis)

Based on the behavior (deletion occurring only 3 seconds after completion), it appears that the remove_orphans logic performs a strict comparison between the download client's list and the Radarr queue, without accounting for how recently the file completed.

During the short window in which a file transitions from "Downloading" (client) to "Importing" (Radarr), it may momentarily disappear from the Radarr queue API response. If the script happens to scan at exactly that moment (made likely by TIMER: 10), it classifies the file as an orphan and deletes it immediately.
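To illustrate the suspected race, here is a minimal sketch of what such a strict comparison might look like. This is NOT Decluttarr's actual code; the function name and the "downloadId" field are illustrative assumptions only.

```python
# Hypothetical sketch of the suspected orphan check -- not real Decluttarr
# code. An item is flagged as an orphan if it exists in the download client's
# queue but has no matching entry in the *arr queue, with no allowance for
# the brief import hand-off window.

def find_orphans(client_queue, arr_queue):
    """Flag client items whose download ID is absent from the *arr queue."""
    arr_ids = {item["downloadId"] for item in arr_queue}
    return [item for item in client_queue if item["downloadId"] not in arr_ids]

# During the hand-off from "Downloading" (client) to "Importing" (Radarr),
# Radarr's queue API may briefly return no entry for the item at all:
client_queue = [{"downloadId": "abc123", "name": "The.Great.Flood.2025"}]
arr_queue = []  # momentarily empty mid-transition

orphans = find_orphans(client_queue, arr_queue)
print(orphans)  # the freshly completed download is misclassified as an orphan
```

A scan landing inside that window would then delete a perfectly valid download, matching the 3-second gap seen in the logs above.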

Suggestion

If my understanding is correct, the script lacks a timestamp delta check.
Implementing a grace period (e.g., only consider a file an orphan if it has been completed for more than 180 minutes) would likely resolve this race condition and prevent the deletion of fresh, valid downloads.
Large files (e.g., 60 GB+ remuxes) can take a significant amount of time to verify, repair, and unpack, depending on server hardware (CPU/disk I/O).
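The grace-period idea could be sketched roughly as follows. Again, this is a hypothetical illustration, not Decluttarr code; the "completed_at" field (a Unix timestamp) and the function name are assumptions.

```python
import time

# Hypothetical grace-period variant of the orphan check: only items that have
# sat completed for longer than the grace period are eligible for removal.
# Field and function names are illustrative, not Decluttarr's real API.

GRACE_PERIOD = 180 * 60  # 180 minutes, as suggested above

def find_orphans_with_grace(client_queue, arr_queue, now=None):
    """Flag unmatched items only once they exceed the completion grace period."""
    now = time.time() if now is None else now
    arr_ids = {item["downloadId"] for item in arr_queue}
    return [
        item for item in client_queue
        if item["downloadId"] not in arr_ids
        and now - item["completed_at"] > GRACE_PERIOD
    ]

now = time.time()
fresh = {"downloadId": "abc123", "completed_at": now - 3}         # finished 3 s ago
stale = {"downloadId": "def456", "completed_at": now - 4 * 3600}  # finished 4 h ago

print(find_orphans_with_grace([fresh, stale], [], now=now))
# only the 4-hour-old item remains eligible for removal
```

With a check like this, a download that completed seconds earlier (as in the timeline above) would survive the scan and get a chance to repair/unpack and import normally.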

Environment

  • Image: ghcr.io/manimatter/decluttarr:latest
  • Network: usenet
  • Configuration:
    environment:
      TIMER: 10
      REMOVE_ORPHANS: "True"
      REMOVE_FAILED_DOWNLOADS: "True"
      DETECT_DELETIONS: "True"
