
Conversation

@Dsantra92
Contributor

@Dsantra92 Dsantra92 commented Jan 21, 2026

RepoManager now handles all repositories used for parsing and creates user- and task-specific worktrees. Additionally:

  • Added new metadata fields
  • Updated eviction policies
  • Bare repos are shared and cloned with a blob filter (filter:blob) to save space; see the sketch after this list
  • Removed some unneeded functions and fixed auto-commit issues in older functions
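A minimal sketch of the shared bare-repo plus worktree flow described above, using Python and subprocess; the paths, helper names, and the --filter=blob:none choice are illustrative assumptions, not the PR's actual implementation.

import subprocess
from pathlib import Path

def ensure_bare_repo(repo_url: str, base: Path) -> Path:
    """Clone a blob-filtered bare mirror once; all worktrees share its object store."""
    bare = base / ".bare"
    if not bare.exists():
        subprocess.run(
            ["git", "clone", "--bare", "--filter=blob:none", repo_url, str(bare)],
            check=True,
        )
    return bare

def add_worktree(bare: Path, ref: str, dest: Path) -> Path:
    """Check out a ref as a lightweight worktree attached to the shared bare repo."""
    subprocess.run(
        ["git", "-C", str(bare), "worktree", "add", "--detach", str(dest), ref],
        check=True,
    )
    return dest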

Summary by CodeRabbit

  • New Features

    • Repository management: register, inspect, list, prepare for parsing, create/delete worktrees, evict, and view per-user volume/usage.
    • Runtime access to repository resource and new repo-related configuration options (storage path, volume limit, eviction threshold, GitHub base URL).
  • Refactor

    • Redesigned repository storage/worktree lifecycle, cloning-based parsing flow, improved metadata and language detection, and eviction/volume orchestration.
    • Parsing flows updated to use the repo manager and safer worktree provisioning.
  • Security

    • Stronger token selection and enforced TLS minimum for GitHub interactions.
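A small sketch of what those two hardening steps typically look like in Python; the TLS 1.2 floor and the helper names are assumptions rather than the exact code in this PR.

import secrets
import ssl

def pick_github_token(tokens: list[str]) -> str:
    """Select a token with a CSPRNG instead of random.choice."""
    return secrets.choice(tokens)

def make_github_ssl_context() -> ssl.SSLContext:
    """Build a client context that rejects anything below TLS 1.2 (assumed minimum)."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx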

@Dsantra92 Dsantra92 requested review from dhirenmathur and nndn and removed request for nndn January 21, 2026 21:45
@coderabbitai
Contributor

coderabbitai bot commented Jan 21, 2026

Caution

Review failed

The pull request is closed.

Walkthrough

Introduces a bare-repo + worktree RepoManager with multi-tenant lifecycle and eviction, integrates it into parsing (replacing tarball flows with clone/worktree paths), updates parsing/project method signatures (AsyncSession→Session, ints→strs, optional email), adds repository types/resources/exceptions, and switches token selection to secrets.choice.

Changes

Cohort / File(s) Summary
RepoManager core & interface
app/modules/repo_manager/repo_manager.py, app/modules/repo_manager/repo_manager_interface.py
Adds v2-style bare-repo + worktree layout, per-user metadata, auth helpers, eviction/lifecycle (prepare_for_parsing, evict_stale_worktrees), volume accounting, metadata I/O, and new interface methods (get_total_volume_bytes, get_volume_percentage, prepare_for_parsing, evict_stale_worktrees).
Parsing controller / helper / service
app/modules/parsing/graph_construction/parsing_controller.py, app/modules/parsing/graph_construction/parsing_helper.py, app/modules/parsing/graph_construction/parsing_service.py, app/modules/parsing/graph_construction/parsing_repomap.py
Replaces tarball flows with clone/worktree flows via RepoManager; converts helper functions to instance/static methods; changes signatures to use Session and optional user_email; project_id types widened to str; adds done-callback logging for background tasks; repomap uses repo-relative root filtering.
Project service typing
app/modules/projects/projects_service.py
Widened project_id from int → str; branch_name is now `str | None`.
Repository resource, types & exports
potpie/resources/repository.py, potpie/types/repository.py, potpie/types/__init__.py, potpie/__init__.py, potpie/exceptions.py
Adds RepositoryResource wrapper, RepositoryInfo/RepositoryStatus/VolumeInfo types, new exceptions RepositoryError & RepositoryNotFoundError, and exports them at package level.
Runtime & config
potpie/runtime.py, potpie/config.py
Adds config fields for repo storage (repos_base_path, repos_volume_limit_bytes, repos_eviction_threshold_percent, github_base_url) and a lazy repositories property on PotpieRuntime.
Provider/token security changes
app/modules/code_provider/..., app/modules/intelligence/tools/web_tools/*, app/modules/code_provider/provider_factory.py
Replaces random.choice with secrets.choice for GitHub token selection; enforces TLS minimum version in GitHub service SSL contexts.
Background task error handling
app/modules/conversations/conversation/conversation_service.py, app/modules/parsing/...
Replaces silent fire-and-forget tasks with created asyncio tasks plus add_done_callback handlers to log exceptions for background fetch/email tasks.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Controller as ParsingController
    participant RepoMgr as RepoManager
    participant Git as Git (clone/fetch)
    participant FS as FileSystem
    participant Meta as MetadataStore

    Controller->>RepoMgr: prepare_for_parsing(repo_name, ref, repo_url?, auth_token?, is_commit?, user_id)
    RepoMgr->>RepoMgr: _evict_if_needed()/evict_stale_worktrees()
    RepoMgr->>RepoMgr: ensure_bare_repo(repo_name, repo_url, token)
    RepoMgr->>Git: clone/fetch bare repo (authenticated)
    Git->>FS: store bare repo (.bare)
    RepoMgr->>Meta: write/update repo .meta
    RepoMgr->>RepoMgr: create_worktree(ref, unique_id)
    RepoMgr->>Git: checkout worktree from bare repo -> worktree path
    RepoMgr->>Meta: store worktree metadata
    RepoMgr-->>Controller: return worktree path
    Controller->>Controller: parse_directory(worktree_path, db=Session)
sequenceDiagram
    autonumber
    participant Runtime as PotpieRuntime
    participant RepoRes as RepositoryResource
    participant RepoMgr as RepoManager
    participant Config as RuntimeConfig

    Runtime->>Runtime: access repositories property
    Runtime->>Config: read repos_base_path & limits
    Runtime->>RepoRes: instantiate RepositoryResource (lazy)
    RepoRes->>RepoMgr: prepare_for_parsing(repo_name, ref, repo_url?, auth_token?, is_commit?, user_id)
    RepoMgr-->>RepoRes: worktree path
    RepoRes-->>Runtime: return path

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • dhirenmathur
  • nndn

Poem

🐰 Thump — a bare repo takes its place,
Worktrees whisper, tidy every space.
Sessions steadied where async once leapt,
Secrets choose tokens while the codebase slept.
Hop—small paws leave the repo neat and ace.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 70.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check (✅ Passed): The title 'Repo manager with parsing logic' accurately summarizes the main focus of the changeset, which centralizes repository management and integrates it with parsing workflows.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch repo_manager

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
app/modules/projects/projects_service.py (1)

200-254: Guard against branch_name=None before branch fallback.

With the signature now allowing None, the branch fallback can accidentally match records where branch_name IS NULL (e.g., commit-only entries). If no branch name is available, return early instead of querying.

✅ Suggested guard
         if commit_id:
             # If commit_id is provided, only check by commit_id (no fallback to branch)
             # This ensures repo+commit_id maps to exactly one project
             project = query.filter(Project.commit_id == commit_id).first()
             if project:
                 logger.info(f"Found project by commit_id: {project.id}")
                 return project
             logger.info(
                 f"No project found with commit_id={commit_id}; not falling back to branch lookup."
             )
             return None

+        if not branch_name:
+            logger.info("No branch_name provided and commit_id is None; skipping branch lookup.")
+            return None
+
         # Fall back to branch_name lookup only if commit_id was not provided
         project = query.filter(Project.branch_name == branch_name).first()
app/modules/parsing/graph_construction/parsing_helper.py (1)

333-344: Avoid masking clone errors during cleanup.

If shutil.rmtree fails inside the exception handler, it can obscure the original clone error.

🐛 Safer cleanup
-            if os.path.exists(temp_clone_dir):
-                shutil.rmtree(temp_clone_dir)
+            if os.path.exists(temp_clone_dir):
+                shutil.rmtree(temp_clone_dir, ignore_errors=True)
...
-            if os.path.exists(temp_clone_dir):
-                shutil.rmtree(temp_clone_dir)
+            if os.path.exists(temp_clone_dir):
+                shutil.rmtree(temp_clone_dir, ignore_errors=True)
app/modules/repo_manager/repo_manager.py (1)

482-522: du -s returns blocks, not bytes; volume limits are broken.

The function uses ["du", "-s", path] but returns the result as bytes, causing volume thresholds to be off by 512–1024x. A 100GB repository would be reported as ~200MB, triggering eviction prematurely and breaking the volume limit feature entirely. Use ["du", "-sb", path] to get actual byte counts.

Suggested fix
-            result = subprocess.run(
-                ["du", "-s", path],
+            result = subprocess.run(
+                ["du", "-sb", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )
🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_repomap.py`:
- Around line 617-620: The hidden-directory filter is mistakenly skipping the
repo root because rel_root == "." matches the startswith(".") check; modify the
os.walk loop in parsing_repomap.py so you first guard against the repo root (and
optionally "..") before applying the hidden-part filter: e.g., compute rel_root
= os.path.relpath(root, repo_dir) and only run the any(part.startswith(".") for
part in rel_root.split(os.sep)) check when rel_root not in (".", ".."); update
the loop that references repo_dir, rel_root and os.walk accordingly.
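A hedged sketch of the guard described in the prompt above; the wrapper function is hypothetical, but repo_dir and rel_root mirror the names it uses.

import os

def iter_visible_dirs(repo_dir: str):
    """Walk repo_dir, skipping hidden directories but never skipping the repo root itself."""
    for root, dirs, files in os.walk(repo_dir):
        rel_root = os.path.relpath(root, repo_dir)
        # os.path.relpath(repo_dir, repo_dir) == ".", which startswith("."),
        # so guard the root (and "..") before applying the hidden-part filter.
        if rel_root not in (".", "..") and any(
            part.startswith(".") for part in rel_root.split(os.sep)
        ):
            continue
        yield root, dirs, files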

In `@app/modules/parsing/graph_construction/parsing_service.py`:
- Around line 163-180: The code currently always prepares a worktree using
repo_details.branch_name, ignoring repo_details.commit_id; update the
branch/worktree creation logic in parsing_service.py so prepare_for_parsing
respects commit_id: when repo_details.commit_id is present call
self.repo_manager.prepare_for_parsing(repo_details.repo_name,
ref=repo_details.commit_id) (otherwise use ref=repo_details.branch_name), then
create Repo(worktree) and set extracted_dir from that worktree to ensure the
correct revision is parsed.
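A minimal illustration of the ref-selection rule above; the helper name is hypothetical, and repo_details is assumed to carry commit_id, branch_name, and repo_name as in the prompt.

def resolve_parse_ref(repo_details) -> tuple[str, bool]:
    """Prefer a pinned commit over the branch name; return (ref, is_commit)."""
    if getattr(repo_details, "commit_id", None):
        return repo_details.commit_id, True
    return repo_details.branch_name, False

The result would then feed prepare_for_parsing(repo_details.repo_name, ref=ref, is_commit=is_commit) so the worktree matches the requested revision.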

In `@app/modules/repo_manager/repo_manager.py`:
- Around line 526-567: In _update_bare_repo_metadata, the recomputed volume from
_calculate_volume_bytes is being discarded because the code sets volume =
data.get("volume_bytes", 0) when volume is not None; change the logic so that if
_calculate_volume_bytes returns None you fall back to the existing metadata
value, i.e. call _calculate_volume_bytes(str(bare_repo_path)) into volume and
then if volume is None set volume = data.get("volume_bytes", 0), before
assigning data["volume_bytes"] = volume (refer to symbols
_update_bare_repo_metadata, bare_repo_path, _calculate_volume_bytes, and
data["volume_bytes"]).
- Around line 947-1009: The metadata deletion call incorrectly swaps
branch/commit values — the current self._delete_metadata_entry(repo_name, None
if ref else ref, ref if ref else None) always passes None for the branch when
ref is set; change the call in remove_worktree to pass the ref as the branch (or
correctly map branch vs commit) e.g. call self._delete_metadata_entry(repo_name,
ref, None) when ref represents a branch (or implement proper detection and pass
(repo_name, None, ref) when ref is a commit) so branch metadata is removed
correctly.
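A small sketch of the corrected branch/commit mapping; _delete_metadata_entry comes from the prompt, while the wrapper and the is_commit flag are assumptions.

def delete_worktree_metadata(delete_metadata_entry, repo_name: str, ref: str, is_commit: bool) -> None:
    """Pass the ref in the right slot: (repo, branch, None) for branches, (repo, None, commit) for commits."""
    if is_commit:
        delete_metadata_entry(repo_name, None, ref)
    else:
        delete_metadata_entry(repo_name, ref, None)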

In `@potpie/resources/repository.py`:
- Around line 45-78: The async method is_available (and any other async methods
in this class that call RepoManager) currently invokes blocking RepoManager
methods directly; wrap those calls with asyncio.to_thread (or
loop.run_in_executor for older Python) and await the result so the event loop is
not blocked—e.g., obtain repo_manager via self._get_repo_manager(), then call
await asyncio.to_thread(repo_manager.is_repo_available, repo_name, branch,
commit_id, user_id) and handle exceptions the same way; ensure asyncio is
imported and apply this pattern to all methods that call RepoManager (e.g.,
is_available, clone/fetch/worktree callers) so all blocking Git/filesystem
operations run in a threadpool.
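A sketch of the threadpool wrapping the prompt suggests; is_repo_available and its arguments are taken from the prompt, while the surrounding method shape and error handling are assumptions.

import asyncio

async def is_available(repo_manager, repo_name: str, branch: str | None = None,
                       commit_id: str | None = None, user_id: str | None = None) -> bool:
    """Run the blocking RepoManager lookup in a worker thread so the event loop stays responsive."""
    try:
        return await asyncio.to_thread(
            repo_manager.is_repo_available, repo_name, branch, commit_id, user_id
        )
    except Exception:
        # How errors are surfaced here is an assumption; mirror the original handler.
        return False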
- Around line 459-476: The current delete_worktree implementation calls the
private method _get_unique_worktree_path and manually deletes the directory with
shutil.rmtree; instead call the public
RepoManager.cleanup_unique_worktree(repo_name, user_id, unique_id) on the
repo_manager instance to perform proper git worktree cleanup (and fallback
removal) and return/propagate its boolean result or raise RepositoryError on
failure; remove direct use of _get_unique_worktree_path and manual
shutil.rmtree, and consider whether the unused ref parameter in delete_worktree
should be removed or ignored when delegating to cleanup_unique_worktree.
- Around line 276-303: get_volume_info currently calls the deprecated
repo_manager.list_available_repos() to compute repo_count (causing noisy
warnings and inefficiency); replace that call with a non-deprecated counting
approach: either count entries from the generator
repo_manager._iter_metadata_entries(user_id=...) to include all metadata
entries, or use repo_manager.list_repos(user_id=...) if the intent is to count
only bare repos (no worktrees). Update the repo_count calculation in
get_volume_info accordingly, remove the deprecated call, and ensure you pick and
document which set (all entries vs. bare repos) is intended for the volume
metrics.

In `@potpie/types/repository.py`:
- Around line 71-102: The from_dict classmethod in RepositoryInfo currently
forces status to RepositoryStatus.AVAILABLE; change it to read status =
data.get("status") and if status is already a RepositoryStatus use it, if it's a
str try to convert it to RepositoryStatus (normalizing case) and fall back to
RepositoryStatus.AVAILABLE only when missing or conversion fails; then pass that
parsed status into the cls(...) call instead of the hardcoded
RepositoryStatus.AVAILABLE.
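A sketch of the tolerant status parsing described above; the stand-in enum's member names and string values are assumptions, only the fallback behavior follows the prompt.

from enum import Enum

class RepositoryStatus(Enum):  # stand-in mirroring the PR's enum; values are assumed
    AVAILABLE = "available"
    EVICTED = "evicted"
    MISSING = "missing"

def parse_status(raw) -> RepositoryStatus:
    """Accept an enum, a matching string (any case), or fall back to AVAILABLE."""
    if isinstance(raw, RepositoryStatus):
        return raw
    if isinstance(raw, str):
        try:
            return RepositoryStatus(raw.lower())
        except ValueError:
            pass
    return RepositoryStatus.AVAILABLE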
🧹 Nitpick comments (5)
app/modules/parsing/graph_construction/parsing_controller.py (1)

186-190: Silence the unused db arg and drop dead code.

Ruff flags db as unused, and the commented task block can be removed to reduce clutter.

♻️ Suggested cleanup
-        db: Session,
+        _db: Session,
...
-        # asyncio.create_task(
-        #     CodeProviderService(db).get_project_structure_async(new_project_id)
-        # )

Also applies to: 206-208

potpie/config.py (3)

130-139: Add exception chaining for clearer error tracebacks.

The get_env_float function should use raise ... from err to preserve the exception chain, consistent with Python best practices for exception handling.

♻️ Proposed fix
     def get_env_float(key: str, default: Optional[float] = None) -> Optional[float]:
         value = get_env(key)
         if value is None:
             return default
         try:
             return float(value)
-        except ValueError:
+        except ValueError as err:
             raise ConfigurationError(
                 f"Environment variable {env_prefix}{key} must be a float"
-            )
+            ) from err

52-55: Consider adding validation for repos_eviction_threshold_percent.

The eviction threshold represents a percentage. If a value is provided, it would be prudent to validate it falls within a sensible range (0.0–100.0) in the validate() method to catch configuration errors early.

♻️ Suggested validation addition in validate()
# Add to validate() method after existing validations:
if self.repos_eviction_threshold_percent is not None:
    if not (0.0 <= self.repos_eviction_threshold_percent <= 100.0):
        raise ConfigurationError(
            "repos_eviction_threshold_percent must be between 0 and 100"
        )

282-293: Add docstring for consistency with other builder methods.

All other builder methods (postgres, neo4j, redis, llm, user, project_path, development_mode, pool_settings) include docstrings. Adding one here would maintain API consistency.

♻️ Suggested docstring
     def repositories(
         self,
         base_path: Optional[str] = None,
         volume_limit_bytes: Optional[int] = None,
         eviction_threshold_percent: Optional[float] = None,
         github_base_url: Optional[str] = None,
     ) -> RuntimeConfigBuilder:
+        """Set repository management configuration."""
         self._repos_base_path = base_path
potpie/resources/repository.py (1)

39-43: Consider caching the RepoManager instance.

_get_repo_manager() creates a new RepoManager instance on every call. If RepoManager has any initialization overhead or maintains state, this could be inefficient. Consider lazy-initializing and caching the instance.

♻️ Proposed caching pattern
     def __init__(
         self,
         config: RuntimeConfig,
         db_manager: DatabaseManager,
         neo4j_manager: Neo4jManager,
     ):
         super().__init__(config, db_manager, neo4j_manager)
+        self._repo_manager: Optional[RepoManager] = None

     def _get_repo_manager(self):
         """Get a RepoManager instance configured from RuntimeConfig."""
+        if self._repo_manager is None:
+            from app.modules.repo_manager.repo_manager import RepoManager
+            self._repo_manager = RepoManager(repos_base_path=self._config.repos_base_path)
+        return self._repo_manager
-        from app.modules.repo_manager.repo_manager import RepoManager
-
-        return RepoManager(repos_base_path=self._config.repos_base_path)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/repo_manager/repo_manager.py (1)

495-509: du -s returns blocks, not bytes—volume math is off.

The code treats du -s output as bytes, but it reports disk usage in blocks (typically 1 KiB units on GNU systems). This undercounts repo size and weakens eviction thresholds. Use byte-sized output explicitly with du -sb.

🐛 Proposed fix
-            result = subprocess.run(
-                ["du", "-s", path],
+            result = subprocess.run(
+                ["du", "-sb", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )
🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_service.py`:
- Around line 190-194: Call prepare_for_parsing with the tenant/user context:
update the call to self.repo_manager.prepare_for_parsing(...) to pass the user
id (e.g., repo_details.user_id or the current user_id variable) so worktrees are
registered per-tenant; also update the RepoManager.prepare_for_parsing signature
and any callers to accept a user_id parameter and use it for multi-tenant
metadata/eviction logic (identify symbols: RepoManager.prepare_for_parsing,
parsing_service.prepare_for_parsing call site, repo_details.user_id).

In `@app/modules/repo_manager/repo_manager.py`:
- Around line 651-658: The fetch for existing bare repos uses auth_token
directly, which can be None even after resolving a token into github_token;
change the call to _fetch_ref to pass the resolved github_token (not the
original auth_token) so private repos use the available credentials — i.e.,
after computing github_token via github_token = auth_token or
self._get_github_token() and building clone_url with _build_authenticated_url,
call self._fetch_ref(bare_repo_path, ref, github_token, repo_url) (and ensure
subsequent calls that need credentials use github_token) before calling
_update_bare_repo_metadata.
- Around line 582-599: The code currently sets remote.origin.url to an
authenticated URL (via _build_authenticated_url) which persists the token in the
repo config; instead, remove the subprocess.run call that configures
remote.origin.url and pass the authenticated_url directly to the git fetch
invocation (i.e., use authenticated_url as the remote/URL argument in the
subprocess.run that executes ["git", "-C", str(bare_repo_path), "fetch", ... ,
ref]) so the token is used for that single network operation and is never
written to .git/config.
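A sketch of the single-shot authenticated fetch the prompt describes; the argument names come from the prompt, while the helper name and timeout are assumptions.

import subprocess

def fetch_ref_once(bare_repo_path: str, ref: str, authenticated_url: str) -> None:
    """Fetch one ref using the authenticated URL for this call only,
    so the token is never persisted as remote.origin.url in .git/config."""
    subprocess.run(
        ["git", "-C", bare_repo_path, "fetch", authenticated_url, ref],
        check=True,
        capture_output=True,
        text=True,
        timeout=600,  # assumed timeout
    )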
🧹 Nitpick comments (2)
app/modules/parsing/graph_construction/parsing_repomap.py (1)

618-621: Prefer != "." (or a real tuple) for clarity.

rel_root not in (".") is a string-membership check, which is easy to misread as a tuple. Consider the explicit comparison to avoid ambiguity.

♻️ Suggested tweak
-            if rel_root not in (".") and any(
+            if rel_root != "." and any(
                 part.startswith(".") for part in rel_root.split(os.sep)
             ):
potpie/types/repository.py (1)

81-89: Handle invalid datetime strings gracefully.

datetime.fromisoformat will raise on malformed values; a single bad metadata row will crash callers. Consider a safe fallback to keep from_dict resilient.

♻️ Proposed fix
-        if isinstance(registered_at, str):
-            registered_at = datetime.fromisoformat(registered_at)
+        if isinstance(registered_at, str):
+            try:
+                registered_at = datetime.fromisoformat(registered_at)
+            except ValueError:
+                registered_at = None
@@
-        if isinstance(last_accessed, str):
-            last_accessed = datetime.fromisoformat(last_accessed)
+        if isinstance(last_accessed, str):
+            try:
+                last_accessed = datetime.fromisoformat(last_accessed)
+            except ValueError:
+                last_accessed = None

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
app/modules/parsing/graph_construction/parsing_helper.py (1)

470-500: Missing @staticmethod decorator.

The method extract_local_repo_metadata lacks a self parameter but is missing the @staticmethod decorator. While calling it via ParseHelper.extract_local_repo_metadata(repo) works, this is fragile:

  1. Calling via an instance (self.extract_local_repo_metadata(repo)) would fail with repo being bound to self.
  2. Static analysis tools and IDEs will flag this as an error.
  3. It's inconsistent with get_local_repo_languages which correctly has @staticmethod.
Suggested fix
+    @staticmethod
     def extract_local_repo_metadata(repo: Repo):
         languages = ParseHelper.get_local_repo_languages(repo.working_tree_dir)
app/modules/parsing/graph_construction/parsing_service.py (1)

418-508: Critical: duplicate_graph is defined outside the ParsingService class.

The duplicate_graph function at line 418 has incorrect indentation—it's at the module level rather than indented inside ParsingService. This will cause a NameError at runtime when the function tries to access self.search_service and self.inference_service, as self won't be bound.

🐛 Proposed fix: Indent the entire function to be a class method
-async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
-    await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
-    node_batch_size = 3000  # Fixed batch size for nodes
-    relationship_batch_size = 3000  # Fixed batch size for relationships
-    try:
+    async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
+        await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
+        node_batch_size = 3000  # Fixed batch size for nodes
+        relationship_batch_size = 3000  # Fixed batch size for relationships
+        try:

The entire function body (lines 418-508) needs to be indented by one level (4 spaces) to become a method of ParsingService.

app/modules/auth/tests/auth_service_test.py (1)

114-167: Update signup error tests to match the new tuple return contract.

AuthService.signup now catches exceptions and returns (None, {"error": ...}), so the duplicate/invalid/weak/empty-display-name tests expecting exceptions will fail. Update those tests to assert error instead of pytest.raises.

🔧 Example adjustment (apply pattern to the other error tests)
-            with patch(
-                "firebase_admin.auth.create_user",
-                side_effect=Exception("Email already exists"),
-            ):
-                with pytest.raises(Exception) as exc_info:
-                    auth_service.signup(
-                        "existing@example.com", "password123", "Test User"
-                    )
-                assert "Email already exists" in str(exc_info.value)
+            with patch(
+                "firebase_admin.auth.create_user",
+                side_effect=Exception("Email already exists"),
+            ):
+                success_response, error = auth_service.signup(
+                    "existing@example.com", "password123", "Test User"
+                )
+                assert success_response is None
+                assert "Email already exists" in error["error"]
🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_helper.py`:
- Line 543: The method extract_remote_repo_metadata is defined without self but
missing the `@staticmethod` decorator; add `@staticmethod` above
extract_remote_repo_metadata (matching extract_local_repo_metadata) so it can be
called on the class or instance without requiring self and to keep behavior
consistent with the other static extractor methods.
- Around line 526-534: The language byte counts are double-counting because the
chain uses separate if statements so a matched `.py` or `.ts` still falls
through into the final else; update the conditional in parsing_helper.py that
inspects entry.suffix to use an if/elif/elif/else chain (checking ".py", ".ts",
".js" in order) so only one branch increments language_bytes (the language_bytes
dict and the entry.suffix checks are the unique identifiers to locate the
change).
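A compact illustration of the single-branch accounting asked for above; the language keys and the helper shape are assumptions, only the suffix checks come from the comment.

from pathlib import Path

def tally_language_bytes(files: list[Path]) -> dict[str, int]:
    """Credit each file's size to exactly one bucket via if/elif, so nothing is double-counted."""
    language_bytes: dict[str, int] = {}
    for entry in files:
        size = entry.stat().st_size
        if entry.suffix == ".py":
            key = "python"
        elif entry.suffix == ".ts":
            key = "typescript"
        elif entry.suffix == ".js":
            key = "javascript"
        else:
            key = "other"
        language_bytes[key] = language_bytes.get(key, 0) + size
    return language_bytes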
♻️ Duplicate comments (1)
app/modules/parsing/graph_construction/parsing_service.py (1)

190-196: user_id not passed to prepare_for_parsing for tenant isolation.

The user_id parameter is available in scope but not passed to prepare_for_parsing. Per the RepoManager.prepare_for_parsing signature, user_id is used for multi-tenant metadata and eviction tracking. Not passing it may cause worktrees to be registered globally, leading to cross-tenant eviction or lookup mismatches.

Suggested fix
                     worktree = self.repo_manager.prepare_for_parsing(
                         repo_details.repo_name,
                         ref=ref,
                         is_commit=bool(repo_details.commit_id),
+                        user_id=user_id,
                     )
🧹 Nitpick comments (2)
app/modules/parsing/graph_construction/parsing_helper.py (2)

53-58: Remove commented-out code.

The commented-out block for filtering symlinked directories is dead code. With followlinks=False already set, symlinked directories won't be followed during traversal, and symlinked files are explicitly skipped at lines 62-63.

Also, per static analysis hint (B007), consider renaming unused dirnames to _dirnames if you want to explicitly indicate it's intentionally unused.

Suggested cleanup
-        for dirpath, dirnames, filenames in os.walk(path, followlinks=False):
-            # # Skip symlinked directories
-            # dirnames[:] = [
-            #     d for d in dirnames if not os.path.islink(os.path.join(dirpath, d))
-            # ]
-
+        for dirpath, _dirnames, filenames in os.walk(path, followlinks=False):
             for f in filenames:

344-357: Cleanup failure during exception handling could mask original error.

The shutil.rmtree() calls in exception handlers (lines 348, 354) are now unguarded. If cleanup fails (e.g., permission denied), it will raise a new exception that masks the original GitCommandError or unexpected error, making debugging harder.

Consider wrapping cleanup in a try/except to ensure the original exception propagates:

Suggested fix
         except GitCommandError as e:
             logger.exception("ParsingHelper: Git clone failed")
             # Clean up temp directory on error
             if os.path.exists(temp_clone_dir):
-                shutil.rmtree(temp_clone_dir)
+                try:
+                    shutil.rmtree(temp_clone_dir)
+                except Exception as cleanup_err:
+                    logger.warning(f"ParsingHelper: Failed to clean up temp directory: {cleanup_err}")
             raise ParsingFailedError(f"Failed to clone repository: {e}") from e
         except Exception as e:
             logger.exception("ParsingHelper: Unexpected error during git clone")
             # Clean up temp directory on error
             if os.path.exists(temp_clone_dir):
-                shutil.rmtree(temp_clone_dir)
+                try:
+                    shutil.rmtree(temp_clone_dir)
+                except Exception as cleanup_err:
+                    logger.warning(f"ParsingHelper: Failed to clean up temp directory: {cleanup_err}")
             raise ParsingFailedError(
                 f"Unexpected error during repository clone: {e}"
             ) from e

Dsantra92 and others added 3 commits January 22, 2026 17:24
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_helper.py`:
- Around line 502-541: In get_local_repo_languages, skip following symlinks to
avoid escaping the root or creating cycles: before using entry.is_dir() or
entry.is_file(), check entry.is_symlink() and continue if True (or use
entry.is_dir(follow_symlinks=False) on Python 3.13+); ensure both directory
traversal (stack.append(entry)) and file size/stat logic only run for
non-symlink entries and consider validating entry.resolve().is_relative_to(root)
if you want extra protection against escapes.
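A sketch of the symlink-safe traversal described above; os.DirEntry accepts follow_symlinks on all currently supported Python versions (unlike pathlib.Path before 3.13), so this variant sidesteps the version concern raised in a later comment. The helper name is hypothetical.

import os
from pathlib import Path

def iter_regular_files(root: Path):
    """Iteratively scan root, never following symlinked directories or files."""
    stack = [root]
    while stack:
        current = stack.pop()
        with os.scandir(current) as entries:
            for entry in entries:
                if entry.is_symlink():
                    continue  # skip symlinks entirely to avoid cycles and escapes
                if entry.is_dir(follow_symlinks=False):
                    stack.append(Path(entry.path))
                elif entry.is_file(follow_symlinks=False):
                    yield Path(entry.path)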
♻️ Duplicate comments (3)
potpie/resources/repository.py (2)

290-290: Still using deprecated list_available_repos() method.

Line 290 calls list_available_repos() which was flagged in a previous review as deprecated (logs warning: "list_available_repos() is deprecated. Use list_repos() for bare repos only."). This was marked as addressed in commit e2dd43f but the code still shows the deprecated call.

Replace with the non-deprecated counting approach. If counting bare repos only:

-repo_count = len(repo_manager.list_available_repos(user_id=user_id))
+repo_count = len(repo_manager.list_repos(user_id=user_id))

Or if counting all metadata entries (including worktrees):

-repo_count = len(repo_manager.list_available_repos(user_id=user_id))
+repo_count = sum(1 for _ in repo_manager._iter_metadata_entries(user_id=user_id))

Verify which repositories should be counted for volume metrics.


457-471: Use RepoManager.cleanup_unique_worktree instead of manual deletion.

Lines 459-471 access the private method _get_unique_worktree_path and manually delete with shutil.rmtree, bypassing proper git worktree cleanup. RepoManager provides the public method cleanup_unique_worktree(repo_name, user_id, unique_id) which handles proper git cleanup before falling back to filesystem removal.

🔧 Proposed fix
 async def delete_worktree(
     self,
     repo_name: str,
     ref: str,
     user_id: str,
     unique_id: str,
 ) -> bool:
     ...
     repo_manager = self._get_repo_manager()
     try:
-        worktree_path = repo_manager._get_unique_worktree_path(
-            repo_name=repo_name, ref=ref, user_id=user_id, unique_id=unique_id
-        )
-
-        if worktree_path and worktree_path.exists():
-            import shutil
-
-            shutil.rmtree(worktree_path)
-            logger.info(f"Deleted worktree: {worktree_path}")
-            return True
-
-        logger.warning(f"Worktree not found: {worktree_path}")
-        return False
+        return repo_manager.cleanup_unique_worktree(
+            repo_name=repo_name,
+            user_id=user_id,
+            unique_id=unique_id,
+        )
     except Exception as e:
         raise RepositoryError(f"Failed to delete worktree: {e}") from e

Note: The ref parameter appears unused by cleanup_unique_worktree. Consider removing it from the method signature if it's not needed elsewhere.

app/modules/parsing/graph_construction/parsing_helper.py (1)

543-543: Missing @staticmethod on extract_remote_repo_metadata (same as prior comment).

🧹 Nitpick comments (2)
potpie/resources/repository.py (1)

39-43: Consider caching the RepoManager instance.

Each method call creates a new RepoManager instance. If methods are called frequently, this could add overhead. Consider caching the instance as a private attribute initialized once:

def __init__(self, config: RuntimeConfig, db_manager: DatabaseManager, neo4j_manager: Neo4jManager):
    super().__init__(config, db_manager, neo4j_manager)
    self._repo_manager = None

def _get_repo_manager(self):
    """Get a RepoManager instance configured from RuntimeConfig."""
    if self._repo_manager is None:
        from app.modules.repo_manager.repo_manager import RepoManager
        self._repo_manager = RepoManager(repos_base_path=self._config.repos_base_path)
    return self._repo_manager

However, if RepoManager is intentionally created per-operation for isolation or thread-safety, the current approach is fine.

app/modules/parsing/graph_construction/parsing_helper.py (1)

51-64: Rename the unused dirnames loop variable to satisfy lint.

This addresses the B007 warning without behavior change.

♻️ Proposed diff
-        for dirpath, dirnames, filenames in os.walk(path, followlinks=False):
+        for dirpath, _dirnames, filenames in os.walk(path, followlinks=False):

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/repo_manager/repo_manager.py (1)

495-508: du -s returns block count, not bytes—missing the -b flag.

The code comment claims to use du -sb, but line 499 only specifies ["du", "-s", path]. Without the -b flag, du returns size in display block units (typically 1024-byte blocks by default, configurable via environment variables), not actual bytes. This causes the function to return values that are ~1000x smaller than intended, breaking volume tracking.

Fix
             # Use 'du -sb' to get size in bytes (summary, bytes)
             # This is more accurate and faster than walking the directory tree
             result = subprocess.run(
-                ["du", "-s", path],
+                ["du", "-sb", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )
🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_helper.py`:
- Line 470: The method extract_local_repo_metadata is defined without self but
missing the `@staticmethod` decorator; add `@staticmethod` above
extract_local_repo_metadata so it behaves like get_local_repo_languages and can
be called both on the class and instances without accidentally receiving self,
ensuring the signature matching and preventing bugs when invoking
self.extract_local_repo_metadata(repo).
- Around line 520-523: The code uses entry.is_dir(follow_symlinks=False) (and
inconsistently entry.is_file()) which requires Python 3.13; either raise the
package minimum to >=3.13 or change these calls to be compatible with 3.10–3.12
— e.g., replace entry.is_dir(follow_symlinks=False) with a symlink-safe check
using entry.is_symlink() and entry.is_dir() (and make entry.is_file() likewise
consistent) in parsing_helper.py so the directory/file detection does not rely
on the newer follow_symlinks parameter; update all uses (entry.is_dir,
entry.is_file) accordingly to ensure consistent symlink handling or bump
requires-python to >=3.13 in pyproject.toml.

In `@app/modules/repo_manager/repo_manager.py`:
- Around line 1522-1534: The current repo eviction removes the filesystem
directory but leaves orphaned metadata; update the eviction logic in the
function containing the shutil.rmtree call (e.g., evict_repo / the method in
repo_manager.py handling repo eviction) to also remove metadata keyed to that
repo: before/after deleting repo_dir, query the centralized metadata store (use
the repository identifier repo_name) and delete or mark all entries belonging to
that repo (e.g., call the metadata service method to list entries by repo and
then delete them or use a bulk delete API such as
delete_entries_by_repo/delete_by_namespace), handle and log any metadata
deletion errors but continue cleanup, and ensure the code paths use the same
repo identifier and transactional ordering so metadata removal won’t be left
orphaned if rmtree succeeds.
- Around line 805-809: The call to _fetch_ref uses auth_token directly which may
be None; instead, resolve the token the same way ensure_bare_repo does and pass
that resolved token to _fetch_ref (e.g., obtain resolved_token via the same
helper used in ensure_bare_repo and replace auth_token with resolved_token when
calling _fetch_ref with repo_url). Ensure you use the same token-resolution path
as in ensure_bare_repo so private repos can be fetched.
🧹 Nitpick comments (8)
app/modules/repo_manager/repo_manager.py (7)

66-66: Unused constant _DEFAULT_VOLUME_LIMIT_GB.

The constant is defined but the initialization code manually calculates 100 * 1024 * 1024 * 1024 instead of using it.

♻️ Suggested fix
-                self.volume_limit_bytes = 100 * 1024 * 1024 * 1024
+                self.volume_limit_bytes = self._DEFAULT_VOLUME_LIMIT_GB * 1024 * 1024 * 1024
         else:
-            self.volume_limit_bytes = 100 * 1024 * 1024 * 1024
+            self.volume_limit_bytes = self._DEFAULT_VOLUME_LIMIT_GB * 1024 * 1024 * 1024

Also applies to: 106-108


144-151: Token sanitization may miss newer GitHub token formats.

The regex ghp_[a-zA-Z0-9]{36} only matches classic PATs. GitHub fine-grained tokens use github_pat_ prefix with variable length.

🔒 Suggested fix for broader token coverage
     @staticmethod
     def _sanitize_error_message(error_msg: str) -> str:
         """Remove potential credentials from error messages."""
         import re
 
         sanitized = re.sub(r"oauth2:[^@]+@", "oauth2:***@", error_msg)
-        sanitized = re.sub(r"ghp_[a-zA-Z0-9]{36}", "***", sanitized)
+        # Classic PATs (ghp_), fine-grained PATs (github_pat_), OAuth tokens (gho_), etc.
+        sanitized = re.sub(r"(ghp_|gho_|ghu_|ghs_|ghr_)[a-zA-Z0-9_]+", "***", sanitized)
+        sanitized = re.sub(r"github_pat_[a-zA-Z0-9_]+", "***", sanitized)
         return sanitized

196-205: Minor duplication in path helpers.

_get_repo_dir and _get_repo_local_path are identical. Consider consolidating or documenting the semantic distinction.


291-298: datetime.utcnow() is deprecated in Python 3.12+.

Consider using datetime.now(timezone.utc) for forward compatibility.

♻️ Suggested fix
+from datetime import datetime, timedelta, timezone
+
 @staticmethod
 def _deserialize_datetime(dt_str: Optional[str]) -> datetime:
     if not dt_str:
-        return datetime.utcnow()
+        return datetime.now(timezone.utc).replace(tzinfo=None)
     try:
         return datetime.fromisoformat(dt_str)
     except ValueError:
         logger.warning(f"Failed to parse datetime '{dt_str}'; defaulting to now()")
-        return datetime.utcnow()
+        return datetime.now(timezone.utc).replace(tzinfo=None)

Note: This pattern should be applied to all datetime.utcnow() calls in the file (lines 542, 547, 1175, 1312, 1319, 1393, 1577).


701-708: Use logger.exception for better stack trace logging.

When catching exceptions in except blocks, logger.exception automatically includes the traceback, while logger.error does not.

♻️ Suggested fix
         except subprocess.TimeoutExpired:
-            logger.error("Git clone timeout", repo_name=repo_name)
+            logger.exception("Git clone timeout for %s", repo_name)
             raise RuntimeError(
                 f"Git clone timed out after {self._CLONE_TIMEOUT // 60} minutes"
-            )
+            ) from None

1134-1136: Redundant exception in logging.exception call.

logger.exception automatically includes the exception info; passing e in the message is redundant.

♻️ Suggested fix
         except Exception as e:
-            logger.exception(f"Error cleaning up unique worktrees: {e}")
+            logger.exception("Error cleaning up unique worktrees")
             return False

1447-1449: Deprecation warning on every call may be noisy.

Consider using warnings.warn with DeprecationWarning category for proper deprecation handling, or log once using a class-level flag.

♻️ Suggested fix using warnings module
+import warnings
+
 def list_available_repos(
     self,
     user_id: Optional[str] = None,
     limit: Optional[int] = None,
 ) -> List[Dict[str, Any]]:
     """
     List all available repositories including worktrees.

     DEPRECATED: Use list_repos() for v2-style behavior (bare repos only).
     """
-    logger.warning(
-        "list_available_repos() is deprecated. Use list_repos() for bare repos only."
-    )
+    warnings.warn(
+        "list_available_repos() is deprecated. Use list_repos() for bare repos only.",
+        DeprecationWarning,
+        stacklevel=2,
+    )
app/modules/parsing/graph_construction/parsing_helper.py (1)

53-64: Clean up unused variable and dead code.

The dirnames variable is unused (as flagged by static analysis), and the commented-out code block (lines 54-57) is redundant since followlinks=False already prevents traversing into symlinked directories.

Suggested cleanup
     @staticmethod
     def get_directory_size(path):
         total_size = 0
-        for dirpath, dirnames, filenames in os.walk(path, followlinks=False):
-            # # Skip symlinked directories
-            # dirnames[:] = [
-            #     d for d in dirnames if not os.path.islink(os.path.join(dirpath, d))
-            # ]
-
+        for dirpath, _, filenames in os.walk(path, followlinks=False):
             for f in filenames:
                 fp = os.path.join(dirpath, f)
                 # Skip all symlinks

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@potpie/resources/repository.py`:
- Around line 379-429: The create_worktree method currently ignores the
exists_ok parameter (it’s accepted in the signature but
repo_manager.create_worktree is called with exists_ok=True), so propagate the
caller's intent by passing the exists_ok parameter through to
repo_manager.create_worktree (i.e., replace the hardcoded exists_ok=True with
exists_ok=exists_ok) or, if the design decides existence behavior should not be
configurable, remove exists_ok from create_worktree's signature and callers;
update references to the create_worktree function accordingly to maintain
consistency.
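A minimal sketch of the pass-through fix; the surrounding signature and keyword names are paraphrased from the comment and may not match the real method exactly.

def create_worktree(repo_manager, repo_name: str, ref: str, unique_id: str,
                    exists_ok: bool = False):
    """Forward the caller's exists_ok instead of hardcoding True; other kwargs elided."""
    return repo_manager.create_worktree(
        repo_name=repo_name,
        ref=ref,
        unique_id=unique_id,
        exists_ok=exists_ok,  # was previously pinned to True regardless of the caller
    )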
🧹 Nitpick comments (1)
potpie/resources/repository.py (1)

40-44: Consider caching the RepoManager instance.

_get_repo_manager() creates a new RepoManager instance on every method call. If RepoManager initialization is lightweight and stateless, this is fine. However, if it involves any setup cost (e.g., filesystem checks, configuration parsing), consider lazy initialization with caching:

♻️ Optional: Lazy singleton pattern
+    _repo_manager: Optional["RepoManager"] = None
+
     def _get_repo_manager(self):
         """Get a RepoManager instance configured from RuntimeConfig."""
+        if self._repo_manager is None:
+            from app.modules.repo_manager.repo_manager import RepoManager
+            self._repo_manager = RepoManager(repos_base_path=self._config.repos_base_path)
+        return self._repo_manager
-        from app.modules.repo_manager.repo_manager import RepoManager
-
-        return RepoManager(repos_base_path=self._config.repos_base_path)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/repo_manager/repo_manager.py (1)

531-543: du -s returns blocks, not bytes; code misinterprets the output.

The comment claims "Use 'du -sb'" but the actual code only uses -s. Across systems, du -s returns blocks (1024-byte on Linux, 512-byte on macOS/BSD), not bytes. The code treats these directly as bytes, causing incorrect size calculations that affect eviction and quota checks.

🔧 Suggested fix
             result = subprocess.run(
-                ["du", "-s", path],
+                ["du", "-sk", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )
 
             if result.returncode == 0:
                 # du output format: "size_bytes\tpath"
-                size_str = result.stdout.split()[0]
-                return int(size_str)
+                size_kb = int(result.stdout.split()[0])
+                return size_kb * 1024
🧹 Nitpick comments (1)
app/modules/parsing/graph_construction/parsing_helper.py (1)

51-53: Rename unused loop variable to avoid lint noise.

dirnames isn’t used; rename to _dirnames (or re-enable dir filtering) to align with Ruff B007.

♻️ Suggested tweak
-        for dirpath, dirnames, filenames in os.walk(path, followlinks=False):
+        for dirpath, _dirnames, filenames in os.walk(path, followlinks=False):

dhirenmathur
dhirenmathur previously approved these changes Feb 4, 2026
@Dsantra92 Dsantra92 dismissed dhirenmathur’s stale review February 4, 2026 13:23

The merge-base changed after approval.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/parsing/graph_construction/parsing_controller.py (1)

117-176: ⚠️ Potential issue | 🔴 Critical

Fix NameError: asyncio is not imported.
Line 152 calls asyncio.create_task(...), but only create_task is imported. This will raise at runtime in the demo-repo path. Use the imported create_task (or import asyncio).

🛠️ Proposed fix
-                    asyncio.create_task(
+                    create_task(
                         CodeProviderService(db).get_project_structure_async(
                             new_project_id
                         )
                     )
🧹 Nitpick comments (2)
app/modules/parsing/graph_construction/parsing_controller.py (2)

261-265: db parameter is unused after disabling the async task.
Ruff flags db as unused. Consider renaming to _db or removing it (and updating call sites) to keep the signature clean.

🧹 Minimal lint-friendly tweak
-        db: Session,
+        _db: Session,

281-283: Remove commented-out code or guard with a feature flag.
Leaving dead code makes future intent unclear. If needed later, wrap with a feature flag/config instead of commenting out.

🧹 Proposed cleanup
-        # asyncio.create_task(
-        #     CodeProviderService(db).get_project_structure_async(new_project_id)
-        # )

dhirenmathur
dhirenmathur previously approved these changes Feb 4, 2026
@Dsantra92 Dsantra92 dismissed dhirenmathur’s stale review February 4, 2026 13:31

The merge-base changed after approval.

@Harigithub11 Harigithub11 self-requested a review February 4, 2026 23:28
Harigithub11
Harigithub11 previously approved these changes Feb 4, 2026
@Dsantra92 Dsantra92 dismissed Harigithub11’s stale review February 4, 2026 23:28

The merge-base changed after approval.

@Harigithub11 Harigithub11 removed the request for review from nndn February 4, 2026 23:55
Harigithub11
Harigithub11 previously approved these changes Feb 4, 2026
@Dsantra92 Dsantra92 dismissed Harigithub11’s stale review February 4, 2026 23:55

The merge-base changed after approval.

@coderabbitai
Contributor

coderabbitai bot commented Feb 5, 2026

Caution

Failed to replace (edit) comment. This is likely due to insufficient permissions or the comment being deleted.

Error details
{"name":"HttpError","status":401,"request":{"method":"PATCH","url":"https://api.github.com/repos/potpie-ai/potpie/issues/comments/3781307221","headers":{"accept":"application/vnd.github.v3+json","user-agent":"octokit.js/0.0.0-development octokit-core.js/7.0.6 Node.js/24","authorization":"token [REDACTED]","content-type":"application/json; charset=utf-8"},"body":{"body":"<!-- This is an auto-generated comment: summarize by coderabbit.ai -->\n<!-- walkthrough_start -->\n\n## Walkthrough\n\nThis PR introduces a fundamental shift in repository management architecture, replacing simple path-based layouts with a Bare Repo + Worktree system featuring centralized metadata and multi-tenant support. Core changes include migrating from AsyncSession to synchronous Session in parsing APIs, rewriting repository helpers to use Git cloning instead of tarballs, expanding RepoManager with a new layered architecture, adding a RepositoryResource wrapper for async operations, extending configuration for repository settings, and improving token selection security by switching from random to cryptographic randomness.\n\n## Changes\n\n|Cohort / File(s)|Summary|\n|---|---|\n|**Parsing Controller & Session Migration** <br> `app/modules/parsing/graph_construction/parsing_controller.py`|Changed public method signatures to use synchronous `Session` instead of `AsyncSession`; made `user_email` parameter optional (`str \\| None`) in both `parse_directory` and `handle_new_project`.|\n|**Parsing Helper Rewrite** <br> `app/modules/parsing/graph_construction/parsing_helper.py`|Removed tarball download/extraction workflow entirely; replaced with Git cloning and text-file filtering; converted `extract_repository_metadata` to instance method; redesigned `get_local_repo_languages` as staticmethod with pathlib-based traversal; updated language detection and metadata aggregation logic; removed extensive setup workflow methods.|\n|**Parsing Service & Type Changes** <br> `app/modules/parsing/graph_construction/parsing_service.py`|Updated `project_id` type from `int` to `str` in public signatures; refactored repo setup flow to use RepoManager; enhanced commit detection via git commands; added language detection from GitHub API with fallback; added public `repo_manager` attribute.|\n|**Parsing Utilities** <br> `app/modules/parsing/graph_construction/parsing_repomap.py`|Modified directory-skipping logic to operate on relative paths instead of absolute paths; now checks for leading-dot segments in relative root paths.|\n|**Project Service Types** <br> `app/modules/projects/projects_service.py`|Changed `project_id` type from `int` to `str` across `update_project_status`, `get_project_from_db_by_id`, and new `update_project` staticmethod; made `branch_name` parameter optional (`str \\| None`) in `get_project_from_db`.|\n|**Repository Manager Core** <br> `app/modules/repo_manager/repo_manager.py`|Complete architectural rewrite: introduced `.bare` + `worktrees` + `.meta` layout; added multi-tenant support via `user_id`; implemented rich metadata I/O helpers; new public methods for listing, path/info retrieval, availability checks, volume accounting; added bare repo and worktree lifecycle management (`ensure_bare_repo`, `create_worktree`, `remove_worktree`); added `prepare_for_parsing` orchestration method; implemented tiered eviction strategy for worktrees and repos; added authentication via token-based URL building.|\n|**Repository Manager Interface** <br> `app/modules/repo_manager/repo_manager_interface.py`|Added four new abstract 
methods: `get_total_volume_bytes`, `get_volume_percentage`, `prepare_for_parsing`, `evict_stale_worktrees` for volume management and parsing orchestration.|\n|**Repository Resource Layer** <br> `potpie/resources/repository.py`|New async wrapper around RepoManager providing domain-level API (`is_available`, `register`, `get_path`, `get_info`, `list_repos`, `evict`, `get_volume_info`, `prepare_for_parsing`, worktree lifecycle); converts generic exceptions to `RepositoryError`; returns typed data structures (`RepositoryInfo`, `VolumeInfo`).|\n|**Configuration & Settings** <br> `potpie/config.py`|Added repository-related config fields: `repos_base_path`, `repos_volume_limit_bytes`, `repos_eviction_threshold_percent`, `github_base_url`; new `RuntimeConfigBuilder.repositories()` method for builder-pattern configuration.|\n|**Types & Exceptions** <br> `potpie/exceptions.py`, `potpie/types/repository.py`|Added `RepositoryError` and `RepositoryNotFoundError` exception classes; new `RepositoryStatus` enum (AVAILABLE, EVICTED, MISSING); `RepositoryInfo` dataclass with metadata and `from_dict` classmethod; `VolumeInfo` dataclass with GB-based properties and availability calculations.|\n|**Public API Exports** <br> `potpie/__init__.py`, `potpie/types/__init__.py`|Exposed new repository types and exceptions in module public APIs: `RepositoryInfo`, `RepositoryStatus`, `VolumeInfo`, `RepositoryError`, `RepositoryNotFoundError`.|\n|**Runtime Integration** <br> `potpie/runtime.py`|Added lazy-loaded `repositories` property exposing `RepositoryResource`; added `_repositories` private field for singleton instance management.|\n|**Token Selection Security** <br> `app/modules/code_provider/github/github_service.py`, `app/modules/code_provider/provider_factory.py`, `app/modules/intelligence/tools/web_tools/code_provider_*.py` (5 files)|Replaced `random.choice` with `secrets.choice` for GitHub token selection across all code provider modules, improving cryptographic randomness for token selection without altering control flow.|\n\n## Sequence Diagram(s)\n\n```mermaid\nsequenceDiagram\n    participant Client\n    participant RepositoryResource\n    participant RepoManager\n    participant GitOps as Git Operations\n    participant Metadata as Metadata Storage\n    \n    Client->>RepositoryResource: prepare_for_parsing(repo_name, ref, user_id)\n    \n    RepositoryResource->>RepoManager: prepare_for_parsing(repo_name, ref, ...)\n    \n    RepoManager->>RepoManager: _evict_if_needed()\n    note over RepoManager: Check storage threshold\n    \n    RepoManager->>GitOps: ensure_bare_repo(clone/fetch)\n    GitOps-->>RepoManager: bare_repo_path\n    \n    RepoManager->>Metadata: _update_bare_repo_metadata()\n    Metadata-->>RepoManager: stored\n    \n    RepoManager->>GitOps: create_worktree(ref)\n    GitOps-->>RepoManager: worktree_path\n    \n    RepoManager->>Metadata: _write_metadata(worktree info)\n    Metadata-->>RepoManager: stored\n    \n    RepoManager-->>RepositoryResource: worktree_path\n    RepositoryResource-->>Client: path ready for parsing\n```\n\n## Estimated code review effort\n\n🎯 4 (Complex) | ⏱️ ~45 minutes\n\n## Possibly related PRs\n\n- **#483**: Adds and modifies RepoManager functionality and integrates it into parsing/code-provider flows; shares core classes and methods like RepoManager with the main PR.\n- **#502**: Modifies parsing and repo-management modules (parsing_controller, parsing_helper, parsing_service, repo_manager) with overlapping function/class changes.\n- **#334**: Modifies 
GithubService implementation with GitHub app authentication logic; complements token selection security improvements in main PR.\n\n## Suggested reviewers\n\n- ASCE-D\n- dhirenmathur\n- nndn\n\n## Poem\n\n> 🐰 A rabbit hops through repositories new,  \n> With worktrees and metadata, shiny and true,  \n> No tarballs weigh down our swift Git-cloned fare,  \n> Sessions sync up with a multi-tenant care,  \n> And tokens now hide with cryptographic flair! 🔐\n\n<!-- walkthrough_end -->\n\n\n<!-- pre_merge_checks_walkthrough_start -->\n\n<details>\n<summary>🚥 Pre-merge checks | ✅ 1 | ❌ 2</summary>\n\n<details>\n<summary>❌ Failed checks (1 warning, 1 inconclusive)</summary>\n\n|     Check name     | Status         | Explanation                                                                                                                                                                                                                                                             | Resolution                                                                                                                                                                                                                                                                                           |\n| :----------------: | :------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| Docstring Coverage | ⚠️ Warning     | Docstring coverage is 75.00% which is insufficient. The required threshold is 80.00%.                                                                                                                                                                                   | Write docstrings for the functions missing them to satisfy the coverage threshold.                                                                                                                                                                                                                   |\n|     Title check    | ❓ Inconclusive | The title 'Repo manager with parsing logic' is vague and does not clearly convey the primary changes in the changeset. While the PR involves repo manager modifications and parsing logic, the title is too generic and could apply to many different types of changes. | Consider revising the title to be more specific, such as 'Refactor RepoManager with bare repo architecture and metadata storage' or 'Implement bare repository sharing and per-user worktree management in RepoManager' to better reflect the major architectural changes and the scope of the work. |\n\n</details>\n<details>\n<summary>✅ Passed checks (1 passed)</summary>\n\n|     Check name    | Status   | Explanation                                                 |\n| :---------------: | :------- | :---------------------------------------------------------- |\n| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. 
|\n\n</details>\n\n<sub>✏️ Tip: You can configure your own custom pre-merge checks in the settings.</sub>\n\n</details>\n\n<!-- pre_merge_checks_walkthrough_end -->\n\n<!-- finishing_touch_checkbox_start -->\n\n<details>\n<summary>✨ Finishing touches</summary>\n\n- [ ] <!-- {\"checkboxId\": \"7962f53c-55bc-4827-bfbf-6a18da830691\"} --> 📝 Generate docstrings\n<details>\n<summary>🧪 Generate unit tests (beta)</summary>\n\n- [ ] <!-- {\"checkboxId\": \"f47ac10b-58cc-4372-a567-0e02b2c3d479\", \"radioGroupId\": \"utg-output-choice-group-unknown_comment_id\"} -->   Create PR with unit tests\n- [ ] <!-- {\"checkboxId\": \"07f1e7d6-8a8e-4e23-9900-8731c2c87f58\", \"radioGroupId\": \"utg-output-choice-group-unknown_comment_id\"} -->   Post copyable unit tests in a comment\n- [ ] <!-- {\"checkboxId\": \"6ba7b810-9dad-11d1-80b4-00c04fd430c8\", \"radioGroupId\": \"utg-output-choice-group-unknown_comment_id\"} -->   Commit unit tests in branch `repo_manager`\n\n</details>\n\n</details>\n\n<!-- finishing_touch_checkbox_end -->\n\n<!-- tips_start -->\n\n---\n\nThanks for using [CodeRabbit](https://coderabbit.ai?utm_source=oss&utm_medium=github&utm_campaign=potpie-ai/potpie&utm_content=566)! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.\n\n<details>\n<summary>❤️ Share</summary>\n\n- [X](https://twitter.com/intent/tweet?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A&url=https%3A//coderabbit.ai)\n- [Mastodon](https://mastodon.social/share?text=I%20just%20used%20%40coderabbitai%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20the%20proprietary%20code.%20Check%20it%20out%3A%20https%3A%2F%2Fcoderabbit.ai)\n- [Reddit](https://www.reddit.com/submit?title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&text=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code.%20Check%20it%20out%3A%20https%3A//coderabbit.ai)\n- [LinkedIn](https://www.linkedin.com/sharing/share-offsite/?url=https%3A%2F%2Fcoderabbit.ai&mini=true&title=Great%20tool%20for%20code%20review%20-%20CodeRabbit&summary=I%20just%20used%20CodeRabbit%20for%20my%20code%20review%2C%20and%20it%27s%20fantastic%21%20It%27s%20free%20for%20OSS%20and%20offers%20a%20free%20trial%20for%20proprietary%20code)\n\n</details>\n\n<sub>Comment `@coderabbitai help` to get the list of available commands and usage tips.</sub>\n\n<!-- tips_end -->\n\n<!-- internal state start -->\n\n\n<!-- 
DwQgtGAEAqAWCWBnSTIEMB26CuAXA9mAOYCmGJATmriQCaQDG+Ats2bgFyQAOFk+AIwBWJBrngA3EsgEBPRvlqU0AgfFwA6NPEgQAfACgjoCEYDEZyAAUASpETZWaCrKPR1AGxJcbJbvkhmTDRSPgB3dVgeZ0R4DCJID3wieAZIAAozAFYANhyASkhIAwBVGwAZLlhcXG5EDgB6BpTcWGwBDSZmBv9a+BIwbR78PpIe7A8PBtycotLESi4AEURMXCoATgAmOYBlfGwKBhJIASoMBlguCj98AH0gjBDKSEAkwhhnUlxT88uuILie1w1Gw9X43DIcwAwjdqHR0JxIFsAAxbHJgZEARjAW0x0FxHAALFkOMiNgAtIxLaQMCjwbjifAYDgGKC+fwAWWCoUgGHwYUgsEwtC8yDQk0gN38sQIdOkkAAZvg+NwYnEEsLFcrGLDxPFIKDKFBNcDEABrMCICEMeAK1KQMLKs3rEjSDSsyAAORIArtJA8tDFtCU9AIgRIwNo1DQ7qgAFEJKlGVh/B5Uv1kNhuFGaLRY5AAJLMbheNgYXOnZwnKX4GXK+SIIV0/URVqK+AeGgUDgCJICSBh1ZSeyq4753zMfBSeh8jBgG4AR2w8Bu9AV2AuybFGHoaGDN0QC13eEIXWY6hQh+w8sB+ADL3Xm/gTMQ7qMrLAhgMJigZHo+AVHACGIMhlArM92C4Xh+GEURxCkGR5CYJQqFUdQtB0fRv3AKA4FQVBMCAwhSHIKhwJYMtESoAUHCcFxTiQxRlDQzRtF0T93x/Iw0G4bgGknWgJmkHo1XiZoqG4WA7iYDBEHWbAxGfDARIoWJ4mkpl1jvLwKA0bhZBZAAiYyDAsSAAEECxA0i4XoWignogDGCFeJpCMKAoRc0geHaNM0kfRSmXseAiCeXBDnlMNDUgXYAEVynMjxLhIZh5F2aRYiC9JaAELh0sPJTCjiOSSD3fhAPMxBZAufLMqwQErFEogoU0ihtJeNhWkUZBVVUkg7loFc4PrdAd0FYUvDucgwjuXh8BEMQABp0AVBU4PVSAlAhHcyAYeRcFkbgNs1HMVDQBZ0AYY5D2iGpKFk/M4wADxoMbuF8+1erQTqXgOiFkDDcUkgFT0mROJU+ENCg7hS7QPAyKGYYBDwuDkvgAB8vTBoqsAEEYol6hYBqGsQRs1FyRX66bZrahbNA9Dkp3BtrmAsqqaoypSBwCWquYhzb+y+n7VJQXH8ZWtaxHhTrYG65bxS7DbWhOd7e3tcyrALexDgVNAxw9AtyzawTjlG3daCEUFvngYtlVwMUruVQb4g8eR0j3eheaChUWfsBcPHFZLUo0ZVmAAbklFKmf4PgpQD03KuqhgvawUFnnyccfSdRJkhSfUTXOs17GSwS031G41puC5IoCcUQqwVsomm+wk9gNq+VBexOaCimy4SdISA0IgNHli2rY2pIEi8KQPEQZalEr5sEmOChgUBc6k4HQvEAzj1fDiLSTZOEgnpLdNvl7jb+aUSdI+ldRlQzUX0AdNB9qFb5DQB2ATizHN4TmnTRUwMHSRAHD/SOiYjwCwaAsAqQU/pugZnEbUe5LZyUogDAIZAHA3EgC1JQVg2qJhQulCgiZjgNDjMjAAEv6CEkNVjeVQBBcs8JtQ3GVEQTA8AABe8IwwVy8GIcB1Yo4SHFOVTaJNvgbwuM+MAAhzoCMLjqUqyZn6qz8rdWAr4GbaHLAY7WvE7Zaj4CfJAeoEi0H5LJF031gH8mQAPIeI8eC0zgmo6gSllpyRBJmbMcI55m0gCQKQ5Yc5EDzkQQoYQEBeHQFGBkSsAjKxit3OcSjoFaPVprN8ZlEpdm8S+bmojNqiADmRJSyAnIn38KvdhKoPppHYOoDM7lIAcgjLLOy9cQR4L/rZLgAADBe0Q+rExuKTFw6QawDQjHDMEjVVLql8EuaQuB565TZknFOy0obLCTAAbTRstcyGBZAAF18jDLeJAUZJBAKE36oNKZspZCzNuPMtes8uDLLUkQNZ145JbLyhk/ZCxuyQCWMc05FkLnXNuYCBgAcbrDP+eqFqRt2oUCRVgYZPE+ICSEogFSALxI8SkjJNGClkxkvVBpbFkxKB6VkMMhm3TFDBVCv03+gTcwjLGb3KmPoabzTgp8/w3zFl/KakCjZELKB3HgLQVG6xFXQ1hh2NVFBlrU0AXBZVqr7DqvceKsQDxuSLGsB4sQZCKEkFBTsjm8CMA3LuQ8wCwqpqioNWISV9wlA/KWXKkg6yQUGkhUanVGqkZwx1ZATGoNyB6t9ba3A0aTW6rNXTS1TxQh/PTfa1IjqBZgtde65FqLkDoqalirSzLcXPwJbxfiigSX0rEkQCSVKXzyUCspQmDKZINp0qy9lUArDNJQLbVe6BOxgSUiyIoUBzLBgAc0hoNt6kVlaQdEZKdbk+xYPcxA/tA4/2DqHPF9zCVttLsJIdXae2MppQOztRBGWjpZfpdlplLBcgwLaDZkAABiHYTjnPFLIfhFAjDlDiPKS4mBSDGoANRbAABwNBxJhgwxlDLuVMAYO9xLRQfopZJV9/a6VPs/T/DwDDWVGRMgUqyJEwLwnss4eQTlkOuUQB0icTNQzOCURKGxYQMBJD3A0E+6w9YaMdBQM0CoQHZVsTJ2gdxhQwxelQC1wIKDiY8IUTU6gxSHnwDaWyQEf7llSMUrAk9UgRzjnreEjdGBJHIIo5R9Aax1nojcdY/QJHw01GgAc8mwB2gSXFxW+deJtT1lEQSS9vNMnVPmLFUgGn0HkwZjNgWH4uAeAs06AbCjHtZlFgKyYwBpjNCcJQKLnBOdKYRYqwJq7hi6vQdICwPAKj80eMzY1BngSBsFGgiAI6FcU3cJIDBxR3DmZ1Pc0YQkLYtTcScNA1tfI26dA0/L4R6yYBQZ2RBXaZztOQegx2ts7Y0Qxhh9QDArtCfpxby3VvrYq9GKrvJ+SXWOAyMUkB2SpMOicTUX9eQ+kSCh7AzwwBBvWkFFzDB3Rfa+Et6z/2vkB3iKj0giB0iqlaIUVc0h67nch348QDAZZcq81FqnsA0wCFG/COL0hZAlVZgpvLqwItjRJ0QMnAw5A0HQFEm43DkyZzAMcNhq5kek7RxjgdISnvAnl92kgSuubOAOGNbrmBjiNbCf6PrPTnEvcO/fd55XIzRmWk7v7Hhnf3H12gT3P3dtRwOwD93wJYlgMm1xvp4UDwhMQYKA+eioAlDO/QfHrzhpldiPw0p5p6Qx3QBIfAKqtSTH5BtKqzAy5mjm4k9BFYReUDF6UgQJxq+18GGEKsmcSwefoJU7yRmTMNGU86G4Mv/OOLCMgLzsRiw6Uy75rJ8IQtynCyEtGpUa+IYC9IA4RwTgotKhgLMbnxHwgkXSA4AMxPTZe1zTnKeLJjxKvQbdpi31x5OIDNMoVQE2w0lm5Ldq4wBH52BPMnQ1NQdIt116A8AOw2l5RspHk0AJhcBBolprBqBYBqttRJdpdykaAB1M5Jxpwi8mBJgeJoF5McFJAxF9sxhvcu5wpuAHRoDgZnEFg2CxVc0s9pl5BNQbg
A4KwawwBHhng+A3sW9xtQwf4VxGAgZLR1BpAL9+9jh6AvNm4UUssxJ/dvsFNddn98wp01YGBYs9YNoNYtZ+Nydl1dBDCitfcgtZA3dNtgRBt/QFRloaxChYgeUf9Tt/41xfYosyMBhp47d6suZAZ6o7ErcThWc8xPtHCvdCcfcw8PC0Bgdo9Qxa5IAAABJnVIZIwAqIOpPyC8aHAcWHUaPkYEZXVIqAfHb3X3JbFHZ4CnTnQoPIzrIokolnTlLQqPbgbtPcAROohAcsbcGcJHZvVSSRQlFLS4XHRwyTaTfAPcHTHcPTIwjNEfabYQ/0WzQ4yYMfTg0HZI5APbETNYqAdKPLSRA+e6SRGQkWWne7BnUWPxauWBRo+0a48o4I2zIWCMFvBPWHF/DkFBPgSeaJbbCgNqCgMAS+fON/XATBSONTTxYApHGAgUTUXoVpSRXWDsCKHgUvGYjIQeYeZaXQ8gUJJE5UYJdArqOkXhDrNE6xQ4DaBkkgXef9CyBdKpEpMMNJVrSpJzGpQCOpO2OgcYcw0JBzcQNyDlfrblMKCkvIwVR5JwxbErV3f3KrW5d4T1fU4PF3esdwyrIbHwu+fAG5dUnpTU3lEEgVe5MZdIlbTIo7QHTwvw00+5YogEoYjUr0oPDNNorIyrGsHwW4J0qALpDUgIrUgZdPXUwCVojI9owgroynXA91M0kM7xMMl0sZbMn03Mzo8nAs1oBNTGWsDQRqVoBDZrE5dYa5XQPQaRMQDs7NA+S5CdTpYYyOcgugLgM4jwdHTTLY2gecE4isIElxOkz02c7Y3TJ3Kc4ZOQ0RRQkQ2zH06cmUNyJM0c1Mt0nUz0vUp3Q06040wMjIJQLwJXGuHOH0ovW4uXAwl7Fkwod4CxOSDaNObyNBK2ARAIUA02EdeTZ0rlC8oIq86/Z8TuF4igJ4eGd45APkAUOIEvZrXcZAKCpI4Y7g7w3nWgSPNsPosErsWY2o/6D0Mw7RPdeQbdRTa4cRSRJyIfDvCMLMC4lTAkwUehCE9IYZHgrMPgw1AQ95YZZaYZDSfSdoggdoyQ0IeS+5ZVIDcQOufhO4VfX3TSxS2kdRfqcfF0EgHchPBQppcwywm0fUOU3BEgJi6dVimdUcREB5Eg4rYnGskgANSZd1PbAxSzF+QY5I5aLzPol48YjRfwZPYwIIIQZUO4aPRAL8BDcgZAOwicyANDLIbDMAHIDYAAdnwxMg/DAG4lbQiNJTo0o17TsQoFpSUg/V9yCG4GY0qsIzY2sk4zskcAcl40AjysEwNiwFMrhDuHGMkmWjSTahGEtDNHpCOn1GxxBwFC6Helm0jkyPwBGEgAAF5+BXxOcNARDOdZlDrNkHTgqt9Vq6hpE3kIC58f48ERC1tbqFBDFipRp5AvA9x1QZzvgFgiBMFA9wdvg4lIQvqlrrZkBDINBkaYAEAbi/B45IoIFeAwkULcqf4GAi5WhqBi5MAHt0ABBEA7w8BqwfrwasT+YgbrtNoRhXxIA4xVp1oZ5ZB54ZERoEtKANocLwROMaksAOdcD9rvFhxBEvlXlJRcDfoXIykVBqaPBabJRbq3xzBLBClF0xTUkIFJT2stwpE5T8si8ckWkVT2kPRQZnIUM3yEKIoZTvsd0FTrblTxBVTkAddcw3xsqkMvJ8q0NCQcNMQjA4wgKghyIlBIF+gBRHkIZEQaEQpYBeqiMaqSM6r21yNGq5rmrv92q6M7hIUHUeqCMhTLIBqyIuNhqeMpFxqhNMaPMeobhExb9RYy419bhWCswZ9gS7z6J1KjRaA6Rwlxodw+4zExyRgVYmoQkWCTCPRygMiHSp6RRgKMyUBAI5lOczUSE6BlpwsVUglvtLFdpS1qUaNkAWCajZLXZ5YxoeDkAXs6BJlSk0kUhJ6Lq94Q8xF/BB6rzFxlx485kng2AHR7N7qD6CIqb2AI4McKBzwcrsTFRfYzx1AjUi8zgrcpJIGSAI4EdodAMpDSlcbeo4cOCVNLKFqYGBBlwAxIcaiatwFSbOd5tIyP6FaeCMGT0LLJ8csKILx/bTcJcAriDMcsAyAXJNCHC2QIwN95RRCQMsHvhdgaFzJIBEwosWhIEwBnlIAaE4xzIlhgTKBkTN6+4I45JlQVGgkM11Gy6hR7jIByhJGxGgpZHEjjV+YABxdQGhdoCih05xCB76FWA8dgEJfeqWtW9gfIRVZAQJ3AYJ/sGwzXKXLo4E3WSYJRIm0pR4VHeGLxjACOFgwLSNJIzAEpqR0g9U7Ii0jRCuRTZULgFCBgjGq0ke/0urX2RavuwQIBdIFgjhAB/AlUdNd00tOIFFbAFm1RuSBQVgC8TR7RzUJnTuPhsMKEcoAAeU9DjCWByyBrP2etuOnE4vHMzAuCZEGmTEkRP1qfYNqW4e0wVsBHuyBnkF7GszNEemZOkImmOgl1zmOgxPypCzCtfnQvVHovhI2kbgOG+HXAlCMJIAKaLknnr2hf+t2HjiLgaNtEczNvhzO1ytajvEHouydnVFuw9F2FjwpKJOnUyfGoUfGSJlkpGhdrwTyuNT9QzTL0TzYYPnz3WDceCFdn0p5foj5ePxDsFfTRwdFd9nFcHEleaNfxDB8iVOoFC0Ybl3WytT4DDAxXiGLVNnM1nQrFIdNfzCgxlbGELoXNUbXBAQrkQ0wU5elZgxeX5vohxYb3At3Deilu5IHHTtejoDcaWCzD8jhEgELsiXtD6K81Qe1AhljqsRCTiCUEMQ0XGojnbyFE7qmekEoAoPSCUVwEuFCb5CUAaAPK3AQHYMEjPhW2TEFIKRFOlK/uNoqVNuqXNtPnlP/DspYttrVMnWnQNbpCNbh3gJGRNfzUoFuQ9ggu83OhrQtaICtasubVIzzsfSaiauo1avfVLvLpLXHTgt6UCO1J3vNOeUmWzw+TtN8PloWQ7BDRWXiHlQjURhVRjWqc1WRlA6FczQPnpLOaksLq4DxmpdOugFaoFKDJfZiADdepmU/fuqDRlRwP/cBTDWBTuuA+NThURi1RRizWWig5A7o+81P3g57UQ8OvhhQ7Q8TJHJTOZfTJCJGTkTSDGT9dlcDY/e8MD32J4ZXEg5VcY5g7A8zThRymWjzNIFA+o4g6zSLNvXZhE71LE+w/fa8OG2k6K1k6hThQY8o9NQo9A7U6yely06jRo51SdKFMA2A2Wf5ykZQbiCc3g0QwJqdvQ0xCyGRAjsxDwyruqtqqJRPYavTWS/NXtjLqrdvf0hYz6r1vY1AjrqGrolGsdoEw6U8ids0WnTKIVbBFs4Yr5RCP4dZg1YCDRmfmj2kotS2eCXxyg5qwGgEGfozwjC64zQG5ygMrcJVQ0DRtQAiALdJTArkm/jhyuj8ArCFZQALfED9DNbqP5mVgunwAYWlPzH2eSSZDrgOhmcQ9+AIcifGUia7Gfj65VYm/7B71C9clCJPXa81YxixhTXnWBg2mTTh0h1PvoHC2vFnrwYuCiCSHwDNCzHeshGcbL1QAaMPpVVjcZdDPty5Q9jaSCk2f48Vadq4E6626x9gIGNDLKPM3thzUNTL1op
eAFea9Fm+AB/lhFI2llgFFaFQHeJmfbrS/osF5QERtFnwr50werUJ8DHO6eMmHocrdK/Jwlk8TiF2oa+QH5kuyibS5zmR4HvJf/h0VZLGgEltHkDKLUB3Cvm1Gp/TR1urr7bNvFMHba1FNklHY9onb1anZ9rtrPL48fYE6GWvMAld7S7LsaNBDM/tNs9A568LTS92ET8QDjDP2YErXxWPYfVS7phL7gkQAy/ISy7ZTuVxs7tBFdnuVT+5+GXzGTJdIVZmeNQJQM/KSzNG/65ZkG+T+Wg0DH+Wnh8uCmkiYbKB9LTH40AL9vVzuL56BS7X4l8r4rt/Vr47vxsb+GUn4e7YB1Vb/vddMQufeE77+TYH/e6H8m7kCNRH5Z4tUY7RiX5bUS9X6FbL7EAr5vZjgd+7wOvvv3kDDJm+B8M/uHw77k8u+ISVrMqGoCoJl2xQL7MMhLLM5kityNDDHxmZjdsoQ3V/sKzs7ZoAAVGQLNA94KARAHeDei/73oO0v/DfqXy37V8gyoA2/AfzGRx86YhA8tHVHo4KdjUSnCgVQM+B0C1iBgbzmtGWbgYEkTrf1nBgMBB1vuqGLgIVRwyEgo6MdQ8kxATpI5k6dsLgF0kGiOBM68XHOt/w7SrspCzbI7A62y6Z1+qHGQrtrGK5N0Q6E1XYAgAVDM9h69vK1JRC55RYF8JYBeq0FCYBxZAKLfogACEqwUOPurgIADqToSyugCOAIBfKFJLzGrgUxph+Ej2PpnrgwLwAwAr0NYMYh3SzcAA0iQHkAxErupTW0JXCvofZJqh8BSPKCizRDKA52LIaoTEARQuACQz6rcCCxPw7GGuDcChEgAaAlENwejkaBrA0MJ8roW5nMMEYbDhujAdgFQEKHSwShswl4BoA2yzcAAanXBzA+Iu4DAXkgdF2F41dcaMOECkHlB0Edou4R2FdnpayA2+ZQioWQCqEOATEq8KnlGkx7YVQcrQWELq1hEHAiAUQI3qLT97OIoeTmL9ikFeHJh1OlidUJ7goS4i9cJQ7koUDDBIA7wSbKpgwjA7jhUgUQAwgWAaD7MuAzcAwnCPVBj46QubLChkEOzbF/cy0O4GEB5H9QhRkAeZF4AOwSiCcgovpjDCNi81JRoo1QjaWjCKj1gyoqUeCXVHAhNRLgKilEGQHng0gqovapqGebnNzaxYG7q8nrwqAkCN3MMGmGWaahs2pNVYXECVA6N4AUWO4B6L8pSpvR+AUwmy01jawKAusU2HUmFDnYX6wINQIcLXDagJAWwS0AdASSuirEnLbMS4XSAUdjq4PPEeeFwBFjsYkcOPP70WGANaw/AaTP8O1b444mrQIKoQwn73dyxwPDHrQC7GlpCx4PGnAfg8AIRQkkQF4NsIiG6JcGiQ1Yc/gjhzMNaSgSHPziqhC5FQQMLFtA0hAbhFcl9VcG42bFfIQxbYyJh2PwZ9j6SIjEgZeOU4qo+xQ4qsQb1DgGsjh4eGMNqyQDtE0AEiDsCoC8Cni2A54hHreJ7G3iBxFY5KETWQAGF3REGNcTQFZi40Fg1cfMGkNoaT5EgwGPaCfhCQbh4A6yNYZZQ+x45Ruk42aLgUAmloK4u5S5uZXSGT4qJX7BUIUFHpYS1ojAWQCfgjjTUDsk4pidiWWiAAyAkKC8T5Q4zH2NIB/hz4GJOw4Eid0ebwx8J6yHBqCJ3Sz0GEYAKGMQVSxQEMJGwtxlaKkrKTrwIo2SYFQiZAS7xtAfZEBhUkqpdyRk9gptX5gmSTgk4rBHPWHB6M2oA9JyJpO0keT8wYw2sexNEBcSEkkWPAPZmZxOZOWOCCKAZSrC+50gC/UST5nEmQwKWlYcYdKHnhC1RxqTdJpADKDlBgkaku2FgmayZJp8bJGKaSy5i6NJRjDDsNpjqmtIu2H9Q4B4F2FrQ620k7EmjxTDEJceKRL7PMgnr9QWgbQAQOlQoDi56As1UbtNPaB3ACA1U2eqVPKQT0OsJoZHpCDmhQIlI6oCOAZSYZtTopHU2yHNPhhxBS+aiHbn6Nnjc8AgNCaANACsC7ASpFQF/AmCTDiNB82EiKScGVDJQcRS6bVjDCJHKoFQPqOgMfVCTQy/Ek0DyYSKTAJ9xQ/UKptulLAxNVSGuZ4RonSAeT2wqkO6srCwCBYjRv1O0FLlQgJIggT0HTKQAGhvw9EX2ShslIhgUTiOAkiuMJMKCgzpJCmPaoTNuE1iN6h0pAMdPiC7DJxXiDROvGCguxwYwMC/E+J0QDsomfgKsFoXMkaSmo+YC4TTSga0sNwubTZrKGeCBAghkEbVsbI1pQMfS9w0QiOyanSRA4EwGaiXkdn9RZcPQyFvkWqabRsAGQbjIaI0mUAPRfxC7IcD1iyAI4PsxwCRXfG/VYgJUcsI3zyIRx8cBAYED7iTlsApuFosaPjkLn9QGE+Q62Sd04zAkoYloJgBCDXAdhEsw8bVn9N1zHFsRIsvmCAj6JhhqUuPFUEaG0lWgPMuwqKBdDTEZjZAWY/EfqDkB990CnYfMEsC2ywBZA7wxkps1EAPCDI2rNdLqzuAYjQ8XydsZKJPlYzHkuwsujwnED6VLGaVNgIeGtlhhsmV2eXGFW+AH0FireS0bTgcySIgaZoZ4KdKPFSpmsQhMaF6IwA+jAxubEQrxiwDf4IoxQ98W3yQAMBOWvgLhDwiKFpyes9sZaHEDaR6VMRISA+nyL/z1xgSCwqsNFXMnW96AZwhZMjhiF4A3Gz0VUF8MiQIkzcswkHjbNSp8Aa5aIxElYxXpCl3wvbIpF7yNotYh2Yit5oHytruVp2E1GAUT1QHmk8xgWF/g50gAXdFJ/ZS5CdTn4lj1AXAIxUpHFBHJByZiwcd2XcaWIjkMKPsnCnORXIhyz8NrGintZrsm0gIBgfVXsFSpR6FAUJX7kcFspz+m7bvhWTInmSKJrYvDpZO8BMcK4HnJxdYuaFHIWysAbxVWh3b3J/FUhegUXxsEOCAlkSvNFITvaaLzYIYTMsxxebpU7Jpk/iakrPkz8mOBiqju0v6jv91ghQT8KcA443pfFNaUpRpSPYr9KlYS01jUvCX1LeOLpOJc0s5k3AAxaVOjC/zSWgdMlGSr5N1KsWXcMKJihxWDHljRS1p+05kIYrOW2K0Ypi06sWMvAaRVmiIJDpxzAzigFgsaRjjkvOXPLLl5AEZT2TRgTLFewyaZeu1mXWDyMtg0IEsuiXDl2+WippXgLolmT9JgVLpVKkIYHLHkWS0Zd8qhXFKYVtwMhjMqCUVLEVVSuwUip/QxKGliSTFTossQ6ZfxAcXsFfOlD6LIRxqIFU8s7KgrS0aYUsacuMX2LXlFY0ZQhjkiuLYUpqTxZcm8WhV/qRrS8KzXuGURviW0KZLZjiCKJFA8gGxAwDRgnSL+2pBHmFzb6jl1leApaUGPuCQK9l3Sk/kxyP5SqbFHgC5bKu7HXjM0wqv1SCoDX9jBVPq3JWGrn7gqs0tyNPE12QlX1RYWBc+tbUazwBms1q8YQCGrF4Ah6DIjqMcKYQnB0gaFDCr6NiBJj1AsgXeKysdX
mlwF9wa6vivuCEqvV93KNcCtFXhqrxny4NY8tDW9rzF1k7tSKooAvLY12SodSYtuSU5p00Yq1Z33QIQ1ICIxaioKo66lrfqNAF6PkFiXaKElLq7SkqHdUEqelcKb1Q8uMUxq3lPY8dcOsnViqAVQq2dXerlU9kQ1Sq9xSqoRTeKF15hA9Q2qPU3loZNYc9e2svWmpr136j9YGoHWAr31I6t5QYrg0obP1Yyu8POutrAbVlGK/Kk2tG5zJc8eKqTvdQ7VXqu1N631f6tHUPqaN0ajDcDzQ3Ibn14auNd+sHI4bmkeG9FY0sI3Hrbl+c4+SbL9myBZsAqpVEhtvXMaBSTiqARkGtq2M4BG4AVnxodWgb++GacubNEoBVzSAUm6GDJto3wb5NoymAtQB43mEVNkfX+Lap+4aaNSjasZF+LmQ/i4Y/4sjeZwo3Qbs0sGtjVOvvVBqTNTG9jaOtY2ybwtji0leMqU3NJbNaZezepsPXsqxke4kqNDAg1trp+nquFG0U5ygcAtUWoLVcpWaljB1JWl9WOsY09rotZW/3I+p/W4B+yZyf9WKrjWQr4tNmnVZaqcqnw4xtARLW6WTrc1/Q8gNTUqyc1rKtNkouEXqJyJ4d/cySq4DgWpwzrjFbilrR4v/XzqK14oabQRviV6kRRYohbS/2W2Fa1tsAeeNGEOS/rs0qquNeDz23q4MKh2gTcdsAg6iZRfTC7QqKu35LntYMV7V2He2pbBNJ258rqOW1P9st5G/ZbpwU3lhQdrxUzBDq+2SjVC0MZbfsIzBGbKtpmuTXGs8Wo70KB2jHc0u2UoNqA7RE8Tlso2mpqIrM4EPdu21/qrkcarba1vhRXIyd4OkDWlpO2QonpfCF5HCHEBsBsoiIf+JLvM0Qr1gG7MaOaXmQi69K4umgHLul0J8oU6GydXGtl02xD25at7RTsF2Q7vtzsr2Qdl03+yKceHK7R/w220buNGQfbejvN2Y70qZ2JKVspjJA4Gdfmr9lKhOW1aJ1pW4HnVPSpHgmt3yiPRGuk1vqqtHGpxdzp21863dpuj3fhs+1U6+pU/CuC/xrHtFAduBZiaByj3rSyATW55cHvuCh69d8e4HeQH51m6c9bKi3cqRcq+6+V+ASDblvSVwo5kDewLdVsr13Ka9cm8vWHqfXx7X1k++rWCqcX5LW92e/jR3sx1iScV6w7zfaUR1D7iVTHcfdVIX1z73l6jdjsh1+WzwE9xmpPUTsX39iBlhOsLWfsArpdkel+n5aBj+Xy7rtq+j7RvuaVOS2lBEjpeZP72M7s0fS+zs/qGUUA41ZKp8gm1JYqw52eMKQIAZc16lm1K2+dR20TZy4vanfBoudgwMCldanSHhLINRYQZRo0GWDMFzQYCsNBmITEAAGZiqmIHIFsB0GS69B8dPfoYNWjGDjG6dCwRAGzp0rhITKiJbIe0pdhoxg8JwVXRcEFdbM4ckrs3U6HGxuhz4w4IjkJJU19iSvTyQWFhXdgmxo3POatlt0SbpABYyNTPro0xaeyB8TihrJsPwxph1s8udU3oCLzpsw9J+OkAUm+rG+gtDXIvKhi7xSJOmsTXpqPyGJDNkWh/U3qcWWaqI/FdCqtx4D6b2A1cwCGkj8MSqLw9wpEuwCznQJ0gyIDQMiEADIBJiGRC1HkQSTcEIpIiMtz+hARibZCjcabL+o3M3ZWlNSJfYmWdmrgJAaD3YkiVIe+aafrH03Kq99yxvdVq/EX6sN3+3/fPucNmbOt6wUY44X2ZZCNkddN+kSNuEJT48EsmsLsLEmQ5JxEcd7JYhkm4rbm4geGFi27Tm4CsFxpkGsS+y+ANZP8govLLlbyA4RjQ7UHRjcaEyMZKMxhekEZnMzxdguLgOK1OrsHkQOx1YynvlUuLnlDhL7B3OZ53g9Zbx0pJJJaxIAi4Y844PNiJEyBHk9jHKbWKGngIDwssZhpkLES6ShtShZlMeAICx1HMkwX5vIChnozbQcMkMOkFiNFBHCQJw4P72zHm0iReknfS/kSjwx2RpFUWPM3joEcIM9Ac1X1toHlJEAtIeAI7wSDs9VIE/EgGW2fDZpjiVYnRuKGBT5gHa41UpO/o2jXFAAmAQ5rMppwR0z+OdOhmYhY0YnhoicjNw1aJh64u7xkUG1/e3vBRb737bKLx2qipUnujD7t7sD2m4TbYYSN26CdoWurekdGWKailaKcw1StNblK5l9KhZdUvkNoUlDKy9fUWdv7xHfZiRgzbvpxOj68TPZTI+SvrMWHmzCKmQwyuRUdn1cXZ39JTrwEDHqdPMgFFMby1M7D9B+uY7R1xOjrj91e3Y1PvP3Xiv9Zin/TfpHPJ7p1oyrrXWZrQNnOQTZ+FYwNbNRL2z856TUuY8zdnNNQu2UkjPzn0S3jF2tAEzOeCsy0T3PMxVibvNpGOtTihVezsnWTmXz05j8yEtkMoqAlChqOQBZXMqCQumvUOmVRyA4ZkQfBm2AIbESJhhDKdEwXQHgDmC4ukhowL0COhjA7g2lbBncErqsY8utdDQw3UchjUvBHSQ+RunMINAbCYIZuAELda2YT40NEdtDlcJxggWISTS6VlkCgxcAoGH49pasaJCRaFtCsGw24v9ANAaljbiO3MwXAlx8IQEGkgiIAByHqOGILCOt4Chh4PvaEQRgg9L7yQ2EqGWihX6wWffxLsIdnJzwrtcPBBZbHaW1rLIwHixoGCvbYx20CNyzjT1ggLSAAZ7y0qRsKPRUr3wVU1eT4tAw+LmiDK7Zb4skKM0dwLavqZcvIAorLgRK5FYmH6WYr4UYJPFbYC9WkhPTWQKZeVC7DurBlkYMZdmFTXdIBsW1pGKUOGE/w+VGy4PGas6V6rn+VeLkaUv9X3kKlpvFCXEUsl7A8ioQngguycxeVCIMpKOCKsnAoiHgZM3rU94jt0z5STM2bWzOW1oTai0PjO2sDuVKrQBkpSdfrBLWb0ERJvo1cHj2WzliAYcsxXtCWWoblKia4ZYWs7g4bz8BGxAKRt2Wno6ll8OjYhvqTG1s1xK/DZPaI3RgWVqElTbzOQ3abMNlwINdBAM2H0TNzK8FbZssUOb2ikayQHptE3GbJN5m0LbcplWIxuCda1eRxu1h9LktoJdtY0C7WBLwyLy+7Uqny3tEmTJWx5i74jJZrPNtG82i1s63Wrett+pVcEyzsFbWsU26bBVvi2Nb+KW2/xftv63LLzt8G67bWtm2Vbs1wm5rdJt22+LDtg24daNu5I3bOsMO8+1mt42TLQLegb7Zaux2A7Ttpg8HTC6sGouGIWi7HXhDIQGLidUJCIfBEjkzBzACQ8Rm2sNAZItMoS7lwsj5cbIFYTQ54KdoaKdW8IZuGEeaE0yQoccjRH6G5NhgbA5so3VilpmcVpQSUomAfUGzDK69FfXTaUYzR27M9uANo4FklMDo1pbcKSWScHMxN0gmRto5qBWmzTDK3UsOcMoquvRdWZARMO3EohgBkKXmnOMDX1CDgsw6k46/4DOtX5nAfo3lcgCan44v7ChkJM3AQcYAJAAYmTBfF
Er2mKFJ3a3b/ggTNwZ7gYZ+O5aMTt2Qovo/piwEVESB8whsLoZoRfjNwWp94PgGUXnsOY2AS9kKHEPOkspgjDhtKfngjBlJwHat06y2y4wRgrETC0BBriO4nAYe2NXyUiK1nthK1nDuXTw55IZZWHtAOUxVe4W6stHi9pkLTL4etTfobXWUFE0kBJsmA5sygL1GZ6HdCH+JfoNyc1BXGCHDQnkx1YWauWsAaSG+peztYL3uH5jkKPmETWnEIE+jwemGDmiqhXyYTNexXLL1pPd7NsbBnbu3un3kw59zk1fcrnsBdhj99JzdJellIDwZQ/UKY8iewLon74Vc8MkEcU5DKV2o828uycVa7dTWmVaOrFlMginl9gMNffLBNbMjZ+ip8/fmNnnH9S/SZSUoickAdHljthzej84y2eLbdqJ8PBXOroagC7TWo2pPudPcCp+zCys64drP9n86yy3QDw3mRjn1p059opPu9Pcn9hsEFxvLCFKpq0Khp3c6adEAHnlVp5x6BeeGt3n7Kk+8M4wCjPGwJT/I5M+cPTPrnMK1Zzo/BeB9nnrzxdtjdmfKIbpVznxUC+xf3OaSELyih6A2dzCmcS7dlS4UqddPAtxlFl18/3s/OBn/zjl/C7+OIvYRYz7TKU7RffqMXClWapEFWlzPDz7L8lxSuBfrP+HTaSnDtJoB4adHBCsJ1zCvKqhDwFQtuIiKiBOQ2nEw1l7gWMqfOEje94udIGtcWuEXSLrk6K9Re4BjKxLomN1PoHKv9n9LllPo7lO3IwwWL25zi6zqF21BodQkOHTACR0DA0dfg3HWrtMXRDpgti03Y4st2kbcmcmw5ZfCd3q6Pdwau4JGoD2yuUL/y7gEdABWvaKNottWjfJpIrASN77BTawAIBlAWQ/eYCa5uTWgW6QVt6MCWvUzTToWfUIZFmuoiOsZJLwHmEIx9vcb81zO8iXSAR2gWY76zGacgBTv+3rNeUNj3f0o06X4sSZT0LwRhCEkBrxAEa9UdpZFF/bdIHyFDNOntQ41XchoXOx+CXguWFvE5gzuLWdL+VpIie0DpkWWDBVTEFsGovl36LBgpOnXcRAZv2LVVTiwYFbsHhD810SJa4SLdqHe79dDwXxiksGBD5kOZuFYAk2yxKZB+Q4KbARuYe6PR+UlAENZTc9dDjlBILNd8DU16PpaKLKKMJQvB+FY0Cw0UytSQBUmYTUrE/G9y3YOauVnoRLQM7LCUSoijrF7U7Nm2wwbE9pzZRICKEJRi4hZsdG5WOi0wN3KCXXixGWI/e9HXAlulgUBB18YWcUHiKAqyzEZ/0rKIri9l8BLZmMto34ZDE07iRRJRepsqcyzciEU4XHjUhOPgyscyQe0PzHXNDHF6A8QV3chuN913g8s3iYVBCTyy0wksYGRkDEkNBodmrvnkyASBeY/DVu12YW/wRMg8szPVwZjfzeo3qnG7sy9AuyP+8bEeazaFthQXx46bznkJF7ec+zcPGvCDsGxQSKAK9q4npqf6NwPhKlesTf0E/CCh+vQXs9Hw95Aod0yOsAaCvhc9aAn0xNWE0sTvBi/Vc9TJn+Ol+I81/jeVtnzLctD66ZP8cIYjzy6uCSEy0Z3XMCz99G66aAf7iHWVsoy/EdQfGaZGeBc1P0ldQKPuhggP9C6jJx+SL67Ip+vyK/rUpAG7KRpe5mQ+yBIe1CEV6c2JrvH7D4eyjujB7BfHlj7h/0uAXnN2i6/q5or7veeVAEwPTuegNOG4UZA4CX8AWdn6GN3TzDUgefPQ36fzH44GiqAuEbefepDLV2G3OD7TUBWy570rF+moJfPwfBgsb7XlbsGlZ8PdVsa3OGsCuAaswrsCWAulX/bhn/x7V/c/2Vmv4s62oR0eq9fovxPaB1N/FbkLlv2X6OYfNfqFXiv1W64U99H5vfM23373yE307A/F6kXzsfF+S/VtcvhDRVpt+z6ULoy79ZN6VAAvt2fij3yr6sqtO/fWEuSPmLw4wHyBFixEH86d/l+eyaFo5FX/wA1/lnif/S8n9V9N+M/YGpMLr9c6h+mO4f6jUX+vohb79r+vv5scxc8eG/qfo7UJ2n8gX0ZyPyC9BZZlRg4LSnDv3GoH/PLt/9ftn5P890H/N4QmqH854rOkC41M3pUPf+V+P/G/Z/305X/PUnS8dlJqDn8jlBUHn879MPzr1SXaX0WNWgW5RP1EAy33WNLzTY2vNf9fY1d9a/KZQf9GfPf1z1gAi4Bv44TZH231iJU/xRNYLMEAxNIARCxq0V/G/wJNOyP/wkd6wCf0ADCzHn0P8FZDH0YlhfYP2n18/GrXa4k0MGGq03JFTnWBE0OfiQCpIZY1n5weNYwr4Njb5WwDbzC+hW47gT/SwDTqG8wWA41FfUVc6/f/yICp/EAMAhqvQQNdBIAg/WgCjfBf36UwDQZS/8nFBXzd9zAzgJcBuAv9DMgZBEDHkFIMDCiUEWnMyGpAbQOqChwSUAwExBZufSCFYsrFgB6lI4JcCGhMELKxegDALYFm4yBdj1vtNxQqxBxvgUBiGh6AD6goN2DWbk9BDmOMAMBCQWoPqCo3ci3QxCQMqjKpYPbN2zpMPVZ3w8RLVwTEtiPSS0HsOkBh048ehAKwCFI4AAIdhroHqClogoYdx4tgXKhxfgA4XhFkBGsOclkttEZJ0oAbudpx+9UcK7HhBF5Fq1F1OSItkJo68EJCxZzgoQjrcNXE4BcJZPeUGIdZuOAGN4GEG7k2DFvTykqlxrXwNkBuAkJFCdaUIihhowEU7y2RalUIF2FyAfAEJAhAeEInEYGO0DJkwcDKDx5V0YMEhwCAbgBtwZ4QELnQnIFOC54z0AOCShL0f4VDh37P8GQBoAAAE0rAOMDuAoQExihBahAsE9B/GUkOZ4KRZy0CdgQpPwb9Z6RPGmIiFedHrgkWMBDxIBQL2gODV4RsTxCSHKLGbheAexzlx52N5zlw3g2UCfgwwV1zKR/g12DABLg9XXoAd/AAJ+IesfWG/AGgP1nTluYY6m+Z4YKu0qhoAeTCyoIPJVg0F2DON14NE3XQRTcEPWu2YsG7TN2bteg3N2CsGgGO0EsVDYS27tRLPu3EstDUjzVC9grrx3RnxWOH7dIHUMAusLg21g2gh/PqwmsrbOKzE1ErLni1tgrS6n3cD4AIAbCoSbWz9s+LXYRe8lYaSROBCGIimCcIETy2QBarSYD4t8wPCEdsP7XIxetrZL2kyYwwZyjZN9LPrCaZsrcUDq9YgeOj9NgHdsIiD8fVM08kJSR91J947CsGBs8zdRQ6QMbG2h9ohCVASH8pbfmzbD/oDsNzs7gROzvDa1KG0tts+Z8KEhKSWW3bDEwr8O9ofwx1R/9IKLAEY9SbRsNAjSLZg39CCqYqhosQw5N0rt9BIQ0Q9IwlDyzc0PHNxZ94wtj2TCu7GuiGD0wkYPIsJqcj2YckcKjy6gYIxm1btiI/t3Y8cZFKFaQ9wiEGJC7cCEP/5Z6GYNHpKIFkCgAJg9tGuhRQga3/CB4PPmpllHZAHMgLhSyASg4
hcoDjBloOMAuECwKEGgBjmZaA5ACwXYF2BeQ/kMiwrMGzArBTTPyxIcnw06GWcvMYhzBA5kSBXgDz5A32u8zfBHgyAx7d7X7US/AbF8iDtV9R8ih1Y+2Nx9xD+lJpQjIdVG9NdI3TaNUUDNHutDweEBijFJOKIjAEom7wHMD7dKN9VueNowMJ8o8e0d82jHrgyA/w/xAzhCwYUOXFmuYmBERJlMohJpv5LDj9oJdI3SzR4WPVBfFDhZAAqjdePAHyldYDAhgkkAAFHbB/QQMF2EQsZUxYZ+3OsOIobI5ACgi4ovWEV4vMLw1E1con5xyjk5JbByduXWbH2ii5MV2BBSAeAMcdywGLxGl6ouEXAIGxM1F+CMwLgF01DQbTCIAiBLl1moiBAXy81fojIFnAwAcgBNxMDUwjuiVGaNjCBDPJETupHfGxRcAohOclBY7IBTGjYIYagWh5bvJr37YB5F8DvAz6Y1n3cJRCqO5JdhQaHNBI0a2VC8RTf40oN9aMRV+sTaJRTJ8VFS8Mp9facrlp9tFKqKGts7OMKhIOfd5BWUafYpTp8QQ72wFt+gBoFYiJrLnzT9CNQomajRyMZAm5Z/FFGCRToNnR50ntJxT3cJrRK0MgOA1wilidnGWLliQQ0WJ5j2VKCIFiiIoWJIiWVG1FEUbuRtXeijwX6OT441CcwyAmpDAWVCDoaygT87Ym20Fj/oYWPrAVlWL1+CHw9lR+ivo72IyNMHedX9jCiQOLZQcYfAPuRQ45n12dLYvD0OcXY2OOxt/o3lS9i7SH2JTi/Yv0WDIM44OO8Ca0XOJ9tw4uc3liSLQIOoNggug0UFGDaRUsAog6WX284gxCKLsfuVg2JBuggiKkMWzYSCrsxUI+giVH7ZoBldZpQAWUN95VQ0GD1DSiPLcSPMYP/ov3BAmO5AIBiJo99bc4GG9OgWWBLRgSBYFMp7Ya+NLxTYWGmQVsfRSHzgpPIJnaBuYDaTYZ/GGhDuBoAfZlqE4wT0DuBygIyOgBSHWtz+wtwXYUQBWwEuDKQDrZnlpZfhF2DdhUExWh3AT0OiRskBQu4RCwgwEMBqipw6iJqdhQFgByoBoxnwlC7lLuGEQNELVWx40kJmAOEPQqlnhhhKbUEflgWaemywOkb4N/jIQIbGkZKSZPE5ZDYPsxZduZA5C55L4lgCfiS0dIGWMd4ERwfjXwS4GfjAqNRPlMxIrAD65mkaV1aBVpYii4A2GRROYBlE44HSBNYoeCUC7lQ6LkhyRNrlEBiEmxMCp7EpERQCyAZxKPtwPJCOLsCqdgxwx2DODzDDsIiMPTdWLVD0IxLBaQ1JR54qWRQgN+ReIDE2mFwAGDUwiiKI8940YMrcDExh3hBooJyDSR744hMCBGbfmDESP4hIDUTg9eOA2grEzxOfhpsPJg8AtxEwjm4xQLmgEj2pBzC7ZYiBhNqTp7X2AASgEkBLASIEqBJ293WDcXyYSgo/nlBkWAtQVghaYBxxofLIvCaFkFcnnZphEqxMITUAfBO3EBkbJD34uBf4U6RYSXhWOhfJMaGWMW/NqOI4xyGFgPhFMWbn2ZVeThKZRB6ZNXIQPg4oMKYnYF4FGYMiZaCKkf48yF4hIUwBOATQE8BMgTdgaAGWgoQfZiWB2Q2wH2ZtIzFJsBJkxFNskBk2KVzBqZcpMe4+E6xgJFTgAtSeTRkrmFp5tqFwAZBkgHtFFNG+NGDq8XgJqQqSIwLRJviHQqAG9MvBChk3RHnArBVIGwfZMCSx49QRCScMHIAiTMIwQzxo03euzwiYwhLk/NhIF4kmAQoK+lliOOUlBhjZpAgDvAkkpiAXih5HTGDAaYD5Uohsk8iJ3i8kxun3jCk8ayxpMwHdScgWk7RNvj58dxL5TWk1+MYT1oT+KeS2GHxOWN/ElaMISnIXlLQSBvG5jjS96KhOsSPQaAAYTXPMJEkQD6EWgRwE0/lJ0TZ6YGOQZUGfEQYA0fQ6GAh74iknpT6Y98CFSAgH00HB9kgPnHZFSTmIzAZU6N3QwYPHECVT6AKu3DCjBdVNiT8I+JPQ9EkpzxoA9UkiEoQzU2eDHxMWW5XNS9nJQCtSUIaSHR8DKe7kdSS3NwX7s3U8nBbov3G4jTTg0sBELTg0mBnpTP4qFP7BljXxF8Fc2Nhj0Z1DeGCOTECSz32gCiWkGrTWUylHZSGwXeUvdGfWbloiusVaxqTA0xNP3xk07BJ9SL0rGECs0gTJizY1NDRFq42giOCeTs0zfAPoNVNMwgRVgKBnxhKACIAWBZuOITDNy2ci21UKHCmnFIRk9+JaY00mhOut+PT62FICfQ2jKQWYrMzZiczDmMxtrw+2jBhWgyDwwwcMYMKTc6LSJNVScImJMbtNUqwW1TSUXVP/wDUxdONSV0nTPXSK5SGOhgt9XgH3S0wl1IktqI09M9TqY7yCQzcE6xL9SX4q9Ngyi02+JgymE46C/i0mH+IjTfYKNKcTVTQECMTzCExJmltKX4mOAv2cgl7CTgRDNTSHMkJGJ59QbBOvSekihLSQ6U1jK5g2AfjCQBhcP9OZSCAQumAys0LlPYdhiaKkiBYhdZIyw2En5N3U2obhJAReEnS25JuMxmLxiifATLPDxUin1EzQbIe2FTB7fPDbTAbXME7SBsqnx7S2gjQX7TZM0MOVTU3JTLHSVMnoK1SQlTTP1S/ifTJNTV0pdOSSjM/bLMzckornySrMw+Kxp8iDaXrSsANhlBBmki9Kczf8NxM0TWkwEAIQSAWL0XjM0u8CHgB+YxIqdiKeWB/cAvYMFBYU0ohL5SdvGLM2S4s1a3syr4sDEwzwjZUTwylGNz0/StUYjI7xHuLVQRwQnIrMAzJIUrM5TXIPgELSqkh9FqCAgcjL4BS2cMyQEPQkVP9o6AHiS4SaWMaEpTI2IjO55Pkg8J4yjw9Rx6yR2cbMaRUM8CK5iDAYbIExfTGl0mzvwm7mwzWclIi84u4uQR7iwgvuNUFZsgqnmzB0hQBVTGLFbOQ9x01TOnSts+dLGBdsvTKNSDMzdKVROufnBOznUs7NdSCkk9Muy26WzJBkEsq+Oey741zNaSPMsNI1BvM4qSfSVoF7mSyNQeqGgztQSnOOJYc+pJxpLkzuCOTUEr4PRoMsiBCyzPMoKFyyXIfLNKRoocpNcyqcoSE9wMAORj5IickrLSArEmhJpyJc9DL4Bdk4Mznx+hKnJJY2c+sUb53dERNo9QsHNPhhNqAVg6zvrPjJPD/rUXKEygbSdimypcptPIBKDIII1yFBLXMoBJM5COkyB09CPkylskdKQ8WLNbIIivwH8GVJ/wQCDZJiIV3PK1IIRWhogMwhiENzmINQFYhMIDiAvyexCvmwiP6PxDnQsIbCAgBGAQkDQAtgXEBIAyqAQAVAlETDBIANgTEBIAOghgAEANgQkA2BkQPcA2AFQZEAVAwCnIDQLCQWgA2AosLCAvy2DLYAYBkQQkEwx2DDYEeQGAWgGRBaAJowVBuDBUCyBRAHIAVBMMWgpyBmjBUAYBMQMqi2BCQBgA2B
2IQwAvz8ChgEwxMCjYHYNuDdvC2A9wZEBIBMQBgCyBCQLYAVAyqLExi4cgPgvXRCQdgx4L4CqQuALfwLYGDBTChUByAcoLDFoAsgTAoYA1oQgucKqChUC2ABABgDKpgwbgx8KBAHIFdBLCi/NoAtgUgsxA9YAQu2ANgTDGDBODSiyyAmjZAoEAlCtAA2A/CrQrwKtgLIDUKwinCEgBmjEIsotaAK6AYLdCnIA4McgLIG8LkCsqkJABALgtyBgigQExAMC9QviKyqQopAKyQWgHYM6isqkwwuChgCxNqiwYp0KVseIoi5xClbBIB2DegtyLMQHKEwxeiqADGL28BgEJA9CqAp0K0AEgCyB2DMqkxZMMcAqyBqChUA2AcgAwogKFCmgsaL1iyAEwx9C3WCoLGig4oQKcgJosJBkQUQpIBkQTYtULtgFQCyB/CugDKoXC5EEsKrCq3xIFf8xTI/o/wJ4txpysGgX6hrPAASMxvgIAoABvVIkMgkAWwDiFlsAihahWAdgFbd38QyAsTtjAkuRcJgWgBJL/mWwBpKFk/5QJKkAb5MsYVUAtjZK8mDkqKBDIQaFoBOHJYGsws+JeEQBPIUQDNA2S+SEdQCSkUs4d3AXAC8AZSomnlK0ORaCVKVUMUppA6QM5Q1K5S2ktvMCS2vDoACwK8GkBJStkuMgdSoUqSjjS3jzGi2So5EON8ShUwVNDIaz09BImO0upBLTQ0o0RjSwyAdKvS3dx64tS68HDKvSwyEqJggZMDtLjS+wCeom5Rwk+ybAFQHfySrJPCRFeI7w2fzUAYEV5UF3WMu9KBIEgDtLqBIDHiAwyw4yFLH4POHFBjSv0rYA7S5cStMzlRdwVMAAX3LLPSiMp9LbgtsqrKuAQyFVKEkaz3rKIyyMuz5+SuktnL4y0+BJwnMO0uESfaBJA8saiLby8w6MVNgYAPLbVQkQpcOHDGgbEQ9yOpnmeaUYg0HPxzSQtQ8tx9MQPaiIjBZuFIXiRfHawDsA8KO8FHFVhLb1t47QIZJKQIvV5Oxx1eKNjVK4srBACBOvNIEtEDgAMHQBeIRvl09MAM1VaF+hCJGytj0t0BnKhyrDw1oky8cqxRtwl4D35JozLM8AXs0MyqTL3a0BJZK07WEuB0AZAG3K0CaZGBDqVCcTARcvIBmcBLgIYSCJNQAwmO8SAI8u1APLIsHCFghfipBD7AJsFRi8jFEkCl9ZYSJiZAQCwyPKwwdvDugCwnEhEQyHYRUyEhKnIQOEKEk0BIzG5X3LKRx8FGnLKhSysurLnAWsqIACKuMqmR9nCKGjLFSpcqbLAuDwFbL/S8cs3Kqyw437KPShst3dfSkKt3dxSi1QncEgFqCeJSADyu9Koy00sFKhyhMrCglIAMu3ckqhQFSqYKyAEhK6jZEAABSaBgZFtVYqGwBVodMHYAs8sRAyCFHC+2RdkK1AEwwWjKqocroqwyCIrECJkDtKUhMUV60kq/MLKRdk8aIKg4cgrPsBvERAAVB34Y/HYTX5Dqtdd+qpcucrxymsvVB0qxsrpBmyoKpHK4q4UsKqpSnsqKBey1IkuRwywyCSjbAQMq7KSK3d0xBVodgzQAFQWgH0L4CrIBRBmjaIqyB8i5Aswwfq4wrQBMMTEA2BIuHqpWK/C1aCKpaCzDF8KAatAr8KCCgQEhKMCgiserzoXAFsBJyscverqi76u+rUC9gwVBFi3IFEKIijYGxrIuPQuRB2DOQrQB2DLDF+LOayiz+KVsfQrKprisYrYNtin6u2BqajYA2A8a8dyXgUq5QFIAGHNHUGsSawcqFLp0guhfR+I2jCagv0JrLHR9INkpVrvSrw1AwUcl8DZKtgRyt3cZqlIUiAEqs00QA2Smiy9LIquMrVqz2QugvY2qJkA6oawLqlZQDagauNrTa2SDZLMQS2sMhra22sur4WR2oiryywyDVr1+X/jYEgBWQADqlyoOqfAzargDKpw6yOtaA7ayavNq46w4wTrZ4hqndqNavtF1dva69ky5U69OqHLM6gdAdquAHIDzrg6xABtqC66OviBW6pEBLrXa8uvwtGVX82M1/zBuq4BDaoUubqtwNkqyAO6rOtkhu62AELqrqrgCyBB670rdriOc9k1qS6bWveJ/aqesDqRgcUBNql6/uswxF6lupXq16mOq4BI6Z2vjrW7U72PrIAaet3dZ66pFDrsTAavzrV63utoF56reqFK+g253frP6wyG/rs6yAEJAb6rcDvqgG/uu0Fn60upYjHYtiP1qT6jOrPqPAC+pbq2S+Bv/rO6pBsSr16uBtAbd3JjzmDI4rJOwaP60+vzkCGuesfriGpcoAb76vutDrwktBqHrZzVjzHqR60IEgamG8+s7r56v+o4bSGqOvIaH6uBoqo+G7euHq6pJ0JuV140RtwbmGiRsfqEG6pDIb7a2OqUbValRuilZYjZCW51G+uv6hZsTQAYaoGmBpDquAdhqHLOG5BqIaIq1Ilurbq2EtRK2AdEukhbgivmRL9AIAA= -->\n\n<!-- internal state end -->"},"request":{"retryCount":1}},"response":{"url":"https://api.github.com/repos/potpie-ai/potpie/issues/comments/3781307221","status":401,"headers":{"access-control-allow-origin":"*","access-control-expose-headers":"ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset","connection":"close","content-security-policy":"default-src 'none'","content-type":"application/json; charset=utf-8","date":"Thu, 05 Feb 2026 10:00:20 GMT","referrer-policy":"origin-when-cross-origin, strict-origin-when-cross-origin","server":"github.com","strict-transport-security":"max-age=31536000; includeSubdomains; preload","vary":"Accept-Encoding, Accept, X-Requested-With","x-content-type-options":"nosniff","x-frame-options":"deny","x-github-media-type":"github.v3; 
format=json","x-github-request-id":"B078:B906A:3671755:EE4FD84:69846A34","x-xss-protection":"0"},"data":{"message":"Requires authentication","documentation_url":"https://docs.github.com/rest","status":"401"}}}

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
app/modules/repo_manager/repo_manager.py (2)

1304-1369: ⚠️ Potential issue | 🟠 Major

Metadata keys ignore user_id/unique_id, causing collisions.
_write_metadata_entry() keys only by repo_name/branch/commit, so multiple users (or multiple unique worktrees for the same ref) overwrite each other. That breaks multi-tenant isolation and eviction accuracy.

🛠️ Suggested direction (propagate user_id/unique_id into metadata key)
-def _metadata_filename(self, branch: Optional[str], commit_id: Optional[str]) -> str:
+def _metadata_filename(
+    self,
+    branch: Optional[str],
+    commit_id: Optional[str],
+    user_id: Optional[str] = None,
+    unique_id: Optional[str] = None,
+) -> str:
     parts: List[str] = []
     if branch:
         parts.append(f"branch={branch}")
     if commit_id:
         parts.append(f"commit={commit_id}")
+    if user_id:
+        parts.append(f"user={user_id}")
+    if unique_id:
+        parts.append(f"unique={unique_id}")
     if not parts:
         parts.append("default")

Update _metadata_path, _load_metadata_entry, _write_metadata_entry, _delete_metadata_entry, and callers (e.g., register_repo, update_last_accessed, remove_worktree) to thread through user_id and unique_id.
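
For illustration, a minimal sketch of the composite key this direction produces, assuming metadata lives in flat per-entry files named from the key parts (the separator and file layout here are assumptions, not the repository's actual scheme):

from typing import List, Optional

def metadata_filename(
    branch: Optional[str],
    commit_id: Optional[str],
    user_id: Optional[str] = None,
    unique_id: Optional[str] = None,
) -> str:
    # Compose the key so two users (or two unique worktrees of the same ref)
    # can never map to the same metadata file.
    parts: List[str] = []
    if branch:
        parts.append(f"branch={branch}")
    if commit_id:
        parts.append(f"commit={commit_id}")
    if user_id:
        parts.append(f"user={user_id}")
    if unique_id:
        parts.append(f"unique={unique_id}")
    if not parts:
        parts.append("default")
    return "__".join(parts) + ".json"

# Two users on the same branch now get distinct entries:
# metadata_filename("main", None, user_id="u1") -> "branch=main__user=u1.json"
# metadata_filename("main", None, user_id="u2") -> "branch=main__user=u2.json"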


517-557: ⚠️ Potential issue | 🔴 Critical

Code uses du -s (blocks) but treats output as bytes, breaking eviction thresholds.

The comment says "Use 'du -sb'" but the code runs ["du", "-s", path]. On Linux, du -s returns disk usage in blocks (typically 1024 bytes/block), not bytes. The current code treats this as bytes, undercounting volume by ~881x. This will severely distort eviction calculations—repos won't trigger eviction until usage far exceeds the configured limit.

🛠️ Suggested fix
-            result = subprocess.run(
-                ["du", "-s", path],
+            result = subprocess.run(
+                ["du", "-sb", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )

This aligns the code with the documented intent and returns bytes directly.
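
As a rough sketch of the byte-accurate direction (illustrative only, not the repository's actual helper): GNU du -sb prints bytes directly, and on platforms without -b (e.g. macOS) a pure-Python walk yields the same units.

import os
import subprocess

def directory_size_bytes(path: str) -> int:
    """Return the size of path in bytes, preferring GNU 'du -sb'."""
    try:
        result = subprocess.run(
            ["du", "-sb", path],
            capture_output=True, text=True, timeout=60, check=True,
        )
        # Output looks like "123456\t/path"; take the first field.
        return int(result.stdout.split()[0])
    except (subprocess.SubprocessError, OSError, ValueError, IndexError):
        # Fallback: sum apparent file sizes by walking the tree.
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    continue
        return total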

🤖 Fix all issues with AI agents
In `@app/modules/repo_manager/repo_manager.py`:
- Around line 477-513: _format_repo_info currently parses metadata but discards
a top-level "type" field, making bare repos invisible to list_repos, which
filters on metadata["type"]; update _format_repo_info to merge
raw_data.get("type") into the parsed metadata (only setting metadata["type"]
when it is missing or empty) before returning, so the returned dict's "metadata"
always contains the repo type used by list_repos; reference symbols:
_format_repo_info, raw_data, metadata, list_repos.
- Around line 981-1053: The remove_worktree method currently always deletes
metadata and returns True even when worktree removal fails; change it to track
removal success and only delete metadata (via _delete_metadata_entry) and return
True when the worktree was actually removed. Concretely, in remove_worktree use
a boolean (e.g., worktree_removed = False), set it to True when subprocess.run
returns returncode == 0, and if you fall back to shutil.rmtree, attempt removal
and set worktree_removed based on whether worktree_path no longer exists
(os.path.exists). Move the metadata deletion block to run only if
worktree_removed is True; otherwise log the failure and return False. Ensure you
still catch TimeoutExpired/Exception, log them, attempt rmtree, and set
worktree_removed accordingly before deciding on metadata deletion and the return
value. A minimal sketch of this pattern appears after this list.
- Around line 561-602: The metadata's local_path is being set to the repo root
via _get_repo_local_path which can point to a non-git container; change
_update_bare_repo_metadata so local_path is set to the actual bare repo
directory (use _get_bare_repo_path(repo_name) and stringify it) and only fall
back to _get_repo_local_path if the bare repo path does not exist; keep the rest
of the logic (volume calculation via _get_bare_repo_path, writing via
_write_metadata_entry) unchanged and preserve the function name
_update_bare_repo_metadata.
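
As referenced in the remove_worktree item above, a minimal sketch of the success-tracking pattern; the helper callable below stands in for RepoManager._delete_metadata_entry, and the exact signatures are assumptions:

import os
import shutil
import subprocess
from typing import Callable

def remove_worktree_safely(
    bare_repo_path: str,
    worktree_path: str,
    delete_metadata_entry: Callable[[], None],
) -> bool:
    """Remove a worktree; drop its metadata entry only if removal succeeded."""
    worktree_removed = False
    try:
        result = subprocess.run(
            ["git", "-C", bare_repo_path, "worktree", "remove", "--force", worktree_path],
            capture_output=True, text=True, timeout=60,
        )
        worktree_removed = result.returncode == 0
    except subprocess.TimeoutExpired:
        pass  # timed out; fall back to a plain directory delete below
    except Exception:
        pass  # git itself failed; fall back as well
    if not worktree_removed:
        shutil.rmtree(worktree_path, ignore_errors=True)
        worktree_removed = not os.path.exists(worktree_path)
    if not worktree_removed:
        return False          # keep the metadata so a later cleanup can retry
    delete_metadata_entry()   # safe now: the directory is actually gone
    return True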

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
app/modules/parsing/graph_construction/parsing_controller.py (1)

58-70: ⚠️ Potential issue | 🟡 Minor

~ paths are detected but not expanded.

os.path.isdir(os.path.expanduser(...)) is used for detection, but the stored repo_path keeps ~, which later fails os.path.exists.

🛠️ Proposed fix
             if is_path:
                 # Move from repo_name to repo_path
-                repo_details.repo_path = repo_details.repo_name
+                repo_details.repo_path = os.path.expanduser(repo_details.repo_name)
                 repo_details.repo_name = repo_details.repo_path.split("/")[-1]
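
A small standalone illustration of the failure mode (the path below is made up): the tilde form passes the expanduser-based detection but fails a plain existence check later, which is why the expanded path should be stored.

import os

raw = "~/projects/potpie"                          # value submitted as repo_name
detected = os.path.isdir(os.path.expanduser(raw))  # True when the directory exists
stored_ok = os.path.exists(raw)                    # False: "~" is not expanded here
repo_path = os.path.expanduser(raw)                # store this expanded form instead
print(detected, stored_ok, repo_path)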
app/modules/parsing/graph_construction/parsing_service.py (1)

425-510: ⚠️ Potential issue | 🔴 Critical

duplicate_graph is de‑indented and no longer a class method.

As written, this becomes a module‑level function and self.duplicate_graph(...) will fail at runtime.

🛠️ Proposed fix
-async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
-    await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
+    async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
+        await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
     ...
app/modules/repo_manager/repo_manager.py (1)

531-543: ⚠️ Potential issue | 🔴 Critical

Volume units mismatch: du -s returns blocks, not bytes.

The function docstring specifies "Size in bytes" but the code runs du -s, which reports block counts (1024-byte units by default on GNU coreutils, 512-byte under POSIXLY_CORRECT), not bytes. This causes directory sizes to be underreported by roughly three orders of magnitude, breaking volume tracking and eviction limits. The comment claims "Use 'du -sb'" but the actual command omits the -b flag.

Replace with du -b or du -sb to get byte-accurate results.

🛠️ Proposed fix
-            # Use 'du -sb' to get size in bytes (summary, bytes)
-            # This is more accurate and faster than walking the directory tree
-            result = subprocess.run(
-                ["du", "-s", path],
+            # Use 'du -b' to get size in bytes
+            result = subprocess.run(
+                ["du", "-b", path],
                 capture_output=True,
                 text=True,
                 timeout=60,  # 60 second timeout for large directories
             )
🤖 Fix all issues with AI agents
In `@app/modules/conversations/conversation/conversation_service.py`:
- Around line 267-275: The done-callback for fetch_task calls t.exception()
twice and doesn't guard against CancelledError; change the callback used with
asyncio.create_task(_fetch_structure_with_timeout()) so it first checks
t.cancelled() and returns early for cancelled tasks, then calls t.exception()
once (catching CancelledError if needed), stores the result in a variable, and
only calls logger.exception("Failed to fetch project structure", exc_info=...)
when a real exception is present; refer to fetch_task,
_fetch_structure_with_timeout, and Task.exception()/Task.cancelled() when making
this change. A minimal sketch of this callback shape appears after this list.

In `@app/modules/repo_manager/repo_manager.py`:
- Around line 951-963: register_repo is currently keyed only by (repo_name,
branch, commit_id) so multiple unique worktrees (metadata with unique_id and
differing user_id) overwrite each other; update the metadata keying logic used
by register_repo and the helpers that build storage paths (e.g.,
_metadata_filename / _metadata_path) to include user_id and/or unique_id (or
introduce a dedicated namespace for worktree entries) so each worktree metadata
entry is stored under a unique composite key; change calls that create metadata
(see metadata dict in this snippet) to ensure unique_id and user_id are present
and adjust lookup/cleanup logic to account for the extended key to preserve
multi‑tenant isolation.
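
As noted in the fetch_task item above, a minimal sketch of the corrected callback shape, assuming the coroutine name _fetch_structure_with_timeout from the prompt:

import asyncio
import logging

logger = logging.getLogger(__name__)

def _on_fetch_done(t: asyncio.Task) -> None:
    if t.cancelled():
        return  # cancelled tasks carry no exception to report
    try:
        exc = t.exception()  # call exactly once and keep the result
    except asyncio.CancelledError:
        return
    if exc is not None:
        logger.exception("Failed to fetch project structure", exc_info=exc)

# Usage inside the conversation service (assumed context):
# fetch_task = asyncio.create_task(_fetch_structure_with_timeout())
# fetch_task.add_done_callback(_on_fetch_done)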
🧹 Nitpick comments (1)
app/modules/parsing/graph_construction/parsing_controller.py (1)

273-280: db parameter is unused in handle_new_project.

Ruff flags this (ARG004). Consider removing it from the signature or using it (e.g., for async helpers).

Comment on lines +951 to +963
# Register worktree in metadata
metadata = {"type": self._TYPE_WORKTREE, "is_commit": is_commit}
if unique_id:
    metadata["unique_id"] = unique_id

self.register_repo(
    repo_name=repo_name,
    local_path=str(worktree_path),
    branch=None if is_commit else ref,
    commit_id=ref if is_commit else None,
    user_id=user_id,
    metadata=metadata,
)

⚠️ Potential issue | 🟠 Major

Unique worktree metadata overwrites existing entries.

register_repo() keys metadata by (repo_name, branch, commit_id) only, so multiple unique worktrees for the same ref (or multiple users) clobber each other. This breaks multi‑tenant isolation and cleanup targeting.

Suggested direction: include user_id/unique_id in the metadata key (e.g., extend _metadata_filename/_metadata_path) or store unique worktree entries under a dedicated namespace.

🤖 Prompt for AI Agents
In `@app/modules/repo_manager/repo_manager.py` around lines 951 - 963,
register_repo is currently keyed only by (repo_name, branch, commit_id) so
multiple unique worktrees (metadata with unique_id and differing user_id)
overwrite each other; update the metadata keying logic used by register_repo and
the helpers that build storage paths (e.g., _metadata_filename / _metadata_path)
to include user_id and/or unique_id (or introduce a dedicated namespace for
worktree entries) so each worktree metadata entry is stored under a unique
composite key; change calls that create metadata (see metadata dict in this
snippet) to ensure unique_id and user_id are present and adjust lookup/cleanup
logic to account for the extended key to preserve multi‑tenant isolation.

@sonarqubecloud

sonarqubecloud bot commented Feb 5, 2026

@dhirenmathur dhirenmathur merged commit 255f0a6 into main Feb 5, 2026
1 check passed
@Dsantra92 Dsantra92 deleted the repo_manager branch February 5, 2026 10:35
@Dsantra92 Dsantra92 restored the repo_manager branch February 5, 2026 10:35
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/parsing/graph_construction/parsing_service.py (1)

431-520: ⚠️ Potential issue | 🔴 Critical

duplicate_graph is no longer inside the class.

It is currently module-scoped but uses self, which will raise at call sites like parsing_service.duplicate_graph(...).

🧭 Suggested fix (indent into class)
-async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
-    await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
+    async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
+        await self.search_service.clone_search_indices(old_repo_id, new_repo_id)
     ...
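
A reduced illustration of why the de-indented version breaks (hypothetical class, not the real ParsingService):

class Service:
    pass

async def duplicate_graph(self, old_repo_id: str, new_repo_id: str):
    ...

svc = Service()
# svc.duplicate_graph("a", "b") raises AttributeError: 'Service' object has no
# attribute 'duplicate_graph', because the coroutine is defined at module scope
# and never becomes a bound method of the class.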
🤖 Fix all issues with AI agents
In `@app/modules/parsing/graph_construction/parsing_controller.py`:
- Around line 286-292: The parameter named db: Session is unused in the function
that takes repo_details: ParsingRequest, user_id, user_email, new_project_id,
project_manager: ProjectService, db: Session; rename db to _db to silence lint
warnings (or if callers can be updated, remove the parameter entirely) and
update any call sites or references accordingly so the signature and callers
remain consistent.
- Around line 181-197: The code creates an async email task regardless of
whether user_email is present, causing background errors when user_email is
None; update the block around create_task/EmailHelper().send_email so it only
constructs email_task and registers _on_email_done if user_email (or a truthy
email) is available — i.e., wrap the create_task call, the
definition/registration of _on_email_done, and email_task.add_done_callback in a
conditional (if user_email:) so no task or callback is created when user_email
is missing.

Comment on lines +181 to +197
email_task = create_task(
    EmailHelper().send_email(
        user_email, repo_name, repo_details.branch_name
    )
)

def _on_email_done(t: asyncio.Task) -> None:
    if t.cancelled():
        return
    try:
        exc = t.exception()
    except asyncio.CancelledError:
        return
    if exc is not None:
        logger.exception("Failed to send email", exc_info=exc)

email_task.add_done_callback(_on_email_done)

⚠️ Potential issue | 🟡 Minor

Skip email task when user_email is missing.

user_email is now optional; creating a send task with None will just log background errors. Guard it.

💡 Suggested fix
-                    email_task = create_task(
-                        EmailHelper().send_email(
-                            user_email, repo_name, repo_details.branch_name
-                        )
-                    )
+                    if user_email:
+                        email_task = create_task(
+                            EmailHelper().send_email(
+                                user_email, repo_name, repo_details.branch_name
+                            )
+                        )
 
-                    def _on_email_done(t: asyncio.Task) -> None:
-                        if t.cancelled():
-                            return
-                        try:
-                            exc = t.exception()
-                        except asyncio.CancelledError:
-                            return
-                        if exc is not None:
-                            logger.exception("Failed to send email", exc_info=exc)
-
-                    email_task.add_done_callback(_on_email_done)
+                        def _on_email_done(t: asyncio.Task) -> None:
+                            if t.cancelled():
+                                return
+                            try:
+                                exc = t.exception()
+                            except asyncio.CancelledError:
+                                return
+                            if exc is not None:
+                                logger.exception("Failed to send email", exc_info=exc)
+
+                        email_task.add_done_callback(_on_email_done)
🤖 Prompt for AI Agents
In `@app/modules/parsing/graph_construction/parsing_controller.py` around lines
181 - 197, The code creates an async email task regardless of whether user_email
is present, causing background errors when user_email is None; update the block
around create_task/EmailHelper().send_email so it only constructs email_task and
registers _on_email_done if user_email (or a truthy email) is available — i.e.,
wrap the create_task call, the definition/registration of _on_email_done, and
email_task.add_done_callback in a conditional (if user_email:) so no task or
callback is created when user_email is missing.

Comment on lines 286 to 292
     repo_details: ParsingRequest,
     user_id: str,
-    user_email: str,
+    user_email: str | None,
     new_project_id: str,
     project_manager: ProjectService,
-    db: AsyncSession,
+    db: Session,
 ):

⚠️ Potential issue | 🟡 Minor

Unused db parameter.

If this is intentionally unused, consider renaming to _db to avoid lint noise.

🧹 Suggested fix
-        db: Session,
+        _db: Session,
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Current lines in the PR diff:

     repo_details: ParsingRequest,
     user_id: str,
-    user_email: str,
+    user_email: str | None,
     new_project_id: str,
     project_manager: ProjectService,
-    db: AsyncSession,
+    db: Session,
 ):

Suggested replacement:

    repo_details: ParsingRequest,
    user_id: str,
    user_email: str | None,
    new_project_id: str,
    project_manager: ProjectService,
    _db: Session,
):
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 291-291: Unused static method argument: db

(ARG004)

🤖 Prompt for AI Agents
In `@app/modules/parsing/graph_construction/parsing_controller.py` around lines
286 - 292, The parameter named db: Session is unused in the function that takes
repo_details: ParsingRequest, user_id, user_email, new_project_id,
project_manager: ProjectService, db: Session; rename db to _db to silence lint
warnings (or if callers can be updated, remove the parameter entirely) and
update any call sites or references accordingly so the signature and callers
remain consistent.
