
Conversation


yashkrishan (Contributor) commented Dec 31, 2025

  • Add INFERRING status to ProjectStatusEnum
  • Set project status to INFERRING at start of inference, READY on completion
  • Pass project_status to ChatContext for all agents
  • Exclude embedding-dependent tools (ask_knowledge_graph_queries, get_nodes_from_tags) during INFERRING
  • Add graceful error handling for vector index queries during INFERRING
  • Update all system agents and custom agents to respect INFERRING status
  • Add database migration for INFERRING status

This allows users to chat with the AI agent even while AI enrichment (embeddings, docstrings) is still in progress, with limited tool access.
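Taken together, these bullets amount to a small surface change. A minimal sketch of that shape — the enum value and the is_inferring helper follow the PR summary, while the remaining ChatContext fields are trimmed for illustration:

from enum import Enum
from typing import Optional

from pydantic import BaseModel


class ProjectStatusEnum(str, Enum):
    # Other statuses elided; INFERRING sits between PROCESSING and READY
    PROCESSING = "processing"
    INFERRING = "inferring"  # AI enrichment (embeddings, docstrings) in progress
    READY = "ready"
    ERROR = "error"


class ChatContext(BaseModel):
    project_id: str
    project_status: Optional[str] = None  # drives conditional tool selection

    def is_inferring(self) -> bool:
        # True while AI enrichment is still running for this project
        return self.project_status == "inferring"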

Summary by CodeRabbit

  • New Features

    • Added an "inferring" project status to track inference lifecycle.
    • Agents and tools now respect project status and automatically exclude embedding-dependent tools during the inferring phase.
  • Bug Fixes

    • Improved error handling for embedding- and query-related operations to degrade gracefully (return empty results instead of failing).
    • Project status updates reflect inference outcomes (inferring → ready or error).



coderabbitai bot commented Dec 31, 2025

Walkthrough

Adds a new project status "inferring", propagates project_status into ChatContext and agent flows, excludes embedding-dependent tools when project is INFERRING, adds defensive error handling in KG tools, and updates InferenceService and DB migration to manage INFERRING→READY/ERROR state transitions.

Changes

Cohort / File(s) — Summary

  • Project status & DB migration (app/modules/projects/projects_schema.py, app/alembic/versions/20251209_add_inferring_status.py): Adds INFERRING = "inferring" to ProjectStatusEnum; the Alembic migration rebuilds the projects check constraint to include 'inferring' (upgrade and downgrade drop and recreate the constraint).
  • Chat context (app/modules/intelligence/agents/chat_agent.py): Adds project_status: Optional[str] = None and a method is_inferring(self) -> bool to ChatContext, which now carries the project status.
  • Conversation service (app/modules/conversations/conversation/conversation_service.py): Retrieves the project status and injects project_status into the ChatContext used for agent invocation and streaming.
  • System agents (app/modules/intelligence/agents/chat_agents/system_agents/*: blast_radius_agent.py, code_gen_agent.py, debug_agent.py, low_level_design_agent.py, qna_agent.py): _build_agent signatures updated to accept ctx: ChatContext = None; build/run paths propagate the enriched ctx; agents compute exclude_embedding_tools = ctx.is_inferring() and pass the flag into ToolService to omit embedding-dependent tools.
  • Custom/runtime agents (app/modules/intelligence/agents/custom_agents/runtime_agent.py): _build_agent now accepts an optional ctx; run/run_stream enrich the ctx, then build and run with it; embedding-dependent tools are excluded when ctx indicates INFERRING.
  • Tool service (app/modules/intelligence/tools/tool_service.py): Adds an EMBEDDING_DEPENDENT_TOOLS set and extends get_tools(..., exclude_embedding_tools: bool = False) to filter out embedding-dependent tools when requested (sketched just after this table).
  • Knowledge-graph tools (app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py, .../get_nodes_from_tags_tool.py): Adds try/except around vector/graph queries, logs warnings on failure, and returns an empty list instead of raising when embeddings are unavailable; safer DB generator handling.
  • Inference service (app/modules/parsing/knowledge_graph/inference_service.py): Sets the project status to INFERRING at the start of run_inference, updates it to READY on success or ERROR on failure, and adds defensive handling in query_vector_index.
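As referenced in the Tool service row above, the exclusion filter can be pictured as follows. This is a sketch, not the actual implementation: the real get_tools signature in tool_service.py differs, and only the flag name and the two tool names come from this review.

from typing import ClassVar, Dict, List, Set


class ToolService:
    # Tools that need AI-inferenced content (embeddings, tags) and are
    # therefore skipped while a project is INFERRING.
    EMBEDDING_DEPENDENT_TOOLS: ClassVar[Set[str]] = {
        "ask_knowledge_graph_queries",
        "get_nodes_from_tags",
    }

    def __init__(self, tools: Dict[str, object]):
        self.tools = tools  # name -> tool instance

    def get_tools(
        self, names: List[str], exclude_embedding_tools: bool = False
    ) -> List[object]:
        # Drop embedding-dependent tools only when the caller asks for it,
        # so the default behavior stays backward compatible.
        if exclude_embedding_tools:
            names = [n for n in names if n not in self.EMBEDDING_DEPENDENT_TOOLS]
        return [self.tools[n] for n in names]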

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant ConvSvc as Conversation Service
    participant Inference as Inference Service
    participant Agent
    participant ToolSvc as ToolService
    participant KG as Knowledge Graph

    Client->>ConvSvc: send message (project_id, user input)
    ConvSvc->>Inference: get_project_from_db_by_id(project_id)
    Inference-->>ConvSvc: project_status (e.g., "inferring")
    ConvSvc->>Agent: build ChatContext(project_status=...)
    
    rect rgb(230,240,255)
    Note over Agent,ToolSvc: Agent initialization with context
    Agent->>Agent: _build_agent(ctx)
    Agent->>Agent: ctx.is_inferring()
    alt ctx.is_inferring == true
        Agent->>ToolSvc: get_tools(..., exclude_embedding_tools=true)
        ToolSvc-->>Agent: tools (embedding-dependent excluded)
    else
        Agent->>ToolSvc: get_tools(..., exclude_embedding_tools=false)
        ToolSvc-->>Agent: tools (all)
    end
    end

    Agent->>KG: call vector/graph query (if tool used)
    alt KG query raises (embeddings not ready)
        KG-->>Agent: exception -> caught, return []
        Agent-->>Client: graceful result (empty/placeholder)
    else
        KG-->>Agent: query results
        Agent-->>Client: response with results
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes


Suggested reviewers

  • dhirenmathur
  • nndn
  • ASCE-D

Poem

🐰 A tiny hop as "inferring" takes flight,
Context carried softly through agents tonight,
Embedding tools nap until vectors are ready,
Queries return gentle — no crashes, just steady. 🥕✨

Pre-merge checks

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 14.29%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed — The title directly and specifically describes the main feature, enabling chat before inference completes, which matches the PR's core objective of letting users interact with agents during the INFERRING status.


coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (9)
app/modules/intelligence/tools/tool_service.py (1)

100-105: Annotate the class-level set with ClassVar to satisfy static analysis.

The static analysis tool (Ruff RUF012) correctly flags that mutable class attributes should be annotated with typing.ClassVar to indicate they are not instance attributes.

🔎 Proposed fix
-from typing import Dict, List
+from typing import ClassVar, Dict, List, Set
 
 class ToolService:
     # Tools that depend on embeddings/docstrings and should be disabled during INFERRING status
     # These tools require AI-inferenced content (embeddings, tags) that doesn't exist during inference
-    EMBEDDING_DEPENDENT_TOOLS = {
+    EMBEDDING_DEPENDENT_TOOLS: ClassVar[Set[str]] = {
         "ask_knowledge_graph_queries",  # Uses vector search which requires embeddings
         "get_nodes_from_tags",  # Uses AI-generated tags which may not exist yet
     }
app/modules/intelligence/agents/chat_agent.py (1)

63-65: Consider using a constant for the status string.

The method uses a hardcoded "inferring" string. While importing ProjectStatusEnum here could introduce coupling, consider defining a module-level constant or documenting the coupling to ProjectStatusEnum.INFERRING.value to reduce the risk of drift.

🔎 Alternative approach
+# Status value must match ProjectStatusEnum.INFERRING from projects_schema
+_INFERRING_STATUS = "inferring"
+
 class ChatContext(BaseModel):
     ...
     def is_inferring(self) -> bool:
         """Check if the project is still in INFERRING state (AI enrichment in progress)"""
-        return self.project_status == "inferring"
+        return self.project_status == _INFERRING_STATUS
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (1)

37-40: Minor inconsistency: Missing log message when excluding embedding tools.

Other agents (e.g., debug_agent.py, qna_agent.py, code_gen_agent.py, low_level_design_agent.py) log an informational message when exclude_embedding_tools is true. This agent lacks that logging, which may hinder debugging.

🔎 Proposed fix for consistency
         # Exclude embedding-dependent tools during INFERRING status
         exclude_embedding_tools = ctx.is_inferring() if ctx else False
+        if exclude_embedding_tools:
+            logger.info(
+                "Project is in INFERRING status - excluding embedding-dependent tools"
+            )

Note: You'll also need to add the logger import and setup at the top of the file:

from app.modules.utils.logger import setup_logger

logger = setup_logger(__name__)
app/modules/intelligence/agents/chat_agents/system_agents/code_gen_agent.py (1)

156-165: Minor naming inconsistency between run and run_stream.

In run(), the enriched context is stored in a new variable enriched_ctx, while in run_stream(), the original ctx is reassigned. Both approaches work correctly, but using consistent naming across methods improves readability.

🔎 Proposed fix for consistency
     async def run_stream(
         self, ctx: ChatContext
     ) -> AsyncGenerator[ChatAgentResponse, None]:
-        ctx = await self._enriched_context(ctx)
-        async for chunk in self._build_agent(ctx).run_stream(ctx):
+        enriched_ctx = await self._enriched_context(ctx)
+        async for chunk in self._build_agent(enriched_ctx).run_stream(enriched_ctx):
             yield chunk
app/modules/intelligence/agents/chat_agents/system_agents/low_level_design_agent.py (1)

149-158: Same naming inconsistency as code_gen_agent.py.

The run() method uses enriched_ctx while run_stream() reassigns to ctx. Consider aligning the naming for consistency across the codebase.

🔎 Proposed fix
     async def run_stream(
         self, ctx: ChatContext
     ) -> AsyncGenerator[ChatAgentResponse, None]:
-        ctx = await self._enriched_context(ctx)
-        async for chunk in self._build_agent(ctx).run_stream(ctx):
+        enriched_ctx = await self._enriched_context(ctx)
+        async for chunk in self._build_agent(enriched_ctx).run_stream(enriched_ctx):
             yield chunk
app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py (1)

154-163: Same naming inconsistency as other agents.

The run() method uses enriched_ctx while run_stream() reassigns to ctx. This pattern appears consistently across code_gen_agent.py, low_level_design_agent.py, and here. Consider a single refactor pass to align naming across all affected agents.

🔎 Proposed fix
     async def run_stream(
         self, ctx: ChatContext
     ) -> AsyncGenerator[ChatAgentResponse, None]:
-        ctx = await self._enriched_context(ctx)
-        async for chunk in self._build_agent(ctx).run_stream(ctx):
+        enriched_ctx = await self._enriched_context(ctx)
+        async for chunk in self._build_agent(enriched_ctx).run_stream(enriched_ctx):
             yield chunk
app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (1)

104-110: Move import to the top of the file.

The logging import should be at the top of the file with other imports rather than inside the exception handler.

🔎 Proposed fix

Add at the top of the file with other imports:

import logging

Then simplify the except block:

         except Exception as e:
-            import logging
-
             logging.warning(
                 f"Error querying graph for tags for project {project_id}: {e}"
             )
             return []
app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py (1)

81-90: Move import to the top of the file.

The logging import should be at the top of the file with other imports for consistency and to avoid repeated imports on each exception.

🔎 Proposed fix

Add at the top of the file with other imports:

import logging

Then simplify the except block:

             except Exception as e:
                 # Vector search may fail during INFERRING status (embeddings not ready)
                 # Return empty results gracefully instead of failing
-                import logging
-
                 logging.warning(
                     f"Vector search failed for project {query_request.project_id} "
                     f"(likely during INFERRING): {e}"
                 )
                 return []
app/modules/parsing/knowledge_graph/inference_service.py (1)

1113-1115: Use logging.exception for better error diagnostics.

When logging inside an exception handler, logging.exception automatically includes the stack trace, which is valuable for debugging inference failures.

🔎 Proposed fix
         except Exception as e:
-            logger.error(f"Inference failed for project {repo_id}: {e}")
+            logger.exception(f"Inference failed for project {repo_id}: {e}")
             # Set status to ERROR on failure
             await self.project_manager.update_project_status(
                 repo_id, ProjectStatusEnum.ERROR
             )
             raise
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1d33d50 and 9bf65b8.

📒 Files selected for processing (14)
  • app/alembic/versions/20251209_add_inferring_status.py
  • app/modules/conversations/conversation/conversation_service.py
  • app/modules/intelligence/agents/chat_agent.py
  • app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py
  • app/modules/intelligence/agents/chat_agents/system_agents/code_gen_agent.py
  • app/modules/intelligence/agents/chat_agents/system_agents/debug_agent.py
  • app/modules/intelligence/agents/chat_agents/system_agents/low_level_design_agent.py
  • app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py
  • app/modules/intelligence/agents/custom_agents/runtime_agent.py
  • app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py
  • app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py
  • app/modules/intelligence/tools/tool_service.py
  • app/modules/parsing/knowledge_graph/inference_service.py
  • app/modules/projects/projects_schema.py
🧰 Additional context used
🧬 Code graph analysis (9)
app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py (2)
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (1)
  • _build_agent (25-60)
app/modules/intelligence/agents/chat_agents/system_agents/debug_agent.py (1)
  • _build_agent (32-131)
app/modules/parsing/knowledge_graph/inference_service.py (2)
app/modules/projects/projects_schema.py (1)
  • ProjectStatusEnum (6-13)
app/modules/projects/projects_service.py (1)
  • update_project_status (158-167)
app/modules/intelligence/agents/chat_agents/system_agents/low_level_design_agent.py (1)
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (3)
  • _build_agent (25-60)
  • run (62-63)
  • run_stream (65-69)
app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py (2)
app/modules/parsing/knowledge_graph/inference_service.py (1)
  • query_vector_index (1121-1202)
app/modules/intelligence/agents/custom_agents/custom_agent_schema.py (1)
  • QueryResponse (144-146)
app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (3)
app/core/config_provider.py (1)
  • get_neo4j_config (50-51)
app/modules/parsing/graph_construction/code_graph_service.py (1)
  • query_graph (193-196)
app/core/database.py (1)
  • get_db (100-105)
app/modules/intelligence/agents/chat_agents/system_agents/debug_agent.py (4)
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (3)
  • _build_agent (25-60)
  • run (62-63)
  • run_stream (65-69)
app/modules/intelligence/agents/chat_agents/system_agents/low_level_design_agent.py (4)
  • _build_agent (32-129)
  • run (149-151)
  • run_stream (153-158)
  • _enriched_context (131-147)
app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py (4)
  • _build_agent (32-134)
  • run (154-156)
  • run_stream (158-163)
  • _enriched_context (136-152)
app/modules/intelligence/agents/chat_agents/system_agents/general_purpose_agent.py (3)
  • _build_agent (34-120)
  • run (122-123)
  • run_stream (125-129)
app/modules/conversations/conversation/conversation_service.py (1)
app/modules/projects/projects_service.py (1)
  • get_project_from_db_by_id (267-280)
app/modules/intelligence/agents/custom_agents/runtime_agent.py (3)
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (1)
  • _build_agent (25-60)
app/modules/intelligence/agents/chat_agent.py (2)
  • ChatContext (46-94)
  • is_inferring (63-65)
app/modules/intelligence/tools/tool_service.py (1)
  • get_tools (133-150)
app/modules/intelligence/agents/chat_agents/system_agents/code_gen_agent.py (3)
app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (1)
  • _build_agent (25-60)
app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py (1)
  • _build_agent (32-134)
app/modules/intelligence/agents/chat_agent.py (3)
  • ChatContext (46-94)
  • ChatAgent (97-108)
  • is_inferring (63-65)
🪛 Ruff (0.14.10)
app/modules/intelligence/tools/tool_service.py

102-105: Mutable class attributes should be annotated with typing.ClassVar

(RUF012)

app/modules/parsing/knowledge_graph/inference_service.py

1114-1114: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


1198-1198: Do not catch blind exception: Exception

(BLE001)

app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py

81-81: Do not catch blind exception: Exception

(BLE001)

app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py

104-104: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (17)
app/modules/projects/projects_schema.py (1)

6-13: LGTM!

The INFERRING status is well-placed in the workflow progression between PROCESSING and READY, accurately representing the state where AI enrichment (embeddings, docstrings) is in progress.

app/modules/intelligence/tools/tool_service.py (1)

133-150: LGTM!

The get_tools method correctly implements conditional exclusion of embedding-dependent tools. The default False maintains backward compatibility, and the logic cleanly skips tools in EMBEDDING_DEPENDENT_TOOLS when exclusion is requested.

app/modules/intelligence/agents/chat_agents/system_agents/debug_agent.py (2)

32-58: LGTM!

The _build_agent method correctly accepts an optional context parameter and safely handles the None case with ctx.is_inferring() if ctx else False. The informative log message aids debugging when embedding-dependent tools are excluded.


143-152: LGTM!

The run and run_stream methods correctly pass the enriched context to _build_agent, ensuring the INFERRING status is properly propagated for tool exclusion decisions.

app/modules/intelligence/agents/chat_agent.py (1)

54-55: LGTM!

The project_status field is correctly typed as Optional[str] with a sensible default. The comment clearly explains its purpose for conditional tool enabling/disabling.

app/modules/intelligence/agents/chat_agents/system_agents/blast_radius_agent.py (1)

62-69: LGTM!

The run and run_stream methods correctly pass the context to _build_agent, ensuring proper propagation of project status for tool exclusion.

app/modules/intelligence/agents/chat_agents/system_agents/code_gen_agent.py (1)

61-68: LGTM!

The INFERRING status handling correctly excludes embedding-dependent tools with appropriate logging.

app/modules/intelligence/agents/chat_agents/system_agents/low_level_design_agent.py (1)

48-55: LGTM!

The INFERRING status handling correctly excludes embedding-dependent tools with appropriate logging, consistent with other agents.

app/modules/intelligence/agents/chat_agents/system_agents/qna_agent.py (1)

52-59: LGTM!

The INFERRING status handling is well-implemented with a particularly helpful log message that lists the specific tools being excluded (ask_knowledge_graph_queries, get_nodes_from_tags), which aids debugging.

app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (1)

93-93: Good improvement: graceful fallback for null docstrings.

Using COALESCE to fall back to the first 500 characters of text when docstring is null ensures the tool returns meaningful content during INFERRING status when docstrings may not yet be generated.

app/modules/intelligence/tools/kg_based_tools/ask_knowledge_graph_queries_tool.py (1)

62-80: LGTM: Graceful error handling for INFERRING status.

The try/except block provides appropriate defense-in-depth since query_vector_index may fail when embeddings aren't ready. The comment explaining the INFERRING context is helpful for future maintainers.

app/modules/conversations/conversation/conversation_service.py (2)

673-677: LGTM: Project status retrieval for conditional tool selection.

The project status is correctly fetched and will be used to conditionally exclude embedding-dependent tools when the project is in INFERRING state. The None-safe access pattern handles cases where project_info might not exist.


736-746: Consistent project_status propagation across both agent paths.

Both the CUSTOM_AGENT and standard agent paths now correctly include project_status in the ChatContext, enabling downstream agents to check ctx.is_inferring() and conditionally exclude embedding-dependent tools.
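A sketch of the wiring these two comments describe, reusing the ChatContext shape sketched earlier in this thread. It assumes get_project_from_db_by_id returns a dict-like object or None; the actual service may return a model instead.

def build_chat_context(project_service, project_id: str) -> ChatContext:
    # Fetch project info; may be None if the project does not exist
    project_info = project_service.get_project_from_db_by_id(project_id)
    # None-safe access so a missing project simply yields no status
    status = project_info.get("status") if project_info else None
    return ChatContext(project_id=project_id, project_status=status)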

app/modules/intelligence/agents/custom_agents/runtime_agent.py (2)

55-78: LGTM: Consistent INFERRING status handling pattern.

The implementation correctly:

  1. Accepts optional ChatContext in _build_agent
  2. Safely handles None context with ctx.is_inferring() if ctx else False
  3. Logs when embedding-dependent tools are excluded
  4. Passes exclude_embedding_tools flag to get_tools()

This is consistent with the pattern used in other system agents like blast_radius_agent.
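Condensed into a standalone sketch of the four steps (self.tools_service and self.tool_names are illustrative names, and the real _build_agent also constructs and returns the agent, which is elided here):

import logging

logger = logging.getLogger(__name__)


def _build_agent(self, ctx=None):
    # Steps 1-2: optional context with a None-safe INFERRING check
    exclude_embedding_tools = ctx.is_inferring() if ctx else False
    # Step 3: log when embedding-dependent tools are dropped
    if exclude_embedding_tools:
        logger.info("Project is INFERRING - excluding embedding-dependent tools")
    # Step 4: ToolService filters the tool set when asked
    return self.tools_service.get_tools(
        self.tool_names, exclude_embedding_tools=exclude_embedding_tools
    )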


145-154: Context now flows through agent build and execution.

The updated run and run_stream methods properly enrich the context first and then pass it to _build_agent, ensuring that the project status is available for tool selection decisions.

app/modules/parsing/knowledge_graph/inference_service.py (2)

1094-1119: LGTM: Proper status lifecycle management for inference.

The status transitions are correctly implemented:

  • INFERRING at start of inference
  • READY on successful completion
  • ERROR on failure with re-raise to propagate the exception

This enables the INFERRING-aware tool exclusion feature.
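A minimal sketch of that lifecycle, reusing the enum sketched earlier; the method body is shown in isolation, and _run_enrichment is a hypothetical stand-in for the docstring/embedding generation step:

async def run_inference(self, repo_id: str) -> None:
    # Mark the project as INFERRING before enrichment begins
    await self.project_manager.update_project_status(
        repo_id, ProjectStatusEnum.INFERRING
    )
    try:
        await self._run_enrichment(repo_id)  # hypothetical enrichment step
    except Exception:
        # ERROR on failure, then re-raise so callers still see the exception
        await self.project_manager.update_project_status(
            repo_id, ProjectStatusEnum.ERROR
        )
        raise
    await self.project_manager.update_project_status(
        repo_id, ProjectStatusEnum.READY
    )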


1128-1134: LGTM: Defensive error handling with clear documentation.

The updated docstring clearly explains that this method may fail during INFERRING status and that callers should handle errors gracefully. The try/except block with empty list return provides appropriate resilience.

Also applies to: 1198-1202

…leak

- Add 'processing' status to check constraint in migration to match ProjectStatusEnum
- Fix DB session leak in get_nodes_from_tags_tool by properly managing generator lifecycle
- Ensure generator.close() is called in finally block to trigger cleanup
@sonarqubecloud

Quality Gate failed

Failed conditions
13.2% Duplication on New Code (required ≤ 3%)

See analysis details on SonarQube Cloud


coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 0

♻️ Duplicate comments (1)
app/alembic/versions/20251209_add_inferring_status.py (1)

33-43: Downgrade data migration issue already flagged.

A previous review identified that this downgrade will fail with a constraint violation if any projects have status = 'inferring' at rollback time, suggesting to "Add a data migration step to either delete or reassign records with 'inferring' status before dropping the constraint." This remains unaddressed.

Additionally, the comment on lines 38-39 has the same misleading wording as the upgrade (states 'processing' is "included" when it's being added). Consider aligning the comment with the suggested rewording in the upgrade function.

Based on past review comments.
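One hedged way to address that suggestion is a data step ahead of the constraint swap. The table and column names come from this migration; the fallback status is an assumption, since the PR does not specify one.

from alembic import op


def downgrade() -> None:
    # Data step first: reassign rows that would violate the restored constraint.
    # 'ready' is an assumed fallback; 'error' would be equally defensible.
    op.execute("UPDATE projects SET status = 'ready' WHERE status = 'inferring'")
    # ...then drop and recreate the narrower check constraint as the
    # migration already does...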

🧹 Nitpick comments (2)
app/alembic/versions/20251209_add_inferring_status.py (1)

20-30: Consider splitting 'processing' fix into a separate migration.

This migration makes two distinct changes: adding 'inferring' (the PR objective) and adding 'processing' (fixing a pre-existing discrepancy). A previous review suggested that the 'processing' status "should be addressed in a separate migration" to maintain clear migration history and easier rollback granularity.

Additionally, the comment on lines 25-26 states that 'processing' is "included," which is misleading—it's actually being added to the constraint for the first time. Consider rewording to: "Note: 'processing' is also added to align with ProjectStatusEnum.PROCESSING."

Verification script to check all enum values are covered
#!/bin/bash
# Description: Verify all ProjectStatusEnum values are included in the migration constraint

# Extract enum values from schema
echo "=== ProjectStatusEnum values ==="
rg -A 10 "class ProjectStatusEnum" app/modules/projects/projects_schema.py

# Check migration constraint values
echo -e "\n=== Migration constraint values ==="
rg "status IN" app/alembic/versions/20251209_add_inferring_status.py

Based on past review comments.

app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (1)

95-124: Great fix for the resource leak, but refine exception handling in the finally block.

The refactor correctly addresses the DB session leak by managing the generator lifecycle explicitly. However, the exception handling in the finally block catches the wrong exceptions.

When gen.close() is called:

  • It sends GeneratorExit INTO the generator (not raised BY close())
  • If get_db()'s finally block (which calls db.close()) raises an exception, that exception propagates OUT of gen.close()
  • The current handlers (lines 120-123) won't catch exceptions from db.close(), which could mask the original exception from the try block
🔎 Recommended fix for exception handling in finally block
         finally:
             # Close the generator to trigger its finally block, which closes the DB session
             if gen:
                 try:
                     gen.close()
-                except (GeneratorExit, StopIteration):
-                    # GeneratorExit is expected when closing a generator
-                    # StopIteration may occur if generator is already exhausted
-                    pass
+                except Exception as e:
+                    import logging
+                    logging.warning(
+                        f"Error closing database generator for project {project_id}: {e}"
+                    )

This ensures that any exception during cleanup (e.g., from db.close()) is logged without masking the original exception.

Minor note: The if gen: check on line 117 is always True since gen is assigned on line 97, making it redundant (though harmless defensive programming).
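For comparison, the same lifecycle can also be expressed with contextlib.closing, which guarantees gen.close() runs even on errors; run_tag_query is a hypothetical stand-in for the tool's query logic:

from contextlib import closing

from app.core.database import get_db  # generator that yields a Session


def query_tags_with_managed_session(project_id: str):
    # closing() calls gen.close() on exit, so the generator's finally block
    # (which closes the DB session) always executes; any cleanup exception
    # propagates from the with block instead of being silently swallowed.
    with closing(get_db()) as gen:
        db = next(gen)  # pull the session out of the generator
        return run_tag_query(db, project_id)  # hypothetical query helper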

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9bf65b8 and 0d9c3fd.

📒 Files selected for processing (2)
  • app/alembic/versions/20251209_add_inferring_status.py
  • app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py
🧰 Additional context used
🧬 Code graph analysis (1)
app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (2)
app/core/database.py (1)
  • get_db (100-105)
app/modules/parsing/graph_construction/code_graph_service.py (3)
  • CodeGraphService (15-196)
  • query_graph (193-196)
  • close (34-35)
🪛 Ruff (0.14.10)
app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py

108-108: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (1)
app/modules/intelligence/tools/kg_based_tools/get_nodes_from_tags_tool.py (1)

93-93: LGTM: COALESCE improves null handling.

The COALESCE function provides a graceful fallback to text content when docstrings are missing, improving the tool's reliability.
