CQ Lite is an intelligent, multi-agent code quality analysis tool that combines traditional static analysis with AI-powered insights. It provides comprehensive code reviews, security analysis, and quality metrics for Python, JavaScript, and Docker projects.
- Python Analysis: AST-based complexity analysis, security scanning with Bandit, hardcoded secrets detection
- JavaScript Analysis: Syntax validation, complexity metrics, best practices checking
- Docker Analysis: Dockerfile security scanning, optimization recommendations
- GitHub Repository Analysis: Direct analysis of remote repositories without cloning
- Multi-Agent Workflow: Orchestrated using LangGraph for intelligent task routing
- Hybrid Analysis: Combines traditional static analysis with AI-enhanced insights
- Token Optimization: Smart truncation and description generation to reduce API costs by 20%+
- Vector Database Integration: Early population during analysis for enhanced Q&A capabilities
- Notion Integration: Automated report publishing to Notion workspace
- Interactive Q&A: Chat with your codebase using vector-enhanced knowledge base
- Multiple AI Models: Support for Google Gemini and Nebius AI
- FastAPI Server: RESTful API for integration with CI/CD pipelines
- CLI Interface: Command-line tool for local and remote analysis
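As an illustration of the hardcoded-secrets detection listed above, a minimal regex-based scan might look like the sketch below. The pattern set is illustrative only; CQ Lite's actual analyzer uses more extensive rules.

```python
import re

# Illustrative pattern only; the real analyzer's rules are more extensive.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*['"][^'"]{8,}['"]"""),
]

def find_hardcoded_secrets(source: str) -> list:
    """Return (line_number, matched_name) pairs for likely hardcoded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern in SECRET_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append((lineno, match.group(1)))
    return hits
```

For example, `find_hardcoded_secrets('API_KEY = "abcd1234efgh"')` flags line 1, while ordinary assignments pass through untouched.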
- Python 3.9 or higher
- UV package manager (recommended) or pip
- Clone the repository

  ```bash
  git clone https://github.com/yourusername/cq-lite.git
  cd cq-lite
  ```

- Install dependencies

  ```bash
  # Using UV (recommended)
  uv sync

  # Or using pip
  pip install -e .
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  # Edit .env with your API keys
  ```
Create a .env file with the following variables:
```bash
# AI Model APIs (choose at least one)
GOOGLE_API_KEY=your_google_api_key_here
NEBIUS_API_KEY=your_nebius_api_key_here

# GitHub Integration (for repository analysis)
GITHUB_API_TOKEN=your_github_token_here

# Notion Integration (optional)
NOTION_TOKEN=your_notion_integration_token
NOTION_PAGE_ID=your_notion_page_id

# OpenAI (for vector embeddings)
OPENAI_API_KEY=your_openai_api_key_here
```

Get your API keys:
- Google AI: https://makersuite.google.com/app/apikey
- GitHub: https://github.com/settings/tokens (needs repo access)
- Notion: https://www.notion.so/my-integrations
- Nebius AI: https://console.nebius.com/
- OpenAI: https://platform.openai.com/api-keys
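Before running an analysis, it can help to sanity-check the variables above. This is a hypothetical helper, not part of CQ Lite itself; it only inspects a mapping of environment values:

```python
import os

# At least one AI model key is required; the rest enable optional integrations.
AI_KEYS = ("GOOGLE_API_KEY", "NEBIUS_API_KEY")
OPTIONAL_KEYS = ("GITHUB_API_TOKEN", "NOTION_TOKEN", "NOTION_PAGE_ID", "OPENAI_API_KEY")

def check_env(env: dict) -> list:
    """Return human-readable warnings for missing configuration."""
    warnings = []
    if not any(env.get(k) for k in AI_KEYS):
        warnings.append("Set at least one of: " + ", ".join(AI_KEYS))
    for key in OPTIONAL_KEYS:
        if not env.get(key):
            warnings.append(f"{key} not set (optional feature disabled)")
    return warnings
```

Call it as `check_env(dict(os.environ))` to see which optional features are disabled in your current shell.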
```bash
# Analyze local directory
uv run python -m cli.agentic_cli analyze /path/to/your/code

# Analyze GitHub repository
uv run python -m cli.agentic_cli analyze --repourl https://github.com/owner/repo

# Quick analysis with token optimization
uv run python -m cli.agentic_cli analyze --repourl https://github.com/owner/repo --quick --max-files 10

# Full analysis with Notion reporting
uv run python -m cli.agentic_cli analyze \
  --repourl https://github.com/owner/repo \
  --model gemini \
  --notion \
  --max-files 20 \
  --severity high

# Interactive Q&A mode
uv run python -m cli.agentic_cli chat

# Check environment setup
uv run python -m cli.agentic_cli env
```

CQ Lite uses a multi-agent architecture orchestrated by LangGraph:
```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│   CLI/API       │───▶│    Workflow      │───▶│     Agents      │
│   Interface     │    │   Orchestrator   │    │  - Discovery    │
└─────────────────┘    │   (LangGraph)    │    │  - Analysis     │
                       └──────────────────┘    │  - AI Review    │
                               │               │  - Q&A          │
                               │               │  - Notion       │
                               ▼               └─────────────────┘
                       ┌──────────────────┐
                       │   Vector Store   │
                       │    (ChromaDB)    │
                       └──────────────────┘
```
- File Discovery Agent: Intelligently discovers and categorizes files
- Language-Specific Analyzers: Python, JavaScript, Docker analysis
- AI Review Agent: Comprehensive AI-powered code review
- Q&A Agent: Interactive codebase exploration
- Notion Report Agent: Automated documentation generation
- Vector Store: ChromaDB for semantic code search
When building CQ Lite, I evaluated several frameworks for orchestrating the multi-agent workflow:
Options Considered:
- CrewAI: Great for predefined agent roles, but felt restrictive for custom workflows
- OpenAI SDK: Powerful for single-agent tasks, but lacking orchestration capabilities
- Google AI SDK: Excellent for Gemini integration, but no workflow management
- LangChain: Good foundation, but too heavyweight for our specific needs
Why LangGraph Won:
- Complete Workflow Freedom: I could design exactly the agent flow I envisioned - conditional routing, parallel execution, dynamic state management
- Visual Workflow Design: The graph-based approach made it easy to visualize and debug complex agent interactions
- State Management: Built-in state passing between agents without boilerplate code
- Flexibility: Could easily add new agents, modify routing logic, or change execution order without rewriting core logic
- Performance: Lightweight compared to full LangChain while keeping the power I needed
The breakthrough moment was realizing I could create conditional edges that route based on discovered files - something that would have been much harder to implement cleanly in other frameworks. LangGraph's philosophy of "graphs as code" aligned perfectly with my vision of an intelligent, adaptive analysis pipeline.
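The conditional-edge idea can be pictured without LangGraph itself: route the workflow based on which file types discovery found. This is a plain-Python illustration of the concept, not CQ Lite's actual graph (the real workflow wires this up with LangGraph's conditional edges over a shared state object):

```python
def route_after_discovery(state: dict) -> list:
    """Pick the analyzer agents to run based on discovered files."""
    files = state.get("discovered_files", [])
    next_agents = []
    if any(f.endswith(".py") for f in files):
        next_agents.append("python_analyzer")
    if any(f.endswith((".js", ".jsx", ".ts")) for f in files):
        next_agents.append("javascript_analyzer")
    if any(f == "Dockerfile" or f.endswith(".dockerfile") for f in files):
        next_agents.append("docker_analyzer")
    # The AI review agent always runs once the analyzers complete.
    next_agents.append("ai_review")
    return next_agents
```

A repository containing only `app.py` and a `Dockerfile` would route to the Python and Docker analyzers and skip JavaScript entirely.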
- Smart Truncation: Files with no quality issues and low interdependency are truncated with AI-generated descriptions
- Token Savings: Achieves 20%+ reduction in API token usage
- Context Preservation: Maintains code understanding while reducing costs
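The truncation decision described above reduces to a simple predicate. The threshold here is hypothetical; the real heuristic is more involved:

```python
def should_truncate(issue_count: int, dependent_count: int, max_dependents: int = 2) -> bool:
    """Truncate a file (sending an AI-generated description instead of its body)
    only when it is issue-free and few other files depend on it."""
    return issue_count == 0 and dependent_count <= max_dependents
```

A clean leaf module is truncated; a file with issues, or one many modules import, is always sent in full so the AI reviewer keeps its context.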
- Early Population: Vector store is populated during analysis, not after
- Enhanced Q&A: Enables semantic search across the entire codebase
- Persistent Knowledge: Analysis results are stored for future queries
- Traditional + AI: Combines AST analysis, security scanning, and AI insights
- Issue Enhancement: AI enhances traditional static analysis findings
- Contextual Understanding: AI provides business impact and architectural insights
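The enhancement step can be pictured as merging an AI note onto each static finding. The issue schema below is invented for illustration:

```python
def enhance_issues(static_issues: list, ai_notes: dict) -> list:
    """Attach AI commentary (keyed by issue id) to traditional findings."""
    return [
        {**issue, "ai_insight": ai_notes.get(issue["id"], "")}
        for issue in static_issues
    ]
```

Traditional findings keep their original fields; the AI contribution is additive, so a failed AI pass still leaves usable static results.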
```
📊 Analysis Summary for my-project/
├── 📁 Files Analyzed: 25
├── 🐍 Python Files: 15 (600 lines)
├── 🟨 JavaScript Files: 8 (450 lines)
├── 🐳 Docker Files: 2
└── ⚠️ Issues Found: 12

🔍 Key Issues:
├── 🔴 Critical: Hardcoded API key in config.py:15
├── 🟠 High: Complex function in main.py:45 (CC: 12)
└── 🟡 Medium: Missing error handling in api.py:23

🤖 AI Insights:
├── Business Impact: High - Security vulnerabilities detected
├── Architecture: Consider implementing dependency injection
└── Priority: Address security issues immediately
```
Reports are automatically published to Notion with:
- Executive summary and metrics
- Detailed issue breakdown with severity
- Fix recommendations and priority matrix
- Code snippets and architectural insights
```
cq-lite/
├── api/                # FastAPI server
│   ├── models/         # Pydantic models
│   ├── routers/        # API endpoints
│   └── services/       # Business logic
├── backend/            # Core analysis engine
│   ├── agents/         # LangGraph agents
│   ├── analyzers/      # Language-specific analyzers
│   ├── models/         # Data models
│   ├── services/       # AI services
│   └── tools/          # Integration tools
├── cli/                # Command-line interface
├── frontend/           # Next.js frontend
├── docs/               # Documentation
└── tests/              # Test suites
```
```bash
# Run all tests
uv run pytest

# Run specific test suite
uv run pytest tests_server/
uv run pytest test_cli/
```

```http
POST /api/github/analyze
Content-Type: application/json

{
  "repo_url": "https://github.com/owner/repo",
  "model_choice": "gemini",
  "max_files": 10,
  "severity_filter": "medium"
}
```

```http
POST /api/upload
Content-Type: multipart/form-data

files: [file1.py, file2.js, ...]
model_choice: "gemini"
```

```http
GET /api/status/{job_id}
```

```http
POST /api/chat
Content-Type: application/json

{
  "query": "What are the main security issues in this codebase?",
  "context": "analysis_results"
}
```

```bash
# Build and run
docker build -t cq-lite .
docker run -p 8000:8000 --env-file .env cq-lite
```

- Render: Uses `render.yaml` configuration
- Netlify: Frontend deployment with `netlify.toml`
- Vercel: Next.js frontend deployment
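For CI pipelines, the `/api/github/analyze` endpoint shown earlier can be driven from a short client. This sketch only builds and serializes the request body; the field names come from the example above, and sending it (with `urllib`, `requests`, or `curl`) plus polling `GET /api/status/{job_id}` is left to the caller:

```python
import json

def build_analyze_request(repo_url: str, model_choice: str = "gemini",
                          max_files: int = 10, severity_filter: str = "medium") -> bytes:
    """Serialize a POST body for /api/github/analyze."""
    payload = {
        "repo_url": repo_url,
        "model_choice": model_choice,
        "max_files": max_files,
        "severity_filter": severity_filter,
    }
    return json.dumps(payload).encode("utf-8")
```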
```bash
# Test analysis on sample repository
uv run python -m cli.agentic_cli analyze --repourl https://github.com/python/cpython --max-files 5 --quick

# Test Q&A functionality
uv run python -m cli.agentic_cli chat
```

```bash
# Start server
uv run python -m api

# Test in another terminal
curl -X GET http://localhost:8000/api/health
```

- Missing API Keys

  ```bash
  uv run python -m cli.agentic_cli env
  ```
- Token Limit Exceeded
  - Use the `--quick` flag for faster analysis
  - Reduce the `--max-files` parameter
  - Enable smart truncation (default)

- Vector Store Issues

  ```bash
  # Clear vector database
  rm -rf db/chroma_db/
  ```

- GitHub Rate Limits
  - Ensure `GITHUB_API_TOKEN` is set
  - Reduce analysis scope with `--max-files`

- Frontend Issues

  ```bash
  cd frontend
  npm install
  ```

- Get Gemini API Key:
  - Visit Google AI Studio
  - Create an API key
  - Add to your `.env` file
Backend:

```bash
uv run uvicorn backend.main:app --reload
```

Frontend:

```bash
cd frontend
npm run dev
```

Traditional Analysis:

```bash
# Analyze a directory
uv run python -m cli.agentic_cli analyze ./src

# Filter by severity
uv run python -m cli.agentic_cli analyze ./src --severity high

# Get detailed resolution steps for each issue
uv run python -m cli.agentic_cli analyze ./src --insights

# JSON output
uv run python -m cli.agentic_cli analyze ./src --format json
```

🤖 NEW: Agentic Analysis (LangGraph-powered):
```bash
# AI-orchestrated analysis with intelligent agents
uv run python -m cli.agentic_cli analyze ./src

# Agentic analysis with AI insights
uv run python -m cli.agentic_cli analyze ./src --insights

# AI agents determine optimal analysis strategy
uv run python -m cli.agentic_cli analyze ./src --severity high
```

Interactive Chat:

```bash
# Traditional chat
uv run python -m cli chat --context ./src

# Agentic chat (coming soon)
uv run python -m cli.agentic_cli chat --context ./src
```

- Landing: http://localhost:3000 - Upload and analyze files
- Dashboard: http://localhost:3000/dashboard - View detailed results
- Chat: http://localhost:3000/chat - AI-powered Q&A
✅ AST-Based Analysis
- Python: Full AST parsing with complexity and security analysis
- JavaScript: Syntax analysis and pattern detection
✅ Issue Detection
- Security vulnerabilities (bandit integration)
- Performance bottlenecks
- Code complexity (cyclomatic complexity)
- Code duplication detection
- Style and quality issues
- Hardcoded secrets detection
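Cyclomatic complexity of the kind reported above can be approximated directly from Python's `ast` module. This is a simplified sketch (CQ Lite uses Radon for the real metric, which counts decision points with more nuance):

```python
import ast

# Decision points that add a branch; simplified relative to Radon's rules.
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, _BRANCH_NODES) for node in ast.walk(tree))
```

Straight-line code scores 1; each `if`, loop, exception handler, boolean operator, or conditional expression adds one.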
✅ AI Integration
- Gemini-powered conversational interface
- Context-aware code explanations
- Actionable improvement suggestions
- 🚀 LangGraph Agentic Workflows
✅ 🤖 Agentic System (NEW)
- AI-Orchestrated Analysis: LangGraph agents coordinate analysis
- Intelligent Strategy Planning: AI determines optimal analysis approach
- Multi-Agent Coordination: Specialized agents for different languages
- Dynamic Workflow Routing: Conditional logic based on codebase structure
✅ Modern Web UI
- Dark gradient theme with accessibility
- Interactive dashboard with filtering
- Real-time chat interface
- Responsive design
✅ CLI Interface
- Rich terminal output
- Multiple output formats
- Severity filtering
- Interactive chat mode
- 🚀 Agentic CLI: AI-powered analysis orchestration
The system uses a modern, scalable architecture:
- Backend: Python FastAPI with uvicorn
- Frontend: Next.js with Tailwind CSS
- AI: Google Gemini Pro via LangGraph
- Analysis: AST-based with radon, bandit
- CLI: Click with Rich formatting
See docs/ARCHITECTURE.md for detailed information.
See docs/DEMO.md for a complete demo script and sample code.
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: docs/
- LangGraph: Multi-agent orchestration framework
- FastAPI: High-performance web framework
- ChromaDB: Vector database for semantic search
- Radon: Python complexity analysis
- Bandit: Python security analysis
- Notion API: Documentation integration
Built with ❤️ using AI-powered architecture and modern Python frameworks.