Transform natural language into production-ready code with AI-powered multi-agent architecture
Features • Installation • Usage • Architecture • Examples
AI Code Architect is an intelligent code generation system that uses a multi-agent architecture to convert natural language descriptions into fully functional applications. Powered by LangChain and LangGraph, it orchestrates three specialized AI agents working in harmony:
- 🎯 Planner Agent: Analyzes requirements and creates high-level plans
- 🏗️ Architect Agent: Designs file structure and implementation steps
- 💻 Coder Agent: Writes actual code with tool-use capabilities
Key features:
- 🧠 Multi-Agent Collaboration: Three specialized agents work together for optimal results
- 📝 Natural Language Input: Describe what you want in plain English
- 🔧 Tool-Augmented Generation: Agents can read, write, and navigate files intelligently
- 🎨 Structured Output: Type-safe planning and execution using Pydantic models
- 🔄 Iterative Development: Agents iterate until all tasks are complete
- 📦 File System Management: Automatic file creation and organization
- 🚀 Production Ready: Built with LangChain and LangGraph for reliability
Use it to:
- Generate complete web applications (HTML, CSS, JavaScript)
- Create Python scripts and utilities
- Build API endpoints and services
- Develop data processing pipelines
- Prototype applications rapidly
- Generate boilerplate code
Prerequisites:
- Python 3.9 or higher
- OpenAI API key
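A quick way to confirm the interpreter requirement before installing anything (a convenience check, not part of the project):

```python
# Fails fast if the local interpreter is older than the required 3.9.
import sys

assert sys.version_info >= (3, 9), "AI Code Architect requires Python 3.9 or newer"
```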
- Clone the repository
```bash
git clone https://github.com/yourusername/ai-code-architect.git
cd ai-code-architect
```
- Create virtual environment
```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
- Install dependencies
```bash
pip install -r requirements.txt
```
- Configure environment
Create a `.env` file in the root directory:
```env
OPENAI_API_KEY=your_openai_api_key_here
```
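The key in `.env` has to end up in the process environment before the agents run. How `main.py` loads it is not shown here; a minimal sketch assuming the `python-dotenv` package:

```python
# Hypothetical loader: copies OPENAI_API_KEY from .env into os.environ.
# Assumes python-dotenv is installed; main.py may already handle this for you.
from dotenv import load_dotenv

load_dotenv()  # reads the .env file in the current working directory
```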
Basic usage:
```python
from main import agent

# Describe what you want to build
user_prompt = "Create a simple working calculator web application in HTML, CSS and js."

# Let the AI agents do the work
result = agent.invoke({"user_prompt": user_prompt}, {"recursion_limit": 100})
```

Or run the default example from the command line:
```bash
python main.py
```

This will generate a complete calculator web application with HTML, CSS, and JavaScript files.
```python
# Todo App Example
user_prompt = """
Create a todo list application with the following features:
- Add new tasks
- Mark tasks as complete
- Delete tasks
- Save to localStorage
Use vanilla JavaScript, HTML, and CSS
"""

result = agent.invoke({"user_prompt": user_prompt}, {"recursion_limit": 100})
```

```mermaid
graph LR
A[User Prompt] --> B[Planner Agent]
B --> C[Architect Agent]
C --> D[Coder Agent]
D --> E{All Tasks Done?}
E -->|No| D
E -->|Yes| F[Complete]
style B fill:#ff6b6b
style C fill:#4ecdc4
style D fill:#45b7d1
```
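A minimal sketch of how this flow could be wired with LangGraph's `StateGraph`. The node names, state fields, and agent functions below are illustrative assumptions; the actual graph is built in `main.py`:

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

class AgentState(TypedDict, total=False):
    user_prompt: str
    plan: object
    task_plan: object
    coder_state: object
    status: str

# Placeholder nodes: each receives the current state and returns an update.
def planner_agent(state: AgentState) -> dict:
    return {"plan": f"plan for: {state['user_prompt']}"}

def architect_agent(state: AgentState) -> dict:
    return {"task_plan": "ordered implementation steps"}

def coder_agent(state: AgentState) -> dict:
    return {"status": "DONE"}  # a real coder keeps working until every task is written

graph = StateGraph(AgentState)
graph.add_node("planner", planner_agent)
graph.add_node("architect", architect_agent)
graph.add_node("coder", coder_agent)

graph.set_entry_point("planner")
graph.add_edge("planner", "architect")
graph.add_edge("architect", "coder")

# Loop the coder back on itself until the state reports completion.
graph.add_conditional_edges(
    "coder",
    lambda state: END if state.get("status") == "DONE" else "coder",
)

agent = graph.compile()
```

With this wiring, `agent.invoke({"user_prompt": ...}, {"recursion_limit": 100})` runs Planner → Architect → Coder and keeps re-entering the Coder until the status reports completion.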
- Planner Agent 🎯
  - Analyzes user requirements
  - Creates a structured plan with objectives (see the sketch after this list)
  - Defines success criteria
- Architect Agent 🏗️
  - Designs the file structure
  - Breaks the plan down into implementation steps
  - Creates task dependencies
- Coder Agent 💻
  - Implements each step sequentially
  - Uses tools to read and write files
  - Iterates until completion
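The Planner's structured plan is a natural fit for LangChain's structured-output support with the Pydantic models mentioned above. A minimal sketch; the `Plan` fields and the prompt below are assumptions, the real definitions live in `states.py` and `prompts.py`:

```python
# Illustrative only: field names are assumed, not taken from states.py.
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI

class Plan(BaseModel):
    objective: str = Field(description="What the finished application should do")
    success_criteria: list[str] = Field(description="Checks that tell us the app is complete")

llm = ChatOpenAI(model="gpt-4o")
planner = llm.with_structured_output(Plan)  # responses are parsed into Plan

plan = planner.invoke("Plan a simple calculator web app in HTML, CSS and JS.")
print(plan.objective, plan.success_criteria)
```

The Architect and Coder can pass typed objects between each other the same way, which is what the Pydantic models in `states.py` are for.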
The system uses LangGraph's state management to maintain context across agents:
```python
State = {
    "user_prompt": str,
    "plan": Plan,
    "task_plan": TaskPlan,
    "coder_state": CoderState,
    "status": str
}
```
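Because the compiled graph threads this state through every node, the value returned by `agent.invoke(...)` is the final state. A small usage sketch; the concrete values in the comments are assumptions:

```python
from main import agent

# Inspect the final state after a run; keys follow the schema shown above.
result = agent.invoke(
    {"user_prompt": "Build a pomodoro timer web app"},
    {"recursion_limit": 100},
)

print(result["status"])  # e.g. "DONE" once every task has been implemented
print(result["plan"])    # the Planner's structured Plan
```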
The project is organized as follows:
```
ai-code-architect/
│
├── main.py              # Main orchestration logic
├── prompts.py           # Agent prompts and instructions
├── states.py            # Pydantic state models
├── tool.py              # File system tools
├── requirements.txt     # Project dependencies
├── .env                 # Environment variables (create this)
├── .gitignore           # Git ignore rules
└── README.md            # This file
```
The Coder Agent interacts with the file system through these tools (defined in `tool.py`):

| Tool | Description | Usage |
|---|---|---|
| `write_file` | Write content to a file | Creates or overwrites files |
| `read_file` | Read file contents | Retrieves existing code |
| `list_files` | List directory contents | Navigate file system |
| `get_current_directory` | Get working directory | Verify location |
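A sketch of how a tool like `write_file` could be declared with LangChain's `@tool` decorator. This is illustrative; the real implementations live in `tool.py` and their signatures may differ:

```python
# Hypothetical write_file tool; tool.py's version may differ.
from pathlib import Path

from langchain_core.tools import tool

@tool
def write_file(path: str, content: str) -> str:
    """Write content to a file, creating parent directories if needed."""
    target = Path(path)
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(content, encoding="utf-8")
    return f"Wrote {len(content)} characters to {path}"
```

Tools declared this way can be bound to the Coder's chat model with `bind_tools`, which is the usual way a LangChain chat model gains tool-calling access.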
Input:
```python
user_prompt = "Create a simple working calculator web application"
```

Output:
- `calculator.html` - HTML structure
- `calculator.css` - Styling
- `calculator.js` - Calculator logic
Change the LLM model in `main.py`:
```python
llm = ChatOpenAI(
    model="gpt-4o",  # Options: gpt-4o, gpt-4-turbo, gpt-3.5-turbo
    temperature=1.0
)
```

Adjust the recursion limit (the maximum number of agent iterations):
```python
result = agent.invoke(
    {"user_prompt": user_prompt},
    {"recursion_limit": 100}  # Increase for complex projects
)
```

Enable detailed logging:
```python
from langchain.globals import set_verbose, set_debug

set_debug(True)
set_verbose(True)
```

Contributions are welcome! Here's how you can help:
- Fork the repository
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- Built with LangChain
- Powered by LangGraph
- Uses OpenAI GPT Models
For questions and support:
⭐ Star this repo if you find it helpful!
Made with ❤️ by an AI enthusiast