This repository contains an advanced Agent Communication Protocol (ACP) implementation that integrates with GitHub's AI model (OpenAI GPT-4o) to provide intelligent responses. This guide will walk you through everything from setup to completion of the assignment.
- Introduction to ACP
- Project Structure
- Prerequisites
- Getting Started
- Understanding the Code
- Running the Sample Code
- Assignment: Creating a Knowledge Assistant
- Submitting Your Assignment
- Troubleshooting
- Additional Resources
- FAQ
The Agent Communication Protocol (ACP) is a standardized way for AI agents to communicate with clients. It defines a clear interface for exchanging messages between AI systems and clients, supporting features like:
- Standardized request and response formats
- Streaming responses
- State management across interactions
- Flexibility for different agent types
ACP makes it easier to build, maintain, and integrate AI systems by providing a common communication framework.
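To make "streaming responses" concrete, an ACP agent is typically written as a generator that yields message parts as they become available rather than returning one final string. Here is a minimal, SDK-free sketch of that pattern (the `toy_agent` name is illustrative, not part of this repository):

```python
import asyncio

# Hedged, SDK-free sketch of the streaming pattern ACP agents use:
# the agent is an async generator that yields response chunks as they
# are produced, instead of returning one final string.
async def toy_agent(prompt: str):
    for word in f"Echoing: {prompt}".split():
        yield word + " "
        await asyncio.sleep(0)  # hand control back, as a real server would

async def main():
    chunks = [chunk async for chunk in toy_agent("hello world")]
    print("".join(chunks).strip())

if __name__ == "__main__":
    asyncio.run(main())
```

The real server in `llm_agent.py` follows the same shape, but yields parts produced by the GitHub AI model.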
```
LLM-Powered-ACP-Agent/
├── llm_agent.py    # The ACP server implementation with GitHub AI integration
├── llm_client.py   # Sample client for testing the ACP server
├── README.md       # This documentation
└── .env.example    # Example environment variables file
```
Before you begin, ensure you have:
- GitHub account with access to GitHub AI models
- GitHub personal access token with appropriate permissions
- Python 3.8 or higher (3.11 or higher if you follow the uv setup below)
- Git installed on your machine
- Basic knowledge of Python and async programming
- uv Python package manager (or pip)
- Fork this repository to your GitHub account by clicking the "Fork" button at the top right of the repository page.

- Clone your forked repository to your local machine:

  ```bash
  git clone https://github.com/nisalgunawardhana/LLM-Powered-ACP-Agent.git
  cd LLM-Powered-ACP-Agent
  ```

- Create a submission branch for your changes:

  ```bash
  git checkout -b submission
  ```

- Create a `.env` file in the root directory based on the `.env.example` template:

  ```bash
  cp .env.example .env  # If .env.example exists
  # OR
  touch .env            # If it doesn't exist
  ```

- Add your GitHub token to the `.env` file:

  ```
  GITHUB_TOKEN="your-github-token-goes-here"
  ```

Alternatively, you can set the token as an environment variable:

Bash/Zsh:

```bash
export GITHUB_TOKEN="your-github-token-goes-here"
```

PowerShell:

```powershell
$Env:GITHUB_TOKEN="your-github-token-goes-here"
```

Windows Command Prompt:

```cmd
set GITHUB_TOKEN=your-github-token-goes-here
```

Initialize your project and install the required Python packages using uv:

```bash
# Initialize a new uv project with Python 3.11 or higher
uv init --python ">=3.11"

# Install the ACP SDK
uv add acp-sdk

# Install the Azure AI Inference SDK
uv add azure-ai-inference

# Install dotenv for environment variable management
uv add python-dotenv
```

If you prefer using pip instead of uv:

```bash
pip install acp-sdk azure-ai-inference python-dotenv
```

The llm_agent.py file contains the ACP server implementation that:
- Sets up communication with GitHub's AI model
- Creates an ACP server with a single agent endpoint
- Processes incoming messages and sends them to the AI model
- Returns AI-generated responses back to the client
- Maintains conversation history for context
Key components:
- `ChatCompletionsClient`: Connects to GitHub's AI API
- `Server`: The ACP server that handles incoming requests
- `@server.agent()`: Decorator that registers the `llm_assistant` function as an ACP agent
- `conversation_history`: Dictionary that stores conversation history by session ID
The llm_client.py file provides a simple client implementation that:
- Connects to the local ACP server
- Sends sample questions to the `llm_assistant` agent
- Displays the responses
- Demonstrates maintaining conversation context with follow-up questions
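Request construction on the client side can also be sketched without the SDK. The field names below mirror the curl example in this guide; the real `llm_client.py` uses the acp-sdk client, so `build_run_request` here is purely illustrative:

```python
import json

# Hedged sketch: build the JSON body for a run request to the local ACP
# server, matching the field names used in the curl example in this guide.
def build_run_request(agent_name: str, user_text: str) -> dict:
    return {
        "agent_name": agent_name,
        "input": [
            {
                "role": "user",  # the ACP SDK accepts "user" or "agent" only
                "parts": [
                    {"content": user_text, "content_type": "text/plain"}
                ],
            }
        ],
    }

if __name__ == "__main__":
    print(json.dumps(build_run_request("llm_assistant", "What is ACP?"), indent=2))
```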
- Start the ACP server in one terminal:

  ```bash
  uv run llm_agent.py
  ```

- In another terminal, run the client to test the server:

  ```bash
  uv run llm_client.py
  ```

- You can also test with curl:

  ```bash
  curl -X POST http://localhost:8000/runs \
    -H "Content-Type: application/json" \
    -d '{
      "agent_name": "llm_assistant",
      "input": [
        {
          "role": "user",
          "parts": [
            {
              "content": "Explain what ACP is in simple terms",
              "content_type": "text/plain"
            }
          ]
        }
      ]
    }'
  ```

Your task is to enhance the LLM agent to make it a specialized knowledge assistant by:
- Modifying the `llm_agent.py` file to:
  - Update the system prompt to focus on a specific knowledge domain
  - Enhance conversation history management
  - Implement robust error handling for API failures
Choose a specific knowledge domain (e.g., programming, science, history) and update the system prompt:
```python
# Find this line in llm_agent.py:
SystemMessage("You are a helpful assistant powered by GitHub AI.")

# Replace it with a specialized prompt, for example:
SystemMessage("""You are a specialized programming assistant with expertise in Python, JavaScript, and software development best practices.
Focus on providing clear, accurate code examples and explanations for programming concepts.
When sharing code, always include comments explaining key parts.
Suggest best practices and common pitfalls to avoid.""")
```

The current implementation already has basic conversation history, but you can improve it by:
- Adding timestamps to track conversation flow
- Limiting conversation history length to prevent token limits
- Adding a mechanism to clear or summarize long conversations
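As a hedged sketch of the first two improvements, the per-session store could look like the following (`HistoryManager` and `MAX_TURNS` are illustrative names, not part of the existing code):

```python
import time
from collections import defaultdict

# Illustrative sketch: per-session history with timestamps and a length cap.
MAX_TURNS = 20  # keep only the most recent turns to stay under token limits

class HistoryManager:
    def __init__(self, max_turns: int = MAX_TURNS):
        self.max_turns = max_turns
        self._history = defaultdict(list)  # session_id -> list of turns

    def add(self, session_id: str, role: str, content: str) -> None:
        self._history[session_id].append({
            "role": role,
            "content": content,
            "timestamp": time.time(),  # track conversation flow
        })
        # Trim the oldest turns once the cap is exceeded
        self._history[session_id] = self._history[session_id][-self.max_turns:]

    def get(self, session_id: str) -> list:
        return list(self._history[session_id])

    def clear(self, session_id: str) -> None:
        self._history.pop(session_id, None)
```

A summarization mechanism could replace the plain trim in `add` with a call that condenses older turns into a single summary message.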
Enhance the error handling to provide more informative messages:
```python
except Exception as e:
    # Categorize and handle different types of errors
    error_message = "I'm sorry, but I encountered an issue."
    if "rate limit" in str(e).lower():
        error_message += " The API rate limit has been exceeded. Please try again in a few minutes."
    elif "timeout" in str(e).lower():
        error_message += " The request timed out. This might be due to high server load or network issues."
    elif "token" in str(e).lower():
        error_message += " There seems to be an authentication issue. Please check your GitHub token."
    else:
        error_message += f" Error details: {str(e)}"
```

After making your changes:
- Start the agent server:

  ```bash
  uv run llm_agent.py
  ```

- Test with the client:

  ```bash
  uv run llm_client.py
  ```

Verify that:
- Your agent responds with domain-specific knowledge
- It remembers context from previous messages
- It handles errors gracefully with informative messages
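To check error handling without triggering real API failures, the categorization logic can be factored into a small helper and exercised with synthetic exceptions. The `categorize_error` name is illustrative, not part of the repository:

```python
# Illustrative helper: the same categorization logic as the except block
# shown earlier, factored out so it can be unit-tested in isolation.
def categorize_error(e: Exception) -> str:
    error_message = "I'm sorry, but I encountered an issue."
    text = str(e).lower()
    if "rate limit" in text:
        error_message += " The API rate limit has been exceeded. Please try again in a few minutes."
    elif "timeout" in text:
        error_message += " The request timed out. This might be due to high server load or network issues."
    elif "token" in text:
        error_message += " There seems to be an authentication issue. Please check your GitHub token."
    else:
        error_message += f" Error details: {e}"
    return error_message
```

Feeding it exceptions like `Exception("Rate limit exceeded")` or `Exception("Connection timeout")` lets you confirm each branch without a live server.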
Important Note: The ACP SDK requires message roles to be either "user" or "agent", not "assistant". Ensure your code uses the correct role values.
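A small guard can catch the wrong role before the SDK rejects it. This sketch (the `normalize_role` helper is illustrative) maps the common mistake to the accepted value:

```python
# Illustrative guard: per the note above, the ACP SDK accepts only
# "user" and "agent" as message roles.
VALID_ROLES = {"user", "agent"}

def normalize_role(role: str) -> str:
    if role == "assistant":  # common mistake carried over from OpenAI-style APIs
        return "agent"
    if role not in VALID_ROLES:
        raise ValueError(f"Unsupported role: {role!r}; expected 'user' or 'agent'")
    return role
```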
- After implementing and testing your changes, commit them to your submission branch:
  ```bash
  git add llm_agent.py
  git commit -m "Implement specialized knowledge assistant"
  git push origin submission
  ```

- Create a pull request from your `submission` branch to the `main` branch of your repository.

Follow the above images for a visual guide on creating a pull request.
Tip: After creating your pull request, copy the PR link from your browser's address bar. You will need this link when creating your submission issue in the next step.
Note: The images above demonstrate how to select the correct branches and create a pull request. The repository name shown in the screenshots may differ from yours—just follow the same steps for your own repository.
- Go to the original repository (the one you forked from)
- Navigate to the "Issues" tab
- Click "New issue"
- Select the "Submission" template if available
- Fill in the required information:
- Your full name
- The URL of your pull request
- A summary of what you learned
- Any challenges you faced
- Submit the issue
- GitHub Token Issues
  - Ensure your token has the correct permissions
  - Check that the token is correctly set in your environment
  - Verify the token hasn't expired
- Installation Problems
  - Make sure Python 3.8+ is installed: `python --version`
  - Try installing dependencies individually if batch install fails
  - Check for package conflicts: `uv list`
- Runtime Errors
  - Ensure the server is running before starting the client
  - Check port availability (default is 8000)
  - Look for error messages in the server console
  - If you see a `'Context' object has no attribute 'run_id'` error, the code has been updated to use `session_id` or a fallback identifier
  - For message validation errors: ensure you're using `role="agent"` instead of `role="assistant"` in your Message objects (the ACP SDK requires specific role values)
- Conversation Context Issues
  - If the agent doesn't remember previous conversation context, ensure you're using the same `session_id` across requests
  - For the sample client, this is handled automatically by saving the `session_id` from the first response
- GitHub API Rate Limiting
  - The GitHub AI API has rate limits that may prevent requests from succeeding
  - If you encounter "429 Too Many Requests" errors, you'll need to wait before making more requests
  - The error message will include a "Retry-After" header indicating how many seconds to wait
  - Consider using a different GitHub token if available
  - In educational settings, you can use a mock response or sample data when rate limited
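The Retry-After handling described above can be sketched as a small backoff loop. Everything here (the `call` callable, the `RateLimitError` class) is illustrative rather than part of the repository or the GitHub API client:

```python
import time

# Illustrative: retry a callable when it signals HTTP 429, honoring the
# Retry-After value. RateLimitError is a stand-in for whatever exception
# your API client raises on a 429 response.
class RateLimitError(Exception):
    def __init__(self, retry_after: float):
        super().__init__("429 Too Many Requests")
        self.retry_after = retry_after

def call_with_retry(call, max_attempts: int = 3, sleep=time.sleep):
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except RateLimitError as e:
            if attempt == max_attempts:
                raise
            # Wait the number of seconds the Retry-After header indicated
            sleep(e.retry_after)
```

Injecting `sleep` as a parameter keeps the loop testable without real waiting.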
To use these demos, you need a GitHub personal access token with the models:read permission.
Steps to create a token:
- Go to GitHub Settings > Developer settings > Personal access tokens
- Click Generate new token (classic or fine-grained)
- Name your token and set an expiration date
- Under Select scopes, check `models:read`
- Click Generate token and copy it (you won't be able to view it again)
Video Walkthrough:
Watch the video walkthrough on Google Drive.
Watch this short video for a step-by-step guide on generating your GitHub personal access token.
Keep your token secure and do not share it publicly.
- Agent Communication Protocol Documentation
- Azure AI Inference SDK Documentation
- GitHub AI Models Documentation
- Async Python Programming Guide
To dive deeper into ACP concepts, visit the Agent Communication Protocol main repository for documentation, examples, and advanced usage.
Follow me on social media for more sessions, tech tips, and giveaways:
- LinkedIn — Professional updates and networking
- Twitter (X) — Insights and announcements
- Instagram — Behind-the-scenes and daily tips
- GitHub — Repositories and project updates
- YouTube — Video tutorials and sessions
Feel free to connect and stay updated!