The simple way to build and embed AI agents in your code...
```python
from agent_assembly_line import Agent

agent = Agent("aethelland-demo")
text = agent.run("How many people live in the country?")
print(text)  # "Aethelland has a population of..."
```
Built to be simpler than other frameworks while remaining code-native. Prompts are easy to debug.
🛡️ Includes Semantic Unit Testing — verify agent logic based on meaning, not just string matching.
Build AI agents in Python. A library for developers.
Agent-Assembly-Line is a framework for building AI agents that can be easily embedded into existing software stacks. It ships ready-to-use components for task-based and conversational agents, simplifies the setup of agents and multi-agent chains, and works with both local and cloud-based LLMs.
Agent-Assembly-Line supports:
- Task-based agents (functional agents)
- Conversational Agents
- Local memory
- RAG for local documents or remote endpoints
  - Websites, RSS, JSON, PDF, ...
- Local LLMs as well as cloud-based LLMs: Ollama and ChatGPT
- Streaming mode and regular runs
- Context (inline or via vector stores)
- Micros: small, single-task agents that handle distinct functionalities and can be chained
- cli-agents: agents that can be chained on the command line
Agent-Assembly-Line comes with examples such as semantic unit tests and diff analysis, plus a demo chat app and a test suite.
Python 3.9, 3.10, 3.12:
```bash
pip install agent_assembly_line
```
Create an agent for fetching the weather in Helsinki:
```python
from agent_assembly_line import FmiWeatherAgent

agent = FmiWeatherAgent("Helsinki", forecast_hours=24, mode="local")
result = agent.run()
```
Output: "The rest of today in Helsinki will be sunny and mild with temperatures around 4 degrees Celsius. Expect clear skies throughout the evening and overnight."
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
It works with Python 3.9.6.
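To verify the installation, try importing the package (the import name matches the pip package name used above):
```bash
python -c "import agent_assembly_line"
```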
```bash
cp agent-assembly-line-service.service /etc/systemd/system/agent-assembly-line.service
sudo systemctl enable agent-assembly-line.service
systemctl is-enabled agent-assembly-line.service
journalctl -u agent-assembly-line.service
```
Run all tests:
```bash
make test
```
or
```bash
python -m unittest tests/test.py
```
Just one test:
```bash
python -m unittest tests.async.test_memory.TestMemory.test_save_messages
```
The demo app provides a UI that talks to the REST API and can handle chat-based conversations, functional agents, memory and summaries, file uploads, and URLs.
```bash
cd app/
npm run electron
```
You can use a local LLM as well as cloud-based LLMs. Currently supported are Ollama and OpenAI, with more to come.
Note:
Choosing between a local or cloud LLM depends on your specific needs: local LLMs offer greater control, privacy, and potentially lower costs for frequent use, while cloud LLMs provide easy scalability, access to powerful models, and reduced maintenance overhead. Consider your requirements for data security, performance, and budget when making your decision.
To use an Ollama LLM, use the `ollama` identifier:
```
ollama:gemma2:latest
ollama:codegemma:latest
```
Make sure you have Ollama installed on your machine.
Then run it once in your console; it will download the model:
```bash
ollama run gemma2
```
Important: you also need to pull the embeddings model:
```bash
ollama pull nomic-embed-text
```
You might also want to set the `OLLAMA_HOST` environment variable in case your Ollama isn't listening on the default 127.0.0.1:11434.
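For example, if Ollama listens on a different host or port (the address below is a placeholder):
```bash
export OLLAMA_HOST=192.168.1.50:11434
```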
You can use ChatGPT as an LLM by using the `openai` identifier:
```
openai:gpt-3.5-turbo-instruct
openai:gpt-4o
```
You need to set your OpenAI API key before running:
```bash
export OPENAI_API_KEY=<your key here>
```
Note:
Using the OpenAI API may incur costs. Please refer to the OpenAI pricing page for more details.
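Selecting an OpenAI model then works through the same configuration mechanism described below; as a sketch (the dict shape follows the Ollama example later in this README, and the agent name is illustrative):
```python
from agent_assembly_line import Agent, Config

config = Config()
config.load_conf_dict({
    "name": "openai-demo",
    "llm": {"model-identifier": "openai:gpt-4o"},
})
agent = Agent(config=config)
```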
Create an agent:
```python
from agent_assembly_line import Agent

agent = Agent("aethelland-demo")
question = "How many people live in the country?"
text = agent.run(question)
```
Agent objects can either read their configuration from a YAML config file, like the example, or use a dictionary:
```python
from agent_assembly_line import Agent, Config

config = Config()
config.load_conf_dict({
    "name": "my-demo",
    "llm": {
        "model-identifier": "ollama:gemma:latest"
    },
})
agent = Agent(config=config)
```
The agent supports both streaming and synchronous runs and can store a history, which is useful for chat-based applications. Texts from documents, URLs, or strings can be stored in vector stores or used as inline context. Inline context provides the text directly to the LLM prompt but is limited by the LLM's context window.
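For reference, a YAML config equivalent to the dictionary shown above might look like this (a minimal sketch; the filename and any fields beyond those shown are assumptions):
```yaml
# my-demo.yaml (illustrative filename)
name: my-demo
llm:
  model-identifier: "ollama:gemma:latest"
```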
Micros are functional agents that each serve one particular job and can be used in pipes or chains. There are currently agents for analyzing diffs, semantic unittest validation, summarizing text, and handling websites.
Summarize a website:
```python
from agent_assembly_line import WebsiteSummaryAgent  # import path assumed

agent = WebsiteSummaryAgent(url)
summary = agent.run()
```
Semantic unit tests verify meaning rather than exact strings:
```python
from agent_assembly_line import SemanticTestCase  # import path assumed

class TestTextValidator(SemanticTestCase):
    def test_semantic(self):
        self.assertSemanticallyEqual("Blue is the sky.", "The sky is blue.")
```
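A test case like this runs under the standard unittest runner, mirroring the test commands above (the module path below is illustrative):
```bash
python -m unittest tests.test_text_validator
```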
This example first creates a detailed textual summary; in a second step it creates a shorter summary, which can be used e.g. in commit messages.
```python
from agent_assembly_line import DiffDetailsAgent, DiffSumAgent  # import path assumed

agent = DiffDetailsAgent(diff_text)
detailed_answer = agent.run()
sum_agent = DiffSumAgent(detailed_answer)
sum_answer = sum_agent.run()
```
The same works on the command line:
```bash
git diff HEAD | cli_agents/diff_details
```
Generic text summarization. You can choose local LLMs or cloud (ChatGPT).
```python
from agent_assembly_line import SumAgent  # import path assumed

agent = SumAgent(text, mode='local')
result = agent.run()
```
For a deeper understanding of how to use Agent-Assembly-Line, read the tests, the demo app, and the examples.
The demo app also shows how the library can be used in a chat application. See also Build and run the demo app.
Micros can be used in combination to build more complex pipelines.
Example:
```bash
git diff --cached | examples/diff_analysis.py | examples/summarize_text.py
```
Or in Python:
```python
from agent_assembly_line import DiffDetailsAgent, DiffSumAgent  # import path assumed

agent = DiffDetailsAgent(diff_text)
detailed_answer = agent.run()
sum_agent = DiffSumAgent(detailed_answer)
sum_answer = sum_agent.run("Please summarize these code changes in 2-3 sentences. The context is only for the bigger picture.")
sum_agent = DiffSumAgent(sum_answer)
sum_answer = sum_agent.run()
```
agent = Agent("foo", debug=True)
# Full prompt auditing
agent = Agent("foo", audit_prompts=True)
# Both combined
agent = Agent("foo", debug=True, audit_prompts=True)debug=True: Shows prompt size and timing infoaudit_prompts=True: Logs complete prompts toaudit_logs/audit_[timestamp]_[agent]_[counter].txt
Detailed documentation for all components is available in the docs/ directory:
- Data Loaders - Documentation for all data loader components
- OCR Loader - Extract text from images using OCR
- Examples - Code examples and usage demonstrations
For component-specific documentation, see the corresponding files in the docs directory.
The structure reflects a clean separation between logic and configuration, essentially moving toward a "declarative" way of building AI.
- **"Micros" are the functional building blocks.** The classes under `micros` (like `TextCleanupAgent`) are programmatic tools.
  - The "How": they contain the actual Python logic, API calls, or specific regex/transformation code.
  - The usage: they are designed to be imported as standard Python objects. You use them when you want deterministic or high-performance control over a specific utility task within your own codebase.
  - Analogy: these are like specialized power tools in a workshop. You pick them up when you need to "sand" or "drill" a specific piece of data.
- **"Agents" are declarative identities.** The YAML files under `agents` are definitions of persona and knowledge.
  - The "Who": since they are code-less YAMLs with templates and RAG data, they define the behavioral boundaries of an LLM instance: its role, the specific data it "knows" (RAG), and the prompt it follows.
  - The usage: this allows you to swap out an agent's "brain" or "personality" without changing a single line of Python code. You can update the YAML, and the system behaves differently.
  - Analogy: these are like job descriptions or manuals. They don't "do" anything until they are loaded into a reasoning engine.
- **The "Assembly Line" concept.** By separating these two, the framework allows you to build a true assembly line where:
  - Logic (Micros) handles the heavy lifting and data cleaning.
  - Intelligence (Agents) handles the decisions and context-heavy generation.
  - Orchestration happens by passing data from a Micro (which might clean the text) to an Agent (defined by YAML, which analyzes that cleaned text using its RAG data).

In this architecture, an "agent" is not just one step in a chain; it is the configuration that tells the system how a specific step in that chain should "think."
We welcome contributions to the Agent-Assembly-Line project! To contribute, please follow these steps:
- Create a new branch for your feature or bugfix.
- Make your changes and commit them to the branch.
- Push your changes and create a PR.
- Discuss if needed.
If you encounter any issues or have feature requests, please open an issue on GitHub. Provide as much detail as possible to help us understand and resolve the issue quickly.
This project is licensed under the Apache 2 License. See the LICENSE file for details.