This repository implements a multi-agent system for an intelligent researcher tool. The system leverages a local Large Language Model (LLM) via Ollama to find and summarize academic papers in a sophisticated, multi-step workflow.
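For context, here is a minimal sketch of how a local model can be queried through Ollama's official Python client. The model name is a placeholder and the repository may configure the model and prompts differently:

```python
# Minimal sketch of calling a local LLM through Ollama's Python client.
# The model name ("llama3") and the prompt are placeholders, not taken from this repo.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Find recent papers on multi-agent systems."}],
)
print(response["message"]["content"])
```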
The system is built on a modular, hierarchical agent architecture, which makes it robust and easy to extend.
- Researcher Agent: This is the master agent and the main entry point of the application. It understands complex, multi-step user requests and intelligently delegates tasks to the appropriate specialized agent.
- Finder Agent: A specialized agent responsible for finding academic papers. It uses two tools:
  - ArXiv: To search for papers on the arxiv.org repository.
  - DuckDuckGo Search: For general web searches to find papers on other sites or to gather broader context.
- Summarizer Agent: A specialized agent designed to summarize a given paper. It uses a tool to:
  - Fetch Web Content: Reads the text content from a paper's URL.
  - Summarize: Generates a concise summary covering the paper's objectives, methodology, and key findings.
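A minimal sketch of how this hierarchy might be wired together is shown below. The `Agent` class and the tool stubs are illustrative only and do not reflect the repository's actual classes or tool implementations; the key idea is that the Researcher exposes the specialized agents as its own tools.

```python
# Hypothetical sketch of the hierarchical agent layout; class names and tool
# stubs are illustrative, not this repository's actual implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    instructions: str
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def run(self, task: str) -> str:
        # In the real system the LLM chooses and calls a tool; here we only
        # record that the task was delegated to this agent.
        return f"[{self.name}] handling: {task}"

def arxiv_search(topic: str) -> str:       # stub for the ArXiv tool
    return f"arxiv.org results for {topic}"

def duckduckgo_search(query: str) -> str:  # stub for the DuckDuckGo Search tool
    return f"web results for {query}"

def fetch_web_content(url: str) -> str:    # stub for the page-fetch tool
    return f"text content of {url}"

finder = Agent("Finder", "Find academic papers on a topic.",
               {"arxiv": arxiv_search, "duckduckgo": duckduckgo_search})
summarizer = Agent("Summarizer", "Summarize a paper given its URL.",
                   {"fetch_web_content": fetch_web_content})

# The Researcher sits on top and treats the specialized agents as its tools.
researcher = Agent("Researcher", "Delegate sub-tasks to Finder and Summarizer.",
                   {"find_papers": finder.run, "summarize_paper": summarizer.run})
```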
When given a task like "Find papers on multi-agent systems and summarize the first one," the workflow is as follows:
- The Researcher Agent receives the task.
- It first calls the Finder Agent with the topic "multi-agent systems."
- The Finder Agent returns a list of relevant papers.
- The Researcher Agent extracts the title and URL of the first paper from the list.
- It then calls the Summarizer Agent with this information.
- The Summarizer Agent fetches the paper's content and returns a summary.
- Finally, the Researcher Agent presents the summary as the final answer.
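Continuing the hypothetical sketch above, the end-to-end flow could be orchestrated roughly as follows. The `parse_first_paper` helper and the dummy URL are placeholders for the extraction step the Researcher Agent performs with the LLM:

```python
# Hypothetical orchestration of "find papers on a topic and summarize the first one".
# parse_first_paper and its return values are placeholders, not the repo's code.
def parse_first_paper(paper_list: str) -> tuple[str, str]:
    # Placeholder: in the real system the Researcher's LLM extracts the first
    # paper's title and URL from the Finder's output.
    return "Example Paper Title", "https://example.org/paper.pdf"

def handle_request(topic: str) -> str:
    found = finder.run(f"find papers about {topic}")        # Finder returns a list of papers
    title, url = parse_first_paper(found)                   # Researcher extracts title and URL
    return summarizer.run(f"summarize '{title}' at {url}")  # Summarizer returns the final summary

print(handle_request("multi-agent systems"))
```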