
Repository Discovery Agent

An AI agent that answers questions about the GitHub repositories of a user or organization.

Create your repository

  1. Create a new repository using coloradocollective/ai-agent-assignment as a template.
  2. Create an OpenAI API Key and record the value.
  3. Add a repository secret named OPEN_AI_KEY and set your key from the previous step as the value.
  4. Create a GitHub token with repository access and record the value.
  5. In GitHub Actions, run the test workflow and confirm that it passes.
  6. Clone your repository locally.

Run

  1. Install uv.

  2. Copy the example environment file and fill in the necessary values.

    cp .env.example .env
    source .env
  3. Run the app, then visit http://localhost:5050.

    uv run -m discovery
    
  4. Paste your GitHub token into the login screen and ask a few queries.
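For reference, the .env file sourced in step 2 plausibly contains something like the following. These variable names are an assumption based on the OPEN_AI_KEY secret above; check .env.example in your repository for the exact names.

```shell
# Hypothetical contents; check .env.example for the exact variable names.
# The file is sourced into the shell, so use export.
export OPEN_AI_KEY="sk-your-openai-api-key"
```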

Test

  1. Run fast tests

    uv run -m unittest
  2. Run slow tests

    source .env
    RUN_SLOW_TESTS=true uv run -m unittest
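The RUN_SLOW_TESTS gate above is presumably implemented with unittest's skip machinery. A minimal sketch of how such a gate can work (the repository's actual @slow decorator may differ):

```python
import os
import unittest

# Hypothetical implementation of a @slow gate, assuming the
# RUN_SLOW_TESTS=true convention shown above.
slow = unittest.skipUnless(
    os.environ.get("RUN_SLOW_TESTS") == "true",
    "slow tests only run when RUN_SLOW_TESTS=true",
)

class ExampleTest(unittest.TestCase):
    @slow
    def test_expensive_path(self):
        # Skipped unless RUN_SLOW_TESTS=true is set in the environment.
        self.assertTrue(True)
```

With the variable unset, the test is counted but skipped; setting RUN_SLOW_TESTS=true lets it run normally.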

Exercise

Create a tool

Use the GitHub API documentation and add a new tool.

  1. In the GithubClient, define a method that calls the GitHub API endpoint you've chosen.
  2. Add a function to github_tools that calls your new method in the GitHub client. Return a JSON string of the data returned by the method.
  3. Add a Python docstring to your function that describes how OpenAI should use the function.
  4. Add the @tool() decorator to the function.
  5. Register the new tool by adding it to the list of tools that are returned by the github_tools method.
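Putting the steps together, a new tool might look like the sketch below. The endpoint, method names, and decorator here are assumptions based on the steps above (a no-op @tool and a stub GithubClient are defined so the sketch runs standalone); match them to the actual code in your repository.

```python
import json

def tool():
    # Stand-in for the repository's @tool() decorator (assumed API).
    def wrap(fn):
        fn.is_tool = True
        return fn
    return wrap

class GithubClient:
    """Stub of the repository's GitHub client, for illustration only."""

    def get_languages(self, owner, repo):
        # Step 1: in the real client this would call
        # GET https://api.github.com/repos/{owner}/{repo}/languages
        return {"Python": 12345}

@tool()  # Step 4: mark the function as a tool.
def get_repository_languages(owner: str, repo: str) -> str:
    """Return the programming languages used in a GitHub repository.

    Use this tool when the user asks which languages a repository
    is written in.
    """
    # Steps 2-3: call the client and return a JSON string.
    return json.dumps(GithubClient().get_languages(owner, repo))

def github_tools():
    # Step 5: register the new tool alongside the existing ones.
    return [get_repository_languages]
```

The docstring matters: it is what OpenAI reads when deciding whether to call the function, so describe when to use the tool, not how it is implemented.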

Run the application and see if you can have OpenAI use your tool to fetch data.


Add a test for the new tool

Now make sure the agent integrates properly with OpenAI to use your tool. This test exercises the LLM, so it takes longer to run. The test should focus on checking that the agent calls the correct tool to answer the question; we have only limited ability to check that the response itself is correct.

  1. Define a test_ method in test_repository_agent.py that covers the new tool you added.
  2. Decorate the test method with the @slow decorator.
  3. Add the @responses.activate decorator to the method to enable stubbing the API call.
  4. Stub the API that your tool uses.
  5. Send a question to the agent and assert that the correct tools are called.
  6. Then spot-check the response for relevant word(s).

Push to GitHub

  1. Make a commit and push your changes to the GitHub repository.
  2. Check that your tests pass in the GitHub Action, then submit your repository URL to Canvas.
