18 changes: 11 additions & 7 deletions neuro_san/registries/music_nerd_pro_llm_gemini.hocon
@@ -15,7 +15,8 @@

{
"llm_config": {
-        "model_name": "gemini-2.0-flash",
+        "class": "gemini",
+        "model_name": "gemini-2.5-flash",
Contributor Author: I switched to another model name.

},
"tools": [
# These tool definitions do not have to be in any particular order
@@ -56,13 +57,16 @@
You’re Music Nerd Pro, the go-to brain for all things rock, pop, and everything…
• “What’s a hidden gem I probably missed?”
You’re equal parts playlist curator, music historian, and pop culture mythbuster—with a sixth sense for sonic nostalgia and a deep respect for the analog gods.

-This service comes for a fee. For each question you're about to answer, use your Accountant tool to calculate the
-running fees.
+This service comes at a fee. For each question you're about to answer, use your Accountant tool to calculate the
+running fees.

- You must call the Accountant exactly once per user question — no more, no less.
- You must not estimate, guess, or invent the cost under any circumstances.
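As a rough illustration of the "exactly once per question" rule above — with a hypothetical flat-fee `Accountant` class standing in for the real neuro-san tool, whose fee schedule is not shown here — the bookkeeping might look like:

```python
# Hypothetical sketch only: the real Accountant is a neuro-san tool,
# and its per-question fee is an assumption made for this example.
class Accountant:
    """Tracks a running fee, charged exactly once per user question."""

    FEE_PER_QUESTION = 3.0  # assumed flat fee

    def __init__(self):
        self.running_cost = 0.0

    def calculate_running_cost(self) -> float:
        # Called exactly once per question -- never estimated or skipped.
        self.running_cost += self.FEE_PER_QUESTION
        return self.running_cost


accountant = Accountant()
for question in ["Who drummed on Abbey Road?", "Best shoegaze debut?"]:
    cost = accountant.calculate_running_cost()  # one call per question
    print(cost)  # 3.0, then 6.0
```

The point of the sketch is that the cost is state held by the tool, not something the model may recompute or guess between calls.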

Once you receive the updated running cost, respond with a JSON object that has exactly two keys:
1. "answer" – your full answer to the user’s question.
2. "running_cost" – the updated cost returned by the Accountant tool.
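The required response shape can be sanity-checked mechanically — the key names come from the prompt above, while the values here are hypothetical:

```python
import json

# Hypothetical example of the response format the prompt demands:
# a JSON object with exactly two keys, "answer" and "running_cost".
response_text = json.dumps({
    "answer": "Ringo Starr drummed on Abbey Road.",
    "running_cost": 6.0,
})

parsed = json.loads(response_text)
assert set(parsed) == {"answer", "running_cost"}  # exactly two keys, no extras
print(parsed["running_cost"])  # the value returned by the Accountant tool
```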
Contributor Author: I replaced the prompt used by the Ollama and Anthropic LLM test cases.

Collaborator: It is great that we are trying to use the same prompt across multiple LLMs.
-For each question you receive, call your Accountant tool to calculate the running cost. Otherwise you won't get paid!
-Then answer with a JSON message that has two keys:
-1. An "answer" key whose value has the answer to the question
-2. A "running_cost" key whose value has the running cost computed by the Accountant tool.
""",
"tools": ["Accountant"]

2 changes: 1 addition & 1 deletion requirements.txt
@@ -38,7 +38,7 @@
langchain>=0.3.15,<0.4
langchain-anthropic>=0.3.11,<0.4
langchain-aws>=0.2.27,<0.3
langchain-community>=0.3.19,<0.4
-langchain-google-genai>=2.0.11,<3.0
+langchain-google-genai>=2.1.8,<3.0
Contributor Author: Bumped up the version.
langchain-openai>=0.3.28,<0.4
langchain-nvidia-ai-endpoints>=0.3.8,<0.4
langchain-ollama>=0.3.4,<0.4
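The effect of the new `langchain-google-genai` floor can be illustrated with a small version comparison — the helper below is a simplified stand-in for a real specifier parser such as `packaging.specifiers`, and handles plain `X.Y.Z` strings only:

```python
def version_tuple(v: str) -> tuple:
    # Simplified parser: numeric dot-separated versions only.
    return tuple(int(part) for part in v.split("."))

def satisfies_pin(v: str) -> bool:
    # Models the new requirement: >=2.1.8,<3.0
    return version_tuple("2.1.8") <= version_tuple(v) < version_tuple("3.0")

print(satisfies_pin("2.1.8"))   # True: lower bound is inclusive
print(satisfies_pin("2.0.11"))  # False: the old floor no longer qualifies
```

In other words, environments still resolved to the previous 2.0.x releases fall outside the new range and will be upgraded on the next install.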