Releases: cognizant-ai-lab/neuro-san
neuro-san 0.2.2
- Use the latest leaf-common==1.2.16, with a fix for streaming raw gRPC responses
- Add a ChatMessageType.from_response_type() method which consolidates some client-side logic when parsing responses.
neuro-san 0.2.1
- Add AgentCli.formulate_chat_request() method so AgentCli subclasses can route different requests
- Add some constructor comments for streaming_timeout_in_seconds
neuro-san 0.2.0
- Allow for streaming results via a new StreamingChat() method on the protocol. This will render Chat(), Logs() and Reset() obsolete, but they are still supported for now, at least until all known clients are ported over.
- Currently StreamingChat sends only 2 kinds of messages, even though more are specced in the proto files:
- AI messages carry the front-man's answer to the user's questions via chat
- LEGACY_LOGS messages contain what previously came over the Logs() method. In future versions these will become obsolete in favor of more detailed messages from lower-level agents.
- Update the agent_cli client to support streaming or polling (polling will eventually be obsolete)
- Refactor agent_cli and the session factory a bit so client code has more to reuse via subclassing these.
- Use asyncio infrastructure that had been pushed down from leaf-server-common into leaf-common
- Have ExternalTools use asynchronous gRPC calls to other servers
- Beef up comments about the asynchronous environment that CodedTools operate in.
- Update copyrights
- Move some formerly top-level packages into a new "internals" package that holds them. This makes it a bit clearer what a new user should focus on understanding when they take a look at the codebase.
- Update the README to reflect new protocol.
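The client-side handling of the two StreamingChat message kinds described above can be sketched roughly as follows. This is a hypothetical, self-contained illustration: the dict message shape, the `simulated_stream()` stand-in, and the `collect_answer()` helper are all assumptions for demonstration, not the actual neuro-san wire format or API.

```python
# Hypothetical sketch of consuming a StreamingChat-style response stream.
# The message shapes below are illustrative, not the real gRPC types.

AI = "AI"
LEGACY_LOGS = "LEGACY_LOGS"


def simulated_stream():
    """Stand-in for a StreamingChat() response iterator."""
    yield {"type": LEGACY_LOGS, "text": "tool call started"}
    yield {"type": LEGACY_LOGS, "text": "tool call finished"}
    yield {"type": AI, "text": "Here is the front-man's answer."}


def collect_answer(stream):
    """Keep LEGACY_LOGS text for debugging; return the final AI answer."""
    logs = []
    answer = None
    for message in stream:
        if message["type"] == LEGACY_LOGS:
            logs.append(message["text"])
        elif message["type"] == AI:
            answer = message["text"]
    return answer, logs


answer, logs = collect_answer(simulated_stream())
print(answer)  # → Here is the front-man's answer.
```

The point of the sketch is the dispatch on message type: a client ported from the polling Chat()/Logs() pair handles both kinds from a single stream instead of two separate calls.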
neuro-san 0.1.17
Add LLM config keys for Azure
What's Changed
- Add LLM config keys for Azure by @d1donlydfink in #32
Full Changelog: 0.1.16...0.1.17
neuro-san 0.1.16
Package the scripts needed for external servers using wheel files.
neuro-san 0.1.15
The idea here is to have a build.sh / Dockerfile / run.sh combo that can correctly build a service container when a developer is working on their own project outside the neuro-san repo with the wheel files installed.
To achieve that, they can copy the neuro_san/deploy/{Dockerfile, build.sh, run.sh} files to the top level of their own project, as peers to the registries and coded_tools directories inside their project, and all these scripts should "just" work.
For example, here is my separate test area:
/tmp/nsdeploy
/tmp/nsdeploy/registries
/tmp/nsdeploy/registries/esp_decision_assistant.hocon
/tmp/nsdeploy/registries/hello_world.hocon
/tmp/nsdeploy/registries/manifest.hocon
/tmp/nsdeploy/build.sh
/tmp/nsdeploy/Dockerfile
/tmp/nsdeploy/run.sh
... these last 3 files are "just" copies of the neuro_san/deploy versions.
I could add a coded_tools directory in here too and that should work equally well.
All that matters is that I have the neuro-san, leaf-server-common and leaf-common wheel files installed in my venv when I run the build.sh script in /tmp/nsdeploy.
When I use the run.sh script there to start the service, the standard neuro_san/client/agent_cli works against this locally running service.
neuro-san 0.1.14
Some moves to make it easier to use manifests and individual tools from external repos.
neuro-san 0.1.13
- Fix a problem with running agent_main_loop outside the Dockerfile.
- Add a better exception message in a squirrelly agent tool path case.
neuro-san 0.1.12
Tweak the Dockerfile/entrypoint.sh so that more external Dockerfiles with different file structures can use the entrypoint.sh as-is.
neuro-san 0.1.11
- Allow service_prefix to be passed in as an AgentMainLoop constructor argument
- Change DEFAULT_SERVICE_PREFIX to an empty string
- Fix a rogue transitive dependency. See openai/openai-python#1903
- Move scripty business for deployment to a new deploy directory and add it to the manifest file