299 ollama client does not work with stream #309
base: main
Conversation
… created a mock class TestGeneratorWithStream for simulating streamed API responses
liyin2015
left a comment
Please also check this PR as it is highly related.
Updates to the generator need to be minimized.
    s = f"model_kwargs={self.model_kwargs}, model_type={self.model_type}"
    return s

    def _process_chunk(self, chunk: Any) -> GeneratorOutput:
It specifies only one output, but you returned a tuple.
Ensure we add code linting @fm1320 so developers can catch these basics.
Thanks for the review. Yes, that's my fault. I will fix this.
    log.error(f"Error in stream processing: {e}")
    yield GeneratorOutput(error=str(e))

    return GeneratorOutput(data=process_stream(), raw_response=output)
Don't separate the code; it changed too much, and it's better to minimize the change, so just add the initial code back to the else branch.
Understood, I'll proceed with this approach. In that case, it seems there's no longer a need for the additional _process_chunk function I added earlier?
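For reference, a minimal sketch of the shape this suggests, using simplified stand-ins for GeneratorOutput and the logger (illustrative only, not the actual adalflow.core.generator code):

```python
import logging
from dataclasses import dataclass
from types import GeneratorType
from typing import Any, Optional

log = logging.getLogger(__name__)


@dataclass
class GeneratorOutput:
    # Simplified stand-in for adalflow.core.types.GeneratorOutput.
    data: Any = None
    raw_response: Any = None
    error: Optional[str] = None


def post_call(completion: Any) -> GeneratorOutput:
    """Sketch of a _post_call that branches on streamed vs. non-streamed output."""
    if isinstance(completion, GeneratorType):
        # Streamed case: wrap the chunk iterator so errors surface as
        # GeneratorOutput(error=...) instead of raising mid-iteration.
        def process_stream():
            try:
                for chunk in completion:
                    yield GeneratorOutput(data=chunk, raw_response=str(chunk))
            except Exception as e:
                log.error(f"Error in stream processing: {e}")
                yield GeneratorOutput(error=str(e))

        return GeneratorOutput(data=process_stream(), raw_response=completion)
    # Non-streamed case: the original parsing / output_processors logic stays
    # here, unchanged (represented by a passthrough in this sketch).
    return GeneratorOutput(data=completion, raw_response=str(completion))


if __name__ == "__main__":
    streamed = post_call(token for token in ["Hel", "lo"])
    print([out.data for out in streamed.data])  # ['Hel', 'lo']
```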
What does this PR do?
This PR addresses the issue where the application fails to work when the stream parameter is set to True in the adalflow.components.model_client.ollama_client::OllamaClient class. The issue is traced to the _post_call method in adalflow.core.generator.py, which does not currently handle streaming correctly.

Fixes #299
The fix is applied in the _post_call method, including how streamed responses are handled with respect to output_processors.

Usage Updates
The updated usage for the stream parameter is as follows:
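A representative sketch of this usage, assuming the standard Generator and OllamaClient setup (the host, model name, and prompt below are illustrative placeholders, not taken from this PR):

```python
from adalflow.core import Generator
from adalflow.components.model_client import OllamaClient

generator = Generator(
    model_client=OllamaClient(host="http://localhost:11434"),
    model_kwargs={"model": "llama3", "stream": True},
)

output = generator(prompt_kwargs={"input_str": "What is the capital of France?"})

# With stream=True, output.data is expected to be a generator of chunks
# rather than a single string, so it is consumed by iterating.
for chunk in output.data:
    print(chunk)
```

Tests Added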
- A new test class, TestGeneratorWithStream, is added in test_generator.py to verify the streaming behavior.
- It mocks a GeneratorType response from the parse_chat_completion method and validates the output for streamed data (sketched below).
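As a rough illustration of this mocking pattern (not the actual test_generator.py code; the chunk values and mocked behavior are assumptions):

```python
import unittest
from types import GeneratorType
from unittest.mock import MagicMock


class TestGeneratorWithStream(unittest.TestCase):
    def setUp(self):
        # Mock client whose parse_chat_completion returns a GeneratorType,
        # simulating a streamed API response.
        self.mock_client = MagicMock()
        self.mock_client.parse_chat_completion.return_value = (
            chunk for chunk in ["Hello", ", ", "world", "!"]
        )

    def test_parse_chat_completion_returns_generator(self):
        completion = self.mock_client.parse_chat_completion()
        self.assertIsInstance(completion, GeneratorType)

    def test_streamed_chunks_are_preserved(self):
        completion = self.mock_client.parse_chat_completion()
        self.assertEqual("".join(completion), "Hello, world!")


if __name__ == "__main__":
    unittest.main()
```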
Tests output (local) after changes:

Breaking Changes
As far as I can test, this PR does not introduce any breaking changes. Existing functionality for non-streaming cases remains unaffected.
Before submitting