Conversation
…n_file_with_class_key
…_class_key' of https://github.com/leaf-ai/neuro-san into UN-3276_Support_for_user-defined_llm_in_hocon_file_with_class_key
neuro_san/internals/run_context/langchain/core/langchain_run_context.py
neuro_san/internals/run_context/langchain/core/langchain_run_context.py
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/user_specified_langchain_llm_factory.py
…n_file_with_class_key
A lot to address in here.
Same advice as always:
- Slow down. Fast is not the answer in public API development.
- Think through how your change affects existing installations
- Be your own first reviewer. The best thing you could do is ask yourself "What is Dan going to say about this?" before handing it off to me.
neuro_san/internals/run_context/langchain/llms/user_specified_langchain_llm_factory.py
…n_file_with_class_key
- Resolve llm class in default_llm_factory when all factories fail
- Some test hocons
…_class_key' of https://github.com/leaf-ai/neuro-san into UN-3276_Support_for_user-defined_llm_in_hocon_file_with_class_key
    class UserSpecifiedLangChainLlmFactory(LangChainLlmFactory):
        """
        A factory for constructing LLMs based on user-specified configurations provided under the "llm_config"
Add UserSpecifiedLangChainLlmFactory that uses arguments given by the user.
    if self.llm_class:
        # If config has "class", it is a user-specified llm so return config as is,
        # and replace "StandardLangChainLlmFactory" with "UserSpecifiedLangChainLlmFactory".
        self.llm_factories[0] = UserSpecifiedLangChainLlmFactory()
If llm_config has class, use UserSpecifiedLangChainLlmFactory.
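The selection logic described here can be sketched as follows. The factory classes below are minimal stand-ins for the real neuro-san implementations, kept only to show the swap at the front of the factory chain:

```python
from typing import Any, Dict, List


class StandardLangChainLlmFactory:
    """Stand-in for the default factory that knows the built-in providers."""


class UserSpecifiedLangChainLlmFactory:
    """Stand-in for the factory driven entirely by the user's llm_config."""


def select_factories(llm_config: Dict[str, Any],
                     llm_factories: List[Any]) -> List[Any]:
    # A "class" key marks a user-specified LLM, so the user-specified
    # factory replaces the standard one at the head of the chain.
    if llm_config.get("class"):
        llm_factories[0] = UserSpecifiedLangChainLlmFactory()
    return llm_factories
```
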
    try:
        return_dict: Dict[str, Any] = await agent_executor.ainvoke(inputs, invoke_config)
-   except (APIError, BadRequestError, AuthenticationError, ChatGoogleGenerativeAIError) as api_error:
+   except (openai.APIError, anthropic.APIError, ChatGoogleGenerativeAIError) as api_error:
Use APIError for anthropic as well.
        retries = retries - 1
        exception = key_error
        backtrace = traceback.format_exc()
    except TypeError as type_error:
Add TypeError for non-existent arguments the user may pass.
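A minimal sketch of that retry pattern (the helper name and default retry count here are illustrative, not the actual neuro-san API):

```python
import traceback
from typing import Any, Callable


def run_with_retries(action: Callable[[], Any], retries: int = 3) -> Any:
    """Retry the action, remembering the last exception and its backtrace.

    TypeError is caught alongside KeyError so that a user passing a
    non-existent constructor argument fails with a readable error
    instead of an unhandled crash.
    """
    exception = None
    backtrace = None
    while retries > 0:
        try:
            return action()
        except (KeyError, TypeError) as error:
            retries -= 1
            exception = error
            backtrace = traceback.format_exc()
    raise RuntimeError(f"Failed after retries: {exception}\n{backtrace}")
```
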
neuro_san/internals/run_context/langchain/util/api_key_error_check.py
    class_args_set: Set[str] = set(signature(method_class).parameters.keys())
    pydantic_args: Set[str] = set()
    # Check if it is a class that extends pydantic BaseModel
    if isclass(method_class) and issubclass(method_class, BaseModel):
Check if the class extends BaseModel. This is necessary since pydantic allows arguments to have alias.
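The idea can be sketched without depending on pydantic itself; the fake model in the usage below only mimics pydantic's `model_fields`/`alias` shape, and the helper name is illustrative:

```python
from inspect import isclass, signature
from typing import Any, Set


def collect_accepted_args(method_class: Any) -> Set[str]:
    # Arguments visible on the constructor signature.
    accepted: Set[str] = set(signature(method_class).parameters.keys())
    # Pydantic models also accept each field's alias, so a signature
    # check alone would wrongly reject valid aliased arguments.
    model_fields = getattr(method_class, "model_fields", None)
    if isclass(method_class) and isinstance(model_fields, dict):
        for name, field in model_fields.items():
            accepted.add(name)
            alias = getattr(field, "alias", None)
            if alias:
                accepted.add(alias)
    return accepted
```
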
neuro_san/registries/manifest.hocon
    "test_new_model_default_class.hocon": true,
    "test_new_class.hocon": true,
    "test_new_model_extended_class.hocon": true
This is for testing only. It will not be in the final PR.
    "llm_config": {
        "class": "langchain_groq.chat_models.ChatGroq",
        "model_name": "deepseek-r1-distill-llama-70b",
        "temperature": 0.5
This is for testing only. It will not be in the final PR.
    "llm_config": {
        "class": "openai",
        "model_name": "gpt-4.1-mini",
        "temperature": 0.5
This is for testing only. It will not be in the final PR.
    "llm_config": {
        "class": "groq",
        "model_name": "deepseek-r1-distill-llama-70b",
        "temperature": 0.5,
This is for testing only. It will not be in the final PR.
There are three types of LLM extensions using the
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/standard_langchain_llm_factory.py
neuro_san/internals/run_context/langchain/toolbox/toolbox_factory.py
neuro_san/internals/run_context/langchain/toolbox/toolbox_factory.py
neuro_san/internals/run_context/langchain/util/api_key_error_check.py
…n_file_with_class_key
- Refactor "check_invalid_arguments" into a util class
- Revert the error message detection in api key error checker
- Modified test_toolbox_factory
- Add test for ArgumentValidator
…_class_key' of https://github.com/leaf-ai/neuro-san into UN-3276_Support_for_user-defined_llm_in_hocon_file_with_class_key
    # Try resolving via 'class' in config if factories failed
    class_path: str = config.get("class")
    if llm is None and found_exception is not None and class_path:
The conditions to instantiate using `class` in `llm_config` in the agent network hocon are:
- Failed instantiation from the llm factories (thus `llm is None`)
- Consequently, `found_exception is not None`
- There is `class` in the config.
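Those three conditions together gate the fallback, roughly like this. The function shape is a sketch: the user-class resolver is passed in as a callable here, while in neuro-san it is a factory method, and the broad `except Exception` stands in for the narrower error handling in the real code:

```python
from typing import Any, Callable, Dict, List, Optional


def resolve_llm(config: Dict[str, Any],
                factories: List[Any],
                create_from_user_class: Callable[[str, Dict[str, Any]], Any]) -> Any:
    llm: Optional[Any] = None
    found_exception: Optional[Exception] = None
    for factory in factories:
        try:
            llm = factory.create_llm(config)
            break
        except Exception as exc:  # sketch only; the real code is narrower
            found_exception = exc
    # Try resolving via 'class' in config only if every factory failed.
    class_path: Optional[str] = config.get("class")
    if llm is None and found_exception is not None and class_path:
        return create_from_user_class(class_path, config)
    if llm is None and found_exception is not None:
        raise found_exception
    return llm
```
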
    def create_base_chat_model_from_user_class(self, class_path: str, config: Dict[str, Any]) -> BaseLanguageModel:
        """
        Create a BaseLanguageModel from the user-specified langchain model class.
Refactor the logic of instantiating via "class" with user-defined args into the create_base_chat_model_from_user_class() method.
    class ArgumentValidator:
        """
        A utility class for inspecting method and class arguments, particularly useful for
Refactor checking for invalid args to a utility class ArgumentValidator.
This is very nicely abstracted.
    # ------------------------- Tests -------------------------------

    class TestArgumentValidator:
        """Tests for the ArgumentValidator class and its static validation utilities."""
Add test for ArgumentValidator.
    from neuro_san.internals.run_context.langchain.toolbox.toolbox_factory import ToolboxFactory

    RESOLVER_PATH = "leaf_common.config.resolver.Resolver.resolve_class_in_module"
    VALIDATIOR_PATH = (
Modify test for toolbox factory to reflect the refactoring.
    # To prevent this, we first fetch the default arguments for the given class from llm_info,
    # then merge them with the user-provided config. This ensures all expected arguments are present,
    # and the user’s config values take precedence over the defaults.
    config_from_class_in_llm_info: Dict[str, Any] = self.get_chat_class_args(class_from_llm_config)
Combine the user-specified config with the default args for the class in llm_info.
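The merge itself is a plain dict update where user values win. A sketch with illustrative names (the real code pulls the defaults via get_chat_class_args()):

```python
from typing import Any, Dict


def merge_llm_args(defaults_from_llm_info: Dict[str, Any],
                   user_config: Dict[str, Any]) -> Dict[str, Any]:
    # Start from the defaults recorded for the class in llm_info,
    # then let the user's llm_config override any overlapping keys.
    merged = dict(defaults_from_llm_info)
    merged.update(user_config)
    return merged
```
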
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
neuro_san/internals/run_context/langchain/llms/default_llm_factory.py
    @staticmethod
    def get_base_model_args(base_model_class: Type[BaseModel]) -> Set[str]:
        """
        Extract all field names and aliases from a Pydantic BaseModel class.
It's good that you call out the BaseModel class as coming from Pydantic.
I always get confused thinking it's an LLM, but it's not, it's lower level than that.
d1donlydfink
left a comment
This is looking really good... good enough at least to take out of Draft mode.
A couple more little things and we are there I think.
…n_file_with_class_key
…n_file_with_class_key
…_class_key' of https://github.com/leaf-ai/neuro-san into UN-3276_Support_for_user-defined_llm_in_hocon_file_with_class_key
…n_file_with_class_key
    @staticmethod
    def create_class(class_name: str, class_name_source: str, type_of_class: Type) -> Type:
        """
I refactored create_instance() into a create_class() method because we need a constructor with args for the user-defined LLM.
-   if not isinstance(instance, type_of_class):
-       raise ValueError(f"Class '{class_name}' from {class_name_source} "
-                        "must be of type {type_of_class.__name__}")
+   if not issubclass(class_reference, type_of_class):
Move the correct-type check to create_class().
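Checking the class object with issubclass (rather than an instance with isinstance) surfaces the type error before any constructor arguments are applied. A hedged sketch, with an illustrative helper name rather than the real create_class() signature:

```python
from typing import Type


def check_resolved_class(class_reference: Type,
                         class_name: str,
                         class_name_source: str,
                         type_of_class: Type) -> Type:
    # Validate the resolved class itself, before instantiation.
    if not issubclass(class_reference, type_of_class):
        raise ValueError(f"Class '{class_name}' from {class_name_source} "
                         f"must be of type {type_of_class.__name__}")
    return class_reference
```
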
    raise ValueError("'class' in llm_config must be a string")

    # Resolve the 'class'
    llm_class: Type[BaseLanguageModel] = ResolverUtil.create_class(
Use ResolverUtil.create_class for the llm since we need to instantiate it with args.
    # Check for invalid args and throw error if found
    ArgumentValidator.check_invalid_args(llm_class, user_config)

    return llm_class(**user_config)
LLM instantiation with args.
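Put together, validation plus keyword instantiation looks roughly like this. ArgumentValidator is reduced here to a simple signature/set check, and the function name is illustrative:

```python
from inspect import signature
from typing import Any, Dict, Type


def instantiate_llm(llm_class: Type, user_config: Dict[str, Any]) -> Any:
    # Reject arguments the constructor does not accept before calling it,
    # so the user gets a clear error instead of a raw TypeError.
    accepted = set(signature(llm_class).parameters.keys())
    invalid = set(user_config) - accepted
    if invalid:
        raise ValueError(
            f"Invalid arguments for {llm_class.__name__}: {sorted(invalid)}")
    return llm_class(**user_config)
```
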
-   raise ValueError(f"Class {llm_factory_class_name} in {llm_info_file} "
-                    "must be of type LangChainLlmFactory")
+   # Resolve and instantiate the factory class
+   llm_factory = ResolverUtil.create_instance(
Use ResolverUtil.create_instance for resolve and instantiation of the factory class.
-   with patch("leaf_common.config.resolver.Resolver.resolve_class_in_module") as mock_resolver, \
-        patch.object(factory, "_check_invalid_args") as mock_check_invalid:
+   with patch(RESOLVER_PATH) as mock_resolver, patch(VALIDATIOR_PATH) as mock_check_invalid:
d1donlydfink
left a comment
This looks great Nor. Overall things actually look a lot simpler as this has evolved over time. Nice work!
…n_file_with_class_key
Changes
UserSpecifiedLangchainLlmFactoryTests
gpt-4.1-mini, which is not in the default llm info.