
Dev lfx litellm#144

Merged
lfnothias merged 99 commits into dev_lfx from dev_lfx_litellm
Feb 19, 2025

Conversation


@lfnothias lfnothias commented Feb 19, 2025

PR Type

enhancement, documentation


Description

  • Add support for multiple LLM instances

  • Improve README.md with installation details

  • Update evaluation configuration for LangSmith

  • Remove deprecated files and clean up code


Changes walkthrough 📝

Relevant files

Enhancement (12 files)
  • agents_factory.py: Enhance agent creation with LLM instance support (+25/-2)
  • agent.py: Modify agent creation to accept an LLM instance (+3/-2)
  • evaluation.py: Update evaluation configuration for LangSmith (+17/-4)
  • main.py: Add support for LiteLLM (+103/-50)
  • params.ini: Add new LLM configurations (+38/-8)
  • params_alternative.ini: Add alternative parameters for LLMs (+55/-0)
  • LangSmith_evaluation.yml: Update evaluation workflow configuration (+54/-22)
  • docs.yml: Add documentation deployment workflow (+86/-0)
  • pr_agent.yml: Update PR agent workflow configuration (+1/-0)
  • test_main.yml: Update test workflow configuration (+8/-2)
  • docker-compose.yml: Add Docker support for MetaboT (+8/-0)
  • environment.yml: Add LiteLLM dependency (+1/-0)

Documentation (2 files)
  • tool_chemicals.py: Update chemical resolver documentation (+11/-52)
  • README.md: Improve installation and usage instructions (+199/-192)

Additional files (20 files)
  • .pr_agent.toml (+1/-1)
  • Dockerfile (+10/-0)
  • substructure_workinprogress.py (+0/-136)
  • agent.py (+4/-3)
  • agent.py (+7/-7)
  • base.py (+0/-5)
  • tool_merge_result.py (+1/-1)
  • tool_sparql.py (+16/-12)
  • agent.py (+2/-2)
  • prompt.py (+0/-26)
  • temp_for_record.py (+0/-290)
  • agent.py (+0/-26)
  • prompt.py (+0/-18)
  • tool_say_hello.py (+0/-60)
  • agent.py (+4/-2)
  • environment_alternative.yml (+0/-36)
  • environment_test.yml (+0/-46)
  • mkdocs.yml (+195/-0)
  • announce.html (+18/-0)
  • pyproject.toml (+0/-245)

Need help?
  • Type /help how to ... in the comments thread for any questions about PR-Agent usage.
  • Check out the documentation for more information.
@github-actions bot

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    LLM Instance Handling

    The code introduces logic to handle multiple LLM instances and fallback mechanisms. Ensure that the logic for selecting and falling back to default LLM instances is robust and correctly implemented.

    # If the agent configuration specifies an LLM choice, pass that specific instance
    if "llm_instance" in func_signature.parameters:
        if "llm_choice" in agent:
            if agent["llm_choice"] in llms:
                filtered_args["llm_instance"] = llms[agent["llm_choice"]]
            else:
                logger.error(f"LLM '{agent['llm_choice']}' specified for agent '{agent['name']}' is not available in configuration. Falling back to default LLM 'llm_o'.")
                filtered_args["llm_instance"] = llms.get("llm_o", None)
        else:
            filtered_args["llm_instance"] = llms.get("llm_o", None)
    Environment Variable Handling

    The code sets environment variables for the LangChain and LangSmith configuration. Verify that the fallback logic for setting these variables is correct and that it handles cases where none of them are set.

            os.environ.get("LANGCHAIN_PROJECT") or 
            os.environ.get("LANGSMITH_PROJECT") or 
            "MetaboT evaluation"
        )
    os.environ["LANGCHAIN_ENDPOINT"] = (
            os.environ.get("LANGCHAIN_ENDPOINT") or 
            os.environ.get("LANGSMITH_ENDPOINT") or 
            os.environ.get("LANGSMITH_BASE_URL") or 
            "https://api.smith.langchain.com"
        )
    
    
    # Initialize the LangSmith client
    api_key = os.environ.get("LANGCHAIN_API_KEY") or os.environ.get("LANGSMITH_API_KEY")
    if not api_key:
        raise ValueError("Neither LANGCHAIN_API_KEY nor LANGSMITH_API_KEY is defined")
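
    A hedged sketch of the same precedence rule, factored into a helper (the name first_env is illustrative, not part of the PR):

        import os

        def first_env(*names: str, default: str) -> str:
            # Return the first non-empty environment variable among `names`,
            # falling back to `default`; mirrors the LANGCHAIN_*/LANGSMITH_* chains above.
            for name in names:
                value = os.environ.get(name)
                if value:
                    return value
            return default

        # Example: project resolution as in the quoted code.
        project = first_env("LANGCHAIN_PROJECT", "LANGSMITH_PROJECT",
                            default="MetaboT evaluation")
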
    LLM Creation Logic

    The llm_creation function has been updated to support multiple LLM configurations. Ensure that the logic for reading configuration and initializing LLMs is correct and handles all edge cases.

    def llm_creation(api_key=None, params_file=None):
        """
         Reads the parameters from the configuration file (default is params.ini) and initializes the language models.
    
         Args:
             api_key (str, optional): The API key for the OpenAI API.
             params_file (str, optional): Path to an alternate configuration file.
    
         Returns:
             dict: A dictionary containing the language models.
         """
    
        config = configparser.ConfigParser()
        if params_file:
            config.read(params_file)
        else:
            config.read(params_path)
    
        sections = ["llm", "llm_preview", "llm_o", "llm_mini", "llm_o3_mini", "llm_o1",
                   "deepseek_deepseek-chat", "deepseek_deepseek-reasoner",
                   "ovh_Meta-Llama-3_1-70B-Instruct", "llm_litellm"]
        models = {}
    
        # Use the API key passed in, or fall back to the OPENAI_API_KEY environment variable.
        openai_api_key = api_key if api_key else os.getenv("OPENAI_API_KEY")
    
        for section in sections:
            temperature = float(config[section]["temperature"])
            model_id = config[section]["id"]
            max_retries = int(config[section]["max_retries"])
            if section.startswith("deepseek"):
                base_url = config[section]["base_url"]
                llm = ChatOpenAI(
                    temperature=temperature,
                    model=model_id,
                    max_retries=max_retries,
                    verbose=True,
                    openai_api_base=base_url,
                    openai_api_key=get_deepseek_key(),
                )
            elif section.startswith("ovh"):
                base_url = config[section]["base_url"]
                llm = ChatOpenAI(
                    temperature=temperature,
                    model=model_id,
                    max_retries=max_retries,
                    verbose=True,
                    base_url=base_url,
                    api_key=get_ovh_key(),
                )
            elif section.startswith("llm_litellm"):
                base_url = config[section].get("base_url", None)
                llm = ChatLiteLLM(
                    temperature=float(temperature),
                    model=model_id,
                    max_retries=int(max_retries),
                    verbose=True,
                    api_key=get_litellm_key(),
                    base_url=base_url
                )
            else:
                llm = ChatOpenAI(
                    temperature=temperature,
                    model=model_id,
                    max_retries=max_retries,
                    verbose=True,
                    openai_api_key=openai_api_key,
                )
            models[section] = llm
    
        return models
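
    For reference, a minimal sketch of the configuration shape llm_creation expects; the section names and values below are illustrative, while the PR's actual params.ini defines the full section list shown above:

        import configparser
        import textwrap

        # Hypothetical excerpt in the params.ini format read by llm_creation.
        SAMPLE_INI = textwrap.dedent("""
            [llm_o]
            id = gpt-4o
            temperature = 0
            max_retries = 3

            [llm_litellm]
            id = openai/gpt-4o-mini
            temperature = 0
            max_retries = 3
            base_url = http://localhost:4000
        """)

        config = configparser.ConfigParser()
        config.read_string(SAMPLE_INI)
        print(config["llm_litellm"]["id"])  # -> openai/gpt-4o-mini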


    github-actions bot commented Feb 19, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Possible issue
    Ensure llms dictionary is not empty
    Suggestion Impact: The suggestion to add a check for the llms dictionary being non-empty was implemented in the commit, preventing potential KeyError exceptions.

    code diff:

    +                    if llms and "llm_choice" in agent:

    Add a check to ensure that the llms dictionary is not empty before accessing it to
    prevent potential KeyError.

    app/core/agents/agents_factory.py [66-71]

    -if "llm_choice" in agent:
    +if llms and "llm_choice" in agent:
         if agent["llm_choice"] in llms:
             filtered_args["llm_instance"] = llms[agent["llm_choice"]]
         else:
             logger.error(f"LLM '{agent['llm_choice']}' specified for agent '{agent['name']}' is not available in configuration. Falling back to default LLM 'llm_o'.")
             filtered_args["llm_instance"] = llms.get("llm_o", None)
    Suggestion importance[1-10]: 7


    Why: Adding a check to ensure the llms dictionary is not empty before accessing it helps prevent potential KeyError exceptions, enhancing the robustness of the code.

    Impact: Medium
    Validate llm_instance before usage

    Add a check to ensure that llm_instance is not None before using it to prevent
    potential runtime errors.

    app/core/agents/enpkg/agent.py [27-28]

    -model_to_use = llm_instance if llm_instance is not None else llms[MODEL_CHOICE]
    +if llm_instance is not None:
    +    model_to_use = llm_instance
    +else:
    +    model_to_use = llms[MODEL_CHOICE]
     agent = create_openai_tools_agent(model_to_use, tools, CHAT_PROMPT)
    Suggestion importance[1-10]: 3


    Why: The suggestion restructures the code to explicitly check if llm_instance is not None, which is already handled by the existing conditional expression. The improvement is marginal.

    Impact: Low
    General
    Add error handling for model creation

    Add error handling for the llm_creation function to manage potential exceptions
    during model initialization.

    app/core/main.py [68]

    -models = llm_creation()
    +try:
    +    models = llm_creation()
    +except Exception as e:
    +    logger.error(f"Failed to create LLM models: {e}")
    +    raise
    Suggestion importance[1-10]: 5


    Why: Adding error handling for the llm_creation function can help manage potential exceptions during model initialization, improving the code's resilience to runtime errors.

    Impact: Low

    @lfnothias lfnothias merged commit 690be1c into dev_lfx Feb 19, 2025