test(models/openrouter): add e2e tests #2554
Conversation
Summary of Changes

Hello @Stream29, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the testing infrastructure for the OpenRouter LLM plugin by introducing end-to-end tests. The primary goal is to validate that various OpenRouter models can be correctly invoked and return expected outputs, improving the reliability and stability of the OpenRouter integration.
Activity
Code Review
This pull request introduces end-to-end tests for the OpenRouter model provider, which is a valuable addition for ensuring the integration's reliability. The changes correctly add the necessary testing dependencies and a new test file that covers invoking different models. My review includes a couple of suggestions to improve the test implementation by following common testing best practices and enhancing code readability.
```python
models: list[str] = []
for item in data:
    if isinstance(item, str) and item.strip():
        models.append(item.strip())
return models
```
For better readability and conciseness, this loop can be replaced with a list comprehension. This is a more Pythonic way to create the list of models.
```diff
-models: list[str] = []
-for item in data:
-    if isinstance(item, str) and item.strip():
-        models.append(item.strip())
-return models
+return [item.strip() for item in data if isinstance(item, str) and item.strip()]
```
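A quick illustration of what the comprehension does (the sample `data` list here is hypothetical, chosen only to exercise the filter): non-string entries are skipped, blank strings are dropped, and surrounding whitespace is trimmed.

```python
# Hypothetical input mixing valid model IDs, padding, blanks, and non-strings.
data = ["openai/gpt-4o", "  anthropic/claude-3.5-sonnet  ", "", None, 42]

# Keep only non-empty strings, stripped of surrounding whitespace.
models = [item.strip() for item in data if isinstance(item, str) and item.strip()]
print(models)  # ['openai/gpt-4o', 'anthropic/claude-3.5-sonnet']
```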
```python
if not api_key:
    raise ValueError("OPENROUTER_API_KEY environment variable is required")
```
Instead of raising a ValueError when the OPENROUTER_API_KEY is not set, it's better practice to skip the test using pytest.skip. This prevents the test suite from failing in environments where the key is not available, and provides a clearer output that the test was skipped due to a missing requirement.
```diff
 if not api_key:
-    raise ValueError("OPENROUTER_API_KEY environment variable is required")
+    pytest.skip("OPENROUTER_API_KEY environment variable is required to run this test")
```
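In context, the suggestion could look like the sketch below. The test body and the `invoke_model` call are placeholders, not the actual code from this PR; the sketch only shows the two standard pytest patterns for skipping on a missing credential, an imperative `pytest.skip` inside the test and the equivalent declarative `pytest.mark.skipif` marker.

```python
import os

import pytest


def test_openrouter_invoke():
    # Imperative form: skip at runtime when the credential is absent,
    # so the suite reports "skipped" instead of "failed".
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if not api_key:
        pytest.skip("OPENROUTER_API_KEY environment variable is required to run this test")
    # ... invoke_model(api_key=api_key) would go here (placeholder) ...


# Equivalent declarative form: the condition is evaluated at collection time.
@pytest.mark.skipif(
    "OPENROUTER_API_KEY" not in os.environ,
    reason="OPENROUTER_API_KEY environment variable is required to run this test",
)
def test_openrouter_invoke_marked():
    # ... invoke_model(api_key=os.environ["OPENROUTER_API_KEY"]) (placeholder) ...
    pass
```

`pytest.skip` works by raising a special `Skipped` exception that pytest catches and reports, which is why it cleanly aborts the rest of the test body.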
Related Issues or Context
langgenius/dify#19379
This PR contains changes to the LLM Models Plugin. Only tests are added.