feat: add OpenAI API support with multi-provider architecture#1

Open
keyliass wants to merge 3 commits into richstokes:main from keyliass:feat/openai-api-support

Conversation

@keyliass

Add OpenAI API support

Hey! I've been playing around with InfiniChat and thought it would be cool to add OpenAI support alongside the existing Ollama functionality.

What's new

I added the ability to use OpenAI models (like GPT-4o) alongside local Ollama models. The fun part is that you can now mix and match: have a conversation between a local model and OpenAI, or put two different OpenAI models in a conversation with each other.

How it works

Added two new flags:

  • --client_a and --client_b to choose between ollama or openai
  • If you don't specify models, it picks sensible defaults (llama3.2 for Ollama, gpt-4o for OpenAI)

Some examples:

# Two OpenAI models chatting
python app.py --client_a openai --client_b openai

# Mix local and cloud
python app.py --client_a ollama --client_b openai

# Still works exactly like before
python app.py --model_a llama3.2 --model_b codellama
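To make the flag behavior above concrete, here is a minimal sketch of how the provider selection and default-model fallback could be wired up. The flag names (`--client_a`, `--client_b`, `--model_a`, `--model_b`) and the defaults (`llama3.2`, `gpt-4o`) come from this PR description; the helper names (`build_parser`, `resolve_models`) are illustrative, not the actual diff.

```python
# Illustrative sketch only; helper names are hypothetical, not from the PR.
import argparse

# Defaults per provider, as described in the PR.
DEFAULT_MODELS = {"ollama": "llama3.2", "openai": "gpt-4o"}

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    # Provider selection for each side of the conversation.
    parser.add_argument("--client_a", choices=DEFAULT_MODELS, default="ollama")
    parser.add_argument("--client_b", choices=DEFAULT_MODELS, default="ollama")
    # Optional explicit models; fall back to the provider default when omitted.
    parser.add_argument("--model_a")
    parser.add_argument("--model_b")
    return parser

def resolve_models(args: argparse.Namespace) -> tuple[str, str]:
    """Pick the explicit model when given, else the provider's default."""
    return (
        args.model_a or DEFAULT_MODELS[args.client_a],
        args.model_b or DEFAULT_MODELS[args.client_b],
    )
```

With this shape, `python app.py --client_a openai` would resolve side A to `gpt-4o` automatically, while `--model_a codellama` still wins over any default, which is what keeps the old invocations working unchanged.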

Setup

You just need to add your OpenAI API key to a .env file (an example file is included). Everything else works the same.
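For reference, the .env file would presumably look something like this (the variable name is the one the openai package conventionally reads; substitute your own key):

```
OPENAI_API_KEY=sk-...
```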

Technical stuff

  • Added streaming support for OpenAI (so responses appear in real-time like Ollama)
  • Kept all existing functionality - nothing breaks
  • Added the openai and python-dotenv packages
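On the streaming point: the core of "responses appear in real-time" is just consuming text deltas as they arrive and echoing them immediately. A provider-agnostic sketch of that loop is below; in the real code the iterable would come from the OpenAI streaming API (or Ollama's streaming response), and the function name `consume_stream` is illustrative.

```python
# Illustrative only: accumulate and echo streamed response deltas.
# In the actual implementation the deltas would come from a streaming
# API call; here any iterable of text fragments works, so the logic
# is the same for Ollama and OpenAI.
def consume_stream(deltas) -> str:
    parts = []
    for delta in deltas:
        if delta:  # streams can emit empty/None fragments; skip those
            print(delta, end="", flush=True)  # show text as it arrives
            parts.append(delta)
    print()  # finish the line once the stream ends
    return "".join(parts)  # full reply, e.g. to append to chat history
```

Returning the joined string matters because each model's full reply has to be fed back into the other model's context for the conversation to continue.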

I've been testing it with debates, creative writing prompts, and just random conversations.
Let me know if you'd like me to change anything or if you have questions about the implementation.

(PS: This is my first ever pull request, so don't hesitate to tell me if I've done anything wrong.)

Tonyo and others added 3 commits June 21, 2025 20:20
- Add OpenAI client implementation with streaming support
- Introduce --client_a and --client_b arguments for provider selection
- Support mixed conversations (Ollama + OpenAI)
- Add environment configuration with .env file support
- Implement auto-detection of default models per provider
- Add openai and python-dotenv dependencies
- Update README with new usage examples and configuration guide
- Maintain backward compatibility with existing Ollama functionality