This repository shows how to use an Azure OpenAI Codex model with LangChain when the model's API is not supported by LangChain's standard wrappers.
The example focuses on OpenAI's latest Codex models, which use the `responses.create` API instead of the usual `chat.completions.create` endpoint.
The main goal of this repo is to show how to write a clean custom LangChain LLM wrapper so that unsupported models can still be used inside LangChain chains.
LangChain wrappers are built around certain API assumptions. When a model uses a different endpoint or request format, the default wrappers stop working.
Instead of changing your application logic or waiting for framework support, you can write a small adapter that makes the model behave like a normal LangChain LLM.
This repository demonstrates that approach with a real example.
codex_generator/
│
├── .env
├── requirements.txt
├── main.py
└── README.md
- `.env`: Stores environment variables such as API keys and endpoints. This file should not be committed to version control.
- `requirements.txt`: Lists the Python libraries required to run the project.
- `main.py`: Contains:
  - A custom LangChain LLM wrapper for Azure OpenAI Codex
  - A simple example showing how the wrapper plugs into a LangChain pipeline
- `README.md`: Project documentation.
To run the example you will need:
- Python 3.9 or newer
- An Azure OpenAI account with a Codex deployment
- Basic familiarity with LangChain
Set up the project as follows:
- Create a virtual environment (recommended)
- Install dependencies: `pip install -r requirements.txt`
- Create a `.env` file in the project root with the following values:
GENAI_OPENAI_ENDPOINT=your_azure_openai_endpoint
GENAI_OPENAI_API_KEY=your_api_key
GENAI_OPENAI_API_VERSION=your_api_version
DEPLOYMENT_NAME_CODEX=your_codex_deployment_name
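With these values in place, `main.py` can pick them up at startup. A minimal sketch of that step, assuming the project uses `python-dotenv` (an assumption; adjust to however `main.py` actually reads its configuration):

```python
import os

from dotenv import load_dotenv

# Read the .env file from the project root into the process environment.
load_dotenv()

AZURE_ENDPOINT = os.getenv("GENAI_OPENAI_ENDPOINT")
AZURE_API_KEY = os.getenv("GENAI_OPENAI_API_KEY")
AZURE_API_VERSION = os.getenv("GENAI_OPENAI_API_VERSION")
CODEX_DEPLOYMENT = os.getenv("DEPLOYMENT_NAME_CODEX")
```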
LangChain expects an LLM to expose a `_call` method that takes a prompt string and returns a text response.
The custom wrapper in `main.py`:
- Inherits from `langchain_core.language_models.llms.LLM`
- Creates an Azure OpenAI client internally
- Maps LangChain's prompt string to `client.responses.create`
- Returns plain text back to LangChain
Once this contract is satisfied, the model works with standard LangChain components like prompts, chains, and output parsers.
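A minimal sketch of such a wrapper is shown below. It assumes the variables from the `.env` file above are already loaded into the environment and that the `openai` package's `AzureOpenAI` client and its Responses API (with the `output_text` convenience field) are used; the class name and details are illustrative, not necessarily identical to `main.py`:

```python
import os
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM
from openai import AzureOpenAI


class AzureCodexLLM(LLM):
    """Custom LangChain LLM that calls an Azure OpenAI Codex deployment
    through the Responses API instead of chat completions."""

    # Name of the Azure deployment to call; read from the .env values above.
    deployment: str = os.getenv("DEPLOYMENT_NAME_CODEX", "")

    @property
    def _llm_type(self) -> str:
        return "azure-openai-codex"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Any = None,
        **kwargs: Any,
    ) -> str:
        # A client is created per call to keep the sketch short; caching it
        # on the instance would be the obvious next step.
        client = AzureOpenAI(
            azure_endpoint=os.getenv("GENAI_OPENAI_ENDPOINT"),
            api_key=os.getenv("GENAI_OPENAI_API_KEY"),
            api_version=os.getenv("GENAI_OPENAI_API_VERSION"),
        )
        # Map LangChain's prompt string onto responses.create and hand the
        # plain text back to LangChain.
        response = client.responses.create(model=self.deployment, input=prompt)
        return response.output_text
```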
After setting up the environment variables, run `python main.py`. The script will:
- Create a custom Codex LLM instance
- Build a simple LangChain pipeline
- Generate Python code based on a text request
This is meant as a reference example, not a production application.
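As a rough illustration of that flow, reusing the `AzureCodexLLM` sketch from above (the prompt wording and task are made up for illustration):

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# 1. Create a custom Codex LLM instance.
llm = AzureCodexLLM()

# 2. Build a simple LangChain pipeline: prompt -> custom LLM -> plain string.
prompt = PromptTemplate.from_template(
    "Write a Python function that {task}. Return only the code."
)
chain = prompt | llm | StrOutputParser()

# 3. Generate Python code based on a text request.
print(chain.invoke({"task": "reverses a string"}))
```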
You can reuse this approach for other models and APIs by:
- Identifying the API call that returns text
- Mapping LangChain’s prompt input to that API
- Returning a string from `_call` (or `_acall` for async use)
This lets you integrate new models without changing the rest of your LangChain code.
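If you need async support, the same mapping simply moves into `_acall`. A rough sketch building on the `AzureCodexLLM` class above and the async variant of the Azure OpenAI client (again, the names and wiring are illustrative):

```python
import os
from typing import Any, List, Optional

from openai import AsyncAzureOpenAI


class AsyncAzureCodexLLM(AzureCodexLLM):
    """Same wrapper, with an async path for use with chain.ainvoke()."""

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Any = None,
        **kwargs: Any,
    ) -> str:
        # Identical contract to _call, only awaiting the async client.
        client = AsyncAzureOpenAI(
            azure_endpoint=os.getenv("GENAI_OPENAI_ENDPOINT"),
            api_key=os.getenv("GENAI_OPENAI_API_KEY"),
            api_version=os.getenv("GENAI_OPENAI_API_VERSION"),
        )
        response = await client.responses.create(
            model=self.deployment, input=prompt
        )
        return response.output_text
```

With `_acall` defined, the same chain can be awaited via `chain.ainvoke(...)`.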
This repository accompanies a detailed article that explains the reasoning and design behind the wrapper step by step.
Link: Medium Article
- This example uses a synchronous wrapper. Async wrappers follow the same idea but implement `_acall`.
- Error handling is kept simple for clarity.
- This repo is meant for learning and experimentation.
Use this code freely for learning or as a starting point for your own projects.