Custom Azure OpenAI Codex LangChain Wrapper

This repository shows how to use an Azure OpenAI Codex model with LangChain when the model's API is not supported by LangChain's standard wrappers.

The example focuses on OpenAI's newer Codex models, which use the responses.create API instead of the usual chat.completions.create endpoint.

The main goal of this repo is to show how to write a clean custom LangChain LLM wrapper so that unsupported models can still be used inside LangChain chains.


Why This Repository Exists

LangChain's standard wrappers assume a specific endpoint and request format. When a model deviates from those assumptions, the default wrappers stop working.

Instead of changing your application logic or waiting for framework support, you can write a small adapter that makes the model behave like a normal LangChain LLM.

This repository demonstrates that approach with a real example.


Project Structure

codex_generator/
│
├── .env
├── requirements.txt
├── main.py
└── README.md

File Overview

  • .env
    Stores environment variables such as API keys and endpoints.
    This file should not be committed to version control.

  • requirements.txt
    Lists the Python libraries required to run the project.

  • main.py
    Contains:

    • A custom LangChain LLM wrapper for Azure OpenAI Codex
    • A simple example showing how the wrapper plugs into a LangChain pipeline
  • README.md
    Project documentation.


Prerequisites

  • Python 3.9 or newer
  • An Azure OpenAI account with a Codex deployment
  • Basic familiarity with LangChain

Setup

  1. Create a virtual environment (recommended)
  2. Install dependencies:

pip install -r requirements.txt

  3. Create a .env file in the project root with the following values:
GENAI_OPENAI_ENDPOINT=your_azure_openai_endpoint
GENAI_OPENAI_API_KEY=your_api_key
GENAI_OPENAI_API_VERSION=your_api_version
DEPLOYMENT_NAME_CODEX=your_codex_deployment_name
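
If python-dotenv is installed (an assumption here; it is a common choice for loading .env files and may already be in requirements.txt), a quick sanity check that the variables load correctly might look like this:

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

load_dotenv()  # reads .env from the current working directory

# Fail fast if any of the variables from step 3 are missing
for name in (
    "GENAI_OPENAI_ENDPOINT",
    "GENAI_OPENAI_API_KEY",
    "GENAI_OPENAI_API_VERSION",
    "DEPLOYMENT_NAME_CODEX",
):
    assert os.environ.get(name), f"Missing {name} in .env"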

How the Wrapper Works

LangChain expects an LLM to expose a _call method that takes a prompt string and returns a text response.

The custom wrapper in main.py:

  • Inherits from langchain_core.language_models.llms.LLM
  • Creates an Azure OpenAI client internally
  • Maps LangChain’s prompt string to client.responses.create
  • Returns plain text back to LangChain

Once this contract is satisfied, the model works with standard LangChain components like prompts, chains, and output parsers.
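
As a rough sketch of what such a wrapper can look like (the class name AzureCodexLLM is illustrative, not necessarily the exact code in main.py):

import os
from typing import Any, List, Optional

from langchain_core.language_models.llms import LLM
from openai import AzureOpenAI


class AzureCodexLLM(LLM):
    """Illustrative wrapper around an Azure OpenAI Codex deployment."""

    deployment_name: str  # name of the Codex deployment in Azure

    @property
    def _llm_type(self) -> str:
        # Identifier LangChain uses for logging and callbacks
        return "azure-openai-codex"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> str:
        # Build the Azure client from the same .env variables listed above
        client = AzureOpenAI(
            azure_endpoint=os.environ["GENAI_OPENAI_ENDPOINT"],
            api_key=os.environ["GENAI_OPENAI_API_KEY"],
            api_version=os.environ["GENAI_OPENAI_API_VERSION"],
        )
        # The Responses API takes the deployment name as `model` and the
        # prompt as `input`; `output_text` is the SDK's convenience
        # accessor for the concatenated text output.
        response = client.responses.create(
            model=self.deployment_name,
            input=prompt,
        )
        return response.output_text

Because LLM is a Pydantic model, deployment_name is declared as a field and supplied at construction time; stop sequences are ignored here to keep the sketch short.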


Running the Example

After setting up the environment variables, run:

python main.py

The script will:

  • Create a custom Codex LLM instance
  • Build a simple LangChain pipeline
  • Generate Python code based on a text request

This is meant as a reference example, not a production application.
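
For reference, a pipeline of this shape, using the illustrative AzureCodexLLM sketch from the previous section, might look like:

import os

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import PromptTemplate

# Assumes the AzureCodexLLM sketch above and a loaded .env
llm = AzureCodexLLM(deployment_name=os.environ["DEPLOYMENT_NAME_CODEX"])
prompt = PromptTemplate.from_template("Write a Python function that {task}.")
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"task": "reverses a string"}))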


Extending This Pattern

You can reuse this approach for other models and APIs by:

  • Identifying the API call that returns text
  • Mapping LangChain’s prompt input to that API
  • Returning a string from _call (or _acall for async use)

This lets you integrate new models without changing the rest of your LangChain code.
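
For instance, an async variant could subclass the earlier sketch and implement _acall with the SDK's AsyncAzureOpenAI client (again illustrative, not the repo's exact code):

import os
from typing import Any, List, Optional

from openai import AsyncAzureOpenAI


class AsyncAzureCodexLLM(AzureCodexLLM):  # extends the sketch above
    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        **kwargs: Any,
    ) -> str:
        client = AsyncAzureOpenAI(
            azure_endpoint=os.environ["GENAI_OPENAI_ENDPOINT"],
            api_key=os.environ["GENAI_OPENAI_API_KEY"],
            api_version=os.environ["GENAI_OPENAI_API_VERSION"],
        )
        # Same Responses API call as the sync path, just awaited
        response = await client.responses.create(
            model=self.deployment_name,
            input=prompt,
        )
        return response.output_text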


Related Article

This repository accompanies a detailed article that explains the reasoning and design behind the wrapper step by step.

Link: Medium Article


Notes

  • This example uses a synchronous wrapper. Async wrappers follow the same idea but implement _acall.
  • Error handling is kept simple for clarity.
  • This repo is meant for learning and experimentation.

License

Use this code freely for learning or as a starting point for your own projects.
