This repo demonstrates the use of the GPT-5-Pro model in Azure OpenAI via the Responses API.
The implementation highlights the Azure OpenAI SDK's shift to a more unified experience with the standard OpenAI Python library. This includes simplified client initialisation and the transition to version-free API endpoints, which removes the overhead of manual API version management.
> **Note:** Specifics of the Azure OpenAI models API lifecycle can be found on this AI Foundry documentation page.
- Part 1: Configuring Solution Environment
- Part 2: Simplified Client Initialisation
- Part 3: Executing a Request with Responses API
## Part 1: Configuring Solution Environment

To run the provided Jupyter notebook, you'll need to set up your Azure OpenAI resource and install the required Python packages.
Ensure you have an Azure OpenAI resource with a deployment of GPT-5-Pro.
The notebook leverages Microsoft Entra ID (formerly Azure AD) for secure, keyless authentication. This is managed via the DefaultAzureCredential.
Configure the following environment variables. Note the use of the new /openai/v1/ base path suffix which supports versionless routing:
| Environment Variable | Description |
|---|---|
| MY_AOAI_API_BASE | The endpoint URL (e.g., https://AOAI_RESOURCE.openai.azure.com/openai/v1/). |
| MY_AOAI_API_DEPLOYMENT | The name of your GPT-5-Pro model deployment. |
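
Once these variables are set, they can be read inside the notebook with the standard library. A minimal sketch (the variable names match the table above):

```python
import os

# Read the Azure OpenAI endpoint and deployment name from the environment.
# Both return None if the variable has not been configured.
MY_AOAI_API_BASE = os.getenv("MY_AOAI_API_BASE")
MY_AOAI_API_DEPLOYMENT = os.getenv("MY_AOAI_API_DEPLOYMENT")
```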
```bash
pip install openai azure-identity
```

## Part 2: Simplified Client Initialisation

In line with the latest Azure OpenAI API evolution, this notebook demonstrates several simplifications:
- **Standard OpenAI Class**: We use the base `OpenAI` class rather than the Azure-specific subclass, making the code more portable.
- **No Explicit API Version**: By pointing to the `/openai/v1/` path in the `base_url`, the system automatically routes to the latest stable logic, eliminating the need to hardcode a date-based `api_version` (e.g., `2025-04-01-preview`).
- **Integrated Token Provider**: Authentication is handled seamlessly by passing the bearer token provider directly into the `api_key` parameter.
```python
from openai import OpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Simplified initialisation
client = OpenAI(
    base_url = MY_AOAI_API_BASE,
    api_key = get_bearer_token_provider(
        DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default"
    )
)
```

## Part 3: Executing a Request with Responses API

The Responses API provides a flexible way to interact with advanced models like GPT-5-Pro. The notebook uses the `client.responses.create` method, designed for high-performance interactions with conversation logic managed by the backend.
The following snippet from the notebook shows how to send a system instruction and a user query to the model.
```python
response = client.responses.create(
    model = MY_AOAI_API_DEPLOYMENT,
    input = [
        {
            "role": "system",
            "content": "You are a helpful AI assistant. Please limit your responses to 3 sentences.",
        },
        {
            "role": "user",
            "content": "How much silver would it take to coat the Eiffel Tower in a 1mm layer?"
        }
    ]
)

# Printing the structured output
print(response.output[1].content[0].text)
```

If set up successfully, you should get a response similar to this:
```
Using the Eiffel Tower’s paintable surface area of about 250,000 m², a 1 mm layer would require roughly 250 m³ of silver. With silver’s density (~10,490 kg/m³), that’s about 2.6 million kilograms (≈2,600 metric tons). If the actual surface area is between 200,000 and 300,000 m², the required mass would range roughly 2.1–3.1 million kilograms.
```
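
The `output[1]` index in the print statement relies on the first output item typically being the model's reasoning entry, with the final message second. If you prefer not to hardcode that index, a small helper (a sketch, not part of the notebook) can gather every text part regardless of position:

```python
# Defensive sketch: collect all text parts from a Responses API result,
# assuming only the shape shown above (output items holding content parts
# that may carry a `text` attribute).
def extract_text(response):
    chunks = []
    for item in getattr(response, "output", []):
        for part in getattr(item, "content", None) or []:
            text = getattr(part, "text", None)
            if text:
                chunks.append(text)
    return "\n".join(chunks)
```

This avoids an `IndexError` when the number or order of output items differs, e.g. when no reasoning item precedes the message.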