A modern Next.js template for building GPT-powered applications with streaming responses. Built with TypeScript, Tailwind CSS, and deployed on Vercel Edge Functions.
- Streaming Responses: Real-time streaming of GPT responses using Server-Sent Events
- Edge Runtime: Optimized performance with Vercel Edge Functions
- TypeScript: Full type safety throughout the application
- Tailwind CSS: Modern, responsive UI with animated backgrounds
- Customizable: Easy configuration through environment variables
- Error Handling: Comprehensive error handling for production use
- Node.js 18.0.0 or higher
- An OpenAI API key (Get one here)
- Clone the repository

  ```bash
  git clone <your-repo-url>
  cd gpt-3-vercel-template
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  ```

  Edit `.env` and add your OpenAI API key:

  ```
  OPENAI_API_KEY="your-api-key-here"
  ```

- Run the development server

  ```bash
  npm run dev
  ```

- Open your browser and navigate to http://localhost:3000
- Click "Use this template" to create a new repository
- Import your repository in Vercel
- Configure environment variables (see below)
- Deploy!
Copy the `.env.example` file to `.env` and configure the following variables:
These are exposed to the browser:
| Variable | Required | Default | Description |
|---|---|---|---|
| `APP_NAME` | No | `"OhMyGPT"` | Your application name |
| `APP_LOGO` | No | - | URL to your app logo |
| `APP_THEME_COLOR` | No | `"#22c55e"` | Primary theme color (hex) |
| `APP_SUMMARY` | No | `"Ask me any thing you want."` | App description |
| `EXAMPLE_INPUT` | No | `"Ask me any thing."` | Placeholder text |
These are only available on the server:
| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | - | Your OpenAI API key |
| `OPENAI_API_BASE_URL` | No | `"https://api.openai.com"` | Custom API endpoint |
| `SYSTEM_MESSAGE` | No | - | System prompt for GPT |
| `MESSAGE_TEMPLATE` | No | - | Template for user messages (use `{{input}}` as placeholder) |
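To make the `MESSAGE_TEMPLATE` behavior concrete, here is a minimal sketch of the `{{input}}` substitution it describes. The helper name `applyTemplate` and the fallback behavior are illustrative assumptions, not this template's actual code:

```typescript
// Illustrative sketch of how MESSAGE_TEMPLATE could be applied on the
// server before a message is sent to the OpenAI API. The function name
// and no-template fallback are assumptions, not the template's code.
function applyTemplate(template: string | undefined, input: string): string {
  // With no template configured, the raw user input is used as-is.
  if (!template) return input;
  // Every {{input}} occurrence is replaced with the user's input.
  return template.split("{{input}}").join(input);
}

const message = applyTemplate("Translate to French: {{input}}", "Good morning");
// message === "Translate to French: Good morning"
```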
```
├── components/          # React components
│   ├── background-gradient.tsx
│   └── card.tsx
├── helpers/             # Utility functions
│   ├── env-utils.ts
│   └── openai-stream.ts
├── pages/               # Next.js pages
│   ├── api/
│   │   └── request.ts   # API endpoint for GPT requests
│   ├── _app.tsx
│   └── index.tsx        # Main page
├── public/              # Static files
├── styles/              # Global styles
├── config-client.ts     # Client configuration
├── config-server.ts     # Server configuration
└── env.d.ts             # Environment types
```
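To illustrate the client/server configuration split, here is a hedged sketch of reading an optional variable with a fallback default. The actual logic lives in `config-client.ts` and `helpers/env-utils.ts` and may differ; the defaults shown match the tables above:

```typescript
// Sketch only: read an environment variable, falling back to a default
// when it is unset or empty. The real helper in helpers/env-utils.ts
// may behave differently.
function envOr(name: string, fallback: string): string {
  const value = process.env[name];
  return value && value.length > 0 ? value : fallback;
}

const appName = envOr("APP_NAME", "OhMyGPT");
const themeColor = envOr("APP_THEME_COLOR", "#22c55e");
```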
Streams GPT responses for the given input.

Request Body:

```json
{
  "input": "Your question here"
}
```

Response: Server-Sent Events stream of the GPT response

Error Codes:

- `400`: Invalid input (missing or empty)
- `500`: Server error (API error, network issues, etc.)
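A consumer can read this endpoint incrementally with `fetch`. The sketch below assumes the response body arrives as plain text chunks (as produced by a typical `OpenAIStream` helper); the function name `streamCompletion` is illustrative, and you should adapt the parsing if your stream format differs:

```typescript
// Sketch of a client for the streaming endpoint. The endpoint path and
// request shape come from this README; treating the body as plain text
// chunks is an assumption about the stream format.
async function streamCompletion(
  input: string,
  onChunk: (text: string) => void,
  baseUrl = ""
): Promise<string> {
  const res = await fetch(`${baseUrl}/api/request`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ input }),
  });
  if (!res.ok || !res.body) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text); // e.g. append to the DOM as each piece arrives
  }
  return full;
}
```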
- `npm run dev` - Start development server
- `npm run build` - Build for production
- `npm run start` - Start production server
- `npm run type-check` - Run TypeScript type checking
- `npm run lint` - Lint code with ESLint
- `npm run format` - Format code with Prettier
- `npm run format:check` - Check code formatting
This project uses:
- TypeScript 5.7+ for type safety
- ESLint for code linting
- Prettier for code formatting
Run `npm run lint` and `npm run format` before committing.
You can create custom HTML widgets in the `public/` directory:

- Create a new `.html` file (e.g., `public/custom.html`)
- Use the `/api/request` endpoint with streaming support
- Access it at `/custom.html`

See `public/test.html` for an example implementation.
Check out the full tutorial and demo on YouTube
MIT