# OpenAPI Bridge Server
This module contains the HTTP bridge server. It exposes a minimal OpenAI-style `POST /v1/chat/completions` endpoint backed by the SecureCoder engine.

## Run with Docker

### Configuration
The server reads its configuration from environment variables:
- OpenRouter mode:
  - `OPENROUTER_KEY` — your OpenRouter API key
  - `MODEL` — model ID (e.g. `openai/gpt-oss-20b`)
- Ollama mode (used when `OPENROUTER_KEY` is not set):
  - `MODEL` — Ollama model name (e.g. `gpt-oss:20b`)
  - `OLLAMA_BASE_URL` — base URL of the Ollama server (default: port `11434` on the host)
  - `OLLAMA_KEEP_ALIVE` — keep-alive duration (default: `5m`)

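If you run the container often, it can be convenient to keep these settings in an env file and pass it to `docker run` with `--env-file`. The snippet below is a minimal sketch for Ollama mode; the file name `bridge.env` and the values are only examples, so adjust them to your setup.
```
# bridge.env: example settings for Ollama mode (leave OPENROUTER_KEY unset)
MODEL=llama3.1:8b
OLLAMA_BASE_URL=http://host.docker.internal:11434
OLLAMA_KEEP_ALIVE=5m
```
Start the container with `docker run --rm -p 8080:8080 --env-file bridge.env openapi-bridge:latest` (on Linux, also add the `--add-host` flag shown below).
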

### Build and run
Make sure you have Docker installed and that you are in the project root directory, then build the image:
```
docker build -f app/openapi-bridge/Dockerfile -t openapi-bridge:latest .
```

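If you want to confirm the build succeeded, listing images by the repository name used in `-t` above should show the freshly built tag:
```
docker images openapi-bridge
```
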
Run with Ollama on the host (macOS/Windows):
```
docker run --rm -p 8080:8080 \
  -e MODEL="llama3.1:8b" \
  openapi-bridge:latest
```

Run with Ollama on the host (Linux):
```
docker run --rm -p 8080:8080 \
  --add-host=host.docker.internal:host-gateway \
  -e MODEL="llama3.1:8b" \
  openapi-bridge:latest
```

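Both Ollama variants assume that Ollama is already running on the host and that the model named in `MODEL` has been pulled. A quick way to check, using the example model above:
```
ollama pull llama3.1:8b              # download the model if it is not present yet
curl http://localhost:11434/api/tags # the response should list the model
```
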
Run using OpenRouter instead of Ollama (OpenRouter mode is selected whenever `OPENROUTER_KEY` is set):
```
docker run --rm -p 8080:8080 \
  -e OPENROUTER_KEY=... \
  -e MODEL=openai/gpt-oss-20b \
  openapi-bridge:latest
```

## Endpoint
- `POST /v1/chat/completions` — accepts a minimal OpenAI-style request and returns a single choice with the SecureCoder engine’s response.

Example request (from the host):
```
curl -X POST "http://localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \