Commit 4d155a2

committed
feat: add OpenAPI schema to custom rest backend
Signed-off-by: popsiclexu <zhenxuexu@gmail.com>
1 parent a542b40 commit 4d155a2

File tree

1 file changed (+94, -3 lines)

docs/tutorials/custom-rest-backend.md

Lines changed: 94 additions & 3 deletions
@@ -2,6 +2,97 @@
 This tutorial guides you through the process of integrating a custom backend with k8sgpt using a RESTful API. This setup is particularly useful when you want to integrate Retrieval-Augmented Generation (RAG) or an AI agent with k8sgpt.
 In this tutorial, we will store a CNCF Q&A dataset for knowledge retrieval, build a simple Retrieval-Augmented Generation (RAG) application, and integrate it with k8sgpt.
 
+## API Specification
+To ensure k8sgpt can interact with your custom backend, implement the following API endpoint according to this OpenAPI schema:
+
+### OpenAPI Specification
+```yaml
+openapi: 3.0.0
+info:
+  title: Custom REST Backend API
+  version: 1.0.0
+paths:
+  /v1/completions:
+    post:
+      summary: Generate a text-based response from the custom backend
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              type: object
+              properties:
+                model:
+                  type: string
+                  description: The name of the model to use.
+                prompt:
+                  type: string
+                  description: The textual prompt to send to the model.
+                options:
+                  type: object
+                  additionalProperties: true
+                  description: Model-specific options, such as temperature.
+              required:
+                - model
+                - prompt
+      responses:
+        "200":
+          description: Successful response
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  model:
+                    type: string
+                    description: The model name that generated the response.
+                  created_at:
+                    type: string
+                    format: date-time
+                    description: The timestamp of the response.
+                  response:
+                    type: string
+                    description: The textual response itself.
+                required:
+                  - model
+                  - created_at
+                  - response
+        "400":
+          description: Bad Request
+        "500":
+          description: Internal Server Error
+```
+
+### Example Interaction
+
+#### Request
+```json
+{
+  "model": "gpt-4",
+  "prompt": "Explain the process of photosynthesis.",
+  "options": {
+    "temperature": 0.7,
+    "max_tokens": 150
+  }
+}
+```
+
+#### Response
+```json
+{
+  "model": "gpt-4",
+  "created_at": "2025-01-14T10:00:00Z",
+  "response": "Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize foods with the help of chlorophyll."
+}
+```
+
+### Implementation Notes
+
+- **Endpoint Configuration**: Ensure the `/v1/completions` endpoint is reachable and adheres to the schema above.
+
+- **Error Handling**: Implement robust error handling to manage invalid requests and processing failures.
+
+By following this specification, your custom REST service will integrate seamlessly with k8sgpt, enabling powerful and customizable AI-driven functionality.
 ## Prerequisites
 
 - [K8sGPT CLI](https://github.com/k8sgpt-ai/k8sgpt.git)
@@ -54,7 +145,7 @@ var (
 func main() {
 	server := gin.Default()
 	server.POST("/completion", func(c *gin.Context) {
-		var req K8sRagRequest
+		var req CustomRestRequest
 		if err := c.ShouldBindJSON(&req); err != nil {
 			c.JSON(http.StatusBadRequest, gin.H{"error": err.Error()})
 			return
@@ -64,7 +155,7 @@ func main() {
 			c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
 			return
 		}
-		resp := K8sRagResponse{
+		resp := CustomRestResponse{
 			Model:     req.Model,
 			CreatedAt: time.Now(),
 			Response:  content,
@@ -190,7 +281,7 @@ func rag(serverURL string, req CustomRestRequest) (string, error) {
 
 	// generate content by LLM
 	ragPromptTemplate := `Base on context: %s;
-	Please generate a response to the following query and response doen't include context, if context is empty, generate a response using the model's knowledge and capabilities: \n %s`
+	Please generate a response to the following query; the response shouldn't include the context. If the context is empty, generate a response using the model's knowledge and capabilities: \n %s`
 	prompt := fmt.Sprintf(ragPromptTemplate, strings.Join(x, "; "), req.Prompt)
 	ctx := context.Background()
 	completion, err := llms.GenerateFromSinglePrompt(ctx, llm, prompt)
