docs/tutorials/custom-rest-backend.md (94 additions, 3 deletions)
This tutorial guides you through the process of integrating a custom backend with k8sgpt using a RESTful API. This setup is particularly useful when you want to integrate Retrieval-Augmented Generation (RAG) or an AI agent with k8sgpt.

In this tutorial, we will store a CNCF Q&A dataset for knowledge retrieval, create a simple Retrieval-Augmented Generation (RAG) application, and integrate it with k8sgpt.
## API Specification

To ensure k8sgpt can interact with your custom backend, implement the following API endpoint using the OpenAPI schema:

### OpenAPI Specification
```yaml
openapi: 3.0.0
info:
  title: Custom REST Backend API
  version: 1.0.0
paths:
  /v1/completions:
    post:
      summary: Generate a text-based response from the custom backend
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                model:
                  type: string
                  description: The name of the model to use.
                prompt:
                  type: string
                  description: The textual prompt to send to the model.
                options:
                  type: object
                  additionalProperties:
                    type: string
                  description: Model-specific options, such as temperature.
              required:
                - model
                - prompt
      responses:
        "200":
          description: Successful response
          content:
            application/json:
              schema:
                type: object
                properties:
                  model:
                    type: string
                    description: The model name that generated the response.
                  created_at:
                    type: string
                    format: date-time
                    description: The timestamp of the response.
                  response:
                    type: string
                    description: The textual response itself.
                required:
                  - model
                  - created_at
                  - response
        "400":
          description: Bad Request
        "500":
          description: Internal Server Error
```
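To make the contract above concrete, here is a minimal sketch of a service that implements the endpoint. FastAPI is only one possible choice, and the `generate_answer` helper, module name, and port are illustrative placeholders for your own RAG or agent logic rather than anything k8sgpt prescribes.

```python
# Minimal sketch of the /v1/completions contract (FastAPI is an assumed choice;
# any framework that returns the same JSON shapes will work with k8sgpt).
from datetime import datetime, timezone
from typing import Dict, Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Custom REST Backend API")


class CompletionRequest(BaseModel):
    model: str
    prompt: str
    options: Optional[Dict[str, str]] = None  # e.g. {"temperature": "0.7"}


class CompletionResponse(BaseModel):
    model: str
    created_at: datetime  # serialized as an RFC 3339 date-time string
    response: str


def generate_answer(prompt: str, options: Dict[str, str]) -> str:
    """Placeholder for your RAG pipeline or model call."""
    return f"Echo: {prompt}"


@app.post("/v1/completions", response_model=CompletionResponse)
def completions(req: CompletionRequest) -> CompletionResponse:
    answer = generate_answer(req.prompt, req.options or {})
    return CompletionResponse(
        model=req.model,
        created_at=datetime.now(timezone.utc),
        response=answer,
    )
```

If the module is saved as `main.py`, `uvicorn main:app --port 8080` starts the service; the port is arbitrary as long as k8sgpt is pointed at the matching base URL.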
### Example Interaction

#### Request

```json
{
  "model": "gpt-4",
  "prompt": "Explain the process of photosynthesis.",
  "options": {
    "temperature": "0.7",
    "max_tokens": "150"
  }
}
```
#### Response

```json
{
  "model": "gpt-4",
  "created_at": "2025-01-14T10:00:00Z",
  "response": "Photosynthesis is the process by which green plants and some other organisms use sunlight to synthesize foods with the help of chlorophyll."
}
```
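Before wiring the service into k8sgpt, it can be handy to reproduce the interaction above directly. The snippet below is a sketch that assumes the service is listening on `http://localhost:8080`; adjust the URL to your deployment.

```python
# Smoke test for the /v1/completions endpoint (the base URL is an assumption).
import requests

payload = {
    "model": "gpt-4",
    "prompt": "Explain the process of photosynthesis.",
    "options": {"temperature": "0.7", "max_tokens": "150"},
}

resp = requests.post("http://localhost:8080/v1/completions", json=payload, timeout=30)
resp.raise_for_status()

body = resp.json()
print(body["model"], body["created_at"])
print(body["response"])
```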
### Implementation Notes

- **Endpoint Configuration**: Ensure the `/v1/completions` endpoint is reachable and adheres to the provided schema.
- **Error Handling**: Implement robust error handling to manage invalid requests or processing failures (see the sketch below).
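For the error-handling note above, one approach is to map bad requests to `400` and pipeline failures to `500`, matching the responses declared in the schema. The sketch below is a variant of the handler from the earlier server example and reuses `app`, `CompletionRequest`, `CompletionResponse`, and `generate_answer` from that sketch; it is one possible pattern, not the only one.

```python
# Sketch: surfacing failures as the 400 / 500 responses declared in the spec.
from datetime import datetime, timezone

from fastapi import HTTPException


@app.post("/v1/completions", response_model=CompletionResponse)
def completions(req: CompletionRequest) -> CompletionResponse:
    if not req.prompt.strip():
        # Reject requests the schema technically allows but the backend cannot serve.
        raise HTTPException(status_code=400, detail="prompt must not be empty")
    try:
        answer = generate_answer(req.prompt, req.options or {})
    except Exception as exc:
        # Any failure in the RAG pipeline becomes an Internal Server Error.
        raise HTTPException(status_code=500, detail=str(exc)) from exc
    return CompletionResponse(
        model=req.model,
        created_at=datetime.now(timezone.utc),
        response=answer,
    )
```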
By following this specification, your custom REST service will seamlessly integrate with k8sgpt, enabling powerful and customizable AI-driven functionalities.
Please generate a response to the following query and response doesn't include context, if context is empty, generate a response using the model's knowledge and capabilities: \n %s`