
Commit 480cafa

feat: replace separate messages with bulk payload (#9)

* replace line-by-line message inputs with bulk payload
* re-architect from multiple separate inputs to a single payload via in-line or file input
* update example and parameters
* wording
* update security email address
* formatting
* reorder wording

Signed-off-by: Rishav Dhar <19497993+rdhar@users.noreply.github.com>
1 parent 325849f commit 480cafa

File tree

4 files changed: +111 −51 lines changed

.github/workflows/ci.yml

Lines changed: 21 additions & 7 deletions
@@ -29,13 +29,27 @@ jobs:
         id: prompt
         uses: ./
         with:
-          messages: '[{"role": "user", "content": "What is the capital of France?"}]'
-          model: openai/o4-mini
-          org: ${{ github.repository_owner}}
-          max-tokens: 100
+          payload: |
+            model: openai/gpt-4.1-mini
+            messages:
+              - role: system
+                content: You are a helpful assistant
+              - role: user
+                content: What is the capital of France
+            max_tokens: 100
+            temperature: 0.9
+            top_p: 0.9
 
       - name: Echo outputs
-        continue-on-error: true
         run: |
-          echo "response: ${{ steps.prompt.outputs.response }}"
-          echo "response-raw: ${{ steps.prompt.outputs.response-raw }}"
+          echo "response:"
+          echo "${{ steps.prompt.outputs.response }}"
+
+          echo "response-file:"
+          echo "${{ steps.prompt.outputs.response-file }}"
+
+          echo "response-file contents:"
+          cat "${{ steps.prompt.outputs.response-file }}" | jq
+
+          echo "payload:"
+          echo "${{ steps.prompt.outputs.payload }}"

README.md

Lines changed: 25 additions & 14 deletions
@@ -12,6 +12,8 @@
 
 ## Usage Examples
 
+[Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task "Comparison of AI models for GitHub.") to choose the best one for your use-case.
+
 ```yml
 on:
   issues:
@@ -33,15 +35,18 @@ jobs:
           payload: |
             model: openai/gpt-4.1-mini
             messages:
+              - role: system
+                content: You are a helpful assistant running within GitHub CI.
               - role: user
                 content: Concisely summarize this GitHub issue titled ${{ github.event.issue.title }}: ${{ github.event.issue.body }}
+            max_tokens: 100
             temperature: 0.9
             top_p: 0.9
 
       - name: Comment summary
         run: gh issue comment $NUMBER --body "$SUMMARY"
         env:
-          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          GH_TOKEN: ${{ github.token }}
           NUMBER: ${{ github.event.issue.number }}
           SUMMARY: ${{ steps.prompt.outputs.response }}
 ```
@@ -50,23 +55,29 @@ jobs:
 
 ## Inputs
 
-Only `messages` and `model` are required inputs. [Compare available AI models](https://docs.github.com/en/copilot/using-github-copilot/ai-models/choosing-the-right-ai-model-for-your-task "Comparison of AI models for GitHub.") to choose the best one for your use-case.
+Either `payload` or `payload-file` is required.
+
+| Type   | Name                 | Description                                                                                                 |
+| ------ | -------------------- | ----------------------------------------------------------------------------------------------------------- |
+| Data   | `payload`            | Body parameters of the inference request in YAML format.</br>Example: `model…`                               |
+| Data   | `payload-file`       | Path to a file containing the body parameters of the inference request.</br>Example: `./payload.{json,yml}`  |
+| Config | `show-payload`       | Whether to show the payload in the logs.</br>Default: `true`                                                 |
+| Config | `show-response`      | Whether to show the response content in the logs.</br>Default: `true`                                        |
+| Admin  | `github-api-version` | GitHub API version.</br>Default: `2022-11-28`                                                                |
+| Admin  | `github-token`       | GitHub token.</br>Default: `github.token`                                                                    |
+| Admin  | `org`                | Organization for request attribution.</br>Example: `github.repository_owner`                                 |
 
-| Name                 | Description                                                                                          |
-| -------------------- | ---------------------------------------------------------------------------------------------------- |
-| `github-api-version` | GitHub API version.</br>Default: `2022-11-28`                                                        |
-| `github-token`       | GitHub token.</br>Default: `github.token`                                                            |
-| `max-tokens`         | Maximum number of tokens to generate in the completion.</br>Example: `1000`                          |
-| `messages`           | Messages to send to the model in JSON format.</br>Example: `[{"role": "user", "content": "Hello!"}]` |
-| `model`              | Model to use for inference.</br>Example: `openai/o4-mini`                                            |
-| `org`                | Organization to which the request should be attributed.</br>Example: `github.repository_owner`       |
+</br>
 
 ## Outputs
 
-| Name           | Description                                  |
-| -------------- | -------------------------------------------- |
-| `response`     | Response content from the inference request. |
-| `response-raw` | Raw, complete response in JSON format.       |
+| Name            | Description                                              |
+| --------------- | -------------------------------------------------------- |
+| `response`      | Response content from the inference request.             |
+| `response-file` | File path containing the complete, raw response.         |
+| `payload`       | Body parameters of the inference request in JSON format. |
+
+</br>
 
 ## Security
 
SECURITY.md

Lines changed: 1 addition & 1 deletion

@@ -17,4 +17,4 @@ Integrating security in your CI/CD pipeline is critical to practicing DevSecOps.
 
 ## Reporting a Vulnerability
 
-You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by email to <contact@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/ai-inference-request/security/advisories/new "Create a new security advisory.").
+You must never report security related issues, vulnerabilities or bugs including sensitive information to the issue tracker, or elsewhere in public. Instead, sensitive bugs must be sent by email to <security@OP5.dev> or reported via [Security Advisory](https://github.com/op5dev/ai-inference-request/security/advisories/new "Create a new security advisory.").

action.yml

Lines changed: 64 additions & 29 deletions
@@ -1,32 +1,36 @@
 ---
 name: AI Inference Request via GitHub Action
 author: Rishav Dhar (https://rdhar.dev)
-description: AI inference request GitHub Models with this GitHub Action.
+description: AI inference request GitHub Models via this GitHub Action.
 
 inputs:
   github-api-version:
     default: "2022-11-28"
-    description: "GitHub API version (e.g., `2022-11-28`)."
+    description: GitHub API version (e.g., `2022-11-28`)
     required: false
   github-token:
-    default: ${{ github.token }}
-    description: "GitHub token (e.g., `github.token`)."
+    default: "${{ github.token }}"
+    description: GitHub token (e.g., `github.token`)
     required: false
-  max-tokens:
+  org:
     default: ""
-    description: "Maximum number of tokens to generate in the completion (e.g., `1000`)."
+    description: Organization for request attribution (e.g., `github.repository_owner`)
     required: false
-  messages:
-    default: ""
-    description: 'Messages to send to the model in JSON format (e.g., `[{"role": "user", "content": "Hello!"}]`).'
-    required: true
-  model:
+  payload:
     default: ""
-    description: "Model to use for inference (e.g., `openai/o4-mini`)."
-    required: true
-  org:
+    description: Body parameters of the inference request in YAML format (e.g., `model…`)
+    required: false
+  payload-file:
     default: ""
-    description: "Organization to which the request should be attributed (e.g., `github.repository_owner`)."
+    description: Path to a file containing the body parameters of the inference request (e.g., `./payload.{json,yml}`)
+    required: false
+  show-payload:
+    default: "true"
+    description: Whether to show the payload in the logs (e.g., `true`)
+    required: false
+  show-response:
+    default: "true"
+    description: Whether to show the response content in the logs (e.g., `true`)
     required: false
 
 runs:
@@ -38,31 +42,62 @@ runs:
         API_VERSION: ${{ inputs.github-api-version }}
         GH_TOKEN: ${{ inputs.github-token }}
         ORG: ${{ inputs.org != '' && format('orgs/{0}/', inputs.org) || '' }}
+        PAYLOAD: ${{ inputs.payload }}
+        PAYLOAD_FILE: ${{ inputs.payload-file }}
+        SHOW_PAYLOAD: ${{ inputs.show-payload }}
+        SHOW_RESPONSE: ${{ inputs.show-response }}
       run: |
-        GH_HOST=$(echo $GITHUB_SERVER_URL | sed 's/.*:\/\///')
+        # AI inference request
+        if [[ -n "$PAYLOAD_FILE" ]]; then
+          # Check if the file exists
+          if [[ ! -f "$PAYLOAD_FILE" ]]; then
+            echo "Error: Payload file '$PAYLOAD_FILE' does not exist." >&2
+            exit 1
+          fi
+          # Determine whether the format is JSON (starts with '{') or YAML (default)
+          first_char=$(sed -n 's/^[[:space:]]*\(.\).*/\1/p; q' "$PAYLOAD_FILE")
+          if [[ "$first_char" == '{' ]]; then
+            body=$(cat "$PAYLOAD_FILE")
+          else
+            body=$(yq --output-format json "$PAYLOAD_FILE")
+          fi
+        else
+          body=$(echo "$PAYLOAD" | yq --output-format json)
+        fi
+        echo "payload_json=$(echo $body)" >> $GITHUB_OUTPUT
+        if [[ "${SHOW_PAYLOAD,,}" == "true" ]]; then echo "$body"; fi
+
+        # Create a temporary file to store the response
+        temp_file=$(mktemp)
 
-        response_raw=$(curl --request POST --location https://models.github.ai/${ORG}inference/chat/completions \
+        # Send the AI inference request via GitHub API
+        curl \
+          --request POST \
+          --no-progress-meter \
+          --location "https://models.github.ai/${ORG}inference/chat/completions" \
           --header "Accept: application/vnd.github+json" \
           --header "Authorization: Bearer $GH_TOKEN" \
           --header "Content-Type: application/json" \
           --header "X-GitHub-Api-Version: $API_VERSION" \
-          --data '{
-            "messages": ${{ inputs.messages }},
-            "model": "${{ inputs.model }}"
-          }'
-        )
+          --data "$(echo $body | jq --compact-output --exit-status)" \
+          &> "$temp_file"
 
-        echo $response_raw
-        echo "response_raw=$response_raw" >> $GITHUB_OUTPUT
-        echo "response=$response_raw | jq --raw-output '.choices[0].message.content'" >> $GITHUB_OUTPUT
+        # In addition to the temporary file containing the full response,
+        # return the first 2**18 bytes of the response content (GitHub's limit)
+        echo "response_file=$temp_file" >> $GITHUB_OUTPUT
+        echo "response=$(cat $temp_file | jq --raw-output '.choices[0].message.content' | head --bytes 262144 --silent)" >> $GITHUB_OUTPUT
+        if [[ "${SHOW_RESPONSE,,}" == "true" ]]; then cat "$temp_file" | jq --raw-output '.choices[0].message.content' || true; fi
 
 outputs:
+  payload:
+    description: Body parameters of the inference request in JSON format.
+    value: ${{ steps.request.outputs.payload_json }}
   response:
-    description: "Response content from the inference request."
+    description: Response content from the inference request.
     value: ${{ steps.request.outputs.response }}
-  response-raw:
-    description: "Raw, complete response in JSON format."
-    value: ${{ steps.request.outputs.response_raw }}
+  response-file:
+    description: File path containing the complete, raw response in JSON format.
+    value: ${{ steps.request.outputs.response_file }}
 
 branding:
   color: white
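The payload-handling step above sniffs the file's first non-whitespace character to decide whether to treat the payload as JSON (leading `{`) or YAML (anything else). A standalone sketch of that detection, using the same `sed` expression as the action; the `detect_format` helper name is illustrative, not part of the action:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative helper: peek at the first non-whitespace character of a file
# to decide whether it should be parsed as JSON or YAML.
detect_format() {
  local first_char
  # Print the first non-whitespace character of the first line, then quit.
  first_char=$(sed -n 's/^[[:space:]]*\(.\).*/\1/p; q' "$1")
  if [[ "$first_char" == '{' ]]; then
    echo "json"
  else
    echo "yaml"
  fi
}

# A JSON payload (leading whitespace is ignored by the detection).
json_file=$(mktemp)
printf '  {"model": "openai/gpt-4.1-mini"}\n' > "$json_file"
detect_format "$json_file"   # prints: json

# A YAML payload falls through to the default branch.
yaml_file=$(mktemp)
printf 'model: openai/gpt-4.1-mini\n' > "$yaml_file"
detect_format "$yaml_file"   # prints: yaml
```

This keeps the action dependency-light: only files detected as YAML are routed through `yq` for conversion, while JSON files are passed to `jq` as-is.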

0 commit comments
