51 changes: 51 additions & 0 deletions .github/workflows/time.yml
@@ -0,0 +1,51 @@
name: Time (CLI utility) Example

on:
push:
branches:
- master
pull_request:
branches:
- master

permissions:
contents: write
deployments: write

jobs:
benchmark:
name: Run Some silly benchmark under `time`
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3

🧹 Nitpick | 🔵 Trivial

Consider updating to actions/checkout@v4.

v3 is still functional but v4 is the current recommended version with Node.js 20 support and performance improvements.

🔧 Proposed fix
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@v4
📝 Committable suggestion


Suggested change
- uses: actions/checkout@v3
- uses: actions/checkout@v4
🤖 Prompt for AI Agents
In @.github/workflows/time.yml at line 20, Update the GitHub Actions checkout
step to use the newer actions/checkout@v4 instead of actions/checkout@v3; locate
the workflow step that currently contains the string "uses: actions/checkout@v3"
and change it to "uses: actions/checkout@v4" to pick up Node.js 20 support and
performance improvements.

- name: Run some silly benchmark with time
run: (time (head --bytes=1000000000 /dev/random >/dev/null)) 2>&1 | tee output.txt

⚠️ Potential issue | 🟡 Minor

Use /dev/urandom instead of /dev/random.

/dev/random blocks when the entropy pool is exhausted, which can cause inconsistent benchmark times. /dev/urandom is non-blocking and sufficient for this use case.

🔧 Proposed fix
-        run: (time (head --bytes=1000000000 /dev/random  >/dev/null)) 2>&1 | tee output.txt
+        run: (time (head --bytes=1000000000 /dev/urandom  >/dev/null)) 2>&1 | tee output.txt
📝 Committable suggestion


Suggested change
run: (time (head --bytes=1000000000 /dev/random >/dev/null)) 2>&1 | tee output.txt
run: (time (head --bytes=1000000000 /dev/urandom >/dev/null)) 2>&1 | tee output.txt
🤖 Prompt for AI Agents
In @.github/workflows/time.yml at line 22, Replace the blocking entropy source
in the workflow run command: update the run step that currently invokes (time
(head --bytes=1000000000 /dev/random >/dev/null)) 2>&1 | tee output.txt to use
/dev/urandom instead of /dev/random so the head --bytes invocation is
non-blocking and yields consistent benchmark timing.


- name: Store benchmark result
uses: benchmark-action/github-action-benchmark@v1
with:
name: Time Benchmark
tool: 'time'
output-file-path: output.txt
github-token: ${{ secrets.GITHUB_TOKEN }}
auto-push: true
# Show alert with commit comment on detecting possible performance regression
alert-threshold: '200%'
comment-on-alert: true
fail-on-alert: true
alert-comment-cc-users: '@ktrz','@henrikingo'

Check failure on line 36 in .github/workflows/time.yml (GitHub Actions / Run unit tests):

36:42 syntax error: expected <block end>, but found ',' (syntax)

⚠️ Potential issue | 🔴 Critical

Invalid YAML syntax for alert-comment-cc-users.

The value '@ktrz','@henrikingo' is not valid YAML: two adjacent quoted scalars on one line are what trigger the actionlint parsing error. Quote the whole value as a single comma-separated string:

🔧 Proposed fix
-          alert-comment-cc-users: '@ktrz','@henrikingo'
+          alert-comment-cc-users: '@ktrz,@henrikingo'

Or, if the action supports it, as a block scalar with one user per line:

          alert-comment-cc-users: |
            @ktrz
            @henrikingo
📝 Committable suggestion


Suggested change
alert-comment-cc-users: '@ktrz','@henrikingo'
alert-comment-cc-users: '@ktrz,@henrikingo'
🤖 Prompt for AI Agents
In @.github/workflows/time.yml at line 36, The YAML value for the key
alert-comment-cc-users is invalid because it's written as a comma-separated
string ('@ktrz','@henrikingo'); change it to a proper YAML sequence or a
multiline string supported by the action. Replace the current value with either
a YAML array (e.g., alert-comment-cc-users: ["@ktrz", "@henrikingo"]) or a
pipe-delimited block (alert-comment-cc-users: | followed by each user on its own
line) so actionlint will parse the workflow correctly.


- name: Store benchmark result - separate results repo
uses: benchmark-action/github-action-benchmark@v1
with:
name: Time Benchmark
tool: 'time'
output-file-path: output.txt
github-token: ${{ secrets.BENCHMARK_ACTION_BOT_TOKEN }}
auto-push: true
# Show alert with commit comment on detecting possible performance regression
alert-threshold: '200%'
comment-on-alert: true
fail-on-alert: true
alert-comment-cc-users: '@ktrz','@henrikingo'

Check failure on line 50 in .github/workflows/time.yml (GitHub Actions / Run unit tests):

50:43 [commas] too few spaces after comma

⚠️ Potential issue | 🔴 Critical

Same YAML syntax issue as Line 36.

🔧 Proposed fix
-          alert-comment-cc-users: '@ktrz','@henrikingo'
+          alert-comment-cc-users: '@ktrz,@henrikingo'
📝 Committable suggestion


Suggested change
alert-comment-cc-users: '@ktrz','@henrikingo'
alert-comment-cc-users: '@ktrz,@henrikingo'
🤖 Prompt for AI Agents
In @.github/workflows/time.yml at line 50, The YAML entry for
alert-comment-cc-users is using invalid syntax ("'@ktrz','@henrikingo'"); update
the alert-comment-cc-users value to a proper YAML sequence such as
alert-comment-cc-users: ['@ktrz', '@henrikingo'] or a block list (- '@ktrz' / -
'@henrikingo') so the workflow parser accepts it; edit the
alert-comment-cc-users line in the time.yml workflow to use one of these valid
list formats.

gh-repository: 'github.com/benchmark-action/github-action-benchmark-results'
4 changes: 4 additions & 0 deletions README.md
@@ -26,6 +26,7 @@ This action currently supports the following tools:
- [Benchmark.Net][benchmarkdotnet] for .Net projects
- [benchmarkluau](https://github.com/Roblox/luau/tree/master/bench) for Luau projects
- [JMH][jmh] for Java projects
- [time][time-cli-util] CLI utility to measure runtime time of any Linux executable
- Custom benchmarks where either 'biggerIsBetter' or 'smallerIsBetter'

Multiple languages in the same repository are supported for polyglot projects.
@@ -51,6 +52,7 @@ definitions are in [.github/workflows/](./.github/workflows) directory. Live wor
| .Net | [![C# Benchmark.Net Example Workflow][benchmarkdotnet-badge]][benchmarkdotnet-workflow-example] | [examples/benchmarkdotnet](./examples/benchmarkdotnet) |
| Java | [![Java Example Workflow][java-badge]][java-workflow-example] | [examples/java](./examples/java) |
| Luau | Coming soon | Coming soon |
| Time (CLI utility) | [`time`][time-workflow-example]

⚠️ Potential issue | 🟡 Minor

Malformed table row - missing column and trailing pipe.

The table row for "Time (CLI utility)" is missing the third column (Example Project) and the trailing pipe character, which breaks the table structure.

📝 Proposed fix
-| Time (CLI utility) | [`time`][time-workflow-example]
+| Time (CLI utility) | [`time`][time-workflow-example] | [examples/time](./examples/time) |
📝 Committable suggestion


Suggested change
| Time (CLI utility) | [`time`][time-workflow-example]
| Time (CLI utility) | [`time`][time-workflow-example] | [examples/time](./examples/time) |
🧰 Tools
🪛 markdownlint-cli2 (0.20.0)

[warning] 55-55: Table pipe style
Expected: leading_and_trailing; Actual: leading_only; Missing trailing pipe

(MD055, table-pipe-style)


[warning] 55-55: Table column count
Expected: 3; Actual: 2; Too few cells, row will be missing data

(MD056, table-column-count)


[warning] 55-55: Table column style
Table pipe does not align with header for style "aligned"

(MD060, table-column-style)

🤖 Prompt for AI Agents
In `@README.md` at line 55, The table row for "Time (CLI utility)" is malformed —
add the missing third column (Example Project) and the trailing pipe to restore
table alignment; update the row containing "Time (CLI utility)" and the link
[`time`][time-workflow-example] so it has a third column (can be an example
project name or an empty cell) and ends with a final pipe.


All benchmark charts from above workflows are gathered in GitHub pages:

@@ -651,6 +653,7 @@ Every release will appear on your GitHub notifications page.
[examples-page]: https://benchmark-action.github.io/github-action-benchmark/dev/bench/
[pytest-benchmark]: https://pypi.org/project/pytest-benchmark/
[pytest]: https://pypi.org/project/pytest/
[time-cli-util]: https://man7.org/linux/man-pages/man1/time.1.html
[alert-comment-example]: https://github.com/benchmark-action/github-action-benchmark/commit/077dde1c236baba9244caad4d9e82ea8399dae20#commitcomment-36047186
[rust-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Rust+Example%22
[go-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Go+Example%22
@@ -660,6 +663,7 @@ Every release will appear on your GitHub notifications page.
[catch2-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Catch2+C%2B%2B+Example%22
[julia-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Julia+Example+with+BenchmarkTools.jl%22
[java-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22JMH+Example%22
[time-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22time+Example%22
[help-watch-release]: https://docs.github.com/en/github/receiving-notifications-about-activity-on-github/watching-and-unwatching-releases-for-a-repository
[help-github-token]: https://docs.github.com/en/actions/security-guides/automatic-token-authentication
[minimal-workflow-example]: https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Example+for+minimal+setup%22
1 change: 1 addition & 0 deletions action-types.yml
@@ -14,6 +14,7 @@ inputs:
- jmh
- benchmarkdotnet
- benchmarkluau
- time
- customBiggerIsBetter
- customSmallerIsBetter
output-file-path:
2 changes: 1 addition & 1 deletion action.yml
@@ -11,7 +11,7 @@ inputs:
required: true
default: 'Benchmark'
tool:
description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "customBiggerIsBetter", "customSmallerIsBetter"'
description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "time", "customBiggerIsBetter", "customSmallerIsBetter"'

⚠️ Potential issue | 🟡 Minor

Tool description is incomplete.

The description is missing "jmh" and "benchmarkluau" from the list of valid tools. While this appears to be a pre-existing issue, it would be good to fix it for completeness since you're already updating this line.

📝 Proposed fix
-    description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "time", "customBiggerIsBetter", "customSmallerIsBetter"'
+    description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "jmh", "benchmarkdotnet", "benchmarkluau", "time", "customBiggerIsBetter", "customSmallerIsBetter"'
📝 Committable suggestion


Suggested change
description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "benchmarkdotnet", "time", "customBiggerIsBetter", "customSmallerIsBetter"'
description: 'Tool to use get benchmark output. One of "cargo", "go", "benchmarkjs", "pytest", "googlecpp", "catch2", "julia", "jmh", "benchmarkdotnet", "benchmarkluau", "time", "customBiggerIsBetter", "customSmallerIsBetter"'
🤖 Prompt for AI Agents
In `@action.yml` at line 14, Update the description string for the action (the
description key in action.yml) to include the missing valid tool names "jmh" and
"benchmarkluau" so the full list of allowed tools reflects all supported
options; simply add "jmh" and "benchmarkluau" into the quoted comma-separated
list in the existing description value.

required: true
output-file-path:
description: 'A path to file which contains the benchmark output'
41 changes: 41 additions & 0 deletions examples/time/README.md
@@ -0,0 +1,41 @@
Example for parsing output of the `time` CLI utility
====================================================

- [Workflow for this example](../../.github/workflows/time.yml)
- [Action log of this example](https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Rust+Example%22)

⚠️ Potential issue | 🟡 Minor

Incorrect workflow link - copy-paste error from Rust example.

The action log link points to the "Rust Example" workflow instead of the "time Example" workflow.

📝 Proposed fix
-- [Action log of this example](https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Rust+Example%22)
+- [Action log of this example](https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22time+Example%22)
📝 Committable suggestion


Suggested change
- [Action log of this example](https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22Rust+Example%22)
- [Action log of this example](https://github.com/benchmark-action/github-action-benchmark/actions?query=workflow%3A%22time+Example%22)
🤖 Prompt for AI Agents
In `@examples/time/README.md` at line 5, The README link points to the "Rust
Example" workflow by mistake; update the markdown link in
examples/time/README.md to reference the "time Example" workflow by replacing
workflow%3A%22Rust+Example%22 with workflow%3A%22time+Example%22 in the URL (and
change the link text if it currently says "Rust Example" to "time Example") so
the Action log opens the correct workflow.

- [Benchmark results on GitHub pages](https://benchmark-action.github.io/github-action-benchmark/dev/bench/)

This directory shows how to use [`github-action-benchmark`](https://github.com/benchmark-action/github-action-benchmark)
with [`cargo bench`](https://doc.rust-lang.org/cargo/commands/cargo-bench.html).
Comment on lines +8 to +9

⚠️ Potential issue | 🟡 Minor

Incorrect description - mentions cargo bench instead of time.

This appears to be a copy-paste error from the Rust example. The text should describe the time CLI utility, not cargo bench.

📝 Proposed fix
 This directory shows how to use [`github-action-benchmark`](https://github.com/benchmark-action/github-action-benchmark)
-with [`cargo bench`](https://doc.rust-lang.org/cargo/commands/cargo-bench.html).
+with the [`time`](https://man7.org/linux/man-pages/man1/time.1.html) CLI utility.
📝 Committable suggestion


Suggested change
This directory shows how to use [`github-action-benchmark`](https://github.com/benchmark-action/github-action-benchmark)
with [`cargo bench`](https://doc.rust-lang.org/cargo/commands/cargo-bench.html).
This directory shows how to use [`github-action-benchmark`](https://github.com/benchmark-action/github-action-benchmark)
with the [`time`](https://man7.org/linux/man-pages/man1/time.1.html) CLI utility.
🤖 Prompt for AI Agents
In `@examples/time/README.md` around lines 8 - 9, The README line currently
describes using github-action-benchmark with "cargo bench" (a Rust tool) by
mistake; update that sentence to describe using the time CLI utility instead:
replace the phrase "with [`cargo
bench`](https://doc.rust-lang.org/cargo/commands/cargo-bench.html)" with wording
like "with the `time` CLI utility" and point the link to the appropriate time
documentation (e.g., the `time` man page or project page), keeping the reference
to [`github-action-benchmark`] intact so the sentence accurately describes using
github-action-benchmark with the time tool.


## Run benchmarks

Official documentation for usage of `time`:

https://man7.org/linux/man-pages/man1/time.1.html

So to run a benchmark, simply call time first:

```yaml
- name: Measure execution time with `time`
run: (time (head --bytes=1000000000 /dev/random >/dev/null)) 2>&1 | tee output.txt
```

Note the `2>&1` too capture also stderr output. By default that's where `time` will direct its output.

⚠️ Potential issue | 🟡 Minor

Typo: "too" should be "to".

📝 Proposed fix
-Note the `2>&1` too capture also stderr output. By default that's where `time` will direct its output.
+Note the `2>&1` to capture also stderr output. By default that's where `time` will direct its output.
📝 Committable suggestion


Suggested change
Note the `2>&1` too capture also stderr output. By default that's where `time` will direct its output.
Note the `2>&1` to capture also stderr output. By default that's where `time` will direct its output.
🤖 Prompt for AI Agents
In `@examples/time/README.md` at line 24, The sentence "Note the `2>&1` too
capture also stderr output." contains a typo: replace "too" with "to" in that
sentence (the line containing "Note the `2>&1` too capture also stderr output.")
so it reads "Note the `2>&1` to capture also stderr output." and commit the
corrected README entry.



## Process benchmark results

Store the benchmark results with step using the action. Please set `cargo` to `tool` input.

⚠️ Potential issue | 🟡 Minor

Incorrect tool reference - says cargo instead of time.

The text incorrectly instructs users to set cargo as the tool input.

📝 Proposed fix
-Store the benchmark results with step using the action. Please set `cargo` to `tool` input.
+Store the benchmark results with step using the action. Please set `time` to `tool` input.
📝 Committable suggestion


Suggested change
Store the benchmark results with step using the action. Please set `cargo` to `tool` input.
Store the benchmark results with step using the action. Please set `time` to `tool` input.
🤖 Prompt for AI Agents
In `@examples/time/README.md` at line 29, The README line "Store the benchmark
results with step using the action. Please set `cargo` to `tool` input."
incorrectly references `cargo`; update the documentation to instruct users to
set the action's tool input to `time` (i.e., replace `cargo` with `time`) so the
example and explanation use the correct tool name mentioned in the README
phrase.


```yaml
- name: Store benchmark result
uses: benchmark-action/github-action-benchmark@v1
with:
tool: 'time'
output-file-path: output.txt
```

Please read ['How to use' section](https://github.com/benchmark-action/github-action-benchmark#how-to-use) for common usage.


1 change: 1 addition & 0 deletions src/config.ts
@@ -38,6 +38,7 @@ export const VALID_TOOLS = [
'julia',
'jmh',
'benchmarkdotnet',
'time',
'customBiggerIsBetter',
'customSmallerIsBetter',
] as const;
91 changes: 91 additions & 0 deletions src/extract.ts
@@ -2,6 +2,7 @@
import { promises as fs } from 'fs';
import * as github from '@actions/github';
import { Config, ToolType } from './config';
import * as core from '@actions/core';
import { z } from 'zod';

export const BenchmarkResult = z.object({
@@ -34,7 +35,7 @@
}

interface PullRequest {
[key: string]: any;

Check warning on line 38 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
number: number;
html_url?: string;
body?: string;
@@ -441,7 +442,7 @@
const extra = `mean: ${mean} ${meanUnit}\nrounds: ${stats.rounds}`;
return { name, value, unit, range, extra };
});
} catch (err: any) {

Check warning on line 445 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
throw new Error(
`Output file for 'pytest' must be JSON file generated by --benchmark-json option: ${err.message}`,
);
@@ -452,7 +453,7 @@
let json: GoogleCppBenchmarkJson;
try {
json = JSON.parse(output);
} catch (err: any) {

Check warning on line 456 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
throw new Error(
`Output file for 'googlecpp' must be JSON file generated by --benchmark_format=json option: ${err.message}`,
);
@@ -580,7 +581,7 @@
return ret;
}

function extractJuliaBenchmarkHelper([_, bench]: JuliaBenchmarkGroup, labels: string[] = []): BenchmarkResult[] {

Check warning on line 584 in src/extract.ts (GitHub Actions / Run linting and formatting check): '_' is defined but never used
const res: BenchmarkResult[] = [];
for (const key in bench.data) {
const value = bench.data[key];
@@ -611,7 +612,7 @@
let json: JuliaBenchmarkJson;
try {
json = JSON.parse(output);
} catch (err: any) {

Check warning on line 615 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
throw new Error(
`Output file for 'julia' must be JSON file generated by BenchmarkTools.save("output.json", suit::BenchmarkGroup) : ${err.message}`,
);
@@ -629,7 +630,7 @@
let json: JmhBenchmarkJson[];
try {
json = JSON.parse(output);
} catch (err: any) {

Check warning on line 633 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
throw new Error(`Output file for 'jmh' must be JSON file generated by -rf json option: ${err.message}`);
}
return json.map((b) => {
@@ -646,7 +647,7 @@
let json: BenchmarkDotNetBenchmarkJson;
try {
json = JSON.parse(output);
} catch (err: any) {

Check warning on line 650 in src/extract.ts (GitHub Actions / Run linting and formatting check): Unexpected any. Specify a different type
throw new Error(
`Output file for 'benchmarkdotnet' must be JSON file generated by '--exporters json' option or by adding the JsonExporter to your run config: ${err.message}`,
);
@@ -692,6 +693,93 @@
return results;
}

function parseTimeOutput(line: string): number | undefined {
core.debug(`parse time ${line}`);
const t = line.split('\t')[1];
if (t === undefined) return;
const tparts = t.split('m');

core.debug(tparts[0]);
if (tparts[1] === undefined || !tparts[1].endsWith('s')) return;
return parseFloat(tparts[0]) * 60.0 + parseFloat(tparts[1].substring(-1));
}
Comment on lines +696 to +705

⚠️ Potential issue | 🔴 Critical

Critical bug: substring(-1) does not remove the trailing 's'.

String.prototype.substring() treats negative arguments as 0, so tparts[1].substring(-1) returns the entire string, including the trailing 's'. The intended truncation never happens; the code only appears to work because parseFloat silently ignores trailing non-numeric characters, which hides the bug rather than fixing it.

Use slice(0, -1) instead to correctly remove the trailing character.
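
For illustration (not part of the patch; this is just standard JavaScript/TypeScript string and parseFloat behaviour), a quick check of the two calls:

```ts
const raw = '0.020s';              // seconds field as emitted by bash's `time`
console.log(raw.substring(-1));    // "0.020s"  (a negative start index is clamped to 0)
console.log(raw.slice(0, -1));     // "0.020"   (trailing 's' removed as intended)
console.log(parseFloat(raw));      // 0.02      (parseFloat ignores the trailing 's')
console.log(parseFloat('0,020s')); // 0         (a comma decimal stops the parse after '0')
```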

🐛 Proposed fix
 function parseTimeOutput(line: string): number | undefined {
     core.debug(`parse time ${line}`);
     const t = line.split('\t')[1];
     if (t === undefined) return;
     const tparts = t.split('m');
 
     core.debug(tparts[0]);
     if (tparts[1] === undefined || !tparts[1].endsWith('s')) return;
-    return parseFloat(tparts[0]) * 60.0 + parseFloat(tparts[1].substring(-1));
+    return parseFloat(tparts[0]) * 60.0 + parseFloat(tparts[1].slice(0, -1));
 }
🤖 Prompt for AI Agents
In `@src/extract.ts` around lines 696 - 705, The parseTimeOutput function is
incorrectly using tparts[1].substring(-1) which returns the whole string
(including the trailing 's') and causes parseFloat to yield NaN; update
parseTimeOutput to remove the trailing 's' using tparts[1].slice(0, -1) before
calling parseFloat (and ensure you still guard for undefined t and tparts[1] as
currently done).

/* Expect to process text with the following structure:

$ time echo Hello there
Hello there

real 0m0,100s
user 0m0,020s
sys 0m0,003s
$ time echo Hello there
Hello again

real 0m0,100s
user 0m0,020s
sys 0m0,003s

Into:
[{"name": "real",
"value": 0,1,
"unit": "s",
"extra": "testName: Hello there"},
...
*/
function extractTimeBenchmarkResult(output: string): BenchmarkResult[] {
const lines = output.split(/\n/);
const results: BenchmarkResult[] = [];
let firstline = true;
let name: string | undefined = undefined;

for (const line of lines) {
core.debug(line);
if (firstline) {
name = line;
firstline = false;
core.debug(`firstline: ${name}`);
continue;
}
if (line.startsWith('real')) {
const v = parseTimeOutput(line);
core.debug(`v: ${v}`);
if (v !== undefined)
results.push({
extra: name,
name: 'real',
value: v,
unit: 's',
});
Comment on lines +746 to +751

🧹 Nitpick | 🔵 Trivial

All three benchmark entries will have the same name value.

The current implementation assigns 'real', 'user', and 'sys' as the name field for all benchmarks. If tracking multiple different commands, they will all have the same names, making it difficult to distinguish them in the results. Consider incorporating the command name (from extra) into the benchmark name.

♻️ Proposed improvement to include command context in name
         if (line.startsWith('real')) {
             const v = parseTimeOutput(line);
             core.debug(`v: ${v}`);
             if (v !== undefined)
                 results.push({
                     extra: name,
-                    name: 'real',
+                    name: name ? `${name} - real` : 'real',
                     value: v,
                     unit: 's',
                 });
         }

Apply similar changes for 'user' and 'sys' entries.

📝 Committable suggestion


Suggested change
results.push({
extra: name,
name: 'real',
value: v,
unit: 's',
});
results.push({
extra: name,
name: name ? `${name} - real` : 'real',
value: v,
unit: 's',
});
🤖 Prompt for AI Agents
In `@src/extract.ts` around lines 746 - 751, The three benchmark results all use
the literal name values ('real', 'user', 'sys'), losing per-command context;
update the results.push calls (the objects constructed in the results.push in
this snippet) to include the command identifier from the existing extra/name
variable in the benchmark name (e.g. combine the command name variable "name"
with the metric label so the name field becomes commandName + '-real' (and
similarly '-user' and '-sys')), and ensure you make the same change for the
other two pushes that produce 'user' and 'sys' entries.

}
if (line.startsWith('user')) {
const v = parseTimeOutput(line);

core.debug(`v: ${v}`);
if (v !== undefined)
results.push({
extra: name,
name: 'user',
value: v,
unit: 's',
});
}
if (line.startsWith('sys')) {
const v = parseTimeOutput(line);
core.debug(`v: ${v}`);
if (v !== undefined)
results.push({
extra: name,
name: 'sys',
value: v,
unit: 's',
});
firstline = true;
}
core.debug('loop');
}

return results;
}
Comment on lines +728 to +781

⚠️ Potential issue | 🟠 Major

Consider handling locale-specific decimal separators.

The example in the comment block (lines 711-713) shows time output with commas as decimal separators (0m0,100s), which is common in European locales. However, parseFloat only recognizes . as a decimal separator. If the time command outputs locale-specific formatting, the parsing will fail.

🔧 Proposed fix to handle both comma and period decimals
 function parseTimeOutput(line: string): number | undefined {
     core.debug(`parse time ${line}`);
     const t = line.split('\t')[1];
     if (t === undefined) return;
     const tparts = t.split('m');

     core.debug(tparts[0]);
     if (tparts[1] === undefined || !tparts[1].endsWith('s')) return;
-    return parseFloat(tparts[0]) * 60.0 + parseFloat(tparts[1].slice(0, -1));
+    // Handle both '.' and ',' as decimal separators for locale compatibility
+    const minutes = parseFloat(tparts[0].replace(',', '.'));
+    const seconds = parseFloat(tparts[1].slice(0, -1).replace(',', '.'));
+    return minutes * 60.0 + seconds;
 }
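
The AI prompt below also suggests adding tests for both separators. A Jest-style sketch (assuming `parseTimeOutput` were exported from src/extract.ts for testing, which the patch does not currently do, and that the locale fix above is applied):

```ts
import { parseTimeOutput } from '../src/extract'; // hypothetical export, added only for this test

describe('parseTimeOutput()', () => {
    it('parses dot-decimal `time` output', () => {
        expect(parseTimeOutput('real\t1m2.500s')).toBeCloseTo(62.5);
    });

    it('parses comma-decimal `time` output (e.g. a de_DE locale)', () => {
        expect(parseTimeOutput('real\t0m0,100s')).toBeCloseTo(0.1);
    });

    it('returns undefined for lines it cannot parse', () => {
        expect(parseTimeOutput('not a time line')).toBeUndefined();
    });
});
```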
🤖 Prompt for AI Agents
In `@src/extract.ts` around lines 728 - 781, The parsing of time outputs in
extractTimeBenchmarkResult (which calls parseTimeOutput) fails for locales using
commas as decimal separators (e.g., "0m0,100s"); update parseTimeOutput to
normalize locale decimals by replacing a single comma in the fractional part
with a dot (and strip any thousands separators) before calling parseFloat,
ensuring lines like "0m0,100s" and "0m0.100s" both parse correctly; adjust
parseTimeOutput (referenced by extractTimeBenchmarkResult) to perform this
normalization and add tests for comma and dot decimals.


export async function extractResult(config: Config): Promise<Benchmark> {
const output = await fs.readFile(config.outputFilePath, 'utf8');
const { tool, githubToken, ref } = config;
@@ -725,6 +813,9 @@
case 'benchmarkdotnet':
benches = extractBenchmarkDotnetResult(output);
break;
case 'time':
benches = extractTimeBenchmarkResult(output);
break;
case 'customBiggerIsBetter':
benches = extractCustomBenchmarkResult(output);
break;
2 changes: 2 additions & 0 deletions src/write.ts
@@ -82,6 +82,8 @@ function biggerIsBetter(tool: ToolType): boolean {
return false;
case 'benchmarkdotnet':
return false;
case 'time':
return false;
case 'customBiggerIsBetter':
return true;
case 'customSmallerIsBetter':
53 changes: 53 additions & 0 deletions test/__snapshots__/extract.spec.ts.snap
@@ -1005,3 +1005,56 @@ rounds: 5",
"tool": "pytest",
}
`;

exports[`extractResult() extracts benchmark output from time - time_output.txt 1`] = `
{
"benches": [
{
"extra": "Hello there",
"name": "real",
"unit": "s",
"value": 0.1,
},
{
"extra": "Hello there",
"name": "user",
"unit": "s",
"value": 0.02,
},
{
"extra": "Hello there",
"name": "sys",
"unit": "s",
"value": 0.003,
},
{
"extra": "Hello again",
"name": "real",
"unit": "s",
"value": 0.1,
},
{
"extra": "Hello again",
"name": "user",
"unit": "s",
"value": 0.02,
},
{
"extra": "Hello again",
"name": "sys",
"unit": "s",
"value": 0.003,
},
],
"commit": {
"author": null,
"committer": null,
"id": "123456789abcdef",
"message": "this is dummy",
"timestamp": "dummy timestamp",
"url": "https://github.com/dummy/repo",
},
"date": 1712131503296,
"tool": "time",
}
`;
10 changes: 10 additions & 0 deletions test/data/extract/time_output.txt
@@ -0,0 +1,10 @@
Hello there

real 0m0.100s
user 0m0.020s
sys 0m0.003s
Hello again

real 0m0.100s
user 0m0.020s
sys 0m0.003s
4 changes: 4 additions & 0 deletions test/extract.spec.ts
@@ -135,6 +135,10 @@ describe('extractResult()', function () {
tool: 'benchmarkdotnet',
file: 'benchmarkdotnet.json',
},
{
tool: 'time',
file: 'time_output.txt',
},
{
tool: 'customBiggerIsBetter',
file: 'customBiggerIsBetter_output.json',