Description
I would like the mlx-benchmark tool to be comparable to llama.cpp so that its output and performance metrics can be compared easily. The main goal is to allow users to invoke the benchmark from the CLI with the same flags that llama-bench provides, specifically:
- --model/-m: path to the quantized model
- -p: prompt text used for the test
- -n: number of tokens to generate
Having these options directly available would let users benchmark new models side‑by‑side and verify performance regressions.
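As a rough sketch, the flags above could be wired up with argparse. The option names follow llama-bench; the attribute names (`prompt`, `n_gen`) and defaults here are assumptions, not an existing mlx-benchmark API:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical CLI mirroring llama-bench's flags (names are assumptions).
    parser = argparse.ArgumentParser(
        description="mlx-benchmark with llama-bench-style flags"
    )
    parser.add_argument("--model", "-m", required=True,
                        help="path to the quantized model")
    parser.add_argument("-p", "--prompt", default="",
                        help="prompt text used for the test")
    parser.add_argument("-n", "--n-gen", dest="n_gen", type=int, default=128,
                        help="number of tokens to generate")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args.model, args.prompt, args.n_gen)
```

With this, a run like `mlx-benchmark -m model.safetensors -p "hello" -n 64` would map directly onto the equivalent llama-bench invocation.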
Motivation
llama.cpp’s llama-bench is the de facto standard benchmark tool; aligning with it keeps results consistent across the ecosystem.