
feat: Make it compatible with llama.cpp’s benchmark tool #45

@olegshulyakov

Description


I would like the mlx-benchmark tool to be comparable to llama.cpp's benchmark tool, so that the output and performance metrics can be compared easily. The main goal is to allow users to invoke the benchmark from the CLI with the same flags that llama-bench provides, specifically:

  • --model / -m: path to the quantized model
  • -p: prompt text used for the test
  • -n: number of tokens to generate

Having these options directly available would let users benchmark new models side‑by‑side and verify performance regressions.
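A minimal sketch of how the requested CLI surface could look, assuming an argparse-based entry point (the argument names mirror the flags listed above; the benchmark entry point and the run_benchmark call are hypothetical and only illustrate the request, not the actual mlx-benchmark API):

```python
import argparse


def main():
    # Flags named to mirror llama-bench, as requested in this issue.
    parser = argparse.ArgumentParser(
        description="mlx-benchmark with llama-bench-style flags (sketch)"
    )
    parser.add_argument("--model", "-m", required=True,
                        help="path to the quantized model")
    parser.add_argument("-p", "--prompt", default="Hello",
                        help="prompt text used for the test")
    parser.add_argument("-n", "--n-gen", type=int, default=128,
                        help="number of tokens to generate")
    args = parser.parse_args()

    # Hypothetical hook: the real tool would load the model, time prompt
    # processing and generation, and report tokens/s in a llama-bench-like
    # layout so results line up side by side.
    # run_benchmark(args.model, args.prompt, args.n_gen)


if __name__ == "__main__":
    main()
```

With flags like these, an invocation such as `mlx-benchmark -m model.gguf -p "Hello" -n 128` would line up one-to-one with the equivalent llama-bench command.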

Motivation

llama.cpp’s llama-bench is the de‑facto benchmark tool; aligning with it helps maintain consistency across the ecosystem.
