Conversation


@FBumann FBumann commented Feb 11, 2026

Summary

Add a unified SolverMetrics dataclass that provides solver-independent access to performance metrics after solving. Accessible via Model.solver_metrics.

Fields: solver_name, solve_time, objective_value, dual_bound, mip_gap — all default to None; solvers populate what they can.

Design:

  • Frozen dataclass (immutable after creation)
  • Base Solver._extract_metrics() populates solver_name and objective_value
  • Each solver subclass overrides to add solver-specific fields via dataclasses.replace()
  • All attribute access wrapped in _safe_get() with debug logging so extraction never breaks the solve
  • The pattern is easily extensible for new solvers — just override _extract_metrics() and use _safe_get(). PRs adding metrics for additional solvers are welcome!

Solver coverage:

| Solver  | solve_time | dual_bound | mip_gap | tested |
|---------|------------|------------|---------|--------|
| Gurobi  | ✓          | ✓          | ✓       | ✓      |
| HiGHS   | ✓          | ✓          | ✓       | ✓      |
| SCIP    | ✓          | ✓          | ✓       | ✓      |
| CPLEX   | ✓          | ✓          | ✓       | ✓      |
| Xpress  | ✓          | ✓          | ✓       | ✓      |
| Mosek   | ✓          | ✓          | ✓       | ✓      |
| CBC     | ✓*         |            | ✓*      |        |
| COPT    | base only  |            |         |        |
| MindOpt | base only  |            |         |        |
| cuPDLPx | base only  |            |         |        |
| GLPK    | base only  |            |         |        |

*CBC parses solve_time and mip_gap from log output via regex. These fields depend on the CBC log format, which varies across versions.

Solvers without a tested _extract_metrics override still get solver_name and objective_value from the base class. We intentionally did not add solver-specific overrides for untested solvers — incorrect attribute names would silently return None, giving a false sense of coverage.
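To illustrate the fragility of the CBC approach, a regex-based extraction might look like the sketch below. The log lines are a hypothetical example of CBC output; the actual patterns linopy uses, and the log format itself, vary across CBC versions.

```python
import re

# Hypothetical CBC log excerpt (format varies by CBC version):
log = """
Result - Optimal solution found
Objective value:                105.00000000
Gap:                            0.00
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.03
"""

# If a pattern does not match a given CBC version's output,
# the field simply stays None rather than breaking the solve.
time_match = re.search(r"Total time \(CPU seconds\):\s+([\d.]+)", log)
gap_match = re.search(r"Gap:\s+([\d.]+)", log)

solve_time = float(time_match.group(1)) if time_match else None
mip_gap = float(gap_match.group(1)) if gap_match else None
```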

Bugs fixed along the way:

  • HiGHS: Used the non-existent info key mip_objective_bound; fixed to mip_dual_bound. Also fixed the status comparison from == 0 to == highspy.HighsStatus.kOk.
  • Xpress: miprelgap attribute doesn't exist — compute gap manually from mipbestobjval and bestbound.
  • CPLEX: m.solution.progress.get_time() doesn't exist — use time.perf_counter() around the solve call.
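For reference, the manual gap computation used for the Xpress fix can be sketched as below. This follows the common relative-gap convention (|objective − bound| / |objective|); the actual linopy code and the Xpress attribute names (`mipbestobjval`, `bestbound`) may be wired up differently.

```python
def compute_mip_gap(best_objective, best_bound):
    """Relative MIP gap: |obj - bound| / |obj|.

    Returns None if either value is missing or the objective is zero
    (solver conventions differ on the zero-objective case).
    """
    if best_objective is None or best_bound is None:
        return None
    if best_objective == 0.0:
        return None  # avoid division by zero
    return abs(best_objective - best_bound) / abs(best_objective)
```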

Test plan

  • SolverMetrics dataclass unit tests (defaults, partial, repr, frozen)
  • Result backward compatibility tests (with/without metrics)
  • Model integration tests (metrics before solve, after mock solve, after reset)
  • Parametrized LP tests over direct and file-IO solvers
  • Parametrized MIP tests asserting mip_gap and dual_bound are populated
  • All solver-specific overrides tested with real solves (Gurobi, HiGHS, SCIP, CPLEX, Xpress, Mosek)

Closes #428

  SolverMetrics dataclass

  - 7 optional fields: solver_name, solve_time, objective_value, best_bound, mip_gap, node_count, iteration_count — all default to None
  - Custom __repr__ that only shows non-None fields
  - Added as metrics field on Result (backward-compatible — defaults to None)

  Solver-specific metric extraction (linopy/solvers.py)

  - Base Solver class: _extract_metrics() returns solver_name + objective_value
  - Gurobi: extracts Runtime, ObjBound, MIPGap, NodeCount, IterCount
  - HiGHS: extracts getRunTime(), mip_node_count, simplex_iteration_count, mip_gap, mip_dual_bound
  - SCIP: extracts getSolvingTime(), getDualbound(), getGap(), getNNodes(), getNLPIterations()
  - CBC: uses already-parsed mip_gap and runtime from log output
  - All other solvers (GLPK, Cplex, Xpress, Mosek, COPT, MindOpt, cuPDLPx): use base class default
  - All 12 return Result(...) sites updated to pass metrics
  - Every attribute access is wrapped in try/except so extraction never breaks the solve

  Model integration (linopy/model.py)

  - _solver_metrics slot, initialized to None
  - solver_metrics property
  - Stored from result.metrics after solve()
  - Set to basic metrics in _mock_solve()
  - Reset to None in reset_solution()

  Package export (linopy/__init__.py)

  - SolverMetrics added to imports and __all__

  Tests (test/test_solver_metrics.py)

  - 13 tests covering: dataclass defaults, partial values, repr, Result backward compat, Model integration (before/after solve, reset), and parametrized solver-specific tests for both direct and file-IO solvers
  - Added mock-based unit tests for all 10 solver overrides (CBC, Highs, Gurobi, SCIP, Cplex, Xpress, Mosek, COPT, MindOpt, cuPDLPx)
  - Added test_extract_metrics_graceful_on_missing_attr — verifies _safe_get degrades gracefully
  - Tests skip for unavailable solvers using @pytest.mark.skipif
@FBumann FBumann changed the title Feature/solving metrics Add unified SolverMetrics dataclass Feb 11, 2026
FBumann and others added 5 commits February 11, 2026 16:54
Remove all mock/patch-based _extract_metrics tests. The parametrized
integration tests (test_solver_metrics_direct, test_solver_metrics_file_io)
now assert solve_time >= 0 for every available solver, ensuring attribute
names are correct against real solver objects.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@FBumann FBumann changed the title Add unified SolverMetrics dataclass feat: add unified SolverMetrics Feb 11, 2026
Only parametrize over solvers with _extract_metrics overrides
(gurobi, highs, scip, cplex, xpress, mosek), so solvers with
base-only metrics (glpk, copt, cbc) don't fail on solve_time.

FBumann commented Feb 11, 2026

Coverage fails, as not all solvers are available in CI. But this is a change that might affect CI in general.

@FBumann FBumann marked this pull request as ready for review February 11, 2026 20:51

FBumann commented Feb 12, 2026

#428

@lkstrp (Member) left a comment


I would really like to see something like this. I'm not sure, though, whether this would be enough to reliably benchmark the model in automated runs, but with a fixed node/environment I don't see why this wouldn't already be helpful. Any thoughts on getting memory usage as well? I think Gurobi gives you peak memory; not sure about the other solvers.


FBumann commented Feb 12, 2026

> I would really like to see something like this. I'm not sure, though, whether this would be enough to reliably benchmark the model in automated runs, but with a fixed node/environment I don't see why this wouldn't already be helpful. Any thoughts on getting memory usage as well? I think Gurobi gives you peak memory; not sure about the other solvers.

I tried to design this in a way that is extensible: add new attributes and populate them for the solvers that provide them.
I don't know how, or which, solvers monitor this, but feel free to modify my PR as you wish and see whether the pattern feels as extensible as I intended.



Development

Successfully merging this pull request may close these issues.

Achieved gap of the objective

2 participants