Conversation

@ankitlade12

Feat: Multi-Objective Evaluation and Pareto-Optimal Model Selection

Summary

This PR introduces a robust multi-objective evaluation framework to mlforecast. Traditionally, forecasting pipelines optimize for a single metric (e.g., RMSE). However, real-world applications often require balancing competing objectives such as Accuracy vs. Bias or Precision vs. Interval Width.

This feature enables users to evaluate models against multiple metrics simultaneously and identify the Pareto-Optimal set of models (the "Pareto Frontier")—those that represent the best possible trade-offs between objectives.

Key Changes

1. mlforecast.evaluation Module

A new utility module for specialized performance assessment:

  • PerformanceEvaluator: Automates computation of multiple utilsforecast.losses metrics across all models in cross-validation results.
  • ParetoFrontier:
    • find_non_dominated(): Identifies the models that are not dominated by any other model, i.e., for each retained model no alternative is at least as good on every chosen objective and strictly better on at least one.
    • plot_pareto_2d(): A visualization helper to plot the trade-off curve between two metrics.
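
Conceptually, the dominance check behind `find_non_dominated()` can be sketched in a few lines of pandas/NumPy. The function name, toy scores, and metric columns below are illustrative only (all metrics are assumed to be minimized); this is not the PR's actual implementation:

```python
import numpy as np
import pandas as pd

def non_dominated(scores: pd.DataFrame) -> pd.DataFrame:
    """Return the rows (models) not dominated by any other row.

    Assumes every column is a metric to *minimize*.
    """
    vals = scores.to_numpy()
    keep = []
    for i, row in enumerate(vals):
        others = np.delete(vals, i, axis=0)
        # A row is dominated if some other row is <= on all metrics
        # and strictly < on at least one.
        dominated = np.any(
            np.all(others <= row, axis=1) & np.any(others < row, axis=1)
        )
        keep.append(not dominated)
    return scores[keep]

# Toy scores: xgb is dominated by lgbm (worse rmse and worse |bias|)
scores = pd.DataFrame(
    {"rmse": [1.0, 1.2, 1.1], "abs_bias": [0.5, 0.1, 0.6]},
    index=["lgbm", "ridge", "xgb"],
)
frontier = non_dominated(scores)
```

Here `lgbm` and `ridge` survive because each wins on one objective, while `xgb` is strictly worse than `lgbm` on both and is dropped.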

2. MLForecast.evaluate Convenience Method

Integrated a new .evaluate() method directly into the MLForecast class. This simplifies the workflow by allowing users to go from cross-validation to multi-objective analysis without manual aggregation.
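
For context, the manual aggregation that `.evaluate()` is meant to replace looks roughly like the following. The `cv_results` layout (actuals in `y`, one prediction column per model) and the inline metric helpers are assumptions for illustration, not the library's API:

```python
import numpy as np
import pandas as pd

# Hypothetical cross-validation output: actuals in `y`,
# one prediction column per model.
cv_results = pd.DataFrame(
    {
        "unique_id": ["A"] * 4,
        "y": [10.0, 12.0, 11.0, 13.0],
        "lgbm": [9.5, 12.5, 10.0, 13.5],
        "ridge": [11.0, 11.0, 11.0, 11.0],
    }
)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def bias(y, yhat):
    return float(np.mean(yhat - y))

# One row per model, one column per metric.
models = ["lgbm", "ridge"]
perf = pd.DataFrame(
    {
        m: {
            "rmse": rmse(cv_results["y"], cv_results[m]),
            "bias": bias(cv_results["y"], cv_results[m]),
        }
        for m in models
    }
).T
```

The resulting `perf` table (models as rows, metrics as columns) is the shape of input a Pareto analysis needs; the convenience method removes this boilerplate.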

3. Public API Exposure

Exposed PerformanceEvaluator and ParetoFrontier at the package level for immediate access.


Example Usage

from mlforecast import MLForecast
from mlforecast.evaluation import ParetoFrontier
from utilsforecast.losses import rmse, mae, bias

# 0. `fcst` is assumed to be an already configured MLForecast instance
# 1. Run cross-validation
cv_results = fcst.cross_validation(df, n_windows=3, h=7)

# 2. Evaluate multiple metrics
perf = fcst.evaluate(cv_results, metrics=[rmse, mae, bias])

# 3. Identify Pareto-optimal models
frontier = ParetoFrontier.find_non_dominated(perf)
print("Pareto Optimal Models:", frontier.index.tolist())

# 4. Visualize trade-offs
ParetoFrontier.plot_pareto_2d(perf, 'rmse', 'bias', title="Accuracy vs Bias Trade-off")

@CLAassistant

CLAassistant commented Feb 11, 2026

CLA assistant check
All committers have signed the CLA.

@nasaul nasaul changed the title feat: add multi-objective evaluation and pareto-optimal selection [FEAT] add multi-objective evaluation and pareto-optimal selection Feb 12, 2026
@nasaul
Contributor

nasaul commented Feb 12, 2026

Thanks for your PR @ankitlade12, it looks like a promising feature, but before I review it I need the tests passing. I suggest you look at the Contributing guide so you can run the tests locally.

