ML-KULeuven/Proof2Seq
Using Certifying Constraint Solvers for Generating Step-wise Explanations

This repository contains the code for the following paper:

Bleukx, Ignace et al. Using Certifying Constraint Solvers for Generating Step-wise Explanations (AAAI 2026)

Abstract: In the field of Explainable Constraint Solving, it is common to explain to a user why a problem is unsatisfiable. A recently proposed method for this is to compute a sequence of explanation steps. Such a step-wise explanation shows individual reasoning steps involving constraints from the original specification that, in the end, explain a conflict. However, computing a step-wise explanation is computationally expensive, limiting the scope of problems for which it can be used. We investigate how we can use proofs generated by a constraint solver as a starting point for computing step-wise explanations, instead of computing them step by step. More specifically, we define a framework of abstract proofs, in which both proofs and step-wise explanations can be represented. We then propose several methods for converting a proof to a step-wise explanation sequence, with special attention to trimming and simplification techniques that keep the sequence and its individual steps small. Our results show that our method significantly speeds up the generation of step-wise explanation sequences, while the resulting step-wise explanations have a quality similar to the current state of the art.
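To illustrate the trimming idea from the abstract, here is a minimal sketch in plain Python. All names are hypothetical and this is not the repository's API: a proof is modeled as a list of derivation steps, each deriving a fact from earlier facts, and backward trimming keeps only the steps that actually contribute to the final conflict.

```python
# Hypothetical sketch of backward proof trimming -- not proof2seq's actual API.
# A proof is a list of (derived_fact, antecedent_facts) pairs; the last step
# derives the conflict. Trimming walks the proof backwards and keeps only the
# steps whose derived fact is needed by a later (kept) step.

def trim_proof(steps):
    """Return the subsequence of steps reachable backwards from the conflict."""
    needed = {steps[-1][0]}          # start from the final (conflict) step
    kept = []
    for fact, antecedents in reversed(steps):
        if fact in needed:           # this step is used downstream: keep it
            kept.append((fact, antecedents))
            needed.update(antecedents)
    kept.reverse()
    return kept

# Toy proof: 'd' never contributes to deriving the conflict, so it is trimmed.
proof = [
    ("a", []),
    ("b", []),
    ("d", ["a"]),            # dead-end derivation
    ("c", ["a", "b"]),
    ("False", ["c"]),        # the conflict
]
print(trim_proof(proof))     # -> [('a', []), ('b', []), ('c', ['a', 'b']), ('False', ['c'])]
```

The real method operates on proofs produced by a certifying solver, but the same reachability idea underlies the trimming phase.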

link to paper pdf

Overview of the repository

.
├── benchmarks # files for benchmarks and model loading from disk
├── example.py # small example generating a sequence of a Sudoku
├── experiments.py # Runner to reproduce the experiments in the paper
├── __init__.py
├── proof2seq # main code of the method
│   ├── __init__.py 
│   ├── minimize.py # code for proof minimization
│   ├── mus.py # implementation of MUS-algorithms used
│   ├── parsing.py # code for solving the model with proof logging enabled 
│   ├── pipeline.py # implementation of the full modular pipeline
│   ├── simplify.py # code for proof simplification
│   └── utils.py # utilities used throughout the code base

Getting started

To run the code, you will need to install the CPMpy constraint-solving library and the Pumpkin solver, plus a few utility packages.

$ pip install cpmpy numpy pandas tqdm natsort maturin

To install the Pumpkin LCG solver at the same version used in the paper, clone the repository and check out the correct commit:

git clone https://github.com/ConSol-Lab/Pumpkin/
cd Pumpkin
git checkout d730b056
cd pumpkin-solver-py
maturin develop

Installing Pumpkin requires a fairly recent Rust build toolchain. For more information on building the solver from source, see the main GitHub page of the Pumpkin solver.

You can also try installing the latest version via pip and enabling the experimental flag in proof2seq/pipeline.py, but we provide no guarantees that this will work: Pumpkin is still under development and its API is subject to change.

If you want to generate a step-wise explanation of your own unsatisfiable CPMpy model, we recommend looking at example.py. There, you can simply drop in your CPMpy model in place of the Sudoku model and run the pipeline shown in the figure below.

Our method can be tweaked to your liking by enabling certain options in the pipeline. For the best results, we recommend using simple trimming in the first phase and global minimization in the second (these are also the pipeline's default parameters).
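Global minimization is MUS-style: it shrinks a conflicting set of constraints while a conflict remains derivable. The deletion-based scheme can be sketched as follows; the `still_conflicts` oracle and all names here are hypothetical placeholders, not proof2seq's interface (in practice the oracle would be a solver call).

```python
# Hypothetical deletion-based shrinking sketch -- not proof2seq's interface.
# Given a list of constraints whose conjunction conflicts, tentatively drop
# each constraint; if the remainder still conflicts, the drop is permanent.

def shrink(constraints, still_conflicts):
    """Return a subset-minimal conflicting subset.

    Assumes still_conflicts(constraints) is True on entry."""
    core = list(constraints)
    i = 0
    while i < len(core):
        candidate = core[:i] + core[i + 1:]   # tentatively drop constraint i
        if still_conflicts(candidate):
            core = candidate                  # redundant: keep it dropped
        else:
            i += 1                            # necessary: move to the next one
    return core

# Toy oracle: a set of integers "conflicts" iff it contains both 1 and 2.
oracle = lambda cs: 1 in cs and 2 in cs
print(shrink([3, 1, 4, 2], oracle))           # -> [1, 2]
```

The result is subset-minimal (no single constraint can be removed), not necessarily smallest; this is the usual trade-off that keeps MUS-style minimization tractable.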

Reproducing the experiments

This repo relies on the runexp tool to run a batch of experiments. Install it with pip install git+https://github.com/IgnaceBleukx/Run-Experiments, then run the following commands to produce the results shown in the paper.

For each benchmark, we split proof generation from the conversion to an explanation sequence. This ensures each method is run on exactly the same proof. To produce the proofs for each benchmark set, run the following commands:

python3 experiments.py configs/sudoku_proof.json get_cpmpy_proof ProofGenerator proofs/sudoku --unravel
python3 experiments.py configs/jobshop_proof.json get_cpmpy_proof ProofGenerator proofs/jobshop --unravel
python3 experiments.py configs/cpbench_proof.json get_cpmpy_proof ProofGenerator proofs/cpbench --unravel

To then convert the proofs to an explanation sequence, run the following commands.

python3 experiments.py configs/sudoku_explain.json get_explanation_sequence ExplanationGenerator results/sudoku --unravel
python3 experiments.py configs/jobshop_explain.json get_explanation_sequence ExplanationGenerator results/jobshop --unravel
python3 experiments.py configs/cpbench_explain.json get_explanation_sequence ExplanationGenerator results/cpbench --unravel

After running these commands, you can run the plot.py script to generate the plots shown in the paper.

Citing

@inproceedings{bleukx2026certifying,
    author = {Ignace Bleukx and 
              Maarten Flippo and 
              Emir Demirović and 
              Bart Bogaerts and 
              Tias Guns},
    title = {Using Certifying Constraint Solvers for Generating Step-wise Explanations},
    booktitle = {Proceedings of The 40th Annual {AAAI} Conference on Artificial Intelligence},
    year = {2026},
    note = {accepted for publication}
}
