
Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss


🚂 Overview

Official PyTorch implementation of our ICML 2025 spotlight paper. We introduce:

  • Block Reflector Orthogonal (BRO) layer: a low-rank, approximation-free orthogonal convolutional layer for efficiently constructing Lipschitz neural networks, improving both stability and expressiveness (see the sketch after this list).
  • Logit Annealing (LA) loss: an adaptive loss function that dynamically balances classification margins across samples, leading to enhanced certified robustness.
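
For intuition, below is a minimal, self-contained sketch of the block-reflector construction, applied to a dense linear layer for clarity. The official code applies the construction to convolutions, and the class and parameter names here are illustrative, not the repo's API. The map W = I - 2 V (V^T V)^(-1) V^T is orthogonal by construction, so the layer is exactly 1-Lipschitz:

import torch
import torch.nn as nn

class BlockReflectorLinear(nn.Module):
    """Illustrative dense block reflector: W = I - 2 V (V^T V)^{-1} V^T
    with a learnable low-rank V. W is orthogonal, hence exactly 1-Lipschitz."""

    def __init__(self, dim: int, rank: int):
        super().__init__()
        self.V = nn.Parameter(torch.randn(dim, rank) / dim ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        V = self.V
        # V^T V is rank x rank, so the inverse is cheap for small rank.
        gram_inv = torch.linalg.inv(V.T @ V)
        # Apply W x = x - 2 V (V^T V)^{-1} V^T x without materializing W.
        return x - 2.0 * (x @ V) @ gram_inv @ V.T

# Orthogonality means norms are preserved:
layer = BlockReflectorLinear(dim=64, rank=8)
x = torch.randn(4, 64)
assert torch.allclose(layer(x).norm(dim=1), x.norm(dim=1), atol=1e-4)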

📁 Repository Structure

  • bronet/ — Implementation for the BRONet experiments.
  • lipconvnet/ — Implementation for the LipConvNet experiments.


🚀 Getting Started

To set up the environment and run our code:

1. Requirements

  • Python 3.11
  • PyTorch ≥ 2.0 with CUDA support
  • A recent NVIDIA GPU (e.g., Ampere or newer) is recommended for training and certification
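
Optionally, a quick sanity check that the environment sees a suitable PyTorch build and GPU:

import torch

print(torch.__version__)          # should report 2.0 or newer
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))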

2. Reproduce the paper results

To reproduce the main results in the paper, run the following commands:

cd bronet
bash run.sh

🎯 Pre-trained Models

Dataset                         Model          Checkpoint
ImageNet (Table 1)              BRONet (+LA)   Link
ImageNet w/ EDM2 2M (Table 2)   BRONet (+LA)   Link

To test the provided models, download the checkpoint and config file, then run:

cd bronet
OMP_NUM_THREADS=1 torchrun --nproc_per_node=1 \
  --master_port=$((12000 + RANDOM % 20000)) \
  test.py --launcher=pytorch \
  --config='path_to_config' \
  --resume_from='path_to_downloaded_checkpoint'

See bronet/README.md for instructions on reproducing the results.
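
For background, certification for Lipschitz networks follows from the logit margin: an L-Lipschitz classifier's prediction cannot change within an l2 ball of radius margin / (sqrt(2) * L). A minimal sketch of this standard certificate (the function name is illustrative, not the repo's API):

import torch

def certified_radius(logits, labels, lipschitz_const):
    # Standard margin-based l2 certificate for an L-Lipschitz classifier:
    # radius = (top1 logit - top2 logit) / (sqrt(2) * L), 0 if misclassified.
    top2 = logits.topk(2, dim=1).values
    margin = top2[:, 0] - top2[:, 1]
    radius = margin / (2.0 ** 0.5 * lipschitz_const)
    correct = logits.argmax(dim=1) == labels
    return torch.where(correct, radius, torch.zeros_like(radius))

# Certified accuracy at budget eps is then
# (certified_radius(logits, labels, L) >= eps).float().mean().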

🤝 Acknowledgements

This work builds on and benefits from several open-source efforts; we sincerely thank the authors of these projects for making their work publicly available.

📜 License

This project is licensed under the MIT License; see the LICENSE file for details.

📄 Citation

If you find our work useful, please cite us:

@inproceedings{lai2025enhancing,
    title={Enhancing Certified Robustness via Block Reflector Orthogonal Layers and Logit Annealing Loss},
    author={Bo-Han Lai and Pin-Han Huang and Bo-Han Kung and Shang-Tse Chen},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2025},
    note={Spotlight}
}
