- 📄 Paper: Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning
- 🎓 Conference: NeurIPS 2025 (accepted)
- 🧑🏻‍💼 Authors: Remco F. Leijenaar, Hamidreza Kasaei
Figure: Overview of AsymDSD
If you find this repository useful, please cite our paper:
```bibtex
@article{leijenaar2025asymmetric,
  title={Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning},
  author={Leijenaar, Remco F and Kasaei, Hamidreza},
  journal={Advances in Neural Information Processing Systems},
  year={2025}
}
```

Make sure your system supports the following (some of these will be handled automatically when using the Conda environment, but the system CUDA toolkit is required for building PyTorch3D):
- Python 3.11
- CUDA 12.4
- cuDNN 8.9
- GCC version between 6.x and 13.2 (inclusive)
⚠️ The code may work with other versions, but only the above configuration has been tested and verified.
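A quick way to check the installed toolchain (standard system commands, shown here only as a convenience):

```bash
python --version   # expect 3.11.x
nvcc --version     # expect CUDA 12.4
gcc --version      # expect a version between 6.x and 13.2
```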
We recommend using Mamba via Miniforge for managing environments.
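If you do not have Miniforge yet, it can be installed with the official installer script (a sketch based on the Miniforge release naming; verify against the Miniforge README before running):

```bash
curl -L -O "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash "Miniforge3-$(uname)-$(uname -m).sh"
```

Then create and activate the environment: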
```bash
mamba env create -f conda.yaml
conda activate asymdsd
```

Next, install the package in editable mode; this allows you to make changes to the source code and see updates without reinstalling:

```bash
pip install -e .
```

If you prefer not to use Conda, set up the environment using Python's built-in venv:
```bash
python -m venv .venv
source .venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install -r requirements.txt
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable --no-build-isolation --use-pep517
pip install -e .
```

PyTorch3D requires the CUDA toolkit to be installed and available on your system, even when using Conda. It is not provided as a prebuilt wheel. If you run into issues during setup with Conda, try:
```bash
pip install git+https://github.com/facebookresearch/pytorch3d.git@stable --no-build-isolation --use-pep517
```

If you run out of memory while compiling, remove ninja and try again:

```bash
mamba remove ninja
```
🐢 This will slow down the build process but significantly reduce memory usage.
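As an alternative workaround (not from the repository documentation), PyTorch's extension builder honors the `MAX_JOBS` environment variable when ninja is present, so you can cap build parallelism instead of removing ninja:

```bash
MAX_JOBS=2 pip install git+https://github.com/facebookresearch/pytorch3d.git@stable --no-build-isolation --use-pep517
```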
With the provided configurations, dataset files should be placed inside the `data` folder. Do not extract the archive files; the code reads directly from the compressed archives.
- Request access via ShapeNet/ShapeNetCore-archive on Hugging Face.
- After approval, download `ShapeNetCore.v2.zip` and place it in the `data` folder.
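For example, once access has been granted, the archive can be fetched with the Hugging Face CLI (a sketch, assuming `huggingface_hub` is installed and you are logged in with `huggingface-cli login`; verify the exact file name in the repository):

```bash
huggingface-cli download ShapeNet/ShapeNetCore-archive ShapeNetCore.v2.zip \
  --repo-type dataset --local-dir data
```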
```bash
wget -P data http://modelnet.cs.princeton.edu/ModelNet40.zip
```

📖 For more information, visit the ModelNet40 project page.
- Visit the ScanObjectNN website and agree to the terms of use.
- Download the dataset `h5_files.zip` and place it in the `data/ScanObjectNN` directory.
```bash
mkdir -p data/ScanObjectNN
# Replace the placeholder below with the actual download link after gaining access
wget -P data/ScanObjectNN <DOWNLOAD_LINK>
```

- Download the dataset from ModelNet40 Few-Shot by selecting all files and downloading them as a single zip file `ModelNetFewshot.zip`.
- Place the zip file in the `data` folder.
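Once all archives are in place, the `data` directory should look roughly like this (layout inferred from the steps above):

```
data/
├── ShapeNetCore.v2.zip
├── ModelNet40.zip
├── ModelNetFewshot.zip
└── ScanObjectNN/
    └── h5_files.zip
```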
To start pretraining the small version of AsymDSD on ShapeNetCore, run:
```bash
sh shell_scripts/sh/train_ssrl.sh
```

🧭 You may be prompted to log in to Weights & Biases (wandb).
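If you prefer to authenticate ahead of time (assuming you already have a W&B account and API key), you can log in once beforehand:

```bash
wandb login
```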
The first time you run this, it will compile and preprocess the datasets. This process may take a while, but all data is cached under the data directory—making subsequent runs much faster.
To train with specific modes, use the corresponding configuration files:
- MPM mode:

  ```bash
  sh shell_scripts/sh/train_ssrl.sh --model configs/ssrl/variants/model/ssrl_model_mask.yaml
  ```

- CLS mode:

  ```bash
  sh shell_scripts/sh/train_ssrl.sh --model configs/ssrl/variants/model/ssrl_model_cls.yaml
  ```
💡 To accelerate pre-training, you can disable evaluation callbacks by editing the trainer config file, or skip all callbacks by passing `--trainer.callbacks null`.
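For example, combined with the pre-training command above:

```bash
sh shell_scripts/sh/train_ssrl.sh --trainer.callbacks null
```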
To evaluate the model on object recognition tasks, use the following command:
```bash
python shell_scripts/py/train_neural_classifier_all.py --runs <num_eval_runs> --model.encoder_ckpt_path <path_to_model>
```

For the MPM pre-trained version without a CLS token, you can add:

```bash
--model configs/classification/variants/model/classification_model_mask.yaml
```
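Putting the two together, a full evaluation call for the MPM variant would look like this (simply the documented flags combined; placeholders kept as-is):

```bash
python shell_scripts/py/train_neural_classifier_all.py --runs <num_eval_runs> \
  --model.encoder_ckpt_path <path_to_model> \
  --model configs/classification/variants/model/classification_model_mask.yaml
```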
For few-shot evaluation on ModelNet40:

```bash
python shell_scripts/py/train_neural_classifier_all.py --model.encoder_ckpt_path <path_to_model>
```

🔍 You can find logged results on Weights & Biases. A link to the run is provided in the script output.
We plan on releasing the following resources:
- Pre-trained Models: Checkpoints for both small and base versions of AsymDSD, including AsymDSD-CLS and AsymDSD-MPM.
- Additional Datasets: Dataset preparation modules including Mixture and Objaverse.
- Training Scripts: Full training configurations for larger model variants and part segmentation on ShapeNet-Part.
