ObjectClear is an object removal model that jointly eliminates the target object and its associated effects using Adaptive Target-Aware Attention, while preserving background consistency.
For more visual results, please check out our project page.
- [2026.02] 🔥 OBER Dataset is Now Released! Our training dataset is now publicly available on Hugging Face 🤗.
- [2025.09] We have released our benchmark datasets for evaluation, along with our results to facilitate comparison.
- [2025.07] Released the inference code and Gradio demo.
- [2025.05] This repo is created.
- [x] Release our training datasets
- [x] Release our benchmark datasets
- [x] Release the inference code and Gradio demo
OBER (OBject-Effect Removal) is a hybrid dataset designed to support research in object removal with effects, combining both camera-captured and simulated data.
🔥 We have released the full dataset OBERDataset_ObjectClear on Hugging Face. We hope it can serve as a strong training resource and benchmark for future object removal research.
🚩 Note that the OBER dataset is made available solely for non-commercial research use. Any use, reproduction, or redistribution must strictly comply with the terms of the NTU S-Lab License 1.0.
1. Clone Repo

```shell
git clone https://github.com/zjx0101/ObjectClear.git
cd ObjectClear
```
2. Create Conda Environment and Install Dependencies

```shell
# create new conda env
conda create -n objectclear python=3.10 -y
conda activate objectclear

# install python dependencies
pip3 install -r requirements.txt

# [optional] install python dependencies for gradio demo
pip3 install -r hugging_face/requirements.txt
```
We provide some examples in the inputs folder. For each run, we take an image and its segmentation mask as input. The segmentation mask can be obtained from interactive segmentation models such as the SAM2 demo. For example, the directory structure can be arranged as follows:
```
inputs
├─ imgs
│  ├─ test-sample1.jpg  # .jpg, .png, .jpeg supported
│  ├─ test-sample2.jpg
└─ masks
   ├─ test-sample1.png
   ├─ test-sample2.png
```
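Given this layout, each image must have a same-named mask. The pairing logic can be sketched as follows; this helper is illustrative only (its name and behavior are our assumption, not part of the ObjectClear codebase):

```python
from pathlib import Path

def pair_images_and_masks(img_dir, mask_dir):
    """Match each image in img_dir to a same-stem .png mask in mask_dir.

    Illustrative helper (not part of the ObjectClear repo): pairs
    imgs/test-sample1.jpg with masks/test-sample1.png by filename stem.
    """
    img_exts = {".jpg", ".jpeg", ".png"}
    # index masks by stem for O(1) lookup
    masks = {p.stem: p for p in Path(mask_dir).iterdir()
             if p.suffix.lower() == ".png"}
    pairs = []
    for img in sorted(Path(img_dir).iterdir()):
        if img.suffix.lower() not in img_exts:
            continue
        mask = masks.get(img.stem)
        if mask is None:
            raise FileNotFoundError(f"No mask found for {img.name}")
        pairs.append((img, mask))
    return pairs
```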
Run the following command to try it out:

```shell
## Single image inference
python inference_objectclear.py -i inputs/imgs/test-sample1.jpg -m inputs/masks/test-sample1.png --guidance_scale 2.5 --use_fp16

## Batch inference on image folder
python inference_objectclear.py -i inputs/imgs -m inputs/masks --guidance_scale 2.5 --use_fp16
```

Note:
- `--guidance_scale` controls the trade-off: higher values lead to stronger removal, while lower values better preserve background details.
- The default setting is `--guidance_scale 2.5`. For all benchmark results reported in our paper, we used `--guidance_scale 1.0`.
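This trade-off is consistent with the standard classifier-free guidance formulation, where the scale extrapolates from the unconditional toward the conditional prediction. A minimal sketch of that combination rule (assuming the standard CFG form; the actual ObjectClear pipeline internals may differ):

```python
def apply_guidance(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance sketch: scale 1.0 returns the
    conditional prediction unchanged, while larger scales push
    further in the conditional direction (stronger removal).
    Operates on flat lists of floats for illustration.
    """
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]
```

With `guidance_scale=1.0` (the paper's benchmark setting) the output equals the conditional prediction; `2.5` (the default) amplifies the conditional signal.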
Our ReMOVE+ metric addresses the limitations of the original ReMOVE by assessing consistency between the output's object-effect region and the input's background (outside the object-effect mask), making it more suitable for object-effect removal evaluation.
Please refer to the detailed instructions in the evaluation/README.md file for installation, setup, and running the ReMOVE+ evaluation pipeline.
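Conceptually, the metric checks that what the model paints into the object-effect region is statistically consistent with the input's untouched background. A toy sketch of that idea, using mean pixel intensity purely for illustration (the real ReMOVE+ pipeline in evaluation/README.md uses learned features; all names here are hypothetical):

```python
def background_consistency(input_img, output_img, effect_mask):
    """Toy sketch of the ReMOVE+ idea: compare statistics of the
    output's inpainted (object-effect) region against the input's
    background outside the mask. Images are flat lists of intensities;
    effect_mask is a flat list of 0/1 flags. Lower score = better fill.
    """
    inside = [o for o, m in zip(output_img, effect_mask) if m]
    outside = [i for i, m in zip(input_img, effect_mask) if not m]
    mean_inside = sum(inside) / len(inside)
    mean_outside = sum(outside) / len(outside)
    return abs(mean_inside - mean_outside)
```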
To skip preparing a segmentation mask yourself, we provide a Gradio demo on Hugging Face, which can also be launched locally. Just drop in your image, assign the target masks with a few clicks, and get the object removal results!
```shell
cd hugging_face

# install python dependencies
pip3 install -r requirements.txt

# launch the demo
python app.py
```

Non-Commercial Use Only Declaration
ObjectClear is made available for use, reproduction, and distribution strictly for non-commercial purposes. The code, models, and datasets are licensed under the NTU S-Lab License 1.0, and any redistribution or use must comply with that license.
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@InProceedings{zhao2026objectclear,
    title     = {Precise Object and Effect Removal with Adaptive Target-Aware Attention},
    author    = {Zhao, Jixin and Wang, Zhouxia and Yang, Peiqing and Zhou, Shangchen},
    booktitle = {CVPR},
    year      = {2026},
}
```

If you have any questions, please feel free to reach us at jixinzhao0101@gmail.com and shangchenzhou@gmail.com.






