
[CVPR'26] ObjectClear: Precise Object and Effect Removal with Adaptive Target-Aware Attention


ObjectClear Logo

Precise Object and Effect Removal with Adaptive Target-Aware Attention

S-Lab, Nanyang Technological University 
CVPR 2026

ObjectClear is an object removal model that jointly eliminates the target object and its associated effects by leveraging Adaptive Target-Aware Attention, while preserving background consistency.

For more visual results, please check out our project page.


⭐ Update

  • [2026.02] 🔥 OBER Dataset is Now Released! Our training dataset is now publicly available on Hugging Face 🤗.
  • [2025.09] We have released our benchmark datasets for evaluation, along with our results to facilitate comparison.
  • [2025.07] Release the inference code and Gradio demo.
  • [2025.05] This repo is created.

✅ TODO

  • Release our training datasets
  • Release our benchmark datasets
  • Release the inference code and Gradio demo

🎃 Overview

overall_structure

📷 OBER Dataset

OBER_dataset_pipeline

OBER (OBject-Effect Removal) is a hybrid dataset designed to support research in object removal with effects, combining both camera-captured and simulated data.

🔥 We have released the full dataset OBERDataset_ObjectClear on Hugging Face. We hope it can serve as a strong training resource and benchmark for future object removal research.

🚩 Note that the OBER dataset is made available solely for non-commercial research use. Any use, reproduction, or redistribution must strictly comply with the terms of the NTU S-Lab License 1.0.

OBER_dataset_samples

⚙️ Installation

  1. Clone Repo

    git clone https://github.com/zjx0101/ObjectClear.git
    cd ObjectClear
  2. Create Conda Environment and Install Dependencies

    # create new conda env
    conda create -n objectclear python=3.10 -y
    conda activate objectclear
    
    # install python dependencies
    pip3 install -r requirements.txt
    # [optional] install python dependencies for gradio demo
    pip3 install -r hugging_face/requirements.txt

⚡ Inference

Quick Test

We provide some examples in the inputs folder. Each run takes an image and its segmentation mask as input. The segmentation mask can be obtained from interactive segmentation models such as the SAM2 demo. For example, the directory structure can be arranged as follows:

inputs
   ├─ imgs
   │   ├─ test-sample1.jpg      # .jpg, .png, .jpeg supported
   │   ├─ test-sample2.jpg
   └─ masks
       ├─ test-sample1.png
       ├─ test-sample2.png

Run the following command to try it out:

## Single image inference
python inference_objectclear.py -i inputs/imgs/test-sample1.jpg -m inputs/masks/test-sample1.png --guidance_scale 2.5 --use_fp16

## Batch inference on image folder
python inference_objectclear.py -i inputs/imgs -m inputs/masks --guidance_scale 2.5 --use_fp16

Note: --guidance_scale controls the trade-off between removal strength and background fidelity: higher values lead to stronger removal, while lower values better preserve background details.
The default setting is --guidance_scale 2.5. For all benchmark results reported in our paper, we used --guidance_scale 1.0.

📊 Evaluation with ReMOVE+

Our ReMOVE+ metric addresses the limitations of the original ReMOVE by assessing consistency between the output's object-effect region and the input's background (outside the object-effect mask), making it more suitable for object-effect removal evaluation.
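To make the idea concrete, here is a toy, pixel-statistics sketch of that consistency check; the actual ReMOVE+ implementation differs (the names and the similarity mapping below are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def remove_plus_toy(input_img, output_img, mask):
    """Toy consistency score: compare the mean color of the OUTPUT inside
    the object-effect mask with the mean color of the INPUT background
    (outside the mask). Returns a similarity in (0, 1]; higher = more
    consistent removal."""
    mask = mask.astype(bool)
    region = output_img[mask].astype(np.float64)        # (N, 3) pixels
    background = input_img[~mask].astype(np.float64)    # (M, 3) pixels
    # L2 distance between mean colors, mapped to a bounded similarity
    dist = np.linalg.norm(region.mean(axis=0) - background.mean(axis=0))
    return 1.0 / (1.0 + dist / 255.0)
```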

Please refer to the detailed instructions in the evaluation/README.md file for installation, setup, and running the ReMOVE+ evaluation pipeline.

🪄 Interactive Demo

To skip preparing a segmentation mask yourself, we provide a Gradio demo on Hugging Face, which can also be launched locally. Just drop in your image, assign the target masks with a few clicks, and get the object removal results!

cd hugging_face

# install python dependencies
pip3 install -r requirements.txt

# launch the demo
python app.py

📝 License

Non-Commercial Use Only Declaration

ObjectClear is made available for use, reproduction, and distribution strictly for non-commercial purposes. The code, models, and datasets are licensed under the NTU S-Lab License 1.0; redistribution and use must follow this license.

📑 Citation

If you find our repo useful for your research, please consider citing our paper:

@InProceedings{zhao2026objectclear,
    title     = {Precise Object and Effect Removal with Adaptive Target-Aware Attention},
    author    = {Zhao, Jixin and Wang, Zhouxia and Yang, Peiqing and Zhou, Shangchen},
    booktitle = {CVPR},
    year      = {2026},
}

📧 Contact

If you have any questions, please feel free to reach us at jixinzhao0101@gmail.com and shangchenzhou@gmail.com.
