
# EvEnhancer: Empowering Effectiveness, Efficiency and Generalizability for Continuous Space-Time Video Super-Resolution with Events (CVPR 2025, Highlight)

Authors: Shuoyan Wei¹, Feng Li²*, Shengeng Tang², Yao Zhao¹, Huihui Bai¹*

¹Beijing Jiaotong University · ²Hefei University of Technology

*Corresponding Authors

Project Page · arXiv · GitHub · YouTube Demo

This repository contains the reference code for the paper "EvEnhancer: Empowering Effectiveness, Efficiency and Generalizability for Continuous Space-Time Video Super-Resolution with Events" accepted to CVPR 2025 (Highlight).


EvEnhancer-demo


In this paper, we introduce EvEnhancer, a novel approach that harnesses the unique advantages of event streams to improve the effectiveness, efficiency, and generalizability of continuous space-time video super-resolution. EvEnhancer is underpinned by two critical components: 1) an event-adapted synthesis module (EASM) that capitalizes on the spatiotemporal correlations between frames and events to discern and learn long-term motion trajectories, facilitating the adaptive interpolation and fusion of informative spatiotemporal features; and 2) a local implicit video transformer (LIVT) that integrates a local implicit video neural function with cross-scale spatiotemporal attention to learn continuous video representations, enabling the generation of plausible videos at arbitrary resolutions and frame rates.

## 🔈 News

- 🌟 [Oct 2025] The extended version of our work, EvEnhancerPlus, has been released 👉 arXiv

  This version introduces a controllable switching mechanism (CSM) that achieves better performance at lower computational cost. The extended paper also provides more refined technical details and a clearer explanation of the method. We welcome you to read and cite it!

- ✅ [May 2025] The source code is now available 👉 GitHub
- ✅ [May 2025] The arXiv version of our paper has been released 👉 arXiv
- ✅ [Apr 2025] 🎉 Our paper is selected for presentation as a Highlight at CVPR 2025!
- ✅ [Mar 2025] A demo video for our paper has been released 👉 YouTube
- ✅ [Feb 2025] 🎉 Our paper is accepted to CVPR 2025!

## 📚 Installation

### Dependencies

```bash
git clone https://github.com/W-Shuoyan/EvEnhancer.git
cd EvEnhancer
conda create -n EvEnhancer python=3.9.21
conda activate EvEnhancer
pip install torch==1.10.1+cu111 torchvision==0.11.2+cu111 torchaudio==0.10.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html
pip install -r requirements.txt
python setup.py develop
cd basicsr/archs/DCNv2 && python setup.py install && cd -
```
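
To verify the environment, the following sanity check may help (a minimal sketch; the exact version strings and CUDA availability depend on your machine):

```python
# Quick environment check for the versions installed above.
import torch
import torchvision

print(torch.__version__)          # expected: 1.10.1+cu111
print(torchvision.__version__)    # expected: 0.11.2+cu111
print(torch.cuda.is_available())  # True on a CUDA 11.1-capable machine

# basicsr should be importable after `python setup.py develop`.
import basicsr
print(basicsr.__file__)
```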

## 🚀 Usage

### Data Preparation

| Synthetic Datasets* | Real-World Datasets |
| --- | --- |
| Adobe240 (Train & Eval**) | BS-ERGB (Eval) |
| GoPro (Eval) | ALPIX-VSR (Eval) |

Given a temporal scale t, consecutive (t+1) frames are selected as a clip.
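
For instance, t=8 yields clips of 9 consecutive frames. A minimal sketch of the index arithmetic (the helper below is hypothetical, for illustration only):

```python
def clip_indices(start: int, t: int) -> list:
    """Indices of the (t + 1) consecutive frames that form one clip."""
    return list(range(start, start + t + 1))

print(clip_indices(0, 8))  # [0, 1, 2, 3, 4, 5, 6, 7, 8]
```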

* Event Simulation: We use vid2e to simulate high-resolution events for Adobe240 and GoPro. We first use the pre-trained EMA-VFI video frame interpolation model to interpolate the frames in each clip, and then use esim_torch from vid2e to simulate the events for each clip, with parameters following EvTexture:

```python
import random

import esim_torch

config = {
    'refractory_period': 1e-4,  # seconds; converted to nanoseconds below
    'CT_range': [0.05, 0.5],    # sampling range for the positive contrast threshold
    'max_CT': 0.5,
    'min_CT': 0.02,
    'mu': 1,
    'sigma': 0.1,
}

# Sample the positive contrast threshold uniformly, derive the negative one
# via a Gaussian ratio, then clamp both to [min_CT, max_CT].
Cp = random.uniform(config['CT_range'][0], config['CT_range'][1])
Cn = random.gauss(config['mu'], config['sigma']) * Cp
Cp = min(max(Cp, config['min_CT']), config['max_CT'])
Cn = min(max(Cn, config['min_CT']), config['max_CT'])

esim = esim_torch.ESIM(Cn, Cp, config['refractory_period'] * 1e9)
```
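
Frames are then fed to the simulator as log-intensity images with per-frame timestamps in nanoseconds, per vid2e's esim_torch interface. A hedged sketch with dummy inputs (array shapes and frame rate are illustrative, not the exact pipeline settings):

```python
import numpy as np
import torch

# Dummy clip: 9 grayscale frames with evenly spaced timestamps (240 fps source).
frames = [np.random.randint(0, 256, (180, 320), dtype=np.uint8) for _ in range(9)]
timestamps_s = np.linspace(0.0, 8 / 240.0, num=9)

# esim_torch expects log-intensity frames and int64 timestamps in nanoseconds.
log_frames = torch.stack([
    torch.from_numpy(np.log(f.astype('float32') / 255.0 + 1e-5)) for f in frames
]).cuda()  # requires a CUDA device, matching the install above
timestamps_ns = torch.from_numpy((timestamps_s * 1e9).astype('int64')).cuda()

# `esim` is the simulator constructed above; forward() returns the events
# generated between consecutive frames.
events = esim.forward(log_frames, timestamps_ns)
```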

** The Adobe240 dataset split follows the setting of Preparing Dataset in VideoINR.

To match our dataset-reading code, we recommend organizing all datasets in the same layout as the BS-ERGB dataset:

```
BS-ERGB/
├── 1_TEST/                  # Test set
│   ├── acquarium_08/        # Video sequence
│   │   ├── events/          # Aligned event data
│   │   │   ├── 000000.npz
│   │   │   ├── 000001.npz
│   │   │   └── ...
│   │   └── images/          # Video frames
│   │       ├── 000000.png
│   │       ├── 000001.png
│   │       └── ...
│   └── ...
├── 2_VALIDATION/            # Validation set
│   └── ...
└── 3_TRAINING/              # Training set
    └── ...
```
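
Each events/*.npz file stores the event packet aligned to the frame with the same index. To inspect one (a minimal sketch; the array keys vary by dataset, so listing `data.files` first is the safest approach):

```python
import numpy as np

# Path assumes the BS-ERGB layout shown above.
data = np.load('BS-ERGB/1_TEST/acquarium_08/events/000000.npz')
print(data.files)  # names of the stored arrays (e.g. coordinates, timestamps, polarities)
for key in data.files:
    print(key, data[key].shape, data[key].dtype)
```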

### Training

- Step 1:

```bash
# EvEnhancer-light
python basicsr/train.py -opt options/train/EvEnhancer_light_step1.yml
# EvEnhancer
python basicsr/train.py -opt options/train/EvEnhancer_step1.yml
```

- Step 2:

```bash
# EvEnhancer-light
python basicsr/train.py -opt options/train/EvEnhancer_light_step2.yml
# EvEnhancer
python basicsr/train.py -opt options/train/EvEnhancer_step2.yml
```

### Evaluation

- A demo example: temporal scale t=8, spatial scale s=4

```bash
# GoPro dataset (demo)
# EvEnhancer-light
python basicsr/test.py -opt options/test/EvEnhancer_light_GoPro_demo_T8S4.yml
# EvEnhancer
python basicsr/test.py -opt options/test/EvEnhancer_GoPro_demo_T8S4.yml
```
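
Concretely, t=8 and s=4 mean 8 output time steps per consecutive input frame pair and a 4x spatial upscale. A hypothetical sketch of the output-grid arithmetic (the "t steps per pair, plus the final input frame" convention is an assumption for illustration, not the exact dataloader logic):

```python
def output_grid(h: int, w: int, n_in: int, t: int, s: int):
    """Output resolution and frame count for temporal scale t and spatial scale s."""
    n_out = (n_in - 1) * t + 1  # t steps per input pair, plus the final frame
    return h * s, w * s, n_out

print(output_grid(180, 320, 2, 8, 4))  # (720, 1280, 9)
```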

### Pretrained Model

## 💡 Cite

If you find this work useful for your research, please consider citing our paper. 😎

```bibtex
@inproceedings{wei2025evenhancer,
  title={EvEnhancer: Empowering Effectiveness, Efficiency and Generalizability for Continuous Space-Time Video Super-Resolution with Events},
  author={Wei, Shuoyan and Li, Feng and Tang, Shengeng and Zhao, Yao and Bai, Huihui},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={17755--17766},
  year={2025}
}
```

## 📕 Acknowledgement

Our code is built upon BasicSR, an open-source image and video restoration toolbox based on PyTorch. We also thank the other projects from which we reference code.
