
MMHCL: Multi-Modal Hypergraph Contrastive Learning for Recommendation

This is the PyTorch implementation for our ACM TOMM'2025 paper: MMHCL: Multi-Modal Hypergraph Contrastive Learning for Recommendation.

(Figure: overall framework of MMHCL.)

🚀 MMHCL is a multimodal recommendation framework designed to alleviate data sparsity and cold-start issues. It constructs two hypergraphs: a user-to-user (u2u) hypergraph that mines shared preferences among users, and an item-to-item (i2i) hypergraph that captures intricate multimodal semantic similarities among items. On top of these, we design a synergistic contrastive learning paradigm to effectively enhance feature distinguishability.
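To make the two ideas concrete, here is an illustrative PyTorch sketch, not the repository's actual code: a k-nearest-neighbor construction that groups each node with its most similar neighbors (the kind of structure a u2u or i2i hypergraph encodes), and an InfoNCE-style contrastive objective between two embedding views of the same nodes. The function names `knn_hyperedges` and `info_nce` are hypothetical.

```python
import torch
import torch.nn.functional as F

def knn_hyperedges(features, k=10):
    # For every node, form one neighbor group linking it to its k most
    # similar nodes by cosine similarity of its (multimodal) features.
    # Illustrative sketch of hypergraph-style neighborhood construction.
    normed = F.normalize(features, dim=-1)
    sim = normed @ normed.t()
    return sim.topk(k, dim=-1).indices  # (n, k) neighbor ids per node

def info_nce(z1, z2, temperature=0.2):
    # Contrastive objective between two views of the same nodes:
    # matching rows are positives, all other rows act as negatives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

In this sketch, maximizing agreement between the hypergraph-enhanced view and the collaborative view of the same user (or item) is what sharpens feature distinguishability.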

Dependencies

The code has been tested under Python 3.10.13. All experiments are conducted on an NVIDIA RTX 3090 GPU (24 GB). The required packages are as follows:

Datasets

The original recommendation datasets can be found via the TikTok platform and Amazon product data.

✨✨ We publicly provide three pre-processed datasets that require no additional processing, including (1) basic user-item interactions and (2) multi-modal features, available on Google Drive/Baidu Cloud/Huggingface. The TikTok dataset is provided by, and identical to, that of MMSSL. The Clothing and Sports datasets are provided by, and identical to, those of LATTICE. Thanks for their work!

🔥🔥 To advance multimodal recommender systems, we provide reference code for preprocessing the original Amazon datasets in preprocess/data_preprocess.ipynb to support other research needs (e.g., multimodal sequential recommendation). We spent considerable time on this preprocessing. If it is helpful to your work, please kindly cite our paper.
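As a rough illustration of what Amazon preprocessing of this kind typically involves (the notebook contains the authors' actual pipeline), the sketch below assumes gzipped JSON-lines review files with `reviewerID`/`asin` fields; the function names are hypothetical.

```python
import gzip
import json
from collections import Counter

def load_interactions(path):
    # Parse Amazon review records (gzipped JSON lines) into (user, item) pairs.
    pairs = []
    with gzip.open(path, "rt") as f:
        for line in f:
            r = json.loads(line)
            pairs.append((r["reviewerID"], r["asin"]))
    return pairs

def k_core_filter(pairs, k=5):
    # Iteratively drop users and items with fewer than k interactions
    # (the common "k-core" filtering step) until the set is stable.
    while True:
        u_cnt = Counter(u for u, _ in pairs)
        i_cnt = Counter(i for _, i in pairs)
        kept = [(u, i) for u, i in pairs if u_cnt[u] >= k and i_cnt[i] >= k]
        if len(kept) == len(pairs):
            return kept
        pairs = kept
```

After filtering, users and items are usually re-indexed to contiguous integer ids before building the interaction matrix and aligning the multi-modal features.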


Usage

Download the datasets (Google Drive/Baidu Cloud/Huggingface) and place them in the project subdirectory:

```
MMHCL/
├── data/
│   ├── Tiktok/
│   ...
├── codes/
...
```

Start training and inference as:

```shell
cd MMHCL
python ./main.py --dataset {DATASET}
```

Supported datasets: Tiktok, Amazon-Clothing, Amazon-Sports. Further parameters can be tuned in codes/utility/parser.py, and the best training logs are kept in /log.
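For orientation, a command-line parser of this kind usually looks like the minimal sketch below; this is a hypothetical example, and the actual flags and defaults are defined in codes/utility/parser.py.

```python
import argparse

def parse_args(argv=None):
    # Hypothetical sketch of a training-argument parser; only --dataset
    # is taken from the README, the other flags are illustrative.
    p = argparse.ArgumentParser(description="MMHCL training")
    p.add_argument("--dataset", default="Tiktok",
                   choices=["Tiktok", "Amazon-Clothing", "Amazon-Sports"])
    p.add_argument("--lr", type=float, default=1e-3)
    p.add_argument("--epoch", type=int, default=1000)
    return p.parse_args(argv)
```

Usage: `parse_args(["--dataset", "Amazon-Sports"])` returns a namespace whose `dataset` attribute selects the data subdirectory.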

Citing

If you find this work helpful to your research, please kindly consider citing our paper.

@article{guo2025mmhcl,
  title={MMHCL: Multi-Modal Hypergraph Contrastive Learning for Recommendation},
  author={Guo, Xu and Zhang, Tong and Wang, Fuyun and Wang, Xudong and Zhang, Xiaoya and Liu, Xin and Cui, Zhen},
  journal={arXiv preprint arXiv:2504.16576},
  year={2025}
}

Acknowledgement

The structure of this code is mainly based on MMSSL and LATTICE. Thanks for their excellent work!
