πŸš€πŸš€πŸš€Official code for the paper "YOLO-Master: MOE-Accelerated with Specialized Transformers for Enhanced Real-time Detection." *(YOLO = You Only Look Once)* πŸ”₯πŸ”₯πŸ”₯

Hugging Face Spaces Open In Colab arXiv Model Zoo AGPL 3.0 Ultralytics

YOLO-Master: MOE-Accelerated with Specialized Transformers for Enhanced Real-time Detection.

Xu Lin¹*, Jinlong Peng¹*, Zhenye Gan¹, Jiawen Zhu², Jun Liu¹
¹Tencent Youtu Lab   ²Singapore Management University
*Equal Contribution


YOLO-Master Mascot

YOLO-Master is a YOLO-style framework tailored for Real-Time Object Detection (RTOD). It marks the first deep integration of Mixture-of-Experts (MoE) into the YOLO architecture for general datasets. By leveraging Efficient Sparse MoE (ES-MoE) and lightweight Dynamic Routing, the framework achieves instance-conditional adaptive computation. This "compute-on-demand" paradigm allows the model to allocate FLOPs based on scene complexity, reaching a superior Pareto frontier between high precision and ultra-low latency.

Key Highlights:

  • Methodological Innovation (ES-MoE + Dynamic Routing): Utilizes dynamic routing networks to guide expert specialization during training and activates only the most relevant experts during inference, significantly reducing redundant computation while boosting detection performance.
  • Performance Validated (Accuracy Γ— Latency): On MS COCO, YOLO-Master-N achieves 42.4% AP @ 1.62ms latency, outperforming YOLOv13-N with a +0.8% mAP gain while being 17.8% faster.
  • Compute-on-Demand Intuition: Transitions from "static dense computation" to "input-adaptive compute allocation," yielding more pronounced gains in dense or challenging scenarios.
  • Out-of-the-Box Pipeline: Provides a complete end-to-end workflow including installation, validation, training, inference, and deployment (ONNX, TensorRT, etc.).
  • Continuous Engineering Evolution: Includes advanced utilities such as MoE pruning and diagnostic tools (diagnose_model / prune_moe_model), CW-NMS, and Sparse SAHI inference modes.


πŸ’‘ A Humble Beginning (Introduction)

"Exploring the frontiers of Dynamic Intelligence in YOLO."

This work represents our passionate exploration into the evolution of Real-Time Object Detection (RTOD). To the best of our knowledge, YOLO-Master is the first work to deeply integrate Mixture-of-Experts (MoE) with the YOLO architecture on general-purpose datasets.

Most existing YOLO models rely on static, dense computation, allocating the same computational budget to a simple sky background as to a complex, crowded intersection. We believe detection models should be more "adaptive", much like the human visual system. While this initial exploration may not be perfect, it demonstrates the significant potential of Efficient Sparse MoE (ES-MoE) in balancing high precision with ultra-low latency.

Looking forward, we draw inspiration from the transformative advancements in LLMs and VLMs. We are committed to refining this approach and extending these insights to fundamental vision tasks, with the ultimate goal of tackling more ambitious frontiers like Open-Vocabulary Detection and Open-Set Segmentation.

Abstract: Existing Real-Time Object Detection (RTOD) methods commonly adopt YOLO-like architectures for their favorable trade-off between accuracy and speed. However, these models rely on static dense computation that applies uniform processing to all inputs, misallocating representational capacity and computational resources: over-allocating on trivial scenes while under-serving complex ones. This mismatch results in both computational redundancy and suboptimal detection performance.

To overcome this limitation, we propose YOLO-Master, a novel YOLO-like framework that introduces instance-conditional adaptive computation for RTOD. This is achieved through an Efficient Sparse Mixture-of-Experts (ES-MoE) block that dynamically allocates computational resources to each input according to its scene complexity. At its core, a lightweight dynamic routing network guides expert specialization during training through a diversity-enhancing objective, encouraging complementary expertise among experts. Additionally, the routing network adaptively learns to activate only the most relevant experts, thereby improving detection performance while minimizing computational overhead during inference.

Comprehensive experiments on five large-scale benchmarks demonstrate the superiority of YOLO-Master. On MS COCO, our model achieves 42.4% AP with 1.62ms latency, outperforming YOLOv13-N by +0.8% mAP while running 17.8% faster. Notably, the gains are most pronounced on challenging dense scenes, while the model preserves efficiency on typical inputs and maintains real-time inference speed. Code: isLinXu/YOLO-Master


🎨 Architecture

YOLO-Master Architecture

YOLO-Master introduces ES-MoE blocks to achieve "compute-on-demand" via dynamic routing.
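
For intuition, here is a minimal PyTorch sketch of instance-conditional top-k routing with a load-balancing auxiliary term. The class name, the expert MLP shape, and the Switch-style balance loss are illustrative assumptions, not the repository's actual ES-MoE implementation; see the Wiki below for the real design.

# Minimal sketch of sparse top-k expert routing (illustrative only;
# NOT the repository's ES-MoE block).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.num_experts, self.top_k = num_experts, top_k
        self.router = nn.Linear(dim, num_experts)    # lightweight dynamic routing network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, dim), nn.SiLU(), nn.Linear(dim, dim))
            for _ in range(num_experts)
        ])

    def forward(self, x):                             # x: (num_tokens, dim)
        probs = self.router(x).softmax(dim=-1)        # per-token routing distribution
        gate, idx = probs.topk(self.top_k, dim=-1)    # activate only the top-k experts
        gate = gate / gate.sum(dim=-1, keepdim=True)  # renormalize gate weights
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):     # dense loop for clarity; real
            hit = (idx == e)                          # kernels dispatch tokens sparsely
            rows = hit.any(dim=-1)
            if rows.any():
                w = (gate * hit).sum(dim=-1, keepdim=True)[rows]
                out[rows] = out[rows] + w * expert(x[rows])
        # Switch-style load-balancing term: a simple stand-in for the paper's
        # diversity-enhancing objective (the actual loss lives in the repo).
        frac = F.one_hot(idx[:, 0], self.num_experts).float().mean(dim=0)
        aux = self.num_experts * (frac * probs.mean(dim=0)).sum()
        return out, aux

# y, moe_loss = SparseMoE(dim=64)(torch.randn(100, 64))

An auxiliary term of this kind is what surfaces as the moe_loss column in the training logs (see Updates below).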

πŸ“š In-Depth Documentation

For a deep dive into the design philosophy of MoE modules, detailed routing mechanisms, and optimization guides for deployment on various hardware (GPU/CPU/NPU), please refer to our Wiki: πŸ‘‰ Wiki: MoE Modules Explained


πŸš€ Updates (Latest First)

  • 2026/01/16: [feature] Add pruning and analysis tools for MoE models.
    1. diagnose_model: Visualize expert utilization and routing behavior to identify redundant experts.
    2. prune_moe_model: Physically excise redundant experts and reconstruct routers for efficient inference without retraining.
  • 2026/01/16: RepoisLinXu/YOLO-Master Transferred to Tencent.
  • 2026/01/14: ncnn-YOLO-Master-android support deploy YOLO-Master. Thanks to them!
  • 2026/01/09: [feature] Add Cluster-Weighted NMS (CW-NMS) to trade mAP vs speed.

    cluster: False # (bool) cluster NMS (MoE optimized)

  • 2026/01/07: TensorRT-YOLO accelerates YOLO-Master. Thanks to them!
  • 2026/01/07: Add MoE loss explicitly into training.

    Epoch GPU_mem box_loss cls_loss dfl_loss moe_loss Instances Size

  • 2026/01/04: Split MoE script into modules

    Split MoE script into separate modules (routers, experts)

  • 2026/01/03: [feature] Added Sparse SAHI Inference Mode: Introduced a content-adaptive sparse slicing mechanism guided by a global Objectness Mask, significantly accelerating small object detection in high-resolution images while optimizing GPU memory efficiency.
  • 2025/12/31: Released the demo YOLO-Master-WebUI-Demo.
  • 2025/12/31: Released YOLO-Master v0.1 with code, pre-trained weights, and documentation.
  • 2025/12/30: arXiv paper published.

πŸ“Š Main Results

Detection

Radar chart comparing YOLO models on various datasets

Table 1. Comparison with state-of-the-art Nano-scale detectors across five benchmarks.

All mAP / mAP50 values are given in %; latency is measured in ms.

| Method | COCO mAP / mAP50 | PASCAL VOC mAP / mAP50 | VisDrone mAP / mAP50 | KITTI mAP / mAP50 | SKU-110K mAP / mAP50 | Latency (ms) |
| --- | --- | --- | --- | --- | --- | --- |
| YOLOv10 | 38.5 / 53.8 | 60.6 / 80.3 | 18.7 / 32.4 | 66.0 / 88.3 | 57.4 / 90.0 | 1.84 |
| YOLOv11-N | 39.4 / 55.3 | 61.0 / 81.2 | 18.5 / 32.2 | 67.8 / 89.8 | 57.4 / 90.0 | 1.50 |
| YOLOv12-N | 40.6 / 56.7 | 60.7 / 80.8 | 18.3 / 31.7 | 67.6 / 89.3 | 57.4 / 90.0 | 1.64 |
| YOLOv13-N | 41.6 / 57.8 | 60.7 / 80.3 | 17.5 / 30.6 | 67.7 / 90.6 | 57.5 / 90.3 | 1.97 |
| YOLO-Master-N | 42.4 / 59.2 | 62.1 / 81.9 | 19.6 / 33.7 | 69.2 / 91.3 | 58.2 / 90.6 | 1.62 |

Segmentation

| Model | Size | mAP (box, %) | mAP (mask, %) | Gain (mask mAP) |
| --- | --- | --- | --- | --- |
| YOLOv11-seg-N | 640 | 38.9 | 32.0 | - |
| YOLOv12-seg-N | 640 | 39.9 | 32.8 | Baseline |
| YOLO-Master-seg-N | 640 | 42.9 | 35.6 | +2.8% πŸš€ |

Classification

| Model | Dataset | Input Size | Top-1 Acc (%) | Top-5 Acc (%) | Comparison |
| --- | --- | --- | --- | --- | --- |
| YOLOv11-cls-N | ImageNet | 224 | 70.0 | 89.4 | Baseline |
| YOLOv12-cls-N | ImageNet | 224 | 71.7 | 90.5 | +1.7% Top-1 |
| YOLO-Master-cls-N | ImageNet | 224 | 76.6 | 93.4 | +4.9% Top-1 πŸ”₯ |

πŸ–ΌοΈ Detection Examples

(Image gallery: two detection examples and two segmentation examples.)

🧩 Supported Tasks

YOLO-Master builds upon the robust Ultralytics framework, inheriting support for various computer vision tasks. While our research primarily focuses on Real-Time Object Detection, the codebase is capable of supporting:

| Task | Status | Description |
| --- | --- | --- |
| Object Detection | βœ… | Real-time object detection with ES-MoE acceleration. |
| Instance Segmentation | βœ… | Experimental support (inherited from Ultralytics). |
| Pose Estimation | 🚧 | Experimental support (inherited from Ultralytics). |
| OBB Detection | 🚧 | Experimental support (inherited from Ultralytics). |
| Classification | βœ… | Image classification support. |

βš™οΈ Quick Start

Installation

Install via pip (Recommended)
# 1. Create and activate a new environment
conda create -n yolo_master python=3.11 -y
conda activate yolo_master

# 2. Clone the repository
git clone https://github.com/isLinXu/YOLO-Master
cd YOLO-Master

# 3. Install dependencies
pip install -r requirements.txt
pip install -e .

# 4. Optional: Install FlashAttention for faster training (CUDA required)
pip install flash_attn
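
A quick sanity check that the editable install succeeded (this only verifies the import, nothing model-specific):

python -c "import ultralytics; print(ultralytics.__version__)"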

Validation

Validate the model accuracy on the COCO dataset.

from ultralytics import YOLO

# Load the pretrained model
model = YOLO("yolo_master_n.pt") 

# Run validation
metrics = model.val(data="coco.yaml", save_json=True)
print(metrics.box.map)  # map50-95
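
The equivalent Ultralytics-style CLI call should also work, mirroring the predict command shown later:

yolo val model=yolo_master_n.pt data=coco.yaml save_json=True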

Training

Train a new model on your custom dataset or COCO.

from ultralytics import YOLO

# Load a model
model = YOLO('cfg/models/master/v0/det/yolo-master-n.yaml')  # build a new model from YAML

# Train the model
results = model.train(
    data='coco.yaml',
    epochs=600, 
    batch=256, 
    imgsz=640,
    device="0,1,2,3", # Use multiple GPUs
    scale=0.5, 
    mosaic=1.0,
    mixup=0.0, 
    copy_paste=0.1
)
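
To fine-tune from the released weights instead of training from scratch, the standard Ultralytics pattern applies; path/to/custom.yaml is a placeholder for your own dataset config, not a file shipped with this repo:

from ultralytics import YOLO

# Fine-tune pretrained YOLO-Master weights on a custom dataset
model = YOLO("yolo_master_n.pt")
results = model.train(data="path/to/custom.yaml", epochs=100, imgsz=640)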

Inference

Run inference on images or videos.

Python:

from ultralytics import YOLO

model = YOLO("yolo_master_n.pt")
results = model("path/to/image.jpg")
results[0].show()
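
Predictions follow the standard Ultralytics Results API, so per-box fields can be read directly:

# Inspect each detected box: coordinates, confidence, class id
for r in results:
    for box in r.boxes:
        print(box.xyxy.tolist(), float(box.conf), int(box.cls))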

CLI:

yolo predict model=yolo_master_n.pt source='path/to/image.jpg' show=True

Export

Export the model to other formats for deployment (TensorRT, ONNX, etc.).

from ultralytics import YOLO

model = YOLO("yolo_master_n.pt")
model.export(format="engine", half=True)  # Export to TensorRT
# formats: onnx, openvino, engine, coreml, saved_model, pb, tflite, edgetpu, tfjs
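
After an ONNX export (model.export(format="onnx")), a quick smoke test with onnxruntime looks like the sketch below; the 1x3x640x640 dummy input matches the default imgsz but is an assumption, so check sess.get_inputs() for the real signature:

import numpy as np
import onnxruntime as ort

# Load the exported model and run a dummy forward pass
sess = ort.InferenceSession("yolo_master_n.onnx")
inp = sess.get_inputs()[0]                     # inspect the actual input name/shape
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)
outputs = sess.run(None, {inp.name: dummy})    # raw predictions (pre-NMS)
print([o.shape for o in outputs])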

Gradio Demo

Launch a local web interface to test the model interactively. This application provides a user-friendly Gradio dashboard for model inference, supporting automatic model scanning, task switching (Detection, Segmentation, Classification), and real-time visualization.

python app.py
# Open http://127.0.0.1:7860 in your browser

🀝 Community & Contributing

We welcome contributions! Please check out our Contribution Guidelines for details on how to get involved.

  • Issues: Report bugs or request features here.
  • Pull Requests: Submit your improvements.

πŸ“„ License

This project is licensed under the GNU Affero General Public License v3.0 (AGPL-3.0).

πŸ™ Acknowledgements

This work builds upon the excellent Ultralytics framework. Huge thanks to the community for contributions, deployments, and tutorials!

πŸ“ Citation

If you use YOLO-Master in your research, please cite our paper:

@article{lin2025yolomaster,
  title={{YOLO-Master}: MOE-Accelerated with Specialized Transformers for Enhanced Real-time Detection},
  author={Lin, Xu and Peng, Jinlong and Gan, Zhenye and Zhu, Jiawen and Liu, Jun},
  journal={arXiv preprint arXiv:2512.23273},
  year={2025}
}

⭐ If you find this work useful, please star the repository!
