MUsculo-Skeleton-Aware (MUSA) Deep Learning for Anatomically Guided Head-and-Neck CT Deformable Registration

This is the official PyTorch implementation of the paper:

Liu, H., McKenzie, E., Xu, D., Xu, Q., Chin, R. K., Ruan, D., & Sheng, K. (2025). MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration. Medical Image Analysis, 99, 103351. https://doi.org/10.1016/j.media.2024.103351

Introduction

MUSA is a two-stage deformable image registration framework for head-and-neck CT. It decomposes the complex head-and-neck deformation into a bulk posture change and residual fine deformation by leveraging spatially variant regularization on bony structures and soft tissue. We highlight the importance of explicit multiresolution modeling and anatomical constraints for achieving anatomically plausible deformations.
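The sketch below illustrates the general idea of spatially variant smoothness regularization in PyTorch: a diffusion-style penalty that is weighted more heavily inside a bone mask than in soft tissue, encouraging near-rigid motion of bony structures. The tensor shapes, weight values, and function name are assumptions for illustration only; this is not the loss implemented in this repository.

```python
import torch

def spatially_variant_smoothness(disp, bone_mask, w_bone=10.0, w_soft=1.0):
    """Illustrative diffusion-style regularizer with different weights on
    bony structures vs. soft tissue (not the repository's exact loss).

    disp:      (B, 3, D, H, W) displacement field
    bone_mask: (B, 1, D, H, W) binary mask, 1 inside bone
    """
    # Finite-difference spatial gradients of the displacement field
    dz = disp[:, :, 1:, :, :] - disp[:, :, :-1, :, :]
    dy = disp[:, :, :, 1:, :] - disp[:, :, :, :-1, :]
    dx = disp[:, :, :, :, 1:] - disp[:, :, :, :, :-1]

    # Per-voxel weight: heavier penalty inside bone, lighter in soft tissue
    w = w_soft + (w_bone - w_soft) * bone_mask

    loss = (
        (dz.pow(2) * w[:, :, 1:, :, :]).mean()
        + (dy.pow(2) * w[:, :, :, 1:, :]).mean()
        + (dx.pow(2) * w[:, :, :, :, 1:]).mean()
    )
    return loss
```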

[Animation preview]
In the animation above, we linearly scale the deformation field to visualize the "deforming process". This is NOT a rigorous way to analyze deformation, because the true transformation is NOT guaranteed to be linear. Nevertheless, it can highlight some aspects of plausibility/implausibility of the entire process.
For the 1-stage method, we divide the total deformation into 10 evenly spaced steps. For the 2-stage method, we apply the stage 1 and stage 2 deformations sequentially, using 5 steps for each stage (10 steps total). The difference is visible in how the head pitches upward and in the Jacobian determinant maps.
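As a rough illustration, the snippet below shows one way to generate such intermediate frames in PyTorch by linearly scaling a voxel-displacement field and warping the moving image at each step. The helper names and tensor shapes are assumptions; this is a sketch of the visualization trick, not the code used to produce the animation.

```python
import torch
import torch.nn.functional as F

def warp(image, disp):
    """Warp a (1, 1, D, H, W) image with a (1, 3, D, H, W) displacement
    field given in voxels, using trilinear interpolation."""
    _, _, D, H, W = image.shape
    # Identity sampling grid in normalized [-1, 1] coordinates, (x, y, z) order
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, D),
        torch.linspace(-1, 1, H),
        torch.linspace(-1, 1, W),
        indexing="ij",
    )
    grid = torch.stack((xx, yy, zz), dim=-1).unsqueeze(0).to(image)
    # Convert voxel displacements to the normalized coordinate range
    scale = torch.tensor([2.0 / (W - 1), 2.0 / (H - 1), 2.0 / (D - 1)]).to(image)
    disp_norm = disp.permute(0, 2, 3, 4, 1) * scale
    return F.grid_sample(image, grid + disp_norm, align_corners=True)

def intermediate_frames(moving, disp_total, steps=10):
    """Linearly scale the total displacement to render `steps` frames.
    This is only a visualization trick: the true transformation between the
    endpoints is not guaranteed to be linear. For the two-stage method the
    same idea is applied to each stage in turn (e.g., 5 steps per stage)."""
    return [warp(moving, disp_total * (s / steps)) for s in range(1, steps + 1)]
```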

Progress

  • Upload musa code
  • Upload training scripts
  • Update README.md

Planned Enhancements

The items below are planned enhancements. They may be delayed or skipped, depending on available time and data availability.

  • Upload pretrained model weights and test scripts for inference
  • Visualization demos

Run the code

Environment setup

Please see requirements.txt.

Train your own model

Follow the training scripts under scripts/.

Dataset and preprocessing

We cannot share the processed dataset. However, the raw inter-subject datasets used in this study can be obtained from The Cancer Imaging Archive (TCIA).

The preprocessing steps include the following:

  • Background removal: Remove the background, including the scanning bed and patient immobilization devices.
  • Standardizing orientation: Reorient all images to follow the convention:
    • i: Right-to-Left (R → L)
    • j: Anterior-to-Posterior (A → P)
    • k: Inferior-to-Superior (I → S)
  • Centering: Rigid alignment to a common template.
  • Intensity clipping and normalization: Clip image intensity values to the range [-1024, 3000] Hounsfield units (HU) and normalize them to the range [0, 1].
  • Spatial interpolation and cropping: All images are resampled to an isotropic voxel spacing of 2 mm using trilinear interpolation and then cropped to a matrix size of 160x160x192. The half-resolution images used in the first stage of the two-stage approaches are downsampled to a 4 mm spacing and a matrix size of 80x80x96 (a minimal sketch of the intensity and resampling steps follows this list).
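Assuming a CT volume that has already been reoriented and resampled to the 2 mm grid and is held as a floating-point PyTorch tensor, the intensity normalization and half-resolution steps could look roughly like the sketch below. The function and variable names are illustrative, not taken from this repository's preprocessing code.

```python
import torch
import torch.nn.functional as F

def preprocess_intensity(ct_hu):
    """Clip CT intensities to [-1024, 3000] HU and rescale to [0, 1].
    ct_hu: float tensor of Hounsfield units, shape (D, H, W)."""
    ct = ct_hu.clamp(-1024.0, 3000.0)
    return (ct + 1024.0) / (3000.0 + 1024.0)

def make_half_resolution(vol):
    """Downsample a (D, H, W) volume from the 2 mm / 160x160x192 grid to the
    4 mm / 80x80x96 grid used by the first stage (trilinear interpolation)."""
    v = vol[None, None]  # add batch and channel dims -> (1, 1, D, H, W)
    v = F.interpolate(v, scale_factor=0.5, mode="trilinear", align_corners=False)
    return v[0, 0]
```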

Segmentations of bony structures and related soft-tissue organs at risk (OARs) can be obtained using existing deep learning-based autosegmentation methods, for example:

Contact

Contributions and feedback are welcome! Please open an issue or submit a pull request. For direct inquiries, you can also reach me at hjliu@g.ucla.edu.

Citation

If you find this repository useful in your research, please consider citing:

@article{liu2025musa,
    title = {MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration},
    journal = {Medical Image Analysis},
    volume = {99},
    pages = {103351},
    year = {2025},
    issn = {1361-8415},
    doi = {10.1016/j.media.2024.103351},
    url = {https://www.sciencedirect.com/science/article/pii/S1361841524002767},
    author = {Hengjie Liu and Elizabeth McKenzie and Di Xu and Qifan Xu and Robert K. Chin and Dan Ruan and Ke Sheng},
}

Code reference

The implementation of MUSA is based on the following open-source code:
