This is an open-source research library for polyp segmentation. Model configuration follows the OpenMMLab style, while the rest is implemented by us to run experiments faster.
It helps you gain a better understanding of your model by providing several useful debug tools:
- CAM-based visualization
- Per-channel visualization
- Model params and FLOPs count
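As a sketch of the kind of numbers a params/FLOPs counter reports, the stats for a single conv layer can be derived by hand. This is standard conv arithmetic, not the library's actual implementation, and the 352×352 input size is only illustrative:

```python
# Hand-derived stats for one k x k conv layer (stride 1, "same" padding).
# Standard conv arithmetic; not the library's actual counter.
def conv2d_stats(c_in, c_out, k, h, w):
    params = c_in * c_out * k * k + c_out   # weights + biases
    macs = c_out * h * w * c_in * k * k     # multiply-accumulates per forward pass
    return params, macs

print(conv2d_stats(3, 16, 3, 352, 352))  # (448, 53526528)
```

The library's tool aggregates these per-layer numbers over the whole model, which is what makes it useful for comparing backbones and heads.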
This project belongs to Sun-Asterisk Inc.
- Install PyTorch (version 1.10.1 recommended):
  ```shell
  conda install pytorch==1.10.1 torchvision==0.11.2 torchaudio==0.10.1 cudatoolkit=11.3 -c pytorch -c conda-forge
  ```
- Install openmim:
  ```shell
  pip install openmim
  ```
- Install mmcv:
  ```shell
  mim install mmcv-full==1.6.0
  ```
- Install mmseg:
  ```shell
  cd sun-polyp
  pip install -v -e .
  ```
- Install wandb (for logging):
  ```shell
  pip install wandb
  ```
- Install pytorch-lightning:
  ```shell
  pip install pytorch-lightning
  ```
- Install segmentation-models-pytorch:
  ```shell
  pip install segmentation-models-pytorch
  ```
Configure everything in `mcode/config.py`.
What to configure?
- Model:
  - Follow the mmseg config style
  - `pretrained`: path to the ImageNet-pretrained MiT backbone
  - Please change `pretrained` in `backbone` to `pretrained=pretrained`
  - Configure the model head to the head of your choice
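A hedged sketch of what an mmseg-style model dict following the steps above might look like. The type names and checkpoint path are illustrative assumptions, not values taken from the repository:

```python
# Hypothetical mmseg-style model config; the actual fields in
# mcode/config.py may differ.
pretrained = "pretrained/mit_b2.pth"  # assumed path to the ImageNet-pretrained MiT backbone

model = dict(
    type="EncoderDecoder",
    backbone=dict(
        type="MixVisionTransformer",
        # per the README: set `pretrained` in `backbone` to the variable above
        pretrained=pretrained,
    ),
    decode_head=dict(
        type="SegformerHead",  # swap in the head of your choice
        num_classes=1,         # binary polyp mask (assumed)
    ),
)
```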
- Wandb:
  - `use_wandb`: True; set to False if debugging
  - `wandb_key`: please use your wandb authorize key
  - `wandb_name`: name of your experiment; please make it as specific as possible
  - `wandb_group`: we need 5 runs/experiments; grouping makes it easier to see on the wandb UI
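Illustrative values for the wandb fields above; the field names come from this README, while the values are placeholders you should replace:

```python
# Placeholder values for the wandb fields in mcode/config.py.
use_wandb = True                      # set to False when debugging
wandb_key = "YOUR_WANDB_API_KEY"      # your wandb authorize key
wandb_name = "mit_b2-segformer-run1"  # as specific as possible (illustrative name)
wandb_group = "mit_b2-segformer"      # groups the 5 runs of one experiment
```

Putting the shared experiment name in `wandb_group` and a per-run suffix in `wandb_name` makes the 5 runs collapse into one row on the wandb dashboard.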
- Dataset:
  - `train_images`: path to images in the train dataset
  - `train_masks`: path to masks in the train dataset
  - `test_folder`: path to the test dataset
  - `test_images` and `test_masks`: leave them as-is
  - `num_workers`: number of torch DataLoader workers
  - `save_path`: path to save checkpoints and logs
  - `bs`: batch size; this should be 16 if possible
  - `grad_accumulate_rate`: number of iterations to accumulate before each backward pass; if `bs=16`, this should be `1`
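The last two fields interact: the stated defaults suggest an effective batch size of 16 (`bs * grad_accumulate_rate`), which is our assumption rather than something the config states. A minimal helper to pick the rate when a smaller `bs` is all that fits in memory:

```python
# Assumed relationship: effective batch = bs * grad_accumulate_rate = 16.
def pick_grad_accumulate_rate(bs, target_bs=16):
    """Iterations to accumulate gradients before each backward pass."""
    assert target_bs % bs == 0, "bs must divide the target batch size"
    return target_bs // bs

print(pick_grad_accumulate_rate(16))  # 1, matching the README for bs=16
print(pick_grad_accumulate_rate(8))   # 2 when only bs=8 fits in memory
```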