Auxo-: growth, increase; Auxo is a Greek goddess personifying growth.
-drome: a place for running or racing.
“Auxodrome” is also a real word referring to “a plotted curve indicating the relative development of a child at any given age.”
Please follow the installation instructions provided on the pytorch-3dunet GitHub page. After completing those steps:

- Create a conda environment as instructed on the pytorch-3dunet page and activate it.
- Install the required packages listed on the pytorch-3dunet page, and additionally:

  ```
  conda install -c pytorch torchvision pytorch -c conda-forge numpy av
  ```

- Replace the following files in the cloned pytorch-3dunet repository:
  - In the pytorch3dunet/datasets folder, replace hdf5.py with the version provided in the substitution folder of this repository.
  - In the pytorch3dunet/unet3d folder, replace predictor.py with the version provided in the same folder.
Alternatively, set up the environment from scratch with pinned package versions:

- Create a conda environment and activate it:

  ```
  conda create --name 3dunet-env python=3.11
  ```

- Install the following packages:

  ```
  conda install -c conda-forge numpy=1.26.4 av=12.3.0 tensorboard tqdm setuptools h5py scipy scikit-image pyyaml pytest
  conda install pytorch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 pytorch-cuda=12.1 -c pytorch -c nvidia
  ```

- Download the pytorch-3dunet repository from its GitHub page. In the cloned pytorch-3dunet repository, replace the pytorch3dunet folder with the version provided in the substitution folder of this repository.
- Install QuPath.
- Annotate larvae as foreground and all other components (food, eggs, pupae, etc.) as background.
- Export annotations using the provided export script. Label indices should be: foreground = 1, background = 0, unlabeled = 2 (ignored during training).
- Convert the training and validation datasets into HDF5 format with /raw and /label datasets, as specified by pytorch-3dunet (a sketch of this conversion is given below). A minimum of 16 frames along the time axis is required for both the training and validation sets.
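The snippet below is a minimal sketch of this conversion, assuming the frames and label masks have already been loaded as NumPy stacks; the input file names are hypothetical, but the /raw and /label dataset layout is the one pytorch-3dunet expects, and h5py is included in the install list above.

```python
import h5py
import numpy as np

# Hypothetical inputs: a (T, H, W) stack of grayscale video frames and a
# matching (T, H, W) stack of exported label masks (0 = background,
# 1 = foreground, 2 = unlabeled/ignored during training).
raw = np.load("well_frames.npy")    # dtype uint8, shape (T, H, W)
label = np.load("well_labels.npy")  # dtype uint8, same shape as raw

assert raw.shape == label.shape
assert raw.shape[0] >= 16, "at least 16 frames are required on the time axis"

# Write the HDF5 layout that pytorch-3dunet reads: /raw and /label datasets.
with h5py.File("train_well01.h5", "w") as f:
    f.create_dataset("raw", data=raw, compression="gzip")
    f.create_dataset("label", data=label, compression="gzip")
```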
An example training YAML file is provided as train_config.yml in the example folder.
We use a YAML file generator to create YAML files for analyzing experimental videos. This generator identifies the center of each well and creates a testing YAML file for each well separately (a sketch of this idea is given below). Use the trained model for each stage group (eggs & L1, L2, L3 & pupae, and adults) to run tests on that stage separately.
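The generator in this repository is the authoritative version; the following is only a minimal sketch of the idea, assuming a reference frame in which wells appear as circles. The Hough-circle parameters, file names, and config keys are illustrative assumptions, not the actual generator's schema.

```python
import cv2
import yaml

# Hypothetical input: one reference frame of the plate, wells visible as circles.
frame = cv2.imread("plate_reference.png", cv2.IMREAD_GRAYSCALE)

# Detect candidate well centers; dp, minDist, and the radius bounds are
# assumptions that would need tuning for a real plate.
circles = cv2.HoughCircles(
    frame, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
    param1=100, param2=30, minRadius=40, maxRadius=80,
)
assert circles is not None, "no wells detected; adjust the Hough parameters"

for i, (x, y, r) in enumerate(circles[0].astype(int)):
    # One testing YAML per well; the keys below are placeholders, not the
    # actual pytorch-3dunet config schema.
    config = {"well_id": i, "center": [int(x), int(y)], "radius": int(r)}
    with open(f"test_config-well{i:02d}.yml", "w") as f:
        yaml.safe_dump(config, f)
```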
There are two types of testing YAML files: test_config-VideoDataset.yml and test_config-ProbField.yml. Example YAML files for one well are provided in the example folder.
The VideoDataset YAML file uses the trained 3D U-Net model to run predictions on the testing frames you specify, generating batches of raw frames and predicted frames for the specified well. The raw frames are the original AVI videos of that well; the predicted frames are a probability field giving the probability that each pixel belongs to the foreground.
Use CombineVideo.ipynb to combine all the batches of predicted frames into one large video per well. Then use the ProbField YAML file to threshold the probability field, turn the thresholded predicted frames into batches of AVI videos, calculate the areas and centroids of the predicted larvae, and save those two metrics into batches of CSV files for later analysis (a sketch of the thresholding and measurement step follows).
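The ProbField step in this repository is the authoritative implementation; below is a minimal sketch of the thresholding and measurement idea. The threshold value, file names, and the assumption that the predicted video encodes probability as grayscale intensity are illustrative, not the repository's exact conventions.

```python
import csv
import av
import numpy as np
from skimage.measure import label, regionprops

THRESHOLD = 0.5  # assumed probability cutoff; tune for your data

rows = []
with av.open("well01_predicted.avi") as container:
    for t, frame in enumerate(container.decode(video=0)):
        # Assume grayscale pixel values encode foreground probability in [0, 1].
        prob = frame.to_ndarray(format="gray").astype(np.float32) / 255.0
        mask = prob > THRESHOLD
        # Measure each connected blob (candidate larva) in this frame.
        for region in regionprops(label(mask)):
            cy, cx = region.centroid
            rows.append([t, region.area, cx, cy])

# Save per-frame areas and centroids for downstream analysis.
with open("well01_metrics.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "area", "centroid_x", "centroid_y"])
    writer.writerows(rows)
```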
We use a PlotGenerator to find the timings of hatching, pupation, and eclosion for all wells. This ipynb file reads the CSV files generated by the ProbField tests, applies noise filters to the generated metrics, and outputs the timings of hatching, pupation, and eclosion (a sketch of the idea is given below).
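PlotGenerator.ipynb is the authoritative analysis; the following is only a minimal sketch of how one developmental transition could be read out of the per-frame area metric. The median-filter window, the area cutoff, and the hatching criterion are assumptions, not the notebook's actual method.

```python
import numpy as np
import pandas as pd
from scipy.signal import medfilt

# Hypothetical input: the per-frame metrics CSV written for one well.
df = pd.read_csv("well01_metrics.csv")

# Total detected foreground area per frame, zero-filled for empty frames.
area = df.groupby("frame")["area"].sum()
area = area.reindex(range(int(df["frame"].max()) + 1), fill_value=0)

# Median filter to suppress single-frame detection noise.
smooth = medfilt(area.to_numpy(), kernel_size=11)

# Illustrative criterion: hatching is the first frame whose smoothed
# foreground area exceeds a small cutoff.
cutoff = 50  # pixels; assumption, tune per setup
above = smooth > cutoff
hatch_frame = int(np.argmax(above)) if above.any() else None
print("estimated hatching frame:", hatch_frame)
```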
This code is released under the MIT License. See the LICENSE.md file for details.
