- Download the fine-tuned Stacked Hourglass detections of MotionBERT's preprocessed H3.6M data here and unzip it to 'data/motion3d', or directly download our processed data here and unzip it.
- Slice the motion clips by running the Python script `tools/convert_h36m.py` (a conceptual sketch of the slicing follows this list):
```bash
python convert_h36m.py
```
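In case it helps to see what this step does conceptually, below is a minimal sketch of slicing a long pose sequence into fixed-length, overlapping clips. It is not the repository's actual `convert_h36m.py`, and the clip length and stride values are only illustrative:

```python
# Minimal sketch, not the repository's convert_h36m.py: splits a long pose
# sequence into fixed-length, overlapping clips. clip_len and stride are
# illustrative values, not necessarily the project's actual settings.
import numpy as np

def slice_clips(poses, clip_len=243, stride=81):
    """Split a (num_frames, num_joints, dims) array into overlapping clips."""
    starts = range(0, len(poses) - clip_len + 1, stride)
    return np.stack([poses[s:s + clip_len] for s in starts])

dummy_2d = np.zeros((1000, 17, 2))   # e.g. 1000 frames of 17 joints in 2D
print(slice_clips(dummy_2d).shape)   # -> (10, 243, 17, 2)
```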
Please refer to MotionAGFormer for dataset setup.
After dataset preparation, you can train the model as follows:
```bash
CUDA_VISIBLE_DEVICES=0 python train.py --config <PATH-TO-CONFIG> --checkpoint <PATH-TO-CHECKPOINT>
```
where the config files are located in `configs/h36m`.
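For instance, a concrete run could look like the example below; the config name is a placeholder, so substitute one of the files actually present in `configs/h36m`, and `--checkpoint` here is assumed to point at a local checkpoint directory:

```bash
# Hypothetical invocation: the config name is a placeholder, not a real file
CUDA_VISIBLE_DEVICES=0 python train.py \
    --config configs/h36m/<your-config>.yaml \
    --checkpoint checkpoint
```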
Please refer to MotionAGFormer for training.
You can download the pretrained weights and unzip the archive.
After downloading the weights, you can evaluate the Human3.6M models by running:
```bash
python train.py --eval-only --checkpoint <CHECKPOINT-DIRECTORY> --checkpoint-file <CHECKPOINT-FILE-NAME> --config <PATH-TO-CONFIG>
```
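As a concrete, hypothetical example, assuming the downloaded weights were unzipped into a `checkpoint` directory, the call could look like the following; the weight file and config names are placeholders:

```bash
# Hypothetical example: directory, file, and config names are placeholders
python train.py --eval-only \
    --checkpoint checkpoint \
    --checkpoint-file <downloaded-weight-file>.pth \
    --config configs/h36m/<your-config>.yaml
```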
Our demo is a modified version of the one provided by the MotionAGFormer repository. First, download the YOLOv3 and HRNet pretrained models here and put them in the './demo/lib/checkpoint' directory. Next, download our base model checkpoint from here and put it in the './checkpoint' directory. Then put your in-the-wild videos in the './demo/video' directory. We also provide a sample demo; you can download and unzip it to get the demo file. Run the command below:
```bash
python vis.py --video sample_video.mp4 --gpu 0
```

Our code refers to the following repositories:
We thank the authors for releasing their code.
