[2026] FreePose: Modeling Frequency-Decoupled Motion Trajectories for 3D Human Pose Estimation

We propose FreePose, a frequency-decoupled framework that separates motion trajectories into low-frequency and high-frequency components for dedicated modeling.
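
To make the core idea concrete, below is a minimal, self-contained sketch of one way to split a motion trajectory into low- and high-frequency parts with a temporal DCT. It illustrates the general decoupling idea only, not the FreePose module itself; the cutoff of 8 coefficients and the 243-frame clip length are assumptions chosen for the example.

import numpy as np
from scipy.fft import dct, idct

def split_frequencies(traj, n_low=8):
    # traj: (T, J, C) trajectory over T frames, J joints, C coordinates.
    coeffs = dct(traj, type=2, axis=0, norm='ortho')     # temporal DCT per joint/axis
    low = np.zeros_like(coeffs)
    low[:n_low] = coeffs[:n_low]                         # keep only the slow components
    low_traj = idct(low, type=2, axis=0, norm='ortho')   # smooth, low-frequency motion
    high_traj = traj - low_traj                          # residual fine-grained motion
    return low_traj, high_traj

motion = np.random.randn(243, 17, 3)                     # e.g. one 243-frame H3.6M clip
low, high = split_frequencies(motion)
print(low.shape, high.shape)                             # (243, 17, 3) each

In this view the low-frequency part captures the overall trajectory, while the high-frequency residual keeps the fast, fine-grained motion, which is the split the framework models with dedicated branches.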

Dataset

Human3.6M

Preprocessing

  1. Download the fine-tuned Stacked Hourglass detections of MotionBERT's preprocessed H3.6M data here and unzip it to 'data/motion3d', or directly download our processed data here and unzip it.
  2. Slice the motion clips by running the following Python script in tools/convert_h36m.py (a sketch of the slicing idea follows the command below):
python convert_h36m.py
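
For orientation, here is a minimal sketch of the kind of clip slicing such a script performs, assuming MotionBERT-style fixed-length clips of 243 frames and a hypothetical stride of 81; tools/convert_h36m.py is the authoritative implementation.

import numpy as np

def slice_clips(sequence, clip_len=243, stride=81):
    # sequence: (T, J, C) array for one subject/action; returns (N, clip_len, J, C).
    clips = [sequence[s:s + clip_len]
             for s in range(0, len(sequence) - clip_len + 1, stride)]
    return np.stack(clips)

seq = np.random.randn(1500, 17, 3)   # a dummy Human3.6M-style sequence
print(slice_clips(seq).shape)        # (16, 243, 17, 3)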

MPI-INF-3DHP

Preprocessing

Please refer to MotionAGFormer for dataset setup.

Training

After dataset preparation, you can train the model as follows:

Human3.6M

CUDA_VISIBLE_DEVICES=0 python train.py --config <PATH-TO-CONFIG> --checkpoint <PATH-TO-CHECKPOINT>

where config files are located at configs/h36m.
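
For example (the config filename below is a hypothetical placeholder; use whichever file in configs/h36m fits your setup, and point --checkpoint at the directory where checkpoints should be stored):

CUDA_VISIBLE_DEVICES=0 python train.py --config configs/h36m/FreePose-base.yaml --checkpoint checkpoint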

MPI-INF-3DHP

Please refer to MotionAGFormer for training.

Evaluation

You can download and unzip the archive to get the pretrained weights.

After downloading the weights, you can evaluate the Human3.6M models by:

python train.py --eval-only --checkpoint <CHECKPOINT-DIRECTORY> --checkpoint-file <CHECKPOINT-FILE-NAME> --config <PATH-TO-CONFIG>
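
For example, assuming the downloaded weights were unzipped into the './checkpoint' directory (the file and config names below are hypothetical placeholders; substitute the actual ones from the archive):

python train.py --eval-only --checkpoint checkpoint --checkpoint-file freepose-h36m-base.pth --config configs/h36m/FreePose-base.yaml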

Demo

Our demo is a modified version of the one provided by the MotionAGFormer repository. We also provide a packaged demo; you can download and unzip it to get the demo files. To run the demo:

  1. Download the YOLOv3 and HRNet pretrained models here and put them in the './demo/lib/checkpoint' directory.
  2. Download our base model checkpoint from here and put it in the './checkpoint' directory.
  3. Put your in-the-wild videos in the './demo/video' directory.
  4. Run the command below:

python vis.py --video sample_video.mp4 --gpu 0
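
As background, demos of this kind typically convert the detected pixel-space 2D keypoints to a normalized coordinate range before feeding them to the 3D lifting model. The snippet below sketches that common normalization step; it is for illustration only, and demo/vis.py remains the authoritative pipeline.

import numpy as np

def normalize_screen_coordinates(kpts, w, h):
    # kpts: (..., 2) pixel coordinates; w, h: frame width and height.
    # Map x to [-1, 1] and scale y by the same factor to preserve aspect ratio.
    assert kpts.shape[-1] == 2
    return kpts / w * 2.0 - np.array([1.0, h / w])

kpts = np.array([[960.0, 540.0], [35.0, 80.0]])   # two points in a 1920x1080 frame
print(normalize_screen_coordinates(kpts, 1920, 1080))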

Acknowledgement

Our code refers to the following repositories:

  - MotionBERT
  - MotionAGFormer

We thank the authors for releasing their code.
