Hi!
Great work and I enjoyed reading the paper a lot.
While the paper/code is on FineGym, Diving48, and FisV, I wonder how, in general, I should process a sequential dataset (e.g., say I have a lot of facial mesh geometry frames) to fit the approach described in the paper. Specifically:
- Do you extract trajectories of the same (temporal) length (e.g., a continuous 90 frames) after you extract the keypoints? (I assume so, given Table 2 in the appendix.)
- Do these trajectories overlap at all? For example, if you have a video `v` of 2000 frames at 30 fps, are the trajectories something like `v[:90]`, `v[90:180]`, `v[180:270]`, etc.? Or could there be a trajectory, e.g., `v[20:110]`, which overlaps with the `v[:90]` trajectory?
- How are the timesteps t provided for samples in these trajectories? For example, will `v[:90]` have timesteps 1-90, and `v[90:180]` have (again) t=1-90?
- Do you randomly select segments (e.g., past, future) from a trajectory? I'm particularly confused by this part, as I did not find a clear answer in the paper. Given a long trajectory, how do you create segments from it, and how many do you create?
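To make the overlap question concrete, here is a minimal sketch of the two slicing schemes I am asking about. The `window` and `stride` values are hypothetical, not taken from the paper:

```python
import numpy as np

def make_trajectories(v, window=90, stride=90):
    """Slice a frame sequence v into fixed-length trajectories.

    stride == window -> non-overlapping: v[:90], v[90:180], ...
    stride <  window -> overlapping windows, e.g. stride=20 gives
                        v[:90], v[20:110], v[40:130], ...
    (window/stride values here are illustrative only.)
    """
    return [v[s:s + window] for s in range(0, len(v) - window + 1, stride)]

# A toy "video": 2000 frames, each a 68-point set of 2D keypoints.
v = np.zeros((2000, 68, 2))

non_overlapping = make_trajectories(v, window=90, stride=90)  # 22 trajectories
overlapping = make_trajectories(v, window=90, stride=20)      # 96 trajectories
```

Which of these (if either) matches your preprocessing is exactly what I am unsure about.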
Thanks in advance for your answers. Again, excellent work!