Drone View
#653
Replies: 1 comment
Have a read of the original paper: TL;DR it uses parts of the image/features (usually visually sharp edges) and tracks how they move across frames to estimate how the camera moves. That estimated camera motion lets it triangulate the positions of the features it tracks through frames, which gives depth.
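To make the triangulation step concrete, here's a minimal two-view sketch (not the project's actual code) using the standard DLT method: given the camera matrices recovered from the estimated motion and a feature's pixel position in two frames, you solve a small linear system for its 3D position. The camera setup and the point below are made up for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: each view contributes two linear
    constraints on the homogeneous 3D point X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: right singular vector of A with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

def project(P, X):
    """Project a 3D point with camera matrix P to pixel coords."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical cameras: identity intrinsics, first camera at the
# origin, second camera translated 1 unit to the right (t = -R C).
K = np.eye(3)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A made-up 3D feature and its (noise-free) observations in both frames.
X_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # → True
```

The key intuition: the same feature lands at slightly different pixel positions in the two frames (parallax), and once the camera motion between the frames is known, that disparity pins down the feature's depth. In the real system this happens for many features over many frames, with noise, so the estimates are refined jointly rather than solved exactly as here.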
Thank you for this amazing project. I successfully installed it without any issues on Ubuntu 22.04. I'm testing it with video data I captured with a drone around my campus.
I still don't understand how the points are obtained. Is there an explanation of how keypoints can also give depth? The drone flies past buildings, and the depth is visible in the viewer.
Thank you for the answer.