I'm trying to replicate these results; however, I'm running into memory limitations with hardware similar to that reported in the paper.
- Did you use a minibatch of 8 per GPU (effective minibatch size 32 across 4 GPUs) or a minibatch of 2 per GPU (effective minibatch size 8 across 4 GPUs)?
- Did you freeze the backbone or any other sections of the network during training?
- In your Depth2normalLight implementation, did you iterate over random samples, or unroll them over an additional dimension in your tensors?
- In your Depth2normalLight implementation, did you bilinearly sample your guidance feature map at random continuous locations, or did you simply select pixels from the discrete neighbor set?
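For clarity on that last question, here is a minimal NumPy sketch of the two sampling strategies I mean (hypothetical illustration, not the actual Depth2normalLight code): bilinear sampling blends the four discrete neighbors of a continuous coordinate, while discrete selection just picks the nearest pixel.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly interpolate an (H, W, C) feature map at continuous (y, x)."""
    H, W, _ = feat.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0  # fractional offsets
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])

def nearest_sample(feat, y, x):
    """Select a single discrete neighbor pixel (no interpolation)."""
    return feat[int(round(y)), int(round(x))]

# Toy 4x4 single-channel guidance feature map
feat = np.arange(16, dtype=np.float64).reshape(4, 4, 1)
print(bilinear_sample(feat, 1.5, 1.5))  # blends feat[1,1], feat[1,2], feat[2,1], feat[2,2]
print(nearest_sample(feat, 1.5, 1.5))   # picks one discrete pixel
```

The bilinear variant is differentiable with respect to the sampling coordinates, which is presumably why it would matter here.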
Thanks!