
Training Memory Requirements #4

@BrianPugh

I'm trying to replicate these results, but I'm running into memory limitations on hardware similar to what is reported in the paper.

  1. Did you use a minibatch size of 8 per GPU (effective batch size 32 across 4 GPUs) or a minibatch size of 2 per GPU (effective batch size 8 across 4 GPUs)?
  2. Did you freeze the backbone or any other sections of the network during training?
  3. In your Depth2normalLight implementation, did you iterate over the random samples, or unroll them along an additional tensor dimension? (See the first sketch after this list.)
  4. In your Depth2normalLight implementation, did you bilinearly sample your guidance feature map at random sub-pixel locations, or did you simply select pixels from the discrete neighbor set? (See the second sketch after this list.)
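
To make question 3 concrete, here is a minimal sketch of the two alternatives I have in mind, assuming a PyTorch implementation; the shapes, the sample count `K`, and the `pairwise_term` / `random_shift` helpers are all hypothetical stand-ins, not anything from your code:

```python
import random

import torch

B, C, H, W = 2, 16, 64, 64   # hypothetical batch / channel / spatial sizes
K = 40                       # hypothetical number of random samples per pixel

feat = torch.randn(B, C, H, W)

def pairwise_term(center, sample):
    # placeholder per-sample computation; reduces over the channel dimension
    return (center - sample).pow(2).sum(dim=-3, keepdim=True)

def random_shift(x):
    # crude stand-in for "pick a random neighbor": roll by a random pixel offset
    return torch.roll(x, shifts=random.randint(1, 3), dims=-1)

# (a) iterate over the K samples: only one sampled copy of the feature map
# is alive at any time, so peak memory stays roughly constant in K
acc = torch.zeros(B, 1, H, W)
for _ in range(K):
    acc = acc + pairwise_term(feat, random_shift(feat))
out_loop = acc / K

# (b) unroll the K samples along an extra dimension: a single vectorized op,
# but all K sampled copies (and their activations) are materialized at once
samples = torch.stack([random_shift(feat) for _ in range(K)], dim=1)  # (B, K, C, H, W)
out_unrolled = pairwise_term(feat.unsqueeze(1), samples).mean(dim=1)  # (B, 1, H, W)
```

The unrolled variant's peak activation memory scales roughly linearly with K, which is the kind of trade-off I'm asking about.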
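Similarly for question 4, a sketch of the two sampling strategies I'm contrasting, again with hypothetical shapes and offsets (nothing here is taken from your implementation):

```python
import torch
import torch.nn.functional as F

B, C, H, W = 2, 16, 64, 64             # hypothetical guidance feature map shape
radius = 3.0                           # hypothetical sampling radius in pixels
guidance = torch.randn(B, C, H, W)

# (a) continuous random offsets + bilinear interpolation via grid_sample
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
)
base = torch.stack((xs, ys), dim=-1).expand(B, H, W, 2)        # normalized (x, y) grid
offs_px = (torch.rand(B, H, W, 2) * 2 - 1) * radius            # sub-pixel offsets
offs = offs_px / torch.tensor([(W - 1) / 2.0, (H - 1) / 2.0])  # pixels -> normalized
bilinear = F.grid_sample(guidance, base + offs,
                         mode="bilinear", align_corners=True)  # (B, C, H, W)

# (b) discrete neighbors: integer offsets and plain indexing, no interpolation
dy = torch.randint(-3, 4, (B, H, W))
dx = torch.randint(-3, 4, (B, H, W))
yy = (torch.arange(H).view(1, H, 1) + dy).clamp(0, H - 1)
xx = (torch.arange(W).view(1, 1, W) + dx).clamp(0, W - 1)
b = torch.arange(B).view(B, 1, 1).expand(B, H, W)
discrete = guidance[b, :, yy, xx].permute(0, 3, 1, 2)          # (B, C, H, W)
```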

Thanks!
