
How was Figure 4(a) produced? #10

@XuM007

Thanks for your wonderful work. I am curious about how Figure 4(a) in the paper was drawn.
[Figure 4(a) from the paper]

```python
# rgb_cams_ = (rgb_cams_ - self.mean.to(device)) / self.std.to(device)  # B*S,3,H,W
# feat_cams_ = self.encoder(rgb_cams_)  # B*S,latent_dim,H/8,W/8
feat_cams_ = rgb_cams_
```

I simply set feat_cams_ to rgb_cams_ in the MVDet code. With this change I get a reasonable single-view projection visualization, but I cannot get a reasonable multi-view fusion result (tried on the MultiviewX dataset).
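For reference, here is a minimal sketch of the single-view check that does work for me (assuming feat_mems[0] holds the per-view projections with shape (S, 3, H, W), matching the snippet below; the save path and the [0, 1] value range are assumptions on my side):

```python
import matplotlib.pyplot as plt

# Assumed shape: feat_mems[0] is (S, 3, H, W) after the RGB substitution above.
pro_img_all = feat_mems[0].permute(0, 2, 3, 1).cpu().numpy()  # (S, H, W, 3)
for view_idx, pro_img in enumerate(pro_img_all):
    # clip in case raw RGB values fall slightly outside [0, 1]
    plt.imsave(f'proj_view_{view_idx}.png', pro_img.clip(0, 1))
```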

For the multi-view fusion I then average the projected views:

```python
pro_img_all = feat_mems[0].permute(0, 2, 3, 1).cpu().numpy()  # (6, 3, 160, 250) -> (6, 160, 250, 3)
pro_img_all = np.mean(pro_img_all, axis=0)  # (160, 250, 3)
```

[screenshot of my multi-view fusion result]
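One possible culprit (just a guess on my side, not confirmed): np.mean averages over all six views, including the zeros each view writes outside its camera frustum, which would wash the fused image out. A sketch that instead averages only over the views that actually cover each BEV cell, assuming out-of-frustum cells are exactly zero in a view's projection:

```python
import numpy as np

# Assumption: BEV cells outside a camera's frustum are exactly zero in that
# view's projection, so any nonzero pixel marks coverage by that view.
pro_img_all = feat_mems[0].permute(0, 2, 3, 1).cpu().numpy()  # (6, 160, 250, 3)
valid = (pro_img_all.sum(axis=-1, keepdims=True) > 0)         # (6, 160, 250, 1)
count = valid.sum(axis=0).clip(min=1)                         # covering views per cell
fused = pro_img_all.sum(axis=0) / count                       # mean over covering views only
```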

Could you share how this figure was made? I think this kind of visualization is a good way to verify whether the projection/mapping process is correct.
