
Has the code for the attention-mechanism interaction in your MIA module been commented out? This seems inconsistent with the description in your paper. #12

@L0310

Description

[screenshot of the commented-out code]

Or is it because the default model in the code is ResNet-50, so convolution is used in place of the attention mechanism? If that is the case, could you please provide the code for the SwinTransformer variant?
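For reference, here is a minimal sketch of the kind of cross-attention feature interaction being asked about, as opposed to a convolutional one. This is an illustrative example only: the function name `cross_attention` and all shapes are hypothetical and are not taken from the repository or the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat):
    """Scaled dot-product cross-attention between two feature sets.

    q_feat:  (N, d) query tokens from one branch
    kv_feat: (M, d) key/value tokens from the other branch
    Returns (N, d) attended features.
    """
    d = q_feat.shape[-1]
    scores = q_feat @ kv_feat.T / np.sqrt(d)  # (N, M) affinities
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ kv_feat                  # weighted sum of values

# toy example: 4 query tokens attending over 6 key/value tokens
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
kv = rng.standard_normal((6, 8))
out = cross_attention(q, kv)
print(out.shape)  # (4, 8)
```

A convolutional substitute would instead mix features with a fixed local kernel; the attention version above computes a data-dependent global weighting, which is presumably what the paper's MIA description refers to.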
