
Has anyone tried using EigenGradCAM for Class Activation Mapping of YOLOv5? #548

Open
BaTteREg opened this issue Dec 30, 2024 · 1 comment

BaTteREg commented Dec 30, 2024

While adding back-propagation gradients, I found that in the BaseCAM function the model output, self.outputs, has shape (25200, 85). When continuing with the gradient calculation, how should I select the output tensors to compute the loss and complete the back-propagation?

        if self.uses_gradients:
            self.model.zero_grad()
            # for param in self.model.parameters():  # set requires_grad=True
            #     if not param.requires_grad:
            #         param.requires_grad = True
            for name, param in self.model.named_parameters():
                print(f"Parameter name: {name}, requires_grad: {param.requires_grad}")
            loss = sum([target(output) for target, output in zip(targets, outputs)])
            loss.backward(retain_graph=True)
            if 'hpu' in str(self.device):
                self.__htcore.mark_step()

Here `outputs` has shape (25200, 85).
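One possible approach, sketched below: the `targets` passed to BaseCAM are just callables applied to the model output, so a custom target can reduce the raw (25200, 85) tensor to a scalar itself. The class below is a hypothetical example (not part of pytorch-grad-cam) that assumes the usual YOLOv5 layout, columns 0-3 box coordinates, column 4 objectness, columns 5-84 class scores, and picks the best-scoring anchor for one class:

```python
import torch

class YoloBoxScoreTarget:
    """Hypothetical CAM target for a raw YOLOv5 output of shape (25200, 85).

    Assumes column 4 is objectness and columns 5-84 are class scores.
    Returns the highest objectness * class-score product for the requested
    class, so backward() only flows through the most relevant prediction.
    """

    def __init__(self, class_idx: int):
        self.class_idx = class_idx

    def __call__(self, output: torch.Tensor) -> torch.Tensor:
        # output: (num_anchors, 85) raw predictions for a single image
        scores = output[:, 4] * output[:, 5 + self.class_idx]
        return scores.max()  # scalar, keeps the autograd graph

# Usage sketch with a dummy tensor standing in for the model output.
raw = torch.rand(25200, 85, requires_grad=True)
target = YoloBoxScoreTarget(class_idx=0)
loss = target(raw)
loss.backward()  # gradients reach `raw`
```

Summing such a target over all images, as the `loss = sum([...])` line in BaseCAM does, then gives a single scalar to back-propagate.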

BaTteREg (author) commented:
May I ask whether the output of the YOLO model needs to go through Non-Maximum Suppression (NMS) processing here?
