
How should I run the inference? #63

Open · AlpTuncay opened this issue Jun 14, 2020 · 0 comments

@AlpTuncay
In this implementation of Faster R-CNN, as I understand it, inference in test_frcnn.py is run with two different networks: first, the image is fed to a network consisting of the shared layers and the RPN; second, the ROI output of the RPN is fed to the classifier network. So there are two separate neural networks, with some processing done on the output of the first network in between. In the paper, however, it is stated that the system is a single, unified network. From that statement, I understand that at inference time there should be just one network: the image goes in, with no disconnect between the two graphs, and the bounding boxes and scores come out.
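For concreteness, here is a minimal sketch of the two-stage flow I am describing. The names (`rpn_to_roi`, the model call signatures, the output shapes) are placeholders for whatever test_frcnn.py actually uses, not the repo's exact API:

```python
import numpy as np

def run_inference(image, model_rpn, model_classifier, rpn_to_roi):
    """Two-network Faster R-CNN inference as I read it in test_frcnn.py."""
    x = np.expand_dims(image, axis=0)  # add a batch dimension

    # Stage 1: shared conv layers + RPN produce objectness scores,
    # box regressions, and the shared feature map.
    rpn_cls, rpn_regr, feature_map = model_rpn.predict(x)

    # In-between step, done in NumPy outside either graph: decode the
    # anchors into region proposals and apply non-maximum suppression.
    rois = rpn_to_roi(rpn_cls, rpn_regr)  # assumed shape: (num_rois, 4)

    # Stage 2: the classifier head pools each ROI from the shared
    # feature map and outputs class scores plus box refinements.
    cls_probs, box_deltas = model_classifier.predict(
        [feature_map, np.expand_dims(rois, axis=0)]
    )
    return cls_probs, box_deltas
```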

  1. Can someone tell me whether my understanding of the original paper is correct?
  2. If so, is it possible to run inference as I explained above?