Hello, I have calculated the FLOPs of NeuFlowV2 and SEA-RAFT and found that NeuFlowV2's FLOPs seem to be higher, so why is it so much faster than SEA-RAFT? #3

Open
yuefanhao opened this issue Sep 2, 2024 · 2 comments

Comments

@yuefanhao

No description provided.

@Study-is-happy
Collaborator

Very good question. Our assumption is that a depth-wise convolution has far fewer FLOPs, but the low-level library (cuDNN) does not optimize its computation time nearly as well. We use many 3x3 convolution layers, which are highly optimized by the low-level library.
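
To illustrate this point, here is a minimal benchmarking sketch (not from the repository; it assumes PyTorch with a CUDA device and a hypothetical 128-channel feature map). A depth-wise 3x3 convolution has roughly C times fewer multiply-accumulates per output element than a dense 3x3 convolution, yet its measured latency is often comparable because cuDNN's dense 3x3 kernels are heavily optimized.

```python
# Sketch: compare a depth-wise 3x3 conv against a dense 3x3 conv with the same
# channel count. The depth-wise layer has far fewer FLOPs, but its wall-clock
# time can be similar or worse than the dense conv on GPU.
import time
import torch
import torch.nn as nn

C, H, W = 128, 96, 128                      # hypothetical feature-map size
x = torch.randn(1, C, H, W, device="cuda")

dense = nn.Conv2d(C, C, 3, padding=1).cuda()
depthwise = nn.Conv2d(C, C, 3, padding=1, groups=C).cuda()  # groups=C -> depth-wise

@torch.no_grad()
def bench(layer, iters=100):
    # Warm up, then time with explicit synchronization so GPU work is included.
    for _ in range(10):
        layer(x)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(iters):
        layer(x)
    torch.cuda.synchronize()
    return (time.perf_counter() - t0) / iters * 1e3  # ms per call

# FLOPs per output element: dense ~ 9*C MACs, depth-wise ~ 9 MACs (C times fewer),
# yet the latency gap is usually much smaller than the FLOP gap suggests.
print(f"dense 3x3:      {bench(dense):.3f} ms")
print(f"depth-wise 3x3: {bench(depthwise):.3f} ms")
```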

@yuefanhao
Author

@Study-is-happy Thank you very much. Another question: have you done any optimization for the cost volume? You run one iteration at 1/16 resolution and eight iterations at 1/8 resolution, and that operation consumes a lot of compute and memory. I need to estimate optical flow at a larger resolution, so the cost volume will be very large and very time-consuming to look up.
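
For reference, a minimal sketch (not the repository's implementation) of an all-pairs correlation volume shows why memory and lookup cost grow so quickly with resolution: for a feature map of size (H/8) x (W/8), the volume has ((H/8)*(W/8))^2 entries, so doubling the input resolution multiplies its size by 16.

```python
# Sketch: build an all-pairs cost volume at 1/8 resolution and estimate its size.
import torch

def all_pairs_cost_volume(f1, f2):
    """f1, f2: (B, C, H, W) feature maps; returns (B, H*W, H, W) correlations."""
    B, C, H, W = f1.shape
    f1 = f1.flatten(2)                                      # (B, C, H*W)
    f2 = f2.flatten(2)                                      # (B, C, H*W)
    corr = torch.einsum("bci,bcj->bij", f1, f2) / C ** 0.5  # (B, H*W, H*W)
    return corr.view(B, H * W, H, W)

# Example at 1/8 resolution of a 1920x1080 input (a 240x135 feature map):
H8, W8 = 135, 240
entries = (H8 * W8) ** 2
print(f"{entries / 1e9:.1f} G entries  ->  {entries * 4 / 1e9:.1f} GB in fp32")
```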
