
About speed comparison? #8

Open
IzouGend opened this issue Jul 27, 2018 · 6 comments

Comments

@IzouGend

Thanks for your excellent work!
I noticed that ConvCRF runs on the GPU, while FullCRF runs on the CPU. Isn't it unfair to compare the two algorithms on different platforms?

@MarvinTeichmann
Owner

The reason for this comparison is that there is simply no working FullCRF implementation available on the GPU. CRFasRNN, DeepLab and other segmentation systems utilizing CRFs use the very same CPU implementation. So it is a fair comparison, since it improves on the state of the art.

@suhangpro

Thanks for the excellent work too! I have another question regarding the speed comparison:

You use a 4x4 average pooling before doing message passing, effectively reducing the computation 16-fold. But there isn't such a step in densecrf, right? In your arXiv draft, the only thing that seems relevant is the "Gaussian blur" described in Sec. 4.2. Is that referring to this pooling operation?

@MarvinTeichmann
Owner

Yes, densecrf does bilinear downsampling internally.

@suhangpro

Thanks for the prompt reply!

I understand that there is some interpolation happening when mapping the pixels onto a permutohedral lattice. Are you referring to that? There, the downsampling is done in the high-dimensional lattice space. In ConvCRF, the downsampling is done in image space (XY) instead.
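
For readers trying to picture the difference, here is a minimal PyTorch-style sketch of what downsampling in image (XY) space before message passing could look like. The 4x4 pool, the function name, and the placeholder message-passing step are illustrative assumptions, not the actual ConvCRF code.

```python
import torch
import torch.nn.functional as F

def message_passing_downsampled(unary, features, pool_size=4):
    """Pool inputs in XY, run (placeholder) message passing, upsample back."""
    h, w = unary.shape[-2:]

    # 4x4 average pooling in image space: roughly 16x fewer positions
    # for which pairwise messages have to be computed.
    unary_small = F.avg_pool2d(unary, pool_size)
    feats_small = F.avg_pool2d(features, pool_size)  # would parameterise the Gaussian kernel

    # Stand-in for the truncated-Gaussian message passing step; in practice
    # this is where the local k x k convolutional filtering would happen.
    messages_small = unary_small

    # Bilinear upsampling back to the full input resolution.
    return F.interpolate(messages_small, size=(h, w),
                         mode="bilinear", align_corners=False)

# Example call with random inputs (2 classes, 3-channel features, 32x32 image).
out = message_passing_downsampled(torch.rand(1, 2, 32, 32), torch.rand(1, 3, 32, 32))
```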

@hrtan

hrtan commented Sep 26, 2019

Hi, Marvin:

I notice that in your paper you report the speed of ConvCRF for receptive field sizes from 3 to 13. Here is my question: why is there no conv size larger than 13? And does the speed reported in Table 1 indicate the time cost for just one iteration?

@MarvinTeichmann
Owner

MarvinTeichmann commented Sep 26, 2019

Hi Alex,

The performance does not improve past 13. Also, if you go bigger than 21 (I think it was 21), I ran into GPU memory issues (at 11 GB). The memory consumption also increases quadratically with filter size. So there is no reason to go past 13.
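
A rough back-of-the-envelope sketch of that quadratic growth; the tensor layout, class count, and image size below are illustrative assumptions, not the exact numbers from the paper:

```python
# Illustrative layout: one value per class, per pixel, per position in the
# k x k neighbourhood, i.e. a [num_classes, k*k, height, width] fp32 tensor.
def message_tensor_gb(k, num_classes=21, height=512, width=1024, bytes_per_val=4):
    """Approximate size in GB of the unfolded message-passing tensor."""
    return num_classes * k * k * height * width * bytes_per_val / 1024 ** 3

for k in (3, 7, 11, 13, 21):
    print(f"filter size {k:2d}: ~{message_tensor_gb(k):.1f} GB")
```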
