about the training implementation #5
Comments
ConvCRFs can be trained using PyTorch. Training is straightforward and works like training any other neural network: iterate over the training data, apply a softmax cross-entropy loss, and use the PyTorch autograd package to backpropagate. I strongly recommend that you implement your own pipeline; having a good understanding of your training process is quite crucial in deep learning. I am considering making my pipeline public, but the code is currently quite messy, undocumented, and will not work out of the box. I think implementing your own pipeline by following some of the PyTorch tutorials is much more rewarding and easier than trying to make mine work. Edit: I deleted part of my earlier response to increase my overall niceness. You can find the full response in the changelog. |
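For reference, a minimal sketch of such a training loop could look like the following. This is not the author's pipeline; `model`, `loader`, and the tensor shapes are assumptions for illustration only.

```python
# Minimal per-pixel training-loop sketch (illustrative, not the author's pipeline).
# Assumptions: `model` maps (unary, image) to per-pixel class logits of shape
# [B, C, H, W]; `loader` yields (image, unary, label) with integer labels [B, H, W].
import torch
import torch.nn as nn

def train(model, loader, num_epochs=10, lr=1e-3, device="cuda"):
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()                  # softmax + NLL in one, numerically stable
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    for epoch in range(num_epochs):
        for image, unary, label in loader:
            image, unary, label = image.to(device), unary.to(device), label.to(device)

            logits = model(unary, image)               # [B, C, H, W] raw scores
            loss = criterion(logits, label)            # label: [B, H, W] class indices

            optimizer.zero_grad()
            loss.backward()                            # autograd backprop
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```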
Thanks for your detailed response. I appreciate it, and I agree with you. I will implement my own pipeline according to your paper and my task. |
Hi Marvin, Hai |
Hi Hai, may I ask why you used the NLL loss and not the cross-entropy loss in the training? Thanks |
Hi prio1988, I think NLL loss is actually multi-class cross entropy, right? It should also work when I set the model to work on only two classes, that is, background and foreground. Right? |
NLL loss assumes that you have already applied a LogSoftmax layer on top of your network. The multi-class cross-entropy loss is torch.nn.CrossEntropyLoss; I think you should probably use the latter. I am also still wondering why a log-softmax is applied to the unaries rather than just a softmax. |
Oh, thank you for the very good suggestion! I will dig into the question of LogSoftmax+NLL versus softmax+CrossEntropyLoss. I read somewhere that log-softmax is numerically more stable than softmax. |
If you use CrossEntropyLoss you can also avoid the softmax; it is applied internally by the loss. |
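As a side note, the equivalence discussed here can be checked directly with a toy example (the tensors and shapes below are made up for illustration):

```python
# Quick check that LogSoftmax + NLLLoss matches CrossEntropyLoss on raw logits.
import torch
import torch.nn.functional as F

logits = torch.randn(2, 5, 4, 4)                 # [B, C, H, W] raw scores
target = torch.randint(0, 5, (2, 4, 4))          # [B, H, W] class indices

loss_ce  = F.cross_entropy(logits, target)                       # softmax done internally
loss_nll = F.nll_loss(F.log_softmax(logits, dim=1), target)      # explicit log-softmax first

print(torch.allclose(loss_ce, loss_nll))         # True (up to floating-point error)
```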
OK, then that would be much better, since the implementation of CrossEntropyLoss already takes care of the numerical stability issues. Thank you! |
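To illustrate the numerical stability point with a toy example (the tensors below are made up, not from the repository):

```python
# Taking log(softmax(x)) in two steps can underflow to -inf for large logits,
# while log_softmax uses the log-sum-exp trick and stays finite.
import torch
import torch.nn.functional as F

x = torch.tensor([[1000.0, 0.0]])

print(torch.log(F.softmax(x, dim=1)))   # tensor([[0., -inf]])   -- underflow
print(F.log_softmax(x, dim=1))          # tensor([[0., -1000.]]) -- stable
```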
Has anyone tried training? |
> @HqWei @qiqihaer Could you share a portion of your code for training convCRF?
There is a paper called PAC-CRF; you may find a ConvCRF training implementation there. |
@SHMCU It's very helpful. Thank you very much! |
Hi, did you solve the in-place operation problem? |
Hi, I have a question about the training step with this wonderful CRF implementation. |
Great work, thanks for your code. Do you have a plan to publish the training implementation? I really want to follow your work.