
Activation functions for training and testing phases #75

Open

SITUSITU opened this issue Jan 13, 2023 · 1 comment

@SITUSITU
Do we also have to scale the labels to [-1, 1] and calculate the loss on that scale while using the tanh activation function in the training phase?

If my task is to generate images (labels in [0, 800]), how can I get predicted outputs in [0, 800] during the testing phase?

@sarihl

sarihl commented Jun 15, 2023

Given a label $y$, you can normalize it as $\tilde{y} = \frac{y}{400} - 1 \in [-1, 1]$. Train your network to predict the normalized labels, and at inference time apply the inverse operation: prediction = (model_output + 1) * 400.
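
For concreteness, here is a minimal sketch of that normalization round trip, assuming PyTorch; the function names are illustrative, not from any particular library:

```python
import torch

def normalize_labels(y: torch.Tensor) -> torch.Tensor:
    """Map labels from [0, 800] to [-1, 1] so a tanh output can match them."""
    return y / 400.0 - 1.0

def denormalize_predictions(model_output: torch.Tensor) -> torch.Tensor:
    """Invert the mapping at inference time: [-1, 1] back to [0, 800]."""
    return (model_output + 1.0) * 400.0

# Train against normalized targets, then recover the original scale.
y = torch.tensor([0.0, 400.0, 800.0])
y_norm = normalize_labels(y)              # tensor([-1., 0., 1.])
y_back = denormalize_predictions(y_norm)  # tensor([0., 400., 800.])
```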

Alternatively, you can use the activation f(x) = (tanh(x) + 1) * 400 during training/testing, but note that this scales your gradients by 400, so you would need to scale your learning rate to account for this.
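
A sketch of that alternative as a module, assuming PyTorch; `ScaledTanh` is a hypothetical name, not part of `torch.nn`:

```python
import torch
import torch.nn as nn

class ScaledTanh(nn.Module):
    """f(x) = (tanh(x) + 1) * 400, so outputs land directly in [0, 800]."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (torch.tanh(x) + 1.0) * 400.0

# Because the output (and hence the gradient) is scaled by 400, a learning
# rate roughly 400x smaller than usual may be needed, as noted above.
head = nn.Sequential(nn.Linear(128, 1), ScaledTanh())
```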

On another note, if you really want to predict labels, you should not use a 1D output. You should predict an 800D output, i.e., a score for each label; using a single dimension imposes an ordering and geometry on the class space that you probably do not want.
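
A minimal sketch of that classification framing, again assuming PyTorch; the feature size of 128 is arbitrary, and you should adjust `num_classes` to your actual number of distinct labels:

```python
import torch
import torch.nn as nn

num_classes = 800
head = nn.Linear(128, num_classes)   # one score (logit) per label
criterion = nn.CrossEntropyLoss()    # expects integer class indices as targets

features = torch.randn(4, 128)       # batch of 4 feature vectors
targets = torch.randint(0, num_classes, (4,))
loss = criterion(head(features), targets)

# At test time, the predicted label is the argmax over the per-class scores.
pred = head(features).argmax(dim=1)
```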
