Given a label $y$, you can normalize it as $\tilde{y} = \frac{y}{400} - 1 \in [-1, 1]$. This way you can train your network to predict the normalized labels; at inference time, just apply the inverse operation: prediction = (model_output + 1) * 400.
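A minimal sketch of the two mappings in NumPy (the function names are just for illustration):

```python
import numpy as np

def normalize_labels(y):
    """Map raw labels in [0, 800] to [-1, 1] for training."""
    return np.asarray(y, dtype=np.float32) / 400.0 - 1.0

def denormalize_predictions(y_norm):
    """Inverse map: network outputs in [-1, 1] back to [0, 800]."""
    return (np.asarray(y_norm, dtype=np.float32) + 1.0) * 400.0

# Round-trip sanity check on a few labels.
labels = np.array([0.0, 400.0, 800.0])
assert np.allclose(denormalize_predictions(normalize_labels(labels)), labels)
```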
Alternatively, you can use an activation f(x) = (tanh(x) + 1) * 400 during training/testing, but note that this scales your gradients by 400, so you would need to scale your learning rate accordingly.
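Assuming PyTorch, that activation could look like the following sketch, applied as the final layer of the network:

```python
import torch

def scaled_tanh(x: torch.Tensor) -> torch.Tensor:
    # tanh squashes to (-1, 1); shifting and scaling maps to (0, 800).
    # The factor of 400 also multiplies gradients flowing through this layer,
    # which is why the learning rate should be adjusted.
    return (torch.tanh(x) + 1.0) * 400.0

raw = torch.randn(4, 1, requires_grad=True)  # e.g. output of a 1-D regression head
out = scaled_tanh(raw)                       # values in (0, 800)
```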
On another note, if you really want to predict labels, you should not use a 1-D output. You should predict an 800-D output, i.e. one score per label; using a single dimension imposes an ordering and geometry on the class space that you probably do not want.
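A sketch of what that looks like as a standard classification head, again assuming PyTorch (the feature size of 512 is a placeholder for your backbone's output dimension):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 800  # one score per possible label

# Hypothetical head; 512 stands in for your backbone's feature size.
head = nn.Linear(512, NUM_CLASSES)

features = torch.randn(8, 512)                 # dummy batch of features
logits = head(features)                        # (8, 800) class scores
targets = torch.randint(0, NUM_CLASSES, (8,))  # integer labels
loss = F.cross_entropy(logits, targets)        # standard classification loss
preds = logits.argmax(dim=1)                   # predicted label per sample
```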
Do we also have to scale the labels to [-1, 1] and compute the loss on them when using the tanh activation function during training?
If my task is to generate images (labels in [0, 800]), how can I get predicted outputs in [0, 800] during the testing phase?