A neural network learns from a training dataset to predict an outcome based on the input and hidden layers. Each feature or input in a neural network layer has a weight attached to it. Each input value is multiplied by its weight, and the products are summed.
Forward propagation refers to the computational movement from the input layer toward the output layer, while backpropagation refers to the computational movement from the output layer back to the input layer.
In a neural network, the predicted output depends upon:
- Input Values
- Activation Function
- Beta coefficients of the inputs (the weights associated with them)
- Bias terms
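The forward pass described above can be sketched for a single neuron. The specific input values, weights, bias, and sigmoid activation below are illustrative assumptions, not values from the text:

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single neuron with 3 inputs
x = np.array([0.5, -1.2, 3.0])   # input values
w = np.array([0.4, 0.1, -0.6])   # weights (beta coefficients)
b = 0.2                          # bias term

# Multiply each input by its weight, sum the products, add the bias,
# then pass the result through the activation function
z = np.dot(w, x) + b
y_hat = sigmoid(z)
print(y_hat)
```

Here `z` is the weighted sum plus bias, and `y_hat` is the neuron's predicted output.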
Backpropagation is the standard method used to reduce the loss function and thereby improve prediction accuracy. In the backward pass, we compute the gradient of the loss function with respect to the weights and adjust the weights in the direction that minimizes the loss.
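A minimal sketch of one backpropagation step for the single neuron above, assuming a squared-error loss, a sigmoid activation, and an illustrative learning rate; all concrete values are assumptions for demonstration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical single-neuron setup with one training example
x = np.array([0.5, -1.2, 3.0])   # input values
w = np.array([0.4, 0.1, -0.6])   # weights
b = 0.2                          # bias term
target = 1.0                     # desired output
lr = 0.1                         # learning rate (assumed)

# Forward pass
z = np.dot(w, x) + b
y_hat = sigmoid(z)
loss = 0.5 * (y_hat - target) ** 2

# Backward pass: chain rule gives
#   dL/dw = (y_hat - target) * sigmoid'(z) * x,  sigmoid'(z) = y_hat * (1 - y_hat)
dz = (y_hat - target) * y_hat * (1.0 - y_hat)
grad_w = dz * x
grad_b = dz

# Gradient-descent update: step the weights against the gradient
w -= lr * grad_w
b -= lr * grad_b
```

Recomputing the forward pass with the updated weights yields a smaller loss, which is exactly the improvement backpropagation aims for.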