Backpropagation

Backpropagation, short for “backward propagation of errors”, is the method used in the training of deep neural networks to calculate the gradient of the error with respect to each weight, which determines how the weights should be adjusted.
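In standard notation (a common textbook formulation, not tied to any particular source), writing the pre-activations of layer l as the layer’s weights applied to the previous layer’s activations, and omitting biases for brevity, the chain rule gives the two quantities backpropagation computes:

```latex
% Error term at layer l, obtained from the error term at layer l+1
% (sigma is the activation function, applied elementwise):
\delta^{(l)} = \left( W^{(l+1)} \right)^{\top} \delta^{(l+1)} \odot \sigma'\!\left( z^{(l)} \right)

% Gradient of the loss L with respect to the weights of layer l:
\frac{\partial L}{\partial W^{(l)}} = \delta^{(l)} \left( a^{(l-1)} \right)^{\top}
```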

In practice, during the training phase, gradient descent is used to minimize the error between the network’s output and the expected results from the training data set.
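As an illustration, here is a minimal sketch of one gradient-descent step on a mean-squared-error loss for a linear model; the variable names and toy data are illustrative, not taken from any particular library:

```python
import numpy as np

# Toy training set: 3 samples with 2 features each, plus scalar targets.
inputs = np.array([[0.5, 1.0], [1.5, -0.5], [0.0, 2.0]])
targets = np.array([1.0, 0.0, 2.0])
weights = np.zeros(2)
learning_rate = 0.1

# Forward pass of a linear model: predictions = X @ w.
predictions = inputs @ weights

# Mean squared error between the output and the expected results.
error = predictions - targets
loss = np.mean(error ** 2)

# Gradient of the loss with respect to the weights,
# followed by one gradient-descent update step.
grad = 2 * inputs.T @ error / len(targets)
weights -= learning_rate * grad
```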

In feedforward neural networks, input data enters at the first layer and is processed in only one direction through the subsequent layers of neurons. The error must therefore be calculated at the last layer of neurons and propagated back to the previous layers, as the sketch below illustrates.
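The following sketch assumes a two-layer network with a sigmoid hidden layer and a squared-error loss; all names and shapes are illustrative. It shows the error being computed at the output layer and propagated back to produce gradients for both weight matrices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))    # 4 samples, 3 input features
y = rng.normal(size=(4, 1))    # expected outputs from the training set
W1 = rng.normal(size=(3, 5))   # first-layer weights
W2 = rng.normal(size=(5, 1))   # second-layer weights

# Forward pass: data flows in one direction through the layers.
z1 = x @ W1
a1 = sigmoid(z1)
out = a1 @ W2

# The error is computed at the last layer ...
delta_out = out - y                                 # gradient of 0.5 * squared error

# ... and propagated back to the previous layer via the chain rule.
delta_hidden = (delta_out @ W2.T) * a1 * (1 - a1)   # sigmoid derivative

# Gradients of the loss with respect to each weight matrix,
# ready to be used in a gradient-descent update.
grad_W2 = a1.T @ delta_out
grad_W1 = x.T @ delta_hidden
```

Note how the backward pass reuses the activations stored during the forward pass; this reuse of intermediate values is what makes backpropagation efficient compared with computing each weight’s gradient independently.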

Read more about backpropagation, gradient descent, and the adjustment of weights in neural networks in the post on programming a simple neural network, or review the video on principles of neural nets.

Read more on Brilliant.org and Wikipedia.
