Artificial Neural Network

Understanding the brain, with its learning and thought processes, has been one of humankind's most sustained and ambitious inquiries. Artificial Neural Networks (ANNs) evolved with the aim of achieving human-like performance on certain tasks where they outperform conventional computers. Progress in the field of ANNs owes much to the long efforts of neurobiologists and neuroanatomists in developing models of human learning. McCulloch and Pitts (1943) conceived the artificial neuron as a prototype of the biological neuron.

Backpropagation Neural Network

The most popular training method for neural networks is the generalized delta rule, also known as the Backpropagation algorithm. The explanation here is intended to give an outline of the process involved. The network described here contains three layers: an input layer, a hidden layer, and an output layer.

During the training phase, the training data is fed into the input layer. The data is propagated to the hidden layer and then to the output layer. This is called the forward pass of the Backpropagation algorithm. In the forward pass, each node in the hidden layer receives input from all the nodes of the input layer; these inputs are multiplied by the appropriate weights and summed. The output of the hidden node is a nonlinear transformation of the resulting sum. Similarly, each node in the output layer receives input from all the nodes of the hidden layer, which are multiplied by the appropriate weights and summed, and the node's output is a nonlinear transformation of that sum.

The output values of the output layer are compared with the target output values, which are used to teach the network. The error between the actual and target output values is calculated and propagated back toward the hidden layer. This is called the backward pass of the Backpropagation algorithm. The error is used to update the connection strengths between nodes, i.e., the weight matrices between the input-hidden layers and the hidden-output layers are updated.

During the testing phase, no learning takes place, i.e., the weight matrices are not changed. Each test vector is fed into the input layer, and the feed-forward of the testing data proceeds exactly as for the training data.

The Backpropagation architecture was developed in the early 1970s by several independent sources. There are many laws (algorithms) used to implement the adaptive feedback required to adjust the weights during training.
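The forward pass described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the author's implementation; the layer sizes, the random weights, and the choice of a sigmoid as the nonlinear transformation are all assumptions made for the example.

```python
import numpy as np

def sigmoid(x):
    # Logistic activation, applied element-wise; one common choice of
    # nonlinear transformation (assumed here for illustration).
    return 1.0 / (1.0 + np.exp(-x))

def forward_pass(x, w_ih, w_ho):
    # Each hidden node: weighted sum of all inputs, then a nonlinear transform.
    hidden = sigmoid(w_ih @ x)
    # Each output node: weighted sum of all hidden outputs, then the same transform.
    output = sigmoid(w_ho @ hidden)
    return hidden, output

# Illustrative network: 3 inputs, 4 hidden nodes, 2 outputs.
rng = np.random.default_rng(0)
w_ih = rng.normal(size=(4, 3))   # input-to-hidden weight matrix
w_ho = rng.normal(size=(2, 4))   # hidden-to-output weight matrix
x = np.array([0.5, -1.0, 0.25])  # one training vector (made up)
hidden, output = forward_pass(x, w_ih, w_ho)
```

During testing, this same `forward_pass` is applied to each test vector while the weight matrices are left unchanged.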
The most common technique is backward-error propagation, more commonly known as backpropagation. The Backpropagation algorithm searches for weight values that minimize the total error of the network over the set of training examples (the training set). It consists of the repeated application of the following two passes:

- Forward pass: the network is activated on one example and the error of each neuron of the output layer is computed.

- Backward pass: the network error is used to update the weights (the credit assignment problem).
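The two passes above can be sketched for a one-hidden-layer network. This is a minimal NumPy sketch under stated assumptions: sigmoid units (whose derivative is `output * (1 - output)`), a made-up input, target, and learning rate, and plain gradient updates on both weight matrices.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative network: 3 inputs, 4 hidden nodes, 2 outputs (all assumed).
rng = np.random.default_rng(1)
w_ih = rng.normal(size=(4, 3))      # input-to-hidden weights
w_ho = rng.normal(size=(2, 4))      # hidden-to-output weights
x = np.array([0.5, -1.0, 0.25])     # one training example
target = np.array([1.0, 0.0])       # its target output
lr = 0.5                            # learning rate (assumed value)

# Forward pass: activate the network on the example.
hidden = sigmoid(w_ih @ x)
output = sigmoid(w_ho @ hidden)

# Backward pass: local gradient (delta) of each output neuron,
# using the sigmoid derivative output * (1 - output).
delta_out = (target - output) * output * (1.0 - output)
# Recursively propagate the error back to the hidden layer.
delta_hidden = (w_ho.T @ delta_out) * hidden * (1.0 - hidden)

# Credit assignment: update both weight matrices from the local gradients.
w_ho += lr * np.outer(delta_out, hidden)
w_ih += lr * np.outer(delta_hidden, x)
```

The hidden-layer delta is computed from the output-layer deltas, which is exactly the layer-by-layer recursion described next.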

Therefore, starting at the output layer, the error is propagated backwards through the network, layer by layer, by recursively computing the local gradient of each neuron. The Backpropagation algorithm uses supervised training: if the output is not correct, the weights are adjusted according to the formula:

W(new) = W(old) + a * (desired - output) * input

This process is repeated for a number of iterations until the error falls below an acceptable minimum for all the training patterns.
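The update formula and the repeat-until-small-error loop can be shown on a single linear unit. This is only an illustration of the rule as written; the training set (a linearly realizable target), the learning rate `a`, the epoch limit, and the stopping threshold are all assumptions.

```python
import numpy as np

# Training set (made up): targets follow d = 2*x1 - x2 + 0.5, which a
# single linear unit can represent exactly.
inputs = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
desired = np.array([0.5, -0.5, 2.5, 1.5])

w = np.zeros(2)   # weights, W(old) initially zero
b = 0.0           # bias, treated as a weight on a constant input of 1
a = 0.1           # learning rate (assumed value)

for epoch in range(1000):           # repeat for a number of iterations...
    total_error = 0.0
    for x, d in zip(inputs, desired):
        output = w @ x + b
        err = d - output
        w += a * err * x            # W(new) = W(old) + a*(desired - output)*input
        b += a * err
        total_error += err ** 2
    if total_error < 1e-4:          # ...until the error is acceptably small
        break
```

Each pattern nudges the weights in proportion to its error, and the loop stops once the summed squared error over all patterns drops below the chosen threshold.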
