Mapping
An ANN maps inputs to outputs


The purpose of an ANN trained by a backpropagation algorithm is to map objects within sets of input data to output nodes that represent those objects. The process has three phases.

Initialize: In the first phase, the ANN designer selects the number of layers, the number of nodes per layer, and the pattern of connections between adjacent layers. An ANN is created with these choices, and its weights and biases are initialized to small random numbers.
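As a rough sketch of this first phase (not the author's code; the layer sizes, random seed, and use of NumPy are illustrative choices), a network can be represented as one weight matrix and one bias vector per pair of adjacent layers:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]  # input layer, two hidden layers, output layer

# One weight matrix and bias vector per pair of adjacent layers,
# initialized to small random numbers as the article describes.
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

print([w.shape for w in weights])  # [(4, 8), (8, 8), (8, 3)]
```

Choosing the list `layer_sizes` is the designer's structural decision; everything after that is mechanical setup.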

Train: In the second phase, the ANN is trained to match a set of objects to the nodes that represent them. Every node has a weight for each incoming connection, a bias, an activation function, and a threshold. A large dataset containing several types of objects is used as input. The training process adjusts the weights and biases of the nodes in the hidden layers to form distinct pathways from the input layer to the output layer for each type of object, so that impulses from data containing an object are directed to the output node that represents it.
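A toy sketch of this training phase, under my own assumptions (random data, a single hidden layer, sigmoid activations, mean squared error; none of this is from the article): backpropagation computes how the error changes with each weight and bias, then nudges them downhill.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 4))            # 20 samples, 4 input features
y = np.eye(2)[rng.integers(0, 2, 20)]   # one-hot targets for 2 object types

W1, b1 = rng.normal(0, 0.5, (4, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 2)), np.zeros(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    h = sigmoid(X @ W1 + b1)
    return np.mean((sigmoid(h @ W2 + b2) - y) ** 2)

loss_before = loss()
lr = 0.5
for _ in range(500):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: error gradients flow from output to hidden layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent updates to weights and biases
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(0)
loss_after = loss()
```

After training, `loss_after` is smaller than `loss_before`: the adjusted weights and biases now route each input toward its target output node.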

Recognize: In the third phase, the trained ANN should be able to map any input data containing an object it was trained on to the corresponding output node. The ANN recognizes an object when data containing that object is input and the output node that represents it is set to a high value. Because the training process has adjusted the node parameters in the hidden layers, values from the input are guided through the particular pathway that leads to the output node for that object.
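Recognition is just a forward pass. In this sketch the weights are random placeholders standing in for trained ones (my own simplification), and the "recognized" object is whichever output node ends up with the highest value:

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
# Placeholder parameters; in a real run these come from the training phase.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = np.array([0.2, 0.9, 0.1, 0.4])       # input data containing some object
out = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
recognized = int(np.argmax(out))          # output node with the highest value
print("recognized output node:", recognized)
```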

During recognition, you can imagine values traveling through the ANN. Values emanate from the nodes in the input layer and move through the layers of the network. Each node takes the values passed to it by the nodes that link to it, computes the weighted sum of those values, adds its bias, and applies its activation function to determine an output value. If that value exceeds the node's threshold, it is passed on to the nodes it links to. This process repeats through all of the hidden layers to the output layer, where the value of each output node indicates the presence of the object it represents.
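The computation at a single node, as the paragraph above describes it, can be sketched like this (the sigmoid activation and the threshold value are my own illustrative choices; many modern ANNs pass the activation on without a hard threshold):

```python
import numpy as np

def node_output(inputs, weights, bias, threshold=0.5):
    weighted_sum = np.dot(inputs, weights) + bias  # weighted sum plus bias
    value = 1.0 / (1.0 + np.exp(-weighted_sum))    # sigmoid activation
    # Pass the value on only if it exceeds the node's threshold.
    return value if value > threshold else 0.0

print(node_output(np.array([0.6, 0.3]), np.array([0.8, -0.2]), 0.1))
```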

Pathways: With a backpropagation ANN, nodes and links are analogous to a labyrinth of pipes between the input and output layers. Links are the pipes. The weights, biases and activation functions of the nodes act like valves that let more or less value pass to other nodes. The purpose of training is to set the valves so that values produced by input data containing an object are guided through the hidden layers to the output node that matches it.

With backprop ANNs, objects do not have meanings. An ANN doesn’t know or understand objects or scenarios and it is not capable of reasoning. A cognitive function is produced because after training, pathways are in place that map input data to output nodes. Cognitive functions are limited to those which are possible via mapping.

You are right that it does not make sense of the input but just produces a mapping of it. Maybe changing the nonlinearities of the neural network could make a difference. Our intelligence also involves reasoning, which is absent in current neural net models. Recurrent neural networks, which have some kind of memory (I don't fully understand how they work), are probably a little better.

Do you agree with everything in this article? Is there anything that you would like to clarify, correct or add? Please comment!


More articles by Gary Frank

  • Is Deep Learning a Brute Force Method?
