Introduction to Neural Networks
Perceptrons
The perceptron is the basic unit powering what is today known as deep learning. It is the artificial neuron that, when put together with many others like it, can solve complex, ill-defined problems much like humans do.
It takes in a few inputs, each of which has a weight to signify how important it is, and generates an output decision of “0” or “1”. When combined with many other perceptrons, it forms an artificial neural network.
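As a minimal sketch in Python (the weights, bias, and AND-gate example below are illustrative choices, not from the original post), a single perceptron is just a thresholded weighted sum:

    def perceptron(inputs, weights, bias):
        # Output "1" if the weighted sum of the inputs plus the bias is
        # positive, otherwise output "0".
        weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if weighted_sum > 0 else 0

    # Illustrative: with these weights the perceptron behaves like an AND gate.
    print(perceptron([1, 1], weights=[0.5, 0.5], bias=-0.7))  # -> 1
    print(perceptron([1, 0], weights=[0.5, 0.5], bias=-0.7))  # -> 0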
Multilayer Perceptrons
A multilayer perceptron (MLP) is a deep artificial neural network composed of more than one perceptron. It consists of an input layer to receive the signal, an output layer that makes a decision or prediction about the input, and, in between those two, an arbitrary number of hidden layers that are the true computational engine of the MLP. Even an MLP with a single hidden layer is capable of approximating any continuous function (the universal approximation theorem).
Take the simple example of a three-layer network: the first layer is the input layer, the last is the output layer, and the middle layer is called the hidden layer. We feed our input data into the input layer and take the output from the output layer. We can increase the number of hidden layers as much as we want.
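As a quick illustration (scikit-learn is used here only as an example library; it is not mentioned in the original post), adding hidden layers is a one-line configuration change:

    from sklearn.neural_network import MLPClassifier

    # One hidden layer of 10 units...
    net_one_hidden = MLPClassifier(hidden_layer_sizes=(10,))
    # ...or three hidden layers of 10 units each. The input and output
    # layers are inferred from the data passed to fit().
    net_three_hidden = MLPClassifier(hidden_layer_sizes=(10, 10, 10))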
The feedforward network is the most typical neural network model. Its goal is to approximate some function f(·). Given, for example, a classifier y = f∗(x) that maps an input x to an output class y, the MLP finds the best approximation to that classifier by defining a mapping y = f(x; θ) and learning the parameters θ that yield the best approximation.
The MLP networks are composed of many functions that are chained together.
A network with three functions or layers would form f(x) = f^(3)(f^(2)(f^(1)(x))).
Each of these layers is composed of units that perform an affine transformation of their inputs, that is, a weighted sum plus a bias.
Each layer is represented as y = f(Wx + b), where f is the activation function, W is the set of parameters, or weights, in the layer, x is the input vector (which can also be the output of the previous layer), and b is the bias vector.
An MLP consists of several fully connected layers: each unit in a layer is connected to all the units in the previous layer. In a fully connected layer, the parameters of each unit are independent of the rest of the units in the layer, which means each unit possesses a unique set of weights.
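Putting the last few paragraphs together, here is a minimal sketch of a forward pass in numpy (the 3-4-2 layer sizes, random weights, and sigmoid activation are illustrative assumptions):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # A 3 -> 4 -> 2 network: two fully connected layers, each with its own
    # weight matrix W and bias vector b. Each row of W holds the unique
    # weights of one unit in that layer.
    W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
    W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

    x = np.array([0.5, -1.0, 2.0])
    h = sigmoid(W1 @ x + b1)  # hidden layer: affine transform Wx + b, then f
    y = sigmoid(W2 @ h + b2)  # output layer: the composition f^(2)(f^(1)(x))
    print(y)                  # a 2-element output vector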
The process by which a multilayer perceptron learns is the backpropagation algorithm, which alternates between two phases:
Forward Propagation- Here we propagate the input forward through the network, i.e., at each layer we calculate the weighted sum of the inputs, add the bias, and pass the result through the activation function.
Backward Propagation and Weight Update- We calculate the total error at the output nodes and propagate these errors back through the network using backpropagation to calculate the gradients. Then we use an optimization method such as gradient descent to adjust all the weights in the network with the aim of reducing the error at the output layer, as the sketch below shows.
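As a minimal end-to-end sketch in numpy (the XOR data, 2-4-1 layer sizes, sigmoid activation, squared-error loss, and learning rate are all illustrative assumptions, not from the original post), the two phases look like this:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Toy XOR problem: 4 examples, 2 inputs, 1 target output each.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([[0], [1], [1], [0]], dtype=float)

    # A 2 -> 4 -> 1 network.
    W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
    W2, b2 = rng.standard_normal((4, 1)), np.zeros(1)

    lr = 0.5  # gradient descent learning rate
    for epoch in range(5000):
        # Forward propagation: weighted sum plus bias, then activation.
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)

        # Backward propagation: the chain rule gives the error signal at
        # each layer for the squared-error loss with sigmoid activations.
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (delta_out @ W2.T) * h * (1 - h)

        # Weight update: gradient descent moves each weight against its gradient.
        W2 -= lr * h.T @ delta_out
        b2 -= lr * delta_out.sum(axis=0)
        W1 -= lr * X.T @ delta_hid
        b1 -= lr * delta_hid.sum(axis=0)

    print(y.round(2).ravel())  # should approach [0, 1, 1, 0]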
Working of the Gradient Descent Optimizer
First we calculate the error. The error/loss is given as below:
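A common choice, assumed here since it is standard in this kind of derivation, is the squared error: E = (1/2) Σ_k (t_k − y_k)^2, where t_k is the target value and y_k the actual output at output node k. Gradient descent then adjusts each weight w in the direction that reduces E, w ← w − η · ∂E/∂w, where η is the learning rate, repeating this update until the error stops decreasing.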