
Showing posts from November, 2020

Convolutional Neural Networks (Part-4)

AlexNet
AlexNet is widely considered the first model to revive interest in CNNs, winning the ImageNet challenge in 2012. It is a deep CNN trained on ImageNet that outperformed all the other entries that year, and it was a major improvement: AlexNet achieved a 15.3% top-5 test error rate, while the next best entry managed only 26.2%. Compared to modern architectures, the paper used a relatively simple layout.
ZFNet
ZFNet is a modified version of AlexNet that achieves better accuracy. One major difference between the two approaches is that ZFNet used 7x7 filters in its early convolutional layers, whereas AlexNet used 11x11 filters. The intuition is that big filters discard a lot of pixel-level information, which smaller filter sizes in the earlier conv layers retain. The number of filters increases as we go deeper. This network also used ReLUs for its activations and was trained using batch stochastic gradient descent.
GoogLeNet
The GoogLeNet architecture is very different from previous state-of-the-art architectures…
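To make the filter-size difference concrete, here is a minimal PyTorch sketch, not from the post itself, contrasting an AlexNet-style first conv layer with a ZFNet-style one. The channel counts, strides, and padding are assumptions based on the commonly cited configurations of the two networks.

```python
import torch
import torch.nn as nn

# AlexNet-style first conv layer (assumed config): 96 filters of 11x11, stride 4.
# The large kernel and stride aggressively downsample, discarding pixel detail.
alexnet_conv1 = nn.Conv2d(in_channels=3, out_channels=96,
                          kernel_size=11, stride=4, padding=2)

# ZFNet-style first conv layer (assumed config): 96 filters of 7x7, stride 2.
# The smaller kernel and stride retain more fine-grained information.
zfnet_conv1 = nn.Conv2d(in_channels=3, out_channels=96,
                        kernel_size=7, stride=2, padding=1)

x = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image
print(alexnet_conv1(x).shape)     # torch.Size([1, 96, 55, 55])
print(zfnet_conv1(x).shape)       # torch.Size([1, 96, 110, 110])
```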

Convolutional Neural Networks (Part-3)

Forward and Backward Propagation using the Convolution Operation
For the forward pass, we move through the layers of the CNN and at the end obtain the loss using the loss function. When we work the loss backwards, layer by layer, we receive the gradient of the loss from the layer above as ∂L/∂z. In order for the loss to be propagated to the other gates, we need to find ∂L/∂x and ∂L/∂y. Now, let's assume the function f is a convolution between an input X and a filter F. The basic difference between convolution and correlation is that convolution rotates the filter by 180 degrees. Let the input X be a 3x3 matrix and the filter F a 2x2 matrix; convolving X with F then gives us a 2x2 output O. To derive the equations of the gradients for the filter values and the input values, we will treat the convolution operation as a correlation operation, just for simplicity…
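The worked matrices are in the post's figures, but the resulting gradients are the standard ones: treating the forward pass as a valid cross-correlation O = X ⋆ F, the filter gradient ∂L/∂F is the valid correlation of X with ∂L/∂O, and the input gradient ∂L/∂X is the "full" correlation of the zero-padded ∂L/∂O with F rotated by 180 degrees. A minimal NumPy sketch with hypothetical example values:

```python
import numpy as np

def correlate2d_valid(X, K):
    """Valid 2D cross-correlation: slide K over X, multiply element-wise, sum."""
    h = X.shape[0] - K.shape[0] + 1
    w = X.shape[1] - K.shape[1] + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(X[i:i + K.shape[0], j:j + K.shape[1]] * K)
    return out

X = np.arange(9, dtype=float).reshape(3, 3)  # 3x3 input (example values)
F = np.array([[1., 2.], [3., 4.]])           # 2x2 filter (example values)
O = correlate2d_valid(X, F)                  # 2x2 output: the forward pass

dO = np.ones_like(O)  # stand-in for the upstream gradient dL/dO

# Filter gradient: valid correlation of the input with dL/dO.
dF = correlate2d_valid(X, dO)

# Input gradient: "full" correlation of the zero-padded dL/dO
# with the filter rotated by 180 degrees (assumes a square filter).
pad = F.shape[0] - 1
dX = correlate2d_valid(np.pad(dO, pad), np.rot90(F, 2))

print(dF.shape, dX.shape)  # (2, 2) and (3, 3): match F and X
```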

Convolutional Neural Networks (Part-2)

Cross-Correlation
Cross-correlation is a measure of similarity between two series as a function of the displacement of one relative to the other. It is also known as a sliding dot product or sliding inner product. It is commonly used for searching a long signal for a shorter, known feature, and has applications in pattern recognition, single-particle analysis, electron tomography, averaging, cryptanalysis, and neurophysiology.
Steps followed in cross-correlation (a sketch follows this excerpt):
1. Take two matrices with the same dimensions.
2. Multiply them element by element (i.e., not the dot product, just simple multiplication).
3. Sum the products together.
VGG16
VGG16 is a convolutional neural network model proposed by K. Simonyan and A. Zisserman of the University of Oxford in the paper “Very Deep Convolutional Networks for Large-Scale Image Recognition”. The model achieves 92.7% top-5 test accuracy on ImageNet, a dataset of over 14 million images belonging to 1000 classes…
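A minimal NumPy sketch of those three steps, applied at every displacement to form the sliding dot product; the signal and feature matrices are hypothetical examples:

```python
import numpy as np

def xcorr_score(window, feature):
    # Steps from the post: element-wise multiply, then sum.
    return np.sum(window * feature)

# Hypothetical example: search a small "signal" for a known 2x2 feature.
signal = np.array([[0., 1., 0.],
                   [1., 2., 1.],
                   [0., 1., 0.]])
feature = np.array([[1., 2.],
                    [2., 1.]])

# Slide the feature over the signal, scoring every displacement.
fh, fw = feature.shape
scores = np.array([[xcorr_score(signal[i:i + fh, j:j + fw], feature)
                    for j in range(signal.shape[1] - fw + 1)]
                   for i in range(signal.shape[0] - fh + 1)])
print(scores)  # the highest score marks the best-matching position
```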

Convolutional Neural Networks (Part-1)

Convolutional Neural Networks
Convolutional neural networks are designed to process data through multiple layers of arrays. This type of network is used in applications like image recognition and face recognition. The primary difference between a CNN and an ordinary neural network is that a CNN takes its input as a two-dimensional array and operates directly on the images, rather than relying on the separate feature extraction that other neural networks depend on. The classic, and arguably most popular, use case of these networks is image processing. Image classification is the task of taking an input image and outputting a class (a cat, dog, etc.) or a probability over classes that best describes the image. When a computer sees an image (takes an image as input), it sees an array of pixel values. Depending on the resolution and size of the image, this might be a 32 x 32 x 3 array of numbers (the 3 refers to the RGB channels). What we want the computer to do is to be able to differentiate between…
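A quick NumPy sketch of what "an array of pixel values" means; the values are random placeholders, purely for illustration:

```python
import numpy as np

# A 32x32 RGB image as the computer sees it: height x width x channels.
image = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)

print(image.shape)  # (32, 32, 3) -- the 3 holds the R, G, B values
print(image[0, 0])  # a single pixel: three intensities in [0, 255]
```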

Introduction to Neural Networks

Perceptrons
The perceptron is the basic unit powering what is today known as deep learning. It is an artificial neuron that, when put together with many others like it, can solve complex, ill-defined problems much as humans do. It takes in a few inputs, each of which has a weight signifying how important it is, and generates an output decision of “0” or “1”. When combined with many other perceptrons, it forms an artificial neural network.
Multilayer Perceptrons
A multilayer perceptron (MLP) is a deep, artificial neural network composed of more than one perceptron. It consists of an input layer to receive the signal, an output layer that makes a decision or prediction about the input, and, in between those two, an arbitrary number of hidden layers that are the true computational engine of the MLP. MLPs with one hidden layer are capable of approximating any continuous function, given enough hidden units. If we take the simple example of a three-layer network, the first layer will be the input layer…
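A minimal sketch of a single perceptron; the weights, bias, and the AND example are hypothetical values chosen for illustration:

```python
import numpy as np

def perceptron(x, w, b):
    """Weigh the inputs, sum them with the bias, and threshold to 0 or 1."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Hypothetical example: a two-input perceptron that computes logical AND.
w = np.array([1.0, 1.0])
b = -1.5
for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(x, "->", perceptron(np.array(x, dtype=float), w, b))
```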

Graph Analysis(Part-2)

Graph Partitioning
A graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups produce edges in the partitioned graph.
Important terms (illustrated in the sketch after this excerpt):
Cliques - fully connected subgraphs of the original graph G.
Maximal cliques - cliques to which no node from the parent graph can be added such that the new subgraph is still a clique.
Betweenness - the number of shortest paths that pass through an edge. The Girvan-Newman (GN) algorithm computes the betweenness of every edge over the shortest paths between all pairs of nodes, and uses it to split the graph.
Cutting Graphs
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set: the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut.
Quality of a Cluster (Cut)
The number of edges, or the sum of the weights of edges, such that one endpoint of the edge is a node in the cluster and the other is not in the cluster…
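A small NetworkX sketch of these ideas on a hypothetical graph: two triangles (cliques) joined by a single bridge edge. It computes edge betweenness, measures the cut quality of one cluster, and performs one Girvan-Newman-style removal:

```python
import networkx as nx

# Hypothetical example: two triangles (cliques) joined by one "bridge" edge.
G = nx.Graph([(0, 1), (1, 2), (0, 2),   # clique {0, 1, 2}
              (3, 4), (4, 5), (3, 5),   # clique {3, 4, 5}
              (2, 3)])                  # bridge between the two

# Edge betweenness: how many shortest paths pass through each edge.
eb = nx.edge_betweenness_centrality(G, normalized=False)
bridge = max(eb, key=eb.get)
print(bridge)                        # (2, 3): every cross path uses it

# Cut quality: edges with one endpoint inside the cluster and one outside.
print(nx.cut_size(G, {0, 1, 2}))     # 1 -- only the bridge crosses this cut

# One Girvan-Newman step: removing the highest-betweenness edge
# splits the graph into its two natural communities.
G.remove_edge(*bridge)
print(list(nx.connected_components(G)))  # [{0, 1, 2}, {3, 4, 5}]
```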