Introduction to Deep Learning

A brief introduction to deep learning, with no mathematics involved

AI is progressing rapidly, and deep learning is one of its main drivers. Deep learning is a branch of machine learning that is steadily changing the world around us.

From driverless cars to voice recognition, deep learning makes it all possible. It has become a hot topic in industry and science, and it touches almost every field related to machine learning and artificial intelligence (AI). This is the first article in a deep learning series; upcoming articles will explain different deep learning models.

What is Deep learning?

AI vs ML vs Deep Learning

Deep learning is a sub-field of machine learning dealing with algorithms inspired by the structure and function of the brain, called artificial neural networks. In other words, it mimics the workings of our brains: deep learning algorithms are organized like the nervous system, where each neuron is connected to others and passes information along. Deep learning belongs to a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised, or unsupervised.

Deep Neural Networks

Deep learning models work in layers, and a typical model has at least three layers. Each layer accepts information from the previous layer and passes it on to the next one.

Why should one know about Deep learning?

Deep Learning vs Older Learning Algorithms

One of the major reasons for the rise of deep learning is the amount of data we handle nowadays. Deep learning models tend to keep improving as more data becomes available, whereas older machine learning models stop improving after a saturation point.

Nearly every industry is going to be affected by AI and ML, and deep learning plays a big role in that shift. Whether you are a health professional or a lawyer, there is a possibility that one day parts of your work may be done by a highly autonomous system. The accuracy of deep learning has improved significantly over the years and continues to improve. Understanding its nuances will help us all.

Some of the wide applications of Deep learning are:

Self-driving cars: A self-driving car is the ultimate evolutionary goal of developing ADAS (Advanced Driver Assistance Systems), to the point where there is nobody left to assist.

Visual tasks: These include, among others, lane recognition, pedestrian recognition, and traffic-signal recognition, all of which are solved with deep learning.

The importance of deep learning for autonomous driving systems can be illustrated by the fact that Nvidia maintains long-term relationships with car manufacturers and works on integrated and real-time operating systems developed for that purpose.

Humanoids: In the same way, deep learning simplifies day-to-day interaction between robots and human beings. We already have personal agents like Alexa and Siri that listen to our questions and respond intelligently.

NLP and image processing: The great advances in these fields have been possible thanks to deep learning. Given the growth rate of robotics and deep learning, autonomous robots are not far away. A good example is Google Duplex, a human-like virtual assistant by Google.

Medical care: The adoption of deep learning in health care is increasing, solving a variety of problems for patients, hospitals, and the entire health care industry.

Research has shown that deep neural networks can be trained to produce radiological results with high reliability by training on archival data from millions of patient scans collected by health systems. These advances could soon reshape health care, with AI-based expert systems and autonomous robotic surgeons taking on work now done by doctors.

This should be enough to give you an idea of the vast range of deep learning applications. Unless you are planning to head for the woods, sooner or later you will interact with DL in some way. Now let's have a look at how it works!

Implementation of Deep learning

Given that Deep learning is implemented by large Artificial Neural Networks (or simply Neural Networks or NN), let’s find out more about them.

What’s an Artificial Neural Network

Artificial Neural Network is a network of interconnected artificial neurons (or nodes) where each neuron represents an information processing unit.

These interconnected nodes pass information to each other mimicking the human brain.

The nodes interact with each other and share information. Each node takes input and performs some operation on it before passing it forward.

The operation is performed by what is called an activation function (a non-linearity). It converts the input into an output, which can then be used as input for other nodes.

ANN

The links between nodes are weighted, and these weights are adjusted based on the performance of the network. If the performance (or accuracy) is high, the weights need little adjustment; if it is low, the weights are adjusted through an optimization procedure, typically gradient descent via backpropagation.

The leftmost layer of neurons is called the input layer and, similarly, the rightmost layer is called the output layer. All the layers in between are called hidden layers. In a nutshell, an artificial neuron takes input from other nodes, applies the activation function to the weighted sum of inputs (the transfer function), and passes the output on. A threshold term (called the bias) is added to the weighted sum, shifting the activation so the neuron can produce a non-zero output even when all inputs are zero.

A Neuron
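The neuron described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the article: the weights, bias, and the choice of a sigmoid activation are all illustrative assumptions.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus bias
    (the transfer function), passed through a sigmoid activation."""
    z = np.dot(weights, inputs) + bias   # weighted sum + bias
    return 1.0 / (1.0 + np.exp(-z))      # sigmoid non-linearity

# Two inputs with illustrative weights and bias
out = neuron(np.array([0.5, -1.0]), np.array([0.8, 0.2]), bias=0.1)
print(round(out, 4))  # 0.5744
```

Adjusting the weights and bias changes how strongly each input influences the output; training a network is nothing more than tuning these numbers across many neurons.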

How are Neural Networks used for Deep learning?

For Deep learning, several Neural Network layers are connected in feedforward or feedback style to pass information to each other.

Feedforward: This is the simplest type of ANN. Here, the connections do not form a cycle, so there are no loops. The input is fed toward the output (in a single direction) through a series of weighted layers. Feedforward networks are extensively used in pattern recognition. This type of organization is also referred to as bottom-up or top-down.

Feed Forward
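The single-direction flow of a feedforward network can be sketched by stacking weighted layers, as below. The layer sizes, random weights, and ReLU activation are illustrative assumptions for the sketch.

```python
import numpy as np

def relu(x):
    """ReLU non-linearity: passes positives, zeroes out negatives."""
    return np.maximum(0.0, x)

def feedforward(x, layers):
    """Pass an input through a stack of (weights, bias) layers
    in a single direction -- no cycles, no feedback loops."""
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# A tiny 2-3-1 network with random (illustrative) weights
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]
y = feedforward(np.array([1.0, 2.0]), layers)
print(y.shape)  # (1,)
```

Note there is no state carried between calls: each input passes straight through, which is exactly what distinguishes feedforward networks from the feedback networks described next.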

Feedback (or recurrent): The connections in a feedback network can move in both directions. The output of the network is fed back into the network to improve performance (forming loops).

These networks can become very complicated but are comparatively more powerful than feedforward networks. Feedback networks are dynamic and are used for a wide range of problems.

Now let’s discuss some specific types of ANN extensively used for Deep Learning. All these types will be discussed in detail separately in upcoming articles.

1). Multilayer Perceptrons: These are the most basic deep neural networks, built from feedforward layers. They generally use non-linear activation functions (like tanh or ReLU) and compute the loss through mean squared error (MSE) or log loss. The loss is backpropagated to adjust the weights and make the model more accurate. They are often used as a component of a bigger deep learning network.
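The loop described above (forward pass, loss, backpropagation, weight update) can be sketched on the classic XOR problem. Everything here is an illustrative assumption: the layer sizes, the learning rate, and the use of tanh hidden units with a sigmoid output and MSE loss.

```python
import numpy as np

# A minimal multilayer perceptron (one hidden layer) trained by
# backpropagation on XOR: tanh hidden units, sigmoid output, MSE loss.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                    # hidden layer (tanh)
    return h, 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

initial_loss = np.mean((forward(X)[1] - y) ** 2)
lr = 0.5
for _ in range(2000):
    h, out = forward(X)
    d_out = (out - y) * out * (1 - out)   # error signal at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

final_loss = np.mean((forward(X)[1] - y) ** 2)
print(initial_loss, final_loss)
```

Running this shows the MSE dropping as the weights are adjusted, which is the "backpropagated to adjust the weights" step in miniature.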

2). Convolutional Neural Networks: Convolutional Neural Networks (ConvNets or CNNs) are similar to ordinary neural networks, but their architecture is specifically designed for images as input. Unlike a regular neural network, the layers of a ConvNet have neurons arranged in three dimensions: width, height, and depth.

They are particularly suitable for spatial data, object recognition, and image analysis, using multi-dimensional neuron structures. Convolutional neural networks are one of the main reasons for the recent popularity of deep learning. Common uses include self-driving cars, drones, computer vision, and text analytics.
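The core operation that gives CNNs their name can be sketched as a small kernel sliding over an image. This naive single-channel version is illustrative only; the tiny image and edge-detecting kernel are made-up examples.

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2-D convolution (valid padding): slide the kernel over
    the image and take a weighted sum at each position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge-detecting kernel applied to a tiny image
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])
kernel = np.array([[-1., 1.],
                   [-1., 1.]])
print(conv2d(image, kernel))
```

The output responds only where the dark and bright columns meet, which is how convolutional layers learn to pick out spatial features like edges.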

3). Recurrent Neural Networks: RNNs resemble feedforward networks, but with recurrent memory loops that take input from previous and/or the same layers. Here the connections form a directed graph along a temporal sequence. This gives them a unique ability to model the time dimension and arbitrary sequences of events and inputs.

RNN

In simpler terms, at any given instant the network maintains a memory of everything up to that moment, and can therefore predict what comes next. The most common type of RNN is the Long Short-Term Memory (LSTM) network. RNNs are used for next-word prediction and grammar learning.
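The "memory up to this moment" idea can be sketched as a hidden state carried from step to step. The dimensions and random weights below are illustrative assumptions, and this is a vanilla recurrent cell rather than an LSTM.

```python
import numpy as np

def rnn_step(x_t, h_prev, Wx, Wh, b):
    """One step of a vanilla recurrent cell: the new hidden state
    combines the current input with the memory of all previous steps."""
    return np.tanh(x_t @ Wx + h_prev @ Wh + b)

# Illustrative sizes: 3 input features, 5 hidden units
rng = np.random.default_rng(0)
Wx, Wh, b = rng.normal(size=(3, 5)), rng.normal(size=(5, 5)), np.zeros(5)

# Process a sequence of 4 inputs, carrying the hidden state forward
h = np.zeros(5)
for x_t in rng.normal(size=(4, 3)):
    h = rnn_step(x_t, h, Wx, Wh, b)
print(h.shape)  # (5,)
```

After the loop, `h` summarizes the whole sequence seen so far; a prediction layer on top of it is what enables tasks like next-word prediction.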

Thank you for reading!

Source: Google Engine

