MLops : Session - 10
Deep Learning:
In today's article, we introduce deep learning: why we need it, deep learning vs. machine learning, TensorFlow, and the core logic behind how TensorFlow works.
Introduction:
At a basic level, deep learning is a machine learning technique. It teaches a computer to filter inputs through layers to learn how to predict and classify information. Observations can be in the form of images, text, or sound.
The inspiration for deep learning is the way the human brain filters information. It is a method that selects features automatically and makes predictions. Deep learning allows machines to solve complex problems even when the data set is very diverse, unstructured, and inter-connected.
Why we need this while having traditional Machine Learning?
The traditional machine learning we have seen so far does not work well when we have Big Data. It is not that deep learning simply gets more accuracy: libraries like scikit-learn and NumPy use a traditional approach to computation, which is why we don't use them for Big Data.
The main difference between deep learning and machine learning lies in how data is presented to the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANNs (artificial neural networks). Below, we see how this works internally.
TensorFlow:
We have seen that the NumPy library does not compute well when we have a large dataset. That's why Google developed a new library, TensorFlow. It is a numerical processing library similar to NumPy, but it has a Tensor datatype, which is similar to the array datatype. A Tensor is an N-dimensional array, used to represent N-dimensional datasets. Let's talk about why it is faster in computation than a NumPy array...
- Tensors use a lazy execution approach.
- Computations are expressed as data flow graphs.
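Before looking at graphs, here is a minimal sketch of how the Tensor datatype mirrors a NumPy ndarray (assuming TensorFlow 2.x installed as `tensorflow`):

```python
import numpy as np
import tensorflow as tf

# A Tensor is an N-dimensional array, much like a NumPy ndarray
arr = np.array([[1, 2], [3, 4]])
t = tf.constant(arr)      # wrap the ndarray in a Tensor
print(t.shape)            # (2, 2)
print(t.dtype)            # e.g. <dtype: 'int64'>
back = t.numpy()          # convert back to a NumPy array
```

Conversion in both directions is cheap, which makes it easy to mix the two libraries.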
What do you mean by the graph in Tensor?
TensorFlow uses lazy evaluation. In other words, TensorFlow first creates a computational graph, with the operations as the nodes of the graph and tensors as its edges, and execution happens only when the graph is run in a session. This is commonly called the dataflow programming model, and it is especially suited to parallel computing.
Let's see this through a small example. To print a constant in normal Python, we write `print(5)`, which outputs 5. But in TensorFlow's graph mode, printing `tf.constant(5)` outputs something like `Tensor("Const:0", shape=(), dtype=int32)` — the tensor's metadata, not its value, because nothing has been computed yet.
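A runnable sketch of both behaviours (assuming TensorFlow 2.x, where a `tf.function` is used to reproduce the lazy, graph-mode view):

```python
import tensorflow as tf

# Plain Python: print evaluates immediately
print(5)                 # 5

# TensorFlow 2.x runs eagerly by default, so the value is visible
t = tf.constant(5)
print(t)                 # tf.Tensor(5, shape=(), dtype=int32)

# Lazy (graph) behaviour: inside a tf.function, printing the tensor
# shows only a symbolic placeholder, because the value has not been
# computed yet -- the graph runs later
@tf.function
def show():
    x = tf.constant(5)
    print(x)             # Tensor("Const:0", shape=(), dtype=int32)
    return x

show()
```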
You can think of lazy execution as: compute an operation only whenever its result is needed, otherwise not. In TensorFlow 2.0, eager execution is the default. To change that to lazy (graph) execution, we use a decorator before our function,
@tf.function
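A short sketch of the decorator in use (hypothetical function names; the point is that the decorated version is traced once into a graph and then executed as a graph):

```python
import tensorflow as tf

def eager_add(a, b):
    return a + b                   # runs eagerly, op by op

@tf.function                       # compiles the function into a dataflow graph
def graph_add(a, b):
    return a + b

x = tf.constant(2)
y = tf.constant(3)
print(int(eager_add(x, y)))        # 5
print(int(graph_add(x, y)))        # 5 -- same result, executed as a graph
```

The results are identical; the difference is that the graph version can be optimized and parallelized before it runs.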
The second important reason to use lazy execution is distributed computing. Because the program is expressed as a graph of operations, TensorFlow can partition it across multiple devices (CPUs, GPUs, and TPUs) attached to different machines. TensorFlow inserts the necessary communication and coordination between devices.
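A minimal single-machine sketch of device placement (pinning ops to the CPU here for illustration; `'/GPU:0'` would be used the same way if a GPU were present):

```python
import tensorflow as tf

# Explicitly pin operations to a device; TensorFlow handles any
# data transfer between devices for us
with tf.device('/CPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [1.0, 1.0]])
    c = tf.matmul(a, b)

print(c.device)   # e.g. /job:localhost/replica:0/task:0/device:CPU:0
```

Across machines, the same idea extends to the `tf.distribute` strategies, which split the graph over multiple workers.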
In our next session, I will discuss some more terminology, like perceptrons, optimizers, and Keras, do a lot more with TensorFlow, and start training our machine using deep learning.
See you in the next session...
Happy Learning :)
Thanks a lot