Automated Machine Learning

Why is Automated Machine Learning Important?

Manually constructing a machine learning model is a multistep process that requires domain knowledge, mathematical expertise, and computer science skills – which is a lot to ask of one company, let alone one data scientist. Not only that, there are countless opportunities for human error and bias, which degrade model accuracy and devalue the insights you might get from the model. Automated machine learning enables organizations to use the baked-in knowledge of data scientists without spending the time and money to develop those capabilities themselves, simultaneously improving return on investment in data science initiatives and reducing the time it takes to capture value.

Automated machine learning makes it possible for businesses in every industry – health care, marketing, retail, sports, manufacturing, and more – to leverage machine learning and AI technology that was previously only available to organizations with vast resources at their disposal. By automating most of the modeling tasks necessary to develop and deploy machine learning models, automated machine learning enables business users to implement machine learning solutions with ease, freeing an organization's data scientists to focus on more complex problems.

Hyperparameters: What is a hyperparameter?

A hyperparameter is a parameter that is set before the learning process begins. These parameters are tunable and can directly affect how well a model trains. Some examples of hyperparameters in machine learning:

1. Learning rate

2. Number of epochs

3. Momentum

4. Regularization constant

5. Number of branches in a decision tree

6. Number of clusters in a clustering algorithm (like k-means)


Optimizing Hyperparameters


Hyperparameters can have a direct impact on the training of machine learning algorithms. Thus, in order to achieve maximal performance, it is important to understand how to optimize them. Here are some common strategies for optimizing hyperparameters:


Grid Search: Exhaustively evaluate every combination in a manually predefined grid of hyperparameter values and keep the best-performing combination. (This is the traditional method.)
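As a sketch of the idea, with a toy objective function standing in for a real train-and-validate run (the function and the grid values are illustrative only):

```python
import itertools

def objective(lr, batch_size):
    # Stand-in for "train a model, return validation accuracy";
    # this made-up function peaks at lr=0.01, batch_size=32.
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 32) / 1000

grid = {"lr": [0.001, 0.01, 0.1], "batch_size": [16, 32, 64]}

best_score, best_params = float("-inf"), None
for lr, bs in itertools.product(grid["lr"], grid["batch_size"]):
    score = objective(lr, bs)   # one full train/evaluate cycle per combination
    if score > best_score:
        best_score, best_params = score, {"lr": lr, "batch_size": bs}

print(best_params)  # -> {'lr': 0.01, 'batch_size': 32}
```

Note the cost: the number of training runs is the product of all grid sizes, which is why grid search scales poorly past a few hyperparameters.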


Random Search: Similar to grid search, but replaces the exhaustive search with random sampling. This can outperform grid search when only a small number of hyperparameters actually affect the algorithm's performance.
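The same toy setup with random sampling instead of a grid; the trial budget, log-scale sampling of the learning rate, and candidate batch sizes are illustrative choices:

```python
import random

def objective(lr, batch_size):
    # Stand-in for a real train/validate run (illustrative only).
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 32) / 1000

random.seed(0)
best_score, best_params = float("-inf"), None
for _ in range(20):   # fixed trial budget instead of a full grid
    params = {"lr": 10 ** random.uniform(-4, -1),   # sample lr on a log scale
              "batch_size": random.choice([16, 32, 64, 128])}
    score = objective(**params)
    if score > best_score:
        best_score, best_params = score, params
```

Sampling the learning rate on a log scale matters: good values often span several orders of magnitude, which a uniform linear scale covers badly.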


Bayesian Optimization: Builds a probabilistic model of the function mapping hyperparameter values to the objective evaluated on a validation set, and uses that model to choose the most promising values to try next.


Gradient-Based Optimization: Compute the gradient of the validation objective with respect to the hyperparameters, then optimize them with gradient descent. This only applies to hyperparameters for which such a gradient exists (e.g. a continuous learning rate, but not the number of layers).


Evolutionary Optimization: Uses evolutionary algorithms (e.g. genetic algorithms) to search the space of possible hyperparameters.
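A minimal evolutionary loop over the same kind of toy fitness function (population size, mutation factors, and generation count are illustrative):

```python
import random

def fitness(params):
    # Stand-in for validation accuracy (illustrative only); peaks at lr=0.01.
    return 1.0 - abs(params["lr"] - 0.01) * 10

def mutate(params):
    # Perturb the learning rate multiplicatively.
    return {"lr": params["lr"] * random.choice([0.5, 0.8, 1.25, 2.0])}

random.seed(1)
population = [{"lr": 10 ** random.uniform(-4, -1)} for _ in range(6)]
for _ in range(15):                 # generations
    population.sort(key=fitness, reverse=True)
    survivors = population[:3]      # selection: keep the fittest half
    # offspring: mutated copies of random survivors
    population = survivors + [mutate(random.choice(survivors)) for _ in range(3)]

best = max(population, key=fitness)
```

Because survivors are always carried over, the best fitness found never decreases from one generation to the next.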



Task description


1. Create a container image that has Python 3 and Keras or NumPy installed, using a Dockerfile.
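A minimal Dockerfile along those lines (the package list and the train.py entrypoint name are illustrative placeholders; pin versions as needed):

```dockerfile
FROM python:3.8
RUN pip install --no-cache-dir numpy pandas scikit-learn tensorflow keras
WORKDIR /mlops
# train.py is a placeholder; the real script is mounted or copied in later.
CMD ["python3", "train.py"]
```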







 syntax:
    docker build -t <imagename> <path to Dockerfile>

    docker build -t sklearn_ml .

(here "." means the current location/folder)

    docker build -t nn .

(note that Docker image names must be lowercase, so "nn" rather than "NN")



Job 1: Pull the GitHub repo automatically whenever a developer pushes to GitHub.
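In Jenkins this is a freestyle job with the repo URL configured, the GitHub webhook (or Poll SCM) trigger enabled, and an "Execute shell" build step roughly like this (/tmp/mlops is a placeholder path for the fixed folder that later jobs mount into the container):

```shell
# Jenkins build step for Job 1: after the SCM checkout, sync the
# workspace to a fixed folder that later jobs mount into the container.
DEST="${DEST:-/tmp/mlops}"
mkdir -p "$DEST"
cp -rf . "$DEST"/ || true   # '|| true' so a permission hiccup doesn't fail the job
echo "workspace synced to $DEST"
```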




2. When we launch this image, it should automatically start training the model in the container.

By looking at the code or program file, Jenkins should automatically start a container built from the image that has the required machine learning software and interpreter installed, deploy the code into it, and start training (e.g. if the code uses a CNN, Jenkins should start the container that already has all the software required for CNN processing).


I have created a Python script that differentiates between the kinds of code and copies the data and the code file into their respective folders, which are later mounted into the container's working directory.
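A minimal sketch of the idea behind that script; the keyword heuristics and category names are my illustrative choices, not the exact ones from the repo:

```python
def classify_code(source: str) -> str:
    """Guess which kind of model a training script builds by scanning
    for characteristic keywords, so the matching container can be chosen."""
    if "Conv2D" in source or "MaxPooling2D" in source:
        return "cnn"        # convolutional network -> CNN container
    if "Dense" in source or "Sequential" in source:
        return "ann"        # plain Keras network -> ANN container
    return "sklearn"        # default: classical ML container

# The Jenkins job reads the pushed file and dispatches on the result,
# e.g. copying it into the folder mounted by the matching container.
snippet = "model.add(Conv2D(32, (3, 3), activation='relu'))"
print(classify_code(snippet))   # -> cnn
```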

Job 3: Train the model and report its accuracy or other metrics.

Job 4: If accuracy is less than 80%, tweak the machine learning model architecture.



I have created a Python script that first trains the model exactly as it was provided by the developer. It then applies automatic feature scaling and adds layers to the neural network architecture, checking the accuracy against the threshold after every run; once the threshold is crossed, the developer is informed by mail.
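The control flow can be sketched like this; train_once and tweak are stand-ins for the real Keras training and tweaking code, and the made-up accuracy formula just makes the loop terminate:

```python
def train_once(arch):
    # Stand-in for building/compiling/fitting a model from an
    # architecture description and returning validation accuracy.
    return min(0.99, 0.60 + 0.07 * len(arch["layers"]))

def tweak(arch):
    # Stand-in for the auto-tweaking step: here, deepen the network.
    return {"layers": arch["layers"] + [{"units": 64}]}

THRESHOLD = 0.80
arch = {"layers": [{"units": 32}]}   # the architecture as the developer wrote it
accuracy = train_once(arch)          # first run: exactly as provided
attempts = 0
while accuracy < THRESHOLD and attempts < 5:   # capped retraining budget
    arch = tweak(arch)
    accuracy = train_once(arch)
    attempts += 1
# once accuracy >= THRESHOLD, the mail job notifies the developer
```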



AUTO FEATURE SCALING
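The scaling step amounts to standardization (zero mean, unit variance per feature); a dependency-free sketch of what the script does to each column:

```python
def standardize(column):
    """Scale one feature column to zero mean and unit variance."""
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n
    std = var ** 0.5 or 1.0   # guard against constant columns (std == 0)
    return [(x - mean) / std for x in column]

print(standardize([10.0, 20.0, 30.0]))  # -> [-1.2247..., 0.0, 1.2247...]
```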







Auto addition of Dense layers in case of ANN and CNN

I have set a limit of 3-4 Dense layers; if the accuracy does not increase, the script instead changes the units, i.e. the number of neurons in each layer.
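That policy can be sketched as pure logic over an architecture description; the layer cap and unit sizes are my illustrative values for the 3-4 layer limit described above:

```python
MAX_DENSE = 4   # the 3-4 Dense layer limit described above

def tweak_dense(layers):
    """If accuracy did not improve: first deepen the network (up to
    MAX_DENSE Dense layers), then start widening by doubling the
    number of units in every layer."""
    if len(layers) < MAX_DENSE:
        return layers + [{"type": "Dense", "units": 64}]
    return [{"type": "Dense", "units": layer["units"] * 2} for layer in layers]
```

Starting from one 64-unit layer, three tweaks deepen the net to four layers; the next tweak switches to widening, doubling every layer to 128 units.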





AUTO ADDITION OF CONVOLUTION LAYERS WITH DIFFERENT NUMBERS OF FILTERS AND FILTER SIZES
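The same idea for convolution layers, cycling through filter counts and filter (kernel) sizes across retraining attempts; the candidate values are illustrative:

```python
FILTER_OPTIONS = (32, 64, 128)       # numbers of filters to try
KERNEL_OPTIONS = ((3, 3), (5, 5))    # filter sizes to try

def next_conv_spec(trial):
    """Return the conv-layer spec for retraining attempt `trial`,
    enumerating every filters x filter-size combination in turn."""
    filters = FILTER_OPTIONS[trial % len(FILTER_OPTIONS)]
    kernel = KERNEL_OPTIONS[(trial // len(FILTER_OPTIONS)) % len(KERNEL_OPTIONS)]
    return {"type": "Conv2D", "filters": filters, "kernel_size": kernel}
```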



Job 5: Retrain the model, or notify the developer that the best model has been created.




Here, "add" means increasing the number of layers.




Tweaking the model by adding layers and changing the number of neurons.



Here we got the best accuracy in the 4th retraining job, after which the mail is sent to the developer.

For this job I have used mailx.
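The notification itself boils down to a one-liner (the address and wording are placeholders, and mailx must already be configured on the Jenkins host; the guard makes the step a no-op where it is not):

```shell
SUBJECT="Model trained: accuracy threshold crossed"
TO="developer@example.com"   # placeholder address
# Send the notification once the threshold check passes.
command -v mailx >/dev/null && \
  echo "Your model reached the target accuracy." | mailx -s "$SUBJECT" "$TO" || true
```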


Mail code


Create one extra job for monitoring: if the container where the app is running fails for any reason, this job should automatically start the container again from where the last trained model left off.
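One way to express that monitor as a periodically triggered shell build step (CONTAINER is a placeholder name; because the model and weights live on a mounted volume, restarting the container resumes from the last trained state):

```shell
CONTAINER="${CONTAINER:-ml_trainer}"   # placeholder container name
# If the container is not in the running list, start it again; the
# mounted volume means the last trained model's state is still there.
if ! docker ps --format '{{.Names}}' | grep -qx "$CONTAINER"; then
  docker start "$CONTAINER" || true    # ignore errors if docker/container is absent
fi
```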



Thanks "Vimal Sir " for guiding and leading us towards the world of creators not the users by providing us the #rightEducation.


GitHub repo:

https://github.com/raghav1674/MLOPS/blob/master/README.md

