Automating Testing for ML/DL models
Image Source: www.mlops.org


Hey Folks!!!

Learning never stops!!! Keeping these three words in mind, I have spent the past 2 months on the topics of Machine Learning, Deep Learning, Neural Networks, and more, along with hands-on work with DevOps tools, and finally on integrating both of them together into what we call MLOps. Working through those learning days, there were lots of topics I had to stay focused on, and it was really challenging. The most interesting thing I learned in this context was hyperparameters (yes, for those who still don't know the term, please research it a bit on Google or in books first ;p)

The thing I found out about hyperparameters is that they are a great barrier in conquering the training and testing phase of models. Research has suggested that around 87% of AI models are scrapped at this stage, and the main reason I came across was the tuning of hyperparameters: how many layers to design, how many hidden layers, how many dense layers, how many epochs, which optimizer, and so on. Even when one has done great work in hyperparameter tuning, manual fine-tuning is a big problem, both in terms of labour and wasted time.

To avoid this to some extent, I have created a setup that lets developers write code once with many options and combinations of hyperparameters that they think could give the best model accuracy. The main idea behind it is to use containers that keep training the model until the expected accuracy is achieved.

Let's get to the setup!!!

[Image: overview diagram of the automated training setup]

Let's observe the image above for a few minutes first. Keep it in mind; if something is unclear, I will explain each step that I took. The model I used is face mask detection, as it is quite relevant in the situation of the COVID-19 pandemic and lockdowns. You can visit and check my model on GitHub; I have also pasted the link at the bottom of this article.

Step 1: Developer prepares a model

It is the responsibility of the developer to design a good model that covers all the criteria for which the model was created. In case they have doubts about some layers or which optimizer to use, they can put those alternatives in an if-else block inside a function body. For example:

import random

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D

model = Sequential()

def arch(option):
    # Add one of several candidate layer blocks with random hyperparameters
    if option == 1:
        # Convolution layer only
        model.add(Conv2D(filters=random.randint(60, 100),
                         kernel_size=random.choice(((2, 2), (3, 3), (4, 4), (5, 5), (6, 6))),
                         activation='relu'))
    elif option == 2:
        # Convolution layer followed by max pooling
        model.add(Conv2D(filters=random.randint(60, 100),
                         kernel_size=random.choice(((2, 2), (3, 3), (4, 4), (5, 5), (6, 6))),
                         activation='relu'))
        model.add(MaxPooling2D(pool_size=random.choice(((2, 2), (3, 3), (4, 4), (5, 5), (6, 6)))))
        

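The options above can then be driven by a random sampler that picks one hyperparameter combination per training run. As a minimal sketch (the search-space names and ranges here are illustrative, not the exact ones from my repo):

```python
import random

# Hypothetical search space mirroring the if/else options above;
# the names and ranges are illustrative placeholders.
SEARCH_SPACE = {
    "option": [1, 2],
    "filters": range(60, 101),
    "kernel_size": [(2, 2), (3, 3), (4, 4), (5, 5), (6, 6)],
    "optimizer": ["adam", "rmsprop", "sgd"],
    "epochs": [5, 10, 15],
}

def sample_combination(space):
    """Pick one random value for every hyperparameter in the space."""
    return {name: random.choice(list(values)) for name, values in space.items()}

combo = sample_combination(SEARCH_SPACE)  # one candidate per container run
```

Each container run then builds the model from one such combination, so no two runs have to repeat the same architecture by hand.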
Then, after creating a CNN, ANN, or any other model, he/she commits it to GitHub. Once it is committed, the role of Jenkins begins.

JOB1: Keeps track of the code through Poll SCM and then copies the files to a local repository.

JENKINS: JOB1

[Screenshots: Jenkins JOB1 configuration]
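JOB1's execute-shell step can be as small as copying the freshly polled workspace into the local repository directory. A sketch (the repo path is a placeholder for your own setup):

```shell
#!/bin/sh
# JOB1: copy the polled Jenkins workspace into a local repo directory.
# REPO is a placeholder; point it at the directory your Docker build uses.
REPO=./mlops_repo
mkdir -p "$REPO"
cp -f ./*.py "$REPO"/ 2>/dev/null || true
```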

Step 2: Create Container Image Using Dockerfile

All the basic packages and software required to run models inside containers are provided by building an image with a Dockerfile, which installs the various tools, creates the environment, sets the working directory, executes commands, and exposes ports to give access to the local system.

#Dockerfile

FROM centos:latest

LABEL maintainer="Aditya Raj <rajadityaranjan@gmail.com>"

RUN yum install python36 -y && yum install net-tools -y
RUN pip3 install --upgrade pip setuptools wheel
RUN pip3 install keras && pip3 install tensorflow
RUN pip3 install opencv-python && pip3 install opencv-contrib-python
RUN mkdir /notebooks
RUN pip3 install scikit-learn && pip3 install matplotlib
RUN yum install libXtst -y && yum install libSM -y && yum install libXrender -y
# Store files in this mounted directory
VOLUME /notebooks
COPY * /notebooks/
# Work in the same directory the files were copied to, so Train.py is found
WORKDIR /notebooks
CMD ["python3", "Train.py"]

A custom image can be built from any of the various OS base images available on Docker Hub, but the installation commands differ since each OS has its own package-manager syntax; the rest of the setup is the same for all of them.

# Dockerfile is the default filename used when building a custom image
# mytensor:v3 is the name I gave my image

docker build -f Dockerfile -t mytensor:v3 .

There are additional requirements, like creating volumes and copying files, which depend on how our Python code is executed.
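Launching a container from that image, with the local repository mounted into /notebooks, could then look like this (the host path is a placeholder for your own setup):

```shell
# Run the training container with the local repo mounted as /notebooks;
# --rm removes the stopped container after Train.py finishes
docker run --rm -v /mlops/notebooks:/notebooks mytensor:v3
```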

Step 3: Jenkins Automation For Model Training

JOB2: Launches a container for this model from the pre-created image. The container launch can be automated by setting conditions: if certain keywords are found inside the Python file, for example Conv2D or MaxPooling2D (which require TensorFlow and Keras), seaborn or matplotlib (for plotting graphs), or sklearn (for train_test_split and linear regression models), the matching image is used. (You can experiment with this using the file system and if-else conditions.)
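One way to implement that keyword check is a simple scan of the committed file. A sketch, where the keyword-to-image mapping and the image names other than mytensor:v3 are assumptions:

```python
# Map code keywords to the container image that has the right stack installed.
# Only mytensor:v3 is from my setup; the other image names are placeholders.
KEYWORD_IMAGES = {
    "Conv2D": "mytensor:v3",        # needs tensorflow/keras
    "MaxPooling2D": "mytensor:v3",
    "train_test_split": "mysklearn:v1",
    "matplotlib": "myplot:v1",
}

def pick_image(source_path, default="mypython:v1"):
    """Return the first image whose keyword appears in the source file."""
    with open(source_path) as f:
        code = f.read()
    for keyword, image in KEYWORD_IMAGES.items():
        if keyword in code:
            return image
    return default
```

JOB2 can then pass the returned image name straight to the docker run command.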

JENKINS: JOB2

[Screenshot: Jenkins JOB2 configuration]

JOB3: Monitors the container. A container built from our image stops after launching and executing its task, so JOB3 launches it again and again with different, random combinations of the hyperparameter architecture.

[Screenshot: Jenkins JOB3 configuration]

The layers used, as well as the output of the run, can be sent to the developer's email once the accuracy is above the expected accuracy set by the developer.
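The notification step can use Python's standard smtplib. A sketch, where the addresses and the local SMTP relay are placeholders for your own setup:

```python
import smtplib
from email.message import EmailMessage

def build_report(accuracy, layers, to_addr="developer@example.com"):
    """Build the result mail with the layers used and the final accuracy."""
    msg = EmailMessage()
    msg["Subject"] = "Model trained: accuracy {:.2%}".format(accuracy)
    msg["From"] = "jenkins@localhost"
    msg["To"] = to_addr
    msg.set_content("Layers used:\n{}\n\nAccuracy: {:.2%}".format(layers, accuracy))
    return msg

def mail_results(msg, host="localhost"):
    """Send via an SMTP relay (assumed to be running on the Jenkins host)."""
    with smtplib.SMTP(host) as server:
        server.send_message(msg)
```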

[Screenshot: email notification with the model results]

I hope I explained it well and that you were all able to follow along.

Finally, I would like to say that although this setup increases the efficiency of model training, many of you will still have doubts. While working through it, I found that making this setup intelligent is a continuous process. An intelligent model could even be trained to make the process more accurate, but that would require many people, and I would love to work with them.

My Github repository link:- https://github.com/rajadityaranjan/face_mask_covid19

Hope you have a wonderful life, keep learning, and gain insights for future work. Thank You!!!

