BRAIN TUMOR DETECTION BOT using Deep-Learning Framework


INTRODUCTION

The objective of this project is to build a user-friendly environment for brain tumor detection using machine learning. A brain tumor is a cancerous or non-cancerous growth of cells in the brain; it often begins with strong headaches and blurred vision and can pose serious health concerns. Our bot simplifies identification: given only an MRI (magnetic resonance imaging) scan, it determines whether or not a person has a brain tumor and reports the result with a confidence level.

Image processing has found many uses in the medical field, where it enables fast, efficient, and accurate analysis of such images. Connecting a bot to a real person saves time by providing automated answers, giving users easier access to the information they need. With the rising cases of COVID-19, the fear of stepping out of the house for a check-up with a doctor, and the risk of becoming one of the cases, a person might not even consider this medical condition. To avoid such a scenario, our bot can give an idea about the cancer from home.

Brain cancer is one of the leading causes of death in patients younger than 35. A tumor can become dangerous over time as a malignant one if it begins to press on a vital area of brain tissue. Generally the treatment of choice is surgery, unless the tumor is in an inaccessible or delicate area. Giving a person an early idea of whether they have a tumor is therefore crucial: it allows precautions and actions that help fight brain cancer and avoid the worst severity of this issue.
Advances in image processing are already making an impact on the medical field, and further advances can affect people's lives by making results easier to get through bots, changing lifestyles along the way. As image processing improves, its combination with the medical field can go well beyond the present limits.


BACKGROUND

As the project title suggests, we are detecting tumors in the brain. The user sends images, specifically MRI images, as input to the Telegram bot; the bot processes the input image and returns the output to the user. Behind the Telegram bot runs our ML project, which saves the input image for processing. The model we created in Python first runs in the background to be trained on the training data set; only after training on that data set can we launch it in our bot. The saved training weights then serve as the reference for the images sent in for testing.

We train our model using a Convolutional Neural Network (CNN). The CNN helps the model train, improving the accuracy rate and decreasing the loss rate; by this we can say that our model is functioning efficiently. We also use max pooling, together with batch normalization, to obtain a better output classification.

We use a Telegram bot because the Telegram Bot API is open source, so we can modify it without raising any legal issues with Telegram. Telegram stores every image it receives under a unique ID, which the bot saves as a token in a path variable to refer to later. We have created two modes for the bot: in a personal chat, the bot replies directly to the image the user sends, and in a group chat, the bot receives input when a user replies to an image in the group. Keep in mind that the bot accepts images in compressed form only; if an image is not compressed, it returns an error to the user. We can also give commands to the bot. The list of commands lives in a help folder inside the bot, and when you start the bot it pops up and tells you which commands are available.

The Telegram bot is user-friendly and does not affect the user's privacy: it cannot access the gallery on its own without the user's permission. Finally, the Telegram bot is integrated with the CNN model to run our project for detecting tumors in the brain.

PROJECT IMPLEMENTATION

Block Diagram:

[Image: block diagram of the system]

A block diagram represents the principal parts or functions of a system as blocks connected by lines that show the relationships between them. The training phase is completed before the bot's server starts. The training image set is first pre-processed for feature extraction; there are many ways to extract features, such as max pooling. The extracted features are transferred to the CNN, whose layers train on the image set while we plot an accuracy rate and a loss rate. These plots tell us whether the model is efficient and, if so, to what degree; the labels are what let us plot the loss function. Once this phase is complete, the bot is live on the server and ready for our users.

The user can now send a test image to the bot to check for tumor detection. The bot forwards the image to the back-end server for the testing phase, which follows the same pipeline as training: pre-processing for feature extraction, then the CNN, where the image passes through the dense layers and a classified result is produced. The result falls into one of two classes: the image shows a tumor, or it does not. This classified result is sent back to the bot, which gives it as output to the user.
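The pre-processing step before feature extraction can be sketched as a small function. This is only an illustration under assumed conventions: the `preprocess` name, the 64-pixel target size, and the use of plain NumPy normalization plus zero padding are our own choices here, not necessarily what the project's code does.

```python
import numpy as np

def preprocess(img: np.ndarray, target: int = 64) -> np.ndarray:
    """Scale pixel values to [0, 1] and zero-pad the image to target x target.
    Assumes a 2-D grayscale array no larger than the target size."""
    img = img.astype(np.float32) / 255.0                 # normalize intensities
    h, w = img.shape
    out = np.zeros((target, target), dtype=np.float32)   # zero padding
    out[:h, :w] = img
    return out

# Example: a tiny 4x4 "scan" of maximum intensity padded into an 8x8 input
scan = np.full((4, 4), 255, dtype=np.uint8)
x = preprocess(scan, target=8)
```

The padded array keeps the original pixels in its top-left corner with zeros elsewhere, giving every image a uniform shape before it enters the CNN.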

Software Simulation:

The IDE we are using is Visual Studio Code 1.56.0, in which we run the code for the bot and the other model parts of the project. VS Code is one of the most capable editors available today, and many of our dependencies, such as Python and Keras, integrate with it readily.

Python 3.9.2 is used to run our code, and the source files are saved in the '.py' format. Nearly all of our code is written in Python because it is widely used by programmers around the world and has vast libraries to work with. These libraries keep the code small and efficient, since many of the functions we need are already implemented.

TensorFlow 2.2 is an open-source library for numerical computation and large-scale machine learning. It bundles together a slew of machine learning and deep learning (neural network) models and algorithms and makes them usable through a common interface. TensorFlow is a low-level library, which provides more flexibility: you can define your own functionality or services for your models. This is an important property for researchers because it allows them to change the model as user requirements change. TensorFlow also provides finer control over the network.

Telegram Bot API 5.2 is used here, as we can customize it extensively and command the bot to run as we wish. This is the latest Bot API version released by Telegram at the time of writing. Telegram's Bot API is open source, and we can give the bot a list of commands that it performs when asked.

NumPy is a general-purpose array-processing package. It provides a high performance multidimensional array object, and tools for working with these arrays. It is the fundamental package for scientific computing with Python.
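As a small illustration of why NumPy suits image work, an MRI slice can be held as a 2-D array and transformed elementwise without explicit loops. The pixel values below are arbitrary sample data, not taken from the project's dataset.

```python
import numpy as np

# A tiny 2x2 "image" stored as an unsigned 8-bit array, as image libraries return it
pixels = np.array([[0, 128], [255, 64]], dtype=np.uint8)

# One vectorized expression rescales every intensity to [0, 1]
scaled = pixels / 255.0
```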

Pandas is an open-source library that is built on top of NumPy library. It is a Python package that offers various data structures and operations for manipulating numerical data and time series. It is mainly popular for importing and analyzing data much easier. Pandas is fast and it has high performance & productivity for users.
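One place pandas fits a project like this is inspecting training runs. The per-epoch numbers below are invented for illustration; they are not the project's actual metrics.

```python
import pandas as pd

# Hypothetical per-epoch training metrics loaded into a DataFrame for analysis
history = pd.DataFrame({
    "epoch": [1, 2, 3],
    "loss": [0.61, 0.24, 0.09],
    "accuracy": [0.70, 0.91, 0.97],
})

# Find the epoch with the highest accuracy
best_epoch = int(history.loc[history["accuracy"].idxmax(), "epoch"])
```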

Matplotlib is an amazing visualization library in Python for 2D plots of arrays. Matplotlib is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack. One of the greatest benefits of visualization is that it allows us visual access to huge amounts of data in easily digestible visuals. Matplotlib consists of several plots like line, bar, scatter, histogram etc.

In this project, we import OneHotEncoder from sklearn.preprocessing to encode the two classes, 'yes' and 'no', as integer vectors, so they can be fed to the model in a better form. For example, encoder.fit([[0], [1]]), where, as per our project, '0' represents tumor and '1' represents normal (no tumor).
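A minimal sketch of that encoding step follows, using the same fit([[0], [1]]) call the text describes. The three-sample transform is our own illustrative input.

```python
from sklearn.preprocessing import OneHotEncoder

# 0 = tumor, 1 = normal (no tumor), matching the project's labelling scheme
encoder = OneHotEncoder()
encoder.fit([[0], [1]])

# Encode three hypothetical labels; .toarray() turns the sparse result dense
onehot = encoder.transform([[0], [1], [0]]).toarray()
```

Each label becomes a two-element vector with a single 1 in the position of its class, which is the form a softmax output layer with two units expects.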

Telegram offers two kinds of APIs for developers. The Bot API allows you to easily create programs that use Telegram messages as an interface, while the Telegram API and TDLib allow you to build your own customized Telegram clients. Both APIs are free to use. The Bot API connects bots to the Telegram system; Telegram bots are special accounts that do not require an additional phone number to set up, and they serve as an interface for code running somewhere on a server.

Algorithm:

The algorithm used here follows the basics of any ML project. We pass the input images into a Conv2D kernel with zero padding and apply the 'relu' activation. After max pooling, the images pass through a 'flatten' step into the fully connected layers that train the model. The architecture of the neural network is given below to make the process easier to understand.
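The conv-pool-flatten-dense stack described above can be sketched in Keras as follows. This is a minimal sketch under assumptions: the filter counts, input size, and dropout rate are illustrative choices, not the project's exact configuration.

```python
from tensorflow.keras import layers, models

# Illustrative layer sizes; the real project's hyperparameters may differ.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),                              # grayscale MRI input
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"), # zero-padded convolution + ReLU
    layers.BatchNormalization(),                                  # normalize the batch
    layers.MaxPooling2D((2, 2)),                                  # down-sample feature maps
    layers.Dropout(0.25),                                         # regularization
    layers.Flatten(),                                             # to one long feature vector
    layers.Dense(64, activation="relu"),                          # fully connected layer
    layers.Dense(2, activation="softmax"),                        # tumor / no tumor
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

The two-unit softmax output matches the two one-hot-encoded classes, so training with categorical cross-entropy follows directly.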

[Image: neural network architecture]

Social and Environmental Impact:

Keeping in mind the current situation of the health-care sector, this bot can help in several ways. For now this is a minor project; as we improve the bot, we can add more features. It can reduce the workload on the medical sector by automating part of its work, and it could be added to an official medical website where patients can clear their queries, among other features.

RESULTS & DISCUSSIONS

Layers used:

CONV2D: This layer creates a convolution kernel that is convolved with the layer's input to produce a tensor of outputs. Kernels are generally smaller than the input image, so we slide them across the whole image, which keeps processing fast.

BATCH NORMALISATION: This allows every layer of the network to learn more independently. It normalizes the output of the previous layer, which stabilizes the learning process and dramatically reduces the number of training epochs required to train a deep network. It also regularizes the model, reducing the need for dropout.
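The core of that normalization can be shown in a few lines of NumPy. This sketch omits the learnable scale and shift parameters and the running statistics that a real batch normalization layer maintains; the sample batch is invented.

```python
import numpy as np

def batch_norm(x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Normalize each feature across the batch to zero mean and unit variance.
    The trainable gamma/beta parameters of a real layer are omitted here."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

# A batch of 3 samples with 2 features on very different scales
batch = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
normed = batch_norm(batch)
```

After normalization both features live on the same scale, which is what lets the following layers learn without chasing shifting input distributions.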

ReLU: The rectified linear activation function, or ReLU for short, is a piecewise linear function that outputs the input directly if it is positive and zero otherwise. It mitigates the vanishing gradient problem, allowing models to learn faster and perform better.
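That piecewise behavior is one line of NumPy:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    # Positive values pass through unchanged; negatives become zero.
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 3.0]))
```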

MAX-POOL: This is a down-sampling layer. The operation selects the maximum element from the region of the feature map covered by the filter, so the output after a max pooling layer is a feature map containing the most prominent features of the previous one. The background in these images is made black to reduce the computation cost and the noise.
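A 2x2 max-pool with stride 2, the common configuration, can be sketched with a reshape trick. The 4x4 feature map below is sample data chosen to make the block maxima easy to follow.

```python
import numpy as np

def max_pool_2x2(fmap: np.ndarray) -> np.ndarray:
    """2x2 max pooling with stride 2; assumes even height and width."""
    h, w = fmap.shape
    # Split into 2x2 blocks, then take the max inside each block
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 1, 1],
                 [0, 2, 9, 8],
                 [1, 1, 7, 5]])
pooled = max_pool_2x2(fmap)   # each 2x2 block collapses to its maximum
```

The 4x4 map shrinks to 2x2 while keeping the strongest response from each region, which is exactly the "most prominent features" behavior described above.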

DROPOUT: This prevents our sequential model from overfitting. It deactivates some of the neurons at random, which makes the network less sensitive to any individual neuron.

FLATTEN: This converts the pooled feature map into a single column that is passed to the fully connected layer; in other words, it turns the data into a one-dimensional array for input to the next layer. We flatten the output of the convolution layers to create one long feature vector.

DENSE: This adds a fully connected layer to the neural network. A dense layer feeds all outputs from the previous layer to all of its neurons, each neuron providing one output to the next layer. It is the most basic layer in neural networks, and here it performs the final classification.

EPOCH: An epoch is one cycle through the full training data-set. Since one epoch is too big to feed to the computer at once, we divide it into several smaller batches. One epoch means that each sample in the training dataset has had an opportunity to update the internal model parameters.
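The epoch/batch arithmetic is simple but worth making concrete. The dataset size and batch size below are illustrative numbers, not the project's actual values.

```python
import math

# With 1,000 training images and a batch size of 32 (illustrative numbers),
# one epoch consists of ceil(1000 / 32) = 32 parameter updates.
n_samples, batch_size = 1000, 32
batches_per_epoch = math.ceil(n_samples / batch_size)
```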

ANALYSIS

Model Loss:

[Image: model loss plot]

Loss & Validation Loss: A loss function quantifies how "good" or "bad" a given predictor is at classifying the input data points in a data set. The smaller the loss, the better the classifier is at modelling the relationship between the input data and the output targets. Training loss is measured during each epoch, while validation loss is measured after each epoch.
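For a two-class problem like this one, a typical choice of loss is binary cross-entropy; a small NumPy version makes the "smaller is better" behavior concrete. The prediction values here are invented examples.

```python
import numpy as np

def binary_cross_entropy(y_true: np.ndarray, y_pred: np.ndarray,
                         eps: float = 1e-7) -> float:
    """Mean binary cross-entropy; lower values mean a better-fitting classifier."""
    y_pred = np.clip(y_pred, eps, 1 - eps)   # avoid log(0)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1 - y_true) * np.log(1 - y_pred)))

# Confident correct predictions score a much smaller loss than hesitant ones
confident = binary_cross_entropy(np.array([1, 0]), np.array([0.99, 0.01]))
uncertain = binary_cross_entropy(np.array([1, 0]), np.array([0.60, 0.40]))
```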

Model Accuracy:

[Image: model accuracy plot]

Accuracy & Validation Accuracy: An accuracy function measures the efficiency of the model after it is trained. The higher the accuracy, the better the classifier is at modelling the relationship between the input data and the output targets. Training accuracy is measured during each epoch, while validation accuracy is measured after each epoch.
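Accuracy itself is just the fraction of predicted labels that match the true labels, which a one-line NumPy function captures. The label arrays are sample data.

```python
import numpy as np

def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the true class labels."""
    return float(np.mean(y_true == y_pred))

# Three of four sample predictions match the true labels
acc = accuracy(np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1]))
```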

Model Building:

Table 4.1 showing details related to Model Building


This table lists the layers we use in our model: conv2d, batch normalization, max pooling, dropout, flatten, and dense. It also gives the output shape of the input at every layer it passes through, along with each layer's parameter count. The model we use here is 'sequential'. The bottom of the table summarizes the parameters: total, trainable, and non-trainable. From this we can judge whether the model is ready for training. There are various models we could have used, but this one is best suited to our desired output.

Epoch:

Table 4.2 showing details related to Epoch


This list shows all the epochs we ran while training the model. The loss rate decreases while the accuracy rate increases, which is a good sign: the lower the loss and the higher the accuracy, the more efficient we can say the model is. At the end, the accuracy is 0.9979 and the loss is 0.0063.


CONCLUSION

In a changing world where new diseases emerge regularly, all we can do is progress technologically, and this project is one of the first steps in that direction. It assists an individual in determining whether or not they have a brain tumor. During these lock-down times, the project can be very beneficial to patients: as we all know, the majority of people are now confined to their households and forced to work from home in order to combat the pandemic, and in this moment of crisis the bot can be a lifesaver for many who need assistance with medical issues. The aim of this initiative is to reduce hospital workloads so that hospitals can concentrate on combating the pandemic. The bot is simple to use and does not compromise your privacy. The project's back-end is built with a CNN, and the front-end with the Telegram Bot API; we have combined the bot with our trained model to provide the desired result, which also demonstrates that our model is effective and functional.


REFERENCES

1. GitHub - https://github.com/MohamedAliHabib/Brain-Tumor-Detection

2. Kaggle - https://www.kaggle.com/akshitmadan/tumor-classifification on-using-keras-for-beginners

3. YouTube - https://www.youtube.com/watch?v=1fBx2laX9pg

4. ML - https://www.youtube.com/playlist?list=PLZbbT5o_s2xq7LwI2y8_QtvuXZedL6tQU

5. Telegram Bot API - https://core.telegram.org/bots/api

6. GitHub - https://github.com/AbhishekNanda7429/brain-tumor-detection-bot



