Optimizing Brain Tumor Detection Using Deep Learning: A Comparison of Adam and SGD Optimizers
Goal -
The primary goal of this study is to analyze the effect of different optimizers on the accuracy of a brain tumor detection model, and to classify brain tumors using deep learning techniques such as Convolutional Neural Networks (CNNs) and Artificial Neural Networks (ANNs).
Introduction-
Brain tumors are among the most aggressive diseases, affecting both children and adults. Timely detection and classification of brain tumors are crucial for effective treatment planning.
Tumors are classified into various types.
Magnetic Resonance Imaging (MRI) is the most effective technique for detecting brain tumors, and radiologists examine these MRI scans to diagnose them.
With advancements in machine learning and deep learning, automated classification techniques can aid in improving the accuracy of tumor detection, especially in the early stages.
Dataset Description-
This dataset, sourced from Kaggle, is divided into training and testing sets.
Folder Structure-
The images are organized into separate Training and Testing folders, with one subfolder per tumor class, which is the layout the ImageFolder loader (used below) expects.
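As a concrete illustration, such a layout might look like the following; the class names here (glioma, meningioma, pituitary, no_tumor) and the paths are assumptions for illustration, not necessarily the dataset's exact folder names:

```python
# Expected on-disk layout for torchvision's ImageFolder: one subfolder per class
# inside each of the Training and Testing roots. Class names below are
# illustrative assumptions, not necessarily the dataset's exact folders.
#
#   data/
#   ├── Training/
#   │   ├── glioma/
#   │   ├── meningioma/
#   │   ├── pituitary/
#   │   └── no_tumor/
#   └── Testing/
#       ├── glioma/
#       └── ...

TRAIN_DIR = "data/Training"  # hypothetical paths, reused in the sketches below
TEST_DIR = "data/Testing"
```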
Implementation-
The model is built using Python and libraries such as Pandas, NumPy, Matplotlib, and PyTorch. The architecture is based on a modified version of AlexNet, with added convolutional and fully connected layers to improve performance.
The ImageFolder dataset class (from torchvision.datasets) and the DataLoader (from torch.utils.data) are used to read images from the class folders and group them into batches for efficient processing. Since the images have varying dimensions (e.g., 512x512, 350x350), they are resized to a uniform 512x512 using the Resize transform from torchvision.transforms, ensuring a consistent input size for the CNN.
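A minimal sketch of this loading pipeline, using the hypothetical paths from the folder-structure sketch above (the batch size of 32 is only a placeholder, since batch size is one of the variables studied):

```python
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Resize every image to 512x512 and convert it to a tensor, as described above.
transform = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
])

# ImageFolder infers each image's label from the name of its parent folder.
train_data = datasets.ImageFolder(root=TRAIN_DIR, transform=transform)
test_data = datasets.ImageFolder(root=TEST_DIR, transform=transform)

# DataLoader batches the images; 32 is a placeholder batch size.
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)
test_loader = DataLoader(test_data, batch_size=32, shuffle=False)
```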
Model Summary -
The model is a CNN-based architecture, similar to AlexNet but with additional layers to capture deeper features for more accurate classification.
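As a rough illustration of such an architecture, the sketch below follows AlexNet's feature extractor and adds an extra fully connected layer; all layer sizes and the four-class output are assumptions, not the exact model used in the study:

```python
import torch.nn as nn

class TumorCNN(nn.Module):
    """AlexNet-style CNN sketch; layer sizes and the number of output
    classes are illustrative assumptions, not the article's exact model."""
    def __init__(self, num_classes=4):  # 4 classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(64, 192, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.Conv2d(192, 384, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2),
        )
        # Adaptive pooling makes the classifier head independent of input size.
        self.pool = nn.AdaptiveAvgPool2d((6, 6))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(4096, 1024), nn.ReLU(),  # extra FC layer beyond AlexNet
            nn.Linear(1024, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.pool(self.features(x)))
```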
Optimizers-
torch.optim is the PyTorch package that implements various optimization algorithms.
This study focuses mainly on finding the best combination of batch size and optimizer. The two optimizers compared are Adaptive Moment Estimation (Adam) and Stochastic Gradient Descent (SGD).
Adam Optimizer:
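Adam adapts a separate step size for each parameter from running estimates of the gradient's first and second moments, which usually gives fast convergence with little manual tuning. A minimal sketch of creating it with torch.optim (the hyperparameters shown are PyTorch's defaults, not necessarily those used in the experiments):

```python
import torch.optim as optim

model = TumorCNN()  # the sketch model from above

# Adam: per-parameter adaptive learning rates from gradient moment estimates.
# lr=0.001 and betas=(0.9, 0.999) are PyTorch's defaults, written out explicitly.
adam_optimizer = optim.Adam(model.parameters(), lr=0.001, betas=(0.9, 0.999))
```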
SGD Optimizer:
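SGD, by contrast, applies one global learning rate to all parameters, optionally accelerated by momentum; it often needs more tuning than Adam but can generalize well. A corresponding sketch (the lr and momentum values are assumptions):

```python
# SGD: a single global learning rate for every parameter.
# lr=0.01 and momentum=0.9 are common choices, assumed here for illustration.
sgd_optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```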
Training Time Comparison-
The training time required for each combination of optimizer and batch size is recorded and visualized. This provides insights into the computational efficiency and effectiveness of each method at different batch sizes.
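A minimal sketch of how such a timing grid can be collected; the batch sizes, learning rates, and epoch count are assumptions, and train_data / TumorCNN come from the earlier sketches:

```python
import time
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader

criterion = nn.CrossEntropyLoss()

def timed_training(model, loader, optimizer, epochs=5):
    """Run a full training loop and return the elapsed wall-clock seconds."""
    start = time.time()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return time.time() - start

timings = {}
for batch_size in [16, 32, 64]:  # assumed grid of candidate batch sizes
    loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
    for name, build in [("Adam", lambda p: optim.Adam(p, lr=0.001)),
                        ("SGD", lambda p: optim.SGD(p, lr=0.01, momentum=0.9))]:
        model = TumorCNN()  # fresh weights for a fair comparison
        timings[(name, batch_size)] = timed_training(
            model, loader, build(model.parameters()))
```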
Visualizations-
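One way to visualize the recorded times, assuming the timings dictionary from the sketch above:

```python
import matplotlib.pyplot as plt

batch_sizes = [16, 32, 64]  # same assumed grid as above
for name in ["Adam", "SGD"]:
    plt.plot(batch_sizes,
             [timings[(name, b)] for b in batch_sizes],
             marker="o", label=name)
plt.xlabel("Batch size")
plt.ylabel("Training time (seconds)")
plt.title("Training time per optimizer and batch size")
plt.legend()
plt.show()
```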
Conclusion & Findings -