Exploring Tuberculosis Binary Classification with Transfer Learning Using VGG16 and GoogLeNet (InceptionV3) Architectures

I recently worked on an exciting deep learning project where I classified tuberculosis from X-ray images. Leveraging transfer learning techniques, I explored both the VGG16 and GoogLeNet (InceptionV3) architectures to achieve high accuracy in distinguishing between tuberculosis and normal X-rays.

Dataset & Preprocessing

The dataset contained a total of 1400 images, with 700 tuberculosis and 700 normal X-ray images. To prepare the data for model training:

  • I resized the images to 224x224 pixels for consistency.
  • Created two arrays: one to store the images and another for the labels.
  • Applied label encoding to convert the data into numerical format, making it ready for training.
  • The dataset was split into 67% for training and 33% for testing, ensuring a balanced evaluation.
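The preprocessing steps above could be sketched as follows. The array contents here are placeholder zeros standing in for the real X-ray images, and the variable names and random seed are my own; the split assumes scikit-learn's `train_test_split` with `test_size=0.33`, which matches the 33% test portion described in the post.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Placeholder stand-ins for the 1400 X-ray images (700 per class),
# each resized to 224x224; in the real project these are loaded from disk.
images = np.zeros((1400, 224, 224, 3), dtype=np.uint8)
labels = np.array(["tuberculosis"] * 700 + ["normal"] * 700)

# Label encoding: convert string labels to numeric form ("normal" -> 0,
# "tuberculosis" -> 1, in alphabetical order).
encoder = LabelEncoder()
y = encoder.fit_transform(labels)

# Stratified 67/33 train/test split so both classes stay balanced.
X_train, X_test, y_train, y_test = train_test_split(
    images, y, test_size=0.33, stratify=y, random_state=42)
```

A stratified split keeps the tuberculosis/normal ratio identical in both partitions, which matters for a balanced evaluation on a dataset this size.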

[Figure: Dataset images]

[Figure: Splitting the dataset into training and testing sets and performing label encoding]

Modeling with VGG16 and GoogLeNet (InceptionV3)

I chose to experiment with two powerful convolutional neural networks (CNNs):

VGG16 Architecture:

  • I used a pre-trained VGG16 model, freezing the last 4 layers and adding custom layers on top.
  • Fine-tuned the model with Adam and AdamW optimizers for optimal performance.
  • Trained the model over 6 epochs, achieving impressive results: 99% training accuracy and 100% validation accuracy.
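The VGG16 setup above might look roughly like this. The head layers, layer sizes, and learning rate are my own illustrative choices, not the post's exact configuration; the post mentions freezing four layers, and this sketch follows the common reading of freezing the pre-trained base while leaving its last four layers trainable for fine-tuning.

```python
import tensorflow as tf
from tensorflow import keras

def build_vgg16_classifier(input_shape=(224, 224, 3), weights="imagenet"):
    """Transfer-learning sketch: pre-trained VGG16 base + a small custom head."""
    base = keras.applications.VGG16(
        include_top=False, weights=weights, input_shape=input_shape)

    # Freeze the pre-trained convolutional base, keeping the last 4 layers
    # trainable for fine-tuning (an assumption about the post's setup).
    for layer in base.layers[:-4]:
        layer.trainable = False

    model = keras.Sequential([
        base,
        keras.layers.Flatten(),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(1, activation="sigmoid"),  # binary: TB vs. normal
    ])
    # AdamW (keras.optimizers.AdamW) is a drop-in alternative to Adam here.
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Training would then be, e.g.:
# model = build_vgg16_classifier()
# model.fit(X_train, y_train, epochs=6, validation_data=(X_test, y_test))
```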

[Figure: Loading the VGG16 base model and architecture]

[Figure: Adding custom layers]

[Figure: Training the model]

[Figure: Training and validation accuracy]

InceptionV3 Architecture (GoogLeNet):

  • Similarly, I applied transfer learning to the InceptionV3 model, modifying the layers to suit my dataset.
  • After training, the model achieved: 89% training accuracy and 91% validation accuracy.
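The InceptionV3 variant could be sketched the same way. Again, the head layers are illustrative assumptions; the sketch freezes the whole pre-trained base, one common way of "modifying the layers to suit the dataset".

```python
import tensorflow as tf
from tensorflow import keras

def build_inception_classifier(input_shape=(224, 224, 3), weights="imagenet"):
    """Transfer-learning sketch: frozen InceptionV3 base + a small custom head."""
    base = keras.applications.InceptionV3(
        include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # freeze the entire pre-trained base

    model = keras.Sequential([
        base,
        keras.layers.GlobalAveragePooling2D(),  # pool the Inception feature maps
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),  # binary: TB vs. normal
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```

InceptionV3 defaults to 299x299 inputs, but with `include_top=False` it accepts 224x224 as well, so the same preprocessed arrays can be reused for both models.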

[Figure: Loading the InceptionV3 base model]

[Figure: Training the model]

[Figure: Training and validation accuracy]

In conclusion, VGG16 performed exceptionally well, reaching near-perfect accuracy, while InceptionV3 also delivered solid results; both architectures proved robust for medical image classification tasks. Transfer learning, combined with effective optimizers such as Adam and AdamW, was crucial in fine-tuning and maximizing model performance.

This project has greatly strengthened my understanding of deep learning and transfer learning, and has provided valuable insights into optimizing models for real-world applications. I look forward to exploring more advanced architectures and working with other medical imaging datasets in the future.


