Logistic regression for classification problems

Logistic regression is a fundamental machine learning algorithm for binary classification, and it can be extended to multiclass classification. Here's a breakdown:

Applications

  • Binary Classification: Common in tasks like spam detection, disease diagnosis, and sentiment analysis.
  • Multiclass Classification: Can be extended using techniques like One-vs-Rest or Softmax Regression for scenarios involving more than two classes.
  • Risk Prediction: Used in fields like finance and healthcare to predict various risk factors.
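The One-vs-Rest extension mentioned above can be sketched with scikit-learn's `OneVsRestClassifier`, which fits one binary logistic regression per class (a minimal illustration, not part of the main walkthrough below):

```python
# One-vs-Rest: fit one binary logistic regression per class
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

ovr = OneVsRestClassifier(LogisticRegression(max_iter=200)).fit(X, y)
print(len(ovr.estimators_))  # one fitted binary model per class
```

With the three Iris classes, three binary models are trained; Softmax (multinomial) regression instead fits a single model over all classes at once, which is what the main example below uses.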

Advantages

  • Simplicity: Easy to implement and interpret.
  • Efficiency: Works well on smaller datasets and requires fewer computational resources.
  • Probabilistic Interpretation: Outputs probabilities, which can be useful for understanding confidence levels in predictions.
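The probabilistic interpretation can be seen directly via `predict_proba`, which returns one probability per class that sums to 1 across classes (a minimal sketch using the same Iris data as the walkthrough below):

```python
# predict_proba returns per-class probabilities rather than hard labels
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

proba = model.predict_proba(X[:1])  # shape (1, 3): one column per class
print(proba.sum())                  # rows sum to 1
```

These per-class probabilities are what let you set custom decision thresholds or rank predictions by confidence.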

Limitations

  • Linearity Assumption: Assumes a linear relationship between independent variables and the log-odds of the dependent variable, which might not hold true for all datasets.
  • Sensitivity to Outliers: Can be affected by outliers in the data, influencing the decision boundary.
  • Requires Feature Engineering: Data often needs preprocessing (such as scaling or transforming features) to improve model performance.
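The linearity assumption above can be made concrete with a small NumPy sketch (the coefficients here are hypothetical, chosen only for illustration): the model is linear in the log-odds, and the sigmoid maps log-odds to a probability.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical intercept and slope for a single feature
b0, b1 = -1.0, 2.0
x = np.array([-2.0, 0.0, 2.0])

log_odds = b0 + b1 * x  # linear in x, by assumption
p = sigmoid(log_odds)   # probability of the positive class

# check: log(p / (1 - p)) recovers the linear log-odds
print(np.allclose(np.log(p / (1 - p)), log_odds))  # True
```

When the true relationship between features and log-odds is not linear, this assumption breaks down, which is why transformed or engineered features often help.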

Step-by-Step Implementation

Here’s how to perform logistic regression using the Iris dataset:

1. Import Necessary Libraries

#Import Necessary Libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report
from sklearn.datasets import load_iris        

2. Load the Iris Dataset

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target
iris_df = pd.DataFrame(data=X, columns=iris.feature_names)  
iris_df['target'] = y          

3. Split the Data into Training and Testing Sets

#Split the Data into Training and Testing Sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

4. Feature Scaling

# Standardize the features 
scaler = StandardScaler()  
X_train = scaler.fit_transform(X_train)  
X_test = scaler.transform(X_test)          

5. Model Training

# Create and fit the logistic regression model
# (multinomial handling is the default for multiclass targets in recent
# scikit-learn; the multi_class argument is deprecated)
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

6. Predictions

# Make predictions  
y_pred = model.predict(X_test)         

7. Evaluation

# Evaluate the model  
accuracy = accuracy_score(y_test, y_pred)  
conf_matrix = confusion_matrix(y_test, y_pred)  
class_report = classification_report(y_test, y_pred)  
print("Accuracy:", accuracy)  
print("Confusion Matrix:\n", conf_matrix)  
print("Classification Report:\n", class_report)          

8. Visualization

#Visualizing the confusion matrix
plt.figure(figsize=(8, 6))  
plt.imshow(conf_matrix, interpolation='nearest', cmap=plt.cm.Blues)  
plt.title('Confusion Matrix')  
plt.colorbar()  
tick_marks = np.arange(len(iris.target_names))  
plt.xticks(tick_marks, iris.target_names, rotation=45)  
plt.yticks(tick_marks, iris.target_names)  
plt.xlabel('Predicted label')  
plt.ylabel('True label')  
plt.tight_layout()  
plt.show()        

