Explainable AI

It’s time to get rid of the black boxes and cultivate trust in Machine Learning.

Machine Learning models have been branded ‘black boxes’ by many: although they can produce accurate predictions, we cannot clearly explain or identify the logic behind those predictions. So how do we go about extracting important insights from a model? What should we keep in mind, and what features or tools do we need to achieve that? These are the important questions that come to mind whenever model explainability is raised.

Some examples of how ML is currently used include:

  • Predictions while Commuting
  • Video Surveillance
  • Product Recommendations
  • Online Fraud Detection

Insights which can be extracted from a model

To interpret a model, we require the following insights:

  • Which features in the model are most important.
  • For any single prediction, the effect of each feature in the data on that particular prediction.
  • The effect of each feature across a large number of possible predictions.

Working

Consider a model that predicts whether or not a customer is involved in money laundering, based on certain parameters drawn from their transaction history.

Permutation importance is calculated after a model has been fitted. So let’s first fit a Logistic Regression model, denoted pickled_logistic_model, on the training data.
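A minimal sketch of this setup, assuming synthetic stand-in data (the feature names, such as amount_wire_30dd, are illustrative; a real pipeline would use actual transaction-history features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical transaction-history features; names are illustrative only.
feature_names = ["amount_wire_30dd", "n_txn_30dd", "avg_txn_amount", "n_countries_30dd"]
X = rng.normal(size=(1000, 4))
# Synthetic label: laundering risk driven mostly by the wire-amount column.
y = (X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

pickled_logistic_model = LogisticRegression().fit(X_train, y_train)
print(f"validation accuracy: {pickled_logistic_model.score(X_val, y_val):.2f}")
```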

Calculating and Displaying importance using the eli5 library:

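As a sketch of what permutation importance computes, here is a hand-rolled version on synthetic data (the data and feature names, including amount_wire_30dd, are hypothetical): shuffle one feature column at a time and record how much validation accuracy drops.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features; the first column drives the synthetic label.
feature_names = ["amount_wire_30dd", "n_txn_30dd", "avg_txn_amount", "n_countries_30dd"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

def permutation_importance(model, X, y, n_repeats=5):
    """Mean (and spread of) accuracy drop when each column is shuffled."""
    base = model.score(X, y)
    means, stds = [], []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and target
            drops.append(base - model.score(Xp, y))
        means.append(float(np.mean(drops)))
        stds.append(float(np.std(drops)))
    return means, stds

means, stds = permutation_importance(model, X_val, y_val)
# Display in a "weight ± spread" table, most important feature first.
for m, s, name in sorted(zip(means, stds, feature_names), reverse=True):
    print(f"{m:+.4f} ± {s:.4f}  {name}")
```

With eli5 itself, the equivalent is roughly `eli5.sklearn.PermutationImportance(model, random_state=1).fit(X_val, y_val)` followed by `eli5.show_weights(...)`, which renders the same kind of weight ± spread table discussed below.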
Interpretation

  • The features at the top are the most important; those at the bottom, the least. In this example, the amount of wire transactions in the last 30 days (amount_wire_30dd) was the most important feature.
  • The number after the ± measures how much performance varied from one reshuffling to the next.
  • Some weights are negative. In those cases, predictions on the shuffled data happened to be more accurate than on the real data.

Ethics

The main aims of Explainable AI (XAI) are to make machines explain themselves and to reduce the impact of biased algorithms.

Conclusion

Machine Learning doesn’t have to be a black box anymore. What use is a good model if we cannot explain its results to others? Interpretability is as important as building the model itself. To achieve wider acceptance among the population, it is crucial that Machine Learning systems are able to provide satisfactory explanations for their decisions. As Albert Einstein said, “If you can’t explain it simply, you don’t understand it well enough.”
