Explainable AI

I felt like posting another thought that has been bugging me for quite some time now. It is about Explainable AI.

Deep Learning models can find associations between anything and everything and produce a model with 99.99% accuracy. They can use the day of the week, the gender of the operator, etc. to predict machine failure. Don't ask how; the model itself does not know. So now the AI world is running helter-skelter trying to figure out how to build explainable AI models, because clients have started asking: on what grounds have you (your ML model) identified a customer as fraudulent? Why are you (your model) asking me to target this customer and not that one? Unfortunately, Deep Learning models keep their lips sealed on this; they cannot tell you how they arrived at these conclusions. Faced with GDPR and compliance norms, clients get frustrated.
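To make this concrete, here is a minimal sketch (not from the original post) of the kind of post-hoc interrogation the field has turned to: train a black-box model on mostly spurious features, then probe it with scikit-learn's permutation importance to ask, after the fact, which features it actually leaned on. All column names here are made up for illustration.

```python
# A minimal sketch: a black-box model will happily latch onto whatever
# correlates with the target, and post-hoc tools are how we now ask it why.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
X = pd.DataFrame({
    "day_of_week": rng.integers(0, 7, n),       # spurious feature
    "operator_gender": rng.integers(0, 2, n),   # spurious feature
    "vibration_level": rng.normal(0.0, 1.0, n), # genuine signal
})
# Failure is actually driven by vibration plus noise.
y = (X["vibration_level"] + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Ask the fitted black box, after the fact, which features move its predictions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(X.columns, result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

The importances tell you *which* inputs the model uses, but not *why* those inputs should matter; that gap is exactly what the clients in the paragraph above keep running into.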

Cut back to 2007, when I submitted my PhD thesis. I used a technique called Structural Equation Modeling, which was a craze among researchers in those days. The concept is similar to Deep Learning, with one slight difference: it is a two-step process. You first define a structural model, backed by theory, business understanding and common sense. Then you define a measurement model, which is essentially identifying the variables that can be used to measure the various constructs you defined in step 1. Then you go and collect the data and try to fit the model you have defined. And as Enrico Fermi said (thanks to Mu Sigma for the graffiti), "If the results confirm the hypothesis, then you have made a measurement. If the result is contrary to the hypothesis, then you have made a discovery." What we used to do back then (in fact, the very nature of statistical modeling) was 'explainable'. Of course, the accuracy of statistical models was not as good as that of Deep Learning models; but whatever they predicted, they could explain perfectly.
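For readers who have never seen SEM, here is a minimal sketch of that two-step workflow, assuming the Python semopy package and its lavaan-style model syntax; the constructs, indicator variables and data file are all hypothetical. The point is that every relationship is specified up front, from theory, and then tested against data, rather than discovered by the machine.

```python
# A minimal SEM sketch, assuming the semopy package (pip install semopy).
# Constructs (satisfaction, loyalty) and indicators (q1..q5) are made up.
import pandas as pd
import semopy

# Step 1 and step 2 expressed in lavaan-style syntax:
#   =~  measurement model: a construct measured by observed indicators
#   ~   structural model: the theorized path between constructs
model_desc = """
satisfaction =~ q1 + q2 + q3
loyalty      =~ q4 + q5
loyalty ~ satisfaction
"""

data = pd.read_csv("survey.csv")  # hypothetical collected survey data

model = semopy.Model(model_desc)
model.fit(data)

# Every estimated path coefficient is an explicit, testable claim --
# which is what made this approach 'explainable' by construction.
print(model.inspect())
```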

What we have done in the last decade is dump something that was explainable, fall in love with something complex, and now try to figure out how that complex thing works. Does that sound familiar? At a personal level, perhaps :-). Wouldn't it have been easier to improve the prediction accuracy of statistical methods (for which you can blame the stats community) than to do what we are trying to do now?

When I try to tell young data scientists that they should start thinking about what the probable predictors could be even before they touch the data, they give me a strange look that literally means, "What century are you from?" And they say, "There is no need. The machine will learn by itself." There you go...

"machine will learn by itself" --> so say consumers too. Good one!

Like
Reply

To view or add a comment, sign in

More articles by Bindu Narayan PhD

Others also viewed

Explore content categories