Why Explainable Artificial Intelligence (XAI)?
Machine learning has opened paths for the development of algorithms and AI models that are taking the Internet by storm, yet we know little about how these models function or how their outputs are generated. As developers and users, it is important that we understand AI models better: we often cannot explain how a given input produces a given output, or how a machine makes decisions and reaches a conclusion. This opacity raises questions about trustworthiness, accuracy and fairness.
AI is as susceptible to bias as we are, because the data fed into machine learning algorithms is provided by us, and human biases are reflected in documented data and research. Even when gender, race, sexual orientation and other labels that might be discriminated against were removed, there have been incidents where AI made biased decisions. In 2018, Amazon stopped using a recruiting system for assessing resumes after discovering that it favoured men based on the language used. AI has made its way into administration, security, law and legal systems, which means that decisions made by AI, and the way it processes data, directly affect people's lives. This calls for transparency and an understanding of why the algorithms work the way they do.
Explainable Artificial Intelligence, or XAI, aims to provide that transparency. Its objective is to make AI models more explainable so that users can understand, trust and use them effectively. There have been several approaches to explaining algorithms, but they are broadly categorised as self-interpretable models and post-hoc explanations.
Self-interpretable models, such as decision trees and regression models, can be read and understood directly by us. Post-hoc explanation methods, such as LIME (Local Interpretable Model-Agnostic Explanations), explain an individual decision made by an algorithm by querying the model's outputs on perturbed inputs and fitting a simpler, interpretable model around that one decision.
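To make the post-hoc idea concrete, here is a minimal LIME-style sketch in plain Python, not the actual LIME library. The `black_box` model, the feature names (`income`, `debt`) and all parameter values are illustrative assumptions: we treat the model as opaque, query it on random perturbations around one instance, and fit a local linear surrogate whose weights serve as per-feature explanations.

```python
import math
import random

# Hypothetical black-box scoring model (an assumption for illustration):
# we may only query its outputs, never inspect its internals.
def black_box(income, debt):
    z = 0.05 * income - 0.08 * debt  # hidden internals of the "opaque" model
    return 1.0 / (1.0 + math.exp(-z))

def gauss_solve(a, b):
    """Solve a small dense linear system with Gauss-Jordan elimination."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def lime_style_explanation(model, instance, n_samples=500, scale=5.0, seed=0):
    """Explain one prediction by querying the model on perturbations of
    `instance` and fitting a local linear surrogate via least squares."""
    rng = random.Random(seed)
    rows, ys = [], []
    for _ in range(n_samples):
        perturbed = [v + rng.gauss(0.0, scale) for v in instance]
        rows.append(perturbed + [1.0])  # features plus an intercept term
        ys.append(model(*perturbed))
    # Ordinary least squares via the normal equations: (X^T X) w = X^T y
    cols = len(instance) + 1
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(cols)]
           for i in range(cols)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(cols)]
    return gauss_solve(xtx, xty)[:-1]  # drop intercept: per-feature weights

# Local explanation for one applicant: a positive weight means the feature
# pushes the score up near this instance; a negative weight pushes it down.
w_income, w_debt = lime_style_explanation(black_box, [30.0, 20.0])
print(f"local importance: income={w_income:+.4f}, debt={w_debt:+.4f}")
```

For this toy model the surrogate assigns income a positive local weight and debt a negative one, which is exactly the kind of human-readable answer a recruiter or auditor could check against domain knowledge, without ever opening the black box. A self-interpretable model such as a small decision tree skips this step entirely: its if-then structure is the explanation.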
Explainable AI is being actively pursued by researchers and developers alike, because it is important that autonomous machines do not appear to us as "black boxes" but as trustworthy, reliable algorithms that we understand. XAI will therefore be a necessary development in the future of AI.