Much like 'showing your work' on a math problem, explainable AI is an AI service whose outputs can be explained: its decision-making and machine learning processes don't take place in a black box. It's a transparent service, built to be dissected and understood by human practitioners. The key is attaching an 'explanation' to each output, a mapping from the inputs to the decision they produced.
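One way to picture "adding explanation to the output" is a service that returns not just a prediction but also the per-input contributions that produced it. The sketch below uses a simple linear scoring model; the function name, feature names, and decision labels are all hypothetical, chosen only for illustration.

```python
# Minimal sketch: an explainable prediction returns the decision AND
# the input-to-output mapping behind it. All names are hypothetical.

def explainable_predict(features, weights):
    """Score a linear model and report each feature's contribution."""
    # Each feature's contribution is its value times its weight.
    contributions = {name: value * weights[name]
                     for name, value in features.items()}
    score = sum(contributions.values())
    prediction = "approve" if score > 0 else "deny"
    # The explanation is the per-feature breakdown of the score.
    return {"prediction": prediction, "explanation": contributions}

result = explainable_predict(
    features={"income": 1.0, "debt": 2.0},
    weights={"income": 0.8, "debt": -0.5},
)
print(result["prediction"])   # the decision that was made
print(result["explanation"])  # why: each feature's contribution
```

A practitioner reading the `explanation` dictionary can see exactly which input drove the decision, which is the kind of insight a black-box service cannot offer.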
Explainable AI Example
Given one input (in this case, a photo), two outputs are needed: the decision itself and the explanation behind it. AI doesn't see what we see. It can't even see pictures; we don't yet have the compute power for that. Instead, the AI sees a "summary" of the photo or video. This is called "vectorization." Instead of a video (a series of pictures), the AI sees only the vectorized data, such as an obstacle with XYZ coordinates. Instead of seeing a car, it sees a box. This is important to keep in mind when considering explainable AI: the AI only ever sees these vectors.
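The vectorization described above can be sketched in a few lines: the model never receives pixels, only a compact numeric summary of each obstacle. The box format below (position plus size) is a simplified, hypothetical representation, not any particular perception system's actual output.

```python
# Sketch of "vectorization": a car in the scene is reduced to a box,
# described by XYZ coordinates and dimensions. The format is a
# hypothetical simplification for illustration.

def vectorize_obstacle(x, y, z, width, height, depth):
    """Summarize one detected obstacle as a flat feature vector."""
    # From the model's point of view, this vector IS the obstacle.
    return [x, y, z, width, height, depth]

# What a perception pipeline might hand downstream: not pixels,
# just vectors, one per detected obstacle.
car = vectorize_obstacle(x=12.0, y=-1.5, z=0.0,
                         width=1.8, height=1.5, depth=4.2)
print(car)  # [12.0, -1.5, 0.0, 1.8, 1.5, 4.2]
```

Because the vector is all the AI ever sees, any explanation of its decision has to be phrased in terms of these numbers, not in terms of the original photo.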
Why Explainable AI is Necessary
Understanding why your AI service made a certain decision, or how it derived a certain insight, is key for AI practitioners to better integrate AI services. Take autonomous vehicles: how the AI system is built and how it interacts with the vehicle are high stakes, potentially a matter of life or death. If the AI system makes a mistake, its builders need to understand why it happened so they can fix and improve it. If their AI service lives and operates in a black box, they have no insight into how to debug or improve it.
Business Use Cases
For low-stakes applications like AI-powered chatbots or sentiment analysis of social feeds, it matters little if the AI system operates in a black box. But for use cases with a large human impact - autonomous vehicles, aerial navigation and drones, military applications - being able to understand the decision-making process is mission critical. As we rely more and more on AI in our everyday lives, we need to be able to understand its 'thought process' and make changes and improvements over time.
Also known as Interpretable AI or Transparent AI.