The Necessity of a Gradient of Explainability in AI


Too much detail can be overwhelming, yet insufficient detail can be misleading.

Kevin Berlemont, PhD

Towards Data Science


“Any sufficiently advanced technology is indistinguishable from magic” — Arthur C. Clarke

With the advances in self-driving cars, computer vision, and more recently, large language models, science can sometimes feel like magic! Models are becoming more and more complex every day, and it can be tempting to wave your hands in the air and mumble something about backpropagation and neural networks when trying to explain complex models to a new audience. However, it is still necessary to be able to describe an AI model, its expected impact, and its potential biases, and that’s where Explainable AI comes in.

With the explosion of AI methods over the past decade, users have come to accept the answers they are given without question. The whole algorithmic process is often described as a black box, and it is not always straightforward, or even possible, to understand how a model arrived at a specific result, even for the researchers who developed it. To build trust and confidence among their users, companies must characterize the fairness, transparency, and underlying decision-making processes of the systems they employ. This not only leads to a more responsible approach towards AI systems, but also increases technology adoption (https://www.mckinsey.com/capabilities/quantumblack/our-insights/global-survey-the-state-of-ai-in-2020).

One of the hardest parts of explainability in AI is clearly defining the boundaries of what is being explained. An executive and an AI researcher will neither require nor accept the same amount of information. Finding the right level of information, somewhere between a straightforward explanation and an exhaustive account of every path the model could have taken, requires a lot of training and feedback. Contrary to common belief, removing the maths and complexity from an explanation does not render it meaningless. It is true that there is a risk of over-simplifying and misleading the person into thinking they have a deep understanding of the model and of what they can do with it. However, the use of the right techniques can give clear explanations at the right level that would lead the person to ask questions to someone else, such as a data scientist, to further…
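As one concrete illustration of tailoring an explanation to its audience, here is a minimal sketch using permutation feature importance in scikit-learn. The dataset, model, and parameter choices are placeholders of my own, not anything prescribed by the article; the point is that a short, ranked list of the most influential features is often the right level of detail for a non-technical audience, while the full analysis stays with the data scientist.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy dataset and model standing in for any "black box" classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the test score drops. A large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# A ranked top-5 summary is a plain-language explanation an executive can act on;
# the full importance distributions remain available for deeper questions.
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```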


