Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Information Processing and Learning

Presenter

Kevin Bauer

SAFE Leibniz Institute

Abstract

To overcome inefficiencies associated with the black-box nature of machine learning (ML) systems, researchers, practitioners, and regulators alike increasingly call for explainability methods that allow different human stakeholders to better understand how and why systems generate specific predictions. Focusing on lay users and local explanations, the paper at hand empirically studies how explainability (i) affects users’ reliance on an ML system, (ii) influences users’ information processing, and (iii) shapes knowledge transfers from the system to humans. Explainability causes users to put more weight on information highlighted as important than on the system’s overall prediction, even though it does not lower their beliefs about the system’s predictive performance. Due to this change in the weighting of information, explainability leads to worse decision-making relative to an opaque system. We find that users learn from explanations and continue to put more weight on specific information even after they no longer have access to the system, but only when the explanations are in line with their prior beliefs. Our results emphasize that explainability can be a double-edged sword. On the one hand, it may lead users to question system outputs more frequently. On the other hand, it may entail selective knowledge transfers that justify and reinforce subjective beliefs.