(Tho)roughly Explained: The Impact of Algorithmic Transparency on Human Decision Making

Presenter

Kevin Bauer

Leibniz Institute for Financial Research SAFE

Abstract

An increasing number of scholars and practitioners are raising the alarm about the black-box nature of machine learning (ML) based decision support systems and demanding more algorithmic transparency. While more transparent ML systems arguably mitigate concerns about accountability and discrimination, it remains an open question how such algorithmic transparency affects users’ preferences, beliefs, and behavior. To shed light on this issue, we conduct a novel between-subjects experiment. In all treatments, subjects decide as first movers in a series of trust games whether or not to trust another individual, of whom they observe 10 personal traits. There are two blocks of trust game scenarios. In block one, we vary across treatments (i) whether subjects have access to a trained, state-of-the-art ML prediction of the second mover’s propensity to reciprocate cooperation, and (ii) whether the prediction is presented together with an explanation of why that specific prediction was produced. In the second block of trust games, all subjects observe only the second movers’ 10 personal traits before making a transfer decision. Our analysis compares subjects’ transfer decisions across treatments while controlling for a broad set of additionally elicited covariates.
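The abstract does not specify which ML model or explanation technique the experiment uses. As a minimal illustrative sketch of the general setup, assuming a logistic-regression model over 10 traits, the Python snippet below predicts a (hypothetical) second mover’s propensity to reciprocate and derives a simple additive explanation from per-trait contributions to the log-odds. All data, trait names, and the model choice here are assumptions for illustration only.

```python
# Illustrative sketch only: NOT the authors' pipeline. Assumes a logistic-regression
# model over 10 standardized personal traits and explains one prediction via
# coefficient * feature-value contributions to the log-odds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training data: 10 traits per second mover and a binary label
# indicating whether that second mover reciprocated cooperation.
n_traits = 10
X = rng.normal(size=(1000, n_traits))
true_w = rng.normal(size=n_traits)
y = (X @ true_w + rng.normal(scale=0.5, size=1000) > 0).astype(int)

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

# Prediction shown to the first mover: probability of reciprocation.
x_new = rng.normal(size=(1, n_traits))
x_scaled = scaler.transform(x_new)
p_reciprocate = model.predict_proba(x_scaled)[0, 1]

# "Explanation": each trait's contribution to the log-odds of this prediction,
# ranked by absolute magnitude.
contributions = model.coef_[0] * x_scaled[0]
ranking = np.argsort(-np.abs(contributions))

print(f"Predicted reciprocation probability: {p_reciprocate:.2f}")
for i in ranking[:3]:
    print(f"trait_{i}: log-odds contribution = {contributions[i]:+.2f}")
```

In the experiment’s transparency treatment, subjects would see both the probability and an explanation of this kind; in the opaque treatment, only the prediction (or only the raw traits) is shown.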