
Denied by an (Unexplainable) Algorithm: Teleological Explanations for Algorithmic Decisions Enhance Customer Satisfaction

Working Paper
Algorithmic or automated decision-making has become commonplace, with firms implementing rule-based or statistical models to determine whether to provide services to customers based on their past behaviors. Policy-makers are pressed to determine if and how to require firms to explain the decisions made by their algorithms, especially when those algorithms are “unexplainable” because they are subject to legal or commercial confidentiality restrictions or are too complex for humans to understand. The authors study consumer responses to goal-oriented, or “teleological,” explanations, which present the purpose or objective of the algorithm without revealing its mechanism, making them candidates for explaining decisions made by “unexplainable” algorithms. In a field experiment with a technology firm and several online lab experiments, they demonstrate the effectiveness of teleological explanations and identify conditions under which teleological and mechanistic explanations can be equally satisfying. Participants perceive teleological explanations as fair, even though algorithms with a fair goal may employ an unfair mechanism. These results show that firms may benefit from offering teleological explanations for unexplainable algorithm behavior. Regulators can mitigate possible risks by educating consumers about the potential disconnect between an algorithm’s goal and its mechanism.
Faculty

Assistant Professor of Marketing

Professor of Marketing