Artificial Intelligence, Trust, and Perceptions of Agency

Working Paper
Extant theories of trust assume that the trustee has agency (i.e., intentionality and free will). The authors propose that a crucial qualitative distinction between placing trust in Artificial Intelligence (AI) and placing trust in a human lies in the degree to which the trustor (a human) attributes agency to the trustee. They specify two mechanisms through which the extent of agency attributions can affect human trust in AI. First, if the AI is seen as more agentic, the benevolence of the trustee (the AI) becomes more important, but so does the anticipated psychological cost of a trust violation (because of betrayal aversion; see Bohnet & Zeckhauser, 2004). Second, attributions of benevolence and competence become less relevant for placing confidence in an AI system that seems non-agentic; instead, attributions of benevolence and competence to the system's designer become important. Both mechanisms imply that making an AI appear more agentic may either increase or decrease the trust that humans place in it. While designers of AI technology often strive to endow their creations with features that convey a benevolent nature (e.g., through anthropomorphism or transparency), doing so may also change agency perceptions in a way that makes the AI appear less trustworthy in human eyes.
Faculty

Professor of Strategy