People May Be More Trusting of AI When They Can’t See How It Works

Summary.
Georgetown University’s Timothy DeStefano and colleagues (Harvard’s Michael Menietti and Luca Vendraminelli, and MIT’s Katherine Kellogg) analyzed the stocking decisions for 425 products of a U.S. luxury fashion retailer across 186 stores. Half the decisions were made after employees received recommendations from an easily understood algorithm; the other half, after recommendations from an algorithm whose workings could not be deciphered. A comparison of the decisions showed that employees followed the guidance of the uninterpretable algorithm more often. The conclusion: People may be more trusting of AI when they can’t see how it works.