Abstract
What makes people trust algorithms? We know that demonstrated accuracy, high interpretability, and prior familiarity with AI, among other factors, increase the likelihood that subjects comply with an algorithmic recommendation. However, most prior research investigates compliance with an algorithmic recommendation relative to a subject's own belief, a comparison that is usually confounded by human overconfidence. We mitigate this confound by exposing subjects to identical advice labeled as coming from either an algorithm or a human crowd, thereby isolating the effect of algorithmic recommendations relative to crowd recommendations without the confound of natural human overconfidence. This dissertation comprises three research projects, one experiment each, that investigate how people choose to respond to an algorithmic recommendation, moderated by the type and difficulty of the task. The tasks are drawn from three quadrants of McGrath's Circumplex Model of Group Tasks to ensure task-type diversity. Paper One investigates how humans weigh the estimates of a crowd against those of an algorithm in an objective, intellective task. Paper Two investigates how humans respond to recommendations from a crowd and an algorithm in the context of a creative task. Paper Three investigates how humans respond to recommendations from an algorithm when resolving conflicting interests.