
Mar 31, 2025 | Business Administration | Faculty Research in Education

When it comes to AI, to trust or not to trust, that is the question

Artificial intelligence is rapidly expanding into nearly every field, but skepticism about its recommendations remains. That’s what Gies Business professor Sarah Lim and her colleague, Gizem Yalcin Williams, a behavioral scientist from the University of Texas at Austin, found in their research.

The duo aggregated several emerging papers on AI and how AI affects people’s emotions, cognitions (the way they think), as well as their behaviors. They published a summary of their findings in their paper, “Psychology of AI: How AI impacts the way people feel, think, and behave,” in Current Opinion in Psychology.

Highlights from the research include:

  1. Job applicants feel more comfortable being evaluated by a human resource manager as opposed to an AI algorithm because they don’t want to be perceived as just a number.

    “When an algorithm evaluates, interviewees believe it cannot process their unique personality,” Lim explained. “They feel that AI dehumanizes people and sees them more as a number. They believe they deserve better.”

    She adds that while most don’t necessarily take issue with the outcome (after all, AI would decide objectively, without bias), they find the process itself dehumanizing. She also said that those surveyed acknowledged that human evaluation takes greater effort, whereas AI can come to a conclusion much faster.

  2. If given the option of an AI-generated diagnosis (entering their symptoms into a computer) or a visit with a human doctor, patients overwhelmingly choose the human doctor.

    “They were afraid of AI missing something unique about them,” Lim said. “In general people want to be seen as a unique person with their own personality.”

  3. Mistakes made by AI are seen as a failure of AI itself, but human errors are attributed to the individual making the evaluation, not to the reliability of human decision-making as a whole.

    “People believe that humans have the ability to learn from their mistakes, but don’t believe the same is true for algorithms,” Lim noted. “That is one reason I believe people trust AI less.”

  4. The study found that people are willing to trust AI when objectivity is necessary. According to the paper, “algorithm aversion decreases when objectivity matters. Due to the perceived superiority of algorithms in objective assessments, people particularly prefer algorithms for tasks that require objective evaluations.”

    “For instance, when it comes to job performance, people actually were more willing to be monitored by an AI manager because humans are more likely to judge them,” Lim said. “They were afraid to be seen as lazy or incompetent, or to receive social judgment.”

    However, when the decisions are more emotionally driven, like finding the ideal vacation spot, people find AI less reliable.

  5. When being evaluated for a loan, the outcome made a difference in where applicants attributed the result. If they were approved, they credited human evaluation, but if they were rejected, they were quick to blame the algorithm.

    “That’s because they want to attribute that positive outcome more to themselves,” Lim said. “If they were rejected, they want it to be attributed to external factors.”

Lim, who holds a master’s degree in social psychology and a PhD in marketing, believes that there is more to learn on the subject. For instance, she is looking forward to understanding how humans can collaborate with AI in the future.

“There is ongoing research on the topic, but not many papers published,” she said.

To that end, she is currently working on two related AI projects: one on whether people prefer to delegate their work to AI or to humans, and the other on how behavior changes when collaborating with AI versus humans.