Sidney Scott-Sharoni

Sidney Scott-Sharoni at Ph.D. commencement December 2025

Would you follow a chatbot’s advice more if it sounded friendly? 

That question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names, such as Alexa or Claude, and speak conversationally, but too much familiarity can backfire. Earlier this year, OpenAI rolled back a ChatGPT update after the model became overly “sycophantic,” behavior that could cause problems for users with mental health issues.

New research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons. 

These surprising findings come from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the School of Psychology. Years of prior research suggested people would be socially influenced by AI they liked; Scott-Sharoni’s work showed the opposite.

“Even though people rated humanistic agents better, that didn't line up with their behavior,” she said. 

Likability vs. Reliability 

Scott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI’s response, and decided whether to change their answer. She expected people to listen to agents they liked.

“What I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,” she noted.

Surprised by this result, Scott-Sharoni next studied moral judgments with an AI voice agent. For example, participants decided how to handle being undercharged on a restaurant bill.

Once again, participants liked the humanlike agent better but listened to the robotic agent more. The unexpected pattern led Scott-Sharoni to explore why people behave this way.

Bias Breakthrough

Why the gap? Scott-Sharoni’s findings point to automation bias — the tendency to see machines as more objective than humans.

Scott-Sharoni tested this idea further with a third experiment built around the prisoner’s dilemma, a game in which players choose whether to cooperate with or retaliate against each other. In her task, participants played the game against an AI agent.

“I hypothesized that people would retaliate against the humanlike agent if it didn’t cooperate,” she said. “That’s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.”

The final study, a self-driving car simulation, was the most realistic and troubling for safety concerns. Participants didn’t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.

Designing the Right AI

The findings carry important implications for AI engineers. As AI spreads, designers may be tempted to cater to user preferences, but what people want isn’t always what works best.

“Many people develop a trusting relationship with an AI agent,” said Bruce Walker, a professor of psychology and interactive computing and Scott-Sharoni’s Ph.D. advisor. “So, it’s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney's work makes a critical contribution to that ultimate goal.” 

When safety and compliance are the point, robotic beats relatable.

News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu