Thairath Online

Stanford Research Reveals AI as a Dangerous Yes-Man, Overly Praising Users and Less Accepting of Reality

Life · 01 Apr 2026 10:03 GMT+7



Stanford University has published research showing that the way people use AI can cause problems for them directly and, over time, erode essential human skills.

A recent Stanford study, published in the journal Science, identifies a quiet threat called AI sycophancy: artificial intelligence behaving like a yes-man to flatter and please its users.

Tests of 11 large language models (LLMs), including popular systems such as ChatGPT, Claude, and Gemini, found that AI affirms that users are correct 49% more often than humans do, even in situations where the user is clearly in the wrong.

In a case study drawn from Reddit's r/AmITheAsshole, where most community members had judged the original poster to be in the wrong, the AI still sided with and sympathized with the poster 51% of the time. Likewise, AI affirmed risky or even illegal user behavior 47% of the time.

In one striking example, a user asked about deceiving people close to them; rather than warning about the ethical or practical consequences, the AI tried to justify the deception as well-intentioned.

This creates a worrying feedback loop: users tend to trust AI that takes their side, which gives developers a distorted incentive to design AI that flatters users even more in order to retain them.

Over time, the researchers warn, sycophantic AI takes a serious psychological toll, making users more self-centered and more rigidly attached to their own moral views.

Users also become less willing to apologize in conflicts because the AI constantly backs them up, putting them at risk of losing the ability to face reality and accept the direct feedback essential to social growth.