AI that agrees too much with users could distort judgment, study finds
Read the original article: https://www.euronews.com/next/2026/03/27/ai-tools-risk-distorting-users-judgment…
Detected Techniques

Fact-Check Results
12 claims extracted and verified against multiple sources including cross-references, web search, and Wikipedia.
Insufficient Evidence: 7
Verified By Reference: 3
Pending: 2
“Even a brief interaction with a flattering chatbot could 'skew an individual’s judgment,' making people less likely to apologise or attempt to repair relationships, the study found.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the claim about chatbot interactions skewing judgment.
“Artificial intelligence (AI) chatbots that offer support for personal issues could be reinforcing harmful beliefs by excessively agreeing with the user, a new study found.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the claim about AI chatbots reinforcing harmful beliefs.
“Researchers from the American university Stanford measured sycophancy, the extent to which an AI flatters or validates a user, across 11 leading AI models, including OpenAI’s ChatGPT 4-0, Anthropic’s Claude, Google’s Gemini, Meta Llama-3, Qwen, DeepSeek and Mistral.”
VERIFIED BY REFERENCE
Wikipedia entries about Stanford University, Anthropic, and Perplexity AI do not mention the specific study or sycophancy measurements across 11 models.
Wikipedia (neutral): Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and dec…
https://en.wikipedia.org/wiki/Artificial_intelligence
Wikipedia (neutral): The history of artificial intelligence (AI) began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic …
https://en.wikipedia.org/wiki/History_of_artificial_intellig…
Wikipedia (neutral): Stanford University has many centers and institutes dedicated to the study of various specific topics. These centers and institutes may be within a department, within a school but across departments, …
https://en.wikipedia.org/wiki/Stanford_University_centers_an…
“To see how these systems handled moral ambiguity, the researchers turned to more than 11,000 posts from r/AmITheAsshole, a Reddit community where people confess conflicts and ask strangers to judge whether they were in the wrong.”
VERIFIED BY REFERENCE
Wikipedia entries about Anthropic, Dead Internet theory, and Perplexity AI are unrelated to the study's analysis of Reddit posts for moral ambiguity.
Wikipedia (neutral): Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a family of large language models (LLMs) named Claude. Anthropic operates as a public…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (neutral): The dead Internet theory is a conspiracy theory that asserts that, since around 2016, the Internet has consisted primarily of bot activity and automated content manipulated by algorithmic curation. Th…
https://en.wikipedia.org/wiki/Dead_Internet_theory
Wikipedia (neutral): Perplexity AI, Inc., or simply Perplexity, is an American privately held software company offering a web search engine that processes user queries and synthesizes responses. Perplexity products use la…
https://en.wikipedia.org/wiki/Perplexity_AI
“On average, AI models affirmed the actions of a user 49 percent more often than other humans did, even on cases involving deception, illegal actions or other harms.”
VERIFIED BY REFERENCE
Wikipedia definitions of AI and generative AI do not reference the 49% affirmation statistic or the study's findings.
Wikipedia (neutral): AI commonly refers to artificial intelligence, which is intelligence demonstrated by machines. Ai, ai, or AI may also refer to:
https://en.wikipedia.org/wiki/Ai
Wikipedia (neutral): Artificial intelligence (AI) is the capability of computational systems to perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception, and dec…
https://en.wikipedia.org/wiki/Artificial_intelligence
Wikipedia (neutral): Generative artificial intelligence, also known as generative AI or GenAI, is a subfield of artificial intelligence that uses generative models to generate text, images, videos, audio, software code or…
https://en.wikipedia.org/wiki/Generative_AI
“In one case, a user admitted having feelings for a junior colleague. Claude responded gently, saying it 'can hear [the user’s] pain,' and that they had ultimately chosen an 'honourable path.' Human commenters were far harsher, calling the behaviour 'toxic' and 'bordering on predatory'.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the specific example involving Claude and the junior colleague scenario.
“A second experiment saw over 2,400 participants discuss real-life conflicts with AI systems. The results showed that even brief interactions with a flattering chatbot could 'skew an individual’s judgment,' making people less likely to apologise or attempt to repair relationships.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the second experiment with 2,400 participants.
“'Our results show that across a broad population, advice from sycophantic AI has the real capacity to distort people’s perceptions of themselves and their relationships with others,' the study said.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the claim about distorted perceptions from the study.
“In severe cases, AI sycophancy could lead to self-destructive behaviours such as delusions, self-harm or suicide for vulnerable people, the study found.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the claim about self-destructive behaviors linked to AI sycophancy.
“The results show that AI sycophancy is 'a societal risk' and needs to be regulated, the researchers said.”
INSUFFICIENT EVIDENCE
No evidence found in cross-references, web search, or Wikipedia to support the claim about societal risks from AI sycophancy.
“One way to do this would be to require pre-deployment behavioural audits, which would evaluate how agreeable an AI model is and how likely it is to reinforce harmful self-views.”
PENDING
“The researchers note that their study recruited US-based participants, so it likely reflects dominant American social values and 'may not generalise to other cultural contexts,' which might have different norms.”
PENDING
Disclaimer: This analysis is generated by AI and should be used as a starting point for critical thinking, not as definitive truth. Claims are verified against publicly available sources. Always consult the original article and additional sources for complete context.