eFinder

Anthropic's AI downgrade stings power users

Topics: Corporate Transparency · AI Performance Degradation · Digital Divide / Socioeconomic Stratification

Users of Anthropic's AI, Claude, have reported a perceived decline in performance, leading to speculation about 'nerfing' to save compute resources. Anthropic denies these claims, attributing the experience to default reasoning settings, while analysts suggest a combination of configuration changes and user habituation.

Analysis

Propaganda Score: 20% (confidence: 95%)
Minor concerns. Some persuasive language detected, but largely factual.

Detected Techniques

Loaded Language (80% confidence): Using words with strong emotional connotations to influence an audience.
Oversimplification (60% confidence): Reducing a complex issue to a simplistic framing that distorts understanding.

Fact-Check Results

6 claims extracted and verified against multiple sources including cross-references, web search, and Wikipedia.

Single Source: 4 · Corroborated: 2
“Anthropic is testing a more powerful model, Mythos” (CORROBORATED)
Two independent sources confirm the existence of the Mythos model: one describing it as a model skilled in cyber-security and another mentioning it as a 'more powerful model' that Anthropic is testing.
Web search (neutral): Anthropic is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. [7]
https://en.wikipedia.org/wiki/Anthropic
Web search (neutral): Feb 4, 2026 · Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
https://www.anthropic.com/
Web search (neutral): Claude is Anthropic's AI, built for problem solvers. Tackle complex challenges, analyze data, write code, and think through your hardest work.
https://claude.com/product/overview
“Anthropic says it adjusted the default level of reasoning in Claude Code” (SINGLE SOURCE)
Only one source (forgeeks) explicitly states that Anthropic tied online backlash to changes in the default level of reasoning in Claude Code. Other sources mention performance issues but not specifically the 'default level of reasoning' adjustment.
Web search (neutral): Anthropic is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. [7]
https://en.wikipedia.org/wiki/Anthropic
Web search (neutral): Feb 4, 2026 · Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
https://www.anthropic.com/
Web search (neutral): Claude is Anthropic's AI, built for problem solvers. Tackle complex challenges, analyze data, write code, and think through your hardest work.
https://claude.com/product/overview
“Anthropic denies the changes were tied to compute constraints or Mythos” (SINGLE SOURCE)
The source 'forgeeks' explicitly states that Anthropic denies the shift was caused by compute shortages or efforts to divert resources toward Mythos.
Web search (neutral): Anthropic addressed user complaints about Claude Code's degraded performance, confirming that it identified and fixed three key issues affecting response.
https://dataconomy.com/2026/04/24/anthropic-denies-intention…
Web search (neutral): Mythos is one of Anthropic's latest models developed as part of its broader AI system called Claude. Why are there concerns? Anthropic says during tests it found the model was highly skilled at cyber-s…
https://www.bbc.com/news/articles/crk1py1jgzko
Web search (neutral): Anthropic says the online backlash is tied to changes in the default level of reasoning in Claude Code. The company denies that the shift was caused by compute shortages or by any effort to divert res…
https://forgeeks.dev/claude-anthropic-ai-downgrade/
“Boris Cherny, head of Claude Code, posted on X on March 6 regarding the /model selector” (SINGLE SOURCE)
While sources confirm Boris Cherny is the head/creator of Claude Code and that he posted on Hacker News, there is no evidence in the provided results of a specific post on X (Twitter) on March 6 regarding a '/model selector'.
Web search (neutral): Boris Cherny is the Creator of Claude Code but few people know his full career story.
https://www.youtube.com/watch?v=AmdLVWMdjOk
Web search (neutral): Boris Cherny — head of Claude Code at Anthropic — confirmed on Hacker News that this is the recommended workaround until a permanent fix ships.
https://www.devpik.com/blog/claude-code-67-percent-dumber-an…
Web search (neutral): 6,852 sessions show Claude Code thinking depth fell 67% from baseline. March 8 regression date, cost went from $345 to $42K/month.
https://tokencost.app/blog/claude-code-getting-worse-april-2…
“Anthropic is also reportedly close to upgrading its high-end Opus model to version 4.7” (CORROBORATED)
Multiple independent sources (Anthropic's own site, 9to5Mac, and SunBrief) confirm the release and availability of Claude Opus 4.7.
Web search (neutral): Our latest model, Claude Opus 4.7, is now generally available. Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks.
https://www.anthropic.com/news/claude-opus-4-7
Web search (neutral): Anthropic has announced its latest AI model with Claude Opus 4.7. The new version arrives two months after the previous model upgrade, matching Anthropic’s previous upgrade cadence.
https://9to5mac.com/2026/04/16/anthropic-reveals-new-opus-4-…
Web search (neutral): Anthropic released Claude Opus 4.7 as a general-availability upgrade focused on hard software engineering, higher-resolution vision, and new cybersecurity guardrails while keeping pricing the same as …
https://www.smarterwithai.news/p/sunbrief-76-anthropic-drops…
“Anthropic recently moved large enterprise customers to a fully usage-based (token) pricing model” (SINGLE SOURCE)
The provided evidence for this claim consists of general company descriptions from Wikipedia and the Anthropic homepage; none of these sources mention a transition to a fully usage-based token pricing model for enterprise customers.
Web search (neutral): Anthropic is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. [7]
https://en.wikipedia.org/wiki/Anthropic
Web search (neutral): Feb 4, 2026 · Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
https://www.anthropic.com/
Web search (neutral): Claude is Anthropic's AI, built for problem solvers. Tackle complex challenges, analyze data, write code, and think through your hardest work.
https://claude.com/product/overview

Disclaimer: This analysis is generated by AI and should be used as a starting point for critical thinking, not as definitive truth. Claims are verified against publicly available sources. Always consult the original article and additional sources for complete context.