eFinder

Court rejects Anthropic appeal over US national security risk label



Fact-Check Results

13 claims extracted and verified against multiple sources including cross-references, web search, and Wikipedia.

Insufficient Evidence: 7
Verified By Reference: 3
Pending: 3
“The debate started when Anthropic refused to give the US government unfettered access to its AI chatbot, Claude.”
VERIFIED BY REFERENCE
No evidence found in Wikipedia, web search, or cross-references confirming Anthropic's refusal to grant unfettered access to the US government for Claude.
Wikipedia (NEUTRAL) — Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude. Anthropic was founded in 2021 b…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (NEUTRAL) — Claude is a series of large language models developed by Anthropic and first released in 2023. Its name has been described both as a tribute to Claude Shannon, who pioneered information theory, and as…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (NEUTRAL) — Dario Amodei (born 1983) is an American artificial intelligence (AI) researcher and entrepreneur. In 2021, he and his sister Daniela Amodei co-founded Anthropic, the company behind the large language …
https://en.wikipedia.org/wiki/Dario_Amodei
“A court in the United States has rejected American artificial intelligence (AI) company Anthropic's request to shield it from being labelled a supply chain risk by the country's government.”
VERIFIED BY REFERENCE
No evidence found in Wikipedia, web search, or cross-references confirming a US court rejecting Anthropic's request to avoid a supply chain risk label.
Wikipedia (NEUTRAL) — Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude. Anthropic was founded in 2021 b…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (NEUTRAL) — Claude is a series of large language models developed by Anthropic and first released in 2023. Its name has been described both as a tribute to Claude Shannon, who pioneered information theory, and as…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (NEUTRAL) — Dario Amodei (born 1983) is an American artificial intelligence (AI) researcher and entrepreneur. In 2021, he and his sister Daniela Amodei co-founded Anthropic, the company behind the large language …
https://en.wikipedia.org/wiki/Dario_Amodei
“The label has never before been applied to an American company.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming whether the supply chain risk label had previously been applied to an American company.
“The Trump administration labelled the AI company a supply chain risk and ordered federal agents to stop using Anthropic's AI assistant Claude in February, after the company refused to allow unrestricted military access to its model.”
VERIFIED BY REFERENCE
No evidence found in Wikipedia, web search, or cross-references confirming the Trump administration labeled Anthropic a supply chain risk in February.
Wikipedia (NEUTRAL) — Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude. Anthropic was founded in 2021 b…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (NEUTRAL) — Claude is a series of large language models developed by Anthropic and first released in 2023. Its name has been described both as a tribute to Claude Shannon, who pioneered information theory, and as…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (NEUTRAL) — Dario Amodei (born 1983) is an American artificial intelligence (AI) researcher and entrepreneur. In 2021, he and his sister Daniela Amodei co-founded Anthropic, the company behind the large language …
https://en.wikipedia.org/wiki/Dario_Amodei
“This label blocks contractors who work with the Pentagon from using the company's AI models on Department of Defence contracts.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming the supply chain risk label prevents Pentagon contractors from using Anthropic's AI models on DoD contracts.
“The restrictions that are being disputed include the use of Claude for lethal autonomous weapons without human oversight and mass surveillance of Americans.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming the disputed restrictions involve lethal autonomous weapons or mass surveillance.
“In 2025, Anthropic signed a $200 million (€171.5 million) contract with the Pentagon to deploy its technology within the military's systems.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming a $200 million 2025 contract between Anthropic and the Pentagon.
“Following that deal, the AI chatbot had been rolled out throughout the US government's classified information networks, deployed at national nuclear laboratories, and was doing intelligence analysis directly for the Department of Defence.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming Claude's deployment in US government classified networks or nuclear labs post-2025.
“This setback for Anthropic in Washington comes after the company won a separate lawsuit focused on the same issues in a San Francisco court, which forced President Donald Trump’s administration to remove the label.”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming a San Francisco lawsuit forcing the Trump administration to remove the supply chain risk label.
“Anthropic filed the two lawsuits in San Francisco and Washington last month and accused the Trump administration of engaging in an 'unlawful campaign of retaliation.'”
INSUFFICIENT EVIDENCE
No evidence found in Wikipedia, web search, or cross-references confirming that Anthropic filed the lawsuits or accused the Trump administration of retaliation.
“In their March filing, the Department of Defence wrote that Anthropic might 'attempt to disable its technology or preemptively alter the behaviour of its model' before or during 'warfighting operation' if the company 'feels that its corporate 'red lines' are being crossed.'”
PENDING
“The panel at the D.C. Circuit Court of Appeals said it did not see any reason to revoke the Trump administration's actions because 'the precise amount of Anthropic's financial harm is not clear.'”
PENDING
“However, the appeals court will be hearing more evidence from this case in May.”
PENDING

Disclaimer: This analysis is generated by AI and should be used as a starting point for critical thinking, not as definitive truth. Claims are verified against publicly available sources. Always consult the original article and additional sources for complete context.