'An attempt to cripple Anthropic': US judge questions Anthropic ban
Fact-Check Results
“The US government's ban on Anthropic appears punitive, following the company's public dispute with the Pentagon over its refusal to allow unrestricted military use of its Claude AI model.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to confirm or refute claims about US government's ban on Anthropic or its connection to Pentagon disputes.
“Anthropic made its case before a San Francisco federal court on Tuesday, seeking an injunction against the US government's decision to blacklist it as a national security risk.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to verify Anthropic's legal actions or court proceedings mentioned in the claim.
“District Judge Rita F. Lin said at the outset of the hearing that 'it looks like an attempt to cripple Anthropic,' adding she was concerned the government could be punishing Anthropic for openly criticising the government's position, US media reported.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to confirm Judge Rita F. Lin's statements or the context of the hearing described.
“US President Donald Trump and Defense Secretary Pete Hegseth publicly declared in February that the government was cutting ties with the artificial intelligence (AI) company after it refused to allow unrestricted military use of its Claude AI model.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to verify statements by Trump and Hegseth regarding Anthropic in February.
“The restrictions in dispute include the use of lethal autonomous weapons without human oversight and mass surveillance of Americans.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to confirm the specific restrictions in dispute involving Anthropic's AI model.
“In response, the US government labelled Anthropic a 'supply chain risk to national security' and ordered federal agents to stop using Claude.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to verify the government's designation of Anthropic as a national security risk.
“On March 9, Anthropic filed two lawsuits against the government over its designation as a supply chain risk. One is a case for reconsideration of the supply chain risk and the other alleges the Trump administration violated the company's First Amendment right to speech.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to confirm Anthropic's lawsuits or the specific claims about First Amendment violations.
“Lin told the courtroom that the Pentagon has a right to decide which AI products it uses, but she questioned whether the government broke the law both by banning agencies from using Anthropic and when Hegseth announced that those seeking relations with the Pentagon should cut ties with the company, NPR reported.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to verify Judge Lin's statements about the legality of the government's actions.
“A lawyer for the government said the Pentagon’s actions were not retaliatory but were based on how Anthropic’s AI model could be used, not on the company’s decision to go public about the disagreement.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to confirm the government lawyer's statements about the Pentagon's actions.
“NPR also reported that Anthropic could still be at risk in the future, since it could update its Claude AI model in a way the government deems a danger to national security.”
❓
INSUFFICIENT EVIDENCE
— No evidence in archive to verify NPR's report about Anthropic's future risks from AI model updates.
“Euronews Next reached out to Anthropic for comment but did not receive a reply at the time of publication.”
❓
PENDING
“Being a supply chain risk usually only applies to foreign companies.”
❓
PENDING
“Judge Rita F. Lin stated that she expected to make a ruling in the coming days on whether to temporarily pause the government's ban while the court continues to examine the broader case.”
❓
PENDING