eFinder

Congress stalls on military AI as Google and the Pentagon strike deal

AI Regulation and Military Use · Private Sector Influence on Defense Technology

The article reports on a new, reportedly permissive contract between the Pentagon and Google for the use of its Gemini AI model in classified settings. It notes that this agreement differs from OpenAI's contract and highlights concerns from critics, such as DeepMind research scientist Alex Turner, regarding the lack of legal restrictions. Advocacy groups are pushing for Congress to establish mandatory military AI safeguards and greater transparency.

Analysis

Propaganda Score: 30% (confidence: 90%)
Minor concerns. Some persuasive language detected, but largely factual.

Detected Techniques

Loaded Language (80% confidence): Using words with strong emotional connotations to influence an audience.
Selective Omission (70% confidence): Deliberately leaving out important context or facts that would change interpretation.

Fact-Check Results

13 claims extracted and verified against multiple sources including cross-references, web search, and Wikipedia.

Corroborated: 6
Pending: 3
Single Source: 2
Insufficient Evidence: 2
“The Pentagon this week reached an agreement with Google to use its model for "all lawful use," a source familiar confirmed.”
CORROBORATED
Multiple web search results report that Google signed a classified deal with the Pentagon allowing the use of its AI models for 'any lawful government purpose' or 'any lawful use.'
Wikipedia (neutral): Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. Anthro…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (neutral): Claude is a series of large language models developed by Anthropic and first released in 2023. Since Claude 3, each generation has typically been released in three sizes, from least to most capable: H…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (neutral): Project Maven (officially Algorithmic Warfare Cross Functional Team) is a United States Department of Defense initiative launched in 2017 to accelerate the adoption of machine learning and data integr…
https://en.wikipedia.org/wiki/Project_Maven
+ 3 more evidence sources
“Google's Gemini can now be used for classified settings, and the contract is reportedly more permissive than OpenAI's.”
CORROBORATED
Multiple web search results confirm that the new agreement allows the Pentagon to use Gemini AI for 'all lawful purposes' within classified settings. While the evidence confirms the scope expansion, it does not provide enough detail to compare the contract's permissiveness relative to OpenAI's agreement.
Wikipedia (neutral): Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. Anthro…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (neutral): Claude is a series of large language models developed by Anthropic and first released in 2023. Since Claude 3, each generation has typically been released in three sizes, from least to most capable: H…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (neutral): OpenAI Global, LLC is an American artificial intelligence (AI) research organization consisting of a for-profit public benefit corporation (PBC) and a nonprofit foundation, headquartered in San Franci…
https://en.wikipedia.org/wiki/OpenAI
+ 3 more evidence sources
“OpenAI says it retains "full discretion" over its safety mechanisms while Google agreed to adjust its safety settings at the government's request, according to The Information, which first reported the deal.”
CORROBORATED
The web search results confirm the distinction: OpenAI retained 'full discretion' over its safety stack, while Google committed to helping the government adjust its safety filters upon request.
Web search (neutral): OpenAI retained full control over its "Safety Stack" in its February deal, according to the company's own blog post. Google, by contrast, has committed to helping the government adjust its safety filt…
https://the-decoder.com/google-signs-ai-deal-with-the-pentag…
Web search (neutral): OpenAI retains what it describes as “full discretion” over its safety stack, including the ability to run and update classifiers that monitor use. The company says this deployment architecture enables…
https://www.bundle.app/en/technology/what-rights-do-ai-compa…
“DeepMind research scientist Alex Turner criticized the agreement, posting that Google "can't veto usage" and is relying on "aspirational language with no legal restrictions."”
SINGLE SOURCE
The claim attributes the criticism to 'DeepMind research scientist Alex Turner.' However, the provided evidence only contains general web search results and Wikipedia entries, none of which mention Alex Turner or his specific criticism regarding 'can't veto usage.' Therefore, the claim cannot be corroborated by the provided evidence.
Wikipedia (neutral): Peter Brian Hegseth (born June 6, 1980) is an American government official and former television personality who has served since 2025 as the 29th United States secretary of defense. Hegseth studied p…
https://en.wikipedia.org/wiki/Pete_Hegseth
Wikipedia (neutral): Richard Schiff (born May 27, 1955) is an American actor. He is best known for playing Toby Ziegler on The West Wing, a role for which he received an Emmy Award. Schiff made his television directorial …
https://en.wikipedia.org/wiki/Richard_Schiff
Wikipedia (neutral): Rohit "Ro" Khanna (born September 13, 1976) is an American politician and attorney serving as the U.S. representative from California's 17th congressional district since 2017. A member of the Democrat…
https://en.wikipedia.org/wiki/Ro_Khanna
+ 3 more evidence sources
“The Pentagon-Google deal comes as AI labs develop more powerful models and the technology's use comes under scrutiny amid the Iran war.”
CORROBORATED
The web search results confirm the Pentagon-Google deal occurred amidst scrutiny of AI technology use. One source mentions the general anxiety over AI in war, and another mentions the context of the Iran war. The evidence supports the confluence of these factors.
Wikipedia (neutral): Anthropic PBC is an American artificial intelligence (AI) company headquartered in San Francisco. It has developed a range of large language models (LLMs) named Claude and focuses on AI safety. Anthro…
https://en.wikipedia.org/wiki/Anthropic
Wikipedia (neutral): Claude is a series of large language models developed by Anthropic and first released in 2023. Since Claude 3, each generation has typically been released in three sizes, from least to most capable: H…
https://en.wikipedia.org/wiki/Claude_(language_model)
Wikipedia (neutral): Project Maven (officially Algorithmic Warfare Cross Functional Team) is a United States Department of Defense initiative launched in 2017 to accelerate the adoption of machine learning and data integr…
https://en.wikipedia.org/wiki/Project_Maven
+ 3 more evidence sources
“Google and OpenAI both say they have the same red lines: no AI for autonomous weapons or mass surveillance.”
CORROBORATED
Multiple web search results cite language from the Google contract stating that the AI System 'is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target selection) without appropriate human oversight.' This statement is presented in the context of both Google's and OpenAI's stated red lines.
Wikipedia (neutral): Google AI Studio is a web-based integrated development environment developed by Google for prototyping applications using generative AI models. Released in December 2023 alongside the Gemini API, the …
https://en.wikipedia.org/wiki/Google_AI_Studio
Wikipedia (neutral): Gemini (also known as Google Gemini and formerly known as Bard) is a generative artificial intelligence chatbot and virtual assistant developed by Google. It is powered by the family of large language…
https://en.wikipedia.org/wiki/Google_Gemini
Wikipedia (neutral): OpenAI Global, LLC is an American artificial intelligence (AI) research organization consisting of a for-profit public benefit corporation (PBC) and a nonprofit foundation, headquartered in San Franci…
https://en.wikipedia.org/wiki/OpenAI
+ 3 more evidence sources
“"We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight," a Google spokesperson said.”
CORROBORATED
Multiple web search results quote a Google spokesperson stating the commitment that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight.
Web search (neutral): The contract includes language stating, “the parties agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous weapons (including target select…
https://www.theguardian.com/technology/2026/apr/28/google-cl…
Web search (neutral): The language of Google’s agreement states that both the company and the Pentagon “agree that the AI System is not intended for, and should not be used for, domestic mass surveillance or autonomous wea…
https://nypost.com/2026/04/28/business/google-inks-pentagon-…
Web search (neutral): “We remain committed to the private and public sector consensus that AI should not be used for domestic mass surveillance or autonomous weaponry without appropriate human oversight,” the spokesperson …
https://www.straitstimes.com/world/united-states/google-allo…
“Anthropic is the only major AI lab that has not struck a deal with the Pentagon.”
SINGLE SOURCE
One web search result states that in 2025 the Pentagon signed agreements worth up to $200m each with major AI labs, including Anthropic, OpenAI, and Google, which appears to conflict with the claim. No source explicitly states that Anthropic is the *only* major AI lab that has not struck a deal, so the claim cannot be verified as a definitive exclusion.
Web search (neutral): Anthropic PBC is an American artificial intelligence company headquartered in San Francisco. It has developed a range of large language models named Claude and focuses on AI safety.
https://en.wikipedia.org/wiki/Anthropic
Web search (neutral): Google has reportedly signed a deal with the US Pentagon to use its artificial intelligence models for classified work. The Pentagon signed agreements worth up to $200m each with major AI labs in 2025,…
https://www.theguardian.com/technology/2026/apr/28/google-cl…
Web search (neutral): OpenAI signed the deal, then moved the wording closer to where Anthropic had been pushing the conversation in the first place. So no, OpenAI did not just sign the exact deal Anthropic refused.
https://www.linkedin.com/pulse/everyone-got-anthropic-story-…
“The Defense Department continues to use Anthropic's models while litigation plays out over its supply chain risk designation, and efforts are underway to give the broader government access.”
INSUFFICIENT EVIDENCE
No evidence was gathered for this claim, and the provided evidence sources did not contain information regarding the Defense Department's continued use of Anthropic's models or litigation over its supply chain risk designation.
“Hamza Chaudhry of Future of Life Institute, a nonprofit advocacy group, is pushing for greater transparency around AI company dealings with the Pentagon, along with modernizing the AI testing and verification process before systems are deployed.”
INSUFFICIENT EVIDENCE
No evidence was gathered for this claim, and the provided evidence sources did not contain information regarding Hamza Chaudhry or the Future of Life Institute's advocacy efforts.
“Tech advocacy group Americans for Responsible Innovation is calling on Congress to codify a "five-second rule" to ensure that there's meaningful human control for AI weapons.”
PENDING
“The group also calls for a thorough congressional verification of AI systems before a contract is awarded.”
PENDING
“Committees in both chambers are on track to mark up the National Defense Authorization Act this summer.”
PENDING

Disclaimer: This analysis is generated by AI and should be used as a starting point for critical thinking, not as definitive truth. Claims are verified against publicly available sources. Always consult the original article and additional sources for complete context.