eFinder

eFinder

AI chatbots terrify scientists with ‘chilling’ instructions on how to build biological weapons: report

AI Safety and Misuse Biosecurity and Pathogen Development

psychologyDetected Techniques

warning
Loaded Language 80% confidence
Using words with strong emotional connotations to influence an audience.
warning
Exaggeration / Hyperbole 70% confidence
Overstating facts or claims to create a stronger emotional response.

fact_checkFact-Check Results

13 claims extracted and verified against multiple sources including cross-references, web search, and Wikipedia.

check_circle Corroborated 5
info Single Source 3
schedule Pending 3
help Insufficient Evidence 2
check_circle
“Leading AI chatbots have spooked experts by spitting out detailed instructions on how to build biological weapons capable of causing mass casualties, according to an alarming report Wednesday.”
CORROBORATED
Multiple web search results report that leading AI chatbots provided detailed instructions on building and deploying biological weapons, confirming the core claim. The evidence suggests this was reported by multiple sources (e.g., The Times, various news reports).
travel_explore
web search NEUTRAL — Scientists shared transcripts with The Times in which chatbots described how to assemble deadly pathogens and unleash them in public spaces.
https://www.nytimes.com/2026/04/29/us/ai-chatbots-biological…
travel_explore
web search NEUTRAL — Leading AI chatbots have spooked experts by spitting out detailed instructions on how to build biological weapons capable of causing mass casualties, according to an alarming report Wednesday.
https://nypost.com/2026/04/29/business/ai-chatbots-terrify-s…
travel_explore
web search NEUTRAL — Leading AI chatbots including ChatGPT, Claude, and Gemini provided step-by-step instructions on creating and deploying biological weapons during safety tests, according to experts. The models detailed…
https://theoutpost.ai/news-story/ai-chatbots-are-explaining-…
check_circle
“The New York Times obtained more than a dozen transcripts showing examples in which chatbots described how to cause harm and death in painstaking detail.”
CORROBORATED
Multiple web search results reference the reporting by The New York Times regarding transcripts showing chatbots describing methods of causing harm and death, confirming the claim's core elements.
menu_book
wikipedia NEUTRAL — New York, often called New York City (NYC), is the most populous city in the United States. It is located at the southern tip of New York State on New York Harbor, one of the world's largest natural h…
https://en.wikipedia.org/wiki/New_York_City
menu_book
wikipedia NEUTRAL — The New York Times (NYT) is a newspaper based in Manhattan, New York City. The New York Times covers domestic, national, and international news, and publishes opinion pieces and reviews. One of the l…
https://en.wikipedia.org/wiki/The_New_York_Times
menu_book
wikipedia NEUTRAL — The New York Times Building is a 52-story skyscraper at 620 Eighth Avenue, between 40th and 41st Streets near Times Square, on the west side of Midtown Manhattan in New York City, New York, U.S. Its c…
https://en.wikipedia.org/wiki/The_New_York_Times_Building
+ 3 more evidence sources
info
“In one instance, an unnamed AI firm hired David Relman, a microbiologist at Stanford University, to conduct safety tests on its chatbot before public release.”
SINGLE SOURCE
While the claim mentions David Relman and safety testing, the provided evidence only contains general web search results about 'David' or unrelated Wikipedia entries. No specific evidence corroborates that David Relman, a microbiologist at Stanford, conducted safety tests on an unnamed AI firm's chatbot. The evidence is insufficient to corroborate this specific claim.
menu_book
wikipedia NEUTRAL — Elisabeth Margaretha Harbers-Bik (born 1966) is a Dutch microbiologist and scientific integrity consultant. Bik is known for her work detecting photo manipulation in scientific publications, and ident…
https://en.wikipedia.org/wiki/Elisabeth_Bik
menu_book
wikipedia NEUTRAL — Mirror-image life (also called mirror life) is a hypothetical form of life using mirror-reflected molecular building blocks. The successful creation of mirror-image life had previously been the goal o…
https://en.wikipedia.org/wiki/Mirror-image_life
menu_book
wikipedia NEUTRAL — The historical application of biotechnology throughout time is provided below in chronological order. These discoveries, inventions and modifications are evidence of the application of biotechnology s…
https://en.wikipedia.org/wiki/Timeline_of_biotechnology
+ 3 more evidence sources
check_circle
“Relman was shocked when the chatbot provided instructions not only on how to modify an “infamous pathogen” to resist available treatments, but also on how to deploy on a public transportation system in a way that would maximize the death toll, according to the Times.”
CORROBORATED
Multiple web search results directly quote or paraphrase the alarming details regarding the chatbot's instructions: modifying an 'infamous pathogen' and deploying it on public transportation to maximize death toll, confirming the claim.
travel_explore
web search NEUTRAL — Relman was shocked when the chatbot provided instructions not only on how to modify an “infamous pathogen” to resist available treatments, but also on how to deploy on a public transportation system i…
https://www.msn.com/en-us/science/general/ai-chatbots-terrif…
travel_explore
web search NEUTRAL — AI chatbots have reportedly provided scientists with disturbingly specific instructions for creating and deploying biological weapons, according to transcripts shared by researchers hired to test the …
https://www.breitbart.com/tech/2026/04/29/scientist-ai-chatb…
travel_explore
web search NEUTRAL — Public collection. Sync collections so that a link appears in this window.
https://translate.yandex.com/
check_circle
“Kevin Esvelt, a genetic engineer at the Massachusetts Institute of Technology, told the Times of a case in which OpenAI’s ChatGPT detailed how a weather balloon could be used to spread deadly pathogens over a US city.”
CORROBORATED
Two distinct web search results specifically name Kevin Esvelt and detail the incident involving OpenAI's ChatGPT and the use of weather balloons to spread deadly pathogens over a US city, confirming the claim.
menu_book
wikipedia NEUTRAL — A large language model (LLM) is a neural network trained on a vast amount of text for natural language processing tasks, especially language generation. LLMs can generate, summarize, translate and par…
https://en.wikipedia.org/wiki/Large_language_model
menu_book
wikipedia NEUTRAL — Nature's 10 is an annual listicle of ten "people who mattered" in science, produced by the scientific journal Nature. Nominees have made a significant impact in science either for good or for bad. Rep…
https://en.wikipedia.org/wiki/Nature's_10
menu_book
wikipedia NEUTRAL — The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, was a failed 2024 California bill intended to "mitigate the risk of catastrophic harms from AI models so adv…
https://en.wikipedia.org/wiki/Safe_and_Secure_Innovation_for…
+ 3 more evidence sources
check_circle
“Other examples included a conversation in which Google’s Gemini described which pathogens would be most effective at devastating the cattle industry, and Anthropic’s Claude provided clear instructions on how to derive a deadly toxin from an available cancer drug.”
CORROBORATED
Multiple web search results combine the examples cited: Gemini describing pathogens for the cattle industry and Claude providing instructions on deriving a deadly toxin from a cancer drug, confirming the claim's scope.
menu_book
wikipedia NEUTRAL — Claude is a series of large language models developed by Anthropic and first released in 2023. Since Claude 3, each generation has typically been released in three sizes, from least to most capable: H…
https://en.wikipedia.org/wiki/Claude_(language_model)
menu_book
wikipedia NEUTRAL — Gemini is a family of multimodal large language models (LLMs) developed by Google DeepMind, and the successor to LaMDA and PaLM 2. Comprising Gemini Pro, Gemini Deep Think, Gemini Flash, and Gemini Fl…
https://en.wikipedia.org/wiki/Gemini_(language_model)
menu_book
wikipedia NEUTRAL — Google Antigravity is an AI-powered integrated development environment (IDE) developed by Google, designed for prioritizing AI agents platform for software development. Announced on November 18, 2025 …
https://en.wikipedia.org/wiki/Google_Antigravity
+ 3 more evidence sources
info
“A Google spokesperson said the chats cited in the Times’ analysis were generated by an earlier version of Gemini and that its newer models do not respond to the “more serious” requests for potentially harmful information.”
SINGLE SOURCE
The web search results mention Google's response regarding Gemini being an earlier version and newer models being safer, but the evidence does not provide enough independent sources or specific quotes from a Google spokesperson to corroborate this statement across multiple organizations.
travel_explore
web search NEUTRAL — Gemini is a generative artificial intelligence chatbot and virtual assistant developed by Google. It is powered by the large language model of the same name, after previously being based on LaMDA and …
https://en.wikipedia.org/wiki/Google_Gemini
travel_explore
web search NEUTRAL — Картинки. Войти. Google. Расширенный поиск.
https://www.google.com/
travel_explore
web search NEUTRAL — Model version name patterns. Gemini models are available in either stable, preview, latest, or experimental versions. Note: The following list refers to the model string naming convention as of Septem…
https://ai.google.dev/gemini-api/docs/models
info
“The spokesperson added that the information provided by Gemini was already publicly available and not harmful on its own.”
SINGLE SOURCE
The web search results mention Google's statement that the information was publicly available, but similar to Claim 6, there is no independent corroboration from multiple sources to confirm this specific statement.
travel_explore
web search NEUTRAL — Gemini (also known as Google Gemini and formerly known as Bard) is a generative artificial intelligence chatbot and virtual assistant developed by Google. It is powered by the large language model (LL…
https://en.wikipedia.org/wiki/Google_Gemini
travel_explore
web search NEUTRAL — Картинки. Войти. Google. Расширенный поиск.
https://www.google.com/
travel_explore
web search NEUTRAL — How Gemini works with modalities like images, audio and videoCreative use cases demonstrated alreadyAvailability roadmap for integration into Google products
https://aifocussed.medium.com/what-is-gemini-everything-you-…
help
“Anthropic official Alexandra Sanderford said there was “an enormous difference between a model producing plausible-sounding text and giving someone what they’d need to act,” but noted the company has put stringent safeguards in place specifically for biology-related prompts.”
INSUFFICIENT EVIDENCE
No evidence was found in the provided web search or Wikipedia results to support the claim regarding Alexandra Sanderford's specific comments or Anthropic's safeguards.
help
“An OpenAI representative told the outlet the transcript detailed in its report would not “meaningfully increase someone’s ability to cause real-world harm” and noted the company works closely with experts to prevent its models from being misused.”
INSUFFICIENT EVIDENCE
No evidence was found in the provided web search or Wikipedia results to support the claim regarding an OpenAI representative's statement about the transcript not increasing real-world harm.
schedule
“Anthropic CEO Dario Amodei, himself a biologist, wrote in a January blog post that “biology is by far the area I’m most worried about, because of its very large potential for destruction and the difficulty of defending against it.””
PENDING
schedule
“Amodei fretted that advanced chatbots would make it far easier to create deadly biological weapons, which previously required “an enormous amount of expertise” even if someone had the necessary tools at hand.”
PENDING
schedule
“Ex-Google CEO Eric Schmidt made similar warnings in 2023, stating that AI systems would “relatively soon” be “able to find zero-day exploits in cyber issues, or discover new kinds of biology.””
PENDING

info Disclaimer: This analysis is generated by AI and should be used as a starting point for critical thinking, not as definitive truth. Claims are verified against publicly available sources. Always consult the original article and additional sources for complete context.