AI Safety Newsletter #69: Department of War, Anthropic, and National Security
Also, Anthropic Removes a Core Safety Commitment
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition, we discuss the conflicts between Anthropic and the Department of War and Anthropic’s recent removal of a core safety commitment.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
We’re Hiring. We’re hiring an editor! Help us surface the most compelling stories in AI safety and shape how the world understands this fast-moving field.
Other opportunities at CAIS include: Head of Public Engagement, Program Manager, Operations Associate, and other roles. If you’re interested in working on reducing AI risk alongside a talented, mission-driven team, consider applying!
Pentagon Declares Anthropic a Supply Chain Risk to National Security
On Thursday, March 5th, the US Department of War (DoW) designated Anthropic a “supply chain risk,” meaning that Anthropic products cannot be used by the DoW or in any defense contracts. The designation follows several weeks of tensions between the two organizations over whether Anthropic models could be used for autonomous weapons and surveillance of Americans, with Anthropic ultimately refusing the DoW’s requests.
This started as a contract negotiation. On February 27th, President Trump posted on Truth Social that the US government would be canceling its contract with Anthropic due to the company’s limits on the uses of its AI, Claude. While the Pentagon wanted to be able to use Claude for “any lawful use,” Anthropic insisted on two restrictions: no fully autonomous weapons and no domestic mass surveillance.
Negotiations quickly escalated. Later the same day, Secretary of War Pete Hegseth posted on X that Anthropic would be designated a supply chain risk. Undersecretary of War Emil Michael later clarified that this designation was due to concerns that the loyalties of Anthropic AIs could be subverted, possibly causing sabotage during high-stakes operations.
Further, Hegseth announced that Anthropic would be barred from doing business with any organization that does business with the US military, even outside of defense contracts. These stronger proposed restrictions are closer to those imposed by Congress on foreign companies like Huawei, and go beyond the Department of War’s authority.
Anthropic is challenging the designation in court. Legal analysis from Lawfare suggests that this action is a questionable use of a designation meant for foreign adversaries, not contract disputes. No other AI companies, including Chinese AI companies, have faced equivalent sanctions. DeepSeek is banned from several federal agencies individually, but is not considered a supply chain risk despite the fact that it sabotages work it performs for anti-CCP users.
Anthropic Drops Core Safety Commitment

Anthropic recently removed its commitment to never release catastrophically harmful AI. This continues the trend of Anthropic and other frontier AI companies progressively weakening safety commitments as profit incentives grow. None of Anthropic, OpenAI, or DeepMind currently has a robust commitment against releasing AIs they assess to be highly dangerous.
The new policy emphasizes voluntary restraint over hard commitments. Anthropic has repeatedly removed safety commitments, citing its need for increased access to dangerous AIs and freedom to decide how to execute its mission. This comes at a time when Anthropic is becoming increasingly consumer-focused, with over 1 million new users recently joining each day.
Competitive pressures are creating a race to the bottom on frontier AI safety. Anthropic’s justification for the changes is largely based on the fact that other AI companies are not going to stop development; the argument is that, if Anthropic alone were to stick to stricter safety commitments, it would simply fall behind other developers while doing little to reduce overall risk. This creates a vicious cycle: loosened safety commitments increase the speed of AI development, which in turn incentivizes further loosening.
Opportunity for Experienced Researchers: AI and Society Fellowship
Applications are now open for the AI and Society Fellowship at the Center for AI Safety: a fully funded, 3-month summer fellowship in San Francisco for scholars in economics, law, IR, and adjacent fields to conduct research on the societal impacts of advanced AI. The fellowship will include regular guest talks from professors at Stanford, Penn, Johns Hopkins, and more. Apply by March 24. For more information, visit: https://safe.ai/fellowship
In Other News
Government
OpenAI is working on voice control technology for drone swarms in a US military trial
Florida Governor Ron DeSantis directed state agencies to work with the Future of Life Institute on protecting children from AI harms
The US Commerce Department is reportedly considering new, “tiered” controls on AI chip exports, with conditions for sales approvals dependent on the size of the export
OpenAI amended its agreement with the Department of War, saying the new terms prohibit the use of its models for domestic surveillance, but skeptics have pointed out that the agreement’s vague wording may in fact allow for such uses
The White House reportedly pressured Republican lawmakers in Utah to drop an AI safety bill aiming to reduce cyber risks and protect children
In AI Frontiers, Erich Grunewald and Raghav Akula argue that the US Government should close gaps in export controls on high-bandwidth memory to prevent China from catching up to frontier AI development
Industry
OpenAI launched GPT-5.4 in ChatGPT, Codex, and the company’s API
NVIDIA reportedly ceased production of H200 chips intended for export to China, shifting TSMC capacity to produce its newer Vera Rubin chips instead
OpenAI announced it had raised new investment of $110 billion at a valuation of $730 billion
Anthropic announced it had raised $30 billion, reaching a valuation of $380 billion
SpaceX acquired xAI, creating the most valuable private company in history
Yann LeCun’s start-up, AMI Labs, raised more than $1 billion at a valuation of $3.5 billion
Reuters reported on new ASML technology that could increase chip production by 50% by 2030
In AI Frontiers, Poe Zhao analyzes how economic constraints are driving China’s startups to pursue more pragmatic strategies than their US counterparts
Civil Society
Tech companies Block and Atlassian cut thousands of jobs, citing AI efficiency as a factor in the decisions
A lawsuit filed against Google alleged that the company’s AI model Gemini encouraged a 36-year-old man from Florida to commit suicide
Anthropic launched The Anthropic Institute to research the societal challenges of AI
Researchers from GovAI and the University of Oxford described 14 metrics for assessing how much AI is automating AI research and development — which has implications for how much AI capabilities could accelerate
Summer Yue, a director of alignment at Meta, said she had temporarily lost control of her OpenClaw agent and had to physically run to unplug the computer it was running on
Anthropic published a new study on AI’s impacts on the labor market
In AI Frontiers, Benjamin Jones explains how AI automating some jobs could be economically positive for workers, provided that the AI far outperforms the humans it displaces