AI Safety Newsletter #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
Plus, Bills on Whistleblower Protections, Chip Location Verification, and State Preemption
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
In this edition: The Trump Administration rescinds the Biden-era AI diffusion rule and sells AI chips to the UAE and Saudi Arabia; Federal lawmakers propose legislation on AI whistleblowers, location verification for AI chips, and prohibiting states from regulating AI.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
The Center for AI Safety is also excited to announce the Summer session of our AI Safety, Ethics, and Society course, running from June 23 to September 14. The course, based on our recently published textbook, is open to participants from all disciplines and countries, and is designed to accommodate full-time work or study.
Applications for the Summer 2025 course are now open. The final application deadline is May 30th. Visit the course website to learn more and apply.
Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States
On May 12th, the Department of Commerce announced that it had rescinded the Framework for Artificial Intelligence Diffusion, which was set to take effect May 15th. The rule would have regulated the export of AI chips and models across three tiers of countries, each with its own set of restrictions. (Other AI chip export controls, including those prohibiting sales to China, remain on the books.)
The announcement states that the Bureau of Industry and Security (BIS) will issue a replacement rule in the future. In the meantime, BIS will focus on preventing US chips from being used in Chinese AI development. Bloomberg reports that new restrictions will target countries that have diverted US chips to China, including Thailand and Malaysia.
The Trump Administration wants to capture the global AI chip market. Though China has yet to export its own AI chips, BIS will also issue guidance stating that the use of Huawei Ascend chips violates US export controls. This preemptive restriction supports the Trump Administration’s goal of US dominance in the global AI chip market.
The UAE and Saudi Arabia are set to receive hundreds of thousands of AI chips. Last week, Trump announced trade deals with the UAE and Saudi Arabia.
The UAE is set to receive up to 500,000 of Nvidia's most advanced chips per year, beginning in 2025. Of these, 100,000 would go to the Emirati firm G42, with the remainder going to US companies building datacenters in the UAE. Following the deal’s announcement, G42 announced the construction of a 5 GW AI campus in Abu Dhabi, which would be the largest AI infrastructure project anywhere in the world.

Nvidia announced a strategic partnership with Saudi Arabia’s new sovereign AI company, Humain. In the first phase of the partnership, Humain is set to receive 18,000 Blackwell chips. AMD also announced a partnership with Humain.
The chip sales affect several US priorities. The deals will direct large investments to US AI companies that might otherwise have gone to China (China is the leading source of revenue for both the UAE and Saudi Arabia). They will also allow US AI companies to circumvent the compute capacity limitations imposed by the US energy grid.
Some US officials argue that the Trump Administration’s chip sales threaten to undermine the US lead in compute capacity, and with it US national security, since compute capacity may soon become a key determinant of state power. However, it’s difficult to evaluate the sales’ overall effect on US interests, since the terms of the agreements remain unclear.
Bills on Whistleblower Protections, Chip Location Verification, and State Preemption
A federal AI whistleblower protection act. Senate Judiciary Committee Chair Chuck Grassley introduced the Artificial Intelligence Whistleblower Protection Act, which would protect employees who come forward with information about harmful or illegal activities happening inside AI companies.
Current AI whistleblower protections aren’t effective. These sorts of laws exist only as a patchwork across jurisdictions, making it difficult for would-be AI whistleblowers to predict whether they would be protected. They also often protect only the reporting of legal violations. Because AI regulation is minimal, developer behavior that poses a threat to public safety may not violate any law.
AI companies can also require employees to sign NDAs preventing them from disparaging the company even after they leave. OpenAI had employees sign such an NDA, which it later discontinued after public pressure.
The AI Whistleblower Protection Act addresses these shortcomings. It protects the disclosure of any “substantial and specific” danger that AI developer behavior might pose to public safety, public health, or national security. It also prohibits AI companies from requiring employees to sign NDAs or other contracts that undermine their ability to make such disclosures.
A bill requiring location verification for AI chips. Senator Tom Cotton introduced the Chip Security Act, which would require location verification mechanisms for export-controlled AI chips.
The bill would strengthen US export controls by preventing AI chips from being smuggled into China. AI chip smuggling is a growing problem, with potentially 100,000 chips smuggled in 2024.
Currently, US officials struggle to determine what happens to AI chips once they’re shipped overseas. Location verification would allow export authorities to tell when a shipment of chips isn’t where it’s supposed to be, triggering further investigation.
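The core idea behind such schemes is delay-based attestation. As a rough illustration (a minimal Python sketch under our own simplifying assumptions, not the mechanism the bill specifies): a verifier sends a fresh random challenge to a key fused into the chip’s hardware, and because no signal travels faster than light, the round-trip time puts a hard upper bound on how far away the chip can be.

```python
import hashlib
import hmac
import os
import time

# Signals propagate at most at the speed of light (~299,792 km/s),
# so round-trip time puts a hard ceiling on how far away a chip can be.
SPEED_OF_LIGHT_KM_PER_S = 299_792


def bound_chip_distance(chip_sign, registered_key: bytes) -> float:
    """Challenge a chip and return an upper bound (in km) on its distance.

    `chip_sign` is a hypothetical stand-in for the chip answering a fresh
    nonce with a MAC computed from a key fused into it at manufacture.
    """
    nonce = os.urandom(32)           # fresh challenge, can't be replayed
    start = time.monotonic()
    response = chip_sign(nonce)      # round trip to the chip
    rtt_seconds = time.monotonic() - start

    expected = hmac.new(registered_key, nonce, hashlib.sha256).digest()
    if not hmac.compare_digest(response, expected):
        raise ValueError("Response was not produced by the registered chip")

    # Halve the round trip for the one-way distance; real schemes would
    # also subtract the chip's on-die processing time.
    return SPEED_OF_LIGHT_KM_PER_S * rtt_seconds / 2
```

In this toy setup, a chip answering from its licensed datacenter yields a distance bound consistent with its declared location, while a smuggled chip answering from thousands of kilometers away produces a bound that contradicts it, triggering the further investigation described above. Real proposals must also contend with network jitter and on-die processing delay, which this sketch ignores.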
A provision in a tax bill prohibiting states from regulating AI. In its markup of House Republicans’ tax bill, the House Energy and Commerce Committee included a provision that would prohibit states from regulating AI for ten years.
Ever since California’s SB 1047 almost became law, AI companies have argued that states should be prohibited from regulating AI and that the issue should instead be left to the federal government. SB 1047 would have made AI companies liable for harm caused by their models.
However, the provision appears to run afoul of the Senate’s “Byrd Rule,” which prohibits provisions extraneous to the budget from being included in budget reconciliation bills.
In Other News
Industry
Google announced AlphaEvolve, an evolutionary coding agent powered by Gemini. Among other results, AlphaEvolve found an algorithm that multiplies 4x4 complex-valued matrices using 48 scalar multiplications, besting a record set by Strassen’s algorithm in 1969, and a method to run Google’s datacenters 0.7% more efficiently.
OpenAI introduced Codex, a cloud-based agent powered by a version of o3 optimized for software engineering.
Grok started responding to unrelated queries by discussing “white genocide” in South Africa. xAI blamed the incident on an “unauthorized modification” (again).
OpenAI released GPT-4.1 and GPT-4.1 mini.
Bloomberg reports that a growing number of companies are reducing their workforce because of AI. For example, Microsoft cut about 6,000 jobs last week, roughly 3 percent of its workforce.
Civil Society
Dan Hendrycks and Laura Hiscott argue that despite years of effort, mechanistic interpretability has failed to provide insight into AI behavior—the result of a flawed foundational assumption.
Scott Mulligan discusses whether location verification can stop AI chip smuggling.
Eliezer Yudkowsky and Nate Soares are publishing a new book on AI safety: If Anyone Builds It, Everyone Dies.
The NYT discusses Pope Leo XIV’s focus on AI’s risks to humankind.
See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.