The Trump White House may require government review of powerful AI models after Anthropic and OpenAI restricted dangerous cybersecurity tools in spring 2026.
Anthropic's Claude Mythos Preview, restricted to just 40 organizations under Project Glasswing, can find previously unknown flaws in the Linux kernel and chain them into working exploits. That single capability has unsettled nearly the whole of the Trump administration's standing AI policy.
Reports this week from the New York Times and Politico describe a potential executive order that would require AI companies to submit new models for government review before public release. National Economic Council Director Kevin Hassett signaled support for that approach on Wednesday. No final decision has been announced.
Pushback arrived fast. A former Trump White House official told The Hill the "flip-flopping nature of the administration's tech response" signals there is no clear leader driving the agenda. Industry groups that spent the past year lobbying against state-level AI restrictions now face the possibility of a federal vetting regime more intrusive than anything the states ever proposed.
The security trigger
Claude Mythos first leaked in late March, when Anthropic engineers internally described it as posing "unprecedented cybersecurity risks"; it was eventually released in restricted form through Project Glasswing. Anthropic's stated goal is to use the system to harden critical infrastructure before adversaries can exploit the same vulnerabilities it discovers. As Euronews reported, the model identified thousands of high-severity flaws across major operating systems and web browsers, including previously unknown bugs in the Linux kernel, which underpins most of the world's servers.
OpenAI followed within days with GPT-5.4-Cyber, a variant of its flagship model tuned for defensive security work. Access runs through the company's Trusted Access for Cyber program, limited to vetted security vendors, researchers, and critical-infrastructure teams. The model adds binary reverse engineering capabilities that allow analysts to examine compiled software for malware without needing source code, Euronews noted. TechXplore reported that major U.S. bank executives met separately with Treasury Secretary Scott Bessent and Federal Reserve Chairman Jerome Powell to discuss the financial sector's exposure to Mythos-class artificial intelligence systems.
That both labs converged on restricted releases reflects a shared, if uncomfortable, calculation: the same artificial intelligence capabilities that make a model useful to defenders make it equally useful to attackers.
What a vetting regime would mean
Mandatory pre-release review would be a sharp departure from how the Trump administration has operated since January 2025. White House AI policy focused almost entirely on preempting state laws seen as hostile to innovation, leaving safety largely to industry self-governance. A federal review layer would upend that approach.
Practical complications abound. Any vetting process raises immediate questions about timelines, the technical capacity of government reviewers, and whether proprietary model weights would need to be shared with federal agencies. Critics have already warned that mandatory review could slow U.S. artificial intelligence development while foreign competitors operate without similar constraints. Those same arguments were used to block state bills this spring; deploying them now against a White House initiative puts the industry in an uncomfortable position.
No regulatory framework has yet solved how to evaluate whether an AI system is too dangerous to release publicly without actually releasing it and observing what follows. That is the core technical problem the administration faces, and neither side of the debate has a clean answer.
What happens next matters beyond Washington. If the White House establishes even a lightweight vetting process, it sets a precedent that capability thresholds can trigger mandatory government review, something no major democracy has yet formally codified for artificial intelligence models. If it backs down, the episode still confirms that the offensive-defensive arms race in AI is now visible enough to reach the highest levels of government. Either path narrows the window for unchecked model launches.
---
What is Project Glasswing?
Anthropic's restricted-access initiative, which makes Claude Mythos Preview available to 40 vetted technology organizations so they can identify and patch software vulnerabilities before broader exposure.
What is OpenAI's Trusted Access for Cyber (TAC) program?
A vetted-access scheme granting security vendors, researchers, and critical-infrastructure teams access to GPT-5.4-Cyber, a model with fewer content restrictions on cybersecurity queries and added capabilities for reverse engineering compiled software.
Could the White House actually require pre-release AI model vetting?
Possibly. Both the New York Times and Politico reported that a "vetting regime" executive order was under internal discussion. NEC Director Kevin Hassett signaled interest on Wednesday, but no formal order has been published.
Why are U.S. banks concerned about Mythos-class AI?
The model's ability to autonomously discover and chain together unknown software vulnerabilities raises fears that similar tools could target financial infrastructure. Major bank executives met with Treasury Secretary Bessent and Fed Chair Powell in April to assess the exposure.
