Cybersecurity experts warn that Anthropic's new Mythos AI model could let attackers map and exploit the legacy infrastructure underpinning global banking.
Anthropic launched Mythos on April 7, billing it as the company's most capable model for coding and autonomous tasks. Six days later, cybersecurity researchers and former regulators were describing those same capabilities as a specific threat to the global banking system.
The concern is precise. A model that can write and audit code at an expert level can also reverse-engineer it. TJ Marlin, chief executive of enterprise AI security firm Guardrail Technologies, told Reuters via Yahoo Finance that Mythos Preview can scan layered architectures and surface vulnerabilities in legacy infrastructure that human attackers typically miss. What was obscure becomes findable; what was findable becomes exploitable at speed.
Banks are especially exposed because their technology stacks were built to last, not to be isolated. A typical large institution runs cloud-native services layered atop core systems built decades ago, bridged by middleware and third-party integrations accumulated over years. Patching one layer can open gaps in another. Marlin described the risk as a potential "force multiplier" for any attacker equipped with such a tool.
The homogeneity problem
Scale is the deeper issue. Naresh Raheja, a San Francisco-based consultant who previously worked at the Office of the Comptroller of the Currency, notes that the banking sector's regulatory uniformity has created a hidden systemic vulnerability. A small number of vendors supply the same KYC software, transaction processing systems, and customer onboarding tools to dozens of competing institutions. "Many banks use the same vendors and the same solutions," Raheja told Reuters via Yahoo Finance. A single AI-generated exploit that clears one bank's defenses could clear many others before a patch reaches anyone.
Government officials in at least three countries, including the US and Canada, have begun engaging with these risks in the days since Mythos's announcement. The Reuters reporting did not identify the third country or specify what regulatory responses are under consideration.
Anthropic's position here is genuinely uncomfortable. The company has framed its identity around AI safety, publishing research on constitutional AI and advocating for careful deployment. Earlier this year Anthropic raised safety concerns and lost a Defense Department contract, the Los Angeles Times reported. Launching a model that the company itself warns could supercharge cyberattacks sits awkwardly against that record.
What it means for the industry
The dual-use problem in AI coding tools is not new, but Mythos raises the stakes. Earlier AI-assisted hacking required technical knowledge and prompt engineering skill to chain together a viable attack. A model capable of autonomous action, executing multi-step tasks without continuous human input, can reduce that barrier further and put sophisticated exploit capability within reach of less skilled actors. Venture investors appear to have already priced in the defensive opportunity: cybersecurity ranked among the top-funded sectors in New York's startup ecosystem during Q1 2026, according to AlleyWatch.
Competitive pressure is unlikely to slow any of this down. OpenAI last week launched a new $100 ChatGPT Pro tier anchored in part on expanded Codex access, directly targeting Anthropic's Claude Max subscriber base, The Next Web reported. Coding power is now the primary competitive metric between frontier labs, and the security implications of that race deserve more than a footnote in a launch post.
Financial institutions that have not already begun AI threat modeling now have a specific, named reason to start. Anthropic can add guardrails, restrict certain use cases, or coordinate with regulators on disclosure protocols. What it cannot do is un-release a capability once it is public.
When the first confirmed AI-assisted bank breach is traced to a commercial model whose maker acknowledged the risk six days after launch, the legal and reputational questions will be unlike anything the industry has faced before.
---
Frequently asked questions
Q: What is Anthropic's Mythos model?
A: Mythos is Anthropic's newest AI model, announced April 7, 2026. The company describes it as its most capable model yet for coding and agentic tasks, meaning it can act autonomously across multi-step workflows.
Q: How could Mythos be used to attack banks?
A: Security experts say its advanced coding ability allows it to scan complex, multi-layer architectures for vulnerabilities, including in legacy systems, and generate working exploits without the level of human expertise previously required.
Q: Why are banks more at risk than other industries?
A: Banks combine decades-old core systems with modern software and heavily share vendors across the sector. A single exploit that works on one shared platform could propagate across many institutions simultaneously.
Q: What are governments doing about AI-powered cyber threats?
A: Officials in at least three countries, including the US and Canada, have begun assessing the risks following Mythos's announcement. No specific regulatory actions have been announced publicly as of April 13.