A powerful new artificial intelligence model developed by Anthropic has sparked a flurry of crisis meetings among finance ministers, central bankers and senior financiers who fear the technology could be applied to the global financial system with devastating consequences.
The model, known as Claude Mythos, has been shown to detect vulnerabilities in many of the world’s most widely used operating systems, raising alarm at the highest levels of government and commerce. While some experts believe this represents a major advance in AI’s ability to detect and exploit cybersecurity vulnerabilities, others urge caution, arguing that far more independent testing is needed before its true capabilities can be assessed.
Canada’s Finance Minister François-Philippe Champagne confirmed to the media that Mythos had dominated discussions at this week’s International Monetary Fund meetings in Washington DC. “Certainly it is serious enough to warrant the attention of all finance ministers,” he said. Drawing a comparison to geopolitical risks, he added: “The difference is that the Strait of Hormuz – we know where it is and we know how big it is… the problem we face with Anthropic is that it is the unknown unknown. This requires a lot of attention so that we have safeguards and processes in place to ensure the resilience of our financial systems.”
Mythos is among the latest additions to Anthropic’s Claude family of models, directly competing with OpenAI’s ChatGPT and Google’s Gemini. It was unveiled earlier this month by developers responsible for stress testing so-called “misaligned” AI behavior, cases in which a model violates human values or intended goals. Their verdict was that Mythos was “remarkably capable in computer security tasks.”
Anthropic said it was concerned that the model could expose long-dormant software bugs or uncover new ways to exploit system weaknesses, and has decided not to release it publicly. Instead, a handful of tech giants, including Amazon Web Services, CrowdStrike, Microsoft and Nvidia, were granted access as part of an initiative called Project Glasswing, which the company describes as an “effort to secure the world’s most critical software.”
On Thursday, Anthropic released an updated version of its existing Claude Opus model, saying it would allow Mythos’ cyber capabilities to be evaluated in less powerful systems.
Not everyone in the cybersecurity community is convinced that the fears are proportionate, especially given the limited number of independent tests that have been conducted so far. The UK’s AI Security Institute, which was given access to a preview version, is the only body to have published an independent assessment. Its researchers concluded that although Mythos Preview could compromise systems with weak defenses, it was not significantly more powerful than its predecessor, Opus 4. “Our testing shows that Mythos Preview can exploit systems with weak security postures, and it is likely that additional models with these capabilities will be developed,” the report’s authors wrote.
Skeptics have also pointed to a precedent: in February 2019, OpenAI delayed the release of GPT-2 on security grounds, a decision critics dismissed at the time as a marketing ploy.
Senior bankers now have early access to Mythos so they can test their own defenses ahead of a wider release. CS Venkatakrishnan, chief executive of Barclays, told the BBC: “It’s so serious that people need to be worried. We need to understand it better, and we need to understand the vulnerabilities that are being exposed and fix them quickly.” He added that a far more interconnected financial system had created both new opportunities and new risks, warning: “This is what the new world will look like.”
For the UK’s small and medium-sized businesses, which rely on the integrity of banking, payments and cloud infrastructure every day, the stakes are significant. A cyber incident that destabilized a major lender or payment processor could quickly ripple through SMB supply chains, disrupting cash flow, invoicing and customer trust within hours.
Anthropic has previously noted that Mythos has exposed multiple vulnerabilities in core operating systems, financial platforms and web browsers. Governments and banks are being offered early access to strengthen their defenses before a public launch.
Bank of England Governor Andrew Bailey said the development needed to be treated with the utmost seriousness. “We now need to look very closely at what this latest AI development could mean for the threat of cybercrime,” he said. “The consequence could be that there is an evolution in AI and modeling that makes it easier to identify existing vulnerabilities in some sort of core IT systems, and then cybercriminals, the bad actors, could obviously try to exploit them.”
The US Treasury Department has confirmed that it has raised the matter directly with major American banks and asked them to conduct internal testing before a public release. Industry sources also suggest that a rival US AI company could soon unveil a similarly powerful model, but without comparable guardrails.
For the UK tech sector, the controversy could prove to be both an opportunity and a threat. James Wise, partner at Balderton Capital and chairman of the newly formed Sovereign AI Unit, a £500m government-backed venture capital fund targeting home-grown AI companies, argued that Mythos was merely “the first of many more, even more powerful models” capable of exposing systemic weaknesses.
Speaking to the BBC’s Today programme, he said his fund was investing “in UK AI companies that are doing this, companies that are doing AI security”, adding: “We hope that the models that uncover vulnerabilities are also the models that fix them.”
For the country’s AI companies and cybersecurity start-ups, the message from Threadneedle Street and Washington alike is unmistakable: the defensive side of the AI arms race has just become one of the most commercially significant challenges in British entrepreneurship.