Finance ministers, central bankers, and financiers have expressed serious concerns about a powerful new AI model they fear could undermine the security of financial systems.

The development of the Claude Mythos model by Anthropic has led to crisis meetings, after the model found vulnerabilities in many major operating systems.

Experts say it potentially has an unprecedented ability to identify and exploit cybersecurity weaknesses - though others caution further testing is needed to properly understand its capabilities.

Canadian Finance Minister François-Philippe Champagne told the BBC that Mythos had been discussed extensively at the International Monetary Fund (IMF) meeting in Washington DC this week.

"Certainly it is serious enough to warrant the attention of all the finance ministers," he said.

"The difference is that the Strait of Hormuz - we know where it is and we know how large it is... the issue that we're facing with Anthropic is that it's the unknown, unknown."

"This is requiring a lot of attention so that we have safeguards, and we have processes in place to ensure the resiliency of our financial systems," he added.

What is Claude Mythos?

Mythos is one of Anthropic's latest models developed as part of its broader AI system called Claude, a rival to OpenAI's ChatGPT and Google's Gemini.

It was revealed by Anthropic earlier this month, when developers responsible for testing AI models' performance on so-called misaligned tasks - tasks that go against human values, goals, and behavior - said it was strikingly capable at computer security tasks.

Citing concerns it could surface old software bugs or find ways to easily exploit system vulnerabilities, Anthropic has not released the model.

Instead, it has made Mythos available to tech giants like Amazon Web Services, CrowdStrike, Microsoft, and Nvidia as part of an initiative called Project Glasswing - which it labels an effort to secure the world's most critical software.

On Thursday, Anthropic released a new version of an existing model, Claude Opus, saying it would allow Mythos' cyber capabilities to be tested in less powerful systems.

Concerns raised about Mythos may exceed chatter around previous AI models, but some cybersecurity experts have questioned how justified they are - especially given the model has not been tested widely in the industry to see how capable it actually is.

The UK’s AI Security Institute has been given access to a preview version of it, and has published the only independent report into the model's cybersecurity skills.

Its researchers noted it was a powerful tool able to find many security holes in undefended environments, but suggested Mythos was not dramatically better than its predecessor, Claude Opus 4.

While developer Anthropic has stated the model has already exposed multiple security vulnerabilities in some critical operating systems, governments and banks are being offered access in advance of its public release to help protect their own systems.

Bank of England governor Andrew Bailey emphasized the importance of these developments: "We are having to look very carefully now at what this latest AI development could mean for the risk of cyber crime." He remarked on the possibility that cyber criminals could seek to exploit the vulnerabilities identified by such advanced AI systems.