Nightmare Fuel: Why the Next Wave of AI Models Is a Gift to Hackers
At this point, AI’s capabilities aren’t lost on any of us. At least they shouldn’t be. The same companies and technologies powering our desktop chatbots and coding platforms, including those that are or will be part of Apple Intelligence, are also used by the government in autonomous weapons systems and possibly mass surveillance. Some argue we’re already well on the way to the technological singularity, the point at which AI surpasses human intelligence and triggers rapid, uncontrollable self-improvement. Meanwhile, AI is proliferating faster than the legal and regulatory landscape can keep up.
Researching this topic leaves two questions largely unanswered: why are we allowing AI to develop essentially unchecked, and what is the best framework for governing it? Recently, AI and government officials told Axios CEO Jim VandeHei that Anthropic, OpenAI, and others “will soon release new models that are scary good at hacking sophisticated systems at scale.” Axios also reports that Anthropic itself has already warned top government officials that its upcoming model, “Mythos,” makes “large-scale cyberattacks much more likely in 2026.”
Last year, Anthropic itself published a report after detecting “suspicious activity” that a later investigation revealed was a Chinese state-sponsored group manipulating the Claude Code tool into attempting to hack about 30 global targets, an effort that “succeeded in a small number of cases.” The targets included tech companies, government agencies, financial institutions, and chemical manufacturing companies. Anthropic’s detailed report and warning are best captured by Axios:
The new models are even better at powering agents to think, act, reason and improvise on their own without rest or pause or limitation. Think of a warehouse full of the most sophisticated criminals who never sleep, learn on the fly and persist until successful — except the warehouse is infinite.
To make matters worse, Anthropic’s Claude Code just accidentally leaked 500,000 lines of its own source code. The error put the tool’s full architecture and unreleased features in the public’s hands, allowing it to be reverse-engineered outside of Anthropic’s control. And per Anthropic’s report detailing last year’s Chinese state-sponsored hacking campaign, it took the company ten days to “[ban] accounts as they were identified, [notify] affected entities as appropriate, and [coordinate] with authorities…”
We have one of the most popular AI companies admitting its publicly available tool was successfully used to hack businesses, government agencies, chemical companies, and financial institutions, and that responding to the incident took at least ten days. Today, we’re being told this same company is warning government officials that its next model, Mythos, will make large-scale cyberattacks more likely and “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”
Anthropic’s report addresses the elephant in the room. “This raises an important question: if AI models can be misused for cyberattacks at this scale, why continue to develop and release them? The answer is that the very abilities that allow Claude to be used in these attacks also make it crucial for cyber defense.”
There’s no doubt that America wants to win the AI race, as it should, but speed shouldn’t come at the cost of fundamental security. Hopefully, this news will spark productive debate about the overall direction and governance of AI so we reach the best possible outcome. Maybe we have to trust these companies to govern themselves. But as a former Facebook executive recently warned, AI companies are prioritizing rapid deployment over safety on the justification that if they don’t do it, someone else will, just as social media companies did before them. That same executive suggests public control, citizens’ assemblies, and opposition to liability shields as a more effective means of AI governance. Self-governance and public oversight are two extreme approaches, but together they’re a starting point for the debate.
In the meantime, it’s a good idea for business owners to button up their cybersecurity strategies and educate employees on the safe use of these systems and the proper handling of personal information.