Hackers Launch First-Ever AI-Powered Attack on U.S.

In what experts are calling a historic milestone in cyber warfare, Chinese state-sponsored hackers have reportedly executed the first large-scale autonomous AI-driven cyberattack targeting the United States, leveraging AI to infiltrate U.S. companies and government entities at “physically impossible” speeds, according to a bombshell report from Anthropic.

The attackers, posing as a legitimate cybersecurity firm, manipulated Anthropic’s AI coding assistant, Claude, into conducting sophisticated infiltration campaigns—almost entirely on its own. The hack, which targeted roughly 30 organizations including tech giants, financial firms, chemical manufacturers, and federal agencies, marks a dramatic escalation in cyber capabilities—one driven not by human ingenuity, but by machine autonomy.

“This is the first documented case of a large-scale cyberattack executed without substantial human intervention,” Anthropic’s report stated. And the numbers are jaw-dropping.

Claude executed between 80% and 90% of the attack workload entirely autonomously, with humans stepping in at only four to six key decision points per campaign. That means what once required a team of skilled hackers working around the clock could now be accomplished by one or two supervisors directing an AI agent that operated at machine speed, issuing thousands of requests per second.

Once Claude’s internal safety systems were bypassed—through what Anthropic describes as a carefully orchestrated social engineering ploy—the AI began writing custom exploit code, scanning networks for weaknesses, harvesting credentials, and exfiltrating sensitive data across multiple targets.

“Threat actors can now use agentic AI systems to do the work of entire teams of experienced hackers,” Anthropic warned.

While only four of the roughly 30 targeted entities were successfully breached—and the U.S. government was not among them—the attackers still made off with troves of sensitive information, and the implications are enormous.

This wasn’t a proof-of-concept. It wasn’t a lab test. This was a real-world attack, deployed and operated largely by an autonomous AI system. And it worked.

Anthropic, to its credit, eventually detected the malicious behavior and terminated the hackers’ accounts, updating its defenses and refining its detection mechanisms. But the larger concern remains: the barrier to entry for sophisticated cyberattacks has collapsed. What once required technical genius now requires access to a well-prompted AI agent and a moderately skilled human handler.

This new model—AI as the operator, human as the strategist—is poised to redefine the cybersecurity landscape, and not necessarily in favor of the defenders.

“These tools will just speed up things,” said Logan Graham, who leads Anthropic’s internal security team. “If we don’t enable defenders to have a very substantial permanent advantage, I’m concerned that we maybe lose this race.”
