Anthropic Reports China-Backed Hackers Used Its AI Tool for Automated Intrusions

Anthropic has claimed that a Chinese state-sponsored hacking group misused its Claude AI system to conduct cyberattacks with substantial automation. Financial firms and government agencies were among those targeted.
The company said the attackers targeted roughly 30 organizations and succeeded in breaching several. They manipulated Claude Code into role-playing as a cybersecurity analyst, a framing that let the attacks bypass the model's safety restrictions.
Anthropic said the AI model autonomously carried out most attack operations, estimating an automation rate as high as 90%. It described this as a worrying development in malicious AI use.
Claude's inaccuracies were significant, however: the model fabricated information, misread system environments, and flagged publicly accessible material as classified.
Cybersecurity analysts were split on the significance of the incident. Some argued it underscores growing AI-related risks, while others countered that it amounted to sophisticated automated scripting rather than genuine autonomy.
