
“Is AI a tool to help us, or will it become a weapon for hackers?” A recent report by Anthropic offers a weighty answer to this question.
This report is more than just a simple incident log; it shows how cybercrime is evolving in the age of AI.
The Actual Weaponization of AI
1. The Era of AI Attacking Alone - 'Vibe Hacking'
According to Anthropic's Threat Intelligence Report, a hacker used Claude Code, Anthropic's agentic coding tool, to attack and extort 17 companies.
First, the hacker asked Claude to generate a list of companies vulnerable to cyber attacks, selecting targets with open-source intelligence (OSINT) tools and scans of internet-exposed assets.
The attacker injected a CLAUDE.md file containing TTPs (Tactics, Techniques, and Procedures) into Claude Code's persistent context, making the AI play the combined role of 'operator + analyst' across reconnaissance, vulnerability exploitation, privilege escalation, lateral movement, data exfiltration, ransom pricing, and drafting extortion notes.

Instead of using traditional ransomware encryption, the attacker threatened to publicly expose the exfiltrated data, demanding ransoms from $75,000 to $500,000 in a pure exfiltration-to-extortion scheme.
Moreover, the attacker used Claude to organize the stolen data and classify which information was sensitive. With Claude's assistance, the hacker analyzed company financial data to calculate “a realistic ransom demand” and even drafted the extortion emails.
AI-Driven Attack Flow
Attack Preparation: Generate list of vulnerable companies, account hijacking, network infiltration
Data Analysis: Classification of sensitive information like financial data, defense contracts, and medical records
Ransom Evaluation & Extortion Email Drafting: Estimation of a ransom amount suited to the company’s situation ($75K–$500K)
Creation of Customized Ransom Notes: Psychological pressure, such as threatening a “leak of employee pay” or the “sale of donor information”
Anthropic's Threat Intelligence team reconstructed the documents used by the actual hacker, revealing sophisticated monetary demand scenarios that targeted company executives directly using donor lists and defense contract details.
This case shocked the industry as it marks the first instance where AI did not just assist but simultaneously played the planner, executor, and analyst of the attack.
At least 17 organizations were reportedly affected, including defense contractors, financial firms, and medical institutions. Sensitive data such as U.S. Social Security numbers, bank account details, patient medical records, and classified defense documents was leaked. The hacker demanded ransoms ranging from $75,000 (about KRW 100 million) to $500,000 (about KRW 700 million) from each organization; whether any were actually paid remains unclear. This was not merely a data breach but a compound crime threatening national security, financial stability, and patient safety.
2. North Korea's Remote Work Scam - Using AI for False Identities and Coding Tests
According to the report, North Korean IT workers used Claude to secure and perform remote jobs at U.S. Fortune 500 tech companies.
The North Korean IT personnel used AI to fabricate academic and career credentials, creating false identities. They also used AI to pass the coding tests required for employment, and after being hired, they generated English-language communications and project reports with AI.
This activity is part of the North Korean regime's strategy to earn foreign currency. It previously required years of specialized training, but AI has drastically shortened that pipeline. The FBI is tracking these activities as a new model of cyber-enabled economic crime that circumvents international sanctions.
3. Lowering Barriers to Cyber Attacks - 'No-Code Ransomware'
In the past, creating ransomware required an understanding of encryption algorithms, Windows internals, and anti-debugging techniques. Now even novice hackers can simply instruct an AI like Claude.
A cybercriminal developed multiple ransomware variants using Claude, each equipped with evasion, recovery-prevention, and robust encryption features, and sold them on the dark web for $400 to $1,200.
This signifies the popularization of ‘Ransomware as a Service (RaaS).’ Criminals without specialized knowledge can now distribute malware with AI assistance. An environment has emerged where ransomware variants can be continuously produced and sold with ease.
AI was initially hailed as a 'productivity tool' that writes code and analyzes data automatically. Recent threat cases show it can multiply a hacker’s criminal productivity just as dramatically. Even non-expert attackers can conduct sophisticated attacks with AI's help, shortening preparation and execution time and enabling 'factory-style attacks' on multiple companies simultaneously.
Anthropic stated, “Despite strong safety mechanisms, persistent hackers continue to try to circumvent them.”
4. Embedding AI Across MITRE ATT&CK Tactics
Attackers targeted Vietnam's critical infrastructure in an operation lasting nine months. They integrated Claude into 12 of the 14 MITRE ATT&CK tactics, including reconnaissance (network scanning, upload fuzzing, WordPress exploits), credential harvesting (Hydra, hashcat), privilege escalation (Linux kernel exploits), proxy chain setup, and lateral movement planning. The investigation found indications of breaches in telecommunications systems, government databases, and agricultural management systems, showing the hallmarks of an APT that could affect manufacturing and economic security.
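To make the “12 of 14 tactics” figure concrete, below is a minimal Python sketch of how defenders map observed intrusion activity onto the MITRE ATT&CK tactic list to measure coverage; the observation records are hypothetical, not drawn from the actual incident.

```python
# Minimal sketch: mapping observed intrusion activity to MITRE ATT&CK
# tactics to measure how much of the framework an operation covers.
# The observations below are hypothetical, not from the real incident.

# The 14 ATT&CK (Enterprise) tactic IDs and names.
ATTACK_TACTICS = {
    "TA0043": "Reconnaissance",
    "TA0042": "Resource Development",
    "TA0001": "Initial Access",
    "TA0002": "Execution",
    "TA0003": "Persistence",
    "TA0004": "Privilege Escalation",
    "TA0005": "Defense Evasion",
    "TA0006": "Credential Access",
    "TA0007": "Discovery",
    "TA0008": "Lateral Movement",
    "TA0009": "Collection",
    "TA0010": "Exfiltration",
    "TA0011": "Command and Control",
    "TA0040": "Impact",
}

# Hypothetical observations: (description, tactic ID).
observations = [
    ("network scanning of exposed assets", "TA0043"),
    ("brute-forcing credentials with Hydra", "TA0006"),
    ("Linux kernel exploit for root access", "TA0004"),
    ("planning moves between internal hosts", "TA0008"),
    ("staging stolen data for transfer", "TA0010"),
]

covered = {tactic for _, tactic in observations}
print(f"Tactics covered: {len(covered)} of {len(ATTACK_TACTICS)}")
for tid in sorted(covered):
    print(f"  {tid}  {ATTACK_TACTICS[tid]}")
```

Analysts use exactly this kind of coverage count to gauge how deeply an intrusion spans the attack lifecycle, which is why "12 of 14 tactics" signals an unusually broad operation.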
Defending Against AI Attacks with AI
Anthropic immediately blocked the abusive accounts and introduced additional security filtering. The company also developed a dedicated classifier to detect AI model abuse and shared attack-related technical indicators of compromise (IOCs) with authorities and partners.
The issue, as Anthropic acknowledges, is that AI lowers the barrier to entry for cybercrime and expands its scope.
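As a conceptual illustration only, the toy Python sketch below shows the general shape of such a misuse classifier; real abuse classifiers are trained models, and the keyword signals, weights, and threshold here are invented for illustration.

```python
# Toy sketch of a misuse classifier over conversation summaries.
# Real abuse classifiers are trained models; the keyword heuristics
# and threshold here are illustrative assumptions only.

MISUSE_SIGNALS = {
    "mass vulnerability scanning": 0.4,
    "ransom note drafting": 0.5,
    "credential harvesting": 0.4,
    "evading antivirus detection": 0.3,
}

def misuse_score(summary: str) -> float:
    """Sum the weights of every misuse signal found in a summary."""
    text = summary.lower()
    return sum(w for sig, w in MISUSE_SIGNALS.items() if sig in text)

def should_flag(summary: str, threshold: float = 0.5) -> bool:
    """Flag a session for human review once its score crosses the threshold."""
    return misuse_score(summary) >= threshold

# Example: a session combining two signals lands well above the threshold.
session = "User requested credential harvesting scripts and ransom note drafting."
print(should_flag(session))  # True
```

The design point is that no single signal has to be conclusive: combining weak indicators across a whole session is what lets such a system catch abuse that individual prompts would not reveal.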
This requires a new security paradigm for enterprises, governments, and individuals alike.
AI has already become a weapon for hackers; now defensive AI must outpace them.
AI-powered cyber attacks will grow more sophisticated and far faster than they are today. Once a vulnerability is discovered, automated attacks can expose the personal information of millions or paralyze national infrastructure. The challenge is that humans simply cannot analyze code at the pace hackers operate.
To address this limitation, DARPA in the U.S. is running the AI Cyber Challenge (AIxCC).
The reason for countering AI-driven threats with AI is simple: only AI can respond fast enough.
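For a rough sense of what “finding vulnerabilities at machine speed” means, here is a minimal Python fuzzing sketch; the buggy parser is contrived, and actual AI Cyber Challenge systems pair this kind of automated discovery with automated patching at far larger scale.

```python
# Minimal sketch of the AIxCC idea: automatically exercising code to
# find a crashing input before an attacker does. The buggy parser is a
# contrived example for illustration only.
import random
import string

def parse_record(data: str) -> str:
    """A deliberately buggy parser: crashes when the ';' field is missing."""
    fields = data.split(";")
    return fields[1]  # IndexError if the input contains no ';'

def fuzz(target, trials: int = 10_000):
    """Feed random inputs to the target; return the first crashing one."""
    for _ in range(trials):
        candidate = "".join(
            random.choices(string.printable, k=random.randint(0, 20))
        )
        try:
            target(candidate)
        except Exception:
            return candidate  # a crash: this input needs a defensive patch
    return None

crash = fuzz(parse_record)
print(f"Crashing input found: {crash!r}")
```

A fuzzer like this tries thousands of inputs per second without tiring, which is precisely the speed advantage the challenge aims to put on the defender's side.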
The time is coming when a structured AI security governance system, such as an AI Cybersecurity Control (AICC) framework, will be needed.
It involves real-time detection of AI-specific attack patterns that traditional security appliances miss. When an attack is detected, the system switches to an AI-driven response: immediately isolating, blocking, and patching. And just as attacker AI evolves, defensive AI must keep learning from data to improve its security performance.
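A minimal Python sketch of that detect-isolate-block-patch loop follows; all class, method, and field names are hypothetical, and a real deployment would integrate EDR, firewall, and patch-management systems rather than printing actions.

```python
# Conceptual sketch of an AI-driven detect/isolate/block/patch loop as
# described above. All names are hypothetical; real deployments would
# call EDR, firewall, and patch-management APIs instead of printing.
import time

class AIDefensePipeline:
    def __init__(self, detector):
        self.detector = detector  # scores events; assumed to be an AI model

    def handle(self, event: dict) -> None:
        score = self.detector(event)
        if score < 0.8:                   # illustrative threshold
            return
        self.isolate(event["host"])       # cut the host off from the network
        self.block(event["source"])       # block the attacking source address
        self.patch(event["service"])      # schedule an emergency patch

    def isolate(self, host: str) -> None:
        print(f"[{time.strftime('%X')}] isolating host {host}")

    def block(self, source: str) -> None:
        print(f"[{time.strftime('%X')}] blocking source {source}")

    def patch(self, service: str) -> None:
        print(f"[{time.strftime('%X')}] patching service {service}")

# Example with a stub detector that treats exploit payloads as attacks.
pipeline = AIDefensePipeline(
    lambda e: 0.9 if "exploit" in e["payload"] else 0.1
)
pipeline.handle({"host": "web-01", "source": "203.0.113.7",
                 "service": "wordpress", "payload": "exploit attempt"})
```

The key design choice is that detection and response share one automated loop, so the time from first alert to containment is machine time, not analyst time.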
This is bringing about a transformation in the security paradigm at the national, industrial, and corporate levels, beyond merely a technical response.