'Agentic AI Has Been Weaponized': Major AI Company Says Hackers With No AI Skills Used Its Chatbot to Generate Phishing Schemes and Ransom Demands
AI startup Anthropic reports that cybercriminals used its Claude AI chatbot for "vibe hacking" schemes that automate attacks, calculate ransom fees, and generate "visually alarming ransom notes."
By David James
Key Takeaways
- AI startup Anthropic revealed that it detected cybercriminals using its Claude AI for hacking.
- The AI allowed hackers with little technical knowledge to automate sophisticated cyberattacks.
- Anthropic says it has stopped the attacks and shared best practices for cybersecurity.
Hackers recently exploited Anthropic's Claude AI chatbot to orchestrate a "large-scale" extortion operation targeting at least 17 companies, run a fraudulent employment scheme, and sell AI-generated ransomware, the company said in a report.
The report details how hackers with little to no technical knowledge manipulated the chatbot to identify vulnerable companies, generate tailored malware, organize stolen data, and craft ransom demands with unusual speed and automation.
"Agentic AI has been weaponized," Anthropic said.
It's not yet public which companies were targeted or how much money the hackers made, but the report noted that extortion demands ran as high as $500,000.
Key Details of the Attack
Anthropic's internal team detected the operation, observing the hackers using Claude's coding features to pinpoint victims and build malicious software with simple prompts, a process termed "vibe hacking," a play on "vibe coding," the practice of using AI to write code from plain-English prompts.
Upon detection, Anthropic said it responded by suspending the accounts involved, tightening safety filters, and sharing best practices to help organizations defend against emerging AI-enabled threats.
How Businesses Can Protect Themselves From AI Hackers
With that in mind, the U.S. Small Business Administration (SBA) breaks down how small business owners can protect themselves:
- Strengthen basic cyber hygiene: Encourage staff to recognize phishing attempts, use complex passwords, and enable multi-factor authentication.
- Consult cybersecurity professionals: Employ external audits and regular security assessments, especially for companies handling sensitive data.
- Monitor emerging AI risks: Stay informed about advances in both AI-powered productivity tools and the associated risks by following reports from providers like Anthropic.
- Leverage security partnerships: Consider joining industry groups or networks that share threat intelligence and best practices for protecting against AI-fueled crime.