The phenomenon known as “Vibe Hacking” marks a “worrying development in cybercrime aided by artificial intelligence,” according to the American company Anthropic.

Cyberattacks that were once the preserve of specialists have become accessible to beginners who repurpose chatbots for ends far from their original function, raising concerns about AI becoming a tool in the hands of hackers.

The term “Vibe Hacking” plays on “Vibe Coding”—the writing of code by users who are not experts—and represents a troubling evolution in AI-assisted cybercrime, according to Anthropic.

In a published report, the company, a rival of ChatGPT maker OpenAI, announced that “a cybercriminal used the Claude Code tool to carry out a large-scale data extortion operation.”

Claude Code, an AI tool specialized in programming, was exploited to conduct attacks that “likely” affected at least 17 organizations over the course of a month.

The tool, used to create malware, enabled the attacker to collect personal and medical data as well as login credentials, then sort the data and issue ransom demands of up to $500,000.

The “advanced safety measures” Anthropic says it has implemented failed to prevent the operation.

The Anthropic case is not an outlier; it reflects the concerns that have gripped the cybersecurity sector since the widespread adoption of generative AI tools.

In an interview with AFP, Rodrigue Le Bayon, head of the Alert and Response Center for Cyber Attacks at Orange Cyberdefense, said, “Cybercriminals use AI today as much as other users do.”

In a report published in June, OpenAI indicated that ChatGPT helped a user develop malicious software.