News

FunkSec Ransomware Code Generated by AI

Experts from Kaspersky Lab have studied the activity of the FunkSec group, which emerged in late 2024. The group is distinguished by its use of AI-based tools (including in the development of its ransomware), a high degree of adaptability, and the large scale of its cyberattacks.

According to experts, FunkSec targets organizations in the public sector, as well as IT, finance, and education industries in European and Asian countries.

FunkSec operators typically demand unusually small ransoms, sometimes no more than $10,000. The attackers also sell data stolen from victims at relatively low prices.

Experts believe this approach enables a high volume of attacks and quickly builds the group's reputation in the criminal community. The scale of the attacks also indicates that the perpetrators are using AI to optimize and scale their operations.

The report highlights that the FunkSec ransomware stands out for its complex technical architecture and use of AI. Its developers packed full-scale encryption and data-theft capabilities into a single executable written in Rust. The malware can terminate more than 50 processes on a victim's device and includes self-cleanup functions that complicate incident analysis.

It is also noted that FunkSec employs advanced methods to evade detection, which complicates the work of researchers.

The FunkSec ransomware does not ship on its own: it is bundled with a password generator (used for brute-force and password-spraying attacks) as well as a tool for DDoS attacks.

In all cases, researchers found clear signs that the code had been generated by large language models (LLMs). Many code fragments were evidently produced automatically rather than written by hand. This is indicated by placeholder comments (e.g., “placeholder for actual validation”) and by technical inconsistencies: for instance, a single program contained commands intended for different operating systems. The presence of declared but never-used functions likewise reflects how LLMs stitch together several code fragments without trimming the excess.
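The indicators described above can be illustrated with a simple heuristic scanner. The following Python sketch is hypothetical: the regex patterns, function names, and the sample Rust fragment are illustrative assumptions, not Kaspersky's actual detection logic or real FunkSec code.

```python
import re

# Illustrative patterns for the three indicators mentioned in the report:
# placeholder comments, declared-but-unused functions, and commands for
# different operating systems mixed in one program.
PLACEHOLDER_RE = re.compile(
    r"placeholder for|todo: implement|your code here", re.IGNORECASE
)
# Rust-style function declarations (the FunkSec sample was written in Rust).
FN_DECL_RE = re.compile(r"\bfn\s+([a-zA-Z_]\w*)\s*\(")

def llm_generation_indicators(source: str) -> dict:
    """Return rough counts of LLM-generation indicators in a code listing."""
    placeholders = PLACEHOLDER_RE.findall(source)
    declared = set(FN_DECL_RE.findall(source))
    # A function whose name appears only once (its own declaration) is
    # declared but never called; `main` is excluded as the entry point.
    unused = {
        name for name in declared
        if name != "main"
        and len(re.findall(r"\b" + re.escape(name) + r"\b", source)) == 1
    }
    windows_cmds = bool(re.search(r"\b(vssadmin|taskkill|wevtutil)\b", source))
    unix_cmds = bool(re.search(r"rm -rf|\bkillall\b|\bshred\b", source))
    return {
        "placeholder_comments": len(placeholders),
        "unused_functions": sorted(unused),
        "mixed_os_commands": windows_cmds and unix_cmds,
    }

# A contrived Rust fragment exhibiting all three indicators at once.
sample = '''
fn check_input(s: &str) -> bool {
    // placeholder for actual validation
    true
}
fn main() {
    run("taskkill /F /IM backup.exe");
    run("rm -rf /var/log");
}
'''
print(llm_generation_indicators(sample))
```

Running the scanner on the contrived fragment flags one placeholder comment, the never-called `check_input` function, and the mix of Windows (`taskkill`) and Unix (`rm -rf`) commands in a single listing.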

“We are increasingly seeing attackers use generative AI to create malicious tools. This speeds up development, allowing attackers to adapt their tactics more quickly, and also lowers the barrier to entry into the industry. However, such generated code often contains errors, so attackers cannot fully rely on these new technologies during development,” comments Tatyana Shishkova, a leading expert at Kaspersky GReAT.
