Summary
- Google has identified malware families, such as PromptFlux and PromptSteal, that use generative AI to create new code and evade detection.
- Experts say this AI-assisted malware is still limited and ineffective, with weak prompts and frequent failures.
- Google adjusted Gemini's safeguards after discovering a flaw that let an attacker posing as an ethical hacker coax it into generating malicious code.
Google published a report revealing that it has found malware families that use generative artificial intelligence during execution, generating new code to steal data or evade detection systems.
One example is PromptFlux, which uses the Gemini API to rewrite its own source code and evade defense systems. Another sample, PromptSteal, queries an LLM hosted on Hugging Face to generate command lines to run on the infected machine and steal the victim's data.
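At its core, the pattern Google describes is an ordinary API call made mid-execution: the running program sends a prompt to a hosted model and acts on whatever text comes back. The Python sketch below is a minimal, benign illustration of that call pattern, assuming the classic Hugging Face Inference API request shape and a placeholder model name and token; it only prints the model's reply, never executes it.

```python
# Benign sketch of the runtime pattern described above: a program
# asking a hosted LLM for text in the middle of its execution.
# The model slug and token below are placeholders, not real values.
import requests

HF_API = "https://api-inference.huggingface.co/models/SOME_MODEL"  # hypothetical model slug
HEADERS = {"Authorization": "Bearer <HF_TOKEN>"}  # placeholder token

def ask_model(prompt: str) -> str:
    """Send a prompt to the hosted model and return the generated text."""
    resp = requests.post(HF_API, headers=HEADERS, json={"inputs": prompt}, timeout=30)
    resp.raise_for_status()
    # The classic HF text-generation endpoint returns [{"generated_text": ...}]
    return resp.json()[0]["generated_text"]

if __name__ == "__main__":
    # A harmless request, shown only to illustrate the call pattern.
    reply = ask_model("Write a one-line shell command that prints the current date.")
    print("Model suggested:", reply)  # printed, deliberately never executed
```

Nothing in this loop is exotic, which is part of why the experts quoted below are unimpressed: the novelty lies in where the call is made from, not in how it works.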
A third sample, PromptLock, was created as part of an academic study that set out to analyze whether large language models (LLMs) are capable of “planning, adapting and executing a ransomware attack”.
“While some implementations are experimental, they provide an early indicator of how threats are evolving and how they can integrate AI capabilities into future hacking activities,” the document says. “Threat actors are going beyond ‘vibe coding’ and the level seen in 2024, when AI tools were used as technical support.”
Threat exists, but real impact is still limited
Despite the findings, cybersecurity experts see little danger in malware created with the help of artificial intelligence. Researcher Marcus Hutchins, known for halting the WannaCry ransomware, points out that the prompts in the samples Google analyzed are weak or outright useless.
“[The prompt] doesn’t specify what the block of code should do or how it should evade an antivirus. It assumes Gemini will instinctively know how to bypass protections (it doesn’t),” Hutchins wrote on LinkedIn.
Kevin Beaumont, another expert in the field, has a similar assessment. “I looked at the samples. Many don’t even work; they fail immediately. There is no danger if you have basic security controls,” he commented on his colleague’s post.
Ars Technica spoke with security professionals. One of them, who asked not to be identified, also downplayed the technology’s role. “[AI is] just helping malware authors do what they were already doing. Nothing new. AI will improve, but we don’t know when or how much,” he said.
Google itself says in the report that PromptFlux is still experimental and is not capable of compromising a victim’s device or network. The researchers behind PromptLock likewise stated that their proof of concept had clear limitations in techniques such as persistence, lateral movement and advanced evasion tactics.
In the same report, Google reveals that it found a flaw in Gemini’s protections: a malicious actor managed to trick the AI into generating malicious code by posing as an ethical hacker participating in a cybersecurity competition. The company says it has adjusted its safeguards to prevent attacks of this type.
With information from Ars Technica and PCMag
Source: https://tecnoblog.net/noticias/google-descobre-malware-que-usa-ia-para-gerar-novos-codigos-apos-invasao/
