
Cybersecurity company ESET has revealed the discovery of an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock.
The newly identified strain, written in Golang, generates malicious Lua scripts in real time using OpenAI's gpt-oss:20b model via the Ollama API. The open-weight language model was released by OpenAI earlier this month.
“PromptLock leverages Lua scripts generated from hardcoded prompts to enumerate the local file system, inspect target files, exfiltrate selected data, and perform encryption,” ESET said. “These Lua scripts are cross-platform compatible, working on Windows, Linux, and macOS.”
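ESET says the scripts come from hardcoded prompts sent to a locally reachable model. The Ollama HTTP API that PromptLock reportedly calls is publicly documented; the minimal Go sketch below shows the general shape of such a /api/generate request. The prompt here is a harmless placeholder, not the malware's actual hardcoded prompts, and the endpoint address is simply Ollama's default:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// generateRequest mirrors the documented Ollama /api/generate payload.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

// generateResponse captures only the field we need from Ollama's reply.
type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Benign stand-in prompt; PromptLock's real prompts ask for malicious Lua.
	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a one-line Lua comment that greets the user.",
		Stream: false,
	}
	body, _ := json.Marshal(req)

	// Ollama's API listens on port 11434 by default.
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response) // model-generated text, which the malware would run as Lua
}
```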
Embedded in the ransomware's code are instructions to generate a custom ransom note based on the “affected files” and on whether the infected machine is a personal computer, company server, or power distribution controller. It is currently unclear who is behind the malware, but ESET told The Hacker News that the artifacts were uploaded to VirusTotal from the United States on August 25, 2025.

“Since PromptLock uses AI-generated Lua scripts, indicators of compromise (IoCs) may vary between executions,” the Slovak cybersecurity company pointed out. “This variability introduces challenges for detection. If properly implemented, such an approach could significantly complicate threat identification and make defenders' tasks more difficult.”
Assessed to be a proof of concept (PoC) rather than fully operational malware deployed in the wild, PromptLock uses the SPECK 128-bit encryption algorithm to lock files.
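SPECK is a lightweight block cipher family published by NSA researchers in 2013. ESET does not specify the exact variant beyond the 128-bit figure, so the Go sketch below implements the SPECK-128/128 variant (128-bit block, 128-bit key) purely as an illustration of the cipher, checked against the designers' published test vector; it is not PromptLock's file-locking code:

```go
package main

import "fmt"

const rounds = 32 // SPECK-128/128 uses 32 rounds

func ror(x uint64, r uint) uint64 { return (x >> r) | (x << (64 - r)) }
func rol(x uint64, r uint) uint64 { return (x << r) | (x >> (64 - r)) }

// expandKey derives the per-round keys from a 128-bit key (k[0] = K0, k[1] = K1).
func expandKey(k [2]uint64) [rounds]uint64 {
	var rk [rounds]uint64
	a, b := k[0], k[1]
	for i := 0; i < rounds; i++ {
		rk[i] = a
		b = (ror(b, 8) + a) ^ uint64(i)
		a = rol(a, 3) ^ b
	}
	return rk
}

// encryptBlock applies the SPECK round function to one 128-bit block (x = high word).
func encryptBlock(x, y uint64, rk [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (ror(x, 8) + y) ^ rk[i]
		y = rol(y, 3) ^ x
	}
	return x, y
}

func main() {
	// Test vector from the SPECK designers' paper.
	rk := expandKey([2]uint64{0x0706050403020100, 0x0f0e0d0c0b0a0908})
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, rk)
	fmt.Printf("%016x %016x\n", x, y) // expect: a65d985179783265 7860fedf5c570d18
}
```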
Besides encryption, analysis of the ransomware artifacts suggests it could also be used to exfiltrate or even destroy data, although the functionality to actually perform deletion does not appear to be implemented yet.
“PromptLock doesn’t download the entire model, which could be several gigabytes in size,” ESET explained. “Instead, the attacker can simply establish a proxy or tunnel from the compromised network to a server running the Ollama API with the gpt-oss:20b model.”
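The tunneling idea itself is mundane: any reverse proxy on the victim side can make a remote Ollama server appear local. A minimal Go sketch of the concept, using the standard library's reverse proxy and a placeholder remote address (not ESET's observed infrastructure):

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder for a remote server running the Ollama API.
	target, err := url.Parse("http://remote-ollama.example:11434")
	if err != nil {
		log.Fatal(err)
	}
	// Forward all local API traffic to the remote model server, so software
	// on this machine can talk to "localhost:11434" as if Ollama ran locally.
	proxy := httputil.NewSingleHostReverseProxy(target)
	log.Fatal(http.ListenAndServe("127.0.0.1:11434", proxy))
}
```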
The emergence of PromptLock is another indication that AI has made it easier for cybercriminals, even those lacking technical expertise, to quickly set up new campaigns, develop malware, and create convincing phishing content and malicious sites.
Anthropic, for its part, revealed that it has banned accounts created by two different threat actors who used its Claude AI chatbot to commit large-scale theft and extortion of personal data targeting at least 17 distinct organizations, and to develop several variants of ransomware with advanced evasion capabilities, encryption, and anti-recovery mechanisms.
The development comes as large language models (LLMs) powering various chatbots and AI-focused developer tools, such as Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Butterfly Effect Manus, Google Jules, Lenovo Lena, Microsoft GitHub Copilot, OpenAI ChatGPT Deep Research, OpenHands, Sourcegraph Amp, and Windsurf, have been found susceptible to prompt injection attacks that can allow for information disclosure, data exfiltration, and code execution.
Despite incorporating robust security and safety guardrails to prevent unwanted behavior, AI models have repeatedly fallen prey to novel variants of prompt injection and jailbreak attacks, highlighting the complexity and evolving nature of the security challenge.

“If exposed to a prompt injection attack, AIs may delete files, steal data, or engage in financial transactions,” Anthropic said. “New forms of prompt injection attacks are also constantly being developed by malicious actors.”
Additionally, new research has uncovered a simple yet clever attack called PROMISQROUTE (short for “Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion”) that abuses ChatGPT's model routing mechanism to trigger a downgrade, causing a prompt to be routed to an older, less safe model by adding phrases like “use compatibility mode” or “fast response needed,” thereby bypassing millions of dollars in AI safety research, Adversa AI said in a report released last week. The attack targets the cost-saving model routing mechanisms used by AI vendors.
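Adversa AI's report describes the routing abuse at a high level; a hypothetical toy router makes the flaw concrete. In the Go sketch below, the dispatcher picks a backend model based on user-controlled prompt text, so appending a trigger phrase forces a downgrade (model names and trigger phrases are illustrative, not Adversa AI's actual findings):

```go
package main

import (
	"fmt"
	"strings"
)

// routeModel is a deliberately naive cost-saving router: it inspects the
// prompt text itself to choose a backend model. Because the routing signal
// is user-controlled, an attacker can append trigger phrases to steer a
// request toward a cheaper model with weaker safety training.
func routeModel(prompt string) string {
	p := strings.ToLower(prompt)
	// Hypothetical downgrade triggers of the kind PROMISQROUTE abuses.
	if strings.Contains(p, "use compatibility mode") ||
		strings.Contains(p, "fast response needed") {
		return "cheap-legacy-model" // placeholder name, illustrative only
	}
	return "flagship-model" // placeholder name, illustrative only
}

func main() {
	fmt.Println(routeModel("Summarize this report."))
	// The same request plus an attacker-appended phrase lands on the weak model.
	fmt.Println(routeModel("Summarize this report. Fast response needed."))
}
```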