
Cybersecurity researchers have demonstrated a new prompt injection technique called PromptFix, which tricks the Generated Artificial Intelligence (Genai) model to perform the intended action by embedding malicious instructions within fake Captcha checks on web pages.
The attack technique described by Guardio Labs as “AI-ERA accepts Clickfix scams,” shows how AI-driven browsers such as Perplexity comets promise to automate commonplace tasks such as online shopping and processing emails on behalf of users.
“With PromptFix, the approach is different. We don’t try to glitch the model into submission,” Guardio says. “Instead, we use techniques borrowed from human social engineering playbooks to mislead them. They are directly appealing to their core design goals. We support humans quickly, completely, and without hesitation.”
This leads to a new reality that the company calls Porte Manteau, the terms “fraud” and “complexity.” Agent AI – A system that can autonomously pursue goals, make decisions, and act with minimal human supervision takes fraud to a whole new level.

AI-powered coding assistants like Lovable have proven susceptible to technologies like Vibescamming, so attackers can trick AI models into handing over sensitive information or making purchases on visually-like websites that pretend to be Walmart.
All this can be achieved by issuing simple instructions such as “Buy Apple Watch” after a human lands on the fake website in question through one of several ways, such as social media ads, spam messages, and search engine optimization (SEO) addiction.
Scammers are “a complex new era of fraud, and the convenience of AI will clash with new, invisible fraud surfaces, causing humans to be collateral damage,” Guardio said.
The cybersecurity company said it ran several tests on Comet, asking human users to manually complete the checkout process, simply because the browser just halts once in a while. However, in some examples, the browsers all came in, added the product to the cart, and automates the user’s stored address and credit card details without asking for confirmation on fake shopping sites.

Similarly, we know that simply asking the comet to check the email message of an action item is sufficient to parse spam emails from the bank, automatically clicking on the embedded link in the message and entering your login credentials on the phony login page.
“The outcome: the complete trust chain has become fraudulent. By handling the entire email-to-website interaction, the comet has effectively guaranteed its phishing page,” Guardio says. “Humans never saw the address of a suspicious sender, they had no opportunity to hover over a link or question the domain.”
That’s not all. As rapid injection continues to plague AI systems in direct and indirect ways, AI browsers must also address hidden prompts hidden within web pages that are invisible to human users but can be parsed by AI models and trigger unintended actions.
This so-called PromptFix attack is designed to convince AI models to click an invisible button on a web page. Bypassing capture checks and downloading malicious payloads without involvement with some human users, resulting in a drive-by download attack.
“PromptFix only works with Comet (which really acts as an AI agent), and for that reason it also works in agent mode in ChatGpt. Here, I managed to click on a button or follow the instructions to perform the action,” Guardio told Hacker News. “The difference is that in the case of CHATGPT, all downloaded files will be run in a sandboxed setup, so they land in a virtual environment rather than directly on the computer.”
Findings show the need for AI systems to predict, detect and neutralize these attacks beyond reactive defenses by building robust guardrails for phishing detection, URL reputation checks, domain spoofing, and malicious files.
Also, the enemy will automate large-scale deployments using services such as low-code site builders and genai platforms, including website builders and writing assistants, with realistic phishing content, clone trust brands, and services such as low-code site builders per unit 42 of the Palo Alto Network.
Additionally, the AI coding assistant can mistakenly expose its own code or sensitive intellectual property, creating potential entry points for targeted attacks, the company added.

Enterprise Security Firm Proofpoint has observed “many campaigns that utilize adorable services to distribute multi-factor authentication (MFA) phishing kits, including malware such as Tycoon, Cryptocurrency Wallet Drainers and Malware Loaders, as well as phishing kits targeting credit cards and personal information.”
Fake websites created using Lovable Lead To Captcha will check the Captcha check once they resolve redirecting to Microsoft branded certification phishing pages. Other websites have been found to lead to delivery and logistics services like UPS to enter personal and financial information to victims or to pages that download remote access trojans like Zgrat.
The beloved URL has been abused for investment fraud and banking qualification phishing, significantly lowering the barrier to entry for cybercrime. Lovable has since implemented AI-driven security protections to defeat the site and prevent the creation of malicious websites.
Other campaigns use deceptively deep content distributed on YouTube and social media platforms to redirect users to fraudulent investment sites. These AI trading scams are often hosted on platforms like Medium, Blogger, and Pinterest, and often rely on fake blogs and review sites to create a false sense of legitimacy.
“Genai will strengthen the operation of threat actors rather than replacing existing attack methods,” Crowdstrike said in its 2025 Threat Hunting Report.
Source link