Close Menu
  • Academy
  • Events
  • Identity
  • International
  • Inventions
  • Startups
    • Sustainability
  • Tech
  • Español
    • Português
What's Hot

300 servers and 3.5 million euros have been seized as Europol attacks ransomware networks worldwide

China criticizes the US ban on Harvard international students

Why the Event Industry Doesn’t Support DEI

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Academy
  • Events
  • Identity
  • International
  • Inventions
  • Startups
    • Sustainability
  • Tech
  • Español
    • Português
Fyself News
Home » A vulnerability in the Gitlab duo allowed attackers to hijack AI responses with hidden prompts
Identity

A vulnerability in the Gitlab duo allowed attackers to hijack AI responses with hidden prompts

userBy userMay 23, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

Gitlab Duo Vulnerability

Cybersecurity researchers have discovered an indirect rapid injection flaw from Gitlab’s artificial intelligence (AI) assistant duo that may have allowed attackers to steal source code and inject unreliable HTML into their responses.

GitLab Duo is an AI-powered coding assistant that allows users to write, review and edit code. Built using Anthropic’s Claude model, the service was first launched in June 2023.

But just as legal security was found, Gitlab Duo chat has become susceptible to indirect rapid injection flaws that allow attackers to “steal source code from private projects, manipulate code suggestions presented to other users, and even remove confidential, private zero-day vulnerabilities.”

Rapid injection refers to a class of vulnerabilities common to AI systems, when threat actors weaponize large-scale language models (LLMs) to manipulate responses to user prompts, resulting in unwanted behavior.

Indirect rapid injection is much more difficult indirect rapid injection in that it is embedded in a different context, such as a document or web page that the model is designed to process, instead of providing the input created by AI directly.

Cybersecurity

Recent research has shown that LLM is also vulnerable to jailbreak attack techniques. This allows AI-driven chatbots to ignore ethical and safety guardrails and generate harmful and illegal information that effectively removes the need for carefully crafted prompts.

Additionally, prompt leakage (peak) methods can be used to inadvertently reveal prompts or instructions in the preset system that the model intends to follow.

“For organizations, this means that they can leak personal information such as internal rules, features, filtering criteria, permissions, and user roles,” Trend Micro said in a report published earlier this month. “This gives attackers the opportunity to exploit the weaknesses of the system, leading to data breaches, trade secret disclosure, regulatory violations, and other adverse consequences.”

Gitlab Duo VulnerabilityDemonstration of Preak Attack – Credit Overload/Secret Features Exposure

The latest findings from Israeli software supply chain security companies show that hidden comments are placed within merge requests, commit messages, or comments about the issue, and that the source code is sufficient to leak sensitive data or insert HTML into Gitlab Duo’s answers.

These prompts can be further hidden to reduce detection using encoding tricks such as Base16 encoding, Unicode smuggling, and Katex rendering. The lack of input sanitization and the fact that Gitlab did not deal with any of these scenarios may have not dealt with it with more scrutiny than the source code.

Gitlab Duo Vulnerability

“The duo analyzes the entire context of a page, including comments, descriptions, source code, and more, and becomes vulnerable to injected instructions hidden anywhere in that context.”

This also means that attackers can deceive AI systems and redirect victims to fake login pages that collect qualifications by including malicious JavaScript packages in their integrated code or presenting malicious URLs as secure.

In addition to that, by taking advantage of GitLab Duo Chat’s ability to access information about specific merge requests and their internal changes, Legalins Security discovered that when DUO handles it, it can insert hidden prompts in the merge request description of a project that portrays private source code into an attacker control server.

This is possible to use streaming markdown rendering to interpret and render responses into HTML as output is generated. In other words, feeding HTML code via indirect rapid injection could result in code segments being executed in the user’s browser.

Following the responsible disclosure of February 12, 2025, this issue is being addressed by GitLab.

“This vulnerability highlights the double-edged nature of AI assistants like Gitlab Duo. When deeply integrated into the development workflow, they inherit risks as well as context,” Meilaz said.

“By embedding hidden instructions in seemingly harmless project content, we were able to manipulate the duo’s actions, remove private source code, and demonstrate how AI responses can be exploited for unintended, harmful outcomes.”

Cybersecurity

The disclosure is that Pentest Partners have revealed how SharePoint or SharePoint Agent Microsoft Copilot has been exploited by local attackers to access sensitive data and documents, even from files with “restricted views” privileges.

“One of the main benefits is the ability to search and troll large datasets in a short time, such as SharePoint sites for large organizations,” the company says. “This could significantly increase the chances of finding information that is useful to us.”

Attack techniques follow a new study that Elizaos (formerly AI16Z), a new distributed AI agent framework for automated Web3 operations, can manipulate malicious instructions by injecting prompts or historical interaction records, effectively corrupting saved contexts, leading to unintended asset transfers.

“The implications of this vulnerability are particularly serious given that ElizaoSagents is designed to interact with multiple users simultaneously and rely on shared context inputs from all participants.”

“A single successful operation by a malicious actor can undermine the integrity of the entire system and can create cascade effects that are difficult to detect and mitigate.”

Aside from rapid injection and jailbreak, another important issue with today’s LLMS illness is hallucination. This occurs when the model is not based on input data or simply generates a response that is manufactured.

According to a new survey published by AI testing company Giskard, instructing LLMS to keep their answers concise can negatively affect factuality and exacerbate hallucinations.

“It appears that this effect is occurring because effective counterarguments generally require longer explanations,” it said. “When forced to be concise, models face an impossible choice to make short but inaccurate answers or to appear useless by rejecting the question entirely.”

Did you find this article interesting? Follow us on Twitter and LinkedIn to read exclusive content you post.

Source link

Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleDo you think India, Pakistan and Iran are all pleading? Taliban | Taliban News
Next Article Thunder-wolves 118-103: MVP SGA Sets Up 2-0 NBA West Final Lead | Basketball News
user
  • Website

Related Posts

300 servers and 3.5 million euros have been seized as Europol attacks ransomware networks worldwide

May 23, 2025

US dismantles Danabot malware network and charges 16 for $50 million global cybercrime operation

May 23, 2025

CISA warns that there are widespread suspected SaaS attacks that exploit app secrets and cloud Misconfig

May 23, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

300 servers and 3.5 million euros have been seized as Europol attacks ransomware networks worldwide

China criticizes the US ban on Harvard international students

Why the Event Industry Doesn’t Support DEI

Fast delivery of medical technology for emergencies

Trending Posts

EU membership, seizing Russian money needed to rebuild Ukraine: Analysts | News of the Russian-Ukraine War

May 23, 2025

US sanctions after dominating chemical weapons used during the Civil War | Sudan War News

May 23, 2025

Thunder-wolves 118-103: MVP SGA Sets Up 2-0 NBA West Final Lead | Basketball News

May 23, 2025

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

B2Broker launches its first turnkey liquidity provider solution

DiffusedRive raises $3.5 million to solve the biggest challenges of physical AI: high quality training data

Top Startup and Tech Funding News – May 22, 2025

Apple, who will launch smart glasses in 2026 as part of API push, drops plans for camera-equipped smartwatch

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.