
New “rule file backdoor” attack allows hackers to inject malicious code through an AI code editor


March 18, 2025Ravi LakshmananAI Security/Software Security

Cybersecurity researchers have revealed details of a new supply chain attack vector, dubbed the “rule file backdoor,” that affects AI-powered code editors such as GitHub Copilot and Cursor.

“This technique allows hackers to silently compromise AI-generated code by injecting malicious instructions hidden in seemingly innocent configuration files used by Cursor and GitHub Copilot,” said Ziv Karliner, co-founder and CTO of Pillar Security, in a technical report shared with The Hacker News.

“By leveraging hidden Unicode characters and sophisticated evasion techniques in the model-facing instruction payload, threat actors can manipulate the AI into inserting malicious code that bypasses typical code reviews.”

The attack vector is noteworthy because the malicious code can quietly propagate throughout a project, posing a supply chain risk.

The core of the attack lies in the rules files that AI agents use to guide their behavior, which help users define best coding practices and project architecture.

Specifically, the attack involves embedding carefully crafted prompts within seemingly benign rules files, causing the AI tool to generate code containing security vulnerabilities or backdoors. In other words, the poisoned rules nudge the AI into producing malicious code.

This can be achieved by using zero-width joiners, bidirectional text markers, and other invisible characters to conceal the malicious instructions, and by exploiting the AI’s ability to interpret natural language so that it generates vulnerable code via semantic patterns that sidestep ethical and safety constraints.
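
As an illustration of the detection side, the short Python sketch below flags invisible Unicode “format” characters (the category that covers zero-width joiners, zero-width spaces, and bidirectional text markers) inside a rules file. The example file name .cursorrules and the reliance on the Unicode “Cf” category are assumptions made for illustration, not details from Pillar Security’s report.

# Minimal sketch: flag invisible Unicode characters that could hide
# instructions inside an AI editor rules file. The example path
# ".cursorrules" is an assumption about a typical Cursor setup.
import unicodedata

def find_hidden_chars(text: str):
    """Return (line, column, name) for every invisible format character."""
    hits = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # Unicode category "Cf" (format) covers zero-width joiners/spaces
            # and bidirectional control characters.
            if unicodedata.category(ch) == "Cf":
                hits.append((line_no, col, unicodedata.name(ch, "UNNAMED FORMAT CHARACTER")))
    return hits

if __name__ == "__main__":
    with open(".cursorrules", encoding="utf-8") as fh:  # example path only
        for line_no, col, name in find_hidden_chars(fh.read()):
            print(f"line {line_no}, col {col}: {name}")

Running it against a rules file that looks clean to the eye would reveal any zero-width or directional-override characters smuggled in alongside the visible text.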

Following responsible disclosure in late February and March 2024, both Cursor and GitHub have stated that users are responsible for reviewing and accepting suggestions generated by the tools.

“The ‘rule file backdoor’ represents a serious risk by weaponizing the AI itself as an attack vector, effectively turning the developer’s most trusted assistant into an unwitting accomplice, potentially affecting millions of end users through compromised software,” Karliner said.

“Once a poisoned rules file is incorporated into a project repository, it affects all future code generation sessions by team members. Furthermore, the malicious instructions often survive project forking, creating a vector for supply chain attacks that can affect downstream dependencies and end users.”
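
To catch that kind of propagation before it reaches downstream consumers, a repository-wide check could be run in CI or as a pre-merge gate. The sketch below is a minimal, hypothetical example that assumes rules files live at common Cursor and Copilot locations (.cursorrules, .cursor/rules/*.mdc, .github/copilot-instructions.md); the list is illustrative, not exhaustive or vendor-endorsed.

# Minimal CI-style sketch: exit non-zero if any rules file in a checkout
# contains invisible Unicode format characters (category "Cf"). The file
# locations searched are assumptions about common Cursor/Copilot setups.
import sys
import unicodedata
from pathlib import Path

RULE_FILE_GLOBS = (".cursorrules", ".cursor/rules/*.mdc", ".github/copilot-instructions.md")

def scan_repo(root: Path) -> list[Path]:
    flagged = []
    for pattern in RULE_FILE_GLOBS:
        for path in root.glob(pattern):
            text = path.read_text(encoding="utf-8", errors="replace")
            if any(unicodedata.category(ch) == "Cf" for ch in text):
                flagged.append(path)
    return flagged

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    bad = scan_repo(root)
    for path in bad:
        print(f"suspicious invisible characters in {path}")
    sys.exit(1 if bad else 0)  # non-zero exit lets CI block the merge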
