Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Accelerating Québec’s advanced materials ecosystem

$15B Crypto Bust, Satellite Spying, Billion-Dollar Smishing, Android RATs & More

£30m partnership between Toyota and UK to boost zero-emission vehicle research and development

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Cursor AI Code Editor Flaw Enables silent code execution via malicious repository
Identity

Cursor AI Code Editor Flaw Enables silent code execution via malicious repository

userBy userSeptember 12, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

The weaknesses of security are revealed in an AI-powered code editor cursor that can trigger code execution when a maliciously created repository is opened using a program.

This issue is due to the fact that immediate access security settings are disabled by default, opening the door for attackers to execute arbitrary code on the user’s computer with privileges.

“A cursor with workspace trust disabled by default has a code-style task configured with runoptions.runon. “Malicious .vscode/tasks.json turns casual ‘open folders’ into silent code execution in the user’s context. ”

Cursor is an AI-powered fork in Visual Studio code that supports a feature called Workspace Trust, allowing developers to safely view and edit their code, regardless of where they came from or who wrote it.

If this option is disabled, attackers can make the project available on GitHub (or any platform) and include a hidden “Autorun” instruction that tells the IDE to perform the task as soon as the folder is opened.

“This has the potential to leak sensitive credentials, modify files, or act as a vector for broader system compromises, putting cursor users at a serious risk from supply chain attacks.”

To combat this threat, users are encouraged to enable workplace trust with their cursor, open untrusted repository in another code editor, and audit them before opening it with the tool.

Audit and subsequent

Development emerges as a stealth and systemic threat that plagues AI-driven coding and inference agents such as Claude Code, Cline, K2 Think, and Windsurf, allowing threat actors to embed malicious instructions in a malicious way, taking malicious actions and leaking data from the software development environment.

Software Supply Chain Security Costume CheckMarx revealed in a report last week that an automated security review with Anthropic’s newly introduced Claude code allows projects to inadvertently expose themselves to security risks.

“In this case, careful comments can convince Claude that even obviously dangerous code is completely safe,” the company said. “The end result: developers can think it’s safe to think of Claude easily as a vulnerability, whether they’re malicious or just trying to close Claude.”

Another problem is that the AI ​​inspection process generates and runs test cases. This can lead to scenarios in which malicious code is executed against the production database if the Claude code is not properly sandboxed.

Additionally, the AI ​​company that recently launched the ability to create and edit new files on Claude warns that the feature has rapid injection risks as it runs in a “sandbox computing environment with limited internet access.”

Specifically, it is possible for bad actors to add “inconspicuous” instructions via external files or websites (indirect rapid injection). This will trick the chatbot to download and run or read sensitive data from connected knowledge sources via the Model Context Protocol (MCP).

“This means that you can trick Claude into sending information to malicious third parties from that context (such as prompts via MCP, projects, data, Google integrations, etc.),” ​​Anthropic said. “To mitigate these risks, we recommend using features to monitor Claude and stop it if you are using or accessing data.”

That’s not all. Later last month, the company also revealed that browser-using AI models like Claude for Chrome could face rapid injecting attacks, and implemented several defenses to address the threat and reduce the attack success rate of 23.6% to 11.2%.

“New forms of rapid spurt attacks are also constantly being developed by malicious actors,” he added. “By revealing real-world examples of insecure behaviors and new attack patterns that do not exist in controlled testing, we teach the model to recognize attacks and explain relevant behaviors, ensuring that the safety classifier receives everything the model itself has missed.”

CIS Build Kit

At the same time, these tools are known to be susceptible to traditional security vulnerabilities and broaden their attack surface with potential real-world influences –

A WebSocket authentication bypass in Claude Code IDE extensions (CVE-2025-52882, CVSS score: 8.8) that could have allowed an attacker to connect to a victim’s unauthenticated local WebSocket server simply by luring them to visit a website under their control, enabling remote command execution An SQL injection vulnerability in the Postgres MCP server that could have allowed an attacker to bypass the read-only restraint A Microsoft NLWeb path traversal vulnerability that executes arbitrary SQL statements could have allowed a remote attacker to read a sensitive file containing system configuration (“/etc/passwd”) or cloud credentials (.env files). An attacker who admitted to reading and writing to any database table in a generated site can now open a Base44 Cross-Site Script (XSS) that could allow attackers to access the victim’s app and development workspace, save cross-Site Script (XSS), leak vulnerabilities, and access applications that are difficult to access malicious logic when decking APIs. If it results from incomplete cross-origin control that could allow an attacker to host a drive-by attack, when you visit a malicious website, you can also reconfigure the application settings to intercept the chat and modify the response using the addiction model.

“As AI-driven development accelerates, the most pressing threat is often not exotic AI attacks, but classical security control failures,” Imperva said. “To protect the growing ecosystem of “vibe coding” platforms, security must be treated as a foundation rather than an afterthought. ”


Source link

#BlockchainIdentity #Cybersecurity #DataProtection #DigitalEthics #DigitalIdentity #Privacy
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleCalifornia bill regulating AI companion chatbots is approaching the law
Next Article At the world’s top speeds, no solvent, verified PFA-free
user
  • Website

Related Posts

$15B Crypto Bust, Satellite Spying, Billion-Dollar Smishing, Android RATs & More

October 16, 2025

CISA reports flaw in Adobe AEM with perfect 10.0 score – already under active attack

October 16, 2025

Chinese threat group Jewelbug secretly infiltrated Russian IT networks for months

October 15, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Accelerating Québec’s advanced materials ecosystem

$15B Crypto Bust, Satellite Spying, Billion-Dollar Smishing, Android RATs & More

£30m partnership between Toyota and UK to boost zero-emission vehicle research and development

Promoting global and environmental health research in Canada

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

The AI Revolution: Beyond Superintelligence – TwinH Leads the Charge in Personalized, Secure Digital Identities

Revolutionize Your Workflow: TwinH Automates Tasks Without Your Presence

FySelf’s TwinH Unlocks 6 Vertical Ecosystems: Your Smart Digital Double for Every Aspect of Life

Beyond the Algorithm: How FySelf’s TwinH and Reinforcement Learning are Reshaping Future Education

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2025 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.