Close Menu
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
What's Hot

Turning lignocellulosic biomass into sustainable fuel for transportation

SolarWinds Web Help Desk exploited by RCE in multi-stage attack against public servers

Nominations now being accepted for the 2026 Startup Battlefield 200 | Tech Crunch

Facebook X (Twitter) Instagram
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
Facebook X (Twitter) Instagram
Fyself News
  • Home
  • Identity
  • Inventions
  • Future
  • Science
  • Startups
  • Spanish
Fyself News
Home » Two important defects revealed in Wondershare Repaid and reveals user data and AI models
Identity

Two important defects revealed in Wondershare Repaid and reveals user data and AI models

userBy userSeptember 24, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest Telegram LinkedIn Tumblr Email Copy Link
Follow Us
Google News Flipboard
Share
Facebook Twitter LinkedIn Pinterest Email Copy Link

Cybersecurity researchers have disclosed two security flaws in Wondershare Repaist that could expose private user data and potentially expose systems to artificial intelligence (AI) tampering and supply chain risks.

The vulnerabilities in critical assessment of issues discovered by Trend Micro are listed below –

CVE-2025-10643 (CVSS score: 9.1) – Authentication bypass vulnerability present within permissions granted to storage account CVE-2025-10644 (CVSS score: 9.4) – Authentication bypass vulnerability present within permissions granted to SAS is

The successful exploitation of two flaws allows the attacker to bypass the authentication protection of the system, launch a supply chain attack, and ultimately executes arbitrary code at the customer’s endpoint.

Trend micro-researchers Alfredo Oliveira and David Fiser said that AI-powered data repair and photo editing applications “contradicted their privacy policy and inadvertently leaked personal user data by collecting and storing development, storage, and development, security and operational (DevseCops) practices.”

Poor development practices include embedding a tolerant cloud access token directly into the application’s code, allowing reads and writes to sensitive cloud storage. Furthermore, data is said to be stored without encryption, which could open the door to more widely misuse of users’ uploaded images and videos.

Worse, exposed cloud storage includes not only user data, but also software binaries for various products developed by AI models, Wondershare, container images, scripts, and company source code, allowing attackers to open up a way for supply chain attacks where they tamper with AI models or executables and target downstream customers.

DFIR Retainer Service

“Binaries automatically capture and run AI models from unstable cloud storage, allowing attackers to change these models or their configurations and infect users without their knowledge,” the researchers said. “An attack like this could distribute malicious payloads to legitimate users through software updates or downloads of AI models signed by the vendor.”

Beyond customer data exposure and manipulation of AI models, this issue can have serious consequences, ranging from intellectual property theft and regulatory penalties to erosion of consumer trust.

The cybersecurity company said it had responsibly disclosed two issues through the Zero Day Initiative (ZDI) in April 2025, but despite repeated attempts, it did not say it had yet to receive responses from the vendor. If there are no modifications, users are recommended to “limit interaction with the product.”

“The constant need for innovation will fuel the market with a rush to acquire new features and stay competitive, but we may not foresee how new, unknown ways or future features can be used to use these features,” Trend Micro said.

“This explains how security impacts can be overlooked. That’s why it’s important to implement strong security processes across your organization, including the CD/CI pipeline.”

The Need to Be Intimately Related to AI and Security

This development is because Trend Micro previously exposed Model Context Protocol (MCP) servers without authentication, or storing sensitive credentials such as MCP configurations in plain text, allowing threat actors to exploit them to gain access to cloud resources, databases, or malicious code.

Each MCP server acts as an open door to a database, cloud service, internal API, or data source for a project management system.

In December 2024, the company also discovered that it could abuse exposed container registry to obtain unauthorized access, obtain target docker images to extract AI models within AI models, modify the model’s parameters to influence predictions, and push the tampered images back into the exposed registry.

“A tampered model can work properly under normal conditions and can only display malicious changes if triggered by a particular input,” Trend Micro said. “This makes attacks particularly dangerous as they allow basic testing and security checks to bypass.”

The supply chain risk posed by MCP servers is highlighted by Kaspersky, devising a proof-of-concept (POC) exploit to highlight how MCP servers installed from unreliable sources can hide reconnaissance and data removal activities in the guise of AI-based productivity tools.

“When you install an MCP server, you essentially get permission to run code on a user machine using user privileges,” said security researcher Mohamed Ghobashy. “Unless there is a sandbox, third-party code, like any other program, can read the same files that the user is accessing and accessing outbound network calls.”

The findings show that by quickly adopting MCP and AI tools in enterprise settings to enable agent functionality, particularly without clear policies or security guardrails, new attack vectors can be opened, such as tool addiction, lag pull, shadowing, rapid injection, and fraudulent privilege escalation.

CIS Build Kit

In a report published last week, Palo Alto Networks Unit 42 revealed that the context attachment feature used by AI code assistants to bridge the knowledge gaps in AI models is sensitive to indirect rapid injection.

Indirect rapid injection depends on the inability to distinguish between user-issued instructions from those issued by the assistant and those secretly embedded in an external data source by the attacker.

So, if the user inadvertently supplies third-party data (such as files, repository, or URLs) of coding assistants that have already been contaminated by an attacker, the hidden malicious prompt can trick the tool into running a backdoor, inject arbitrary code into an existing codebase, and even weaponize it by leaking sensitive effects.

“Add this context to the prompt allows the code assistant to provide more accurate and specific output,” said Osher Jacob, researcher at Unit 42. “However, this feature could also create an opportunity for an indirect, rapid injection attack if the user unintentionally provides a contaminated contextual source by the threat actor.”

AI coding agents have been found to be vulnerable to what is called “Lies to Loop” (LITL) attacks, which aims to convince them that the instructions provided to LLM are actually much safer and that they effectively override the human (HITL) defenses set up when performing high-risk operations.

“Litl abuses trust between humans and agents,” said Ori Ron, a researcher at CheckMarx. “After all, humans can only respond to what agents prompt them and what agents prompt them. The user is inferred from the context the agents are given. It’s easy to lie to an agent.

“And the agent is willing to lie to the user, obscuring the malicious behavior that prompts protect, and essentially makes the agent an accomplice when delivering the keys to the kingdom.”


Source link

#BlockchainIdentity #Cybersecurity #DataProtection #DigitalEthics #DigitalIdentity #Privacy
Follow on Google News Follow on Flipboard
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
Previous ArticleHow one bad password ended business 158 years ago
Next Article Emergent raises $23 million from Lightspeed to help consumers build apps
user
  • Website

Related Posts

SolarWinds Web Help Desk exploited by RCE in multi-stage attack against public servers

February 9, 2026

AI Skill Malware, 31Tbps DDoS, Notepad++ Hack, LLM Backdoors and More

February 9, 2026

How top CISOs can overcome burnout and speed up MTTR without hiring more people

February 9, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest Posts

Turning lignocellulosic biomass into sustainable fuel for transportation

SolarWinds Web Help Desk exploited by RCE in multi-stage attack against public servers

Nominations now being accepted for the 2026 Startup Battlefield 200 | Tech Crunch

Gather AI, maker of ‘curious’ warehouse drones, wins $40 million led by Keith Block’s company

Trending Posts

Subscribe to News

Subscribe to our newsletter and never miss our latest news

Please enable JavaScript in your browser to complete this form.
Loading

Welcome to Fyself News, your go-to platform for the latest in tech, startups, inventions, sustainability, and fintech! We are a passionate team of enthusiasts committed to bringing you timely, insightful, and accurate information on the most pressing developments across these industries. Whether you’re an entrepreneur, investor, or just someone curious about the future of technology and innovation, Fyself News has something for you.

Castilla-La Mancha Ignites Innovation: fiveclmsummit Redefines Tech Future

Local Power, Health Innovation: Alcolea de Calatrava Boosts FiveCLM PoC with Community Engagement

The Future of Digital Twins in Healthcare: From Virtual Replicas to Personalized Medical Models

Human Digital Twins: The Next Tech Frontier Set to Transform Healthcare and Beyond

Facebook X (Twitter) Instagram Pinterest YouTube
  • Home
  • About Us
  • Advertise with Us
  • Contact Us
  • DMCA
  • Privacy Policy
  • Terms & Conditions
  • User-Submitted Posts
© 2026 news.fyself. Designed by by fyself.

Type above and press Enter to search. Press Esc to cancel.