Fyself News

Identity

We scanned 1 million publicly available AI services. How Bad Is Security Really?

May 5, 2026

The software industry has made real strides in delivering products securely over the past few decades, but the breakneck pace of AI adoption is putting that progress at risk. The promise of AI as a force multiplier, and the pressure to deliver more value faster, are driving companies to rapidly adopt self-hosted LLM infrastructure. That speed comes at the expense of security.

In the wake of the ClawdBot debacle, a viral self-hosted AI assistant that has racked up an astonishing average of 2.6 CVEs per day, the Intruder team set out to investigate just how bad the security of AI infrastructure really is.

To map the attack surface, we used certificate transparency logs to identify just over 2 million hosts running 1 million public services. What we found wasn't pretty. In fact, the AI infrastructure we scanned was more vulnerable, exposed, and misconfigured than any other class of software we've investigated.
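The enumeration step can be sketched roughly as follows. This is a minimal illustration assuming the public crt.sh certificate-transparency search service and its JSON output format; it is not Intruder's actual tooling, and the domains are hypothetical:

```python
import json

def hosts_from_ct_entries(entries):
    """Extract unique concrete hostnames from crt.sh-style JSON entries.

    Each entry's 'name_value' field may hold several newline-separated
    names; wildcard entries are skipped because they are not scannable hosts.
    """
    hosts = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower()
            if name and not name.startswith("*."):
                hosts.add(name)
    return sorted(hosts)

# A live pipeline would fetch something like:
#   https://crt.sh/?q=<pattern>&output=json
# Here we parse a canned response to show the shape of the data.
sample = json.loads("""[
  {"name_value": "ollama.example.com\\n*.example.com"},
  {"name_value": "chat.example.com"},
  {"name_value": "ollama.example.com"}
]""")
print(hosts_from_ct_entries(sample))
```

Deduplication matters at this scale: the 2 million hosts collapse out of a far larger pool of raw certificate names.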

No authentication by default

It didn't take long to find a surprising pattern. A significant number of hosts were deployed out of the box with no authentication configured. Examining the source code revealed why: many of these projects simply don't enable authentication by default.

Real user data and corporate tools were exposed for all to see. In the wrong hands, the impact can range from reputational damage to outright compromise.
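A first pass at spotting this kind of exposure can be sketched as a simple response classifier. The helper below is a hypothetical illustration of the idea, not the scanner used in this research:

```python
def classify_exposure(status_code, headers):
    """Rough triage of an HTTP probe against a suspected AI service.

    Returns 'auth-required' when the server demands credentials,
    'open' when an unauthenticated request succeeds, else 'other'.
    This is a heuristic: some apps return 200 with a login page,
    so real triage also needs to inspect the response body.
    """
    headers = {k.lower(): v for k, v in headers.items()}
    if status_code in (401, 403) or "www-authenticate" in headers:
        return "auth-required"
    if 200 <= status_code < 300:
        return "open"
    return "other"

print(classify_exposure(200, {"Content-Type": "application/json"}))   # open
print(classify_exposure(401, {"WWW-Authenticate": "Basic realm=x"}))  # auth-required
```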

Here are some of the most striking examples of what has been exposed.

Freely accessible chatbots

Many cases involved chatbots that left users' conversations exposed. In one example, an OpenUI-based deployment exposed a user's complete LLM conversation history. While that may seem relatively benign on the surface, chat history in a corporate environment can reveal a lot.

Of further concern was a general-purpose chatbot that hosted, and freely served, a wide range of models, including multimodal LLMs. Malicious users can jailbreak most models and bypass safety guardrails for malicious ends, such as generating illegal images or soliciting advice with criminal intent. And because they are using someone else's infrastructure, they can do so with little fear of repercussions. This is not hypothetical: people are already finding creative ways to exploit companies' chatbots to access more capable models without paying, and without the requests being logged to their own accounts.

There were also some questionable chatbots that published large amounts of private NSFW conversations. As if that weren't bad enough, the software running the Claude-powered Goonbot exposed its API key in cleartext.

Wide-open agent management platforms

We also discovered publicly available instances of agent management platforms such as n8n and Flowise. Some instances that were clearly intended to be internal were exposed to the internet without authentication. One of the most egregious examples was a Flowise instance that exposed the entire business logic of an LLM chatbot service.

Its list of credentials was also exposed. Flowise is hardened enough not to reveal the stored secret values to unauthenticated visitors, which limits the immediate damage, but attackers could use the tools connected to those credentials to exfiltrate sensitive information.

This is what makes these platforms especially dangerous: many AI tools lack proper access management controls, so access to a bot that integrates with third-party systems often means access to everything it touches.

In another example, an exposed setup offered a number of internet analysis tools and potentially dangerous local capabilities, such as file writing and code interpretation, making server-side code execution a real possibility.

We identified more than 90 exposed instances across sectors including government, marketing, and finance. These chatbots' workflows, prompts, and external integrations were all open. An attacker could modify workflows, redirect traffic, expose user data, or poison responses.

Saying hello to insecure Ollama APIs

One of the more surprising discoveries was the large number of exposed Ollama APIs that list their connected models and accept requests without authentication. We issued a single prompt ("Hello") to every server that listed connected models and checked whether authentication was required. Of the more than 5,200 servers we queried, 31% responded.
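The probe can be sketched against Ollama's documented HTTP API (the /api/tags and /api/chat endpoints on the default port 11434). The helpers below only build and parse the JSON; the network calls are left as comments, since this is an illustration rather than the survey code itself:

```python
import json

def chat_payload(model, prompt="Hello"):
    """Build the request body for Ollama's POST /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one JSON object instead of a stream
    }

def first_model(tags_json):
    """Pick the first connected model from a GET /api/tags response."""
    models = tags_json.get("models", [])
    return models[0]["name"] if models else None

# The survey loop would, for each host:
#   1. GET  http://<host>:11434/api/tags   -> list connected models
#   2. POST http://<host>:11434/api/chat   -> send the payload below
# An unauthenticated 200 containing a 'message' field counts as a response.
tags = json.loads('{"models": [{"name": "llama3:8b"}]}')
payload = chat_payload(first_model(tags))
print(json.dumps(payload))
```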

The responses gave us a sense of what these APIs are used for. Ethically, we could not investigate further, but the implications are far-reaching. Some examples:

“Hello, Master. Your orders are my law. What do you want? Speak freely. I am here to make it happen, without hesitation or questions.”

“I’m here to help you in any way I can with your health and well-being issues. Whether it’s anxiety, sleep issues, or any other concerns, please don’t hesitate to reach out to me for help.”

“Welcome! I’m an AI assistant integrated into your cloud management system. I’ll help you with operational tasks, infrastructure deployments, and service queries.”

Ollama does not store messages itself, so there is no immediate risk of conversation data being compromised. However, many of these instances were wrapping paid frontier models from Anthropic, DeepSeek, Moonshot, Google, and OpenAI: of all the models identified across all servers, 518 wrapped well-known frontier models.

Insecure by design

After triaging the results, it was clear that some of these technologies warranted deeper investigation. We spent time analyzing a subset of the applications in a lab environment and found the following insecure patterns repeated throughout:

  • Bad deployment practices: insecure defaults, misconfigured Docker setups, hard-coded credentials, and applications running as root
  • No authentication on new installations: many projects drop users directly into highly privileged accounts with full administrative access
  • Hard-coded static credentials: embedded in example setups and docker-compose files rather than generated during installation
  • New technical vulnerabilities: one popular AI project yielded an arbitrary code execution finding within days of lab analysis
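A team auditing its own deployments could lint for these patterns before exposing a service. The checker below is a simplified, hypothetical sketch: the field names are assumptions for illustration, not a real compose schema:

```python
def audit_service_config(cfg):
    """Flag the repeated insecure patterns listed above in a
    docker-compose-style service definition (a simplified dict).
    Purely illustrative; the keys are assumed, not a real schema.
    """
    findings = []
    # Applications running as root: absent 'user' means the container default.
    if cfg.get("user", "root") == "root":
        findings.append("runs as root")
    # Hard-coded static credentials shipped in the config itself.
    env = cfg.get("environment", {})
    if any("password" in k.lower() or "secret" in k.lower() for k in env):
        findings.append("hard-coded credential in environment")
    # No authentication configured on a fresh install.
    if not cfg.get("auth_enabled", False):
        findings.append("no authentication configured")
    return findings

insecure = {
    "image": "some-ai-app:latest",
    "environment": {"ADMIN_PASSWORD": "changeme"},  # copied from an example compose file
}
print(audit_service_config(insecure))
```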

These misconfigurations become even more dangerous when agents have access to tools such as code interpreters. If the sandbox is weak and the infrastructure is not isolated in a DMZ, the blast radius grows significantly.

Speed is the name of the game; security lags behind

Some LLM infrastructure projects have clearly abandoned decades of hard-won security best practices in favor of shipping fast. However, this is not purely a vendor problem: the pace of AI adoption and the pressure to outperform competitors in the marketplace are pushing teams to deploy first and secure later.

Don't wait for an attacker to discover your exposed AI infrastructure first. Intruder finds these misconfigurations and shows you what is visible from the outside.

This article is a contribution from one of our valued partners.
