
A new joint study by SentinelOne's SentinelLABS and Censys reveals that the deployment of open source artificial intelligence (AI) has created a vast "layer of unmanaged, publicly accessible AI computing infrastructure" spanning 175,000 unique Ollama hosts across 130 countries.
The companies say these systems span both cloud and residential networks around the world and operate outside the guardrails and monitoring that platform providers enforce by default. China accounts for the largest share of exposures, at just over 30%. Other countries with significant infrastructure footprints include the United States, Germany, France, South Korea, India, Russia, Singapore, Brazil, and the United Kingdom.
Researchers Gabriel Bernadett-Shapiro and Silas Cutler added: "Nearly half of the observed hosts were configured with tool invocation capabilities that allow them to execute code, access APIs, and interact with external systems, indicating the increasing integration of LLMs into larger system workflows."

Ollama is an open-source framework that allows users to easily download, run, and manage large language models (LLMs) locally on Windows, macOS, and Linux. By default, the service binds to the localhost address 127.0.0.1 on port 11434, but a simple configuration change to bind to 0.0.0.0 or another public interface exposes it to the public internet.
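As a brief illustration of the binding behavior described above, Ollama's listen address is controlled by the `OLLAMA_HOST` environment variable (the values below follow Ollama's documented defaults; treat this as a sketch, not a hardening guide):

```shell
# Default behavior: Ollama listens only on the loopback interface,
# unreachable from other machines. Equivalent to leaving OLLAMA_HOST unset.
export OLLAMA_HOST=127.0.0.1:11434
ollama serve

# Risky: binding to all interfaces exposes the unauthenticated API to
# any network the host is attached to, including the public internet
# if the address is routable.
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
```

Because the API itself ships without authentication, any reachability granted by the second form is effectively full access to the instance.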
As with the recently popular Moltbot (formerly Clawdbot), the fact that Ollama is locally hosted and operates outside the corporate security perimeter raises new security concerns, requiring new approaches to distinguish between managed and unmanaged AI compute, the researchers say.
Over 48% of observed hosts advertise tool invocation capabilities via API endpoints that, when queried, return metadata describing the capabilities they support. Tool calls (also known as function calls) let an LLM interact with external systems, APIs, and databases to extend its functionality and retrieve real-time data.
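To make the metadata check concrete: recent Ollama builds include a top-level `capabilities` list (e.g. `["completion", "tools", "vision"]`) in responses from the `/api/show` endpoint. The sketch below, with a helper function of my own naming, shows how such a response could be inspected for advertised tool support:

```python
def advertises_tool_support(show_response: dict) -> bool:
    """Given a parsed JSON response from Ollama's /api/show endpoint,
    report whether the model advertises tool-calling capability.

    Older builds omit the "capabilities" field entirely, in which case
    we conservatively report False.
    """
    return "tools" in show_response.get("capabilities", [])


# Payloads shaped like (hypothetical) /api/show responses:
print(advertises_tool_support({"capabilities": ["completion", "tools"]}))  # True
print(advertises_tool_support({"capabilities": ["completion"]}))           # False
print(advertises_tool_support({}))                                         # False
```

This is the kind of signal the researchers describe: a single unauthenticated request is enough for a scanner to learn whether an exposed host can do more than generate text.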
“The ability to invoke tools fundamentally changes the threat model. Text-generating endpoints can generate harmful content, whereas tool-enabled endpoints can perform privileged operations,” the researchers noted. “The combination of insufficient authentication and network exposure creates what we consider to be the most serious risks within the ecosystem.”
The analysis also identified hosts that support a variety of modalities beyond text, such as reasoning and vision capabilities, with 201 hosts running modified prompt templates that strip safety guardrails.
Given their exposed nature, these systems may be susceptible to LLMjacking, in which a malicious attacker exploits a victim's LLM infrastructure resources at the victim's expense. Abuse can range from spam email generation and disinformation campaigns to cryptocurrency mining and even reselling access to other criminal groups.
The risk is not theoretical. According to a report released this week by Pillar Security, threat actors are actively targeting publicly exposed LLM service endpoints to monetize access to AI infrastructure as part of an LLMjacking campaign called Operation Bizarre Bazaar.
The findings point to a criminal service with three components: systematically scanning the internet for public Ollama instances, vLLM servers, and OpenAI-compatible APIs running without authentication; validating endpoints and assessing response quality; and commercializing that access at discounted rates through Silver[.]inc, promoted as a unified LLM API gateway.
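The validation step described above can be sketched for defenders probing their own perimeter. Ollama's public `/api/tags` endpoint returns a JSON object with a `models` list; the helper below (my own naming, a sketch rather than a production scanner) classifies a captured response as a likely open Ollama instance:

```python
def looks_like_open_ollama(status_code: int, body: dict) -> bool:
    """Heuristic check on a response captured from a host's /api/tags
    endpoint: an HTTP 200 carrying a "models" list, returned without
    any authentication, suggests a publicly exposed Ollama instance.
    """
    return status_code == 200 and isinstance(body.get("models"), list)


# Response shaped like Ollama's /api/tags output:
print(looks_like_open_ollama(200, {"models": [{"name": "llama3:8b"}]}))  # True
# An auth challenge or an unrelated service would fail the check:
print(looks_like_open_ollama(401, {}))                                   # False
print(looks_like_open_ollama(200, {"status": "ok"}))                     # False
```

Running the same check that attackers do, against your own address space, is a straightforward way to find exposures before they are resold.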

“This end-to-end activity, from reconnaissance to commercial resale, represents the first documented LLM jacking market with full attribution,” said researchers Eilon Cohen and Ariel Vogel. This operation has been attributed to a threat actor named Hecker (also known as Sakuya and LiveGamer101).
The decentralized nature of the exposed Ollama ecosystem, spread across cloud and residential environments, creates governance gaps, not to mention new avenues for prompt injection and for proxying malicious traffic through a victim's infrastructure.
“Much of the infrastructure is residential, which complicates traditional governance and requires new approaches that distinguish between managed cloud deployments and distributed edge infrastructure,” the companies said. “Importantly for defenders, LLMs are increasingly being deployed at the edge to translate instructions into actions, so they must be treated with the same authentication, monitoring, and network controls as any other externally accessible infrastructure.”
