How exposing endpoints increases risk across your LLM infrastructure

February 23, 2026 · 8 Mins Read

The Hacker News · Artificial Intelligence / Zero Trust

As more organizations run their own large language models (LLMs), they are also introducing more internal services and application programming interfaces (APIs) to support those models. Modern security risk increasingly stems not from the models themselves, but from the infrastructure that serves, connects, and automates them. Each new LLM endpoint expands the attack surface in ways that are easily overlooked during rapid deployment, especially when the endpoint is implicitly trusted. An LLM endpoint that accumulates excessive permissions and exposes long-lived credentials can grant far more access than intended. As exposed endpoints become an increasingly popular attack vector for reaching the systems, identities, and secrets that power LLM workloads, organizations must prioritize endpoint privilege management.

What is an endpoint in a modern LLM infrastructure?

In a modern LLM infrastructure, an endpoint is an interface through which something, such as a user, application, or service, communicates with a model. Simply put, endpoints let you send requests to the LLM and receive responses. Common examples include inference APIs that process prompts and produce output, model-management interfaces used to update models, and administrative dashboards that let teams monitor performance. Many LLM deployments also rely on plug-in or tool-execution endpoints that allow the model to interact with external services such as databases. Together, these endpoints define how the LLM connects to the rest of its environment.
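To make these endpoint categories concrete, here is a minimal Python sketch that models them as a simple route table. All route paths, function names, and return values are illustrative inventions for this example, not taken from any real deployment:

```python
# Illustrative sketch of the endpoint types a typical LLM deployment exposes,
# modeled as a route table. Every name here is hypothetical.

def infer(prompt: str) -> str:
    """Inference endpoint: accepts a prompt, returns model output (stubbed)."""
    return f"model output for: {prompt}"

def reload_model(version: str) -> str:
    """Model-management endpoint: swaps the served model version."""
    return f"now serving {version}"

def call_tool(tool: str, arg: str) -> str:
    """Tool-execution endpoint: lets the model reach external services."""
    return f"{tool}({arg})"

# Each route is an interface into the model and, transitively,
# into everything the model is wired to.
ROUTES = {
    "/v1/infer": infer,
    "/admin/reload": reload_model,
    "/tools/execute": call_tool,
}
```

The point of the sketch is that the admin and tool routes sit right beside the inference route; unless each is separately authenticated and scoped, exposing one tends to expose them all.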

The main challenge is that most LLM endpoints are built for internal use and speed rather than long-term security. They are typically created to support experimentation or early deployment and are left running with minimal oversight, so more access than necessary tends to be granted. In practice, the endpoint becomes the security perimeter, and its identity controls, secret handling, and privilege scope determine how far an attacker can reach.

How LLM endpoints are exposed

LLMs are rarely exposed by a single failure. More often, they are put at risk gradually by small assumptions and decisions made during development and deployment. Over time, these patterns turn internal services into externally accessible attack surfaces. Some of the most common exposure patterns include:

  • Publicly accessible APIs without authentication: Internal APIs may be exposed publicly to speed up testing and integration. Authentication is delayed or skipped entirely, leaving endpoints reachable long after they should have been restricted.
  • Weak or static tokens: Many LLM endpoints rely on hard-coded tokens or API keys that never rotate. If these secrets leak through a misconfigured system or repository, unauthorized users can gain unrestricted access to your endpoints.
  • The assumption that what's inside is safe: Teams often treat internal endpoints as trusted by default, assuming unauthorized users cannot reach them. In practice, internal networks are often reachable through VPNs or misconfigured controls.
  • Temporary test endpoints that persist: Endpoints created for debugging or demos are rarely cleaned up. They remain active but unmonitored and poorly secured while the surrounding infrastructure evolves.
  • Cloud misconfigurations that expose services: A misconfigured API gateway or firewall rule can unintentionally expose internal LLM endpoints to the internet. These misconfigurations accumulate gradually and often go unnoticed until an endpoint is breached.
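The weak-or-static-token pattern can be contrasted with a slightly safer approach in a few lines of Python. This is a minimal sketch, assuming an environment variable named `LLM_API_KEY`; both names are inventions for illustration:

```python
import hmac
import os

# Anti-pattern: a hard-coded key that ships with the code and never rotates.
HARDCODED_KEY = "sk-live-do-not-do-this"  # leaks wherever the repo goes

# Safer sketch: load the secret from the environment and compare it in
# constant time, so rotating the key only touches deployment config,
# not source code. LLM_API_KEY is a hypothetical variable name.
def is_authorized(presented: str) -> bool:
    expected = os.environ.get("LLM_API_KEY", "")
    return bool(expected) and hmac.compare_digest(presented, expected)
```

`hmac.compare_digest` avoids timing side channels in the comparison, and the empty-string guard means an endpoint whose key was never configured rejects everything rather than accepting anything.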

Why exposed endpoints are dangerous across LLM infrastructures

Exposed endpoints are particularly dangerous in LLM environments because LLMs are designed to connect multiple systems within a broader technology infrastructure. When cybercriminals compromise a single LLM endpoint, they often gain access to far more than the model itself. Unlike traditional APIs that perform a single function, LLM endpoints typically integrate with databases, internal tools, or cloud services to support automated workflows. Once a single endpoint is compromised, attackers can move quickly and laterally between systems that already trust the LLM by default.

The real danger comes not from the LLM being too powerful, but from the implicit trust placed in the endpoint from the start. Once exposed, an LLM endpoint acts as a force multiplier: instead of manually exploring systems, cybercriminals can use the compromised endpoint to automate their work. Exposed endpoints can put your LLM environment at risk through:

  • Prompt-driven data extraction: Cybercriminals can craft prompts that summarize sensitive data the LLM has access to, turning the model into an automated data extraction tool.
  • Abuse of tool-invocation privileges: When the LLM calls internal tools or services, an exposed endpoint lets attackers exploit those tools to modify resources or perform privileged actions.
  • Indirect prompt injection: Even with restricted access, cybercriminals can manipulate data sources and LLM inputs to indirectly cause the model to perform harmful actions.
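One common way to blunt tool-invocation abuse is an explicit allowlist in front of the model's tool dispatcher. The sketch below is illustrative only; the tool names and per-tool argument sets are invented for the example:

```python
# Hypothetical allowlist gate for the model's tool calls: only named tools
# may run, and only with the arguments each tool is registered for.
ALLOWED_TOOLS = {
    "search_docs": {"query"},   # read-only lookup
    "get_status": set(),        # takes no arguments
}

def dispatch_tool(name: str, args: dict) -> str:
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    unexpected = set(args) - ALLOWED_TOOLS[name]
    if unexpected:
        raise ValueError(f"unexpected arguments: {sorted(unexpected)}")
    return f"ran {name} with {args}"  # stand-in for the real invocation
```

The design choice here is deny-by-default: a compromised endpoint can still call the two read-only tools, but it cannot reach for arbitrary internal actions, which is exactly the blast-radius limit the text describes.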

Why NHI is especially dangerous in an LLM environment

A non-human identity (NHI) is a credential used by a system on behalf of a human user. In an LLM environment, service accounts, API keys, and other non-human credentials enable models to access data, interact with cloud services, and perform automated tasks. NHIs pose a significant security risk in LLM environments because models depend on them continuously. For convenience, teams grant NHIs broad permissions but often fail to review and tighten those permissions later. Once an LLM endpoint is compromised, cybercriminals can inherit the NHI access behind it and operate under trusted credentials. Several common issues exacerbate this risk:

  • Secret sprawl: API keys and service account credentials are often scattered across configuration files and pipelines, making them difficult to track and protect.
  • Static credentials: Many NHIs use long-lived credentials that are rarely updated. Once leaked, these credentials remain usable for a long time.
  • Over-permissioning: NHIs are often granted broad access to avoid delays, and that access is rarely revisited. Over time, an NHI accumulates far more privilege than its work actually requires.
  • Identity sprawl: Growing LLM systems generate large numbers of NHIs throughout the environment. Without proper monitoring and management, this expansion reduces visibility and enlarges the attack surface.
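A small audit can surface the static-credential problem described above. The sketch below is hypothetical: the 30-day rotation window and the inventory format (credential name mapped to last-rotation timestamp) are assumptions for illustration, not any standard:

```python
from datetime import datetime, timedelta, timezone

# Illustrative policy, not a standard: flag anything not rotated in 30 days.
ROTATION_WINDOW = timedelta(days=30)

def stale_credentials(inventory: dict, now: datetime) -> list:
    """Return NHI credential names whose last rotation exceeds the window.

    `inventory` maps a credential name to its last-rotation datetime.
    """
    return sorted(name for name, rotated in inventory.items()
                  if now - rotated > ROTATION_WINDOW)
```

Run against a real secret inventory (a vault export, for instance), this kind of check turns "static credentials" from an abstract risk into a concrete worklist.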

How to reduce risk from exposed endpoints

Mitigating the risk from exposed endpoints starts with assuming that cybercriminals will eventually reach exposed services. Security teams should aim not just to prevent access, but to limit what can happen once an attacker reaches the endpoint. A practical way to do this is to apply Zero Trust security principles to every endpoint: access must be explicitly verified, continuously evaluated, and strictly monitored in all cases. Security teams should also:

  • Enforce least-privilege access for human and machine users: Endpoints should let users, human or non-human, access only what they need for a specific task. Reducing privileges limits the damage cybercriminals can do through a compromised endpoint.
  • Use just-in-time (JIT) access: Privileged access should not stand open on any endpoint all the time. With JIT access, privileges are granted only when needed and automatically revoked once the task completes.
  • Monitor and record privileged sessions: Recording privileged activity helps security teams detect privilege abuse, investigate incidents, and understand how endpoints are actually used.
  • Automatically rotate secrets: Tokens, API keys, and service account credentials should rotate regularly. Automated rotation shortens the window in which a compromised secret can be misused.
  • Remove long-lived credentials where possible: Static credentials are among the biggest security risks in LLM environments. Replacing them with short-lived credentials limits how long a compromised secret remains useful in the wrong hands.

These security measures are especially important in LLM environments because LLMs rely heavily on automation. Because models operate continuously without human oversight, organizations must time-limit and closely monitor that access.

Prioritize endpoint privilege management to improve security

In LLM environments where models are deeply integrated with internal tools and sensitive data, risk is rapidly amplified when endpoints are exposed. Traditional access models are insufficient for systems that operate autonomously and at scale. As a result, organizations need to rethink how they grant and manage access in their AI infrastructure. Endpoint privilege management shifts the focus from trying to prevent endpoint compromise to limiting impact by eliminating persistent access and controlling what both human and non-human users can do once they reach the endpoint. Solutions like Keeper support this zero trust security model by allowing organizations to remove unnecessary access and better protect critical LLM systems.

Note: This article was thoughtfully written and contributed by Keeper Security content writer Ashley D’Andrea for our readers.
