
The busiest piece of enterprise infrastructure in most companies is the developer workstation. That laptop is where credentials are created, tested, cached, copied, and reused across services, bots, build tools, and now local AI agents.
In March 2026, TeamPCP threat actors proved how valuable developer machines can be. A supply chain attack against LiteLLM, a popular AI development library downloaded millions of times every day, turned developer endpoints into a systematic credential harvesting operation. The malware only needed access to the plaintext secrets already on disk.
LiteLLM Attack: Developer Endpoint Compromise Case Study
This attack was easy to execute but devastating in scope. TeamPCP compromised LiteLLM package versions 1.82.7 and 1.82.8 on PyPI, injecting information-stealing malware that activates when a developer installs or updates the package. The malware systematically collected SSH keys; AWS, Azure, and GCP cloud credentials; Docker configurations; and other sensitive data from developer machines.
PyPI removed the malicious package within hours of detecting it, but the damage was significant. GitGuardian’s analysis found that 1,705 PyPI packages were configured to automatically pull the compromised LiteLLM version as a dependency. Popular packages such as dspy (5 million monthly downloads), opik (3 million), and crawl4ai (1.4 million) may have triggered malware execution during installation. The cascading effect meant that even organizations that had never used LiteLLM directly could be compromised by transitive dependencies.
Why developer machines are an attractive target
This attack pattern is not new; it has simply become more visible. Similar tactics played out at scale during Operation Shai Khurd. When GitGuardian analyzed 6,943 developer machines compromised in that incident, researchers found 33,185 unique secrets, at least 3,760 of which were still valid. More striking still, each live secret appeared in roughly eight different locations on the same machine, and 59% of the compromised systems were CI/CD runners rather than personal laptops.
Attackers now infiltrate toolchains through compromised dependencies, malicious plugins, or harmful updates. Once inside, the malware enumerates the local environment with the same methodical approach security teams use to scan for vulnerabilities. The difference is that it looks for credentials stored in .env files, shell profiles, shell history, IDE settings, cached tokens, build artifacts, and the AI agent’s memory store.
Secrets exist everywhere in plain text
The LiteLLM malware succeeded because developer machines hold a high concentration of cleartext credentials. Secrets sit in source trees, local configuration files, debug output, copied terminal commands, environment variables, and temporary scripts. They accumulate in .env files that were supposed to be local-only but become a permanent part of the workspace. Convenience turns into residue, and residue becomes opportunity.
Developers are running agents, local MCP servers, CLI tools, IDE extensions, build pipelines, and retrieval workflows, all of which require credentials. These credentials are spread across predictable paths known to the malware: ~/.aws/credentials, ~/.config/gh/config.yml, project .env files, shell history, and agent configuration directories.
Protect developer endpoints at scale
It’s important to build continuous protection across all the developer endpoints where credentials accumulate. GitGuardian addresses this by extending secrets security beyond the code repository to the developer machine itself.
The LiteLLM attack demonstrated what happens when credentials accumulate in cleartext across developer endpoints. Here’s what you can do to reduce that exposure.
Understand your exposure
Start with visibility. Treat workstations as a primary environment for secret scanning, not an afterthought. Use ggshield to scan local repositories for credentials that have slipped into your code or linger in your Git history. Also scan file system paths where secrets accumulate outside of Git: project workspaces, dotfiles, build output, and the agent folders where local AI tools generate logs, caches, and “memory” stores.
ggshield detecting secrets in specific files scanned from a path
Don’t assume that just because environment variables are not in a file, they are safe. Shell profiles, IDE settings, and generated artifacts often keep environment values on disk indefinitely. Scan these locations the same way you scan repositories.
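As a rough illustration of what scanning these locations means, here is a minimal regex sweep over a single file. This is a coarse sketch, not a substitute for ggshield’s detectors, which use hundreds of specific patterns with validity checking; the pattern names below are illustrative.

```python
import re
from pathlib import Path

# Rough patterns for common credential shapes. A real scanner uses far
# more specific detectors; these three are only a sketch.
PATTERNS = {
    "aws_access_key_id": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),
    "assignment": re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*\S+"),
}

def scan_file(path: Path) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspicious line."""
    findings = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for name, pattern in PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings
```

Pointing even a sweep this crude at shell profiles and .env files tends to produce findings, which is why attackers automate exactly this step.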
Add a ggshield pre-commit hook so new leaks are blocked while you clean up old ones. This turns secret detection into a default guardrail that catches mistakes before they become incidents.
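ggshield ships a pre-commit hook; a typical `.pre-commit-config.yaml` entry looks like the following (pin `rev` to a current release tag for your environment):

```yaml
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.25.0  # example pin; use the latest ggshield release
    hooks:
      - id: ggshield
        language_version: python3
        stages: [commit]
```

With this in place, `pre-commit install` wires the scan into every commit on that machine.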
ggshield pre-commit command catches secrets
Move secrets to a vault
Detection without remediation is just noise. When credentials are compromised, remediation typically requires coordination across teams: security identifies the breach, the infrastructure team owns the service, the original developer may have left the company, and the product team worries about production disruption. Without clear ownership and workflow automation, remediation stays manual and slides down the priority list.
The solution is to treat secrets as managed identities with defined ownership, lifecycle policies, and automated remediation paths. Move credentials into centralized vault infrastructure so security teams can enforce rotation schedules, access policies, and usage monitoring. Integrating incident management with your existing ticketing system lets remediation happen in context instead of across constant tool switching.
GitGuardian Analytics showing the state of monitored secrets
Treat AI agents as credential risks
Agent tools can read files, run commands, and move data. For OpenClaw-style agents, “memory” is literally a file on disk (SOUL.md, MEMORY.md) stored in a predictable location. Don’t paste credentials into agent chats or hand secrets to agents “just for now”, and regularly scan agent memory files for sensitive data.
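Auditing agent memory can be automated with a short sweep. This is a minimal sketch: the file names come from the OpenClaw-style convention mentioned above, and the single regex stands in for a real detector set.

```python
import re
from pathlib import Path

# Memory file names used by OpenClaw-style agents, per the article.
# Other agents use different names -- adjust for your own tools.
MEMORY_FILES = ("SOUL.md", "MEMORY.md")

# Coarse credential-shaped pattern; a real scanner would use many more.
SECRET_HINT = re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[=:]\s*\S+")

def audit_agent_memory(root: Path) -> list[Path]:
    """Return agent memory files under `root` containing credential-shaped lines."""
    flagged = []
    for name in MEMORY_FILES:
        for path in root.rglob(name):
            if SECRET_HINT.search(path.read_text(errors="ignore")):
                flagged.append(path)
    return flagged
```

Run this against home directories and agent workspaces on a schedule, and treat any hit as a leak to rotate, not just a file to delete.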
Eliminate entire classes of secrets
The easiest way to reduce secret sprawl is to eliminate the need for a whole category of shared secrets. On the human side, replace passwords with WebAuthn (passkeys). On the workload side, move to OIDC federation so your pipelines no longer depend on stored cloud keys and service account secrets.
Start with the riskiest paths, where a compromised credential would cause the most damage, and work outward: migrate developer access to passkeys and CI/CD workflows to OIDC-based authentication.
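As one concrete example of OIDC federation in CI/CD, a GitHub Actions job can assume an AWS role with no stored keys at all; this assumes GitHub Actions and AWS, and the role ARN below is a placeholder.

```yaml
# Hypothetical workflow fragment: no AWS keys are stored anywhere.
# The job exchanges a short-lived GitHub OIDC token for temporary
# AWS credentials at run time.
permissions:
  id-token: write   # allow the job to request an OIDC token
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/ci-deploy  # placeholder ARN
          aws-region: us-east-1
```

After this step, the runner holds only session credentials that expire on their own; there is no long-lived key on disk for malware to steal.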
Use temporary credentials
If you can’t eliminate a secret yet, shorten its lifetime and rotate it automatically. Use SPIFFE to issue cryptographic identity documents (SVIDs) that rotate automatically instead of relying on static API keys.
Start with the long-lived cloud keys, deployment tokens, and service credentials that developers keep locally for convenience. Move to short-lived tokens, automatic rotation, and workload identity patterns. Each migration removes one more durable secret that could be stolen or weaponized.
The goal is to reduce the value an attacker can extract from a successful foothold on a developer machine.
Honeytoken as an early warning system
Honeytokens provide interim protection. Place decoy credentials in the locations attackers systematically sweep: developer home directories, common configuration paths, and agent memory stores. When a token is collected and used, it fires an instant alert, shrinking detection time from “discovering the damage weeks later” to “catching the attack as it happens.” This is not an end state, but it changes the response window while systematic cleanup continues.
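Planting a decoy is mechanically trivial; the value comes from the alerting behind it. The sketch below writes an AWS-credentials-shaped decoy file into a path attackers sweep. In practice you would use tokens minted by a honeytoken service (GitGuardian Honeytoken, for example) so any use triggers an alert; the values generated here are inert placeholders, and `plant_honeytoken` is a hypothetical helper name.

```python
import secrets
import string
from pathlib import Path

def make_decoy_key_id() -> str:
    """Generate an inert string matching the shape of an AWS access key id."""
    body = "".join(
        secrets.choice(string.ascii_uppercase + string.digits) for _ in range(16)
    )
    return "AKIA" + body

def plant_honeytoken(path: Path) -> str:
    """Write a decoy credentials file and return the key id to register for alerting."""
    key_id = make_decoy_key_id()
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        "[default]\n"
        f"aws_access_key_id = {key_id}\n"
        f"aws_secret_access_key = {secrets.token_hex(20)}\n"
    )
    return key_id   # register this id with your alerting pipeline
```

Seed `~/.aws/credentials`-style decoys on machines that have no legitimate AWS access: any use of that key id is, by construction, an attacker.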
Developer endpoints have become part of your critical infrastructure. They sit at the intersection of privilege, trust, and execution. The LiteLLM incident proved that attackers understand this better than most security programs. Organizations that treat developer machines with the same governance discipline they already apply to production systems are the ones that will survive the next supply chain breach.
