
According to Pentera’s AI and Adversarial Testing Benchmark Report 2026, the majority of security leaders are struggling to defend their AI systems with tools and skills that are not suited to the task.
This report, based on a survey of 300 U.S. CISOs and senior security leaders, examines how organizations protect their AI infrastructure and highlights critical gaps related to skills shortages and reliance on security controls not designed for the AI era.
AI adoption outpaces security visibility
AI systems are rarely deployed in isolation. They are layered and integrated across existing enterprise technologies, from cloud platforms and identity systems to applications and data pipelines. Because ownership is dispersed across different teams, effective centralized monitoring breaks down.
As a result, 67% of CISOs reported having limited visibility into how AI is being used across their organization, and none said they had complete visibility. Instead, respondents acknowledged being aware of, or accepting, some form of uncontrolled or unauthorized AI use.
Without a clear understanding of where AI systems operate or what resources they have access to, security teams struggle to effectively assess risk. Fundamental questions often remain unanswered, such as what identities AI systems rely on, what data they can access, and how they behave when controls fail.
Skills, not budget, are the main barrier
AI security is now a regular topic of board and executive discussions, but the survey shows that the biggest challenges are not financial.
CISOs identified the following as the biggest obstacles to securing AI infrastructure:
Lack of in-house expertise (50%)
Limited visibility into AI usage (48%)
Insufficient security tools specifically designed for AI systems (36%)
Only 17% cited budget constraints as their main concern. This suggests that while many organizations are willing to invest in AI security, they do not yet have the specialized skills needed to assess AI-related risks in real-world environments.
AI systems introduce behaviors that security teams are still learning to understand, such as autonomous decision-making, indirect access paths, and privileged interactions between systems. Without the right expertise and adversarial testing, it becomes difficult to assess whether existing controls are as effective as intended.
Traditional controls carry most of the load
In the absence of AI-specific best practices, skills, and tools, most companies extend existing security controls to cover their AI infrastructure.
The survey found that 75% of CISOs rely on traditional security controls such as endpoint, application, cloud, and API security tools to protect their AI systems. Only 11% reported using security tools specifically designed to protect their AI infrastructure.
This approach reflects a familiar pattern seen during previous technology transitions, where organizations first adapted their existing defenses before more customized security practices emerged. While this can provide basic coverage, controls built for traditional systems may not take into account how AI changes access patterns and expands potential attack vectors.
Familiar challenges apply to AI too
Taken together, these findings suggest that AI security challenges stem from gaps in expertise, visibility, and tooling rather than a lack of awareness or intent.
As AI becomes a core part of enterprise infrastructure, the report suggests that organizations should focus on building expertise and improving how they validate security controls across the environments in which AI is already operating.
To learn more about our findings, download the AI and Adversarial Testing Benchmark Report 2026 for a deeper dive into the data and key takeaways.
Note: This article was written by Ryan Dory, Director of Technical Advisors at Pentera.
