
The growing role of AI in enterprise environments has made it urgent for the Chief Information Security Officer (CISO) to champion effective AI governance. Governance of any new technology is difficult, but effective governance is harder still. For most organizations, the first instinct is to respond with strict policies: write a policy document, distribute a set of restrictions, and hope the risk is contained. Effective governance does not work that way. It must be a living system that guides safe adoption of a transformative technology without slowing the pace of innovation, one that shapes how AI is used every day.
For CISOs, the balance between security and speed is the defining tension of the AI era. This technology represents the biggest opportunity, and the greatest risk, businesses have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI spreads, and regulatory gaps accumulate as compliance debt. Move too slowly, and competitors gain transformative efficiencies that are impossible to match. Both paths carry consequences that can cost a CISO their job.
AI adoption initiatives also cannot be led by a “department of no,” in which the security function obstructs the rest of the organization. Instead, CISOs must map governance to organizational risk tolerance and business priorities so that security acts as a true revenue enabler. In this article, I share three components that help CISOs make that shift and drive AI governance programs that enable secure adoption at scale.
1. Understand what’s going on on the ground
When ChatGPT first arrived in November 2022, most CISOs I know scrambled to publish strict policies telling employees what they should not do. The intent was good, given that leaking sensitive data is a legitimate concern. But policies written from that “document-backward” approach may look sound in theory, yet they rarely work in practice. Given how fast AI is evolving, AI governance must instead be designed with a “real-world-forward” mindset that accounts for what is actually happening on the ground within an organization. This requires CISOs to build a working understanding of AI: the technology itself, the capabilities embedded in SaaS platforms, and how employees use it all to get the job done.
AI inventories, model registries, and cross-functional committees may sound like buzzwords, but they are practical mechanisms that help security leaders develop this AI fluency. For example, an AI Bill of Materials (AIBOM) provides visibility into the components, datasets, and external services that feed an AI model. Just as a Software Bill of Materials (SBOM) clarifies third-party dependencies, an AIBOM shows leaders what data they are relying on, where it comes from, and what risks it introduces.
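To make the idea concrete, here is a minimal sketch of what an AIBOM record might capture, written in Python. The field names and the PII check are illustrative assumptions, not a standard schema; real AIBOM formats (for example, CycloneDX's machine-learning extensions) define richer, standardized fields.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified AIBOM record for illustration only; real
# schemas define far more detail than this sketch.

@dataclass
class DatasetRef:
    name: str           # e.g., "support-ticket-transcripts-2024"
    source: str         # where the data comes from (internal, vendor, public)
    contains_pii: bool  # flag data that raises privacy obligations

@dataclass
class AIBOMEntry:
    model_name: str                  # the model this entry describes
    base_model: str                  # upstream foundation-model dependency
    datasets: list[DatasetRef] = field(default_factory=list)
    external_services: list[str] = field(default_factory=list)  # third-party APIs called
    known_risks: list[str] = field(default_factory=list)        # risks this component introduces

entry = AIBOMEntry(
    model_name="support-triage-assistant",
    base_model="vendor-hosted LLM (API)",
    datasets=[DatasetRef("support-tickets", "internal CRM export", contains_pii=True)],
    external_services=["vendor inference API"],
    known_risks=["PII in fine-tuning data", "third-party data retention"],
)

# A simple governance check: flag entries whose data includes PII.
if any(d.contains_pii for d in entry.datasets):
    print(f"Review required: {entry.model_name} uses PII-bearing data.")
```

Even a lightweight record like this answers the questions the AIBOM exists to answer: what the model depends on, where its data originated, and which risks each dependency introduces.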
Model registries play a similar role for AI systems already in use. They track which models are deployed, when they were last updated, and how they are performing, which prevents “black-box sprawl” and informs decisions about patching, deprecation, and scaling. AI committees ensure that oversight does not fall on security alone. These groups, often chaired by a designated AI lead or risk officer, include representatives from legal, compliance, HR, and business units. This transforms governance from siloed directives into a shared responsibility that bridges security concerns and business outcomes.
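As a minimal sketch of the registry idea, the snippet below tracks deployment status, ownership, and last-update dates, and surfaces stale models for review. The structure and field names are assumptions for illustration; production registries (MLflow's model registry, for instance) add versioned stages, lineage, and performance tracking.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative in-memory model registry; field names are assumptions,
# not any specific product's API.

@dataclass
class RegisteredModel:
    name: str
    version: str
    deployed: bool
    last_updated: date
    owner: str  # accountable team, so no model becomes a "black box"

registry: dict[str, RegisteredModel] = {}

def register(model: RegisteredModel) -> None:
    registry[f"{model.name}:{model.version}"] = model

def stale_models(max_age_days: int = 180) -> list[RegisteredModel]:
    """Surface deployed models that haven't been updated recently,
    informing patching, deprecation, or scaling decisions."""
    today = date.today()
    return [m for m in registry.values()
            if m.deployed and (today - m.last_updated).days > max_age_days]

register(RegisteredModel("fraud-scorer", "2.1", True, date(2024, 11, 3), "risk-analytics"))
for m in stale_models():
    print(f"{m.name} v{m.version} (owner: {m.owner}) needs review")
```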
2. Align your policies with your organization’s pace
Without that real-world-forward view, security leaders often fall into the trap of codifying controls they cannot realistically deliver. I saw this firsthand through a CISO colleague. Knowing that employees were already experimenting with AI, he worked to enable responsible adoption of several GenAI applications across his workforce. But when a new CIO joined the organization and felt there were too many GenAI applications in use, the CISO was instructed to ban all GenAI until a single enterprise-wide platform was selected. A year later, that platform had still not been implemented, and employees were using unapproved GenAI tools, exposing the organization to shadow-AI vulnerabilities. The CISO was stuck enforcing a blanket ban that failed to protect critical assets, without the authority to deliver a viable alternative.
This type of scenario unfolds when policies are written faster than they can be implemented, or when an organization’s pace of adoption is misjudged. Policies that look decisive on paper quickly become obsolete if they do not evolve with leadership changes, embedded AI capabilities, and the organic ways employees fold new tools into their work. Governance must be flexible enough to adapt, or security teams risk being left holding mandates that are impossible to enforce.
The path forward is to design policies as living documents. They need to evolve with the business, be informed by actual use cases, and adapt based on measurable results. Nor can governance stop at policy. It must cascade into standards, procedures, and baselines that guide daily work. Only then will employees know what safe AI adoption actually looks like.
3. Make AI governance sustainable
Even with strong policies and a roadmap in place, employees will keep using AI in ways that are not formally approved. The security leader’s goal is not to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or built in-house, so they do not reach for unvetted alternatives. It also means recognizing and reinforcing positive behavior so employees see the value in following the guardrails rather than bypassing them.
Sustainable governance also means utilizing AI to defend the organization and protecting the AI the organization deploys, the two pillars of the SANS Institute’s recently published Secure AI Blueprint. To utilize AI effectively, CISOs need to enable SOC teams to apply it to cyber defense: automating noise reduction and alert enrichment, validating detections against threat intelligence, and keeping analysts in the loop for escalation and incident response. CISOs should also ensure the proper controls are in place to protect AI systems from adversarial threats, as outlined in SANS’s Critical AI Security Guidelines.
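As a rough illustration of that “utilize AI” pattern, the sketch below automates enrichment and noise reduction while keeping analysts in the loop for escalation. The threat-intel feed, fields, and thresholds are all hypothetical assumptions, not part of the SANS blueprint.

```python
from dataclasses import dataclass, field

# Illustrative triage pipeline: automation handles enrichment and noise
# reduction; anything confirmed bad or high-severity is escalated to a
# human analyst. All names and thresholds here are assumptions.

KNOWN_BAD_IPS = {"203.0.113.7"}  # stand-in for a threat-intelligence feed

@dataclass
class Alert:
    source_ip: str
    description: str
    severity: int                      # 1 (low) .. 10 (high)
    enrichment: dict = field(default_factory=dict)

def enrich(alert: Alert) -> Alert:
    """Attach threat-intel context so detections are validated, not raw."""
    alert.enrichment["intel_match"] = alert.source_ip in KNOWN_BAD_IPS
    return alert

def triage(alerts: list[Alert], escalate_at: int = 7) -> list[Alert]:
    """Suppress low-risk noise; escalate confirmed or severe alerts."""
    escalated = []
    for alert in map(enrich, alerts):
        if alert.enrichment["intel_match"] or alert.severity >= escalate_at:
            escalated.append(alert)  # analyst stays in the loop from here
    return escalated

alerts = [
    Alert("198.51.100.2", "failed login burst", severity=3),
    Alert("203.0.113.7", "outbound beaconing", severity=5),
]
for a in triage(alerts):
    print(f"Escalate: {a.description} from {a.source_ip} ({a.enrichment})")
```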

Find out more at SANS Cyber Defense Initiative 2025
This December, SANS will offer LDR514: Security Strategic Planning, Policy, and Leadership at SANS Cyber Defense Initiative 2025. The course covers how to create actionable policies, align governance with business strategy, and build security into your culture, equipping you to lead safely in the AI era.
If you are ready to turn AI governance into a business enabler, sign up for SANS CDI 2025 here.
Note: This article was contributed by Frank Kim, SANS Institute Fellow.