
We are at an inflection point in cybersecurity—one that is unfolding faster and across more dimensions than anything we’ve experienced before. AI is no longer a theoretical capability or a future‑state aspiration. It is rapidly reshaping the global economy and, with it, the security and risk landscape. Generative and agentic AI capabilities are being deployed inside enterprises faster than security teams can establish control frameworks, ownership models, or risk boundaries. This challenge is compounded by a noisy marketplace where AI‑themed vendors are flooding the ecosystem, fueled by aggressive investment and hype‑driven positioning. Cyber practitioners are inundated with claims of “AI‑powered” differentiation and cannot reasonably separate innovation from noise at this velocity.
Research with members of CRA’s CyberRisk Collaborative – whose mission is to strengthen the cybersecurity workforce through professional and leadership development – surfaced the following priorities:
AI is reshaping how security operations function, shifting from manual, reactive workflows to augmented and increasingly automated models. While human judgment remains central, members tell us that AI is accelerating and restructuring key phases of the SOC lifecycle:
The net effect is not the elimination of security analysts, but the elevation of their role—from manual triage to decision authority, detection engineering, and resilience strategy.
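The shift from manual triage to decision authority can be illustrated with a minimal sketch: alerts are scored and queued automatically so analysts spend their time on judgment calls rather than sorting. The scoring weights and alert fields below are illustrative assumptions, not a reference implementation.

```python
# Illustrative sketch of augmented triage: score alerts so analysts
# review the highest-risk items first instead of sorting manually.
# Severity weights and alert fields are assumptions for illustration.

def triage_score(alert):
    """Combine simple signals into a priority score (higher = review first)."""
    score = {"low": 1, "medium": 3, "high": 5}.get(alert["severity"], 0)
    score += 2 if alert.get("asset_critical") else 0      # crown-jewel asset
    score += 2 if alert.get("threat_intel_match") else 0  # known-bad indicator
    return score

alerts = [
    {"id": 1, "severity": "low",    "asset_critical": False, "threat_intel_match": False},
    {"id": 2, "severity": "high",   "asset_critical": True,  "threat_intel_match": False},
    {"id": 3, "severity": "medium", "asset_critical": False, "threat_intel_match": True},
]

# Analysts work the queue top-down; the model orders it, a human decides.
queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # [2, 3, 1]
```

In production the scoring function would be a trained model rather than fixed weights, but the division of labor is the same: automation ranks, humans decide.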
Alongside formal AI initiatives, leaders are confronting a parallel and less controlled development: Shadow AI. Generative tools, copilots, embedded APIs, and autonomous agents are being adopted across business units without formal approval or governance. In many organizations, usage is spreading faster than visibility and policy frameworks can keep pace.
Members describe several consistent risk patterns emerging:
Unlike traditional shadow IT, Shadow AI operates inside everyday workflows—often invisible to centralized security teams. The issue is not intent, but unmanaged acceleration. Leaders emphasize that the priority is not prohibition, but bringing AI usage into a governed, visible framework before risk scales beyond control.
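Bringing Shadow AI into a visible framework usually starts with detection. One common first step is scanning egress or proxy logs for traffic to known AI-service endpoints. The sketch below assumes a simple `user,destination_host` log format and a hypothetical domain watchlist; both would need to be adapted to a real environment.

```python
# Illustrative sketch: surface unsanctioned AI usage by flagging outbound
# requests to known generative-AI endpoints in proxy logs.
# The domain watchlist and log format are assumptions for illustration.

from collections import Counter

# Hypothetical watchlist of AI-service domains (extend for your environment).
AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Return per-user counts of requests to watched AI domains.

    Each log line is assumed to be 'user,destination_host'.
    """
    hits = Counter()
    for line in log_lines:
        user, _, host = line.strip().partition(",")
        if host in AI_DOMAINS:
            hits[user] += 1
    return hits

sample = [
    "alice,api.openai.com",
    "bob,example.com",
    "alice,api.anthropic.com",
]
print(flag_shadow_ai(sample))  # Counter({'alice': 2})
```

The point of such visibility is not enforcement for its own sake: the counts identify which teams to engage and onboard into a governed process, consistent with the prioritize-governance-over-prohibition stance above.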
Much like Generative AI consumed tech and cyber conversations in 2025, Agentic AI will dominate the conversation this year. These systems create content, make recommendations, summarize sensitive data, and—in some cases—take autonomous actions. This raises new questions about reliability, bias, data leakage, model abuse, and unintended behavior.
Addressing these risks requires a modern governance framework built on:
Managing agentic AI risk is essential for business resilience and our national security more broadly. And once again, our members emphasized how this increases the need for human oversight, not the opposite.
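The human-oversight principle can be made concrete with a minimal approval gate: low-risk agent actions proceed automatically, while higher-risk ones are held until a person signs off. The risk tiers and action names below are illustrative assumptions, not a prescribed taxonomy.

```python
# Minimal sketch of a human-in-the-loop gate for agentic AI actions:
# low-risk actions run automatically; high-risk actions require explicit
# human approval. Action names and risk tiers are illustrative assumptions.

HIGH_RISK = {"delete_data", "send_external_email", "change_permissions"}

def execute_with_oversight(action, approver=None):
    """Run an agent action only if it is low-risk or a human approves it.

    `approver` is a callable standing in for a human review step; it
    receives the action name and returns True to allow execution.
    """
    if action in HIGH_RISK:
        if approver is None or not approver(action):
            return f"blocked: '{action}' requires human approval"
    return f"executed: {action}"

# Low-risk action proceeds; high-risk action is held for review.
print(execute_with_oversight("summarize_report"))
print(execute_with_oversight("delete_data"))
print(execute_with_oversight("delete_data", approver=lambda a: True))
```

The design choice is deliberate: the gate fails closed. An agent can never escalate its own privileges by omitting the approval step, which is the property members highlight when they say agentic AI increases, rather than decreases, the need for human oversight.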
Given the scale and speed of change, organizations need a structured, disciplined approach to integrating AI risk management into their security strategies:
But above all, navigating the path ahead requires us to acknowledge a simple truth: everyone is figuring this out at the same time.
Unlike more established cybersecurity domains—where decades of standards, regulatory frameworks, and maturity models exist—AI security is still in its infancy. NIST’s AI Risk Management Framework is an important starting point, but the depth and breadth of prescriptive AI standards simply do not exist yet.
This makes peer learning and shared lessons an essential part of any credible roadmap. Across our community, we consistently hear that some of the most actionable insights come not from theoretical models but from what fellow CISOs have already tried, broken, corrected, or proven effective.
The path forward is clear: in a space with no settled playbook, managing AI risk starts with community.