The AI Reckoning: Elevating Cybersecurity, Governance to the Boardroom
By Cam Sivesind
Tue | Dec 9, 2025 | 8:54 AM PST

Artificial intelligence is no longer an emerging technology—it is a general-purpose capability poised to reshape how every company competes, operates, and grows.

According to a recent report from McKinsey & Company, "The AI reckoning: How boards can evolve," AI's implications could be existential for companies, making it a mandatory board-level priority. Yet, despite this massive risk and opportunity, the report reveals a critical governance gap: 66 percent of directors report having "limited to no knowledge or experience" with AI.

This disconnect means that while organizations are rapidly deploying AI, boards lack the fluency and structure needed for proper oversight. For CISOs and cybersecurity leaders, this gap represents both a challenge and a clear mandate: you are uniquely positioned to translate abstract AI risks into concrete, strategic business concerns, effectively acting as the board's interpreter for the secure integration of AI.

The McKinsey guidance rests on the understanding that not all companies will, or should, approach AI in the same way. A company's AI posture is determined by two dimensions: its Source of Value (Optimize Internally vs. Expand Strategically) and its Degree of Adoption (Selective vs. Holistic).

Boards must align with management on which of these four archetypes the business is pursuing to tailor their governance and risk tolerance:

  • Business Pioneers: AI drives new offerings and redefines competition (e.g., a manufacturer becoming an AI-driven solutions provider).

    • CISO risk focus: data moats, IP protection, compute cost sustainability, and AI-native regulatory compliance.

  • Internal Transformers: AI becomes the "enterprise nervous system," rewiring the operating model at scale (e.g., a mining company optimizing everything from exploration to refining).

    • CISO risk focus: resilience, observability, explainability, and systemic dependency risk.

  • Functional Reinventors: AI is used in disciplined, ROI-driven ways for targeted workflow improvements (e.g., using AI for specialized scheduling or predictive maintenance).

    • CISO risk focus: vendor risk, vendor lock-in, resource allocation, and coherence across multiple initiatives.

  • Pragmatic Adopters: AI is adopted selectively, usually after solutions are proven in the market by others (a fast-follower approach).

    • CISO risk focus: risk of inaction, strategic readiness to pivot quickly, and tracking competitor AI maturity.
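The archetype-to-risk-focus pairing above can be restated as a simple lookup a security team might use when scoping its review agenda. The contents come from the list above; expressing them as a lookup table is an illustrative choice, not something the report prescribes:

```python
# CISO risk-focus areas per AI-posture archetype, restating the list above.
# Structuring this as a lookup table is illustrative, not from the report.
CISO_RISK_FOCUS = {
    "Business Pioneers": [
        "data moats", "IP protection",
        "compute cost sustainability", "AI-native regulatory compliance",
    ],
    "Internal Transformers": [
        "resilience", "observability",
        "explainability", "systemic dependency risk",
    ],
    "Functional Reinventors": [
        "vendor risk", "vendor lock-in",
        "resource allocation", "coherence across initiatives",
    ],
    "Pragmatic Adopters": [
        "risk of inaction", "readiness to pivot quickly",
        "tracking competitor AI maturity",
    ],
}

def risk_agenda(archetype: str) -> list[str]:
    """Return the risk-focus checklist for a given archetype (empty if unknown)."""
    return CISO_RISK_FOCUS.get(archetype, [])
```

The point of the table is alignment: once the board and management agree on an archetype, the security review agenda falls out mechanically rather than being renegotiated each quarter.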

"At first, my main goal was simply protecting the organization against AI threats. I also explored how the security team could leverage AI," said Justin Armstrong, Founder and CEO of Armstrong Risk Management LLC. "As time has gone on, I have shifted to 'how can I help the organization adopt AI?' and have been working with the team leading AI adoption. It's important for CISOs to take a leading role in advancing the use of AI in a responsible way."

The six strategic actions where security must lead

To overcome the governance gap, the report outlines six actions for boards. For cybersecurity leaders, these actions are not suggestions—they are your mandate to ensure secure AI integration.

1. Align on AI posture and review it annually

Boards need to revisit their posture regularly due to changes in the competitive, regulatory, and technological environments.

CISO action: Provide the security risk intelligence and threat landscape analysis that informs this annual review, especially in areas like supply chain risk and regulatory drift.

2. Clarify ownership of AI oversight

Boards must explicitly define which topics belong to the full board, which belong to committees (e.g., risk), and which are operational.

CISO action: Proactively define the Cyber-AI Risk Committee agenda, ensuring topics like major vendor security reviews and AI risk frameworks are addressed.

3. Codify a framework for AI governance policy

Fewer than 25 percent of companies have board-approved AI policies. A credible framework must specify crucial security and risk components, including:

  • Risk thresholds (where human sign-off is needed).

  • Vendor or data guardrails (IP protections, security, and lineage standards).

  • Escalation triggers (what incidents reach the board and how fast).

CISO action: The security team is the architect for these guardrails. Deliver the policy draft, focusing on data integrity (provenance and poisoning threats) and explicit incident response plans for adversarial AI attacks.
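A minimal sketch of how those three components might be made machine-checkable follows. The component names (risk thresholds, vendor/data guardrails, escalation triggers) come from the report; every field name, vendor name, and threshold value here is a hypothetical assumption for illustration:

```python
from dataclasses import dataclass

# Hypothetical policy values -- illustrative assumptions, not from the report.
APPROVED_VENDORS = {"vendor-a", "vendor-b"}   # vendor guardrail
HUMAN_SIGNOFF_RISK = "high"                   # risk threshold

@dataclass
class AIDeployment:
    vendor: str
    risk_level: str         # "low" | "medium" | "high"
    data_has_lineage: bool  # provenance documented end to end

def needs_human_signoff(d: AIDeployment) -> bool:
    """Risk threshold: high-risk use cases require human sign-off."""
    return d.risk_level == HUMAN_SIGNOFF_RISK

def violates_guardrails(d: AIDeployment) -> list[str]:
    """Vendor/data guardrails: flag unapproved vendors and missing lineage."""
    issues = []
    if d.vendor not in APPROVED_VENDORS:
        issues.append("unapproved vendor")
    if not d.data_has_lineage:
        issues.append("missing data lineage")
    return issues

def escalation_deadline_hours(incident_severity: str):
    """Escalation trigger: hours until the board must be notified (None = no trigger)."""
    return {"high": 24, "medium": 72}.get(incident_severity)
```

The value of encoding the policy this way is that "what reaches the board and how fast" stops being tribal knowledge and becomes something an audit or tabletop exercise can test directly.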

4. Build AI fluency

Directors must understand AI's role in creating opportunities and risks.

CISO action: Lead the AI fluency program. Offer regular, targeted briefings and external expert input on emerging AI cybersecurity threats (e.g., prompt injection, model extraction) and evolving international regulatory alignment. Reframe AI not just as technology, but as a business catalyst that affects the competitive dynamics of the sector.

Dr. Kimberly KJ Haywood, in her book "Here We Go Again... Except It's AI: AI Governance: From Analysis to Enterprise Action," offers two passages that speak directly to the role of AI governance in the boardroom.

Chapter VIII: AI Governance, Trenches to the Boardroom; What Works, What Fails, and Who Actually Owns the Mess: "At some point, governance stops being a checklist and starts becoming who you are. That's the evolution we're after. Not compliance for its own sake, but leadership that can be trusted even when no one is watching. Because that's the real test of ethical AI. It's as much about how we lead through it as how we build it."

Chapter X: Emerging Tech, Evolving Rules: AI Governance for What's Next: "AI doesn't wait for your next quarterly review—it evolves by the minute. So governance needs to be continuous, not occasional. Think feedback loops, not approval gates. Governance needs to listen, learn, and adapt in real time. Security plays a central role here… 'Set it and forget it' is a dangerous mindset. AI systems don't fail like traditional software; they fail quietly, until they don't."

Dr. Haywood is Principal and CEO at Nomad Cyber Concepts, and Adjunct Cybersecurity Professor at Collin College.

According to a McKinsey blog post regarding the report:

  • More than 88 percent of organizations report using AI in at least one business function; however, board governance has not matched that pace. While interest in AI seems to have spiked after the introduction of ChatGPT, as of 2024, only 39 percent of Fortune 100 companies disclosed any form of board oversight of AI—whether through a committee, a director with AI expertise, or an ethics board.

  • Even more telling, a global survey of directors found that 66 percent report their boards have "limited to no knowledge or experience" with AI, and nearly one in three say AI does not even appear on their agendas.

  • AI-savvy boards will be able to help their companies navigate these risks and opportunities. According to a 2025 MIT study, organizations with digitally and AI-savvy boards outperform their peers by 10.9 percentage points in return on equity, while those without are 3.8 percent below their industry average.

  • The analysis highlights two priorities for boards:

    • Defining the company's posture toward AI adoption. Most organizations still lack a clear view of how AI fits into their strategy or transformation agenda. Without alignment between the board and management, oversight becomes either superficial or paralyzing.

    • Tailoring the governance model to match the company's AI posture. The board's task is to calibrate its role around where to engage, what to oversee, and the cadence to use.

The AI reckoning demands that the CISO evolve beyond being a technical risk manager and step into the role of a strategic enabler. By aligning the technical controls of the security program with the board's governance needs, the CISO ensures that the pursuit of AI value is never separated from a robust framework of safety and security. Your ability to codify, measure, and communicate AI risk is the key to unlocking the enormous potential AI offers while mitigating the existential threats it presents.
