GRANT THORNTON
2026 AI Impact Survey Report
The AI proof gap: Why AI isn’t delivering the performance leaders expected
Most organizations are scaling AI they cannot explain, measure or defend. Our survey of 950 C-suite and senior business leaders reveals why — and what the organizations pulling ahead are doing differently.
INTRODUCTION
Executives are scaling AI.
They are not governing it.
GOVERNANCE
Management is moving fast.
Oversight hasn't caught up.
Boards are giving AI the green light, but many are not asking what happens if something goes wrong. Three in four boards have approved major AI investments, yet fewer than half have set governance expectations or made AI risk a standing agenda item for board or committee oversight.
Most governance models were not built for the volume of AI use cases organizations are now deploying. Centralized review bodies get overwhelmed as use cases multiply, creating bottlenecks that slow the business without actually reducing risk. Organizations that develop stronger governance adopt AI faster: among organizations still piloting AI, only 7% are very confident they could pass an independent AI governance audit in 90 days, compared with 74% of organizations with fully integrated AI.
Organizations are moving through discovery and deployment unable to show that AI is working safely, defensibly and at the scale the business requires. Each ungoverned initiative does not just create one gap. It creates a gap that makes the next initiative harder to govern, harder to measure, and harder to defend. The proof gap does not grow linearly. It compounds.
The proof gap is real and it is measurable. The question is what separates the organizations that can prove their AI works from those that cannot. The answer, revealed consistently across the survey data, is governance. Not governance as most organizations practice it. Governance built as a performance system.
Governance and growth metrics rise with integration stage
Source: Grant Thornton’s 2026 AI Impact Survey, n = 950
Note: None of the 28 “early AI exploration” stage respondents were “very confident” they could pass an independent AI governance audit. Proof at the earliest stages is not low. It is nonexistent. Organizations do not drift into governance confidence. They build it deliberately. The gap between piloting and fully integrated organizations is tenfold.
Without strong governance, piloting and scaling produce activity — not outcomes. Every gap compounds the next.
STRATEGY
Strategy drives AI ROI.
Three in four haven’t built one.
Organizations are succeeding on breadth: more pilots, more use cases, more functions touched by AI. They are failing on depth. In our survey, business leaders identified competitor moves as the biggest external pressure driving adoption. Many are motivated by the fear of falling behind rather than a clear, practical view of where AI creates value for their specific business model.
Closing the gap requires discipline, not just vision.
Building measurement targets and governance infrastructure enables teams to move faster with confidence. That means consistent ROI measurement across initiatives, feedback loops that inform where the next investment should go, and the courage to exit experiments that are not delivering. It also means starting where the evidence is easiest to build.
operations leaders say they need formalized AI strategy or governance to improve in the next six months. The organizations that move now are already pulling away. Planning to build a strategy is not the same as building one.
When AI strategy doesn’t connect vision to outcomes, the gap emerges in the distance between the two.
WORKFORCE
AI is speeding ahead.
The workforce isn’t ready.
RISK
Agentic AI is accelerating.
Most aren’t prepared for its failure.
Nearly three in four organizations are giving agentic AI access to their data and processes — piloting, scaling or running it in production. Just 20% have a tested AI incident response plan for when it fails. The few organizations that have built governance into how AI operates are able to scale with confidence. Others remain limited in how far they can apply it.
Most organizations are not yet permitting fully autonomous decision-making: Only 5% allow agents to execute high-stakes decisions without human review, and 60% limit agents to moderate-risk task automation. But even at those levels, governance infrastructure has not kept pace, and C-suite misalignment is a contributing factor.
More than half (54%) of COOs are concerned about regulatory and compliance uncertainty related to agentic AI, compared with just 20% of CIOs/CTOs. That gap in concern is itself a risk. When the people deploying the technology aren't worried about what the people running operations are worried about, control breaks down.
Tested AI incident response protocols are a critical governance tool.
The harder shift is structural. Governance needs to move from static policy to continuous oversight: monitoring agent behavior, detecting deviations and adjusting controls as systems evolve. Organizations that build that capability now will scale agentic AI without increasing their exposure.
The question is no longer whether your organization will experience an agentic AI failure. It is whether you will be able to explain it when you do. Most cannot — yet. The infrastructure already exists: nearly every organization has built these capabilities for cybersecurity. The elements translate directly to AI.
If autonomy outpaces scrutiny, AI agents can turn the gap into a chasm.
PERFORMANCE
Governance delivers performance.
The leaders reap the benefits.
The data is clear. Organizations that built governance first, prepared their workforce before demanding ROI, and had the discipline to stop what wasn't working are outperforming their peers across every measure.
Measurable benefits by integration stage
Source: Grant Thornton’s 2026 AI Impact Survey, n = 950
Note: These are not different types of organizations. They are the same organizations at different stages of the same journey — and the difference in outcomes is the cost of the proof gap. The leaders did not get there by accident. They built the infrastructure first.
Governance built early enables the confidence to scale. Every week it is deferred, the gap widens.
CLOSING THE GAP
Defensible AI delivers results
— but it takes discipline to build
The proof gap is an accountability problem. Boards approved investments without setting governance expectations. Leadership deployed AI without defining who owns the outcomes. Organizations scaled without building the infrastructure to prove any of it works.
The organizations closing the proof gap are not waiting for better technology, a regulatory mandate or an incident to force the issue. They are building governance now, and the gap between them and the rest is already measurable in revenue, efficiency and innovation. It will not close on its own.
Are you in the AI proof gap?
5 questions every executive must answer
If you answered “no” to any of these questions, you are in the AI proof gap.
Our report shows what organizations closing the gap are doing differently.
Download the full report
Get a deeper analysis of the AI proof gap and how leading organizations have bridged it
Methodology
Between February 23 and March 18, 2026, Grant Thornton conducted a survey of 950 business leaders across 10 industries. Respondents were drawn from senior leadership, including C-suite executives (CEOs, CFOs, COOs, CIOs/CTOs) and leaders reporting directly to the C-suite.
Respondents came from asset management (N=100), banking (N=50), construction/real estate (N=100), energy (N=100), insurance (N=100), manufacturing (N=100), media and entertainment (N=100), private equity fund leadership (N=100), services (N=100) and technology and telecommunications (N=100).
Functional representation came from operations (N=390), finance (N=313), IT (N=234) and CEO/managing partner (N=13).