
Seven AI questions used by leading boards

 

Boards are feeling the pull of AI from every direction at once. Strategy, risk, compliance, talent and technology are converging in ways that stretch traditional governance models. The challenge is maintaining clear accountability and confidence without impeding execution or slowing the organization’s momentum.

 

Boards stay effective when they anchor oversight in process, outcomes and learning rather than technical detail. Strong governance shows up in clear decision paths, early and frequent collaboration between the business and risk functions, and fast pilots that inform policy rather than waiting for academic perfection. Employee adoption grows when guidance is practical and accessible, and AI investments make sense when they are focused on specific business outcomes. Recognizing these lessons from recent years helps boards guide AI with confidence while supporting speed, accountability and long-term capability. 

 

1. How do we oversee AI without drifting into operational detail? Boards across industries are wrestling with how to maintain proper oversight while avoiding the temptation to drift into execution. AI creates this tension because it affects strategy, risk, compliance, talent and technology simultaneously. When a board cannot see how decisions are made, risks are considered, or documentation is handled, the gravitational pull toward management’s territory increases significantly.

 

An approach that’s more aligned with the board’s responsibilities is to focus on whether management has a clear and functioning governance process. Walking through how one real AI decision moved from idea to pilot and ultimately to approval provides the board with the right level of visibility. Once the mechanics of ownership, decision logic and documentation are clear, boards can keep their oversight role sharp without stepping into day-to-day decisions.

 

2. How do we put guardrails in place without slowing down the work? Many boards hear that teams want to use AI but often run into bottlenecks created by the risk management function. Boards can help solve this problem by encouraging compliance personnel to maintain controls while shifting toward detective controls where preventive controls might stymie progress. Such guidance acknowledges the importance of internal controls while also signaling to these groups that they need to work with the business to find a path from “no” to “yes.”

 

The tension between the business and compliance usually stems from prospective AI users engaging risk and compliance too late in the AI development process, which causes these teams to take a cautious stance. When that happens, employees get frustrated while seeking approval and may look for easier, unapproved alternatives.

 

Boards can help by encouraging the business to involve risk, legal and compliance partners early in the AI design and development process. When these functions work together with the business to shape the boundaries upfront, they are better positioned to guide teams toward an informed green light. Asking management how early these groups engage and how clearly the rules are communicated helps ensure that guardrails don’t become barriers.

 

3. How do we learn quickly from AI pilots instead of waiting for perfect conditions? Boards recognize that organizations often slow their own progress by trying to achieve perfect data, processes or clarity before experimenting with AI. Waiting for perfect readiness delays learning, encourages overly cautious behavior, frustrates users who may be highly influential in the adoption cycle and hides valuable early insights that could strengthen governance.

 

A healthier approach is to support small, focused pilots that move quickly and aim to uncover lessons, not perfection. What matters most — and what should be unambiguously encouraged by boards — is how those lessons influence policy, process and risk management going forward. When management can point to specific changes informed by pilot activity, boards gain confidence that the organization is adapting at the pace AI demands — and no faster than the organization can manage.

 

 


 

4. How do we understand our data well enough to feel comfortable with AI? Boards know that AI success relies heavily on high-quality input data, yet many organizations still operate with fragmented systems or legacy environments. The concern is less about having perfect data and more about knowing where sensitive information resides, who is responsible for it and how it is protected. Without this clarity, AI oversight feels uncertain and risk discussions feel incomplete.

 

Boards can ask management for a simple, non-technical view of the data landscape. This includes what data is considered sensitive, what data can be used safely in AI models, what data must be restricted and how the organization monitors these various data categories within its legacy environments. With this foundation, boards can have clearer conversations about risk and readiness.

 

Meanwhile, boards and management alike need to understand that the entire inventory of the organization’s data does not need to be perfect to support AI use. It’s acceptable to optimize only the data needed to support a particular AI use case instead of fixing a broader data trove or embarking on an enterprise-wide data-cleansing initiative.

 

5. How do we encourage employees to use AI safely without creating shadow AI? Many boards are grappling with how to support employee use of AI tools while preventing unsafe workarounds. When approved tools are slow, confusing or overly restricted, employees often turn to outside solutions. This behavior stems from a desire to be productive rather than any sinister motive, but it’s a phenomenon fraught with risk.

 

Boards can support a more constructive approach by encouraging management to provide clear, positive guidance and to develop effective policy, training and tool adoption. Employees need a clear picture of what is acceptable, assurance that AI adoption is encouraged, and a straightforward way to bring forward ideas for organizational approval. Employees naturally gravitate toward sanctioned tools instead of shadow options when organizations create:

  • Accessible prompt libraries
  • Easy access to training and tool-specific resources
  • Internal channels for sharing improvements
  • Simple rewards for safe innovation
 

6. How do we make sense of many AI use cases and understand which ones truly matter? Boards often receive AI proposals labeled as efficiency moves or transformative efforts. These labels can oversimplify what is really at stake. The real objective in evaluating a particular proposal is to understand the purpose of each initiative, how it connects to strategy, and what risks or dependencies accompany it. Without this clarity, it becomes difficult to prioritize or compare AI investments.

 

A clearer path is to ask management to describe each use case in terms of the business outcome it intends to create. Speed, cost, decision quality, customer experience and workflow simplification are easier to evaluate than more technical classifications. This shifts the board’s role toward validating outcomes and risk appetite, rather than sorting through jargon or untangling why a byzantine business process operates the way it does.

 

 

7. How do we prepare for the workforce and leadership shifts AI will bring? Boards are increasingly aware that AI will reshape roles, skills and career paths. As AI automates some early-career tasks, companies risk depriving employees of the experiences that develop future leaders. At the same time, new responsibilities are emerging, such as validating AI outputs or coordinating AI agents. This creates unusual pressure on both talent planning and long-term capability building.

 

Boards can support readiness by asking for a forward look at how management envisions that roles and skills will change over the next few years. This includes plans for training, reskilling and managing quality review when AI is involved. It also includes protecting meaningful early-career development. Approaching talent this way helps ensure the organization has the capability and judgment needed as AI becomes more integrated into core work functions. It also signals to the workforce that management has a long-term horizon in mind that does not include replacing their jobs indiscriminately with technology.

 
 


Content disclaimer

This content provides information and comments on current issues and developments from Grant Thornton Advisors LLC and Grant Thornton LLP. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC and Grant Thornton LLP. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.

For additional information on topics covered in this content, contact a Grant Thornton professional.

Grant Thornton LLP and Grant Thornton Advisors LLC (and their respective subsidiary entities) practice as an alternative practice structure in accordance with the AICPA Code of Professional Conduct and applicable law, regulations and professional standards. Grant Thornton LLP is a licensed independent CPA firm that provides attest services to its clients, and Grant Thornton Advisors LLC and its subsidiary entities provide tax and business consulting services to their clients. Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.

 

 


 
