Navigating the AI revolution

 

Insights on AI oversight for the board

 

Artificial intelligence (AI) is transforming business models and operations at a dizzying pace. At the recent National Association of Corporate Directors (NACD) Summit, Grant Thornton partnered with Michelle Lee, board director and CEO of AI consulting firm Obsidian Strategies, and Deborah Dunie, board director and CEO of DBD Insights, to offer observations to help corporate boards provide effective oversight as their organizations experiment with and implement AI.

 

The discussion covered culture, strategic alignment, risk management policies, talent management and more. The stage was set with audience survey questions, shared throughout this article, that demonstrated the topic’s relevance. The results revealed that few respondents had held discussions to align with management on AI, and that the majority lacked robust processes, management metrics, board training or board-level AI expertise.

 
 

The magnitude of AI’s impact

 
 

“There is an AI or ML opportunity for virtually every company and business function that has data. The board’s role is to oversee the strategy, manage the risks and balance the two, determining the right path for companies and stakeholders.”

Michelle Lee

CEO, Obsidian Strategies
 

Lee’s interdisciplinary insights stem from experience at Amazon Web Services, Google and the U.S. Patent and Trademark Office.

 

Laying a foundation for the discussion, Lee stated, “Artificial Intelligence (AI) is one of the most transformational technologies of our generation,” adding, “It’s not an overstatement to say the impact of AI is akin to the discovery of electricity.” Building on the analogy, she explained how, over time, we determined how to generate, store and transport electricity into businesses and homes, and how myriad applications followed that increased productivity and improved personal lives and businesses. Similarly, AI and machine learning (ML) are being used to create operational efficiency, enhance customer experience and drive revenue.

 

Lee explained, “There is an AI or ML opportunity for virtually every company and business function that has data. The board’s role is to oversee the strategy, manage the risks and balance the two and determine the right path for companies and stakeholders.” She cited two key findings from a 2023 study by OpenAI and the University of Pennsylvania that predicts the impact of GPT (generative pre-trained transformer) models on the U.S. labor market: Approximately 80% of the workforce could have at least 10% of their work tasks affected by GPTs, while around 19% of workers may see at least 50% of their tasks affected.

 

Grant Thornton Chief Strategy Officer Chris Smith shared how recent challenges such as machine learning, digital transformation, cybersecurity and the pandemic have helped prepare boards for AI. “These challenges re-awakened boards to threats and disruptions requiring immediate action,” he said. That experience prepared boards to address governance over both the strategy and the risk of AI.

 

AI is going to expose the importance of cultural alignment, according to Grant Thornton Chief Transformation Officer Enzo Santilli. He explained that, traditionally, the challenges boards faced predominantly affected only a few elements from the mix of people, process, technology, legal, management and ethics; rarely did a single challenge exert a significant impact across all of these facets simultaneously.

 

Emphasizing the unique nature of AI, Santilli remarked, “AI transcends the conventional boundaries of boardroom challenges. It’s not just about technology or the potential displacement of jobs. AI influences every single element under the board’s governance, and it does so profoundly.”

 


Use cases

 
 

As boards assessed the possibilities of how AI may affect their organizations, Lee drew upon her experiences advising companies and boards on AI strategy and the art of the possible and practical, sharing some compelling industry use cases. 

  • Pizza restaurant: An AI online ordering system was developed that predicted demand: which toppings, at which locations, at what times of day and in what quantities. Its predictions were so accurate that within three minutes of hitting send on an online order, a pizza would be ready for pickup, or it could be delivered to a home within a specified radius in 20 minutes.
  • Healthcare/life sciences: An AI system was developed to predict congestive heart failure 15 months before clinical manifestation.
  • Pharma: Accelerating drug development, AI was used to determine the efficacy of molecular structures. This data was combined with information on manufacturability and shelf-life. This enabled scientists to choose the most promising molecular structures for R&D investment, helping the company select the best drugs for patient access and profitability.
  • Retail: Companies are using AI predictive models to tell us what we want to buy before we know it, driving customer demand, improving supply chain management and giving companies previously unknowable insights into purchasing habits.
  • Marketing effectiveness: AI can be used to predict customer churn based on account-level activities and intervene to prevent it (a minimal sketch follows this list). AI can also be used to create customized, personalized emails with promotions and discounts based on past activities and tailored content.
  • Call center customer experience: Call centers are improving customer experience with chatbots trained on company knowledge bases and manuals, which support call center personnel. Customers receive faster access to accurate information delivered in the company’s desired tone.
  • Augment or automate operations and improve decision speed and accuracy: Google Maps and Amazon “suggestions” are AI-driven apps that achieve these goals, which we have all probably been using for years. It’s an example that demonstrates AI has not sneaked up on us — we just may not be aware we’ve been using it.
  • Create content with speed and efficiency: Law firms, for example, can use AI to search through volumes of case law to find the right precedent for the argument they’re making.
  • Increase worker productivity: Whether people are writing code, scheduling meetings, or even composing emails, AI can be used to help them accomplish more during a working day.
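
To make the marketing-effectiveness example concrete, here is a minimal sketch in Python of the general pattern: train a classifier on account-level activity features and score each account’s churn risk so the business can intervene. The feature names and synthetic data are illustrative assumptions, not details shared by the panel.

    # Hypothetical sketch: scoring churn risk from account-level activity
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1_000

    # Hypothetical account-level activity features
    logins_per_month = rng.poisson(8, n)
    support_tickets = rng.poisson(1, n)
    days_since_last_order = rng.integers(0, 120, n)

    X = np.column_stack([logins_per_month, support_tickets, days_since_last_order])
    # Synthetic label: accounts that have gone quiet are more likely to churn
    churned = (days_since_last_order > 60).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Scores near 1 flag accounts worth a retention offer or personal outreach
    churn_risk = model.predict_proba(X_test)[:, 1]
    print(f"Hold-out accuracy: {model.score(X_test, y_test):.2f}")

In practice, the feature set would come from the company’s own account data, and the intervention (an offer, a call, an email) would be triggered for the highest-risk accounts.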

Lee said that whether you sell pizza or widgets, there are some incredible customer experience opportunities, all driven by rich sets of data and machine learning. “Every department, every industry, every function where there’s data is a potential opportunity.”

 

 

Aligning AI and culture

 
 

“Cultural alignment is going to be important at a higher level with AI. With AI, the board must be a strategic asset to management."

Enzo Santilli

Chief Transformation Officer, Grant Thornton LLP
 

Santilli's experience also includes serving as a director on Grant Thornton's partnership board.

 

Just 12% of respondents to Grant Thornton’s NACD Summit audience survey indicated they’d had in-depth discussions with management on AI philosophy and cultural alignment, while 59% had touched on the topic only briefly and 27% had not yet had those conversations.

 

“Cultural alignment is going to be important at a higher level with AI,” Santilli said. “With AI, the board must be a strategic asset to management.” This includes the board’s duty to guide management strategically through AI’s complexities and opportunities, all while staying true to the organization’s culture and core values.

 

Lee discussed how board governance can drive responsible adoption of AI, considering a company’s values (cultural alignment), goals and shareholder value, as well as the impacts on employees and the community. “It’s going to fall on boards to navigate and manage this well, bringing to bear our judgment, experience, and moral compass. This is critical because the regulations aren’t coming anytime soon,” she said. “We have an incredibly important role, and a front-row seat to navigate AI.”

 

Looking at the cultural and human impacts of AI, Santilli said a lack of openness about how AI is being addressed will result in fear, skepticism and resistance among employees. He said strikes and work stoppages throughout history have been focused on the role of humans in the workforce of the future. The recent writers’ guild and actors’ guild strikes are examples. Hollywood talent is pushing back on studios’ use of generative AI to build new content using their previous work products (and in some cases, their images and likenesses).

 

Santilli said organizations will fare much better if boards encourage management to center its AI focus on the workforce and on making the organization successful and competitive in the marketplace. “Does our workforce know where we’re going?” Santilli said. “Can they place some degree of trust and feel they’re a part of the mission and what we’re doing?”

 
 
 

Risks, frameworks and acceptable use policies

 
 

“When we think about ethical implications, board members ask questions and management teams are loath to tell you the negative side. Encourage them to come in and tell you what could potentially go wrong.”

Deborah Dunie

CEO, DBD Insights LLC
 

Dunie’s technology background includes service as CTO of a defense department contractor and roles with the National Geospatial-Intelligence Agency.

 

Just 8% of audience member survey respondents had robust processes in place for verifying management’s approach to AI, and just 2% had metrics related to AI on management’s scorecard.

 

Director and technologist Deborah Dunie shared that, as a board member, “When you think about the things AI can add to create value, there’s much we can be doing, but the board and management need to be having open, honest conversations and articulating the risk in these technologies.” Dunie also emphasized that technology use must support the organization’s ultimate strategy, its growth strategy and the protection of its assets.

 

Once a direction on AI is established that’s consistent with the organization’s values and ethical principles, there are numerous implementation issues to consider. The first is choosing a risk management framework. The panelists recommended the National Institute of Standards and Technology (NIST) framework, which provides a flexible, seven-step process that can be used to manage numerous technology-related risks, though other frameworks can be used as well.

 

Santilli said that any framework applied to AI use will need to be flexible and iterative, and because the technology is developing quickly, it must allow for experimentation. The framework should position the board to ask:

  • How AI uses are going to be determined
  • How AI models will be trained
  • How management will align AI uses with organizational principles
  • How AI-related problems or ethical matters will be escalated to management and the board

Under the umbrella of that framework, Santilli said, “The board should challenge management to develop an acceptable use policy for AI that evolves and use it to create a burning platform for how to adjudicate risk and ethical issues over time.”

 

Dunie remarked, “When we think about ethical implications, board members ask questions and management teams are loath to tell you the negative side. Encourage them to come in and tell you what could potentially go wrong. Having the conversation about how you will address situations when they go wrong puts you in a better position to recover if it does go wrong or can help ensure it doesn’t happen from the get-go.”

 

She emphasized the importance of board members demanding answers from management on AI questions such as:

  • What could go wrong?
  • How do you turn off the AI in the event that something does go wrong?
  • What’s your backup plan if you turn it off?
  • How does your resilience model enable you to conduct business in a productive manner for however long it takes to re-achieve the efficiencies the AI models have offered?

“When you’re prepared, you are having the conversation about how you would address situations where things go wrong,” Dunie said. “Then you’re in a much more powerful position to recover.”

 

A historical precedent for hitting the stop button when things go astray was set after the Black Monday stock market crash of 1987. “Circuit breakers” were built into the financial system to stop trading when prices fall rapidly past a certain point. Santilli believes that similar measures can be built into AI systems, and he said the governance framework plays a key role in this.

 

“It’s to understand, where should we stop? Where does it get uncomfortable?” Santilli said. “And where are the places where we’re going to put that barbed wire fence around and not use AI?”
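
As one illustration of how that stop-button concept could translate into software, the sketch below pauses automated AI use when monitored metrics cross pre-agreed limits. The class name, thresholds and escalation message are hypothetical assumptions for illustration, not a mechanism prescribed by the panelists or by any framework.

    # Illustrative only: a software "circuit breaker" that halts automated AI
    # use when monitored metrics cross pre-agreed limits, echoing the market
    # circuit breakers introduced after 1987. All thresholds are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class AICircuitBreaker:
        max_error_rate: float = 0.05        # share of outputs flagged as wrong or harmful
        max_consecutive_failures: int = 3   # e.g., failed human spot checks in a row
        tripped: bool = False
        _failures: int = field(default=0, repr=False)

        def record(self, passed_review: bool, error_rate: float) -> None:
            """Update the breaker after each monitoring cycle."""
            self._failures = 0 if passed_review else self._failures + 1
            if error_rate > self.max_error_rate or self._failures >= self.max_consecutive_failures:
                self.tripped = True  # stop automated use until humans review and reset

        def allow_automation(self) -> bool:
            return not self.tripped

    breaker = AICircuitBreaker()
    breaker.record(passed_review=False, error_rate=0.08)
    if not breaker.allow_automation():
        print("AI automation paused; escalate per the acceptable use policy.")

The governance value is in agreeing on the limits and the escalation path in advance, so that pausing the system is a planned response rather than an improvised one.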

 
 

 

 

 

Intellectual property considerations and AI

 

Use of open-source AI and large language models may pose considerable risks to organizations because they may incorporate source material from throughout the internet without regard for whether it infringes on sources’ intellectual property rights, said Michelle Lee, who in addition to being a board director and AI executive led the U.S. Patent and Trademark Office.

 

Meanwhile, Lee added, companies that are using open-source AI and large language models to build their own intellectual property may find the resulting work ineligible for a patent, since a work product must be created by a human to qualify for patent protection.

 

Finally, if employees enter their own companies’ confidential, proprietary information into open-source AI, this data could be incorporated into the model and potentially used by others.

 
 

Implementation, data and horizons

 
 

Implementation of AI is not much different from that of any other digital transformation effort, Lee said. She advised board members to make sure that management takes the following actions:

  • Select appropriate use cases, making sure that there’s buy-in from the business-level leaders who will be using the technology.
  • Verify that it has the necessary data. “If there’s no data, there’s no AI opportunity,” Lee said.
  • Ensure that the culture permits experimentation and a time horizon that’s long enough to allow for success via AI, as solutions often don’t work well initially but get better over time with more data and training.
  • Acquire the right people to drive AI strategy and use. Talent is scarce and expensive, and the environment is competitive. “That probably means a combination of internal and external talent,” Lee said.
  • Gain a proper governance perspective and answer why AI is the right tool to solve a particular problem.

“If possible, to start, try to avoid AI applications that have implications on life, liberty, basic human rights, access to medical care, access to education, and the ability to earn a living,” Lee said.

 

Lee added that a data strategy cannot be overlooked in AI implementation, and it takes a long time to develop data that’s accurate and voluminous enough to enable AI. Lee advised board members to inquire about data horizons: the data currently available, the methods for its collection, and the data needs anticipated over one, three, five, or ten years.

 

“What can we do to begin gathering that data?” Lee said. “Because that data will enable your future AI and machine learning opportunities. And if you don’t gather it, you won’t have it.”

 

Also on the horizon is quantum computing’s significant enhancement of AI’s capabilities. Dunie explained that quantum computing is essentially processing power on steroids, which will take things to an entirely different level of computing. It will operate at a very fast rate and will be based on substantially larger data sets. When quantum comes online, data in transit is likely to be unprotected for some time, so encryption, data assurance and security in the processing engines that support AI will be needed.

 
 

 

 

 

Consider data’s source and quality

 

To enable AI, Deborah Dunie suggests that boards consider the origin and quality of their data.

 

“It’s OK to ask the folks that are aggregating or feeding data into these systems: where does it come from, how recent is it, how long have you had it — because some data is perishable — and how broad is the data,” Dunie said.

 

She also cautioned about the ability of bots to replicate, tweak and amplify data so that it appears authentic.

 

“Getting some security in the mix that allows you to have some comfort with the data is very important,” Dunie said.

 
 

Getting governance right

 
 

“In some ways the pandemic jolted us into how fast we have to act when we don’t know the answer, and that’s a precursor to what we are dealing with in AI.”

Chris Smith

Chief Strategy Officer, Grant Thornton LLP
 

Smith has served as a director on Grant Thornton's partnership board and on other external boards.

 

More than two-thirds (69%) of survey respondents had not undergone AI-specific upskilling, and 95% reported that some board members have only peripheral knowledge or no AI expertise.

 

Smith said the same governance principles that have helped boards handle cybersecurity and other technological risks will help them address AI. However, the time frame is different, as generative AI development has been accelerating rapidly over the past year. “In some ways, the pandemic jolted us into how fast we have to act when we don’t know the answer, and that’s a precursor to what we are dealing with in AI,” he said. AI may require more frequent attention from boards, similar to the timely feedback boards provided during the pandemic. “If we’re still meeting quarterly, does that make sense?” Smith said. “There are so many opportunities for governance to evolve because we’re being challenged from a speed perspective.”

 

Lee also urged organizations to move faster. She is often asked if organizations that are reluctant to be early adopters of AI will be OK if they are “fast followers” instead.

 

The answer is no.

 

“Those who move ahead with implementation achieve the data advantage,” Lee said. “They create a moat, and it becomes bigger and bigger, and it becomes harder to catch up. It’s not too late to get started now, but I personally would not be a fast follower.”

 

Dunie addressed the issue of board expertise. Directors don’t need to work at the coding level that enables AI to perform the tasks that give their organizations opportunities to operate more efficiently. However, it is important for board members to be knowledgeable and understand how AI fits into the organization’s priorities and business model.

 

“They don’t need to be technology experts, but they need to be good board members,” Dunie said. “Think about the diversity of your committees and what needs to happen there to bring in the right folks.” She encouraged the directors to get a little out of their comfort zone when they think about the future of AI, ML and big data.

 

Lee added that board members have the skills to help their organizations negotiate these difficult questions. She said board members have always been inherently curious lifelong learners, and encouraged them to continue educating themselves on the critically important issue of AI governance.

 

As a strategic asset, boards can guide and encourage the management team through probing questions. When deeper knowledge is needed, directors can have outside experts come in to advise them. And they can rely on their own experiences to effectively govern AI. “Apply your good judgment, common sense, human values and wisdom,” Lee said.

 


 
 

Contacts:

 
 
Enzo Santilli

Enzo Santilli is Grant Thornton’s chief transformation officer and a member of the firm’s Senior Leadership Team.

Pittsburgh, Pennsylvania

 
Chris Smith

Chris Smith is Grant Thornton’s chief strategy officer and a member of the firm’s Senior Leadership Team.

Charlotte, North Carolina

 
 
 
 
