Internal audit can be fundamental for AI success

 

Organizations are moving quickly to adopt artificial intelligence (AI) solutions … but who is controlling the risks?

 


“We're at a point in history where it is now easy to access open-source AI algorithms and models — and the performance of these models is reaching beyond what humans can accurately do.”

Kaitlyn Ramirez

Grant Thornton Growth Advisory Senior Manager

“AI has been around for a while. It’s not new, but it has evolved significantly in recent years,” noted Grant Thornton Risk Advisory Managing Director Vikrant Rai. Now, organizations are integrating the latest capabilities into their processes. “We're at a point in history where it is now easy to access open-source AI algorithms and models — and the performance of these models is reaching beyond what humans can accurately do,” added Grant Thornton Growth Advisory Senior Manager Kaitlyn Ramirez.

 

“So, it’s a pretty exciting place to be,” Ramirez said. “But, as internal auditors, you need to be very attentive to your organization’s data collection, consent for data uses, security, privacy, data accuracy, bias, compliance with regulations and more.”

 

To address new AI capabilities, internal audit teams might need to reframe their roles in the organization. The Artificial Intelligence Audit Framework from the Institute of Internal Auditors defines internal audit's role in AI as helping an organization evaluate, understand, and communicate the extent to which artificial intelligence will have an effect, negative or positive, on the organization's ability to create value. “I think that, as auditors, we often focus on risk mitigation and protecting assets. Those are really valuable things, but I think our role is really about the organization's ability to create value for its shareholders, employees and customers,” said Grant Thornton Risk Advisory Partner Scott Peyton. “We need to go back to the organization’s strategy, and the overall mission of AI, and then make sure that we follow that thread all the way through our audits.”

 

In a recent Grant Thornton webinar about internal audit and AI risks, more than a thousand attendees weighed in on where internal audit teams will focus their attention on AI.

Internal audit is uniquely positioned to help institutions harness the benefits of AI while ensuring the associated risks are known and adequately mitigated, and it starts with knowing where to look.

Where are you using AI?

 

“A common question we hear is, ‘How do I determine where AI is being used in my organization?’” Ramirez said.

 

“From an internal audit perspective, you want to pay attention to both where AI technology is being used now, and also where it's being developed or planned for more use,” Ramirez said. “Figure out where AI is being used by reviewing how the enterprise’s cloud or hybrid cloud systems are being used. These environments are key to facilitating AI development, and building a quick understanding of how they are enabling AI use cases is an efficient route to an inventory that you can assess risks against. This understanding may also reveal use of shadow IT, where AI use cases are being developed outside of enterprise standards.”
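One practical starting point for that inventory, offered as a minimal sketch rather than a definitive method, is to scan a cloud billing or usage export for service names that suggest AI workloads and group them by the team consuming them. The column names, keyword list and sample rows below are illustrative assumptions about a generic export, not any specific provider's schema.

```python
import csv
import io
from collections import defaultdict

# Service-name keywords that suggest AI/ML workloads. Both this list and
# the column names below are illustrative assumptions; adapt them to your
# cloud provider's actual billing/usage export.
AI_KEYWORDS = ("sagemaker", "bedrock", "vertex", "openai", "machine learning")

def inventory_ai_usage(rows):
    """Group services that look AI-related by the cost center using them."""
    usage = defaultdict(set)
    for row in rows:
        if any(k in row["service"].lower() for k in AI_KEYWORDS):
            usage[row["cost_center"]].add(row["service"])
    return usage

# Tiny hypothetical export; in practice, feed in your provider's CSV file.
sample = io.StringIO(
    "service,cost_center\n"
    "Amazon SageMaker,marketing\n"
    "Amazon S3,marketing\n"
    "Vertex AI,claims-ops\n"
)
for unit, services in inventory_ai_usage(csv.DictReader(sample)).items():
    print(unit, sorted(services))
```

Comparing a list like this against the IT-approved application inventory is one way to surface the shadow IT that Ramirez describes.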

 

With the growing interest in generative AI, a simple way for internal auditors to locate uses is to consider which applications are getting a productivity boost from algorithm-enabled text, code, image, speech and video capabilities.

Image description: Various AI applications, organized into categories. The AI text category includes marketing, sales, support, general writing, note-taking and other applications. The AI code category includes code generation, code documentation, text-to-SQL and web app builder applications. The AI image category includes image generation, consumer, social, media, advertising and design applications. The AI speech category includes voice synthesis applications. The AI video category includes video editing and generation applications. The AI 3D category includes 3D modeling and scene applications. Finally, the AI other category includes gaming, RPA, music, audio, biology and chemistry applications.

“Right now, we see clients adopting generative AI on the left side of this spectrum, because foundation models are very useful for language, advancing text capabilities, and can be adapted to a range of different, more specific purposes,” Ramirez said.

 

Once you identify the AI solutions in your organization, you need to understand how people are using those solutions. Even packaged AI solutions with pre-trained models can have complex interactions with your business processes.

How are you using AI?

 

Internal audit teams need to help organizations understand where and how AI solutions are being used, so that they can ensure security, compliance and quality. To do that, internal audit teams need to act early.

 

“One great way to look at this is to think of front-office and back-office applications. In the front office, we're seeing a lot of adoption to drive down operating costs,” Ramirez said. “We're seeing an accelerated growth trend for using AI chatbots to drive down the time, resources and costs to get your customers to use your products. In the back office, we're seeing high adoption of AI for knowledge access and management, where AI is understanding language to read a whole corpus of information from industry research, or internal knowledge from ERPs and other systems.”

 

“A lot of AI solutions use a black-box environment,” Rai said, “meaning you basically don't understand the inner workings of the box. Most organizations are at that early stage where we can still understand the inner workings to an extent. However, as these programs evolve and mature, the ‘black box factor’ becomes harder to understand.”

What are you feeding AI?

 

When internal audit knows where an organization is using AI, and how, it must then look at the data architecture and data quality. Organizations increasingly rely on data to make impactful decisions, and those decisions are only as sound as the underlying data quality, data models and training models. In addition, cybersecurity is a top consideration, both for security and privacy and for how models are formed.

 

You need to understand AI solutions early on, to see their functionality and also help form it. Many AI solutions use algorithms that learn and change over time, meaning that they begin with an initial training state and then evolve as they take in new data and feedback.

 


“An AI solution is almost like an infant that needs to be trained, with input and feedback … The way you involve training, as part of the entire AI modeling process, is going to be critical.”

Vikrant Rai

Grant Thornton Risk Advisory Managing Director 

Data feeds into that learning process. “An AI solution is almost like an infant that needs to be trained, with input and feedback,” Rai said. “So, your focus should be on what data sets are used for training as well as data protection, data quality and everything that relates to managing data lifecycle — which includes collection, use, storage and destruction. The way you involve training, as part of the entire AI modeling process, is going to be critical.”

 

“AI technology can be a great tool to help you search and query everything you have but, from an internal audit perspective, you need to make sure the technology is used in a proper, compliant way,” Ramirez said.

 

Ramirez said that she considers three factors when evaluating the data being fed into an AI solution: 

  • Data diversity: Assess the diversity of your data sources and the growing volumes of data. For internal auditors, that means you have a wide range of unstructured and structured data sources that you need to manage for quality governance.
  • Data integration: Consider how AI solutions are integrated into other solutions. Your team is probably not developing models and algorithms to stand alone. They’re integrating these technologies into other systems. “A metaphorical spiderweb can emerge where these technologies are interwoven into other applications to drive business value,” Ramirez said.
  • Data interference: When you use large language models (LLMs), for example, there’s a risk of those models fabricating information that does not directly correspond to the provided input data. “It’s what you’ll hear about in the media as ‘hallucination’ — the model generates text that is incorrect, nonsensical, or not real,” Ramirez said. (See the sketch after this list for a simple automated screen.)
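One low-cost screen for the interference risk is to check whether the specific figures in a model's answer actually appear in the source material it was given. The sketch below is a minimal, illustrative example of that idea; the function name and sample data are hypothetical, and it catches only fabricated numbers, not every form of hallucination.

```python
import re

NUM = re.compile(r"\d+(?:[.,]\d+)*")  # matches integers and decimals

def ungrounded_figures(source_text: str, model_answer: str) -> list[str]:
    """Return numbers cited in the model's answer that never appear in
    the source material it was given: a crude hallucination screen."""
    source_numbers = set(NUM.findall(source_text))
    return [n for n in NUM.findall(model_answer) if n not in source_numbers]

# Hypothetical example: the answer invents a revenue figure.
source = "Q3 revenue was $4.2M, up from $3.9M in Q2."
answer = "Revenue grew from $3.9M to $4.6M in the third quarter."
print(ungrounded_figures(source, answer))  # ['4.6']
```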

 

Every industry uses data differently, but every internal audit team needs to watch for inappropriate bias in that data. “It’s important for internal auditors to have an understanding of how discrimination plays into and is used in algorithms and model development,” Ramirez said.

 

From a cybersecurity perspective, attackers can introduce malicious data into training data (also referred to as “model poisoning”) to influence the output. If inappropriate biases are present in your data, they can result in a significant deviation of behavior. “When you consider that biases are reprocessed as part of an algorithm, it is important to make sure that the quality of data and training models is high. While bias is an inherent part of the outcome, we must ensure that there are sufficient processes and controls to rule out negative bias from the outcome,” Rai said.
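Automated tripwires in the data pipeline can make this oversight concrete. The sketch below is one illustrative check, not a complete defense against poisoning (the names, data and tolerance are assumptions): it flags a training batch whose label mix drifts sharply from a trusted baseline.

```python
from collections import Counter

def label_drift(baseline_labels, batch_labels, tolerance=0.05):
    """Compare label proportions in a new training batch against a
    trusted baseline; flag classes that shift beyond the tolerance."""
    base, new = Counter(baseline_labels), Counter(batch_labels)
    n_base, n_new = sum(base.values()), sum(new.values())
    flagged = {}
    for label in set(base) | set(new):
        shift = new[label] / n_new - base[label] / n_base
        if abs(shift) > tolerance:
            flagged[label] = round(shift, 3)
    return flagged

# Hypothetical example: a batch suddenly dominated by "approve" labels.
baseline = ["approve"] * 50 + ["deny"] * 50
batch = ["approve"] * 80 + ["deny"] * 20
print(label_drift(baseline, batch))  # {'approve': 0.3, 'deny': -0.3}
```

A drift alert is only a prompt for review; benign shifts in the business can move these numbers, too.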

When are there risks in AI?

 

“AI, especially generative AI, provides significant opportunity for breakthroughs in medicine, dramatic efficiency gains in business, and development of innovative products. As organizations move to embrace AI and its benefits, the internal audit team needs to help the organization identify and address the associated risks before they’re realized,” Peyton said.

 

There are many risks associated with AI, including some specific to individual industries. To evaluate AI risks, internal audit can focus a program on primary risk domains.

Image description: The AI risk domains, and the risks that can arise.
In the domain of governance and strategy, the risk is misguided and inconsistent adoption.
In the domain of ethical use and bias, the risk is inherent bias and lack of ethical balance.
In the domain of data privacy and security, the risk is data requirements and expectations breached.
In the domain of evolving regulations and compliance, the risk is adoption misaligned with emerging regulation.
In the domain of reliability and accountability, the risk is overreliance on AI systems.
In the domain of third-party reliance, the risk is third-party use of AI.

These program-level risks are critical to consider early in an organization’s evaluation and adoption of AI. They are often overlooked as individual business units or technical teams deploy AI within their own “four walls” to solve individual use cases, giving little consideration to enterprise domains like governance, ethics, data privacy and regulatory compliance.

 

As organizations mature their use of AI, they often start developing their own AI solutions or applications that sit on top of open-source AI platforms. This development cycle can create even deeper areas of risk.  “As the reliance on AI becomes more significant,” Peyton said, “internal audit teams need to consider technical design, socio-technical characteristics and the guiding principles that build trust.” 

Image description: Internal audit needs to examine risks in three areas.
In technical design, auditors need to examine accuracy, reliability, robustness, resilience and security.
In socio-technical characteristics, auditors need to examine explainability, interpretability, privacy, safety and managing bias.
In guiding principles contributing to trustworthiness, auditors need to examine fairness, accountability and transparency.

Clarity, traceability and explainability are likely to be important values for every user and stakeholder in an AI solution. And, ultimately, you need to be ready to explain your AI solutions to the agencies or organizations who regulate AI solutions.

Who will regulate AI?

 

In the U.S., the regulatory responsibilities and requirements for AI solutions are still evolving, but we can already see some core themes.

  • Regulators continue to work on specific changes and to send signals about them.
  • Expansions of existing data privacy and protection regulations are likely candidates for near-term AI regulation.
  • Consumer harms will continue to be a focus area.
  • Enterprises will invest more in AI and require more risk mitigation strategies.

 

“From the internal audit perspective, you need to know that ‘complex algorithms’ is not a legal defense,” Ramirez said. “That is plainly what regulators around the globe are saying, and it is not an excuse for not knowing how your algorithms work.”

 
Image description: Since 2016, countries have passed 123 AI-related bills. One important regulation, the General Data Protection Regulation, has four key principles: accountability, fairness, data minimization and transparency. In general, companies need to be ready to explain three things: their AI governance and other oversight, how their AI reaches decisions, and how they audit the algorithms themselves.

What else can we expect as U.S. regulations mature? We can take some cues from Europe’s General Data Protection Regulation (GDPR). “First and foremost, we're seeing regulators reference four principles from the GDPR: accountability, fairness, data minimization and security,” Ramirez said. “Those four aspects of the GDPR tie to what regulators see as risks around AI.”

 

“Regulators are thinking about harm to customers,” Ramirez said. “If you can apply that lens to how you think about controls and guardrails for AI, you'll be in a good position. With so much to analyze, it’s important for internal auditors to set a scope that allows business innovation while also ensuring compliance.”

Why are you using AI?

 

It’s easy for internal audit to get stuck in the details of an AI solution, but auditors must first consider how the solution connects to the organization’s overarching goals.

 


“As we think about how we approach auditing AI, it's important to start with the mission of AI within each organization.”

Scott Peyton

Grant Thornton Risk Advisory Partner

“As we think about how we approach auditing AI, it's important to start with the mission of AI within each organization,” Peyton said.

 

“There are some really fantastic things being done, in many industries. However, there are also some downstream consequences that aren't as good,” Peyton said. “As auditors, it's important for us to think about that, talk to those that are on the forefront of what the organization does, and ask the question, ‘If we are using AI, how are we using it to help advance our objectives?’”

 

This foundational question sets an important context for internal audit’s work. “That's going to give us the basis for our audits, points of view, risk assessments, use cases, unintended consequences, and the list goes on,” Peyton said. “That's where the audit committee and board really need to ground themselves: in the mission and strategy of AI, and how it can drive an organization forward.”

 

To get the most value out of an AI internal audit, you need to adapt the audit to the current maturity of the organization's AI adoption. To do that, perform an AI maturity assessment that includes these elements:

  • Gain an understanding of the full AI landscape throughout the organization
  • Perform a high-level AI risk assessment
  • Evaluate the organization’s governance structures, policies and procedures
  • Assess data security and privacy processes and requirements
  • Evaluate the alignment of the organization’s AI deployments with its ethical guidelines and values

 

Your assessment should also consider the foundational programs on which AI solutions depend. Examples include enterprise risk management (ERM), ethics and code of conduct, data governance, SDLC methodology and processes, and third-party risk management. Open findings and known limitations of these programs should be taken into account in the AI risk assessment.

 

After an initial AI maturity assessment, internal audit can move to a “deeper dive” that considers more technical aspects of AI algorithms and development. This deeper dive can take advantage of the Artificial Intelligence Risk Management Framework (AI RMF 1.0) recently issued by the National Institute of Standards and Technology (NIST). Three of its core functions frame what internal audit should be ready to demonstrate:

  • Govern: Be ready to demonstrate your governance, oversight and accountability of operating practices.
  • Measure: Be ready to explain how algorithm-driven decisions were reached — how you're getting outcomes, your probabilities, and your insights from your models. (A decision log, like the sketch after this list, supports this kind of explanation.)
  • Manage: Be ready to show that your company can audit the algorithms themselves, and that you have self-imposed guardrails.
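To make the “Measure” point concrete, here is a minimal, hypothetical sketch of the kind of decision-logging helper an audit team might ask for; it is not part of the NIST framework. Each model decision is appended to a log with a hash of its inputs, so reviewers can later reconstruct how outcomes were reached and verify that records were not altered.

```python
import hashlib
import json
import time

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict,
                 path: str = "decision_log.jsonl") -> None:
    """Append one model decision to an audit log. The input hash lets
    reviewers later verify that logged records were not tampered with."""
    record = {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example: record one scoring decision for later review.
log_decision("credit_scoring_model", "1.4.2",
             {"income": 52000, "tenure_months": 18},
             {"score": 0.73, "decision": "approve"})
```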

 

It’s important for internal audit teams to fully understand how and where other teams are already using AI — and even how the internal audit team can use AI. “One of the questions I’ve seen is, ‘Is AI effectively going to take away all the internal audit or accounting positions?’” Peyton said. “I think the short answer is ‘No.’ I think, more so, what we see is that the people who use AI as part of their jobs will take jobs from those who do not. Those who embrace AI to save research time and double-check their assumptions will make their work more efficient and effective.”

 

“I think AI has a lot of opportunity, both in the internal audit profession and more broadly, to sharpen our focus and time on those things where we add value — assessing risk, making decisions,” Peyton said. “What's really incumbent on us, as internal audit professionals, is to understand what the strategies are, what the technologies are, and how those are being developed in a well-controlled manner within the organization.”
