
Pick the right AI path for your business

 

Across industries, businesses are considering how they can apply AI — and many have already begun moving forward.

 

Before adopting the next AI solution, businesses should examine proposed use cases against their business model, customers, data and risk profile. Companies commonly find many ways to drive business value with AI. Today, most AI use cases can be grouped into three broad categories of opportunity:

  • Efficiency and analysis
  • Personalization
  • Idea generation
 

Efficiency and analysis

 

Many businesses rely on manual processes that consume time. When those processes become more efficient, the benefits can be realized quickly.

“Efficiency improvements typically involve the acceleration of manual and/or repetitive tasks,” said Grant Thornton Risk Advisory Services Principal Johnny Lee. “Opportunities for efficiency are what first attract most companies to a proof of concept for AI — and those opportunities exist across all industries.”

Operational processes such as reporting, compliance, billing, documentation or administrative workflows often involve repetitive manual tasks that consume significant back-office resources. These activities can divert time away from higher-value work, and they can be rife with errors, given typically low levels of automation.

Across many industries, AI can improve efficiency in areas such as:

  • Researching relevant regulations, standards or historical data
  • Analyzing large volumes of emails or digital information, summarizing insights or identifying items of interest
  • Reviewing contracts, proposals or documents received from partners and vendors
  • Evaluating designs, plans or technical specifications against parameters and constraints to suggest improvements
  • Responding to inbound proposals or coordinating responses to RFPs
  • Modeling systems or infrastructure to identify performance issues or energy efficiencies
  • Managing projects dynamically by adjusting timelines, dependencies, resources and budgets as conditions change

“Properly applied, GenAI is good at helping you understand the data you already have,” Lee said. That capability can provide an advantage not only in analyzing customer data but also in improving operations for a business. 

 

Personalization

 

AI also can be used to personalize experiences, communications and services for specific audiences. AI-driven personalization can enhance a company’s marketing, customer engagement or service delivery by:

  • Analyzing a current customer base to refine segmentation, identifying customer types, patterns and preferences that can inform interactions.
  • Anticipating customer needs and providing proactive communications or services as those needs arise.
  • Engaging customers through targeted channels, with campaigns tailored to reach them where they are — potentially using highly tailored advertising and/or chatbots to answer questions or handle routine requests at any time.

“I think the ability to identify the entire population of an existing customer base or potential customer base, and then to segment that base, considering interests and patterns of behavior — that is one very attractive promise of AI,” Lee said.

 

AI tools also can reveal insights that businesses may not have considered. For example, analytics might show that most customers come from a small set of geographic regions or share similar demographic characteristics.
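The segmentation idea described above — grouping customers by shared behavior so each group can be addressed distinctly — can be sketched with a plain k-means clustering loop. The customer data, the two starting centroids and the feature choice (annual spend, store visits) are all illustrative assumptions, not a prescribed method.

```python
def kmeans(points, centroids, iterations=10):
    """Plain k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2
                                  for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# (annual spend in $k, store visits per year) -- illustrative data only
customers = [(2, 4), (3, 5), (2, 6), (20, 40), (22, 38), (21, 42)]
centroids, clusters = kmeans(customers, centroids=[(0, 0), (25, 45)])
print(centroids)  # two segment profiles: occasional vs. high-engagement
```

Production segmentation would use far richer features and a vetted library, but the principle is the same: let the data reveal the groups rather than assuming them.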

 

“In addition to personalization, GenAI can be used to identify new customer populations to consider, from broader demographic data,” Lee said. These insights can help tailor promotional strategies and improve the ROI on marketing efforts, where less reliable benchmarks and sampling methodologies can sometimes lead to data of questionable quality or utility. Properly applied, GenAI can analyze patterns at scale in ways that are far more mathematically robust than the traditional sampling methods used in marketing and advertising.

 

By analyzing a broader and more varied set of information about their operations and customers, businesses can move toward a new level of AI-driven insight — idea generation.

 


 

Idea generation

 

“Idea generation, which many strategists call ‘ideation,’ involves creating new outputs based on ostensibly useful hypotheses,” Lee said. This activity often involves developing scenarios using internal data in combination with broader market information, including:

  • Measuring and optimizing internal functions by analyzing data across marketing, HR, operations and other functions
  • Analyzing broader market conditions to model the potential impact of expanding into new markets, acquiring other organizations or launching new offerings that address unmet customer needs
  • Examining financial and operational data to identify trends, predict outcomes and model potential returns on new products or services
  • Designing, testing and managing subscription-based offerings or other alternative service models

These insights often begin with broad strategic questions, such as: “What offerings are adjacent to our current capabilities, and how could we bundle them — perhaps via the acquisition of another organization to strengthen that offering?”

 

“Idea generation is one very powerful promise of AI,” Lee said. In particular, GenAI offers a dynamic platform for exploring ideas through a wider market aperture — but only if organizations understand how to use it effectively. “In GenAI, it all comes down to the quality of the source data, the chosen language model and the quality of the agent design and/or prompts used.”

 
 

Understand the risks

 

Each category of AI use cases carries unique risks. Managing these requires a structured approach. There are many AI risk-management frameworks available, and businesses can choose one that best fits their regulatory environment, organizational model and culture.

 

“There are over 1,000 initiatives globally focused on AI risk-management approaches, though only a handful are gaining real traction in terms of adoption and use,” Lee said. The number and nature of these frameworks and standards continue to grow as regulatory requirements evolve around the world.

 

Most frameworks address similar categories of risk, which typically fall into three areas:

  • Design
  • Consistency
  • Trust
 

Design

 

Technical design characteristics are primarily controlled by system engineers and developers. One of the most common risk domains is accuracy — a factor frequently cited when discussing AI failures, Lee said.

 

Other important considerations include cybersecurity and data privacy. Businesses must ensure that any AI solution accessing their data manages these risks in a way that satisfies regulatory and reporting requirements.

 

Put differently, AI adoption must include a cybersecurity strategy that specifically addresses the risks introduced by the AI systems themselves — including the security of any source data being accessed and monitoring to ensure that security controls remain effective over time.

 

Consistency

 

Businesses require reliable and predictable results, but GenAI systems dynamically generate answers based on the information available to the model(s) in use at that time. As a result, outputs may change as a model encounters new information.

 

“All GenAI models, large or small, experience the concept of ‘drift,’” Lee said. Model drift refers to subtle changes in output over time. For example, a prompt entered on Monday may produce a different answer on Friday. Without human oversight, that variability can produce inconsistent results, and that inconsistency could create real business risk.

 

While such variation might be of nominal interest in casual use, organizations depend on reliable outputs. Businesses need to know that the same question will elicit the same answer, unless and until that answer needs to change.
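The oversight described here — knowing when the same question stops eliciting the same answer — can be operationalized with a simple drift monitor that fingerprints each recurring prompt's baseline answer and flags deviations for human review. The class name and normalization rules below are illustrative assumptions, not a standard tool.

```python
import hashlib

class DriftMonitor:
    """Flag when a model's answer to a recurring prompt changes between runs.

    Stores a fingerprint of the first (baseline) answer per prompt; any
    later answer with a different fingerprint is reported for review.
    """
    def __init__(self):
        self.baselines = {}

    def fingerprint(self, text):
        # Normalize case and whitespace so trivial formatting changes
        # don't trigger a false alarm
        canonical = " ".join(text.lower().split())
        return hashlib.sha256(canonical.encode()).hexdigest()

    def check(self, prompt, answer):
        fp = self.fingerprint(answer)
        baseline = self.baselines.setdefault(prompt, fp)
        return fp == baseline  # False => drift: route to a human reviewer

monitor = DriftMonitor()
monitor.check("What is our refund window?", "30 days from delivery.")          # baseline
print(monitor.check("What is our refund window?", "30 days from delivery."))   # True
print(monitor.check("What is our refund window?", "14 days from delivery."))   # False
```

Exact-match fingerprints are the bluntest possible test; in practice, teams might compare semantic similarity instead, since GenAI can phrase the same correct answer differently.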

 

Reliability factors into resilience, security, transparency and other business-critical requirements. Some malicious actors even exploit model drift through “model poisoning,” deliberately attempting to corrupt models with harmful prompts designed to produce incorrect outputs. These risks can erode trust in AI systems if they are not carefully managed.

 

Trust

 

The third category of risk relates to trustworthiness. Lee said business leaders should ask, “Is this fair? Are we unintentionally reinforcing things that are inequitable in an existing system?”

 

Businesses must ensure fairness, transparency and accountability in AI systems. That requires an understanding of the data that drive the model. “You have to understand the data that produce the outputs,” Lee said.

 

If a language model is not tailored to a business’s specific needs, it may be the wrong solution altogether. Organizations must also ensure that AI adoption does not expose intellectual property, violate privacy or introduce new liabilities.

 
 

Manage your approach

 

AI technology and its associated risk management can seem complex, particularly for smaller and mid-size businesses. Some firms therefore consider managed third-party AI services in which controls and safeguards are already in place.

 

“It’s common,” Lee said. “Many business leaders say, ‘We don’t have the staff or infrastructure to handle this nuanced, complicated technology. Can we find a vendor to help?’”

 

However, organizations should approach this carefully. “The most dangerous part is the implicit part — the things you haven’t considered before you start,” Lee said. Companies should ask several key questions before introducing AI capabilities into their environment.

 

Isolation

 

“The first question to ask, perhaps above all the others, is: ‘Are we operating within a walled garden?’” Lee asked. Consider the developer’s maxim that, “If you aren’t paying for the product, then you are the product.” Most free services pay their expenses by monetizing the data and other information they collect from users.

 

“If you just go to a free GenAI interface and enter proprietary information, you’re giving that IP away in ways that are pretty occult, hard to track and impossible to recover,” Lee said. “A key consideration in seeking external help is to dig into that very issue. You should trust but verify that vendors are speaking candidly about these things; blind trust in these scenarios is very dangerous.”
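One practical safeguard against the IP leakage Lee describes is to redact sensitive tokens before any prompt leaves the organization's boundary. The sketch below is a minimal illustration under stated assumptions: the two regex patterns are examples only, and production redaction would need far broader coverage (names, account numbers, contract terms and so on).

```python
import re

# Illustrative patterns only; real redaction needs much broader coverage
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text):
    """Replace sensitive tokens with placeholders before a prompt is
    sent to any external GenAI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Draft a reply to jane.doe@example.com about claim SSN 123-45-6789."
print(redact(prompt))  # placeholders replace the email address and SSN
```

A redaction layer does not make a free service safe for proprietary data, but it illustrates the kind of boundary control a “walled garden” deployment formalizes.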

 

Stakeholders

 

While individual business units often initiate technology adoption for a given domain, AI solutions must involve stakeholders across the organization. “Like cybersecurity, AI governance works best as a multidisciplinary approach,” Lee said.

 

Businesses should identify stakeholders impacted by AI use, including:

  • Employees
  • Customers
  • Vendors
  • Partners
  • Third-party intellectual property holders

HR, legal, finance, IT, operations and compliance teams should all participate in identifying use cases, relevant metrics, risks and other issues.

 

“Harkening back to the can’t-we-just-hire-a-vendor-for-that commentary, you can't outsource this multi-disciplinary approach; you should really insist that your in-house domain experts contribute to these decisions,” Lee said. “As you adopt the proof of concept, it's important that key stakeholder perspectives are contemplated for proper risk management. And it’s crucial to have an internal champion.”

 

Human oversight

 

Businesses also need human oversight for AI outputs. “If you look at examples of AI technology gone wrong, the absence of human oversight is usually at the heart of such failures,” Lee said. “You need to have a human overseer to make sure the solution is telling the truth, and to confirm that it's providing utility as opposed to hurling you into some expensive liability nightmare.”
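One common pattern for the human oversight Lee describes is a review gate: AI outputs carrying a confidence score below a threshold are held for a human rather than released automatically. The function, sample outputs and 0.8 threshold below are all illustrative assumptions; the confidence score itself would come from the model or a downstream evaluator.

```python
def route(outputs, threshold=0.8):
    """Split model outputs into auto-approved and human-review queues.

    `outputs` is a list of (text, confidence) pairs; confidence is an
    assumed score supplied by the model or a separate evaluator.
    """
    approved, review = [], []
    for text, confidence in outputs:
        (approved if confidence >= threshold else review).append(text)
    return approved, review

# Hypothetical draft responses from a customer-service model
outputs = [
    ("Standard warranty is 12 months.", 0.95),
    ("Customer is entitled to a full refund plus damages.", 0.41),
]
approved, review = route(outputs)
print(review)  # the low-confidence legal claim goes to a human first
```

The threshold is a governance decision, not a technical one: it encodes how much liability the organization will accept in exchange for automation.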

 

Third-party AI services come with the same warning as outsourced cybersecurity services: you can outsource the solution, but not the risk. Those risks remain with the organization, Lee said.

 

“You can have advisors help identify a proof of concept, but if you don't dedicate an internal champion to oversee the outputs — to be accountable for the quality, consistency and reliability of same — it can lead to very bad results,” Lee said. “Not only should you keep the system itself on a short leash, you should never let it completely off the leash.”

 

Moving forward with AI

 

When businesses understand both the opportunities and the risks of AI, they can build the governance and risk-management frameworks needed for responsible adoption. These frameworks connect today’s operational needs with tomorrow’s AI opportunities while keeping the attendant risks in check.

 
 


 
 

Content disclaimer

This Grant Thornton Advisors LLC content provides information and comments on current issues and developments. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.

Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.

For additional information on topics covered in this content, contact a Grant Thornton Advisors LLC professional.

 
