AI technology is constantly generating new capabilities — and new risks. The pervasive and powerful nature of AI is both an incentive and a concern for companies that are building new solutions.
“Companies are building products in an uncertain environment, and in a rapidly evolving environment,” said Grant Thornton Technology and Telecommunications Industries National Leader Steven Perkins.
This rapid evolution creates risks. President Joe Biden recently spoke with advisors about how to help ensure responsible AI innovation that protects rights and avoids disinformation. At the meeting, the President said, “AI can help deal with some very difficult challenges, like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security… And so, tech companies have a responsibility, in my view, to make sure their products are safe before making them public.”
That responsibility can be significant, yet very complex.
AI developers face an uncertain regulatory environment, expanding AI capabilities, and new applications for those capabilities across industries. These factors can create risks that companies have not mitigated, or even anticipated.
The risks that accompany AI are as significant as the risks that have accompanied new architectures, Perkins said. “It's absolutely my belief that AI is a different architecture from prior architectures of cloud computing, client-server and internet-based models. It will be every bit as transformational. It's not just another component. It is a fundamentally transformative architectural component.”
“The more that AI gets embedded in the architecture, the greater the opportunity for return — but the greater the risk, as it gets used in so many unanticipated ways,” Perkins said.
As software companies build upon new AI capabilities, they need to build trust. They need to demonstrate responsibility and risk management that protects users and keeps driving adoption.
Build trust in the technology
AI technology will continue to affect business models for the companies that develop solutions, along with the companies that use the solutions. Those models will keep evolving as new capabilities achieve adoption. “With cloud, we saw new revenue models, support models, customer engagement models, corporate risk structures, workforce structures and more,” Perkins said.
“Underpinning all of it is trust,” he said. “Trust is what drives the adoption.”
That trust must be built with components of reliability, performance, scalability, interoperability, transparency and more. “Understanding where the elements of trust exist, between the builders of the technology and the users of the technology, is tremendously important,” Perkins said. The complex integration of AI technology within many solutions can make it difficult to discern the lines of responsibility. Solution users can be left wondering:
- How do I even know AI is there?
- How do I know when it’s being used?
- How do I understand what the effects are?
- How do I know what data sets it’s using?
- How do I know what guardrails are in place to protect me?
These questions create uncertainty for both users and developers. “I'd say that's problem number one,” Perkins said. “Derived from that problem is the need to put structure around the way you build, document and distribute AI technologies.”
So, how do you prove that AI technology — and your application of it — can be trusted?
Stay ahead of regulations
Keeping up with AI regulations can be difficult. “There are no universal guidelines on what constitutes trust in AI technology,” Perkins said, noting that “The other problem is: There are lots of people tackling it.” That has created a complex web of overlapping guidelines and regulations from organizations and governments around the world.
“There's a real risk that the industry will be overregulated, or it will be regulated in the wrong way,” Perkins said.
That’s why it’s essential to:
1. Implement proactive risk management based on the concepts driving compliance, so you can prepare for the trajectory of requirements — without stifling innovation.
2. Track evolving regulations to ensure that you are prepared to stay compliant.
3. Establish a regular compliance audit that identifies any relevant changes within your organization, solutions and regulations that require adaptation (a minimal sketch of such a register follows this list).
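One way to operationalize steps 2 and 3 is a lightweight internal register that maps each requirement to a control, an owner and a next review date. The sketch below is a hypothetical illustration in Python; the field names, example entries and review cadence are assumptions for illustration, not guidance tied to any specific regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplianceItem:
    """One tracked requirement for an AI product (all values hypothetical)."""
    requirement: str   # e.g., "Disclose AI-generated content to end users"
    source: str        # regulation, framework or internal policy it comes from
    control: str       # how the organization currently addresses it
    owner: str         # accountable team or role
    next_review: date  # when this mapping is re-checked

@dataclass
class ComplianceRegister:
    """A simple register that supports a recurring compliance audit."""
    items: list[ComplianceItem] = field(default_factory=list)

    def due_for_review(self, as_of: date) -> list[ComplianceItem]:
        """Return items whose scheduled review date has passed."""
        return [item for item in self.items if item.next_review <= as_of]

# Hypothetical usage for a recurring audit
register = ComplianceRegister([
    ComplianceItem(
        requirement="Document training-data provenance",
        source="Internal AI policy (hypothetical)",
        control="Data lineage captured in the model release checklist",
        owner="ML platform team",
        next_review=date(2024, 1, 15),
    ),
])
overdue = register.due_for_review(date(2024, 6, 1))
print([item.requirement for item in overdue])
```

A register like this is only a bookkeeping aid; the substance still comes from tracking the evolving regulations themselves and deciding how each one maps to your products.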
A proactive demonstration of responsibility and compliance will help you avoid regulatory actions that could limit innovation in the future, for your company and the industry at large.
Clarify applications
The applications for AI technology, like the regulations for AI technology, are constantly evolving. “Technology companies largely don't have control over how their technologies will be used in building applications,” Perkins noted. A machine learning model can be embedded in solutions for government, healthcare, banking or other sectors where a client’s faulty data might lead to unethical biases and other issues.
Who is responsible for those issues?
Framed another way, the question might be: What is the responsible thing for AI developers to do?
While developers cannot control all applications of their technologies, they can make a best effort to warn clients about potential issues. Make sure to inform clients about the unique risks that could arise from a misapplication of AI technology, in areas like:
- Data accuracy and fairness: Solutions that use AI technology are often the tip of an iceberg supported by massive volumes of data. The accuracy and fairness of that data determine the accuracy and fairness of the solution’s results. Yet AI solutions often ingest and analyze so much data that developers and implementers must be extremely diligent about consistent and ongoing data hygiene over time. Without that diligence, solutions can generate results with biases and other misinformation that can be difficult to detect before people use the results to drive decisions. (A minimal sketch of this kind of automated check follows this list.)
- Data privacy, security and ownership: Given the volume of data that many AI-driven solutions ingest, developers can sometimes fail to sufficiently apply the privacy, security, copyright and other restrictions that should accompany, limit or block use of the data. Data diligence must consider these factors, along with accuracy and fairness, in the design and implementation of AI-driven solutions.
- Closed loops: AI technology is not, and may never be, at a point where it can be trusted to drive entirely closed-loop systems that have significant and permanent impacts. Solutions must apply AI technology responsibly, limiting its part in decision-making processes with an understanding of the impacts, and of the places where appropriate, skilled and representative human interaction must play a role.
- Software supply chains: Most of today’s solutions are built from externally developed components or services, and solutions often constitute parts of larger systems. Solution developers and implementers must consider not only the impact of AI technology within their own solutions, but also the impacts of the components they consume and the impacts their solutions have within larger systems.
- Continual development: All of these considerations will evolve over the life of a solution, so developers and implementers must continue to weigh them beyond an initial evaluation at implementation. They need to establish mechanisms to re-evaluate and adapt over time.
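The data hygiene point above lends itself to recurring automated checks. Below is a minimal, hypothetical sketch in Python using pandas; the thresholds, the “region” column and the choice to flag missingness and group imbalance are illustrative assumptions, not a recommended standard.

```python
import pandas as pd

def basic_data_hygiene_report(df: pd.DataFrame, group_column: str) -> dict:
    """Flag two common hygiene issues before data feeds an AI solution:
    columns with heavy missingness, and underrepresented groups in a
    sensitive attribute. Thresholds here are illustrative only."""
    report = {}

    # Fraction of missing values in each column
    missing = df.isna().mean()
    report["high_missingness"] = missing[missing > 0.05].to_dict()

    # Share of each group in the (hypothetical) sensitive attribute
    shares = df[group_column].value_counts(normalize=True)
    report["underrepresented_groups"] = shares[shares < 0.30].to_dict()

    return report

# Hypothetical usage: "region" stands in for whatever attribute matters in a fairness review
df = pd.DataFrame({
    "region": ["north", "north", "north", "south"],
    "score": [0.9, 0.8, None, 0.7],
})
print(basic_data_hygiene_report(df, group_column="region"))
```

In practice a check like this would run on every data refresh, so bias and quality issues surface before results are used to drive decisions.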
“If clients are building applications on top of my natural language processing, machine learning or other AI component, what's their responsibility?” Perkins asked. Help your clients understand factors like the ones above, making and documenting your best effort to ensure they can manage risks.
Most importantly, understand your responsibility as an AI developer, establishing your own risk management models that are conceptually sound and evolve over time.
Maintain transparency
Regulators and clients cannot consider the risks that they don’t see. That’s why it’s important for AI developers to offer the appropriate level of transparency. “Models need to be transparent, but there's clearly a balance between transparency and protecting the IP that makes your products unique and sustains your business model,” Perkins said.
This is another place where documentation can play an important role. “Part of it is being diligent in the documentation of what you’ve done and how you’ve done it,” Perkins said. This documentation can help drive and demonstrate governance for how you specify and build technology, including relevant internal and external traceability.
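One widely discussed way to structure that documentation is a concise, shareable model record, sometimes called a model card. The sketch below is a hypothetical minimal version in Python; the fields and example values are assumptions chosen to illustrate the traceability idea, not a template endorsed in this article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRecord:
    """A minimal, hypothetical documentation record for an AI component,
    loosely following the model card idea of structured model documentation."""
    name: str
    version: str
    intended_use: str           # what the component is designed to do
    out_of_scope_uses: str      # applications the developer warns against
    training_data_summary: str  # provenance and known limits of the data
    evaluation_summary: str     # how performance and bias were assessed
    upstream_components: tuple  # externally sourced models or services used

# Hypothetical example entry
record = ModelRecord(
    name="example-nlp-classifier",
    version="1.4.0",
    intended_use="Routing customer support tickets by topic",
    out_of_scope_uses="Eligibility, credit or employment decisions",
    training_data_summary="Vendor-licensed support transcripts, 2019-2022; English only",
    evaluation_summary="Accuracy and per-segment error rates reviewed quarterly",
    upstream_components=("third-party-embedding-service",),
)
print(record.intended_use)
```

A record like this can be shared with clients and regulators without exposing the model internals that sustain the business model, which speaks to the balance Perkins describes between transparency and protecting IP.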
Work with industry and regulatory groups to make sure that you provide transparency that is aligned to expectations, and that you can answer the questions stakeholders ask.
Exceed expectations
“There are lessons that we can learn from prior architectural generations, and how we came to build trust,” Perkins said. Looking back at how clients came to trust cloud solutions, it’s clear that software developers need to be the ones to cross the gap of trust and meet customers where they are. “Tech companies needed to go above and beyond what you consider as a normal business practice, to demonstrate the efficacy of their products in that environment.”
That said, the trust gap could be even larger for AI. “This is about trusting the content, and the way the content is being articulated to me as a user, which is beyond a cloud infrastructure component,” Perkins said. Also, the impact and the audience, even for small solutions, might be expansive. “You need to think of this comprehensively, and think about who the participants are in defining and managing the risk profile for an AI solution.” Once you understand the participants and stakeholders, consider what you need to do to meet them where they are.
As risks continue to evolve, software developers and implementers must keep managing those risks to build trust and adoption. The companies that build the most trust could be the ones in the best position to capitalize on the growing demand for AI technology.
Prepare to meet demand
The tech industry is poised for strength overall, and recent attention to solutions like ChatGPT shows that AI-driven solutions are an important part of what’s driving that demand.
The market’s readiness to try and adopt new solutions has been primed by impressive new AI-driven capabilities, along with a convergence of three factors:
- The pandemic forced many companies to adopt cloud, remote and mobile solutions that they had been considering for years, sometimes overnight. While these sudden shifts weren’t ideal, they changed corporate and user expectations about how quickly new solutions can be adopted, and how important it is for a company’s technology to keep up with the competition. Many of today’s most competitive new customer solutions are AI-driven.
- The retirements of legacy enterprise IT professionals and leaders in recent years have created more customer appetite to adopt new models, platforms and solutions. The perspective that companies should “leave legacy alone,” keeping older systems as long as they run, has given way to a transformational perspective. New platforms, and new ERP and other solutions, often open the door to integrating new AI-driven capabilities.
- The third-party AI technology market has developed the capacity, capabilities and integration models to enable almost any company to build AI into its solutions. Without the need to hire or train in-house staff to develop or maintain AI-driven solutions, companies can now be more agile in responding to customer demands and competitive pressures.
Even with the convergence of these factors, the adoption of AI-driven solutions cannot be driven by capabilities alone. “Is it the right thing to just airdrop a highly engaging and consuming AI use case?” Perkins asked. He said that releasing new capabilities without building understanding and trust can generate a backlash of concern, adding, “We've seen a number of notable people come out and say, ‘We ought to take a six-month pause in rolling out generative AI applications.’”
Opposing forces are trying to drive and restrain AI technology, sometimes within the same organization. “This will be a regular occurrence, as the technology matures, deployment gets more pervasive, and it gets embedded more places, especially as it gets blended with other technologies.” New solutions and platforms will introduce new complexities, such as metaverse applications where lines around identity and content sources can become blurred — but corporate responsibilities remain.
“The concern about AI risks is absolutely legitimate and appropriate,” Perkins said, but companies must learn to identify and manage those risks so that they can build trust in the technology.
The tech companies that demonstrate responsibility, manage risks and build trust can be the first ones to meet the growing demand for AI.