AI technology is constantly generating new capabilities — and new risks. The pervasive and powerful nature of AI is both an incentive and a concern for companies that are building new solutions.
“Companies are building products in an uncertain environment, and in a rapidly evolving environment,” said Grant Thornton Technology and Telecommunications Industries National Leader Steven Perkins.
This rapid evolution creates risks. President Joe Biden recently spoke with advisors about how to help ensure responsible AI innovation that protects rights and avoids disinformation. At the meeting, the President said, “AI can help deal with some very difficult challenges, like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security… And so, tech companies have a responsibility, in my view, to make sure their products are safe before making them public.”
That responsibility can be significant, yet very complex.
AI developers face an uncertain regulatory environment, expanding AI capabilities, and new applications for those capabilities across industries. These factors can create risks that companies have not mitigated, or even anticipated.
The risks that accompany AI are as significant as the risks that have accompanied new architectures, Perkins said. “It's absolutely my belief that AI is a different architecture from prior architectures of cloud computing, client-server and internet-based models. It will be every bit as transformational. It's not just another component. It is a fundamentally transformative architectural component.”
“The more that AI gets embedded in the architecture, the greater the opportunity for return — but the greater the risk, as it gets used in so many unanticipated ways,” Perkins said.
As software companies build upon new AI capabilities, they need to build trust. They need to demonstrate responsibility and risk management that protects users and keeps driving adoption.
Build trust in the technology
AI technology will continue to affect business models for the companies that develop solutions, along with the companies that use the solutions. Those models will keep evolving as new capabilities achieve adoption. “With cloud, we saw new revenue models, support models, customer engagement models, corporate risk structures, workforce structures and more,” Perkins said.
“Underpinning all of it is trust,” he said. “Trust is what drives the adoption.”
That trust must be built with components of reliability, performance, scalability, interoperability, transparency and more. “Understanding where the elements of trust exist, between the builders of the technology and the users of the technology, is tremendously important,” Perkins said. The complex integration of AI technology within many solutions can make it difficult to discern the lines of responsibility. Solution users can be left wondering:
- How do I even know AI is there?
- How do I know when it’s being used?
- How do I understand what the effects are?
- How do I know the data sets it’s using?
- How do I know about the guardrails that protect me?
These questions create uncertainty for both users and developers. “I'd say that's problem number one,” Perkins said. “Derived from that problem is the need to put structure around the way you build, document and distribute AI technologies.”
So, how do you prove that AI technology — and your application of it — can be trusted?
Stay ahead of regulations
Following AI regulations can be difficult. “There are no universal guidelines on what constitutes trust in AI technology,” Perkins said, noting that, “The other problem is: There are lots of people tackling it.” That has created a complex web of overlapping guidelines and regulations from organizations and governments around the world.
“There's a real risk that the industry will be overregulated, or it will be regulated in the wrong way,” Perkins said.
That’s why it’s essential to:
1. Implement proactive risk management based on the concepts driving compliance, so you can prepare for the trajectory of requirements — without stifling innovation.
2. Track evolving regulations to ensure that you are prepared to stay compliant.
3. Establish a regular compliance audit that identifies any relevant changes within your organization, solutions and regulations that require adaptation.
A proactive demonstration of responsibility and compliance will help you avoid regulatory actions that could limit innovation in the future, for your company and the industry at large.
The applications for AI technology, like the regulations for AI technology, are constantly evolving. “Technology companies largely don't have control over how their technologies will be used in building applications,” Perkins noted. A machine learning model can be embedded in solutions for government, healthcare, banking or other sectors where a client’s faulty data might lead to unethical biases and other issues.
Who is responsible for those issues?
Framed another way, the question might be: What is the responsible thing for AI developers to do?
While developers cannot control all applications of their technologies, they can make a best effort to warn clients about potential issues. Make sure to inform clients about the unique risks that could arise from a misapplication of AI technology, in areas like:
- Data accuracy and fairness
Solutions that use AI technology are often the tip of an iceberg supported by massive volumes of data. The accuracy and fairness of that data determine the accuracy and fairness of the solution’s results. Yet, AI solutions often ingest and analyze so much data that developers and implementers must be extremely diligent about maintaining consistent data hygiene over time. Without that diligence, solutions can generate biased or otherwise misleading results that are difficult to detect before people use them to drive decisions.
- Data privacy, security and ownership
Given the volume of data that many AI-driven solutions ingest, developers can sometimes fail to sufficiently apply the privacy, security, copyrights and other restrictions that should accompany, limit or block use of the data. Data diligence must consider these factors, along with accuracy and fairness, in the design and implementation of AI-driven solutions.
- Closed loops
AI technology is not, and may never be, at a point where it can be trusted to drive entirely closed-loop systems that have significant and permanent impacts. Solutions must responsibly apply AI technology, limiting its part in decision-making processes with an understanding of impacts — and the places where appropriate, skilled and representative human interaction must play a role.
- Software supply chains
Most of today’s solutions are built from externally developed components or services, and solutions often constitute parts of larger systems. Solution developers and implementers must not only consider the impact of AI technology within their own solutions, but the impacts of the components being provided, and the impacts that solutions have within larger systems.
- Continual development
All of these considerations will evolve over the life of a solution. So, developers and implementers must continue to weigh considerations like these beyond an initial evaluation at implementation. They need to establish mechanisms to re-evaluate and adapt as needed over time.
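The data-hygiene and human-oversight points above can be made routine rather than ad hoc. As a minimal sketch only, the following Python check flags a dataset for human review instead of letting it flow straight into a closed loop; the field names and thresholds are hypothetical illustrations, not recommendations from the article:

```python
from dataclasses import dataclass

@dataclass
class DataHealthReport:
    """Summary of one scheduled hygiene check over a training dataset."""
    total_records: int
    missing_fraction: float      # share of records with any missing field
    minority_class_share: float  # smallest label's share, a rough balance proxy
    needs_human_review: bool

def check_dataset(records: list[dict], label_key: str = "label") -> DataHealthReport:
    """Escalate to a human reviewer when hygiene or balance degrades.
    Thresholds here are illustrative placeholders."""
    total = len(records)
    missing = sum(1 for r in records if any(v is None for v in r.values()))
    counts: dict[str, int] = {}
    for r in records:
        counts[r[label_key]] = counts.get(r[label_key], 0) + 1
    minority_share = min(counts.values()) / total if counts else 0.0
    missing_frac = missing / total if total else 0.0
    # Keep a person in the loop whenever the data drifts past the thresholds.
    flagged = missing_frac > 0.05 or minority_share < 0.10
    return DataHealthReport(total, missing_frac, minority_share, flagged)
```

Run on a schedule, a check like this supports the “continual development” point: the evaluation happens repeatedly over the life of the solution, not once at implementation.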
“If clients are building applications on top of my natural language processing, machine learning or other AI component, what's their responsibility?” Perkins asked. Help your clients understand factors like the ones above, making and documenting your best effort to ensure they can manage risks.
Regulators and clients cannot consider the risks that they don’t see. That’s why it’s important for AI developers to offer the appropriate level of transparency. “Models need to be transparent, but there's clearly a balance between transparency and protecting the IP that makes your products unique and sustains your business model,” Perkins said.
This is another place where documentation can play an important role. “Part of it is being diligent in the documentation of what you’ve done and how you’ve done it,” Perkins said. This documentation can help drive and demonstrate governance for how you specify and build technology, including relevant internal and external traceability.
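One lightweight way to make that documentation habitual is to keep a structured, versioned record alongside each model. This is a sketch of one possible in-house format; the fields and example values are hypothetical, not a standard the article prescribes:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelRecord:
    """A minimal record of what was built, how, and for whom.
    Field names are illustrative; adapt them to your governance process."""
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = ""  # where people stay in the loop

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example of a filled-in record.
record = ModelRecord(
    model_name="doc-classifier",
    version="1.2.0",
    intended_use="Routing internal support tickets; not for credit decisions.",
    training_data_sources=["internal tickets 2021-2023 (anonymized)"],
    known_limitations=["English-only training data"],
    human_oversight="Low-confidence predictions routed to an analyst.",
)
print(record.to_json())
```

A record like this can travel with the model through the software supply chain, giving downstream integrators the traceability the documentation is meant to provide.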
Work with industry and regulatory groups to make sure that you provide transparency that is aligned to expectations, and that you can answer the questions stakeholders ask.
“There are lessons that we can learn from prior architectural generations, and how we came to build trust,” Perkins said. Looking back at how clients came to trust cloud solutions, it’s clear that software developers need to be the ones to cross the gap of trust and meet customers where they are. “Tech companies needed to go above and beyond what you consider as a normal business practice, to demonstrate the efficacy of their products in that environment.”
That said, the trust gap could be even larger for AI. “This is about trusting the content, and the way the content is being articulated to me as a user, which is beyond a cloud infrastructure component,” Perkins said. Also, the impact and the audience, even for small solutions, might be expansive. “You need to think of this comprehensively, and think about who the participants are in defining and managing the risk profile for an AI solution.” Once you understand the participants and stakeholders, consider what you need to do to meet them where they are.
As risks continue to evolve, software developers and implementers must keep managing those risks to build trust and adoption. The companies that build the most trust could be the ones in the best position to capitalize on the growing demand for AI technology.