Take action to control cybersecurity risks in AI

 

This article is an extract from our white paper Control AI cybersecurity risks.

 

Artificial intelligence (AI) has opened the door to a range of new business capabilities — and a range of new security risks.

 

Your security team needs to select and design the right mitigation strategies, then define a clear roadmap with prioritized milestones and timelines for execution.

 

The first stop on this journey should be to define the rules of the road. For instance, these can include rules that guide the secure and responsible use of generative AI (GenAI): review and augment existing policies related to acceptable use, business email and data sharing with third parties.

 

To take the next steps on your roadmap, consider some important security questions.

 

Evaluate current policies

 

Start by asking:

Do we have the right policies, standards and procedures in place to tackle AI-related security and privacy risks?

 

Acceptable use policies typically address how users should use computing resources like networks, systems and software. These policies are meant to ensure that people use these resources in a responsible, ethical and legal manner. Explicitly include GenAI or other AI technologies in these policies, alongside existing use provisions for websites, social media, email and communications, to emphasize the potential risks involved.

 

Revisit your third-party data sharing policy, which typically outlines the types of data that can be shared, the parties that can receive the data, the purposes for which the data can be used, and the methods to ensure security and privacy of the data. These policies should either prohibit or limit the types of data that may be used in conversations and interactions with AI-driven solutions.

 

Finally, conduct security training and awareness campaigns to address the risks associated with the use of GenAI and other AI technologies, including appropriate uses, how to identify and respond to potential security and data breaches, and whom to contact in the event of an incident.

 

Design the new policies you need

 

Once you’ve evaluated current policies, ask:

Do we need new policies, standards and procedures to cover any gaps in the existing practices or emerging domains?

 

When creating new security policies that address AI usage in the workplace, consider the following factors to help ensure the security and privacy of employees, customers and others whose confidential information you might hold:

 

  1. Who can use AI solutions, for what purpose and under what circumstances?
    Some organizations enforce strict URL filtering to block access to GenAI and other AI solution portals, along with social media sites. Most, however, rely on less draconian “acceptable use” policies that provide guidelines about how and when to use these tools to make work more efficient. Similarly, some organizations may allow access only for specific people or even specific endpoints. Web browsing from corporate servers has long been frowned upon, and in many cases disabled, but employees and management have generally been allowed to access the Internet and appropriate websites without many restrictions.

    Consider providing explicit guidance for IT personnel, engineers and developers on using or integrating AI tools into existing applications or software, to help ensure adherence to the appropriate data sharing policies and standards.
  2. Do we have a well-defined, documented and socialized data classification policy?
    You should have a documented data classification policy and use data loss prevention (DLP) tools to enforce it. These policies should cover the use of data in AI technology, including data used to train models. Explicitly train and remind users not to enter any personally identifiable information (PII), proprietary information, patent information, source code or financial information into interactions with external AI systems (a minimal screening sketch appears after this list).
  3. How do data privacy, data retention and terms of service affect the adoption and use of GenAI?
    Evaluate when and how to let people use external GenAI technology in your enterprise, in accordance with your enterprise policies, culture and values.

    Consider how the GenAI creators approach and communicate their commitment to privacy, their data retention policies and the clauses included in their terms of service (ToS). The provider's data collection, processing and sharing practices should be clearly outlined and published so that you can answer:
    • Is input data pseudonymized, anonymized or encrypted?
    • Does the GenAI provider comply with published privacy-related policies and regulations?
    • Does the GenAI provider document its data retention and destruction processes?
    • Does the GenAI provider publish and release any third-party audit reports, certifications or attestations demonstrating their commitment to security?
    • Does the ToS contain clauses that flag potential legal, regulatory or compliance risks, such as limitations on data usage, sharing and intellectual property rights?
  4. How do you properly integrate AI into existing or newly developed applications or software?
    The development or integration of AI should start with a threat modeling exercise, and any significant change to existing software merits a risk assessment. In this way, you can document the risks associated with the software, calculate their impact and implement commensurate controls that reduce the risks to acceptable levels (a simple risk-scoring sketch appears after this list).
  5. Whom should employees contact when AI results are questionable or expose undesirable information?
    Establish clear accountability for AI policies, processes, technology and implementation at all levels of leadership. Help ensure that employees know how to report erroneous outputs or suspicious activity to the appropriate individuals for follow-up and resolution.
  6. What are the opt-out policies for AI, and how do people use them?
    Opt-out policies give users, customers or employees a set of choices and mechanisms to decline or withdraw their consent for solutions to collect, process or share their data. In the E.U., the General Data Protection Regulation (GDPR) emphasizes an opt-in approach where businesses must obtain explicit, informed and freely given consent from users. The GDPR also recognizes the “right to object,” which lets individuals opt out of certain data processing activities. The U.S. does not have a unified federal privacy regulation, which means that opt-out policies are sector-specific or state-specific (as with the California Consumer Privacy Act).

    As you plan software integration and data storage, consider the potential need to implement opt-in or opt-out policies in the future. If users later ask to review, modify or delete their data, you need to ensure that you can find and manage all relevant data, and that you can show regulators you have that capability. A minimal consent-ledger sketch appears after this list.
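
To make the screening point in item 2 concrete, here is a minimal sketch of regex-based prompt screening in Python. Everything in it is illustrative rather than prescriptive: the pattern set, the screen_prompt function and the redaction placeholders are assumptions for demonstration, and commercial DLP tools use far richer detection than a handful of regular expressions.

```python
import re

# Illustrative patterns only; real DLP tools use far richer detection.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches before a prompt leaves the enterprise.

    Returns the pseudonymized prompt and the names of the rules that
    fired, so the event can be logged or, if policy requires, blocked.
    """
    findings = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[REDACTED-{name.upper()}]", redacted)
    return redacted, findings

if __name__ == "__main__":
    safe, hits = screen_prompt(
        "Summarize: employee SSN 123-45-6789, contact jo@example.com"
    )
    print(hits)  # ['ssn', 'email']
    print(safe)  # sensitive values replaced with placeholders
```

A gateway of this kind also speaks to the pseudonymization question in item 3: data that never leaves the enterprise in raw form is far easier to defend in a privacy review.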
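
For the risk assessment in item 4, one simple approach is a scored risk register. This is a sketch under stated assumptions, not a standard: the 1-to-5 scales, the impact times likelihood formula and the acceptance threshold are placeholders that each organization would set through its own risk management framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One row in a threat-model risk register."""
    threat: str
    impact: int      # 1 (negligible) to 5 (severe)  -- assumed scale
    likelihood: int  # 1 (rare) to 5 (frequent)      -- assumed scale

    @property
    def score(self) -> int:
        # Classic impact x likelihood scoring; swap in your own model.
        return self.impact * self.likelihood

def prioritize(risks: list[Risk], threshold: int = 12) -> list[Risk]:
    """Return risks above the (assumed) acceptable level, worst first."""
    return sorted(
        (r for r in risks if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )

register = [
    Risk("Prompt injection exfiltrates customer data", impact=5, likelihood=3),
    Risk("Model returns copyrighted code verbatim", impact=3, likelihood=3),
    Risk("Provider retains prompts beyond contract terms", impact=4, likelihood=2),
]

for r in prioritize(register):
    print(f"{r.score:>2}  {r.threat}")  # only risks that need added controls
```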
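
For the opt-out mechanics in item 6, here is a minimal sketch of a consent ledger: one auditable row per decision, so that the latest choice can be enforced and demonstrated to regulators. The schema, the purpose labels and the opt-in default are assumptions for illustration; a production system would tie into identity management and propagate deletions to every store holding the subject's data.

```python
import sqlite3
from datetime import datetime, timezone

# In-memory database for the sketch; use a durable store in practice.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE consent (
        subject_id  TEXT NOT NULL,
        purpose     TEXT NOT NULL,    -- e.g. 'ai_training', 'analytics'
        granted     INTEGER NOT NULL, -- 1 = opt-in, 0 = opt-out
        recorded_at TEXT NOT NULL
    )
""")

def record_choice(subject_id: str, purpose: str, granted: bool) -> None:
    """Append a consent decision; history is kept for auditability."""
    conn.execute(
        "INSERT INTO consent VALUES (?, ?, ?, ?)",
        (subject_id, purpose, int(granted),
         datetime.now(timezone.utc).isoformat()),
    )

def may_process(subject_id: str, purpose: str) -> bool:
    """Latest choice wins; no record means no consent (opt-in default)."""
    row = conn.execute(
        "SELECT granted FROM consent WHERE subject_id = ? AND purpose = ? "
        "ORDER BY recorded_at DESC, rowid DESC LIMIT 1",
        (subject_id, purpose),
    ).fetchone()
    return bool(row and row[0])

record_choice("user-42", "ai_training", granted=True)
record_choice("user-42", "ai_training", granted=False)  # later opt-out
print(may_process("user-42", "ai_training"))  # False
```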

For a sample GenAI-specific security policy, see the Appendix in Control AI cybersecurity risks. The sample policy addresses many of the above points to help you stay ahead of AI adoption by employees and potential integrations into existing tools and applications.

 

Collaborate for the future

 

The rapid growth and adoption of AI technologies have brought about significant advancements and improvements in various industries, streamlining processes and enhancing productivity. However, alongside these benefits come concerns related to privacy, security, misuse and ethical considerations.

 

As AI continues to reshape our world, it is imperative for businesses, governments, regulatory agencies and individuals to collaborate on responsible AI practices and regulations that address these concerns. We need to focus on transparency, fairness and ethical use, while mitigating potential risks, to harness AI's power to create a more efficient, innovative and inclusive future.

 

Once organizations understand and anticipate the cybersecurity and privacy risks in their AI solutions, they must determine how to mitigate those risks. To find out more about AI use cases, cybersecurity risks, privacy regulations and mitigation strategies, contact us or download our white paper Control AI cybersecurity risks.
