Form a strategy to mitigate cybersecurity risks in AI

 

This article is an extract from our white paper Control AI cybersecurity risks

 

Many organizations have begun to mitigate risks from generative AI by enhancing privacy solutions and monitoring AI models. To mitigate the cybersecurity risks in new AI solutions, you should review and update your existing cybersecurity program. Programs must include appropriate security measures and technologies to safeguard data and systems from both inadvertent mistakes and malicious attacks.

 

Consider the following eight aspects when building security and privacy practices in the age of AI:

 

1. Policies and procedures

 

Review and amend existing policies and procedures to define the necessary AI-specific security requirements, designate roles to oversee AI operations and ensure implementation of the security guidelines.

 

2. Threat modeling

 

Conduct threat modeling exercises to help identify potential security threats to AI systems and assess their impact. Some common threats to model include data breaches, unauthorized access to systems and data, adversarial attacks and AI model bias. Modeling threats and their impacts gives you a structured approach for applying proactive measures to mitigate risks.

 

Consider the following activities as part of your threat modeling:

 

1. Criticality

Document the business functions and objectives of each AI-driven solution, and how they relate to the criticality of your organization’s operations. This helps you establish a criticality baseline, so you can make controls commensurate with the importance of the AI application and decide how thorough the threat model needs to be.

2. Connections

Identify the AI platforms, solutions, components, technologies and hardware, including the data inputs, processing algorithms and output results. This will assist in identifying the logic, critical processing paths and core execution flow of the AI that will feed into the threat model, and help educate the organization about the AI application.

3. Boundaries

Define system boundaries by creating a high-level architecture diagram, including components like data storage, processing, user access and communication channels. This will help you understand the AI application’s data and activity footprint, threat actors and dependencies.

4. Data characteristics

Define the flows, classifications and sensitivity for the data that the AI technology will use and output. This will help determine the controls and restrictions that will apply to data flows, as you might need to pseudonymize, anonymize or prohibit certain types of data.

5. Threats

Identify potential threats for your business and technologies, like data breaches, adversarial attacks and model manipulation.

6. Impacts

Assess the potential impacts of identified threats, and assign a risk level based on vulnerability, exploitability and potential damage (see the risk-scoring sketch after this list).

7. Mitigation

Develop and implement mitigation strategies and countermeasures to combat the identified threats, including technical measures like encryption, access controls or robustness testing, along with non-technical measures like employee training, policies or third-party audits.

8. Adaptation

Review and update the threat model on an ongoing basis as new threats emerge or as the system evolves.
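
To make the impact-scoring step concrete, the following is a minimal Python sketch of a threat register that combines vulnerability, exploitability and potential damage into a single risk level. The threats, scales and thresholds shown are illustrative assumptions, not values from the white paper; substitute whatever scheme your risk framework prescribes.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in an AI threat register (fields are illustrative)."""
    name: str
    vulnerability: int   # how exposed the system is, 1 (low) to 5 (high)
    exploitability: int  # how easy the threat is to carry out, 1 to 5
    damage: int          # potential business impact, 1 to 5

    def risk_score(self) -> int:
        # Simple multiplicative score in the range 1..125.
        return self.vulnerability * self.exploitability * self.damage

def risk_level(score: int) -> str:
    """Bucket a raw score into a reportable level (thresholds are assumptions)."""
    if score >= 64:
        return "high"
    if score >= 27:
        return "medium"
    return "low"

register = [
    Threat("Training data poisoning", vulnerability=3, exploitability=2, damage=5),
    Threat("Prompt injection via user input", vulnerability=4, exploitability=4, damage=3),
    Threat("Model theft through exposed API", vulnerability=2, exploitability=3, damage=4),
]

# Review the register from highest to lowest risk.
for threat in sorted(register, key=lambda t: t.risk_score(), reverse=True):
    score = threat.risk_score()
    print(f"{threat.name}: score={score} level={risk_level(score)}")
```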

 

3. Data governance

 

Use effective data governance to help ensure that data is properly classified, protected and managed throughout its lifecycle. Governance can include:

 

1. Roles and responsibilities

Establish policies with roles and responsibilities for data governance, along with requirements for documenting data provenance, handling, maintenance and disposal.

2. Data quality assessments

Regular data quality assessments help identify and remove potentially malicious data from training datasets in a timely manner.

3. Data validation

Data validation techniques like hashing can help ensure that training data is valid and consistent (see the hashing sketch after this list).

4. Identity and access management

Identity and access management policies can help define who has access to training data, with access controls to help prevent unauthorized modifications.

5. Acceptable data use

Acceptable data use policies can outline what data can be used and how. For each data classification (like public, internal, confidential or PII), define the permitted uses and restrictions pertaining to AI technologies. Policies should also include procedures for users to follow if they find a restricted data type in an AI solution or training set.
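
As a minimal example of hash-based validation, the sketch below verifies training files against a manifest of known-good SHA-256 digests recorded when the dataset was approved. The file and manifest names are placeholders.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of files whose digests do not match the manifest.

    The manifest is assumed to be a JSON object mapping relative file
    names to the SHA-256 digests recorded when the dataset was approved.
    """
    manifest = json.loads(manifest_path.read_text())
    return [
        name for name, expected in manifest.items()
        if sha256_of(data_dir / name) != expected
    ]

# Example: fail the training pipeline if any file changed since approval.
# bad = verify_training_data(Path("training_data"), Path("manifest.json"))
# if bad:
#     raise RuntimeError(f"Training data failed validation: {bad}")
```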

 

Implement secure data management and governance practices to help prevent model poisoning attacks, protect data security, maintain data hygiene and ensure accurate outputs. 

 

4. Access control

 

To control access to your AI infrastructure, including your data and models, establish identity and access management policies with technical controls like authentication and authorization mechanisms.

 

To define the policies and controls you need, consider:

  1. Who should have access to what AI systems, data or functionality?
  2. How and when should access be re-evaluated, and by whom?
  3. What type of logging, reporting and alerts should be in place?
  4. If we use AI with access to real data that may contain PII or other sensitive information, what access controls do we need, especially for the data annotation process?

Reassess and update policies and technical controls periodically to align with the evolving AI landscape and emerging threat types, ensuring that your security posture remains robust and adaptable.
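
As a simplified illustration of these decisions, the sketch below hard-codes a small role-based permission table for AI resources. The roles, resources and actions are hypothetical; in practice you would delegate these checks to your identity provider or policy engine, and log and alert on every denial.

```python
# Hypothetical role-based permission table for AI resources.
PERMISSIONS = {
    "ml_engineer": {("training_data", "read"), ("model", "deploy")},
    "data_annotator": {("training_data", "read")},  # no access to raw PII
    "auditor": {("access_logs", "read")},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    return (resource, action) in PERMISSIONS.get(role, set())

def require_access(role: str, resource: str, action: str) -> None:
    """Raise on denied access; a real system would also log and alert."""
    if not is_allowed(role, resource, action):
        raise PermissionError(f"{role} may not {action} {resource}")

require_access("ml_engineer", "model", "deploy")       # allowed
# require_access("data_annotator", "model", "deploy")  # raises PermissionError
```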

 

5. Encryption and steganography

 

Encryption can help protect the confidentiality and integrity of AI training data, source code and models. You might need to encrypt input data or training data, in transit and at rest, depending on the source. Encryption and version control can also help mitigate the risk of unauthorized changes to AI source code. Source code encryption is especially important when AI solutions can make decisions with potentially significant impacts.
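
For at-rest encryption of a training file, a minimal sketch using the open-source cryptography package’s Fernet recipe might look like the following. The file names are placeholders, and in production the key should come from a key management service rather than be generated beside the data.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in production, fetch this from your KMS or vault
fernet = Fernet(key)

# Encrypt the training data file at rest.
with open("training_data.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("training_data.csv.enc", "wb") as f:
    f.write(ciphertext)

# Later, an authorized pipeline step decrypts the file before use.
plaintext = fernet.decrypt(ciphertext)
```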

 

To protect and track AI models or training data, you can use steganographic techniques like:

 

Watermarking inserts a digital signature into a file, or into the output of an AI solution, to identify when a proprietary AI is being used to generate an output.

 

Radioactive data makes slight modifications to a file or training dataset so you can detect when an organization has used that data for training. For instance, radioactive data can help you protect your public data against unauthorized use in the training of AI models.
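
Production watermarking schemes embed signals imperceptibly in model weights or generated content; the sketch below only illustrates the underlying sign-and-verify idea by attaching an HMAC tag to each output of a hypothetical proprietary model. The key and outputs are placeholders.

```python
import hashlib
import hmac

WATERMARK_KEY = b"replace-with-a-key-from-your-vault"  # placeholder

def sign_output(text: str) -> str:
    """Produce an HMAC tag tying an output to our model's secret key."""
    return hmac.new(WATERMARK_KEY, text.encode(), hashlib.sha256).hexdigest()

def came_from_our_model(text: str, tag: str) -> bool:
    """Verify a claimed (output, tag) pair without leaking the key."""
    return hmac.compare_digest(sign_output(text), tag)

output = "Generated answer from the proprietary model."
tag = sign_output(output)
assert came_from_our_model(output, tag)
assert not came_from_our_model("A copied or altered output.", tag)
```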

 

6. End-point security and UEBA

 

End points (like laptops, workstations and mobile devices) act as primary gateways for accessing and interacting with AI systems. Historically, they have been a principal attack vector for malicious actors seeking to exploit vulnerabilities. With AI-augmented attacks on the horizon, end-point devices warrant special consideration as part of the potential attack surface.

 

End-point security solutions enabled with user and entity behavior analytics (UEBA) can help detect early signs of AI misuse and abuse by malicious actors. UEBA detects suspicious activity by comparing it against an observed baseline of behavior, rather than against a set of predefined patterns or rules, which makes it more effective than rule-based or supervised-learning security tools. UEBA employs advanced techniques like unsupervised machine learning and deep learning to detect new patterns and abnormal behaviors, providing a dynamic and adaptive approach to identifying potential security threats.
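
As a toy illustration of the UEBA idea, the sketch below fits scikit-learn’s IsolationForest to a baseline of made-up user-session features and flags sessions that deviate from it. Commercial UEBA products model far richer behavior; the features and parameters here are assumptions for demonstration only.

```python
# pip install scikit-learn
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one user session:
# [prompts_per_hour, MB_downloaded, distinct_models_queried]
baseline_sessions = np.array([
    [12, 5, 1], [15, 8, 2], [9, 4, 1], [20, 10, 2], [11, 6, 1],
    [14, 7, 2], [13, 5, 1], [18, 9, 2], [10, 4, 1], [16, 8, 2],
])

# Learn a baseline of "normal" behavior from historical sessions.
model = IsolationForest(contamination=0.05, random_state=0).fit(baseline_sessions)

new_sessions = np.array([
    [14, 6, 2],      # resembles the baseline
    [400, 900, 12],  # bulk-extraction attempt: clearly anomalous
])

# predict() returns 1 for inliers and -1 for anomalies.
for session, label in zip(new_sessions, model.predict(new_sessions)):
    status = "anomalous" if label == -1 else "normal"
    print(session, "->", status)
```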

 

7. Vulnerability management

 

AI systems can be vulnerable at many levels, like the infrastructure running the AI, the components used to build the AI or the coded logic of the AI itself. These vulnerabilities can pose significant risks to the security, privacy and integrity of the AI systems, and you need to address them through appropriate measures like robust security protocols, testing and validation procedures, and ongoing monitoring and maintenance.

 

You should regularly apply software updates and patches to keep all software and firmware components of the AI infrastructure up to date. Conduct regular assessments of AI infrastructure components, including hardware, software and data, to identify and remediate vulnerabilities in a timely manner.

 

Conduct periodic penetration tests on the AI solutions and functionality. This helps ensure that infrastructure patches are working as intended, access controls are operating effectively and there is no exploitable logic within the AI itself.
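
One small, automatable slice of this work is checking installed dependencies against known-vulnerable versions. The sketch below does this for Python packages with a made-up advisory table; in practice you would pull advisories from a real feed or run a dedicated scanner such as pip-audit.

```python
from importlib.metadata import distributions

# Placeholder advisory data: package name -> set of affected versions.
KNOWN_VULNERABLE = {
    "examplelib": {"1.0.0", "1.0.1"},
}

def find_vulnerable_packages() -> list[str]:
    """List installed packages whose versions appear in the advisory table."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_VULNERABLE.get(name, set()):
            findings.append(f"{name}=={dist.version}")
    return findings

for finding in find_vulnerable_packages():
    print("Vulnerable package installed:", finding)
```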

 

8. Security awareness

 

With the advent of any new technology, you need to ensure that executives, developers, system engineers, users and others understand the appropriate uses and the risks.

 

Board members and executives need to know:

  • privacy and data protection regulations that are applicable to their use
  • ethical implications of AI technologies (like potential biases, discrimination and unintended consequences)
  • legal and regulatory compliance requirements affected by use of AI, like intellectual property, liability and accountability

Users need to know:

  • how they can and cannot use AI
  • what data is permitted to be used with AI
  • which knowledge base the AI is using
  • procedures for reporting incidents

System engineers need to know:

  • guidelines for designing, building and integrating AI systems in a secure and compliant way
  • processes, tollgates, security reviews and approvals required for AI solutions
  • resources and knowledge bases available for AI solutions

Developers need to know:

  • security coding standards to which they must adhere
  • approved repositories and libraries
  • processes, tollgates, security reviews and approvals required for AI solutions
  • resources and knowledge bases available for AI solutions

Every role needs security training that includes specific responsibilities along with these core topics:

  • your security responsibilities
  • the processes you are required to follow
  • resources you can use to learn more, or whom you can contact

To improve resilience against threats and safeguard sensitive data, you need to foster a culture of security awareness. You also need to regularly update security training materials to keep pace with the rapidly evolving threat landscape and emerging techniques. 

 

Once organizations understand and anticipate the cybersecurity and privacy risks in their AI solutions, they must determine how to mitigate those risks. To find out more about AI use cases, cybersecurity risks, privacy regulations and immediate actions for security teams, contact us or download our white paper Control AI cybersecurity risks.
