Anticipate cybersecurity and privacy risks in AI

 

This article is an extract from our white paper, Control AI cybersecurity risks.

 

As AI tools become more prevalent and sophisticated, they can pose significant cybersecurity risks. Malware can now use AI techniques to evade traditional antivirus software, and traditional approaches and countermeasures might not mitigate new risks like:

  • Incorrect or biased outputs
  • Vulnerabilities in AI-generated code
  • Copyright or licensing violations
  • Reputation or brand impacts
  • Outsourcing or losing human oversight for decisions
  • Compliance violations

As companies adapt their business strategies for new AI capabilities, they must also adapt their risk mitigation strategies. Cybersecurity and data privacy are essential to mitigating AI risks.


Cybersecurity risks

 

Implementing AI technologies in your organization can introduce five primary cybersecurity risks.

 

1. Data breaches and misuse

Data breaches pose a significant cybersecurity risk for AI platforms that store and process vast amounts of confidential or sensitive data like personally identifiable information, financial data and health records.

 

Several risk factors can contribute to data breaches in AI platforms. Internally, AI instances that process and analyze data can be vulnerable due to weak security protocols, insufficient encryption, lack of adequate monitoring, lax access controls and internal threats. Externally, AI solutions and platforms can be vulnerable to various security risks and can be targets for data theft, especially if data used to interact with these platforms is logged or stored.

 

The risk of misuse and data loss has increased with the wide availability of GenAI models like GPT-4 and PaLM 2, along with open-source alternatives. The risk is especially high for IT, engineering, development and even security staff, who may want to use GenAI to expedite their daily tasks or simply experiment with new tech. They can inadvertently feed sensitive data through browser extensions, APIs or directly to the GenAI platform. Without an enterprise-sanctioned solution, some may use their personal accounts, potentially committing their companies to terms of use that are not acceptable from a privacy and risk perspective.
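
One practical safeguard is to screen prompts for obvious sensitive patterns before they leave your environment. The sketch below is a minimal illustration in Python, not a complete data loss prevention solution; the patterns, function names and redaction behavior are assumptions for illustration only.

```python
# Minimal sketch: screen prompts for obvious sensitive patterns before they are
# sent to an external GenAI API. Not a complete DLP solution; the patterns and
# behavior here are illustrative assumptions only.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of known sensitive patterns with a redaction token."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

def safe_prompt(prompt: str) -> str:
    """Redact a prompt and flag the attempt so security teams keep visibility."""
    cleaned = redact(prompt)
    if cleaned != prompt:
        print("Warning: sensitive data was redacted before leaving the network.")
    return cleaned

# Example usage with a deliberately risky prompt:
print(safe_prompt("Summarize the account history for jane.doe@example.com, SSN 123-45-6789."))
```

In practice, this kind of check belongs in an enterprise-sanctioned gateway in front of the GenAI service, so the policy is enforced consistently rather than left to individual users.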


2. Adversarial attacks

 

Adversarial attacks manipulate input data to cause errors or misclassification, bypassing security measures and controlling the decision-making process of AI systems. There are several forms of adversarial attacks, and two of the most common types are evasion attacks and model extraction attacks.

 

Evasion attacks try to design inputs that evade detection by the AI system's defenses and allow attackers to achieve their goals (like bypassing security measures or generating false results). Since the inputs appear to be legitimate to the AI system, these attacks might produce outputs that are incorrect or unexpected without triggering any detection or alerts. Model extraction attacks try to steal a trained AI model from an organization to use it for malicious purposes.
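
To make the evasion scenario concrete, the sketch below uses the fast gradient sign method (FGSM), one well-known way to craft adversarial inputs. The model and data are toy stand-ins, and the use of PyTorch is an assumption for illustration; real attacks target production models with far more subtle perturbations.

```python
# Toy sketch of an evasion attack using the fast gradient sign method (FGSM).
# The classifier and input below are random stand-ins; the point is that a
# perturbation crafted from the model's own gradients can change its output
# while the input still looks legitimate.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier: 20 input features -> 2 classes.
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

x = torch.randn(1, 20)        # a legitimate-looking input
y_true = torch.tensor([0])    # its correct label

x_adv = x.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(model(x_adv), y_true)
loss.backward()

epsilon = 0.25                               # perturbation budget
perturbed = x + epsilon * x_adv.grad.sign()  # step in the direction that increases loss

print("original prediction: ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(perturbed).argmax(dim=1).item())
```

Depending on the model and the perturbation budget, the perturbed prediction can differ from the original even though the two inputs are nearly identical.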

 

Some applications are particularly vulnerable to these attacks. The impacts of adversarial attacks vary by use case and industry, but can include:

  • Errors or misclassifications in the output. In medical diagnostics, an adversarial attack can cause a misdiagnosis and potentially improper treatment; in automated vehicles, it can cause traffic signs to be misinterpreted and lead to accidents
  • Decision-making manipulations that could coerce a system into divulging sensitive information or performing unauthorized actions

 

3. Malware and ransomware

 

Malware and ransomware have plagued IT systems for years, and AI platforms can also be subject to these attacks. In fact, AI lowers the cost of malware generation, so attackers can deploy new malware variants more quickly, more cheaply and with less skill. The risks for any solution include:

  • Disruption of services, caused by encrypting data or overloading networks to prevent legitimate access
  • Hijacking resources to use for crypto mining or a botnet attack
  • Exploiting publicly available AI platforms to pivot into your network and cause harm

 

4. Vulnerabilities in AI infrastructure

 

Like any software, AI solutions rely on software, hardware and networking components that can be targeted by attackers. In addition to traditional attack vectors, AI can be targeted through cloud-based AI services, graphics processing units (GPUs) and tensor processing units (TPUs).

 

GPUs and TPUs are specialized processors designed to accelerate AI workloads, and they can introduce new attack vectors. Design flaws in processors and other hardware can affect a range of products. For instance, the Rowhammer flaw affects dynamic random-access memory (DRAM) chips in smartphones and other devices. Attackers can combine the Rowhammer flaw with memory deduplication in virtualized environments (a technique known as "Flip Feng Shui") to compromise the target's cryptographic keys or other sensitive data.

 

AI solutions are also built upon and integrated with other components that can fall victim to more traditional attacks. A compromise in the software stack can trigger a denial of service, expose sensitive data or give attackers a pivot point into your internal network.


5. Model poisoning

 

Adversarial attacks target AI models or systems in their production environment, but model poisoning attacks target AI models in a development or testing environment. In model poisoning, attackers introduce malicious data into the training data to influence the output, sometimes causing the AI model's behavior to deviate significantly.

 

For example, after a successful model poisoning attack, an AI model may produce incorrect or biased predictions, leading to inaccurate or unfair decision making. Some organizations are investing in training closed large language model (LLM) AI to solve specific problems with their internal or specialized data. Without proper security controls and measures in place, these applications can suffer serious damage from model poisoning attacks.
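
As a simplified illustration of how poisoned training data degrades a model, the sketch below flips the labels of a fraction of the training set for a toy classifier. The dataset, libraries (scikit-learn and NumPy) and the 20% poisoning rate are assumptions for illustration; real poisoning attacks are usually far stealthier.

```python
# Toy sketch of training-data poisoning via label flipping: an attacker who can
# tamper with a fraction of the training labels can degrade the trained model,
# while the poisoned records still look plausible to a human reviewer.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Baseline trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker flips the labels of 20% of the training data.
rng = np.random.default_rng(42)
poisoned_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poisoned_idx] = 1 - y_poisoned[poisoned_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean test accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned test accuracy:", poisoned_model.score(X_test, y_test))
```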

 

Model poisoning attacks can be challenging to detect, because the poisoned data can look innocuous to the human eye. Detection is also complicated for AI solutions that include open-source or other external components, as most solutions do.

 


Regulatory considerations for AI use

 

As AI use and adoption grow, regulations will emerge to help ensure ethical use, data protection and data privacy.

 

Currently, one of the most comprehensive regulatory frameworks for data protection is the EU’s General Data Protection Regulation (GDPR). It governs the collection, processing and storage of personal data. For example, Article 35 of the GDPR requires organizations to perform a Data Protection Impact Assessment (DPIA) for certain types of processing activities, particularly when new technology like AI is involved. You must be sure that you can comply with the DPIA's requirements before initiating the processing activities, and it can be difficult to give regulators an adequate DPIA without a thorough understanding of how an AI model works.

 

Many other jurisdictions are also considering or have implemented regulations to mitigate AI bias. These regulations typically require transparency in AI-driven decision making, auditing of AI systems for bias, and penalties for companies that fail to comply. One such regulation comes from the New York City Department of Consumer and Worker Protection (DCWP) and is set to begin enforcement in 2023.

 

Current regulations have varying requirements, but overall these nascent attempts to regulate AI use focus on key principles and avoid being too prescriptive. You should assess your current compliance requirements, and the key principles being addressed, to verify that your use of AI can continue to comply.


Privacy concerns

 

The use of personally identifiable information (PII) to train AI models has given privacy and security professionals reasonable cause for concern. By incorporating PII into the training process, developers risk creating models that inadvertently reveal sensitive information about individuals or groups. As AI models become more powerful and adaptable, they might also learn to extract sensitive information from users in the course of conversations. A failure to sufficiently protect PII could lead to privacy breaches, scams, phishing and other social engineering attacks.

 

To mitigate these risks, consider a range of factors and potential issues in AI technology:


1. Loss of sensitive information

 

One of the most pressing concerns is the potential exposure of sensitive information that end users enter in conversational AI systems. While this information may seem harmless on its own, it can be combined with other data points to create detailed profiles of individuals, potentially jeopardizing their privacy. 


2. Model explainability

Many advanced AI models are so complex that even their developers might see them as “black boxes.” That makes it challenging for organizations to explain the models and their results to regulators. In heavily regulated industries like finance and healthcare, regulators often require clear explanations of a model’s outputs and decisioning processes. A lack of explainability can lead to undiagnosed errors and missed process improvements, along with more serious issues like undetected biases and ethical implications. It also obscures who or what is responsible when things go wrong.

Some of these risks can be mitigated by adopting ethical AI development principles, promoting transparency, and improving user awareness and vigilance. However, mitigations must continue to evolve as AI use expands in scale and complexity.


3. Data sharing and third-party access

AI platforms can involve collaboration between multiple parties, or use third-party tools and services. This increases the risk of unauthorized access or misuse of personal data, especially when data is shared across jurisdictions with different privacy regulations.


4. Data retention and deletion

Some AI solutions store data for extended periods so that they can continue referencing, analyzing and comparing it as part of informing their machine learning, predictive and other capabilities. This long-term data storage increases the risk of unauthorized access or misuse. The context and complexity of AI solutions can also make it challenging to ensure that data is deleted when it is no longer needed or when individuals exercise their rights to request deletion.
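
As a simple sketch of enforcing retention and deletion, the example below drops records that fall outside a retention window or whose owners have requested deletion. The field names, retention period and in-memory store are assumptions for illustration; a real solution must apply the same rules to production data stores, logs and backups.

```python
# Minimal sketch of a retention and deletion policy for stored AI interaction
# data. The record structure and 90-day window are illustrative assumptions.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90

records = [
    {"user_id": "u1", "stored_at": datetime.now(timezone.utc) - timedelta(days=200), "data": "..."},
    {"user_id": "u2", "stored_at": datetime.now(timezone.utc) - timedelta(days=10), "data": "..."},
]
deletion_requests = {"u2"}  # users who exercised their right to request deletion

def apply_retention(records, deletion_requests, retention_days=RETENTION_DAYS):
    """Keep only records within the retention window and not subject to a deletion request."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=retention_days)
    return [
        r for r in records
        if r["stored_at"] >= cutoff and r["user_id"] not in deletion_requests
    ]

print(len(apply_retention(records, deletion_requests)))  # 0 records survive in this toy example
```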


5. Inference of sensitive information

Increasingly sophisticated and pervasive AI capabilities can connect and infer sensitive information about users based on inputs that seem innocuous on their own. For instance, inferences could combine inputs to identify political beliefs, sexual orientations or health conditions, posing a layer of risk that is hard to identify without a comprehensive analysis across potential data connections. Even when data is pseudonymized, AI might be able to use advanced pattern recognition or combine datasets to re-identify individuals without permission.
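
The sketch below shows how such re-identification can happen through simple linkage: joining pseudonymized records to an auxiliary dataset on quasi-identifiers like ZIP code and birth year re-attaches names to supposedly anonymous data. The datasets and field names are invented for illustration, and the use of pandas is an assumption.

```python
# Toy sketch of re-identification by linking a pseudonymized dataset with an
# auxiliary public dataset on quasi-identifiers (ZIP code and birth year).
import pandas as pd

pseudonymized = pd.DataFrame({
    "pseudo_id": ["a1", "b2"],
    "zip": ["10001", "94105"],
    "birth_year": [1985, 1990],
    "health_condition": ["condition_x", "condition_y"],
})

public_records = pd.DataFrame({
    "name": ["Jane Doe", "John Roe"],
    "zip": ["10001", "94105"],
    "birth_year": [1985, 1990],
})

# Joining on quasi-identifiers re-attaches names to the "anonymous" records.
reidentified = pseudonymized.merge(public_records, on=["zip", "birth_year"])
print(reidentified[["name", "health_condition"]])
```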


6. Surveillance and profiling

AI technologies like facial recognition and social media monitoring can enable invasive surveillance and profiling of individuals, endangering their rights to privacy, anonymity and autonomy.

 

Once organizations understand and can anticipate the cybersecurity and privacy risks in their AI solutions, they must determine how to mitigate those risks. To find out more about AI use cases, mitigation strategies and immediate actions for security teams, contact us or download our white paper, Control AI cybersecurity risks.
