
Artificial intelligence in your business: Data protection

How do you ensure that any artificial intelligence (AI) you are using or developing throughout your business has data protection 'baked-in'?

With novel and developing technologies, there are often many factors and risks to consider when deciding how they should be used across a business. AI is no different. In this article we explore some key data protection implications of using AI in your business.

AI and data protection

Some AI applications do not involve personal data. However, many do: for example, automating a decision on whether to grant a loan to an individual, or whether to take a job applicant to the next stage of a recruitment process. If your use of AI involves personal data, you should be aware of the potential data protection implications.

Data protection needs to be considered throughout the whole lifecycle of an AI system. Even where no personal data is involved in the design and development of the AI system, it’s important to build in ‘data protection by design and by default’ principles at that point, particularly if personal data will feature later on in the lifecycle. This is to ensure that, for example, humans can provide meaningful oversight of the AI system, that individuals’ data protection rights can be met and that the risk of bias is minimised.

Key considerations for organisations using or considering using AI

At each stage of using AI, you should be ‘baking in’ data protection compliance, and, in almost all cases, ensuring that a data protection impact assessment (DPIA) is carried out. A DPIA involves identifying any high risks to individuals’ fundamental rights, including privacy rights and other human rights, and then mitigating and managing those risks. Possible risks are outlined below.

Accuracy

The accuracy principle in data protection requires personal data to be accurate and, where necessary, kept up to date. Depending on how the personal data is being used, if it is inaccurate, every reasonable step must be taken to delete or correct it without delay. Statistical accuracy in AI is not the same as the data protection accuracy principle: it refers to the accuracy of the AI system itself, that is, the proportion of answers the system gets right or wrong, not the accuracy of the personal data.

AI works on the basis of probability; it does not necessarily get things right 100% of the time. You need to assess the impact of the risk that the AI is not always accurate, mitigate that risk and manage any residual risk. You should be very clear that the output provided is a prediction, not a certainty. One way of doing that is to provide confidence scores, for example stating that the likelihood of a particular outcome occurring is 85%. Once the AI system is in use, you also need to evaluate its statistical accuracy throughout its lifecycle, because it can change over time.
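The two practices described above can be sketched in code. This is a minimal illustration, not a prescribed implementation: the function names and the sample figures are hypothetical, and real monitoring would draw on live outcome data.

```python
# Sketch of two practices: (1) presenting AI output as a prediction with a
# confidence score rather than a certainty, and (2) periodically measuring
# statistical accuracy once the system is in use.
# All names and figures here are illustrative assumptions.

def present_prediction(outcome: str, confidence: float) -> str:
    """Frame the model's output as a prediction accompanied by a confidence score."""
    return f"Predicted outcome: {outcome} (confidence: {confidence:.0%})"

def statistical_accuracy(predictions: list, actuals: list) -> float:
    """Proportion of answers the AI system got right, checked against real outcomes."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(predictions)

# Present an individual decision as a prediction, not a certainty:
print(present_prediction("loan approved", 0.85))

# Re-evaluate statistical accuracy on a recent batch of decisions,
# because accuracy can drift over the system's lifecycle:
recent_predictions = ["approve", "decline", "approve", "approve"]
recent_actuals     = ["approve", "decline", "decline", "approve"]
print(f"Accuracy this period: {statistical_accuracy(recent_predictions, recent_actuals):.0%}")
```

Recording these periodic accuracy figures over time gives the evidence needed to show that residual risk is being actively managed rather than assessed only once at deployment.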

Explainability

You need to be able to explain how your AI works. That involves providing meaningful information about the logic involved, what that logic means for the affected individuals, how significant the decision is for them and what the expected consequences are.
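One way to support this in practice is to have the system return a human-readable explanation alongside each automated decision. The sketch below assumes a deliberately simple rule-based decision; the function name, threshold and fields are hypothetical, chosen only to show the shape of an explainable output.

```python
# Illustrative sketch: an automated decision that carries its own explanation,
# covering the logic involved and the expected consequences for the individual.
# The decision rule, threshold and field names are assumptions for illustration.

def loan_decision(income: float, existing_debt: float) -> dict:
    """Return a decision plus meaningful information about the logic behind it."""
    ratio = existing_debt / income if income else float("inf")
    approved = ratio < 0.4
    return {
        "decision": "approved" if approved else "declined",
        "logic": f"debt-to-income ratio {ratio:.2f} compared against threshold 0.40",
        "consequence": (
            "loan offered on standard terms" if approved
            else "application declined; the individual may request human review"
        ),
    }

print(loan_decision(income=50000, existing_debt=10000))
```

Keeping the explanation as part of the decision record, rather than reconstructing it afterwards, makes it easier to give affected individuals meaningful information on request and to support human review.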
