The Information Commissioner’s Office (ICO) has published guidance for the development and deployment of systems using artificial intelligence (the “Guidance”).

The Guidance is designed to act as an “aide-mémoire” for organisations running AI projects. The ICO notes that many of the key data protection considerations, even for the most complex AI projects, are largely the same as for any other new project.

What does the Guidance say?

The Guidance is divided into four sections, covering different data protection principles and rights.

1. The principle of accountability in AI systems

Organisations are responsible for complying with the data protection principles in a demonstrable way, ensuring that they take a risk-based approach and that compliance activities are well resourced and staffed by diverse teams.

The ICO notes that one of the best ways to demonstrate compliance and ensure “data protection by design” is to carry out a meaningful data protection impact assessment (DPIA). This serves not only to identify and mitigate risks to individuals’ rights and freedoms but also to demonstrate accountability for decisions made as part of the design or procurement of AI systems. DPIAs should include AI-specific considerations, such as an assessment of the necessity and proportionality of the processing by AI and whether the task could be accomplished in a less intrusive way, weighing the interests of using AI against the risks to data subjects.

The ICO also stresses the importance of organisations identifying and understanding the controller/processor relationships involved in AI projects. However, it notes that identifying these relationships can be complex, particularly where processing happens in the cloud, and it plans to consult on this issue when it revises its Cloud Computing Guidance in 2021.

2. The lawfulness, fairness and transparency of processing personal data in AI systems

Different legal bases will likely be required for the different phases of AI development and deployment. The Guidance considers the following legal bases:

  • Consent: consent must be freely given, specific, informed and unambiguous, and involve a clear affirmative act. Consent must also be capable of being easily withdrawn. Given the nature of AI, it may not be easy to ensure that consent is sufficiently specific and informed.
  • Contract: this basis may apply where processing using AI is objectively necessary to deliver a contractual service to the relevant individual and there is no less intrusive way of processing data to provide the same service. The ICO notes that while contract could be an appropriate legal basis for processing on a system once deployed, it may not be suitable for the development phase of the AI system.
  • Legitimate Interests: to rely on this basis, organisations must carry out a legitimate interests assessment (“LIA”) and apply the three-part test: identifying a legitimate interest, showing that the processing is necessary to achieve it, and balancing it against the individual’s interests, rights and freedoms. As the AI lifecycle progresses, the LIA should be revisited to decide whether another legal basis may be more appropriate.

The ICO notes that special care should be taken in relation to special category data, data relating to criminal offences, and solely automated decision-making that has legal, or similarly significant, effects on individuals. Statistical accuracy (i.e. how often the AI system reaches the correct answer) and risks of bias should be addressed throughout the AI product lifecycle to ensure that the system processes personal data fairly.
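By way of illustration for technical teams, statistical accuracy can be monitored across demographic groups to help surface potential bias. The short Python sketch below is a minimal, hypothetical example (the “prediction”, “actual” and “group” column names are assumptions, not drawn from the Guidance); it illustrates the principle rather than providing a compliance tool.

    # Illustrative sketch only: per-group statistical accuracy.
    # Column names ("prediction", "actual", "group") are hypothetical.
    import pandas as pd

    def accuracy_by_group(df: pd.DataFrame) -> pd.Series:
        """Share of correct predictions within each demographic group."""
        correct = df["prediction"] == df["actual"]
        return correct.groupby(df["group"]).mean()

    results = pd.DataFrame({
        "prediction": [1, 0, 1, 1, 0, 1],
        "actual":     [1, 0, 0, 1, 0, 0],
        "group":      ["A", "A", "A", "B", "B", "B"],
    })
    print(accuracy_by_group(results))  # marked gaps between groups may indicate bias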

3. The principles of security and data minimisation in AI systems

There may be different security challenges in the development and deployment of AI systems; the key message is that organisations should review their security risk management practices to ensure that personal data is secure in an AI context.

Two security risks that AI can heighten are the potential loss or misuse of the large amounts of personal data often required to train AI systems, and software vulnerabilities introduced by incorporating new AI-related code and infrastructure. Standard practices for developing and deploying AI naturally involve processing large volumes of data, and organisations should consider how much data is really necessary for their particular purpose. For example, the question of how much data is needed to ensure statistical accuracy should be balanced against the principle of data minimisation. Organisations should also consider privacy-enhancing tools and methods, such as using “synthetic” data where possible (i.e. data which has been generated artificially and does not relate to real people). Anonymisation is also likely to play an important role in data minimisation in the context of AI technology, and the ICO is currently developing new guidance on this.
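As a simple illustration of the synthetic data approach, the Python sketch below generates artificial records by sampling from distributions fitted to (hypothetical) real data. This is a minimal sketch under strong assumptions: dedicated synthetic-data tools use far more sophisticated techniques, and sampling from fitted marginals does not by itself guarantee anonymity.

    # Illustrative sketch only: "synthetic" records sampled from simple
    # distributions fitted to hypothetical real data. Not a guarantee of
    # anonymity; dedicated synthetic-data tools are more sophisticated.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    real_ages = np.array([23, 35, 41, 29, 52, 38, 47, 31])            # hypothetical
    real_incomes = np.array([21, 34, 48, 27, 61, 39, 55, 30]) * 1000  # hypothetical

    # Fit simple marginal distributions and sample new, artificial records.
    synthetic_ages = rng.normal(real_ages.mean(), real_ages.std(), size=100)
    synthetic_incomes = rng.normal(real_incomes.mean(), real_incomes.std(), size=100)

    # The synthetic records can support model development in place of
    # personal data, consistent with the data minimisation principle.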

4. Issues relating to compliance with individual rights

Whenever AI utilises personal data – whether contained in training data or used to make a prediction once deployed – data subjects must be able to exercise their rights under the GDPR. The Guidance considers best practice for developing and deploying AI while complying with data subjects’ rights to information, access, rectification, erasure, restriction of processing, data portability and objection.

The Guidance notes that, although it may be difficult to identify an individual’s personal data in training data, which often has few or no clear identifiers, such data may still be personal data if it can be used to ‘single out’ the individual it relates to, either on its own or in combination with other data that an organisation may process. In certain circumstances, should an individual exercise their right to erasure, organisations may be required to erase the existing model and/or re-train it. Once an AI system is deployed, its outputs are usually stored in an individual’s profile, which may make it easier to deal with a request relating to that individual.
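To illustrate the ‘singling out’ concept, the Python sketch below flags records whose combination of quasi-identifiers is unique within a dataset and could therefore identify an individual even in the absence of obvious identifiers. The column names are hypothetical, and this k-anonymity-style check is an assumed illustration rather than anything prescribed by the Guidance.

    # Illustrative sketch only: flag records "singled out" by a unique
    # combination of quasi-identifiers (column names are hypothetical).
    import pandas as pd

    def singled_out(df: pd.DataFrame, quasi_identifiers: list) -> pd.DataFrame:
        """Return rows whose quasi-identifier combination is unique in the data."""
        counts = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
        return df[counts == 1]

    training = pd.DataFrame({
        "age_band": ["30-39", "30-39", "40-49", "40-49"],
        "postcode_area": ["AB", "AB", "CD", "EF"],
    })
    print(singled_out(training, ["age_band", "postcode_area"]))
    # Rows returned here could identify an individual even without a name.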

Regulatory guidance in the AI space is continuing to evolve rapidly to keep up with technological advancement. As part of its framework for auditing AI, the ICO plans to release a toolkit to provide further practical support to organisations for auditing their own AI systems.

How can we help?

If your organisation is involved in the development or deployment of AI systems, our team can assist with any data protection-related queries you might have, so please do not hesitate to contact us.