After successful trials in nine NHS trusts, artificial intelligence (“AI”) tools designed to tackle waiting list backlogs are to be rolled out to a further 100 NHS trusts.

The AI tool will be used in hospital A&E departments and is specifically designed for demand forecasting: it uses machine learning and modelling techniques to predict the pressures that emergency demand may place on services, with the aim of tackling waiting list backlogs. The tool will analyse data such as public holidays, the weather, COVID-19 infection rates and 111 calls to predict daily A&E admissions up to three weeks in advance, and it can break down predicted daily admissions by age category. This, in turn, is intended to allow NHS staff to make informed decisions about the allocation of staff and resources ahead of time, based on which treatment delivery should be prioritised.
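In very simplified form, this kind of demand forecasting amounts to turning a day's signals (holiday flag, weather, infection rates, 111 call volumes) into an admissions estimate, which can then be split across age bands. The sketch below is purely illustrative: the feature names, weights and age-band proportions are invented for demonstration and bear no relation to the actual model deployed in the NHS.

```python
# Purely illustrative sketch -- NOT the actual NHS/Faculty model.
# All features, weights and age-band shares are invented assumptions.
from dataclasses import dataclass

@dataclass
class DayFeatures:
    is_public_holiday: bool
    mean_temp_c: float         # weather proxy for the forecast day
    covid_rate_per_100k: float
    nhs_111_calls: int         # recent 111 call volume for the area

# Invented linear weights standing in for a trained ML model.
BASELINE_ADMISSIONS = 300.0
WEIGHTS = {
    "holiday": 45.0,       # extra admissions on a public holiday
    "cold_weather": 2.5,   # per degree below 5C
    "covid": 0.8,          # per case per 100k population
    "calls": 0.05,         # per 111 call
}

# Invented age-band split applied to the daily total.
AGE_BAND_SHARE = {"0-17": 0.20, "18-64": 0.55, "65+": 0.25}

def predict_daily_admissions(day: DayFeatures) -> float:
    """Toy demand forecast: a weighted sum of the day's signals."""
    total = BASELINE_ADMISSIONS
    if day.is_public_holiday:
        total += WEIGHTS["holiday"]
    if day.mean_temp_c < 5.0:
        total += WEIGHTS["cold_weather"] * (5.0 - day.mean_temp_c)
    total += WEIGHTS["covid"] * day.covid_rate_per_100k
    total += WEIGHTS["calls"] * day.nhs_111_calls
    return total

def breakdown_by_age(total: float) -> dict:
    """Split a predicted daily total into age categories."""
    return {band: total * share for band, share in AGE_BAND_SHARE.items()}

# Example: a cold bank holiday with moderate infection rates.
day = DayFeatures(is_public_holiday=True, mean_temp_c=2.0,
                  covid_rate_per_100k=50.0, nhs_111_calls=400)
total = predict_daily_admissions(day)
by_age = breakdown_by_age(total)
```

A production system would of course learn its weights from historical admissions data rather than hard-coding them, but the shape of the task, features in, a daily estimate out, then a categorical breakdown, is the same.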

This is not the first time AI has been used within the NHS. Faculty, the AI firm which co-developed the current tool, previously worked with the NHS to build a COVID-19 Early Warning System. This was used to forecast hospital admissions, with the predictions made available at a national, regional and hospital level. The system was heralded as a crucial tool in increasing efficiency and supporting life-saving decisions throughout the pandemic. This speaks to the innovative nature of AI and its potential to benefit society in many understated ways.

Deploying AI systems of course involves a multitude of risks, particularly where personal data may be processed. The foundation for using AI securely and lawfully is to remain mindful that all AI technologies, regardless of their level of complexity, are subject to the same set of data protection principles. The basics remain the same: data must be used fairly, lawfully and transparently; data must be kept secure; and data subjects must be made aware of how their personal data are being used.

There are aspects of AI that, on the face of it, seem incompatible with certain data protection principles. For example, AI systems generally need a large amount of data to operate and perform effectively, as a backdrop against which to analyse future data. It is interesting to consider this alongside the UK GDPR’s data minimisation principle, outlined at Article 5(1)(c), which requires that personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.” Article 5(1)(c) appears to contradict the need for AI tools to process large amounts of personal data to operate effectively. However, as long as the personal data being processed are necessary and proportionate for the purpose, there should be no conflict with the UK GDPR. Developers and users of AI systems should default to collecting and processing only data that is relevant and proportionate to the purpose. An AI system that collected any personal data available to it, without identifying the minimum amount of data required to fulfil its purpose, would not be UK GDPR compliant. After all, an organisation cannot collect personal data it does not need on the off-chance that the data may be useful in the future.

The use of AI to process personal data will generally trigger the requirement to undertake a Data Protection Impact Assessment (a “DPIA”). As with any project, it is imperative that the DPIA is completed at an early stage of development. When undertaking a DPIA, an organisation must consider: the underlying process of the AI, including the data flows and any automated decision-making; the necessity and proportionality of the data being processed; the identification of relevant risks to rights and freedoms; and how any such risks will be addressed when the AI is deployed. This allows organisations to be fully informed as to how to navigate their use of AI in compliance with their obligations to data subjects.

As the use of AI systems becomes increasingly widespread, there will be ever more extensive discourse on how AI can sit comfortably alongside the requirements of the UK GDPR.

Contact Us 

If you require any advice in relation to data protection compliance, please contact a member of our specialist Data Protection & Cyber Security team.

This article was co-written by Katie MacLeod, Trainee Solicitor.