The UK Government has recently published its paper setting out the proposed framework for regulating Artificial Intelligence (“AI”) here in the UK. The Secretary of State for Digital, Culture, Media and Sport describes the proposed framework as ‘proportionate, light-touch and forward-looking’. Here, we summarise and provide commentary on the proposals.
The need for reform
One of the main criticisms of the current regulatory framework is that it is not appropriate for AI. At present, there is no specific legislation for AI. Instead, AI has been patchworked into existing legislation, such as the Data Protection Act 2018 and the Equality Act 2010. The UK Government’s position is that the current legislative landscape is a hindrance to business, and it identifies the following issues that the upcoming White Paper (due to be published later this year) will address.
- A lack of clarity: The paper cites ambiguity in the UK legal framework as it relates to AI, because the current legal framework was not created with AI in mind. For SMEs, it is argued that the existing legal framework can be difficult to navigate; without the resources or finances to instruct expert lawyers, this can be a barrier to potential AI businesses entering the market.
- Overlaps: The paper highlights that multiple layers of regulation, overseen by multiple regulators, may apply to a single organisation. Alleviating this burden is seen as a key objective of the reforms.
- Inconsistency: Linked to the above, where multiple regulators are involved in overseeing a particular AI project, different powers are available to different regulators. The example given in the paper is that the ICO can issue fines for data breaches whereas the Equality and Human Rights Commission cannot. Clearly this creates further confusion and ambiguity for businesses large and small.
- Gaps in our approach: The paper highlights that the current legislative and regulatory framework is outdated for a modern and developing technology such as AI. For example, AI developers are likely to have to explain and rationalise why AI tech makes decisions in a certain way, particularly in relation to important issues such as data protection. Ensuring that these issues are catered for in the reforms is a key objective.
To address these issues, the UK Government proposes that the solution for AI regulation lies not in legislative reform (at least not initially) but in sector-specific regulatory bodies offering guidance and overseeing compliance.
The pillars of the pro-innovation approach that regulatory bodies will be expected to follow are set to be as follows:
- Context-specific: it is recognised that AI does not exist and operate in a vacuum; instead it is dynamic and is used and operated in a wide range of environments (from chat-bots on customer websites to AI-driven motor vehicles). It is recommended that regulatory bodies adapt and react to emerging risks rather than attempting to anticipate what risks are likely to arise in the future;
- Pro-innovation and risk-based: in keeping with post-Brexit government policy, eliminating ‘burdensome or excessive administrative compliance obligations’ is seen as a key pillar to the reforms. The UK government envisages sector-specific regulatory bodies establishing risk-based criteria and thresholds at which organisations will have to comply with additional regulatory requirements. It remains to be seen how this approach will make it easier, particularly for SMEs, to know when and if regulatory requirements attach to their AI tech;
- Coherent: ensuring the system is simple, clear, predictable and stable is seen as a priority for the forthcoming White Paper. It is difficult to envisage how sector-specific regulation will create a simple, clear, predictable and stable system, particularly for those organisations that operate in multiple sectors. Whilst cross-sectoral principles (discussed below) are recommended to be implemented, it is foreseeable that there will be divergence between sectors and the interpretation of these principles could vary drastically, creating confusion for businesses; and
- Proportionate and adaptable: ‘lighter touch’ options such as guidance or voluntary measures are recommended as a first port of call for regulatory bodies moving forward. This may be appropriate in certain circumstances. However, for AI that faces legal challenges in the future, such as liability apportionment or data protection compliance, it is inevitable that firmer legislation will be required to regulate this space.
The cross-sectoral principles at this stage are:
- Ensuring that AI is used safely. The paper suggests that sector-specific bodies manage risk effectively using a test of proportionality. For sectors such as healthcare, the safety of AI is of paramount importance. For AI trained on healthcare data, it is likely that further emphasis will be placed on how that data is harvested and, secondly, how that data thereafter influences the AI tech, with the checks and balances built into the AI being scrutinised;
- Ensuring that AI is technically secure and functions as designed. This is particularly important in the context of the principle of data minimisation (i.e. data should only be collected to the extent it is required for the purpose) and data protection generally;
- Making sure that AI is appropriately transparent and explainable. Take the example of an AI-powered vehicle that causes multiple fatalities due to learned behaviour. Can the organisation responsible for developing the AI explain in human terms why the AI developed and learned behaviour in a certain manner? With such vast and complex sets of data and algorithms being utilised by AI, this is a hugely technical task for organisations responsible for developing and bringing AI to the marketplace;
- Embedding considerations of fairness into AI. A large number of people-facing organisations are now using AI to some extent. For instance, a number of organisations are using AI to sift CVs due to the large volume of applications received. Whilst there is an obvious advantage in the manpower saved through this technology, it is important that principles of fairness are embedded in any AI tech developed, so as to comply with existing legislation;
- Defining legal persons’ responsibility for AI governance. It is evident that when AI goes wrong, there must be a legal person who is responsible for the failings of the AI. As AI grows, and, in particular, as AI interacts with private individuals in the real world, there is likely to be increased litigation against corporate bodies when things go wrong. To use the above example of fatal road traffic accidents, the law of corporate manslaughter in Scotland is likely to develop. More than ever, organisations will require to demonstrate and justify their internal policies for monitoring AI development; and
- Clarifying routes to redress or contestability. It is envisaged that sector-specific regulatory bodies shall implement proportionate measures to ensure that affected groups or individuals are able to contest an outcome. The obvious concern here is divergence between sectors, and a requirement for legislation to enshrine these rights is almost inevitable.
Potential issues to consider for the future of AI regulation
The UK Government itself identifies in the paper the tension between sector-specific regulation and the current regulatory framework, where there are differences between regulators in different sectors. Uniformity, to an extent, is required to ensure businesses have the clarity and certainty they desire and need.
Moreover, it is unclear how organisations who manufacture AI that falls under a number of different sectors are to be treated under the proposals. If a particular AI product is within the remit of multiple regulatory bodies, how is the hierarchy of regulatory bodies to be decided? This is likely to add to, rather than resolve, the regulatory burdens on organisations, with organisations facing increased costs of compliance.
The apparent inconsistency and contradictions of AI with the existing UK GDPR are a further consideration in favour of AI-specific legislation. Article 5(1)(c) of the UK GDPR sets out the data minimisation principle, which states that personal data must be “adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed.” With AI processing huge swathes of data every second, organisations have to demonstrate that the data obtained is the minimum amount of data required to fulfil its purpose. It is difficult to envisage, as AI tech develops and becomes more and more advanced, how the application of this principle will always be compatible with AI.
The consultation period for offering views, opinions and evidence to inform the forthcoming White Paper is open until 26 September 2022. If this is an area of interest to you, you can send your views to: email@example.com or alternatively by writing to: Office for AI, 100 Parliament Street, London, SW1A 2BQ.
Should you require assistance with any AI projects please do not hesitate to get in touch with one of our specialists.