Earlier this week it was announced that the Home Office is to stop using a computer algorithm to help determine visa applications, following allegations that it contained "entrenched racism". The Joint Council for the Welfare of Immigrants (JCWI) and digital rights group Foxglove had launched a legal challenge against the system, with Foxglove describing it as "speedy boarding for white people".

This announcement once again brings into sharp focus the extent to which we are ready to trust robots and other forms of artificial intelligence to make decisions that may affect many aspects of our lives. We are not alone in wrestling with this conundrum – indeed, many governments and other organisations around the world are currently doing exactly that.

Here in Scotland, the Scottish Government is committed to developing an AI Strategy for Scotland, and The Data Lab is currently coordinating the strategy development process. In looking to create the right AI ecosystem in Scotland, a model is emerging which holds that AI in Scotland should be trusted, responsible and ethical. If the vision for AI in Scotland is to be realised, public trust and confidence in AI, supported by the right ethical and regulatory framework, will be central to its success.

As work continues on the development of Scotland’s AI Strategy, it would be remiss of us to overlook parallel international developments. Close to home, the European Commission’s High-Level Expert Group on Artificial Intelligence has recently published its final Assessment List for Trustworthy Artificial Intelligence (ALTAI). The ALTAI is a practical tool intended to help businesses and organisations assess the trustworthiness of their AI systems.

The ALTAI translates into practical terms the Ethics Guidelines for Trustworthy Artificial Intelligence published in 2019 by the High-Level Expert Group on AI (AI HLEG) set up by the European Commission. According to the Ethics Guidelines, Trustworthy AI is based on seven requirements. These requirements have been translated into the Assessment List and an accompanying online tool, taking into account feedback from a six-month piloting process within the European AI community.

What does the ALTAI say?

Prior to self-assessing an AI system with the ALTAI, organisations should carry out a fundamental rights impact assessment covering rights such as non-discrimination and freedom of expression, as well as privacy and data protection. Organisations should also consider whether they have suitable processes in place to test, monitor and rectify potential bias during the development, deployment and use phases of an AI system (something which may have helped the Home Office).
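To make that concrete, bias testing of this kind can start with something as simple as comparing outcome rates across groups. The sketch below is a minimal, hypothetical Python example, not anything the ALTAI itself prescribes: it assumes a decision log with a protected-attribute field (here called "nationality") and a binary "approved" outcome, and applies the commonly cited "four-fifths" disparate impact heuristic. The field names and the 0.8 threshold are illustrative assumptions.

```python
# Minimal sketch of an outcome-rate bias check (illustrative only).
# Field names and the 0.8 threshold are hypothetical assumptions,
# not requirements of the ALTAI.
from collections import defaultdict

def approval_rates(decisions, group_key="nationality", outcome_key="approved"):
    """Return the approval rate for each group in the decision log."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in decisions:
        group = record[group_key]
        totals[group] += 1
        approvals[group] += 1 if record[outcome_key] else 0
    return {group: approvals[group] / totals[group] for group in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate falls below `threshold` times the highest
    group's rate (the "four-fifths" heuristic)."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()
            if rate / best < threshold}

log = [
    {"nationality": "A", "approved": True},
    {"nationality": "A", "approved": True},
    {"nationality": "B", "approved": True},
    {"nationality": "B", "approved": False},
]
rates = approval_rates(log)
print(rates)                          # {'A': 1.0, 'B': 0.5}
print(disparate_impact_flags(rates))  # {'B': 0.5}
```

A check of this sort would need to run throughout the development, deployment and use phases rather than once, and a flagged disparity is a prompt for investigation rather than proof of unlawful discrimination.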

The ALTAI requires organisations to cover the following areas:

1. Human Agency and Oversight

This section is particularly concerned with AI systems that are aimed at guiding or influencing human decision-making: whether they could create over-reliance that undermines users' autonomy, illegitimately manipulate users' behaviour, or leave end-users confused as to whether a decision, content or advice is the outcome of an algorithmic, rather than a human, decision.

2. Technical Robustness and Safety

Technical robustness is crucial to achieving trustworthy AI: systems should be secure and resilient to cyber attacks, the scope for misuse should be minimised, and potential adverse effects arising from AI systems which are inaccurate, or whose decisions are unreliable and difficult to replicate, should be minimised.
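To illustrate the reliability point, one crude but common probe is to check whether a model's decision is stable under small perturbations of its input: a decision that flips under negligible changes is difficult to replicate and hard to defend. The sketch below is a hypothetical illustration only; `predict` stands in for any decision function, and the noise level and trial count are arbitrary assumptions.

```python
import random

def stability_check(predict, x, n_trials=100, noise=0.01):
    """Return the fraction of small random perturbations of `x` that
    change the model's decision (a crude robustness probe)."""
    base = predict(x)
    flips = sum(
        1 for _ in range(n_trials)
        if predict([v + random.uniform(-noise, noise) for v in x]) != base
    )
    return flips / n_trials

# Toy threshold model: an input sitting right on the decision boundary
# produces a high flip rate, signalling an unstable decision.
model = lambda v: v[0] + v[1] > 1.0
print(stability_check(model, [0.5, 0.5]))  # roughly 0.5: unstable
print(stability_check(model, [0.2, 0.2]))  # 0.0: stable
```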

3. Privacy and Data Governance

Organisations should consider the impact of the AI system on the right to privacy, particularly where the system is trained or developed using personal data. Minimising harm to privacy rights requires good data governance, which overlaps with mandatory requirements under the General Data Protection Regulation, for example carrying out a Data Protection Impact Assessment or appointing a Data Protection Officer. Organisations should also consider what mechanisms are in place to minimise potential harm and to enable any concerns to be reported and addressed.

4. Transparency

The data and processes that lead to the AI system's decisions should be properly documented to increase transparency, and organisations should openly communicate the limitations of the system. Of particular concern are "black boxes", i.e. models that generate an output or decision which cannot be explained to the user. In addition, users should always be made aware when they are communicating with AI rather than a human, and informed about the purpose, criteria and limitations of the decisions generated by the AI system.
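Where the modelling task allows it, one way to avoid the black-box problem is to use an inherently interpretable model whose output decomposes into per-feature contributions. The sketch below is illustrative only: it fits a scikit-learn logistic regression on invented toy data and prints each feature's contribution to the log-odds of an individual decision. The feature names are hypothetical, and nothing here is drawn from the ALTAI or any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: two features, binary outcome (illustrative only).
X = np.array([[0.2, 1.0], [0.4, 0.0], [0.9, 1.0], [0.8, 0.0],
              [0.1, 0.0], [0.7, 1.0], [0.3, 1.0], [0.6, 0.0]])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])
feature_names = ["documentation_score", "prior_refusal"]  # hypothetical

model = LogisticRegression().fit(X, y)

def explain(applicant):
    """Print each feature's contribution to the log-odds of the decision,
    so the output can be explained to the person it affects."""
    for name, contribution in zip(feature_names, model.coef_[0] * applicant):
        print(f"{name}: {contribution:+.2f}")
    print(f"baseline (intercept): {model.intercept_[0]:+.2f}")
    print(f"approval probability: {model.predict_proba([applicant])[0, 1]:.2f}")

explain(np.array([0.5, 1.0]))
```

A linear model will not always be accurate enough for the task at hand, but the design choice it represents, trading some predictive power for decisions that can be explained, is exactly the kind of question the ALTAI asks organisations to confront.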

5. Diversity, Non-Discrimination and Fairness

AI systems should avoid bias both in the input data they use and in their algorithmic design. Particularly for business-to-consumer AI, systems should be designed to be accessible to all end-users regardless of age, race, disability or other characteristics, tested accordingly, and refined taking into account user feedback.
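Outcome-rate comparisons, as in the earlier sketch, capture only one notion of fairness; another is that a system's error rates should not differ markedly between groups. The hypothetical sketch below computes per-group false positive and false negative rates from labelled evaluation records; all of the field names are assumptions made for the illustration.

```python
def error_rates_by_group(records, group_key="group"):
    """Per-group false positive and false negative rates (illustrative).
    Each record carries a group label, the model's prediction and the
    ground-truth outcome; the field names are hypothetical."""
    stats = {}
    for r in records:
        s = stats.setdefault(r[group_key], {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
        if r["actual"]:
            s["pos"] += 1
            s["fn"] += 0 if r["predicted"] else 1
        else:
            s["neg"] += 1
            s["fp"] += 1 if r["predicted"] else 0
    return {g: {"fpr": s["fp"] / s["neg"] if s["neg"] else 0.0,
                "fnr": s["fn"] / s["pos"] if s["pos"] else 0.0}
            for g, s in stats.items()}

records = [
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "A", "predicted": False, "actual": True},
    {"group": "B", "predicted": True,  "actual": True},
    {"group": "B", "predicted": True,  "actual": False},
]
print(error_rates_by_group(records))
# {'A': {'fpr': 0.0, 'fnr': 0.5}, 'B': {'fpr': 1.0, 'fnr': 0.0}}
```

Markedly different error rates across groups, as in this toy output, would suggest the system fails some groups more often than others, something headline accuracy figures can mask.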

6. Societal and Environmental Wellbeing

AI systems may affect our social relationships or undermine democratic processes. Systems may also have an impact on jobs, and such impacts should be anticipated and mitigated. Possible adverse environmental impacts and any negative effects on society should be assessed and minimised.

7. Accountability

Organisations should seek to identify and mitigate risks in a transparent way that can be explained to, and audited by, third parties. They should also provide accessible mechanisms for accountability and the possibility of redress where unjust or adverse impacts arise from AI systems.

Time will tell how widely and effectively the ALTAI will be employed by businesses and organisations involved in AI. However, this is a rapidly evolving area that is generating much attention. To what extent the ALTAI will influence the development of Scotland’s own AI strategy remains to be seen.