NHS England is preparing to run a pilot project aimed at eliminating any bias in the use of artificial intelligence in healthcare.
The Department of Health and Social Care (DHSC) said this will involve testing the use of algorithmic impact assessments (AIAs) to address risks before algorithms are used on NHS data.
The announcement comes with the publication of a report on the use of AIAs in healthcare by the Ada Lovelace Institute, which has developed the methodology as a tool for assessing the societal impacts of AI.
The pilot will be carried out as part of the work of the NHS AI Lab, running across a number of its initiatives and used as part of the data access process for the National Covid-19 Chest Imaging Database (NCCID) and the proposed National Medical Imaging Platform (NMIP).
It will complement ongoing work by the ethics team at the lab on ensuring datasets for training and testing AI systems are diverse and inclusive.
Patient engagement
DHSC said the NHS will also support researchers and developers to engage patients and healthcare professionals at an early stage of AI development when there is greater flexibility to make adjustments and respond to concerns.
Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, said: "Building trust in the use of AI technologies for screening and diagnosis is fundamental if the NHS is to realise the benefits of AI. Through this pilot, we hope to demonstrate the value of supporting developers to meaningfully engage with patients and healthcare professionals much earlier in the process of bringing an AI system to market.
“The algorithmic impact assessment will prompt developers to explore and address the legal, social and ethical implications of their proposed AI systems as a condition of accessing NHS data. We anticipate that this will lead to improvements in AI systems and assure patients that their data is being used responsibly and for the public good.”
Octavia Reeve, interim lead of the Ada Lovelace Institute, said: “Algorithmic impact assessments have the potential to create greater accountability for the design and deployment of AI systems in healthcare, which can in turn build public trust in the use of these systems, mitigate risks of harm to people and groups, and maximise their potential for benefit.
“We hope that this research will generate further considerations for the use of AIAs in other public and private sector contexts.”
Roadmap and recommendations
The institute’s report on AIAs says they are not yet widely used in either the public or private sector and that no standard methodology exists, but that there is a growing consensus on the importance of principles for the development and use of AI systems. In response, it sets out a roadmap towards the implementation of AIAs.
It recommends that the NHS AI Lab adopt seven steps for the trials: a reflective exercise; application filtering; participatory workshops; AIA synthesis; making data access decisions; the publication of the completed AIA; and a further iteration.