Projects to improve the fairness of AI systems in higher education and to use the technology in addressing bias in healthcare are among those to have won funding under the Government’s AI Fairness Innovation Challenge.
The Department for Science, Innovation and Technology (DSIT) has announced the winners of the competition, saying the fund has been set up to find ways to deal with statistical, human and structural bias and discrimination in AI systems.
The projects were selected by expert assessors chosen by DSIT and Innovate UK and will receive up to £130,000 each.
The Open University is leading the higher education project, which aims to develop a solution to improve the fairness of AI systems in the sector and a framework for universities to develop their own systems.
King’s College London is behind the healthcare initiative, which will initially focus on mitigating bias in early warning systems used to predict cardiac arrest in hospital wards, based on the CogStack Foresight generative AI model powered by MedGPT.
Building public trust
Technology Secretary Michelle Donelan said: “Our AI white paper is fostering greater public trust in the development of AI, while encouraging a growing number of people and organisations to tap into its potential.
“The winners of the Fairness Innovation Challenge will now develop state-of-the-art solutions, putting the UK at the forefront of the development of AI for public good.”
The other two projects to win support are: the Alan Turing Institute’s initiative to create a fairness toolkit for SMEs and developers to self-assess the use of large language models in the financial sector; and Coefficient Systems’ development of a solution to reduce bias in algorithms for the automated screening of CVs in recruitment.
The launch of the competition reflects significant concerns over the potential for bias as public and private sector organisations increase their use of AI solutions. As far back as 2020 the Information Commissioner's Office outlined measures that could reduce bias in the use of the technology.