The UK Government’s Central Digital and Data Office (CDDO) has developed a transparency standard for the use of algorithms by government departments and public sector bodies.
It developed the standard with the Centre for Data Ethics and Innovation (CDEI), and says this makes the UK one of the first countries in the world to take such a step.
The standard will be piloted over the next few months by several public sector organisations, and further developed based on their feedback.
The standard delivers on commitments made in the Government’s National AI Strategy and National Data Strategy, and is aimed at building trust in the use of artificial intelligence.
The standard is organised into two tiers: a short description of the algorithmic tool, including how and why it is being used; and more detailed information on how it works, the datasets used to train the model and the level of human oversight.
It also comes with a data standard for algorithmic transparency, which specifies for each attribute a name, tier, category, type, description and reference number (the latter not yet set), along with a template and a guidance document. The pilot version of the guidance takes the user through checking whether their tool is in scope, filling in the template, and then sending it to the CDDO for help with the next steps.
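For illustration only, the sketch below shows one way a two-tier record of this kind might be structured, using just the elements named above (a short description of the tool, how and why it is used, how it works, the training datasets and the level of human oversight). The field names are hypothetical and are not taken from the published data standard.

```python
# Hypothetical sketch of a two-tier transparency record, based only on the
# elements described in the article; the actual CDDO data standard defines
# its own attribute names, tiers, categories, types and reference numbers.
from dataclasses import dataclass, field


@dataclass
class Tier1Summary:
    """Tier 1: short description of the algorithmic tool."""
    tool_name: str
    how_it_is_used: str
    why_it_is_used: str


@dataclass
class Tier2Detail:
    """Tier 2: more detailed information on the tool."""
    how_it_works: str
    training_datasets: list[str] = field(default_factory=list)
    human_oversight: str = ""


@dataclass
class TransparencyRecord:
    tier_1: Tier1Summary
    tier_2: Tier2Detail


# Example entry a team might draft before completing the official template.
record = TransparencyRecord(
    tier_1=Tier1Summary(
        tool_name="Example triage tool",
        how_it_is_used="Prioritises incoming casework for human review",
        why_it_is_used="Reduces processing time for routine applications",
    ),
    tier_2=Tier2Detail(
        how_it_works="Classifier that scores applications for review priority",
        training_datasets=["Historic casework outcomes (anonymised)"],
        human_oversight="All scores are reviewed by a caseworker before any decision",
    ),
)
```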
CDDO said this will help teams to be transparent in their use of algorithmic tools, especially in cases where the tools might have a legal or economic impact on individuals, and will promote trustworthy innovation across the public sector.
Data protection
Publication of the standard comes after the Government sought views on a proposal for transparency as part of its consultation on the future of the UK’s data protection regime. It is currently analysing the feedback received.
Lord Agnew, Minister of State at the Cabinet Office, said: “Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. However, they must be used in decision making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.
“I’m proud that we have today become one of the first countries in the world to publish a cross-government standard for algorithmic transparency, delivering on commitments made in the National Data Strategy and National AI Strategy, whilst setting an example for organisations across the UK.”
Representatives of several organisations active in the field have expressed their support for the standard.
Call for register
Among them, Imogen Parker, associate director (policy) at the Ada Lovelace Institute, said: “Meaningful transparency in the use of algorithmic tools in the public sector is an essential part of a trustworthy digital public sector. The Ada Lovelace Institute has called for a transparency register of public sector algorithms to allow the public – and civil society who act on their behalf – to know what systems are in use, where and why.
“The UK Government’s investment in developing this transparency standard is an important step towards achieving this objective, and a valuable contribution to the wider conversation on algorithmic accountability in the public sector. We look forward to seeing trials, tests and iterations, followed by government departments and public sector bodies publishing completed standards to support modelling and development of good practice.”