A team within the Department for Science, Innovation and Technology (DSIT) has developed a tool to help public sector bodies innovate responsibly with data and AI.
Named the Model for Responsible Innovation, it has been created by the Responsible Technology Adoption (RTA) Unit in DSIT, and is already being used internally in sessions to map data and AI projects, identify possible risks and prioritise actions to ensure a trustworthy approach.
The introduction to the model says it does two things. One is to set out a vision of what responsible innovation in AI looks like, along with the fundamentals and conditions needed to ensure it is built on a trustworthy approach.
The other is to serve as a practical tool that public sector teams can use to rapidly identify the potential risks in the development and deployment of AI and understand how to mitigate them.
Supporting innovation
“We are now making the model publicly available to enable more teams to take advantage of this approach,” the document says.
“Using the model to build trustworthiness into AI tools across the public sector will help the UK to innovate with data driven technologies whilst addressing the risks and building trust.”
Details include a brief guide to trustworthiness, the fundamentals – which include transparency, accountability, security and societal wellbeing – and the conditions such as meaningful engagement, robust technical design, access to appropriate and available data, and effective governance.
Four underlying themes are also identified: legal compliance, understanding, continuous evaluation and organisational culture.
The RTA Unit is also offering ‘red teaming’ workshops, created to guide public sector teams through the ethical risks associated with projects.
DSIT has also recently announced the planned development of a self-assessment tool for organisations to take a responsible approach in their development and use of AI systems. It is currently running a consultation on the plan.