The Department for Science, Innovation and Technology (DSIT) has launched a portfolio of AI assurance techniques aimed at building trust in the use of the technology.
The portfolio has been developed by the Centre for Data Ethics and Innovation (CDEI) and IT industry association techUK as a resource for anyone involved in designing, developing, deploying or procuring AI-enabled systems.
The portfolio document says assurance is about meeting criteria such as regulation, standards, ethical guidelines and organisational values. It identifies techniques in areas including impact assessment and evaluation, bias and compliance audits, certification, conformity assessment, performance testing and formal verification.
It also cites a series of case studies illustrating how the techniques can be applied in practice.
Research findings
Nuala Polo, senior policy adviser at the CDEI, said the centre has conducted extensive research into attitudes towards tools for trustworthy AI and their take-up.
“One of the key barriers identified in this research was a significant lack of knowledge and skills regarding AI assurance,” she said in a blogpost. “Research participants reported that even if they want to assure their systems, they often don’t know what assurance techniques exist, or how these might be applied in practice across different contexts and use cases.
“To address this lack of knowledge and help industry to navigate the AI assurance landscape, we are pleased to announce the launch of the DSIT portfolio of AI assurance techniques.”