
Almost three quarters of the UK public say that laws and regulation would increase their comfort with AI, according to the results of a survey published by the Ada Lovelace Institute and the Alan Turing Institute.
The public attitudes survey, involving 3,513 UK residents and part of the Public Voices in AI programme funded by UK Research and Innovation, showed that 72% are in favour of regulating the use of the technology.
This is up from 62% in a survey carried out in 2022, before the release of ChatGPT and other chatbots based on large language models (LLMs).
The institutes said the survey found that public awareness of different AI uses varies widely. While 93% have heard of driverless cars and 90% of facial recognition in policing, only 18% were aware of the use of AI for welfare benefits assessments.
It also found that LLMs have gone mainstream, with 61% saying they have heard of them and 40% having used them, demonstrating rapid adoption since their release to the public in 2022.
Benefits and concerns
Perceptions of overall benefits have been stable since the 2022 survey, with the most commonly reported being speed and efficiency improvements. But levels of concern have increased across all six uses of AI asked about in both surveys, with common concerns being around overreliance on technology, mistakes being made and lack of transparency in decision making.
The latter concern is particularly strong, with 83% of respondents saying they were concerned about public sector bodies sharing their data with private companies to train AI systems.
When asked about the extent to which they felt their views and values are represented in current decisions being made about AI and how it affects their lives, 50% said they do not feel represented.
The survey also revealed that exposure to harms from AI is widespread: 67% reported that they have encountered some form of AI-related harm at least a few times, with false information (61%), financial fraud (58%) and deepfakes (58%) being the most common.
This feeds public demand for laws, regulation and action on AI policy: 88% said they believe it is important that government or regulators have the power to stop the use of an AI product if it is deemed to pose a risk of serious harm to the public, and over 75% said government or independent regulators, rather than private companies alone, should oversee AI safety.
Appeals and transparency
The survey also found support for the right to appeal against AI-based decisions, and for more transparency: 65% said that procedures for appealing decisions, and 61% said that more information about how AI has been used to make a decision, would make them more comfortable.
Recognising that much of the existing evidence on public attitudes to AI does not adequately represent marginalised groups, the survey deliberately oversampled three underrepresented demographics: people from low income backgrounds; digitally excluded people; and people from minority ethnic groups.
It found that attitudes vary between different demographics, with underrepresented populations reporting more concern and perceiving AI as less beneficial. For example, 57% of Black people and 52% of Asian people expressed concern about facial recognition in policing, compared to 39% in the general population.
Across all of the AI use cases asked about in the survey, people on lower incomes perceived them as less beneficial than people on higher incomes.
Responsible deployment
Octavia Field Reid, associate director at the Ada Lovelace Institute, said: “This new evidence shows that – for AI to be developed and deployed responsibly – it needs to take account of public expectations, concerns and experiences.
“The Government’s current inaction in legislating to address the potential risks and harms of AI technologies is in direct contrast to public concerns and a growing desire for regulation. This gap between policy and public expectations creates a risk of backlash, particularly from minoritised groups and those most affected by AI harms, which would hinder the adoption of AI and the realisation of its benefits.
“There will be no greater barrier to delivering on the potential of AI than a lack of public trust.”
Professor Helen Margetts, programme director for public policy at the Alan Turing Institute, said: “To realise the many opportunities and benefits of AI, it will be important to build consideration of public views and experiences into decision making about AI.
“These findings suggest the importance of the government’s promise in the AI Action Plan to fund regulators to scale up their AI capabilities and expertise, which should foster public trust.
“The findings also highlight the need to tackle the differential expectations and experiences of those on lower incomes, so that they gain the same benefits as high income groups from the latest generation of AI.”