
A starting point for AI in citizen services

19/02/24

Industry Voice


Artificial intelligence has huge potential for the public sector, but there is a lot to learn before it can be fulfilled, writes Rob McCarthy, CEO and founder of GOSS

Artificial intelligence is moving into the mainstream of thinking about public services. The emergence last year of ChatGPT triggered a surge of interest in generative AI and large language models (LLMs), and added fuel to the existing focus on machine learning.

It has come as the sector faces an urgent need to do things differently. Demand for its services is intensifying while its financial outlook remains bleak, and it has to harness new technology to produce major time savings for staff and ease the pressure on its finances.

AI can ease that pressure and deliver better services for citizens. A GOSS survey of 327 public sector officials found that 29% said their organisations are already using the technology for digital self-service, and 40% said there are plans to do so in the next 12 months.

But it is an unfamiliar technology to most, and there is a range of issues to address and a lot to learn. This provided the focus of a recent UKA Live discussion supported by GOSS, involving myself; director of the Socitm Institute Sam Smith; director of digital services at Norfolk County Council Geoff Connell; digital services manager at West Berkshire Council Phil Rumens; and UKA publisher Helen Olsen Bedford.

It laid out a useful distinction between machine learning’s big strength in handling quantitative data and generative AI’s more qualitative focus in creating imagery, audio, text and code.

Analysing knowledge

As examples of the latter, Connell cited the potential to use it to analyse an internal knowledge base to support call handlers, and Rumens described West Berkshire’s use of ChatGPT to create job ads and LinkedIn posts from recruitment data. He made the point that the cost per transaction of such efforts is relatively small – commodity AI platforms are rapidly reducing costs and open source platforms are available – but said the results are variable, ranking around a B-, and need some human quality assurance.

This could soon be overcome by innovations in which AI agents are developed as ‘guardrails’ for the outputs of other AI agents. While a master agent looks at a specific problem and works out an initial solution, others could be programmed to apply different skills to related problems in different contexts, with a further agent assessing the quality of their outputs and discarding those that score badly.

While the use cases of this are still to emerge, it could soon enable the platforms to produce more results that would be rated as B+ or even A.
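The guardrail pattern described above can be sketched in a few lines. This is a minimal illustration only, not any specific product: the worker and judge functions below are placeholder assumptions standing in for real model calls, with the judge filtering out drafts that score below a quality threshold.

```python
from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    score: float = 0.0


def worker_agents(task: str) -> list[Draft]:
    # Stand-in: each "agent" applies a different style to the same task.
    styles = ["formal", "plain-English", "bulleted"]
    return [Draft(text=f"[{s}] response to: {task}") for s in styles]


def judge_agent(draft: Draft) -> float:
    # Stand-in quality score; a real judge would be another model call.
    return 0.9 if "plain-English" in draft.text else 0.5


def guardrailed_answer(task: str, threshold: float = 0.7) -> list[Draft]:
    # Score every draft, then keep only those above the threshold.
    drafts = worker_agents(task)
    for d in drafts:
        d.score = judge_agent(d)
    return [d for d in drafts if d.score >= threshold]


accepted = guardrailed_answer("summarise the housing support policy")
```

In a real deployment the rejected drafts could also be fed back to the workers as training signal, which is the ‘teaching’ element described above.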

There is also a potential to ‘mash up’ machine learning and generative AI. Connell outlined a possibility of using the former to identify people at risk of becoming homeless, with details of their circumstances and location, then the latter to produce tailored correspondence on where they could find support locally. Local authorities may not be able to mobilise people for every case, but they could use the technology to direct individuals to where support is available.
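That mash-up can be illustrated with a minimal sketch. Both functions below are placeholder assumptions standing in for a trained risk model and a generative model, and any real deployment would keep a human in the loop before contact is made.

```python
def risk_model(resident: dict) -> float:
    # Stand-in for a machine learning model scoring homelessness risk.
    return 0.8 if resident["arrears_months"] >= 2 else 0.1


def draft_letter(resident: dict) -> str:
    # Stand-in for a generative step producing tailored correspondence.
    return (f"Dear {resident['name']}, support services are available "
            f"near {resident['area']}. Please get in touch.")


# Illustrative records only; a council would draw these from case data.
residents = [
    {"name": "A. Example", "area": "Norwich", "arrears_months": 3},
    {"name": "B. Example", "area": "Thetford", "arrears_months": 0},
]

# Machine learning flags who is at risk; generative AI drafts the letter.
letters = [draft_letter(r) for r in residents if risk_model(r) > 0.5]
```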

Trust and transparency

There are also familiar issues to be addressed, particularly the need to build public trust. The discussion reiterated the need for transparency around the use of AI and the consensus that there still has to be a human role in any decision that affects individuals.

Smith pointed to Socitm’s draft policy guidance with a list of ‘dos and don’ts’ for deployments of AI, while emphasising that this does not smother the potential for using the technology to digest huge amounts of information in services such as social care.

There is also the question of whether public organisations will have adequate staff skills. As with all digital roles they struggle to compete financially with big private sector bodies, and the survey revealed that 72% of respondents saw a lack of in-house skills as a barrier to digital transformation.

Possible solutions are emerging: Rumens pointed to the potential in low code platforms for AI that do not require heavy coding skills, and Connell pointed to Norfolk’s creation of an in-house AI board in which the HR director has taken a role.

Beneath all this is the need to learn as quickly as possible what does and does not work, to provide evidence for investing time and money in any initiatives. There is a consensus on the need for public sector bodies to learn from each other, using community groups such as those in Socitm and Local Digital, and a recognition that technology suppliers can also provide valuable lessons through the experiences of their clients.

Principles for deployments

It all conveys a mixture of opportunity, complexity and challenge, and there are some principles that should be at the forefront of thinking about deployments of AI.

One is that it is not a silver bullet. Potential benefits have to be weighed against the challenges of using an emerging technology, unfamiliarity and the issue of public trust. Also, it will not solve a problem if a service is badly designed or there are weaknesses in the policy on which it is based.

But the potential is there, and it is best to begin by identifying a problem then talking to a supplier about how it can be solved. It might be with a more established technology such as robotic process automation or in the new possibilities provided by AI. It is helped by assessing the existing costs of a process, which makes it easier to estimate the savings that could be made through an AI deployment.
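That costing exercise can be as simple as back-of-envelope arithmetic. The figures below are purely illustrative assumptions, not GOSS or survey data: a process handled manually at a given cost per transaction, with AI self-service deflecting a share of the volume at a lower unit cost.

```python
# Illustrative assumptions only.
manual_cost = 4.00   # £ per transaction in the current process
ai_cost = 0.50       # £ per transaction via AI self-service
volume = 50_000      # transactions per year
deflection = 0.60    # share of transactions AI could handle

# Annual saving = deflected volume x unit cost difference.
annual_saving = volume * deflection * (manual_cost - ai_cost)
# → £105,000 per year under these assumptions
```

Even a rough figure like this gives a baseline against which a pilot’s actual performance can be judged.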

Another step is to assess the risk in any use case. There are some, notably those involving sensitive interactions with the public, in which the stakes are high. But there are others for repetitive processes and routine operations that involve a lower risk, a better chance of a safe deployment and potentially a quick payback.

The other is to look for an AI platform that makes it possible to share solutions with other organisations. The GOSS platform includes an import/export capability that makes this possible among its users, helping to bring a community together around the majority of use cases that are broadly similar across organisations. This enables users to adopt other solutions while adapting to any differences and provides scope to build different mechanisms for the less conventional cases.

Overall, AI is now showing terrific potential – on a par with the early days of the internet – for the public sector to transform how it interacts with citizens, optimises operations and provides better services. That is crucial in dealing with the intense pressures on the sector, and it makes now the time to embrace the technology.

For an example of a successful local authority application of AI, you can read here how West Berkshire Council used the GOSS Digital Platform with ChatGPT to produce job ads and manage citizen enquiries. You can also find the full results of the survey on digital self-service challenges here.
