Civil servants should be open to using generative AI, but be cautious about the information entered into large language models (LLMs) such as ChatGPT, according to new guidance.
The Cabinet Office has published some general principles for the use of the technology, including that sensitive information or personal data should never be entered into the tools, and that three ‘hows’ should be considered whenever it is used.
These are: how a question will be used by a system; how any answers could be misleading, even when they look credible; and how generative AI operates. A key aspect of the last of these is that while the technology chooses words from options it considers statistically plausible, it does not understand context or recognise bias, and this can distort results.
As a result, any outputs should always be treated with caution and users should draw on their own judgement and knowledge.
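To illustrate the point about plausibility, consider a toy sketch of how word-by-word generation works. The transition table below is invented for illustration and is vastly simpler than any real LLM, but it shows the essential mechanism: each next word is sampled purely by statistical likelihood, with no check on whether the resulting sentence is true.

```python
import random

# Hypothetical word-transition table: each word maps to plausible successors
# and their probabilities. A real LLM learns billions of such statistics;
# these values are invented for illustration.
transitions = {
    "the":      {"minister": 0.6, "policy": 0.4},
    "minister": {"announced": 0.7, "resigned": 0.3},
    "policy":   {"failed": 0.5, "passed": 0.5},
}

def next_word(word):
    """Sample a successor purely by statistical plausibility."""
    options = transitions.get(word)
    if options is None:
        return None
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

sentence = ["the"]
while (word := next_word(sentence[-1])) is not None:
    sentence.append(word)

# e.g. "the minister announced" - fluent and credible-looking,
# but the model has no idea whether any of it is factually true
print(" ".join(sentence))
```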
The guidance includes details of how to use generative AI for different purposes – research, summarising information, developing code and textual data analysis – and provides examples of two inappropriate uses.
Authoring messages, inputting data
The first is authoring messages or summarising facts for others, such as in a paper on a policy position: the position itself would have to be entered into the tool, contravening the principle against entering sensitive material.
The other is inputting data for analysis without the consent of the data owner.
In an accompanying blogpost, the UK Government’s chief technology officer David Knott said: “I’m pleased to see that we’re actively encouraging and enabling civil servants to get familiar with the technology and the benefits it could provide, but to do so in a safe and responsible manner.”
He added that the guidance will be iterative and will be reviewed after six months to reflect emerging practices and a better understanding of the technology.