The Information Commissioner’s Office (ICO) has launched its third consultation on how data protection law applies to the development and use of generative AI.
The new initiative focuses on how data protection’s accuracy principle applies to the outputs of generative AI models, and the impact that the accuracy of training data has on those outputs.
The ICO said that where people wrongly rely on generative AI models to provide factually accurate information about people, this can lead to misinformation, reputational damage and other harms.
John Edwards, the UK Information Commissioner, said: “In a world where misinformation is growing, we cannot allow misuse of generative AI to erode trust in the truth. Organisations developing and deploying generative AI must comply with data protection law – including our expectations on accuracy of personal information.”
Silicon Valley discussions
The third consultation comes as Edwards and his team visit leading tech firms in Silicon Valley to reinforce the ICO’s regulatory expectations around generative AI, and to seek progress from the industry on children’s privacy and online tracking.
The regulator has already considered the lawfulness of web scraping to train generative AI models and examined how the purpose limitation principle should apply to generative AI models.
Further consultations on information rights and controllership in generative AI are scheduled to follow in the summer.
The ICO is seeking views from a range of stakeholders, including developers and users of generative AI, legal advisors and consultants working in this area, civil society groups and other public bodies with an interest in generative AI. The consultation is open until 5pm on 10 May 2024.