The Information Commissioner’s Office (ICO) has highlighted three key considerations for organisations looking to implement federated learning as a privacy-enhancing technology (PET).
It has published a blog post on the issue, complementing its guidance on the use of PETs and following its recent release of a cost-benefit awareness tool for the technologies.
Federated learning is a technique that allows different parties to train AI models on their own information (‘local’ models). They then combine some of the patterns that those models have identified (known as ‘gradients’) into a single, more accurate ‘global’ model, without having to share any training information with each other.
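For illustration, the following minimal sketch shows the pattern being described, using federated averaging with a simple linear model. It is not taken from the ICO material; the function names, parameters and data are all hypothetical.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1):
    """One step of gradient descent on a party's own data
    (linear model, squared-error loss) -- the 'local' training."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, parties):
    """Each party trains locally; only the resulting model updates are
    averaged into the 'global' model. Raw data never leaves a party."""
    updates = [local_update(global_weights.copy(), X, y) for X, y in parties]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
parties = []
for _ in range(3):  # three parties, each holding its own private dataset
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    parties.append((X, y))

w = np.zeros(2)
for _ in range(200):
    w = federated_round(w, parties)
print(w)  # approaches true_w without the parties pooling raw data
```

In a real deployment the parties would typically send gradients or model deltas to a coordinating server rather than averaging locally, but the privacy point is the same: only model parameters, never training records, leave each party.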
Three steps
Nick Patterson of the ICO Innovation Hub said the first step before using any PET should be to run a data protection impact assessment, both to determine whether the technology can mitigate the risks to people and to assess the risk of federated learning indirectly exposing the identifiable information used for local training of machine learning models.
Secondly, if a risk is identified, the use of federated learning should be combined with other PETs, taking into account the aims, maturity, scalability and cost of the technology. The options include secure multiparty computation, homomorphic encryption, differential privacy and secure communications protocols.
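As a toy illustration of one such combination, the sketch below pairs federated updates with SMPC-style secure aggregation using pairwise cancelling masks, so a coordinating server learns only the sum of the parties’ updates and never any individual one. It is a simplified sketch assuming honest parties with pre-shared randomness, not a production protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
updates = [rng.normal(size=3) for _ in range(3)]  # each party's private update

# Each pair of parties agrees on a shared random mask; one adds it,
# the other subtracts it, so every mask cancels in the aggregate.
n = len(updates)
masked = [u.copy() for u in updates]
for i in range(n):
    for j in range(i + 1, n):
        pair_mask = rng.normal(size=3)  # secret shared only by parties i and j
        masked[i] += pair_mask
        masked[j] -= pair_mask

# The server sees only masked values, yet recovers the exact sum.
assert np.allclose(sum(masked), sum(updates))
print(sum(masked) / n)  # average update, with no individual update revealed
```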
Thirdly, a motivated intruder test should be used to assess the risk of identifying people at each stage of the data lifecycle.
In addition, differential privacy – which provides a measurable guarantee of individuals’ indistinguishability – can be used to mitigate the risk of re-identification by anonymising outputs. It adds statistical noise that hides whether any particular person’s information was used in a training task.
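A rough sketch of that noise-adding step might look as follows: each party’s update is clipped to bound any one person’s influence, and Gaussian noise scaled to that bound is added to the aggregate, in the style of DP-SGD. The parameter values are illustrative only; calibrating them to achieve a specific privacy guarantee is a separate exercise.

```python
import numpy as np

def dp_aggregate(updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip each update, sum, then add Gaussian noise scaled to the
    clipping bound (the 'sensitivity') before averaging."""
    rng = rng or np.random.default_rng()
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(updates)
```

Because the noise is calibrated to the clipping bound, the released average looks almost the same whether or not any single person’s record contributed, which is the sense in which differential privacy hides individual participation.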
Patterson said that PETs provide great opportunities to build trust in the use of personal information but “are not a silver bullet, and security and privacy risks can remain”.