Data Protection Impact Assessments and AI

Reuben Binns, our Research Fellow in Artificial Intelligence (AI), and Andrew Paterson, Principal Technology Adviser, discuss new security risks associated with AI, whereby an AI system might itself reveal the personal data of the people it was trained on.

This post is part of our ongoing Call for Input on developing the ICO framework for auditing AI. We encourage you to share your views by leaving a comment below or by emailing us at AIAuditingFramework@ico.org.uk.

In addition to exacerbating known data security risks, as we have discussed in a previous blog, AI can also introduce new and unfamiliar ones.

For example, it is normally assumed that the personal data used to train an AI system cannot be inferred simply by observing the predictions the system returns in response to new inputs. However, new types of privacy attacks on Machine Learning (ML) models suggest that this is sometimes possible.
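One well-studied example is a 'membership inference' attack, in which an attacker who can only query a model tries to work out whether a particular individual's record was in its training data, exploiting the fact that overfitted models tend to be more confident on records they were trained on. The sketch below illustrates the basic idea only; the dataset, model, and confidence threshold are all hypothetical choices made for demonstration, and real attacks are considerably more refined.

```python
# A minimal, illustrative membership inference sketch. The dataset,
# model, and threshold are assumptions made for this example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A toy dataset standing in for records about individuals.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_out, y_train, y_out = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# The "victim" model, left free to overfit its training data.
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def guess_membership(model, records, threshold=0.9):
    """Guess that a record was in the training set whenever the
    model's top predicted class probability exceeds the threshold."""
    confidence = model.predict_proba(records).max(axis=1)
    return confidence > threshold

# The attacker only observes the model's predictions, yet training
# records are flagged more often than records the model never saw.
print("flagged among training records:", guess_membership(model, X_train).mean())
print("flagged among unseen records:  ", guess_membership(model, X_out).mean())
```

On typical runs the training-set rate is close to 1.0 while the held-out rate is clearly lower; that gap is the information leak the attack exploits, and it narrows when the model is regularised to overfit less.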

In this update we will focus on tw…