A call for participation: Building the ICO’s auditing framework for Artificial Intelligence

Applications of Artificial Intelligence (AI) are starting to permeate many aspects of our lives. I see new and innovative uses of this technology every day: in health care, recruitment, commerce… the list goes on and on.

We know the benefits that AI can bring to organisations and individuals. But there are risks too. And that’s what I want to talk about in this blog post.

The General Data Protection Regulation (GDPR) that came into effect in May was a much-needed modernisation of data protection law.

Its considerable focus on new technologies reflects the concerns of legislators here in the UK and throughout Europe about the personal and societal effect of powerful data-processing technology like profiling and automated decision-making.

The GDPR strengthens individuals’ rights when it comes to the way their personal data is processed by technologies such as AI. They have, in some circumstances, the right to object to profiling, and they have the right to challenge a decision made solely by a machine, for example.

The law requires organisations to build in data protection by design and to identify and address risks at the outset by completing data protection impact assessments. Privacy and innovation must sit side by side. One cannot be at the expense of the other.

That’s why AI is one of our top three strategic priorities.

And that’s why we’ve added to our already expert tech department by recruiting Dr Reuben Binns, our first Postdoctoral Research Fellow in AI. He will head a team from my Technology Policy and Innovation Directorate to develop our first auditing framework for AI.

The framework will give us a solid methodology to audit AI applications, to ensure they are transparent and fair, and to ensure that the necessary measures to assess and manage data protection risks arising from them are in place.

The framework will also inform future guidance for organisations to support the continued and innovative use of AI within the law. The guidance will complement existing resources, not least our award-winning Big Data and AI report.

But we don’t want to work alone. We’d like your input now, at the very start of our thinking.
Whether you’re a data scientist, an app developer or the head of a company that relies on AI to do business, and whether you’re from the private, public or third sector, we want you to join our open discussion about the genuine challenges arising from the adoption of AI. This will ensure the published framework is both conceptually sound and applicable to real-life situations.

We welcome your thoughts on the plans and approach we set out in this post. We will shortly publish another article here to outline the proposed framework structure, its key elements and focus areas.

On this new blog site you will be able to find regular updates on specific AI data protection challenges and on how our thinking in relation to the framework is developing. And we want your feedback. You can leave us a comment or email us directly.

The feedback you give us will help us shape our approach, research and priorities. We’ll use it to inform a formal consultation paper, which we expect to publish by January 2020. The final AI auditing framework and the associated guidance for firms are on track for publication by spring 2020.

We look forward to workingwith you!