Lawmakers in various places, including the state of Illinois and the city of San Francisco, are moving to restrain or prohibit facial recognition through specific legislation. Evan Selinger and Woodrow Hartzog, writing in The New York Times, have called for a ban. And now the French privacy regulator, the CNIL, appears to have found facial recognition trials in schools to be in breach of general data protection law.
How should we look at facial recognition and biometrics technology? Does it need special policy and legal treatment, or is it already covered by general privacy law?
In my view, most of the world already has an established legal and analytical framework for dealing with automated facial recognition. We can see face matching as the automated collection of fresh Personal Data, synthesised by algorithms and subject along the way to conventional privacy principles. Most data protection laws worldwide are technology neutral: they don’t care how Personal Data ends up in a database, but concern themselves with the transparency, proportionality and necessity of the collection.
Technology-neutral privacy legislation does not specifically define the term ‘collection’. So while collection might usually be associated with forms and questionnaires, we can interpret the regulations more broadly.
‘Collect’ does not necessarily denote a deliberate act, so collection can be said to have occurred passively, whenever and however Personal Data comes to appear in an information system. Therefore, the creation or synthesis of new Personal Data counts as a form of collection. Indeed, the Australian federal privacy regulator is now explicit about “Collection by Creation”.
So if Big Data can deliver brand-new insights about identifiable people, such as the fact that a supermarket customer may be pregnant, then those insights attract much the same protection under information privacy law as if the shopper had filled out a form expressly declaring herself to be pregnant.
Likewise, if automatic facial recognition leads to names being entered in a database alongside hitherto anonymous images, then new Personal Data can be said to have been collected. In turn, the data collector is required to obey the applicable privacy principles.
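To make the “collection by creation” moment concrete, here is a minimal sketch in Python. All of the names in it are hypothetical, and a real matcher compares biometric templates rather than looking up an ID; the point is the return value, an identified record that did not exist before the match ran.

```python
# Hypothetical sketch: the instant a face matcher links a name to a
# previously anonymous image, a new Personal Data record comes into
# existence -- i.e. is "collected", in privacy-law terms.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ImageRecord:
    image_id: str
    name: Optional[str] = None  # None while the image remains anonymous

def match_face(image: ImageRecord, gallery: Dict[str, str]) -> ImageRecord:
    """Stand-in matcher: looks the image up in a gallery of enrolled identities.

    A real system compares biometric templates; the dictionary lookup here
    merely stands in for that step.
    """
    name = gallery.get(image.image_id)  # stand-in for template comparison
    if name is not None:
        # New Personal Data is synthesised here: identity and image, linked.
        return ImageRecord(image.image_id, name)
    return image

# Example: an anonymous holiday photo becomes Personal Data at match time.
gallery = {"img-0042": "Alice Example"}
photo = ImageRecord("img-0042")
identified = match_face(photo, gallery)
print(identified)  # ImageRecord(image_id='img-0042', name='Alice Example')
```

On the view argued here, the privacy principles attach to `identified` just as they would to a form the individual had filled out herself.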
Which is to say, simply: unless it has the specific consent of the individuals concerned, an organisation should hold as little Personal Data as possible, confine itself to just the data needed for an express purpose, refrain from re-purposing that data, and let individuals know what data is held about them.
Data privacy principles should apply regardless of how the organisation came to have the Personal Data; that is, whether the collection was direct and explicit, or automated by face recognition. If Personal Data has come to be held by the organisation thanks to opaque (often proprietary) algorithms, then the public expectation of privacy protection is surely all the more acute.
Across cyberspace now, facial recognition processes are running 24x7, sight unseen, poring over billions of images, most of which were innocently uploaded to the cloud for fun, and creating new Personal Data in the form of identifications. Some of this activity is for law enforcement, and some is to train biometric algorithms; I think it likely that other facial matching is being used to feed signals into people-you-may-know algorithms. But almost all facial recognition occurs behind our backs, exploiting personal photos for secondary purposes without consent, and in stark breach of commonplace Data Protection rules.