This is an edited extract from a chapter I contributed to Darek Kloza and Dan Svantesson's new book "Trans-Atlantic Data Privacy Relations as a Challenge for Democracy".
Privacy is contentious today. Some say the information age has brought real changes to privacy norms. With so much private data leaking through breaches, accidents and digital business practices, it’s often said that ‘the genie is out of the bottle’. Many think privacy has become hopeless. Yet in Europe and many other jurisdictions, privacy rights are still being firmly and freshly enforced, even against the very latest digital practices.
For security pros coming to grips with privacy, the place to start is the concept of Personally Identifiable Information (PII). The threshold for data counting as PII is low: any data about a person whose identity is readily apparent constitutes PII in most places, regardless of where it came from, or who might be said to ‘own’ it. This is not obvious to engineers without legal training, who may form a more casual understanding of what ‘private’ means. So it seems paradoxical to them that the words ‘public’ and ‘private’ don’t figure at all in laws like Australia’s Privacy Act!
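To make that threshold concrete, here is a minimal sketch, in Python, of how the identifiability test might look to an engineer. The identifier list and the classification rule are illustrative assumptions of mine, not any statute’s actual wording; the point is simply that the test turns on whether a person is readily identifiable, not on whether the data was ever ‘secret’ or ‘public’.

```python
# Hypothetical illustration of the low PII threshold described above.
# The identifier list and the rule are assumptions, not legal advice.

# Fields that, on their own, make a person's identity readily apparent.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "government_id"}

def is_pii(record: dict) -> bool:
    """Treat a record as PII if any readily identifying field is present.

    Note the test is identifiability, not secrecy: where the data came
    from, and who 'owns' it, are irrelevant.
    """
    return any(record.get(field) for field in DIRECT_IDENTIFIERS)

# Innocuous-looking metadata becomes PII once it is about an
# identifiable person.
print(is_pii({"name": "Alice Smith", "wifi_ssid": "cafe-guest"}))   # True
print(is_pii({"wifi_ssid": "cafe-guest", "time": "2016-01-01"}))    # False
# (Though in practice, quasi-identifiers can combine to re-identify
# someone, so real classifiers are more conservative than this.)
```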
There is a cynical myth that ‘Technology outpaces the Law’. In practice, it is the law that challenges technology, not the other way around! The grandiose claim that the ‘law cannot keep up with technology’ is often a rhetorical device used to embolden developers and entrepreneurs. New technologies can make it easier to break old laws, but the legal principles in most cases still stand. If privacy is the fundamental right to be let alone, then there is nothing intrinsic to technology that supersedes that right. It turns out that technology-neutral privacy laws framed over 30 years ago remain powerful against very modern trespasses: wi-fi snooping by Google, over-zealous use of biometrics by Facebook, and intrusive search results dredged up from our deep dark pasts by the same all-seeing Google. So technology really only outpaces policing.
One of the leading efforts to inculcate privacy into engineering practice has been the ‘Privacy by Design’ movement (PbD), started in the 1990s by Ontario’s Information and Privacy Commissioner, Ann Cavoukian. PbD seeks to embed privacy ‘into the design specifications of technologies, business practices, and physical infrastructures’. As such it is basically the same good idea as building in security, or building in quality, because retrofitting these things later leads to higher costs and disappointing outcomes.
In my view, the problem with the Privacy by Design manifesto is its idealism. Privacy is actually full of contradictions and competing interests, and we need to be more mature about this.
Just look at the cornerstone privacy principles. Collection Limitation, for example, can contradict the security instinct to retain as much data as possible, in case it proves useful one day. Disclosure Limitation can conflict with usability, because it means PII may be siloed and less freely available to other applications. And above all, Use Limitation can restrict the revenue opportunities latent in all the raw material that digital systems gather. Businesses today accumulate masses of personal information (sometimes inadvertently, sometimes by design) as a by-product of online transactions; real privacy means resisting the temptation to exploit it (as Apple promises to do). Privacy at its heart is about restraint. Privacy is less about what you do with personal information than what you don’t do with it.
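As a concrete illustration of what Collection Limitation and Use Limitation might look like in code, here is a minimal sketch. The purposes, field names and policy table are assumptions of mine for illustration, not drawn from any particular law or product.

```python
# Hedged sketch: enforcing Collection Limitation and Use Limitation.
# The purpose names and policy table below are illustrative assumptions.

ALLOWED_FIELDS = {
    # Collection Limitation: collect only what each stated purpose needs.
    "shipping": {"name", "address"},
    "marketing": {"email"},
}

def collect(purpose: str, submitted: dict) -> dict:
    """Keep only the fields permitted for this purpose; drop the rest."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in submitted.items() if k in allowed}

def use(record: dict, collected_for: str, used_for: str) -> dict:
    """Use Limitation: PII collected for one purpose is not silently
    re-purposed, even though the raw material is sitting right there."""
    if used_for != collected_for:
        raise PermissionError(
            f"collected for {collected_for!r}; cannot use for {used_for!r}")
    return record

# The email submitted with an order is dropped at collection time...
order = collect("shipping", {"name": "Alice", "address": "1 Main St",
                             "email": "alice@example.com"})
use(order, collected_for="shipping", used_for="shipping")     # fine
# use(order, collected_for="shipping", used_for="marketing")  # raises
```

The sketch embodies the restraint described above: the code declines to keep, and refuses to re-purpose, data it could easily have exploited, which is exactly the trade-off against security’s retain-everything instinct and against revenue.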
PbD naively asserts that privacy can be maximised along with security and other system objectives, as a ‘positive sum’ game. But it is better that engineers be aware of the trade-offs that privacy can entail, and that they be equipped to deal with the real world compromises it requires, just as they do with other design requirements. Privacy can take its place in engineering alongside all the other real world considerations that need to be carefully weighed, including cost, usability, efficiency, profitability, and security.