
18/01/2010

Comments


Dear Sirs,

In the past, you have demonstrated an appreciation of Dr. Cavoukian’s concept of Privacy by Design on several occasions in this blog. With this new post, however, you assert a surprising, if not alarming, potential outcome of its application. In the spirit of collegial discourse, I respectfully offer a different perspective.

Few would dispute that world events have helped to create a condition of heightened vigilance, in an effort to deter and detect those who would inspire fear or do harm to otherwise innocent people. The implementation of technologies embodying the principles of Privacy by Design (PbD) is a unified approach to meeting these needs in what Dr. Cavoukian has called a Positive-Sum, or win/win, manner (as opposed to Zero-Sum, win/lose). In other words, PbD ensures that privacy is protected while valid security needs are met at the same time: not either privacy or security (i.e. win/lose), but both. I think that, as privacy advocates, we become marginalized by failing to acknowledge the legitimacy of security as a necessary and understandable requirement in today’s society.

Your argument that PbD “helps to drive a world of widespread screening” appears to rest on a particular economic view. In essence, you say that assurances of privacy will, in turn, inspire increased demand for security technology (e.g. Whole Body Image scanners), creating a growing market in which such devices become less expensive and apt to be installed in an increasingly wide variety of environments. Once they are broadly implemented, rescinding the promise of privacy leaves a large, invasive technological infrastructure in place.

It would seem that the central issue is not falling cost. In fact, it is wishful thinking to rely on cost as a long-term impediment to the adoption of any technology. Rather, time brings inevitable improvements in the effectiveness and efficiency of every technology and process, and these improvements require proper consideration of their impacts on privacy. Assuming that a technology or process meets a need, demand for it will naturally increase. Logic demands that we begin to work today, as this particular scanning technology emerges, to ensure that both the scanners and the processes employed to manage them embody the privacy-protective measures that society will be demanding.

PbD is not a rubber stamp to be applied to any invasive technology, as you appear to suggest. Rather, once the need for such a technology is properly demonstrated, PbD constitutes the collection of principles that we believe designers should follow in order to integrate privacy as an essential component, inseparable from the technology itself. Embedding privacy in this manner ensures that it cannot easily be stripped away in the future. We know it is a tall order, but one that we feel we must strive for.

Far from “the way to hell” or a “decline of privacy,” as you have described it, PbD, if executed, will in fact result in the opposite: the long-term protection of our essential freedoms. Designing privacy into technology and processes from the outset is a more flexible approach to ensuring long-term privacy than reliance on legislation or regulation alone, which rarely keeps pace with, and cannot fully envision, future technologies. Without seeking to achieve multiple objectives, in this case security and privacy, in a positive-sum manner, privacy will invariably decline, because it will always be forfeited in favour of security. That is a future we do not wish to contemplate.


Ken Anderson
Assistant Commissioner (Privacy)
Office of the Information & Privacy Commissioner of Ontario, Canada

I had the pleasure of discussing some of this with Chris prior to this most recent blog posting and the excellent responses above. Most briefly, what I see here is actually what my Irish colleagues twenty years ago called "violent agreement".

We seem to all agree that Bad Things are happening to privacy in the name of security. We also seem to agree that PETs, PbD, etc. can reduce the negative impact on privacy of any particular (supposedly) pro-security technology or process at a given moment.

The disagreement is over whether the apparent protection of privacy through PETs, PbD, etc. will itself result in a greater net long-term decrease in privacy than would occur in their absence.

Chris's argument appears to be a slippery-slope argument: we take the first few steps from the top of that slope, then later find ourselves sliding (more likely pushed and pulled) faster and farther down than we would ever have wanted, by the same forces which compelled us to begin the journey, as they point out: "you agreed that you got benefit before; well, here's more of the same!"

I argue that what is missing from this latter argument is something which our earlier discussions did in fact include, and on which we do agree. (By the way, if you haven't read Ian Kerr's speech transcript, go back and read it now. It is excellent stuff.) Specifically, it is that an idealistic threshold decision must always be made for any new technology, and for any new use of a technology.

Dr Cavoukian's PbD idea does not suggest that every potentially privacy-invasive technology is a fait accompli. Rather, she argues that, any such particular fait being accompli, PbD effects some reasonable limitation on the privacy damage. When is the idealistic threshold decision taken? *Before* the “D” (whether or not “Pb”).

What we as privacy leaders, whether academic/idealist or regulator/pragmatist (and I'll add a third category, into which my own career work fits: commercial/pragmatist), must [continue to] do is to be those idealists, asking the threshold questions (and doing the massive education job required for the threshold questions to reach, and be understood by, the right audiences), while we also work to minimize and then monitor the privacy impacts of all technologies, systems, and uses which pass the threshold questions.

The comment that “I suspect that in many instances, the main effect of PbD is to shift the privacy problem from data collection to data access or elsewhere in the system of checks and balances” nails it.

I think I have observed a phenomenon in the terminology of both PETs and PbD: the terms are used in parallel but disjoint senses in the policy and comp.sci communities. Crudely, the lawyers are playing at information science from “common sense” when they dream up legal frameworks. (But I am a techie at heart, so I would say that.)


Two key insights for me were that

(a) from the regulatory side, if data was not identifiable, it was not regulated, and this provided an incentive to anonymize; and

(b) it was clear that early ideas about an “Identity Protector” in the terminology of PETs comprehended only naive pseudonymity, rather than comp.sci ideas about zero-knowledge proofs and blind signatures, and that this weak concept of non-identifiability (which could be caricatured as a program for universal ID escrow and traceability) was a threat to the acceptance of genuine “non-zero-sum” privacy technologies (a sketch of the contrast follows this list).
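
To illustrate the contrast drawn in (b), here is a minimal, hypothetical Python sketch (the class and method names are my own invention, not from any real PET specification) of a naive “Identity Protector”. It replaces real identifiers with random pseudonyms, but the lookup table it must retain is precisely the built-in ID escrow caricatured above: whoever holds the table can re-identify everyone, which is what cryptographic techniques such as blind signatures and zero-knowledge proofs are designed to avoid.

```python
import secrets


class NaiveIdentityProtector:
    """A naive pseudonymiser: hides identities behind random tokens,
    but retains a mapping table that functions as universal ID escrow."""

    def __init__(self):
        # pseudonym -> real identity: the escrow table that makes
        # every record traceable by whoever holds it
        self._escrow = {}

    def pseudonymise(self, real_id: str) -> str:
        pseudonym = secrets.token_hex(8)  # random, unlinkable-looking token
        self._escrow[pseudonym] = real_id
        return pseudonym

    def reidentify(self, pseudonym: str) -> str:
        # The "protection" is trivially reversible with the table.
        return self._escrow[pseudonym]


protector = NaiveIdentityProtector()
token = protector.pseudonymise("alice@example.com")
print(token)                        # looks anonymous to outsiders...
print(protector.reidentify(token))  # ...but is fully traceable via escrow
```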

As far as Privacy by Design goes, it has achieved some mindshare with privacy commissioners (who have always seemed resistant to educating themselves about advanced PETs), but I think the major problem remains that the regulators would like to believe that PbD can be achieved in a “technology-neutral” way, at the level of process design and system architecture.

This idea is obsolete. You still need good process design and architecture, but a system which (appropriately) uses technology specifically designed to protect privacy can protect privacy better than a system which does not use such technology.

If PbD entrenches the idea that it is satisfactory not to use advanced privacy techniques from computer science (properly integrated), and that the lack of these can somehow be compensated for in the overall architectural design, then the ideology of PbD will be net harmful.

The comments to this entry are closed.
