“The road to hell is paved with good intentions” is a well-known English saying. I am convinced that it applies to the claim that one can have “security and privacy” by adopting Privacy by Design (PbD) techniques.
For example, Dr Cavoukian in her document on the use of whole body scanners at airports (see last week’s blog - 6th Jan) states that a PbD implementation “can satisfy security requirements without sacrificing (and perhaps enhancing) passenger privacy”. At one level this is correct. Obviously anonymous scanning is far less invasive than having someone physically pat you down or give you a strip search.
But the more you think about it, the more serious worries emerge. I raised some of them at the end of September (see the blog “Do privacy enhancing technologies have an Achilles Heel?”, Sept 30th), but I have to admit that Dr Cavoukian’s bold assertion really “did it for me”.
I think PbD accelerates a gradual, step-by-step progression that I am calling “the descent of privacy” (or “the ascent of security”, depending on your viewpoint). The decline (or ascent) has ten steps, which are as follows:
(1) PbD techniques can make airport scanners privacy neutral – assume Dr Cavoukian’s starting premise is correct.
(2) Why are airports special? What about main bus and rail termini, or ports? These locations also need these privacy-protecting scanners. Note that assertive and authoritative privacy reassurances can be uttered (e.g. that such all-body scanning is more “privacy enhancing” than the alternative).
(3) A market for all-body scanners develops – they become cheaper, quicker, more reliable, more innovative, and increasingly designed for specific locations or tasks.
(4) Screening gradually becomes an accepted norm (e.g. for travelling on public transport between major cities).
(5) The use of scanners thus expands. As there is no risk to privacy, everybody queuing to enter certain places can be screened on an increasingly cost-effective basis.
(6) As costs fall, screening can be further extended to reduce the security risks at other locations, especially at gatherings where orderly queues of people form (e.g. sporting events, theatres, entrances to government or iconic buildings).
(7) If there is no screening at a public event where people queue, then people begin to think there is a security risk. The demand for screening increases irrespective of whether the evidence shows that screening is effective.
(8) Authorities install screening because:
(a) as installation becomes relatively inexpensive, the political desire to respond to the demands of the electorate in Step 7 above can be satisfied;
(b) there is a genuine desire to be seen to be doing all one can to reduce the risks – the fact that there is privacy protection is a bonus;
(c) it provides the cover of having done “everything possible” if there were to be a terrorist attack and the press/public/politicians want to find “someone to blame”; and
(d) it reduces the risk to politicians of a serious attack occurring “on their watch”.
(9) People are screened regularly and become conditioned to expect to be screened.
(10) People are treated as a risk, or are not trusted, unless they have been screened.
So the end-point of the “privacy enhanced” descent is Step 10. Even though the process is protective of privacy, one has arrived at a position that can be restated in a more familiar guise: “If you have nothing to hide, you have nothing to fear”.
Now here comes the important bit. The anonymity associated with PbD cannot be guaranteed “for ever and ever, amen”; it will always hang by a tenuous thread. Harold Macmillan, UK Prime Minister in the late 1950s and early 1960s, was once asked by a journalist what was most likely to blow governments off course. He responded: “Events, dear boy, events”.
So it is with PbD. Once implemented in its full “privacy protected” glory, it is only a matter of waiting for an event that allows the argument for removing anonymity to prevail. The implementation of the infrastructure was anonymous – but after the pivotal event, and for all future time, it isn’t.
To be clear, I am not saying that full-body screening at airports is wrong – nor am I saying that every surveillance activity ends at Step 10 – nor even that PbD should not be considered.
What I am saying is that the promotion by privacy and security specialists of the idea that one can have both privacy and security is seriously misguided. It will be seized upon by those who need security at the expense of privacy and who also know that a future event is likely to make the case for removing anonymity. In other words, PbD is merely a catalyst that can accelerate the descent down these 10 Steps – or “the way to hell”, if you prefer.
Final comment: Even without employing PbD techniques, the surveillance associated with CCTV is arguably around Step 8 or 9; vetting against criminal records is at Step 9, or possibly Step 10. In reality, I think PbD cannot address the main problem: the serious democratic deficit in the system of data protection and human rights. See “Nine principles for assessing whether privacy is protected in a surveillance society (Parts 1 and 2)”, which can be accessed at http://www.amberhawk.com/policydoc.asp.
Dr Cavoukian's comments that these scanners would enhance privacy were fundamentally flawed in that she assumed the scanners would be the only method of security employed at airports.
This is not the case: they will be used alongside the traditional pat-down methods. So all of her comments that this would remove visual bias (such as security guards patting someone down because they look 'shifty') illustrate a complete misunderstanding of how these scanners will be used.
This, in my opinion, made her entire article completely moot.
Posted by: Alexander Hanff | 11/01/2010 at 03:11 PM
This analysis is less convincing than it should be, for two reasons: first, PbD can be better represented throughout; and second, the conditions behind the slippery-slope argument presented have not been fully elucidated.
To embrace PbD simply means that a system or technical implementation will not have privacy layered on as an afterthought, in response to external input such as complaints or regulatory duties – but rather that the controls and functionality required to assure privacy will be inherent. Those from the information security side have long known that this is the preferred manner in which to implement controls. If one wishes a system to provide privacy, PbD says this should be a design-level consideration.
How building such controls into systems raises a "descent of privacy" fear is beyond me; in all real-world work with system designers and implementers, design-level privacy efforts have in fact considerably raised the profile of privacy, and put it in a more actionable context for that audience. It also shifts privacy from a policy/procedures-driven exercise to one more integral to the system in place. Again, it's not clear how technically bolstering privacy capability, or making controls inherent to the system, places us in a "descent of privacy" situation.
The idea that PbD will somehow embolden or empower those who want "security at the expense of privacy" is strange; even a cursory look at the lay of the land shows that folks who see things this way require no such assistance with their agenda, and have much of the buy-in they'll ever need as things stand now.
The risk and cost analysis that's supposedly taking place during the slippery-slope portion of this post needs to be examined in greater detail. There is nothing about the use of PbD techniques that alters the threat environment, nor inherently reduces the cost of systems with potential privacy impact. All of that analysis still must take place prior to implementation, and the piece here does not convince me that PbD itself would skew the cost-benefit analysis behind enhanced adoption of such systems. There is also a litany of other factors that would have far more direct influence on such analysis. As the post mentions, CCTV has proliferated without the help of any such PbD effort -- which is precisely why CCTV is a prime candidate for privacy-enhancement (something that Cavoukian has spoken about in the past). Why not improve upon existing systems?
Finally, that those in control may make the argument to "remove anonymity" in response to events is not only irrelevant to older systems already in place, but is even more trivially accomplished within a policy/procedures model for these systems than within a PbD model where the system is built to offer inherent controls. If cost is being held up here as a powerful deciding factor in adoption, then surely abandoning simple policy and practice is less expensive than re-engineering the system itself.
Posted by: WD | 11/01/2010 at 05:58 PM
You are assuming that the cost of the scanner is the major limiter, and that, as it falls, scanners will become ubiquitous.
The salary of the person watching the screen is an important cost too, if the goal is to stop attacks in real time.
Posted by: Mark | 11/01/2010 at 07:35 PM
I certainly agree with the argument that applying privacy controls to a fundamentally invasive system does not make it privacy-friendly, but then I don't think that is what Privacy by Design is about, and moreover I'm not convinced that whole body scanning is a valid case study - Dr Cavoukian may have picked a poor example here.
To suggest that these body scanners undermine the broader principles of Privacy by Design is unreasonable, since there is no evidence of Privacy by Design in use here. Privacy by Design is about building systems that fundamentally respect and preserve personal information rights throughout every aspect of their processing. A machine that is designed to undress people is clearly never going to be privacy-friendly, no matter how one tries to apply controls: it is, however, possible to minimise the risk of the information leaking outside the operating environment, or being stored or aggregated with other information.
Furthermore, the system is a poor example because most reasonable individuals would tolerate a degree of privacy loss to preserve their safety in an environment such as air travel, where we generally have an imbalanced perception of risk because of the terrible visual impact of events such as 9/11.
I would almost (but not quite) like to see these systems operated without privacy controls, so that public outrage forces politicians to accept the unpalatable (and unglamorous) fact that the only effective way to catch terrorists in airports is to look for the terrorists, not their bombs. Detailed profiling, targeting of security inspections, watching body language - these skills have proven time and again to be much more useful than machines. But they're much less camera-friendly, and there will of course always be individuals who are unfairly targeted by profile-driven security checks.
The choices are tough. But this particular case doesn't herald the end of Privacy by Design.
Posted by: Toby Stevens | 12/01/2010 at 08:44 AM
If one had to choose between being scanned by one of these scanners or strip searched at the check-in desk, the scanner is the less bad option. I have no wish for a physical internal examination, still less one in full view of everyone traveling that day. Mind you, I would prefer not to be scanned at all, and I would like to be able to travel with a pair of nail scissors in my washbag.
If there is truly a need for such things to be done, I am relieved that someone is trying to make them less unpleasant than they might be. That said, less bad is less bad: it should not be confused with being good.
Posted by: Simon | 13/01/2010 at 01:37 AM
The Center for Democracy and Technology (CDT), in their Dec 21 testimony and submission to the FTC Roundtable on Consumer Privacy, strongly endorsed Dr. Cavoukian's Privacy by Design approach. Details:
Overview: www.cdt.org/blogs/erica-newland/privacy-design
Submission: www.cdt.org/files/pdfs/20091221_ftc_comments_privacy_design.pdf
Posted by: Frank | 31/01/2010 at 02:50 AM