I am taking the opportunity to post on the blog a response from Ken Anderson, Assistant Commissioner, Privacy, Office of the Information & Privacy Commissioner of Ontario, Canada, following my comments on Privacy by Design (PbD).
There are in fact two letters – the earlier of which I will post as a comment to this blog.
Just to set the scene regarding my concerns about PbD (which I think resemble the discussions on nuclear disarmament in the 1970s): in the Nixon–Brezhnev/Kosygin era, the USSR wanted to install an anti-ballistic missile system around Moscow as a defence against the USA’s intercontinental ballistic missiles (ICBMs). The USA said that such an installation would be an act of nuclear proliferation. The USSR argued that it wasn’t: if the system was purely defensive, how could it be an act of proliferation?
The USA then said that if the USSR installed its defensive system, the USA’s response would be to build enough additional ICBMs to guarantee that, in the event of nuclear war, Moscow’s defensive system would be swamped. This is why the USA argued that a defensive system was an act of proliferation.
I think there is an element of the idea underpinning Brezhnev’s thinking in PbD. If, for example, a CCTV system only records privacy-protective images, you can install many CCTV systems. The privacy problem is then transferred from the privacy-protecting CCTV images to the question of who can have access to the unprotected image. This gets you back to political and legal issues (e.g. is a warrant needed? Can images be released voluntarily to help the police with respect to any crime?).
I suspect that in many instances, the main effect of PbD is to shift the privacy problem from data collection to data access, or elsewhere in the system of checks and balances. I also think that PbD privacy guarantees are vulnerable to legislative changes that require access to unencrypted personal data. In other words, you get back to basic issues such as the powers of the regulator, Parliamentary scrutiny and so on. An insightful speech by Dr Ian Kerr (iankerr.ca/content/view/526/1/) raises a similar concern (recommended).
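To make the “problem moves to access” point concrete, here is a minimal sketch (in Python, using the third-party PyNaCl library; the scenario and all names are purely illustrative) of a camera that can only ever store encrypted images. Note that the privacy question does not disappear; it moves wholesale to the rules governing release of the custodian’s key.

    from nacl.public import PrivateKey, SealedBox  # pip install pynacl

    # Illustrative only: a camera that encrypts every frame to a
    # custodian's public key and can never decrypt the frames itself.
    # The privacy question has not disappeared - it has moved wholesale
    # to the rules governing release of the custodian's private key.

    custodian_secret = PrivateKey.generate()        # held by, say, a court
    custodian_public = custodian_secret.public_key  # installed in every camera

    def capture(frame: bytes) -> bytes:
        # Only ciphertext is ever stored; no decryption key on the device.
        return SealedBox(custodian_public).encrypt(frame)

    def release(ciphertext: bytes) -> bytes:
        # Only the key holder can recover the image. Whether a warrant is
        # needed before calling this is a legal question, not a technical one.
        return SealedBox(custodian_secret).decrypt(ciphertext)

    ct = capture(b"<jpeg bytes>")
    assert release(ct) == b"<jpeg bytes>"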
The target of the blog was not PbD, but the assertion that you can have privacy and security. I need a lot of convincing with respect to that statement because I think it is wrong.
RESPONSE FROM THE ASSISTANT COMMISSIONER
Dear Chris,
I appreciated your note of the 13th. You present an interesting perspective on this question. Yes please, I accept your offer to post my response, and I guess this one too. Thank you for that.
There is one point that you make with which I must take issue – your statement that “the target of the blog was not PbD, but the assertion that you can have privacy and security.” The notion that you can do precisely that represents Dr. Cavoukian’s “Positive-Sum” principle, which forms a central tenet in the concept of Privacy by Design. Given that, I believe you can appreciate my interest in responding to your note. I'll follow your sequence of ideas.
You made reference to a speech by Dr. Ian Kerr which is posted on his website. You should also note that Dr. Kerr has written a prologue to the posting in which he describes his remarks as "a kind of off-the-cuff 'moment'". He goes on to describe his later conversations with Commissioner Ann Cavoukian (to whom he refers), which provide extra context (and perhaps some counterpoint) for his remarks.
It is possible that our failure to reach a meeting of the minds on this issue stems from the fact that, as Dr. Kerr notes, we defend our positions from different philosophical orientations. As regulators, we are, essentially, pragmatists. In your role, I suspect you are afforded greater opportunity to indulge in wider reflection.
Your reference to the proliferation of nuclear missiles is intriguing, but I don’t think the facts are the same. On so many issues – patron and employee safety, and the deterrence and detection of crime, to name a few – society demands that the state offer appropriate protections. Increasingly, the timeframe to accomplish this is not “some time next year” or “soon”; rather, it is now. At the same time, we are confronted with an explosion of digital activity and the growing deployment of information and communications technologies into every sphere of human activity, both online and off. Tens of millions of individuals are already carrying biometric-enabled identity and travel documents; hundreds of millions are participating in online social networks; and billions of people around the world are using portable devices in new and transformative ways.
The technologies are not going away. If anything, their use will grow. They will operate in both the public and private spheres, and all of this activity generates digital footprints. It is important to also note that, as a regulator, it is not within our mandate to prevent the implementation of technology which answers valid safety and security requirements. In our view, complete withdrawal of regulators is not a viable option in today’s information society. We believe in the need for engagement, not confrontation. We also believe that privacy is fully protected when measures are drawn from an umbrella of protections that the Commissioner calls “SmartPrivacy.”
SmartPrivacy is a model which incorporates an arsenal of protections – everything necessary to ensure that an organization’s complete holdings of personal information (PI) are appropriately managed. Each element is important, but PbD represents its sine qua non. Failing to envision privacy requirements from the outset will, despite the presence of the other elements, either fail to protect PI or protect it in a sub-optimal manner.
I am attaching our report entitled, Privacy and Video Surveillance in Mass Transit Systems, because I believe that it addresses the concerns you note with respect to CCTV systems. The Toronto Transit Commission, the subject of the report, has employed cameras within its system for many years. In addition to the comparatively recent justifications related to crime prevention and detection, the cameras have long been an important dimension of patron and employee safety, as well as an important tool to manage platform crowding, especially at choke-points during congested rush hours. The TTC represents an excellent example of proliferation occurring prior to privacy issues and assurances. Recognizing the perception of invasiveness, the TTC has embraced PbD and implemented a full suite of controls:
• There is no routine monitoring on surface vehicles because the technology does not provide a live feed — so no one is actually watching.
• Special Constable Services do not monitor the live video surveillance feed. All access is strictly logged and incident-driven; recordings are erased and re-written every 15 hours.
• Recorded video surveillance images are only accessed by the TTC when an incident has taken place, in which case an investigator must isolate and copy the image before it is automatically overwritten.
• Images collected from surface vehicles are erased and automatically overwritten every 15 hours.
• The TTC reduced its retention periods for subway images from seven days to a maximum of 72 hours.
• Unauthorized access to images obtained through the video surveillance system is prevented. Hard drives containing recorded video images are only accessible through the use of a strong password, which is only available to a small number of TTC supervisors. The operators of TTC vehicles do not themselves have any access to the recorded images.
• The TTC must ensure that its video surveillance program is subjected to an effective and thorough yearly audit conducted by an independent third party, using the GAPP Privacy Framework.
• And most important, a “two-key” sign-off process (one of the two sign-offs must come from the Chief of Police) must be invoked before police officers are permitted to retrieve any stored images – a control modeled in the sketch following this list.
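Purely for illustration, the following minimal sketch models the retention, automatic-overwrite, logging and “two-key” controls described above. Every class, identifier and role name is invented for the example; it is in no way the TTC’s actual implementation.

    import time

    RETENTION_SECONDS = 72 * 3600  # subway images: 72-hour maximum retention

    class SurveillanceStore:
        def __init__(self):
            self._images = {}      # image_id -> (timestamp, frame bytes)
            self.access_log = []   # every access attempt is recorded

        def record(self, image_id, frame):
            self._purge_expired()
            self._images[image_id] = (time.time(), frame)

        def _purge_expired(self):
            # Automatic overwrite: anything older than the retention
            # window is destroyed without operator intervention.
            cutoff = time.time() - RETENTION_SECONDS
            self._images = {k: v for k, v in self._images.items()
                            if v[0] >= cutoff}

        def retrieve_for_police(self, image_id, signoffs):
            # "Two-key" release: two distinct sign-offs, one of which
            # must come from the Chief of Police; every attempt is logged.
            self._purge_expired()
            names = {s["name"] for s in signoffs}
            roles = {s["role"] for s in signoffs}
            ok = len(names) >= 2 and "chief_of_police" in roles
            self.access_log.append((time.time(), image_id, sorted(names), ok))
            if not ok:
                raise PermissionError("two-key sign-off required")
            _, frame = self._images[image_id]
            return frame

    store = SurveillanceStore()
    store.record("platform-7/cam-3/0001", b"<frame bytes>")
    store.retrieve_for_police(
        "platform-7/cam-3/0001",
        signoffs=[{"name": "A. Supervisor", "role": "ttc_supervisor"},
                  {"name": "B. Chief", "role": "chief_of_police"}])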
As Dr. Kerr says, there is a role in our society to be played by both privacy idealists and pragmatists. I believe that it is important that the two groups maintain an open dialogue and that the critical threshold questions which exist are both asked and answered. Once answered, if an authorized decision is made to invoke technology for use in a program, Privacy by Design is the only tool of which I’m aware which, when properly deployed, ensures the selected technology is privacy protective. We are not alone in the view that PbD is a powerful tool for privacy. For example, in December 2009 the Centre for Democracy and Technology made a submission to the Federal Trade Commission in Washington D.C. on The Role of Privacy by Design in Protecting Consumer Privacy (copy attached for convenience), which concludes, "But if legislators, regulators, and innovators work together to buttress this framework with best practices that reflect Privacy by Design, then consumers and companies alike will discover that privacy and innovation are not mutually exclusive, but that privacy is instead an essential element of the innovative Internet."
Chris, thank you for taking the time to consider these views.
Best, Ken
Ken Anderson
Assistant Commissioner, Privacy
Office of the Information & Privacy Commissioner of Ontario
Canada
Dear Sirs,
In the past, you have demonstrated an appreciation of Dr. Cavoukian’s concept of Privacy by Design on several occasions in this blog. However, with this new post, you assert a surprising, if not alarming, potential outcome of its application. In the spirit of collegial discourse, I respectfully offer a different perspective.
Few would argue that world events have helped to create a condition of heightened vigilance, in an effort to deter and detect those who would inspire fear or do harm to otherwise innocent people. The implementation of technologies embodying the principles of Privacy by Design (PbD) is a unified approach to meeting these needs in what Dr. Cavoukian has called a Positive-Sum or win/win manner (as opposed to Zero-Sum, win/lose). In other words, PbD ensures that privacy is protected while at the same time, meeting valid security needs – not either privacy or security (i.e. win/lose), but both. As privacy advocates, I think we become marginalized by failing to acknowledge the legitimacy of security as a necessary and understandable requirement in today’s society.
Your argument that PbD “helps to drive a world of widespread screening” appears to be interpreted through a particular view of economics. In essence, you say that assurances of privacy will, in turn, inspire increased demand for security technology (e.g. Whole Body Image scanners), thus creating a growing market in which such devices will become less expensive and apt to be installed in an increasingly wide variety of environments. Once such devices are broadly implemented, rescinding the promise of privacy leaves a large, invasive technological infrastructure in place.
It would seem that the central issue is not falling cost. In fact, it is wishful thinking to rely on cost as a long-term impediment to the adoption of any technology. Rather, time brings with it inevitable improvements in the effectiveness and efficiency of every technology and process, which require proper consideration regarding their impacts on privacy. Assuming that a technology or process meets a need, demand for it will naturally increase. Logic demands that we begin to work today, as this particular scanning technology emerges, to ensure that both the scanners and the processes employed to manage them embody the privacy-protective measures that society will be demanding.
PbD is not a rubber stamp to be applied to any invasive technology, as you appear to suggest. Rather, once the need for such a technology is properly demonstrated, PbD constitutes the collection of principles that we believe designers should follow to integrate privacy as an essential component, inseparable from the technology itself. Embedding privacy in this manner ensures that it cannot easily be stripped away in the future. We know it’s a tall order, but one that we feel we must strive for.
Far from “the way to hell” or a “decline of privacy,” as you have described it, PbD, if executed, will in fact result in the opposite – ensuring the long-term protection of our essential freedoms. Designing privacy into technology and processes from the outset is a more flexible approach to ensuring long-term privacy than reliance on legislation or regulation alone, which rarely keeps pace with, and can never fully envision, future technologies. Without seeking to achieve multiple objectives – in this case, security and privacy, in a positive-sum manner – privacy will invariably decline because it will always be forfeited in favour of security. That is a future we do not wish to contemplate.
Ken Anderson
Assistant Commissioner (Privacy)
Office of the Information & Privacy Commissioner of Ontario, Canada
Posted by: Ken Anderson | 18/01/2010 at 07:09 PM
I had the pleasure of discussing some of this with Chris prior to this most recent blog posting and the excellent responses above. Most briefly, what I see here is actually what my Irish colleagues twenty years ago called "violent agreement".
We seem to all agree that Bad Things are happening to privacy in the name of security. We also seem to agree that PETs, PbD, etc. can reduce the negative impact on privacy of any particular (supposedly) pro-security technology or process at a given moment.
The apparent disagreement is over whether the apparent protection of privacy through PETs, PbD, etc. will itself result in a net longer-term decrease in privacy than would occur in their absence.
Chris' argument appears to be a slippery-slope argument: we take the first few steps down from the top of that slope, then later find ourselves sliding – more likely pushed and pulled – faster and farther down than we'd ever have wanted, by the same forces which compelled us to begin the journey, as they point out, "you agreed that you got benefit before; well, here's more of the same!"
I argue that what is missing from this latter argument is something which our earlier discussions did in fact include, and on which we do agree. (By the way, if you haven't read Ian Kerr's speech transcript, go back and read it now. It is excellent stuff.) Specifically, it is that an idealistic threshold decision must always be made at the introduction of any new technology, and at any new use of an existing technology.
Dr Cavoukian's PbD idea does not suggest that every potentially privacy-invasive technology is a fait accompli. Rather, she argues that, any such particular fait being accompli, PbD effects some reasonable limitation on the privacy damage. When is the idealistic threshold decision taken? *Before* the “D” (whether or not “Pb”).
What we as privacy leaders, whether academic/idealist or regulator/pragmatist (and I'll add the third category, into which my own career work fits: commercial/pragmatist), must [continue to] do is to be those idealists, asking the threshold questions (and doing the massive education job required for the threshold questions to reach and be understood by the right audiences), while we also work to minimize and then monitor the privacy impacts of all technologies, systems, and uses which pass the threshold questions.
Posted by: Jay Libove, CISSP, CIPP | 18/01/2010 at 08:14 PM
The comment that “I suspect that in many instances, the main effect of PbD is to shift the privacy problem from data collection to data access or elsewhere in the system of checks and balances” nails it.
I think I have observed a phenomenon in the terminology of both PETs and PbD: the terms are used in parallel, disjoint senses in the policy and comp.sci communities. Crudely, the lawyers are playing at information science from “common sense” when they dream up legal frameworks. (But I am a techie at heart, so I would say that.)
Two key insights for me were that
(a) from the regulatory side, if data was not identifiable it was not regulated, and this provided an incentive to anonymize, and
(b) it was clear that early ideas about an “Identity Protector” in the terminology of PETs comprehended only naive pseudonymity, rather than comp.sci ideas about zero-knowledge proofs and blind signatures (see the sketch below), and that the weak concept of non-identifiability (which could be caricatured as a program for universal ID escrow and traceability) was a threat to the acceptance of genuine “non-zero-sum” privacy technologies.
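To illustrate what the comp.sci side means here, the toy sketch below implements a bare-bones Chaum-style RSA blind signature, in which the signer authorizes a message without ever seeing it. The parameters are deliberately tiny and insecure and the message is made up; anything real would use a vetted cryptographic library.

    import hashlib, math, secrets

    # Chaum-style RSA blind signature: the signer authorizes a message
    # without ever seeing it. Tiny hard-coded primes, toy parameters -
    # for anything real, use a vetted cryptographic library.

    p, q = 1000003, 1000033            # toy primes, NOT secure
    n, e = p * q, 65537
    d = pow(e, -1, (p - 1) * (q - 1))  # signer's private exponent (Python 3.8+)

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    msg = b"credential: holder may travel anonymously"

    # The user blinds the message with a random factor r...
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    blinded = (h(msg) * pow(r, e, n)) % n

    # ...the signer signs the blinded value, learning nothing about msg...
    blind_sig = pow(blinded, d, n)

    # ...and the user unblinds, obtaining an ordinary RSA signature on h(msg).
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == h(msg)  # verifies like any RSA signature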
As far as Privacy by Design goes, it has achieved some mindshare with privacy commissioners (who always seemed resistant to educating themselves about advanced PETs), but I think the major problem remains that the regulators would like to believe that PbD can be achieved in a “technology neutral” way at the level of process design and system architecture.
This idea is obsolete. You still need good process design and architecture, but a system which (appropriately) uses technology specifically designed to protect privacy can protect privacy better than a system which doesn’t use such technology.
If PbD entrenches the idea that it is satisfactory not to use advanced privacy techniques from computer science (properly integrated), and that the lack of these can somehow be compensated for in the overall architectural design, then the ideology of PbD will be net harmful.
Posted by: cb | 28/01/2010 at 11:12 PM