HCOMP 2016 › Publication › Learning Privacy Expectations by Crowdsourcing Contextual Informational Norms
Designing programmable privacy logic frameworks that correspond to social, ethical, and legal norms has been a fundamentally hard problem. Contextual integrity (CI) (Nissenbaum, 2010) offers a model for conceptualizing privacy that is able to bridge technical design with ethical, legal, and policy approaches. While CI is capable of capturing the various components of contextual privacy in theory, it is challenging to discover and formally express these norms in operational terms. In the following, we propose a crowdsourcing method for the automated discovery of contextual norms. To evaluate the effectiveness and scalability of our approach, we conducted an extensive survey on Amazon’s Mechanical Turk (AMT) with more than 450 participants and 1400 questions. The paper has three main takeaways: First, we demonstrate the ability to generate survey questions corresponding to privacy norms within any context. Second, we show that crowdsourcing enables the discovery of norms from these questions with strong majoritarian consensus among users. Finally, we demonstrate how the norms thus discovered can be encoded into a formal logic to automatically verify their consistency.
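The survey-generation step described above can be sketched as enumerating the cross product of CI parameters (sender, recipient, attribute, transmission principle) into natural-language questions. This is a minimal illustrative sketch; the parameter values and question template below are hypothetical examples, not the paper's actual survey instrument.

```python
from itertools import product

# Hypothetical CI parameter values for an educational context
# (illustrative only; not taken from the paper's survey).
senders = ["a professor", "a classmate"]
recipients = ["the registrar", "a potential employer"]
attributes = ["grades", "attendance records"]
principles = ["with the student's consent", "upon request"]

def generate_questions(senders, recipients, attributes, principles):
    """Enumerate all combinations of CI parameters into survey questions."""
    template = ("Is it acceptable for {s} to share a student's {a} "
                "with {r} {p}?")
    return [template.format(s=s, r=r, a=a, p=p)
            for s, r, a, p in product(senders, recipients,
                                      attributes, principles)]

questions = generate_questions(senders, recipients, attributes, principles)
print(len(questions))  # 2 * 2 * 2 * 2 = 16 candidate questions
```

Each generated question maps back to exactly one candidate norm, so majority responses can be aggregated per parameter tuple rather than per free-form answer.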

arXiv › Technical report › Crowdsourced, Actionable and Verifiable Contextual Informational Norms
There is often a fundamental mismatch between programmable privacy frameworks, on the one hand, and the ever-shifting privacy expectations of computer system users, on the other. Based on the theory of contextual integrity (CI), our paper addresses this problem by proposing a privacy framework that translates users’ privacy expectations (norms) into a set of actionable privacy rules rooted in the language of CI. These norms are then encoded in a Datalog logic specification to develop an information system that can verify whether information flows are appropriate and the privacy of users thus preserved. A particular benefit of our framework is that it can automatically adapt as users’ privacy expectations evolve over time. To evaluate our proposed framework, we conducted an extensive survey involving more than 450 participants and 1400 questions to derive a set of privacy norms in the educational context. Based on the crowdsourced responses, we demonstrate that our framework can derive a compact Datalog encoding of the privacy norms which can in principle be used directly to enforce the privacy of information flows within this context. In addition, our framework can automatically detect logical inconsistencies between individual users’ privacy expectations and the derived privacy logic.
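The flow-verification idea above, checking whether an information flow matches some permitted norm, can be sketched Datalog-style as a set of allow facts plus a matching rule. This is a hedged sketch in Python rather than actual Datalog; the norm tuples and the "any" wildcard are illustrative assumptions, not the paper's encoding.

```python
# Hypothetical norms as (sender, recipient, subject, attribute, principle)
# tuples; "any" acts as a wildcard transmission principle. Illustrative only.
ALLOW = {
    ("professor", "registrar", "student", "grades", "consent"),
    ("professor", "student",   "student", "grades", "any"),
}

def appropriate(flow, norms=ALLOW):
    """A flow is appropriate iff some allowed norm covers it (CI-style check)."""
    sender, recipient, subject, attribute, principle = flow
    return any(
        ns == sender and nr == recipient and nsub == subject
        and na == attribute and np_ in (principle, "any")
        for (ns, nr, nsub, na, np_) in norms
    )

print(appropriate(("professor", "registrar", "student", "grades", "consent")))  # True
print(appropriate(("professor", "employer",  "student", "grades", "consent")))  # False
```

In an actual Datalog encoding, each ALLOW tuple would be a ground fact and `appropriate` a rule over it, which is what makes consistency between individual users' norms mechanically checkable.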