Playing with Privacy: Exploring the Social Construction of Privacy Norms Through a Card Game.
Berkholz, J.; Rahman, A.; and Stevens, G.
Proceedings of the ACM on Human-Computer Interaction, 9(GROUP): 1–23. January 2025.
@article{berkholz_playing_2025,
title = {Playing with {Privacy}: {Exploring} the {Social} {Construction} of {Privacy} {Norms} {Through} a {Card} {Game}},
volume = {9},
issn = {2573-0142},
shorttitle = {Playing with {Privacy}},
url = {https://dl.acm.org/doi/10.1145/3701202},
doi = {10.1145/3701202},
abstract = {Investigating digital privacy behavior requires consideration of its contextual nuances and the underlying social norms. This study delves into users' joint articulation of such norms by probing their implicit assumptions and "common sense" surrounding privacy conventions. To achieve this, we introduce Privacy Taboo, a card game designed to serve as a playful breaching interview method, fostering discourse on unwritten privacy rules. Through nine interviews involving pairs of participants (n=18), we explore the decision-making and collective negotiation of privacy's vagueness. Our findings demonstrate individuals' ability to articulate their information needs when consenting to fictive data requests, even when contextual cues are limited. By shedding light on the social construction of privacy, this research contributes to a more comprehensive understanding of usable privacy, thereby facilitating the development of democratic privacy frameworks. Moreover, we posit Privacy Taboo as a versatile tool adaptable to diverse domains of application and research.},
language = {en},
number = {GROUP},
urldate = {2025-01-12},
journal = {Proceedings of the ACM on Human-Computer Interaction},
author = {Berkholz, Jenny and Rahman, Aniqa and Stevens, Gunnar},
month = jan,
year = {2025},
pages = {1--23},
}
Prevalence Overshadows Concerns? Understanding Chinese Users' Privacy Awareness and Expectations Towards LLM-based Healthcare Consultation.
Liu, Z.; Hu, L.; Zhou, T.; Tang, Y.; and Cai, Z.
In 2025 IEEE Symposium on Security and Privacy (SP), pages 92–92, Los Alamitos, CA, USA, May 2025. IEEE Computer Society.
@inproceedings{liu_prevalence_2025,
address = {Los Alamitos, CA, USA},
title = {Prevalence {Overshadows} {Concerns}? {Understanding} {Chinese} {Users}' {Privacy} {Awareness} and {Expectations} {Towards} {LLM}-based {Healthcare} {Consultation}},
url = {https://doi.ieeecomputersociety.org/10.1109/SP61157.2025.00092},
doi = {10.1109/SP61157.2025.00092},
abstract = {Large Language Models (LLMs) are increasingly gaining traction in the healthcare sector, yet expanding the threat of sensitive health information being easily exposed and accessed without authorization. These privacy risks escalate in regions like China, where privacy awareness is notably limited. While some efforts have been devoted to user surveys on LLMs in healthcare, users' perceptions of privacy remain unexplored. To fill this gap, this paper contributes the first user study (n=846) in China on privacy awareness and expectations in LLM-based healthcare consultations. Specifically, a healthcare chatbot is deployed to investigate users' awareness in practice. Information flows grounded in contextual integrity are then employed to measure users' privacy expectations. Our findings suggest that the prevalence of LLMs amplifies health privacy risks by raising users' curiosity and willingness to use such services, thus overshadowing privacy concerns. 77.3\% of participants are inclined to use such services, and 72.9\% indicate they would adopt the generated advice. Interestingly, a paradoxical “illusion” emerges where users' knowledge and concerns about privacy contradict their privacy expectations, leading to greater health privacy exposure. Our extensive discussion offers insights for future LLM-based healthcare privacy investigations and protection technology development.},
booktitle = {2025 {IEEE} {Symposium} on {Security} and {Privacy} ({SP})},
publisher = {IEEE Computer Society},
author = {Liu, Zhihuang and Hu, Ling and Zhou, Tongqing and Tang, Yonghao and Cai, Zhiping},
month = may,
year = {2025},
keywords = {contextual integrity, healthcare, large language models, privacy, user study},
pages = {92--92},
}