We study the critical and timely matter of adolescent online safety, using human-centered approaches to understand and identify online risks. We continued analyzing teens' real-world social media data through efforts such as the Instagram Data Donation Project. This project has yielded insights into teens' online risk experiences in Instagram direct messages, multimodal detection of unsafe conversations, connections between offline and online risk experiences across low- to high-risk scenarios, direct messages related to suicide and self-harm, how youth perspectives on risk differ from those of third-party annotators when labeling this dataset, the impact of private online harassment on teens' mental health, and ethical, trauma-informed research practices, including the development of reflective practices among the youth participants and researchers who annotated the data for risks, to understand the effects of participating in and conducting research on sensitive data.
Given the time today's teens spend with algorithmically personalized "for you" content (FYC), we sought to understand how teens interact with FYC and how those interactions shape their sense of self. Our work showed that today's teens expect and enjoy pervasive interactions with hyper-personalized FYC, despite conflicted feelings about the surveillance culture underlying it.
We have also examined how adolescents seek and receive support within private networked spaces. We found that conversations often started for casual reasons, while others began with seeking or offering help. The most frequently disclosed topics were concerns about mental health and relationships, to which others responded by sharing relatable experiences and providing informational support (e.g., coping strategies) and emotional support (e.g., empathy). However, we also identified "unsupport" as a theme, in which conversation members declined to give support, a novel finding in the online social support literature.
Recent Publications
G. Freeman, D. Zytko, A. Razi, C. Lampe, H. Candello, T. Jakobi, K. Aal, 2025. New Opportunities, Risks, and Harm of Generative AI for Fostering Safe Online Communities. In The 2025 ACM International Conference on Supporting Group Work. https://doi.org/10.1145/3688828.3700747
A. Alsoubai, A. Razi, Z. Agha, S. Ali, G. Stringhini, C. D, P. Wisniewski, 2024. Profiling the Offline and Online Risk Experiences of Youth to Develop Targeted Interventions for Online Safety. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), pp. 1–37. https://doi.org/10.1145/3637391
A. Alsoubai, J. Park, S. Qadir, G. Stringhini, A. Razi, P. Wisniewski, 2024. Systemization of Knowledge (SoK): Creating a Research Agenda for Human-Centered Real-Time Risk Detection on Social Media Platforms. In Proceedings of the CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3613904.3642315. CHI'24 Honorable Mention Award.
Upcoming Publications
Tyler Chang, Joseph J Trybala III, Sharon Bassan, and Afsaneh Razi. 2025. Opaque Transparency: Gaps and Discrepancies in the Report of Social Media Harms. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’25), April 26–May 01, 2025, Yokohama, Japan. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3706599.3719829