We study the critical and timely matter of online safety, security, and privacy using human-centered approaches, specifically for vulnerable populations such as adolescents. We create and analyze ecologically valid data based on users' real-world social media interactions to provide insights into their online risk and harm experiences, such as cyberbullying, sexual risks, and mental health challenges, in both private and public settings.
Our research investigates Human-AI Interaction with a focus on understanding and designing AI systems through a human-centered approach, integrating interface design, system development, and ethical considerations so that AI systems align with human values. Specifically, we explore the role of conversational user interfaces (CUIs) in online support and companionship. Our work examines AI-induced harms, such as ethical concerns in AI chatbots, and investigates youth perspectives on AI-generated social support. By bridging technical innovation with ethical AI design, we aim to inform the development of AI systems that assist humans more safely and effectively.
Our research in health informatics focuses on sociotechnical support and self-management for people living with epilepsy (PLWE). We investigate how PLWE seek online social support, examining challenges, tools, and community interactions, including those of caregivers and distinct subpopulations. Additionally, we study self-management behaviors and the role of technology in improving quality of life. By understanding these needs, we provide insights for designing more effective online support systems and self-management technologies tailored to PLWE.