Ze Shi Li
- BSc (UVic, 2018)
- MSc (UVic, 2020)
Topic
Exploring Automation of User Feedback Analysis for Requirements Engineering
Department of Computer Science
Date & location
- Tuesday, July 22, 2025
- 5:00 P.M.
- Virtual Defence
Reviewers
Supervisory Committee
- Dr. Daniela Damian, Department of Computer Science, UVic (Co-Supervisor)
- Dr. Neil Ernst, Department of Computer Science, UVic (Co-Supervisor)
- Dr. David Lo, School of Computing and Information Systems, Singapore Management University (Outside Member)
External Examiner
- Dr. Travis Breaux, Software and Societal Systems Department, Carnegie Mellon University
Chair of Oral Examination
- Dr. Daniela Constantinescu, Department of Mechanical Engineering, UVic
Abstract
In modern software development, products collect heterogeneous feedback from end users across platforms such as app stores, social media, forums, and videos. This user feedback is a valuable source for identifying emerging needs, bugs, and potential features. As organizations shift toward rapid release cycles and continuous delivery, the volume and breadth of user feedback have increased significantly. Traditional requirements elicitation techniques, such as interviews and surveys, remain time-consuming, stakeholder-dependent, and difficult to scale. Moreover, newer media such as TikTok, YouTube, and Reddit have introduced informal, crowd-driven forms of feedback that are often unstructured and scattered across platforms. This has created a pressing need for scalable tooling and methodological support to analyze and synthesize large-scale user feedback. This dissertation addresses that challenge by exploring scalable, AI-driven approaches to feedback analysis in requirements engineering.
I first conducted a grounded theory interview study with 40 practitioners from 32 companies to explore how organizations manage user feedback. My analysis identified a wide range of feedback channels and management activities. Synthesizing these, I propose a life cycle for managing user feedback, along with best practices for managing large-scale crowd feedback. Next, I explored how requirements analysis of user feedback can be automated. For textual feedback, such as Reddit posts and app store reviews, I applied large language models (LLMs) to identify requirements-relevant feedback and important themes in the data. This LLM-based approach was substantially faster than manual analysis of the same data. Additionally, I examined automating the analysis of video-based feedback: I extracted transcripts and on-screen text and employed deep learning classifiers to detect requirements-relevant content. My work shows that AI can surface requirements insights from multi-modal user feedback at scale.
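As an illustration of the kind of LLM-based labeling described above, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and label set are hypothetical and not drawn from the dissertation:

```python
# Hypothetical sketch: using an LLM to flag requirements-relevant user feedback.
# The model, prompt, and labels are illustrative assumptions, not the
# dissertation's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "You are analyzing user feedback for requirements engineering. "
    "Label the feedback below as 'feature request', 'bug report', or "
    "'not requirements-relevant', and name its main theme in a few words.\n\n"
    "Feedback: {text}"
)

def classify_feedback(text: str) -> str:
    """Ask the LLM to label one piece of user feedback."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,  # deterministic labels to keep the analysis repeatable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(classify_feedback("The app crashes whenever I rotate my phone."))
```

In practice, a pipeline like this would run over thousands of feedback items and aggregate the labels into themes, which is where the speed advantage over manual analysis comes from.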
The second main goal of my dissertation was to explore how AI tools can assist practitioners with requirements analysis. Through 26 interviews, I developed a theory that outlines the factors (i.e., motives and challenges) influencing AI adoption in software teams at both the individual and organizational levels. Understanding these factors informs how AI tools can be introduced and supported in practice. Finally, I conducted a think-aloud study with requirements practitioners and product managers to understand how they use AI tools during requirements analysis. Participants were observed forming prompts and integrating AI-generated suggestions while analyzing user feedback and formulating requirements, revealing practitioners' emerging practices with AI assistance.
In summary, the findings across these studies culminate in a conceptual model for AI-assisted requirements analysis. This conceptual model synthesizes the life cycle of user feedback management, automation techniques for multi-modal analysis, and the socio-technical factors shaping tool adoption. The model offers both theoretical and practical contributions by providing scalable, human-centered strategies for transforming crowd-driven user feedback into actionable requirements.