Changhao Du’s research while affiliated with Jilin University and other places


Publications (2)


Figures: Fig. 1: Illustration of LIVE together · Fig. 2: Example of the script for multi-user feature testing · Fig. 4: Example of the prompting agent · Fig. 5: Failure examples of our approach · Fig. 6: Comparison of time performance

Agent for User: Testing Multi-User Interactive Features in TikTok
  • Preprint
  • File available

April 2025 · 9 Reads

Sidong Feng · Changhao Du · [...]

TikTok, a widely used social media app with over a billion monthly active users, requires effective quality assurance for its intricate features. Feature testing is crucial to achieving this goal. However, the app's multi-user interactive features, such as live streaming and voice calls, pose significant challenges for developers, who must manage multiple devices simultaneously and coordinate user interactions. To address this, we introduce a novel multi-agent approach, powered by Large Language Models (LLMs), to automate the testing of multi-user interactive app features. In detail, we build a virtual device farm that allocates the necessary number of devices for a given multi-user interactive task. For each device, we deploy an LLM-based agent that simulates a user, mimicking user interactions to collaboratively automate the testing process. Evaluations on 24 multi-user interactive tasks within the TikTok app showcase its capability to cover 75% of tasks with 85.9% action similarity and to offer 87% time savings for developers. Additionally, we have integrated our approach into the real-world TikTok testing platform, aiding in the detection of 26 multi-user interactive bugs.
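The abstract describes the architecture only at a high level. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: a device farm allocates one device per simulated user, and an LLM-backed agent drives each device. Every name here (DeviceFarm, UserAgent, query_llm, run_multi_user_task) is invented for illustration, and query_llm is a stub standing in for a real LLM call.

```python
# Hypothetical sketch of multi-agent, multi-user feature testing.
# None of these names come from the paper; Device.run_action would wrap a
# real UI-automation driver, and query_llm would call an actual LLM.

from dataclasses import dataclass, field


@dataclass
class Device:
    """A single emulator/phone allocated from the virtual device farm."""
    device_id: str
    action_log: list = field(default_factory=list)

    def run_action(self, action: str) -> None:
        # A real system would dispatch this to ADB or a UI-automation framework.
        self.action_log.append(action)


class DeviceFarm:
    """Allocates the number of devices a multi-user task requires."""

    def __init__(self, pool_size: int = 8):
        self._pool = [Device(f"emulator-{i}") for i in range(pool_size)]

    def allocate(self, n: int) -> list:
        if n > len(self._pool):
            raise RuntimeError("not enough devices in the farm")
        allocated, self._pool = self._pool[:n], self._pool[n:]
        return allocated


def query_llm(role: str, task: str, history: list) -> str:
    """Stub for an LLM call deciding the next UI action for one user role."""
    return f"<next action for {role} on task '{task}' after {len(history)} steps>"


class UserAgent:
    """One LLM-backed agent simulating a single user on a single device."""

    def __init__(self, role: str, device: Device):
        self.role, self.device = role, device

    def step(self, task: str) -> None:
        action = query_llm(self.role, task, self.device.action_log)
        self.device.run_action(action)


def run_multi_user_task(task: str, roles: list, steps: int = 3) -> None:
    """Allocate devices, spawn one agent per role, and let them act in turn."""
    devices = DeviceFarm().allocate(len(roles))
    agents = [UserAgent(role, dev) for role, dev in zip(roles, devices)]
    for _ in range(steps):
        for agent in agents:  # agents coordinate implicitly via the shared task
            agent.step(task)


if __name__ == "__main__":
    run_multi_user_task("start a live stream and invite a co-host", ["host", "guest"])
```

In practice the agents would also exchange observations of each device's screen so that, for example, the guest agent only accepts an invitation after the host agent has sent it; the sketch omits that coordination for brevity.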


Distinguishing GUI Component States for Blind Users using Large Language Models

March 2025

ACM Transactions on Software Engineering and Methodology

Graphical User Interfaces (GUIs) serve as the primary medium for user interaction with mobile applications (apps). Within these GUIs, editable text views, buttons, and other visual elements exhibit different states following user actions. However, developers often present these states only through color changes, without providing textual hints for blind users. This makes it significantly harder for blind users to discern transitions in component states, hindering their ability to proceed with subsequent actions. Traditional rule-based methods and attribute settings often struggle to adapt to diverse component styles and fail to address component state changes influenced by context. Recently, pre-trained large language models (LLMs) have demonstrated their ability to generalize to various downstream tasks. In this work, we leverage LLMs and propose a tool called CasGPT (Component states distinguishing GPT) to automatically distinguish component states in GUIs and provide corresponding textual hints, thereby aiding blind users in app usage. Our experiments demonstrate that CasGPT is a lightweight approach capable of accurately distinguishing component states (accuracy = 86.5%). The usefulness of our method is validated through a user study in which participants expressed positive attitudes towards it. We also find that our method outperforms other open-source LLMs and different versions of GPT.
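As a rough illustration of the kind of pipeline this abstract describes, the sketch below shows how a component's attributes might be serialized into an LLM prompt that returns a state label and a textual hint for a screen reader. It is entirely hypothetical: the prompt format, the call_llm stub, and the attribute names are assumptions for illustration, not CasGPT's actual interface.

```python
# Hypothetical sketch of distinguishing a GUI component's state with an LLM
# and producing a textual hint for blind users. All names and the prompt
# format are assumptions; call_llm stands in for a real chat-completion call.

import json


def call_llm(prompt: str) -> str:
    """Stub for an LLM call; a real system would query GPT or an open-source
    LLM and parse its JSON reply."""
    return json.dumps({"state": "selected", "hint": "Follow button is now active"})


def describe_component_state(component: dict, screen_context: str) -> dict:
    """Ask the LLM which state a component is in and how to announce it."""
    prompt = (
        "You label the state of a mobile GUI component for a blind user.\n"
        f"Screen context: {screen_context}\n"
        f"Component attributes: {json.dumps(component)}\n"
        'Reply as JSON: {"state": ..., "hint": ...}'
    )
    return json.loads(call_llm(prompt))


if __name__ == "__main__":
    button = {
        "class": "android.widget.Button",
        "text": "Follow",
        "color": "#FE2C55",   # visual-only state cue the LLM must interpret
        "content-desc": "",   # missing accessibility label
    }
    result = describe_component_state(button, "user profile page")
    print(result["state"], "-", result["hint"])
```

The returned hint could then be injected into the component's accessibility description so that a screen reader announces the state change instead of leaving it as a purely visual cue.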