Lab

Wei Lu's Lab

Institution: Wuhan University

Featured research (1)

The swift advancement of Large Language Models (LLMs) and their associated applications has ushered in a new era of convenience, but it also harbors risks of misuse, such as academic cheating. To mitigate such risks, AI-generated text detectors have been widely adopted in educational and academic settings. However, their effectiveness and robustness across diverse scenarios are questionable. Increasingly sophisticated evasion methods are being developed to circumvent these detectors, creating an ongoing contest between detection and evasion. While the detectability of AI-generated text has begun to attract significant interest from the research community, little has been done to evaluate the impact of user-driven prompt engineering on detector performance. This paper focuses on evading detection through prompt engineering, from the perspective of general users, by changing the writing style of LLM-generated text. Our findings reveal that by simply altering prompts, state-of-the-art detectors can be easily evaded, with F1 scores dropping by over 50%, highlighting their vulnerability. We believe that AI-generated text detection remains an unresolved challenge. As LLMs grow increasingly powerful and humans become more proficient at using them, detecting AI-generated text will only become harder.

Lab head

Wei Lu

Members (11)

Rui Meng
  • University of Pittsburgh
Liu Jiawei
  • Wuhan University
Yu Chi
  • University of Kentucky
Qikai Cheng
  • Wuhan University
Zhuoran Luo
  • Wuhan University
Zhifeng Liu
  • Peking University
Yongqiang Ma
  • Wuhan University
Jiajia Qian
  • Wuhan University
Wei Lu
  • Not confirmed yet
Yong Huang
  • Not confirmed yet
Ying Ding
  • Not confirmed yet
Qikai Cheng
  • Not confirmed yet
Heng Ding
  • Not confirmed yet
Zi Xiong
  • Not confirmed yet
Xiaojuan Zhang
  • Not confirmed yet
Fan Yi
  • Not confirmed yet