July 2024
The introduction of ChatGPT in academia has raised significant concerns, particularly regarding plagiarism and the ethical use of generative text for academic purposes. Existing literature has focused predominantly on ChatGPT adoption, leaving a notable gap concerning the strategies users employ to mitigate these emerging challenges. To bridge this gap, this research draws on Uncertainty Reduction Theory (URT) to formulate user strategies, both interactive and passive, for reducing uncertainty. Concurrently, the study identifies key sources of uncertainty: concerns about transparency, information accuracy, and privacy. It also introduces seeking clarification and consulting peer feedback as mediators that facilitate uncertainty reduction. We tested these hypotheses with a sample of Indonesian users (N = 566) using structural equation modeling in Smart-PLS 4.0. The results confirm that interactive uncertainty reduction strategies (URS) are more effective than passive URS in reducing uncertainty when using ChatGPT. Transparency concerns, information accuracy concerns, and privacy concerns are identified as factors that increase uncertainty. At the individual-system level, consulting peer feedback proves more effective than seeking clarification, and the mediation analysis confirms that consulting peer feedback significantly mediates the relationship between uncertainty sources and uncertainty reduction. The study discusses strategies for the ethical use of ChatGPT in educational contexts and contributes to theoretical development.