Fig 1 - uploaded by Martin Ruskov
Source publication
Artificial intelligence’s (AI) progress holds great promise in tackling pressing societal concerns such as health and climate. Large language models (LLMs) and the chatbots derived from them, like ChatGPT, have greatly improved the natural language processing capabilities of AI systems, allowing them to process an unprecedented amount of unstructured data. Howe...
Context in source publication
Context 1
... aim of the workshop was to introduce students to the topic of AI and encourage them to explore and question the capabilities and limitations of ChatGPT. The study procedure, depicted in Figure 1, was designed to facilitate learning through active exploration. In particular, the educational learning plan comprised two phases: the first introduced students to AI and allowed them to freely explore the capabilities and limitations of ChatGPT, and the second introduced students to prompting techniques to enhance ChatGPT's capabilities. ...
Citations
... In a recent survey, nearly half of the respondents ranked learning how to use AI as their most important objective concerning AI education [Vo24]. Although existing concepts for teaching prompt strategies are available [Th23], there is little empirical evidence available on learners' current use of chatbots at the micro level. This challenge is compounded by the numerous scenarios in which chatbots based on large language models (LLMs) can be utilized. ...
Understanding how to use generative AI can greatly benefit the learning process. Despite available concepts for teaching "how to prompt", little empirical evidence exists on students' current micro-level chatbot use that would justify a need for instruction on how to prompt. This pilot study investigates students' chatbot use in an authentic setting. Findings reveal general interaction patterns, including a notable lack of conversational patterns, indicating an underutilization of this central chatbot capability. However, despite having no formal instruction, some students discovered specific chatbot affordances. While basic prompting skills are displayed or acquired during exploration, explicit training on effective chatbot interaction could enhance skillful chatbot use. This training should integrate cognitive and metacognitive strategies as well as technological knowledge, helping students leverage the technology's full potential.
... GPT-175B is based on the well-known Transformer architecture. It has become famous since ChatGPT [33], an exceptional chatbot that harnesses the power of such an architecture by providing accurate and helpful text-based responses to users, making it a valuable resource for anyone seeking information or assistance [30]. GPT differs from the other models mentioned as it is trained on a vast amount of general text data instead of a specific text simplification dataset. ...
... However, the successful integration of AI technologies into education depends on students' perceptions of and attitudes towards these advancements [15]. To effectively integrate AI into the educational setting, it is important to cultivate positive attitudes towards AI among students by providing them with AI-related experiences, fostering programming skills, and encouraging AI use [11,13,35]. ...
... Notably, our findings unveil a distinctive association between positive attitudes towards artificial intelligence (AI) and the acceptance of ChatGPT, underscoring the key role of AI openness in shaping students' inclination to embrace and actively engage with technological innovations such as ChatGPT. This correlation resonates with the perspectives of [11,15,35], who posit that the successful integration of AI technologies in education hinges on students' perceptions and attitudes. Our research therefore emphasizes the importance of fostering AI openness to encourage students to embrace and actively engage with such innovations in educational contexts. ...
The advent of Artificial Intelligence (AI) has revolutionized multiple sectors including education. The popularization of tools such as ChatGPT has sparked the debate concerning the impact of AI on traditional education and the nature of learning. This paper explores undergraduate students’ attitudes towards AI and ChatGPT acceptance. A descriptive cross-sectional study with 72 Public Relations students (M age = 19.2 years old) took place in Barcelona (Spain) during the first semester of 2023. The study implemented a mixed method approach with two validated questionnaires and an open text question to gather comprehensive insights. Findings reveal positive attitudes towards artificial intelligence and ChatGPT acceptance. The assessment of negative perceptions shows concerns regarding artificial intelligence and the use of ChatGPT among participants. The correlational analysis of scales showed an intricate relationship between AI attitudes and ChatGPT acceptance, while the qualitative analysis highlighted three major attitudes among students: openness, awareness, and alertness. The present study contributes to the ongoing discourse surrounding the use of ChatGPT in educational settings, emphasizing the importance of exploring students’ attitudes and concerns. As artificial intelligence continues to permeate various aspects of our daily life, it becomes crucial to explore its impact on education, particularly in higher education. By understanding students’ attitudes, both educators and institutions can enhance their proficiency in integrating artificial intelligence in a more efficient manner, ensuring a well-balanced approach that maximizes benefits while mitigating potential drawbacks of adopting AI technology.
... Similarly, other research finds that LLMs could be used to analyze patient feedback, clinical notes, and public health discussions, thereby gauging public sentiment on health-related matters [104], understanding patient experiences [105], monitoring mental health trends [106,107], and identifying cognitive distortions or suicidal tendencies [108]. Additionally, LLMs in sentiment analysis facilitate medical education [102,[109][110][111] by fostering interactions between medical trainees and educators, detecting thematic differences and potential biases, and revealing how feedback language may reflect varying attitudes toward learning and improvement [112]. LLMs could also contribute to the sentiment analysis of research articles and medical journals, offering insights into the research community's responses to novel findings or treatments [113,114]. ...
Large language models (LLMs) have rapidly become important tools in Biomedical and Health Informatics (BHI), potentially enabling new ways to analyze data, treat patients, and conduct research. This study aims to provide a comprehensive overview of LLM applications in BHI, highlighting their transformative potential and addressing the associated ethical and practical challenges. We reviewed 1698 research articles from January 2022 to December 2023, categorizing them by research themes and diagnostic categories. Additionally, we conducted network analysis to map scholarly collaborations and research dynamics. Our findings reveal a substantial increase in the potential applications of LLMs to a variety of BHI tasks, including clinical decision support, patient interaction, and medical document analysis. Notably, LLMs are expected to be instrumental in enhancing the accuracy of diagnostic tools and patient care protocols. The network analysis highlights dense and dynamically evolving collaborations across institutions, underscoring the interdisciplinary nature of LLM research in BHI. A significant trend was the application of LLMs in managing specific disease categories, such as mental health and neurological disorders, demonstrating their potential to influence personalized medicine and public health strategies. LLMs hold promising potential to further transform biomedical research and healthcare delivery. While promising, the ethical implications and challenges of model validation call for rigorous scrutiny to optimize their benefits in clinical settings. This survey serves as a resource for stakeholders in healthcare, including researchers, clinicians, and policymakers, to understand the current state and future potential of LLMs in BHI.
... Similarly, Korte et al. (2024) found that students who attended five hours of online AI literacy lectures improved their understanding of AI and felt more confident using it in their everyday lives. Theophilou et al. (2023) found that just two interactive lectures on genAI for high school students were enough to reduce their fears about genAI and improve their prompting skills. Many universities and academic skills centres are creating resources and training to develop students' AI literacies. ...
... Genuinely critical evaluation and engagement with genAI tools will likely take longer to develop than a single workshop. Theophilou et al. (2023) observed that students improved their prompting strategies after a second workshop. However, another study by Sheese et al. (2024) found that over a 12-week introductory computer science course, students continued to use relatively simple prompts and did not effectively use the provided genAI tool (CodeHelp) to help deepen their understanding. ...
With the emergence of generative artificial intelligence (genAI), it has become increasingly important to ensure that students are equipped with AI literacy to use these tools effectively and appropriately. We ran a 90-minute, optional workshop for students to demonstrate how to use genAI in the assessment process appropriately. By the end of the workshop, participants felt significantly more confident in using genAI, had more intentions to use genAI, and understood the University's genAI policy better. The types of genAI use that participants envisioned shifted from general academic and life uses to specific, acceptable uses for learning. Students could identify some methods for assessing the output of genAI. However, it is suggested that this skill needs more development.
... It mainly involves humans' perception of AI products and their interaction processes. By summarizing the related current research, we found that competence perception of ChatGPT may emerge as a dominant factor fostering users' positive attitudes (Shoufan, 2023;Theophilou et al., 2023), while various factors influencing the public's negative attitudes toward ChatGPT are concentrated in the form of AI anxiety (Praveen & Vajrobol, 2023b;Tian et al., 2023;Yan, 2023). Therefore, we focus on ChatGPT, an advanced AI tool with certain objective characteristics. ...
... First, ChatGPT still has some functional shortcomings, such as making incorrect or inappropriate responses (Mohamed, 2024; Shoufan, 2023) and generating answers with errors or biases (Tian et al., 2023), as well as raising apprehensions about the model's accuracy and reliability (Chan & Hu, 2023; Praveen & Vajrobol, 2023a; Temsah et al., 2023; Tian et al., 2023). In this regard, users expect continual improvement in its functionality (Theophilou et al., 2023). Second, the deployment of ChatGPT raises ethical and privacy concerns, encompassing issues like academic integrity and dishonesty (Mohamed, 2024; Praveen & Vajrobol, 2023a; Tian et al., 2023; Yan, 2023). ...
... Sixth, this study highlights the significance of the socio-emotional attribute of advanced GAI. Powerful functionality is usually the primary selling point of advanced GAI (Theophilou et al., 2023). However, recent technological leaps have given advanced GAI (taking ChatGPT as an example) both functional and socio-emotional aspects. ...
... The purpose of such regulation would be to protect the consumer from exaggerated claims about what ChatGPT can do. A study by Emily Theophilou and her colleagues (Theophilou et al., 2023) shows that when students also learn about the fallibility and limitations of AI while learning how to prompt LLMs, the results appear to be more effective. Thus, a possible regulation in this area should include an emphasis on the limitations of LLMs such as ChatGPT. ...
... Virtual agents endowed with AI capabilities can offer a suite of mental well-being services that are both individualized and broadly accessible. These AI-driven interventions range from non-judgmental listening and psychoeducation to evidence-based therapeutic strategies; and together with machine learning algorithms, they can refine these strategies by analyzing treatment-response data, thereby optimizing the likelihood of successful outcomes [54]. ...
The advancement of artificial intelligence (AI) and the ubiquity of social media have become transformative agents in contemporary educational ecosystems. The spotlight of this inquiry focuses on the nexus between AI and social media usage in relation to academic performance and mental well-being, and the role of smart learning in facilitating these relationships. Using partial least squares–structural equation modeling (PLS-SEM) on a sample of 401 Chinese university students, the study reveals that both AI and social media have a positive impact on academic performance and mental well-being among university students. Furthermore, smart learning serves as a positive mediating variable, amplifying the beneficial effects of AI and social media on both academic performance and mental well-being. These revelations contribute to the discourse on technology-enhanced education, showing that embracing AI and social media can have a positive impact on student performance and well-being.
... Thanks to these features, AI algorithms can automate repetitive and time-consuming tasks, allowing individuals to focus on more critical activities, whereas generative AI techniques can create realistic text, images, music, and other media from a simple prompt [2,3]. Concerning this, scholars have already recognized advancements brought by AI in various areas [4], such as finance [5], retail [6], healthcare [7], and education [8,9]. ...
... In this regard, the present work made it possible for the first time to classify AI applications to be manipulated in future studies to deepen the role of perceived social risk and social value in shaping the acceptance of AI technologies and to understand why people perceive some AI applications as risky and others not. ...
Artificial intelligence (AI) is a rapidly developing technology that has the potential to create previously unimaginable chances for our societies. Still, the public’s opinion of AI remains mixed. Since AI has been integrated into many facets of daily life, it is critical to understand how people perceive these systems. The present work investigated the perceived social risk and social value of AI. In a preliminary study, AI’s social risk and social value were first operationalized and explored by adopting a correlational approach. Results highlighted that perceived social value and social risk represent two significant and antagonistic dimensions driving the perception of AI: the higher the perceived risk, the lower the social value attributed to AI. The main study considered pretested AI applications in different domains to develop a classification of AI applications based on perceived social risk and social value. A cluster analysis revealed that in the two-dimensional social risk × social value space, the considered AI technologies grouped into six clusters, with the AI applications related to medical care (e.g., assisted surgery) unexpectedly perceived as the riskiest ones. Understanding people’s perceptions of AI can guide researchers, developers, and policymakers in adopting an anthropocentric approach when designing future AI technologies to prioritize human well-being and ensure AI’s responsible and ethical development in the years to come.