Figure: Multi-stage research design of this study with expert workshop and subsequent survey study. The questionnaire captures demographics, exploratory user factors, and the evaluation of the 38 AI-related scenarios.
Source publication
Introduction: Artificial Intelligence (AI) has become ubiquitous in medicine, business, manufacturing and transportation, and is entering our personal lives. Public perceptions of AI are often shaped either by admiration for its benefits and possibilities, or by uncertainties, potential threats and fears about this opaque and perceived as mysteriou...
Similar publications
This study explores the effectiveness of puzzle games in improving the learning of Malay Language reading skills among preschool children. The study focuses on inculcating blending skills through puzzle games, since one of the most critical elements in learning to read is phonics, which involves the skills of letter recognition and phonemic segmentat...
Citations
... Considering the transformative potential of AI-MC, much research has focused on improving algorithms and work efficiency, studying its logical, economic, structural, and ethical impacts (Brauner et al., 2023). While technological advancements remain at the forefront of AI development, understanding public perception is equally critical, as societal attitudes toward AIGC can either accelerate or delay adoption. ...
... Greater public acceptance can drive investment, regulatory support, and widespread implementation, enabling AI-driven innovations to integrate seamlessly into various industries. Conversely, concerns regarding misinformation, bias, authorship, and ethical risks may lead to resistance, stricter regulations, or even rejection of AIGC in specific domains (Brauner et al., 2023). As AIGC is deeply embedded in AI-MC, it is essential to examine how individuals perceive and engage with AI-MC, as these perceptions can translate into AI regulation and governance to foster appropriate usage (Fast & Horvitz, 2017). ...
This study explored public perceptions and attitudes toward AI-mediated communication (AI-MC), with a special focus on AI-generated content (AIGC) in journalism. As AI technologies become more embedded in current news production, understanding societal responses is crucial for guiding AI development and regulation. By reviewing selected empirical studies, this paper identified three major trends: (1) widespread public awareness of AIGC, (2) optimism about its capacity to enhance journalism, and (3) fear of AIGC from both the public and journalists’ perspectives. This study also illustrated a significant trust gap among the public due to the opaque nature of AI systems and limited public knowledge. This study contributed to the dynamic discourse on AI-MC and suggested a more ethical algorithm design and timely legislation to promote responsible AI-MC.
... The higher the awareness of and trust in AI, the more positive the attitudes and the greater the use of AI tools (Obenza et al., 2024), and a strongly positive view of AI will enhance academic performance (Bation & Pudan, 2024). Yet public trust defines perceptions: when trust is lowest, perceptions of AI and its consequences are divided; when trust is higher, perceptions tend to be consistently positive (Brauner et al., 2023). Despite positive sentiments, students express concerns about AI's ethical implications, including plagiarism risks, privacy issues and a lack of institutional transparency (Arowosegbe et al., 2024; Vaněček et al., 2024). ...
... Lower scores with respect to teaching perception (≤22.000) ultimately indicate less engagement. In addition, as stated above, the Technology Acceptance Model (TAM) has been expanded because barriers related to ethical considerations in perceived usefulness, a dimension not so pronounced in common applications of TAM, become especially unavoidable in an AI context (Brauner et al., 2023). ...
This study explores the multifaceted dynamics of student sentiment towards artificial intelligence (AI)-based education by integrating sentiment analysis techniques with statistical methods, including Monte Carlo simulations and decision tree modelling, alongside qualitative grounded theory analysis. Data were collected from 540 university students, whose responses to open-ended and scale-based questions were systematically analysed to capture the nuances of their perceptions regarding the transformative potential and inherent challenges of AI in educational settings. Quantitatively, sentiment scores were derived using GPT-4, categorised into positive, neutral and negative bins, and further examined through descriptive statistics, one-way ANOVA and Scheffé post hoc tests. Monte Carlo simulations provided a resilient estimation of sentiment distributions, while decision tree analysis elucidated key demographic and attitudinal predictors of AI adoption, particularly highlighting the roles of age and ethical perceptions. Qualitatively, grounded theory was employed to extract emergent themes that reflect both the enthusiasm for personalised, efficient learning and the concerns over ethical dilemmas, social isolation and diminished teacher–student interactions. The findings reveal a dual-edged view of AI-based education: a majority of students acknowledge its advantages for enhancing learning efficiency and access to information, even as the concerns above temper that enthusiasm.
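The quantitative pipeline the abstract describes (binning sentiment scores, then testing group differences with a one-way ANOVA) can be illustrated in a few lines. The sketch below is not the study's code: `scores` and `ages` are synthetic stand-ins, and the ±0.05 bin thresholds are an assumption borrowed from common sentiment-analysis practice.

```python
# A minimal sketch, assuming synthetic data, of the score-binning + ANOVA step.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
scores = rng.uniform(-1, 1, size=540)   # stand-in for GPT-4-derived sentiment scores
ages = rng.integers(18, 30, size=540)   # stand-in demographic predictor

def to_bin(s: float) -> str:
    """Categorise a sentiment score into negative/neutral/positive bins."""
    if s < -0.05:
        return "negative"
    if s > 0.05:
        return "positive"
    return "neutral"

bins = np.array([to_bin(s) for s in scores])

# One-way ANOVA: does mean age differ across the three sentiment bins?
groups = [ages[bins == b] for b in ("negative", "neutral", "positive")]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A Scheffé post hoc test, as used in the study, would then compare the individual bin pairs whenever the omnibus ANOVA is significant.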
... The increased discussion of artificial intelligence (AI) since 2009 (Fast and Horvitz, 2016) reflects the growing presence of AI in modern life. As there is no universal definition that comprehensively captures its essence (Brauner et al., 2023), AI acts as an umbrella term for various technological references and shapes our society in different respects (Kelley et al., 2021; Makridakis, 2017). One such area is the economy, which AI has the potential to fundamentally reshape (Lee et al., 2018; O'Shaughnessy et al., 2023; Pallathadka et al., 2023). ...
... In addition, contextual influences play a crucial role in the perception of AI. For instance, owing to its context, the use of AI in medicine for profitable purposes is perceived more positively than AI in business, where human jobs are replaced (Brauner et al., 2023). Perceptions are dynamic and can evolve as they are influenced by fresh ideas, experiences, and social interactions over time (Moscovici, 2000). ...
... The present study, in which a total of 355 distinct terms were mentioned, revealed a highly diverse spectrum of associations. This diversity can also be attributed to the absence of a universally accepted definition of AI, which would facilitate a clear delineation of its constituent elements (Brauner et al., 2023). Upon analysis of the entire data set, the most prevalent association with ...
This study aims to explore students' associations with Artificial Intelligence (AI) and how these perceptions have evolved following the release of Chat GPT. A free word association test was conducted with 836 German high school students aged 10–20. Associations were collected before and after the release of Chat GPT, processed, cleaned, and inductively categorized into nine groups: technical association, assistance system, future, human, negative, positive, artificial, others, and no association. In total, 355 distinct terms were mentioned, with “robot” emerging as the most frequently cited, followed by “computer” and “Chat GPT,” indicating a strong connection between AI and technological applications. The release of Chat GPT had a significant impact on students' associations, with a marked increase in mentions of Chat GPT and related assistance systems, such as Siri and Snapchat AI. The results reveal a shift in students' perception of AI, from abstract, futuristic concepts to more immediate, application-based associations. Network analysis further demonstrated how terms were semantically clustered, emphasizing the prominence of assistance systems in students' conceptions. The findings underscore the importance of integrating AI education that fosters both critical reflection and practical understanding of AI, encouraging responsible engagement with the technology. These insights are crucial for shaping the future of AI literacy in schools and universities.
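The network analysis mentioned in the abstract can be sketched as a co-occurrence graph: terms named by the same student become linked nodes, and hub terms and clusters can then be inspected. A minimal sketch follows, assuming invented toy association lists rather than the study's actual data or pipeline.

```python
# A co-occurrence network sketch with networkx; the responses are hypothetical.
from itertools import combinations
import networkx as nx

responses = [                        # toy stand-ins for students' association lists
    ["robot", "computer", "Chat GPT"],
    ["Chat GPT", "Siri", "assistance"],
    ["robot", "future"],
]

G = nx.Graph()
for terms in responses:
    for a, b in combinations(terms, 2):
        # increment the edge weight for each co-mention of a term pair
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Degree centrality surfaces hub terms such as "Chat GPT" or "robot"
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:3])
```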
... Our findings highlight a gap between how LLMs are used in practice and how the models are being evaluated. Ubiquitous performance degradation over multi-turn interactions is a likely reason for the low uptake of AI systems [73, 4, 28], particularly among novice users, who are less skilled at providing complete, detailed instructions from the outset of a conversation [87, 35]. ...
... Notable model providers acknowledge the non-determinism implicitly or explicitly: Anthropic recommends sampling multiple times to cross-validate output consistency [4], Google likewise notes that its model outputs are mostly deterministic [5], and OpenAI recommends setting the seed parameter to further reduce non-determinism [6]. Nevertheless, we caution users that multi-turn conversations can be increasingly unreliable owing to divergent LLM responses. ...
Large Language Models (LLMs) are conversational interfaces. As such, LLMs have the potential to assist their users not only when they can fully specify the task at hand, but also to help them define, explore, and refine what they need through multi-turn conversational exchange. Although analysis of LLM conversation logs has confirmed that underspecification occurs frequently in user instructions, LLM evaluation has predominantly focused on the single-turn, fully-specified instruction setting. In this work, we perform large-scale simulation experiments to compare LLM performance in single- and multi-turn settings. Our experiments confirm that all the top open- and closed-weight LLMs we test exhibit significantly lower performance in multi-turn conversations than single-turn, with an average drop of 39% across six generation tasks. Analysis of 200,000+ simulated conversations decomposes the performance degradation into two components: a minor loss in aptitude and a significant increase in unreliability. We find that LLMs often make assumptions in early turns and prematurely attempt to generate final solutions, on which they overly rely. In simpler terms, we discover that *when LLMs take a wrong turn in a conversation, they get lost and do not recover*.
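The mitigations mentioned in the excerpt above (repeated sampling plus a fixed seed) can be sketched with the OpenAI Python client. This is a minimal illustration under stated assumptions, not any provider's reference code: the model name, prompt, and sample count are invented, and the seed parameter only reduces, never eliminates, non-determinism.

```python
# Sample the same prompt several times and tally the distinct answers.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def sample_responses(prompt: str, n_samples: int = 5, seed: int = 42) -> Counter:
    """Query the model n_samples times and count how many distinct outputs appear."""
    answers = []
    for _ in range(n_samples):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",   # illustrative model choice, not from the paper
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
            seed=seed,             # best-effort determinism only
        )
        answers.append(resp.choices[0].message.content.strip())
    return Counter(answers)

tally = sample_responses("In one word: is 7919 prime?")
print(tally)  # more than one key means the output is not fully deterministic
```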
... The OECD 2030 Learning Framework (OECD 2019) stresses the need to equip students with both the cognitive and socioemotional capacities to cope in our times. But this world is also highly and increasingly digitised, and especially amid the current boom in artificial intelligence, end-users' expectations and evaluations of algorithmic output are evolving (Brauner et al. 2023). An ability, or even a propensity, to blindly compute one's way through life's challenges is simply not good enough. ...
In Singapore, where primary and secondary students routinely top standardized worldwide mathematics examinations, a paradox emerges: when reaching university, many struggle to apply their skills critically in real-world contexts. This commentary examines the challenges and strategies involved in teaching quantitative reasoning (QR) to mathematically literate students in a top-ranking Singaporean university. While these students arrive well-trained in computation and procedural problem-solving, they often lack confidence and flexibility in ambiguous, data-driven decision-making. This article argues that fostering QR education is crucial not only for Singapore but for education globally, as QR skills underpin evidence-based reasoning within and across disciplines. Such an approach would involve embracing the novelty of QR, cultivating confidence through inquiry-based learning, building skills through authentic problem-solving, and fostering a collaborative environment where communication – perhaps over and above computation – is a core competency.
... Both individual career decisions and the societal/political decisions surrounding technological regulation and the redistribution of economic goods require that far more comprehensive information than is currently available be provided on the nature, working mechanisms, possibilities, and dangers of AI technology, so that responsible citizens, exercising their democratic rights, can make informed decisions about both their individual and shared futures; as the research of Brauner et al. (2023) also implies, strong misunderstandings and prejudices currently stand in the way of enlightened public discourse. ...
The significant advance of artificial intelligence (AI) solutions in the software industry is bringing profound changes that are reshaping not only the structure of the labour market but also significantly affecting how programmers work and their labour-market prospects. This study examines the effects of applying AI solutions in the software industry from a business-studies perspective, with particular attention to the work of programmers. The research carries out a qualitative in-depth-interview analysis using questions developed from quantitative data, focusing specifically on the experiences and perspectives of senior programmers, while placing explicit emphasis on the question of "generational" problems between junior and senior professionals. The article aims to provide a comprehensive picture of how AI influences professionals' roles, working methods, and career opportunities, and of its broader impact on the programmers' segment of the labour market.
... Understanding public sentiment is increasingly recognized as essential for developers, policymakers, and businesses, particularly in the context of rapidly evolving AI technologies like ChatGPT. The integration of AI into daily life necessitates a nuanced comprehension of public attitudes and concerns, as these perceptions can significantly influence the trajectory of AI development and its acceptance in society [6], [7], [8]. For instance, Müller and Bostrom highlight that understanding public sentiment is crucial for guiding responsible AI development and informing policymaking, especially as AI technologies continue to shape various sectors. ...
... Without a deep understanding of these societal reactions, the development of AI technologies may encounter resistance or be misaligned with public expectations [6]. Research shows that public perception of AI is shaped by a complex interplay of admiration for its potential benefits and apprehension regarding its risks [8]. While AI promises improvements in efficiency, healthcare, and many other fields, it also raises concerns about job displacement, privacy breaches, and ethical dilemmas. ...
The rapid emergence of artificial intelligence (AI) technologies has ignited global discussions, particularly around ChatGPT, an AI tool designed to transform how humans interact with digital systems. This study explores public sentiment and emotional reactions towards ChatGPT during its initial launch period, analyzing a dataset of tweets sourced from Kaggle. Leveraging the VADER sentiment analysis algorithm, the research categorizes user reactions into positive, negative, and neutral sentiments, while also identifying key emotional tones such as joy, fear, and skepticism. The findings reveal that positive sentiment prevailed, reflecting excitement about ChatGPT’s innovative capabilities, while concerns regarding ethics and job displacement gradually surfaced, underscoring the dual nature of public opinion. Through visualizations such as bar charts, time-based sentiment trends, and word clouds, the study highlights the dynamic engagement of users with ChatGPT and its broader implications for society. Key insights suggest that public perceptions of AI are influenced by its perceived utility, accessibility, and ethical considerations. While the study demonstrates the efficacy of VADER in capturing sentiment trends, it also acknowledges limitations, including the inability to detect sarcasm or nuanced emotional expressions. The implications of this research extend to AI developers, policymakers, and researchers, emphasizing the importance of public engagement strategies that address ethical concerns and build trust. Additionally, the study contributes to the growing body of knowledge on digital society, offering a framework for understanding how emerging technologies shape public discourse. Future research could focus on comparative analyses across different social media platforms or delve deeper into the evolution of public sentiment over time. By unraveling these complexities, this study aims to guide the responsible development and deployment of AI technologies in an increasingly interconnected world.
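The VADER step the abstract describes maps each tweet's compound score onto the three sentiment classes. Below is a minimal sketch, assuming the conventional ±0.05 thresholds and invented example tweets; it mirrors the classification step, not the study's full pipeline.

```python
# Classify tweets with vaderSentiment's compound score.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def classify(tweet: str) -> str:
    """Map VADER's compound score in [-1, 1] to positive/negative/neutral."""
    compound = analyzer.polarity_scores(tweet)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

for tweet in ["ChatGPT is amazing!", "Worried it will take my job."]:
    print(tweet, "->", classify(tweet))
```

As the abstract notes, lexicon-based scoring of this kind cannot detect sarcasm or nuanced emotional expressions, which is a stated limitation of the study.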
... However, human attitudes toward AI systems can affect people's willingness to team up with them. Scholars indicate that positive human perceptions of AI facilitate the spread and adoption of AI in business (Brauner et al., 2023). However, research remains limited in understanding human perception of AI, its trajectory, and its impact in different contexts. ...
The relationship between humans and artificial intelligence has sparked considerable debate and polarized opinions. A significant focus in this discourse is the potential for humans and AI to augment one another in order to enhance outcomes. Despite the increasing interest in this subject, the existing research is fragmented and dispersed across various management disciplines, making it challenging for researchers and practitioners to build upon and benefit from a cohesive body of knowledge. This study offers an organized literature review that synthesizes the current literature and research findings, thereby establishing a foundation for future inquiries. It identifies three emerging themes related to the nature, impacts, and challenges of Human-AI augmentation, further delineating them into several associated topics. The study presents the research findings related to each theme and topic before proposing a future research agenda and questions.
... Studies have shown that a mix of human input, training data, and system-design errors frequently contributes to the sense of bias in AI. Gaining user trust, addressing ethical issues, and promoting inclusion in AI applications all depend on an understanding of these perspectives (Brauner et al., 2023; Jones-Jang & Park, 2023). To close the gap between technical performance and societal expectations, this study combines different viewpoints to investigate how the public and the workplace view Gen AI biases. ...
Generative Artificial Intelligence (Gen AI) is perhaps one of the most significant technological inventions of the last decade. It enhances content generation across various domains, from personal messages to work-related tasks, encompassing text, images, and videos. However, there have also been several debates surrounding its inadvertent risks of bias and of perpetuating stereotypes (Ferrara, 2023; Xavier, 2024) from both gender and racial perspectives (Nicoletti & Bass, 2023; Zhou, 2024; Sadeghiani, 2024). Today, Gen AI is also being used in the workplace, with many organisations adopting custom-built Gen AI tools as part of their working tools and systems. Consequently, the objective of this research was to find out whether employees perceive work-related outputs of Gen AI to be biased against women, people of colour, or neurodiverse people, and how this perception compares to that of the public, who use Gen AI tools for both work and non-work-related purposes. A mixed-methods approach was employed. From the workplace perspective, quantitative data were collected from UK employees using a structured questionnaire. From the public perspective, qualitative data were collected through text mining of tweets on the X platform using specific keywords. Findings showed that, while workplace respondents reported modest levels of perceived bias across all groups, public sentiment analysis and themes showed significant mistrust and negative perceptions of bias in generative AI outputs for women and people of colour. There was, however, a positive perception relating to neurodiverse people: the public data showed positive sentiment toward Gen AI outputs for this group, with users viewing it as a tool that helps dyslexic users communicate better.
... • Baseline Attitudes: Personal experiences with AI systems have been found to impact user perceptions of AI [28, 29]. Therefore, we ask participants to choose a set of five values from Jakesch et al. [30] that they consider most important for clinical AI systems and to indicate their attitudes towards AI on a 5-point Likert scale. ...
Explainable AI (XAI) techniques are necessary to help clinicians make sense of AI predictions and integrate predictions into their decision-making workflow. In this work, we conduct a survey study to understand clinician preference among different XAI techniques when they are used to interpret model predictions over text-based EHR data. We implement four XAI techniques (LIME, Attention-based span highlights, exemplar patient retrieval, and free-text rationales generated by LLMs) on an outcome prediction model that uses ICU admission notes to predict a patient's likelihood of experiencing in-hospital mortality. Using these XAI implementations, we design and conduct a survey study of 32 practicing clinicians, collecting their feedback and preferences on the four techniques. We synthesize our findings into a set of recommendations describing when each of the XAI techniques may be more appropriate, their potential limitations, as well as recommendations for improvement.
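Of the four XAI techniques the abstract names, LIME is the easiest to sketch in isolation. The example below assumes a toy TF-IDF + logistic-regression stand-in for the mortality model, which is not public, along with invented note snippets; only the LimeTextExplainer usage mirrors the technique the study evaluates.

```python
# LIME word-level explanation for a toy text classifier over clinical-style notes.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = ["stable vitals, alert and oriented", "hypotensive, intubated, sepsis"]
labels = [0, 1]  # 1 = in-hospital mortality (toy labels, not real data)

# Hypothetical stand-in model; the study's actual predictor is not available.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

explainer = LimeTextExplainer(class_names=["survives", "mortality"])
exp = explainer.explain_instance(
    "intubated with worsening sepsis", model.predict_proba, num_features=3
)
print(exp.as_list())  # word-level weights of the kind a clinician could inspect
```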