Article

Abstract

This study explores whether AI is perceived as a threat or an opportunity and examines whether the perception of AI varies according to demographic factors, religious commitment, trust in science, belief in conspiracy theories, and previous experience with AI, using data collected from 1,443 participants. While most respondents believe that AI will simplify life (63%) and increase efficiency (62%), and that its development should therefore be encouraged (51%), a significant portion are concerned that AI will increase unemployment (52%) and lead to social inequalities (47%). Around 21% of respondents believe that AI may eventually destroy humanity. Respondents' age, gender, occupation, religious commitment, belief in conspiracy theories, and previous experience with AI (familiarity) significantly influence their perception of AI as both an opportunity and a threat. The findings suggest that AI is paradoxically seen as a double-edged sword, perceived as both an opportunity and a threat, which indicates pronounced confusion about AI among respondents. Keywords: Artificial intelligence • Social impacts • Opportunities • Threats. https://www.tandfonline.com/doi/full/10.1080/10447318.2023.2297114
... Novawan et al. (2024) highlighted that many lecturers lack the technical expertise necessary to utilize AI tools effectively, presenting a significant obstacle. Bozkurt and Gursoy (2023) discovered mixed perceptions among lecturers, who viewed AI as both an opportunity and a threat, with some expressing concerns over job security and loss of control, indicating resistance to technological adoption. Furthermore, ethical and privacy issues, as discussed by Nguyen et al. (2023), raise critical questions about data privacy, consent, and the broader ethical implications of using AI in education, emphasizing the need for comprehensive training and robust ethical guidelines to navigate these challenges responsibly. ...
... The dissemination of misinformation further fuels the evolution of conspiracy theories, incorporating details from unrelated theories and real-world events (Gerts et al., 2021). This phenomenon extends to the topic of AI (Bozkurt & Gursoy, 2023); for instance, North Dakota passed a law denying legal personhood to AI due to fears of government replacement (Conspiracy Theories Give a Boost to State Efforts to Tackle AI, 2024). Authorities claim this action was taken to protect citizens from the potential danger of AI replacing their politicians. ...
... This research develops an empirically grounded framework which contributes to the marketing literature by providing insights into customers' responses to the use of generative AI in digitalizing content production and consumption processes and how different types of technology affordances are actualized in support of gradual digitalization. Inspired by Bozkurt and Gursoy's (2023) work proposing that AI is a double-edged sword, presenting both an opportunity and a threat, this research illuminates both the affordances and constraints of generative AI technology. ...
Article
Full-text available
Generative artificial intelligence (AI) has gained prominence across various industries and domains, offering capabilities to generate human‐like text, creative ideas, and solutions. This paper explores customers' responses to the use of generative AI in digitalizing content production and consumption processes. Drawing on technology affordance theory, this article examines how the affordances of generative AI are leveraged to contribute to the gradual digitalization of individuals. This netnographic study is based on over nine months of naturalistic observation of the AI Community online, culminating in 1,572 pages of data. The findings identify different types of affordances that foster digitalization: automated content creation, automated data analysis, and AI‐generated content dissemination. This study also identifies the constraints of generative AI and discusses potential interventions to address these constraints and prevent unintended consequences. This research provides insights for scholars, professionals, and educators to better understand the dynamics of leveraging generative AI.
... Consumers often voice hesitancy to use new technologies, such as those integrated with artificial intelligence (Zhan et al., 2023). Such hesitancies frequently stem from fears that AI will increase unemployment, lead to social inequality, and potentially threaten humanity (Bozkurt & Gursoy, 2023), suggesting that technology avoidance may be driven by our innate need to protect ourselves and our ingroups. The Technology Acceptance Model (Davis, 1989) is typically used to explain consumer reluctance to engage with new technologies, but evolutionary perspectives might lead to a more nuanced view. ...
Article
Full-text available
This paper is the first to offer a comprehensive literature review of the role of evolutionary psychology (EP) in marketing and consumer behavior. This study takes a holistic approach, combining techniques of a systematic review with bibliometric analysis, to provide a performance analysis and identify theories and methodologies used in the literature. Most importantly, by studying the current state of EP, we elucidate six major themes: the role of gender in families, the role of affect in consumer behavior, food preferences and shopping behavior, motivations for and consequences of status signaling, the impact of ovulation on consumer motives and behaviors, and contributions to the greater good. The findings enable researchers to understand the current state of the literature. Further, to advance the application of EP in consumer behavior, we identify gaps related to each theme and offer research questions that can serve as catalysts for future research. Thus, we offer two primary contributions: a comprehensive overview of the literature as it relates to methods, theories, and themes and detailed guidance that can be used to invigorate research on EP.
... In a liberal market economy, the upper classes have rapid access to new technologies and can benefit significantly from them, while the lower classes do not enjoy these benefits to the same extent. This can be seen as a reflection of technological injustice [1]. The development of AI technology is currently in the hands of a small number of large companies. ...
Article
Full-text available
The aim of this study is to examine the risks associated with the use of artificial intelligence (AI) in medicine and to offer policy suggestions to reduce these risks and optimize the benefits of AI technology. AI is a multifaceted technology. If harnessed effectively, it has the capacity to significantly impact the future of humanity in the field of health, as well as in several other areas. However, the rapid spread of this technology also raises significant ethical, legal, and social issues. This study examines the potential dangers of AI integration in medicine by reviewing current scientific work and exploring strategies to mitigate these risks. Biases in data sets for AI systems can lead to inequities in health care. Training data that narrowly represents a single demographic group can lead to biased AI outputs for those who do not belong to that group. In addition, the concepts of explainability and accountability in AI systems could create challenges for healthcare professionals in understanding and evaluating AI-generated diagnoses or treatment recommendations. This could jeopardize patient safety and lead to the selection of inappropriate treatments. Ensuring the security of personal health information will be critical as AI systems become more widespread. Therefore, improving patient privacy and security protocols for AI systems is imperative. The report offers suggestions for reducing the risks associated with the increasing use of AI systems in the medical sector. These include increasing AI literacy, implementing a participatory society-in-the-loop management strategy, and creating ongoing education and auditing systems. Integrating ethical principles and cultural values into the design of AI systems can help reduce healthcare disparities and improve patient care.
Implementing these recommendations will ensure the efficient and equitable use of AI systems in medicine, improve the quality of healthcare services, and ensure patient safety.
Article
The aim of this article is to examine the future imaginaries produced in the global arena concerning the transformation of working alongside artificial intelligence. The article contributes to the field by conducting this examination within the framework of the sociology of the future, a subfield that does not yet have a presence in Turkish sociology. It thereby aims to introduce to Turkish sociology both international research on artificial intelligence and the transformation of working life, and the field of the sociology of the future itself. In doing so, it seeks both to contribute to the recently emerging literature on artificial intelligence and to serve as an introduction to the sociology of the future within Turkish sociology. The article examines how the reports of global institutions discuss the changes that artificial intelligence will bring to working life; in this context, the future imaginaries constructed around the transformation of the workforce are evaluated within the framework of the sociology-of-the-future literature. To this end, three institutions and groups with the power to shape global discourse on artificial intelligence were selected: the International Monetary Fund (IMF), the World Economic Forum (WEF), and the Global Partnership on Artificial Intelligence (GPAI), which conducts its work under the auspices of the Organisation for Economic Co-operation and Development (OECD) and consists of experts from different fields. The methodological approach of the article is to deconstruct the future-scenario studies produced by these global institutions through textual analysis.
Article
Full-text available
Artificial Intelligence (AI) emerges as a transformative force set to reshape industries and societies, yet it also presents significant threats to humanity. This study explores the multifaceted risks associated with AI, analyzing ethical, economic, and governance dimensions. Ethical concerns arise from algorithmic bias, privacy breaches, and the potential misuse of autonomous weapons. Economically, AI-driven automation poses a threat of widespread job displacement, demanding proactive measures to address societal inequality. Additionally, AI’s security vulnerabilities jeopardize cyber security and national safety, necessitating robust protective measures. Moreover, the prospect of AI surpassing human intelligence raises existential questions, prompting the exploration of strategies for its safe development. Tackling these challenges requires coordinated international efforts, including regulatory reforms and ethical frameworks. By examining AI threats comprehensively, this research aims to guide policymakers, technologists, and society toward responsible AI deployment for humanity’s well-being in the AI-driven era.
Conference Paper
Despite being one of the oldest industries, mining continues to be a major source of pollution, with more people killed or injured than in all other industries. Additionally, social tension related to this sector is widespread around the world, since mining businesses continue to have a significant negative influence on land, water, air, biota, and people through direct and indirect mechanisms. Mining machinery workplaces, which are the focus of this study, have the largest environmental footprint. The dominance of technology-centered design in present research streams is the most likely explanation for the lack of advancement in the mining industry. The SmartMiner project shifts away from technology-centered design: its concept creates solutions for improving environmental quality in complex systems and proposes a paradigm change toward Human- and Data-Centric Engineering. By aligning advanced operator I4.0&5.0 and society S5.0 standards, the SmartMiner project develops solutions for raising the level of environmental quality in the complex interactions between physical, behavioural, and organizational processes. The proposed paradigm can be readily transferred to other industries. The safety of mining machinery operators in their immediate surroundings and their regular alignment with value chain stakeholders are the first steps in our original idea approval process. The research then moves to the operator macro-environment, which is determined by organizational contextual factors, and encompasses the development of intelligent, ergonomic, non-invasive, and dependable operator aid systems for regulating physical environment job stressors (noise, human vibration, lighting, temperature, air quality, workplace layout issues, etc.), with high potential to solve environmental and human health issues and to influence overall performance.
Article
Full-text available
Over the last decade, technological advancements, especially artificial intelligence (AI), have significantly transformed educational practices. Recently, the development and adoption of Generative Pre-trained Transformers (GPT), particularly OpenAI's ChatGPT, has sparked considerable interest. The unprecedented capabilities of these models, such as generating humanlike text and facilitating automated conversations, have broad implications in various sectors, including education and health. Despite their immense potential, concerns regarding their widespread use and opacity have been raised within the scientific community. ChatGPT, the latest version of the GPT series, has displayed remarkable proficiency, passed the US bar exam, and amassed over a million subscribers shortly after its launch. However, its impact on the education sector has elicited mixed reactions, with some educators heralding it as a progressive step and others raising alarms over its potential to reduce analytical skills and promote misconduct. This paper aims to delve into these discussions, exploring the potential and problems associated with applying advanced AI models in education. It builds on extant literature and contributes to understanding how these technologies reshape educational norms in the "new AI gold rush" era.
Preprint
Full-text available
Artificial Intelligence (AI) based applications are an ever-expanding field, with an increasing number of sectors deploying this technology. While previous research has focused on trust in AI applications or familiarity as predictors of AI usage, we aim to expand current research by investigating the influence of knowledge as well as AI risk and opportunity perception as possible predictors of AI usage. To this end, we conducted a study (N = 450, representative in terms of age and gender) covering a broad range of domains (health, eldercare, driving, data processing, and art), assessing well-established variables in AI research (trust, familiarity) as well as knowledge about AI and risk and opportunity assessment. We further investigated the influence of AI use-related ratings on AI usage. Results show that the newly investigated variables best predict overall intention to use, above and beyond trust and familiarity. Higher AI-related knowledge, more positive use-related ratings, and lower risk perception significantly predict general AI use intention, with a similar trend emerging for domain-specific AI use intention. These findings highlight the relevance of knowledge, risk and opportunity assessment, and use-related ratings in understanding laypeople's intention to use AI-based applications, and open a new roster of research questions about people's AI use behavior intentions and their perception of AI.
Article
Full-text available
Background: This paper reviews nationally representative public opinion surveys on artificial intelligence (AI) in the United States, with a focus on areas related to health care. The potential health applications of AI continue to gain attention owing to both their promise and their challenges. For AI to fulfill its potential, it must be adopted not only by physicians and health providers but also by patients and other members of the public. Objective: This study reviews the existing survey research on the United States' public attitudes toward AI in health care and reveals the challenges and opportunities for more effective and inclusive engagement on the use of AI in health settings. Methods: We conducted a systematic review of public opinion surveys, reports, and peer-reviewed journal articles published on Web of Science, PubMed, and Roper iPoll between January 2010 and January 2022. We included studies that are nationally representative US public opinion surveys and include at least one question about attitudes toward AI in health care contexts. Two members of the research team independently screened the included studies. The reviewers screened study titles, abstracts, and methods for Web of Science and PubMed search results. For the Roper iPoll search results, individual survey items were assessed for relevance to the AI health focus, and survey details were screened to determine a nationally representative US sample. We reported the descriptive statistics available for the relevant survey questions. In addition, we performed secondary analyses on 4 data sets to further explore the findings on attitudes across different demographic groups. Results: This review includes 11 nationally representative surveys. The search identified 175 records, 39 of which were assessed for inclusion.
Surveys include questions related to familiarity and experience with AI; applications, benefits, and risks of AI in health care settings; the use of AI in disease diagnosis, treatment, and robotic caregiving; and related issues of data privacy and surveillance. Although most Americans have heard of AI, they are less aware of its specific health applications. Americans anticipate that medicine is likely to benefit from advances in AI; however, the anticipated benefits vary depending on the type of application. Specific application goals, such as disease prediction, diagnosis, and treatment, matter for the attitudes toward AI in health care among Americans. Most Americans reported wanting control over their personal health data. The willingness to share personal health information largely depends on the institutional actor collecting the data and the intended use. Conclusions: Americans in general report seeing health care as an area in which AI applications could be particularly beneficial. However, they have substantial levels of concern regarding specific applications, especially those in which AI is involved in decision-making and regarding the privacy of health information.
Article
Full-text available
The aim of this paper is to examine the change in anxiety and feelings of depression within the Turkish population, including the factors behind these changes, during the most intense period of the COVID-19 pandemic crisis. Data were collected online from a population with similar characteristics using the convenience sampling method at the beginning of the pandemic (2020) and during its second year (2021). After parsing the data, a total of 9,369 questionnaires were evaluated. The Anxiety and Depressive Complaints questionnaire was prepared based on the conditions related to COVID-19. The scale was produced by selecting from a large set of questions using Factor Analysis (FA). The Confirmatory Factor Analysis (CFA) values of the measurement tool fell within the acceptable limits. It was observed that both anxiety and feelings of depression were extraordinarily high during this period. The data showed that gender, family communication problems, trust in the state, fear of losing one’s job, religious involvement, and time had predictive effects on anxiety. All the predictive variables for anxiety also had significant effects on depressive complaints. Age, household income, and living in rural or urban areas were also determined to be predictive for depressive complaints. Keywords: COVID-19 Pandemic • Anxiety • Depressive complaints • Trust • Religious commitment and predictive variables
Article
Fear of artificial intelligence (AI) has become a predominant term in users’ perceptions of emerging AI technologies. Yet we have limited knowledge about how end users perceive different types of fear of AI (e.g., fear of artificial consciousness, fear of job replacement) and what affordances of AI technologies may induce such fears. We conducted a survey (N = 717) and found that while synchronicity generally helps reduce all types of fear of AI, perceived AI control increases all types of AI fear. We also found that perceived bandwidth was positively associated with fear of artificial consciousness, but negatively associated with fear of learning about AI, among other findings. Our study provides theoretical implications by adopting a multi-dimensional fear of AI framework and analyzing the unique effects of perceived affordances of AI applications on each type of fear. We also provide practical suggestions on how fear of AI might be reduced via user experience design.
Article
This work examines worldview predictors of attitudes toward nanotechnology, human gene editing (HGE), and artificial intelligence. By simultaneously assessing the relative predictive value of various worldview variables in two Dutch samples (total N = 614), we obtained evidence for spirituality as a key predictor of skepticism across domains. Religiosity consistently predicted HGE skepticism only. Lower faith in science contributed to these relationships. Aversion to tampering with nature predicted skepticism across domains. These results speak to the importance of religiosity and spirituality for scientific innovation attitudes and emphasize the need for a detailed consideration of worldviews that shape these attitudes.
Chapter
In this chapter we extend earlier work (Vinuesa et al., Nat Commun 11, 2020) on the potential of artificial intelligence (AI) to achieve the 17 Sustainable Development Goals (SDGs) proposed by the United Nations (UN) for the 2030 Agenda. The present contribution focuses on three SDGs related to healthy and sustainable societies, i.e., SDG 3 (on good health), SDG 11 (on sustainable cities), and SDG 13 (on climate action). This chapter extends the previous study within those three goals and goes beyond the 2030 targets. These SDGs are selected because they are closely related to the coronavirus disease 2019 (COVID-19) pandemic and also to crises like climate change, which constitute important challenges to our society. Keywords: AI • SDGs
Article
The extensive implementation of smart technology, artificial intelligence, automation, robotics, and algorithms (STAARA) in hospitality services has accelerated the need to understand their potential influence on hotel employees' career perceptions. This study conducted two scenario-based experiments based on disruptive innovation theory and the Stimulus-Organism-Response (SOR) model to examine how STAARA awareness shapes hotel employees' job insecurity and mobility. In addition, this study investigated career progression as a counterstrategy to attenuate the substitution effect of STAARA. Study 1 demonstrated that hotel employees' negative (vs. positive) awareness of STAARA leads to higher job insecurity and mobility. Furthermore, according to Study 2, for hotel employees with low-level career progression, negative (vs. positive) awareness of STAARA induces higher job insecurity and mobility. However, among employees with high-level career progression, there were no significant differences, which means that high-level career progression attenuates the impact of STAARA. This study also discusses theoretical and practical implications.
Article
Artificial intelligence (AI) is the capacity of a machine or computer system to simulate and perform tasks that would normally require human intelligence, such as logical reasoning, learning, and problem solving. Artificial intelligence relies on algorithms and machine-learning technologies to give machines the ability to apply certain cognitive skills and carry out tasks on their own, autonomously or semi-autonomously. Artificial intelligence can be distinguished by its degree of cognitive capacity or by its degree of autonomy. By capacity, it may be narrow (weak), general, or superlative. By autonomy, it may be reactive, deliberative, cognitive, or fully autonomous. As artificial intelligence improves, many processes become more efficient, and tasks that seem complicated today will be performed with greater speed and precision.
Article
The present study adapted the General Attitudes toward Artificial Intelligence Scale (GAAIS) to Turkish and investigated the impact of personality traits, artificial intelligence anxiety, and demographics on attitudes toward artificial intelligence. The sample consisted of 259 female (74%) and 91 male (26%) individuals aged between 18 and 51 (Mean = 24.23). Measures taken were demographics, the Ten-Item Personality Inventory, the Artificial Intelligence Anxiety Scale, and the General Attitudes toward Artificial Intelligence Scale. The Turkish GAAIS had good validity and reliability. Hierarchical Multiple Linear Regression Analyses showed that positive attitudes toward artificial intelligence were significantly predicted by the level of computer use (β = 0.139, p = 0.013), level of knowledge about artificial intelligence (β = 0.119, p = 0.029), and AI learning anxiety (β = −0.172, p = 0.004). Negative attitudes toward artificial intelligence were significantly predicted by agreeableness (β = 0.120, p = 0.019), AI configuration anxiety (β = −0.379, p < 0.001), and AI learning anxiety (β = −0.211, p < 0.001). Personality traits, AI anxiety, and demographics play important roles in attitudes toward AI. Results are discussed in light of the previous research and theoretical explanations.