Running Head: FACTORS INFLUENCING PUBLIC ACCEPTANCE OF AI/ROBOT
Effects of country and individual factors on public acceptance of artificial intelligence and
robotics technologies: A multilevel SEM analysis of 28-country survey data
Abstract
Using data from 28 European countries, this study examines factors influencing public
attitude towards the use of AI/Robot. Its multilevel SEM analysis finds that several factors at the
individual level including Perceived threat of job loss and Digital technology efficacy predict public Acceptance
of AI/Robot. Although country-level factors such as economic development, government
effectiveness and innovation do not directly influence public acceptance of AI/Robot, they do have
significant effects on Perceived threat of general job loss due to AI/Robot, and Digital technology efficacy.
Findings indicate that these national macro-level variables influence people’s perceptions of AI and
robotics technologies and their confidence in their digital skills. This research enriches the
application of the Technology Acceptance Model by using predictive variables at two levels:
individual and country. Furthermore, at the individual level, this study uses two variables (i.e.,
Perceived threat of job loss and Digital technology efficacy) that are unconventional to TAM, thus
contributing to this theoretical model.
Keywords: technology acceptance; multilevel SEM; AI/Robot acceptance; public opinion on
technology
Citation: Vu, H. T. & Lim, J. (Accepted for publication). Effects of country and individual factors
on public acceptance of artificial intelligence and robotics technologies: A multilevel SEM analysis of
28-country survey data. Behaviour & Information Technology.
Effects of country and individual factors on public acceptance of artificial
intelligence and robotics technologies: A multilevel SEM analysis of 28-country survey data
Issues related to emerging technologies, including artificial intelligence (AI) and robotics (e.g.,
robotic surgery and self-driving cars), have gained increasing prominence in the news recently (Chuan et al.,
2019). Stories on these topics have been concerned with how beneficial these new technologies can
be to society (Fast & Horvitz, 2017). Paradoxically, amidst all the hype about AI and robotics, fears
and concerns over how the applications of these new technological advances in various industries
may influence the ways in which humans operate have also dampened the buzz in the news on these
emerging technologies. Discussions revolving around AI/Robot have primarily focused on three
broad adverse consequences: job loss, criminal use, and ethical issues (Garimella, 2018).
In scholarly areas, thrilled by the seemingly limitless applications of these new
technologies, researchers have conducted many studies on how to utilize such technological
advances in different research fields (Alam et al., 2019; Jha & Topol, 2016; Wenger, 2014; Zang et
al., 2015). Similarly, industry reports have revealed possible prospects and disruptions AI may have
to the global economy (Chui et al., 2018; Gillham et al., 2018). Less clear, however, are the societal
aspects of these emerging digital technologies, which could potentially be a key to whether the
public and, perhaps, policymakers would welcome the applications of such technologies. Public
perception and acceptance of emerging science and highly specialist technologies remain a chronically
understudied area (Scheufele & Lewenstein, 2005). Limited research has been done on such societal
dimensions of the new technologies. This study fills this gap.
The primary purpose of this research is to examine what factors influence individuals’
attitude towards the use of AI and robotic technologies. In doing so, this study adopts a cross-
country comparative approach using nationally representative survey data collected in 28 European
countries. Theoretically, this study tests the technology acceptance model on the public’s willingness
to accept AI/Robot. This study adds to the literature of this theoretical framework by assessing the
effects of several macro variables on public perceptions of AI and robotic technologies. We argue
that besides individual-level variables (e.g., age, education, digital efficacy or perception of threat of
job loss among others), country-level variables (e.g., economic development represented by GDP
per capita, governance represented by government effectiveness, and innovation) can also influence
the public in accepting new technologies. Practically, findings of this research are expected to help
provide policymakers, research institutions as well as technology companies with insights into public
opinion on the important but also controversial use of AI and robotic technologies as well as their
applications in different fields.
Literature Review
Robot, AI and Their Societal Aspects
In its history, the robotics industry saw remarkable developments in the second half of
the 20th century. Robotics technologies transformed from using simple algorithms for
manipulating movements from one point to another in the 1950s and 1960s to having sensory
systems in the 1970s, which allowed for stronger awareness of the surrounding environment
(Gasparetto & Scalera, 2019a). The third generation of robots emerged between the late
1970s and the end of the 20th century, when robots were controlled by computer systems. Robotics
technology could also perform simple self-programming activities to execute different tasks
(Gasparetto & Scalera, 2019b). The latest generation, from 2000 to now, is characterized by
the birth of intelligent robots, which use advanced computers to reason and learn with sophisticated
sensory systems that allow them to effectively adapt to different environments. This generation of
robotics has also seen a stronger boost for professional and consumer robots which use innovative
AI technology (Zamalloa et al., 2017).
AI, however, is a fairly broad concept that refers to the use of algorithms to make a
computer perform complex functions that require human intelligence (Kurzweil, 1990; Russell &
Norvig, 2010). According to Nilsson (2010, p. 13), AI is an “activity devoted to making machines
intelligent, and intelligence is that quality that enables an entity to function appropriately and with
foresight in its environment.” The past few decades have seen an increasing use of AI in various
areas such as medicines (Cherkasov et al., 2008; Hamet & Tremblay, 2017; Zang et al., 2015),
chemistry and biology (Cartwright, 2008), manufacturing (Xiang & Lee, 2008), sales and marketing
(Siau & Yang, 2017), disaster response (Imran et al., 2014), and environmental monitoring (Chau,
2006) among others.
Advantages of AI are undoubtedly innumerable. Take, for example, the adoption of virtual
assistants including Alexa, Siri, Google Assistant or Cortana and the use of recommendation systems
in such websites as Amazon, Yelp, or TripAdvisor to see how much these applications of AI
influence our everyday activities (Ma & Siau, 2018). Recent advances in the development of AI have
enabled the popularization of robots in our daily life. In manufacturing, robotics and automation
efforts, which aimed to mechanize intelligence, have utilized AI for years as automated machines
and computers are relatively more productive and cost-efficient than humans (Autor, 2015; Frey &
Osborne, 2017). These applications, however, have only been categorized as Weak AI for their
focus on narrow tasks and operations within a pre-defined range of functions fed into the computer
system (Borana, 2016). Strong AI, according to Kurzweil (2010), has intelligence that can
surpass or replace humans. Although it may not (yet) be the answer to the philosophical question of
whether or not a machine can think, Strong AI is expected to have the ability to simulate human
thinking in a highly sophisticated way (Russell & Norvig, 2010).
Since the 2000s, the accumulation of digital data as well as advances in machine learning or
deep learning techniques in analyzing large amounts of data, identifying and learning from patterns,
have demonstrated that Strong AI has promising potential in completing tasks that are much more
complex and, in many cases, at a mind-blowing speed in various domains (Lake et al., 2017). Some
even claimed that AI and robotics are set to transform the way we live and work with enormous
impacts on society (Chui et al., 2018). The London-based professional services firm
PricewaterhouseCoopers recently predicted that AI-related technologies will contribute an equivalent of up to $15 trillion, an
increase of around 14%, to the global economy (Gillham et al., 2018), as these technologies are
expected to increase productivity, free up labor, save time, and improve the quality of products.
Besides its positive impacts, scholars and analysts have been concerned about how disruptive
the use of AI and robotics technologies can be to society. One of the most frequently mentioned
implications of robots and AI is their impact on the labor market. Improvements in robotics and
automation techniques are predicted to render many types of jobs obsolete. For example, in their
analysis of 702 occupation categories, Frey and Osborne (2017) estimated that 47% of U.S. workers
are at risk of seeing their jobs automated over the next 20 years. In a speech made in 2015, Andy
Haldane, the Bank of England’s Chief Economist, sent shockwaves across Britain when warning that 15
million workers in the U.K., about half of the country’s workforce, would be technologically
unemployed because of escalating computerization (Elliott, 2015). Projecting different scenarios
for 2030, a team of researchers from McKinsey Global Institute (2017) contended that with rapid
developments of AI and robotics technologies, the global job market will need a major
transformation of skill sets. About 375 million jobs worldwide will be automated. The
authors also pointed out that different types of occupations will be created to meet the needs for
technological skills. Others have argued that an increasing use of AI will have serious social
implications ranging from increasing income inequality (Korinek & Stiglitz, 2017), to breaching
individuals’ privacy (Lewis, 2018), to automating warfare (Doward, 2018; Vasquez, 2018), and to
causing catastrophic crises when in the wrong hands (Fry, 2018). These effects may also induce
other societal outcomes such as social unrests, riots and crimes as a large portion of the population
will be left out of this new industrial development era (Su, 2018). One point on which most of these
scholars agree, however, is that the adoption of AI and robots is inevitable. As such, public views
and acceptance of AI and robots play an important role in the social acceptance of these new
technologies.
Public Acceptance of Emerging Technology
Apart from the benefits they deliver, emerging technologies can also introduce risks
(Gupta et al., 2012). Public acceptance of emerging technologies has been shaped by how much risk
they will bring to society and whether that risk outweighs the benefits they deliver (Gaskell et
al., 2004). As a case in point, global public acceptance of nuclear technology decreased significantly
after the 2011 Fukushima nuclear disaster (Kim et al., 2013). Similarly, in their experimental
research, Caporale and Monteleone (2004) found that disclosing information about how beer was
produced had psychological effects on participants’ perception of the product. Participants preferred
the beer labeled as traditional more than the one labeled as GMO. According to the researchers,
such an influence had, perhaps, stemmed from the fact that consumers often perceive genetic
modifications as morally wrong and unnecessary in food manufacturing. Understanding what drives
public acceptance of new technologies is, therefore, important.
Public acceptance of robotics and AI technologies has varied, depending on the field where
these technologies are used, and the specific concerns that are associated with how they are used.
Results from a recent survey conducted in Europe indicated that public acceptance of the use of
robots in caring for children, the disabled, or the elderly was low. About 60% of Europeans believed
robots should be banned from these domains (Savela et al., 2018). In their experimental research,
Longoni et al. (2019) found that consumers were reluctant to use healthcare services offered by AI,
even when those services were of higher quality compared to those provided by humans. Rezaei and
Caulfield (2020), in a survey of members of the public in Ireland, discovered that only one fifth of
them were willing to use autonomous driving cars. Concerns about privacy had a negative impact on
their acceptance of this new technology. Findings from a study by Yoo et al. (2018) suggested that
the public in the United States held a generally favorable attitude and strong willingness to use drones
or unmanned aerial vehicles for parcel delivery. Public acceptance of robotics and AI technologies in
general is critical to the development of these technologies as negative public response can slow
down their commercialization (Gupta et al., 2012).
Technology Acceptance Model
The Technology Acceptance Model (TAM), one of the most widely used theoretical models, explains
factors influencing or predicting people’s motivation to adopt new technologies (Davis, 1985).
TAM, which was adapted from the Theory of Reasoned Action (Lee et al., 2003), conceptualizes
several antecedents to people’s intention to accept a new technology including perceived usefulness (PU)
and perceived ease of use (PEOU). Davis (1989, p. 320) defined PU as "the degree to which a person
believes that using a particular system would enhance his or her job performance.” PEOU,
according to Davis, refers to the degree to which users believe that using a particular system would be free of effort (Davis, 1989).
Originally, TAM was used to assess the general acceptance of computers. However, rapid
developments of technologies have seen an expansive use of TAM to examine the diffusion of
different new technologies (Marangunić & Granić, 2015). Ketikidis and colleagues (2012) found
empirical evidence of the applicability of TAM in predicting medical professionals’ intention to
adopt health information technologies. The results of a meta-analysis of TAM by Marangunic and
Granic (2015) indicated that most studies using the model found robust evidence supporting its
premises, suggesting a powerful predictability of this theoretical framework.
Since Davis first conceptualized the model more than 30 years ago, TAM has been extended
to add more variables. In the so-called augmented TAM, Vijayasarathy’s (2004) research results
pointed out that, besides PU and PEOU, compatibility and security influence consumers’ attitude
towards online shopping. Habib et al. (2019) found that seven factors including effort expectancy,
self-efficacy, perceived privacy, perceived security, trust in technology, price value and trust in
government significantly predicted people’s intention to use smart-city services. Results from Gessl
et al.’s (2019) research suggested that besides demographic variables, several personality dimensions
including agreeableness and neuroticism are associated with the future elderly’s (aged 20-60)
acceptance of artificially intelligent robotics. However, studies using TAM have mostly investigated
the effects of PU and PEOU. Other factors have less frequently been examined. The present study
takes a different approach by using different antecedents (e.g., variables at individual and country
levels) from the traditional ones. An important individual-level variable that this study will be
focusing on is the fear the public has about being replaced by technologies, as previous research
(Gillham et al., 2018; McClure, 2018) has shown that a popular concern among members of the
public is job loss due to AI and robots. Results from other industry reports added to that mounting
concern by providing forecasts on the number of people who would become technologically
unemployed in the near future due to technological advances (McKinsey Global Institute, 2017).
This study argues that the general concern of members of the public on the effects AI and robots
will have on the job market will affect their attitude toward these emerging technologies. Thus, it
hypothesizes that:
H1: Those who perceive AI/Robot as a threat to general job loss are less accepting of the
use of AI/Robot related technologies.
Past studies have examined efficacy as an important construct in predicting people’s
adoption of the latest technologies (Correia et al., 2017; Venkatesh, 2000; Zhang et al., 2017).
Venkatesh (2000), for example, suggested that efficacy, an intrinsic factor, should be added to TAM
for a stronger understanding of the mechanism by which people develop their (un)favorability
toward new technologies. For example, Asimakopoulos and colleagues (2017) reported that both
general efficacy and health technology efficacy were significantly associated with participants’
attitude toward fitness tracking devices. More closely related to this study, Latikka et al.’s (2019)
research found that robot use efficacy predicted the public’s acceptance of telepresence robot use.
Although efficacy has been found to influence attitude and adoption of new technologies, research
in this specific area remains limited. In this research, we attempt to examine TAM by including
Digital technology efficacy as an antecedent of AI and robotics technology acceptance. This study,
therefore, proposes:
H2: Digital technology efficacy will positively predict user acceptance of the use of
AI/Robot related technologies.
Besides individual-level variables, empirical evidence demonstrates that national factors
which weave together a broad techno-socio environment can impact the public’s acceptance of
technologies. Analyzing survey data on Europeans’ perceptions of the use of robots at work in two
different time points, 2012 and 2014, Turja and Oksanen (2019) found that national macro-level
variables including ICT exports, cellular phone ratio, and job automation risk significantly affected
public attitudes toward and intention to use robots. Similarly, others have discovered extensive
evidence of public policy (e.g., government effectiveness) and economic development (e.g., GDP
per capita) as influential antecedents of various types of technology adoption (e.g., mobile phone
penetration, ICT adoption) in different countries (Asongu & Biekpe, 2017; Corrales & Westhoff,
2006; Erumban, & de Jong, 2006). These country-level factors can form a broad techno-socio
environment that nourishes positive attitude towards and eventually acceptance of AI and robotic
technologies. Arguably, countries with a stronger techno-socio environment may also provide
people with more opportunities to directly experience AI- and robotic-related devices such as AI
speakers, intelligent robots, AI-driven cars and so on, thus inducing acceptance of AI/Robot. This
may also help reduce their fears of adverse technological effects and increase individuals’ digital
technology efficacy. Those who perceive the benefits of AI/Robot in daily life may also be less
concerned about the disappearance of existing jobs. This study hypothesizes that:
H3: Residents of countries with strong techno-socio environment are more likely to accept
the use of AI/Robot-related technologies (H3a), have stronger digital technology efficacy
(H3b), and feel less threatened by job loss due to AI (H3c).
In this study, we focused on public acceptance of several applications of AI/Robot including
robots performing medical operations; robots assisting at work; robots providing care and
companionship to sick or elderly people; robots and drones delivering goods; and traveling in a driverless car.
Method
Sample and Measures
This study used secondary survey data collected in 28 countries in Europe to test our hypotheses.
Responses to Eurobarometer surveys, which have generated nationally representative public opinion
data on a regular basis since 1974, were used (European Commission, 2018). The European
Commission contracted TNS Opinion and Social to survey the public in EU member states and
candidate countries on various topics related to public opinion on EU economic, political,
environmental, legal, cultural, and social issues. The data this paper used came from Eurobarometer 87.1,
which was based on interviews with 27,901 Europeans across 28 countries. Data were collected
between March 18 and 27, 2017. Eligible participants were those who were 15 years or older at the
data collection time. Around 1,000 participants from each member state were selected using random
sampling procedures. The survey used a strict multi-stage random sampling method. Specifically, in
each country, primary sampling units were first randomly selected after being stratified based on
the distribution of the country’s population and residential types (e.g., metropolitan, urban, and rural
areas). In the second stage, clusters of addresses from these primary sampling units were chosen.
Within these clusters, specific addresses were selected using systematic sampling with a random
starting point. In some countries (e.g., Britain), respondents were randomly selected based on their
electoral registers (Gesis, 2020).
In this study, there were two types of variables for our two-level analysis: individual and
country levels. The individual level consisted of five latent variables including Digital technology efficacy,
Perceived threat of general job loss, Usefulness, and Acceptance of AI/Robot, and one single-question
variable, Prior knowledge of AI.
<<<Insert figure 1 about here>>>
Individual-Level Variables
Regarding the reliability and validity of the variables used in our analysis, this study first provided
the internal consistency coefficients of the secondary survey items in the form of Cronbach’s alpha (α). The
convergent and discriminant validity of the items for individual-level variables were checked by
performing an explorative factor analysis and testing factor loadings (Cable & DeRue, 2002). The
results can be found in Table 1. For country-level variables (e.g., Innovation, Government
effectiveness, and GDP per capita), this study provided information on the original indices from which each of
the variables was obtained. These are appropriate indicators of the reliability and validity of the
variables used in this research (Cable & DeRue, 2002; Kimberlin & Winterstein, 2008).
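As an illustration of the internal consistency coefficient reported throughout this section, the following is a minimal Python sketch of Cronbach’s α (the responses are hypothetical toy data; the paper’s actual analysis was conducted in R):

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total score))

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one survey item's responses."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]          # per-respondent sums
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Three hypothetical 4-point efficacy items answered by five respondents
items = [
    [4, 3, 2, 4, 1],
    [4, 3, 2, 3, 1],
    [3, 3, 1, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.91
```

A high α, as for the Digital technology efficacy items above (α = .92), indicates that the items move together closely enough to be treated as one scale.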
Digital technology efficacy (M = 2.87, SD = 1.03, Cronbach’s α = .92) was measured by three 4-point-scale
items (1: totally disagree; 4: totally agree): “You consider yourself to be sufficiently skilled in the use of
digital technologies in your daily life”, “You consider yourself to be sufficiently skilled in the use of
digital technologies to use online public services, such as filing a tax declaration or applying for a
visa online”, and “You consider yourself to be sufficiently skilled in the use of digital technologies to
benefit from digital and online learning opportunities.”
Perceived threat of general job loss (M = 3.09, SD = 0.79, Cronbach’s α = .77) was represented by two
4-point-scale statements (1: totally disagree, 4: totally agree): “Due to the use of robots and artificial
intelligence, more jobs will disappear than new jobs will be created” and “Robots and artificial
intelligence steal people’s jobs.”
Perceived usefulness (M = 3.12, SD = 0.69, Cronbach’s α = .64) was assessed using two 4-point-scale
statements (1: totally disagree, 4: totally agree): “Robots and artificial intelligence are a good thing for
society, because they help people do their jobs or carry out daily tasks at home,” and “Robots are
necessary as they can do jobs that are too hard or too dangerous for people.” This study used only
two items to measure Perceived threat of general job loss and Perceived usefulness because the study relied on
secondary survey data that included only two items for each of those variables. Given its low Cronbach’s
alpha value, Perceived usefulness was dropped from the final model during the model fitting process (see
the model fitting section for details).
Prior knowledge of AI (M = 0.51, SD = 0.50) was measured using one question: “In the last 12 months,
have you heard, read or seen anything about artificial intelligence?” The answer was coded as yes
(51%) or no (49%).
Acceptance of AI/Robot (M = 4.70, SD = 2.36, Cronbach’s α = .84) was represented by five 10-point-
scale questions (1: totally uncomfortable, 10: totally comfortable), asking how respondents would
“personally feel about”: having a robot perform a medical operation on them; having a robot assist
them at work; having a robot provide them services and companionship when infirm or elderly;
receiving goods delivered by a drone or a robot; and being driven in a driverless car in traffic. This
variable served as a response variable for predictors at the individual level.
Country-Level Variables
Country-level variables for the final model included Innovation, Government effectiveness, and
Gross Domestic Product (GDP) per capita. These variables represent different aspects (e.g., advanced
technology, governance, and economic development) that constitute a nation’s techno-socio
environment, in which the artificial intelligence and robotics industries develop. The three variables
were combined to make a composite variable (Cronbach’s α = .92). All country-level variables were
selected based on data for 2016, a year prior to when the surveys began in March 2017.
Innovation (M = 49.51, SD = 7.72) was adopted from the Global Innovation Index for 2016.
The index, which is a joint effort between Cornell University, the European Institute of Business
Administration (INSEAD), and the World Intellectual Property Organization (WIPO), ranks
countries on their innovation levels on a scale from zero to 100 (Cornell University et al., 2016), with
higher scores being more innovative. Scores for 2016 were calculated based on two sub-indices:
Innovation output and Innovation input. Innovation output was assessed using two pillars: knowledge
and technology outputs and creative outputs. Innovation input was created from five main
pillars representing five aspects that fuel innovation: the political, legal, and business
environment; education; ICT infrastructure; investment; and market scale. Of the countries in our
study, Sweden had the highest innovation score (63.57). Romania had the lowest score (37.90). This
variable was chosen because it provides a comprehensive assessment of the environment for
technologies to develop at the national level. Prior studies have frequently used the index as an
important indicator of the national environment for a country’s technology development (D’este et
al., 2016; Pelagio-Rodriguez et al., 2014).
Government effectiveness (M = 81.36, SD = 12.31) measures several aspects of governance
including civil service and government independence from political pressures, policy formulation
and implementation, and government commitment to such policies (The World Bank, 2018). The
index ranks countries from 0 to 100, with higher scores demonstrating stronger effectiveness. Past
research has found a link between government effectiveness and citizens’
adoption of technologies (Andrés et al., 2017; Kassie et al., 2013). Thus, government effectiveness
was included.
GDP per capita (M = 29,236, SD = 17,153.12) has been used as an indicator representing how
wealthy a country is (Diener & Diener, 1995). Of the countries, Luxembourg had the highest GDP
per capita ($100,738/year for 2016). Bulgaria had the lowest ($7,469/year for 2016) (The World
Bank, 2018). This variable represents, perhaps, one of the most influential dimensions in assessing
the effects of national ecological factors on technology adoption. For example, scholars have
found strong correlations between GDP per capita and internet penetration (Corrales & Westhoff,
2006), ICT adoption (Erumban & de Jong, 2006), and the adoption of new medical technologies
(Bech et al., 2009) among others. Therefore, we included GDP per capita as one of our country-level
predictors of the public’s acceptance of AI/Robot.
Multilevel Structural Equation Modelling
To examine any multilevel SEM effects between the two levels, it is necessary to check the
intraclass correlations of the variables (Hox, 2013). As shown in Table 1, the intraclass correlation
coefficients ranged from .03 to .09, indicating the need for multilevel analysis.
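The intraclass correlation that motivates the multilevel design can be illustrated with a minimal Python sketch: a toy ICC(1) estimator based on a one-way ANOVA decomposition, under the simplifying assumption of equal country group sizes (the data are hypothetical; the paper’s analysis was done in R):

```python
# ICC(1): share of total variance attributable to between-country differences,
# estimated as (MSB - MSW) / (MSB + (n - 1) * MSW) for equal group sizes.

def icc1(groups):
    """groups: list of equal-sized lists of responses, one list per country."""
    k = len(groups)                 # number of groups (countries)
    n = len(groups[0])              # respondents per group
    grand = sum(sum(g) for g in groups) / (k * n)
    msb = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three hypothetical countries with three respondents each
groups = [[1, 2, 2], [2, 3, 3], [4, 4, 5]]
print(round(icc1(groups), 2))  # 0.84
```

An ICC near zero would mean country membership explains almost nothing and single-level SEM would suffice; non-trivial ICCs, as reported here, justify modeling the two levels separately.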
This study used the ‘lavaan’ package in R because it provides an effective algorithm for two-
level data and for the examination of the structural relationships at both individual and country
levels. Due to space limitations, a detailed description of the coding of the variables was not included
in this manuscript but is available upon request. The analytical steps were conducted as follows: First,
relevant analytical packages were installed in the RStudio environment. Like many large-scale
surveys, this dataset suffered from missing values. The lavaan package is sensitive to missing data, so
cases with missing values were eliminated. To avoid any possible confounding results caused by the
different scales used to measure the country-level variables, all variables at this level were
standardized.
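The standardization step can be sketched as a simple z-score transformation (shown in Python with a hypothetical subset of GDP-per-capita values for illustration only):

```python
# Z-score standardization puts Innovation, Government effectiveness, and GDP
# per capita on a common scale (mean 0, SD 1) despite their different units.

def standardize(xs):
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [(x - m) / sd for x in xs]

gdp = [7469, 29236, 100738]  # hypothetical subset of GDP-per-capita values
print([round(v, 2) for v in standardize(gdp)])  # [-0.79, -0.34, 1.13]
```

After this transformation, a one-unit change in any country-level predictor corresponds to one standard deviation, so the estimated paths are comparable across the three indicators.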
In the second step, confirmatory factor analyses were performed on the 24 items used for
latent variables. Results showed a weak goodness of fit (χ2 = 33435.043, p = .000, df = 160, TLI =
0.816, NNFI = 0.816, CFI = 0.845, NFI = 0.844, RMSEA = 0.112). Goodness of fit is important
“because a model that does not reproduce the covariance structure well cannot have its parameter
estimates interpreted as reasonably summarizing the relationships between the variables” (Ryu &
West, 2009, p. 584).
Several demographic variables were found to degrade the fit indices of the model and were thus
eliminated: Gender, Age, Job, and Social class. Perceived usefulness was excluded because it
caused negative variance. Prior knowledge of AI was also eliminated because it had a zero intraclass
correlation for the two levels. Five latent variables emerged from all the variables at the individual
level, as confirmed by an explorative factor analysis (Figure 1). A factor analysis of actual data
converged with that of simulated and resampled data at the value of 4.
<<< Insert Figure 2 & Table 1 about here>>>
Results of the exploratory factor analysis and the factor loadings with intraclass correlation coefficients indicated the need for an assessment using multilevel analysis (see Table 1). Five factors were sufficient, given that the chi-square values were significant (p < .001). The factors accounted for 58% of the total variance of the variables.
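The parallel analysis used to settle on the number of factors (see the note to Figure 2) compares the eigenvalues of the observed correlation matrix against those obtained from random data of the same shape. The authors worked in R, so the following NumPy version is only an illustrative sketch of the technique:

```python
import numpy as np

def parallel_analysis(data, n_iter=50, seed=0):
    """Suggest a factor count: keep factors whose observed eigenvalue
    exceeds the mean eigenvalue from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    rand = np.zeros(p)
    for _ in range(n_iter):
        sim = rng.standard_normal((n, p))
        rand += np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    rand /= n_iter
    return int(np.sum(obs > rand))

# Six synthetic items driven by one common factor
rng = np.random.default_rng(42)
f = rng.standard_normal(1000)
items = np.outer(f, np.ones(6)) + 0.3 * rng.standard_normal((1000, 6))
print(parallel_analysis(items))  # → 1
```

In practice the same logic is available in R via psych::fa.parallel, which is what scree plots like Figure 2 are typically drawn from.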
This study formulated the final model for the multilevel SEM analysis using the following R code. All entries are the coded names of the variables and survey question items (e.g., qd4.1 to qd13.5).
model <- "
level: individual
  Digital_technology_efficacy =~ qd4.1 + qd4.4 + qd4.5
  Perceived_threat_job_loss =~ qd12.1 + qd12.6
  Acceptance_AI_Robot =~ qd13.1 + qd13.2 + qd13.3 + qd13.4 + qd13.5
  Acceptance_AI_Robot ~ Digital_technology_efficacy + Perceived_threat_job_loss
level: country
  Techno_socio_environment =~ Innovation.s + Government.effectiveness.s + GDP.percapital.s
  Digital_technology_efficacy =~ qd4.1 + qd4.4 + qd4.5
  Perceived_threat_job_loss =~ qd12.1 + qd12.6
  Acceptance_AI_Robot =~ qd13.1 + qd13.2 + qd13.3 + qd13.4 + qd13.5
  Acceptance_AI_Robot ~ Techno_socio_environment
  Digital_technology_efficacy ~ Techno_socio_environment
  Perceived_threat_job_loss ~ Techno_socio_environment
"
The final model showed strong fit indices compared to the initial confirmatory factor
analysis (χ2 = 915.371, p = .000, df = 91, TLI = 0.985, NNFI = 0.985, CFI = 0.989, NFI = 0.988,
RMSEA = 0.023).
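As a rough cross-check of the reported fit, RMSEA can be recomputed from the chi-square statistic, degrees of freedom, and sample size; this is the standard textbook formula, not the authors' code, and it reproduces the reported value:

```python
def rmsea(chi_sq, df, n):
    """Root mean square error of approximation:
    sqrt(max(chi_sq - df, 0) / (df * (n - 1)))."""
    return (max(chi_sq - df, 0.0) / (df * (n - 1))) ** 0.5

# Values reported for the final model (chi-square, df, N)
print(round(rmsea(915.371, 91, 16672), 3))  # → 0.023
```

Note that with N above 16,000, a significant chi-square is expected even for well-fitting models, which is why the incremental indices (TLI, CFI) and RMSEA carry more weight here.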
As aforementioned, responses with too many missing values were excluded from the final sample, which reduced the sample size from 27,901 to 16,672. In the remaining data, Germany had 907 participants and Luxembourg had 193, with an average sample size of 595 per country. Of those, 8,152 respondents (48.9%) were male. The average age was 49 (SD = 17.75). Because the survey was conducted in multiple countries with vast differences in income, self-reported social class was used as an alternative measure. Of the more than 16,000 respondents, half (50.1%) belonged to the middle class; about one fourth (24.7%) were from the working class; 16.6% were members of the lower middle class; less than one tenth (7.7%) belonged to the upper class; and a very small fraction (0.9%) were from the higher class. In general, respondents’ willingness to accept AI/Robot was low (M = 4.7, SD = 2.36; Range: 0-10).
Results
In terms of the model, the fit indices showed that the model fit the data well after 238 iterations. The final model was significant (p < .001).
<<<Insert Table 2 about here>>>
This model was, therefore, used to examine the effects of the country-level variables on the outcome variable (Acceptance of AI/Robot) as well as on Digital technology efficacy. The findings are reported first at the individual level and then at the country level.
<<<Insert Table 3 about here>>>
In testing our hypotheses, H1, H2, and H3 focused on the individual-level variables (see Table 3). Specifically, H1 concerned the relationship between Perceived threat of general job loss due to AI/Robot and Acceptance of AI/Robot. The average perceived threat of job loss was 3.09 (SD = 0.79, Range: 1-4). Test results indicated that Perceived threat of general job loss negatively predicted Acceptance of AI/Robot (β = -1.066, p < .001). This means that the more respondents feel that AI/Robot threatens their employment, the less accepting they are of the use of AI/Robot-related technologies. H1 was supported.
H2 concerned the predictive power of Digital technology efficacy. Regression coefficients showed that Digital technology efficacy had a statistically significant association with Acceptance of AI/Robot (β = 0.596, p < .001). This indicates that those who believe they have sufficient digital skills are more accepting of the use of AI/Robot. H2 was supported.
These two variables (Perceived threat of general job loss and Digital technology efficacy) explained
22.5% of the total variance of the perceived acceptance of AI/Robot.
<<<Insert Table 4 about here>>>
H3 focused on the predictive power of the country-level factors on the three individual-level variables. The Techno-socio environment composite, which consisted of three variables (Innovation, Government effectiveness, and GDP per capita), had statistically significant effects on the individual-level variables. Techno-socio environment was a significant predictor of two response variables (Perceived threat of general job loss and Digital technology efficacy). It did not have a statistically significant relationship with Acceptance of AI/Robot (β = 0.163, p = .212, R2 = .07). H3a was not supported. Though the association was not significant, the positive coefficient suggests the possibility that those who live in countries with a stronger Techno-socio environment would feel more comfortable with AI/Robot technologies. Future studies may consider testing this assumption again.
Techno-socio environment positively predicted the respondents’ Digital technology efficacy (β = 0.183, p < .001, R2 = .53). This indicates that in countries with a stronger Techno-socio environment, people are more confident about their skills in using digital technologies. H3b was supported.
Techno-socio environment negatively predicted respondents’ Perceived threat of general job loss due to AI/Robot (β = -0.124, p < .001, R2 = .37). This demonstrates that citizens of nations with a stronger Techno-socio environment feel less threatened about losing their employment to AI/Robot technologies.
H3c was supported (See Figure 3 for more details on the final model).
<<<Insert Figure 3 about here>>>
In comparing the two levels, the most noticeable finding was that country-level characteristics such as Techno-socio environment significantly influenced how confident people feel about using digital technologies and their fear of job loss due to AI and robotic technologies (see Table 4 for more details).
Discussion and Conclusion
This study is among the first to examine public perception of AI and robotics technologies
across countries. It investigates the relationships between several factors at two different levels:
individual and country, and the public’s willingness to accept the use of AI and robotics
technologies. As AI and robotics technologies are increasingly present in our everyday life (Chui et al., 2018), with substantial economic and societal impacts, understanding public opinion on these
technologies is important to delineating policies (at the country level) and strategies (at business and
individual levels) to adapt to changes. This study found that at the individual level, several factors
shape people’s attitude towards AI/Robot. They included Digital technology efficacy and Perceived threat of
job loss. Results echoed previous research in this area (McClure, 2018; Turja & Oksanen, 2019).
Results of this study also indicate that technophobia, or more specifically the anxiety about AI/Robot taking over existing jobs, is real, with the mean for Perceived threat of general job loss due to AI/Robot reaching 3.09 out of 4. This finding makes a meaningful contribution to the TAM literature in that it demonstrates the importance of anxiety to the TAM model. Research using TAM has frequently relied on two major “traditional” variables, PU and PEOU, and more recently on technology efficacy (Yi & Hwang, 2003; Zhang et al., 2017); this study suggests also including anxiety or technophobia in the model, especially when investigating public perception of AI/Robot-related technologies. Our findings demonstrate that factors on the sentiment side may exert a strong influence on members of the public in their attitude toward, or decision to accept, emerging technologies. This can help governments and the tech industry communicate with the public to build stronger acceptance of new technologies.
It is important to note that individual factors were strong in shaping public attitude towards AI/Robot; the individual-level variables accounted for 22.5% of the variance of the dependent variable in this study. However, findings of this study also show that macro factors constituting the broad techno-socioeconomic environment of a country play a crucial role in the public’s acceptance of AI/Robot. In this research, country-level variables accounted for 7% to 53% of the total variance of each of the individual-level variables (Acceptance of AI/Robot, Perceived threat of general job loss due to AI, and Digital technology efficacy). This finding is theoretically meaningful to TAM, which has traditionally been assessed at one level. This study is among the very few that have tested the model with multilevel analysis; Turja and Oksanen (2019) is one example, but their research assessed only the role of technological developments. Our study, in contrast, identified a more complete set of macro factors representing several aspects of the national environment, including innovation, governance, and the socioeconomic situation. Methodologically, it has pioneered a new way to assess TAM with multilevel SEM.
At the structural level, it first appeared puzzling that Techno-socio environment did not predict people’s Acceptance of AI/Robot. However, it is possible that in countries with a strong techno-socio environment, some of the transition to using AI and robotic technologies to replace humans has already happened, so exposure alone did not motivate people to be more accepting of AI/Robot (Garimella, 2018).
All in all, the findings of this study suggest that technology acceptance may not be an inherently individual issue; it also has much to do with national policies and developments. Increasing applications of AI and robotics technologies are inevitable. Accurate information and communication about the impact these technologies have on economies and societies, as well as the skill sets needed to adapt to a technologically advanced environment, may help overcome fear and anxiety. This argument has been empirically tested using multilevel structural analysis, which suggests that to influence public attitude towards, and confidence in, AI and robotic technologies, policymakers and business leaders need to encourage innovation and improve governance and economic development. This is also the most important contribution of this research to TAM as a theoretical model.
As AI and robotics increasingly transform our life, more research is expected to examine the social aspects and public acceptance of these emerging technologies. Given the strong effects of the national-level macro variables on public acceptance of AI and robotics technologies, findings of this study suggest that these variables should be included in assessing public acceptance of AI and robotics technologies.
As for limitations, this study excluded demographic variables at the individual level because
they decreased the model fit. Future research should further investigate the effects of these variables
on public opinion on AI/Robot. The secondary survey dataset allowed this study to use only two
items for measuring Perceived threat of general job loss and Perceived usefulness. Subsequent research should,
perhaps, collect original data to increase the quality of measures and also to include additional
questions to test complex theoretical models such as the Unified Theory of Acceptance and Use of
Technology (UTAUT) (Venkatesh et al., 2003). This uncommon analytical approach has also left the study with several conundrums. First, the lavaan package that this study used does not process data with missing values, leading to the elimination of a large number of responses from the final sample. Future studies can compare results from analyses with and without missing values, using other analytical packages, to cross-validate the current study’s findings. Second, fitting a model in multilevel SEM is complex (Hox, 2013). This study only assessed
the influence of several factors (e.g., Innovation, GDP per capita, and Government effectiveness) at the
country level. Exploring the effects of other factors (e.g., culture) on AI/Robot acceptance may help
deepen our understanding of what drives the public in adopting new technologies.
References
Alam, F., Ofli, F., & Imran, M. (2019). Descriptive and visual summaries of disaster events using
artificial intelligence techniques: case studies of Hurricanes Harvey, Irma, and
Maria. Behaviour & Information Technology, OnlineFirst,
https://doi.org/10.1080/0144929X.2019.1610908.
Andrés, A. R., Amavilah, V., & Asongu, S. (2017). Linkages between formal institutions, ICT
adoption, and inclusive human development in Sub-Saharan Africa. In H. Kaur, E.
Lechman, & A. Marszk (Eds.), Catalyzing Development through ICT Adoption: The Developing World Experience (pp. 175–203). Springer International Publishing.
https://doi.org/10.1007/978-3-319-56523-1_10
Asimakopoulos, S., Asimakopoulos, G., & Spillers, F. (2017). Motivation and user engagement in
fitness tracking: Heuristics for mobile healthcare wearables. Informatics, 4(1), 1-16.
Asongu, S. A., & Biekpe, N. (2017). Government quality determinants of ICT adoption in sub-
Saharan Africa. NETNOMICS: Economic Research and Electronic Networking, 18(2-3), 107-130.
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3–30.
Bech, M., Christiansen, T., Dunham, K., Lauridsen, J., Lyttkens, C. H., McDonald, K., ... & TECH
Investigators. (2009). The influence of economic incentives and regulatory factors on the
adoption of treatment technologies: a case study of technologies used to treat heart attacks.
Health Economics, 18(10), 1114-1132.
Borana, J. (2016). Applications of artificial intelligence & associated technologies. ETEBMS-2016, 4.
Jodhpur, India.
Cable, D. M., & DeRue, D. S. (2002). The convergent and discriminant validity of subjective fit
perceptions. Journal of applied psychology, 87(5), 875.
Caporale, G., & Monteleone, E. (2004). Influence of information about manufacturing process on
beer acceptability. Food Quality and Preference, 15(3), 271-278.
Cartwright, H. (2008). Using artificial intelligence in chemistry and biology: A practical guide.
https://doi.org/10.1201/9780849384141
Chau, K. (2006). A review on integration of artificial intelligence into water quality modelling. Marine
Pollution Bulletin, 52(7), 726–733.
Cherkasov, A., Hilpert, K., Jenssen, H., Fjell, C. D., Waldbrook, M., Mullaly, S. C., … Hancock, R.
E. W. (2008, December 4). Use of artificial intelligence in the design of small peptide
antibiotics effective against a broad spectrum of highly antibiotic-resistant superbugs.
Chuan, C. H., Tsai, W. H. S., & Cho, S. Y. (2019, January). Framing Artificial Intelligence in
American Newspapers. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics,
and Society (pp. 339-344). ACM.
Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., Nel, P., & Malhotra, S. (2018). Notes from
the AI frontier: Applications and value of deep learning [Industry Report]. Retrieved from McKinsey
Global Institute website:
https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelli
gence/Notes%20from%20the%20AI%20frontier%20Applications%20and%20value%20of
%20deep%20learning/Notes-from-the-AI-frontier-Insights-from-hundreds-of-use-cases-
Discussion-paper.ashx
Corrales, J., & Westhoff, F. (2006). Information technology adoption and political regimes.
International Studies Quarterly, 50(4), 911-933.
Cornell University, INSEAD, & WIPO. (2016). The global innovation index 2016: Winning with global
innovation. Retrieved from Cornell University - INSEAD - World Intellectual Property
Organization website: http://www.wipo.int/edocs/pubdocs/en/wipo_pub_gii_2016.pdf
Correia, J., Compeau, D., & Thatcher, J. (2017). Implications of technological progress for the
measurement of Technology Acceptance Variables: The case of self-efficacy. ICIS 2017
Proceedings. Presented at the ICIS, Seoul, Korea. Retrieved from
https://aisel.aisnet.org/icis2017/HumanBehavior/Presentations/22
D'este, P., Amara, N., & Olmos-Peñuela, J. (2016). Fostering novelty while reducing failure:
Balancing the twin challenges of product innovation. Technological Forecasting and Social Change,
113, 280-292.
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results (Unpublished doctoral dissertation, Massachusetts Institute of Technology).
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information
technology. MIS Quarterly, 13(3), 319–340.
Diener, E., & Diener, C. (1995). The wealth of nations revisited: Income and quality of life. Social
Indicators Research, 36(3), 275–286.
Doward, J. (2018, November 10). Britain funds research into drones that decide who they kill, says
report. The Observer. Retrieved from
https://www.theguardian.com/world/2018/nov/10/autonomous-drones-that-decide-who-
they-kill-britain-funds-research
Elliott, L. (2015). Robots threaten 15m UK jobs, says Bank of England’s chief economist. The
Guardian. Retrieved November 12 from
https://www.theguardian.com/business/2015/nov/12/robots-threaten-low-paid-jobs-says-
bank-of-england-chief-economist
Erumban, A. A., & de Jong, S. B. (2006). Cross-country differences in ICT adoption: A consequence
of Culture? Journal of World Business, 41(4), 302–314.
https://doi.org/10.1016/j.jwb.2006.08.005
European Commission. (2018). Public Opinion. Retrieved March 1, 2019, from European
Commission website: http://ec.europa.eu/commfrontoffice/publicopinion/index.cfm
Fast, E., & Horvitz, E. (2017). Long-term trends in the public perception of artificial intelligence. The Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 963–969. San Francisco, California.
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to
computerization? Technological Forecasting and Social Change, 114, 254–280.
Fry, H. (2018). How do we stop cutting-edge technology falling into the wrong hands? The Guardian.
Retrieved October 20 from
https://www.theguardian.com/commentisfree/2018/oct/20/technology-dangers-good-evil-
responsibility.
Garimella, K. (2018). Job loss from AI? There’s more to fear! Forbes. Retrieved August 7 from
https://www.forbes.com/sites/cognitiveworld/2018/08/07/job-loss-from-ai-theres-more-
to-fear/
Gaskell, G., Allum, N., Wagner, W., Kronberger, N., Torgersen, H., Hampel, J., & Bardes, J. (2004).
GM foods and the misperception of risk perception. Risk Analysis: An International Journal,
24(1), 185-194.
Gasparetto, A., & Scalera, L. (2019a). From the unimate to the Delta robot: the early decades of
industrial robotics. In Explorations in the history and heritage of machines and mechanisms (pp. 284-
295). United Kingdom: Springer.
Gasparetto, A., & Scalera, L. (2019b). A brief history of industrial robotics in the 20th century.
Advances in Historical Studies, 8(1), 24-35.
Gesis. (2020). Sampling and Fieldwork. https://www.gesis.org/eurobarometer-data-service/survey-
series/standard-special-eb/sampling-and-fieldwork/
Gessl, A. S., Schlögl, S., & Mevenkamp, N. (2019). On the perceptions and acceptance of artificially
intelligent robotics and the psychology of the future elderly. Behaviour & Information
Technology, 38(11), 1068-1087.
Gillham, J., Rimmington, L., Dance, H., Verweij, G., Rao, A., Roberts, B. K., & Paich, M. (2018).
The macroeconomic impact of artificial intelligence (p. 78). Retrieved from PricewaterhouseCoopers
website: https://www.pwc.co.uk/economic-services/assets/macroeconomic-impact-of-ai-
technical-report-feb-18.pdf
Gupta, N., Fischer, A. R., & Frewer, L. J. (2012). Socio-psychological determinants of public
acceptance of technologies: a review. Public Understanding of Science, 21(7), 782-795.
Habib, A., Alsmadi, D., & Prybutok, V. R. (2019). Factors that determine residents’ acceptance of
smart city technologies. Behaviour & Information Technology, OnlineFirst,
https://doi.org/10.1080/0144929X.2019.1693629.
Hamet, P., & Tremblay, J. (2017). Artificial intelligence in medicine. Metabolism, 69, S36–S40.
Hox, J. J. (2013). Multilevel regression and multilevel structural equation modeling. In T. Little (ed.),
The Oxford handbook of quantitative methods, (2nd ed., Vol. 1, pp. 281-294). Oxford University
Press.
Imran, M., Castillo, C., Lucas, J., Meier, P., & Vieweg, S. (2014). AIDR: Artificial intelligence for
disaster response. Proceedings of the 23rd International Conference on World Wide Web, 159–162.
Jha, S., & Topol, E. J. (2016). Adapting to artificial intelligence: Radiologists and pathologists as
information specialists. JAMA, 316(22), 2353–2354.
Kassie, M., Jaleta, M., Shiferaw, B., Mmbando, F., & Mekuria, M. (2013). Adoption of interrelated
sustainable agricultural practices in smallholder systems: Evidence from rural Tanzania.
Technological Forecasting and Social Change, 80(3), 525–540.
https://doi.org/10.1016/j.techfore.2012.08.007
Ketikidis, P., Dimitrovski, T., Lazuras, L., & Bath, P. A. (2012). Acceptance of health information
technology in health professionals: An application of the revised technology acceptance
model. Health Informatics Journal, 18(2). Retrieved from
https://journals.sagepub.com/doi/abs/10.1177/1460458211435425?casa_token=vw59Bim
5xYQAAAAA:jXXdCPNnofs8CjsmYMJgo6AxP4cIGHRxQG724oVm9eYU9wJgVe-
77Z8Hj8KyR6X0BesjR6Akv6S_
Kim, Y., Kim, M., & Kim, W. (2013). Effect of the Fukushima nuclear disaster on global public
acceptance of nuclear energy. Energy Policy, 61, 822-828.
Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement instruments
used in research. American journal of health-system pharmacy, 65(23), 2276-2284.
Korinek, A., & Stiglitz, J. E. (2017). Artificial intelligence and its implications for income distribution and
unemployment (No. w24174). National Bureau of Economic Research.
Kurzweil, R. (1990). The age of intelligent machines. Retrieved from
https://mitpress.mit.edu/books/age-intelligent-machines
Kurzweil, R. (2010). The singularity is near. London, UK: Gerald Duckworth & Co.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that
learn and think like people. Behavioral and Brain Sciences, 40, 1-72.
Latikka, R., Turja, T., & Oksanen, A. (2019). Self-efficacy and acceptance of robots. Computers in
Human Behavior, 93, 157-163.
Lee, Y., Kozar, K. A., & Larsen, K. R. T. (2003). The technology acceptance model: Past, present,
and future. Communication of the Association of Information Systems, 12, 50.
Lewis, P. (2018, July 7). “I was shocked it was so easy”: meet the professor who says facial
recognition can tell if you’re gay. The Guardian. Retrieved from
https://www.theguardian.com/technology/2018/jul/07/artificial-intelligence-can-tell-your-
sexuality-politics-surveillance-paul-lewis
Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to medical artificial intelligence.
Journal of Consumer Research, 46(4), 629-650.
Ma, Y., & Siau, K. (2018). Artificial intelligence impacts on higher education. MWAIS 2018 Proceedings. Presented at the 13th Annual Conference of the Midwest AIS, St. Louis, MO.
Manyika, J., Lund, S., Chui, M., Bughin, J., Woetzel, J., Batra, P., … Sanghvi, S. (2017). Jobs lost, jobs
gained: Workforce transitions in time of automation [Industry Report]. Retrieved from
https://www.mckinsey.com/featured-insights/future-of-work/jobs-lost-jobs-gained-what-
the-future-of-work-will-mean-for-jobs-skills-and-wages
Marangunić, N., & Granić, A. (2015). Technology acceptance model: a literature review from 1986
to 2013. Universal Access in the Information Society, 14(1), 81–95.
McClure, P. K. (2018). “You’re fired,” says the robot: The rise of automation in the workplace,
technophobes, and fears of unemployment. Social Science Computer Review, 36(2), 139–156.
Nilsson, N. J. (2010). The quest for artificial intelligence. New York, NY: Cambridge University Press.
Pelagio-Rodriguez, R., Hechanova, M., & Regina, M. (2014). A study of culture dimensions,
organizational ambidexterity, and perceived innovation in teams. Journal of Technology
Management & Innovation, 9(3), 21-33.
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). New York, NY:
Prentice Hall.
Ryu, E., & West, S. G. (2009). Level-specific evaluation of model fit in multilevel structural equation
modelling. Structural Equation Modeling: A Multidisciplinary Journal, 16(4), 583–601.
Satterfield, T., Kandlikar, M., Beaudrie, C. E. H., Conti, J., & Herr Harthorn, B. (2009). Anticipating
the perceived risk of nanotechnologies. Nature Nanotechnology, 4(11), 752–758.
Savela, N., Turja, T., & Oksanen, A. (2018). Social acceptance of robots in different occupational
fields: A systematic literature review. International Journal of Social Robotics, 10(4), 493-502.
Scheufele, D. A., & Lewenstein, B. V. (2005). The public and nanotechnology: How citizens make
sense of emerging technologies. Journal of Nanoparticle Research, 7(6), 659–667.
Senocak, E. (2014). A survey on nanotechnology in the view of the Turkish public. Science, Technology
and Society, 19(1), 79–94.
Siau, K. L., & Yang, Y. (2017, May 18). Impact of artificial intelligence, robotics, and machine learning on sales
and marketing. 2. Retrieved from
https://aisel.aisnet.org/cgi/viewcontent.cgi?article=1047&context=mwais2017
Stone, Z. (2017). Everything you need to know about Sophia, the world’s first robot citizen.
Retrieved March 27, 2019, from Forbes website:
https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-
about-sophia-the-worlds-first-robot-citizen/
Su, G. (2018). Unemployment in the AI Age. AI Matters, 3(4), 35–43.
The World Bank. (2018). Indicators - Data. Retrieved February 3, 2019, from
https://data.worldbank.org/indicator
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.
Turja, T., & Oksanen, A. (2019). Robot acceptance at work: A multilevel analysis based on 27 EU
countries. International Journal of Social Robotics.
Vasquez, Z. (2018). The truth about killer robots: The year’s most terrifying documentary. The
Guardian. Retrieved November 26 from
https://www.theguardian.com/film/2018/nov/26/the-truth-about-killer-robots-the-years-
most-terrifying-documentary
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic
motivation, and emotion into the Technology Acceptance Model. Information Systems Research,
11(4), 342–365.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information
technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.
Vijayasarathy, L. R. (2004). Predicting consumer intentions to use online shopping: the case for an
augmented technology acceptance model. Information & Management, 41(6), 747–762.
Wenger, E. (2014). Artificial intelligence and tutoring systems: Computational and cognitive approaches to the
communication of knowledge. Los Altos, California: Morgan Kaufmann.
Xiang, W., & Lee, H. P. (2008). Ant colony intelligence in multi-agent dynamic manufacturing
scheduling. Engineering Applications of Artificial Intelligence, 21(1), 73–85.
Yi, M. Y., & Hwang, Y. (2003). Predicting the use of web-based information systems: self-efficacy,
enjoyment, learning goal orientation, and the technology acceptance model. International
Journal of Human-Computer Studies, 59(4), 431–449.
Yoo, W., Yu, E., & Jung, J. (2018). Drone delivery: Factors affecting the public’s attitude and
intention to adopt. Telematics and Informatics, 35(6), 1687-1700.
Zamalloa, I., Kojcev, R., Hernández, A., Muguruza, I., Usategui, L., Bilbao, A., & Mayoral, V.
(2017). Dissecting robotics-historical overview and future perspectives. arXiv preprint
arXiv:1704.08617.
Zang, Y., Zhang, F., Di, C., & Zhu, D. (2015). Advances of flexible pressure sensors toward artificial
intelligence and health care applications. Materials Horizons, 2(2), 140–156.
Zhang, X., Han, X., Dang, Y., Meng, F., Guo, X., & Lin, J. (2017). User acceptance of mobile health
services from users’ perspectives: The role of self-efficacy and response-efficacy in
technology acceptance. Informatics for Health and Social Care, 42(2), 194–206.
Figure 1. Hypothesized theoretical model
Figure 2. Scree plots for determining the five factors at individual level
Note: Parallel analysis suggests that the number of factors = 4, Number of components = NA
Table 1. Exploratory factor analysis of relevant items at individual level

Uniqueness (ICC*):
Efficacy1 = 0.22 (.05)     Efficacy2 = 0.20 (.07)     Job loss1 = 0.43 (.05)     Job loss2 = 0.31 (.09)
Acceptance1 = 0.58 (.07)   Acceptance2 = 0.42 (.05)   Acceptance4 = 0.40 (.07)   Acceptance5 = 0.48 (.04)

Loadings:
                      Factor1   Factor2   Factor4
Acceptance1           0.61
Acceptance2           0.69
Acceptance3           0.68
Acceptance4           0.72
Acceptance5           0.69
Efficacy1                       0.86
Efficacy2                       0.88
Efficacy3                       0.87
Job loss1
Job loss2
Usefulness1                               0.60
Usefulness2                               0.70

SS loadings           2.56      2.36      1.01
Proportion Variance   0.21      0.20      0.08
Cumulative Variance   0.21      0.41      0.60

*ICC for usefulness was not presented because it was excluded from the final model.
Table 2. Fit indices for multilevel SEM
lavaan 0.6-3 ended normally after 238 iterations
Optimization method NLMINB
Number of free parameters 68
Number of observations 16672
Number of clusters [Country] 28
Estimator ML
Model Fit Test Statistic 915.371
Degrees of freedom 91
P-value (Chi-square) 0.000
Parameter Estimates:
Information Observed
Observed information based on Hessian
Standard Errors Standard
χ2 = 915.371, p = .000, df = 91, TLI = 0.985, NNFI = 0.985, CFI = 0.989, NFI = 0.988,
RMSEA = 0.023
Table 3. Individual-level predictors for acceptance of AI/Robot

Level 1 [Individual]

Latent Variables:            Estimate  Std.Err  z-value   P(>|z|)  Std.lv   Std.all
Perceived efficacy =~
  Efficacy1                   1.000                                 0.923    0.876
  Efficacy2                   1.050     0.007   151.145    0.000    0.970    0.889
  Efficacy3                   1.044     0.007   149.666    0.000    0.964    0.882
Job loss =~
  Job loss1                   1.000                                 0.583    0.694
  Job loss2                   1.272     0.034    37.453    0.000    0.741    0.867
Perceived acceptance =~
  Acceptance1                 1.000                                 1.873    0.637
  Acceptance2                 1.156     0.015    74.715    0.000    2.166    0.743
  Acceptance3                 1.081     0.015    71.967    0.000    2.025    0.693
  Acceptance4                 1.241     0.016    75.711    0.000    2.324    0.766
  Acceptance5                 1.101     0.015    74.283    0.000    2.062    0.703

Regressions:                 Estimate  Std.Err  z-value   P(>|z|)  Std.lv   Std.all
Perceived acceptance ~
  Perceived efficacy          0.596     0.018    33.403    0.000*   0.294    0.294
  Job loss                   -1.066     0.032   -33.411    0.000*  -0.331   -0.331

R2 = .23

Note: Std.lv = standardization of latent variables; Std.all = the standardization of all variables
Table 4. Effects of Techno-socio environment on individual level variables

Level 2 [Country]

Latent Variables:            Estimate  Std.Err  z-value   P(>|z|)  Std.lv   Std.all
Techno-socio environment =~
  Innovation                  1.000                                 0.939    0.986
  Govt effectiveness          0.896     0.108     8.286    0.000    0.841    0.874
  GDP per capita              0.996     0.160     6.231    0.000    0.936    0.792
Perceived efficacy =~
  Efficacy1                   1.000                                 0.235    0.977
  Efficacy2                   1.275     0.066    19.409    0.000    0.299    0.997
  Efficacy3                   1.057     0.079    13.328    0.000    0.248    0.958
Job loss =~
  Job loss1                   1.000                                 0.191    0.981
  Job loss2                   1.388     0.097    14.332    0.000    0.265    0.982
Perceived acceptance =~
  Acceptance1                 1.000                                 0.599    0.735
  Acceptance2                 1.064     0.242     4.403    0.000    0.637    0.920
  Acceptance3                 0.806     0.233     3.459    0.001    0.483    0.704
  Acceptance4                 1.211     0.278     4.356    0.000    0.725    0.914
  Acceptance5                 0.853     0.184     4.646    0.000    0.511    0.880

Regressions on Techno-socio environment (R2):
                             Estimate  Std.Err  z-value   P(>|z|)  Std.lv   Std.all
  Perceived acceptance (.07)  0.163     0.131     1.248    0.212    0.256    0.256
  Perceived efficacy (.53)    0.183     0.036     5.067    0.000*   0.731    0.731
  Job loss (.37)             -0.124     0.033    -3.804    0.000*  -0.610   -0.610

Note: Std.lv refers to the standardization of latent variables and Std.all means the standardization of all the variables
Figure 3. Final model tested with data
... (McClure, 2018). Perceived AI job threat refers to the degree of a user's fear that AI threatens his or her job security (Vu & Lim, 2022). According to Schepman and Rodway (2020) and Vu and Lim (2022), low AI acceptability is strongly correlated with a high degree of perceived AI job risk. ...
... Cave et al. (2019) and Khanfar et al. (2024) likewise reported public concerns about AI threatening jobs and making workers obsolete. ...
... Regarding socio-cultural influences on AI acceptance, perceived AI bias, job threat, and social norms were significant determinants. This result supports Vu and Lim (2022), who found that users' adoption of AI is negatively affected by their perception of a job threat. It is also consistent with studies showing that perceptions of AI bias have a major impact on how AI is actually used (Pillai & Sivathanu, 2020). ...
Article
In the business corporate world, artificial intelligence (AI) is becoming a disruptive force. This study explores the intricacies of adopting AI in corporate environments, emphasizing factors that affect both behavioral intentions and real usage patterns. This study, which drew on the Unified Theory of Acceptance and Use of Technology (UTAUT), identified the distinctive features of AI and added new determinants, including perceived humanness, bias, job threat, functionality, transparency, and privacy and security issues. These determinants cover technological, human-centric, and situational aspects which can either catalyze or hinder AI acceptance. Our quantitative research, involving 223 professionals across diverse sectors in Saudi Arabia, expanded the UTAUT model by revealing critical factors driving AI acceptance, including ethics and privacy considerations. Intriguingly, certain latent factors were identified to inversely affect AI application. This research addresses important ethical, security, and operational issues related to AI deployment, while also expanding the theoretical understanding of AI's role in business. Such insights are paramount for decision-makers, practitioners, and academics alike, ensuring the sustainable and responsible incorporation of AI in the business realm.
... A country's level of acceptance of AI will impact collaborative efforts [34]. A recent study conducted across 28 countries provides insight into the general acceptance of AI technologies, highlighting varying levels of readiness to adopt AI for diverse applications, including environmental forecasting. The study revealed that countries with high levels of AI acceptance are more likely to integrate AI and machine learning into critical areas such as disaster management and climate change mitigation [34]. Nations with a strong digital infrastructure and AI-friendly policies, such as the United States, Japan, and several European countries, are more willing to deploy these technologies in the environmental sciences. However, the purpose for which AI is used also influences acceptance. ...
... A large percentage of U.S. citizens accept drone delivery of parcels [34]. The approach to AI regulation and policymaking differs across countries. China's AI regulation is the Interim Measures for the Management of Generative Artificial Intelligence Services. ...
Article
Over the past forty years, advancements in artificial intelligence (AI) and machine learning (ML) have revolutionized Earth Sciences (ES). Driven by enhanced data from Earth observations, improved communications, and increased computing power, AI and ML are now critical in addressing real-world environmental challenges. The escalating severity of climate change impacts necessitates precise and timely environmental forecasts. This article critically examines the integration of ML techniques in environmental forecasting, highlighting their role in improving predictions of weather patterns, climate change, and ecological transformations. By automating the analysis of vast datasets, ML enhances environmental predictions' accuracy, timeliness, and applicability, supporting decision-making in agriculture, disaster preparedness, and environmental management. The review discusses the current state of ML applications, evaluates their effectiveness, and identifies future research directions. It also addresses the need for standardized data protocols, improved model interpretability, and ethical considerations in leveraging ML for climate research. The article concludes with a strong call for continued investment in research and cross-disciplinary collaboration, emphasizing the ongoing importance of these efforts to fully harness ML's potential in environmental forecasting.
... In this context, and given the irremediable coexistence with robots in the workplace, this study aims to contribute to the knowledge of robot acceptance by employees in general terms and for any industry. So far, most references in this regard concern specific sectors such as production, hospitality, retail, health and social assistance (Argote et al., 1983; Nomura et al., 2006; BenMessaoud et al., 2011; Broehl et al., 2016; Turja & Oksanen, 2019; Molino et al., 2020; Molitor, 2020; Paluch et al., 2022; Parvez et al., 2022; Vu & Lim, 2022; Zhong et al., 2022). In the remaining sectors, references have focused on information systems and automation, but not specifically on robots (Yi et al., 2006; Chang et al., 2007; Schnall & Bakken, 2011; Jacobs et al., 2019; Gauttier, 2019). ...
... Although the meaning of risk can be linked to multiple factors, in the context of robotization it has mostly been analyzed as a threat to employment. In that sense, the more employees feel their employment is threatened, the less they accept robotization (Vu & Lim, 2022). But human psychology is complex, and a study from the Technical University of Munich (TUM) and Erasmus University Rotterdam shows that employees would prefer to be replaced by a robot than by another human. ...
Thesis
This research analyzes the acceptance of robotics at work by employees. Robotization represents an opportunity for companies to improve their overall performance. However, this automation process may represent a challenge if it is not properly integrated within the organization and if potential risks are not minimized. In relation to the general trend towards automation, there are external and internal conditions that organizations need to manage simultaneously in accordance with their stakeholders' expectations. This research considers employees both as users of these robots at work and as key stakeholders, to shed light on how to manage robotization most effectively from an organizational perspective. Based on the CAN Model, this research provides additional input about the acceptance of robots at work. For this purpose, data from 422 participants across different geographies, a wide range of profiles and numerous industries were collected. The findings confirmed elements of the CAN model, showing that employees' attitude has a positive relationship with the intention to work with robots, and that attitude is positively influenced by employees' Performance Expectancy, Perceived Risk and Positive Emotions. Building on these findings, an analysis was conducted to determine the implications for innovation, workplace and performance management. Accordingly, some high-level recommendations for managers, researchers and other stakeholders are shared, illustrating that organizations should adopt specific managerial strategies to gain greater employee acceptance of robotization, which should translate into employee attraction, retention, and engagement.
... According to the UTAUT, variables such as users' gender, age, experience, and education level can significantly differentiate the way in which a technology is adopted and used [35]. Moreover, the results of previous studies indicate associations of religiosity level [36,37] and cultural determinants [38,39] with the perception of AI. This induced us to extend the UTAUT model to include factors such as nationality, educational setting, and religiosity. ...
Article
The article aims to determine the sociodemographic factors associated with the level of trust in artificial intelligence (AI), based on cross-sectional research conducted in late 2023 and early 2024 on a sample of 2098 students in Poland (1088) and the United Kingdom (1010). At a time when AI is progressively penetrating people's everyday lives, it is important to identify the sociodemographic predictors of trust in this rapidly developing technology. The theoretical framework for the article is the extended Unified Theory of Acceptance and Use of Technology (UTAUT), which highlights the significance of sociodemographic variables as predictors of trust in AI. We performed a multivariate ANOVA and regression analysis, comparing trust in AI between students from Poland and the UK to identify the significant predictors of trust in this technology. The significant predictors of trust were nationality, gender, length of study, place of study, religious practices, and religious development. There is a need for research into the sociodemographic factors of trust in AI and for expanding the UTAUT to include new variables.
... By understanding the public's stance on AI and scientific developments, creators and marketers can address apprehensions, cultivate trust and create a favorable image, which is essential for their broad acceptance [7], [8]. Public opinion often highlights worries about privacy, employment issues and social inequality that AI and scientific progress may cause [9], [10]. ...
Article
Three AI developments, classified as forms of human enhancement, center around progress at the intersection of AI, nanotechnology and biotechnology. Our research advances the understanding of AI and human enhancement through data-driven analytics and offers practical tools for future research and societal applications. It is based on a survey launched by PRC in February 2021 to more than 5,000 respondents from the U.S. It consists of about 100 questions that are grouped, using a prefix code, into the three above-mentioned human enhancements, the role of science, concerns and excitements, perceived algorithmic fairness, and demographics. To investigate this survey and extract insights regarding general attitudes, a data-analytics framework is proposed that consists of clustering using DBSCAN and K-means, ANOVA for the clusters, PCA, t-SNE and UMAP for graphical visualization, prediction, and advanced customer-profile analyses. Both clustering methods indicate distinct profiles for AI customers. Most of them are moderate, but two smaller groups comprise the tech-ethics advocates and the tech-forward visionaries. For the multi-class classification task, the ROC-AUC score is 0.852 and the average F1 score is 0.987. Following the results, technology creators and legislators must work collaboratively to ensure that technological advancements are ethically grounded, widely accepted and aligned with societal values.
Article
Recent advancements in Generative Artificial Intelligence (GAI) have paved the way for developing sophisticated language models such as ChatGPT, which are widely employed across various areas, including marketing. However, despite its benefits in marketing, little attention has been paid to understanding the factors influencing ChatGPT adoption among marketing professionals. Also, existing literature tends to focus more on the advantages of ChatGPT, while there is a paucity of studies focusing on potential barriers to its adoption. To fill these gaps, a framework was developed using Behavioral Reasoning Theory and validated through a survey of 390 marketing experts. The study finds that marketers’ reasons for (against) positively (negatively) influence their attitude and adoption intention toward ChatGPT, and uncertainty avoidance as a cultural value significantly affects marketers’ attitudes and reasoning. Furthermore, this study highlights digital literacy’s moderating role, intensifying the negative relationship between “reasons against” and the intention to adopt ChatGPT, though it does not significantly affect the association between “reasons against” and attitude. Additionally, the findings offer managerial guidance for facilitating the adoption of GAI tools.
Article
This paper examines an often overlooked yet significant threat to survey validity and epistemic justice: the unequal communication of opinion. We discuss research that signals the presence of this threat when studying public opinion about AI. Furthermore, we apply Bourdieu's theoretical framework as a potential explanation of the inequality in communicating an opinion about AI. We describe this inequality and test our explanation by performing a multilevel analysis on four questions about AI governance from Eurobarometer 92.3 and two questions on its implications for our way of life and jobs from Eurobarometer 95.2. Our results suggest that there is inequality in communicating opinions: higher social positions are more likely to communicate an opinion. We also find evidence to support the claim that the habitus is the underlying mechanism mediating this inequality. Our results suggest significant effects of self-perceived social class, external political efficacy, internal political and scientific efficacy, and relevant cultural capital regarding science and technology. Lastly, we do not find consistent results regarding the effect of the selected contextual-level variables across the two surveys. Our findings suggest that inequality in communicating an opinion is widely present when studying public opinion about AI. Future studies should check for this inequality before widely distributing their surveys. Should such inequality be detected, corrective measures should be taken to preserve research validity and mitigate epistemic injustice.
Article
People increasingly use microblogging platforms such as Twitter during natural disasters and emergencies. Research studies have revealed the usefulness of the data available on Twitter for several disaster response tasks. However, making sense of social media data is a challenging task due to several reasons such as limitations of available tools to analyse high-volume and high-velocity data streams, dealing with information overload, among others. To eliminate such limitations, in this work, we first show that textual and imagery content on social media provide complementary information useful to improve situational awareness. We then explore ways in which various Artificial Intelligence techniques from Natural Language Processing and Computer Vision fields can exploit such complementary information generated during disaster events. Finally, we propose a methodological approach that combines several computational techniques effectively in a unified framework to help humanitarian organisations in their relief efforts. We conduct extensive experiments using textual and imagery content from millions of tweets posted during the three major disaster events in the 2017 Atlantic Hurricane season. Our study reveals that the distributions of various types of useful information can inform crisis managers and responders and facilitate the development of future automated systems for disaster management.
Article
Artificial intelligence (AI) is revolutionizing healthcare, but little is known about consumer receptivity toward AI in medicine. Consumers are reluctant to utilize healthcare provided by AI in real and hypothetical choices, separate and joint evaluations. Consumers are less likely to utilize healthcare (study 1), exhibit lower reservation prices for healthcare (study 2), are less sensitive to differences in provider performance (studies 3A-3C), and derive negative utility if a provider is automated rather than human (study 4). Uniqueness neglect, a concern that AI providers are less able than human providers to account for their unique characteristics and circumstances, drives consumer resistance to medical AI. Indeed, resistance to medical AI is stronger for consumers who perceive themselves to be more unique (study 5). Uniqueness neglect mediates resistance to medical AI (study 6), and is eliminated when AI provides care (a) that is framed as personalized (study 7), (b) to consumers other than the self (study 8), or (c) only supports, rather than replaces, a decision made by a human healthcare provider (study 9). These findings make contributions to the psychology of automation and medical decision making, and suggest interventions to increase consumer acceptance of AI in medicine.
Article
Industrial robotics is a branch of robotics that gained paramount importance in the last century. The presence of robots totally revolutionized the industrial environment in just a few decades. In this paper, a brief history of industrial robotics in the 20th century will be presented, and a proposal for classifying the evolution of industrial robots into four generations is set forward. The characteristics of the robots belonging to each generation are mentioned, and the evolution of their features is described. The most significant milestones of the history of industrial robots, from the 1950’s to the end of the century, are mentioned, together with a description of the most representative industrial robots that were designed and manufactured in those decades.
Article
Robots are increasingly being used to assist with various tasks ranging from industrial manufacturing to welfare services. This study analysed how robot acceptance at work (RAW) varies between individual and national attributes in EU 27. Eurobarometer surveys collected in 2012 (n = 26,751) and 2014 (n = 27,801) were used as data. Background factors also included country-specific data drawn from the World Bank DataBank. The study is guided by the technology acceptance model and change readiness perspective explaining robot acceptance in terms of individual and cultural attributes. Multilevel studies analysing cultural differences in technological change are exceptionally rare. The multilevel analysis of RAW performed herein accounted for individual and national factors using fixed and random intercepts in a nested data structure. Individual-level factors explained RAW better than national-level factors. Particularly, personal experiences with robots at work or elsewhere were associated with higher acceptance. At a national level, the technology orientation of the country explained RAW better than the relative risk of jobs being automated. Despite the countries’ differences, personal characteristics and experiences with robots are decisive for RAW. Experiences, however, are better enabled in countries open to innovations. The findings are discussed in terms of possible mechanisms through which the technological orientation and social acceptance of robots may be related.
Article
Over the last decade, the technology for the automation of transportation has advanced at such a pace that the emergence of Autonomous Vehicles (AVs) may not be as far away as was thought a few years ago. However, the successful penetration of these vehicles on public roads will mainly rest upon their acceptance and adoption by individual road users and how they embrace this new generation of cars. This paper reports the results of a national survey of 475 Irish people evaluating their interest in, and concerns about, the adoption of AVs in their daily commutes. The paper also analyses people's acceptance of, and Willingness to Pay (WTP) for, AVs compared to Manually Driven Vehicles (MDVs). The results showed that people, in general, were not interested in driving AVs; only one-fifth of the population expressed a high level of interest. Concerns about data recording had a strong negative impact on interest, since the majority of respondents were not ready to accept AVs recording data because of privacy concerns. People were also mostly unsure about, or unlikely to believe in, the safety and security of AV operation, and they were not at all willing to accept liability for AVs. In addition, the results revealed that cost substantially impacts people's AV purchasing decisions: when cost was not an issue, people were much more interested in purchasing an AV.
Article
While some cities attempt to determine their residents’ demand for smart-city technologies, others simply move forward with smart-related strategies and projects. This study is among the first to empirically determine which factors most affect residents’ and public servants’ intention to use smart-city services. A Smart Cities Stakeholders Adoption Model (SSA), based on Unified Theory of Acceptance and Use of Technology (UTAUT2), is developed and tested on a mid-size U.S. city as a case study. A questionnaire was administered in order to determine the influence of seven factors – effort expectancy, self-efficacy, perceived privacy, perceived security, trust in technology, price value and trust in government – on behaviour intention, specifically the decision to adopt smart-city technologies. Results show that each of these factors significantly influenced citizen intention to use smart-city services. They also reveal perceived security and perceived privacy to be strong determinants of trust in technology, and price value a determinant of trust in government. In turn, both types of trust are shown to increase user intention to both adopt and use smart-city services. These findings offer city officials an approach to gauging residential intention to use smart-city services, as well as identify those factors critical to developing a successful smart-city strategy.
Conference Paper
Publics' perceptions of new scientific advances such as AI are often informed and influenced by news coverage. To understand how artificial intelligence (AI) was framed in U.S. newspapers, a content analysis based on framing theory in journalism and science communication was conducted. This study identified the dominant topics and frames, as well as the risks and benefits of AI covered in five major American newspapers from 2009 to 2018. Results indicated that business and technology were the primary topics in news coverage of AI. The benefits of AI were discussed more frequently than its risks, but risks of AI were generally discussed with greater specificity. Additionally, episodic issue framing and societal impact framing were more frequently used.
Article
The demographic shift marks the beginning of a social transformation with far-reaching implications, and differences in aging processes across individuals render one-size-fits-all policies ineffective. An area of increasing importance is assistive technology, including physical and Social Assistance Robots (SARs) for elderly support. In order to increase the effectiveness of such technologies, their design, functionality, and acceptance by target users must be evaluated. This paper presents a study that examines SAR technology acceptance among the future elderly (aged 20–60) in a German-speaking population (N=188). In doing so, we investigated the relationships between personality, resilience, technology experience, expectations for technology, fulfilment of expectations for technology, and technology acceptance. The study found significant correlations between age, gender, education, personality, resilience, experience, expectations, and technology acceptance and its subdimensions. Of the personality dimensions, agreeableness and neuroticism were found to be most relevant. Small effects were found between resilience and acceptance, and highly significant ones between technology acceptance, technology experience, expectations for SARs, and fulfilment of expectations for SARs. In keeping with previous research, the findings suggest that personality plays a significant role in the acceptance of SAR technologies. This study may be one of the first to consider and evaluate resilience as a factor in technology acceptance.