Article

Robots in the cloud with privacy: A new threat to data protection?

Authors:
Ugo Pagallo

Abstract

The focus of this paper is on the class of robots for personal or domestic use, which are connected to a networked repository on the internet that allows such machines to share the information required for object recognition, navigation and task completion in the real world. The aim is to shed light on how these robots will challenge current rules on data protection and privacy. On one hand, a new generation of network-centric applications could in fact collect data incessantly and in ways that are “out of control,” because such machines are increasingly “autonomous.” On the other hand, it is likely that individual interaction with personal machines, domestic robots, and so forth, will also affect what U.S. common lawyers sum up with Katz's test as a reasonable “expectation of privacy.” Whilst lawyers continue to liken people's responsibility for the behaviour of robots to the traditional liability for harm provoked by animals, children, or employees, attention should be drawn to the different ways in which humans will treat, train, or manage their robots-in-the-cloud, and how the human–robot interaction may affect the multiple types of information that are appropriate to reveal, share, or transfer, in a given context.


... Therefore, privacy protection is an unavoidable practical problem. For privacy issues arising from technological development, researchers have put forward the theory of a reasonable expectation of privacy [10]. Confidentiality should not be a prerequisite for privacy protection [10]. ...
... For privacy issues arising from technological development, researchers have put forward the theory of a reasonable expectation of privacy [10]. Confidentiality should not be a prerequisite for privacy protection [10]. Moreover, whether or not shareholders' information is confidential, shareholders' privacy deserves attention in corporate governance. ...
... Secondly, there is a lack of corporate governance standards. China's current corporate governance codes, whether for listed companies, securities companies, banking and insurance institutions, or securities investment funds [10], do not address the privacy protection of shareholders or the corporate governance issues involved. However, this does not mean that protecting shareholders' privacy cannot become a core concern of corporate governance, especially in the governance of multinational companies with dispersed shareholdings. ...
Article
Full-text available
This paper identifies three challenges in blockchain practice through textual analysis and case studies in the governance of Chinese public health financial institutions. First, given the conflict of interest in corporate governance and against the background of shareholder activism, the attitude of controlling shareholders towards blockchain technology can become a crucial force in technology commercialization. Second, the massive amount of data derived from blockchain technology can lead to privacy protection problems in the financial development of the public health industry. Third, the lack of business customs can become a commercial factor hindering the application of blockchain technology. Therefore, reflecting on blockchain technology in corporate governance can help the public health industry adapt to the new opportunities brought by technological change and cope with risks in advance. These findings are innovative and could provide insights into the future cross-border governance of blockchain technology.
... System-based autonomy allows service robots to collect data via their sensors, identify a situation and react accordingly (Allen et al. 2000). This approach allows them to adjust to a situation and apply knowledge absorbed from previous episodes (Pagallo 2013); this particular learning ability enables a service robot to provide an automated social presence (ASP) during its customer service interaction. For example, Tussyadiah and Miller (2019) suggest that a virtual assistant and companion robot in hotel rooms could provide cues in terms of agency and surveillance, thus triggering ASP. ...
... AI subsequently drives these components, which are critical for a successful service robot interaction with a consumer in a tourism setting (Maklan and Klaus 2011). Pagallo (2013) notes that, as technology evolves, service robots will be integrated into larger information systems and provide even more robust service options. Service robots could, for example, be used to collect data via embedded sensors in cameras, microphones and biometric scanners (Calo 2012). ...
... Service robots could, for example, be used to collect data via embedded sensors in cameras, microphones and biometric scanners (Calo 2012). By accessing cloud-based or knowledge databases, service robots could provide tourists with a highly personalised and tailored experience (Pagallo 2013). Researchers discuss these applications as examples of technology-driven service innovation. ...
Article
There is a growing need in the tourism and hospitality literature to harmonise the meaning and foundations of service robots and artificial intelligence (AI), while also offering guidance on future discussions and research. We operationalize MacInnis' (2011, Journal of Marketing, 75(4), 136–154) conceptual contribution to derive insights regarding service robots in the tourism and hospitality domain. This paper adopts an interdisciplinary stance and integrates insights from the tourism, hospitality, philosophy, psychology, sociology, management, robotics, information technology and marketing fields. Service robotics and related tourism and hospitality research follow three basic themes: deployment, acceptance and ethical considerations. The findings on the use of service robotics are subsequently delineated and a summary of the tourism and hospitality field’s current research needs is provided.
... The theory of privacy as contextual integrity emphasizes the need to consider the transfer of personal information contextually, depending on the situation and its social norms (Nissenbaum, 2004). This theory is addressed in several conceptual articles (Fosch Villaronga & Roig, 2017;Lutz & Tamò, 2015;Pagallo, 2013;Sedenberg et al., 2016). In contrast, fair information practices focus on the right of individuals to protect their privacy through informed and self-determined action. ...
... Furthermore, cloud storage of data collected by social robots is considered problematic (Pagallo, 2013), as is the connectivity of social robots with the Internet and among each other (Sedenberg et al., 2016;Sharkey, 2016). This leads to a higher susceptibility to hacking. ...
... The social dimension of social robots plays a key role in data collection: Through their social character and interactivity, social robots collect more sensitive information (Pagallo, 2013), particularly in cases of emotional relationships between user and social robot (Calo, 2011). By developing affection and trust, secrets can be revealed (Lee et al., 2011;Sharkey, 2016). ...
Article
Full-text available
In this contribution, we investigate the privacy implications of social robots, as an emerging mobile technology. Drawing on a scoping literature review and expert interviews, we show how social robots come with privacy implications that go beyond those of established mobile technology. Social robots challenge not only users’ informational privacy but also affect their physical, psychological, and social privacy due to their autonomy and potential for social bonding. These distinctive privacy challenges require study from varied theoretical perspectives, with contextual privacy and human-machine communication emerging as particularly fruitful lenses. Findings also point to an increasing focus on technological privacy solutions, complementing an evolving legal landscape as well as a strengthening of user agency and literacy.
... Safeguards and constraints of further fields of legal regulation, such as data protection and cybersecurity, come inevitably into play (Bassi et al., 2019). Humanoid AI robots that will populate the next generation of space missions and space hotels shall comply with such further fields of legal regulation as, for example, data privacy (Pagallo, 2013b). It is noteworthy, however, that most of such legal fields of technological regulation that affect the design, manufacturing, and use of AI systems and smart robots are currently under revision in Europe. ...
... Less controversial examples illustrate how the bar of ethical and legal standards can be lowered due to the extreme conditions of outer space, for instance, sharing personal data with a new set of humanoids for healthcare and entertainment. Scholars have extensively studied the impact of AI robots on privacy law and data protection, for example, considering what the US experts dub as a "reasonable expectation" of privacy, or vis-à-vis the tenets of EU data protection law, such as the principle of purpose limitation and data minimization (Barfield & Pagallo, 2020;Pagallo, 2013b). In addition to the troubles of both the US privacy law and EU data protection with the use of increasingly autonomous AI systems, it seems fair to admit, however, a further challenge that HRI will increasingly raise in outer space. ...
Article
Full-text available
The paper examines the open problems that experts of space law shall increasingly address over the next few years, according to four different sets of legal issues. Such differentiation sheds light on what is old and what is new with today’s troubles of space law, e.g., the privatization of space, vis-à-vis the challenges that AI raises in this field. Some AI challenges depend on its unique features, e.g., autonomy and opacity, and how they affect pillars of the law, whether on Earth or in space missions. The paper insists on a further class of legal issues that AI systems raise, however, only in outer space. We shall never overlook the constraints of a hazardous and hostile environment, such as on a mission between Mars and the Moon. The aim of this paper is to illustrate what is still mostly unexplored or in its infancy in this kind of research, namely, the fourfold ways in which the uniqueness of AI and that of outer space impact both ethical and legal standards. Such standards shall provide for thresholds of evaluation according to which courts and legislators evaluate the pros and cons of technology. Our claim is that a new generation of sui generis standards of space law, stricter or more flexible standards for AI systems in outer space, down to the “principle of equality” between human standards and robotic standards, will follow as a result of this twofold uniqueness of AI and of outer space.
... Robots are widely seen as machines capable of carrying out complex series of actions (Singer, 2009). They are capable of autonomous decision making based on the data they receive by various sensors and other sources (i.e. the sense-think-act paradigm) and adapt to the situation, thus they can learn from previous episodes (Pagallo, 2013;Allen et al., 2000). In a frontline service setting, they represent the interaction counterpart of a customer and therefore can be viewed as social robots. ...
... It is important to stress that in the future virtually all service robots will be connected and embedded into a bigger system (e.g. via knowledge bases and cloud-based systems; Pagallo, 2013). That is, in addition to their local input channels (e.g. ...
Article
Full-text available
Purpose – The service sector is at an inflection point with regard to productivity gains and service industrialization similar to the industrial revolution in manufacturing that started in the eighteenth century. Robotics in combination with rapidly improving technologies like artificial intelligence (AI), mobile, cloud, big data and biometrics will bring opportunities for a wide range of innovations that have the potential to dramatically change service industries. The purpose of this paper is to explore the potential role service robots will play in the future and to advance a research agenda for service researchers. Design/methodology/approach – This paper uses a conceptual approach that is rooted in the service, robotics and AI literature. Findings – The contribution of this paper is threefold. First, it provides a definition of service robots, describes their key attributes, contrasts their features and capabilities with those of frontline employees, and provides an understanding of which types of service tasks robots will dominate and where humans will dominate. Second, this paper examines consumer perceptions, beliefs and behaviors as related to service robots, and advances the service robot acceptance model. Third, it provides an overview of the ethical questions surrounding robot-delivered services at the individual, market and societal level. Practical implications – This paper helps service organizations and their management, service robot innovators, programmers and developers, and policymakers better understand the implications of a ubiquitous deployment of service robots. Originality/value – This is the first conceptual paper that systematically examines key dimensions of robot-delivered frontline service and explores how these will differ in the future. Keywords: Consumer behaviour, Ethics, Artificial intelligence, Privacy, Service robots, Markets.
... Chatbots have disadvantages as well as advantages. For businesses, the disadvantages include data protection (Colace et al., 2018), image and reputation risk, the inability to answer guests' complex questions (Pereira et al., 2022), and a lack of creativity and emotion; for tourists, they include ethical concerns (Luo et al., 2019), privacy and data protection (Pagallo, 2013), and the disappointment and dissatisfaction caused by unanswered questions (Buhalis and Cheng, 2020). ...
Article
Purpose Artificial intelligence is one of the most significant and active fields of study in the last few years. Artificial intelligence-derived robotic technologies known as chatbots are gaining interest from both academic and industry sectors. By analyzing the development and patterns of research on the chatbot phenomena within the tourism field, this study seeks to develop a theoretical framework for the interaction between chatbots and tourism. Design/methodology/approach The Web of Science (WoS) database’s 33 articles on chatbots related to travel and hospitality were examined between 2019 and 2024 using VOSviewer software for bibliometric and thematic content analysis. Findings Research on chatbots for tourism and hospitality appears to be in its early stages. The factors influencing tourists' intentions to use chatbots have been thoroughly researched; the attitudes, perceptions and behavioral intentions of destinations, travel agencies and restaurant patrons regarding chatbots were examined, and it was found that the quantitative research approach was dominant. In addition, the majority of the studies are based on a particular theory or model. Originality/value This is one of the first attempts to directly comprehend and depict the interconnected structures of studies on the interaction between chatbots and tourism through the use of network analysis. Furthermore, the study’s findings can offer academics a comprehensive viewpoint and a reference manual for more accurate assessment and oversight of the chatbot-tourism interaction. Regarding the lack of research on the topic and the fragmented structure of the studies that exist, it is imperative to provide both a comprehensive overview and a roadmap for future investigations into the usage of chatbots in the travel and hospitality sector.
... The focus of the research by Pagallo (2013) was on the class of robots for personal or domestic use [4]. The research highlighted that robots in smart homes with internet connections raise privacy issues. ...
Conference Paper
By automating processes, improving decision-making, and altering the parameters of legal practice, artificial intelligence (AI) is reinventing the legal profession. This study examines the many ramifications of AI in the legal system, emphasizing its revolutionary potential and moral dilemmas. The paper examines how well-liked AI technologies are among legal professionals, looks into privacy issues raised by networked AI bots, and imagines the emergence of AI lawyers who can make and present cases on their own. In light of technical breakthroughs, the research paper emphasizes the necessity for fair regulation, moral AI development, and the preservation of fundamental legal principles.
... Robotics and Automation serve as inputs and outputs to these complex AI-powered decisionmaking processes and are increasingly playing a significant role in the assisted caregiver arena. Pagallo (2013) shed light on how robots will challenge current data protection and privacy rules. ...
Thesis
Full-text available
Cyber abuse has taken a tremendous financial and psychological toll on the elderly. This abuse is reportedly more pronounced during and after the COVID-19 era with the accelerated transitioning of essential functions to self-service applications in cyberspace. Basic services such as Internet banking, telemedicine, and online shopping force many to become reluctant and unprepared users of cyber technologies and services. Through inductive reasoning and analysis across literature reviews covering multiple disciplines related to the cyber abuse of older adults, emerging themes substantiate the worldview that social engineering attacks exploit vulnerabilities associated with socio-behavioral traits inherent to older adults. Through deductive reasoning and analysis of descriptive crime data from AARP, FTC, and FBI reports, this study sought to analyze themes for the feasibility of establishing discriminate artifacts associated with relevant threat vectors toward exploring avenues for personalized moderation of cyber situational aware countermeasures – to mitigate social engineering threats.
... Typically, service robots are linked to existing systems and function autonomously and in an adaptive fashion for greater efficiency by straddling multiple communication and interactive realms, and effectively perform service delivery functions (Manthiou et al., 2021;Wirtz et al., 2018). Specifically, robots in these contexts gather data via sensors and apply the same when tasked with identifying and responding to particular situations (Allen et al., 2000;Manthiou et al., 2021;Pagallo, 2013). Advanced robotics is projected to contribute and add "$13 trillion to the global economic output by 2030" (Manthiou et al., 2021, p. 511). ...
Article
Full-text available
Following COVID-19, there has been an increase in digitization and use of Artificial Intelligence (AI) across all spheres of life, which presents both opportunities and challenges. This commentary will explore the landscape of the gendered impact of AI at the intersections of Science and Technology Studies, feminist studies (socialist feminism), and computing. The Global Dialogue on Gender Equality and Artificial Intelligence (2020) organized by UNESCO highlighted the inadequacy of AI normative instruments or principles which focus on gender equality as a “standalone” issue. Past research has underscored the gender biases within AI algorithms that reinforce gender stereotypes and potentially perpetuate gender inequities and discrimination against women. Gender biases in AI manifest either during the algorithm’s development, the training of datasets, or via AI-generated decision-making. Further, structural and gender imbalances in the AI workforce and the gender divide in digital and STEM skills have direct implications for the design and implementation of AI applications. Using a feminist lens and the concept of affective labor, this commentary will highlight these issues through the lenses of AI in virtual assistants, and robotics and make recommendations for greater accountability within the public, private and nonprofit sectors and offer examples of positive applications of AI in challenging gender stereotypes.
... Although consumers are generally likely to use a chatbot for their travel planning, satisfaction with this service may affect usage (Pillai & Sivathanu, 2020). Some authors report privacy issues (Pagallo, 2013;Zamora, 2017), difficulty in obtaining information (Colace et al., 2017), and feelings of discomfort (Luo et al., 2019) in using chatbots. Emerging research shows the diverse contexts of chatbot applications. ...
Article
Several industries recognize the potential of Artificial Intelligence to complete tasks. However, there is limited research on chatbots, and a gap in the research on what factors contribute to consumers' intention to continue using them. This research aims to analyze the relationship between the TAM and ISS dimensions and the intention to continue using chatbots, with satisfaction and brand attachment as mediators and the need for interaction with an employee as a moderating dimension. The results indicated that the role of brand attachment increases the model's explanatory power, and the need for interaction with the employee positively favors the relationship between brand attachment and satisfaction.
... To this end, SRs receive data from various local input channels (e.g., sensors or cameras) and process these data to execute an intricate set of actions (Singer, 2009). SRs can learn from prior experiences and adapt to optimise their future performance (Pagallo, 2013), thanks to their robust perception systems (Pinillos et al., 2016). Furthermore, SRs 'represent the interaction counterpart of a customer' (Wirtz et al., 2018, p. 4). ...
Conference Paper
Full-text available
Stationary retailers continue to try to respond to customers’ needs with regard to service offering and quality. Consequently, they are attempting to develop innovative value propositions and co-create value with customers through new technologies. Among other technologies, service robots (SR) are said to have the potential to revitalise interactive value creation in stationary retail. Nonetheless, the integration of such technologies poses new challenges. Use cases are subject to research, but few studies have explored customers’ perceptions of SRs from a service systems’ perspective, though this is crucial to integrating SRs into stationary retail service systems. In this study, a mixed method approach is adopted to explore customers’ acceptance of and resistance to SRs. First, a qualitative exploratory interview study is conducted among 24 customers. Second, a qualitative survey and a quantitative questionnaire are carried out. The findings identify decisive drivers and barriers, i.a. ‘social presence’ and ‘role congruency’ and reveal i.a. that customers envision harmonious human-robot teams with transparent responsibilities that improve service interactions: while SRs assist frontline employees (FLE) and respond to simple customer inquiries, FLEs can dedicate more time engaging with customers and providing professional customer advice. Moreover, customers suggest that SRs be introduced gradually and with FLEs’ qualified assistance.
... Robots are generally considered machines that can perform a series of complex actions [18]. They can take in various environmental information for decision-making from their own sensors and other sources (i.e., the sense-think-act paradigm) and accomplish a set purpose, so they can learn from previous situations [19,20]. According to their function and nature, robots can be divided into social robots, service robots, and auxiliary robots [21-23]. ...
Article
Full-text available
With the rapid development of machine learning and artificial intelligence, hotel service robots are widely used, but there are many problems to be solved in the scheduling scheme of hotel service robots. In this study, the Pareto optimal definition is used to model the problem, and a nondominated sorting heuristic method including genetic algorithm and differential evolution algorithm is designed to solve this problem. Experimental results show the effectiveness and stability of our algorithm. In addition, compared with the previous methods, the method proposed in this paper can provide a more personalized and reasonable service robot scheduling scheme for hotels. Finally, the hotel can optimize its management and operation and further deepen the degree of hotel intelligence.
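The abstract above describes a Pareto-based scheduling heuristic built on nondominated sorting, a genetic algorithm and differential evolution. As a rough illustration only, the sketch below implements the fast nondominated sorting step in the NSGA-II style; the two objectives (total travel time and a guest-satisfaction penalty, both minimized) are invented stand-ins, not the paper's actual scheduling criteria or data.

```python
# Minimal sketch of fast nondominated sorting (NSGA-II style), assuming each
# candidate schedule is scored on two hypothetical objectives to be minimized.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """True if schedule a is at least as good as b on every objective and
    strictly better on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(objectives: List[Tuple[float, ...]]) -> List[List[int]]:
    """Group candidate schedules into Pareto fronts (front 0 = nondominated)."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    domination_count = [0] * n              # how many solutions dominate i
    fronts: List[List[int]] = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]  # drop the trailing empty front

# Example: (total travel time in minutes, satisfaction penalty) per schedule
schedules = [(12.0, 3.0), (15.0, 2.0), (14.0, 4.0), (11.0, 5.0)]
print(nondominated_sort(schedules))  # front 0 holds the Pareto-optimal schedules
```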
... Service robots are also autonomous machines (physically present), or interfaces (virtual) (Wirtz et al., 2018) programmed to communicate and interact with people, but moreover, they also deliver certain services to consumers (Wirtz et al., 2018;Lu et al., 2020). With the help of embedded AI technologies, service robots possess decision making abilities and function with the help of self-learning and improving skills (Pagallo, 2013), similar to human behavior. This is especially important for service robots, as their role and scope vigorously involve deliberate engagement with consumers in various service-related sectors (Ivanov et al., 2019;Shin & Jeong, 2020). ...
Article
Full-text available
In the past years artificial intelligence (AI) has become an important subject for both companies and consumers due to the growth of personal AI assistants and external service robots. It is clear that many aspects of our life are not like they used to be. It seems certain that every one of us will sooner or later incorporate various AI functions into daily routine activities. It is not only our home life that is transforming, but also the way we, as consumers, are going to interact with different product and service providers. In this paper we provide a comparative literature review on the challenges and research topics regarding personal AI assistants and external service robots. While personal AI assistants intervene more in the private sphere of the consumer, the relation to external service robots is more distant. The results of our literature review show that, while the relation between consumer and external service robot is characterized more by interaction, enjoyment and engagement, a parasocial friendship relation is expected between consumer and personal AI assistant. Given this difference of perspective, the two types of AI will be involved differently in the future business and marketing activities of companies.
... On the other hand, Porter robots can handle complex transactions. They receive data from various sensors and other sources; they can learn from previous transactions and improve themselves over time (Pagallo, 2013;Buhalis et al., 2019). A burger robot named Flippy is able to perform up to 120 orders per hour. ...
Article
Full-text available
The research aims to determine the applicability of robot technology and the importance of technological innovations in the tourism industry. The population of the research consists of academicians, managers and students in the tourism industry. The "convenience sampling" method was used, in which everyone who participated in the research could be included in the sample. All statements regarding the applicability of robot technology in the tourism industry and the importance of technological innovations in the tourism industry were adapted from the relevant literature. Cronbach's alpha was applied to assess the reliability of the scale, alongside descriptive statistics of the obtained data: frequency distributions, percentiles, mean values, standard deviations and correlation coefficients. The findings indicate that airports, housekeeping activities, tour operator and travel agency services, and hotel receptions are the areas of the tourism industry where robot technology is most applicable.
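The reliability check named in this abstract, Cronbach's alpha, can be illustrated with a short sketch. The Likert responses below are made up for illustration and are not the study's data; the ~0.7 acceptability cut-off mentioned in the comment is the usual rule of thumb, not a claim about this survey.

```python
# Minimal sketch of Cronbach's alpha computed from a respondents-by-items
# matrix of Likert scores (hypothetical example data).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 respondents, 4 items)
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(responses), 3))  # values above ~0.7 are usually deemed acceptable
```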
... The PbD concept has also been criticized, as the complexity and variability of privacy issues are not addressed and no systematic methods are specified that could be used to identify particular risks [26,27]. Nevertheless, this generalized and theoretical design concept was adapted to the context of robots for domestic use by the lawyer Pagallo, under the label of robot-cloud by design [28]: (1) "We have to view data protection in proactive rather than reactive terms, making PbD preventive and not simply remedial. ...
Article
Full-text available
Privacy is an essential topic in (social) robotics and becomes even more important when considering interactive and autonomous robots within the domestic environment. Robots will collect a lot of personal and sensitive information about the users and their environment. Privacy here also covers (cyber-)security and the protection of information against misuse by the service providers involved. So far, the main focus has been on theoretical concepts that propose privacy principles for robots. This article provides a privacy framework as a feasible approach to considering security and privacy issues as a basis. The proposed privacy framework is put in the context of a user-centered design approach to highlight the correlation between the design process steps and the steps of the privacy framework. Furthermore, this article introduces feasible privacy methodologies for privacy-enhancing development to simplify the risk assessment and meet the privacy principles. Even though user participation plays an essential role in robot development, it is not the focus of this article. The employed privacy methodologies are showcased in a use case of a robot as an interaction partner, contrasting two different use case scenarios to underline the importance of context awareness.
... These variables pertaining to the intention of using AI robot lawyers [41] were adopted to explore the conditions under which AI robot lawyers can be used by users in actual legal practice and accepted by real society. A preliminary analysis of different consumer attitudes toward the use of AI robot lawyers should be conducted to further confirm whether the "trust" variable is a user's subjective belief in the integrity, honesty, goodwill, capability, and predictable behavior of AI robots [28]. The user can personify the robot lawyer to form a oneway emotional bond [8] and give it an inner life and emotion of trust [3]. ...
Article
Full-text available
The rapid growth of artificial intelligence (AI) robots has brought new opportunities and challenges. The linkage between AI robots and humans has also gained extensive attention from the legal profession. This study focuses on the extended AI Robot Lawyer Technology Acceptance Model (RLTAM). A total of 385 valid questionnaires are collected through quantitative research, and the relationships among the five variables in the model are reanalyzed and revalidated. Results show that the “legal use” variable in the original extended model is not a direct key variable for consumers to accept AI robot lawyers, but it has a direct effect on “perceived ease of use” and “perceived usefulness” variables. AI robots still need to respond actively to attain legitimacy. AI robot lawyers with national legal certification and good user interface design provide humans a sense of trust. AI robot lawyers based on the development of extended intelligence theory can form a closely coordinated working model with humans. In addition, consumers indicate that the normalized use of AI robots could be a trend in the legal industry in the future, and the types of legal profession that robots can replace will not be affected by gender differences. Practitioners using AI robot lawyers need to establish a complete liability risk control system. This study further optimizes the integrity of RLTAM and provides a reference for developers in designing AI robots in the future.
... Similar examples of previous discussions on privacy and data protection can be found in "Cloud Robotics" or "Networked Robotics." Fosch-Villaronga and Millard mentioned the challenge of attributing legal responsibility in complex multiparty ecosystems [1], while Pagallo has argued that as robots become more connected via the Internet, their interactive behaviors could cause harm in regard to "robotic privacy" [2]. Ishii has pointed out that the unpredictability of AI can create new challenges for data protection, especially with regard to an algorithmic black box [3]. ...
Article
Full-text available
Recent developments have shown that not only are AI and robotics growing more sophisticated, but also these fields are evolving together. The applications that emerge from this trend will break current limitations and ensure that robotic decision making and functionality are more autonomous, connected, and interactive in a way which will support people in their daily lives. However, in areas such as healthcare robotics, legal and ethical concerns will arise as increasingly advanced intelligence functions are incorporated into robotic systems. Using a case study, this paper proposes a unique design-centered approach which tackles the issue of data protection and privacy risk in human-robot interaction.
... Hence, we hypothesize: H1a: Sensing will have a positive influence on perceived cognitive empowerment. Pagallo (2013) highlights that intelligent vehicles increase users' confidence and positive emotions, especially in complex driving conditions, by providing timely reminders that make the driving experience safer, more comfortable, and more efficient, thus helping users become familiar with their surroundings and drive with ease. Moreover, sensing allows drivers to communicate with the environment as vehicles transmit helpful information through sensors, thus increasing their sense of control. ...
Article
Full-text available
With the advancement in AI and related technologies, we are witnessing ever more remarkable use of intelligent vehicles. Intelligent vehicles use smart automatic features that make travel more pleasant, safer, and more efficient. However, few studies examine their adoption or the influence of intelligent vehicles on user behavior. In this study, we specifically examine how intelligent vehicles’ sensing and acting abilities drive their adoption through the lens of psychological empowerment theory. We identify three dimensions of users’ perceived empowerment (perceived cognitive empowerment, perceived emotional empowerment, and perceived behavioral empowerment). Based on this theory, we argue that product features (sensing and acting in intelligent vehicles) empower users to use the product. Our proposed model is validated by an online survey of 312 car owners who are familiar with driving conditions; the results reveal that drivers’ perceived empowerment is vital for using the automatic features of intelligent vehicles. Theoretically, this study combines the concept of empowerment with the intelligent-driving scenario and explains the mechanism through which vehicle intelligence shapes users’ behavioral intention.
... Privacy risks. The promise of AI and robotic technologies is limitless, but they pose significant challenges to the existing rules on data protection and privacy (Pagallo, 2013). Privacy risks relate to "the concerns consumers feel about the risk of having personal data used improperly without agreement or private information disclose to third parties" (Hong et al., 2020, p. 6). ...
Article
Full-text available
Despite the growing body of research exploring factors associated with service robot adoption, the existing comprehension of this emerging technology remains largely fragmented. Previous studies have largely focused on the “net effect” between variables, leaving the complexity of consumer behavior uninvestigated. Building on relevant literature and complexity theory, this study intends to consolidate the fragmented views of the service robot adoption literature by examining how human-likeness (i.e., anthropomorphism and perceived intelligence), technology-likeness (i.e., performance expectancy, hedonic motivation, and privacy risks), and consumer personalities (i.e., extraversion and openness to experience) combine as causal configurations to explain the behavioral intention to use service robots. Fuzzy set qualitative comparative analysis (fsQCA) was employed to analyze data from a sample of 566 Taiwanese consumers. The fsQCA results suggest that multiple, distinct, and equally effective combinations of human-like, technology-like, and consumer features exist to achieve high intention to use service robots. Four solutions are presented that lead to high adoption intention. This study contributes to the artificial intelligence literature by adopting a novel methodological approach to unveil the complexity behind the adoption of service robots. It also offers practical guidance for robot manufacturers and service managers to optimize the combination of human-likeness and technology-likeness in correspondence to consumer personalities for a successful service robot implementation.
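For readers unfamiliar with fsQCA, the sketch below shows the core sufficiency metrics (consistency and coverage) behind analyses like the one above, following Ragin's standard formulation with fuzzy AND as the minimum and negation as one minus the score. The condition names echo the study's constructs, but the calibrated membership scores and the chosen configuration are hypothetical.

```python
# Minimal sketch of fsQCA sufficiency metrics: a configuration's membership is
# the fuzzy AND (minimum) of its calibrated conditions.
import numpy as np

def configuration_membership(*conditions: np.ndarray) -> np.ndarray:
    """Fuzzy AND across calibrated condition scores in [0, 1]."""
    return np.minimum.reduce(conditions)

def consistency(config: np.ndarray, outcome: np.ndarray) -> float:
    """How consistently the configuration is a subset of the outcome."""
    return np.minimum(config, outcome).sum() / config.sum()

def coverage(config: np.ndarray, outcome: np.ndarray) -> float:
    """How much of the outcome the configuration accounts for."""
    return np.minimum(config, outcome).sum() / outcome.sum()

# Calibrated memberships for five hypothetical respondents
anthropomorphism = np.array([0.9, 0.7, 0.2, 0.8, 0.4])
privacy_risk     = np.array([0.1, 0.3, 0.8, 0.2, 0.6])
intention_to_use = np.array([0.8, 0.7, 0.3, 0.9, 0.4])

# Configuration: high anthropomorphism AND low (negated) privacy risk
config = configuration_membership(anthropomorphism, 1 - privacy_risk)
print(round(consistency(config, intention_to_use), 2),
      round(coverage(config, intention_to_use), 2))
```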
... Relational services, which demand stronger human interaction, will benefit less from robots, as these perform worse when required to understand, share, and influence people's emotions (Huang and Rust, 2018). The most advanced generation of robots, equipped with AI and other cognitive technologies, can conduct cognitive-analytical as well as emotional-social tasks: they are capable of autonomous decision making based on the data they receive via various sensors and other sources, and of adapting to the situation; thus, they can learn from previous episodes (Pagallo, 2013). In this way, they can provide customized services to individuals (Wirtz et al., 2018) and can potentially make the individual feel they are in the company of another social entity (referred to by van Doorn et al. (2019) as an automated social presence). van Doorn et al. (2019) noted that even if robots can provide offerings that correspond well to customers' preferences, and can reduce or even eliminate search costs, this may negatively impact users' sense of decision autonomy, causing resistance to robot suggestions. ...
... Finally, Wirtz et al. [23] investigate the impact of service robots at three levels (Fig. 1): the micro level is concerned with the customer's experience of certain issues like privacy [44,45], security [46], dehumanization, depersonalization [19,23], and social deprivation [23]; it is also related to the kind of training that employees need to be able to deliver consistent services. The meso level focuses on the markets for a specific service and on market price fluctuations due to the falling cost of using robots and their viability in the service industry's processes [28]. The macro level is related to social and employment issues [47], as well as to inequalities within and between societies [23]. ...
Article
Services are changing at an impressive pace, boosted by the technological advances in Robotics, Big Data, and Artificial Intelligence (AI) that have uncovered new research opportunities. Our objective is to contribute to the literature by exploring the pros and cons of the use of service robots in the hospitality industry, and to practice, by presenting the architectural and technological characteristics of a fully automated plant based on a relevant case. To achieve this goal, this article uses a systematic literature review to assess the state of the art, characterize the unit of analysis, and find new avenues for further research. The results indicate that, in high customer contact settings, service robots tend to outperform humans when performing standardized tasks, because of their mechanical and analytical nature. Evidence also shows that, in some cases, service robots have not yet achieved the desired technological maturity to proficiently replace humans. In other words, the technology is not quite there yet, but this does not contradict the fact that new robot technologies, enabled by AI, will be able to replace employees’ empathetic intelligence. In practical terms, organizations are facing challenges where they have to decide whether service robots are capable of completely replacing human labor or whether they should rather invest in balanced options, such as human-robot systems, which seem to be a much more rational choice today.
... Others have examined laws regarding privacy and robots. Pagallo [32] provides a summary of the different issues with privacy, Internet connectivity, robots, and what people can expect for privacy. Pagallo argues for the EU to examine these issues more. ...
Preprint
The elderly in the future will use smart house technology, sensors, and robots to stay at home longer. Privacy at home for these elderly is important. In this exploratory paper, we examine different understandings of privacy and use Palen and Dourish's framework to look at the negotiation of privacy along boundaries between a human at home, the robot, and its sensors. We select three dilemmas: turning sensors on and off, the robot seeing through walls, and machine learning. We discuss these dilemmas and also discuss ways the robot can help make the elderly more aware of privacy issues and to build trust.
... Pagallo (2013) investigated privacy in the context of robots for personal and domestic use that rely on cloud services, but solely from a legal point of view. In that scenario, robots send recorded data to a networked repository and share it with other robots for object recognition and social interaction. ...
Article
Full-text available
Social robots as companions play an increasingly important role in our everyday life. However, reaching the full potential of social robots and of the interaction between humans and robots requires the permanent collection and processing of users' personal data, e.g. video and audio data for image and speech recognition. In order to foster user acceptance and trust, and to address legal requirements such as the General Data Protection Regulation of the EU, privacy needs to be integrated into the design process of social robots. The Privacy by Design approach by Cavoukian indicates the relevance of privacy-respecting development and outlines seven abstract principles. In this paper, two methods are presented as a hands-on guideline to fulfill these principles and are discussed in the context of the Privacy by Design approach. Privacy risks of a typical robot scenario are identified and analyzed, and solutions are proposed on the basis of the seven types of privacy and the privacy protection goals.
... As the new regulation states at Recital 20, "in view of the increasing reliance of civil aviation on modern information and communication technologies essential requirements should be laid down to ensure the security of information used by the civil aviation sector" [4]. Liability and privacy concerns are tied together too [33]. ...
... Oversight of autonomous decisions, and how these are made accountable to users is as much a design issue as it is a legal one (Edwards and Veale, 2017). It is predicted that eventually robots will achieve the level of autonomy where "they themselves become the data controller and responsible for compliance with data privacy legislation" (Holder et al., 2016, p. 395), a prediction also supported by Pagallo (2013). However, for now, focus should be on establishing and operationalising the responsibility of roboticists to their users, and in particular, protecting their legal rights. ...
Chapter
This chapter examines the internal mechanisms surrounding the protection of shareholder rights in FHCs, focusing on how they deal with the protection of shareholder rights in the face of the opportunities and challenges presented by new technologies in jurisdictions beyond China. In this chapter, two technologies, blockchain and AI, have gained widespread attention in corporate law. Specifically, blockchain and AI provide new solutions for the protection of shareholder rights in FHCs. Primarily, blockchain technology improves the rights to vote and information for shareholders, increases the transparency and security of shareholder engagement, optimises agency costs, and reduces the threat posed by insider control. In addition, AI technology can provide data standards for ongoing board evaluation and enhance the effectiveness of board evaluation by improving transparency standards. However, new technologies also pose some challenges to the internal governance of shareholder rights protection in FHCs. The challenges arise both from within the company and from the limitations of the technology itself.
Article
Full-text available
The introduction of AI in various sectors, most especially robotic lawyers in the legal systems of some developed countries, has made tasks seamlessly achievable. Uganda has also had its fair share of benefits from the use of AI in various sectors, including the legal sector, as with virtual proceedings and virtual meetings. Although the trending concept of robotic lawyers seems to enhance legal practice, Uganda's legal and socio-economic nomenclature seems to pose restrictions. Concerning this, the study examines the legal and socio-economic issues surrounding robotic lawyers practicing in Uganda. The study adopts a doctrinal method; data obtained from primary and secondary sources were analysed through a descriptive and analytical approach. The study found that incorporating robotic lawyers in Uganda would provide several prospects. However, there are legal and socio-economic challenges, such as the lack of legal recognition and the difficulty of maintaining and updating robotic lawyers, and it may result in a high level of unemployment. The study concludes and recommends that the concept of robotic lawyers is a welcome development; however, Uganda could incorporate robotic lawyers as a means of consultation for legal advice, storage of information, drafting of legal documents, and predicting and analysing legal outcomes.
Article
Full-text available
Technological advancement has greatly enhanced the global environment and improved every facet of global industry. Currently, in Nigeria, the legal profession has taken a bold dive by incorporating technology to enhance the practice of law. However, the current innovation of robotic lawyers in most countries may seem consistent with their legal systems. In this regard, it suffices to opine that, given that Nigeria is a developing country, there are legal and socio-economic issues that may affect or truncate the adoption of a robotic lawyer in Nigeria. It is in this regard that this study adopted a hybrid method of research to ascertain the relevance of robotic lawyers and the legal and socio-economic issues involved. Questionnaires were distributed to 305 respondents resident in Nigeria. The study found that the current trend of robotic lawyers is quite impressive; however, the nomenclature of law concerning the study and practice of law in Nigeria does not recognize a robotic lawyer. Furthermore, some socio-economic issues such as internet fraudsters, unemployment, insecurity, and poor maintenance culture may pose a challenge to the adoption of a robotic lawyer in Nigeria. It was therefore concluded and recommended that, for a smooth adoption of robotic lawyers in Nigeria, there is a need for legal approval, the streamlining of their roles to merely advising clients, and the training of Nigerian lawyers and judges to enhance the legal profession.
Article
The introduction of robotic technology in the context of elderly care poses new privacy challenges. We conducted a systematic literature review of publications from 2010 to mid-2023, focusing on privacy-related concerns and efforts to resolve privacy conflicts associated with the use of robotic technologies in elderly care. The review is organized into three key discussion points: (a) conceptions of privacy, where we explore the multifaceted and multidimensional nature of privacy, building on Burgoon’s work; (b) privacy concerns, which we categorize using Rueben’s taxonomy of privacy constructs; and (c) mitigation strategies for addressing these concerns. Our analysis reveals that current design practices for robotic technologies in elderly care predominantly emphasize informational aspects of privacy. However, adopting a holistic approach to privacy is more advantageous. To extend privacy protection beyond mere data protection, we develop a framework that matches different mitigation strategies with privacy concerns across the identified dimensions of privacy, recognizing that no single approach is universally applicable across all aspects of privacy. We accomplish this by proposing an integration with existing privacy impact assessments and apply our findings to the design of robotic technology for elderly care.
Article
Despite booming online retail in cities, there is almost no research on the complex relationship between anthropomorphic delivery robot targets and consumer behavior. This study employs an in-depth investigation into the relationship between last-mile anthropomorphic delivery robots and consumer usage intentions by integrating current literature and applying the theories of service quality hierarchy and task-technology fit. The study gathered data from 663 Chinese customers and employed structural equation modeling to analyze the findings. The study results suggest that the anthropomorphic qualities of delivery robots, such as their intelligence, autonomy, ability to learn, and social behavior, positively influence consumers’ willingness to use these robots for delivery tasks. In addition, the quality of human-robot interaction and task-technology fit are critical to increasing consumer willingness to use these systems. This study enriches the discussion of consumer-robot dynamics in urban retail and provides strategic guidance for optimizing smart delivery solutions in the logistics and robotics industries.
Chapter
Artificial intelligence applications and robotic technologies, which are rapidly spreading and widely used throughout the world, are discussed by different disciplines in the literature, including tourism. In this context, robots come to the fore among the application areas of the tourism sector. However, many other artificial intelligence applications are also becoming widespread in the sector. From this point of view, this conceptual study first evaluates artificial intelligence applications and robotic technologies and identifies the traits of service robots, then examines current technologies and briefly compares service robots with human employees, and finally discusses the future of these technologies in the hospitality industry. It can be said that this study, in which the current situation is presented and sector-experienced authors make inferences about the future, is an important contribution to the literature and to industry practitioners.
Article
Purpose To influence consumer pre-purchase decision-making processes, such as brand selection and perceived brand experience, brands are interested in adopting hyperconnected technological stimuli, such as artificial intelligence, augmented reality (AR), virtual reality, social media and tech devices. However, the understanding of different hyperconnected touchpoints remained shallow and results mixed in previous literature, despite the fact that these touchpoints span different technological interfaces/devices and may influence consumer brand selection. This paper aims to solidify the conceptual underpinnings of the role of online hyperconnected stimuli, which may influence consumer psychological reactions in terms of brand selection and experience. Design/methodology/approach This paper is conceptual and presents a discussion based on extant literature from various international publishers. Findings The authors revealed different technological stimuli in the online hyperconnected environment that may influence consumer online hyperconnected brand selection (OHBS), perceived online hyperconnected brand experience (OHBE), perceived well-being and behavioral intention. Originality/value The conceptual understanding of OHBS and perceived OHBE was mixed and inconsistent in previous studies. This paper brings together extant literature to establish the conceptual understanding of antecedents and outcomes of OHBS, i.e. perceived OHBE, perceived well-being and behavioral intention, and presents a cohesive conceptual framework.
Chapter
There is a convergence on the protection of the traditional right to privacy and today’s right to data protection as evidenced by judicial rulings. However, there are still distinct differences among the jurisdictions based on how personal data is conceived (as a personality or proprietary right) and on the aims of the regulation. These have implications for how the use of AI will impact the laws of US and EU. Nevertheless, there are some regulatory convergences between US and EU law in terms of the realignment of traditional rights through data-driven technologies, the convergence between data protection safeguards and consumer law, and the dynamics of legal transplantation and reception in data protection and consumer law.
Article
Full-text available
This paper focuses on the impact of service robots on customer psychology and behavior and systematically reviews the current service marketing research literature on service robots. It first compares the characteristics of service robots with those of human employees, and then presents the salient features of service robots in the provision of services. Finally, the paper discusses what can be further researched in the field of service robotics in the context of future artificial intelligence, complementing the existing research framework and suggesting new ideas for the study of artificial intelligence services.
Article
The application of artificial intelligence is considered essential to adapt to a new cycle of industrial transformation and technological advancement in many fields and industries. The extensive use of artificial intelligence technology is expected to improve the level and quality of services provided by companies adopting these methods. In this study, we propose a novel approach to self-recovery by chatbot systems after service failures based on social response theory. Moreover, we explore differences in consumer perceptions of different service recovery types and their impact on recovery satisfaction, and discuss whether the intelligence of the computational agent also has an effect. We present the results of three scenario-based experiments, which demonstrate the positive effect of chatbot self-recovery on consumer satisfaction, and show the mediating paths of service recovery types in terms of perceived functional value and privacy risks, as well as the boundary condition of the level of robot intelligence. This work expands the range of applications of chatbots in the service industry and provides a new framework for the governance of artificial intelligence.
Chapter
This chapter explores the so-called ‘liability gaps’ that occur when, in applying existing contractual, extra-contractual, or strict liability rules to harms caused by AI, the inherent characteristics of AI may result in unsatisfying outcomes, in particular for the damaged party. The chapter explains the liability gaps, investigating which features of AI challenge the application of traditional legal solutions and why. Subsequently, this chapter explores the challenges connected to the different possible solutions, including contract law, extra-contractual law, product liability, mandatory insurance, company law, and the idea of granting legal personhood to AI and robots. The analysis is carried out using hypothetical scenarios, to highlight both the abstract and practical implications of AI, based on the roles and interactions of the various parties involved. As a conclusion, this chapter offers an overview of the fundamental principles and guidelines that should be followed to elaborate a comprehensive and effective strategy to bridge the liability gaps. The argument made is that the guiding principle in designing legal solutions to the liability gaps must be the protection of individuals, particularly their dignity, rights and interests. Keywords: Artificial Intelligence, Contract law, Damages, European law, Liability, Tort
Article
Along with the popularity of service robots in various service settings, service robots are often gendered as either female or male. This study examines the roles of service robots’ gender and level of anthropomorphism in shaping pleasure and customer satisfaction at service encounters. A 2 (gender of service robot: female/male) × 2 (level of anthropomorphism: low/high) between-subjects factorial design is employed to test the hypotheses using a scenario-based experimental survey. Results of the proposed moderated mediation model suggest that female service robots generate more pleasure and higher satisfaction than male service robots, and that this influence is amplified when the level of anthropomorphism is high rather than low. The findings highlight a benefit of female service robots in a hotel setting that holds only when the service robot is humanized, providing useful guidelines for hoteliers applying service robots in their service settings.
Article
Service robots (SR) are increasingly valued and embraced; they are here to stay. Research on collaborative intelligence to better understand robot–human partnerships is scarce. To bridge that gap, this study examined the value of SR from the guest’s perspective and thus sought a deeper understanding of the value co-creation process in the context of full-service hotels. A mixed-method design was used to capture the depth and breadth of the perceived value of SR. Study 1 is a qualitative study probing consumers’ sense-making regarding SR. Study 2 used structural equation modeling to test the hypotheses derived from Study 1. Results indicate that perceived privacy, the functional benefits of SR, and robot appearance positively influence consumers’ attitude towards the adoption of SR. Functional benefits and novelty had an impact on individuals’ anticipated overall experience. Attitude and anticipated overall experience, in turn, enhanced consumers’ acceptance of SR. Implications, limitations, and future research are discussed.
Article
Background: For secure text mining in the campus workplace, robotic assistance with feature optimization is essential. Texts are usually represented with the vector space model, but this basic approach has two drawbacks: the curse of dimensionality and the lack of semantic knowledge. Objectives: This paper proposes a new Meta-Heuristic Feature Optimization (MHFO) method for data security in the campus workplace with robotic assistance. Firstly, the terms of the vector space model are mapped to the concepts of a data protection ontology, and concept frequency weights are statistically calculated from the various term weights. Furthermore, concept weights are allocated according to the design of the data protection ontology. The dimensionality of the feature space is reduced significantly by combining standard frequency weights with the ontology-based weights, and semantic knowledge is integrated into this process. Results: The results show that this feature-optimization process significantly improves secure text mining in the campus workplace. Conclusion: The experimental results show that the concept-hierarchy-based feature optimization significantly enhances the data security of campus workplace text mining with robotic assistance.
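A minimal sketch of the weighting idea described above, assuming a hypothetical term-to-concept mapping and hypothetical concept weights (neither is taken from the paper): term frequencies are collapsed into a smaller vector of ontology-level concept features, which is one plausible reading of how the dimensionality reduction could work.

# Illustrative sketch only; the mapping, weights and formula are assumptions, not the MHFO method itself.
from collections import defaultdict

# Hypothetical mapping from vocabulary terms to data-protection-ontology concepts.
TERM_TO_CONCEPT = {
    "password": "credential",
    "login": "credential",
    "gps": "location_data",
    "address": "location_data",
}

# Hypothetical weights attached to each ontology concept.
CONCEPT_WEIGHT = {"credential": 1.5, "location_data": 1.2}

def concept_features(term_freqs):
    """Collapse a term-frequency vector into a lower-dimensional concept vector,
    scaling each concept by its ontology-derived weight."""
    features = defaultdict(float)
    for term, freq in term_freqs.items():
        concept = TERM_TO_CONCEPT.get(term)
        if concept is not None:
            features[concept] += freq * CONCEPT_WEIGHT.get(concept, 1.0)
    return dict(features)

# Toy document: term counts for three terms; unmapped terms are simply dropped.
print(concept_features({"password": 3, "gps": 2, "meeting": 5}))
# -> {'credential': 4.5, 'location_data': 2.4}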
Chapter
The chapter examines how the information revolution impacts the field of data protection in a twofold way. On the one hand, the scale and amount of cross-border interaction taking place in cyberspace illustrate how the information revolution affects basic tenets of current legal frameworks, such as the idea of the law as a set of rules enforced through the menace of physical sanctions, and matters of jurisdiction on the Internet. On the other hand, many impasses of today’s legal systems on data protection, liability, and jurisdiction can properly be tackled by embedding normative constraints into information and communication technologies, as shown by the principle of privacy by design in such cases as information systems in hospitals, video surveillance networks in public transport, or smart cards for biometric identifiers. Normative safeguards and constitutional constraints can indeed be embedded in places and spaces, products and processes, so as to strengthen the rights of individuals and widen the range of their choices. Although it is unlikely that “privacy by design” can offer a one-size-fits-all solution to the problems emerging in the field, it is plausible that the principle will be the key to understanding how today’s data protection issues are being handled.
Article
Full-text available
Various recently introduced applications of artificial intelligence (AI) operate at the interface between businesses and consumers. This paper asks whether these innovations have relevant implications for marketing theory. The latest literature on the connection between AI and marketing has emphasized a great variety of AI applications that qualify this relationship. Building on these studies but focusing only on the applications with a direct impact on the relationship at the very heart of marketing, i.e., the one between firms and consumers, the paper analyzes three categories of AI applications: AI-based shipping-then-shopping, AI-based service robots, and AI-based smart products and domestic robots. The main result of this first analysis is that all three categories have to do, each in its own way, with mass customization. A discussion of this common trait leads us to recognize that their paths to mass customization, unlike the traditional approach developed through flexible automation and product modularity technologies, place the customization process within a broader perspective of consumer needs management. This change in approach means that marketing should focus more on managing consumers' needs than directly on satisfying those needs. This finding marks a genuine discontinuity that opens up a new space for reflection for scholars and marketing managers alike.
Article
Full-text available
The cluster of concerns usually identified as matters of privacy can be adequately accounted for by unpacking our natural rights to life, liberty, and property. Privacy as derived from fundamental natural rights to life, liberty, and property encompasses the advantages of the control and restricted access theories without their attendant difficulties.
Article
Full-text available
A path-breaking analysis of the concept of privacy as a question of access to the individual and to information about him. An account of the reasons why privacy is valuable, and why it has the coherence that justifies maintaining it as both a theoretical concept and an ideal. Finally, the paper moves from identifying the grounds of the value of privacy to the different question of whether and to what extent privacy should be protected by law. While privacy is a useful concept in social and moral thought, it may well be relatively rare that it should be protected by the law in cases where its violation does not also involve the infringement or violation of other important interests or values.
Article
Full-text available
What will it be like to admit Artificial Companions into our society? How will they change our relations with each other? How important will they be in the emotional and practical lives of their owners, given that we know people became emotionally dependent even on simple devices like the Tamagotchi? How much social life might they have in contacting each other? The contributors to this book discuss the possibility and desirability of some form of long-term computer Companion becoming a certainty in the coming years. It is a good moment to consider, from a set of wide interdisciplinary perspectives, both how we shall construct them technically and their personal, philosophical and social consequences. By Companions we mean conversationalists or confidants – not robots – but rather computer software agents whose function will be to get to know their owners over a long period. Those owners may well be elderly or lonely, and the contributions in the book focus not only on assistance via the internet (contacts, travel, doctors etc.) but also on providing company and Companionship, by offering aspects of real personalization.
Article
Full-text available
Personification of non-humans is best understood as a strategy of dealing with the uncertainty about the identity of the other, which moves the attribution scheme from causation to double contingency and opens the space for presupposing the others' self-referentiality. But there is no compelling reason to restrict the attribution of action exclusively to humans and to social systems, as Luhmann argues. Personifying other non-humans is a social reality today and a political necessity for the future. The admission of actors does not take place, as Latour suggests, into one and only one collective. Rather, the properties of new actors differ extremely according to the multiplicity of different sites of the political ecology.
Article
Full-text available
The concept of Artificial Agents (AA) and the separation of the concerns of morality and responsibility of AA were discussed. The method of abstraction (MoA) was used as a vital component for analysing the level of abstraction (LoA) at which an agent was considered to act. The approach facilitated the discussion of the morality of agents both in cyberspace and in the biosphere, where systems such as organizations can play the role of moral agents. It was found that computer ethics has an important scope for a concept of moral agent that does not necessarily exhibit free will, mental states or responsibility.
Article
Full-text available
In this article, I summarise the ontological theory of informational privacy (an approach based on information ethics) and then discuss four types of interesting challenges confronting any theory of informational privacy: (1) parochial ontologies and non-Western approaches to informational privacy; (2) individualism and the anthropology of informational privacy; (3) the scope and limits of informational privacy; and (4) public, passive and active informational privacy. I argue that the ontological theory of informational privacy can cope with such challenges fairly successfully. In the conclusion, I discuss some of the work that lies ahead.
Article
Full-text available
The combined use of computers and telecommunications and the latest evolution in the field of Artificial Intelligence have brought along new ways of contracting and of expressing will and declarations. The question is how far we can go in recognizing computer intelligence and autonomy, and how we can legally deal with a new form of electronic behaviour capable of autonomous action. In the field of contracting through Intelligent Electronic Agents, there is a pressing need to analyse the question of the expression of consent, and two main possibilities have been proposed: considering electronic devices as mere machines or tools, or considering them as legal persons. Another possibility that has been frequently mentioned consists in applying the rules of agency to electronic transactions. Meanwhile, the question remains: would it be possible, under a Civil Law framework, to apply the notions of “legal personhood” and “representation” to electronic agents? It is obvious that existing legal norms are not fit for such a demanding challenge. Yet the virtual world exists, and it requires a new but realistic legal approach to software agents, in order to enhance the use of electronic commerce in a global world.
Article
Full-text available
Rapid advances in robotics technology for the battlefield and policing could promote a new breed of copycat "garden shed" robot criminals.
Article
Full-text available
As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a 'moral Turing Test' is considered, and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.
Article
Full-text available
Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them.
Article
Full-text available
The advent of software agents gave rise to much discussion of just what such an agent is, and of how agents differ from programs in general. Here we propose a formal definition of an autonomous agent which clearly distinguishes a software agent from just any program. We also offer the beginnings of a natural-kinds taxonomy of autonomous agents, and discuss possibilities for further classification. Finally, we discuss subagents and multiagent systems.
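As a purely illustrative gloss on that distinction (not the authors' formal definition or taxonomy), the toy Python sketch below contrasts an ordinary run-once computation with an agent that repeatedly senses and acts on its environment over time in pursuit of its own goal; all names are invented for the example.

# Hedged sketch: an 'agent' here is anything that senses and acts on an
# environment over time in pursuit of its own agenda, unlike a program that
# maps one input to one output and stops.
import random

class Environment:
    """Toy environment: a single temperature value that drifts randomly."""
    def __init__(self):
        self.temperature = 20.0

    def drift(self):
        self.temperature += random.uniform(-1.0, 1.0)

def run_once(temperature):
    """Ordinary program: computes an answer from its input and terminates."""
    return "heat" if temperature < 21.0 else "cool"

class ThermostatAgent:
    """Toy autonomous agent: persistently pursues its own goal (a target temperature)."""
    def __init__(self, target):
        self.target = target

    def sense(self, env):
        return env.temperature

    def act(self, env):
        # The agent changes the environment it is embedded in, not just an output value.
        env.temperature += 0.5 if self.sense(env) < self.target else -0.5

env, agent = Environment(), ThermostatAgent(target=21.0)
for _ in range(10):          # ongoing interaction over time
    env.drift()
    agent.act(env)
print(run_once(env.temperature), round(env.temperature, 2))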
Conference Paper
The concept of an agent has recently become important in Artificial Intelligence (AI) and its relatively youthful subfield, Distributed AI (DAI). Our aim in this paper is to point the reader at what we perceive to be the most important theoretical and practical issues associated with the design and construction of intelligent agents. For convenience, we divide the area into three themes (though, as the reader will see, these divisions are at times somewhat arbitrary). Agent theory is concerned with the question of what an agent is, and with the use of mathematical formalisms for representing and reasoning about the properties of agents. Agent architectures can be thought of as software engineering models of agents; researchers in this area are primarily concerned with the problem of constructing software or hardware systems that will satisfy the properties specified by agent theorists. Finally, agent languages are software systems for programming and experimenting with agents; these languages typically embody principles proposed by theorists. The paper is not intended to serve as a tutorial introduction to all the issues mentioned; we hope instead simply to identify the key issues and point to work that elaborates on them. The paper closes with a detailed bibliography and some bibliographical remarks.
Chapter
Over the last years, lawmakers, privacy commissioners and scholars have discussed the idea of embedding data protection safeguards in ICT and other types of technology, by means of value-sensitive design, AI and legal ontologies, PeCAM platforms, and more. Whereas this kind of effort is offering fruitful solutions for operating systems, health care technologies, social networks and smart environments, the paper stresses some critical aspects of the principle by examining the technological limits, ethical constraints and legal conditions of privacy by design, so as to prevent some misapprehensions in the current debate. The idea should be to decrease the entropy of the system via ‘digital air-bags’ and to strengthen people’s rights by widening the range of their choices, rather than preventing harm-generating behaviour from occurring through the use of self-enforcement technologies.
Article
Unmanned vehicles (UVs) have rapidly gained prominence in both military and civilian spheres over the last decade. This paper argues that the use of such technology challenges the boundaries and efficacy of existing legal frameworks and raises a range of social and ethical concerns. Despite this, there has been relatively little legal debate on the consequences of removing human operators from vehicles. This is a growing concern, given that unmanned vehicles are now a practical reality in many diverse environments across the globe. This article therefore provides an overview of some of the legal, social and ethical issues presented by unmanned vehicles as a précis to further discussion in a special edition of this journal.
Article
This essay critically examines some classic philosophical and legal theories of privacy, organized into four categories: the nonintrusion, seclusion, limitation, and control theories of privacy. Although each theory includes one or more important insights regarding the concept of privacy, I argue that each falls short of providing an adequate account of privacy. I then examine and defend a theory of privacy that incorporates elements of the classic theories into one unified theory: the Restricted Access/Limited Control (RALC) theory of privacy. Using an example involving data-mining technology on the Internet, I show how RALC can help us to frame an online privacy policy that is sufficiently comprehensive in scope to address a wide range of privacy concerns that arise in connection with computers and information technology.
Article
Could an artificial intelligence become a legal person? As of today, this question is only theoretical. No existing computer program currently possesses the sort of capacities that would justify serious judicial inquiry into the question of legal personhood. The question is nonetheless of some interest. Cognitive science begins with the assumption that the nature of human intelligence is computational, and therefore, that the human mind can, in principle, be modelled as a program that runs on a computer. Artificial intelligence (AI) research attempts to develop such models. But even as cognitive science has displaced behavioralism as the dominant paradigm for investigating the human mind, fundamental questions about the very possibility of artificial intelligence continue to be debated. This Essay explores those questions through a series of thought experiments that transform the theoretical question whether artificial intelligence is possible into legal questions such as, "Could an artificial intelligence serve as a trustee?" What is the relevance of these legal thought experiments for the debate over the possibility of artificial intelligence? A preliminary answer to this question has two parts. First, putting the AI debate in a concrete legal context acts as a pragmatic Occam's razor. By reexamining positions taken in cognitive science or the philosophy of artificial intelligence as legal arguments, we are forced to see them anew in a relentlessly pragmatic context. Philosophical claims that no program running on a digital computer could really be intelligent are put into a context that requires us to take a hard look at just what practical importance the missing reality could have for the way we speak and conduct our affairs. In other words, the legal context provides a way to ask for the "cash value" of the arguments. The hypothesis developed in this Essay is that only some of the claims made in the debate over the possibility of AI do make a pragmatic difference, and it is pragmatic differences that ought to be decisive. Second, and more controversially, we can view the legal system as a repository of knowledge-a formal accumulation of practical judgments. The law embodies core insights about the way the world works and how we evaluate it. Moreover, in common-law systems judges strive to decide particular cases in a way that best fits the legal landscape-the prior cases, the statutory law, and the constitution. Hence, transforming the abstract debate over the possibility of AI into an imagined hard case forces us to check our intuitions and arguments against the assumptions that underlie social decisions made in many other contexts. By using a thought experiment that explicitly focuses on wide coherence, we increase the chance that the positions we eventually adopt will be in reflective equilibrium with our views about related matters. In addition, the law embodies practical knowledge in a form that is subject to public examination and discussion. Legal materials are published and subject to widespread public scrutiny and discussion. Some of the insights gleaned in the law may clarify our approach to the artificial intelligence debate.
Article
My commentary is restricted to two issues raised in the précis article: (a) what response might the law make to autonomous or semi-autonomous unmanned ground vehicles (UGVs) on public roads; and (b) whether the use of unmanned vehicles, and especially the use of unmanned aerial vehicles (UAVs), will require the laws protecting privacy to be reviewed. A further limit on my commentary is that I can speak with any confidence only on the current law in Australia and New Zealand. I comment on the possible effect of the advent of unmanned vehicles on the law in England, Canada and the United States, but beyond those countries, I regret that my knowledge is insufficient to make any meaningful commentary.
Article
The practices of public surveillance, which include the monitoring of individuals in public through a variety of media (e.g., video, data, online), are among the least understood and controversial challenges to privacy in an age of information technologies. The fragmentary nature of privacy policy in the United States reflects not only the oppositional pulls of diverse vested interests, but also the ambivalence of unsettled intuitions on mundane phenomena such as shopper cards, closed-circuit television, and biometrics. This Article, which extends earlier work on the problem of privacy in public, explains why some of the prominent theoretical approaches to privacy, which were developed over time to meet traditional privacy challenges, yield unsatisfactory conclusions in the case of public surveillance. It posits a new construct, "contextual integrity," as an alternative benchmark for privacy, to capture the nature of challenges posed by information technologies. Contextual integrity ties adequate protection for privacy to norms of specific contexts, demanding that information gathering and dissemination be appropriate to that context and obey the governing norms of distribution within it. Building on the idea of "spheres of justice," developed by political philosopher Michael Walzer, this Article argues that public surveillance violates a right to privacy because it violates contextual integrity; as such, it constitutes injustice and even tyranny.
Article
In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such question is, “Can an artificial agent that changes its own programming become so autonomous that the original designer is no longer responsible for the behavior of the artificial agent?” To explore this question, we distinguish between LoA1 (the user view) and LoA2 (the designer view) by exploring the concepts of unmodifiable, modifiable and fully modifiable tables that control artificial agents. We demonstrate that an agent with an unmodifiable table, when viewed at LoA2, distinguishes an artificial agent from a human one. This distinction supports our first counter-claim to Floridi and Sanders, namely, that such an agent is not a moral agent, and the designer bears full responsibility for its behavior. We also demonstrate that even if there is an artificial agent with a fully modifiable table capable of learning* and intentionality* that meets the conditions set by Floridi and Sanders for ascribing moral agency to an artificial agent, the designer retains strong moral responsibility.
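The contrast between an unmodifiable and a fully modifiable control table can be pictured with a short sketch such as the one below; the classes, names and behaviour are illustrative assumptions of mine, not the formalism used in the paper.

# Hedged illustration: a designer-fixed control table versus a table the agent can rewrite.
class FixedTableAgent:
    """Behaviour is entirely fixed by the designer-supplied table (the LoA2, designer view)."""
    def __init__(self, table):
        self._table = dict(table)   # copied once; this agent never rewrites it

    def respond(self, stimulus):
        return self._table.get(stimulus, "ignore")

class SelfModifyingAgent(FixedTableAgent):
    """Can rewrite its own control table in the light of feedback ('learning*')."""
    def learn(self, stimulus, new_response):
        self._table[stimulus] = new_response

fixed = FixedTableAgent({"greeting": "greet back"})
adaptive = SelfModifyingAgent({"greeting": "greet back"})
adaptive.learn("greeting", "ask a question")   # behaviour drifts away from the original design
print(fixed.respond("greeting"), "|", adaptive.respond("greeting"))
# -> greet back | ask a question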
Article
The paper examines some aspects of today’s debate on trust and e-trust and, more specifically, issues of legal responsibility for the production and use of robots. Their impact on human-to-human interaction has produced new problems both in the fields of contractual and extra-contractual liability, in that robots negotiate, enter into contracts, and establish rights and obligations between humans, while reshaping matters of responsibility and risk in trust relations. Whether or not robotrust concerns human-to-robot or even robot-to-robot relations, there is a new generation of cases involving human-to-human contractual and extra-contractual liability for robots’ behaviour because, for the first time, legal systems hold you responsible for what an artificial system autonomously decides to do. Keywords: AI, Delegation, Legal responsibility, Liability, Risk, Robot, Trust, ZI agent.
Article
Keywords: Privacy by design, Privacy-enhancing technologies, Positive-sum.
Conference Paper
This paper presents the results of the Euron Roboethics Atelier 2006 (Genoa, Italy, February–March 2006), comprising the Roboethics Roadmap, and offers a short overview of the ethical problems involved in the development of the next generation of humanoid robots.
Article
I shall argue that software agents can be attributed cognitive states, since their behaviour can best be understood by adopting the intentional stance. These cognitive states are legally relevant when agents are delegated by their users to engage, without the users’ review, in choices based on the agents’ own knowledge. Consequently, both with regard to torts and to contracts, legal rules designed for humans can also be applied to software agents, even though the latter do not have rights and duties of their own. The implications of this approach in different areas of the law are then discussed, in particular with regard to contracts, torts, and personality.
Article
This article examines a number of contractual issues generated by the advent of intelligent agent applications. The aim of the study is to provide legal guidelines for developers of intelligent agent software by addressing the contractual difficulties associated with automated electronic transactions. The author investigates whether the requirements for a legally enforceable contract are satisfied by agent applications that operate independent of human supervision. Given the relative novelty of the technology and the paucity of case law in the area, the author's observations and conclusions are based on an analysis of first principles in contract law. Additionally, the author provides an analysis of whether proposed and enacted electronic commerce legislation in various jurisdictions is sufficient to cure the inherent deficiencies of traditional contract doctrine. Given the trend towards automated electronic commerce, the author concludes by highlighting the legal requirements that must be met in order to ensure the success of agent technology in the formation of online contracts.
Article
A software agent is a computer program that operates within computing environments. The owners of software agents may instruct their agents to roam the networks, access desired information by exchanging data with other agents or people, and handle business and personal transactions. As the interactions between software agents and humans become more frequent, it is relevant to ask whether there are any issues of law that may guide their interactions and conduct. For example, as the agents become more intelligent and autonomous, who will be responsible for the mistakes that software agents make? Will software agents be allowed to contract with humans and with each other, and if so will such contracts be enforceable? And, will software agents have standing to sue and be sued? While there are a host of legal issues associated with software agents operating within virtual environments, the main issue addressed in this paper is whether software agents should be granted the legal rights associated with personhood. After discussing basic characteristics of software agents, and personhood in general, the paper concludes by outlining three possible scenarios that could represent the legal status of software agents in the future; these include the current status quo of property, the status of an indentured servant, and the status and associated rights of legal personhood.
Privacy by design: the definitive workshop
  • Cavoukian