Article

Towards a science of integrated AI and Robotics


Abstract

The early promise of the impact of machine intelligence did not involve the partitioning of the nascent field of Artificial Intelligence. The founders of AI envisioned the notion of embedded intelligence as being conjoined between perception, reasoning and actuation. Yet over the years the fields of AI and Robotics drifted apart. Practitioners of AI focused on problems and algorithms abstracted from the real world. Roboticists, generally with a background in mechanical and electrical engineering, concentrated on sensori-motor functions. That divergence is slowly being bridged with the maturity of both fields and with the growing interest in autonomous systems. This special issue brings together the state of the art and practice of the emergent field of integrated AI and Robotics, and highlights the key areas along which this current evolution of machine intelligence is heading.


... Introduction Background Information and Significance The conquest of outer space has been the single biggest impetus for human innovation and scientific enterprise since the launch of the space age [1]. From the first manned missions to the Moon to some of the most complex telescopes and robotic probes, the exploration of space has thrown important light on the universe and humankind's place within it. ...
... Power Systems: Space robots must be equipped with a reliable power source, such as batteries or solar panels, that allows them to operate effectively. Power management systems ensure that the robot's energy needs are continuously met throughout the mission [1]. ...
Research
Full-text available
The exploration of space has always stretched human knowledge and technological capabilities to their limits. In the past few years, artificial intelligence and robotics have taken this further still, laying the groundwork for leaps into the unknown. This paper presents the various roles that AI and robotics play in space exploration, from autonomous navigation and decision-making to sophisticated data analysis and environment interaction. By applying machine learning algorithms, robotic systems can process large amounts of information, find patterns, and make decisions in real time without human intervention. Such capabilities are essential to mission success in remote or inhospitable regions of space where a human presence, either at all or for any extended period, is impractical. The paper cites case studies, including Mars rovers and satellite constellations, to show how AI-driven robots improve scientific discovery and operational efficiency. Future prospects for AI and robotics in space research include intelligent habitats, in-situ resource utilization, and deep space missions. Through an in-depth review of current work and prospective innovations, this paper aims to clarify how intelligent machines are transforming our ventures into the cosmos.
... The history of education has shown many innovations introduced in the attempt to improve the learning process (Hermans et al., 2008). Social robots were introduced into education in this millennium (Rajan & Saffiotti, 2017). They were first used as test subjects in human-robot communication research (Fong et al., 2002; Hong et al., 2016; Zhao, 2006), then as museum guides, tour guides and assistants (Fong et al., 2002), as childcare assistants (Fridin, 2014; Taipale et al., 2015), to meet the needs of the elderly, disabled and sick (Taipale et al., 2015), and in work environments, public spaces and education (Mubin et al., 2013; Shiomi et al., 2015; Tuna et al., 2019). ...
... Social robots are physically embodied systems equipped with artificial intelligence (Rajan & Saffiotti, 2017), interaction capabilities and behaviours that are aligned with the robot's appearance and role in a given human social space. They are designed to become a human interaction partner (Cameron et al., 2017;Damiano & Dumouchel, 2018;Sandry, 2015) and establish relationships with users through their social behaviour (Damiano & Dumouchel, 2018;Heerink et al., 2009). ...
Article
Full-text available
Our research aims to examine the effectiveness of introducing social robots as educational technology within authentic classroom activities, without modifying those activities to be designed for a robot. We chose as our test subject the fifth-grade curricular topic "The role of technology and its impact on society", which meets the critical stage of moral development of students aged 11–12. The study, with both an experimental group (EG) and a control group (CG), will be conducted over 6 weeks. It will examine the impact of robot-supported lessons, with post-participation testing, on learning outcomes, and will examine students' perception of the robot in the classroom as a potential correlate of academic performance. The study takes the form of a between-group non-randomised controlled experiment. Control and experimental groups will be matched for gender, mastery of technology, and previous knowledge and understanding of the curricular topic in focus. The instructional design of process-outcome strategies will incorporate all of Bloom's taxonomic levels. In reviewing related studies, we identified a gap: between-group experiments on social robot-supported lessons within the regular curriculum are scarce, and existing research focuses more on the robot's performance in the classroom from technical-interaction aspects, whereas we proceed from a pedagogical starting point. The robot's placement in the pedagogical process will be considered an integral part of the teacher's technical environment. We will use the pre-participation test to establish whether there is initial equivalence between the EG and CG in terms of gender, mastery of technology, and previous knowledge and understanding of the curricular topic under examination.
... A sixth area is natural language processing, which looks to develop machines that understand, interpret, and generate human language for applications like translation and sentiment analysis (Nadkarni et al., 2011). Finally, there is the area of robotics, which focuses on creating machines that can perform tasks autonomously or semiautonomously in both physical and social environments (Rajan & Saffiotti, 2017). ...
Article
Full-text available
The increasingly widespread use of artificial intelligence (AI) has attracted the attention of creative individuals, organizations, and researchers alike. Traditionally focused on conventional forms of creativity, researchers have found many benefits of AI to enhance the generation of novel and useful ideas. However, not all ideas are benevolent in nature. As such, this review considers the intersection of malevolent creativity and AI. Specifically, we discuss how AI can enhance human generation, selection, and implementation of novel and harmful ideas. Using the stage model of creativity as our guide, we examine how stages from problem construction to implementation planning can be impacted by AI, shaping the potential for harm sought by AI’s human partners.
... The integration of social robot technology in education is predicted for the near future (Breazeal, 2009; Edwards et al., 2016; Ivanov, 2016; Newton & Newton, 2019a, 2019b; Rajan & Saffiotti, 2017; Sumakul, 2019; Tuna et al., 2019). Social robots are defined as physically embodied autonomous robotic technology, equipped with artificial intelligence and social skills, developed to become equal partners in social relations, capable of human-like situational and role-appropriate interaction (Istenic et al., 2021b). ...
Article
Full-text available
Children’s environments are being radically modified by the introduction of artificial intelligence-based technology that can mimic human socio-emotional capabilities. Artificial intelligence has facilitated the transition from computers to real-world embodied physical systems such as social robots: anthropomorphic artefacts with implications for child development and a distinctly radical innovation compared with all previous classroom technologies. Empirical research on children's learning processes in authentic classrooms, and on teachers' perceptions of how children perceive anthropomorphic robots, is scarce. Teachers' knowledge of students' perception of technology is essential, and social robots are anticipated for future generations of teachers and students. A two-part survey applied the ASOR ascription of mental capacities, socio-practical capacities and socio-moral status. Two samples were involved: elementary school students aged 11–12 and preservice teachers. In the first part, students' perceptions of a social robot were examined. A NAO robot-assisted lesson was conducted in a regular classroom according to the regular curriculum, addressing the role of technology in society, and was followed by a survey. In the second part, preservice teachers assessed children's perceptions of social robots. The study objective was to compare (a) preservice teachers' knowledge of which capacities and status students attribute to the NAO robot in an educational setting with (b) the capacities and status students actually attribute to the NAO robot. In the data analysis, preservice teachers' and students' scores were compared. Our findings show that (a) preservice teachers do not know students' perceptions of NAO, and (b) a comparison of the perceptions of 11-year-old fifth graders and 12-year-old sixth graders showed no statistically significant difference. In examining gender differences, three items were identified.
... Several experiments, such as the "Mirror Test" [16], have been conducted to gauge the capacity for self-cognition. In contrast, within the realm of robotics, there has been a growing emphasis on integrating AI algorithms with hardware to expand the spectrum of tasks AI systems can undertake [11,45,37]. However, a crucial yet relatively unexplored domain is the ability of AI systems to autonomously recognize their own bodies (i.e., their physical embodiments), despite the recognized significance of this research field. ...
Preprint
Full-text available
In the pursuit of realizing artificial general intelligence (AGI), the importance of embodied artificial intelligence (AI) becomes increasingly apparent. Following this trend, research integrating robots with AGI has become prominent. As embodiments of various kinds have been designed, adaptability to diverse embodiments will become important to AGI. We introduce a new challenge, termed "Body Discovery of Embodied AI", focusing on the tasks of recognizing embodiments and summarizing neural signal functionality. The challenge encompasses the precise definition of an AI body and the intricate task of identifying embodiments in dynamic environments, where conventional approaches often prove inadequate. To address these challenges, we apply a causal inference method and evaluate it by developing a simulator tailored for testing algorithms in virtual environments. Finally, we validate the efficacy of our algorithms through empirical testing, demonstrating robust performance in various virtual-environment scenarios.
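The core intuition behind causal body discovery — treating as "body" those observed variables that respond to the agent's own motor commands — can be sketched as a toy randomized-intervention test. The names, thresholds, and simulated world below are illustrative assumptions, not the paper's actual method or simulator:

```python
import random

def discover_body(actuate, observe, candidates, trials=200):
    """Return the candidate variables whose changes agree with the agent's
    randomized motor commands almost always (i.e., likely body parts)."""
    counts = {c: 0 for c in candidates}
    for _ in range(trials):
        command = random.choice([0, 1])   # randomized intervention
        before = observe()
        actuate(command)
        after = observe()
        for c in candidates:
            # agreement: the variable changed exactly when we commanded it to
            if (after[c] != before[c]) == (command == 1):
                counts[c] += 1
    return {c for c in candidates if counts[c] / trials > 0.95}

# Toy world: "arm" responds to the agent's commands; "bird" changes at random.
state = {"arm": 0, "bird": 0}

def observe():
    return dict(state)

def actuate(command):
    if command == 1:
        state["arm"] += 1                    # body part moves on command
    state["bird"] += random.choice([0, 1])   # exogenous, command-independent

random.seed(0)
found = discover_body(actuate, observe, ["arm", "bird"])
```

Because the exogenous variable agrees with the interventions only about half the time, it falls well under the agreement threshold and is excluded, while the commanded variable is kept.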
... This strategy enables the implementation of targeted initiatives to lower emissions and counteract climate change effects, ensuring that environmental management practices remain both efficient and sustainable [115,116]. Innovative applications of AI extend to the realm of robotics [117,118], where AI agents equip micro and nanoscale colloidal robots with deep reinforcement learning capabilities. These robots are capable of navigating unknown environments efficiently, making them particularly impactful in precision surgery and targeted nanodrug delivery [119]. ...
Article
Full-text available
The integration of artificial intelligence (AI) agents with the Internet of Things (IoT) has marked a transformative shift in environmental monitoring and management, enabling advanced data gathering, in-depth analysis, and more effective decision making. This comprehensive literature review explores the integration of AI and IoT technologies within environmental sciences, with a particular focus on applications related to water quality and climate data. The methodology involves a systematic search and selection of relevant studies, followed by thematic, meta-, and comparative analyses to synthesize current research trends, benefits, challenges, and gaps. The review highlights how AI enhances IoT’s data collection capabilities through advanced predictive modeling, real-time analytics, and automated decision making, thereby improving the accuracy, timeliness, and efficiency of environmental monitoring systems. Key benefits identified include enhanced data precision, cost efficiency, scalability, and the facilitation of proactive environmental management. Nevertheless, this integration encounters substantial obstacles, including issues related to data quality, interoperability, security, technical constraints, and ethical concerns. Future developments point toward enhancements in AI and IoT technologies, the incorporation of innovations like blockchain and edge computing, the potential formation of global environmental monitoring systems, and greater public involvement through citizen science initiatives. Overcoming these challenges and embracing new technological trends could enable AI and IoT to play a pivotal role in strengthening environmental sustainability and resilience.
... Before choosing which herbicide to apply in a region, AI can detect and target weeds through visuals obtained from camera modules mounted on robots. A wide range of issues in the farming sector can be resolved by combining robots and AI approaches (Rajan & Saffiotti, 2017). In addition, different robots with AI technology are currently used for weed removal and for the plucking, sorting, and packaging of fruits (Vivekananda et al., 2022). ...
Chapter
Full-text available
The role of automation in horticulture has become a focal point globally due to its significant contribution to the economic sector. With a surging population, the demand for food and employment has escalated. Traditional farming methods proved inadequate to meet these demands, prompting the adoption of automated approaches that not only satisfied food requirements but also generated extensive employment opportunities. The advent of Artificial Intelligence (AI) has spurred a revolution by safeguarding crop yields against climate fluctuations, population growth, employment disparities, and food security concerns. This study aims to assess the diverse applications of AI in horticulture. Among them, AI is aiding the development of smart irrigation systems: by utilizing real-time data from soil sensors, weather forecasts, and plant conditions, these systems improve crop health and yield while promoting sustainable water usage.
... A key aspect (and still an open challenge) in the design of autonomous robotic agents is the integration of different AI technologies and Robotics [34,35]. SI-Robotics proposes an AI-based control architecture with twofold objectives: (1) to support the abstraction and reasoning capabilities needed to recognize health-related situations and assistive objectives and (2) to coordinate robotic skills to "act" in the environment and (autonomously) achieve contextualized assistive objectives. ...
Article
Full-text available
Background Parkinson disease (PD) is a progressive neurodegenerative disorder characterized by motor symptoms. Recently, dance has started to be considered an effective intervention for people with PD. Several findings in the literature emphasize the necessity for deeper exploration into the synergistic impacts of dance therapy and exergaming for PD management. Moreover, socially engaging robotic platforms equipped with advanced interaction and perception features offer potential for monitoring patients’ posture and enhancing workout routines with tailored cues. Objective This paper presents the results of the Social Robotics for Active and Healthy Ageing (SI-Robotics) project, aimed at designing an innovative rehabilitation program targeted at seniors affected by (early-stage) PD. This study therefore aims to assess the usefulness of a dance-based rehabilitation program enriched by artificial intelligence–based exergames and contextual robotic assistance in improving motor function, balance, gait, and quality of life in patients with PD. The acceptability of the system is also investigated. Methods The study is designed as a technical feasibility pilot to test the SI-Robotics system. For this study, 20 patients with PD were recruited. A total of 16 Irish dance–based rehabilitation sessions of 50 minutes were conducted (2 sessions per week, for 8 wks), involving 2 patients at a time. The designed rehabilitation session involves three main actors: (1) a therapist, (2) a patient, and (3) a socially interacting robot. To stimulate engagement, sessions were organized in the shape of exergames where an avatar shows patients the movements they should perform to correctly carry out a dance-based rehabilitation exercise. 
Results Statistical analysis reveals a significant difference on the Performance-Oriented Mobility Assessment scale, both on balance and gait aspects, together with improvements in Short Physical Performance Battery, Unified Parkinson Disease Rating Scale–III, and Timed Up and Go test, underlying the usefulness of the rehabilitation intervention on the motor symptoms of PD. The analysis of the Unified Theory of Acceptance and Use of Technology subscales provided valuable insights into users’ perceptions and interactions with the system. Conclusions This research underscores the promise of merging dance therapy with interactive exergaming on a robotic platform as an innovative strategy to enhance motor function, balance, gait, and overall quality of life for patients grappling with PD.
... One of the long-term goals of AI and robotics is to enable embodied agents to understand natural language instructions and perform complex tasks [14]. Recent advances in large language models (LLMs) have demonstrated a profound capacity for understanding, reasoning, and planning leading to significant enhancements across various domains [21]. ...
Preprint
The integration of large language models (LLMs) into robotics significantly enhances the capabilities of embodied agents in understanding and executing complex natural language instructions. However, the unmitigated deployment of LLM-based embodied systems in real-world environments may pose physical risks, such as property damage and personal injury. Existing security benchmarks for LLMs overlook risk awareness in LLM-based embodied agents. To address this gap, we propose RiskAwareBench, an automated framework designed to assess physical risk awareness in LLM-based embodied agents. RiskAwareBench consists of four modules: safety tips generation, risky scene generation, plan generation, and evaluation, enabling comprehensive risk assessment with minimal manual intervention. Using this framework, we compile the PhysicalRisk dataset, encompassing diverse scenarios with associated safety tips, observations, and instructions. Extensive experiments reveal that most LLMs exhibit insufficient physical risk awareness, and that baseline risk mitigation strategies yield limited improvement, which underscores the urgency and importance of improving risk awareness in LLM-based embodied agents.
... The former refers to the use of technology to optimize or streamline existing processes, while the latter involves the use of machines capable of performing tasks in a manner similar to human workers. According to Rajan and Saffiotti (2017), robotics in services should be integrated with AI so that robots can interact with customers and provide them with a personalized experience. Wirtz et al. (2018, p. 909) define service robots (SRs) as 'system-based autonomous and adaptable interfaces that interact, communicate and deliver service to an organization's customers.' ...
Article
Full-text available
The hospitality industry in many countries has recently faced severe labour shortages, leading hoteliers to consider the robotization of services. Many studies have focused on hotels that already use service robots while overlooking those that have not yet deployed them. The key role is played by managers, whose perspective has been neglected in previous studies. This research explores the barriers and prospects for robot adoption from the perspective of hotel managers in hotels where service robotization is not yet widespread. For this purpose, 18 managers of upscale hotels in Croatia, a country heavily dependent on tourism facing a labour outflow and low technological development, were interviewed. Using an inductive thematic analysis, four main groups of barriers were identified: managerial knowledge, employee involvement, service context, and technical aspect. Lack of knowledge emerged as the most critical issue. Hotel managers do not currently consider robotization suitable for luxury hotels but express willingness to use it in the future, depending on hotel size and service type. They see potential applications primarily for back-office tasks. They would use them to support rather than replace staff. The results provide a basis for future studies and practical guidelines for hotel policy development.
... The rise of DL-based perception can be attributed to hardware improvements. Advancements in hardware have facilitated the deployment of sophisticated algorithms on lightweight devices [24], allowing resource-limited agents to be equipped with multiple devices to tackle challenging tasks. Additionally, robots have made significant progress in perception owing to advancements in sensors. ...
Preprint
Computer vision tasks are crucial for aerospace missions as they help spacecraft to understand and interpret the space environment, such as estimating position and orientation, reconstructing 3D models, and recognizing objects, which have been extensively studied to successfully carry out the missions. However, traditional methods like Kalman Filtering, Structure from Motion, and Multi-View Stereo are not robust enough to handle harsh conditions, leading to unreliable results. In recent years, deep learning (DL)-based perception technologies have shown great potential and outperformed traditional methods, especially in terms of their robustness to changing environments. To further advance DL-based aerospace perception, various frameworks, datasets, and strategies have been proposed, indicating significant potential for future applications. In this survey, we aim to explore the promising techniques used in perception tasks and emphasize the importance of DL-based aerospace perception. We begin by providing an overview of aerospace perception, including classical space programs developed in recent years, commonly used sensors, and traditional perception methods. Subsequently, we delve into three fundamental perception tasks in aerospace missions: pose estimation, 3D reconstruction, and recognition, as they are basic and crucial for subsequent decision-making and control. Finally, we discuss the limitations and possibilities in current research and provide an outlook on future developments, including the challenges of working with limited datasets, the need for improved algorithms, and the potential benefits of multi-source information fusion.
... Scholars investigate robot anthropomorphism, categorizing robots into mechanical, humanoid, and human-like entities (Gong and Nass, 2007). AI, unlike robotics, focuses on virtual algorithms, particularly in the era of big data (Rajan and Saffiotti, 2017). Integration of robotics and AI enhances service delivery, especially in the service industry, improving customer experiences (Jörling et al., 2019). ...
Article
Full-text available
Purpose This study aims to explore the attitudes and perceptions of Chinese coffee consumers towards robot baristas, considering the proliferation of automated entities within China's coffee sector. Design/methodology/approach Employing the extended Technology Acceptance Model 2 as its theoretical framework, this research conducts in-depth interviews with 30 Chinese coffee consumers. The laddering technique is utilized, supplemented by video simulation. Thematic analysis is subsequently employed to scrutinize the data. Findings The findings delineate six pivotal themes encapsulating Chinese coffee consumers' perceptions of robot baristas – Perceived Introvert Friendliness, Perceived Novelty, Perceived Intellectual Discrepancies, Perceived Efficiency and Reliability, Perceived Emotional Disconnection, and Perceived Labour Market Disruption. Moreover, six motivational themes are identified - Social Status Boosting, Openness to Experience, Ease of Use, Tech-Driven Affordability, Reliable and Uncompromising Quality, and Resistance to Overbearing Service. Research limitations/implications The study is limited by its focus on a specific cultural context. Future research could explore cross-cultural perspectives. Practical implications The findings of this study offer guidance on how to market and position robotic barista services to appeal to consumer preferences and drive adoption. Social implications Understanding consumer perceptions of robotic baristas has broader social implications, particularly in terms of labour market disruption and the potential impact on traditional coffee professions. Businesses can navigate the social implications of automation more effectively and foster greater acceptance of technological innovations within society. 
Originality/value This study offers insights into the inclinations of Chinese coffee consumers, thereby facilitating informed decision-making and the formulation of effective strategies to expedite the adoption of robotic service.
... Artificial intelligence research is progressing quickly, resulting in new opportunities and increased R&D investments. There is a growing trend toward combining artificial intelligence with robotics, particularly in this field (Rajan & Saffiotti, 2017;Schwab & Davis, 2019). Robotics is increasingly impacting various sectors, including manufacturing, agriculture, retail, and services. ...
... Fig. 1: Four types of intelligence, adapted from Huang and Rust [7] Each type of intelligence possesses unique skills and capabilities that can be utilized to enhance customer experience and deliver services more efficiently. Mechanical intelligence involves the capacity to carry out routine tasks automatically, as seen in the case of robots operating in controlled settings, such as factories or assembly lines [16]. Analytical intelligence concerns the ability to process information to solve problems and learn from them [17]. ...
Article
Full-text available
The present study discusses the impact of Human-Robot Collaboration (HRC) powered by Artificial Intelligence (AI) on customer service. It is based on the four types of intelligence-mechanical, analytical, intuitive, and empathetic-and how they are integrated into HRC to provide customers with more efficient and personalized services. The benefits of AI-enabled HRC are highlighted, including reduced operational costs, increased productivity, improved decision-making, and enhanced customer experience. However, the article also addresses the challenges of implementing this approach, such as the potential loss of jobs due to automation, and emphasizes the importance of ethical and responsible implementation. The study has significant practical and academic implications, warning that continuous research is needed to understand the potential and limitations of AI-enabled HRC on customer service. Overall, through a literature review, the article aims to appeal to the reader's critical spirit and explore topics on the transformative power of AI in customer service.
... Digital technologies, including big data, blockchain, digital twin, artificial intelligence (AI), and the Internet of Things (IoT), can attract an influx of high-level production factors, including high-end talent, cutting-edge knowledge, and advanced technological processes and production procedures, thereby optimizing the factor structure of conventional agriculture, expanding the scale and quality of agricultural technological innovation (Klerkx et al., 2019; Bolfe et al., 2020), and ultimately promoting agricultural carbon emission reduction (Ali et al., 2021; Luo et al., 2023). Robots serve as physical carriers loaded with AI (Rajan and Saffiotti, 2017), blockchain (Aditya et al., 2021), big data (Zhang, 2021), 5G (Sophocleous et al., 2022), digital twins (Hoebert et al., 2019) and a host of other digital technologies (Sodikjanov and Khayitboyev, 2023), giving them unique potential for cutting carbon emissions. ...
Article
Full-text available
Introduction Reducing carbon emissions from agriculture is essential to ensuring food security and human prosperity. As a country with approximately 20% of the global population, China has begun actively practicing the low-carbon agricultural development conception. Against the backdrop of disruptive technologies that continue to be integrated into various industries, the massive application of agricultural robots has opened the way to intelligent agriculture. This paper tries to answer whether there is some non-linear nexus between the application of agricultural robots and agricultural carbon emissions in China. As an essential tool for carbon emission reduction in China, does environmental regulation moderate the nexus between agricultural robot applications and agricultural carbon emissions? If so, how does this effect manifest itself? Methods This work takes China as an example by collecting macro-regional panel data from 30 provinces from 2006 to 2019. The environmental Kuznets curve theory is extended to agricultural carbon emissions, and we carried out empirical tests utilizing the panel fixed effects model and the moderating effects model. Results This study verifies the inverted U-shaped nexus between agricultural robotics applications and agricultural carbon emissions in Chinese provinces, i.e., the agricultural carbon emissions (ACE)-Kuznets curve holds. The higher the level of formal environmental regulation, the larger the peak of the ACE-Kuznets curve and the more the inflection point is pushed back. The higher the level of informal environmental regulation, the lower the peak of the ACE-Kuznets curve and the later the inflection point. Discussion The findings in this paper represent the first exploration of the environmental Kuznets curve in agricultural carbon emissions. It is noteworthy that the moderating effect of formal environmental regulation does not lower the peak of the curve as we expect. 
This outcome is attributable to the fact that China is still in a phase of rising agricultural carbon emissions, a phase exacerbated by the overlapping positive effects of agricultural robotics applications and formal environmental regulations. At this stage, informal environmental regulation is more effective than formal environmental regulation in reducing agricultural carbon emissions.
... The fields of Artificial Intelligence (AI) and Robotics were strongly connected in the early days of AI but have since diverged (Rajan and Saffiotti, 2017). Nowadays, on the one hand, we have highly optimized and complex AI algorithms applied in computer science; on the other, we have robust and reliable robots, such as industrial robotic arms, capable of performing tasks even faster and better than humans. ...
Article
Full-text available
Mission planning constitutes an important feature of autonomy for Maritime Autonomous Surface Ships (MASS). Nevertheless, this research topic remains largely unexplored, as the majority of academic and industry projects primarily focus on developing low-level systems, such as control, collision avoidance, and situational awareness. The main contribution of this paper is to address this problem by developing a high-level decision-making system capable of generating an efficient and feasible temporal sequence of high-level actions, which is then sent to the ship control systems responsible for execution. The mission planner is based on the simultaneous temporal planner (STP), which in our case considers temporal actions related to, for example, moving to a specific location, activating docking mode or starting the process of container (un)loading, which are then executed by their respective control systems. Contrary to classical artificial intelligence (AI) planning algorithms, Temporal AI planning algorithms, such as STP, can consider duration of actions, which allows more realistic representation of the mission. We connect the high-level mission planner with the ship's guidance, navigation and control (GNC) system, which has path-planning, path-following control and fuzzy logic-based collision avoidance capabilities. The efficiency of our approach is demonstrated through a series of simulations of a MASS operating in a realistic marine environment including other ships and static obstacles.
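The abstract's central distinction, that temporal planners reason over action durations while classical planners do not, can be illustrated with a minimal sketch. This is not the STP algorithm itself: the action names and durations are invented, and the scheduler simply lays a fixed sequence of durative actions onto a timeline.

```python
from dataclasses import dataclass

@dataclass
class TemporalAction:
    """A high-level action with an explicit duration in seconds."""
    name: str
    duration: float

def schedule(actions):
    """Assign start and end times to a fixed sequence of actions.

    Unlike a real temporal planner, this does not search over
    orderings or allow actions to overlap; it only shows how
    durations yield a timed, executable sequence.
    """
    timeline, t = [], 0.0
    for a in actions:
        timeline.append((a.name, t, t + a.duration))
        t += a.duration
    return timeline

# Invented MASS mission: sail to the port, dock, unload.
mission = [
    TemporalAction("goto_waypoint", 120.0),
    TemporalAction("activate_docking_mode", 30.0),
    TemporalAction("unload_containers", 600.0),
]
plan = schedule(mission)
```

Each triple in `plan` gives an action name with its start and end time, the kind of temporal sequence that would be handed to the ship's control systems for execution.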
... Expert systems utilize high-level proficiencies to give advice and proffer solutions regarding the diagnosis and prognosis of medical conditions, as well as educate healthcare professionals (Lee and Wang, 2011). The field of robotics started as an advancement in the field of mechanical and electronic engineering to perform repetitive tasks in industrial settings (Rajan and Saffiotti, 2017). However, the integration of artificial intelligence into such systems has led to the development of intelligent robots that employ a high degree of accuracy and precision to perform complex sophisticated procedures, including surgeries (Bramhe and Pathak, 2022;Alafaleq, 2023). ...
Article
Full-text available
Background Artificial intelligence technology can be applied in several aspects of healthcare delivery and its integration into the Nigerian healthcare value chain is expected to bring about new opportunities. This study aimed at assessing the knowledge and perception of healthcare professionals in Nigeria regarding the application of artificial intelligence and machine learning in the health sector. Methods A cross-sectional study was undertaken amongst healthcare professionals in Nigeria with the use of a questionnaire. Data were collected across the six geopolitical zones in the Country using a stratified multistage sampling method. Descriptive and inferential statistical analyses were undertaken for the data obtained. Results Female participants (55.7%) were slightly higher in proportion compared to the male respondents (44.3%). Pharmacists accounted for 27.7% of the participants, and this was closely followed by medical doctors (24.5%) and nurses (19.3%). The majority of the respondents (57.2%) reported good knowledge regarding artificial intelligence and machine learning, about a third of the participants (32.2%) were of average knowledge, and 10.6% of the sample had poor knowledge. More than half of the respondents (57.8%) disagreed with the notion that the adoption of artificial intelligence in the Nigerian healthcare sector could result in job losses. Two-thirds of the participants (66.7%) were of the view that the integration of artificial intelligence in healthcare will augment human intelligence. Three-quarters (77%) of the respondents agreed that the use of machine learning in Nigerian healthcare could facilitate efficient service delivery. Conclusion This study provides novel insights regarding healthcare professionals' knowledge and perception with respect to the application of artificial intelligence and machine learning in healthcare. 
The emergent findings from this study can guide government and policymakers in decision-making as regards deployment of artificial intelligence and machine learning for healthcare delivery.
... The fusion of emerging technologies, as detailed by Sousa and Rocha (2019), underpins this paradigm's drive in product development and operational efficiency, creating new points of customer value. Central to this transformation, including a digital one, is the integration of smart technologies to advance autonomous, sensor-based and self-regulating systems (Bendul and Blunck, 2019; Rajan and Saffiotti, 2017). Collectively, these propel businesses towards unprecedented levels of automation and sophistication (Lee et al., 2017; Li et al., 2017). ...
Article
Full-text available
The emergence of the Fourth Industrial Revolution (4IR) paradigm, whilst posing challenges, also presents significant opportunities to bolster research capabilities and pioneer breakthrough innovations that can stimulate economic growth across various sectors. However, the realisation of these benefits relies heavily on the ability of countries and their constituents to innovate effectively in this new landscape. The purpose of this study is to explore how innovation mechanisms can be employed to foster stronger innovation capabilities within a university ecosystem, particularly in the African context. To do so a case study methodology is used, where cross-sectional data gathered over six months is assessed using the Diffusion of Innovation (DOI) as a theoretical lens. The findings reveal that such innovation mechanisms, like a makerspace within a university ecosystem, provide critical support for design phase innovation and collaboration. We illustrate this by employing a conceptual framework that explains the process by which innovations evolve from ideas into valuable outcomes.
... Early versions of chatbots were simple response platforms, whereas today's AI-based chatbots are much more sophisticated, powerful, and capable, enriching human-technology interaction (Rajan & Saffiotti, 2017). ...
Chapter
In recent years, the cultural value proposition has acquired an innovative technological component. The daily overexposure to multimedia platforms and the pervasiveness of social networks requires cultural organizations to develop strategic trajectories that can stimulate interest and involvement of current audiences on a par with the attraction of potential audiences. This chapter explores, from a managerial perspective, emerging experimentations regarding the use of artificial intelligence for the enhancement of the cultural-based experience through chatbot technology. The research's findings highlight that this technology can take on different characteristics depending on the implementation used and the purpose to be achieved. The innovativeness of the approach lies in the components of interactivity and customization of human-like interaction, through which museums attract and involve more effectively current and potential audiences.
Article
Full-text available
Ensuring that everyone has access to a sufficient supply of processed, safe, fresh, nutritious and pocket-friendly food is a significant worldwide issue. By 2050, feeding 9 billion people could pose a major challenge, and food production technologies will need to improve to meet it. Long-term enhancement of food processing depends on reducing losses at every stage of the supply chain and on managing the chain from production to consumption, including preservation, nutrient content, safety and shelf life. Massive amounts of streaming data, known as big data, arise under the Internet of Things (IoT). New technology-based platforms support food production and processing by managing this big data and enabling its further analysis. This review covers the scope of food production, food processing and related technologies. Emerging technologies are safeguarding the food supply and making the food economy more sustainable. Furthermore, this review provides an overview of artificial intelligence, big data and sensors used in the food production sector.
Article
Full-text available
While recent cognitive science research shows a renewed interest in understanding intelligence, there is still little consensus on what constitutes intelligent behaviour and how it should be assessed. Here we propose a refined approach to biological intelligence as accurate prediction, according to which intelligent behaviour should be understood as adaptive control driven by the minimisation of uncertainty in dynamic environments with limited information. Central to this view is the concept of accuracy, which we argue is key to determining the success of predictions. We identify tensions in applying this framework to contemporary artificial systems such as large-language models, which, despite their impressive capacities for abstract prediction, show deficits in terms of context-sensitive knowledge transfer.
Chapter
The rise of educational robotics has transformed learning, especially in STEM, by emphasizing hands-on, experiential methods that enhance critical thinking and problem-solving skills. The integration of AI technologies enables educational robots to adapt to learning styles and provide personalized feedback, creating a dynamic shift in learning that not only improves teaching outcomes but also advances sustainable development goals by encouraging students to tackle real-world challenges using robotics and AI. In a study focused on Indonesian higher education, qualitative semi-structured interviews revealed that while AI-enhanced robotics improves engagement and learning outcomes, significant barriers such as limited infrastructure, financial constraints, and a lack of faculty training hinder adoption. Concerns regarding data privacy and algorithmic bias were also noted. Despite these challenges, the study highlights the potential of AI-integrated robotics to cultivate sustainability awareness and environmental stewardship, providing valuable insights for policymakers and educators on effective curricula.
Article
Full-text available
Recent advancements in Explainable Artificial Intelligence (XAI) aim to bridge the gap between complex artificial intelligence (AI) models and human understanding, fostering trust and usability in AI systems. However, challenges persist in comprehensively interpreting these models, hindering their widespread adoption. This study addresses these challenges by exploring recently emerging techniques in XAI. The primary problem addressed is the lack of transparency and interpretability in AI models intended for institution-wide use, which undermines user trust and inhibits their integration into critical decision-making processes. Through an in-depth review, this study identifies the objectives of enhancing the interpretability of AI models and improving human understanding of their decision-making processes. Various methodological approaches, including post-hoc explanations, model transparency methods, and interactive visualization techniques, are investigated to elucidate AI model behaviours. We further present techniques and methods for making AI models more interpretable and understandable to humans, including their strengths and weaknesses, to demonstrate promising advancements in model interpretability that facilitate better comprehension of complex AI systems. In addition, we present applications of XAI in local use cases. Challenges, solutions, and open research directions are highlighted to clarify the obstacles to XAI adoption. The implications of this research are profound, as enhanced interpretability fosters trust in AI systems across diverse applications, from healthcare to finance. By empowering users to understand and scrutinize AI decisions, these techniques pave the way for more responsible and accountable AI deployment.
Chapter
Embodied agents, including robots and drones, rely on a myriad of artificial intelligence (AI) algorithms and models to interact with and navigate their environments. This abstract provides an overview of the essential AI components used in building such agents. These include computer vision techniques like convolutional neural networks and object tracking, sensor fusion to integrate data from diverse sensors, reinforcement learning for task learning, simultaneous localization and mapping (SLAM) for environment understanding, path planning and control algorithms, natural language processing for human interaction, 3D perception, transfer learning, imitation learning, self-supervised learning, vision-based reinforcement learning, and more. The selection of these techniques depends on the specific application and hardware capabilities of the embodied agent. By harnessing these AI tools, embodied agents can perform a wide range of tasks, from autonomous navigation to complex object manipulation and human interaction, making them valuable assets in various fields such as robotics, autonomous vehicles, and industrial automation. This chapter explores the pivotal role of artificial intelligence (AI) algorithms and models in the development and empowerment of embodied agents. Embodied agents represent a transformative paradigm in AI, enabling autonomous entities to interact with and adapt to real or simulated environments. The fusion of AI and embodiment offers boundless possibilities across various domains, including robotics, autonomous vehicles, virtual assistants, and digital avatars. Also, the chapter examines the essential AI techniques that underpin the functionality of embodied agents.
Article
Machine learning integrates with the chemical looping hydrogen production system to accelerate the development process and reduce experimental trial-and-error costs.
Article
Full-text available
The Fourth Industrial Revolution (Industry 4.0), and particularly the growth of artificial intelligence (AI), has had a profound impact on society, creating excitement among the public while also raising concerns about its implications and ethical considerations. This study aims to examine the perceptions of individuals from a variety of backgrounds toward AI and Industry 4.0 through a mixed-methods survey and interview approach. The results reveal key themes in participant-supplied definitions of AI and Industry 4.0, emphasizing the replication of human intelligence, machine learning, automation, and the integration of digital technologies. Participants expressed concerns about job replacement, privacy invasion, and inaccurate information provided by AI. However, they also recognized the benefits of AI, such as solving complex problems and increasing convenience. Views on government involvement in shaping Industry 4.0 varied, with some advocating for strict regulations and others favoring support and development. The anticipated changes brought by Industry 4.0 include automation, potential job impacts, increased social disconnect, and reliance on technology. Understanding these perceptions is crucial for effectively managing the challenges and opportunities associated with AI in the evolving digital landscape.
Article
Computer vision tasks are crucial for aerospace missions as they help spacecraft to understand and interpret the space environment, such as estimating position and orientation, reconstructing 3D models, and recognizing objects, which have been extensively studied to successfully carry out the missions. However, traditional methods like Kalman filtering, structure from motion, and multi-view stereo are not robust enough to handle harsh conditions, leading to unreliable results. In recent years, deep learning (DL)-based perception technologies have shown great potential and outperformed traditional methods, especially in terms of their robustness to changing environments. To further advance DL-based aerospace perception, various frameworks, datasets, and strategies have been proposed, indicating significant potential for future applications. In this survey, we aim to explore the promising techniques used in perception tasks and emphasize the importance of DL-based aerospace perception. We begin by providing an overview of aerospace perception, including classical space programs developed in recent years, commonly used sensors, and traditional perception methods. Subsequently, we delve into three fundamental perception tasks in aerospace missions: pose estimation, 3D reconstruction, and recognition, as they are basic and crucial for subsequent decision-making and control. Finally, we discuss the limitations and possibilities in current research and provide an outlook on future developments, including the challenges of working with limited datasets, the need for improved algorithms, and the potential benefits of multi-source information fusion.
Chapter
In recent years, the tourist and hospitality business has undergone substantial technological disruption due to the rapid progress and use of artificial intelligence (AI) based solutions. This research study seeks to investigate the present status of AI implementation, namely chatbots, in the tourism and hospitality industry. It will analyze the primary factors that motivate the acceptance of these revolutionary technologies, as well as the obstacles encountered, and the services offered in accepting them. The article examines the use of AI in improving the customer experience, streamlining operations, and fostering innovation in different parts of the tourist and hospitality business. This is done by doing a thorough study of relevant literature and analyzing case studies from the industry. The results of this study can provide valuable insights for decision-making, influence future research priorities, and ultimately assist the industry in effectively utilizing AI to remain competitive and adaptable to the changing demands of customers in the digital era.
Article
Full-text available
Large language models (LLMs) are artificial intelligence (AI) platforms capable of analyzing and mimicking natural language processing. Leveraging deep learning, LLM capabilities have been advanced significantly, giving rise to generative chatbots such as Generative Pre‐trained Transformer (GPT). GPT‐1 was initially released by OpenAI in 2018. ChatGPT's release in 2022 marked a global record of speed in technology uptake, attracting more than 100 million users in two months. Consequently, the utility of LLMs in fields including engineering, healthcare, and education has been explored. The potential of LLM‐based chatbots in higher education has sparked significant interest and ignited debates. LLMs can offer personalized learning experiences and advance asynchronized learning, potentially revolutionizing higher education, but can also undermine academic integrity. Although concerns regarding AI‐generated output accuracy, the spread of misinformation, propagation of biases, and other legal and ethical issues have not been fully addressed yet, several strategies have been implemented to mitigate these limitations. Here, the development of LLMs, properties of LLM‐based chatbots, and potential applications of LLM‐based chatbots in higher education are discussed. Current challenges and concerns associated with AI‐based learning platforms are outlined. The potentials of LLM‐based chatbot use in the context of learning experiences in higher education settings are explored.
Article
Purpose The purpose of this study is to examine the acceptance of artificial intelligence devices (AIDs) by customers in banking service encounters using the Artificially Intelligent Device Use Acceptance (AIDUA) model and thus test the validity of the AIDUA model in the context of the banking sector as well as extending the AIDUA model by incorporating two moderator variables, namely technology anxiety and risk aversion by regarding the nature of banking services, which are considered highly risky and technology-intensive. Design/methodology/approach About 575 valid face-to-face self-administered surveys were gathered using convenience sampling among real bank customers in Turkey. The structural equation modelling was used to test hypotheses involving both direct and moderation effects. Findings The current study has demonstrated that the AIDUA model is valid and reliable for the acceptance of AIDs in banking service encounters by modifying it. The study results have shown that the acceptance process of AIDs for bank customers consists of three phases. Furthermore, the study’s findings have demonstrated that technology anxiety and risk aversion have adverse moderation effects on the relationship between performance expectancy and emotion as well as on the relationship between emotion and willingness to accept AIDs, respectively. Originality/value The current study validates the AIDUA model for the banking industry. In addition, the present study is unique compared to other studies conducted in the literature since it applies the AIDUA model to the setting of banking services for the first time by considering the potential effects of two moderators.
Article
Full-text available
Despite their diminutive neural systems, insects exhibit sophisticated adaptive behaviors in diverse environments. An insect receives various environmental stimuli through its sensory organs and selectively and rapidly integrates them to produce an adaptive motor output. Living organisms commonly have this sensory-motor integration, and attempts have been made for many years to elucidate this mechanism biologically and reconstruct it through engineering. In this review, we provide an overview of the biological analyses of the adaptive capacity of insects and introduce a framework of engineering tools to intervene in insect sensory and behavioral processes. The manifestation of adaptive insect behavior is intricately linked to dynamic environmental interactions, underscoring the significance of experiments maintaining this relationship. An experimental setup incorporating engineering techniques can manipulate the sensory stimuli and motor output of insects while maintaining this relationship. It can contribute to obtaining data that could not be obtained in experiments conducted under controlled environments. Moreover, it may be possible to analyze an insect’s adaptive capacity limits by varying the degree of sensory and motor intervention. Currently, experimental setups based on the framework of engineering tools only measure behavior; therefore, it is not possible to investigate how sensory stimuli are processed in the central nervous system. The anticipated future developments, including the integration of calcium imaging and electrophysiology, hold promise for a more profound understanding of the adaptive prowess of insects.
Article
Full-text available
Explainable AI (XAI) is widely viewed as a sine qua non for ever-expanding AI research. A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge. In this paper, we explore how human-computer interaction (HCI) and AI researchers conduct user studies in XAI applications based on a systematic literature review. After identifying and thoroughly analyzing 97 core papers with human-based XAI evaluations over the past five years, we categorize them along the measured characteristics of explanatory methods, namely trust, understanding, usability, and human-AI collaboration performance. Our research shows that XAI is spreading more rapidly in certain application domains, such as recommender systems, than in others, but that user evaluations are still rather sparse and incorporate hardly any insights from cognitive or social sciences. Based on a comprehensive discussion of best practices, i.e., common models, design choices, and measures in user studies, we propose practical guidelines on designing and conducting user studies for XAI researchers and practitioners. Lastly, this survey also highlights several open research directions, particularly linking psychological science and human-centered XAI.
Article
Full-text available
Human-Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We identify first the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human-robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human-robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves to be instrumental to richer and more natural human-robot interactions by pushing for pervasive, human-level semantics within the robot’s deliberative system.
Article
Full-text available
In robotics, lower-level controllers are typically used to make the robot solve a specific task in a fixed context. For example, the lower-level controller can encode a hitting movement while the context defines the target coordinates to hit. However, in many learning problems the context may change between task executions. To adapt the policy to a new context, we utilize a hierarchical approach by learning an upper-level policy that generalizes the lower-level controllers to new contexts. A common approach to learn such upper-level policies is to use policy search. However, the majority of current contextual policy search approaches are model-free and require a high number of interactions with the robot and its environment. Model-based approaches are known to significantly reduce the amount of robot experiments, however, current model-based techniques cannot be applied straightforwardly to the problem of learning contextual upper-level policies. They rely on specific parametrizations of the policy and the reward function, which are often unrealistic in the contextual policy search formulation. In this paper, we propose a novel model-based contextual policy search algorithm that is able to generalize lower-level controllers, and is data-efficient. Our approach is based on learned probabilistic forward models and information theoretic policy search. Unlike current algorithms, our method does not require any assumption on the parametrization of the policy or the reward function. We show on complex simulated robotic tasks and in a real robot experiment that the proposed learning framework speeds up the learning process by up to two orders of magnitude in comparison to existing methods, while learning high quality policies.
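As a sketch of the core idea, an upper-level policy that maps a task context (for example, target coordinates) to lower-level controller parameters, the snippet below fits a reward-weighted linear map. This is a crude stand-in, not the authors' model-based, information-theoretic algorithm, and the one-dimensional data are invented.

```python
def fit_upper_level_policy(samples):
    """Fit param = a * context + b by reward-weighted least squares.

    samples: list of (context, controller_param, reward) triples.
    Higher-reward executions pull the fit harder, a crude analogue
    of reward-weighted policy search updates.
    """
    w = [max(r, 0.0) + 1e-6 for _, _, r in samples]
    sw = sum(w)
    mx = sum(wi * c for wi, (c, _, _) in zip(w, samples)) / sw
    my = sum(wi * p for wi, (_, p, _) in zip(w, samples)) / sw
    cov = sum(wi * (c - mx) * (p - my) for wi, (c, p, _) in zip(w, samples))
    var = sum(wi * (c - mx) ** 2 for wi, (c, _, _) in zip(w, samples))
    a = cov / var
    return a, my - a * mx

def upper_level_policy(coeffs, context):
    """Generalize the lower-level controller to a new, unseen context."""
    a, b = coeffs
    return a * context + b

# Invented executions where the ideal parameter is 2 * context + 1.
samples = [(0.0, 1.0, 0.2), (1.0, 3.0, 0.8), (2.0, 5.0, 0.5), (3.0, 7.0, 0.9)]
coeffs = fit_upper_level_policy(samples)
new_param = upper_level_policy(coeffs, 5.0)
```

The point of the hierarchy is visible even in this toy version: the lower-level controller parameter for a context never seen during training (here, 5.0) is obtained by generalization rather than by new robot experiments.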
Article
Full-text available
A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
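A minimal sketch of the second idea: keeping epistemic effects ("I believe X because I saw it") separate from assumptions ("I'll assume X to be true") so that unverified assumptions can later explain task failure. The class and the fact names are invented for illustration; the paper's actual architecture layers instance, commonsense and diagnostic knowledge on top of this distinction.

```python
class BeliefStore:
    """Separates facts the robot observed from facts it assumed."""

    def __init__(self):
        self.observed = set()   # epistemic effects: seen, hence believed
        self.assumed = set()    # working assumptions, unverified

    def observe(self, fact):
        self.observed.add(fact)
        self.assumed.discard(fact)   # observation supersedes assumption

    def assume(self, fact):
        if fact not in self.observed:
            self.assumed.add(fact)

    def holds(self, fact):
        return fact in self.observed or fact in self.assumed

    def explain_failure(self):
        """On task failure, unverified assumptions are the prime
        suspects: a stand-in for the diagnostic knowledge layer."""
        return sorted(self.assumed)

# Invented episode: the robot assumes the cup is in the drawer,
# then observes its own location while searching.
belief = BeliefStore()
belief.assume("cup_in_drawer")
belief.observe("robot_at_kitchen")
suspects = belief.explain_failure()
```

If the search then fails, `suspects` points at exactly the beliefs that were never verified by perception, which is what makes the failure explainable rather than mysterious.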
Article
Full-text available
In this study, we present a framework that infers human activities from observations using semantic representations. The proposed framework can be utilized to address the difficult and challenging problem of transferring tasks and skills to humanoid robots. We propose a method that allows robots to obtain and determine a higher-level understanding of a demonstrator’s behavior via semantic representations. This abstraction from observations captures the “essence” of the activity, thereby indicating which aspect of the demonstrator’s actions should be executed in order to accomplish the required activity. Thus, a meaningful semantic description is obtained in terms of human motions and object properties. In addition, we validated the semantic rules obtained in different conditions, i.e., three different and complex kitchen activities: 1) making a pancake; 2) making a sandwich; and 3) setting the table. We present quantitative and qualitative results, which demonstrate that without any further training, our system can deal with time restrictions, different execution styles of the same task by several participants, and different labeling strategies. This means, the rules obtained from one scenario are still valid even for new situations, which demonstrates that the inferred representations do not depend on the task performed. The results show that our system correctly recognized human behaviors in real-time in around 87.44% of cases, which was even better than a random participant recognizing the behaviors of another human (about 76.68%). In particular, the semantic rules acquired can be used to effectively improve the dynamic growth of the ontology-based knowledge representation. Hence, this method can be used flexibly across different demonstrations and constraints to infer and achieve a similar goal to that observed. 
Furthermore, the inference capability introduced in this study was integrated into a joint space control loop for a humanoid robot, an iCub, for achieving similar goals to the human demonstrator online.
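The rule-matching step can be caricatured in a few lines: an observed motion plus the properties of the object acted on are matched against semantic rules to name the activity. The rules and property names below are invented, not the ones learned by the authors' system.

```python
# Invented semantic rules:
# (observed motion, required object property) -> inferred activity
RULES = {
    ("reach", "graspable"): "take",
    ("move", "pourable"): "pour",
    ("move", "spreadable"): "spread",
}

def recognize_activity(motion, object_properties, rules=RULES):
    """Infer a high-level activity from one observation."""
    for (rule_motion, needed_prop), activity in rules.items():
        if motion == rule_motion and needed_prop in object_properties:
            return activity
    return "unknown"

# A demonstrator moves a container that can pour (e.g. pancake mix).
activity = recognize_activity("move", {"pourable", "container"})
```

Because the rules mention motions and object properties rather than specific trajectories, the same rule fires across different demonstrators and execution styles, which is the generalization the abstract reports.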
Article
Full-text available
Model-based reinforcement learning is a powerful paradigm for learning tasks in robotics. However, in-depth exploration is usually required and the actions have to be known in advance. Thus, we propose a novel algorithm that integrates the option of requesting teacher demonstrations to learn new domains with fewer action executions and no previous knowledge. Demonstrations allow new actions to be learned and they greatly reduce the amount of exploration required, but they are only requested when they are expected to yield a significant improvement because the teacher's time is considered to be more valuable than the robot's time. Moreover, selecting the appropriate action to demonstrate is not an easy task, and thus some guidance is provided to the teacher. The rule-based model is analyzed to determine the parts of the state that may be incomplete, and to provide the teacher with a set of possible problems for which a demonstration is needed. Rule analysis is also used to find better alternative models and to complete subgoals before requesting help, thereby minimizing the number of requested demonstrations. These improvements were demonstrated in a set of experiments, which included domains from the international planning competition and a robotic task. Adding teacher demonstrations and rule analysis reduced the amount of exploration required in some domains and improved the success ratio in others.
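The paper's guiding trade-off, that teacher time is more valuable than robot time, can be sketched as a simple request policy: only ask for a demonstration once the robot's own attempts look unpromising. The thresholds here are invented; the real system instead analyses its rule-based model to locate incomplete parts of the state.

```python
def should_request_demo(success_history, min_trials=5, success_threshold=0.4):
    """Decide whether to ask the teacher for a demonstration.

    success_history: list of 0/1 outcomes of the robot's own attempts.
    The robot keeps exploring on its own until there is enough
    evidence that a demonstration would yield a significant improvement.
    """
    if len(success_history) < min_trials:
        return False                     # robot time is cheap: keep trying
    rate = sum(success_history) / len(success_history)
    return rate < success_threshold      # teacher time is costly: ask only if stuck
```

For example, `should_request_demo([0, 0, 1, 0, 0])` asks for help, while a mostly successful history does not trigger a request.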
Article
Full-text available
Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections. Thereby it allows a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based first-order symbolic, qualitative representations, called timelines. The result of the envisioning is a set of detailed narratives represented by timelines which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks.
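The last step of the pipeline above — turning a logged state evolution into interval-based qualitative timelines — can be sketched as follows. The pancake-height predicate and the numbers are assumptions for illustration:

```python
# Collapse a per-tick boolean predicate over a simulation log into
# (start, end, value) intervals, i.e. a minimal "timeline".

def to_timeline(log, predicate):
    """Return maximal intervals over which `predicate` is constant."""
    values = [predicate(s) for s in log]
    intervals, start = [], 0
    for t in range(1, len(values) + 1):
        if t == len(values) or values[t] != values[start]:
            intervals.append((start, t, values[start]))
            start = t
    return intervals

# illustrative log: pancake height over time; it rises on the griddle,
# then drops to zero when it slides off
heights = [0.0, 0.0, 0.5, 0.6, 0.6, 0.0]
print(to_timeline(heights, lambda h: h > 0.1))
# [(0, 2, False), (2, 5, True), (5, 6, False)]
```

A qualitative query such as "was the pancake ever on the griddle, and for how long?" then becomes a lookup over these intervals instead of a scan of raw simulation states.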
Article
Full-text available
This paper investigates manipulation of multiple unknown objects in a crowded environment. Because of incomplete knowledge due to unknown objects and occlusions in visual observations, object observations are imperfect and action success is uncertain, making planning challenging. We model the problem as a partially observable Markov decision process (POMDP), which allows a general reward based optimization objective and takes uncertainty in temporal evolution and partial observations into account. In addition to occlusion dependent observation and action success probabilities, our POMDP model also automatically adapts object specific action success probabilities. To cope with the changing system dynamics and performance constraints, we present a new online POMDP method based on particle filtering that produces compact policies. The approach is validated both in simulation and in physical experiments in a scenario of moving dirty dishes into a dishwasher. The results indicate that: 1) a greedy heuristic manipulation approach is not sufficient, multi-object manipulation requires multi-step POMDP planning, and 2) on-line planning is beneficial since it allows the adaptation of the system dynamics model based on actual experience.
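At the core of such online POMDP methods is a particle representation of the belief over world states. The toy update below reweights and resamples particles given an occlusion-prone observation; the state names and observation model are illustrative assumptions, not the paper's model:

```python
import random

# Toy particle-filter belief update: each particle is a hypothesised world
# state, reweighted by how well it explains the observation, then resampled.

random.seed(0)  # deterministic for the example

def update_belief(particles, observation, obs_likelihood):
    weights = [obs_likelihood(p, observation) for p in particles]
    if sum(weights) == 0:          # observation rules nothing in: keep belief
        return list(particles)
    return random.choices(particles, weights=weights, k=len(particles))

# two hypotheses about a possibly occluded plate; we then observe "visible"
particles = ["plate_on_table"] * 50 + ["plate_occluded"] * 50
lik = lambda p, o: 0.9 if (p == "plate_on_table") == (o == "visible") else 0.1
new_belief = update_belief(particles, "visible", lik)
print(new_belief.count("plate_on_table") > new_belief.count("plate_occluded"))
# True: belief mass shifts toward the unoccluded hypothesis
```

A planner then evaluates candidate manipulation actions against this particle set rather than against a single assumed state.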
Conference Paper
Full-text available
This paper describes the software architecture of an autonomous tour-guide/tutor robot. This robot was recently deployed in the "Deutsches Museum Bonn," where it guided hundreds of visitors through the museum during a six-day deployment period. The robot's control software integrates low-level probabilistic reasoning with high-level problem solving embedded in first order logic. A collection of software innovations, described in this paper, enabled the robot to navigate at high speeds through dense crowds, while reliably avoiding collisions with obstacles--some of which could not even be perceived. Also described in this paper is a user interface tailored towards non-expert users, which was essential for the robot's success in the museum. Based on these experiences, this paper argues that the time is ripe for the development of AI-based commercial service robots that assist people in everyday life.
Conference Paper
Full-text available
On May 17th 1999, the Remote Agent (RA) became the first Artificial Intelligence based closed loop autonomous control system to take control of a spacecraft. The RA commanded NASA's New Millennium Deep Space One spacecraft when it was 65 million miles away from Earth. For a period of one week this system commanded DS1's Ion Propulsion System, its camera, its attitude control and navigation systems. A primary goal of this experiment was to provide an on-board demonstration of spacecraft autonomy. This demonstration included both nominal operations with goal-oriented commanding and closed-loop plan execution, and fault protection capabilities with failure diagnosis and recovery, on-board replanning following unrecoverable failures, and system-level fault protection. This paper describes the Remote Agent Experiment and the model based approaches to Planning and Scheduling, Plan Execution and Fault Diagnosis and Recovery technologies developed at NASA Ames Research Center and the Jet Propulsion Laboratory. On-board autonomy also saves valuable time in the use of NASA's Deep Space Network (DSN) of antennas, which have to be in constant contact with spacecraft at distances where the round-trip delay time can be on the order of hours. The DSN is an oversubscribed resource for commanding spacecraft, and on-board commanding is a justifiable way to reduce mission operation costs. The cost/benefits, therefore, of having an autonomous system control substantial portions of the routine commanding of such missions can then be translated into more missions that could be flown with the same pool of mission operators. The Remote Agent (RA) approach to spacecraft commanding and control puts more "smarts" on the spacecraft.
In the RA approach, the operational rules and constraints are encoded in the flight software and the software may be considered to be an autonomous "remote agent" of the spacecraft operators in the sense that the operators rely on the agent to achieve particular goals. The operators do not know the exact conditions on the spacecraft, so they do not tell the agent exactly what to do at each instant of time. They do, however, tell the agent exactly which goals to achieve in a specified period of time. Three separate Artificial Intelligence technologies are integrated to form the RA: an on-board planner-scheduler, a robust multi-threaded executive, and a model-based fault diagnosis and recovery system [12, 9]. This architectural approach was flown on NASA's New Millennium Program Deep Space One (DS1) spacecraft as an experiment.
Conference Paper
Full-text available
This paper describes an interactive tour-guide robot, which was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with thousands of people, traversing more than 44 km at speeds of up to 163 cm/sec. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and short-term human-robot interaction. It uses learning pervasively at all levels of the software architecture
Article
Full-text available
RoboCup Challenge offers a set of challenges for intelligent agent researchers using a friendly competition in a dynamic, real-time, multi-agent domain: synthetic Soccer. While RoboCup in general envisions longer range challenges over the next few decades, RoboCup Challenge presents three specific challenges for the next two years: (i) learning of individual agents and teams; (ii) multi-agent team planning and plan-execution in service of teamwork; and (iii) opponent modeling. RoboCup Challenge provides a novel opportunity for researchers in planning and multi-agent arenas --- it not only supplies them with a concrete domain to evaluate their techniques, but also challenges them to evolve these techniques to face key constraints fundamental to this domain: real-time and teamwork.
Article
Reports on the significance of sponsoring robot competitions and discusses what we learn from their outcomes.
Article
Reinforcement learning is a plausible theoretical basis for developing self-learning, autonomous agents or robots that can effectively represent the world dynamics and efficiently learn the problem features to perform different tasks in different environments. The computational costs and complexities involved, however, are often prohibitive for real-world applications. This study introduces a scalable methodology to learn and transfer knowledge of the transition (and reward) models for model-based reinforcement learning in a complex world. We propose a variant formulation of Markov decision processes that supports efficient online-learning of the relevant problem features to approximate the world dynamics. We apply the new feature selection and dynamics approximation techniques in heterogeneous transfer learning, where the agent automatically maintains and adapts multiple representations of the world to cope with the different environments it encounters during its lifetime. We prove regret bounds for our approach, and empirically demonstrate its capability to quickly converge to a near optimal policy in both real and simulated environments.
Article
In order to robustly perform tasks based on abstract instructions, robots need sophisticated knowledge processing methods. These methods have to supply the difference between the (often shallow and symbolic) information in the instructions and the (detailed, grounded and often real-valued) information needed for execution. For filling these information gaps, a robot first has to identify them in the instructions, reason about suitable information sources, and combine pieces of information from different sources and of different structure into a coherent knowledge base. To this end we propose the KnowRob knowledge processing system for robots. In this article, we discuss why the requirements of a robot knowledge processing system differ from what is commonly investigated in AI research, and propose to re-consider a KR system as a semantically annotated view on information and algorithms that are often already available as part of the robot's control system. We then introduce representational structures and a common vocabulary for representing knowledge about robot actions, events, objects, environments, and the robot's hardware as well as inference procedures that operate on this common representation. The KnowRob system has been released as open-source software and is being used on several robots performing complex object manipulation tasks. We evaluate it through prototypical queries that demonstrate the expressive power and its impact on the robot's performance.
Article
Planners for real robotic systems should not only reason about abstract actions, but also about aspects related to physical execution such as kinematics and geometry. We present an approach to hybrid task and motion planning, in which state-based forward-chaining task planning is tightly coupled with motion planning and other forms of geometric reasoning. Our approach is centered around the problem of geometric backtracking that arises in hybrid task and motion planning: in order to satisfy the geometric preconditions of the current action, a planner may need to reconsider geometric choices, such as grasps and poses, that were made for previous actions. Geometric backtracking is a necessary condition for completeness, but it may lead to a dramatic computational explosion due to the large size of the space of geometric states. We explore two avenues to deal with this issue: the use of heuristics based on different geometric conditions to guide the search, and the use of geometric constraints to prune the search space. We empirically evaluate these different approaches, and demonstrate that they improve the performance of hybrid task and motion planning. We demonstrate our hybrid planning approach in two domains: a real, humanoid robotic platform, the DLR Justin robot, performing object manipulation tasks; and a simulated autonomous forklift operating in a warehouse.
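The geometric backtracking problem described above can be sketched as a depth-first search over geometric choices (here, grasps) interleaved with symbolic actions. The feasibility test is a stand-in assumption for real motion planning and collision checking:

```python
# Illustrative hybrid task-and-motion search: assign one grasp per action;
# when no grasp for the current action is geometrically feasible, the search
# backtracks and reconsiders grasps chosen for earlier actions.

def plan(actions, grasps, feasible, chosen=()):
    """Depth-first search over grasp assignments with backtracking."""
    if len(chosen) == len(actions):
        return list(chosen)
    for g in grasps:
        if feasible(actions[len(chosen)], g, chosen):
            result = plan(actions, grasps, feasible, chosen + (g,))
            if result is not None:
                return result
    return None  # dead end: triggers backtracking in the caller

# assumed constraint: a "top" grasp chosen earlier blocks the later place
feasible = lambda action, grasp, prev: not (action == "place" and "top" in prev)
print(plan(["pick", "place"], ["top", "side"], feasible))  # ['side', 'top']
```

The search first tries the "top" grasp for the pick, fails at the place action, and backtracks to the "side" grasp — the same reconsideration of earlier geometric choices that the abstract identifies as the source of computational explosion, and that its heuristics and constraints aim to tame.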
Article
The inclusion of robots, such as service robots, in our society is imminent. Robots are now capable of reliably manipulating objects in our daily lives but only when combined with artificial intelligence (AI) techniques for planning and decision-making, which allow a machine to determine how a task can be completed successfully. To perform decision making, AI planning methods use a set of planning operators to code the state changes in the environment produced by a robotic action. Given a specific goal, the planner then searches for the best sequence of planning operators, i.e., the best plan that leads through the state space to satisfy the goal. In principle, planning operators can be hand-coded, but this is impractical for applications that involve many possible state transitions. An alternative is to learn them automatically from experience, which is most efficient when there is a human teacher. In this study, we propose a simple and efficient decision-making framework for this purpose. The robot executes its plan in a step-wise manner and any planning impasse produced by missing operators is resolved online by asking a human teacher for the next action to execute. Based on the observed state transitions, this approach rapidly generates the missing operators by evaluating the relevance of several cause–effect alternatives in parallel using a probability estimate, which compensates for the high uncertainty that is inherent when learning from a small number of samples. We evaluated the validity of our approach in simulated and real environments, where it was benchmarked against previous methods. Humans learn in the same incremental manner, so we consider that our approach may be a better alternative to existing learning paradigms, which require offline learning, a significant amount of previous knowledge, or a large number of samples.
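The core move — inducing a missing planning operator from an observed state transition — can be sketched in STRIPS-like terms. The predicate names are illustrative, and a single transition is used where the paper weighs several cause–effect alternatives probabilistically:

```python
# Induce a planning operator from one observed before/after state pair:
# facts holding beforehand are candidate preconditions; the symmetric
# difference of the two states gives the add and delete effects.

def induce_operator(name, state_before, state_after):
    return {
        "name": name,
        "preconditions": set(state_before),
        "add": set(state_after) - set(state_before),
        "delete": set(state_before) - set(state_after),
    }

before = {"handempty", "on_table(cup)"}
after = {"holding(cup)"}
op = induce_operator("pickup(cup)", before, after)
print(sorted(op["add"]))     # ['holding(cup)']
print(sorted(op["delete"]))  # ['handempty', 'on_table(cup)']
```

With only one sample the preconditions are overly specific; that is exactly the uncertainty the paper's parallel probability estimates over alternatives are meant to compensate for.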
Article
Planners developed in the Artificial Intelligence community assume that tasks in the task plans they generate will be executed predictably and reliably. This assumption provides a useful abstraction in that it lets the task planners focus on what tasks should be done, while lower-level motion planners and controllers take care of the details of how the task should be performed. While this assumption is useful in many domains, it becomes problematic when controlling physically embedded systems, where there are often delays, disturbances, and failures. The task plans do not provide enough information about allowed flexibility in task duration and hybrid state evolution. Such flexibility could be useful when deciding how to react to disturbances. An important domain where this gap has caused problems is robotics, particularly, the operation of robots in unstructured, uncertain environments. Due to the complexity of this domain, the demands of tasks to be performed, and the actuation limits of robots, knowledge about permitted flexibility in execution of a task is crucial. We address this gap through two key innovations. First, we specify a Qualitative State Plan (QSP), which supports representation of spatial and temporal flexibility with respect to tasks. Second, we extend compilation approaches developed for temporally flexible execution of discrete activity plans to work with hybrid discrete/continuous systems using a recently developed Linear Quadratic Regulator synthesis algorithm, which performs a state reachability analysis to prune infeasible trajectories, and which determines optimal control policies for feasible state regions. The resulting Model-based Executive is able to take advantage of spatial and temporal flexibility in a QSP to improve handling of disturbances. Note that in this work, we focus on execution of QSPs, and defer the problem of how they are generated. We believe the latter could be accomplished through extensions to existing task planners.
Article
In the absence of external guidance, how can a robot learn to map the many raw pixels of high-dimensional visual inputs to useful action sequences? We propose here Continual Curiosity driven Skill Acquisition (CCSA). CCSA makes robots intrinsically motivated to acquire, store and reuse skills. Previous curiosity-based agents acquired skills by associating intrinsic rewards with world model improvements, and used reinforcement learning to learn how to get these intrinsic rewards. CCSA also does this, but unlike previous implementations, the world model is a set of compact low-dimensional representations of the streams of high-dimensional visual information, which are learned through incremental slow feature analysis. These representations augment the robot's state space with new information about the environment. We show how this information can have a higher-level (compared to pixels) and useful interpretation, for example, if the robot has grasped a cup in its field of view or not. After learning a representation, large intrinsic rewards are given to the robot for performing actions that greatly change the feature output, which has the tendency otherwise to change slowly in time. We show empirically what these actions are (e.g., grasping the cup) and how they can be useful as skills. An acquired skill includes both the learned actions and the learned slow feature representation. Skills are stored and reused to generate new observations, enabling continual acquisition of complex skills. We present results of experiments with an iCub humanoid robot that uses CCSA to incrementally acquire skills to topple, grasp and pick-place a cup, driven by its intrinsic motivation from raw pixel vision.
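The intrinsic-reward signal described above — rewarding actions that sharply change an otherwise slowly varying feature — reduces to a very small computation. The feature values below are assumptions for illustration, standing in for the output of incremental slow feature analysis:

```python
# Curiosity reward: the magnitude of change in a (normally slow) feature.
# A grasp event that jumps the feature earns much more than slow drift.

def intrinsic_reward(prev_feature, new_feature, scale=1.0):
    return scale * abs(new_feature - prev_feature)

drift = intrinsic_reward(0.50, 0.52)   # small change while idling
grasp = intrinsic_reward(0.52, 0.90)   # assumed jump when the cup is grasped
print(grasp > drift)  # True: the grasp-like event is the rewarding one
```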
Article
This paper presents an approach to creating a semantic map of an indoor environment incrementally and in closed loop, based on a series of 3D point clouds captured by a mobile robot using an RGB-D camera. Based on a semantic model about furniture objects (represented in an OWL-DL ontology with rules attached), we generate hypotheses for locations and 6DoF poses of object instances and verify them by matching a geometric model of the object (given as a CAD model) into the point cloud. The result, in addition to the registered point cloud, is a consistent mesh representation of the environment, further enriched by object models corresponding to the detected pieces of furniture. We demonstrate the robustness of our approach against occlusion and aperture limitations of the RGB-D frames, and against differences between the CAD models and the real objects. We evaluate the complete system on two challenging datasets featuring partial visibility and totaling over 800 frames. The results show complementary strengths and weaknesses of processing each frame directly vs. processing the fully registered scene, which accord with intuitive expectations.
Article
Cargo-bearing unmanned aerial vehicles (UAVs) have tremendous potential to assist humans by delivering food, medicine, and other supplies. For time-critical cargo delivery tasks, UAVs need to be able to quickly navigate their environments and deliver suspended payloads with bounded load displacement. As a constraint balancing task for joint UAV-suspended load system dynamics, this task poses a challenge. This article presents a reinforcement learning approach for aerial cargo delivery tasks in environments with static obstacles. We first learn a minimal residual oscillations task policy in obstacle-free environments using a specifically designed feature vector for value function approximation that allows generalization beyond the training domain. The method works in continuous state and discrete action spaces. Since planning for aerial cargo requires a very large action space (over 10^6 actions) that is impractical for learning, we define formal conditions for a class of robotics problems where learning can occur in a simplified problem space and successfully transfer to a broader problem space. Exploiting these guarantees and relying on the discrete action space, we learn the swing-free policy in a subspace several orders of magnitude smaller, and later develop a method for swing-free trajectory planning along a path. As an extension to tasks in environments with static obstacles where the load displacement needs to be bounded throughout the trajectory, sampling-based motion planning generates collision-free paths. Next, a reinforcement learning agent transforms these paths into trajectories that maintain the bound on the load displacement while following the collision-free path in a timely manner. We verify the approach both in simulation and in experiments on a quadrotor with suspended load and verify the method's safety and feasibility through a demonstration where a quadrotor delivers an open container of liquid to a human subject.
The contributions of this work are two-fold. First, this article presents a solution to a challenging, and vital problem of planning a constraint-balancing task for an inherently unstable non-linear system in the presence of obstacles. Second, AI and robotics researchers can both benefit from the provided theoretical guarantees of system stability on a class of constraint-balancing tasks that occur in very large action spaces.
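The load-displacement constraint at the heart of this task is simple to state: the swing angle must stay within a bound at every point of the trajectory. The bound and the angle samples below are illustrative assumptions:

```python
# Check the constraint-balancing condition for a candidate trajectory:
# accept it only if the suspended load's swing angle never exceeds the bound.

def swing_within_bound(trajectory_angles_rad, bound_rad=0.2):
    """True if the load displacement is bounded along the whole trajectory."""
    return all(abs(a) <= bound_rad for a in trajectory_angles_rad)

print(swing_within_bound([0.05, -0.10, 0.15]))  # True: within the bound
print(swing_within_bound([0.05, 0.30, 0.10]))   # False: bound violated mid-path
```

In the article's pipeline, trajectories that fail such a check are the ones the learned policy reshapes while still following the collision-free path.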
Article
Anticipation can enhance the capability of a robot in its interaction with humans, where the robot predicts the humans' intention for selecting its own action. We present a novel framework of anticipatory action selection for human-robot interaction, which is capable of handling nonlinear and stochastic human behaviors such as table tennis strokes and allows the robot to choose the optimal action based on prediction of the human partner's intention with uncertainty. The presented framework is generic and can be used in many human-robot interaction scenarios, for example, in navigation and human-robot co-manipulation. In this article, we conduct a case study on human-robot table tennis. Due to the limited amount of time for executing hitting movements, a robot usually needs to initiate its hitting movement before the opponent hits the ball, which requires the robot to be anticipatory based on visual observation of the opponent's movement. Previous work on Intention-Driven Dynamics Models (IDDM) allowed the robot to predict the intended target of the opponent. In this article, we address the problem of action selection and optimal timing for initiating a chosen action by formulating the anticipatory action selection as a Partially Observable Markov Decision Process (POMDP), where the transition and observation are modeled by the IDDM framework. We present two approaches to anticipatory action selection based on the POMDP formulation, i.e., a model-free policy learning method based on Least-Squares Policy Iteration (LSPI) that employs the IDDM for belief updates, and a model-based Monte-Carlo Planning (MCP) method, which benefits from the transition and observation model by the IDDM. Experimental results using real data in a simulated environment show the importance of anticipatory action selection, and that POMDPs are suitable to formulate the anticipatory action selection problem by taking into account the uncertainties in prediction.
We also show that existing algorithms for POMDPs, such as LSPI and MCP, can be applied to substantially improve the robot's performance in its interaction with humans.
Keywords: Anticipation; Intention-driven dynamics model; Partially observable Markov decision process; Policy iteration; Monte-Carlo planning
Article
Autonomous robots facing a diversity of open environments and performing a variety of tasks and interactions need explicit deliberation in order to fulfill their missions. Deliberation is meant to endow a robotic system with extended, more adaptable and robust functionalities, as well as reduce its deployment cost. The ambition of this survey is to present a global overview of deliberation functions in robotics and to discuss the state of the art in this area. The following five deliberation functions are identified and analyzed: planning, acting, monitoring, observing, and learning. The paper introduces a global perspective on these deliberation functions and discusses their main characteristics, design choices and constraints. The reviewed contributions are discussed with respect to this perspective. The survey focuses as much as possible on papers with a clear robotics content and with a concern on integrating several deliberation functions.
Article
From 1960 through 1972, the Artificial Intelligence Center at SRI conducted research on a mobile robot system nicknamed "Shakey." Endowed with a limited ability to perceive and model its environment, Shakey could perform tasks that required planning, route finding, and the rearranging of simple objects. Although the Shakey project led to numerous advances in AI techniques, many of which were reported in the literature, much specific information that might be useful in current robotics research appears only in a series of relatively inaccessible SRI technical reports. Our purpose here, consequently, is to make this material more readily available by extracting and reprinting those sections of the reports that seem particularly interesting, relevant and important.
Article
Much of current machine learning (ML) research has lost its connection to problems of import to the larger world of science and society. From this perspective, there exist glaring limitations in the data sets we investigate, the metrics we employ for evaluation, and the degree to which results are communicated back to their originating domains. What changes are needed to how we conduct research to increase the impact that ML has? We present six Impact Challenges to explicitly focus the field's energy and attention, and we discuss existing obstacles that must be addressed. We aim to inspire ongoing discussion and focus on ML that matters.
Article
Renewed motives for space exploration have inspired NASA to work toward the goal of establishing a virtual presence in space, through heterogeneous fleets of robotic explorers. Information technology, and Artificial Intelligence in particular, will play a central role in this endeavor by endowing these explorers with a form of computational intelligence that we call remote agents. In this paper we describe the Remote Agent, a specific autonomous agent architecture based on the principles of model-based programming, on-board deduction and search, and goal-directed closed-loop commanding, that takes a significant step toward enabling this future. This architecture addresses the unique characteristics of the spacecraft domain that require highly reliable autonomous operations over long periods of time with tight deadlines, resource constraints, and concurrent activity among tightly coupled subsystems. The Remote Agent integrates constraint-based temporal planning and scheduling, robust multi-threaded execution, and model-based mode identification and reconfiguration. The demonstration of the integrated system as an on-board controller for Deep Space One, NASA's first New Millennium mission, is scheduled for a period of a week in mid 1999. The development of the Remote Agent also provided the opportunity to reassess some of AI's conventional wisdom about the challenges of implementing embedded systems, tractable reasoning, and knowledge representation. We discuss these issues, and our often contrary experiences, throughout the paper.
Article
Intelligent agents embedded in a dynamic, uncertain environment should incorporate capabilities for both planned and reactive behavior. Many current solutions to this dual need focus on one aspect, and treat the other one as secondary. We propose an approach for integrating planning and control based on behavior schemas, which link physical movements to abstract action descriptions. Behavior schemas describe behaviors of an agent, expressed as trajectories of control actions in an environment, and goals can be defined as predicates on these trajectories. Goals and behaviors can be combined to produce conjoint goals and complex controls. The ability of multivalued logics to represent graded preferences allows us to formulate tradeoffs in the combination. Two composition theorems relate complex controls to complex goals, and provide the key to using standard knowledge-based deliberation techniques to generate complex controllers. We report experiments in planning and execution on a mobile robot platform, Flakey.
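The multivalued-logic combination of graded goals mentioned above can be sketched with a common fuzzy-logic choice of conjunction (min). This is an assumption made for illustration, not the paper's exact semantics:

```python
# Combine graded goal satisfactions in [0, 1]; conjunction as min is one
# standard multivalued-logic choice for expressing "achieve both goals".

def conjoin(*degrees):
    """Degree to which a trajectory satisfies all goals simultaneously."""
    return min(degrees)

reach_target = 0.8   # assumed degree: trajectory approaches the target
avoid_walls = 0.6    # assumed degree: trajectory keeps clear of walls
print(conjoin(reach_target, avoid_walls))  # 0.6
```

The combined degree is dominated by the worst-satisfied goal, which is what lets graded preferences express tradeoffs between conjoint goals.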
Article
Kuipers, B. and Byun, Y.T., A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations, Robotics and Autonomous Systems, 8 (1991) 47–63. We have developed a robust qualitative method for robot exploration, mapping, and navigation in large-scale spatial environments. Experiments with a simulated robot in a variety of complex 2D environments have demonstrated that our qualitative method can build an accurate map of a previously unknown environment in spite of substantial random and systematic sensorimotor error. Most current approaches to robot exploration and mapping analyze sensor input to build a geometrically precise map of the environment, then extract topological structure from the geometric description. Our approach recognizes and exploits qualitative properties of large-scale space before relatively error-prone geometrical properties. At the control level, distinctive places and distinctive travel edges are identified based on the interaction between the robot's control strategies, its sensorimotor system, and the world. A distinctive place is defined as the local maximum of a distinctiveness measure appropriate to its immediate neighborhood, and is found by a hill-climbing control strategy. A distinctive travel edge, similarly, is defined by a suitable measure and a path-following control strategy. The topological network description is created by linking the distinctive places and travel edges. Metrical information is then incrementally assimilated into local geometric descriptions of places and edges, and finally merged into a global geometric map. Topological ambiguity arising from sensorily indistinguishable places can be resolved at the topological level by the exploration strategy.
With this representation, successful navigation is not critically dependent on the accuracy, or even the existence, of the geometrical description. We present examples demonstrating the process by which the robot explores and builds a map of a complex environment, including the effect of sensory errors. We also discuss new research directions that are suggested by this approach.
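The hill-climbing strategy for finding a distinctive place can be sketched in a toy 1-D world. The grid, the distinctiveness values, and the neighbour function are assumptions for illustration:

```python
# Hill-climb to the local maximum of a distinctiveness measure: the robot
# repeatedly moves to the best-scoring neighbouring cell until no neighbour
# beats its current position (a "distinctive place").

def climb_to_distinctive_place(pos, distinctiveness, neighbours):
    while True:
        best = max(neighbours(pos) + [pos], key=distinctiveness)
        if best == pos:
            return pos            # local maximum reached
        pos = best

# illustrative 1-D corridor whose distinctiveness peaks at cell 3
measure = {0: 1, 1: 2, 2: 4, 3: 7, 4: 3}
nbrs = lambda p: [q for q in (p - 1, p + 1) if q in measure]
print(climb_to_distinctive_place(0, lambda p: measure[p], nbrs))  # 3
```

Because the place is defined by the measure rather than by metric coordinates, the robot can re-find it reliably despite sensorimotor error — the property the abstract exploits.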
Article
Intelligent autonomous action in ordinary environments calls for maps. 3D geometry is generally required for avoiding collision with complex obstacles and to self-localize in six degrees of freedom (6 DoF) (x, y, z positions, roll, yaw, and pitch angles). Meaning, in addition to geometry, becomes inevitable if the robot is supposed to interact with its environment in a goal-directed way. A semantic stance enables the robot to reason about objects; it helps disambiguate or round off sensor data; and the robot knowledge becomes reviewable and communicable. The paper describes an approach and an integrated robot system for semantic mapping. The prime sensor is a 3D laser scanner. Individual scans are registered into a coherent 3D geometry map by 6D SLAM. Coarse scene features (e.g., walls, floors in a building) are determined by semantic labeling. More delicate objects are then detected by a trained classifier and localized. In the end, the semantic maps can be visualized for human inspection. We sketch the overall architecture of the approach, explain the respective steps and their underlying algorithms, give examples based on a working robot implementation, and discuss the findings.
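A minimal stand-in for the coarse semantic-labeling step above is to classify planar patches by their normal direction. The threshold and the labels are illustrative assumptions:

```python
# Label a planar patch from a 3D scan by its (unit) normal: roughly vertical
# normals belong to floors/ceilings, roughly horizontal normals to walls.

def label_plane(normal, up=(0.0, 0.0, 1.0), tol=0.15):
    """Return 'floor/ceiling', 'wall', or 'other' for a unit plane normal."""
    dot = abs(sum(n * u for n, u in zip(normal, up)))
    if dot > 1.0 - tol:
        return "floor/ceiling"
    if dot < tol:
        return "wall"
    return "other"

print(label_plane((0.0, 0.0, 1.0)))   # floor/ceiling
print(label_plane((1.0, 0.0, 0.0)))   # wall
print(label_plane((0.7, 0.0, 0.7)))   # other (e.g. a ramp or noisy patch)
```

Real systems refine such coarse labels with context (e.g. height above ground) before handing candidate regions to a trained object classifier, as the abstract describes.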
Book
Automated planning technology now plays a significant role in a variety of demanding applications, ranging from controlling space vehicles and robots to playing the game of bridge. These real-world applications create new opportunities for synergy between theory and practice: observing what works well in practice leads to better theories of planning, and better theories lead to better performance of practical applications. Automated Planning mirrors this dialogue by offering a comprehensive, up-to-date resource on both the theory and practice of automated planning. The book goes well beyond classical planning, to include temporal planning, resource scheduling, planning under uncertainty, and modern techniques for plan generation, such as task decomposition, propositional satisfiability, constraint satisfaction, and model checking. The authors combine over 30 years' experience in planning research and development to offer an invaluable text to researchers, professionals, and graduate students. Comprehensively explains paradigms for automated planning. Provides a thorough understanding of AI planning theory and practice, and how they relate to each other. Covers all the contemporary topics of planning, as well as important practical applications of planning, such as model checking and game playing. Presents case studies of applications in space, robotics, CAD/CAM, process control, emergency operations, and games. Provides lecture notes, examples of programming assignments, pointers to downloadable planning systems and related information online.
Article
this technicality has no practical implications; however, if k is not very large, then it might be frustrating to obtain only probabilistic assurances, as op-

Figure 5.2: The van der Corput sequence is obtained by reversing the bits in the binary decimal representation of the naive sequence.

 i   naive sequence   binary   reversed binary   van der Corput points in [0, 1]
 1   0                .0000    .0000             0
 2   1/16             .0001    .1000             1/2
 3   1/8              .0010    .0100             1/4
 4   3/16             .0011    .1100             3/4
 5   1/4              .0100    .0010             1/8
 6   5/16             .0101    .1010             5/8
 7   3/8              .0110    .0110             3/8
 8   7/16             .0111    .1110             7/8
 9   1/2              .1000    .0001             1/16
10   9/16             .1001    .1001             9/16
11   5/8              .1010    .0101             5/16
12   11/16            .1011    .1101             13/16
13   3/4              .1100    .0011             3/16
14   13/16            .1101    .1011             11/16
15   7/8              .1110    .0111             7/16
16   15/16            .1111    .1111             15/16
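The van der Corput construction shown in the table reduces to a few lines: the k-th point (0-indexed) is obtained by reading k's bits in reverse as a binary fraction.

```python
# Compute van der Corput points by bit reversal: bit b of k contributes
# weight 1/2^(b+1) to the point in [0, 1).

def van_der_corput(k, bits=4):
    """Return the k-th van der Corput point (0-indexed) using `bits` bits."""
    x = 0.0
    for b in range(bits):
        if (k >> b) & 1:
            x += 1.0 / (1 << (b + 1))
    return x

points = [van_der_corput(k) for k in range(16)]
print(points[:4])  # [0.0, 0.5, 0.25, 0.75] -- matches the table's first rows
```

Successive points fill [0, 1] far more evenly than the naive sequence, which is why such low-discrepancy sequences are used for deterministic sampling in motion planning.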
T. Hester, P. Stone, Intrinsically motivated model learning for developing curious robots, Artif. Intell. (2017) (this issue).
E.A. Feigenbaum, J. Feldman, et al., Computers and Thought, AAAI Press, 1963.
SPARC – The Partnership for Robotics in Europe, Strategic Research Agenda for Robotics in Europe 2014–2020, http://www.eu-robotics.net/sparc, 2014.
J. Needham, Science and Civilisation in China, vol. 2, History of Scientific Thought, Cambridge University Press, 1954.
J. Markoff, Artificial Intelligence Swarms Silicon Valley on Wings and Wheels, The New York Times, online at http://nyti.ms/29ZMgCw, 2016.