Article

Robots autonomy: Some technical challenges


Abstract

Robot autonomy has received wide coverage in the press, with a trend towards anthropomorphism that is likely to mislead people and conceal or disguise the technical reality. This paper reviews the various technical aspects of robot autonomy. First, we propose a definition that distinguishes robots from devices that are not robots. Autonomy is then defined and treated as a relative notion within a framework of authority sharing between the robot's decision functions and the human being. Several technical issues are discussed from three points of view: (i) the robot, (ii) the human operator, and (iii) the interaction between the operator and the robot. Key questions that should be carefully addressed in future robotic systems are given at the end of the paper.
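
To make the idea of authority sharing concrete, here is a minimal, hypothetical Python sketch of how authority over a robot's decision functions might be arbitrated between the robot and the operator depending on context. The names (`Authority`, `decide_authority`), the reflex-function set, and the thresholds are illustrative assumptions, not the framework defined in the paper.

```python
# Illustrative sketch of authority sharing between a robot's decision
# functions and a human operator. All names and thresholds are hypothetical.
from dataclasses import dataclass
from enum import Enum


class Authority(Enum):
    HUMAN = "human operator"
    ROBOT = "robot decision function"


@dataclass
class Context:
    link_available: bool      # can the operator actually intervene?
    operator_workload: float  # 0.0 (idle) .. 1.0 (saturated)
    time_pressure: float      # 0.0 (none) .. 1.0 (immediate reaction needed)


# Functions that must be executed too fast for a human decision loop.
REFLEX_FUNCTIONS = {"obstacle avoidance"}


def decide_authority(function_name: str, ctx: Context) -> Authority:
    """Assign authority over one decision function in the current context.

    Autonomy is treated as relative: the same function may be delegated to
    the robot or kept by the operator depending on the situation.
    """
    if not ctx.link_available:
        return Authority.ROBOT      # operator cannot intervene at all
    if function_name in REFLEX_FUNCTIONS and ctx.time_pressure > 0.8:
        return Authority.ROBOT      # no time for a human decision loop
    if ctx.operator_workload > 0.7:
        return Authority.ROBOT      # shed tasks from an overloaded operator
    return Authority.HUMAN          # otherwise keep the human in the loop


if __name__ == "__main__":
    ctx = Context(link_available=True, operator_workload=0.9, time_pressure=0.2)
    for f in ("navigation", "obstacle avoidance", "payload release"):
        print(f, "->", decide_authority(f, ctx).value)
```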

... Nowadays, the growth of information systems, and of the internet in particular, continues to inspire the parallel between the human brain and computer-based artificial systems [4,5]. Although the market is currently oriented towards artificial systems that are still based on Newtonian physics and on the assumption that every element obeys simple and static rules, the experience of the internet and the progress made in robotics suggest a different way of designing the next generation of information systems and robots [6,7,8]. ...
Article
Full-text available
The digital revolution is transforming contemporary society. Connective intelligence is an emerging property that derives from embedding intelligence into connected data, concepts, applications, and people. Furthermore, progress in behavior-based robotics opens new fields of innovative investigation.
Article
Full-text available
The allocation of visual attention is a key factor for humans operating complex systems under time pressure with multiple information sources. In some situations, attentional tunneling is likely to appear, leading to excessive focus and poor decision making. In this study, we propose a formal approach, based on machine learning techniques, to detect the occurrence of such an attentional impairment. An experiment was conducted to provoke attentional tunneling, during which psycho-physiological and oculomotor data from 23 participants were collected. Data from 18 participants were used to train an adaptive neuro-fuzzy inference system (ANFIS). From a machine learning point of view, the classification performance of the trained ANFIS proved the validity of this approach. Furthermore, the resulting classification rules were consistent with the attentional tunneling literature. Finally, the classifier robustly detected attentional tunneling on test data from four participants.
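
As a rough illustration of the classification step described above, the following Python sketch trains a standard classifier on synthetic psycho-physiological and oculomotor features to flag attentional tunneling. A scikit-learn logistic-regression pipeline stands in for the ANFIS used in the study; the feature names, the synthetic data, and the split sizes are invented for the example.

```python
# Stand-in for the ANFIS classifier: a simple pipeline on made-up features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per time window:
# [pupil diameter (mm), heart rate (bpm), gaze-switching rate (1/s)]
n = 400
normal = rng.normal([3.5, 75.0, 0.8], [0.3, 8.0, 0.2], size=(n, 3))
tunnel = rng.normal([4.2, 90.0, 0.3], [0.3, 8.0, 0.1], size=(n, 3))

X = np.vstack([normal, tunnel])
y = np.array([0] * n + [1] * n)          # 1 = attentional tunneling

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 3))
```
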
Article
Full-text available
Analyses of aviation safety reports reveal that human–machine conflicts induced by poor automation design are notable precursors of accidents. A review of different crew–automation conflict scenarios shows that they have a common denominator: the autopilot behaviour interferes with the pilot's flight-guidance goal via 'hidden' mode transitions. Considering both the human operator and the machine (i.e. the autopilot or the decision functions) as agents, we propose a Petri net model of these conflicting interactions, which allows them to be detected as deadlocks in the Petri net. To test our Petri net model, we designed an autoflight system that was formally analysed to detect conflicting situations. We identified three conflicting situations that were integrated into an experimental scenario in a flight simulator with 10 general aviation pilots. The results showed that the conflicts we had identified a priori as critical impacted the pilots' performance. Indeed, the first conflict remained unnoticed by eight participants and led to a potential collision with another aircraft. The second conflict was detected by all the participants, but three of them did not manage the situation correctly. The last conflict was also detected by all the participants but provoked a typical automation surprise, as only one pilot declared that he had understood the autopilot behaviour. These behavioural results are discussed in terms of workload and number of fired 'hidden' transitions. Ultimately, this study shows that formal and experimental approaches are complementary for identifying and assessing the criticality of human–automation conflicts.
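
The deadlock-based detection idea can be illustrated with a toy Petri net in Python: reachable markings are explored breadth-first, and any marking in which no transition is enabled is reported as a deadlock, i.e. a potential human–automation conflict. The net below (a pilot waiting for a mode that a 'hidden' autopilot transition has silently changed) is an invented example, not the autoflight model analysed in the paper.

```python
# Toy Petri net: conflicts between agents show up as deadlocked markings.
from collections import deque

# Each transition maps its name to (pre, post): tokens consumed and produced per place.
TRANSITIONS = {
    "pilot_engages_mode": ({"pilot_idle": 1}, {"pilot_waiting": 1}),
    "autopilot_hidden_switch": ({"ap_mode_A": 1}, {"ap_mode_B": 1}),
    "guidance_matches_goal": (
        {"pilot_waiting": 1, "ap_mode_A": 1},
        {"pilot_idle": 1, "ap_mode_A": 1},
    ),
}
INITIAL_MARKING = {"pilot_idle": 1, "pilot_waiting": 0, "ap_mode_A": 1, "ap_mode_B": 0}


def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(place, 0) >= k for place, k in pre.items())


def fire(marking, pre, post):
    """Return the marking reached by firing an enabled transition."""
    new = dict(marking)
    for place, k in pre.items():
        new[place] -= k
    for place, k in post.items():
        new[place] = new.get(place, 0) + k
    return new


def find_deadlocks(initial):
    """Breadth-first exploration of reachable markings; a deadlock is a
    reachable marking in which no transition is enabled."""
    seen, deadlocks, queue = set(), [], deque([initial])
    while queue:
        marking = queue.popleft()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(marking, pre, post)
                      for pre, post in TRANSITIONS.values()
                      if enabled(marking, pre)]
        if not successors:
            deadlocks.append(marking)
        queue.extend(successors)
    return deadlocks


if __name__ == "__main__":
    # The only deadlock: the pilot waits for mode A while the autopilot has
    # silently switched to mode B -- an undetected human-automation conflict.
    for m in find_deadlocks(INITIAL_MARKING):
        print("conflict (deadlock) at marking:", m)
```
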
Article
Full-text available
When the human element is introduced into decision support system design, entirely new layers of social and ethical issues emerge but are not always recognized as such. This paper discusses the ethical and social impact issues specific to decision support systems and highlights areas that interface designers should consider during design, with an emphasis on military applications. Because of the inherent complexity of socio-technical systems, decision support systems are particularly vulnerable to certain ethical pitfalls involving automation and accountability. If computer systems diminish a user's sense of moral agency and responsibility, an erosion of accountability could result. These problems are exacerbated when an interface is perceived as a legitimate authority. I argue that when developing human–computer interfaces for decision support systems that have the ability to harm people, a moral buffer, a form of psychological distancing, may be created that allows people to ethically distance themselves from their actions.