Conference Paper

Will Robots Take all the Jobs? Not yet

Abstract

Much has been speculated about intelligent artefacts and their potential to automate entire industries, or at least a broad variety of human tasks and processes. Recent advances in the fields of Artificial Intelligence (AI) and Robotics have fueled these views, propelling hyped narratives, unfounded fears, and dystopian futures alike. Some of the reasons behind such reactions originate in the age-old dispute over what intelligence (e.g. in humans and in machines) truly means. Others stem from the gap between the promise of AI, i.e. to build machines with human-like intelligence, and the abilities that current "intelligent" machines actually possess. The rapid automation of human labour and processes has been underway since the industrial revolution. Automation wears new clothes in the digital era, especially where emerging technologies are involved, but it still falls short of countless human activities that require genuine intelligence and are not easy to automate. The aim of this paper is twofold. On the one hand, it clarifies why we are no closer to having truly intelligent systems, grounding this claim in a thorough discussion of what genuine intelligence means. On the other hand, it presents an analysis of new jobs created in Robotics and related fields by mining and processing job offers posted to the mailing list "robotics-worldwide." Using natural language processing techniques, we present not only the evolution of all job offers posted to that renowned mailing list over the last 15 years, but also their most salient characteristics and backgrounds. In addition to a continuously growing number of job offers over the analyzed period, the results indicate substantial demand for jobs, predominantly in the field of academic and scientific research.
Proliferating innovation in AI and Robotics, combined with a growing shortage of experts in these domains, indicates that both are broad fields yet to be thoroughly explored. The analysis of "The robotics-worldwide Archives" clearly shows that an obvious response is the increased employment of researchers and academics to undertake this exploration. No, robots will not take all the jobs. At least not yet.
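The job-offer mining described in the abstract can be sketched in a few lines. The following is a minimal illustrative example, not the authors' actual pipeline: it assumes a simple archive of subject lines and the "[jobs]" subject tag used on robotics-worldwide, then tallies job postings per year. The sample subjects and keyword patterns are assumptions for demonstration.

```python
# Illustrative sketch of mining job offers from mailing-list subject lines.
# The archive format, tags, and sample data below are assumed, not taken
# from the paper's actual dataset.
import re
from collections import Counter

# Hypothetical subject lines as they might appear in an archive dump.
subjects = [
    "[robotics-worldwide] [jobs] PhD position in robot learning (2019)",
    "[robotics-worldwide] [jobs] Postdoc in SLAM, 2019",
    "[robotics-worldwide] [news] Conference deadline extended, 2019",
    "[robotics-worldwide] [jobs] Faculty opening in HRI, 2020",
]

JOB_TAG = re.compile(r"\[jobs\]", re.IGNORECASE)
YEAR = re.compile(r"\b(20\d{2})\b")

def count_job_offers_by_year(lines):
    """Keep only job postings and tally them by the year in the subject."""
    counts = Counter()
    for line in lines:
        if JOB_TAG.search(line):
            match = YEAR.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts

print(count_job_offers_by_year(subjects))  # Counter({'2019': 2, '2020': 1})
```

A real analysis would instead parse the full archive (e.g. with Python's `mailbox` module), deduplicate cross-posts, and classify offers by field and institution type before aggregating.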
... On the other side, there is the need not to increase the total production cost or the time to respond to the market, i.e., the lead time. A good trade-off is offered by collaborative robots, or simply cobots (Bi et al., 2021; Colgate, Edward, Peshkin, & Wannasuphoprasit, 1996), which are spreading nowadays (Anandarajah and Monett, 2021), since they can combine high productivity standards with the required flexibility. Moreover, collaborative systems also draw on the flexibility and dexterity of human operators, because robots of this type can work in the same workspace without requiring additional fences. ...
Article
Collaborative robots (cobots) are one of the newest technologies for improving system performance, thanks to their ability to combine the high productivity typical of automatic machines with the flexibility typical of manual systems. Moreover, they can work directly alongside human operators in the same work area; the shared workspace, however, is both the greatest advantage and the main limitation of collaborative robots. In collaborative applications it is important to understand how the interaction between different resources affects system performance, since interference between them can trigger an emergency stop, thus reducing the efficiency of the system. A collision avoidance strategy can be introduced to allow the movement to continue, but safely. Therefore, a 3D collision avoidance strategy is presented in this work and experimentally validated to demonstrate the effectiveness of the proposed method. Finally, the performance of the system is investigated through simulation and experimental tests to understand the effects of the size of the collaboration area relative to the workspace. The results show that the reduction in performance caused by enlarging the collaboration area is mitigated by the collision avoidance strategy, since fewer emergency stops are required.
Article
The subject of this paper is ethical and responsibility issues relating to the development and acquisition of robotics in healthcare. The purpose of the paper is to study previous scientific publications and research on the topic and to clarify which questions, aspects, and concerns are most relevant when considering ethics and responsibility issues related to care robots. In the second phase, ideas from different stakeholders regarding these viewpoints are studied and compared to those presented in previous publications. The aim of the study is to find solutions to the issues raised in the scientific literature and to identify new issues for consideration and further study. The study is qualitative, with thematic interviews used as the main method for acquiring knowledge. The study is part of the SHAPES Horizon 2020 project; from the perspective of SHAPES, its aim is to provide knowledge that promotes the project's goal, i.e., the development of an international healthcare ecosystem. Based on the results, it can be argued that the issues presented in previous academic publications regarding the ethics and accountability of robots in practical healthcare work are not relevant. Both legislation and the logic of the AI algorithms used by care robots prevent the situations described in previous academic discussions, in which robots would presumably be forced to make decisions demanding ethical consideration. The results also indicate that current legislation does not limit the development of healthcare robots any more than it limits healthcare work in general. Thus, ethical consideration of care robots should instead focus on the threshold values robots use when making interpretations, as well as on the data used for machine learning. These were identified as potential subjects for further research.
Article
This article systematically analyzes the problem of defining "artificial intelligence." It starts by pointing out that a definition influences the path of the research, then establishes four criteria of a good working definition of a notion: being similar to its common usage, drawing a sharp boundary, leading to fruitful research, and being as simple as possible. According to these criteria, the representative definitions in the field are analyzed. A new definition is proposed, according to which intelligence means "adaptation with insufficient knowledge and resources." The implications of this definition are discussed, and it is compared with the other definitions. It is claimed that this definition sheds light on the solution of many existing problems and sets a sound foundation for the field.
Article
Every artificial-intelligence research project needs a working definition of "intelligence", on which the deepest goals and assumptions of the research are based. In the project described in the following chapters, "intelligence" is defined as the capacity to adapt under insufficient knowledge and resources. Concretely, an intelligent system should be finite and open, and should work in real time. If these criteria are used in the design of a reasoning system, the result is NARS, a non-axiomatic reasoning system. NARS uses a term-oriented formal language, characterized by the use of subject-predicate sentences. The language has an experience-grounded semantics, according to which the truth value of a judgment is determined by previous experience, and the meaning of a term is determined by its relations with other terms. Several different types of uncertainty, such as randomness, fuzziness, and ignorance, can be represented in the language in a single way. The inference rules of NARS...
Article
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.