Science topic
Human-Robot Interaction - Science topic
Explore the latest questions and answers in Human-Robot Interaction, and find Human-Robot Interaction experts.
Questions related to Human-Robot Interaction
Dear Colleagues,
We are glad to announce that we are organizing the 2nd workshop on Cooperative Multi-Agent Systems Decision-Making and Learning at AAAI 2025. If you are interested, we sincerely invite you to share your work with us and the broader community by submitting a paper or an extended abstract to our workshop.
Workshop Website: https://www.is3rlab.org/aaai25-cmasdl-workshop.github.io/
AAAI 2025 W11 Website: https://aaai.org/conference/aaai/aaai-25/workshop-list/
Paper Submission Link: https://easychair.org/conferences/?conf=aaai2025cmasdlworksh
Paper Submission Deadline: December 6, 2024
ABOUT:
Natural agents, like humans, often make decisions based on a blend of biological, social, and cognitive motivations, as elucidated by theories such as Maslow’s Hierarchy of Needs and Alderfer’s Existence-Relatedness-Growth (ERG) theory. An AI agent, in turn, can be regarded as a self-organizing system that also develops various needs and motivations as it evolves through decision-making and learning, adapting to different scenarios to satisfy those needs. Combined with AI agents’ capacity to aid decision-making, this opens up new horizons in human-multi-agent collaboration. This potential is especially important in interactions between human agents and intelligent agents when establishing stable and reliable cooperative relationships, particularly in adversarial and rescue mission environments.
This workshop focuses on the role of decision-making and learning in human-multi-agent cooperation, viewed through the lens of cognitive modeling. AI technologies, particularly in human-robot interaction, are increasingly focused on cognitive modeling, encompassing everything from visual processing to symbolic reasoning and action recognition. These capabilities support human-agent cooperation in complex tasks.
Important Dates
- Workshop paper submission deadline: December 6, 2024, 11:59 pm Pacific Time.
- Notification to authors: 21 December 2024.
- Date of workshop: 3 March 2025.
TOPICS:
We solicit contributions from topics including but not limited to:
§ Human-multi-agent cognitive modeling
§ Human-multi-agent trust networks
§ Trustworthy AI agents in human-robot interaction
§ Trust-based Human-MAS decision-making and learning
§ Consensus in Human-MAS collaboration
§ Intrinsically motivated AI agent modeling in Human-MAS
§ Innate-values-driven reinforcement learning
§ Multi-Objective MAS decision-making and learning
§ Adaptive learning with social rewards
§ Cognitive models in swarm intelligence and robotics
§ Game-theoretic approaches in MAS decision-making
§ Cognitive model application in intelligent social systems
SUBMISSION:
We welcome contributions of both short (2-4 pages) and long papers (6-8 pages) related to our stated vision in the AAAI 2025 proceedings format. Position papers and surveys are also welcome. All contributions will be peer reviewed (single-blind).
Paper Submission Link: https://easychair.org/conferences/?conf=aaai2025cmasdlworksh
PUBLICATION & ATTENDANCE:
All accepted papers will be given the opportunity to be presented in the workshop. The accepted papers will be posted on the workshop’s website in advance so that interested participants will have a chance to view the papers first before coming to the workshop. These non-archival papers and their corresponding posters will remain available on this website after the workshop. The authors will retain copyright of their papers.
Virtual and Remote Attendance will be available to everyone who has registered for the workshops. The workshop will be held in Philadelphia, Pennsylvania at the Pennsylvania Convention Center on Mar. 3, 2025. Authors are NOT allowed to present virtually.
REGISTRATION:
All attendees have to register for the workshop. Please see the AAAI 2025 workshop registration details at: https://aaai.org/conference/aaai/aaai-25/registration/.
Please contact is3rlab@gmail.com with any questions.
Chairs & Organizers:
Qin Yang, Bradley University
Giovanni Beltrame, Polytechnique Montreal
Alberto Quattrini Li, Dartmouth College
Christopher Amato, Northeastern University
IEEE 2024 4th International Conference on Electrical Engineering and Mechatronics Technology (ICEEMT 2024) will be held in Hangzhou, China from July 5 to 7, 2024.
Conference Website: https://ais.cn/u/QNryAj
---Call For Papers---
The topics of interest for submission include, but are not limited to:
◕ Electrical Engineering
Power system stability analysis and control
Renewable energy integration and smart grid technologies
Power electronics and motor drives for electric vehicles
Power quality improvement in electrical distribution systems
Electromagnetic compatibility and interference mitigation
......
◕ Mechatronics Technology
Mechatronic system design and modeling
Robotics and automation technology
Intelligent control systems for mechatronics
MEMS/NEMS and micro/nano robotics
Human-robot interaction and collaboration
......
All accepted full papers will be published and submitted for inclusion into IEEE Xplore subject to meeting IEEE Xplore's scope and quality requirements, and also submitted to EI Compendex and Scopus for indexing.
Important Dates:
Full Paper Submission Date: May 31, 2024
Final Paper Submission Date: June 28, 2024
Registration Deadline: July 3, 2024
Conference Dates: July 5-7, 2024
For More Details please visit:
In your opinion, will autonomous robots equipped with generative artificial intelligence technology and embedded intelligent chatbots, able to perform many activities that until now only humans have done, be more of a help or a threat to humans?
How should robots equipped with generative artificial intelligence technology and performing the role of household help, etc. be constructed and programmed so that they are safe for humans?
If mass-produced, autonomously functioning, programmable robots and highly intelligent androids equipped with generative artificial intelligence technology and programmed to provide assistance to humans appear in the near future, what kind of work would you hire your personal robot helper to do?
In some countries, such as Japan, mass-produced robots that act as domestic helpers, mainly for the elderly, have already been available for years. Perhaps in the not too distant future, mass-produced, autonomously functioning, programmable robots, including highly intelligent androids equipped with generative artificial intelligence technology, will also go on sale in many other countries. Such robots can act as domestic help or as caregivers for children or the elderly. With the rapidly advancing technological progress in robotics, artificial intelligence and the other Industry 4.0/5.0 technologies typical of the current fourth technological revolution, humanoid, highly intelligent, highly autonomous robots performing various household and other jobs will soon be mass-produced. Robots equipped with generative artificial intelligence technology can have a number of alternative algorithms built in for performing specific functions and can be programmable. After purchasing such a helper robot for household and/or other work, the owner will be able to determine the scope and types of functions the robot is authorized to perform. Programming the purchased robot will consist of selecting, from the available options, only those functions that correspond to the activities and tasks the robot is permitted to carry out. The owner will also be able to set the robot's level of autonomy within a certain permissible range as part of this programming. In addition to typical household chores such as cleaning, watering flowers, and feeding and walking pets, such robots could monitor the operation of various household appliances and systems, including smart home systems, heating, ventilation, lighting, air conditioning, etc. They could also act as a night watchman when the owner is away from home, guard home assets, and call for help in an unusual situation when the owner needs urgent assistance from public services, including the police, a security company or the health service. Such intelligent robots could also have a permanent connection to the Internet and, at the owner's command, quickly search for something online and print it out or send it to the owner's smartphone.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If mass-produced, autonomously functioning, programmable robots and highly intelligent androids equipped with generative artificial intelligence technology and programmed to provide assistance to humans appear on the market in the not-too-distant future, what kind of work would you hire your personal robot helper to do?
How should robots equipped with generative artificial intelligence technology and programmed to act as domestic helpers, etc., be constructed and programmed so that they are safe for humans?
In your opinion, will autonomous robots equipped with generative artificial intelligence technology and embedded intelligent chatbots, able to perform many activities that until now only humans have done, be more of a help or a threat to humans?
Will autonomous robots equipped with generative artificial intelligence technology be more of a help or a threat to humans?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
Perhaps in the future - as a result of the rapid technological advances currently taking place and the rivalry of leading technology companies developing AI technologies - a general artificial intelligence (AGI) will emerge. At present, deliberations remain unresolved on the new opportunities and threats that may arise from the construction and development of general artificial intelligence. The rapid progress currently being made in generative artificial intelligence, combined with the already intense competition among the technology companies developing these technologies, may lead to the emergence of a super artificial intelligence: a strong general artificial intelligence capable of self-development, self-improvement and perhaps also autonomy and independence from humans. Such a scenario may lead to a situation in which this strong, super or general artificial intelligence escapes human control. Perhaps, as a result of self-improvement, it will be able to reach a state that can be called artificial consciousness. On the one hand, its emergence could bring new possibilities, including perhaps new ways of solving the key problems of the development of human civilization. On the other hand, one should not forget the potential dangers if such an artificial intelligence, in its autonomous development and self-improvement independent of humans, were to slip completely out of human control. Whether this will involve mainly new opportunities or rather new dangers for humanity will probably be determined chiefly by how humans direct the development of AI technology while they still have control over it.
I described the key issues of opportunities and threats to the development of artificial intelligence technology in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Assuming that in the future - as a result of the rapid technological progress that is currently taking place and the competition of leading technology companies developing AI technologies - general artificial intelligence (AGI) will be created, will it mainly involve new opportunities or rather new threats for humanity? What is your opinion on this issue?
If general artificial intelligence (AGI) is created, will it involve mainly new opportunities or rather new threats for humanity?
What do you think about this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text, I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
What new occupations, professional professions, specialties in the workforce are being created or will soon be created in connection with the development of generative artificial intelligence applications?
The recent rapid development of generative artificial intelligence applications is increasingly changing labor markets. It is increasing the degree to which the work performed within various professions is objectified. On the one hand, generative artificial intelligence technologies are finding more and more applications in companies, enterprises and institutions, increasing the efficiency of certain business processes and supporting employees working in various positions. On the other hand, there are growing concerns that the black scenarios of some futurological projections may come true, in which many jobs are completely replaced by autonomous AI-equipped robots, androids or systems operating in cloud computing. These black scenarios are contrasted with more positive futuristic projections of labor market development, in which new professions are created thanks to the implementation of generative artificial intelligence technology in various areas of economic activity. Which of these two scenarios will be realized to a greater extent in the future is currently not easy to predict precisely.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
What new professions, professional occupations, specialties in the workforce are being created or will soon be created in connection with the development of generative artificial intelligence applications?
What new professions will soon be created in connection with the development of generative artificial intelligence applications?
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
"From science to law, from medicine to military questions, artificial intelligence is shaking up all our fields of expertise. All?? No?! In philosophy, AI is useless." The Artificial Mind, by Raphaël Enthoven, Humensis, 2024.
Will generative artificial intelligence - taught various activities so far performed only by humans, solving complex tasks, self-improving in performing specific tasks, and trained through deep learning with artificial neural network technology - be able to learn from its activities and, in the process of self-improvement, learn from its own mistakes?
Could a possible future combination of generative artificial intelligence technology and general artificial intelligence result in a highly technologically advanced super general artificial intelligence that improves itself, so that its self-improvement escapes human control and it becomes independent of its creator, which is man?
An important issue concerning the prospects for the development of artificial intelligence technology and its applications is whether intelligent systems built on generative artificial intelligence and taught to perform highly complex tasks should be granted a certain range of independence: the ability to improve themselves and to repair randomly occurring faults, errors, system failures, etc. For many years there have been deliberations and discussions about granting such systems greater autonomy in making decisions about self-improvement and the repair of faults and errors caused by random external events. On the one hand, if security systems based on generative artificial intelligence are built and operated in public institutions or commercial entities to provide a certain category of safety for people, it is important to give these systems a degree of decision-making autonomy, because in a serious crisis - a natural disaster, geological disaster, earthquake, flood, fire, etc. - a human could decide too late compared with the much faster response of an automated, intelligent security, emergency response, early warning, risk management or crisis management system. On the other hand, the greater the degree of self-determination given to such an automated, intelligent information system, including a security system, the greater the probability that a failure could change the way the system operates and cause it to slip completely out of human control. For an automated system to return quickly to correct operation on its own after a negative external crisis factor causes a failure, some scope of autonomy and self-decision-making must be granted. Determining what this scope should be requires first carrying out a multifaceted analysis and diagnosis of the factors that can act as risk factors and cause the malfunction or failure of an intelligent information system. Moreover, if generative artificial intelligence technology is one day enriched with a super general artificial intelligence, then the autonomy given to an intelligent information system built to automate a risk management system and provide a high level of safety for people may be high. If, at that stage of development, a system failure were nevertheless to occur due to certain external or perhaps internal factors, the negative consequences of such a system slipping out of human control could be very large and are currently difficult to assess. In this way, a paradox of building and developing systems based on super general artificial intelligence technology may be realized.
The paradox is this: the more perfect the automated, intelligent system built by humans - an information system far exceeding the capacity of the human mind to process and analyze large sets of data and information - the higher the level of autonomy it will be given to make crisis management decisions, to repair its own failures, and to decide much faster than a human can. Yet if, despite a low probability, an abnormal event occurs - an external factor of a new type, the materialization of a new category of risk - that effectively causes the failure of such a highly intelligent system, it may slip completely out of human control. The consequences, above all the negative consequences for humans, of such a highly autonomous intelligent information system based on super general artificial intelligence slipping out of control would be difficult to estimate in advance.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Could a possible future combination of generative artificial intelligence and general artificial intelligence technologies result in a highly technologically advanced super general artificial intelligence that improves itself, so that its self-improvement escapes human control and it becomes independent of its creator, which is man?
Will generative artificial intelligence - taught various activities so far performed only by humans, solving complex tasks, self-improving in performing specific tasks, and trained through deep learning with artificial neural network technology - be able to draw conclusions from its activities and, in the process of self-improvement, learn from its own mistakes?
Will generative artificial intelligence in the future in the process of self-improvement learn from its own mistakes?
The key issues of opportunities and threats to the development of artificial intelligence technologies are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Has the development of artificial intelligence, including especially the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box?
In recent weeks, media covering the prospects for the development of artificial intelligence technology have carried disturbing news. Rival leading technology companies developing ICT, Internet and Industry 4.0/5.0 information technologies have entered the next phase of generative artificial intelligence and general artificial intelligence development. Generative artificial intelligence is already present mainly through the intelligent chatbot ChatGPT, which was made available on the Internet at the end of 2022, and new variants of it are now being openly released to Internet users. Public interest in Internet-accessible intelligent chatbots and other tools based on generative artificial intelligence technology is very high. When OpenAI made the first publicly available versions of ChatGPT accessible in November 2022, the number of users of the platform grew faster than the user numbers previously reported by social media sites in the corresponding first months of their availability. The most recognizable technology brands dominating the markets for online information services are no longer competing only in generative artificial intelligence - which, through deep learning and artificial neural networks, is taught to perform jobs and tasks, write texts, participate in discussions, generate photos and videos, draw graphics and carry out other tasks previously performed only by humans - but also in building increasingly sophisticated solutions referred to as general artificial intelligence. Futurological projections of the development of constantly improved artificial intelligence suggest a risk that this development will at some point enter another phase, in which advanced general artificial intelligence systems will themselves create even more advanced general artificial intelligence systems, whose computing power and processing of large data sets, including platforms using huge accumulated data sets and Big Data Analytics, will far surpass the analytical capabilities of the human brain, human intelligence and the collective computing power of all the neurons of the human nervous system. Such a phase, in which advanced general artificial intelligence systems create still more advanced systems on their own, could lead to this development slipping out of human control. In such a situation, the risks associated with the uncontrolled development of advanced general artificial intelligence systems could increase sharply. The level of risk could be so high as to be comparable to the very serious threats, even the Armageddon of human civilization, depicted in catastrophic futurological projections of AI development escaping human control in many science fiction films.
The catastrophic, sometimes horror-like images depicted in science fiction films suggest the potential future risks of the arms race already taking place between the globally largest technology companies developing generative artificial intelligence and general artificial intelligence technologies. If the development of these technologies has entered this phase and there is no longer any possibility of stopping it, then perhaps this development can already be called an open Pandora's box of artificial intelligence development.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Has the development of artificial intelligence, including, above all, the current and prospective development of generative artificial intelligence and general artificial intelligence technologies, entered a phase that can already be called an open Pandora's box of artificial intelligence development?
Has the development of artificial intelligence entered a phase that can already be called an open Pandora's box?
Artificial intelligence technology has been rapidly developing and finding new applications in recent years. The main determinants, including potential opportunities and threats to the development of artificial intelligence technology are described in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
And what is your opinion on this topic?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Has the rivalry among leading technology companies in perfecting generative artificial intelligence technology already entered a path of no return and could inevitably lead to the creation of a super general artificial intelligence that will achieve the ability to self-improve, develop and may escape human control in this development? And if so, what risks could be associated with such a scenario of AI technology development?
The rivalry between IT giants, the leading technology companies perfecting generative artificial intelligence technology, may have already entered a path of no return. There are increasing comments in the media about where this rivalry may lead and whether it has indeed passed the point of no return. Even the aforementioned IT giants made attempts in the spring of 2023 to slow down this not-quite-tame development, but they failed. As a result, regulators are now expected to step in and sanction this development with regulations concerning, for example, how copyright applies to creative processes in which artificial intelligence takes on the role of creator. In the growing number of considerations about the use of artificial intelligence technology in ever more spheres of human functioning and professional work, there are questions about its dangers, alongside attempts to gloss over the subject by suggesting that the development of AI technology and its applications cannot escape human control, that AI is unlikely to replace humans but will only assist in many jobs, that the vision of disaster known from the "Terminator" saga of science fiction films will not materialize, that human-like intelligent androids will never become fully autonomous, and so on. Or perhaps in this way people are subconsciously trying to escape from a different line of thought: that the technological advances taking place under Industry 5.0, driven by leading technology companies competing to be the first to create a highly advanced super general artificial intelligence, could soon produce something smarter than humans, able to improve itself without humans and to develop in a direction that humans will not even be able to imagine, let alone predict. Perhaps the greatest fear about the consequences of the unbridled development of AI applications stems from the fact that its result could be something that intellectually surpasses humans. This kind of situation has sometimes already been described as an attempt to create one's own God (not an idol, just God). In these considerations, we repeatedly come to the conclusion that what is most fascinating can also generate the greatest dangers.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Has the competition among leading technology companies to perfect generative artificial intelligence technology already entered a path of no return and may inevitably lead to the creation of a super general artificial intelligence that will reach the capacity of self-improvement, development and may escape human control in this development? And if so, what risks could be associated with such a scenario of AI technology development?
Has the competition among leading technology companies to perfect generative artificial intelligence technology already entered a path of no return?
And what is your opinion about it?
What is your opinion on this issue?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
Answers to this question may vary. The key issue, however, is the moral dilemmas in the applications of constantly developing and improving artificial intelligence technology and the preservation of ethics in the process of developing applications of these technologies. In addition, a key issue within this topic is the need to explore and clarify more fully what human consciousness is, how it is formed, and how it functions within specific networks of neurons in the human central nervous system.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
If an imitation of human consciousness called artificial consciousness is built on the basis of AI technology in the future, will it be built by mapping the functioning of human consciousness or rather as a kind of result of the development and refinement of the issue of autonomy of thought processes developed within the framework of "thinking" generative artificial intelligence?
How can artificial consciousness be built on the basis of AI technology?
And what is your opinion on this topic?
What do you think about this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best wishes,
Dariusz Prokopowicz
The above text is entirely my own work written by me on the basis of my research.
In writing this text I did not use other sources or automatic text generation systems.
Copyright by Dariusz Prokopowicz
Whether or not robots should have rights is a complex question with no easy answer. There are both potential benefits and risks to consider, such as the well-being of robots, the potential for abuse, and the impact on society as a whole.
Dear Colleagues,
Today, in the era of digitalization, companies are experiencing new challenges more than ever before. Innovation, sustainability and resilience are the pillars of successful and sophisticated digital transformation.
X.0 companies can no longer rely on a simple short-term vision of reproducing the same recipes; they are embracing more creative changes in order to produce innovations and create new personalized experiences.
X.0 is the digital reinvention of industry, combining the efficiency of transformation with research. This transformation can create innovation and make companies more resilient, supporting new sustainable growth that creates value.
This session aims to share the most recent contributions in this area. Researchers and professionals are invited to present their work in the following or related fields:
- Resilience;
- Innovation and/or digitalization;
- Sustainability;
- Smart industry;
- Industry 4.0/Industry X.0;
- Artificial intelligence (AI);
- Modeling and simulation;
- Lean manufacturing/supply chain management;
- Safety and maintenance;
- Railways and trains.
Dr. Mario Di Nardo
Dr. Maryam Gallab
Guest Editors
Will the black scenarios of the futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence will be able to reproduce and self-improve, come true in the future?
The theoretical basis for the concept of artificial intelligence has been developing since the 1960s. Since then, black scenarios of futurological visions, in which autonomous robots equipped with artificial intelligence are able to reproduce themselves, self-improve, become independent of human control and become a threat to humans, have been created in science fiction literature and film. Nowadays, given the dynamic development of artificial intelligence and robotics technologies, these considerations have become topical again.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Will the black scenarios of futurological visions known from science fiction films, in which autonomous robots equipped with artificial intelligence will be able to reproduce and self-improve, come true in the future?
Will artificial intelligence-equipped autonomous robots that can reproduce and self-improve emerge in the future?
And what is your opinion about it?
What is your opinion on this topic?
Please answer,
I invite everyone to join the discussion,
Thank you very much,
Best regards,
Dariusz Prokopowicz
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
Will autonomous robots equipped with artificial intelligence, which process much larger amounts of data and information faster than humans, only be useful, friendly and helpful to humans, or could they also be a significant threat?
Robots equipped with artificial intelligence are being built to create a new kind of useful, friendly and helpful machine for humans. Already, the latest generations of microprocessors with which computers, laptops and smartphones are equipped have high computing and processing capacities that exceed those of the human brain. When new generations of artificial intelligence are implemented in computers with high computing and multi-criteria data processing powers, intelligent systems are obtained that can process large amounts of data much more quickly and with a high level of objectivity and rationality in comparison to what is referred to as natural human intelligence. AI systems are already being developed that process much larger volumes of data and information faster than humans. If such artificial intelligence systems and autonomous robots equipped with this technology were to escape human control due to certain errors, a new kind of serious threat to humans could arise.
In view of the above, I address the following question to the esteemed community of scientists and researchers:
Can autonomous robots equipped with artificial intelligence that process significantly larger amounts of data and information faster than humans pose a significant threat to humans?
What do you think?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Warm regards,
Dariusz Prokopowicz
What are the differences in taxation of companies and employees vis-à-vis analogous entities in which all employed workers have been replaced by artificial intelligence?
How does the system of taxation of income generated by business entities and their employees differ from analogous companies, businesses, financial institutions, etc., in which all employed workers have been replaced by artificial intelligence?
In a situation where, in many service and manufacturing companies, a significant part or even the entire workforce is replaced by artificial intelligence technology as part of so-called cost optimisation and profitability improvement, the tax revenue flowing to the state budget from the income taxes of the previously employed workers, as well as the para-taxes and contributions to the social security system, will decrease significantly if the tax system is not modified and adapted to the fourth technological revolution currently taking place. In addition, a long-standing change in the demographic structure of society, known as ageing, is taking place in developed countries, meaning a successive decrease in the number of working-age people relative to those who have already reached retirement age. This will further weaken the state's public finance system in the years to come. If the state is to ensure the provision of public goods and services for the next generations of citizens, and if the social security system and the participatory pension system are to function effectively, the necessary changes, including in fiscal policy, should be introduced now. However, the issue of shaping socio-economic policy, including fiscal policy, social policy and the provision of public goods by the state to citizens, may be a problem mainly in the short term (a few months) or the medium term (up to a few years) rather than the long term (at least a few decades).
In view of the above, I address the following question to the esteemed community of scientists and researchers:
How does the system of taxation of the income generated by economic entities and the employees employed in these entities differ from the analogous companies, enterprises, financial institutions, etc., in which all employed employees have been replaced by artificial intelligence?
What is your opinion on this?
What is your opinion on this subject?
Please respond,
I invite you all to discuss,
Thank you very much,
Best regards,
Dariusz Prokopowicz
We, at the Design Innovation Centre of Mondragon University, are working to better understand the interaction between humans and robots through a user-focused questionnaire. Our Human-Robot Experience (HUROX) questionnaire will gauge human perception and acceptance of robots in an industrial setting. Your participation in completing the questionnaire will greatly help us validate our findings.
Please, answer the electronic questionnaire that can be accessed here: https://questionpro.com/t/AWzTgZwkBl
The estimated time to answer all questions is about 40 minutes.
Your cooperation and support in this research effort would be greatly appreciated. We believe that by working together, we can advance our understanding of human-robot interaction and create better, more intuitive technologies for the future. If you're willing, please share this message with your network of contacts to help us reach even more participants.
Thank you for your cooperation!
We are a group of researchers from the Federal University of Sergipe (Brazil), Universidad Nacional de San Juan (Argentina), Universidad Nacional de Tucumán (Argentina) and Queen’s University (Canada). We are conducting research to assess whether robots can be commanded to perform tasks through gestures in an easy and intuitive way, and we would like to ask for your help.
Please, answer the electronic questionnaire that can be accessed here: https://forms.gle/RBy75MbwhJcUoYJh6.
The estimated time to answer all questions is just over 10 minutes.
It would also help us even more if you share this message with your entire network of contacts. To participate, it is not necessary to have any prior knowledge of robotics and your collaboration will assist us in the search for a simpler way for anyone to use robots in their daily lives.
Thanks for your cooperation!
How should we understand the "collaboration mechanism" between human and machine in these human-machine collaboration (HMC) methods?
Thank you!
What kind of scientific research dominates in the field of artificial intelligence?
Please provide your suggestions for a question, problem or research thesis on the topic of artificial intelligence.
Please reply.
I invite you to the discussion
Best wishes
Will the development of artificial intelligence, e-learning, Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in the replacement of a teacher by robots?
Unfortunately, there is a danger that, due to the development of artificial intelligence, e-learning, learning machines, the Internet of Things, etc., technology could replace the teacher in the future. This will not happen in the next few years, but such a scenario cannot be excluded over a horizon of several decades. Moreover, the work of a teacher is creative work that also involves social education. Currently, it is assumed that artificial intelligence will not be able to fully replace a human teacher, because it is assumed that artistry, social sensitivity, emotional intelligence, empathy, etc. cannot be taught to a machine.
Do you agree with me on the above matter?
In the context of the above issues, the following question is valid:
Will the development of artificial intelligence, e-learning, Internet of Things and other information technologies increase the scope of automation and standardization of didactic processes, which could result in the replacement of a teacher by robots?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Will one of the products of the integration of Industry 4.0 information technologies be the creation of fully autonomous robots capable of self-improvement?
Will the combination of artificial intelligence and machine learning technologies, robotics, the Internet of Things, cloud data processing, Big Data databases automatically acquiring information from the Internet, and possibly other advanced information technologies typical of the current Industry 4.0 technological revolution make it possible to create fully autonomous robots capable of self-improvement?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
In which branches of industry is the development of robotics, work automation and the implementation of artificial intelligence into production processes, logistics, etc. currently the most dynamic?
Please reply
I invite you to the discussion
What kind of scientific research dominates in the field of the development of automation and robotics?
Please provide your suggestions for a question, problem or research thesis on the topic of the development of automation and robotics.
Please reply.
I invite you to the discussion
Thank you very much
Best wishes
Will intelligent robots, as part of the progress of civilization, replace humans in all difficult, cumbersome jobs and professions?
Will robotization, in the course of civilization's progress, replace humans in all difficult, arduous jobs and professions?
Please answer and comment.
I invite you to the discussion.
Best wishes
We are modeling an HRI scenario in virtual reality using Unity and the Oculus Rift. Our problem, though, is that Unity only allows objects to be either kinematic or physics-driven. This means we either have a nicely running inverse kinematics model of a 6-DOF industrial robot but struggle with physical-virtual interaction with objects, or we have physically correct behavior of the robotic parts but lose the inverse kinematics model.
Does anyone of you also work with Unity as a platform for HRI VR and came across these issues, too?
Thank you very much in advance,
Jonas
Dexterous, in-hand robotic manipulation is the process of perturbing a stably grasped object from arbitrary pose A to a desired pose B by appropriately controlling the robotic fingers.
What are in your opinion the most important practical, everyday life applications of such a dexterous robotic manipulation?
For example, in the case of a robot arm hand system both the arm and the hand contribute to the dexterity. I am particularly interested in applications that render hand dexterity absolutely necessary.
In other words, what tasks / applications require a dexterous hand because they cannot be efficiently executed with a simple arm-gripper system?
I conducted a questionnaire which included the Godspeed Questionnaire (https://www.bartneck.de/2008/03/11/the-godspeed-questionnaire-series/).
I have two different groups and want to compare their results. I know that for these kinds of Likert scales I should not use mean values but medians. However, the Godspeed Questionnaire has four groupings, and I want to compare the results of these groupings rather than every single Likert item; otherwise I would use a Mann-Whitney U test.
What would be a sensible statistical method to use in this case?
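One defensible approach (a suggestion on my part, not the only correct one) is to compute a subscale score per participant for each grouping, for example the median or sum of that grouping's items, and then compare the two groups on each subscale score with a Mann-Whitney U test, correcting for the number of groupings tested. A minimal sketch in Python, where the CSV file and column names are hypothetical placeholders:

import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical data layout: one row per participant, a "group" column ("A"/"B"),
# and one column per Likert item of the grouping being analyzed.
df = pd.read_csv("godspeed_results.csv")
anthropomorphism_items = ["anthro_1", "anthro_2", "anthro_3", "anthro_4", "anthro_5"]

# Subscale score per participant: median of that grouping's items.
df["anthro_score"] = df[anthropomorphism_items].median(axis=1)

group_a = df.loc[df["group"] == "A", "anthro_score"]
group_b = df.loc[df["group"] == "B", "anthro_score"]

# Two-sided Mann-Whitney U test on the subscale scores of the two groups.
u_stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
# When testing all four groupings, correct for multiple comparisons,
# e.g. Bonferroni: compare each p against 0.05 / 4.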
Shear forces are a main source of discomfort for exosuit users and of unreliable force transmission from the actuators to the wearable components of an exosuit.
So, in the design of an exosuit, shear forces should be minimized.
How can shear force minimization be achieved in the design?
Are there futurological estimates when mass autonomous humanoid robots will be manufactured in series?
When will mass-produced autonomous humanoid robots be produced in series, realizing the futurological visions known from such science fiction novels and movies as "I, Robot", "A.I. Artificial Intelligence", "Ex Machina", "Chappie", "Bicentennial Man", "Star Wars", "Star Trek", etc.?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
Hello,
I'm currently a master's student with the thesis subject 'Social robots in healthcare: implications for producers'. The goal is to research the different aspects a social robot producer has to take into account when producing such a robot. This could range from decisions about technical aspects (which software to use and how to obtain it, which platform the robot will be built upon, etc.) to how they do market research to find out the preferences of the customers they target. This will be done through interviews with these producers.
Which questions can I ask social robot developers about the implications they have to take into account? Your insights on which questions to ask and which factors to take into account would be very useful.
Many thanks in advance,
Thomas
Industrial robot manufacturers have in recent years developed collaborative robots, and these are gaining more and more interest within the manufacturing industry. Collaborative robots ensure that humans and robots can work together without the robot being dangerous to the human.
However, how can we measure the performance of these systems in Industry 4.0,
particularly regarding the communication mechanisms between human and robot?
What is your vision on that?
Can someone explain to me the benefits of having a relationship with AI? And also the things we should worry about?
“AIs will colonize and transform the entire cosmos,” says Juergen Schmidhuber, a pioneering computer scientist based at the Dalle Molle Institute for Artificial Intelligence in Switzerland, “and they will make it intelligent.”
What do you think? Do you think AI will change everything in our lives? Do you believe that AI will become a threat to humans or not?
And if so, how near is that day?
I am currently investigating means to assess human interaction interest on a mobile robotic platform before approaching said humans.
I already found several sources which mainly focus on movement trajectories and body poses.
Do you know of any other observable features which could be used for this?
Could you point me to relevant literature in this field?
Thanks in advance,
Martin
Can someone provide me with information on how to control a physical robot arm with V-REP?
I am new to robotics and I want to link my V-REP simulation to a real KUKA arm.
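Not an authoritative answer, but one common pattern is to treat the simulation as the motion source and stream its joint angles out through V-REP's legacy remote API, then forward them to the real KUKA controller (e.g., via FRI or RSI on the controller side, which is not shown here). A minimal Python sketch, assuming the remote API server is enabled in the scene and the simulated joints are named joint1..joint6 (these names, the port and send_to_real_arm() are placeholders):

import time
import sim  # legacy remote API bindings shipped with V-REP/CoppeliaSim (older releases name it "vrep")

def send_to_real_arm(q):
    # Placeholder: forward joint angles to the real controller (e.g., via FRI/RSI).
    print("commanded joint angles [rad]:", ["%.3f" % v for v in q])

sim.simxFinish(-1)  # close any stale connections
client = sim.simxStart("127.0.0.1", 19997, True, True, 5000, 5)
if client == -1:
    raise RuntimeError("Could not connect to the V-REP remote API server")

joint_names = ["joint1", "joint2", "joint3", "joint4", "joint5", "joint6"]
handles = [sim.simxGetObjectHandle(client, name, sim.simx_opmode_blocking)[1]
           for name in joint_names]

try:
    for _ in range(200):  # stream for a short demo period
        q = [sim.simxGetJointPosition(client, h, sim.simx_opmode_blocking)[1]
             for h in handles]
        send_to_real_arm(q)
        time.sleep(0.05)  # ~20 Hz; a real robot needs a stricter, safety-checked loop
finally:
    sim.simxFinish(client)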
There have been many emotion space models, be it the circumplex model, the PANA model or Plutchik's wheel. But all of them are used to represent human emotions in an emotion space. The definitions of arousal and valence are easy to interpret for human beings, as we have some understanding of pleasant/unpleasant or intense/non-intense stimuli. However, how can we define the same for a robot? What stimuli should be considered arousing, and what should be pleasant, for a robot? I am interested in reading the responses of researchers in the field and having a discussion in this area. Any reference to relevant literature would also be highly appreciated.
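There is no agreed-upon answer, but one illustrative possibility (purely a toy appraisal-style sketch of my own, not an established model) is to let arousal reflect how intense or rapidly changing the robot's sensory input and actuation effort are, and to let valence reflect how well the robot is doing with respect to its own goals (task progress, battery level, absence of collisions). A minimal Python sketch of that idea:

from dataclasses import dataclass

@dataclass
class RobotStimuli:
    task_progress: float   # 0..1, fraction of current task completed
    battery_level: float   # 0..1
    collision_rate: float  # collisions per minute, >= 0
    sensory_change: float  # 0..1, normalized rate of change of sensor readings
    motor_load: float      # 0..1, normalized actuator effort

def clamp(x, lo=-1.0, hi=1.0):
    return max(lo, min(hi, x))

def appraise(s: RobotStimuli):
    # Valence in [-1, 1]: "pleasant" when goals are being met and nothing goes wrong.
    valence = clamp(0.6 * (2 * s.task_progress - 1)
                    + 0.3 * (2 * s.battery_level - 1)
                    - 0.5 * min(s.collision_rate, 2.0))
    # Arousal in [0, 1]: "intense" when the input changes quickly or effort is high.
    arousal = clamp(0.7 * s.sensory_change + 0.5 * s.motor_load
                    + 0.3 * min(s.collision_rate, 2.0), 0.0, 1.0)
    return valence, arousal

print(appraise(RobotStimuli(0.8, 0.9, 0.0, 0.2, 0.3)))  # calm, positive
print(appraise(RobotStimuli(0.1, 0.2, 1.5, 0.9, 0.8)))  # aroused, negative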
At our university, we want to buy a robot manipulator for teaching modeling and control; in addition, we want to use the robot for research on human-robot interaction. Please share your experiences on this subject.
Research challenges related to human-robot collaboration (HRC) mainly include (a) teaching the robot and avoiding unintended consequences; (b) ensuring that both human and robot have mutual models of each other; and (c) coping with user culture and fears. The last point, (c), relates to the primary recommendation of the DARPA-NSF study on human-robot interaction, which was the need for interdisciplinary research in human-robot interaction, including researchers from the fields of psychology and human factors.
If we want to keep using technologies, we must abandon artificial intelligence, unless we find a way to control it, which is unlikely to happen. Some rules must be adopted to ban or control AI. The issue here is: do we have the right to make rules to ban or control AI in the name of law?
On the KUKA LWR robot, whenever we switch the controllers using FRI (external computer) from Position Control Mode to Cartesian Impedance Control Mode the end effector moves from its position slightly, gives a 'click' sound and then stops functioning showing errors on the KCP. This happens even without a tool attached. Currently we need one person to hold the robot while the switching is happening which is very inconvenient.
1. As soon as the KCP command for opening FRI is executed the robot end effector moves (either falls down or changes its position) and the KCP shows -
LR:FRI Monitor Mode
FRI successfully opened
LR: Cartesian Impedance
Control scheme: Cart.impedance control
LR:FRI cmd.cartPos not properly initialized
ERROR during friStart (result:2)
LR:FRI Interpolation error
2. If on the rare occasion that the FRI opens then upon loading the controller from external host (PC) the following happens -
FRI succesfully opened
LR:Cartesian Impedance
Control Scheme: Cart. Impedance Control
ERROR during friStart (result:2)
3. And if somehow the above works, then upon loading the Impedance Controller from external host (PC) the following happens in one second -
LF:FRI monitor mode
FRI successfully started
LR:FRI Bad Communication Quality!
I have already done All Position Zero and All Torque Zero calibration.
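For what it is worth, the message 'LR:FRI cmd.cartPos not properly initialized' suggests that the commanded Cartesian frame is not being seeded with the currently measured frame before the switch to Cartesian Impedance Control Mode, so the controller jumps toward a stale or zero target at the instant of the switch. A common remedy is to mirror the measured pose (and set soft but non-zero stiffness and damping) into the command structure for a number of cycles in monitor mode before requesting command mode. The sketch below only illustrates that principle; the function names are hypothetical placeholders and not the real FRI API (on the external PC the actual interface is the C++ friRemote/friUdp library):

# Illustrative sketch with placeholder functions - NOT the real FRI API.
def switch_to_cartesian_impedance(fri):
    """fri is a hypothetical wrapper around the FRI data exchange."""
    # 1. While still in monitor mode, mirror the measured pose into the command
    #    for a number of cycles, so cmd.cartPos is "properly initialized".
    for _ in range(100):
        fri.do_data_exchange()                        # one FRI communication cycle
        measured_pose = fri.get_measured_cart_pose()  # frame from the msr data
        fri.set_commanded_cart_pose(measured_pose)    # cmd.cartPos := msr.cartPos

    # 2. Command soft but non-zero Cartesian stiffness/damping before the switch,
    #    so the first impedance cycle does not yank the end effector.
    fri.set_cart_stiffness([2000, 2000, 2000, 150, 150, 150])  # N/m and Nm/rad
    fri.set_cart_damping([0.7] * 6)

    # 3. Only now request command mode / start the Cartesian impedance controller.
    fri.start_command_mode()

    # 4. Keep the command stream alive every cycle, holding (or slowly ramping)
    #    the pose captured at the moment of the switch.
    hold_pose = measured_pose
    while fri.is_running():
        fri.do_data_exchange()
        fri.set_commanded_cart_pose(hold_pose)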
In some activities this is already possible, and robots are already being produced that replace people in specific repetitive tasks.
However, will robots replace people in all activities and functions in the future?
In my opinion, it is not possible for this type of futurological vision to be realized.
People are afraid of such a scenario of the future development of civilization.
The expression of these fears is the predominance of negative futurological visions, known from fiction literature and films, in which a development of civilization where autonomous robots replace people in almost all activities, difficult work and production processes, and achieve a high level of artificial intelligence, generates serious threats to humanity.
Please answer
Best wishes
Special track: Advances and Challenges in Human-Computer Interaction in Digitalized Industry
Special track leaders:
- Pere Ponsa (Universitat Politècnica de Catalunya Barcelona Tech)
- Ramon Vilanova (Universitat Autònoma de Barcelona)
Brief Description:
We are witnessing the massive introduction of digital technologies into industry, which is leading to new production models grouped under the concepts of the Connected Industry and Industry 4.0. It is a challenge to put the current state of this paradigm into context in the industrial sphere (how a conventional industry should be transformed), the academic sphere (what changes in teaching plans are necessary) and the social sphere (what the future of work is and what new competences people will need). It should be noted that some of the dimensions on which the connected industry is based clearly show the presence of methods and technologies that are common in the human-computer interaction discipline.
For example, this highlights the importance of Collaborative Robotics (what degree of interaction is allowed between the person and the robot), Augmented/Virtual Reality (how to design machine maintenance procedures using glasses/helmets and mixed reality applications), Big Data (how to transform data volumes into quality data that provide information in information visualization systems), the Industrial Internet of Things (how effective the synergy between IIoT and people-centered design can be), and 3D Printing and Additive Manufacturing (what the person's emotional response is to, for example, edible products).
In general, it is worth analyzing how the deployment of these dimensions affects traditional human-machine system models, interaction with new devices, human capabilities and limitations in the management of complex systems, and new trends in information visualization.
Topics:
We invite the submission of contributions. Example topics in the track include, but are not limited to:
– Conceptual models of HCI for Industry 4.0
– Mixed reality developments in productive environments
– Human-robot Interactive systems
– Interface design
– Human factors and connected enterprise
– Teaching in HCI and connected enterprise
Objectives:
– Highlight the importance of HCI in the connected industry
– Present the current status of theoretical and experimental research
– Share good teaching practices
– Analyze trends
Expected audience and outcomes:
This special session is proposed as a meeting point to establish a framework for future work between HCI and Industry 4.0 professionals. A wide audience is invited to contribute their knowledge and experience, including academics, researchers, industry experts, and professionals working in this field.
Important Dates:
Submission deadline: April 15, 2019
Author Notification: April 30, 2019
Camera-ready papers due: May 10, 2019
Track Committee:
Josep M. Blanco, Interactius, Spain
Alejandro Chacón, U. ESPE, Quito, Ecuador
Horacio René Del Giorgio, U. Nacional de la Matanza, Buenos Aires, Argentina
Manuel Domínguez, U. de León, Spain
Inmaculada Fajardo, U. de Valencia, Spain
Ganix Lasa, Mondragon Unibertsitatea, Spain
Maitane Mazmela, Mondragon Unibertsitatea, Spain
Alicia Mon, U. Nacional de la Matanza, Buenos Aires, Argentina
Pere Ponsa, U. Politècnica de Catalunya Barcelona Tech, Spain
Francisco Jesús Rodríguez-Sedano, U. de León, Spain
Sebastián Tornil, U. Politècnica de Catalunya Barcelona Tech, Spain
Ramon Vilanova, U. Autònoma de Barcelona, Spain
I am looking for the link lengths and joint diameters of the DLR hand but have not been able to find any article that specifically gives them. If anyone has this information, could you please point me to the relevant article?
Hi,
I am a PhD student working on conversational social robotics. Is there a way to measure the relevance of a robot's responses with respect to the user's utterance? Are there methods to measure relevance automatically (through AI)? If not, which manual techniques are currently available in the literature?
I would highly appreciate any suggestions.
with kind regards,
Tahir
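One automatic, if rough, proxy for response relevance is the semantic similarity between the user utterance and the candidate response, e.g. cosine similarity over sentence embeddings. Below is a minimal sketch assuming the sentence-transformers package and the all-MiniLM-L6-v2 model; any threshold applied to these scores is arbitrary and should be validated against human relevance ratings.
```python
# Sketch: score relevance of a response to a user utterance via
# cosine similarity of sentence embeddings (sentence-transformers).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def relevance_score(user_utterance: str, response: str) -> float:
    """Return cosine similarity in [-1, 1]; higher means more semantically related."""
    emb = model.encode([user_utterance, response], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()

score = relevance_score("Can you recommend a quiet place to study?",
                        "The library on the second floor is usually very quiet.")
print(f"relevance ≈ {score:.2f}")  # calibrate against human judgements before use
```
Embedding similarity captures topical relatedness rather than pragmatic adequacy, so it is usually combined with manual annotation (e.g. Likert-scale relevance ratings with inter-rater agreement) rather than used on its own.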
- the advantages of investigating humanoid robots...
So far I have found the following papers. I think the last two are not published in particularly well-reputed journals.
Title: Social Robots for Long-Term Interaction: A Survey
Year: 2013
Journal/conference: International Journal of Social Robotics
Title: A Systematic Review of Adaptivity in Human-Robot Interaction
Year: 20 July 2017
Journal/conference: Multimodal Technologies and Interaction — Open Access Journal
Title: A review on humanoid robots
Year: 7 February 2017
Journal/Conference: International Journal of Advanced and Applied Sciences
I would highly appreciate your help.
Hi everyone,
I am trying to implement the ESFM introduced in the paper "An Integrated Pedestrian Behavior Model Based on Extended Decision Field Theory and Social Force Model".
Assume the direction the agent faces is the direction of its current acceleration (the total force acting on it). When there is an obstacle between the agent and its destination, the agent first sees the obstacle and turns back; but once it has turned back it no longer sees the obstacle, so it immediately turns around again. This becomes an infinite loop: the agent is stuck in one place, spinning, until the environment changes.
So is there a better definition of agent heading?
Thanks in advance,
Yufeng
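A common way around this flip-flopping (not prescribed by the ESFM paper itself, just a practical sketch) is to decouple the perceptual heading from the instantaneous force: derive the heading from the velocity and low-pass filter it, so that a single repulsive impulse cannot flip the facing direction within one time step. A minimal version with an exponential moving average:
```python
# Sketch: smooth the agent's heading instead of tying it to the
# instantaneous total force, to avoid oscillating "turn around" loops.
import numpy as np

class Agent:
    def __init__(self, position, velocity):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)
        self.heading = self._normalize(self.velocity)

    @staticmethod
    def _normalize(v, eps=1e-9):
        n = np.linalg.norm(v)
        return v / n if n > eps else v

    def step(self, total_force, dt=0.1, mass=1.0, alpha=0.2):
        # Integrate the social-force dynamics as usual.
        self.velocity += (total_force / mass) * dt
        self.position += self.velocity * dt
        # Heading follows the smoothed velocity, not the raw force:
        target = self._normalize(self.velocity)
        self.heading = self._normalize((1 - alpha) * self.heading + alpha * target)
```
Here alpha controls how quickly the facing direction can change; with alpha well below 1, a brief repulsive force from an obstacle bends the path rather than reversing the view direction, which removes the stuck-in-place spinning.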
What do you think of structures that can change shape using compliant mechanisms in a smart way, so as to adapt the plant (e.g. robots, manipulators, airplanes, cars) to different known and unknown environments? Is there any method or concept that has been used satisfactorily so far? Any points of view?
Possibly quite a naive question, but the answer doesn't appear easy to find when wondering why only some children are terrified of robots and others are not. It seems to be more than just individual differences, with significant numbers of kids responding with terror to robots (at least 1 in 10, sometimes higher).
Does anyone have any resources on children who are terrified of any type of robot (humanoid or not, e.g., NAO or Roomba)? Or a suggestion for any measures which might capture this?
I'm looking for two things basically:
1) Syllabi for sampling or survey data analysis courses you've taken or taught. Good and bad examples both wanted.
2) If you've taken a course on this, we'd love your frank thoughts (again, positive or negative) even if you can't share the syllabus. A direct message is fine if you don't feel comfortable commenting "out loud."
Colleague Stas Kolenikov and I are up to something.
Just so you don't think I'm doing my own work, this isn't for a class I need to prepare. :)
Thanks in advance! :) We can continue the conversation here.
Dear researchers,
I am new to R and statistic analysis, and I deeply value all your opinions!
My last experiment was a within-subject experiment with 24 participants in total. The independent variable is the robot's behaviour (28 candidates) and the dependent variable is the emotion category (4 categories: relaxed, happy, sad and angry). Part of the summarised data is shown in the attached figure Snip20160824_2.png, where the y axis is the behaviour candidate and the x axis is the total number of picks across the 4 categories.
My current idea is to analyse each emotion category separately, so I now have data sets such as the one in figure Snip20160824_3.png. For example, for the 'relaxed' category, the dependent variable becomes binary: for each behaviour candidate, the DV is 1 if the behaviour is picked as relaxed and 0 if it isn't, and the number of picks is summed across the 24 participants. The same procedure is applied to all 4 emotion categories.
Here is my question: which method should I use to analyse, for each category, whether some behaviour candidates are picked significantly more often than others? (For instance, in figure Snip20160824_3.png I would like to see whether c1 is picked significantly more often than the others for the 'relaxed' category.)
Sorry for my badly organised English. Thank you!
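One simple first pass (which ignores the repeated-measures structure, since each participant contributes several counts) is a chi-square goodness-of-fit test on the per-candidate pick counts against a uniform expectation, followed by a binomial test for the standout candidate; Cochran's Q test on the participant-by-candidate binary matrix is the stricter within-subject alternative, and R's chisq.test offers the same goodness-of-fit test. A minimal Python sketch with made-up counts:
```python
# Sketch: test whether pick counts for the 'relaxed' category deviate
# from a uniform distribution over behaviour candidates (counts are made up).
import numpy as np
from scipy.stats import chisquare, binomtest

picks = np.array([18, 3, 5, 2, 7, 1, 4, 6])    # hypothetical picks per candidate
stat, p = chisquare(picks)                     # H0: all candidates equally likely
print(f"chi2 = {stat:.2f}, p = {p:.4f}")

# Follow-up: is the top candidate picked more often than chance would predict?
n_total = int(picks.sum())
res = binomtest(int(picks.max()), n=n_total, p=1 / len(picks), alternative="greater")
print(f"binomial test for top candidate: p = {res.pvalue:.4f}")
```
Because the 24 participants each contribute multiple picks, the independence assumption of these tests is only approximate, so the within-subject (Cochran's Q) analysis is the safer one to report.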
I'm looking for a good open-source gesture recognizer (in particular, pointing gesture) that would work on Linux. Either using skeleton tracking (OpenNI) or other RGB-D methods.
While the theory is rather simple (typically, you want to train a hidden Markov model), I'm looking for a robust implementation plus a trained model.
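If a full recognizer proves hard to find, the pointing direction specifically can be approximated directly from skeleton joints (e.g. from OpenNI/NiTE skeleton tracking) as the ray from the elbow through the hand. A small sketch with hypothetical joint coordinates, not tied to any particular tracker API:
```python
# Sketch: estimate a pointing ray and its intersection with the floor plane
# from 3D skeleton joints (coordinates are hypothetical, in metres, y is up).
import numpy as np

def pointing_target_on_floor(elbow, hand, floor_y=0.0):
    """Ray from elbow through hand, intersected with the horizontal plane y = floor_y."""
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    direction = hand - elbow
    if abs(direction[1]) < 1e-6:          # pointing parallel to the floor
        return None
    t = (floor_y - hand[1]) / direction[1]
    if t < 0:                             # pointing upwards, no floor intersection
        return None
    return hand + t * direction

target = pointing_target_on_floor(elbow=[0.2, 1.3, 2.0], hand=[0.5, 1.1, 2.3])
print(target)   # approximate floor point the user is pointing at
```
A geometric estimate like this is often enough to disambiguate which object on a table or floor is being pointed at, with an HMM or learned classifier layered on top only to detect when a pointing gesture occurs.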
Human–robot interaction is the study of interactions between humans and robots. It is often referred to as HRI by researchers. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language understanding, design, and the social sciences.
I would like to obtain a realistic model of the human hand while it interacts with a surgical tool and with soft or hard tissue. I would like to know more about the methods I can use to determine the appropriate DOF for the hand and about how to obtain the data needed to do so.
I saw in the literature that it is possible to control pneumatic artificial muscles with on/off valves, but this has several disadvantages, such as heating of the muscle and difficult control; precise control is very hard to achieve. Could you please suggest suitable valves and control principles to overcome these disadvantages?
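One commonly used middle ground (a sketch only, not a recommendation for specific hardware) is to drive fast-switching on/off valves with PWM whose duty cycle comes from a closed-loop pressure or position controller, which behaves approximately like a proportional valve; proportional or servo pneumatic valves avoid the switching losses altogether at higher cost. A minimal PI duty-cycle sketch, with all constants illustrative:
```python
# Sketch: PI pressure control of a pneumatic artificial muscle by modulating
# the duty cycle of a fast-switching on/off valve (all constants illustrative).
class PIDutyController:
    def __init__(self, kp=0.02, ki=0.05, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def update(self, p_set, p_meas):
        """Return valve duty cycle in [0, 1] from desired and measured pressure [kPa]."""
        error = p_set - p_meas
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        return min(max(duty, 0.0), 1.0)   # saturate; anti-windup omitted for brevity

ctrl = PIDutyController()
print(f"fill-valve duty cycle: {ctrl.update(p_set=300.0, p_meas=240.0):.2f}")
```
In practice one valve fills and a second one vents, each driven from the sign of the control signal, which also reduces the continuous chattering that heats the muscle.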
Hi,
One of the problems with force control is that vibrations may appear when interacting with a stiff environment. A passivity observer/controller can be used, for example, to add damping and thus eliminate vibrations when the algorithm detects passivity violations. In some applications, for example in some human-robot interaction schemes, such an algorithm does not work. We therefore developed a control algorithm that detects vibrations and decreases the control gains when vibrations are detected, thus eliminating them.
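For context, a time-domain passivity observer/controller of the kind mentioned above accumulates the energy flowing through the interaction port and injects dissipation whenever the accumulated energy goes negative. A minimal sketch, with all values and sign conventions illustrative:
```python
# Sketch of a time-domain passivity observer/controller (PO/PC):
# observe the port energy sum(f * v * dt); if it goes negative, add just
# enough dissipation to cancel the deficit. Convention here: f * v > 0 means
# energy is absorbed by the port, so a passive port keeps the sum >= 0.
class PassivityController:
    def __init__(self, dt=0.001):
        self.dt = dt
        self.energy = 0.0            # passivity observer state

    def step(self, force, velocity):
        """Return a corrective (dissipative) force term to add at the port."""
        self.energy += force * velocity * self.dt
        if self.energy < 0.0 and abs(velocity) > 1e-6:
            alpha = -self.energy / (velocity ** 2 * self.dt)   # > 0
            correction = alpha * velocity     # dissipates exactly the deficit
            self.energy = 0.0
            return correction
        return 0.0
```
The appeal of this framing for reviewers is precisely that it comes with an energy argument; a gain-scheduling scheme triggered by detected vibrations does not have the same built-in certificate, which is the difficulty described below.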
The problem we had with the publication of this paper is that the reviewers required stability proofs. The algorithm works very well in practice and has been tested on several robots. However, theoretical stability of such algorithms is hard to prove (if it is possible at all). On the other hand, many control algorithms are published without theoretical stability proofs (for example, algorithms based on neural networks, artificial intelligence in general, fuzzy logic, etc.).
I would like to have your advice on how to tackle this problem and get the paper published.
Thank you very much !
Alex
I would like help with quadcopter programming.
How can I program the 4 motors using an Arduino to control the speed and the direction?
And once the quadcopter is in the air, I want to be able to slow the speed down without affecting the direction.
*Note: my motors are DC, not stepper motors.
Thanks a lot.
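The usual way to do this, independent of the particular flight-controller library, is "motor mixing": each of the four motor outputs gets the same base throttle plus small corrections for roll, pitch and yaw, so lowering only the throttle term slows all motors equally without changing the attitude corrections. The sketch below is hardware-agnostic Python pseudocode (on an Arduino it would be C++, with PWM going to ESCs for brushless motors or to an H-bridge driver for brushed DC motors), and the signs depend on your motor layout and propeller rotation directions:
```python
# Sketch: quadcopter (X configuration) motor mixing; signs and values illustrative.
def mix_motors(throttle, roll, pitch, yaw, pwm_min=1000, pwm_max=2000):
    """Return PWM commands for front-left, front-right, rear-left, rear-right motors."""
    m_fl = throttle + roll + pitch - yaw
    m_fr = throttle - roll + pitch + yaw
    m_rl = throttle + roll - pitch + yaw
    m_rr = throttle - roll - pitch - yaw
    clamp = lambda v: int(min(max(v, pwm_min), pwm_max))
    return tuple(clamp(m) for m in (m_fl, m_fr, m_rl, m_rr))

# Slowing down without changing direction: lower only the throttle term.
print(mix_motors(throttle=1500, roll=20, pitch=-10, yaw=5))
print(mix_motors(throttle=1400, roll=20, pitch=-10, yaw=5))
```
In a real build the roll/pitch/yaw terms come from an attitude controller fed by an IMU; without that stabilization loop a quadcopter cannot hold its direction regardless of how the motors are commanded.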
I want to know the Jacobian matrix equation for the case when the external force is not applied at the robot's end-effector but, e.g., at the elbow or at any other point on the robot arm.
Thank you for your help!
Kind regards,
P.Liang
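The standard answer (stated here as background, not as the asker's own derivation) is to use the Jacobian of the contact point itself: build J_c exactly as for the end-effector, but only up to the link that carries the contact, with zero columns for all joints distal to it, and then tau = J_c^T F. A sketch for a planar 2-link arm with a force applied at the elbow (link length and force values are illustrative):
```python
# Sketch: joint torques from a force applied at the elbow of a planar 2-link arm.
# The contact-point Jacobian has zero columns for joints distal to the contact.
import numpy as np

def elbow_jacobian(q1, l1=0.4):
    """2x2 Jacobian of the elbow point w.r.t. (q1, q2); q2 does not move the elbow."""
    return np.array([[-l1 * np.sin(q1), 0.0],
                     [ l1 * np.cos(q1), 0.0]])

q1 = np.deg2rad(30)
F_elbow = np.array([0.0, -10.0])          # 10 N pushing down at the elbow
tau = elbow_jacobian(q1).T @ F_elbow      # tau = J_c^T F
print(tau)                                # only joint 1 carries the load
```
The same pattern generalizes to a full 6-DOF arm: compute the geometric Jacobian at the contact frame and zero the columns of joints beyond the contact link before applying the transpose.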
And furthermore, to what degree would this induction be stable, and how susceptible to noise would it be? In particular, which sources of noise matter? Any pointer to useful sources is most appreciated.