My working definition of an autonomous robot is: a robot is autonomous if it has the computational resources, in terms of both hardware and software, without real-time interference from a human agent, to estimate how it is physically embedded in the environment and to compute the best possible actions, bounded by some constraints, to perceive and move as needed in order to achieve a set of goals. According to this working definition, a robot's ability to estimate its current state (how it is physically embedded in the environment) is an essential component of autonomy. It must then have adequate computational resources at its disposal to take an action within bounds, perceiving the environment further and moving if needed, to achieve a given goal.
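That definition can be condensed into a perceive-estimate-act loop. The sketch below is only an illustration of the idea under toy assumptions (a 1-D world, a perfect sensor, a step-size constraint); none of the names come from a real robot stack.

```python
# A minimal sketch of the perceive-estimate-act loop implied by the
# definition above. The toy 1-D world and all names are illustrative
# assumptions, not a real robot architecture.

def estimate_state(sensor_reading):
    """Estimate how the robot is embedded in the environment (here: 1-D position)."""
    return sensor_reading  # a real robot would filter/fuse noisy readings

def best_action(state, goal, max_step=1.0):
    """Compute the best action bounded by a constraint (maximum step size)."""
    error = goal - state
    return max(-max_step, min(max_step, error))

def run(start, goal, steps=20):
    position = start
    for _ in range(steps):
        state = estimate_state(position)   # perceive / estimate
        action = best_action(state, goal)  # decide within bounds
        position += action                 # move
        if abs(position - goal) < 1e-9:    # goal achieved
            break
    return position

print(run(0.0, 5.0))  # reaches the goal with no human in the loop
```

The essential point is that the human supplies only the goal; state estimation and action selection happen inside the loop.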
First define "Robot" and then define "Autonomous". I think of the two concepts as lying on continua.
I think of a "Robot" as something between a completely dumb machine and, at the higher end, a synthetic human (or better): certainly a high-functioning decision-making system with complex environmental awareness.
"Autonomous" is an equally slippery concept. We are all reactive to our environments, and the more ability we have to perceive our environment, the more our adaptation to and interaction with it becomes part of our "self". By that reasoning, dumb machines are the most "Autonomous", since the environment has the smallest impact on them.
The other definition of "Autonomous" might be a measure of "independence from the control of another sentient being". This becomes a philosophical question, as this measure can vary at different points in time. An "Autonomous robot" may be completely dependent upon humans for its creation, and at any point where it needs maintenance, recharging, or its final termination. At other times, however, it may act independently and require little interaction to operate.
However, the very "judgement" about "Operation" is subjective and opens up another question of control. How does a robot continue to know that it is autonomous when it may no longer be functioning as desired because the political situation has changed? What if humans want to control it (think of a device whose mission has ended and which needs to be decommissioned)? In this context, what is the difference between autonomous and defective? The definition changes based on information that may not be present in the robot's environment.
Your definition sounds a lot like a self-aware mobile AI, which I would put fairly high up the complexity continuum. Consider how your definition applies to other categories of devices: "smart home appliances", "fixed and semi-mobile industrial robots", "healthcare devices", "recreational systems", "military devices"...
Ivan Plander, Alexander Dubcek University of Trencin in Trencin:
A robot is autonomous if it can create a model of the outer world in its memory, can move within this model, and can discharge given tasks. The model of the outer world should be described in a knowledge representation language and should contain a description of the objects in the scene and the relations between them. Recognition and perception of the outer world and the building of the mathematical model are of fundamental significance; the control of the movement is a matter of technical performance only.
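As a toy illustration of the kind of world model this definition describes (my own sketch, not Plander's formalism), a memory-resident model can be as simple as a set of objects plus a set of relations between them, which the robot queries before acting:

```python
# Illustrative sketch only: objects in the scene and relations between
# them, held in memory as a queryable model of the outer world.

objects = {"cup", "table", "robot"}
relations = {("cup", "on", "table"), ("robot", "near", "table")}

def holds(subject, relation, obj):
    """Does the model assert this relation?"""
    return (subject, relation, obj) in relations

def related_to(subject):
    """Everything the model says about one object."""
    return [(r, o) for (s, r, o) in relations if s == subject]

print(holds("cup", "on", "table"))  # True
print(related_to("robot"))          # [('near', 'table')]
```

A real knowledge representation language would add types, axioms, and inference on top of this bare structure.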
Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE
I really like your short definition, as it reduces the autonomous robot to a core aspect. I would like to know your opinion on this detail: if there is an authority outside the machine defining the tasks or goals (especially during runtime), is the machine still autonomous? In other words, must a fully autonomous machine be able to define its own goals?
Sorry to ask a question within another person's question, but maybe this aspect will also help to find one definition (as I think there are different ones).
As already noted, autonomy has levels of complexity. These might range from simple perception-to-action linking, like the Roomba, whose simple task and goal automation results in a cleaner floor and a recharged robot, up to much more complex tasks and goals, e.g. autonomous exploration and sample-testing for a planetary rover, or a self-driving vehicle. You might consider Rodney Brooks's subsumption architecture in your research - http://dspace.mit.edu/bitstream/handle/1721.1/6432/AIM-864.pdf - these ideas were explored long ago and are still relevant to this discussion in my opinion. Simply defining "autonomous" and "robot" without considering the complexity of the goals and tasking tells only part of the story. Interaction with these systems for shared control is yet another interesting aspect, where goals or tasks might be user-directed as well as autonomous.
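For readers unfamiliar with the idea, here is a bare-bones sketch in the spirit of Brooks's subsumption architecture (an illustration only, far simpler than the original): behaviours are ordered layers, and a triggered higher-priority layer suppresses the ones below it.

```python
# Sketch of layered, priority-ordered behaviours. Layer names, the
# percept format, and action strings are assumptions for illustration.

def avoid_obstacle(percept):
    """High-priority reflex layer."""
    if percept.get("obstacle"):
        return "turn-away"
    return None  # layer not triggered

def wander(percept):
    """Lowest layer: always applicable default behaviour."""
    return "move-forward"

LAYERS = [avoid_obstacle, wander]  # highest priority first

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:   # higher layer subsumes the rest
            return action

print(act({"obstacle": True}))   # turn-away
print(act({}))                   # move-forward
```

Competence emerges from the interaction of simple layers with the environment, with no central world model, which is exactly why the architecture scales from Roomba-like devices upward.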
Everything depends on how you define an autonomous robot. I invite you to read the definitions of the International Federation of Robotics (http://www.ifr.org), which defines what industrial robots and service robots are. Each of these definitions explains the level of autonomy, and the federation publishes statistics and new international technical regulations that can offer you information on what is currently counted as a robot. For my part, nowadays anything can be called a ROBOT...
Autonomous robots are robots that can perform desired tasks in unstructured environments without continuous human guidance. Many kinds of robots have some degree of autonomy. Different robots can be autonomous in different ways. A high degree of autonomy is particularly desirable in fields such as space exploration, cleaning floors, mowing lawns, and waste water treatment.
Some modern factory robots are "autonomous" within the strict confines of their direct environment. It may not be that every degree of freedom exists in their surrounding environment, but the factory robot's workplace is challenging and can often contain chaotic, unpredicted variables. The exact orientation and position of the next object of work and (in the more advanced factories) even the type of object and the required task must be determined. This can vary unpredictably (at least from the robot's point of view).
One important area of robotics research is to enable the robot to cope with its environment whether this be on land, underwater, in the air, underground, or in space.
A fully autonomous robot has the ability to
Gain information about the environment.
Work for an extended period without human intervention.
Move either all or part of itself throughout its operating environment without human assistance.
Avoid situations that are harmful to people, property, or itself unless those are part of its design specifications.
An autonomous robot may also learn or gain new capabilities like adjusting strategies for accomplishing its task(s) or adapting to changing surroundings.
Autonomous robots still require regular maintenance, as do other machines.
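The abilities listed above can be condensed into a single toy decision cycle. This is my own illustration, not a standard formulation; the percept format, battery threshold, and action names are all assumptions.

```python
# One decision cycle covering the listed abilities: gain information,
# operate for extended periods (manage energy), move to do the task,
# and avoid harmful situations. All details are illustrative.

def step(env, battery):
    """Return the action for one sense-decide cycle."""
    obstacle = env.get("obstacle", False)  # gain information about the environment
    if battery < 0.2:
        return "recharge"                  # sustain extended unattended operation
    if obstacle:
        return "avoid"                     # avoid harm to people, property, itself
    return "work"                          # move and perform the task

print(step({"obstacle": False}, battery=0.9))  # work
print(step({"obstacle": True}, battery=0.9))   # avoid
print(step({}, battery=0.1))                   # recharge
```

Learning or adaptation, the further capability mentioned above, would amount to adjusting this policy from experience rather than hard-coding it.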
An autonomous robot is an intelligent physical agent or machine that is built to simulate the actions of humans. It is situated in an environment: it has awareness of itself and of others in the environment, perceives that environment, notes the changes that occur in it, reacts to those changes, and takes appropriate decisions.
In the sci-fi literature, there are many examples of autonomous robots, but two stand out in my mind and are readily available. The first is Commander Data of the Star Trek: The Next Generation series. The other is the type of robot in Isaac Asimov's "I, Robot" series of books and short stories. I just read one of the early books, "The Caves of Steel", and was impressed by how up-to-date it seemed. I have not included the two robots featured in the "Star Wars" series of movies, because they primarily serve as comic relief rather than being major characters. I am currently working on the development of a stationary autonomous Artificial Intelligent Agent - brains but no brawn - which would meet the intellectual requirements but not the mechanical ones discussed in previous posts. After extensive research, I believe that almost all of the pieces are available as separate units, but they need to be integrated into a cohesive whole. It would then be a minor step to go from there to a mobile version. The biggest problem that I can foresee is the power supply: one small enough to be mobile, yet powerful enough to keep both the locomotion and the intelligence parts of the machine going. In this regard, it is instructive to note that the brain is the most energy-consuming part of the body when the body is at rest.
In the research and development projects that I led years ago, the 'continuum' of the definition of autonomy that Duncan Blair cited above was recognized. For that reason we defined "autonomy" for the purposes of our project as follows:
"Autonomy for our purposes is the capability of the system to select, from two or more available courses of action, the action that it will follow without human intervention and where the correct course of action cannot be known a priori."
This was for a robotics system, and it permitted us to claim that simply closing a loop, performing force-moment reaction control, or following a predefined linear script of motion would not be sufficient to be deemed autonomy. I must emphasize that this definition was used only for the purposes of our project to clarify our goals and requirements.
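Under that project definition, a fixed script does not count; the system must choose among courses of action from information available only at runtime. The toy chooser below is entirely my own illustration of that distinction (the actions, the stiffness sensor, and the scoring functions are made up):

```python
# Illustrative only: select a course of action from runtime sensing
# (part stiffness unknown a priori), not from a predefined script.

ACTIONS = {
    "grip-soft": lambda stiffness: 1.0 - stiffness,  # scores well for fragile parts
    "grip-firm": lambda stiffness: stiffness,        # scores well for rigid parts
}

def choose(sensed_stiffness):
    """Pick the action whose predicted outcome scores best for this part."""
    return max(ACTIONS, key=lambda name: ACTIONS[name](sensed_stiffness))

print(choose(0.9))  # grip-firm
print(choose(0.1))  # grip-soft
```

The defining feature is that the branch taken cannot be written down in advance: it depends on a quantity the robot only learns by sensing.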
There are some recent works trying to standardize core notions in robotics; they may be useful for you. In the following reference, you can find a paper that presents the first results regarding CORA (the Core Ontology for Robotics and Automation). This ontology has been developed by an IEEE working group as a standard for knowledge representation in robotics and automation.
-Prestes, Edson, et al. "Towards a core ontology for robotics and automation." Robotics and Autonomous Systems 61.11 (2013): 1193-1204.
A robot can be autonomous but not conscious. Autonomy can be summarised in terms of viability with respect to a given environment. Thus, for a limited environment, autonomy can be a simple proposition. For a robot that is fully interactive in real time with respect to our (human) environment, the robot can be viable and hence autonomous, but still not conscious.
An insect is autonomous with respect to our environment but not knowingly conscious. An increase in viability corresponds to an increase in adaptability (otherwise known as intelligence). Consciousness is a major improvement in viability for complex environments. For limited environments (or restricted complex ones), autonomy can be more efficient without consciousness.
The point I am trying to make is best illustrated by the animal kingdom: autonomy is the norm. However, all life forms have varying degrees of adaptability (responses can be tuned, for example), and this corresponds, to a lesser degree, to the property of intelligence. The test is the degree of viability with respect to the given environment, while the mechanism is adaptability (measured as the variable labelled "intelligence").
Thus autonomy does not imply consciousness, although consciousness may be needed to improve viability in more complex environments.
In my career, I had men working under me, some more autonomous than others; some I had to give detailed instructions to frequently, while for others, I only had to give a general idea of what was needed and they would carry out the job, unless they ran up against a problem they could not handle. All of them knew enough not to walk into walls, fall into holes, or cut off their hands, although sometimes they did have accidents. While they were not autonomous robots, there is a helpful analogy here as to the range of what might be called autonomous robots. Perhaps a minimally autonomous robot would be like the men who needed frequent detailed instructions, but not continuous control, such as the set program in an industrial robot. I do not like the term "industrial robot", as they are called. They should be called program-controlled generalized material manipulators.
In our project we had a detailed discussion about the autonomy of robots and the meaning of the word "autonomous". The term had a formal impact on our project. Our findings can be summarized in the following way. The word "autonomous" without further refinement is meaningless, undefinable, a buzzword. Instead, we can define the term "autonomous in" (e.g. autonomous in cleaning a floor, autonomous in driving, autonomous in scouting). This second term is definable: we can state clear conditions that must be satisfied for the autonomous carrying out of a task. The definition mentioned by Seddik Khemaissia seems good. Let me restructure it.
A robot/machine is autonomous in performing a task if and only if it can reach the desired goal (e.g. a clean floor) without the intervention of a supervisor (human or another machine), regardless of external conditions.
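Phrased operationally, this "autonomous in a task" definition becomes a predicate over trial runs: the goal must be reached with zero supervisor interventions under every tested external condition. The run record format below is my own assumption for illustration:

```python
# Illustrative predicate for task-scoped autonomy: all trials, across
# varied external conditions, must reach the goal with no interventions.

def autonomous_in_task(runs):
    """runs: list of dicts like {"goal_reached": bool, "interventions": int}."""
    return all(r["goal_reached"] and r["interventions"] == 0 for r in runs)

trials = [
    {"goal_reached": True, "interventions": 0},  # clean floor, condition A
    {"goal_reached": True, "interventions": 0},  # clean floor, condition B
]
print(autonomous_in_task(trials))  # True

trials.append({"goal_reached": True, "interventions": 1})  # a human had to help
print(autonomous_in_task(trials))  # False
```

In practice "regardless of external conditions" can only be sampled, not proven, which is why such claims are always relative to the tested conditions.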
Autonomous = closed-loop control with respect to the environment is sufficient; conscious = open-loop modelling of the environment.
If someone infers control from minimal instruction, then they are using open-loop modelling and prediction to perform the inference. They might be clever and do everything right, but they are not autonomous. An ant responding to pheromones is autonomous: it obtains feedback from the environment, not its brain, and steers its course accordingly - that is closed-loop with respect to the environment.
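The ant example can be sketched as a purely closed-loop controller: each step uses only fresh environmental feedback (the local pheromone gradient), with no internal predictive model. The scent field and step size below are made-up assumptions:

```python
# Closed-loop sketch of the pheromone-following ant. The toy scent
# field, step size, and iteration count are illustrative assumptions.

def pheromone(x):
    """Toy scent field peaking at x = 3."""
    return -abs(x - 3)

def ant_step(x, delta=0.5):
    """Step in whichever direction environmental feedback says is stronger."""
    return x + delta if pheromone(x + delta) > pheromone(x - delta) else x - delta

x = 0.0
for _ in range(10):
    x = ant_step(x)  # no memory, no model: feedback alone steers the course
print(x)  # settles near the scent peak at 3
```

An open-loop (model-based) planner would instead compute the whole path in advance from a map, which is exactly the distinction drawn above.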
To answer the question "What are autonomous robots?" we should distinguish between cognitive and motivational autonomy. To be cognitively autonomous, a robot must be able to execute some task without human help. To be motivationally autonomous, a robot must also be able to decide which task to pursue at any given time with its behaviour. Current robots only have cognitive autonomy because they are practical tools and we construct them so that they can do some particular task. But robots can also be, and will increasingly be, scientific tools that help us to better understand human beings (see my just-published book "Future Robots: Towards a Robotic Science of Human Beings"), and human beings - and also nonhuman animals - have both cognitive and motivational autonomy. They not only do most of what they do without external help, but they also autonomously decide which motivation to try to satisfy at any given time with their behaviour. For example, they may decide to stay at home instead of going out to eat, to stay at home to read a book instead of sleeping, and which book to read. Robots as practical applications should be as cognitively autonomous as possible, but they should not be motivationally autonomous, because being motivationally autonomous can make them less practically useful or even dangerous. But only if we construct motivationally autonomous robots can they help us to better understand many aspects of the behaviour of animals and human beings, such as actually feeling emotions - and not just expressing unfelt emotions with their faces - and inter-individual differences not only in cognitive abilities but also in personality and character.
"Cognitive" robots don't exist yet. They will be based on autonomous robots: first master the basics of an environment (autonomics), then add further intelligence through goal-based planners (the basis of cognition). Thus we need the distinction between viable autonomic systems and viable cognitive-with-autonomics systems. Note that this always comes with the concept of control, i.e. feedback from something to the system. Internalised feedback cannot occur until there is sufficient autonomic/external feedback (e.g. with a consistent environment). That is why we can't build real-time AI systems yet.
A team of roboticists from the Korea Advanced Institute of Science and Technology claimed a $2 million prize on Saturday that was offered by a Pentagon research agency for developing a mobile robot capable of operating in hazardous environments.