
Abstract

The article provides views from researchers and practitioners directly involved with robotics applications to find out what they think about the applicability of this new generation of more adaptable and user-friendly robots. Aspects discussed include: personal robots in human societies; robot teams in urban search and rescue; space robots and natural language communication; representation and reasoning; and rationality and emotions.
Trends & Controversies

Sentience in Robots: Applications and Challenges

The articles in this special issue on semisentient robots provide an excellent view of current research. However, we thought you might also appreciate views from other researchers and practitioners directly involved with robotics applications, to find out what they think about the applicability of this new generation of more adaptable and user-friendly robots. We also thought it would be instructive to know what the most experienced and distinguished researchers from the intelligent-robotics field think are the remaining challenges.

The first three contributions come from well-known roboticists. Paolo Dario discusses the acceptability of personal robots in human societies. He concludes that human–robot cooperation should receive particular attention, keeping in mind the different abilities of humans and robots. Robin Murphy, actively involved with using robot teams for urban search and rescue, discusses the need for various levels of semisentience in this context. Pete Bonasso, who has worked in space robotics for a long time, believes the need for sentience is a function of how closely the robot must work with humans while carrying out the task.

Since its early stages, AI has been concerned with knowledge representation, reasoning, and problem solving. These topics make up the core of what is often called classical AI. For various reasons, the general-purpose techniques of this long-standing AI tradition are seldom found in robots. We were, therefore, curious to know whether researchers still believed that these techniques applied to robots. Bernhard Nebel, a theoretical AI researcher who leads a champion RoboCup team, thinks that such representation and reasoning techniques will eventually find their way into robots. He also discusses why this is taking so long to occur in a widespread fashion. And Nils Nilsson, three decades after building Shakey, thinks that logic-based representation and reasoning will be an essential component of sufficiently sentient robots—although different variants might be required for different kinds of tasks.

Rationality, emotions, and what else? For Rodney Brooks, the real challenge now is developing robust methods for visual object recognition, a skill that is effortless for children but very difficult for robots. He argues that this ability is essential for any reasonable understanding of the world.

—Luís Seabra Lopes & Jonathan H. Connell
Who Wants a Nosy Butler?
Paolo Dario, Scuola Superiore Sant'Anna, Pisa, Italy

For years, AI research has focused on developing entities with intelligence comparable to that of humans—that is, with the capability of reasoning and managing knowledge. The Turing test demonstrates how researchers have pursued machine intelligence. Recently, AI research has changed its focus. Recognizing (as Rodney Brooks at MIT first proposed) that interaction with the physical world is critical to developing intelligence, many research groups have tried to develop humanoid "bodies." According to this approach, human-like intelligence relates not only to reasoning but also to learning, perceiving, and interpreting the physical world and to interacting with the world and humans.

These goals are much more difficult to implement in machines than pure reasoning. As Tommaso Poggio pointed out, humans have developed reasoning rules only in the last few millennia; perception seems so natural to us because nature has refined it over millions of years. In fact, we've accomplished what was AI's biggest challenge for decades—defeating a human champion in the game of chess. AI's new, much more challenging goal is to develop humanoid robots that can play, and possibly win, against a human soccer team.

Even when AI is connected to robotics—which provides a physical body for machines—its goals are still to develop entities that can autonomously perceive, reason, learn, and act, thus exhibiting human-like aspects of intelligence. But should these entities really be autonomous? Real-world applications of robotics imply using robots in close interaction with humans. In this nonabstract context, intelligence means interacting with humans and their environment safely, effectively, and pleasantly. In most cases, this interaction requires cooperation between the robot and the human, rather than autonomous behavior.

Researchers in rehabilitation robotics have analyzed in depth the need for semiautonomous robots and human–robot cooperation in real-world applications. My group has obtained experimental evidence that the acceptability of personal robots (robots that assist humans) is lower when the system is autonomous, especially in everyday tasks at home. Who feels confident with—or likes—an autonomous machine that moves around the house and makes decisions? Furthermore, why should we let a robot waste effort trying to recognize our favorite drink when we could help it (and our safety) by simply confirming that the object it is painfully trying to identify is really a bottle?

If the goal is to develop robots that can participate in human society and move and act in domains as complex as human environments, these robots should be able to communicate with humans using human-friendly modalities. They should possess some form of reasoning capabilities and be able to learn their user's preferences and adapt to the changing environment. This means research must consider the differences between natural and artificial intelligence and develop machines that can synergistically exploit such complementarity in their interaction with humans.
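Dario's confirmation example suggests a simple pattern. The sketch below is a minimal illustration, in Python, of such a mixed-initiative loop: the robot acts on its own only when its recognizer is confident, and otherwise asks the human a cheap yes/no question. The function names, the threshold value, and the console prompt are all invented for illustration; nothing here comes from Dario's actual systems.

```python
# A minimal sketch of mixed-initiative object recognition (hypothetical API).
CONFIDENCE_THRESHOLD = 0.9    # assumed acceptability/safety trade-off

def classify(image):
    """Stand-in for a real recognizer: returns (label, confidence)."""
    return "bottle", 0.62     # hypothetical low-confidence guess

def ask_human(question):
    """Stand-in for any human-friendly modality: speech, touch screen, etc."""
    return input(question + " [y/n] ").strip().lower() == "y"

def identify(image):
    label, confidence = classify(image)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label          # confident enough to act autonomously
    # Low confidence: cooperate with the human instead of guessing.
    if ask_human(f"Is this object a {label}?"):
        return label          # human confirmed the robot's hypothesis
    return None               # defer rather than act on a bad guess
```

The design point is the one Dario makes: a one-bit answer from the human is cheaper, and safer, than forcing the robot to resolve the ambiguity alone.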
Semisentience Doesn't Mean Being a Team Member
Robin Murphy, University of South Florida

Our work with rescue workers has led to a surprising discovery about intelligent robots for urban search and rescue (USAR). Although USAR robots need intelligence to operate in the complex, wireless-hostile environment of a collapsed building, it's not clear that the rescuers need robots to be recognizably semisentient.

One approach to USAR robotics concentrates on giving the robot the ability to work in the existing technical-rescue-team structure. In this approach, the robot's relationship with humans is similar to that of a rescue dog—only one or two humans who have bonded with the robot care for and handle it. Such a relationship presumes recognizable semisentience—humans treat the robot as an intelligent entity or near peer.

However, one motivation for USAR robots is that there aren't enough confined-space rescue workers certified for entry into a collapsed structure to respond to a major disaster. Robots that less-specialized rescue workers can operate could alleviate the manpower problem. The catch with the robots-as-a-tool scenario is that a fire rescue worker might receive only 30 minutes' training every few months on a particular rescue tool. So, the rescue worker treats the robot as a generic tool, not a near peer, and up to 10 different workers might use a robot in a day. Likewise, the workers are unlikely to use the same robot each shift. There's no advantage in having a social bond, such as is induced when a human recognizes a robot as semisentient.

Clearly, both the robot-as-a-peer and the robot-as-a-tool need semisentience to perform their tasks competently. It is the human's perception of the robot's sentience that is different. To be treated as a team member, a robot must be able to communicate naturally, explain itself, and adapt to humans—but precisely because the robot is semisentient, it can also take advantage of humans willing to adapt to it. To be a useful tool, a robot must be able to accomplish the same tasks but with a wider range of users, who have less motivation and time to coadapt to a peer.

In the end, robots may have to be much smarter to appear comfortably dumb.
Natural Language Beats Programming
Pete Bonasso, Johnson Space Center, NASA

Sentient robots (those capable of not only perception but also feelings) are useful only when they work in close proximity with humans. Perhaps someday we'll show that emotional biasing of robot behaviors is the most efficient control paradigm. Until then, the only real value of sentient robots is to communicate better with humans. Humans understand each other more completely by using communications channels beyond language—such as speech prosody, facial expressions, and body language. So, it seems only reasonable to endow robots with more than just the ability to interact through menus or simple spoken commands. Given that the emotional model and its subsequent output accurately reflect the robot's state and its understanding of the world, human–robot team efforts should prove more efficient. For space applications, examples of such robots might be the control computers on the space station (the present-day incarnation of the HAL 9000) or human–rover teams exploring and mapping planetary surfaces (the year-2015 version of Star Wars' C-3PO).

However, the usefulness of a robot's emotional aspects decreases as some function of its distance from the human or, more accurately, as a function of the increasing cycle time of communications response. This point is obvious for deep-space probes, whose communications with humans are more akin to sending surface-mail letters than to having a conversation. When the robot works autonomously for long periods of time—such as an intelligent satellite monitoring sunspot activity—there is little need for sentience or anything more than a graphical display and a command menu.

Let me issue a caveat about my comments regarding language ability. One commonly identified practical motivation for endowing robots with a natural language capability is that in many human–robot tasks, the human can't easily access a keyboard or a graphical interface. A singular exemplar is the suited astronaut on Mars working with an intelligent rover. Yet there is another good reason to develop natural language interaction with robots: to obviate the need for the human to learn a new interface with every robot or computer-controlled machine he or she encounters.

In my work with long-running autonomous life support systems, for example, I've seen many instances where the human supervisors wish to make special queries, temporarily override restrictions, or otherwise modify the system behavior on an ad hoc basis. Without a language capability tied to the control system, either the programmer must be on hand to temporarily change the code or to code a new piece of interactive graphics, or we must add new selections to existing menus—often for rarely performed tasks. It is often cheaper to have the programmer manually obtain the information or perform the task at the time it is needed than to build and test new code and place it under configuration control.

So, we are developing a language discourse ability for our intelligent control systems that will let the human perform such ad hoc actions more naturally, without programmer intervention.
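Bonasso does not describe his implementation, but the economics he points to can be sketched. In the hypothetical Python fragment below, ad hoc requests are rows in a pattern table tied to control-system verbs, so supporting a new rare request means adding a row of data rather than building, testing, and configuration-controlling new interface code. The verb names, patterns, and handlers are assumptions for illustration only, not NASA code.

```python
import re

# Hypothetical hooks into the control system; real handlers would call
# the life-support controller rather than print.
def query(variable):
    print(f"querying {variable} ...")

def override(restriction, minutes):
    print(f"overriding {restriction} for {minutes} minutes")

# One row per utterance form. Supporting a new ad hoc request means
# adding a row here, not shipping new menus or interactive graphics.
COMMANDS = [
    (re.compile(r"what is the (?P<variable>[\w ]+)\??"),
     lambda m: query(m["variable"])),
    (re.compile(r"override (?P<restriction>[\w ]+) for (?P<minutes>\d+) minutes"),
     lambda m: override(m["restriction"], int(m["minutes"]))),
]

def handle(utterance):
    for pattern, action in COMMANDS:
        m = pattern.fullmatch(utterance.strip().lower())
        if m:
            return action(m)
    print("Sorry, I didn't understand that.")

handle("What is the cabin CO2 level?")              # ad hoc query
handle("Override the night purge for 30 minutes")   # temporary override
```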
Do Intelligent Robots Need Knowledge Representation and Reasoning?
Bernhard Nebel, Albert-Ludwigs-Universität, Freiburg, Germany
Knowledge representation and reasoning covers much of AI's theoretical aspects and offers methods to represent knowledge and reason with it. In other words, this technology could support the specification of abstract, high-level controllers for robots. However, for three reasons, previous robot systems have not had much of this technology. First, research was needed to make basic robot capabilities (such as self-localization, navigation, and mapping) more robust, so high-level control might not have had the highest priority in most robotics projects. Second, because of these problems, a typical robot's action repertoire was very limited; thus, there was little need to deliberate about what to do next. Third, reasoning and action-planning approaches proved slow and inefficient and thus didn't seem mature enough to be incorporated in a robotic control system.
However, this situation seems to be changing. Several researchers have investigated cognitive robotics or high-level control. Indeed, the time seems ripe for such approaches. Although the basic capabilities are far from perfect, mobile robots can easily act for an extended period of time without getting lost or stuck. Also, in some applications, the size of the robot's action repertoire has become larger than we can manage using pencil and paper, so we clearly need to coordinate different possible actions. For instance, our robotic soccer team, CS Freiburg, has over a dozen different actions, parameterized by the situation [1]. It has become a nontrivial problem to select the right action and switch between different actions. Finally, current action-planning methods and decision-theoretic action selection methods scale well enough to cope with realistic problems in robotic domains. For example, current planners can easily generate plans with 100 steps, whereas planners five years ago failed on 10-step plans. In addition, to reduce the combinatorics inherent in planning, we can use agent-programming languages as an alternative to deliberation. One famous agent-programming language is Golog, a logical specification language that has even been extended to incorporate decision-theoretic notions [2].
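To make the selection-and-switching problem concrete, here is a bare-bones sketch of decision-theoretic action selection in Python. It is a generic illustration of the technique, not CS Freiburg's code: the actions, the utility functions, and the small hysteresis bonus are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Situation:
    have_ball: bool
    dist_to_goal: float   # meters

# Hypothetical utilities: each action scores itself against the situation.
def utility_dribble(s):  return 0.8 if s.have_ball and s.dist_to_goal > 3 else 0.0
def utility_shoot(s):    return 0.9 if s.have_ball and s.dist_to_goal <= 3 else 0.0
def utility_get_ball(s): return 0.7 if not s.have_ball else 0.0

ACTIONS = {
    "dribble": utility_dribble,
    "shoot": utility_shoot,
    "get_ball": utility_get_ball,
}
STICKINESS = 0.05   # assumed bonus for the running action

def select_action(situation, current=None):
    """Pick the action with the highest situation-dependent utility."""
    def score(name):
        return ACTIONS[name](situation) + (STICKINESS if name == current else 0.0)
    return max(ACTIONS, key=score)

print(select_action(Situation(have_ball=True, dist_to_goal=2.0)))   # shoot
print(select_action(Situation(have_ball=False, dist_to_goal=9.0)))  # get_ball
```

The hysteresis bonus addresses the switching problem Nebel mentions: without it, two actions with nearly equal utility can cause the robot to dither rapidly between them.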
In general, more techniques and methods from knowledge representation and reasoning, and from AI generally, will find their way into robots.
1. T. Weigel et al., "CS Freiburg: Doing the Right Thing in a Group," RoboCup 2000: Robot Soccer World Cup IV, P. Stone, G. Kraetzschmar, and T. Balch, eds., Springer-Verlag, New York, 2001.
2. C. Boutilier et al., "Decision-Theoretic, High-Level Agent Programming in the Situation Calculus," Proc. 17th Nat'l Conf. Artificial Intelligence (AAAI 2000), AAAI Press, Menlo Park, Calif., 2000, pp. 355–362.
Logic, but Which Variants?
Nils Nilsson, Robotics Laboratory, Stanford University

Robots that are sufficiently sentient need to be able to represent their environment and reason about it. Many robot actions will be "automatically" evoked by particular combinations of perceptual inputs. And some will be calculated by path-planning algorithms using, perhaps, some kind of configuration-space representation. But robots that can receive, represent, and utilize declarative information, such as "deliveries are not to be made to rooms on the second floor on Tuesdays," would be much more flexible (or sentient?) than ones that merely react to specified perceptual cues.

Given that representing and reasoning with declarative information is important, what is the best form for such representations? There is less to that question than meets the eye. Most of the efficient representational forms, such as structured ones like semantic networks, turn out to be variants of relational calculi. They are efficient because certain frequently used inferences, such as those involving taxonomic and inheritance information, are precoded in their structures. The main question is thus not whether to use some language such as the first-order predicate calculus (FOPC), but rather which structural variants are most appropriate for the kinds of reasoning to be performed in the particular tasks at hand. Some promising work has already been done on robot architectures that combine low-level reactive capabilities with high-level, logic-based declarative reasoning—in particular, research on logic-based subsumption architecture [3].
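Nilsson's delivery rule is easy to render in a relational, FOPC-flavored encoding. The Python sketch below treats the rule as data that a planner consults before committing to an action; the predicate names, the floor map, and the toy plan are invented for illustration.

```python
# Declarative facts and one rule: the planner consults the rule before
# committing to a delivery. All names and the floor map are invented.
FLOOR = {"room201": 2, "room105": 1}            # floor(Room, N)

def forbidden(room, day):
    """Encodes: not deliver(Room, Day) if floor(Room, 2) and Day = tuesday."""
    return FLOOR.get(room) == 2 and day == "tuesday"

def plan_delivery(room, day):
    if forbidden(room, day):
        return None                              # constraint blocks the action
    return [("goto", room), ("handover", room)]  # trivial two-step plan

print(plan_delivery("room201", "tuesday"))       # None: the rule applies
print(plan_delivery("room105", "tuesday"))       # delivery plan proceeds
```

Because the rule is declarative, revising it (say, exempting urgent packages) means changing data, not recompiling the reactive layer, which is Nilsson's flexibility argument in miniature.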
3. E. Amir and P. Maynard-Reid II, "Logic-Based Subsumption Architecture," Proc. 16th Int'l Joint Conf. Artificial Intelligence (IJCAI 99), Morgan Kaufmann, San Francisco, 1999, pp. 147–152.
Toward Better Semisentient Robots
Rodney A. Brooks, MIT AI Lab

Over the last few years, both the academic and commercial worlds have made tremendous progress toward making sentient and semisentient robots a reality. The world of mobile robots in 2001 is almost unrecognizable by 1996 standards.

Building 2D metric maps indoors using sonars or laser scanners is now routine, as are collision avoidance and corridor following based on sonar or vision. Such systems provide the basis for commercial remote-presence robots, built on top of the Internet infrastructure. Such a remote robot with local perception and intelligence can perform useful work in a harsh or hard-to-reach location if a person can provide long-latency supervisory commands.
On the research front, many systems are available for tracking moving objects visually from a (temporarily) static camera, as are commercial systems for real-time stereo depth maps. Active vision systems with saccades, smooth pursuit, and vestibulo-ocular reflexes abound. Finding people on the basis of skin color or face detection is common. Despite our intuition that every face is different, so much commonality exists that there are many robust techniques for finding faces, determining facial features, and detecting gaze direction. Face recognition has received intense research attention, and there is a real payoff, with practical recognition now a reality in many applications. Recent work combining voice prosody understanding, face detection, and gesture and expression understanding has led to the first few robots that can engage in genuine social interactions with robot-naive people. And finally, the first steps have been made toward robots developing a practical understanding of human intent, attention, and knowledge—a set of capabilities that people with autism struggle to develop.
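Of the capabilities Brooks lists, people finding by skin color is simple enough to sketch. The fragment below shows the classic approach of the period: threshold pixels in a brightness-normalized color space and treat the surviving regions as person candidates. The specific thresholds are illustrative assumptions; real systems calibrated them per camera and per lighting condition.

```python
import numpy as np

def skin_mask(rgb):
    """Return a boolean mask of likely skin pixels.

    rgb: H x W x 3 uint8 image. Thresholds are illustrative assumptions;
    period systems tuned them per camera and lighting.
    """
    img = rgb.astype(np.float32) + 1e-6   # avoid division by zero
    total = img.sum(axis=2)
    r = img[:, :, 0] / total              # brightness-normalized red
    g = img[:, :, 1] / total              # brightness-normalized green
    # Skin chromaticity clusters in a small (r, g) region, largely
    # independent of overall brightness.
    return (r > 0.36) & (r < 0.47) & (g > 0.28) & (g < 0.36)

# Toy usage: a gray frame with one skin-toned block.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
frame[1:3, 1:3] = (180, 120, 100)         # roughly skin-toned pixels
print(skin_mask(frame).sum(), "candidate skin pixels")  # prints 4
```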
With all this positive news, what stumbling blocks might impede the future development of sentient robots? Some might argue that we still have not determined how to organize the higher-level control systems and find the right mix between cognitive, reflexive, and homeostatic arbitration. And some might argue that we are missing a fundamental understanding of some key ingredient of all living systems, which is holding back our development of robust, comprehensive artificial creatures. However, there is a more fundamental problem that we all know but have pushed to the back of our consciousness and out of our active research agendas. Fundamentally, all our robots are limited by their inability to recognize objects visually. This skill is effortless for small children and primates, but our robots perform it pathetically. It is a skill that we do not even discuss and for which obtaining research funding is difficult—we have relegated it to the "not likely any time soon" category. We do not even know whether object recognition should be built as a front-end perceptual system that delivers descriptions of the world, or whether it is something that emerges from the interaction of more primitive visual capabilities, higher-level knowledge of the world, and embodied active motion and manipulation of the world.

Our current robots live in a sea of the immediate, perhaps with a 2D map of where undifferentiated stuff exists. Everything else is transient—a face here, a colorful object there—appearing and disappearing from a background of largely misunderstood features. None of our robots can reliably differentiate a cell phone, a stack of business cards, or a wallet, let alone pick out the chairs, tables, televisions, couches, desks, VCRs, file cabinets, or a thousand other object categories that we can all effortlessly perceive and name.

Without this capability, our robots cannot have any reasonable understanding of the world for carrying out complex tasks. Furthermore, they cannot begin to do even the rudimentary nondextrous manipulation of the world that a one-year-old child can achieve. We need some good ideas and innovative research that will lead to the beginnings of real object recognition under general viewpoint and lighting conditions. It might need to rely on interaction with the world, or it might turn out to be solvable passively. Until some bright researchers make a dent in this, our robots and their sentience are going to remain limited.
About the Authors

Paolo Dario is an associate professor of biomedical engineering at the Scuola Superiore Sant'Anna in Pisa and an adjunct professor of mechatronics at the School of Engineering of the University of Pisa. His interests include microengineering, sensing and artificial perception in robotics, mechatronics, and medical applications of robotics (especially computer-assisted surgery, rehabilitation, and space). He received his Dr. Eng. in mechanical engineering from the University of Pisa, Italy. In 1989, he established the Advanced Robotics Technology and Systems (ARTS) Laboratory, and in 1991 the Microfabrication Technologies (MiTech) Laboratory. Contact him at dario@arts.sssup.it.

Robin Murphy is an associate professor in the Department of Computer Science and Engineering at the University of South Florida. Her main research interest is in sensor fusion and fault-tolerant perception for teams of heterogeneous mobile robots. She received her BME in mechanical engineering and her MS and PhD in computer science from Georgia Tech. Contact her at Computer Science and Eng., Univ. of South Florida, 4202 E. Fowler Ave., ENB118, Tampa, FL 33620-5399; murphy@csee.usf.edu; www.csee.usf.edu/~murphy.

Pete Bonasso is a senior scientist for AI & Robotics at TRACLabs, based at Johnson Space Center. Since 1993, he has supported the Automation, Robotics, and Simulation Division investigations of intelligent monitoring and control using layered software architectures. He is codeveloper of the Three Tiered Robot Control Architecture and has applied that architecture to the control of space robotic and life support systems. He received his BS in engineering from the US Military Academy at West Point and his MS in operations research and computer utilization from Stanford. He is a member of the American Association for Artificial Intelligence. Contact him at r.p.bonasso@jsc.nasa.gov.

Bernhard Nebel is a professor at Albert-Ludwigs-Universität Freiburg and head of the Artificial Intelligence research group. He is generally interested in knowledge representation and reasoning and, in particular, description logics, temporal and spatial reasoning, constraint-based reasoning, planning, and belief revision. One of his current favorite application areas is robotic soccer. He received a diploma in computer science from the University of Hamburg and his PhD from the University of Saarland. He is a member of AAAI and ACM, has been a program chair of a number of conferences, including IJCAI-01, and is on the editorial board of a number of journals, including Artificial Intelligence. Contact him at nebel@informatik.uni-freiburg.de.

Nils J. Nilsson is Kumagai Professor of Engineering (Emeritus) in the Department of Computer Science at Stanford University. He spent 23 years at the Artificial Intelligence Center of SRI International, working on statistical and neural-network approaches to pattern recognition, coinventing the A* heuristic search algorithm and the STRIPS automatic planning system, directing work on the integrated mobile robot Shakey, and collaborating in the development of the Prospector expert system. Besides teaching courses on AI and machine learning, he has conducted research on flexible robots that can react to dynamic worlds, plan courses of action, and learn from experience. He received his PhD in electrical engineering from Stanford. He is a past president and fellow of the American Association for Artificial Intelligence and a fellow of the American Association for the Advancement of Science. Contact him at nilsson@cs.stanford.edu; http://robotics.stanford.edu/users/nilsson/bio.html.

Rodney Brooks is the director of the MIT Artificial Intelligence Laboratory (www.ai.mit.edu), Fujitsu Professor of Computer Science and Engineering at MIT, and chairman and CTO of iRobot (www.irobot.com). He has spent the last decade building humanoid robots and is now turning his attention to the difference between living and nonliving matter. He received a BSc and an MSc in pure mathematics from the Flinders University of South Australia and a PhD in computer science from Stanford. His book, Flesh and Machines, will be published by Pantheon in February 2002. He is a member of the IEEE and a fellow of both AAAI and AAAS. Contact him at brooks@ai.mit.edu.