T-REX: Partitioned Inference for AUV
Mission Control
Kanna Rajan & Frédéric Py
Monterey Bay Aquarium Research Institute
7700 Sandholdt Rd.
Moss Landing, CA, 95039
{kanna.rajan,fpy}@mbari.org
1 Introduction
The coastal ocean is a complex environment driven by the interaction of atmospheric,
oceanographic, estuarine/riverine, and land-sea processes which result in dynamic coastal
features such as fronts, blooms, anoxic zones and plumes (estuarine, oil, pollutant). Often these phenomena have unpredictable spatial and temporal expressions, and models (if they exist) have poor accuracy. Observing and sampling such features using a robotic
platform is therefore a challenge. Fig. 1, for example, shows a representation of three
phenomena of interest observed simultaneously in Monterey Bay: fronts, intermediate nepheloid layers and blooms. Each of these phenomena can occur over a wide range of
scales, from tens of kilometers (fronts and blooms) down to tens of meters.
While autonomous underwater vehicles (AUVs) have emerged as cost-effective and
capable robotic vehicles with sufficient power and diverse payloads to robustly observe
these phenomena, under-sampling remains a persistent problem (Rudnick & Perry 2003).
One significant reason has been how these vehicles have been programmed to observe such
challenging and dynamic phenomena. Typically, waypoint-based pre-defined mission de-
signs are uploaded to the AUV; specialized code fragments called behaviors are designed
for the specific mission and a choice of behaviors for the mission is used on the com-
putational stack (Bellingham & Leonard 1994, Carreras, Ridao, Garcia & Battle 2006).
Swapping in and out of these behaviors using conditionals forms the basis for adaptation
and safety in the vehicle.
There are a number of drawbacks with this approach. First, the underlying motivation
in such Subsumption based architectures (Brooks 1986) to write simple software machin-
ery gives way to code bloat as the natural tendency to make the vehicle more adaptive
is shoehorned into behaviors. In our experience this often ends up with procedural code
that is too encompassing and difficult to maintain by anyone but the author(s). Sec-
ond, if conditional adaptivity is not encased in these uber-behaviors, then it is usually
hard-coded into the control stack. In both cases, the causality of such behavior swaps
during execution is opaque to human understanding (and subsequent change), leading
to a re-implementation of the control stack and/or the behavior(s). Third, this system
design continues to force the operator (more often a scientist with little interest in or understanding of software engineering) to think about how to make the vehicle perform a
specific set of patterns rather than stating intent.
Figure 1: Coastal ocean phenomena targeted for studies using adaptive control on a robust operational AUV. Along the black surface tracks from a Sept. '07 mission, the AUV executed a vertical Yo-Yo to map the water column in high resolution for key phenomena such as fronts, intermediate nepheloid layers (INLs), and phytoplankton blooms and patches. Image courtesy: John Ryan, MBARI.

Not only is the operator forced to confront lower level details primarily to make the vehicle return data, but must do so at lower levels of abstraction such as waypoints and
track lines. Further, the operator needs to scrupulously ensure interaction between be-
haviors is not compromised when running concurrently in the execution stack. Getting
the vehicle to do anything less structured than fixed surveys then becomes a challenge. Finally and crucially, in such a control paradigm, the controller is responsive to its
immediate environment (e.g., passing through a front with a temperature gradient, de-
tecting an obstacle in the vehicle’s path), causing it to generate commands that disregard
impacts to future actions or state. This prevents any substantial adaptation of mission
structure essential to improving operation in a dynamic environment and to pursuing
unanticipated science opportunities. Safe and effective adaptation however, requires a
balanced consideration of mission objectives, environmental conditions and available re-
sources. Moreover, as mission durations increase with more capable hardware, it becomes
challenging to fully describe a mission while having sufficient flexibility to capture un-
predictable events. While traditional behavior based approaches have proved adequate
to survey scalar fields, the increasing need to observe dynamic events requires a balance between reactive behaviors and more deliberative projections of planned actions. Besides mitigating these shortcomings, there is an important need to balance near term reactions with their long term impact in order to study dynamic processes efficiently and
cost-effectively while abstracting commanding from low-level vehicle control.
To mitigate these challenges, we have designed, developed, tested and deployed the
Teleo-Reactive EXecutive (T-REX), an on-board adaptive control system that integrates
artificial intelligence (AI) based planning and probabilistic state estimation in a hybrid
executive (McGann, Py, Rajan, Thomas, Henthorn & McEwen 2008, McGann, Py, Ra-
jan, Ryan & Henthorn 2008, Py, Rajan & McGann 2010). Probabilistic state estimation
integrates a number of science observations to produce a likelihood that the vehicle sen-
sors perceive a feature of interest. Onboard planning and execution enables adaptation
of navigation and instrument control based on the probability of having detected such
a phenomenon. It further enables goal-directed commanding within the context of pro-
jected mission state and allows for replanning for off-nominal situations and opportunistic
science events. The framework, in addition to being used on an AUV, is general enough to be used for controlling a personal robot (McGann, Berger, Boren, Chitta, Gerkey, Glaser, Marder-Eppstein, Marthi, Meeussen, Pratkanis & Wise 2009) and is deployed on a European planetary rover testbed (Ceballos, Bensalem, Cesta, de Silva, Fratini, Ingrand, Ocon, Orlandini, Py, Rajan, Rasconi & van Winnendael 2011).

Figure 2: The MBARI Dorado AUV being deployed from its support vessel, the R/V Zephyr. The figure on the right shows 10 Gulper samplers in the AUV mid-body.

Probabilistic state estimation
is discussed elsewhere (McGann, Py, Rajan, Ryan, Thomas, Henthorn & McEwen 2008)
and is out of scope for this chapter.
This chapter is organized as follows. After introducing a motivating example in Sec-
tion 2, we present the key ideas behind the T-REX framework in Sections 3 & 4 that
embed automated goal-oriented planning and execution for autonomous mission control.
Results from field missions with MBARI’s Dorado AUV platform (Fig. 2) are in Section
5. We conclude with Section 6.
2 A Motivating Example
Oceanographic features are often heterogeneous and dynamic, spread over large spatial
scales (from meters to kilometers in extent) with dynamic biological activity across the
temporal spectrum (in the order of hours to weeks), making it challenging for robotic
platforms to autonomously track and sample. A prominent example of such a process is coastal algal blooms, which are patchy and can cover large (>50 sq. km) coastal zones. Persistent observation of such dynamic events dictates that our robotic assets track and sample such patches, which can evolve rapidly due to inherent bio-geochemical activity, advection, as well as diffusion within the water mass they reside in.
Figure 3: An illustration of dynamic patch mapping and tracking using a 'lawnmower' pattern. The vehicle needs to be retargeted at every iteration, since the spatial definitions evolve due to diffusion or advection.
Fig. 3 illustrates a targeted approach
where the AUV moves along with the patch,
to give multiple “snapshots” of this feature
during its evolution. While the scientific goal
to characterize the ecology of such amor-
phous patches necessitates gathering data
with a comprehensive set of sensors (such
as CTD, Backscatter, Nitrate, O2, particle
size, chlorophyll fluorescence), acquiring wa-
ter samples to return to shore for laboratory
analysis has become equally important. Our
Dorado vehicle therefore is equipped with
water samplers (Fig. 2), each usable once
during the course of a mission (to avoid con-
tamination). Acquiring these samples in a
manner that is spatially distributed within a
feature of interest becomes critical.
An a priori specification of survey parameters is inadequate given the poor skill of predictive ocean models and the often little to no availability of synoptic views of the survey
area. Our primary challenge then, is to fly ’blind’, detect the feature of interest and ap-
propriately trigger samplers. Mission design would ideally have the vehicle adapt to the
patch and also make shorter term reactive decisions traded with longer term deliberation
to decide when the limited sampler resources ought to be used, based on the ability of
the vehicle’s sensors to identify the patch.
Identifying subsequent iterations of the survey presents a different challenge since the
vehicle lacks the synoptic views necessary to predict where the patch might be headed;
incremental dispatch of survey plans or waypoints, such as in glider operations, is not sustainable for powered AUVs. Sending data back to shore for human-directed navigational change requires balancing time spent on the surface transmitting against the utility of continuous observation. Additionally, total mission time is bounded by vehicle speed
and battery capacity (Willcox, Bellingham, Zhang & Baggeroer 2001) and needs to be balanced against known and potential future objectives. Fully descriptive surveys designed a priori are therefore inappropriate, as is mitigating the above constraints piecemeal, without systemic consideration of the various constraints, both for sustained software engineering and for dealing with complex interacting decisions.
3 Key Concepts in T-REX
T-REX was designed for the kinds of dynamic patch tracking missions illustrated in Fig.
3, keeping in mind the competing demands of deliberation, reaction, dynamic execution,
complexity in computation and sustained software engineering. In this section, we detail
the core design principles.
3.1 Mission, goals, actions
Figure 4: An abstract view of the relation between an action and mission goals.
In the ontology of autonomous sys-
tems, different levels of abstraction
represent different kinds of interactions
with the world. A high level mission
qualifies the overall purpose of the
mission and is itself described as a set
of objectives – or goals – that should
be fulfilled in the scope of the mission
lifetime. These goals can be dynamic
and dependent on extraneous events
(for example acquiring a water sam-
ple or performing a ’Yo-Yo’ saw-tooth
transect) driven by sensor data. In
order to complete these goals, the vehicle needs to take actions that allow it to fulfill its goals and consequently its mission.
The mapping of actions for a set of objectives is not unique and often depends on the
situation encountered by the system as it interacts with its environment. Fig. 4 illustrates
this relation between state variables (Ghallab, Nau & Traverso 2004) representing the
variation of the world with the passage of time, actions with their preconditions and effects
and mission goals. An action is itself just one of many ways to alter the state of the world
in order to fulfill mission objectives. Preconditions provide the context under which an
action can be executed, while effects indicate how this action alters world state. Not all
effects of an action are relevant for stated mission objectives and alternate actions could
result in similar outcomes. Further, robot actions need to resolve unexpected situations
which either threaten the mission or, conversely, generate an opportunity to increase
overall utility.
Traditionally planning has been considered computationally intensive (the general
planning problem can be worse than NP-complete (Ghallab et al. 2004)). A major rea-
son for this belief had to do with the role of the planner (which was assumed to be
generative) and its place in the architecture (infrequently called to re-plan the entire
mission in off-nominal conditions) within a sense-plan-act paradigm. This can limit the
reactivity of a robot especially when the environment could change at a rate faster than
the planner can plan. 3-tiered architectures (Gat 1998) and their impact on high-profile
experiments such as (Muscettola, Nayak, Pell & Williams 1998, Jónsson, Morris, Muscettola, Rajan & Smith 2000, Rajan, Bernard, Dorais, Gamble, Kanefsky, Kurien, Millar,
Muscettola, Nayak, Rouquette, Smith, Taylor & Tung 2000), (Chien, Sherwood, Tran,
Cichy, Rabideau, Castano, Davis & Boyer 2005) created a slew of variants in robotic
planning. Lessons learned led to a critical understanding of the relationship between
planning and plan execution. This in turn had an impact on knowledge representa-
tion, architecture and interleaved planning and execution in IDEA (Muscettola, Dorais,
Fry, Levinson & Plaunt 2002, Finzi, Ingrand & Muscettola 2004) (planning for mis-
sion duration as well as reactively re-planning) and CASPER (Chien, Knight, Stechert,
Sherwood & Rabideau 2000) (iteratively and incrementally repairing an anytime plan
(Zilberstein 1996)).
In these approaches to robot planning, the focus has been on action execution without
an easy way to link the actions requested to the purpose they serve. Decision making
is monolithic, requiring one central entity to adapt in the face of divergent and dynamic
environmental change. Making and maintaining planning domain models often requires
a careful balance between interacting parts.
Our architecture falls within the continuum of these past efforts but pushes the state
of the art principally in two ways. First, we distribute computation into smaller frag-
ments in systematic functional components called reactors and use a divide-and-conquer
strategy for effective problem solving. The overall system is a composition of reactors
with each reactor deliberating on a subset of the state variables being tracked. Planning
is distributed between these components which can interact through state updates and
goals for future values of a state variable. Second, planning and execution within this
framework are tightly coupled, broadening the semantics of execution to include the dispatching of goals from one reactor to another. The focus shifts from actions to goals
with state updates providing the results of goal specification; this is the only and explicit
contract between reactors. This conceptually contrasts with classical architectures as
actions become private to each component while goals become the new base of interac-
tion and control. This in turn makes each reactor a standalone entity with liberty to
define internal state and to fulfill overall mission objectives while ensuring the system is
responsive to exogenous events.
In turn, the architecture has to manage the information flow within the partitioned
structure to ensure consistency so as to direct the flow of goals and observations in a timely
manner.

Figure 5: A T-REX agent is composed of multiple reactors or control loops (rounded boxes) which are connected through state variables provided by one reactor to multiple possible client reactors.

The resulting control structure improves scalability since many details of each
controller can be encapsulated within a single control loop. Furthermore, partitioning
increases robustness since controller failure can be localized to enable graceful system
degradation making this an effective divide-and-conquer approach to the overall control
problem. The role of the T-REX agent then, is to ensure that all the reactors will be able
to interact concurrently so that they are informed of state evolution that may impact
them, have a sufficient amount of time to synthesize plans and coordinate plan dispatch
across reactor boundaries. Fig. 5 shows an instantiation of a T-REX agent.
3.2 Distributing Decision Processes: A Conceptual View
Distributed decision-making in T-REX occurs around the notion of state variables. The
complexity of planning is strongly related to the number of state variables that are part
of the problem. The partitioning of computation ensures partitioning of problem solving
and consequently helps reduce the time the system spends exploring potential strategies.
Such a division occurs along with an ownership and dependency model where each reactor
can declare two kinds of state variables:
internal state variables are directly manipulated and controlled by the owner reactor and describe its view of the state of the world. The owning reactor has sole responsibility to update this variable when required and also to plan ahead on any goal state.

external state variables describe state information a reactor does not directly control but which is necessary for identification of a reactor's internal state. Conversely, it may need to be able to request changes to these variables in order to execute the plan that allows it to complete its internal goals.
The overall design explicitly enshrines the notion of internal state in a reactor directly
depending on its external state variables. The external state variables are provided by
other reactor(s) – ones that declared the same state variable as internal. Only one reactor
can define a state variable as internal; this reactor is then the one providing the ground
truth for changes taking place on that state variable. Fig. 6 illustrates this concept for
a single reactor with this natural dependency captured in Fig. 5. Cyclic dependency
between reactors is not allowed.
Figure 6: An example of a T-REX reactor. The Pilot's internal state stores the current waypoint objective along with its navigation policy. This internal state relies on external state information, such as the current navigation command executed and vehicle sensor data, provided by other reactor(s).
This sub-division enables the
larger control problem to be dis-
tributed between reactors with each
having a specific functional scope de-
fined by the state variables it ma-
nipulates. Fig. 5 for instance
shows a Survey Controller coor-
dinating vehicle activities which are
not only functionally different from a
Sampling or a Pilot reactor, but also
encapsulate and manipulate informa-
tion at differing levels of abstraction.
In addition, within the hierarchy, re-
actors have different levels of tempo-
ral scope; those higher (like Survey
Controller) encompass the entire mission, potentially taking longer to do more ab-
stract problem solving; those lower (like the Pilot) are solving smaller problems in a
reactive and timely manner.
The goals of a reactor will only be on its internal state variables. The plans that
are produced in order to accomplish these goals on the other hand, may require changes
to its external state variables. However, these will be reported as a new goal to the
reactor declaring this state variable as internal and enforced by the agent aware of the
dependency structure. From the standpoint of a reactor, there is no specific need to know
explicitly which reactor is managing the external state variables. Further, the basis of
interaction between reactors remains limited to these state variables as the sole mechanism
to transmit goals and return observations. Fig. 7 illustrates an example of such tasking.
Figure 7: A plan fragment for the Survey Controller. In order to fulfill its internal goal to do a Transect(A,B), the reactor identifies several state changes that need to occur on the external waypoint state variable, such as Head(A) in order to be At location A. These will become goals of the Pilot, which declares waypoint as internal. Time flows from left to right.
T-REX adopts a synchronous view of the world and its evolution. Every reactor of the
system advances in accordance with a centralized clock which dictates the frequency at
which state updates occur. This atomic unit of time, or tick, therefore gives the frequency
at which the agent executes its control loop; all state variables are assumed to be updated
at this rate. The duration of a tick is set according to the system and the environmental
rate of change. For example, in our AUV implementation, a 1 second tick proved to be
a good balance between the low-power computer embedding T-REX and the speed of our
vehicle.
Figure 8: The Pilot's plan for communication: Status is internal while Command and Position are external to this reactor.
Finally, each reactor is a collec-
tion of state variables that reflect its
evolution in the past and its desired
projection of the future. Information
about allowable state evolution is ex-
pressed within a domain model which
a reactor consults. In our agent, the
Pilot exhibits a state variable ex-
pressing the status of the underwa-
ter vehicle with different predicates
such as Communicate, Surfacing and HeadingTo(x,y), representative of system state. These states are constrained by concurrent vehicle state variables such as its Position and the Command executed. For example, Status can be in the Communicate state only if the current Command is Idle and the Position indicates a depth close to the surface (depth ≤ 0.5), as illustrated in Fig. 8.
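A bare-bones rendering of that model rule in C++ (illustrative only; the actual constraint lives in the declarative domain model, not in hand-written code) would look like:

// Illustrative guard corresponding to the model rule above: Status may take the
// Communicate value only while Command is Idle and the vehicle is near the surface.
#include <cassert>

enum class Command { Idle, Ascend, Descend, HeadTo };

bool communicate_allowed(Command cmd, double depth_m) {
  return cmd == Command::Idle && depth_m <= 0.5;  // concurrent-state constraint
}

int main() {
  assert(communicate_allowed(Command::Idle, 0.3));
  assert(!communicate_allowed(Command::Ascend, 0.3));  // still executing a command
  assert(!communicate_allowed(Command::Idle, 2.0));    // too deep to reach the surface link
  return 0;
}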
3.3 Interleaving Planning and Execution
A central design feature in T-REX is how deliberation and execution are carried out within
the reactor. Unlike 3-tier architectures, the traditional (and synchronous) sense-plan-act
loop gives way to planning and acting as being intertwined; a single reactor can then be
seen as a control loop that both observes and acts on its external state in order to fulfill
its internal objectives.
Integrating planning with execution is challenging. It is often limited by planner
complexity and performance.

Figure 9: Paradigms used to integrate planning and execution: (9a) Sense/Plan/Act requires the planner to complete before the next set of action(s) can be taken. (9b) The Plan/Sense/Act approach attempts to circumvent this by performing planning ahead of the execution loop, as in IxTeT-EXEC. (9c) T-REX takes a multi-process point of view where both planning and execution tasks alter the plan concurrently.

Typically, generative AI planners have been deemed computationally intensive for embedding as real-time systems (Muscettola et al. 1998). A
prime reason has been for their expansive (and consequently expensive) exploration of
the search space. The classic sense-plan-act loop (Fig. 9a) assumes that the world state
remains static during plan synthesis.
(Lemai-Chenevier 2004) among others, attempted to resolve this with an approach
that can be considered as plan-sense-act (Fig 9b). Planning is done ahead of execution;
the solution plan is then executed in an alternate loop where the decision-making is
limited to local plan repair with the insertion of a limited set of recovery actions. When
it is not possible to repair, reverting to full planning is the only solution and one where the
world is again assumed to be static. They therefore consider planning to be undertaken
while the system can be set in a stable state (in their case stopping their rover testbed)
until planning is complete. Planning and execution are seen as sequential and not fully
interleaved, with planning often interrupting execution. The ability of the system to react,
therefore, is limited by the time spent planning. When the system is planning, it is neither
in a position to act nor perceive its environment until plan synthesis is complete. In
dynamic environments (such as the underwater domain) AUVs are in continuous motion,
as is the environment, and such a sequential problem-solving approach is not viable.
In T-REX therefore, within each reactor we consider execution control and planning
as two separate tasks (Fig. 9c) running concurrently and scheduled by the agent using
the following semantics:
synchronization where a reactor collects external state information observed from
other reactors in order to produce its internal state. This occurs at a fixed rate
reflected by the single tick value.
deliberation is a background task where a reactor can identify a potential plan in order to complete its internal goals. It can take multiple ticks and can be interrupted any time a new synchronization cycle occurs. Often the plan will translate into desired alterations of the external state variables of the reactor. If, for example, the Pilot from Fig. 5 has the goal to reach a Waypoint, its plan will add a goal to the external Nav Cmd state variable to task the Vehicle to head toward this waypoint.
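The interleaving can be sketched as follows (hypothetical interfaces, not T-REX source): synchronization runs unconditionally at the tick rate, while deliberation is a resumable background step that is simply suspended at the tick boundary.

// Sketch of the per-tick interleaving of synchronization and deliberation
// (hypothetical interface, not T-REX source code).
#include <cstdio>

class Reactor {
 public:
  virtual ~Reactor() = default;
  // Integrate external observations and publish internal state; once per tick.
  virtual void synchronize(unsigned tick) = 0;
  // One bounded chunk of planning; returns true while more work remains.
  virtual bool deliberate_step() = 0;
};

class DemoReactor : public Reactor {
  int remaining_ = 5;  // pretend the plan needs five chunks of search
 public:
  void synchronize(unsigned tick) override { std::printf("sync @ tick %u\n", tick); }
  bool deliberate_step() override { return --remaining_ > 0; }
};

int main() {
  DemoReactor r;
  const unsigned steps_per_tick = 2;   // deliberation budget inside one tick
  for (unsigned tau = 0; tau < 4; ++tau) {
    r.synchronize(tau);                // always happens, keeps state current
    for (unsigned i = 0; i < steps_per_tick; ++i)
      if (!r.deliberate_step()) break; // planning spills over into later ticks
  }
}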
Figure 10: The T-REX execution cycle: Deliberation is interrupted by synchronization at the beginning of every tick, allowing integration of state information.
While the two processes are sep-
arate, they share the same plan in-
ternal to a reactor and therefore at
every synchronization cycle the plan-
ning process is informed of the new
world state and its impact on the
plan. Conversely, when the plan-
ning process eventually finds a so-
lution, synchronization is informed
about generating the planned state
values on the external state variables.
This external plan defines the set of goals for this state variable managed by another
reactor. Fig. 10 shows a conceptual view of these two tasks.
The information flow emphasizes the responsibility of both of these processes inside
the reactor as well as how each can impact the interaction with other reactors. During
synchronization the reactor is notified of new state changes on its external state variables.
The responsibility of the reactor is not only to integrate this information into its plan
but also compute its internal state and propagate changes to the reactors that use these
state variables externally. In deliberation the reactor receives new goals on its internal
state variables and plans for them. The impact of this plan on external state variables can
then be transformed into goals to whichever reactor(s) declare these state variables
as internal.
This provides a clear separation at the reactor interface level between two necessary
information flows critical for agent execution. Deliberation is top-down where a high level
internal goal will produce an external plan, which in turn can be dispatched as internal
goals to other reactors. Synchronization by design gives state feedback in a bottom-
up fashion where a low level internal state will propagate up as external observations.
These two information flows are connected, as we expect an internal goal to eventually
be realized as an observation.
4 The T-REX Execution Cycle
A T-REX agent is a collection of reactors each manipulating state information through con-
current synchronization and deliberation tasks. The agent’s role is to schedule the tasks
appropriately while ensuring that information flow occurs in a timely manner via goal
and observation exchanges. The agent does not strictly follow a sense-plan-act paradigm
as noted in Section 3.3. For purposes of readability, however, we break out synchronization, deliberation and dispatch in the sections that follow.
4.1 Synchronization: Maintaining Reactor State
Synchronization may be seen as the point during inference where the agent ensures that
every reactor is able to identify its state for the current tick τ (see Fig. 10). Each reactor is seen as a state machine executive that keeps track of external state variable evolution at every tick in order to identify its own internal state, which in turn could be used by other reactors. This gives rise to a natural dependency between reactors that is maintained at every tick by the agent using the graph structure G = (R, D) where:

• R = {r1, . . . , rn} is the set of reactors;

• D : ri ▷ rj denotes the dependency reactor ri has on rj, with the implication that at least one external state variable of ri is internal to rj.

This dependency, as seen by the agent, amounts to a synchronization order.

Proposition 4.1. A reactor r ∈ R cannot be synchronized before all the reactors {u ∈ R : r ▷ u ∈ D} are synchronized.
Therefore, in order to resolve agent synchronization – and ensure that every reactor
is able to identify its current internal state – we need to find a scheduling of reactor
synchronization going from the least dependent (leaves of G) to the most dependent
(roots of G) reactors. This can be resolved by using a depth-first traversal as given by
Algorithm 1 where the reactor synchronization (Line 19) is done after having traversed
all of the reactors it depends on. By using this algorithm, we can also identify cyclic
dependencies between reactors (Lines 5 and 13) which are treated as errors.
Theorem 4.2. To ensure convergence of all reactors during synchronization, cyclic de-
pendency between reactors is prohibited.
Algorithm 1: Synchronize
  Input: G = (R, D) the reactor graph
 1  begin
 2      forall r ∈ R do r.color ← white ;              // Unmark all reactors
 3      Stack S ← {r : ∄u, u ▷ r ∈ D} ;                // Get root reactors of the graph
 5      if S = ∅ ∧ R ≠ ∅ then error("No root node") ;  // Detected global cycle
 6      while S ≠ ∅ do
 7          r ← S.front ;                              // Traverse the 1st reactor on the stack
 8          r.color ← gray ;
 9          foreach v ∈ R : r ▷ v ∈ D do
10              if v.color = white then
11                  S ← S.push(v) ;                    // Add non-traversed reactor
13              else if v.color = gray then
14                  error("Cyclic dependency") ;       // Cycle between r and v
15              end
16          end
17          if r = S.front then                        // All dependencies of r have been processed
19              synchronize(r) ;
20              r.color ← black ;
21              S ← S.pop ;
22          end
23      end
24  end
Proof. Assume two reactors r1 and r2 have a cyclic dependency such that r1 ▷ r2 and r2 ▷ r1. Proposition 4.1 implies that none of them can be synchronized before the other.
Consequently the only way to have these two reactors synchronized is to search for a
fixed point until both appear to retain their state. Such a fixed-point iteration in general
is not guaranteed to converge. Therefore we cannot guarantee that these reactors will
eventually be synchronized to stabilize overall agent state.
Execution time for Algorithm 1 is O(‖R‖ + ‖D‖). This ensures scalability to large graphs as long as the synchronization call for each reactor r is small compared to the
duration of a tick. While this depends on the implementation and complexity of each
reactor, in practice it is a reasonable assumption as each reactor is restricted to a small
subset of the overall synchronization problem.
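For reference, a compact C++ rendering of the same dependency-ordered traversal is shown below (the reactor names and the dependency graph are hypothetical; this is not the T-REX implementation). A reactor is synchronized only once every reactor it depends on has been synchronized, and a gray-on-gray encounter signals a cyclic dependency.

// Dependency-ordered synchronization with cycle detection, mirroring
// Algorithm 1 (illustrative sketch, hypothetical reactor graph).
#include <iostream>
#include <map>
#include <set>
#include <stdexcept>
#include <string>
#include <vector>

enum class Color { White, Gray, Black };

void visit(const std::string& r,
           const std::map<std::string, std::set<std::string>>& deps,
           std::map<std::string, Color>& color,
           std::vector<std::string>& order) {
  color[r] = Color::Gray;
  auto it = deps.find(r);
  if (it != deps.end()) {
    for (const auto& v : it->second) {
      if (color[v] == Color::Gray)               // back edge: cyclic dependency
        throw std::runtime_error("cyclic dependency between " + r + " and " + v);
      if (color[v] == Color::White) visit(v, deps, color, order);
    }
  }
  order.push_back(r);                            // synchronize(r): all deps done
  color[r] = Color::Black;
}

int main() {
  // r -> {u...} : reactor r has at least one external timeline owned by u.
  std::map<std::string, std::set<std::string>> deps = {
      {"SurveyController", {"Sampling", "Pilot"}},
      {"Sampling", {"Vehicle"}},
      {"Pilot", {"Vehicle"}},
      {"Vehicle", {}}};
  std::map<std::string, Color> color;
  for (const auto& kv : deps) color[kv.first] = Color::White;
  std::vector<std::string> order;
  for (const auto& kv : deps)
    if (color[kv.first] == Color::White) visit(kv.first, deps, color, order);
  for (const auto& r : order) std::cout << r << '\n';  // Vehicle first, roots last
}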
4.2 Deliberation: Making the Agent Proactive
4.2.1 Definitions
While synchronization is crucial to ensure that every reactor’s state is current, it is
insufficient to ensure agent decision making and execution.
Take for example the reactor Ship Operation in Fig. 5. The role of such a reactor
would be to ensure the vehicle is at a reasonable range for recovery by the support vessel.
We assume that it is given an overall objective to have the vehicle at a specific zone close
to the harbor between say, 6:00 am and 6:30 am. Synchronization ensures this reactor
can identify, at every tick, what the value of the Waypoint state variable provided by
the Pilot will be. We will consider that this state variable can be in either of the two
following parametrized states:
Head(direction) is a predicate which reflects the vehicle's movement towards the
heading direction
Correspondingly the predicate At(location) indicates that the vehicle is in the vicin-
ity of location
The Ship Operation reactor receives continuous state updates at tick intervals. How-
ever it is the projection of future state via periodic deliberation and synchronization that
is used by reactors to identify if their external states are evolving according to their
plan, in this case to determine if the vehicle will be available within the time window
for recovery. Moreover, if a plan represents only a single trajectory of evolution of sys-
tem state, it will likely be brittle, resulting in re-planning and potentially sub-optimal
performance of an embedded planner, one which might not be responsive to off-nominal
events.
To represent the evolving state information coupled with the notion of time, we use
state variables as timelines. This is a flexible representation to describe one sequence
of states for a given state variable (Muscettola 1994, Jónsson et al. 2000, McGann, Py,
Rajan & Olaya 2009). A timeline represents the temporal evolution of a single state
variable allowing each reactor to deliberate about the future evolution of each of these
states. At any given time each state variable can have only one state value which is
described as a predicate. Thus each timeline consists of a sequence of predicates which
encapsulate and describe state evolution; we call these instantiated predicates tokens.
A token therefore describes a predicate, the state variables on which it can occur, the
parameter values of the predicate, and the temporal values defining the interval during
which this predicate holds true.
Formally, we define timelines and tokens as follows:
Definition 1. For each state variable s ∈ S, timeline values l ∈ L_s of s are defined by the tuple {T_s, Q_l, G_l} where:

• T_s is the set of all possible tokens for s. Each token T ∈ T_s expresses a constraint on the value of s over a temporal scope and is described as the predicate

    T = p(start, duration, end, x)

where p is the predicate name; the attributes start, duration and end are intervals over N indicating the temporal scope during which p will hold; x describes the attributes of p with their acceptable domains. For example the token

    Speed(start = [0, 5], duration = [1, 10], end = [1, 15], speed = [1, 1.5])

represents the speed of a vehicle between 1 and 1.5 m/s starting between tick 0 and 5 for a duration of up to 10 ticks.

• Q_l = {T_1, . . . , T_n} ∈ 2^{T_s} is a sequence of valid tokens that describe a possible evolution of s. No two tokens T_i ∈ Q_l can be concurrent and they are sorted according to their relative temporal order. Using Allen algebra (Allen 1984):

    ∀i ∈ [1, n) : T_i meets T_{i+1} ∨ T_i before T_{i+1}        (1)

• G_l ⊆ T_s describes the set of goals for this timeline. A goal is a token that needs to be evaluated for potential insertion in Q_l or rejected if insertion is not possible (Frank & Jónsson 2003). The result of deliberation is to find a way to make this goal set empty.
4.2.2 Flexible Constraint-based Plan Representation
While T-REX is agnostic to the nature of reactor deliberation, several design choices made
– including the use of a timeline representation – were to support our goal in having
expressivity. This is one reason our implementation of reactors is based on the temporal
constraint-based planning engine EUROPA2 (Jónsson et al. 2000, Frank & Jónsson 2003) with a demonstrated legacy from NASA space missions (Muscettola et al. 1998, Rajan et al. 2000, Bresina, Jonsson, Morris & Rajan 2005). In EUROPA2, the planner is given a
domain model that describes the interactions and constraints between each state variable.
A constraint solver manipulates these variables defined in the model to find a plan. Each
timeline evolution both respects the constraints of the model while satisfying as many of
the given goals as possible.
EUROPA2’s domain model is written in declarative form with NDDL (New Domain
Description Language) (Bedrax-Weiss, McGann, Bachmann, Edgington & Iatauro 2005, NDDL Reference 2011). Initial conditions and goals, also specified in NDDL, construct a
set of temporal relations that must be true at start time. These models include temporal
assertions about the physics of the vehicle, i.e how it responds to external stimulus and
internally driven goals. By propagating these relations forward using Simple Temporal
Networks (Dechter, Meiri & Pearl 1991) and applying goal constraints, EUROPA2 can
select a set of conditions that should be true in the future, and where some of these
conditions will correspond to actions the agent must take. The planner can backtrack
and search for alternatives if a goal cannot be achieved and is capable of discarding
unachievable goals.
Consider for example, a scientific need to take a water sample 100 meters from a hot-
spot while an AUV is moving up and down in the water column in a Yo-Yo pattern. Two
samples are to be obtained if the feature’s signal is above a threshold; one otherwise.
The token that is capturing the sensory threshold has parametric constraints to the
token which triggers the requisite water sampler. In addition, the start time of the water
sampling procedure is highly dependent on the variability of sub-sea currents and actual
speed of the vehicle. Therefore a number of values are possible for the start as well as the
duration of the sampling task, all of which are valid combinations for desired outcomes and which can only be determined in situ.
Each token attribute (including its temporal extent) is not restricted to a single value
but represented by the domain of all values that are considered valid for a plan instance.
Temporal flexibility is represented as an interval as shown in Definition 1. For example a
range of possible start times can be expressed by [10,40] to indicate a start time between
10 and 40 time ticks. Unlike a traditional fixed time-tagged command sequence, such
flexible plans leave room for adaptation at execution time by cushioning the impact of
an unpredictable actuation or sensor feedback from the environment. Instead of starting
precisely at tick 10 a perfectly legitimate start time could be at tick 30 reflecting a delay in
ending the last activity. Furthermore, in the event of a local plan failure, plan flexibility
allows timely execution of recovery procedures to diagnose and return vehicle state to
resume nominal operation. In such “fail operational” modes, flexibility is therefore an
enabler for robust execution.
Figure 11: Tokens with flexible temporal intervals and parametric constraints between tokens on two separate timelines. The example shows the triggering of a water sampler based on a feature threshold while the vehicle performs Yo-Yo's in the water column. The Waypoint Yo-Yo token has flexible duration, start and end times. Arcs between tokens represent parametric temporal constraints.
When the executive considers when to start a task, it propagates information through
the constraint network, computes a time bound for the variable, selects an actual ex-
ecution time within the bound, and starts the task at that time. Temporally flexible
plans therefore, express a range of possible outcomes of the robot’s interaction with the
environment within which the executive can elect, at run time, the most appropriate
value for actual execution. The fact that constraints are explicitly represented ensures
that through constraint propagation the executive will respect global limits expressed
in the plan (e.g., don’t start a task until a certain condition has been satisfied but still
satisfying some deadline). Such flexibility is critical when dealing with dynamic ocean
conditions where precise timing of a robotic action is often indeterminate. Because tradi-
tional time-tagged sequences are inflexible, they must necessarily be designed considering
worst case scenarios leading to sub-optimality. Fig. 11 illustrates temporal flexibility
in concurrent tokens and timelines. This representation enables a flexible view of state
variable evolution without early commitment on its attributes.
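As a worked example of that propagation (the numbers are illustrative), consider a task whose start may fall anywhere in [10, 40], whose duration is [5, 15] ticks, and which must end by tick 50:

// Toy propagation over a flexible token: start + duration = end, clipped by a
// deadline; any start surviving in the bound is a legitimate execution choice.
#include <algorithm>
#include <cstdio>

struct Bound { int lb, ub; };

int main() {
  Bound start{10, 40}, duration{5, 15}, end{0, 50};   // end <= 50 is the deadline
  // Forward propagation: end = start + duration.
  end.lb = std::max(end.lb, start.lb + duration.lb);
  end.ub = std::min(end.ub, start.ub + duration.ub);
  // Backward propagation: start = end - duration.
  start.lb = std::max(start.lb, end.lb - duration.ub);
  start.ub = std::min(start.ub, end.ub - duration.lb);
  // Prints: start in [10,40], end in [15,50]; the executive may begin at, say,
  // tick 30 and still satisfy the deadline, instead of committing to tick 10.
  std::printf("start in [%d,%d], end in [%d,%d]\n", start.lb, start.ub, end.lb, end.ub);
}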
Figure 12: Temporal relations defined within the planner are based on the Allen Algebra relations shown above.

Figure 13: A partial plan domain model with two timelines. Surface models the vehicle being at the surface after 1800 seconds of navigation, and Navigation models the cycle between the vehicle Idling and Going toward its goal. Temporal relations indicate modeled constraints between states. The constraints shown are meets, indicating each timeline state's evolution, and contained_by, enforcing the rule that the AUV will Go somewhere only when not Waiting at the surface.

Systematic interval computation is reflected by the use of the Allen Algebra (Allen 1984) temporal relations shown in Fig. 12, which allow encoding of temporal constraint relationships between tokens on concurrent timelines. During synchronization, for example,
the identification of a new state update on an internal timeline can easily be represented
by the corresponding token with its start time restricted to the single value corresponding
to the current tick τ. This in turn can be integrated by all reactors declaring this timeline
as external by using the principles described in Definition 1. Constraint propagation algorithms and interval logic are covered in more depth in (Jónsson et al. 2000, Frank & Jónsson 2003). Finally, our token representation is used for external and internal
state variables of reactors and forms the uniform basis of information exchanged between
reactors, both as observations and goals.
In describing the evolution of state via timelines, transitions within a timeline can be
described by simple finite state machines. Fig. 13 for example describes timeline evo-
lution for two abstract timeline entities describing navigation. Arc transitions represent
temporal constraints and token durations are represented by temporal extent of state.
The entire vehicle can then be modeled as a number of state transitions with concur-
rency for each of the modeled state variables. However this view is often complicated
when temporal constraints between state variables are articulated in our domain models.
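Read this way, the allowed transitions of a timeline can be captured by a small transition table, sketched below for the Navigation timeline of Fig. 13 (illustrative code, not the declarative model itself).

// "meets" transitions allowed on the Navigation timeline of Fig. 13; the
// Surface timeline and the 1800 s rule are omitted for brevity.
#include <map>
#include <set>
#include <string>

const std::map<std::string, std::set<std::string>> navigation_meets = {
    {"Idle", {"Go"}},
    {"Go",   {"Idle"}}};

bool valid_successor(const std::string& from, const std::string& to) {
  auto it = navigation_meets.find(from);
  return it != navigation_meets.end() && it->second.count(to) > 0;
}

int main() {
  // Idle meets Go is modeled; Go meets Go is not.
  return (valid_successor("Idle", "Go") && !valid_successor("Go", "Go")) ? 0 : 1;
}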
4.3 Consistency in Timely Plan Dispatch
While each reactor can independently infer the future evolution of a state variable, the
distributed nature of our architecture requires we address the problem of divergent state
views between reactors. Fig. 14 for instance, shows three reactors with the Waypoint
state variable, owned by the Pilot.
Synchronization ensures that all these views are consistent up to the execution frontier τ, as new state updates on its internal view are propagated to the external views in
Ship Operation and Mission Controller. However, it is possible that each reactor
can envision a desirable future evolution that will allow it to complete its objectives in
different ways; Ship operation could desire a rendezvous at the recovery location around
6am, while Mission Controller might suggest the vehicle continue to navigate to fulfill
its survey objectives. These two objectives are not necessarily conflicting – in Fig. 14, being At(M1) may for example happen soon enough that the vehicle can target the
recovery area prior to 6:30am, the end horizon for the rendezvous.
However, the agent needs to offer a mechanism to ensure that these potential futures
are timely and compatible for the Pilot to include them in its plan prior to execution.
Sending full plan projections of Ship operation and Mission Controller is not a viable
option; not only does it nullify the very essence of the distributed nature of inference, but
with the Pilot being closer to the hardware (where timeliness is critical), overall system
performance would be impacted by deliberation at that lower level.
The resolution calls for reactors to declare a priori two extra parameters that provide
a hint of their planning capabilities to other reactors. Each reactor has a:
look-ahead π(r) that indicates how far it is expected to look into the future while
deliberating.
latency λ(r) that provides the maximum expected number of ticks it would take for
the reactor to produce a plan.
These two parameters together help evaluate the performance of the reactor to find a
solution for given goals. At a given tick τ it can take the reactor r up to the tick τ + λ(r) before finding a solution. As this plan needs to be executed, we also need to take into account the latency of the reactors r depends on. To do so we define in the agent the
notion of execution latency as follows:
Definition 2. A reactor’s execution latency Λ(r), is the time necessary for a reactor
to deliberate and correctly dispatch the outcome of its partial plan on its external state
variables. It is recursively defined as:
Λ(r) = λ(r) + max
r0R:rBr0(Λ(r0)) (2)
Using the look-ahead π(r), we can then identify the planning window as a temporal duration over which reactor r deliberates:

Theorem 4.3. When a reactor starts its deliberation at τ, its planning window is specified as:

    Π_r(τ) = [τ + Λ(r), τ + Λ(r) + π(r)]        (3)
Proof. Let r1 ▷ r2 (i.e., an external timeline of reactor r1 is internal to r2). Let r1 take its full latency λ(r1) to produce its partial plan. It would then be able to dispatch its goals only after τ + λ(r1). Let r2 also use its full latency to produce a plan for a goal tp. The observation of tp during synchronization of r1 will not occur before τ + λ(r1) + Λ(r2). Consequently, to ensure a viable execution of the plan resulting from deliberation, reactor r1 should plan on a window starting from τ + Λ(r1). Therefore the upper bound of r1's planning window is given by τ + Λ(r1) + π(r1).

Figure 14: Multiple views of the Waypoint state variable. The three reactors manipulating this timeline could potentially have divergent views of the future. "Springs" in the future represent the temporal flexibility of each token. The execution frontier is represented by τ.
This planning window can then be used to identify when tokens on an external timeline
should be transformed into a goal for the reactor that declares this timeline as internal,
based on the possible token start time. The T-REX agent ensures that only tokens with
start times within this window are dispatched to reactors as goals. Consequently, any
token that cannot start during this window is not dispatched since it is outside the current
planning scope of the receiving reactor. Conversely, reactors receiving new goals through
this process are then scheduled for deliberation as they need to find a solution between two synchronization cycles and within their latency λ(r). This in turn results in tokens on their external timelines being dispatched using the above approach. This top-down cascading effect is the counterpart of synchronization, which allows high level reactors to transfer their plans down to the lower level reactors.
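The recursion of Definition 2 and the resulting dispatch window of Theorem 4.3 can be sketched as follows (hypothetical reactor names, latencies and look-aheads; not T-REX code):

// Execution latency Lambda(r) = lambda(r) + max over dependencies, and the
// planning window Pi_r(tau) used to decide which goals to dispatch.
#include <algorithm>
#include <cstdio>
#include <map>
#include <set>
#include <string>

struct ReactorInfo {
  unsigned latency;               // lambda(r), in ticks
  unsigned look_ahead;            // pi(r), in ticks
  std::set<std::string> deps;     // reactors r depends on
};

using Graph = std::map<std::string, ReactorInfo>;

unsigned execution_latency(const Graph& g, const std::string& r) {
  unsigned worst = 0;
  for (const auto& d : g.at(r).deps)
    worst = std::max(worst, execution_latency(g, d));   // max over dependencies
  return g.at(r).latency + worst;                        // Lambda(r)
}

int main() {
  Graph g = {{"Vehicle", {0, 1, {}}},
             {"Pilot", {1, 60, {"Vehicle"}}},
             {"SurveyController", {5, 3600, {"Pilot"}}}};
  unsigned tau = 100;  // current tick
  for (const auto& kv : g) {
    unsigned Lambda = execution_latency(g, kv.first);
    unsigned lo = tau + Lambda, hi = lo + kv.second.look_ahead;   // Pi_r(tau)
    std::printf("%s: plans over [%u, %u]\n", kv.first.c_str(), lo, hi);
    // A token on an external timeline whose possible start lies within [lo, hi]
    // would be dispatched as a goal to the reactor owning that timeline.
  }
}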
5 Experimental results
T-REX is in routine operational use on MBARI’s Dorado AUV, with a focus on near-
shore upper water-column coastal ocean processes in Monterey Bay, Northern California. The system has evolved from simple objectives of waypoint control and path
planning – which abstracted regular surfacing for localization and straight-line tran-
sects – towards more complex volume surveys (McGann, Py, Rajan, Ryan, Thomas,
Henthorn & McEwen 2008) with adaptation to finding and acquiring water samples
within INLs¹ (Ryan, Johnson, Sherman, Rajan, Py, Thomas, Harvey, Bird, Paduan &
Vrijenhoek 2010), to tracking dynamic features such as blooms (Das, Rajan, Frolov,
Ryan, Py, Caron & Sukhatme 2010) and fronts (Das, Py, Maughan, Messie, O’Reilly,
Ryan, Sukhatme & Rajan 2011, Py, Ryan, O’Reilly, Thomas & Rajan 2011) in coastal
and off-shore waters while participating in large field deployments (Das, Maughan, Mc-
Cann, Godin, O’Reilly, Messie, Bahr, Gomes, Py, Bellingham, Sukhatme & Rajan 2011).
T-REX, written in C++, runs on a 367 MHz EPX-GX500 AMD Geode stack using Red
Hat Linux. The control loop operates at 10Hz on top of a lower level functional layer run-
ning on a separate processor on real-time QNX on the Dorado; communication between
the two processors is via a socket-based protocol. The vehicle hosts a range of scientific
payload including two Seabird CTDs, a HobiLabs HS2 HydroScat, a Satlantic/MBARI
ISUS Nitrate sensor among others. The vehicle is typically capable of operating to depths
of 1500m; however our experiments were carried out in depths of up to 200m. The AUV
on average runs at about 1.5 m/s speed over ground. Operational constraints require the
vehicle to localize with a GPS reading on the surface approximately every half hour. For
validation purposes we run experiments on a high-fidelity simulator based on (Gertler &
Hagen 1967). Water sampling involves the use of the Gulper instrument (Bird, Sherman
& Ryan 2007) to take 2-liter water samples in less than a minute (Fig. 2). The system is
used for unattended and overnight operations for up to 17 hours, the maximum battery
life of the vehicle.
¹ Intermediate nepheloid layers are fluid sheets of suspended particulate matter that originate from the sea floor (McPhee-Shaw 2006).
Figure 15: Visualization of an INL sampling mission over the Monterey Canyon in November 2008. The dark area indicates high probability of INL presence as detected by onboard sensors. S1-S5 indicate triggering of 10 water samplers, two at a time. Arrows indicate the separation of the transects based on average signal strength in the previous transect. Overall mission duration was 6 hours 40 minutes.
5.1 Augmenting Traditional Surveys
Our early missions related to tracking and sampling within estuarine plumes and INLs.
The volume survey required probabilistic feature estimation by a reactor to transform
a set of sensor data (the ISUS Nitrate sensor for high-nitrate, low salinity estuarine
plumes and the HS2 HydroScat for INLs, along with CTD measurements) into a probability indicative of whether the AUV was within or outside the feature of interest. The
vehicle was given a high-level objective specifying the corners of the projected survey
box, minimum distance constraints for triggering Gulper samples and resolution between
the lawn-mower transects on feature detection. Feature detection and its instantiation is
outside the scope of this chapter and covered in (McGann, Py, Rajan, Ryan, Thomas,
Henthorn & McEwen 2008, Ryan et al. 2010).
Fig. 15 shows a mission executed in November 2008 for INL surveys. Ten Gulper samples, two at a time, were distributed within the high INL probability areas (as detected by our feature detector) encountered early in the mission. As the feature signal detected
became weaker, the vehicle adaptation called for wider transect separation to cover a
larger volume. No specific constraint on the starting point of the trapezoidal projected
area was given; the vehicle selected the start location closest to where it was deployed, abstracting out waypoints altogether.
Early reactor design evolved from a 3-tiered architecture with both the Mission
Planner and Exec reactors based on the EUROPA2 planning framework, as shown in Fig.
16a. This design validated the feasibility of our architecture while also demonstrating its
limits, since most of the real-time decisions were at the lowest level in the Exec. This
proved limiting in the ability of the system to react, often leading to systemic agent failure
in sea trials under unexpected situations. Concurrent with increasing mission complexity
driven by science needs, migration to a more distributed design occurred (Fig. 16b) which
in turn led to redistribution of Exec functionalities. A side effect of this devolution al-
lowed for further refinement of complex sampling methodologies (Garcia-Olaya, Py, Das
& Rajan 2012).
Figure 16: Evolution of the T-REX reactor instantiation from the early 3-tiered design in Fig. 16a to the currently used design in Fig. 16b, where deliberation is distributed across multiple reactors.
Fig. 17 compares the distribution of CPU load for the two T-REX designs across
multiple runs. A viable design instance would minimize ticks on the high CPU load
(right side of the figure). For example, the figure shows that approximately 20% of the
ticks were responsible for 5% of the CPU load for both designs. For the majority of
the runs normalized between the two designs, we see that most of the time the CPU load is around 20%, indicating overall effectiveness in problem solving. CPU load can
potentially rise up to 100% at the beginning of a mission, when plan synthesis occurs in
all reactors in either of the two designs.
While both have comparable average CPU-load (slightly below 20%) the distinction
we find is how often one design requires more CPU. The 6-reactor design tends to maintain
its CPU load under 20% in more than 95% of the ticks, while the 3-reactor (3-tiered)
design can show CPU loads up to 35% on a regular basis (5% of the ticks). Distributing
decision making across more reactors therefore shows better performance in a real-time
embedded environment.
Figure 17: Comparison of the distribution of CPU load of the 3-tiered design in Fig. 16a with the newer distributed design shown in Fig. 16b (in white); the horizontal axis is CPU load and the vertical axis the percentage of ticks. The 3-reactor design shows loads up to 35% on a regular basis. Distinctively small loads occur to the right of the figure.
A key benefit in partitioning is more systematic robustness; an unexpected situation
resulting in plan failure remains local to the reactor dealing with the state variable(s)
reflecting the situation. Additionally, a more distributed model also implies encapsulation
localized within a reactor, which in turn leads to a more tractable resolution during deliberation
and/or synchronization. Finally, in moving between reactor designs (including those
above), change is minimal; in the above example, for instance, apart from the Exec no other reactors required change. Since information exchange between reactors occurs solely via state variables, changing the Exec had little impact on the way other reactors were implemented, showcasing good system modularity.
5.2 Extending Lagrangian Surveys
In the ocean sciences, drifters are often used as proxies for advection to study marine
transport (Lumpkin & Pazos 2007), for observation of coastal surface currents (Davis
1985) and analysis of circulation fields for improvement of ocean models (Molcard, Piter-
barg, Griffa, Ozgokmen & Mariano 2003). Usually one or more of these drifters are
deployed for such studies and used in concert with ship surveys. We have extended the
T-REX controller model to support such Lagrangian studies for our Dorado platform.
The scientific goal of such an effort is to provide an environmental 4D (space and
time) snapshot of an advecting patch of water (advection being the horizontal transport
of the patch by currents). To do so, the vehicle is to execute a pattern around a drifter
carried by surface currents. Such an extension also allows the vehicle to track any
dynamic mass of water, whether it represents a phytoplankton bloom or an ocean front.
The technical challenge for such an application is to be able to describe and flexibly
move the waypoints based on the predicted displacement of a moving water mass. The
critical change is in the definition of waypoints: from a world reference frame to one
relative to the moving mass, with the drifter serving as a proxy. Further, these waypoints
also depend on the position and speed estimates of the vehicle relative to the drifter.
These cannot be known a priori, and must be computed in situ as shown in (Das, Py,
Maughan, O'Reilly, Messie, Ryan, Rajan & Sukhatme 2010).
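To make the change of reference frame concrete, the sketch below, a simplified illustration rather than the onboard implementation, converts a waypoint expressed as ahead/right offsets in the drifter frame into an absolute latitude/longitude, assuming a flat-Earth approximation over a few kilometres and a drifter bearing measured clockwise from north; the function name and the numbers in the example are illustrative.

    # Minimal sketch (flat-Earth approximation over a few km; drifter bearing in
    # degrees clockwise from north). A waypoint defined in the drifter frame
    # (ahead/right offsets in metres) is rotated by the drifter's bearing and
    # translated to the drifter's latest position, yielding an absolute waypoint.
    import math

    EARTH_RADIUS_M = 6_371_000.0

    def drifter_relative_to_latlon(drifter_lat, drifter_lon, bearing_deg,
                                   ahead_m, right_m):
        b = math.radians(bearing_deg)
        # Rotate the (ahead, right) offset into north/east components.
        north_m = ahead_m * math.cos(b) - right_m * math.sin(b)
        east_m = ahead_m * math.sin(b) + right_m * math.cos(b)
        dlat = math.degrees(north_m / EARTH_RADIUS_M)
        dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(drifter_lat))))
        return drifter_lat + dlat, drifter_lon + dlon

    # Example: a waypoint 1500 m ahead and 1500 m to the right of a drifter at
    # (36.0 N, -122.8 E) heading 045 degrees (values are illustrative only).
    print(drifter_relative_to_latlon(36.0, -122.8, 45.0, 1500.0, 1500.0))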
This T-REX instantiation exploits the modularity of the agent architecture with the
addition of a Lagrangian Pilot reactor at a higher level of abstraction than the Pilot.
The Mission Controller switches navigation between absolute and Lagrangian waypoints
by connecting to either the Waypoint (internal to the Pilot) or the Lagrangian Waypoint
(internal to the Lagrangian Pilot reactor). The latter reactor acts as a translator between
the two waypoint reference models, using the current speed and location of the drifter
provided via an Iridium satellite feed.
Fig. 18a shows a drifter-following mission in the Earth reference frame, with Fig. 18b
showing an 18-hour mission with the vehicle trajectory projected in the drifter reference
frame. T-REX was initiated with a drifter position, bearing and speed, with the requirement
of a 3 km × 3 km survey pattern around the advecting drifter to provide contextual sensor
information. Estimated waypoints were computed in the Earth frame based on the projected
displacement of the drifter, while integrating the time the vehicle would take to fly around
these Lagrangian waypoints. Fig. 18b shows the effective vehicle path projected in the
drifter frame of reference and oriented based on its bearing. With no DVL bottom lock
and consequent navigation errors, as well as a range of current velocities from 0.05 m/s
to 0.45 m/s, the AUV was able to repeatedly survey around the moving drifter, completing
60 surveys in over 5 days. Additional details are provided in an extended paper (Das, Py,
Maughan, Messie, O'Reilly, Ryan, Sukhatme & Rajan 2011).
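As an illustration of the projection step described above, and not the actual onboard code, the sketch below shifts the corners of a survey pattern by the drift expected while the vehicle flies between them, assuming the drifter advects in a straight line at constant speed between Iridium fixes and that the vehicle maintains a constant effective speed; coordinates are in metres in a local frame whose x axis points along the drifter bearing, and all numbers are illustrative.

    # Sketch of the per-iteration projection (assumptions: straight-line drift at
    # constant speed between fixes; constant effective vehicle speed; local frame
    # with origin at the drifter fix and x axis along the drifter bearing).
    import math

    def project_pattern(corners_m, drifter_speed_mps, vehicle_speed_mps=1.5):
        """Shift each (ahead, right) corner by the drift accumulated while the
        vehicle flies the pattern, returning drift-compensated corners in the
        Earth-fixed local frame anchored at the initial drifter fix."""
        shifted, elapsed_s = [], 0.0
        prev = (0.0, 0.0)                        # vehicle assumed to start at the fix
        for ahead_m, right_m in corners_m:
            drift_m = drifter_speed_mps * elapsed_s
            wp = (ahead_m + drift_m, right_m)    # drift is along the bearing (x) axis
            elapsed_s += math.dist(prev, wp) / vehicle_speed_mps
            shifted.append(wp)
            prev = wp
        return shifted

    # A 3 km x 3 km box around the drifter with a 0.3 m/s surface current
    # (numbers illustrative, matching the scales reported above).
    box = [(1500, 1500), (1500, -1500), (-1500, -1500), (-1500, 1500)]
    print(project_pattern(box, drifter_speed_mps=0.3))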
[Figure 18a (September 2010 offshore drifter experiment): longitude/latitude plot of the drifter path over days 257–261 (a total drift of 86 km), with the drifter updates sent to the AUV marked at selected iterations.]
[Figure 18b (drifter estimates and AUV paths): AUV Lagrangian path and Lagrangian waypoints plotted as lateral distance in metres in the drifter frame, with the drifter at the origin and the ahead/behind axis aligned with its bearing.]
Figure 18: AUV and drifter paths during the September 2010 five-day field trial, 100 nautical
miles off the California coast in the California Current. In Fig. 18a, black dots show the beginning
of every iteration, when the latest drifter locations were sent to the AUV. Based on these updates,
T-REX on the AUV computed linear projections of the drifter trajectory at the beginning of
every iteration and planned waypoints in the Earth frame. Fig. 18b shows the same AUV path
centered on the estimated drifter position.
This Lagrangian model has a range of extensions allowing the AUV to follow any moving
point, which can be marked by a drifter, extracted from a predictive ocean model
running on shore, identified by another asset (Das, Py, Maughan, O'Reilly, Messie, Ryan,
Rajan & Sukhatme 2010) or by a human using a decision-support system (Das, Maughan, Mc-
Cann, Godin, O'Reilly, Messie, Bahr, Gomes, Py, Bellingham, Sukhatme & Rajan 2011),
indicating, for example, chlorophyll fluorescence as the patch centroid. This is an
early step in desktop-based AUV commanding using mixed-initiative approaches used
elsewhere (Bresina et al. 2005).
6 Conclusion
The use of autonomous underwater vehicles for exploration and sampling has prolifer-
ated. Their continued use and utility depend on how adaptive and persistent these
robots can be in the water column. Adaptation on these vehicles has traditionally relied
on carefully crafted pre-scripted plans with little tolerance for off-nominal conditions.
Dealing with scientific uncertainty to observe dynamic and episodic phenomena has be-
come paramount for ocean scientists seeking to understand large-scale ecological processes.
Safe and effective adaptation in such cases requires a balanced consideration of mission
objectives, environmental conditions and available resources.
To meet these challenges, we have designed, developed, tested and flown in routine
AUV operations, a novel controller which encapsulates both deliberation and reactive re-
sponse. The novelty of our work is in integrating deliberation and reaction over different
temporal and functional scopes within a single agent and a single model that covers the
needs of high-level mission management, low-level navigation, instrument control, and
detection of unstructured and poorly understood phenomena. Re-planning is a natural
consequence of the knowledge representation as well as the overall architectural paradigm.
Besides removing the dependency on a priori fixed transects, our controller, T-REX, has
augmented a highly capable tool that marine scientists rely on to observe and under-
stand microbial evolution within an advecting mass of water, something that would
hitherto have been difficult. Execution failures can be captured locally within a reactor,
with failure propagation potentially leading to graceful system degradation. Finally, T-REX allows for
a more systematic and sustained way to develop software incrementally using spiral devel-
opment techniques (Boehm 1986) given the use of self-contained computational elements
in the form of reactors. Legacy software or other computational methods of inference can
be encapsulated within these reactors leading to ease of development and maintenance.
Put together, the system has advanced the state of the practice in the inter-disciplinary
fields of Artificial Intelligence, Robotics, and Ocean science and engineering. The soft-
ware is open source and freely available (T-REX source code 2011). To the best of our
knowledge, T-REX is the only onboard deliberation system on an operational AUV,
anywhere.
A number of challenges remain. Principal among them has been its extension to coor-
dinate multiple vehicles over low bandwidth communication networks often found in our
domain. A separate and interesting research problem is in augmenting control decisions
with human-in-the-loop paradigms which are particularly effective in the ocean sciences
given poor understanding of the dynamic coastal environment. With a highly skilled
oceanographer bringing her formidable cognitive skills to interpret sparsely available and
sub-sampled data, having such a non-machine “reactor” aid problem solving will push the
human-robot interaction paradigm in new directions. One shortcoming with the framework
itself has to do with the inter-dependence between the modeling of state variables and
reactor design. When timelines have co-temporal interactions, they should be reasoned
about together in a single reactor. These considerations have not proven problematic in
our experience. However, reactor design is still heuristically driven; techniques developed
for partitioning constraint graphs such as (Salido & Barber 2006) for example, may be
relevant in this context.
Currently, at MBARI, T-REX plays an important role in the context of a multi-year
inter-disciplinary field program called the Controlled, Agile and Novel Observing Net-
work (CANON) (CANON 2010). The program focuses on understanding rapidly evolving
coastal ocean processes that have significant societal impact on local ecosystems. Reg-
ular field deployments are providing a rich trove of problem requirements and technical
advancements including the need to consider autonomy as a dual onboard and on-shore
entity with human-in-the-loop for decision making. CANON also provides a natural labo-
ratory for experimenting with event-driven multi-vehicle sampling and control problems.
7 Acknowledgements
The authors are funded by a block grant from the David and Lucile Packard Foundation
to MBARI. We thank our colleagues and collaborators at MBARI and elsewhere, and the
crew of the R/V Zephyr, for supporting our deployments.
References
Allen, J. (1984), ‘Towards a General Theory of Action and Time’, Artificial Intelligence
23(2), 123–154.
Bedrax-Weiss, T., McGann, C., Bachmann, A., Edgington, W. & Iatauro, M. (2005),
EUROPA2 User and contributor guide, Technical report, NASA Ames Research
Center.
Bellingham, J. G. & Leonard, J. J. (1994), Task Configuration with Layered Control, in
‘IARP 2nd Workshop on Mobile Robots for Subsea Environments’.
Bird, L., Sherman, A. & Ryan, J. P. (2007), Development of an Active, Large Volume,
Discrete Seawater Sampler for Autonomous Underwater Vehicles, in ‘Proc Oceans
MTS/IEEE Conference’, Vancouver, Canada.
Boehm, B. (1986), ‘A spiral model of software development and enhancement’, SIGSOFT
Softw. Eng. Notes 11, 14–24.
Bresina, J., Jonsson, A., Morris, P. & Rajan, K. (2005), Activity Planning for the
Mars Exploration Rovers, in ‘International Conference on Automated Planning and
Scheduling (ICAPS)’, Monterey, California.
Brooks, R. A. (1986), ‘A robust layered control system for a mobile robot’, IEEE Journal
of Robotics and Automation RA-2, 14–23.
CANON (2010).
URL: http://www.mbari.org/canon/
Carreras, M., Ridao, P., Garcia, R. & Battle, J. (2006), Behaviour Control of UUV’s,
in G. N. Roberts & R. Sutton, eds, ‘Advances in Unmanned Marine Vehicles’, IEE,
chapter 4.
Ceballos, A., Bensalem, S., Cesta, A., de Silva, L., Fratini, S., Ingrand, F., Ocon, J.,
Orlandini, A., Py, F., Rajan, K., Rasconi, R. & van Winnendael, M. (2011), A Goal-
oriented Autonomous Controller for Space Exploration, in ‘Proc. 11th Symposium on
Advanced Space Technologies in Robotics and Automation (ASTRA)’, Noordwijk,
the Netherlands.
Chien, S., Knight, R., Stechert, A., Sherwood, R. & Rabideau, G. (2000), Using iterative
repair to improve responsiveness of planning and scheduling, in ‘Proceedings of the
Fifth International Conference on Artificial Intelligence Planning and Scheduling
(AIPS)’, pp. 300–307.
Chien, S., Sherwood, R., Tran, D., Cichy, B., Rabideau, G., Castano, R., Davis, A. &
Boyer, D. (2005), ‘Using autonomy flight software to improve science return on earth
observing one’, Journal of Aerospace Computing, Information, and Communication
2, 196–216.
Das, J., Maughan, T., McCann, M., Godin, M., O’Reilly, T., Messie, M., Bahr, F., Gomes,
K., Py, F., Bellingham, J., Sukhatme, G. & Rajan, K. (2011), Towards mixed-
initiative, multi-robot field experiments: Design, deployment, and lessons learned,
in ‘Proc. Intelligent Robots and Systems (IROS)’, San Francisco, California.
Das, J., Py, F., Maughan, T., Messie, M., O’Reilly, T., Ryan, J., Sukhatme, G. S. &
Rajan, K. (2011), ‘Simultaneous Tracking and Sampling of Dynamic Oceanographic
Features with AUV and Drifters’, Intnl. J. of Robotics Research . Submitted; under
review.
Das, J., Py, F., Maughan, T., O’Reilly, T., Messie, M., Ryan, J., Rajan, K. & Sukhatme,
G. (2010), Simultaneous Tracking and Sampling of Dynamic Oceanographic Features
with Autonomous Underwater Vehicles and Lagrangian Drifters, in ‘Intnl. Symp. on
Experimental Robotics (ISER)’, New Delhi, India.
Das, J., Rajan, K., Frolov, S., Ryan, J., Py, F., Caron, D. A. & Sukhatme, G. S. (2010),
Towards Marine Bloom Trajectory Prediction for AUV Mission Planning, in ‘Intnl.
Conf. on Robotics and Automation (ICRA)’, Anchorage, Alaska.
Davis, R. E. (1985), ‘Drifter observations of coastal surface currents during CODE: The
method and descriptive view’, Journal of Geophysical Research 90, 4741–4755.
Dechter, R., Meiri, I. & Perl, J. (1991), ‘Temporal Constraint Networks’, Artificial Intel-
ligence 49(1-3), 61–95.
Finzi, A., Ingrand, F. F. & Muscettola, N. (2004), Model-based Executive Control
through Reactive Planning for Autonomous Rovers, in ‘Proc. Intelligent Robots
and Systems (IROS)’.
Frank, J. & Jónsson, A. K. (2003), ‘Constraint-based Attribute and Interval Planning’,
Constraints 8(4), 339–364.
Garcia-Olaya, A., Py, F., Das, J. & Rajan, K. (2012), ‘An online utility-based ap-
proach for sampling dynamic ocean fields’, Oceanic Engineering, IEEE Journal of
37(2), 185–203.
Gat, E. (1998), On Three-Layer Architectures, in D. Kortenkamp, R. Bonnasso & R. Mur-
phy, eds, ‘Artificial Intelligence and Mobile Robots’, MIT Press, pp. 195–210.
Gertler, M. & Hagen, G. R. (1967), ‘Standard Equations of Motion for Submarine Sim-
ulation’, Naval Ship Research and Development Center Report 2510 .
Ghallab, M., Nau, D. & Traverso, P. (2004), Automated Planning Theory and Practice,
Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Jónsson, A. K., Morris, P., Muscettola, N., Rajan, K. & Smith, B. (2000), Planning in
Interplanetary Space: Theory and Practice, in ‘Proc. Artificial Intelligence Planning
and Scheduling (AIPS)’.
Lemai-Chenevier, S. (2004), IXTET-EXEC : planning, plan repair and execution control
with time and resource management, PhD thesis, Institut National Polytechnique
de Toulouse, Toulouse, France.
Lumpkin, R. & Pazos, M. (2007), ‘Measuring surface currents with surface velocity pro-
gram drifters: the instruments, its data and some recent results’, Lagrangian Anal-
ysis and Prediction of Coastal and Ocean Dynamics .
McGann, C., Berger, E., Boren, J., Chitta, S., Gerkey, B., Glaser, S., Marder-Eppstein,
E., Marthi, B., Meeussen, W., Pratkanis, T. & Wise, M. (2009), Model-based, Hier-
archical Control of a Mobile Manipulation Platform, in ‘4th Workshop on Planning
and Plan Execution for Real World Systems, ICAPS’.
McGann, C., Py, F., Rajan, K. & Olaya, A. (2009), Integrated Planning and Execution
for Robotic Exploration, in ‘Intnl. Workshop on Hybrid Control of Autonomous
Systems, in IJCAI’09’, Pasadena, California.
McGann, C., Py, F., Rajan, K., Ryan, J. P. & Henthorn, R. (2008), Adaptive Control for
Autonomous Underwater Vehicles, in ‘Proceedings of the Assoc. for the Advance-
ment of Artificial Intelligence (AAAI)’, Chicago, IL.
McGann, C., Py, F., Rajan, K., Ryan, J. P., Thomas, H., Henthorn, R. & McEwen, R.
(2008), Preliminary Results for Model-Based Adaptive Control of an Autonomous
Underwater Vehicle, in ‘Intnl. Symp. on Experimental Robotics (ISER)’, Athens.
McGann, C., Py, F., Rajan, K., Thomas, H., Henthorn, R. & McEwen, R. (2008), A
Deliberative Architecture for AUV Control, in ‘Intnl. Conf. on Robotics and Au-
tomation (ICRA)’, Pasadena.
McPhee-Shaw, E. (2006), ‘Boundary-interior exchange. reviewing the idea that internal-
wave mixing enhances lateral dispersal near continental margins’, Deep-Sea Research
II 53, 45–49.
Molcard, A., Piterbarg, L. I., Griffa, A., Ozgokmen, T. M. & Mariano, A. (2003), ‘As-
similation of drifter observations for the reconstruction of the eulerian circulation
field’, J. Geophys. Res 108(C3), 1 1–1 21.
Muscettola, N. (1994), HSTS: Integrating Planning and Scheduling, in M. Fox &
M. Zweben, eds, ‘Intelligent Scheduling’, Morgan Kaufmann.
Muscettola, N., Dorais, G., Fry, C., Levinson, R. & Plaunt, C. (2002), IDEA: Planning
at the Core of Autonomous Reactive Agents, in ‘Proc. 3rd Intnl. NASA Workshop
on Planning and Scheduling for Space’.
Muscettola, N., Nayak, P., Pell, B. & Williams, B. (1998), ‘Remote Agent: To Boldly Go
Where No AI System Has Gone Before’, Artificial Intelligence 103, 5–48.
NDDL Reference (2011).
URL: http://code.google.com/p/europa-pso/wiki/NDDLReference
Py, F., Rajan, K. & McGann, C. (2010), A Systematic Agent Framework for Situated
Autonomous Systems, in ‘Proc. 9th International Conf. on Autonomous Agents and
Multiagent Systems (AAMAS)’, Toronto, Canada.
Py, F., Ryan, J., O’Reilly, T., Thomas, H. & Rajan, K. (2011), ‘Following Ocean Fronts
by Dynamically Retargeting AUVs from Shore’, Med. Marine Sci. J, Spl. Issue on
AUVs . Submitted, under review.
Rajan, K., Bernard, D., Dorais, G., Gamble, E., Kanefsky, B., Kurien, J., Millar, W.,
Muscettola, N., Nayak, P. P., Rouquette, N., Smith, B., Taylor, W. & Tung, Y.
(2000), Remote Agent: An Autonomous Control System for the New Millennium,
in ‘Proc. Prestigious Applications of Intelligent Systems, ECAI’.
Rudnick, D. L. & Perry, M. J. (2003), ALPS: Autonomous and Lagrangian Platforms
and Sensors, Workshop Report, Technical report, www.geo-prose.com/ALPS.
Ryan, J. P., Johnson, S., Sherman, A., Rajan, K., Py, F., Thomas, H., Harvey, J., Bird,
L., Paduan, J. & Vrijenhoek, R. (2010), ‘Mobile autonomous process sampling within
coastal ocean observing systems’, Limnology & Oceanography: Methods 8, 394–402.
Salido, M. A. & Barber, F. (2006), ‘Distributed CSPs by Graph Partitioning’, Applied
Mathematics and Computation 183, 212–237.
T-REX source code (2011).
URL: http://code.google.com/p/trex2-agent/
Willcox, J. S., Bellingham, J. G., Zhang, Y. & Baggeroer, A. B. (2001), ‘Performance met-
rics for oceanographic surveys with autonomous underwater vehicles’, IEEE Journal
of Ocean Engineering 26, 711–725.
Zilberstein, S. (1996), ‘Using anytime algorithms in intelligent systems’, AI Magazine
17(3), 73–83.