Foundations and Trends® in Databases
Vol. 1, No. 1 (2007) 1–140
© 2007 A. Deshpande, Z. Ives and V. Raman
DOI: 10.1561/1900000001
Adaptive Query Processing
Amol Deshpande1, Zachary Ives2, and Vijayshankar Raman3
1University of Maryland, USA, amol@cs.umd.edu
2University of Pennsylvania, USA, zives@cis.upenn.edu
3IBM Almaden, USA, ravijay@us.ibm.com
Abstract
As the data management field has diversified to consider settings in
which queries are increasingly complex, statistics are less available, or
data is stored remotely, there has been an acknowledgment that the
traditional optimize-then-execute paradigm is insufficient. This has led
to a plethora of new techniques, generally placed under the common
banner of adaptive query processing, that focus on using runtime feed-
back to modify query processing in a way that provides better response
time or more efficient CPU utilization.
In this survey paper, we identify many of the common issues, themes,
and approaches that pervade this work, and the settings in which each
piece of work is most appropriate. Our goal with this paper is to be
a “value-add” over the existing papers on the material, providing not
only a brief overview of each technique, but also a basic framework
for understanding the field of adaptive query processing in general.
We focus primarily on intra-query adaptivity of long-running, but not
full-fledged streaming, queries. We conclude with a discussion of open
research problems that are of high importance.
1 Introduction
One of the fundamental breakthroughs of Codd’s relational data
model [33] was the identification of how to take a declarative, logic-
based formulation of a query and convert it into an algebraic query
evaluation tree. As described in every database textbook, this enabled
physical data independence and promised many benefits: the database
administrator and the DBMS optimizer became free to choose among
many different storage formats and execution plans to answer a declar-
ative query. The challenge, since then, has been how to deliver on these
promises regardless of where or how the data is laid out, how com-
plex the query is, and how unpredictable the operating environment is.
This challenge has spurred 30 years of query processing research.
Cost-based query optimization, pioneered by Selinger et al. [102] in
System R and refined by generations of database researchers and devel-
opers, has been tremendously effective in addressing the needs of rela-
tional DBMS query processing: one can get excellent performance for
queries over data with few correlations, executed in a relatively stable
environment, given sufficient statistical information.
However, when even one of these characteristics is not present, the
System R-style optimize-then-execute model begins to break down: as
noted in [69], optimizer error begins to build up at a rate exponential
in the size of the query. As the database field has broadened to con-
sider more general data management, including querying autonomous
remote data sources, supporting continuous queries over data streams,
encoding and retrieving XML data, supporting OLAP and data mining
operations, and combining text search with structured query capabili-
ties, the weaknesses of the traditional optimization model have begun
to show themselves.
In response, there has been a surge of interest in a broad array of
techniques termed adaptive query processing (AQP). AQP addresses
the problems of missing statistics, unexpected correlations, unpre-
dictable costs, and dynamic data by using feedback to tune execution. It
is one of the cornerstones of so-called autonomic database management
systems, although it also generalizes to many other contexts, particu-
larly at the intersection of database query processing and the Web.
The spectrum of adaptive query processing techniques has been
quite broad: they may span multiple query executions or adapt within
the execution of a single query; they may affect the query plan being
executed or the scheduling of operations within the plan; they have been
developed for improving performance of local DBMS queries (e.g., [75,
87, 112]), for processing distributed and streaming data (e.g., [6, 72, 88,
92, 101]), and for performing distributed query execution (e.g., [115]).
This survey is an attempt to cover the fundamental issues, tech-
niques, costs, and benefits of adaptive query processing. We begin
with a broad overview of the field, identifying the dimensions of
adaptive techniques. Then we focus our analysis on the spectrum of
approaches available to adapt query execution at runtime primarily
in a non-streaming context. Where possible, we focus on simplifying
and abstracting the key concepts of each technique, rather than repro-
ducing the full details available in the papers; we consider generaliza-
tions of the specific published implementations. Our goal is to identify
the strengths and limitations of the different techniques, demonstrate
when they are most useful, and suggest possible avenues of future
research.
In the rest of the section, we present a brief overview of query pro-
cessing in relational database systems (Section 1.1) and elaborate on
the reasons behind the push toward adaptivity (Section 1.2); we then
present a road map for the rest of the survey (Section 1.3), and briefly
discuss the related surveys of interest (Section 1.4).
1.1 Query Processing in Relational Database Systems
The conventional method of processing a query in a relational DBMS
is to parse the SQL statement and produce a relational calculus-like
logical representation of the query, and then to invoke the query opti-
mizer, which generates a query plan. The query plan is fed into an
execution engine that directly executes it, typically with little or no
runtime decision-making (Figure 1.1).
The query plan can be thought of as a tree of unary and binary
relational algebra operators, where each operator is annotated with
specific details about the algorithm to use (e.g., nested loops join versus
hash join) and how to allocate resources (e.g., memory). In many cases
the query plan also includes low-level “physical” operations like sorting,
network shipping, etc. that do not affect the logical representation of
the data.
Certain query processors consider only restricted types of queries,
rather than full-blown SQL. A common example of this is select-
project-join or SPJ queries: an SPJ query essentially represents a single
SQL SELECT-FROM-WHERE block with no aggregation or subqueries.
Fig. 1.1 Query processing in database systems.
An even further restriction is conjunctive queries, which are SPJ queries
that only have conjunctive predicates in the WHERE clause; these can be
represented as single rules in the Datalog language.
The model of query processing established with the System R
project [102], which is still followed today, is to divide query processing
into three major stages.
Statistics generation is done offline (typically using the RUNSTATS
or UPDATE STATISTICS command) on the tables in the database. The
system profiles the relation instances, collecting information about car-
dinalities and numbers of unique attribute values, and often generating
histograms over certain fields.
The second stage, which is normally done at runtime,1 is query opti-
mization. The optimization stage is very similar to traditional compi-
lation; in fact, in some systems, it generates directly executable code.
Optimization uses a combination of cost estimation, where the run-
ning times of query subexpressions are estimated (based on known
performance characteristics of algorithm implementations, calibration
parameters for hardware speed, and the statistics generated for the
relations), pruning heuristics (which are necessary to reduce the over-
all search space), and exhaustive enumeration. For relatively simple
queries with good statistics, the plans produced by a query optimizer
can be quite good, although as discussed previously, this is less true in
more complex settings.
The final stage, query execution, is handled by an engine analogous
to a virtual machine or interpreter for the compiled query plan. There
are several important aspects of query execution that are of note. The
first is that in general it is desirable to pipeline computation, such
that each operator processes a tuple at a time from its sub-operators,
and also propagates a single tuple to its parent for processing. This
leads to better response time in terms of initial answers, and often
higher throughput as delays are masked. However, not all operators
are naturally amenable to pipelining (e.g., operators like sorting and
grouping often must process the entire table before they can determine
1Except for certain embedded SQL queries, which may be pre-optimized or optimized once
for multiple possible input bindings.
what tuple to output next). Also, complex query plans may require too
many resources to be fully pipelined. In these settings, the optimizer
must break the plan into multiple segments, materializing (storing)
intermediate results at the end of each stage and using that as an
input to the next stage.
Second, the issue of scheduling computation in a query plan has
many performance implications. Traditional query processing makes
the assumption that an individual operator implementation (e.g., a
nested loops join) should be able to control how CPU cycles are allo-
cated to its child operators. This is achieved through a so-called itera-
tor [53] architecture: each operator has open, close, and getNextTuple
methods. The query engine first invokes the query plan root node’s
open method, which in turn opens its children, and the process repeats
recursively down the plan. Then getNextTuple is called on the root
node. Depending on the operator implementation, it will make calls to
its children’s getNextTuple methods until it can return a tuple to its
parent. The process continues until no more tuples are available, and
then the engine closes the query plan.
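To make this concrete, the following is a minimal sketch of the iterator interface and the pull-based control flow it induces, written in Python with illustrative operator classes that are not taken from any particular system:

# Minimal sketch of the iterator model; class and method names are illustrative.
class Scan:
    def __init__(self, rows):
        self.rows = rows
    def open(self):
        self.it = iter(self.rows)
    def getNextTuple(self):
        return next(self.it, None)          # None signals end-of-stream
    def close(self):
        self.it = None

class Select:
    def __init__(self, child, pred):
        self.child, self.pred = child, pred
    def open(self):
        self.child.open()
    def getNextTuple(self):
        # pull from the child until a tuple satisfies the predicate
        while (t := self.child.getNextTuple()) is not None:
            if self.pred(t):
                return t
        return None
    def close(self):
        self.child.close()

# The engine drives execution by pulling tuples from the root of the plan.
plan = Select(Scan([{"age": 20}, {"age": 30}]), lambda t: t["age"] > 25)
plan.open()
while (t := plan.getNextTuple()) is not None:
    print(t)
plan.close()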
An alternate approach, so-called data-driven or dataflow schedul-
ing [121], is used in many parallel database systems. Here, in order to
allow for concurrent computation across many machines, the data pro-
ducers, not the consumers, control the scheduling. Each operator
takes data from an input queue, processes it, and sends it to an output
queue. Scheduling is determined by the rates at which the queues are
filled and emptied. In this survey, we will discuss a number of adaptive
techniques that in essence use a hybrid of the iterator and data-driven
approaches.
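By contrast, a rough sketch of data-driven scheduling (again illustrative Python, not any specific engine) drives each operator stage from its input queue rather than from getNextTuple calls issued by its consumer:

import queue
import threading

# Each operator stage reads from an input queue and writes to an output queue;
# execution is driven by data arrival rather than by calls from the consumer.
def filter_stage(pred, in_q, out_q):
    while True:
        t = in_q.get()
        if t is None:                # end-of-stream marker
            out_q.put(None)
            break
        if pred(t):
            out_q.put(t)

in_q, out_q = queue.Queue(), queue.Queue()
threading.Thread(target=filter_stage,
                 args=(lambda t: t % 2 == 0, in_q, out_q),
                 daemon=True).start()
for t in [1, 2, 3, 4, None]:         # the producer fills the input queue
    in_q.put(t)
while (t := out_q.get()) is not None:
    print(t)                         # prints 2 and 4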
1.2 Motivations for AQP
Over the years, many refinements have been made to the basic query
processing technology discussed above. Since CPUs are more powerful
today and query workloads are much more diverse, query optimizers
perform a more comprehensive search of the space of query plans with
joins, relying less on pruning heuristics. Selectivity estimation tech-
niques have become more accurate and consider skewed distributions
(and to a limited extent, attribute correlations). However, the System
R-style approach has begun to run into its fundamental limits in recent
years, primarily due to the emergence of new application domains in
which database query processing is being applied. In particular, triggers
for this breakdown include the following:
Unreliable cardinality estimates: The cost estimation
process depends critically on estimates of the cardinality
of various query subexpressions. Despite significant work
on building better statistics structures and data collection
schemes, many real-world settings have either inaccurate or
missing statistics. (In some circumstances, as with remote
data sources, statistics may be difficult or impossible to
obtain.) Even when base-table statistics are perfect, cor-
relations between predicates can cause intermediate result
cardinality estimates to be off by several orders of magni-
tude [69, 112].
Queries with parameter markers: SQL is not a pleasant
language for end users, so most database queries are issued by
a user clicking on a form. The SQL for such queries invariably
contains parameter markers (for form input), and the pre-
computed query plans for such queries can be substantially
worse than optimal for some values of the parameters.
Dynamically changing data, runtime, and workload
characteristics: In many environments, especially data
streams [23, 88, 92], queries might be long-running, and the
data characteristics and hence the optimal query plans might
change during the execution of the query. The runtime costs
can also change dramatically, especially in wide-area envi-
ronments. Similarly, fluctuating query workloads can result
in variations in the resources available to execute a query
(e.g., memory), making it necessary to adapt.
Complex queries involving many tables: Query opti-
mizers typically switch to a heuristic approach when queries
become too complex to be optimized using the dynamic pro-
gramming approach. Such queries are naturally more prone
to estimation errors [69], and the use of heuristics exacerbates
the problem.
Interactive querying: The optimize-then-execute model
does not mesh well with an interactive environment where
a user might want to cancel or refine a query after a few
seconds of execution: the metric changes too quickly for opti-
mization to pay off [61]. Also, pipelined execution and early-
result scheduling, even in the presence of slow data sources,
becomes paramount.
Need for aggressive sharing: Though there has been
much work in multi-query optimization, so far no definitive
solutions have emerged in this area. Traditional databases
make do with almost no inter-query state sharing because
their usage pattern is made up of a small number of
queries against large databases. However, sharing the data
as well as the computation is critical in environments
such as data streams, which feature a very large num-
ber of (typically simple) queries over a small set of data
streams [28, 86].
There have been two responses to the challenges posed above. The
first, a very pragmatic response by application vendors, has been to
build domain-specific optimization capabilities outside the DBMS and
override its local optimizer. Many commercial DBMSs allow users to
specify “hints” on what access methods and join orders to use, via SQL
or catalog tables. Recently, SAP has built an application-level query
processor that runs only a very limited set of plans (essentially, only
table scans), but at very high efficiency [82]. While this achieves SAP’s
target of satisfying its users, it runs counter to the database commu-
nity’s goals of developing high-performance, general-purpose processors
for declarative queries.
Our interest in this survey is on the second development, which
has been the focus of the academic and commercial DBMS research
community: the design and construction of what have come to be known
as adaptive (or autonomic) query processing systems, that use runtime
feedback to adapt query processing.
1.3 Road Map
We begin with a brief introduction to query optimization in relational
database systems (Section 2). We then discuss some of the foundations
of AQP, namely, three new operators, and several unifying concepts
that we use throughout the survey to illustrate the AQP techniques,
to analyze them, and to differentiate between them (Section 3).
We begin our discussion of adaptive query processing by considering
a simple class of queries called selection ordering queries (Section 4).
The discussion of adaptation techniques for join queries is divided into
three parts, roughly based on the space of the query execution plans
they explore. We begin with a discussion of techniques for adapting
pipelined query execution (Sections 6 and 7), and cover non-pipelined
query execution in Section 8. We conclude the survey with a discussion
of some of the most important research challenges in adaptive query
processing (Section 9).
1.4 Related Work
A number of surveys on query processing are related to this paper.
We assume basic familiarity with many of the ideas of Graefe’s survey
on query execution techniques [53]. Kossmann’s survey on distributed
query processing [79] also provides useful context for the discussion, as
do Ioannidis and Chaudhuri’s surveys on query optimization [24, 68].
Babu and Bizarro [8] also present a survey of AQP from a different
means of classification from our own (whether the scheme is plan-based,
routing-based, or continuous query-based).
2 Background: Conventional Optimization Techniques
Before beginning to discuss adaptive query processing, we review some
of the key concepts in single-pass, non-adaptive query optimization in
relational databases (Section 2.1). Our goal is not to review general-
purpose query optimization of full-blown SQL queries (we refer the
reader to the surveys by Ioannidis [68] and Chaudhuri [24] for a broader
overview of the topic). Instead, we aim to lay down the foundations for
discussing AQP techniques in the later sections. We then briefly discuss
the impact of cost estimates on the optimization process, and present
strategies that have been proposed to make plan selection robust to
erroneous estimates (Section 2.2).
2.1 Query Optimization
While general query optimization may consider GROUP BY [26] and
multi-block SQL queries [58, 78, 89, 105], the heart of cost-based opti-
mization lies in selection ordering and join enumeration, upon which we
focus in this section. We begin with presenting the plan spaces that are
explored for these problems, and the static planning algorithms com-
monly used. In some specific cases, more efficient and simpler planning
algorithms can be used, and we discuss these as appropriate later in
the survey.
2.1.1 Selection Ordering: Plan Space
Selection ordering refers to the problem of determining the order in
which to apply a given set of commutative selection predicates (filters)
to all the tuples of a relation, so as to find the tuples that satisfy all the
predicates. Figure 2.1 shows an example selection query over a persons
relation and several query plans for that query.
Let the query to be evaluated be a conjunction1 of $n$ commutative
predicates, $S_1, \ldots, S_n$, to be applied to the tuples from a relation $R$. We
denote the cost of $S_i$ by $c_i$, and the probability that a tuple satisfies the
predicate $S_i$ by $p(S_i)$, or $p_i$. Similarly, we denote the probability that a
tuple satisfies predicates $S_{j_1}, \ldots, S_{j_k}$ by $p(S_{j_1} \wedge \cdots \wedge S_{j_k})$. If a tuple is
Fig. 2.1 An example query with two expensive user-defined predicates, two serial plans for
it, and one conditional plan that uses age to decide which of the two expensive predicates
to apply first.
1Queries with disjunctions are often treated as equivalent to a union of multiple queries in
conjunctive normal form, although more efficient schemes exist for evaluating such plans
with “bypass” operators [32] or multi-query optimization strategies [86, 91]. We do not
consider such techniques in this survey and refer the reader to those works.
known to have already satisfied a set of predicates $\{S_{j_1}, \ldots, S_{j_k}\}$, then
we denote the conditional probability that the tuple also satisfies $S_i$ by
$p(S_i \mid S_{j_1}, \ldots, S_{j_k})$. Note that if the predicates are independent of each
other (an assumption typically made), then $p(S_i \mid S_{j_1}, \ldots, S_{j_k}) = p(S_i)$.
Serial Plans: The natural class of execution plans to consider for eval-
uating such queries is the class of serial orders that specify a single order
in which the predicates should be applied to the tuples of the relation
(Figure 2.1). Given a serial order, $S_{\pi_1}, \ldots, S_{\pi_n}$, where $\pi_1, \ldots, \pi_n$ denotes
a permutation of $1, \ldots, n$, the expected cost per tuple for that order can
be written as:
$$c_{\pi_1} + p(S_{\pi_1})\,c_{\pi_2} + \cdots + p(S_{\pi_1} \wedge \cdots \wedge S_{\pi_{n-2}} \wedge S_{\pi_{n-1}})\,c_{\pi_n}.$$
Or, if the predicates are independent of each other:
$$c_{\pi_1} + p(S_{\pi_1})\,c_{\pi_2} + \cdots + p(S_{\pi_1}) \times \cdots \times p(S_{\pi_{n-1}})\,c_{\pi_n}.$$
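As a small worked illustration, the independent-predicates formula can be evaluated as follows (a Python sketch; the names costs, sels, and order are hypothetical and stand for the $c_i$, the $p_i$, and the permutation $\pi$):

# Sketch: expected per-tuple cost of applying predicates in the given order,
# assuming independent predicates (all names are illustrative).
def expected_cost(order, costs, sels):
    total, frac_reaching = 0.0, 1.0
    for i in order:
        total += frac_reaching * costs[i]   # only tuples reaching S_i pay c_i
        frac_reaching *= sels[i]            # fraction that also satisfies S_i
    return total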
Conditional Plans: Conditional plans [17, 39] generalize serial plans
by allowing different predicate evaluation orders to be used for different
tuples based on the values of certain attributes. This class of plans can
be especially beneficial when the attributes are highly correlated with
each other, and when there is a large disparity in the evaluation costs
of the predicates.
Figure 2.1 shows an example conditional plan that uses an inex-
pensive predicate on age, which is strongly correlated with both the
query predicates, to decide which of the two predicates to apply first.
Specifically, for tuples with age > 25, the predicate on credit score is
likely to be more selective than the predicate on education, and hence
should be applied first, whereas the opposite would be true for tuples
with age < 25.
2.1.2 Selection Ordering: Static Planning
The static planning algorithms for selection ordering are quite simple
if the predicates are independent of one another; however, the planning
problem quickly becomes NP-Hard in the presence of correlated predicates.
Independent Predicates: If the predicates are independent of one
another, as is commonly assumed by most query optimizers, the optimal
serial order can be found in $O(n \log n)$ time by simply sorting the
predicates in the increasing order of $c_i/(1 - p_i)$ [60, 67, 80]. Under
independence assumptions, conditional plans offer no benefit over serial
plans.
The expression $c_i/(1 - p_i)$ is commonly referred to as the rank of
the predicate, and this algorithm is hence called rank ordering.
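A minimal sketch of rank ordering, reusing the expected_cost helper above (all numbers are hypothetical, and the sketch assumes every $p_i < 1$ so that the rank is well defined):

def rank_order(costs, sels):
    # sort predicate indices by rank c_i / (1 - p_i), lowest rank first
    return sorted(range(len(costs)), key=lambda i: costs[i] / (1.0 - sels[i]))

costs = [1.0, 5.0, 0.2]              # hypothetical per-tuple evaluation costs
sels = [0.5, 0.1, 0.6]               # hypothetical selectivities p_i
order = rank_order(costs, sels)      # -> [2, 0, 1] here
print(order, expected_cost(order, costs, sels))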
Correlated Predicates: The complexity of the planning problem, in
this case, depends on the way the correlations are represented. Babu
et al. [10] show that the problem of finding the optimal order for a
given dataset D (similarly, for a given random sample of the underly-
ing relation) is NP-Hard. However, to our knowledge, it is not known
whether the problem is tractable if the correlations are represented in
some other format, e.g., using probabilistic graphical models [38, 49].
Assuming that the conditional probabilities for the predicates can
somehow be computed using the correlation information, the following
greedy algorithm (called Greedy henceforth) can be used [10]:
Algorithm 2.1 The Greedy algorithm for correlated selection
ordering.
Input: A set of predicates, $S_i$, $i = 1, \ldots, n$; a procedure to compute
conditional probabilities.
Output: A serial plan, $S_{\pi_1}, \ldots, S_{\pi_n}$, for applying the predicates to the
tuples.
1. Choose $S_{\pi_1}$ to be the predicate $S_i$ with the lowest $c_i/(1 - p(S_i))$
among all predicates.
2. Choose $S_{\pi_2}$ to be the predicate $S_j$ with the lowest $c_j/(1 - p(S_j \mid S_{\pi_1}))$
among the remaining predicates; $p(S_j \mid S_{\pi_1})$ denotes the
conditional probability of $S_j$ being true, given that $S_{\pi_1}$ is true.
3. Choose $S_{\pi_3}$ to be the predicate $S_k$ with the lowest $c_k/(1 - p(S_k \mid S_{\pi_1}, S_{\pi_2}))$
among the remaining predicates.
4. Continue until no predicates are left.
For instance, if the correlation information is given in the form of
a dataset (or a random sample) D, then the conditional probabilities
can be computed by scanning D repeatedly as required. This algorithm
can be shown to approximate the optimal solution within a constant
factor of 4 [10]; for reasonable numbers of predicates, the bound is even
smaller (e.g., for n = 20, the bound is only 2.35).
Finding an optimal conditional plan for a given dataset D can be
shown to be NP-Hard as well [39] (using a straightforward reduction
from the binary decision tree construction problem). Several heuristic
algorithms are presented in [39], but to our knowledge no approximation
algorithms are known for this problem.
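The following sketch puts these two ideas together: the Greedy algorithm above, with conditional probabilities computed by rescanning the portion of a sample D that survives the predicates chosen so far. The function and variable names are illustrative, not taken from [10]:

# Greedy correlated selection ordering over a sample D; each sample "tuple" is
# a dict mapping predicate name -> whether the tuple satisfies that predicate.
def greedy_order(costs, sample):
    remaining = set(costs)
    chosen, surviving = [], list(sample)
    while remaining:
        def rank(p):
            if not surviving:
                return costs[p]                       # no evidence left; fall back to cost
            sel = sum(t[p] for t in surviving) / len(surviving)  # p(S_p | chosen so far)
            return costs[p] / max(1.0 - sel, 1e-9)    # guard against sel == 1
        best = min(remaining, key=rank)
        chosen.append(best)
        remaining.remove(best)
        surviving = [t for t in surviving if t[best]] # keep tuples passing the chosen predicate
    return chosen

sample = [{"S1": True, "S2": False}, {"S1": False, "S2": False}]
print(greedy_order({"S1": 1.0, "S2": 3.0}, sample))   # -> ['S1', 'S2']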
2.1.3 Multi-way Join Queries: Plan Space
In picking a query plan for a multi-way select-project-join (SPJ) query,
an optimizer has to make many choices, the most important being:
access methods, join order, join algorithms, and pipelining.
Access Methods: The optimizer needs to pick an access method for
each table in the query. Typically there are many choices, including a
direct table scan, a scan over an index (also called “index-only access”
or “vertical partitioning” in the literature), an index-based lookup on
some predicate over that table, or an access from a materialized view.
Some stream or wide-area data sources may support only one access
method: a continual stream of tuples entering the query processor. Oth-
ers may allow binding parameters to be passed akin to index probes or
the two-way semijoin [36].
Join Order: The access methods provide source tuples to the query
plan. To join these tables, the optimizer then chooses a join order.
Definition 2.1. A join order is a tree, with the access methods as
leaves; each internal node represents a join operation over its inputs.
Optimizers usually avoid performing relational cross-products, so
they typically consider trees where each join (internal node) can
apply some join predicates (over the node’s descendants) specified
in the query. Traditionally, these internal nodes have been binary
joins, but many adaptive techniques use n-ary join operators as well
(Section 3.1.3). Each join order corresponds to a way of placing paren-
thesis around tables in the query: it is equivalent to a parse tree for
an expression in which each join’s input is parenthesized. (The poor
name “join order” arose because historically DBMSs first supported
only “left-deep” query plans; we will continue to use this terminology
in this paper.) A left-deep or “left-linear” plan is one where only the left
child of a join operator may be the result of another relational algebra
operation; the right child must be a base relation. See Figure 2.2(ii).
Since the join order is a tree, it may not cover all join predicates.
In particular, if the join predicates in a query form a cycle, the join
order can only cover a spanning tree of this cycle. So, as part of join
ordering, the optimizer also picks a spanning tree over the join graph.
Join predicates on edges that are eliminated (to remove cycles) are
applied after performing the join, as “residual” predicates.
Figure 2.2 shows an example multi-way join query that we use as a
running example, and two possible join orders for it: one using a tree
of binary join operators and the other using a ternary join operator.
Join Algorithms: The next aspect of join execution is the physical
join algorithm used to implement each join in the join order: nested
loops join, merge join, hash join, etc. These decisions are typically made
Fig. 2.2 (i) A multi-way join query that we use as the running example; (ii) a left-deep join
order, in which the right child of each join must be a base relation; (iii) a join order that
uses a ternary join operator.
during cost estimation, along with the join order selection, and are
partly constrained by the available access methods. For instance, if
the data from a relation is streaming in, pipelined join operators must
be used. Similarly, an index access method is often ideally suited for
use with an index nested-loops join or sort-merge join.
Pipelining vs. Materialization: One of the biggest differentiating
factors between query plans for join queries is whether they contain a
blocking operator or not. A blocking operator is one that needs to hold
intermediate state in order to compute its output because it requires
more than one pass over the input to create its output, e.g., a sort
(which must sort the input) or a hash join (where the “build” relation is
stored in a hash table and probed many times). In some cases, resource
constraints might force materialization of intermediate state.
Query plans for SPJ queries can be classified into two classes:
Non-pipelined plans: These contain at least one blocking
operator that segments execution into stages: the blocking
operator materializes its sub-results to a temporary table,
which is then read by the next operator in a subsequent step.
Each materialization is performed at a materialization point,
illustrated in Figure 2.3 (i).
Fig. 2.3 (i) A non-pipelined plan; (ii) a pipelined plan.
Pipelined plans: These plans execute all operators in the
query plan in parallel, by sending the output tuples of
an operator directly to the next operator in the pipeline.
Figure 2.3(ii) shows an example pipelined plan that uses
hash join operators. A subtlety of this plan is that it is actu-
ally only “partly pipelined”: the traditional hash join per-
forms a build phase, reading one of the source relations into
a hash table, before it begins pipelining tuples from input to
output. Symmetric or doubly pipelined hash join operators
(Figure 2.4) are fully pipelined: when a tuple appears at
either input, it is incrementally added to the corresponding
hash table and probed against the opposite hash table. Sym-
metric operators are extensively used when quicker response
time is needed, or when the inputs are streaming in over a
wide-area network, as they can read tuples from whichever
input is available, and they incrementally produce output
based on the tuples received so far.
We will continue to refer to plans such as the one
shown in Figure 2.3(ii) as pipelined plans, as long as all
the major operations in the plan are executed in a single
pipeline.
Fig. 2.4 Symmetric hash join (doubly pipelined hash join) operator builds hash tables on
both inputs.
These two plan classes offer very different adaptation opportuni-
ties because of the way they manipulate state. Proposed adaptation
schemes for pipelined plans usually involve changes in the tuple flow,
whereas adaptation schemes in non-pipelined plans typically involve
re-evaluating the rest of the plan at the materialization points.
2.1.3.1 A Variation: SPJ Queries over Data Streams
In recent years, a new class of queries has become popular: queries
where the data sources are (infinite) data streams. To execute join
queries over streams, sliding windows must be specified on the data
streams in the query. As an example, if the relations R and S referred to
in the above example query are streaming relations, then a query may be
written in the CQL language [4] as follows:
select *
from R [rows 100], S [range 60 min], T, U
where R.a = S.a and S.b = T.b and S.c = U.c
This query specifies two sliding windows. At any time, the sliding win-
dow on R (a tuple-based window) contains the last 100 tuples of R. The
window on S, a time-based window, contains all tuples that arrived dur-
ing the last 60 min.
Because of the potentially endless nature of data streams, only fully
pipelined plans can be used for executing them. In other respects, the
query execution remains essentially the same, except that the earliest
tuples may have to be deleted from the join operator data structures
when they have gone outside the range of the window (i.e., its size or
duration). For example, if the symmetric hash join operator shown in
Figure 2.4 is used to execute the join between R and S, then tuples
should be removed from the hash tables once they have been followed
by sufficient tuples to exceed the sliding window’s capacity.
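As an illustration, a tuple-based window (as in "R [rows 100]") for one input of a symmetric hash join might be maintained as follows; the class and method names here are hypothetical:

from collections import defaultdict, deque

class TupleWindow:
    # One join input with a tuple-based sliding window of max_rows tuples.
    def __init__(self, max_rows, key):
        self.max_rows, self.key = max_rows, key
        self.order = deque()                 # arrival order, used for eviction
        self.table = defaultdict(list)       # hash table on the join key
    def insert(self, t):
        self.order.append(t)
        self.table[self.key(t)].append(t)
        if len(self.order) > self.max_rows:  # evict the oldest tuple
            old = self.order.popleft()
            self.table[self.key(old)].remove(old)
    def probe(self, k):
        return self.table.get(k, [])         # matches for a tuple from the other input

A time-based window would instead evict tuples whose arrival timestamps fall outside the current window duration.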
2.1.4 Multi-way Join Queries: Static Planning
The wide array of choices in access methods, join orders, and join
algorithms results in an enormous space of execution plans: one that
is exponential in the number of tables and predicates in the query,
in the number of access methods on each table, and in the number
of materialized views available. Query optimizers search in this plan
space through dynamic programming, applying a number of heuris-
tics to prune the choices and make optimization tractable. The basic
method is the one proposed in the original System R paper [102]. We
briefly summarize this here (for a more thorough survey of query opti-
mization, we refer the reader to [24, 68]).
The System R paper made a number of ground-breaking observa-
tions. First, projection, and in many cases selection, can be pushed
down as a heuristic. Hence, the key to optimizing an SQL query is
reasoning about join order and algorithm selection.
The second key observation is that it is fairly easy to develop cost
modeling equations for each join implementation: given page size, CPU
and disk characteristics, other machine-specific information, and the
cardinality information about the input relations, the cost of execut-
ing a join can be easily computed. The challenge is to estimate the
cardinality of each join's output result; this forms one of the input
relations to a subsequent join, and hence impacts all future cost esti-
mation. System R only did limited reasoning about join selectivities,
primarily based on hard-coded heuristics.
The third key idea, perhaps the most insightful, was that in
general, the cost of joining two subexpressions is independent of how
those subexpressions were computed. The optimal way of computing
some join expression thus can be posed as the problem of consider-
ing all ways of factoring the expression into two subexpressions, opti-
mally computing those subexpressions, and joining them to get the
final expression. This naturally leads to a dynamic programming for-
mulation: the System R optimizer finds all optimal ways of retrieving
data from the source relations in the query, then all optimal ways of
joining all pairs of relations, then all optimal ways of joining pairs
with one additional relation, etc., until all combinations have been
considered.
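The dynamic programming formulation can be sketched as follows. This is an illustrative Python rendering, not System R's implementation: scan_cost and join_cost stand in for a black-box cost model, the enumeration considers all two-way splits (i.e., bushy plans) rather than only the left-deep plans System R used, and interesting orders and Cartesian-product deferral are ignored.

from itertools import combinations

def best_plan(relations, scan_cost, join_cost):
    # best maps a set of relations to (cost, plan); a plan is a nested pair tree.
    best = {frozenset([r]): (scan_cost(r), r) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for k in range(1, size):
                for left in map(frozenset, combinations(tuple(subset), k)):
                    right = subset - left
                    lcost, lplan = best[left]
                    rcost, rplan = best[right]
                    cost = lcost + rcost + join_cost(lplan, rplan)
                    if subset not in best or cost < best[subset][0]:
                        best[subset] = (cost, (lplan, rplan))
    return best[frozenset(relations)]

rels = ["R", "S", "T", "U"]
print(best_plan(rels, scan_cost=lambda r: 100,
                join_cost=lambda l, r: 50))       # toy constant-cost model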
One exception to the independence property described above is the
impact of sorting: a single sort operation can add significant one-time
overhead, but this may be amortized across multiple merge joins. Sys-
tem R separately considers each so-called interesting order (i.e., sort
order that might be exploited when joining or grouping), as well as an
“unconstrained order” in the dynamic programming table.
Only after optimizing joins (while simultaneously maximally push-
ing down selections and projections) did System R consider correlated
subqueries and grouping, and for these, it generally did very lit-
tle reasoning, and simply added a final nested loops join or group-by
operator at the end.
System R limited its search in two ways to make the query opti-
mization problem more tractable on 1970s hardware. First, it deferred
reasoning about Cartesian products until all possible joins were evalu-
ated: the assumption was that such plans were likely to be inefficient.
Second, System R only considered left-deep or left-linear plans. (Left-
linearness is significant because the right input to a nested loops join
is conventionally the inner loop.)
Modern optimizers have extended System R’s approach to plan
enumeration in a number of key ways, in part because complex plans
and operations became more prevalent and in part because machines
became more powerful and could afford to do more optimization.
Plan enumeration with other operators: Both Star-
burst [58] and Volcano [56] extended the cost estimation
component of query optimization to explore combinations of
operators beyond simply joins. For instance, group-by push-
down [26] and shipment of data in a distributed setting [85]
are often considered during cost estimation.
Top-down plan enumeration: The original System R opti-
mizer enumerated plans in true dynamic programming fashion,
starting with single relations and building increasingly com-
plex expressions. Some modern optimizers, following a model
established in Volcano and Cascades [54], use top-down enu-
meration of plans, i.e., recursion with memoization. The pri-
mary benefit of this approach is that it enables early pruning
of subexpressions that will never be used: branch-and-bound
pruning, in particular, can be used to “short-circuit” the com-
putation of subexpressions whose cost is higher than an exist-
ing, alternative expression that subsumes the existing one.
Cross-query-block optimization: In contrast to System
R’s model, which processed each block of an SQL query sep-
arately, most modern DBMSs allow for optimizations that
move predicates across blocks and in some cases even per-
form sophisticated rewritings, such as magic sets transfor-
mations [104]. Typically these rewritings are chosen using a
combination of heuristics and cost estimation.
Broader search space: A modern query optimizer often
considers bushy plans (where two arbitrary sub-results may
be joined, as opposed to requiring one of the inputs to be a
leaf) as well as early Cartesian products. Additionally, when
the query expression becomes sufficiently large, rather than
attempting to do dynamic programming on all possible plans,
most optimizers first partition the work using heuristics, and
then run dynamic programming on each plan segment.
Of course, plan enumeration is only half of the query optimization
story. In many ways, the “magic” of query optimization lies not in the
plan enumeration step, but in the process of estimating a particular
plan’s cost.
2.2 Choosing an Effective Plan
As we previously described, the query optimizer can create a composite
estimate of overall query execution cost from its knowledge of
individual operators’ costs, system parameters, and data distribu-
tions, ultimately enabling it to choose the minimal-estimated-cost
plan from its plan enumeration space.
Naturally, one of the most essential aspects of the cost estimation
process is selectivity estimation: given a set of input relations (which
may be stored on disk or derived by the query) and an operation, the
estimator predicts the cardinality of the result. This estimated cardi-
nality then provides a prediction of the size of one of the inputs to the
next subsequent operation, and so forth, until query completion.
The original System R implementation made use of very sim-
ple selectivity estimation strategies: the DBMS maintained cardinal-
ity information for each table, and selectivity estimation made use of
this information, as well as minimum and maximum values of indexed
attributes and several ad hoc “magic ratios” to predict cardinali-
ties [102].
All modern DBMSs employ more sophisticated techniques, relying
on histograms created offline (via RUNSTATS or UPDATE STATISTICS)
to record the distributions of selection and join key attributes. As
described in [24], these histograms are not only used to estimate the
results of selection predicates, but also joins. Unfortunately, because
attributes are often correlated and different histograms may be diffi-
cult to “align” and intersect, significant error can build up in this pro-
cess [69]. This has motivated a great deal of research into either making
query plans more robust, pre-encoding contingent or conditional plans,
or performing inter-query adaptation of the cost model. We proceed to
discuss each of these ideas in turn.
2.2.1 Robust Query Optimization
In some cases the query optimizer has a choice between a “conserva-
tive” plan that is likely to perform reasonably well in many situations,
or a more aggressive plan that works better if the cost estimate is accu-
rate, but much worse if the estimate is slightly off. Chu et al. [30, 31]
propose least expected cost optimization where the optimizer attempts
to find the plan that has the lowest expected cost over the different
values the parameters can take, instead of finding the lowest cost plan
for the expected values of the parameters. The required probability dis-
tributions over the parameters can be computed using histograms or
query workload information. This is clearly a more robust optimization
goal, assuming only one plan can be chosen and the required probability
distributions can be obtained. (The latter may be difficult if the work-
load is diverse, the data changes, or there are complex correlations.)
An approach along similar lines was proposed by Babcock and Chaud-
huri [7] recently, called robust query optimization. Here the authors
provide a knob to tune the predictability of the desired plan vs. the
performance by using such probability distributions.
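A toy numeric illustration of the least-expected-cost idea (all numbers are hypothetical): the plan that is cheapest at the expected parameter value need not be the plan with the lowest expected cost over the parameter distribution.

param_dist = {5_000: 0.5, 35_000: 0.5}           # cardinality -> probability
plans = {
    "plan_A": lambda n: 1e-5 * n * n,             # cheap for small inputs, convex in n
    "plan_B": lambda n: 5_000.0,                  # constant-cost fallback
}
expected_param = sum(n * p for n, p in param_dist.items())           # 20,000
at_expected = min(plans, key=lambda k: plans[k](expected_param))     # -> plan_A
least_expected = min(
    plans, key=lambda k: sum(p * plans[k](n) for n, p in param_dist.items()))
print(at_expected, least_expected)                # plan_A vs plan_B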
Error-aware optimization (EAO) [119] makes use of intervals over
query cost estimates, rather than specifying the estimates for single
statistical points. EAO focuses primarily on memory usage uncertainty.
A later work, Rio [9], provides several features including the use of inter-
vals. It generates linear query plans (a slight variation of the left-linear
or left-deep plan, in that one of the two inputs to every join, not
necessarily the right one, must be a base relation) and uses bound-
ing boxes over the estimated cardinalities in order to find and prefer
robust plans. (Rio’s proactive reoptimization features are discussed in
Section 8).
A different means of making plans more robust is to employ more
sophisticated operators, for instance, n-way pipelined hash joins, such
as MJoins [120] (discussed in Section 3.1.3) or eddies [6] (discussed in
Section 3.1.2). Such operators dramatically reduce the number of plans
considered by the query optimizer, although potentially at the cost of
some runtime performance.
2.2.2 Parametric Query Optimization
An alternative to finding a single robust query plan is to find a small
set of plans that are appropriate for different situations. Parametric
query optimizers [46, 52, 70] postpone certain planning decisions to
runtime, and are especially attractive in scenarios where queries are
compiled once and executed repeatedly, possibly with minor parame-
ter changes. They choose among the possible previously chosen plan
alternatives at the start of execution (i.e., before the first pipeline
begins) and in some cases at intermediate materialization points. The
simplest form uses a set of query execution plans annotated with
a set of ranges of parameters of interest; just before query execu-
tion commences, the current parameter values are used to find the
appropriate plan.
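In its simplest form, the runtime selection step might look like the following sketch, where plans are annotated with hypothetical parameter ranges:

# Sketch: pick a precomputed plan whose annotated parameter range covers the
# current parameter value; plan names and ranges are illustrative.
def pick_plan(annotated_plans, param_value):
    # annotated_plans: list of ((low, high), plan) entries
    for (low, high), plan in annotated_plans:
        if low <= param_value < high:
            return plan
    return annotated_plans[-1][1]                 # fall back to the last plan

plans = [((0, 1000), "index-nested-loops"), ((1000, float("inf")), "hash-join")]
print(pick_plan(plans, 250))                      # -> "index-nested-loops"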
Graefe et al. [34, 52] propose dynamic query evaluation plans for this
purpose, where special choose-plan operators are used to make decisions
about the plans to use based on the runtime information. Some of the
choices may not be finalized until after the query has begun executing
(as opposed to only at the beginning of execution); this allows the
possibility of using the intermediate result sizes as parameters to make
the decisions.
Ioannidis et al. [70] study parametric optimization for the buffer
size parameter, and propose randomized algorithms for generating the
parametric plans. More recently, several researchers have begun a sys-
tematic study of parametric query optimization by understanding the
complexity of the problem [46, 98], and by considering specific forms
of cost functions that may be easier to optimize for, generalizing from
linear to piecewise linear to convex polytopes [64, 65, 94].
Despite the obvious appeal of parametric query optimization, there
has not been much research in this area and the commercial adoption
has been nearly non-existent. The bane of this technique is determining
what plans to keep around: the space of all optimal plans is super-
exponential in the number of parameters considered. More recently,
an alternative idea was proposed in the progressive reoptimization
work [87], where validity ranges are associated with query plans: if the
values of the parameters are observed to be outside these ranges at the
time of the execution, reoptimization is invoked. This idea is largely
shared in progressive parametric query optimization (PPQO) [19],
which combines the ideas of parametric query optimization with pro-
gressive reoptimization: the reoptimizer is called when error exceeds
some bound (the Bounded implementation) or when there is no “near
match” among the set of possible plan configurations (the Ellipse
implementation).
2.2.3 Inter-Query Adaptivity: Self-tuning and Autonomic Optimizers
Several techniques have been proposed that passively observe the query
execution and incorporate the knowledge learned from these previous
query executions to better predict the selectivity estimates in future.
Chen and Roussopoulos [27] propose adaptive selectivity estimation,
where an attribute distribution is approximated using a curve-fitting
function that is learned over time by observing the query results.
Self-tuning histograms [1, 22] are similar in spirit, but focus on gen-
eral multi-dimensional distributions that are approximated using his-
tograms instead of curve-fitting. These approaches have the additional
advantage of not having to directly access the data to build the statis-
tical summaries: they in effect “snoop” on the intermediate results of
queries.
More recently, the LEarning Optimizer (LEO) project [112] gen-
eralizes this basic idea by monitoring arbitrary subexpressions within
queries as they execute and comparing the actual observed selectivities
with the optimizer’s selectivity estimates. This information is then used
to compute adjustments to the optimizer’s estimates that may be used
during future optimizations of similar queries. If a significant difference
is observed, reoptimization may be triggered during execution; we will
discuss this aspect of LEO in more detail in Section 8.
Similarly, Bruno and Chaudhuri [21] propose gathering statistics
on query expressions (SITS) during query execution and using those
during optimization of future queries. Their main focus is on deciding
which of the query expressions, among a very large number of possible
candidates, to maintain such statistics on.
In some sense, these techniques form a feedback loop, across the
span of different queries, in the design of an adaptive database sys-
tem. Many of them are also fairly easy to integrate into a commercial
database system, as evidenced by the development of self-tuning wiz-
ards and autonomic optimizers in leading DBMSs.
2.3 Summary
The traditional, single-pass, optimize-then-execute strategy for query
execution has served the database community quite well since the
1970s. As queries have become more complex and widespread, how-
ever, they have started to run into limitations. Schemes for robust opti-
mization, parametric optimization, and inter-query adaptivity alleviate
some of these difficulties by reducing sensitivity to errors. A significant
virtue of these methods is that they impose little runtime overhead on
query execution. Perhaps even more importantly, they serve as a simple
upgrade path for conventional single-pass query processors.
However, there are settings in which even these techniques run into
limitations: for instance, if the query workload is highly diverse, then
subsequent queries may have little overlap; if the actual costs change
frequently, as they may in highly distributed settings, then the recal-
ibration may not be beneficial; if the data itself changes frequently,
as it may in streaming, data integration, or high-throughput settings,
then the new statistics may be out of date. In these settings, we would
like to immediately react to such changes or adapt the current query
plan. Through the remainder of this survey, we focus on such intra-
query adaptive query processing techniques that adapt the execution
of a single query, for greater throughput, improved response time or
more useful incremental results.
3 Foundations of Adaptive Query Processing
The goal of adaptive query processing is to find an execution plan and
a schedule that are well-suited to runtime conditions; it does this by
interleaving query execution with exploration or modification of the
plan or scheduling space. At a very high level, the differences between
various adaptive techniques can be explained as differences in the way
they interleave. Standard, System R-style [102] query processing does
full exploration first, followed by execution. Evolutionary techniques
like choose nodes [34] or mid-query reoptimization [75] interleave plan-
ning and execution a few times, whereas more radical techniques like
eddies [6] interleave them to the point where they are not even clearly
distinguishable.
We note that the adaptive aspect is not free: given sufficient infor-
mation, an offline optimization strategy could define a plan or a col-
lection of plans (e.g., for different partitions of relations) that always
matches or improves the execution times of the adaptive technique,
without the overhead of exploration. However, in reality such infor-
mation is seldom accurately available in advance, which is the major
motivation for studying adaptivity. Different techniques incur different
amounts of overhead in order to explore the search space and to adapt
an executing query plan.
In this section, we introduce some of the foundations of adaptive
query processing. We begin with presenting three new operators that
play a central role in several adaptive techniques that we discuss in this
survey (Section 3.1). We then present a framework called the adaptiv-
ity loop that we use for illustrating the AQP techniques throughout
this survey (Section 3.2), and discuss how the behavior of an adaptive
technique may be analyzed post-mortem using traditional query plans
(Section 3.3). We conclude with an analysis of several example systems
using these two concepts (Section 3.4).
3.1 New Operators
Traditional database engines are optimized for disk-based I/O, which
(at least on a modern storage architecture) can supply data to a DBMS
at very high rates. The goal is to have very tight control loops, carefully
scheduled to minimize the per-tuple overhead [53]. This type of execu-
tion, however, tends to severely restrict the possible adaptation oppor-
tunities, especially when wide-area data sources or data streams are
involved. In this section, we present three new operators that address
these problems by allowing for greater scheduling flexibility and more
opportunities for adaptation. Such flexibility requires greater memory
consumption and more execution overhead, but the result is still often
superior performance.
We note that our focus here is on describing the underlying mech-
anisms; we postpone the discussion of most of the policy aspects to
later sections.
3.1.1 Symmetric Hash Join Operators
The traditional hash join operator is not very well suited for adap-
tive query processing; it must wait for the build relation to fully arrive
before it can start processing the probe relation and producing results.
This makes it unsuitable for handling wide-area data sources and data
streams, where the inputs arrive in an interleaved fashion, and continu-
ous result generation is desired. Further, this operator severely restricts
adaptation opportunities since the build relations must be chosen in
advance of query execution and adapting these decisions can be costly.
The symmetric hash join operator [99, 121] introduced in Section
2.1.3 solves both these problems by building hash tables on both inputs
(Figure 2.4); when an input tuple is read, it is stored in the appropriate
hash table and probed against the opposite table, resulting in incremen-
tal output. Because the operator is symmetric, it can process data from
either input, depending on availability. This operator also enables addi-
tional adaptivity since it has frequent moments of symmetry [6], points
at which the join order can be changed without compromising correct-
ness or losing work. As a result, this operator has formed the
cornerstone of many AQP techniques.
The symmetric hash join operator does have a significant disadvan-
tage in that the memory footprint is much higher since a hash table
must be built on the larger input relation as well.
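A minimal sketch of the operator's core logic, in Python with illustrative names (overflow to disk, thread scheduling, and window maintenance are omitted):

from collections import defaultdict

class SymmetricHashJoin:
    # Hash tables are built on both inputs; each arriving tuple is inserted
    # into its own side's table and probed against the opposite side's table.
    def __init__(self, left_key, right_key):
        self.keys = {"L": left_key, "R": right_key}
        self.tables = {"L": defaultdict(list), "R": defaultdict(list)}
    def push(self, side, tup):
        k = self.keys[side](tup)
        self.tables[side][k].append(tup)                 # build on this side
        other = "R" if side == "L" else "L"
        for match in self.tables[other].get(k, []):      # probe the other side
            yield (tup, match) if side == "L" else (match, tup)

join = SymmetricHashJoin(lambda r: r["a"], lambda s: s["a"])
list(join.push("L", {"a": 1, "x": 10}))                  # no matches yet
print(list(join.push("R", {"a": 1, "y": 20})))           # one incremental result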
Several pieces of work build upon the original proposal for this oper-
ator that was developed for dataflow-based parallel query processing.
Both XJoin [117] and the doubly pipelined hash join [72] adapted the
operator to a multi-threaded architecture, using producer–consumer
threads instead of a dataflow model, and they also included strategies
for handling overflow to disk when memory capacity is exceeded. In
general, they only perform runtime decisions in terms of deciding what
to overflow to disk and when to process overflowed data.
Another extension, called ripple join, proposed for interactive query
processing, adapts the order in which tuples are read from the inputs
so as to rapidly improve the precision of approximated aggregates; in
addition to equality join predicates, ripple joins can also be used for
non-equijoins [59].
3.1.2 Eddy [6]
Avnur and Hellerstein [6] proposed the eddy operator for enabling fine-
grained run-time control over the query plans executed by the query
engine. The basic idea behind the approach they propose is to treat
query execution as a process of routing tuples through operators, and
to adapt by changing the order in which tuples are routed through the
Fig. 3.1 Example of an eddy instantiated for a 4-way join query (taken from Avnur and
Hellerstein [6]). A routing table can be used to record the valid routing destinations, and
possibly current probabilities for choosing each destination, for different tuple signatures.
operators (thereby, in effect, changing the query plan being used for the
tuple). The eddy operator, which is used as the tuple router, monitors
the execution, and makes the routing decisions for the tuples.
Figure 3.1 shows how an eddy can be used to execute a 4-way join
query. Along with an eddy, three join operators and one selection oper-
ator are instantiated. The eddy executes the query by routing tuples
from the relations R, S, and T through these operators; a tuple that
has been processed by all operators is sent to the output. The eddy can
adapt to changing data or operator characteristics by simply changing
the order in which the tuples are routed through these operators. Note
that the operators themselves must be chosen in advance (this was
somewhat relaxed by a later approach called SteMs that we discuss in
Section 6). These operator choices dictate, to a large degree, the plans
among which the eddy can adapt. Pipelined operators like symmetric
hash join offer the most freedom in adapting and typically also provide
immediate feedback to the eddy (for determining the operator selectiv-
ities and costs). On the other hand, blocking operators like sort-merge
operators are not very suitable since they do not produce output before
consuming the input relations in their entirety.
Various auxiliary data structures are used to assist the eddy during
the execution; broadly speaking, these serve one of two purposes:
1. Determining Validity of Routing Decisions: In general, arbi-
trary routing of tuples through the operators is not always correct.
As an example, the eddy should not route R tuples to the selec-
tion operator σP(T), since that operator expects and operates on
T tuples. In fact, R tuples should only be routed to the R ⋈ S
operator.
Typically, some form of tuple-level lineage, associated with each
tuple, is used to determine the validity of routing decisions. One option
is to use the set of base relations that a tuple contains and the operators
it has already been routed through, collectively called the tuple signature, as
the lineage. However, for efficient storage and lookups, compact repre-
sentations of this information are typically used instead. For instance,
the original eddies proposal advocated attaching two bitsets to each
tuple, called done and ready, that respectively encode the information
about the operators that the tuple has already been through, and the
operators that the tuple can be validly routed to next [6]. We will see
several other approaches for handling this later.
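To make the bitset encoding concrete, here is a minimal sketch in Python; the class and method names are illustrative assumptions, not the interface of the original implementation. Each vector is kept as an integer bitmask with one bit per operator.

```python
# Minimal sketch of done/ready lineage bitmasks for an eddy, assuming a fixed
# set of operators known at query start. Names are illustrative only.

class EddyTuple:
    def __init__(self, num_ops, initially_ready):
        self.num_ops = num_ops
        self.done = 0                 # bit i set => operator i has processed this tuple
        self.ready = initially_ready  # bit i set => operator i is a valid next destination

    def mark_done(self, op_index, newly_ready=0):
        """Record that operator op_index has processed the tuple; operators that
        only become applicable afterwards (e.g., a join needing attributes the
        tuple just acquired) are added to the ready set."""
        self.done |= 1 << op_index
        self.ready = (self.ready | newly_ready) & ~self.done

    def valid_destinations(self):
        """Operators the tuple may still be routed to."""
        return [i for i in range(self.num_ops) if self.ready & (1 << i)]

    def is_complete(self):
        """All done bits set: the tuple has been through every operator and can be output."""
        return self.done == (1 << self.num_ops) - 1
```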
2. Implementation of the Routing Policy: Routing policy refers
to the set of rules used by the eddy to choose a routing destination for
a tuple among the possible valid destinations. This is the most critical
component of an eddy and is directly responsible for the performance
benefits of adaptivity. To facilitate clear distinctions between differ-
ent routing policies, we will use the following unifying framework to
describe a routing policy.
The routing policy data structures are classified into two parts:
Statistics about the query execution: To aid in making
the routing decisions, the eddy monitors certain data and
operator characteristics; the specific statistics maintained
depend on the routing policy. Since the eddy processes every
tuple generated during query execution, it can collect such
statistics at a very fine granularity. However, the cost of
maintaining detailed statistics could be high, and must be
weighed against the expected adaptivity benefits of gather-
ing such statistics.
Routing table: This data structure stores the valid routing
destinations for all possible tuple signatures. Figure 3.1 shows
an example of such a routing table for our example query.
As discussed above, more compact, bitmap-based representa-
tions may be used instead for efficient storage and lookups.
To allow for probabilistic choices (so that the eddy can
explore other alternatives during execution), a probability
may be associated with each destination. Deterministic poli-
cies are simulated by requiring that the routing table have
exactly one destination with non-zero (= 1) probability for
each tuple signature; in that case, we further assume that
destination is the first destination listed for the tuple in the
routing table.
Given these data structures, the eddy follows a two-step process for
routing a tuple:
Step 1: The eddy uses the statistics gathered during the
execution to construct or change the routing table. This
may be done on a per-tuple basis, or less frequently.
Step 2: The eddy uses the routing table to find the valid
destinations for the tuple, and chooses one of them and
routes the tuple to it. If the eddy is using probabilistic rout-
ing, this step involves a random number generation and a
scan over the possible destinations (this can be reduced to
O(H(p)) using a Huffman Tree, where H(p) denotes the
entropy of the probability distribution over the destina-
tions [63]). If the eddy is using deterministic routing, this
takes O(1) time since we assume that the deterministic des-
tination is listed first in the routing table.
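The sketch below illustrates this two-step process under the framework just described; it is a simplified illustration, not any system's actual policy. The routing table maps tuple signatures to (operator, probability) pairs, the selectivity-based reweighting in Step 1 is a placeholder heuristic, and the Huffman-tree optimization mentioned above is omitted in favor of a plain scan over the destinations.

```python
import random

# Sketch of an eddy's routing-policy plumbing: statistics, a routing table,
# and the two-step routing process. The reweighting rule is a placeholder.

class RoutingPolicy:
    def __init__(self, routing_table, rebuild_every=1000):
        # routing_table: tuple signature -> list of (operator, probability)
        self.routing_table = routing_table
        self.stats = {}                # operator -> {'in': tuples routed, 'out': tuples returned}
        self.rebuild_every = rebuild_every
        self.seen = 0

    def record(self, op, tuples_in, tuples_out):
        s = self.stats.setdefault(op, {'in': 0, 'out': 0})
        s['in'] += tuples_in
        s['out'] += tuples_out

    def step1_maybe_rebuild(self):
        # Step 1: periodically recompute routing probabilities from the
        # statistics, here crudely favoring more selective operators.
        self.seen += 1
        if self.seen % self.rebuild_every:
            return
        for sig, dests in self.routing_table.items():
            weights = []
            for op, _ in dests:
                s = self.stats.get(op, {'in': 1, 'out': 1})
                sel = s['out'] / max(s['in'], 1)
                weights.append(1.0 - min(sel, 0.99))   # placeholder heuristic
            total = sum(weights) or 1.0
            self.routing_table[sig] = [(op, w / total)
                                       for (op, _), w in zip(dests, weights)]

    def step2_choose(self, signature, deterministic=False):
        # Step 2: look up the valid destinations for the signature and pick one.
        dests = self.routing_table[signature]
        if deterministic:
            return dests[0][0]   # by convention the single non-zero entry is listed first
        r, acc = random.random(), 0.0
        for op, p in dests:
            acc += p
            if r <= acc:
                return op
        return dests[-1][0]
```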
This framework minimizes the architectural overhead of adaptivity.
The policy overhead of using a routing policy depends largely on the
frequency with which the eddy executes Step 1 and the statistics that
it collects; these two factors also determine how well and how quickly
the eddy can adapt. This overhead can be minimized with some careful
engineering (e.g., by amortizing the cost of Step 1 over many tuples [37],
or by using random sampling to maintain the statistics). We will revisit
this issue in more detail when we discuss specific routing policies in the
later sections.
3.1.3 n-ary Symmetric Hash Joins/MJoins
An n-ary Symmetric Hash Join (called MJoin henceforth) [95, 120]
generalizes the binary symmetric hash join operator to multiple inputs
by treating the input relations symmetrically and by allowing the tuples
from the relations to arrive in an arbitrary interleaved fashion. An
MJoin has several attractive features over the alternative option of a
tree of binary join operators, especially in data stream processing and
adaptive query processing.
Figure 3.2 shows an example MJoin operator instantiated for a
4-way join query. MJoins build a hash index on every join attribute
of every relation in the query. In the example, three hash indexes (that
share the base tuples) are built on the S relation, and one hash table
each is built on the other relations. For acyclic query graphs, the total
number of hash tables ranges from 2(n-1) when every join is on a
different attribute to n when all joins are on the same attribute.
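As a quick check of these bounds against the example (my own arithmetic, using only the counts already stated above):

```latex
% Worked count for the 4-way example, assuming the three joins are
% R-S, S-T, S-U on three distinct attributes of S, consistent with the
% three hash indexes on S and one each on R, T, U described in the text:
%   3 + 1 + 1 + 1 = 6 hash tables.
2(n-1) = 2(4-1) = 6
\qquad \mbox{vs.} \qquad
n = 4 \ \mbox{hash tables if all joins were on the same attribute.}
```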
An MJoin uses a lightweight tuple router to route the tuples from
one hash table to another. The eddy operator can also be used for this
purpose [95].
During the query execution, when a new tuple from a relation
arrives into the system, it is first built into the hash tables on that rela-
tion, and then the hash tables corresponding to the remaining relations
are probed in some order to find the matches for the tuple. The order
in which the hash tables are probed is called the probing sequence. For
example, when a new tuple s ∈ S arrives into the system, the following
steps are taken:
s is built into the (three) hash indexes on the relation S. Let the
probing sequence chosen for s be T → R → U (Figure 3.2).
s is used to probe into the hash table on T to find matches.
Intermediate tuples are constructed by concatenating s and the matches, if any.
If any result tuples were generated during the previous step, they are
used to probe into the hash tables on R and U, in that order.
Fig. 3.2 Executing a 4-way join query using the MJoin operator. The triangles denote the
in-memory hash indexes built on the relations.
Similarly, when a new R tuple arrives, it is first built into the hash
table on R. It is then probed into the hash table on S on attribute S.a
first, and the resulting matches are then probed into the hash tables
on T and U. Note that the R tuple is not eligible to be probed into
the hash tables on T or U directly, since it does not contain the join
attributes corresponding to either of those two joins.
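A minimal sketch of this build-then-probe flow is given below (Python; the class name, the dict-based tuple representation, and the probing-sequence encoding are assumptions made for illustration, not the interfaces of [95, 120]). Memory overflow and duplicate handling are ignored, and the probing sequence is assumed to be valid for its source relation.

```python
from collections import defaultdict

# Minimal MJoin sketch. Tuples are dicts mapping attribute names such as 'S.a'
# to values; the "plan" is simply one probing sequence per source relation.

class MJoin:
    def __init__(self, join_attrs, probing_sequences):
        # join_attrs: relation -> attributes it is hashed on, e.g. {'S': ['S.a', 'S.b', 'S.c'], ...}
        # probing_sequences: relation -> ordered list of (target_rel, probe_attr, target_attr)
        self.join_attrs = join_attrs
        self.probing_sequences = probing_sequences
        self.hash_tables = defaultdict(lambda: defaultdict(list))  # (rel, attr) -> value -> tuples

    def insert(self, rel, tup):
        # Build: add the new tuple to every hash index on its own relation.
        for attr in self.join_attrs[rel]:
            self.hash_tables[(rel, attr)][tup[attr]].append(tup)
        # Probe: drive the tuple through the other relations' hash tables in
        # probing-sequence order; intermediate tuples are concatenated dicts
        # and are never stored in the hash tables themselves.
        results = [tup]
        for target, probe_attr, target_attr in self.probing_sequences[rel]:
            matches = []
            for r in results:
                for m in self.hash_tables[(target, target_attr)].get(r[probe_attr], []):
                    matches.append({**r, **m})
            results = matches
            if not results:
                return []          # no matches at this step; no output for this tuple
        return results             # fully joined result tuples
```

Under this sketch, changing the query plan amounts to replacing probing_sequences[rel] between tuple arrivals, which is exactly the adaptivity hook discussed above.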
An MJoin is significantly more attractive than a tree of binary
operators when processing queries involving sliding windows over data
streams; when a base tuple from a relation drops out of the sliding win-
dow, only the base tuple needs to be located and removed from a hash
table, since intermediate tuples are not stored in the hash tables. Fur-
ther, MJoins are naturally very easy to adapt; the query plan being used
can be changed by simply changing the probing sequence. For these two
reasons, much work in data stream processing has focused on using an
MJoin-style execution [12, 86]. However, MJoins tend not to perform
as well as trees of binary join operators, especially for non-streaming
data. This is mainly because they do not reuse the intermediate results;
we will revisit this issue in more detail in Sections 6 and 7.
Memory Overflow: Handling memory overflow is harder with
MJoins than with symmetric hash joins, especially if the probing
sequences are changed during execution. Viglas et al. [120] study this
problem and present a technique called coordinated flushing that aims
to maximize the output rate while allowing tuples to be spilled to disk.
This technique, however, can only be applied if all joins in the query are
on the same attribute. More recently, Bizarro and DeWitt [18] general-
ize this to the non-identical join attributes case, and present a scheme
that processes an input tuple maximally given the in-memory parti-
tions of the relations; the remaining portions of the relations are joined
at the end using a clean-up phase. Exploring the interactions between
out-of-core execution, adaptivity, and cost metrics remains a rich area
for future research (cf. Section 9).
3.2 Adaptivity Loop
In intra-query adaptivity, regular query execution is supplemented with
a control system for monitoring query processing and adapting it.
Adaptive control systems are typically componentized into a four-part
loop that the controller repeatedly executes (either in line or in parallel
with normal tuple processing, see Figure 3.3):
Measure: An adaptive system periodically or even continuously monitors parameters that are appropriate for its goals.
Analyze: Given the measurements, an adaptive system evaluates how well it is meeting its goals, and what (if anything) is going wrong.
Plan: Based on the analysis, an adaptive system makes certain decisions about how the system behavior should be changed.
Actuate: After the decision is made, the adaptive system executes the decision, by possibly doing extra work to manipulate the system state.
Fig. 3.3 Adaptivity loop.
In the context of adaptive query processing, the Measure step
involves monitoring data characteristics like cardinalities and distribu-
tions, and system characteristics like memory utilization and network
bandwidth. Analysis is primarily done with respect to performance
goals or estimates, although there have been some recent proposals for
broader QoS-aware DBMSs that address goals like availability [14, 57,
66, 84, 106]. The Plan step can involve a traditional query optimizer
(in, e.g., mid-query reoptimization [9, 73, 75, 87]), or be a routing pol-
icy that is performed as part of execution (in eddies [6, 40, 95]). The
Actuate step corresponds to switching from one plan to another, with
careful state migration to reuse work and ensure correct answers.
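The skeleton below sketches how such a loop might sit alongside execution (Python; all component names are hypothetical, and real systems typically interleave these steps with tuple processing rather than running them on a fixed timer in a separate thread).

```python
import threading
import time

# Skeleton of the Measure/Analyze/Plan/Actuate loop, run here in a background
# thread. Each step delegates to components a concrete system would supply
# (statistics monitor, cost analyzer, reoptimizer or routing policy, executor).

class AdaptivityController:
    def __init__(self, monitor, analyzer, planner, executor, period_sec=1.0):
        self.monitor = monitor      # Measure: cardinalities, selectivities, memory, bandwidth
        self.analyzer = analyzer    # Analyze: compare observations to estimates or alternatives
        self.planner = planner      # Plan: optimizer call or routing-policy update
        self.executor = executor    # Actuate: switch plans, migrating state as needed
        self.period_sec = period_sec
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            stats = self.monitor.collect()               # Measure
            diagnosis = self.analyzer.evaluate(stats)    # Analyze
            if diagnosis.suggests_change:                # placeholder attribute
                new_plan = self.planner.replan(stats, diagnosis)  # Plan
                self.executor.switch_to(new_plan)                 # Actuate
            time.sleep(self.period_sec)  # period trades reaction speed against overhead

    def stop(self):
        self._stop.set()
```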
The steps above are simple in concept but involve difficult engi-
neering to achieve overall efficiency and correctness. Measurement,
analysis, and planning all add overhead to normal query processing,
and it is often tricky to balance the ability to react quickly to newly
discovered information against the ability to rapidly process tuples
once a good plan has been chosen (the classic exploration–exploitation
dilemma).
Throughout this survey, we will (where appropriate) relate the vari-
ous adaptive techniques to this four-part loop, as a way of understand-
ing the key aspects of the technique and its relative trade-offs. First,
we provide further detail on each of the stages.
3.2.1 Measurement
In general, all intra-query adaptation methods perform some form of
monitoring, such as measuring cardinalities at key points in plan execu-
tion. As we will see in Sections 6–8, mid-query reoptimization [75, 87]
does this at the end of each pipeline stage, and it may also add statis-
tics collectors (i.e., histogram construction algorithms) at points that
are judged to be key; eddies measure the selectivities and costs of the
operators; corrective query processing [73] (originally called convergent
query processing) maintains cardinality information for all operators
and their internal state (including aggregation).
Often it is desirable to explore the costs of options other than the
currently executing query plan. Eddies perform exploration as a part of
execution, by routing tuples through different alternative paths, serv-
ing the goals of execution and information gathering simultaneously.
Antoshenkov’s work on DEC Rdb [3] adopted a different, competitive
strategy in which multiple alternative plans were redundantly run in
parallel; once enough information was gathered to determine which
plan appeared most promising, all other plans were terminated.
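As a rough illustration only (this is not the DEC Rdb mechanism, whose internals [3] are not described here), a competitive strategy can be sketched as running candidate plans concurrently and discarding the losers:

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

# Illustrative competitive execution: launch alternative plans in parallel and,
# once one of them finishes first, cancel the rest. A real system would judge
# the winner from partial progress and would cooperatively cancel plans that
# are already running (Future.cancel only stops plans not yet started, and the
# executor's shutdown still waits for running work to finish).

def run_competitively(plans, execute_plan):
    """plans: list of plan descriptions; execute_plan(plan) -> result."""
    with ThreadPoolExecutor(max_workers=len(plans)) as pool:
        futures = {pool.submit(execute_plan, p): p for p in plans}
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            f.cancel()
        winner = next(iter(done))
        return futures[winner], winner.result()
```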
Finally, at times it is more efficient to stratify the search and mea-
surement space, by executing a plan in a series of steps, and hence mea-
suring only the active portions of query execution. Choose nodes [52]
and mid-query reoptimization [75, 87] follow this strategy, interleaving
measurement-plus-execution with analysis and actuation.
3.2.2 Analysis
The analysis step focuses on determining how well execution is pro-
ceeding relative to original estimates or to the estimated or mea-
sured costs of alternative strategies. There are two major caveats to
this phase. First, the only way to know precise costs of alternative
strategies is through competitive execution, which is generally expen-
sive. Thus most adaptive strategies employ sampling or cost modeling
to evaluate the alternatives. Second, and perhaps more fundamentally,
all adaptive strategies analyze (some portion of) past performance and
use that to predict future performance. Clearly, in theory an adversarial
workload can be created that does not follow this property; however, all
evidence suggests that in practice, such a situation is highly unlikely.
Most adaptive techniques make use of some form of plan cost model-
ing in the analysis phase, comparing current execution to what was orig-
inally expected or what is estimated to be possible. Some approaches,
such as mid-query reoptimization and corrective query processing, use
the query optimizer’s cost modeler to perform analysis. Eddies and
their descendants [86, 95] generally do not perform full cost model-
ing (with the exception of [40]); instead, they rely on local routing
heuristics to achieve a similar effect.
3.2.3 Planning
Planning is often closely interlinked with analysis, since it is quite com-
mon that the same process that reveals a plan to be performing poorly
will also suggest a new plan. Mid-query reoptimization and its variants
(Section 8) compute plans in stages using analysis from the current
stage to produce a new plan for the next stage, and supplementing
it with the appropriate measurement operators. Corrective query pro-
cessing (Section 7) incrementally collects information as it computes a
query, and it uses this information to estimate the best plan for the
remaining input data.
In some cases, changing the query plan requires additional
“repairs”: Query scrambling, for instance, may change the order of exe-
cution of a query plan, and in some cases plan synthesis [118] is required
(see Section 8.3). Similarly, corrective query processing requires a com-
putation to join among intermediate results that were created in dif-
ferent plans; this is often done in a “cleanup” or “stitch-up” phase.
Eddies and their descendants do not plan in the same fashion as the
other strategies: they use queue length, cost, and selectivity estimation
to determine a “next step” in routing a tuple from one operator to another.
SteMs [95] and STAIRs [40] use different planning heuristics to manage
the intermediate state, in essence performing the same actions as the
stitch-up phase described above, but without postponing them to the