
On Combining Shortest-Path and Back-Pressure

Routing Over Multihop Wireless Networks

Lei Ying

ECE Department

Iowa State University

Email: leiying@iastate.edu

Sanjay Shakkottai and Aneesh Reddy

ECE Department

The University of Texas at Austin

Email: {shakkott, areddy}@ece.utexas.edu

Abstract—Back-pressure-based algorithms, building on the algorithm by Tassiulas and Ephremides, have recently received much attention for jointly routing and scheduling over multi-hop wireless networks. However, a significant weakness of this approach has been in routing, because the traditional back-pressure algorithm explores and exploits all feasible paths between each source and destination. While this extensive exploration is essential in order to maintain stability when the network is heavily loaded, under light or moderate loads packets may be sent over unnecessarily long routes, and the algorithm can be very inefficient in terms of end-to-end delay and routing convergence times.

This paper proposes new routing/scheduling back-pressure algorithms that not only guarantee network stability (throughput optimality), but also adaptively select a set of optimal routes based on shortest-path information in order to minimize average path-lengths between each source and destination pair. Our results indicate that under the traditional back-pressure algorithm, the end-to-end packet delay first decreases and then increases as a function of the network load (arrival rate). This surprising low-load behavior is explained by the fact that the traditional back-pressure algorithm exploits all paths (including very long ones) even when the traffic load is light. On the other hand, the proposed algorithm adaptively selects a set of routes according to the traffic load so that long paths are used only when necessary, thus resulting in much smaller end-to-end packet delays as compared to the traditional back-pressure algorithm.

I. INTRODUCTION

Due to the scarcity of wireless bandwidth resources, it

is important to efficiently utilize resources to support high-

throughput, high-quality communications over multi-hop wire-

less networks. In this context, good routing and scheduling al-

gorithms are needed to dynamically allocate wireless resources

to maximize the network throughput region. To address this,

throughput-optimal1 routing and scheduling, first developed in

the seminal work of [1], has been extensively studied [2], [3],

[4], [5], [6], [7], [8], [9], [12], [13], [14], [15]. We refer to

[10], [11] for a comprehensive survey. While these algorithms

maximize the network throughput region, additional issues

need to be considered for practical deployment.

With the significant increase of real-time traffic (an article

by Ellacoya [16] published in 2007 suggests that video-

streaming accounts for 36% of today’s HTTP traffic), end-

to-end delay becomes very important in network algorithm

1A routing/scheduling algorithm is throughput-optimal if it can stabilize

any traffic that can be stabilized by any other routing/scheduling algorithm.

design. The traditional back-pressure algorithms stabilize the

network by exploiting all possible paths between source-

destination pairs (thus load balancing over the entire network).

While this might be needed in a heavily loaded network,

this seems unnecessary in a light or moderate load regime.

Exploring all paths is in fact detrimental – it leads to packets

traversing excessively long paths between sources and desti-

nations leading to large end-to-end packet delays.

This paper proposes a new routing/scheduling back-pressure

algorithm that minimizes the path-lengths between sources and

destinations while simultaneously being overall throughput-

optimal. The proposed algorithm results in much smaller end-

to-end packet delay as compared to the traditional back-

pressure algorithm. The main contributions of this paper are

summarized in the following subsection.

A. Main Contributions

We define a flow using its source and destination. Let f

denote a flow in the network, and Af[t] denote the number of

packets generated by flow f at time t. We first consider

the case where each flow is associated with a hop constraint

Hf. The routing and scheduling algorithm needs to guarantee

that the packets from flow f are delivered in no more than Hf hops. Note that this hop constraint is closely related

to the end-to-end propagation delay. For this problem, we

propose a shortest-path-aided back-pressure algorithm which

exploits the shortest-path information to guarantee the hop

constraint and is throughput optimal, i.e., if there exists a

routing/scheduling algorithm that can support the traffic with

the given hop constraints, then the shortest-path-aided back-

pressure can support the traffic as well.

We then consider a case where no per-flow hop constraint is

imposed. The objective is to minimize the average number of

hops per packet delivery (or the average path-lengths between

sources and destinations). Mathematically, given a traffic load

{Af[t]}, the objective is

min Σ_{f∈F, N−1≥h>0} h A_{f,h},

where A_{f,h} is the fraction of flow f transmitted over paths with h hops, and Σ_h A_{f,h} = E[Af[t]]. This objective has two interpretations:

• First, Σ_{f,h} h A_{f,h} can be thought of as the number of transmissions needed to support traffic A[t] (transmitting a packet over an h-hop path requires h transmissions). Thus, minimizing Σ_{f,h} h A_{f,h} can be regarded as minimizing the network resource used to support the traffic demand;

• Second, note that the number of hops is closely related to the end-to-end delay, so Σ_h h A_{f,h} is related to the average end-to-end delay of flow f. Thus minimizing Σ_{f,h} h A_{f,h} can potentially be used as a surrogate for minimizing the average end-to-end delay over all flows in the network (the difference being that MAC delays are ignored in the hop-count metric).

To solve this problem, we propose a joint traffic-control and shortest-path-aided back-pressure algorithm that not only guarantees network stability (throughput optimality), but also adaptively selects the optimal routes according to the traffic demand. When the traffic is light, the algorithm only uses shortest paths; when the traffic increases, more paths are exploited to support the traffic. Our simulations show that the joint traffic-control and shortest-path-aided back-pressure algorithm leads to a much smaller end-to-end delay compared to the traditional back-pressure algorithm (3 vs. 1000 when the traffic load is light and 200 vs. 400 when the traffic load is high).
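As a toy illustration of the objective, with made-up numbers (the helper name is ours, not from the paper), each portion of traffic is simply weighted by its hop count:

```python
def hop_weighted_cost(splits):
    """splits: {flow: {h: A_{f,h}}}; returns sum over f,h of h * A_{f,h}."""
    return sum(h * a for per_flow in splits.values()
               for h, a in per_flow.items())

# One flow of mean rate 1.0: routing everything over a 3-hop shortest
# path costs 3 transmissions per packet on average; diverting half the
# traffic to a 7-hop detour raises the average to 5.
assert hop_weighted_cost({"f": {3: 1.0}}) == 3.0
assert hop_weighted_cost({"f": {3: 0.5, 7: 0.5}}) == 5.0
```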

B. Related Work

Throughput-optimal routing/scheduling was first proposed

in [1], and has since been studied for various networks, including cellular networks [17], cooperative relay networks

[12], [13], and multi-hop wireless networks [6], [4], [5].

Low-complexity implementations have been proposed in

[18], [19], [20], [21], [22], [14], [23], [24]. Joint schedul-

ing/routing/power control has been developed in [5], [9].

Throughput-optimal routing/scheduling for multicast flows has

been considered in [25]. The idea of using the shortest path

information to enhance the performance of back-pressure

algorithm has been studied in [26]. The difference is that

the proposed algorithm provably minimizes the average path-

lengths whereas the enhanced algorithm in [26] uses the

shortest path information in a heuristic manner. An alternate

algorithm that deals with minimizing the number of hops has

been recently independently obtained in [30]. The objective

function in [30] is the same as in this paper, however the

proposed algorithms are different.

II. AN ILLUSTRATIVE EXAMPLE

As we discussed in the introduction, the back-pressure algo-

rithm exploits all feasible paths, which is critical to maintain

stability when the network is heavily loaded. However, when

the traffic load is light, packets may be sent over unnecessarily

long paths and the algorithm could be very inefficient.

In this section, we use an example to demonstrate the

weakness of the back-pressure algorithm, and the significant

end-to-end delay reduction that results under the proposed

algorithm (the algorithm will be described in Section V).

Fig. 1. A grid network example (source S and destination D, connected by two example routes, Path 1 and Path 2)

Consider a 4×4 grid network as shown in Figure 1. Assume

that the channel capacity is one data unit per time slot for all

channels. When one link is on, no adjacent link can be on

simultaneously. We also impose a half-duplex constraint so that

a node cannot transmit and receive at the same time. At the

beginning of each time slot, each node generates a packet with

probability λ. The destination of this packet is randomly and

uniformly selected from all nodes in the network. (A detailed

description of our simulation settings will be presented in

Section VI).
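The setup above can be sketched as follows (a minimal sketch with our own helper names; the simulator described in Section VI may differ in details): a 4×4 grid of nodes, unit-capacity links between neighbors, Bernoulli(λ) arrivals with uniformly random destinations, and a node-exclusive conflict test that also enforces the half-duplex constraint.

```python
import random

def grid_nodes(k=4):
    # Nodes of a k x k grid.
    return [(i, j) for i in range(k) for j in range(k)]

def grid_links(k=4):
    # Directed links between horizontal/vertical neighbors.
    nodes = set(grid_nodes(k))
    return [((i, j), (i + di, j + dj))
            for (i, j) in nodes
            for (di, dj) in ((0, 1), (1, 0), (0, -1), (-1, 0))
            if (i + di, j + dj) in nodes]

def conflict(l1, l2):
    """Node-exclusive model: two links conflict if they share a node
    (this also forbids a node transmitting and receiving at once)."""
    return bool(set(l1) & set(l2))

def arrivals(lam, rng):
    """Each node generates a packet w.p. lam per slot; the destination
    is chosen uniformly among the other nodes."""
    nodes = grid_nodes()
    return [(n, rng.choice([d for d in nodes if d != n]))
            for n in nodes if rng.random() < lam]

assert len(grid_nodes()) == 16
assert len(grid_links()) == 48          # 24 neighbor pairs, both directions
assert conflict(((0, 0), (0, 1)), ((0, 1), (0, 2)))
assert not conflict(((0, 0), (0, 1)), ((2, 2), (2, 3)))
assert all(s != d for s, d in arrivals(1.0, random.Random(0)))
```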

Fig. 2. Back-pressure vs. our joint traffic-splitting and shortest-path-aided back-pressure: average end-to-end delay as a function of the arrival rate λ

The end-to-end delay of a packet is defined to be the time

interval from when the packet enters the source to when the

packet reaches the destination (this includes the MAC delay at

intermediate nodes). In Figure 2, we plot the average end-to-

end delay under the back-pressure algorithm and the proposed

algorithm with different values of λ. From Figure 2, we have

two observations:

(1) Under the back-pressure algorithm, surprisingly, the

delay first decreases and then increases with arrival rate

λ. The second part is easy to understand: the queues

build up when the traffic load increases, which increases

the queuing delays. The first part is because the back-


pressure algorithm uses all paths even when the traffic

load is light. For example, with a very small λ, using

path 1 only is sufficient to support the flow from node S

to node D. However, under the back-pressure algorithm,

long paths (e.g., path 2 that has 15 hops) and paths with

loops are also used. Furthermore, the lighter the traffic load, the more loops are involved in the route. Hence the end-to-end delay is large when λ is small.

(2) In the proposed algorithm, the set of routes used are

intelligently selected according to the traffic load so that

long paths are used only when necessary. We can see

that under the proposed algorithm, not only is the delay

significantly reduced (3 vs. 1000), but also the delay

monotonically increases with the traffic load.

We would like to emphasize that under the proposed al-

gorithm, the delay improvement is achieved without losing

the throughput-optimality. The proposed algorithm is still

throughput-optimal, but yields much smaller end-to-end delays

as compared to the traditional back-pressure algorithm.

III. BASIC MODEL

Network model: Consider a network represented by a graph

G = (N,L), where N is the set of nodes and L is the set of

directed links. We assume that |N| = N and |L| = L.

Denote by (m,n) the link from node m to node n. Let

µ = {µ(m,n)} denote a link-rate vector (over link (m,n), the

transmission rate is µ(m,n)). A link-rate vector µ is said to be

admissible if the link-rates specified by µ can be achieved

simultaneously. Define Γ to be the set of all admissible

link-rate vectors. It is easy to see that Γ depends on the

choice of interference model and might not be a convex set.

Furthermore, Γ is time-varying if channels are time-varying.

To simplify our notation, we assume time-invariant channels in this paper. However, our results can be extended to time-varying channels in a straightforward manner. Furthermore, we assume that there exists µmax such that µ(m,n) ≤ µmax for all (m,n) ∈ L and all admissible µ. Next, we define a link vector

µ to be obtainable if µ ∈ CH(Γ), where CH(Γ) denotes the

convex hull of Γ. Note that an admissible rate-vector is a set

of rates at which the links can transmit simultaneously; while

an obtainable rate-vector is a set of rates that can be achieved

including using time-sharing.

Traffic model: For network traffic, we let f denote a flow,

s(f) denote the source of the flow, and d(f) the destination of

the flow. We use F to denote the set of all flows in the network.

Assume that time is discretized, and let Af[t] (f ∈ F) denote

the number of packets injected by flow f at time t. We assume

{Af[t]} are bounded random variables, and i.i.d. across time-

slots and flows. We also define Af= E[Af[t]].

IV. THROUGHPUT-OPTIMAL ROUTING/SCHEDULING WITH

HOP CONSTRAINTS

In this section, we consider the case where each flow is

associated with a hop constraint Hf. Packets of flow f need

to be delivered within Hf hops. We propose a shortest-path-

aided back-pressure algorithm, which is throughput-optimal

under hop-constraints. The algorithm is also a building block

for the algorithm to be proposed in Section V, which smoothly

integrates the back-pressure and the shortest-path routing.

Next, we characterize the network throughput region under

hop-constraints.

A. Network Throughput Region under Hop-constraints

Given traffic A[t] = {Af[t]}_{f∈F} and hop-constraints H = {Hf}_{f∈F}, we say that (A[t],H) ∈ ΛG if there exist rates µ̂{m,d,h}_(m,n) ≥ 0 such that the following conditions hold:

(i) For any three-tuple (n,d,h) such that n ≠ d and N−1 ≥ h > 0, we have

Σ_{f∈F} Af 1{s(f)=n, d(f)=d, Hf=h} + Σ_{m:(m,n)∈L} µ̂{m,d,h+1}_(m,n) = Σ_{i:(n,i)∈L} µ̂{n,d,h}_(n,i).   (1)

(ii) If h − 1 < Hmin_{n→d}, then

µ̂{m,d,h}_(m,n) = 0,   (2)

where Hmin_{n→d} is the minimum number of hops required from node n to node d.

(iii)

{µ̂_(m,n)}_{(m,n)∈L} ∈ CH(Γ),   (3)

where

µ̂_(m,n) = Σ_{{(m,d,h): d∈D, N−1≥h>0}} µ̂{m,d,h}_(m,n),

and D is the set of all destinations.

We can regard µ̂{m,d,h}_(m,n) as the average transmission-rate over link (m,n) used to transmit those packets that are destined to node d and delivered with exactly h more hops (including the hop from m to n). Then, the conditions above can be explained as follows:

(a) Condition (i) is a flow conservation constraint, which

states that the number of incoming packets to node n

with hop-constraint h is equal to the number of outgoing

packets from node n with hop-constraint h−1. Note that

the hop-constraint is reduced by one after a packet is sent

out by node n because it takes one hop to transmit the

packet from node n to one of its neighbors. We only

consider hop-constraints up to N − 1 hops because the

longest loop-free route has no more than N − 1 hops,

and considering only loop-free routes does not change

the network throughput region.

(b) Condition (ii) states that a packet should not be trans-

mitted from node m to node n if node n cannot deliver

the packet within the required number of hops.

(c) Condition (iii) is the capacity constraint, which states

that the rate-vector ˆ µ should be obtainable.

We say traffic (A[t],H) can be stabilized if there exists

some routing/scheduling algorithm under which the mean of

the number of packets queued in the network is bounded.



From discussions (a)-(c), it is easy to see that if (A[t],H) can

be stabilized, then there must exist ˆ µ satisfying conditions

(i)-(iii). Thus, ΛG is called the throughput region of

network G.

Next, we introduce our queue management scheme.

B. Queue Management

Recall that Hmin_{m→d} is the minimum number of hops required from node m to node d (or the length of the shortest path from node m to node d). Note that Hmin_{m→d} can be computed in a distributed fashion using algorithms such as the Bellman-Ford algorithm. Thus, we assume that node m knows Hmin_{m→d} for all destinations d ∈ D, and Hmin_{n→d} for all n such that (m,n) ∈ L.

We assume node m maintains a separate queue, named queue {m,d,h}, for the packets required to be delivered to node d within h hops. For destination d, node m maintains queues for h = Hmin_{m→d}, ..., N−1, where N−1 is a universal upper bound on the number of hops along loop-free paths.

As an example, consider the directed network shown in Figure 3, and assume that D = {4} (i.e., there is only one destination). Each non-destination node maintains up to three queues (because for this topology, there are no loop-free paths longer than three hops). Node 1 has queues corresponding to h = 1, 2, 3, respectively. Node 2 does not have a direct path to node 4 (i.e., Hmin_{2→4} = 2); hence, it maintains only two queues, corresponding to h = 2, 3 (and implicitly, we set Q{2,4,1} = ∞ to ensure that no packets enter Q{2,4,1} from other nodes). Node 3 maintains three separate queues corresponding to h = 1, 2, 3. This is in spite of the observation that there is only one feasible route from node 3 to node 4. We maintain these additional queues because the global network topology is not known by individual nodes (in the algorithm, we will later see that the "extra" queues build up sufficient back-pressure so that the rate of packet arrivals into these queues goes to zero). Finally, all queues at the destination for packets meant for itself are set to zero (i.e., Q{4,4,h} = 0). In Figure 3, queues into which packets potentially arrive are marked in solid lines, and the "virtual" queues which are fixed at {0,∞} are in dotted lines.

Fig. 3. Illustration of queue-management and computation of back-pressure. Queue values: node 1 (Hmin_{1→4} = 1): Q{1,4,1} = 8, Q{1,4,2} = 6, Q{1,4,3} = 4; node 2 (Hmin_{2→4} = 2): Q{2,4,1} = ∞, Q{2,4,2} = 9, Q{2,4,3} = 5; node 3 (Hmin_{3→4} = 1): Q{3,4,1} = 4, Q{3,4,2} = 1, Q{3,4,3} = 0; node 4 (Hmin_{4→4} = 0): Q{4,4,h} = 0 for h = 0, 1, 2, 3.
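The per-hop queue structure of the Figure 3 example can be sketched as follows. The topology is inferred from the links the example exercises, and a centralized BFS stands in for the distributed Bellman-Ford computation mentioned above; helper names are ours.

```python
from collections import deque

# Directed links of the Figure 3 example (inferred); D = {4}, N = 4.
LINKS = [(1, 2), (1, 3), (1, 4), (2, 3), (3, 4)]
N = 4

def h_min(dest, links):
    """Minimum hop count from every node to dest (BFS on reversed links)."""
    rev = {}
    for m, n in links:
        rev.setdefault(n, []).append(m)
    dist = {dest: 0}
    q = deque([dest])
    while q:
        v = q.popleft()
        for u in rev.get(v, []):
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

HMIN = h_min(4, LINKS)
assert HMIN == {4: 0, 1: 1, 3: 1, 2: 2}

def queues_at(m, d=4):
    """Node m keeps queue {m,d,h} for h = Hmin_{m->d}, ..., N-1."""
    return [] if m == d else [(m, d, h) for h in range(HMIN[m], N)]

assert queues_at(1) == [(1, 4, 1), (1, 4, 2), (1, 4, 3)]
assert queues_at(2) == [(2, 4, 2), (2, 4, 3)]   # no h = 1 queue at node 2
```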

C. Queue Dynamics

Let Q{m,d,h}[t] denote the queue length at time slot t, and µ{m,d,h}_(m,n)[t] denote the service rate for queue {m,d,h} over link (m,n) at time t. For packets transmitted over link (m,n), we require that the packets from queue {m,d,h} are transferred to queue {n,d,h−1}. For example, packets from queue {2,4,3} can be transferred to queue {3,4,2}, but not to queue {3,4,1}.

The dynamics of queue {n,d,h} (n ≠ d) are as follows:

Q{n,d,h}[t+1] = Q{n,d,h}[t] + Σ_f Af[t] 1{s(f)=n, d(f)=d, Hf=h} + Σ_{m:(m,n)∈L} ν{m,d,h+1}_(m,n)[t] − Σ_{i:(n,i)∈L} ν{n,d,h}_(n,i)[t],

where ν{n,d,h}_(n,i)[t] is the actual number of packets transferred from queue {n,d,h} to queue {i,d,h−1}, and is smaller than µ{n,d,h}_(n,i)[t] when there are not enough packets in queue {n,d,h}. Defining u{m,d,h}_(m,n)[t] to be the unused service, we have

ν{m,d,h}_(m,n)[t] = µ{m,d,h}_(m,n)[t] − u{m,d,h}_(m,n)[t].

We also define Q{n,n,h} = 0 for all h, i.e., delivered packets are removed from the network.
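A minimal one-slot sketch of these dynamics (our own helper, assuming integer packet counts): packets leaving queue {m,d,h} over link (m,n) join queue {n,d,h−1}, and the actual transfer ν is capped by the queue content (the shortfall µ − ν is the unused service u).

```python
def transfer(Q, m, n, d, h, mu):
    """Move up to mu packets from queue (m,d,h) to queue (n,d,h-1)."""
    nu = min(mu, Q.get((m, d, h), 0))   # actual transfer; mu - nu is unused
    Q[(m, d, h)] = Q.get((m, d, h), 0) - nu
    if n != d:                          # delivered packets leave the network
        Q[(n, d, h - 1)] = Q.get((n, d, h - 1), 0) + nu
    return nu

Q = {(2, 4, 3): 2}
assert transfer(Q, 2, 3, 4, 3, mu=5) == 2      # only 2 packets available
assert Q[(2, 4, 3)] == 0 and Q[(3, 4, 2)] == 2
assert transfer(Q, 3, 4, 4, 2, mu=1) == 1      # delivery: packet removed
assert (4, 4, 1) not in Q
```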

In the next subsection, we propose a shortest-path-aided

back-pressure algorithm that stabilizes the network given any

(A[t],H) ∈ ΛG.

D. Shortest-path-aided Back-pressure Algorithm

Recall that we have per-hop queues for each destination, which is different from the back-pressure algorithm in [1]. Thus we first define the back-pressure of link (m,n) under our queue management scheme. We define P{m,d,h}_(m,n)[t] (the back-pressure of queue {m,d,h} over link (m,n)) as follows:

• P{m,d,h}_(m,n)[t] = Q{m,d,h}[t] − Q{n,d,h−1}[t] if Hmin_{n→d} ≤ h − 1;

• P{m,d,h}_(m,n)[t] = −∞ if Hmin_{n→d} > h − 1 (note that queue {n,d,h−1} does not exist if Hmin_{n→d} > h − 1).

The back-pressure of link (m,n) is defined to be

P_(m,n)[t] = max{ max_{d∈D, N−1≥h≥Hmin_{m→d}} P{m,d,h}_(m,n)[t], 0 }.

Considering the example shown in Figure 3, it can be verified that P_(1,2) = 0, P_(1,3) = Q{1,4,3} − Q{3,4,2} = 3, P_(1,4) = Q{1,4,1} − Q{4,4,0} = 8, P_(2,3) = Q{2,4,2} − Q{3,4,1} = 5, and P_(3,4) = Q{3,4,1} − Q{4,4,0} = 4.
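As a sanity check, these back-pressure values can be reproduced from the Figure 3 queue lengths. The sketch below (helper names are ours) applies the two-case definition directly; the P = −∞ case (h − 1 < Hmin_{n→d}) is handled by skipping those terms.

```python
HMIN = {1: 1, 2: 2, 3: 1, 4: 0}         # Hmin_{n->4} from Figure 3
Q = {(1, 1): 8, (1, 2): 6, (1, 3): 4,   # Q{m,4,h}, keyed by (m, h)
     (2, 2): 9, (2, 3): 5,
     (3, 1): 4, (3, 2): 1, (3, 3): 0}
N = 4

def queue(m, h):
    return 0 if m == 4 else Q[(m, h)]   # destination queues are zero

def pressure(m, n):
    """P_(m,n) = max(0, max over valid h of Q{m,4,h} - Q{n,4,h-1})."""
    diffs = [queue(m, h) - queue(n, h - 1)
             for h in range(HMIN[m], N) if HMIN[n] <= h - 1]
    return max(diffs + [0])

assert pressure(1, 2) == 0   # the only candidate, 4 - 9, is negative
assert pressure(1, 3) == 3
assert pressure(1, 4) == 8
assert pressure(2, 3) == 5
assert pressure(3, 4) == 4
```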

Shortest-path-aided Back-Pressure Algorithm: Consider

time slot t.

Step 0: The packets injected by flow f are deposited into

queue {s(f),d(f),Hf} maintained at node s(f).

Step 1: The network first computes µ*[t] that solves the following optimization problem:

µ*[t] = argmax_{µ∈Γ} Σ_{(m,n)∈L} µ_(m,n) P_(m,n)[t].   (4)

Step 2: Consider link (m,n). If µ*_(m,n)[t] > 0 and P_(m,n)[t] > 0, node m selects a queue {m,d,h} such that

Q{m,d,h}[t] − Q{n,d,h−1}[t] = P_(m,n)[t],

and transfers packets from queue {m,d,h} to queue {n,d,h−1} at rate µ*_(m,n)[t].

We again consider the example in Figure 3. Assume the node-exclusive interference model, where adjacent links cannot be active at the same time. Furthermore, assume that link capacity is equal to one for all links. Then, given the queue-states shown in the figure, we can easily verify that µ*_(1,4)[t] = µ*_(2,3)[t] = 1 and µ*_(1,2)[t] = µ*_(1,3)[t] = µ*_(3,4)[t] = 0. Node 1 transmits one packet from queue {1,4,1} to its destination (node 4), and node 2 transmits one packet from queue {2,4,2} to queue {3,4,1} at node 3.
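This max-weight schedule can be checked by brute force: under the node-exclusive model a feasible activation set is simply a set of links sharing no node (a matching), and with unit capacities the weight of a set is the sum of its link back-pressures. A sketch with our own helper names:

```python
from itertools import combinations

# Link back-pressures for the Figure 3 example, computed earlier.
P = {(1, 2): 0, (1, 3): 3, (1, 4): 8, (2, 3): 5, (3, 4): 4}

def feasible(links):
    # Node-exclusive model: no two active links may share an endpoint.
    nodes = [v for l in links for v in l]
    return len(nodes) == len(set(nodes))

best_set, best_w = (), 0
for r in range(1, len(P) + 1):
    for subset in combinations(P, r):
        if feasible(subset):
            w = sum(P[l] for l in subset)
            if w > best_w:
                best_set, best_w = subset, w

# Activating (1,4) and (2,3) is optimal, matching the schedule above.
assert set(best_set) == {(1, 4), (2, 3)} and best_w == 13
```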

Remark 1: Note that the optimization problem defined by

equation (4) is a centralized problem. There has been a lot of

recent work on distributed solutions, e.g., [18], [19], [20], [21],

[22], which compute near optimal solutions with polynomial

or even constant complexity. These distributed algorithms can

be used in step 2 of the proposed algorithm in this paper.

Distributed implementation, however, is not the focus of this

paper.

Remark 2: From the definition of the back-pressure and the optimization (4), we can see that the packets in queue {m,d,h} can be transmitted to a neighbor n only if Hmin_{n→d} ≤ h − 1. Also, packets of flow f are first queued at queue {s(f),d(f),Hf}. Based on the facts above, it can be easily verified that if a packet is received by its destination d(f), then 0 = Hmin_{d(f)→d(f)} ≤ Hf − g, where g is the number of hops the packet has been transmitted over. Thus, we can conclude that every delivered packet is delivered within the required number of hops under the shortest-path-aided back-pressure algorithm.

Theorem 1: Given traffic A[t] and hop constraint H such that ((1 + ε)A[t],H) ∈ ΛG for some ε > 0, the network is stochastically stable under the shortest-path-aided back-pressure algorithm, and delivered packets are routed over paths that satisfy the corresponding hop constraints.

Proof: The second part of the theorem has been explained in Remark 2. To prove the first part, we define a Lyapunov function

V[t] = Σ_{{n,d,h}} (Q{n,d,h}[t])².

It can be shown that there exists Qmax > 0 such that if Q{n,d,h}[t] > Qmax for some queue {n,d,h}, then

E[V[t+1] − V[t] | Q[t]] < −δ + Σ_{(m,n)∈L} Σ_{d,h} µ̂{m,d,h}_(m,n) (Q{m,d,h+1}[t] − Q{n,d,h}[t]) − Σ_{(m,n)∈L} µ*_(m,n)[t] P_(m,n)[t],

where µ̂ is the rate-vector satisfying conditions (i)-(iii) for the given traffic ((1 + ε)A[t],H) (µ̂ exists because ((1 + ε)A[t],H) ∈ ΛG), and {µ*_(m,n)[t]} = µ*[t] is the optimal solution of (4) given Q[t].

We also can prove that

Σ_{(m,n)∈L} Σ_{d,h} µ̂{m,d,h}_(m,n) (Q{m,d,h+1}[t] − Q{n,d,h}[t]) ≤ Σ_{(m,n)∈L} µ*_(m,n)[t] P_(m,n)[t],

which implies that E[V[t+1] − V[t] | Q[t]] < −δ if Q{n,d,h}[t] > Qmax for some {n,d,h}. This part of the theorem follows from Foster's Criterion [28]. We skip the proof details due to space constraints. Interested readers can find the details in [29].

V. THROUGHPUT-OPTIMAL AND HOP-OPTIMAL

ROUTING/SCHEDULING

In the previous section, we proposed the shortest-path-

aided back-pressure algorithm that is throughput-optimal and

supports per-flow hop-constraints.

In this section, we consider the scenario where no hop

constraint is imposed. Recall that N − 1 is an upper bound

on the number of hops of loop-free paths. Define H̄ such that H̄[f] = N − 1 for all f ∈ F. Then, we can assume that a flow is always associated with the hop-constraint H̄, i.e., all loop-free paths are allowed. Note that considering only loop-

free paths does not change the network throughput region.

Thus we say A[t] is within the network throughput region if

(A[t], H̄) ∈ ΛG, which is also written as A[t] ∈ ΛG.

It is well-known that the back-pressure algorithm can sta-

bilize any A[t] that is in the network throughput region.

However, the back-pressure algorithm exploits all feasible

paths, which leads to undesirable delay performance as shown

in Section II. Intuitively, we should use only short paths when the traffic load is low, and start to exploit longer paths as the

traffic load increases. We note that the number of hops used

to deliver a packet is an important parameter in two senses:

(i) the number of hops is related to the wireless resource used

to deliver the packet; (ii) the number of hops is also related

to the end-to-end delay. Motivated by these observations, we

will design an algorithm that is not only throughput-optimal,

but also minimizes the average number of hops used to deliver

a packet. The motivation is the hope that such an algorithm

will not only minimize the number of transmissions required

to support the traffic, but also reduce the average end-to-end

transmission delay. (As we will later see from simulations,

minimizing hop-count does seem to result in smaller end-to-

end delays).

A. Hop Minimization

Given traffic A[t] ∈ ΛG, we let SA[t] denote the set

of routing/scheduling policies that stabilize the network. We

further define Af,h,P[∞] to be the fraction of flow f that

is delivered with exactly h hops under policy P, which is

well defined when the network is stochastically stable. Our