High Throughput Cryptocurrency Routing in
Payment Channel Networks
Vibhaalakshmi Sivaraman1, Shaileshh Bojja Venkatakrishnan2, Kathleen Ruan3,
Parimarjan Negi1, Lei Yang1, Radhika Mittal4,
Mohammad Alizadeh1, and Giulia Fanti3
1Massachusetts Institute of Technology
2Ohio State University
3Carnegie Mellon University
4University of Illinois at Urbana-Champaign
The lead author can be contacted at vibhaa@mit.edu.
Abstract
Despite growing adoption of cryptocurrencies, making fast payments at scale remains a
challenge. Payment channel networks (PCNs) such as the Lightning Network have emerged as
a viable scaling solution. However, completing payments on PCNs is challenging: payments
must be routed on paths with sufficient funds. As payments flow over a single channel (link)
in the same direction, the channel eventually becomes depleted and cannot support further
payments in that direction; hence, naive routing schemes like shortest-path routing can deplete
key payment channels and paralyze the system. Today’s PCNs also route payments atomically,
worsening the problem. In this paper, we present Spider, a routing solution that “packetizes”
transactions and uses a multi-path transport protocol to achieve high-throughput routing in
PCNs. Packetization allows Spider to complete even large transactions on low-capacity pay-
ment channels over time, while the multi-path congestion control protocol ensures balanced
utilization of channels and fairness across flows. Extensive simulations comparing Spider
with state-of-the-art approaches show that Spider requires less than 25% of the funds to suc-
cessfully route over 95% of transactions on balanced traffic demands, and offloads 4x more
transactions onto the PCN on imbalanced demands.
1 Introduction
Despite their growing adoption, cryptocurrencies suffer from poor scalability. For example, the
Bitcoin [5] network processes 7 transactions per second, and Ethereum [14] 15 transactions/sec-
ond, which pales in comparison to the 1,700 transactions per second achieved by the VISA network
[57]. Scalability thus remains a major hurdle to the adoption of cryptocurrencies for retail and other
large-scale applications. The root of the scalability challenge is the inefficiency of the underlying
consensus protocol: every transaction must go through full consensus to be confirmed, which can
take anywhere from several minutes to hours [44].
A leading proposal among many solutions to improve cryptocurrency scalability [23, 32, 41]
relies on so-called payment channels. A payment channel is a cryptocurrency transaction that
escrows or dedicates money on the blockchain for exchange with a prespecified user for a pre-
determined duration. For example, Alice can set up a payment channel with Bob in which she
escrows 10 tokens for a month. Now Alice can send Bob (and only Bob) signed transactions from
the escrow account, and Bob can validate them privately in a secure manner without mediation
on the blockchain (§2). If Bob or Alice want to close the payment channel at any point, they can
broadcast the most recent signed transaction message to the blockchain to finalize the transfer of
funds.
The versatility of payment channels stems from payment channel networks (PCNs), in which
users who do not share direct payment channels can route transactions through intermediaries for a
nominal fee. PCNs enable fast, secure transactions without requiring consensus on the blockchain
for every transaction. PCNs have received a great deal of attention in recent years, and many
blockchains are looking to PCNs to scale throughput without overhauling the underlying consensus
protocol. For example, Bitcoin has deployed the Lightning network [15, 10], and Ethereum uses
Raiden [18].
For PCNs to be economically viable, the network must be able to support high transaction
throughput. This is necessary for intermediary nodes (routers) to profitably offset the opportunity
cost of escrowing funds in payment channels, and for encouraging end-user adoption by providing
an appealing quality of payment service. But, a transaction is successful only if all channels along
its route have sufficient funds. This makes payment channel routing, the protocol by which a path
is chosen for a transaction, of paramount importance.
Existing payment channel routing protocols achieve poor throughput, for two main reasons.
First, they attempt to route each incoming transaction atomically and instantaneously, in full. This
approach is harmful, particularly for larger transactions, because a transaction fails completely if
there is no path to the destination with enough funds. Second, existing routing protocols fail to keep
payment channels balanced. A payment channel becomes imbalanced when the transaction rate
across it is higher in one direction than the other; the party making more transactions eventually
runs out of funds and cannot send further payments without “refilling” the channel via either an
on-chain transaction (i.e., committing a new transaction to the blockchain) or coordinated cyclic
payments between a series of PCN nodes [40]. Most PCNs today route transactions naively on
shortest paths with no consideration for channel balance; this can leave many channels depleted,
reducing throughput for everyone in the network. We describe a third problem, the creation of
deadlocks in certain scenarios, in §3.
In this paper we present Spider, a multi-path transport protocol that achieves balanced, high-
throughput routing in PCNs, building on concepts in an earlier position paper [52]. Spider’s design
centers on two ideas that distinguish it from existing approaches. First, Spider senders “packetize”
transactions, splitting them into transaction-units that can be sent across different paths at different
rates. By enabling congestion-control-like mechanisms for PCNs, this packet-switched approach
makes it possible to send large payments on low-capacity payment channels over a period of time.
Second, Spider develops a simple multi-path congestion control algorithm that promotes balanced
channels while maximizing throughput. Spider’s senders use a simple one-bit congestion signal
from the routers to adjust window sizes, or the number of outstanding transaction-units, on each of
their paths.
Spider’s congestion control algorithm is similar to multi-path congestion control protocols like
MPTCP [60] developed for Internet congestion control. But the routing problem it solves in PCNs
differs from standard networks in crucial ways. Payment channels can only route transactions
by moving a finite amount of funds from one end of the channel to the other. Because of this,
the capacity of a payment channel — the transaction rate that it can support — varies depending
on how it is used; a channel with balanced demand for routing transactions in both directions
can support a higher rate than an imbalanced one. Surprisingly, we find that a simple congestion
control protocol can achieve such balanced routing, despite not being designed for that purpose
explicitly.
We make the following contributions:
1. We articulate challenges for high-throughput routing in payment channel networks (§3), and
we formalize the balanced routing problem (§5). We show that the maximum throughput
achievable in a PCN depends on the nature of the transaction pattern: circulation demands
(participants send on average as much as they receive) can be routed entirely with sufficient
network capacity, while demands that form Directed Acyclic Graphs (DAGs) where some
participants send more than they receive cannot be routed entirely in a balanced manner. We
also show that introducing DAG demands can create deadlocks that stall all payments.
2. We propose a packet-switched architecture for PCNs (§4) that splits transactions into transaction-
units and multiplexes them across paths and time.
3. We design Spider (§6), a multi-path transport protocol that (i) maintains balanced channels
in the PCN, (ii) uses the funds escrowed in a PCN efficiently to achieve high throughput, and
(iii) is fair to different payments.
4. We build a packet-level simulator for PCNs and validate it with a small-scale implementation
of Spider on the LND Lightning Network codebase [15]. Our evaluations (§7) show that (i)
on circulation demands where 100% throughput is achievable, compared to the state-of-the-
art, Spider requires 25% of the funds to route over 95% of the transactions and completes 1.3-
1.8x more of the largest 25% of transactions based on a credit card transactions dataset [34];
(ii) on DAG demands where 100% throughput is not achievable, Spider offloads 7-8x as
many transactions onto the PCN for every transaction on the blockchain, a 4x improvement
over current approaches.
2 Background
Bidirectional payment channels are the building blocks of a payment channel network. A bidirec-
tional payment channel allows a sender (Alice) to send funds to a receiver (Bob) and vice versa. To
open a payment channel, Alice and Bob jointly create a transaction that escrows money for a fixed
amount of time [47]. Suppose Alice puts 3 units in the channel, and Bob puts 4 (Fig. 1). Now,
if Bob wants to transfer one token to Alice, he sends her a cryptographically-signed message as-
serting that he approves the new balance. This message is not committed to the blockchain; Alice
simply holds on to it. Later, if Alice wants to send two tokens to Bob, she sends a signed message
to Bob approving the new balance (bottom left, Fig. 1). This continues until one party decides to
close the channel, at which point they publish the latest message to the blockchain asserting the
channel balance. If one party tries to cheat by publishing an earlier balance, the cheating party
loses all the money they escrowed to the other party [47].
Figure 1: Bidirectional payment channel between Alice and Bob, shown in four steps: (1) open channel; (2) Txn 1, signed by Bob; (3) Txn 2, signed by Alice; (4) close channel. A blue shaded block indicates a transaction that is committed to the blockchain.
Figure 2: In a payment channel network, Alice can transfer money to Bob by using intermediate nodes’
channels as relays. There are two paths from Alice to Bob, but only the path (Alice, Charlie, Bob) can
support 3 tokens.
A payment channel network is a collection of bidirectional payment channels (Fig. 2). If Alice
wants to send three tokens to Bob, she first finds a path to Bob that can support three tokens of
payment. Intermediate nodes on the path (Charlie) will relay payments to their destination. Hence
in Fig. 2, two transactions occur: Alice to Charlie, and Charlie to Bob. To incentivize Charlie to
participate, he receives a routing fee. To prevent him from stealing funds, a cryptographic hash
lock ensures that all intermediate transactions are only valid after a transaction recipient knows a
private key generated by Alice [18].¹ Once Alice is ready to pay, she gives that key to Bob out-
of-band; he can either broadcast it (if he decides to close the channel) or pass it to Charlie. Charlie
is incentivized to relay the key upstream to Alice so that he can also get paid. Note that Charlie’s
payment channels with Alice and Bob are independent: Charlie cannot move funds between them
without going through the blockchain.
¹The protocol, called Hashed Timelock Contracts (HTLCs), can be implemented in two ways: the sender generates the key, as in Raiden [18], or the receiver generates the key, as in Lightning [47]. Spider assumes that the sender generates the key.
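To make the hash-lock concrete, here is a minimal sketch of the preimage condition at the heart of HTLC-style contracts: a relayed payment becomes claimable only once the recipient learns the secret generated by the sender. The function names and the use of SHA-256 are our own simplification for illustration, not Spider's or Lightning's exact contract logic.

import hashlib

def make_lock(preimage: bytes) -> bytes:
    # The sender derives a lock H(preimage); intermediaries lock funds against it.
    return hashlib.sha256(preimage).digest()

def can_claim(lock: bytes, revealed: bytes) -> bool:
    # A relayed payment is claimable only if the revealed secret matches the lock.
    return hashlib.sha256(revealed).digest() == lock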
Figure 3: Example illustrating the problems with state-of-the-art PCN routing schemes: (a) underutilized channels; (b) imbalanced channels; (c) deadlock.
3 Challenges in Payment Channel Networks
A major cost of running PCNs is the collateral needed to set up payment channels. As long as a
channel is open, that collateral is locked up, incurring an opportunity cost for the owner. For PCNs
to be financially viable, this opportunity cost should be offset by routing fees, which are charged on
each transaction that passes through a router. To collect more routing fees, routers try to process
as many transactions as possible for a given amount of collateral. A key performance metric is
therefore the transaction throughput per unit collateral where throughput itself is measured either
in number of transactions per second or transaction value per second.
Current PCN designs exhibit poor throughput due to naive design choices in three main areas:
(1) how to route transactions, (2) when to send them, and (3) how to avoid deadlocks.
Challenge #1: How to route transactions? A central question in PCNs is what route(s) to use for
sending a transaction from sender to destination. PCNs like the Lightning and Raiden networks are
source-routed.² Most clients by default pick the shortest path from the source to the destination.
However, shortest-path routing degrades throughput in two key ways. The first is to cause
underutilization of the network. To see this, consider the PCN shown in Fig. 3a. Suppose we have
two clusters of nodes that seek to transact with each other at roughly the same rate on average, and
the clusters are connected by two paths, one consisting of channels a and b, and the other of the
single channel c. If the nodes in cluster A try to reach cluster B via the shortest path, they would
all take channel c, as would the traffic in the opposite direction. This leads to congestion on
channel c, while channels a and b are under-utilized.
A second problem is more unique to PCNs. Consider a similar topology in Figure 3b, and
suppose we fully utilize the network by sending all traffic from cluster A→B on edge a and all
traffic from cluster B→A on edge b. While the rate on both edges is the same, as funds flow in
one direction over a channel, the channel becomes imbalanced: all of the funds end up on one
side of the channel. Cluster A can no longer send payments until it receives funds from cluster B
on edge a, or until it deposits new funds into channel a via an on-chain transaction. The same
applies to cluster B on edge b. Since on-chain transactions are expensive and slow, it is desirable
to avoid them. Routing schemes like shortest-path routing do not account for this problem, thereby
leading to reduced throughput (§7). In contrast, it is important to choose routes that actively prevent
channel imbalance. For example, in Figure 3b, we could send half of the A→B traffic on edge a,
²This was done in part for privacy reasons: transactions in the Lightning network use onion-routing, which is easy to implement with source routing [33].
and half on edge b, and the same for the B→A traffic. The challenge is making these decisions in
a fully decentralized way.
Challenge #2: When to send transactions? Another problem is when to send transactions.
Most existing PCNs are circuit-switched: transactions are processed instantaneously and atomi-
cally upon arrival [47, 18]. This causes a number of problems. If a transaction’s value exceeds
the available balance on each path from the source to the destination, the transaction fails. Since
transaction values in the wild tend to be heavy-tailed [34, 29], either a substantial fraction of real
transactions will fail as PCN usage grows, or payment channel operators will need to provision
higher collateral to satisfy demand.
Even when transactions do not fail outright, sending transactions instantaneously and atomi-
cally exacerbates the imbalance problem by transferring the full transaction value to one side of
the channel. A natural idea to alleviate these problems is to “packetize” transactions: transactions
can be split into smaller transaction-units that can be multiplexed over space (by traversing differ-
ent paths) and in time (by being sent at different rates). Versions of this idea have been proposed
before; atomic multi-path payments (AMP) enable transactions to traverse different paths in the
Lightning network [3], and the Interledger protocol uses a similar packetization to conduct cross-
ledger payments [55]. However, a key observation is that it is not enough to subdivide transactions
into smaller units: to achieve good throughput, it is also important to multiplex in time as well, by
performing congestion control. If there is a large transaction in one direction on a channel, simply
sending it out in smaller units that must all complete together doesn’t improve the likelihood of
success. Instead, in our design, we allow each transaction-unit to complete independently, and
a congestion control algorithm at the sender throttles the rate of these units to match the rate of
units in the opposite direction at the bottlenecked payment channel. This effectively allows the to-
kens at that bottleneck to be replenished and reused multiple times as part of the same transaction,
achieving a multiplicative increase in throughput for the same collateral.
Challenge #3: Deadlocks. The third challenge in PCNs is the idea that the introduction of certain
flows can actively harm the throughput achieved by other flows in the network. To see this, consider
the topology and demand rates in Figure 3c. Suppose nodes 1 and 2 want to transmit 1-unit
transactions to node 3 at rates of 1 and 2 units/second, respectively, and node 3 wants to transact
2 units/sec with node 1.3Notice that the specified transaction rates are imbalanced: there is a net
flow of funds out of node 2 and into nodes 1 and 3. Suppose the payment channels are initially
balanced, with 10 units on each side and we only start out with flows between nodes 1 and 3. For
this demand and topology, the system can sustain 2 units/sec by only having nodes 1 and 3 to send
to each other at a rate of 1 unit/second.
However, once transactions from node 2 are introduced, this example achieves zero throughput
at steady-state. The reason is that node 2 sends transactions to node 3 faster than its funds are being
replenished, which reduces its funds to 0. Slowing down 2’s transactions would only delay this
outcome. Since node 2 needs a positive balance to route transactions between nodes 1 and 3, the
transactions between 1 and 3 cannot be processed, despite the endpoints having sufficient balance.
The network finds itself in a deadlock that can only be resolved by node 2 replenishing its balance
with an on-chain transaction.
Why these problems are difficult to solve. The above problems are challenging because their
³For simplicity, we show three nodes, but a node in this example could represent a cluster of many users who wish to transact at the rates shown in aggregate.
effects are closely intertwined. For example, because poor routing and rate-control algorithms can
cause channel imbalance, which in turn degrades throughput, it is difficult to isolate the effects of
each. Similarly, simply replacing circuit switching with packet-switching gives limited benefits
without a corresponding rate control and routing mechanism.
From a networking standpoint, PCNs are very different from traditional communication net-
works: payment channels do not behave like a standard communication link with a certain capacity,
say in transactions per second. Instead, the capacity of a channel in a certain direction depends
on two factors normally not seen in communication networks: (a) the rate that transactions are re-
ceived in the reverse direction on that channel, because tokens cannot be sent faster on average in
one direction than they arrive in the other, (b) the delay it takes for the destination of a transaction
to receive it and send back the secret key unlocking the funds at routers (§2). Tokens that are “in
flight”, i.e. for which a router is waiting for the key, cannot be used to service new transactions.
Therefore the network’s capacity depends on its delay, and queued up transactions at a depleted
link can hold up funds from channels in other parts of the network. This leads to cascading effects
that make congestion control particularly critical.
4 Packet-Switched PCN
Spider uses a packet-switched architecture that splits transactions into a series of independently
routed transaction-units. Each transaction-unit transfers a small amount of money bounded by a
maximum-transaction-unit (MTU) value. Packetizing transactions is inspired by packet switching
for the Internet, which is more effective than circuit switching [42]. Note that splitting transac-
tions does not compromise the security of payments; each transaction-unit can be created with
an independent secret key. As receivers receive and acknowledge transaction-units, senders can
selectively reveal secret keys only for acknowledged transaction-units (§2). Senders can also use
proposals like Atomic Multi-Path Payments (AMP) [3] if they desire atomicity of transactions.
In Spider, payments transmitted by source end-hosts are forwarded to their destination end-
hosts by routers within the PCN. Spider routers queue up transaction-units at a payment channel
whenever the channel lacks the funds to forward them immediately. As a router receives funds
from the other side of its payment channel, it uses these funds to forward transaction-units waiting
in its queue. Current PCN implementations [15] do not queue transactions at routers—a transaction
fails immediately if it encounters a channel with insufficient balance on its route. Thus, currently,
even a temporary lack of channel balance can cause many transactions to fail, which Spider avoids.
5 Modeling Routing
A good routing protocol must satisfy the following objectives:
1. Efficiency. For a PCN with a fixed amount of escrowed capital, the aggregate transaction
throughput achieved must be as high as possible.
2. Fairness. The throughput allocations to different users must be fair. Specifically, the system
should not starve transactions of some users if there is capacity.
Low latency, a common goal in communication networks, is desirable but not a first order
concern, as long as transaction latency on the PCN is significantly less than an on-chain transaction
(which can take minutes to hours today). However, as mentioned previously (§3), very high latency
could hurt the throughput of a PCN, and must therefore be avoided. We assume that the underlying
communication network is not a bottleneck, and that PCN users can easily communicate payment
attempts, successes, and failures with one another, since these messages do not require much bandwidth.
To formalize the routing problem, we consider a fluid model of the system in which payments
are modeled as continuous “fluid flows” between users. This allows us to cast routing as an opti-
mization problem and derive decentralized algorithms from it, analogous to the classical Network
Utility Maximization (NUM) framework for data networks [46]. More specifically, for the fluid
model, we consider a PCN modeled as a graph G(V, E), in which V denotes the set of nodes (i.e.,
end-hosts or routers) and E denotes the set of payment channels between them. For a path p, let
x_p denote the (fluid) rate at which payments are sent along p from a source to a destination. The
fluid rate captures the long-term average rate at which payments are made on the path.
For maximizing throughput efficiency, routing has to be done such that the total payment flow
through each channel is as high as possible. However, routers have limited capital on their payment
channels, which restricts the maximum rate at which funds can be routed (Fig. 3a). In particular,
when transaction units are sent at a rate x_{u,v} across a payment channel between u and v with c_{u,v}
funds in total, and it takes Δ time units on average to receive the secret key from a destination once
a payment is forwarded, then x_{u,v}Δ credits are locked (i.e., unavailable for use) in the channel
at any point in time. This implies that the average rate of transactions (across both directions) on a
payment channel cannot exceed c_{u,v}/Δ. This leads to capacity constraints on channels.
Sustaining a flow in one direction through a payment channel requires funds to be regularly
replenished from the other direction. This requirement is a key difference between PCNs and
traditional data networks. In PCNs, if the long-term rates x_{u,v} and x_{v,u} are mismatched on a channel
(u, v), say x_{u,v} > x_{v,u}, then over time all the funds c_{u,v} will accumulate at v, rendering the channel
unusable in the direction u to v (Fig. 3b). This leads to balance constraints, which stipulate that the
total rate at which transaction units are sent in one direction along a payment channel matches the
total rate in the reverse direction.
Lastly, for enforcing fairness across flows we assume sources have an intrinsic utility for mak-
ing payments, which they seek to maximize. A common model for utility at a source is the loga-
rithm of the total rate at which payments are sent from the source [39, 38, 31]. A logarithmic utility
ensures that the rate allocations are proportionally fair [39]—no individual sender’s payments can
be completely throttled. Maximizing the overall utility across all source-destination pairs subject
to the capacity and balance constraints discussed above can then be written as
\begin{align}
\text{maximize} \quad & \sum_{i,j \in V} \log \sum_{p \in \mathcal{P}_{i,j}} x_p && (1) \\
\text{s.t.} \quad & \sum_{p \in \mathcal{P}_{i,j}} x_p \le d_{i,j} \quad \forall \, i, j \in V && (2) \\
& x_{u,v} + x_{v,u} \le \frac{c_{u,v}}{\Delta} \quad \forall \, (u, v) \in E && (3) \\
& x_{u,v} = x_{v,u} \quad \forall \, (u, v) \in E && (4) \\
& x_p \ge 0 \quad \forall \, p \in \mathcal{P}, && (5)
\end{align}
where, for a source i and destination j, P_{i,j} is the set of all paths from i to j, d_{i,j} is the demand
from i to j, x_{u,v} is the total flow going from u to v for a channel (u, v), c_{u,v} is the total amount of
funds escrowed into (u, v), Δ is the average round-trip time of the network for a payment
to be completed, and P is the set of all paths. Equation (2) specifies demand constraints, which
ensure that the total flow for each sender-receiver pair, across all of their paths, is no more than
their demand.
Figure 4: Payment graph (denoted by blue lines) for a 3-node network (left). It decomposes into maximum circulation and DAG components, as shown in (b) and (c).
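To make the formulation concrete, the sketch below solves Equations (1)-(5) on the toy two-channel topology of Fig. 4 using the cvxpy convex-optimization library. The topology, demands, and paths here are illustrative assumptions; Spider itself solves this problem in a decentralized manner (§5.2, §6), not with a centralized solver.

import cvxpy as cp

# Toy line topology 0 -- 1 -- 2 with two channels, as in Fig. 4.
channels = [(0, 1), (1, 2)]
c = {(0, 1): 10.0, (1, 2): 10.0}     # escrowed funds c_{u,v} per channel
delta = 1.0                          # average confirmation delay (seconds)

demands = {(0, 2): 1.0, (2, 0): 1.0}                    # d_{i,j}
paths = {(0, 2): [[0, 1, 2]], (2, 0): [[2, 1, 0]]}      # P_{i,j}

x = {sd: cp.Variable(len(ps), nonneg=True) for sd, ps in paths.items()}   # (5)

def flow(u, v):
    # Total rate sent in the u -> v direction over channel (u, v).
    total = 0
    for sd, ps in paths.items():
        for k, p in enumerate(ps):
            if (u, v) in zip(p[:-1], p[1:]):
                total = total + x[sd][k]
    return total

constraints = [cp.sum(x[sd]) <= d for sd, d in demands.items()]           # (2)
for (u, v) in channels:
    constraints.append(flow(u, v) + flow(v, u) <= c[(u, v)] / delta)      # (3)
    constraints.append(flow(u, v) == flow(v, u))                          # (4)

objective = cp.Maximize(cp.sum([cp.log(cp.sum(x[sd])) for sd in demands]))  # (1)
cp.Problem(objective, constraints).solve()
print({sd: x[sd].value for sd in demands})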
5.1 Implications for Throughput
A consequence of the balance constraints is that certain traffic demands are more efficient to route
than others. In particular, demands that have a circulation structure (total outgoing demand
matches total incoming demand at a router) can be routed efficiently. The cyclic structure of
such demands enables routing along paths such that the rates are naturally balanced in channels.
However, for demands without a circulation structure, i.e., if the demand graph is a directed acyclic
graph (DAG), balanced routing is impossible to achieve in the absence of periodic replenishment
of channel credits, regardless of how large the channel capacities are.
For instance, Fig. 4a shows the traffic demand graph for a PCN with nodes {1, 2, 3} and pay-
ment channels between nodes 1–2 and 2–3. The weight on each blue edge denotes the demand in
transaction-units per second between a pair of users. The underlying black lines denote the topol-
ogy and channel sizes. Fig. 4b shows the circulation component of the demand in Fig. 4a. The
entire demand contained in this circulation can be routed successfully as long as the network has
sufficient capacity. In this case, if the confirmation latency for transaction-units between 1 and 3 is
less than 10 s, then the circulation demand can be satisfied indefinitely. The remaining component
of the demand graph, which represents the DAG, is shown in Fig. 4c. This portion cannot be routed
indefinitely, since it shifts all tokens onto node 3, after which the 2–3 channel becomes unusable.
App. A formalizes the notion of circulation and shows that the maximum throughput achievable
by any balanced routing scheme is at most the total demand contained within the circulation.
5.2 A Primal-Dual Decomposition Based Approach
We now describe a decentralized algorithm based on standard primal-dual decomposition tech-
niques used in utility-maximization-based rate control and routing literature (e.g. [38]). §5.2.1 and
§5.2.2 discuss the protocol at router nodes and end-hosts in order to facilitate this approach. A de-
tailed derivation of the algorithm in the fluid model, which also considers the cost of on-chain
rebalancing is discussed in App. C. However, §5.2.3 outlines the difficulties with this approach that
lead us to the design of the practical protocol discussed in §6.
To arrive at this algorithm, we consider the optimization problem described in §5 for a generic
utility function U(Σ_{p∈P_{i,j}} x_p). The structure of the Lagrangian of the LP allows us to naturally
decompose the overall optimization problem into separate subproblems for each sender-receiver
pair.
A solution to this LP can be computed via a decentralized algorithm in which each sender
maintains rates at which payments are sent on each of its paths. Each payment channel has a price
in each direction. Routers locally update these prices depending on both congestion and imbalance
across the payment channels, while end-hosts adjust their rates by monitoring the total price on the
different paths. The primal variables of the LP represent the rate of payments on each path, and
the dual variables represent the channel prices. While this approach has been used before [38],
a key difference from prior work is the presence of price variables for link balance constraints
in addition to the price variables for capacity constraints. This ensures that the price of a channel
having a skewed balance is different in each direction, and steers the flow rates to counter the skew.
5.2.1 Router Design
Routers in each payment channel under this algorithm maintain price variables, which are updated
periodically based on the current arrival rate of transaction units in the channel, available channel
balance, and the number of transaction units queued up at the routers. The price variables at
the routers determine the path prices, which in turn affect the rates at which end-hosts transmit
transaction units (we discuss more in §5.2.2).
In a payment channel (u, v) ∈ E, routers u and v hold estimates for three types of price
variables: λ_{u,v}, µ_{u,v}, and µ_{v,u}. These are dual variables corresponding to the capacity and imbalance
constraints in Equations (3) and (4), respectively. The capacity price λ_{u,v} signals congestion in the
channel if the total rate at which transactions arrive exceeds its capacity; the imbalance prices µ_{u,v}
and µ_{v,u} signal imbalance in the channel in the two directions. These variables are
updated periodically to ensure that the capacity and imbalance conditions are not violated in the
channel. Prices are updated every τ seconds according to the rules described next.
Imbalance Price. For a channel (u, v), let n_u, n_v denote the total amount of transactions that have
arrived at u and v, respectively, in the τ seconds since the last price update. The price variable for
imbalance, µ_{u,v}, is updated as

\mu_{u,v}(t+1) = \left[ \mu_{u,v}(t) + \kappa \left( n_u(t) - n_v(t) \right) \right]^+ , \qquad (6)

where κ is a positive step-size parameter for controlling the rate at which the price varies.⁴
Intuitively, if more funds arrive in the u–v direction compared to the v–u direction (i.e., n_u(t) >
n_v(t)), the price µ_{u,v} increases while the price µ_{v,u} decreases. The higher price in the u–v direction
signals end-hosts that are routing along (u, v) to throttle their rates, and signals end-hosts routing
along (v, u) to increase their rates.
Capacity Price. The price variable for capacity, λ_{u,v}, is also updated every τ seconds, as follows:

\lambda_{u,v}(t+1) = \left[ \lambda_{u,v}(t) + \eta \left( m_u(t) + m_v(t) - c_{u,v} \right) \right]^+ . \qquad (7)

For the current rates of transaction arrival at u and v, m_u(t) and m_v(t) are estimates of the amount
of funds required to sustain those rates at u and v, respectively. Since c_{u,v} is the total amount of
funds available in the channel, any excess in the required funds compared to c_{u,v} causes
λ_{u,v} to rise, and vice versa. An increase in λ_{u,v} signals end-hosts routing via (u, v), in either
direction, to reduce their rates. We estimate the demands m_u(t) and m_v(t) for tokens by measuring
the arrival and service rates of transactions and the current amount of locked funds in the channel,
as described in App. D. η is a positive step-size parameter.

⁴The price update for µ_{v,u} is analogous to Eq. (6), but with u and v interchanged. This equation can be modified to include on-chain rebalancing amounts on both ends.
5.2.2 End-host Design
Spider hosts run a multi-path transport protocol with pre-determined paths, which controls the rates
at which payments are transferred based on observations of the channel prices or router feedback.
End-hosts use probe messages to evaluate the channel prices on each path. The total price of
a path p is given by

z_p = \sum_{(u,v) \in p} \left( 2\lambda_{u,v} + \mu_{u,v} - \mu_{v,u} \right), \qquad (8)

which captures the aggregate amount of imbalance and excess demand in the path, as signaled by the
corresponding price variables. We refer to App. C for the mathematical intuition behind Equa-
tion (8). Probes are sent periodically, every τ seconds (i.e., at the same frequency at which channel
prices are updated, §6.2), on each path. A probe sent out on path p sums the price 2λ_{u,v} + µ_{u,v} − µ_{v,u}
of each channel (u, v) it visits, until it reaches the destination host on p. The destination host then
transmits the probe back to the sender along the reverse path. The rate x_p to send on each path p is
updated using the path price z_p from the most recently received probe as

x_p(t+1) = x_p(t) + \alpha \left( U'(x) - z_p(t) \right), \qquad (9)

where α is a positive step-size parameter. Thus, the rate to send on a path decreases if the path price
is high, indicating a large amount of imbalance or capacity deficit in the path, and increases
otherwise.
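For instance, with the log utility of §5 (so that U'(x) = 1/x, where x is the pair's total rate across paths), a sender's per-probe update of Eq. (9) reduces to a few lines. The projection to nonnegative rates and the parameter values below are our own assumptions.

def update_rate(x_p, z_p, x_total, alpha=0.01):
    # x_p(t+1) = x_p(t) + alpha * (U'(x) - z_p(t)), with U'(x) = 1/x for log utility.
    return max(0.0, x_p + alpha * (1.0 / max(x_total, 1e-9) - z_p))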
5.2.3 Challenges
There are a number of challenges in making this algorithm work in practice. Firstly, iterative al-
gorithms for adjusting path prices and sending rates suffer from slow convergence. An algorithm
that is slow to converge may not be able to adapt the routing to changes in the transaction arrival
pattern. If the transaction arrival patterns change frequently, this may result in a perpetually sub-
optimal routing. Secondly, in order to compute the imbalance prices, the two routers in a payment
channel need to exchange information about their arrival patterns and their respective queue states
to calculate n_u and m_u in Eqs. (6)–(7). This implies that routers cannot deploy this scheme in isolation. Fur-
ther, we found that the scheme was extremely sensitive to the many parameters involved in the
algorithm, making it hard to tune for a variety of topologies and capacity distributions. Lastly, a
pure pacing-based approach could cause bursts of transaction-units that lead to large queue
buildups long before the prices react appropriately. To account for this, the algorithm needs to be
augmented with windows [37] and active queue control to work in practice. Due to these difficul-
ties, we propose a more practical and simpler protocol described in §6.
Figure 5: Example of queue growth in a payment channel between routers u and v, under different scenarios of transaction arrival rates at u and v. (a) A capacity-limited payment channel: if the rate of arrival at v, x_v, and the rate of arrival at u, x_u, are such that their sum exceeds the channel capacity, neither router has available funds and queues build up at both u and v. (b) An imbalance-limited payment channel: if the arrival rates are imbalanced, e.g., if x_v > x_u, then u has excess funds while v has none, causing queue buildup at v.
6 Design
6.1 Intuition
Spider routers queue up transactions at a payment channel whenever the channel lacks funds to for-
ward them immediately (§5). Thus, queue buildup is a sign that either transaction-units are arriving
faster (in both directions) than the channel can process (Fig. 5a), or that one end of the payment
channel lacks sufficient funds (Fig. 5b). It indicates that the capacity constraint (Equation 3) or the
balance constraint (Equation 4) is being violated, and the sender should adjust its sending rate.
Therefore, if senders use a congestion control protocol that controls queues, they can detect
both capacity and imbalance violations and react to them. For example, in Fig. 5a, the protocol
would throttle both x_u and x_v. In Fig. 5b, it would decrease x_v to match the rate at which queue q_v
drains, which is precisely x_u, the rate at which new funds become available at router v.
This illustrates that a congestion controller that satisfies two basic properties can achieve both
efficiency and balanced rates:
1. Keeping queues non-empty, which ensures that any available capacity is being utilized, i.e.,
there are no unused tokens at any router.
2. Keeping queues stable (bounded), which ensures that (a) the flow rates do not exceed a
channel's capacity, and (b) the flow rates are balanced. If either condition is violated, then at
least one of the channel’s queues would grow.
Congestion control algorithms that satisfy these properties abound (e.g., Reno [19], Cubic [35],
DCTCP [22], Vegas [27], etc.) and could be adapted for PCNs.
In PCNs, it is desirable to transmit transaction-units along multiple paths to better utilize avail-
able capacity. Consequently, Spider’s design is inspired by multi-path transport protocols like
MPTCP [60]. These protocols couple rate control decisions for multiple paths to achieve both high
throughput and fairness among competing flows [59]. We describe an MPTCP-like protocol for
PCNs in §6.2–6.3. In §6.4, we show that the rates found by Spider's protocol for parallel network
topologies match the solution to the optimization problem in §5.
Figure 6: Routers queue up transaction-units and schedule them based on priority when funds become available. If the delay through the queue for a packet exceeds a threshold, they mark the packet. End-hosts maintain and adjust windows for each path to a receiver based on the marks they observe.
6.2 Spider Router Design
Fig. 6 shows a schematic diagram of the various components in the Spider PCN. Spider routers
monitor the time that each packet spends in their queue and mark the packet if the time spent ex-
ceeds a pre-determined threshold T. If the transaction-unit is already marked, routers leave the
field unchanged and merely forward the transaction-unit. Routers forward acknowledgments from
the receiving end-host back to the sender which interprets the marked bit in the ack accordingly.
Spider routers schedule transaction-units from their queues according to a scheduling policy, like
Smallest-Payment-First or Last-In-First-Out (LIFO). Our evaluations (§7.5) show that LIFO pro-
vides the highest transaction success rate. The idea behind LIFO is to prioritize transaction-units
from new payments, which are likely to complete within their deadline.
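A minimal sketch of this router-side behavior (queuing, LIFO scheduling, and delay-based marking) follows. The class and field names are illustrative assumptions; a real Spider router dequeues only when the channel regains sufficient balance and then forwards the unit onward.

import time
import collections

class SpiderChannelQueue:
    def __init__(self, threshold_s=0.3):
        self.q = collections.deque()   # per-channel queue of waiting units
        self.T = threshold_s           # marking threshold on queuing delay

    def enqueue(self, unit):
        # Called when the channel lacks the funds to forward the unit immediately.
        self.q.append((time.monotonic(), unit))

    def dequeue(self):
        # Called when funds arrive from the other side of the channel.
        arrived, unit = self.q.pop()   # LIFO: newest payments are served first
        if time.monotonic() - arrived > self.T:
            unit.marked = True         # echoed back to the sender in the ack
        return unit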
6.3 Spider Transport Layer at End-Hosts
Spider senders send and receive payments on a PCN by interfacing with their transport layer. This
layer is configured to support both atomic and non-atomic payments depending on user prefer-
ences. Non-atomic payments utilize Spider’s packet-switching which breaks up large payments
into transaction-units that are delivered to the receiver independently. In this case, senders are no-
tified of how much of the payment was completed, allowing them to cancel the rest or retry it on
the blockchain. While this approach crucially allows token reuse at bottleneck payment channels
for the same transaction (§3), senders also have the option of requesting atomic payments (likely
for a higher fee). Our results (§7) show that even with packetization, more than 95% of payments
complete in full.
The transport layer also involves a multi-path protocol which controls the rates at which pay-
ments are transferred, based on congestion in the network. For each destination host, a sender
chooses a set of k paths to route transaction-units along. The route for a transaction-unit is decided
at the sender before transmitting the unit. It is written into the transaction-unit using onion encryp-
tion, to hide the full route from intermediate routers [33, 17]. In §7.5, we evaluate the impact of
different path choices on Spider’s performance and propose using edge-disjoint widest paths [21]
between each sender and receiver in Spider.
To control the rate at which payments are sent on a path, end-hosts maintain a window size
w_p for every candidate path to a destination. This window size denotes the maximum number
of transaction-units that can be outstanding on path p at any point in time. End-hosts track the
transaction-units that have been sent out on each path but have not yet been acked or canceled. A
new transaction-unit is transmitted on a path p only if the total amount pending does not exceed w_p.
End-hosts adjust w_p based on router feedback on congestion and imbalance. In particular, on a
path p between source i and receiver j, the window changes as

w_p \leftarrow w_p - \beta \quad \text{on every marked packet, and} \qquad (10)

w_p \leftarrow w_p + \frac{\alpha}{\sum_{p' \in \mathcal{P}_{i,j}} w_{p'}} \quad \text{on every unmarked packet.} \qquad (11)

Here, α and β are both positive constants that denote the aggressiveness with which the window
size is increased and decreased, respectively. Eqs. (10)–(11) are similar to MPTCP, but with a
multiplicative decrease factor that depends on the fraction of packets marked on a path (similar to
DCTCP [22]).
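The per-ack adjustment amounts to a few lines; the sketch below (our own naming, with a floor of one transaction-unit as an assumption) applies Eqs. (10)-(11) to a dict mapping each of one pair's paths to its window w_p, using the defaults α = 10 and β = 0.1 from §7.1.

def on_ack(windows, path, marked, alpha=10.0, beta=0.1):
    # One update per acknowledged transaction-unit on `path`.
    if marked:
        windows[path] = max(1.0, windows[path] - beta)      # Eq. (10)
    else:
        windows[path] += alpha / sum(windows.values())      # Eq. (11)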
We expect the application to specify a deadline for every transaction. If the transport layer
fails to complete the payment within the deadline, the sender cancels the payment, clearing all of
its state from the PCN. In particular, it sends a cancellation message to remove any transaction-
units queued at routers on each path to the receiver. Notice that transaction-units that arrive at the
receiver in the meantime cannot be unlocked because we assume the sender holds the secret key
(§2). Senders can then choose to retry the failed portion of the transaction again on the PCN or on
the blockchain; such retries would be treated as new transactions. Canceled packets are considered
marked and Spider decreases its window in response to them.
6.4 Optimality of Spider
Under a fluid approximation model for Spider’s dynamics, we can show that the rates computed
by Spider are an optimal solution to the routing problem in Equations (1)–(5) for parallel networks
(such as Fig. 20 in App. B). In the fluid model, we let x_p(t) denote the rate of flow on a path p
at time t; for a channel (u, v), f_{u,v}(t) denotes the fraction of packets that are marked at router
u as a result of excessive queuing. The dynamics of the flow rates x_p(t) and marking fractions
f_{u,v}(t) can be specified using differential equations to approximate the window update dynamics
in Equations (10) and (11). We elaborate more on this fluid model, including specifying how the
queue sizes and marking fractions evolve, in App. B.
Now, consider the routing optimization problem (Equations (1)–(5)) written in the context
of a parallel network. If Spider is used on this network, we can show that there is a mapping
from the rates {x_p} and marking fractions {f_{u,v}} after convergence to the primal and dual
variables of the optimization problem, such that the Karush-Kuhn-Tucker (KKT) conditions for the
optimization problem are satisfied. This proves that the set of rates found by Spider is an optimal
solution to the optimization problem [26]. The complete and formal mathematical proof showing
the above is presented in App. B.
7 Evaluation
We develop an event-based simulator for PCNs, and use it to extensively evaluate Spider across
a wide range of scenarios. We describe our simulation setup (§7.1), validate it via a prototype
implementation (§7.2), and present detailed results for circulation demands (§7.3). We then show
the effect of adding DAG components to circulations (§7.4), and study Spider’s design choices
(§7.5).
7.1 Experimental Setup
Simulator. We extend the OMNET++ simulator (v5.4.1) [1] to model a PCN. Our simulator accu-
rately models the network-wide effects of transaction processing, by explicitly passing messages
between PCN nodes (endhosts and routers).⁵ Each endhost (i) generates transactions destined for
other endhosts as per the specified workload, and (ii) determines when to send a transaction and
along which path, as per the specified routing scheme. All endhosts maintain a view of the en-
tire PCN topology, to compute suitable source-routes. The endhosts cannot view channel balances,
but they do know each channel's size, i.e., its total number of tokens (in €).
ated transactions into MTU-sized segments (or transaction-units) before routing, if required by the
routing scheme (e.g. by Spider). Each generated transaction has a timeout value and is marked as a
failure if it fails to reach its destination by then. Upon receiving a transaction, an endhost generates
an acknowledgment that is source-routed along its reverse path.
A router forwards incoming transactions and acknowledgments along the payment channels
specified in their route, while correspondingly decrementing or incrementing the channel balances.
Funds consumed by a transaction in a channel are inflight and unavailable until its acknowledgment
is received. A transaction is forwarded on a payment channel only if the channel has sufficient
balance; otherwise the transaction is stored in a per-channel queue that is serviced in last-in-
first-out (LIFO) order (§7.5). If the queue is full, an incoming transaction is dropped, and a failure
message is sent to the sender.
Routing Schemes. We implement and evaluate the following routing schemes in our simulator.
(1) Spider: Every Spider sender maintains a set of up to k edge-disjoint widest paths to each
destination and a window size per path. The sender splits transactions into transaction-units and
sends a transaction-unit on a path if the path's window is larger than the amount in flight on the path. If
a transaction-unit cannot be sent, it is placed in a per-destination queue at the sender that is served
in LIFO order. Spider routers mark transaction-units experiencing queuing delays higher than a
pre-determined threshold. Spider receivers echo the mark back to senders, who adjust the window
size according to the equations in §6.3.
(2) Waterfilling: Waterfilling uses balance information explicitly, in contrast to Spider's 1-bit feed-
back. As with Spider, a sender splits transactions into transaction-units and picks up to k edge-
disjoint widest paths per destination. It maintains one outstanding probe per path that computes
the bottleneck (minimum) channel balance along it. When a path's probe is received, the sender
computes the available balance based on its bottleneck and the in-flight transaction-units. A
transaction-unit is sent along the path with the highest available balance (see the sketch after this
list). If the available balance for all of the k paths is zero (or less), the transaction-unit is queued
and retried after the next probe.

⁵https://github.com/spider-pcn/spider-omnet

Figure 7: Transaction dataset and channel size distribution used for real-world evaluations. (a) Transaction size distribution; (b) LN channel size distribution.
(3) Shortest Path: This baseline sends transactions along the shortest path to the destination without
transaction splitting.
(4) Landmark Routing: Landmark routing, as used in prior PCN routing schemes [48, 43, 51],
chooses kwell-connected landmark nodes in the topology. For every transaction, the sender com-
putes its shortest path to each landmark and concatenates it with the shortest path from that land-
mark to the destination to obtain kdistinct paths. Then, the sender probes each path to obtain its
bottleneck balance, and partitions the transaction such that each path can support its share of the
total transaction. If such a partition does not exist or if any of the partitions fail, the transaction
fails.
(5) LND: The PCN scheme currently deployed in the Lightning Network Daemon (LND) [15] first
attempts to send a transaction along the shortest path to its destination. If the transaction fails due
to insufficient balance at a channel, the sender removes that channel from its local view, recom-
putes the shortest path, and retries the transaction on the new path until the destination becomes
unreachable or the transaction times out. A channel is added back to the local view 5 seconds after
its removal.
(6) Celer: App. E.1 compares Spider to Celer's cRoute, as proposed in a white paper [11]. Celer is
a back-pressure routing algorithm that routes transactions based on queue and imbalance gradients.
Due to computation overheads associated with Celer’s large queues, we evaluate it on a smaller
topology.
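For concreteness, Waterfilling's per-unit path choice can be sketched as follows; the data layout and field names are our own illustration of the scheme described in (2) above.

def pick_path(paths):
    # paths: list of dicts with the probed 'bottleneck' balance and the
    # amount currently 'inflight' on each path.
    best = max(paths, key=lambda p: p["bottleneck"] - p["inflight"])
    if best["bottleneck"] - best["inflight"] <= 0:
        return None   # queue the transaction-unit; retry after the next probe
    return best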
Workload. We generate two forms of payment graphs to specify the rate at which a sender trans-
acts with every other receiver: (i) pure circulations, with a fixed total sending rate x per sender;
the traffic demand matrix for this is generated by adding x random permutation matrices; and (ii)
circulations with a DAG component, having a total rate y; this type of demand is generated by
sampling y different sender-receiver pairs, where the senders and receivers are chosen from two
separate exponential distributions (so that some nodes are more likely than others to be picked as
a sender or receiver). The scale β of the distribution is set proportional to the desired percentage
of the DAG component in the total traffic matrix: the greater the fraction of DAG component desired,
the more skewed the distribution becomes. We translate the rates specified in the payment graphs
to discrete transactions with Poisson inter-arrival times. The transaction size distribution is drawn
from credit card transaction data [34], and has a mean of 88€ and a median of 25€, with the largest
transaction being 3930€. The distribution of transaction sizes is shown in Fig. 7a. We keep the
sending rates constant at an average of 30 tx/sec per sender, shared among 10 destinations,
throughout all of our experiments. Note that a sender represents a router in these experiments,
sending transactions to other routers on behalf of many users.
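As a sketch of the circulation construction described above: summing x random permutation matrices yields a demand matrix whose row and column sums all equal x, so every sender sends exactly as much as it receives. (Permutations may map a node to itself; handling such self-pairs is left out here and is an assumption about the actual generator.)

import numpy as np

def circulation_demand(n, x, seed=0):
    # Sum of x random n-by-n permutation matrices.
    rng = np.random.default_rng(seed)
    demand = np.zeros((n, n))
    for _ in range(int(x)):
        perm = rng.permutation(n)
        demand[np.arange(n), perm] += 1.0
    return demand   # all row sums and column sums equal x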
Topology. We set up an LND node [15] to retrieve the Lightning Network topology on July
15, 2019. We snowball sample [36] the full topology (which has over 5000 nodes and 34000
edges), resulting in a PCN with 106 nodes and 265 payment channels. For compatibility with
our transaction dataset, we convert LND payment channel sizes from Satoshis to €, and cap the
minimum channel size at the median transaction size of 25€. The distribution of channel sizes
for this topology has a mean and median of 421€ and 163€, respectively (Fig. 7b). This
distribution is highly skewed, resulting in a mean that is much larger than the median or the smallest
payment channels. We refer to this distribution as the Lightning Channel Size Distribution (LCSD).
We draw channel propagation delays based on ping times from our LND node to all reachable
nodes in the Lightning Network, resulting in minimum transaction RTTs of about a second.
We additionally simulate two synthetic topologies: a Watts-Strogatz small world topology [20]
with 50 nodes and 200 edges, and a scale-free Barabasi-Albert graph [4] with 50 nodes and 336
edges. We set the per-hop delay to 30ms in both cases, resulting in an end-to-end minimum RTT
of 200-300ms.
For payment channel sizes, we use real capacities in the Lightning topology and sample ca-
pacities from LCSD for synthetic topologies. We vary the mean channel size across different
experiments by proportionally scaling up the size of each payment channel. All payment channels
are perfectly balanced at the start of the experiment.
Parameters. We set the MTU to 1€. Every transaction has a timeout of 5 seconds. Schemes with
router queues enabled have a per-channel queue size of 12000€. The number of path choices is
set to k = 4 for schemes that use multiple paths. We vary both the number and the nature
of paths in §7.5. For Spider, we set α (the window increase factor) to 10, β (the multiplicative decrease
factor) to 0.1, and the marking threshold for the queue delay to 300 ms. For the experiments in
§7.4, we set this threshold to 75 ms so as to respond to queue buildup faster than once per RTT.
Metrics. We use the following evaluation metrics: (i) transaction success ratio: the number
of completed transactions over the number of generated transactions, where a transaction that is split
at the source counts as complete only when all of its pieces successfully reach the destination; and (ii)
normalized throughput: the total amount of payments (in €) completed over the total amount of
payments generated. All of these metrics are computed over a measurement interval, set such that
all algorithms are in their steady state. Unless otherwise specified, we use a measurement interval
of 800-1000s when running an experiment for 1010s.
7.2 Prototype Implementation
To support Spider, we modify the Lightning Network Daemon (LND) [15], which is currently
deployed on the live Bitcoin network. We repurpose the router queues to queue up transactions
(or HTLCs) that cannot be immediately serviced. When a transaction spends more than 75 ms in
the queue, Spider marks it. The marking is echoed back to the sender via an additional field in the
transaction acknowledgement (FulfillHTLC). We maintain per-receiver state at the sender
to capture the window and the number of units in flight on each path, as well as the queue of unattempted
transactions. Each sender finds 4 edge-disjoint shortest paths to every destination. We do not
implement transaction-splitting.
We deploy our modified LND implementation [15] on Amazon EC2’s c5d.4xlarge in-
stances with 16 CPU cores, 16 GB of RAM, 400 GB of NVMe SSD, and a 10 Gbps network
interface. Each instance hosts one end-host and one router. Every LND node is run within a
docker container with a dedicated bitcoin daemon [6]. We create our own regtest [8] blockchain
for the nodes. Channels are created corresponding to a scale-free graph with 10 nodes and 25
edges. We vary the mean channel size from 25€ to 400€. Five circulation payment graphs are
generated, with each sender sending 100 tx/s (each 1€). Receiving nodes communicate invoices
via etcd [13] to sending nodes, who then complete them using the appropriate scheme. We run
LND and Spider on the implementation and measure the transaction RTTs to inform propagation
delays on the simulator. We then run the same experiments on the simulator.

Figure 8: Comparison of performance in simulation and implementation for LND and Spider on a 10-node scale-free topology with 1€ transactions. Spider outperforms LND in both settings. Further, the average success ratios in simulation and implementation for both schemes are within 5% of each other.
Fig. 8 shows the average success ratio that Spider and LND achieve on the implementation and
the simulator. There are two takeaways: (i) Spider outperforms LND in both settings and, (ii) the
average success ratio on the simulator is within 5% of the implementation for both schemes. Our
attempts at running experiments at larger scale showed that the LND codebase is not optimized
for high throughput. For example, persisting HTLC state on disk causes IO bottlenecks and vari-
ations of tens of seconds in transaction latencies even on small topologies. Given the fidelity and
flexibility of the simulator, we chose to use it for the remaining evaluations.
7.3 Circulation Payment Graph Performance
Figure 9: Performance of different algorithms on the small-world, scale-free, and Lightning Network topologies, for different per-sender transaction arrival rates. Spider consistently outperforms all other schemes, achieving a near-100% average success ratio. Note the log scale of the x-axes.
Figure 10: Breakdown of performance of different schemes by size of transactions completed. Each point
reports the success ratio for transactions whose size belongs to the interval denoted by the shaded region.
Each interval corresponds roughly to a 12.5% weight in the transaction size CDF shown in Fig. 7a. The
graphs correspond to the midpoints of the Lightning-sampled channel sizes in Fig. 9.
Recall that on circulation payment graphs, all the demand can theoretically be routed if there
is sufficient capacity (§5.1 and App. A). However, the capacity at which a routing scheme attains
100% throughput depends on the scheme’s ability to balance channels: the more balanced a scheme
is, the less capacity it needs for high throughput.
Efficiency of Routing Schemes. We run five circulation traffic matrices on our three topologies
(§7.1). Notice that the channel sizes are much larger on the Lightning Topology compared to the
other two due to the highly skewed nature of capacities (Fig. 7b). We measure success ratio for the
transactions across different channel sizes. Fig. 9 shows that on all topologies, Spider outperforms
the state-of-the-art schemes. Spider successfully routes more than 95% of the transactions with
less than 25% of the capacity required by LND. At lower capacities, Spider completes 2-3× more
transactions than LND. This is because Spider maintains balance in the network by responding
quickly to queue buildup at payment channels, thus making better use of network capacity. The
explicit balance-aware scheme, Waterfilling, also routes more transactions than LND. However,
when operating in low capacity regimes, where many paths are congested and have near-zero
available balance, senders are unable to use just balance information to differentiate paths. As a
result, Waterfilling’s performance degrades at low capacity compared to Spider which takes into
account queuing delays.
Size of Successful Payments. Spider's benefits are most pronounced at larger transaction sizes,
where packetization and congestion control help more transactions complete. Fig. 10 shows suc-
cess ratio as a function of transaction size. We use mean channel sizes of 4000€ and 16880€
for the synthetic and real topologies, respectively. Each shaded region denotes a different range
of transaction sizes, each corresponding to about 12.5% of the transactions in the workload. A
point within a range represents the average success ratio for transactions in that interval across 5
runs. Spider outperforms LND across all sizes, and is able to route 5-30% more of the largest
transactions compared to LND.
Impact on Latency. We anticipate Spider’s rate control mechanism to increase latency. Fig. 11
shows the average and 99th percentile latency for successful transactions on the Lightning topol-
ogy as a function of transaction size. Spider’s average and tail latency increase with transaction
size because larger transactions are multiplexed over longer periods of time. However, the tail la-
tency increases much more than the average because of the skew in channel sizes in the Lightning
topology: most transactions use large channels while a few unfortunate large transactions need
Figure 11: Average and 99%ile transaction latency for different routing schemes on the Lightning topol-
ogy. Transactions experience 1-2s of additional latency with Spider relative to LND for a 20% improvement
in throughput.
more time to reuse tokens from smaller channels. Yet, the largest Spider transactions experience
at most 2 seconds of additional delay when compared to LND, a small hit relative to the 20%
increase in overall success ratio at a mean channel size of 16880€. LND's latency also increases
with size since it retries transactions, often up to 10 times, until it finds a single path with enough
capacity. In contrast, Landmark Routing and Shortest-Path routing are size-agnostic in their path
choice for transactions.
Waterfilling pauses transactions when there is no available balance and resumes sending when
balance becomes available. Small transactions are unlikely to be paused in their lifetime while
mid-size transactions are paused a few times before they complete. In contrast, large transactions
are likely to be paused many times, eventually getting canceled if paused too much. This has
two implications: (i) the few large transactions that are successful with Waterfilling are not paused
much and contribute smaller latencies than mid-size transactions, and (ii) Waterfilling's conserva-
tive pause and send mechanism implies there is less contention for the large transactions that are
actually sent into the network, leading to smaller latencies than what they experience with Spider.
7.4 Effect of DAGs
Real transaction demands are often not pure circulations: consumer nodes spend more, and mer-
chant nodes receive more. To simulate this, we add 5 DAG payment graphs (§7.1) to circulation
payment graphs, varying the relative weight to generate effectively 5%, 20% and 40% DAG in the
total demand matrix. We run all schemes on the Lightning topology with a mean channel size of
16880€; results on the synthetic topologies are in App. E.4.
Fig. 12 shows the success ratio and normalized throughput. We immediately notice that no
scheme achieves the theoretical upper bound on throughput (i.e., the % circulation demand). How-
ever, throughput is closer to the bound when there is a smaller DAG component in the demand
matrix. This suggests that not only is the DAG itself unroutable, it also alters the PCN balances
in a way that prevents the circulation from being fully routed. Further, the larger the DAG component,
the more the circulation is affected. This is because the DAG causes a deadlock (§3).
Figure 12: Performance of different algorithms on the Lightning topology as the DAG component in the
transaction demand matrix is varied. As the DAG amount is increased, the normalized throughput achieved
is further away from the expected optimal circulation throughput.
Figure 13: Comparing throughput when a pure circulation demand is run for 3000s to a scenario where a
circulation demand is restored for 1000s after 2000s of a demand with 20% DAG. The throughput achieved
on the last 1000s of circulation is not always the expected 100% even after the DAG is removed.
To illustrate this, we run two scenarios: (i) a pure circulation demand X for 3000s, and (ii) a
traffic demand (X + Y) containing 20% DAG for 2000s, followed by the circulation X for 1000s
after that. Here, each sender sends 200€/s of unit-sized transactions in X. We observe a time
series of the normalized throughput over the 3000s. The mean channel size is 4000€ and 16990€
for the synthetic and real topologies respectively.
Fig. 13 shows that Spider achieves 100% throughput (normalized by the circulation demand) at
steady state for the pure circulation demand on all topologies. However, when the DAG component
is introduced to the demand, it affects the topologies differently. First, we do not observe the
expected 80% throughput for the circulation in the presence of the DAG workload, suggesting
that the DAG affects the circulation. Further, even once the circulation demand is restored for
the last 1000s, in the scale free and Lightning Network topologies, the throughput achieved is no
longer 100%. In other words, in these two topologies, the DAG causes a deadlock that affects the
circulation even after the DAG is removed.
As described in §3, the solution to this problem involves replenishing funds via on-chain re-
balancing, since DAG demands continuously move money from sources to sinks. We therefore
Figure 14: Performance of different algorithms on the Lightning topology when augmented with on-chain
rebalancing. Spider needs less frequent rebalancing to sustain high throughput. Spider offloads 3-4x more
transactions onto a PCN per blockchain transaction than LND.
implement a simple rebalancing scheme where every router periodically reallocates funds between
its payment channels to equalize their available balance. The frequency of rebalancing for a router
is defined by the number of successful transaction-units (in €) between consecutive rebalancing
events. In this model, the frequency captures the on-chain rebalancing cost vs. routing fee trade-off
for the router.
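A minimal sketch of this rebalancing scheme, under assumed variable names, is shown below: once a router has routed freq worth of transaction-units since its last on-chain event, it resets every channel's local balance to the mean across its channels.

```python
def maybe_rebalance(balances, routed_since_last, freq):
    """balances: dict mapping channel id -> this router's available
    tokens; routed_since_last: euros successfully routed since the
    previous rebalancing event; freq: the rebalancing frequency."""
    if routed_since_last < freq or not balances:
        return balances, routed_since_last
    # One (batched) on-chain event equalizes available balance
    # across all of this router's payment channels.
    target = sum(balances.values()) / len(balances)
    return {chan: target for chan in balances}, 0
```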
Fig. 14 shows the success ratio and normalized throughput achieved by different schemes when
rebalancing is enabled for the traffic demand with 20% DAG from Figs. 12 and 13. Spider is
able to achieve 90% success ratio even when its routers rebalance only every 10,000€ routed, while
LND is never able to sustain more than 85% success ratio even when rebalancing for every 10€
routed. This is because LND deems a channel unusable for 5 seconds every time a transaction
fails on it due to lack of funds, and this is further worsened by its lack of transaction splitting. This
implies that when using Spider, routers need to pay for only one on-chain transaction, typically
costing under 1€ [7], for every 10,000€ routed. Thus, for a router to break even, it would have to
charge only 1€ per 10,000€ routed, a fee of 0.01%. This translates into significantly lower routing fees for end-
users than today's payment systems [12]. Fig. 14 also captures the same result in the form of the
best offloading, i.e., the number of off-chain PCN transactions per blockchain transaction, achieved by
each algorithm. Transactions that fail on the PCN as well as rebalancing transactions are counted
towards the transactions on the blockchain. Spider is able to route 7-8 times as many transactions
off-chain for every blockchain transaction, a 4x improvement from the state-of-the-art LND.
7.5 Spider’s Design Choices
In this section, we investigate Spider’s design choices with respect to the number of paths, type of
paths, and the scheduling algorithm that services transaction-units at Spider’s queues. We evaluate
these on both the real and synthetic topologies with channel sizes sampled from the LCSD, and
scaled to have means of 16880€ and 4000€ respectively.
Choice of Paths. We vary the type of paths that Spider uses by replacing edge-disjoint widest paths
with edge-disjoint shortest paths, Yen's shortest paths [61], oblivious paths [49], and a heuristic approach.
Figure 15: Performance of Spider as the type of paths considered per sender-receiver pair is varied. Edge-
disjoint widest outperforms others by 1-10% on the Lightning Topology without being much worse on the
synthetic topologies.
Figure 16: Performance of Spider as the number of edge-disjoint widest paths considered per sender-
receiver pair is varied on different topologies. Increasing the number of paths increases success ratio, but
the gains are low in going from 4 to 8 paths.
For the widest and oblivious path computations, the channel size acts as the edge weight.
The heuristic picks 4 paths for each flow with the highest bottleneck balance/RTT value. Fig. 15
shows that edge-disjoint widest paths outperform other approaches by 1-10% on the Lightning
Topology while being only 1-2% worse than edge-disjoint shortest paths on the synthetic topolo-
gies. This is because widest paths are able to utilize the capacity of the network better when there
is a large skew (Fig. 7b) in payment channel sizes.
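One plausible construction of these path sets (the paper does not give pseudocode, so the networkx-based structure and the 'capacity' edge attribute below are assumptions) runs a widest-path variant of Dijkstra repeatedly, deleting each chosen path's edges to enforce edge-disjointness:

```python
import heapq
import networkx as nx

def widest_path(g, src, dst):
    """Dijkstra variant maximizing the minimum edge 'capacity'."""
    best = {src: float("inf")}
    prev = {}
    heap = [(-float("inf"), src)]
    while heap:
        neg_w, u = heapq.heappop(heap)
        if u == dst:
            break
        for v, data in g[u].items():
            w = min(-neg_w, data["capacity"])  # bottleneck so far
            if w > best.get(v, 0.0):
                best[v], prev[v] = w, u
                heapq.heappush(heap, (-w, v))
    if dst != src and dst not in prev:
        raise nx.NetworkXNoPath(f"no path {src}->{dst}")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def edge_disjoint_widest_paths(g, src, dst, k=4):
    """Greedily pick up to k widest paths, removing used edges."""
    g = g.copy()
    paths = []
    for _ in range(k):
        try:
            p = widest_path(g, src, dst)
        except nx.NetworkXNoPath:
            break
        paths.append(p)
        g.remove_edges_from(zip(p, p[1:]))
    return paths
```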
Number of Paths. We vary the maximum number of edge-disjoint widest paths Spider allows
from 1 to 8. Fig. 16 shows that, as expected, the success ratio increases with an increase in number
of paths, as more paths allow Spider to better utilize the capacity of the PCN. While moving from 1
to 2 paths results in 30-50% improvement in success ratio, moving from 4 to 8 paths has negligible
benefits (<5%). This is because the sparseness of the three PCN topologies causes most flows to
have at most 5-6 edge-disjoint widest paths. Further, Spider prefers paths with smaller RTTs since
they receive feedback faster, resulting in the shortest paths contributing most to the overall rate for
the flow. As a result, we use 4 paths for Spider.
Scheduling Algorithms. We modify the scheduling algorithm at the per-destination queues at
the sender as well as the router queues in Spider to process transactions as per First-In-First-
Out (FIFO), Earliest-Deadline-First (EDF) and Smallest-Payment-First (SPF) in addition to the
LIFO baseline. Fig. 17 shows that LIFO achieves a success ratio that is 10-28% higher than its
counterparts. This is because LIFO prioritizes transactions that are newest or furthest from their
deadlines and are thus most likely to complete, especially when the PCN is overloaded. Spider's rate
control results in long wait times in the sender queues themselves. This causes FIFO and EDF, which
send out transactions closest to their deadlines, to time out immediately in the network, resulting
in poor throughput. When SPF deprioritizes large payments at router queues, they consume funds
Figure 17: Performance of Spider as the scheduling algorithm at the sender and router queues is varied.
Last-In-First-Out outperforms all other approaches by over 10% on all topologies.
from other payment channels for longer, reducing the effective capacity of the network.
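The four disciplines amount to different priority keys on a heap-backed queue. The sketch below illustrates this; the transaction fields 'arrival', 'deadline', and 'amount' are assumed names.

```python
import heapq
import itertools

_seq = itertools.count()  # tie-breaker so heap entries stay comparable

def priority(policy, tx):
    """tx is a dict with assumed fields 'arrival', 'deadline', 'amount'."""
    if policy == "FIFO":
        return tx["arrival"]
    if policy == "LIFO":      # newest first: the baseline Spider uses
        return -tx["arrival"]
    if policy == "EDF":       # earliest deadline first
        return tx["deadline"]
    if policy == "SPF":       # smallest payment first
        return tx["amount"]
    raise ValueError(f"unknown policy {policy}")

def push(queue, policy, tx):
    heapq.heappush(queue, (priority(policy, tx), next(_seq), tx))

def pop(queue):
    return heapq.heappop(queue)[2]  # next transaction-unit to service
```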
7.6 Additional Results
In addition to the results described so far, we run additional experiments that are described in the
Appendices.
1. We compare Spider to Celer, as proposed in a white-paper [11], and show that Spider out-
performs Celer’s success ratio by 2x on a scale free topology with 10 nodes and 25 edges
(App. E.1).
2. We evaluate the schemes on the synthetic and real topologies with a simpler channel size
distribution where all channels have equal numbers of tokens. Even in this scenario, Spider
is able to successfully route more than 95% of the transactions with less than 25% of the
capacity required by LND (App. E.2).
3. We evaluate the schemes for their fairness across multiple payments and show that Spider
does not hurt small payments to gain on throughput (App. E.3).
4. We show the effect of DAG workloads on synthetic topologies. In particular, we identify
deadlocks with those topologies too and show that Spider requires rebalancing only ev-
ery 10,000€ successfully routed to sustain high success ratio and normalized throughput
(App. E.4).
8 Related Work
PCN Improvements. Nodes in current Lightning Network implementations maintain a local view
of the network topology and source-route transactions along the shortest path [15, 2]. Classical
max-flow-based alternatives are impractical for the Lightning Network, which has over 5,000 nodes
and 30,000 channels [16, 9], due to their computational complexity. Recent proposals have used a
modified version of max-flow that differentiates based on the size of transactions [58]. However,
inferring the size of payments is hard in an onion-routed network like Lightning.
Two main alternatives to max-flow routing have been proposed: landmark routing and embedding-
based routing. In landmark routing, select routers (landmarks) store routing tables for the rest of
the network, and nodes only route transactions to a landmark [56]. This approach is used in Flare
[48] and SilentWhispers [43, 45]. Embedding-based or distance-based routing learns a vector em-
bedding for each node, such that nodes that are close in network hop distance are also close in
embedded space. Each node relays each transaction to the neighbor whose embedding is closest
to the destination's embedding. VOUTE [50] and SpeedyMurmurs [51] use embedding-based
routing. Computing and updating the embedding dynamically as the topology and link balances
change is a primary challenge of these approaches. Our experiments and prior work [52] show that
Spider outperforms both approaches.
PCN improvements outside of the routing layer focus on rebalancing existing payment chan-
nels more easily [28, 40]. Revive [40] leverages cycles among channels that want to rebalance and
initiates off-chain balancing payments between them. These techniques are complementary to
Spider and can be used to enhance overall performance. However, §7.4 shows that a more general
rebalancing scheme that moves funds at each router independently fails to achieve high throughput
without a balanced routing scheme.
Utility Maximization and Congestion Control. Network Utility Maximization (NUM) is a pop-
ular framework for developing decentralized transport protocols in data networks to optimize a
fairness objective [38]. NUM uses link “prices” derived from the solution to the utility maxi-
mization problem, and senders compute rates based on these router prices. Congestion control
algorithms that use buffer sizes or queuing delays as router signals [30, 54, 22] are closely related.
While the Internet congestion control literature has focused on links with fairly stable capacities,
this paper shows that such techniques can be effective even in networks whose capacities depend on the in-
put rates themselves. Such problems have also been explored in the context of ride-sharing, for
instance [24, 25], and require new innovation in both formulating and solving routing problems.
9 Conclusion
We motivate the need for efficient routing on PCNs and propose Spider, a protocol for balanced,
high-throughput routing in PCNs. Spider uses a packet-switched architecture, multi-path conges-
tion control, and in-network scheduling. Spider achieves nearly 100% throughput on circu-
lation payment demands across both synthetic and real topologies. We show how the presence
of DAG payments causes deadlocks that degrade circulation throughput, necessitating on-chain
intervention. In such scenarios, Spider is able to support 4x more transactions than the state-of-
the-art on the PCN itself.
This work shows that Spider needs less on-chain rebalancing to relieve deadlocked PCNs.
However, it remains to be seen if deadlocks can be prevented altogether. Spider relies on routers
signaling queue buildup correctly to the senders, but this work does not analyze incentive com-
patibility for rogue routers aiming to maximize fees. A more rigorous treatment of the privacy
implications of Spider routers relaying queuing delay is left to future work.
10 Acknowledgements
We thank Andrew Miller, Thaddeus Dryja, Evan Schwartz, Vikram Nathan, and Aditya Akella for
their detailed feedback. We also thank the Sponsors of Fintech@CSAIL, the Initiative for Cryp-
toCurrencies and Contracts (IC3), Distributed Technologies Research Foundation, Input-Output
Hong Kong Inc, the CISCO Research Center, the National Science Foundation under grants CNS-
1718270, CNS-1563826, CNS-1910676, CCF-1705007 and CNS-1617702, and the Army Re-
search Office under grant W911NF1810332 for their support.
References
[1] http://omnetpp.org/.
[2] Amount-independent payment routing in Lightning Networks. https://medium.com/coinmonks/amount-independent-payment-routing-in-lightning-networks-6409201ff5ed.
[3] AMP: Atomic Multi-Path Payments over Lightning. https://lists.linuxfoundation.org/
pipermail/lightning-dev/2018-February/000993.html.
[4] Barabasi Albert Graph. https://networkx.github.io/documentation/networkx-1.9.1/reference/generated/networkx.generators.random_graphs.barabasi_albert_graph.html.
[5] Bitcoin Core. https://bitcoin.org/en/bitcoin-core/.
[6] Bitcoin Core Daemon. https://bitcoin.org/en/full-node#other-linux-daemon.
[7] Bitcoin historical fee chart. https://bitinfocharts.com/comparison/bitcoin-median_transaction_fee.html.
[8] Bitcoin Regtest Mode. https://bitcoin.org/en/developer-examples#regtest-mode.
[9] Blockchain caffe. https://blockchaincaffe.org/map/.
[10] c-lightning: A specification compliant Lightning Network implementation in C. https://github.com/ElementsProject/lightning.
[11] Celer Network: Bring Internet Scale to Every Blockchain. https://www.celer.network/doc/CelerNetwork-Whitepaper.pdf.
[12] Credit Card Merchant Processing Fees. https://paymentdepot.com/blog/average-credit-card-processing-fees/.
[13] etcd: A distributed, reliable key-value store for the most critical data of a distributed system. https://github.com/etcd-io/etcd.
[14] Ethereum. https://www.ethereum.org/.
[15] Lightning Network Daemon. https://github.com/lightningnetwork/lnd.
[16] Lightning Network Search and Analysis Engine. https://1ml.com.
[17] Onion Routed Micropayments for the Lightning Network. https://github.com/lightningnetwork/lightning-onion.
[18] Raiden network. https://raiden.network/.
[19] The NewReno Modification to TCP's Fast Recovery Algorithm. https://tools.ietf.org/html/rfc6582.
[20] Watts Strogatz Graph. https://networkx.github.io/documentation/networkx-1.9/reference/generated/networkx.generators.random_graphs.watts_strogatz_graph.html.
[21] R. K. Ahuja, T. L. Magnanti, and J. B. Orlin. Network Flows: Theory, Algorithms and Applications.
Prentice Hall, 1993.
[22] M. Alizadeh, A. Greenberg, D. A. Maltz, J. Padhye, P. Patel, B. Prabhakar, S. Sengupta, and M. Srid-
haran. Data center tcp (dctcp). ACM SIGCOMM computer communication review, 41(4):63–74, 2011.
[23] V. Bagaria, S. Kannan, D. Tse, G. Fanti, and P. Viswanath. Deconstructing the blockchain to approach
physical limits. arXiv preprint arXiv:1810.08092, 2018.
[24] S. Banerjee, R. Johari, and C. Riquelme. Pricing in ride-sharing platforms: A queueing-theoretic
approach. In Proceedings of the Sixteenth ACM Conference on Economics and Computation, pages
639–639. ACM, 2015.
[25] S. Banerjee, R. Johari, and C. Riquelme. Dynamic pricing in ridesharing platforms. ACM SIGecom
Exchanges, 15(1):65–70, 2016.
[26] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
[27] L. S. Brakmo, S. W. O’Malley, and L. L. Peterson. TCP Vegas: New techniques for congestion detec-
tion and avoidance, volume 24. ACM, 1994.
[28] C. Burchert, C. Decker, and R. Wattenhofer. Scalable funding of bitcoin micropayment channel net-
works. Royal Society open science, 5(8):180089, 2018.
[29] C. N. Cordi. Simulating high-throughput cryptocurrency payment channel networks. PhD thesis, 2017.
[30] N. Dukkipati. Rate Control Protocol (RCP): Congestion control to make flows complete quickly.
Citeseer, 2008.
[31] A. Eryilmaz and R. Srikant. Joint congestion control, routing, and mac for stability and fairness in
wireless networks. IEEE Journal on Selected Areas in Communications, 24(8):1514–1524, 2006.
[32] Y. Gilad, R. Hemo, S. Micali, G. Vlachos, and N. Zeldovich. Algorand: Scaling byzantine agreements
for cryptocurrencies. In Proceedings of the 26th Symposium on Operating Systems Principles, pages
51–68. ACM, 2017.
[33] D. Goldschlag, M. Reed, and P. Syverson. Onion routing. Communications of the ACM, 42(2):39–41,
1999.
[34] U. M. L. Group. Credit card fraud detection, 2018. https://www.kaggle.com/mlg-ulb/
creditcardfraud.
[35] S. Ha, I. Rhee, and L. Xu. Cubic: a new tcp-friendly high-speed tcp variant. ACM SIGOPS operating
systems review, 42(5):64–74, 2008.
[36] P. Hu and W. C. Lau. A survey and taxonomy of graph sampling. arXiv preprint arXiv:1308.5865,
2013.
[37] V. Jacobson and M. J. Karels. Congestion avoidance and control. In SIGCOMM 1988, Stanford, CA,
Aug. 1988.
[38] F. Kelly and T. Voice. Stability of end-to-end algorithms for joint routing and rate control. ACM
SIGCOMM Computer Communication Review, 35(2):5–12, 2005.
[39] F. P. Kelly, A. K. Maulloo, and D. K. Tan. Rate control for communication networks: shadow prices,
proportional fairness and stability. Journal of the Operational Research society, 49(3):237–252, 1998.
[40] R. Khalil and A. Gervais. Revive: Rebalancing off-blockchain payment networks. In Proceedings
of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 439–453.
ACM, 2017.
[41] E. Kokoris-Kogias, P. Jovanovic, L. Gasser, N. Gailly, E. Syta, and B. Ford. Omniledger: A secure,
scale-out, decentralized ledger via sharding. In 2018 IEEE Symposium on Security and Privacy (SP),
pages 583–598. IEEE, 2018.
[42] B. M. Leiner, V. G. Cerf, D. D. Clark, R. E. Kahn, L. Kleinrock, D. C. Lynch, J. Postel, L. G. Roberts,
and S. Wolff. A brief history of the internet. SIGCOMM Comput. Commun. Rev., 39(5):22–31, Oct.
2009.
[43] G. Malavolta, P. Moreno-Sanchez, A. Kate, and M. Maffei. SilentWhispers: Enforcing Security and
Privacy in Decentralized Credit Networks. IACR Cryptology ePrint Archive, 2016:1054, 2016.
[44] R. McManus. Blockchain speeds & the scalability debate. Blocksplain, February 2018.
[45] P. Moreno-Sanchez, A. Kate, M. Maffei, and K. Pecina. Privacy preserving payments in credit net-
works. In Network and Distributed Security Symposium, 2015.
[46] D. P. Palomar and M. Chiang. A tutorial on decomposition methods for network utility maximization.
IEEE Journal on Selected Areas in Communications, 24(8):1439–1451, 2006.
[47] J. Poon and T. Dryja. The Bitcoin Lightning Network: Scalable Off-chain Instant Payments. draft
version 0.5, 9:14, 2016.
[48] P. Prihodko, S. Zhigulin, M. Sahno, A. Ostrovskiy, and O. Osuntokun. Flare: An approach to routing
in lightning network. 2016.
[49] H. Racke. Minimizing congestion in general networks. In The 43rd Annual IEEE Symposium on
Foundations of Computer Science, 2002. Proceedings., pages 43–52. IEEE, 2002.
[50] S. Roos, M. Beck, and T. Strufe. Anonymous addresses for efficient and resilient routing in f2f over-
lays. In Computer Communications, IEEE INFOCOM 2016-The 35th Annual IEEE International
Conference on, pages 1–9. IEEE, 2016.
[51] S. Roos, P. Moreno-Sanchez, A. Kate, and I. Goldberg. Settling Payments Fast and Private: Efficient
Decentralized Routing for Path-Based Transactions. arXiv preprint arXiv:1709.05748, 2017.
[52] V. Sivaraman, S. B. Venkatakrishnan, M. Alizadeh, G. Fanti, and P. Viswanath. Routing cryptocurrency
with the spider network. In Proceedings of the 17th ACM Workshop on Hot Topics in Networks, pages
29–35. ACM, 2018.
[53] R. Srikant. The mathematics of Internet congestion control. Springer Science & Business Media,
2012.
[54] C.-H. Tai, J. Zhu, and N. Dukkipati. Making large scale deployment of rcp practical for real networks.
In IEEE INFOCOM 2008-The 27th Conference on Computer Communications, pages 2180–2188.
IEEE, 2008.
[55] S. Thomas and E. Schwartz. A protocol for interledger payments. https://interledger.org/interledger.pdf, 2015.
[56] P. F. Tsuchiya. The landmark hierarchy: a new hierarchy for routing in very large networks. In ACM
SIGCOMM Computer Communication Review, volume 18, pages 35–42. ACM, 1988.
[57] Visa. Visa acceptance for retailers. https://usa.visa.com/run-your-business/small-business-tools/retail.html.
[58] P. Wang, H. Xu, X. Jin, and T. Wang. Flash: efficient dynamic routing for offchain networks. arXiv
preprint arXiv:1902.05260, 2019.
[59] D. Wischik, M. Handley, and M. B. Braun. The resource pooling principle. ACM SIGCOMM Computer
Communication Review, 38(5):47–52, 2008.
[60] D. Wischik, C. Raiciu, A. Greenhalgh, and M. Handley. Design, Implementation and Evaluation of
Congestion Control for Multipath TCP. In NSDI, volume 11, pages 8–8, 2011.
[61] J. Y. Yen. Finding the k shortest loopless paths in a network. Management Science, 17(11):712–716, 1971.
[Figure 18 graphic: (a) payment graph, (b) circulation, (c) DAG.]
Figure 18: Example payment graph (denoted by blue lines) for a five node network (left). It decomposes
into maximum circulation and DAG components, as shown in (b) and (c).
Appendices
A Circulations and Throughput Bounds
For a network G(V, E) with set of routers V, we define a payment graph H(V, E_H) as a graph
that specifies the payment demands between different users. The weight of any edge (i, j) in the
payment graph is the average rate at which user i seeks to transfer funds to user j. A circulation
graph C(V, E_C) of a payment graph is any subgraph of the payment graph in which the weight of
an edge (i, j) is at most the weight of (i, j) in the payment graph, and moreover the total weight of
incoming edges is equal to the total weight of outgoing edges for each node. Of particular interest
are maximum circulation graphs, which are circulation graphs that have the highest total demand
(i.e., sum of edge weights) among all possible circulation graphs. A maximum circulation graph
is not necessarily unique for a given payment graph.
Proposition 1. Consider a payment graph H with a maximum circulation graph C∗. Let ν(C∗)
denote the total demand in C∗. Then, on a network in which each payment channel has at least
ν(C∗) units of escrowed funds, there exists a balanced routing scheme that can achieve a total
throughput of ν(C∗). However, no balanced routing scheme can achieve a throughput greater than
ν(C∗) on any network.
Proof. Let w_{C∗}(i, j) denote the payment demand from any user i to user j in the maximum circu-
lation graph C∗. To see that a throughput of ν(C∗) is achievable, consider routing the circulation
demand along the shortest paths of any spanning tree T of the payment network G. In this routing,
for any pair of nodes i, j ∈ V there exists a unique path from i to j in T through which w_{C∗}(i, j)
amount of flow is routed. We claim that such a routing scheme is perfectly balanced on all the
links. This is because for any partition S, V\S of C∗, the net flow going from S to V\S is equal to
the net flow going from V\S to S in C∗. Since the flows along an edge e of T correspond precisely
to the net flows across the partitions obtained by removing e in T, it follows that the flows on e are
balanced as well. Also, for any flow (i, j) in the demand graph C∗, the shortest path route from i
to j in T can cross an edge e at most once. Therefore the total amount of flow going through an
edge is at most the total amount of flow in C∗, which is ν(C∗).
Next, to see that no balanced routing scheme can achieve a throughput greater than ν(C∗),
assume the contrary and suppose there exists a balanced routing scheme SCH with a throughput
Figure 19: Model of queues at a payment channel between nodes u and v. x_{u,v} and y_{u,v} denote the rates at
which transaction-units for v arrive into and get serviced at the queue at u respectively. c_{u,v} is the capacity
of the payment channel and q_{u,v} denotes the total number of transaction-units waiting in u's queue to be
serviced.
greater than ν(C∗). Let H_SCH ⊆ H be a payment graph where the edges represent the portion of
demand that is actually routed in SCH. Since ν(H_SCH) > ν(C∗), H_SCH is not a circulation, and there
exists a partition S, V\S such that the net flow from S to V\S is strictly greater than the net flow
from V\S to S in H_SCH. However, the net flows routed by SCH across the same partition S, V\S
in G are balanced (by assumption), resulting in a contradiction. Thus we conclude there does not
exist any balanced routing scheme that can achieve a throughput greater than ν(C∗).
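To make ν(C∗) concrete, the sketch below (our illustration, not an artifact of the paper) computes the maximum circulation of a payment graph as a linear program: maximize the total routed demand subject to per-edge demand limits and flow conservation at every node. The leftover demand, demands[e] − circ.get(e, 0), is the unroutable DAG component.

```python
import numpy as np
from scipy.optimize import linprog

def max_circulation(nodes, demands):
    """demands: dict mapping edge (i, j) -> payment rate from i to j.
    Returns nu(C*) and the per-edge rates of one maximum circulation."""
    edges = list(demands)
    idx = {v: k for k, v in enumerate(nodes)}
    # Flow conservation: total inflow equals total outflow at each node.
    A_eq = np.zeros((len(nodes), len(edges)))
    for e, (i, j) in enumerate(edges):
        A_eq[idx[i], e] -= 1.0   # flow leaving i
        A_eq[idx[j], e] += 1.0   # flow entering j
    # Maximize total circulated demand (minimize its negation),
    # with each edge's flow capped by the demanded rate.
    res = linprog(c=-np.ones(len(edges)),
                  A_eq=A_eq, b_eq=np.zeros(len(nodes)),
                  bounds=[(0.0, demands[e]) for e in edges],
                  method="highs")
    circ = {e: f for e, f in zip(edges, res.x) if f > 1e-9}
    return -res.fun, circ

# Toy example with two overlapping cycles.
demands = {("a", "b"): 2, ("b", "c"): 1, ("c", "a"): 1, ("b", "a"): 1}
print(max_circulation(["a", "b", "c"], demands))
```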
B Optimality of Spider
B.1 Fluid Model
In this section we describe a fluid model approximation of the system dynamics under Spider's
protocol. Following a similar notation as in §5, for a path p we let x_p(t) denote the rate of flow on
it at time t. For a channel (u, v) and time t, let q_{u,v}(t) be the size of the queue at router u, f_{u,v}(t)
be the fraction of incoming packets that are marked at u, x_{u,v}(t) be the total rate of incoming
flow at u, and y_{u,v}(t) be the rate at which transactions are serviced (i.e., forwarded to router v) at
u. All variables are real-valued. We approximate Spider's dynamics via the following system of
equations:
\[
\dot{x}_p(t) = \left[ \frac{x_p(t)}{\sum_{p' \in \mathcal{P}_{i_p, j_p}} x_{p'}(t)} - \sum_{(u,v) \in p} f_{u,v}(t)\, x_p(t) \right]^+_{x_p(t)} \quad \forall p \in \mathcal{P} \tag{12}
\]
\[
\dot{q}_{u,v}(t) = \left[ x_{u,v}(t) - y_{u,v}(t) \right]^+_{q_{u,v}(t)} \quad \forall (u,v) \in E \tag{13}
\]
\[
\dot{f}_{u,v}(t) = \left[ q_{u,v}(t) - q_{\mathrm{thresh}} \right]^+_{f_{u,v}(t)} \quad \forall (u,v) \in E, \tag{14}
\]
where
\[
y_{u,v}(t) = y_{v,u}(t) =
\begin{cases}
\frac{c_{u,v}}{2\Delta} & \text{if } q_{u,v}(t) > 0 \text{ and } q_{v,u}(t) > 0 \\
\min\left\{\frac{c_{u,v}}{2\Delta},\, x_{v,u}(t)\right\} & \text{if } q_{u,v}(t) > 0 \text{ and } q_{v,u}(t) = 0 \\
\min\left\{\frac{c_{u,v}}{2\Delta},\, x_{u,v}(t)\right\} & \text{if } q_{u,v}(t) = 0 \text{ and } q_{v,u}(t) > 0 \\
\min\left\{\frac{c_{u,v}}{2\Delta},\, x_{u,v}(t),\, x_{v,u}(t)\right\} & \text{if } q_{u,v}(t) = 0 \text{ and } q_{v,u}(t) = 0
\end{cases} \tag{15}
\]
for each (u, v) ∈ E. Let i_p and j_p denote the source and destination nodes for path p, respec-
tively. Then, \mathcal{P}_{i_p, j_p} denotes the set of all paths i_p uses to route to j_p. Equation (12) models how the
rate on a path p increases upon receiving successful acknowledgements, or decreases if the packets
are marked, per Equations (10) and (11) in §6.3. If the fraction of packets marked at each router
is small, then the aggregate fraction of packets that return marked on a path p can be approxi-
mated by the sum \sum_{(u,v) \in p} f_{u,v} [53]. Hence the rate at which marked packets arrive for a path p is
\sum_{(u,v) \in p} f_{u,v} x_p. Similarly, the rate at which successful acknowledgements are received on a path p is
x_p (1 - \sum_{(u,v) \in p} f_{u,v}), which can be approximated as simply x_p if the marking fractions are small.
Since Spider increases the window by 1/(\sum_{p' \in \mathcal{P}_{i_p, j_p}} w_{p'}) for each successful acknowledgement
received, the average rate at which x_p increases is x_p / (\sum_{p' \in \mathcal{P}_{i_p, j_p}} x_{p'}). Lastly, the rate x_p cannot
become negative; so if x_p = 0 we disallow \dot{x}_p from being negative. The notation (x)^+_y means x if
y > 0, and max(x, 0) if y = 0.
Equations (13) and (14) model how the queue sizes and fraction of packets marked, respec-
tively, evolve at the routers. For a router u in payment channel (u, v), by definition y_{u,v} is the rate
at which transactions are serviced from the queue q_{u,v}, while transactions arrive at the queue at a
rate of x_{u,v} (Figure 19). Hence the net rate at which q_{u,v} grows is given by the difference x_{u,v} − y_{u,v}.
The fraction of packets marked at a queue grows if the queue size is larger than a threshold q_thresh,
and drops otherwise, as in Equation (14). This approximates the marking model of Spider (§6.2),
in which packets are marked at a router if their queuing delay exceeds a threshold.
To understand how the service rate y_{u,v} evolves (Equation (15)), we first make the approxima-
tion that the rate at which transactions are serviced from the queue at a router u is equal to the
rate at which tokens are replenished at the router, i.e., y_{u,v} = y_{v,u} for all (u, v) ∈ E. The precise
value for y_{u,v} at any time depends on both the arrival rates and current occupancy of the queues at
routers u and v. If both q_{u,v} and q_{v,u} are non-empty, then there is no surplus of tokens available
within the channel. A token, when forwarded by a router, is unavailable for ∆ time units, until its
acknowledgement is received. Therefore the maximum rate at which tokens on the channel can
be forwarded is c_{u,v}/∆, implying y_{u,v} + y_{v,u} = c_{u,v}/∆, or y_{u,v} = y_{v,u} = c_{u,v}/(2∆) in this case. If
q_{u,v} is non-empty and q_{v,u} is empty, then there are no surplus tokens available at u's end. Router
v however may have tokens available, and service transactions at the same rate at which they are
arriving, i.e., y_{v,u} = x_{v,u}. This implies tokens become available at router u at a rate of x_{v,u} and
hence y_{u,v} = x_{v,u}. However, if the transaction arrival rate x_{v,u} is too large at v, it cannot service
them at a rate more than c_{u,v}/(2∆) and a queue would start building up at q_{v,u}. The case where q_{u,v}
is empty and q_{v,u} is non-empty follows by interchanging the variables u and v in the description
above. Lastly, if both q_{u,v} and q_{v,u} are empty, then the service rate y_{u,v} can at most be equal to the
arrival rate x_{v,u}. Similarly y_{v,u} can be at most x_{u,v}. Since y_{u,v} = y_{v,u} by our approximation, we get
the expression in Equation (15).
We have not explicitly modeled delays, and have made simplifying approximations in the fluid
model above. Nevertheless this model is useful for gaining intuition about the first-order behavior
of the Spider protocol. In the following section, we use this model to show that Spider finds optimal
rate allocations for a parallel network topology.
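As a sanity check on this first-order behavior, one can Euler-integrate Eqs. (12)–(15) for a toy network with a single channel and one flow in each direction. The sketch below is our illustration under those assumptions (including a clamp of the marking fraction to [0, 1]), not the simulator's code:

```python
c, delta, q_thresh, dt = 100.0, 1.0, 5.0, 0.001
x = {"uv": 1.0, "vu": 1.0}   # path rates x_p(t), one path per direction
q = {"uv": 0.0, "vu": 0.0}   # queue sizes q_{u,v}(t)
f = {"uv": 0.0, "vu": 0.0}   # marking fractions f_{u,v}(t)

def service_rate():
    """Eq. (15): the common service rate y_{u,v}(t) = y_{v,u}(t)."""
    cap = c / (2 * delta)
    if q["uv"] > 0 and q["vu"] > 0:
        return cap
    if q["uv"] > 0:
        return min(cap, x["vu"])
    if q["vu"] > 0:
        return min(cap, x["uv"])
    return min(cap, x["uv"], x["vu"])

for _ in range(500000):
    y = service_rate()
    for d in ("uv", "vu"):
        # Eq. (12): with one path per pair, the increase term is 1.
        x[d] = max(x[d] + dt * (1.0 - f[d] * x[d]), 0.0)
        # Eq. (13): queue grows at arrival rate minus service rate,
        # projected so an empty queue cannot go negative.
        dq = x[d] - y
        q[d] = q[d] + dt * dq if q[d] > 0 else max(dt * dq, 0.0)
        # Eq. (14): marking grows when the queue exceeds the threshold.
        f[d] = min(max(f[d] + dt * (q[d] - q_thresh), 0.0), 1.0)

print(x)  # both rates should settle near the balanced capacity c/(2*delta)
```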
B.2 Proof of Optimality
Consider a PCN comprising two sets of end-hosts {e_1, ..., e_m} and {e'_1, ..., e'_n} that are con-
nected via k parallel payment channels (r_1, r'_1), ..., (r_k, r'_k) as shown in Figure 20. The end-hosts
from each set have demands to end-hosts on the other set. The end-hosts within a set, however, do
Figure 20: Example of a parallel network topology with bidirectional flows on each payment channel.
not have any demands between them. Let the paths for different source-destination pairs be such
that for each path p, if p contains a directed edge (r_i, r'_i) for some i, then there exists another path
(for a different source-destination pair) that contains the edge (r'_i, r_i). We will show that running
Spider on this network results in rate allocations that are an optimal solution to the optimization
problem in Equations (1)–(5). Under a fluid model for Spider as discussed in §B.1, assuming con-
vergence, we observe that in the steady state the time derivatives of the rate of flow of each path
(Equation (12)) must be non-positive, i.e.,
\[
\frac{1}{\sum_{p' \in \mathcal{P}_{i_p, j_p}} x^*_{p'}} - \sum_{(u,v) \in p} f^*_{u,v}
\begin{cases} = 0 & \text{if } x^*_p > 0 \\ \leq 0 & \text{if } x^*_p = 0 \end{cases}
\quad \forall p \in \mathcal{P}, \tag{16}
\]
where the superscript ∗ denotes values at convergence (e.g., x^*_p is the rate of flow on path p at
convergence). Similarly, the rate of growth of the queues must be non-positive, or
\[
x^*_{u,v}
\begin{cases} = y^*_{u,v} & \text{if } q^*_{u,v} > 0 \\ \leq y^*_{u,v} & \text{if } q^*_{u,v} = 0 \end{cases}
\quad \forall (u,v) \in E. \tag{17}
\]
Now, consider the optimization problem in Equations (1)–(5) for this parallel network. For simplic-
ity we will assume the sender-receiver demands are not constrained. From Equation (17) above,
the transaction arrival rates x^*_{u,v} and x^*_{v,u} for a channel (u, v) satisfy the capacity constraints in
Equation (3). This is because x^*_{u,v} ≤ y^*_{u,v} from Equation (17) and y_{u,v}(t) is at most c_{u,v}/(2∆) from Equa-
tion (15). Similarly the transaction arrival rates also satisfy the balance constraints in Equation (4).
To see this, we first note that the queues on all payment channels through which a path (cor-
responding to a sender-receiver pair) passes must be non-empty. For otherwise, if a queue q^*_{u,v} is
empty then the fraction of marked packets on a path p through (u, v) goes to 0, and the rate of flow
x^*_p would increase as per Equation (12). Therefore we have x^*_{u,v} = y^*_{u,v} (from Equation (17)) for
every channel. Combining this with y_{u,v}(t) = y_{v,u}(t) (Equation (15)), we conclude that the arrival
rates are balanced on all channels. Thus the equilibrium rates {x^*_p : p ∈ \mathcal{P}} resulting from Spider
are in the feasible set for the routing optimization problem.
Next, let λ_{u,v} ≥ 0 and µ_{u,v} ∈ R be the dual variables corresponding to the capacity and balance
constraints, respectively, for a channel (u, v). Consider the following mapping from f^*_{u,v} to λ_{u,v}
and µ_{u,v}:
\[
\lambda^*_{u,v} \leftarrow (f^*_{u,v} + f^*_{v,u})/2 \quad \forall (u, v) \in E \tag{18}
\]
\[
\mu^*_{u,v} \leftarrow f^*_{u,v}/2 \quad \forall (u, v) \in E, \tag{19}
\]
where the superscript ∗ on the dual variables indicates that they have been derived from the equi-
librium states of the Spider protocol. Since f_{u,v}(t) is always non-negative (Equation (14)), we see
that λ^*_{u,v} ≥ 0 for all (u, v). Therefore {λ^*_{u,v} : (u, v) ∈ E} and {µ^*_{u,v} : (u, v) ∈ E} are in the
feasible set of the dual of the routing optimization problem.
Next, we have argued previously that the queues on all payment channels through which a
path (corresponding to a sender-receiver pair) passes must be non-empty. While we used this
observation to show that the channel rates x^*_{u,v} are balanced, it also implies that the rates are at
capacity, i.e., x^*_{u,v} = c_{u,v}/(2∆), or x^*_{u,v} + x^*_{v,u} = c_{u,v}/∆ for all (u, v). This directly follows from
Equation (17) and the first sub-case in Equation (15). It follows that the primal variables {x^*_p :
p ∈ \mathcal{P}} and the dual variables {λ^*_{u,v} : (u, v) ∈ E}, {µ^*_{u,v} : (u, v) ∈ E} satisfy the complementary
slackness conditions of the optimization problem.
Last, the optimality condition for the primal variables on the Lagrangian defined with dual
variables {λ^*_{u,v} : (u, v) ∈ E} and {µ^*_{u,v} : (u, v) ∈ E} stipulates that
\[
\frac{1}{\sum_{p' \in \mathcal{P}_{i_p, j_p}} x_{p'}} - \sum_{(u,v) \in p} (\lambda^*_{u,v} + \mu^*_{u,v} - \mu^*_{v,u})
\begin{cases} = 0 & \text{if } x_p > 0 \\ \leq 0 & \text{if } x_p = 0, \end{cases} \tag{20}
\]
for all p ∈ \mathcal{P}. However, note that for any path p,
\[
\sum_{(u,v) \in p} (\lambda^*_{u,v} + \mu^*_{u,v} - \mu^*_{v,u})
= \sum_{(u,v) \in p} \left( \frac{f^*_{u,v} + f^*_{v,u}}{2} + \frac{f^*_{u,v}}{2} - \frac{f^*_{v,u}}{2} \right)
= \sum_{(u,v) \in p} f^*_{u,v}, \tag{21}
\]
where the first equality above follows from our mapping for λ^*_{u,v} and µ^*_{u,v} in Equations (18), (19).
Combining this with Equation (16), we see that x_p ← x^*_p for all p ∈ \mathcal{P} is a valid solution to
Equation (20). Hence we conclude that {x^*_p : p ∈ \mathcal{P}} and {λ^*_{u,v} : (u, v) ∈ E}, {µ^*_{u,v} : (u, v) ∈ E}
are optimal primal and dual variables, respectively, for the optimization problem. The equilibrium
rates found by Spider for the parallel network topology are thus optimal.
C Primal-Dual Algorithm Derivation
In this section, we present a formal derivation of the decentralized algorithm for computing the
optimum solution of the fluid-model LP (Eq. (1)–(5)). Consider the partial Lagrangian of the LP:
\[
L(x, \lambda, \mu) = \sum_{i,j \in V} \sum_{p \in \mathcal{P}_{i,j}} x_p
- \sum_{(u,v) \in E} \lambda_{(u,v)} \Bigg( \sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p + \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'} - \frac{c_{(u,v)}}{\Delta} \Bigg)
- \sum_{(u,v) \in E} \mu_{(u,v)} \Bigg( \sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p - \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'} \Bigg)
- \sum_{(u,v) \in E} \mu_{(v,u)} \Bigg( \sum_{\substack{p \in \mathcal{P}: \\ (v,u) \in p}} x_p - \sum_{\substack{p' \in \mathcal{P}: \\ (u,v) \in p'}} x_{p'} \Bigg), \tag{22}
\]
where µ_{(u,v)}, µ_{(v,u)} are Lagrange variables corresponding to the imbalance constraints (Eq. (4)) in
the u-v and v-u directions respectively. λ_{(u,v)} is a Lagrange variable corresponding to the capacity
constraint (Eq. (3)). Since the λ variable does not have a direction associated with it, to simplify
notation we use λ_{(v,u)} and λ_{(u,v)} interchangeably to denote λ_{(u,v)} for channel (u, v) ∈ E. The
partial Lagrangian can be rewritten as
\[
L(x, \lambda, \mu) = \sum_{i,j \in V} \sum_{p \in \mathcal{P}_{i,j}} x_p \Bigg( 1 - \sum_{(u,v) \in p} \lambda_{(u,v)} - \sum_{(u,v) \in p} \mu_{(u,v)} + \sum_{(v,u) \in p} \mu_{(v,u)} \Bigg)
+ \sum_{(u,v) \in E} \lambda_{(u,v)} \frac{c_{(u,v)}}{\Delta}. \tag{23}
\]
Define z_{(u,v)} = λ_{(u,v)} + µ_{(u,v)} − µ_{(v,u)} to denote the price of channel (u, v) ∈ E in the u-v direction,
and z_p = \sum_{(u,v): (u,v) \in p} λ_{(u,v)} + µ_{(u,v)} − µ_{(v,u)} to be the total price of a path p. The partial La-
grangian above decomposes into separate terms for the rate variables of each source/destination pair
{x_p : p ∈ \mathcal{P}}. This suggests the following iterative primal-dual algorithm for solving the LP:
• Primal step. Suppose the path price of a path p at time t is z_p(t). Then, each sender-receiver
pair (i, j) updates its rates x_p on each path p ∈ \mathcal{P}_{i,j} as
\[
x_p(t+1) = x_p(t) + \alpha (1 - z_p(t)) \tag{24}
\]
\[
x_p(t+1) = \mathrm{Proj}_{\chi_{i,j}}\big(x_p(t+1)\big), \tag{25}
\]
where Proj is a projection operation onto the convex set {x_p : \sum_{p \in \mathcal{P}_{i,j}} x_p ≤ d_{i,j}, x_p ≥ 0 ∀p},
to ensure the rates are feasible.
Figure 21: Routers queue transaction units and schedule them across the payment channel
based on available capacity and transaction priorities. Funds received on a payment channel remain in a
pending state until the final receiver provides the key for the hashlock.
• Dual step. Similarly, for the dual step let x_p(t) denote the flow rate along path p at time t and
\[
w_{(u,v)}(t) = \sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p(t) + \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'}(t) - \frac{c_{(u,v)}}{\Delta} \tag{26}
\]
\[
y_{(u,v)}(t) = \sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p(t) - \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'}(t) \tag{27}
\]
be the slack in the capacity and balance constraints respectively for a payment channel (u, v).
Then, each channel (u, v) ∈ E updates its prices as
\[
\lambda_{(u,v)}(t+1) = \big[\lambda_{(u,v)}(t) + \eta\, w_{(u,v)}(t)\big]^+ \tag{28}
\]
\[
\mu_{(u,v)}(t+1) = \big[\mu_{(u,v)}(t) + \kappa\, y_{(u,v)}(t)\big]^+ \tag{29}
\]
\[
\mu_{(v,u)}(t+1) = \big[\mu_{(v,u)}(t) - \kappa\, y_{(u,v)}(t)\big]^+. \tag{30}
\]
The parameters α, η, κ are positive "step size" constants that determine the rate at which the
algorithm converges. Using standard arguments we can show that for small enough step sizes, the
algorithm would converge to the optimal solution of the LP in Eq. (1)–(5).
The algorithm has the following intuitive interpretation. λ_{(u,v)} and µ_{(u,v)}, µ_{(v,u)} are prices that
vary due to capacity constraints and imbalance at the payment channels. In Eq. (28), λ_{(u,v)} would
increase if the total rate on channel (u, v) (in both directions) exceeds its capacity, and would de-
crease to 0 if there is excess capacity. Similarly, µ_{(u,v)} would increase (resp. decrease) if the net
rate in the u-v direction is greater (resp. less) than the net rate in the v-u direction (Eq. (29), (30)).
As the prices vary, an end-host with a flow on path p would react according to Eq. (24) by increas-
ing its sending rate x_p if the total price of the path p is cheap, and decreasing the rate otherwise.
The net effect is the convergence of the rate and price variables to values such that the overall
throughput of the network is maximized. We remark that the objective of our optimization prob-
lem in Eq. (1) can be modified to also ensure fairness in routing, by associating an appropriate
utility function with each sender-receiver pair [39]. A decentralized algorithm for such a case may
be derived analogously to our proposed solution.
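A numeric toy version of these updates for a single channel with one flow in each direction, under assumed step sizes, is sketched below; it is an illustration of Eqs. (24)–(30), not the paper's implementation:

```python
alpha, eta, kappa = 0.01, 0.002, 0.002   # assumed step sizes
c, delta, d_ij = 10.0, 1.0, 8.0          # channel size, delay, demand cap
x = {"uv": 0.0, "vu": 0.0}               # one path per sender-receiver pair
lam = 0.0                                 # capacity price, Eq. (28)
mu = {"uv": 0.0, "vu": 0.0}              # imbalance prices, Eqs. (29)-(30)

for _ in range(200000):
    for d, r in (("uv", "vu"), ("vu", "uv")):
        z = lam + mu[d] - mu[r]                     # path price z_p
        # Primal step, Eqs. (24)-(25): gradient step, then projection
        # onto the feasible set [0, d_ij].
        x[d] = min(max(x[d] + alpha * (1.0 - z), 0.0), d_ij)
    w = x["uv"] + x["vu"] - c / delta               # capacity slack, Eq. (26)
    y = x["uv"] - x["vu"]                           # imbalance, Eq. (27)
    lam = max(lam + eta * w, 0.0)                   # Eq. (28)
    mu["uv"] = max(mu["uv"] + kappa * y, 0.0)       # Eq. (29)
    mu["vu"] = max(mu["vu"] - kappa * y, 0.0)       # Eq. (30)

# With demand 8 each way and c/delta = 10, the rates should settle near
# the balanced, capacity-limited optimum x_uv = x_vu = 5.
print(x, lam)
```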
D Estimating the Demand-Capacity Gap at the Routers
In this section, we explain how Spider estimates the total amount of demand on a channel at any
time, for updating the capacity price λ in Eq. (7). From the description of the primal-dual algorithm
for the fluid model in App. C, we see that updating λ_{(u,v)} at a channel (u, v) ∈ E requires estimating
\[
\sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p(t) + \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'}(t) - \frac{c_{(u,v)}}{\Delta} \tag{31}
\]
at the channel (Eq. (28)). While the total rate at which transactions are arriving at u (\sum_{p \in \mathcal{P}: (u,v) \in p} x_p(t))
and at v (\sum_{p' \in \mathcal{P}: (v,u) \in p'} x_{p'}(t)) are straightforward to estimate, estimating ∆—the average time
taken for transactions to reach their destination from the channel, and for their hashlock keys to
arrive at the channel—is difficult. In Spider, we overcome this problem by estimating the quantity
\[
\sum_{\substack{p \in \mathcal{P}: \\ (u,v) \in p}} x_p(t)\,\Delta + \sum_{\substack{p' \in \mathcal{P}: \\ (v,u) \in p'}} x_{p'}(t)\,\Delta - c_{(u,v)}, \tag{32}
\]
instead of trying to estimate the expression in Eq. (31). Eq. (32) is simply a scaling of Eq. (31), but
can be estimated without having to first estimate ∆. To see this, let x̃_u(t) = \sum_{p \in \mathcal{P}: (u,v) \in p} x_p(t) and
x̃_v(t) = \sum_{p' \in \mathcal{P}: (v,u) \in p'} x_{p'}(t) denote the rate of transaction arrival at u and v respectively. Similarly,
let ỹ_u(t) and ỹ_v(t) be the rate at which transactions are serviced from the queue at each of the
routers (see Fig. 21 for an illustration). Eq. (32) can now be rewritten as
\[
\tilde{x}_u(t)\Delta + \tilde{x}_v(t)\Delta - c_{(u,v)}
= \frac{\tilde{x}_u(t)}{\tilde{y}_u(t)}\, \tilde{y}_u(t)\Delta + \frac{\tilde{x}_v(t)}{\tilde{y}_v(t)}\, \tilde{y}_v(t)\Delta - c_{(u,v)} \tag{33}
\]
\[
= \frac{\tilde{x}_u(t)}{\tilde{y}_u(t)}\, i_u(t) + \frac{\tilde{x}_v(t)}{\tilde{y}_v(t)}\, i_v(t) - c_{(u,v)}, \tag{34}
\]
where i_u(t) and i_v(t) are the amount of funds that are currently locked at routers v and u respec-
tively (Fig. 21). Since the funds used when servicing transactions at router u require ∆ seconds
on average to become available at v, by Little's law the product of the average service rate ỹ_u(t)
and average delay ∆ is equal to the average amount of pending transactions i_u(t) at v. Thus,
Eq. (34) follows from Eq. (33). However, each of the terms in Eq. (34)—the transaction arrival
rates x̃_u(t), x̃_v(t), service rates ỹ_u(t), ỹ_v(t), and amount of pending transactions i_u(t), i_v(t)—can now
be readily estimated at the channel.
Intuitively, since i_u(t) is the amount of pending funds at router v when transactions are be-
ing serviced at a rate ỹ_u(t), x̃_u(t) i_u(t)/ỹ_u(t) is an estimate of the amount of transactions that will
be pending if transactions were serviced at a rate x̃_u(t). As the total amount of pending trans-
actions in the channel cannot exceed the total amount of funds escrowed c_{(u,v)}, the difference
x̃_u(t) i_u(t)/ỹ_u(t) + x̃_v(t) i_v(t)/ỹ_v(t) − c_{(u,v)} is exactly the additional amount of funds required in
the channel to support the current rates of transaction arrival. Denoting x̃_u(t) i_u(t)/ỹ_u(t) as m_u(t)
and x̃_v(t) i_v(t)/ỹ_v(t) as m_v(t), the equation for updating λ at the routers can be written as
\[
\lambda_{(u,v)}(t+1) = \big[\lambda_{(u,v)}(t) + \eta\big(m_u(t) + m_v(t) - c_{(u,v)} + \beta \min(q_u(t), q_v(t))\big)\big]^+, \tag{35}
\]
where the β min(q_u(t), q_v(t)) term has been included to ensure the queue sizes are small, as dis-
cussed in §6.3.
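In code, the resulting router-side update is only a few lines; the variable names for the locally measured quantities in the sketch below are our assumptions:

```python
def update_capacity_price(lam, x_u, y_u, i_u, x_v, y_v, i_v,
                          c_uv, q_u, q_v, eta=1e-3, beta=1e-2):
    """One step of Eq. (35). x_*: arrival rates; y_*: service rates;
    i_*: pending (locked) funds; q_*: queue sizes; c_uv: channel size.
    eta and beta are assumed step-size constants."""
    eps = 1e-9                       # guard against idle periods
    m_u = x_u * i_u / max(y_u, eps)  # demand estimate at u, via Little's law
    m_v = x_v * i_v / max(y_v, eps)  # demand estimate at v
    gap = m_u + m_v - c_uv           # demand-capacity gap of Eq. (34)
    return max(lam + eta * (gap + beta * min(q_u, q_v)), 0.0)
```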
Figure 22: Spider’s performance relative to Celer on a 10 node scale free topology. Spider achieves a 2x
improvement in success ratio even at Celer’s peak performance. Celer’s performance dips after a peak since
it maintains larger queues at higher capacities, eventually causing timeouts.
E Additional Results
E.1 Comparison with Celer
We run five circulation traffic matrices for 610s on a scale-free topology with 10 nodes and 25 edges
to compare Spider to Celer [11], a back-pressure based routing scheme. Each node sends 30 txns/s
and we vary the mean channel size from 200€ to 6400€. We measure the average success ratio
and success volume for transactions in the 400-600s interval and observe that Spider outperforms
Celer at all channel sizes. Celer splits transactions into transaction-units at the source but does not
source-route individual transaction-units. Instead, transaction-units for a destination are queued at
individual routers and forwarded on the link with the maximum queue and imbalance gradient for
that destination. This approach tries to maximize transaction-units in queues to improve network
utilization. However, queued-up and in-flight units in PCNs hold up tokens in other parts of the
network while they await acknowledgements, reducing the network's capacity. Celer trans-
actions also use long paths, sometimes up to 18 edges in this network with 25 edges. Consequently,
tokens in Celer spend a few seconds in-flight, in contrast to the hundreds of milliseconds with Spider.
The time tokens spend in-flight also increases with channel size since Celer tries to maintain larger
queues. Celer's performance dips once the in-flight time has increased to the point where transac-
tions start timing out before they can be completed. Due to computational constraints associated
with large queues, we do not run Celer on larger topologies.
E.2 Circulations on Synthetic Topologies
We run five circulation traffic matrices for 1010s on our three topologies with all channels having
exactly the tokens denoted by the channel size. Fig. 23 shows that across all topologies, Spider
outperforms the state-of-the-art schemes on success ratio. Spider is able to successfully route more
than 95% of the transactions with less than 25% of the capacity required by LND. Further, Fig. 24
shows that Spider completes nearly 50% more of the largest 12.5% of the transactions attempted in
the PCN across all three topologies. Even the waterfilling heuristic outperforms LND by 15-20%
depending on the topology.
Figure 23: Performance of different algorithms on different topologies with equal channel sizes with
different per sender transaction arrival rates. Spider consistently outperforms all other schemes,
achieving near 100% average success ratio. Error-bars denote the maximum and minimum success ratio across five
runs. Note the log scale of the x-axes.
Figure 24: Breakdown of performance of different schemes by size of transactions completed. Each point
reports the success ratio for transactions whose size belongs to the interval denoted by the shaded region.
Each interval corresponds roughly to 12.5% of the CDF denoted in Fig. 7a. The graphs correspond to the
(right) midpoints of the corresponding Lightning sampled channel sizes in Fig. 9.
E.3 Fairness of Schemes
In §7.3, we show that Spider outperforms state-of-the-art schemes on the success ratio achieved
for a given channel capacity. Here, we break down the success volume by flows (sender-receiver
pairs) to understand the fairness of the scheme to different pairs of nodes transacting on the PCN.
Fig. 25 shows a CDF of the absolute throughput in €/s achieved by different protocols on a single
circulation demand matrix when each sender sends an average of 30 tx/s. The mean channel
sizes for the synthetic topologies and the real topologies with LCSD channel sizes are 4000€
and 16880€ respectively. We run each protocol for 1010s and measure the success volume for
transactions arriving between 800-1000s. We make two observations: (a) Spider achieves close to
100% throughput in all three scenarios, and (b) Spider is fairer to small flows (its curve is the most
vertical) and doesn't hurt the smallest flows just to benefit on throughput. This is not as true for LND.
E.4 DAG Workload on Synthetic Topologies
Fig. 26 shows the effect of adding a DAG component to the transaction demand matrix on the
synthetic small world and scale free topologies. We observe the success ratio and normalized
throughput of different schemes with five different traffic matrices with 30 transactions per second
Figure 25: CDF of normalized throughput achieved by different flows under different schemes across
topologies. Spider achieves close to 100% throughput, given its proximity to the black demand line. Spider's
curve is more vertical than LND's because it is fairer: it doesn't hurt the throughput of smaller flows to attain
good overall throughput.
per sender under 5%, 20%, 40% DAG components respectively. No scheme is able to achieve the
maximum throughput. However, the achieved throughput is closer to the maximum when there is
a smaller component of DAG in the demand matrix. This suggests again that the DAG affects PCN
balances in a way that also prevents the circulation from going through. We investigate what could
have caused this, and how proactive on-chain rebalancing could alleviate it, in §7.4.
Fig. 27 shows the success ratio and normalized throughput achieved by different schemes when
rebalancing is enabled for the 20% DAG traffic demand from Fig. 26. Spider is able to achieve
over 95% success ratio and 90% normalized throughput even when its routers rebalance only every
10,000€ routed, while LND is never able to sustain more than 75% success ratio even when rebalancing
for every 10€ routed. This implies that Spider makes PCNs more economically viable for both
routers locking up funds in payment channels and end-users routing via them, since they need far
fewer on-chain rebalancing events to sustain high throughput and earn routing fees.
Figure 26: Performance of different algorithms across all topologies as the DAG component in the trans-
action demand matrix is varied. As the DAG amount is increased, the normalized throughput achieved is
further away from the expected optimal circulation throughput. The gap is more pronounced on the real
topology.
Figure 27: Performance of different algorithms on the synthetic topologies when on-chain rebalancing
is enabled for the traffic demand with 20% DAG. Spider sustains high success ratio and normalized
throughput with far less frequent rebalancing than the other schemes.