Available via license: CC BY 4.0
A Survey on Consortium Blockchain Consensus
Mechanisms
Wei Yao1, Junyi Ye1, Renita Murimi2, and Guiling Wang1
1New Jersey Institute of Technology
2University of Dallas
Abstract. Blockchain is a distributed ledger that is decentralized, im-
mutable, and transparent, and that maintains a continuously growing list
of transaction records ordered into blocks. As the core of blockchain,
the consensus algorithm is the mechanism by which nodes agree on the
correctness of blockchain transactions. For example, Bitcoin is a public blockchain
where each node in Bitcoin uses the Proof of Work (PoW) algorithm
to reach a consensus by competing to solve a puzzle. Unlike a public
blockchain, a consortium blockchain is an enterprise-level blockchain
whose participants are known and permissioned, so it does not contend
with the problem of designing a resource-saving global consensus proto-
col. This paper highlights several state-of-the-art solutions in consensus
algorithms for enterprise blockchain. For example, Hyperledger by the
Linux Foundation implements Practical Byzantine Fault Tolerance
(PBFT) as a consensus algorithm. PBFT can tolerate up to one-third of
nodes being malicious and reaches consensus with quadratic communica-
tion complexity. Another consensus algorithm, HotStuff, adopted by
Facebook’s Libra project, achieves linear authenticator complexity. This
paper presents the operational mechanisms of these and other consensus
protocols, and analyzes and compares their advantages and drawbacks.
1 Introduction
In 2008, Satoshi Nakamoto first proposed Bitcoin [1] and ushered in a
new chapter for digital currency. The blockchain technology that forms
the foundation of digital currency has continued to receive worldwide in-
terest, and blockchain applications now span a spectrum of use cases
ranging from agriculture and sports to education and government [2]. At the
heart of blockchain lies the consensus algorithm, where all nodes on the
public ledger reach consensus in a distributed, untrusted environment.
Thus, the consensus mechanism fundamentally determines the security,
availability, and system performance of the entire blockchain system. The
study of consensus mechanisms in the blockchain is of great significance
to the scalability of the blockchain, since it determines the transaction
processing speed and the security of the blockchain. The consensus mecha-
nism, then, is of fundamental significance in the widespread adoption
and consequent success of blockchain applications.
arXiv:2102.12058v1 [cs.DS] 24 Feb 2021
Since the first whitepaper describing Nakamoto’s vision for Bitcoin
was published in 2008, several variants of cryptocurrencies have been re-
leased. Notable among them is Ethereum [3] which introduced the concept
of a smart contract. Smart contracts, which express contract terms as
code on the blockchain, allow Ethereum to serve as a platform for pro-
grammable transactions. While Ethereum and Bitcoin have several notable differ-
ences in their architectures, one common aspect of Ethereum and Bitcoin
is that they are both public blockchains since any node can join these
networks and partake in the network activity.
In 2015, the Linux Foundation initiated an open-source blockchain
project called the Hyperledger project [4]. While Bitcoin and Ethereum
are open to the public without any authentication mechanism, Hyper-
ledger is not a public blockchain. Instead, Hyperledger belongs to a class
of blockchain solutions called enterprise blockchain, which is specifically
designed for enterprise-level applications. Enterprise blockchain provides
roles and permissions for each member who participates in the blockchain.
Moreover, Hyperledger eliminates the incentive mechanism found in
Bitcoin mining to reduce energy consumption and achieve better perfor-
mance. As blockchain technology has developed, more and more enterprise-
level users have begun to consider using blockchain to meet their busi-
ness needs. For example, Walmart has implemented transparency in their
food supply chain with Hyperledger Fabric, CULedger has instituted
fraud protection for credit unions with Hyperledger Indy, and Hyper-
ledger Sawtooth integrates with Kubernetes to simplify enterprise block-
chain adoption [5, 6, 7]. Therefore, the exploration of effective consensus pro-
tocols for use in consortium blockchains has developed into a research
problem of emerging significance.
The release of Facebook’s Libra project white paper in 2019 [8] led to
a new round of cryptocurrency interest, attracting widespread attention
from investors and researchers in blockchain. Among the
various applications of blockchain technology in the public and private
sectors, one notable application is that of digital governance. In what is
touted as Web 3.0, countries around the world have ventured to seize the
opportunity of a new round of information revolution using blockchain.
The use of blockchain technologies has accelerated the pace of indus-
trial innovation and development, and in the process has introduced and
modified long-standing policies, laws, and public perceptions of blockchain
technology.
The rest of this paper is structured as follows. Section 2 provides
an overview of blockchain technology. Section 3 introduces different fam-
ilies of consensus protocols and illustrates two Crash Fault Tolerance
(CFT)-based consensus mechanisms. Section 4 addresses variants of the
Byzantine Fault Tolerance (BFT)-based consensus algorithm in consor-
tium blockchains. Finally, Section 5 concludes the paper and presents
directions for future work.
2 Blockchain overview
The goal of the consensus protocol in blockchain technology is to achieve
consistency of nodes participating in the distributed ledger. The nomen-
clature of blockchain is derived from its architecture; each block is linked
cryptographically to the previous block. Generally speaking, the first
block of the blockchain is called the genesis block, and each block contains
a set of transactions generated in the network at a given time.
Blockchain has the following characteristics: decentralization, trust-
lessness, openness, immutability, and anonymity. First, decentralization
refers to the absence of a central trusted third party in the network,
unlike those found in centralized transactions. Examples of centralized
environments include governments, banks, or other financial institutions
which serve to regulate various aspects of interactions between entities.
Trustlessness denotes the lack of formal social constructs for nodes to es-
tablish trust based on prior history, familiarity, or a guarantee from a third
party. Instead, trust is established through consensus on the ledger. Third,
blockchain enables openness and transparency. In public blockchains such
as Bitcoin, which are also called permissionless blockchains, all nodes can
join and exit at any time, and nodes can obtain the historical ledger data
of the blockchain at any time ranging back to the genesis block. The
fourth defining characteristic of blockchain is the blockchain’s immutabil-
ity which ensures that it is tamper-proof. An example of a tamper-proof
implementation is illustrated through Bitcoin’s depth constraints. In Bit-
coin, when the “depth” of a block exceeds 6, it is established that the
content of the block will not be tampered with [9]. Finally, blockchains en-
sure some degree of anonymity. Although Bitcoin is not completely anony-
mous, privacy-protection technologies, such as group signatures, ring sig-
natures, and zero-knowledge proofs implemented in other blockchain so-
lutions [10] can effectively increase user privacy on the blockchain.
2.1 Model and Definition
State Machine Replication State machine replication (SMR) refers to
a set of distributed nodes that can process and respond to requests from
a client, where the client can be software or a user. The nodes jointly
maintain a linearly growing log, with each node agreeing on the content
of the log [11].
In the SMR model, there is a primary node, and the other nodes
are called backups or replicas. The primary node’s identity can change.
State machine replication is fault-tolerant, allowing a certain percentage
of nodes to fail or suffer from adversary attacks within a tolerable range.
SMR needs to satisfy two essential security properties.
1. Consistency. All honest nodes end up with the same logs in their
output.
2. Liveness. A transaction received by an honest node appears in the
logs of all honest nodes after a specific time.
Adversary model In cryptography terminology, an adversary repre-
sents a malicious entity that aims to prevent non-malicious entities from
achieving their goal [12]. An adversary model is a model that imposes a
specific limit on the percentage of computing power or property that an
adversary can hold, generally represented by f for the number of adver-
saries and n for the total number of nodes in the network. For example,
if a BFT algorithm’s adversary model is n = 3f + 1, it implies that if the
algorithm can tolerate f faulty replicas, the system requires a minimum
of n = 3f + 1 replicas.
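These bounds can be computed directly. The following sketch (the function names are illustrative, not from the survey) returns the maximum number of faulty replicas tolerable under the two adversary models discussed in this paper:

```python
def bft_max_faults(n: int) -> int:
    """Largest f such that n >= 3f + 1 (Byzantine fault tolerance)."""
    return (n - 1) // 3

def cft_max_faults(n: int) -> int:
    """Largest f such that n >= 2f + 1 (crash fault tolerance)."""
    return (n - 1) // 2

# A 4-replica BFT cluster tolerates 1 Byzantine node; a 3-replica
# CFT cluster (e.g., Raft) tolerates 1 crashed node.
print(bft_max_faults(4), cft_max_faults(3))  # 1 1
```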
2.2 Blockchain Architecture
The basic framework of the blockchain is shown in Figure 1. The frame-
work comprises the infrastructure layer, the network layer, the data
layer, the consensus layer, and the application layer. In the core frame-
work, the data layer includes the data blocks, the chain structure, and
the cryptographical mechanisms that are the essential components of the
blockchain [13]. The data layer is responsible for blockchain transactions
and implementation mechanisms, as well as related technologies for block
propagation verification. The consensus layer is mainly a consensus mech-
anism represented by algorithms such as Proof of Work (PoW) used in
Bitcoin, and Proof of Stake (PoS) used in Ethereum. In the applica-
tion layer, various application scenarios and cases represented by pro-
grammable assets such as currencies and financial instruments, various
script codes and smart contracts are encapsulated.
Fig. 1. Blockchain Architecture
Infrastructure Layer The infrastructure layer contains the hardware,
network architecture equipment, and deployment environment for a
blockchain system, such as virtual machines and Docker containers.
Network Layer The blockchain’s network layer includes the blockchain
system’s node organization method and propagation verification mecha-
nisms for transactions and blocks. A newly generated block can only
be recorded in the blockchain after it has passed verification.
Blockchains use P2P networks and are connected via a flat topology.
Network nodes generally have the characteristics of equality, distribution,
and autonomy. Each node in the P2P network undertakes node discovery,
connection establishment, block synchronization, transaction broadcast-
ing and verification, and block propagation and verification. After the new
node is connected to the network, it establishes reliable connections to
other nodes through the Transmission Control Protocol (TCP) three-way
handshake. Once the connection is established, the new node continuously
receives broadcast messages from connected nodes and stores the address
information of previously unknown nodes contained in those messages.
Since the broadcast message from a node includes the informa-
tion of all its connected nodes, eventually the new node can establish
connections with all nodes in the blockchain [14]. With the establishment
of the connection, the new node also synchronizes the block information
from connected nodes. It can then start to work as a fully functional node
to submit and verify transactions once the information of all blocks has
been synchronized to the new node [14].
When a new block is successfully generated, the node that generated
the block will broadcast the block to other nodes in the network for
verification. After a node receives the new block information, it verifies
the block through a list of criteria. For instance, some of the criteria used
in the verification process of a block in Bitcoin include the block hash,
block timestamp, hash of the previous block and hash of the Merkle Root
[15]. If the block is verified to be invalid, it will be rejected. Otherwise,
the new block will be appended to its preceding block on the
chain.
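The verification step can be sketched as follows. This is a toy model, not Bitcoin's actual implementation: the header is string-encoded, the difficulty target is a hex prefix, and all names are hypothetical.

```python
import hashlib
import time
from dataclasses import dataclass

@dataclass
class Block:
    prev_hash: str
    merkle_root: str
    timestamp: int
    nonce: int

def block_hash(block: Block) -> str:
    """Double-SHA256 over a simplified, string-encoded header."""
    header = f"{block.prev_hash}{block.merkle_root}{block.timestamp}{block.nonce}"
    return hashlib.sha256(hashlib.sha256(header.encode()).digest()).hexdigest()

def validate_block(block: Block, tip_hash: str, expected_root: str,
                   target_prefix: str = "0") -> bool:
    """Check an illustrative subset of the criteria listed above."""
    if block.prev_hash != tip_hash:
        return False                        # must link to the current tip
    if block.merkle_root != expected_root:
        return False                        # transactions must match the root
    if block.timestamp > int(time.time()) + 2 * 60 * 60:
        return False                        # reject far-future timestamps
    return block_hash(block).startswith(target_prefix)  # toy difficulty check
```

In Bitcoin itself the header is an 80-byte binary structure and the difficulty check compares the hash against a full 256-bit target rather than a string prefix.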
From the network layer’s design principles, it is clear that blockchain
is a typical distributed big-data technology. The entire network’s data is
stored on completely decentralized nodes. Even if some nodes fail, as long
as there is still a functioning node, the data stored in the blockchain can
be fully recovered without affecting the subsequent blocks. The difference
between this blockchain model and the cloud storage model is that the
former is an entirely decentralized storage model with a high degree of
redundancy, while the latter is based on a centralized structure with
multiple storage and data backup functionalities.
Data Layer The data in this layer is recorded through the blockchain
structure, as shown in Figure 2. The data layer realizes the requirements
of traceability and non-tampering. Any data in the blockchain system can
be tracked through this chain ledger [16].
For example, in Bitcoin, each data block comprises a block header
and a block body containing the packaged transactions, as shown in Figure
3. The block header contains information such as the current system
version number, the hash value of the previous block, the difficulty target
of the current block, the random number (nonce), the root of the Merkle tree of
Fig. 2. An example of chain structure in blockchain [17]
the block transactions, and the timestamp [1]. The block body includes
many verified transactions and a complete Merkle tree composed of these
transactions [18]. The Merkle tree is a binary tree, where the bottom
layer corresponds to the content of the leaf node. Each leaf node is the
hash value of the corresponding data. Two neighboring leaves are con-
catenated and hashed to form the content of the upper-level node. Ap-
plying these computations recursively yields the content of the
root node. Based on the Merkle tree’s particular data structure, any data
modification that happens in a leaf node will be passed to its parent
node and will propagate all the way to the root of the tree. The data in
the block body constitutes the central part of the blockchain ledger. The
Merkle tree formed by these transactions generates a unique Merkle root,
which is stored in the block header. The block header data is double-SHA256
hashed to get the hash value of the block [19].
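The Merkle root computation described above can be sketched as follows, using Bitcoin's double-SHA256 and its convention of duplicating the last hash at odd-sized levels (the function names are illustrative):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaf_hashes: list) -> bytes:
    """Fold a list of leaf hashes up to a single Merkle root."""
    if not leaf_hashes:
        raise ValueError("at least one transaction hash is required")
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])          # duplicate the last node
        # hash each adjacent pair to form the parent level
        level = [double_sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# The leaves are themselves double-SHA256 hashes of the raw transactions.
leaves = [double_sha256(tx) for tx in (b"tx-a", b"tx-b", b"tx-c")]
print(merkle_root(leaves).hex())
```

Because any change to a leaf changes its hash, the change propagates pair by pair up to the root, which is why a single 32-byte root commits to the whole transaction set.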
Consensus Layer The consensus layer forms the core of the consensus
process used to determine the validity of the block data by the highly
decentralized nodes. The main consensus mechanisms are Proof of Work
(POW), Proof of Stake (POS), Delegated Proof of Stake (DPOS), and
Practical Byzantine Fault Tolerance (PBFT), which have been the back-
bone of scalable solutions in blockchain technology. An economic incen-
tive model encouraging nodes to participate in the blockchain’s security
verification work is used only in incentive-based algorithms such as PoW.
Fig. 3. An example of a block in Bitcoin [20]
Application Layer The application layer encapsulates various script
codes, smart contracts, Decentralized Applications (DApps) and Appli-
cation Programming Interfaces (APIs).
1. Script. A script is essentially a list of instructions attached to
a Bitcoin transaction. Bitcoin uses a simple, stack-based, left-to-right
scripting language. Bitcoin transactions are verified through two scripts:
locking script and unlocking script. The locking script specifies the
conditions for spending the output of this transaction, and the out-
put of this transaction can only be spent if the conditions of the lock-
ing script are met. The unlocking script is the counterpart of the locking
script: it supplies the data that satisfies those spending conditions. If a node
receives transaction data, it runs locking scripts and unlocking scripts
to check whether the transaction is valid, before accepting it [1]. The
locking and unlocking scripts provide flexible transaction control
in Bitcoin. The Bitcoin script system does not have complex loops and
flow control, and it is not Turing-complete. A Turing-complete sys-
tem can express arbitrary computation, with no built-in limit on
execution time or memory usage.
The Bitcoin script is not Turing-complete because it cannot execute
loops or recursion. To improve the flexibility and scalability of the scripting
system,
Ethereum proposes a Turing-complete scripting language, which al-
lows users to construct smart contracts and decentralized applications
flexibly based on Ethereum [21].
2. Smart Contract. The emergence of blockchain technology has re-
defined smart contracts. Smart contracts are event-driven, stateful
computer programs running on a replicable shared blockchain data
ledger that can actively or passively process data and manage various
on-chain smart assets. Smart contracts are embedded programmatic
contracts that can be built into any blockchain data, transaction, tan-
gible or intangible asset. Thus, smart contracts create software-defined
systems, markets, and assets that can be programmatically controlled.
Smart contracts provide innovative solutions for issuing, trading, cre-
ating, and managing traditional financial assets and play an essential
role in asset management, contract management, and regulatory en-
forcement in social systems.
3. DApp. A DApp is defined as an application that runs on a distributed
network, where participants’ information is securely protected (and
possibly anonymous) and control is decentralized across network nodes. A
DApp in the blockchain generally meets the following three conditions.
(a) The application is normally open-source and can run autonomously
rather than being controlled by a single entity. All
data and records should be stored on the blockchain through some
cryptographic technologies.
(b) The application generates tokens through a set of criteria and
allocates some or all of them. Any user who contributes is re-
warded with tokens paid by the application.
(c) The application can improve and adjust its protocols based on
business requirements, but all changes have to be agreed upon by
the majority of users.
4. APIs. Blockchain provides documented APIs in the application layer
for development, allowing DApps and third-party applications to
interact with smart contracts and retrieve data from the blockchain
platform.
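The locking/unlocking script verification described in item 1 can be sketched with a toy stack interpreter. The opcode set here is a simplified, hypothetical subset; OP_SHA256 stands in for Bitcoin's actual hash and signature opcodes.

```python
import hashlib

def run_script(script, stack):
    """Evaluate a toy, non-Turing-complete stack script (no loops)."""
    for op in script:
        if op == "OP_DUP":
            stack.append(stack[-1])
        elif op == "OP_SHA256":
            stack.append(hashlib.sha256(stack.pop()).digest())
        elif op == "OP_EQUALVERIFY":
            if stack.pop() != stack.pop():
                raise ValueError("EQUALVERIFY failed: conditions not met")
        else:
            stack.append(op)  # any other item is pushed as data

# Locking script: the output is spendable by whoever reveals the preimage
# of this hash. The unlocking script simply supplies that preimage.
secret = b"correct horse battery staple"
locking = ["OP_SHA256", hashlib.sha256(secret).digest(), "OP_EQUALVERIFY"]
unlocking = [secret]

stack = []
run_script(unlocking, stack)  # the unlocking script runs first...
run_script(locking, stack)    # ...then the locking script; no exception = valid
```

Real Bitcoin locking scripts such as P2PKH chain more opcodes (OP_DUP, OP_HASH160, OP_CHECKSIG) to verify a signature against a public-key hash, but the execution model is the same two-script evaluation shown here.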
The infrastructure layer, network layer, data layer, and consensus
layer can be envisioned as the blockchain’s underlying virtual machine,
and the application layer comprises the business logic, algorithms, and
applications built on the blockchain virtual machine, as shown in Figure
4.
Fig. 4. Five layers model to virtual machine model
2.3 Classification of blockchain networks
Blockchain networks can be classified as public, consortium or private
blockchain in order of decreasing degrees of openness available for partic-
ipation by nodes, as shown in Figure 5. Here, we provide a brief overview
of the three architectures.
Fig. 5. Public, Consortium, and Private Blockchain
Public blockchain The public blockchain is also referred to as a permis-
sionless blockchain, since any node can enter and exit the network freely.
The public chain is the earliest and most widely used blockchain architec-
ture. Bitcoin is the most widely known example of the public blockchain
[22]. Every participant in the blockchain can view the entire ledger data in
the public blockchain, and any public blockchain participant can freely ex-
ecute transactions with other nodes on the public chain. Further, anyone
on the public chain can participate in the blockchain consensus process
for mining, i.e., any node can take part in deciding which blocks are added
to the blockchain and in recording the current network status.
Thus, the public chain is a completely decentralized blockchain. Users of
the public chain can participate anonymously without registration and
can access the blockchain network and view data without authorization.
Additionally, any node can choose to join or exit the blockchain network
at any time [23]. The public chain uses cryptography-related technologies
such as digital signatures, hashing [24], symmetric/asymmetric keys [25],
and Elliptic Curve Digital Signature Algorithm (ECDSA) [26] to ensure
that transactions cannot be tampered with. Economic incentives such as
transaction fees and rewards are adopted so that the consensus node is
motivated to participate in the consensus process, which in turn serves
to maintain the security and effectiveness of the decentralized blockchain
system. The consensus mechanism in the public chain is generally PoW
(Bitcoin) or PoS (Ethereum). Under the PoW mechanism, nodes compete
for the right to confirm transactions and earn the associated rewards
through computing power, while under the PoS mechanism, users com-
pete for these rights through the stake they hold. Section 2.4 elaborates
on the different families of consensus protocols.
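The PoW competition can be illustrated with a brute-force search for a valid nonce. This is a minimal sketch with a toy difficulty parameter, not Bitcoin's exact header format:

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce whose double-SHA256 digest has at least
    `difficulty_bits` leading zero bits, i.e. falls below the target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "big")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

nonce = mine(b"example-header", 12)  # roughly 4096 attempts on average
```

Each additional difficulty bit doubles the expected work, which is the knob the network turns to keep the average block interval constant as total hash power changes.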
Private blockchain The private blockchain is also known as the permis-
sioned blockchain, and is only used in private organizations or institutions
[27]. Unlike public blockchains, private blockchains are generally not open
to the outside world and are only open to particular individuals or insti-
tutions. Data read and write permissions on the private blockchain and
block accounting rights are allocated under the rules established by pri-
vate organizations. Specifically, each node’s writing rights in the private
chain system are allocated by the organization, and the organization de-
cides how much information and data is open to each node according to
the specific conditions of the actual scenarios. The private chain’s value is
mainly to prevent internal and external security attacks on data and pro-
vide users of the private chain with a safe, non-tamperable, and traceable
system. From the above description, it can be seen that the private chain
is not a completely decentralized blockchain. Instead, there is a certain
degree of centralized control. Compared with public chains, private chains
sacrifice complete decentralization in exchange for increased transaction
speed.
Consortium blockchain The consortium blockchain is a hybrid archi-
tecture comprising features from both public and private blockchains.
A consortium blockchain is also a permissioned blockchain, in which par-
ticipation is limited to a consortium of members; each node
might refer to a single organization or institution in the consortium. The
number of nodes in a consortium blockchain is determined by the size of
the pre-selected participants in the blockchain. For example, suppose a
financial blockchain is designed for a consortium of thirty financial insti-
tutions. In that case, the maximum number of nodes in this consortium
blockchain is thirty, and the number of nodes required to reach the con-
sensus depends on which consensus algorithm the consortium blockchain
uses. The consortium chain accesses the network through the gateways of
member institutions. The consortium chain platform generally provides
members’ information authentication, data read and write permission au-
thorization, network transaction monitoring, member management, and
other functions. Each member can have permissions assigned by the con-
sortium to access the ledger and validate the generation of blocks. The
well-known Hyperledger project is an example of a consortium blockchain.
Since there are relatively few nodes participating in the consensus pro-
cess, the consortium blockchain generally does not use the PoW mining
mechanism as the consensus algorithm. Consortium chains’ requirements
for transaction confirmation time and transaction throughput are very
different from those of public chains.
Table 1 compares the three different types of blockchain.
2.4 Consensus algorithm classification
In this section, we will provide a brief overview of the different types of
consensus algorithms. There are two ways in which consensus algorithms
may be classified.
One way of classifying consensus algorithms is by the approach of
making a final decision to reach a consensus. We call this first category
proof-based consensus algorithms, since a node in this category has to
Table 1. Comparison of three blockchain networks

Property                Public                 Private                   Consortium
Infrastructure          Highly decentralized.  Distributed.              Decentralized.
Permission              Permissionless.        Permissioned.             Permissioned.
Governance Type         Public.                Consensus is managed      Consensus is managed by
                                               by a single node.         a consortium of participants.
Validator               Any node or miner.     A set of authorized       A set of authorized
                                               nodes.                    nodes.
Transaction Throughput  Low (≤100 TPS).        High (>100 TPS).          High (>100 TPS).
Network Scalability     High.                  Low.                      Medium.
Example                 Bitcoin, Ethereum.     Quorum, SoluLab.          HyperLedger, Tendermint,
                                                                         Corda R3.
compete with other nodes and prove it is more qualified to commit trans-
actions. PoW [1], PoS [28], PoA [29], PoET [30], and PoSpace [31] are
algorithms in this group. The other category is that of voting-based al-
gorithms since the commitment depends on which committed result wins
the majority of votes. Paxos [32], Raft [33], PBFT [34], RBFT [35], RPCA
[36], SCP [37], Tendermint [38], and HotStuff [39] belong to this category.
Figure 6 shows the classification of blockchain consensus algorithms by
working mechanism. The first group of consensus is proof-based, while
the second group is voting-based.
The second way of classifying consensus algorithms is by the design
principle of fault tolerance. Nodes can suffer from non-Byzantine error
(Crash Fault), which is exemplified by situations where the node fails
to respond. Alternatively, nodes can forge or tamper with the informa-
tion and respond maliciously, causing Byzantine errors (Byzantine Fault).
Thus, consensus algorithms may be classified as being designed for Crash
Fault Tolerance (CFT) or Byzantine Fault Tolerance (BFT). It is impor-
tant to note that this classification method only focuses on the original de-
sign principle; most BFT-based consensus algorithms can tolerate both
crash faults and Byzantine faults. Since the design principle of the proof-
based family is unrelated to fault tolerance, proof-based algorithms are
excluded from this classification.
Paxos [32], Raft [33], and Zab [40] belong to the category of CFT-
based consensus algorithm. A collection of variants of PBFT [34] algo-
rithms, such as RBFT [35], SBFT [41], BFT-SMART [42], DBFT [43],
Fig. 6. Classification of Blockchain Consensus Algorithm by mechanism
and HotStuff [39], are in the category of BFT-based consensus algorithm.
Another collection of consensus algorithms in the same category uses
Byzantine Federated Agreement (BFA) [37] for voting, such as RPCA
[36] and SCP [37]. Figure 7 shows the classification of blockchain consensus
algorithms by fault tolerance.
Fig. 7. Classification of Blockchain Consensus Algorithm by Fault Tolerance
Section 3 presents CFT algorithms, and Section 4 presents BFT algo-
rithms.
3 CFT Consensus Mechanisms in Consortium Blockchain
3.1 The CFT Problem
CFT consensus algorithms only guarantee a blockchain’s reliability and
resiliency against node failures. Also known as non-Byzantine errors,
node failures can be caused by failed hardware, crashed processes, broken
network, or software bugs. CFT cannot address scenarios where malicious
activities are involved, referred to as Byzantine errors. When nodes in a
blockchain intentionally and maliciously violate consensus principles, e.g.,
by tampering with data, a CFT algorithm cannot guarantee system
reliability. Thus, CFT consensus algorithms are mainly used in closed
environments such as enterprise blockchains. Current mainstream CFT
consensus algorithms include the Paxos algorithm and Raft. The latter is
a derivative of the former and is a simplified consensus algorithm designed
to be more suitable for industry implementation than the original Paxos.
3.2 Paxos
Paxos [32] is a fault-tolerant consensus algorithm based on message pass-
ing in a distributed system. The Paxos algorithm divides nodes into three
roles: proposer, acceptor, and learner. Each role corresponds to a process
on the node, and each node can have multiple roles simultaneously.
A proposer is responsible for proposing a proposal and for awaiting
responses from acceptors. An acceptor is responsible for voting on the
proposal. A learner is informed of the proposal’s result and follows the
result, but it does not participate in voting.
A proposal consists of a key-value pair formed by a proposal number
and a value. The proposal number ensures the proposal’s uniqueness, and
the value represents the content of the proposal itself. A proposal is said
to be Chosen once it has been selected: when more than half of the
acceptors approve a proposal, the proposal is considered Chosen.
The Paxos algorithm meets the constraints of safety and liveness,
which are described below.
•Safety ensures that the decision is correct and not ambiguous. The
safety constraint has the following requirements. Only the value pro-
posed by the proposer can be chosen. Further, only one decision value
can be chosen, and the process can only obtain those values that are
actually chosen.
•Liveness guarantees that the proposal will be completed within a lim-
ited time. The value proposed by the proposer cannot be learned until
it has been chosen.
The Paxos algorithm’s consensus process begins with a proposer, who
puts forward a proposal to win the support of the majority of acceptors.
When a proposal receives the approval of more than half of the
acceptors, the proposer sends the result to all nodes for con-
firmation. In this process, if the proposer fails due to a crash, it can be
solved by triggering the timeout mechanism. If the proposer happens to
fail every time a new round of proposals is proposed, then the system will
enter a livelock state and never reach an agreement [44].
The Paxos algorithm execution is divided into two phases, shown in
Figure 8. In the PREPARE phase, the proposer sends a prepare request
with a proposal number to more than half of the acceptors in the network.
The purpose of this initial transmission of the proposal number is to test
whether the majority of acceptors are prepared to accept the proposal.
After receiving the proposal, the acceptor will always store the largest
proposal number it has received. When an acceptor receives a prepare
request, it will compare the currently received proposal’s number and
the saved largest proposal number. If the received proposal number is
greater than the saved maximum proposal number, it will be accepted and
included in a message called promise, which it returns as the response
to the proposer. The internally saved largest proposal number is updated
simultaneously and the acceptor will promise not to accept any proposal
with a number less than the proposal number that is currently received.
Fig. 8. Paxos two phases. [45]
In the ACCEPT phase, if the proposer receives promise messages from more than half of the acceptors, it broadcasts an accept request with the proposal. This accept request consists of a proposal number and the value that the node would like to propose. Note that if the promise messages received by the proposer contain no prior proposal, the value is determined by the proposer itself. However, if any promise message contains a proposal, the value is replaced by the value from the response with the largest proposal number. When an acceptor receives the accept request, if the proposal number in the request is not less than the maximum proposal number the acceptor has promised, it accepts the proposal and updates its accepted maximum proposal. If a majority of acceptors accept the proposal, the proposed value is chosen, which means the cluster of all proposers and acceptors has reached consensus.
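The two phases above can be sketched as follows for single-decree Paxos. This is an illustrative sketch, not the paper’s notation: the class, method names, and message shapes are our own simplifications.

```python
class Acceptor:
    """Minimal single-decree Paxos acceptor (illustrative sketch)."""
    def __init__(self):
        self.promised = -1        # highest proposal number promised
        self.accepted_n = -1      # number of the accepted proposal, if any
        self.accepted_v = None    # value of the accepted proposal

    def on_prepare(self, n):
        # Promise only if n exceeds every number promised so far.
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted_n, self.accepted_v)
        return None

    def on_accept(self, n, value):
        # Accept if no higher-numbered proposal has been promised.
        if n >= self.promised:
            self.promised = n
            self.accepted_n, self.accepted_v = n, value
            return ("accepted", n, value)
        return None


def run_round(proposal_n, value, acceptors):
    """Drive one proposal through the PREPARE and ACCEPT phases."""
    promises = [a.on_prepare(proposal_n) for a in acceptors]
    promises = [p for p in promises if p is not None]
    if len(promises) <= len(acceptors) // 2:
        return None  # no majority of promises
    # Adopt the value from the highest-numbered prior acceptance, if any.
    prior = max(promises, key=lambda p: p[1])
    chosen = prior[2] if prior[1] >= 0 else value
    acks = [a.on_accept(proposal_n, chosen) for a in acceptors]
    if sum(ack is not None for ack in acks) > len(acceptors) // 2:
        return chosen
    return None
```

Running `run_round(1, "x", acceptors)` on three fresh acceptors chooses "x"; a later round with a higher number must re-adopt "x", which mirrors the rule that the value from the largest-numbered prior acceptance wins.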
In the n = 2f + 1 model, Paxos can tolerate f crashed nodes and implements consensus through message passing. Paxos is fault-tolerant only for crashed nodes, not for Byzantine nodes. This is because a Byzantine node can always find a number larger than the current maximum proposal number, either to disrupt other nodes’ efforts to reach consensus or to force other nodes to accept its proposed incorrect value.
3.3 Raft
Raft [33], the Raft Consensus Algorithm, is motivated by Paxos. Raft is designed for ease of understanding and implementation in industry applications. Its core idea is that servers start from the same initial state and execute a series of command operations in the same order, so that they arrive at a consistent state. To this end, Raft uses logs for synchronization: it is a consensus algorithm for managing replicated logs.
The Raft algorithm divides nodes into three mutually convertible roles: leader, follower, and candidate. There can be at most one leader in the entire cluster; a typical cluster consists of five nodes. The leader is responsible for receiving client requests, managing log replication, and maintaining communication with followers.
Initially, all servers are followers. A follower passively responds to Remote Procedure Call (RPC) requests from the leader. Followers do not communicate with each other, since they are passive nodes. A follower is responsible for responding to log replication requests from the leader and to election requests from candidate nodes. If a follower receives a request from a client, it forwards the request directly to the leader.
In Raft, a candidate is responsible for initiating election voting. If the leader goes down due to a crash or loses network connectivity, one or more nodes change their role from follower to candidate and initiate an election to elect a new leader. Once a candidate node wins an election, its status changes from candidate to leader; a node may become a candidate again in a later term if the new leader subsequently fails. Figure 9 shows how the three roles change states. Term in the figure is represented by a continuously increasing number. Each round of election is a term, and each term elects at most one leader.
Fig. 9. Server states. [33]
The Raft algorithm’s consensus process runs in two phases. The first phase is the leader election, triggered by a heartbeat mechanism. A leader periodically sends a heartbeat message to all followers to maintain its authority. If a follower does not receive a heartbeat message within a period of time called the election timeout, it switches to the candidate role and starts a leader election, since it concludes that the leader has failed [33]. It then increments its current term, votes for itself, sends RequestVote RPCs to the other servers, and waits for any of the following three situations to occur:
1. A candidate wins the election. This implies that the candidate has
won more than half of the server votes, and it will become a leader.
2. A candidate loses the election, which means another server has won more than half of the votes; upon receiving the new leader’s heartbeat, the candidate reverts to a follower.
3. If no one wins the election, after a randomized timeout, the election
is re-initiated and the term increases.
The second phase is the log replication phase, where the leader accepts
the client’s request, updates the log, and sends a heartbeat to all followers.
Consequently, all followers synchronize the leader’s log.
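The leader-election flow described above can be sketched as follows. The class and method names are illustrative, and the sketch omits log replication and real timers entirely.

```python
class RaftNode:
    """Illustrative Raft leader-election sketch (not the full protocol)."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.state = "follower"
        self.current_term = 0
        self.voted_for = None

    def on_election_timeout(self):
        # No heartbeat arrived within the election timeout:
        # become a candidate, increment the term, and vote for self.
        self.state = "candidate"
        self.current_term += 1
        self.voted_for = self.node_id

    def on_request_vote(self, term, candidate_id):
        # Grant at most one vote per term, to the first valid candidate.
        if term > self.current_term:
            self.current_term = term
            self.state = "follower"
            self.voted_for = None
        if term == self.current_term and self.voted_for in (None, candidate_id):
            self.voted_for = candidate_id
            return True
        return False


def hold_election(candidate, peers):
    """Count votes; the candidate wins with more than half of the cluster."""
    votes = 1  # the candidate's own vote
    for peer in peers:
        if peer.on_request_vote(candidate.current_term, candidate.node_id):
            votes += 1
    if votes > (len(peers) + 1) // 2:
        candidate.state = "leader"
    return candidate.state
```

In a five-node cluster, the first follower whose timeout fires becomes a candidate in term 1 and, with all peers granting their single vote for that term, wins the election.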
4 BFT Consensus Mechanisms in Consortium
Blockchains
4.1 BFT
In 1982, Leslie Lamport, Robert Shostak, and Marshall Pease proposed
the Byzantine Generals problem [46]. The Byzantine Generals problem is
described as follows. Suppose there are several Byzantine armies camp-
ing outside an enemy city, and each army is commanded by a general.
The generals can only communicate with each other by dispatching a
messenger who carries messages [46]. After observing the enemy’s situa-
tion, they must agree on an identical plan of action. However, there are
some traitors among these generals, and these traitors will prevent loyal
generals from reaching an agreement. The generals must devise an algorithm to guarantee that all loyal generals reach a consensus, and that a small number of traitors cannot cause a loyal general to adopt the wrong plan.
Let v(i) represent the information sent by the i-th general. Each general draws up a battle plan based on v(1), v(2), · · · , v(n), where n is the number of generals. The problem can be described in terms of how a commanding general sends an order to his lieutenants, which transforms it into the following Byzantine Generals Problem: a commander sends an order to his n − 1 lieutenants such that:
• IC1. All loyal lieutenants obey the same order.
• IC2. If the commander is loyal, then each loyal lieutenant must obey his orders.
IC1 and IC2 above are the conditions for interactive consistency, i.e., for a configuration of generals to reach a final agreement [46]. It has been shown that if there are m traitors and the total number of generals is less than 3m + 1, the Byzantine Generals Problem has no solution.
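The 3m + 1 bound can be checked with a one-line helper; the function name is ours.

```python
def max_tolerable_traitors(n):
    """With n generals, agreement is possible only if n >= 3m + 1,
    i.e. at most floor((n - 1) / 3) traitors can be tolerated."""
    return (n - 1) // 3
```

For instance, four generals can tolerate one traitor, while three generals cannot tolerate any, which is exactly the failure scenario illustrated in Figures 10 and 11.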
An example of the Byzantine Generals Problem is shown in Figure 10. Here, the commander and Lieutenant 1 are loyal, and Lieutenant 2 is a traitor. The commander sends an attack order to all lieutenants. Lieutenant 2, being a traitor, deceives Lieutenant 1 by sending a tampered message, “retreat”. Since Lieutenant 1 does not know whether the commander or Lieutenant 2 is the traitor, he/she cannot judge which message contains the correct information and thus cannot reach a consensus with the loyal commander.
Fig. 10. Byzantine Generals Problem with three participants and one traitor (lieutenant). [46]
In another case, shown in Figure 11, the two lieutenants are loyal, and the commander is a traitor. The commander sends different orders to the two lieutenants. Lieutenant 2 faithfully relays the commander’s message to Lieutenant 1. Lieutenant 1 cannot judge which information is correct, so the two loyal lieutenants fail to reach a consensus.
Fig. 11. Byzantine Generals Problem with three participants and one traitor (commander). [46]
If there are m traitors and the total number of generals n is less than 3m + 1, the Byzantine Generals Problem has no solution. Unlike CFT problems that deal with crashes or failures, a Byzantine fault, named after the Byzantine Generals Problem, is caused by malicious nodes that may send incorrect information to prevent other nodes from reaching consensus. In distributed systems, the Byzantine Generals Problem translates to the inability to maintain consistency and correctness under certain conditions.
Lamport proposed a BFT algorithm that solves the Byzantine Generals Problem in exponential time O(n^f) under the adversary model n = 3f + 1 [46]. This original BFT algorithm is computationally expensive to implement, and a practical BFT algorithm is introduced in the next section.
4.2 PBFT
Practical Byzantine Fault Tolerance (PBFT) is a consensus algorithm based on state machine replication [34]. As a state machine, the service is replicated across different nodes of a distributed system. Each copy of the state machine saves the state of the service and the operations it implements. The algorithm ensures the system’s normal operation as long as the proportion of faulty nodes does not exceed one third of the total number of nodes. The idea is to let every node learn the content of the messages received by other nodes.
The adversary model of PBFT is n = 3f + 1: a system of n nodes can reach consensus if the number of faulty nodes f does not exceed one third of n. In the PBFT algorithm, one of the n nodes is the primary, and the other backup nodes are called replicas. The PBFT consensus mechanism reaches consensus through three phases: pre-prepare, prepare, and commit. Another important mechanism in the PBFT algorithm is the view-change. When the primary node fails and cannot process a request within a specified time, the other replicas initiate a view-change, and the new primary node starts to work after the conversion succeeds.
The process of reaching consensus in the PBFT algorithm is as follows:
1. Propose. The client uploads the request message m to the nodes in the network, including the primary node and the replicas.
2. Pre-prepare. The primary node receives the request message m uploaded by the client, assigns it the message sequence number s, and generates the pre-prepare message ⟨PRE-PREPARE, H(m), s, v⟩, where H(m) is a one-way hash function and v represents the view at that instant. The view v is used to record the replacement of the primary node; if the primary node changes, v is incremented by one. The message sender signs the message with its private key before sending it. The primary node then sends the pre-prepare message to the replicas.
3. Prepare. Once the replica nodes receive the pre-prepare message from the primary node, they verify H(m) to ensure they have not received another message with view v and sequence s. After the verification passes, each replica node calculates the prepare message ⟨PREPARE, H(m), s, v⟩ and broadcasts it to the entire network. If the number of valid prepare messages received by a replica node is greater than or equal to 2f + 1 (including its own prepare message), the replica node generates a prepared certificate. This implies that it is prepared to move to the next phase.
4. Commit. If a replica node collects 2f + 1 prepare messages and generates the prepared certificate in the prepare phase, it broadcasts the commit message ⟨COMMIT, s, v⟩ to the other replica nodes and stores the message m in its local log for processing. If the number of valid commit messages received by a replica node is greater than or equal to 2f + 1 (including its own commit message), the replica generates a committed certificate, which means the message has been successfully committed.
5. Reply. Once a node (either the primary or a replica) receives 2f + 1 valid commit messages from the replicas and the primary, it sends the committed certificate as a reply to the message m to the client.
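The 2f + 1 quorum counting in the prepare and commit phases can be sketched as follows. This is an illustrative sketch, not the PBFT reference implementation; message fields are simplified and signatures are omitted.

```python
from collections import defaultdict

class PBFTReplica:
    """Illustrative quorum bookkeeping for one PBFT replica (n = 3f + 1)."""
    def __init__(self, f):
        self.f = f
        self.quorum = 2 * f + 1
        self.prepares = defaultdict(set)  # (view, seq, digest) -> senders
        self.commits = defaultdict(set)

    def on_prepare(self, view, seq, digest, sender):
        key = (view, seq, digest)
        self.prepares[key].add(sender)
        # A "prepared certificate" needs 2f + 1 matching prepare messages.
        return len(self.prepares[key]) >= self.quorum

    def on_commit(self, view, seq, digest, sender):
        key = (view, seq, digest)
        self.commits[key].add(sender)
        # A "committed certificate" needs 2f + 1 matching commit messages.
        return len(self.commits[key]) >= self.quorum
```

With f = 1 (so n = 4 and the quorum is 3), a replica reaches the prepared state only once messages from three distinct senders for the same (view, sequence, digest) have arrived.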
Fig. 12. PBFT algorithm process [47]
PBFT contains a checkpoint mechanism for discarding messages in a garbage-collection approach. Each request message is assigned a specific sequence number s, and the checkpoint for s is the state reached after the request with sequence number s is executed. Any checkpoint for which no fewer than 2f + 1 nodes have generated the committed certificate is a stable checkpoint. For example, let the sequence number corresponding to message m be 106. If no fewer than 2f + 1 nodes generate the committed certificate for message m, then sequence number 106 becomes the stable checkpoint after the commit phase. The replicas can then reduce storage costs by clearing the data before the stable checkpoint.
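The garbage-collection rule can be illustrated with a small sketch; the function name and the dictionary representation of the log are our own assumptions.

```python
def prune_log(log, stable_checkpoint):
    """Discard log entries at or below the stable checkpoint sequence number.

    `log` maps sequence numbers to messages; entries whose durability is
    proven by 2f + 1 committed certificates no longer need to be kept.
    """
    return {seq: msg for seq, msg in log.items() if seq > stable_checkpoint}
```

Using the example from the text, `prune_log({105: "m1", 106: "m2", 107: "m3"}, 106)` keeps only the entry for sequence number 107.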
The stable checkpoint also plays a crucial role in PBFT’s view-change protocol. The view-change protocol provides liveness through a mechanism that keeps the cluster working when the primary node fails. To avoid waiting indefinitely, a replica starts a timer when it receives a request. A view change is triggered if the replica has not received a response from the primary node before the timer expires. PBFT’s view-change protocol works as follows:
1. Broadcast view-change messages. For replica i, suppose the timer expires in view v. The current stable checkpoint is S∗, C is defined to be a set of 2f + 1 valid checkpoint messages for S∗, and U is a set of messages with sequence numbers greater than S∗ that each contain a valid pre-prepare message. Node i broadcasts the view-change message vc_i: ⟨VIEW-CHANGE, v + 1, S∗, C, U, i⟩ to all replica nodes.
2. View-change confirmation. Each backup node verifies the legality of the received view-change message for view v + 1. An acknowledge message is then sent to the new primary node for view v + 1 once the verification succeeds.
3. Broadcast new view. For node j’s view-change message vc_j, if the new primary p receives 2f acknowledge messages for view v + 1, then vc_j is considered valid. Primary node p broadcasts the new-view message ⟨NEW-VIEW, v + 1, V, U∗⟩ to all other replicas, where V is the set of valid view-change messages plus the view-change message for v + 1 sent by p. The term U∗ denotes a set of numbers containing the sequence number of the latest stable checkpoint and the highest sequence number among the prepare messages.
PBFT uses Message Authentication Codes (MACs) [48] to facilitate inter-node authentication. In the authentication process, both the message and its digest are generated through a specific hash function. A pair of session keys between the two nodes is used to calculate the MAC of the message. The session key is generated through a key exchange protocol and is dynamically replaced. PBFT achieves the consistency and liveness of state machine replication. The message communication complexity is O(n²) when a non-malicious primary node works without failure; otherwise, it rises to O(n³) when the primary node fails and the view-change protocol runs.
4.3 Redundant Byzantine Fault Tolerance (RBFT)
The Redundant Byzantine Fault Tolerance (RBFT) algorithm [35] is a
variation of PBFT proposed in 2013 that uses a multi-core architecture
to improve its robustness.
RBFT requires the same adversary model as PBFT, i.e., n = 3f + 1 nodes. Each node runs f + 1 PBFT protocol instances [35] in parallel. Only one of these instances is the master instance, while the others are backup instances. Each instance has its own n replicas, and across the f + 1 instances each node acts as the primary of at most one instance. An overview of this parallel architecture is shown in Figure 13.
Fig. 13. An overview of RBFT components [35]
As shown in Figure 14, RBFT uses a communication process similar to PBFT in the consensus protocol phase but adds a propagate phase before the pre-prepare phase. This ensures that a request will eventually be sent to the next phase by all correct nodes. To guarantee correctness, RBFT requires that the f + 1 PBFT instances receive the same client request. However, when a node receives a request from the client, it does not directly run it on its f + 1 instances but forwards the request message to the other nodes. If a node receives 2f + 1 copies of the request, it sends the request to its f + 1 instances and moves to the next phase. The subsequent 3-phase process is similar to PBFT [34] and is shown as steps 3, 4, and 5 in Figure 14. In the 3-phase process, the RBFT algorithm is likewise executed by the f + 1 instances when running the consensus protocol. After execution, the result is returned to the client through MAC-authenticated messages. When the client receives f + 1 valid and consistent replies, it accepts them as the result.
Fig. 14. RBFT protocol steps [35]
An improvement of RBFT over PBFT is the implementation of a monitoring mechanism and a protocol-instance change mechanism to promote robustness. Each node runs a monitoring program that tracks the throughput of all f + 1 instances. If 2f + 1 nodes find that the performance difference between the master instance and the best backup instance reaches a certain threshold, the primary of the master instance is considered a malicious node [35]. In that case, the backup instance with the best performance is promoted to become the master instance, which effectively selects a new primary. Since each node is the primary of at most one instance, if the faulty primary of the master instance has been detected, all primaries on the different instances need to be replaced. Each node maintains a counter to record the change information of each instance. If a node finds that it needs to change the primary, it sends an INSTANCE_CHANGE message with a MAC authenticator to all nodes. When a node receives an incoming INSTANCE_CHANGE message, it verifies the MAC and then compares the message with its counter. If its counter is larger, it discards the message. Otherwise, the node checks whether it also needs to send an INSTANCE_CHANGE message by comparing the performance of the master and backup instances. If 2f + 1 valid INSTANCE_CHANGE messages are received, the counter is incremented by one, and the view-change process starts as in PBFT. As a result, each instance’s primary gets updated, including the master’s.
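The monitoring rule can be sketched as follows, assuming throughput has already been measured per instance. The function name, the ratio-based threshold, and the parameter names are our illustrative assumptions, not RBFT’s specification.

```python
def should_change_instance(master_tput, backup_tputs, threshold):
    """Flag the master's primary as suspect when the best backup instance
    outperforms the master by more than `threshold` (a ratio in [0, 1]).
    Illustrative sketch of RBFT's monitoring comparison."""
    best_backup = max(backup_tputs)
    if best_backup <= 0:
        return False
    return (best_backup - master_tput) / best_backup > threshold
```

For example, a master instance at 50 requests/s against a best backup at 100 requests/s yields a relative gap of 0.5, exceeding a 0.4 threshold and triggering an instance change.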
4.4 BFT-SMART
BFT-SMART [42] is a state machine replication library written in Java, designed to tolerate f Byzantine nodes when the total number of nodes is n ≥ 3f + 1. BFT-SMART provides a state transfer service to repair a faulty node, reintegrate it into the system, and let it contact other nodes to obtain the replicas’ latest state. To ensure that the system can recover stably even when errors occur at f nodes simultaneously, the state transfer service stores each node’s operation log on separate disks. In addition, BFT-SMART implements a reconfiguration service to add or remove replicas dynamically through a particular Trusted Third Party (TTP) client.
The BFT-SMART algorithm divides the nodes into two types, leader nodes and backup nodes, and it has a reconfiguration protocol [49] very similar to the view-change protocol employed in PBFT to handle a leader failure.
The consensus process of the BFT-SMART algorithm is based on a module named Mod-SMaRt [50], with a leader-driven algorithm described in [51]. There are three phases in the consensus process, Propose, Write, and Accept, as shown in Figure 15. A leader node is elected from the entire network. Before entering the consensus process, a client sends a REQUEST message containing the client serial number, a digital signature, and the operation request content to all nodes and then waits for a response. When the system is in the normal phase (no node fails or has an error), the leader node first verifies the correctness of the received REQUEST message. After the verification passes, the leader node accepts the received message, assigns a serial number, and sends the PROPOSE message to the replica nodes. When a replica node accepts the message, it sends a WRITE message to all nodes, including itself. When a node receives 2f WRITE messages, it broadcasts an ACCEPT message to all nodes, including itself. When a node receives 2f + 1 ACCEPT messages, the request is executed. The algorithm stores the content of the series of request operations and the encryption certificate in each node’s log and simultaneously sends the reply to the client.
Fig. 15. BFT-SMART normal phase message pattern. [42]
If an error occurs in up to f = ⌊(n − 1)/3⌋ nodes and the timeout is triggered twice, the algorithm is forced to jump to the synchronization phase, and the reconfiguration protocol starts to re-elect the leader node. This process and the consensus process can execute simultaneously. When the first timeout is triggered, the REQUEST message is automatically forwarded to all nodes, because the timeout may have been caused by a faulty node that sent the message to only part of the network rather than the entire network. When the second timeout is triggered, the node immediately enters the next reconfiguration and sends a STOP message to notify the other nodes. When a node receives more than f STOP messages, it immediately starts the next reconfiguration. Once the leader election is complete, all nodes send a STOPDATA message to the new leader node. If the leader node accepts at least n − f valid STOPDATA messages, it sends a SYNC message to all nodes. A node that receives the SYNC message performs the same operations as the leader node to verify whether the leader has collected and sent valid information. If the leader is verified as valid, all other replicas start to synchronize from the leader.
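The message-counting thresholds of the synchronization phase can be sketched as follows; the class and field names are illustrative, not part of the BFT-SMART API.

```python
class SyncPhase:
    """Illustrative STOP / STOPDATA counting for the synchronization phase
    (n total nodes, up to f Byzantine)."""
    def __init__(self, n, f):
        self.n, self.f = n, f
        self.stop_senders = set()

    def on_stop(self, sender):
        self.stop_senders.add(sender)
        # More than f STOP messages: join the next reconfiguration.
        return len(self.stop_senders) > self.f

    def leader_has_quorum(self, stopdata_count):
        # The new leader needs at least n - f valid STOPDATA messages
        # before it may broadcast SYNC.
        return stopdata_count >= self.n - self.f
```

With n = 4 and f = 1, a node joins the reconfiguration after the second distinct STOP message, and the new leader needs three STOPDATA messages before sending SYNC.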
4.5 RPCA
The Ripple Protocol Consensus Algorithm (RPCA) [36, 52] was proposed in 2014 for use in the Ripple cryptocurrency created by Ripple Labs. The RPCA algorithm uses pre-configured nodes as validators that verify and vote on transactions to reach consensus. After several rounds of voting, if a transaction continues to receive more than a threshold (usually 80%) of the votes, the transaction is recorded directly in the ledger. Each node in the system maintains a subset of validators as a list of trusted nodes called the Unique Node List (UNL). In addition to validators, there are also non-validators in the system known as tracking servers. Tracking servers are responsible for forwarding transaction information in the network and responding to clients’ requests, but they do not participate in the consensus process. A validator and a tracking server can switch roles: when a tracking server obtains a certain threshold of votes, it can switch to serving in the role of a validator, and if a validator is inactive for a long time, it is deleted from the UNL and becomes a tracking server.
The consensus process of the RPCA algorithm is shown in Figure 16. The client initiates a transaction and broadcasts it to the network. A validator receives the transaction data, stores it locally, and verifies it. Invalid transactions are discarded, while valid transactions are integrated into the candidate set of transactions. Each validator periodically sends its transaction candidate set as a proposal to the other nodes. Once a validator receives a proposal from another node, it checks whether the sender of the proposal is on its UNL. If it is not, the proposal is discarded. Otherwise, the validator stores the proposal locally and compares it with its candidate set. A transaction obtains one vote if it also appears in the candidate set. Within a certain period [52], if the transaction fails to reach 50% of the votes, it returns to the candidate set and waits for the next consensus process. If it reaches the 50% threshold, it enters the next round and is re-sent as a proposal to the other nodes, with the threshold raised. As the number of rounds increases, the threshold continues to increase until the transaction reaches 80% or more of the votes, at which point the validators write it into the ledger.
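The escalating thresholds can be sketched as follows. Only the 50% starting threshold and the 80% final supermajority come from the text; the per-round step size and the function name are our illustrative assumptions.

```python
def rpca_rounds(votes_by_round, start=0.5, step=0.1, final=0.8):
    """Sketch of RPCA's rising vote thresholds: a transaction must keep
    clearing a threshold that starts at 50% and grows each round, until
    it reaches the final 80% supermajority and is written to the ledger.
    `votes_by_round` lists the vote share the transaction got each round."""
    threshold = start
    for share in votes_by_round:
        if share < threshold:
            return "deferred"   # back to the candidate set for the next process
        if share >= final:
            return "ledger"     # recorded directly in the ledger
        threshold = min(final, threshold + step)
    return "pending"
```

A transaction polling 60%, then 70%, then 85% clears each rising bar and is written to the ledger, while one polling only 40% in the first round returns to the candidate set.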
Fig. 16. Ripple’s RPCA consensus algorithm.
In the RPCA algorithm, because the identities of the nodes participating in consensus (the validators) are known, the algorithm reduces the communication cost between network nodes and improves consensus efficiency compared with PoW, PBFT, and other algorithms. Since the algorithm requires 80% or more of the votes to reach consensus, malicious nodes that want to tamper with the ledger must control 80% or more of the UNL to succeed. Thus, RPCA provides better Byzantine fault tolerance compared to PBFT, and it is able to guarantee the correctness of the system.
4.6 Stellar Consensus Protocol (SCP)
Stellar is an open-source blockchain technology mainly used for distributed financial infrastructure. One of SCP’s main objectives is to reduce the cost of financial services such as daily payments between enterprises, cross-border electronic remittances, and asset transactions. SCP, proposed by David Mazieres, is a distributed consensus algorithm designed around state machine replication; it does not require miners but rather a distributed network of servers to run the protocol [37].
SCP is the first implementation of a consensus protocol called the Federated Byzantine Agreement (FBA), which follows Federated Byzantine Fault Tolerance (FBFT). A quorum slice, introduced by FBFT, is the subset of nodes on the network that a given node chooses to trust. A quorum is a set of nodes that contains at least one quorum slice for each of its non-faulty members. The notion is similar to the UNL in the RPCA algorithm, since a UNL can be considered a type of quorum slice. However, unlike the UNL used in Ripple, which requires only 80% agreement to reach consensus, in Stellar the ledger does not record a transaction until 100% of the nodes in a quorum slice agree on it.
There are two mechanisms in the quorum slice model: federated voting and federated leader election. In federated voting, nodes vote on a statement and use a two-step protocol to confirm it. If each quorum of a non-faulty node v1 intersects each quorum of a non-faulty node v2 in at least one non-faulty node, then v1 and v2 are intertwined [53]. It is guaranteed that intertwined nodes will never approve conflicting transactions [53]. In federated leader election, the algorithm allows nodes to pseudo-randomly select one or a small number of leaders in the quorum slice.
Fig. 17. Federated voting process. [37]
SCP is a global consensus protocol consisting of three interrelated components: a nomination protocol, a ballot protocol, and a timeout mechanism. The nomination phase is the initial operation in SCP, and it proposes new values as candidate values on which to reach agreement. NOMINATE x is a statement asserting that x is a valid candidate consensus value. Each node that receives these values votes for a single value among them. The nomination phase eventually produces the same set of candidate values, as a deterministic combination of all values, on every intact node [53].
Once the nomination phase is successfully executed, the nodes enter the ballot phase. In the ballot phase, federated voting is used to commit or abort values. An example of the three-step process used in FBA is shown in Figure 17. In the first step of the FBA process, a node v votes for a valid statement a by broadcasting a message. In the second step, v accepts a if v has never accepted a value that contradicts a. If each member of v’s quorum set claims to accept a, the fact a is broadcast again. The statement a is confirmed in the last step if each node in v’s quorum accepts a and v confirms a. However, a stuck state may arise, in which a node cannot conclude whether to abort or commit a value. SCP uses two statements, PREPARE and COMMIT, together with a series of numbered ballots to avoid stuck votes in the federated voting process. A statement PREPARE⟨n, x⟩ states that no value other than x was or ever will be chosen in any ballot ≤ n. Another statement COMMIT⟨n, x⟩ states that the value x is chosen in ballot n. A node has to confirm the PREPARE⟨n, x⟩ statement before voting for the COMMIT⟨n, x⟩ statement. Once the COMMIT statement has been confirmed, the value x can be output by the node. SCP provides liveness by using these two statements when a node believes a stuck ballot has been committed.
The last important part of SCP is the timeout mechanism. If the current ballot n appears to be stuck, a new round of federated voting starts on a new ballot with a higher counter n + 1.
This quorum model allows each participating node to decide its own quorums, which is the critical difference between FBA and the Byzantine agreement systems introduced in Sections 4.2-4.5 above. The SCP protocol employing FBA claims to avoid stuck states and can provide low latency and flexible trust.
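The intertwinement condition from the federated-voting discussion can be sketched as a set computation; the function and parameter names are ours, and quorums are represented simply as collections of node-id sets.

```python
def quorums_intersect(quorums_a, quorums_b, faulty):
    """Check the intertwinement condition: every quorum of one node must
    share at least one non-faulty node with every quorum of the other.
    `quorums_a` / `quorums_b` are lists of node-id sets; `faulty` is the
    set of faulty node ids."""
    for qa in quorums_a:
        for qb in quorums_b:
            if not (set(qa) & set(qb)) - set(faulty):
                return False  # some pair of quorums shares no honest node
    return True
```

Two nodes whose quorums are {1, 2, 3} and {3, 4, 5} are intertwined as long as node 3 is honest; if node 3 is faulty, the guarantee against approving conflicting transactions is lost.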
4.7 HotStuff
The HotStuff algorithm proposed by Yin, Abraham, Gueta, and Malkhi [39] improves upon PBFT. The HotStuff network is a partially synchronous network [54] with an adversary model of n = 3f + 1. It uses a parallel pipeline to process proposals, which is equivalent to combining the preparation and commitment phases of PBFT into a single phase. The original paper proposes two implementations of HotStuff, namely Basic HotStuff and Chained HotStuff.
The Basic HotStuff protocol forms the core of HotStuff, which switches between a series of views. The views switch according to a monotonically increasing number sequence, and a unique consensus leader exists within each view. Each replica node maintains a tree structure of pending commands in its memory. Uncommitted branches compete, and only one branch in a round will be agreed upon by the nodes. In the HotStuff protocol, branches are committed as the view number grows. Voting in HotStuff uses a cryptographic construct called a Quorum Certificate (QC), where each view is associated with a QC that indicates whether enough replicas have approved the view. If a replica agrees with a branch, it signs the branch with its private key, creating a partial certificate [54] to send to the leader. The leader collects n − f partial certificates, which can be combined into a QC. A view with a QC has received the majority of the replicas’ votes. The leader combines the signatures from n − f replicas by using threshold signatures [41, 55]. The collection of signatures proceeds through three phases: PREPARE, PRE-COMMIT, and COMMIT. The entire algorithm consists of five phases: PREPARE, PRE-COMMIT, COMMIT, DECIDE, and FINALLY, as shown in Figure 18.
Fig. 18. Consensus in the HotStuff Protocol [56]
1. PREPARE. The leader denoted by the current highest view desig-
nated as highQC, initiates a proposal for highQC, encapsulates it into
a PREPARE message with message content m=MSG(P RE P ARE,
curP roposal ,highQC), and broadcasts it to all replicas. Replicas will
decide whether to accept the proposal or not, and then return a vote
with partial signature to the leader if the proposal is accepted.
2. PRE-COMMIT. When the leader receives votes from n−freplicas for
the current proposal curP roposal , it combines them into prepareQC,
encapsulates prepareQC into a PRE-COMMIT message, and broad-
casts it to all replicas. The replica votes after receiving the above
proposal message and returns the vote to the leader.
3. COMMIT. When the leader receives the PRE-COMMIT votes from
n − f replicas, it merges them into precommitQC, encapsulates precommitQC
into a COMMIT message, and broadcasts it to all replicas. Each replica votes
after receiving this message and returns its COMMIT vote to the leader. To
ensure the safety of the proposal, the replica locks itself by setting its lockedQC
to precommitQC.
4. DECIDE. When the leader receives the COMMIT votes from n − f
replicas, it merges them into one commitQC and broadcasts it to all replicas
in a DECIDE message. After receiving this message, each replica confirms and
commits the proposal in commitQC, executes the command, and returns the
result to the client. The replica then increments its viewNumber and starts
the next view.
5. FINALLY. When the system moves to the next view, each replica sends
the next view's leader the message m = MSG(NEW-VIEW, ⊥, prepareQC).
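The leader-side progression through these phases can be condensed into a short sketch. This is a simplified model under stated assumptions: message contents, signatures, and the replica side are elided, and `run_view` and `QC_NAMES` are illustrative names, not HotStuff API.

```python
# Minimal sketch of the leader's per-view progression in Basic HotStuff.
# In each phase, the leader needs votes from n - f replicas to form the
# corresponding QC; once commitQC exists, DECIDE broadcasts it.

QC_NAMES = {"PREPARE": "prepareQC", "PRE-COMMIT": "precommitQC",
            "COMMIT": "commitQC"}

def run_view(votes_per_phase, n, f):
    """Return the QCs the leader forms before the view stalls."""
    qcs = []
    for phase in ("PREPARE", "PRE-COMMIT", "COMMIT"):
        if votes_per_phase.get(phase, 0) < n - f:
            return qcs              # quorum missed: view change is triggered
        qcs.append(QC_NAMES[phase])
    return qcs                      # all QCs formed; DECIDE broadcasts commitQC

# n = 4, f = 1: three votes per phase complete the view.
assert run_view({"PREPARE": 3, "PRE-COMMIT": 3, "COMMIT": 3}, n=4, f=1) \
    == ["prepareQC", "precommitQC", "commitQC"]
# A missed quorum in PRE-COMMIT stops progress after prepareQC.
assert run_view({"PREPARE": 3, "PRE-COMMIT": 2}, n=4, f=1) == ["prepareQC"]
```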
Figure 18 shows that the processes in each phase of Basic HotStuff are
very similar to each other. A modified version of HotStuff called Chained
HotStuff was proposed [39] to optimize and simplify Basic HotStuff. In the
Chained HotStuff protocol, the replicas' votes in the PREPARE phase
are collected by the leader and stored in the state variable genericQC.
Then, genericQC is forwarded to the leader of the next view, essentially
delegating the next phase's (the PRE-COMMIT phase) responsibilities
to the next view's leader. Thus, instead of starting its new PREPARE
phase alone, the next view's leader actually executes the PRE-COMMIT
phase simultaneously. Specifically, the PREPARE phase of view v + 1
also acts as the PRE-COMMIT phase of view v. The PREPARE phase
of view v + 2 acts as both the PRE-COMMIT phase of view v + 1 and
the COMMIT phase of view v. The flow of Chained HotStuff is shown in
Figure 19.
Fig. 19. Chained HotStuff is a pipelined Basic HotStuff where a QC can serve in
different phases simultaneously. [39]
Figure 19 shows that a node can be in different views simultaneously.
Through the chained structure, a proposal reaches consensus after three
blocks; in other words, it forms a Three-Chain, as shown in Figure 20.
An internal state converter enables the automatic switching of proposals
through genericQC. The chained mechanism in Chained HotStuff reduces
the cost of communication messages and allows pipelined processing.
In an implementation of Chained HotStuff, if a leader fails to obtain
enough votes to form a QC, the view numbers seen by a node may not be
consecutive. This issue can be solved by adding dummy nodes, as shown
in Figure 20, where a dummy node has been added to force v6, itself, and
v8 to form a Three-Chain.
Fig. 20. The nodes at views v4, v5, and v6 form a Three-Chain. The node at view v8
does not make a valid One-Chain in Chained HotStuff. [39]
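The Three-Chain commit rule can be sketched as a toy model over view numbers only (the function name `committed_blocks` is illustrative; real Chained HotStuff also checks that the blocks form a parent chain):

```python
# Sketch of the Three-Chain commit rule in Chained HotStuff: a block is
# committed once it heads a chain of blocks with three consecutive view
# numbers. Gaps left by failed leaders are filled with dummy nodes.

def committed_blocks(views):
    """Given the view numbers of blocks along one branch (in order),
    return the views whose blocks are committed by a Three-Chain."""
    committed = []
    for i in range(len(views) - 2):
        v = views[i]
        if views[i + 1] == v + 1 and views[i + 2] == v + 2:
            committed.append(v)
    return committed

# Views 4, 5, 6 form a Three-Chain, so the block at view 4 commits;
# the gap before view 8 (a failed leader) blocks further commits.
assert committed_blocks([4, 5, 6, 8]) == [4]
```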
HotStuff achieves O(n) message authentication complexity, improving the
efficiency of distributed consensus through threshold signatures, parallel
pipelined processing, and linear view changes. Compared to PBFT, HotStuff
reaches consensus through pipelining without a complex view-change mecha-
nism, which improves consensus efficiency.
5 Comparison of Consensus Algorithms
The use of different consensus algorithms in enterprise blockchains im-
pacts the overall performance of the system. In this section, we compare
and summarize the eight consensus algorithms profiled thus far in this paper
along the following metrics: degree of decentralization, scalability, fault
tolerance, performance efficiency, and resource consumption.
5.1 Comparison Aspects
• Fault tolerance: Fault tolerance refers to the ability of the consensus
algorithm to tolerate non-Byzantine faults (CFT) and Byzantine
faults (BFT). Fault tolerance also impacts the security of the consensus
protocol.
• Performance efficiency: Performance efficiency is measured by the
block generation rate and the number of Transactions Per Second
(TPS) that the system can process. Block generation time is the time
from when transactions are packaged into a block until consensus is
completed and the block is recorded on the blockchain. TPS represents
the transaction throughput, which is determined by the block size and
the block generation speed: it is the number of transactions in a block
divided by the time required to generate that block. The faster the
block generation speed of the algorithm in the actual system, the greater
the transaction throughput and the higher the algorithm's performance
efficiency.
• Degree of decentralization: Decentralization does not mean that there
is no central node; rather, it implies that a relatively neutral entity
functions as the central node. In a round of consensus, the node that
decides which transactions are recorded on the distributed ledger is
considered the central node, and all other nodes keep their data consistent
with it. We therefore compare the degree of decentralization of each
algorithm according to the recording node's selection rules and the number
of recording nodes selected in each round.
• Scalability: Scalability refers to the number of network nodes that the
algorithm can support in the system.
• Resource consumption: Resource consumption refers to the computing
power, memory, input/output, and electricity resources that each node
consumes in the process of reaching consensus.
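The TPS metric defined above reduces to a one-line computation; the numbers in the example below are hypothetical:

```python
# Sketch of the throughput metric described above: TPS is the number of
# transactions in a block divided by the time taken to generate it.

def transactions_per_second(tx_in_block, block_time_seconds):
    if block_time_seconds <= 0:
        raise ValueError("block time must be positive")
    return tx_in_block / block_time_seconds

# E.g., a block of 4,500 transactions generated every 3 seconds
# (hypothetical numbers) yields 1,500 TPS.
assert transactions_per_second(4500, 3) == 1500
```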
5.2 Evaluation and Comparison
In this section, we present a summary of the various consensus algo-
rithms presented in this paper (Table 2). The latency property shown in
Table 2 is denoted as high, medium, or low: high latency is on the order of
minutes, medium on the order of seconds, and low on the order of milli-
seconds. For the throughput property, the consortium blockchain consensus
algorithms presented here usually achieve higher TPS than public blockchain
systems. In previous work on throughput comparison [57], high represents
more than 1,000 TPS, medium denotes between 100 and 1,000 TPS, and low
denotes less than 100 TPS. Since throughput in public blockchains is typically
in the range of 100-1,000 TPS, and the consortium blockchain consensus
algorithms presented in this paper exceed that range, we propose that the
scale be redefined: high represents more than 2,000 TPS, medium denotes
between 1,500 and 2,000 TPS, and low denotes less than 1,500 TPS.
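The proposed thresholds can be captured in a small classifier (a sketch using the cutoffs stated above; the function name is illustrative):

```python
# The redefined throughput categories as a classifier: > 2,000 TPS is
# high, 1,500-2,000 TPS is medium, and below 1,500 TPS is low.

def throughput_class(tps):
    if tps > 2000:
        return "high"
    if tps >= 1500:
        return "medium"
    return "low"

assert throughput_class(2500) == "high"
assert throughput_class(1600) == "medium"
assert throughput_class(100) == "low"
```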
Table 2. Comparison of Consortium Consensus Algorithms [57, 58, 59]

Consensus  | Adversary Tolerance | Scalability | Latency | Throughput | Examples of Applications
---------- | ------------------- | ----------- | ------- | ---------- | ------------------------
Paxos      | n = 2f + 1          | High        | Low     | Medium     | Google Chubby, ZooKeeper
Raft       | n = 2f + 1          | High        | Low     | Medium     | IPFS
PBFT       | n = 3f + 1          | Low         | Low     | Low        | Hyperledger Fabric
RBFT       | n = 3f + 1          | High        | Low     | High       | Hyperledger Indy
BFT-SMART  | n = 3f + 1          | High        | Low     | High       | R3 Corda, Symbiont
RPCA       | n = 5f + 1          | High        | Medium  | Medium     | Ripple
SCP        | n = 3f + 1          | High        | Medium  | Low        | Stellar
HotStuff   | n = 3f + 1          | High        | Low     | High       | Libra
Another comparison concerns the communication complexity of the
algorithms surveyed in this paper. The results are shown in Table 3.
Table 3. Communication Complexity of Selected Protocols

Consensus      | Normal Case                        | Leader Failure
-------------- | ---------------------------------- | --------------
Paxos [60]     | O(n^2)                             | -
Raft [33]      | O(n)                               | -
PBFT [39]      | O(n^2)                             | O(n^3)
RBFT [35]      | O(n^3)                             | O(n^3)
BFT-SMART [39] | O(n^2)                             | O(n^3)
RPCA [60]      | O(nK), K is the size of the UNL    | -
SCP [60]       | O(nK), K is the size of the quorum | -
HotStuff [39]  | O(n)                               | O(n)
The Paxos algorithm has high performance and low resource consump-
tion, and it can enable a distributed system to reach consensus when the
number of normal nodes is greater than half of the total. These properties
make Paxos suitable only for distributed systems that require non-Byzantine
fault tolerance; it cannot be used for blockchains that require Byzantine fault
tolerance. Google Chubby [62] is a typical application of the Paxos algorithm,
providing a coarse-grained lock service for a file system that stores a large
number of small files.

Table 4. Pros and Cons

Consensus | Advantages | Disadvantages
--------- | ---------- | -------------
Paxos     | High performance. | Cannot tolerate Byzantine failures; lacks understandability.
Raft      | High performance; improved understandability compared to Paxos. | Cannot tolerate Byzantine failures.
PBFT      | No tokens; high performance; high security. | High communication volume; low operating efficiency when the number of nodes is too large.
RBFT      | Can rapidly detect a malicious primary node. | Cannot deal with closed-loop systems [61].
BFT-SMART | Supports dynamic addition and deletion of nodes; high system throughput. | Cannot prevent malicious nodes from becoming the primary node.
RPCA      | High efficiency; high security. | Low fault tolerance; greater threats to the verification nodes.
SCP       | Fast transaction speeds; low transaction costs; flexible trust. | Can only guarantee safety if enough trusted nodes are available.
HotStuff  | High performance; linear complexity for message communication and validation; no complex view-change mechanism compared to PBFT. | Few implementations.
Like the Paxos algorithm, Raft enables a distributed system to reach
consensus if more than half of the nodes are non-faulty. However, Raft has
exactly one legal leader in any round of consensus. Since the original objective
of Raft was to simplify the Paxos algorithm and produce a more understand-
able and implementable alternative, Raft's fault tolerance, performance
efficiency, degree of decentralization, scalability, and resource consumption
are very similar to those of Paxos.
The PBFT algorithm can tolerate both non-Byzantine and Byzantine
faults by broadcasting to the entire network in each round to reach consensus,
and it allows each node to participate in electing the primary node. This
mechanism ensures that PBFT maintains consistency, availability, and resis-
tance to fraud attacks. However, as the total number of nodes increases, the
total number of broadcast messages grows quadratically, causing performance
to degrade faster than linearly. Therefore, the PBFT algorithm cannot sup-
port large-scale public networks; it is more suitable for consortium and
private blockchains.
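The quadratic message growth can be illustrated with rough per-round counts. This is a back-of-the-envelope sketch, not exact protocol accounting: `pbft_messages` approximates one leader broadcast plus two all-to-all voting phases, and `linear_messages` models a leader-based protocol such as HotStuff.

```python
# Illustrative per-round message totals: PBFT's all-to-all broadcast
# phases grow quadratically in the number of nodes n, while a
# leader-based linear protocol grows linearly.

def pbft_messages(n):
    """Rough normal-case count: one leader broadcast (n - 1 messages)
    plus two all-to-all voting phases (prepare and commit)."""
    return (n - 1) + 2 * n * (n - 1)

def linear_messages(n):
    """Each phase: one leader broadcast plus n - 1 votes back."""
    return 2 * (n - 1)

# Quadrupling n from 4 to 16 multiplies PBFT's traffic by roughly 18x,
# but the linear protocol's by only 5x.
assert pbft_messages(16) / pbft_messages(4) > 16
assert linear_messages(16) / linear_messages(4) == 5
```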
The RBFT algorithm was first proposed for better Byzantine fault
tolerance. In earlier BFT algorithms such as PBFT, Prime [63], Aardvark
[64], and Spinning [65], a malicious primary degrades the performance of
the whole system. RBFT proposes a new model: multiple PBFT protocol
instances are executed in parallel on multi-core machines, and only the
results of the master instance are executed. The performance of each protocol
instance is monitored and compared with that of the master instance. If the
ratio of the master instance's performance to that of the best backup instance
falls below a preselected threshold, the primary node of the master instance
is considered malicious and a replacement process is initiated. When one or
more Byzantine faulty nodes exist in the blockchain network, the maximum
performance degradation of RBFT has been shown to be 3%, better than
that of other protocols: for instance, 80% for Prime, 87% for Aardvark, and
99% for Spinning.
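RBFT's monitoring rule can be sketched as follows. The 0.9 threshold here is a hypothetical value (RBFT makes the threshold a preselected configuration parameter), and `master_is_suspect` is an illustrative name.

```python
# Sketch of RBFT's monitoring rule: if the master instance's throughput
# falls below a preselected fraction of the best backup instance's
# throughput, the master's primary is suspected and replaced.

def master_is_suspect(master_tps, backup_tps_list, threshold=0.9):
    """threshold is a hypothetical default; RBFT leaves it configurable."""
    best_backup = max(backup_tps_list)
    return master_tps < threshold * best_backup

# Master clearly slower than the best backup: suspected malicious.
assert master_is_suspect(500, [900, 950, 1000]) is True
# Master keeping pace with the backups: no replacement triggered.
assert master_is_suspect(980, [900, 950, 1000]) is False
```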
The BFT-SMART algorithm is an improvement on the PBFT algo-
rithm. In addition to consensus, BFT-SMART provides state transfer and
reconfiguration services, supports the addition and deletion of nodes in the
system, and effectively improves the system's performance and efficiency.
Symbiont's platform implements the BFT-SMART algorithm and reaches a
throughput of 8,000 TPS in a 4-node network cluster, in line with the
performance expected in the literature [42].
The advantage of the RPCA algorithm is its relatively high performance
and efficiency: Ripple can generate a block every 3 seconds with a transaction
throughput of up to 1,500 TPS. A disadvantage of RPCA is that its fault
tolerance is lower than that of PBFT-like consensus algorithms. Since RPCA's
adversary model is n = 5f + 1, the total number of nodes required to tolerate
f faulty nodes is greater than in algorithms whose adversary model is
n = 3f + 1. The verification nodes are pre-configured, so the degree of
decentralization is low; at the same time, the reliability of the verification
nodes directly affects the operation of the entire network.
The SCP algorithm is a new consensus mechanism based on Federated
Byzantine Agreement, with four essential attributes: decentralized control,
flexible trust, low latency, and asymptotic security. Unlike other BFT proto-
cols, a transaction in SCP is not verified by all nodes: if any node in a quorum
has verified a transaction, the other nodes trust that node and skip the
verification process. This mechanism allows SCP to process transactions
more quickly than other consensus algorithms used in public blockchains.
SCP emphasizes maintaining network liveness, and instead of a fixed set of
chosen nodes, any node that follows the policy can join other nodes' trust
lists for transactions. The Stellar network running SCP currently comprises
approximately 100 nodes [66].
The HotStuff consensus algorithm draws on features of other BFT-based
consensus algorithms such as PBFT and Tendermint [38] and implements
a new algorithm with safety, liveness, and responsiveness. Responsiveness
allows a blockchain node to confirm blocks quickly when network conditions
are reliable; otherwise, it can wait longer to confirm when network conditions
are limited. HotStuff reduces communication complexity to linear and guar-
antees responsiveness by using threshold signatures, three rounds of voting,
and a chained structure to acknowledge a block [39].
In summary, the advantages and disadvantages of the eight consensus
algorithms are listed in Table 4.
6 Conclusion
Consensus algorithms lie at the core of blockchain and have become a
rapidly emerging area of research. This paper summarizes the working of
eight consortium blockchain consensus algorithms: Paxos, RAFT, PBFT,
RBFT, BFT-SMART, RPCA, SCP, and HotStuff. We discuss five crucial
aspects of the operation of each of these algorithms, namely, fault tolerance,
performance efficiency, decentralization, resource consumption, and
scalability. Our work in this paper lays the groundwork for researchers,
developers, and the blockchain community at large to understand the
current landscape of consensus technologies. The potential of blockchain
to revolutionize use cases in various scenarios from finance to agriculture
relies on the blockchain solution’s ability to achieve a balance between
three overarching objectives: scalability, security, and decentralization.
The choice of consensus algorithm has an outsize impact on the perfor-
mance of blockchain applications. Therefore, ongoing research into the
design and implementation of consensus algorithms will go a long way in
adapting blockchain for diverse applications.
Acknowledgment
The research is partially supported by FHWA EAR 693JJ320C000021.
Bibliography
[1] Satoshi Nakamoto. Bitcoin: A Peer-to-Peer Electronic Cash System.
Cryptography Mailing list at https://metzdowd.com, March 2009.
[2] Melanie Swan. Blockchain: blueprint for a new economy. O'Reilly,
first edition, 2015.
[3] Gavin Wood et al. Ethereum: A secure decentralised generalised
transaction ledger. Ethereum project yellow paper, 151(2014):1–32,
2014.
[4] Elli Androulaki, Artem Barger, Vita Bortnikov, Christian Cachin,
Konstantinos Christidis, Angelo De Caro, David Enyeart, Christo-
pher Ferris, Gennady Laventman, Yacov Manevich, and et al. Hy-
perledger fabric: a distributed operating system for permissioned
blockchains. In Proceedings of the Thirteenth EuroSys Conference,
page 1–15. ACM, Apr 2018.
[5] HyperLedger. Case study: How walmart brought unprecedented
transparency to the food supply chain with hyperledger fabric, Mar
2019.
[6] HyperLedger. Case study: How culedger protects credit unions
against fraud with hyperledger indy, 2020.
[7] HyperLedger. When hyperledger sawtooth met kubernetes - simpli-
fying enterprise blockchain adoption, 2020.
[8] Libra Association Members. Libra White Paper | Blockchain, Asso-
ciation, Reserve, April 2020.
[9] Bitcoin Wiki. Help:faq - bitcoin wiki, 2020.
[10] Jorge Bernal Bernabe, Jose Luis Canovas, Jose L. Hernandez-Ramos,
Rafael Torres Moreno, and Antonio Skarmeta. Privacy-preserving
solutions for blockchain: Review and challenges. IEEE Access,
7:164908–164940, 2019.
[11] Fred B. Schneider. Implementing fault-tolerant services using the
state machine approach: a tutorial. ACM Computing Surveys,
22(4):299–319, Dec 1990.
[12] Wikipedia. Adversary (cryptography), Dec 2020. Page Version ID:
995747901.
[13] Rui Zhang, Rui Xue, and Ling Liu. Security and privacy on
blockchain. arXiv:1903.07602 [cs], Aug 2019. arXiv: 1903.07602.
[14] Yifan Mao, Soubhik Deb, Shaileshh Bojja Venkatakrishnan, Sreeram
Kannan, and Kannan Srinivasan. Perigee: Efficient peer-to-peer net-
work design for blockchains. arXiv:2006.14186 [cs, math, stat], Jun
2020. arXiv: 2006.14186.
[15] Bitcoin Wiki. Protocol rules - bitcoin wiki, 2020.
[16] Hye-Young Paik, Xiwei Xu, H. M. N. Dilum Bandara, Sung Une
Lee, and Sin Kuang Lo. Analysis of data management in blockchain-
based systems: From architecture to governance. IEEE Access,
7:186091–186107, 2019.
[17] Bitcoin. Block chain — bitcoin, 2009.
[18] Michael Szydlo. Merkle tree traversal in log space and time.
In International Conference on the Theory and Applications of
Cryptographic Techniques, pages 541–554. Springer, 2004.
[19] Pham Hoai Luan, Thi Hong Tran, Tri Phan, Duong Le Vu Trung,
Duckhai Lam, and Yasuhiko Nakashima. Double sha-256 hardware
architecture with compact message expander for bitcoin mining.
IEEE Access, 8:1–1, 01 2020.
[20] Ying-Chang Liang. Blockchain for dynamic spectrum management.
In Dynamic Spectrum Management, pages 121–146. Springer, 2020.
[21] Chris Dannen. Introducing Ethereum and Solidity: Foundations of
Cryptocurrency and Blockchain Programming for Beginners. Apress,
2017.
[22] Harald Vranken. Sustainability of bitcoin and blockchains. Current
Opinion in Environmental Sustainability, 28:1 – 9, 2017. Sustain-
ability governance.
[23] Dylan Yaga, Peter Mell, Nik Roby, and Karen Scarfone. Blockchain
technology overview. arXiv:1906.11078 [cs], page NIST IR 8202, Oct
2018. arXiv: 1906.11078.
[24] C. Dods, N. P. Smart, and M. Stam. Hash based digital signature
schemes. In Nigel P. Smart, editor, Cryptography and Coding, page
96–115. Springer Berlin Heidelberg, 2005.
[25] S. Chandra, S. Paira, S. S. Alam, and G. Sanyal. A compara-
tive survey of symmetric and asymmetric key cryptography. In
2014 International Conference on Electronics, Communication and
Computational Engineering (ICECCE), pages 83–93, 2014.
[26] Don Johnson, Alfred Menezes, and Scott Vanstone. The elliptic
curve digital signature algorithm (ecdsa). International Journal of
Information Security, 1(1):36–63, Aug 2001.
[27] Supriya Thakur and Vrushali Kulkarni. Blockchain and its ap-
plications – a detailed survey. International Journal of Computer
Applications, 180(3):29–35, Dec 2017.
[28] S. King and Scott Nadal. Ppcoin: Peer-to-peer crypto-currency with
proof-of-stake. 2012.
[29] VeChain Foundation. Vechain whitepaper, Dec 2019.
[30] HyperLedger Sawtooth. Poet 1.0 specification — sawtooth v1.0.5
documentation, 2015.
[31] Stefan Dziembowski, Sebastian Faust, Vladimir Kolmogorov, and
Krzysztof Pietrzak. Proofs of space. Cryptology ePrint Archive,
Report 2013/796, 2013. https://eprint.iacr.org/2013/796.
[32] L. Lamport. Paxos Made Simple, 2001.
[33] D. Ongaro and J. Ousterhout. In search of an understandable con-
sensus algorithm. In USENIX Annual Technical Conference, 2014.
[34] Miguel Castro, Barbara Liskov, et al. Practical byzantine fault tol-
erance. In OSDI, volume 99, pages 173–186, 1999.
[35] P. Aublin, S. B. Mokhtar, and V. Quéma. RBFT: Redundant Byzan-
tine Fault Tolerance. In 2013 IEEE 33rd International Conference on
Distributed Computing Systems, pages 297–306, July 2013. ISSN:
1063-6927.
[36] D. Schwartz, Noah Youngs, and A. Britto. The ripple protocol con-
sensus algorithm. 2014.
[37] David Mazières. The Stellar Consensus Protocol: A Federated Model
for Internet-level Consensus, 2015.
[38] Jae Kwon. Tendermint: Consensus without mining. Draft v. 0.6, fall,
1(11), 2014.
[39] Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta,
and Ittai Abraham. HotStuff: BFT Consensus in the Lens of
Blockchain. arXiv:1803.05069 [cs], July 2019. arXiv: 1803.05069.
[40] F. P. Junqueira, B. C. Reed, and M. Serafini. Zab: High-performance
broadcast for primary-backup systems. In 2011 IEEE/IFIP 41st
International Conference on Dependable Systems & Networks
(DSN), page 245–256. IEEE, Jun 2011.
[41] Guy Golan Gueta, Ittai Abraham, Shelly Grossman, Dahlia Malkhi,
Benny Pinkas, Michael K. Reiter, Dragos-Adrian Seredinschi, Orr
Tamir, and Alin Tomescu. Sbft: a scalable and decentralized trust
infrastructure. arXiv:1804.01626 [cs], Jan 2019. arXiv: 1804.01626.
[42] A. Bessani, J. Sousa, and E. E. P. Alchieri. State Machine Repli-
cation for the Masses with BFT-SMART. In 2014 44th Annual
IEEE/IFIP International Conference on Dependable Systems and
Networks, pages 355–362, June 2014. ISSN: 2158-3927.
[43] neo project. dbft 2.0 algorithm, 2014.
[44] Wen-Cheng Shi and Jian-Ping Li. Research on consistency of dis-
tributed system based on paxos algorithm. In 2012 International
Conference on Wavelet Active Media Technology and Information
Processing (ICWAMTIP), page 257–259. IEEE, Dec 2012.
[45] Aleksey Charapko, Ailidani Ailijiang, and Murat Demirbas. Bridg-
ing paxos and blockchain consensus. In 2018 IEEE International
Conference on Internet of Things (iThings) and IEEE Green
Computing and Communications (GreenCom) and IEEE Cyber,
Physical and Social Computing (CPSCom) and IEEE Smart Data
(SmartData), pages 1545–1552. IEEE, 2018.
[46] Leslie Lamport, Robert Shostak, and Marshall Pease. The Byzantine
Generals Problem. ACM Transactions on Programming Languages
and Systems, 4(3):20, 1982.
[47] Libo Feng, Hui Zhang, Yong Chen, and Liqi Lou. Scalable dynamic
multi-agent practical byzantine fault-tolerant consensus in permis-
sioned blockchain. Applied Sciences, 8(10):1919, Oct 2018.
[48] Daniel J. Bernstein. The poly1305-aes message-authentication code.
In Henri Gilbert and Helena Handschuh, editors, Fast Software
Encryption, pages 32–49, Berlin, Heidelberg, 2005. Springer Berlin
Heidelberg.
[49] Eduardo Alchieri, Fernando Dotti, Odorico M. Mendizabal, and Fer-
nando Pedone. Reconfiguring parallel state machine replication.
In 2017 IEEE 36th Symposium on Reliable Distributed Systems
(SRDS), page 104–113. IEEE, Sep 2017.
[50] J. Sousa and A. Bessani. From byzantine consensus to bft state
machine replication: A latency-optimal transformation. In 2012
Ninth European Dependable Computing Conference, page 37–48,
May 2012.
[51] Christian Cachin. Yet another visit to paxos. IBM Research, Zurich,
Switzerland, Tech. Rep. RZ3754, 2009.
[52] Brad Chase and Ethan MacBrough. Analysis of the XRP Ledger
Consensus Protocol. arXiv:1802.07242 [cs], February 2018. arXiv:
1802.07242.
[53] David Mazières, Giuliano Losa, and Eli Gafni. Simplified scp, Mar
2019.
[54] T-H Hubert Chan, Rafael Pass, and Elaine Shi. Pala: A sim-
ple partially synchronous blockchain. IACR Cryptol. ePrint Arch.,
2018:981, 2018.
[55] Christian Cachin and Marko Vukolić. Blockchain Consensus Proto-
cols in the Wild. arXiv:1707.01873 [cs], July 2017. arXiv: 1707.01873.
[56] The Ontology Team. Hotstuff: the consensus protocol behind face-
book’s librabft, Sep 2019.
[57] Mehrdad Salimitari and Mainak Chatterjee. A survey on consensus
protocols in blockchain for iot networks. arXiv:1809.05613 [cs], Jun
2019. arXiv: 1809.05613.
[58] Omar Dib, Kei-Léo Brousmiche, Antoine Durand, Eric Thea, and
Elyes Hamida. Consortium blockchains: Overview, applications and
challenges, September 2018.
[59] Yaqin Wu, Pengxin Song, and Fuxin Wang. Hybrid consensus al-
gorithm optimization: A mathematical method based on pos and
pbft and its application in blockchain. Mathematical Problems in
Engineering, 2020, 2020.
[60] Yang Xiao, Ning Zhang, Wenjing Lou, and Y. Thomas Hou. A
survey of distributed consensus protocols for blockchain networks.
arXiv:1904.04098 [cs], Apr 2019. arXiv: 1904.04098.
[61] Roberto Casado-Vara, Pablo Chamoso, Fernando De la Prieta, Javier
Prieto, and Juan M. Corchado. Non-linear adaptive closed-loop con-
trol system for improved efficiency in iot-blockchain management.
Information Fusion, 49:227–239, Sep 2019.
[62] Mike Burrows. The chubby lock service for loosely-coupled dis-
tributed systems. In 7th USENIX Symposium on Operating Systems
Design and Implementation (OSDI), 2006.
[63] Y Amir, B Coan, J Kirsch, and J Lane. Prime: Byzantine replica-
tion under attack. IEEE Transactions on Dependable and Secure
Computing, 8(4):564–577, Jul 2011.
[64] Allen Clement, Edmund L. Wong, Lorenzo Alvisi, Michael Dahlin,
and Mirco Marchetti. Making byzantine fault tolerant systems tol-
erate byzantine faults. In Jennifer Rexford and Emin Gün Sirer,
editors, Proceedings of the 6th USENIX Symposium on Networked
Systems Design and Implementation, NSDI 2009, April 22-24, 2009,
Boston, MA, USA, pages 153–168. USENIX Association, 2009.
[65] Giuliana Santos Veronese, Miguel Correia, Alysson Neves Bessani,
and Lau Cheuk Lung. Spin one’s wheels? byzantine fault toler-
ance with a spinning primary. In 2009 28th IEEE International
Symposium on Reliable Distributed Systems, page 135–144. IEEE,
Sep 2009.
[66] Christian Berger and Hans P. Reiser. Scaling byzantine consensus: A
broad analysis. In Proceedings of the 2nd Workshop on Scalable and
Resilient Infrastructures for Distributed Ledgers, page 13–18. ACM,
Dec 2018.