Chapter

From Byzantine Consensus to Blockchain Consensus

Authors:
  • Miguel Correia, Instituto Superior Técnico, University of Lisbon

... , which in distributed-systems jargon means that such systems do not allow bad behavior from participants (bad things do not happen) and that desired behavior is eventually processed by the system (good things happen) [29]. How these properties are realized depends on the desired level of decentralization, the fundamental property of blockchains, and on the implementation specifics. ...
... The blockchain trilemma (cf. Figure 1), postulated by one of Ethereum's founders [19], states that blockchains face an inherent trade-off between security, scalability, and decentralization. As an equivalent of the CAP theorem for blockchains, the core property chosen is typically security, implemented through consensus algorithms, crypto-economics, formal modeling, and results from distributed systems research (namely crash fault-tolerant and Byzantine fault-tolerant algorithms [29,90]). Typically, the more nodes involved in a peer-to-peer network, the harder it is to corrupt, but the slower consensus becomes (intuitively, more nodes mean more messages exchanged and therefore higher overall communication latency). ...
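As a rough, illustrative sketch of that intuition (ours, not part of the chapter), the Python snippet below counts the messages a PBFT-style all-to-all protocol exchanges per decision; the quadratic growth is one reason adding nodes slows consensus down.

# Rough illustration (not from the chapter): quadratic message growth in a
# PBFT-style protocol, one intuition behind the decentralization-vs-scalability tension.

def pbft_messages(n: int) -> int:
    # Simplified count: one leader broadcast (pre-prepare) plus two
    # roughly all-to-all phases (prepare and commit).
    return (n - 1) + 2 * n * (n - 1)

for n in (4, 16, 64, 256):
    print(f"n={n:4d}  messages per decision ~ {pbft_messages(n):,}")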
Article
A deep dive into blockchain interoperability: why it is needed, progress that has been made over the past decade, how it is currently deployed and used, and likely paths of future development.
... Blockchain systems ought to be resilient to faults (e.g., crash fault-tolerant or Byzantine fault-tolerant), as there may be crashes or malicious nodes on the network [54]. They run a consensus algorithm to create agreement on a global ledger state in the presence of Byzantine faults. ...
... They run a consensus algorithm to create agreement on a global ledger state in the presence of Byzantine faults. Consensus algorithms are important because they define the behavior of blockchain nodes and their interaction [54,199], and the security assumptions of each blockchain. ...
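To make the fault assumptions concrete, here is a minimal illustration (ours, not code from the cited works) of the standard Byzantine quorum arithmetic: with n = 3f + 1 replicas, any two quorums of 2f + 1 replicas intersect in at least f + 1 replicas, so they always share at least one correct replica.

# Minimal sketch (illustration only): Byzantine quorum sizes and their
# intersection guarantee for n = 3f + 1 replicas.

def quorum_size(n: int, f: int) -> int:
    # A quorum must be large enough that any two quorums overlap in > f replicas.
    return 2 * f + 1

for f in range(1, 6):
    n = 3 * f + 1
    q = quorum_size(n, f)
    overlap = 2 * q - n          # minimum intersection of two quorums
    assert overlap >= f + 1      # so the intersection contains a correct replica
    print(f"f={f}  n={n}  quorum={q}  min overlap={overlap}")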
Article
Blockchain interoperability is emerging as one of the crucial features of blockchain technology, but the knowledge necessary for achieving it is fragmented. This fact makes it challenging for academics and the industry to achieve interoperability among blockchains seamlessly. Given this new domain’s novelty and potential, we conduct a literature review on blockchain interoperability by collecting 284 papers and 120 grey literature documents, constituting a corpus of 404 documents. From those 404 documents, we systematically analyzed and discussed 102 documents, including peer-reviewed papers and grey literature. Our review classifies studies in three categories: Public Connectors, Blockchain of Blockchains, and Hybrid Connectors. Each category is further divided into sub-categories based on defined criteria. We classify 67 existing solutions in one sub-category using the Blockchain Interoperability Framework, providing a holistic overview of blockchain interoperability. Our findings show that blockchain interoperability has a much broader spectrum than cryptocurrencies and cross-chain asset transfers. Finally, this article discusses supporting technologies, standards, use cases, open challenges, and future research directions, paving the way for research in the area.
... This guarantees that all data in the network are identical and cannot be changed without agreement. The most common consensus rules applied in blockchain technology are economic-incentive algorithms, Proof of Work (PoW) and Proof of Stake (PoS) and their modifications [1,2], as well as algorithms that provide mathematically grounded security guarantees based on Byzantine Fault Tolerance (BFT) [3,4]. ...
Article
The concept of the new generation of the Internet is based on decentralization and is today widely implemented in cryptocurrency tokens and in information systems based on blockchain technology. The purpose of the article is to investigate the impact of algorithms for confirming the authenticity of information on the effectiveness of corporate information systems that use blockchain technology, through an analysis of existing consensus concepts. The scheme of a corporate document-circulation information system using blockchain technology, proposed by the authors in previous works, is based on ensuring the decentralization of the system and the integrity of data regarding the preservation and revision of the institution's documents. To automate the consensus process using a smart contract, the system uses a dynamic consensus.
... As defined by [27], blockchain is a distributed ledger that can record transactions between two (or more) parties efficiently and in a verifiable and permanent way. Blockchain, in its original form, is a distributed database technology that utilizes a tamper-proof list of transaction records with timestamps. ...
... Its importance stems from the fact that Byzantine agreement lies at the heart of state machine replication [1,14,15,31,76,77,83,86,95], distributed key generation [7,48,75,92], secure multi-party computation [50,61,65], as well as various distributed services [62,69]. Recent years have witnessed a renewed interest in Byzantine agreement due to the emergence of blockchain systems [8,9,26,46,47,68,81]. Formally, the agreement problem is defined in a distributed system of n processes; up to t < n processes can be faulty, whereas the rest are correct. ...
... Byzantine Agreement (BA) is a core primitive of distributed computing [60]. It is indispensable to state machine replication (SMR) [1,16,50,56], blockchain systems [3,14,30,31,41], and various other distributed protocols [7,37,42,44]. In BA, processes propose an L-bit value and agree on an L-bit value, while tolerating up to t arbitrary failures. ...
... This significant market value underscores the global acceptance of and reliance on blockchain technology, underpinning the operation of cryptocurrencies and other relevant use cases, such as social impact [142], [143], decentralized finance [144]-[146], decentralized identity [147], [148], process optimization [149], [150], and many more [151]. Indeed, the development of blockchain technologies fed back into traditional distributed systems [152] and cryptography research [153], highlighting the importance of this new discipline. ...
Preprint
The field of blockchain interoperability plays a pivotal role in blockchain adoption. Despite these advances, a notorious problem persists: the high number and success rate of attacks on blockchain bridges. We propose Harmonia, a framework for building robust, secure, efficient, and decentralized cross-chain applications. A main component of Harmonia is DendrETH, a decentralized and efficient zero-knowledge proof-based light client. DendrETH mitigates security problems by lowering the attack surface by relying on the properties of zero-knowledge proofs. The DendrETH instance of this paper is an improvement of Ethereum’s light client sync protocol that fixes critical security flaws. This light client protocol is implemented as a smart contract, allowing blockchains to read the state of the source blockchain in a trust-minimized way. Harmonia and DendrETH support several cross-chain use cases, such as secure cross-blockchain bridges (asset transfers) and smart contract migrations (data transfers), without a trusted operator. We implemented Harmonia in 9K lines of code. Our implementation is compatible with the Ethereum Virtual Machine (EVM) based chains and some non-EVM chains. Our experimental evaluation shows that Harmonia can generate light client updates with reasonable latency, costs (a dozen to a few thousand US dollars per year), and minimal storage requirements (around 4.5 MB per year). We also carried out experiments to evaluate the security of DendrETH. We provide an open-source implementation and reproducible environment for researchers and practitioners to replicate our results.
... was developed by Eric Brewer at the end of the last century. These blockchain properties are implemented in a peer-to-peer network of distributed ledgers by means of consensus algorithms, cryptography, and hash functions (Correia, 2019; Zhang et al., 2019). ...
Article
Objective: literature review and basic characteristics of the asset tokenization; clarification of the types of token classification; identification of the stages of modeling the asset tokenization; analysis of applications of decentralized finance ecosystem protocols; study of the opportunities and systemic advantages of asset tokenization; presentation of the problems arising in the asset tokenization; analysis of the factors of asset tokenization efficiency growth. Methods: the article uses empirical, historical, logical, country-specific, corporate, comparative and statistical methods of economic analysis to study the peculiarities of the asset tokenization development in the digital transformation of modern economy. Results: the basic characteristics of the asset tokenization are disclosed; the types of standardized tokens involved in the asset tokenization are defined; the stages of the asset tokenization development are considered; the options of using decentralized finance applications under asset tokenization are shown; the opportunities of tokenization through the new forms of investment, increased financial accessibility, transparency and componentization of tokenized assets are studied; the problems of tokenization are analyzed; the factors of asset tokenization efficiency growth under the cross-chain compatibility of different types of blockchains are analyzed. Scientific novelty: the article shows that asset tokenization is a process of accounting and asset management transformation, in which each asset is represented in the form of a programmable digital token; tokenization is a new form of creating additional liquidity by expanding the circulation of idle illiquid assets. Tokenization guarantees greater transparency regarding the rights to real assets and the history of ownership of these rights; it contributes to transaction efficiency by reducing transaction costs, including costs associated with management, token issuance and possible forms of intermediation. By accessing the applications of the DeFi ecosystem, it allows the expansion of financial market potential through the fragmentation and compartmentalization of tokenized assets. All the challenges in the asset tokenization are related to the blockchain trilemma, where decentralization, security and scalability cannot be implemented together. The blockchain trilemma is now becoming a set of possible trade-offs that can preserve all three properties of the blockchain, but at different levels of compatibility. To form a set of possible trade-offs, it is necessary to develop a theory of interoperability, which should be built on the compatibility of factors such as anonymity and privacy, security and preservation of rights to tokenized assets. Practical significance: the main provisions and conclusions of the article can be used: to develop scenarios for the asset tokenization development under the digital transformation of modern economy; to analyze the applications of the decentralized finance ecosystem protocols; to increase the efficiency of asset tokenization under the cross-chain compatibility of different types of blockchains; to study additional opportunities and systemic advantages as a result of fragmentation and compatibility of tokenized assets; to study the problems arising in the asset tokenization; and to search for additional growth factors for the asset tokenization efficiency.
... Consensus [57] is the cornerstone of state machine replication (SMR) [1,9,10,22,53,54,63,68,86], as well as various distributed algorithms [14,42,46,47]. Recently, it has received a lot of attention with the advent of blockchain systems [6,7,18,29,31,45,61]. The consensus problem is posed in a system of n processes, out of which t < n can be faulty, and the rest correct. ...
... Byzantine consensus [64] is a fundamental primitive in distributed computing. It has recently risen to prominence due to its use in blockchains [69,22,4,32,5,40,38] and various forms of state machine replication (SMR) [8,30,63,1,11,62,86,70,73]. At the same time, the performance of these applications has become directly tied to the performance of consensus and its efficient use of network resources. ...
Preprint
Full-text available
Consensus enables n processes to agree on a common valid L-bit value, despite t < n/3 processes being faulty and acting arbitrarily. A long line of work has been dedicated to improving the worst-case communication complexity of consensus in partial synchrony. This has recently culminated in the worst-case word complexity of O(n^2). However, the worst-case bit complexity of the best solution is still O(n^2 L + n^2 κ) (where κ is the security parameter), far from the Ω(n L + n^2) lower bound. The gap is significant given the practical use of consensus primitives, where values typically consist of batches of large size (L > n). This paper shows how to narrow the aforementioned gap while achieving optimal linear latency. Namely, we present a new algorithm, DARE (Disperse, Agree, REtrieve), that improves upon the O(n^2 L) term via a novel dispersal primitive. DARE achieves O(n^1.5 L + n^2.5 κ) bit complexity, an effective √n-factor improvement over the state-of-the-art (when L > n κ). Moreover, we show that employing heavier cryptographic primitives, namely STARK proofs, allows us to devise DARE-Stark, a version of DARE which achieves the near-optimal bit complexity of O(n L + n^2 poly(κ)). Both DARE and DARE-Stark achieve optimal O(n) latency.
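A back-of-the-envelope comparison (ours, with illustrative parameters and constants ignored) of the complexity terms quoted in the abstract helps visualize the claimed √n-factor gain.

# Back-of-the-envelope comparison of the asymptotic terms quoted in the abstract
# (constants ignored; the parameters below are illustrative, not from the paper).

n = 1_000            # number of processes (illustrative)
kappa = 256          # security parameter in bits (illustrative)
L = 10 * n * kappa   # a large batch, satisfying L > n * kappa

prior      = n**2 * L + n**2 * kappa       # O(n^2 L + n^2 kappa)
dare       = n**1.5 * L + n**2.5 * kappa   # O(n^1.5 L + n^2.5 kappa)
dare_stark = n * L + n**2 * kappa          # O(n L + n^2 poly(kappa)), poly(kappa) taken as kappa here

for name, bits in (("prior work", prior), ("DARE", dare), ("DARE-Stark", dare_stark)):
    print(f"{name:10s} ~ {bits:.2e} bits per consensus instance")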
... Consensus [50] is the cornerstone of state machine replication (SMR) [1,8,9,21,46,47,57,61,77], as well as various distributed protocols [13,37,39,40]. Recently, it has received a lot of attention with the advent of blockchain systems [5,6,17,26,28,38,55]. The consensus problem is posed in a system of n processes, out of which t < n can be faulty, and the rest are correct. ...
Preprint
Full-text available
The Byzantine consensus problem involves n processes, out of which t < n could be faulty and behave arbitrarily. Three properties characterize consensus: (1) termination, requiring correct (non-faulty) processes to eventually reach a decision, (2) agreement, preventing them from deciding different values, and (3) validity, precluding "unreasonable" decisions. But, what is a reasonable decision? Strong validity, a classical property, stipulates that, if all correct processes propose the same value, only that value can be decided. Weak validity, another established property, stipulates that, if all processes are correct and they propose the same value, that value must be decided. The space of possible validity properties is vast. However, their impact on consensus remains unclear. This paper addresses the question of which validity properties allow Byzantine consensus to be solvable with partial synchrony, and at what cost. First, we determine necessary and sufficient conditions for a validity property to make the consensus problem solvable; we say that such validity properties are solvable. Notably, we prove that, if n ≤ 3t, all solvable validity properties are trivial (there exists an always-admissible decision). Furthermore, we show that, with any non-trivial (and solvable) validity property, consensus requires Ω(t^2) messages. This extends the seminal Dolev-Reischuk bound, originally proven for strong validity, to all non-trivial validity properties. Lastly, we give a general Byzantine consensus algorithm, which we call Universal, for any solvable (and non-trivial) validity property. Importantly, Universal incurs O(n^2) message complexity. Thus, together with our lower bound, Universal implies a fundamental result in partial synchrony: with t ∈ Ω(n), the message complexity of all (non-trivial) consensus variants is Θ(n^2).
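The two classical validity properties mentioned above can be stated as small predicates; the sketch below is our own illustration, not the paper's formalism.

# Illustration (not the paper's formalism) of two classical validity properties.
# proposals: dict process -> proposed value; correct: set of correct processes.

def strong_validity_ok(decided, proposals, correct):
    correct_values = {proposals[p] for p in correct}
    if len(correct_values) == 1:            # all correct processes proposed the same v
        return decided in correct_values    # ... then only v may be decided
    return True                             # otherwise no constraint

def weak_validity_ok(decided, proposals, correct):
    if set(proposals) == correct:           # all processes are correct
        values = set(proposals.values())
        if len(values) == 1:                # ... and they proposed the same v
            return decided in values
    return True

props = {1: "a", 2: "a", 3: "a", 4: "b"}    # process 4 is faulty
print(strong_validity_ok("a", props, {1, 2, 3}))   # True
print(strong_validity_ok("b", props, {1, 2, 3}))   # False: all correct processes proposed "a"
print(weak_validity_ok("b", props, {1, 2, 3}))     # True: not all processes are correct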
... The Fabric blockchain, together with its implementation client and library, has prevailed due to its Crash Fault Tolerance (CFT) consensus mechanisms. Although Byzantine Fault Tolerance (BFT) mechanisms are excellent against malicious actors, CFT validates more transactions using fewer peers and recovers from system crashes as long as a majority of its 2f + 1 nodes remains available [5]. The Proof of Work (PoW), Proof of Authority (PoA), Proof of Stake (PoS) and Delegated Proof of Stake (DPoS) consensus mechanisms correspond to BFT instances, whereas Kafka and Raft correspond to CFT. ...
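The replica counts behind that comparison follow the standard bounds: tolerating f crash faults needs n ≥ 2f + 1 replicas, whereas tolerating f Byzantine faults needs n ≥ 3f + 1. A minimal sketch of our own:

# Standard minimum cluster sizes to tolerate f faults (illustrative sketch).

def min_replicas_cft(f: int) -> int:
    return 2 * f + 1   # crash faults (e.g., Raft/Kafka-style ordering)

def min_replicas_bft(f: int) -> int:
    return 3 * f + 1   # Byzantine faults (e.g., PBFT-style ordering)

print(" f  CFT  BFT")
for f in range(1, 5):
    print(f"{f:2d}  {min_replicas_cft(f):3d}  {min_replicas_bft(f):3d}")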
Chapter
CloudAnchor is a multi-agent brokerage platform for the negotiation of Infrastructure as a Service cloud resources between Small and Medium Sized Enterprises, acting either as providers or consumers. This project entails the research, design, and implementation of a smart contract solution to permanently record and manage contractual and behavioural stakeholder data on a blockchain network. Smart contracts enable safe contract code execution, increasing trust between parties and ensuring the integrity and traceability of the chained contents. The defined smart contracts represent the inter-business trustworthiness and Service Level Agreements established within the platform. CloudAnchor interacts with the blockchain network through a dedicated Application Programming Interface, which coordinates and optimises the submission of transactions. The performed tests indicate the success of this integration: (i) the number and value of negotiated resources remain identical; and (ii) the run-time increases due to the inherent latency of the blockchain operation. Nonetheless, the introduced latency does not affect the brokerage performance, proving to be an appropriate solution for reliable partner selection and contractual enforcement between untrusted parties. This novel approach stores all brokerage strategic knowledge in a distributed, decentralised, and immutable database. Keywords: Brokerage; IaaS; Multi-agent; Negotiation; Profiling; Smart contracts; Service level agreements; Trust & reputation
... The Proof-of-Stake (PoS) algorithm is expected to replace PoW in blockchain-based IoT (BC-IoT) systems because of PoW's high power consumption [53]. As an alternative to computationally costly puzzle-solving, PoS manages block validation in a different way [62]. There, "miners" are replaced by "validators," and one of the validators is chosen to publish a block on the BC; the distinction lies in how the validator is picked. ...
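A minimal sketch (ours; real PoS protocols add verifiable randomness, epochs, and slashing) of the core selection rule: a validator is picked with probability proportional to its stake.

import random

# Minimal sketch of stake-weighted validator selection (illustrative only;
# production PoS protocols add verifiable randomness, slashing, epochs, etc.).

stakes = {"alice": 50, "bob": 30, "carol": 20}

def pick_validator(stakes: dict, rng: random.Random) -> str:
    validators, weights = zip(*stakes.items())
    return rng.choices(validators, weights=weights, k=1)[0]

rng = random.Random(42)                      # fixed seed for reproducibility
picks = [pick_validator(stakes, rng) for _ in range(10_000)]
for v in stakes:
    print(v, picks.count(v) / len(picks))    # roughly proportional to stake share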
Article
Technological advancements have always influenced our lives. Recently, the Internet of Things (IoT) and Blockchain (BC) have emerged as potentially disruptive technologies. The IoT is a system of inter-related devices with unique identifiers for data sharing and device management and control. IoT is based on the integration of traditional technologies including embedded systems, wireless sensor networks, control systems, and automation, and the concept continues to evolve with the convergence of multiple technologies including real-time analytics, machine learning, commodity sensors, and embedded systems. On the other hand, BC technology is a distributed ledger used to maintain the transaction logs of a network, and it has started revolutionizing data provenance, storage, and secure, traceable transaction management systems. There is limited use of blockchain technology for a fully decentralized, untrusted, and secure environment in the field of IoT. This article reviews the current state-of-the-art in blockchain technology and its current utilization in different application domains of IoT. Furthermore, it presents the use of blockchain technology with distributed ledger technology (DLT) and IoT. Similarly, the notable challenges of BC and IoT integration are presented. To the best of our knowledge, there is no SLR available that provides a comprehensive review in this domain. Applying blockchain to solve IoT problems improves IoT security. Moreover, a taxonomy of application domains that can be integrated with BC and IoT is presented. The article identifies and discusses open research issues and challenges that need to be addressed to harness the potential of BC technology for IoT.
... One of the practical applications of view integration is blockchain. Blockchain is an emerging technology that provides decentralized, immutable, append-only data storage (Correia, 2019; Peck, 2017). On top of such secure storage, a computing framework can be maintained by a network of untrusted participants (or nodes) via smart contracts (Belchior et al., 2020). ...
Article
Purpose The complexity of business environments often causes organizations to produce several inconsistent views of the same business process (BP), leading to fragmentation. BP view integration attempts to produce an integrated view from different views of the same model, facilitating the management of BP models. Design/methodology/approach To study the trends of BP view integration, the authors conduct an extensive and systematic literature review to summarize findings since the 1970s. With a starting corpus of 918 documents, this survey draws up a systematic inventory of solutions used in academia and industry. By narrowing it down to 71 articles, the authors discuss in-depth 17 BP integration techniques papers, classifying each solution according to 9 criteria. Findings The authors' study shows that most view-integration methods (11) utilize annotation-based matching, based on formal merging rules. While most solutions are formalized, only approximately half are validated with a real-world use case scenario. View integration can be applied to areas other than database schema integration and BP view integration. Practical implications By summarizing existing knowledge up to June 2021, the authors explore possible future research directions. The authors highlight the application of view integration to the blockchain research area, where stakeholders can have different views on the same blockchain. The authors expect that this study contributes to interdisciplinary research across view integration, namely to the context of blockchain. Originality/value This survey serves to pave the way for future trends, where the authors highlight the application of view integration to blockchain research.
... This technology is also referred to as a distributed ledger because the blockchain ledger is stored on multiple participating computers rather than on a single central server. The consistency of this ledger is maintained using consensus algorithms [3]. Apart from the distributed nature of blockchain, some of its characteristic features are as follows: ...
Chapter
Full-text available
Data in IoT domains is extensively analysed, and information is mined from it as required. The results from the devices are then shared among the interested devices for better experience and efficiency. Sharing of data is fundamental in any IoT platform, which increases the probability of an adversary gaining access to the data. Blockchain, which consists of blocks that are connected by means of cryptographic hashes, with SHA-256 being the most popularly used hash function in blockchain networks, is a newly adopted technology for secure sharing of data in IoT domains. A number of challenges involving the integration of blockchain in IoT have to be addressed before it can ultimately provide a secure mechanism for data sharing among IoT devices.
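The hash-linking described above can be illustrated in a few lines; the sketch below (ours, omitting consensus, signatures, and Merkle trees) chains blocks with SHA-256 so that tampering with any block invalidates the link that follows it.

import hashlib, json

# Illustrative hash-linked chain using SHA-256 (not a full blockchain:
# no consensus, signatures, or Merkle trees).

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "data": data})

def verify(chain: list) -> bool:
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
for d in ("tx1", "tx2", "tx3"):
    append_block(chain, d)
print(verify(chain))           # True
chain[1]["data"] = "tampered"
print(verify(chain))           # False: the next block's link no longer matches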
... Technical consensus is a fundamental component of a blockchain network [8], and its properties are underpinned by the application, at the protocol layer, of algorithms that address bad-actor attacks on information ledgers, such as 51% attacks [9] and Sybil attacks, and that tackle variants of the Byzantine Generals problem [10][11][12][13][14]. These consensus protocols are either probabilistic-finality protocols, which maximize the probability of consensus between a network of computers running the algorithm and minimize the probability of attack [8], [15][16][17], or absolute-finality protocols, which guarantee that a newly added transaction is immediately finalized in the blockchain network [8]. A technical consensus information system provides strong guarantees about data integrity once data is on the ledger. ...
Article
Attaining "Common Knowledge" is essential in the formation of deals and agreements, but is hard to achieve over point-to-point communication networks such as the Internet. While blockchain technologies provide a helpful mechanism in the form of technical consensus, originally created to resolve the double-spending problem, little does it address impediments to proper contract formation that have to do with the quality of exogenous data, asymmetry of information, or even its suitability (the colloquial garbage-in/garbage-out problem). In the face of the limitations of technical consensus, we explore how the strengths of technical consensus can be utilized by introducing another dimension of consensus-what we have called social consensus-which directly addresses the formation of true (if retrospective) Common Knowledge essential to arms-length agreement between multiple actors. We have eschewed the notion that social consensus as an epistemological concern, and instead focus on mechanisms by which environments of multiple actors form and sustain bodies of information that enable them to go about their activities with a reasonable level of confidence that others in the supply chain will 'do their bit,' backed by a 'failsafe' mechanism enforced by cryptography which holds the actors unharmed in case the envisioned finality is not obtained. These bodies of information create what is essentially a retrospective form of common knowledge, and a lesser variant, mutual knowledge. We propose an implementation from the preferred mechanism of a multisig primitive, which builds and improves on the Gnosis multisig, in the use case of decentralized blockchain-powered supply chains consisting of multiple parties, human and machine.
... It is a distributed network whose nodes, arranged in a peer-to-peer topology, are the main executors of all the necessary transactions in a blockchain. Consensus algorithms [1] are used in a blockchain network to ensure the consistency of the blockchain ledger. The general architecture of blockchain consists of various transaction blocks, as shown in Fig. 1, connected/linked using hashes [2], most commonly SHA-256 [3]. ...
Article
Full-text available
The significance of an agile and widespread healthcare system was evident from the recent pandemic, Covid-19. Healthcare encompasses different stakeholders and many domains, including pharmaceutical supply chain management (SCM), electronic medical records, patient histories, clinical trial results, imaging and scans, insurance claim records, and doctors' information. Currently, there are many domains in healthcare with challenges and issues where improvements are due. Blockchain is the single most important technology that could be integrated with healthcare to enhance capabilities in every aspect of the existing system. In this paper, the benefits of integrating blockchain with healthcare are discussed. A discussion of the challenges and possible solutions in healthcare using blockchain is also given.
... The application area of blockchain technology goes beyond cryptocurrencies and has attracted various fields such as healthcare, e-voting, fraud detection, insurance, supply chain, etc. Blockchain technology is decentralized in nature, as there is no single authority that governs the network; rather, it consists of peer-to-peer nodes which are responsible for performing the operations in a blockchain. It consists of a distributed ledger that is shared among the participating nodes in the network and is kept consistent by means of consensus algorithms [2]. Blockchain is characterized by features such as being immutable, distributed, anonymous, transparent, and secure. ...
Conference Paper
Full-text available
Blockchain is a decentralized network consisting of peer-to-peer nodes which perform all the operations within the network without the need for a central authority, unlike IoT, where information exchange, validation and authentication of data are done using a central authority. The conventional security and privacy protocols in place for IoT suffer due to decentralization and the limited resource capabilities of IoT devices. Blockchain in IoT, with its cryptographic functions and decentralized nature, could prove significantly useful. This paper discusses the motivation for IoT using Blockchain, the attacks and challenges in Blockchain, the challenges in the integration of Blockchain in IoT, and the opportunities of such an integration. Some applications of Blockchain in IoT are also discussed to provide an insight into the future prospects of this integrated technology.
... Therefore, hundreds of slightly different new cryptocurrencies have been appearing, with emphasis on different problems, from faster transactions to better anonymity or stability. This flourishing ecosystem has also led to much research on different aspects of cryptocurrencies, from different networks [2], [3], [7]-[9] to attacks [10], [11], consensus algorithms [12], [13], and many other topics. ...
... It has to be modular to support expansions, and scalable to support an increasing number of players. Blockchain technology offers solutions to these challenges by providing decentralized, immutable (append-only), transparent, and trustworthy storage of data [4]-[7]. Furthermore, it allows organizations to remove intermediaries, reducing the time spent going through third parties, while also lowering the cost of transactions. ...
Article
Entrepreneurs, enterprises, and governments are using distributed ledger technology (DLT) as a component of complex information systems, and therefore interoperability capabilities are required. Interoperating DLTs enables network effects, synergies and, similarly to the rise of the Internet, it unlocks the full potential of the technology. However, due to the novelty of the area, interoperability mechanisms (IM) are still not well understood, as interoperability is studied in silos. Consequently, choosing the proper IM for a use case is challenging. Our paper has three contributions: first, we systematically study the research area of DLT interoperability by dissecting and analyzing previous work. We study the logical separation of interoperability layers, how a DLT can connect to others (connection mode), the object of interoperation (interoperation mode), and propose a new categorization for IMs. Second, we propose the first interoperability assessment for DLTs that systematically evaluates the interoperability degree of an IM. This framework allows comparing the potentiality, compatibility, and performance among solutions. Finally, we propose two decision models to assist in choosing an IM, considering different requirements. The first decision model assists in choosing the infrastructure of an IM, while the second decision model assists in choosing its functionality.
Chapter
As a promising emerging technology, Web3.0 has become the focus of more and more manufacturers and researchers. Web3.0 is an integration of network readability, writability, and authenticity. It is not only a new Internet architecture that integrates multiple rising technologies based on decentralization, but also an Internet infrastructure owned and trusted by each individual user. It reshapes the relationship between users and applications by storing data on the network, rather than on specific servers owned by large service providers, which means that anyone can use this data without creating access credentials or obtaining permission from those monopolistic providers. This vision paper will first review the way current network services work, then introduce some key technologies closely related to Web3.0, and finally point out future research directions and potential opportunities, which are expected to give researchers a better understanding of Web3.0.
Conference Paper
Full-text available
Designing a secure permissionless distributed ledger (blockchain) that performs on par with centralized payment processors, such as Visa, is a challenging task. Most existing distributed ledgers are unable to scale-out, i.e., to grow their total processing capacity with the number of validators; and those that do, compromise security or decentralization. We present OmniLedger, a novel scale-out distributed ledger that preserves long-term security under permissionless operation. It ensures security and correctness by using a bias-resistant public-randomness protocol for choosing large, statistically representative shards that process transactions, and by introducing an efficient cross-shard commit protocol that atomically handles transactions affecting multiple shards. OmniLedger also optimizes performance via parallel intra-shard transaction processing, ledger pruning via collectively-signed state blocks, and low-latency "trust-but-verify" validation for low-value transactions. An evaluation of our experimental prototype shows that OmniLedger's throughput scales linearly in the number of active validators, supporting Visa-level workloads and beyond, while confirming typical transactions in under two seconds.
Conference Paper
Full-text available
We briefly describe the preliminary work on the design, implementation and evaluation of a Byzantine-fault tolerant ordering service for the Hyperledger Fabric Blockchain platform using the BFT-SMaRt replication library.
Conference Paper
Full-text available
Scaling the transaction throughput of decentralized blockchain ledgers such as Bitcoin and Ethereum has been an ongoing challenge. Two-party duplex payment channels have been designed and used as building blocks to construct linked payment networks, which allow atomic and trust-free payments between parties without exhausting the resources of the blockchain. Once a payment channel, however, is depleted (e.g., because transactions were mostly unidirectional) the channel would need to be closed and re-funded to allow for new transactions. Users are envisioned to entertain multiple payment channels with different entities, and as such, instead of refunding a channel (which incurs costly on-chain transactions), a user should be able to leverage his existing channels to rebalance a poorly funded channel. To the best of our knowledge, we present the first solution that allows an arbitrary set of users in a payment channel network to securely rebalance their channels, according to the preferences of the channel owners. Except in the case of disputes (similar to conventional payment channels), our solution does not require on-chain transactions and therefore increases the scalability of existing blockchains. In our security analysis, we show that an honest participant cannot lose any of its funds while rebalancing. We finally provide a proof of concept implementation and evaluation for the Ethereum network.
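A toy example (ours, abstracting away signatures and the on-chain dispute path) of what off-chain rebalancing means: funds are shifted along a cycle of existing channels so that every participant's total balance is preserved while depleted channel directions are refilled.

# Toy rebalancing along a cycle of payment channels (illustrative; the actual
# protocol adds signatures and an on-chain dispute path).
# channels[(a, b)] = balance that a holds in the a<->b channel.

channels = {
    ("A", "B"): 9, ("B", "A"): 1,
    ("B", "C"): 9, ("C", "B"): 1,
    ("C", "A"): 9, ("A", "C"): 1,
}

def total(user: str) -> int:
    return sum(bal for (owner, _), bal in channels.items() if owner == user)

before = {u: total(u) for u in "ABC"}

# Shift 4 units along the cycle A->B->C->A, entirely inside existing channels.
amount = 4
for payer, payee in (("A", "B"), ("B", "C"), ("C", "A")):
    assert channels[(payer, payee)] >= amount
    channels[(payer, payee)] -= amount
    channels[(payee, payer)] += amount

after = {u: total(u) for u in "ABC"}
print(before == after)                              # True: nobody gains or loses funds
print(channels[("B", "A")], channels[("A", "C")])   # depleted directions are refilled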
Conference Paper
Full-text available
Algorand is a new cryptocurrency that confirms transactions with latency on the order of a minute while scaling to many users. Algorand ensures that users never have divergent views of confirmed transactions, even if some of the users are malicious and the network is temporarily partitioned. In contrast, existing cryptocurrencies allow for temporary forks and therefore require a long time, on the order of an hour, to confirm transactions with high confidence. Algorand uses a new Byzantine Agreement (BA) protocol to reach consensus among users on the next set of transactions. To scale the consensus to many users, Algorand uses a novel mechanism based on Verifiable Random Functions that allows users to privately check whether they are selected to participate in the BA to agree on the next set of transactions, and to include a proof of their selection in their network messages. In Algorand's BA protocol, users do not keep any private state except for their private keys, which allows Algorand to replace participants immediately after they send a message. This mitigates targeted attacks on chosen participants after their identity is revealed. We implement Algorand and evaluate its performance on 1,000 EC2 virtual machines, simulating up to 500,000 users. Experimental results show that Algorand confirms transactions in under a minute, achieves 125x Bitcoin's throughput, and incurs almost no penalty for scaling to more users.
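A toy version of the self-selection idea (ours; Algorand uses verifiable random functions so that selection can be proven to others, and its thresholds are more elaborate): each user hashes a private value with the round seed and joins the committee if the result falls below a stake-weighted threshold.

import hashlib

# Toy sortition sketch (illustrative): a plain hash stands in for the VRF output,
# so this version is not verifiable by other participants.

def lottery_value(secret: str, seed: str) -> float:
    h = hashlib.sha256(f"{secret}|{seed}".encode()).hexdigest()
    return int(h, 16) / 2**256          # uniformly distributed in [0, 1)

def selected(secret: str, seed: str, stake: float, total_stake: float,
             committee_size: int) -> bool:
    threshold = committee_size * stake / total_stake   # expected committee share
    return lottery_value(secret, seed) < threshold

users = {f"user{i}": 10.0 for i in range(1000)}        # equal stake for simplicity
total_stake = sum(users.values())
committee = [u for u in users
             if selected(u, "round-42-seed", users[u], total_stake, committee_size=100)]
print(len(committee))    # close to 100 in expectation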
Conference Paper
Full-text available
Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital cryptocurrencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature. This opens the question whether existing security analysis of Bitcoin's PoW applies to other implementations which have been instantiated with different consensus and/or network parameters. In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.
Conference Paper
Full-text available
The last fifteen years have seen an impressive amount of work on protocols for Byzantine fault-tolerant (BFT) state machine replication (SMR). However, there is still a need for practical and reliable software libraries implementing this technique. BFT-SMART is an open-source Java-based library implementing robust BFT state machine replication. Some of the key features of this library that distinguishes it from similar works (e.g., PBFT and UpRight) are improved reliability, modularity as a first-class property, multicore-awareness, reconfiguration support and a flexible programming interface. When compared to other SMR libraries, BFT-SMART achieves better performance and is able to withstand a number of real-world faults that previous implementations cannot.
Article
Full-text available
State machine replication (SMR) is a generic technique for implementing fault-tolerant distributed services by replicating them in sets of servers. There have been several proposals for using SMR to tolerate arbitrary or Byzantine faults, including intrusions. However, most of these systems can tolerate at most f faulty servers out of a total of 3f+1. We show that it is possible to implement a Byzantine SMR algorithm with only 2f+1 replicas by extending the system with a simple trusted distributed component. Several performance metrics show that our algorithm, BFT-TO, fares well in comparison with others in the literature. Furthermore, BFT-TO is not vulnerable to some recently presented performance attacks that affect alternative approaches.
Conference Paper
Full-text available
Bitcoin is a digital currency that, unlike traditional currencies, does not rely on a centralized authority. Instead Bitcoin relies on a network of volunteers that collectively implement a replicated ledger and verify transactions. In this paper we analyze how Bitcoin uses a multi-hop broadcast to propagate transactions and blocks through the network to update the ledger replicas. We then use the gathered information to verify the conjecture that the propagation delay in the network is the primary cause for blockchain forks. Blockchain forks should be avoided as they are symptomatic of inconsistencies among the replicas in the network. We then show what can be achieved by pushing the current protocol to its limit with unilateral changes to the client's behavior.
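A common back-of-the-envelope model for that conjecture (ours, assuming Poisson block arrivals rather than the paper's measurements): if a block needs t_prop seconds to reach the rest of the network and blocks are found every T seconds on average, a conflicting block appears in that window with probability about 1 - exp(-t_prop/T).

import math

# Rough fork-rate estimate assuming Poisson block arrivals (illustrative model,
# not the measurements reported in the paper).

def fork_probability(t_prop: float, block_interval: float) -> float:
    return 1.0 - math.exp(-t_prop / block_interval)

for t_prop in (2.0, 12.6, 30.0):          # illustrative propagation delays in seconds
    p = fork_probability(t_prop, 600.0)   # Bitcoin's ~10-minute block interval
    print(f"t_prop={t_prop:5.1f}s  P(fork) ~ {p:.3%}")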
Article
Full-text available
We present two asynchronous Byzantine fault-tolerant state machine replication (BFT) algorithms, which improve previous algorithms in terms of several metrics. First, they require only 2f+1 replicas, instead of the usual 3f+1. Second, the trusted service in which this reduction of replicas is based is quite simple, making a verified implementation straightforward (and even feasible using commercial trusted hardware). Third, in nice executions the two algorithms run in the minimum number of communication steps for nonspeculative and speculative algorithms, respectively, four and three steps. Besides the obvious benefits in terms of cost, resilience and management complexity-fewer replicas to tolerate a certain number of faults-our algorithms are simpler than previous ones, being closer to crash fault-tolerant replication algorithms. The performance evaluation shows that, even with the trusted component access overhead, they can have better throughput than Castro and Liskov's PBFT, and better latency in networks with nonnegligible communication delays.
Article
Full-text available
Wireless ad hoc networks, due to their inherent unreliability, pose significant challenges to the task of achieving tight coordination among nodes. The failure of some nodes and momentary breakdown of communications, either of accidental or malicious nature, should not result in the failure of the entire system. This paper presents an asynchronous Byzantine consensus protocol, called Turquois, specifically designed for resource-constrained wireless ad hoc networks. The key to its efficiency is the fact that it tolerates dynamic message omissions, which allows an efficient utilization of the wireless broadcasting medium. The protocol also refrains from computationally expensive public-key cryptography during its normal operation. The protocol is safe despite the arbitrary failure of f < n/3 nodes from a total of n nodes, and unrestricted message omissions. Progress is ensured in rounds where the number of omissions is σ ≤ ⌈(n − t)/2⌉(n − k − t) + k − 2, where k is the number of nodes required to terminate and t ≤ f is the number of nodes that are actually faulty. These characteristics make Turquois the first consensus protocol that simultaneously circumvents the FLP and the Santoro-Widmayer impossibility results, which is achieved through randomization. Finally, the protocol was prototyped and subjected to a comparative performance evaluation against two well-known Byzantine fault-tolerant consensus protocols. The results show that, due to its design, Turquois outperforms the other protocols by more than an order of magnitude as the number of nodes in the system increases.
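To make the progress condition concrete, the small calculation below (ours; the choice of k is purely illustrative) evaluates the bound σ ≤ ⌈(n − t)/2⌉(n − k − t) + k − 2 for one system size.

import math

# Worked evaluation (ours) of the omission bound quoted above:
#   sigma <= ceil((n - t) / 2) * (n - k - t) + k - 2
# n: total nodes, t: actually faulty nodes, k: nodes required to terminate.
# The choice k = n - f below is only illustrative, not prescribed by the paper.

def omission_bound(n: int, t: int, k: int) -> int:
    return math.ceil((n - t) / 2) * (n - k - t) + k - 2

n, f = 16, 5          # f < n/3
k = n - f             # illustrative termination quorum
for t in range(f + 1):
    print(f"n={n} k={k} t={t}  tolerated omissions per round <= {omission_bound(n, t, k)}")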
Conference Paper
Full-text available
The Bitcoin cryptocurrency records its transactions in a public log called the blockchain. Its security rests critically on the distributed protocol that maintains the blockchain, run by participants called miners. Conventional wisdom asserts that the mining protocol is incentive-compatible and secure against colluding minority groups, that is, it incentivizes miners to follow the protocol as prescribed. We show that the Bitcoin mining protocol is not incentive-compatible. We present an attack with which colluding miners obtain a revenue larger than their fair share. This attack can have significant consequences for Bitcoin: Rational miners will prefer to join the selfish miners, and the colluding group will increase in size until it becomes a majority. At this point, the Bitcoin system ceases to be a decentralized currency. Unless certain assumptions are made, selfish mining may be feasible for any group size of colluding miners. We propose a practical modification to the Bitcoin protocol that protects Bitcoin in the general case. It prohibits selfish mining by pools that command less than 1/4 of the resources. This threshold is lower than the wrongly assumed 1/2 bound, but better than the current reality where a group of any size can compromise the system.
Article
Full-text available
One of the main reasons why Byzantine fault-tolerant (BFT) systems are not widely used lies in their high resource consumption: 3f+1 replicas are necessary to tolerate only f faults. Recent works have been able to reduce the minimum number of replicas to 2f+1 by relying on a trusted subsystem that prevents a replica from making conflicting statements to other replicas without being detected. Nevertheless, having been designed with the focus on fault handling, these systems still employ a majority of replicas during normal-case operation for seemingly redundant work. Furthermore, the trusted subsystems available trade off performance for security; that is, they either achieve high throughput or they come with a small trusted computing base. This paper presents CheapBFT, a BFT system that, for the first time, tolerates that all but one of the replicas active in normal-case operation become faulty. CheapBFT runs a composite agreement protocol and exploits passive replication to save resources; in the absence of faults, it requires that only f+1 replicas actively agree on client requests and execute them. In case of suspected faulty behavior, CheapBFT triggers a transition protocol that activates f extra passive replicas and brings all non-faulty replicas into a consistent state again. This approach, for example, allows the system to safely switch to another, more resilient agreement protocol. CheapBFT relies on an FPGA-based trusted subsystem for the authentication of protocol messages that provides high performance and comprises a small trusted computing base.
Article
Full-text available
Work to date on algorithms for message-passing systems has explored a wide variety of types of faults, but corresponding work on shared memory systems has usually assumed that only crash faults are possible. In this work, we explore situations in which processes accessing shared objects can fail arbitrarily (Byzantine faults).
Conference Paper
Full-text available
Wireless ad-hoc networks are being increasingly used in diverse contexts, ranging from casual meetings to disaster recovery operations. A promising approach is to model these networks as distributed systems prone to dynamic communication failures. This captures transitory disconnections in communication due to phenomena like interference and collisions, and permits an efficient use of the wireless broadcasting medium. This model, however, is bound by the impossibility result of Santoro and Widmayer, which states that, even with strong synchrony assumptions, there is no deterministic solution to any non-trivial form of agreement if n − 1 or more messages can be lost per communication round in a system with n processes. In this paper we propose a novel way to circumvent this impossibility result by employing randomization. We present a consensus protocol that ensures safety in the presence of an unrestricted number of omission faults, and guarantees progress in rounds where such faults are bounded by f ≤ ⌈n/2⌉(n − k) + k − 2, where k is the number of processes required to decide, eventually assuring termination with probability 1.
Article
Full-text available
Randomized agreement protocols have been around for more than two decades. Often assumed to be inefficient due to their high expected communication and computation complexities, they have remained overlooked by the community-at-large as a valid solution for the deployment of fault-tolerant distributed systems. This paper aims to demonstrate that randomization can be a very competitive approach even in hostile environments where arbitrary faults can occur. A stack of randomized intrusion-tolerant protocols is described and its performance evaluated under several settings in both local-area-network (LAN) and wide-area-network environments. The stack provides a set of relevant services ranging from basic communication primitives up to atomic broadcast. The experimental evaluation shows that the protocols are efficient, especially in LAN environments where no performance reduction is observed under certain Byzantine faults.
Article
Full-text available
Existing Byzantine-resilient replication protocols satisfy two standard correctness criteria, safety and liveness, even in the presence of Byzantine faults. The runtime performance of these protocols is most commonly assessed in the absence of processor faults and is usually good in that case. However, faulty processors can significantly degrade the performance of some protocols, limiting their practical utility in adversarial environments. This paper demonstrates the extent of performance degradation possible in some existing protocols that do satisfy liveness and that do perform well absent Byzantine faults. We propose a new performance-oriented correctness criterion that requires a consistent level of performance, even with Byzantine faults. We present a new Byzantine fault-tolerant replication protocol that meets the new correctness criterion and evaluate its performance in fault-free executions and when under attack.
Conference Paper
Full-text available
Byzantine-Fault-Tolerant (BFT) state machine replication is an appealing technique to tolerate arbitrary failures. However, Byzantine agreement incurs a fundamental trade-off between being fast (i.e. optimal latency) and achieving optimal resilience (i.e. 2f + b + 1 replicas, where f is the bound on failures and b the bound on Byzantine failures). Achieving fast Byzantine replication despite f failures requires at least f + b - 2 additional replicas. In this paper we show, perhaps surprisingly, that fast Byzantine agreement despite f failures is practically attainable using only b - 1 additional replicas, which is independent of the number of crashes tolerated. This makes our approach particularly appealing for systems that must tolerate many crashes (large f) and few Byzantine faults (small b). The core principle of our approach is to have replicas agree on a quorum of responsive replicas before agreeing on requests. This is key to circumventing the resilience lower bound of fast Byzantine agreement.
Conference Paper
Full-text available
The operation of wireless ad hoc networks is intrinsically tied to the ability of nodes to coordinate their actions in a dependable and efficient manner. The failure of some nodes and momentary breakdown of communications, either of accidental or malicious nature, should not result in the failure of the entire system. This paper presents Turquois - an intrusion-tolerant consensus protocol specifically designed for resource-constrained wireless ad hoc networks. Turquois allows an efficient utilization of the broadcasting medium, avoids synchrony assumptions, and refrains from public-key cryptography during its normal operation. The protocol is safe despite the arbitrary failure of f < n/3 processes from a total of n processes, and unrestricted message omissions. The protocol was prototyped and subject to a comparative performance evaluation against two well-known intrusion-tolerant consensus protocols. The results show that, as the system scales, Turquois outperforms the other protocols by more than an order of magnitude.
Conference Paper
Full-text available
Recently, Fischer, Lynch and Paterson [3] proved that no completely asynchronous consensus protocol can tolerate even a single unannounced process death. We exhibit here a probabilistic solution for this problem, which guarantees that as long as a majority of the processes continues to operate, a decision will be made (Theorem 1). Our solution is completely asynchronous and is rather strong: As in [4], it is guaranteed to work with probability 1 even against an adversary scheduler who knows all about the system.
Conference Paper
Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains and one of the Hyperledger projects hosted by the Linux Foundation (www.hyperledger.org). Fabric is the first truly extensible blockchain system for running distributed applications. It supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in standard, general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing block-chain platforms that require "smart-contracts" to be written in domain-specific languages or rely on a cryptocurrency. Fabric realizes the permissioned model using a portable notion of membership, which may be integrated with industry-standard identity management. To support such flexibility, Fabric introduces an entirely novel blockchain design and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks. This paper describes Fabric, its architecture, the rationale behind various design decisions, its most prominent implementation aspects, as well as its distributed application programming model. We further evaluate Fabric by implementing and benchmarking a Bitcoin-inspired digital currency. We show that Fabric achieves end-to-end throughput of more than 3500 transactions per second in certain popular deployment configurations, with sub-second latency, scaling well to over 100 peers.
Article
Bitcoin was hatched as an act of defiance. Unleashed in the wake of the Great Recession, the cryptocurrency was touted by its early champions as an antidote to the inequities and corruption of the traditional financial system. They cherished the belief that as this parallel currency took off, it would compete with and ultimately dismantle the institutions that had brought about the crisis. Bitcoin's unofficial catchphrase, "In cryptography we trust," left no doubt about who was to blame: It was the middlemen, the bankers, the "trusted" third parties who actually couldn't be trusted. These humans simply got in the way of other humans, skimming profits and complicating transactions.
Article
In the dusty, sunbaked land surrounding Ordos, a city in China's Inner Mongolia, sits one of the world's largest bitcoin mines. Encircled by coal-fired power plants, rare earth mineral extraction sites, and the skeletal remains of abandoned, half-constructed housing complexes, the Bitmain Technologies bitcoin mine is evidence of a new economic boom in the area. Every 10 minutes, a new block of data is added to the Bitcoin blockchain, the accounting ledger that records every transaction made with the currency. And every 10 minutes, a shiny new cache of bitcoins is deposited into the digital pocket of the person whose computer added the most recent block. Miners compete for the right to add new blocks by running a single calculation, the SHA-256 hash function, over and over as fast as they can. This essentially enters them into a lottery with all other miners on the network. The rewards of this lottery now amount to over US $8 million worth of bitcoins every day. Half of this goes to miners in China, who own a majority of the hashing power on the Bitcoin network, according to a new study by University of Cambridge researchers. Their proximity to manufacturers of specialized hardware and their access to cheap land and cheap electricity make Chinese miners the natural beneficiaries of the Bitcoin system, which rewards efficiency and hustle above all else.
Conference Paper
With the advent of trusted execution environments provided by recent general-purpose processors, a class of replication protocols has become more attractive than ever: protocols based on a hybrid fault model are able to tolerate arbitrary faults yet reduce the costs significantly compared to their traditional Byzantine relatives by employing a small subsystem trusted to only fail by crashing. Unfortunately, existing proposals have their own price: we are not aware of any hybrid protocol that is backed by a comprehensive formal specification, complicating the reasoning about correctness and implications. Moreover, current protocols of that class have to be executed largely sequentially. Hence, they are ill-suited to the very multi-core processors whose trusted execution environments bring this hybrid fault model to a broad audience. In this paper, we present Hybster, a new hybrid state-machine replication protocol that is highly parallelizable and specified formally. With over 1 million operations per second using only four cores, the evaluation of our Intel SGX-based prototype implementation shows that Hybster makes hybrid state-machine replication a viable option even for today's very demanding critical services.
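The "small subsystem trusted to only fail by crashing" is commonly a trusted monotonic counter that attests each outgoing message, preventing equivocation and allowing replication with 2f + 1 instead of 3f + 1 replicas. The sketch below shows that idea in a USIG/TrInc-style form (hypothetical interface, not Hybster's actual SGX component):

import hashlib, hmac

class TrustedCounter:
    # Minimal sketch of a trusted attestation subsystem: every certified message
    # is bound to a strictly increasing counter, so a Byzantine host cannot send
    # two conflicting messages under the same counter value.
    def __init__(self, secret_key: bytes):
        self._key = secret_key        # held only inside the trusted environment
        self._counter = 0

    def certify(self, message: bytes):
        self._counter += 1
        tag = hmac.new(self._key, self._counter.to_bytes(8, "big") + message,
                       hashlib.sha256).digest()
        return self._counter, tag     # (counter, attestation) travels with the message

    @staticmethod
    def verify(key: bytes, counter: int, message: bytes, tag: bytes) -> bool:
        expected = hmac.new(key, counter.to_bytes(8, "big") + message,
                            hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)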
Conference Paper
Bitcoin is the first and most popular decentralized cryptocurrency to date. In this work, we extract and analyze the core of the Bitcoin protocol, which we term the Bitcoin backbone, and prove two of its fundamental properties which we call common prefix and chain quality in the static setting where the number of players remains fixed. Our proofs hinge on appropriate and novel assumptions on the “hashing power” of the adversary relative to network synchronicity; we show our results to be tight under high synchronization. Next, we propose and analyze applications that can be built “on top” of the backbone protocol, specifically focusing on Byzantine agreement (BA) and on the notion of a public transaction ledger. Regarding BA, we observe that Nakamoto’s suggestion falls short of solving it, and present a simple alternative which works assuming that the adversary’s hashing power is bounded by 1/3. The public transaction ledger captures the essence of Bitcoin’s operation as a cryptocurrency, in the sense that it guarantees the liveness and persistence of committed transactions. Based on this notion we describe and analyze the Bitcoin system as well as a more elaborate BA protocol, proving them secure assuming high network synchronicity and that the adversary’s hashing power is strictly less than 1/2, while the adversarial bound needed for security decreases as the network desynchronizes.
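The common-prefix property can be read operationally: if any two honest parties prune the last k blocks from their local chains, one pruned chain must be a prefix of the other. A small checker under that informal reading (my paraphrase, not the paper's formal definition):

def common_prefix_holds(chain_a, chain_b, k):
    # k-common-prefix check for chains given as lists of block identifiers:
    # drop the last k blocks of each chain, then require one result to be a
    # prefix of the other.
    a = chain_a[:max(0, len(chain_a) - k)]
    b = chain_b[:max(0, len(chain_b) - k)]
    shorter, longer = (a, b) if len(a) <= len(b) else (b, a)
    return longer[:len(shorter)] == shorter

assert common_prefix_holds(["g", "1", "2", "3a"], ["g", "1", "2", "3b"], k=1)
assert not common_prefix_holds(["g", "1", "2", "3a"], ["g", "1", "2", "3b"], k=0)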
Conference Paper
The Bitcoin system only provides eventual consistency. For everyday life, the time to confirm a Bitcoin transaction is prohibitively slow. In this paper we propose a new system, built on the Bitcoin blockchain, which enables strong consistency. Our system, PeerCensus, acts as a certification authority, manages peer identities in a peer-to-peer network, and ultimately enhances Bitcoin and similar systems with strong consistency. Our extensive analysis shows that PeerCensus is in a secure state with high probability. We also show how Discoin, a Bitcoin variant that decouples block creation and transaction confirmation, can be built on top of PeerCensus, enabling real-time payments. Unlike Bitcoin, once transactions in Discoin are committed, they stay committed.
Conference Paper
The surprising success of cryptocurrencies has led to a surge of interest in deploying large scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario. We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.
Conference Paper
Cryptocurrencies record transactions in a decentralized data structure called a blockchain. Two of the most popular cryptocurrencies, Bitcoin and Ethereum, support the feature to encode rules or scripts for processing transactions. This feature has evolved to give practical shape to the ideas of smart contracts, or full-fledged programs that are run on blockchains. Recently, Ethereum's smart contract system has seen steady adoption, supporting tens of thousands of contracts holding millions of dollars' worth of virtual coins. In this paper, we investigate the security of running smart contracts based on Ethereum in an open distributed network like those of cryptocurrencies. We introduce several new security problems in which an adversary can manipulate smart contract execution to gain profit. These bugs suggest subtle gaps in the understanding of the distributed semantics of the underlying platform. As a refinement, we propose ways to enhance the operational semantics of Ethereum to make contracts less vulnerable. For developers writing contracts for the existing Ethereum system, we build a symbolic execution tool called Oyente to find potential security bugs. Among 19,336 existing Ethereum contracts, Oyente flags 8,833 of them as vulnerable, including the TheDAO bug which led to a 60 million US dollar loss in June 2016. We also discuss the severity of other attacks for several case studies which have source code available and confirm the attacks (which target only our accounts) on the main Ethereum network.
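One of the bug classes studied in the paper, transaction-ordering dependence, is easy to see in miniature: when a contract's payout depends on which of two pending transactions gets ordered first, whoever controls ordering controls the outcome. A toy, Ethereum-free illustration (invented names, plain Python rather than EVM code):

class BountyContract:
    # Toy contract whose payout depends entirely on transaction ordering.
    def __init__(self, bounty):
        self.bounty = bounty

    def update_bounty(self, new_value):     # transaction sent by the owner
        self.bounty = new_value

    def claim(self):                        # transaction sent by a solver
        paid, self.bounty = self.bounty, 0
        return paid

c1 = BountyContract(100)
c1.update_bounty(1)
reward_if_update_ordered_first = c1.claim()     # solver receives 1

c2 = BountyContract(100)
reward_if_claim_ordered_first = c2.claim()      # solver receives 100
c2.update_bounty(1)

assert (reward_if_update_ordered_first, reward_if_claim_ordered_first) == (1, 100)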
Conference Paper
The increasing popularity of blockchain-based cryptocurrencies has made scalability a primary and urgent concern. We analyze how fundamental and circumstantial bottlenecks in Bitcoin limit the ability of its current peer-to-peer overlay network to support substantially higher throughputs and lower latencies. Our results suggest that reparameterization of block size and intervals should be viewed only as a first increment toward achieving next-generation, high-load blockchain protocols, and major advances will additionally require a basic rethinking of technical approaches. We offer a structured perspective on the design space for such approaches. Within this perspective, we enumerate and briefly discuss a number of recently proposed protocol ideas and offer several new ideas and open challenges.
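The reparameterization the authors caution about is visible in a back-of-the-envelope bound: blocks of B bytes every T seconds, with transactions averaging s bytes, cap throughput at roughly B / (s·T) transactions per second, while pushing B up or T down also worsens propagation delay and fork rates. A quick calculation with Bitcoin-like numbers (assumed averages, for illustration only):

def max_throughput_tps(block_size_bytes, avg_tx_bytes, block_interval_s):
    # Upper bound on transactions per second from block parameters alone,
    # ignoring propagation delay, fork rate, and validation cost.
    return block_size_bytes / (avg_tx_bytes * block_interval_s)

# ~1 MB blocks, ~250-byte transactions, 10-minute intervals => ~6.7 tx/s
print(max_throughput_tps(1_000_000, 250, 600))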
Conference Paper
Bitcoin cryptocurrency demonstrated the utility of global consensus across thousands of nodes, changing the world of digital transactions forever. In the early days of Bitcoin, the performance of its probabilistic proof-of-work (PoW) based consensus fabric, also known as blockchain, was not a major issue. Bitcoin became a success story despite its consensus latencies on the order of an hour and a theoretical peak throughput of only up to 7 transactions per second. The situation today is radically different, and the poor performance and scalability of early PoW blockchains are no longer acceptable. Specifically, the trend of modern cryptocurrency platforms, such as Ethereum, is to support execution of arbitrary distributed applications on blockchain fabric, which demands much better performance. This approach, however, makes cryptocurrency platforms step away from their original purpose and enter the domain of database-replication protocols, notably classical state-machine replication, and in particular its Byzantine fault-tolerant (BFT) variants. In this paper, we contrast PoW-based blockchains to those based on BFT state-machine replication, focusing on their scalability limits. We also discuss recent proposals for overcoming these scalability limits and outline key outstanding open problems in the quest for the "ultimate" blockchain fabric(s).
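The contrast drawn here can be made concrete in terms of fault assumptions: classical BFT state-machine replication needs n ≥ 3f + 1 replicas and quorums of 2f + 1 to tolerate f Byzantine replicas, whereas Nakamoto-style PoW instead assumes that honest parties control a majority of the hashing power. A small helper for the BFT side of the comparison:

def bft_parameters(f):
    # Standard sizing for BFT state-machine replication: n = 3f + 1 replicas
    # tolerate f Byzantine faults, with quorums of 2f + 1 so that any two
    # quorums intersect in at least one correct replica.
    n = 3 * f + 1
    quorum = 2 * f + 1
    return n, quorum

assert bft_parameters(1) == (4, 3)
assert bft_parameters(3) == (10, 7)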
Conference Paper
Consensus protocols employed in Byzantine fault-tolerant systems are notoriously compute-intensive. Unfortunately, the traditional approach of executing instances of such protocols in a pipelined fashion is not well suited to modern multi-core processors and fundamentally restricts the overall performance of systems based on them. To solve this problem, we present the consensus-oriented parallelization (COP) scheme, which disentangles consecutive consensus instances and executes them in parallel by independent pipelines; or, to put it in the terminology of our main target, today's processors: COP is the introduction of superscalarity to the field of consensus protocols. In doing so, COP achieves 2.4 million operations per second on commodity server hardware, a factor of 6 compared to a contemporary pipelined approach measured on the same code base and a factor of over 20 compared to the highest throughput numbers published for such systems so far. More importantly, COP provides up to 3 times as much throughput on a single core as its competitors, and it can make use of additional cores where other approaches are confined by the slowest stage in their pipeline. This enables Byzantine fault tolerance for the emerging market of extremely demanding transactional systems and gives more room for conventional deployments to increase their quality of service.
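As a purely structural illustration of the idea (not the authors' protocol), consecutive consensus instances can be handed to independent workers and their decisions re-assembled in instance order, so no single pipeline stage caps throughput; here run_instance is a placeholder standing in for one full BFT agreement.

from concurrent.futures import ThreadPoolExecutor

def run_instance(instance_id, request):
    # Placeholder for one complete consensus instance deciding `request`.
    return instance_id, request

def order_requests(requests, cores=4):
    with ThreadPoolExecutor(max_workers=cores) as pool:
        futures = [pool.submit(run_instance, i, r) for i, r in enumerate(requests)]
        decided = [f.result() for f in futures]
    # Deliver decisions in instance order so the replicated log stays totally ordered.
    return [req for _, req in sorted(decided)]

print(order_requests(["a", "b", "c", "d"]))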
Conference Paper
An implicit goal of Bitcoin's reward structure is to diffuse network influence over a diverse, decentralized population of individual participants. Indeed, Bitcoin's security claims rely on no single entity wielding a sufficiently large portion of the network's overall computational power. Unfortunately, rather than participating independently, most Bitcoin miners join coalitions called mining pools in which a central pool administrator largely directs the pool's activity, leading to a consolidation of power. Recently, the largest mining pool has accounted for more than half of the network's total mining capacity. Relatedly, "hosted mining" service providers offer their clients the benefit of economies of scale, tempting them away from independent participation. We argue that the prevalence of mining coalitions is due to a limitation of the Bitcoin proof-of-work puzzle: specifically, that it affords an effective mechanism for enforcing cooperation in a coalition. We present several definitions and constructions for "nonoutsourceable" puzzles that thwart such enforcement mechanisms, thereby deterring coalitions. We also provide an implementation and benchmark results for our schemes to show they are practical.
Article
Consensus is a fundamental building block used to solve many practical problems that appear in reliable distributed systems. Although consensus has been widely studied in the context of standard networks, few studies have addressed it in dynamic, self-organizing systems characterized by unknown networks. While in a standard network the set of participants is static and known, in an unknown network the set and number of participants are not known in advance. This work studies the problem of Byzantine Fault-Tolerant Consensus with Unknown Participants, namely BFT-CUP. This new problem aims at solving consensus in unknown networks with the additional requirement that participants may behave maliciously. The work presents the knowledge connectivity conditions that are necessary and sufficient to solve BFT-CUP under minimal synchrony requirements, and proposes algorithms that are shown to be optimal in terms of synchrony and knowledge connectivity among the participants in the system.
Conference Paper
Bitcoin is widely regarded as the first broadly successful e-cash system. An oft-cited concern, though, is that mining Bitcoins wastes computational resources. Indeed, Bitcoin's underlying mining mechanism, which we call a scratch-off puzzle (SOP), involves continuously attempting to solve computational puzzles that have no intrinsic utility. We propose a modification to Bitcoin that repurposes its mining resources to achieve a more broadly useful goal: distributed storage of archival data. We call our new scheme Permacoin. Unlike Bitcoin and its proposed alternatives, Permacoin requires clients to invest not just computational resources, but also storage. Our scheme involves an alternative scratch-off puzzle for Bitcoin based on Proofs-of-Retrievability (PORs). Successfully minting money with this SOP requires local, random access to a copy of a file. Given the competition among mining clients in Bitcoin, this modified SOP gives rise to highly decentralized file storage, thus reducing the overall waste of Bitcoin. Using a model of rational economic agents we show that our modified SOP preserves the essential properties of the original Bitcoin puzzle. We also provide parameterizations and calculations based on realistic hardware constraints to demonstrate the practicality of Permacoin as a whole.
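The storage-bound scratch-off puzzle can be caricatured as follows: each attempt pseudo-randomly selects a segment of the locally stored archive, and the attempt wins only if a hash over that segment (plus the attempt's nonce) falls below a target, so attempts cannot even be evaluated without the data. This is a toy rendering, not the paper's POR-based construction:

import hashlib

def scratch_off_attempt(file_segments, puzzle_id: bytes, nonce: int,
                        difficulty_bits: int = 16):
    # Toy storage-bound puzzle: the nonce selects a stored segment, and the
    # attempt wins only if SHA-256(puzzle_id || nonce || segment) is below target.
    target = 1 << (256 - difficulty_bits)
    segment = file_segments[nonce % len(file_segments)]
    digest = hashlib.sha256(puzzle_id + nonce.to_bytes(8, "big") + segment).digest()
    return int.from_bytes(digest, "big") < target

segments = [bytes([i]) * 64 for i in range(256)]      # stand-in for archived data
winning_nonces = [n for n in range(200_000)
                  if scratch_off_attempt(segments, b"epoch-1", n)]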
Article
We propose a new protocol for a cryptocurrency that builds upon the Bitcoin protocol by combining its Proof of Work component with a Proof of Stake type of system. Our Proof of Activity protocol offers good security against possibly practical attacks on Bitcoin, and has a relatively low penalty in terms of network communication and storage space.
Article
We determine what information about failures is necessary and sufficient to solve Consensus in asynchronous distributed systems subject to crash failures. In Chandra and Toueg [1996] it is shown that ◇W, a failure detector that provides surprisingly little information about which processes have crashed, is sufficient to solve Consensus in asynchronous systems with a majority of correct processes. In this paper, we prove that to solve Consensus, any failure detector has to provide at least as much information as ◇W. Thus, ◇W is indeed the weakest failure detector for solving Consensus in asynchronous systems with a majority of correct processes.
Conference Paper
Bitcoin is a disruptive new crypto-currency based on a decentralized open-source protocol which has been gradually gaining momentum. Perhaps the most important question that will affect Bitcoin's success is whether or not it will be able to scale to support the high volume of transactions required from a global currency system. We investigate the implications of having a higher transaction throughput on Bitcoin's security against double-spend attacks. We show that at high throughput, substantially weaker attackers are able to reverse payments they have made, even well after they were considered accepted by recipients. We address this security concern through the GHOST rule, a modification to the way Bitcoin nodes construct and re-organize the blockchain, Bitcoin's core distributed data structure. GHOST has been adopted and a variant of it has been implemented as part of the Ethereum project, a second-generation distributed applications platform.
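GHOST replaces "follow the longest chain" with "at every fork, descend into the heaviest subtree", so stale sibling blocks still contribute weight to their ancestors. A compact sketch of that fork-choice walk over a block tree given as child-to-parent pointers (a simplification; real implementations weight by difficulty and cache subtree sizes):

def ghost_head(parents):
    # `parents` maps block_id -> parent_id, with the genesis block mapping to None.
    children = {}
    for blk, par in parents.items():
        children.setdefault(par, []).append(blk)

    def subtree_size(blk):
        return 1 + sum(subtree_size(c) for c in children.get(blk, []))

    head = children[None][0]                  # start at genesis
    while children.get(head):
        head = max(children[head], key=subtree_size)
    return head

# The branch under "A" has three blocks, the branch under "B" only two, so
# GHOST follows "A" even though both leaves sit at the same height.
tree = {"G": None, "A": "G", "B": "G", "B1": "B", "A1": "A", "A2": "A"}
assert ghost_head(tree) in ("A1", "A2")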
Article
A peer-to-peer crypto-currency design derived from Satoshi Nakamoto's Bitcoin. Proof-of-stake replaces proof-of-work to provide most of the network security. Under this hybrid design, proof-of-work mainly provides initial minting and is largely non-essential in the long run. The security level of the network is not dependent on energy consumption in the long term, thus providing an energy-efficient and more cost-competitive peer-to-peer crypto-currency. Proof-of-stake is based on coin age and is generated by each node via a hashing scheme bearing similarity to Bitcoin's but over a limited search space. Blockchain history and transaction settlement are further protected by a centrally broadcast checkpoint mechanism.
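The "limited search space" can be pictured like this: instead of grinding nonces, a staking node checks once per second whether a hash of its coin and the current timestamp falls below a target scaled by the coin's age and value, so larger and older stakes hit more often. The sketch below is only my toy rendering of that idea, not Peercoin's actual kernel:

import hashlib

def stake_kernel_hit(utxo_id: bytes, value: int, coin_age_days: int,
                     timestamp: int, difficulty_bits: int = 40):
    # Toy coin-age proof-of-stake check: the target grows with value * coin age,
    # and the only search dimension is the once-per-second timestamp.
    base_target = 1 << (256 - difficulty_bits)
    target = base_target * value * max(coin_age_days, 1)
    digest = hashlib.sha256(utxo_id + timestamp.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < target

# One check per second over a minute: the "limited search space".
hits = [t for t in range(60)
        if stake_kernel_hit(b"utxo-42", value=1000, coin_age_days=30, timestamp=t)]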
Article
Broadcast protocols are a fundamental building block for implementing replication in fault-tolerant distributed systems. This paper addresses secure service replication in an asynchronous environment with a static set of servers, where a malicious adversary may corrupt up to a threshold of servers and controls the network. We develop a formal model using concepts from modern cryptography, give modular definitions for several broadcast problems, including reliable, atomic, and secure causal broadcast, and present protocols implementing them. Reliable broadcast is a basic primitive, also known as the Byzantine generals problem, providing agreement on a delivered message. Atomic broadcast imposes additionally a total order on all delivered messages. We present a randomized atomic broadcast protocol based on a new, efficient multi-valued asynchronous Byzantine agreement primitive with an external validity condition. Apparently, no such efficient asynchronous atomic broadcast protocol maintaining liveness and safety in the Byzantine model has appeared previously in the literature. Secure causal broadcast extends atomic broadcast by encryption to guarantee a causal order among the delivered messages. Our protocols use threshold cryptography for signatures, encryption, and coin-tossing.
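For the reliable-broadcast primitive, the classic Bracha-style quorum logic (a standard construction, sketched here for intuition; it is not this paper's randomized atomic broadcast protocol) shows how agreement on a delivered message is reached with n ≥ 3f + 1 parties: echo the payload on first receipt, turn ready after a quorum of echoes or f + 1 readies, and deliver after 2f + 1 readies.

class RBCInstance:
    # Quorum bookkeeping for one reliable-broadcast instance with n >= 3f + 1.
    def __init__(self, n, f):
        self.n, self.f = n, f
        self.echoes, self.readies = set(), set()
        self.sent_ready = self.delivered = False

    def on_echo(self, sender):
        self.echoes.add(sender)
        return self._progress()

    def on_ready(self, sender):
        self.readies.add(sender)
        return self._progress()

    def _progress(self):
        # Send READY after more than (n + f) / 2 ECHOs, or after f + 1 READYs.
        if not self.sent_ready and (len(self.echoes) > (self.n + self.f) / 2
                                    or len(self.readies) >= self.f + 1):
            self.sent_ready = True
        # Deliver once 2f + 1 READYs have been collected.
        if self.sent_ready and len(self.readies) >= 2 * self.f + 1:
            self.delivered = True
        return self.sent_ready, self.delivered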
Article
Many distributed algorithms are designed for a system with a fixed set of n processes. However, some systems may dynamically change and expand over time, so that the number of processes may grow to infinity as time tends to infinity. This paper considers such systems, and gives algorithms that are new and simple (but not necessarily efficient) for common problems. The reason for simplicity is to better expose some of the algorithmic techniques for dealing with infinitely many processes. A brief summary of existing work on the subject is also provided.
Article
A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution to the double-spending problem using a peer-to-peer network. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but proof that it came from the largest pool of CPU power. As long as a majority of CPU power is controlled by nodes that are not cooperating to attack the network, they'll generate the longest chain and outpace attackers. The network itself requires minimal structure. Messages are broadcast on a best effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of what happened while they were gone.
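Because every block commits to the hash of its predecessor, rewriting any block invalidates the proof-of-work of all blocks after it. A minimal validity check over such a chain (simplified headers and toy difficulty; no timestamps, Merkle trees, or difficulty adjustment):

import hashlib

def block_hash(prev_hash: bytes, payload: bytes, nonce: int) -> bytes:
    return hashlib.sha256(prev_hash + payload + nonce.to_bytes(8, "big")).digest()

def chain_valid(blocks, difficulty_bits: int = 16) -> bool:
    # Each block is a (prev_hash, payload, nonce) tuple. The chain is valid if
    # every block links to the hash of its predecessor and every block hash
    # meets the proof-of-work target.
    target = 1 << (256 - difficulty_bits)
    prev = b"\x00" * 32
    for prev_hash, payload, nonce in blocks:
        if prev_hash != prev:
            return False
        h = block_hash(prev_hash, payload, nonce)
        if int.from_bytes(h, "big") >= target:
            return False
        prev = h
    return True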
Article
This paper presents a consensus protocol resilient to Byzantine failures. It uses signed and certified messages and is based on two underlying failure detection modules. The first is a muteness failure detection module of the class . The second is a reliable Byzantine behaviour detection module. More precisely, the first module detects processes that stop sending messages, while processes exhibiting other non-correct (i.e., Byzantine) behaviours are detected by the second module. The protocol is resilient to F faulty processes, with F ≤ min(⌊(n−1)/2⌋, C), where C is the maximum number of faulty processes that can be tolerated by the underlying certification service. The approach used to design the protocol is new. While usual Byzantine consensus protocols are based on failure detectors to detect processes that stop communicating, none of them uses a separate module to detect Byzantine behaviour (in those protocols such detection is not isolated from the protocol itself, which makes them difficult to understand and prove correct). In addition to this modular approach and to a consensus protocol for Byzantine systems, the paper presents a finite state automaton-based implementation of the Byzantine behaviour detection module. Finally, the modular approach followed in this paper can be used to solve other problems in asynchronous systems experiencing Byzantine failures.
Article
Recent models for the failure behaviour of systems involving redundancy and diversity have shown that common mode failures can be accounted for in terms of the variability of the failure probability of components over operational environments. Whenever such variability is present, we can expect that the overall system reliability will be less than we could have expected if the components could have been assumed to fail independently. We generalise a model of hardware redundancy due to Hughes, [Hughes, R. P., A new approach to common cause failure. Reliab. Engng, 17 (1987) 211–236] and show that with forced diversity, this unwelcome result no longer applies: in fact it becomes theoretically possible to do better than would be the case under independence of failures. An example shows how the new model can be used to estimate redundant system reliability from component data.
Article
Can unanimity be achieved in an unreliable distributed system? This problem was named the “Byzantine Generals Problem” by L. Lamport, R. Shostak, and M. Pease (Technical Report 54, Computer Science Laboratory, SRI International, March 1980). The results obtained in the present paper prove that unanimity is achievable in any distributed system if and only if the number of faulty processors in the system is: (1) less than one-third of the total number of processors; and (2) less than one-half of the connectivity of the system's network. In cases where unanimity is achievable, algorithms for obtaining it are given. This result forms a complete characterization of networks in the light of the Byzantine Problem.
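In modern notation, writing n for the number of processors, κ for the network connectivity, and f for the number of faulty processors, the stated characterization is (my transcription of the bounds above):

f < \frac{n}{3} \quad\text{and}\quad f < \frac{\kappa}{2}
\qquad\Longleftrightarrow\qquad
n \ge 3f + 1 \quad\text{and}\quad \kappa \ge 2f + 1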
Article
This paper proposes a variation of the Byzantine generals problem (or Byzantine consensus). Each general has a set of good plans and a set of bad plans. The problem is to make all loyal generals agree on a good plan proposed by a loyal general, and never on a bad plan.