The Scalability of Trustless Trust
Dominik Harz¹ and Magnus Boman²,³
¹ IC3RE, Imperial College London, SW7 2RH London, UK
² RISE, Box 1263, SE-16429 Kista, Sweden
³ KTH/ICT/SCS, Electrum 229, SE-16440 Kista, Sweden
Abstract. Permission-less blockchains can realise trustless trust, albeit
at the cost of limiting the complexity of computation tasks. To explain
the implications for scalability, we have implemented a trust model for
smart contracts, described as agents in an open multi-agent system.
Agent intentions are not necessarily known and autonomous agents have
to be able to make decisions under risk. The ramifications of these general
conditions for scalability are analysed for Ethereum and then generalised
to other current and future platforms.
Keywords: Trustless trust, smart contract, agent, Ethereum, blockchain,
scalability, multi-agent system, distributed ledger
1 Introduction
Turing-complete programming languages allow creating a generic programmable
blockchain by means of smart contracts [30]. A smart contract can be defined
as a decentralised application executed on the distributed P2P network that
constitutes the blockchain. The smart contract captures the formalisation of
electronic commerce in code, to execute the terms of a contract. However, a
smart contract is, in fact, neither smart nor a contract. In practice, it codes an
agreement about what will come to pass, in the form of a production rule. Because
this production rule is coded such that there can be no breach of contract (a
breach would occur only if one or more parties failed to honour the agreement),
a smart contract is not a contract. Since there is no opportunity for learning
on the contract's behalf, it is also not smart.
Smart contracts do code the preferences of their owners, and of their negotiating
partners as appropriate, with respect to decisions under risk or uncertainty.
They react to events, have a specific state, are executed on a distributed ledger,
and are able to interact with assets stored on the ledger [28]. Ethereum offers
smart contracts through its blockchain. The Ethereum Virtual Machine (EVM)
handles the states and computations of the protocol and can theoretically execute
code of arbitrary algorithmic complexity [3]. Using Ethereum, developers
can implement smart contracts as lines of code in an account that execute
automatically when transactions or function calls are sent to that account. The
outcome is final and agreed upon by all participants; blockchains can thus
enable a system of trust.
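To make this concrete, consider a minimal sketch of such a production rule in
Solidity (our illustration; the contract name, the time-lock condition, and the
0.4.x syntax are assumptions, not taken from the paper):

pragma solidity ^0.4.18;

// Minimal sketch of a smart contract as a production rule: a payment
// is released once a fixed condition is met. Illustrative only.
contract EscrowRule {
    address public payee;
    uint public releaseTime;

    function EscrowRule(address _payee, uint _delay) public payable {
        payee = _payee;
        releaseTime = now + _delay;
    }

    // Anyone may send a transaction calling this function; the rule
    // fires if and only if the condition holds, otherwise the call
    // reverts. There is no way to breach the coded agreement.
    function release() public {
        require(now >= releaseTime);
        payee.transfer(this.balance);
    }
}

Every full node executes the same code on the same state, so the outcome of
release() is agreed upon by all participants.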
In Ethereum, smart contracts can interact through function calls via their
Application Binary Interface (ABI). Single smart contracts, or multiple smart
contracts together, can act as decentralised autonomous organisations by encoding
the rules of interaction for the organisation's inner and outer relationships
(e.g., The DAO, MakerDAO). Full nodes store the distributed ledger and validate
new blocks in the chain pro bono. Permission-less blockchains limit the complexity
of computation tasks and thus the scalability of these blockchains. When
utilising smart contracts, external services can be required to circumvent these
computational limitations in order to code the preferences of their owners. The
results of computations performed by external parties are not subject to the
consensus protocol of the underlying blockchain, and their provided solution or
correct execution cannot be formally verified. Hence, the oft-cited benefit of
blockchains allowing for transparency over every transaction and enforced trust
through a consensus mechanism cannot be guaranteed with external entities [17].
A trust model for smart contracts in permission-less blockchains is thus missing,
a fact that limits their adaptability. Earlier trust models used in related
applications, such as those devised for quantitative trading or speculative agent
trading (see the patent text [14] for a good indication of this range), need to
be adjusted for the inherent transparency and particular trust implications of
blockchain systems. We propose a model that incorporates all these aspects.
2 Method
We answer the following research questions:
1. Which models of trust can be applied to smart contracts to reflect public
permission-less blockchains?
2. What can be done to clarify the link between, on the one hand, the prefer-
ences and intentions of authors of smart contracts and, on the other hand,
the run-time properties of those smart contracts?
3. How can properties of trust models be applied to verify computations in
permission-less blockchains?
Question 1 is analysed in two steps. First, the applicability of agent-based
trust models for smart contracts is evaluated by deducing their strong and weak
notions based on agent theory. Second, a trust model suitable for smart contracts
in permission-less blockchains is developed, based on a review of existing multi-
agent system trust models [23]. Question 2 is analysed deductively, based on
literature on decision theory and decision analysis, and on limitations of formal
representations of preference, and their logical closure, e.g., what can be derived
from them. Question 3 is investigated instrumentally, by developing an algo-
rithm for verifiable computations. The development of the algorithm followed a
deductive method of merging verifiable computation concepts using blockchains
[34] [29] with cloud and distributed systems research [6] [7]. This revolves around
preserving privacy of user data, whereby aspects of the blockchain are used to
enforce the algorithm [34], and on verifiable computation for Ethereum using
computation services inside the blockchain [29]. In the latter, a verification
algorithm with dispute resolution and an incentive layer was suggested; we
critically assessed its assumptions to develop a new algorithm, since the
proposal has two practical issues. First, the verification game includes a
'jackpot' to reward solvers and verifiers for their work. This introduces an
incentive for solvers and verifiers to collude and steal the jackpot without
providing a correct solution. Second, they propose to implement the computation
tasks in C, C++, or Rust code using the Lanai interpreter implemented as
a smart contract on Ethereum. This limits the flexibility of computation services
by forcing them to use one of the three programming languages. The objective
of the algorithm presented here is to achieve:
1. execution of arbitrary computations requested from a smart contract in
Ethereum, and executed outside the blockchain;
2. verification of the computation result achievable within reasonable time, i.e.,
O(n);
3. guarantees that the result of the computation is correct without having to
trust the providing service.
Our development was experimental and explorative. Different parameters and
the agents they pertain to were first considered in a pen and paper exercise,
then validated via qualitative assessment as well as quantitative analysis. The
quantitative experiments constitute an evaluation basis for the last two algorithm
objectives.
3 Explicating Smart Contracts
Consensus protocols are used to decide upon the state of the distributed ledger
[21]. In permission-less blockchains, this ledger is accessible to anyone
participating in the network and, through blockchain explorers, even to entities
outside the network. This means that anyone can see, for example, which public
key owns the most Ether. Also, each transaction can be inspected, making it
possible for participating parties to monitor the progress of their transactions.
To provide an incentive to the miner and prevent unnecessary changes to the
ledger, blockchains introduce fees on executing transactions [21]. In Ethereum,
the blockchain stores transactions and the code of smart contracts as well as
their state. Hence, the state of a smart contract needs to be updated in the
same fashion as executing a transaction, including fees, consensus, and mining
time.
Smart contracts on Ethereum are executed by each node participating in the
P2P network, and operations are hence restricted to protect the network [31].
To circumvent operational issues (e.g., someone executing a denial-of-service
attack on the network), Ethereum introduces a concept that makes users pay for
the execution of smart contract functions, and the EVM supports only certain
defined operations [31], with each operation coming at a certain cost referred
to as gas. Before executing a state-changing function or a transaction, the user
has to send a certain amount of gas to the function or the transaction. Only
if the provided amount of gas is sufficient will the function or transaction
terminate successfully. Otherwise, the transaction or function terminates
prematurely, with results contingent on the error handling of the smart
contract function.
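As an illustration (a minimal sketch of ours, not the paper's implementation),
the gas cost of the following state-changing function grows with its argument,
so the caller must attach enough gas for the whole loop:

pragma solidity ^0.4.18;

// Sketch: each storage write below consumes gas. If the caller
// attaches too little gas, execution terminates prematurely and all
// state changes of the transaction are rolled back.
contract GasExample {
    uint[] public values;

    function store(uint n) public {
        for (uint i = 0; i < n; i++) {
            values.push(i); // every iteration costs additional gas
        }
    }
}

Read-only calls, such as reading the public values array locally, cost no gas;
only state-changing transactions do.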
We now look at two ways of explicating the roles that smart contracts may
take on. First, the agent metaphor is employed to provide an informal under-
standing in terms of a widely accepted and understood terminology. Second, the
concept of utility is employed to provide a formal understanding of how the pref-
erences and intentions of smart contract owners may be encoded in the contract
itself.
3.1 Smart Contracts as Agent Systems
Agents have certain properties, separable into weak and strong notions [32].
Weak notions include autonomy, pro-activeness, reactivity, and social ability.
Autonomy refers to the smart contract's ability to operate without the direct
intervention of others, and includes control over its actions and state. In
Ethereum, the state of smart contracts is maintained on the blockchain, while
the actions are coded into the smart contract itself. These actions can depend
on the state, thus providing a weak form of autonomy. Pro-activeness describes
goal-directed behaviour by agents taking initiative. This is somewhat limited in
Ethereum, as smart contracts currently act on incoming transactions or calls to
their functions. However, if one perceives an agent as a collection of multiple
different parts, smart contracts might well be extended by external programs
triggering such initiatives. Thereby, the limitations set by Ethereum can be
circumvented and an agent with pro-active notions can be created. The result is
in effect a multi-agent system and can be analysed as such. Reactivity is based
on the perception of an agent's environment and a timely response to changes in
it. By design, smart contracts only have access to the state of the blockchain
they are operating in. Reactivity to state changes in Ethereum is achieved via
events, transactions, or function implementations. Reacting to environment
changes outside of the blockchain (e.g., executing a function based on changes
in stock market prices) requires importing this information onto the blockchain
via, e.g., oracles [4]. Social ability enables the potential interaction with
other agents or humans through a communication language. In Ethereum, users and
contracts are identifiable by their public keys [31], and interaction is possible
through transactions or function calls on smart contracts.
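The weak notions can be mapped onto Solidity features, as in the following
minimal sketch (ours; all names are illustrative):

pragma solidity ^0.4.18;

contract WeakAgent {
    uint public state;                 // autonomy: own state on the ledger

    event StateChanged(uint newState); // reactivity: observable events

    // Social ability: other users and contracts, identified by their
    // addresses, interact with the agent through function calls.
    function act(uint input) public {
        // Autonomy: the action depends on the contract's own state.
        if (input > state) {
            state = input;
            StateChanged(state);       // 0.4.x event emission
        }
    }
}

Pro-activeness would require an external program that watches for such events
and sends new transactions, turning the setup into the multi-agent system
described above.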
Strong notions include properties such as beliefs and intentions, veracity,
benevolence, rationality, and mobility. As mentioned in the introduction above,
pro-activeness is somewhat limited in Ethereum smart contracts, and so these
properties are present only to a limited extent. The two properties veracity,
which refers to not knowingly communicating false information, and rationality,
describing the alignment of the agent's actions to its preferences, both pertain
to the incentives an author of a smart contract might have to develop an agent
which is rational but not truthful, in order to maximise profits. This can be
deliberate, so that the agent correctly encodes the true preferences of the smart
contract owner, or non-deliberate, in which case the owner's preferences might
be inadequately coded. To deal with the uncertainty of agent intentions, three
approaches have emerged. First, security approaches utilise cryptographic
measures to guarantee basic properties such as authenticity, integrity, identities,
and privacy [23]. Within blockchains, this is mainly achieved through cryptographic
measures, which do not provide trust in the content of the messages. Second,
institutional approaches enforce behaviour through a centralised authority. This
entity controls agents' actions and can penalise undesired behaviour. In
blockchains, governance functions enforcing behaviour not defined in the core
protocol do not exist. Third, social approaches utilise reputation and trust
mechanisms to, e.g., select partners, punish undesired behaviour, or evaluate
different strategies. In blockchains, there is no system of trust implemented in
the core protocol that would rate behaviour according to certain standards. These
three approaches are complementary and can be used to create a system of trust
[23]. Trust research and current implementations are primarily focused on the
first two approaches. This allows creating agents on a platform that enforces
these defined trust measures [1] [26] [24] [20].
3.2 Utility and Risk
Some researchers believe that all game-theoretical aspects of making decisions
can be pinned down by logical axiomatizations: it is only a matter of finding the
right axioms. Game-theoretical studies often concentrate on two-person games,
one reason being that many conflicts involve only two protagonists. In any game,
the players may or may not be allowed to cooperate to mutual advantage. If co-
operation is allowed, the generalized theory of n-person games can sometimes
be reduced to the one for two-person games, since any group of cooperating
players may be seen as opposing the coalition of the other players. In the case of
smart contracts, this would allow for an owner of multiple contracts (in effect,
a multi-agent system) to maximize the utility of interplaying contracts by em-
ploying game theory, at least on paper. For a given set of smart contracts, the
problem is how to determine a rule that specifies what actions would have been
optimal for the smart contract owner. Actions could here pertain to details of a
particular contract, or to the order of their execution, for instance. Comparing
different rules measures the risk involved in consistently applying a particular
rule, e.g., a chain of smart contract employment. Formally, we wish to determine
a decision function that minimizes this risk. The simplest case is decision
under certainty. This means that the owner of one or more smart
contracts can predict the consequences of employing them. This represents the
ideal case in which all smart contracts execute as intended. Thus, the owner
simply chooses the alternative whose one and only possible consequence has a
value not less than the value of any other alternative. This seems simple enough,
but it is necessary to investigate a bit further what the value of a consequence
denotes. The preferences of the owner should be compatible with the following
axioms (that A is not preferred to B is henceforth denoted by A ≤ B).
≤ is a weak ordering on the set of preferences P:
A1. (i) Transitivity: If A ≤ B and B ≤ C, then A ≤ C, for all A, B, and C in P.
A1. (ii) Comparability: A ≤ B or B ≤ A, for all A and B in P.
From this, we may derive the relations of indifference and strict preference, and
we state the consistency criteria for these:
A2. (i) A = B is equivalent to A ≤ B and B ≤ A, for all A and B in P.
A2. (ii) A < B is equivalent to A ≤ B and not B ≤ A, for all A and B in P.
However, A1 implies that the owner has to admit to all consequences being
comparable. This is typically not the case in smart contracts, and it becomes
necessary to replace Comparability with Reflexivity, yielding a partial ordering
instead:
A1. (iii) Reflexivity: A ≤ A, for all A in P.
There is much to be gained by representing the preference ordering as a real-
valued order-preserving function. If we cannot find such a function, there is not
much sense in speaking of the numerical value of a sequence of employed smart
contracts, and we might as well toss a coin to decide. Assuming axioms A1
and A2 hold, we must find a function f(X) with the property f(A) ≤ f(B) iff
A ≤ B, which we can always do fairly easily for decisions under certainty [13],
but we now turn to decisions under risk, which is the class of decisions that
normally pertains to owners of smart contracts. In the Bayesian case, with
subjective probabilities, we can think of a smart contract employment S as
consisting of a matrix of probabilities p_1, ..., p_n and their corresponding
consequences c_1, ..., c_n. Then the real-valued function f(X) we seek lets us
compute the value of S as Σ p_i f(c_i). This fixes one possible definition of an
agent as rational, by making it maximize its own utility (in accordance with its
preferences, i.e. with the preferences it codes). Formally, an agent accepts the
utility principle iff it assigns the value Σ p_i v_i to S, given that it has
assigned the value v_i to c_i. Any ordering Ω of the alternatives is compatible
with the principle of maximizing the expected utility iff a Ω b implies that the
expected value of a is higher than the expected value of b. In other words, we
are now free to start experimenting with various axiom systems for governing
the owners, or at least recommending actions to them based on the smart
contracts they have at hand. While game-theoretic axiom systems have been
favoured among agent researchers, a wide variety of axiomatizations are surveyed
in the more formal literature [12] [19].
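As a worked illustration with hypothetical numbers, consider an owner comparing
two smart contract employments S_1 and S_2:

% Hypothetical numbers: S_1 yields value 10 with probability 0.9 and
% value -50 with probability 0.1; S_2 yields 30 with probability 0.6
% and -5 with probability 0.4.
\[
  f(S_1) = \sum_i p_i f(c_i) = 0.9 \cdot 10 + 0.1 \cdot (-50) = 4,
  \qquad
  f(S_2) = 0.6 \cdot 30 + 0.4 \cdot (-5) = 16.
\]
% An owner who accepts the utility principle employs S_2, since
% f(S_1) \leq f(S_2) corresponds to S_1 \leq S_2.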
4 A Trust Model for Smart Contracts
From the 25 models covered in [23], five consider global visibility and nine con-
sider cheaters. The overlap of those models leaves one model focusing on repu-
tation of actors in electronic markets [25]. The core idea is to use incentives to
encourage truthful behaviour of agents in the system by social control. Social
control implies that actors in the network are responsible for enforcing secure
interactions instead of using an external or global authority.
Assuming a rational agent, there is a possible motivation to break protocol
if this maximizes utility. Speculation-free protocols have been recommended for
some agent applications, but the Ethereum smart contract environment is much
too complex to allow for such control features, which require equilibrium markets
[27]. To provide a certain level of trust, new agents have to deposit a certain
cryptocurrency value for participation, and this deposit is returned when an
agent decides to stop participating. However, dishonest or corrupt agents can be
penalised by either destroying their deposit or distributing it to honest agents.
This is in line with norm-regulation of agent systems [2] and does not make any
other strong requirements on models. Norm-regulation has been formalized for
multi-agent systems, e.g., in the form of algebra [22].
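A minimal Solidity sketch of such a deposit scheme follows (ours; the flat
deposit amount, the single slashing authority, and burning via the zero address
are simplifying assumptions):

pragma solidity ^0.4.18;

contract DepositRegistry {
    uint public constant DEPOSIT = 1 ether;
    address public authority; // e.g. an arbiter contract
    mapping(address => uint) public deposits;

    function DepositRegistry(address _authority) public {
        authority = _authority;
    }

    // New agents join by locking a deposit.
    function join() public payable {
        require(msg.value == DEPOSIT && deposits[msg.sender] == 0);
        deposits[msg.sender] = msg.value;
    }

    // Honest agents can stop participating and recover their deposit.
    function leave() public {
        uint amount = deposits[msg.sender];
        require(amount > 0);
        deposits[msg.sender] = 0;
        msg.sender.transfer(amount);
    }

    // Dishonest agents are penalised: their deposit is either
    // destroyed (sent to the unspendable zero address) or
    // distributed to an honest agent.
    function slash(address cheater, address beneficiary) public {
        require(msg.sender == authority);
        uint amount = deposits[cheater];
        require(amount > 0);
        deposits[cheater] = 0;
        if (beneficiary == address(0)) {
            address(0).transfer(amount);
        } else {
            beneficiary.transfer(amount);
        }
    }
}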
Gossiping can be used to communicate experiences with other agents in a
P2P fashion and thereby establish trust or reputation. In the protocols of
Bitcoin and Ethereum, gossiping is the basis for propagating new transactions
and subsequently validating blocks [10]. A similar approach can be taken for smart
contracts, whereby agents could exchange knowledge or experiences of other
agents [8]. Reputation of an agent is based on its interaction with other agents,
whereby agents mutually need to sign a transaction if they are satisfied with the
interaction. Over time, an agent collects these signed transactions to build up
its reputation. However, this model is prone to colluding agents boosting their
reputation [5]. Trust can also be implemented by relying on independent review
agents [15] [16] [9]. However, both gossiping and review agents are subject to
detection rate issues.
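The mutual-signature reputation idea could, for instance, be sketched as follows
(our illustration; it deliberately ignores the collusion and detection-rate
problems just mentioned):

pragma solidity ^0.4.18;

// A rating only counts once both parties have confirmed the same
// interaction, mimicking the mutually signed transactions above.
contract MutualReputation {
    mapping(address => uint) public reputation;
    // pending[a][b] is true if a has confirmed an interaction with b
    mapping(address => mapping(address => bool)) public pending;

    function confirm(address partner) public {
        if (pending[partner][msg.sender]) {
            // Both sides have confirmed: credit both and reset.
            pending[partner][msg.sender] = false;
            reputation[msg.sender] += 1;
            reputation[partner] += 1;
        } else {
            pending[msg.sender][partner] = true;
        }
    }
}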
5 Applying Trust Measures to Verifiable Computation
Due to the restrictions set by the EVM (i.e. gas cost of operations), implementing
functions in Ethereum with a complexity greater than O(n) is not feasible. To
circumvent these limitations, computations can be executed outside of Ethereum
and results stored on the blockchain. We present an algorithm to achieve ver-
ifiable computations outside of Ethereum through measures presented in the
trust model. Agents’ rational behaviour can be aligned to the overall objective
of the algorithm. The actors involved in the verifying computation algorithm
are presented in Fig.1. Users request solving a specific computation problem.
They provide an incentive for solving and verifying the problem. Computation
services provide computation power in exchange for receiving a compensation.
For participation, they are providing a deposit. One of the computation services
acts as a solver and at least one other computation service acts as a verifier.
Judges decide whether basic mathematical operations are correct or not. They
are neutral parties and are not receiving any incentives. An arbiter enforces the
verifiable computation algorithm when users request a new computation.
Users are assumed to be agents whose objective is to receive a correct
computation. They are required to send a fee that rewards solvers and verifiers
for executing the computation. This fee depends on the complexity of the
computation to be performed, the complexity of the input data, and the number
of verifiers. Computation services are assumed to optimise their incentive;
they might purposely communicate false information to maximise it. Further,
enough computation services are available (i.e. a minimum of two) to execute
the computation with at least one verifier. The probability of detecting a false
computation depends on the number of verifiers in the algorithm. The arbiter
and judge are trusted by the participating parties to enforce the algorithm and
reach a verdict, respectively. This is a strong assumption in a trustless system
and needs to be justified. To limit their incentive for undesired behaviour (i.e.
cheating) in the algorithm, these two agents are not rewarded for taking part in
the computations. Their work is thus pro bono, and only their operational cost
in gas is covered.

Fig. 1: Overview of actors in the verification algorithm.
Alternative approaches, not covered further in this paper, limit or eliminate
trust in the arbiter and judge. First, following the trust-is-risk approach
[18], a network of trusted entities with a fixed amount of deposited value could
be created to find arbiters and judges commonly trusted by computation services
and users. Second, a user might create their own arbiter and judge, while
storing the fee in an escrow contract between user and computation services.
The computation services store an encrypted hash of the result in the escrow
contract. Upon completion of the protocol, the user issues the payment and
receives the result in full. Third, the protocol could be executed with different
test cases, with results publicly stored on the blockchain. Thus, a user and a
computation service could verify correct execution of the protocol, provided
arbiter and judge remain unchanged.
5.1 Algorithm
The algorithm is initiated when a user requests a computation by sending the
input data, the operation to be performed, and the desired number of verifiers
to the arbiter. One computation service is randomly determined as a solver, and
the other(s) are randomly assigned as verifiers by the arbiter. The user instructs
the arbiter to forward the input data and operation to the computation services'
smart contracts, triggering the off-chain computation by sending a request
through an oracle. This requires sending a fee for the computation as well as
providing the fee for using the oracle. Verifiers and the solver report their
results back to the arbiter. Once all results are reported back, the user can
trigger the arbiter to compare the available results. If the solver and all
participating verifiers agree on one solution, the algorithm is finished and the
user can collect the result. However, if at least one verifier disagrees with the
solver, the user can initiate a dispute resolution algorithm. The dispute
resolution is inspired by a technique introduced in [7], [6], and [29]: splitting
the operation into simple parts with intermediary results until the computation
is simple enough for the judge to solve. Overall and intermediary results are
stored in a Merkle tree for the solver and for each verifier challenging the
solver. The comparison is achieved through a binary search on the trees. The
root of a tree encodes the overall result, while the leaves in the lowest layer
encode the input data. Leaves in between represent intermediary results.
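A condensed Solidity sketch of the request-report-compare flow follows (ours;
oracle interaction, fee payout, the judge, and the Merkle-tree dispute game are
omitted, and the block-hash randomness shown here is known to be manipulable):

pragma solidity ^0.4.18;

contract Arbiter {
    struct Task {
        address user;
        uint a;
        uint b;                 // input data (multiplication example)
        uint fee;
        address solver;
        address[] verifiers;
        mapping(address => uint) results;
        uint reported;
        bool disputed;
    }

    address[] public services;  // registered computation services
    Task[] public tasks;

    function register() public { services.push(msg.sender); }

    // A user requests a computation; roles are assigned (pseudo-)randomly.
    function request(uint a, uint b, uint nVerifiers) public payable returns (uint id) {
        require(services.length >= nVerifiers + 1);
        id = tasks.length;
        tasks.length += 1;
        Task storage t = tasks[id];
        t.user = msg.sender;
        t.a = a;
        t.b = b;
        t.fee = msg.value;
        uint seed = uint(block.blockhash(block.number - 1));
        t.solver = services[seed % services.length];
        for (uint i = 0; i < nVerifiers; i++) {
            t.verifiers.push(services[(seed + i + 1) % services.length]);
        }
    }

    // Solver and verifiers report the result of their off-chain computation.
    function report(uint id, uint result) public {
        Task storage t = tasks[id];
        t.results[msg.sender] = result;
        t.reported += 1;
    }

    // The user triggers the comparison once all results are in; any
    // disagreement flags the task for dispute resolution (not shown).
    function compare(uint id) public returns (bool agreed) {
        Task storage t = tasks[id];
        require(msg.sender == t.user && t.reported == t.verifiers.length + 1);
        agreed = true;
        for (uint i = 0; i < t.verifiers.length; i++) {
            if (t.results[t.verifiers[i]] != t.results[t.solver]) {
                agreed = false;
                t.disputed = true;
            }
        }
    }
}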
5.2 Interactions
Under the assumption that arbiter, judge, and user behave rationally and follow
the algorithm, computation services have a combination of four different
behaviours with respect to their roles as solver S or verifier V. The behaviours
are summarised in Table 1, with verifiers either accepting the solution (i.e.
V_A) or challenging the solution (i.e. V_C). S profits the most if it provides a
correct solution which is challenged by V, while V profits the most when S
provides a false solution and V is able to challenge it. The problematic case is
that the incentives for accepting a false or a correct solution are the same. To
prevent this from happening, we consider the behaviour of V and S in detail.
Table 1: Possible behaviours of computation services as solver S and verifier V,
whereby all verifiers behave the same.

                | S: correct solution          | S: false solution
V: challenge    | S receives S fee share;      | S receives nothing;
                | S receives V_C fee share;    | V_C receives V_C fee share;
                | V_C receives nothing         | V_C receives S fee share
V: accept       | S receives S fee share;      | S receives S fee share;
                | V_A receives V_A fee share   | V_A receives V_A fee share
Case 1: S provides a correct solution and no V challenges the solution.
Agents behave as intended by the algorithm. As no V challenges the solution,
the judge is not triggered and the fee is split equally between S and the
involved V.
Case 2: S provides a correct solution and at least one V challenges the
solution. This is undesired behaviour, since the provided solution is actually
correct. It triggers the dispute resolution, with a verdict by the judge
determining S as correct. In this case, S profits from the extra work due to the
additional dispute steps by receiving the fee share of V_C. V_A receive their
part of the fee, since their amount of work remained the same.
Case 3: S provides a false solution and no V challenges the solution. S and
all V would receive their shares of the fee. This is undesired behaviour in
the algorithm, as it would flag a false result as correct. To prevent this from
happening, two measures are used. First, computation services do not know their
roles in advance, as they are randomly assigned by the arbiter. If several
services collude to provide false solutions, all of them would need to work
together to provide the “same wrong” result. However, if just one V_C exists, it
profits by gaining the fee shares of itself, S, and all V_A. Thus, second, the
user is able to determine the number of V for each computation. The probability
of having at least one V_C depends on the prior probability p of a V providing
correct or false solutions and the number n of V in the computation.
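Concretely (matching the expected values reported in Table 2 below):

% With n verifiers and prior probability p of a dishonest service, the
% probability that a false solution is challenged by at least one
% verifier is
\[
  P(\text{at least one } V_C) = 1 - p^{n}.
\]
% Accepting a false solution requires a dishonest solver and n
% dishonest verifiers colluding on the same wrong result, which
% happens with probability p^{n+1}; e.g. for p = 0.5 and n = 3:
% 0.5^{4} = 6.25\%.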
Case 4: S provides a false solution and at least one V_C challenges the
solution. Here, S and V_A do not receive their shares of the fee, which go to
all V_C. This is based on the verdict by the judge. However, this is also an
undesired case, since the user does not receive a solution to their computation.
Considering the four scenarios, a rational S tries to receive its share of the
incentive and get a chance to receive the fees of any V challenging a correct
solution. The strategy for S, considering V, is to provide a correct solution to
the problem. V profits the most from challenging a false solution. A rational V
provides the correct solution to a computation to receive its fee share, or to
have the chance of becoming a challenger of a false solution. Arguably, S and V
could try to deliver a false solution to save on computation cost or trick the
user. In this case, the probability of discovering the false solution relies on
the number of Vs and the prior probability of cheating Vs. If a V delivers a
false solution, it must be the same solution as S's in order not to trigger the
dispute resolution. Moreover, by destroying the services' deposits and excluding
them from the algorithm after detected cheating, the prior probability of having
such a service can be reduced.
5.3 Implementation and Experiments
The algorithm was implemented using Solidity smart contracts and AWS Lambda
functions as external computation services. The quantitative analysis is
conducted by executing experiments with one exemplary type of computation. The
computation is a multiplication of two integers, to simplify the verification
steps in the algorithm. The results depend on external and internal parameters
of the algorithm. Externally, the prior probability of computation services
providing false solutions is considered. Internally, the number of verifiers the
user requests for each computation is examined. Experiments are executed for
each configuration of parameters to determine gas consumption and the outcome
of the computation. Assuming a potentially large number of computation services
(>10,000), this gives a confidence level of 95% and a maximum confidence
interval of 3.1 for the three different prior probabilities. Before each
iteration of the experiment, the environment is initialised with a new set of
smart contracts. Experiments are executed within TestRPC [11].
We report the amount of gas used as a proxy for the time and space complexity
of the algorithm, as gas consumption is determined by the type and number of
operations in the EVM. This excludes the time used for sending transactions or
calls. Independent of the prior probability of false solutions, the mean gas
consumption µ increases linearly, as presented in Fig. 2. Further, σ decreases
with an increasing number of verifiers. At a low number of verifiers, the dispute
resolution is less likely to be triggered, leading to a higher σ in gas
consumption. With an increasing number of verifiers, the probability of
triggering the dispute resolution increases; as the dispute resolution is then
almost always triggered, σ is reduced.
Fig. 2: Total amount of gas used by the algorithm for different numbers of
verifiers and percentages of computation services providing incorrect solutions
(panels: (a) 30%, (b) 50%, (c) 70% of computation services providing incorrect
solutions). Each combination of a specific number of verifier(s) and percentage
of computation services with incorrect solutions with N = 1000.
The algorithm is tested for three different cases of verification. First, the
algorithm can accept a correct solution. Second, each verifier agrees with the
solver although the solution is not correct; the dispute resolution is not
triggered and the user receives a false solution marked as correct. Third, at
least one verifier disagrees with the solver providing a false solution, and the
judge rules that the solver's solution is false. For the second case, invoking
the dispute resolution depends on the prior probability of computation services
providing false solutions, described by P(V_C) = 1 − p^n; the expected values in
Table 2 correspond to the acceptance probability p^(n+1) derived above. The
experiments shown in Table 2 indicate that the expected and actual values are
similar for p = 0.5. However, for p = 0.3 and p = 0.7 the actual values are
below the expected ones. Since the experiment is executed with a confidence
level of 95% and an interval of 3.1, these deviations are attributed to the
sample size not being perfectly representative of the actual distribution. Also,
the random assignment of false and correct computation services could be a cause
of the higher detection rate.
Table 2: Comparison of expected and actual probabilities of accepting a false
solution in the algorithm.
Prior p   Verifiers n   Expected false [%]   Actual false [%]
0.3 1 9.0 2.7
0.3 2 2.7 0.0
0.3 3 0.81 0.0
0.3 4 0.243 0.0
0.3 5 0.0729 0.0
0.3 6 0.02187 0.0
0.5 1 25.0 28.6
0.5 2 12.5 12.2
0.5 3 6.25 4.6
0.5 4 3.125 1.2
0.5 5 1.5625 0.0
0.5 6 0.78125 0.0
0.7 1 49.0 41.2
0.7 2 34.3 24.4
0.7 3 24.01 12.1
0.7 4 16.807 4.9
0.7 5 11.7649 2.9
0.7 6 8.23543 0.0
6 Discussion
Within the presented trust model, deposits are simple to implement in permission-
less blockchains that already have a cryptocurrency. However, the deposit value
can be volatile. This poses two risks: either the escrow or independent entity
maintaining the deposits may be motivated to steal them, or the deposit value
might become so small that its trust-building attribute vanishes. To prevent
this, the deposit value could be bound to a fiat currency or a stable asset. The
deposit can also be dynamically adjusted, and deposits kept only for a short
time or a single iteration of interactions. Gossiping could be used as a basis to communicate
experiences with other agents. In permission-less blockchains, the agents can
use a common protocol to exchange this information and use a rating approach
[33]. Yet, gossiping can be misused by agents to boost their own reputations by
executing Sybil attacks. Review agents can be used that reach a verdict on a
specific issue or problem. Their implementation is simple and potential scenarios
to manipulate agents’ reputations are prevented. However, the judge or review
agent needs to be trusted by other agents. The algorithm is based on its actors
and their interaction. The idea of arbiter, judge, user, and computation services
is strongly influenced by [29] and [34]. The main differences lie in the idea of
using a jackpot to reward verifiers, as well as in the implementation either
entirely on Ethereum or using external computation services. Moreover, the
algorithm differs from [34] in that its goal is to deliver verifiable computations
for entities (i.e. users or smart contracts) on the blockchain, while [34]
primarily delivers privacy-preserving computations, whereby the blockchain
enables the algorithm.
The algorithm cannot guarantee detection of false solutions. It is based on the
assumption that solvers and verifiers behave as desired (i.e. delivering correct
solutions), as their strategy is aligned with the incentives provided by the
algorithm. This assumption is based on game-theoretic properties. The algorithm
leaves no dominant strategy considering the interactions in Table 1: S can
choose either to provide a correct or a false solution, and V can challenge or
accept. Only when considering both agents does a Nash equilibrium exist. If
there is a (high) probability that a V_C exists, the only valid strategy for S
is to provide a correct solution. Consequently, V in turn has to provide a
correct solution, i.e. accept a correct S and challenge a false S. In the
algorithm, both S and V providing correct solutions gives a Pareto-efficient
result. If they change their strategy under the assumption that no V_C exists,
their utility remains the same. However, a V has an incentive to challenge a
false solution, which would increase its utility and reduce the utility of the
others. Social welfare considers the sum of all agents' utilities depending on
their strategies; it can be disregarded in permission-less blockchains, since
each agent wants to optimise its own utility independent of the overall utility.
Specifically, the overall utility is potentially unknown to an individual agent,
since it is unable to determine with certainty the utility of other agents.
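To make the equilibrium argument explicit, write f_S and f_V for the fee shares
of the solver and a verifier, and ignore deposits and computation costs; Table 1
then induces the following payoff bimatrix (our simplification, treating all
verifiers as one player):

\[
\begin{array}{l|cc}
                  & V \text{ challenges}  & V \text{ accepts} \\ \hline
S \text{ correct} & (f_S + f_V,\; 0)      & (f_S,\; f_V) \\
S \text{ false}   & (0,\; f_S + f_V)      & (f_S,\; f_V)
\end{array}
\]
% Against a challenger, S strictly prefers a correct solution; against
% a false S, V strictly prefers to challenge; against a correct S, V
% strictly prefers to accept. Hence (correct, accept) is the unique
% pure-strategy Nash equilibrium of this simplified game, and it is
% Pareto efficient.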
7 Conclusion
On permission-less blockchains like Ethereum, rational agents realised as smart
contracts code the preferences of their owners. This could motivate maximising
their utility through dishonest behaviour, and hence further social control
mechanisms are required. We have presented a trust model for smart contracts in
permission-less blockchains that incorporates state-of-the-art research into
deposits, reputation, and review agents for social control. Trust can be
extended to entities outside of permission-less blockchains by applying the
trust measures presented in our model. An example application is an algorithm
implementing verifiable computation. The model includes users requesting
computational tasks, computation services providing solutions and acting either
as solver or verifier, arbiters enforcing the algorithm, and judges resolving
disputes. Due to the incentive structure and the potential penalty caused by
cheating, providing correct solutions to the computation task is a Nash
equilibrium. Under the assumption that arbiter and judge are trusted, the
algorithm detects false solutions with a probability depending on the prior
probability of dishonest services and the number of verifiers. The algorithm is
realised as Solidity smart contracts and AWS Lambda functions, implementing
verification of the multiplication of two integers. Experiments show that with
six verifiers the algorithm detects all cheaters for prior probabilities of 30%,
50%, and 70% dishonest computation services. Experiments further show that the
algorithm performs with overall linear time and space complexity depending on
the number of verifiers.
As future work, we leave the elimination of the trust requirements regarding
arbiter and judge through a fully decentralised algorithm.
Acknowledgement. The authors thank Babak Sadighi and Erik Rissanen for
comments and discussions, and Daniel Gillblad for important support for Magnus
Boman's part of the project. The authors also thank Outlier Ventures Ltd. for
partly funding Dominik Harz's share of the project.
References
1. Balakrishnan, V., Majd, E.: A Comparative Analysis of Trust Models for Multi-
Agent Systems. Lecture Notes on Software Engineering 1(2), 183–185 (2013)
2. Boman, M.: Norms in artificial decision making. Artificial Intelligence and Law
7(1), 17–35 (1999)
3. Buterin, V.: A Next-Generation Smart Contract and Decentralized Application
Platform (2013), https://github.com/ethereum/wiki/wiki/White-Paper
4. Buterin, V.: Chain Interoperability. Tech. Rep. 1, R3CEV (2016)
5. Can, A.B., Bhargava, B.: SORT: A Self-ORganizing Trust Model for Peer-to-Peer
Systems. IEEE Transactions on Dependable and Secure Computing 10(1), 14–27
(2013)
6. Canetti, R., Riva, B., Rothblum, G.N.: Practical delegation of computation using
multiple servers. In: Proceedings of the 18th ACM conference on Computer and
communications security - CCS ’11. p. 445. ACM Press, New York, New York,
USA (2011)
7. Canetti, R., Riva, B., Rothblum, G.N.: Refereed delegation of computation. Infor-
mation and Computation 226, 16–36 (2013)
8. Carboni, D.: Feedback based Reputation on top of the Bitcoin Blockchain (2015)
9. Cerutti, F., Toniolo, A., Oren, N., Norman, T.J.: Context-dependent Trust Deci-
sions with Subjective Logic (2013)
10. Decker, C., Wattenhofer, R.: A Fast and Scalable Payment Network with Bitcoin
Duplex Micropayment Channels. In: Pelc, A., Schwarzmann, A.A. (eds.) Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelli-
gence and Lecture Notes in Bioinformatics), vol. 9212, pp. 3–18. Springer Interna-
tional Publishing, Cham (2015)
11. Ethereum: Ethereum TestRPC (2017), https://github.com/ethereumjs/testrpc
12. Fishburn, P.: Foundations of Decision Analysis: Along the way. Management Sci-
ence 35, 387–405 (1989)
13. French, S. (ed.): Decision Theory: An Introduction to the Mathematics of Ratio-
nality. Halsted Press, New York, NY, USA (1986)
14. Hoffberg, S.: Multifactorial optimization system and method (Apr 19 2007),
https://www.google.com/patents/US20070087756, US Patent App. 11/467,931
15. Huynh, T.D., Jennings, N.R., Shadbolt, N.R.: An integrated trust and reputation
model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems
13(2), 119–154 (2006)
16. Jakubowski, M., Venkatesan, R., Yacobi, Y.: Quantifying Trust (2010)
17. Kosba, A., Miller, A., Shi, E., Wen, Z., Papamanthou, C.: Hawk: The Blockchain
Model of Cryptography and Privacy-Preserving Smart Contracts. In: 2016 IEEE
Symposium on Security and Privacy (SP). vol. 2015, pp. 839–858. IEEE (2016)
18. Litos, O.S.T., Zindros, D.: Trust Is Risk: A Decentralized Financial Trust Platform.
IACR Cryptology ePrint Archive 2017, 156 (2017)
19. Malmnäs, P.E.: Axiomatic Justifications of the Utility Principle. Synthese 99(2)
(1994)
20. Mui, L., Mohtashemi, M., Halberstadt, A.: A computational model of trust and
reputation. In: HICSS. Proceedings of the 35th Annual Hawaii International Con-
ference on System Sciences. vol. 5, pp. 2431–2439. IEEE (2002)
21. Narayanan, A., Bonneau, J., Felten, E., Miller, A., Goldfeder, S.: Bitcoin and
Cryptocurrency Technologies - Draft. Princeton University Press, Princeton, NJ,
USA (2016)
22. Odelstad, J., Boman, M.: Algebras for agent norm-regulation. Annals of Mathe-
matics and Artificial Intelligence 42(1), 141–166 (2004)
23. Pinyol, I., Sabater-Mir, J.: Computational trust and reputation models for open
multi-agent systems: A review. Artificial Intelligence Review 40(1), 1–25 (2013)
24. Ramchurn, S.D., Huynh, D., Jennings, N.R.: Trust in multi-agent systems. The
Knowledge Engineering Review 19(01), 1–25 (2004)
25. Rasmusson, L., Jansson, S.: Simulated social control for secure Internet commerce.
Proceedings of the 1996 workshop on New security paradigms - NSPW ’96 pp.
18–25 (1996)
26. Sabater, J., Sierra, C.: Review on computational trust and reputation models.
Artificial Intelligence Review 24(1), 33–60 (2005)
27. Sandholm, T., Ygge, F.: On the Gains and Losses of Speculation in Equilibrium
Markets. In: Proceedings IJCAI’97. pp. 632–638. AAAI Press (1997)
28. Szabo, N.: Formalizing and Securing Relationships on Public Networks. (1997),
http://ojphi.org/ojs/index.php/fm/article/view/548/469
29. Teutsch, J., Reitwießner, C.: A scalable verification solution for blockchains (2017)
30. Vukolić, M.: Hyperledger fabric: towards scalable blockchain for business. Tech.
Rep. Trust in Digital Life 2016, IBM Research (2016), https://www.zurich.ibm.com/dccl/papers/cachin_dccl.pdf
31. Wood, G.: Ethereum: a secure decentralised generalised transaction ledger.
Ethereum Project Yellow Paper pp. 1–32 (2014)
32. Wooldridge, M.: An Introduction to MultiAgent Systems. Wiley Publishing, 2nd
edn. (2009)
33. Zhou, R., Hwang, K., Cai, M.: GossipTrust for Fast Reputation Aggregation in
Peer-to-Peer Networks. IEEE Transactions on Knowledge and Data Engineering
20(9), 1282–1295 (2008)
34. Zyskind, G.: Efficient Secure Computation Enabled by Blockchain Technology.
Master thesis, Massachusetts Institute of Technology (2016)