Sequentially Composable Rational Proofs
Matteo Campanelli and Rosario Gennaro
The City University of New York, New York, USA
Abstract. We show that Rational Proofs do not satisfy basic compo-
sitional properties in the case where a large number of “computation
problems” are outsourced. We show that a “fast” incorrect answer is
more remunerable for the prover, by allowing him to solve more prob-
lems and collect more rewards. We present an enhanced definition of
Rational Proofs that removes the economic incentive for this strategy
and we present a protocol that achieves it for some uniform bounded-
depth circuits.
1 Introduction
The problem of securely outsourcing data and computation has received wide-
spread attention due to the rise of cloud computing: a paradigm where businesses
lease computing resources from a service (the cloud provider) rather than main-
tain their own computing infrastructure. Small mobile devices, such as smart
phones and netbooks, also rely on remote servers to store and perform compu-
tation on data that is too large to fit in the device.
It is by now well recognized that these new scenarios have introduced new
security problems that need to be addressed. When data is stored remotely,
outside our control, how can we be sure of its integrity? Even more interestingly,
how do we check that the results of outsourced computation on this remotely
stored data are correct? And how do we perform these tests while preserving the
efficiency of the client (i.e. avoiding retrieving the whole data and having the client
perform the computation), which was the initial reason data and computations
were outsourced?
Verifiable Outsourced Computation is a very active research area in Cryptog-
raphy and Network Security (see [8] for a survey), with the goal of designing
protocols where it is impossible (under suitable cryptographic assumptions) for
a provider to “cheat” in the above scenarios. While much progress has been made
in this area, we are still far from solutions that can be deployed in practice.
A different approach is to consider a model where “cheating” might actually
be possible, but the provider would have no motivation to do so. In other words
while cryptographic protocols prevent any adversary from cheating, one considers
protocols that work against rational adversaries whose motivation is to maximize
a well defined utility function.
This work was supported by NSF grant CNS-1545759.
© Springer International Publishing Switzerland 2015
MHR Khouzani et al. (Eds.): GameSec 2015, LNCS 9406, pp. 270–288, 2015.
DOI: 10.1007/978-3-319-25594-1_15
Previous Work. An earlier work in this line is [3] where the authors describe
a system based on a scheme of rewards [resp. penalties] that the client assesses
to the server for computing the function correctly [resp. incorrectly]. However
in this system checking the computation may require re-executing it, something
that the client does only on a randomized subset of cases, hoping that the penalty
is sufficient to incentivize the server to perform honestly. Moreover the scheme
might require an “infinite” budget for the rewards, and has no way to “enforce”
payment of penalties from cheating servers. For these reasons the best application
scenario of this approach is the incentivization of volunteer computing schemes
(such as SETI@Home or Folding@Home), where the rewards are non-fungible
“points” used for “social-status”.
Because verification is performed by re-executing the computation, in this
approach the client is “efficient” (i.e. does “less” work than the server) only in
an amortized sense, where the cost of the subset of executions verified by the
client is offset by the total number of computations performed by the server.
This implies that the server must perform many executions for the client.
Another approach, instead, is the concept of Rational Proofs introduced by
Azar and Micali in [1] and refined in subsequent papers [2,6]. This model cap-
tures, more accurately, real-world financial “pay-for-service” transactions, typi-
cal of cloud computing contractual arrangements, and security holds for a single
“stand-alone” execution.
In a Rational Proof, given a function f and an input x, the server returns
the value y = f(x), and (possibly) some auxiliary information, to the client. The
client will in turn pay the server for its work with a reward which is a function of
the messages sent by the server and some randomness chosen by the client. The
crucial property is that this reward is maximized in expectation when the server
returns the correct value y. Clearly a rational prover who is only interested in
maximizing his reward, will always answer correctly.
The most striking feature of Rational Proofs is their simplicity. For example
in [1], Azar and Micali show single-message Rational Proofs for any problem in
#P, where an (exponential-time) prover convinces a (poly-time) verifier of the
number of satisfying assignments of a Boolean formula.
For the case of “real-life” computations, where the Prover is polynomial and
the Verifier is as efficient as possible, Azar and Micali in [2] show d-round
Rational Proofs for functions computed by (uniform) Boolean circuits of depth d,
for d = O(log n) (which can be collapsed to a single round under some well-defined
computational assumption as shown in [6]). The problem of rational proofs for
any polynomial-time computable function remains tantalizingly open.
Our Results. Motivated by the problem of volunteer computation, our first
result is to show that the definition of Rational Proofs in [1,2] does not satisfy
a basic compositional property which would make them applicable in that sce-
nario. Consider the case where a large number of “computation problems” are
outsourced. Assume that solving each problem takes time T. Then in a time
interval of length T, the honest prover can only solve and receive the reward
for a single problem. On the other hand a dishonest prover, can answer up to
T problems, for example by answering at random, a strategy that takes O(1)
time. To ensure that answering correctly is a rational strategy, we need that at
the end of the T-time interval the reward of the honest prover be larger than
the reward of the dishonest one. But this is not necessarily the case: for some
of the protocols in [1,2,6] we can show that a “fast” incorrect answer is more
remunerable for the prover, by allowing him to solve more problems and collect
more rewards.
The next question, therefore, was to come up with a definition and a protocol
that achieves rationality both in the stand-alone case and in the composition
described above. We first present an enhanced definition of Rational Proofs
that removes the economic incentive for the strategy of fast incorrect answers,
and then we present a protocol that achieves it for the case of some (uniform)
bounded-depth circuits.
2 Rational Proofs
In the following we will adopt a “concrete-security” version of the “asymptotic”
definitions and theorems in [2,6]. We assume the reader is familiar with the
notion of interactive proofs [7].
Definition 1 (Rational Proof). A function f : {0,1}^n → {0,1}^n admits a
rational proof if there exists an interactive proof (P, V) and a randomized reward
function rew : {0,1}* → R≥0 such that

1. (Rational completeness) For any input x ∈ {0,1}^n, Pr[out((P, V)(x)) = f(x)] = 1.
2. For every prover P̃ and any input x ∈ {0,1}^n there exists a δ(x, P̃) ≥ 0
   such that E[rew((P̃, V)(x))] + δ(x, P̃) ≤ E[rew((P, V)(x))].

The expectations and the probabilities are taken over the random coins of the
prover and verifier.

Let ε̃ = Pr[out((P̃, V)(x)) ≠ f(x)]. Following [6] we define the reward gap as

Δ(x) = min_{P̃ : ε̃ = 1} δ(x, P̃),

i.e. the minimum reward gap over the provers that always report the incorrect
value. It is easy to see that for an arbitrary prover P̃ we have δ(x, P̃) ≥ ε̃ · Δ(x).
Therefore it suffices to prove that a protocol has a strictly positive reward gap
Δ(x) for all x.
Examples of Rational Proofs. For concreteness here we show the protocol
for a single threshold gate (readers are referred to [1,2,6] for more examples).
Let G_{n,k}(x_1, ..., x_n) be a threshold gate with n Boolean inputs that evaluates
to 1 if at least k of the input bits are 1. The protocol in [2] to evaluate this
gate goes as follows. The Prover announces the number m̃ of input bits equal
to 1, which allows the Verifier to compute G_{n,k}(x_1, ..., x_n). The Verifier selects
a random index i ∈ [1..n], looks at input bit b = x_i, and rewards the Prover
using Brier’s Rule BSR(p̃, b), where p̃ = m̃/n is the probability claimed by the
Prover that a randomly selected input bit is 1. Then

BSR(p̃, 1) = 2p̃ − p̃^2 − (1 − p̃)^2 + 1 = 2p̃(2 − p̃)
BSR(p̃, 0) = 2(1 − p̃) − p̃^2 − (1 − p̃)^2 + 1 = 2(1 − p̃^2)

Let m be the true number of input bits equal to 1, and p = m/n the corresponding
probability; then the expected reward of the Prover is

p · BSR(p̃, 1) + (1 − p) · BSR(p̃, 0)   (1)

which is easily seen to be maximized for p̃ = p, i.e. when the Prover announces
the correct result. Moreover one can see that when the Prover announces a wrong
m̃ his reward goes down by 2(p − p̃)^2 ≥ 2/n^2. In other words, for all n-bit inputs
x we have Δ(x) = 2/n^2, and if a dishonest Prover P̃ cheats with probability ε̃
then δ(x, P̃) ≥ 2ε̃/n^2.
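To make Brier’s Rule concrete, the following small numeric sketch (the function names are ours, not from the paper) checks that the expected reward of Eq. 1 is maximized by the truthful announcement, and that any wrong announcement loses at least 2/n^2:

```python
# Numeric sketch of the threshold-gate reward (function names are ours).

def bsr(p_claimed, bit):
    """Brier's rule BSR(p~, b) for an observed bit b."""
    if bit == 1:
        return 2 * p_claimed * (2 - p_claimed)
    return 2 * (1 - p_claimed ** 2)

def expected_reward(m_true, m_claimed, n):
    """Expected reward (Eq. 1) when the true Hamming weight is m_true and
    the Prover announces m_claimed."""
    p, p_claimed = m_true / n, m_claimed / n
    return p * bsr(p_claimed, 1) + (1 - p) * bsr(p_claimed, 0)

n, m = 10, 7
honest = expected_reward(m, m, n)
# Any wrong announcement m~ loses at least 2*(p - p~)^2 >= 2/n^2:
assert all(expected_reward(m, mc, n) <= honest - 2 / n ** 2 + 1e-12
           for mc in range(n + 1) if mc != m)
```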
3 Profit vs. Reward
Let us now define the profit of the Prover as the difference between the reward
paid by the verifier and the cost incurred by the Prover to compute f and engage
in the protocol. As already pointed out in [2,6], the definition of Rational Proof
is sufficiently robust to also maximize the profit of the honest prover and not just
the reward. Indeed consider the case of a “lazy” prover P̃ that does not evaluate
the function: even if P̃ collects a “small” reward, his total profit might still be
higher than the profit of the honest prover P.

Set R(x) = E[rew((P, V)(x))], R̃(x) = E[rew((P̃, V)(x))], and C(x) [resp.
C̃(x)] the cost for P [resp. P̃] to engage in the protocol. Then we want

R(x) − C(x) ≥ R̃(x) − C̃(x).

In general this is not true (see for example the previous protocol), but it is always
possible to change the reward by a multiplier M. Note that if M ≥ C(x)/δ(x, P̃)
then we have that

M(R(x) − R̃(x)) ≥ C(x) ≥ C(x) − C̃(x)

as desired. Therefore by using the multiplier M in the reward, the honest prover
P maximizes its profit against all provers P̃ except those for which δ(x, P̃) ≤
C(x)/M, i.e. those who report the incorrect result with a “small” probability
ε̃ ≤ C(x)/(M · Δ(x)).

We note that M might be bounded from above by budget considerations
(i.e. the need to keep the total reward M · R(x) ≤ B for some budget B). This
points out the importance of a large reward gap Δ(x), since the larger Δ(x) is,
the smaller the probability with which a cheating prover P̃ can report an incorrect
result and still achieve a higher profit than P.
Example. In the above protocol we can assume that the cost of the honest
prover is C(x) = n, and we know that Δ(x) = 2/n^2. Therefore the profit of
the honest prover is maximized against all the provers that report an incorrect
result with probability larger than n^3/(2M), which can be made sufficiently small
by choosing an appropriate multiplier.
Remark 1. If we are interested in an asymptotic treatment, it is important to
notice that as long as Δ(x) ≥ 1/poly(|x|) it is possible to keep a polynomial
reward budget and still maximize the honest prover’s profit against all provers
who cheat with a substantial probability ε̃.
4 Sequential Composition
We now present the main results of our work. First we informally describe our
notion of sequential composition of rational proofs via a motivating example,
and show that the protocols in [1,2,6] do not satisfy it. Then we present our
definition of sequential rational proofs, and a protocol that achieves it for circuits
of bounded depth.
4.1 Motivating Example
Consider the protocol in the previous section for the computation of the function
G_{n,k}(·). Assume that the honest execution of the protocol (including the
computation of G_{n,k}(·)) has cost C = n.

Assume now that we are given a sequence of n inputs x^{(1)}, ..., x^{(i)}, ..., where
each x^{(i)} is an n-bit string. In the following let m_i be the Hamming weight of
x^{(i)} and p_i = m_i/n.

Therefore the honest prover, investing cost C = n, will be able to execute the
protocol only once, say on input x^{(i)}. By setting p̃ = p = p_i in Eq. 1, we see that
P obtains reward

R(x^{(i)}) = 2(p_i^2 − p_i + 1) ≤ 2.

Consider instead a prover P̃ which in the execution of the protocol outputs a
random value m̃ ∈ [0..n]. The expected reward of P̃ on any input x^{(i)} is (by
setting p = p_i and p̃ = m̃/n in Eq. 1 and taking expectations):

R̃(x^{(i)}) = E_m̃[2(1 − p_i + 2p_i m̃/n − (m̃/n)^2)] = 2(1 − (2n+1)/(6n)) = (4n−1)/(3n) ≥ 1.
Therefore by “solving” just two computations P̃ earns more than P. Moreover
the strategy of P̃ has cost 1, and therefore it earns more than P by investing a
lot less cost¹.

Note that “scaling” the reward by a multiplier M does not help in this case,
since both the honest and dishonest prover’s rewards would be multiplied by the
same multiplier, without any effect on the above scenario.

We have therefore shown a rational strategy where cheating many times and
collecting many rewards is more profitable than collecting a single reward for an
honest computation.
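The gap between the honest and the random-answering prover can be checked numerically. The sketch below (function names are ours) evaluates Eq. 1 exactly:

```python
# Numeric check of the counterexample, using the expected reward of Eq. 1
# (function names are ours).

def expected_reward(p_true, p_claimed):
    """p * BSR(p~, 1) + (1 - p) * BSR(p~, 0), i.e. Eq. 1."""
    bsr1 = 2 * p_claimed * (2 - p_claimed)
    bsr0 = 2 * (1 - p_claimed ** 2)
    return p_true * bsr1 + (1 - p_true) * bsr0

n = 100
p = 0.42                                   # any true fraction of 1-bits

honest = expected_reward(p, p)             # = 2(p^2 - p + 1), at most 2
# Per-instance reward of the prover answering a uniformly random m~ in [0..n]:
rand = sum(expected_reward(p, m / n) for m in range(n + 1)) / (n + 1)

assert honest <= 2
assert abs(rand - (4 * n - 1) / (3 * n)) < 1e-9   # -> 4/3, independent of p
assert 2 * rand > honest   # two random answers already beat one honest one
```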
4.2 Sequentially Composable Rational Proofs
The above counterexample motivates the following definition, which formalizes
that the reward of the honest prover P must always be larger than the total
reward of any prover P̃ that invests less computation cost than P.

Technically this is not trivial to do, since it is not possible to claim the above
for any prover P̃ and any sequence of inputs: it is possible that for a given input
x̃ the prover P̃ has “hardwired” the correct value ỹ = f(x̃) and can compute it
without investing any work. We therefore propose a definition that holds for
inputs randomly chosen according to a given probability distribution D, and we
allow for the possibility that the reward of a dishonest prover can be “negligibly”
larger than the reward of the honest prover (for example if P̃ is lucky and such
“hardwired” inputs are selected by D).
Definition 2 (Sequential Rational Proof). A rational proof (P, V) for a
function f : {0,1}^n → {0,1}^n is ε-sequentially composable for an input
distribution D if, for every prover P̃ and every sequence of inputs x, x_1, ..., x_k ∈ D
such that C(x) ≥ Σ_{i=1}^k C̃(x_i), we have that Σ_i R̃(x_i) − R(x) ≤ ε.
A few sufficient conditions for sequential composability follow.
Lemma 1. Let (P, V) be a rational proof. If for every input x it holds that
R(x) = R and C(x) = C for constants R and C, and the following inequality
holds for every P̃ ≠ P and input x ∈ D:

R̃(x) ≤ R · (C̃(x)/C + ε),

then (P, V) is kRε-sequentially composable for D.

Proof. It suffices to observe that, for any k inputs x_1, ..., x_k, the inequality above
implies

Σ_{i=1}^k R̃(x_i) ≤ R · (Σ_{i=1}^k C̃(x_i)/C + kε) ≤ R + kRε,

where the last inequality holds whenever Σ_{i=1}^k C̃(x_i) ≤ C, as in Definition 2.
¹ If we think of cost as time, then in the same time interval in which P solves one
problem, P̃ can solve up to n problems, earning a lot more money by answering fast
and incorrectly.
Corollary 1. Let (P, V) and rew be respectively an interactive proof and a
reward function as in Definition 1; if rew can only assume the values 0 and
R for some constant R, let p̃_x = Pr[rew((P̃, V)(x)) = R]. If for x ∈ D

p̃_x ≤ C̃(x)/C + ε,

then (P, V) is kRε-sequentially composable for D.

Proof. Observe that R̃(x) = p̃_x · R and then apply Lemma 1.
4.3 Sequential Rational Proofs in the PCP Model
We now describe a rational proof that appeared in [2] and prove that it is sequentially
composable. The protocol assumes the existence of a trusted memory storage
to which both Prover and Verifier have access, realizing the so-called “PCP”
(Probabilistically Checkable Proof) model. In this model, the Prover writes a
very long proof of correctness that the verifier checks only in a few randomly
selected positions. The trusted memory is needed to make sure that the prover
is “committed” to the proof before the verifier starts querying it.

The following protocol for proofs on a binary logical circuit C appeared in [2].
The Prover writes all the (alleged) values α_w for every wire w ∈ C on the trusted
memory. The Verifier samples a single random gate to check its correctness and
determines the reward accordingly:

1. The Prover writes the vector {α_w}_{w ∈ C} on the trusted memory.
2. The Verifier samples a random gate g ∈ C.
   – The Verifier reads α_{g_out}, α_{g_L}, α_{g_R}, with g_out, g_L, g_R being respectively the
     output, left and right input wires of g; the verifier checks that α_{g_out} =
     g(α_{g_L}, α_{g_R});
   – If g is an input gate the Verifier also checks that α_{g_L}, α_{g_R} correspond to
     the correct input values;
3. The Verifier pays R if both checks are satisfied, otherwise it pays 0.
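A minimal sketch of this spot-check may help fix ideas; the gate/wire encoding below is our own illustration, not the notation of [2]:

```python
import random

# Minimal sketch of the PCP-style spot-check (the gate/wire encoding is ours).
# A gate is (op, l, r): l, r are wire indices; input wires are 0..n-1 and
# gate g's output wire is n + g.

def eval_circuit(gates, x):
    wires = list(x)
    for op, l, r in gates:
        wires.append(op(wires[l], wires[r]))
    return wires          # the honest committed "proof": one value per wire

def spot_check(gates, x, proof, R=1):
    """Sample one random gate and check it against the committed proof."""
    n = len(x)
    g = random.randrange(len(gates))
    op, l, r = gates[g]
    if proof[n + g] != op(proof[l], proof[r]):
        return 0
    for w in (l, r):      # wires fed by the input are checked against x
        if w < n and proof[w] != x[w]:
            return 0
    return R

AND = lambda a, b: a & b
OR = lambda a, b: a | b
gates = [(AND, 0, 1), (OR, 2, 3), (OR, 4, 5)]   # computes (x0 & x1) | (x2 | x3)
x = [1, 1, 0, 0]
honest_proof = eval_circuit(gates, x)
assert honest_proof[-1] == 1
assert all(spot_check(gates, x, honest_proof) == 1 for _ in range(100))
```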
Theorem 1 ([2]). The protocol above is a rational proof for any boolean function
in P^{||NP}, the class of all languages decidable by a polynomial time machine that
can make non-adaptive queries to NP.
We will now show a cost model where the rational proof above is sequentially
composable. We will assume that the cost for any prover is given by the number
of gates he writes. Thus, for any input x, the costs for honest and dishonest
provers are respectively C(x) = S, where S = |C|, and C̃(x) = s̃, where s̃ is the
number of gates written by the dishonest prover. Observe that in this model a
dishonest prover may not write all the S gates, and that not all of the s̃ gates
have to be correct. Let σ ≤ s̃ be the number of correct gates written by P̃.
Theorem 2. In the cost model above the PCP protocol in [2] is sequentially
composable.

Proof. Observe that the probability p̃_x that P̃ ≠ P earns R is such that

p̃_x ≤ σ/S ≤ s̃/S = C̃(x)/C(x).

Applying Corollary 1 completes the proof.
The above cost model basically says that the cost of writing down a gate
dominates everything else, in particular the cost of computing that gate. In other
cost models a proof of sequential composition may not be as straightforward.
Assume, for example, that the honest prover pays $1 to compute the value of a
single gate while writing down that gate is “free”. Now p̃_x is still equal to σ/S, but
to prove that this is smaller than C̃(x)/C we need some additional assumption that
limits the ability of P̃ to “guess” the right value of a gate without computing
it (which we will discuss in the next section).
4.4 Sequential Composition and the Unique Inner State
Definition 2 for sequential rational proofs requires a relationship between the
reward earned by the prover and the amount of “work” the prover invested to
produce that result. The intuition is that to produce the correct result, the prover
must run the computation and incur its full cost. Unfortunately this intuition
is difficult, if not downright impossible, to formalize. Indeed for a specific input
x a “dishonest” prover P̃ could have the correct value y = f(x) “hardwired”
and could answer correctly without having to perform any computation at all.
Similarly, for certain inputs x, x′ and a certain function f, a prover P̃ that has
already computed y = f(x) might be able to “recycle” some of the computation
effort (by saving some state) and compute y′ = f(x′) incurring a much smaller
cost than computing it from scratch.
A way to circumvent this problem was suggested in [3] under the name of
Unique Inner State Assumption: the idea is to assume a distribution D over the
input space. When inputs x are chosen according to D, we assume that
computing f requires cost C from any party: this can be formalized by saying
that if a party invests C̃ = γC effort (for γ ≤ 1), then it computes the correct
value only with probability negligibly close to γ (since a party can always adopt
a “mixed” strategy in which with probability γ it runs the correct computation
and with probability 1 − γ it does something else, like guessing at random).
Assumption 1. We say that the (C, ε)-Unique Inner State Assumption holds
for a function f and a distribution D if for any algorithm P̃ with cost C̃ = γC,
the probability that on input x ∈ D, P̃ outputs f(x) is at most γ + (1 − γ)ε.
Note that the assumption implicitly requires a “large” output space for f (since
a random guess of the output of f will be correct with probability 2^{-n}, where n
is the binary length of f(x)).

More importantly, note that Assumption 1 immediately yields our notion of
sequential composability if the Verifier can detect whether the Prover is lying or not.
Assume, as a mental experiment for now, that given input x the Prover claims
that ỹ = f(x), and the Verifier checks by recomputing y = f(x) and paying a
reward of R to the Prover if y = ỹ and 0 otherwise. Clearly this is not a very
useful protocol, since the Verifier is not saving any computation effort by talking
to the Prover. But it is sequentially composable according to our definition, since
p̃_x, the probability that P̃ collects R, is equal to the probability that P̃ computes
f(x) correctly, and by using Assumption 1 we have that

p̃_x ≤ γ + (1 − γ)ε ≤ C̃(x)/C + ε,

satisfying Corollary 1.
To make this a useful protocol we adopt a strategy from [3], which also uses
this idea of verification by recomputing. Instead of checking every execution, we
check only a random subset of them, and therefore we can amortize the Verifier’s
effort over a large number of computations. Fix a parameter m. The prover sends
to the verifier the values ỹ_j which are claimed to be the result of computing f
over m inputs x_1, ..., x_m. The verifier chooses one index i randomly between
1 and m and computes y_i = f(x_i). If y_i = ỹ_i the verifier pays R, otherwise it
pays 0.

Let T be the total cost of the honest prover to compute m instances: clearly
T = mC. Let C̃_i be the effort invested by P̃ on the computation of x_i, and
T̃ = Σ_i C̃_i the total effort invested by P̃. In order to satisfy Corollary 1 we need
that p̃_x, the probability that P̃ collects R, be less than T̃/T + ε.

Let γ_i = C̃_i/C; then under Assumption 1 we have that ỹ_i is correct with
probability at most γ_i + (1 − γ_i)ε. Therefore if we set γ = Σ_i γ_i/m we have

p̃_x = (1/m) Σ_i Pr[ỹ_i correct] ≤ (1/m) Σ_i (γ_i + (1 − γ_i)ε) ≤ γ + ε.

But note that γ = T̃/T as desired, since γ = Σ_i C̃_i/(mC) = T̃/T.
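The amortized scheme can be sketched as follows (an illustration with a toy function standing in for an expensive f; names are ours). It shows the expected payment tracking the fraction of honestly computed instances:

```python
from fractions import Fraction

# Sketch of the amortized scheme: the verifier recomputes one of the m claimed
# results at random and pays R only on a match (names and toy f are ours).

def expected_reward(claims, f, xs, R=1):
    """Exact expected reward: average over the verifier's random index i."""
    hits = sum(1 for x, y in zip(xs, claims) if f(x) == y)
    return Fraction(hits * R, len(xs))

f = lambda x: x * x % 97      # stand-in for an expensive outsourced function
xs = list(range(10))          # m = 10 outsourced instances

honest = [f(x) for x in xs]
lazy = honest[:4] + [0] * 6   # prover computed only 4 of the 10 instances

assert expected_reward(honest, f, xs) == 1
assert expected_reward(lazy, f, xs) == Fraction(2, 5)   # pay tracks T~/T
```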
Efficiency of the Verifier. If our notion of “efficient Verifier” is a verifier who
runs in time o(C), where C is the time to compute f, then in the above protocol
m must be sufficiently large to amortize the cost of computing one execution
over many (in particular a constant – in the input size n – value of m would
not work). In our “concrete analysis” treatment, if we request that the Verifier
run in time δC for an “efficiency” parameter δ ≤ 1, then we need m ≥ δ^{-1}.
Therefore we are still in need of a protocol which has an efficient Verifier
and works in the “stand-alone” case (m = 1), but also under the sequential
composition of any number m of executions.
5 Our Protocol
We now present a protocol that works for functions f : {0,1}^n → {0,1}^n
expressed by an arithmetic circuit C of size C, depth d and fan-in 2, given
as a common input to both Prover and Verifier together with the input x.

Intuitively the idea is for the Prover to provide the Verifier with the output
value y and its two “children” y_L, y_R in the gate, i.e. the two input values of the
last output gate G. The Verifier checks that G(y_L, y_R) = y, and then asks the
Prover to verify that y_L or y_R (chosen at random) is correct, by recursing on the
above test. The protocol description follows.

1. The Prover evaluates the circuit on x and sends the output value y_1 to the
   Verifier.
2. Repeat r times: the Verifier identifies the root gate g_1 and then invokes
   Round(1, g_1, y_1),

where the procedure Round(i, g_i, y_i) is defined for 1 ≤ i ≤ d as follows:

1. The Prover sends the values of the input wires z^0_i and z^1_i of g_i to the Verifier.
2. The Verifier performs the following checks:
   – Check that y_i is the result of the operation of gate g_i on inputs z^0_i and z^1_i.
     If not, STOP and pay a reward of 0.
   – If i = d (i.e. if the inputs to g_i are input wires), check that the values of z^0_i
     and z^1_i are equal to the corresponding bits of x. Pay reward R to Merlin
     if this is the case, nothing otherwise.
   – If i < d, choose a random bit b, send it to Merlin and invoke Round(i + 1,
     g^b_{i+1}, z^b_i), where g^b_{i+1} is the child gate of g_i whose output is z^b_i.
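The recursion above can be sketched in code. The toy implementation below (the circuit encoding is ours, chosen for brevity) plays one repetition on a small tree-shaped circuit with an honest prover:

```python
import random

# One repetition (r = 1) of the protocol on a tree-shaped circuit with an
# honest prover (the encoding is ours).
# A node is ('gate', op, left, right) or ('leaf', j) for input bit x[j].

def evaluate(node, x):
    if node[0] == 'leaf':
        return x[node[1]]
    _, op, l, r = node
    return op(evaluate(l, x), evaluate(r, x))

def verify(node, y_claim, prover, x, R=1):
    """Round(i, g_i, y_i): walk one random root-to-leaf path; pay R iff
    every local gate check and the final input-wire check succeed."""
    if node[0] == 'leaf':
        return R if y_claim == x[node[1]] else 0
    _, op, l, r = node
    z0, z1 = prover(node)          # prover's claim for the two input wires
    if op(z0, z1) != y_claim:      # local consistency check
        return 0
    b = random.randint(0, 1)       # verifier picks a random child
    return verify((l, r)[b], (z0, z1)[b], prover, x, R)

XOR = lambda a, b: a ^ b
x = [1, 0, 1, 1]
C = ('gate', XOR,
     ('gate', XOR, ('leaf', 0), ('leaf', 1)),
     ('gate', XOR, ('leaf', 2), ('leaf', 3)))

# The honest prover answers every query with the true wire values.
honest = lambda g: (evaluate(g[2], x), evaluate(g[3], x))

y = evaluate(C, x)
assert y == 1
assert all(verify(C, y, honest, x) == 1 for _ in range(50))
```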
5.1 Efficiency
The protocol runs in at most d rounds. In each round, the Prover sends a constant
number of bits representing the values of specific input and output wires; the
Verifier sends at most one bit per round, the choice of the child gate. Thus the
communication complexity is O(d) bits.

The computation of the Verifier in each round is: (i) computing the result of
a gate and checking for bit equality; (ii) sampling a child. Gate operations and
equality checks are O(1) per round. We assume our circuits are T-uniform, which
allows the Verifier to select the correct gate in time T(n).² Thus the Verifier runs
in time O(rd · T(n)), with r = O(log C).
² We point out that the Prover can provide the Verifier with the requested gate, and
then the Verifier can use the uniformity of the circuit to check that the Prover has
given him the correct gate at each level in time O(T(n)).
5.2 Proofs of (Stand-Alone) Rationality
Theorem 3. The protocol in Sect. 5 for r = 1 is a Rational Proof according to
Definition 1.

We prove the above theorem by showing that for every input x the reward gap
Δ(x) is positive.

Proof. Let P̃ be a prover that always reports ỹ ≠ y_1 = f(x) at Round 1.

Let us proceed by induction on the depth d of the circuit. If d = 1 then there
is no possibility for P̃ to cheat successfully, and its reward is 0.

Assume d > 1. We can think of the binary circuit C as composed of two
subcircuits C_L and C_R and the output gate g_1 such that f(x) = g_1(C_L(x), C_R(x)).
The respective depths d_L, d_R of these subcircuits are such that 0 ≤ d_L, d_R ≤ d − 1
and max(d_L, d_R) = d − 1. After sending ỹ, the protocol requires that P̃
send output values for C_L(x) and C_R(x); let us denote these claimed values
respectively with ỹ_L and ỹ_R. Notice that at least one of these alleged values will
be different from the respective correct subcircuit output: if it were otherwise,
V would reject immediately, as g_1(ỹ_L, ỹ_R) = f(x) ≠ ỹ. Thus at most one of the
two values ỹ_L, ỹ_R is equal to the output of the corresponding subcircuit. The
probability that P̃ cheats successfully is:

Pr[V accepts] ≤ (1/2) · (Pr[V accepts on C_L] + Pr[V accepts on C_R])   (2)
             ≤ (1/2) · (1 − 2^{-max(d_L, d_R)}) + 1/2                   (3)
             = (1/2) · (1 − 2^{-(d-1)}) + 1/2 = 1 − 2^{-d}.

At Eq. 3 we used the inductive hypothesis and the fact that all probabilities are
at most 1.

Therefore the expected reward of P̃ is R̃ ≤ R(1 − 2^{-d}) and the reward gap
is Δ(x) = 2^{-d}R (see Remark 2 for an explanation of the equality sign).
The following useful corollary follows from the proof above.
Corollary 2. If the protocol described in Sect. 5 is repeated r ≥ 1 times, a prover
can cheat with probability at most (1 − 2^{-d})^r.
Remark 2. We point out that one can always build a prover strategy P̃ which
always answers incorrectly and achieves exactly the reward R̃ = R(1 − 2^{-d}).
This prover outputs an incorrect ỹ and then computes one of the subcircuits that
results in one of the input values (so that at least one of the inputs is correct).
This will allow him to recursively answer with values z^0_i and z^1_i where one of the
two is correct, and therefore be caught only with probability 2^{-d}.
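The 1 − 2^{-d} acceptance probability of this strategy follows from a simple recursion: at each level one claimed child value is correct (and the verifier who follows it accepts for sure) while the other carries the lie. A quick sketch of that recursion (names are ours):

```python
from fractions import Fraction

# Acceptance probability of the always-wrong strategy of Remark 2, computed
# by the recursion it induces: with a wrong claim at a gate, one child claim
# is correct (the verifier then accepts for sure) and one carries the lie.

def cheat_accept_prob(depth):
    if depth == 0:
        return Fraction(0)    # a wrong claim at an input wire is caught
    return Fraction(1, 2) + Fraction(1, 2) * cheat_accept_prob(depth - 1)

for d in range(1, 9):
    assert cheat_accept_prob(d) == 1 - Fraction(1, 2) ** d
```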
Remark 3. In order to have a non-negligible reward gap (see Remark 1) we
need to limit ourselves to circuits of depth d = O(log n).
5.3 Proof of Sequential Composability
General Sufficient Conditions for Sequential Composability
Lemma 2. Let C be a circuit of depth d. If the (C, ε)-Unique Inner State
Assumption (see Assumption 1) holds for the function f computed by C and
input distribution D, then the protocol presented above with r repetitions is a
kRε-sequentially composable Rational Proof for C and D if the following
inequality holds:

(1 − 2^{-d})^r ≤ 1/C.

Proof. Let γ = C̃/C. Consider x ∈ D and a prover P̃ which invests effort C̃.
Under Assumption 1, P̃ gives the correct output with probability at most γ + ε –
assume that in this case P̃ collects the reward R. If P̃ gives an incorrect output we
know (following Corollary 2) that he collects the reward R with probability at
most (1 − 2^{-d})^r, which by hypothesis is at most 1/C ≤ γ (since any prover
invests at least unit effort). So either way we have that
R̃(x) ≤ (γ + ε)R, and therefore applying Lemma 1 concludes the proof.
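To get a sense of the repetition count this condition demands, one can compute the smallest admissible r (a quick numeric check; the helper name is ours). The required r grows roughly like 2^d · ln C:

```python
import math

# Smallest r with (1 - 2^-d)^r <= 1/C -- the repetition count required by
# the condition in Lemma 2 (the helper name is ours).
def min_repetitions(d, C):
    return math.ceil(math.log(C) / -math.log(1.0 - 2.0 ** -d))

# The required r grows roughly like 2^d * ln C:
for d, C in [(4, 1 << 10), (8, 1 << 10), (8, 1 << 20)]:
    r = min_repetitions(d, C)
    assert (1.0 - 2.0 ** -d) ** r <= 1.0 / C          # condition met
    assert r >= 2 ** (d - 1)                          # and r is large
```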
The problem with the above Lemma is that it requires a large value of r for
the result to hold, resulting in an inefficient Verifier. In the following sections
we discuss two approaches that will allow us to prove sequential composability
even for an efficient Verifier:

– limiting the class of provers we can handle in our security proof;
– limiting the class of functions/circuits.
Limiting the Strategy of the Prover: Non-adaptive Provers. In proving
sequential composability it is useful to find a connection between the amount
of work done by a dishonest prover and its probability of cheating: the more a
dishonest prover works, the higher its probability of cheating. This is true for our
protocol, since the more “subcircuits” the prover computes correctly, the higher
the probability of convincing the verifier of an incorrect output becomes. The
question then is: how can a prover with an “effort budget” to spend maximize
its probability of success in our protocol?

As we discussed in Remark 2, there is an adaptive strategy for P̃ to maximize
its probability of success: compute one subcircuit correctly at every round
of the protocol. We call this strategy “adaptive” because the prover allocates
its “effort budget” C̃ on the fly during the execution of the rational proof.
Conversely, a non-adaptive prover P̃ uses C̃ to compute some subcircuits of C before
starting the protocol. Clearly an adaptive prover strategy is more powerful than
a non-adaptive one (since the adaptive prover can direct its computation effort
where it matters most, i.e. where the Verifier “checks” the computation).
Is it possible to limit the Prover to a non-adaptive strategy? This could be
achieved by imposing some “timing” constraints on the execution of the protocol:
to prevent the prover from performing large computations while interacting
with the Verifier, the latter could request that the prover’s responses be delivered
“immediately”, and if a delay happens then the Verifier will not pay the reward.
Similar timing constraints have been used before in the cryptographic
literature, e.g. see the notion of timing assumptions in the concurrent zero-knowledge
protocols in [5].

Therefore in the rest of this subsection we assume that non-adaptive strategies
are the only rational ones, and proceed to analyze our protocol under the
assumption that the prover adopts a non-adaptive strategy.
Consider a prover P̃ with effort budget C̃ < C. A DFS (for “depth first
search”) prover uses its effort budget C̃ to compute a whole subcircuit of size
C̃ and maximal depth d_DFS. Call this subcircuit C_DFS. P̃ can answer correctly
any verifier’s query about a gate in C_DFS. During the interaction with V, the
behavior of a DFS prover is as follows:

– At the beginning of the protocol send an arbitrary output value y_1.
– During procedure Round(i, g_i, y_i):
  – If g_i ∈ C_DFS then P̃ sends the two correct inputs z^0_i and z^1_i.
  – If g_i ∉ C_DFS and neither of g_i’s input gates belongs to C_DFS, then P̃ sends
    two arbitrary z^0_i and z^1_i that are consistent with y_i, i.e. g_i(z^0_i, z^1_i) = y_i.
  – If g_i ∉ C_DFS and one of g_i’s input gates belongs to C_DFS, then P̃ will send
    the correct wire known to him and another arbitrary value consistent with
    y_i as above.

Lemma 3 (Advantage of a DFS Prover). In one repetition of the protocol
above, a DFS prover with effort budget C̃ has probability of cheating p̃_DFS
bounded by

p̃_DFS ≤ 1 − 2^{-(d − d_DFS)}.

The proof of Lemma 3 follows easily from the proof of the stand-alone rationality
of our protocol (see Theorem 3).
If a DFS prover focuses on maximizing the depth of a computed subcircuit
given a certain investment, BFS provers allot their resources to compute all sub-
circuits rooted at a certain height. A BFS prover with effort budget $\tilde{C}$ computes
the value of all gates up to the maximal height possible $d_{BFS}$. Note that $d_{BFS}$
is a function of the circuit $C$ and of the effort $\tilde{C}$. Let $C_{BFS}$ be the collection
of gates computed by the BFS prover. The interaction of a BFS prover with $V$
throughout the protocol resembles that of the DFS prover outlined above:
– At the beginning of the protocol send an arbitrary output value $y_1$.
– During procedure Round$(i, g_i, y_i)$:
  – If $g_i \in C_{BFS}$ then $\tilde{P}$ sends the two correct inputs $z_i^0$ and $z_i^1$.
  – If $g_i \notin C_{BFS}$ and neither of $g_i$'s input gates belongs to $C_{BFS}$, then $\tilde{P}$ sends
    two arbitrary values $z_i^0$ and $z_i^1$ that are consistent with $y_i$, i.e. $g_i(z_i^0, z_i^1) = y_i$.
  – If $g_i \notin C_{BFS}$ and both of $g_i$'s input gates belong to $C_{BFS}$, then $\tilde{P}$ will send one
    of the correct wires known to him and another arbitrary value consistent
    with $y_i$ as above.
As before, it is not hard to see that the probability of successful cheating by a
BFS prover can be bounded as follows:
Lemma 4 (Advantage of a BFS Prover). In one repetition of the proto-
col above, a BFS prover with effort budget $\tilde{C}$ has probability of cheating $\tilde{p}_{BFS}$
bounded by
$$\tilde{p}_{BFS} \leq 1 - 2^{-d_{BFS}}$$
BFS and DFS provers are both special cases of the general non-adaptive strategy,
which allots its investment $\tilde{C}$ among a general collection of subcircuits $\mathcal{C}$. The
interaction with $V$ of such a prover is analogous to that of a BFS/DFS prover
but with a collection of computed subcircuits not constrained by any specific
height. We now try to formally define what the success probability of such a
prover is.
Definition 3 (Path Experiment). Consider a circuit $C$ and a collection $\mathcal{C}$
of subcircuits of $C$. Perform the following experiment: starting from the output
gate, flip an unbiased coin and choose the "left" subcircuit or the "right" subcircuit
at random with probability 1/2. Continue until the random path followed by the
experiment reaches a computed gate in $\mathcal{C}$. Let $i$ be the height of this gate, which is
the output of the experiment. Define with $\Pi_i^{\mathcal{C}}$ the probability that this experiment
outputs $i$.
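The path experiment of Definition 3 is easy to prototype on a toy layered binary-tree circuit. The sketch below is ours (the node encoding and the `path_experiment` helper are illustrative assumptions, not the paper's code): gates are (height, index) pairs, and the walk stops at the first gate that is the root of a fully computed subcircuit, with circuit inputs (height 0) counting as computed.

```python
import random
from collections import Counter

def path_experiment(depth, computed_roots, trials=100_000, seed=7):
    """Simulate Definition 3 on a full binary tree of height `depth`.
    `computed_roots` maps (height, index) -> True for roots of fully
    computed subcircuits; inputs at height 0 always count as computed.
    Returns the empirical distribution Pi_i of the entry height."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(trials):
        h, idx = depth, 0
        while h > 0 and (h, idx) not in computed_roots:
            # descend into the left or right subcircuit with prob 1/2
            h, idx = h - 1, 2 * idx + rng.randint(0, 1)
        counts[h] += 1
    return {i: c / trials for i, c in sorted(counts.items())}

# a BFS-like collection: every gate at height 2 is computed,
# so the experiment always outputs i = 2
print(path_experiment(4, {(2, i): True for i in range(4)}))

# a DFS-like collection: a single computed subtree of height 3;
# the path hits its root with probability 1/2, else reaches the inputs
print(path_experiment(4, {(3, 0): True}))
```

The two examples already illustrate the contrast exploited below: a BFS collection concentrates all probability on one entry height, while a DFS collection splits it between a tall entry point and the inputs.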
The proof of the following Lemma is a generalization of the proof of security of
our scheme. Once the "verification path" chosen by the Verifier enters a fully
computed subcircuit at height $i$ (which happens with probability $\Pi_i^{\mathcal{C}}$), the prob-
ability of success of the Prover is bounded by $(1 - 2^{-i})$.
Lemma 5 (Advantage of a Non-Adaptive Prover). In one repetition of
the protocol above, a generic prover with effort budget $\tilde{C}$, used to compute a
collection $\mathcal{C}$ of subcircuits, has probability of cheating $\tilde{p}_{\mathcal{C}}$ bounded by
$$\tilde{p}_{\mathcal{C}} \leq \sum_i \Pi_i^{\mathcal{C}} (1 - 2^{-i})$$
where the $\Pi_i^{\mathcal{C}}$ are defined as in Definition 3.
Limiting the Class of Functions: Regular Circuits. Lemma 5 still does
not produce a clear bound on the probability of success of a cheating prover.
The reason is that it is not obvious how to bound the probabilities $\Pi_i^{\mathcal{C}}$ that arise
from the computed subcircuits $\mathcal{C}$, since those depend in non-trivial ways on
the topology of the circuit $C$.
We now present a type of circuit for which it can be shown that the BFS
strategy is optimal. The restriction on the circuit is surprisingly simple: we call
them regular circuits. In the next section we show examples of interesting func-
tions that admit regular circuits.
Definition 4 (Regular Circuit). A circuit $C$ is said to be regular if the fol-
lowing conditions hold:
– $C$ is layered;
– every gate has fan-in 2;
– the inputs of every gate are the outputs of two distinct gates.
The following lemma states that, in regular circuits, we can bound the advantage
of any prover investing $\tilde{C}$ by looking at the advantage of a BFS prover with the
same investment.
Lemma 6 (A Bound for Provers' Advantage in Regular Circuits). Let
$\tilde{P}$ be a prover investing $\tilde{C}$. Let $C$ be the circuit being computed and $\delta = d_{BFS}(C, \tilde{C})$.
In one repetition of the protocol above, the advantage of $\tilde{P}$ is bounded by
$$\tilde{p} \leq \tilde{p}_{BFS} = 1 - 2^{-\delta}$$
Proof. Let $\mathcal{C}$ be the family of subcircuits computed by $\tilde{P}$ with effort $\tilde{C}$. As
pointed out above, the probability of success for $\tilde{P}$ is bounded by
$\sum_i \Pi_i^{\mathcal{C}} (1 - 2^{-i})$.
Consider now a prover $\tilde{P}'$ which uses $\tilde{C}$ effort to compute a different collection
of subcircuits $\mathcal{C}'$ defined as follows:
– Remove a gate from a subcircuit of height $j$ in $\mathcal{C}$: this produces two subcircuits
of height $j-1$. This is true because of the regularity of the circuit: since the
inputs of every gate are the outputs of two distinct gates, removing a
gate of height $j$ will produce two subcircuits of height $j-1$;
– Use that computation to "join" two subcircuits of height $k$ into a single sub-
circuit of height $k+1$. Again we are using the regularity of the circuit here:
since the circuit is layered, the only way to join two subcircuits into a single
computed subcircuit is to take two subcircuits of the same height.
What happens to the probability $\tilde{p}'$ of success of $\tilde{P}'$? Let $\ell$ be the number of
possible paths generated by the experiment above with $\mathcal{C}$. Then the probability of
entering a computed subcircuit at height $j$ decreases by $1/\ell$ and that probability
weight goes to entering at height $j-1$. Similarly the probability of entering at
height $k$ goes down by $2/\ell$ and that probability weight is shifted to entering at
height $k+1$. Therefore
$$\tilde{p}' = \sum_{i \neq j-1,j,k,k+1} \Pi_i^{\mathcal{C}} (1 - 2^{-i})
+ \left(\Pi_j^{\mathcal{C}} - \frac{1}{\ell}\right)(1 - 2^{-j}) + \left(\Pi_{j-1}^{\mathcal{C}} + \frac{1}{\ell}\right)(1 - 2^{-j+1})$$
$$+ \left(\Pi_k^{\mathcal{C}} - \frac{2}{\ell}\right)(1 - 2^{-k}) + \left(\Pi_{k+1}^{\mathcal{C}} + \frac{2}{\ell}\right)(1 - 2^{-k-1})
= \tilde{p} + \frac{1}{\ell}\left(2^{-k} - 2^{-j}\right)$$
Note that $\tilde{p}'$ increases if $j > k$, which means that it is better to take "com-
putation" away from tall computed subcircuits to make them shorter, and use
the saved computation to increase the height of shorter computed subtrees, and
therefore that the probability is maximized when all the subtrees are of the same
height, i.e. by the BFS strategy which has probability of success $\tilde{p}_{BFS} = 1 - 2^{-\delta}$.
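The exchange step in the proof can be verified numerically. The sketch below (illustrative; the entry-height distribution is made up) applies the described weight shift to a toy distribution $\Pi$ and confirms that the Lemma 5 bound changes by exactly $(2^{-k} - 2^{-j})/\ell$:

```python
def bound(pi):
    """Lemma 5 bound: sum_i Pi_i * (1 - 2^-i), pi maps height -> prob."""
    return sum(p * (1 - 2.0 ** -i) for i, p in pi.items())

# toy entry-height distribution over ell = 8 equally likely paths
ell = 8
pi = {1: 2 / ell, 3: 4 / ell, 5: 2 / ell}

j, k = 5, 1  # shrink a subtree of height j, grow one of height k
pi2 = dict(pi)
pi2[j] = pi2.get(j, 0) - 1 / ell          # weight 1/ell leaves height j
pi2[j - 1] = pi2.get(j - 1, 0) + 1 / ell  # ...and moves to height j-1
pi2[k] = pi2.get(k, 0) - 2 / ell          # weight 2/ell leaves height k
pi2[k + 1] = pi2.get(k + 1, 0) + 2 / ell  # ...and moves to height k+1

delta = bound(pi2) - bound(pi)
print(delta, (2.0 ** -k - 2.0 ** -j) / ell)  # the two agree
```

Since $j > k$ here, the change is positive: flattening tall subtrees toward equal heights only helps the cheating prover, as the proof argues.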
The above Lemma, therefore, yields the following.
Theorem 4. Let $C$ be a regular circuit of size $C$. If the $(C, \epsilon)$-Unique Inner
State Assumption (see Assumption 1) holds for the function $f$ computed by $C$
and input distribution $\mathcal{D}$, then the protocol presented above with $r$ repetitions is
a $kR$-sequentially composable Rational Proof for $C$ and $\mathcal{D}$, if the prover follows a
non-adaptive strategy and the following inequality holds for all $\tilde{C} \leq C$:
$$(1 - 2^{-\delta})^r \leq \frac{\tilde{C}}{C}$$
where $\delta = d_{BFS}(C, \tilde{C})$.
Proof. Let $\gamma = \tilde{C}/C$. Consider $x \in \mathcal{D}$ and a prover $\tilde{P}$ which invests effort $\tilde{C} \leq C$.
Under Assumption 1, $\tilde{P}$ gives the correct output with probability $\gamma + \epsilon$ – assume
that in this case $\tilde{P}$ collects the reward $R$.
If $\tilde{P}$ gives an incorrect output we can invoke Lemma 6 and conclude that he
collects reward $R$ with probability at most $(1 - 2^{-\delta})^r$, which by hypothesis is
less than $\gamma$. So either way we have that $\tilde{R} \leq (\gamma + \epsilon)R$, and therefore applying
Lemma 1 concludes the proof.
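Concretely, the number of repetitions that Theorem 4 requires can be read off the inequality. The hypothetical helper below (ours, for illustration) computes the smallest $r$ with $(1 - 2^{-\delta})^r \leq \tilde{C}/C$:

```python
import math

def min_repetitions(effort_ratio, delta):
    """Smallest r with (1 - 2^-delta)^r <= effort_ratio, i.e. the number
    of repetitions Theorem 4 needs against a prover whose budget is the
    fraction effort_ratio = C~/C and whose BFS depth is delta."""
    base = 1 - 2.0 ** -delta
    return math.ceil(math.log(effort_ratio) / math.log(base))

# a prover spending half the honest cost, able to reach BFS depth 3:
print(min_repetitions(0.5, 3))  # → 6, since 0.875^6 ≈ 0.449 <= 0.5
```

As expected, the deeper the subcircuit a given budget can buy, the more repetitions are needed to keep the cheating prover's expected reward proportional to his effort.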
6 Results for FFT Circuits
In this section we apply the previous results to the problem of computing FFT
circuits, and by extension to polynomial evaluations.
6.1 FFT Circuit for Computing a Single Coefficient
The Fast Fourier Transform is an almost ubiquitous computational problem that
appears in many applications, including many of the volunteer computations
that motivated our work. As described in [4], a circuit to compute the FFT of a
vector of $n$ input elements consists of $\log n$ levels, where each level comprises $n/2$
butterfly gates. The output of the circuit is also a vector of $n$ elements.
Let us focus on the circuit that computes a single element of the output
vector: it has $\log n$ levels, and at level $i$ it has $n/2^i$ butterfly gates. Moreover
the circuit is regular, according to Definition 4.
Theorem 5. Under the $(C, \epsilon)$-Unique Inner State Assumption for input distribu-
tion $\mathcal{D}$, the protocol in Sect. 5, when repeated $r = O(1)$ times, yields sequentially
composable rational proofs for the FFT, under input distribution $\mathcal{D}$ and assuming
non-adaptive prover strategies.
Proof. Since the circuit is regular, we can prove sequential composability by
invoking Theorem 4 and proving that for $r = O(1)$ the following inequality holds:
$$(1 - 2^{-\delta})^r \leq \frac{\tilde{C}}{C}$$
where $\delta = d_{BFS}(C, \tilde{C})$.
But for any $\tilde{\delta} < d$, the structure of the FFT circuit implies that the number
of gates below height $\tilde{\delta}$ is $\tilde{C}_{\tilde{\delta}} = \Theta(C(1 - 2^{-\tilde{\delta}}))$. Thus the inequality above can
be satisfied with $r = \Theta(1)$.
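The gate count used in the proof can be checked level by level. The sketch below is a back-of-the-envelope calculation (ours), assuming the level sizes stated above: level $i$ of the single-output FFT circuit holds $n/2^i$ butterfly gates.

```python
def gates_up_to(n, delta):
    """Gates at heights 1..delta of the single-output FFT circuit,
    assuming level i holds n / 2^i butterfly gates (as stated above)."""
    return sum(n // 2 ** i for i in range(1, delta + 1))

n = 1024                 # log n = 10 levels
C = gates_up_to(n, 10)   # total size: 512 + 256 + ... + 1 = n - 1
for delta in (2, 5, 8):
    ratio = gates_up_to(n, delta) / C
    print(delta, round(ratio, 4), 1 - 2.0 ** -delta)
    # the theorem's inequality already holds with r = 1:
    assert (1 - 2.0 ** -delta) ** 1 <= ratio
```

Since the fraction of gates below height $\tilde{\delta}$ is essentially $1 - 2^{-\tilde{\delta}}$, a constant number of repetitions suffices, matching the $r = \Theta(1)$ claim.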
6.2 Mixed Strategies for Verification
One of the typical uses of the FFT is to change representation for polynomials.
Given a polynomial $P(x)$ of degree $n-1$ we can represent it as a vector of $n$
coefficients $[a_0, \ldots, a_{n-1}]$ or as a vector of $n$ points $[P(\omega_0), \ldots, P(\omega_{n-1})]$. If the $\omega_i$
are the complex $n$-th roots of unity, the FFT is the algorithm that goes from one
representation to the other in $O(n \log n)$ time, rather than the obvious $O(n^2)$.
In this section we consider the following problem: given two polynomials $P, Q$
of degree $n-1$ in point representation, compute the inner product of the coef-
ficients of $P, Q$. A fan-in two circuit computing this function could be built as:
– two parallel FFT subcircuits computing the coefficient representation of $P, Q$
($\log n$ depth and $n \log n$ size total for the two circuits);
– a subcircuit where at the first level the $i$-degree coefficients are multiplied
with each other, and then all these products are added by a binary tree of
additions ($O(\log n)$ depth and $O(n)$ size).
Note that this circuit is regular, and has depth $2 \log n + 1$ and size $n \log n + n + 1$.
Consider a prover $\tilde{P}$ who pays $\tilde{C} < n \log n$ effort. Then, since the BFS strat-
egy is optimal, the probability of convincing the Verifier of a wrong result of the
FFT is $(1 - 2^{-\tilde{d}})^r$ where $\tilde{d} = c \log n$ with $c < 1$. Note also that $\tilde{C}/C < 1$. Therefore
with $r = O(n^c)$ repetitions, the probability of success can be made smaller than
$\tilde{C}/C$. The Verifier's complexity is $O(n^c \log n) = o(n \log n)$.
If $\tilde{C} \geq n \log n$ then the analysis above fails, since $\tilde{d} > \log n$. Here we observe
that in order for $\tilde{P}$ to earn a larger reward than $P$, it must be that $\tilde{P}$ has run at
least $k = \Omega(\log n)$ executions (since it is possible to find $k+1$ inputs such that
$(k+1)\tilde{C} \leq kC$ only if $k > \log n$).
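The threshold on $k$ can be checked directly for the circuit sizes above. The sketch below is ours and assumes, for illustration, a cheating cost of exactly $\tilde{C} = n \log n$ per instance against the honest cost $C = n \log n + n + 1$ from this section:

```python
import math

def min_extra_runs(n):
    """Smallest k such that (k+1) * C~ <= k * C: the fewest honest-cost
    executions within whose budget a prover paying only C~ = n log n
    per instance can fit one extra instance."""
    logn = int(math.log2(n))
    c_cheat = n * logn          # assumed cheating cost per instance
    c_honest = n * logn + n + 1  # circuit size from this section
    k = 1
    while (k + 1) * c_cheat > k * c_honest:
        k += 1
    return k

print(min_extra_runs(1024))  # close to log n = 10
```

For $n = 1024$ the break-even point is indeed around $\log n$ executions, consistent with the $k = \Omega(\log n)$ observation.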
Assume for a moment that the prover always executes the same strategy
with the same running time. In this case we can use a "mixed" strategy for
verification:
– The Verifier pays the Prover only after $k$ executions. Each execution is verified
as above (with $n^c$ repetitions);
– Additionally the Verifier uses the "check by re-execution" strategy (from
Sect. 4.4) every $k$ executions (verifying one execution by recomputing it);
– The Verifier pays $R$ if all the checks are satisfied, 0 otherwise.
The Verifier's complexity is $O(k n^c \log n + n \log n) = o(kn \log n)$ – the latter
being the complexity of computing $k$ instances.
Notice that there are many plausible ways to assume that the expected cost $\tilde{C}$
remains the same through the $k+1$ proofs, for example by assuming that the
Prover can be "reset" at the beginning of each execution and made oblivious
of the previous interactions.
7 Conclusion
Rational Proofs are a promising approach to the problem of verifying computa-
tions in a rational model, where the prover is not malicious, but only motivated
by the goal of maximizing its utility function. We showed that Rational Proofs
do not satisfy basic compositional properties in the case where a large number
of “computation problems” are outsourced, e.g. volunteered computations. We
showed that a “fast” incorrect answer is more remunerable for the prover, by
allowing him to solve more problems and collect more rewards. We presented
an enhanced definition of Rational Proofs that removes the economic incentive
for this strategy and we presented a protocol that achieves it for some uniform
bounded-depth circuits.
One thing to point out is that our protocol has two additional advantages:
– the honest Prover is always guaranteed a fixed reward $R$, as opposed to some
of the protocols in [1,2] where the reward is a random variable even for the
honest prover;
– our protocol is the first example of a rational proof for arithmetic circuits.
Our work leaves many interesting research directions to explore:
– Is it possible to come up with a protocol that works for any bounded-depth
circuit, and not just circuits with special "topological" conditions such as the
ones imposed by our results?
– Our results hold for "non-adaptive" prover strategies, though that seems more
a proof artifact to simplify the analysis than a technical requirement. Is it
possible to lift that restriction?
– Are there other circuits which, like the FFT one, satisfy our notions?
– What about rational proofs for arbitrary poly-time computations, even in the
simpler stand-alone case?
References
1. Azar, P.D., Micali, S.: Rational proofs. In: 2012 ACM Symposium on Theory of
Computing, pp. 1017–1028 (2012)
2. Azar, P.D., Micali, S.: Super-efficient rational proofs. In: 2013 ACM Conference on
Electronic Commerce, pp. 29–30 (2013)
3. Belenkiy, M., Chase, M., Erway, C.C., Jannotti, J., Küpçü, A., Lysyanskaya, A.:
Incentivizing outsourced computation. In: NetEcon 2008, pp. 85–90 (2008)
4. Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to Algorithms. MIT
Press (2001)
5. Dwork, C., Naor, M., Sahai, A.: Concurrent zero-knowledge. J. ACM 51(6), 851–
898 (2004)
6. Guo, S., Hubacek, P., Rosen, A., Vald, M.: Rational arguments: single round del-
egation with sublinear verification. In: 2014 Innovations in Theoretical Computer
Science Conference (2014)
7. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive
proof-systems. In: Proceedings of the Seventeenth Annual ACM Symposium on
Theory of Computing. ACM (1985)
8. Walfish, M., Blumberg, A.J.: Verifying computations without reexecuting them.
Commun. ACM 58(2), 74–84 (2015)