
Sequentially Composable Rational Proofs

Matteo Campanelli and Rosario Gennaro

The City University of New York, New York, USA

mcampanelli@gradcenter.cuny.edu, rosario@ccny.cuny.edu

Abstract. We show that Rational Proofs do not satisfy basic compositional properties in the case where a large number of “computation problems” are outsourced. We show that a “fast” incorrect answer is more remunerable for the prover, by allowing him to solve more problems and collect more rewards. We present an enhanced definition of Rational Proofs that removes the economic incentive for this strategy, and we present a protocol that achieves it for some uniform bounded-depth circuits.

1 Introduction

The problem of securely outsourcing data and computation has received widespread attention due to the rise of cloud computing: a paradigm where businesses lease computing resources from a service (the cloud provider) rather than maintain their own computing infrastructure. Small mobile devices, such as smartphones and netbooks, also rely on remote servers to store and perform computation on data that is too large to fit in the device.

It is by now well recognized that these new scenarios have introduced new security problems that need to be addressed. When data is stored remotely, outside our control, how can we be sure of its integrity? Even more interestingly, how do we check that the results of outsourced computation on this remotely stored data are correct? And how do we perform these tests while preserving the efficiency of the client (i.e., avoiding retrieving the whole data and having the client perform the computation), which was the initial reason data and computations were outsourced?

Verifiable Outsourced Computation is a very active research area in Cryptography and Network Security (see [8] for a survey), with the goal of designing protocols where it is impossible (under suitable cryptographic assumptions) for a provider to “cheat” in the above scenarios. While much progress has been made in this area, we are still far from solutions that can be deployed in practice.

A different approach is to consider a model where “cheating” might actually be possible, but the provider would have no motivation to do so. In other words, while cryptographic protocols prevent any adversary from cheating, one instead considers protocols that work against rational adversaries whose motivation is to maximize a well-defined utility function.

This work was supported by NSF grant CNS-1545759.

© Springer International Publishing Switzerland 2015
M.H.R. Khouzani et al. (Eds.): GameSec 2015, LNCS 9406, pp. 270–288, 2015.
DOI: 10.1007/978-3-319-25594-1_15


Previous Work. An earlier work in this line is [3], where the authors describe a system based on a scheme of rewards [resp. penalties] that the client assesses to the server for computing the function correctly [resp. incorrectly]. However, in this system checking the computation may require re-executing it, something that the client does only on a randomized subset of cases, hoping that the penalty is sufficient to incentivize the server to perform honestly. Moreover, the scheme might require an “infinite” budget for the rewards, and has no way to “enforce” payment of penalties from cheating servers. For these reasons the best application scenario of this approach is the incentivization of volunteer computing schemes (such as SETI@Home or Folding@Home), where the rewards are non-fungible “points” used for “social status”.

Because verification is performed by re-executing the computation, in this approach the client is “efficient” (i.e., does “less” work than the server) only in an amortized sense, where the cost of the subset of executions verified by the client is offset by the total number of computations performed by the server. This implies that the server must perform many executions for the client.

Another approach, instead, is the concept of Rational Proofs introduced by Azar and Micali in [1] and refined in subsequent papers [2,6]. This model captures, more accurately, real-world financial “pay-for-service” transactions, typical of cloud computing contractual arrangements, and security holds for a single “stand-alone” execution.

In a Rational Proof, given a function $f$ and an input $x$, the server returns the value $y = f(x)$, and (possibly) some auxiliary information, to the client. The client will in turn pay the server for its work with a reward which is a function of the messages sent by the server and some randomness chosen by the client. The crucial property is that this reward is maximized in expectation when the server returns the correct value $y$. Clearly a rational prover who is only interested in maximizing his reward will always answer correctly.

The most striking feature of Rational Proofs is their simplicity. For example, in [1] Azar and Micali show single-message Rational Proofs for any problem in #P, where an (exponential-time) prover convinces a (poly-time) verifier of the number of satisfying assignments of a Boolean formula.

For the case of “real-life” computations, where the Prover is polynomial and the Verifier is as efficient as possible, Azar and Micali in [2] show $d$-round Rational Proofs for functions computed by (uniform) Boolean circuits of depth $d$, for $d = O(\log n)$ (which can be collapsed to a single round under some well-defined computational assumption, as shown in [6]). The problem of rational proofs for any polynomial-time computable function remains tantalizingly open.

Our Results. Motivated by the problem of volunteer computation, our first result is to show that the definition of Rational Proofs in [1,2] does not satisfy a basic compositional property which would make them applicable in that scenario. Consider the case where a large number of “computation problems” are outsourced. Assume that solving each problem takes time $T$. Then in a time interval of length $T$, the honest prover can only solve and receive the reward for a single problem. On the other hand, a dishonest prover can answer up to $T$ problems, for example by answering at random, a strategy that takes $O(1)$ time. To assure that answering correctly is a rational strategy, we need that at the end of the $T$-time interval the reward of the honest prover be larger than the reward of the dishonest one. But this is not necessarily the case: for some of the protocols in [1,2,6] we can show that a “fast” incorrect answer is more remunerable for the prover, by allowing him to solve more problems and collect more rewards.

The next question, therefore, was to come up with a definition and a protocol that achieve rationality both in the stand-alone case and in the composition described above. We first present an enhanced definition of Rational Proofs that removes the economic incentive for the strategy of fast incorrect answers, and then we present a protocol that achieves it for the case of some (uniform) bounded-depth circuits.

2 Rational Proofs

In the following we will adopt a “concrete-security” version of the “asymptotic” definitions and theorems in [2,6]. We assume the reader is familiar with the notion of interactive proofs [7].

Definition 1 (Rational Proof). A function $f : \{0,1\}^n \to \{0,1\}^n$ admits a rational proof if there exists an interactive proof $(P, V)$ and a randomized reward function $\mathrm{rew} : \{0,1\}^* \to \mathbb{R}_{\geq 0}$ such that

1. (Rational completeness) For any input $x \in \{0,1\}^n$, $\Pr[\mathrm{out}((P, V)(x)) = f(x)] = 1$.
2. For every prover $\widetilde{P}$, and for any input $x \in \{0,1\}^n$, there exists a $\delta_{\widetilde{P}}(x) \geq 0$ such that $E[\mathrm{rew}((\widetilde{P}, V)(x))] + \delta_{\widetilde{P}}(x) \leq E[\mathrm{rew}((P, V)(x))]$.

The expectations and the probabilities are taken over the random coins of the prover and verifier.

Let $\epsilon_{\widetilde{P}} = \Pr[\mathrm{out}((\widetilde{P}, V)(x)) \neq f(x)]$. Following [6] we define the reward gap as

$$\Delta(x) = \min_{P^* : \, \epsilon_{P^*} = 1} [\delta_{P^*}(x)]$$

i.e. the minimum reward gap over the provers that always report the incorrect value. It is easy to see that for an arbitrary prover $\widetilde{P}$ we have $\delta_{\widetilde{P}}(x) \geq \epsilon_{\widetilde{P}} \cdot \Delta(x)$. Therefore it suffices to prove that a protocol has a strictly positive reward gap $\Delta(x)$ for all $x$.

Examples of Rational Proofs. For concreteness, here we show the protocol for a single threshold gate (readers are referred to [1,2,6] for more examples).

Let $G_{n,k}(x_1, \ldots, x_n)$ be a threshold gate with $n$ Boolean inputs, which evaluates to 1 if at least $k$ of the input bits are 1. The protocol in [2] to evaluate this gate goes as follows. The Prover announces the number $\tilde{m}$ of input bits equal to 1, which allows the Verifier to compute $G_{n,k}(x_1, \ldots, x_n)$. The Verifier selects a random index $i \in [1..n]$, looks at input bit $b = x_i$, and rewards the Prover using Brier's Rule $BSR(\tilde{p}, b)$, where $\tilde{p} = \tilde{m}/n$, i.e. the probability claimed by the Prover that a randomly selected input bit is 1. Then

$$BSR(\tilde{p}, 1) = 2\tilde{p} - \tilde{p}^2 - (1 - \tilde{p})^2 + 1 = 2\tilde{p}(2 - \tilde{p})$$
$$BSR(\tilde{p}, 0) = 2(1 - \tilde{p}) - \tilde{p}^2 - (1 - \tilde{p})^2 + 1 = 2(1 - \tilde{p}^2)$$

Let $m$ be the true number of input bits equal to 1, and $p = m/n$ the corresponding probability. Then the expected reward of the Prover is

$$p \cdot BSR(\tilde{p}, 1) + (1 - p) \cdot BSR(\tilde{p}, 0) \qquad (1)$$

which is easily seen to be maximized for $\tilde{p} = p$, i.e. when the Prover announces the correct result. Moreover, one can see that when the Prover announces a wrong $\tilde{m}$, his reward goes down by $2(p - \tilde{p})^2 \geq 2/n^2$. In other words, for all $n$-bit inputs $x$ we have $\Delta(x) = 2/n^2$, and if a dishonest Prover $\widetilde{P}$ cheats with probability $\epsilon_{\widetilde{P}}$, then $\delta_{\widetilde{P}} > 2\epsilon_{\widetilde{P}}/n^2$.
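The maximization property of Brier's rule in Eq. 1 can be sanity-checked numerically. The following sketch (our own illustration, not from the paper; the values of $n$ and the true Hamming weight are arbitrary) evaluates the expected reward for every possible claim $\tilde{m}$ and confirms that the honest announcement wins, with the reward dropping by $2(p - \tilde{p})^2$ for a wrong claim:

```python
# Sanity check (our own, not from the paper) of the threshold-gate protocol.
def bsr(p_claimed, bit):
    """Brier's scoring rule BSR(p~, b) as defined above."""
    if bit == 1:
        return 2 * p_claimed * (2 - p_claimed)
    return 2 * (1 - p_claimed ** 2)

def expected_reward(p_true, p_claimed):
    """Equation (1): p * BSR(p~, 1) + (1 - p) * BSR(p~, 0)."""
    return p_true * bsr(p_claimed, 1) + (1 - p_true) * bsr(p_claimed, 0)

n = 10
m_true = 7                           # true Hamming weight of the input
p = m_true / n
rewards = {m: expected_reward(p, m / n) for m in range(n + 1)}
best_claim = max(rewards, key=rewards.get)
assert best_claim == m_true          # the honest announcement maximizes the reward
# Reward drop for the wrong claim m~ = m - 1 is 2(p - p~)^2 = 2/n^2:
drop = rewards[m_true] - rewards[m_true - 1]
assert abs(drop - 2 * (1 / n) ** 2) < 1e-9
```

The same loop with any other choice of `m_true` shows the identical pattern, since the expected reward $4p\tilde{p} - 2\tilde{p}^2 + 2 - 2p$ is a concave parabola in $\tilde{p}$ peaking at $\tilde{p} = p$.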

3 Profit vs. Reward

Let us now define the profit of the Prover as the difference between the reward paid by the verifier and the cost incurred by the Prover to compute $f$ and engage in the protocol. As already pointed out in [2,6], the definition of Rational Proof is sufficiently robust to also maximize the profit of the honest prover and not just the reward. Indeed, consider the case of a “lazy” prover $\widetilde{P}$ that does not evaluate the function: even if $\widetilde{P}$ collects a “small” reward, his total profit might still be higher than the profit of the honest prover $P$.

Set $R(x) = E[\mathrm{rew}((P, V)(x))]$ and $\widetilde{R}(x) = E[\mathrm{rew}((\widetilde{P}, V)(x))]$, and let $C(x)$ [resp. $\widetilde{C}(x)$] be the cost for $P$ [resp. $\widetilde{P}$] to engage in the protocol. Then we want

$$R(x) - C(x) \geq \widetilde{R}(x) - \widetilde{C}(x), \quad \text{i.e.} \quad \delta_{\widetilde{P}}(x) \geq C(x) - \widetilde{C}(x)$$

In general this is not true (see for example the previous protocol), but it is always possible to change the reward by a multiplier $M$. Note that if $M \geq C(x)/\delta_{\widetilde{P}}(x)$ then we have that

$$M(R(x) - \widetilde{R}(x)) \geq C(x) \geq C(x) - \widetilde{C}(x)$$

as desired. Therefore, by using the multiplier $M$ in the reward, the honest prover $P$ maximizes its profit against all provers $\widetilde{P}$ except those for which $\delta_{\widetilde{P}}(x) \leq C(x)/M$, i.e. those who report the incorrect result with a “small” probability $\epsilon_{\widetilde{P}}(x) \leq \frac{C(x)}{M \Delta(x)}$.

We note that $M$ might be bounded from above by budget considerations (i.e. the need to keep the total reward $M R(x) \leq B$ for some budget $B$). This points to the importance of a large reward gap $\Delta(x)$: the larger $\Delta(x)$ is, the smaller the probability with which a cheating prover $\widetilde{P}$ must report an incorrect result, in order for $\widetilde{P}$ to achieve a higher profit than $P$.


Example. In the above protocol we can assume that the cost of the honest prover is $C(x) = n$, and we know that $\Delta(x) = 2/n^2$. Therefore the profit of the honest prover is maximized against all provers that report an incorrect result with probability larger than $n^3/M$, which can be made sufficiently small by choosing an appropriately large multiplier.

Remark 1. If we are interested in an asymptotic treatment, it is important to notice that as long as $\Delta(x) \geq 1/\mathrm{poly}(|x|)$, it is possible to keep a polynomial reward budget and maximize the honest prover's profit against all provers who cheat with a substantial probability $\epsilon_{\widetilde{P}} \geq 1/\mathrm{poly}(|x|)$.

4 Sequential Composition

We now present the main results of our work. First we informally describe our notion of sequential composition of rational proofs via a motivating example, and show that the protocols in [1,2,6] do not satisfy it. Then we present our definition of sequential rational proofs, and a protocol that achieves it for circuits of bounded depth.

4.1 Motivating Example

Consider the protocol in the previous section for the computation of the function $G_{n,k}(\cdot)$. Assume that the honest execution of the protocol (including the computation of $G_{n,k}(\cdot)$) has cost $C = n$.

Assume now that we are given a sequence of $n$ inputs $x^{(1)}, \ldots, x^{(i)}, \ldots$, where each $x^{(i)}$ is an $n$-bit string. In the following, let $m_i$ be the Hamming weight of $x^{(i)}$ and $p_i = m_i/n$.

Therefore the honest prover, investing $C = n$ cost, will be able to execute the protocol only once, say on input $x^{(i)}$. By setting $p = \tilde{p} = p_i$ in Eq. 1, we see that $P$ obtains reward

$$R(x^{(i)}) = 2(p_i^2 - p_i + 1) \leq 2$$

Consider instead a prover $\widetilde{P}$ which in the execution of the protocol outputs a random value $\tilde{m} \in [0..n]$. The expected reward of $\widetilde{P}$ on any input $x^{(i)}$ is (by setting $p = p_i$ and $\tilde{p} = m/n$ in Eq. 1 and taking expectations):

$$\widetilde{R}(x^{(i)}) = E_{m,b}\!\left[BSR\!\left(\frac{m}{n}, b\right)\right] = \frac{1}{n+1} \sum_{m=0}^{n} E_b\!\left[BSR\!\left(\frac{m}{n}, b\right)\right] = \frac{1}{n+1} \sum_{m=0}^{n} 2\left(2 p_i \cdot \frac{m}{n} - \frac{m^2}{n^2} - p_i + 1\right) = 2 - \frac{2n+1}{3n} > 1 \quad \text{for } n > 1.$$


Therefore, by “solving” just two computations, $\widetilde{P}$ earns more than $P$. Moreover, the strategy of $\widetilde{P}$ has cost 1, and therefore it earns more than $P$ by investing a lot less cost¹.

Note that “scaling” the reward by a multiplier $M$ does not help in this case, since both the honest and dishonest prover's rewards would be multiplied by the same multiplier, without any effect on the above scenario.

We have therefore shown a rational strategy where cheating many times and collecting many rewards is more profitable than collecting a single reward for an honest computation.
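The arithmetic behind this counterexample can be verified exactly. The sketch below (our own check, with arbitrary choices of $n$ and Hamming weight) reproduces the expected reward $2 - \frac{2n+1}{3n}$ of the random-claim strategy using exact rational arithmetic:

```python
# Our own exact check (not from the paper) of the motivating example.
from fractions import Fraction

def expected_reward(p, p_claimed):
    """Eq. 1 with Brier's rule: p*BSR(p~,1) + (1-p)*BSR(p~,0)."""
    return p * (2 * p_claimed * (2 - p_claimed)) + (1 - p) * (2 * (1 - p_claimed ** 2))

def random_prover_reward(n, m_true):
    """Average reward over a uniformly random claim m~ in {0, ..., n}."""
    p = Fraction(m_true, n)
    total = sum(expected_reward(p, Fraction(m, n)) for m in range(n + 1))
    return total / (n + 1)

n, m_true = 10, 4
assert random_prover_reward(n, m_true) == 2 - Fraction(2 * n + 1, 3 * n)
# The value does not depend on the true Hamming weight (the p_i terms cancel):
assert random_prover_reward(n, 9) == random_prover_reward(n, 1)
# Honest reward per instance is 2(p^2 - p + 1) <= 2, at cost n per instance;
# the random prover collects > 1 per instance at O(1) cost.
p = Fraction(m_true, n)
assert expected_reward(p, p) == 2 * (p * p - p + 1)
```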

4.2 Sequentially Composable Rational Proofs

The above counterexample motivates the following definition, which formalizes that the reward of the honest prover $P$ must always be larger than the total reward of any prover $\widetilde{P}$ that invests less computation cost than $P$.

Technically this is not trivial to do, since it is not possible to claim the above for any prover $\widetilde{P}$ and any sequence of inputs, because it is possible that for a given input $\tilde{x}$ the prover $\widetilde{P}$ has “hardwired” the correct value $\tilde{y} = f(\tilde{x})$ and can compute it without investing any work. We therefore propose a definition that holds for inputs randomly chosen according to a given probability distribution $D$, and we allow for the possibility that the reward of a dishonest prover can be “negligibly” larger than the reward of the honest prover (for example, if $\widetilde{P}$ is lucky and such “hardwired” inputs are selected by $D$).

Definition 2 (Sequential Rational Proof). A rational proof $(P, V)$ for a function $f : \{0,1\}^n \to \{0,1\}^n$ is $\epsilon$-sequentially composable for an input distribution $D$ if, for every prover $\widetilde{P}$ and every sequence of inputs $x, x_1, \ldots, x_k \in D$ such that $C(x) \geq \sum_{i=1}^{k} \widetilde{C}(x_i)$, we have that $\sum_i \widetilde{R}(x_i) - R \leq \epsilon$.

A few sufficient conditions for sequential composability follow.

Lemma 1. Let $(P, V)$ be a rational proof. If for every input $x$ it holds that $R(x) = R$ and $C(x) = C$ for constants $R$ and $C$, and the following inequality holds for every $\widetilde{P} \neq P$ and input $x \in D$:

$$\frac{\widetilde{R}(x)}{R} \leq \frac{\widetilde{C}(x)}{C} + \epsilon$$

then $(P, V)$ is $kR\epsilon$-sequentially composable for $D$.

Proof. It suffices to observe that, for any $k$ inputs $x_1, \ldots, x_k$, the inequality above implies

$$\sum_{i=1}^{k} \widetilde{R}(x_i) \leq R\left[\sum_{i=1}^{k} \left(\frac{\widetilde{C}(x_i)}{C} + \epsilon\right)\right] \leq R + kR\epsilon$$

where the last inequality holds whenever $\sum_{i=1}^{k} \widetilde{C}(x_i) \leq C$, as in Definition 2.

¹ If we think of cost as time, then in the same time interval in which $P$ solves one problem, $\widetilde{P}$ can solve up to $n$ problems, earning a lot more money by answering fast and incorrectly.


Corollary 1. Let $(P, V)$ and $\mathrm{rew}$ be respectively an interactive proof and a reward function as in Definition 1. If $\mathrm{rew}$ can only assume the values 0 and $R$ for some constant $R$, let $\tilde{p}_x = \Pr[\mathrm{rew}((\widetilde{P}, V)(x)) = R]$. If for $x \in D$

$$\tilde{p}_x \leq \frac{\widetilde{C}(x)}{C} + \epsilon$$

then $(P, V)$ is $kR\epsilon$-sequentially composable for $D$.

Proof. Observe that $\widetilde{R}(x) = \tilde{p}_x \cdot R$ and then apply Lemma 1.

4.3 Sequential Rational Proofs in the PCP Model

We now describe a rational proof that appeared in [2] and prove that it is sequentially composable. The protocol assumes the existence of a trusted memory storage to which both Prover and Verifier have access, to realize the so-called “PCP” (Probabilistically Checkable Proof) model. In this model, the Prover writes a very long proof of correctness, which the Verifier checks only in a few randomly selected positions. The trusted memory is needed to make sure that the Prover is “committed” to the proof before the Verifier starts querying it.

The following protocol for proofs on a binary logical circuit $C$ appeared in [2]. The Prover writes all the (alleged) values $\alpha_w$, for every wire $w \in C$, to the trusted memory location. The Verifier samples a single random gate to check its correctness and determines the reward accordingly:

1. The Prover writes the vector $\{\alpha_w\}_{w \in C}$.
2. The Verifier samples a random gate $g \in C$.
   – The Verifier reads $\alpha_{g_{out}}, \alpha_{g_L}, \alpha_{g_R}$, with $g_{out}, g_L, g_R$ being respectively the output, left, and right input wires of $g$; the Verifier checks that $\alpha_{g_{out}} = g(\alpha_{g_L}, \alpha_{g_R})$;
   – If $g$ is an input gate, the Verifier also checks that $\alpha_{g_L}, \alpha_{g_R}$ correspond to the correct input values.

The Verifier pays $R$ if both checks are satisfied, otherwise it pays 0.
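The Verifier's single-gate check can be sketched as follows. This is our own toy model, not code from [2]: the trusted memory is a dictionary of wire values written by the prover, wires are named strings, and input wires are those whose (hypothetical) names start with `x`:

```python
# Toy PCP-model check (our own illustration; data layout is an assumption).
import random

AND = lambda a, b: a & b
OR = lambda a, b: a | b

def verify_one_gate(gates, alpha, x, reward=1):
    """Sample one random gate, check its consistency, pay `reward` or 0."""
    out, op, left, right = random.choice(gates)
    if alpha[out] != op(alpha[left], alpha[right]):
        return 0                                  # gate inconsistent: pay 0
    for w in (left, right):                       # input-gate check
        if w.startswith("x") and alpha[w] != x[int(w[1:])]:
            return 0
    return reward

# Circuit computing (x0 AND x1) OR x2, with an honest proof written to memory:
gates = [("w0", AND, "x0", "x1"), ("y", OR, "w0", "x2")]
x = [1, 0, 1]
alpha = {"x0": 1, "x1": 0, "x2": 1, "w0": 0, "y": 1}
assert verify_one_gate(gates, alpha, x) == 1      # honest prover is always paid
```

A dishonest proof is caught only when the sampled gate happens to be an incorrect one, which is what makes the reward proportional to the fraction of correct gates.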

Theorem 1 ([2]). The protocol above is a rational proof for any Boolean function in $P^{||NP}$, the class of all languages decidable by a polynomial-time machine that can make non-adaptive queries to NP.

We will now show a cost model in which the rational proof above is sequentially composable. We will assume that the cost for any prover is given by the number of gates he writes. Thus, for any input $x$, the costs for honest and dishonest provers are respectively $C(x) = S$, where $S = |C|$, and $\widetilde{C}(x) = \tilde{s}$, where $\tilde{s}$ is the number of gates written by the dishonest prover. Observe that in this model a dishonest prover may not write all the $S$ gates, and that not all of the $\tilde{s}$ gates have to be correct. Let $\sigma \leq \tilde{s}$ be the number of correct gates written by $\widetilde{P}$.


Theorem 2. In the cost model above, the PCP protocol in [2] is sequentially composable.

Proof. Observe that the probability $\tilde{p}_x$ that $\widetilde{P} \neq P$ earns $R$ satisfies

$$\tilde{p}_x = \frac{\sigma}{S} \leq \frac{\tilde{s}}{S} = \frac{\widetilde{C}}{C}$$

Applying Corollary 1 completes the proof.

The above cost model basically says that the cost of writing down a gate dominates everything else, in particular the cost of computing that gate. In other cost models a proof of sequential composition may not be as straightforward. Assume, for example, that the honest prover pays \$1 to compute the value of a single gate while writing down that gate is “free”. Now $\tilde{p}_x$ is still equal to $\frac{\sigma}{S}$, but to prove that this is smaller than $\frac{\widetilde{C}}{C}$ we need some additional assumption that limits the ability of $\widetilde{P}$ to “guess” the right value of a gate without computing it (which we will discuss in the next section).

4.4 Sequential Composition and the Unique Inner State Assumption

Definition 2 for sequential rational proofs requires a relationship between the reward earned by the prover and the amount of “work” the prover invested to produce that result. The intuition is that to produce the correct result, the prover must run the computation and incur its full cost. Unfortunately this intuition is difficult, if not downright impossible, to formalize. Indeed, for a specific input $x$, a “dishonest” prover $\widetilde{P}$ could have the correct $y = f(x)$ value “hardwired” and could answer correctly without having to perform any computation at all. Similarly, for certain inputs $x, x'$ and a certain function $f$, a prover $\widetilde{P}$, after computing $y = f(x)$, might be able to “recycle” some of the computation effort (by saving some state) and compute $y' = f(x')$ incurring a much smaller cost than computing it from scratch.

A way to circumvent this problem was suggested in [3] under the name of Unique Inner State Assumption: the idea is to assume a distribution $D$ over the input space. When inputs $x$ are chosen according to $D$, then we assume that computing $f$ requires cost $C$ from any party: this can be formalized by saying that if a party invests $\widetilde{C} = \gamma C$ effort (for $\gamma \leq 1$), then it computes the correct value only with probability negligibly close to $\gamma$ (since a party can always adopt a “mixed” strategy in which with probability $\gamma$ it runs the correct computation and with probability $1 - \gamma$ it does something else, like guessing at random).

Assumption 1. We say that the $(C, \epsilon)$-Unique Inner State Assumption holds for a function $f$ and a distribution $D$ if, for any algorithm $\widetilde{P}$ with cost $\widetilde{C} = \gamma C$, the probability that on input $x \in D$, $\widetilde{P}$ outputs $f(x)$ is at most $\gamma + (1 - \gamma)\epsilon$.


Note that the assumption implicitly assumes a “large” output space for $f$, since a random guess of the output of $f$ will be correct with probability $2^{-n}$, where $n$ is the binary length of $f(x)$.

More importantly, note that Assumption 1 immediately yields our notion of sequential composability if the Verifier can detect whether the Prover is lying or not. Assume, as a mental experiment for now, that given input $x$, the Prover claims that $\tilde{y} = f(x)$, and the Verifier checks by recomputing $y = f(x)$, paying a reward of $R$ to the Prover if $y = \tilde{y}$ and 0 otherwise. Clearly this is not a very useful protocol, since the Verifier is not saving any computation effort by talking to the Prover. But it is sequentially composable according to our definition, since $\tilde{p}_x$, the probability that $\widetilde{P}$ collects $R$, is equal to the probability that $\widetilde{P}$ computes $f(x)$ correctly, and by using Assumption 1 we have that

$$\tilde{p}_x = \gamma + (1 - \gamma)\epsilon \leq \frac{\widetilde{C}}{C} + \epsilon$$

satisfying Corollary 1.

To make this a useful protocol we adopt a strategy from [3], which also uses this idea of verification by recomputing. Instead of checking every execution, we check only a random subset of them, and therefore we can amortize the Verifier's effort over a large number of computations. Fix a parameter $m$. The Prover sends to the Verifier the values $\tilde{y}_j$ which are claimed to be the result of computing $f$ over $m$ inputs $x_1, \ldots, x_m$. The Verifier chooses one index $i$ randomly between 1 and $m$, and computes $y_i = f(x_i)$. If $y_i = \tilde{y}_i$ the Verifier pays $R$, otherwise it pays 0.
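A minimal sketch of this spot-checking scheme follows (our own illustration; the function `f` and the reward value are placeholders, not from [3]):

```python
# Spot-check sketch (our own): the verifier recomputes f on one random index
# out of m, so its cost is C rather than m*C.
import random

def spot_check_reward(f, xs, claimed_ys, reward=1):
    """Pay `reward` iff f agrees with the prover's claim on a random index."""
    i = random.randrange(len(xs))
    return reward if f(xs[i]) == claimed_ys[i] else 0

f = lambda x: sum(x) % 97                     # stand-in computation
xs = [[j, j + 1, j + 2] for j in range(8)]
honest = [f(x) for x in xs]
assert spot_check_reward(f, xs, honest) == 1            # honest prover always paid
wrong_all = [(y + 1) % 97 for y in honest]
assert spot_check_reward(f, xs, wrong_all) == 0         # all-wrong prover never paid
```

A prover that is wrong on only some of the $m$ instances is paid with probability equal to the fraction of correct answers, which is exactly what the analysis below bounds.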

Let $T$ be the total cost for the honest prover to compute $m$ instances: clearly $T = mC$. Let $\widetilde{T} = \sum_i \widetilde{C}_i$ be the total effort invested by $\widetilde{P}$, investing $\widetilde{C}_i$ on the computation of $x_i$. In order to satisfy Corollary 1 we need that $\tilde{p}_x$, the probability that $\widetilde{P}$ collects $R$, be less than $\widetilde{T}/T + \epsilon$.

Let $\gamma_i = \widetilde{C}_i/C$; then under Assumption 1 we have that $\tilde{y}_i$ is correct with probability at most $\gamma_i + (1 - \gamma_i)\epsilon$. Therefore, if we set $\gamma = \sum_i \gamma_i / m$, we have

$$\tilde{p}_x = \frac{1}{m} \sum_i [\gamma_i + (1 - \gamma_i)\epsilon] = \gamma + (1 - \gamma)\epsilon \leq \gamma + \epsilon$$

But note that $\gamma = \widetilde{T}/T$ as desired, since

$$\widetilde{T} = \sum_i \widetilde{C}_i = \sum_i \gamma_i C = T \sum_i \gamma_i / m$$

Efficiency of the Verifier. If our notion of an “efficient Verifier” is a verifier who runs in time $o(C)$, where $C$ is the time to compute $f$, then in the above protocol $m$ must be sufficiently large to amortize the cost of computing one execution over many (in particular, a constant value of $m$, independent of the input size $n$, would not work). In our “concrete analysis” treatment, if we request that the Verifier run in time $\delta C$ for an “efficiency” parameter $\delta \leq 1$, then we need $m \geq \delta^{-1}$.


Therefore we are still in need of a protocol which has an efficient Verifier, and which works in the “stand-alone” case ($m = 1$) but also in the case of sequential composability over any number $m$ of executions.

5 Our Protocol

We now present a protocol that works for functions $f : \{0,1\}^n \to \{0,1\}^n$ expressed by an arithmetic circuit $\mathcal{C}$ of size $C$, depth $d$, and fan-in 2, given as common input to both Prover and Verifier together with the input $x$.

Intuitively, the idea is for the Prover to provide the Verifier with the output value $y$ and its two “children” $y_L, y_R$ in the gate, i.e. the two input values of the last output gate $G$. The Verifier checks that $G(y_L, y_R) = y$, and then asks the Prover to verify that $y_L$ or $y_R$ (chosen at random) is correct, by recursing on the above test. The protocol description follows.

1. The Prover evaluates the circuit on $x$ and sends the output value $y_1$ to the Verifier.
2. Repeat $r$ times: the Verifier identifies the root gate $g_1$ and then invokes $Round(1, g_1, y_1)$,

where the procedure $Round(i, g_i, y_i)$ is defined for $1 \leq i \leq d$ as follows:

1. The Prover sends the values of the input wires $z_i^0$ and $z_i^1$ of $g_i$ to the Verifier.
2. The Verifier performs the following:
   – Check that $y_i$ is the result of the operation of gate $g_i$ on inputs $z_i^0$ and $z_i^1$. If not, STOP and pay a reward of 0.
   – If $i = d$ (i.e. if the inputs to $g_i$ are input wires), check that the values of $z_i^0$ and $z_i^1$ are equal to the corresponding bits of $x$. Pay reward $R$ to Merlin if this is the case, nothing otherwise.
   – If $i < d$, choose a random bit $b$, send it to Merlin, and invoke $Round(i+1, g_{i+1}^b, z_i^b)$, where $g_{i+1}^b$ is the child gate of $g_i$ whose output is $z_i^b$.
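The recursion above can be sketched in code. In the toy model below (our own, with a hypothetical circuit encoding: a node is either an input-wire index or a triple `(op, left, right)`) the honest Prover's messages are simulated by direct evaluation, so the run exhibits the Verifier's checks rather than a real two-party execution:

```python
# Toy simulation (our own) of one repetition of the round-by-round protocol.
import random

def evaluate(node, x):
    """Honest evaluation of a circuit node on input vector x."""
    if isinstance(node, int):                    # input wire
        return x[node]
    op, left, right = node
    return op(evaluate(left, x), evaluate(right, x))

def run_protocol(root, x, claimed_y, reward=1):
    """Verifier side; the honest prover answers with true wire values."""
    node, y = root, claimed_y
    while not isinstance(node, int):
        op, left, right = node
        z0, z1 = evaluate(left, x), evaluate(right, x)   # prover's message
        if op(z0, z1) != y:
            return 0                             # inconsistent gate: pay 0
        b = random.randint(0, 1)                 # verifier's random challenge
        node, y = (left, z0) if b == 0 else (right, z1)
    return reward if y == x[node] else 0         # input-wire check at depth d

add_ = lambda a, b: a + b
mul = lambda a, b: a * b
circuit = (add_, (mul, 0, 1), (add_, 2, 3))      # (x0*x1) + (x2+x3)
x = [2, 3, 4, 5]
assert run_protocol(circuit, x, evaluate(circuit, x)) == 1   # honest: paid R
assert run_protocol(circuit, x, 999) == 0        # wrong output caught at the root
```

Since this prover is honest everywhere, a wrong claimed output is caught immediately; the interesting cheating strategies, where only one child's value is wrong, are analyzed in the proof of Theorem 3 below.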

5.1 Efficiency

The protocol runs in at most $d$ rounds. In each round, the Prover sends a constant number of bits representing the values of specific input and output wires; the Verifier sends at most one bit per round, the choice of the child gate. Thus the communication complexity is $O(d)$ bits.

The computation of the Verifier in each round is: (i) computing the result of a gate and checking for bit equality; (ii) sampling a child. Gate operations and equality checks are $O(1)$ per round. We assume our circuits are $T$-uniform, which allows the Verifier to select the correct gate in time $T(n)$². Thus the Verifier runs in time $O(rd \cdot T(n))$ with $r = O(\log C)$.

² We point out that the Prover can provide the Verifier with the requested gate, and then the Verifier can use the uniformity of the circuit to check that the Prover has given him the correct gate at each level in time $O(T(n))$.


5.2 Proofs of (Stand-Alone) Rationality

Theorem 3. The protocol in Sect. 5 with $r = 1$ is a Rational Proof according to Definition 1.

We prove the above theorem by showing that for every input $x$ the reward gap $\Delta(x)$ is positive.

Proof. Let $\widetilde{P}$ be a prover that always reports $\tilde{y} \neq y_1 = f(x)$ at Round 1.

Let us proceed by induction on the depth $d$ of the circuit. If $d = 1$ then there is no possibility for $\widetilde{P}$ to cheat successfully, and its reward is 0.

Assume $d > 1$. We can think of the binary circuit $\mathcal{C}$ as composed of two subcircuits $\mathcal{C}_L$ and $\mathcal{C}_R$ and the output gate $g_1$, such that $f(x) = g_1(\mathcal{C}_L(x), \mathcal{C}_R(x))$. The respective depths $d_L, d_R$ of these subcircuits are such that $0 \leq d_L, d_R \leq d - 1$ and $\max(d_L, d_R) = d - 1$. After sending $\tilde{y}$, the protocol requires that $\widetilde{P}$ send output values for $\mathcal{C}_L(x)$ and $\mathcal{C}_R(x)$; let us denote these claimed values respectively by $\tilde{y}_L$ and $\tilde{y}_R$. Notice that at least one of these alleged values will be different from the respective correct subcircuit output: if it were otherwise, $V$ would reject immediately, as $g_1(\tilde{y}_L, \tilde{y}_R) = f(x) \neq \tilde{y}$. Thus at most one of the two values $\tilde{y}_L, \tilde{y}_R$ is equal to the output of the corresponding subcircuit. The probability that $\widetilde{P}$ cheats successfully is:

$$\Pr[V \text{ accepts}] \leq \frac{1}{2} \cdot (\Pr[V \text{ accepts on } \mathcal{C}_L] + \Pr[V \text{ accepts on } \mathcal{C}_R]) \qquad (2)$$
$$\leq \frac{1}{2} \cdot (1 - 2^{-\max(d_L, d_R)}) + \frac{1}{2} \qquad (3)$$
$$\leq \frac{1}{2} \cdot (1 - 2^{-d+1}) + \frac{1}{2} \qquad (4)$$
$$= 1 - 2^{-d} \qquad (5)$$

At Eq. 3 we used the inductive hypothesis and the fact that all probabilities are at most 1.

Therefore the expected reward of $\widetilde{P}$ is $\widetilde{R} \leq R(1 - 2^{-d})$, and the reward gap is $\Delta(x) = 2^{-d} R$ (see Remark 2 for an explanation of the equality sign).

The following useful corollary follows from the proof above.

Corollary 2. If the protocol described in Sect. 5 is repeated $r \geq 1$ times, a prover can cheat with probability at most $(1 - 2^{-d})^r$.

Remark 2. We point out that one can always build a prover strategy $P^*$ which always answers incorrectly and achieves exactly the reward $R^* = R(1 - 2^{-d})$. This prover outputs an incorrect $\tilde{y}$ and then computes one of the two subcircuits feeding the output gate (so that at least one of the claimed input values is correct). This will allow him to recursively answer with values $z_i^0$ and $z_i^1$ of which one is correct, and therefore be caught only with probability $2^{-d}$.
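The survival probability of this always-wrong prover can be checked with a one-line recurrence (our own sanity check, assuming a full binary circuit of depth $d$): in each round the Verifier recurses into the correctly computed child with probability 1/2, after which the prover answers honestly and is never caught; otherwise the same game continues one level down, and at an input wire a wrong value is always caught.

```python
# Our own check of the Remark 2 bound: success probability of the optimal
# always-wrong prover on a (full binary) circuit of depth d.
from fractions import Fraction

def cheat_prob(d):
    """s(d) = 1/2 * 1 + 1/2 * s(d-1), with s(0) = 0 at an input wire."""
    if d == 0:
        return Fraction(0)        # a wrong input-wire value is always caught
    return Fraction(1, 2) + Fraction(1, 2) * cheat_prob(d - 1)

for d in range(1, 12):
    assert cheat_prob(d) == 1 - Fraction(1, 2) ** d   # closed form 1 - 2^{-d}
```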

Remark 3. In order to have a non-negligible reward gap (see Remark 1), we need to limit ourselves to circuits of depth $d = O(\log n)$.


5.3 Proof of Sequential Composability

General Suﬃcient Conditions for Sequential Composability

Lemma 2. Let $\mathcal{C}$ be a circuit of depth $d$. If the $(C, \epsilon)$-Unique Inner State Assumption (see Assumption 1) holds for the function $f$ computed by $\mathcal{C}$ and the input distribution $D$, then the protocol presented above with $r$ repetitions is a $kR\epsilon$-sequentially composable Rational Proof for $\mathcal{C}$ and $D$ if the following inequality holds:

$$(1 - 2^{-d})^r \leq \frac{1}{C}$$

Proof. Let $\gamma = \frac{\widetilde{C}}{C}$. Consider $x \in D$ and a prover $\widetilde{P}$ which invests effort $\widetilde{C} \leq C$. Under Assumption 1, $\widetilde{P}$ gives the correct output with probability at most $\gamma + \epsilon$; assume that in this case $\widetilde{P}$ collects the reward $R$. If $\widetilde{P}$ gives an incorrect output, we know (following Corollary 2) that he collects the reward $R$ with probability at most $(1 - 2^{-d})^r$, which by hypothesis is at most $1/C \leq \gamma$ (for any prover investing at least unit effort $\widetilde{C} \geq 1$). So either way we have that $\widetilde{R} \leq (\gamma + \epsilon)R$, and therefore applying Lemma 1 concludes the proof.

The problem with the above lemma is that it requires a large value of $r$ for the result to hold, resulting in an inefficient Verifier. In the following sections we discuss two approaches that will allow us to prove sequential composability even for an efficient Verifier:

– limiting the class of provers we can handle in our security proof;
– limiting the class of functions/circuits.

Limiting the Strategy of the Prover: Non-adaptive Provers. In proving sequential composability it is useful to find a connection between the amount of work done by a dishonest prover and its probability of cheating: the more a dishonest prover works, the higher its probability of cheating. This is true for our protocol, since the more “subcircuits” the prover computes correctly, the higher the probability of convincing the verifier of an incorrect output becomes. The question then is: how can a prover with an “effort budget” to spend maximize its probability of success in our protocol?

As we discussed in Remark 2, there is an adaptive strategy for $\widetilde{P}$ to maximize its probability of success: compute one subcircuit correctly at every round of the protocol. We call this strategy “adaptive” because the prover allocates its “effort budget” $\widetilde{C}$ on the fly during the execution of the rational proof. Conversely, a non-adaptive prover $\widetilde{P}$ uses $\widetilde{C}$ to compute some subcircuits in $\mathcal{C}$ before starting the protocol. Clearly an adaptive prover strategy is more powerful than a non-adaptive one (since the adaptive prover can direct its computation effort where it matters most, i.e. where the Verifier “checks” the computation).

Is it possible to limit the Prover to a non-adaptive strategy? This could be achieved by imposing some “timing” constraints on the execution of the protocol: to prevent the prover from performing large computations while interacting with the Verifier, the latter could request that the prover's responses be delivered “immediately”, and if a delay happens then the Verifier will not pay the reward. Similar timing constraints have been used before in the cryptographic literature, e.g. see the notion of timing assumptions in the concurrent zero-knowledge protocols in [5].

Therefore, in the rest of this subsection we assume that non-adaptive strategies are the only rational ones, and proceed to analyze our protocol under the assumption that the prover adopts a non-adaptive strategy.

Consider a prover $\widetilde{P}$ with effort budget $\widetilde{C} < C$. A DFS (for “depth first search”) prover uses its effort budget $\widetilde{C}$ to compute a whole subcircuit of size $\widetilde{C}$ and maximal depth $d_{DFS}$. Call this subcircuit $\mathcal{C}_{DFS}$. $\widetilde{P}$ can answer correctly any verifier query about a gate in $\mathcal{C}_{DFS}$. During the interaction with $V$, the behavior of a DFS prover is as follows:

– At the beginning of the protocol, send an arbitrary output value $y_1$.
– During procedure $Round(i, g_i, y_i)$:
  • If $g_i \in \mathcal{C}_{DFS}$, then $\widetilde{P}$ sends the two correct inputs $z_i^0$ and $z_i^1$.
  • If $g_i \notin \mathcal{C}_{DFS}$ and neither of $g_i$'s input gates belongs to $\mathcal{C}_{DFS}$, then $\widetilde{P}$ sends two arbitrary $z_i^0$ and $z_i^1$ that are consistent with $y_i$, i.e. $g_i(z_i^0, z_i^1) = y_i$.
  • If $g_i \notin \mathcal{C}_{DFS}$ and one of $g_i$'s input gates belongs to $\mathcal{C}_{DFS}$, then $\widetilde{P}$ sends the correct wire known to him and another arbitrary value consistent with $y_i$ as above.

Lemma 3 (Advantage of a DFS Prover). In one repetition of the protocol above, a DFS prover with effort budget C̃ has probability of cheating p̃_DFS bounded by

    p̃_DFS ≤ 1 − 2^{−d_DFS}

The proof of Lemma 3 follows easily from the proof of the stand-alone rationality of our protocol (see Theorem 3).
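To make the lemma concrete, the sketch below (our own illustration, not from the paper) works in the simplifying toy model where the computed subcircuit is a complete binary tree, so that a subtree of depth t contains 2^t − 1 gates; it computes the maximal DFS depth affordable with a given budget and the resulting Lemma 3 bound.

```python
import math

def dfs_depth_and_bound(budget):
    """Depth of the largest complete binary subcircuit a DFS prover can
    afford (a subtree of depth t has 2**t - 1 gates, so the affordable
    depth is floor(log2(budget + 1))), together with the Lemma 3 bound
    1 - 2**(-d_DFS) on the cheating probability."""
    d_dfs = int(math.log2(budget + 1))
    return d_dfs, 1 - 2.0 ** -d_dfs
```

For example, a budget of 7 gates buys a depth-3 subcircuit and a cheating bound of 1 − 2^{−3} = 0.875.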

If a DFS prover focuses on maximizing the depth of a computed subcircuit given a certain investment, BFS provers allot their resources to compute all subcircuits rooted at a certain height. A BFS prover with effort budget C̃ computes the value of all gates up to the maximal height possible, d_BFS. Note that d_BFS is a function of the circuit C and of the effort C̃. Let C_BFS be the collection of gates computed by the BFS prover. The interaction of a BFS prover with V throughout the protocol resembles that of the DFS prover outlined above:

– At the beginning of the protocol, send an arbitrary output value y_1.
– During procedure Round(i, g_i, y_i):
  • If g_i ∈ C_BFS, then P̃ sends the two correct inputs z^0_i and z^1_i.
  • If g_i ∉ C_BFS and neither of g_i's input gates belongs to C_BFS, then P̃ sends two arbitrary values z^0_i and z^1_i that are consistent with y_i, i.e. g_i(z^0_i, z^1_i) = y_i.
  • If g_i ∉ C_BFS and both of g_i's input gates belong to C_BFS, then P̃ will send one of the correct wire values known to him and another arbitrary value consistent with y_i as above.


As before, it is not hard to see that the probability of successful cheating by a BFS prover can be bounded as follows:

Lemma 4 (Advantage of a BFS Prover). In one repetition of the protocol above, a BFS prover with effort budget C̃ has probability of cheating p̃_BFS bounded by

    p̃_BFS ≤ 1 − 2^{−d_BFS}
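In the same toy model used above (our own illustration, assuming the circuit is a complete binary tree with 2^d inputs, so that there are 2^{d−i} gates at height i), the height d_BFS reached by a BFS prover can be computed from the budget as follows:

```python
def bfs_depth(budget, d):
    """Maximal height h such that computing *every* gate up to height h
    fits in the budget: in a complete binary tree with 2**d inputs there
    are 2**(d - i) gates at height i, so heights 1..h cost
    2**d - 2**(d - h) gates in total."""
    h = 0
    while h < d and 2 ** d - 2 ** (d - (h + 1)) <= budget:
        h += 1
    return h
```

With d = 3 (seven gates in total), a budget of 6 gates lets the BFS prover finish heights 1 and 2 (4 + 2 gates), so d_BFS = 2 and the Lemma 4 bound is 1 − 2^{−2} = 0.75.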

BFS and DFS provers are both special cases of the general non-adaptive strategy, which allots its investment C̃ among a general collection of subcircuits 𝒞. The interaction with V of such a prover is analogous to that of a BFS/DFS prover, but with a collection of computed subcircuits not constrained by any specific height. We now try to formally define what the success probability of such a prover is.

Definition 3 (Path Experiment). Consider a circuit C and a collection 𝒞 of subcircuits of C. Perform the following experiment: starting from the output gate, flip an unbiased coin and choose the "left" subcircuit or the "right" subcircuit at random with probability 1/2. Continue until the random path followed by the experiment reaches a computed gate in 𝒞. Let i be the height of this gate, which is the output of the experiment. Define as Π_i the probability that this experiment outputs i.
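The experiment is easy to simulate. The sketch below (our own illustration) models the circuit as a complete binary tree and a collection of computed subcircuits as a map from the path prefix identifying a gate to the height of the computed subcircuit rooted there; an input gate counts as computed at height 0.

```python
import random

def path_experiment(depth, computed, trials=200_000, seed=1):
    """Monte Carlo estimate of the distribution Pi_i of Definition 3:
    descend from the output gate, flipping an unbiased coin at each gate,
    and stop at the first gate that is the root of a computed subcircuit
    (or at an input wire, height 0); record that height."""
    rng = random.Random(seed)
    counts = {}
    for _ in range(trials):
        path = ""
        while path not in computed and len(path) < depth:
            path += rng.choice("01")          # unbiased left/right choice
        h = computed.get(path, 0)             # inputs count as height 0
        counts[h] = counts.get(h, 0) + 1
    return {h: c / trials for h, c in sorted(counts.items())}
```

For a depth-3 tree where the prover computed only the left subcircuit of the output gate (height 2), the path enters it with probability 1/2 and otherwise runs down to an input, so Π_2 = Π_0 = 1/2 and the sum Σ_i Π_i(1 − 2^{−i}) of the next lemma evaluates to (1/2)(1 − 2^{−2}) = 0.375.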

The proof of the following lemma is a generalization of the proof of security of our scheme. Once the "verification path" chosen by the Verifier enters a fully computed subcircuit at height i (which happens with probability Π^𝒞_i), the probability of success of the Prover is bounded by (1 − 2^{−i}).

Lemma 5 (Advantage of a Non-Adaptive Prover). In one repetition of the protocol above, a generic prover with effort budget C̃, used to compute a collection 𝒞 of subcircuits, has probability of cheating p̃_𝒞 bounded by

    p̃_𝒞 ≤ Σ_{i=0}^{d} Π_i (1 − 2^{−i})

where the Π_i are defined as in Definition 3.

Limiting the Class of Functions: Regular Circuits. Lemma 5 still does not produce a clear bound on the probability of success of a cheating prover. The reason is that it is not obvious how to bound the probabilities Π^𝒞_i that arise from the computed subcircuits 𝒞, since those depend in non-trivial ways on the topology of the circuit C.

We now present a type of circuit for which it can be shown that the BFS strategy is optimal. The restriction on the circuit is surprisingly simple: we call such circuits regular circuits. In the next section we show examples of interesting functions that admit regular circuits.


Definition 4 (Regular Circuit). A circuit C is said to be regular if the following conditions hold:

– C is layered;
– every gate has fan-in 2;
– the inputs of every gate are the outputs of two distinct gates.
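The three conditions are purely syntactic and can be checked mechanically. A minimal sketch (our own, assuming the circuit is given as a map from gate id to its layer and pair of input gates, with `None` inputs marking the layer-0 input wires):

```python
def is_regular(circuit):
    """Definition 4 check: every gate has fan-in 2, its two inputs come
    from distinct gates, and the circuit is layered (each gate's inputs
    sit exactly one layer below it)."""
    for layer, inputs in circuit.values():
        if inputs is None:                 # an input wire, nothing to check
            continue
        if len(inputs) != 2:               # fan-in 2
            return False
        a, b = inputs
        if a == b:                         # two distinct input gates
            return False
        if circuit[a][0] != layer - 1 or circuit[b][0] != layer - 1:
            return False                   # layered
    return True
```

For instance, a two-layer tree over four input wires is regular, while rewiring its output gate to read the same gate twice violates the distinctness condition.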

The following lemma states that, in regular circuits, we can bound the advantage of any prover investing C̃ by looking at the advantage of a BFS prover with the same investment.

Lemma 6 (A Bound for Provers' Advantage in Regular Circuits). Let P̃ be a prover investing C̃. Let C be the circuit being computed and δ = d_BFS(C, C̃). In one repetition of the protocol above, the advantage of P̃ is bounded by

    p̃ ≤ p̃_BFS = 1 − 2^{−δ}

Proof. Let 𝒞 be the family of subcircuits computed by P̃ with effort C̃. As pointed out above, the probability of success for P̃ is

    p̃ ≤ Σ_{i=0}^{d} Π^𝒞_i (1 − 2^{−i})

Consider now a prover P̃′ which uses C̃ effort to compute a different collection of subcircuits 𝒞′, defined as follows:

– Remove a gate from a subcircuit of height j in 𝒞: this produces two subcircuits of height j − 1. This is true because of the regularity of the circuit: since the inputs of every gate are the outputs of two distinct gates, removing a gate of height j produces two subcircuits of height j − 1;
– Use that computation to "join" two subcircuits of height k into a single subcircuit of height k + 1. Again we are using the regularity of the circuit here: since the circuit is layered, the only way to join two subcircuits into a single computed subcircuit is to take two subcircuits of the same height.

What happens to the probability p̃′ of success of P̃′? Let ℓ be the number of possible paths generated by the experiment above with 𝒞. Then the probability of entering a computed subcircuit at height j decreases by 1/ℓ, and that probability weight goes to entering at height j − 1. Similarly, the probability of entering at height k goes down by 2/ℓ, and that probability weight is shifted to entering at height k + 1. Therefore

    p̃′ ≤ Σ_{i ≠ j,j−1,k,k+1} Π_i (1 − 2^{−i})
         + (Π_j − 1/ℓ)(1 − 2^{−j}) + (Π_{j−1} + 1/ℓ)(1 − 2^{−j+1})
         + (Π_k − 2/ℓ)(1 − 2^{−k}) + (Π_{k+1} + 2/ℓ)(1 − 2^{−k−1})

       = p̃ + (1/ℓ)(1/2^j − 1/2^{j−1} + 1/2^{k−1} − 1/2^k)

       = p̃ + (2^k − 2^{k+1} + 2^{j+1} − 2^j)/(ℓ · 2^{j+k}) = p̃ + (2^j − 2^k)/(ℓ · 2^{j+k})

Note that p̃′ increases if j > k, which means that it is better to take "computation" away from tall computed subcircuits to make them shorter, and to use the saved computation to increase the height of shorter computed subtrees. The probability is therefore maximized when all the subtrees are of the same height, i.e. by the BFS strategy, which has probability of success p̃_BFS = 1 − 2^{−δ}.
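The algebra of this exchange argument is easy to double-check numerically. The sketch below (our own) computes the change in the bound when weight 1/ℓ moves from height j to height j − 1 while weight 2/ℓ moves from height k to height k + 1, and compares it against the closed form (2^j − 2^k)/(ℓ · 2^{j+k}):

```python
def rebalance_delta(j, k, ell):
    """Net change in sum_i Pi_i * (1 - 2**-i) caused by the exchange in
    the proof of Lemma 6: weight 1/ell moves from height j to j - 1 and
    weight 2/ell moves from height k to k + 1."""
    delta = (-(1 / ell) * (1 - 2 ** -j) + (1 / ell) * (1 - 2 ** -(j - 1))
             - (2 / ell) * (1 - 2 ** -k) + (2 / ell) * (1 - 2 ** -(k + 1)))
    closed = (2 ** j - 2 ** k) / (ell * 2 ** (j + k))
    assert abs(delta - closed) < 1e-12     # matches the closed form
    return delta
```

For j = 4, k = 2, ℓ = 8 the change is positive, confirming that flattening a taller subcircuit in favor of growing a shorter one increases the prover's success bound, and the change vanishes when j = k.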

The above lemma, therefore, yields the following.

Theorem 4. Let C be a regular circuit of size C. If the (C, ε)-Unique Inner State Assumption (see Assumption 1) holds for the function f computed by C and input distribution D, then the protocol presented above with r repetitions is a kR-sequentially composable Rational Proof for C for D, if the prover follows a non-adaptive strategy and the following inequality holds for all C̃:

    (1 − 2^{−δ})^r ≤ C̃/C

where δ = d_BFS(C, C̃).

Proof. Let γ = C̃/C. Consider x ∈ D and a prover P̃ which invests effort C̃ ≤ C. Under Assumption 1, P̃ gives the correct output with probability γ + ε; assume that in this case P̃ collects the reward R.

If P̃ gives an incorrect output, we can invoke Lemma 6 and conclude that he collects reward R with probability at most (1 − 2^{−δ})^r, which by hypothesis is less than γ. So either way we have that R̃ ≤ (γ + ε)R, and therefore applying Lemma 1 concludes the proof.
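The number of repetitions the theorem requires can be computed directly. A small sketch (our own, assuming 0 < γ < 1 and δ ≥ 1) finds the least r with (1 − 2^{−δ})^r ≤ γ = C̃/C:

```python
import math

def min_repetitions(delta, gamma):
    """Smallest r such that (1 - 2**-delta)**r <= gamma, i.e. enough
    repetitions to push the cheating probability below the ratio C~/C."""
    base = 1 - 2.0 ** -delta
    r = math.ceil(math.log(gamma) / math.log(base))
    while base ** r > gamma:      # guard against float rounding at the edge
        r += 1
    return r
```

For instance, δ = 1 and γ = 1/4 need r = 2 repetitions, while δ = 3 (base 7/8) and γ = 1/2 need r = 6.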

6 Results for FFT Circuits

In this section we apply the previous results to the problem of computing FFT circuits, and by extension to polynomial evaluations.

6.1 FFT Circuit for Computing a Single Coefficient

The Fast Fourier Transform is an almost ubiquitous computational problem that appears in many applications, including many of the volunteer computations that motivated our work. As described in [4], a circuit to compute the FFT of a vector of n input elements consists of log n levels, where each level comprises n/2 butterfly gates. The output of the circuit is also a vector of n elements.

Let us focus on the circuit that computes a single element of the output vector: it has log n levels, and at level i it has n/2^i butterfly gates. Moreover the circuit is regular, according to Definition 4.


Theorem 5. Under the (C, ε)-Unique Inner State Assumption for input distribution D, the protocol in Sect. 5, when repeated r = O(1) times, yields sequentially composable rational proofs for the FFT, under input distribution D and assuming non-adaptive prover strategies.

Proof. Since the circuit is regular, we can prove sequential composability by invoking Theorem 4 and proving that, for r = O(1), the following inequality holds:

    p̃ = (1 − 2^{−δ})^r ≤ C̃/C

where δ = d_BFS(C, C̃).

But for any δ̃ < d, the structure of the FFT circuit implies that the number of gates below height δ̃ is C̃_δ̃ = Θ(C(1 − 2^{−δ̃})). Thus the inequality above can be satisfied with r = Θ(1).
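The gate count used in the proof can be checked directly: with n/2^i gates at level i, the gates at heights 1 through δ̃ number Σ_{i=1}^{δ̃} n/2^i = n(1 − 2^{−δ̃}). A quick sketch (our own):

```python
def fft_cone_gates_up_to(n, h):
    """Butterfly gates at heights 1..h of the single-output FFT circuit
    on n inputs (n a power of two): sum of n / 2**i for i = 1..h."""
    return sum(n // 2 ** i for i in range(1, h + 1))
```

For n = 16 the whole circuit has 8 + 4 + 2 + 1 = 15 gates, and the gates up to height 2 number 12 = 16 · (1 − 2^{−2}), matching C̃_δ̃ = Θ(C(1 − 2^{−δ̃})).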

6.2 Mixed Strategies for Verification

One of the typical uses of the FFT is to change representation for polynomials. Given a polynomial P(x) of degree n − 1, we can represent it as a vector of n coefficients [a_0, ..., a_{n−1}] or as a vector of n points [P(ω_0), ..., P(ω_{n−1})]. If the ω_i are the complex n-th roots of unity, the FFT is the algorithm that goes from one representation to the other in O(n log n) time, rather than the obvious O(n^2).

In this section we consider the following problem: given two polynomials P, Q of degree n − 1 in point representation, compute the inner product of the coefficients of P and Q. A fan-in two circuit computing this function could be built as follows:

– two parallel FFT subcircuits computing the coefficient representation of P and Q (log n depth and n log n size in total for the two subcircuits);
– a subcircuit where at the first level the degree-i coefficients are multiplied with each other, and then all these products are added by a binary tree of additions (O(log n) depth and O(n) size).

Note that this circuit is regular, and has depth 2 log n + 1 and size n log n + n + 1.
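As a concrete illustration of the function this circuit computes, the sketch below (our own, using a textbook recursive radix-2 FFT rather than the circuit itself) recovers the coefficients of P and Q from their point representations at the n-th roots of unity and takes the inner product of the coefficient vectors:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT (len(a) a power of two): coefficients ->
    values at the n-th roots of unity, or the inverse direction when
    invert=True (the final division by n is done by the caller)."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for i in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * i / n) * odd[i]
        out[i], out[i + n // 2] = even[i] + w, even[i] - w
    return out

def coeff_inner_product(p_points, q_points):
    """Inner product of the coefficient vectors of P and Q, given their
    point representations: two inverse FFTs, n products, n - 1 sums."""
    n = len(p_points)
    p = [c / n for c in fft(p_points, invert=True)]
    q = [c / n for c in fft(q_points, invert=True)]
    return sum(a * b for a, b in zip(p, q))
```

The two inverse transforms mirror the two parallel FFT subcircuits, and the product-then-sum mirrors the O(n)-size top subcircuit.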

Consider a prover P̃ who pays C̃ < n log n effort. Then, since the BFS strategy is optimal, the probability of convincing the Verifier of a wrong result of the FFT is (1 − 2^{−d̃})^r, where d̃ = c log n with c ≤ 1. Note also that C̃/C < 1. Therefore with r = O(n^c) repetitions, the probability of success can be made smaller than C̃/C. The Verifier's complexity is O(n^c log n) = o(n log n).

If C̃ ≥ n log n then the analysis above fails, since d̃ > log n. Here we observe that in order for P̃ to earn a larger reward than P, it must be that P̃ has run at least k = O(log n) executions (since it is possible to find k + 1 inputs such that (k + 1)C̃ ≤ kC only if k > log n).

Assume for a moment that the prover always executes the same strategy with the same running time. In this case we can use a "mixed" strategy for verification:

– The Verifier pays the Prover only after k executions. Each execution is verified as above (with n^c repetitions);
– Additionally, the Verifier uses the "check by re-execution" strategy (from Sect. 4.4) every k executions (verifying one execution by recomputing it);
– The Verifier pays R if all the checks are satisfied, 0 otherwise;
– The Verifier's complexity is O(k n^c log n + n log n) = o(k n log n), the latter being the complexity of computing k instances.

Notice that there are many plausible ways to assume that the expected cost C̃ remains the same through the k + 1 proofs, for example by assuming that the Prover can be "reset" at the beginning of each execution and made oblivious of the previous interactions.

7 Conclusion

Rational Proofs are a promising approach to the problem of verifying computations in a rational model, where the prover is not malicious, but only motivated by the goal of maximizing its utility function. We showed that Rational Proofs do not satisfy basic compositional properties in the case where a large number of "computation problems" are outsourced, e.g. volunteer computations. We showed that a "fast" incorrect answer is more remunerative for the prover, by allowing him to solve more problems and collect more rewards. We presented an enhanced definition of Rational Proofs that removes the economic incentive for this strategy, and we presented a protocol that achieves it for some uniform bounded-depth circuits.

One thing to point out is that our protocol has two additional advantages:

– the honest Prover is always guaranteed a fixed reward R, as opposed to some of the protocols in [1,2] where the reward is a random variable even for the honest prover;
– our protocol is the first example of a rational proof for arithmetic circuits.

Our work leaves many interesting research directions to explore:

– Is it possible to come up with a protocol that works for any bounded-depth circuit, and not just circuits with special "topological" conditions such as the ones imposed by our results?
– Our results hold for "non-adaptive" prover strategies, though that seems more a proof artifact to simplify the analysis than a technical requirement. Is it possible to lift that restriction?
– Are there other circuits which, like the FFT one, satisfy our notions and requirements?
– What about rational proofs for arbitrary poly-time computations, even in the simpler stand-alone case?


References

1. Azar, P.D., Micali, S.: Rational proofs. In: 2012 ACM Symposium on Theory of Computing, pp. 1017–1028 (2012)
2. Azar, P.D., Micali, S.: Super-efficient rational proofs. In: 2013 ACM Conference on Electronic Commerce, pp. 29–30 (2013)
3. Belenkiy, M., Chase, M., Erway, C.C., Jannotti, J., Küpçü, A., Lysyanskaya, A.: Incentivizing outsourced computation. In: NetEcon 2008, pp. 85–90 (2008)
4. Cormen, T., Leiserson, C., Rivest, R., Stein, C.: Introduction to Algorithms. MIT Press (2001)
5. Dwork, C., Naor, M., Sahai, A.: Concurrent zero-knowledge. J. ACM 51(6), 851–898 (2004)
6. Guo, S., Hubacek, P., Rosen, A., Vald, M.: Rational arguments: single round delegation with sublinear verification. In: 2014 Innovations in Theoretical Computer Science Conference (2014)
7. Goldwasser, S., Micali, S., Rackoff, C.: The knowledge complexity of interactive proof-systems. In: Proceedings of the Seventeenth Annual ACM Symposium on Theory of Computing. ACM (1985)
8. Walfish, M., Blumberg, A.J.: Verifying computations without reexecuting them. Commun. ACM 58(2), 74–84 (2015)