A Lattice-Based
Anonymous Reputation System
Ravital Solomon
Wadham College
University of Oxford
A thesis submitted for the degree of
Master’s of Mathematics and Foundations of Computer Science
Trinity 2017
Contents

1 Introduction
2 Lattices
  2.1 Lattices and Their Basic Properties
  2.2 SVP
  2.3 Finding a Good Basis
    2.3.1 In 2 Dimensions
    2.3.2 LLL Algorithm
  2.4 Resistance to Quantum Attacks
  2.5 Average vs. Worst Case Hardness
  2.6 SIS
  2.7 LWE
  2.8 Regev's LWE Cryptosystem
3 Zero-Knowledge Proofs
  3.1 Foundations of Zero-Knowledge Proofs
  3.2 Sigma Protocols
  3.3 Fiat-Shamir Heuristic
  3.4 Kawachi's Stern-like Protocol for Lattices
  3.5 Decomposition-Extension Technique
4 Group Signatures
  4.1 Partially Dynamic Group Signatures
    4.1.1 Syntax
    4.1.2 Security Intuition
    4.1.3 Security Experiments
    4.1.4 Primitives
    4.1.5 Overview of the Scheme
  4.2 Fully Dynamic Group Signatures
    4.2.1 Syntax
    4.2.2 Security Requirements
    4.2.3 Security Experiments
    4.2.4 Reductions
  4.3 Lattice-Based Fully Dynamic Group Signatures
    4.3.1 Syntax
    4.3.2 Hardness Assumptions
    4.3.3 The Advantage of Accumulators
    4.3.4 The Scheme
    4.3.5 Zero-Knowledge Argument
    4.3.6 Security Proofs
5 Current Anonymous Reputation Systems
  5.1 Anonymous Reputation Systems via Linkable Ring Signatures
  5.2 Anonymous Reputation Systems via Blockchain
  5.3 A Number-Theoretic Anonymous Reputation System
    5.3.1 Syntax
    5.3.2 Security Notions
    5.3.3 Construction
    5.3.4 Comments
6 Our Lattice-Based Anonymous Reputation System
  6.1 Introduction
  6.2 Improvements to [6]
  6.3 Comparison with Current Anonymous Reputation Systems
  6.4 Syntax
  6.5 Primitives
    6.5.1 Lattice-Based Updateable Merkle Trees
    6.5.2 Naor-Yung Double Encryption Paradigm with Regev's LWE Encryption Scheme
    6.5.3 Simulation-Sound NIZKPs
    6.5.4 TAG Scheme
  6.6 Security Notions
  6.7 Construction
  6.8 Analysis of the Scheme
    6.8.1 Correctness
    6.8.2 Security
  6.9 Zero Knowledge Argument
    6.9.1 Protocol
    6.9.2 Proof of Zero Knowledge
    6.9.3 Proof of Special Soundness
  6.10 Security of the Reputation System
    6.10.1 Oracles
    6.10.2 Link-Anonymity
    6.10.3 Non-Frameability
    6.10.4 Link-Non-Frameability
    6.10.5 Traceability
    6.10.6 Tracing Soundness
    6.10.7 Public Linkability
  6.11 Commentary
7 Conclusion and Future Directions
Bibliography

List of Figures

2.1 A fundamental domain. [13]
2.2 An example of a good vs. bad basis. [13]
5.1 Informal Architecture of the Reputation System. [6]
6.1 A Comparison of our Lattice-Based Reputation System with AnonRep [27], Beaver [25], and a Number-Theoretic Reputation System [6]
6.2 Informal Architecture of our Reputation System.
Chapter 1
Introduction
With growing concerns of privacy, “anonymous” forms of digital communication and
payment have become increasingly popular. The rise of cryptocurrencies, blockchain,
and homomorphic encryption attest to this fact. A reputation system is a framework
by which users can review purchases they’ve made from vendors. Reputation systems
are an integral part of our world; we’re exposed to them on a near daily basis by
the likes of Amazon, eBay, AirBnB, and Uber. Given the current climate, it seems
natural to consider constructing an anonymous reputation system. The idea behind
an anonymous reputation system is quite simple. What if users could review their
purchases while maintaining some degree of anonymity?
Anonymous reputation systems are a fairly recent area of research. Currently
there are a few proposals for how such a system might be constructed. However,
no currently proposed system is based on post-quantum primitives. We will intro-
duce the first lattice-based anonymous reputation system in our paper. Lattice-based
cryptography is a rapidly growing area of research as its problems are still conjec-
tured to be hard with a quantum computer (unlike its popular cousin elliptic curve
cryptography).
Our anonymous reputation system will be based on lattice-based dynamic group
signatures. We will first start by giving a brief background on lattices and the as-
sociated hard problems in lattice cryptography. We’ll then review zero-knowledge
proofs as they are an integral part of group signatures. Following that, we'll provide
a detailed discussion of dynamic group signatures and the first fully dynamic lattice-
based group signature scheme. In Chapter 5 we’ll briefly cover the latest anonymous
reputation systems. Finally, in Chapter 6, we’ll present our construction of the first
lattice-based anonymous reputation system.
Although our anonymous reputation system uses post-quantum primitives, it is
unfortunately not a “post-quantum” anonymous reputation system. While we will
avoid a detailed discussion of this subtlety in our dissertation, we hope that our work
serves as a foundation for the construction of the first truly post-quantum anonymous
reputation system.
Chapter 2
Lattices
2.1 Lattices and Their Basic Properties
We briefly review the basics of lattices and the hard problems lattice cryptography is
based upon. We begin by recalling the definition of a lattice.
Definition 1 (n-dimensional lattice $\mathcal{L}$, [13]). An n-dimensional lattice $\mathcal{L}$ is a subset of $\mathbb{R}^n$ that satisfies the following conditions:

1. $\mathcal{L}$ is an additive subgroup: $0 \in \mathcal{L}$ and $-x \in \mathcal{L}$, $x + y \in \mathcal{L}$ for all $x, y \in \mathcal{L}$

2. $\mathcal{L}$ is discrete: every $x \in \mathcal{L}$ has a neighborhood in $\mathbb{R}^n$ in which $x$ is the only lattice point

An easy example of a lattice is $\mathbb{Z}^n \subset \mathbb{R}^n$. We can equivalently talk about a lattice $\mathcal{L}$ being generated by some set of vectors $\{\vec{v}_1, \ldots, \vec{v}_n\} \subset \mathbb{R}^n$. We can express $\mathcal{L}$ as $\mathcal{L} = \{\alpha_1 \vec{v}_1 + \cdots + \alpha_n \vec{v}_n \mid \alpha_1, \ldots, \alpha_n \in \mathbb{Z}\}$. As in the case of vector spaces, a basis $B$ for $\mathcal{L}$ is any set of independent vectors that generates $\mathcal{L}$, and the dimension of $\mathcal{L}$ is the number of vectors in its basis.

Definition 2 (fundamental domain of $\mathcal{L}$, [13]). Suppose $\mathcal{L} \subset \mathbb{R}^n$ is an n-dimensional lattice with basis $B = \{\vec{v}_1, \ldots, \vec{v}_n\}$. Then the fundamental domain $\mathcal{F}$ of $\mathcal{L}$ corresponding to $B$ is:

$\mathcal{F}(\vec{v}_1, \ldots, \vec{v}_n) = \{t_1 \vec{v}_1 + \cdots + t_n \vec{v}_n \mid 0 \le t_i < 1\}$
An important result regarding fundamental domains is the following theorem:
Figure 2.1: A fundamental domain. [13]
Theorem 2.1.1. [13] Suppose $\mathcal{L} \subset \mathbb{R}^n$ is a lattice of dimension n with fundamental domain $\mathcal{F}$. Then every $\vec{w} \in \mathbb{R}^n$ can be written as:

$\vec{w} = \vec{t} + \vec{v}$

for a unique $\vec{t} \in \mathcal{F}$ and a unique $\vec{v} \in \mathcal{L}$.
That is to say, any vector in $\mathbb{R}^n$ can be written as the sum of a lattice vector and a vector from the lattice's fundamental domain. The proof is straightforward; it consists of taking an arbitrary vector $\vec{w} \in \mathbb{R}^n$, expressing it as a linear combination of $\mathcal{L}$'s basis vectors (since the lattice's basis is also a basis for $\mathbb{R}^n$), and then decomposing the corresponding coefficients into their integer and fractional parts.
It is worth noting that $\det(\mathcal{L})$ is an invariant of the lattice $\mathcal{L}$. Recall that in $\mathbb{R}^n$ the determinant of some set of vectors $\vec{v}_1, \ldots, \vec{v}_n$ can be viewed as a measure of the volume of the parallelepiped generated by $\{\vec{v}_1, \ldots, \vec{v}_n\}$. Maximum volume is achieved when all the vectors are orthogonal to one another. Then the next theorem easily follows:
Theorem 2.1.2 (Hadamard's inequality [13]). Suppose $\mathcal{L}$ is a lattice. Let $\vec{v}_1, \ldots, \vec{v}_n$ be a basis for $\mathcal{L}$ and let $\mathcal{F}$ be $\mathcal{L}$'s fundamental domain. Then:

$\det(\mathcal{L}) = \mathrm{Vol}(\mathcal{F}) \le \|\vec{v}_1\| \cdots \|\vec{v}_n\|$
If L’s basis is orthogonal then the above is an equality. Thus the above can be
seen as a measure of how “orthogonal” L’s basis is.
2.2 SVP
At the heart of lattice-based cryptography lies the Shortest Vector Problem (SVP for short). Let $\lambda_1(\mathcal{L}) := \min_{v \in \mathcal{L} \setminus \{0\}} \|v\|$. More generally, define $\lambda_i(\mathcal{L})$ to be the smallest $r$ such that there are $i$ linearly independent vectors in $\mathcal{L}$ of norm at most $r$.

Definition 3 (Shortest Vector Problem [19]). Given an arbitrary basis $B$ for a lattice $\mathcal{L}$, find a shortest non-zero lattice vector (i.e. $v \in \mathcal{L}$ for which $\|v\| = \lambda_1(\mathcal{L})$).

Note that it says a shortest non-zero lattice vector and not the shortest non-zero lattice vector. There may be multiple such vectors. We make note of some important variants of SVP below.
Definition 4 (Approximate Shortest Vector Problem SVP$_\gamma$ [19]). Given a basis $B$ for a lattice $\mathcal{L}$, find a non-zero vector $v \in \mathcal{L}$ such that $\|v\| \le \gamma(n) \cdot \lambda_1(\mathcal{L})$ (where $\gamma(n) \ge 1$ is an approximation factor taken to be a function of the lattice dimension $n$).

As the name suggests, SVP$_\gamma$ asks us to find a lattice vector that's "approximately" the shortest (lattice) vector. A variant of SVP$_\gamma$ is SIVP$_\gamma$.

Definition 5 (Approximate Shortest Independent Vector Problem SIVP$_\gamma$ [19]). Given a basis $B$ of an n-dimensional lattice $\mathcal{L}$, output a set $S \subset \mathcal{L}$ of $n$ linearly independent lattice vectors such that $\|s_i\| \le \gamma(n) \cdot \lambda_n(\mathcal{L})$ for all $s_i \in S$.
We do not discuss the Closest Vector Problem (which asks us to find a closest lattice vector to some given vector in $\mathbb{R}^n$) as (1) no cryptosystem has yet been proven secure based on CVP and (2) CVP can be reduced to SVP in a slightly higher dimension [19, 13].
SVP is a hard problem as the dimension $n$ of the lattice $\mathcal{L}$ increases. Solving SVP often comes down to having a "good" basis. In this context, a good basis is one with short and fairly orthogonal vectors. If we were working in $\mathbb{R}^n$, we could use the well-known Gram-Schmidt process to achieve an orthonormal basis. However, as we are working with lattices, we cannot use the Gram-Schmidt process since it's highly unlikely that the output will still be vectors in the lattice. Thus, our goal is to find a basis with vectors as short as possible and as orthogonal as possible that are still in the lattice.

Figure 2.2: An example of a good vs. bad basis. [13]
2.3 Finding a Good Basis
Since we cannot use the Gram-Schmidt process as is, some changes are made to it to
get the Lenstra-Lenstra-Lovasz (LLL) algorithm which outputs a “reduced” lattice
basis in polynomial time. Although the LLL-reduced basis is not the shortest possible
basis, for many applications it’s “good enough.”
2.3.1 In 2 Dimensions
We begin by discussing the two dimensional case which is attributed to Gauss as the
LLL algorithm is merely a generalization of Gauss’ method [13].
We start with a lattice $\mathcal{L} \subset \mathbb{R}^2$ with basis $B = \{\vec{v}_1, \vec{v}_2\}$. Suppose, without loss of generality, that $\|v_1\| < \|v_2\|$. If we were using Gram-Schmidt, we would then replace $v_2$ with $v_2^*$ where:

$v_2^* = v_2 - \frac{v_1 \cdot v_2}{\|v_1\|^2} v_1$

However, it's very unlikely that $v_2^*$ will ever be in $\mathcal{L}$, so instead we replace $v_2$ with:

$v_2^* = v_2 - \left\lfloor \frac{v_1 \cdot v_2}{\|v_1\|^2} \right\rceil v_1$

where we've forced the coefficient to be an integer (rounded to the nearest integer) to guarantee a lattice vector. If $v_2 := v_2^*$ is still longer than $v_1$, we're done. Otherwise we swap the vectors and repeat the process.
Theorem 2.3.1. [13] The process described above terminates. When it finally terminates, the resulting vector $v_1$ is a solution to SVP and the angle $\theta$ between $v_1$ and $v_2$ satisfies $\frac{\pi}{3} \le \theta \le \frac{2\pi}{3}$.
So Gauss’ method yields both a solution to SVP and a basis that’s somewhat
short and orthogonal.
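To make the procedure concrete, we give a minimal Python sketch of Gauss' two-dimensional reduction; the function name and the toy basis are our own illustration and are not taken from [13].

# A minimal sketch of Gauss' lattice reduction in dimension 2 (illustrative only).

def gauss_reduce(v1, v2):
    """Reduce the basis {v1, v2} of a 2-dimensional lattice.

    Returns a basis whose first vector is a shortest non-zero lattice vector.
    """
    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1]

    def norm_sq(a):
        return dot(a, a)

    # Keep v1 as the shorter vector.
    if norm_sq(v1) > norm_sq(v2):
        v1, v2 = v2, v1
    while True:
        # Round the Gram-Schmidt coefficient to the nearest integer
        # so that the new vector stays in the lattice.
        m = round(dot(v1, v2) / norm_sq(v1))
        v2 = (v2[0] - m * v1[0], v2[1] - m * v1[1])
        if norm_sq(v2) >= norm_sq(v1):
            return v1, v2          # v1 is now a shortest non-zero lattice vector
        v1, v2 = v2, v1            # otherwise swap and repeat


print(gauss_reduce((1, 2), (2, 3)))   # e.g. ((0, -1), (1, 0)): both vectors have norm 1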
2.3.2 LLL Algorithm
The LLL algorithm generalizes Gauss' lattice reduction process for 2-dimensional lattices to lattices of arbitrary dimension. The output of the LLL algorithm is an LLL-reduced basis, which is defined below.
Definition 6 (LLL-reduced basis [13, 15]). Let $B = \{v_1, \ldots, v_n\}$ be a basis for a lattice $\mathcal{L}$ and let $B^* = \{v_1^*, \ldots, v_n^*\}$ be the corresponding Gram-Schmidt orthogonal basis. We say $B$ is LLL-reduced if it satisfies the following conditions:

Size Condition: $|\mu_{i,j}| = \frac{|v_i \cdot v_j^*|}{\|v_j^*\|^2} \le \frac{1}{2}$ (for all $i, j$ such that $1 \le j < i \le n$)

Lovasz Condition: $\|v_i^*\|^2 \ge \left(\frac{3}{4} - \mu_{i,i-1}^2\right) \|v_{i-1}^*\|^2$ (for all $i$ such that $1 < i \le n$)
Notice that the ordering of the basis vectors matters as LLL will output a list of
“short” vectors in increasing order of length. We will not state the LLL algorithm in
full detail. However, the main idea is that we first produce a basis satisfying the size
condition above. We then check if the Lovasz condition is also satisfied. If it is, we’re
done. If not, we reorder the vectors and attempt to further reduce the size.
The $\frac{3}{4}$ factor from the Lovasz condition is worth explaining. Technically speaking, any factor $< 1$ can be used (if 1 is used then there is no guarantee that the LLL algorithm terminates in polynomial time) [13]. The closer the value is to 1, the "better" the basis will be.
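For the interested reader, the following is a naive Python sketch of the textbook LLL procedure (repeated size reduction followed by the Lovasz test with factor 3/4). It uses floating-point Gram-Schmidt for simplicity and assumes the input vectors are linearly independent; it is an illustration of the idea above, not a production implementation, and the function names are our own.

# A naive floating-point sketch of the LLL algorithm with Lovasz factor delta = 3/4.

def lll_reduce(basis, delta=0.75):
    """Return an (approximately) LLL-reduced basis; `basis` is a list of integer vectors."""
    b = [list(v) for v in basis]
    n = len(b)

    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))

    def gram_schmidt():
        # Returns the Gram-Schmidt vectors b* and the coefficients mu[i][j].
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = [float(x) for x in b[i]]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gram_schmidt()
        # Size reduction of b_k against the earlier basis vectors.
        for j in range(k - 1, -1, -1):
            q = round(mu[k][j])
            if q != 0:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        # Lovasz condition: either advance k or swap and step back.
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1
        else:
            b[k], b[k - 1] = b[k - 1], b[k]
            k = max(k - 1, 1)
    return b


print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))   # a basis of short, nearly orthogonal vectors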
2.4 Resistance to Quantum Attacks
Lattice cryptography has become increasingly popular as it is resistant to quantum
attacks unlike elliptic curve cryptography. Almost all difficult problems in lattice
cryptography can be reduced to solving SVP (along with its associated variants).
Although quantum algorithms are not the focus of our dissertation, we will just
briefly mention why current quantum polynomial time algorithms do not work for
solving SVP.
Most cryptography in use today is based on the hardness of factoring or the
discrete log problem. Both of these problems can be solved in polynomial time on
a quantum computer via Shor's algorithm [23]. The idea behind Shor's algorithm is that it reduces factoring and the discrete log problem to finding the period of a function, which can be done relatively quickly on a quantum computer via the quantum Fourier transform. In this regard we need to define the following:
Definition 7 (Hidden Subgroup Problem). Let $G$ be a group with subgroup $H$. Let $X$ be a finite set. We say a function $f : G \to X$ hides $H$ if the function $f$ is constant on the cosets of $H$ but takes different values on different cosets of $H$. This function $f$ is given via an oracle. Using information obtained from calls to this oracle, find a generating set for $H$.
Shor’s algorithm essentially solves the Hidden Subgroup Problem for finite abelian
groups. However, to find an efficient quantum algorithm solving SVP, we need to solve
the Hidden Subgroup problem for the dihedral group. Thus, a new approach is needed
if we want to find a polynomial time quantum algorithm solving SVP. So far there has
been little success in achieving this. That is not to say that lattice cryptography is guaranteed to remain resistant to quantum attacks in 5 or 10 years. However, for the time being, lattice cryptography appears to be a viable choice for the post-quantum age.
2.5 Average vs. Worst Case Hardness
Lattice cryptography is also appealing due to the connection between worst and av-
erage case hardness for lattice problems. In 1996, Ajtai showed that certain problems
are hard on average if some related lattice problems are hard in the worst case [1, 19].
2.6 SIS
The Short Integer Solution (SIS) problem was first introduced by Ajtai in 1996 [1].
We define it here as it serves as the foundation of the lattice-based collision-resistant hash functions that we will use later on.
Definition 8 (SIS$_{n,q,\beta,m}$ [19]). Given $m$ uniformly random vectors $a_i \in \mathbb{Z}_q^n$ forming the columns of a matrix $A \in \mathbb{Z}_q^{n \times m}$, find a vector $z \in \mathbb{Z}^m$ ($z \neq 0$) with norm $\|z\| \le \beta$ such that:

$Az = 0 \in \mathbb{Z}_q^n$
Notice that SIS$_{n,q,\beta,m}$ becomes easier when the number of columns $m$ increases, but it becomes harder as the number of entries $n$ in $a_i$ increases [19]. Some care must be taken when choosing the parameters of the SIS problem. In particular, $\beta$ and $m$ must be taken large enough to ensure that a solution even exists.
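As a toy illustration of the SIS relation (entirely our own; the parameters are deliberately tiny, so the instance is trivially easy), the Python snippet below samples a random $A$ and brute-forces a short non-zero $z$ with $Az = 0 \pmod q$. Since $2^m > q^n$ for these parameters, a solution with entries in $\{-1, 0, 1\}$ is guaranteed to exist by a pigeonhole argument.

# Toy SIS instance: find short non-zero z in {-1,0,1}^m with A z = 0 (mod q).
import itertools, random

n, m, q = 2, 8, 13
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]

def is_sis_solution(z):
    # z must be non-zero and lie in the kernel of A modulo q.
    return any(z) and all(sum(A[i][j] * z[j] for j in range(m)) % q == 0 for i in range(n))

solution = next(z for z in itertools.product((-1, 0, 1), repeat=m) if is_sis_solution(z))
print(solution)  # a vector of infinity-norm 1 satisfying A z = 0 (mod q)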
We state without proof the following theorem that explains the connection between
SIS and variants of SVP.
Theorem 2.6.1. [19] For any $m = \mathrm{poly}(n)$, any $\beta > 0$ and any sufficiently large $q \ge \beta \cdot \mathrm{poly}(n)$, solving SIS$_{n,q,\beta,m}$ with non-negligible probability is at least as hard as solving SIVP$_\gamma$ on an arbitrary n-dimensional lattice with overwhelming probability, for some $\gamma = \beta \cdot \mathrm{poly}(n)$.
2.7 LWE
Learning with Errors (LWE) was first introduced by Regev in 2005. There are two settings of Learning with Errors, namely the ring setting and the non-ring ($\mathbb{R}^n$) setting. We will only be looking at the non-ring setting in our schemes, so we provide the appropriate definitions below.

LWE has the following parameters: $n, q \in \mathbb{Z}^+$ and an error distribution $\chi$ over $\mathbb{Z}$. Generally $\chi$ is the discrete Gaussian with width $\alpha q$ for $\alpha < 1$.
Definition 9 (Learning with Errors Distribution [19]). Let $\vec{s} \in \mathbb{Z}_q^n$ be the secret. The LWE distribution $A_{s,\chi}$ over $\mathbb{Z}_q^n \times \mathbb{Z}_q$ is sampled by choosing $\vec{a} \in \mathbb{Z}_q^n$ uniformly at random, error $e \leftarrow \chi$, and outputting $(\vec{a}, b = \langle \vec{s}, \vec{a} \rangle + e \pmod q)$.
There are two LWE problems, namely Search LWE and Decision LWE.

Definition 10 (Search LWE [19]). Given $m$ independent samples $(\vec{a}_i, b_i) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ drawn from $A_{s,\chi}$ for uniformly random $\vec{s} \in \mathbb{Z}_q^n$ (where $\vec{s}$ is fixed for all samples), find $\vec{s}$.

Definition 11 (Decision LWE [19]). Given $m$ independent samples $(a_i, b_i) \in \mathbb{Z}_q^n \times \mathbb{Z}_q$ where every sample is distributed according to either (1) $A_{s,\chi}$ for $s \in \mathbb{Z}_q^n$ chosen uniformly at random but fixed for all samples or (2) the uniform distribution, distinguish which is the case.
Intuitively speaking, Search LWE asks us to recover the secret ~s given some LWE
samples whereas Decision LWE asks us to tell apart LWE samples from uniformly
random ones.
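The following small Python sketch (toy parameters of our own choosing, with a crude bounded error standing in for the discrete Gaussian $\chi$) makes the two problems concrete by generating LWE samples and uniform samples side by side.

# Toy LWE sampler: pairs (a, b = <s, a> + e mod q) vs. uniformly random pairs.
import random

n, q = 8, 97
s = [random.randrange(q) for _ in range(n)]          # the LWE secret

def lwe_sample():
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-2, -1, 0, 1, 2])             # crude stand-in for a discrete Gaussian
    b = (sum(si * ai for si, ai in zip(s, a)) + e) % q
    return a, b                                       # Search-LWE: recover s from many such pairs

def uniform_sample():
    return [random.randrange(q) for _ in range(n)], random.randrange(q)
    # Decision-LWE: distinguish lwe_sample() outputs from uniform_sample() outputs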
We state without proof the following theorem due to Regev that explains the
connection between LWE and variants of SVP.
Theorem 2.7.1. [19, 21] For any $m = \mathrm{poly}(n)$, any modulus $q \le 2^{\mathrm{poly}(n)}$, and any discretized Gaussian error distribution $\chi$ of parameter $\alpha q \ge 2\sqrt{n}$ where $0 < \alpha < 1$, solving Decision-LWE$_{n,q,\chi,m}$ is at least as hard as quantumly solving SIVP$_\gamma$ on arbitrary n-dimensional lattices for some $\gamma = \tilde{O}(n/\alpha)$.
As we have previously mentioned, there are currently no efficient quantum al-
gorithms solving SVP and its variants [19]. There is an important variant of LWE
that we must mention (as we will make use of it in Chapter 6 in constructing our
reputation system).
LWE with Binary Secret. Suppose instead that the secret vector $s$ was chosen from $\{0,1\}^n$ rather than $\mathbb{Z}_q^n$. Is Learning with Errors still "hard"? The answer (thankfully) is yes. However, as $s$ is being chosen from a significantly smaller space, we naturally expect that LWE remains difficult only if the dimension $n$ is appropriately increased. Namely, Binary-LWE is "hard" if $n$ is increased to approximately $O(n \log_2(n))$ [2].
2.8 Regev’s LWE Cryptosystem
As we will need Regev’s LWE cryptosystem in Section 4.6 and Chapter 6, we recall
the scheme here [21]. Regev’s cryptosystem was the first public key cryptosystem to
be based on the hardness of the LWE problem.
Public Parameters. Dimensions $m, n$, modulus $q$, noise parameter $\alpha > 0$.

Key Generation. The secret key is the LWE secret $s \in \mathbb{Z}_q^n$ chosen uniformly at random. The public key consists of the $m$ samples $(a_i, b_i)_{i=1}^m$ sampled from the LWE distribution with secret $s$, modulus $q$, and error parameter $\alpha$.

Encryption. The message will be encrypted bit by bit. Choose a random subset $S$ from all possible subsets of $[m]$. If the bit is 0, then the encryption will be $(\sum_{i \in S} a_i, \sum_{i \in S} b_i)$. If the bit is 1, then the encryption will be $(\sum_{i \in S} a_i, \lfloor \frac{q}{2} \rfloor + \sum_{i \in S} b_i)$.

Decryption. To decrypt the pair $(a, b)$, compute $b - \langle a, s \rangle$. If $b - \langle a, s \rangle$ is closer to 0 than to $\lfloor \frac{q}{2} \rfloor$, then $(a, b)$ decrypts to 0. Otherwise $(a, b)$ decrypts to 1.
We note for future reference that this cryptosystem is CPA-secure (and not CCA-secure) [21].
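Below is a toy Python sketch of the scheme, with parameters and error bounds of our own choosing (a crude bounded error replaces the discrete Gaussian); it is meant only to illustrate the bit-by-bit encryption and the decryption-by-rounding step.

# A toy sketch of Regev's LWE encryption (single-bit messages, illustrative parameters only).
import random

n, m, q = 16, 200, 3001

def keygen():
    s = [random.randrange(q) for _ in range(n)]
    pk = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-3, 3)                     # stand-in for Gaussian error of width alpha*q
        b = (sum(x * y for x, y in zip(a, s)) + e) % q
        pk.append((a, b))
    return pk, s

def encrypt(pk, bit):
    S = [i for i in range(m) if random.random() < 0.5]      # random subset of [m]
    a = [sum(pk[i][0][j] for i in S) % q for j in range(n)]
    b = (sum(pk[i][1] for i in S) + (q // 2) * bit) % q
    return a, b

def decrypt(sk, ct):
    a, b = ct
    d = (b - sum(x * y for x, y in zip(a, sk))) % q
    # Decode 0 if d is closer to 0 (mod q) than to q/2, otherwise 1.
    return 0 if min(d, q - d) < abs(d - q // 2) else 1

pk, sk = keygen()
assert all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1, 1, 0))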
Chapter 3
Zero-Knowledge Proofs
Zero-knowledge proofs are important to understand as they will be a crucial compo-
nent of group signatures. Thus we briefly review the basics of zero-knowledge proofs.
In particular, we’ll look at Stern-like protocols for lattices which will be extensively
used throughout Section 4.3 and Chapter 6.
3.1 Foundations of Zero-Knowledge Proofs
A zero-knowledge proof is a protocol between two parties, namely a Prover and a
Verifier, in which the Prover wants to convince the Verifier of some (secret) fact. How-
ever, the Prover wants the Verifier to learn nothing from his “proof” (hence the term
"zero-knowledge"). The Prover will be unbounded whereas the Verifier is polynomially bounded. Their interaction will take place in 3 stages: a commitment stage in which the Prover $P$ commits to a choice, a challenge stage in which the Verifier $V$ challenges $P$ to demonstrate $P$'s secret knowledge, and a response stage in which $P$ responds to $V$'s challenge.

Although $P$ may successfully trick $V$ into believing he possesses some "secret" knowledge in a single interaction even if he doesn't, $P$ has a small probability of successfully doing so if he is challenged a "large" number of times on different commitment values. Thus, the protocol will be repeated enough times that $V$ will be incorrectly convinced with only a small probability.
A zero-knowledge proof must satisfy three requirements: soundness, completeness, and zero-knowledge [24].

1. Soundness. If $P$ doesn't actually know the secret being proved, then there should be only a small probability of $V$ accepting $P$'s proof.

2. Completeness. If $P$ does know the secret being proved, then $V$ should always accept $P$'s proof.

3. Zero-Knowledge. $V$ could have written a valid protocol transcript without ever having even interacted with $P$. This transcript is referred to as a simulation. A proof has "perfect zero knowledge" if a computationally unbounded adversary $A$ cannot tell apart the set of valid transcripts (i.e. transcripts produced from actually running the protocol) from those that are simulated. If $A$ is computationally bounded, then the proof will have "computational zero knowledge."
We will now provide formal definitions of the above.
Definition 12 (Interactive Proof System [10]). An interactive proof system for a language $L$ is a pair of interactive Turing machines, $(P, V)$, such that $V$ is expected polynomial time and the following conditions hold:

Soundness. For every constant $c > 0$, every interactive Turing machine $P^*$, and all sufficiently long $x \notin L$,

$\Pr([P^*(x), V(x)] = 0) \ge 1 - |x|^{-c}$

Completeness. For every constant $c > 0$ and all sufficiently long $x \in L$,

$\Pr([P(x), V(x)] = 1) \ge 1 - |x|^{-c}$

Definition 13 (Zero-Knowledge Interactive Proof System [10]). Let $(P, V)$ be an interactive proof system for a language $L$. We will say that the proof system $(P, V)$ is zero-knowledge if for every expected polynomial-time interactive Turing machine $V^*$, there exists an expected polynomial-time machine $M_{V^*}$ such that the probability ensembles $\{M_{V^*}(x)\}_{x \in L}$ and $\{[P(x), V^*(x)]\}_{x \in L}$ are polynomially indistinguishable. We refer to $M_{V^*}$ as the simulator of $V^*$. Although this simulator $M_{V^*}$ has the same output distribution (more or less) as $V^*$, it produces this distribution without ever interacting with the Prover.
We state without proof one of the most celebrated results of zero-knowledge proofs
from Goldreich, Micali, and Wigderson.
Theorem 3.1.1. [10] We can construct a zero-knowledge proof for every NP-statement.
3.2 Sigma Protocols
If we assume our Verifier $V$ is honest (i.e. he responds truly randomly to the commitments), our 3-round zero-knowledge protocol is called a Sigma protocol. A Sigma protocol has special soundness: given two protocol runs with the same commitment value but different challenges, we can recover $P$'s secret.
Claim. Special soundness implies soundness.
We now provide an example of a Sigma protocol, namely Schnorr's Identification Protocol, which is based on the discrete log problem (with which we assume the reader has familiarity).
Schnorr's Identification Protocol. [22]

Setup. Suppose we have a finite abelian group $G$ of prime order $q$ with generator $g$. The Prover's secret will be the discrete log $x$ of $y$ with respect to $g$ (where $g^x = y$).

Commitment. The Prover picks a random exponent $k$, computes $g^k = r$, and sends $r$ to the Verifier.

Challenge. The Verifier sends a challenge exponent $e$ to the Prover.

Response. The Prover computes $s = k + xe \pmod q$ and sends $s$ to the Verifier.

Verification. The Verifier now computes $g^s y^{-e}$ and makes sure that $g^s y^{-e} = r$. If the equality holds, then the Verifier accepts the proof.
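A small Python sketch of one run of this protocol is given below, using toy parameters of our own choosing (a subgroup of order q = 11 in Z_23^*); it also includes the extractor used in the special-soundness argument that follows.

# Toy Schnorr identification: subgroup of order q = 11 in Z_23^*, generator g = 4.
import random

p, q, g = 23, 11, 4            # 4 has multiplicative order 11 modulo 23

x = random.randrange(1, q)     # Prover's secret
y = pow(g, x, p)               # public value with g^x = y

# Commitment
k = random.randrange(1, q)
r = pow(g, k, p)
# Challenge
e = random.randrange(q)
# Response
s = (k + x * e) % q
# Verification: g^s * y^{-e} == r  (negative exponent with modulus needs Python 3.8+)
assert (pow(g, s, p) * pow(y, -e, p)) % p == r

def extract(s1, e1, s2, e2):
    """Special soundness: recover x from two accepting transcripts sharing r (e1 != e2 mod q)."""
    return ((s1 - s2) * pow((e1 - e2) % q, -1, q)) % q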
Next, we verify that this protocol satisfies special soundness, completeness, and zero-
knowledge. Notice that special soundness implies soundness and argument of knowl-
edge.
Completeness. Notice that if the Prover knows $x$ then:

$g^s y^{-e} = g^{k+xe} g^{-xe} = g^k = r$

Thus, the Verifier always accepts the proof.
Special Soundness. Suppose we've run the protocol twice with the same commitment $r$ but different challenge exponents $e$ and $e'$. Since $r = g^s y^{-e}$ and $r = g^{s'} y^{-e'}$, that means $g^s y^{-e} = g^{s'} y^{-e'}$, which gives:

$g^s g^{-xe} = g^{s'} g^{-xe'}$

$s - xe = s' - xe' \pmod q$

$x = \frac{s - s'}{e - e'} \pmod q$

Thus, we have recovered the Prover's secret $x$.
Zero-Knowledge. Consider the following simulation. Pick random exponents $e, s \pmod q$. Compute $g^s y^{-e}$ and set $r = g^s y^{-e}$. Output the transcript:

Commitment. $P \to V$: $r$

Challenge. $V \to P$: $e$

Response. $P \to V$: $s$

This transcript is indistinguishable from one generated by an actual interaction between a Prover and Verifier. Thus our protocol is zero knowledge.
3.3 Fiat-Shamir Heuristic
Our presentation of zero-knowledge proofs so far has consisted of a 3-round interac-
tion between the Prover and Verifier. Suppose we wish to make our protocol non-
interactive. This is done by setting the challenge to be the hash of the commitment
(i.e. challenge = H(commitment) for some cryptographic hash function H) [9]. Since
hash functions are, by definition, “hard” to invert, this prevents the Prover from fixing
a challenge.
Warning. We must restrict our Prover to being computationally bounded otherwise
the Prover could “cheat” by inverting the hash function. Instead of a “zero knowledge
proof,” we say we have a zero-knowledge argument.
Now suppose instead of just hashing the commitment, we also include a message. That is to say, we take challenge = $H(\text{commitment} \,\|\, \text{message})$. This process of turning a Sigma protocol into a signature scheme is referred to as the Fiat-Shamir heuristic [9].
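Continuing the toy Schnorr example from Section 3.2, the following Python sketch (the helper names and parameters are our own) derives the challenge by hashing the commitment together with the message, yielding a signature scheme.

# Fiat-Shamir: derive the challenge as a hash of (commitment, message).
import hashlib, random

p, q, g = 23, 11, 4
x = random.randrange(1, q); y = pow(g, x, p)

def sign(message):
    k = random.randrange(1, q)
    r = pow(g, k, p)
    e = int.from_bytes(hashlib.sha256(f"{r}|{message}".encode()).digest(), "big") % q
    return r, (k + x * e) % q                 # signature (r, s)

def verify(message, sig):
    r, s = sig
    e = int.from_bytes(hashlib.sha256(f"{r}|{message}".encode()).digest(), "big") % q
    return (pow(g, s, p) * pow(y, -e, p)) % p == r

assert verify("hello", sign("hello"))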
3.4 Kawachi’s Stern-like Protocol for Lattices
Stern introduced a protocol for zero-knowledge identification in 1993 [26]. Although
his protocol is based on error-correcting codes, it was adapted to suit lattices. As
previously, our protocol involves two parties- a Prover, who wants to identify himself,
and a Verifier. We introduce Kawachi’s Stern-like protocol for lattice-based non-
interactive zero knowledge proofs which will be used extensively in our reputation
system [14].
Let $A$ be a random matrix such that $A \in \mathbb{Z}_q^{n \times m}$. The Prover has secret key $\vec{x}$, which is a random vector such that $\vec{y} = A\vec{x} \pmod q$. The common input will be the matrix $A$ and the vector $\vec{y}$.

Commitment. $P$ chooses a random permutation $\pi$ over $[m]$ and a random vector $\vec{r} \in \mathbb{Z}_q^m$. He also samples randomness $\rho_1, \rho_2, \rho_3$. He sends CMT $= (c_1, c_2, c_3)$ where:

$c_1 = \mathrm{COM}(\pi, A\vec{r}; \rho_1)$
$c_2 = \mathrm{COM}(\pi(\vec{r}); \rho_2)$
$c_3 = \mathrm{COM}(\pi(\vec{x} + \vec{r}); \rho_3)$

Challenge. $V$ sends a random challenge Ch $\in \{1, 2, 3\}$ to $P$.

Response. Depending on the challenge, $P$ responds with:

if Ch = 1, then RSP $= (\vec{s}, \vec{t}; \rho_2, \rho_3)$ where $\vec{s} = \pi(\vec{x})$, $\vec{t} = \pi(\vec{r})$
if Ch = 2, then RSP $= (\phi, \vec{u}; \rho_1, \rho_3)$ where $\phi = \pi$, $\vec{u} = \vec{x} + \vec{r}$
if Ch = 3, then RSP $= (\psi, \vec{v}; \rho_1, \rho_2)$ where $\psi = \pi$, $\vec{v} = \vec{r}$

Verification. Depending on the challenge chosen, $V$ will do the following:

if Ch = 1, check $c_2 = \mathrm{COM}(\vec{t}; \rho_2)$, $c_3 = \mathrm{COM}(\vec{s} + \vec{t}; \rho_3)$, and that $\vec{s}$ satisfies the properties $\vec{x}$ must satisfy
if Ch = 2, check $c_1 = \mathrm{COM}(\phi, A\vec{u} - \vec{y}; \rho_1)$, $c_3 = \mathrm{COM}(\phi(\vec{u}); \rho_3)$
if Ch = 3, check $c_1 = \mathrm{COM}(\psi, A\vec{v}; \rho_1)$, $c_2 = \mathrm{COM}(\psi(\vec{v}); \rho_2)$

$V$ accepts if and only if all conditions hold.
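To make the round structure concrete, here is a Python sketch of one round of the protocol for a binary secret $\vec{x}$, with a hash-based commitment standing in for COM and toy parameters of our own choosing (none of this is taken from [14]); the Ch = 1 branch checks that $\vec{s}$ is binary, as a stand-in for "the properties $\vec{x}$ must satisfy."

# One round of a Stern-like identification protocol (toy parameters, hash-based COM).
import hashlib, os, random

n, m, q = 4, 16, 257
A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
x = [random.randrange(2) for _ in range(m)]                     # binary secret
y = [sum(A[i][j] * x[j] for j in range(m)) % q for i in range(n)]

def COM(data, rho):
    # Hash-based commitment: statistically binding stand-in for a real COM scheme.
    return hashlib.sha256(repr(data).encode() + rho).hexdigest()

def matvec(v):
    return [sum(A[i][j] * v[j] for j in range(m)) % q for i in range(n)]

def permute(pi, v):
    return [v[pi[j]] for j in range(m)]

# Commitment
pi = list(range(m)); random.shuffle(pi)
r = [random.randrange(q) for _ in range(m)]
rho = [os.urandom(16) for _ in range(3)]
xr = [(x[j] + r[j]) % q for j in range(m)]
c1 = COM((pi, matvec(r)), rho[0])
c2 = COM(permute(pi, r), rho[1])
c3 = COM(permute(pi, xr), rho[2])

# Challenge
Ch = random.choice([1, 2, 3])

# Response and verification
if Ch == 1:
    s, t = permute(pi, x), permute(pi, r)
    assert c2 == COM(t, rho[1])
    assert c3 == COM([(s[j] + t[j]) % q for j in range(m)], rho[2])
    assert all(v in (0, 1) for v in s)          # s has the form the secret x must have
elif Ch == 2:
    phi, u = pi, xr
    Au_minus_y = [(a - b) % q for a, b in zip(matvec(u), y)]   # equals A r (mod q)
    assert c1 == COM((phi, Au_minus_y), rho[0])
    assert c3 == COM(permute(phi, u), rho[2])
else:
    psi, v = pi, r
    assert c1 == COM((psi, matvec(v)), rho[0])
    assert c2 == COM(permute(psi, v), rho[1])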
Next, we’ll show that the protocol is statistically zero-knowledge and satisfies special
soundness (thus soundness and argument of knowledge).
Theorem 3.4.1. [14] Kawachi’s protocol is statistically zero-knowledge when COM
is a statistically hiding and computationally binding string commitment scheme.
Proof [14]. Suppose we have a PPT simulator SIM interacting with a possibly cheating Verifier $\hat{V}$. Given only the public input, SIM outputs with probability negligibly close to 2/3 a simulated transcript that is statistically close to one produced by an honest Prover in a real interaction. SIM starts by choosing a random challenge $\overline{Ch} \in \{1, 2, 3\}$ as a prediction of the challenge value that $\hat{V}$ will NOT choose.

Case $\overline{Ch} = 1$: Using basic linear algebra, SIM computes $x' \in \mathbb{Z}_q^m$ such that $Ax' = y$. He then chooses a random permutation $\pi'$ over $[m]$, a random vector $r' \in \mathbb{Z}_q^m$, and random strings $\rho'_1, \rho'_2, \rho'_3$. He computes CMT $= (C'_1, C'_2, C'_3)$ and sends them to $\hat{V}$, where:

$C'_1 = \mathrm{COM}(\pi', A r'; \rho'_1)$
$C'_2 = \mathrm{COM}(\pi'(r'); \rho'_2)$
$C'_3 = \mathrm{COM}(\pi'(x' + r'); \rho'_3)$

Upon receiving a challenge Ch from $\hat{V}$, SIM responds with:

if Ch = 1: Output $\perp$ and abort
if Ch = 2: Output $(\pi', x' + r'; \rho'_1, \rho'_3)$
if Ch = 3: Output $(\pi', r'; \rho'_1, \rho'_2)$

Case $\overline{Ch} = 2$: SIM chooses a random permutation $\pi'$ over $[m]$, random vectors $r' \in \mathbb{Z}_q^m$ and $x' \in B(m, m/2)$, and random strings $\rho'_1, \rho'_2, \rho'_3$. He computes CMT $= (C'_1, C'_2, C'_3)$ and sends them to $\hat{V}$, where:

$C'_1 = \mathrm{COM}(\pi', A r'; \rho'_1)$
$C'_2 = \mathrm{COM}(\pi'(r'); \rho'_2)$
$C'_3 = \mathrm{COM}(\pi'(x' + r'); \rho'_3)$

Upon receiving a challenge Ch from $\hat{V}$, SIM responds with:

if Ch = 1: Output $(\pi'(x'), \pi'(r'); \rho'_2, \rho'_3)$
if Ch = 2: Output $\perp$ and abort
if Ch = 3: Output $(\pi', r'; \rho'_1, \rho'_2)$

Case $\overline{Ch} = 3$: SIM chooses a random permutation $\pi$ over $[m]$, random vectors $r \in \mathbb{Z}_q^m$ and $x' \in B(m, m/2)$, and random strings $\rho'_1, \rho'_2, \rho'_3$. He computes CMT $= (C'_1, C'_2, C'_3)$ and sends them to $\hat{V}$, where:

$C'_1 = \mathrm{COM}(\pi, A(x' + r) - y; \rho'_1)$
$C'_2 = \mathrm{COM}(\pi(r); \rho'_2)$
$C'_3 = \mathrm{COM}(\pi(x' + r); \rho'_3)$

Upon receiving a challenge Ch from $\hat{V}$, SIM responds with:

if Ch = 1: Output $(\pi(x'), \pi(r); \rho'_2, \rho'_3)$
if Ch = 2: Output $(\pi, x' + r; \rho'_1, \rho'_3)$
if Ch = 3: Output $\perp$ and abort

For all cases we have that the distribution of CMT and the distribution of Ch from $\hat{V}$ are statistically close to those from a real interaction (by the statistical hiding property of COM). That means the probability of the simulator SIM outputting $\perp$ is negligibly close to 1/3. If SIM doesn't halt, he will provide an accepted transcript whose distribution will be statistically close to that of a Prover in a real interaction.
Theorem 3.4.2. Kawachi’s protocol satisfies special soundness.
Proof. Suppose that RSP$_1 = (\vec{s}, \vec{t}; \rho_2, \rho_3)$, RSP$_2 = (\phi, \vec{u}; \rho_1, \rho_3)$, and RSP$_3 = (\psi, \vec{v}; \rho_1, \rho_2)$ are 3 valid responses to the same commitment CMT $= (c_1, c_2, c_3)$ w.r.t. all 3 possible values of Ch. Since the responses are valid, we know that:

$c_1 = \mathrm{COM}(\phi, A\vec{u} - \vec{y}; \rho_1) = \mathrm{COM}(\psi, A\vec{v}; \rho_1)$
$c_2 = \mathrm{COM}(\vec{t}; \rho_2) = \mathrm{COM}(\psi(\vec{v}); \rho_2)$
$c_3 = \mathrm{COM}(\vec{s} + \vec{t}; \rho_3) = \mathrm{COM}(\phi(\vec{u}); \rho_3)$

Since COM is computationally binding, we can recover the secret $\vec{x}$ by taking $\vec{u} - \vec{v} := (\vec{x} + \vec{r}) - \vec{r} = \vec{x}$.
3.5 Decomposition-Extension Technique
The Decomposition-Extension technique was introduced first in [16] and will be
needed in our reputation system to successfully incorporate the TAG scheme into
the group signatures. Thus, we provide a brief overview of the technique here as
presented in [17].
An integer $x \in [0, \beta]$ can always be written as a linear combination of $x_1, \ldots, x_p \in \{0, 1\}$ (where $p = \lfloor \log \beta \rfloor + 1$) with coefficients $\beta_1, \ldots, \beta_p$ determined as follows: $\beta_1 = \lceil \beta/2 \rceil$, $\beta_2 = \lceil (\beta - \beta_1)/2 \rceil$, $\beta_3 = \lceil (\beta - \beta_1 - \beta_2)/2 \rceil$, ..., $\beta_p = 1$. That is to say, $x = \sum_{j=1}^p \beta_j x_j$. This idea can be extended to work for a vector $x \in [-\beta, \beta]^m$. A Prover with secret vector $x \in [-\beta, \beta]^m$ decomposes his secret $x$ into a linear combination of vectors $\hat{x}_1, \ldots, \hat{x}_p \in \{-1, 0, 1\}^m$. Thus, he can write $\sum_{j=1}^p \beta_j \hat{x}_j = x$. To prove possession of $x$, $P$ can now instead argue in zero-knowledge the possession of such $\hat{x}_j$'s. He first extends each $\hat{x}_j$ to $\vec{x}_j \in B_{3m}$ where $B_{3m}$ is the set of vectors in $\{-1, 0, 1\}^{3m}$ such that exactly $m$ coordinates are equal to 0, exactly $m$ coordinates are equal to $-1$, and exactly $m$ coordinates are equal to 1. This set is chosen because for any permutation $\pi$ of $3m$ elements, we have that $\vec{x}_j \in B_{3m}$ if and only if $\pi(\vec{x}_j) \in B_{3m}$.
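A short Python sketch of the integer case (our own illustration) computes the sequence $\beta_1, \ldots, \beta_p$ and recovers the bits $x_1, \ldots, x_p$ greedily; one can check that the greedy choice always succeeds for $x \in [0, \beta]$.

# Decompose x in [0, beta] as sum_j beta_j * x_j with x_j in {0, 1}.
import math

def decomposition_sequence(beta):
    betas, remaining = [], beta
    p = int(math.log2(beta)) + 1          # p = floor(log beta) + 1
    for _ in range(p):
        betas.append(math.ceil(remaining / 2))
        remaining -= betas[-1]
    return betas                           # e.g. beta = 5 gives [3, 1, 1]

def decompose(x, betas):
    bits = []
    for b in betas:
        bits.append(1 if x >= b else 0)    # greedy choice
        x -= b * bits[-1]
    assert x == 0
    return bits

betas = decomposition_sequence(5)
assert all(sum(b * c for b, c in zip(betas, decompose(v, betas))) == v for v in range(6))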
As in Kawachi's protocol in the previous section, we suppose $P$ wants to prove knowledge of a secret vector $x \in \mathbb{Z}^m$ satisfying $\|x\| \le \beta$ such that $Ax = y \pmod q$, where $A$ and $y$ are public. For the Stern-like 3-move protocol, $P$ will show the following [17]:

1. Given a random permutation $\pi$ of $3m$ elements, $\pi(\vec{x}_j) \in B_{3m}$ for all $1 \le j \le p$. But that means $\vec{x}_j \in B_{3m}$, which implies that the corresponding $\hat{x}_j \in \{-1, 0, 1\}^m$. Thus $P$ has proven $x \in [-\beta, \beta]^m$ as claimed.

2. $A^* \sum_{j=1}^p \beta_j (\vec{x}_j + r_j) - y = A^* \sum_{j=1}^p \beta_j r_j \pmod q$ for the matrix $A^* \in \mathbb{Z}_q^{n \times 3m}$ obtained by appending $2m$ dummy (zero) columns to $A \in \mathbb{Z}_q^{n \times m}$. Let $r_1, \ldots, r_p \in \mathbb{Z}_q^{3m}$ be masking vectors for the corresponding $\vec{x}_1, \ldots, \vec{x}_p$. This implies that $Ax = A^* \sum_{j=1}^p \beta_j \vec{x}_j = y \pmod q$ as claimed.
Chapter 4
Group Signatures
The original notion of a “group signature” was provided by Chaum and Van Heyst
in 1991 [8]. In this setting, we have a group of members $G = \{m_1, \ldots, m_n\}$ and an authority. Each member $m_i$ has his own signing key with which he can sign on
behalf of the entire group. Anyone can verify (via some “public verification key”) that
the signature comes from a member of G. Naturally, we would want such a “group
signature” to satisfy the following security requirements:
1. The signer’s identity is not revealed (“anonymity”).
2. An authority possessing a tracing key is able to identify which particular group
member produced a signature (“traceability”).
In the setting where the group members are static (i.e. no new members are added
or can be removed), constructing a group signature satisfying the above security
requirements is relatively straightforward. For the interested reader, we refer him to
the seminal work “Foundations of Group Signatures” [4].
The situation becomes more complex when we consider adding or removing group
members. In the first section, we will look at partially dynamic group signatures in
which new members can be added to the group but not removed. In the following section, we will look at fully dynamic group signatures, which allow new members to be added and removed. Finally, in the last section of this chapter, we look at [18]'s
fully dynamic lattice-based group signature scheme. This group signature scheme
will serve as the backbone of our construction of an anonymous reputation system in
Chapter 6.
4.1 Partially Dynamic Group Signatures
First, we will look at “partially dynamic” group signatures in which new members can
be added to the group but not removed. We have 3 disjoint parties involved, namely
an Opener, an Issuer, and a set of Users. The Opener and Issuer are “authorities” with
their own public/private key pairs. The Opener can trace signatures (i.e. find out
which group member produced a signature) whereas the Issuer is responsible for giving
members their group signing keys and adding their information to the registration
table.
4.1.1 Syntax
In [5] we have two authorities, namely an "Issuer" and an "Opener." Subsequent literature on group signatures refers to these authorities as the "Group Manager" and "Tracing Manager" respectively. We stay true to the notation from [5] but
hope that this does not cause too much confusion later on for the reader. The scheme
is specified by a tuple of polynomial time algorithms [5]:
GS =(GKg, UKg, Join, Issue, GSig, GVf, Open, Judge)
We assume that a trusted third party performs the initial group key generation. The
“group public key” gpk is set to be the public parameters of the system along with
the public keys of the Opener and Issuer.
1. GKg$(1^\lambda) \to$ (gpk, ik, ok): On input of a security parameter $1^\lambda$, this algorithm returns the triple (gpk, ik, ok) consisting of the group public key gpk (which consists of the public keys of the Issuer and Opener along with the public parameters), the Issuer's secret key ik, and the Opener's secret key ok.

2. UKg$(1^\lambda) \to$ (upk[i], usk[i]): On input of the security parameter $1^\lambda$, the algorithm returns a public-private key pair (upk[i], usk[i]). We assume that the table upk (of users' public keys) is public.

3. $\langle$Join, Issue$\rangle$: This is an interactive protocol between a user $i$ (who has already obtained a public-private key pair) and the Issuer (i.e. the Group Manager). Both algorithms take as input an incoming message and the current state, and return an outgoing message, an updated state, along with a decision which is one of accept, reject, cont. We assume that the communication takes place over a secure channel with the user initiating the protocol. If the Issuer (i.e. Group Manager) accepts, he makes an entry for $i$, denoted by reg[i], in the registration table reg. If $i$ accepts, the final state output by Join is a private signing key gsk[i].

4. GSig(gpk, gsk[i], M) $\to \Sigma$: On input of the group public key gpk, a group signing key gsk[i], and a message M, group member $i$ can run this algorithm to get a signature $\Sigma$ on M.

5. GVf(gpk, M, $\Sigma$) $\to$ 1/0: On input of the group public key gpk, a message M, and a group signature $\Sigma$, the algorithm returns 1 if $\Sigma$ is a valid signature on M, 0 otherwise.

6. Open(gpk, ok, reg, M, $\Sigma$) $\to (i, \tau)$: On input of the group public key gpk, the Opener's (or Tracing Manager's) opening key ok, a message M, and a valid signature $\Sigma$ under gpk, the algorithm returns the identity $i$ of the group member who produced $\Sigma$ along with a proof $\tau$ of this claim. If no group member produced $\Sigma$ then the algorithm returns $(0, \tau)$.

7. Judge(gpk, j, upk[j], M, $\Sigma$, $\tau$) $\to$ 1/0: On input of the group public key gpk, an integer $j \ge 1$, a user public key upk[j], a valid signature $\Sigma$ on message M, and a proof string $\tau$, the algorithm returns 1 if $\tau$ is a valid proof that user $j$ produced $\Sigma$ on M, 0 otherwise.
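Purely as an illustration of this syntax (the class and method names below are our own, and the byte-string types are placeholders, not anything prescribed by [5]), the tuple of algorithms can be summarized as the following Python interface.

# An illustrative interface for the partially dynamic group signature syntax.
from typing import Protocol, Tuple

class PartiallyDynamicGS(Protocol):
    def GKg(self, security_parameter: int) -> Tuple[bytes, bytes, bytes]:
        """Return (gpk, ik, ok): group public key, Issuer key, Opener key."""
    def UKg(self, security_parameter: int) -> Tuple[bytes, bytes]:
        """Return (upk_i, usk_i) for a prospective user."""
    def Join_Issue(self, user_state: bytes, issuer_state: bytes) -> Tuple[bytes, bytes]:
        """Interactive protocol; returns (gsk_i, reg_i) when both parties accept."""
    def GSig(self, gpk: bytes, gsk_i: bytes, message: bytes) -> bytes: ...
    def GVf(self, gpk: bytes, message: bytes, signature: bytes) -> bool: ...
    def Open(self, gpk: bytes, ok: bytes, reg: dict, message: bytes,
             signature: bytes) -> Tuple[int, bytes]:
        """Return (i, tau): the signer's identity (0 if none) and a proof."""
    def Judge(self, gpk: bytes, i: int, upk_i: bytes, message: bytes,
              signature: bytes, tau: bytes) -> bool: ...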
4.1.2 Security Intuition
[5] defines 3 different levels of "trust." We say a party is "uncorrupt" if he is trusted,
“partially corrupt” if his secret key is available to the adversary but the party does
not deviate from his prescribed program, or “fully corrupt” if the party is controlled
entirely by the adversary so that he may not even follow his program. By defining the
trust levels in this way, we obtain the 3 security requirements- anonymity, traceability,
and non-frameability. We first provide informal definitions of these notions and then
proceed to give the oracles and security games in the next section.
We are able to achieve "anonymity" with a fully corrupt Issuer, "traceability" with a partially corrupt Opener, and "non-frameability" with a fully corrupt Issuer and Opener [5].
Anonymity [5]. We say that adversary $A$ wins the anonymity game if he is able to distinguish which of 2 signers (of his choice) produced a signature $\Sigma$ on a message M (also of his choice). $A$ has the ability to corrupt the Issuer, obtain the public/private key of any user, read/write/modify reg, corrupt users and interact with the Issuer on their behalf, and obtain the identity of the signer of any signature except the challenge one. We say that GS is "anonymous" if the probability of a polynomial-time adversary $A$ winning the anonymity game is negligible (in $\lambda$).

Traceability [5]. We say that adversary $A$ wins the traceability game if he is able to produce a signature $\Sigma$ such that an honest Opener is unable to identify the origin of $\Sigma$, or the Opener is unable to produce a proof $\tau$ of his claim that is accepted by the algorithm Judge. The adversary $A$ can add new group members, obtain the public/private key pair of any user, read reg, and corrupt users and interact with the Issuer on their behalf. We say that GS is "traceable" if the probability of a polynomial-time adversary $A$ winning the traceability game is negligible (in $\lambda$).

Non-frameability [5]. We say that adversary $A$ wins the non-frameability game if he is able to create a Judge-accepted proof $\tau$ that an honest user $i$ produced a valid signature $\Sigma$ even though $i$ did not produce $\Sigma$. The adversary $A$ can fully corrupt both the Opener and Issuer, obtain the signing keys of all users except the target one, and corrupt all users but the target one. We say that GS is "non-frameable" if the probability of a polynomial-time $A$ winning the non-frameability game is negligible (in $\lambda$).
4.1.3 Security Experiments
We assume in all experiments that GKg has been run on input $1^\lambda$ to obtain the keys gpk, ik, ok that the oracles will use later on. The experiment maintains the following global variables which are manipulated by the oracles: a set of honest users HU, a set of corrupted users CU, a set of message-signature pairs GSet, the table upk of the user public keys, and the registration table reg [5]. Initially, the sets HU, CU, GSet are empty and the entries in the tables upk, reg are $\perp$.
AddU($i$) allows the adversary to add $i$ to the group as an honest user. This oracle adds $i$ to HU and picks the public-private key pair (upk[i], usk[i]) for $i$. Next, it executes the Join/Issue protocol. When Issue accepts, its final state is recorded as entry reg[i] in reg. When Join accepts, its final state is recorded as the signing key gsk[i] of $i$. The adversary gets back upk[i].

CrptU($i$, upk) allows the adversary to corrupt user $i$ by setting his public key from upk[i] to the value upk chosen by the adversary. This oracle also initializes the Issuer's state in anticipation of the Join/Issue protocol with $i$.

SndToI($i$, $M_{in}$) allows the adversary to engage in the Join/Issue protocol on behalf of a corrupted user. Having corrupted user $i$, the adversary can use this oracle to engage in the group joining protocol with the honest Issue-executing Issuer. Given $i$ and a message $M_{in}$ to be sent to the Issuer, this oracle computes a response as per Issue, returns the outgoing message to the adversary, and finally sets reg[i] to Issue's final state.

SndToU($i$, $M_{in}$) allows the adversary to engage in the Join/Issue protocol on behalf of a corrupted Issuer. The adversary provides the oracle with $i$ and a message $M_{in}$ to be sent to user $i$, who is honest and executes Join. The oracle maintains user $i$'s state by choosing a public-private key pair for $i$, computing a response as per Join, returning the outgoing message to the adversary, and finally setting the private signing key of $i$ to Join's final state.

USK($i$) allows the adversary to obtain the secret keys of a user. He provides the oracle with the identity $i$ of the user. The oracle responds with gsk[i] and usk[i].

RReg($i$) allows the adversary to read the contents of the registration table reg for user $i$.

WReg($i$, $\cdot$) allows the adversary to write/modify the contents of entry $i$ of reg.

GSig($i$, M) allows the adversary to obtain a signature on message M for user $i$ (assuming user $i$ is honest and his signing key is defined).

Ch($b$, $i_0$, $i_1$, M) provides an adversary attacking anonymity with the "challenge." The adversary provides the oracle with a pair of identities $i_0$, $i_1$ and a message M. He receives a signature on M under the signing key of $i_b$. Here we require that $i_0$, $i_1$ are both honest users with well-defined signing keys. The oracle records the message-signature pair in GSet so that the adversary cannot later call the Open oracle on it.

Open(M, $\Sigma$) allows the adversary to "open" a signature. The adversary provides the oracle with a message M and a signature $\Sigma$. In response, the adversary gets the output of the Open algorithm computed under the Opener's key ok. Here we require that $\Sigma$ was not previously returned to the adversary in response to a query to Ch($b$, $\cdot$, $\cdot$, $\cdot$).
We first define correctness and then give the security requirements of anonymity,
traceability, and non-frameability as in [5].
Experiment: $\mathrm{Exp}^{\mathrm{corr}}_{GS,A}(\lambda)$

(gpk, ik, ok) $\leftarrow$ GKg($1^\lambda$); CU $\leftarrow \emptyset$; HU $\leftarrow \emptyset$
$(i, M) \leftarrow A$(gpk : AddU($\cdot$), RReg($\cdot$))
If $i \notin$ HU then return 0; If gsk[i] $= \perp$ then return 0
$\Sigma \leftarrow$ GSig(gpk, gsk[i], M); If GVf(gpk, M, $\Sigma$) = 0 then return 1
$(j, \tau) \leftarrow$ Open(gpk, ok, reg, M, $\Sigma$); If $i \neq j$ then return 1
If Judge(gpk, i, upk[i], M, $\Sigma$, $\tau$) = 0 then return 1, else return 0

Experiment: $\mathrm{Exp}^{\mathrm{anon\text{-}}b}_{GS,A}(\lambda)$ ($b \in \{0, 1\}$)

(gpk, ik, ok) $\leftarrow$ GKg($1^\lambda$); CU $\leftarrow \emptyset$; HU $\leftarrow \emptyset$; GSet $\leftarrow \emptyset$
$d \leftarrow A$(gpk, ik : Ch($b, \cdot, \cdot, \cdot$), Open($\cdot, \cdot$), SndToU($\cdot, \cdot$), WReg($\cdot, \cdot$), USK($\cdot$), CrptU($\cdot, \cdot$))
Return $d$

Experiment: $\mathrm{Exp}^{\mathrm{trace}}_{GS,A}(\lambda)$

(gpk, ik, ok) $\leftarrow$ GKg($1^\lambda$); CU $\leftarrow \emptyset$; HU $\leftarrow \emptyset$
$(M, \Sigma) \leftarrow A$(gpk, ok : SndToI($\cdot, \cdot$), AddU($\cdot$), RReg($\cdot$), USK($\cdot$), CrptU($\cdot, \cdot$))
If GVf(gpk, M, $\Sigma$) = 0 then return 0; $(i, \tau) \leftarrow$ Open(gpk, ok, reg, M, $\Sigma$)
If $i = 0$ or Judge(gpk, i, upk[i], M, $\Sigma$, $\tau$) = 0 then return 1, else return 0

Experiment: $\mathrm{Exp}^{\mathrm{non\text{-}frame}}_{GS,A}(\lambda)$

(gpk, ik, ok) $\leftarrow$ GKg($1^\lambda$); CU $\leftarrow \emptyset$; HU $\leftarrow \emptyset$
$(M, \Sigma, i, \tau) \leftarrow A$(gpk, ok, ik : SndToU($\cdot, \cdot$), WReg($\cdot, \cdot$), GSig($\cdot, \cdot$), USK($\cdot$), CrptU($\cdot, \cdot$))
If GVf(gpk, M, $\Sigma$) = 0 then return 0
If the following are all true then return 1, else return 0:
$i \in$ HU and gsk[i] $\neq \perp$ and Judge(gpk, i, upk[i], M, $\Sigma$, $\tau$) = 1
$A$ did not query USK($i$) or GSig($i$, M)
4.1.4 Primitives
To construct a partially dynamic group signature satisfying the above security re-
quirements, we need the following primitives:
Digital Signature Scheme (DS). We have a digital signature scheme DS = (Ks, Sig, Vf) consisting of algorithms for key generation Ks, signing Sig, and verification Vf. We require DS to satisfy unforgeability under chosen message attack [11].

Public Key Encryption Scheme (AE). We have a public key encryption scheme AE = (Ke, Enc, Dec) consisting of algorithms for key generation Ke, encryption Enc, and decryption Dec. We will need AE to satisfy indistinguishability under adaptive chosen-ciphertext attack (i.e. to be IND-CCA secure) [20].

Non-interactive Zero Knowledge Proofs (NIZKP). We will need non-interactive zero-knowledge proofs of membership in NP-languages that are simulation sound. Informally speaking, "simulation soundness" requires that an adversary $A$ cannot prove any false statement even after seeing simulated proofs of arbitrary statements [12].
4.1.5 Overview of the Scheme
First we specify a digital signature scheme DS=(Ks,Sig,Vf) and a public key en-
cryption scheme AE=(Ke,Enc,Dec) satisfying the conditions described in Section
4.1.4. The claim is that an anonymous, traceable, and non-frameable group signature
scheme GS can be built using these primitives [5].
Recall that GKg($1^\lambda$) = (gpk, ik, ok), where the group public key gpk consists of the security parameter $\lambda$, the public encryption key $pk_e$, and the verification key for digital signatures $pk_s$ (i.e. the "certification verification key"). The signing key $sk_s$ corresponding to $pk_s$ will be the Issuer's secret key ik, whereas the decryption key $sk_e$ corresponding to $pk_e$ will be the Opener's secret key ok (along with the random coins used to generate $(sk_e, pk_e)$). Before initiating the Join protocol, user $i$ generates a signing key $sk_i$ and a verification key $pk_i$. He will use usk[i] to produce a signature $sig_i$ on $pk_i$ (this act prevents him from possibly being framed by a corrupt Issuer). He then sends $(pk_i, sig_i)$ to the Issuer. If the Issuer accepts $i$'s request to join the group, he signs the user's $pk_i$ using his $sk_s$. The Issuer then stores $(pk_i, sig_i)$ in the user registration table reg. When $i$ wishes to sign a message M, he will use $sk_i$. However, since $i$ wishes to be "anonymous" upon signature verification, he encrypts his $pk_i$ under $pk_e$ and proves in zero-knowledge that verification will succeed with respect to his $pk_i$. To ensure that non-group members can't simply generate their own $(pk_i, sig_i)$ to sign messages, user $i$ also encrypts his identity $i$ and certificate $cert_i$, and then proves in zero-knowledge that this certificate is a signature of $(i, pk_i)$ under $pk_s$. Signature verification is simply a matter of verifying non-interactive zero-knowledge proofs. The Opener is able to identify the signature's signer since he has the decryption key $sk_e$.
The following lemmas from [5] explain the relationship between the security require-
ments and the primitives of the scheme.
Lemma 4.1.1. [5] If AE is IND-CCA secure, $(P_1, V_1)$ is a simulation-sound, computational zero-knowledge proof system for $\rho_1$ over $\mathrm{Dom}_1$, and $(P_2, V_2)$ is a computational zero-knowledge proof system for $\rho_2$ over $\mathrm{Dom}_2$, then GS is "anonymous."

Lemma 4.1.2. [5] If DS is secure against forgery under chosen-message attack, $(P_1, V_1)$ is a sound non-interactive proof system for $\rho_1$ over $\mathrm{Dom}_1$ and $(P_2, V_2)$ is a sound non-interactive proof system for $\rho_2$ over $\mathrm{Dom}_2$, then GS is "traceable."

Lemma 4.1.3. [5] If DS is secure against forgery under chosen-message attack, $(P_1, V_1)$ is a sound non-interactive proof system for $\rho_1$ over $\mathrm{Dom}_1$ and $(P_2, V_2)$ is a sound non-interactive proof system for $\rho_2$ over $\mathrm{Dom}_2$, then GS is "non-frameable."
4.2 Fully Dynamic Group Signatures
[7] extends the results of [5] in the previous section to obtain a fully dynamic group signature scheme supporting revocation. To accomplish this, they make use of a "time" component that records when users joined the group or were revoked. They also strengthen the security requirements of Bellare et al. [5] by adding the notion of "tracing soundness." As in the previous section we have a Tracing Manager, a Group Manager, and a set of Users. Here, we refer to the Issuer as the Group Manager since this authority is also responsible for publishing the group information info$_\tau$ corresponding to epoch $\tau$. Here info$_\tau$ may include new members added at epoch $\tau$, members revoked at $\tau$, etc. We also use the notation of the paper and refer to the Opener as the Tracing Manager. Naturally the epochs should preserve the order in which their information was published (i.e. if $\tau_1 < \tau_2$ then info$_{\tau_1}$ precedes info$_{\tau_2}$) [7].
4.2.1 Syntax
Our scheme is specified by the following tuple FDGS = (GSetup, GKgen$_{GM}$, GKgen$_{TM}$, UKGen, Join, Issue, UpdateGroup, Sign, Verify, Trace, Judge) of polynomial-time algorithms [7]. The main difference between the syntax here and the syntax in Section 4.1 is the addition of the time component $\tau$ and the UpdateGroup algorithm. We use GM to refer to the Group Manager and TM to refer to the Tracing Manager.

1. GSetup($1^\lambda$) $\to$ pp: On input of a security parameter $1^\lambda$, the setup algorithm outputs public parameters pp and initializes the registration table reg.

2. $\langle$GKgen$_{GM}$(pp), GKgen$_{TM}$(pp)$\rangle$: This is an interactive protocol between the algorithms GKGen$_{GM}$ and GKGen$_{TM}$ run by the GM and TM respectively to generate their keys as well as the group public key. Both algorithms take as input the public parameters pp. If the protocol is successful, GKGen$_{GM}$ has a private output of the GM's secret key msk with public output of the GM's public key mpk along with the initial group info info. GKgen$_{TM}$ has private output of the TM's secret key tsk and public output of the TM's public key tpk. The group public key is gpk := (pp, mpk, tpk).

3. UKgen($1^\lambda$) $\to$ (upk[uid], usk[uid]): On input of a security parameter $1^\lambda$, the algorithm outputs a public-secret key pair (upk[uid], usk[uid]) for user uid. We assume that the public key table upk is publicly available so that anyone can get authentic copies of it.

4. $\langle$Join(info$_\tau$, gpk, uid, usk[uid]), Issue(info$_\tau$, msk, uid, upk[uid])$\rangle$: This is an interactive protocol between a user uid (who has already obtained a public-secret key pair (upk[uid], usk[uid])) and the GM. The Join algorithm takes as input the group's info info$_\tau$, the group's public key gpk = (mpk, tpk, pp), the user's identity uid, and the user's secret key usk[uid]. The Issue algorithm takes as input the group's info info$_\tau$, the GM's secret key msk, the identity of the user uid trying to join, and uid's public key upk[uid]. Upon successful completion, uid becomes a member of the group. The final state of the Issue algorithm is stored in the user registration table reg at index uid. The final state of the Join algorithm is stored in gsk[uid]. The epoch $\tau$ is an output for both parties.

We again assume that this protocol takes place over a secure channel where the user uid initiates the protocol by calling Join. The GM may update the system information after running this protocol. The user registration table reg will store additional info used by the GM and TM.

5. UpdateGroup(gpk, msk, info$_{\tau_{\mathrm{current}}}$, $\mathcal{S}$, reg) $\to$ (info$_{\tau_{\mathrm{new}}}$, reg): On input of the GM's secret key msk, a list of active users to be revoked $\mathcal{S}$, and the current group info info$_{\tau_{\mathrm{current}}}$, the algorithm outputs new group info info$_{\tau_{\mathrm{new}}}$ while possibly updating the registration table reg. If no changes have been made to the group, the algorithm outputs $\perp$. The algorithm aborts if any uid $\in \mathcal{S}$ has not run the Join/Issue protocol. This algorithm is run by the GM to update the group info while advancing the epoch.

6. Sign(gpk, gsk[uid], info$_\tau$, M) $\to \Sigma$: On input of the group's public key gpk = (mpk, tpk, pp), user uid's signing key gsk[uid], the group info info$_\tau$ at epoch $\tau$, and a message M, the algorithm outputs a group signature $\Sigma$ on M by the group member uid. If the user owning gsk[uid] is not an active member of the group at epoch $\tau$, the algorithm outputs $\perp$.

7. Verify(gpk, info$_\tau$, M, $\Sigma$) $\to$ 1/0: On input of the group's public key gpk, group info info$_\tau$, a message M, and a signature $\Sigma$, this deterministic algorithm outputs 1 if $\Sigma$ is a valid group signature on M at epoch $\tau$, 0 otherwise.

8. Trace(gpk, tsk, info$_\tau$, reg, M, $\Sigma$) $\to$ (uid, $\Pi_{\mathrm{trace}}$): On input of the group's public key gpk, the TM's secret key tsk, the group info info$_\tau$, the registration table reg, a message M, and a signature $\Sigma$, the algorithm outputs the identity of the user uid who produced $\Sigma$ and a proof $\Pi_{\mathrm{trace}}$ that attests to this fact. If the algorithm cannot trace the signature to a particular group member, it will return (0, $\Pi_{\mathrm{trace}}$).

9. Judge(gpk, uid, info$_\tau$, $\Pi_{\mathrm{trace}}$, M, $\Sigma$) $\to$ 1/0: On input of the group's public key gpk, a user's identity uid, the group info info$_\tau$, a tracing proof $\Pi_{\mathrm{trace}}$ from the Trace algorithm, along with a message M and a signature $\Sigma$, the algorithm outputs 1 if $\Pi_{\mathrm{trace}}$ is a valid proof that uid produced $\Sigma$, and 0 otherwise.

Additional Algorithms. We will need the following polynomial-time algorithm, which is only used in the security games.

IsActive(info$_\tau$, reg, uid) $\to$ 1/0: On input of the group info, the registration table, and a user's id, the algorithm outputs 1 if uid is an active member of the group at epoch $\tau$ and 0 otherwise.
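As in Section 4.1.1, the syntax can be summarized as an interface; the sketch below (again with our own placeholder names and types, not anything prescribed by [7]) highlights only what is new relative to the partially dynamic case: the epoch-indexed group information, UpdateGroup, and IsActive.

# Illustrative additions of the fully dynamic syntax over the partially dynamic one.
from typing import Protocol, Set, Tuple

class FullyDynamicGS(Protocol):
    def UpdateGroup(self, gpk: bytes, msk: bytes, info_current: bytes,
                    revoked: Set[int], reg: dict) -> Tuple[bytes, dict]:
        """GM revokes the users in `revoked` and advances the epoch, returning (info_new, reg)."""
    def Sign(self, gpk: bytes, gsk_uid: bytes, info_tau: bytes, message: bytes) -> bytes:
        """Signing is now relative to the group information of epoch tau."""
    def Trace(self, gpk: bytes, tsk: bytes, info_tau: bytes, reg: dict,
              message: bytes, signature: bytes) -> Tuple[int, bytes]: ...
    def IsActive(self, info_tau: bytes, reg: dict, uid: int) -> bool:
        """Used only in the security games."""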
4.2.2 Security Requirements
We include the same security requirements as in the partially dynamic case (namely
anonymity, traceability, non-frameability). However, we also add the notion of “trac-
ing soundness.” The time component of the scheme changes the previous definitions
slightly. We first provide informal definitions of the security requirements that the
FDGS scheme should achieve after which we provide the formal security games.
Anonymity [7]. We say that adversary $A$ wins the anonymity game if he is able to distinguish which of 2 signers (of his choice) has produced a signature $\Sigma$ on a message M (of his choice) at time $\tau$ (also of his choice). $A$ has the ability to corrupt any user, and can choose the GM's keys and the group information for epoch $\tau$. We will need both challenge users to be active members of the group at $\tau$, though. We say FDGS is "anonymous" if the probability of a PPT adversary $A$ winning the anonymity game is negligible (in $\lambda$).

Traceability [7]. We say adversary $A$ wins the traceability game if he is able to produce a signature $\Sigma$ such that either the TM is unable to identify the signer or the signer is inactive at epoch $\tau$ (of his choice). We will say $A$ also wins if the TM is unable to produce a proof $\Pi_{\mathrm{trace}}$ of the claim that is accepted by Judge. The adversary $A$ can corrupt any user and choose the keys of the TM. We say FDGS is "traceable" if the probability of a PPT adversary $A$ winning the traceability game is negligible (in $\lambda$).

Non-frameability [7]. We say that adversary $A$ wins the non-frameability game if he is able to create a signature $\Sigma$ that is attributable to an honest member who didn't actually produce it. The adversary $A$ can fully corrupt both the GM and TM. In addition, he can corrupt all users except the target one. We say FDGS is "non-frameable" if the probability of a PPT adversary $A$ winning the non-frameability game is negligible (in $\lambda$).

Tracing Soundness [7]. We say that adversary $A$ wins the tracing soundness game if he is able to produce a valid signature $\Sigma$ and valid tracing proofs $\Pi$, $\Pi'$ such that $\Sigma$ traces to two different users. The adversary $A$ is allowed to corrupt all parties (TM, GM, and the users). We say FDGS has "tracing soundness" if the probability of a PPT adversary $A$ winning the tracing soundness game is negligible (in $\lambda$).
4.2.3 Security Experiments
Before giving the security games, we must define the oracles available to the adver-
sary. We also have the following global lists that are maintained: HUL is a list of
honest users; CUL is a list of corrupt users whose personal secret keys have been cho-
sen by the adversary; BUL is a list of “bad” users whose personal and group signing
keys have been revealed to the adversary; SL is a list of signatures obtained from the
Sign oracle; CL is a list of challenge signatures obtained from the Challenge oracle [7].
AddU(uid) adds an honest user uid to the group at the current epoch.

CrptU(uid, upk) creates a new corrupt user whose public key upk[uid] is chosen by the adversary. This is called in preparation for the SndToM oracle.

SndToM(uid, M_in) is used to engage in the Join-Issue protocol with the honest, Issue-executing GM.

SndToU(uid, M_in) is used to engage in the Join-Issue protocol with an honest, Join-executing user uid on behalf of a corrupt GM.

ReadReg(uid) returns the registration info reg[uid] of user uid.

ModifyReg(uid, val) modifies the entry reg[uid], setting reg[uid] := val. For brevity we will assume ModifyReg also provides the functionality of ReadReg.

RevealU(uid) returns the personal secret key usk[uid] and the group signing key gsk[uid] of group member uid.

Sign(uid, M, τ) returns a signature on the message M by the group member uid for epoch τ, assuming the corresponding group information info_τ is defined.

Chal_b(info_τ, uid_0, uid_1, M) is a left-right oracle for defining anonymity. The adversary chooses an epoch τ, group information info_τ, two identities (uid_0, uid_1), and a message M, and receives a group signature by member uid_b (where b ∈ {0,1} is the experiment's challenge bit) for the chosen epoch. Both challenge users are required to be active members at epoch τ. The adversary can only call this oracle once.

Trace(M, Σ, info_τ) returns the identity of the signer of Σ on M w.r.t. info_τ, provided the signature was not obtained from the Chal_b oracle.

UpdateGroup(S) allows the adversary to update the group, where S is a set of active members to be removed from the group.
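As a concrete (and heavily simplified) illustration of the bookkeeping these oracles perform, the following Python sketch maintains a few of the global lists for the AddU, RevealU, Sign, and Chal_b oracles. The scheme object, the way the challenge bit is drawn, and the omission of the registration table and the Join-Issue protocol are our own simplifying assumptions, not part of [7].

```python
import secrets

class OracleState:
    """Minimal bookkeeping for the HUL/BUL/SL/CL lists and a few oracles.
    CrptU, SndToM/SndToU, ReadReg/ModifyReg and the registration table reg
    are omitted; `scheme` is an object implementing the FDGS interface."""

    def __init__(self, scheme, gpk):
        self.scheme, self.gpk = scheme, gpk
        self.keys = {}                                   # uid -> (usk, gsk)
        self.HUL, self.BUL = set(), set()
        self.SL, self.CL = [], []
        self.b = secrets.randbelow(2)                    # challenge bit of Exp^Anon-b
        self.challenged = False

    def add_u(self, uid, usk, gsk):
        """AddU (simplified): register an honest user together with its keys."""
        self.HUL.add(uid)
        self.keys[uid] = (usk, gsk)

    def reveal_u(self, uid):
        """RevealU: hand usk[uid] and gsk[uid] to the adversary; uid becomes 'bad'."""
        self.BUL.add(uid)
        return self.keys[uid]

    def sign(self, uid, info_tau, msg):
        """Sign: record the query in SL so non-frameability can exclude it later."""
        sigma = self.scheme.sign(self.gpk, self.keys[uid][1], info_tau, msg)
        self.SL.append((uid, msg, sigma, info_tau))
        return sigma

    def chal(self, info_tau, uid0, uid1, msg, reg=None):
        """Chal_b: one-time left-right oracle; both users must be active at tau."""
        assert not self.challenged
        assert all(self.scheme.is_active(info_tau, reg, u) for u in (uid0, uid1))
        self.challenged = True
        uid = (uid0, uid1)[self.b]
        sigma = self.scheme.sign(self.gpk, self.keys[uid][1], info_tau, msg)
        self.CL.append((msg, sigma, info_tau))
        return sigma
```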
Our security requirements are then given by the following games, which we have taken from [7]:
Experiment: Exp^Corr_{FDGS,A}(λ)

pp ← GSetup(1^λ); HUL := ∅
((msk, mpk, info), (tsk, tpk)) ← ⟨GKGen_GM(pp), GKGen_TM(pp)⟩
gpk := (pp, mpk, tpk)
(uid, M, τ) ← A^{AddU, ReadReg, UpdateGroup}(gpk, info)
If uid ∉ HUL or gsk[uid] = ⊥ or info_τ = ⊥ or IsActive(info_τ, reg, uid) = 0, return 0
Σ ← Sign(gpk, gsk[uid], info_τ, M)
If Verify(gpk, info_τ, M, Σ) = 0, then return 1
(uid*, Π_trace) ← Trace(gpk, tsk, info_τ, reg, M, Σ)
If uid* ≠ uid, then return 1
If Judge(gpk, uid, info_τ, Π_trace, upk[uid], M, Σ) = 0, then return 1. Else return 0.
Experiment: Exp^Anon-b_{FDGS,A}(λ)

pp ← GSetup(1^λ); HUL, CUL, BUL, SL, CL := ∅
(st_init, msk, mpk, info) ← ⟨A, GKGen_TM(pp)⟩(init: pp)
Return 0 if GKGen_TM did not accept or A's output is not well-formed
Parse the output of GKGen_TM as (tsk, tpk) and set gpk := (pp, mpk, tpk)
b* ← A^{AddU, CrptU, SndToU, RevealU, Trace, ModifyReg, Chal_b}(play: st_init, gpk)
Return b*
Experiment: Exp^NonFrame_{FDGS,A}(λ)

pp ← GSetup(1^λ); HUL, CUL, BUL, SL := ∅
(st_init, info, msk, mpk, tsk, tpk) ← A(init: pp)
Return 0 if A's output is not well-formed; otherwise set gpk := (pp, mpk, tpk)
(M, Σ, uid, Π_trace, info_τ) ← A^{CrptU, SndToU, RevealU, Sign, ModifyReg}(play: st_init, gpk)
If Verify(gpk, info_τ, M, Σ) = 0, then return 0
If Judge(gpk, uid, info_τ, Π_trace, upk[uid], M, Σ) = 0, then return 0
If uid ∉ HUL \ BUL or (uid, M, Σ, τ) ∈ SL, then return 0. Else return 1.
Experiment: Exp^Trace_{FDGS,A}(λ)

pp ← GSetup(1^λ); HUL, CUL, BUL, SL := ∅
(st_init, tsk, tpk) ← ⟨A, GKGen_GM(pp)⟩(init: pp)
Return 0 if GKGen_GM did not accept or A's output is not well-formed
Parse the output of GKGen_GM as (msk, mpk, info). Set gpk := (pp, mpk, tpk).
(M, Σ, τ) ← A^{AddU, CrptU, SndToM, RevealU, Sign, ModifyReg, UpdateGroup}(play: st_init, gpk, info)
If Verify(gpk, info_τ, M, Σ) = 0, then return 0.
(uid, Π_trace) ← Trace(gpk, tsk, info_τ, reg, M, Σ)
If IsActive(info_τ, reg, uid) = 0, then return 1.
If uid = 0 or Judge(gpk, uid, info_τ, Π_trace, upk[uid], M, Σ) = 0, then return 1.
Else return 0.
Experiment: Exp^TraceSound_{FDGS,A}(λ)

pp ← GSetup(1^λ); CUL := ∅
(st_init, info, msk, mpk, tsk, tpk) ← A(init: pp)
Return 0 if A's output is not well-formed; otherwise set gpk := (pp, mpk, tpk)
(M, Σ, {uid_i, Π_trace,i}_{i=1}^{2}, info_τ) ← A^{CrptU, ModifyReg}(play: st_init, gpk)
If there exists i ∈ {1, 2} such that Judge(gpk, uid_i, info_τ, Π_trace,i, upk[uid_i], M, Σ) = 0, then return 0.
If uid_1 = uid_2, then return 0.
Return 1.
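To make the game-style notation concrete, the correctness experiment above can be phrased as the following test harness. This is only a sketch: the gsetup/gkgen methods, the adversary object, and the `state` object standing in for the oracle bookkeeping (AddU, ReadReg, UpdateGroup) are our own assumptions layered on top of the FDGS interface sketched at the end of Section 4.2.1.

```python
def exp_corr(scheme, adversary, state, security_param):
    """Sketch of Exp^Corr; returns 1 exactly when correctness is violated.
    `state` stands in for the bookkeeping (gsk, info per epoch, reg, HUL)
    that the AddU/ReadReg/UpdateGroup oracles would maintain."""
    pp = scheme.gsetup(security_param)
    (msk, mpk, info), (tsk, tpk) = scheme.gkgen(pp)   # interactive <GKGen_GM, GKGen_TM>
    gpk = (pp, mpk, tpk)

    uid, msg, tau = adversary(gpk, info)              # adversary picks a user, message, epoch
    gsk_uid = state.gsk.get(uid)
    info_tau = state.info.get(tau)
    if uid not in state.HUL or gsk_uid is None or info_tau is None \
            or not scheme.is_active(info_tau, state.reg, uid):
        return 0

    sigma = scheme.sign(gpk, gsk_uid, info_tau, msg)
    if not scheme.verify(gpk, info_tau, msg, sigma):
        return 1                                      # honest signature rejected
    traced_uid, proof = scheme.trace(gpk, tsk, info_tau, state.reg, msg, sigma)
    if traced_uid != uid:
        return 1                                      # traced to the wrong user
    if not scheme.judge(gpk, uid, info_tau, proof, msg, sigma):
        return 1                                      # no accepting tracing proof
    return 0
```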
4.2.4 Reductions
We omit the proofs here, but remark that this construction is secure in the sense of the security requirements defined above (Sections 4.2.2 and 4.2.3) [7, 5].
4.3 Lattice-Based Fully Dynamic Group Signatures
We will cover this group signature in considerably more detail, as it will be the main building block of our reputation system in Chapter 6. The syntax and security notions are based on the most recent paper on the foundations of dynamic group signatures [7], which we covered in Section 4.2. [18] introduce updateable Merkle trees that allow them to obtain a fully dynamic group signature supporting revocation. The scheme is covered in detail in Section 4.3.4 (as we will be using many portions of it in our reputation system). Similarly, the zero-knowledge argument and security proofs serve as inspiration for our construction, so we give them in full detail.
4.3.1 Syntax
[18] uses very similar syntax, so we omit listing it here (see Section 4.2.1). There is one slight difference that we note. In the syntax of [7] from Section 4.2, a user identifier uid is already assigned to the user when he initially generates his public-secret key pair during UKgen, whereas in the syntax here the GM assigns a user identifier uid to the user during the Join/Issue protocol. This change does not affect the security experiments, which are the same as in Section 4.2 [7, 18].
4.3.2 Hardness Assumptions
We briefly recall the hardness assumptions on which [18]'s construction is based: the SIS problem (needed for the Merkle-tree accumulator) and the LWE problem (needed for the encryption scheme).
Definition 14 ([18]). Given a uniformly random matrix $\mathbf{A} \in \mathbb{Z}_q^{n \times m}$, find $\mathbf{x} \in \mathbb{Z}^m$, $\mathbf{x} \neq \mathbf{0}$, such that $\|\mathbf{x}\| \leq \beta'$ and $\mathbf{A} \cdot \mathbf{x} = \mathbf{0} \pmod{q}$. This is the $\mathrm{SIS}_{n,m,q,\beta'}$ problem.

For $\beta' = 1$, $q = \widetilde{O}(n)$, and $m = 2n\lceil \log q \rceil$, the $\mathrm{SIS}_{n,m,q,\beta'}$ problem is at least as hard as $\mathrm{SIVP}_\gamma$ with $\gamma = \widetilde{O}(n)$ [18]. This will be used in choosing the parameters of the scheme.
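As a toy illustration of the problem statement, the sketch below checks (rather than finds) a candidate SIS solution; the numpy representation and the parameters are our own and are far below cryptographic size.

```python
import numpy as np

def is_sis_solution(A: np.ndarray, x: np.ndarray, q: int, beta: int) -> bool:
    """Check that x is a nonzero vector with small entries in the kernel of A mod q."""
    nonzero = np.any(x != 0)
    short = np.max(np.abs(x)) <= beta
    in_kernel = np.all((A @ x) % q == 0)
    return bool(nonzero and short and in_kernel)

# Toy parameters only; real instances take n in the hundreds and q = poly(n).
rng = np.random.default_rng(0)
n, m, q, beta = 4, 16, 97, 1
A = rng.integers(0, q, size=(n, m))
x = rng.integers(-1, 2, size=m)   # a random {-1,0,1} vector; almost surely *not* a solution
print(is_sis_solution(A, x, q, beta))
```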
Definition 15 ([18]). Let $n, m \geq 1$, $q \geq 2$, and let $\chi'$ be a probability distribution on $\mathbb{Z}$. For $\mathbf{s} \in \mathbb{Z}_q^n$, define $A_{\mathbf{s},\chi'}$ to be the distribution obtained by sampling $\mathbf{a} \leftarrow \mathbb{Z}_q^n$ and $e' \leftarrow \chi'$, and outputting $(\mathbf{a},\, \mathbf{s}^\top \cdot \mathbf{a} + e') \in \mathbb{Z}_q^n \times \mathbb{Z}_q$. The $\mathrm{LWE}_{n,q,\chi'}$ problem asks us to distinguish $m$ samples chosen according to $A_{\mathbf{s},\chi'}$ from $m$ samples chosen according to the uniform distribution over $\mathbb{Z}_q^n \times \mathbb{Z}_q$.

For $q$ a prime power and $\chi'$ the discrete Gaussian $D_{\mathbb{Z},\alpha q}$ (where $\alpha q \geq 2\sqrt{n}$), the $\mathrm{LWE}_{n,q,\chi'}$ problem is at least as hard as $\mathrm{SIVP}_{\widetilde{O}(n/\alpha)}$ [18].
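The following toy sketch generates LWE samples alongside uniform ones. Rounded continuous Gaussian noise stands in for the discrete Gaussian $\chi'$, and the parameters are far too small to be secure; both are our simplifying assumptions rather than [18]'s choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, q = 8, 32, 3329          # toy parameters
alpha_q = 3.0                  # stand-in noise width

s = rng.integers(0, q, size=n)                              # secret
A = rng.integers(0, q, size=(m, n))                         # the m vectors a_i as rows
e = np.rint(rng.normal(0, alpha_q, size=m)).astype(int)     # rounded Gaussian ~ discrete Gaussian
b_lwe = (A @ s + e) % q                                     # m samples (a_i, <s, a_i> + e_i)
b_uniform = rng.integers(0, q, size=m)                      # the competing uniform distribution

# An LWE distinguisher must tell (A, b_lwe) from (A, b_uniform);
# with the secret in hand the noise is visible, without it the two look alike.
centered = ((b_lwe - A @ s) % q + q // 2) % q - q // 2
print("noise recovered with the secret:", centered[:5])
```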
4.3.3 The Advantage of Accumulators
To achieve full dynamicity we will need to make use of updateable lattice-based Merkle trees. These were first introduced by LLNW in the static setting and then made updateable (fully dynamic) in [18]'s work. We first recall the definition of an accumulator scheme [18].
An accumulator scheme consists of the following tuple of polynomial-time algorithms:

1. TSetup(λ) → pp: On input of the security parameter λ, output the public parameters pp.

2. TAcc_pp(R) → u: On input of a set R = {d_0, ..., d_{N-1}} of N data values, output the accumulator value u.

3. TWitness_pp(R, d) → w: On input of the data set R and a value d, output ⊥ if d ∉ R; otherwise, output a witness w for the fact that d was accumulated in TAcc_pp(R).

4. TVerify_pp(u, d, w) → 1/0: On input of the accumulator value u and a value-witness pair (d, w), output 1 if (d, w) is valid for the accumulator u and 0 otherwise.
We say an accumulator scheme (TSetup, TAcc, TWitness, TVerify) is "secure" if for all PPT adversaries $\mathcal{A}$:
$$\Pr\big[\mathrm{pp} \leftarrow \mathrm{TSetup}(\lambda);\ (R, d, w) \leftarrow \mathcal{A}(\mathrm{pp}) \,:\, d \notin R \,\wedge\, \mathrm{TVerify}_{\mathrm{pp}}(\mathrm{TAcc}_{\mathrm{pp}}(R), d, w) = 1\big] = \mathrm{negl}(\lambda).$$
Next, we introduce the LLNW Merkle-tree accumulator, as it will be needed in the construction of the fully dynamic group signature (and thus in ours by extension). The scheme works with the following parameters: $n = O(\lambda)$, $q = \widetilde{O}(n^{1.5})$, $k = \lceil \log_2 q \rceil$, and $m = 2nk$. We identify $\mathbb{Z}_q$ with $\{0, 1, \ldots, q-1\}$. We also define $\mathbf{G}$, the powers-of-2 matrix:
$$\mathbf{G} = \begin{bmatrix} 1\;\, 2\;\, 4\;\, \cdots\;\, 2^{k-1} & & \\ & \ddots & \\ & & 1\;\, 2\;\, 4\;\, \cdots\;\, 2^{k-1} \end{bmatrix} \in \mathbb{Z}_q^{n \times nk}.$$
$\mathbf{G}$ has an important property that we will use: for any $\mathbf{v} \in \mathbb{Z}_q^n$, $\mathbf{v} = \mathbf{G} \cdot \mathrm{bin}(\mathbf{v})$, where $\mathrm{bin}(\mathbf{v}) \in \{0,1\}^{nk}$ is the binary representation of $\mathbf{v}$.
We now present the family of hash functions that will be used in the scheme [18].

Definition 16 ([18]). The function family $\mathcal{H}$ mapping $\{0,1\}^{nk} \times \{0,1\}^{nk}$ to $\{0,1\}^{nk}$ is defined as $\mathcal{H} = \{h_{\mathbf{A}} \mid \mathbf{A} \in \mathbb{Z}_q^{n \times m}\}$, where, writing $\mathbf{A} = [\mathbf{A}_0 \mid \mathbf{A}_1]$ with $\mathbf{A}_0, \mathbf{A}_1 \in \mathbb{Z}_q^{n \times nk}$, we have for any $(\mathbf{u}_0, \mathbf{u}_1) \in \{0,1\}^{nk} \times \{0,1\}^{nk}$:
$$h_{\mathbf{A}}(\mathbf{u}_0, \mathbf{u}_1) = \mathrm{bin}\big(\mathbf{A}_0 \cdot \mathbf{u}_0 + \mathbf{A}_1 \cdot \mathbf{u}_1 \bmod q\big) \in \{0,1\}^{nk}.$$

Notice that $h_{\mathbf{A}}(\mathbf{u}_0, \mathbf{u}_1) = \mathbf{u} \iff \mathbf{A}_0 \cdot \mathbf{u}_0 + \mathbf{A}_1 \cdot \mathbf{u}_1 = \mathbf{G} \cdot \mathbf{u} \pmod q$. Now that we have defined a family $\mathcal{H}$ of SIS-based collision-resistant hash functions, we can construct a Merkle tree with $N = 2^\ell$ leaves [18].
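A small numerical sketch of the decomposition property of $\mathbf{G}$ and of one evaluation of $h_{\mathbf{A}}$ follows; the toy parameters and the LSB-first ordering of bin(·) are our own conventions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 3, 8
k = int(np.ceil(np.log2(q)))             # k = ceil(log2 q)

def bin_vec(v):
    """LSB-first binary decomposition, so that v = G @ bin_vec(v) (mod q)."""
    return np.array([(x >> i) & 1 for x in v for i in range(k)])

G = np.kron(np.eye(n, dtype=int), 2 ** np.arange(k))   # powers-of-2 gadget matrix
v = rng.integers(0, q, size=n)
assert np.all((G @ bin_vec(v)) % q == v)                # v = G . bin(v)

A0, A1 = rng.integers(0, q, size=(2, n, n * k))         # A = [A0 | A1]
def h_A(u0, u1):
    """h_A(u0, u1) = bin(A0.u0 + A1.u1 mod q): two nk-bit strings to one."""
    return bin_vec((A0 @ u0 + A1 @ u1) % q)

u0, u1 = rng.integers(0, 2, size=(2, n * k))
print(h_A(u0, u1))
```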
TSetup(λ). Sample $\mathbf{A} \leftarrow U(\mathbb{Z}_q^{n \times m})$ and output pp := $\mathbf{A}$.

TAcc_A(R = {d_0, ..., d_{N-1}} ⊂ {0,1}^{nk}). For every $j \in [0, N-1]$, let $\mathrm{bin}(j) = (j_1, ..., j_\ell) \in \{0,1\}^\ell$ be the binary representation of $j$ and let $\mathbf{d}_j = \mathbf{u}_{j_1,...,j_\ell}$. We form the tree of depth $\ell$ based on the $N$ leaves $\mathbf{u}_{0,0,...,0}, ..., \mathbf{u}_{1,1,...,1}$ as follows:

1. At depth $i \in [\ell - 1]$ (working from the leaves up), for all $(b_1, ..., b_i) \in \{0,1\}^i$, the node $\mathbf{u}_{b_1,...,b_i} \in \{0,1\}^{nk}$ is defined as $h_{\mathbf{A}}(\mathbf{u}_{b_1,...,b_i,0}, \mathbf{u}_{b_1,...,b_i,1})$.

2. At depth 0, the root $\mathbf{u} \in \{0,1\}^{nk}$ is defined as $h_{\mathbf{A}}(\mathbf{u}_0, \mathbf{u}_1)$.

The algorithm outputs the accumulator value $\mathbf{u}$.
TWitness_A(R, d). If $d \notin R$, then return ⊥. Otherwise, $d = \mathbf{d}_j$ for some $j \in [0, N-1]$ with binary representation $(j_1, ..., j_\ell)$. Output the witness $w$ defined as
$$w = \big((j_1, ..., j_\ell), (\mathbf{u}_{j_1,...,j_{\ell-1},\bar{j}_\ell}, ..., \mathbf{u}_{j_1,\bar{j}_2}, \mathbf{u}_{\bar{j}_1})\big) \in \{0,1\}^\ell \times (\{0,1\}^{nk})^\ell$$
for $\mathbf{u}_{j_1,...,j_{\ell-1},\bar{j}_\ell}, ..., \mathbf{u}_{j_1,\bar{j}_2}, \mathbf{u}_{\bar{j}_1}$ computed by TAcc_A(R).
TVerify_A(u, d, w). The witness $w$ is of the form
$$w = \big((j_1, ..., j_\ell), (\mathbf{w}_\ell, ..., \mathbf{w}_1)\big) \in \{0,1\}^\ell \times (\{0,1\}^{nk})^\ell.$$
The algorithm recursively computes the path $\mathbf{v}_\ell, \mathbf{v}_{\ell-1}, ..., \mathbf{v}_1, \mathbf{v}_0 \in \{0,1\}^{nk}$ as follows: $\mathbf{v}_\ell = d$ and, for $i \in \{\ell-1, ..., 1, 0\}$,
$$\mathbf{v}_i = \begin{cases} h_{\mathbf{A}}(\mathbf{v}_{i+1}, \mathbf{w}_{i+1}) & \text{if } j_{i+1} = 0, \\ h_{\mathbf{A}}(\mathbf{w}_{i+1}, \mathbf{v}_{i+1}) & \text{if } j_{i+1} = 1. \end{cases}$$
It returns 1 if $\mathbf{v}_0 = \mathbf{u}$ and 0 otherwise.
Lemma 4.3.1 ([18]). The given accumulator scheme is correct and secure in the sense of the definition above, assuming that $\mathrm{SIS}_{n,m,q,1}$ is hard.
[18] introduce an "update" algorithm that allows the Merkle tree to be updated without reconstructing the entire tree: if the value of a leaf changes, we only modify the values on the path from that leaf up to the root. We give their algorithm below [18]:
TUpdate_A((j_1, ..., j_ℓ), d). Let $\mathrm{bin}(j) = (j_1, ..., j_\ell)$ and $d \in \{0,1\}^{nk}$. The algorithm performs the following steps:

1. Let $\mathbf{d}_j$ be the current value at the leaf position determined by $\mathrm{bin}(j)$ and let $((j_1, ..., j_\ell), (\mathbf{w}_{j,\ell}, ..., \mathbf{w}_{j,1}))$ be the associated witness.

2. Set $\mathbf{v}_\ell := d$ and recursively compute the path $\mathbf{v}_\ell, \mathbf{v}_{\ell-1}, ..., \mathbf{v}_1, \mathbf{v}_0 \in \{0,1\}^{nk}$ as in TVerify.

3. Set $\mathbf{u} := \mathbf{v}_0$; $\mathbf{u}_{j_1} := \mathbf{v}_1$; ...; $\mathbf{u}_{j_1,...,j_{\ell-1}} := \mathbf{v}_{\ell-1}$; $\mathbf{u}_{j_1,...,j_\ell} := \mathbf{v}_\ell = d$.
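Putting the pieces together, here is a compact Python sketch of TAcc, TWitness, TVerify, and TUpdate over the hash family defined above. The toy parameters, the dictionary-based tree storage, and the bit conventions are our own simplifications, not the data structures of [18].

```python
import numpy as np

rng = np.random.default_rng(3)
n, q = 3, 8
k = int(np.ceil(np.log2(q)))
A0, A1 = rng.integers(0, q, size=(2, n, n * k))          # pp = A = [A0 | A1]

def bin_vec(v):
    return np.array([(x >> i) & 1 for x in v for i in range(k)])

def h_A(u0, u1):
    return bin_vec((A0 @ u0 + A1 @ u1) % q)

class MerkleAccumulator:
    def __init__(self, leaves):                           # TAcc: leaves are nk-bit vectors
        self.ell = int(np.log2(len(leaves)))
        self.node = {}                                     # node[(b1,...,bi)] lives at depth i
        for j, d in enumerate(leaves):
            self.node[self._bits(j)] = np.array(d)
        for depth in range(self.ell - 1, -1, -1):          # hash children bottom-up
            step = 2 ** (self.ell - depth)
            for idx in [self._bits(j)[:depth] for j in range(0, 2 ** self.ell, step)]:
                self.node[idx] = h_A(self.node[idx + (0,)], self.node[idx + (1,)])
        self.root = self.node[()]

    def _bits(self, j):
        return tuple((j >> (self.ell - 1 - i)) & 1 for i in range(self.ell))

    def witness(self, j):                                  # TWitness: siblings along the path
        bits = self._bits(j)
        return bits, [self.node[bits[:i] + (1 - bits[i],)] for i in range(self.ell - 1, -1, -1)]

    def update(self, j, d_new):                            # TUpdate: refresh one leaf-to-root path
        bits = self._bits(j)
        self.node[bits] = np.array(d_new)
        for depth in range(self.ell - 1, -1, -1):
            idx = bits[:depth]
            self.node[idx] = h_A(self.node[idx + (0,)], self.node[idx + (1,)])
        self.root = self.node[()]

def verify(root, d, witness):                              # TVerify: recompute the path to the root
    bits, siblings = witness
    v = np.array(d)
    for i in range(len(bits) - 1, -1, -1):
        w = siblings[len(bits) - 1 - i]
        v = h_A(v, w) if bits[i] == 0 else h_A(w, v)
    return np.array_equal(v, root)

leaves = [rng.integers(0, 2, size=n * k) for _ in range(4)]
acc = MerkleAccumulator(leaves)
print("leaf 2 verifies:", verify(acc.root, leaves[2], acc.witness(2)))            # True
acc.update(2, rng.integers(0, 2, size=n * k))
print("old leaf 2 after update:", verify(acc.root, leaves[2], acc.witness(2)))    # almost surely False
```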
We are now ready to present their scheme.
4.3.4 The Scheme
We present the scheme from [18] in full detail as we will be making changes to it when
we construct our reputation system in Chapter 6.
1. GSetup(λ) → pp:
On input of the security parameter λ, the algorithm outputs the public parameters
pp ={λ, N, n, q, k, m, mE