NIZKCTF: A Non-Interactive Zero-Knowledge
Capture the Flag Platform
Paulo Matias
Federal University of São Carlos,
Brazil
matias@ufscar.br
Pedro Barbosa
Federal University of Campina Grande,
Brazil
pedroyossis@copin.ufcg.edu.br
Thiago Cardoso
Hekima,
Brazil
thiago.cardoso@hekima.com
Diego Mariano
University of São Paulo,
Brazil
diegomcampos@usp.br
Diego F. Aranha
University of Campinas,
Brazil
dfaranha@ic.unicamp.br
Abstract
Capture the Flag (CTF) competitions are increasingly important for the Brazilian cybersecurity community as education and professional tools. Unfortunately, CTF platforms may suffer from security issues, giving an unfair advantage to competitors. To mitigate this, we propose NIZKCTF, the first open-audit CTF platform based on non-interactive zero-knowledge proofs.
1 Introduction
A critical element of a robust cybersecurity strategy is having people trained in recent technological security issues. Research shows that the United States is the most prepared country against cyber attacks [1]; however, there is also a problem of quantity and quality of professionals, especially when it comes to more sophisticated skills such as security by design, defensive programming, applied cryptography, threat intelligence and forensic analysis after a compromise [10]. This problem is aggravated in developing countries, where access to bleeding-edge resources for professional training based on real-world experience is limited.
In order to reduce the shortage of cybersecurity professionals, companies, schools, universities and military institutions have been promoting Capture the Flag (CTF) competitions around the world to foster the engagement of professionals in cybersecurity topics. CTF competitions are usually designed to serve an educational purpose, giving participants experience with computer security problems from a wide spectrum of technical areas, as well as with conducting and reacting to the sort of attacks found in the real world. CTF competitions can also serve as a convenient recruiting tool to fill specific positions with highly skilled talent. Reverse engineering, exploitation, forensics, web programming and cryptanalysis are among the skills typically required in CTF competitions. Because CTF competitions are inexpensive to organize and run, they are strategic for countries such as Brazil, allowing the local community to interact and compete with international players.
There are two styles of CTF competitions: attack/defense and jeopardy. In an attack/defense competition, each team is given a machine to defend on an isolated network. Teams are scored based on both their success in defending their assigned machine and their success in attacking the other teams' machines. Jeopardy competitions are more common and usually involve multiple categories of problems, each of which contains a variety of questions of different values and levels of difficulty. A correct solution to a problem reveals a flag, which is submitted to the scoring platform for points. Teams attempt to earn the most points within the competition's time frame (e.g., 24 hours), but usually do not directly attack each other. Rather than a race, this style of gameplay encourages taking time to approach challenges and prioritizes the quantity of correct submissions over their timing.
An important concern is to protect the main software platform against attacks. The platform is usually responsible for storing the flags and updating the scoreboard as the contest progresses. Unfortunately, because CTF platforms suffer from the same software security issues as any other software component, and because the high competitiveness of such environments creates incentives, it is common to find teams targeting the platform instead of the challenges. There are no independently verifiable guarantees that the teams really solved the challenges and that the scoreboard is correct. Successful attacks against the platform arguably demonstrate relevant skills, but organizers may be more interested in enforcing the rules and rewarding solutions to the challenges, due to sponsorship duties or focused recruiting efforts.
In this paper, we propose a novel platform called NIZKCTF: Non-Interactive Zero-Knowledge Capture the Flag Platform. With NIZKCTF, no flags are stored in the platform; therefore, for a solved challenge, the team does not submit the flag itself, but a public zero-knowledge proof. This proof is specific to the team and the challenge. Since it is a public proof, the platform and the other teams can audit and confirm that the team has indeed solved the challenge, without being able to deduce the flag. We implemented NIZKCTF as an open-source project. Its architecture includes software elements such as a Git-based centralized repository (i.e., to commit flag submissions and receive event updates) and a continuous integration system (i.e., to automatically merge all team requests into the repository of the competition). Any CTF promoter can instantiate and use NIZKCTF in their competition. After validating the usefulness and security of NIZKCTF in two smaller competitions, we adopted the platform to host the 2017 edition of Pwn2Win, an internationally advertised and bilingual Brazilian CTF. During and after the CTFs, teams were not able to compromise the final result.
2 History and Background
The first CTF competition was apparently hosted during DEFCON 1996 in Las Vegas. In Brazil, hacking challenges have usually been conducted in loco during national security events such as the Hackers 2 Hackers Conference¹ since at least 2007. The first widely advertised jeopardy competition was Hacking n' Roll², which began to be organized in 2010 by the Information Security Research Team of the State University of Ceará (UECE). In 2014, more competitions started to appear, such as the Pwn2Win (online) and Hackaflag (in loco) CTFs. Although challenges were stated only in Portuguese, a Ukrainian team participated in the 2014 editions of Pwn2Win and Hacking n' Roll, increasing interest in bilingual and internationally advertised competitions. In March 2016, Pwn2Win became the first Brazilian CTF listed in the CTFtime³ international index, followed by 3DSCTF in December of the same year. Figure 1 shows the growing trend in active participation of Brazilian teams in international competitions.
Figure 1. After the first bilingual Brazilian competitions were promoted in 2016, there was an increase in the number of internationally active Brazilian teams indexed by CTFtime.
As organizers of the Pwn2Win CTF, the authors faced scalability issues during the 2016 edition. In the last couple of hours of the competition, the main software platform became nearly unable to render the scoreboard due to the number of concurrent accesses. Although this issue could be alleviated by adopting a more efficient open-source platform, there was a growing apprehension about facing denial-of-service attacks or even having flags leaked due to a software security flaw or server misconfiguration. For newcomers to the international CTF scene, this could undermine the reputation of Brazilian competitions as a whole, which motivated this work toward a more robust and transparent CTF platform.
3 Related Work
Concerns about securing a CTF platform against attacks, or simply making the competition fairer, are not new. In 2015, the University of Birmingham and Imperial College London (UK) [7] announced a virtual machine (VM) containing vulnerable services and challenges: each student runs the VM locally and attempts to solve challenges inside the VM as they are made available. Solving a challenge reveals a flag that is unique to a particular VM instance, allowing for the detection of collusion between students. As well as acquiring flags, students also had to provide traditional written answers to questions and sit an examination.
In 2015, Carnegie Mellon University [5] released an automatic problem generator (APG) for CTF competitions, in which a given challenge is not fixed but rather can have many different automatically generated problem instances. APG offers players a unique experience and can facilitate deliberate practice, where problems vary just enough to make sure a user can replicate the solution idea. APG also allows competition administrators to detect when users submit a flag copied from another user to the scoring server.
There are works regarding problems that may affect the overall quality of CTFs [6, 8, 9]. Chung et al. [8] present insights and lessons learned from organizing CSAW, one of the largest and most successful CTFs. Chapman et al. [6] present the competition design of PicoCTF, as well as an evaluation based on survey responses and website interaction statistics, and insights into the students who played.
Despite their relevance, these works do not address solutions to the security problem of players attacking the CTF platform. A first issue is players attacking the platform to steal the flags. Although common sense dictates that flags should be processed with a one-way function, similarly to how passwords are usually stored, Table 1 shows that most of the previously existing open-source CTF platforms do not support this feature. The only exception is PicoCTF Platform 2, which checks whether flags are correct by calling a script specific to each challenge. The script could be configured to check against a protected flag, although this use case is neither a built-in feature nor documented.
Platform               | Supports protected flags | Supports regex flags
NIZKCTF⁴               | yes                      | no
CTFd⁵                  | no                       | yes
FBCTF⁶                 | no                       | no
HackTheArch⁷           | no                       | yes
Mellivora⁸             | no                       | no
NightShade⁹            | no                       | yes
PicoCTF Platform 2¹⁰   | yes                      | yes
RootTheBox¹¹           | no                       | yes
PyChallFactory¹²       | no                       | no

Table 1. Comparison between NIZKCTF and previously existing open-source platforms.
4 Problem Statement
Unfortunately, it is common to find vulnerabilities in CTF platforms. Here we present just two examples.

RC3 CTF 2016: At one point during this CTF, the first-place team had 3590 points. Suddenly, a team named "The board is vulnerable, please contact admin@seadog007.me" appeared on the scoreboard with 4500 points¹³. Figure 2 shows the record of this fact.

CODEGATE CTF 2016 Finals: During this CTF, a team discovered that the server hosting one of the challenges had an old kernel version and was vulnerable to overlayfs privilege escalation (CVE-2016-1576) [12]. The team members were able to gain root access and get some "free flags" (although they claim that they did not submit these flags until they had really solved the challenges). Tracing the system calls of the SSH server, they waited for an administrator to log in and were able to obtain their password. The team then noticed that other servers (including the scoreboard) had the same password. After visiting the platform servers and having fun, they stopped the intrusion and proceeded to play as usual¹⁴.
The aforementioned cases illustrate the impact of software vulnerabilities in CTF platforms and justify the ever-growing importance of securing them. This paper addresses this problem and intends to answer the following questions: (i) how to guarantee integrity and a minimum level of fairness in a CTF competition, preventing teams from stealing flags by exploiting the platform? (ii) how to ensure auditability, allowing anyone to verify whether teams really solved the challenges according to the points presented on the scoreboard? (iii) how to replicate the information required for checking the correctness of solutions to the players, in order to reduce the impact of possible service outages? Our proposed solution is discussed next.
5 Competition Protocol
This section describes the novel competition protocol employed by NIZKCTF. A threat model for CTF competitions is defined, followed by a formalization of the main theoretical tools and requirements for auditability, together with textual descriptions of the security properties and employed schemes for clarification.
5.1 Threat Model
In a CTF competition, the adversary is a player or team interested in exploiting vulnerabilities in the platform, instead of solving challenges, to obtain advantages such as stealing flags or manipulating the scoreboard. Due to the difficulty of developing a vulnerability-free system, NIZKCTF relies on cryptographic primitives, such as zero-knowledge proofs, to provide security properties.

Another form of adversarial behavior is a team that wants to submit flags in place of another team without their consent (e.g., to harm a specific team by making it fall behind on the scoreboard). For this reason, NIZKCTF makes each zero-knowledge proof unique to a particular team.

The protocol alone does not protect against flag sharing, i.e., teams copying and submitting flags from others. For a solution that addresses this type of adversary, recall the automatic problem generation proposed by Burket et al. [5]. Since a competition based on NIZKCTF can use automatic problem generation, our proposal can be extended to protect against a flag-sharing adversary.
5.2 Zero-knowledge Proof of Knowledge
Let $L$ be an NP language such that $pk_i \in L$ iff there exists a witness $sk_i$ yielding $M_L(pk_i, sk_i) = 1$, where $M_L$ is a polynomial-time Turing machine. Let us also assume that the probability of computing $sk_i$ from $pk_i$ in polynomial time is negligible. A non-interactive zero-knowledge (NIZK) proof of knowledge [3] is a cryptographic scheme through which a prover knowing $sk_i$ can convince a verifier of that fact, satisfying the following properties:

Non-interactivity: Access to a public set of common parameters and to the contents of the proof itself must be sufficient to verify a proof. Since no interaction with the prover is required, any party interested in acting as a verifier may do so.
Completeness: If $pk_i \in L$, the proof generated by an honest prover knowing $sk_i$ must be accepted by an honest verifier:
\[ \sigma = \mathsf{Prove}(pk_i, sk_i) \implies \mathsf{Verify}(pk_i, \sigma) = 1. \]
Validity: Any probabilistic polynomial-time (PPT) prover who does not know $sk_i$ must have a negligible probability of success $\epsilon$ of convincing a verifier. Equivalently, for every possible PPT prover $P$ (even malicious ones) there exists a knowledge extractor $\mathsf{Extract}$ that, given oracle access to $P$, is able to extract $sk_i$ with overwhelming probability $(1 - \epsilon)$ every time $P$ succeeds in completing a new proof:
\[ \Pr[\sigma \leftarrow P(pk_i, sk_i);\ sk_i' \leftarrow \mathsf{Extract}(pk_i, \sigma) : M_L(pk_i, sk_i') \lor ((pk_i, sk_i) \in Q) \lor \lnot\mathsf{Verify}(pk_i, \sigma)] = 1 - \epsilon, \]
where $Q$ denotes a query tape which registers all previous queries that have been sent to a prover.

Zero-knowledge: The proof discloses no information about $sk_i$ besides the fact that the prover knows its value. Equivalently, for every possible PPT verifier $V$ (even malicious ones) there exists a simulator $\mathsf{Sim}$ that, given oracle access to $V$, is able to convince $V$ with a negligible difference in probability $\epsilon$ when compared to an honest prover, even though $\mathsf{Sim}$ has no knowledge of $sk_i$:
\[ \left| \Pr[\sigma \leftarrow \mathsf{Prove}(pk_i, sk_i);\ b \leftarrow V(pk_i, \sigma) : b = 1] - \Pr[\sigma \leftarrow \mathsf{Sim}(pk_i);\ b \leftarrow V(pk_i, \sigma) : b = 1] \right| = \epsilon. \]

Figure 2. In the RC3 CTF 2016, hackers exploited the scoreboard to report the vulnerability to the competition administrators.
In NIZKCTF, values of $pk_i$ are publicly disclosed, but the corresponding $sk_i$, which allows proving $pk_i \in L$, is kept secret. Every player holds a witness $sk_t$ attesting membership in their team $t$. When a player solves a challenge $c$ of the competition, they obtain a witness $sk_c$ asserting that they hold the answer to the challenge. In order to earn points for their team, the player needs to publicly prove simultaneous knowledge of $sk_t$ and $sk_c$. The concept of simultaneous knowledge is formalized by performing proofs on an auxiliary NP language $L'$ such that $pk_t \| pk_c \in L'$ iff there exists a witness $sk_t \| sk_c$ such that $M_{L'}(pk_t \| pk_c, sk_t \| sk_c) = M_L(pk_t, sk_t) \land M_L(pk_c, sk_c) = 1$, where the operator $\|$ denotes string concatenation.
Different approaches exist for proving knowledge of the witness $sk_t \| sk_c$, but their practicality depends on the exact choice of $M_L$ and, consequently, of $M_{L'}$. A first approach would be a general-purpose non-interactive zero-knowledge (NIZK) proof system, but the generality comes at a price, in the form of large proving keys and expensive processing time.
5.2.1 A scheme based on digital signatures
The approach proposed and implemented in NIZKCTF consists in choosing an $M_L(pk_i, sk_i)$ that verifies whether $sk_i$ is the private key corresponding to the public key $pk_i$ in a digital signature scheme. This choice allows us to reduce our proof-of-knowledge problem to that of digitally signing messages, whose implementation is simpler and more efficient than any known general-purpose NIZK proof system.
The Schnorr signature scheme and its key-prefixed variant over elliptic curves, EdDSA [4], satisfy the completeness, validity and zero-knowledge properties under the assumption that the discrete logarithm problem over elliptic curves (ECDLP) is hard [11, 14]. However, incorrectly composing two signatures when constructing a proof of simultaneous knowledge may undermine the validity of these properties.
Let the following be the primitives of a secure digital signature scheme:
\[ \mathsf{Sign}(sk_i, m) = s \| m \]
Signs the message $m$ using the private key $sk_i$. Outputs the signature $s$ prepended to the message $m$.
\[ \mathsf{Open}(pk_i, s \| m) = \begin{cases} m, & \text{if } s \text{ is valid} \\ \bot, & \text{otherwise} \end{cases} \]
Verifies whether $s$ is a valid signature for $m$ produced by the private key $sk_i$ corresponding to the public key $pk_i$. Outputs the original message $m$ if the signature is valid, or $\bot$ if it is invalid.
We propose the following scheme to prove knowledge of the witness $sk_t \| sk_c$ corresponding to $pk_t \| pk_c \in L'$:
\[ \mathsf{Prove}(pk_t \| pk_c, sk_t \| sk_c) = \mathsf{Sign}(sk_c, \mathsf{Sign}(sk_t, c)) \quad (1) \]
\[ \mathsf{Verify}(pk_t \| pk_c, \sigma) = \begin{cases} 1, & \text{if } m = c \\ 0, & \text{if } m \neq c, \end{cases} \quad \text{where } m = \mathsf{Open}(pk_t, \mathsf{Open}(pk_c, \sigma)). \quad (2) \]
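To make the nested-signature construction of Eqs. 1 and 2 concrete, the following sketch instantiates it with Ed25519 signatures via PyNaCl (Python bindings to libsodium, which our command-line interface also uses). The function names prove and verify and the byte-string challenge identifier are illustrative choices for this example, not the platform's actual API.

from nacl.signing import SigningKey, VerifyKey
from nacl.exceptions import BadSignatureError

def prove(sk_t: SigningKey, sk_c: SigningKey, challenge_id: bytes) -> bytes:
    # Prove(pk_t||pk_c, sk_t||sk_c) = Sign(sk_c, Sign(sk_t, c))
    inner = sk_t.sign(challenge_id)        # s_t || c
    return bytes(sk_c.sign(bytes(inner)))  # s_c || s_t || c

def verify(pk_t: VerifyKey, pk_c: VerifyKey, challenge_id: bytes, proof: bytes) -> bool:
    # Verify(pk_t||pk_c, sigma) = 1 iff Open(pk_t, Open(pk_c, sigma)) = c
    try:
        inner = pk_c.verify(proof)         # strips s_c, yielding s_t || c
        return pk_t.verify(inner) == challenge_id
    except BadSignatureError:
        return False

# Usage: the team key pair is long-lived; the challenge key pair is derived from the flag.
sk_t, sk_c = SigningKey.generate(), SigningKey.generate()
sigma = prove(sk_t, sk_c, b"challenge-001")
assert verify(sk_t.verify_key, sk_c.verify_key, b"challenge-001", sigma)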
We argue that Eqs. 1 and 2 satisfy the properties of a NIZK proof-of-knowledge scheme:

Non-interactivity: Since $pk_t$, $pk_c$ and the digital signature scheme parameters are public and known to all parties, the proof $\sigma$ can be verified by Eq. 2 without interaction with the prover.
Completeness: Since $\mathsf{Open}(pk_i, \mathsf{Sign}(sk_i, m)) = m$ for all $i$ and for all $m$ such that $M_L(pk_i, sk_i) = 1$, by simply substituting into Eqs. 1 and 2:
\[ \forall (t, c) : pk_t \| pk_c \in L', \quad \mathsf{Verify}(pk_t \| pk_c, \mathsf{Prove}(pk_t \| pk_c, sk_t \| sk_c)) = 1. \]
Validity: If the digital signature scheme satisfies validity of the signed message $s \| m$, there exists a knowledge extractor $\mathsf{Extract}(pk_i, s \| m)$ able to extract $sk_i$ from the $\mathsf{Sign}(sk_i, m)$ operation implemented by any (possibly malicious) PPT prover $P$. Therefore, a knowledge extractor $\mathsf{Extract}'$ able to extract $sk_t \| sk_c$ from $P$ is constructed as follows:
\[ \mathsf{Extract}'(pk_t \| pk_c, s_c \| s_t \| c) = \mathsf{Extract}(pk_t, s_t \| c) \,\|\, \mathsf{Extract}(pk_c, s_c \| s_t \| c). \]
Let $Q$ be the query tape of $\mathsf{Sign}(sk_i, m)$, and $Q'$ be the query tape of $\mathsf{Prove}(pk_t \| pk_c, sk_t \| sk_c)$. $\mathsf{Extract}'$ succeeds as long as:

$(sk_t, c) \notin Q$: otherwise, $P$ may replay $s_t \| c$ from a previous run, causing $\mathsf{Extract}(pk_t, s_t \| c)$ to fail in extracting $sk_t$.

$(sk_c, s_t \| c) \notin Q$: otherwise, $P$ may replay $s_c \| s_t \| c$ from a previous run, causing $\mathsf{Extract}(pk_c, s_c \| s_t \| c)$ to fail in extracting $sk_c$.

However, since all messages signed by $sk_t$ reference $c$, $(sk_t, c) \in Q \iff (pk_t \| pk_c, sk_t \| sk_c) \in Q'$. Similarly, since all messages signed by $sk_c$ reference $t$, $(sk_c, s_t \| c) \in Q \iff (pk_t \| pk_c, sk_t \| sk_c) \in Q'$. The definition of validity allows the knowledge extractor to fail when $(pk_t \| pk_c, sk_t \| sk_c) \in Q'$; thus the existence of $\mathsf{Extract}'$ proves that validity is satisfied.
Zero-knowledge: If the digital signature scheme satisfies zero-knowledge of the private key $sk_i$, there exists a simulator $\mathsf{Sim}(pk_i, m)$ able to convince the $\mathsf{Open}(pk_i, s \| m)$ operation implemented by any (possibly malicious) PPT verifier $V$ of the validity of the signed message $s \| m$. Therefore, there exists a simulator $\mathsf{Sim}'$ able to convince $V$ of the proof's validity:
\[ \mathsf{Sim}'(pk_t \| pk_c) = \mathsf{Sim}(pk_c, \mathsf{Sim}(pk_t, c)). \]
5.2.2 Requirements for the challenge witness
Zero-knowledge of the witness $sk_c$ is useful only as long as $sk_c$ cannot be easily found by exhaustive search. Recall that $M_L$ and $pk_c$ are public. Therefore, if $sk_c$ does not have sufficient randomness, an offline brute-force attack has a non-negligible chance of success in finding its value.

In some CTFs, the flag $f_c$ for a challenge $c$ consists of a random hexadecimal string large enough (e.g., 256 bits after decoding) to make a brute-force attack infeasible. In this case, the flag $f_c$ can be used directly as the seed for a deterministic digital signature key pair generator:
\[ (sk_c, pk_c) = \mathsf{KeyPair}(f_c). \]
Many competitions, however, adopt password-like flags, such as "CTF-BR{you_mastered_technique_X}". In this case, a password-based key derivation function $\mathsf{PBKDF}$ can be used along with a public salt value $\varphi_c$ to increase the difficulty of an offline brute-force attack [15]:
\[ (sk_c, pk_c) = \mathsf{KeyPair}(\mathsf{PBKDF}(\varphi_c, f_c)) \quad (3) \]
In NIZKCTF, we use Eq. 3 as a conservative choice to support both types of flags.
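As an illustration of Eq. 3, the sketch below derives a deterministic Ed25519 key pair from a password-like flag and a public per-challenge salt using PyNaCl. The choice of Argon2id and the moderate cost parameters is an assumption made for this example; the actual platform may use a different libsodium KDF and different parameters.

import nacl.pwhash
from nacl.signing import SigningKey

def keypair_from_flag(flag: str, salt: bytes):
    # (sk_c, pk_c) = KeyPair(PBKDF(salt_c, f_c)); the salt must be 16 bytes for Argon2id.
    seed = nacl.pwhash.argon2id.kdf(
        32,                # Ed25519 seed size in bytes
        flag.encode(),
        salt,              # public, per-challenge salt
        opslimit=nacl.pwhash.argon2id.OPSLIMIT_MODERATE,
        memlimit=nacl.pwhash.argon2id.MEMLIMIT_MODERATE,
    )
    sk_c = SigningKey(seed)
    return sk_c, sk_c.verify_key

# The competition publishes pk_c and the salt; a player who finds the flag re-derives
# sk_c locally and can then produce the proof of knowledge by signing.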
5.3 Auditability
In order to allow CTFs to be openly audited and independently verified, all operations must be carried out in a database and the following requirements must be met:

History preservation: The database must be able to recover a snapshot of its state after each committed transaction and preserve the logical order of these transactions.

Immutability: Once a transaction is committed, the database must prevent it from being erased. If an application needs to revert data to a previous state, the only way to perform that operation must be by committing a new transaction.

Replication: Anyone interested in auditing the competition must be able to retrieve and replicate the entire contents and transaction history of the database.
6 Implementation
Different instances of NIZKCTF can be constructed by choosing distinct underlying technologies. We selected components for implementing NIZKCTF with the goal of maximizing the use of free-of-charge hosted services such as GitHub (or GitLab) and Amazon cloud services that provide a permanent free tier (AWS Lambda and SNS).

Our implementation¹⁵ is composed of the following modules: a distributed storage for sharing data (implemented by a Git repository), a continuous integration script for accepting submissions (implemented by an AWS Lambda function), a command-line interface for interacting with the platform, and a web interface for displaying the list of challenges and the scoreboard.
As can be seen in Figure 3, the distributed storage is used to propagate the necessary data while keeping the full change history. Players then interact with the distributed storage by using the command-line interface, which implements the NIZKCTF protocol. A request to merge the new data into the main repository is created and later evaluated by the Lambda function. The Lambda function checks the validity of the modifications, then decides to accept or deny the request. After the changes are merged, the web interface starts using the most recent version of the data.

Figure 3. Overview of our implementation. Players modify local Git repositories and create pull requests to the central repository. An AWS Lambda function is triggered to merge the pull request into the central repository if the changes are valid.
The distributed storage allows the replication of challenges, team registrations, proofs and other CTF metadata while ensuring that the entire history is preserved. In our implementation, this property is achieved by adopting a Git repository as the database. Git commits are stored as a directed acyclic graph that can be queried later [13]. This allows any participant to audit all changes made, including their ordering and timestamps.

When changes are made to the player's local storage, a pull request is created to merge the modifications into the central Git repository managed by the competition's staff. The pull request is evaluated by the Lambda function and, if accepted, all changed data is incorporated and can be propagated to other teams.
In order to prevent tampering with the commit history, the repository is configured to disallow force pushes. Without force pushes, changes committed to the central repository must always descend from the central repository's history. In other words, pushed commits are not allowed to modify the existing commit chain, guaranteeing the immutability of previously committed changes.
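The append-only property can also be checked independently by any auditor who keeps an earlier clone of the repository. The sketch below is an illustrative check, not part of the platform: it asks Git whether a previously observed head commit is still an ancestor of the current head, which holds exactly when history was only extended.

import subprocess

def history_is_append_only(repo_path: str, old_head: str, new_head: str) -> bool:
    # "git merge-base --is-ancestor A B" exits with status 0 iff A is an ancestor of B.
    result = subprocess.run(
        ["git", "-C", repo_path, "merge-base", "--is-ancestor", old_head, new_head]
    )
    return result.returncode == 0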
The Lambda function works like a continuous integration service. It is triggered by a GitHub hook to automatically accept pull requests containing team registrations and proof submissions. However, since the Lambda function does not have access to any privileged information about the challenges, any node with access to the distributed storage can also be used to verify submissions.
The command-line interface is a Python script used for automating modifications to the distributed storage, and it uses libsodium for all cryptographic operations. Currently, the following operations are supported:

Login: Connects to GitHub or GitLab, generates an API token and creates a fork of the CTF's main repository.

Register: Registers a new team and creates a pull request.

Challenges: Lists the available challenges with their title, description, categories and rewards.

Submit: Checks whether the challenge's private key can be successfully computed from the flag provided by the competitor, then generates a submission request containing the zero-knowledge proof (a sketch of this flow is given after the list).

Score: Reads the accepted-submissions file and presents the scoreboard.
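The sketch below outlines the Submit flow described above, composing the keypair_from_flag and prove helpers sketched in Section 5.2. The function name, its arguments and the hex encoding of the published challenge key are illustrative assumptions, not the actual file layout or API of the command-line interface.

def submit(flag: str, salt: bytes, published_pk_c_hex: str,
           sk_t, challenge_id: bytes) -> bytes:
    # Re-derive the challenge key pair from the flag (Eq. 3).
    sk_c, pk_c = keypair_from_flag(flag, salt)
    # Reject locally if the flag does not reproduce the published challenge public key.
    if pk_c.encode().hex() != published_pk_c_hex:
        raise ValueError("wrong flag: derived public key does not match")
    # Otherwise produce the zero-knowledge proof (Eq. 1) to be sent in a pull request.
    return prove(sk_t, sk_c, challenge_id)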
The web interface uses the repository as a GitHub page, which exposes files through an HTTP server. This allows the challenges and scoreboard to be viewed in a more user-friendly way. The interface is implemented using only client-side technologies (HTML, CSS and JavaScript). Challenge and scoreboard files are loaded using Ajax in order to give a dynamic feel.

It is worth noting that the distributed storage could also be implemented using a blockchain. This would allow a fully distributed implementation of NIZKCTF in which submission proofs are appended to the blockchain and validated by contest participants. We did not choose such an implementation because it would require a small transaction fee for each submitted proof.
7 Validation
To validate our proposal, we first conducted two small CTFs as pilot tests. The first one was the Pwn2Win Platform Test Edition (PTE)¹⁶, a competition for 10 invited international teams. The objective of PTE was to assess the usefulness and security of NIZKCTF. In order to achieve that, we used the Goal, Question, Metric (GQM) paradigm [2], a mechanism for defining and evaluating goals using measurement.

GQM defines a measurement model on three levels: the conceptual level (Goal), the operational level (Question) and the quantitative level (Metric). GQM templates are a structured way of specifying goals and contain the following fields: purpose, object of study, focus, stakeholder and context. Here is a GQM template expressing the goal of our study: the purpose of this study was to evaluate the usefulness of NIZKCTF when being used by the participants in a CTF competition.
To characterize the measurement object, we defined the research question RQ1: Is our implementation of NIZKCTF able to provide the features (e.g., challenges, submissions and a scoreboard) of a common CTF?

Pwn2Win PTE had 7 challenges and a duration of 12 hours. There was one challenge each in exploitation, cryptography, web, networking and miscellaneous, and two in reverse engineering. Teams were able to solve the challenges and there was a scoreboard, just like in a common CTF. Players also gave positive feedback and no one objected to using NIZKCTF in future CTFs. Therefore, we support a positive answer to RQ1.
Of the 10 invited teams, 5 scored (solved at least one challenge). Since the CTF had many low-complexity challenges and the invited teams were very experienced (for example, one of the teams placed second on ctftime.org in 2016), we assume that the teams that did not score were focused on trying to exploit the platform, as we present next.

Here is another GQM template expressing a further goal of our study: the purpose of this study was to evaluate the security of NIZKCTF against attacks on the platform by the participants in a CTF competition.

To characterize the measurement object, we defined the research question RQ2: Is any participant able to attack the platform and compromise the CTF result?

To answer this question, we ran a bug bounty program with 450 BRL in cash prizes for teams who found vulnerabilities that could compromise the result. During and after the CTF (we kept the platform online for 20 days), teams were not able to do so. Therefore, we support a negative answer to RQ2.
After that, we conducted the second pilot test: the SCMPv8 CTF¹⁷, held during the 8th Computer Science and Engineering Week (SeComp) at the Federal University of São Carlos (UFSCar). The purpose of this study was to evaluate the ease of use of NIZKCTF when being used by Brazilian undergraduate students in a CTF competition.

To characterize the measurement object, we defined the research question RQ3: Would students take the time to learn how to use NIZKCTF when presented with an incentive?

We conducted SCMPv8 in two different tracks: one of them using a traditional web-browser-based CTF platform, and the other one using NIZKCTF. The challenges available in the traditional platform were a subset of those in NIZKCTF, and players were allowed to participate in both tracks. Winners of the NIZKCTF track received 4 tickets to attend the Hackers 2 Hackers Conference (worth a total of 1200 BRL). Students from other universities were invited to participate remotely. The traditional track received 27 team subscriptions, of which 14 teams solved at least one challenge and 3 teams eventually solved every challenge available. The NIZKCTF track received 5 team subscriptions, of which 4 teams solved at least one challenge and 2 teams eventually solved every challenge available. From this data, we infer a mixed answer to RQ3, since only the most skilled teams took the time to use NIZKCTF.
8 Deployment
Pwn2Win 2017 was the first large-scale CTF on which NIZKCTF was deployed in production. During the 48 hours of the competition, 283 teams registered, with 207 scoring at least one point. Of those teams, 30 were from Brazil and 21 scored points. The best-placed Brazilian teams finished in the 11th, 27th and 30th ranking positions. The main repository used to keep track of the submissions had 1372 clones and more than 3800 commits submitted by more than 320 players (repository contributors). The full record of submissions can be found in the submissions repository¹⁸.

During the contest, the scalability of this model was tested and validated. The page containing the challenge listing and scoreboard received 2.7 million HTTP requests. The flag submission process also worked as expected under different loads. The mean time between submissions was 3 minutes, reaching one submission every 2 seconds during peak hours. Figure 4 presents the number of flag submissions per hour.
In moments of many concurrent submissions, we started observing failures in GitHub's merge API. This was an undocumented behavior that had not been triggered in small-scale tests. To mitigate this problem, we patched the application with a hotfix to retry the API request in case of errors. This solution was enough to ensure the correct behavior of the system.
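The hotfix amounted to a retry wrapper of the following kind. This is a minimal sketch; the retry count, delay and the way the merge call is wrapped are illustrative and not the exact code that was deployed.

import time

def merge_with_retry(merge_call, attempts: int = 5, delay: float = 2.0):
    # Retry a flaky merge API call a few times before giving up.
    for attempt in range(attempts):
        try:
            return merge_call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # back off briefly before retrying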
Figure 4. The Pwn2Win CTF 2017 platform processed at least 5 correct flag submissions per hour, except for a brief hiatus on October 22 at 6:00 UTC-2.
9 Conclusions
With the growing interest in CTFs, there is a need for a secure and auditable platform. We presented a novel platform called NIZKCTF: Non-Interactive Zero-Knowledge Capture the Flag. Through the security properties assured by its cryptographic mechanisms, we claim that a CTF running on NIZKCTF is more secure than one running on a traditional platform.

The implementation of NIZKCTF was tested in three different CTFs with different characteristics and sizes. In these events, the proposed system was shown to be a safe, scalable and openly auditable alternative to current CTF platforms. During this period, no team was able to take advantage of the platform, no tampering happened and the full history was available to every team.

As future work, we intend to continue using our proposed platform in upcoming competitions. We also intend to develop a fully web-based client using Ajax to commit directly through the GitHub/GitLab API endpoints. This will reduce the barriers to team registration and participation without compromising the benefits of the platform. We hope that NIZKCTF will further engage the Brazilian community in CTF competitions and cybersecurity training.
References
[1] ABI Research. 2015. Global Cybersecurity Index & Cyberwellness Profiles. Technical Report. International Telecommunications Union, Geneva, CH.
[2] V. R. Basili. 1992. Software Modeling and Measurement: The Goal/Question/Metric Paradigm. Technical Report. University of Maryland at College Park, College Park, MD, USA.
[3] M. Bellare and O. Goldreich. 1993. On Defining Proofs of Knowledge. In 12th Annual International Cryptology Conference. Springer Berlin Heidelberg, Santa Barbara, CA, USA, 390-420. https://doi.org/10.1007/3-540-48071-4_28
[4] D. J. Bernstein, N. Duif, T. Lange, P. Schwabe, and B. Yang. 2012. High-speed high-security signatures. Journal of Cryptographic Engineering 2, 2 (2012), 77-89. https://doi.org/10.1007/s13389-012-0027-1
[5] J. Burket, P. Chapman, T. Becker, C. Ganas, and D. Brumley. 2015. Automatic Problem Generation for Capture-the-Flag Competitions. In 2015 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 15). USENIX Association, Washington, DC, USA.
[6] P. Chapman, J. Burket, and D. Brumley. 2014. PicoCTF: A Game-Based Computer Security Competition for High School Students. In 2014 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 14). USENIX Association, San Diego, CA, USA.
[7] T. Chothia and C. Novakovic. 2015. An Offline Capture The Flag-Style Virtual Machine and an Assessment of Its Value for Cybersecurity Education. In 2015 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 15). USENIX Association, Washington, DC, USA. https://www.usenix.org/conference/3gse15/summit-program/presentation/chothia
[8] K. Chung and J. Cohen. 2014. Learning Obstacles in the Capture The Flag Model. In 2014 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 14). USENIX Association, San Diego, CA, USA. https://www.usenix.org/conference/3gse14/summit-program/presentation/chung
[9] A. Davis, T. Leek, M. Zhivich, K. Gwinnup, and W. Leonard. 2014. The Fun and Future of CTF. In 2014 USENIX Summit on Gaming, Games, and Gamification in Security Education (3GSE 14). USENIX Association, San Diego, CA, USA. https://www.usenix.org/conference/3gse14/summit-program/presentation/davis
[10] K. Evans and F. Reeder. 2010. A Human Capital Crisis in Cybersecurity: Technical Proficiency Matters. Technical Report. Center for Strategic and International Studies, Washington, DC, USA.
[11] E. Kiltz, D. Masny, and J. Pan. 2016. Optimal Security Proofs for Signatures from Identification Schemes. In 36th Annual International Cryptology Conference. Springer Berlin Heidelberg, Santa Barbara, CA, USA, 33-61. https://doi.org/10.1007/978-3-662-53008-5_2
[12] NIST. 2016. Computer Security Resource Center - National Vulnerability Database. https://nvd.nist.gov/vuln/detail/CVE-2016-1576. (Feb. 2016).
[13] B. O'Sullivan. 2009. Making Sense of Revision-Control Systems. Commun. ACM 52, 9 (Sept. 2009), 56-62. https://doi.org/10.1145/1562164.1562183
[14] D. Pointcheval and J. Stern. 2000. Security Arguments for Digital Signatures and Blind Signatures. Journal of Cryptology 13, 3 (2000), 361-396. https://doi.org/10.1007/s001450010003
[15] F. F. Yao and Y. L. Yin. 2005. Design and Analysis of Password-Based Key Derivation Functions. In RSA Conference 2005. Springer Berlin Heidelberg, San Francisco, CA, USA, 245-261. https://doi.org/10.1007/978-3-540-30574-3_17
Notes
1 https://www.h2hc.com.br
2 http://www.insert.uece.br/en/events
3 https://ctftime.org
4 https://github.com/pwn2winctf/nizkctf-tutorial
5 https://github.com/CTFd/CTFd
6 https://github.com/facebook/fbctf
7 https://github.com/mcpa-stlouis/hack-the-arch
8 https://github.com/Nakiami/mellivora
9 https://github.com/UnrealAkama/NightShade
10 https://github.com/picoCTF/picoCTF-Platform-2
11 https://github.com/moloch--/RootTheBox
12 https://github.com/pdautry/py_chall_factory
13 https://github.com/seadog007/RC3-CTF-2016-scoreboard
14 http://mslc.ctf.su/wp/codegategate
15 https://github.com/pwn2winctf/2017
16 https://github.com/pwn2winctf/PTE
17 https://github.com/scmp-ctf/SCMPv8
18 https://github.com/pwn2winctf/2017submissions