arXiv:2301.13652v1 [cs.GT] 31 Jan 2023
Round-Robin Beyond Additive Agents:
Existence and Fairness of Approximate Equilibria*
Georgios Amanatidis1, Georgios Birmpas2, Philip Lazos3,
Stefano Leonardi2, and Rebecca Reiffenhäuser4
1Department of Mathematical Sciences
University of Essex; Colchester, UK
georgios.amanatidis@essex.ac.uk
2Department of Computer, Control, and Management Engineering
Sapienza University of Rome; Rome, Italy
{birbas, leonardi}@diag.uniroma1.it
3Input Output; London, UK
philip.lazos@iohk.io
4Institute for Logic, Language and Computation
University of Amsterdam; Amsterdam, The Netherlands
r.e.m.reiffenhauser@uva.nl
Abstract
Fair allocation of indivisible goods has attracted extensive attention over the last two decades, yield-
ing numerous elegant algorithmic results and producing challenging open questions. The problem
becomes much harder in the presence of strategic agents. Ideally, one would want to design truthful
mechanisms that produce allocations with fairness guarantees. However, in the standard setting with-
out monetary transfers, it is generally impossible to have truthful mechanisms that provide non-trivial
fairness guarantees. Recently, Amanatidis et al. [5] suggested the study of mechanisms that produce
fair allocations in their equilibria. Specifically, when the agents have additive valuation functions,
the simple Round-Robin algorithm always has pure Nash equilibria and the corresponding allocations
are envy-free up to one good (EF1) with respect to the agents' true valuation functions. Following this
agenda, we show that this outstanding property of the Round-Robin mechanism extends much beyond
the above default assumption of additivity. In particular, we prove that for agents with cancelable
valuation functions (a natural class that contains, e.g., additive and budget-additive functions), this simple
mechanism always has equilibria and even its approximate equilibria correspond to approximately EF1
allocations with respect to the agents' true valuation functions. Further, we show that the approximate
EF1 fairness of approximate equilibria surprisingly holds for the important class of submodular
valuation functions as well, even though exact equilibria fail to exist!
*This work was supported by the ERC Advanced Grant 788893 AMDROMA "Algorithmic and Mechanism Design Research
in Online Markets", the MIUR PRIN project ALGADIMAR "Algorithms, Games, and Digital Markets", and the NWO Veni project
No. VI.Veni.192.153.
1 Introduction
Fair division refers to the problem of dividing a set of resources among a group of agents in a way that
every agent feels they have received a "fair" share. The mathematical study of (a continuous version
of) the problem dates back to the work of Banach, Knaster, and Steinhaus [36], who, in a first attempt
to formalize fairness, introduced the notion of proportionality, i.e., each of the $n$ agents receives at least
$1/n$-th of the total value from her perspective. Since then, different variants of the problem have been
studied in mathematics, economics, political science, and computer science, and various fairness notions
have been defined. The most prominent fairness notion is envy-freeness [22, 21, 37], where each agent
values her set of resources at least as much as the set of any other agent. When the available resources are
indivisible items, i.e., items that cannot be split among agents, notions introduced for infinitely divisible
resources, like proportionality and envy-freeness, are impossible to satisfy, even approximately. In the
last two decades, fair allocation of indivisible items has attracted extensive attention, especially within the
theoretical computer science community, yielding numerous elegant algorithmic results for various new
fairness notions tailored to this discrete version of the problem, such as envy-freeness up to one good (EF1)
[28, 16], envy-freeness up to any good (EFX) [18], and maximin share fairness (MMS) [16]. We refer the
interested reader to the surveys of Procaccia [34], Bouveret et al. [15], and Amanatidis et al. [6].
In this work, we study the problem of fairly allocating indivisible goods, i.e., items of non-negative
value, to strategic agents, i.e., agents who might misreport their private information if they have an incen-
tive to do so. Incentivising strategic agents to truthfully report their valuations is a central goal, and often
a notorious challenge, in mechanism design in general. Specifically in fair division, this seems particu-
larly necessary, since any fairness guarantee on the outcome of a mechanism typically holds with respect
to its input, namely the reported preferences of the agents rather than their true, private preferences,
which they may have chosen not to reveal. Without truthfulness, fairness guarantees seem to become
meaningless. Unfortunately, when monetary transfers are not allowed, as is the standard assumption in
fair division, such truthful mechanisms fail to exist for any meaningful notion of fairness, even for simple
settings with two agents who have additive valuation functions [2].
As an alternative, Amanatidis et al. [5] initiated the study of equilibrium fairness: when a mechanism
always exhibits stable (i.e., pure Nash equilibrium) states, each of which corresponds to a fair allocation
with respect to the true valuation functions, the need for extracting agents' true preferences is mitigated.
Surprisingly, they show that for the standard case of additive valuation functions, the simple Round-Robin
routine is such a mechanism with respect to EF1 fairness. Round-Robin takes as input an ordering of the
goods for each agent, and then cycles through the agents and allocates the goods one by one, giving to
each agent their most preferred available good. For agents with additive valuation functions, Round-Robin
is known to produce EF1 allocations (see, e.g., [30]). Note that, without monetary transfers, what distin-
guishes a mechanism from an algorithm is that its input is the, possibly misreported, agents' preferences.
To further explore the interplay between incentives and fairness, we take a step back and focus solely
on this very simple, yet fundamental, allocation protocol. It should be noted that the Round-Robin al-
gorithm is one of the very few fundamental procedures one can encounter throughout the discrete fair
division literature. Its central role is illustrated by various prominent results, besides producing EF1 alloca-
tions: it can be modified to produce approximate MMS allocations [3], as well as EF1 allocations for mixed
goods and chores (i.e., items with negative value) [9]. It produces envy-free allocations with high proba-
bility when the values are drawn from distributions [29], it is used to produce a "nice" initial allocation
as a subroutine in the state-of-the-art approximation algorithms for pairwise maximin share fair (PMMS)
allocations [25] and EFX allocations [4], it has the lowest communication complexity of any known fair
division algorithm, and, most relevant to this work, it is the only algorithm for producing fair allocations
for more than two agents that, when viewed as a mechanism, is known to even have equilibria [8].
We investigate the existence and the EF1 guarantees of approximate pure Nash equilibria of the Round-
Robin mechanism beyond additive valuation functions, i.e., when the goods already assigned to an agent
potentially change how they value the remaining goods. In particular, we are interested in whether any-
thing can be said about classes that largely generalize additive functions, like cancelable functions, i.e.,
functions where the marginal values with respect to any subset maintain the relative ordering of the
goods, and submodular functions, i.e., functions capturing the notion of diminishing returns. Although
the stability and equilibrium fairness properties of Round-Robin have been visited before [8,5], to the best
of our knowledge, we are the ο¬rst to study the problem for non-additive valuation functions and go be-
yond exact pure Nash equilibria. Cancelable functions also generalize budget-additive, unit-demand, and
multiplicative valuation functions [12], and recently have been of interest in the fair division literature as
several results can be extended to this class [12,1,19]. For similar reasons, cancelable functions seem to
be a good pairing with Round-Robin as well, at least in the algorithmic setting (see, e.g., Proposition 2.5).
Nevertheless, non-additive functions seem to be massively harder to analyze in our setting and come
with various obstacles. First, it is immediately clear that, even without strategic agents, the input of an
ordinal mechanism implemented as a simultaneous-move one-shot game, like the Round-Robin mecha-
nism we study here, can no longer capture the complexity of a submodular function (see also the relevant
discussion in Our Contributions). As a result, translating this sequential assignment to an estimate on the
value of each agent's bundle of goods is not obvious. Lastly, and this applies to cancelable functions as
well, assuming equilibria do exist and enough can be shown about the value of the assigned bundles to
establish fairness, there is no reason to expect that any fairness guarantee will hold with respect to the
true valuation functions, as the agents may misreport their preferences in an arbitrary fashion.
1.1 Contribution and Technical Considerations
We study the well-known Round-Robin mechanism (Mechanism 1) for the problem of fairly allocating a set
of indivisible goods to a set of strategic agents. We explore the existence of approximate equilibria, along
with the fairness guarantees that the corresponding allocations provide with respect to the agents' true
valuation functions. Qualitatively, we generalize the surprising connection between the stable states of
this simple mechanism and its fairness properties to all approximate equilibria and for valuation
functions as general as subadditive cancelable and submodular. In more detail, our main contributions can
be summarized as follows:
• We show that the natural generalization of the bluff profile of Aziz et al. [8] is an exact PNE that
always corresponds to an EF1 allocation when agents have cancelable valuation functions (Theorem
3.2 along with Proposition 2.5). Our proof is simple and intuitive and generalizes the results of Aziz
et al. [8] and Amanatidis et al. [5].
• For agents with submodular valuation functions, we show that there are instances where no $(3/4 + \varepsilon)$-
approximate PNE exists (Proposition 3.4), thus creating a separation between the cancelable and
the submodular cases. Nevertheless, we prove that an appropriate generalization of the bluff profile
is a 1/2-approximate PNE (Theorem 3.7) that also produces a 1/2-EF1 allocation with respect to
the true valuation functions (Theorem 3.8).
• We provide a unified proof that connects the factor of an approximate PNE with the fairness ap-
proximation factor of the respective allocation. In particular, any $\alpha$-approximate PNE results in an
$\alpha/2$-EF1 allocation for subadditive cancelable agents (Theorem 4.5), and in an $\alpha/3$-EF1 allocation for
submodular agents (Theorem 4.4). We complete the picture by providing lower bounds in both cases
(Theorem 4.3 and Proposition 4.8), which demonstrate that our results are almost tight.
While this is not the first time Round-Robin is considered for non-additive agents, see, e.g., [13], to the
best of our knowledge, we are the first to study its fairness guarantees for cancelable and submodular
valuation functions, independently of incentives. As a minor byproduct of our work, Theorem 3.8 and
the definition of the bluff profile imply that, given value oracles for the submodular functions, we can use
Round-Robin as a subroutine to produce 1/2-EF1 allocations.
This also raises the question of whether one should allow a more expressive bid, e.g., a value oracle.
While, of course, this is a viable direction, we avoid it here as it comes with a number of issues. Allowing
the input to be exponential in the number of goods is already problematic, especially when simplicity and
low communication complexity are two appealing traits of the original mechanism. Moreover, extracting
orderings from value oracles would essentially result in a mechanism equivalent to ours (if the ordering
of an agent depended only on her function) or to a sequential game (if the orderings depended on all
the functions) which is not what we want to explore here. Note that less information is not necessarily
an advantage towards our goal. While this results in a richer space of equilibria, fairness guarantees are
increasingly harder to achieve.
As a final remark, all the algorithmic procedures we consider run in polynomial time, occasionally
assuming access to value oracles, e.g., Algorithms 2, 3, and 4. Although we do not consider computational
complexity questions here, like how agents compute best responses or how they reach approximate
equilibria, we do consider such questions interesting directions for future work.
1.2 Further Related Work
The problem of fairly allocating indivisible goods to additive agents in the non-strategic setting has been
extensively studied; for a recent survey, see Amanatidis et al. [6]. Although the additivity of the valuation
functions is considered a standard assumption, there are many works that explore richer classes of val-
uation functions. Some prominent examples include the computation of EF1 allocations for agents with
general non-decreasing valuation functions [28], EFX allocations (or relaxations of EFX) for agents
with cancelable valuation functions [12, 1, 19] and subadditive valuation functions [33, 20], respectively,
and approximate MMS allocations for submodular, XOS, and subadditive agents [11, 23].
Moving to the strategic setting, Caragiannis et al. [17] and Markakis and Psomas [31] were the first
to consider the question of whether it is possible to have mechanisms that are truthful and fair at the
same time, again assuming additive agents. Amanatidis et al. [2] resolved this question for two agents,
showing there is no truthful mechanism with fairness guarantees under any meaningful fairness notion.
As a result, subsequent papers considered truthful mechanism design under restricted valuation function
classes [24,10].
The stability of Round-Robin was first studied by Aziz et al. [8], who proved that it always has PNE by
using a special case of a retracted result of Bouveret and Lang [13] (this did not affect the former though;
see [7]). Finally, besides the work of Amanatidis et al. [5] mentioned earlier, the fairness properties of
Round-Robin under strategic agents have recently been studied by Psomas and Verma [35]. Therein it
is shown that Round-Robin, despite being non-truthful, satisfies a relaxation of truthfulness, as it is not
obviously manipulable.
2 Preliminaries
For $a \in \mathbb{N}$, let $[a]$ denote the set $\{1, 2, \ldots, a\}$. We will use $N = [n]$ to denote the set of agents and
$M = \{g_1, \ldots, g_m\}$ to denote the set of goods. Each agent $i \in N$ has a valuation function $v_i : 2^M \to \mathbb{R}_{\geq 0}$
over the subsets of goods. We assume that all $v_i$ are normalized, i.e., $v_i(\emptyset) = 0$. We also adopt the shortcut
$v_i(S \mid T)$ for the marginal value of a set $S$ with respect to a set $T$, i.e., $v_i(S \mid T) = v_i(S \cup T) - v_i(T)$. If $S = \{g\}$,
we write $v_i(g \mid T)$ instead of $v_i(\{g\} \mid T)$. For each agent $i \in N$, we say that $v_i$ is
• non-decreasing (often referred to as monotone), if $v_i(S) \leq v_i(T)$ for any $S \subseteq T \subseteq M$.
• submodular, if $v_i(g \mid S) \geq v_i(g \mid T)$ for any $S \subseteq T \subseteq M$ and $g \notin T$.
• cancelable, if $v_i(S \cup \{g\}) > v_i(T \cup \{g\}) \Rightarrow v_i(S) > v_i(T)$ for any $S, T \subseteq M$ and $g \in M \setminus (S \cup T)$.
• additive, if $v_i(S \cup T) = v_i(S) + v_i(T)$ for every $S, T \subseteq M$ with $S \cap T = \emptyset$.
• subadditive, if $v_i(S \cup T) \leq v_i(S) + v_i(T)$ for every $S, T \subseteq M$.
Throughout this work, we only consider non-decreasing valuation functions, e.g., when we refer to sub-
modular functions, we mean non-decreasing submodular functions. Note that although both submodular
and (subadditive) cancelable functions are strict superclasses of additive functions, neither one is a super-
class of the other.
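These definitions are purely set-theoretic, so on small ground sets they can be verified by brute force. The sketch below is our own illustration, not code from the paper; the function names and the two example valuations are ours. It checks the submodularity and cancelability conditions exactly as stated above and exhibits the last claim: a coverage function is submodular but not cancelable, while a strictly convex function of the cardinality is cancelable but not submodular.

```python
from itertools import combinations

def powerset(M):
    """All subsets of the ground set M, as frozensets."""
    return [frozenset(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def is_submodular(v, M):
    """v(g | S) >= v(g | T) for all S subseteq T subseteq M and g not in T."""
    return all(v(S | {g}) - v(S) >= v(T | {g}) - v(T)
               for S in powerset(M) for T in powerset(M) if S <= T
               for g in M - T)

def is_cancelable(v, M):
    """v(S + g) > v(T + g) implies v(S) > v(T), for every g outside S and T."""
    return all(not (v(S | {g}) > v(T | {g})) or v(S) > v(T)
               for S in powerset(M) for T in powerset(M)
               for g in M - (S | T))

# Coverage function (always submodular): each good covers part of a universe.
cover = {'x': {1, 2}, 'y': {1}, 'z': {2, 3}}
coverage = lambda S: len(set().union(*(cover[g] for g in S))) if S else 0

# Strictly increasing, strictly convex function of |S|: cancelable, not submodular.
square = lambda S: len(S) ** 2
```

Here `coverage` satisfies the submodular inequality but violates cancelability (e.g., $v(\{z, y\}) > v(\{x, y\})$ while $v(\{z\}) = v(\{x\})$), whereas `square` behaves the other way around.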
We will occasionally need an alternative characterization of submodular functions due to Nemhauser
et al. [32].
Theorem 2.1 (Nemhauser et al. [32]). A function $v : 2^M \to \mathbb{R}_{\geq 0}$ is (non-decreasing) submodular if and only
if we have $v(T) \leq v(S) + \sum_{g \in T \setminus S} v(g \mid S)$, for all $S, T \subseteq M$.
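On a small ground set this characterization can also be sanity-checked numerically. In the sketch below (ours, with assumed helper names), a budget-additive function, which is submodular as a concave transform of an additive one, satisfies the inequality for all pairs of sets, while a convex function of the cardinality does not.

```python
from itertools import combinations

def powerset(M):
    """All subsets of M, as frozensets."""
    return [frozenset(c) for r in range(len(M) + 1) for c in combinations(M, r)]

def satisfies_nemhauser(v, M):
    """Theorem 2.1: v(T) <= v(S) + sum over g in T \\ S of v(g | S), for all S, T."""
    return all(v(T) <= v(S) + sum(v(S | {g}) - v(S) for g in T - S)
               for S in powerset(M) for T in powerset(M))

w = {'a': 2, 'b': 3, 'c': 1}
budget_additive = lambda S: min(sum(w[g] for g in S), 4)  # cap B = 4; submodular
square = lambda S: len(S) ** 2                            # monotone but not submodular
```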
Also, the following lemma summarizes some easy observations about cancelable functions.
Lemma 2.2. If $v : 2^M \to \mathbb{R}_{\geq 0}$ is cancelable, then $v(S \cup R) > v(T \cup R) \Rightarrow v(S) > v(T)$, implying that
$v(S) \geq v(T) \Rightarrow v(S \cup R) \geq v(T \cup R)$, for any $S, T, R \subseteq M$ such that $R \subseteq M \setminus (S \cup T)$. In particular,
$v(S) = v(T) \Rightarrow v(S \cup R) = v(T \cup R)$.
Note that, for disjoint $S, T \subseteq M$, Lemma 2.2 directly implies that $\arg\max_{g \in S} v(g) \subseteq \arg\max_{g \in S} v(g \mid T)$.
Despite the fact that the agents have valuation functions, the mechanism we study (Mechanism 1) is
ordinal, i.e., it only takes as input a preference ranking from each agent. Formally, the preference ranking
$\succ_i$, which agent $i$ reports, defines a total order on $M$, i.e., $g \succ_i g'$ implies that good $g$ precedes good $g'$ in
agent $i$'s declared preference ranking.1 We call the vector of the agents' declared preference rankings,
$\succ\, = (\succ_1, \ldots, \succ_n)$, the reported profile for the instance. So, while an instance of our problem is an ordered triple
$(N, M, \mathbf{v})$, where $\mathbf{v} = (v_1, \ldots, v_n)$ is a vector of the agents' valuation functions, the input to Mechanism 1
is $(N, M, \succ)$ instead.
Note that $\succ_i$ may not reflect the actual underlying values, i.e., $g \succ_i g'$ does not necessarily mean that
$v_i(g) > v_i(g')$ or, more generally, $v_i(g \mid S) > v_i(g' \mid S)$ for a given $S \subseteq M$. This might be due to agent $i$
misreporting her preference ranking, or due to the fact that any single preference ranking is not expressive
enough to fully capture all the partial orders induced by a submodular function. Nevertheless, a valuation
function $v_i$ does induce a true preference ranking $\succeq^*_{i|S}$ for each set $S \subseteq M$, which is a partial order, i.e.,
$g \succeq^*_{i|S} g' \Leftrightarrow v_i(g \mid S) \geq v_i(g' \mid S)$ for all $g, g' \in M$. We use $\succ^*_{i|S}$ if the corresponding preference ranking is
strict, i.e., when $g \succeq^*_{i|S} g' \wedge g' \succeq^*_{i|S} g \Rightarrow g = g'$, for all $g, g' \in M \setminus S$. For additive (and, more generally, for
cancelable) valuations, we drop $S$ from the notation and simply write $\succeq^*_i$ or $\succ^*_i$. Finally, for a total order $\succ$
on $M$ and a set $S \subseteq M$, we use $\mathrm{top}(\succ, S)$ to denote the "largest" element of $S$ with respect to $\succ$.
1See the discussion after the statement of Mechanism 1 about why assuming that the reported preference rankings are total
(rather than partial) orders is without loss of generality.
2.1 Fairness Notions
A fair division mechanism produces an allocation $(A_1, \ldots, A_n)$, where $A_i$ is the bundle of agent $i$, which
is a partition of $M$. The latter corresponds to assuming no free disposal, namely, all the goods must be
allocated.
There are several different notions which attempt to capture which allocations are "fair". The most
prominent such notion in the fair division literature has been envy-freeness (EF) [22, 21, 37], which has
been the starting point for other relaxed notions, more appropriate for the indivisible goods setting we
study here, such as envy-freeness up to one good (EF1) [28, 16] and envy-freeness up to any good (EFX) [18].
Here we focus on EF1.
Definition 2.3. An allocation $(A_1, \ldots, A_n)$ is
• $\alpha$-envy-free ($\alpha$-EF), if for every $i, j \in N$, $v_i(A_i) \geq \alpha \cdot v_i(A_j)$.
• $\alpha$-envy-free up to one good ($\alpha$-EF1), if for every pair of agents $i, j \in N$, with $A_j \neq \emptyset$, there exists a
good $g \in A_j$, such that $v_i(A_i) \geq \alpha \cdot v_i(A_j \setminus \{g\})$.
When, for every agent $j \in N$ with $A_j \neq \emptyset$, we have $v_i(A_i) \geq \alpha \cdot v_i(A_j \setminus \{g\})$ for some good $g \in A_j$, we
say that $(A_1, \ldots, A_n)$ is $\alpha$-EF1 from agent $i$'s perspective, even when the allocation is not $\alpha$-EF1!
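Definition 2.3 translates directly into a brute-force check. The sketch below is our own illustration (the names `allocation` and `valuations` are assumed for the example, not taken from the paper): `allocation` maps each agent to her bundle and `valuations` maps each agent to her set function.

```python
def is_alpha_ef1(allocation, valuations, alpha=1.0):
    """True iff for every pair i, j with A_j nonempty there is a good g in A_j
    with v_i(A_i) >= alpha * v_i(A_j \\ {g})  (Definition 2.3)."""
    for i, A_i in allocation.items():
        v = valuations[i]
        for j, A_j in allocation.items():
            if i == j or not A_j:
                continue
            if not any(v(A_i) >= alpha * v(A_j - {g}) for g in A_j):
                return False
    return True

# Two agents with identical additive values: EF1 but far from envy-free.
w = {'a': 5, 'b': 1, 'c': 1}
additive = lambda S: sum(w[g] for g in S)
valuations = {1: additive, 2: additive}
```

For instance, the allocation $(\{b, c\}, \{a\})$ passes the check: agent 1 envies agent 2, but removing good $a$ from agent 2's bundle eliminates the envy.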
2.2 Mechanisms and Equilibria
We are interested in mechanisms that produce allocations with EF1 guarantees. When no payments are
allowed, like in our setting, an allocation mechanism $\mathcal{M}$ is just an allocation algorithm that takes as input
the agents' reported preferences. In particular, Round-Robin, the mechanism of interest here, takes as
input the reported profile $\succ$ and produces an allocation of all the goods. This distinction in terminology
is necessary as the reported input may not be consistent with the actual valuation functions due to the
agents' incentives. When the allocation returned by $\mathcal{M}(\succ)$ has some fairness guarantee, e.g., it is 0.5-EF1,
we will attribute the same guarantee to the reported profile itself, i.e., we will say that $\succ$ is 0.5-EF1.
We study the fairness guarantees of the (approximate) pure Nash equilibria of Round-Robin. Given a
preference profile $\succ\, = (\succ_1, \ldots, \succ_n)$, we write $\succ_{-i}$ to denote $(\succ_1, \ldots, \succ_{i-1}, \succ_{i+1}, \ldots, \succ_n)$, and given a pref-
erence ranking $\succ'_i$, we use $(\succ'_i, \succ_{-i})$ to denote the profile $(\succ_1, \ldots, \succ_{i-1}, \succ'_i, \succ_{i+1}, \ldots, \succ_n)$. For the next def-
inition we abuse the notation slightly: given an allocation $(A_1, \ldots, A_n)$ produced by $\mathcal{M}(\succ)$, we write
$v_i(\mathcal{M}(\succ))$ to denote $v_i(A_i)$; similarly for $\mathcal{M}(\succ'_i, \succ_{-i})$.
Definition 2.4. Let $\mathcal{M}$ be an allocation mechanism and consider a preference profile $\succ\, = (\succ_1, \ldots, \succ_n)$. We
say that the total order $\succ_i$ is an $\alpha$-approximate best response to $\succ_{-i}$ if for every total order, i.e., permutation,
$\succ'_i$ of $M$, we have $\alpha \cdot v_i(\mathcal{M}(\succ'_i, \succ_{-i})) \leq v_i(\mathcal{M}(\succ))$. The profile $\succ$ is an $\alpha$-approximate pure Nash equilibrium
(PNE) if, for each $i \in N$, $\succ_i$ is an $\alpha$-approximate best response to $\succ_{-i}$.
When $\alpha = 1$, we simply refer to best responses and exact PNE.
2.3 The Round-Robin Mechanism
We state Round-Robin as a mechanism (Mechanism 1) that takes as input a reported profile $(\succ_1, \ldots, \succ_n)$.
For the sake of presentation, we assume that the agents in each round (lines 3–6) are always considered
according to their "name", i.e., agent 1 is considered first, agent 2 second, and so on, instead of having
a permutation determining the priority of the agents as an extra argument of the input. This is without
loss of generality, as it only requires renaming the agents accordingly. We often refer to the process of
allocating a good to an agent (lines 4–6) as a step of the mechanism.
Mechanism 1 Round-Robin($\succ_1, \ldots, \succ_n$) // For $i \in N$, $\succ_i$ is the reported preference ranking of agent $i$.
1: $S = M$; $(A_1, \ldots, A_n) = (\emptyset, \ldots, \emptyset)$; $k = \lceil m/n \rceil$
2: for $r = 1, \ldots, k$ do // Each value of $r$ determines the corresponding round.
3:   for $i = 1, \ldots, n$ do // The combination of $r$ and $i$ determines the corresponding step.
4:     $g = \mathrm{top}(\succ_i, S)$
5:     $A_i = A_i \cup \{g\}$ // The current agent receives (what appears to be) her favorite available good.
6:     $S = S \setminus \{g\}$ // The good is no longer available.
7: return $(A_1, \ldots, A_n)$
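For concreteness, Mechanism 1 admits a direct transcription into Python. The sketch below is ours, not the authors' code; agents are 0-indexed and each ranking is a list of goods, most preferred first.

```python
from math import ceil

def round_robin(rankings, goods):
    """Mechanism 1: cycle through the agents in index order, giving the active
    agent her highest-ranked still-available good."""
    n = len(rankings)
    available = set(goods)
    bundles = [set() for _ in range(n)]
    for _ in range(ceil(len(goods) / n)):        # rounds r = 1, ..., ceil(m/n)
        for i in range(n):                       # one step per agent
            if not available:
                break
            g = next(x for x in rankings[i] if x in available)  # top(>_i, S)
            bundles[i].add(g)
            available.remove(g)
    return bundles
```

For example, with reported rankings `['a', 'b', 'c', 'd']` and `['a', 'c', 'b', 'd']`, agent 0 takes a and then b, while agent 1 takes c and then d.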
Note that there is no need for a tie-breaking rule here, as the reported preference rankings are assumed
to be total orders. Equivalently, one could allow for partial orders (either directly or via cardinal bids, as
is done in [5]) paired with a deterministic tie-breaking rule, e.g., lexicographic tie-breaking, a priori
known to the agents.
In the rest of the paper, we will assume that $m = kn$ for some $k \in \mathbb{N}$, for simplicity. Note that this is
without loss of generality, as we may introduce at most $n - 1$ dummy goods that have marginal value of
0 with respect to any set for everyone and append them at the end of the reported preference rankings, to
be allocated during the last steps of the mechanism.
We have already mentioned that Round-Robin as an algorithm produces EF1 allocations for additive
agents, where the input is assumed to be any strict variant $\succ^* = (\succ^*_{1|\emptyset}, \succ^*_{2|\emptyset}, \ldots, \succ^*_{n|\emptyset})$ of the truthful
profile $(\succeq^*_{1|\emptyset}, \succeq^*_{2|\emptyset}, \ldots, \succeq^*_{n|\emptyset})$, i.e., the profile where each agent ranks the goods according to their singleton value.
This property fully extends to cancelable valuation functions as well. The proof of Proposition 2.5 is
rather simple, but not as straightforward as in the additive case; note that it requires Lemma 3.3 from the
next section.
Proposition 2.5. Let $\succ^*$ be as described above. When all agents have cancelable valuation functions, the
allocation returned by Round-Robin($\succ^*$) is EF1.
Proof. Let $(A_1, \ldots, A_n)$ be the allocation returned by Round-Robin($\succ^*$). Fix two agents, $i$ and $j$, and let
$A_i = \{x_1, x_2, \ldots, x_k\}$ and $A_j = \{y_1, y_2, \ldots, y_k\}$, where the goods in both sets are indexed according to the
round in which they were allocated to $i$ and $j$, respectively. By the way Mechanism 1 is defined, we have
$x_r \succ^*_{i|\emptyset} y_{r+1}$, for all $r \in [k-1]$. Therefore, $x_r \succeq^*_{i|\emptyset} y_{r+1}$, or equivalently, $v_i(x_r) \geq v_i(y_{r+1})$, for all $r \in [k-1]$.
Thus, by Lemma 3.3, we get $v_i(A_i \setminus \{x_k\}) \geq v_i(A_j \setminus \{y_1\})$, and using the fact that $v_i$ is non-decreasing,
$v_i(A_i) \geq v_i(A_j \setminus \{y_1\})$. □
3 Existence of approximate PNE
At first glance, it is not clear why Mechanism 1 has any pure Nash equilibria, even approximate ones
for a constant approximation factor. For additive valuation functions, however, it is known that for any
instance we can construct a simple preference profile, called the bluff profile, which is an exact PNE. While
the proof of this fact, in its full generality, is fragmented over three papers [8, 14, 5], we give here a simple
proof that generalizes the existence of exact PNE to cancelable valuation functions. As we shall see later,
extending this result to submodular functions is not possible, and even defining a generalization of the
bluff profile which is a 0.5-approximate PNE is not straightforward.
3.1 Cancelable valuations
Defining the bluff profile for cancelable agents, we will start from a strict variant of the truthful profile
$(\succeq^*_{1|\emptyset}, \succeq^*_{2|\emptyset}, \ldots, \succeq^*_{n|\emptyset})$, i.e., the profile where each agent ranks the goods according to their value (as
singletons) in descending order, as we did for Proposition 2.5. Assume that any ties are broken deterministically
to get the strict version $\succ^* = (\succ^*_{1|\emptyset}, \succ^*_{2|\emptyset}, \ldots, \succ^*_{n|\emptyset})$. Now, consider Round-Robin($\succ^*$), let $h_1, h_2, \ldots, h_m$
be a renaming of the goods according to the order in which they were allocated, and let $\succ_b$ be the corresponding
total order (i.e., $h_1 \succ_b h_2 \succ_b \cdots \succ_b h_m$). The bluff profile is the preference profile $\succ^b = (\succ_b, \succ_b, \ldots, \succ_b)$,
where everyone ranks the goods in the order they were allocated in Round-Robin($\succ^*$). The following fact
follows directly from the definition of the bluff profile and the description of Round-Robin.
Fact 3.1. If $\succ^*$ is a strict version of the truthful preference profile and $\succ^b$ is the corresponding bluff profile,
then Round-Robin($\succ^b$) and Round-Robin($\succ^*$) both return the same allocation.
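To make the construction concrete, here is a sketch of the bluff profile for valuations given by singleton values, together with a check of Fact 3.1 on a toy instance. The code and all names in it are ours (it reuses a minimal Round-Robin that also records the allocation order); it is an illustration under the assumption that ties are broken by the given order of the goods.

```python
from math import ceil

def round_robin(rankings, goods):
    """Mechanism 1; also returns the goods in the order they were allocated."""
    n, available = len(rankings), set(goods)
    bundles, order = [set() for _ in range(n)], []
    for _ in range(ceil(len(goods) / n)):
        for i in range(n):
            if not available:
                break
            g = next(x for x in rankings[i] if x in available)
            bundles[i].add(g)
            order.append(g)
            available.remove(g)
    return bundles, order

def bluff_profile(singleton_values, goods):
    """singleton_values[i][g] = v_i({g}). Ties are broken by the order of `goods`."""
    n = len(singleton_values)
    truthful = [sorted(goods, key=lambda g: -singleton_values[i][g])
                for i in range(n)]
    _, order = round_robin(truthful, goods)      # the renaming h_1, ..., h_m
    return [list(order) for _ in range(n)]       # everyone reports the same order

vals = [{'a': 4, 'b': 3, 'c': 2, 'd': 1}, {'a': 4, 'b': 1, 'c': 3, 'd': 2}]
goods = ['a', 'b', 'c', 'd']
truthful = [sorted(goods, key=lambda g: -vals[i][g]) for i in range(2)]
bluff = bluff_profile(vals, goods)
```

As Fact 3.1 predicts, running Round-Robin on `bluff` and on `truthful` yields the same allocation on this instance (agent 0 gets a and b, agent 1 gets c and d).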
An interesting observation is that Fact 3.1, combined with Proposition 2.5 and Theorem 3.2, implies
that there is at least one PNE of Mechanism 1 which is EF1! Of course, it is now known that all
exact PNE of Round-Robin are EF1 for agents with additive valuation functions and, as we will see later
on, even approximate PNE have (approximate) EF1 guarantees for much more general instances, including
the case of subadditive cancelable valuation functions.
Theorem 3.2. When all agents have cancelable valuation functions, the bluff profile is an exact PNE of
Mechanism 1.
We first need to prove the following lemma, which generalizes a straightforward property of additive
functions to cancelable functions.
Lemma 3.3. Suppose that $v(\cdot)$ is a cancelable valuation function. Consider sets $X = \{x_1, x_2, \ldots, x_k\}$ and
$Y = \{y_1, y_2, \ldots, y_k\}$. If for every $i \in [k]$, we have that $v(x_i) \geq v(y_i)$, then $v(X) \geq v(Y)$.
Proof. We begin by arguing that it is without loss of generality to first assume that the elements of $X$ are
ordered by non-increasing value with respect to $v$, and then also assume that $y_i \notin \{x_1, x_2, \ldots, x_{i-1}\}$, for
any $i \in [k]$. The former is indeed a matter of reindexing, if necessary, the elements of $X$ and consistently
reindexing the corresponding elements of $Y$. For the latter, suppose that there exists $i$ such that $y_i = x_t$
for $t \leq i - 1$, and consider the smallest $t$ for which this happens. We have $v(x_t) \geq v(x_{t+1}) \geq \cdots \geq v(x_i)$
by the assumption on the ordering of the elements of $X$, $v(x_i) \geq v(y_i)$ by hypothesis, and $v(y_i) = v(x_t)$.
Thus, $v(x_t) = v(x_{t+1}) = \cdots = v(x_i)$. Now we may rename the elements of $Y$ to $\{y'_1, \ldots, y'_k\}$ by inserting
$y_i$ into the $t$-th position, i.e., $y'_t = y_i$, $y'_\ell = y_{\ell-1}$, for $t + 1 \leq \ell \leq i$, and $y'_\ell = y_\ell$, for $\ell < t$ or $\ell > i$. Since only
$y_t, y_{t+1}, \ldots, y_i$ changed indices but $v(x_t) = v(x_{t+1}) = \cdots = v(x_i)$, we again have that $v(x_i) \geq v(y'_i)$ for
every $i \in [k]$. Moreover, now the smallest $\ell$ for which there exists $i > \ell$ such that $y_i = x_\ell$ is strictly larger
than $t$. By repeating this renaming of the elements of $Y$, we end up with a renaming $\{y^*_1, \ldots, y^*_k\}$ such that
for every $i \in [k]$, $v(x_i) \geq v(y^*_i)$ and $y^*_i \notin \{x_1, x_2, \ldots, x_{i-1}\}$.
So, assuming that the elements of $X$ are ordered by non-increasing value with respect to $v$ and that
$y_i \notin \{x_1, x_2, \ldots, x_{i-1}\}$, for any $i \in [k]$, suppose towards a contradiction that $v(X) < v(Y)$. That is,
$v(\{x_1, x_2, \ldots, x_k\}) < v(\{y_1, y_2, \ldots, y_k\})$. Observe that if $v(\{x_1, x_2, \ldots, x_{k-1}\}) \geq v(\{y_1, y_2, \ldots, y_{k-1}\})$, this
would imply that $v(\{x_1, \ldots, x_{k-1}, y_k\}) \geq v(\{y_1, \ldots, y_{k-1}, y_k\})$, by the definition of cancelable valuations
and the fact that $y_k \notin \{x_1, \ldots, x_{k-1}\} \cup \{y_1, \ldots, y_{k-1}\}$. This leads to
$$v(\{x_1, \ldots, x_{k-1}, x_k\}) \geq v(\{x_1, \ldots, x_{k-1}, y_k\}) \geq v(\{y_1, \ldots, y_{k-1}, y_k\}),$$
where the first inequality follows from $v(x_k) \geq v(y_k)$ and Lemma 2.2, contradicting our initial assumption.
Therefore, $v(\{x_1, \ldots, x_{k-1}\}) < v(\{y_1, \ldots, y_{k-1}\})$. By repeating the same argument $k - 2$ more times, we
end up with $v(x_1) < v(y_1)$, a contradiction. □
Proof of Theorem 3.2.Now we show that the bluο¬ proο¬le for cancelable valuations is an exact PNE. Con-
sider the goods named β1, . . . ,βπas in the bluο¬ proο¬le, i.e., by the order in which they are picked when
8
each agent reports their preference order to be the one induced by all singleton good values. Consider agent $i$. Her assigned set of goods under the bluff profile is $A^b_i = \{h_i, h_{n+i}, \ldots, h_{(k-1)n+i}\}$, where $k = m/n$. Assume now that she deviates from $\succ^b$ to $\succ_i$, resulting in some allocated set $A_i = \{y_1, y_2, \ldots, y_k\}$, where we assume $y_j$ to be allocated in round $j$. We need to show $v_i(A^b_i) \ge v_i(A_i)$.

To this end, we compare the goods allocated to agent $i$ in both reports, one by one. If $v_i(y_j) \le v_i(h_{(j-1)n+i})$ for every $j \in [k]$, then we are done by applying Lemma 3.3 with $A^b_i$ and $A_i$. If some of these inequalities fail, let $r$ denote the latest round such that $v_i(y_r) > v_i(h_{(r-1)n+i})$. Therefore, in the execution of Mechanism 1 with the bluff profile as input, $y_r$ was no longer available in round $r$. However, $y_r$ becomes available in round $r$ once agent $i$ deviates. This can only stem from the fact that at some point before round $r$, a good $h_t$ with $t > (r-1)n+i$ was picked (since the overall number of goods picked per round always stays the same). Clearly, the only agent who could have done so (since she is the only one deviating from the common bluff order) is agent $i$. Therefore, it holds that $h_t = y_q$ for some $q < r$. Now, we replace the ordered set $Y = (y_1, y_2, \ldots, y_k)$ by $Y' = (y_1, \ldots, y_{q-1}, y_r, y_{q+1}, \ldots, y_{r-1}, y_q, y_{r+1}, \ldots, y_k)$, i.e., we simply exchange $y_q$ and $y_r$. It will be convenient to rename $y_1, \ldots, y_k$ so that $Y' = (y'_1, y'_2, \ldots, y'_k)$.

We claim that if agent $i$ reports a preference ranking $\succ'_i$ that starts with all goods in $Y'$, in that specific order, followed by everything else, in any order, she still gets $A_i$ but the goods are allocated in the order suggested by $Y'$. Indeed, first notice that the first $q-1$ rounds of Round-Robin will be the same as in the run with the original deviation $\succ_i$. Further, $y'_q = y_r$ is allocated earlier under $\succ'_i$ than under $\succ_i$, and thus it surely is available at the time. After that, rounds $q+1$ to $r-1$ will be the same as in the run with the deviation $\succ_i$. Now $y'_r = y_q$ is allocated later than before, namely in round $r$, but it is not among the first $(r-1)n+i$ goods in the bluff order, as noted above, which means it is not allocated to any other agent in any round before the $r$-th under $\succ'_i$. Finally, rounds $r+1$ to $k$ will be the same as in the run with $\succ_i$.

Although agent $i$ still is assigned the same set $A_i$ by deviating to $\succ'_i$, we now have $v_i(y'_r) = v_i(y_q) \le v_i(h_{(r-1)n+i})$, where the inequality holds because both goods are available in round $r$ of the bluff run, and agent $i$ prefers $h_{(r-1)n+i}$. Also, all later goods in $Y'$ remain unchanged, i.e., $y'_j = y_j$ for $j > r$. Therefore, the latest occurrence of some $y'_\ell$ with $v_i(y'_\ell) > v_i(h_{(\ell-1)n+i})$ now happens at an earlier point in the sequence, if at all. Repeating this process until no such occurrence is left yields an ordering $Y^* = (y^*_1, y^*_2, \ldots, y^*_k)$ of $A_i$ such that for all $j \in [k]$, $v_i(y^*_j) \le v_i(h_{(j-1)n+i})$. Now using Lemma 3.3 completes the proof. $\square$
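For concreteness, the mechanism analyzed throughout (Mechanism 1, i.e., Round-Robin run on the reported rankings) can be sketched as follows. This is a minimal illustration under an encoding of our own: goods are integers $0, \ldots, m-1$, each report is a list ordering the goods from best to worst, agents pick cyclically, and the active agent receives her top-ranked available good.

```python
def round_robin(m, rankings):
    """Allocate goods 0..m-1 given reported preference rankings.

    rankings[i] is agent i's reported total order (best first) over the goods.
    Agents pick in the cyclic order 0, 1, ..., n-1; on her turn, an agent
    receives her highest-ranked good among those still available.
    """
    n = len(rankings)
    available = set(range(m))
    bundles = [[] for _ in range(n)]
    for t in range(m):
        i = t % n  # the active agent in step t
        g = next(x for x in rankings[i] if x in available)
        bundles[i].append(g)
        available.remove(g)
    return bundles
```

In particular, when all reports are identical (as in the bluff profile), agent $i$ simply receives goods number $i$, $n+i$, $2n+i, \ldots$ of the common ranking, matching the description of $A^b_i$ above.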
3.2 Submodular valuations
We move on to the much more general class of submodular valuations. In order to define the bluff profile in this case, we again would like to start from the truthful profile. However, recall that Round-Robin restricts each agent's report to specifying an ordering on the good set $M$, and these preference rankings are not expressive enough to fully capture submodular valuation functions. In fact, it is not obvious what "truthful" means here without further assumptions on what information is known by the agents. Still, we define a truthfully greedy allocation and use this as our starting point.

Imagine that, instead of having a full preference profile from the beginning, we only ask the active agent $i$ (i.e., the agent to which we are about to allocate a new good) for the good with the largest marginal value with respect to her current set of goods $A_i$ and give this to her. Let $h_1, h_2, \ldots, h_m$ be a renaming of the goods according to the order in which they would be allocated in this hypothetical truthfully greedy scenario and $\succ^b$ be the corresponding total order. Like in the cancelable case, the bluff profile is the preference profile $(\succ^b, \succ^b, \ldots, \succ^b)$.
Formally, the renaming of the goods is performed as described in Algorithm 2 below. It should be noted that this definition of the bluff profile is consistent with the definition for cancelable functions, assuming that all ties are resolved lexicographically. Also notice that the allocation Round-Robin$(\succ^b)$ produced under the bluff profile is exactly $(T_1, T_2, \ldots, T_n)$, as described in Algorithm 2, i.e., $T_i = A^b_i = \{h_i, h_{n+i}, \ldots, h_{(k-1)n+i}\}$, where recall that $k = m/n$.
Algorithm 2 Greedy renaming of goods for defining the bluff profile
Input: $N$, $M$, value oracles for $v_1(\cdot), \ldots, v_n(\cdot)$
1: $T_i = \emptyset$ for $i \in [n]$
2: for $j = 1, \ldots, m$ do
3:   $i = (j-1) \,(\mathrm{mod}\ n) + 1$
4:   $h_j = \arg\max_{g \in M \setminus \bigcup_\ell T_\ell} v_i(g \,|\, T_i)$  // Ties are broken lexicographically.
5:   $T_i = T_i \cup \{h_j\}$
6: return $(h_1, h_2, \ldots, h_m)$
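Algorithm 2 admits a direct transcription. The sketch below uses a zero-indexed encoding of our own, with the value oracles passed as a single Python function `marginal(i, g, T)` standing for $v_i(g \,|\, T)$; lexicographic tie-breaking is realized via the tuple key.

```python
def greedy_renaming(n, m, marginal):
    """Algorithm 2: compute the renaming h_1, ..., h_m defining the bluff profile.

    marginal(i, g, T) is a value oracle for v_i(g | T), agent i's marginal
    value of good g on top of the set T.  Goods are 0..m-1, agents 0..n-1.
    In step j the active agent is j mod n, and she is (hypothetically) given
    an available good of maximum marginal value with respect to her current
    set T_i; ties are broken lexicographically (smallest good index wins).
    """
    T = [set() for _ in range(n)]
    taken = set()
    h = []
    for j in range(m):
        i = j % n
        best = max(set(range(m)) - taken,
                   key=lambda g: (marginal(i, g, T[i]), -g))
        h.append(best)
        T[i].add(best)
        taken.add(best)
    return h, T
```

For additive valuations the oracle ignores its last argument, and the returned sets $T_i$ coincide with the Round-Robin bundles under the bluff profile, as noted above.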
The main result of this section is Theorem 3.7, stating that the bluff profile is a $\frac{1}{2}$-approximate PNE when agents have submodular valuation functions. While this sounds weaker than Theorem 3.2, it should be noted that for submodular agents Mechanism 1 does not have PNE in general, even for relatively simple instances, as stated in Proposition 3.4. In fact, even the existence of approximate equilibria can be seen as rather surprising, given the generality of the underlying valuation functions.

Proposition 3.4. There exists an instance where all agents have submodular valuation functions such that Mechanism 1 has no $(\frac{3}{4} + \varepsilon)$-approximate PNE.
Proof. Consider an instance with 2 agents and 4 goods $M = \{g_1, g_2, g_3, g_4\}$, with the following valuation for all possible 2-sets:
\begin{align*}
v_1(\{g_1, g_2\}) &= 3 & v_1(\{g_1, g_3\}) &= 3 & v_1(\{g_1, g_4\}) &= 4 \\
v_1(\{g_2, g_3\}) &= 4 & v_1(\{g_2, g_4\}) &= 3 & v_1(\{g_3, g_4\}) &= 3 \\
v_2(\{g_1, g_2\}) &= 4 & v_2(\{g_1, g_3\}) &= 4 & v_2(\{g_1, g_4\}) &= 3 \\
v_2(\{g_2, g_3\}) &= 3 & v_2(\{g_2, g_4\}) &= 4 & v_2(\{g_3, g_4\}) &= 4
\end{align*}
In addition, all individual goods have the same value: $v_1(x) = v_2(x) = 2$ for $x \in M$, while all 3-sets and 4-sets have value 4, for both agents.
We begin by establishing that this valuation function is indeed submodular for both agents. Observe that for any set $S \subseteq M$ and $i \in [2]$, $j \in [4]$, we have:
\begin{align*}
|S| = 0 &\Rightarrow v_i(g_j \,|\, S) \in \{2\} \\
|S| = 1 &\Rightarrow v_i(g_j \,|\, S) \in \{1, 2\} \\
|S| = 2 &\Rightarrow v_i(g_j \,|\, S) \in \{0, 1\} \\
|S| = 3 &\Rightarrow v_i(g_j \,|\, S) = 0,
\end{align*}
which immediately implies that both valuation functions are indeed submodular.
Notice that for any reported preferences $\succ_1, \succ_2$, one of the two agents will receive goods leading to a value of 3. If this is agent 1, she can easily deviate and get 4 instead. In particular, if agent 2 has good $g_2$ or $g_3$ first in her preferences, then agent 1 can get $\{g_1, g_4\}$, and if agent 2 has good $g_1$ or $g_4$ first, then agent 1 can get $\{g_2, g_3\}$ instead. On the other hand, if agent 2 received a value of 3, she can also always deviate to 4. Notice that for any $g_j$, agent 2 always has two different sets $\{g_j, g_a\}, \{g_j, g_b\}$ with value 4 and one $\{g_j, g_c\}$ with value 3. Thus, for any preference of agent 1 with $g_{\hat{a}} \succ_1 g_{\hat{b}} \succ_1 g_{\hat{c}} \succ_1 g_{\hat{d}}$, agent 2 can deviate and get either $\{g_{\hat{c}}, g_{\hat{d}}\}$ or $\{g_{\hat{b}}, g_{\hat{d}}\}$, one of which must have value 4. Therefore, in every outcome there exists an agent that can deviate to improve her value from 3 to 4. $\square$
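The case analysis in the proof is small enough to verify mechanically. The sketch below (our own zero-indexed encoding, with goods $0, \ldots, 3$ standing for $g_1, \ldots, g_4$) checks that both valuations are submodular and that, in each of the $24 \times 24$ preference profiles, the agent who ends up with value 3 has a deviation worth 4.

```python
from itertools import combinations, permutations

GOODS = (0, 1, 2, 3)  # stand-ins for g1, g2, g3, g4

# Values of the 2-sets from the proof; singletons are worth 2, while
# 3-sets and the full set are worth 4 (and the empty set 0).
PAIRS = [
    {(0, 1): 3, (0, 2): 3, (0, 3): 4, (1, 2): 4, (1, 3): 3, (2, 3): 3},  # v_1
    {(0, 1): 4, (0, 2): 4, (0, 3): 3, (1, 2): 3, (1, 3): 4, (2, 3): 4},  # v_2
]

def v(i, S):
    S = tuple(sorted(S))
    if len(S) <= 1:
        return 2 * len(S)
    return PAIRS[i][S] if len(S) == 2 else 4

def is_submodular(i):
    # decreasing marginals: v_i(g|S) >= v_i(g|T) whenever S is a subset of T
    subsets = [frozenset(S) for r in range(5) for S in combinations(GOODS, r)]
    return all(v(i, S | {g}) - v(i, S) >= v(i, T | {g}) - v(i, T)
               for S in subsets for T in subsets if S <= T
               for g in GOODS if g not in T)

def round_robin(r1, r2):
    avail, A = set(GOODS), ([], [])
    for t in range(4):
        rank = (r1, r2)[t % 2]
        g = next(x for x in rank if x in avail)
        A[t % 2].append(g)
        avail.remove(g)
    return A

assert is_submodular(0) and is_submodular(1)

# In every profile, the agent receiving value 3 can deviate to value 4,
# so no profile is a (3/4 + eps)-approximate PNE for any eps > 0.
for r1 in permutations(GOODS):
    for r2 in permutations(GOODS):
        u1, u2 = v(0, round_robin(r1, r2)[0]), v(1, round_robin(r1, r2)[1])
        best1 = max(v(0, round_robin(d, r2)[0]) for d in permutations(GOODS))
        best2 = max(v(1, round_robin(r1, d)[1]) for d in permutations(GOODS))
        assert (u1 == 3 and best1 == 4) or (u2 == 3 and best2 == 4)
verified = True
```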
Moving towards the proof of Theorem 3.7 for the submodular case, we note that although it is very different from that of Theorem 3.2, we will still need an analog of the main property therein, i.e., the existence of a good-wise comparison between the goods an agent gets under the bluff profile and the ones she gets by deviating. As expected, the corresponding property here (see Lemma 3.5) is more nuanced and does not immediately imply Theorem 3.7, as we are now missing the analog of Lemma 3.3.

Throughout this section, we are going to argue about an arbitrary agent $i$. To simplify the notation, let us rename $T_i = A^b_i = \{h_i, h_{n+i}, \ldots, h_{(k-1)n+i}\}$ to simply $X = \{x_1, x_2, \ldots, x_k\}$, where we have kept the order of indices the same, i.e., $x_j = h_{(j-1)n+i}$. This way, the goods in $X$ are ordered according to how they were allocated to agent $i$ in the run of Mechanism 1 with the bluff profile as input.

We also need to define the ordering of the goods agent $i$ gets when she deviates from the bluff bid $\succ^b$ to another preference ranking $\succ_i$. Let $A_i = Y = \{y_1, y_2, \ldots, y_k\}$ be this set of goods. Renaming the elements of $Y$ in a generic fashion, like in the proof of Theorem 3.2, is no longer sufficient; here the renaming becomes significantly more involved and we need to do it in a more systematic way, see Algorithm 3.
Algorithm 3 Greedy renaming of goods for the deviating agent $i$
Input: $X = \{x_1, x_2, \ldots, x_k\}$, $Y$, and a value oracle for $v_i(\cdot)$
1: $Z = Y$
2: for $j = |Y|, \ldots, 1$ do
3:   $y'_j = \arg\min_{g \in Z} v_i(g \,|\, \{x_1, \ldots, x_{j-1}\})$  // Ties are broken lexicographically.
4:   $Z = Z \setminus \{y'_j\}$
5: return $(y'_1, y'_2, \ldots, y'_{|Y|})$
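Algorithm 3 also admits a short transcription under the same conventions as before: goods are integers and `marginal(g, S)` stands for the value oracle $v_i(g \,|\, S)$ of the deviating agent.

```python
def deviator_renaming(X, Y, marginal):
    """Algorithm 3: order the deviator's bundle Y as (y'_1, ..., y'_k).

    X = [x_1, ..., x_k] is agent i's bluff bundle in allocation order and Y
    her bundle after deviating; marginal(g, S) is a value oracle for
    v_i(g | S).  Going backwards over the positions, slot j receives a
    still-unplaced good of Y with minimum marginal value on top of
    {x_1, ..., x_{j-1}}; ties go to the smallest good index.
    """
    Z = set(Y)
    order = [None] * len(X)
    for j in range(len(X), 0, -1):
        prefix = set(X[: j - 1])
        y = min(Z, key=lambda g: (marginal(g, prefix), g))
        order[j - 1] = y
        Z.remove(y)
    return order
```

Note that, because the positions are filled from last to first, the good with the smallest marginal value (with respect to the longest prefix of $X$) ends up in the last slot.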
In what follows, we assume that the indexing $y_1, y_2, \ldots, y_k$ is already the result of Algorithm 3. This renaming is crucial and will be used repeatedly. In particular, we need this particular ordering in order to prove that $v_i(x_j \,|\, \{x_1, \ldots, x_{j-1}\}) \ge v_i(y_j \,|\, \{x_1, \ldots, x_{j-1}\})$, for all $j \in [k]$, in Lemma 3.5 below. Towards that, we need to fix some notation for the sake of readability. For $j \in [k]$, we use $X^j_-$ and $X^j_+$ to denote the sets $\{x_1, x_2, \ldots, x_j\}$ and $\{x_j, x_{j+1}, \ldots, x_k\}$, respectively. The sets $Y^j_-$ and $Y^j_+$, for $j \in [k]$, are defined analogously. We also use $X^0_- = Y^0_- = \emptyset$. The main high-level idea of the proof is that if $v_i(y_\ell \,|\, X^{\ell-1}_-) > v_i(x_\ell \,|\, X^{\ell-1}_-)$ for some $\ell$, then it must be the case that during the execution of Round-Robin$(\succ^b)$ every good in $Y^\ell_- = \{y_1, \ldots, y_\ell\}$ is allocated before the turn of agent $i$ in round $\ell$. Then, using a simple counting argument, we show that agent $i$ cannot receive all the goods in $Y^\ell_-$ when deviating, leading to a contradiction.
Lemma 3.5. Let $X = \{x_1, x_2, \ldots, x_k\}$ be agent $i$'s bundle in Round-Robin$(\succ^b)$, where goods are indexed in the order they were allocated, and $Y = \{y_1, y_2, \ldots, y_k\}$ be $i$'s bundle in Round-Robin$(\succ_i, \succ^b_{-i})$, where goods are indexed by Algorithm 3. Then, for every $j \in [k]$, we have $v_i(x_j \,|\, X^{j-1}_-) \ge v_i(y_j \,|\, X^{j-1}_-)$.

Proof. The way goods in $X$ are indexed, we have that $x_j$ is the good allocated to agent $i$ in round $j$ of Round-Robin$(\succ^b)$. Suppose, towards a contradiction, that there is some $\ell \in [k]$ for which we have $v_i(y_\ell \,|\, X^{\ell-1}_-) > v_i(x_\ell \,|\, X^{\ell-1}_-)$. First notice that $\ell \neq 1$, as $x_1$ is, by the definition of the bluff profile, a singleton of maximum value for agent $i$ excluding the goods allocated to agents 1 through $i-1$ in round 1, regardless of agent $i$'s bid. Thus, $\ell \ge 2$.

Let $B \subseteq M$ and $D \subseteq M$ be the sets of goods allocated (to any agent) up to right before a good is allocated to agent $i$ in round $\ell$ in Round-Robin$(\succ^b)$ and Round-Robin$(\succ_i, \succ^b_{-i})$, respectively. Clearly, $|B| = |D| = (\ell-1)n + i - 1$. In fact, we claim that in this case the two sets are equal.
Claim 3.6. It holds that $B = D$. Moreover, $\{y_1, \ldots, y_\ell\} \subseteq B$.

Proof of the claim. We first observe that $v_i(y_j \,|\, X^{\ell-1}_-) \ge v_i(y_\ell \,|\, X^{\ell-1}_-) > v_i(x_\ell \,|\, X^{\ell-1}_-)$, for every $j \in [\ell-1]$, where the first inequality follows from the way Algorithm 3 ordered the elements of $Y$. Now consider the execution of Round-Robin$(\succ^b)$. Since $x_\ell$ was the good allocated to agent $i$ in round $\ell$, $x_\ell$ had maximum marginal value for agent $i$ with respect to $X^{\ell-1}_-$ among the available goods. Thus, none of the goods $y_1, \ldots, y_\ell$ were available at the time. That is, $y_1, \ldots, y_\ell$ were all already allocated to some of the agents (possibly including agent $i$ herself). We conclude that $\{y_1, \ldots, y_\ell\} \subseteq B$.

Now suppose for a contradiction that $D \neq B$ and consider the execution of Round-Robin$(\succ_i, \succ^b_{-i})$. Recall that the goods in $B$ are still the $(\ell-1)n + i - 1$ most preferable goods for every agent in $N \setminus \{i\}$ according to the profile $(\succ_i, \succ^b_{-i})$. Therefore, all agents in $N \setminus \{i\}$ will get goods from $B$ allocated to them up to the point when a good is allocated to agent $i$ in round $\ell$, regardless of what $\succ_i$ is. If agent $i$ also got only goods from $B$ allocated to her in the first $\ell-1$ rounds of Round-Robin$(\succ_i, \succ^b_{-i})$, then $D$ would be equal to $B$. Thus, at least one good which is not in $B$ (and thus, not in $\{y_1, \ldots, y_\ell\}$) must have been allocated to agent $i$ in the first $\ell-1$ rounds. As a result, at the end of round $\ell-1$, there are at least two goods in $\{y_1, \ldots, y_\ell\}$ that have not yet been allocated to $i$.

However, we claim that up to right before a good is allocated to agent $i$ in round $\ell+1$, all goods in $B$ (and thus in $\{y_1, \ldots, y_\ell\}$ as well) will have been allocated, leaving $i$ with at most $\ell-1$ goods from $\{y_1, \ldots, y_\ell\}$ in her final bundle and leading to a contradiction. Indeed, this follows from a simple counting argument. Right before a good is allocated to agent $i$ in round $\ell+1$, the number of goods allocated to agents in $N \setminus \{i\}$ is exactly $\ell(n-1) + i - 1 \ge (\ell-1)n + i - 1 = |B|$. As noted above, agents in $N \setminus \{i\}$ will get goods from $B$ allocated to them as long as they are available. Thus, no goods from $B$, or from $\{y_1, \ldots, y_\ell\}$ in particular, remain unallocated right before a good is allocated to agent $i$ in round $\ell+1$. Therefore, agent $i$ may get at most $\ell-1$ goods from $\{y_1, \ldots, y_\ell\}$ (at most $\ell-2$ in the first $\ell-1$ rounds and one in round $\ell$), contradicting the definition of the set $Y$. We conclude that $D = B$. $\square$
Given the claim, it is now easy to complete the proof. Clearly, in the first $\ell-1$ rounds of Round-Robin$(\succ_i, \succ^b_{-i})$ at most $\ell-1$ goods from $\{y_1, \ldots, y_\ell\}$ have been allocated to agent $i$. However, when it is $i$'s turn in round $\ell$, only goods in $M \setminus D$ are available, by the definition of $D$. By Claim 3.6, we have $\{y_1, \ldots, y_\ell\} \subseteq D$, and thus there is at least one good in $\{y_1, \ldots, y_\ell\}$ that is allocated to another agent, which contradicts the definition of $Y$. $\square$
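Lemma 3.5 can also be sanity-checked numerically. The sketch below uses a small hypothetical instance of our own with coverage valuations (which are monotone submodular), builds the bluff order greedily as in Algorithm 2, and verifies the claimed good-wise domination for agent 1 under every possible deviation ranking.

```python
from itertools import permutations

# Coverage valuations are monotone submodular: v_i(S) = |union of U[i][g] for g in S|.
# The instance (2 agents, 4 goods, so k = 2 rounds) is a toy example of our own.
U = [
    {0: {0, 1, 2}, 1: {0, 1}, 2: {3}, 3: {2, 3}},   # agent 1's coverage sets
    {0: {4}, 1: {4, 5}, 2: {5}, 3: {0, 4, 5}},      # agent 2's coverage sets
]

def v(i, S):
    return len(set().union(*(U[i][g] for g in S))) if S else 0

def marg(i, g, S):
    return v(i, set(S) | {g}) - v(i, S)

def round_robin(rankings):
    avail, bundles = set(range(4)), [[], []]
    for t in range(4):
        a = t % 2
        g = next(x for x in rankings[a] if x in avail)
        bundles[a].append(g)
        avail.remove(g)
    return bundles

# The bluff order h_1, ..., h_m from truthful greedy picks (Algorithm 2).
T, taken, bluff = [set(), set()], set(), []
for j in range(4):
    a = j % 2
    g = max(set(range(4)) - taken, key=lambda x: (marg(a, x, T[a]), -x))
    bluff.append(g)
    T[a].add(g)
    taken.add(g)

X = round_robin([bluff, bluff])[0]   # agent 1's bluff bundle, in pick order

# Lemma 3.5 for agent 1: for every deviation, after ordering her new bundle
# with Algorithm 3, v(x_j | X^{j-1}) >= v(y_j | X^{j-1}) for all j.
for dev in permutations(range(4)):
    Z = set(round_robin([list(dev), bluff])[0])
    order = [None, None]
    for j in (2, 1):                  # Algorithm 3, backwards
        y = min(Z, key=lambda g: (marg(0, g, X[: j - 1]), g))
        order[j - 1] = y
        Z.remove(y)
    for j in (1, 2):
        assert marg(0, X[j - 1], X[: j - 1]) >= marg(0, order[j - 1], X[: j - 1])
holds = True
```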
We are now ready to state and prove the main result of this section.
Theorem 3.7. When all agents have submodular valuation functions, the bluff profile is a $\frac{1}{2}$-approximate PNE of Mechanism 1. Moreover, this is tight, i.e., for any $\varepsilon > 0$, there are instances where the bluff profile is not a $(\frac{1}{2} + \varepsilon)$-approximate PNE.
Proof. We are going to use the notation used so far in the section and consider the possible deviation of an arbitrary agent $i$. Like in the statement of Lemma 3.5, $X = \{x_1, \ldots, x_k\}$ is agent $i$'s bundle in Round-Robin$(\succ^b)$, with goods indexed in the order they were allocated, and $Y = \{y_1, y_2, \ldots, y_k\}$ is $i$'s bundle in Round-Robin$(\succ_i, \succ^b_{-i})$, with goods indexed by Algorithm 3. Also, recall that $X^j_- = \{x_1, \ldots, x_j\}$ and $X^j_+ = \{x_j, \ldots, x_k\}$ (and similarly for $Y^j_-$ and $Y^j_+$). We also use the convention that $Y^{k+1}_+ = \emptyset$. For any $j \in [k]$, we have
\begin{align*}
v_i(X^j_-) - v_i(X^{j-1}_-) &= v_i(x_j \,|\, X^{j-1}_-) \\
&\ge v_i(y_j \,|\, X^{j-1}_-) \\
&\ge v_i(y_j \,|\, X^{j-1}_- \cup Y^{j+1}_+) \\
&= v_i(X^{j-1}_- \cup Y^{j+1}_+ \cup \{y_j\}) - v_i(X^{j-1}_- \cup Y^{j+1}_+) \\
&= v_i(X^{j-1}_- \cup Y^j_+) - v_i(X^{j-1}_- \cup Y^{j+1}_+) \\
&\ge v_i(X^{j-1}_- \cup Y^j_+) - v_i(X^j_- \cup Y^{j+1}_+).
\end{align*}
The first inequality holds because Lemma 3.5 applies to $X$ and $Y$, whereas the second inequality holds because of submodularity. Finally, the last inequality holds since $X^{j-1}_- \subseteq X^j_-$ and $v_i(\cdot)$ is non-decreasing, for every $i \in N$. Using these inequalities along with a standard expression of the value of a set as a sum of marginals, we have
\begin{align*}
v_i(X) &= v_i(X^k_-) - v_i(X^0_-) \\
&= \sum_{j=1}^{k} \big( v_i(X^j_-) - v_i(X^{j-1}_-) \big) \\
&\ge \sum_{j=1}^{k} \big( v_i(X^{j-1}_- \cup Y^j_+) - v_i(X^j_- \cup Y^{j+1}_+) \big) \\
&= v_i(X^0_- \cup Y^1_+) - v_i(X^k_- \cup Y^{k+1}_+) \\
&= v_i(Y) - v_i(X).
\end{align*}
Thus, we have $v_i(X) \ge \frac{1}{2} \cdot v_i(Y)$, and we conclude that $\succ^b$ is a $\frac{1}{2}$-approximate PNE of Mechanism 1.
To show that the result is tight, consider an example with two agents and five goods. The valuation function of agent 1 is additive and defined as follows on the singletons:
$$v_1(g_1) = 2, \quad v_1(g_2) = 1, \quad v_1(g_3) = 1 - \epsilon_1, \quad v_1(g_4) = 1 - \epsilon_2, \quad v_1(g_5) = 1 - \epsilon_3,$$
where $1 \gg \epsilon_3 > \epsilon_2 > \epsilon_1 > 0$.

The valuation function of agent 2 is OXS² and defined by the maximum matchings in the bipartite graph below, e.g., $v_2(\{g_1, g_2\}) = 2 + 1 = 3$ and $v_2(\{g_1, g_4, g_5\}) = 2 + 1 - \epsilon_2 = 3 - \epsilon_2$.
[Figure: the bipartite graph defining $v_2$, with the goods $g_1, \ldots, g_5$ on one side and vertices of weights $2$, $1$, $1-\epsilon_1$, $1-\epsilon_2$, $1-\epsilon_3$ on the other.]
It is not hard to see that the bluff profile for this instance consists of the following declared ordering by both agents: $g_1 > g_2 > g_3 > g_4 > g_5$. The allocation produced by Mechanism 1 for the bluff profile is then $A = (A_1, A_2)$, where $A_1 = \{g_1, g_3, g_5\}$ and $A_2 = \{g_2, g_4\}$. Observe that $v_1(A_1) = 4 - \epsilon_1 - \epsilon_3$ and $v_2(A_2) = 1$. It is easy to see that there is no profitable deviation for agent 1, while the maximum value that agent 2 can attain by deviating is $2 - \epsilon_1 - \epsilon_2$. Agent 2 achieves this by reporting the preference ranking $g_3 > g_4 > g_1 > g_2 > g_5$ and getting the goods $\{g_3, g_4\}$. This implies that for any $\varepsilon > 0$ one can choose appropriately small $\epsilon_1, \epsilon_2, \epsilon_3$ so that the bluff profile is not a $(\frac{1}{2} + \varepsilon)$-approximate PNE. $\square$

²Roughly speaking, OXS functions generalize unit-demand functions. The set of OXS functions is a strict superset of additive functions and a strict subset of submodular functions. See [26, 27].
In Section 4, we show that every approximate PNE of Mechanism 1 results in an approximately EF1 allocation. Here, as a warm-up, we start this endeavor with an easy result which holds specifically for the bluff profile (and can be extended to approximate PNE where all agents submit the same preference ranking) but shows a better fairness guarantee than our general Theorem 4.4.
Theorem 3.8. When all agents have submodular valuation functions $v_1, \ldots, v_n$, the allocation returned by Round-Robin$(\succ^b)$ is $\frac{1}{2}$-EF1 with respect to $v_1, \ldots, v_n$. Moreover, this is tight, i.e., for any $\varepsilon > 0$, there are instances where this allocation is not $(\frac{1}{2} + \varepsilon)$-EF1.
Proof. In order to obtain a contradiction, suppose that the allocation $(A^b_1, A^b_2, \ldots, A^b_n)$ returned by Round-Robin$(\succ^b)$ is not $\frac{1}{2}$-EF1. That is, there exist agents $i$ and $j$ such that $v_i(A^b_i) < 0.5 \cdot v_i(A^b_j \setminus \{g\})$, for all $g \in A^b_j$. We are going to show that this allows us to construct a deviation for agent $i$ where she gets value more than $2 v_i(A^b_i)$, contradicting the fact that $\succ^b$ is a $\frac{1}{2}$-approximate PNE. Recall that, using the renaming $h_1, h_2, \ldots$ produced by Algorithm 2, we have $A^b_i = \{h_i, h_{n+i}, \ldots, h_{(k-1)n+i}\}$ and $A^b_j = \{h_j, h_{n+j}, \ldots, h_{(k-1)n+j}\}$.

Let $\delta$ be the indicator variable of the event $j < i$, i.e., $\delta$ is 1 if $j < i$ and 0 otherwise. We will show that it is possible for agent $i$ to get the set $\{h_{\delta n + j}, h_{(1+\delta)n+j}, h_{(2+\delta)n+j}, \ldots, h_{(k-1)n+j}\}$, which is either the entire $A^b_j$ (when $i < j$) or $A^b_j \setminus \{h_j\}$ (when $j < i$). In particular, let $\succ_i$ be a preference ranking that starts with all goods in $A^b_j$, in the same order as they were allocated to agent $j$ in Round-Robin$(\succ^b)$, followed by everything else, in any order.

Consider the execution of Round-Robin$(\succ_i, \succ^b_{-i})$. The crucial, yet simple, observation (that makes an inductive argument work) is that the first $i-1$ goods $h_1, \ldots, h_{i-1}$ are allocated as before, then good $h_{\delta n + j}$ (rather than $h_i$) is allocated to agent $i$, and after that the $n-1$ top goods for all agents in $N \setminus \{i\}$ according to $\succ^b_{-i}$ are $h_i, h_{i+1}, \ldots, h_{\delta n + j - 1}, h_{\delta n + j + 1}, \ldots, h_{n+i-1}$, and these are allocated in the next $n-1$ steps of the algorithm. As a result, right before a second good is allocated to agent $i$, the available goods are $h_{n+i}, h_{n+i+1}, \ldots, h_m$, exactly as in the execution of Round-Robin$(\succ^b)$.

More generally, right before an $r$-th good is allocated to $i$, her bundle is $\{h_{\delta n + j}, h_{(1+\delta)n+j}, h_{(2+\delta)n+j}, \ldots, h_{(r-2+\delta)n+j}\}$, and the available goods are $h_{(r-1)n+i}, h_{(r-1)n+i+1}, \ldots, h_m$ (as they were in the execution of Round-Robin$(\succ^b)$). Then good $h_{(r-1+\delta)n+j}$ (rather than $h_{(r-1)n+i}$) is allocated to agent $i$, and after that the $n-1$ top goods for all agents according to $\succ^b_{-i}$ are
$$h_{(r-1)n+i}, h_{(r-1)n+i+1}, \ldots, h_{(r-1+\delta)n+j-1}, h_{(r-1+\delta)n+j+1}, \ldots, h_{rn+i-1},$$
and they are allocated in the next $n-1$ steps of the algorithm. At the end, agent $i$ gets the entire $A^b_j$, or $A^b_j \setminus \{h_j\}$ plus some arbitrary good, depending on whether $i < j$ or $j < i$. In either case, by monotonicity, agent $i$'s value for her bundle is at least $v_i(A^b_j \setminus \{h_j\}) > 2 v_i(A^b_i)$, where the last inequality follows from our assumption that $(A^b_1, A^b_2, \ldots, A^b_n)$ is not $\frac{1}{2}$-EF1. Therefore, by deviating from $\succ^b$ to $\succ_i$, agent $i$ increases her value by a factor strictly greater than 2, contradicting Theorem 3.7.

To show that this factor is tight, we again turn to the example given within the proof of Theorem 3.7. Recall that the allocation produced by Mechanism 1 for the bluff profile is $A = (A_1, A_2)$, with $A_1 = \{g_1, g_3, g_5\}$ and $A_2 = \{g_2, g_4\}$. Observe that agent 1 is envy-free towards agent 2, as $v_1(A_1) = 4 - \epsilon_1 - \epsilon_3 > 2 - \epsilon_2 = v_1(A_2)$. On the other hand, $v_2(A_2) = 1$, whereas $v_2(A_1) = 4 - \epsilon_1 - \epsilon_3$ and $v_2(A_1 \setminus \{g_1\}) = 2 - \epsilon_1 - \epsilon_3$. The latter implies that for any $\varepsilon > 0$ one can choose appropriately small $\epsilon_1, \epsilon_2, \epsilon_3$ so that the bluff profile does not result in a $(\frac{1}{2} + \varepsilon)$-EF1 allocation with respect to the true valuation functions of the agents. $\square$
4 Fairness properties of PNE
In Section 2.3, Proposition 2.5, we state the fairness guarantees of Round-Robin, viewed as an algorithm, when all agents have cancelable valuation functions. So far, we have not discussed this matter for the submodular case. It is not hard to see, however, that Theorem 3.8 and the definition of the bluff profile via Algorithm 2 imply that when we have (value oracles for) the valuation functions, then we can use Round-Robin to algorithmically produce $\frac{1}{2}$-EF1 allocations. Using similar arguments, we show next that for any preference profile $(\succ_1, \ldots, \succ_n)$ and any $i \in N$, there is always a response $\succ'_i$ of agent $i$ to $\succ_{-i}$ such that the allocation returned by Round-Robin$(\succ'_i, \succ_{-i})$ is $\frac{1}{2}$-EF1 from agent $i$'s perspective.

Towards this, we first need a variant of Algorithm 2 that considers everyone in $N \setminus \{i\}$ fixed to their report in $\succ_{-i}$ and greedily determines a "good" response for agent $i$. An intuitive interpretation of what Algorithm 4 below is doing can be given if one sees Mechanism 1 as a sequential game. Then, given that everyone else stays consistent with $\succ_{-i}$, agent $i$ picks a good of maximum marginal value every time her turn is up.
Algorithm 4 Greedy response of agent $i$ to $\succ_{-i}$
Input: $N$, $M$, $\succ_{-i}$, value oracle for $v_i$
1: $P = M$; $S = \emptyset$
2: for $t = 1, \ldots, m$ do
3:   $\ell = (t-1) \,(\mathrm{mod}\ n) + 1$
4:   if $\ell = i$ then
5:     $x_{\lceil t/n \rceil} = \arg\max_{g \in P} v_i(g \,|\, S)$  // Ties are broken lexicographically.
6:     $S = S \cup \{x_{\lceil t/n \rceil}\}$
7:     $P = P \setminus \{x_{\lceil t/n \rceil}\}$
8:   else
9:     $g = \mathrm{top}(\succ_\ell, P)$
10:    $P = P \setminus \{g\}$
11: return $x_1 \succ'_i x_2 \succ'_i \ldots \succ'_i x_k \succ'_i \ldots$  // Arbitrarily complete $\succ'_i$ with goods in $M \setminus S$.
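Algorithm 4 can be transcribed under the same zero-indexed conventions used earlier (rankings as Python lists, `marginal(g, S)` as the value oracle for $v_i$); the completion of the returned ranking, which the algorithm leaves arbitrary, is done here in increasing index order.

```python
def greedy_response(n, m, i, rankings, marginal):
    """Algorithm 4: a greedy response for agent i against fixed reports.

    rankings[j] is agent j's reported ranking (best first) for every j != i
    (rankings[i] is ignored); marginal(g, S) is a value oracle for v_i(g | S).
    The run of Round-Robin is simulated: on the other agents' turns their top
    available good is removed, and on agent i's turns she takes an available
    good of maximum marginal value (ties to the smallest index).  Returns
    agent i's ranking: her picks x_1, x_2, ... first, in that order, then
    the remaining goods.
    """
    P = set(range(m))   # still-available goods
    S = []              # x_1, x_2, ...: goods greedily picked for agent i
    for t in range(m):
        turn = t % n
        if turn == i:
            g = max(P, key=lambda x: (marginal(x, set(S)), -x))
            S.append(g)
        else:
            g = next(x for x in rankings[turn] if x in P)  # top(>-_turn, P)
        P.remove(g)
    return S + sorted(set(range(m)) - set(S))
```

By construction, replaying Round-Robin with this ranking against the same $\succ_{-i}$ gives agent $i$ exactly the goods in $S$, in the order they were greedily picked.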
Proving the next lemma closely follows the proof of Theorem 3.7, but without the need of an analog of Lemma 3.5, as we get this for free from the way the greedy preference ranking $\succ'_i$ is constructed.

Lemma 4.1. Assume that agent $i$ has a submodular valuation function $v_i$. If $\succ'_i$ is the ranking returned by Algorithm 4 when given $N$, $M$, $\succ_{-i}$, $v_i$, then the allocation $(A'_1, A'_2, \ldots, A'_n)$ returned by Round-Robin$(\succ'_i, \succ_{-i})$ is such that for every $j \in N$ with $A'_j \neq \emptyset$, there exists a good $g \in A'_j$, so that $v_i(A'_i) \ge \frac{1}{2} \cdot v_i(A'_j \setminus \{g\})$.

Proof. First, it is straightforward to see that $A'_i = S$, as computed in Algorithm