Efficient Algorithms for Envy-Free Stick
Division With Fewest Cuts
Raphael Reitzig∗Sebastian Wild∗
April 17, 2015
Segal-Halevi, Hassidim, and Aumann [SHHA15] propose the problem of
cutting sticks so that at least k sticks have equal length and no other stick
is longer. This allows for an envy-free allocation of sticks to k players, one
each. The resulting number of sticks should also be minimal.
We analyze the structure of this problem and devise a linear-time algorithm
for it.
1. Introduction
We consider the following situation:
Imagine you and your family move to a new house. Naturally, each of your k
children wants to have a room of their own, which is why you wisely opted for a
house with a sufficient number of rooms. The sizes of the rooms are, however,
not equal, and you anticipate that peace will not last long if any of the rascals
finds out that their room is smaller than any of the others'.
Removing walls is out of the question, but any room can (arbitrarily and
iteratively) be divided into smaller rooms by installing drywalls. What is the
minimal number of walls needed to obtain (at least) k equally sized largest
rooms (i. e., such that all other rooms are at most that large)?
In this paper, we give efficient algorithms for solving this problem, i.e., for determining
the optimal placement of walls. To the best of our knowledge, this is the first algorithmic
treatment of this problem. Even though we do like the children's-rooms metaphor, we
want to remain consistent with existing descriptions of the problem and will therefore
talk about sticks, which can be cut (but not glued together), and refer to the problem as
Envy-Free Stick Division – hence the title.
∗Department of Computer Science, University of Kaiserslautern; {reitzig, wild} @ cs.uni-kl.de
In the remainder of this first section, we give an overview of related problems and a
roadmap of our contribution.
We then introduce notation and give a formal definition of Envy-Free Stick Division
in Section 2. In Section 3 we develop the means to limit our search to finite sets of
candidates. We follow up by developing first a linearithmic, then a linear time algorithm
for solving Envy-Free Stick Division in Section 4. Finally, we propose smaller candidate
sets in Section 5. A short conclusion in Section 6 on the complexity of Envy-Free Stick
Division completes the article.
We append a glossary of the notation we use in Appendix A for reference, and some
lower bounds on the number of distinct candidates in Appendix B.
1.1. Other Optimization Goals
Whenever one defines a new problem, legitimate questions arise: Is the given definition
natural? Do slight variations affect the properties of the problem?
We have already addressed the first question with the scenario right at the beginning of
the article: when assigning rooms to children, each installed drywall incurs a cost which
we try to minimize. The reader may or may not agree that this is a natural problem.
We now address the second question with a brief discussion of a few alternative
definitions, none of which changes the nature of the problem.
To reiterate, we consider the problem of dividing resources (fairly), some of which may
remain unallocated (as waste). Waste is bad, of course, so we want to avoid wasting too
much. We can formulate this goal in different ways; one can seek to
(G1) minimize the number of necessary cuts (as we do),
(G2) minimize the number of waste pieces,
(G3) minimize the total amount of waste (here, the total length of all unallocated
pieces), or
(G4) maximize the amount of resource each player gets (the length of the maximal
pieces).
The first two objectives are discrete (counting things) whereas the latter two consider
continuous quantities.
Obviously, (G3) and (G4) are dual formulations but lead to equivalent problems; the
total waste is always the (constant) total length of all sticks minus k · l*. Similarly, (G1)
is dual to (G2); c cuts divide n sticks into n + c pieces, and because exactly k of these
are non-waste, the number of wasted pieces is n + c − k.
The canonical division induced by the largest feasible cut length l is also optimal w. r. t.
the number of cuts needed; smaller cut lengths can only imply more cuts, and larger
cut lengths are not feasible. This result is one of the key insights towards algorithmic
solutions of Envy-Free Stick Division, so we state it formally in Corollary 3.2 (page 12). In
terms of the above goals, this means that optimal solutions for (G3) and (G4) are also
optimal for (G1) and (G2).
The converse is also true. Let l* be the cut length of an optimal canonical division
w. r. t. the number of needed cuts (goal (G1)). This division must divide (at least) one
stick perfectly, that is, into only maximal pieces; otherwise we could increase l* a tiny bit
until one stick is perfectly split, resulting in (at least) one cut less (as this stick does not
produce a waste piece now). But this means that we cannot increase l* by any (positive)
amount without immediately losing (at least) one maximal piece; we formalize this fact
in Lemma 3.1 (page 11). As the number of cuts grows with decreasing cut length, l* is
the largest cut length that yields at least k maximal pieces. Together it follows that l*
must be the largest feasible cut length overall and thus induces an optimal solution for
goal (G4) (and therewith (G3)), too.
Therefore, all four objective functions we give above result in the same optimization
problem, and the same algorithmic solutions apply. The reader may use any of the four
formulations in case they do not agree with our choice of (G1).
1.2. Relation to Fair Division
This setting shares some features of fair resource allocation problems studied in
economics, where a continuously divisible, homogeneous resource is to be allocated “fairly” to a
number of competing players. See Brams and Taylor [BT96] for a treatise of this field.
What exactly constitutes a “fair” division is subject to debate and several (potentially
contradicting) notions of fairness appear in the literature:
• proportional division (or simple fair division) guarantees that each player gets a
share that she values at least 1/k;
• envy-freeness ensures that no player (strictly) prefers another player's share over
her own;
• equitable division requires that the (relative) value each player assigns to her own
share is the same for all players, so that everyone feels the same amount of “happiness”.
These are just the most prominent notions. Our problem is certainly a search for an
envy-free allocation of the available rooms/sticks.
The core assumption for classic fair division problems is the subjective theory of value,
implying that each player has her own private (i.e., unknown) valuation of the resources
at hand, which in general forbids an objectively fair division of the resources. Note that
we do not assume subjective valuations in our problem; all the children value equal rooms
equally: (linearly) by their size. This is what makes our problem easier than general fair
resource allocation.
On the other hand, while each given room is assumed to be continuously divisible,
existing walls (or cuts) constrain the set of possible allocations, and simply assigning
everyone a (contiguous) 1/k share is not possible. In general, there may then not be an
envy-free assignment at all – unless we allow wasting part of the resource. This
is what makes our problem hard . . . and interesting.
While we do think that Envy-Free Stick Division is worth studying in its own right, it
was initially motivated by a recent approach to envy-free cake cutting, i. e., the problem
of finding an envy-free assignment of a freely divisible, but inhomogeneous resource
(which means that valuations do not only depend on the size of the piece, but also on
its position). Segal-Halevi, Hassidim, and Aumann [SHHA15] devise a finite protocol
to find an envy-free division of a cake among k agents, where parts of the cake may remain
unassigned (waste).
They use an algorithm for solving Envy-Free Stick Division as a subroutine in their
protocol, which they call Equalize(k). In short, their protocol works as follows: The players
cut pieces from the cake, one after another, only allowed to subdivide already produced
pieces. After that, the players each choose their favorite piece among the existing pieces
in the opposite cutting order, i. e., the player who cut first is last to choose her piece of
cake. During the cutting phase, agents produce a well-chosen number of pieces of the
cake that they regard to be all of equal value (according to their personal valuation); all
other pieces are made at most that large. The number of maximal pieces is chosen such
that even after all players who precede the given player in “choosing order” have taken
their favorite piece of cake, there is still at least one of the current player's maximal
pieces left for her to pick, guaranteeing envy-freeness of the overall allocation.
1.3. Relation to Proportional Apportionment
Proportional apportionment as discussed by, e. g., Cheng and Eppstein [CE14], is the
problem of assigning each party its “fair” share of a pool of indivisible, unit-value resource
items (seats in parliament), so that the fraction of items assigned to a party resembles
as closely as possible its (fractional, a priori known) value (the number of votes for a
party); these values are the input.
Most methods used in real-world election systems use sequential assignment, assigning
one seat at a time in a greedy fashion, where the current priority of each party depends
on its initial vote percentage and the number of already assigned seats. Different systems
differ in the function for computing these updated values, but all sensible ones seem to
be of the “highest averages” form: In each round they assign the next seat to a party that
(currently) maximizes vi/dj (with ties broken arbitrarily), where vi is the vote percentage
of party i, j is the number of seats party i has already been assigned, and d0, d1, d2, . . .
is an increasing divisor sequence characterizing the method.
The arguably most natural choice is dj = j + 1, yielding the highest average method
of Jefferson, a. k. a. the greatest divisors method. Here vi/dj is the number of votes per
seat that party i can exhibit, assuming it will now be given its (j + 1)st seat. The party
with the highest average number of votes per seat gets the next seat, which is fair in
the sense that – correcting for the already fixed partial seat allocation – the voters value
that party most. Cheng and Eppstein [CE14, Table 1] discuss other common choices for
divisor sequences.
d0 = 0 is allowed, in which case vi/d0 is taken to mean vi + M for a very large
constant M (larger than the sum Σ vi of all values is sufficient). This ensures that
before any party is assigned a second seat, all other parties must have one seat, which is
a natural requirement for some allocation scenarios; e. g., the number of representatives
for each state in a federal republic might be chosen to resemble population counts, but
any state, regardless of how small, should have at least one representative.
Note that even though the original description of the highest averages allocation procedure
is an iterative process, it is actually a static problem: The averages vi/dj are strictly
decreasing with j for any i, so the sequence of maximal averages in the assignment
rounds is also decreasing. In fact, if we allocate a seat to a party with current average
a in round r, then a must have been the rth largest element in the multiset of all ratios
vi/dj for all parties i and numbers j. Moreover, if we know the value of the kth largest
average a(k), we can determine for each party i how many seats it should receive by
finding the largest j such that vi/dj is still larger than a(k). Cheng and Eppstein turn
this idea into an efficient algorithm for solving apportionment problems.
Envy-Free Stick Division is in fact a proportional apportionment problem; we change our
perspective and assign children to rooms (not vice versa!) so that each room gets its “fair
share” of kids, namely proportional to its size. As children are (hopefully considered to
be) indivisible, we can match size proportions only approximately. The increasing divisor
sequence for Envy-Free Stick Division is dj = j + 1, exactly the greatest divisors method
discussed above.
Once we have determined the “share of children” ci ∈ N0 of each room i of size Li, we
use the minimum of all Li/ci ratios as cut length l* (ignoring rooms with ci = 0 kids).
Therefore, Envy-Free Stick Division can be solved using Cheng and Eppstein's linear-time
apportionment algorithm. We will, however, show in this article that we can – with a
slightly closer look at the underlying multiset of averages vi/dj – devise an algorithm that
is (in our opinion) both conceptually simpler and promises to be faster in practice.
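To make the correspondence concrete, the greedy highest-averages assignment sketched above can be carried out directly with a priority queue. This is only an illustration of the reduction, not the algorithm developed in this paper; the function name and the O(k log n) approach are ours:

```python
import heapq
from fractions import Fraction

def stick_division_via_apportionment(lengths, k):
    """Assign k 'children' to rooms greedily (Jefferson, d_j = j + 1):
    each round, the room maximizing L_i / (c_i + 1) receives one more child.
    The optimal cut length l* is then min over L_i / c_i with c_i > 0."""
    L = [Fraction(x) for x in lengths]
    counts = [0] * len(L)
    # Max-heap via negated priorities L_i / (c_i + 1); initially c_i = 0.
    heap = [(-Li, i) for i, Li in enumerate(L)]
    heapq.heapify(heap)
    for _ in range(k):
        _, i = heapq.heappop(heap)
        counts[i] += 1
        heapq.heappush(heap, (-(L[i] / (counts[i] + 1)), i))
    return min(L[i] / counts[i] for i in range(len(L)) if counts[i] > 0)

# Running example of Section 2: sticks {8, 7, 6, 1, ..., 1} and k = 9 give l* = 2.
print(stick_division_via_apportionment([8, 7, 6] + [1] * 13, 9))  # 2
```

Exact rationals (`Fraction`) avoid any floating-point tie-breaking issues at the jump points.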
1.4. Overview of this Article
In this section, we give an informal description of the steps that lead to our solution for
Envy-Free Stick Division (see also Figure 1); formal definitions and proofs follow in the
main part of the paper.
Figure 1: Schematic overview of the refinement steps that turn a seemingly hard problem
into a tame task amenable to elementary yet efficient algorithmic solutions. [The figure
refines Envy-Free Stick Division in stages – one-dimensional problem, search problem,
discrete search problem, finite search problem, and finally a linearithmic resp. linear
search space – via the steps: only canonical divisions, monotonic objective, piecewise
constant constraint, trivial lower bound on l*, combinatorics, sandwich bounds.]
Without further restrictions, Envy-Free Stick Division is a non-linear continuous
optimization problem that does not seem to fall into any of the usual categories of problems
that are easy to solve. Any stick might be cut an arbitrary number of times at arbitrary
lengths, so the space of possible divisions is huge.
The first step to tame the problem is to observe that most of these divisions cannot be
optimal: Assuming we already know the size l* of the k maximal pieces in an optimal
division (the size of the rooms assigned to the kids), we can recover a canonical optimal
division by simply cutting l*-sized pieces off any stick longer than l* until all sticks
have length at most l*. Cutting a shorter piece only creates waste; cutting a larger
piece always entails a second cut for that piece. We can thus identify a (candidate) cut
length with its corresponding canonical division, and so Envy-Free Stick Division reduces
to finding the optimal cut length l*.
The second major simplification comes from the observation that for canonical divisions,
the number of cuttings can only get larger when we decrease the cut length. (We cut
each stick into shorter pieces; this can only mean more cuts.) Stated differently, the
objective function that we try to minimize is monotonic (in the cut length). This is a
very fortunate situation since it allows a simple characterization of optimal solutions:
l* is the largest length whose canonical division still contains (at least) k maximal pieces
of equal size, transforming our optimization into a mere search problem for the point where
cut lengths transition from feasible to infeasible solutions.
By similar arguments, the number of equal-sized maximal pieces (in the canonical
division) for a cut length l also only increases when l is made smaller, so we can use
binary search to find the length l* where the number of maximal pieces first exceeds k.
The search is still over a continuous region, though.
Next we note that both the objective and the feasibility function are piecewise constant
with jumps only at lengths of the form Li/j, where Li is the length of an input stick
and j is a natural number. Any (canonical) division for length l that does not cut any
stick evenly into pieces of length l remains of the same quality and cost if we change
l a little. Moreover, any such division can obviously be improved by increasing
the cut length until we cut one stick Li evenly, say into j pieces, as we then get the
same number of maximal pieces with (at least) one less cut. We can thus restrict
our (binary) search for l* to these jump points, making the problem discrete – but still
infinite, as we do not yet have an upper bound on j.
We can, however, easily find lower bounds on l* – or, equivalently, upper bounds on j –
that render the search space finite. For example, we obviously never need to cut more
than k pieces out of any single stick, in particular not the largest one. This elementary
trick already reduces the search space to O(n²) candidates, where n is the number of
sticks in the input.
We will then show how to obtain even smaller candidate sets by developing slightly
cleverer upper and lower bounds for the number of maximal pieces (in the canonical
divisions) for cut length l. The intuitive idea is as follows. If we had a single stick with
the total length of all sticks, dividing it into k equal pieces would give us the ultimately
efficient division without any waste. The corresponding “ultimate cut length” is of course
easy to compute, but with pre-cut sticks, it will usually not be feasible.
However, we can bound how much we lose w. r. t. the ultimate division: each input stick
contributes (at most) one piece of waste. Hence, the optimal cut length cannot be “too
far” away from the ultimate cut length. With a little diligence (see Section 5.1), we can
derive an interval for the optimal cut length from these observations and show that the
number of jumps in this interval, i. e., the number of cut lengths to check, remains linear
in n. For k ≤ n, we can in fact get an O(k) bound by first removing sticks shorter than
the kth largest one.
The discussion above takes the point of view of mathematical optimization, describing
how to reduce the number of candidate cut lengths we have to check; we are still one step
away from turning this into an actual, executable algorithm. After reducing the problem
to a finite search problem, binary search naturally comes to mind; we work out the details
in Section 4. However, sorting the candidate set and checking feasibility of candidates
dominate the runtime of this binary-search-based algorithm – this is unsatisfactory.
As hinted at in Section 1.3, it is possible to determine l* more directly, namely as
a specific order statistic of the candidate set. From the point of view of the objective and
feasibility functions, this trick works because both functions essentially count the number
of unit jumps (i. e., occurrences of Li/j) at points larger than the given length. This
approach yields a simple linear-time algorithm based on rank selection; we describe it
in detail in Section 4.1.
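To make the order-statistic view concrete: since m(l) = Σ ⌊Li/l⌋ counts exactly the pairs (i, j) with Li/j ≥ l, the optimal l* is the kth largest element of the multiset of ratios Li/j (and restricting to j ≤ k suffices, because each stick contributes at most k of the top k ratios). A sorting-based sketch of this selection, with names of our choosing; the point of the paper's algorithm is to replace the sort by linear-time rank selection:

```python
from fractions import Fraction

def lstar_by_selection(lengths, k):
    """l* as the k-th largest element of the multiset {L_i / j : 1 <= j <= k}.
    Sorting is used here for clarity; linear-time selection would do as well."""
    candidates = [Fraction(L) / j for L in lengths for j in range(1, k + 1)]
    return sorted(candidates, reverse=True)[k - 1]

# Running example: l* = 2 for sticks {8, 7, 6, 1, ..., 1} and k = 9.
print(lstar_by_selection([8, 7, 6] + [1] * 13, 9))  # 2
```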
2. Problem Definition
We will mostly use the usual zoo of mathematical notation (as used in theoretical
computer science, that is). Since they are less often used, let us quickly introduce notation
for multisets, though. For some set X, denote a multiset over X by

A = {x1, x2, . . . }

with xi ∈ X for all i ∈ [1..|A|]. Note the bold letter; we will use these for multisets,
and regular letters for sets. Furthermore, denote the multiplicity of some x ∈ X in A as
A(x); in particular,

|A| = Σ_{x ∈ X} A(x).
When we use a multiset as operator range, we want to consider every occurrence of
x ∈ A; for example,

Σ_{x ∈ A} f(x) = Σ_{i ∈ [1..|A|]} f(xi) = Σ_{x ∈ X} A(x) · f(x).

As for multiset operations, we use the multiset union ⊎ that adds up cardinalities; that is,
if C = A ⊎ B then C(x) = A(x) + B(x) for all x ∈ X. Multiset difference works in the
reverse way; if C = A \ B then C(x) = max{0, A(x) − B(x)} for all x ∈ X.
Intersection with a set B ⊆ X is to be read as the natural extension of the usual set
intersection; that is, if C = A ∩ B then C(x) = A(x) · B(x) for all x ∈ X (keep in mind
that B(x) ∈ {0, 1}).
Now for problem-specific notation. We call any length L ∈ Q a stick. Cutting L with
length l > 0 creates two pieces with lengths l and L − l, respectively. By iteratively
cutting sticks and pieces thereof into smaller pieces, we can transform a set of sticks into
a set of pieces.
We define the following simple problem for fixing notation.
Problem 1: Envy-Free Fixed-Length Stick Division
Input: Multiset L = {L1, . . . , Ln} of sticks with lengths Li ∈ Q>0, target
number k ∈ N>0, and cut length l ∈ Q>0.
Output: The (minimal) number of cuts necessary for cutting the input sticks
into sticks L′1, . . . , L′n′ ∈ Q>0 so that
i) (at least) k pieces have length l, i. e., |{i | L′i = l}| ≥ k,
ii) and no piece is longer than l, i. e., L′i ≤ l for all i.
The solution is elementary; we give it after introducing formal notation.
We denote by m(L, l) the number of stick pieces of length l – we will also call these
maximal pieces – we can get when we cut stick L into pieces no longer than l. This is
to mean that you first cut L into two pieces, then possibly further cut those pieces, and
so on, until all pieces have length at most l. Obviously, the best thing to do is to only
ever cut with length l. We thus have

m(L, l) = ⌊L/l⌋.

Because we may also produce one shorter piece, the total number of pieces we obtain by
this process is given by

p(L, l) = ⌈L/l⌉,

and

c(L, l) = ⌈L/l⌉ − 1

denotes the number of cuts we perform.
We extend this notation to multisets of sticks, that is,

m(L, l) := Σ_{L ∈ L} m(L, l) = Σ_{L ∈ L} ⌊L/l⌋   and
c(L, l) := Σ_{L ∈ L} c(L, l) = Σ_{L ∈ L} (⌈L/l⌉ − 1).

See Figure 2 for a small example.
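These formulas translate directly into code; a minimal sketch (function names ours), using exact rationals so that the floors and ceilings are computed without rounding error:

```python
from fractions import Fraction
from math import floor, ceil

def m(L, l):
    """Number of maximal (length-l) pieces obtainable from the sticks in L."""
    return sum(floor(Fraction(Li) / l) for Li in L)

def c(L, l):
    """Number of cuts of the canonical division of L with cut length l."""
    return sum(ceil(Fraction(Li) / l) - 1 for Li in L)

# E.g., sticks [8, 7, 6] plus thirteen unit sticks, cut with length 2:
L = [8, 7, 6] + [1] * 13
print(m(L, 2), c(L, 2))  # 10 8
```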
Figure 2: Sticks L = {L1, . . . , L5} cut with some length l. Note how m(L, l) = 20 and
c(L, l) = 19. There are four non-maximal pieces.
Using this notation, the conditions of Envy-Free Fixed-Length Stick Division translate into
checking whether m(L, l) ≥ k for cut length l; we call such l feasible cut lengths (for L
and k). We define the following predicate as a shorthand:

Feasible(L, k, l) := 1 if m(L, l) ≥ k, and 0 otherwise.
Now we can give a concise algorithm for solving Envy-Free Fixed-Length Stick Division.
Algorithm 1: CanonicalCutting(L, k, l):
1. If Feasible(L, k, l):
1.1. Answer c(L, l).
2. Otherwise:
2.1. Answer ∞ (i. e., “not possible”).
Assuming the unit-cost RAM model – which we will do in this article – the runtime of
CanonicalCutting is clearly in O(n); the evaluation of Feasible and c in time O(n) each
dominates. We will see later that a better bound is Θ(min(k, n)) (cf. Lemma 3.3).
Of course, different cut lengths l cause different numbers of cuts. We want to find an
optimal cut length, that is, a length l* which minimizes the number of cuts necessary to
fulfill conditions i) and ii) of Envy-Free Fixed-Length Stick Division. We formalize this as
follows.
Problem 2: Envy-Free Stick Division
Input: Multiset L = {L1, . . . , Ln} of sticks with lengths Li ∈ Q>0 and target
number k ∈ N>0.
Output: A (feasible) cut length l* ∈ Q>0 which minimizes the result of Envy-
Free Fixed-Length Stick Division for L, k and l*.
We observe that the problem is not as easy as picking the smallest Li, cutting the
longest stick into k pieces, or using the kth longest stick (if k ≤ n). Consider the
following, admittedly artificial example which debunks all such attempts.
Example 2.1: Let

L = {mx, (m − 1)x + 1, (m − 2)x + 2, . . . , (m/2)x + m/2, x − 1, x − 1, . . . }

for a total of n = m² elements and k = (3/8)m² + (3/4)m, where m ∈ 4N>0 and x > m/2.
Note that l* = x, that is, in particular

• l* ≠ Li and
• l* ≠ Li/k

for all i ∈ [1..n]. In fact, by controlling x we get an (all but) arbitrary fraction of an Li for l*.
It is possible to extend the example so that “mx” – the stick whose fraction is optimal – has
(almost) arbitrary index i, too.
As running example we will use (Lex, k) as defined by m = 4 and x = 2, that is,

• Lex = {8, 7, 6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1} and
• k = 9.

Note that l* = 2 and m(Lex, l*) = 10 > k here. See Figure 3 for a plot of m(Lex, l).
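The claims about the running example can be checked numerically with the formula m(L, l) = Σ ⌊Li/l⌋; exact rationals make the jump at l* = 2 visible (a sketch, names ours):

```python
from fractions import Fraction

def m(L, l):
    """Number of maximal pieces for cut length l, in exact arithmetic."""
    return sum(Fraction(Li) // Fraction(l) for Li in L)

L_ex = [8, 7, 6] + [1] * 13
print(m(L_ex, 2))                   # 10 >= k = 9, so l = 2 is feasible
print(m(L_ex, Fraction(201, 100)))  # 8 < 9; since m is non-increasing, every l > 2 is infeasible
```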
3. Exploiting Structure
For ease of notation, we will from now on assume an arbitrary but fixed input (L, k) to be
given implicitly. In particular, we will use m(l) as short form of m(L, l), and similarly
for c and Feasible.
At first, we observe that both the constraint and the objective function of Envy-Free Stick Division
belong to a specific, simple class of functions.
Lemma 3.1: The functions m and c are non-increasing, piecewise-constant functions in l
with jump discontinuities of (only) the form Li/j for i ∈ [1..n] and j ∈ N>0.
Furthermore, m is left-continuous and c is right-continuous.
Proof: The functions are given as finite sums of terms that are either of the form ⌊L/l⌋ or
⌈L/l⌉ − 1. Hence, all summands are piecewise constant and never increase with growing l.
Thus, the sum is also a non-increasing piecewise-constant function.
The form Li/j of the jump discontinuities is apparent for each summand individually,
and they carry over to the sums by monotonicity.
Figure 3: The number of maximal pieces m(Lex, l) in cut length l for Lex as defined
in Example 2.1. The filled circles indicate the value of m(Lex, l) at the jump
discontinuities. [The plot shows the step function m(Lex, l) for l between 0 and 8,
with the levels k = 9 and l* = 2 marked.]
The missing continuity properties of m resp. c follow from the right-continuity of ⌊·⌋ resp.
the left-continuity of ⌈·⌉; the direction gets turned around because we consider l⁻¹, but other
than that, the arithmetic operations maintain continuity.
See Figure 3 for an illustrating plot.
Knowing this, we immediately get lots of structure in our solution space, which we will
utilize thoroughly.
Corollary 3.2: l* = max{l ∈ Q>0 | Feasible(l)}.
Note in particular that the maximum exists because Feasible is left-continuous.
This already tells us that any feasible length gives a lower bound on l*. One particularly
simple case is k < n, since then the kth largest stick is always feasible. This allows us
to get rid of all properly smaller input sticks, too, since they are certainly waste when
cutting with any optimal length. As a consequence, having any non-trivial lower bound
on l* already speeds up our search by way of speeding up feasibility checks.
Lemma 3.3: Let L ∈ Q≥0 be fixed and denote by

I>L := {i ∈ [1..n] | Li > L}

the (index) set of all sticks in L that are longer than L. Then,

m(l) = Σ_{i ∈ I>L} m(Li, l)

for all l > L.
Proof: Clearly, all summands ⌊Li/l⌋ in the definition of m(l) are zero for Li ≤ L < l.
As a direct consequence, we can push the time for checking feasibility of a candidate
solution from being proportional to n down to being proportional to the number of Li
larger than a lower bound L on the optimal length; we simply preprocess L>L in time
Θ(n). Since it is easy to find an Li that can serve as L – e. g., any one that is shorter
than any known feasible solution – we will make use of this in the definition of our set
of candidate cut lengths.
In addition, the special shape of c and Feasible comes in handy. Recall that both
functions are step functions with (potential) jump discontinuities at lengths of the form
l = Li/j (cf. Lemma 3.1). We will show that we can restrict our search for optimal cut
lengths to these values, and how to do away with many of them for efficiency.
Combining the two ideas, we will consider candidate multisets of the following form.
Definition 3.4: We define the candidate multiset(s)

C(I, fl, fu) := ⊎_{i ∈ I} { Li/j | fl(i) ≤ j ≤ fu(i) }

dependent on an index set I ⊆ [1..n] and functions fl : I → N and fu : I → N ∪ {∞}
which bound the denominator from below and above, respectively; either may implicitly
depend on L and/or k.
Note that |C| = Σ_{i ∈ I} [fu(i) − fl(i) + 1]. We denote the multiset of all candidates by
Call := C([1..n], 1, ∞).
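For finite bounding functions, this definition is straightforward to materialize; a sketch (names ours), representing the multiset as a list so that duplicates are kept:

```python
from fractions import Fraction

def candidates(L, I, fl, fu):
    """The candidate multiset C(I, fl, fu) = union over i in I of
    {L_i/j : fl(i) <= j <= fu(i)}, as a list (duplicates kept)."""
    return [Fraction(L[i]) / j
            for i in I
            for j in range(fl(i), fu(i) + 1)]

# Tiny example: two sticks, denominators 1..3 each.
C = candidates([6, 4], I=[0, 1], fl=lambda i: 1, fu=lambda i: 3)
# C contains the six ratios 6/1, 6/2, 6/3, 4/1, 4/2, 4/3 (note the duplicate value 2).
```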
First, let us note that this definition covers the optimal solution as long as upper and
lower bounds are chosen appropriately.
Lemma 3.5: There is an optimal solution at a jump discontinuity of m, i. e., l* ∈ Call.
Proof: From its definition, we know that Feasible has exactly one jump discontinuity,
and from Lemma 3.1 (via m) we know that it is one of the Li/j. By Corollary 3.2 and
left-continuity of Feasible (again via m) we know that this is indeed our solution l*.
Of course, our all-encompassing candidate multiset Call is infinite (as is the corresponding
set) and hence does not lend itself to a simple search. But there is hope: we already
know that l* ≥ l for any feasible l, which immediately implies finite (if maybe super-
polynomial) bounds on j (if we have such an l). We will now show how to restrict the set
of candidates via suitable index sets I and bounding functions fl and fu so that we can
efficiently search for l*. We have to be careful not to inadvertently remove l* by choosing
bad bounding functions.
Lemma 3.6: Let I ⊆ [1..n] and fl, fu : I → N so that
i) fl(i) = 1 or Li/(fl(i) − 1) is infeasible, and
ii) Li/fu(i) is feasible,
for all i ∈ I, and
iii) Li′ is suboptimal (i. e., Li′ is feasible, but not optimal)
for all i′ ∈ [1..n] \ I. Then,

l* ∈ C(I, fl, fu).
Proof: We argue that Call \ C(I, fl, fu) does not contain the optimal solution l*; the
claim then follows with Lemma 3.5.
Let i ∈ [1..n] and j ∈ [1..∞] be arbitrary but fixed. We investigate three cases for why
length Li/j may not be included in C(I, fl, fu).
i ∉ I: Candidate Li/j is suboptimal by Corollary 3.2 because Li/j ≤ Li and Li itself is
already suboptimal by iii).
j < fl(i): In this case, we must have fl(i) > 1, so Li/(fl(i) − 1) is infeasible by i). Clearly,
Li/j > Li/fl(i), so Li/j ≥ Li/(fl(i) − 1), and we get by monotonicity of Feasible (cf.
Lemma 3.1 via m) that Li/j is infeasible as well.
j > fu(i): Clearly, Li/j < Li/fu(i), where the latter is already feasible by ii). So, again
by Corollary 3.2, Li/j is suboptimal.
Thus, we have shown that every candidate length Li/j given by (i, j) ∈ [1..n] × [1..∞] is
either in C(I, fl, fu) or, failing that, infeasible or suboptimal.
We will call triples (I, fl, fu) of index set and bounding functions that fulfill Lemma 3.6
admissible restrictions (for L and k). We say that C(I, fl, fu) is admissible if (I, fl, fu)
is an admissible restriction.
For the remainder of this article, we will restrict ourselves to index sets Ico that contain
the indices of lengths that are larger than the n′th largest¹ length L(n′) in L, for n′ =
min(k, n + 1). This corresponds to working with I>L(n′) as defined in Lemma 3.3. We
will have to show that such index sets are indeed admissible (alongside suitable bounding
functions); intuitively, if k ≤ n then L(k) is always feasible, and otherwise we have to
work with all input lengths. We fix this convention for clarity and notational ease.
Definition 3.7: Define the cut-off length Lco by

Lco := L(k), if k ≤ n;  0, if k > n,

and the index set Ico ⊆ [1..n] as

Ico := I>Lco, if Lco is not optimal;  undefined, otherwise.

Note that Ico = [1..n] if k > n.

¹We borrow from the common notation S(k) for the kth smallest element of sequence S.
We will later see that we never invoke the undefined case, as we already have l* = L(k)
then.
In order to illustrate that we have found a useful criterion for admissible bounds, let us
briefly investigate an admittedly rather obvious choice of bounding functions. We use
the null-bound fl = i ↦ 1 and fu = i ↦ k; an optimal solution does not cut more
than k (equal-sized) pieces out of any one stick. The restriction ([1..n], 1, k) is clearly
admissible; in particular, every Li/k is feasible.
Example 2.1 Continued: For Lex and k = 9, we get

C(Ico, 1, k) = { 8/1, 8/2, 8/3, 8/4, 8/5, 8/6, 8/7, 8/8, 8/9,
7/1, 7/2, 7/3, 7/4, 7/5, 7/6, 7/7, 7/8, 7/9,
6/1, 6/2, 6/3, 6/4, 6/5, 6/6, 6/7, 6/8, 6/9,
1/1, 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 1/9 },

that is, 36 candidates. Note that there are four duplicates, so there are 32 distinct candidates.
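Searching the null-bound candidate set exhaustively already gives a correct (if slow) algorithm: by Corollary 3.2, l* is simply the largest feasible candidate. A brute-force sketch, with names of our choosing:

```python
from fractions import Fraction
from math import floor

def lstar_bruteforce(L, k):
    """l* = largest feasible candidate among {L_i/j : 1 <= j <= k}."""
    feasible = lambda l: sum(floor(Fraction(Li) / l) for Li in L) >= k
    cand = {Fraction(Li) / j for Li in L for j in range(1, k + 1)}
    return max(l for l in cand if feasible(l))

L_ex = [8, 7, 6] + [1] * 13
print(lstar_bruteforce(L_ex, 9))  # 2
```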
We give a full proof of admissibility and worst-case size here; it is illustrative even if
simple, because later proofs will follow the same structure.
Lemma 3.8: If Lco ≠ l*, then C(Ico, 1, k) is admissible.
Furthermore, |C(Ico, 1, k)| = k · min(k − 1, n) ∈ Θ(k · min(k, n)) in the worst case.
Proof: First, we show that (Ico, 1, k) is an admissible restriction (cf. Lemma 3.6).
ad i): Since fl(i) = 1 for all i, this checks out.
ad ii): Clearly, m(Li/k) ≥ k just from the contribution of summand ⌊Li/l⌋.
ad iii): We distinguish the two cases of Lco (cf. Definition 3.7).
• If k > n then Ico = [1..n], which trivially fulfills iii).
• In the other case, k ≤ n and Lco is not optimal by assumption. Then Ico =
I>Lco; therefore Li′ ≤ Lco for any i′ ∉ Ico, and Lemma 3.1 implies that Li′ is
not optimal either.
This concludes the proof of the first claim.
As for the number of candidates, note that clearly |C(Ico, 1, k)| = Σ_{i ∈ Ico} k = |Ico| · k; the
claim follows with |Ico| = min(k − 1, n) in the case that the Li are pairwise distinct (cf.
Definition 3.7).
Since we now know that we have to search only a finite domain for l?, we can start
thinking about effective and even efficient algorithms.
4. Algorithms
Just from the discussion above, a fairly elementary algorithm presents itself: first cut
the input down to the lengths given by Ico (cf. Definition 3.7), then use binary search
on the candidate set w. r. t. Feasible. This works because Feasible is non-increasing (cf.
Corollary 3.2 and Lemma 3.1).
Algorithm 2: SearchLstar⟨fl, fu⟩(L, k):
1. Compute n′ := min(k, n+1).
2. If n′ ≤ n:
   2.1. Determine Lco := L^(n′), i. e., the n′th largest length.
   2.2. If Lco is optimal, answer l* = Lco (and terminate).
2.3. Otherwise (i. e., n′ > n):
   2.4. Set Lco := 0.
3. Assemble Ico := I>Lco.
4. Compute C := C(Ico, fl, fu) as a sorted array.
5. Find l* by binary search on C w. r. t. Feasible.
6. Answer l*.

For completeness we specify that fl, fu: Ico → N.
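A compact executable sketch of the search, assuming the null-bound (fl, fu) = (1, k) over all sticks and exact rational arithmetic via Fraction; for brevity it skips the Lco shortcut of steps 1–3 and searches the full candidate multiset:

```python
from fractions import Fraction

def m(L, l):
    # total number of maximal pieces for cut length l: sum of floor(Li / l)
    return sum(Li // l for Li in L)

def search_lstar(L, k):
    # distinct candidates Li/j for the null-bound restriction, sorted ascending
    cand = sorted({Fraction(Li) / j for Li in L for j in range(1, k + 1)})
    # Feasible is non-increasing in l, so binary search for the largest
    # feasible candidate is valid; the smallest candidate min(L)/k is
    # always feasible, so the invariant below holds initially.
    lo, hi = 0, len(cand) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if m(L, cand[mid]) >= k:   # Feasible(cand[mid])
            lo = mid
        else:
            hi = mid - 1
    return cand[lo]
```

For instance, search_lstar([8, 7, 6], 9) returns Fraction(2, 1): cutting at length 2 yields 4 + 3 + 3 = 10 ≥ 9 maximal pieces, while the next larger candidate 7/3 yields only 8.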
Theorem 4.1:
Let (Ico, fl, fu) be an admissible restriction where fl and fu can be evaluated in time O(1).
Then, algorithm SearchLstar⟨fl, fu⟩ solves Envy-Free Stick Division in (worst-case) time

    T(n, k) ∈ Θ(n + |C| log |C|)

and space

    S(n, k) ∈ Θ(n + |C|).
Proof: We deal with the three claims separately.
Correctness follows immediately from Lemma 3.6 and Lemma 3.1 resp. Corollary 3.2. Note in particular that SearchLstar does indeed compute Ico as defined in Definition 3.7, and the undefined case is never reached.

Runtime: Since the algorithm contains neither loops nor recursion (at the top level), we can analyze each step on its own.
Steps 1, 2.4: These clearly take time O(1).

Step 2.1: There are well-known algorithms that perform selection in worst-case time Θ(n).

Step 2.2: Testing Lco for optimality is as easy as computing Feasible(Lco) and counting the number a of integral quotients Li/Lco contributing to m(Lco). If Feasible(Lco) (i. e., m(Lco) ≥ k) and m(Lco) − a < k, then Lco is the jump discontinuity of Feasible and Lco is optimal; otherwise it is not. Thus, this step takes time Θ(n).

Step 3: Ico can be assembled by one traversal over L with a constant-time length check per entry, stored as a simple linked list, hence in (worst-case) time Θ(n).
Step 4: By Definition 3.4 we have |C| many candidates. Sorting these takes time Θ(|C| log |C|) using e. g. Heapsort.

Step 5: The binary search clearly takes at most ⌊log₂|C| + 1⌋ steps. In each step, we evaluate Feasible in time
• Θ(|Ico|) for all candidates l > Lco using Lemma 3.3, and
• O(1) for l ≤ Lco, since we already know from the feasibility of Lco via Lemma 3.1 that these are feasible, too.
Therefore, this step needs time Θ(|Ico| log |C|) in total. It is easy to see that admissible bounds always fulfill fu(i) ≥ fl(i) for all i ∈ Ico. Therefore, |Ico| ≤ |C|, so the runtime of this step is dominated by step 4.
Space: The algorithm stores L of size Θ(n), plus maybe a copy for selection and partitioning (depending on the actual algorithm used). Step 4 then creates a Θ(|C|)-sized representation of the candidate set. Both steps 3 and 5 can be implemented iteratively, and a potential recursion depth (and therefore stack size) in step 2.1 is bounded from above by its runtime O(n). A few additional auxiliary variables require only a constant amount of memory.
For practical purposes, eliminating duplicates in step 4 is virtually free and can speed up the subsequent search. In the worst case, however, we save at most a constant factor with the bounding functions we consider (see Appendix B), so we decided to stick to the clearer presentation using multisets (instead of candidate sets).
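The optimality test used in step 2.2 above can be sketched directly from its description (the function name is_optimal is ours; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def is_optimal(L, k, l):
    """Does Feasible jump exactly at l, i.e., is l the optimal length l*?

    a counts the sticks with integral quotient Li/l; exactly those lose
    one maximal piece for lengths immediately above l, so l is the jump
    discontinuity of Feasible iff m(l) >= k but m(l) - a < k."""
    l = Fraction(l)
    m = sum(Li // l for Li in L)                       # m(l)
    a = sum(1 for Li in L if Fraction(Li) % l == 0)    # integral quotients
    return m >= k and m - a < k
```

For the lengths 8, 7, 6 with k = 9, is_optimal reports True at l = 2 and False at the feasible but non-optimal l = 3/2.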
4.1. Knowing Beats Searching

We have seen that the runtime of algorithm SearchLstar is dominated by sorting the candidate set. This is necessary for facilitating binary search; but do we have to search? As it turns out, a slightly different point of view on the problem allows us to work with the unsorted candidate multiset, and we can save a factor of log |C|.

The main observation is that m increases its value by one at every jump discontinuity, once for each Li/j that has that same value. So, knowing m(l) for any candidate length l, we know exactly how many candidates (counting duplicates) we have to move to get to the jump of Feasible. Therefore, we can make do with selecting the solution from our candidate set instead of searching through it.

The following lemma states the simple observation that m(L, l) is intimately related to the “position” of l in the decreasingly sorted candidate multiset for L.
Lemma 4.2: For all L, l ∈ Q>0,

    m(L, l) = |{ L/j | j ∈ N ∧ L/j ≥ l }|.

Proof: The right-hand side equals the largest integer j ∈ N0 for which L/j ≥ l, i. e., j ≤ L/l, which is by definition ⌊L/l⌋ = m(L, l). Note that this argument extends to the case L < l by formally setting L/0 = ∞ ≥ l.
Since we consider multisets, we can lift this property to m(l):

Corollary 4.3: For all l ∈ Q>0,

    m(l) = Σ_{i=1}^{n} |{ Li/j | j ∈ N ∧ Li/j ≥ l }| = |Call ∩ [l, ∞)|.

In other words, m(l) is the number of occurrences of candidates that are at least l. We can use this to transform our search problem (cf. Corollary 3.2) into a selection problem.
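Corollary 4.3 is easy to check numerically: m(l) coincides with the number of candidates, counted with multiplicity, that are at least l. A sketch with the infinite multiset Call truncated at j ≤ ⌈Li/l⌉, which loses no candidate ≥ l since Li/j ≥ l holds exactly for j ≤ Li/l:

```python
from fractions import Fraction
from math import ceil

def m(L, l):
    # total number of maximal pieces: sum of floor(Li / l)
    return sum(Li // l for Li in L)

def count_candidates_at_least(L, l):
    # candidates Li/j with Li/j >= l, counted with multiplicity;
    # enumerating j up to ceil(Li/l) suffices for the comparison
    return sum(1 for Li in L for j in range(1, ceil(Fraction(Li) / l) + 1)
               if Fraction(Li) / j >= l)
```

For L = {8, 7, 6} and l = 2, both functions return 10.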
Lemma 4.4: l* = Call^(k).

Proof: Interpret the multiset Call also as the set of its distinct elements, i. e., l ∈ Call ⟺ Call(l) > 0. We can thus write the statement of Corollary 4.3 as

    m(l) = Σ_{l′ ∈ Call, l′ ≥ l} Call(l′).    (1)
    i        1    2    3    4    5    6    7    · · ·
    li      10    8    8    8    5    4    4    · · ·
    m(li)    1    4    4    4    5    7    7    · · ·

Figure 4: An example illustrating eq. (2) with li = Call^(i) for some suitable instance. Note that the lower bound is tight for i ∈ {1, 5} and the upper one for i = 2.
As a direct consequence, we get for every i ∈ N that

    i ≤ m(Call^(i)) ≤ i + Call(Call^(i)) − 1;    (2)

see Figure 4 for a sketch of the situation. Feasibility of l := Call^(k) follows immediately.
Now let l̂ := min{ l′ ∈ Call | l′ > l }; we see that

    m(l̂) = m(l) − Call(l)  (by (1))  ≤ (k + Call(l) − 1) − Call(l)  (by (2))  = k − 1

and therefore l̂ is infeasible. By the choice of l̂ and the monotonicity of Feasible (cf. Lemma 3.1) we get that l = Call^(k) is indeed the largest feasible candidate; this concludes the proof via Corollary 3.2 and Lemma 3.5.
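Both inequality (2) and Lemma 4.4 can be spot-checked by brute force; the small instance below is an arbitrary choice of ours, and Call is truncated at j ≤ k, which leaves the k largest candidates intact:

```python
from fractions import Fraction

L, k = [10, 8, 8, 8], 5   # arbitrary small instance

def m(l):
    return sum(Li // l for Li in L)

# decreasingly sorted candidate multiset, truncated at j <= k;
# the i-th entry (1-based) equals Call^(i) for all i <= k
cand = sorted((Fraction(Li) / j for Li in L for j in range(1, k + 1)),
              reverse=True)

# eq. (2): i <= m(Call^(i)) <= i + Call(Call^(i)) - 1
for i in range(1, k + 1):
    li = cand[i - 1]
    assert i <= m(li) <= i + cand.count(li) - 1

# Lemma 4.4: the k-th largest candidate is the largest feasible one
assert m(cand[k - 1]) >= k
assert all(m(c) < k for c in set(cand) if c > cand[k - 1])
```

Here the k-th largest candidate is 5 (= 10/2), with m(5) = 2 + 1 + 1 + 1 = 5 = k.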
Of course, we want to select from a small candidate set such as those we saw above; surely, selecting the kth largest element from these is not correct in general. Also, not all restrictions may allow us to select, because if we miss an Li/j between two others, we may count wrong. The relation carries over to admissible restrictions with only small adaptations, though.
Corollary 4.5: Let C = C(I, fl, fu) be an admissible candidate multiset. Then,

    l* = C^(k′)  with  k′ = k − Σ_{i∈I} (fl(i) − 1).

Proof: With the multiset

    M := ⊎_{i∈I} { Li/j | j ∈ N, j < fl(i) },

we get by Lemma 3.6 ii) and Lemma 3.1 that

    C ∩ [l*, ∞) = (Call ∩ [l*, ∞)) \ M.
In addition, we know from Lemma 4.4 that

    (Call ∩ [l*, ∞))^(k) = l*.

Since M contains only infeasible candidates (cf. Lemma 3.6 i) and Lemma 3.1), we also have that

    M ⊂ (l*, ∞),

and by definition

    M ∩ C = ∅.

The claim

    l* = Call^(k) = C^(k − |M|)

follows by counting.
Hence, we can use any of the candidate sets we have investigated above. Instead of binary search, we determine l* by selecting the k′th largest element according to Corollary 4.5. Since selection takes only linear time, we save a logarithmic factor compared to SearchLstar.
We give the full algorithm for completeness; note that steps 1 and 2 have not changed compared to SearchLstar.
Algorithm 3: SelectLstar⟨fl, fu⟩(L, k):
1. Compute n′ := min(k, n+1).
2. If n′ ≤ n:
   2.1. Determine Lco := L^(n′), i. e., the n′th largest length.
   2.2. If Lco is optimal, answer l* = Lco (and terminate).
2.3. Otherwise (i. e., n′ > n):
   2.4. Set Lco := 0.
3. Assemble Ico := I>Lco.
4. Compute C := C(Ico, fl, fu) as a multiset.
5. Determine k′ := k − Σ_{i∈Ico} (fl(i) − 1).
6. Answer l* := C^(k′).
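Steps 4–6 can be sketched as follows; for brevity the sketch uses heapq.nlargest instead of a worst-case linear selection algorithm, so it demonstrates the counting from Corollary 4.5 rather than the asymptotic bound, and it defaults to the null-bound restriction over all indices:

```python
import heapq
from fractions import Fraction

def select_lstar(L, k, f_l=None, f_u=None):
    """Steps 4-6 of SelectLstar for I = [1..n]: build C(I, f_l, f_u) and
    select the k'-th largest candidate, k' = k - sum_i (f_l(i) - 1)
    (Corollary 4.5).  Defaults to the null-bound (1, k)."""
    n = len(L)
    f_l = f_l or (lambda i: 1)
    f_u = f_u or (lambda i: k)
    cand = [Fraction(L[i]) / j
            for i in range(n)
            for j in range(f_l(i), f_u(i) + 1)]
    k_prime = k - sum(f_l(i) - 1 for i in range(n))
    # heapq.nlargest keeps duplicates, as the multiset semantics require
    return heapq.nlargest(k_prime, cand)[-1]
```

On the lengths 8, 7, 6 with k = 9 the 9th largest candidate is 2, matching the binary-search result.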
Theorem 4.6:
Let (Ico, fl, fu) be an admissible restriction where fl and fu can be evaluated in time O(1).
Then, SelectLstar⟨fl, fu⟩ solves Envy-Free Stick Division in time and space Θ(n + |C|).
Proof: Correctness is clear from Lemma 4.4 and Corollary 4.5.
We borrow from the resource analysis of Theorem 4.1, with the following changes.

ad 4: We do not sort C, so creating the multiset takes only time Θ(|C|); the result takes up space Θ(|C|), too, though.

ad 5, 6: Instead of binary search on C with repeated evaluation of Feasible, we just have to compute k′ (which clearly takes time Θ(|Ico|)) and then select the k′th largest element from C. This takes time Θ(|C|) using e. g. the median-of-medians algorithm [Blu+73].

The resource requirements of the other steps remain unchanged, that is, Θ(n). The bounds we claim in the theorem follow directly.
It has become clear by now that decreasing the number of candidates is crucial for solving Envy-Free Stick Division quickly, provided we do not drop l* along the way. We now endeavor to do just that by choosing better admissible bounding functions.
5. Reducing the Number of Candidates

We can decrease the number of candidates significantly by observing the following. Whenever we cut L^(i) (which is the ith largest length) into j pieces of length L^(i)/j each, we also get at least j pieces of the same length from each of the longer sticks. In total, this makes for at least i·j pieces of length L^(i)/j; see Figure 5 for a visualization. By rearranging the inequality k ≥ i·j, we obtain a new admissible bound on j. For the algorithm, we have to sort Ico, though, so that Li = L^(i).
Example 2.1 Continued: For Lex and k = 9, we get

    C(Ico, 1, ⌈k/i⌉) = 8/1, 8/2, 8/3, 8/4, 8/5, 8/6, 8/7, 8/8, 8/9, 7/1, 7/2, 7/3, 7/4, 7/5, 6/1, 6/2, 6/3,

that is, 17 candidates (16 distinct ones); compare to |C(Ico, 1, k)| = 36.
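The example counts can be reproduced mechanically; the sketch below assumes the three relevant lengths 8, 7, 6 (decreasingly sorted, with 1-based rank i):

```python
from fractions import Fraction
from math import ceil

def candidates_harmonic(L_sorted, k):
    # the stick of rank i (1-based, longest first) contributes j = 1 .. ceil(k/i)
    return [Fraction(L_sorted[i - 1]) / j
            for i in range(1, len(L_sorted) + 1)
            for j in range(1, ceil(k / i) + 1)]

C = candidates_harmonic([8, 7, 6], 9)
```

Here len(C) == 17 and len(set(C)) == 16; the single duplicate value is 2 = 8/4 = 6/3.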
Lemma 5.1: Assume that Lco ≠ l* and Ico is sorted w. r. t. decreasing lengths.
Then, C(Ico, 1, ⌈k/i⌉) is admissible.
Furthermore, |C(Ico, 1, ⌈k/i⌉)| ∈ Θ(k · log(min(k, n))) in the worst case.

Proof: Again, we start by showing that (Ico, 1, ⌈k/i⌉) is an admissible restriction.
Figure 5: When considering cut lengths L^(i)/j, no j larger than ⌈k/i⌉ is relevant. The sketch shows a cutting with L^(i) and j = k/i. Note how we have k maximal pieces for sure (dark); there may be many more (light).
ad i), iii): Similar to the proof of Lemma 3.8.

ad ii): Because Ico is sorted, we have Li = L^(i) and Li′ ≥ Li for i′ ≤ i. Therefore, we get for all l = Li/fu(i) = Li·⌈k/i⌉⁻¹ with i ∈ Ico that

    m(l) = Σ_{i′=1}^{n} ⌊Li′/l⌋ ≥ Σ_{i′=1}^{i} ⌊Li/l⌋ = Σ_{i′=1}^{i} ⌈k/i⌉ ≥ k.

This concludes the proof of the first claim.
For the size bound, let for short C := C(Ico, 1, ⌈k/i⌉). Clearly, |C| = Σ_{i∈Ico} ⌈k/i⌉ (cf. Definition 3.4). With |Ico| = n′ − 1 = min(n, k−1) in the worst case (cf. the proof of Lemma 3.8), the Θ(k log n′) bound on |C| follows from

    |C| = Σ_{i=1}^{n′−1} ⌈k/i⌉ ≤ n′ + Σ_{i=1}^{n′} k/i = n′ + k·H_{n′} ∈ Θ(k log n′)

and

    |C| = Σ_{i=1}^{n′−1} ⌈k/i⌉ ≥ Σ_{i=1}^{n′−1} k/i = k·H_{n′−1} ∈ Θ(k log n′)

with the well-known asymptotic H_k ∼ ln k of the harmonic numbers [GKP94, eq. (6.66)].
Combining Theorem 4.6 and Lemma 5.1, we have obtained an algorithm that takes time and space Θ(n + k·log(min(k, n))). This is already quite efficient. By putting in some more work, however, we can save the last logarithmic factor that separates us from linear time and space.
5.1. Sandwich Bounds

Lemma 3.6 gives us some idea about what criteria we can use for restricting the set of lengths we investigate. We will now try to match these criteria as exactly as possible, deriving an interval [l̲, l̄] ⊆ Q>0 that includes l* and is as small as possible; from it, we can infer almost as tight bounds (fl, fu).
Assume we have some length L < l* and consider only lengths l > L. We have seen in Lemma 3.3 that we can then restrict ourselves to lengths from I>L when computing m(l). Now, from the definition of m it is clear that we can sandwich m(l) by

    Σ_{i∈I>L} (Li/l − 1) < m(l) ≤ Σ_{i∈I>L} Li/l

for l > L. We denote for short ΣI := Σ_{i∈I} Li for any I ⊆ [1..n]; rearranging terms, we can thus express these bounds more easily, both with respect to notational and computational effort. We get

    ΣI>L / l − |I>L| < m(l) ≤ ΣI>L / l    (3)

for all l > L. Note that L = 0 is a valid choice, as then simply I>L = [1..n].
Rearranging these inequalities “around” m(l) = k yields bounds on l*, which we can translate into bounds (fl, fu) on j (cf. Definition 3.4). We lose some precision because we round to integer bounds, but that adds at most a linear number of candidates. A small technical hurdle is to ensure that both bounds are greater than our chosen L so that we can apply the sandwich bounds (3) in our proof.
Lemma 5.2: Let Lco and Ico be defined as in Definition 3.7, and

    • l̲ := max( Lco, ΣIco / (k + |Ico|) )  and
    • l̄ := ΣIco / k.

Then, (Ico, p(Li, l̄), p(Li, l̲)) is admissible.
Proof: First, we determine what we know about our length bounds. Recall that Ico = I>Lco ≠ ∅ and Lco is not optimal.
We see that l̲ is feasible by calculating

    m(l̲) > ΣIco/l̲ − |Ico| = ΣIco/(ΣIco/(k + |Ico|)) − |Ico| = k,   if l̲ > Lco (by (3)),
    m(l̲) = m(Lco) ≥ k,                                             if l̲ = Lco > 0,    (4)
Figure 6: The number of maximal pieces m(l), plotted over the reciprocal of the cut length l, for (Lex, 9) as defined in Example 2.1. Note how we can exclude all but three candidates (the filled circles) in a narrow corridor around 1/l* = 0.5, defined by the points at which the bounds from (3) attain k = 9, namely l̲ = 1.75 and l̄ ≈ 2.3.
using in the second case that Lco is feasible. For the upper bound, we first note that because Lco is not optimal, there is some δ > 0 with

    ΣIco ≥ k(Lco + δ) > k·Lco,

from which we get by rearranging that l̄ > Lco. Therefore, we can bound

    m(l̄ + ε) ≤ ΣIco/(l̄ + ε) < ΣIco/l̄ = ΣIco/(ΣIco/k) = k    (5)

for any ε > 0 (by (3)), that is, any length larger than l̄ is infeasible. Note in particular that, in every case, l̄ > l̲, so we always have a non-empty interval to work with.
We now show the conditions of Lemma 3.6 one by one.

ad i) Let i ∈ Ico. If p(Li, l̄) = 1, the condition is trivially fulfilled. In the other case, we calculate

    l := Li/(p(Li, l̄) − 1) = Li/(⌈Li/l̄⌉ − 1) > Li/(Li/l̄) = l̄

and therewith m(l) < k by (5).

ad ii) Let i ∈ Ico again. We calculate

    l := Li/p(Li, l̲) = Li/⌈Li/l̲⌉ ≤ Li/(Li/l̲) = l̲
which implies by Lemma 3.1 that

    m(l) ≥ m(l̲) ≥ k  (by (4)).
ad iii) See the proof of Lemma 3.8.
Example 2.1 Continued: For Lex and k = 9, we get

    C(Ico, p(Li, l̄), p(Li, l̲)) = 8/4, 8/5, 7/4, 7/3, 6/3, 6/4,

that is, six candidates (five distinct ones); compare to |C(Ico, 1, ⌈k/i⌉)| = 17 and |C(Ico, 1, k)| = 36.
See Figure 6 for a visualization of the effect our bounds have on the candidate set; note that we keep some additional candidates smaller than l̲.
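The bounds and the candidate multiset of this example can be reproduced as follows (lengths 8, 7, 6 and k = 9 assumed, with p(L, l) = ⌈L/l⌉ as in the notation index; we pass Lco = 0 for simplicity since k exceeds the number of participating sticks):

```python
from fractions import Fraction
from math import ceil

def sandwich_candidates(L, k, L_co=0):
    """Candidate multiset for the restriction (I_co, p(Li, l_hi), p(Li, l_lo))
    of Lemma 5.2; L is assumed to contain exactly the sticks indexed by I_co."""
    sigma = sum(L)                                           # Sigma_{I_co}
    l_lo = max(Fraction(L_co), Fraction(sigma, k + len(L)))  # lower bound on l
    l_hi = Fraction(sigma, k)                                # upper bound on l
    p = lambda Li, l: ceil(Fraction(Li) / l)                 # p(L, l) = ceil(L/l)
    cand = [Fraction(Li) / j for Li in L
            for j in range(p(Li, l_hi), p(Li, l_lo) + 1)]
    return cand, l_lo, l_hi
```

For [8, 7, 6] and k = 9 this yields l̲ = 7/4, l̄ = 7/3 and the six candidates listed above, including l* = 2.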
We see in this example that the bounds from Lemma 5.2 are not as tight as they could be; C(Ico, p(Li, l̄), p(Li, l̲)) ∩ [l̲, l̄] can be properly smaller (but not by more than one element per Li), and since l* ∈ [l̲, l̄] it is still a valid candidate set in some sense.
The reason for us sticking with the larger set is that we have defined admissibility in a way that is local to each Li – we need to envelop l* for each length – so we have no way to express global length bounds formally, at least not within this framework. In particular, using fu = i ↦ ⌊Li/l̲⌋ is not admissible.
Nevertheless, we have obtained yet another admissible restriction and, as it turns out, it is good enough to achieve a linearly sized candidate set. Only some combinatorics stand between us and our next corollary.
Lemma 5.3: |C(Ico, p(Li, l̄), p(Li, l̲))| ∈ Θ(min(k, n)) in the worst case.
Proof: Recall that |Ico| = min(k−1, n) in the worst case (cf. the proof of Lemma 3.8).
The upper bound on |C| then follows from the following calculation:

    |C| = Σ_{i∈Ico} ( p(Li, l̲) − p(Li, l̄) + 1 )
        = |Ico| + Σ_{i∈Ico} ⌈Li/l̲⌉ − Σ_{i∈Ico} ⌈Li/l̄⌉
        ≤ |Ico| + Σ_{i∈Ico} (Li/l̲ + 1) − Σ_{i∈Ico} Li/l̄
        ≤ |Ico| + ΣIco·(k + |Ico|)/ΣIco + |Ico| − ΣIco·k/ΣIco
        = 3 · |Ico|.

A similar calculation shows the lower bound |C| ≥ |Ico|.
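The 3·|Ico| bound is easy to confirm empirically; a sketch over random instances (the instance generation is our choice, with Lco = 0 so that all sticks participate):

```python
import random
from fractions import Fraction
from math import ceil

def sandwich_count(L, k):
    # |C| for the restriction of Lemma 5.2 with L_co = 0,
    # i.e. all sticks participate in I_co
    sigma = sum(L)
    l_lo = Fraction(sigma, k + len(L))
    l_hi = Fraction(sigma, k)
    return sum(ceil(Fraction(Li) / l_lo) - ceil(Fraction(Li) / l_hi) + 1
               for Li in L)

random.seed(1)
for _ in range(100):
    L = [random.randint(1, 1000) for _ in range(random.randint(1, 20))]
    k = random.randint(len(L), 3 * len(L))
    assert len(L) <= sandwich_count(L, k) <= 3 * len(L)
```

On the running example, sandwich_count([8, 7, 6], 9) returns 6, the candidate count from the example above.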
    (fl, fu)                  SearchLstar⟨fl, fu⟩     SelectLstar⟨fl, fu⟩
    (1, k)                    Θ(kn log k)             Θ(kn)
    (1, ⌈k/i⌉)²               Θ(k log(k) log(n))      Θ(k log n)
    (p(Li, l̄), p(Li, l̲))      Θ(n log n)              Θ(n)

Figure 7: Assuming k ≥ n, the table shows the worst-case runtime bounds shown above for the combinations of algorithm and bounding functions.
6. Conclusion
We have given a formal definition of Envy-Free Stick Division, derived means to restrict the
search for an optimal solution to a small, discrete space of candidates, and developed
algorithms that perform this search efficiently. Figure 7 summarizes the asymptotic
runtimes of the combinations of candidate space and algorithm.
All in all, we have shown the following complexity bounds on our problem.
Corollary 6.1: Envy-Free Stick Division can be solved in time and space O(n).
Proof: Algorithm SelectLstar⟨p(Li, l̄), p(Li, l̲)⟩ serves as a witness via Theorem 4.6, Lemma 5.2 and Lemma 5.3.
A simple adversary argument shows that a sublinear algorithm is impossible; since the
input is not sorted, adding a sufficiently large stick breaks any algorithm that does not
consider all sticks. We have thus found an asymptotically optimal algorithm. Given
the easy structure and almost elementary nature – we need but two calls to a selection
algorithm – we expect it to be efficient in practice as well.
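Putting the pieces together, an end-to-end sketch of the method (with library selection via heapq standing in for median-of-medians, so the linear asymptotic bound is not literal here):

```python
import heapq
from fractions import Fraction
from math import ceil

def solve(L, k):
    """End-to-end sketch: cut-off, sandwich bounds, then selection."""
    n = len(L)
    # steps 1-2: cut-off length and optimality shortcut
    L_co = sorted(L, reverse=True)[k - 1] if k <= n else 0
    if L_co > 0:
        l = Fraction(L_co)
        m = sum(Li // l for Li in L)
        a = sum(1 for Li in L if Fraction(Li) % l == 0)
        if m >= k and m - a < k:        # L_co is the jump of Feasible
            return l
    # step 3: restrict to sticks longer than L_co
    I = [Li for Li in L if Li > L_co]
    # step 4: sandwich bounds and candidate multiset (Lemma 5.2)
    sigma = sum(I)
    l_lo = max(Fraction(L_co), Fraction(sigma, k + len(I)))
    l_hi = Fraction(sigma, k)
    p = lambda Li, l: ceil(Fraction(Li) / l)
    cand = [Fraction(Li) / j for Li in I
            for j in range(p(Li, l_hi), p(Li, l_lo) + 1)]
    # steps 5-6: select the k'-th largest candidate (Corollary 4.5)
    k_prime = k - sum(p(Li, l_hi) - 1 for Li in I)
    return heapq.nlargest(k_prime, cand)[-1]
```

For instance, solve([8, 7, 6], 9) returns Fraction(2, 1), and solve([8, 3], 2) returns 4 (cut the 8-stick in half).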
Acknowledgments
Erel Segal-Halevi³ posed the original question [SH14] on Computer Science Stack Exchange. Our approach is based on observations in the answers by Abhishek Bansal⁴, InstructedA⁵ and FrankW⁶. Hence, even though the eventual algorithm and its presentation have been developed and refined offline with the use of a blackboard
² The additional time Θ(|Ico| log |Ico|) necessary for sorting Ico, as required by Lemma 5.1, is always dominated by generating C.
3http://cs.stackexchange.com/users/1342/
4http://cs.stackexchange.com/users/19311/
5http://cs.stackexchange.com/users/20169/
6http://cs.stackexchange.com/users/13022/
and lots of paper, the result has been the product of a small “crowd” collaboration made
possible by the Stack Exchange platform.
We thank Chao Xu for pointing us towards the work by Cheng and Eppstein [CE14],
and for providing the observation we utilize in Section 4.1.
References
[Blu+73] Manuel Blum, Robert W. Floyd, Vaughan Pratt, Ronald L. Rivest, and Robert E. Tarjan. “Time Bounds for Selection.” In: Journal of Computer and System Sciences 7.4 (Aug. 1973), pp. 448–461. doi: 10.1016/S0022-0000(73)80033-9.
[BT96] Steven J. Brams and Alan D. Taylor. Fair division. Cambridge University
Press, 1996. isbn: 978-0-521-55644-6.
[CE14] Zhanpeng Cheng and David Eppstein. “Linear-time Algorithms for Proportional Apportionment.” In: International Symposium on Algorithms and Computation (ISAAC) 2014. Springer, 2014. doi: 10.1007/978-3-319-13075-0_46.
[GKP94] Ronald L. Graham, Donald E. Knuth, and Oren Patashnik. Concrete Mathematics: A Foundation for Computer Science. Addison-Wesley, 1994. isbn: 978-0-201-55802-9.
[SH14] Erel Segal-Halevi. Cutting equal sticks from different sticks. Sept. 2014. url:
http://cs.stackexchange.com/q/30073 (visited on 11/26/2014).
[SHHA15] Erel Segal-Halevi, Avinatan Hassidim, and Yonatan Aumann. “Waste Makes
Haste: Bounded Time Protocols for Envy-Free Cake Cutting with Free Dis-
posal.” In: The 14th International Conference on Autonomous Agents and
Multiagent Systems (AAMAS). May 2015.
A. Notation Index
In this section, we collect the notation used in this paper. Some might be seen as “stan-
dard”, but we think including them here hurts less than a potential misunderstanding
caused by omitting them.
Generic Mathematical Notation
[1..n] . . . . . . . . . the set {1, . . . , n} ⊆ N.
⌊x⌋, ⌈x⌉ . . . . . . . floor and ceiling functions, as used in [GKP94].
ln n . . . . . . . . . . natural logarithm.
log²n . . . . . . . . . (log n)².
H_n . . . . . . . . . . nth harmonic number; H_n = Σ_{i=1}^{n} 1/i.
p_n . . . . . . . . . . nth prime number.
A . . . . . . . . . . . multisets are denoted by bold capital letters.
A(x) . . . . . . . . . multiplicity of x in A, i. e., we are using the function notation of multisets here.
A ⊎ B . . . . . . . . multiset union; multiplicities add up.
L^(k) . . . . . . . . . the kth largest element of multiset L (assuming it exists); if the elements of L are written in non-increasing order, L is given by L^(1) ≥ L^(2) ≥ L^(3) ≥ · · · .
    Example: For L = {10, 10, 8, 8, 8, 5}, we have L^(1) = L^(2) = 10, L^(3) = L^(4) = L^(5) = 8 and L^(6) = 5.
Notation Specific to the Problem
stick . . . . . . . . . . one of the lengths of the input, before any cutting.
piece . . . . . . . . . . one of the lengths after cutting; each piece results from one input stick
after some cutting operations.
maximal piece . . . . . piece of maximal length (after cutting).
n. . . . . . . . . . . . number of sticks in the input.
L, Li, L . . . . . . . L = {L1, . . . , Ln} with Li ∈ Q>0 for all i ∈ [1..n] contains the lengths of the sticks in the input.
    We use L as a free variable that represents (bounds on) input stick lengths.
k . . . . . . . . . . . k ∈ N, the number of maximal pieces required.
l . . . . . . . . . . . free variable that represents (bounds on) candidate cut lengths; by Lemma 3.1 only l = Li/j for j ∈ N have to be considered.
l* . . . . . . . . . . . the optimal cut length, i. e., the cut length that yields at least k maximal pieces while minimizing the total length of non-maximal (i. e. waste) pieces.
c(L, l) . . . . . . . . the number of cuts needed to cut stick L into pieces of lengths ≤ l; c(L, l) = ⌈L/l − 1⌉.
m(L, l) . . . . . . . . the number of maximal pieces obtainable by cutting stick L into pieces of lengths ≤ l; m(L, l) = ⌊L/l⌋.
p(L, l) . . . . . . . . the minimal total number of pieces resulting from cutting stick L into pieces of lengths ≤ l; p(L, l) = ⌈L/l⌉.
c(l) = c(L, l) . . . . total number of cuts needed to cut all sticks into pieces of lengths ≤ l; c(L, l) = Σ_{L∈L} c(L, l).
m(l) = m(L, l) . . . total number of maximal pieces resulting from cutting all sticks into pieces of lengths ≤ l; m(L, l) = Σ_{L∈L} m(L, l).
Feasible(l) = Feasible(L, k, l) . . indicator function that is 1 when l is a feasible length and 0 otherwise; Feasible(L, k, l) = [m(L, l) ≥ k].
C(I, fl, fu) . . . . . multiset of candidate lengths Li/j, restricted by the index set I of considered input sticks Li, and lower resp. upper bounds on j; cf. Definition 3.4 (page 13).
Call . . . . . . . . . the unrestricted (infinite) candidate multiset Call = C([1..n], 1, ∞).
admissible restriction (I, fl, fu) . . sufficient conditions on a restriction (I, fl, fu) to ensure that l* ∈ C(I, fl, fu); cf. Lemma 3.6 (page 14).
I>L . . . . . . . . . . the set of indices of input sticks Li > L; cf. Lemma 3.3.
Ico, Lco . . . . . . . Ico = I>Lco is our canonical index set with cut-off length Lco, the kth largest input length (or 0 if k > n); cf. Definition 3.7 (page 14).
ΣI . . . . . . . . . . assuming I ⊆ [1..n], this is a shorthand for Σ_{i∈I} Li.
l̲, l̄ . . . . . . . . . lower and upper bounds on candidate lengths, l̲ ≤ l ≤ l̄, so that l* ∈ Call ∩ [l̲, l̄]; see Lemma 5.2 (page 23).
B. On the Number of Distinct Candidates
As mentioned in Section 4, algorithm SearchLstar can profit from removing duplicates
from the candidate multisets during sorting. We will show in the subsequent proofs that
none of the restrictions introduced above cause more than a constant fraction of all
candidates to be duplicates.
We denote with C̄(. . .) the set obtained by removing duplicates from the multiset C(. . .) with the same restrictions.
Lemma B.1: |C̄(Ico, 1, k)| ∈ Θ(|C(Ico, 1, k)|) in the worst case.

Proof: Let for short C := C(Ico, 1, k) and C̄ := C̄(Ico, 1, k). It is clear that |C̄| ≤ |C|; we will show now that |C̄| ∈ Ω(|C|) in the worst case.
Consider the instance

    Lprimes = {p_n, . . . , p_1}

with p_i the ith prime number and any k ∈ N; note that Li = p_{n−i+1}. Let for ease of notation n′ := min(k, n + 1); note that Lco = Lprimes^(n′) if k ≤ n. We have |Ico| = min(k−1, n) because the Li are pairwise distinct, and therefore |C| = k|Ico| = k(n′−1).
Since the Li are also pairwise coprime, all candidates Li/j for which j is not a multiple of Li are pairwise distinct. Therefore, we get

    |C̄| ≥ |C| − Σ_{i=n−n′+2}^{n} ⌊k/p_i⌋
        ≥ |C| − Σ_{i=n−n′+2}^{n} k/p_i
        ≥ |C| − |C| · n′/p_{n′}
        ≥ |C| − |C| · 2/3
        = |C|/3.

In particular, we can show that k/p_k ≤ 2/3 by k/p_k < 0.4 for k ≥ 20 [GKP94, eq. (4.20)] and checking all k < 20 manually; the maximum is attained at k = 2.
Lemma B.2: |C̄(Ico, 1, ⌈k/i⌉)| ∈ Θ(|C(Ico, 1, ⌈k/i⌉)|) in the worst case.

Proof: Let for short C := C(Ico, 1, ⌈k/i⌉) and C̄ := C̄(Ico, 1, ⌈k/i⌉). It is clear that |C̄| ≤ |C|; we will show now that |C̄| ∈ Ω(|C|) in the worst case.
We make use of the same instance (Lprimes, k) we used in the proof of Lemma B.1, with a similar calculation:

    |C̄| ≥ |C| − Σ_{i=n−n′+2}^{n} ⌊⌈k/i⌉/p_i⌋
        ≥ |C| − Σ_{i=n−n′+2}^{n} (k/i + 1)/p_i
        ≥ |C| − (n′−1)/p_{n′−1} · (1 + k/(n′−1))
        ≥ |C| − 2/3 · (1 + k/(n′−1))
        ∈ Θ(|C|)

because k/n′ ∈ o(k log n′) = o(|C|).
Lemma B.3: |C̄(Ico, p(Li, l̄), p(Li, l̲))| ∈ Θ(|C(Ico, p(Li, l̄), p(Li, l̲))|) in the worst case.

Proof: Let again for short C := C(Ico, p(Li, l̄), p(Li, l̲)) and C̄ := C̄(Ico, p(Li, l̄), p(Li, l̲)). It is clear that |C̄| ≤ |C|; we will show now that |C̄| ∈ Ω(|C|) in the worst case.
We make use of our trusted instance (Lprimes, k) again. We show that every prime yields at least one candidate unique to itself, as long as k is constant (which is sufficient for a worst-case argument).
Recall that l̄ > l̲, so every Li has some j; we note furthermore that for fixed i ∈ Ico,

    j ≤ p(Li, l̲) = ⌈Li · (k + |Ico|)/ΣIco⌉ ≤ Li · 2k/p_n ≤ 2k < Li

for big enough n, using ΣIco ≥ p_n, |Ico| ≤ k − 1 and Li ≤ p_n, and in particular because p_n ∼ n ln n [GKP94, p. 110]. That is, every Li with i ∈ Ico yields at least one Li/j that no other stick does, since all Li are coprime. Hence

    |C̄| ≥ |Ico| ∈ Θ(|C|).