arXiv:1112.1136v1 [cs.DS] 6 Dec 2011
Secretary Problems with Convex Costs
Siddharth Barman, Seeun Umboh, Shuchi Chawla, David Malec
University of Wisconsin–Madison
{sid, seeun, shuchi, dmalec}@cs.wisc.edu
Abstract
We consider online resource allocation problems where given a set of requests our goal is to select
a subset that maximizes a value minus cost type of objective function. Requests are presented online
in random order, and each request possesses an adversarial value and an adversarial size. The online
algorithm must make an irrevocable accept/reject decision as soon as it sees each request. The “profit”
of a set of accepted requests is its total value minus a convex cost function of its total size. This problem
falls within the framework of secretary problems. Unlike previous work in that area, one of the main
challenges we face is that the objective function can be positive or negative and we must guard against
accepting requests that look good early on but cause the solution to have an arbitrarily large cost as more
requests are accepted. This requires designing new techniques.
We study this problem under various feasibility constraints and present online algorithms with competitive ratios only a constant factor worse than those known in the absence of costs for the same feasibility constraints. We also consider a multidimensional version of the problem that generalizes multidimensional knapsack within a secretary framework. In the absence of any feasibility constraints, we present an O(ℓ) competitive algorithm where ℓ is the number of dimensions; this matches within constant factors the best known ratio for multidimensional knapsack secretary.
1 Introduction
We study online resource allocation problems under a natural profit objective: a single server accepts or
rejects requests for service so as to maximize the total value of the accepted requests minus the cost imposed
by them on the system. This model captures, for example, the optimization problem faced by a cloud
computing service accepting jobs, a wireless access point accepting connections from mobile nodes, or an
advertiser in a sponsored search auction deciding which keywords to bid on. In many of these settings, the
server must make accept or reject decisions in an online fashion as soon as requests are received without
knowledge of the quality of future requests. We design online algorithms with the goal of achieving a small
competitive ratio—ratio of the algorithm’s performance to that of the best possible (offline optimal) solution.
A classical example of online decision making is the secretary problem. Here a company is interested in
hiring a candidate for a single position; candidates arrive for interview in random order, and the company
must accept or reject each candidate following the interview. The goal is to select the best candidate as
often as possible. What makes the problem challenging is that each interview merely reveals the rank of the
candidate relative to the ones seen previously, but not the ones following. Nevertheless, Dynkin [11] showed
that it is possible to succeed with constant probability using the following algorithm: unconditionally reject
the first 1/e fraction of the candidates; then hire the next candidate that is better than all of the ones seen
previously. Dynkin showed that as the number of candidates goes to infinity, this algorithm hires the best
candidate with probability approaching 1/e and in fact this is the best possible.
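Dynkin's 1/e rule is simple enough to simulate directly. The sketch below is an illustration, not code from the paper; candidates are represented by distinct ranks, revealed in a uniformly random order:

```python
import random

def dynkin_secretary(ranks):
    """Run Dynkin's 1/e rule on a random-order stream of distinct ranks.

    Unconditionally observes the first n/e candidates, then hires the first
    later candidate who beats everything seen so far (or the last candidate
    if none does).
    """
    n = len(ranks)
    sample_size = int(n / 2.718281828459045)
    best_seen = max(ranks[:sample_size]) if sample_size > 0 else float("-inf")
    for r in ranks[sample_size:]:
        if r > best_seen:
            return r
    return ranks[-1]  # forced to take the final candidate

def success_probability(n, trials, seed=0):
    """Monte Carlo estimate of the probability of hiring the best candidate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ranks = list(range(n))
        rng.shuffle(ranks)
        if dynkin_secretary(ranks) == n - 1:  # hired the best candidate
            hits += 1
    return hits / trials

print(success_probability(n=100, trials=20000))  # empirically close to 1/e ≈ 0.368
```

For moderate n the empirical success rate sits slightly above 1/e, consistent with Dynkin's asymptotic guarantee.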
More general resource allocation settings may allow picking multiple candidates subject to a certain
feasibility constraint. We call such a problem a generalized secretary problem (GSP) and use (Φ,F) to
denote an instance of the problem. Here F denotes a feasibility constraint that the set of accepted requests
must satisfy (e.g. the size of the set cannot exceed a given bound), and Φ denotes an objective function
that we wish to maximize. As in the classical setting, we assume that requests arrive in random order; the
feasibility constraint F is known in advance but the quality of each request, in particular its contribution
to Φ, is only revealed when the request arrives. Recent work has explored variants of the GSP where Φ
is the sum over the accepted requests of the “value” of each request. For such a sum-of-values objective,
constant factor competitive ratios are known for various kinds of feasibility constraints including cardinality
constraints [17, 19], knapsack constraints [4], and certain matroid constraints [5].
In many settings, the linear sum-of-values objective does not adequately capture the tradeoffs that the server faces in accepting or rejecting a request, and feasibility constraints provide only a rough approximation. Consider, for example, a wireless access point accepting connections. Each accepted request improves
resource utilization and brings value to the access point. However as the number of accepted requests grows
the access point performs greater multiplexing of the spectrum, and must use more and more transmitting
power in order to maintain a reasonable connection bandwidth for each request. The power consumption
and its associated cost are nonlinear functions of the total load on the access point. This directly translates
into a value minus cost type of objective function where the cost is an increasing function of the load or total
size of all the requests accepted.
Our goal then is to accept a set A out of a universe U of requests such that the “profit” π(A) = v(A) −
C(s(A)) is maximized; here v(A) is the total value of all requests in A, s(A) is the total size, and C is a
known increasing convex cost function.¹
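The objective is easy to state in code. Below is a minimal sketch; the quadratic cost C(s) = s² is an illustrative convex cost chosen for the example, not one used in the paper:

```python
def profit(requests, cost):
    """Profit of an accepted set: total value minus convex cost of total size.

    `requests` is a list of (value, size) pairs; `cost` maps total size to a
    cost and is assumed nondecreasing and convex.
    """
    total_value = sum(v for v, _ in requests)
    total_size = sum(s for _, s in requests)
    return total_value - cost(total_size)

quadratic = lambda s: s * s  # illustrative convex cost

# Accepting the second request looks good by value alone, but the convex
# cost makes the larger set strictly worse than the first request by itself.
print(profit([(10, 2)], quadratic))           # 10 - 4 = 6
print(profit([(10, 2), (7, 3)], quadratic))   # 17 - 25 = -8
```

This illustrates the central difficulty named above: requests that look good early on can drive the total cost, and hence the profit, arbitrarily low.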
Note that when the cost function takes on only the values 0 and ∞ it captures a knapsack constraint, and therefore the problem (π, 2^U) (i.e. where the feasibility constraint is trivial) is a generalization of the
knapsack secretary problem [4]. We further consider objectives that generalize the ℓ-dimensional knapsack secretary problem. Here, we are given ℓ different (known) convex cost functions C_i for 1 ≤ i ≤ ℓ, and
each request is endowed with ℓ sizes, one for each dimension. The profit of a set is given by π(A) = v(A) − Σ_{i=1}^{ℓ} C_i(s_i(A)), where s_i(A) is the total size of the set in dimension i.

We consider the profit maximization problem under various feasibility constraints. For single-dimensional costs, we obtain online algorithms with competitive ratios within a constant factor of those achievable for a sum-of-values objective with the same feasibility constraints. For ℓ-dimensional costs, in the absence of any constraints, we obtain an O(ℓ) competitive ratio. We remark that this is essentially the best approximation achievable even in the offline setting: Dean et al. [9] show an Ω(ℓ^{1−ε}) hardness for the simpler ℓ-dimensional knapsack problem under a standard complexity-theoretic assumption. For the multidimensional problem with general feasibility constraints, our competitive ratios are worse by a factor of O(ℓ^5) over the corresponding versions without costs. Improving this factor is a possible avenue for future research.

We remark that the profit function π is submodular. Recently several works [13, 6, 16] have looked at secretary problems with submodular objective functions and developed constant competitive algorithms. However, all of these works make the crucial assumption that the objective is always nonnegative; they therefore do not capture π as a special case. In particular, if Φ is a monotone increasing submodular function (that is, if adding more elements to the solution cannot decrease its objective value), then to obtain

¹ Convexity is crucial in obtaining any nontrivial competitive ratio: if the cost function were concave, the only solutions with a nonnegative objective function value may be to accept everything or nothing.
a good competitive ratio it suffices to show that the online solution captures a good fraction of the optimal
solution. In the case of [6] and [16], the objective function is not necessarily monotone. Nevertheless,
nonnegativity implies that the universe of elements can be divided into two parts, over each of which the
objective essentially behaves like a monotone submodular function in the sense that adding extra elements
to a good subset of the optimal solution does not decrease its objective function value. In our setting, in
contrast, adding elements with too large a size to the solution can cause the cost of the solution to become
too large and therefore imply a negative profit, even if the rest of the elements are good in terms of their
value-size tradeoff. As a consequence we can only guarantee good profit when no “bad” elements are added
to the solution, and must ensure that this holds with constant probability. This necessitates designing new
techniques.
Our techniques. In the absence of feasibility constraints (see Section 3), we note that it is possible to classify elements as “good” or “bad” based on a threshold on their value-to-size ratio (a.k.a. density) such that any large enough subset of the good elements provides a good approximation to profit; the optimal threshold is defined according to the offline optimal fractional solution. Our algorithm learns an estimate of this threshold from the first few elements (that we call the sample) and accepts all the elements in the remaining stream that cross the threshold. Learning the threshold from the sample is challenging. First, following the intuition about avoiding all bad elements, our estimate must be conservative, i.e. exceed the true threshold, with constant probability. Second, the optimal threshold for the sample can differ significantly from the optimal threshold for the entire stream and is therefore not a good candidate for our estimate. Our key observation is that the optimal profit over the sample is a much better behaved random variable and is, in particular, sufficiently concentrated; we use this observation to carefully pick an estimate for the density threshold.

With general feasibility constraints, it is no longer sufficient to merely classify elements as good and bad: an arbitrary feasible subset of the good elements is not necessarily a good approximation. Instead, we decompose the profit function into two parts, each of which can be optimized by maximizing a certain sum-of-values function (see Section 4). This suggests a reduction from our problem to two different instances of the GSP with sum-of-values objectives. The catch is that the new objectives are not necessarily nonnegative and so previous approaches for the GSP don’t work directly. We show that if the decomposition of the profit function is done with respect to a good density threshold and an extra filtering step is applied to weed out bad elements, then the two new objectives on the remaining elements are always nonnegative and admit good solutions. At this point we can employ previous work on GSP with a sum-of-values objective to obtain a good approximation to one or the other component of profit. We note that while the exposition in Section 4 focuses on a matroid feasibility constraint, the results of that section extend to any downwards-closed feasibility constraint that admits good offline and online algorithms with a sum-of-values objective.²

² We obtain an O(α⁴β) competitive algorithm where α is the best offline approximation and β is the best online competitive ratio for the sum-of-values objective.

In the multidimensional setting (discussed in Section 5), elements have different sizes along different dimensions. Therefore, a single density does not capture the value-size tradeoff that an element offers. Instead we can decompose the value of an element into ℓ different values, one for each dimension, and define densities in each dimension accordingly. This decomposes the profit across dimensions as well. Then, at a loss of a factor of ℓ, we can approximate the profit objective along the “best” dimension. The problem with this approach is that a solution that is good (or even best) in one dimension may in fact be terrible with respect to the overall profit, if its profit along other dimensions is negative. Surprisingly we show that it is possible to partition values across dimensions in such a way that there is a single ordering over
elements in terms of their value-size tradeoff that is respected in each dimension; this allows us to prove that
a solution that is good in one dimension is also good in other dimensions. We present an O(ℓ) competitive
algorithm for the unconstrained setting based on this approach in Section 5, and defer a discussion of the
constrained setting to Section 6.
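The sample-then-threshold idea described above can be caricatured in a few lines. This is a simplified illustration, not the paper's algorithm: in particular, the naive median-density rule below stands in for the paper's more careful threshold estimate derived from the optimal profit over the sample.

```python
def threshold_online(stream, sample_frac, cost, pick_threshold):
    """Sample-then-threshold skeleton for the unconstrained profit problem.

    Rejects the first `sample_frac` fraction of (value, size) requests,
    derives a density threshold from that sample via `pick_threshold`,
    then accepts every later request whose density clears the threshold.
    """
    n = len(stream)
    cut = int(sample_frac * n)
    sample, rest = stream[:cut], stream[cut:]
    rho = pick_threshold(sample)
    accepted = [(v, s) for v, s in rest if v / s >= rho]
    value = sum(v for v, _ in accepted)
    size = sum(s for _, s in accepted)
    return accepted, value - cost(size)

def median_density(sample):
    """Deliberately naive threshold rule (median sample density)."""
    densities = sorted(v / s for v, s in sample)
    return densities[len(densities) // 2] if densities else 0.0

stream = [(9, 1), (2, 4), (6, 2), (1, 5), (8, 1), (3, 6)]
accepted, prof = threshold_online(stream, 0.5, lambda s: s * s, median_density)
print(accepted, prof)  # only the dense request (8, 1) is accepted; profit 8 - 1 = 7
```

The analysis in Section 3 is precisely about replacing `median_density` with an estimate that is conservative (exceeds the true threshold) with constant probability.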
Related work. The classical secretary problem has been studied extensively; see [14, 15] and [23] for a survey. Recently a number of papers have explored variants of the GSP with a sum-of-values objective. Hajiaghayi et al. [17] considered the variant where up to k secretaries can be selected (a.k.a. the k-secretary problem) in a game-theoretic setting and gave a strategyproof constant-competitive mechanism. Kleinberg [19] later showed an improved 1 − O(1/√k) competitive algorithm for the classical setting. Babaioff et al. [4] generalized this to a setting where different candidates have different sizes and the total size of the selected set must be bounded by a given amount, and gave a constant factor approximation. In [5] Babaioff et al. considered another generalization of the k-secretary problem to matroid feasibility constraints. A matroid is a set system over U that is downwards closed (that is, subsets of feasible sets are feasible), and satisfies a certain exchange property (see [21] for a comprehensive treatment). They presented an O(log r) competitive algorithm, where r is the rank of the matroid, or the size of a maximal feasible set. This was subsequently improved to an O(√log r)-competitive algorithm by Chakraborty and Lachish [7]. Several papers have improved upon the competitive ratio for special classes of matroids [1, 10, 20]. Bateni et al. [6] and Gupta et al. [16] were the first to (independently) consider nonlinear objectives in this context. They gave online algorithms for nonmonotone nonnegative submodular objective functions with competitive ratios within constant factors of the ratios known for the sum-of-values objective under the same feasibility constraint. Other versions of the problem that have been studied recently include: settings where elements are drawn from known or unknown distributions but arrive in an adversarial order [8, 18, 22], versions where values are permuted randomly across elements of a nonsymmetric set system [24], and settings where the algorithm is allowed to reverse some of its decisions at a cost [2, 3].
2 Notation and Preliminaries
We consider instances of the generalized secretary problem represented by the pair (π,F), and an implicit
number n of requests or elements that arrive in an online fashion. U denotes the universe of elements.
F ⊂ 2^U is a known downwards-closed feasibility constraint. Our goal is to accept a subset of elements A ⊂ U with A ∈ F such that the objective function π(A) is maximized. For a given set T ⊂ U, we use O∗(T) = argmax_{A ∈ F ∩ 2^T} π(A) to denote the optimal solution over T; O∗ is used as shorthand for O∗(U).
We now describe the function π.
In the single-dimensional cost setting, each element e ∈ U is endowed with a value v(e) and a size s(e). Values and sizes are integral and are a priori unknown. The size and value functions extend to sets of elements as s(A) = Σ_{e∈A} s(e) and v(A) = Σ_{e∈A} v(e). Then the “profit” of a subset is given by π(A) = v(A) − C(s(A)), where C is a nondecreasing convex function on size, C : Z⁺ → Z⁺. The following quantities will be useful in our analysis:
• The density of an element, ρ(e) := v(e)/s(e). We assume without loss of generality that densities of
elements are unique and denote the unique element with density γ by e_γ.
• The marginal cost function, c(s) := C(s) − C(s − 1). Note that this is an increasing function.
• The inverse marginal cost function, s̄(ρ), which is defined to be the maximum size for which an
element of density ρ will have a nonnegative profit increment, that is, the maximum s for which
ρ ≥ c(s).
• The density prefix for a given density γ and a set T, P^T_γ := {e ∈ T : ρ(e) ≥ γ}, and the partial density prefix, P̄^T_γ := P^T_γ \ {e_γ}. We use P_γ and P̄_γ as shorthand for P^U_γ and P̄^U_γ respectively.
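These definitions translate directly into code. The sketch below is illustrative only; the quadratic cost is an assumption for the example, with marginal cost c(s) = 2s − 1:

```python
def density(v, s):
    """rho(e) = v(e) / s(e)."""
    return v / s

def marginal_cost(C, s):
    """c(s) = C(s) - C(s-1); increasing when C is convex."""
    return C(s) - C(s - 1)

def inv_marginal_cost(C, rho, s_max=10**6):
    """s-bar(rho): the maximum size s for which rho >= c(s), i.e. the largest
    size at which an element of density rho still has a nonnegative profit
    increment."""
    s = 0
    while s + 1 <= s_max and rho >= marginal_cost(C, s + 1):
        s += 1
    return s

def density_prefix(elements, gamma):
    """P_gamma: all (value, size) elements with density at least gamma."""
    return {e for e in elements if density(*e) >= gamma}

C = lambda s: s * s  # illustrative convex cost; c(s) = 2s - 1
print(inv_marginal_cost(C, 7))             # largest s with 7 >= 2s - 1, i.e. 4
print(density_prefix([(9, 1), (2, 4)], 3))  # only (9, 1) has density >= 3
```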
We will sometimes find it useful to discuss fractional relaxations of the offline problem of maximizing
π subject to F. To this end, we extend the definition of subsets of U to allow for fractional membership.
We use αe to denote an αfraction of element e; this has value v(αe) = αv(e) and size s(αe) = αs(e). We
say that a fractional subset A is feasible if its support supp(A) is feasible. Note that when the feasibility
constraint can be expressed as a set of linear constraints, this relaxation is more restrictive than the natural
linear relaxation.
Note that since costs are a convex nondecreasing function of size, it may at times be more profitable to accept a fraction of an element rather than the whole. That is, argmax_α π(αe) may be strictly less than 1. For such elements, ρ(e) < c(s(e)). We use F to denote the set of all such elements: F = {e ∈ U : argmax_α π(αe) < 1}, and I = U \ F to denote the remaining elements. Our solutions will generally approximate the optimal profit from F by running Dynkin’s algorithm for the classical secretary problem; most of our analysis will focus on I. Let F∗(T) denote the optimal (feasible) fractional subset of T ∩ I for a given set T. Then π(F∗(T)) ≥ π(O∗(T ∩ I)). We use F∗ as shorthand for F∗(U), and let s∗ be the size of this solution.
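The split between F and I can be checked numerically per element: with the cost extended to fractional sizes, the profit of an α-fraction of e is αv(e) − C(αs(e)). A small sketch (the quadratic cost and the grid search are illustration-only assumptions):

```python
def frac_profit(v, s, alpha, C):
    """Profit of an alpha-fraction of an element with value v and size s."""
    return alpha * v - C(alpha * s)

def best_fraction(v, s, C, grid=1000):
    """Grid-search argmax over alpha in [0, 1] of the fractional profit."""
    return max(range(grid + 1), key=lambda k: frac_profit(v, s, k / grid, C)) / grid

C = lambda x: x * x  # illustrative convex cost, c(s) = 2s - 1

# e1 = (9, 1) has density 9 >= c(1) = 1: taking all of it is optimal (e1 in I).
# e2 = (2, 4) has density 0.5 < c(4) = 7: only a fraction is worth taking (e2 in F).
print(best_fraction(9, 1, C))  # 1.0
print(best_fraction(2, 4, C))  # strictly less than 1
```

This matches the characterization in the text: e belongs to F exactly when its density falls below the marginal cost at its own size.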
In the multidimensional setting each element has an ℓ-dimensional size s(e) = (s_1(e), ..., s_ℓ(e)). The cost function is composed of ℓ different nondecreasing convex functions, C_i : Z⁺ → Z⁺. The cost of a set of elements is defined to be C(A) = Σ_i C_i(s_i(A)) and the profit of A, as before, is its value minus its cost: π(A) = v(A) − C(A).
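The multidimensional profit is a direct transcription of the formula above. The two costs below (quadratic and linear) are chosen purely for illustration:

```python
def multi_profit(requests, costs):
    """pi(A) = v(A) - sum_i C_i(s_i(A)) for requests given as
    (value, size_vector) pairs, with one convex cost per dimension."""
    ell = len(costs)
    total_value = sum(v for v, _ in requests)
    total_size = [sum(s[i] for _, s in requests) for i in range(ell)]
    return total_value - sum(C(total_size[i]) for i, C in enumerate(costs))

costs = [lambda x: x * x, lambda x: 2 * x]  # illustrative convex costs
A = [(10, (1, 2)), (6, (2, 1))]
print(multi_profit(A, costs))  # 16 - (3^2 + 2*3) = 16 - 15 = 1
```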
2.1 Balanced Sampling
Our algorithms learn the distribution of element values and sizes by observing the first few elements. Because of the random order of arrival, these elements form a random subset of the universe U. The following
concentration result is useful in formalizing the representativeness of the sample.
Lemma 2.1. Given constant c ≥ 3 and a set of elements I with associated nonnegative weights, w_i for i ∈ I, say we construct a random subset J by including each element of I uniformly at random with probability 1/2. If for all k ∈ I, w_k ≤ (1/c) Σ_{i∈I} w_i, then the following inequality holds with probability at least 0.76:

Σ_{j∈J} w_j ≥ β(c) Σ_{i∈I} w_i,
where β(c) is a nondecreasing function of c (and furthermore is independent of I).
We begin the proof of Lemma 2.1 with a restatement of Lemma 1 from [12] since it plays a crucial role
in our argument. Note that we choose a different parameterization than they do, since in our setting the
balance between approximation ratio and probability of success is different.
Lemma 2.2. Let X_i, for i ≥ 1, be indicator random variables for a sequence of independent, fair coin flips. Then, for S_i = Σ_{k=1}^{i} X_k, we have Pr[∀i, S_i ≥ ⌊i/3⌋] ≥ 0.76.
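Lemma 2.2 is easy to sanity-check empirically. The Monte Carlo sketch below estimates the probability over a finite horizon (which can only overestimate the infinite-horizon probability, so an estimate well above 0.76 is consistent with the lemma); horizon and trial counts are arbitrary choices for the example:

```python
import random

def prefix_bound_holds(n, rng):
    """Check S_i >= floor(i/3) for all prefixes i <= n of fair coin flips."""
    s = 0
    for i in range(1, n + 1):
        s += rng.randint(0, 1)
        if s < i // 3:
            return False
    return True

def estimate(n=500, trials=4000, seed=1):
    """Monte Carlo estimate of Pr[S_i >= floor(i/3) for all i <= n]."""
    rng = random.Random(seed)
    return sum(prefix_bound_holds(n, rng) for _ in range(trials)) / trials

print(estimate())  # roughly 0.8, comfortably above the lemma's 0.76 bound
```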