Article

Abstract

We study the problem of sorting under incomplete information, when queries are used to resolve uncertainties. Each of n data items has an unknown value, which is known to lie in a given interval. We can pay a query cost to learn the actual value, and we may allow an error threshold in the sorting. The goal is to find a nearly-sorted permutation by performing a minimum-cost set of queries. We show that an offline optimum query set can be found in polynomial time, and that both oblivious and adaptive problems have simple query-competitive algorithms. The query-competitiveness for the oblivious problem is n for uniform query costs, and unbounded for arbitrary costs; for the adaptive problem, the ratio is 2. We then present a unified adaptive strategy for uniform query costs that yields the following improved results: (i) a 3/2-query-competitive randomized algorithm; (ii) a 5/3-query-competitive deterministic algorithm if the dependency graph has no 2-components after some preprocessing, which has query-competitive ratio 3/2 + O(1/k) if the components obtained have size at least k; and (iii) an exact algorithm if the intervals constitute a laminar family. The first two results have matching lower bounds, and we have a lower bound of 7/5 for large components. We also give a randomized adaptive algorithm with query-competitive factor 1 + 4/(3√3) ≈ 1.7698 for arbitrary query costs, and we show that the 2-query-competitive deterministic adaptive algorithm can be generalized for queries returning intervals and for a more general graph problem (which is also a generalization of the vertex cover problem), by using the local ratio technique. Furthermore, we prove that the advice complexity of the adaptive problem is ⌊n/2⌋ if no error threshold is allowed, and ⌈(n/3)·lg 3⌉ for the general case. Finally, we present some graph-theoretical results regarding co-threshold tolerance graphs, and we discuss uncertainty variants of some classical interval problems.
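The 2-query-competitive adaptive algorithm mentioned in the abstract rests on a witness-pair argument: if two uncertainty intervals properly overlap, or a known value falls strictly inside another item's interval, no correct algorithm can order that pair without a query, so querying both members of the pair costs at most twice the optimum. The sketch below illustrates this idea under uniform query costs; the function names, the open-interval representation, and the `query` callback are illustrative assumptions of this sketch, not the paper's pseudocode.

```python
# Minimal sketch of witness-pair querying for sorting under uncertainty
# (uniform query costs, no error threshold). Each item i has an open
# interval intervals[i] = (lo, hi) containing its true value; query(i)
# reveals the exact value. Any correct algorithm must query at least one
# member of each witness pair, so querying both is at most 2x optimal.

def sort_with_queries(intervals, query):
    n = len(intervals)
    val = [None] * n  # exact value once queried

    def unresolved(i, j):
        if val[i] is not None and val[j] is not None:
            return False
        if val[i] is not None:
            i, j = j, i              # make i the unqueried item
        li, hi = intervals[i]
        if val[j] is not None:
            return li < val[j] < hi  # known value strictly inside open interval
        lj, hj = intervals[j]
        return max(li, lj) < min(hi, hj)  # two open intervals properly overlap

    changed = True
    while changed:                   # each productive pass makes >= 1 new query
        changed = False
        for i in range(n):
            for j in range(i + 1, n):
                if unresolved(i, j):  # witness pair: query its unknown members
                    for k in (i, j):
                        if val[k] is None:
                            val[k] = query(k)
                    changed = True

    # Every pair is now comparable; queried items sort by exact value and
    # tie-break before unqueried intervals starting at the same point.
    return sorted(range(n),
                  key=lambda i: (val[i], 0) if val[i] is not None
                  else (intervals[i][0], 1))
```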


... Variants of hypergraph orientation have been widely studied since the model of explorable uncertainty was proposed [23]. Sorting and hypergraph orientation are well known to admit efficient polynomial-time algorithms if precise input data is given, and they are well understood in the setting of explorable uncertainty: the best known deterministic algorithms are 2-competitive, and no deterministic algorithm can be better [23,22,9]. For sorting, the competitive ratio can be improved to 1.5 using randomization [22]. ...
... Sorting and hypergraph orientation are well known to admit efficient polynomial-time algorithms if precise input data is given, and they are well understood in the setting of explorable uncertainty: the best known deterministic algorithms are 2-competitive, and no deterministic algorithm can be better [23,22,9]. For sorting, the competitive ratio can be improved to 1.5 using randomization [22]. In the stochastic setting, where the precise weights of the vertices are drawn according to known distributions over the intervals, there exists a 1.618-competitive algorithm for hypergraph orientation and a 4/3-competitive algorithm for special cases [9]. ...
... Querying this set leads to 1-consistency but may perform arbitrarily badly in case of incorrect predictions (as shown below). On the other hand, known 2-competitive algorithms for the adversarial problems without predictions [23,22] are no better than 2-consistent, and the algorithms for the stochastic setting [9] do not guarantee any robustness at all. The known lower bounds of 2 rule out any robustness factor less than 2 for our model. ...
Preprint
Full-text available
Learning-augmented algorithms have been attracting increasing interest, but have only recently been considered in the setting of explorable uncertainty where precise values of uncertain input elements can be obtained by a query and the goal is to minimize the number of queries needed to solve a problem. We study learning-augmented algorithms for sorting and hypergraph orientation under uncertainty, assuming access to untrusted predictions for the uncertain values. Our algorithms provide improved performance guarantees for accurate predictions while maintaining worst-case guarantees that are best possible without predictions. For hypergraph orientation, for any γ ≥ 2, we give an algorithm that achieves a competitive ratio of 1 + 1/γ for correct predictions and γ for arbitrarily wrong predictions. For sorting, we achieve an optimal solution for accurate predictions while still being 2-competitive for arbitrarily wrong predictions. These tradeoffs are the best possible. We also consider different error metrics and show that the performance of our algorithms degrades smoothly with the prediction error in all the cases where this is possible.
... In this query model, we consider very fundamental problems that underlie numerous applications: sorting, computing the minimum element, and computing a minimum spanning tree in a graph with uncertain edge weights. These problems are well understood in the setting of explorable uncertainty: The best known deterministic algorithms are 2-competitive and no deterministic algorithm can be better [24,37,39,49]. For the sorting and minimum problems, we consider the setting where we want to solve the problem for a number of different, possibly overlapping subsets of a given ground set of uncertain elements. ...
... In particular, he showed for the problem of identifying all maximum elements of a set of uncertain values that querying the intervals in order of non-increasing right endpoints requires at most one more query than the optimal query set. Subsequent work addressed finding the k-th smallest value in a set of uncertainty intervals [26,36], caching problems [52], computing a function value [40], sorting [37], and combinatorial optimization problems, such as shortest path [25], the knapsack problem [29], scheduling problems [3,6,20], the MST problem and matroids [21,24,27,49,50]. ...
... Both a deterministic 2-competitive algorithm and a randomized 1.707-competitive algorithm are known for the more general problem of finding the minimum base in a matroid [23,49], even for the case with non-uniform query costs [49]. For sorting a single set, a 2-competitive algorithm exists (even with arbitrary query costs) and is best possible [37]. In the case of uniform query costs, the algorithm simply queries witness sets of size 2; in the case of arbitrary costs, it first queries a minimum-weight vertex cover of the interval graph corresponding to the instance and then executes any remaining queries that are still necessary. ...
Preprint
Full-text available
We study how to utilize (possibly machine-learned) predictions in a model for optimization under uncertainty that allows an algorithm to query unknown data. The goal is to minimize the number of queries needed to solve the problem. Considering fundamental problems such as finding the minima of intersecting sets of elements or sorting them, as well as the minimum spanning tree problem, we discuss different measures for the prediction accuracy and design algorithms with performance guarantees that improve with the accuracy of predictions and that are robust with respect to very poor prediction quality. We also provide new structural insights for the minimum spanning tree problem that might be useful in the context of explorable uncertainty regardless of predictions. Our results prove that untrusted predictions can circumvent known lower bounds in the model of explorable uncertainty. We complement our results by experiments that empirically confirm the performance of our algorithms.
... It enables us to investigate how uncertainty influences online decision quality in a more quantitative way. The concept of exploring uncertainty has attracted a lot of attention and has been studied for different problems, such as sorting [15], finding the median [12], identifying a set with the minimum weight among a given collection of feasible sets [9], finding shortest paths [11], computing minimum spanning trees [16], etc. More recent work and a survey can be found in [8,11,14]. ...
Preprint
Full-text available
In this work, we study a scheduling problem with explorable uncertainty. Each job comes with an upper limit on its processing time, which can potentially be reduced by testing the job, which also takes time. The objective is to schedule all jobs on a single machine with minimum total completion time. The challenge lies in deciding which jobs to test and the order of testing/processing jobs. The online problem was first introduced with unit testing time and later generalized to variable testing times. For this general setting, the upper bounds on the competitive ratio are 4 and 3.3794 for deterministic and randomized online algorithms, while the lower bounds for unit testing time stand, which are 1.8546 (deterministic) and 1.6257 (randomized). We continue the study of the variable testing times setting. We first enhance the analysis framework and improve the competitive ratio of the deterministic algorithm from 4 to 1+√2 ≈ 2.4143. Using the new analysis framework, we propose a new deterministic algorithm that further improves the competitive ratio to 2.316513. The new framework also enables us to develop a randomized algorithm improving the expected competitive ratio from 3.3794 to 2.152271.
Chapter
In this work, we study a scheduling problem with explorable uncertainty. Each job comes with an upper limit on its processing time, which can potentially be reduced by testing the job, which also takes time. The objective is to schedule all jobs on a single machine with minimum total completion time. The challenge lies in deciding which jobs to test and the order of testing/processing jobs. The online problem was first introduced with unit testing time [5, 6] and later generalized to variable testing times [1]. For this general setting, the upper bounds on the competitive ratio are 4 and 3.3794 for deterministic and randomized online algorithms [1], while the lower bounds for unit testing time stand [5, 6], which are 1.8546 (deterministic) and 1.6257 (randomized). We continue the study of the variable testing times setting. We first enhance the analysis framework in [1] and improve the competitive ratio of the deterministic algorithm in [1] from 4 to 1+√2 ≈ 2.4143. Using the new analysis framework, we propose a new deterministic algorithm that further improves the competitive ratio to 2.316513. The new framework also enables us to develop a randomized algorithm improving the expected competitive ratio from 3.3794 to 2.152271.
Article
We consider two-stage robust optimization problems, which can be seen as games between a decision maker and an adversary. After the decision maker fixes part of the solution, the adversary chooses a scenario from a specified uncertainty set. Afterwards, the decision maker can react to this scenario by completing the partial first-stage solution to a full solution. We extend this classic setting by adding another adversary stage after the second decision-maker stage, which results in min-max-min-max problems, thus pushing two-stage settings further towards more general multi-stage problems. We focus on budgeted uncertainty sets and consider both the continuous and discrete case. For the former, we show that a wide range of robust combinatorial optimization problems can be decomposed into polynomially many subproblems, which can be solved in polynomial time for example in the case of (representative) selection. For the latter, we prove NP-hardness for a wide range of problems, but note that the special case where first- and second-stage adversarial costs are equal can remain solvable in polynomial time.
Chapter
Full-text available
Decision-making under uncertainty is a major challenge in logistics. Mathematical optimization has a long tradition of providing powerful methods for solving logistics problems. While classical optimization models for uncertainty in the input data do not consider the option to actively query the precise value of uncertain input elements, this option is in practice often available at a certain cost. The recent line of research on optimization under explorable uncertainty develops methods with provable performance guarantees for such scenarios. In this chapter, we highlight some recent results from the mathematical optimization perspective and outline the potential power of such models and techniques for solving logistics problems.
Article
We study problems with stochastic uncertainty information on intervals for which the precise value can be queried by paying a cost. The goal is to devise an adaptive decision tree to find a correct solution to the problem in consideration while minimizing the expected total query cost. We show that, for the sorting problem, such a decision tree can be found in polynomial time. For the problem of finding the data item with minimum value, we have some evidence for hardness. This contradicts intuition, since the minimum problem is easier both in the online setting with adversarial inputs and in the offline verification setting. However, the stochastic assumption can be leveraged to beat both deterministic and randomized approximation lower bounds for the online setting.
Article
Full-text available
We introduce a novel adversarial model for scheduling with explorable uncertainty. In this model, the processing time of a job can potentially be reduced (by an a priori unknown amount) by testing the job. Testing a job j takes one unit of time and may reduce its processing time from the given upper limit p̄_j (which is the time taken to execute the job if it is not tested) to any value between 0 and p̄_j. This setting is motivated, e.g., by applications where a code optimizer can be run on a job before executing it. We consider the objective of minimizing the sum of completion times on a single machine. All jobs are available from the start, but the reduction in their processing times as a result of testing is unknown, making this an online problem that is amenable to competitive analysis. The need to balance the time spent on tests and the time spent on job executions adds a novel flavor to the problem. We give the first and nearly tight lower and upper bounds on the competitive ratio for deterministic and randomized algorithms. We also show that minimizing the makespan is a considerably easier problem for which we give optimal deterministic and randomized online algorithms.
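The core dilemma described above (spend one unit of time testing a job, or run it untested for its full upper limit p̄_j) is naturally illustrated by a threshold rule. The sketch below is a simplified offline evaluation of such a rule, not the paper's online algorithm: the threshold `theta`, the upfront test-set choice, and the shortest-processing-time ordering on revealed loads are assumptions made for illustration only.

```python
# Illustrative threshold policy for scheduling with testing on one machine,
# objective: total completion time. Testing job j costs 1 time unit and
# reveals its true time p[j] <= pbar[j]; an untested job runs for pbar[j].
# This sketch evaluates a fixed test set offline; the online algorithms in
# the paper must interleave tests and executions without knowing p[j].

def schedule_with_testing(pbar, p, theta=2.0):
    """pbar: upper limits; p: true times (known only after testing)."""
    n = len(pbar)
    # Test only jobs whose upper limit is large enough that the one extra
    # unit of testing time can plausibly pay off (theta is a free parameter).
    tested = [pbar[j] >= theta for j in range(n)]
    # Time the machine is busy with job j: test + true run, or untested run.
    load = [(1 + p[j]) if tested[j] else pbar[j] for j in range(n)]
    # For a fixed test set, shortest-processing-time order on the resulting
    # loads minimizes the total completion time.
    order = sorted(range(n), key=lambda j: load[j])
    t, total = 0.0, 0.0
    for j in order:
        t += load[j]
        total += t
    return total, order
```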
Article
Full-text available
We present a framework for computing with input data specified by intervals, representing uncertainty in the values of the input parameters. To compute a solution, the algorithm can query the input parameters that yield more refined estimates in the form of sub-intervals and the objective is to minimize the number of queries. The previous approaches address the scenario where every query returns an exact value. Our framework is more general as it can deal with a wider variety of inputs and query responses and we establish interesting relationships between them that have not been investigated previously. Although some of the approaches of the previous restricted models can be adapted to the more general model, we require more sophisticated techniques for the analysis and we also obtain improved algorithms for the previous model. We address selection problems in the generalized model and show that there exist 2-update competitive algorithms that do not depend on the lengths or distribution of the sub-intervals and hold against the worst case adversary. We also obtain similar bounds on the competitive ratio for the MST problem in graphs.
Article
Full-text available
Merging sorted segments is a core topic of fundamental computer science that has many different applications, such as n-body simulation. In this research, we propose Lazy-Merge, a novel implementation of sequential in-place k-way merging algorithms, which can be utilized in their parallel counterparts. The implementation divides the k-way merging problem into t ordered and independent smaller k-way merging tasks (partitions), but each merging task includes a set of scattered ranges to be merged by an existing merging algorithm. The final merged list includes ranges with ordered elements, but the ranges themselves are not ordered. Lazy-Merge utilizes a novel usage of indexes to access the entire set of merged elements in order. Its merging time complexity is O(k log(n/k) + merge(n/p)), where k, n, and p are the number of segments, the list size, and the number of processors (partitions), respectively. Here, merge(n/p) represents the time needed to merge n/p elements by the in-place merging algorithm used. The time complexity of accessing an element in the merged list is O(log k); this can be made constant if k processors are used. The results of the proposed work are compared with those of bitonic merge and the best time-space optimal algorithms in terms of number of moves and execution time. In comparison with the existing algorithms, a significant speedup and a reasonable reduction factor in the number of moves are achieved.
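For contrast with Lazy-Merge's O(k log(n/k) + merge(n/p)) bound, the classical heap-based k-way merge costs O(log k) per output element and uses an output buffer rather than working in place. The sketch below is this standard textbook baseline, not the Lazy-Merge algorithm itself.

```python
import heapq

# Classical k-way merge baseline: repeatedly pop the smallest head element
# among k sorted segments. Each of the n output elements costs O(log k)
# heap work, for O(n log k) total, plus O(n) extra output space -- the
# regime that in-place approaches such as Lazy-Merge aim to improve on.

def k_way_merge(segments):
    heap = [(seg[0], i, 0) for i, seg in enumerate(segments) if seg]
    heapq.heapify(heap)
    out = []
    while heap:
        value, i, pos = heapq.heappop(heap)
        out.append(value)
        if pos + 1 < len(segments[i]):
            heapq.heappush(heap, (segments[i][pos + 1], i, pos + 1))
    return out

# Example: k_way_merge([[1, 4, 9], [2, 3], [5, 7]]) -> [1, 2, 3, 4, 5, 7, 9]
```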
Article
Full-text available
The local ratio technique is a methodology for the design and analysis of algorithms for a broad range of optimization problems. The technique is remarkably simple and elegant, and yet can be applied to several classical and fundamental problems (including covering problems, packing problems, and scheduling problems). The local ratio technique uses elementary math and requires combinatorial insight into the structure and properties of the problem at hand. Typically, when using the technique, one has to invent a weight function for a problem instance under which every "reasonable" solution is "good." The local ratio technique is closely related to the primal-dual schema, though, not being based on linear programming, it does not rely on weak LP duality (which is the basis of the primal-dual approach). In this survey, we introduce the local ratio technique and demonstrate its use in the design and analysis of algorithms for various problems. We trace the evolution path of the technique since its inception in the 1980s, culminating with the most recent development, namely fractional local ratio, which can be viewed as a new LP rounding technique.
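The canonical first example of the technique is the 2-approximation for weighted vertex cover: repeatedly subtract, from the two endpoints of an edge, a weight function under which every feasible cover pays within a factor 2 of what the subtraction "costs," then return the vertices whose weight reaches zero. A minimal sketch (the dictionary-based representation is an assumption of this sketch):

```python
# Local ratio on weighted vertex cover: for each edge (u, v), subtract
# eps = min(w[u], w[v]) from both endpoints. Any vertex cover pays at
# least eps and at most 2*eps against this subtracted weight function,
# so the zero-weight vertices form a 2-approximate cover.

def local_ratio_vertex_cover(weights, edges):
    """weights: dict vertex -> nonnegative weight; edges: list of (u, v)."""
    w = dict(weights)
    for u, v in edges:
        eps = min(w[u], w[v])
        w[u] -= eps
        w[v] -= eps
    # After processing an edge, at least one endpoint has weight zero, and
    # weights never increase, so the zero-weight set covers every edge.
    return {v for v in w if w[v] == 0}

# Example: path a-b, b-c with weights a=1, b=3, c=1 returns {a, c}
# (weight 2), which happens to be optimal for this instance.
```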
Article
Full-text available
We consider the minimum spanning tree problem in a setting where information about the edge weights of the given graph is uncertain. Initially, for each edge e of the graph only a set A_e, called an uncertainty area, that contains the actual edge weight w_e is known. The algorithm can 'update' e to obtain the edge weight w_e ∈ A_e. The task is to output the edge set of a minimum spanning tree after a minimum number of updates. An algorithm is k-update competitive if it makes at most k times as many updates as the optimum. We present a 2-update competitive algorithm if all areas A_e are open or trivial, which is the best possible among deterministic algorithms. The condition on the areas A_e is to exclude degenerate inputs for which no constant update competitive algorithm can exist. Next, we consider a setting where the vertices of the graph correspond to points in Euclidean space and the weight of an edge is equal to the distance of its endpoints. The location of each point is initially given as an uncertainty area, and an update reveals the exact location of the point. We give a general relation between the edge uncertainty and the vertex uncertainty versions of a problem and use it to derive a 4-update competitive algorithm for the vertex uncertainty version of the MST problem.
Article
We consider the minimum spanning tree (MST) problem in an uncertainty model where interval edge weights can be explored to obtain the exact weight. The task is to find an MST by querying the minimum number of edges. This problem has received quite some attention from the algorithms theory community. In this article, we conduct the first practical experiments for MST under uncertainty, theoretically compare three known algorithms, and compare the theoretical with the practical behavior of the algorithms. Among other findings, we observe that the average performance and the absolute number of queries are both far from the theoretical worst-case bounds. Furthermore, we investigate a known general preprocessing procedure and develop an implementation thereof that maximally reduces the data uncertainty. We also characterize a class of instances that is solved to optimality by our preprocessing. Our experiments are based on practical data from an application in telecommunications and on uncertainty instances generated from the standard TSPLib graph library.
Article
We consider a stochastic variant of the packing-type integer linear programming problem, which contains random variables in the objective vector. We are allowed to reveal each entry of the objective vector by conducting a query, and the task is to find a good solution by conducting a small number of queries. We propose a general framework of adaptive and non-adaptive algorithms for this problem, and provide a unified methodology for analyzing the performance of those algorithms. We also demonstrate our framework by applying it to a variety of stochastic combinatorial optimization problems such as matching, matroid, and stable set problems.
Conference Paper
We consider a single machine, a set of unit-time jobs, and a set of unit-time errors. We assume that the time-slot at which each error will occur is not known in advance but, for every error, there exists an uncertainty area during which the error will take place. To find out whether an error occurs in a specific time-slot, it is necessary to query that slot. In this work, we study two problems: (i) the error-query scheduling problem, whose aim is to reveal enough error-free slots with the minimum number of queries, and (ii) the lexicographic error-query scheduling problem, where we seek the earliest error-free slots with the minimum number of queries. We consider both the off-line and the on-line versions of the above problems. In the former, the whole instance and its characteristics are known in advance, and we give a polynomial-time algorithm for the error-query scheduling problem. In the latter, the adversary has the power to decide, in an on-line way, the time-slot of appearance of each error. We then propose both lower bounds and algorithms whose competitive ratios asymptotically match these lower bounds.
Article
Given a graph with "uncertainty intervals" on the edges, we want to identify a minimum spanning tree by querying some edges for their exact edge weights which lie in the given uncertainty intervals. Our objective is to minimize the number of edge queries. It is known that there is a deterministic algorithm with best possible competitive ratio 2 [T. Erlebach, et al., in Proceedings of STACS, Schloss Dagstuhl, Dagstuhl, Germany, 2008, pp. 277-288]. Our main result is a randomized algorithm with expected competitive ratio 1 + 1/√2 ≈ 1.707, solving the long-standing open problem of whether an expected competitive ratio strictly less than 2 can be achieved [T. Erlebach and M. Hoffmann, Bull. Eur. Assoc. Theor. Comput. Sci. EATCS, 116 (2015)]. We also present novel results for various extensions, including arbitrary matroids and more general querying models.
Article
In online scenarios requests arrive over time, and each request must be serviced in an irrevocable manner before the next request arrives. Online algorithms with advice is an area of research where one attempts to measure how much knowledge of future requests is necessary to achieve a given performance level, as defined by the competitive ratio. When this knowledge, the advice, is obtainable, this leads to practical algorithms, called semi-online algorithms. On the other hand, each negative result gives robust results about the limitations of a broad range of semi-online algorithms. This survey explains the models for online algorithms with advice, motivates the study in general, presents some examples of the work that has been carried out, and includes an extensive set of references, organized by problem studied.
Book
This rapidly developing field encompasses many disciplines including operations research, mathematics, and probability. Conversely, it is being applied in a wide variety of subjects ranging from agriculture to financial planning and from industrial engineering to computer networks. This textbook provides a first course in stochastic programming suitable for students with a basic knowledge of linear programming, elementary analysis, and probability. The authors present a broad overview of the main themes and methods of the subject, thus helping students develop an intuition for how to model uncertainty into mathematical problems, what uncertainty changes bring to the decision process, and what techniques help to manage uncertainty in solving the problems. The early chapters introduce some worked examples of stochastic programming, demonstrate how a stochastic model is formally built, develop the properties of stochastic programs and the basic solution techniques used to solve them. The book then goes on to cover approximation and sampling techniques and is rounded off by an in-depth case study. A well-paced and wide-ranging introduction to this subject.
Conference Paper
In the verification under uncertainty setting, an algorithm is given, for each input item, an uncertainty area that is guaranteed to contain the exact input value, as well as an assumed input value. An update of an input item reveals its exact value. If the exact value is equal to the assumed value, we say that the update verifies the assumed value. We consider verification under uncertainty for the minimum spanning tree (MST) problem for undirected weighted graphs, where each edge is associated with an uncertainty area and an assumed edge weight. The objective of an algorithm is to compute the smallest set of updates with the property that, if the updates of all edges in the set verify their assumed weights, the edge set of an MST can be computed. We give a polynomial-time optimal algorithm for the MST verification problem by relating the choices of updates to vertex covers in a bipartite auxiliary graph. Furthermore, we consider an alternative uncertainty setting where the vertices are embedded in the plane, the weight of an edge is the Euclidean distance between the endpoints of the edge, and the uncertainty is about the location of the vertices. An update of a vertex yields the exact location of that vertex. We prove that the MST verification problem in this vertex uncertainty setting is NP-hard. This shows a surprising difference in complexity between the edge and vertex uncertainty settings of the MST verification problem.
Article
A graph G = (V, E) is a threshold tolerance graph if it is possible to associate weights and tolerances with each node of G so that two nodes are adjacent exactly when the sum of their weights exceeds either one of their tolerances. Threshold tolerance graphs are a special case of the well-known class of tolerance graphs and generalize the class of threshold graphs, which are also extensively studied in the literature. In this note we relate threshold tolerance graphs to other important graph classes. In particular, we show that threshold tolerance graphs are a proper subclass of co-strongly chordal graphs and strictly include the class of co-interval graphs. To this purpose, we exploit the relation with another graph class, min leaf power graphs (mLPGs).
Article
Considering the model of computing under uncertainty where element weights are uncertain but can be obtained at a cost by query operations, we study the problem of identifying a cheapest (minimum-weight) set among a given collection of feasible sets using a minimum number of queries of element weights. For the general case we present an algorithm whose number of queries is bounded in terms of d and OPT, where d is the maximum cardinality of any given set and OPT is the optimal number of queries needed to identify a cheapest set. For the minimum multi-cut problem in trees with d terminal pairs, we give an algorithm with a query bound of the same flavor. For the problem of computing a minimum-weight base of a given matroid, we give a query-competitive algorithm generalizing a known result for the minimum spanning tree problem. For each of the above algorithms we give matching lower bounds. We also settle the complexity of the verification version of the general cheapest set problem and of the minimum multi-cut problem in trees under uncertainty.
Article
A graph G=(V,E) is a threshold tolerance graph if each vertex v∈V can be assigned a weight w_v and a tolerance t_v such that two vertices x,y∈V are adjacent if w_x + w_y ≥ min(t_x, t_y). Currently, the most efficient recognition algorithm for threshold tolerance graphs is the algorithm of Monma, Reed, and Trotter, which has an O(n⁴) runtime. We give an O(n²) algorithm for recognizing threshold tolerance graphs and their complements, the co-threshold tolerance (co-TT) graphs, resolving an open question of Golumbic, Weingarten, and Limouzy.
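The definition is straightforward to check once weights and tolerances are in hand; the hard part, which the recognition algorithms above solve, is finding such an assignment. A small sketch that verifies a candidate weight/tolerance certificate against a given edge set (the data layout is an assumption of this sketch):

```python
from itertools import combinations

# Build the graph implied by a weight/tolerance assignment: x and y are
# adjacent iff w[x] + w[y] >= min(t[x], t[y]). Verifying a *given*
# assignment against a given edge set is a simple O(n^2) loop; recognition
# (finding such an assignment, or proving none exists) is the real problem.

def implied_edges(w, t):
    """w, t: dicts mapping each vertex to its weight / tolerance."""
    return {frozenset((x, y))
            for x, y in combinations(list(w), 2)
            if w[x] + w[y] >= min(t[x], t[y])}

def certifies(edges, w, t):
    # True iff the assignment realizes exactly this edge set.
    return implied_edges(w, t) == {frozenset(e) for e in edges}
```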
Article
We consider robust knapsack problems where item weights are uncertain. We are allowed to query an item to find its exact weight, where the number of such queries is bounded by a given parameter Q. After these queries are made, we need to pack the items robustly, i.e., so that the choice of items is feasible for every remaining possible scenario of item weights. The central question that we consider is: which items should be queried in order to gain maximum profit? We introduce the notion of query competitiveness for strict robustness to evaluate the quality of an algorithm for this problem, and obtain lower and upper bounds on this competitiveness for interval-based uncertainty. Similar to the study of online algorithms, we study the competitiveness under different frameworks, namely we analyze the worst-case query competitiveness for deterministic algorithms, the expected query competitiveness for randomized algorithms, and the average-case competitiveness for known distributions of the uncertain input data. We derive theoretical bounds for these different frameworks and evaluate them experimentally. We also extend this approach to the Γ-restricted uncertainties introduced by Bertsimas and Sim. Furthermore, we present heuristic algorithms for the problem. In computational experiments considering both the interval-based and the Γ-restricted uncertainty, we evaluate their empirical performance. While the usage of a Γ-restricted uncertainty improves the nominal performance of a solution (as expected), we find that the query competitiveness gets worse.
Article
Consider a linear program (LP) with uncertain objective coefficients, for which we have a Bayesian prior. We can collect information to improve our understanding of these coefficients, but this may be expensive, giving us a separate problem of optimizing the collection of information to improve the quality of the solution relative to the true cost coefficients. We formulate this information collection problem for LPs for the first time and derive a knowledge gradient policy which finds the marginal value of each measurement by solving a sequence of LPs. We prove that this policy is asymptotically optimal and demonstrate its performance on a network flow problem.
Conference Paper
The study of algorithms that handle imprecise input data for which precise data can be requested is an interesting area. In the verification under uncertainty setting, which is the focus of this paper, an algorithm is also given an assumed set of precise input data. The aim of the algorithm is to update the smallest set of input data such that, if the updated input data is the same as the corresponding assumed input data, a solution can be calculated. We study this setting for the maximal point problem in two dimensions. Here there are three types of data: a set of points P = {p_1, …, p_n}; uncertainty information consisting of an area of uncertainty A_i for each 1 ≤ i ≤ n, with p_i ∈ A_i; and a set P′ = {p′_1, …, p′_k} containing the assumed points, with p′_i ∈ A_i. An update of an area A_i reveals the actual location of p_i and verifies the assumed location if p′_i = p_i. The objective of an algorithm is to compute the smallest set of points with the property that, if the updates of these points verify the assumed data, the set of maximal points among P can be computed. We show that the maximal point verification problem is NP-hard, by a reduction from the minimum set cover problem.
Article
In this paper, we present a new characterization of complement threshold tolerance graphs (co-TT for short) and find a recognition algorithm for the subclass of split co-TT graphs running in O(n²) time. Currently, the best recognition algorithms for co-TT graphs and for split co-TT graphs run in O(n⁴) time (Hammer and Simeone (1981) [4]; Monma et al. (1988) [7]).
Article
In this paper, we introduce a class of graphs that generalize threshold graphs by introducing threshold tolerances. Several characterizations of these graphs are presented, one of which leads to a polynomial-time recognition algorithm. It is also shown that the complements of these graphs contain interval graphs and threshold graphs, and are contained in the subclass of chordal graphs called strongly chordal graphs, and in the class of interval tolerance graphs.
Article
A Mathematical Theory of Communication
  • Shannon
Bell System Technical Journal 27 (1948), pp. 379-423 (July) and pp. 623-656 (October)
Article
We consider the problems of computing maximal points and the convex hull of a set of points in two dimensions, when the points are “in motion.” We assume that the point locations (or trajectories) are not known precisely and determining these values exactly is feasible, but expensive. In our model the algorithm only knows areas within which each of the input points lie, and is required to identify the maximal points or points on the convex hull correctly by updating some points (i.e., determining their location exactly). We compare the number of points updated by the algorithm on a given instance to the minimum number of points that must be updated by a nondeterministic strategy in order to compute the answer provably correctly. We give algorithms for both of the above problems that always update at most three times as many points as the nondeterministic strategy, and show that this is the best possible. Our model is similar to that in [3] and [5].
Article
We consider the problem of estimating the length of the shortest path from a vertex s to a vertex t in a DAG whose edge lengths are known only approximately but can be determined exactly at a cost. Initially, for each edge e, the length of e is known only to lie within an interval [l_e, h_e]; the estimation algorithm can pay w_e to find the exact length of e. We study the problem of finding the cheapest set of edges such that, if exactly these edges are queried, the length of the shortest s–t path will be known, within an additive κ ≥ 0, an input parameter. An actual s–t path, whose true length exceeds that of the shortest s–t path by at most κ, will be obtained as well. The problem of finding a cheap set of edge queries is in neither NP nor co-NP unless NP = co-NP. We give positive and negative results for two special cases and for the general case, which we show is in Σ₂.
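One way to see the structure of the problem: for a fixed set of already-queried edges, the shortest s–t path length is pinned down within κ exactly when the shortest path computed with all lower endpoints l_e and the one computed with all upper endpoints h_e differ by at most κ; the path realizing the upper bound is then a valid answer, since its true length is at most the upper bound. A sketch of that check on a DAG (the adjacency-list encoding is an assumption of this sketch):

```python
# Check whether the s-t shortest-path length in a DAG is known to within
# an additive kappa, given interval lengths [lo, hi] per edge (an already
# queried edge has lo == hi). LB: shortest path with every edge at its
# lower endpoint; UB: shortest path with every edge at its upper endpoint.
# The true length lies in [LB, UB], so UB - LB <= kappa suffices, and the
# UB-optimal path's true length is at most UB <= LB + kappa <= OPT + kappa.

def known_within(topo_order, out_edges, s, t, kappa):
    """topo_order: vertices in topological order;
    out_edges[u]: list of (v, lo, hi) triples."""
    INF = float("inf")
    lb = {v: INF for v in topo_order}
    ub = {v: INF for v in topo_order}
    lb[s] = ub[s] = 0.0
    for u in topo_order:  # single relaxation pass in topological order
        for v, lo, hi in out_edges.get(u, ()):
            lb[v] = min(lb[v], lb[u] + lo)
            ub[v] = min(ub[v], ub[u] + hi)
    return ub[t] - lb[t] <= kappa
```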
Article
Tolerance graphs arise from the intersection of intervals with varying tolerances in a way that generalizes both interval graphs and permutation graphs. In this paper we prove that every tolerance graph is perfect by demonstrating that its complement is perfectly orderable. We show that a tolerance graph cannot contain a chordless cycle of length greater than or equal to 5 nor the complement of one. We also discuss the subclasses of bounded tolerance graphs, proper tolerance graphs, and unit tolerance graphs and present several possible applications and open questions.
Article
This paper reviews the state-of-the-art in robust design optimization – the search for designs and solutions which are immune with respect to production tolerances, parameter drifts during operation time, model sensitivities and others. Starting with a short glimpse of Taguchi's robust design methodology, a detailed survey of approaches to robust optimization is presented. This includes a detailed discussion of how to account for design uncertainties and how to measure robustness (i.e., how to evaluate robustness). The main focus is on the different approaches to performing robust optimization in practice, including the methods of mathematical programming, deterministic nonlinear optimization, and direct search methods such as stochastic approximation and evolutionary computation. The paper discusses the strengths and weaknesses of the different methods, thus providing a basis for guiding the engineer to the most appropriate techniques. It also addresses performance aspects and test scenarios for direct robust optimization techniques.
Article
A linear time approximation algorithm for the weighted set-covering problem is presented. For the special case of the weighted vertex cover problem it produces a solution of weight which is at most twice the weight of an optimal solution.
Article
A finite undirected graph is called chordal if every simple circuit has a chord. Given a chordal graph, we present ways of constructing efficient algorithms for finding a minimum coloring, a minimum covering by cliques, a maximum clique, and a maximum independent set. The proofs are based on a theorem of D. Rose [3] that a finite graph is chordal if and only if it has a special orientation called an R-orientation. In the last part of this paper we prove that an infinite graph is chordal if and only if it has an R-orientation.
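The orientation machinery translates into very short algorithms in the finite case: with a perfect elimination ordering in hand (computable in linear time by maximum cardinality search), a single greedy pass colors a chordal graph optimally. A sketch under the assumption that the ordering is already given:

```python
# Greedy minimum coloring of a chordal graph along a perfect elimination
# ordering (PEO). Processing vertices in REVERSE PEO, each vertex's
# already-colored neighbors form a clique, so the number of colors used
# meets the clique-number lower bound and the coloring is optimal.

def chordal_coloring(peo, adj):
    """peo: perfect elimination ordering (list of vertices);
    adj: dict vertex -> set of neighbors."""
    color = {}
    for v in reversed(peo):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:  # smallest color absent among colored neighbors
            c += 1
        color[v] = c
    return color
```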
Article
Strict consistency of replicated data is infeasible or not required by many distributed applications, so current systems often permit stale replication, in which cached copies of data values are allowed to become out of date. Queries over cached data return an answer quickly, but the stale answer may be unboundedly imprecise. Alternatively, queries over remote master data return a precise answer, but with potentially poor performance. To bridge the gap between these two extremes, we propose a new class of replication systems called TRAPP (Tradeoff in Replication Precision and Performance). TRAPP systems give each user fine-grained control over the tradeoff between precision and performance: Caches store ranges that are guaranteed to bound the current data values, instead of storing stale exact values.
Article
We study the problem of computing a function f(x_1, …, x_n) given that the actual values of the variables x_i are known only with some uncertainty. For each variable x_i, an interval I_i is known such that the value of x_i is guaranteed to fall within this interval. Any such interval can be probed to obtain the actual value of the underlying variable; however, there is a cost associated with each such probe. The goal is to adaptively identify a minimum-cost sequence of probes such that, regardless of the actual values taken by the unprobed x_i, the value of the function f can be computed to within a specified precision. We design online algorithms for this problem when f is either the selection function or an aggregation function such as sum or average. We consider three natural models of precision and give algorithms for each model. We analyze our algorithms in the framework of competitive analysis and show that they are asymptotically optimal. Finally, we also study online algorithms for functions that are obtained by composing selection and aggregation functions.
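For selection-type functions, a natural adaptive strategy probes the interval that currently constrains the answer the most. The sketch below computes the exact minimum under uniform probe costs; it is a simplified illustration of the model (exact precision, uniform costs), not necessarily the asymptotically optimal algorithm from the paper.

```python
# Adaptive probing to determine the exact minimum of n uncertain variables,
# each known to lie in [lo[i], hi[i]]; probe(i) reveals the true value x_i.
# The interval with the smallest lower endpoint always constrains the
# answer, so it is probed first; we stop as soon as some revealed value is
# at most every remaining lower endpoint.

def query_minimum(lo, hi, probe):
    n = len(lo)
    lo, hi = list(lo), list(hi)
    best = float("inf")  # smallest value revealed so far
    while True:
        i = min(range(n), key=lambda j: lo[j])
        if best <= lo[i]:
            return best  # no interval can undercut the best revealed value
        x = probe(i)
        lo[i] = hi[i] = x  # collapse the probed interval to a point
        best = min(best, x)
```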
Sorting and selection with imprecise comparisons
  • Ajtai
Query-competitive algorithms for computing with uncertainty
  • Erlebach
Computing the median with uncertainty
  • Feder
Introduction to Stochastic Programming
  • Birge