Daniel Frost’s research while affiliated with University of California, Irvine and other places


Publications (14)


Backjump-based backtracking for constraint satisfaction problems
  • Article

January 2002 · 272 Reads · 133 Citations · Artificial Intelligence · Daniel Frost

The performance of backtracking algorithms for solving finite-domain constraint satisfaction problems can be improved substantially by look-back and look-ahead methods. Look-back techniques extract information by analyzing failing search paths that are terminated by dead-ends. Look-ahead techniques use constraint propagation algorithms to avoid such dead-ends altogether. This paper describes a number of look-back variants including backjumping and constraint recording which recognize and avoid some unnecessary explorations of the search space. The last portion of the paper gives an overview of look-ahead methods such as forward checking and dynamic variable ordering, and discusses their combination with backjumping.
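The backjumping idea described above can be sketched as a minimal conflict-directed backjumping solver for binary CSPs. This is a toy illustration, not the paper's implementation; all names are invented, and `allowed[(u, v)]` holds the permitted value pairs for an earlier variable `u` and a later variable `v` in the ordering:

```python
def backjump_search(variables, domains, allowed, assignment=None, depth=0):
    """Conflict-directed backjumping on a binary CSP.
    Returns (solution dict or None, set of earlier levels implicated in failure)."""
    if assignment is None:
        assignment = {}
    if depth == len(variables):
        return dict(assignment), set()
    var = variables[depth]
    conflict = set()                      # earlier levels responsible for dead-ends here
    for val in domains[var]:
        # look back: find the earliest earlier assignment inconsistent with val
        culprit = None
        for i in range(depth):
            u = variables[i]
            pairs = allowed.get((u, var))
            if pairs is not None and (assignment[u], val) not in pairs:
                culprit = i
                break
        if culprit is not None:
            conflict.add(culprit)
            continue
        assignment[var] = val
        sol, deeper = backjump_search(variables, domains, allowed, assignment, depth + 1)
        if sol is not None:
            return sol, set()
        del assignment[var]
        if depth not in deeper:           # this level is blameless: jump over it
            return None, deeper
        conflict |= deeper - {depth}
    return None, conflict
```

On a dead-end the search returns directly to the deepest implicated level instead of backtracking chronologically, which is the behavior the abstract describes.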


Optimizing With Constraints: A Case Study in Scheduling Maintenance of Electric Power Units

August 2000 · 17 Reads

A well-studied problem in the electric power industry is that of optimally scheduling preventative maintenance of power generating units within a power plant. We show how these problems can be cast as constraint satisfaction problems and provide an "iterative learning" algorithm which solves the problem in the following manner. In order to find an optimal schedule, the algorithm solves a series of CSPs with successively tighter cost-bound constraints. For the solution of each problem in the series we use constraint learning, which involves recording additional constraints that are uncovered during search. However, instead of solving each problem independently, after a problem is solved successfully with a certain cost-bound, the new constraints recorded by learning are used in subsequent attempts to find a schedule with a lower cost-bound.
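The control loop described above — solve under a cost bound, tighten the bound, and carry learned constraints into the next attempt — can be sketched on a toy scheduling model. Everything here is an illustrative assumption, not the paper's algorithm: the capacity constraint, the cost table, and the simplistic learning step that records refuted prefixes as nogoods (sound across iterations because the cost bound only ever tightens):

```python
def schedule(units, weeks, capacity, cost, bound, nogoods):
    """Depth-first search for a schedule with total cost < bound.
    At most `capacity` units may be maintained in any one week.
    Fully refuted prefixes are recorded in `nogoods`."""
    def dfs(i, assign, total):
        if total >= bound:
            return None
        if frozenset(assign.items()) in nogoods:   # previously proven hopeless
            return None
        if i == len(units):
            return dict(assign)
        u = units[i]
        for w in weeks:
            if sum(1 for x in assign.values() if x == w) >= capacity:
                continue
            assign[u] = w
            sol = dfs(i + 1, assign, total + cost[u][w])
            if sol:
                return sol
            del assign[u]
        nogoods.add(frozenset(assign.items()))     # prefix refuted under this bound
        return None
    return dfs(0, {}, 0)

def iterative_learning(units, weeks, capacity, cost, start_bound):
    """Solve a series of CSPs with successively tighter cost bounds,
    reusing the nogoods learned in earlier iterations."""
    nogoods = set()
    best, bound = None, start_bound
    while True:
        sol = schedule(units, weeks, capacity, cost, bound, nogoods)
        if sol is None:
            return best                            # last success is optimal found
        best = sol
        bound = sum(cost[u][sol[u]] for u in units)  # demand strictly cheaper next
```

A prefix refuted when the bound was B stays refuted for any bound below B, which is why the learned constraints transfer between iterations.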


Figure 1: A small CSP. Note that the disallowed pairs are shown on each arc.
Dead-End Driven Learning
  • Article
  • Full-text available

August 2000 · 89 Reads · 77 Citations

The paper evaluates the effectiveness of learning for speeding up the solution of constraint satisfaction problems. It extends previous work (Dechter 1990) by introducing a new and powerful variant of learning and by presenting an extensive empirical study on much larger and more difficult problem instances. Our results show that learning can speed up backjumping when using either a fixed or dynamic variable ordering. However, the improvement with a dynamic variable ordering is not as great, and for some classes of problems learning is helpful only when a limit is placed on the size of new constraints learned.


Table 2:
Figure 17: Average CPU seconds on 100 small problems (15 units, 13 weeks) to find a schedule meeting the cost-bound on the y-axis, using BJ+DVO with and without learning. The cumulative number of constraints learned corresponds to the right-hand scale.
Figure 18: Average CPU seconds on 100 large problems (20 units, 20 weeks) to find a schedule meeting the cost-bound on the y-axis, using BJ+DVO with and without learning. The cumulative number of constraints learned corresponds to the right-hand scale.
Constraint Processing for Optimal Maintenance Scheduling

August 2000 · 64 Reads · 11 Citations

A well-studied problem in the electric power industry is that of optimally scheduling preventative maintenance of power generating units within a power plant. We show how these problems can be cast as constraint satisfaction problems and provide an "iterative learning" algorithm which solves the problem in the following manner. In order to find an optimal schedule, the algorithm solves a series of CSPs with successively tighter cost-bound constraints. For the solution of each problem in the series we use constraint learning, which involves recording additional constraints that are uncovered during search. However, instead of solving each problem independently, after a problem is solved successfully with a certain cost-bound, the new constraints recorded by learning are used in subsequent attempts to find a schedule with a lower cost-bound. We show empirically that on a class of randomly generated maintenance scheduling problems iterative learning reduces the time to find a good schedu...


In Search of the Best Constraint Satisfaction Search

August 2000 · 62 Reads · 50 Citations

We present the results of an empirical study of several constraint satisfaction search algorithms and heuristics. Using a random problem generator that allows us to create instances with given characteristics, we show how the relative performance of various search methods varies with the number of variables, the tightness of the constraints, and the sparseness of the constraint graph. A version of backjumping using a dynamic variable ordering heuristic is shown to be extremely effective on a wide range of problems. We conducted our experiments with problem instances drawn from the 50% satisfiable range.
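A random problem generator of the kind described — parameterized by number of variables, domain size, number of constraints (sparseness), and tightness — might look like this minimal sketch. The ⟨N, D, C, T⟩ parameterization is a common convention for random binary CSPs, assumed here rather than taken from the paper:

```python
import random
from itertools import combinations

def random_binary_csp(n_vars, domain_size, n_constraints, tightness, seed=None):
    """Generate a random binary CSP: choose `n_constraints` distinct variable
    pairs, and for each pair forbid `tightness` value pairs chosen uniformly.
    Returns (domains, forbidden) where forbidden[(u, v)] is a set of
    disallowed value pairs; pairs absent from `forbidden` are unconstrained."""
    rng = random.Random(seed)
    pairs = list(combinations(range(n_vars), 2))
    chosen = rng.sample(pairs, n_constraints)
    all_tuples = [(a, b) for a in range(domain_size) for b in range(domain_size)]
    forbidden = {p: set(rng.sample(all_tuples, tightness)) for p in chosen}
    domains = {v: list(range(domain_size)) for v in range(n_vars)}
    return domains, forbidden
```

Sweeping `n_constraints` and `tightness` while holding the other parameters fixed is how studies like this one locate the 50% satisfiable region.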


Maintenance Scheduling Problems as Benchmarks for Constraint Algorithms

December 1998 · 199 Reads · 13 Citations · Annals of Mathematics and Artificial Intelligence

The paper focuses on evaluating constraint satisfaction search algorithms on application based random problem instances. The application we use is a well-studied problem in the electric power industry: optimally scheduling preventive maintenance of power generating units within a power plant. We show how these scheduling problems can be cast as constraint satisfaction problems and used to define the structure of randomly generated non-binary CSPs. The random problem instances are then used to evaluate several previously studied algorithms. The paper also demonstrates how constraint satisfaction can be used for optimization tasks. To find an optimal maintenance schedule, a series of CSPs are solved with successively tighter cost-bound constraints. We introduce and experiment with an "iterative learning" algorithm which records additional constraints uncovered during search. The constraints recorded during the solution of one instance with a certain cost-bound are used again on subsequen...



Figure 1: The constraint graph and constraint relations of the scheduling problem in Example 1.
Figure 3: A modified coloring problem.
Figure 9: A small CSP. The constraints are: x3 < x1, x3 < x2, x3 < x5, x3 < x4, x4 < x5. The allowed pairs are shown on each arc. …constraint graph alone. Given an i-leaf dead-end (a1, …, ai), those subsets of values associated with the ancestors of x_{i+1} are identified and included in the conflict set.
Backtracking Algorithms for Constraint Satisfaction Problems - a Tutorial Survey

May 1998 · 1,806 Reads · 32 Citations

Over the past twenty years a number of backtracking algorithms for constraint satisfaction problems have been developed. This survey describes the basic backtrack search within the search space framework and then presents a number of improvements, including look-back methods such as backjumping, constraint recording, and backmarking, and look-ahead methods such as forward checking and dynamic variable ordering. Constraint networks have proven successful in modeling mundane cognitive tasks such as vision, language comprehension, default reasoning, and abduction, as well as specialized reasoning tasks including diagnosis, design, and temporal and spatial reasoning. The constraint paradigm can be considered a generalization of propositional logic, in that variables may be assigned values from a set with any number of elements, not just true and false. This flexibility in the number of values can improve the ease and naturalness with which interesting problems are modeled. ...
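The two look-ahead methods the survey names, forward checking and dynamic variable ordering, can be combined in a short sketch on a graph-coloring CSP. This is an illustrative toy, not the survey's code; the names and the coloring formulation are assumptions:

```python
def fc_dvo(neighbors, colors):
    """Forward checking with dynamic variable ordering (smallest remaining
    domain first) on graph coloring. `neighbors` maps node -> set of adjacent
    nodes; adjacent nodes must get different colors. Returns a coloring or None."""
    domains = {v: set(colors) for v in neighbors}
    assignment = {}

    def search():
        if len(assignment) == len(domains):
            return dict(assignment)
        # DVO: pick the unassigned variable with the fewest remaining values
        var = min((v for v in domains if v not in assignment),
                  key=lambda v: len(domains[v]))
        for color in sorted(domains[var]):
            assignment[var] = color
            # forward checking: prune this color from unassigned neighbors
            pruned, wipeout = [], False
            for n in neighbors[var]:
                if n not in assignment and color in domains[n]:
                    domains[n].discard(color)
                    pruned.append(n)
                    if not domains[n]:
                        wipeout = True          # a future variable has no value left
            if not wipeout:
                sol = search()
                if sol:
                    return sol
            for n in pruned:                    # undo pruning on backtrack
                domains[n].add(color)
            del assignment[var]
        return None

    return search()
```

Forward checking detects dead-ends one level early (a wiped-out domain), and DVO steers the search toward the most constrained variable, the fail-first principle.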


Figure 6: Average CPU seconds on 100 large problems (20 units, 20 weeks) to find a schedule meeting the cost-bound on the y-axis, using BJ+DVO with and without learning. The cumulative number of constraints learned corresponds to the right-hand scale.
Optimizing With Constraints: A Case Study in Scheduling Maintenance of Electric Power Units

January 1998 · 252 Reads · 17 Citations · Lecture Notes in Computer Science

A well-studied problem in the electric power industry is that of optimally scheduling preventative maintenance of power generating units within a power plant [1, 3]. The general purpose of determining a maintenance schedule is to determine the duration and sequence of outages of power generating units over a given time period, while minimizing operating and maintenance costs over the planning period, subject to various constraints. We show how maintenance scheduling can be cast as a constraint satisfaction problem and used to define the structure of randomly generated non-binary CSPs. These random problem instances are then used to evaluate several previously studied backtracking-based algorithms, including backjumping and dynamic variable ordering augmented with constraint learning and look-ahead value ordering [2]. We also define and report on a new ‘iterative learning’ algorithm which solves maintenance scheduling problems in the following manner. In order to find an optimal schedule, the algorithm solves a series of CSPs with successively tighter cost-bound constraints. For the solution of each problem in the series constraint learning is applied, which involves recording additional constraints that are uncovered during search. However, instead of solving each problem in the series independently, after a problem is solved successfully with a certain cost-bound, the new constraints recorded by learning are used in subsequent attempts to find a schedule with a lower cost-bound. We show empirically that on a class of randomly generated maintenance scheduling problems iterative learning reduces the time required to find a good schedule.


Statistical Analysis of Backtracking on Inconsistent CSPs

September 1997 · 52 Reads · 14 Citations · Lecture Notes in Computer Science

We analyze the distribution of computational effort required by backtracking algorithms on unsatisfiable CSPs, using analogies with reliability models, where the lifetime of a specimen before failure corresponds to the runtime of backtracking on unsatisfiable CSPs. We extend the results of [7] by showing empirically that the lognormal distribution is a good approximation of the backtracking effort on unsolvable CSPs not only at the 50% satisfiable point, but in a relatively wide region. We also show how the law of proportionate effect [9], commonly used to derive the lognormal distribution, can be applied to modeling the number of nodes expanded in a search tree. Moreover, for certain intervals of C/N, where N is the number of variables and C is the number of constraints, the parameters of the corresponding lognormal distribution can be approximated by the linear lognormal model [11], where mean log(deadends) is linear in C/N and the variance of log(deadends) is close to constant. The line...
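Fitting a lognormal distribution to observed dead-end counts reduces to taking logs and estimating their mean and standard deviation (the maximum-likelihood estimates for lognormal data). A minimal, stdlib-only sketch, illustrative rather than the paper's methodology; the recovery check against known parameters is an invented sanity test:

```python
import math
import random
import statistics

def fit_lognormal(dead_ends):
    """MLE fit of a lognormal to positive counts: (mu, sigma) of log-values."""
    logs = [math.log(d) for d in dead_ends]
    return statistics.mean(logs), statistics.pstdev(logs)

# sanity check: samples from a known lognormal should recover (mu, sigma)
random.seed(0)
sample = [random.lognormvariate(3.0, 0.5) for _ in range(20000)]
mu, sigma = fit_lognormal(sample)
```

With the fitted (mu, sigma) in hand, one can compare the empirical distribution of dead-ends against the lognormal's quantiles, the kind of comparison the abstract reports.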


Citations (11)


... The study of the runtime distributions instead of just medians and means often provides a better characterization of search methods and much useful information in the design of algorithms. For instance, complete backtrack search methods exhibit fat and heavy-tailed behavior [47,41,23]. Fat-tailedness is based on the kurtosis of a distribution. ...

Reference:

Randomness and Structure
Summarizing CSP hardness with continuous probability distributions

... Unlike existing backtracking methods, F doesn't follow a systematic removal of the jobs. Exact approaches (Chen & van Beek, 2001;Dechter & Frost, 2002;Beek, 2006;Lecoutre et al., 2009) avoid chronological backtracking by using some kind of back jumping. For example, the backtracking of Patterson et al. (1990) is a DFS-based B&B implicit enumeration that may guarantee optimality. ...

Backjump-based backtracking for constraint satisfaction problems
  • Citing Article
  • January 2002

Artificial Intelligence

... However, measures like the mean or the variance cannot capture the long-tailed behavior of difficult instances. Some authors (e. g., [FRV97,GS97,RF97]) thus shifted their focus to studying the runtime distributions of search algorithms, which helps to understand these methods better and draw meaningful conclusions for the design of new algorithms. ...

Summarizing CSP Hardness with Continuous Probability Distributions.

... After backtracking, the solver will satisfy S again from the very beginning and then discover that the formula is satisfied. Note that the solver is not expected to encounter any conflicts while satisfying S for the second time because of the phase saving heuristic [4,10,14] which re-assigns the same polarity to every assigned variable. Yet, it will have to re-assign all the 10 7 variables in V (S) and propagate after each assignment. ...

In Search of the Best Constraint Satisfaction Search

... The problem of scheduling preventive maintenance tasks of power generating units is of considerable interest to the power generation industry [10]. A typical power plant consists of many power generating units which need to be scheduled for maintenance over the planning period for a known duration. ...

Maintenance Scheduling Problems as Benchmarks for Constraint Algorithms

Annals of Mathematics and Artificial Intelligence

... When no more deductions are possible, the preprocessing phase stops. To complete (if feasible) the two bijections, and to explore the remaining space, we then launch a backtracking phase [74] . The idea is to choose a potential mapping, perform the mapping and look recursively if we can build a solution from this choice. ...

Backtracking Algorithms for Constraint Satisfaction Problems - a Tutorial Survey

... Previous implementations of PMS have deployed a myriad of AI techniques, including genetic algorithms (GAs) [5], mixed-integer programming (MIP) [4], and formulations as constraint satisfaction problems (CSPs) [9]. To our knowledge, there is no prior PMS implementation based on answer set programming (ASP, see [3] for an overview) that offers a rule-based language for knowledge representation. ...

Optimizing With Constraints: A Case Study in Scheduling Maintenance of Electric Power Units

Lecture Notes in Computer Science