Jimmy Ho-Man Lee’s research while affiliated with The Chinese University of Hong Kong and other places


Publications (110)


Exploiting Functional Constraints in Automatic Dominance Breaking for Constraint Optimization
  • Article
  • September 2023 · 7 Reads · 4 Citations

Journal of Artificial Intelligence Research

Jimmy H.M. Lee · Allen Z. Zhong

Dominance breaking is a powerful technique in improving the solving efficiency of Constraint Optimization Problems (COPs) by removing provably suboptimal solutions with additional constraints. While dominance breaking is effective in a range of practical problems, it is usually problem specific and requires human insights into problem structures to come up with correct dominance breaking constraints. Recently, a framework was proposed to generate nogood constraints automatically for dominance breaking, formulating nogood generation as solving auxiliary Constraint Satisfaction Problems (CSPs). However, the framework uses a pattern-matching approach to synthesize the auxiliary generation CSPs from the specific forms of objectives and constraints in target COPs, and is only applicable to a limited class of COPs. This paper proposes a novel rewriting system to derive constraints for the auxiliary generation CSPs automatically from COPs with nested function calls, significantly generalizing the original framework. In particular, the rewriting system exploits functional constraints flattened from nested functions in a high-level modeling language. To generate more effective dominance breaking nogoods and derive more relaxed constraints in generation CSPs, we further characterize how to extend the system with rewriting rules exploiting function properties, such as monotonicity, commutativity, and associativity, for specific functional constraints. Experimentation shows significant runtime speedups using the dominance breaking nogoods generated by our proposed method. Studying patterns of the generated nogoods also demonstrates that our proposal can reveal dominance relations from the literature and discover new dominance relations on problems with ineffective or no known dominance breaking constraints.
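For readers new to the topic, here is a minimal, hypothetical sketch of what a hand-derived dominance breaking constraint looks like on a 0/1 knapsack model; the paper's contribution is deriving such constraints automatically via a rewriting system, which this toy does not attempt. All data and names below are illustrative.

```python
# Minimal sketch of a hand-derived dominance breaking constraint on a
# 0/1 knapsack (the paper derives such constraints automatically; this
# toy only illustrates what they forbid).
from itertools import product

weights = [3, 3, 5, 6]
values = [7, 4, 8, 8]
capacity = 9
n = len(weights)

def feasible(x):
    return sum(w * xi for w, xi in zip(weights, x)) <= capacity

def dominance_ok(x):
    # If item i is no heavier and no less valuable than item j (ties broken
    # by index), any packing with x[j]=1 and x[i]=0 is dominated: swapping
    # j for i stays feasible and cannot decrease the value.
    for i in range(n):
        for j in range(n):
            if (weights[i] <= weights[j] and values[i] >= values[j]
                    and (weights[i], -values[i], i) < (weights[j], -values[j], j)
                    and x[j] == 1 and x[i] == 0):
                return False
    return True

best = max(sum(v * xi for v, xi in zip(values, x))
           for x in product((0, 1), repeat=n) if feasible(x))
best_pruned = max(sum(v * xi for v, xi in zip(values, x))
                  for x in product((0, 1), repeat=n)
                  if feasible(x) and dominance_ok(x))
assert best == best_pruned  # same optimum over a smaller search space
```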



Predict+Optimize for Packing and Covering LPs with Unknown Parameters in Constraints

June 2023 · 16 Reads · 10 Citations

Proceedings of the AAAI Conference on Artificial Intelligence

Predict+Optimize is a recently proposed framework which combines machine learning and constrained optimization, tackling optimization problems that contain parameters that are unknown at solving time. The goal is to predict the unknown parameters and use the estimates to solve for an estimated optimal solution to the optimization problem. However, all prior works have focused on the case where unknown parameters appear only in the optimization objective and not the constraints, for the simple reason that if the constraints were not known exactly, the estimated optimal solution might not even be feasible under the true parameters. The contributions of this paper are two-fold. First, we propose a novel and practically relevant framework for the Predict+Optimize setting, but with unknown parameters in both the objective and the constraints. We introduce the notion of a correction function, and an additional penalty term in the loss function, modelling practical scenarios where an estimated optimal solution can be modified into a feasible solution after the true parameters are revealed, but at an additional cost. Second, we propose a corresponding algorithmic approach for our framework, which handles all packing and covering linear programs. Our approach is inspired by the prior work of Mandi and Guns, though with crucial modifications and re-derivations for our very different setting. Experimentation demonstrates the superior empirical performance of our method over classical approaches.
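As a hedged illustration of the setting (not the paper's algorithm), the sketch below runs the pipeline the framework evaluates on a toy packing LP: predict the constraint coefficients, solve with the estimates, then repair any infeasibility under the true parameters by scaling the solution down, at a penalty. The scaling correction and penalty weight are assumptions chosen for the example; scipy is assumed to be available.

```python
# Hypothetical end-to-end example of Predict+Optimize with unknown
# constraint parameters in a packing LP: maximize c.x s.t. a.x <= b,
# 0 <= x <= 1. Correction (uniform scaling) and penalty weight are
# illustrative choices, not the paper's exact definitions.
import numpy as np
from scipy.optimize import linprog

def solve_packing(c, a, b):
    # linprog minimizes, so negate c to maximize c.x.
    res = linprog(-c, A_ub=a.reshape(1, -1), b_ub=[b], bounds=[(0, 1)] * len(c))
    return res.x

c = np.array([5.0, 4.0, 3.0])       # known objective coefficients
a_true = np.array([2.0, 3.0, 1.0])  # true constraint row, unknown at solve time
b_true = 4.0
a_pred = np.array([1.5, 2.0, 1.0])  # coefficients predicted by some ML model

x_est = solve_packing(c, a_pred, b_true)  # may violate the true constraint

# Correction: scale down until feasible under the true parameters
# (scaling preserves feasibility for packing constraints with x >= 0).
usage = float(a_true @ x_est)
x_corr = x_est if usage <= b_true else x_est * (b_true / usage)

penalty = 0.1 * float(np.abs(x_est - x_corr).sum())  # cost of correcting
x_opt = solve_packing(c, a_true, b_true)             # hindsight optimum

post_hoc_regret = float(c @ x_opt) - (float(c @ x_corr) - penalty)
print(f"post-hoc regret: {post_hoc_regret:.3f}")
```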


Finding Good Partial Assignments during Restart-Based Branch and Bound Search

June 2023 · 7 Reads

Proceedings of the AAAI Conference on Artificial Intelligence

Restart-based Branch-and-Bound Search (BBS) is a standard algorithm for solving Constraint Optimization Problems (COPs). In this paper, we propose an approach for general COPs that finds good partial assignments to jumpstart search at each restart; these assignments are identified by comparing the best solutions found in different restart runs. We consider information extracted from historical solutions to evaluate the quality of the partial assignments, so the good partial assignments are dynamically updated as the current best solution evolves. Our approach makes restart-based BBS explore different promising sub-search-spaces to find high-quality solutions. Experiments on the MiniZinc benchmark suite show that our approach brings significant improvements to a black-box COP solver equipped with state-of-the-art search techniques: our method finds better solutions and proves optimality for more instances.
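A minimal sketch of the bookkeeping such an approach needs (the frequency rule here is an illustrative stand-in, not the authors' exact quality measure): count how often each variable-value pair occurs among the best solutions of past restart runs and seed the next restart with the recurring pairs.

```python
# Illustrative sketch: extract a promising partial assignment from the
# best solutions of previous restart runs (the frequency rule is a
# stand-in for the paper's quality measure).
from collections import Counter

# Best solution found in each past restart run, as {variable: value}.
history = [
    {"x1": 3, "x2": 0, "x3": 7},
    {"x1": 3, "x2": 1, "x3": 7},
    {"x1": 3, "x2": 0, "x3": 5},
]

def good_partial_assignment(history, min_support=0.6):
    """Keep variable-value pairs recurring in at least min_support of the
    historical best solutions; re-run this as new best solutions arrive so
    the seed evolves with the incumbent."""
    counts = Counter((var, val) for sol in history for var, val in sol.items())
    threshold = min_support * len(history)
    return {var: val for (var, val), n in counts.items() if n >= threshold}

seed = good_partial_assignment(history)
print(seed)  # {'x1': 3, 'x2': 0, 'x3': 7} -> jumpstart branch-and-bound here
```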


Branch & Learn with Post-hoc Correction for Predict+Optimize with Unknown Parameters in Constraints

May 2023 · 3 Reads · 2 Citations

Lecture Notes in Computer Science

Combining machine learning and constrained optimization, Predict+Optimize tackles optimization problems containing parameters that are unknown at the time of solving. Prior works focus on cases with unknowns only in the objectives. A new framework was recently proposed to cater for unknowns also in constraints by introducing a loss function, called Post-hoc Regret, that takes into account the cost of correcting an unsatisfiable prediction. Since Post-hoc Regret is non-differentiable, the previous work computes only its approximation. While the notion of Post-hoc Regret is general, its specific implementation is applicable to only packing and covering linear programming problems. In this paper, we first show how to compute Post-hoc Regret exactly for any optimization problem solvable by a recursive algorithm satisfying simple conditions. Experimentation demonstrates substantial improvement in the quality of solutions as compared to the earlier approximation approach. Furthermore, we examine experimentally the behavior of different combinations of correction and penalty functions used in the Post-hoc Regret on the same benchmarks. Results provide insights for defining the appropriate Post-hoc Regret in different application scenarios.

Keywords: Constraint Optimization · Machine Learning · Predict+Optimize
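Spelled out as hypothetical code, Post-hoc Regret for a minimization problem decomposes into three user-supplied pieces: a solver, a correction function, and a penalty function. The sketch below states only the definition; the paper's contribution is computing this quantity exactly inside a recursive solving algorithm during learning.

```python
# Hypothetical sketch stating the Post-hoc Regret definition for a
# minimization problem; solver, objective, correct and penalty are
# problem-specific callables supplied by the user.
def post_hoc_regret(theta_pred, theta_true, solver, objective, correct, penalty):
    x_est = solver(theta_pred)            # optimize under predicted parameters
    x_corr = correct(x_est, theta_true)   # repair infeasibility post hoc
    x_opt = solver(theta_true)            # hindsight optimum
    achieved_cost = objective(x_corr, theta_true) + penalty(x_est, x_corr)
    return achieved_cost - objective(x_opt, theta_true)
```

Because the solver embeds an argmin, this quantity is non-differentiable in the prediction, which is why exact computation rather than a differentiable approximation is the crux here.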


Branch & Learn with Post-hoc Correction for Predict+Optimize with Unknown Parameters in Constraints

March 2023 · 8 Reads

Combining machine learning and constrained optimization, Predict+Optimize tackles optimization problems containing parameters that are unknown at the time of solving. Prior works focus on cases with unknowns only in the objectives. A new framework was recently proposed to cater for unknowns also in constraints by introducing a loss function, called Post-hoc Regret, that takes into account the cost of correcting an unsatisfiable prediction. Since Post-hoc Regret is non-differentiable, the previous work computes only its approximation. While the notion of Post-hoc Regret is general, its specific implementation is applicable to only packing and covering linear programming problems. In this paper, we first show how to compute Post-hoc Regret exactly for any optimization problem solvable by a recursive algorithm satisfying simple conditions. Experimentation demonstrates substantial improvement in the quality of solutions as compared to the earlier approximation approach. Furthermore, we examine experimentally the behavior of different combinations of correction and penalty functions used in the Post-hoc Regret on the same benchmarks. Results provide insights for defining the appropriate Post-hoc Regret in different application scenarios.


Predict+Optimize for Packing and Covering LPs with Unknown Parameters in Constraints

September 2022 · 16 Reads

Predict+Optimize is a recently proposed framework which combines machine learning and constrained optimization, tackling optimization problems that contain parameters that are unknown at solving time. The goal is to predict the unknown parameters and use the estimates to solve for an estimated optimal solution to the optimization problem. However, all prior works have focused on the case where unknown parameters appear only in the optimization objective and not the constraints, for the simple reason that if the constraints were not known exactly, the estimated optimal solution might not even be feasible under the true parameters. The contributions of this paper are two-fold. First, we propose a novel and practically relevant framework for the Predict+Optimize setting, but with unknown parameters in both the objective and the constraints. We introduce the notion of a correction function, and an additional penalty term in the loss function, modelling practical scenarios where an estimated optimal solution can be modified into a feasible solution after the true parameters are revealed, but at an additional cost. Second, we propose a corresponding algorithmic approach for our framework, which handles all packing and covering linear programs. Our approach is inspired by the prior work of Mandi and Guns, though with crucial modifications and re-derivations for our very different setting. Experimentation demonstrates the superior empirical performance of our method over classical approaches.


Branch & Learn for Recursively and Iteratively Solvable Problems in Predict+Optimize

May 2022 · 4 Reads

This paper proposes Branch & Learn, a framework for Predict+Optimize to tackle optimization problems containing parameters that are unknown at the time of solving. Given an optimization problem solvable by a recursive algorithm satisfying simple conditions, we show how a corresponding learning algorithm can be constructed directly and methodically from the recursive algorithm. Our framework applies also to iterative algorithms by viewing them as a degenerate form of recursion. Extensive experimentation shows better performance for our proposal over classical and state-of-the-art approaches.
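For intuition only: the framework's key move is to let functions of the unknown parameter, rather than numbers, flow through a recursive algorithm. The toy below fakes this by sampling the parameter on a grid over a tiny shortest-path recursion; the actual framework propagates exact piecewise-linear functions instead. Everything here is illustrative.

```python
# Toy illustration only: Branch & Learn lets functions of the unknown
# parameter flow through a recursion; here we merely sample the parameter
# on a grid over a tiny shortest-path DP to expose that value function.
import math

def edges(theta):
    # DAG edges (u, v, cost); one cost is affine in the unknown theta.
    return [("s", "a", 2.0), ("s", "b", 1.0 + theta),
            ("a", "t", 1.0), ("b", "t", 3.0)]

def shortest(theta):
    # Bellman-style relaxation in topological order of the tiny DAG.
    dist = {"s": 0.0, "a": math.inf, "b": math.inf, "t": math.inf}
    for u, v, c in edges(theta):
        dist[v] = min(dist[v], dist[u] + c)
    return dist["t"]

# The sampled values trace out min(3, 4 + theta): the piecewise-linear
# function the real framework would construct and propagate exactly.
for theta in (-3.0, -1.0, 0.0, 2.0):
    print(f"theta={theta:+.1f}  shortest={shortest(theta):.1f}")
```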


From MOOC to SPOC: Fable-Based Learning

August 2021 · 16 Reads · 2 Citations

Lecture Notes in Computer Science

This presentation describes the pedagogical innovations and experience from the co-development of three MOOCs on the subject of “Modeling and Solving Discrete Optimization Problems” by The Chinese University of Hong Kong and the University of Melbourne. In a nutshell, the MOOCs feature the Fable-based Learning approach, a form of problem-based learning encapsulated in a coherent story plot. Each lecture video begins with an animation that tells a story based on a classic novel. The protagonists of the story encounter a problem requiring technical assistance from the two professors from modern times via a magical tablet granted to them by a fairy god. The new pedagogy aims at increasing learners’ motivation and interest as well as situating the learners in a coherent learning context. In addition to scriptwriting, animation production, and embedding the teaching materials in the story plot, further challenges of the project were the geographical distance between the two institutions and the need to produce all teaching materials in both (Mandarin) Chinese and English to cater for different geographic learning needs. The MOOCs have been running recurrently on Coursera since 2017. We present learner statistics and feedback, and discuss our experience with and preliminary observations of adopting the online materials in a Flipped Classroom setting.


Towards More Practical and Efficient Automatic Dominance Breaking

May 2021 · 2 Reads · 2 Citations

Proceedings of the AAAI Conference on Artificial Intelligence

Dominance breaking is shown to be an effective technique to improve the solving speed of Constraint Optimization Problems (COPs). This paper proposes separate techniques to generalize and make more efficient the nogood generation phase of the automated dominance breaking framework of Lee and Zhong. The first contribution gives conditions that allow skipping the checking of non-efficiently-checkable constraints while still producing sufficient useful nogoods, thus opening up possibilities to apply the technique to COPs that were previously impractical. The second contribution identifies and avoids the generation of dominance breaking nogoods that are both logically and propagation redundant: the nogood generation model is strengthened using the notion of Common Assignment Elimination to avoid generating nogoods that are subsumed by other nogoods, thus reducing the search space substantially. Extensive experimentation confirms the benefits of the new proposals.
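For intuition, a nogood containing all the assignments of a shorter nogood is redundant: the shorter one already forbids every solution the longer one would, so propagating the longer one wastes effort. Below is a minimal subsumption filter in that spirit (illustrative only; the paper's Common Assignment Elimination avoids generating such nogoods in the first place rather than filtering them afterwards).

```python
# Illustrative subsumption filter over dominance breaking nogoods, each a
# frozenset of (variable, value) assignments. A nogood that is a superset
# of another adds no pruning power and only slows propagation.
def drop_subsumed(nogoods):
    nogoods = sorted((frozenset(ng) for ng in nogoods), key=len)
    kept = []
    for ng in nogoods:
        if not any(k <= ng for k in kept):  # k subset of ng => ng is redundant
            kept.append(ng)
    return kept

nogoods = [
    {("x", 1), ("y", 2)},
    {("x", 1), ("y", 2), ("z", 3)},  # subsumed by the first nogood
    {("y", 2), ("z", 0)},
]
for ng in drop_subsumed(nogoods):
    print(sorted(ng))
```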


Citations (70)


... Another relevant AI trend in this regard is represented by approaches implementing the Predict+Optimize framework, which combines machine learning and constrained optimization to deal with optimization problems that contain parameters that are unknown at solving time (Hu et al. 2023). These approaches might be especially beneficial for online conformance checking, where at each step it is not known how the process will unfold, as well as for approaches dealing with uncertainties, as already mentioned by previous studies (Felli et al. 2021). ...

Reference: Artificial intelligence in conformance checking: state of the art and research agenda
Predict+Optimize for Packing and Covering LPs with Unknown Parameters in Constraints
  • Citing Article
  • June 2023

Proceedings of the AAAI Conference on Artificial Intelligence

... This idea of a penalty function shares a fundamental resemblance to the concept of recourse action in stochastic programming (Ruszczyński and Shapiro, 2003). In a later work, Hu et al. (2023a) apply the 'branch & learn' approach to minimize post-hoc regret in CO problems solvable by recursion. ...

Branch & Learn with Post-hoc Correction for Predict+Optimize with Unknown Parameters in Constraints
  • Citing Chapter
  • May 2023

Lecture Notes in Computer Science

... The notion of subproblem dominance is even more general than subproblem equivalence [5, 7, 11–16]. A dominance relation "describes pairs of assignments where one is known to be at least as good as the other one in terms of satisfiability or objective value" [11]. ...

Towards More Practical and Efficient Automatic Dominance Breaking
  • Citing Article
  • May 2021

Proceedings of the AAAI Conference on Artificial Intelligence

... Finally, if matching is not possible, then we return a set of nogoods combined with nogoods of unsuccessful assignments or nogoods of the next recursive call. Instead of simple backward jumps, the search uses a set of nogoods and restarts [27], [28]. After succeeding or failing to find the embedding, the search process restarts from the beginning, ensuring not to repeat any portion of the search space that has already been visited. ...

Increasing Nogoods in Restart-Based Search
  • Citing Article
  • March 2016

Proceedings of the AAAI Conference on Artificial Intelligence

... With the popularity of the Internet and the demand for information technology in education, MOOCs (Massive Open Online Courses) came into being, becoming a large-scale, wide-coverage, open, and shared learning resource (Goldberg et al., 2015). SPOC (Small Private Online Course) is more popular than MOOC; it focuses on a specific group of students, caters to students' personalized learning needs, and helps teachers track students' learning progress. SPOC is regarded as an emerging model of teaching and learning after the MOOC era (Lee, 2021). In early experiments, Professor Fox demonstrated that the use of MOOC resources as SPOC teaching materials can enhance student learning, improve classroom teaching effectiveness, and significantly increase learning enrolments (Fox, 2013). ...

From MOOC to SPOC: Fable-Based Learning
  • Citing Chapter
  • August 2021

Lecture Notes in Computer Science

... The notion of subproblem dominance is even more general than subproblem equivalence [5, 7, 11–16]. A dominance relation "describes pairs of assignments where one is known to be at least as good as the other one in terms of satisfiability or objective value" [11]. ...

Automatic Dominance Breaking for a Class of Constraint Optimization Problems
  • Citing Conference Paper
  • July 2020

... different restart runs (Boussemart et al. 2004; Michel and Van Hentenryck 2012; Li, Yin, and Li 2021). Recently, Frequent Pattern Mining-based Search (FPMS) (Li et al. 2020) was proposed to find good subtrees for solving COPs. It makes BBS directly zoom into a promising sub-search-space by running a pre-processing phase using frequent pattern mining on high-quality sampled solutions. ...

Finding Good Subtrees for Constraint Optimization Problems Using Frequent Pattern Mining
  • Citing Article
  • April 2020

Proceedings of the AAAI Conference on Artificial Intelligence

... When learning resources are scarce, it might be helpful to provide students with alternative learning environments where they can spend more time preparing for exams and processing what they have learned. With the use of AI fusion algorithms, this nontraditional system will allow for more calculated and streamlined training adjustments (Chan et al., 2020). If teachers use this method, they may get insights on how their instruction is being received by their pupils. ...

Teaching Constraint Programming Using Fable-Based Learning
  • Citing Article
  • April 2020

Proceedings of the AAAI Conference on Artificial Intelligence

Cecilia Chun · Holly Fung · [...]

... As already hinted by the introductory example, Infrared does not focus on general constraint solving as performed by constraint programming systems such as Gecode [26]. Adding evaluation to our models ties this work closer to weighted constraint problems or cost networks, with some superficial relations to cost function optimizers such as Toulbar2 [27]. While such systems combine search with forms of constraint consistency, our solving strategies come from the area of constraint processing in constraint networks [12]. ...

Tractability-preserving Transformations of Global Cost Functions
  • Citing Article
  • June 2016

Artificial Intelligence

... By recording these so-called nld-nogoods, we obtain the guarantee of never exploring the same subtrees, further making the approach complete. This restart-based learning mechanism has been extended to take into account symmetry breaking [11,13] and the increasing nature of nld-nogoods [12], called increasing-nogoods for this reason. ...

An Increasing-Nogoods Global Constraint for Symmetry Breaking During Search
  • Citing Conference Paper
  • September 2014

Lecture Notes in Computer Science