Lars Kotthoff’s research while affiliated with University of Wyoming and other places


Publications (106)


Code Evolution Graphs: Understanding Large Language Model Driven Design of Algorithms
  • Preprint
  • File available

[Figure 5: Spearman correlation of each code feature (columns) with the performance of the algorithms (fitness) for all benchmarks and methods (rows).]

March 2025 · 28 Reads

Anna V. Kononova · Lars Kotthoff

Large Language Models (LLMs) have demonstrated great promise in generating code, especially when used inside an evolutionary computation framework to iteratively optimize the generated algorithms. However, in some cases they fail to generate competitive algorithms, or the code optimization stalls, and we are left with no recourse because of a lack of understanding of the generation process and the generated code. We present a novel approach to mitigate this problem by enabling users to analyze the generated code inside the evolutionary process and how it evolves over repeated prompting of the LLM. We show results for three benchmark problem classes and demonstrate novel insights. In particular, LLMs tend to generate more complex code with repeated prompting, but the additional complexity can hurt algorithmic performance in some cases. Different LLMs have different coding "styles", and the code generated by one LLM tends to be dissimilar to that generated by others. These two findings suggest that using different LLMs inside a code evolution framework might produce higher-performing code than using only one LLM.
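The correlation analysis summarized in Figure 5 is straightforward to reproduce in outline. A minimal R sketch, with invented feature names and random data standing in for the paper's actual code features and fitness values:

```r
# Hypothetical per-run code features and fitness; not the paper's data.
set.seed(1)
runs <- data.frame(
  n_lines    = rpois(50, 80),   # code length of each generated algorithm
  n_loops    = rpois(50, 5),    # a structural complexity feature
  cyclomatic = rpois(50, 12),   # another complexity feature
  fitness    = rnorm(50)        # benchmark performance of each code
)
# Spearman rank correlation of every feature with fitness
sapply(setdiff(names(runs), "fitness"),
       function(f) cor(runs[[f]], runs$fitness, method = "spearman"))
```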




The third and fourth international competitions on computational models of argumentation: Design, results and analysis

April 2024 · 53 Reads · 3 Citations

Argument and Computation

The International Competition on Computational Models of Argumentation (ICCMA) focuses on reasoning tasks in abstract argumentation frameworks. Submitted solvers are tested on a selected collection of benchmark instances, including artificially generated argumentation frameworks and some frameworks formalizing real-world problems. This paper presents the novelties introduced in the organization of the Third (2019) and Fourth (2021) editions of the competition. In particular, we proposed new tracks: one dedicated to dynamic solvers (i.e., solvers that incrementally compute solutions of frameworks obtained by incrementally modifying original ones) in ICCMA’19, and one dedicated to approximate algorithms in ICCMA’21. From the analysis of the results, we noticed that (i) dynamic recomputation of solutions leads to significant performance improvements, (ii) approximation provides much faster results with satisfactory accuracy, and (iii) classical solvers improved with respect to previous editions, thus revealing advancement in the state of the art.


FlexiBO: A Decoupled Cost-Aware Multi-objective Optimization Approach for Deep Neural Networks (Abstract Reprint)

March 2024 · 3 Reads

Proceedings of the AAAI Conference on Artificial Intelligence

The design of machine learning systems often requires trading off different objectives, for example, prediction error and energy consumption for deep neural networks (DNNs). Typically, no single design performs well in all objectives; therefore, finding Pareto-optimal designs is of interest. The search for Pareto-optimal designs involves evaluating designs in an iterative process, and the measurements are used to evaluate an acquisition function that guides the search process. However, measuring different objectives incurs different costs. For example, the cost of measuring the prediction error of DNNs is orders of magnitude higher than that of measuring the energy consumption of a pre-trained DNN, as it requires re-training the DNN. Current state-of-the-art methods do not consider this difference in objective evaluation cost, potentially incurring expensive evaluations of objective functions in the optimization process. In this paper, we develop a novel decoupled and cost-aware multi-objective optimization algorithm, which we call Flexible Multi-Objective Bayesian Optimization (FlexiBO), to address this issue. For evaluating each design, FlexiBO selects the objective with higher relative gain by weighting the improvement of the hypervolume of the Pareto region with the measurement cost of each objective. This strategy, therefore, balances the expense of collecting new information with the knowledge gained through objective evaluations, preventing FlexiBO from performing expensive measurements for little to no gain. We evaluate FlexiBO on seven state-of-the-art DNNs for image recognition, natural language processing (NLP), and speech-to-text translation. Our results indicate that, given the same total experimental budget, FlexiBO discovers designs with 4.8% to 12.4% lower hypervolume error than the best state-of-the-art multi-objective optimization method.
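The decoupled selection step the abstract describes, choosing the objective whose expected hypervolume improvement is largest relative to its measurement cost, can be sketched in a few lines of R. All numbers and names below are illustrative assumptions, not the authors' implementation:

```r
# Hypothetical per-measurement costs (seconds): measuring prediction error
# requires re-training; measuring energy use of a pre-trained DNN does not.
costs <- c(error = 3600, energy = 60)
# Hypothetical expected hypervolume improvement (EHVI) from measuring each
# objective for the current candidate design, estimated by surrogate models.
ehvi  <- c(error = 0.08, energy = 0.02)
gain  <- ehvi / costs          # improvement per unit cost
names(which.max(gain))         # objective to measure next ("energy" here)
```

Note that the cheaper objective wins even though its raw EHVI is smaller; this is exactly the trade-off that prevents expensive measurements for little to no gain.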







Citations (61)


... Nitpick could not be installed in StarExec. None of the resulting insights are affected, because enough resources were allocated for the system to find its solutions; see Section 3.1 of [51] for details on Peter Principle Points of automated reasoning systems. See the QMLTP directory at https://github.com/TPTPWorld/NonClassicalLogic. ...

Reference:

Solving quantified modal logic problems by translation to classical logics
An Empirical Assessment of Progress in Automated Theorem Proving

... An argumentation framework might need to be represented by multiple such tree structures. We can work with range-based semantics by considering the overall effects of multiple trees such as the stage (Lagniez et al., 2020) or preferred semantics. Moreover, by monitoring these local structures, their effects and the range of the effects can be computed instantly. ...

The third and fourth international competitions on computational models of argumentation: Design, results and analysis

Argument and Computation

... LM, random forest (RF), and fully connected neural network (FcNN) models were built in R (version 4.3.3) using the 'mlr3' (Bischl et al. 2024), 'ranger' (Wright and Ziegler 2017), and 'nnet' (Venables and Ripley 2002) packages to predict METs and kJ/min. RFs, an ensemble method, are built upon decision trees that are grown by bootstrapping the data and using different subsets of features at each split (Biau and Scornet 2016). ...

Applied Machine Learning Using mlr3 in R
  • Citing Book
  • December 2023
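For readers unfamiliar with the setup this snippet describes, here is a minimal mlr3 sketch benchmarking an LM against a random forest on a regression task. The task and hyperparameters are illustrative stand-ins for the study's METs/kJ-per-minute data; a "regr.nnet" learner would additionally require the mlr3extralearners package:

```r
library(mlr3)
library(mlr3learners)   # provides regr.lm and regr.ranger

task <- tsk("mtcars")   # built-in regression task, stand-in for the study's data
learners <- list(
  lrn("regr.lm"),                       # linear model
  lrn("regr.ranger", num.trees = 500)   # random forest via ranger
)
design <- benchmark_grid(task, learners, rsmp("cv", folds = 5))
bmr <- benchmark(design)
bmr$aggregate(msr("regr.rmse"))         # compare cross-validated RMSE
```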

... MOO has been widely employed in conjunction with NAS methodologies, culminating in the development of Multi-Objective Neural Architecture Search (MONAS). This approach is particularly valuable for (a) designing DNNs with the goal of not solely optimising the accuracy but also considering resource consumption [11,23,56], and (b) for compressing pretrained models [18,40,63]. Notably, our framework represents one of the pioneering efforts to formulate and address device-specific MOO problems to achieve specific SLOs at the system level. ...

FlexiBO: A Decoupled Cost-Aware Multi-Objective Optimization Approach for Deep Neural Networks

Journal of Artificial Intelligence Research

... [20]. For Random Forest, we used the "mlr3" [21] package in R with the classif.ranger interface, training 10 decision trees (num.trees ...

mlr3: A modern object-oriented machine learning framework in R

The Journal of Open Source Software
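The quoted setup corresponds to something like the following mlr3 sketch, using a built-in task as a stand-in for the study's data:

```r
library(mlr3)
library(mlr3learners)

task    <- tsk("sonar")   # binary classification example task
learner <- lrn("classif.ranger", num.trees = 10, predict_type = "prob")
rr      <- resample(task, learner, rsmp("cv", folds = 5))
rr$aggregate(msr("classif.auc"))
```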

... WB is the best choice for packing in most cases and notably faster than DBLFD, taking 32.7 seconds compared to 5,969 seconds on average, obtaining a higher average BVU (0.43 vs. 0.39). A simple 'winner-takes-all' approach would support the selection of the WB algorithm for future instances (Kotthoff et al., 2012). However, the performance variation over a large set of instances indicates that an effective algorithm selection model could outperform the selection of a single packing algorithm. ...

A Preliminary Evaluation of Machine Learning in Algorithm Selection for Search Problems
  • Citing Article
  • August 2021

Proceedings of the International Symposium on Combinatorial Search
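The gap the snippet points to, between picking one algorithm for all instances and picking per instance, is easy to illustrate. The runtimes below are invented, not the paper's measurements:

```r
# Hypothetical runtimes (seconds) of two packing algorithms on five instances
rt <- rbind(WB    = c(1, 2, 50, 3, 2),
            DBLFD = c(40, 30, 4, 35, 30))
rowMeans(rt)              # 'winner-takes-all': WB wins on average (11.6 vs 27.8)
mean(apply(rt, 2, min))   # per-instance selection (virtual best): 2.4
```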

... Finally, as we take the online algorithm selection problem as a running example for our setting, it is worth mentioning that bandit-based approaches have already been considered for this problem (Gagliolo and Schmidhuber 2007; Gagliolo and Schmidhuber 2010; Degroote 2017; Degroote et al. 2018; Tornede et al. 2022). However, these focus on certain algorithmic problem classes, such as the Boolean satisfiability problem (SAT) or the quantified Boolean formula problem (QBF). ...

A Regression-Based Methodology for Online Algorithm Selection
  • Citing Article
  • September 2021

Proceedings of the International Symposium on Combinatorial Search
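A minimal sketch of the bandit view of online algorithm selection mentioned above: each algorithm is an arm, and UCB1 balances trying under-explored algorithms against exploiting the best one so far. The rewards here are hypothetical Bernoulli draws; real systems must additionally handle censored runtimes and instance features:

```r
# UCB1 index: empirical mean reward plus an exploration bonus
ucb_select <- function(means, counts, t)
  which.max(means + sqrt(2 * log(t) / counts))

set.seed(1)
p <- c(0.3, 0.5, 0.7)    # hypothetical success probability of each algorithm
means <- counts <- numeric(3)
for (a in 1:3) {         # initialization: run each algorithm once
  counts[a] <- 1
  means[a]  <- rbinom(1, 1, p[a])
}
for (t in 4:500) {
  a <- ucb_select(means, counts, t)
  r <- rbinom(1, 1, p[a])                              # observed reward
  counts[a] <- counts[a] + 1
  means[a]  <- means[a] + (r - means[a]) / counts[a]   # incremental mean update
}
counts   # runs concentrate on the best algorithm (arm 3)
```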

... A significant part of the literature is devoted to describing the results of studies of individual tasks that arise when using tools to solve similar multi-criteria optimization problems. Algorithms [17] and tools [18] for analyzing and optimizing hyperparameters are considered as such tools in modern studies. Studies devoted to the use of simulation modeling tools for LC management of individual aspects of an IS [19] also receive attention. ...

Automated Benchmark-Driven Design and Explanation of Hyperparameter Optimizers

IEEE Transactions on Evolutionary Computation

... Machine learning is an alternative approach to modeling the LIGC process. Using Bayesian model-based optimization, the Raman G/D ratio, which indicates the degree of graphitization in the LIG patterns, was improved by a factor of four [19,20]. Furthermore, to monitor the process of LIG formation, computer vision and deep transfer learning models were developed [21]. ...

Optimizing Laser-Induced Graphene Production

... The aim of the marginal contribution of algorithms/workflows is to determine how much the performance of an existing portfolio of configurations (e.g., hyperparameters, algorithms, workflows) can be improved by adding a new alternative to it [32]. A more general concept is the Shapley value, which has been used to determine the marginal contribution of an algorithm/workflow with respect to a given portfolio or any of its subsets [14]. Transfer HPO: various methods exploit data collected from previous HPO tasks to initiate the search on the current task. ...

Using the Shapley Value to Analyze Algorithm Portfolios
  • Citing Article
  • March 2016

Proceedings of the AAAI Conference on Artificial Intelligence
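To make the Shapley idea in the snippet concrete, here is a small R sketch computing exact Shapley values for a three-algorithm portfolio. The runtimes and the value function (runtime saved by the portfolio's virtual best over the worst single algorithm) are illustrative assumptions, not the cited paper's exact formulation:

```r
runtimes <- c(A = 10, B = 12, C = 30)   # hypothetical mean runtimes
# Value of a subset S: how much its virtual best improves on the worst algorithm
v <- function(S) if (length(S) == 0) 0 else max(runtimes) - min(runtimes[S])

algs <- names(runtimes)
n    <- length(algs)
shapley <- sapply(algs, function(a) {
  others  <- setdiff(algs, a)
  subsets <- list(character(0))         # enumerate all subsets not containing a
  for (k in seq_along(others))
    subsets <- c(subsets, combn(others, k, simplify = FALSE))
  contrib <- 0
  for (S in subsets) {                  # Shapley weight: |S|! (n-|S|-1)! / n!
    w <- factorial(length(S)) * factorial(n - length(S) - 1) / factorial(n)
    contrib <- contrib + w * (v(c(S, a)) - v(S))
  }
  contrib
})
shapley   # A gets most of the credit; the values sum to v(c("A", "B", "C"))
```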