A visualisation of the Riemann distribution, with unbounded support. Plot based on Müller et al. (2021).

Source publication
Preprint
In this paper, we use Prior-data Fitted Networks (PFNs) as a flexible surrogate for Bayesian Optimization (BO). PFNs are neural processes that are trained to approximate the posterior predictive distribution (PPD) for any prior distribution that can be efficiently sampled from. We describe how this flexibility can be exploited for surrogate modelin...
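
Concretely, PFN training repeats a simple loop: sample a synthetic dataset from the prior, split it into a context and held-out queries, and train the network to predict the held-out targets from the context. The sketch below is a hypothetical minimal version in PyTorch, not the paper's API; `model`, `sample_dataset_from_prior`, and the returned distribution object are illustrative assumptions.

```python
# A minimal sketch of prior-data fitting as described above, not the
# authors' code: `model` and `sample_dataset_from_prior` are assumed
# placeholders, and the model is assumed to return a distribution object.
import torch

def prior_fitting_step(model, sample_dataset_from_prior, optimizer):
    # Draw one synthetic dataset (x, y) from the prior.
    x, y = sample_dataset_from_prior()
    # Randomly split it into an observed context and held-out queries.
    cut = torch.randint(1, len(x) - 1, (1,)).item()
    # The network maps (context, query inputs) to a predictive
    # distribution over the query targets, approximating the PPD.
    pred_dist = model(x[:cut], y[:cut], x[cut:])
    # Maximize the likelihood of the held-out targets.
    loss = -pred_dist.log_prob(y[cut:]).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```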

Context in source publication

Context 1
... from the understanding that neural networks excel at classification tasks, and taking inspiration from the discretizations in distributional reinforcement learning (Bellemare et al., 2017), they employed a discretized continuous distribution called the Riemann distribution. It discretizes the output space into a set of buckets B, selected such that each bucket has equal probability under the prior data: p(y ∈ b) = 1/|B|, ∀b ∈ B. A Riemann distribution with unbounded support is used, as suggested by Müller et al. (2021), which replaces the final bar on each side with a suitably scaled half-normal distribution, as shown in Figure 6. For a more precise definition, we direct the reader to Müller et al. (2021). ...
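
As a concrete illustration of the equal-probability bucketing and the half-normal tails, here is a minimal sketch, not the authors' implementation. The quantile-based border selection follows the definition above; `tail_scale` and the exact way the tails are scaled are simplifying assumptions (Müller et al. (2021) define the half-normal scaling more precisely).

```python
# Minimal sketch of a Riemann distribution with unbounded support:
# equal-prior-mass buckets plus half-normal tails. Illustrative only.
import numpy as np
from scipy.stats import halfnorm

def bucket_borders(prior_y, n_buckets):
    """Choose borders so each bucket has prior mass 1/|B|:
    interior borders are equally spaced quantiles of prior samples."""
    qs = np.linspace(0.0, 1.0, n_buckets + 1)
    return np.quantile(prior_y, qs)

def riemann_log_prob(y, borders, probs, tail_scale=1.0):
    """Log-density of scalar y under bucket probabilities `probs`,
    with the outermost bars replaced by half-normal tails."""
    n = len(probs)  # = len(borders) - 1
    # Locate the bucket containing y (clipping routes out-of-range
    # values into the unbounded tails).
    idx = int(np.clip(np.searchsorted(borders, y, side="right") - 1, 0, n - 1))
    if idx == 0:
        # Left tail: half-normal decaying below the second border,
        # scaled so its total mass equals the first bucket's probability.
        return np.log(probs[0]) + halfnorm.logpdf(borders[1] - y, scale=tail_scale)
    if idx == n - 1:
        # Right tail: mirrored half-normal above the second-to-last border.
        return np.log(probs[-1]) + halfnorm.logpdf(y - borders[-2], scale=tail_scale)
    # Interior bucket: uniform density, probability mass / bucket width.
    return np.log(probs[idx]) - np.log(borders[idx + 1] - borders[idx])
```

During training, the network would output one probability per bucket, and the loss would be the negative of this log-probability evaluated at the true target.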

Citations

... This can be challenging due to the cost of training the model and the potential for unintended bias in the data. To avoid retraining, general-purpose GenAI-based recommenders could conceivably be trained on synthetic data (e.g., prior-data fitted networks [42]) or on previous optimization data. Regardless, harnessing GenAI's distributional learning capabilities to recommend new solutions to evaluate could significantly accelerate optimization. ...
Preprint
The field of engineering is shaped by the tools and methods used to solve problems. Optimization is one such class of powerful, robust, and effective engineering tools proven over decades of use. Within just a few years, generative artificial intelligence (GenAI) has risen as another promising tool for general-purpose problem-solving. While optimization shines at finding high-quality and precise solutions that satisfy constraints, GenAI excels at inferring problem requirements, bridging solution domains, handling mixed data modalities, and rapidly generating copious numbers of solutions. These differing attributes also make the two frameworks complementary. Hybrid generative optimization algorithms present a new paradigm for engineering problem-solving and have shown promise across a few engineering applications. We expect significant developments in the near future around generative optimization, leading to changes in how engineers solve problems using computational tools. We offer our perspective on existing methods, areas of promise, and key research questions.