ABSTRACT: Nowadays, constraints play an important role in industry, because most industrial optimization tasks are subject to several restrictions. Finding good solutions for a particular problem with respect to all constraint functions can be expensive, especially when the dimensionality of the search space is large and many constraint functions are involved. Unfortunately, function evaluations in industrial optimization are heavily limited, because expensive simulations must often be conducted. For such high-dimensional optimization tasks, the constrained optimization algorithm COBRA was proposed, which uses surrogate models for both the objective and the constraint functions. In this paper we present a new mechanism for COBRA to repair infill solutions with slightly violated constraints. The repair mechanism is based on gradient descent on surrogates of the constraint functions and aims at finding nearby feasible solutions. We test the repair mechanism on a real-world problem from the automotive industry and on other synthetic test cases. We show that with the integration of the repair method, the percentage of infeasible solutions is significantly reduced, leading to faster convergence and better final results.
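The repair idea described above can be sketched as follows. This is a minimal illustration, not the paper's COBRA implementation: the two analytic constraints `g1`, `g2` and their gradients stand in for the surrogate models of the constraint functions, and the step size and iteration limit are arbitrary choices.

```python
import numpy as np

def repair(x, constraints, grads, step=0.1, max_iter=100, tol=1e-8):
    """Move x toward feasibility by gradient descent on the sum of the
    currently violated (surrogate) constraints g_i(x) <= 0."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iter):
        violated = [i for i, g in enumerate(constraints) if g(x) > tol]
        if not violated:
            return x, True                       # nearby feasible point found
        # descend on the sum of the violated constraint surrogates
        direction = sum(grads[i](x) for i in violated)
        x = x - step * direction
    return x, False

# toy stand-ins for constraint surrogates: unit disc and a half-plane
g1 = lambda x: x[0]**2 + x[1]**2 - 1.0
g2 = lambda x: x[0] - 0.5
dg1 = lambda x: np.array([2 * x[0], 2 * x[1]])
dg2 = lambda x: np.array([1.0, 0.0])

# an infill point that slightly violates both constraints
x_rep, ok = repair([0.6, 0.9], [g1, g2], [dg1, dg2])
```

In COBRA the gradients would come from the differentiable surrogate models rather than from hand-coded derivatives; the loop structure is the same.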
ABSTRACT: Recent research revealed that model-assisted parameter tuning can improve the quality of supervised machine learning (ML) models. The tuned models were found in particular to generalize better and to be more robust compared to other optimization approaches. However, the advantages of tuning often come along with high computation times, posing a real burden for employing tuning algorithms. While training with a reduced number of patterns can be a solution to this, it is often connected with decreasing model accuracy and increasing instability and noise. Hence, we propose a novel approach defined by a two-criteria optimization task, in which both the runtime and the quality of ML models are optimized. Because the budgets for this optimization task are usually very restricted in ML, the surrogate-assisted Efficient Global Optimization (EGO) algorithm is adapted. In order to cope with noisy experiments, we apply two hypervolume-indicator-based EGO algorithms with smoothing and re-interpolation of the surrogate models. These techniques do not need replicates. We find that they can outperform traditional approaches such as Latin hypercube sampling (LHS), as well as EGO variants with replicates.
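At the core of EGO is an infill criterion such as expected improvement (EI), which balances the surrogate's predicted mean against its uncertainty. The sketch below shows the standard single-objective EI formula for minimization; the hypervolume-based variants in the abstract generalize this idea to two criteria, which is not shown here, and the mean/std arrays are made-up surrogate predictions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Expected improvement for minimization, given the surrogate's
    predicted mean mu and standard deviation sigma at candidate points."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# hypothetical surrogate predictions at three candidate points
mu    = np.array([0.2, 0.5, 0.1])
sigma = np.array([0.05, 0.40, 0.01])
ei = expected_improvement(mu, sigma, f_best=0.15)
best = int(np.argmax(ei))   # candidate chosen as the next expensive evaluation
```

The point with the highest EI is evaluated on the real (expensive, possibly noisy) objective, the surrogate is refitted, and the loop repeats until the budget is exhausted.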
ABSTRACT: Systems that learn to play board games are often trained by self-play on the basis of temporal difference (TD) learning. Successful examples include Tesauro's well-known TD-Gammon and Lucas' Othello agent. For other board games of moderate complexity like Connect Four, we found in previous work that a successful system requires a very rich initial feature set with more than half a million weights and several million training games. In this work we study the benefits of eligibility traces added to this system. To the best of our knowledge, eligibility traces have not been used before for such a large system. Different versions of eligibility traces (standard, resetting, and replacing traces) are compared. We show that eligibility traces speed up learning by a factor of two and that they increase the asymptotic playing strength.
CIG'2014, International Conference on Computational Intelligence in Games, Dortmund; 08/2014
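The standard and replacing traces compared above can be illustrated with a single TD(λ) step for a linear value function. This is a generic textbook sketch under assumed hyperparameters (α, γ, λ), not the paper's Connect Four system with its half-million weights.

```python
import numpy as np

def td_lambda_update(w, e, phi, r, phi_next, alpha=0.1, gamma=1.0,
                     lam=0.8, replacing=False):
    """One TD(lambda) step for a linear value function V(s) = w . phi(s),
    where e is the eligibility trace vector."""
    e = gamma * lam * e + phi              # accumulating (standard) traces
    if replacing:
        e = np.where(phi > 0, phi, e)      # replacing traces: clamp active features
    delta = r + gamma * np.dot(w, phi_next) - np.dot(w, phi)   # TD error
    w = w + alpha * delta * e              # credit all recently active features
    return w, e

# toy binary feature vectors for two successive states
w = np.zeros(4)
e = np.zeros(4)
phi      = np.array([1.0, 0.0, 1.0, 0.0])
phi_next = np.array([0.0, 1.0, 0.0, 0.0])
w, e = td_lambda_update(w, e, phi, r=1.0, phi_next=phi_next)
```

With λ > 0 the TD error is propagated to all features that were active in recent states, not just the current one, which is the mechanism behind the reported factor-two speedup.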
ABSTRACT: Learning complex game functions is still a difficult task. We apply temporal difference learning (TDL), a well-known variant of the reinforcement learning approach, in combination with n-tuple networks to the game Connect-4. Our agent is trained solely by self-play. It is able, for the first time, to consistently beat the optimally playing Minimax agent (in game situations where a win is possible). The n-tuple network induces a powerful feature space: it is not necessary to design specific features, because the agent learns to select the right ones. We believe that the n-tuple network is an important ingredient for the overall success and identify several aspects that are relevant for achieving high-quality results. The architecture is sufficiently general to be applied to similar reinforcement learning tasks as well.
PPSN'2012: 12th International Conference on Parallel Problem Solving From Nature; 09/2012
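The n-tuple network mentioned above is essentially a sum of lookup tables, each indexed by the board contents at a fixed set of cells. The sketch below shows this structure on a tiny made-up board (cell indices and tuple shapes are illustrative, not the paper's Connect-4 configuration).

```python
import numpy as np

class NTupleNetwork:
    """Minimal n-tuple value function: each tuple indexes its own lookup
    table by the cell values it observes (0 = empty, 1 / 2 = players)."""
    def __init__(self, tuples, n_values=3):
        self.tuples = tuples
        self.n_values = n_values
        self.tables = [np.zeros(n_values ** len(t)) for t in tuples]

    def index(self, board, cells):
        idx = 0
        for c in cells:                       # base-n_values encoding
            idx = idx * self.n_values + board[c]
        return idx

    def value(self, board):
        return sum(tab[self.index(board, t)]
                   for tab, t in zip(self.tables, self.tuples))

    def update(self, board, delta, alpha=0.1):
        # TD-style update: every active table entry receives the correction
        for tab, t in zip(self.tables, self.tuples):
            tab[self.index(board, t)] += alpha * delta

net = NTupleNetwork(tuples=[(0, 1, 2), (2, 3)])
board = [1, 0, 2, 1]
net.update(board, delta=1.0)
```

Because each tuple enumerates all value combinations of its cells, even a handful of tuples spans a very large feature space, and training selects the useful entries automatically.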
ABSTRACT: Kernel-based methods like Support Vector Machines (SVM) have been established as powerful techniques in machine learning. The idea of SVM is to perform a mapping from the input space to a higher-dimensional feature space using a kernel function, so that a linear learning algorithm can be employed. However, the burden of choosing the appropriate kernel function is usually left to the user. It can easily be shown that the accuracy of the learned model highly depends on the chosen kernel function and its parameters, especially for complex tasks. In order to obtain a good classification or regression model, an appropriate kernel function in combination with optimized pre- and post-processed data must be used. To circumvent these obstacles, we present two solutions for optimizing kernel functions: (a) automated hyperparameter tuning of kernel functions combined with an optimization of pre- and post-processing options by Sequential Parameter Optimization (SPO) and (b) evolving new kernel functions by Genetic Programming (GP). We review modern techniques for both approaches, comparing their different strengths and weaknesses. We apply tuning to SVM kernels for both regression and classification. Automatic hyperparameter tuning of standard kernels together with pre- and post-processing options consistently yielded systems with excellent prediction accuracy on the considered problems. SPO-tuned kernels in particular led to much better results than all other tested tuning approaches. Regarding GP-based kernel evolution, our method rediscovered multiple standard kernels, but no significant improvements over standard kernels were obtained.
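To make the kernel-tuning task concrete, the sketch below searches jointly over the kernel type and its hyperparameters for an SVM classifier. Note this uses a plain grid search via scikit-learn merely to illustrate the search space; the abstract's SPO approach is a model-assisted sequential tuner, and the synthetic data set here is an arbitrary stand-in.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# synthetic classification data as a stand-in for a real task
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# the tuning task: kernel choice and kernel hyperparameters, jointly
param_grid = [
    {"kernel": ["rbf"],  "C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    {"kernel": ["poly"], "C": [0.1, 1, 10], "degree": [2, 3]},
]
search = GridSearchCV(SVC(), param_grid, cv=3)
search.fit(X, y)
```

A sequential tuner like SPO explores the same kind of search space but spends evaluations adaptively, guided by a surrogate model, instead of exhaustively as the grid search does here.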
ABSTRACT: Computational Intelligence (CI) provides good and robust working solutions for global optimization. CI is especially suited for solving difficult parameter optimization tasks when the fitness function is noisy. Such situations and fitness landscapes frequently arise in real-world applications like Data Mining (DM). Unfortunately, parameter tuning in DM is computationally expensive, and CI-based methods often require many function evaluations before they finally converge to good solutions. Earlier studies have shown that surrogate models can reduce the number of real function evaluations. However, each function evaluation remains time-consuming. In this paper we investigate whether and how the fitness landscape of the parameter space changes when fewer observations are used for model training during tuning. A representative study on seven DM tasks shows that the results are nevertheless competitive. On all these tasks, a fraction of 10-15% of the training data is sufficient. With this, the computation time can be reduced by a factor of 6-10.
PPSN'2012: 12th International Conference on Parallel Problem Solving From Nature; 09/2012
ABSTRACT: Slow feature analysis (SFA) is a bioinspired method for extracting slowly varying driving forces from quickly varying non-stationary time series. We show here that it is possible for SFA to detect a component which is even slower than the driving force itself (e.g., the envelope of a modulated sine wave). It depends on circumstances like the embedding dimension, the time series predictability, or the base frequency, whether the driving force itself or a slower subcomponent is detected. Interestingly, we observe a swift phase transition from one regime to another, and it is the objective of this work to quantify the influence of various parameters on this phase transition. We conclude that what is perceived as slow by SFA varies and that a more or less fast switching from one regime to another occurs, perhaps showing some similarity to human perception.
International Journal of Innovative Computing and Applications 01/2011; 3(3):3-10. DOI:10.1504/IJICA.2011.037946
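The extraction step behind SFA can be sketched in its linear form: whiten the signal, then take the whitened direction whose time derivative has the smallest variance. This is a generic linear-SFA illustration on a made-up two-channel signal (a slow sine mixed with a fast carrier), not the nonlinear, embedded setup studied in the paper.

```python
import numpy as np

def linear_sfa(X):
    """Linear SFA: return the unit-variance projection of X that varies
    most slowly, i.e. minimizes the mean squared time derivative."""
    X = X - X.mean(axis=0)
    # whiten: decorrelate and rescale to unit variance
    d, U = np.linalg.eigh(np.cov(X.T))
    W = U / np.sqrt(d)                 # whitening matrix (column-wise scaling)
    Z = X @ W
    # among whitened directions, pick the one with the slowest variation
    dd, V = np.linalg.eigh(np.cov(np.diff(Z, axis=0).T))
    return W @ V[:, 0]                 # eigenvector of smallest eigenvalue

# two-channel toy signal: slow driving force contaminated by a fast carrier
t = np.linspace(0, 4 * np.pi, 2000)
slow = np.sin(t)                       # slowly varying driving force
fast = np.sin(40 * t)                  # quickly varying component
X = np.column_stack([slow + 0.01 * fast, fast])
w = linear_sfa(X)
y = (X - X.mean(axis=0)) @ w           # extracted slow signal (up to sign)
```

On this toy mixture the extracted output recovers the slow driving force almost exactly; the paper's question is which of several slow components such a procedure latches onto as the parameters vary.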
ABSTRACT: Sequential parameter optimization (SPO) is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. In this study, SPO is used directly as an optimization method on different noisy mathematical test functions. SPO includes a broad variety of metamodels, which can have a significant impact on its performance. Additionally, Optimal Computing Budget Allocation (OCBA), an enhanced method for handling the computational budget spent on selecting new design points, is presented. The OCBA approach can intelligently determine the most efficient replication numbers. Moreover, we study the performance of different metamodels integrated in SPO. Our results reveal that the incorporation of OCBA and the selection of Gaussian process models are highly beneficial. SPO outperformed three different alternative optimization algorithms on a set of five noisy mathematical test functions.
Proceedings of the 13th annual conference companion on Genetic and evolutionary computation; 01/2011
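The OCBA idea of spending replications where they matter most can be sketched with the classical allocation rule for selecting the best (smallest-mean) design. The means, standard deviations, and budget below are made-up numbers; the formula is the standard OCBA ratio rule, not code from the paper.

```python
import numpy as np

def ocba_allocation(means, stds, budget):
    """OCBA replication allocation (minimization): spend `budget`
    replications so the probability of correct selection is maximized."""
    means, stds = np.asarray(means, float), np.asarray(stds, float)
    b = int(np.argmin(means))            # current best design
    delta = means - means[b]             # gaps to the best
    others = [i for i in range(len(means)) if i != b]
    ref = others[0]
    ratios = np.ones(len(means))
    for i in others:                     # N_i / N_ref = (s_i/d_i)^2 / (s_ref/d_ref)^2
        ratios[i] = (stds[i] / delta[i])**2 / (stds[ref] / delta[ref])**2
    # best design: N_b = s_b * sqrt(sum_i N_i^2 / s_i^2)
    ratios[b] = stds[b] * np.sqrt(np.sum(ratios[others]**2 / stds[others]**2))
    return np.round(budget * ratios / ratios.sum()).astype(int)

# hypothetical noisy designs: design 1 is a close competitor, design 2 clearly worse
alloc = ocba_allocation(means=[1.0, 1.2, 2.0], stds=[0.5, 0.5, 0.5], budget=100)
```

The close competitor receives nearly as many replications as the incumbent best, while the clearly inferior design gets almost none, which is exactly the behavior that makes OCBA more efficient than uniform replication.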
ABSTRACT: The complex, often redundant and noisy data in real-world data mining (DM) applications frequently lead to inferior results when out-of-the-box DM models are applied. A tuning of parameters is essential to achieve high-quality results. In this work we aim at tuning the parameters of the preprocessing and the modeling phase conjointly. The framework TDM (Tuned Data Mining) was developed to facilitate the search for good parameters and the comparison of different tuners. It is shown that tuning is of great importance for high-quality results. Surrogate-model-based tuning utilizing the Sequential Parameter Optimization Toolbox (SPOT) is compared with other tuners (CMA-ES, BFGS, LHD), and evidence is found that SPOT is well suited for this task. In benchmark tasks like the Data Mining Cup (DMC), tuned models achieve remarkably better ranks than their untuned counterparts.
13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, Proceedings, Dublin, Ireland, July 12-16, 2011; 01/2011
ABSTRACT: Slow Feature Analysis (SFA) has been established as a robust and versatile technique from the neurosciences to learn slowly varying functions from quickly changing signals. Recently, the method has also been applied to classification tasks. Here we apply SFA for the first time to a time series classification problem originating from gesture recognition. The gestures used in our experiments are based on acceleration signals of the Bluetooth Wiimote controller (Nintendo). We show that SFA achieves results comparable to the well-known Random Forest predictor in shorter computation time, given a sufficient number of training patterns. However, and this is a novelty for SFA classification, we discovered that SFA requires the number of training patterns to be strictly greater than the dimension of the nonlinear function space. If too few patterns are available, we find that the model constructed by SFA severely overfits and leads to high test set errors. We analyze the reasons for overfitting and present a new solution based on parametric bootstrap to overcome this problem.
Neural Networks (IJCNN), The 2010 International Joint Conference on; 08/2010
ABSTRACT: The prediction of fill levels in stormwater tanks is an important practical problem in water resource management. In this study, state-of-the-art CI methods, i.e., Neural Networks (NN) and Genetic Programming (GP), are compared with respect to their applicability to this problem. The performance of both methods crucially depends on their parametrization. We compare different parameter tuning approaches, e.g., neuro-evolution and Sequential Parameter Optimization (SPO). In comparison to NN, GP yields superior results. By optimizing GP parameters, GP runtime can be significantly reduced without degrading result quality. The SPO-based parameter tuning leads to results with significantly lower standard deviation than the GA-based parameter tuning. Our methodology can be transferred to other optimization and simulation problems where complex models have to be tuned.
Evolutionary Computation (CEC), 2010 IEEE Congress on; 08/2010