Patrick Koch

Cologne University of Applied Sciences, Köln, North Rhine-Westphalia, Germany

Publications (25) · 3.85 Total impact

  • ABSTRACT: Recent research revealed that model-assisted parameter tuning can improve the quality of supervised machine learning (ML) models. The tuned models were especially found to generalize better and to be more robust compared to other optimization approaches. However, the advantages of tuning often came along with high computation times, posing a real burden for employing tuning algorithms. While training with a reduced number of patterns can be a solution to this, it often comes with decreasing model accuracy and increasing instability and noise. Hence, we propose a novel approach defined by a two-criteria optimization task, where both the runtime and the quality of ML models are optimized. Because the budgets for this optimization task are usually very restricted in ML, the surrogate-assisted Efficient Global Optimization (EGO) algorithm is adapted. In order to cope with noisy experiments, we apply two hypervolume-indicator-based EGO algorithms with smoothing and re-interpolation of the surrogate models. The techniques do not need replicates. We find that these EGO techniques can outperform traditional approaches such as Latin hypercube sampling (LHS), as well as EGO variants with replicates.
    Applied Soft Computing 04/2015; DOI:10.1016/j.asoc.2015.01.005 · 2.68 Impact Factor
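
    The core of such surrogate-assisted tuning is a loop that fits a Gaussian process to the configurations evaluated so far and picks the next point by an infill criterion. Below is a minimal single-objective EGO sketch with expected improvement on a made-up noisy function; the paper's method extends this idea to two objectives (runtime and model quality) with a hypervolume-based criterion, which is not reproduced here.

      # Minimal single-objective EGO loop on a noisy toy function.
      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import Matern

      def noisy_objective(x):
          # hypothetical stand-in for an expensive, noisy tuning experiment
          return np.sin(3 * x) + x ** 2 + 0.1 * np.random.randn()

      rng = np.random.default_rng(0)
      X = rng.uniform(-2, 2, size=(5, 1))     # initial design, e.g. from LHS
      y = np.array([noisy_objective(x[0]) for x in X])

      for _ in range(20):
          # alpha > 0 lets the GP smooth over noise instead of interpolating it
          gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-2,
                                        normalize_y=True).fit(X, y)
          cand = np.linspace(-2, 2, 400).reshape(-1, 1)
          mu, sd = gp.predict(cand, return_std=True)
          z = (y.min() - mu) / np.maximum(sd, 1e-9)
          ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
          x_next = cand[np.argmax(ei)]
          X = np.vstack([X, x_next])
          y = np.append(y, noisy_objective(x_next[0]))

      print("best observed value:", y.min())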
  • IEEE Transactions on Computational Intelligence and AI in Games 01/2015; (accepted 11/2014):1. DOI:10.1109/TCIAIG.2014.2367105 · 1.17 Impact Factor
  • Patrick Koch, Wolfgang Konen
    ABSTRACT: Computational Intelligence (CI) provides good and robust working solutions for global optimization. CI is especially suited for solving difficult tasks in parameter optimization when the fitness function is noisy. Such situations and fitness landscapes frequently arise in real-world applications like Data Mining (DM). Unfortunately, parameter tuning in DM is computationally expensive, and CI-based methods often require many function evaluations until they finally converge to good solutions. Earlier studies have shown that surrogate models can lead to a decrease in real function evaluations. However, each function evaluation remains time-consuming. In this paper we investigate if and how the fitness landscape of the parameter space changes when fewer observations are used for the model training during tuning. A representative study on seven DM tasks shows that the results are nevertheless competitive. On all these tasks, a fraction of 10-15% of the training data is sufficient. With this, the computation time can be reduced by a factor of 6-10.
    PPSN'2012: 12th International Conference on Parallel Problem Solving From Nature; 09/2012
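
    The central idea, evaluating a tuning candidate on a small fraction of the training data, can be sketched as follows; the dataset, model, and parameter names are illustrative stand-ins, not the seven DM tasks from the study.

      # Fitness of a parameter configuration, estimated on a 15% subsample:
      # noisier than a full evaluation, but far cheaper to compute.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

      def tuning_fitness(params, frac=0.15, seed=0):
          rng = np.random.default_rng(seed)
          idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
          model = RandomForestClassifier(**params, random_state=0)
          return cross_val_score(model, X[idx], y[idx], cv=3).mean()

      print(tuning_fitness({"n_estimators": 100, "max_depth": 8}))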
  • Markus Thill, Patrick Koch, Wolfgang Konen
    ABSTRACT: Learning complex game functions is still a difficult task. We apply temporal difference learning (TDL), a well-known variant of the reinforcement learning approach, in combination with n-tuple networks to the game Connect-4. Our agent is trained just by self-play. It is able, for the first time, to consistently beat the optimal-playing Minimax agent (in game situations where a win is possible). The n-tuple network induces a mighty feature space: It is not necessary to design certain features, but the agent learns to select the right ones. We believe that the n-tuple network is an important ingredient for the overall success and identify several aspects that are relevant for achieving high-quality results. The architecture is sufficiently general to be applied to similar reinforcement learning tasks as well.
    PPSN'2012: 12th International Conference on Parallel Problem Solving From Nature; 09/2012
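
    How an n-tuple network combines with temporal difference learning can be sketched as follows: each n-tuple addresses one entry of its lookup table, the board value is the sum of the addressed weights, and the TD error is credited to all active weights. Board encoding and tuple layout here are illustrative, not the paper's Connect-4 setup.

      import numpy as np

      N_CELLS, N_VALUES = 42, 3    # Connect-4: 42 cells in {empty, yellow, red}
      tuples = [tuple(range(i, i + 4)) for i in range(0, 40, 4)]  # toy 4-tuples
      luts = [np.zeros(N_VALUES ** 4) for _ in tuples]            # one LUT per tuple

      def features(board):
          # index of the addressed lookup-table entry, one per n-tuple
          return [sum(board[c] * N_VALUES ** k for k, c in enumerate(t))
                  for t in tuples]

      def value(board):
          return sum(lut[i] for lut, i in zip(luts, features(board)))

      def td_update(board, next_board, reward, alpha=0.01, gamma=1.0):
          delta = reward + gamma * value(next_board) - value(board)  # TD error
          for lut, i in zip(luts, features(board)):
              lut[i] += alpha * delta      # credit every active weight

      board = np.zeros(N_CELLS, dtype=int)
      nxt = board.copy(); nxt[0] = 1       # one illustrative move
      td_update(board, nxt, reward=0.0)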
  • ABSTRACT: Kernel-based methods like Support Vector Machines (SVM) have been established as powerful techniques in machine learning. The idea of SVM is to perform a mapping from the input space to a higher-dimensional feature space using a kernel function, so that a linear learning algorithm can be employed. However, the burden of choosing the appropriate kernel function is usually left to the user. It can easily be shown that the accuracy of the learned model highly depends on the chosen kernel function and its parameters, especially for complex tasks. In order to obtain a good classification or regression model, an appropriate kernel function in combination with optimized pre- and post-processed data must be used. To circumvent these obstacles, we present two solutions for optimizing kernel functions: (a) automated hyperparameter tuning of kernel functions combined with an optimization of pre- and post-processing options by Sequential Parameter Optimization (SPO) and (b) evolving new kernel functions by Genetic Programming (GP). We review modern techniques for both approaches, comparing their different strengths and weaknesses. We apply tuning to SVM kernels for both regression and classification. Automatic hyperparameter tuning of standard kernels and pre- and post-processing options always yielded systems with excellent prediction accuracy on the considered problems. SPO-tuned kernels in particular led to much better results than all other tested tuning approaches. Regarding GP-based kernel evolution, our method rediscovered multiple standard kernels, but no significant improvements over standard kernels were obtained.
    Evolutionary Intelligence 09/2012; 5:153-170. DOI:10.1007/s12065-012-0073-8
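
    Hyperparameter tuning of a standard kernel fits in a few lines; the paper uses Sequential Parameter Optimization (SPO) rather than the plain random search sketched below, but the tuned quantities (here C and gamma of an RBF kernel) are the same.

      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      rng = np.random.default_rng(0)

      best_score, best_params = -np.inf, None
      for _ in range(30):
          # log-uniform samples, a common search distribution for C and gamma
          C, gamma = 10 ** rng.uniform(-2, 3), 10 ** rng.uniform(-5, 1)
          score = cross_val_score(SVC(C=C, gamma=gamma, kernel="rbf"),
                                  X, y, cv=5).mean()
          if score > best_score:
              best_score, best_params = score, (C, gamma)

      print("best CV accuracy %.3f at (C, gamma) = %s" % (best_score, best_params))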
  • 06/2011;
  • Wolfgang Konen, Patrick Koch
    ABSTRACT: Slow feature analysis (SFA) is a bioinspired method for extracting slowly varying driving forces from quickly varying non-stationary time series. We show here that it is possible for SFA to detect a component which is even slower than the driving force itself (e.g., the envelope of a modulated sine wave). It depends on circumstances like the embedding dimension, the time series predictability, or the base frequency whether the driving force itself or a slower subcomponent is detected. Interestingly, we observe a swift phase transition from one regime to another, and it is the objective of this work to quantify the influence of various parameters on this phase transition. We conclude that what is perceived as slow by SFA varies and that a more or less fast switching from one regime to another occurs, perhaps showing some similarity to human perception.
    International Journal of Innovative Computing and Applications 01/2011; 3(3):3-10. DOI:10.1504/IJICA.2011.037946
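
    A minimal linear SFA, assuming the standard formulation: center and whiten the signal, then take the whitened direction whose time derivative has the smallest variance. The toy signal below mixes a slow and a fast sine; which component SFA perceives as slow in harder, non-stationary settings is exactly what the paper studies.

      import numpy as np

      def linear_sfa(X):
          X = X - X.mean(axis=0)
          d, E = np.linalg.eigh(np.cov(X, rowvar=False))
          Z = X @ (E / np.sqrt(d))     # whiten: unit variance, no correlations
          d2, E2 = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
          return Z @ E2                # columns ordered slowest to fastest

      t = np.linspace(0, 50, 5000)
      slow, fast = np.sin(0.2 * t), np.sin(5.0 * t)
      X = np.column_stack([slow + 0.3 * fast, fast - 0.2 * slow])
      Y = linear_sfa(X)
      print("correlation of slowest SFA output with the slow driver:",
            abs(np.corrcoef(Y[:, 0], slow)[0, 1]))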
  • ABSTRACT: Sequential parameter optimization (SPO) is a heuristic that combines classical and modern statistical techniques to improve the performance of search algorithms. In this study, SPO is directly used as an optimization method on different noisy mathematical test functions. SPO includes a broad variety of meta models, which can have significant impact on its performance. Additionally, we present Optimal Computing Budget Allocation (OCBA), an enhanced method for handling the computational budget spent on selecting new design points. The OCBA approach can intelligently determine the most efficient replication numbers. Moreover, we study the performance of different meta models being integrated in SPO. Our results reveal that the incorporation of OCBA and the selection of Gaussian process models are highly beneficial. SPO outperformed three different alternative optimization algorithms on a set of five noisy mathematical test functions.
    Proceedings of the 13th annual conference companion on Genetic and evolutionary computation; 01/2011
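
    One OCBA allocation step can be sketched as follows, using the standard OCBA ratios: given mean and standard-deviation estimates per design point and a replication budget, designs that are close to the current best and noisy receive more replications. The numbers are illustrative.

      import numpy as np

      def ocba_allocation(means, stds, budget):
          means, stds = np.asarray(means, float), np.asarray(stds, float)
          b = np.argmin(means)                 # current best design (minimization)
          delta = means - means[b]
          mask = np.arange(len(means)) != b
          ref = np.flatnonzero(mask)[0]        # reference non-best design
          ratio = np.ones(len(means))
          ratio[mask] = (stds[mask] / delta[mask]) ** 2 \
                        / (stds[ref] / delta[ref]) ** 2
          ratio[b] = stds[b] * np.sqrt(np.sum((ratio[mask] / stds[mask]) ** 2))
          return np.round(budget * ratio / ratio.sum()).astype(int)

      print(ocba_allocation(means=[1.0, 1.2, 2.0], stds=[0.5, 0.4, 0.6], budget=30))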
  • ABSTRACT: The complex, often redundant and noisy data in real-world data mining (DM) applications frequently lead to inferior results when out-of-the-box DM models are applied. A tuning of parameters is essential to achieve high-quality results. In this work we aim at tuning parameters of the preprocessing and the modeling phase conjointly. The framework TDM (Tuned Data Mining) was developed to facilitate the search for good parameters and the comparison of different tuners. It is shown that tuning is of great importance for high-quality results. Surrogate-model-based tuning utilizing the Sequential Parameter Optimization Toolbox (SPOT) is compared with other tuners (CMA-ES, BFGS, LHD), and evidence is found that SPOT is well suited for this task. In benchmark tasks like the Data Mining Cup (DMC), tuned models achieve remarkably better ranks than their untuned counterparts.
    13th Annual Genetic and Evolutionary Computation Conference, GECCO 2011, Proceedings, Dublin, Ireland, July 12-16, 2011; 01/2011
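
    Conjoint tuning reduces to a single fitness function over one parameter vector spanning both phases; the preprocessing step, model, and parameter names below are illustrative, not the TDM framework's actual operators.

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import Pipeline

      X, y = make_classification(n_samples=500, n_features=30, random_state=0)

      def fitness(n_components, n_estimators, max_depth):
          # preprocessing and model parameters are tuned as one vector
          pipe = Pipeline([("pre", PCA(n_components=n_components)),
                           ("model", RandomForestClassifier(
                               n_estimators=n_estimators, max_depth=max_depth,
                               random_state=0))])
          return cross_val_score(pipe, X, y, cv=3).mean()

      # a tuner (SPOT, CMA-ES, ...) would propose these values; fixed here:
      print(fitness(n_components=10, n_estimators=50, max_depth=6))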
  • Proceedings 21. Workshop Computational Intelligence; 01/2011
  • Conference Paper: Tuned Data Mining in R
    Proceedings 21. Workshop Computational Intelligence; 01/2011
  • P. Koch, W. Konen, K. Hein
    ABSTRACT: Slow Feature Analysis (SFA) has been established as a robust and versatile technique from the neurosciences to learn slowly varying functions from quickly changing signals. Recently, the method has also been applied to classification tasks. Here we apply SFA for the first time to a time series classification problem originating from gesture recognition. The gestures used in our experiments are based on acceleration signals of the Bluetooth Wiimote controller (Nintendo). We show that SFA achieves results comparable to the well-known Random Forest predictor in shorter computation time, given a sufficient number of training patterns. However - and this is a novelty in SFA classification - we discovered that SFA requires the number of training patterns to be strictly greater than the dimension of the nonlinear function space. If too few patterns are available, we find that the model constructed by SFA severely overfits and leads to high test set errors. We analyze the reasons for overfitting and present a new solution based on parametric bootstrap to overcome this problem.
    The 2010 International Joint Conference on Neural Networks (IJCNN); 08/2010
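
    The proposed remedy can be sketched under the simplifying assumption of a Gaussian model per class: fit mean and covariance to the few real patterns and draw synthetic patterns from that fit until the training set exceeds the dimension of the function space.

      import numpy as np

      def parametric_bootstrap(X, n_new, rng):
          mu = X.mean(axis=0)
          cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularize
          return rng.multivariate_normal(mu, cov, size=n_new)

      rng = np.random.default_rng(0)
      X_class = rng.normal(size=(30, 8))   # few real patterns, 8 features
      X_aug = np.vstack([X_class, parametric_bootstrap(X_class, 200, rng)])
      print(X_aug.shape)                   # (230, 8): enough patterns to train on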
  • ABSTRACT: The prediction of fill levels in stormwater tanks is an important practical problem in water resource management. In this study, state-of-the-art CI methods, i.e., Neural Networks (NN) and Genetic Programming (GP), are compared with respect to their applicability to this problem. The performance of both methods crucially depends on their parametrization. We compare different parameter tuning approaches, e.g. neuro-evolution and Sequential Parameter Optimization (SPO). In comparison to NN, GP yields superior results. By optimizing GP parameters, GP runtime can be significantly reduced without degrading result quality. The SPO-based parameter tuning leads to results with significantly lower standard deviation than the GA-based parameter tuning. Our methodology can be transferred to other optimization and simulation problems where complex models have to be tuned.
    2010 IEEE Congress on Evolutionary Computation (CEC); 08/2010
  • 02/2010;
  • 02/2010;
  • Proceedings 20. Workshop Computational Intelligence; 01/2010
  • Proceedings 20. Workshop Computational Intelligence; 01/2010
  • Proceedings 20. Workshop Computational Intelligence; 01/2010