Conference Paper

Architecture Performance Prediction Using Evolutionary Artificial Neural Networks.

DOI: 10.1007/978-3-540-78761-7_18 Conference: Applications of Evolutionary Computing, EvoWorkshops 2008: EvoCOMNET, EvoFIN, EvoHOT, EvoIASP, EvoMUSART, EvoNUM, EvoSTOC, and EvoTransLog, Naples, Italy, March 26-28, 2008. Proceedings
Source: DBLP

ABSTRACT The design of computer architectures requires setting multiple parameters on which the final performance depends. The
number of possible combinations makes for an extremely large search space. One way of setting such parameters is to simulate all the
architecture configurations using benchmarks. However, simulation is a slow solution, since evaluating a single point of the
search space can take hours. In this work we propose using artificial neural networks to predict the performance of configurations
instead of simulating all of them. A prior model proposed by Ypek et al. [1] uses a multilayer perceptron (MLP) and statistical
analysis of the search space to minimize the number of training samples needed. In this paper we use an evolutionary MLP and
random sampling of the space, which reduces the need to compute the performance of parameter settings in advance. Results
show high accuracy of the estimations and a simplification of the method used to select the configurations that must be simulated
to optimize the MLP.
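The idea in the abstract can be sketched in a few lines of NumPy: randomly sample configurations, evaluate them once with a (here, toy) simulator, and evolve the weights of a small MLP so that it predicts performance for unseen configurations. This is only an illustrative sketch under assumed details: the `simulate` function, the 4-parameter configuration encoding, and the simple (mu+lambda) evolution strategy are stand-ins of mine, not the authors' actual benchmark, parameter set, or evolutionary algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an architecture simulator: maps a 4-parameter
# configuration vector to a scalar "performance" score.
def simulate(cfgs):
    return np.sin(cfgs[:, 0]) + 0.5 * cfgs[:, 1] - 0.3 * cfgs[:, 2] * cfgs[:, 3]

HIDDEN = 8
DIM = 4
N_WEIGHTS = DIM * HIDDEN + HIDDEN + HIDDEN + 1  # W1, b1, W2, b2 flattened

def mlp_predict(w, X):
    # Unpack a flat weight vector into a 1-hidden-layer MLP and run it.
    W1 = w[: DIM * HIDDEN].reshape(DIM, HIDDEN)
    b1 = w[DIM * HIDDEN : DIM * HIDDEN + HIDDEN]
    W2 = w[DIM * HIDDEN + HIDDEN : DIM * HIDDEN + 2 * HIDDEN]
    b2 = w[-1]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(w, X, y):
    return float(np.mean((mlp_predict(w, X) - y) ** 2))

# Random sample of the configuration space (instead of exhaustive simulation).
X = rng.uniform(-1.0, 1.0, size=(64, DIM))
y = simulate(X)

# (mu + lambda) evolution strategy on the MLP weight vector, with elitism:
# the 5 best individuals always survive, so the best error never worsens.
pop = rng.normal(0.0, 0.5, size=(20, N_WEIGHTS))
init_err = min(mse(w, X, y) for w in pop)
for gen in range(200):
    fitness = np.array([mse(w, X, y) for w in pop])
    parents = pop[np.argsort(fitness)[:5]]                 # keep the 5 best
    children = np.repeat(parents, 4, axis=0)
    children += rng.normal(0.0, 0.05, size=children.shape)  # Gaussian mutation
    pop = np.vstack([parents, children])[:20]

best = min(pop, key=lambda w: mse(w, X, y))
```

Once trained, `mlp_predict(best, new_configs)` estimates the performance of configurations that were never simulated, which is the source of the speed-up the paper targets.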

  •
    ABSTRACT: Open Computing Language (OpenCL) is emerging as a standard for parallel programming of heterogeneous hardware accelerators. With respect to device-specific languages, OpenCL enables application portability but does not guarantee performance portability, possibly requiring additional tuning of the implementation to a specific platform or to unpredictable dynamic workloads. In this paper, we present a methodology to analyze the customization space of an OpenCL application in order to improve performance portability and to support dynamic adaptation. We formulate our case study by implementing an OpenCL image stereo-matching application (which computes the relative depth of objects from a pair of stereo images) customized to the STMicroelectronics Platform 2012 many-core computing fabric. In particular, we use design space exploration techniques to generate a set of operating points that represent specific configurations of the parameters allowing different trade-offs between performance and accuracy of the algorithm itself. These points give detailed knowledge about the interaction between the application parameters, the underlying architecture and the performance of the system; they could also be used by a run-time manager software layer to meet dynamic Quality-of-Service (QoS) constraints. To analyze the customization space, we use cycle-accurate simulations for the target architecture. Since the profiling phase of each configuration takes a long simulation time, we designed our methodology to reduce the overall number of simulations by exploiting some important features of the application parameters; our analysis also enables the identification of the parameters that could be explored on a high-level simulation model to reduce the simulation time. The resulting methodology is one order of magnitude more efficient than an exhaustive exploration and, given its randomized nature, it increases the probability of avoiding sub-optimal trade-offs.
    Proceedings of the eighth IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis; 10/2012
  • Source
    ABSTRACT: Deciding which hardware platform to use for a certain application is an important problem in computer architecture. This paper reports on a study in which a data-mining approach is used for this decision. It relies purely on source-code characteristics, avoiding potentially expensive program executions. One challenge in this context is that one cannot infer how often the functions that make up the application are typically executed. The main insight of this study is twofold: (a) source-code characteristics are sufficient nevertheless; (b) linking individual functions with the runtime behaviour of the program as a whole yields good predictions. In other words, while individual data objects from the training set may be quite inaccurate, the resulting model is not.
  • Source
    ABSTRACT: In continuous optimisation, Surrogate Models (SMs) are often indispensable components of optimisation algorithms aimed at tackling real-world problems whose candidate solutions are very expensive to evaluate. Because of the inherent spatial intuition behind these models, they are naturally suited to continuous problems but they do not seem applicable to combinatorial problems except for the special case when solutions are naturally encoded as integer vectors. In this paper, we show that SMs can be naturally generalised to encompass combinatorial spaces based in principle on any arbitrarily complex underlying solution representation by generalising their geometric interpretation from continuous to general metric spaces. As an initial illustrative example, we show how Radial Basis Function Networks (RBFNs) can be used successfully as surrogate models to optimise combinatorial problems defined on the Hamming space associated with binary strings.
    Evolutionary Computation in Combinatorial Optimization - 11th European Conference, EvoCOP 2011, Torino, Italy, April 27-29, 2011. Proceedings; 01/2011
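The last abstract's core idea, a radial basis function network used as a surrogate over the Hamming space of binary strings, can be sketched concretely. The following is a minimal illustration under my own assumed choices: the kernel phi(d) = exp(-d/sigma), the OneMax toy objective, and the tiny ridge term are not necessarily those of the cited paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def hamming(a, b):
    # Number of differing bits between two binary vectors.
    return int(np.sum(a != b))

def rbfn_fit(X, y, sigma=2.0, ridge=1e-8):
    # Each training string is a centre; phi(d) = exp(-d / sigma).
    # For 0/1 vectors the Hamming distance equals the squared Euclidean
    # distance, so this is a Gaussian kernel on the underlying vectors.
    D = np.array([[hamming(a, b) for b in X] for a in X], dtype=float)
    Phi = np.exp(-D / sigma)
    w = np.linalg.solve(Phi + ridge * np.eye(len(X)), y)
    return X, w, sigma

def rbfn_predict(model, x):
    centres, w, sigma = model
    d = np.array([hamming(x, c) for c in centres], dtype=float)
    return float(np.exp(-d / sigma) @ w)

# Toy "expensive" objective on 10-bit strings: OneMax (count of ones).
# Sample 30 distinct strings so the kernel matrix has distinct centres.
ints = rng.choice(1024, size=30, replace=False)
X = ((ints[:, None] >> np.arange(10)) & 1).astype(int)
y = X.sum(axis=1).astype(float)

model = rbfn_fit(X, y)
```

`rbfn_predict(model, x)` can then stand in for the expensive objective inside a search loop, with the surrogate interpolating the sampled points and guiding which strings to evaluate for real.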

Full-text (2 sources), available from May 23, 2014.