Conference Proceeding

Evaluation of Adaptive Computing Concepts for Classical ERP Systems and Enterprise Services

Technische Universität München
02/2006; DOI: 10.1109/CEC-EEE.2006.45; ISBN: 0-7695-2511-3
In proceedings of: The 8th IEEE International Conference on E-Commerce Technology and the 3rd IEEE International Conference on Enterprise Computing, E-Commerce, and E-Services, 2006
Source: DBLP

ABSTRACT To ensure the operability and reliability of large-scale enterprise resource planning (ERP) systems, peak-load-oriented hardware sizing is often used. Better utilization can be achieved by employing an adaptive infrastructure based on smaller computational units in combination with intelligent allocation management. The SAP University Competence Center (German: SAP HCC) at the Technische Universität München provides support for 55 ERP training systems. An evaluation of the historical load data revealed that many applications exhibit cyclical resource consumption. In this paper we show the extraction of load patterns and present self-organizing controlling concepts in the context of the SAP HCC.
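The cyclical resource consumption mentioned in the abstract can be detected with standard time-series tools. A minimal sketch, assuming an hourly CPU utilization trace and using autocorrelation to find the dominant cycle length (the function names and the synthetic trace are illustrative, not from the paper):

```python
def autocorr(series, lag):
    """Autocorrelation of a series with a lagged copy of itself."""
    n = len(series) - lag
    mean = sum(series) / len(series)
    num = sum((series[i] - mean) * (series[i + lag] - mean) for i in range(n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den if den else 0.0

def dominant_period(series, max_lag):
    """Return the lag in [1, max_lag] with the strongest autocorrelation."""
    return max(range(1, max_lag + 1), key=lambda lag: autocorr(series, lag))

# Synthetic hourly trace over two weeks: high load during business hours.
trace = [90 if 8 <= h % 24 < 18 else 50 for h in range(24 * 14)]
print(dominant_period(trace, 48))  # -> 24, i.e. a daily cycle
```

A trace with a clear business-hours rhythm yields a dominant lag of 24 hours; such a period is what makes pattern-based allocation of smaller computational units feasible.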

  • Source
    ABSTRACT: Advances in virtualization technology are enabling the creation of resource pools of servers that permit multiple application workloads to share each server in the pool. Understanding the nature of enterprise workloads is crucial to properly designing and provisioning current and future services in such pools. This paper considers issues of workload analysis, performance modeling, and capacity planning. Our goal is to automate the efficient use of resource pools when hosting large numbers of enterprise services. We use a trace-based approach for capacity management that relies on i) the characterization of workload demand patterns, ii) the generation of synthetic workloads that predict future demands based on the patterns, and iii) a workload placement recommendation service. The accuracy of capacity planning predictions depends on our ability to characterize workload demand patterns, to recognize trends for expected changes in future demands, and to reflect business forecasts for otherwise unexpected changes in future demands. A workload analysis demonstrates the burstiness and repetitive nature of enterprise workloads. Workloads are automatically classified according to their periodic behavior. The similarity among repeated occurrences of patterns is evaluated. Synthetic workloads are generated from the patterns in a manner that maintains the periodic nature, burstiness, and trending behavior of the workloads. A case study involving six months of data for 139 enterprise applications is used to apply and evaluate the enterprise workload analysis and related capacity planning methods. The results show that when consolidating to 8-processor systems, we predicted future per-server required capacity to within one processor 95% of the time. The accuracy of predictions for required capacity suggests that such resource savings can be achieved with little risk.
    Workload Characterization, 2007. IISWC 2007. IEEE 10th International Symposium on; 10/2007
  • Source
    ABSTRACT: The consolidation of multiple servers and their workloads aims to minimize the number of servers needed, thereby enabling the efficient use of server and power resources. At the same time, applications participating in consolidation scenarios often have specific quality-of-service requirements that need to be supported. To evaluate which workloads can be consolidated to which servers, we employ a trace-based approach that determines a near-optimal workload placement that provides specific qualities of service. However, the chosen workload placement is based on past demands that may not perfectly predict future demands. To further improve efficiency and application quality of service, we apply the trace-based technique repeatedly, as a workload placement controller. We integrate the workload placement controller with a reactive controller that observes current behavior to i) migrate workloads off overloaded servers and ii) free and shut down lightly loaded servers. To evaluate the effectiveness of the approach, we developed a new host load emulation environment that simulates different management policies in a time-effective manner. A case study involving three months of data for 138 SAP applications compares our integrated controller approach with the use of each controller separately. The study considers trade-offs between i) required capacity and power usage, ii) resource access quality of service for CPU and memory resources, and iii) the number of migrations. We consider two typical enterprise environments: blade-based and server-based resource pool infrastructures. The results show that the integrated controller approach outperforms the use of either controller separately for the enterprise application workloads in our study. We show the influence of the blade and server pool infrastructures on the effectiveness of the management policies.
    Dependable Systems and Networks With FTCS and DCC, 2008. DSN 2008. IEEE International Conference on; 07/2008
  • Source
    ABSTRACT: The advent of Grid computing gives enterprises an ever increasing choice of computing options, yet research has so far hardly addressed the problem of mixing the different computing options in a cost-minimal fashion. The following paper presents a comprehensive cost model and a mixed integer optimization model which can be used to minimize the IT expenditures of an enterprise and help in decision-making when to outsource certain business software applications. A sample scenario is analyzed and promising cost savings are demonstrated. Possible applications of the model to future research questions are outlined.
    Grid Economics and Business Models, 6th International Workshop, GECON 2009, Delft, The Netherlands, August 24, 2009. Proceedings; 01/2009
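The last entry's sourcing decision can be illustrated as a tiny binary optimization: each application runs either in-house or on an external grid provider, and the total cost is minimized under a fixed in-house capacity. A minimal brute-force sketch, assuming invented per-application costs and unit demands (the paper's actual model is a richer mixed integer program):

```python
from itertools import product

def min_cost_plan(inhouse_cost, grid_cost, demand, capacity):
    """Brute-force the cheapest feasible assignment (1 = outsource to grid)."""
    n = len(demand)
    best = None
    for plan in product((0, 1), repeat=n):
        used = sum(d for d, out in zip(demand, plan) if not out)
        if used > capacity:
            continue  # infeasible: in-house servers would be overloaded
        cost = sum(grid_cost[i] if out else inhouse_cost[i]
                   for i, out in enumerate(plan))
        if best is None or cost < best[0]:
            best = (cost, plan)
    return best

# Three applications, room for only two of them in-house.
print(min_cost_plan([3, 2, 4], [5, 6, 4], demand=[1, 1, 1], capacity=2))
# -> (9, (0, 0, 1)): keep the first two in-house, outsource the third
```

Brute force is only viable for a handful of applications; at realistic scale the same decision is posed to a mixed integer solver, as the cited paper does.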

