Stefan Wesner
Prof. Dr.-Ing.
University of Cologne | UOC · Parallel and Distributed Systems
About
123 Publications
19,622 Reads
1,279 Citations
Introduction
More than twenty years of experience in coordinating and participating in collaborative research projects in the fields of High Performance Computing, Cloud Computing, and the Cloud Continuum, as well as in network projects.
Additional affiliations
April 2013 - June 2022
BelWue
Position
- Scientific Chairman
Description
- BelWue connects the 9 universities and more than 40 universities of applied sciences in Baden-Württemberg to the Internet and delivers IT services for schools, such as eLearning system hosting
Publications (123)
Volume data these days is usually massive in terms of its topology, multiple fields, or temporal component. With the gap between compute and memory performance widening, the memory subsystem becomes the primary bottleneck for scientific volume visualization. Simple, structured, regular representations are often infeasible because the buses and inte...
Time series representation and discretisation methods are susceptible to scaling over massive data streams. A recent approach for transferring time series data to the realm of symbols under primitives, named shapeoids has emerged in the area of data mining and pattern recognition. A shapeoid will characterise a subset of the time series curve in wo...
Virtualisation first and cloud computing later has led to a consolidation of workload in data centres that also comprises latency-sensitive application domains such as High Performance Computing and telecommunication. These types of applications require strict latency guarantees to maintain their Quality of Service. In virtualised environments with...
Auto-scaling is able to change the scale of an application at runtime. Understanding the application characteristics, scaling impact as well as the workload, an auto-scaler aligns the acquired resources to match the current workload. For distributed Database Management Systems (DBMS) forming the backend of many large-scale cloud applications, it is...
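A minimal illustrative sketch of such a threshold-based scaling rule (the function name, thresholds, and node limits below are assumptions for illustration, not taken from the publication):

```python
# Illustrative threshold-based auto-scaling rule (assumed example, not the
# publication's algorithm): add or remove DBMS nodes based on the average
# CPU utilisation across the current cluster.
def scaling_decision(cpu_utilisations, min_nodes=3, max_nodes=12,
                     upper=0.75, lower=0.30):
    """Return the desired node count given per-node CPU utilisation (0..1)."""
    nodes = len(cpu_utilisations)
    avg = sum(cpu_utilisations) / nodes
    if avg > upper and nodes < max_nodes:
        return nodes + 1   # scale out: cluster is running hot
    if avg < lower and nodes > min_nodes:
        return nodes - 1   # scale in: capacity is underused
    return nodes           # keep the current scale

# Example: three nodes at 80%, 90% and 85% utilisation -> scale out to 4.
print(scaling_decision([0.80, 0.90, 0.85]))
```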
In this paper, we present a comprehensive stability analysis of statistical models derived from network usage data to design efficient and optimal resource management in a Cloud data centre. In recent years, it has been noticed that the network has a significant impact on HPC and business-critical applications when they are run in a cloud...
In this paper, we present a statistical model based VM placement approach for Cloud infrastructures. The model is motivated by the fact that more and more resource demanding applications are deployed in Cloud Infrastructures and in particular, communication data rate and latency bound applications are suffering from common placement algorithm...
The REliable CApacity Provisioning and enhanced remediation for distributed cloud applications (RECAP) project aims to advance cloud and edge computing technology, to develop mechanisms for reliable capacity provisioning, and to make application placement, infrastructure management, and capacity provisioning autonomous, predictable and optimized. T...
In this chapter, starting from key properties of flexible cloud-based infrastructures and their evolution over time, the well-established taxonomy of different Cloud models is introduced. Approaches to deal with changing demands, and how, for example, Infrastructure as a Service (IaaS) can react to them driven by rules, are illustrated and motivated with...
This chapter describes an approach for increasing the scalability of applications by exploiting inherent concurrency in order to parallelize and distribute the code. It focuses on concurrency in the sense of reduced dependencies between logical parts of an application. A promising approach consists in exploitation of concurrency rather than ‘automa...
Cloud computing promises to provide flexible IT solutions. This correlates with an increasing demand for flexibility in companies' business processes. However, there is still a huge gap between business and IT management. The evolution of cloud service models tries to bridge this by bringing up fine grained and multi-dimensional service mode...
IaaS Cloud systems enable the Cloud provider to overbook his data centre by selling more virtual resources than physical resources available. This approach works if on average the resource utilisation of a virtual machine is lower than the virtual machine boundaries. If this assumption is violated only locally, Cloud users will experience performan...
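The overbooking condition described above can be illustrated with a small, assumed CPU-only example (all numbers are hypothetical, not from the paper):

```python
# Hypothetical overbooking check (CPU only): selling more vCPUs than physical
# cores is safe on average if the expected utilisation of the sold vCPUs does
# not exceed the physical capacity.
physical_cores = 16
vcpus_sold = 40                    # overbooking factor of 2.5
avg_vm_utilisation = 0.35          # average fraction of each vCPU actually used

expected_load = vcpus_sold * avg_vm_utilisation   # 14 "core-equivalents"
print("feasible on average:", expected_load <= physical_cores)   # True
# Locally (e.g. a burst where many VMs peak at once) the assumption can still
# be violated, which is when users see degraded performance.
```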
I/O subsystem performance is becoming increasingly important for a wide range of applications. The demand can be met with a large memory capacity or fast local SSDs, but such solutions cause very high investment costs and are rather inflexible. Here, we investigate a multi-tiered approach, combining memory, local SSDs and InfiniBand-attached block...
Data Centre Design is a complex task due to the wide range of technological options and building blocks available. Deriving the data centre system architecture is done based on a couple of uncertain assumptions rather than known facts. The future workload of users or, more precisely, of their applications can only be predicted. Furthermore, the ben...
The modern Semantic Web scenarios require reasoning algorithms to be flexible, modular, and highly-configurable. A solid approach, followed in the design of the most currently existing reasoners, is not sufficient when dealing with today's challenges of data analysis across multiple sources of heterogeneous data or when the data amount grows to the...
High Performance Computing is becoming increasingly relevant for industry and academia. With the current development on the processor market, modern systems quickly grow in size, i.e. in number of cores, but only little in terms of performance, i.e. in actual execution speed. Reason for that is the increasing impact of the memory and communication...
In this paper we present our practical experiences during the implementation of our hierarchical scheduling model for unknown applications with fluctuating resource demands. This work is not about evaluation of our model, but it presents mechanisms which were enacted on several levels of our implementation in order to circumvent shortcomings of the...
The current cloud landscape is highly heterogeneous caused by a vast number of cloud offerings by different providers. This hinders the selection of a cloud provider and its divergent offerings based on the requirements of an application and ultimately its deployment. This paper introduces a model based execution-ware that helps coping with these c...
Many programs in computational quantum chemistry need a fast storage system capable of serving more than 10,000 I/O operations per second while also being large enough to store the temporary files created by these applications. A good solution which fulfills both requirements is a hybrid approach consisting of a large network storage and small but...
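A toy sketch of that hybrid tiering idea (the mount points and size cut-off below are hypothetical, not the paper's implementation):

```python
# Hypothetical two-tier placement rule: small, latency-sensitive temporary
# files go to the fast local SSD, large files go to the big network storage.
FAST_TIER = "/local/ssd/scratch"      # assumed mount point: small but fast
LARGE_TIER = "/mnt/network/scratch"   # assumed mount point: large but slower
SIZE_LIMIT = 512 * 1024 * 1024        # assumed cut-off per file: 512 MiB

def choose_tier(expected_size_bytes):
    """Pick a scratch directory based on the expected file size."""
    return FAST_TIER if expected_size_bytes <= SIZE_LIMIT else LARGE_TIER

# Example: a 100 MiB temporary file lands on the SSD, a 10 GiB one on the
# network storage tier.
print(choose_tier(100 * 1024**2))
print(choose_tier(10 * 1024**3))
```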
The first part of the paper discusses limitations of current Cloud offerings to efficiently support performance critical applications. A technical simulation from quantum chemistry is used as guiding example. The focus is on I/O performance being the major bottleneck for this kind of application and virtualisation in this area is much less deve...
The vision of the CACTOS project is focused on cloud topology optimization. This is realised by providing new types of data centre optimization and simulation mechanisms. CACTOS is holistic and aims to support both design-time and run-time optimization, covers data acquisition and application profiling as well as infrastructure management. One o...
The modern Semantic Web scenarios require reasoning algorithms to be flexible, modular, and highly-configurable. A solid approach, followed in the design of the most currently existing reasoners, is not sufficient when dealing with today's challenges of data analysis across multiple sources of heterogeneous data or when the data amount grows to the...
Versatility and scalability are the major factors for meeting the requirements of a flexible production of the 21st century. A versatile production can only be realized if the machine control infrastructure is also versatile and scalable - current machine controls are not. Limitations in areas like e.g. reconfiguration ability, security and computa...
Summary: RegaDB is a free and open source data management and analysis environment for infectious diseases. RegaDB allows clinicians to store, manage and analyse patient data, including viral genetic sequences. Moreover, RegaDB provides researchers with a mechanism to collect data in a uniform format and offers them a canvas to make newly developed...
Students of psychology as a minor subject often face the problem that they cannot attend psychology lectures as they coincide with courses in their major field of studies. A solution for these students might be lecture-recordings. In two field experiments, students' learning outcome in live-lectures and low-effort lecture-recordings did not differ....
RegaDB is a free and open source data management and analysis environment for infectious diseases. RegaDB allows clinicians to store, manage and analyze patient data, including viral genetic sequences. Moreover, RegaDB provides researchers with a mechanism to collect data in a uniform format and offers them a canvas to make newly developed bioinfor...
Clouds have become the modern concept of utility computing – not only over the web, but in general. As such, they are the seeming solution for all kind of computing and storage problems, ranging from simple database servers to high performance computing. However, clouds have specific characteristics and hence design specifics which impact on the ca...
Reasoning is one of the essential application areas of the modern Semantic Web. Nowadays, the semantic reasoning algorithms are facing significant challenges when dealing with the emergence of the Internet-scale knowledge bases, comprising extremely large amounts of data. The traditional reasoning approaches have only been approved for small, close...
Cloud computing has been gaining popularity for quite some time in various areas, on the infrastructure, platform and application level. Recently, the possibility to provide high performance computing (HPC) as a service has been investigated in conjunction with the cloud computing paradigm. While this is a viable solution for applications that do n...
In this paper we present an overview of the CoolEmAll project which addresses the important problem of data center energy efficiency. To this end, CoolEmAll aims at delivering advanced simulation, visualization and decision support tools along with open models of data center building blocks to be used in simulations. Both building blocks and the to...
The increasing complexity of current and future very large computing systems with a rapidly growing number of cores and nodes requires high human effort on administration and maintenance of these systems. Existing monitoring tools are neither scalable nor capable of reducing the overwhelming flow of information and providing only essential informatio...
In this paper we present objectives and expected results of the CoolEmAll project that aims at improving energy-efficiency of data centers. To achieve this goal, CoolEmAll works on data center simulation and visualization tools (SVD Toolkit) and models of data center efficiency building blocks (DEBBs) that will be used by the SVD Toolkit to facilit...
We demonstrate the OPTIMIS toolkit for scalable and dependable service platforms and architectures that enable flexible and dynamic provisioning of Cloud services. The innovations demonstrated are aimed at optimizing Cloud services and infrastructures based on aspects such as trust, risk, eco-efficiency, cost, performance and legal constraints. Ada...
With the growing amount of computational resources available, not only locally (e.g. multicore processors), but also across the Internet, utility computing (aka Clouds and Grids) becomes more and more interesting as a means to outsource applications and services, respectively. So far, these systems still act like external resources / devices that h...
The concept of Service Level Agreements (SLAs) has come to attention particularly in conjunction with Grid computing. SLAs allow for a controlled collaboration between partners, that is service providers and their customers. SLAs have originated in the telecommunication industry but have found broad uptake in the Grid computing community, especiall...
Introduction · Concepts of a Cloud Mashup · Realizing Resource Mashups · Conclusions · References
The vision of the recently started GAMES European Research project is a new generation of energy efficient IT Service Centres, designed taking into account both the characteristics of the applications running in the centre and context-aware adaptivity features that can be enabled both at the application level and within the IT and utility infras-...
A key element for outsourcing critical parts of a business process in Service Oriented Architectures are Service Level Agreements (SLAs). They build the key element to move from solely trust-based towards controlled cross-organizational collaboration. While originating from the domain of telecommunications the SLA concept has gained particular at...
With the growing amount of computational resources not only locally (multi-core), but also across the web, utility computing (aka Clouds and Grids) becomes more and more interesting as a means to outsource management and services. So far, these machines still act like external resources that have to be explicitly selected, integrated, accessed etc....
The trend of merging telecommunication infrastructures with traditional Information Technology (IT) infrastructures is ongoing and important for commercial service providers. The driver behind this development is, on one hand, the strong need for enhanced services and on the other hand, the need of telecommunication operators aiming at value-added...
The advent of specialised processors and the movement towards multi-core processors lead to an increased complexity for the application developers as well as for computing centres aiming to deliver computing resources for a wider community. While the increasing variety of different computing architectures is typically seen as a challenge, this arti...
short paper / presentation
short paper / presentation
Service-Oriented Infrastructures including Grid and Cloud Computing are technologies in a critical transition to wider adoption by business. Their use may enable enterprises to achieve optimal IT utilization, including sharing resources and services across enterprises and on-demand utilization of those made available by business partners over the n...
Moving away from simple data sharing within the science community towards cross-organizational collaboration scenarios significantly increased challenges related to security and privacy. They need to be addressed in order to make cross-organizational applications such as collaborative working environments a business proposition within communities s...
Flooding is a wide spread and devastating natural disaster worldwide. Floods that took place in the last decade in China were ranked the worst amongst recorded floods worldwide in terms of the number of human fatalities and economic losses (Munich Re-Insurance). Rapid economic development and population expansion into low lying flood plains has wor...
Cloud Computing provides a solution for remote hosting of applications and processes in a scalable and managed environment. With the increasing number of cores in a single processor and better network performance, provisioning on platform level becomes less of an issue for future machines and thus for future business environments. Instead, it will...
Over recent years, resource provisioning over the Internet has moved from Grid to Cloud computing. Whilst the capabilities and the ease of use have increased, uptake is still comparatively slow, in particular in the commercial context. This paper discusses a novel resource provisioning concept called Service-oriented Operating Systems and how it di...
Most people use more than one computing system for their daily work: an office computer, a corporate laptop for travel, and a private desktop computer. These machines not only differ in their power and resources but also in their environment, including deployed applications, available files, and so on. The current trend is leading to an even greate...
This thesis describes a Service Level Agreement based model for dynamic virtual organisations and a corresponding management framework for service providers, making them able to fulfill such SLAs. The proposed framework is realised as a hierarchical model starting from low level management close to the hardware and network primitives necessary to real...
A wide range of mechanisms for providing context information and changes are available. In this position paper the authors outline potential collaborative application scenarios from three different EC research projects and how Grids can support the realization of adaptive and context aware collaborations using Grid concepts.
Current semantic Web reasoning systems do not scale to the requirements of their hottest applications, such as analyzing data from millions of mobile devices, dealing with terabytes of scientific data, and content management in enterprises with thousands of knowledge workers. In this paper, we present our plan of building the large knowledge collid...
The Internet has become a powerful means of communication and interaction and various research projects have shown its potential to revolutionize business models and means of cooperation. Only recently, development has made significant progress in catching up with research and a series of products have been exposed to the market which may well repr...
The provision of value added IT services is clearly a key element of the strategy of all telecommunications operators. Leading innovation in the service domain can become critical for the future success and growth of telecom operators. In this article we discuss a subset of European research projects and how they contribute to the innovation strat...
The Internet has become a powerful means of communication and interaction and various research projects have shown its potential to revolutionize business models and means of cooperation. Only recently, development has made significant progress in catching up with research and a series of products have been exposed to the market which may well repr...
As the line that separates old circuit-switched telephony, the IP packet-switched Internet, and mobility (driven by the success of 2G cellular phones) blurs more and more, the existing value chains are experiencing a deep recombination due to the clash of different interests. From the resultant technological melee, th...
The use of wireless networking technologies has emerged over recent years in many application domains. The area of grids determines a potentially huge application domain, since the typical centralized computing centers require access from anywhere, e.g., from field engineers who are situated in a wireless network domain. Thus, the integration of su...
Modern day approaches towards realising Service Level Agreement (SLA) specifications generally disrespect essential privacy and confidentiality issues, in particular with respect to exposing information about the infrastructure and how quality related data is gathered and processed. Such information is neither helpful for the customer, nor does it...
The rise in practical Virtual Organisations (VOs) requires secure access to data and interactions between their partners. Ad hoc solutions to meet these requirements are possible, but Web services hold out the potential for generic security solutions whose cost can be spread across several short lived dynamic VOs. This paper identifies trust and se...
"Virtual Organizations" belong to the key concepts in the Grid computing community. They are currently evolving from basically static to dynamic solutions that are created ad-hoc in reaction to a market demand. This paper provides a definition of "dynamic Virtual Organizations" in order to assess specific challenges of an abstract collaborative...
An important part of a research project is the exploitation of the achieved results and the fostering of innovation processes within the involved industrial partners. Typically, the exploitation activities are started late in the execution of a project. In this paper, an approach that starts with preparatory action for exploitation from the first d...
Mobility is becoming a central aspect of everyday life and must not be ignored by the ongoing efforts in defining the Next Generation Grid architectures. Currently existing network agnostic Grid middleware solutions are duplicating functionality available from lower layers and cannot benefit from a richer set of additional information available suc...
In this paper we describe the demonstration of selected security & contract management capabilities of an early prototype infrastructure enabling dynamic Virtual Organisations for the purpose of providing virtualised services and resources that can be offered on-demand, following a utility computing model, and integrated into aggregated services wh...
The current understanding of Virtual Organisations within the Grid community is driven by the needs of the High Performance Computing and Research community. For this reason the VO models have been historically very static, as the nature of the collaboration in this environment is fixed or at least only rarely changing. Another typical assumption o...