Article

Abstract

This paper introduces a platform to support serverless computing for scalable event-driven data processing that features a multi-level elasticity approach combined with virtualization of GPUs. The platform supports the execution of applications based on Docker containers in response to file uploads to a data storage in order to perform the data processing in parallel. This is managed by an elastic Kubernetes cluster whose size automatically grows and shrinks depending on the number of files to be processed. To accelerate the processing time of each file, several approaches involving virtualized access to GPUs, either locally or remote, have been evaluated. A use case that involves the inference based on deep learning techniques on transthoracic echocardiography imaging has been carried out to assess the benefits and limitations of the platform. The results indicate that the combination of serverless computing and GPU virtualization introduce an efficient and cost-effective event-driven accelerated computing approach that can be applied for a wide variety of scientific applications.
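As an illustration of the event-driven pattern described above, the following sketch shows a handler invoked once per uploaded file that runs GPU-accelerated inference inside a container; the event layout and helper names are hypothetical and do not correspond to the platform's actual API.

```python
# Hypothetical sketch of file-upload-triggered processing: one invocation per
# uploaded file, with the deep learning step packaged inside the Docker image.
import json
import os

def run_inference(model_path: str, input_file: str) -> dict:
    # Placeholder for the GPU-accelerated deep learning step (e.g. scoring an
    # echocardiography image with a previously trained model).
    return {"file": input_file, "model": model_path, "prediction": "stub"}

def handler(event, context=None):
    """Entry point triggered by a file-upload event (assumed event layout)."""
    input_file = event["Records"][0]["key"]          # path of the uploaded file
    model_path = os.environ.get("MODEL_PATH", "/opt/model")  # baked into the image
    result = run_inference(model_path, input_file)
    return {"statusCode": 200, "body": json.dumps(result)}
```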


... Used strategy: Kim et al. [151] (GPU support): integrate NVIDIA-Docker and support GPU-based containers; Naranjo et al. [193] (GPU support): use GPU virtualization; Ringlein et al. [212] (FPGA support): design a platform architecture with disaggregated FPGAs; Bacis et al. [76] (FPGA support): monitor and time-share FPGAs ...
... Heterogeneous accelerator support. Besides existing studies about GPU support [151,193] and FPGA support [76,212], other accelerators such as the Tensor Processing Unit (TPU) should also be considered by cloud providers of serverless computing. However, supporting new accelerators may be challenging in serverless platforms because it may require designing a new scheduler, resource allocation pattern, or billing model. ...
... Unlike approaches based on GPU-enabled containers, Naranjo et al. [193] used GPU virtualization. They evaluated several virtualized access methods to GPUs, including remote access to GPU devices via the rCUDA framework [51], as well as direct access to GPU devices via PCI passthrough [49]. ...
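For illustration, the snippet below shows why the rCUDA approach leaves application code unchanged: the rCUDA client library intercepts CUDA calls and forwards them to a remote server selected through environment variables. The variable names and address format follow the rCUDA documentation as commonly described and should be treated as assumptions to verify against the rCUDA user guide.

```python
# Illustrative only: with rCUDA, the CUDA API is virtualized at the library
# level, so the GPU-using code itself does not change; only the environment
# that points the client library at a remote GPU server does.
import os

# Assumed rCUDA client configuration: one remote GPU served by an rCUDA server.
os.environ["RCUDA_DEVICE_COUNT"] = "1"
os.environ["RCUDA_DEVICE_0"] = "gpu-server.example.org:0"  # host:GPU-index (assumed format)

# Any CUDA-based workload launched from this process with the rCUDA client
# libraries preloaded would then execute its kernels on the remote GPU
# instead of a local device, without source-code modifications.
```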
Article
Serverless computing is an emerging cloud computing paradigm, being adopted to develop a wide range of software applications. It allows developers to focus on the application logic at the granularity of functions, thereby freeing developers from tedious and error-prone infrastructure management. Meanwhile, its unique characteristics pose new challenges to the development and deployment of serverless-based applications, and enormous research efforts have been devoted to tackling them. This paper provides a comprehensive literature review to characterize the current research state of serverless computing. Specifically, this paper covers 164 papers on 17 research directions of serverless computing, including performance optimization, programming framework, application migration, multi-cloud development, testing and debugging, etc. It also derives research trends, focus, and commonly-used platforms for serverless computing, as well as promising research opportunities.
... In order to share a physical GPU among containers, some works directly assign an entire physical GPU to one or more containers, while others partition physical GPUs into multiple virtual GPUs (vGPUs) and allocate one or more vGPUs to each container applying for GPU resources. Furthermore, prior work [10] has extended Kubernetes to enable remote GPU virtualization, which makes containers running in non-GPU nodes share the GPU resources of GPU nodes for task acceleration. Remote GPU virtualization allows the nodes of the cluster to share the GPUs present in the computing facility, which increases overall GPU utilization and reduces energy consumption and the number of GPUs installed in the cluster [11]. ...
... The API forwarding solution can overcome the limitations of black-box GPU drivers by virtualizing GPUs at the library level, but it is limited by its performance overhead and functional incompleteness [13]. 2. Previous work [10] on remote GPU virtualization in Kubernetes mainly focuses on GPU acceleration; thus, the problems of communication overhead and shared-resource interference remain unsolved. ...
... Several studies [16, 27-29] further extend ConVGPU to support compute resource usage isolation on kernel execution based on API forwarding. Moreover, OSCAR [10] uses API forwarding to provide serverless functions with access to remote GPUs from the containers of a Kubernetes cluster. ...
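For context, the sketch below shows the standard way a container requests a whole physical GPU in Kubernetes through the NVIDIA device plugin, using the official Kubernetes Python client; this is a generic example, not KubeGPU's or OSCAR's own API, and vGPU-based approaches substitute their own extended resource name.

```python
# Minimal sketch: request one whole physical GPU via the "nvidia.com/gpu"
# extended resource exposed by the NVIDIA device plugin.
from kubernetes import client, config

def make_gpu_pod(name: str, image: str) -> client.V1Pod:
    container = client.V1Container(
        name=name,
        image=image,
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )
    return client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    # Assumes a reachable cluster with the device plugin installed.
    config.load_kube_config()
    pod = make_gpu_pod("gpu-task", "nvidia/cuda:12.2.0-base-ubuntu22.04")
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```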
Article
Full-text available
With an increasing number of new containerized applications, such as high-performance computing and deep learning applications, starting to rely on GPUs, efficiently supporting GPUs in container clouds becomes essential. While GPU sharing has been extensively studied for VMs, limited work has been done for containers. Existing works only use a single specific GPU virtualization technique to deploy containers, such as GPU pass-through or API forwarding, and lack remote GPU virtualization optimization. These limitations lead to low system throughput and container performance degradation due to the dynamic and heterogeneous nature of container resource requirements and GPU virtualization techniques, and the problems of communication overhead and resource racing. Therefore, we designed and implemented KubeGPU, which extends Kubernetes to enable GPU sharing with an adaptive sharing strategy. The adaptive sharing strategy gives KubeGPU the ability to make a dynamic choice of GPU virtualization technique to deploy containers, according to available GPU resources and containers’ configuration parameters such as GPU resource requirements, in order to achieve good container performance and system throughput. Besides that, a network-aware scheduling approach and fine-grained allocation of remote GPU resources are proposed to optimize remote GPU virtualization. Finally, using representative real-world workloads for HPC and deep learning, we demonstrate the superiority of KubeGPU compared to other existing works, and the effectiveness of KubeGPU in minimizing communication overhead and eliminating remote GPU resource racing.
... The FaaS model has demonstrated that increased sharing of resources between applications can increase resource efficiency of running workflows while decreasing management complexity [19,32]. Although approaches to integrating specific accelerators with FaaS platforms exist [53,70], we ask the opposite question: how can the abstraction of FaaS benefit hardware accelerators to support workflows in federated heterogeneous computing? To answer this question, we make the following contributions in this paper: ...
... Like application kernels in heterogeneous computing, serverless functions are often fine-grained and require careful orchestration to optimize resource utilization [38,45,51]. Some work also specifically addresses the use of accelerators (predominantly, GPUs) in serverless computing [50,53,70,90]. ...
... As a result, research focus has been increasingly directed towards expanding FaaS service offerings by adding access to hardware accelerators. Naranjo et al. introduce a GPU-enabled serverless framework which links virtual GPUs with the OpenFaaS serverless framework via the rCUDA [42] remote GPU virtualization service [91]. Ringlein et al. propose a system architecture involving disaggregated FPGAs within a FaaS offering [102]. ...
... With increased exploitation of serverless for different application domains, there has been a rise in demand for access to specialized hardware where required. Research on such flexible serverless frameworks is growing [91] and has great potential. ...
Article
Full-text available
Serverless computing has emerged as an attractive deployment option for cloud applications in recent times. The unique features of this computing model include rapid auto-scaling, strong isolation, fine-grained billing options and access to a massive service ecosystem which autonomously handles resource management decisions. This model is increasingly being explored for deployments in geographically distributed edge and fog computing networks as well, due to these characteristics. Effective management of computing resources has always gained a lot of attention among researchers. The need to automate the entire process of resource provisioning, allocation, scheduling, monitoring and scaling, has resulted in the need for specialized focus on resource management under the serverless model. In this article, we identify the major aspects covering the broader concept of resource management in serverless environments and propose a taxonomy of elements which influence these aspects, encompassing characteristics of system design, workload attributes and stakeholder expectations. We take a holistic view on serverless environments deployed across edge, fog and cloud computing networks. We also analyse existing works discussing aspects of serverless resource management using this taxonomy. This article further identifies gaps in literature and highlights future research directions for improving capabilities of this computing model.
... [Flattened comparison table of related surveys against criteria C, S, CS, EC, RU, and RT: Wen et al. [18] 2021; Perez et al. [19] 2019; Kim et al. [20] 2020; Enes et al. [21] 2020; Jackson et al. [22] 2018; Shafiei et al. [23] 2022; Golec et al. [25] 2021; Singh et al. [24] 2022; Grafberger et al. [26] 2021; Li et al. [27] 2022; Bebortta et al. [28] 2020; Gill et al. [29] 2021; Mateus et al. [30] 2022; Marin et al. [31] 2022; Mahmoudi et al. [10] 2020; Yussupov et al. [32] 2019; Van Eyk et al. [33] 2018; Cordingly et al. [34] 2020; Bardsley et al. [35] 2018; Rajan et al. [36] 2018; Grogan et al. [37] 2020; Vahidinia et al. [38] 2022; Liu et al. [39] 2023; Fuerst et al. [40] 2021; Mampage et al. [41] 2021; Kaur et al. [42] 2019; Datta et al. [43] 2024; Naranjo et al. [44] 2020; Sarroca et al. [45] 2024; Zuk et al. [46] 2022. The per-criterion marks are not recoverable from the extracted text. Figure 1: State diagram of a function instance.] ... a new function instance on an existing virtual machine, which affects the response time experienced by the users. Extensive research has been conducted to mitigate cold start in serverless computing [38], [49]. ...
Article
Full-text available
Serverless computing has evolved as a prominent paradigm within cloud computing, providing on-demand resource provisioning and capabilities crucial to Science and Technology for Energy Transition (STET) applications. Despite the efficiency of auto-scalable approaches in optimizing performance and cost in distributed systems, their potential remains underutilized in serverless computing due to the lack of comprehensive approaches. Therefore, an auto-scalable approach has been designed using Q-learning, which enables optimal resource scaling decisions. This approach adjusts resources dynamically, automatically scaling them up or down as needed to maximize resource utilization. Further, the proposed approach has been validated using AWS Lambda with key performance metrics such as the probability of cold start, average response time, idle instance count, energy consumption, etc. The experimental results demonstrate that the proposed approach performs better than the existing approach with respect to the above parameters. Finally, the proposed approach has also been validated to optimize the energy consumption of smart meter data.
... A significant development in computing is the availability of a wide range of highly flexible devices, known as nodes, which can be deployed anytime and anywhere, with a fee incurred only when they are employed. Achieving this with traditional computing environments or data centers would not be possible due to the limitations of the existing technology [18]. It is essential to note that unused, underused, and inactive resources contribute significantly to energy waste. ...
Article
Full-text available
In recent years, serverless computing has received significant attention due to its innovative approach to cloud computing. In this novel approach, a new payment model is presented, and a microservice architecture is implemented to convert applications into functions. These characteristics make it an appropriate choice for topics related to Internet of Things (IoT) devices at the network’s edge, because such devices constantly suffer from a lack of resources and the optimal use of resources is significant for them. Scheduling algorithms are used in serverless computing to allocate resources, which is a mechanism for optimizing resource utilization. This process can be challenging due to a number of factors, including dynamic behavior, heterogeneous resources, workloads that vary in volume, and variations in the number of requests. These factors have led to algorithms with different scheduling approaches being presented in the literature. Despite many related serverless computing studies in the literature, to the best of the author’s knowledge, no systematic, comprehensive, and detailed survey has been published that focuses on scheduling algorithms in serverless computing. In this paper, we propose a survey on scheduling approaches in serverless computing across different computing environments, including cloud computing, edge computing, and fog computing, presented in a classical taxonomy. The proposed taxonomy is classified into six main approaches: Energy-aware, Data-aware, Deadline-aware, Package-aware, Resource-aware, and Hybrid. After that, open issues and inadequately investigated or new research challenges are discussed, and the survey is concluded.
... Serverless computing offers several benefits over cloud and edge computing, such as low cost, scalability, and no need to maintain infrastructure [40]. It is more cost-effective than traditional clouds as developers do not need high computational power or space. ...
Article
Full-text available
Computing paradigms have evolved significantly in recent decades, moving from large room-sized resources (processors and memory) to incredibly small computing nodes. Recently, the power of computing has attracted interest from almost all current application fields. Currently, distributed computing continuum systems (DCCSs) are unleashing the era of a computing paradigm that unifies various computing resources, including cloud, fog/edge computing, the Internet of Things (IoT), and mobile devices into a seamless and integrated continuum. Its seamless infrastructure efficiently manages diverse processing loads and ensures a consistent user experience. Furthermore, it provides a holistic solution to meet modern computing needs. In this context, this paper presents a deeper understanding of DCCSs' potential in today's computing environment. First, we discuss the evolution of computing paradigms up to DCCS. The general architectures, components, and various computing devices are discussed, and the benefits and limitations of each computing paradigm are analyzed. After that, our discussion continues into various computing devices that constitute part of DCCS to achieve computational goals in current and futuristic applications. In addition, we delve into the key features and benefits of DCCS from the perspective of current computing needs. Furthermore, we provide a comprehensive overview of emerging applications (with a case study analysis) that desperately need DCCS architectures to perform their tasks. Finally, we describe the open challenges and possible developments that need to be made to DCCS to unleash its widespread potential for the majority of applications.
... [17, 18] Serverless platform extensions include research works that improve existing platforms by introducing novel scaling, placement, or routing techniques [19, 20]. Serverless programming extensions include open-source projects, such as Zappa [21] or Chalice [22], and research works that introduce new approaches to programming serverless functions [23, 24]. Serverless function runtimes focus on the execution aspect of functions and mostly consist of works that introduce new virtualization or isolation techniques. ...
Article
Full-text available
The edge–cloud continuum combines heterogeneous resources, which are complex to manage. Serverless edge computing is a suitable candidate to manage the continuum by abstracting away the underlying infrastructure, improving developers’ experiences, and optimizing overall resource utilization. However, understanding and overcoming programming support, reliability, and performance engineering challenges are essential for the success of serverless edge computing. In this article, we review and evaluate the maturity of serverless approaches for the edge–cloud continuum. Our review includes commercial, community-driven offerings and approaches from academia. We identify several maturity levels of serverless edge computing and use them as criteria to evaluate the maturity of current state-of-the-art serverless approaches with a special focus on the programming, reliability, and performance challenges. Finally, we lay a road map toward the next generation of serverless edge computing systems.
... Naranjo et al. [18] addressed the GPU monopolization problem by introducing rCUDA [7], a GPU virtualization framework, into FaaS. The solution prevents FaaS functions from directly managing GPUs by intercepting the GPU operations of FaaS functions and forwarding them to the rCUDA interface. ...
Preprint
Function-as-a-Service (FaaS) is emerging as an important cloud computing service model as it can improve the scalability and usability of a wide range of applications, especially Machine-Learning (ML) inference tasks that require scalable resources and complex software configurations. These inference tasks heavily rely on GPUs to achieve high performance; however, support for GPUs is currently lacking in the existing FaaS solutions. The unique event-triggered and short-lived nature of functions poses new challenges to enabling GPUs on FaaS, which must consider the overhead of transferring data (e.g., ML model parameters and inputs/outputs) between GPU and host memory. This paper proposes a novel GPU-enabled FaaS solution that enables ML inference functions to efficiently utilize GPUs to accelerate their computations. First, it extends existing FaaS frameworks such as OpenFaaS to support the scheduling and execution of functions across GPUs in a FaaS cluster. Second, it provides caching of ML models in GPU memory to improve the performance of model inference functions and global management of GPU memories to improve cache utilization. Third, it offers co-designed GPU function scheduling and cache management to optimize the performance of ML inference functions. Specifically, the paper proposes locality-aware scheduling, which maximizes the utilization of both GPU memory for cache hits and GPU cores for parallel processing. A thorough evaluation based on real-world traces and ML models shows that the proposed GPU-enabled FaaS works well for ML inference tasks, and the proposed locality-aware scheduler achieves a speedup of 48x compared to the default, load-balancing-only schedulers.
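A much-simplified sketch of the model-caching idea outlined in that abstract (not the authors' implementation): keeping already-loaded models resident in GPU memory so repeated inference invocations skip the host-to-GPU transfer.

```python
# Toy model cache keyed by model path; cache hits avoid reloading parameters
# over the PCIe bus on every invocation.
import torch

_gpu_model_cache: dict[str, torch.nn.Module] = {}

def get_model(model_path: str, device: str = "cuda:0") -> torch.nn.Module:
    """Return a model resident in GPU memory, loading it only on a cache miss."""
    model = _gpu_model_cache.get(model_path)
    if model is None:
        # Cold path: assumes the file stores a full serialized module.
        model = torch.load(model_path, map_location=device)
        model.eval()
        _gpu_model_cache[model_path] = model
    return model

def infer(model_path: str, batch: torch.Tensor) -> torch.Tensor:
    model = get_model(model_path)
    with torch.no_grad():
        return model(batch.to("cuda:0"))
```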
... Prior articles [4-7] examined virtualization technologies conceptually, providing the scientific community with a broad understanding of the subject. Furthermore, a few articles [8-11] attempted to evaluate the performance of containers, hypervisors, and explicit cloud platforms, but the analysis and comparisons were limited. ...
Article
Full-text available
The development of the Next-Generation Wireless Network (NGWN) is becoming a reality. To conduct specialized processes, more rapid network deployment has become essential. Methodologies like Network Function Virtualization (NFV), Software-Defined Networks (SDN), and cloud computing will be crucial in addressing various challenges that 5G networks will face, particularly adaptability, scalability, and reliability. The motivation behind this work is to confirm the role of virtualization and the capabilities offered by various virtualization platforms, including hypervisors, clouds, and containers, which will serve as a guide to dealing with the stimulating environment of 5G. This is particularly crucial when implementing network operations at the edge of 5G networks, where limited resources and prompt user responses are mandatory. Experimental results show that containers outperform hypervisor-based virtualized infrastructure and cloud platforms in latency and network throughput, at the expense of higher virtualized processor use. In contrast to public clouds, where a set of rules is created to allow only the appropriate traffic, security is still a problem with containers.
... Heterogeneous accelerator support. Besides studies of GPU support [125,160] and FPGA support [62,176], other accelerators such as the Tensor Processing Unit (TPU) should also be considered by cloud providers of serverless computing. ...
Preprint
Serverless computing is an emerging cloud computing paradigm. Moreover, it has become an attractive development option for cloud-based applications for software developers. The most significant advantage of serverless computing is that it frees software developers from the burden of complex underlying management tasks and allows them to focus only on the application logic implementation. Based on its favorable characteristics and bright prospects, it has become an increasingly hot topic in various scenarios, such as machine learning, scientific computing, video processing, and the Internet of Things. However, no existing study provides a comprehensive analysis of the current research state of the art of serverless computing in terms of research scope and depth. To fill this knowledge gap, we present a comprehensive literature review to summarize the current research state of the art of serverless computing. This review is based on 164 selected research papers and answers three key aspects, i.e., research directions (What), existing solutions (How), and platforms and venues (Where). Specifically, first, we construct a taxonomy of research directions in the serverless computing literature. Our taxonomy has 18 research categories covering performance optimization, programming framework, application migration, multi-cloud development, cost, testing, debugging, etc. Second, we classify the related studies of each research direction and elaborate on their specific solutions. Third, we investigate the distributions of experimental platforms and publication venues for existing techniques. Finally, based on our analysis, we discuss some key challenges and envision promising opportunities for future research on the serverless platform side, serverless application side, and serverless computing community side.
... We think that multiplexing accelerators in serverless systems is the key to overcoming these obstacles. For example, some works [98,150] integrate GPUs into serverless systems, and BlastFunction [14] makes FPGAs available in serverless. However, the current works are still insufficient. ...
Preprint
The development of cloud infrastructures inspires the emergence of cloud-native computing. As the most promising architecture for deploying microservices, serverless computing has recently attracted more and more attention in both industry and academia. Due to its inherent scalability and flexibility, serverless computing becomes attractive and more pervasive for ever-growing Internet services. Despite the momentum in the cloud-native community, the existing challenges and compromises still wait for more advanced research and solutions to further explore the potentials of the serverless computing model. As a contribution to this knowledge, this article surveys and elaborates the research domains in the serverless context by decoupling the architecture into four stack layers: Virtualization, Encapsule, System Orchestration, and System Coordination. Inspired by the security model, we highlight the key implications and limitations of these works in each layer, and make suggestions for potential challenges to the field of future serverless computing.
... Numba also provides support for generating code for accelerators such as Nvidia/AMD GPUs using NVVM [30] and HLC [31]. Using GPUs to accelerate FaaS functions [32] is of interest for future investigation, but is out of scope for this work. ...
Preprint
Full-text available
FaaS allows an application to be decomposed into functions that are executed on a FaaS platform. The FaaS platform is responsible for the resource provisioning of the functions. Recently, there has been a growing trend towards the execution of compute-intensive FaaS functions that run for several seconds. However, due to the billing policies followed by commercial FaaS offerings, the execution of these functions can incur significantly higher costs. Moreover, due to the abstraction of underlying processor architectures on which the functions are executed, the performance optimization of these functions is challenging. As a result, most FaaS functions use pre-compiled libraries generic to x86-64, leading to performance degradation. In this paper, we examine the underlying processor architectures for Google Cloud Functions (GCF) and determine their prevalence across the 19 available GCF regions. We modify, adapt, and optimize three compute-intensive FaaS workloads written in Python using Numba, a JIT compiler based on LLVM, and present results with respect to performance, memory consumption, and costs on GCF. Results from our experiments show that the optimization of FaaS functions can improve performance by 12.8x (geometric mean) and save costs by 73.4% on average for the three functions. Our results show that optimization of the FaaS functions for the specific architecture is very important. We achieved a maximum speedup of 1.79x by tuning the function specifically for the instruction set of the underlying processor architecture.
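The kind of optimization discussed in that abstract can be illustrated with a small Numba example; the workload below is a generic stand-in, not one of the three benchmarked functions.

```python
# Replacing a pure-Python kernel with a Numba-JIT-compiled version lets LLVM
# generate code tuned to the host's instruction set, which is the class of
# optimization the abstract evaluates on Google Cloud Functions.
import numpy as np
from numba import njit, prange

@njit(parallel=True, fastmath=True, cache=True)
def pairwise_l2(points: np.ndarray) -> np.ndarray:
    """Compute a dense pairwise Euclidean distance matrix."""
    n, d = points.shape
    out = np.empty((n, n), dtype=np.float64)
    for i in prange(n):
        for j in range(n):
            acc = 0.0
            for k in range(d):
                diff = points[i, k] - points[j, k]
                acc += diff * diff
            out[i, j] = np.sqrt(acc)
    return out

if __name__ == "__main__":
    pts = np.random.rand(2000, 3)
    dist = pairwise_l2(pts)   # the first call pays the JIT compilation cost
    print(dist.shape)
```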
Conference Paper
This paper presents an integrative literature review on the application of High Performance Computing (HPC) in serverless environments, highlighting the current state of research and identifying possible future directions. The review was conducted across several academic databases using a search string that combined terms related to the integration of HPC with serverless, with a focus on performance, scalability, and efficiency. Inclusion and exclusion criteria were applied to select the most relevant studies. The results indicate that combining HPC with serverless offers benefits such as elasticity and cost optimization, but still faces challenges such as the latency of remote invocations and efficient resource management. Technologies such as RDMA and I/O optimizations have the potential to mitigate some of these problems. The integration of HPC with serverless shows potential, with opportunities for optimization and innovation, but requires further advances to overcome the existing limitations.
Research
Full-text available
As the Fourth Industrial Revolution unfolds, there has been a surge in devices requiring large-scale data processing, such as AI systems, smart factories, and self-driving cars. To meet this growing demand, global investment in IoT technology has skyrocketed. However, these devices often face challenges due to increased sensor size and cost. While cloud computing can help mitigate these issues, centralized cloud solutions often introduce delays due to distance. With the increasing complexity and hyper-connectivity of these devices, there is a growing trend toward decentralization through edge servers. This study aims to develop a technology framework that integrates digital twin technology with edge computing, areas where Korea currently lags behind. The proposed framework will facilitate the creation of a low-latency computing environment for low-resource devices by leveraging edge computing to reduce delays in data-intensive applications, particularly as demand continues to grow in the future.
Article
Full-text available
The evolution of cloud computing systems is examined in this study, which follows the path from conventional virtualisation to modern serverless computing models. Virtualisation initially optimised resource utilisation by allowing several VMs to operate on a single physical server, but it also added overhead and management complexity. With the use of common operating systems and quick deployment, containerisation signalled a move towards more effective and flexible solutions. By eliminating the need for infrastructure administration, concentrating on event-driven function execution, and providing improved scalability and cost effectiveness, serverless computing further revolutionised cloud infrastructure. This paper emphasises the consequences for resource management and application development while highlighting the developments, difficulties, and potential paths in cloud computing.
Article
With the rapid development of information technology, the concept of the Metaverse has swept the world and set off a new wave of the industrial revolution. The construction of living and manufacturing scenes based on the Metaverse requires the joint participation of scientists and engineers from various fields where “human” is at the core. In the Metaverse, predicting human behavior and response based on deep learning models is meaningful because the prediction results can provide more satisfactory services for participants. Therefore, how to deploy a multi-stage machine learning inference model has become the bottleneck to improving the development level of the Metaverse. Thanks to its scalability and pay-as-you-go billing model, the emerging serverless computing can effectively cope with the workload of machine learning inference. However, the statelessness of serverless computing and the lack of good GPU resource-sharing support make it difficult to deploy machine learning models directly on serverless computing platforms and exploit their advantages. Therefore, we propose SMSS, a stateful model inference service, which is deployed on a serverless computing platform that supports GPU sharing. Since the serverless computing platform does not support stateful workflow execution, SMSS adopts log-based workflow runtime support. We also design a mechanism of two-layer GPU sharing to fully explore the potential of inter-model and intra-model GPU sharing. We evaluate the effectiveness of SMSS with real workloads. Our experimental results show that log-based stateful workflow operation support can ensure the stateful execution of tasks with low overhead while also facilitating error location and recovery. Two-layer GPU sharing can reduce the cold start time of inference tasks by up to two orders of magnitude.
Article
Serverless platforms have been attracting applications from traditional platforms because infrastructure management responsibilities are shifted from users to providers. Many applications well-suited to serverless environments could leverage GPU acceleration to enhance their performance. Unfortunately, current serverless platforms do not expose GPUs to serverless applications.
Conference Paper
The increasing use of hardware processing accelerators tailored for specific applications, such as the Vision Processing Unit (VPU) for image recognition, further increases developers' configuration, development, and management overhead. Developers have successfully used fully automated elastic cloud services such as serverless computing to counter these additional efforts and shorten development cycles for applications running on CPUs. Unfortunately, current cloud solutions do not yet provide these simplifications for applications that require hardware acceleration. However, as the development of specialized hardware acceleration continues to provide performance and cost improvements, it will become increasingly important to enable ease of use in the cloud. In this paper, we present an initial design and implementation of Hardless, an extensible and generalized serverless computing architecture that can support workloads for arbitrary hardware accelerators. We show how Hardless can scale across different commodity hardware accelerators and support a variety of workloads using the same execution and programming model common in serverless computing today.
Article
Serverless computing and, in particular, the functions as a service model has become a convincing paradigm for the development and implementation of highly scalable applications in the cloud. This is due to the transparent management of three key functionalities: triggering of functions due to events, automatic provisioning and scalability of resources, and fine-grained pay-per-use. This article presents a serverless web-based scientific gateway to execute the inference phase of previously trained machine learning and artificial intelligence models. The execution of the models is performed both in Amazon Web Services and in on-premises clouds with the OSCAR framework for serverless scientific computing. In both cases, the computing infrastructure grows elastically according to the demand adopting scale-to-zero approaches to minimize costs. The web interface provides an improved user experience by simplifying the use of the models. The usage of machine learning in a computing platform that can use both on-premises clouds and public clouds constitutes a step forward in the adoption of serverless computing for scientific applications.
Article
Full-text available
Serverless computing has become a new trending paradigm in cloud computing, allowing developers to focus on core application logic and rapid application prototyping. Serverless computing offers lower costs and convenience to users, who do not need to focus on server management. Because of its good prospects, in recent years most major cloud vendors have launched their commodity serverless computing platforms. However, the characteristics of these platforms have not been studied systematically. A qualitative analysis of these platforms, covering development, deployment, and runtime aspects, is needed to form a taxonomy of their characteristics. Google Cloud Platform offers several types of serverless computing; this article describes a comparison among several of them, namely Cloud Functions, App Engine, Cloud Run, and Google Kubernetes Engine (GKE).
Article
Full-text available
Serverless computing has gained importance over the last decade as an exciting new field, owing to its large influence in reducing costs, decreasing latency, improving scalability, and eliminating server-side management, to name a few. However, to date there is a lack of an in-depth survey that would help developers and researchers better understand the significance of serverless computing in different contexts. Thus, it is essential to present research evidence that has been published in this area. In this systematic survey, 275 research papers that examined serverless computing from well-known literature databases were extensively reviewed to extract useful data. Then, the obtained data were analyzed to answer several research questions regarding state-of-the-art contributions of serverless computing, its concepts, its platforms, its usage, etc. We moreover discuss the challenges that serverless computing faces nowadays and how future research could enable its implementation and usage.
Article
Serverless computing is an emerging event‐driven programming model that accelerates the development and deployment of scalable web services on cloud computing systems. Though widely integrated with the public cloud, serverless computing use is nascent for edge‐based, Internet of Things (IoT) deployments. In this work, we present STOIC (serverless teleoperable hybrid cloud), an IoT application deployment and offloading system that extends the serverless model in three ways. First, STOIC adopts a dynamic feedback control mechanism to precisely predict latency and dispatch workloads uniformly across edge and cloud systems using a distributed serverless framework. Second, STOIC leverages hardware acceleration (e.g., GPU resources) for serverless function execution when available from the underlying cloud system. Third, STOIC can be configured in multiple ways to overcome deployment variability associated with public cloud use. We overview the design and implementation of STOIC and empirically evaluate it using real‐world machine learning applications and multitier IoT deployments (edge and cloud). Specifically, we show that STOIC can be used for training image processing workloads (for object recognition)—once thought too resource‐intensive for edge deployments. We find that STOIC reduces overall execution time (response latency) and achieves placement accuracy that ranges from 92% to 97%.
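The feedback-driven placement idea underlying STOIC can be sketched as follows; this is a toy illustration of latency-based dispatch between edge and cloud targets, not STOIC's actual controller.

```python
# Toy sketch: keep a smoothed estimate of observed end-to-end latency per
# target and dispatch each batch to whichever target is currently predicted
# to be faster. The smoothing factor and target names are illustrative.
class LatencyDispatcher:
    def __init__(self, targets=("edge", "cloud"), alpha=0.3):
        self.alpha = alpha                       # weight given to the newest observation
        self.estimate = {t: None for t in targets}

    def choose(self) -> str:
        # Targets with no observations yet are tried first to bootstrap estimates.
        unknown = [t for t, e in self.estimate.items() if e is None]
        if unknown:
            return unknown[0]
        return min(self.estimate, key=self.estimate.get)

    def observe(self, target: str, latency_s: float) -> None:
        prev = self.estimate[target]
        self.estimate[target] = latency_s if prev is None else (
            self.alpha * latency_s + (1 - self.alpha) * prev
        )

# Usage: target = dispatcher.choose(); run the batch there; then call
# dispatcher.observe(target, measured_latency) to refine future decisions.
```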
Article
Full-text available
MapReduce is one of the most widely used programming models for analysing large-scale datasets, i.e. Big Data. In recent years, serverless computing and, in particular, Functions as a Service (FaaS) has surged as an execution model in which no explicit management of servers (e.g. virtual machines) is performed by the user. Instead, the Cloud provider dynamically allocates resources to the function invocations and fine-grained billing is introduced depending on the execution time and allocated memory, as exemplified by AWS Lambda. In this article, a high-performance serverless architecture has been created to execute MapReduce jobs on AWS Lambda using Amazon S3 as the storage backend. In addition, a thorough assessment has been carried out to study the suitability of AWS Lambda as a platform for the execution of High Throughput Computing jobs. The results indicate that AWS Lambda provides a convenient computing platform for general-purpose applications that fit within the constraints of the service (15 min of maximum execution time, 3008 MB of RAM and 512 MB of disk space) but it exhibits an inhomogeneous performance behaviour that may jeopardise adoption for tightly coupled computing jobs.
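The fan-out step of such a Lambda-based MapReduce architecture can be sketched with boto3 as follows; bucket, prefix, and function names are placeholders, and this is a generic illustration rather than the authors' framework.

```python
# Minimal fan-out sketch: each object under an S3 prefix triggers one
# asynchronous Lambda invocation acting as a mapper; reducers would later
# aggregate the partial results written back to S3.
import json
import boto3

s3 = boto3.client("s3")
lam = boto3.client("lambda")

def fan_out(bucket: str, prefix: str, mapper_function: str) -> int:
    """Invoke one mapper per input object and return the number of invocations."""
    invoked = 0
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            payload = {"bucket": bucket, "key": obj["Key"]}
            lam.invoke(
                FunctionName=mapper_function,
                InvocationType="Event",            # asynchronous, highly parallel
                Payload=json.dumps(payload).encode(),
            )
            invoked += 1
    return invoked
```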
Article
Full-text available
In modern virtual computing environments, existing GPU virtualization techniques are unable to take full advantage of a GPU's powerful 2D/3D hardware-accelerated graphics rendering performance or parallel computing potential, and they do not consider whether the internal resources of a GPU domain are fairly allocated between VMs with different performance requirements. Therefore, we propose a multi-channel GPU virtualization architecture (VMCG), model the corresponding credit allocating and transferring mechanisms, and redesign the virtual multi-channel GPU fair-scheduling algorithm. VMCG provides a separate V-Channel for each guest VM (DomU) that competes with other VMs for the same physical GPU resources, and each DomU submits command request blocks to its respective V-Channel according to the corresponding DomU ID. Through the virtual multi-channel GPU fair-scheduling algorithm, not only do multiple DomUs make full use of native GPU hardware acceleration, but the fairness of GPU resource allocation is also significantly improved during GPU-intensive workloads from multiple DomUs running on the same host. Experimental results show that, for 2D/3D graphics applications, performance is close to 96% of that of the native GPU, performance is improved by approximately 500% for parallel computing applications, and GPU resource-allocation fairness is improved by approximately 60%-80%.
Article
Full-text available
New architectural patterns (e.g. microservices), the massive adoption of Linux containers (e.g. Docker containers), and improvements in key features of Cloud computing such as auto-scaling, have helped developers to decouple complex and monolithic systems into smaller stateless services. In turn, Cloud providers have introduced serverless computing, where applications can be defined as a workflow of event-triggered functions. However, serverless services, such as AWS Lambda, impose serious restrictions on these applications (e.g. using a predefined set of programming languages or hindering the installation and deployment of external libraries). This paper addresses such issues by introducing a framework and a methodology to create Serverless Container-aware ARchitectures (SCAR). The SCAR framework can be used to create highly-parallel event-driven serverless applications that run on customized runtime environments defined as Docker images on top of AWS Lambda. This paper describes the architecture of SCAR together with the cache-based optimizations applied to minimize cost, exemplified on a massive image processing use case. The results show that, by means of SCAR, AWS Lambda becomes a convenient platform for High Throughput Computing, specially for highly-parallel bursty workloads of short stateless jobs.
Conference Paper
Full-text available
In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.
Conference Paper
Full-text available
Cloud computing enables an entire ecosystem of developing, composing, and providing IT services. An emerging class of cloud-based software architectures, serverless, focuses on providing software architects the ability to execute arbitrary functions with small overhead in server management, as Function-as-a-Service (FaaS). However useful, serverless and FaaS suffer from a community problem that faces every emerging technology, which indeed also hampered cloud computing a decade ago: a lack of clear terminology and a scattered vision of the field. In this work, we address this community problem. We clarify the term serverless, by reducing it to cloud functions as programming units, and a model of executing simple and complex (e.g., workflows of) functions with operations managed primarily by the cloud provider. We propose a research vision, where 4 key directions (perspectives) present 17 technical opportunities and challenges.
Conference Paper
Full-text available
As more scientific workloads are moved into the cloud, the need for high performance accelerators increases. Accelerators such as GPUs offer improvements in both performance and power efficiency over traditional multi-core processors; however, their use in the cloud has been limited. Today, several common hypervisors support GPU passthrough, but their performance has not been systematically characterized. In this paper we show that low overhead GPU passthrough is achievable across 4 major hypervisors and two processor microarchitectures. We compare the performance of two generations of NVIDIA GPUs within the Xen, VMWare ESXi, and KVM hypervisors, and we also compare the performance to that of Linux Containers (LXC). We show that GPU passthrough to KVM achieves 98-100% of the base system's performance across two architectures, while Xen and VMWare achieve 96-99% of the base system's performance, respectively. In addition, we describe several valuable lessons learned through our analysis and share the advantages and disadvantages of each hypervisor/GPU passthrough solution.
Article
Full-text available
Cloud infrastructures are becoming an appropriate solution to address the computational needs of scientific applications. However, the use of public or on-premises Infrastructure as a Service (IaaS) clouds requires users to have non-trivial system administration skills. Resource provisioning systems provide facilities to choose the most suitable Virtual Machine Images (VMI) and basic configuration of multiple instances and subnetworks. Other tasks such as the configuration of cluster services, computational frameworks or specific applications are not trivial on the cloud, and normally users have to manually select the VMI that best fits, including undesired additional services and software packages. This paper presents a set of components that ease the access and the usability of IaaS clouds by automating the VMI selection, deployment, configuration, software installation, monitoring and update of Virtual Appliances. It supports APIs from a large number of virtual platforms, making user applications cloud-agnostic. In addition it integrates a contextualization system to enable the installation and configuration of all the applications required by the user, providing the user with a fully functional infrastructure. Therefore, golden VMIs and configuration recipes can be easily reused across different deployments. Moreover, the contextualization agent included in the framework supports horizontal elasticity (increasing/decreasing the number of resources) and vertical elasticity (increasing/decreasing resources within a running Virtual Machine) by properly reconfiguring the installed software, taking into account the configuration of the multiple resources running. This paves the way for automatic virtual infrastructure deployment, customization and elastic modification at runtime for IaaS clouds.
Article
Full-text available
The use of virtualization to abstract underlying hardware can aid in sharing such resources and in efficiently managing their use by high performance applications. Unfortunately, virtualization also prevents efficient access to accelerators, such as Graphics Processing Units (GPUs), that have become critical components in the design and architecture of HPC systems. Supporting General Purpose computing on GPUs (GPGPU) with accelerators from different vendors presents significant challenges due to proprietary programming models, heterogeneity, and the need to share accelerator resources between different Virtual Machines (VMs). To address this problem, this paper presents GViM, a system designed for virtualizing and managing the resources of a general purpose system accelerated by graphics processors. Using the NVIDIA GPU as an example, we discuss how such accelerators can be virtualized without additional hardware support and describe the basic extensions needed for resource management. Our evaluation with a Xen-based implementation of GViM demonstrates efficiency and flexibility in system usage coupled with only small performance penalties for the virtualized vs. non-virtualized solutions.
Conference Paper
Distributed computing remains inaccessible to a large number of users, in spite of many open source platforms and extensive commercial offerings. While distributed computation frameworks have moved beyond a simple map-reduce model, many users are still left to struggle with complex cluster management and configuration tools, even for running simple embarrassingly parallel jobs. We argue that stateless functions represent a viable platform for these users, eliminating cluster management overhead, fulfilling the promise of elasticity. Furthermore, using our prototype implementation, PyWren, we show that this model is general enough to implement a number of distributed computing models, such as BSP, efficiently. Extrapolating from recent trends in network bandwidth and the advent of disaggregated storage, we suggest that stateless functions are a natural fit for data processing in future computing environments.
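The programming model PyWren advocates can be illustrated with a short usage sketch; the call names follow the PyWren documentation as commonly presented and should be checked against the project's README.

```python
# Usage sketch of the stateless-function model: map a plain Python function
# over a list of inputs, with each invocation executed as a cloud function
# (AWS Lambda) and no cluster to manage.
import pywren

def my_function(x):
    # Any embarrassingly parallel, stateless piece of work.
    return x * x

pwex = pywren.default_executor()           # uses AWS Lambda behind the scenes
futures = pwex.map(my_function, range(100))
results = [f.result() for f in futures]    # gather results once invocations finish
print(sum(results))
```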
Conference Paper
The computational power and memory bandwidth of graphics processing units (GPUs) have turned them into attractive platforms for general-purpose applications at significant speed gains versus their CPU counterparts [1]. In addition, an increasing number of today’s state-of-the-art supercomputers [2] include commodity GPUs to bring us unprecedented levels of high performance and low cost. In this paper, we describe CUDA as the software and hardware paradigm behind those achievements. We summarize its evolution over the past decade, explain its major features and provide insights about future trends for this emerging trend to continue as flagship within high performance computing.
Article
Background: Accurate estimates of Rheumatic Heart Disease (RHD) burden are needed to justify improved integration of RHD prevention and screening into the public health systems, but data from Latin America are still sparse. Objective: To determine the prevalence of RHD among socioeconomically disadvantaged youth (5-18 years) in Brazil and examine risk factors for the disease. Methods: The PROVAR program utilizes non-expert screeners, telemedicine, and handheld and standard portable echocardiography to conduct echocardiographic screening in socioeconomically disadvantaged schools in Minas Gerais, Brazil. Cardiologists in the US and Brazil provide expert interpretation according to the 2012 World Heart Federation Guidelines. Here we report prevalence data from the first 14 months of screening, and examine risk factors for RHD. Results: 5996 students were screened across 21 schools. Median age was 11.9 [9.0/15.0] years, 59% females. RHD prevalence was 42/1000 (n=251): 37/1000 borderline (n=221) and 5/1000 definite (n=30). Pathologic mitral regurgitation was observed in 203 (80.9%), pathologic aortic regurgitation in 38 (15.1%), and mixed mitral/aortic valve disease in 10 (4.0%) children. Older children had higher prevalence (50/1000 vs. 28/1000, p<0.001), but no difference was observed between northern (lower resourced) and central areas (34/1000 vs. 44/1000, p=0.31). Females had higher prevalence (48/1000 vs. 35/1000, p=0.016). Age (OR=1.15, 95% CI:1.10-1.21, p<0.001) was the only variable independently associated with RHD findings. Conclusions: RHD continues to be an important and under recognized condition among socioeconomically disadvantaged Brazilian schoolchildren. Our data adds to the compelling case for renewed investment in RHD prevention and early detection in Latin America.
Chapter
There is a trend towards using graphics processing units (GPUs) not only for graphics visualization, but also for accelerating scientific applications. But their use for this purpose is not without disadvantages: GPUs increase costs and energy consumption. Furthermore, GPUs are generally underutilized. Using virtual machines could be a possible solution to address these problems, however, current solutions for providing GPU acceleration to virtual machines environments, such as KVM or Xen, present some issues. In this paper we propose the use of remote GPUs to accelerate scientific applications running inside KVM virtual machines. Our analysis shows that this approach could be a possible solution, with low overhead when used over InfiniBand networks.
Article
Graphic processing units (GPUs) provide a massively-parallel computational power and encourage the use of general-purpose computing on GPUs (GPGPU). The distinguished design of discrete GPUs helps them to provide the high throughput, scalability, and energy efficiency needed for GPGPU applications. Despite the previous study on GPU virtualization, the tradeoffs between the virtualization approaches remain unclear, because of a lack of designs for or quantitative evaluations of the hypervisor-level virtualization for discrete GPUs. Shedding light on these tradeoffs and the technical requirements for the hypervisor-level virtualization would facilitate the development of an appropriate GPU virtualization solution. GPUvm, which is an open architecture for hypervisor-level GPU virtualization with a particular emphasis on using the Xen hypervisor, is presented in this paper. GPUvm offers three virtualization modes: the full-, naive para-, and high-performance para-virtualization. GPUvm exposes low- and high-level interfaces such as memory-mapped I/O and DRM APIs to the guest virtual machines (VMs). Our experiments using a relevant commodity GPU showed that GPUvm incurs different overheads as the level of the exposed interfaces is changed. The results also showed that a coarse-grained fairness on the GPU among multiple VMs can be achieved using GPU scheduling.
Conference Paper
The use of graphics processing units (GPUs) to accelerate some portions of applications is widespread nowadays. To avoid the usual inconveniences associated with these accelerators (high acquisition cost, high energy consumption, and low utilization), one possible solution is to share them among several nodes of the cluster. Several years ago, remote GPU virtualization middleware systems appeared to implement this solution. Although these systems tackled the aforementioned inconveniences, their performance was usually impaired by the low bandwidth attained by the underlying network. However, recent advances in InfiniBand fabrics have changed this trend. In this paper we analyze how the high bandwidth provided by the new EDR 100G InfiniBand fabric allows remote GPU virtualization middleware systems not only to perform very similarly to local GPUs, but also to improve overall performance for some applications.
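As a rough, back-of-the-envelope illustration of why fabric bandwidth closes this gap (approximate figures, not taken from the cited paper): an EDR InfiniBand port delivers a nominal 100 Gb/s, i.e. on the order of 12 GB/s of usable bandwidth, which is comparable to the roughly 16 GB/s theoretical bandwidth of the PCIe 3.0 x16 link that typically attaches a local GPU. Host-to-device transfers routed over such a network are therefore no longer dramatically slower than local ones.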
Conference Paper
Using GPUs reduces the execution time of many applications but increases acquisition cost and power consumption. Furthermore, GPUs usually attain relatively low utilization. In this context, remote GPU virtualization solutions were recently created to overcome these drawbacks. Currently, many different remote GPU virtualization frameworks exist, each presenting very different characteristics. These differences may lead to differences in performance. In this work we present a performance comparison among the only three CUDA remote GPU virtualization frameworks publicly available at no cost. Results show that performance greatly depends on the exact framework used, with the rCUDA virtualization solution standing out among them. Furthermore, rCUDA doubles the performance of CUDA for pageable memory copies.
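The last point refers to the two kinds of host memory a CUDA application can copy from. The following minimal sketch (our own illustration using the CUDA runtime API, unrelated to the code of the compared frameworks; error checking omitted for brevity) times the same host-to-device transfer from a pageable and from a pinned (page-locked) buffer; remote GPU virtualization middleware differs mainly in how efficiently it handles the pageable case:

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    int main() {
        const size_t bytes = 64 << 20;             // 64 MiB transfer
        float *pageable = (float *)malloc(bytes);  // ordinary (pageable) host memory
        float *pinned = nullptr;
        cudaMallocHost((void **)&pinned, bytes);   // page-locked (pinned) host memory
        float *dev = nullptr;
        cudaMalloc((void **)&dev, bytes);

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        float ms = 0.0f;

        // Copy from pageable memory: the runtime stages the data through an
        // internal pinned buffer before the DMA transfer.
        cudaEventRecord(start);
        cudaMemcpy(dev, pageable, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("pageable H2D: %.2f ms\n", ms);

        // Copy from pinned memory: eligible for direct DMA, usually faster.
        cudaEventRecord(start);
        cudaMemcpy(dev, pinned, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);
        printf("pinned   H2D: %.2f ms\n", ms);

        cudaFree(dev);
        cudaFreeHost(pinned);
        free(pageable);
        return 0;
    }

When a remote-execution middleware is interposed, its own staging and network transfer are added on top of this path, which is why how pageable copies are handled can dominate the observed differences between frameworks.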
Conference Paper
Graphics Processing Units (GPUs) have become important components in high performance computing (HPC) systems thanks to their massively parallel computing capability and energy efficiency. Virtualization technologies are increasingly applied to HPC to reduce administration costs and improve system utilization. However, virtualizing the GPU to support general-purpose computing presents many challenges because of the complexity of this device. On VMware's ESX hypervisor, DirectPath I/O can provide virtual machines (VMs) with high-performance access to physical GPUs. However, this technology does not allow multiplexing for sharing GPUs among VMs and is not compatible with vMotion, VMware's technology for transparently migrating VMs among hosts inside clusters. In this paper we address these issues by implementing a solution that uses "remote API execution" and takes advantage of DirectPath I/O to enable general-purpose GPU computing on ESX. This solution, named vmCUDA, allows CUDA applications running concurrently in multiple VMs on ESX to share GPU(s). Our solution requires neither recompilation nor even editing of the source code of CUDA applications. Our performance evaluation has shown that vmCUDA introduces an overhead of 0.6%-3.5% for applications with moderate data sizes and 14%-20% for those with large data (e.g., 12.5 GB-237.5 GB in our experiments).
Article
This paper presents a general energy management system for High Performance Computing (HPC) clusters and cloud infrastructures that powers off cluster nodes when they are not being used and, conversely, powers them on when they are needed. This system can be integrated with different HPC cluster middleware, such as Batch-Queuing Systems or Cloud Management Systems, and can also use different mechanisms for powering the computing nodes on and off. The presented system makes it possible to implement different energy-saving policies depending on the priorities and particularities of the cluster. It also provides a hook system to extend its functionality and a sensor system to take environmental information into account. The paper describes the successful integration of the proposed system with some popular Batch-Queuing Systems and also with some Cloud Management middleware, presenting two real use cases that show significant energy/cost savings of 27% and 17%.
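A purely illustrative sketch of the kind of policy such a system applies is shown below (our own simplification, not the system's actual code or interfaces; Node, power_on, power_off and apply_policy are hypothetical names): nodes idle longer than a threshold are switched off, and powered-off nodes are switched back on while jobs are still waiting for resources.

    #include <chrono>
    #include <string>
    #include <vector>

    // Hypothetical cluster-node view used only for this illustration.
    struct Node {
        std::string name;
        bool powered_on;
        std::chrono::minutes idle_for;  // time since the node last ran a job
    };

    // Hypothetical helpers; a real system would map these to IPMI,
    // Wake-on-LAN, or a cloud provider's API.
    void power_off(Node &n) { n.powered_on = false; }
    void power_on(Node &n)  { n.powered_on = true;  }

    // One pass of a simple policy: switch off nodes idle beyond a threshold,
    // then switch nodes back on while there are queued jobs without resources.
    void apply_policy(std::vector<Node> &nodes, int queued_jobs,
                      std::chrono::minutes idle_threshold) {
        for (Node &n : nodes)
            if (n.powered_on && n.idle_for > idle_threshold)
                power_off(n);
        for (Node &n : nodes) {
            if (queued_jobs <= 0) break;
            if (!n.powered_on) {
                power_on(n);
                --queued_jobs;
            }
        }
    }

    int main() {
        std::vector<Node> nodes = {
            {"wn1", true,  std::chrono::minutes(45)},
            {"wn2", true,  std::chrono::minutes(2)},
            {"wn3", false, std::chrono::minutes(0)},
        };
        apply_policy(nodes, /*queued_jobs=*/1, std::chrono::minutes(30));
        return 0;
    }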
Conference Paper
This paper describes vCUDA, a general-purpose graphics processing unit (GPGPU) computing solution for virtual machines (VMs). vCUDA allows applications executing within VMs to leverage hardware acceleration, which can be beneficial to the performance of a class of high-performance computing (HPC) applications. The key insights in our design are API call interception and redirection and a dedicated RPC system for VMs. With API interception and redirection, Compute Unified Device Architecture (CUDA) applications in VMs can access the graphics hardware device and achieve high computing performance in a transparent way. In the current study, vCUDA achieved near-native performance with the dedicated RPC system. We carried out a detailed analysis of the performance of our framework. Using a number of unmodified official examples from the CUDA SDK and third-party applications in the evaluation, we observed that CUDA applications running with vCUDA exhibited a very low performance penalty in comparison with the native environment, thereby demonstrating the viability of the vCUDA architecture.
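The interception-and-redirection idea can be sketched in a few lines: a shared library that defines the same symbols as the CUDA runtime is loaded ahead of it, so calls from an unmodified application are caught and can be forwarded to a back end running where the real GPU is. This is a conceptual illustration of the general technique only, not vCUDA's actual code; forward_to_backend is a hypothetical placeholder for the RPC transport.

    // interpose.cpp -- conceptual sketch of CUDA API call interception.
    // Built as a shared library and preloaded (e.g. with LD_PRELOAD), it
    // catches cudaMalloc calls from unmodified applications; a real system
    // would marshal the call over RPC to the machine that owns the GPU, or
    // fall back to the native runtime via dlsym(RTLD_NEXT, "cudaMalloc").
    #include <cstdio>
    #include <cstddef>

    typedef int cudaError_t;                  // simplified stand-in for the real type
    static const cudaError_t cudaSuccess = 0;

    // Hypothetical transport: marshal the call and its arguments to a remote
    // or host-side GPU service and return its answer.
    static cudaError_t forward_to_backend(const char *name, void **devPtr,
                                          std::size_t size) {
        std::fprintf(stderr, "[interposer] %s(%zu bytes) redirected to backend\n",
                     name, size);
        *devPtr = nullptr;                    // the backend would return a handle here
        return cudaSuccess;
    }

    // Same signature as the CUDA runtime entry point, so the dynamic linker
    // resolves the application's call to this function instead.
    extern "C" cudaError_t cudaMalloc(void **devPtr, std::size_t size) {
        return forward_to_backend("cudaMalloc", devPtr, size);
    }

The same pattern applies to the rest of the API surface; the practical difficulty in such systems lies in marshalling device memory and asynchronous calls efficiently, not in the interception itself.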
Conference Paper
The GPU Virtualization Service (gVirtuS) presented in this work tries to fill the gap between in-house computing clusters equipped with GPGPU devices and pay-for-use high-performance virtual clusters deployed via public or private computing clouds. gVirtuS allows an instantiated virtual machine to access GPGPUs in a transparent and hypervisor-independent way, with an overhead only slightly greater than that of a real machine/GPGPU setup. The performance of the components of gVirtuS is assessed through a suite of tests in different deployment scenarios, such as providing GPGPU power to cloud-based HPC clusters and sharing remotely hosted GPGPUs among HPC nodes.
Conference Paper
The increasing computing requirements of applications have favoured the design and marketing of commodity GPU (Graphics Processing Unit) devices that nowadays can also be used to accelerate general-purpose computing. Therefore, future clusters intended for HPC (High Performance Computing) will likely include such devices. However, high-end GPU-based accelerators used in HPC feature considerable energy consumption, so that attaching a GPU to every node of a cluster has a strong impact on its overall power consumption. In this paper we detail a framework that enables remote GPU acceleration in HPC clusters, thus allowing a reduction in the number of accelerators installed in the cluster. This leads to energy, acquisition, maintenance, and space savings.
Computer Aided Diagnosis for Rheumatic Heart Disease by AI Applied to Features Extraction from Echocardiography
  • E Camacho-Ramos
  • A Jimenez-Pastor
  • I Blanquer
  • F García-Castro
  • A Alberich-Bayarri
Google Cloud Functions
  • Google
Automatic visceral fat characterisation on CT scans through deep learning and CNN for the assessment of metabolic syndrome
  • A Jimenez-Pastor
  • A Alberich-Bayarri
  • F Garcia-Castro
  • L Marti-Bonmati