Chapter

KOSMOS: Vertical and Horizontal Resource Autoscaling for Kubernetes


Abstract

Cloud applications are increasingly deployed in lightweight containers that can be efficiently managed to cope with highly varying and unpredictable workloads. Kubernetes, the most popular container orchestrator, provides means to automatically scale containerized applications to keep their response time under control. Kubernetes provisions resources using two main components: i) the Horizontal Pod Autoscaler (HPA), which controls the number of containers running for an application, and ii) the Vertical Pod Autoscaler (VPA), which oversees the resource allocation of existing containers. These two components have several limitations: they must control different metrics, they rely on simple threshold-based rules, and reconfiguring existing containers requires stopping and restarting them.
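The threshold-based rule mentioned above can be illustrated with the replica-count formula the Kubernetes HPA documents, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric) — a minimal sketch, not the KOSMOS approach:

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """Replica count per the Kubernetes HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric).
    """
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods
print(desired_replicas(4, 90, 60))  # -> 6
```

Note how the rule reacts only to the ratio between the observed and target metric values; it has no model of the application, which is one motivation for the control-theoretic alternatives this chapter pursues.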


Article
Full-text available
In today's world of roughly five billion internet users, where every second purchase is made online, cloud applications carry the main load of internet services. To provide fast, uninterrupted service to end users and to reduce costs in an increasingly competitive environment, there is growing work on automatically managing applications' replica counts, resources, and in-application parameters. In this study, the effect of application parameters on performance and cost is examined with a global sensitivity analysis method called High Dimensional Model Representation, and the results are demonstrated experimentally on a sample cloud application named TeaStore. This study, the first of its kind in the context of cloud applications, makes it possible to improve the tools used to manage cloud applications by identifying which parameters to prioritize and which to neglect.
Article
Containers have become a pervasive approach to rapidly developing, testing, and updating Internet of Things (IoT) applications. Autoscaling containers can adaptively allocate computing resources as data volumes vary over time. Elasticity, a critical feature of a cloud platform, is therefore a significant measure of the performance of lightweight containers. In this paper, we propose a framework with a container auto-scaler. It monitors containers' resource usage and scales containers in or out accordingly. Further, we define elasticity mathematically in order to quantify cloud elasticity using the proposed framework. Extensive experiments are carried out with different workload modes, workload durations, and scaling cool-down periods. The results show that the framework tracks workload variation closely, with very short delay. We also find that the cloud platform shows the best elasticity under the repeating workload mode, owing to its recurring and predictable pattern. Finally, we observe that the length of the cool-down period must be set properly to balance system stability against good elasticity.
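The interplay between threshold-triggered scaling and a cool-down period can be sketched as follows. This is a toy model with hypothetical thresholds and a hypothetical `observe` interface, not the authors' implementation:

```python
import time

class AutoScaler:
    """Toy threshold-based container auto-scaler with a cool-down period.

    Hypothetical thresholds: scale out above `high` utilization, scale in
    below `low`. The cool-down suppresses further actions right after one
    fires, trading responsiveness (elasticity) for stability.
    """

    def __init__(self, replicas=1, high=0.8, low=0.3, cooldown_s=30.0,
                 clock=time.monotonic):
        self.replicas = replicas
        self.high, self.low = high, low
        self.cooldown_s = cooldown_s
        self.clock = clock                  # injectable for testing
        self._last_action = float("-inf")   # time of the last scaling action

    def observe(self, cpu_util: float) -> int:
        """Feed one utilization sample in [0, 1]; return the replica count."""
        now = self.clock()
        if now - self._last_action < self.cooldown_s:
            return self.replicas            # still cooling down: no action
        if cpu_util > self.high:
            self.replicas += 1              # scale out
            self._last_action = now
        elif cpu_util < self.low and self.replicas > 1:
            self.replicas -= 1              # scale in
            self._last_action = now
        return self.replicas
```

A longer cool-down damps oscillation when samples hover around a threshold, but delays reaction to genuine load spikes — the balance the paper's experiments explore.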
Conference Paper
Today, cloud computing is offered through various service models and can be practically implemented and delivered in virtualized environments. As cloud computing techniques have developed, many companies have proposed different types of platforms based on research into the relevant technologies. Among these platforms, this paper presents a performance comparison of Linux Containers and virtual machines. We first built cloud environments on Docker, which is based on Linux Containers, and on a hypervisor-based virtual machine, and analyzed each in terms of image size, boot speed, and CPU performance. With these results, users can understand the characteristics of each platform and make a reasoned choice of the platform they need.