Fig 2 - uploaded by Pravallika Mannem
Operating principle of the proposed model

Source publication
Article
Cloud computing architectures are more scalable and economical, which is the main reason for their popularity. However, they bring their own set of challenges in workload scheduling and resource utilization, because virtual machines (VMs) and applications have to share different types of resources, such as servers, storage,...

Context in source publication

Context 1
... example, when working with cloud architectures this means collecting metrics on CPU and memory use, network traffic, or application performance. Fig 2 shows the operating principle of the proposed model. ...

Citations

Article
The use of artificial intelligence (AI) in cloud architectures has significantly increased processing efficiency and scale. However, with the growth of complex algorithms and big data, workload management has become a significant issue in AI cloud computing. Existing workload management solutions are rule-based heuristics that may result in underutilization of resources and poor performance. We therefore present an algorithmic comparative approach to easing the burden of workload management for AI-driven cloud architectures: a batch of tasks is executed with different algorithms, and their performance, cost, and other metrics are compared. We use ML methods to determine the best algorithm for a given workload, and then deploy it in a self-contained binary that can switch between algorithms at runtime on the available resources. We validated our scheme with simulations, which demonstrate superior resource use and diminished completion time in comparison to rule-based schemes. Its flexibility and scalability allow easier control over workloads that are subject to change or reallocation. By simplifying AI-driven cloud workload management, this approach greatly enhances efficiency and scalability for organizations looking to run larger and more complex workloads faster.
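The comparative selection this abstract describes (run the same batch under several candidate schedulers, compare a performance metric, keep the winner) could be sketched roughly as below. The two heuristics, their names, and the makespan metric are illustrative assumptions, not the paper's actual algorithms:

```python
import random

# Two hypothetical scheduling heuristics: each assigns a batch of task
# durations to n_vms virtual machines and returns the makespan
# (the finish time of the most-loaded VM).
def round_robin(tasks, n_vms):
    loads = [0.0] * n_vms
    for i, t in enumerate(tasks):
        loads[i % n_vms] += t
    return max(loads)

def least_loaded(tasks, n_vms):
    loads = [0.0] * n_vms
    for t in tasks:
        # greedily place each task on the currently least-loaded VM
        loads[loads.index(min(loads))] += t
    return max(loads)

def pick_scheduler(tasks, n_vms, candidates):
    """Run every candidate on the same batch and keep the one
    with the smallest makespan."""
    results = {fn.__name__: fn(tasks, n_vms) for fn in candidates}
    best = min(results, key=results.get)
    return best, results

random.seed(0)
batch = [random.uniform(1, 10) for _ in range(50)]
best, scores = pick_scheduler(batch, 4, [round_robin, least_loaded])
print(best, scores)
```

A runtime system in the spirit of the abstract would re-run this comparison periodically and switch to whichever candidate currently wins.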
Article
Cloud computing has been disrupting the way businesses work by providing an effective, low-cost platform for delivering services and resources. However, as cloud computing grows at a faster pace, administering and maintaining such huge systems has become more complex. Repetitive operations like scaling resources or performance monitoring are time-consuming and resource-intensive, which makes cloud architectures ill-suited to managing workload fluctuations efficiently. This has led to an increasing effort to automate monotonous tasks in cloud architectures, for example with supervised learning techniques. Supervised learning algorithms can learn from past data and be used for prediction, which is very important in operations: forecasting resource needs so that capacity is ready before it is needed, using predictive analytics on real-time data. This relieves human operators of some work and makes the system more efficient. By using the power of supervised learning, we can continuously optimize cloud architectures for cost-efficient resource provisioning. It also provides better scalability and adaptability, making the system more fault-tolerant (via bootstrapping) against sudden spikes in workload that cannot otherwise be mitigated.
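The forecasting idea in this abstract (learn from past utilization samples, predict the next interval's demand, and provision capacity ahead of it) can be sketched with a minimal least-squares trend fit. This is an illustrative stand-in, not the citing paper's model; the sample values and the 20% headroom factor are assumptions:

```python
def fit_trend(samples):
    """Ordinary least squares for y = a*t + b over t = 0..n-1."""
    n = len(samples)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(samples) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, samples))
    var = sum((t - mean_t) ** 2 for t in ts)
    a = cov / var
    b = mean_y - a * mean_t
    return a, b

def forecast_next(samples):
    """Extrapolate the fitted line one interval past the data."""
    a, b = fit_trend(samples)
    return a * len(samples) + b

cpu = [42, 45, 47, 50, 53, 55]   # % utilization over recent intervals
pred = forecast_next(cpu)
needed = pred * 1.2              # provision with headroom before demand arrives
```

In practice one would use a richer model (more features, regularization, retraining on streaming data), but the loop is the same: fit on history, predict the next window, and scale resources before the predicted demand materializes.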