Lab
Kiran Kumar Patibandla's Lab
Institution: Walmart Stores
Department: Department of Information Technology
Featured research (5)
The use of artificial intelligence (AI) in cloud architectures has significantly increased processing efficiency and scale. However, as algorithms grow more complex and data volumes expand, workload management has become a significant challenge in AI cloud computing. Existing workload management solutions rely on rule-based heuristics, which can lead to underutilized resources and poor performance. We therefore present an algorithmic comparative approach that eases the burden of workload management for AI-driven cloud architectures: a batch of tasks is executed under several candidate algorithms, and their performance, cost, and other metrics are compared. Machine learning methods then determine the best algorithm for a given workload, and the result is deployed as a self-contained binary that can switch between algorithms at runtime based on available resources. We validated the scheme with simulations, which demonstrate superior resource utilization and reduced completion times compared to rule-based schemes. The approach's flexibility and scalability also give operators easier control over workloads whose demands or allocations change over time. By simplifying AI-driven cloud workload management, this approach substantially improves efficiency and scalability for organizations seeking to run larger and more complex workloads faster.
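The comparative idea above can be illustrated with a minimal sketch. All names here (fifo, sjf, pick_best) are hypothetical, and total waiting time stands in for the abstract's broader performance and cost metrics: each candidate scheduling algorithm is run on the same batch of tasks, and the lowest-cost one is selected for deployment.

```python
# Minimal sketch: benchmark candidate scheduling algorithms on one batch of
# tasks, then select the best performer to use at runtime.
from typing import Callable, Dict, List

def fifo(tasks: List[int]) -> List[int]:
    # First-in, first-out: run tasks in arrival order.
    return list(tasks)

def sjf(tasks: List[int]) -> List[int]:
    # Shortest-job-first: run the cheapest tasks first.
    return sorted(tasks)

def total_waiting_time(schedule: List[int]) -> int:
    # Total time tasks spend waiting when durations run back-to-back.
    waited, total = 0, 0
    for duration in schedule:
        total += waited
        waited += duration
    return total

def pick_best(algorithms: Dict[str, Callable], tasks: List[int]) -> str:
    # Score every algorithm on the same batch and keep the lowest-cost one.
    scores = {name: total_waiting_time(algo(tasks))
              for name, algo in algorithms.items()}
    return min(scores, key=scores.get)

tasks = [8, 1, 5, 2]  # task durations in arbitrary time units
best = pick_best({"fifo": fifo, "sjf": sjf}, tasks)
print(best)  # → sjf (waiting time 12 vs. 31 for fifo on this batch)
```

A production system would replace the single cost function with learned models over workload features, but the selection step has the same shape: measure, compare, switch.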
Cloud computing has disrupted the way businesses work by providing an effective, low-cost platform for delivering services and resources. However, as cloud computing grows at a rapid pace, administering and maintaining such large systems has become increasingly complex. Repetitive operations such as scaling resources or monitoring performance are time-consuming and resource-intensive, leaving cloud architectures poorly suited to managing workload fluctuations efficiently. This has motivated a growing effort to automate routine tasks in cloud architectures using supervised learning techniques. Supervised learning algorithms can learn from historical data and make predictions, which is valuable in operations: forecasting resource needs from real-time data through predictive analytics, so that capacity is ready before it is required. This relieves human operators of routine work and makes the system more efficient. By harnessing supervised learning, cloud architectures can be continuously optimized for cost-efficient resource provisioning, with better scalability and adaptability that make the system more fault-tolerant against sudden spikes in workload.
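The forecasting step described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the lab's actual method: a plain least-squares linear trend is fitted to past CPU demand, and the fitted line is extrapolated one interval ahead so capacity can be provisioned in advance.

```python
# Minimal sketch: fit a linear trend to past CPU demand and forecast the
# next interval so capacity can be provisioned before it is needed.
from typing import List, Tuple

def fit_linear(y: List[float]) -> Tuple[float, float]:
    # Ordinary least-squares fit of y against time indices 0..n-1.
    n = len(y)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(y) / n
    slope = (sum((x - mean_x) * (v - mean_y) for x, v in zip(xs, y))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def forecast_next(history: List[float]) -> float:
    # Predict demand at the next time step from the fitted trend.
    slope, intercept = fit_linear(history)
    return slope * len(history) + intercept

cpu_demand = [40.0, 45.0, 50.0, 55.0]  # % utilization per interval
print(forecast_next(cpu_demand))  # → 60.0 for this perfectly linear series
```

In practice, richer supervised models trained on many workload features would replace the linear fit, and the forecast would feed an autoscaling policy rather than a print statement.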
With the rapid development and widespread application of the Internet of Things (IoT), big data, and 5G networks, traditional cloud computing is increasingly unable to handle the massive amounts of data generated by network edge devices. In response, edge computing has emerged as a promising solution. However, due to its open nature, characterized by content awareness, real-time computing, and parallel processing, edge computing exacerbates the existing data security and privacy challenges already present in cloud environments. This paper outlines the research background of data security and privacy protection in edge computing and proposes a data-centric security framework. It provides a comprehensive review of the most recent advancements in key technologies related to data security, access control, identity authentication, and privacy protection that could be applicable to edge computing. The scalability and applicability of various approaches are analyzed and discussed in detail. Additionally, several practical instances of edge computing that are currently in use are introduced. Finally, the paper highlights key research directions and offers recommendations for future study.
Lab head
Members
Srikamal Boyina