Figure 4 - uploaded by Rao Mikkilineni
Diagram showing the physical representation of the proof-of-concept

Source publication
Conference Paper
Cloud computing is fundamentally altering the expectations for how and when computing, storage and networking resources should be allocated, managed and consumed. End-users are increasingly sensitive to the latency of services they consume. Service Developers want the Service Providers to ensure or provide the capability to dynamically allocate and...

Context in source publication

Context 1
... mediation layer provided two interface options, i.e., a telnet-accessible shell prompt for command-line control as well as a browser-based console application developed using Adobe Flex UI components that was served up by an Apache Web Server. The physical architecture of the proof-of-concept is shown in Figure 4. ...

Similar publications

Article
Service Function Chaining (SFC) is the problem of deploying various network service instances over geographically distributed data centers and providing inter-connectivity among them. The goal is to enable the network traffic to flow smoothly through the underlying network, resulting in an optimal quality of experience to the end-users. Proper chai...

Citations

... In general, priority scheduling is utilized in operating systems when a large number of activities are scheduled for execution, and the system performs the tasks based on their priority. It is also a preemptive scheduling technique, as seen in Figure 2, where a job with the greatest priority is executed before any others [44]. $(b_{11}, b_{21}), (b_{12}, b_{22}), \ldots, (b_{1n}, b_{2n})$ ...
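For orientation, the selection rule described in this snippet (highest priority runs first) can be sketched with a priority queue. This is a minimal non-preemptive illustration, not the cited system's scheduler, and the task names are invented:

```python
import heapq

def priority_schedule(tasks):
    """Return task names in execution order, highest priority first
    (lower number = higher priority). Non-preemptive illustration."""
    heap = list(tasks)      # (priority, name) tuples
    heapq.heapify(heap)     # min-heap keyed on priority
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)
    return order

print(priority_schedule([(3, "log rotation"), (1, "interrupt handler"), (2, "network I/O")]))
# ['interrupt handler', 'network I/O', 'log rotation']
```

A preemptive variant would additionally re-enter the heap whenever a higher-priority job arrives mid-execution.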
Article
The continuously evolving world of cloud computing presents new challenges in resource allocation as dispersed systems struggle with overloaded conditions. In this regard, we introduce OptiDJS+, a cutting-edge enhanced dynamic Johnson sequencing algorithm made to successfully handle resource scheduling challenges in cloud computing settings. With a solid foundation in the dynamic Johnson sequencing algorithm, OptiDJS+ builds upon it to suit the demands of modern cloud infrastructures. OptiDJS+ makes use of sophisticated optimization algorithms, heuristic approaches, and adaptive mechanisms to improve resource allocation, workload distribution, and task scheduling. To obtain the best performance, this strategy uses historical data, dynamic resource reconfiguration, and adaptation to changing workloads. It accomplishes this by utilizing real-time monitoring and machine learning, and it takes factors like load balancing and makespan into account.

We outline the design philosophies, implementation specifics, and empirical assessments of OptiDJS+ in this work. Through rigorous testing and benchmarking against cutting-edge scheduling algorithms, we show the better performance and resilience of OptiDJS+ in terms of response times, resource utilization, and scalability. The outcomes underline its success in reducing resource contention and raising service quality generally in cloud computing environments. In contexts where there is distributed overloading, OptiDJS+ offers a significant advancement in the search for effective resource scheduling solutions. Its versatility, optimization skills, and improved decision-making procedures make it a viable tool for tackling the resource allocation issues that cloud service providers and consumers encounter daily. We think that OptiDJS+ opens the way for more dependable and effective cloud computing ecosystems, assisting in the full realization of cloud technologies' promises across a range of application areas.

To use the OptiDJS+ Johnson sequencing algorithm for cloud computing task scheduling, we provide a two-step procedure. After examining the links between the jobs, we generate a Gantt chart. The Gantt chart graph is then changed into a two-machine OptiDJS+ Johnson sequencing problem by assigning tasks to servers. The OptiDJS+ dynamic Johnson sequencing approach is then used to minimize the makespan and find the best sequence of operations on each server. Through extensive simulations and testing, we compare the performance of our proposed OptiDJS+ dynamic Johnson sequencing approach with two servers to that of current scheduling techniques. The results demonstrate that our technique greatly improves performance in terms of makespan reduction and resource utilization. The recommended approach also demonstrates its ability to scale and is effective at resolving challenging work scheduling problems in cloud computing environments.
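OptiDJS+ itself is not reproduced here, but the classical two-machine Johnson rule that the abstract builds on is compact enough to sketch. The job tuples and function names below are illustrative, assuming each job is a (server-1 time, server-2 time) pair:

```python
def johnson_sequence(jobs):
    """Classical Johnson's rule for a two-machine flow shop: jobs faster
    on machine 1 go first (ascending m1 time); the rest go last
    (descending m2 time). Minimizes makespan in the two-machine case."""
    front = sorted((j for j in jobs if j[0] < j[1]), key=lambda j: j[0])
    back = sorted((j for j in jobs if j[0] >= j[1]), key=lambda j: j[1], reverse=True)
    return front + back

def makespan(seq):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for m1, m2 in seq:
        t1 += m1                # machine 1 finishes this job at t1
        t2 = max(t2, t1) + m2   # machine 2 starts once both are free
    return t2

jobs = [(3, 2), (1, 4), (2, 5), (4, 1)]
print(johnson_sequence(jobs), makespan(johnson_sequence(jobs)))
# [(1, 4), (2, 5), (3, 2), (4, 1)] 13
```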
... Data mining for the agriculture area has been a subject of research for a long time. Data mining has been applied for analysing soil types and properties in order to classify them [7]. Likewise, soil data mining is valuable for crop prediction and for deciding on a better yield sequence, considering past harvest sequences in similar farmland together with the current soil nutrient data. ...
... The usage of IoT has been proposed within the agriculture domain in [7]. There, the authors have described an FMS architecture that makes use of Future Internet features. ...
Article
In this paper, we proposed a multidisciplinary model for smart farming based on the key innovations: IoT and sensors. Farmers, agro-marketing organizations, and agro-vendors need to be enrolled in the Agricola module via the Mobile App module. Agricola storage is used to store the details of farmers, seasonal soil properties of farmlands, agro-vendors and agro-marketing organizations, agro e-governance schemes, and current natural conditions. Soil and weather properties are sensed and periodically sent to Agricola via IoT (BeagleBone Black). Big data analysis on Agricola data is performed for fertilizer requirements, best crop sequence analysis, total production, and current inventory and market requirements. The proposed model is useful for increasing agricultural production and for cost management of agro-products.
... Private and public clouds coexist in a distributed mode with conventional networks. A continuously available ecosystem can endure equipment failure while continuing to operate. Data technologies are used to understand and address business needs in cognitive cloud computing [6]. A global, regulated platform unifies the control of IT and network infrastructure facilities through a single interface. ...
... Finally, they found that the offered procedures are capable not only of making effective use of VM capabilities, but also of conserving resources on a physical machine in the tests. V. Sarathy et al. (2010) proposed and defined a reference architecture for a network-centric datacenter network operations center that borrows and extends key concepts from the telecom industry to the technology world, enabling flexibility, adaptability, dependability, and privacy. They also discussed a proof-of-concept system that was developed to show how dynamic management information may be used to provide legitimate service assurance for network-centric computing architectures. ...
Article
We propose a dynamic automated infrastructure model for the cloud data centre, aimed at efficient service provisioning for an enormous number of users. Data center and cloud computing technologies have been receiving attention through major research and development efforts by companies, governments, and academic and other research institutions. Within this, the difficult task is to enable the infrastructure to make information available to application-driven services and to make business-smart decisions. On the other hand, the challenges that remain include the provision of dynamic infrastructure for applications and information anywhere. Further, developing technologies to handle private cloud computing infrastructure and operations in a completely automated and secure way has been critical. As a result, the focus of this article is on service and infrastructure life cycle management. We also show how cloud users interact with the cloud, how they request services from the cloud, how they select cloud strategies to deliver the desired service, and how they analyze their cloud consumption.
... The control plane is centralized and managed through an SDN controller, while the data plane remains distributed across network devices that forward packets based on the controller's directives. This architecture provides network administrators with the flexibility to manage networks programmatically and make real-time adjustments to traffic flows [4]. ...
Article
As demand for high-performance, efficient, and secure data center operations rises, traditional network architectures are increasingly inadequate for modern digital ecosystems. Emerging technologies such as cloud computing, AI, IoT, and big data have overwhelmed existing infrastructures, driving the need for innovative solutions. This paper examines advancements in scalable frameworks, specifically Software-Defined Networking (SDN) and Network Function Virtualization (NFV). SDN centralizes control for dynamic traffic management, while NFV virtualizes network services to enhance flexibility and cost efficiency. Beyond scalability, robust security is crucial. The paper explores micro-segmentation, which isolates network segments to limit cyber-attack spread, and zero-trust architecture, which enforces strict verification for all users and devices. These models strengthen defenses but also introduce complexity. Performance evaluations highlight the benefits and limitations of these architectures, considering metrics like latency and resource utilization. The future of network architectures will integrate AI and machine learning for automated management and threat detection. Quantum computing may redefine encryption, presenting both opportunities and challenges. Ultimately, investing in advanced, adaptable, and secure network solutions is essential to keep pace with the growing demands of next-generation data centers.
... With the rapid usage of the internet all over the globe, cloud computing has already taken the lead in the IT industry [1,2]. Cloud computing is transforming the computing landscape, adapting to instantaneous requirements [3,4]. The cloud concept and its computing process are an emerging topic in the internet-centric, IT-market-oriented business place. ...
Article
Cloud computing is a business model, or an infrastructure consisting of a pool of physical resources, that can be arranged on an on-demand basis. As end users are concerned about getting better services from the service providers, this manuscript uses scheduling strategies which in turn facilitate the users with minimization of both the task unit completion time (the time taken by each task unit to complete its task) and the average waiting time (the average waiting time of the cloud customers). In this paper, we have analysed the simulation results and compared the average task unit completion time. We have also evaluated and compared the performance parameters by means of a queuing model.
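The abstract does not specify which queuing model is used; as a generic illustration of the waiting-time metrics it compares, the standard M/M/1 formulas can be computed directly (the arrival and service rates below are invented):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics: utilization (rho), mean waiting time
    in queue (Wq), and mean time in system (W). Requires lam < mu."""
    assert lam < mu, "queue is unstable when arrival rate >= service rate"
    rho = lam / mu
    W = 1.0 / (mu - lam)     # mean time in system (wait + service)
    Wq = rho / (mu - lam)    # mean waiting time in queue only
    return rho, Wq, W

print(mm1_metrics(lam=8.0, mu=10.0))  # ≈ (0.8, 0.4, 0.5)
```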
... As a result, small VMs are often selected for containers. However, a large number of small VMs leads to VM sprawl [51]. On the other hand, creating large VMs leads to unused VM resources [44]. ...
... VM sprawl [51] is the major reason for the low utilization of data centers, and the Just-Fit/FF rule can lead to it. In a data center where VM sprawl occurs, PMs are filled with a large number of small VMs and most of them are under-utilized. ...
Article
Containers are lightweight and provide the potential to reduce more energy consumption of data centers than Virtual Machines (VMs) in container-based clouds. The on-line resource allocation is the most common operation in clouds. However, the on-line Resource Allocation in Container-based clouds (RAC) is new and challenging because of its two-level architecture, i.e. the allocations of containers to VMs and the allocation of VMs to physical machines. These two allocations interact with each other, and hence cannot be made separately. Since on-line container allocation requires a real-time response, most current allocation techniques rely on heuristics (e.g. First Fit and Best Fit), which do not consider the comprehensive information such as workload patterns and VM types. As a result, resources are not used efficiently and the energy consumption is not sufficiently optimized. We first propose a novel model of the on-line RAC problem with the consideration of VM overheads, VM types and an affinity constraint. Then, we design a Cooperative Coevolution Genetic Programming (CCGP) hyper-heuristic approach to solve the RAC problem. The CCGP can learn the workload patterns and VM types from historical workload traces and generate allocation rules. The experiments show significant improvement in energy consumption compared to the state-of-the-art algorithms.
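The paper's CCGP approach learns allocation rules from workload traces; the First Fit baseline it contrasts with is simple enough to sketch. This is a single-resource illustration with invented capacities, not the paper's implementation:

```python
def first_fit(containers, vms):
    """Assign each container (a CPU demand) to the first VM with room.
    containers: list of CPU demands; vms: list of VM capacities.
    Returns (container_index, vm_index) pairs; vm_index is None if unplaced."""
    free = list(vms)  # remaining capacity per VM
    placement = []
    for i, demand in enumerate(containers):
        target = next((v for v, cap in enumerate(free) if cap >= demand), None)
        if target is not None:
            free[target] -= demand
        placement.append((i, target))
    return placement

print(first_fit([2, 4, 3], [4, 4, 4]))
# [(0, 0), (1, 1), (2, 2)] — container 2 (demand 3) skips VM 0, which has only 2 left
```

First Fit ignores workload patterns and VM types entirely, which is exactly the gap the learned allocation rules aim to close.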
... The research analysis of the subject has shown that the scientists' works on cloud computing in education can be divided into two approaches. The first one deals with cloud computing in general, namely, how it is built and which service models it can have (Sarathy, Narayan, Mikkilineni, 2010), how it works (Rayport & Heyward, 2009; Thomas, 2011; Palaniappan, 2014), which problems may appear when it is used in the classroom and which advantages it can have (Gill, 2006; Plummer, Cearley, Smith, 2008; Abdullahi, Salleh, Alwan, 2018). The first approach also examines the peculiarities of applying particular cloud computing services such as Google apps (Herrick, 2009), the multimedia tool "Prezi" (Shim & Lee, 2018), social networks (Haouta & Idelhadj, 2018), gamification websites (Khaleel, Wook, & Ashaari, 2018), etc. ...
Article
In this article the impact of cloud computing has been shown and an empirical analysis of the effectiveness of the Quizlet service for students has been conducted. The research purpose is to study the influence of cloud computing on professional foreign language learning for university students' vocabulary development. A range of research methods (theoretical, empirical, and statistical) has been used to reach the research purpose and justify the research findings. To check the effectiveness of applying the Quizlet service in foreign language learning, such empirical methods as written testing and observation were used, as well as a pedagogical experiment conducted with law students. The statistical method helped to evaluate the results of the pedagogical experiment. Numerous vocabulary flashcards for the Quizlet service were created by the authors to develop law students' legal vocabulary in English classes. Their activities and tasks include Flashcards (tasks: Match, Translate, Click card to see the definition), Learn (task: Match every term and definition correctly two times to finish), Write (task: Type the answer in English), Spell (task: Type what you hear), and Test (tasks: Written questions, Matching questions, Multiple choice questions, True/False questions). The results of the pre-test and final test in the experimental group proved the effectiveness of cloud computing for university students' vocabulary development.
... This resource utilization can be achieved by a better scheduling algorithm [6] in cloud computing. At any time, users can send requests for resources to the cloud provider [7,8], and these should be made available to run the applications. So, in this situation a Cloud Brokerage Service (CBS) performs the task of a mediator, which makes it easier to find the best resources for the users. ...
Article
This paper focuses on multi-criteria decision-making techniques (MCDMs), especially the analytic network process (ANP) algorithm, to design a model that minimizes the task scheduling cost during implementation using a queuing model in a cloud environment, and it also deals with minimizing the waiting time of the tasks. The simulated results of the algorithm give better outcomes than other existing algorithms by 15 percent.
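As background, the basic building block of ANP (shared with AHP) is deriving criteria weights from a pairwise-comparison matrix via its principal eigenvector. The matrix below is an invented toy over three criteria, not the paper's data:

```python
import numpy as np

# Toy Saaty-scale pairwise comparisons over (cost, waiting time, load);
# entry A[i][j] states how much more important criterion i is than j.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal eigenvector of A, normalized to sum to 1, gives the weights.
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()
print(w.round(3))  # ≈ [0.648 0.23  0.122]
```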
... If any changes in the control mechanisms are required at any instant of time, the control mechanism should be able to be changed without significantly altering any other components of the system [43]. The control mechanisms employed in the system cannot take static or rigid forms, as adaptive techniques have to be incorporated into the control mechanisms to make the system operable under any dynamic conditions [44]. ...
Article
Every year, natural disasters cause major loss of human life, damage to infrastructure and significant economic impact on the areas involved. Geospatial Scientists aim to help in mitigating or managing such hazards by computational modeling of these complex events, while Information Communication Technology (ICT) supports the execution of various models addressing different aspects of disaster management. The execution of natural hazard models using traditional ICT foundations is not possible in a timely manner due to the complex nature of the models, the need for large-scale computational resources as well as intensive data and concurrent-access requirements. Cloud Computing can address these challenges with near-unlimited capacity for computation, storage and networking, and the ability to offer natural hazard modeling systems as end services has now become more realistic than ever. However, researchers face several open challenges in adopting and utilizing Cloud Computing technologies during disasters. As such, this survey paper aggregates all these challenges, reflects on the current research trends and outlines a conceptual Cloud-based solution framework for more effective natural hazards modeling and management systems using Cloud infrastructure in conjunction with other technologies such as Internet of Things (IoT) networks, fog and edge computing. We draw a clear picture of the current research state in the area and suggest further research directions for future systems.
... Traditional cloud services have been mainly designed to achieve high throughput and small delay [1], [2]. However, these performance measures fail to capture the timeliness of the information from the application perspective, which is important for real-time cloud and IoT (Internet-of-Things) applications. ...
... Given that the service time distribution is shifted exponential, whose expression is given in (2), it is easy to show that for every job from vehicle $j$ the expected service time at VM $v$ is $\beta_{v,j} + \frac{1}{\alpha_{v,j}}$. Then, following [20], the expected service time at VM $v$ is given by ...
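For completeness, the stated expectation follows in one line from the shifted-exponential density; a short derivation in the snippet's notation (with $\alpha_{v,j}$ the rate and $\beta_{v,j}$ the shift) is:

```latex
% Mean of a shifted-exponential service time S = beta_{v,j} + Exp(alpha_{v,j})
\[
f_S(s) = \alpha_{v,j}\, e^{-\alpha_{v,j}(s-\beta_{v,j})}, \quad s \ge \beta_{v,j},
\qquad
\mathbb{E}[S] = \int_{\beta_{v,j}}^{\infty} s\, f_S(s)\, ds
             = \beta_{v,j} + \frac{1}{\alpha_{v,j}}.
\]
```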
Preprint
The demand for real-time cloud applications has seen unprecedented growth over the past decade. These applications require rapid data transfer and fast computations. This paper considers a scenario where multiple IoT devices update information on the cloud, and request a computation from the cloud at certain times. The time required to complete the request for computation includes the time to wait for computation to start on busy virtual machines, performing the computation, and waiting and service in the networking stage for delivering the output to the end user. In this context, the freshness of the information is an important concern and is different from the completion time. This paper proposes novel scheduling strategies for both the computation and networking stages. Based on these strategies, the age-of-information (AoI) metric and the completion time are characterized. A convex combination of the two metrics is optimized over the scheduling parameters. The problem is shown to be convex and thus can be solved optimally. Moreover, based on the offline policy, an online algorithm for job scheduling is developed. Numerical results demonstrate significant improvement as compared to the considered baselines.
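A sketch of the "convex combination of the two metrics" idea: with toy convex stand-ins for AoI and completion time (the quadratics below are invented, not the paper's expressions), the weighted objective has a unique global minimizer that a standard solver finds directly:

```python
from scipy.optimize import minimize_scalar

# Toy convex stand-ins for the two metrics as functions of a single
# scheduling parameter theta; both are illustrative placeholders.
aoi        = lambda theta: (theta - 2.0) ** 2 + 1.0
completion = lambda theta: (theta - 5.0) ** 2 + 3.0

w = 0.6  # weight of the convex combination
obj = lambda theta: w * aoi(theta) + (1 - w) * completion(theta)

res = minimize_scalar(obj)  # convex objective, so the minimum is global
print(round(res.x, 3))      # 3.2 = w*2 + (1-w)*5 for these quadratics
```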
The demand for real-time cloud applications has seen an unprecedented growth over the past decade. These applications require rapidly data transfer and fast computations. This paper considers a scenario where multiple IoT devices update information on the cloud, and request a computation from the cloud at certain times. The time required to complete the request for computation includes the time to wait for computation to start on busy virtual machines, performing the computation, waiting and service in the networking stage for delivering the output to the end user. In this context, the freshness of the information is an important concern and is different from the completion time. This paper proposes novel scheduling strategies for both computation and networking stages. Based on these strategies, the age-of-information (AoI) metric and the completion time are characterized. A convex combination of the two metrics is optimized over the scheduling parameters. The problem is shown to be convex and thus can be solved optimally. Moreover, based on the offline policy, an online algorithm for job scheduling is developed. Numerical results demonstrate significant improvement as compared to the considered baselines.