
Abstract

Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when service demand rises. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently in its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area.
Keywords: Cloud computing, Data centers, Virtualization
... Zhang et al. [11] analyzed best practices for using cloud computing in universities and, drawing on personal experience with applying information and communication technologies in the university educational process, identified the main CAPs for supporting the university's scientific and educational activities. A number of researchers [12], [13] note that predicting the scale of use of server computing resources in the cloud infrastructure of an organization, such as a university, is an important and relevant task. ...
... The latter is true based on the consideration that, during virtualization, it is not recommended to allocate more than 50% of a server's processor and RAM resources to VMs. The lower estimate of the optimal number of servers for a private university cloud can be expressed as (11), and the average lower bound for the optimal number of servers in a private university cloud is given by (12). ...
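As a rough illustration of such a lower-bound estimate, the sketch below sizes a virtualization cluster under the 50% headroom guideline quoted above; the function, its parameters, and the example workload are illustrative assumptions and do not reproduce formulas (11) or (12) from the cited work.

import math

def min_servers(total_vcpu_demand, total_ram_demand_gb,
                server_vcpus, server_ram_gb, usable_fraction=0.5):
    """Rough lower bound on virtualization hosts, assuming at most
    `usable_fraction` of each server's CPU and RAM may be given to VMs.
    Illustrative only; not the estimators (11)-(12) of the cited paper."""
    usable_vcpus = server_vcpus * usable_fraction
    usable_ram = server_ram_gb * usable_fraction
    by_cpu = math.ceil(total_vcpu_demand / usable_vcpus)
    by_ram = math.ceil(total_ram_demand_gb / usable_ram)
    return max(by_cpu, by_ram)

# Example: 300 student VMs, each 2 vCPU / 4 GB, on 64-core / 512 GB hosts
print(min_servers(300 * 2, 300 * 4, server_vcpus=64, server_ram_gb=512))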
Article
Full-text available
When designing a virtual desktop infrastructure (VDI) for a university or inter-university cloud, developers must overcome many complex technical challenges. One of these tasks is estimating the required number of virtualization cluster nodes. Such nodes host virtual machines for users, which students and teachers can use to complete academic assignments or research work. Another task that arises in the VDI design process is algorithmizing the placement of virtual machines in a computer network. Optimal placement of virtual machines reduces the number of compute nodes without affecting functionality, which ultimately lowers the cost of the solution, an important consideration for educational institutions. The article proposes a model for estimating the required number of virtualization cluster nodes. The proposed model is based on a combined approach that jointly solves the optimal packing problem and finds the configuration of server platforms for a private university cloud using a genetic algorithm. The model introduced in this research is universal: it can be used in the design of university cloud systems for different purposes, for example educational systems or inter-university scientific laboratory management systems.
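As a point of reference for the packing problem described above, here is a minimal first-fit-decreasing sketch that packs VMs onto identical hosts by RAM demand. It is only a baseline heuristic, not the cited model, which couples packing with a genetic algorithm and also selects the server configuration; the sizes used below are made up.

def pack_vms_first_fit_decreasing(vm_ram_gb, host_ram_gb):
    """Greedy first-fit-decreasing packing of VMs onto identical hosts.
    Baseline heuristic only; the cited article additionally optimizes the
    host configuration with a genetic algorithm."""
    hosts = []  # each host is a list of VM sizes already placed on it
    for vm in sorted(vm_ram_gb, reverse=True):
        for host in hosts:
            if sum(host) + vm <= host_ram_gb:
                host.append(vm)
                break
        else:
            hosts.append([vm])
    return hosts

vms = [8, 4, 4, 2, 2, 2, 16, 8]   # VM RAM demands in GB (illustrative)
print(len(pack_vms_first_fit_decreasing(vms, host_ram_gb=32)))  # hosts needed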
... R. Buyya et al. [6] offer a more detailed view with 19 characteristics, such as scalability, security, privacy, resource allocation, and failure management, cited 8,802 times. Q. Zhang et al. [22] emphasize multi-tenancy, shared resource pooling, geo-distribution, and dynamic resource provisioning, receiving 21,755 citations. L.M. Vaquero et al. [20] describe ten characteristics, focusing on user-friendliness, virtualization, scalability, and resource variety, accumulating 4,888 citations. ...
Conference Paper
Full-text available
Cloud computing (CC) has revolutionized the way data is stored, processed, and managed, with every aspect of cloud services fundamentally revolving around data. However, there remains a gap in understanding how data transitions through different states within the cloud, particularly concerning security, privacy, and efficiency. This study addresses the problem of developing a unified and comprehensive Data Lifecycle (DLC) model tailored to cloud environments. By synthesizing existing models and identifying key shortcomings, this research proposes a comprehensive data lifecycle framework that integrates essential cloud characteristics. The model is designed to address real-world challenges such as enhancing data security across lifecycle phases, ensuring privacy during data transfers, and optimizing resources during storage and usage. The results highlight how this model not only improves the understanding of data flow in the cloud but also provides a foundation for implementing targeted solutions for specific data phases. This approach enhances overall system efficiency, reduces time and energy consumption, and offers a more secure and privacy-focused cloud computing environment.
... CI/CD ensures that code changes undergo testing, integration, and deployment across different cloud platforms. Organizations can therefore minimize human error, deploy more frequently, and balance multi-cloud production environments, because these control procedures are carried out fully by automated systems [6]. ...
Article
This paper summarises the challenges, strategies, and techniques involved in using Kubernetes effectively for multi-cloud deployments. Among these challenges are the lack of coordination between different systems and domains, security concerns, and cost; the paper discusses where coordination is most appropriate and how different kinds of automation can most usefully be employed. Performance optimization techniques, such as resource management and load distribution, are also considered.
Keywords: Multiple Cloud Operating Model, Kubernetes Virtualization, Resource Optimization
... Takabi et al. [5] and Yu et al. [6] discussed the emerging security challenges in the cloud. Zhang et al. [7] presented a brief study of the security challenges in various types of clouds. Modi et al. [8] analyzed the security pitfalls and solutions at various cloud layers. ...
Conference Paper
Full-text available
With the advent of cloud computing and the rapid growth of Internet and mobile technologies, there has been an exponential growth in ubiquitous users who access cloud servers for critical data and resources over an insecure communication channel, the Internet. Remote cloud servers need to authenticate remote users upfront before offering data storage, scheduling, and processing services. Hence, secure authentication or verification protocols have been designed to assure the authenticity of the remote user and the reliability of the outsourced data. In 2011, Hao et al. from SUNY Buffalo proposed a time-bound ticket-based mutual verification scheme and claimed that their proposal is completely resistant to major cryptographic attacks. In this paper, we show that Hao et al.'s scheme is susceptible to all major cryptographic vulnerabilities, such as the user impersonation attack and the server masquerade attack.
Article
Full-text available
This research paper explores strategies and solutions for optimizing scalability and performance in cloud services. It examines various aspects of cloud architecture, scalability techniques, performance optimization strategies, and advanced technologies. The study delves into vertical and horizontal scaling, auto-scaling techniques, load balancing, caching mechanisms, and database optimization. Additionally, it investigates the role of containerization, serverless computing, and edge computing in enhancing cloud performance. Security considerations, monitoring tools, cost optimization strategies, and future trends are also discussed. The paper aims to provide a comprehensive overview of the challenges and solutions in cloud service optimization, offering valuable insights for cloud service providers and researchers in the field.
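As a small illustration of the auto-scaling techniques such surveys cover, the following sketch implements a simple threshold-based horizontal scaling rule; the thresholds, replica limits, and function name are arbitrary assumptions and are not drawn from the paper.

def scale_decision(cpu_utilization, current_replicas,
                   scale_out_at=0.75, scale_in_at=0.30,
                   min_replicas=2, max_replicas=20):
    """Threshold-based horizontal auto-scaling rule (illustrative only).
    Returns the desired replica count given average CPU utilization."""
    if cpu_utilization > scale_out_at and current_replicas < max_replicas:
        return current_replicas + 1
    if cpu_utilization < scale_in_at and current_replicas > min_replicas:
        return current_replicas - 1
    return current_replicas

print(scale_decision(0.82, current_replicas=4))  # -> 5 (scale out)
print(scale_decision(0.20, current_replicas=4))  # -> 3 (scale in)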
Article
Full-text available
Enterprise application performance is critical to business success because these systems support decisive operational functions and decision processes. Cloud computing offers an innovative management solution that provides adaptable systems, scalable capacity, and affordable operating costs for enterprise applications, but realizing these benefits requires proper management of cloud resources. The paper evaluates significant cloud resource management approaches, including auto-scaling, load balancing, resource optimization, and AI techniques for handling performance issues, and presents best practices that help achieve top application performance, reduced costs, and reliable operation. It also examines emerging cloud resource management patterns, including edge computing, serverless architecture, and sustainable cloud environments, as directions for future cloud resource management frameworks. The adopted strategies shield applications from interruptions while creating the adaptability and efficiency enterprises need to meet their upcoming operational requirements.
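To complement the auto-scaling sketch above with the load-balancing side of resource management, here is a minimal round-robin balancer over a static backend pool; the backend addresses are placeholders, and real cloud load balancers add health checks, weighting, and connection draining.

import itertools

class RoundRobinBalancer:
    """Minimal round-robin load balancer over a static backend pool
    (illustrative sketch, not taken from the cited paper)."""
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def next_backend(self):
        # Hand out backends in a fixed rotating order.
        return next(self._cycle)

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print([lb.next_backend() for _ in range(5)])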
Article
Cloud computing is a transformative paradigm that enables the distribution of processing power, application execution, and storage across networks of remote computer systems. This model allows for the flexible allocation and release of IT resources over the internet, offering an affordable solution for both businesses and individuals. Through cloud services, users can access a variety of offerings, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Software as a Service (SaaS), and Desktop as a Service (DaaS), with pricing based on actual usage. In an increasingly competitive market with diverse service options, selecting a long-term cloud provider can be challenging. Dominant providers like Microsoft Azure and Google Cloud lead this market. This paper provides an in-depth evaluation of the image processing services offered by these providers, focusing on Azure Custom Vision, Azure Computer Vision, Azure Cognitive Services, Google Cloud Vision API, and AutoML Vision. The analysis explores the performance and capabilities of these services, emphasizing their strengths and leadership in cloud technology. The primary goal of this study is to offer a comparative analysis of Azure and Google Cloud, helping organizations and users make informed decisions that align with their long-term objectives. In addition, the paper examines the security measures implemented for Integration Platform as a Service (iPaaS) on both platforms, providing a detailed review of their security features and protective mechanisms. The study also highlights key parameters such as performance, scalability, usability, cost, and security to assist organizations in choosing the most appropriate platform for their specific requirements. Case studies and emerging trends in cloud-based image processing are also covered.
Article
Cloud computing is a technological solution that allows managing large volumes of data, standing out for its scalability, flexibility and ability to reduce costs. This study aimed to analyze its impact on large-scale data management through a literature review of recent research in the Scopus database, using keywords such as “cloud AND computing, impact, management” in a time range from 2019 to 2023, obtaining 1421 relevant papers. The results identified three key benefits: the ability to scale resources on demand, optimizing operational costs, and promoting real-time global collaboration. However, significant challenges were also highlighted, such as security and privacy concerns, dependency on external vendors, and connectivity requirements. The discussion underscores that while cloud computing represents a powerful tool for digital transformation, its adoption requires robust strategies to address the associated risks. In conclusion, this technology is positioned as a crucial enabler for organizational innovation and competitiveness, provided that proactive approaches are adopted to mitigate its inherent challenges.
Thesis
Full-text available
Server virtualization and container orchestration are fundamental for creating efficient and scalable IT infrastructures. This Master’s Thesis (TFM) focuses on the creation of a virtual infrastructure applied to an online sales environment. For this, a Dell R710 server has been used, configuring several virtual machines. The project is not limited to virtualization. A Docker Swarm has also been developed to efficiently manage and orchestrate containers, integrating GlusterFS to optimize this process. Throughout the work, essential tools such as Portainer, Authentik, Redis, Postgres, Poste.io, MySQL, WordPress, Nginx Proxy Manager, and PhpMyAdmin have been used, each contributing robustness and functionality to the system. One of the most interesting challenges was the configuration of redirections with Nginx, allowing the management of domains and subdomains, and connecting services hosted on other physical machines within the network. To enhance security and improve DNS management, Cloudflare was integrated between the DNS and the server. Additionally, Cockpit was installed on the machines with Ubuntu to provide a user-friendly web management interface. The practical application of this infrastructure is materialized in the creation of an online sales environment, which includes email services and redirection to external providers, thus demonstrating its applicability in the real world. This TFM has been an enriching experience, combining specific technologies and tools to build a complete and functional operating system capable of supporting an e-commerce environment. This demonstrates the versatility and potential of virtualization and container orchestration.
Article
A fundamental challenge in data center networking is how to efficiently interconnect an exponentially increasing number of servers. This paper presents DCell, a novel network structure that has many desirable features for data center networking. DCell is a recursively defined structure, in which a high-level DCell is constructed from many low-level DCells and DCells at the same level are fully connected with one another. DCell scales doubly exponentially as the node degree increases. DCell is fault tolerant since it does not have a single point of failure, and its distributed fault-tolerant routing protocol performs near shortest-path routing even in the presence of severe link or node failures. DCell also provides higher network capacity than the traditional tree-based structure for various types of services. Furthermore, DCell can be incrementally expanded, and a partial DCell provides the same appealing features. Results from theoretical analysis, simulations, and experiments show that DCell is a viable interconnection structure for data centers.
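To make the doubly exponential scaling concrete, the sketch below computes how many servers a level-k DCell contains, assuming the recurrence t_k = t_{k-1} * (t_{k-1} + 1) that follows from building each level-k DCell out of t_{k-1} + 1 fully connected level-(k-1) DCells; the function name and the example switch size are illustrative, not taken verbatim from the paper.

def dcell_servers(n, k):
    """Servers in a level-k DCell whose level-0 cells hold n servers each.
    Assumed recurrence: t_k = t_{k-1} * (t_{k-1} + 1), i.e. a level-k DCell
    is built from (t_{k-1} + 1) fully connected level-(k-1) DCells."""
    t = n
    for _ in range(k):
        t = t * (t + 1)
    return t

# With 4-port mini-switches at level 0, the structure grows very quickly:
for k in range(4):
    print(k, dcell_servers(4, k))   # 4, 20, 420, 176820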