Conference Paper

C-Cloud: A Cost-Efficient Reliable Cloud of Surplus Computing Resources


Abstract

This paper presents C-CLOUD, a democratic cloud infrastructure for renting computing resources, including non-cloud resources (i.e., computing equipment that is not part of any cloud infrastructure, such as PCs, laptops, enterprise servers, and clusters). C-CLOUD enables an enormous pool of surplus computing resources, in the range of hundreds of millions of devices, to be rented out to cloud users. Such resource sharing allows resource owners to earn from idle resources and gives cloud users a cost-efficient alternative to large cloud providers. Compared to existing approaches to sharing surplus resources, C-CLOUD addresses two key challenges: ensuring Service Level Agreements (SLAs) and the reliability of reservations made over heterogeneous resources, and providing appropriate mechanisms to encourage resource sharing. In this context, C-CLOUD introduces a novel incentive mechanism that determines resource rents parametrically based on their reliability and capability.
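The abstract's incentive mechanism sets resource rents parametrically from reliability and capability, but the pricing function itself is not given here. The following is a minimal sketch of one such parametric rule, in which the baseline rate and the weights are purely assumed for illustration:

```python
# Hypothetical parametric rent rule. The actual C-CLOUD pricing function is not
# specified in this abstract; the baseline rate and weights below are assumptions.

def resource_rent(base_rate: float, reliability: float, capability: float,
                  w_rel: float = 0.6, w_cap: float = 0.4) -> float:
    """Hourly rent as a weighted function of reliability (0..1) and
    capability normalized against a reference resource (0..1)."""
    assert 0.0 <= reliability <= 1.0 and 0.0 <= capability <= 1.0
    return base_rate * (w_rel * reliability + w_cap * capability)

# Example: a fairly reliable laptop with modest capability
print(resource_rent(base_rate=0.10, reliability=0.9, capability=0.5))  # 0.074 per hour
```

A rule of this shape rewards owners who keep resources online (higher reliability) and who offer stronger hardware (higher capability), which is the incentive behaviour the abstract describes.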


... With the advent of the mobile Internet, the Internet of Things, and the big data era, the amount of data is increasing exponentially [1][2][3], and personal storage capacity cannot meet existing storage needs. Cloud storage is a new concept extended and developed from the concept of cloud computing. ...
Article
An increasing number of data owners (DOs) are opting to move their data to the cloud due to the advent of the cloud storage paradigm. DOs often adopt cloud storage data integrity verification in order to guarantee the integrity of the data they store in the cloud. In a pay-as-you-go cloud environment, DOs must pay additional costs to a third-party validator (TPA) for carrying out verification procedures, on top of the rates they must pay to cloud service providers. However, the TPA itself is not fully trustworthy in the integrity verification. To address the untrustworthiness of the TPA and achieve fairness in service payment, a data integrity verification method that supports privacy protection and fair payment is proposed. First, a new kind of data authentication structure, a hierarchy-based Merkle hash tree, is introduced to achieve data location integrity verification and verifiable dynamic data updating; second, a dynamic data integrity proof mechanism (NIDPDP) is introduced to provide data privacy protection and minimize communication overhead. To ensure that all parties adhere to the enforced rules, smart contracts (SCs) together with blockchain technology are used to achieve fair service payment among the DO, cloud storage servers (CSS), and the TPA. Performance analysis and experiments demonstrate that the approach can effectively protect user data privacy, achieve fair payment, and incur low computational and communication overhead.
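The verification scheme above rests on a Merkle hash tree variant. As a minimal, generic sketch (not the hierarchy-based construction the article proposes), the root over a set of data blocks can be computed like this:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Plain Merkle root over data blocks; the article's hierarchy-based tree
    additionally encodes block position for location integrity checks."""
    level = [_h(b) for b in blocks] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

print(merkle_root([b"block-0", b"block-1", b"block-2"]).hex())
```

A verifier holding only the root can check any block given its sibling hashes, which is what makes challenge-response integrity proofs cheap in communication.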
Chapter
Many technologies are being considered to enhance the productivity of industries, such as IoT-enabled smart services, cloud computing empowered on-premises services, and machine learning assisted techniques for predicting future risk management. These technologies are producing insights that add value to product delivery. Continuous customer service is also achieved through these technologies, bridging the gap between client expectations and service providers. The Industry 4.0 evolution aims to create and synchronize interconnections between entities such as people, machines, and programmable devices to enable data-driven decision making. Various automation and visualization software tools are used to identify the pain points of the industry with respect to its consumers. Cloud computing is used to provide Internet-based service access to globally distributed consumers. This cloud technique is practised for the following reasons: 1. development of new applications and services, with multiple language support; 2. storage, backup, and recovery for all types of data; 3. hosting applications, including file hosting and app deployment; 4. prompt launch of software on a subscription basis; 5. multimedia support (audio and video). The main advantages of using these cloud-assisted services are improved collaboration, ease of access, unbounded storage capacity, low-cost maintenance, and security mechanisms. In this book chapter, the cloud computing architecture, technology solutions, deployment models, working principles, and underlying virtualization concepts, together with industry practices, are discussed and analysed in detail. A research study and report on Industry 4.0 standards and their advancements are also incorporated.
Chapter
Sharing information with cloud computing allows many users to communicate and accumulate data easily, increasing the capacity for work in collaborative scenarios and supporting a wide range of applications. However, maintaining the security of data shared within a group and properly controlling data leased in a group context remain significant challenges. C-Cloud enables Internet customers to rent out surplus system resources worth millions of dollars. This kind of resource sharing allows resource owners to benefit from unused resources while also giving cloud clients a more cost-effective option than large cloud providers. Scientific computing applications, especially molecular modelling calculations, generate a wealth of information, and such infrastructure together with computer vision methodologies can aid, expedite, and enhance them. Cloud computing platforms are gaining popularity in scientific computing because they provide "unlimited" processing power, easier programming and deployment techniques, and access to compute accelerators. Renewable energy is also an alternative to power derived from fossil fuels in the short and medium term; however, because renewable energy sources are variable in their availability, it is challenging to schedule work automatically and effectively within renewable energy constraints and deadlines. The goal of this research is to develop a novel scheduling scheme based on supervised neural learning that automatically applies scheduling techniques such as workload shifting and cloud bursting in a public cloud. Our main goals are to maximize the utilization of renewable energy while avoiding missed deadlines. Keywords: Cloud computing, Data sharing, Machine learning, Efficiency, Renewable energy
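The scheduling idea in this abstract is to shift work toward periods of renewable-energy availability without missing deadlines. The greedy sketch below illustrates that intent only; the function names, the hour-level granularity, and the earliest-deadline-first ordering are assumptions, not the chapter's learning-based scheme:

```python
# Greedy illustration: place each job in the earliest feasible hours, preferring
# hours forecast to be powered by renewable energy. One job-hour per time slot.

def schedule(jobs, green_hours, horizon):
    """jobs: list of (name, duration_h, deadline_h); green_hours: set of hours
    with forecast renewable supply. Returns {name: [assigned hours]}."""
    free = list(range(horizon))
    plan = {}
    for name, duration, deadline in sorted(jobs, key=lambda j: j[2]):  # earliest deadline first
        candidates = sorted((h for h in free if h < deadline),
                            key=lambda h: (h not in green_hours, h))   # green hours first
        chosen = candidates[:duration]
        if len(chosen) < duration:
            raise RuntimeError(f"{name} cannot meet its deadline")
        plan[name] = sorted(chosen)
        for h in chosen:
            free.remove(h)
    return plan

print(schedule([("a", 2, 6), ("b", 3, 10)], green_hours={4, 5, 6, 7}, horizon=12))
```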
Chapter
This paper describes the working principle of a hybrid heat pump system used for domestic water heating, along with flow-rate optimization of the condenser-cum-storage tank. A performance analysis of the experimental heat pump water heating system, aimed at minimizing domestic energy consumption, is also reported. A serpentine tube arrangement in the flat plate collector is used in this study to collect solar energy and reduce the compressor work. The objective of this paper is to optimize the hot water draw-off and hot water output so as to maintain a constant water temperature in the condenser-cum-storage tank for the hot water supply during nighttime. The current study performs flow optimization based on an experimental examination of a solar-assisted heat pump water heater. The results clearly show the system performs best when compared with an electric geyser. Experimental results also show that the system performance and COP are high in the sunrise period. Keywords: Refrigerant, Hot water, Solar water heating system
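For reference, the coefficient of performance (COP) reported above is the standard ratio of useful heat delivered at the condenser to the work drawn by the compressor (a textbook definition, not a formula taken from this chapter):

\mathrm{COP} = \frac{Q_{\text{heat delivered}}}{W_{\text{compressor}}}

Higher solar gain at the collector reduces the compressor work needed for the same heat output, which is consistent with the high COP reported in the sunrise period.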
Article
Cloud computing is the on-demand availability of computer system resources, particularly data storage and computing power, without direct active management by the user. Here, an online organ donation system is developed with the help of the aforementioned technologies. The main aim of the paper is to ensure that the organs of people who have come forward to donate reach the respective individuals in need of the particular organ. Since everything is stored in the cloud, anybody can access it at any time, which in turn puts the data at risk, so secure sharing of donor details is necessary. Encryption is done at two levels to provide security: one while the data is entered, and the other by a third party providing proxy re-encryption. Whenever a brain-dead patient is in the hospital, after thorough verification for any complications, the deciding parameters, such as the organs that can be transplanted, blood group, HIV status, and location, are compared with the cloud database using query filtering, which yields the nearest available organ donor meeting the requirements. The performance enhancement of cloud data using encryption schemes has high potential. Keywords: cloud computing; proxy re-encryption; query filter.
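The matching step described above is essentially a query filter over donor records. The plaintext sketch below shows only that filtering logic; the field names and the distance tie-break are assumptions, and the two-level encryption and proxy re-encryption from the paper are deliberately omitted:

```python
# Illustrative query filter: return the nearest donor record matching the organ,
# blood group, and HIV-status requirements. Encryption layers are not shown.

def match_donor(donors, organ, blood_group, location):
    candidates = [
        d for d in donors
        if organ in d["organs"] and d["blood_group"] == blood_group and not d["hiv_positive"]
    ]
    def distance(d):
        (x1, y1), (x2, y2) = d["location"], location
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    return min(candidates, key=distance, default=None)

donors = [
    {"organs": {"kidney"}, "blood_group": "O+", "hiv_positive": False, "location": (3, 4)},
    {"organs": {"kidney", "liver"}, "blood_group": "O+", "hiv_positive": False, "location": (1, 1)},
]
print(match_donor(donors, "kidney", "O+", (0, 0)))  # nearest compatible donor
```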
Article
Ever since the very first cloud service introduced by Amazon Web Services in 2006, the term 'cloud computing' has been a buzzword throughout the field of Information Technology. Not only the IT field but all technology-related fields, and even business and marketing, have begun to utilize cloud computing technology. The technology is so versatile and powerful that, apart from running software on a cloud platform and storing data, a whole operating system can be run in the cloud. In day-to-day life we use more and more cloud-based applications involving healthcare, banking, marketing, education, web search engines, etc. In this paper we focus on healthcare services because they play a major role in daily operations such as medication, regular health check-ups, and emergency-based services; cutting-edge technology can be especially useful in emergency situations.
Conference Paper
Commoditizing idle computing resources by sharing them in a marketplace has gained increased attention in recent years as a potential disruption to modern cloud-based service delivery. Recent initiatives have focused on scavenging for idle resources and providing suitable incentives accordingly. A recent work on resource marketplaces has proposed a Marketplace for Compute Infrastructure that not only allows resource owners to earn incentives by sharing resources to the marketplace but also ensures Service Level Agreements (SLAs), such as performance guarantees, for the computing jobs to be run on the shared resources. This paper proposes a Trust for Resource Marketplace (TRM) system that computes the trust level among the entities in a resource marketplace (RM) by incorporating key aspects of the interactions among these entities. In particular, an RM has three kinds of entities: users (with task requests), resources (on which tasks are executed), and resource owners. Over these entities, the system allows two kinds of trust queries: (i) for a user, a trust indexing of resources or resource owners and (ii) for a resource owner, a trust indexing of users. This is achieved by a novel interaction graph modelling followed by spectral analysis of this graph, thereby capturing both direct and indirect relationships among RM entities while deriving trust indexes. Experiments with a combination of real and synthesized traces on the TRM implementation show that the proposed trust computation can capture indirect relationships among entities and is robust against limited changes in topology.
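TRM derives trust indexes from spectral analysis of an interaction graph. The sketch below shows only the general flavour of such an approach, a power-iteration (eigenvector-centrality style) score over an interaction matrix; the weighting scheme and entity model are assumptions, not the paper's exact formulation:

```python
import numpy as np

def trust_scores(interactions: np.ndarray, iters: int = 100) -> np.ndarray:
    """interactions[i, j]: non-negative weight of interactions from entity i to j.
    Returns one trust index per entity via power iteration on the column-normalized
    matrix, so indirect relationships contribute to the scores. Illustrative only."""
    col_sums = interactions.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    m = interactions / col_sums                    # column-stochastic transition matrix
    scores = np.full(m.shape[0], 1.0 / m.shape[0])
    for _ in range(iters):
        scores = m @ scores
        scores /= scores.sum()
    return scores

interactions = np.array([[0, 3, 1],
                         [1, 0, 2],
                         [2, 1, 0]], dtype=float)
print(trust_scores(interactions))
```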
Conference Paper
There has been increasing popularity of applications deployed on mobile devices such as smartphones and tablets. Many of them, e.g., YouTube [1], Pandora [2], Facebook [3], etc., require access to the Internet for content sharing while running, and they contribute a huge amount of data traffic sent through cellular networks [9], causing cellular networks to be overloaded. Moreover, it is predicted that mobile data traffic will increase very fast in the next few years [9]. As a result, many cellular network providers are putting a lot of effort into seeking solutions for improving their network capacity, e.g., upgrading their infrastructure, and are deciding to move away from unlimited data plans to less flexible charging models [4]. In this paper, we address the problem of efficient rich content sharing from/to mobile devices by proposing practical approaches that provide high delivery performance, reduce cellular data traffic, and relieve the pressure of cellular networks' heavy load on mobile users and cellular network service providers. Our approaches [13--16] all share a common technique: using complementary networks, such as WiFi, WiFi ad hoc, or Bluetooth, available on most modern mobile devices, to offload data traffic previously planned to be transmitted over cellular networks. For each proposed approach, we prove its feasibility by testing it on an Android-based testbed and evaluate its performance and scalability using simulations.
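The common technique in these approaches is offloading traffic from the cellular interface onto a complementary network whenever one is reachable. A trivially simplified sketch of that decision is below; the interface preference order and the defer option are assumptions for illustration:

```python
# Illustration: prefer a complementary network over cellular when available,
# and defer delay-tolerant transfers when only cellular is reachable.

def pick_interface(available, delay_tolerant):
    for iface in ("wifi", "wifi_adhoc", "bluetooth"):   # assumed offload preference order
        if iface in available:
            return iface
    return "defer" if delay_tolerant else "cellular"

print(pick_interface({"bluetooth"}, delay_tolerant=False))  # -> bluetooth
print(pick_interface(set(), delay_tolerant=True))           # -> defer
```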
Conference Paper
Staggering growth in the number of mobile devices and the amount of mobile Internet usage has caused network providers to move away from unlimited data plans to less flexible charging models. As a result, users are required to pay more for short accesses or to under-utilize a longer-term data plan. In this paper, we propose CrowdMAC, a crowdsourcing approach in which mobile users create a marketplace for mobile Internet access. Mobile users with residual capacity in their data plans share their access with other nearby mobile users for a small fee. CrowdMAC is implemented as a middleware framework with incentive-based mechanisms for admission control, service selection, and mobility management. CrowdMAC is implemented and evaluated on a testbed of Android phones and in the well-known QualNet simulator. Our evaluation results show that CrowdMAC: (i) effectively exercises the trade-off between revenue and transfer delay, (ii) adequately satisfies user-specified (delay) quality levels, and (iii) properly adapts to device mobility and achieves performance very close to the ideal case (upper bound).
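CrowdMAC's service selection trades the fee a sharing device charges against the transfer delay the user will experience. The following sketch shows one plausible scoring rule for that trade-off; the weights and offer format are assumptions rather than CrowdMAC's actual mechanism:

```python
# Illustrative service selection: pick the sharing device whose offer best balances
# fee against expected transfer delay, subject to the user's delay bound.

def select_gateway(offers, max_delay_s, fee_weight=1.0, delay_weight=0.1):
    """offers: list of (name, fee_usd, expected_delay_s)."""
    feasible = [o for o in offers if o[2] <= max_delay_s]
    if not feasible:
        return None                                   # admission control rejects the request
    return min(feasible, key=lambda o: fee_weight * o[1] + delay_weight * o[2])

offers = [("phone-A", 0.05, 12.0), ("phone-B", 0.02, 40.0), ("phone-C", 0.08, 5.0)]
print(select_gateway(offers, max_delay_s=30.0))  # -> ('phone-C', 0.08, 5.0)
```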
Article
In volunteer-based computational grid computing, one big challenge for effective job allocation is resource availability. As resources in this environment are volatile, matching guest jobs to suitable resources is very important. To improve scheduling, especially in terms of avoiding job failures due to resource unavailability, we propose a new job-scheduling algorithm called first-come-first-served plus predictor (FCFSPP). This scheduling algorithm is based on an existing resource availability prediction method that anticipates the future availability of resources to help make reliable job allocation decisions. According to the simulation results, FCFSPP not only reduces the number of job failures but also maintains acceptable job throughput in volatile volunteer environments by providing reliable job allocation decisions.
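FCFSPP keeps the first-come-first-served order but consults an availability predictor before committing a job to a resource. A minimal sketch of that flow is shown below; the predictor is a stub and the 0.5 reliability threshold is an assumption, since the paper relies on an existing prediction method:

```python
from collections import deque

def fcfspp(jobs, resources, predict_available):
    """jobs: deque of (job_id, runtime_h); resources: list of resource ids.
    predict_available(resource, runtime_h) -> probability the resource stays up
    for the whole runtime. Illustrative sketch of FCFS plus predictor."""
    assignments = {}
    queue = deque(jobs)
    while queue and resources:
        job_id, runtime = queue.popleft()                 # first come, first served
        best = max(resources, key=lambda r: predict_available(r, runtime))
        if predict_available(best, runtime) < 0.5:        # assumed reliability threshold
            queue.append((job_id, runtime))               # defer rather than risk a failure
            break
        assignments[job_id] = best
        resources.remove(best)
    return assignments

print(fcfspp(deque([("j1", 2), ("j2", 5)]), ["r1", "r2"],
             lambda r, t: 0.9 if r == "r1" else 0.6))     # {'j1': 'r1', 'j2': 'r2'}
```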
Chapter
This chapter reviews dynamism in desktop Grid computing and explains an advanced stochastic scheduling scheme with the Markov Job Scheduler based on Availability (MJSA) in this environment. In recent years, Grid computing [1] has received considerable interest in academia and enterprise, and numerous attempts have been made to organize cost-efficient large-scale Grid computing. Desktop Grid computing [13,19,2] is a more flexible paradigm used to achieve high performance and high throughput with desktop resources that are less stable and offer lower performance than traditional Grid resources. It comprises a diverse set of desktops interconnected through various network forms ranging from Local Area Networks (LANs) to the Internet. Desktop Grid systems have played a leading role in harvesting large-scale aggregated computing power from the edge of the Internet at lower cost. The main goals of such systems are to accomplish high throughput and performance by mobilizing the potentially colossal computational resources of idle desktops. However, since a desktop peer is a fluctuating resource that connects to the system, performs computations, and disconnects from the network at will, desktop volatility makes the system unstable and unreliable. To develop a reliable desktop Grid computing system, a scheduling scheme must consider the dynamic nature (i.e., volatility) of volunteers, and a resource selection scheme should adapt to such a dynamic environment, as selection becomes complicated due to the uncertain behaviour of desktops. This chapter demonstrates desktop state-change modelling and an advanced resource selection scheme, Selection of Credible Resource with Elastic Window (SCREW), to choose reliable resources in dynamic computational desktop Grid environments. Markov modelling of the dynamic state transitions provides an understanding of the pattern of desktop behaviour, while SCREW selects qualified desktops that satisfy the time requirements to complete given workloads and adapts to the needs of the user and the application on the fly.
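The Markov modelling mentioned above treats a desktop's availability as state transitions over time slots. The sketch below illustrates only the prediction step a SCREW-style selector could use; the two-state model and the transition probabilities are assumptions, not values from the chapter:

```python
import numpy as np

# Hypothetical two-state Markov model of desktop availability.
# States: 0 = available, 1 = unavailable; entries are assumed transition probabilities.
P = np.array([[0.9, 0.1],    # available -> available / unavailable
              [0.4, 0.6]])   # unavailable -> available / unavailable

def prob_available(steps: int, start_state: int = 0) -> float:
    """Probability the desktop is in the available state after `steps` time slots."""
    dist = np.zeros(2)
    dist[start_state] = 1.0
    dist = dist @ np.linalg.matrix_power(P, steps)
    return float(dist[0])

# A SCREW-style selector would keep only desktops whose predicted availability over
# the job's elastic window exceeds a chosen threshold.
print(prob_available(steps=3))   # ~0.83 with the assumed probabilities
```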
Conference Paper
The computing resources in a volunteer computing system are highly diverse in terms of software and hardware type, speed, availability, reliability, network connectivity, and other properties. Similarly, the jobs to be performed may vary widely in terms of their hardware and completion time requirements. To maximize system performance, the system's job selection policy must accommodate both types of diversity. In this paper we discuss diversity in the context of World Community Grid (a large volunteer computing project sponsored by IBM) and BOINC, the middleware system on which it is based. We then discuss the techniques used in the BOINC scheduler to efficiently match diverse jobs to diverse hosts.
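Matching diverse jobs to diverse hosts reduces, at its simplest, to filtering hosts by a job's hard requirements and then ranking the survivors. The sketch below captures that idea only; the field names and the speed-based ranking are assumptions, not the BOINC scheduler's actual logic:

```python
# Illustrative matchmaking: keep hosts that satisfy the job's platform, memory, and
# deadline requirements, then prefer the fastest remaining host.

def match_job(job, hosts):
    feasible = [
        h for h in hosts
        if job["platform"] in h["platforms"]
        and h["ram_gb"] >= job["min_ram_gb"]
        and job["flops_required"] / h["flops_per_s"] <= job["deadline_s"]
    ]
    return max(feasible, key=lambda h: h["flops_per_s"], default=None)

hosts = [
    {"platforms": {"linux-x86_64"}, "ram_gb": 8, "flops_per_s": 2e9},
    {"platforms": {"windows-x86_64"}, "ram_gb": 16, "flops_per_s": 4e9},
]
job = {"platform": "linux-x86_64", "min_ram_gb": 4,
       "flops_required": 1e12, "deadline_s": 3600}
print(match_job(job, hosts))  # -> the Linux host
```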