Conference Paper

Fuzzy-GRA Trust Model for Cloud Risk Management

... Further, the authors computed the individual trust attributes and proposed a machine learning-based algorithm to categorize the extracted trust features and merge them to generate the final decision. In another recent work, Razaque et al. [21] proposed a trust-based model for risk management in the cloud using fuzzy mathematics and gray relational theory. ...
Article
In this paper, we propose a trustworthy service provisioning scheme for Safety-as-a-Service (Safe-aaS) infrastructure in IoT-based intelligent transport systems. Typically, a Safe-aaS infrastructure provides customized safety-related decisions dynamically to multiple end-users, founded on the concept of decision virtualization. We consider road transportation as the application environment of Safe-aaS to generate trustworthy decisions. The efficiency and accuracy of the generated decisions depend on the security, privacy, and trustworthiness of the participating sensor nodes and of the route through which data travel. We propose a trust evaluation model to compute the trustworthiness of the data generated from these nodes. Further, we consider direct and indirect trust for each of the sensor nodes and update their trust measures at regular intervals. Based on these measures, we evaluate the reputation of each piece of data collected from the network. We formulate an integer linear programming (ILP) model to select the optimal data for decision-making, mitigating the effects of illegitimate sensor nodes. Further, we prove that the ILP is NP-hard and use a dynamic programming approach to solve it. Experimental results show that, compared to the benchmark schemes, our proposed trust evaluation model achieves a more than 8% higher attack detection rate and a 13% lower false detection rate in a network with 50% malicious nodes. The proposed trustworthy data selection algorithm also outperforms different greedy solutions.
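The direct/indirect trust combination the abstract describes can be sketched as a simple weighted update; the weight alpha, the averaging of neighbour reports, and the example values are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: combining direct observations with indirect (neighbour-reported)
# trust into one node score. alpha and the update rule are assumed, not the
# paper's formulation.

def indirect_trust(reports):
    """Average the trust reports received from neighbouring nodes."""
    return sum(reports) / len(reports) if reports else 0.0

def update_trust(direct, indirect, alpha=0.7):
    """Weighted combination of direct and indirect trust, in [0, 1]."""
    return alpha * direct + (1 - alpha) * indirect

# A sensor node observed directly at 0.9, with neighbour reports 0.6 and 0.8:
score = update_trust(0.9, indirect_trust([0.6, 0.8]))
```

Periodic re-evaluation would simply re-run `update_trust` with fresh observations at each interval.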
Full-text available
Article
Nowadays, cloud computing is one of the most important and rapidly growing services; its capabilities and applications have been extended to various areas of life. Cloud computing systems face many security issues, such as scalability, integrity, confidentiality, and unauthorized access. An illegitimate intruder may gain access to a sensitive cloud computing system and use the data for inappropriate purposes, which may lead to business losses or system damage. This paper proposes a hybrid unauthorized data handling (HUDH) scheme for big data in cloud computing. The HUDH scheme aims to restrict illegitimate users from accessing the cloud and to provide data security provisions. The proposed HUDH consists of three steps: data encryption, data access, and intrusion detection. The HUDH scheme involves three algorithms: the Advanced Encryption Standard (AES) for encryption, attribute-based access control (ABAC) for data access control, and hybrid intrusion detection (HID) for unauthorized access detection. The proposed scheme is implemented using the Python and Java languages. The testing results demonstrated that the HUDH scheme can delegate computation overhead to powerful cloud servers. User confidentiality, access privilege, and user secret key accountability can be attained with more than 97% accuracy.
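Of HUDH's three steps, the attribute-based access control (ABAC) decision is the easiest to sketch without external libraries; the policy structure and attribute names below are invented for illustration, and the scheme's AES and HID components are not reproduced.

```python
# Hedged sketch of an ABAC-style access decision: grant access only if every
# attribute the policy requires matches the requesting user's attributes.
# Policy shape and attribute names are illustrative assumptions.

def abac_allow(policy, user_attrs):
    """Return True iff user_attrs satisfies every requirement in policy."""
    return all(user_attrs.get(k) == v for k, v in policy.items())

policy = {"role": "analyst", "clearance": "high"}

granted = abac_allow(policy, {"role": "analyst", "clearance": "high", "dept": "ops"})
denied = abac_allow(policy, {"role": "analyst", "clearance": "low"})
```

A real ABAC engine would also evaluate environment attributes (time, location) and support richer comparison operators than equality.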
Full-text available
Article
Cloud computing is an example of a distributed system in which the end user connects to services provided by the cloud, which is maintained by a cloud service provider (CSP). The user must place a certain degree of trust in the cloud, since jobs are ultimately migrated to a third party's cloud and on-premises data or resources may end up stored anywhere across the globe. The CSP therefore has to maintain its trust level so that end users will opt for the services of a trusted cloud. Various activities at the CSP side sustain this trust level: safety features for security have to be identified, and the status of federation and virtual machine migration techniques must be continuously monitored to avoid uncertainty that would affect the trust level of the cloud. Such uncertainty can lead to a compromised situation between the end user and the CSP, and as a result the trust value decreases. In this paper, we propose a technique in which security features and load-balancing monitoring conditions are analyzed, with proactive actions, to maintain the specified trust level.
Full-text available
Article
The cloud computing paradigm provides numerous attractive services to customers, such as on-demand self-service, usage-based pricing, ubiquitous network access, transference of risk, and location-independent resource sharing. However, the security of cloud computing, especially its data privacy, is highly challenging. To address data privacy issues, several mechanisms have been proposed that use a third-party auditor (TPA) to ensure the integrity of outsourced data for the satisfaction of cloud users (CUs). However, the TPA itself could be a potential security threat and can create new security vulnerabilities for the customer's data. Moreover, the cloud service providers (CSPs) and the CUs could also act as adversaries and deteriorate the stored private data. As a result, the objective of this research is twofold. Our first research goal is to analyze data privacy-preserving issues by identifying unique privacy requirements and presenting a supportable solution that eliminates possible threats to data privacy. Our second research goal is to develop a privacy-preserving model (PPM) to audit all the stakeholders in order to provide a relatively secure cloud computing environment. Specifically, the proposed model ensures the quality of service (QoS) of cloud services and detects potential malicious insiders in CSPs and TPAs. Furthermore, our proposed model provides a methodology to audit a TPA to minimize any potential insider threats. In addition, CUs can use the proposed model to periodically audit the CSPs, using the TPA, to ensure the integrity of the outsourced data. To demonstrate and validate its performance, the proposed PPM is programmed in C++ and tested on GreenCloud with NS2 by applying merging processes. The experimental results help to identify the effectiveness, operational efficiency, and reliability of the CSPs. In addition, the results demonstrate the success rate of handling the negative role of the TPA and determine the TPA's malicious insider detection capabilities.
Full-text available
Article
With the growing use of the Internet and networks, data-stream classification has been applied in the intrusion detection field. Because data streams are unbounded and difficult to store, routine classification algorithms (e.g., C4.5, a widely used classification algorithm with high classification accuracy) tend toward incorrect classification and memory leaks. In this paper, we propose an improved Hoeffding tree data-stream classification algorithm, Hoeffding-ID, and apply it to the network data-stream processing of the intrusion detection field. Experimental results show that the Hoeffding-ID algorithm has relatively high detection accuracy, a low false-positive rate, and memory usage that does not increase with the number of data samples.
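The Hoeffding tree family gets its name from the Hoeffding bound, which tells the learner how many stream examples suffice before committing to a split. A minimal sketch of the bound (the parameter values below are illustrative, not Hoeffding-ID's):

```python
import math

# Hedged sketch: the Hoeffding bound used by Hoeffding-tree learners.
# value_range is the range R of the split metric (e.g. log2(num_classes) for
# information gain), delta the allowed probability of choosing a wrong split,
# n the number of examples seen at the leaf.

def hoeffding_bound(value_range, delta, n):
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# The learner splits once the gain gap between the two best attributes
# exceeds this bound; with R = 1, delta = 1e-7, and 1000 examples:
eps = hoeffding_bound(value_range=1.0, delta=1e-7, n=1000)
```

As `n` grows, `eps` shrinks toward zero, so splits are made with ever-higher confidence without storing the stream.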
Full-text available
Article
Despite gaining tremendous momentum recently, cloud computing is still in its infancy, and strong security is one of the major obstacles to opening up the long-dreamed-of era of computing as a utility. As sensitive applications and data move into cloud data centers, they run on virtual computing resources in the form of virtual machines. These unique attributes pose many novel tangible and intangible security challenges, and it can be difficult to track security issues in cloud computing environments. This paper therefore aims to highlight the major security, privacy, and trust issues in current cloud computing environments and to help users recognize the tangible and intangible threats associated with their use. It includes: (a) a survey of the most relevant security, privacy, and trust issues that pose threats in current cloud computing environments; and (b) an analysis of how these potential privacy, security, and trust threats may be addressed to provide a highly secure, trustworthy, and dependable cloud computing environment. In future work, we will further analyze and evaluate privacy, security, and trust issues in cloud computing environments with a quantifiable approach, and further develop and deploy a complete security, privacy, and trust evaluation and management framework on real cloud computing environments.
Full-text available
Article
As more and more organizations consider moving their applications and data from dedicated hosting infrastructure, which they own and operate, to shared infrastructure leased from 'the cloud', security remains a key sticking point. Tenants of cloud hosting providers have substantially less control over the construction, operation, and auditing of infrastructure they lease than of infrastructure they own. Because cloud-hosted infrastructure is shared, attackers can exploit the proximity that comes from becoming a tenant of the same cloud hosting provider. As a result, some have argued that cloud-hosted infrastructure is inherently less secure than self-hosted infrastructure, and that it will never be appropriate for high-stakes applications such as health care or financial transaction processing. We strive to present a more balanced treatment of the potential security impacts of transitioning to cloud-hosted infrastructure, surveying both the security costs and the security benefits of doing so. The costs include exposure to new threats, some of which are technological, but many of which are contractual, jurisdictional, and organizational. We also survey potential countermeasures to address these threats, which are as likely to be contractual or procedural as technological. Transitioning to a cloud-hosted infrastructure may also have security benefits: some security measures that have high up-front costs may become affordable when amortized at cloud scale, and these measures address threats common to both cloud- and self-hosted infrastructures.
Full-text available
Chapter
A key issue for the effectiveness of collaborative decision support systems is the trustworthiness of the entities involved in the process. Trust has always been used by humans as a form of collective intelligence to support effective decision-making processes. Computational trust models are now becoming a popular technique across many applications, such as cloud computing, P2P networks, wikis, e-commerce sites, and social networks. The chapter provides an overview of the current landscape of computational models of trust and reputation, and it presents an experimental case study in the domain of social search, where we show how trust techniques can be applied to enhance the quality of social search engine predictions.
Full-text available
Article
A fundamental consideration in designing successful trust scores in a peer-to-peer system is the self-interest of individual peers. We propose a strategyproof partition mechanism that provides incentives for peers to share files, is non-manipulable by selfish interests, and approximates trust scores based on EigenTrust. The basic idea behind the partition mechanism is that the peers are partitioned into peer groups and incentives are structured so that a peer only downloads from peers in one other peer group. We show that the total error in the trust values decreases exponentially with the number of peer groups. In addition to theoretically guaranteeing non-manipulability, in practice our trust system performs nearly as well as EigenTrust and has better load-balancing properties.
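The EigenTrust computation that the partition mechanism approximates is, at its core, a power iteration on the matrix of normalized local trust values. A minimal sketch, with an invented three-peer matrix (the partition mechanism itself is not reproduced here):

```python
# Hedged sketch of basic EigenTrust: repeated left-multiplication by the
# row-normalized local trust matrix C converges to the global trust vector.
# The matrix below is illustrative; real systems add pre-trusted peers and
# damping for robustness.

def eigentrust(C, iters=50):
    n = len(C)
    t = [1.0 / n] * n                     # uniform prior over peers
    for _ in range(iters):
        t = [sum(C[j][i] * t[j] for j in range(n)) for i in range(n)]
    return t

# Row j holds peer j's normalized local trust in the other peers.
C = [[0.0, 0.5, 0.5],
     [0.8, 0.0, 0.2],
     [0.6, 0.4, 0.0]]
scores = eigentrust(C)
```

Because each row of `C` sums to 1, the total trust mass is conserved across iterations, and the vector converges to the stationary distribution of the induced Markov chain.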
Full-text available
Conference Paper
When considering intelligent agents that interact with humans, having an idea of the human's trust levels, for example in other agents or services, can be of great importance. Most existing models of human trust are based on some rationality assumption and do not represent biased behavior, whereas a vast literature in the cognitive and social sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how several variations of biased human trust models have been designed, analyzed, and validated against empirical data. The results show that such biased trust models are able to predict human trust significantly better.
Full-text available
Article
Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
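The term-document class of VSMs the survey describes reduces, in its simplest form, to counting words per document and comparing column vectors by cosine similarity. A tiny self-contained sketch with invented documents:

```python
import math

# Hedged sketch: a toy term-document matrix and cosine similarity, the core
# operations behind term-document VSMs. The three "documents" are invented;
# real systems would add tf-idf weighting and dimensionality reduction.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = ["cloud trust model", "trust model for p2p", "grey system theory"]
vocab = sorted({w for d in docs for w in d.split()})
matrix = [[d.split().count(w) for w in vocab] for d in docs]  # doc-term counts

# Documents 0 and 1 share "trust model"; documents 0 and 2 share nothing:
sim01 = cosine(matrix[0], matrix[1])
sim02 = cosine(matrix[0], matrix[2])
```

Word-context and pair-pattern matrices follow the same pattern with different row/column definitions.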
Article
Providing runtime intelligence of a workflow in a highly dynamic cloud execution environment is a challenging task due to the continuously changing cloud resources. Guaranteeing a certain level of workflow Quality of Service (QoS) during execution requires continuous monitoring to detect any performance violation due to resource shortage or even cloud service interruption. Most orchestration schemes are either configuration- or deployment-dependent, and they do not cope with dynamically changing environment resources. In this paper, we propose a workflow orchestration, monitoring, and adaptation model that relies on trust evaluation to detect QoS performance degradation and to perform automatic reconfiguration to guarantee workflow QoS. The monitoring and adaptation schemes are able to detect and repair different types of real-time errors and trigger different adaptation actions, including workflow reconfiguration, migration, and resource scaling. We formalize the cloud resource orchestration using a state machine that efficiently captures different dynamic properties of the cloud execution environment. In addition, we use a model checker to validate our model in terms of reachability, liveness, and safety properties. Extensive experimentation is performed using a health monitoring workflow we developed to handle a dataset from the Medical Information Mart for Intensive Care III (MIMIC-III), deployed over a Docker Swarm cluster. A set of scenarios was carefully chosen to evaluate workflow monitoring and the different adaptation schemes we implemented. The results prove that our automated workflow orchestration model is self-adapting and self-configuring, and that it reacts efficiently to changes and adapts accordingly while supporting a high level of workflow QoS.
Article
Trustworthiness is a comprehensive quality metric used to assess the quality of services in service-oriented environments. However, trust prediction of cloud services based on multi-faceted Quality of Service (QoS) attributes is a challenging task due to the complicated and non-linear relationships between the QoS values and the corresponding trust result. Recent research reveals the significance of the Artificial Neural Network (ANN) and its variants in providing a reasonable degree of success in trust prediction problems. However, challenges with respect to weight assignment, training time, and kernel functions keep ANN and its variants under continuous refinement. Hence, this work presents a novel multi-level Hypergraph Coarsening based Robust Heteroscedastic Probabilistic Neural Network (HC-RHRPNN) to predict the trustworthiness of cloud services in order to build high-quality service applications. HC-RHRPNN employs hypergraph coarsening to identify informative samples, which are then used to train HRPNN to improve its prediction accuracy and minimize runtime. The performance of HC-RHRPNN was evaluated using the Quality of Web Service (QWS) dataset, a public QoS dataset, in terms of classifier accuracy, precision, recall, and F-score.
Book
This book inclusively and systematically presents the fundamental methods, models and techniques of practical application of grey data analysis, bringing together the authors’ many years of theoretical exploration, real-life application, and teaching. It also reflects the majority of recent theoretical and applied advances in the theory achieved by scholars from across the world, providing readers a vivid overall picture of this new theory and its pioneering research activities. The book includes 12 chapters, covering the introduction to grey systems, a novel framework of grey system theory, grey numbers and their operations, sequence operators and grey data mining, grey incidence analysis models, grey clustering evaluation models, series of GM models, combined grey models, techniques for grey systems forecasting, grey models for decision-making, techniques for grey control, etc. It also includes a software package that allows practitioners to conveniently and practically employ the theory and methods presented in this book. All methods and models presented here were chosen for their practical applicability and have been widely employed in various research works. I still remember 1983, when I first participated in a course on Grey System Theory. The mimeographed teaching materials had a blue cover and were presented as a book. It was like finding a treasure: This fascinating book really inspired me as a young intellectual going through a period of confusion and lack of academic direction. It shone with pearls of wisdom and offered a beacon in the mist for a man trying to find his way in academic research. This book became the guiding light in my life journey, inspiring me to forge an indissoluble bond with Grey System Theory. ——Sifeng Liu
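Among the grey incidence (relational) analysis models the book covers, the basic grey relational grade can be sketched in a few lines; the series values and the distinguishing coefficient rho = 0.5 below are illustrative assumptions, and the series are assumed already normalized.

```python
# Hedged sketch of grey relational analysis (GRA): each candidate series is
# scored against a reference (ideal) series via grey relational coefficients,
# then averaged into a grade. Data and rho are illustrative.

def gra_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate series w.r.t. the reference."""
    deltas = [[abs(r - c) for r, c in zip(reference, cand)] for cand in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)   # global extremes over ALL candidates
    return [
        sum((d_min + rho * d_max) / (d + rho * d_max) for d in row) / len(row)
        for row in deltas
    ]

# The first candidate tracks the reference closely, the second does not:
ref = [1.0, 1.0, 1.0]
g1, g2 = gra_grades(ref, [[0.9, 0.8, 1.0], [0.4, 0.5, 0.6]])
```

The grade is higher for the series geometrically closer to the reference, which is how GRA ranks alternatives (here, for instance, trust attributes against an ideal profile).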
Conference Paper
Wireless cloud computing delivers data and computing resources through the Internet on a pay-per-use basis. With it, software can be updated automatically, and only the server space actually required is used, which reduces the carbon footprint. Task scheduling is a central problem in cloud computing that degrades system performance when done poorly; improving performance requires an efficient task-scheduling algorithm. Existing task-scheduling algorithms focus on task resource requirements, CPU, memory, execution time, and execution cost, but they do not consider network bandwidth. In this paper, we introduce an efficient task-scheduling algorithm that performs divisible task scheduling with network bandwidth taken into account, so that workflows can be allocated based on available network bandwidth. Our proposed task-scheduling algorithm uses a nonlinear programming model for divisible task scheduling, assigning the correct number of tasks to each virtual machine. Based on this allocation, we design an algorithm for divisible load scheduling that considers network bandwidth.
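The simplest bandwidth-aware split of a divisible workload is proportional allocation; the sketch below is that baseline, not the abstract's nonlinear programming model, and the VM bandwidths are invented.

```python
# Hedged sketch: divide a divisible workload across VMs in proportion to their
# available network bandwidth, returning whole task counts that sum exactly.

def divide_load(total_tasks, bandwidths):
    total_bw = sum(bandwidths)
    shares = [total_tasks * bw / total_bw for bw in bandwidths]
    counts = [int(s) for s in shares]
    # Hand the rounding remainder to the VMs with the largest fractional parts.
    remainder = total_tasks - sum(counts)
    order = sorted(range(len(shares)), key=lambda i: shares[i] - counts[i],
                   reverse=True)
    for i in order[:remainder]:
        counts[i] += 1
    return counts

# 100 tasks over three VMs with 50, 30, and 20 Mbps of available bandwidth:
alloc = divide_load(100, [50, 30, 20])
```

A real scheduler would also fold in CPU, memory, and cost terms, which is where the nonlinear model comes in.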
Article
The ν-Support Vector Regression (ν-SVR) is an effective regression learning algorithm, which has the advantage of using a parameter ν to control the number of support vectors and adjust the width of the tube automatically. However, compared to ν-Support Vector Classification (ν-SVC) (Schölkopf et al., 2000), ν-SVR introduces an additional linear term into its objective function, so directly applying the accurate on-line ν-SVC algorithm (AONSVM) to ν-SVR will not generate an effective initial solution. Designing an incremental ν-SVR learning algorithm is therefore the main challenge. To overcome it, we propose a special procedure called initial adjustments in this paper. This procedure adjusts the weights of ν-SVC based on the Karush-Kuhn-Tucker (KKT) conditions to prepare an initial solution for the incremental learning. Combining the initial adjustments with the two steps of AONSVM produces an exact and effective incremental ν-SVR learning algorithm (INSVR). Theoretical analysis has proven the existence of the three key inverse matrices, which are the cornerstones of the three steps of INSVR (including the initial adjustments), respectively. Experiments on benchmark datasets demonstrate that INSVR avoids infeasible updating paths as far as possible and successfully converges to the optimal solution. The results also show that INSVR is faster than batch ν-SVR algorithms with both cold and warm starts.
Article
For service users to get the best service that meet their requirements, they prefer to personalize their nonfunctional attributes, such as reliability and price. However, the personalization makes it challenging for service providers to completely meet users' preferences, because they have to deal with conflicting nonfunctional attributes when selecting services for users. With this in mind, users may sometimes want to explicitly specify their trade-offs among nonfunctional attributes to make their preferences known to service providers. In this article, we present a novel service selection method based on fuzzy logic that considers users' personalized preferences and their trade-offs on nonfunctional attributes during service selection. The method allows users to represent their elastic nonfunctional requirements and associated importance using linguistic terms to specify their personalized trade-off strategies. We present examples showing how the service selection framework is used and a prototype with real-world airline services to evaluate the proposed framework's application.
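The trade-off idea in the abstract, linguistic importance terms weighting elastic satisfaction degrees, can be sketched with a simple weighted aggregation; the linguistic-term-to-weight mapping, the linear satisfaction function, and the airline-style attribute values are all illustrative assumptions, not the paper's fuzzy model.

```python
# Hedged sketch: score a service by weighting fuzzy-style satisfaction degrees
# with linguistic importance terms. Terms, weights, and data are invented.

IMPORTANCE = {"low": 0.2, "medium": 0.5, "high": 0.9}

def satisfaction(value, worst, best):
    """Linear membership-like satisfaction in [0, 1]; worst/best set elasticity."""
    if best == worst:
        return 1.0
    s = (value - worst) / (best - worst)
    return max(0.0, min(1.0, s))

def score(service, prefs):
    """prefs maps attribute -> (worst, best, linguistic importance)."""
    total = weight_sum = 0.0
    for attr, (worst, best, importance) in prefs.items():
        w = IMPORTANCE[importance]
        total += w * satisfaction(service[attr], worst, best)
        weight_sum += w
    return total / weight_sum

# Reliability matters more ("high") than price ("medium"); note price uses
# worst=30, best=5 so that lower prices satisfy more.
prefs = {"reliability": (0.5, 1.0, "high"), "price": (30.0, 5.0, "medium")}
s = score({"reliability": 0.95, "price": 12.0}, prefs)
```

Changing the importance terms is the user's trade-off lever: marking price "high" would shift the ranking toward cheaper services.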
Article
In a service-oriented environment, it is inevitable and indeed quite common to deal with web services whose reliability is unknown to the users. The reputation system is a popular technique currently used for providing a global quality score of a service provider to requesters. However, such global information is far from sufficient for service requesters to choose the most qualified services. To tackle this problem, the authors present a trust-based architecture containing a computational trust model for quantifying and comparing the trustworthiness of services. In this trust model, they first construct a network based on the direct trust relations between participants and on rating similarity in service-oriented environments; they then propose an algorithm for propagating trust in this social-network-based environment, which produces personalized trust information for a specific service requester; and they finally implement the trust model and simulate various malicious behaviors in both dense and sparse networks, verifying the attack resistance and robustness of the proposed approach. The experimental results also demonstrate the feasibility and benefit of the approach.
Despite widespread use of reputation mechanisms in P2P systems, little has been done in the area of analytical evaluation of these mechanisms. Current approaches for evaluation involve simulation and experimentation. These approaches provide evaluation of the mechanism in a few settings in which the experiment is designed; however, it is difficult to use these simulations for direct comparison of reputation mechanisms over a large number of systems and attacker models. In this paper, we present several analytical metrics and a utility-based method for evaluating reputation mechanisms. Further, we provide a case study of an evaluation of the EigenTrust reputation mechanism to demonstrate the use of these metrics and methods.
Article
In the presence of a variety of service providers that offer web services with overlapping or identical functionality, service consumers need a mechanism to distinguish one service from another based on their own subjective quality of service (QoS) preferences. Typical approaches in this field rely on trusted third parties to monitor the behaviour of service providers and endorse their performance based on their delivered services to different users. However, the issue of evaluating the credibility of user reports is one of the essential problems yet to be solved in the e-Business application area. In this paper we propose a two-layered preference-oriented service selection framework that integrates trust and reputation management techniques with an advanced procurement auction model in order to choose the most pertinent service provider that meets a consumer's QoS requirements. We will give a formal description of our approach and validate it with experiments demonstrating that our solution yields high-quality results under various realistic circumstances.
Article
In Peer-to-Peer (P2P) trust management, feedback provides an efficient and effective way to build a reputation-based trust relationship among peers. There is no doubt that the scalability of a feedback-aggregating overlay is the most fundamental requirement for large-scale P2P computing. However, most previous works either paid little attention to the scalability of the feedback-aggregating overlay or relied on a flooding-based strategy to collect feedback, which greatly affects system scalability. In this paper, we propose a scalable feedback aggregating (SFA) overlay for large-scale P2P trust evaluation. First, the local trust rating method is defined based on a time attenuation function, which satisfies the two dynamic properties of trust. The SFA overlay is then proposed from a scalability perspective. Not only can the SFA overlay strengthen the scalability of the feedback aggregation mechanism for large-scale P2P applications, but it can also reduce networking risk and improve system efficiency. More importantly, an adaptive trustworthiness computing method can be defined on top of the SFA overlay; this method surpasses the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Finally, the authors design the key techniques and security mechanism to be simple to implement, so that the mechanism can easily be incorporated into existing P2P overlay networks. Through theoretical and experimental analysis, the SFA-based trust model shows remarkable enhancement in scalability for large-scale P2P computing, as well as greater adaptability and accuracy in handling various dynamic behaviors of peers.
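A time attenuation function for local trust ratings, the first ingredient the abstract names, can be sketched with exponential decay; the decay rate and the feedback values are illustrative assumptions, not the SFA paper's exact function.

```python
import math

# Hedged sketch: exponentially decay older feedback so recent behavior
# dominates the local trust rating. The decay rate is an assumed parameter.

def local_trust(feedback, now, decay=0.1):
    """feedback: list of (timestamp, rating in [0, 1]); newer entries weigh more."""
    num = den = 0.0
    for ts, rating in feedback:
        w = math.exp(-decay * (now - ts))   # attenuation weight in (0, 1]
        num += w * rating
        den += w
    return num / den if den else 0.0

# An old low rating (t=0) vs. recent high ratings (t=9, t=10), evaluated at t=10:
t = local_trust([(0, 0.2), (9, 0.9), (10, 1.0)], now=10)
```

This captures both dynamic properties informally: trust decays without fresh evidence, and recent conduct outweighs history.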
Conference Paper
To facilitate rapid development of service-based systems (SBS), many service discovery and matching techniques have been developed to find services according to users' functionality requirements. However, users usually also have requirements on the non-functional qualities of services (QoS), such as throughput, delay, reliability, and security, which are also critical for the success of SBS. In this paper, a QoS-based service ranking and selection approach is presented to help users select the service that best satisfies their QoS requirements from a set of services that already satisfy their functionality requirements. To determine how well a service satisfies users' QoS requirements, a set of functions is presented to normalize services' QoS on various QoS aspects with different metrics and scales, compute services' satisfaction scores on each QoS aspect, and combine each service's satisfaction scores on all QoS aspects into an overall satisfaction score. Compared with existing service ranking and selection techniques, our approach has the following advantages: 1) it selects the service that best satisfies users' QoS requirements instead of the service with the best QoS, which may be greatly overqualified for the users' needs; 2) it improves flexibility in users' QoS requirement specification; and 3) it uses prospect theory to more accurately model the relation between services' QoS and their satisfaction scores.
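The normalize-then-combine step the abstract describes can be sketched by distinguishing benefit attributes (higher is better, e.g. throughput) from cost attributes (lower is better, e.g. delay); the attribute names, ranges, and candidate services below are invented, and the prospect-theory weighting is not reproduced.

```python
# Hedged sketch: normalize heterogeneous QoS values onto [0, 1] (flipping cost
# attributes) and average them into an overall satisfaction score.

def normalize(value, lo, hi, benefit=True):
    """Map a raw QoS value in [lo, hi] to [0, 1]; invert for cost attributes."""
    if hi == lo:
        return 1.0
    x = (value - lo) / (hi - lo)
    return x if benefit else 1.0 - x

def overall(service, specs):
    """specs maps attribute -> (lo, hi, is_benefit)."""
    scores = [normalize(service[a], lo, hi, benefit)
              for a, (lo, hi, benefit) in specs.items()]
    return sum(scores) / len(scores)

specs = {"throughput": (10, 100, True), "delay": (5, 50, False)}
candidates = [{"throughput": 80, "delay": 20},
              {"throughput": 40, "delay": 8}]
best = max(candidates, key=lambda s: overall(s, specs))
```

The full approach would replace the plain average with user-specific weights and a prospect-theoretic value function around each requirement's reference point.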