Conference Paper

CoVFeFE: Collusion-Resilient Verifiable Computing Framework for Resource-Constrained Devices at Network Edge

... However, replication-based collusion defense mechanisms fall mainly into two areas: prevention [16], [7], [8], [17], [9], [18], or detection and mitigation [6], [11], [10], [21], [22], [23]. Prevention solutions aim either to incentivize colluding workers to betray the collusion [9] (by offering higher rewards in a game) or to enlarge the voting pool so as to reduce the probability that colluding servers win a majority vote [7]. The main weakness of prevention, however, is that, like a vaccine, it cannot guarantee that collusion is actually prevented. ...
Chapter
Full-text available
Attestation is a fundamental building block to establish trust over software systems. When used in conjunction with trusted execution environments, it guarantees the genuineness of the code executed against powerful attackers and threats, paving the way for adoption in several sensitive application domains. This paper reviews remote attestation principles and explains how the modern and industrially well-established trusted execution environments Intel SGX, Arm TrustZone and AMD SEV, as well as emerging RISC-V solutions, leverage these mechanisms. Keywords: Trusted execution environments, Attestation, Intel SGX, Arm TrustZone, AMD SEV, RISC-V.
Article
Full-text available
The vehicular ad hoc network (VANET) has been widely used as an application of mobile ad hoc networking in the automotive industry. However, in the 5G/B5G era, the Internet of Things as a cutting-edge technology is gradually transforming the current Internet into a fully integrated future Internet. At the same time, it is pushing existing research fields in new directions, such as smart home, smart community, smart health, and intelligent transportation. The VANET needs to accelerate its pace of technological transformation to meet the application requirements of intelligent transportation systems, automatic vehicle control, and intelligent road information services. In this context, the Internet of Vehicles (IoV) has emerged, which aims to realize information exchange between the vehicle and all entities that may be related to it. IoV's goals are to reduce accidents, ease traffic congestion, and provide other information services. At present, IoV has attracted much attention from academia and industry. In order to assist related research, this article designs a new network architecture for the future network with greater data throughput, lower latency, higher security, and massive connectivity. Furthermore, this article provides a comprehensive literature review of the basics of IoV, including basic VANET technology, several network architectures, and typical applications of IoV.
Article
Full-text available
“Volunteer computing” is the use of consumer digital devices for high-throughput scientific computing. It can provide large computing capacity at low cost, but presents challenges due to device heterogeneity, unreliability, and churn. BOINC, a widely-used open-source middleware system for volunteer computing, addresses these challenges. We describe BOINC’s features, architecture, implementation, and algorithms.
Conference Paper
Full-text available
The increase of personal data on mobile devices has been followed by legislation that forces service providers to process and maintain users' data under strict data protection policies. In this paper, we propose a new primitive for mobile applications called the auditable mobile function (AMF) to help service providers enforce such policies by enabling them to process sensitive data within users' devices and to collect proofs of function execution integrity. We present SafeChecker, a computation verification system that provides mobile application support for AMFs, and evaluate the practicality of AMFs in different usage scenarios on TrustZone-enabled hardware.
Conference Paper
Full-text available
Cloud computing has become an irreversible trend, and with it comes the pressing need for verifiability, to assure the client of the correctness of computation outsourced to the cloud. Existing verifiable computation techniques all have a high overhead; if deployed in the clouds, they would render cloud computing more expensive than its on-premises counterpart. To achieve verifiability at a reasonable cost, we leverage game theory and propose a smart-contract-based solution. In a nutshell, a client lets two clouds compute the same task and uses smart contracts to stimulate tension, betrayal, and distrust between the clouds, so that rational clouds will not collude and cheat. In the absence of collusion, verification of correctness can be done easily by cross-checking the results from the two clouds. We provide a formal analysis of the games induced by the contracts and prove that the contracts will be effective under certain reasonable assumptions. By resorting to game theory and smart contracts, we are able to avoid heavy cryptographic protocols. The client only needs to pay two clouds to compute in the clear, plus a small transaction fee to use the smart contracts. We also conducted a feasibility study that involves implementing the contracts in Solidity and running them on the official Ethereum network.
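As a toy illustration of the cross-checking idea described above, the sketch below shows a client comparing two clouds' results and the kind of payoff inequality that makes sustained collusion unattractive. The function and parameter names (reward, collusion_bribe, betrayal_bonus) are illustrative assumptions for this sketch, not the contracts or analysis of the cited paper.

```python
# Minimal client-side sketch of replication-based cross-checking between two
# clouds. Parameter names are illustrative assumptions, not the paper's terms.

def cross_check(result_a, result_b):
    """Accept if both clouds agree; otherwise flag a dispute for arbitration."""
    if result_a == result_b:
        return ("accept", result_a)
    return ("dispute", None)

def collusion_is_rational(reward, collusion_bribe, betrayal_bonus):
    """A rational cloud colludes only if the expected bribe at least matches
    the honest reward plus the bonus it could earn by reporting the collusion
    (an illustrative inequality, not the paper's game-theoretic analysis)."""
    return collusion_bribe >= reward + betrayal_bonus

print(cross_check(42, 42))               # ('accept', 42)
print(collusion_is_rational(10, 12, 5))  # False -> collusion unravels
```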
Article
Full-text available
Large-scale grids permit the sharing of grid resources spread over different autonomous administrative sites on the Internet. The rapid progress of grid systems opens the door for numerous companies to adopt this technology in their business development. This progress is characterized by increasing openness and opportunities for resource sharing across organizations in different domains. In a business context, these shared resources can be misused by malicious users who abuse the provided resources, making them behave maliciously, return wrong results, and sabotage job execution. The common technique used by most grid systems to deal with this problem is based on replication with voting. Nevertheless, these techniques rely on the assumption that grid resources behave independently. They may be ineffective when a number of collusive resources collectively return the same wrong results for a job execution. To overcome this threat, we propose a Reputation-Based Voting (RBV) approach, which investigates the trustworthiness of grid resources through a reputation system and then takes a decision about the results. In addition, the performance of our approach and of other voting techniques, such as m-first voting and credibility-based voting, is evaluated through simulation to observe the effect of collusive grid resources on the correctness of the results. The obtained results show that our approach achieves a lower error rate and better performance in terms of overhead.
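To make the reputation-weighted voting idea concrete, here is a minimal sketch in which each worker's vote is weighted by a reputation score and reputations are updated with a simple additive rule. The update rule, the neutral prior of 0.5, and the step size are assumptions for illustration only, not the RBV rules of the cited paper.

```python
from collections import defaultdict

def reputation_vote(results, reputation):
    """Pick the result whose supporting workers carry the largest total
    reputation, rather than the largest raw count.
    results: {worker_id: result}; reputation: {worker_id: float in (0, 1]}."""
    weight = defaultdict(float)
    for worker, result in results.items():
        weight[result] += reputation.get(worker, 0.5)  # neutral prior for unknown workers
    return max(weight, key=weight.get)

def update_reputation(results, accepted, reputation, step=0.1):
    """Reward workers that agreed with the accepted result, penalize the rest
    (a simple additive rule assumed here for illustration)."""
    for worker, result in results.items():
        r = reputation.get(worker, 0.5)
        reputation[worker] = min(1.0, r + step) if result == accepted else max(0.0, r - step)
    return reputation

# Example: two colluders return the same wrong value but carry low reputation,
# so the honest minority still wins the weighted vote.
results = {"w1": "wrong", "w2": "wrong", "w3": "right"}
rep = {"w1": 0.2, "w2": 0.2, "w3": 0.9}
print(reputation_vote(results, rep))  # 'right'
```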
Conference Paper
Full-text available
ns3 is a simulation framework for computer networks, derived from a long line of serial simulators. Recently, ns3 incorporated a parallel, distributed scheduler, which enables distributed ns3 simulation for the first time. In this paper we discuss the current implementation and some of its limitations, with an eye to exploring potential improvements. In order to gauge progress, it is essential to have a meaningful performance metric and a suitable benchmark problem. Therefore we outline how to measure the simulation critical path and use that to construct a parallel performance metric. Second, we propose a scalable benchmark model, inspired by the global Internet.
Article
Full-text available
Desktop Grid systems have reached a preeminent place among the most powerful computing platforms on the planet. Unfortunately, they are extremely vulnerable to mischief, because computing projects exert no administrative or technical control over volunteers. Volunteers can very easily output bad results, due to software or hardware glitches (resulting from over-clocking, for instance), to get unfair computational credit, or simply to ruin the project. To mitigate this problem, Desktop Grid servers replicate work units and apply majority voting, typically on 2 or 3 results. In this paper, we observe that simple majority voting is powerless against malicious volunteers that collude to attack the project. We argue that to identify this type of attack and to spot colluding nodes, each work unit needs at least 3 voters. In addition, we propose to post-process the voting pools in two steps: i) in the first step, we use a statistical approach to identify nodes that were not colluding but submitted bad results; ii) then, we use a rather simple principle to go after malicious nodes which acted together: they might have won conflicting voting pools against nodes that were not identified in step i. We use simulation to show that our heuristic can be quite effective against colluding nodes, in scenarios where honest nodes form a majority.
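To see why plain replication with majority voting only reduces, but does not remove, the collusion risk, the following sketch computes the probability that colluders hold a strict majority in a voting pool, under the simplifying assumption that voters are drawn independently with a fixed colluder fraction (an assumption made here purely for illustration).

```python
from math import comb

def collusion_majority_prob(pool_size, colluder_fraction):
    """Probability that colluders hold a strict majority in a voting pool,
    assuming each voter is independently a colluder with the given fraction."""
    p = colluder_fraction
    threshold = pool_size // 2 + 1
    return sum(comb(pool_size, k) * p**k * (1 - p)**(pool_size - k)
               for k in range(threshold, pool_size + 1))

# With 20% colluders, enlarging the pool shrinks but does not eliminate the risk.
for n in (3, 5, 9):
    print(n, round(collusion_majority_prob(n, 0.20), 4))
```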
Conference Paper
Full-text available
Distance metrics are widely used in similarity estimation. In this paper, we find that the most popular Euclidean and Manhattan distances may not be suitable for all data distributions. A general guideline for establishing the relation between a distribution model and its corresponding similarity estimation is proposed. Based on maximum likelihood theory, we propose new distance metrics, such as the harmonic distance and the geometric distance. Because feature elements may come from heterogeneous sources and usually have different influences on similarity estimation, it is inappropriate to model the distribution as isotropic. We propose a novel boosted distance metric that not only finds the best distance metric fitting the distribution of the underlying elements but also selects the most important feature elements with respect to similarity. The boosted distance metric is tested on fifteen benchmark data sets from the UCI repository and two image retrieval applications. In all the experiments, robust results are obtained based on the proposed methods.
Conference Paper
Full-text available
By exploiting idle time on volunteer machines, desktop grids provide a way to execute large sets of tasks with negligible maintenance and low cost. Although desktop grids are attractive for cost-conscious projects, relying on external resources may compromise the correctness of application execution due to the well-known unreliability of nodes. In this paper, we consider the most challenging threat model: organized groups of cheaters that may collude to produce incorrect results. We propose two on-line algorithms for detecting collusion and characterizing the participant behaviors. Using several real-life traces, we show that our approach is accurate and efficient in identifying collusion and in estimating group behavior.
Conference Paper
Full-text available
We describe different strategies a central authority, the boss, can use to distribute computation to untrusted contractors. Our problem is inspired by volunteer distributed computing projects such as SETI@home, which outsource computation to large numbers of participants. For many tasks, verifying a task's output requires as much work as computing it again; additionally, some tasks may produce certain outputs with greater probability than others. A selfish contractor may try to exploit these factors, by submitting potentially incorrect results and claiming a reward. Further, malicious contractors may respond incorrectly, to cause direct harm or to create additional overhead for result-checking. We consider the scenario where there is a credit system whereby users can be rewarded for good work and fined for cheating. We show how to set rewards and fines that incentivize proper behavior from rational contractors, and mitigate the damage caused by malicious contractors. We analyze two strategies: random double-checking by the boss, and hiring multiple contractors to perform the same job. We also present a bounty mechanism when multiple contractors are employed; the key insight is to give a reward to a contractor who catches another worker cheating. Furthermore, if we can assume that at least a small fraction h of the contractors are honest (1% - 10%), then we can provide graceful degradation for the accuracy of the system and the work the boss has to perform. This is much better than the Byzantine approach, which typically assumes h > 60%.
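The random double-checking strategy above can be summarized by a simple incentive condition. The derivation below is a hedged illustration with assumed symbols (reward r for an accepted result, fine f when cheating is caught, cost c of honest computation, checking probability p); it is not the paper's exact notation or mechanism.

```latex
% Illustrative incentive condition for random double-checking:
\begin{align*}
  U_{\text{honest}} &= r - c, \\
  U_{\text{cheat}}  &= (1-p)\,r - p\,f, \\
  U_{\text{honest}} \ge U_{\text{cheat}}
    &\iff p \ge \frac{c}{r + f}.
\end{align*}
```

Read this as: the boss only needs to re-check a fraction p of tasks, and raising the fine f lowers how often checking is needed to keep rational contractors honest.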
Article
Full-text available
From agreement problems to replicated software execution, we frequently find scenarios with voting pools. Unfortunately, Byzantine adversaries can join and collude to distort the results of an election. We address the problem of detecting these colluders, in scenarios where they repeatedly participate in voting decisions. We investigate different malicious strategies, such as naïve or colluding attacks, with fixed identifiers or in whitewashing attacks. Using a graph-theoretic approach, we frame collusion detection as a problem of identifying maximum independent sets. We then propose several new graph-based methods and show, via analysis and simulations, their effectiveness and practical applicability for collusion detection.
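A minimal sketch of the graph-theoretic framing described above: voters that ever ended up on opposite sides of a pool cannot both be honest, so honest voters form an independent set in the resulting conflict graph. The greedy heuristic below is only an illustration of the idea; the actual detection methods and thresholds of the cited paper are not reproduced here.

```python
import itertools

def conflict_graph(voting_history):
    """Build an undirected conflict graph: an edge joins two voters that ever
    gave different answers in the same voting pool.
    voting_history is a list of {voter_id: answer} dictionaries."""
    edges = set()
    for pool in voting_history:
        for a, b in itertools.combinations(pool, 2):
            if pool[a] != pool[b]:
                edges.add(frozenset((a, b)))
    return edges

def greedy_independent_set(voters, edges):
    """Greedy heuristic: repeatedly keep the voter with the fewest conflicts
    and drop its neighbors. The surviving set contains no conflicting pair,
    i.e. a plausible set of honest voters."""
    degree = {v: sum(1 for e in edges if v in e) for v in voters}
    chosen, excluded = set(), set()
    for v in sorted(voters, key=degree.get):
        if v not in excluded:
            chosen.add(v)
            excluded.update(w for e in edges if v in e for w in e if w != v)
    return chosen

history = [{"a": 1, "b": 1, "c": 2}, {"a": 3, "c": 4, "d": 3}]
print(greedy_independent_set({"a", "b", "c", "d"}, conflict_graph(history)))  # {'a', 'b', 'd'}
```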
Article
Widespread applications of 5G technology have prompted the outsourcing of computation, dominated by the Internet of Things (IoT) cloud, to improve transmission efficiency, creating a novel paradigm for improving the speed of common connected objects in the IoT. However, although this makes it easier for ubiquitous resource-constrained equipment that outsources computing tasks to achieve high-speed transmission services, security concerns, such as a lack of reliability and collusion attacks, still exist in outsourcing computation. In this paper, we propose a reliable, anti-collusion outsourcing computation and verification protocol, which uses distributed storage solutions in response to the issue of centralized storage, leverages homomorphic encryption to handle the outsourced computation, and ensures data privacy. Moreover, we embed the outsourcing computation results and a novel polynomial factorization algorithm into an Ethereum smart contract, which not only enables verification of the outsourced result without a trusted third party but also resists collusion attacks. The results of the theoretical analysis and experimental performance evaluation demonstrate that the proposed protocol is secure, reliable, and more effective than state-of-the-art approaches.
Article
We consider the problem of verifiable delegation of computing in the cloud, where a client outsources a computing function to untrusted servers and verifies the returned computational results. A recent related result of Dong et al. (CCS 2017) has a client outsource the same computation task to two different servers and achieve verifiability by simple cross-checking. In Dong's replication-based scheme, although expensive cryptographic verification algorithms are not needed, the delegation overhead is doubled. To reduce the delegation overhead of Dong's scheme, we propose a novel incentive-compatible rational delegation computing scheme. Specifically, the client sends duplicate tasks to some servers for cross-checking, while each server only knows the probability distribution of receiving a duplicate task. Furthermore, modeling rational delegation computing as a dynamic game with imperfect information, we design reasonable and effective incentive mechanisms. We then identify a unique sequential equilibrium in each game and strictly prove that rational players have no incentive to deviate from honest behavior under a lower delegation overhead. Detailed analysis indicates that the lowest delegation overhead achieved by our scheme is only n/(2n−2) of that achieved by Dong's scheme, where n is the number of servers.
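To illustrate the core idea of probabilistic duplication, here is a minimal sketch in which each task is sent to one server and, with some probability, silently duplicated to a second server for cross-checking. The duplication probability and helper names are assumptions for this sketch; the paper's incentive mechanism and equilibrium analysis are not reproduced.

```python
import random

def delegate_with_probabilistic_duplicates(tasks, servers, dup_prob=0.3, rng=random):
    """Assign each task to a primary server; with probability dup_prob also send
    a hidden duplicate to a different server. dup_prob is illustrative only."""
    assignments = []  # (task, primary, shadow-or-None)
    for task in tasks:
        primary = rng.choice(servers)
        shadow = None
        if rng.random() < dup_prob:
            shadow = rng.choice([s for s in servers if s != primary])
        assignments.append((task, primary, shadow))
    return assignments

def cross_check(answers, assignment):
    """Accept the primary answer unless a hidden duplicate exists and disagrees."""
    _, primary, shadow = assignment
    if shadow is not None and answers[primary] != answers[shadow]:
        return None  # disagreement: escalate, penalize, or recompute
    return answers[primary]

plan = delegate_with_probabilistic_duplicates(["t1", "t2"], ["s1", "s2", "s3"])
```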
Article
To investigate the diversified technologies of the Internet of Vehicles (IoV) under intelligent edge computing, artificial intelligence, intelligent edge computing, and the IoV are combined. An IoV model for intelligent edge computing task offloading and migration under the SDVN (Software Defined Vehicular Networks) architecture is proposed, namely the JDE-VCO (Joint Delay and Energy-Vehicle Computational task Offloading) optimization, and simulations are performed. The results show that, in the analysis of the impact of different offloading strategies on the IoV, the JDE-VCO algorithm is superior to the other schemes in terms of transmission delay and total offloading energy consumption. In the analysis of the impact of IoV task offloading, the JDE-VCO algorithm is lower than the RTO (Random Tasks Offloading) and UTO (Uniform Tasks Offloading) schemes in terms of the number of tasks per unit time and the average task completion time for the same amount of uploaded data. In the analysis of the packet loss ratio and transmission delay, the packet loss ratio and transmission delay of the JDE-VCO algorithm are lower than those of the RTO and UTO algorithms. Moreover, the packet loss ratio of the JDE-VCO algorithm is about 0.1 and the transmission delay is stable at 0.2 s, which are clear advantages. Therefore, the IoV model for task offloading and migration built with intelligent edge computing can significantly improve the load sharing rate, offloading efficiency, packet loss ratio, and transmission delay when the IoV is processing tasks and uploading data, providing an experimental basis for improving IoV systems.
Article
Software attacks on modern computer systems have been a persisting challenge for several decades, leading to a continuous arms race between attacks and defenses. As a first line of defense, operating system kernels enforce process isolation to limit potential attacks to only the code containing the vulnerabilities. However, vulnerabilities in the kernel itself (for example, various vulnerabilities found by Google Project Zero), side-channel attacks, or even physical attacks can be used to undermine process isolation.
Chapter
We consider a setting where a verifier with limited computation power delegates a resource-intensive computation task, one requiring a T×S computation tableau, to two provers, where the provers are rational in that each maximizes their own payoff, taking into account the losses incurred by the cost of computation. We design a mechanism called the Minimal Refereed Mechanism (MRM) such that if the verifier has O(log S + log T) time and O(log S + log T) space computation power, then both provers will provide an honest result without the verifier putting any effort into verifying the results. The amount of computation required of the provers (and thus the cost) is only a multiplicative log S factor more than the computation itself, making this scheme efficient especially for low-space computations.
Article
The rapid development of cloud computing promotes a wide deployment of data and computation outsourcing to cloud service providers by resource-limited entities. Based on a pay-per-use model, a client without enough computational power can easily outsource large-scale computational tasks to a cloud. Nonetheless, security and privacy become a major concern when the customer's sensitive or confidential data is not processed in a fully trusted cloud environment. Recently, a number of publications have been proposed to investigate and design specific secure outsourcing schemes for different computational tasks. The aim of this survey is to systemize and present the cutting-edge technologies in this area. It starts by presenting security threats and requirements, followed by other factors that should be considered when constructing secure computation outsourcing schemes. In an organized way, we then dwell on the existing secure outsourcing solutions to different computational tasks such as matrix computations, mathematical optimization, and so on, treating data confidentiality as well as computation integrity. Finally, we provide a discussion of the literature and a list of open challenges in the area.
Article
Many grid-computing systems adopt voting-based techniques to resist sabotage. However, these techniques become ineffective in grid systems subject to collusion, where malicious resources can collectively sabotage a job execution by returning identical wrong results. Spot-checking has been used to detect and tackle the collusion issue by sending randomly chosen resources a certain number of spotter jobs with known correct results and estimating resource credibility based on the returned results. This paper makes original contributions by formulating and solving a new spot-checking optimization problem for grid systems subject to collusion attacks, with the objective of minimizing the probability of genuine task failure (PGTF, i.e., the wrong-output probability) while meeting an expected overhead constraint. The solution provides an optimal combination of task-distribution policy parameters, including the number of deployed spotter tasks, the number of resources tested by each spotter task, and the number of resources assigned to perform the genuine task. The optimization procedure encompasses a new iterative method for evaluating the system performance metrics of PGTF and expected overhead in terms of the total number of task assignments. Both fixed and uncertain attack parameters are considered. Illustrative examples are provided to demonstrate the proposed optimization problem and solution methodology.
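The following is a small simulation-style sketch of the spot-checking primitive itself: spotter tasks with known answers are sent to randomly chosen resources, and resources that answer them wrongly are flagged. All parameter names are assumptions for illustration; the paper's actual contribution, optimizing these parameters against the PGTF/overhead trade-off, is not reproduced here.

```python
import random

def spot_check(resources, n_spotters, resources_per_spotter, answer_fn, truth, rng=random):
    """Send spotter tasks (whose correct results are known) to randomly chosen
    resources and flag any resource that returns a wrong answer.
    answer_fn(resource, task) models the resource's reply; truth maps each
    spotter task to its known correct result."""
    flagged = set()
    spotter_tasks = list(truth)
    for task in rng.sample(spotter_tasks, min(n_spotters, len(spotter_tasks))):
        for resource in rng.sample(resources, min(resources_per_spotter, len(resources))):
            if answer_fn(resource, task) != truth[task]:
                flagged.add(resource)
    return flagged

# Example with two colluding resources that always answer 0:
resources = ["r1", "r2", "r3", "r4"]
truth = {"q1": 1, "q2": 1}
colluders = {"r1", "r2"}
reply = lambda r, t: 0 if r in colluders else truth[t]
print(spot_check(resources, 2, 3, reply, truth))
```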
Conference Paper
With the advent of cloud computing, individuals and companies alike are looking for opportunities to leverage cloud resources not only for storage but also for computation. Nevertheless, the reliance on the cloud to perform computation raises the unavoidable challenge of how to assure the correctness of the delegated computation. In this regard, we introduce two cryptographic protocols for publicly verifiable computation that allow a lightweight client to securely outsource to a cloud server the evaluation of high-degree univariate polynomials and the multiplication of large matrices. Similarly to existing work, our protocols follow the amortized verifiable computation approach. Furthermore, by exploiting the mathematical properties of polynomials and matrices, they are more efficient and give way to public delegatability. Finally, besides their efficiency, our protocols are provably secure under well-studied assumptions.
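The protocols above are cryptographic and are not reproduced here; as a lightweight illustration of the underlying idea that verifying an outsourced matrix product can be far cheaper than recomputing it, the sketch below uses the classical Freivalds probabilistic check (a standard textbook technique, not the publicly verifiable scheme of the cited paper).

```python
import random

def freivalds_check(A, B, C, rounds=10, rng=random):
    """Probabilistically check that A x B == C for square matrices using only
    O(n^2) work per round, by testing A (B r) == C r for random 0/1 vectors r.
    Returns False on a detected mismatch, True with high probability otherwise."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [rng.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # definitely wrong
    return True

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
print(freivalds_check(A, B, C_good))  # True
```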
Article
VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output. VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.
Article
We present VC3, the first system that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes. Experimental results on common benchmarks show that VC3 performs well compared with unprotected Hadoop: VC3's average runtime overhead is negligible for its base security guarantees, 4.5% with write integrity and 8% with read/write integrity.
Article
From theoretical possibility to near practicality.
Conference Paper
We address the problem in which a client stores a large amount of data with an untrusted server in such a way that, at any moment, the client can ask the server to compute a function on some portion of its outsourced data. In this scenario, the client must be able to efficiently verify the correctness of the result despite no longer knowing the inputs of the delegated computation, it must be able to keep adding elements to its remote storage, and it does not have to fix in advance (i.e., at data outsourcing time) the functions that it will delegate. Even more ambitiously, clients should be able to verify in time independent of the input-size -- a very appealing property for computations over huge amounts of data. In this work we propose novel cryptographic techniques that solve the above problem for the class of computations of quadratic polynomials over a large number of variables. This class covers a wide range of significant arithmetic computations -- notably, many important statistics. To confirm the efficiency of our solution, we show encouraging performance results, e.g., correctness proofs have size below 1 kB and are verifiable by clients in less than 10 milliseconds.
Chapter
As networks of computing devices grow larger and more complex, the need for highly accurate and scalable network simulation technologies becomes critical. Despite the emergence of large-scale testbeds for network research, simulation still plays a vital role in terms of scalability (both in size and in experimental speed), reproducibility, rapid prototyping, and education. With simulation based studies, the approach can be studied in detail at varying scales, with varying data applications, varying field conditions, and will result in reproducible and analyzable results.
Conference Paper
We introduce and formalize the notion of Verifiable Computation, which enables a computationally weak client to "outsource" the computation of a function F on various dynamically-chosen inputs x1, ..., xk to one or more workers. The workers return the result of the function evaluation, e.g., yi = F(xi), as well as a proof that the computation of F was carried out correctly on the given value xi. The primary constraint is that the verification of the proof should require substantially less computational effort than computing F(xi) from scratch. We present a protocol that allows the worker to return a computationally sound, non-interactive proof that can be verified in O(m · poly(λ)) time, where m is the bit-length of the output of F, and λ is a security parameter. The protocol requires a one-time pre-processing stage by the client which takes O(|C| · poly(λ)) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the xi or yi values.
Article
The Set Covering problem (SCP) is a well-known combinatorial optimization problem, which is NP-hard. We conducted a comparative study of nine different approximation algorithms for the SCP, including several greedy variants, fractional relaxations, randomized algorithms and a neural network algorithm. The algorithms were tested on a set of randomly generated problems with up to 500 rows and 5000 columns, and on two sets of problems originating in combinatorial questions with up to 28160 rows and 11264 columns. On the random problems and on one set of combinatorial problems, the best algorithm among those we tested was a randomized greedy algorithm, with the neural network algorithm very close in second place. On the other set of combinatorial problems, the best algorithm was a deterministic greedy variant, and the randomized algorithms (both randomized greedy and neural network) performed quite poorly. The other algorithms we tested were always inferior to the ones mentioned above.
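For reference, the classical greedy heuristic that the compared "greedy variants" build on is sketched below: repeatedly pick the subset covering the most still-uncovered elements. This is the textbook algorithm, shown only as background; it is not any specific variant from the cited study.

```python
def greedy_set_cover(universe, subsets):
    """Classical greedy heuristic for Set Cover."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            raise ValueError("universe cannot be covered by the given subsets")
        cover.append(best)
        uncovered -= set(best)
    return cover

universe = {1, 2, 3, 4, 5}
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(universe, subsets))  # [{1, 2, 3}, {4, 5}]
```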
Conference Paper
This paper presents a proposal for a collusion-resistant sabotage-tolerance mechanism against malicious sabotage attempts in volunteer computing (VC) systems. While VC systems rank among the most powerful computing platforms, they are vulnerable to mischief because volunteers can output erroneous results for various reasons, ranging from hardware failure to intentional sabotage. To protect the integrity of data and validate computation results, most VC systems rely on replication-based sabotage-tolerance mechanisms such as m-first voting. However, these mechanisms are powerless against malicious volunteers (saboteurs) that act with intent and use some form of collusion. To face this collusion threat, this paper proposes a spot-checking-based mechanism, which estimates the frequency of malicious attempts and dynamically eliminates erroneous results from the system. Simulation of VCs shows that the proposed method can improve the performance of VC systems by up to 30 percent, while guaranteeing the same level of computational correctness.
Conference Paper
The current move to Cloud Computing raises the need for verifiable delegation of computations, where a weak client delegates his computation to a powerful server, while maintaining the ability to verify that the result is correct. Although there are prior solutions to this problem, none of them is yet both general and practical for real-world use. We demonstrate a relatively efficient and general solution where the client delegates the computation to several servers, and is guaranteed to determine the correct answer as long as even a single server is honest. We show: a protocol for any efficiently computable function, with logarithmically many rounds, based on any collision-resistant hash family. The protocol is set in terms of Turing Machines but can be adapted to other computation models. We also show an adaptation of the protocol for the x86 computation model and a prototype implementation, called Quin, for Windows executables. We describe the architecture of Quin and experiment with several parameters on live clouds. We show that the protocol is practical, can work with today's clouds, and is efficient both for the servers and for the client.
Conference Paper
We establish significantly improved bounds on the performance of the greedy algorithm for approximating set cover. In particular, we provide the first substantial improvement of the 20-year-old classical harmonic upper bound, H(m), of Johnson, Lovász, and Chvátal, by showing that the performance ratio of the greedy algorithm is, in fact, exactly ln m − ln ln m + Θ(1), where m is the size of the ground set. The difference between the upper and lower bounds turns out to be less than 1.1. This provides the first tight analysis of the greedy algorithm, as well as the first upper bound that lies below H(m) by a function going to infinity with m. We also show that the approximation guarantee for the greedy algorithm is better than the guarantee recently established by Srinivasan for the randomized rounding technique, thus improving the bounds on the integrality gap. Our improvements result from a new approach which might be generally useful for attacking other similar problems.
Conference Paper
A common technique for result verification in grid computing is to delegate a computation redundantly to different workers and apply majority voting to the returned results. However, the technique is sensitive to "collusion", where a majority of malicious workers collectively returns the same incorrect result. In this paper, we propose a mechanism that identifies groups of colluding workers. The mechanism is based on the fact that colluders can succeed in a vote only when they hold the majority. This information allows us to build clusters of workers that voted similarly in the past, and so detect collusion. We find that the more strongly workers collude, the better they can be identified. Majority voting tolerates a certain number of incorrect results in a vote, but it does not resist a majority of colluding workers that collectively return the same incorrect result. Even though workers are randomly selected for each vote, with the possibility of massive attacks (e.g., (4)), the probability of a colluding majority becomes significant. Therefore, mechanisms are required that detect colluding behavior of malicious workers. Approach and contribution: we present a mechanism for collusion detection that exploits the information of how often pairs of workers are together in the majority/minority of votes, and how often they are in opposite groups. In cases where colluding workers win a vote, they are always together in the majority, whereas honest workers together form the minority. We first show theoretically that this fact allows a line to be drawn between honest and colluding workers. Secondly, we propose an algorithm that uses graph clustering to discover this division. Finally, we evaluate the algorithm in terms of accuracy and running time, using two different graph clustering algorithms from the literature. We find that, given a certain number of observations, our mechanism can successfully detect sophisticated colluders.
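The pairwise statistics this mechanism relies on can be sketched as follows: for every pair of workers, count how often they landed in the same group (both in the majority or both in the minority) versus opposite groups of a vote. The edge thresholds and the graph-clustering step of the cited mechanism are deliberately left out; this is only an illustration of the bookkeeping.

```python
from collections import Counter
import itertools

def pairwise_vote_stats(voting_pools):
    """For each worker pair, count same-group vs. opposite-group occurrences,
    where a worker's group is 'majority' or 'minority' of the pool's vote.
    voting_pools is a list of {worker_id: answer} dictionaries."""
    together, opposed = Counter(), Counter()
    for pool in voting_pools:
        majority_answer = Counter(pool.values()).most_common(1)[0][0]
        for a, b in itertools.combinations(sorted(pool), 2):
            same_group = (pool[a] == majority_answer) == (pool[b] == majority_answer)
            (together if same_group else opposed)[(a, b)] += 1
    return together, opposed

pools = [{"a": 1, "b": 1, "c": 2}, {"a": 1, "b": 1, "d": 2}]
together, opposed = pairwise_vote_stats(pools)
print(together[("a", "b")], opposed[("a", "c")])  # 2 1
```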
Conference Paper
Peer-to-peer grids that seek to harvest idle cycles available throughout the Internet are vulnerable to hosts that fraudulently accept computational tasks and then maliciously return arbitrary results. Current strategies employed by popular cooperative computing grids, such as SETI@Home, rely heavily on task replication to check results. However, result verification through replication suffers from two potential shortcomings: (1) susceptibility to collusion in which a group of malicious hosts conspire to return the same bad results and (2) high fixed overhead incurred by running redundant copies of the task. In this paper, we first propose a scheme called Quiz to combat collusion. The basic idea of Quiz is to insert indistinguishable quiz tasks with verifiable results known to the client within a package containing several normal tasks. The client can then accept or reject the normal task results based on the correctness of quiz results. Our second contribution is the promotion of trust-based task scheduling in peer-to-peer grids. By coupling a reputation system with the basic verification schemes - Replication and Quiz - a client can potentially avoid malicious hosts and also reduce the overhead of verification for trusted hosts.
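A minimal sketch of the Quiz idea as described above: quiz tasks with known answers are shuffled into a package of normal tasks, and the normal results are accepted only if every quiz comes back correct. Function names and the all-or-nothing acceptance rule are illustrative assumptions, not necessarily the exact policy of the cited scheme.

```python
import random

def build_package(normal_tasks, quiz_tasks, rng=random):
    """Interleave quiz tasks (with known answers) among normal tasks so a host
    cannot tell them apart. Returns the shuffled package and the quiz key."""
    package = list(normal_tasks) + [t for t, _ in quiz_tasks]
    rng.shuffle(package)
    return package, dict(quiz_tasks)

def accept_results(results, quiz_key):
    """Accept the normal-task results only if every quiz was answered correctly."""
    if any(results.get(q) != answer for q, answer in quiz_key.items()):
        return None  # host failed a quiz: reject the whole package
    return {t: r for t, r in results.items() if t not in quiz_key}

package, key = build_package(["job1", "job2"], [("quiz1", 7)])
results = {"job1": 10, "job2": 20, "quiz1": 7}
print(accept_results(results, key))  # {'job1': 10, 'job2': 20}
```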
Survey on the internet of vehicles: Network architectures and applications
  • B Ji
  • X Zhang
  • S Mumtaz
  • C Han
  • C Li
  • H Wen