In this paper, we consider combining replication and data partitioning schemes to assure data availability, confidentiality, and timely access for data grid applications. Data objects are partitioned into shares, and the shares are dispersed across the grid. The shares may be replicated to achieve better performance and availability. We develop models for assessing confidentiality, availability, and communication cost for different placements, and use these metrics to guide placement decisions. A new probabilistic security assurance model and an efficient availability-computation algorithm have been developed. Because the goals conflict, we model the placement decision as a multi-objective optimization problem and use a genetic algorithm to determine Pareto-optimal placement solutions.
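The core of the multi-objective placement decision above is Pareto dominance: one placement is preferred only if it is at least as good on every objective and strictly better on one. A minimal sketch of the Pareto-filtering step follows; the placement names and objective scores are purely hypothetical, and a real genetic algorithm would evolve and mutate candidates rather than enumerate a fixed list:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b.

    All objectives are oriented so that larger is better
    (cost is negated for this reason)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only placements not dominated by any other candidate."""
    return [s for s in solutions
            if not any(dominates(t[1], s[1]) for t in solutions if t is not s)]

# Hypothetical placements scored as (confidentiality, availability, -cost).
candidates = [
    ("all-shares-one-site",  (0.2, 0.60, -1.0)),
    ("dispersed",            (0.9, 0.80, -3.0)),
    ("dispersed-replicated", (0.9, 0.95, -5.0)),
    ("single-replica",       (0.1, 0.50, -2.0)),
]
front = pareto_front(candidates)
# "single-replica" is dominated by "all-shares-one-site"; the other
# three trade off cost against confidentiality/availability.
```

The genetic algorithm in the paper would apply a filter like this (or a ranking based on it) each generation to steer the population toward the Pareto front.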
ABSTRACT: It has been reported that life holds but two certainties, death and taxes. And indeed, it does appear that any society, and in the context of this article, any large-scale distributed system, must address both death (failure) and the establishment and maintenance of infrastructure (which we assert is a major motivation for taxes, so as to justify our title!). Two supposedly new approaches to distributed computing have emerged in the past few years, both claiming to address the problem of organizing large-scale computational societies: peer-to-peer (P2P) [15, 36, 49] and Grid computing. Both approaches have seen rapid evolution, widespread deployment, successful application, considerable hype, and a certain amount of (sometimes warranted) criticism. The two technologies appear to have the same final objective, the pooling and coordinated use of large sets of distributed resources, but are based in different communities and, at least in their current designs, focus on different requirements.
Peer-to-Peer Systems II, Second International Workshop, IPTPS 2003, Berkeley, CA, USA, February 21-22, 2003, Revised Papers; 01/2003
ABSTRACT: We present a distributed algorithm for file allocation that guarantees high assurance, availability, and scalability in a large distributed file system. The algorithm can use replication and fragmentation schemes to allocate the files over multiple servers. File confidentiality and integrity are preserved even in the presence of a successful attack that compromises a subset of the file servers. The algorithm is adaptive in the sense that it changes the file allocation as the read-write patterns and the locations of the clients in the network change. We formally prove that, assuming read-write patterns are stable, the algorithm converges toward an optimal file allocation, where optimality is defined as maximizing the file assurance.
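The confidentiality property claimed above, that compromising a subset of servers reveals nothing about a file, is the guarantee provided by threshold secret sharing, a standard building block for such fragmentation schemes. A minimal Shamir-style sketch (an illustration of the general technique, not necessarily the paper's actual scheme) splits a secret into n fragments such that any k reconstruct it and fewer than k reveal nothing:

```python
import random

PRIME = 2**61 - 1  # Mersenne prime; all share arithmetic is mod PRIME

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it.

    The secret is the constant term of a random degree-(k-1) polynomial;
    share i is the polynomial evaluated at x = i."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Fermat's little theorem gives the modular inverse of den.
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

Dispersing the n shares over distinct servers is what lets an allocation tolerate the compromise (for confidentiality) or loss (for availability) of up to n - k servers.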
IEEE Transactions on Parallel and Distributed Systems 10/2003; 14(9):885-896. DOI:10.1109/TPDS.2003.1233711 · 2.17 Impact Factor