Figure 2 - uploaded by Manuel Fähndrich
Visualization of the construction rules for revision diagrams in Def. 6.


Source publication
Article
Full-text available
When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees that programmers can understand and systems can provide effectively. ...

Context in source publication

Context 1
... now give a formal, constructive definition for revision diagrams. DEFINITION 6. A revision diagram is a directed graph constructed by applying a (possibly empty or infinite) sequence of the following construction steps (see Fig. 2) to a single initial start vertex (called the root): The join condition expresses that the terminal t (the "joiner") must be reachable from the fork vertex that started the revision being joined (the "joinee"). This condition makes revision diagrams more restricted than general task graphs. See Fig. 1(b) for some examples of invalid ...
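The quoted construction can be sketched in code. The following Python sketch (all names are hypothetical, not taken from the paper) builds a diagram by fork and join steps and enforces the join condition: the joiner's terminal must be reachable from the fork vertex that started the joinee revision.

```python
from collections import defaultdict

class RevisionDiagram:
    """Toy model of Def. 6: fork/join construction with the join condition."""
    def __init__(self):
        self.succ = defaultdict(list)  # vertex -> successor vertices
        self.next_vertex = 1           # vertex 0 is the root
        self.terminals = {0: 0}        # revision id -> its current terminal vertex
        self.fork_of = {0: 0}          # revision id -> fork vertex that started it
        self.next_rev = 1

    def _extend(self, pred):
        v = self.next_vertex
        self.next_vertex += 1
        self.succ[pred].append(v)
        return v

    def fork(self, rev):
        f = self.terminals[rev]                   # terminal becomes the fork vertex
        self.terminals[rev] = self._extend(f)     # parent revision continues
        child = self.next_rev
        self.next_rev += 1
        self.terminals[child] = self._extend(f)   # child revision starts
        self.fork_of[child] = f
        return child

    def _reachable(self, src, dst):
        stack, seen = [src], set()
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(self.succ[v])
        return False

    def join(self, joiner, joinee):
        t = self.terminals[joiner]
        # Join condition: t must be reachable from the joinee's fork vertex.
        if not self._reachable(self.fork_of[joinee], t):
            raise ValueError("join condition violated")
        j = self._extend(t)                       # join vertex
        self.succ[self.terminals[joinee]].append(j)
        self.terminals[joiner] = j
        del self.terminals[joinee]

d = RevisionDiagram()
r = d.fork(0)
d.join(0, r)   # valid: main's terminal descends from r's fork vertex
```

Joining two sibling revisions into each other fails the condition, which is exactly what makes these diagrams more restricted than general task graphs.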

Similar publications

Chapter
Full-text available
On the Internet, cloud computing plays an important role in sharing information and data. Virtualization is an important technique in the cloud environment for sharing data and information. It is also an important computing environment that enables on-demand, dynamic allocation of academic or industry IT resources. The main aim of this research paper is...
Article
Full-text available
Cloud computing has changed how people work in every manner, from storing to fetching information on the cloud. To protect data on the cloud, various access procedures and policies are used, such as authentication and authorization. Authentication means that the intended user accesses data on the cloud, and authorization means that the user...
Article
Full-text available
Abstract The ability to participate in online examinations generally depends on the stability and speed of the examinees' Internet connection as well as the reliability of the computers and servers through which the examination is provided. Especially with large numbers of participants, high server loads at the beginning and...
Article
Full-text available
Data sharing is a method that allows users to legally access data over the cloud. A cloud computing architecture is used to restrict data sharing capabilities to authorized users of the data stored in the cloud server. In the cloud, the number of users is extremely large, and users connect and leave randomly, so the system need...
Conference Paper
Full-text available
IoT is an emerging topic in the field of IT that has attracted the interest of researchers from different parts of the world. Authentication of IoT includes the establishment of a model for controlling access to IoT devices through the internet and other unsecured network platforms. Strong authentication of IoT is necessary for ensuring that machin...

Citations

... Bailis et al. [5] adopt this model to define read atomicity. Burckhardt et al. [11] and Cerone et al. [12] propose axiomatic specifications of consistency models for transaction systems using visibility and arbitration relationships. Shapiro et al. [35] propose a classification along three dimensions (total order, visibility, and transaction composition) for transactional consistency models. ...
Chapter
Full-text available
Many transaction systems distribute, partition, and replicate their data for scalability, availability, and fault tolerance. However, observing and maintaining strong consistency of distributed and partially replicated data leads to high transaction latencies. Since different applications require different consistency guarantees, there is a plethora of consistency properties—from weak ones such as read atomicity through various forms of snapshot isolation to stronger serializability properties—and distributed transaction systems (DTSs) guaranteeing such properties. This paper presents a general framework for formally specifying a DTS in Maude, and formalizes in Maude nine common consistency properties for DTSs so defined. Furthermore, we provide a fully automated method for analyzing whether the DTS satisfies the desired property for all initial states up to given bounds on system parameters. This is based on automatically recording relevant history during a Maude run and defining the consistency properties on such histories. To the best of our knowledge, this is the first time that model checking of all these properties in a unified, systematic manner is investigated. We have implemented a tool that automates our method, and use it to model check state-of-the-art DTSs such as P-Store, RAMP, Walter, Jessy, and ROLA.
... This requires complete isolation of the effects of any two methods. Such an extreme is used, e.g., in the CR library [19]. The typical csm variable, however, will strike a trade-off between these two extremes. ...
... Concurrent revisions [19] introduce a generic and deterministic programming model for parallel programming. This model supports fork-join parallelism, and processes are allowed to make concurrent modifications to shared data by creating local copies that are eventually merged using suitable (programmer-specified) merge functions at join boundaries. ...
... Our approach subsumes Kahn buffers of SHIM and the local-copy-merge protocol of concurrent revisions by an appropriate choice of method interface and policy. None of these approaches [19,21,22] uses a clock as a central barrier mechanism like our approach does. ...
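The local-copy-merge protocol described in these passages can be illustrated with a minimal, single-threaded Python sketch. The names and the cumulative merge function below are illustrative, not the CR library's actual API: fork takes a snapshot, each revision mutates its own copy, and join applies a programmer-supplied three-way merge.

```python
class Versioned:
    """A versioned cell: holds a value and a merge(main, joinee, base) function."""
    def __init__(self, value, merge):
        self.value = value
        self.merge = merge

class Revision:
    """A forked revision working on a local copy of the cell's value."""
    def __init__(self, cell, base):
        self.cell = cell
        self.base = base       # snapshot taken at fork time
        self.value = base      # local copy, mutated independently

def fork(cell):
    return Revision(cell, cell.value)

def join(rev):
    # Merge the revision's local copy back using the programmer-specified function.
    cell = rev.cell
    cell.value = cell.merge(cell.value, rev.value, rev.base)

# A counter whose merge adds the joinee's net increment to the main value,
# so concurrent increments are both preserved deterministically.
counter = Versioned(0, lambda main, joinee, base: main + (joinee - base))
r = fork(counter)
counter.value += 2     # main revision increments
r.value += 3           # forked revision increments its local copy
join(r)                # merged result: 2 + (3 - 0) = 5
```

The choice of merge function is what makes the model deterministic: the outcome depends only on the fork/join structure and the merges, not on scheduling.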
Chapter
Full-text available
Synchronous Programming (SP) is a universal computational principle that provides deterministic concurrency. The same input sequence with the same timing always results in the same externally observable output sequence, even if the internal behaviour generates uncertainty in the scheduling of concurrent memory accesses. Consequently, SP languages have always been strongly founded on mathematical semantics that support formal program analysis. So far, however, communication has been constrained to a set of primitive clock-synchronised shared memory (csm) data types, such as data-flow registers, streams and signals with restricted read and write accesses that limit modularity and behavioural abstractions.
... Atomic operations are supported in some weakly consistent data stores as highly available transactions (HATs) [11,9,7]. HATs support atomicity without reducing availability. ...
Article
Full-text available
Scalable and highly available systems often require data stores that offer weaker consistency guarantees than traditional relational database systems. The correctness of these applications highly depends on the resilience of the application model against data inconsistencies. In particular with regard to application security, it is difficult to determine which inconsistencies can be tolerated and which might lead to security breaches. In this paper, we discuss the problem of how to develop an access control layer for applications using weakly consistent data stores without losing the performance benefits gained by using weaker consistency models. We present ACGreGate, a Java framework for implementing correct access control layers for applications using weakly consistent data stores. Under certain requirements on the data store, ACGreGate ensures that the access control layer operates correctly with respect to dynamically adaptable security policies. We used ACGreGate to implement the access control layer of a student management system. This case study shows that practically useful security policies can be implemented with the framework incurring little overhead. A comparison with a setup using a centralized server shows the benefits of using ACGreGate for scalability of the service to geo-scale.
... But before presenting it (in §4), we need to define the semantics of the store itself: which values can operations on primitive objects return in an execution of the store? This is determined by the consistency model of causally consistent transactions [26,18,19,24,17,12,4], which we informally described in §1. To formalise it, we use a variant of the framework proposed by Burckhardt et al. [11,12,10], which defines the store semantics declaratively, without referring to implementation-level concepts such as replicas or messages. ...
... This is determined by the consistency model of causally consistent transactions [26,18,19,24,17,12,4], which we informally described in §1. To formalise it, we use a variant of the framework proposed by Burckhardt et al. [11,12,10], which defines the store semantics declaratively, without referring to implementation-level concepts such as replicas or messages. The framework models store executions using structures on events and relations in the style of weak memory models and allows us to define the semantics of the store in two stages. ...
... A correspondence between the declarative store specification and operational models closer to implementations was established elsewhere [11,12,10]. Although we do not present an operational model in this paper, we often explain various features of the store specification framework by referring to the implementation-level concepts they are meant to model. ...
Conference Paper
Modern large-scale distributed systems often rely on eventually consistent replicated stores, which achieve scalability in exchange for providing weak semantic guarantees. To compensate for this weakness, researchers have proposed various abstractions for programming on eventual consistency, such as replicated data types for resolving conflicting updates at different replicas and weak forms of transactions for maintaining relationships among objects. However, the subtle semantics of these abstractions makes using them correctly far from trivial. To address this challenge, we propose composite replicated data types, which formalise a common way of organising applications on top of eventually consistent stores. Similarly to an abstract data type, a composite data type encapsulates objects of replicated data types and operations used to access them, implemented using transactions.We develop a method for reasoning about programs with composite data types that reflects their modularity: the method allows abstracting away the internals of composite data type implementations when reasoning about their clients. We express the method as a denotational semantics for a programming language with composite data types. We demonstrate the effectiveness of our semantics by applying it to verify subtle data type examples and prove that it is sound and complete with respect to a standard non-compositional semantics.
... The above definition of eventual consistency assumes a total order of write actions. For the sake of simplicity, we avoid defining even weaker notions, such as those in which events are not globally ordered [11], which could also be recast into this framework. ...
... A thread of research investigates criteria and data types for the correct implementation of eventually consistent storage. The work in [11] studies a notion of store similar to the one we consider; it defines sufficient rules for the correct implementation of transactions in a server using revision diagrams. [9] proposes data types to ensure eventual consistency over cloud systems. ...
... A more general model of computation in which actions are just partially ordered could be more naturally represented by substituting traces with partially ordered sets of actions. We leave this extension, and the formalisation of weaker models of consistency, such as the Revision Diagrams studied in [11,29], as a future work. We also analyse some concrete implementations of stores with different consistency levels, by using idealised operational models. ...
Article
Full-text available
Managing data over cloud infrastructures raises novel challenges with respect to existing and well-studied approaches such as ACID and long-running transactions. One of the main requirements is to provide availability and partition tolerance in a scenario with replicas and distributed control. This comes at the price of a weaker consistency, usually called eventual consistency. These weak memory models have proved to be suitable in a number of scenarios, such as the analysis of large data with map reduce. However, due to the widespread availability of cloud infrastructures, weak storages are used not only by specialised applications but also by general purpose applications. We provide a formal approach, based on process calculi, to reason about the behaviour of programs that rely on cloud stores. For instance, it allows to check that the composition of a process with a cloud store ensures ‘strong’ properties through a wise usage of asynchronous message-passing; in this case, we say that the process supports the consistency level provided by the cloud store. The proposed approach is compositional: the support of a consistency level is preserved by parallel composition when the preorder used to compare process-store ensembles is the weak simulation.
... Weak forms of transactional guarantees can be made available under partitions, using consistency models such as eventually consistent transactions [Burckhardt et al., 2012a, 2014b], causally consistent transactions [Li et al., 2012, Lloyd et al., 2013], or highly available transactions [Bailis et al., 2013, 2014]. ...
... 2. Revision Consistency. Our system uses revision diagrams to guarantee eventual consistency, as proposed in [1]. Conceptually, the cloud stores the main revision, while devices maintain local revisions that are periodically synchronized. ...
... These models are connected by a fork-join automaton (an abstract data type supporting eventual consistency) derived automatically from the schema (Section 5). Together, these models extend and concretize earlier work on eventually consistent transactions [1]. ...
... yield does not force synchronization: it is perfectly acceptable for yield to do nothing at all (which is in fact all it can do in situations where the device is not connected). Another way to describe the effect of yield is that the absence of a yield guarantees isolation and atomicity; yield statements thus partition the execution into a form of transaction (called eventually consistent transactions in [1]). Effectively, this implies that everything is always executing inside a transaction. ...
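The yield semantics described in this passage can be approximated with a toy model. In this Python sketch (all names are hypothetical; the real protocol is far richer), updates between yields stay isolated in a local replica, and yield may, but need not, synchronize with the server. A disconnected yield is a no-op, exactly as the passage allows.

```python
class Server:
    """Holds the main revision's state (last-writer-wins per key)."""
    def __init__(self):
        self.state = {}

class Client:
    """Works on a local replica; yield_() delimits eventually consistent transactions."""
    def __init__(self, server):
        self.server = server
        self.replica = dict(server.state)
        self.pending = []              # updates buffered since the last sync

    def set(self, key, val):
        # Between yields, updates are isolated in the local replica.
        self.replica[key] = val
        self.pending.append((key, val))

    def yield_(self, connected=True):
        if not connected:
            return                     # yield is allowed to do nothing at all
        for key, val in self.pending:  # push this transaction's updates
            self.server.state[key] = val
        self.pending.clear()
        self.replica = dict(self.server.state)  # pull the latest state
```

Note that nothing ever executes outside a transaction: every update happens between two yields, and only yield can make it visible to others.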
Conference Paper
Full-text available
Mobile devices commonly access shared data stored on a server. To ensure responsiveness, many applications maintain local replicas of the shared data that remain instantly accessible even if the server is slow or temporarily unavailable. Despite its apparent simplicity and commonality, this scenario can be surprisingly challenging. In particular, a correct and reliable implementation of the communication protocol and the conflict resolution to achieve eventual consistency is daunting even for experts. To make eventual consistency more programmable, we propose the use of specialized cloud data types. These cloud types provide eventually consistent storage at the programming language level, and thus abstract the numerous implementation details (servers, networks, caches, protocols). We demonstrate (1) how cloud types enable simple programs to use eventually consistent storage without introducing undue complexity, and (2) how to provide cloud types using a system and protocol comprised of multiple servers and clients.
... We include the proof in the full version [4]. For proving our main result later on, we need to establish another basic fact about revision diagrams. ...
... We call a path direct if all of its f-edges (if any) appear after all of its j-edges (if any). The following lemma (which appears as a theorem in [6], and for which we include a proof in [4] as well) shows that we can always choose direct paths: ...
... The proof of our Theorem (in Section 3.5 below) constructs partial orders <_v, <_a from the revision diagram by (1) specifying x <_v y iff there is a path from x to y in the revision diagram, and (2) specifying <_a to order all events in a joined revision to occur between the joiner terminal and the join vertex. Note that the converse of Thm. 1 is not true, not even if restricted to finite histories (we include a finite counterexample in the full version [4]). Also note that the most difficult part of the proof is the safety, not the liveness, since the proof that <_a is a partial order extending <_v depends on the join condition in a nontrivial way. ...
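The two orders in the quoted construction can be written out explicitly; this is a restatement of the snippet, not the paper's exact formulation:

```latex
% Visibility: reachability in the revision diagram
x <_v y \;\iff\; \text{there is a path from } x \text{ to } y

% Arbitration: extend <_v so that, for every join with joiner terminal t
% and join vertex j, every event e of the joined revision satisfies
t <_a e <_a j, \qquad {<_v} \subseteq {<_a}
```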
Conference Paper
Full-text available
When distributed clients query or update shared data, eventual consistency can provide better availability than strong consistency models. However, programming and implementing such systems can be difficult unless we establish a reasonable consistency model, i.e. some minimal guarantees that programmers can understand and systems can provide effectively. To this end, we propose a novel consistency model based on eventually consistent transactions. Unlike serializable transactions, eventually consistent transactions are ordered by two order relations (visibility and arbitration) rather than a single order relation. To demonstrate that eventually consistent transactions can be effectively implemented, we establish a handful of simple operational rules for managing replicas, versions and updates, based on graphs called revision diagrams. We prove that these rules are sufficient to guarantee correct implementation of eventually consistent transactions. Finally, we present two operational models (single server and server pool) of systems that provide eventually consistent transactions.
Article
Hardware consolidation in the datacenter often leads to scalability bottlenecks from heavy utilization of critical resources, such as the storage and network bandwidth. Client-side caching on durable media is already applied at block level to reduce the storage backend load but has received criticism for added overhead, restricted sharing, and possible data loss at client crash. We introduce a journal to the kernel-level client of an object-based distributed filesystem to improve durability at high I/O performance and reduced shared resource utilization. Storage virtualization at the file interface achieves clear consistency semantics across data and metadata, supports native file sharing among clients, and provides flexible configuration of durable data staging at the host. Over a prototype that we have implemented, we experimentally quantify the performance and efficiency of the proposed Arion system in comparison to a production system. We run microbenchmarks and application-level workloads over a local cluster and a public cloud. We demonstrate reduced latency by 60% and improved performance up to 150% at reduced server network and disk bandwidth by 41% and 77%, respectively. The performance improvement reaches 92% for 16 relational databases as clients and gets as high as 11.3x with two key-value stores as clients.