Mohammad Alomari

University of Dammam, Dammam, Eastern Province, Saudi Arabia

Publications (10) · 18.76 Total Impact

  • Mohammad Alomari, Alan Fekete, Uwe Röhm
    ABSTRACT: Snapshot Isolation (SI) is a multiversion concurrency control mechanism that has been implemented by several open-source and commercial database systems (Oracle, Microsoft SQL Server, and previous releases of PostgreSQL). The main feature of SI is that a read operation does not block a write operation and vice versa, which allows a higher degree of concurrency than traditional two-phase locking. SI prevents many anomalies that appear at other isolation levels, but it can still produce non-serializable executions, in which database integrity constraints are violated. Several techniques are known that modify the application code, based on a pre-analysis, to ensure that every execution is serializable on engines running SI. We introduce a new technique called External Lock Manager (ELM). When using any such technique, one must choose which pairs of transactions should have conflicts introduced. We measure the performance impact of the available choices, among techniques and among conflicts.
    Information Systems 03/2014; 40:84–101. DOI:10.1016/j.is.2013.10.002 · 1.24 Impact Factor
  • M. Alomari
    ABSTRACT: Snapshot Isolation (SI) is a concurrency control mechanism that has been implemented by several commercial and open-source platforms. However, under SI a set of programs may experience non-serializable executions, in which database integrity constraints can be violated. An elegant approach from Fekete (2005) shows how to guarantee serializable execution on platforms that offer both SI and traditional two-phase locking (2PL) concurrency control, by running some transactions (pivots) with 2PL and the rest at SI. While Fekete's Pivot 2PL technique performs better than running all transactions at 2PL, it often loses considerable performance compared to running all transactions at SI. In this paper we identify the causes that harm the performance of Pivot 2PL, and we propose an improved approach, called Pivot Ordered2PL, in which a few transactions are rewritten (without changing their functionality). We evaluate Pivot Ordered2PL and find that it ensures serializable execution with performance close to that of SI.
    2013 ACS International Conference on Computer Systems and Applications (AICCSA); 01/2013
  • Sherif Sakr, Mohammad Alomari
    ABSTRACT: Database management technology has played a vital role in facilitating key advancements in the information technology field. Database researchers, and computer scientists in general, consider prestigious conferences their favorite and most effective venues for presenting their original research and gaining visibility. To retain the high quality and prestige of these conferences, program committee members play the major role of evaluating the submitted articles and deciding which submissions are included in the conference programs. In this article, we study the program committees of four top-tier and prestigious database conferences (SIGMOD, VLDB, ICDE, EDBT) over a period of 10 years (2001–2010). We report on the growth in the number of program committee members in comparison to the size of the research community over the last decade. We also analyze the rate of change in the membership of the committees across the different editions of these conferences. Finally, we report on the major contributing scholars in the committees of these conferences as a means of acknowledging their impact on the community. Keywords: Database technology – Program committees
    Scientometrics 04/2012; 91(1):173-184. DOI:10.1007/s11192-011-0530-7 · 2.27 Impact Factor
  • Sherif Sakr, Mohammad Alomari
    ABSTRACT: We analyze the database research publications of four major core database technology conferences (SIGMOD, VLDB, ICDE, EDBT), two main theoretical database conferences (PODS, ICDT) and three database journals (TODS, VLDB Journal, TKDE) over a period of 10 years (2001–2010). Our analysis considers only regular papers; we do not include short papers, demo papers, posters, tutorials or panels in our statistics. We rank the research scholars according to their number of publications in each conference/journal separately and in combination. We also report on the growth in the number of research publications and the size of the research community over the last decade.
    Scientometrics 02/2011; 88(2). DOI:10.1007/s11192-011-0385-y · 2.27 Impact Factor
  • Sherif Sakr, Anna Liu, Daniel M. Batista, Mohammad Alomari
    IEEE Communications Surveys & Tutorials 01/2011; 13:311-336. · 6.49 Impact Factor
  • Sherif Sakr, Anna Liu, D.M. Batista, Mohammad Alomari
    ABSTRACT: In the last two decades, the continuous increase of computational power has produced an overwhelming flow of data. Moreover, recent advances in Web technology have made it easy for any user to provide and consume content of any form. This has called for a paradigm shift in computing architecture and large-scale data processing mechanisms. Cloud computing is associated with a new paradigm for the provision of computing infrastructure, which shifts the location of this infrastructure to the network in order to reduce the costs associated with the management of hardware and software resources. This paper gives a comprehensive survey of numerous approaches and mechanisms for deploying data-intensive applications in the cloud, which are gaining a lot of momentum in both the research and industrial communities. We analyze the various design decisions of each approach and its suitability for supporting certain classes of applications and end-users. A discussion of some open issues and future challenges pertaining to scalability, consistency, and the economical processing of large-scale data in the cloud is provided. We highlight the characteristics of the best candidate classes of applications that can be deployed in the cloud.
    IEEE Communications Surveys & Tutorials 01/2011; 13(3):311-336. DOI:10.1109/SURV.2011.032211.00087 · 6.49 Impact Factor
  • M. Alomari, A. Fekete, U. Röhm
    ABSTRACT: Snapshot Isolation (SI) is a popular concurrency control mechanism that has been implemented by many commercial and open-source platforms (e.g. Oracle, PostgreSQL, and MS SQL Server 2005). Unfortunately, SI can result in non-serializable executions, in which database integrity constraints can be violated. The literature reports some techniques to ensure that all executions are serializable when run on an engine that uses SI for concurrency control. These modify the application by introducing conflicting SQL statements. However, with each of these techniques the DBA has to make a choice among possible transactions to modify, and as we previously showed, a bad choice of which transactions to modify can come with a hefty performance reduction. In this paper we propose a novel technique called ELM that introduces conflicts in a separate lock-manager object. Experiments with two platforms show that ELM has peak performance similar to SI, no matter which transactions are chosen for modification. That is, ELM is much less vulnerable to poor DBA choices than the previous techniques.
    IEEE 25th International Conference on Data Engineering (ICDE '09); 05/2009
  • Mohammad Alomari, Michael Cahill, Alan Fekete, U. Röhm
    ABSTRACT: Several common DBMS engines use the multiversion concurrency control mechanism called Snapshot Isolation (SI), even though application programs can experience non-serializable executions when run concurrently on such a platform. Several proposals exist for modifying the application programs, without changing their semantics, so that they are certain to execute serializably even on an engine that uses SI. We evaluate the performance impact of these proposals, and find that some have limited impact (only a few percent drop in throughput at a given multi-programming level) while others lead to a much greater reduction in throughput of up to 60% in high-contention scenarios. We present experimental results for both an open-source and a commercial engine. We relate these results to the theory, giving guidelines on which conflicts to introduce so as to ensure correctness with little impact on performance.
    IEEE 24th International Conference on Data Engineering (ICDE 2008); 05/2008
  • Mohammad Alomari, Michael Cahill, Alan Fekete, U. Röhm
    ABSTRACT: It is usually expected that performance is reduced by using stricter concurrency control, which lowers the risk of anomalies that can lead to data corruption. For example, the weak isolation level Read Committed (RC) allows anomalies that are prevented by two-phase locking (2PL), and because 2PL holds locks for longer than RC, it has lower throughput. We show that sometimes guaranteed correctness can be obtained along with better throughput than RC, by using the multiversion snapshot isolation mechanism together with modifications to application programs as proposed by Fekete et al. We investigate the conditions under which this effect occurs.
    IEEE/ACS International Conference on Computer Systems and Applications (AICCSA 2008); 01/2008
  • ABSTRACT: Snapshot Isolation (SI) concurrency control allows substantial performance gains compared to holding commit-duration read locks, while still avoiding many anomalies such as lost updates or inconsistent reads. However, for some sets of application programs, SI can result in non-serializable executions, in which database integrity constraints can be violated. The literature reports two different approaches to ensuring all executions are serializable while still using SI for concurrency control. In one approach, the application programs are modified (without changing their stand-alone semantics) by introducing extra conflicts. In the other approach, the application programs are not changed, but a small subset of them must be run using standard two-phase locking rather than SI. We compare the performance impact of these two approaches. Our conclusion is that the convenience of preserving the application code (and adjusting only the isolation level for some transactions) leads to a very substantial performance penalty against the best that can be done through application modification.
    Database Systems for Advanced Applications, 13th International Conference, DASFAA 2008, New Delhi, India, March 19-21, 2008. Proceedings; 01/2008
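
The serializability-under-SI entries above (the Information Systems 2014, ICDE '09, ICDE 2008 and DASFAA 2008 papers) revolve around one idea: introducing extra write-write conflicts so that SI's first-committer-wins rule aborts one of two transactions that would otherwise exhibit write skew. The sketch below is purely illustrative and not taken from any of these papers; it assumes PostgreSQL accessed through psycopg2, a hypothetical doctors(name, on_call) table with the rule that at least one doctor stays on call, and a hypothetical conflict_rows table used only to materialize the conflict.

    # Illustrative sketch only: write skew under Snapshot Isolation and the
    # conflict-materialization fix. The tables and connection string are
    # hypothetical; REPEATABLE READ gives SI semantics on PostgreSQL.
    import psycopg2

    DSN = "dbname=demo"  # placeholder connection string

    def take_doctor_off_call(doctor, materialize_conflict=False):
        conn = psycopg2.connect(DSN)
        conn.set_session(isolation_level="REPEATABLE READ")  # SI on PostgreSQL
        try:
            cur = conn.cursor()
            if materialize_conflict:
                # Both conflicting transactions update the same dummy row, so
                # SI's first-committer-wins rule aborts one of them instead of
                # letting both commit and leave nobody on call.
                cur.execute("UPDATE conflict_rows SET k = k + 1")
            cur.execute("SELECT count(*) FROM doctors WHERE on_call")
            (on_call,) = cur.fetchone()
            if on_call > 1:  # application rule: keep at least one doctor on call
                cur.execute("UPDATE doctors SET on_call = false WHERE name = %s",
                            (doctor,))
            conn.commit()
        finally:
            conn.close()

    # Running take_doctor_off_call('alice') and take_doctor_off_call('bob')
    # concurrently without the extra UPDATE lets both commit under SI, leaving
    # no doctor on call; with materialize_conflict=True one of them aborts.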
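
The External Lock Manager (ELM) contribution of the ICDE '09 and Information Systems 2014 papers takes the conflicts out of the database entirely: transaction pairs identified as dangerous by the pre-analysis acquire a lock from a manager that lives outside the DBMS before running under SI, so the SQL itself stays unchanged. The process-local class below is a hypothetical stand-in for that separate lock-manager service, shown only to illustrate the control flow; the conflict key and the run_sql_under_si callback are placeholders.

    # Hypothetical, process-local stand-in for the separate ELM service.
    import threading
    from contextlib import contextmanager

    class ExternalLockManager:
        def __init__(self):
            self._guard = threading.Lock()
            self._locks = {}  # conflict key -> lock

        @contextmanager
        def lock(self, key):
            with self._guard:
                lk = self._locks.setdefault(key, threading.Lock())
            lk.acquire()
            try:
                yield
            finally:
                lk.release()

    elm = ExternalLockManager()

    def run_dangerous_transaction(run_sql_under_si):
        # Only transaction pairs flagged as dangerous by the pre-analysis need
        # to agree on a conflict key; all other transactions run under plain SI
        # without ever contacting the lock manager.
        with elm.lock("doctors-on-call"):
            run_sql_under_si()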
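
The alternative family covered by the AICCSA 2013 and DASFAA 2008 entries leaves the application SQL untouched and instead runs the pivot transactions under lock-based concurrency control while everything else stays at SI. A minimal sketch of that dispatch decision follows, assuming Microsoft SQL Server via pyodbc (where SERIALIZABLE is lock-based and SNAPSHOT gives SI, provided snapshot isolation is enabled on the database); the connection string and the set of pivot names are placeholders chosen for illustration.

    # Hypothetical sketch of the Pivot 2PL dispatch: pivots get lock-based
    # SERIALIZABLE, everything else keeps SNAPSHOT isolation.
    import pyodbc

    CONN_STR = "DSN=demo"                   # placeholder
    PIVOT_TRANSACTIONS = {"close_account"}  # chosen by offline pre-analysis

    def run_transaction(name, statements):
        conn = pyodbc.connect(CONN_STR, autocommit=False)
        cur = conn.cursor()
        if name in PIVOT_TRANSACTIONS:
            cur.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
        else:
            cur.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT")
        for sql in statements:
            cur.execute(sql)
        conn.commit()
        conn.close()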

Publication Stats

126 Citations
18.76 Total Impact Points

Institutions

  • 2013–2014
    • University of Dammam
      • College of Computer Sciences and Information Technology
      Dammam, Eastern Province, Saudi Arabia
  • 2008–2012
    • University of Sydney
      • School of Information Technologies
      Sydney, New South Wales, Australia