Publication History

  • ABSTRACT: Energy costs now dominate IT infrastructure total cost of ownership, with datacentre operators predicted to spend more on energy than on hardware over the next five years. With Western European datacentre power consumption estimated at 56 TWh/year in 2007 and projected to double by 2020, improvements in the energy efficiency of IT operations are imperative. The issue is further compounded by social and political factors and by strict environmental legislation governing organisations. One example of such large IT systems is high-throughput cycle-stealing distributed computing, such as HTCondor and BOINC, which allows organisations to leverage spare capacity on existing infrastructure to undertake valuable computation. As a consequence of increased scrutiny of the energy impact of these systems, aggressive power management policies are often employed to reduce the energy footprint of institutional clusters, but in doing so these policies severely restrict the computational resources available to high-throughput systems. These policies are often configured to transition servers and end-user cluster machines into low-power states after only short idle periods, further compounding the issue of reliability. In this thesis, we evaluate operating policies for energy efficiency in large-scale computing environments by means of trace-driven discrete event simulation, leveraging real-world workload traces collected within Newcastle University. The major contributions of this thesis are: the evaluation of novel energy-efficient management policies for a decentralised peer-to-peer (P2P) BitTorrent environment; the introduction of a novel simulation environment for evaluating the energy efficiency of large-scale high-throughput computing systems, together with a generalisable model of energy consumption in such systems; the proposal and evaluation of resource allocation strategies for energy consumption in high-throughput computing systems under a real workload; the proposal and evaluation, for a real workload, of mechanisms to reduce wasted task execution and hence energy consumption; and an evaluation of the impact of fault tolerance mechanisms on energy consumption.
    School of Computing Science, Newcastle University, 01/2015, Degree: PhD
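    A minimal sketch of how a trace-driven estimate of cluster energy consumption might look, assuming a simple two-state (idle/busy) power model; the wattages and the trace format are illustrative assumptions, not values or code from the thesis:

    ```python
    # Minimal trace-driven energy estimate for a cluster of machines.
    # Assumes a two-state power model; wattages and the trace format
    # (machine id, task start, task end in seconds) are illustrative.

    IDLE_W = 60.0    # assumed idle power draw per machine (watts)
    BUSY_W = 180.0   # assumed busy power draw per machine (watts)

    def energy_kwh(trace, n_machines, horizon_s):
        """Estimate energy over a horizon from (machine, start, end) task records."""
        busy_s = [0.0] * n_machines
        for machine, start, end in trace:
            busy_s[machine] += min(end, horizon_s) - max(start, 0.0)
        total_j = 0.0
        for b in busy_s:
            total_j += b * BUSY_W + (horizon_s - b) * IDLE_W
        return total_j / 3.6e6   # joules -> kWh

    # Example: two machines over a one-hour horizon
    print(energy_kwh([(0, 0, 1800), (1, 600, 3600)], n_machines=2, horizon_s=3600))
    ```

    Policies such as powering machines down after an idle timeout can then be compared by replaying the same trace and accumulating energy per power state.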
  • ABSTRACT: The creation of a consistent system description is a challenging problem of requirements engineering. Formal and informal reasoning can greatly contribute to meeting this challenge. However, this demands that formal and informal reasoning and the system ...
    Science of Computer Programming 03/2014; 82:1. DOI:10.1016/j.scico.2013.06.001
  • 02/2014; DOI:10.1016/j.jisa.2014.02.002
  • ABSTRACT: Mobile ad hoc networks are becoming very attractive and useful in many kinds of communication and networking applications. Because they are amenable to numerical analysis, analytical modelling formalisms such as stochastic Petri nets, queueing networks and stochastic process algebra have been widely used for performance analysis of communication systems. To the best of our knowledge, there is no previous analytical study that analyses the performance of multi-hop ad hoc networks, where mobile nodes move according to a random mobility model, in terms of end-to-end delay and throughput. This work presents a novel analytical framework developed using stochastic reward nets for modelling and analysis of multi-hop ad hoc networks, based on the IEEE 802.11 DCF MAC protocol, where mobile nodes move according to the random waypoint mobility model. The proposed framework is used to analyse the performance of multi-hop ad hoc networks as a function of network parameters such as the transmission range, carrier sensing range, interference range, number of nodes, network area size, packet size, and packet generation rate. The framework is organised into several models to break up the complexity of modelling the complete network and to make it easier to analyse each model as required. It is based on the idea of decomposition and fixed-point iteration of stochastic reward nets. The proposed models are validated using extensive simulations.
    Simulation Modelling Practice and Theory 11/2013; 38:69–97. DOI:10.1016/j.simpat.2013.06.005
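    A toy sketch of the decomposition and fixed-point iteration idea: two hypothetical submodels are solved in turn, each taking the other's latest output as a parameter, until the coupled quantities stop changing. The functions below are placeholders standing in for solving the individual stochastic reward nets, not the paper's actual models:

    ```python
    # Fixed-point iteration between two coupled submodels, as used when a
    # complete network model is decomposed into smaller pieces.

    def solve_mac_model(channel_busy_prob):
        # placeholder: per-node transmission attempt rate given channel occupancy
        return 0.5 / (1.0 + channel_busy_prob)

    def solve_channel_model(attempt_rate):
        # placeholder: probability the channel is sensed busy given attempt rate
        return 1.0 - (1.0 / (1.0 + attempt_rate))

    def fixed_point(tol=1e-9, max_iter=1000):
        busy = 0.5                      # initial guess
        for _ in range(max_iter):
            attempt = solve_mac_model(busy)
            new_busy = solve_channel_model(attempt)
            if abs(new_busy - busy) < tol:
                return attempt, new_busy
            busy = new_busy
        raise RuntimeError("did not converge")

    print(fixed_point())
    ```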
  • ABSTRACT: A constant influx of new data poses a challenge in keeping the annotation in biological databases current. Most biological databases contain significant quantities of textual annotation, which often contains the richest source of knowledge. Many databases reuse existing knowledge; during the curation process annotations are often propagated between entries. However, this is often not made explicit. Therefore, it can be hard, potentially impossible, for a reader to identify where an annotation originated. Within this work we attempt to identify annotation provenance and track its subsequent propagation. Specifically, we exploit annotation reuse within the UniProt Knowledgebase (UniProtKB) at the level of individual sentences. We describe a visualisation approach for the provenance and propagation of sentences in UniProtKB which enables a large-scale statistical analysis. Initially, levels of sentence reuse within UniProtKB were analysed, showing that reuse is heavily prevalent, which enables the tracking of provenance and propagation. By analysing sentences throughout UniProtKB, a number of interesting propagation patterns were identified, covering a substantial number of sentences. Many sentences remain in the database after they have been removed from the entries where they originally occurred. Analysing a subset of these sentences suggests that a proportion are erroneous, whilst others appear to be inconsistent. These results suggest that being able to visualise sentence propagation and provenance can aid in determining the accuracy and quality of textual annotation. Source code and supplementary data are available from the authors' website at http://homepages.cs.ncl.ac.uk/m.j.bell1/sentence_analysis/.
    PLoS ONE 10/2013; 8(10):e75541. DOI:10.1371/journal.pone.0075541
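    An illustrative sketch of how sentence reuse can be indexed across database releases so that provenance (first occurrence) and propagation (later spread) can be followed; the accessions, sentences and release structure below are invented for the example and are not the paper's pipeline:

    ```python
    # Toy illustration: index each annotation sentence to the entries it
    # appears in, per release, then report where it first occurred and how
    # widely it appears in later releases.

    from collections import defaultdict

    # release number -> list of (entry_accession, sentence); data is made up
    releases = {
        1: [("P12345", "Binds ATP."), ("P67890", "Binds ATP.")],
        2: [("P67890", "Binds ATP."), ("Q11111", "Binds ATP.")],
    }

    occurrences = defaultdict(lambda: defaultdict(set))  # sentence -> release -> entries
    for rel, rows in sorted(releases.items()):
        for accession, sentence in rows:
            occurrences[sentence][rel].add(accession)

    for sentence, by_release in occurrences.items():
        first = min(by_release)
        print(f"{sentence!r} first seen in release {first} in {sorted(by_release[first])}")
        for rel in sorted(by_release):
            print(f"  release {rel}: appears in {len(by_release[rel])} entries")
    ```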
  • ABSTRACT: The human connectome at the level of fiber tracts between brain regions has been shown to differ in patients with brain disorders compared to healthy control groups. Nonetheless, there is a potentially large number of different network organizations for individual patients that could lead to cognitive deficits prohibiting correct diagnosis. Therefore changes that can distinguish groups might not be sufficient to diagnose the disease that an individual patient suffers from and to indicate the best treatment option for that patient. We describe the challenges introduced by the large variability of connectomes within healthy subjects and patients and outline three common strategies to use connectomes as biomarkers of brain diseases. Finally, we propose a fourth option in using models of simulated brain activity (the dynamic connectome) based on structural connectivity rather than the structure (connectome) itself as a biomarker of disease. Dynamic connectomes, in addition to currently used structural, functional, or effective connectivity, could be an important future biomarker for clinical applications.
    Frontiers in Human Neuroscience 08/2013; 7:484. DOI:10.3389/fnhum.2013.00484
  • ABSTRACT: Customers submit streams of jobs of different types for execution at a service center. The number of jobs in each stream and the rate of their submission are specified. A service level agreement indicates the charge paid by the customer, the quality of service promised by the provider and the penalty to be paid by the latter if the QoS requirement is not met. To save energy, servers may be powered up and down dynamically. The objective is to maximize the revenues received while minimizing the penalties paid and the energy consumption costs of the servers used. To that end, heuristic policies are proposed for making decisions about stream admissions and server activation and deactivation. Those policies are motivated by queueing models. The results of several simulation experiments are described.
    Electronic Notes in Theoretical Computer Science 08/2013; 296:199–210. DOI:10.1016/j.entcs.2013.07.013
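    A minimal sketch of the kind of threshold heuristic that queueing-motivated policies of this sort suggest: keep just enough servers powered on to hold per-server utilisation below a target. The utilisation target and the example rates are assumptions, not values from the paper:

    ```python
    # Illustrative threshold heuristic for server activation/deactivation.
    import math

    TARGET_UTILISATION = 0.7   # assumed per-server utilisation target

    def servers_needed(arrival_rate, service_rate):
        """Smallest server count keeping offered load per server below the target."""
        offered_load = arrival_rate / service_rate      # Erlangs
        return max(1, math.ceil(offered_load / TARGET_UTILISATION))

    def adjust(active, arrival_rate, service_rate):
        needed = servers_needed(arrival_rate, service_rate)
        if needed > active:
            return needed, f"power up {needed - active} server(s)"
        if needed < active:
            return needed, f"power down {active - needed} server(s)"
        return active, "no change"

    # Example: 4 servers on, jobs arriving at 20/s, each server completing 5/s
    print(adjust(active=4, arrival_rate=20.0, service_rate=5.0))
    ```

    Admission decisions for new job streams can be layered on top by checking whether a stream's expected revenue outweighs the extra energy and penalty costs it would incur.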
  • ABSTRACT: The adoption and deployment of 3DTV can be seen as a major step in the history of television, comparable to the transition from analogue to digital and from standard to high definition TV. Although 3D is expected to move from the cinema into people's homes, there is still a lack of knowledge on how people (future end users) perceive 3DTV and how this influences their viewing experience as well as their acceptance of 3DTV. Within this paper, findings from a three-day field evaluation study on people's 3DTV experiences, focusing on the feelings of sickness and presence, are presented. In contrast to the traditional controlled laboratory setting, the study was conducted in the public setting of a shopping center and involved 700 participants. The study revealed initial insights on users' feelings of presence and sickness when watching 3DTV content. Results from this explorative study show that most of the participants reported symptoms of sickness after watching 3DTV, with an effect of gender and age on the reported feeling of sickness. Our results further suggest that users' previous experience with 3D content has an influence on how realistic people rate the viewing experience and how involved they feel. The particularities of the study environment, a shopping mall, are reflected in our findings, and future research directions and action points for investigating people's viewing experiences of 3DTV are summarized.
    Entertainment Computing 02/2013; 4(1):71-81. DOI:10.1016/j.entcom.2012.03.001
  • ABSTRACT: This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction.
    Philosophical Transactions of The Royal Society A Mathematical Physical and Engineering Sciences 01/2013; 371(1983):20120085. DOI:10.1098/rsta.2012.0085
  • ABSTRACT: Event processing involves continuous evaluation of queries over streams of events. Response-time optimization is traditionally done over a fixed set of nodes and/or by using metrics measured at query-operator levels. Cloud computing makes it easy to acquire and release computing nodes as required. Leveraging this flexibility, we propose a novel, queueing-theory-based approach for meeting specified response-time targets against fluctuating event arrival rates by drawing only the necessary amount of computing resources from a cloud platform. In the proposed approach, the entire processing engine of a distinct query is modelled as an atomic unit for predicting response times. Several such units hosted on a single node are modelled as a multiple class M/G/1 system. These aspects eliminate intrusive, low-level performance measurements at run-time, and also offer portability and scalability. Using model-based predictions, cloud resources are efficiently used to meet response-time targets. The efficacy of the approach is demonstrated through cloud-based experiments.
    Philosophical Transactions of The Royal Society A Mathematical Physical and Engineering Sciences 01/2013; 371(1983):20120095. DOI:10.1098/rsta.2012.0095
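    A rough sketch of the M/G/1 prediction such an approach rests on: the Pollaczek-Khinchine formula gives the mean waiting time at a node shared by several query classes served first-come-first-served, and each class's predicted response time is that waiting time plus its own mean service time. The class parameters below are made up for illustration:

    ```python
    # Multi-class M/G/1 (FCFS) mean response-time prediction.
    def mg1_response_times(classes):
        """classes: list of (arrival_rate, mean_service, second_moment_of_service)."""
        rho = sum(lam * es for lam, es, _ in classes)          # node utilisation
        if rho >= 1.0:
            raise ValueError("node is overloaded (utilisation >= 1)")
        # Pollaczek-Khinchine mean waiting time, summed over classes
        wait = sum(lam * es2 for lam, _, es2 in classes) / (2.0 * (1.0 - rho))
        return [wait + es for _, es, _ in classes]

    # Two query classes: (lambda in events/s, E[S] in s, E[S^2] in s^2)
    print(mg1_response_times([(5.0, 0.05, 0.005), (2.0, 0.10, 0.02)]))
    ```

    Capacity decisions then amount to choosing the smallest number of nodes for which the predicted response times stay below the agreed targets as arrival rates fluctuate.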