Publication History

  • ABSTRACT: From the text: This special issue of Theoretical Computer Science collects 10 papers selected from among 21 submissions received as a result of a general call for papers on Quantitative Aspects of Programming Languages and Systems after the QAPL 2011 and QAPL 2012 editions of the workshop. Out of the selected papers, half are based on work presented at either QAPL 2011 or QAPL 2012. All papers have been peer-reviewed in accordance with the high standards of Theoretical Computer Science.
    Theoretical Computer Science 06/2014; 538:1. DOI:10.1016/j.tcs.2014.05.011
  • ABSTRACT: The biochemical paradigm is well-suited for modelling autonomous systems, and new programming languages are emerging from this approach. However, in order to validate such programs, we need to define their semantics precisely and to provide verification techniques. In this paper, we consider a higher-order biochemical calculus that models the structure of system states and their dynamics by means of rewriting abstractions, namely rules and strategies. We extend this calculus with a runtime verification technique in order to detect violations of specified properties automatically. The property specification language covers a subclass of LTL safety and liveness properties.
    Electronic Notes in Theoretical Computer Science 12/2013; 297:27–46. DOI:10.1016/j.entcs.2013.12.003
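    The runtime-verification idea above lends itself to a small illustration. The following Python sketch checks a safety property of the form G(invariant) over a stream of states; the monitor shape, state encoding and invariant are invented for illustration, since the paper works in its own calculus:

      # Monitor for an LTL safety property G(invariant): consume system
      # states as rewriting produces them and report the first violation.
      def make_safety_monitor(invariant):
          step = 0
          def observe(state):
              nonlocal step
              step += 1
              if not invariant(state):
                  return f"violation at step {step}: {state!r}"
              return None
          return observe

      # Hypothetical example: a molecule count must never go negative.
      monitor = make_safety_monitor(lambda s: s["molecules"] >= 0)
      for state in [{"molecules": 3}, {"molecules": 1}, {"molecules": -1}]:
          verdict = monitor(state)
          if verdict:
              print(verdict)  # violation at step 3: {'molecules': -1}
              break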
  • ABSTRACT: We address two distinct problems with de facto mobile device authentication, as provided by a password or sketch. Firstly, device activity is permitted on an all-or-nothing basis, depending on whether the user successfully authenticates at the beginning of a session. This ignores the fact that tasks performed on a mobile device have a range of sensitivities, depending on the nature of the data and services accessed. Secondly, users are forced to re-authenticate frequently due to the bursty nature that characterizes mobile device use. Owners react to this by disabling the mechanism, or by choosing a weak “secret”. To address both issues, we propose an extensible Transparent Authentication Framework that integrates multiple behavioral biometrics with conventional authentication to implement an effortless and continuous authentication mechanism. Our security and usability evaluation of the proposed framework showed that a legitimate device owner can perform all device tasks, while being asked to authenticate explicitly 67% less often than without a transparent authentication method. Furthermore, our evaluation showed that attackers are soon denied access to on-device tasks as their behavioral biometrics are collected. Our results support the creation of a working prototype of our framework, and provide support for further research into transparent authentication on mobile devices.
    Computers & Security 11/2013; 39:127–136. DOI:10.1016/j.cose.2013.05.005
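    The core decision in such a framework can be sketched in a few lines of Python. All names, weights and thresholds below are invented for illustration; the paper's actual fusion of behavioral biometrics is more elaborate:

      # Per-task sensitivity: more sensitive tasks demand higher confidence.
      TASK_THRESHOLDS = {"play_music": 0.2, "read_email": 0.5, "mobile_banking": 0.9}

      def fused_confidence(scores, weights):
          """Weighted average of individual biometric scores in [0, 1]."""
          total = sum(weights.values())
          return sum(scores[k] * w for k, w in weights.items()) / total

      def authorise(task, scores, weights):
          """Allow the task silently, or fall back to explicit authentication."""
          if fused_confidence(scores, weights) >= TASK_THRESHOLDS[task]:
              return "allow"
          return "prompt for PIN/password"

      scores = {"gait": 0.8, "keystroke": 0.6, "app_usage": 0.7}
      weights = {"gait": 1.0, "keystroke": 2.0, "app_usage": 1.0}
      print(authorise("read_email", scores, weights))      # allow
      print(authorise("mobile_banking", scores, weights))  # prompt for PIN/password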
  • ABSTRACT: We identify four roles that social networking plays in the 'attribution problem', which obscures whether or not cyber-attacks were state-sponsored. First, social networks motivate individuals to participate in Distributed Denial of Service attacks by providing malware and identifying potential targets. Second, attackers use an individual's social network to focus attacks, through spear phishing. Recipients are more likely to open infected attachments when they come from a trusted source. Third, social networking infrastructures create disposable architectures to coordinate attacks through command and control servers. The ubiquitous nature of these architectures makes it difficult to determine who owns and operates the servers. Finally, governments recruit anti-social criminal networks to launch attacks on third-party infrastructures using botnets. The closing sections identify a roadmap to increase resilience against the 'dark side' of social networking. Practitioner Summary: This paper provides readers with an overview of state-sponsored cyber-attacks. I show how many of these threats have exploited social networks and social media. The aim was to alert practitioners to the dark side of computing, where attackers learn to exploit new interaction techniques and new forms of working.
    Ergonomics 07/2013; 57(3). DOI:10.1080/00140139.2013.812749
  • ABSTRACT: We present formal specification and verification of a robot moving in a complex network, using temporal sequence learning to avoid obstacles. Our aim is to demonstrate the benefit of using a formal approach to analyze such a system as a complementary approach to simulation. We first describe a classical closed-loop simulation of the system and compare this approach to one in which the system is analyzed using formal verification. We show that formal verification has some advantages over classical simulation and finds deficiencies that our classical simulation did not identify. Specifically, we present a formal specification of the system, defined in the Promela modeling language, and show how the associated model is verified using the Spin model checker. We then introduce an abstract model that is suitable for verifying the same properties for any environment with obstacles under a given set of assumptions. We outline how we can prove that our abstraction is sound: any property that holds for the abstracted model will hold in the original (unabstracted) model.
    Neural Computation 06/2013; DOI:10.1162/NECO_a_00493
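    The essence of the verification step, exhaustive exploration of every reachable state against a safety property, can be illustrated without Spin. This Python breadth-first search over an invented grid world is only a sketch of the idea; the paper's actual models are written in Promela:

      from collections import deque

      OBSTACLES = {(1, 1), (2, 3)}           # invented environment
      WIDTH, HEIGHT = 4, 4
      MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]

      def successors(pos):
          """Moves the (hypothetical) controller allows: stay on the grid
          and never step onto a known obstacle."""
          for dx, dy in MOVES:
              nxt = (pos[0] + dx, pos[1] + dy)
              if 0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT and nxt not in OBSTACLES:
                  yield nxt

      def verify_safety(start):
          """Explore all reachable states; fail if any violates G(not on obstacle)."""
          seen, frontier = {start}, deque([start])
          while frontier:
              pos = frontier.popleft()
              if pos in OBSTACLES:
                  return False, pos          # counterexample state
              for nxt in successors(pos):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append(nxt)
          return True, None

      print(verify_safety((0, 0)))           # (True, None): the property holds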
  • ABSTRACT: The problem of content-based video retrieval continues to pose a challenge to the research community, with the performance of video retrieval systems remaining low due to the semantic gap. In this paper we consider whether taking advantage of context can aid the video retrieval process by making the prediction of relevance easier: if it is easier for a classification system to predict the relevance of a video shot under a given context, then that context also has potential to improve retrieval, since the underlying features better differentiate relevant from non-relevant video shots. We use an operational definition of context, where datasets can be split into disjoint sub-collections which reflect a particular context. Contexts considered include task difficulty and user expertise, among others. In the classification process, four main types of features are used to represent video shots: conventional low-level visual features representing physical properties of the video shots, behavioral features based on user interaction with the video shots, and two different bag-of-words features obtained from Automatic Speech Recognition applied to the audio of the video. We measure how well each kind of video representation performs and, for each of these representations, our datasets are then split into different contexts in order to discover which contexts affect the performance of a number of trained classifiers. Thus, we aim to discover contexts which improve the classifier’s performance and whether this improvement is consistent regardless of the kind of representation. Experimental results show which of the tested document representations works best for the different features; following on from this, we then identify the contexts which result in the classifiers performing better.
    Information Processing & Management 01/2013; DOI:10.1016/j.ipm.2010.05.003
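    The experimental design, comparing classifiers trained per context against a context-agnostic baseline, can be sketched as follows. The features, contexts and data are synthetic placeholders, and scikit-learn is assumed to be available:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 400
      X = rng.normal(size=(n, 10))             # stand-in for shot features
      context = rng.integers(0, 2, size=n)     # e.g. easy vs. hard task
      # Relevance depends on the features differently in each context.
      y = ((X[:, 0] + np.where(context == 0, X[:, 1], -X[:, 1])) > 0).astype(int)

      pooled = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
      print(f"pooled accuracy: {pooled:.2f}")
      for c in (0, 1):
          mask = context == c
          acc = cross_val_score(LogisticRegression(), X[mask], y[mask], cv=5).mean()
          print(f"context {c} accuracy: {acc:.2f}")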
  • ABSTRACT: We consider the Bayesian analysis of mechanistic models describing the dynamic behavior of ligand-gated ion channels. The opening of the transmembrane pore in an ion channel is brought about by conformational changes in the protein, which results in a flow of ions through the pore. Remarkably, given the diameter of the pore, the flow of ions from a small number of channels or indeed from a single ion channel molecule can be recorded experimentally. This produces a large time-series of high-resolution experimental data, which can be used to investigate the gating process of these channels. We give a brief overview of the achievements and limitations of alternative maximum-likelihood approaches to this type of modeling, before investigating the statistical issues associated with analyzing stochastic model reaction mechanisms from a Bayesian perspective. Finally, we compare a number of Markov chain Monte Carlo algorithms that may be used to tackle this challenging inference problem.
    Methods in Molecular Biology 01/2013; 1021:247–272. DOI:10.1007/978-1-62703-450-0_13
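    As a toy version of the inference problem discussed in the chapter, the Metropolis-Hastings sketch below infers the closing rate of a two-state (open/closed) channel from open dwell times, which are exponentially distributed under the Markov gating model. The data, prior and proposal are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(1)
      true_rate = 50.0                                # s^-1, hypothetical
      dwells = rng.exponential(1.0 / true_rate, 500)  # simulated open dwell times

      def log_post(rate):
          if rate <= 0:
              return -np.inf
          loglik = len(dwells) * np.log(rate) - rate * dwells.sum()
          return loglik - rate / 100.0                # Exponential(mean 100) prior

      rate, samples = 10.0, []
      for _ in range(20_000):
          prop = rate * np.exp(0.1 * rng.normal())    # log-scale random walk
          # Accept with the Hastings correction for the multiplicative proposal.
          if np.log(rng.uniform()) < log_post(prop) - log_post(rate) + np.log(prop / rate):
              rate = prop
          samples.append(rate)

      print(f"posterior mean closing rate: {np.mean(samples[5000:]):.1f} s^-1")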
  • ABSTRACT: Signalling pathways are well-known abstractions that explain the mechanisms whereby cells respond to signals. Collections of pathways form networks, and interactions between pathways in a network, known as cross-talk, enable further complex signalling behaviours. While there are several formal modelling approaches for signalling pathways, none make cross-talk explicit; the aim of this paper is to define and categorise cross-talk in a rigorous way. We define a modular approach to pathway and network modelling, based on the module construct in the PRISM modelling language, and a set of generic signalling modules. Five different types of cross-talk are defined according to various biologically meaningful combinations of variable sharing, synchronisation labels and reaction renaming. The approach is illustrated with a case-study analysis of cross-talk between the TGF-β, WNT and MAPK pathways.
    Theoretical Computer Science 10/2012; 456:30–50. DOI:10.1016/j.tcs.2012.07.003
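    The paper's definitions are given in the PRISM language; as a loose analogue, the Gillespie-style simulation below shows the "variable sharing" flavour of cross-talk, with two invented pathway modules coupled through a shared species S (all species and rates are made up for illustration):

      import random

      state = {"A": 100, "B": 100, "S": 0}

      # (propensity, state update) pairs: S is written by pathway 1 and
      # consumed by pathway 2, which is the cross-talk.
      reactions = [
          (lambda s: 0.05 * s["A"], {"A": -1, "S": +1}),           # pathway 1 emits S
          (lambda s: 0.01 * s["S"] * s["B"], {"S": -1, "B": -1}),  # pathway 2 reads S
      ]

      random.seed(0)
      t = 0.0
      while t < 10.0:
          rates = [f(state) for f, _ in reactions]
          total = sum(rates)
          if total == 0:
              break
          t += random.expovariate(total)          # time to next reaction
          r = random.uniform(0, total)            # pick a reaction by propensity
          for rate, (_, update) in zip(rates, reactions):
              if r < rate:
                  for species, delta in update.items():
                      state[species] += delta
                  break
              r -= rate

      print(t, state)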
  • ABSTRACT: Packet loss is a major problem for real-time Internet applications. Markov models of packet loss are often used to develop and evaluate the performance of these applications. Despite their wide use, these models have not been validated in terms of how well they capture the loss conditions experienced by residential Internet users. We evaluate the accuracy of common packet loss models using traces of IPTV-like traffic measured on residential ADSL and Cable links, and find that these models are insufficient to capture the observed packet loss patterns. We introduce a new type of model, incorporating packet delay information, and show improved accuracy over previous models.
    LCN 2012: Proceedings of the 37th Annual IEEE Conference on Local Computer Networks; 10/2012
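    The classic two-state Gilbert(-Elliott) model is representative of the Markov loss models the paper evaluates. The sketch below simulates it with invented transition probabilities (the paper's improved model additionally conditions on packet delay, which is not reproduced here):

      import random

      def gilbert_losses(n, p_good_to_bad=0.01, p_bad_to_good=0.3, loss_in_bad=0.8):
          """Yield True for each lost packet, driven by a good/bad channel state."""
          state = "good"
          for _ in range(n):
              yield state == "bad" and random.random() < loss_in_bad
              if state == "good":
                  if random.random() < p_good_to_bad:
                      state = "bad"
              elif random.random() < p_bad_to_good:
                  state = "good"

      random.seed(0)
      losses = list(gilbert_losses(100_000))
      print(f"overall loss rate: {sum(losses) / len(losses):.3%}")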