Journal of Computer and System Sciences (J Comput Syst Sci)

Description

The Journal of Computer and System Sciences publishes original research papers in computer science and related subjects in system science, with attention to the relevant mathematical theory. Applications-oriented papers may also be accepted. Research areas include traditional subjects such as theory of algorithms and computability, formal languages and automata theory, as well as contemporary subjects such as complexity theory, algorithmic complexity, parallel and distributed computing, computer networks, neural networks, computational learning theory, database theory and practice, and computer modeling of complex systems.

  • Impact factor
    1.00
  • 5-year impact
    1.11
  • Cited half-life
    0.00
  • Immediacy index
    0.29
  • Eigenfactor
    0.01
  • Article influence
    0.97
  • Website
    Journal of Computer and System Sciences website
  • Other titles
    Journal of computer and system sciences (Online), Journal of computer and system sciences
  • ISSN
    1090-2724
  • OCLC
    36943413
  • Material type
    Document, Periodical, Internet resource
  • Document type
    Internet Resource, Computer File, Journal / Magazine / Newspaper

Publications in this journal

  • Journal of Computer and System Sciences 02/2015; 81(1):1–2.
  • ABSTRACT: Accurate simulation is vital for the proper design and evaluation of any computing architecture. As computing moves toward the heterogeneous era, researchers seek unified simulation frameworks that can model heterogeneous architectures such as CPU and GPU devices and their interactions. In this paper, we introduce the MCMG (Multi-CPU Multi-GPU) simulator, a cycle-accurate, modular and open-source toolset that enables simulating x86 CPUs and Nvidia G80-like GPUs simultaneously. Targeting heterogeneous architectural exploration, MCMG supports full configuration of multiple CPUs, GPUs and their memory sub-system; the relative running frequency of each GPU, not only of each CPU, can also be defined conveniently. We validate the simulator with a preliminary architectural exploration study, then present shared LLC access results for heterogeneous cores together with a plausible explanation, and finally conclude.
    Journal of Computer and System Sciences 02/2015; 81(1):57–71.
  • ABSTRACT: Functional correctness of low-level operating-system (OS) code is an indispensable requirement. However, many applications also rely on quantitative aspects such as speed, energy efficiency, resilience to errors and other cost factors. We report on our experiences of applying probabilistic model-checking techniques to analyse the quantitative long-run behaviour of low-level OS code. Our approach, illustrated in a case study analysing a simple test-and-test-and-set (TTS) spinlock protocol, combines measurement-based simulation with probabilistic model checking to obtain high-level models of the performance of realistic systems and to tune the models to predict future system behaviour. We report how we obtained a nearly perfect match of analytic results and measurements, and how we tackled the state-explosion problem to obtain model-checking results for large numbers of processes where measurements are no longer feasible. These results gave us valuable insights into the delicate interplay between lock load, average spinning times and other performance measures.
    Journal of Computer and System Sciences 02/2015; 81(1):258–287.
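The kind of steady-state lock analysis described above can be sketched in a few lines. The following is a minimal birth-death (machine-repairman-style) approximation of my own, not the paper's measured model: each of n processes requests the lock at rate lam while idle, and the current holder releases at rate mu.

```python
def spinlock_steady_state(n, lam, mu):
    """Steady-state distribution of a birth-death spinlock model:
    state k = number of processes contending for the lock (0..n).
    Birth rate (n - k) * lam (idle processes requesting), death
    rate mu (the holder releasing). Detailed balance gives
    pi_{k+1} = pi_k * (n - k) * lam / mu, then normalise."""
    weights = [1.0]
    for k in range(n):
        weights.append(weights[-1] * (n - k) * lam / mu)
    total = sum(weights)
    return [w / total for w in weights]

def avg_spinning(n, lam, mu):
    """Expected number of processes busy-waiting, i.e. contending
    processes minus the one holding the lock."""
    pi = spinlock_steady_state(n, lam, mu)
    return sum(max(k - 1, 0) * p for k, p in enumerate(pi))
```

For n = 2 and lam = mu the chain weights are (1, 2, 2), so on average 0.4 processes are spinning; increasing lam (the lock load) drives this expectation up, which is the qualitative effect the paper quantifies.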
  • ABSTRACT: It is well known that processing big graph data in the Cloud can be costly: it involves complex, repeated iterations that raise challenges such as parallel memory bottlenecks, deadlocks, and inefficiency. To tackle these challenges, we propose a novel technique for effectively processing big graph data in the Cloud. Specifically, the big data are compressed using their spatiotemporal features. By exploiting spatial data correlation, we partition a graph data set into clusters; within a cluster, the workload can be shared by inference based on time-series similarity. By exploiting temporal correlation, temporal data compression is conducted within each time series, i.e. each single graph edge. A novel data-driven scheduling technique is also developed to optimise data processing. The experimental results demonstrate that the spatiotemporal compression and scheduling achieve significant performance gains in terms of data size and data fidelity loss.
    Journal of Computer and System Sciences 12/2014;
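The trade-off between data size and fidelity loss in temporal compression can be illustrated with a toy lossy run-length scheme (a hypothetical illustration, not the paper's method): consecutive samples on an edge's time series are merged while they stay within a tolerance eps of the run's first value.

```python
def compress_series(values, eps):
    """Lossy run-length compression: merge consecutive samples whose
    deviation from the current run's first value stays within eps.
    Returns (start_index, representative_value, run_length) tuples."""
    runs = []
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and abs(values[j + 1] - values[i]) <= eps:
            j += 1
        runs.append((i, values[i], j - i + 1))
        i = j + 1
    return runs

def decompress(runs):
    """Reconstruct an approximate series from the runs."""
    out = []
    for _, v, length in runs:
        out.extend([v] * length)
    return out
```

A five-sample series with two stable levels compresses to two runs, and every reconstructed sample stays within eps of the original, bounding the fidelity loss.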
  • ABSTRACT: Cloud systems provide significant benefits by allowing users to store massive amounts of data on demand in a cost-effective manner. Role-based access control (RBAC) is a well-known access control model which can be used to protect the security of cloud data storage. Although cryptographic RBAC schemes have been developed recently to secure data outsourcing, these schemes assume the existence of a trusted administrator managing all the users and roles, which is not realistic in large-scale systems. In this paper, we introduce a cryptographic administrative model, AdC-RBAC, for managing and enforcing access policies for cryptographic RBAC schemes. The AdC-RBAC model uses cryptographic techniques to ensure that administrative tasks are performed only by authorised administrative roles. We then propose a role-based encryption (RBE) scheme and show how the AdC-RBAC model decentralises the administrative tasks in the RBE scheme, thereby making it practical for security policy management in large-scale cloud systems.
    Journal of Computer and System Sciences 12/2014;
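The underlying RBAC model, stripped of all cryptography, amounts to simple bookkeeping: users get roles, roles carry permissions. A minimal sketch of that base model follows (plain access-control logic only; the paper's contribution is enforcing these checks cryptographically rather than via a trusted reference monitor).

```python
class RBAC:
    """Minimal non-cryptographic RBAC sketch: an access request
    succeeds only if some role assigned to the user grants the
    requested permission."""

    def __init__(self):
        self.user_roles = {}   # user -> set of role names
        self.role_perms = {}   # role -> set of permissions

    def assign_role(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def check(self, user, perm):
        return any(perm in self.role_perms.get(r, set())
                   for r in self.user_roles.get(user, set()))
```

In a cryptographic RBAC scheme such as RBE, the `check` step is replaced by the ability (or inability) to decrypt: only members of an authorised role hold keys that open the data.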
  • ABSTRACT: Minimum Fill-in is a fundamental and classical problem arising in sparse matrix computations. In terms of graphs, it can be formulated as the problem of finding a triangulation of a given graph with the minimum number of edges. In this paper, we study the parameterized complexity of local search for the Minimum Fill-in problem in the following form: given a triangulation H of a graph G, is there a better triangulation, i.e. a triangulation with fewer edges than H, within a given distance from H? We prove that this problem is fixed-parameter tractable (FPT) when parameterized by the distance from the initial triangulation, by providing an algorithm that in time f(k)·|G|^O(1) decides if a better triangulation of G can be obtained by swapping at most k edges of H. Our result adds Minimum Fill-in to the short list of problems for which local search is known to be FPT.
    Journal of Computer and System Sciences 11/2014;
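What "fill-in" means can be made concrete via the standard elimination game (a textbook construction, not the paper's local-search algorithm): eliminate vertices in some order, making each eliminated vertex's remaining neighbours pairwise adjacent; the edges added are the fill-in, and the augmented graph is a triangulation.

```python
def fill_in(edges, ordering):
    """Count the fill edges produced by eliminating vertices in the
    given order. When a vertex is eliminated, its not-yet-eliminated
    neighbours are made pairwise adjacent; each edge added this way
    is a fill edge, and adding them all yields a chordal supergraph
    (a triangulation) of the input graph."""
    adj = {v: set() for v in ordering}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    fill = 0
    eliminated = set()
    for v in ordering:
        nbrs = [u for u in adj[v] if u not in eliminated]
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in adj[a]:
                    adj[a].add(b)
                    adj[b].add(a)
                    fill += 1
        eliminated.add(v)
    return fill
```

For the 4-cycle, any elimination order adds exactly one chord, matching its minimum fill-in of 1; a path is already chordal, so a good order adds nothing. Local search, as studied in the paper, asks whether swapping at most k of the added edges yields a triangulation with fewer edges.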
  • ABSTRACT: Recently proposed formal reliability analysis techniques have overcome the inaccuracies of traditional simulation-based techniques but can only handle problems involving discrete random variables. In this paper, we extend the capabilities of existing theorem-proving-based reliability analysis by formalizing several important statistical properties of continuous random variables, such as the second moment and the variance. We also formalize commonly used concepts of reliability theory such as the survival, hazard, cumulative hazard and fractile functions. With these extensions, it is now possible to formally reason about important reliability measures (the probability of failure, the failure risk and the mean time to failure) associated with the life of a system that operates in an uncertain and harsh environment and is usually continuous in nature. We illustrate the modeling and verification process with examples involving the reliability analysis of essential electronic and electrical system components.
    Journal of Computer and System Sciences 01/2014; 80(2):323–345.
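For the simplest continuous lifetime model, the exponential distribution with rate lam, the reliability functions named above have closed forms; the numeric sketch below uses the textbook formulas (it is not the HOL formalisation itself): R(t) = exp(-lam·t), constant hazard h(t) = lam, cumulative hazard H(t) = lam·t = -ln R(t), and MTTF = 1/lam.

```python
import math

def survival(lam, t):
    """R(t) = P(lifetime > t) for an exponential lifetime with rate lam."""
    return math.exp(-lam * t)

def hazard(lam, t):
    """h(t) = f(t) / R(t); constant for the exponential distribution."""
    return lam

def cumulative_hazard(lam, t):
    """H(t) = -ln R(t) = lam * t."""
    return lam * t

def mttf(lam):
    """Mean time to failure: the integral of R(t) over [0, inf) = 1/lam."""
    return 1.0 / lam
```

A theorem prover verifies exactly such identities symbolically, e.g. that H(t) = -ln R(t) holds for all t, rather than spot-checking them numerically as here.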
  • ABSTRACT: This paper proposes an efficient broadcast encryption scheme for key distribution in MANETs. No message exchange is required to establish a group key, and the communication overhead remains unchanged as the group size grows. For a group member to obtain the session key, only one bilinear pairing computation is required. The proposal is evaluated through efficiency and security analysis and through comparison with other existing schemes. We test the efficiency of the scheme on a modern workstation through simulation; the performance analysis shows its suitability for large-scale MANETs. The new scheme is shown to be provably secure in the standard model, and the comparison indicates that it outperforms comparable schemes in efficiency. Furthermore, an improved scheme secure against chosen-ciphertext attack (CCA) is proposed to further enhance security. Thus, the proposal in this paper not only meets security demands but is also efficient in terms of computation and communication.
    Journal of Computer and System Sciences 01/2014; 80(3):533–545.
  • ABSTRACT: An investigation is carried out into the nature of QoS measures for queues with correlated traffic in both the discrete and continuous time domains. The study focuses on the single-server GIG/M[X]/1/N and GIG/Geo[X]/1/N queues with finite capacity N, a general batch renewal arrival process (BRAP), GIG, and either batch Poisson, M[X], or batch geometric, Geo[X], service times with general batch sizes X. Closed-form expressions for QoS measures, such as the queue length and waiting time distributions and the blocking probabilities, are stochastically derived and shown to be, essentially, time-domain invariant. Moreover, the sGGeo/Geo/1/N queue with a shifted generalised geometric (sGGeo) distribution is employed to assess the adverse impact of varying degrees of traffic correlation upon basic QoS measures, and illustrative numerical results are presented. Finally, the global balance queue length distribution of the MGeo/MGeo/1/N queue is devised and reinterpreted in terms of the information-theoretic principle of entropy maximisation.
    Journal of Computer and System Sciences 01/2014;
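The balance-equation machinery behind such results can be illustrated on the much simpler M/M/1/N queue (a textbook special case with uncorrelated traffic, not the batch-renewal models analysed in the paper): with rho = lam/mu, the stationary probabilities satisfy pi_k = rho^k · pi_0 for k = 0..N, and an arrival is blocked when it finds the buffer full.

```python
def mm1n_distribution(lam, mu, N):
    """Stationary queue-length distribution of the M/M/1/N queue,
    obtained from the birth-death balance equations
    pi_k = rho^k * pi_0, truncated at capacity N and normalised."""
    rho = lam / mu
    weights = [rho ** k for k in range(N + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def blocking_probability(lam, mu, N):
    """A Poisson arrival is lost exactly when the system is in
    state N (buffer full), so the blocking probability is pi_N."""
    return mm1n_distribution(lam, mu, N)[-1]
```

For lam = 1, mu = 2, N = 2 the weights are (1, 1/2, 1/4), giving pi = (4/7, 2/7, 1/7) and a blocking probability of 1/7. Correlated (batch) traffic, the paper's subject, typically inflates this figure well beyond the renewal-traffic prediction.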
  • ABSTRACT: Recently proposed formal reliability analysis techniques have overcome the inaccuracies of traditional simulation based techniques but can only handle problems involving discrete random variables. In this paper, we extend the capabilities of existing ...
    Journal of Computer and System Sciences 01/2014; 80(2):321–322.
  • ABSTRACT: We define extensions of the full branching-time temporal logic CTL⁎ in which the path quantifiers are relativised by formal languages of infinite words, and consider its natural fragments obtained by extending the logics CTL and CTL+ in the same way. This yields a small and two-dimensional hierarchy of temporal logics parametrised by the class of languages used for the path restriction on one hand, and the use of temporal operators on the other. We motivate the study of such logics through two application scenarios: in abstraction and refinement they offer more precise means for the exclusion of spurious traces; and they may be useful in software synthesis where decidable logics without the finite model property are required. We study the relative expressive power of these logics as well as the complexities of their satisfiability and model-checking problems.
    Journal of Computer and System Sciences 01/2014; 80(2):375–389.
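For a flavour of the model-checking side: the classical (unrelativised) CTL operator EF φ is computed as a backward reachability fixpoint over the transition relation. A minimal sketch, assuming a finite Kripke structure given as a list of transition pairs; the paper's relativised quantifiers would additionally restrict which paths count via a formal language over the trace.

```python
def ef(transitions, goal):
    """Return the set of states satisfying EF(goal): a state
    satisfies EF(goal) iff some path from it reaches a goal state.
    Computed as the least fixpoint of backward reachability."""
    sat = set(goal)
    changed = True
    while changed:
        changed = False
        for (s, t) in transitions:
            if t in sat and s not in sat:
                sat.add(s)
                changed = True
    return sat
```

On the structure 0 → 1 → 2 with a self-loop on 3, states 0, 1 and 2 satisfy EF({2}) while 3 does not, since 3 can never reach the goal.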
  • ABSTRACT: The displacement calculus of Morrill, Valentín and Fadda (2011) [25] aspires to replace the calculus of Lambek (1958) [13] as the foundation of categorial grammar by accommodating intercalation as well as concatenation while remaining free of structural rules and enjoying Cut-elimination and its good corollaries. Jäger (2005) [11] proposes a type logical treatment of anaphora with syntactic duplication using limited contraction. Morrill and Valentín (2010) [24] apply (modal) displacement calculus to anaphora with lexical duplication and propose extension with a negation as failure in conjunction with additives to capture binding conditions. In this paper we present an account of anaphora developing characteristics and employing machinery from both of these proposals.
    Journal of Computer and System Sciences 01/2014; 80(2):390–409.
  • ABSTRACT: We describe the mechanisation of some foundational results in the theory of context-free languages (CFLs), using the HOL4 system. We focus on pushdown automata (PDAs). We show that two standard acceptance criteria for PDAs (“accept-by-empty-stack” and “accept-by-final-state”) are equivalent in power. We are then able to show that PDAs and context-free grammars (CFGs) accept the same languages by showing that each can emulate the other. With both of these models to hand, we can then show a number of basic but important results; for example, we prove the closure of the context-free languages under basic operations such as union and concatenation. Along the way, we also discuss the varying extent to which textbook proofs (we follow Hopcroft and Ullman) and our mechanisations diverge: sometimes elegant textbook proofs remain elegant in HOL; sometimes the required mechanisation effort blows up unconscionably.
    Journal of Computer and System Sciences 01/2014; 80(2):346–362.
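The two acceptance criteria can be made concrete with a toy nondeterministic PDA simulator (a Python sketch for intuition only, unrelated to the HOL4 mechanisation). The example PDA below recognises { aⁿbⁿ : n ≥ 1 } under both criteria: it pushes an A per a, pops one per b, and finally pops the bottom marker Z via an epsilon move, simultaneously emptying the stack and entering the final state.

```python
from collections import deque

def pda_accepts(delta, start, start_stack, finals, word, by_empty_stack):
    """Breadth-first search over PDA configurations (state, input
    position, stack as a string, top at index 0). delta maps
    (state, input_symbol_or_'', stack_top) to a list of
    (new_state, string_to_push) moves; '' is an epsilon move."""
    seen = set()
    queue = deque([(start, 0, start_stack)])
    while queue:
        state, pos, stack = queue.popleft()
        if (state, pos, stack) in seen:
            continue
        seen.add((state, pos, stack))
        if pos == len(word):
            if by_empty_stack and stack == "":
                return True
            if not by_empty_stack and state in finals:
                return True
        if stack == "":
            continue  # no moves possible with an empty stack
        top = stack[0]
        moves = []
        if pos < len(word):
            moves += [(pos + 1, m) for m in delta.get((state, word[pos], top), [])]
        moves += [(pos, m) for m in delta.get((state, "", top), [])]
        for new_pos, (new_state, push) in moves:
            queue.append((new_state, new_pos, push + stack[1:]))
    return False

# Example PDA for { a^n b^n : n >= 1 }; accepts identically under
# both criteria because Z is popped exactly when q2 is entered.
delta = {
    ("q0", "a", "Z"): [("q0", "AZ")],
    ("q0", "a", "A"): [("q0", "AA")],
    ("q0", "b", "A"): [("q1", "")],
    ("q1", "b", "A"): [("q1", "")],
    ("q1", "", "Z"): [("q2", "")],
}
```

The general equivalence proof mechanised in the paper works the same way in spirit: from any PDA one constructs another that drains its stack exactly when the original would accept, and vice versa.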
  • ABSTRACT: Discovering frequent factors in long strings is an important problem in many applications, such as biosequence mining. Classical algorithms process a vast database of small strings; in this paper, however, we analyse a small database of long strings, where the main difference lies in the far larger number of patterns to analyse. To tackle the problem, we have developed a new algorithm for discovering frequent factors in long strings: an Apriori-like solution which exploits the fact that no super-pattern of an infrequent pattern can be frequent. The SANSPOS algorithm takes a multiple-pass, candidate-generation-and-test approach in which patterns of multiple lengths can be generated in a single pass. It uses a new data structure to arrange nodes in a trie, with a Positioning Matrix defined as a new positioning strategy. Positioning Matrices let us apply advanced pruning heuristics in a trie at minimal computational cost, process strings containing Short Tandem Repeats and calculate different interestingness measures efficiently. Furthermore, our algorithm traverses different sections of the input strings in parallel, speeding up the running time. The algorithm has been successfully applied in natural-language and biological-sequence contexts.
    Journal of Computer and System Sciences 01/2014; 80(1):3–15.
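The anti-monotonicity pruning at the heart of such miners can be sketched as a naive Apriori-style level-wise search (an illustrative simplification; SANSPOS's trie and Positioning Matrices are what make this efficient on real sequence data): length-(k+1) candidates are built only by extending frequent length-k factors, since any superstring of an infrequent factor is itself infrequent.

```python
def frequent_factors(s, minsup):
    """Apriori-style frequent-factor mining in a single long string.
    A factor (substring) is frequent if it occurs at least minsup
    times; occurrences may overlap. Returns {factor: support}."""
    def support(p):
        # count (possibly overlapping) occurrences of p in s
        return sum(1 for i in range(len(s) - len(p) + 1)
                   if s[i:i + len(p)] == p)

    alphabet = sorted(set(s))
    frequent = {}
    level = [c for c in alphabet if support(c) >= minsup]
    while level:
        for p in level:
            frequent[p] = support(p)
        # extend only frequent factors: the Apriori pruning step
        level = [p + c for p in level for c in alphabet
                 if support(p + c) >= minsup]
    return frequent
```

On "abcabcab" with minimum support 2, the miner finds "ab" three times and climbs up to the length-5 factor "abcab", while "abcabc" (one occurrence) is never even generated as a candidate once its prefix chain stops being extendable.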