Journal of Computer and System Sciences (J Comput Syst Sci)

Journal description

The Journal of Computer and System Sciences publishes original research papers in computer science and related subjects in system science, with attention to the relevant mathematical theory. Applications-oriented papers may also be accepted. Research areas include traditional subjects such as: Theory of algorithms and computability; Formal languages; Automata theory; and contemporary subjects such as: Complexity theory; Algorithmic complexity; Parallel and distributed computing; Computer networks; Neural networks; Computational learning theory; Database theory and practice; Computer modeling of complex systems.

Current impact factor: 1.09

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 1.091
2012 Impact Factor 1
2011 Impact Factor 1.157
2010 Impact Factor 1.631
2009 Impact Factor 1.304
2008 Impact Factor 1.244
2007 Impact Factor 1.185
2006 Impact Factor 1.252
2005 Impact Factor 1.328
2004 Impact Factor 1.03
2003 Impact Factor 0.795
2002 Impact Factor 1.174
2001 Impact Factor 0.661
2000 Impact Factor 0.664
1999 Impact Factor 0.872
1998 Impact Factor 0.577
1997 Impact Factor 0.602
1996 Impact Factor 0.679
1995 Impact Factor 0.723
1994 Impact Factor 0.513
1993 Impact Factor 0.413
1992 Impact Factor 0.536

Impact factor over time (chart): impact factor by year, as listed under Impact Factor Rankings above.

Additional details

5-year impact 1.11
Cited half-life 0.00
Immediacy index 0.29
Eigenfactor 0.01
Article influence 0.97
Website Journal of Computer and System Sciences website
Other titles Journal of computer and system sciences (Online), Journal of computer and system sciences
ISSN 1090-2724
OCLC 36943413
Material type Document, Periodical, Internet resource
Document type Internet Resource, Computer File, Journal / Magazine / Newspaper

Publications in this journal

  • ABSTRACT: Security and privacy issues in Radio Frequency Identification (RFID) systems mainly result from the limited storage and computation resources of RFID tags and the unpredictable communication environment. Although many security protocols for RFID systems have been proposed, most of them have various flaws. We propose a random graph-based methodology that enables automated benchmarking of RFID security. First, we formalize the capability of adversaries by a set of atomic actions. Second, we develop Vulnerability Aware Graphs (VAGs) to model the interactions between adversaries and RFID systems; potential attacks correspond to paths on these graphs. Quantitative analysis on VAGs can predict the probability that an adversary leverages the potential flaws to perform attacks. Moreover, a joint entropy-based method is provided to measure the indistinguishability of RFID tags under passive attacks. Analysis and simulations show the validity and effectiveness of VAGs.
    Journal of Computer and System Sciences 09/2015; DOI:10.1016/j.jcss.2014.12.015
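    A minimal illustrative sketch (Python) of the path-probability idea behind the vulnerability-aware graphs described above. The graph, edge success probabilities, and the tag/response distribution are hypothetical examples, not data or notation from the paper.
      from math import log2

      # Hypothetical vulnerability-aware graph: nodes are system states, edges are
      # atomic adversary actions annotated with an assumed success probability.
      edges = {
          "start":     [("eavesdrop", "got_nonce", 0.9), ("query_tag", "got_reply", 0.7)],
          "got_nonce": [("replay", "impersonate_tag", 0.4)],
          "got_reply": [("guess_key", "impersonate_tag", 0.1)],
      }

      def attack_paths(node, goal, prob=1.0, path=()):
          """Enumerate all attack paths reaching the goal state and their success probabilities."""
          if node == goal:
              yield path, prob
              return
          for action, nxt, p in edges.get(node, []):
              yield from attack_paths(nxt, goal, prob * p, path + (action,))

      for path, p in attack_paths("start", "impersonate_tag"):
          print(" -> ".join(path), f"(success probability {p:.2f})")

      # Joint entropy of (tag, response) as a rough indistinguishability measure:
      # higher entropy means a passive attacker learns less about which tag answered.
      joint = {("tagA", "r0"): 0.25, ("tagA", "r1"): 0.25,
               ("tagB", "r0"): 0.25, ("tagB", "r1"): 0.25}
      H = -sum(p * log2(p) for p in joint.values() if p > 0)
      print(f"joint entropy H(tag, response) = {H:.2f} bits")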
  • ABSTRACT: This article proposes a new technique for Privacy Preserving Collaborative Filtering (PPCF) based on microaggregation, which provides accurate recommendations estimated from perturbed data whilst guaranteeing user k-anonymity. The experimental results presented in this article show the effectiveness of the proposed technique in protecting users' privacy without compromising the quality of the recommendations. In this sense, the proposed approach perturbs data in a much more efficient way than other well-known methods such as Gaussian Noise Addition (GNA).
    Journal of Computer and System Sciences 09/2015; DOI:10.1016/j.jcss.2014.12.013
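    A minimal sketch (Python) of the general microaggregation idea for k-anonymity: users are grouped into clusters of at least k, and each user's ratings are replaced by the cluster centroid. The grouping heuristic and the toy data are illustrative assumptions, not the paper's exact algorithm.
      import numpy as np

      def microaggregate(ratings, k=2):
          """Replace each user's ratings with the centroid of a group of >= k users.
          Simple heuristic: sort users by mean rating and cut into consecutive groups."""
          order = np.argsort(ratings.mean(axis=1))
          perturbed = ratings.astype(float)
          for start in range(0, len(order), k):
              group = order[start:start + k]
              if len(group) < k:                   # fold a short tail into the previous group
                  group = order[start - k:]
              perturbed[group] = ratings[group].mean(axis=0)
          return perturbed

      # Toy user-item rating matrix (rows: users, columns: items).
      R = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [1, 0, 0, 4]])
      print(microaggregate(R, k=2))   # every row now equals its group centroid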
  • ABSTRACT: We consider a generalisation of the classical problem of pattern avoidance in infinite words with functional dependencies between pattern variables. More precisely, we consider patterns involving permutations. The most remarkable fact regarding this new setting is that the notion of avoidability index (the smallest alphabet size for which a pattern is avoidable) is meaningless, since a pattern with permutations that is avoidable over one alphabet can be unavoidable over a larger alphabet. We characterise the (un-)avoidability of all patterns consisting of three permuted occurrences of the same variable, called cubic patterns with permutations here, for all alphabet sizes in both the morphic and antimorphic case.
    Journal of Computer and System Sciences 04/2015; DOI:10.1016/j.jcss.2015.04.001
  • Journal of Computer and System Sciences 03/2015; 78. DOI:10.1016/j.jcss.2015.02.002
  • ABSTRACT: This paper studies the problem of finding the path median on a tree in which vertex weights are uncertain and the uncertainty is characterized by given intervals. It is required to find a minmax regret solution, which minimizes the worst-case loss in the objective function. An algorithm is presented that improves the previously best known upper bound on the running time.
    Journal of Computer and System Sciences 02/2015; 127. DOI:10.1016/j.jcss.2015.01.002
  • Journal of Computer and System Sciences 02/2015; 81(1):1–2. DOI:10.1016/j.jcss.2014.09.001
  • ABSTRACT: Accurate simulation is vital for the proper design and evaluation of any computing architecture. As computing moves toward a heterogeneous era, researchers seek unified simulation frameworks that can model heterogeneous architectures such as CPU and GPU devices and their interactions. In this paper, we introduce the MCMG (Multi-CPU Multi-GPU) simulator, a cycle-accurate, modular and open-source toolset that enables simulating x86 CPUs and Nvidia G80-like GPUs simultaneously. Targeting heterogeneous architectural exploration, MCMG supports full configuration of multiple CPUs, GPUs and their memory subsystems; the relative running frequency of each GPU, as well as of each CPU, can also be defined conveniently. The simulator is validated and demonstrated with a preliminary architectural exploration study. We then present shared last-level cache (LLC) access results for heterogeneous cores and explain the observed behaviour, before concluding.
    Journal of Computer and System Sciences 02/2015; 81(1):57–71. DOI:10.1016/j.jcss.2014.06.017
  • ABSTRACT: Although Collaborative Filtering (CF)-based recommender systems have achieved great success in a variety of applications, they still under-perform and are unable to provide accurate recommendations when users and items have few ratings, resulting in reduced coverage. To overcome these limitations, we propose an effective hybrid user-item trust-based (HUIT) recommendation approach in this paper that fuses the users' and items' implicit trust information. User and item global reputations are also computed and incorporated into the approach. This allows the recommender system to make a larger number of accurate predictions, especially in circumstances where users and items have few ratings. Experiments on four real-world datasets, in particular a business-to-business (B2B) case study, show that the proposed HUIT recommendation approach significantly outperforms state-of-the-art recommendation algorithms in terms of recommendation accuracy and coverage, and significantly alleviates the data sparsity, cold-start user and cold-start item problems.
    Journal of Computer and System Sciences 01/2015; DOI:10.1016/j.jcss.2014.12.029
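    A minimal sketch (Python) of one common way to derive implicit user trust from rating overlap and blend it into a prediction. The trust measure, weighting and toy data are illustrative assumptions, not the HUIT definitions from the paper (which also fuses item trust and global reputation).
      import numpy as np

      # Toy rating matrix (rows: users, columns: items); 0 means "not rated".
      R = np.array([[5, 3, 4, 0],
                    [4, 3, 4, 1],
                    [1, 1, 0, 4],
                    [0, 1, 2, 4]], dtype=float)

      def implicit_trust(u, v, tol=1.0):
          """Assumed trust measure: share of co-rated items on which u and v roughly agree."""
          both = (R[u] > 0) & (R[v] > 0)
          if not both.any():
              return 0.0
          return float(np.mean(np.abs(R[u, both] - R[v, both]) <= tol))

      def predict(u, i):
          """Trust-weighted average of other users' ratings for item i."""
          weights, values = [], []
          for v in range(R.shape[0]):
              if v != u and R[v, i] > 0:
                  t = implicit_trust(u, v)
                  if t > 0:
                      weights.append(t)
                      values.append(R[v, i])
          return np.average(values, weights=weights) if weights else np.nan

      print(predict(0, 3))   # predicted rating of user 0 for unrated item 3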
  • ABSTRACT: Data deduplication is an attractive technology for reducing the storage space consumed by the growing amount of duplicated and redundant data. In a cloud storage system with data deduplication, duplicate copies of data are eliminated and only one copy is kept in storage. To protect the confidentiality of sensitive data while still supporting deduplication, the convergent encryption technique has been proposed to encrypt the data before outsourcing. However, the issue of keyword search over encrypted data in a deduplication storage system must also be addressed for efficient data utilization. This paper first proposes two constructions that support secure keyword search in this scenario. In these constructions, the integrity of the data can be verified simply by checking the convergent key, without other traditional integrity auditing mechanisms. Two extensions are then presented to support fuzzy keyword search and block-level deduplication. Finally, a security analysis is given.
    Journal of Computer and System Sciences 01/2015; DOI:10.1016/j.jcss.2014.12.026
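    A minimal sketch (Python) of plain convergent encryption, the building block mentioned above: the key is derived from the data itself, so identical plaintexts produce identical ciphertexts and can be deduplicated. This illustrates only the standard primitive, not the paper's keyword-search constructions, and it assumes the third-party cryptography package is installed.
      import hashlib
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      def convergent_encrypt(data: bytes):
          """Convergent encryption: key = H(data), so equal plaintexts yield equal ciphertexts."""
          key = hashlib.sha256(data).digest()          # 32-byte AES key derived from the content
          nonce = hashlib.sha256(key).digest()[:12]    # deterministic nonce, needed for deduplication
          ciphertext = AESGCM(key).encrypt(nonce, data, None)
          return key, ciphertext

      def convergent_decrypt(key: bytes, ciphertext: bytes) -> bytes:
          nonce = hashlib.sha256(key).digest()[:12]
          return AESGCM(key).decrypt(nonce, ciphertext, None)

      k1, c1 = convergent_encrypt(b"same file contents")
      k2, c2 = convergent_encrypt(b"same file contents")
      assert c1 == c2        # identical data -> identical ciphertext -> the server can deduplicate
      assert convergent_decrypt(k1, c1) == b"same file contents"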
  • ABSTRACT: Motivated by the need for robust and fast distributed computation in highly dynamic Peer-to-Peer (P2P) networks, we study algorithms for the fundamental distributed agreement problem. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to design fast algorithms (running in a small number of rounds) that guarantee, despite a high node churn rate, that almost all nodes reach a stable agreement. Our main contributions are randomized distributed algorithms that guarantee stable almost-everywhere agreement with high probability even under high adversarial churn, in a number of rounds that is polylogarithmic in n, where n is the stable network size. In particular, we present the following results:
    Journal of Computer and System Sciences 01/2015; DOI:10.1016/j.jcss.2014.10.005
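    A minimal toy simulation (Python) of the general flavour of sampling-based almost-everywhere agreement under churn: in each round every node adopts the majority value among a few randomly sampled nodes, while an adversary replaces a fraction of nodes. The parameters and dynamics are illustrative assumptions only, not one of the paper's algorithms or its guarantees.
      import random

      random.seed(1)
      n, rounds, churn, sample = 1000, 12, 50, 7
      values = [random.choice([0, 1]) for _ in range(n)]   # initial binary inputs

      for _ in range(rounds):
          # Churn: the adversary replaces some nodes with fresh ones holding arbitrary values.
          for i in random.sample(range(n), churn):
              values[i] = random.choice([0, 1])
          # Gossip step: each node adopts the majority value of a small random sample.
          new = []
          for i in range(n):
              peers = random.sample(range(n), sample)
              new.append(1 if sum(values[p] for p in peers) * 2 > sample else 0)
          values = new

      agreed = max(values.count(0), values.count(1)) / n
      print(f"fraction of nodes holding the majority value: {agreed:.2%}")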
  • ABSTRACT: Nonlinear feedback shift registers (NFSRs) have been used as the main building blocks in many stream ciphers and convolutional decoders. Linearizing an NFSR means finding its state transition matrix. This paper uses a Boolean network approach to facilitate the linearization of NFSRs. A new state transition matrix is found for an NFSR, which can be computed simply from the truth table of its feedback function. Compared to existing results, the new state transition matrix is easier to compute and more explicit. Some properties of the matrix are provided, which are helpful for the theoretical analysis of NFSRs.
    Journal of Computer and System Sciences 12/2014; 81(4). DOI:10.1016/j.jcss.2014.12.030
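    A minimal sketch (Python) of a Fibonacci NFSR and a brute-force state transition matrix built from its feedback function: a 2^n x 2^n 0/1 matrix with exactly one 1 per column marking each state's successor. The feedback function is a toy example, and this is the straightforward construction, not the paper's new, more explicit matrix.
      import numpy as np

      n = 3                                     # toy register length
      f = lambda x: x[0] ^ (x[1] & x[2])        # assumed nonlinear feedback function f(x0, x1, x2)

      def step(state):
          """One clock of a Fibonacci NFSR: shift left, feed f(state) into the last cell."""
          return state[1:] + (f(state),)

      # Build the 2^n x 2^n state transition matrix T with T[next, current] = 1.
      states = [tuple((i >> k) & 1 for k in range(n)) for i in range(2 ** n)]
      index = {s: i for i, s in enumerate(states)}
      T = np.zeros((2 ** n, 2 ** n), dtype=int)
      for s in states:
          T[index[step(s)], index[s]] = 1

      print(T)    # column j has a single 1 marking the successor of state j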
  • ABSTRACT: Due to the serious information overload problem on the Internet, recommender systems have emerged as an important tool for providing users with more useful information through personalized services. In the "big data" era, however, recommender systems face significant challenges, such as how to process massive data efficiently and accurately. In this paper we propose a scalable incremental algorithm based on singular value decomposition (SVD), called Incremental ApproSVD, which combines the Incremental SVD algorithm with the Approximating the Singular Value Decomposition (ApproSVD) algorithm. Furthermore, a strict error analysis demonstrates the performance of our Incremental ApproSVD algorithm. We then present an empirical study comparing the prediction accuracy and running time of our Incremental ApproSVD algorithm and the Incremental SVD algorithm on the MovieLens and Flixster datasets. The experimental results demonstrate that our proposed method outperforms its counterparts.
    Journal of Computer and System Sciences 12/2014; 81(4). DOI:10.1016/j.jcss.2014.11.016
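    A minimal sketch (Python/NumPy) of the classic SVD fold-in step that incremental SVD recommenders build on: new user rows are projected into an existing low-rank factorization without recomputing the full SVD. This illustrates the general idea only; the ApproSVD sampling and the paper's combined Incremental ApproSVD algorithm are not shown.
      import numpy as np

      # Existing user-item rating matrix and its rank-k truncated SVD.
      R = np.random.default_rng(0).integers(1, 6, size=(100, 40)).astype(float)
      k = 10
      U, s, Vt = np.linalg.svd(R, full_matrices=False)
      U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

      # Fold in new users: project their rating rows onto the existing item factors
      # instead of recomputing the SVD of the enlarged matrix.
      new_rows = np.random.default_rng(1).integers(1, 6, size=(5, 40)).astype(float)
      U_new = new_rows @ Vt_k.T @ np.linalg.inv(S_k)     # projected user factors
      U_k = np.vstack([U_k, U_new])                      # extended user factor matrix

      # Approximate ratings (including the folded-in users) from the updated factors.
      R_hat = U_k @ S_k @ Vt_k
      print(R_hat.shape)    # (105, 40)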
  • ABSTRACT: It is well known that processing big graph data can be costly on the Cloud. Processing big graph data involves complex and repeated iterations, which raise challenges such as parallel memory bottlenecks, deadlocks and inefficiency. To tackle these challenges, we propose a novel technique for effectively processing big graph data on the Cloud. Specifically, the big data is compressed using its spatiotemporal features on the Cloud. By exploring spatial data correlation, we partition a graph data set into clusters; within a cluster, the workload can be shared through inference based on time-series similarity. By exploiting temporal correlation, temporal data compression is conducted on each time series or single graph edge. A novel data-driven scheduling scheme is also developed to optimize data processing. The experimental results demonstrate that the spatiotemporal compression and scheduling achieve significant performance gains in terms of data size and data fidelity loss.
    Journal of Computer and System Sciences 12/2014; 80(8). DOI:10.1016/j.jcss.2014.04.022
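    A minimal sketch (Python) of the two ingredients named above, with illustrative choices standing in for the paper's actual methods: edge time series are grouped by correlation (the spatial step), and each series is then compressed temporally by keeping only points that deviate from the last kept value beyond a tolerance.
      import numpy as np

      # Toy time series of a metric on three graph edges (rows: edges, columns: time steps).
      series = np.array([[1.0, 1.1, 1.1, 1.2, 3.0, 3.1],
                         [1.0, 1.0, 1.2, 1.2, 3.1, 3.0],
                         [9.0, 8.8, 7.5, 6.0, 4.1, 2.0]])

      # "Spatial" step: group edges whose series are strongly correlated, so one
      # representative per group can drive inference for the others.
      corr = np.corrcoef(series)
      groups = [[0, 1]] if corr[0, 1] > 0.9 else [[0], [1]]
      groups.append([2])

      # "Temporal" step: lossy delta-style compression of a single series -
      # keep a point only if it differs from the last kept value by more than eps.
      def compress(ts, eps=0.2):
          kept = [(0, ts[0])]
          for t, v in enumerate(ts[1:], start=1):
              if abs(v - kept[-1][1]) > eps:
                  kept.append((t, v))
          return kept                  # list of (time index, value) pairs

      print(groups)
      print(compress(series[0]))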
  • ABSTRACT: To secure multimedia communications, existing encryption techniques usually encrypt the whole data stream with the same session key for the duration of a session. Using a single session key creates a trade-off between session-key creation latency and security for real-time multimedia streams. The main feature of our proposed scheme is to selectively encrypt RTP packets using different one-time packet keys within the same session for real-time multimedia applications. A packet key that has already been used is never reused during the same session. The use of one-time packet keys improves the security strength of real-time multimedia. To avoid real-time key exchanges tied to the timely use of one-time packet keys, this paper proposes a one-time packet key exchange method that does not need to operate on a packet-by-packet basis.
    Journal of Computer and System Sciences 12/2014; DOI:10.1016/j.jcss.2014.04.023
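    A minimal sketch (Python) of one standard way to obtain per-packet keys without per-packet key exchange: derive each packet key from a shared session secret and the RTP sequence number. The derivation shown (HMAC-SHA-256) is an illustrative assumption, not the paper's exact method.
      import hmac, hashlib, os

      session_secret = os.urandom(32)      # exchanged once per session (e.g., during call setup)

      def packet_key(seq_num: int) -> bytes:
          """Derive a one-time key for the RTP packet with the given sequence number."""
          return hmac.new(session_secret, seq_num.to_bytes(4, "big"), hashlib.sha256).digest()

      # Each packet gets its own key; a used key is never reused within the session.
      k100 = packet_key(100)
      k101 = packet_key(101)
      assert k100 != k101
      print(k100.hex()[:16], k101.hex()[:16])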
  • ABSTRACT: The concept of cloud computing has emerged as the next generation of computing infrastructure, reducing the costs associated with managing hardware and software resources. It is vital to its success that cloud computing be efficient, flexible and secure. In this paper, we propose an efficient and anonymous data sharing protocol with a flexible sharing style, named EFADS, for outsourcing data to the cloud. Through formal security analysis, we demonstrate that EFADS provides data confidentiality and data-sharer anonymity without requiring any fully-trusted party. Experimental results show that EFADS is more efficient than existing competing approaches. Furthermore, the proxy re-encryption scheme we propose in this paper may be of independent interest: compared to previously reported proxy re-encryption schemes, it is the first pairing-free, anonymous and unidirectional proxy re-encryption scheme in the standard model.
    Journal of Computer and System Sciences 12/2014; 80(8). DOI:10.1016/j.jcss.2014.04.021
  • ABSTRACT: Modern software systems are frequently required to be adaptive in order to cope with constant change. Unfortunately, service-oriented systems built with WS-BPEL are still too rigid. In this paper, we propose a novel model-driven approach to supporting the development of dynamically adaptive WS-BPEL-based systems. We model the system functionality with two distinct but highly correlated parts: a stable part called the base model, describing the flow-logic aspect, and a volatile part called the variable model, describing the decision-logic aspect. We develop an aspect-oriented method to weave the base model and the variable model together so that runtime changes can be applied to the variable model without affecting the base model. A model-driven platform has been implemented to support the development of adaptive WS-BPEL processes. In-lab experiments show that our approach has low performance overhead, and a real-life case study validates its applicability.
    Journal of Computer and System Sciences 11/2014; 81(3). DOI:10.1016/j.jcss.2014.11.008