Jinjun Chen

Hunan University of Science and Technology, Xiangtan, Hunan, China

Publications (156) · 101.73 Total impact

  • Mingdong Tang · Xiaoling Dai · Jianxun Liu · Jinjun Chen
    ABSTRACT: With the advent of cloud computing, employing various cloud services to build highly reliable cloud applications has become increasingly popular. The trustworthiness of cloud services is a critical issue that hinders the development of cloud applications, and is thus an urgent research problem. Previous studies evaluate the trustworthiness of services via either QoS monitoring mechanisms or user feedback ratings, but seldom combine the two to enhance service trust evaluation. This paper proposes a trustworthy service selection framework for cloud services, named TRUSS. To develop an effective trust evaluation middleware for TRUSS, we propose an integrated trust evaluation method that combines objective and subjective trust assessment. The objective trust assessment is based on QoS monitoring, while the subjective trust assessment is based on user feedback ratings. Experiments conducted on a synthesized dataset show that our proposed method significantly outperforms other trust and reputation methods.
    No preview · Article · Jan 2016 · Future Generation Computer Systems
  • ABSTRACT: To assess the quality of service (QoS) in service selection, collaborative service QoS prediction has recently garnered increasing attention. Existing methods focus on exploring the historical QoS information generated by interactions between users and services. However, they may suffer from the data sparsity issue, because interactions between users and services are usually sparse in real scenarios. They also seldom consider the network environments of users and services, which clearly affect cloud service QoS. To address the data sparsity issue and improve QoS prediction accuracy, this paper proposes a collaborative QoS prediction method with location-based data smoothing. The method first computes neighborhoods of users and services based on their locations, which provide a basis for data smoothing. It then combines user-based and service-based collaborative filtering techniques to make QoS predictions. Experiments conducted on a real service invocation dataset validate the performance of the proposed QoS prediction method. Copyright © 2015 John Wiley & Sons, Ltd.
    No preview · Article · Oct 2015 · Concurrency and Computation Practice and Experience
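A rough sketch of the user-based plus service-based blend described above (omitting the paper's location-based smoothing step; `cf_predict` and the precomputed similarity tables are illustrative names, not the authors' API):

```python
def cf_predict(matrix, u, s, user_sim, svc_sim, w=0.5):
    """Predict the missing QoS value matrix[u][s] (None = unobserved)
    by blending a user-based and a service-based estimate; `w` weights
    the user-based view. Similarities are looked up from precomputed
    tables keyed by unordered index pairs."""
    un = ud = 0.0
    # user-based: similarity-weighted average over users who observed s
    for v, row in enumerate(matrix):
        if v != u and row[s] is not None:
            sim = user_sim.get(frozenset((u, v)), 0.0)
            un += sim * row[s]
            ud += abs(sim)
    sn = sd = 0.0
    # service-based: weighted average over services user u has observed
    for t, val in enumerate(matrix[u]):
        if t != s and val is not None:
            sim = svc_sim.get(frozenset((s, t)), 0.0)
            sn += sim * val
            sd += abs(sim)
    ue = un / ud if ud else None
    se = sn / sd if sd else None
    if ue is not None and se is not None:
        return w * ue + (1 - w) * se
    return ue if ue is not None else se
```

In practice the similarity tables would be derived from co-observed QoS values (e.g. Pearson correlation) after the smoothing step has densified the matrix.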
  • ABSTRACT: Cloud computing provides promising scalable IT infrastructure to support the processing of a variety of big data applications in sectors such as healthcare and business. Data sets like electronic health records in such applications often contain privacy-sensitive information, which raises privacy concerns if the information is released or shared with third parties in the cloud. A practical and widely-adopted technique for data privacy preservation is to anonymize data via generalization to satisfy a given privacy model. However, most existing privacy-preserving approaches tailored to small-scale data sets often fall short when encountering big data, due to their insufficiency or poor scalability. In this paper, we investigate the local-recoding problem for big data anonymization against proximity privacy breaches and attempt to identify a scalable solution to this problem. Specifically, we present a proximity privacy model that allows semantic proximity of sensitive values and multiple sensitive attributes, and model the problem of local recoding as a proximity-aware clustering problem. A scalable two-phase clustering approach, consisting of a t-ancestors clustering (similar to k-means) algorithm and a proximity-aware agglomerative clustering algorithm, is proposed to address the above problem. We design the algorithms with MapReduce to gain high scalability by performing data-parallel computation in the cloud. Extensive experiments on real-life data sets demonstrate that our approach significantly improves the capability of defending against proximity privacy breaches, as well as the scalability and time-efficiency of local-recoding anonymization, over existing approaches.
    No preview · Article · Aug 2015 · IEEE Transactions on Computers
  • Shaoqian Zhang · Wenmin Lin · Wanchun Dou · Jinjun Chen
    ABSTRACT: Web service composition allows users to create value-added composite Web services from existing services, where top-k composite services help users find a satisfying composite service efficiently. However, with an increasing number of Web services and users' various composition preferences, computing top-k composite services dynamically for different users is difficult. In view of this challenge, an optimization method for top-k composite services is proposed, based on a preference-aware service dominance relationship. Concretely, user preferences are first modeled with preference-aware service dominance. Then, in local service selection, a multi-index based algorithm named Multi-Index is proposed for computing candidate services of each task dynamically. After that, in global optimization, combined with a service lattice, top-k composite services are selected under a dominant-number-aware service ranking. A case study is also presented to illustrate the authors' solution. Finally, an experiment was conducted to verify the proposed method.
    No preview · Article · Aug 2015 · International Journal of Web Services Research
  • ABSTRACT: Social networks, though started as a software tool enabling people to connect with each other, have emerged in recent times as platforms for businesses, individuals and government agencies to conduct a number of activities ranging from marketing to emergency situation management. As a result, a large number of social network analytics tools have been developed for a variety of applications. A snapshot of a social network at any particular time, called a social graph, represents the connectivity of nodes and potentially the flow of information amongst the nodes (or vertices) in the graph. Understanding the flow of information in a social graph plays an important role in social network applications. Two specific problems related to information flow have implications in many social network applications: (a) finding a minimum set of nodes one has to know to recover the whole graph (also known as the vertex cover problem) and (b) determining the minimum set of nodes required to reach all nodes in the graph within a specific number of hops (we refer to this as the vertex reach problem). Finding an optimal solution to these problems is NP-hard. In this paper, we propose approximation-based approaches and show that our approaches outperform existing approaches using both theoretical analysis and experimental results.
    Full-text · Conference Paper · Jun 2015
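The vertex cover problem mentioned here admits a classic greedy 2-approximation: repeatedly take an uncovered edge and add both of its endpoints. The paper's own approximation algorithms are not reproduced in the abstract; this is only the textbook baseline:

```python
def vertex_cover_2approx(edges):
    """Greedy 2-approximation for vertex cover: for each edge not yet
    covered, add both endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

The result is guaranteed to be at most twice the optimum, because the chosen edges are pairwise disjoint and any valid cover must contain at least one endpoint of each.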
  • Saixia Lyu · Jianxun Liu · Mingdong Tang · Yu Xu · Jinjun Chen
    ABSTRACT: Predicting the trustworthiness of mobile services is a fundamental need for mobile service selection. With the popularization of mobile social networks, employing trust propagation to predict the trust a user places in a mobile service becomes feasible. However, existing methods based on trust propagation in social networks may suffer from a scalability problem, i.e., their trust computation for two indirectly connected users is likely too time-consuming to be acceptable in very large social networks. To address this issue, this paper proposes a trust propagation method which exploits the peculiar properties of social networks and incorporates a landmark-based method with preprocessing to improve the efficiency of trust prediction. In this method, a small number of landmark users in the social network are first selected as referees in trust propagation, and the trust between these landmark users and the other users is then pre-computed. The trust between two indirectly connected users is finally estimated by aggregating the referrals provided by the landmark users. To evaluate the performance of the proposed method, comprehensive experiments are conducted using a real online social network. The experimental results show that our method is considerably more efficient than four classic trust propagation methods in trust prediction.
    No preview · Article · Jun 2015 · Mobile Networks and Applications
  • Jinjun Chen · Surya Nepal

    No preview · Article · Apr 2015 · Computing
  • Congyang Chen · Jianxun Liu · Yiping Wen · Jinjun Chen
    ABSTRACT: Cloud computing offers greater efficiency and lower cost in information processing and service provision. Workflow scheduling algorithms in the cloud can help cut costs and improve the quality of services, and have therefore become a hot research topic. In this paper, workflow technology in the cloud and the needs for cloud workflow scheduling are first introduced. Then, typical cloud workflow scheduling algorithms are analyzed and classified into three categories. Finally, typical cloud workflow scheduling research tools such as CloudSim, WorkflowSim and SwinFlow-Cloud are evaluated. We also analyze the open problems of current cloud workflow scheduling algorithms and outline directions for future research.
    No preview · Conference Paper · Feb 2015
  • Chi Yang · Chang Liu · Xuyun Zhang · Surya Nepal · Jinjun Chen
    ABSTRACT: Big sensor data is prevalent in both industry and scientific research applications, where data is generated with such high volume and velocity that it is difficult to process using on-hand database management tools or traditional data processing applications. Cloud computing provides a promising platform to address this challenge, as it offers a flexible stack of massive computing, storage, and software services in a scalable manner at low cost. Some techniques have been developed in recent years for processing sensor data on the cloud, such as sensor-cloud. However, these techniques do not efficiently support fast detection and localization of errors in big sensor data sets. In this paper, we develop a novel data error detection approach for big sensor data sets which exploits the full computation potential of the cloud platform and the network features of WSNs. First, a set of sensor data error types is classified and defined. Based on this classification, the network feature of a clustered WSN is introduced and analyzed to support fast error detection and localization. Specifically, in our proposed approach, error detection is based on the scale-free network topology, and most detection operations can be conducted in limited temporal or spatial data blocks instead of on the whole big data set. Hence the detection and localization process can be dramatically accelerated. Furthermore, detection and localization tasks can be distributed to the cloud platform to fully exploit its computation power and massive storage. Experiments on our U-Cloud cloud computing platform demonstrate that our proposed approach can significantly reduce the time for error detection and localization in big data sets generated by large-scale sensor network systems, with acceptable error detection accuracy.
    No preview · Article · Feb 2015 · IEEE Transactions on Parallel and Distributed Systems
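The idea of confining detection to limited temporal data blocks can be illustrated with a simple per-block outlier test. This is only a stand-in for the paper's classification-based detector, with an illustrative function name and a naive z-score rule:

```python
def detect_errors_in_block(block, k=3.0):
    """Flag readings more than k standard deviations from the block
    mean. Operating on a small temporal block (rather than the whole
    series) mirrors the localization idea described in the abstract."""
    n = len(block)
    mean = sum(block) / n
    std = (sum((x - mean) ** 2 for x in block) / n) ** 0.5
    return [i for i, x in enumerate(block)
            if std > 0 and abs(x - mean) > k * std]
```

Because each block is processed independently, blocks can be distributed across cloud workers, which is where the parallel speed-up comes from.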

  • No preview · Article · Jan 2015 · IEEE Transactions on Cloud Computing
  • Congyang Chen · Jianxun Liu · Yiping Wen · Jinjun Chen · Dong Zhou
    ABSTRACT: In the context of cloud computing and big data, data from all walks of life can be obtained conveniently. With the popularity of workflow applications, some user information in business processes needs protection, which greatly affects workflow scheduling. Meanwhile, since the amount of data in workflows is usually very large, data privacy protection in workflows has also become an important research problem. In this paper, to satisfy users' data privacy protection requirements and minimize the total scheduling cost, we propose a privacy- and cost-aware scheduling method based on a genetic algorithm for data-intensive workflow applications, which takes computation cost, data transmission cost and data storage cost in the cloud into account when searching for the best scheduling solution. The proposed algorithm uses the sum of upward and downward rank values to prioritize workflow tasks, and uses this ranking to construct a good initial population so that a good solution is obtained quickly. Selection, crossover and mutation operations are then used to optimize the schedule. During task scheduling, we pin tasks that need privacy protection to an assigned datacenter, so that their data cannot be moved or copied to other datacenters. Finally, we demonstrate the potential of the proposed algorithm for optimizing economic cost under user privacy protection requirements. The experimental results show that the proposed algorithm improves scheduling, saving time and cost by an average of 3.6% and 15.6%, respectively.
    No preview · Chapter · Jan 2015
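The upward/downward rank prioritization mentioned above follows HEFT-style recurrences: rank_u(t) = cost(t) + max over successors of rank_u, and rank_d(t) = max over predecessors p of (rank_d(p) + cost(p)). A minimal sketch with a flat cost per task in place of the paper's full cost model (`upward_rank` and `downward_rank` are illustrative names):

```python
def upward_rank(dag, cost):
    """rank_u(t): longest cost path from t to an exit task, inclusive."""
    memo = {}
    def r(t):
        if t not in memo:
            memo[t] = cost[t] + max((r(s) for s in dag.get(t, [])), default=0)
        return memo[t]
    return {t: r(t) for t in cost}

def downward_rank(dag, cost):
    """rank_d(t): longest cost path from an entry task to t, exclusive."""
    preds = {}
    for u, succs in dag.items():
        for v in succs:
            preds.setdefault(v, []).append(u)
    memo = {}
    def r(t):
        if t not in memo:
            memo[t] = max((r(p) + cost[p] for p in preds.get(t, [])), default=0)
        return memo[t]
    return {t: r(t) for t in cost}
```

Summing the two ranks scores each task by the longest path through it, so tasks on the critical path are prioritized first when seeding the genetic algorithm's initial population.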
  • Chang Liu · Rajiv Ranjan · Xuyun Zhang · Chi Yang · Jinjun Chen
    ABSTRACT: Big data is attracting more and more interest from numerous industries. A few examples are oil and gas mining, scientific research (biology, chemistry, physics), online social networks (Twitter, Facebook), multimedia data, and business transactions. With mountains of data collected from increasingly efficient data-collecting devices and stored on fast-growing storage hardware, people are keen to find solutions to store and process the data more efficiently, and to discover more value from the mass at the same time. When referring to big data research problems, people often bring up the four V's: volume, velocity, variety, and value. These pose various brand-new challenges to computer scientists today.
    No preview · Article · Jan 2015
  • Congyang Chen · Jianxun Liu · Yiping Wen · Jinjun Chen
    ABSTRACT: Cloud computing offers greater efficiency and lower cost in information processing and service provision. Workflow scheduling algorithms in the cloud can help cut costs and improve the quality of services, and have therefore become a hot research topic. In this paper, workflow technology in the cloud and the needs for cloud workflow scheduling are first introduced. Then, typical cloud workflow scheduling algorithms are analyzed and classified into three categories. Finally, typical cloud workflow scheduling research tools such as CloudSim, WorkflowSim and SwinFlow-Cloud are evaluated. We also analyze the open problems of current cloud workflow scheduling algorithms and outline directions for future research.
    No preview · Chapter · Jan 2015
  • ABSTRACT: Following the two trends of computerization and informatization, another emerging trend is cyberization, in which numerous and various cyber entities in cyberspace will exist in cyber-enabled worlds, including the cyber world and cyber-conjugated physical, social, and mental worlds. Computer science and information science, as holistic fields, have respectively played important roles in computerization and informatization. Similarly, a corresponding field is needed for cyberization. Cybermatics is proposed as such a holistic field for the systematic study of cyber entities in cyberspace and the cyber world, their properties and functions, and their conjugations with entities in conventional spaces/worlds. This paper sets out to explain the necessity, rationale and significance of the proposed field of Cybermatics, what it is and what it encompasses, and how it relates to other fields and areas.
    No preview · Article · Jan 2015
  • Xiaolong Xu · Wanchun Dou · Xuyun Zhang · Jinjun Chen

    No preview · Article · Jan 2015 · IEEE Transactions on Cloud Computing
  • ABSTRACT: It is well known that processing big graph data can be costly on the cloud. Processing big graph data involves complex and multiple iterations that raise challenges such as parallel memory bottlenecks, deadlocks, and inefficiency. To tackle these challenges, we propose a novel technique for effectively processing big graph data on the cloud. Specifically, the big data is compressed using its spatiotemporal features on the cloud. By exploring spatial data correlation, we partition a graph data set into clusters. Within a cluster, the workload can be shared by inference based on time-series similarity. By exploiting temporal correlation, temporal data compression is conducted in each time series or single graph edge. A novel data-driven scheduling scheme is also developed for data processing optimization. The experimental results demonstrate that the spatiotemporal compression and scheduling achieve significant performance gains in terms of data size and data fidelity loss.
    No preview · Article · Dec 2014 · Journal of Computer and System Sciences
  • Shunmei Meng · Wanchun Dou · Xuyun Zhang · Jinjun Chen
    ABSTRACT: Service recommender systems have been shown to be valuable tools for providing appropriate recommendations to users. In the last decade, the number of customers and services and the volume of online information have grown rapidly, yielding a big data analysis problem for service recommender systems. Consequently, traditional service recommender systems often suffer from scalability and inefficiency problems when processing or analysing such large-scale data. Moreover, most existing service recommender systems present the same ratings and rankings of services to different users without considering their diverse preferences, and therefore fail to meet users' personalized requirements. In this paper, we propose a Keyword-Aware Service Recommendation method, named KASR, to address the above challenges. It aims to present a personalized service recommendation list and recommend the most appropriate services to users effectively. Specifically, keywords are used to indicate users' preferences, and a user-based collaborative filtering algorithm is adopted to generate appropriate recommendations. To improve its scalability and efficiency in big data environments, KASR is implemented on Hadoop, a widely-adopted distributed computing platform, using the MapReduce parallel processing paradigm. Finally, extensive experiments are conducted on real-world data sets, and the results demonstrate that KASR significantly improves the accuracy and scalability of service recommender systems over existing approaches.
    No preview · Article · Dec 2014 · IEEE Transactions on Parallel and Distributed Systems
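The keyword-aware idea can be sketched in a few lines: weight each user's ratings by how similar their keyword profile is to the active user's, then rank services by the weighted average. This is an illustrative single-machine sketch with Jaccard similarity and hypothetical names (`jaccard`, `recommend`), not KASR's MapReduce implementation:

```python
def jaccard(a, b):
    """Keyword-set similarity: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(active_kw, profiles, ratings, top_n=2):
    """Rank services by similarity-weighted average ratings from users
    whose keyword profiles resemble the active user's."""
    sims = {u: jaccard(active_kw, kw) for u, kw in profiles.items()}
    scores = {}  # service -> (weighted sum, weight sum)
    for u, svc_ratings in ratings.items():
        for s, r in svc_ratings.items():
            num, den = scores.get(s, (0.0, 0.0))
            scores[s] = (num + sims[u] * r, den + sims[u])
    ranked = sorted(((num / den, s) for s, (num, den) in scores.items() if den),
                    reverse=True)
    return [s for _, s in ranked[:top_n]]
```

In the MapReduce setting each mapper would emit per-service (weighted rating, weight) pairs and reducers would perform the final division, which is why the scheme scales to large rating sets.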
  • Chang Liu · Nick Beaugeard · Chi Yang · Xuyun Zhang · Jinjun Chen
    ABSTRACT: Big data is one of the most frequently cited key words in the recent information and communications technology industry. As the new-generation distributed computing platform, cloud environments offer high efficiency and low cost for data-intensive storage and computation in big data applications. Cloud resources and services are available in pay-as-you-go mode, which brings extraordinary flexibility and cost-effectiveness as well as minimal investment in one's own computing infrastructure. However, these advantages come at a price: people no longer have direct control over their own data. For this reason, data security becomes a major concern in the adoption of cloud computing. Authenticated key exchange is essential to a security system based on high-efficiency symmetric-key encryption. With virtualisation technology applied, existing key exchange schemes such as Internet key exchange become time-consuming when deployed directly in cloud computing environments, especially for large-scale tasks that involve intensive user-cloud interactions, such as scheduling and data auditing. In this paper, we propose a novel hierarchical key exchange scheme, namely hierarchical key exchange for big data in cloud, which aims at providing efficient security-aware scheduling and auditing for cloud environments. In this novel key exchange scheme, we developed a two-phase, layer-by-layer iterative key exchange strategy to achieve more efficient authenticated key exchange without sacrificing the level of data security. Both theoretical analysis and experimental results demonstrate that, when deployed in cloud environments with diverse server layouts, the efficiency of the proposed scheme is dramatically superior to its predecessors, the cloud computing background key exchange and Internet key exchange schemes. Copyright © 2014 John Wiley & Sons, Ltd.
    Full-text · Article · Nov 2014 · Concurrency and Computation Practice and Experience
  • ABSTRACT: In big data applications, data privacy is one of the biggest concerns, because processing large-scale privacy-sensitive data sets often requires computation power provided by public cloud services. Sub-tree data anonymization, which achieves a good trade-off between data utility and information loss, is a widely adopted scheme for anonymizing data sets for privacy preservation. Top-Down Specialization (TDS) and Bottom-Up Generalization (BUG) are two ways to fulfil sub-tree anonymization. However, existing approaches to sub-tree anonymization fall short of parallelization capability, and thereby lack scalability in handling big data on the cloud. Moreover, either TDS or BUG individually suffers from poor performance for certain values of the k-anonymity parameter. In this paper, we propose a hybrid approach that combines TDS and BUG for efficient sub-tree anonymization over big data. Further, we design MapReduce-based algorithms for the two components (TDS and BUG) to gain high scalability by exploiting the powerful computation capability of the cloud. Experimental evaluation demonstrates that the hybrid approach significantly improves the scalability and efficiency of the sub-tree anonymization scheme over existing approaches.
    No preview · Article · Aug 2014 · Journal of Computer and System Sciences
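To make the generalization direction concrete, here is a toy single-attribute, BUG-style sketch: climb a value taxonomy until every equivalence class contains at least k records. The taxonomy and `generalize` are illustrative, and the real scheme works over multiple quasi-identifier attributes with MapReduce:

```python
from collections import Counter

# toy taxonomy: each value maps to its more general parent
TAXONOMY = {'engineer': 'professional', 'lawyer': 'professional',
            'writer': 'artist', 'professional': 'any', 'artist': 'any'}

def generalize(values, taxonomy, k):
    """Replace each value with its taxonomy parent, level by level,
    until every distinct value occurs at least k times (k-anonymity
    for a single quasi-identifier column)."""
    while True:
        counts = Counter(values)
        if all(c >= k for c in counts.values()):
            return values
        generalized = [taxonomy.get(v, v) for v in values]
        if generalized == values:  # reached the taxonomy root; stop
            return values
        values = generalized
```

TDS runs the same taxonomy in the opposite direction, starting from the root and specializing while k-anonymity still holds; the hybrid approach picks whichever direction converges faster for the given k.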
  • Laurence T. Yang · Jinjun Chen

    Preview · Article · Aug 2014

Publication Stats

2k Citations
101.73 Total Impact Points


  • 2015
    • Hunan University of Science and Technology
      Xiangtan, Hunan, China
  • 2011-2015
    • University of Technology Sydney
      • Faculty of Engineering and Information Technology
      Sydney, New South Wales, Australia
  • 2014
    • Qufu Normal University
      Qufu, Shandong, China
  • 2004-2013
    • Swinburne University of Technology
      • Faculty of Information & Communication Technologies
      • Centre for Internet Computing and E-Commerce
      Melbourne, Victoria, Australia