Junde Song

Beijing University of Posts and Telecommunications, Beijing, China

Publications (170) · 8.32 Total impact

  • Ting Wang · Junde Song · Meina Song
    Soft Computing 07/2015; DOI:10.1007/s00500-015-1797-z · 1.27 Impact Factor
  • Ting Wang · Ke Xu · Junde Song · Meina Song
    ABSTRACT: In order to improve the accuracy and robustness of geolocation (geographic location) databases, a machine learning based method called GeoCop (Geolocation Cop) is proposed for optimizing the geolocation databases of Internet hosts. In addition to network measurement, which existing geolocation methods generally rely on, our geolocation model for Internet hosts also draws on routing policy and machine learning. After optimization with the GeoCop method, the geolocation databases of Internet hosts are less prone to imperfect measurements and irregular routing. In addition to three frequently used geolocation databases (IP138, QQWry, and IPcn), we obtain two further geolocation databases as optimization targets by implementing two well-known geolocation methods (constraint-based geolocation and topology-based geolocation). Finally, we give a comprehensive analysis of the performance of our method: on one hand, we use typical benchmarks to compare the performance of these databases after optimization; on the other hand, we perform statistical tests to demonstrate the improvement achieved by the GeoCop method. As the comparison tables show, the GeoCop method not only improves both accuracy and robustness but also requires fewer measurements and incurs lower calculation overhead.
    Mathematical Problems in Engineering 01/2015; 2015:1-17. DOI:10.1155/2015/972642 · 0.76 Impact Factor
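    The general idea above, deciding which candidate geolocation record to trust from measurement-derived features, can be illustrated with a small classifier. The sketch below is hypothetical (it is not the GeoCop implementation); the feature set, training data, and classifier choice are all assumptions.
    ```python
    # Hypothetical sketch: score candidate geolocation records per host with a
    # classifier trained on measurement-derived features. Not the GeoCop code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Illustrative features per (host, candidate-record) pair:
    # [min RTT to nearest landmark (ms), hop count, disagreement among databases (km)]
    X_train = rng.uniform([1, 3, 0], [200, 25, 500], size=(300, 3))
    # Illustrative label: 1 if the candidate record was verified correct, else 0.
    y_train = ((X_train[:, 0] < 80) & (X_train[:, 2] < 150)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Score candidate records for a new host and keep the most credible one.
    candidates = np.array([[35.0, 9, 40.0],     # e.g. record from database A
                           [120.0, 18, 320.0]]) # e.g. record from database B
    best = int(np.argmax(clf.predict_proba(candidates)[:, 1]))
    print("Keep candidate record", best)
    ```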
  • Jian Zhou · Liyan Sun · Xianwei Zhou · Junde Song
    ABSTRACT: The group merging/splitting event differs from joining/leaving events, in which only a single member joins or leaves the group: in a merging/splitting event two small groups merge into one group, or one group is divided into two independent parts. Rekeying is an important issue in key management, whose goal is to guarantee forward security and backward security when membership changes. However, in most existing group key management schemes the rekeying cost is tied to the group scale, so those schemes are not suitable for applications in which the rekeying delay is strictly limited. In particular, multiple members are involved in a group merging/splitting event, so rekeying performance becomes a pressing concern. In this paper, a high-performance group key management scheme for group merging/splitting is proposed based on a one-encryption-key multi-decryption-key protocol. In the proposed scheme each member holds a unique decryption key corresponding to a common encryption key, so that only the common encryption key is updated when a group merging/splitting event happens, while the secret decryption keys remain unchanged. In terms of efficiency, at most one message is sent per merging/splitting event, and the network load is reduced because a single group member's key material is sufficient for the other members to agree on a fresh common encryption key. In terms of security, the proposed scheme satisfies the key management security requirements, including passive security, forward security, backward security, and key independence. Therefore, the proposed scheme is well suited to dynamic networks in which the rekeying delay is strictly limited, such as delay tolerant networks.
    Wireless Personal Communications 03/2014; 75(2). DOI:10.1007/s11277-013-1436-x · 0.65 Impact Factor
  • Junjie Tong · E. Haihong · Meina Song · Junde Song · Yanfei Li
    ABSTRACT: Client-side Quality-of-Service (QoS) evaluation of Web services is a critical factor in selecting the optimal Web service from a set of functionally equivalent service candidates, and collaborative filtering (CF) has become an important approach to automatic QoS evaluation. Traditional CF approaches predict QoS values from historical QoS information, but their performance can suffer when the data are sparse, with higher failure rates and lower accuracy. In this paper, we investigate the data sparsity problem in QoS value prediction. By constructing a user-similarity weighted network, we use modified local link prediction methods to find implicit neighbors, alleviating the lack of neighbors and the low success rate of predictions caused by sparse data. We also carefully account for user location proximity and the effect of weak ties in link prediction. Experimental results on a public dataset validate that, via local link prediction, both accuracy and success rate increase compared with the traditional user-based CF approach.
    2013 IEEE International Conference on High Performance Computing and Communications (HPCC) & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (EUC); 11/2013
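    A minimal sketch of the general approach described above: build a user-similarity network, use a local link-prediction score (common neighbors here) to surface implicit neighbors, and predict a missing QoS value from their observations. The data, threshold, and scoring details are illustrative assumptions, not the paper's algorithm.
    ```python
    # Sketch: user-based CF where implicit neighbors come from a local
    # link-prediction score on a user-similarity network. Illustrative data only.
    import numpy as np

    # QoS matrix: rows = users, cols = services, np.nan = unobserved.
    Q = np.array([[0.9, np.nan, 0.7, 0.8],
                  [0.8, 0.6,    np.nan, 0.9],
                  [np.nan, 0.5, 0.6, np.nan],
                  [0.7, 0.6,    0.8, np.nan]])

    def similarity(u, v):
        """Cosine similarity over co-observed services."""
        mask = ~np.isnan(Q[u]) & ~np.isnan(Q[v])
        if mask.sum() == 0:
            return 0.0
        a, b = Q[u, mask], Q[v, mask]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    n = Q.shape[0]
    S = np.array([[similarity(u, v) if u != v else 0.0 for v in range(n)]
                  for u in range(n)])

    # Similarity-weighted network: keep edges above a threshold (assumed value).
    A = (S > 0.5).astype(float)

    def common_neighbor_score(u, v):
        # Local link prediction: count shared explicit neighbors.
        return float(A[u] @ A[v])

    def predict(u, s):
        # Explicit neighbors (A[u, v] == 1) plus implicit neighbors found by
        # the link-prediction score, all weighted by similarity.
        scores = {v: (A[u, v] or common_neighbor_score(u, v)) * S[u, v]
                  for v in range(n) if v != u and not np.isnan(Q[v, s])}
        total = sum(scores.values())
        if total == 0:
            return float(np.nanmean(Q[:, s]))
        return sum(w * Q[v, s] for v, w in scores.items()) / total

    print(round(predict(2, 0), 3))  # predicted QoS of service 0 for user 2
    ```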
  • Ning Guo · Yanhua Yu · Meina Song · Junde Song · Yu Fu
    ABSTRACT: In many real-world scenarios, high-speed data streams come with non-uniform misclassification costs and thus call for cost-sensitive stream classification algorithms. However, little literature focuses on this issue. On the other hand, existing cost-sensitive classification algorithms can achieve excellent performance in terms of total misclassification cost, but they often cause an obvious drop in accuracy, which greatly restricts their practical application. In this paper, we present an improved folk theorem. Based on the new theorem, an existing accuracy-based classification algorithm can be converted into a soft cost-sensitive one immediately, which allows us to take both accuracy and cost into account. Following this theorem, the soft-CsGDT algorithm, an extension of GDT, is proposed to process data streams with non-uniform misclassification costs. Experimental results on both synthetic and real-world datasets show that, compared with the cost-sensitive algorithm, the accuracy of soft-CsGDT is significantly improved while the total misclassification cost remains approximately the same.
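    The conversion the abstract describes, turning an accuracy-based learner into a cost-sensitive one, is commonly realized by cost-proportionate example weighting. The sketch below shows that generic reduction with a stock decision tree; it is not the soft-CsGDT algorithm, and the costs and data are assumptions.
    ```python
    # Generic reduction: make an accuracy-based learner cost-sensitive by
    # weighting examples in proportion to their misclassification cost.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

    # Non-uniform costs (illustrative): missing class 1 is 5x worse than a false alarm.
    cost_fn, cost_fp = 5.0, 1.0
    weights = np.where(y == 1, cost_fn, cost_fp)

    plain = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
    cost_aware = DecisionTreeClassifier(max_depth=4, random_state=0).fit(
        X, y, sample_weight=weights)

    def total_cost(model):
        pred = model.predict(X)
        fn = np.sum((y == 1) & (pred == 0))
        fp = np.sum((y == 0) & (pred == 1))
        return cost_fn * fn + cost_fp * fp

    print("accuracy-based total cost:", total_cost(plain))
    print("cost-weighted  total cost:", total_cost(cost_aware))
    ```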
  • Source
    Yanhua Yu · Meina Song · Yu Fu · Junde Song
    ABSTRACT: Traffic prediction plays an integral role in telecommunication network planning and network optimization. In this paper, we investigate traffic forecasting for data services in 3G mobile networks. Although the Box-Jenkins model has been proven appropriate for voice traffic (since the arrival of calls follows a Poisson distribution), it has been demonstrated that Internet traffic exhibits statistical self-similarity and has to be modeled using the Fractional AutoRegressive Integrated Moving Average (FARIMA) process. However, a few studies have concluded that the FARIMA process may fail in modeling Internet traffic. Motivated by this, we conducted experiments on modeling benchmark Internet traffic and found that the FARIMA process fails because of the significant multifractal characteristic inherent in the traffic series. We then investigate the traffic series of data services in a 3G mobile network of a province in China. Rich multifractal spectra are found in this series. Based on this observation, an integrated method combining the AutoRegressive Moving Average (ARMA) and FARIMA processes is applied. The experimental results verify the effectiveness of the integrated prediction method.
    Tsinghua Science & Technology 08/2013; 18(4). DOI:10.1109/TST.2013.6574678
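    A FARIMA model is essentially an ARMA model applied to a fractionally differenced series. The sketch below shows that building block under assumed parameter values (the long-memory order d and the ARMA orders are illustrative); it is not the paper's integrated ARMA/FARIMA predictor.
    ```python
    # Sketch of the FARIMA building block: apply the fractional difference
    # (1 - B)^d via truncated binomial weights, then fit an ARMA model.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def frac_diff(x, d, n_weights=100):
        """Truncated binomial weights: w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k."""
        w = [1.0]
        for k in range(1, n_weights):
            w.append(-w[-1] * (d - k + 1) / k)
        w = np.array(w)
        out = np.empty(len(x))
        for t in range(len(x)):
            lags = x[max(0, t - n_weights + 1): t + 1][::-1]   # x_t, x_{t-1}, ...
            out[t] = float(w[:len(lags)] @ lags)
        return out

    rng = np.random.default_rng(2)
    traffic = np.cumsum(rng.normal(size=500)) + 50.0   # stand-in for a traffic series

    d = 0.3                                            # long-memory order (assumed)
    stationary = frac_diff(traffic, d)

    # ARMA(2, 1) on the fractionally differenced series (orders are illustrative).
    fit = ARIMA(stationary, order=(2, 0, 1)).fit()
    print(fit.params)
    ```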
  • Pei Zhang · Haihong E · Bo Li · Guili He · Xiang Zhang · Junde Song
    ABSTRACT: This paper is based on the TTCN-3 terminal conformance test structure defined by ETSI (European Telecommunications Standards Institute). A security module is provided for a TD-LTE/TD-SCDMA inter-RAT test system that uses this test structure. The proposed security module realization mechanism can be applied directly to conformance test system implementations, so the work is significant in both theory and practice. More importantly, the security module realization for the TD-SCDMA system fills a gap in the TTCN-3 test structure.
    Communications in China - Workshops (CIC/ICCC), 2013 IEEE/CIC International Conference on; 08/2013
  • Jian Zhou · Meina Song · Junde Song · Xian-wei Zhou · Liyan Sun
    ABSTRACT: In deep space delay tolerant networks (DTNs), rekeying consumes a great deal of energy and time because a reliable end-to-end channel between members and the key management center is rarely available. To address this problem, this paper puts forward an autonomic group key management scheme for deep space DTNs, in which a logical key tree based on a one-encryption-key multi-decryption-key protocol is presented. Each leaf node, with its secret decryption key, corresponds to a network member, and each non-leaf node corresponds to a public encryption key generated from the decryption keys of all leaf nodes in its subtree. In the proposed scheme, every legitimate member can modify the public encryption key with its own decryption key, just as the key management center can, so rekeying can be completed locally by a leaving or joining member without support from the key management center. In terms of security, forward security and backward security are guaranteed. In terms of efficiency, the rekeying message cost of the proposed scheme is half that of LKH when a new member joins; in the member-leaving event the leaving member can trade off computation cost against message cost, and the rekeying message cost is constant and independent of network scale. Therefore, the proposed scheme is more suitable for deep space DTNs than LKH, and rekeying is localized securely.
    Wireless Personal Communications 07/2013; 77(1):269-287. DOI:10.1007/s11277-013-1505-1 · 0.65 Impact Factor
  • Guan Le · Ke Xu · Junde Song
    ABSTRACT: Cloud computing is a promising key technology for building the architecture of future massive IT systems, and one of its key benefits is providing customers with elastic resources that follow the fluctuation of request workloads. In this paper, we propose an adaptive resource management policy for handling requests of deadline-bound applications with an elastic cloud. An adaptive resource management architecture is proposed, dividing resource management into two parts: resource provisioning and job scheduling. We design an analytical provisioning model for adaptive provisioning based on queuing theory, introducing a key metric named the average interval time. Three job scheduling policies, First-Come-First-Served (FCFS), Shortest Job First (SJF), and Nearest Deadline First (NDF), are proposed to dequeue appropriate jobs for execution, each reflecting a different preference over execution order. A simulation has been set up with a realistic grid workload, and the results show that our provisioning model provides elastic resource provisioning for dynamic workloads and that FCFS achieves better performance than the other scheduling policies.
    Service Sciences (ICSS), 2013 International Conference on; 01/2013
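    The three scheduling policies named above differ only in the order in which queued jobs are dequeued. A minimal sketch, with illustrative job fields rather than the paper's simulation setup:
    ```python
    # Sketch of the three dequeue orders: FCFS, SJF, NDF. Values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class Job:
        job_id: int
        arrival: float    # arrival time
        runtime: float    # estimated execution time
        deadline: float   # absolute deadline

    def fcfs(queue):
        return sorted(queue, key=lambda j: j.arrival)    # First-Come-First-Served

    def sjf(queue):
        return sorted(queue, key=lambda j: j.runtime)    # Shortest Job First

    def ndf(queue):
        return sorted(queue, key=lambda j: j.deadline)   # Nearest Deadline First

    queue = [Job(1, 0.0, 30.0, 100.0), Job(2, 5.0, 10.0, 40.0), Job(3, 8.0, 20.0, 60.0)]
    for policy in (fcfs, sjf, ndf):
        print(policy.__name__, [j.job_id for j in policy(queue)])
    ```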
  • Yanhua Yu · Meina Song · Junde Song
    Journal of Computers 12/2012; 7(12). DOI:10.4304/jcp.7.12.2921-2930
  • Ke Xu · Hui Zhang · Meina Song · Junde Song
    ABSTRACT: The enormous increase in Internet traffic has been leading to problems such as increased routing topology complexity, explosion of routing table entries, and provider-dependent addressing, which reduce the speed of network services. Emerging techniques such as CDN, P2P, and VPN speed up the network from different perspectives. A new speed-up system called CANR (content-aware and name-based routing) is proposed in this paper, which integrates the benefits of several existing mechanisms. CANR consists of a cluster of proxy peers deployed in different network domains that work as collaborative routers, forwarding requests to each other to speed up cross-domain visits. CANR automatically becomes aware of network changes and reconstructs its name-based routing table by itself using a new multi-objective k-shortest-path algorithm, finding a set of the k cheapest and fastest routing paths, which differs from current statically preconfigured systems.
    Proceedings of the 2012 international conference on Pervasive Computing and the Networked World; 11/2012
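    One way to approximate the multi-objective k-shortest-path step is to scalarize the objectives into a single edge weight and enumerate the k best simple paths. The sketch below does that with networkx; the graph, weights, and scalarization factor are assumptions, and this is not the paper's algorithm.
    ```python
    # Sketch: pick k candidate routing paths by scalarizing two objectives
    # (latency, monetary cost) into one edge weight.
    from itertools import islice
    import networkx as nx

    G = nx.DiGraph()
    edges = [  # (u, v, latency_ms, cost)
        ("A", "B", 10, 3), ("B", "D", 15, 1), ("A", "C", 12, 1),
        ("C", "D", 20, 2), ("B", "C", 3, 1), ("A", "D", 50, 1),
    ]
    alpha = 0.7  # preference between speed and cheapness (assumed)
    for u, v, lat, cost in edges:
        G.add_edge(u, v, w=alpha * lat + (1 - alpha) * cost)

    k = 3
    for p in islice(nx.shortest_simple_paths(G, "A", "D", weight="w"), k):
        total = sum(G[u][v]["w"] for u, v in zip(p, p[1:]))
        print(p, round(total, 2))
    ```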
  • Guan Le · Ke Xu · Meina Song · Junde Song
    10/2012; 4(19):601-611. DOI:10.4156/aiss.vol4.issue19.74
  • Wen'an Zhou · Jie Chang · Junde Song
    ABSTRACT: The service adaptation mechanism is one of the key issues in ubiquitous networks. However, most proposed service adaptation approaches are either not context-aware or based on a specific definition of context in a specific environment. This paper presents a context-aware service adaptation mechanism for ubiquitous networks that relies on user-to-object, space-time interaction patterns and performs service adaptation according to the user's context (such as preferences and habits), network context, service context, and device context. The contributions of this paper are: 1) importing user similarity into the service adaptation process while also considering users' trust values; 2) a similar-users-based service adaptation algorithm (SUSA), combining entropy theory and the fuzzy analytic hierarchy process (FAHP). Evaluation results show that the service adaptation approach based on context awareness, user similarity, and the SUSA algorithm outperforms the traditional service adaptation algorithm in terms of accuracy.
    Vehicular Technology Conference (VTC Spring), 2012 IEEE 75th; 05/2012
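    Entropy-based weighting is one ingredient the abstract names. The sketch below shows the standard entropy weight method on an illustrative context-attribute matrix; the FAHP step and the rest of the SUSA algorithm are omitted, and the numbers are assumptions.
    ```python
    # Standard entropy weight method: criteria with more dispersion get more weight.
    import numpy as np

    # Rows = candidate adaptations, columns = context criteria (e.g. bandwidth,
    # device capability, preference match). Illustrative values.
    M = np.array([[0.8, 0.6, 0.9],
                  [0.5, 0.9, 0.4],
                  [0.7, 0.7, 0.6]])

    # 1. Normalize each column so entries sum to 1.
    P = M / M.sum(axis=0)

    # 2. Entropy of each criterion, scaled by 1 / ln(n).
    n = M.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(n)

    # 3. Degree of divergence and entropy weights.
    d = 1 - E
    weights = d / d.sum()
    print("criterion weights:", np.round(weights, 3))
    print("candidate scores: ", np.round(M @ weights, 3))
    ```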
  • Source
    Xiaobo Wang · Xianwei Zhou · Junde Song
    ABSTRACT: In the future it will be important to construct infrastructure on the surfaces of deep space planets, so that networking can support both communication between surface network nodes and planet satellite access. Because multiple access is an important technique in deep space communication, a deep space exploration scenario based on multiple access is proposed, which includes a planet surface network and a satellite access network. Hypergraph theory is then used to model the network, providing a new way to improve network connectivity, save frequency spectrum resources, and reduce mutual interference; how to construct a hyperedge is also described. Based on the network model, a 7-layer network architecture is introduced.
    Journal of Networks 04/2012; 7(4). DOI:10.4304/jnw.7.4.723-729
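    A hyperedge in this setting can be read as the set of nodes sharing one multiple-access channel. A tiny illustrative hypergraph structure follows; node and edge names are assumptions, not from the paper.
    ```python
    # Tiny hypergraph sketch: a hyperedge groups all nodes on one shared channel
    # (e.g. a satellite beam or a surface cluster).
    class Hypergraph:
        def __init__(self):
            self.nodes = set()
            self.hyperedges = {}          # name -> frozenset of member nodes

        def add_hyperedge(self, name, members):
            members = frozenset(members)
            self.nodes |= members
            self.hyperedges[name] = members

        def connected(self, u, v):
            """True if a chain of overlapping hyperedges links u and v."""
            frontier, seen = {u}, set()
            while frontier:
                node = frontier.pop()
                seen.add(node)
                for members in self.hyperedges.values():
                    if node in members:
                        frontier |= members - seen
            return v in seen

    H = Hypergraph()
    H.add_hyperedge("surface_cluster_1", {"rover1", "rover2", "lander"})
    H.add_hyperedge("surface_cluster_2", {"rover3", "lander"})
    H.add_hyperedge("satellite_beam", {"lander", "orbiter"})
    print(H.connected("rover1", "orbiter"))   # True: linked through the lander
    ```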
  • Xianqi Lu · Si Chen · Wen-an Zhou · Junde Song
    ABSTRACT: Coordinated multipoint (CoMP) transmission/reception is a new transmission scheme in the LTE-Advanced system that satisfies the system requirements and improves cell-edge throughput. It is essential to design a series of feasible and effective signaling processes to implement and deploy CoMP in a real network and achieve the expected performance. In this contribution, we discuss factors that may lead to variability in the CoMP signaling process design, including the network structure, the scheduling strategy, and the activation and deactivation model. We also propose common requirements that preserve the generality of the CoMP signaling process design during the activation, data transmission, and deactivation sub-processes. Finally, we provide a brief summary of the contribution and an outlook on future studies.
    Computer Science and Automation Engineering (CSAE), 2012 IEEE International Conference on; 01/2012
  • Jie Chang · Wen'an Zhou · Junde Song
    ABSTRACT: The service continuity mechanism provides multimedia applications with a seamless multimedia experience across multiple networks (WiFi, 3G, WiMAX, LTE) as well as multiple devices (mobile phones, PCs, PDAs, set-top boxes) in a ubiquitous network. However, most proposed service continuity approaches either do not provision resources adaptively or are based on a specific definition of context in the ubiquitous network environment. This paper presents a multi-device service continuity mechanism with adaptive resource provisioning for ubiquitous networks, relying on IMS, which performs service continuity according to the user's context (such as preferences and habits), network context, service context, and device context. The contributions of this paper are: 1) the service continuity scenario, architecture, and requirements; 2) the signaling flow of resource switching for the proposed scenario; 3) the process of adaptive resource provisioning. Evaluation results show that the service continuity approach based on context awareness, multi-device coordination, and adaptive resource provisioning outperforms traditional service continuity.
    01/2012; DOI:10.1109/WCNC.2012.6214296
  • Guan Le · Ke Xu · Junde Song
    ABSTRACT: Federated clouds establish a model in which independent cloud providers cooperate and share resources to cope with unforeseen demand loads. One of the challenges in federated clouds is finding suitable virtual machines to host resource requests under constraints on multiple resource attributes. This paper proposes a Gossip-based Hybrid Multi-attribute Overlay (GHMO) for effective resource discovery in federated clouds. GHMO enhances a structured overlay with a gossip protocol, expanding the possible routing range to improve the efficiency of multi-attribute search. A weight overlay is introduced to account for routing costs among federated clouds, and an improved neighbor selection strategy is proposed to reduce routing costs during multi-attribute search. Experimental evaluations show that the performance of the proposed approach is acceptable and stable, and that routing hops and costs are reduced compared with other multi-attribute search methods.
    e-Business Engineering (ICEBE), 2012 IEEE Ninth International Conference on; 01/2012
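    The gossip ingredient can be sketched as peers periodically exchanging partial neighbor views and keeping the cheapest neighbors, which biases routing toward low-cost links. The snippet below illustrates that idea only; it is not the GHMO protocol, and the costs and view size are assumptions.
    ```python
    # Cost-biased gossip neighbor selection sketch. Illustrative values only.
    import random

    VIEW_SIZE = 4

    class Peer:
        def __init__(self, name, all_costs):
            self.name = name
            self.costs = all_costs        # routing cost to every other peer
            self.view = set()             # current neighbor view

        def gossip_with(self, other):
            # Exchange views (plus each other), then keep the cheapest neighbors.
            merged_self = (self.view | other.view | {other.name}) - {self.name}
            merged_other = (self.view | other.view | {self.name}) - {other.name}
            self.view = set(sorted(merged_self, key=self.costs.get)[:VIEW_SIZE])
            other.view = set(sorted(merged_other, key=other.costs.get)[:VIEW_SIZE])

    random.seed(0)
    names = [f"cloud{i}" for i in range(8)]
    costs = {a: {b: random.randint(1, 20) for b in names if b != a} for a in names}
    peers = {n: Peer(n, costs[n]) for n in names}

    # Bootstrap: each peer starts knowing one random other peer.
    for p in peers.values():
        p.view = {random.choice([n for n in names if n != p.name])}

    for _ in range(10):                   # a few gossip rounds
        for p in peers.values():
            p.gossip_with(peers[random.choice(sorted(p.view))])

    print({n: sorted(p.view) for n, p in peers.items()})
    ```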
  • Source
    Lei You · Ping Wu · Mei Song · Junde Song · Yong Zhang
    ABSTRACT: In this paper, we consider a cross-layer design aimed at enhancing performance for uplink transmission in an orthogonal frequency division multiple-access (OFDMA)-based cellular network with fixed relay stations. Because mobile stations (MSs) spend most of their power on uplink transmission, power-efficient resource allocation is very important to MSs. We develop a cross-layer optimisation framework for two types of uplink flows (inelastic and elastic flows) that have different quality-of-service requirements. For inelastic flows with fixed-rate requirements, we formulate the cross-layer optimisation problem as the minimisation of the total transmission power of MSs under the constraints of the flow conservation law, subcarrier assignment, relaying path selection and power allocation. For elastic flows with flexible-service-rate requirements, we consider the cross-layer trade-off between uplink service rate and power consumption of MSs and pose the optimisation problem as the maximisation of a linear combination of utility (of service rates) and power consumption (of MSs). Different trade-offs can be achieved by varying the weighting parameters. Dual decomposition and subgradient methods are used to solve the problems optimally with reduced computational complexity. The simulation results show that, through the proposed cross-layer resource optimisation framework and algorithms, the significant benefits of deploying multiple fixed relays in an OFDMA cellular network can be fully obtained, such as reduced power consumption, increased service rate and energy savings in the uplink transmission of MSs. Copyright © 2011 John Wiley & Sons, Ltd.
    European Transactions on Telecommunications 10/2011; 22(6):296-314. DOI:10.1002/ett.1480 · 1.35 Impact Factor
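    A generic form of the inelastic-flow problem and the dual subgradient update mentioned above, written with assumed notation rather than the paper's exact formulation:
    ```latex
    % Minimize total MS transmit power subject to each flow's fixed-rate requirement
    % (notation assumed for illustration).
    \begin{align}
      \min_{\{p_{m,k}\}} \;\; & \sum_{m}\sum_{k} p_{m,k} \notag\\
      \text{s.t.} \;\; & \sum_{k} r_{m,k}(p_{m,k}) \ge R_m^{\mathrm{req}} \quad \forall m, \notag\\
      & \text{plus subcarrier-assignment, relay-selection and flow-conservation constraints.} \notag
    \end{align}
    % Dual decomposition relaxes the rate constraints with multipliers
    % \lambda_m \ge 0, solves the per-subcarrier subproblems, and updates the
    % multipliers with a projected subgradient step:
    \begin{equation}
      \lambda_m^{(t+1)} = \Big[ \lambda_m^{(t)} + \alpha^{(t)} \Big( R_m^{\mathrm{req}} - \sum_{k} r_{m,k}^{(t)} \Big) \Big]^{+}
    \end{equation}
    ```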
  • Source
    ABSTRACT: Using virtualization to consolidate servers is a routine method for reducing power consumption in data centers. Current practice, however, assumes homogeneous servers operating in a homogeneous physical environment. Experimental evidence collected in our mid-size, fully instrumented data center challenges those assumptions by showing that chassis construction can significantly influence cooling power usage. In particular, the multiple power domains in a single chassis can have different levels of power efficiency, and power consumption is further affected by differences in electrical current levels across these two domains. This paper describes experiments designed to validate these facts, followed by a proposed current-aware capacity management system (CACM) that controls resource allocation across power domains by periodically migrating virtual machines among servers. The method not only fully accounts for the influence of the current difference between the two domains but also enforces power caps and safety levels for node temperatures. Comparisons with industry-standard techniques that are not aware of physical constraints show that current-awareness can improve performance as well as power consumption, with about 16% energy savings. Such savings indicate the utility of adding physical awareness to the ways in which IT systems are managed.
    Green Computing Conference and Workshops (IGCC), 2011 International; 08/2011
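    The placement idea can be sketched as: when migrating a virtual machine, prefer the power domain drawing less current, subject to its power cap. The snippet below is an illustrative rule of that kind, not the paper's CACM controller; all readings and thresholds are assumptions.
    ```python
    # Current-aware placement sketch: pick the least-loaded (lowest-current)
    # power domain whose power cap still holds after adding the VM.
    from dataclasses import dataclass

    @dataclass
    class PowerDomain:
        name: str
        current_amps: float     # measured electrical current
        power_watts: float      # measured power draw
        power_cap: float        # enforced cap

    def choose_domain(domains, vm_power_estimate):
        feasible = [d for d in domains
                    if d.power_watts + vm_power_estimate <= d.power_cap]
        if not feasible:
            return None          # no safe placement; defer the migration
        return min(feasible, key=lambda d: d.current_amps)

    domains = [PowerDomain("chassis1/domainA", 14.2, 2800.0, 3300.0),
               PowerDomain("chassis1/domainB", 11.6, 2400.0, 3300.0)]
    target = choose_domain(domains, vm_power_estimate=250.0)
    print("migrate VM to:", target.name if target else "no feasible domain")
    ```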
  • Jing Han · Meina Song · Junde Song
    ABSTRACT: Traditional relational databases for online storage are becoming increasingly problematic: their low performance does not gracefully meet the needs of mass data, and approaches for storing massive data are still imperfect. NoSQL and distributed in-memory database technologies have the potential to simplify or eliminate many of these challenges. NoSQL database technologies provide key-value data storage and largely ensure high performance. Distributed in-memory database technologies provide a means to store mass data in the cloud easily, dynamically, and scalably. This paper argues for a new architecture called CDSA, a distributed in-memory NoSQL database architecture for cloud computing, which improves data query performance and ensures mass data storage in the cloud through rational strategies. Furthermore, any node can be added to or removed from the distributed database cluster while the other nodes continue to work without stopping service. We believe that CDSA can provide durable storage with high throughput and low access latency.
    Computer and Information Science (ICIS), 2011 IEEE/ACIS 10th International Conference on; 06/2011
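    The claim that nodes can join or leave the cluster without stopping service is commonly realized with consistent hashing, which remaps only a small fraction of keys when membership changes. The abstract does not say CDSA uses this exact scheme; the sketch below only illustrates the property.
    ```python
    # Consistent hashing sketch: keys map to a ring of virtual node replicas,
    # so adding or removing a node moves only nearby keys.
    import bisect
    import hashlib

    def _h(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    class HashRing:
        def __init__(self, nodes=(), replicas=50):
            self.replicas = replicas
            self._ring = []                  # sorted list of (hash, node)
            for n in nodes:
                self.add_node(n)

        def add_node(self, node):
            for i in range(self.replicas):
                bisect.insort(self._ring, (_h(f"{node}#{i}"), node))

        def remove_node(self, node):
            self._ring = [(h, n) for h, n in self._ring if n != node]

        def node_for(self, key):
            idx = bisect.bisect(self._ring, (_h(key), ""))
            return self._ring[idx % len(self._ring)][1]

    ring = HashRing(["mem-node-1", "mem-node-2", "mem-node-3"])
    keys = [f"user:{i}" for i in range(6)]
    print({k: ring.node_for(k) for k in keys})
    ring.add_node("mem-node-4")              # cluster grows; only some keys move
    print({k: ring.node_for(k) for k in keys})
    ```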