Qiang Yang

Fourth Military Medical University, Xi’an, Shaanxi, China

Publications (499) · 353.68 Total Impact Points


  • No preview · Article · Jan 2016 · ACM Transactions on the Web
  • Zhejing Bao · Qin Zhou · Zhihui Yang · Qiang Yang · Lizhong Xu · Ting Wu
    ABSTRACT: In Part II of this two-part paper, an improved particle swarm optimization (IPSO) algorithm is proposed for solving the microgrid (MG) day-ahead cooling and electricity coordinated scheduling problem. Two significant improvements are made over the conventional PSO algorithm. First, a mandatory correction is applied after each position update to ensure that the complex coupled constraints among the components of a particle are met, which improves the algorithm's performance on problems with complex constraints. Second, a solution denoted by a particle is assumed to occupy a neighboring area whose size decreases from an initial value to nearly zero as the iteration count approaches its limit, which helps avoid premature convergence. For an MG composed of combined cooling, heating and power (CCHP) units, PV panels, wind turbines, and storage batteries, a range of case studies under different MG operating modes are carried out through simulation. The results demonstrate that the proposed multi time-scale, multi energy-type coordinated MG scheduling solution achieves the co-optimization of multi energy-type supply to meet customers' cooling and electricity demands, and makes the MG controllable as seen from the connected main grid. (A minimal PSO sketch illustrating these two modifications follows this entry.)
    No preview · Article · Sep 2015 · IEEE Transactions on Power Systems
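    The following is a minimal, self-contained sketch (not the authors' implementation) of the two IPSO modifications described above: a repair step that forces updated positions back into the feasible region, and a shrinking per-particle neighborhood whose radius decays to nearly zero over the iterations. The objective, the box constraints and all parameter values are hypothetical stand-ins for the paper's microgrid cost model and coupled constraints.

```python
import numpy as np

def cost(x):
    # Hypothetical toy objective standing in for the MG operation cost.
    return float(np.sum((x - 0.3) ** 2))

def repair(x, lo=0.0, hi=1.0):
    # "Mandatory correction": project an updated position back into the
    # feasible set (plain box clipping here; the paper handles coupled constraints).
    return np.clip(x, lo, hi)

def ipso(dim=6, n_particles=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for t in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = repair(x + v)
        # Shrinking neighborhood: each particle also represents a small region
        # whose radius decays to ~0 as t approaches the iteration limit.
        radius = 0.1 * (1.0 - t / iters)
        trial = repair(x + rng.uniform(-radius, radius, x.shape))
        f_x = np.array([cost(p) for p in x])
        f_trial = np.array([cost(p) for p in trial])
        better = f_trial < f_x
        x[better], f_x[better] = trial[better], f_trial[better]
        improved = f_x < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f_x[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

if __name__ == "__main__":
    best_x, best_f = ipso()
    print("best cost:", best_f)
```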
  • Zhejing Bao · Qin Zhou · Zhihui Yang · Qiang Yang · Lizhong Xu · Ting Wu
    ABSTRACT: For optimal microgrid (MG) operation, one challenge is that the supply of cooling and electricity is a coupled co-optimization problem when combined cooling, heating and power (CCHP) units and ice-storage air-conditioners are considered. Another challenge is that the inherent randomness of renewable energy within the MG must be accommodated by the MG itself. In Part I of this two-part paper, the partial-load performance of CCHP units and the performance of ice-storage air-conditioners are modeled, and the cooling and electricity coordinated MG day-ahead scheduling and real-time dispatching models are established. In the day-ahead scheduling model, the uncertainty of wind and solar power is represented by multiple scenarios and the objective is to minimize the expected MG operation cost. In the real-time dispatching model, dispatch schemes on different time scales are applied to cooling and electricity respectively, so that fine dispatching of the components within the MG smooths out fluctuations in renewable energy supply and follows variations in cooling and electricity demands, minimizing the impact of the MG on the connected main grid. The proposed multi time-scale cooling and electricity coordinated schedule achieves an integrated optimization of multi energy-type supply and makes the MG controllable as seen from the main grid.
    No preview · Article · Sep 2015 · IEEE Transactions on Power Systems
  • ABSTRACT: Differential privacy (DP) has been widely explored in academia recently, but less so in industry, possibly because of its strong privacy guarantee. This paper makes the first attempt to implement three basic DP architectures in a deployed telecommunication (telco) big data platform for data mining applications. We find that all DP architectures incur less than 5% loss of prediction accuracy when a weak privacy guarantee is adopted (e.g., privacy budget parameter ε ≥ 3). However, when a strong privacy guarantee is assumed (e.g., privacy budget parameter ε ≤ 0.1), all DP architectures lead to 15%~30% accuracy loss, which implies that real-world industrial data mining systems cannot work well under the strong privacy guarantees recommended by previous research. Among the three basic DP architectures, the hybridized DM (Data Mining) and DB (Database) architecture performs best because its privacy protection is designed for the specific data mining algorithm. Through extensive experiments on big data, we also observe that the accuracy loss increases with the variety of features, but decreases with the volume of training data. Therefore, to make DP practically usable in large-scale industrial systems, our observations suggest three possible research directions: (1) relaxing the privacy guarantee (e.g., increasing the privacy budget ε) and studying its effectiveness on specific industrial applications; (2) designing specific privacy schemes for specific data mining algorithms; and (3) using a large volume of data with low variety for training the classification models. (An illustrative Laplace-mechanism sketch follows this entry.)
    No preview · Conference Paper · Aug 2015
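    A minimal sketch of the Laplace mechanism, the textbook way a privacy budget ε trades off against noise (and hence accuracy); it is not the telco platform's implementation, and the counting query below is a hypothetical example.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Smaller epsilon means a stronger privacy guarantee and larger noise,
    # which is why accuracy degrades sharply once epsilon drops to ~0.1.
    rng = np.random.default_rng(0) if rng is None else rng
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one record
# changes the true count by at most 1.
true_count = 1000
for eps in (3.0, 1.0, 0.1):
    noisy = laplace_mechanism(true_count, sensitivity=1.0, epsilon=eps)
    print(f"epsilon={eps}: noisy count = {noisy:.1f}")
```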
  • ABSTRACT: Tumor necrosis factor-α (TNF-α) antagonism alleviates myocardial ischemia/reperfusion (MI/R) injury. However, the mechanisms by which the downstream mediators of TNF-α change after acute antagonism during MI/R remain unclear. Adiponectin (APN) exerts anti-ischemic effects, but it is downregulated during MI/R. This study was conducted to investigate whether TNF-α is responsible for the decrease of APN, and whether antagonizing TNF-α affects MI/R injury by increasing APN. Male adult wild-type (WT) mice, APN knockout (APN KO) mice, and mice with cardiac knockdown of APN receptors via siRNA injection were subjected to 30 min of MI followed by reperfusion. The TNF-α antagonist etanercept or the globular domain of APN (gAD) was injected 10 min before reperfusion. Etanercept ameliorated MI/R injury in WT mice, as evidenced by improved cardiac function, reduced infarct size, and reduced cardiomyocyte apoptosis. APN concentrations were augmented in response to etanercept, followed by an increase in AMP-activated protein kinase phosphorylation. Etanercept still improved cardiac function and reduced infarct size and apoptosis in both APN KO and APN receptor knockdown mice; however, its effect was significantly weakened in these mice compared with WT mice. TNF-α is responsible for the decrease in APN during MI/R. The cardioprotective effects of TNF-α neutralization are partially due to the upregulation of APN. These results provide more insight into TNF-α-mediated signaling during MI/R and support the need for clinical trials to validate the efficacy of acute TNF-α antagonism in the treatment of MI/R injury.
    No preview · Article · Apr 2015 · AJP Heart and Circulatory Physiology
  • Youjian Zhang · Qiang Yang · Wenjun Yan
    ABSTRACT: This paper addresses the network-based leader-following consensus problem for second-order multi-agent systems with nonlinear dynamics. Based on Lyapunov-Krasovskii theory, a new delay-dependent sufficient condition in terms of linear matrix inequalities (LMIs) is presented to guarantee consensus of the multi-agent system, and a sufficient condition for network-based controller design is proposed to ensure that the followers reach consensus with the leader. The effectiveness and applicability of the suggested solution are evaluated and verified through the simulation of two numerical examples. (An illustrative LMI feasibility check follows this entry.)
    No preview · Article · Apr 2015 · Transactions of the Institute of Measurement and Control
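    As a generic illustration (not the delay-dependent LMIs derived in the paper), the sketch below checks a standard Lyapunov LMI feasibility condition with cvxpy; the system matrix and tolerance are hypothetical.

```python
import numpy as np
import cvxpy as cp

# Hypothetical 2x2 system matrix; the paper's LMIs additionally involve
# delay terms and the multi-agent coupling structure, omitted here.
A = np.array([[-2.0, 1.0],
              [ 0.0, -3.0]])
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                   # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]    # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print("LMI feasibility status:", prob.status)
```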
  • Bingjie Ruan · Qiang Yang · Xinli Fang · Wenjun Yan

    No preview · Conference Paper · Oct 2014
  • Youjian Zhang · Peiran Li · Qiang Yang · Wenjun Yan
    ABSTRACT: In this paper, the synchronization problem is addressed in the context of Lur'e-type complex switched networks (CSNs) with coupling time-varying delay, in which every node is a Lur'e system. Based on Lyapunov–Krasovskii theory and the linear matrix inequality (LMI) technique, a delay-dependent synchronization criterion and a decentralized state-feedback dynamic controller for synchronization of CSNs are proposed. By choosing a common Lyapunov–Krasovskii functional and using the combined reciprocal convex technique, some previously ignored terms can be reconsidered and less conservative conditions can be obtained. In addition, by using an eigenvalue-decoupling method and convex optimization theory, high-dimension LMIs are decoupled into a set of low-dimension ones and the computational complexity of the criterion is significantly reduced. The effectiveness and applicability of the suggested control solution are verified and assessed through the analysis of two numerical examples.
    No preview · Article · Sep 2014 · Asian Journal of Control
  • ABSTRACT: Background: Drosophila Dscam1 is a cell-surface protein that plays important roles in neural development and axon tiling of neurons. It is known that thousands of isoforms bind themselves through specific homophilic interactions, a process which provides the basis for cellular self-recognition. Detailed biochemical studies of specific isoforms strongly suggest that homophilic binding, i.e. the formation of homodimers by identical Dscam1 isoforms, is of great importance for the self-avoidance of neurons. Due to experimental limitations, it is currently impossible to measure the homophilic binding affinities for all 19,000 potential isoforms. Results: Here we reconstructed the DNA sequences of an ancestral Dscam form (which likely existed approximately 40-50 million years ago) using a comparative genomic approach. On the basis of this sequence, we established a working model to predict the self-binding affinities of all isoforms in both the current and the ancestral genome, using machine-learning methods. Detailed computational analysis was performed to compare the self-binding affinities of all isoforms present in these two genomes. Our results revealed that 1) isoforms containing newly derived variable domains exhibit higher self-binding affinities than those with conserved domains, and 2) current isoforms display higher self-binding affinities than their counterparts in the ancient genome. As thousands of Dscam isoforms are needed for the self-avoidance of neurons, we propose that an increase in self-binding affinity provided the basis for the successful evolution of the arthropod brain. Conclusions: Our data provide an excellent model for future experimental studies of the binding behavior of Dscam isoforms. The results of our analysis indicate that evolution favored the rise of novel variable domains because of their higher self-binding affinities, rather than merely selecting for an expansion of isoform diversity, as this selection process would have established the powerful mechanisms required for neuronal self-avoidance. Thus, we reveal here a new molecular mechanism for the successful evolution of arthropod brains.
    Preview · Article · Aug 2014 · BMC Evolutionary Biology
  • Bin Wu · Erheng Zhong · Ben Tan · Andrew Horner · Qiang Yang
    ABSTRACT: Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.
    No preview · Article · Aug 2014
  • Ying Wei · Yangqiu Song · Yi Zhen · Bo Liu · Qiang Yang
    ABSTRACT: Hashing has enjoyed great success in large-scale similarity search. Recently, researchers have studied multi-modal hashing to meet the need for similarity search across different types of media. However, most existing methods are designed for search across multiple views for which explicit bridge information is provided. Given a heterogeneous media search task, we observe that abundant multi-view data can be found on the Web and can serve as an auxiliary bridge. In this paper, we propose a Heterogeneous Translated Hashing (HTH) method that incorporates such an auxiliary bridge not only to improve multi-view search but also to enable similarity search across heterogeneous media that have no direct correspondence. HTH simultaneously learns hash functions embedding heterogeneous media into different Hamming spaces, and translators aligning these spaces. Unlike almost all existing methods, which map heterogeneous data into a common Hamming space, mapping to different spaces provides more flexibility and discriminative power. We empirically verify the effectiveness and efficiency of our algorithm on two large real-world datasets: a publicly available Flickr dataset and the MIRFLICKR-Yahoo Answers dataset. (An illustrative Hamming-space hashing sketch follows this entry.)
    No preview · Article · Aug 2014
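    The sketch below illustrates the general idea of embedding features into a Hamming space for fast similarity search, using random-hyperplane sign hashing as a stand-in for HTH's learned hash functions and translators; the data and dimensions are hypothetical.

```python
import numpy as np

def random_hyperplanes(dim, n_bits, rng):
    # Stand-in for learned hash functions: one random hyperplane per bit.
    return rng.normal(size=(dim, n_bits))

def hash_codes(X, W):
    return (X @ W > 0).astype(np.uint8)        # binary codes in a Hamming space

def hamming_dist(codes, query):
    return np.count_nonzero(codes != query, axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # e.g. image features
W = random_hyperplanes(64, 32, rng)
codes = hash_codes(X, W)
query_code = hash_codes(X[:1], W)
nearest = np.argsort(hamming_dist(codes, query_code))[:5]
print("nearest items by Hamming distance:", nearest)
```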
  • Ben Tan · Erheng Zhong · Evan Wei Xiang · Qiang Yang
    ABSTRACT: Transfer learning, which aims to help learning tasks in a target domain by leveraging knowledge from auxiliary domains, has been demonstrated to be effective in different applications such as text mining and sentiment analysis. In addition, in many real-world applications, auxiliary data are described from multiple perspectives and are usually carried by multiple sources. For example, to help classify videos on YouTube, which include three perspectives (image, voice and subtitles), one may borrow data from Flickr, Last.FM and Google News. Although any single instance in these domains covers only a part of the views available on YouTube, the pieces of information they carry may complement one another. If we can exploit these auxiliary domains in a collective manner and transfer the knowledge to the target domain, we can improve the target model from multiple perspectives. In this article, we formulate this problem as Transfer Learning with Multiple Views and Multiple Sources. As different sources may have different probability distributions, and different views may complement or be inconsistent with each other, merging all data in a simplistic manner will not give an optimal result. Thus, we propose a novel algorithm to leverage knowledge from different views and sources collaboratively, by letting different views from different sources complement each other through a co-training style framework while, at the same time, revising for the distribution differences between domains. We conduct empirical studies on several real-world datasets to show that the proposed approach can improve the classification accuracy by up to 8% against different kinds of state-of-the-art baselines. (An illustrative co-training-style sketch follows this entry.)
    No preview · Article · Aug 2014 · Statistical Analysis and Data Mining
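    A toy co-training-style sketch of the idea that sources covering different views can pseudo-label target data for one another; the synthetic data, confidence threshold and view split are hypothetical, and this is not the paper's algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, shift):
    # Synthetic data with two views: features [:5] are view 1, [5:] are view 2.
    y = rng.integers(0, 2, n)
    X = rng.normal(shift, 1.0, (n, 10)) + y[:, None]
    return X, y

Xs1, ys1 = make_domain(500, shift=0.2)    # source covering view 1
Xs2, ys2 = make_domain(500, shift=-0.2)   # source covering view 2
Xt, yt = make_domain(300, shift=0.0)      # unlabeled target (yt used only to evaluate)

clf1 = LogisticRegression(max_iter=1000).fit(Xs1[:, :5], ys1)
clf2 = LogisticRegression(max_iter=1000).fit(Xs2[:, 5:], ys2)

# Co-training-style exchange: each view's model pseudo-labels target examples
# it is confident about, and the other view's model is retrained with them.
for _ in range(3):
    p1 = clf1.predict_proba(Xt[:, :5])
    p2 = clf2.predict_proba(Xt[:, 5:])
    conf1, conf2 = p1.max(axis=1) > 0.9, p2.max(axis=1) > 0.9
    if conf1.any():
        clf2.fit(np.vstack([Xs2[:, 5:], Xt[conf1][:, 5:]]),
                 np.concatenate([ys2, p1[conf1].argmax(axis=1)]))
    if conf2.any():
        clf1.fit(np.vstack([Xs1[:, :5], Xt[conf2][:, :5]]),
                 np.concatenate([ys1, p2[conf2].argmax(axis=1)]))

combined = clf1.predict_proba(Xt[:, :5]) + clf2.predict_proba(Xt[:, 5:])
print("target accuracy:", round(float((combined.argmax(axis=1) == yt).mean()), 3))
```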
  • Article: OceanST

    No preview · Article · Aug 2014
  • ABSTRACT: This paper looks into the outer synchronization of two networks with nonlinear dynamics, time delays and different topologies. Given two general dynamic network models, a set of synchronization criteria based on Lyapunov stability analysis is proposed and the theoretical analysis is presented. In addition, to broaden the generality of the proposed criteria, two synchronization schemes, for linearly coupled networks and for networks with similar topologies, are derived. Unlike existing solutions obtained from specific studies, the proposed outer synchronization scheme is applicable to a class of complex networks with nonlinear and time-delay characteristics, and the synchronization criteria are not restricted by the symmetry of the network topological structures. This greatly improves the applicability and generality of the controller in practical deployment. A numerical simulation study on a network with chaotic node dynamics is carried out, and the results validate and demonstrate the effectiveness of the suggested control mechanism.
    No preview · Conference Paper · Jul 2014
  • Hankz Hankui Zhuo · Qiang Yang
    ABSTRACT: Applying learning techniques to acquire action models is an area of intense research interest. Most previous work in this area has assumed that a significant amount of training data is available in the planning domain of interest. However, it is often difficult to acquire sufficient training data to ensure that the learnt action models are of high quality. In this paper, we explore a novel algorithm framework, called TRAMP, to learn action models with limited training data in a target domain by transferring as much of the available information from other domains (called source domains) as possible to help the learning task, assuming action models in source domains can be transferred to the target domain. TRAMP transfers knowledge from source domains by first building structure mappings between source and target domains, and then exploiting extra knowledge from Web search to bridge and transfer knowledge from the sources. Specifically, TRAMP first encodes training data with a set of propositions and formulates the transferred knowledge as a set of weighted formulas. After that, it learns action models for the target domain that best explain the set of propositions and the transferred knowledge. We empirically evaluate TRAMP in different settings to see its advantages and disadvantages in six planning domains, including four International Planning Competition (IPC) domains and two synthetic domains.
    No preview · Article · Jul 2014 · Artificial Intelligence
  • ABSTRACT: Hierarchical Task Network (HTN) planning is an effective yet knowledge-intensive problem-solving technique. It requires humans to encode knowledge in the form of methods and action models. Methods describe how to decompose tasks into subtasks and the preconditions under which those methods are applicable, whereas action models describe how actions change the world. Encoding such knowledge is a difficult and time-consuming process, even for domain experts. In this paper, we propose a new learning algorithm, called HTNLearn, to help acquire HTN methods and action models. HTNLearn receives as input a collection of plan traces with partially annotated intermediate state information, and a set of annotated tasks that specify the conditions before and after the tasks' completion. In addition, plan traces are annotated with potentially empty partial decomposition trees that record the processes of decomposing tasks into subtasks. HTNLearn outputs a collection of methods and action models. HTNLearn first encodes constraints about the methods and action models as a constraint satisfaction problem, and then solves the problem using a weighted MAX-SAT solver. HTNLearn can learn methods and action models simultaneously from partially observed plan traces (i.e., plan traces where the intermediate states are partially observable). We test HTNLearn in several HTN domains. The experimental results show that HTNLearn is both effective and efficient.
    No preview · Article · Jul 2014 · Artificial Intelligence
  • ABSTRACT: Transfer learning is an established and effective technique for leveraging rich labeled data from a source domain to build an accurate classifier for a target domain. The basic assumption is that the input domains may share a certain knowledge structure, which can be encoded into common latent factors and extracted by preserving important properties of the original data, e.g., statistical properties and geometric structure. In this paper, we show that different properties of the input data can be complementary to each other, and that exploring them simultaneously can make the learning model robust to domain differences. We propose a general framework, referred to as Graph Co-Regularized Transfer Learning (GTL), into which various matrix factorization models can be incorporated. Specifically, GTL aims to extract common latent factors for knowledge transfer by preserving the statistical property across domains and, simultaneously, to refine the latent factors to alleviate negative transfer by preserving the geometric structure in each domain. Based on this framework, we propose two novel methods using NMF and NMTF, respectively. Extensive experiments verify that GTL can significantly outperform state-of-the-art learning methods on several public text and image datasets. (An illustrative shared-latent-factor sketch follows this entry.)
    Preview · Article · Jul 2014 · IEEE Transactions on Knowledge and Data Engineering
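    A minimal sketch of the underlying "common latent factors" idea using plain NMF from scikit-learn: source and target data are factorized jointly so a classifier trained on source latent representations can be applied to the target. The graph co-regularization that defines GTL is omitted, and the synthetic data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_docs(n, topic_shift):
    # Hypothetical nonnegative bag-of-words style features.
    y = rng.integers(0, 2, n)
    X = rng.gamma(1.0, 1.0, (n, 50))
    X[:, :10] += 3.0 * y[:, None] + topic_shift   # class-informative "words"
    return X, y

Xs, ys = make_docs(400, topic_shift=0.0)   # labeled source domain
Xt, yt = make_docs(200, topic_shift=0.5)   # target domain (labels only for evaluation)

# Factorize source and target jointly so both are expressed over the same
# latent factors; GTL additionally co-regularizes the factors with a graph
# built in each domain, which this sketch leaves out.
nmf = NMF(n_components=8, init="nndsvda", max_iter=500, random_state=0)
H_all = nmf.fit_transform(np.vstack([Xs, Xt]))
Hs, Ht = H_all[:len(Xs)], H_all[len(Xs):]

clf = LogisticRegression(max_iter=1000).fit(Hs, ys)
print("target accuracy:", round(clf.score(Ht, yt), 3))
```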
  • ABSTRACT: Advanced satellite tracking technologies have collected huge amounts of wild bird migration data. Biologists use these data to understand dynamic migration patterns, study correlations between habitats, and predict global spreading trends of avian influenza. The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs. H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN). The learning algorithm HELEN-p then predicts potential H5N1 outbreaks at habitats. On a migration dataset obtained from a real satellite bird-tracking system, this prediction method is more accurate than traditional methods. Empirical analysis shows that H5N1 spreads in a pattern of high-weight closed cliques and frequent cliques.
    Full-text · Article · Jul 2014 · Intelligent Systems, IEEE
  • Xinli Fang · Qiang Yang · Wenjun Yan
    ABSTRACT: This paper explores cascading failure behavior in the new context of directed complex networks by introducing the concept of neighbor links. Two novel network attack strategies, the minimum in-degree attack strategy (MIAS) and the maximum out-degree attack strategy (MOAS), are proposed, and their impacts are assessed through simulation experiments using the random attack strategy (RAS) as a comparison benchmark for a range of network scenarios (a directed random network, a directed scale-free network and the IEEE 118-bus network model). The numerical results show that cascading failure propagation in directed complex networks is highly dependent on the attack strategy and the directionality of the network, as well as other network configurations. (An illustrative attack-simulation sketch follows this entry.)
    No preview · Article · Jun 2014 · Safety Science
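    A simplified sketch comparing the attack strategies named above on a synthetic directed network with networkx; it tracks only the largest weakly connected component rather than the paper's load-redistribution cascading-failure model, and the graph parameters are hypothetical.

```python
import random
import networkx as nx

def attack(G, strategy, n_remove=20, seed=0):
    # Remove nodes according to the chosen strategy and track the size of the
    # largest weakly connected component as a crude robustness indicator.
    G = G.copy()
    rng = random.Random(seed)
    sizes = []
    for _ in range(n_remove):
        if strategy == "MIAS":        # minimum in-degree attack strategy
            node = min(G.nodes, key=lambda v: G.in_degree(v))
        elif strategy == "MOAS":      # maximum out-degree attack strategy
            node = max(G.nodes, key=lambda v: G.out_degree(v))
        else:                         # RAS: random attack strategy
            node = rng.choice(list(G.nodes))
        G.remove_node(node)
        sizes.append(max(len(c) for c in nx.weakly_connected_components(G)))
    return sizes

G = nx.gnp_random_graph(200, 0.03, directed=True, seed=1)
for strategy in ("MIAS", "MOAS", "RAS"):
    print(strategy, "largest component after 20 removals:", attack(G, strategy)[-1])
```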
  • ABSTRACT: Latent Dirichlet allocation (LDA) is a popular topic modeling technique in academia, but less so in industry, especially in large-scale applications involving search engines and online advertising systems. A main underlying reason is that the topic models used have been too small in scale to be useful; for example, some of the largest LDA models reported in the literature have up to 10^3 topics, which struggle to cover long-tail semantic word sets. In this paper, we show that the number of topics is a key factor that can significantly boost the utility of a topic-modeling system. In particular, we show that a "big" LDA model with at least 10^5 topics inferred from 10^9 search queries can achieve significant improvements on an industrial search engine and an online advertising system, both of which serve hundreds of millions of users. We develop a novel distributed system called Peacock to learn big LDA models from big data. The main features of Peacock include a hierarchical parallel architecture, real-time prediction, and topic de-duplication. We empirically demonstrate that the Peacock system is capable of providing significant benefits via highly scalable LDA topic models for several industrial applications. (An illustrative small-scale LDA sketch follows this entry.)
    Full-text · Article · May 2014
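    For orientation only, a tiny scikit-learn LDA example on a handful of made-up documents; it says nothing about Peacock's distributed, hierarchically parallel training of ~10^5-topic models over ~10^9 queries.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Made-up stand-in corpus (real deployments use billions of search queries).
docs = [
    "cheap flight tickets to beijing",
    "flight deals and hotel booking",
    "machine learning topic model inference",
    "distributed topic model training",
    "hotel booking discount coupon",
    "latent dirichlet allocation for learning topics",
]
vec = CountVectorizer()
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-4:][::-1]
    print(f"topic {k}:", [vocab[i] for i in top])
```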

Publication Stats

15k Citations
353.68 Total Impact Points

Institutions

  • 2011-2015
    • Fourth Military Medical University
      • Department of Cardiology
Xi’an, Shaanxi, China
    • Zhejiang Gongshang University
Hangzhou, Zhejiang, China
  • 2010-2015
    • Zhejiang University
      • College of Electrical Engineering
      • College of Computer Science and Technology
      Hangzhou, Zhejiang, China
    • Stanford University
      Palo Alto, California, United States
  • 1970-2015
    • The Hong Kong University of Science and Technology
      • Department of Computer Science and Engineering
Kowloon, Hong Kong
  • 2013
    • Hong Kong Institute of Technology
      Hong Kong, Hong Kong
  • 2009-2013
    • Imperial College London
      • Department of Electrical and Electronic Engineering
London, England, United Kingdom
  • 2012
    • Microsoft
Redmond, Washington, United States
    • The University of Hong Kong
      Hong Kong, Hong Kong
  • 2007
    • Sun Yat-Sen University
Guangzhou, Guangdong, China
  • 2006
    • Peking University
Beijing, China
  • 1996-2006
    • Simon Fraser University
      • School of Computing Science
      Burnaby, British Columbia, Canada
  • 2005
    • University of Washington Seattle
      • Department of Genome Sciences
      Seattle, Washington, United States
  • 2004
    • University of Vermont
      Burlington, Vermont, United States
    • The University of Western Ontario
      • Department of Computer Science
      London, Ontario, Canada
  • 2001
    • Shanghai Jiao Tong University
      • Department of Computer Science and Engineering
Shanghai, China
  • 2000
    • Tsinghua University
      • Department of Computer Science and Technology
Beijing, China
  • 1990-2000
    • University of Waterloo
      Waterloo, Ontario, Canada
  • 1989
    • University of Maryland, College Park
      • Department of Computer Science
College Park, Maryland, United States