Qiang Yang

Zhejiang University, Hangzhou, Zhejiang, China

Publications (474) · 345.15 Total Impact

  • Zhejing Bao · Qin Zhou · Zhihui Yang · Qiang Yang · Lizhong Xu · Ting Wu
    ABSTRACT: In Part II of this two-part paper, an improved particle swarm optimization (IPSO) algorithm is proposed for solving the microgrid (MG) day-ahead cooling and electricity coordinated scheduling problem. Two significant improvements are made over the conventional PSO algorithm. First, a mandatory correction is applied after each particle's position update to ensure that the complex coupled constraints among the components of a particle are satisfied, which enhances the algorithm's performance on problems with complex constraints. Second, the solution denoted by a particle is assumed to occupy a neighboring area whose size decreases from a given value to nearly zero as the iteration count approaches its limit, which helps avoid premature convergence. For an MG composed of combined cooling, heating and power (CCHP) units, PV panels, wind turbines, and storage batteries, a range of case studies under different MG operating modes are carried out through simulations. The simulation results demonstrate that the proposed multi-time-scale, multi-energy-type coordinated MG scheduling solution achieves the co-optimization of multi-energy-type supply to meet customers' cooling and electricity demands, and makes the MG controllable as seen from the connected main grid.
    IEEE Transactions on Power Systems 09/2015; 30(5):2267-2277. DOI:10.1109/TPWRS.2014.2367124 · 2.81 Impact Factor
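    A hedged illustration of the two IPSO ingredients described above, i.e. a mandatory repair step applied after every position update and a neighbourhood whose radius shrinks to zero over the iterations. The cost function, box limits and "budget" constraint below are invented stand-ins, not the paper's MG scheduling model:

      import numpy as np

      # Toy stand-in for the MG cost function; the real objective in the paper
      # is the day-ahead operation cost of the microgrid (not reproduced here).
      def cost(x):
          return np.sum((x - 3.0) ** 2)

      def repair(x, lo=0.0, hi=5.0, budget=8.0):
          """Mandatory correction: clip to box limits and rescale so that a
          coupled 'sum <= budget' constraint holds (a placeholder for the
          paper's coupled unit constraints)."""
          x = np.clip(x, lo, hi)
          if x.sum() > budget:
              x *= budget / x.sum()
          return x

      rng = np.random.default_rng(0)
      dim, n_particles, n_iter = 4, 20, 200
      pos = rng.uniform(0, 5, (n_particles, dim))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_val = np.array([cost(p) for p in pos])
      gbest = pbest[pbest_val.argmin()].copy()

      for it in range(n_iter):
          radius = 0.5 * (1.0 - it / n_iter)       # neighbourhood shrinks to ~0
          w, c1, c2 = 0.7, 1.5, 1.5
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          pos = np.array([repair(p) for p in pos])  # mandatory correction
          # also evaluate a random point of each particle's shrinking neighbourhood
          trial = np.array([repair(p + rng.uniform(-radius, radius, dim)) for p in pos])
          for i in range(n_particles):
              cand = trial[i] if cost(trial[i]) < cost(pos[i]) else pos[i]
              v = cost(cand)
              if v < pbest_val[i]:
                  pbest_val[i], pbest[i] = v, cand.copy()
          gbest = pbest[pbest_val.argmin()].copy()

      print("best cost:", pbest_val.min(), "at", gbest)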
  • Zhejing Bao · Qin Zhou · Zhihui Yang · Qiang Yang · Lizhong Xu · Ting Wu
    ABSTRACT: For optimal microgrid (MG) operation, one challenge is that supplying cooling and electricity is a coupled co-optimization problem when combined cooling, heating and power (CCHP) units and ice-storage air-conditioners are considered. Another challenge is that the inherent randomness of renewable energy within the MG should be accommodated by the MG itself. In Part I of this two-part paper, the partial-load performance of CCHP units and the performance of ice-storage air-conditioners are modeled, and cooling and electricity coordinated MG day-ahead scheduling and real-time dispatching models are established. In the day-ahead scheduling model, the uncertainty of wind and solar power is represented by multiple scenarios and the objective is to minimize the expected MG operation cost. In the real-time dispatching model, dispatch schemes on different time scales are applied to cooling and electricity respectively, to smooth out fluctuations in the renewable energy supply and to follow variations in cooling and electricity demands through fine dispatching of the components within the MG, such that the impact of the MG on the connected main grid is minimal. The proposed multi-time-scale cooling and electricity coordinated schedule achieves an integrated optimization of multi-energy-type supply and makes the MG controllable as seen from the main grid.
    IEEE Transactions on Power Systems 09/2015; 30(5):2257-2266. DOI:10.1109/TPWRS.2014.2367127 · 2.81 Impact Factor
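    The day-ahead model above minimises the expected operation cost over renewable-output scenarios. The toy snippet below only illustrates that scenario-weighted objective; the dispatch variables, component models and constraints of the paper are collapsed into a single made-up CCHP set-point and penalty term:

      import numpy as np

      # Hypothetical scenarios of renewable output (kW) and their probabilities.
      scenarios = np.array([[120., 80., 150.],    # scenario 1 over 3 periods
                            [ 60., 40.,  90.],    # scenario 2
                            [100., 70., 130.]])   # scenario 3
      probs = np.array([0.5, 0.2, 0.3])
      demand = np.array([200., 180., 220.])       # assumed load per period
      fuel_price = 0.8                            # assumed $/kWh for CCHP output

      def expected_cost(cchp_output):
          """Expected cost over scenarios: CCHP fuel cost plus a penalty for any
          unmet demand after renewables (a stand-in for the paper's cost terms)."""
          total = 0.0
          for p, renew in zip(probs, scenarios):
              shortfall = np.maximum(demand - renew - cchp_output, 0.0)
              total += p * (fuel_price * cchp_output.sum() + 5.0 * shortfall.sum())
          return total

      # Crude grid search over a single scalar CCHP set-point per period.
      best = min((expected_cost(np.full(3, g)), g) for g in np.arange(0, 201, 5))
      print("expected cost %.1f at uniform CCHP output %d kW" % best)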
  • Proceedings of VLDB; 08/2015
  •
    ABSTRACT: Tumor necrosis factor-α (TNF-α) antagonism alleviates MI/R injury. However, the mechanisms by which the downstream mediators of TNF-α change after acute antagonism during MI/R remain unclear. Adiponectin (APN) exerts anti-ischemic effects, but it is downregulated during MI/R. This study was conducted to investigate whether TNF-α is responsible for the decrease of APN, and whether antagonizing TNF-α affects MI/R injury by increasing APN. Male adult wild-type (WT), APN knockout (APN KO) mice, and those with cardiac knockdowns of APN receptors via siRNA injection were subjected to 30 min of MI followed by reperfusion. The TNF-α antagonist etanercept or globular domain of APN (gAD) was injected 10 min before reperfusion. Etanercept ameliorated MI/R injury in WT mice as evidenced by improved cardiac function, reduced infarct size, and cardiomyocyte apoptosis. APN concentrations were augmented in response to etanercept, followed by an increase in AMP-activated protein kinase phosphorylation. Etanercept still increased cardiac function and reduced infarct size and apoptosis in both APN KO and APN receptors knockdown mice. However, its potential was significantly weakened in these mice compared to the WT mice. TNF-α is responsible for the decrease in APN during MI/R. The cardioprotective effects of TNF-α neutralization are partially due to the upregulation of APN. The results provide more insight into the TNF-α-mediated signaling effects during MI/R, and support the need for clinical trials in order to validate the efficacy of acute TNF-α antagonism in the treatment of MI/R injury.
    AJP Heart and Circulatory Physiology 04/2015; 308(12):ajpheart.00346.2014. DOI:10.1152/ajpheart.00346.2014 · 3.84 Impact Factor
  • Youjian Zhang · Qiang Yang · Wenjun Yan
    ABSTRACT: This paper addresses the network-based leader-following consensus problem for second-order multi-agent systems with nonlinear dynamics. Based on Lyapunov-Krasovskii theory, a new delay-dependent sufficient condition in terms of linear matrix inequalities (LMIs) is presented to guarantee consensus of the multi-agent system, and a sufficient condition for network-based controller design is proposed to ensure that the followers reach consensus with the leader. The effectiveness and applicability of the suggested solution are evaluated and verified through the simulation of two numerical examples.
    Transactions of the Institute of Measurement and Control 04/2015; DOI:10.1177/0142331215579447 · 0.96 Impact Factor
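    Conditions of this kind are checked numerically as LMI feasibility problems. As a hedged, generic illustration (a plain Lyapunov LMI for a small linear system, not the delay-dependent condition derived in the paper), cvxpy with its bundled SDP solver can be used:

      import numpy as np
      import cvxpy as cp

      # A small stable test matrix standing in for the closed-loop dynamics.
      A = np.array([[0.0, 1.0],
                    [-2.0, -3.0]])
      n = A.shape[0]

      P = cp.Variable((n, n), symmetric=True)
      eps = 1e-6
      constraints = [P >> eps * np.eye(n),                  # P positive definite
                     A.T @ P + P @ A << -eps * np.eye(n)]   # Lyapunov inequality
      prob = cp.Problem(cp.Minimize(0), constraints)
      prob.solve()

      print("LMI status:", prob.status)   # 'optimal' means the LMIs are feasible
      if prob.status == "optimal":
          print("P =\n", P.value)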
  • Youjian Zhang · Peiran Li · Qiang Yang · Wenjun Yan
    ABSTRACT: In this paper, the synchronization problem is addressed in the context of Lur'e-type complex switched networks (CSNs) with coupling time-varying delay, in which every node is a Lur'e system. Based on Lyapunov–Krasovskii theory and the linear matrix inequality (LMI) technique, a delay-dependent synchronization criterion and a decentralized state-feedback dynamic controller for synchronization of CSNs are proposed. By choosing a common Lyapunov–Krasovskii functional and using the combined reciprocal convex technique, some previously ignored terms can be reconsidered and less conservative conditions can be obtained. In addition, by using an eigenvalue-decoupling method and convex optimization theory, high-dimension LMIs are decoupled into a set of low-dimension ones and the computational complexity of the criterion can be significantly reduced. The effectiveness and applicability of the suggested control solution are verified and assessed through the analysis of two numerical examples.
    Asian Journal of Control 09/2014; DOI:10.1002/asjc.980 · 1.56 Impact Factor
  •
    ABSTRACT: Background: Drosophila Dscam1 is a cell-surface protein that plays important roles in neural development and axon tiling of neurons. It is known that thousands of isoforms bind themselves through specific homophilic interactions, a process which provides the basis for cellular self-recognition. Detailed biochemical studies of specific isoforms strongly suggest that homophilic binding, i.e. the formation of homodimers by identical Dscam1 isomers, is of great importance for the self-avoidance of neurons. Due to experimental limitations, it is currently impossible to measure the homophilic binding affinities for all 19,000 potential isoforms. Results: Here we reconstructed the DNA sequences of an ancestral Dscam form (which likely existed approximately 40–50 million years ago) using a comparative genomic approach. On the basis of this sequence, we established a working model to predict the self-binding affinities of all isoforms in both the current and the ancestral genome, using machine-learning methods. Detailed computational analysis was performed to compare the self-binding affinities of all isoforms present in these two genomes. Our results revealed that 1) isoforms containing newly derived variable domains exhibit higher self-binding affinities than those with conserved domains, and 2) current isoforms display higher self-binding affinities than their counterparts in the ancient genome. As thousands of Dscam isoforms are needed for the self-avoidance of the neuron, we propose that an increase in self-binding affinity provides the basis for the successful evolution of the arthropod brain. Conclusions: Our data presented here provide an excellent model for future experimental studies of the binding behavior of Dscam isoforms. The results of our analysis indicate that evolution favored the rise of novel variable domains thanks to their higher self-binding affinities, rather than selection merely on the basis of simple expansion of isoform diversity, as this particular selection process would have established the powerful mechanisms required for neuronal self-avoidance. Thus, we reveal here a new molecular mechanism for the successful evolution of arthropod brains.
    BMC Evolutionary Biology 08/2014; 14(1):186. DOI:10.1186/s12862-014-0186-z · 3.37 Impact Factor
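    The affinity predictions rely on machine-learning models trained on sequence-derived features. A generic, hypothetical sketch of that kind of regression workflow, with random features and synthetic labels in place of real Dscam isoform data:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      # Pretend each row encodes one isoform's variable-domain sequence features
      # (e.g. k-mer counts or physico-chemical descriptors) and the target is a
      # measured homophilic binding affinity.
      X = rng.random((500, 40))
      y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(500)  # synthetic affinity

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
      print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
      # A model of this kind could then score isoforms from the reconstructed
      # ancestral genome for comparison with present-day isoforms.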
  • Bin Wu · Erheng Zhong · Ben Tan · Andrew Horner · Qiang Yang
    ABSTRACT: Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application which extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge of the proposed application is that users with bias interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from the crowdsourced short and noisy comments.
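    The temporal and personalized topic model itself is not reproduced here; as a rough baseline on the same data shape (tokenised crowdsourced comments grouped per shot), a plain gensim LDA run shows how shot-level tags can be read off topic-word distributions. Unlike the proposed model, this baseline ignores temporal dependencies, user interaction and user bias:

      from gensim import corpora, models

      # Hypothetical tokenized comments grouped by video shot.
      shots = [["goal", "amazing", "shot", "keeper"],
               ["funny", "cat", "lol", "cat"],
               ["music", "drop", "bass", "epic"],
               ["keeper", "save", "goal", "replay"]]

      dictionary = corpora.Dictionary(shots)
      corpus = [dictionary.doc2bow(tokens) for tokens in shots]
      lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                            passes=20, random_state=0)

      for i, bow in enumerate(corpus):
          topic, _ = max(lda.get_document_topics(bow), key=lambda t: t[1])
          tags = [w for w, _ in lda.show_topic(topic, topn=3)]
          print("shot", i, "candidate tags:", tags)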
  • Ying Wei · Yangqiu Song · Yi Zhen · Bo Liu · Qiang Yang
    ABSTRACT: Hashing has enjoyed a great success in large-scale similarity search. Recently, researchers have studied the multi-modal hashing to meet the need of similarity search across different types of media. However, most of the existing methods are applied to search across multi-views among which explicit bridge information is provided. Given a heterogeneous media search task, we observe that abundant multi-view data can be found on the Web which can serve as an auxiliary bridge. In this paper, we propose a Heterogeneous Translated Hashing (HTH) method with such auxiliary bridge incorporated not only to improve current multi-view search but also to enable similarity search across heterogeneous media which have no direct correspondence. HTH simultaneously learns hash functions embedding heterogeneous media into different Hamming spaces, and translators aligning these spaces. Unlike almost all existing methods that map heterogeneous data in a common Hamming space, mapping to different spaces provides more flexible and discriminative ability. We empirically verify the effectiveness and efficiency of our algorithm on two real world large datasets, one publicly available dataset of Flickr and the other MIRFLICKR-Yahoo Answers dataset.
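    HTH learns hash functions and translators jointly; the snippet below shows only a simplistic per-modality baseline (random-projection hashing into separate Hamming spaces) to make the "different Hamming spaces" idea concrete. The translator aligning the two spaces, which is the paper's contribution, is not implemented:

      import numpy as np

      rng = np.random.default_rng(0)
      d_img, d_txt, bits_img, bits_txt = 128, 300, 32, 16

      # Independent random-projection hash functions, one per modality.
      W_img = rng.standard_normal((d_img, bits_img))
      W_txt = rng.standard_normal((d_txt, bits_txt))

      def hash_codes(X, W):
          """Sign of a random projection -> binary codes (rows are items)."""
          return (X @ W > 0).astype(np.uint8)

      imgs = rng.standard_normal((5, d_img))    # fake image features
      txts = rng.standard_normal((7, d_txt))    # fake text features
      print(hash_codes(imgs, W_img).shape, hash_codes(txts, W_txt).shape)
      # HTH would additionally learn a translator mapping the 16-bit text space
      # into the 32-bit image space so that Hamming distances become comparable.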
  • Ben Tan · Erheng Zhong · Evan Wei Xiang · Qiang Yang
    ABSTRACT: Transfer learning, which aims to help learning tasks in a target domain by leveraging knowledge from auxiliary domains, has been demonstrated to be effective in different applications such as text mining, sentiment analysis, and so on. In addition, in many real-world applications, auxiliary data are described from multiple perspectives and are usually carried by multiple sources. For example, to help classify videos on YouTube, which include three perspectives: image, voice and subtitles, one may borrow data from Flickr, Last.fm and Google News. Although any single instance in these domains can only cover a part of the views available on YouTube, the pieces of information they carry may complement one another. If we can exploit these auxiliary domains in a collective manner and transfer the knowledge to the target domain, we can improve the building of the target model from multiple perspectives. In this article, we consider this transfer learning problem as Transfer Learning with Multiple Views and Multiple Sources. As different sources may have different probability distributions and different views may complement or be inconsistent with each other, merging all data in a simplistic manner will not give an optimal result. Thus, we propose a novel algorithm to leverage knowledge from different views and sources collaboratively, by letting different views from different sources complement each other through a co-training style framework while, at the same time, revising the distribution differences among domains. We conduct empirical studies on several real-world datasets to show that the proposed approach can improve the classification accuracy by up to 8% against different kinds of state-of-the-art baselines.
    Statistical Analysis and Data Mining 08/2014; 7(4). DOI:10.1002/sam.11226
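    A compact, hypothetical co-training loop on two synthetic views, in the spirit of the co-training-style framework mentioned above; the paper's handling of multiple sources and of distribution differences is omitted:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.datasets import make_classification

      # Two synthetic "views" of the same items; 30 labeled, the rest unlabeled.
      X, y = make_classification(n_samples=600, n_features=20, random_state=0)
      view1, view2 = X[:, :10], X[:, 10:]
      y_train = np.full(600, -1)
      y_train[:30] = y[:30]

      for _ in range(5):                                    # co-training rounds
          L = np.where(y_train >= 0)[0]
          U = np.where(y_train < 0)[0]
          if len(U) == 0:
              break
          c1 = LogisticRegression(max_iter=1000).fit(view1[L], y_train[L])
          c2 = LogisticRegression(max_iter=1000).fit(view2[L], y_train[L])
          # each view's classifier pseudo-labels its 20 most confident unlabeled items
          for clf, view in ((c1, view1), (c2, view2)):
              conf = clf.predict_proba(view[U]).max(axis=1)
              pick = U[np.argsort(conf)[-20:]]
              y_train[pick] = clf.predict(view[pick])

      final = LogisticRegression(max_iter=1000).fit(
          np.hstack([view1, view2])[y_train >= 0], y_train[y_train >= 0])
      print("accuracy on all items:", final.score(np.hstack([view1, view2]), y))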
  • Article: OceanST
  • Hankz Hankui Zhuo · Qiang Yang
    ABSTRACT: Applying learning techniques to acquire action models is an area of intense research interest. Most previous work in this area has assumed that there is a significant amount of training data available in a planning domain of interest. However, it is often difficult to acquire sufficient training data to ensure that the learnt action models are of high quality. In this paper, we seek to explore a novel algorithm framework, called TRAMP, to learn action models with limited training data in a target domain, via transferring as much of the available information from other domains (called source domains) as possible to help the learning task, assuming action models in source domains can be transferred to the target domain. TRAMP transfers knowledge from source domains by first building structure mappings between source and target domains, and then exploiting extra knowledge from Web search to bridge and transfer knowledge from the sources. Specifically, TRAMP first encodes training data with a set of propositions, and formulates the transferred knowledge as a set of weighted formulas. After that, it learns action models for the target domain to best explain the set of propositions and the transferred knowledge. We empirically evaluate TRAMP in different settings to see its advantages and disadvantages in six planning domains, including four International Planning Competition (IPC) domains and two synthetic domains.
    Artificial Intelligence 07/2014; 212(1). DOI:10.1016/j.artint.2014.03.004 · 3.37 Impact Factor
  •
    ABSTRACT: Hierarchical Task Network (HTN) planning is an effective yet knowledge-intensive problem-solving technique. It requires humans to encode knowledge in the form of methods and action models. Methods describe how to decompose tasks into subtasks and the preconditions under which those methods are applicable, whereas action models describe how actions change the world. Encoding such knowledge is a difficult and time-consuming process, even for domain experts. In this paper, we propose a new learning algorithm, called HTNLearn, to help acquire HTN methods and action models. HTNLearn receives as input a collection of plan traces with partially annotated intermediate state information, and a set of annotated tasks that specify the conditions before and after the tasks' completion. In addition, plan traces are annotated with potentially empty partial decomposition trees that record the processes of decomposing tasks into subtasks. The output of HTNLearn is a collection of methods and action models. HTNLearn first encodes constraints about the methods and action models as a constraint satisfaction problem, and then solves the problem using a weighted MAX-SAT solver. HTNLearn can learn methods and action models simultaneously from partially observed plan traces (i.e., plan traces where the intermediate states are partially observable). We test HTNLearn in several HTN domains. The experimental results show that our algorithm HTNLearn is both effective and efficient.
    Artificial Intelligence 07/2014; 212(1). DOI:10.1016/j.artint.2014.04.003 · 3.37 Impact Factor
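    HTNLearn delegates the hard work to a weighted MAX-SAT solver. The brute-force scorer below is only a toy illustration of weighted MAX-SAT itself, on made-up clauses rather than the method and action-model constraints of the paper:

      from itertools import product

      # Clauses are (weight, [literals]); a positive literal i means variable i
      # is True, a negative literal -i means variable i is False.
      clauses = [(10, [1, 2]),      # hard-ish constraint: x1 or x2
                 (10, [-1, 3]),     # x1 implies x3
                 (3,  [-2]),        # soft preference: not x2
                 (1,  [2, -3])]     # soft: x2 or not x3
      n_vars = 3

      def satisfied_weight(assign):
          """Total weight of clauses satisfied by a {var: bool} assignment."""
          total = 0
          for w, lits in clauses:
              if any(assign[abs(l)] == (l > 0) for l in lits):
                  total += w
          return total

      best = max((dict(zip(range(1, n_vars + 1), bits)) for bits in
                  product([False, True], repeat=n_vars)), key=satisfied_weight)
      print("best assignment:", best, "weight:", satisfied_weight(best))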
  •
    ABSTRACT: Advanced satellite tracking technologies have collected huge amounts of wild bird migration data. Biologists use these data to understand dynamic migration patterns, study correlations between habitats, and predict global spreading trends of avian influenza. The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs. H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN). The learning algorithm HELEN-p then predicts potential H5N1 outbreaks at habitats. This prediction method is more accurate than traditional methods when applied to a migration dataset obtained through a real satellite bird-tracking system. Empirical analysis shows that H5N1 spreads along high-weight closed cliques and frequent cliques.
    Intelligent Systems, IEEE 07/2014; 29(4):10-17. DOI:10.1109/MIS.2013.38 · 2.34 Impact Factor
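    The specific weighting and closedness conditions of HELEN are defined in the paper; the basic operation of enumerating cliques and ranking them by total edge weight can be sketched with networkx on a toy habitat graph with invented weights:

      import networkx as nx

      # Toy habitat graph: nodes are habitats, edge weights could encode how many
      # tracked birds migrated between them (values here are invented).
      G = nx.Graph()
      G.add_weighted_edges_from([("A", "B", 5), ("B", "C", 4), ("A", "C", 6),
                                 ("C", "D", 1), ("D", "E", 2), ("C", "E", 1)])

      def clique_weight(nodes):
          return sum(G[u][v]["weight"] for i, u in enumerate(nodes)
                     for v in nodes[i + 1:])

      cliques = [c for c in nx.find_cliques(G) if len(c) >= 3]
      for c in sorted(cliques, key=clique_weight, reverse=True):
          print(sorted(c), "total edge weight:", clique_weight(c))
      # HELEN additionally restricts attention to *closed* cliques above a weight
      # threshold, and HELEN-p uses them to predict outbreak habitats.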
  •
    ABSTRACT: Transfer learning is established as an effective technology to leverage rich labeled data from some source domain to build an accurate classifier for the target domain. The basic assumption is that the input domains may share certain knowledge structure, which can be encoded into common latent factors and extracted by preserving important property of original data, e.g., statistical property and geometric structure. In this paper, we show that different properties of input data can be complementary to each other and exploring them simultaneously can make the learning model robust to the domain difference. We propose a general framework, referred to as Graph Co-Regularized Transfer Learning (GTL), where various matrix factorization models can be incorporated. Specifically, GTL aims to extract common latent factors for knowledge transfer by preserving the statistical property across domains, and simultaneously, refine the latent factors to alleviate negative transfer by preserving the geometric structure in each domain. Based on the framework, we propose two novel methods using NMF and NMTF, respectively. Extensive experiments verify that GTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
    IEEE Transactions on Knowledge and Data Engineering 07/2014; 26(7):1805-1818. DOI:10.1109/TKDE.2013.97 · 2.07 Impact Factor
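    GTL builds on matrix-factorization models such as NMF with graph co-regularization across domains. The hedged snippet below shows only the plain NMF building block on synthetic nonnegative data; the shared latent factors refined by per-domain graph regularizers, which define GTL, are not implemented here:

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(0)
      # Fake nonnegative document-term matrices for a "source" and a "target" domain.
      X_src = rng.random((100, 50))
      X_tgt = rng.random((80, 50))

      nmf = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
      W_src = nmf.fit_transform(X_src)     # document factors (source)
      H = nmf.components_                  # word-topic factors, treated as shared
      W_tgt = nmf.transform(X_tgt)         # reuse H to factor the target domain

      print(W_src.shape, H.shape, W_tgt.shape)
      # GTL would additionally refine these factors so that nearby documents in
      # each domain's neighbourhood graph get similar low-dimensional representations.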
  • Xinli Fang · Qiang Yang · Wenjun Yan
    ABSTRACT: This paper explores cascading failure behavior in the new context of directed complex networks by introducing the concept of neighbor links. Two novel network attack strategies, i.e. the minimum in-degree attack strategy (MIAS) and the maximum out-degree attack strategy (MOAS), are proposed, and their impacts are assessed through simulation experiments using the random attack strategy (RAS) as the comparison benchmark for a range of network scenarios (a directed random network, a directed scale-free network and the IEEE 118 network model). The numerical results show that cascading failure propagation in directed complex networks is highly dependent on the attack strategies and the directionality of the network, as well as on other network configurations.
    Safety Science 06/2014; 65:1–9. DOI:10.1016/j.ssci.2013.12.015 · 1.83 Impact Factor
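    A hedged sketch of how the two target-selection rules could be instantiated on a random directed graph with networkx; the load-redistribution cascading-failure model of the paper is not reproduced, only the node-ranking rules and the random benchmark:

      import random
      import networkx as nx

      random.seed(0)
      G = nx.gnp_random_graph(50, 0.08, seed=0, directed=True)

      def targets(graph, strategy, k=5):
          """Pick k nodes to remove under MIAS, MOAS or the random benchmark RAS."""
          if strategy == "MIAS":                     # minimum in-degree first
              ranked = sorted(graph.nodes, key=lambda n: graph.in_degree(n))
          elif strategy == "MOAS":                   # maximum out-degree first
              ranked = sorted(graph.nodes, key=lambda n: -graph.out_degree(n))
          else:                                      # RAS
              ranked = random.sample(list(graph.nodes), k)
          return ranked[:k]

      for s in ("MIAS", "MOAS", "RAS"):
          H = G.copy()
          H.remove_nodes_from(targets(H, s))
          giant = max(nx.weakly_connected_components(H), key=len)
          print(s, "-> largest weakly connected component:", len(giant))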
  •
    ABSTRACT: Latent Dirichlet allocation (LDA) is a popular topic modeling technique in academia but less so in industry, especially in large-scale applications involving search engines and online advertisement systems. A main underlying reason is that the topic models used have been too small in scale to be useful; for example, some of the largest LDA models reported in the literature have up to $10^3$ topics, which can hardly cover the long-tail semantic word sets. In this paper, we show that the number of topics is a key factor that can significantly boost the utility of a topic-modeling system. In particular, we show that a "big" LDA model with at least $10^5$ topics inferred from $10^9$ search queries can achieve a significant improvement on an industrial search engine and an online advertising system, both of which serve hundreds of millions of users. We develop a novel distributed system called Peacock to learn big LDA models from big data. The main features of Peacock include hierarchical parallel architecture, real-time prediction, and topic de-duplication. We empirically demonstrate that the Peacock system is capable of providing significant benefits via highly scalable LDA topic models for several industrial applications.
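    One of the listed features, topic de-duplication, can be illustrated independently of the distributed architecture: flag pairs of topics whose word distributions are nearly identical. The distributions and the similarity threshold below are invented; Peacock's actual criterion and scale are not claimed here:

      import numpy as np

      rng = np.random.default_rng(0)
      vocab, n_topics = 1000, 6
      topics = rng.random((n_topics, vocab))
      topics[3] = topics[1] + 0.01 * rng.random(vocab)   # plant a near-duplicate
      topics /= topics.sum(axis=1, keepdims=True)        # rows are distributions

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      threshold = 0.99
      for i in range(n_topics):
          for j in range(i + 1, n_topics):
              if cosine(topics[i], topics[j]) > threshold:
                  print("topics %d and %d look like duplicates" % (i, j))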
  • Ruliang Dong · Qiang Yang · Wenjun Yan
    ABSTRACT: Island operation of a fraction of a power distribution network with distributed generators (DGs) is considered an efficient operational paradigm to enhance the security of power supply. This paper addresses the issue of island partitioning in distribution networks with the penetration of small-scale DGs and presents a two-stage algorithmic solution. In the suggested solution, a CSP-based method is first adopted to create a collection of network partitioning results with respect to individual DGs that meet the constraints imposed by the distribution network; this can be carried out in an offline fashion assuming fault occurrence at certain points. To identify the optimal partitioning solution, a heuristic simulated annealing algorithm (SAA) is then employed. Through such a two-stage approach, the optimal island partitioning can be obtained with acceptable time complexity in large-scale power distribution networks. The proposed partitioning solution is assessed through a set of numerical comparative studies with the IEEE 69-bus test model, using two available solutions as the comparison benchmark. The numerical results demonstrate that the proposed solution performs well in terms of guaranteeing the reliable supply of essential power loads as well as improving the utilization efficiency of distributed generation.
    2014 26th Chinese Control And Decision Conference (CCDC); 05/2014
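    A minimal simulated-annealing loop in the spirit of the second stage; the CSP-based candidate generation and the real distribution-network constraints are abstracted into an invented objective over a binary island-membership vector:

      import math
      import random

      random.seed(0)
      loads = [30, 10, 25, 40, 15, 20, 35, 5]    # invented bus loads (kW)
      dg_capacity = 90                           # invented islanded DG capacity

      def score(x):
          """Higher is better: supplied load, penalised if capacity is exceeded."""
          supplied = sum(l for l, keep in zip(loads, x) if keep)
          return supplied - 10 * max(0, supplied - dg_capacity)

      x = [random.random() < 0.5 for _ in loads]
      best, best_score, T = x[:], score(x), 50.0
      for step in range(2000):
          cand = x[:]
          i = random.randrange(len(cand))
          cand[i] = not cand[i]                  # flip one bus in/out of the island
          delta = score(cand) - score(x)
          if delta >= 0 or random.random() < math.exp(delta / T):
              x = cand
          if score(x) > best_score:
              best, best_score = x[:], score(x)
          T *= 0.995                             # cooling schedule

      print("buses kept in island:", [i for i, keep in enumerate(best) if keep],
            "objective:", best_score)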
  •
    ABSTRACT: Friendship prediction is an important task in social network analysis (SNA). It can help users identify friends and improve their level of activity. Most previous approaches predict users' friendship based on their historical records, such as their existing friendship, social interactions, etc. However, in reality, most users have limited friends in a single network, and the data can be very sparse. The sparsity problem causes existing methods to overfit the rare observations and suffer from serious performance degradation. This is particularly true when a new social network just starts to form. We observe that many of today's social networks are composite in nature, where people are often engaged in multiple networks. In addition, users' friendships are always correlated, for example, they are both friends on Facebook and Google+. Thus, by considering those overlapping users as the bridge, the friendship knowledge in other networks can help predict their friendships in the current network. This can be achieved by exploiting the knowledge in different networks in a collective manner. However, as each individual network has its own properties that can be incompatible and inconsistent with other networks, the naive merging of all networks into a single one may not work well. The proposed solution is to extract the common behaviors between different networks via a hierarchical Bayesian model. It captures the common knowledge across networks, while avoiding negative impacts due to network differences. Empirical studies demonstrate that the proposed approach improves the mean average precision of friendship prediction over state-of-the-art baselines on nine real-world social networking datasets significantly.
  • Erheng Zhong · Wei Fan · Qiang Yang
    ABSTRACT: Accurate prediction of user behaviors is important for many social media applications, including social marketing, personalization, and recommendation. A major challenge lies in that although many previous works model user behavior from only historical behavior logs, the available user behavior data or interactions between users and items in a given social network are usually very limited and sparse (e.g., ⩾ 99.9% empty), which makes models overfit the rare observations and fail to provide accurate predictions. We observe that many people are members of several social networks at the same time, such as Facebook, Twitter, and Tencent’s QQ. Importantly, users’ behaviors and interests in different networks influence one another. This provides an opportunity to leverage the knowledge of user behaviors in different networks, by considering the overlapping users in different networks as bridges, in order to alleviate the data sparsity problem and enhance the predictive performance of user behavior modeling. Combining different networks “simply and naively” does not work well. In this article, we formulate the problem of modeling multiple networks as “adaptive composite transfer” and propose a framework called ComSoc. ComSoc first selects the most suitable networks inside a composite social network via a hierarchical Bayesian model, parameterized for individual users. It then builds topic models for user behavior prediction using both the relationships in the selected networks and related behavior data. With different relational regularization, we introduce different implementations, corresponding to different ways to transfer knowledge from composite social relations. To handle big data, we have implemented the algorithm using Map/Reduce. We demonstrate that the proposed composite network-based user behavior models significantly improve the predictive accuracy over a number of existing approaches on several real-world applications, including a very large social networking dataset from Tencent Inc.
    ACM Transactions on Knowledge Discovery from Data 02/2014; 8(1). DOI:10.1145/2556613 · 0.93 Impact Factor

Publication Stats

13k Citations
345.15 Total Impact Points


  • 2010–2014
    • Zhejiang University
      • • College of Electrical Engineering
      • • College of Computer Science and Technology
      Hangzhou, Zhejiang, China
    • Stanford University
      Palo Alto, California, United States
    • Pennsylvania State University
      University Park, Pennsylvania, United States
    • IBM
      Armonk, New York, United States
  • 1970–2014
    • The Hong Kong University of Science and Technology
      • • Department of Computer Science and Engineering
      • • Applied Genomics Center
      Clear Water Bay, Kowloon, Hong Kong
  • 2013
    • Hong Kong Institute of Technology
      Hong Kong, Hong Kong
  • 2009–2013
    • Imperial College London
      • Department of Electrical and Electronic Engineering
      London, England, United Kingdom
  • 2012
    • Microsoft
      Redmond, Washington, United States
    • The University of Hong Kong
      Hong Kong, Hong Kong
  • 2011
    • Fourth Military Medical University
      Xi’an, Shaanxi, China
    • Zhejiang Gongshang University
      Hangzhou, Zhejiang, China
  • 2007
    • Sun Yat-Sen University
      Guangzhou, Guangdong, China
  • 2006
    • Peking University
      • School of Mathematical Sciences
      Beijing, China
  • 1996–2006
    • Simon Fraser University
      • School of Computing Science
      Burnaby, British Columbia, Canada
  • 2004
    • University of Vermont
      Burlington, Vermont, United States
  • 2001
    • Shanghai Jiao Tong University
      • Department of Computer Science and Engineering
      Shanghai, China
  • 2000
    • Tsinghua University
      • Department of Computer Science and Technology
      Beijing, China
  • 1990–2000
    • University of Waterloo
      Waterloo, Ontario, Canada
  • 1989
    • University of Maryland, College Park
      • Department of Computer Science
      College Park, Maryland, United States