Qiang Yang

Zhejiang University, Hangzhou, Zhejiang, China

Publications (467) · 342.3 Total Impact Points

  • ABSTRACT: Tumor necrosis factor-α (TNF-α) antagonism alleviates myocardial ischemia/reperfusion (MI/R) injury. However, the mechanisms by which the downstream mediators of TNF-α change after acute antagonism during MI/R remain unclear. Adiponectin (APN) exerts anti-ischemic effects, but it is downregulated during MI/R. This study was conducted to investigate whether TNF-α is responsible for the decrease of APN, and whether antagonizing TNF-α affects MI/R injury by increasing APN. Male adult wild-type (WT) mice, APN knockout (APN KO) mice, and mice with cardiac knockdown of APN receptors via siRNA injection were subjected to 30 min of MI followed by reperfusion. The TNF-α antagonist etanercept or the globular domain of APN (gAD) was injected 10 min before reperfusion. Etanercept ameliorated MI/R injury in WT mice, as evidenced by improved cardiac function, reduced infarct size, and reduced cardiomyocyte apoptosis. APN concentrations were augmented in response to etanercept, followed by an increase in AMP-activated protein kinase phosphorylation. Etanercept still improved cardiac function and reduced infarct size and apoptosis in both APN KO and APN receptor knockdown mice, but its effect was significantly weakened in these mice compared to WT mice. TNF-α is thus responsible for the decrease in APN during MI/R, and the cardioprotective effects of TNF-α neutralization are partially due to the upregulation of APN. The results provide more insight into TNF-α-mediated signaling during MI/R and support the need for clinical trials to validate the efficacy of acute TNF-α antagonism in the treatment of MI/R injury.
    AJP Heart and Circulatory Physiology 04/2015; DOI:10.1152/ajpheart.00346.2014 · 4.01 Impact Factor
  • ABSTRACT: In this paper, the synchronization problem is addressed for Lur'e-type complex switched networks (CSNs) with coupling time-varying delay, in which every node is a Lur'e system. Based on Lyapunov–Krasovskii theory and the linear matrix inequality (LMI) technique, a delay-dependent synchronization criterion and a decentralized state-feedback dynamic controller for the synchronization of CSNs are proposed. By choosing a common Lyapunov–Krasovskii functional and using the combined reciprocal convex technique, some previously ignored terms can be reconsidered and less conservative conditions can be obtained. In addition, by using an eigenvalue-decoupling method and convex optimization theory, high-dimension LMIs are decoupled into a set of low-dimension ones, and the computational complexity of the criterion can be significantly reduced. The effectiveness and applicability of the suggested control solution are verified and assessed through the analysis of two numerical examples.
    Asian Journal of Control 09/2014; DOI:10.1002/asjc.980 · 1.41 Impact Factor
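    A minimal sketch of the LMI machinery underlying such criteria, assuming a linearized, delay-free error system ė = Ae (the paper's delay-dependent criterion is considerably more involved); the matrix A below is a toy placeholder:

    ```python
    import cvxpy as cp
    import numpy as np

    A = np.array([[-2.0, 1.0],
                  [0.5, -3.0]])   # hypothetical error-system matrix

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-6
    constraints = [P >> eps * np.eye(2),                 # P positive definite
                   A.T @ P + P @ A << -eps * np.eye(2)]  # Lyapunov inequality
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print("LMI feasible, error system stable:", prob.status == cp.OPTIMAL)
    ```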
  • ABSTRACT: Hashing has enjoyed great success in large-scale similarity search. Recently, researchers have studied multi-modal hashing to meet the need for similarity search across different types of media. However, most existing methods only support search across multiple views for which explicit bridge information is provided. Given a heterogeneous media search task, we observe that abundant multi-view data can be found on the Web and can serve as an auxiliary bridge. In this paper, we propose a Heterogeneous Translated Hashing (HTH) method that incorporates such an auxiliary bridge, not only to improve current multi-view search but also to enable similarity search across heterogeneous media that have no direct correspondence. HTH simultaneously learns hash functions embedding heterogeneous media into different Hamming spaces, and translators aligning these spaces. Unlike almost all existing methods, which map heterogeneous data into a common Hamming space, mapping to different spaces provides greater flexibility and discriminative power. We empirically verify the effectiveness and efficiency of our algorithm on two large real-world datasets: a publicly available Flickr dataset and a MIRFLICKR-Yahoo Answers dataset.
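    A minimal sketch of the two components HTH learns, namely per-modality hash functions and a translator between Hamming spaces; the random projections and dimensions below are illustrative stand-ins for the learned parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_img, d_txt, k_img, k_txt = 512, 300, 32, 24

    W_img = rng.normal(size=(d_img, k_img))  # image-side hash projection
    W_txt = rng.normal(size=(d_txt, k_txt))  # text-side hash projection
    T = rng.normal(size=(k_img, k_txt))      # translator aligning the two spaces

    def hash_codes(X, W):
        return np.sign(X @ W)                # +/-1 codes in a Hamming space

    img = rng.normal(size=(5, d_img))        # 5 image queries
    txt = rng.normal(size=(7, d_txt))        # 7 text items
    h_img, h_txt = hash_codes(img, W_img), hash_codes(txt, W_txt)

    # Translate image codes into the text-side Hamming space, then rank by the
    # inner product of +/-1 codes (equivalent to ranking by Hamming distance).
    translated = np.sign(h_img @ T)
    scores = translated @ h_txt.T
    print(scores.argmax(axis=1))             # nearest text item per image query
    ```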
  • ABSTRACT: Time-sync video tagging aims to automatically generate tags for each video shot. It can improve the user's experience in previewing a video's timeline structure, compared to traditional schemes that tag an entire video clip. In this paper, we propose a new application that extracts time-sync video tags by automatically exploiting crowdsourced comments from video websites such as Nico Nico Douga, where videos are commented on by online crowd users in a time-sync manner. The challenge in the proposed application is that biased users interact with one another frequently and bring noise into the data, while the comments are too sparse to compensate for the noise. Previous techniques are unable to handle this task well, as they consider video semantics independently, which may overfit the sparse comments in each shot and thus fail to provide accurate modeling. To resolve these issues, we propose a novel temporal and personalized topic model that jointly considers temporal dependencies between video semantics, users' interaction in commenting, and users' preferences as prior knowledge. Our proposed model shares knowledge across video shots via users to enrich the short comments, and peels off user interaction and user bias to solve the noisy-comment problem. Log-likelihood analyses and user studies on large datasets show that the proposed model outperforms several state-of-the-art baselines in video tagging quality. Case studies also demonstrate our model's capability of extracting tags from crowdsourced short and noisy comments.
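    The time-sync setup can be illustrated with a toy baseline: bucket comments into shots by timestamp and score candidate tags per shot. The tf-idf scoring here is only a stand-in for the paper's temporal and personalized topic model:

    ```python
    import math
    from collections import Counter, defaultdict

    comments = [  # (timestamp in seconds, tokenized comment) -- toy data
        (3, ["epic", "intro"]), (5, ["epic", "music"]),
        (34, ["plot", "twist"]), (36, ["twist", "wow"]),
    ]
    shot_len = 30  # seconds per shot
    shots = defaultdict(list)
    for t, toks in comments:
        shots[t // shot_len].extend(toks)

    df = Counter()  # shot-level document frequency of each word
    for toks in shots.values():
        df.update(set(toks))

    for shot, toks in sorted(shots.items()):
        tf = Counter(toks)
        score = {w: c * math.log(len(shots) / df[w]) for w, c in tf.items()}
        tags = sorted(score, key=score.get, reverse=True)[:2]
        print(f"shot {shot}: {tags}")
    ```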
  • ABSTRACT: Transfer learning, which aims to help learning tasks in a target domain by leveraging knowledge from auxiliary domains, has been demonstrated to be effective in applications such as text mining and sentiment analysis. In many real-world applications, auxiliary data are described from multiple perspectives and usually carried by multiple sources. For example, to help classify videos on YouTube, which include three perspectives (image, voice, and subtitles), one may borrow data from Flickr, Last.FM, and Google News. Although any single instance in these domains can only cover a part of the views available on YouTube, the information carried by them may compensate for one another. If we can exploit these auxiliary domains in a collective manner and transfer the knowledge to the target domain, we can improve the target model from multiple perspectives. In this article, we formulate this problem as Transfer Learning with Multiple Views and Multiple Sources. As different sources may have different probability distributions, and different views may complement or be inconsistent with one another, merging all data in a simplistic manner will not give an optimal result. Thus, we propose a novel algorithm that leverages knowledge from different views and sources collaboratively, by letting different views from different sources complement each other through a co-training style framework while revising the distribution differences across domains. We conduct empirical studies on several real-world datasets and show that the proposed approach improves classification accuracy by up to 8% over various state-of-the-art baselines.
    Statistical Analysis and Data Mining 08/2014; 7(4). DOI:10.1002/sam.11226
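    A minimal co-training sketch in the spirit of the proposed framework, where two view-specific classifiers label confident unlabeled instances for each other; the synthetic views stand in for real sources, and the full algorithm additionally corrects distribution differences across sources:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, 40)                     # small labeled target set
    y_u = rng.integers(0, 2, 200)                  # hidden labels of unlabeled pool
    Xa = rng.normal(size=(40, 10)) + y[:, None]    # view A (e.g., image features)
    Xb = rng.normal(size=(40, 8)) + y[:, None]     # view B (e.g., text features)
    Ua = rng.normal(size=(200, 10)) + y_u[:, None]
    Ub = rng.normal(size=(200, 8)) + y_u[:, None]

    for _ in range(3):                             # a few co-training rounds
        ca = LogisticRegression().fit(Xa, y)
        cb = LogisticRegression().fit(Xb, y)
        pa, pb = ca.predict_proba(Ua), cb.predict_proba(Ub)
        # Each view nominates its most confident unlabeled instance.
        picks = sorted({int(pa.max(axis=1).argmax()),
                        int(pb.max(axis=1).argmax())}, reverse=True)
        for i in picks:                            # move picks to the labeled set
            Xa = np.vstack([Xa, Ua[i]]); Xb = np.vstack([Xb, Ub[i]])
            y = np.append(y, (pa[i] + pb[i]).argmax())
            Ua = np.delete(Ua, i, axis=0); Ub = np.delete(Ub, i, axis=0)

    print("labeled set grew to", len(y), "instances")
    ```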
  • Article: OceanST
  • Hankz Hankui Zhuo, Qiang Yang
    ABSTRACT: Applying learning techniques to acquire action models is an area of intense research interest. Most previous work in this area has assumed that a significant amount of training data is available in the planning domain of interest. However, it is often difficult to acquire sufficient training data to ensure that the learnt action models are of high quality. In this paper, we explore a novel algorithmic framework, called TRAMP, that learns action models with limited training data in a target domain by transferring as much of the available information from other domains (called source domains) as possible, assuming action models in the source domains can be transferred to the target domain. TRAMP transfers knowledge from source domains by first building structure mappings between source and target domains, and then exploiting extra knowledge from Web search to bridge and transfer knowledge from the sources. Specifically, TRAMP first encodes training data as a set of propositions and formulates the transferred knowledge as a set of weighted formulas. It then learns action models for the target domain that best explain the propositions and the transferred knowledge. We empirically evaluate TRAMP in different settings to see its advantages and disadvantages in six planning domains, including four International Planning Competition (IPC) domains and two synthetic domains.
    Artificial Intelligence 07/2014; 212. DOI:10.1016/j.artint.2014.03.004 · 2.71 Impact Factor
  • ABSTRACT: Hierarchical Task Network (HTN) planning is an effective yet knowledge-intensive problem-solving technique. It requires humans to encode knowledge in the form of methods and action models. Methods describe how to decompose tasks into subtasks and the preconditions under which those methods are applicable, whereas action models describe how actions change the world. Encoding such knowledge is a difficult and time-consuming process, even for domain experts. In this paper, we propose a new learning algorithm, called HTNLearn, to help acquire HTN methods and action models. HTNLearn receives as input a collection of plan traces with partially annotated intermediate state information, and a set of annotated tasks that specify the conditions before and after the tasks' completion. In addition, plan traces are annotated with potentially empty partial decomposition trees that record the process of decomposing tasks into subtasks. HTNLearn outputs a collection of methods and action models. HTNLearn first encodes constraints on the methods and action models as a constraint satisfaction problem, and then solves the problem using a weighted MAX-SAT solver. HTNLearn can learn methods and action models simultaneously from partially observed plan traces (i.e., plan traces whose intermediate states are partially observable). We test HTNLearn in several HTN domains. The experimental results show that HTNLearn is both effective and efficient.
    Artificial Intelligence 07/2014; 212. DOI:10.1016/j.artint.2014.04.003 · 2.71 Impact Factor
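    A minimal sketch of the solving step HTNLearn relies on, using the python-sat package's RC2 weighted MAX-SAT solver; the toy clauses below merely stand in for the method and action-model constraints:

    ```python
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    wcnf = WCNF()
    # Hypothetical variables: 1 = "action a1 has precondition p",
    #                         2 = "method m decomposes task t".
    wcnf.append([1, 2])           # hard constraint: at least one must hold
    wcnf.append([-1], weight=3)   # soft: traces suggest a1 lacks precondition p
    wcnf.append([2], weight=5)    # soft: traces support method m

    with RC2(wcnf) as solver:
        model = solver.compute()  # optimal assignment minimizing violated weight
        print("best assignment:", model, "cost:", solver.cost)
    ```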
  • ABSTRACT: Advanced satellite tracking technologies have collected huge amounts of wild bird migration data. Biologists use these data to understand dynamic migration patterns, study correlations between habitats, and predict global spreading trends of avian influenza. The research discussed here transforms the biological problem into a machine learning problem by converting wild bird migratory paths into graphs. H5N1 outbreak prediction is achieved by discovering weighted closed cliques from the graphs using the mining algorithm High-wEight cLosed cliquE miNing (HELEN). The learning algorithm HELEN-p then predicts potential H5N1 outbreaks at habitats. This prediction method is more accurate than traditional methods on a migration dataset obtained through a real satellite bird-tracking system. Empirical analysis shows that H5N1 spreads along high-weight closed cliques and frequent cliques.
    IEEE Intelligent Systems 07/2014; 29(4):10-17. DOI:10.1109/MIS.2013.38 · 1.92 Impact Factor
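    A minimal sketch of the clique-mining idea, assuming a toy habitat graph: enumerate maximal cliques with networkx and keep those whose total edge weight passes a threshold (HELEN mines weighted closed cliques with its own algorithm):

    ```python
    import itertools
    import networkx as nx

    G = nx.Graph()  # habitats as nodes, migration co-occurrence weights as edges
    G.add_weighted_edges_from([
        ("lake_A", "lake_B", 5), ("lake_B", "wetland_C", 4),
        ("lake_A", "wetland_C", 6), ("wetland_C", "delta_D", 1),
    ])

    min_weight = 10
    for clique in nx.find_cliques(G):             # maximal cliques
        if len(clique) < 3:
            continue
        w = sum(G[u][v]["weight"]
                for u, v in itertools.combinations(clique, 2))
        if w >= min_weight:
            print("high-weight clique:", sorted(clique), "weight:", w)
    ```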
  • ABSTRACT: Transfer learning is established as an effective technology for leveraging rich labeled data from a source domain to build an accurate classifier for the target domain. The basic assumption is that the input domains may share certain knowledge structure, which can be encoded into common latent factors and extracted by preserving important properties of the original data, e.g., statistical properties and geometric structure. In this paper, we show that different properties of the input data can be complementary to each other, and that exploring them simultaneously can make the learning model robust to domain differences. We propose a general framework, referred to as Graph Co-Regularized Transfer Learning (GTL), into which various matrix factorization models can be incorporated. Specifically, GTL aims to extract common latent factors for knowledge transfer by preserving the statistical property across domains and, simultaneously, to refine the latent factors to alleviate negative transfer by preserving the geometric structure in each domain. Based on the framework, we propose two novel methods using nonnegative matrix factorization (NMF) and nonnegative matrix tri-factorization (NMTF), respectively. Extensive experiments verify that GTL can significantly outperform state-of-the-art learning methods on several public text and image datasets.
    IEEE Transactions on Knowledge and Data Engineering 07/2014; 26(7):1805-1818. DOI:10.1109/TKDE.2013.97 · 1.82 Impact Factor
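    A minimal sketch of the graph-regularized NMF building block that GTL generalizes, using the standard multiplicative update rules; the data and affinity graph are synthetic, and GTL additionally couples the factorizations across domains:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k, lam = 60, 20, 5, 0.1
    X = rng.random((n, d))                        # data matrix (n samples x d feats)
    W = (rng.random((n, n)) > 0.9).astype(float)  # toy sample-affinity graph
    W = np.maximum(W, W.T)                        # symmetrize
    D = np.diag(W.sum(axis=1))                    # degree matrix (L = D - W)

    U = rng.random((d, k))                        # basis factors
    V = rng.random((n, k))                        # sample factors, graph-regularized
    eps = 1e-9
    for _ in range(200):                          # multiplicative updates (GNMF)
        U *= (X.T @ V) / (U @ (V.T @ V) + eps)
        V *= (X @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)

    print("reconstruction error:", round(float(np.linalg.norm(X - V @ U.T)), 3))
    ```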
  • Xinli Fang, Qiang Yang, Wenjun Yan
    ABSTRACT: This paper explores cascading failure behavior in the new context of directed complex networks by introducing the concept of neighbor links. Two novel network attack strategies, the minimum in-degree attack strategy (MIAS) and the maximum out-degree attack strategy (MOAS), are proposed, and their impacts are assessed through simulation experiments using the random attack strategy (RAS) as the comparison benchmark for a range of network scenarios (a directed random network, a directed scale-free network, and the IEEE 118 network model). The numerical results show that cascading failure propagation in directed complex networks is highly dependent on the attack strategy and the directionality of the network, as well as other network configurations.
    Safety Science 06/2014; 65:1–9. DOI:10.1016/j.ssci.2013.12.015 · 1.67 Impact Factor
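    A minimal sketch of the attack-strategy comparison, assuming a standard load-capacity (Motter-Lai style) cascade model rather than the paper's neighbor-link formulation:

    ```python
    import networkx as nx

    def cascade(G, first_failed):
        G = G.copy()
        load = nx.betweenness_centrality(G)        # initial loads
        cap = {v: 1.2 * load[v] for v in G}        # capacity = (1 + alpha) * load
        failed = {first_failed}
        while failed:
            G.remove_nodes_from(failed)
            load = nx.betweenness_centrality(G)    # loads redistribute
            failed = {v for v in G if load[v] > cap[v]}
        return G.number_of_nodes()                 # surviving nodes

    G = nx.DiGraph(nx.scale_free_graph(100, seed=1))   # directed toy network
    mias = min(G, key=G.in_degree)                     # minimum in-degree target
    moas = max(G, key=G.out_degree)                    # maximum out-degree target
    print("survivors after MIAS:", cascade(G, mias))
    print("survivors after MOAS:", cascade(G, moas))
    ```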
  • ABSTRACT: Latent Dirichlet allocation (LDA) is a popular topic modeling technique in academia but less so in industry, especially in large-scale applications involving search engines and online advertisement systems. A main underlying reason is that the topic models used have been too small in scale to be useful; for example, some of the largest LDA models reported in the literature have up to 10^3 topics, which can hardly cover the long-tail semantic word sets. In this paper, we show that the number of topics is a key factor that can significantly boost the utility of a topic-modeling system. In particular, we show that a "big" LDA model with at least 10^5 topics inferred from 10^9 search queries can achieve significant improvements in an industrial search engine and an online advertising system, both of which serve hundreds of millions of users. We develop a novel distributed system called Peacock to learn big LDA models from big data. The main features of Peacock include a hierarchical parallel architecture, real-time prediction, and topic de-duplication. We empirically demonstrate that the Peacock system is capable of providing significant benefits via highly scalable LDA topic models for several industrial applications.
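    A toy-scale sketch of the LDA inference itself via gensim; Peacock's contribution is scaling this to ~10^5 topics over ~10^9 queries on a hierarchical parallel architecture, which a single-machine example cannot reflect:

    ```python
    from gensim import corpora, models

    queries = [["cheap", "flight", "tickets"],
               ["flight", "status", "delay"],
               ["buy", "cheap", "shoes"],
               ["shoes", "running", "review"]]
    dictionary = corpora.Dictionary(queries)
    corpus = [dictionary.doc2bow(q) for q in queries]

    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                          passes=50, random_state=0)
    for t in range(2):
        print("topic", t, lda.show_topic(t, topn=3))
    ```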
  • Ruliang Dong, Qiang Yang, Wenjun Yan
    ABSTRACT: Island operation of a fraction of a power distribution network with distributed generators (DGs) is considered an efficient operational paradigm to enhance the security of power supply. This paper addresses the issue of island partitioning in distribution networks with the penetration of small-scale DGs and presents a two-stage algorithmic solution. In the suggested solution, a constraint-satisfaction (CSP) based method is first adopted to create a collection of network partitioning results for individual DGs that meet the constraints imposed by the distribution network; this can be carried out offline, assuming fault occurrence at certain points. To identify the optimal partitioning solution, a heuristic simulated annealing algorithm (SAA) is then employed. Through this two-stage approach, optimal island partitioning can be achieved with acceptable time complexity in large-scale power distribution networks. The proposed partitioning solution is assessed through a set of numerical comparative studies on the IEEE 69-bus test model, using two available solutions as comparison benchmarks. The numerical results demonstrate that the proposed solution performs well in terms of guaranteeing the reliable supply of essential power loads and improving the utilization efficiency of distributed generation.
    2014 26th Chinese Control And Decision Conference (CCDC); 05/2014
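    A minimal sketch of the second-stage search, assuming a toy objective: simulated annealing over island membership, rewarding priority-weighted load served and rejecting islands that exceed DG capacity (the CSP stage is represented only by this capacity check):

    ```python
    import math, random

    random.seed(1)
    load = [4, 2, 6, 3, 5]   # bus loads (toy values)
    prio = [3, 1, 2, 1, 3]   # priority of each load
    dg_cap = 12              # DG generation capacity inside the island

    def value(x):            # priority-weighted load served by the island
        if sum(l for l, xi in zip(load, x) if xi) > dg_cap:
            return -1        # infeasible: island load exceeds DG capacity
        return sum(l * p for l, p, xi in zip(load, prio, x) if xi)

    x = [random.random() < 0.5 for _ in load]  # initial island membership
    T = 5.0
    while T > 0.01:
        y = x[:]
        y[random.randrange(len(y))] ^= True    # flip one bus in or out
        d = value(y) - value(x)
        if d >= 0 or random.random() < math.exp(d / T):
            x = y                              # accept (sometimes uphill)
        T *= 0.95                              # cooling schedule
    print("island buses:", [i for i, xi in enumerate(x) if xi], "value:", value(x))
    ```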
  • ABSTRACT: Friendship prediction is an important task in social network analysis (SNA). It can help users identify friends and improve their level of activity. Most previous approaches predict users' friendships based on their historical records, such as existing friendships and social interactions. However, in reality, most users have only a limited number of friends in a single network, and the data can be very sparse. The sparsity problem causes existing methods to overfit the rare observations and suffer serious performance degradation. This is particularly true when a new social network has just started to form. We observe that many of today's social networks are composite in nature, where people are often engaged in multiple networks. In addition, users' friendships are often correlated; for example, two users may be friends on both Facebook and Google+. Thus, by considering overlapping users as the bridge, friendship knowledge in other networks can help predict friendships in the current network. This can be achieved by exploiting the knowledge in different networks in a collective manner. However, as each individual network has its own properties, which can be incompatible and inconsistent with other networks, naively merging all networks into a single one may not work well. The proposed solution extracts the common behaviors between different networks via a hierarchical Bayesian model. It captures the common knowledge across networks while avoiding negative impacts due to network differences. Empirical studies demonstrate that the proposed approach significantly improves the mean average precision of friendship prediction over state-of-the-art baselines on nine real-world social networking datasets.
  • Erheng Zhong, Wei Fan, Qiang Yang
    ABSTRACT: Accurate prediction of user behaviors is important for many social media applications, including social marketing, personalization, and recommendation. A major challenge is that, although many previous works model user behavior from historical behavior logs alone, the available user behavior data or interactions between users and items in a given social network are usually very limited and sparse (e.g., ≥ 99.9% empty), which makes models overfit the rare observations and fail to provide accurate predictions. We observe that many people are members of several social networks at the same time, such as Facebook, Twitter, and Tencent's QQ. Importantly, users' behaviors and interests in different networks influence one another. This provides an opportunity to leverage the knowledge of user behaviors in different networks, by considering the overlapping users in different networks as bridges, in order to alleviate the data sparsity problem and enhance the predictive performance of user behavior modeling. Combining different networks "simply and naively" does not work well. In this article, we formulate the problem of modeling multiple networks as "adaptive composite transfer" and propose a framework called ComSoc. ComSoc first selects the most suitable networks inside a composite social network via a hierarchical Bayesian model, parameterized for individual users. It then builds topic models for user behavior prediction using both the relationships in the selected networks and related behavior data. With different relational regularizations, we introduce different implementations, corresponding to different ways of transferring knowledge from composite social relations. To handle big data, we have implemented the algorithm using Map/Reduce. We demonstrate that the proposed composite network-based user behavior models significantly improve predictive accuracy over a number of existing approaches on several real-world applications, including a very large social networking dataset from Tencent Inc.
    ACM Transactions on Knowledge Discovery from Data 02/2014; 8(1). DOI:10.1145/2556613 · 1.15 Impact Factor
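    A minimal sketch of the bridging idea, assuming plain matrix factorization: overlapping users share one latent factor matrix across two networks, so the dense network informs the sparse one. ComSoc itself is a hierarchical Bayesian model with per-user network selection, not this gradient-descent toy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_items, k = 50, 30, 4
    A = (rng.random((n_users, n_items)) > 0.5).astype(float)    # dense network
    B = (rng.random((n_users, n_items)) > 0.99).astype(float)   # sparse network

    U = 0.1 * rng.normal(size=(n_users, k))    # shared user factors (the bridge)
    Va = 0.1 * rng.normal(size=(n_items, k))   # item factors, network A
    Vb = 0.1 * rng.normal(size=(n_items, k))   # item factors, network B
    lr, reg = 0.02, 0.01
    for _ in range(300):                       # joint gradient descent
        for X, V in ((A, Va), (B, Vb)):
            E = X - U @ V.T                    # residual
            U += lr * (E @ V - reg * U)        # shared factors see both networks
            V += lr * (E.T @ U - reg * V)

    rmse = float(np.sqrt(((B - U @ Vb.T) ** 2).mean()))
    print("RMSE on the sparse network:", round(rmse, 3))
    ```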
  • Youjian Zhang, Wenjun Yan, Qiang Yang
    Mathematical Problems in Engineering 01/2014; 2014:1-8. DOI:10.1155/2014/461635 · 1.08 Impact Factor
  • ABSTRACT: Distance vector routing protocols have been widely adopted as an efficient routing mechanism in the current Internet and in many wireless networks. However, the existing distance vector routing protocols are insecure, as they lack effective mechanisms to authenticate routing updates aggregated from other routers. As a result, routing-based attacks have become a critical issue that can degrade network performance far more severely than general network attacks. To address this issue, this paper analyzes the routing model and its security aspects, and presents a novel approach to guaranteeing routing security. Based on the model, we present a security mechanism covering message exchange and the authentication of update messages. The analysis shows that the security mechanism can effectively verify the integrity and validate the freshness of routing update messages received from neighbor nodes. In comparison with existing mechanisms (SDV, S-RIP, etc.), the proposed model provides enhanced security without introducing significant network overhead or complexity.
    Science China Information Sciences 01/2014; 57(1). DOI:10.1007/s11432-012-4659-7 · 0.70 Impact Factor
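    A minimal sketch of authenticating update messages, assuming a per-neighbor shared key: an HMAC protects integrity and a sequence number guards freshness; the fields and key handling are illustrative, not the paper's exact mechanism:

    ```python
    import hmac, hashlib, json

    KEY = b"shared-router-key"            # per-neighbor shared secret (assumed)

    def make_update(dest, metric, seq):
        body = json.dumps({"dest": dest, "metric": metric, "seq": seq}).encode()
        tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        return body, tag

    def verify(body, tag, last_seq):
        expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(tag, expected):
            return False, "integrity check failed"
        msg = json.loads(body)
        if msg["seq"] <= last_seq:        # stale or replayed update
            return False, "freshness check failed"
        return True, msg

    body, tag = make_update("10.0.0.0/24", metric=3, seq=42)
    print(verify(body, tag, last_seq=41))   # accepted
    print(verify(body, tag, last_seq=42))   # rejected as a replay
    ```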
  • Xinli Fang, Qiang Yang, Wenjun Yan
    ABSTRACT: This paper explores the network outer synchronization problem in a generic context for complex networks with nonlinear time-delay characteristics and nonidentical time-varying topological structures. Based on classic Lyapunov stability theory, synchronization criteria and an adaptive control strategy are presented by adopting an appropriate Lyapunov-Krasovskii energy function, and the convergence of the system error is proved. Existing results on network outer synchronization can be recovered as special cases of the suggested generic synchronization criteria and control scheme under certain conditions, for example by treating the coupling matrices as time-invariant. Numerical simulation experiments for network scenarios with chaotic node dynamics and time-varying topologies are carried out, and the results verify the correctness and effectiveness of the proposed control solution.
    Mathematical Problems in Engineering 01/2014; 2014:1-10. DOI:10.1155/2014/437673 · 1.08 Impact Factor
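    A minimal numerical sketch of adaptive synchronization for a single drive-response pair of chaotic (Lorenz) nodes, with the standard adaptive gain law k̇ = γ‖e‖²; the paper's setting of full networks with nonlinear time-delay coupling and time-varying topologies is far more general:

    ```python
    import numpy as np

    def lorenz(s):
        x, y, z = s
        return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z])

    dt, gamma = 1e-3, 2.0
    x = np.array([1.0, 1.0, 1.0])     # drive system state
    y = np.array([-5.0, 3.0, 20.0])   # response system state
    k = 0.0                           # adaptive coupling gain
    for _ in range(60000):            # Euler integration
        e = x - y
        x = x + dt * lorenz(x)
        y = y + dt * (lorenz(y) + k * e)     # coupled response
        k = k + dt * gamma * float(e @ e)    # adaptive law: k' = gamma*||e||^2
    print("error norm:", round(float(np.linalg.norm(x - y)), 6), "gain:", round(k, 2))
    ```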
  • ABSTRACT: Demographics prediction is an important component of user profile modeling. The accurate prediction of users' demographics can help promote many applications, ranging from web search and personalization to behavioral targeting. In this paper, we focus on how to predict users' demographics, including gender, job type, marital status, age, and number of family members, based on mobile data such as usage logs, physical activities, and environmental contexts. The core idea is to build a supervised learning framework in which each user is represented as a feature vector and users' demographics are the prediction targets. The key component is constructing features from raw data, after which supervised learning models can be applied. We propose a feature construction framework, CFC (contextual feature construction), where each feature is defined as the conditional probability of one user activity under given contexts. Consequently, besides employing standard supervised learning models, we propose a regularized multi-task learning framework to model the different demographics prediction tasks collectively. We also propose a cost-sensitive classification framework for the regression tasks, in order to benefit from existing dimension reduction methods. Finally, due to the limited number of training instances, we employ ensembling to avoid overfitting. The experimental results show that the framework achieves classification accuracies on gender, job, and marital status as high as 96%, 83%, and 86%, respectively, and achieves root mean square error (RMSE) on age and number of family members as low as 0.69 and 0.66, respectively, under leave-one-out evaluation.
    Pervasive and Mobile Computing 12/2013; 9(6):823–837. DOI:10.1016/j.pmcj.2013.07.009 · 1.67 Impact Factor
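    A minimal sketch of the CFC idea: each feature is the conditional probability of a user activity given a context, estimated per user from the logs. The toy log and column names are illustrative:

    ```python
    import pandas as pd

    logs = pd.DataFrame({   # toy usage log for one user
        "user":     ["u1"] * 6,
        "context":  ["evening", "evening", "evening",
                     "morning", "morning", "morning"],
        "activity": ["call", "game", "game", "call", "call", "email"],
    })

    # One feature per (context, activity) pair: P(activity | context) per user.
    feat = (logs.groupby(["user", "context"])["activity"]
                .value_counts(normalize=True)
                .rename("p"))
    vector = feat.unstack(["context", "activity"]).fillna(0.0)
    print(vector)
    ```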
  • ABSTRACT: We present our winning solution to Dedicated Task 1 of the Nokia Mobile Data Challenge (MDC). MDC Task 1 is to infer the semantic category of a place based on the smartphone sensing data obtained at that place. We approach this task in a standard supervised learning setting: we extract discriminative features from the sensor data and use state-of-the-art classifiers (SVM, logistic regression, and the decision tree family) to build classification models. We have found that feature engineering, in other words constructing features using human heuristics, is very effective for this task. In particular, we propose a novel feature engineering technique, Conditional Feature (CF), a general framework for domain-specific feature construction. In total, we generated 2,796,200 features, and in our final five submissions we used feature selection to select 100 to 2,000 features. One of our key findings is that features conditioned on fine-granularity time intervals, e.g., every 30 min, are most effective. Our best 10-fold CV accuracy on the training set is 75.1%, by gradient boosted trees, and the second best is 74.6%, by L1-regularized logistic regression. Besides the good performance, we also briefly report our experience of using the F# language for large-scale (∼70 GB of raw text data) conditional feature construction.
    Pervasive and Mobile Computing 12/2013; 9(6):772–783. DOI:10.1016/j.pmcj.2013.07.004 · 1.67 Impact Factor
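    A minimal sketch of the classification stage, assuming synthetic data in place of the 2,796,200 engineered features: select the top-k features, then fit gradient boosted trees and evaluate with 10-fold CV:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    X, y = make_classification(n_samples=300, n_features=500,
                               n_informative=20, random_state=0)
    model = make_pipeline(SelectKBest(f_classif, k=100),        # feature selection
                          GradientBoostingClassifier(random_state=0))
    scores = cross_val_score(model, X, y, cv=10)                # 10-fold CV
    print("10-fold CV accuracy:", round(scores.mean(), 3))
    ```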

Publication Stats

12k Citations
342.30 Total Impact Points

Institutions

  • 2010–2014
    • Zhejiang University
      • College of Electrical Engineering
      • College of Computer Science and Technology
      Hangzhou, Zhejiang, China
    • Pennsylvania State University
      University Park, Pennsylvania, United States
    • Stanford University
      Palo Alto, California, United States
    • IBM
      Armonk, New York, United States
  • 1970–2014
    • The Hong Kong University of Science and Technology
      • Department of Computer Science and Engineering
      Kowloon, Hong Kong
  • 2009–2013
    • Imperial College London
      • Department of Electrical and Electronic Engineering
      London, England, United Kingdom
  • 2012
    • Microsoft
      Redmond, Washington, United States
    • The University of Hong Kong
      Hong Kong, Hong Kong
  • 2011
    • Zhejiang Gongshang University
      Hangzhou, Zhejiang, China
    • Southeast University (China)
      • School of Computer Science and Engineering
      Nanjing, Jiangsu, China
    • Fourth Military Medical University
      Xi’an, Shaanxi, China
  • 2007
    • Sun Yat-Sen University
      Guangzhou, Guangdong, China
    • The Chinese University of Hong Kong
      • Department of Information Engineering
      Hong Kong, Hong Kong
    • Institute for Infocomm Research
      Singapore, Singapore
    • University of Wisconsin, Madison
      • Department of Computer Sciences
      Madison, Wisconsin, United States
  • 2006
    • Peking University
      • School of Mathematical Sciences
      Beijing, Beijing Shi, China
  • 1996–2006
    • Simon Fraser University
      • School of Computing Science
      Burnaby, British Columbia, Canada
  • 2004
    • University of Vermont
      Burlington, Vermont, United States
  • 2001
    • Shanghai Jiao Tong University
      • Department of Computer Science and Engineering
      Shanghai, Shanghai Shi, China
  • 2000
    • Tsinghua University
      • Department of Computer Science and Technology
      Beijing, Beijing Shi, China
  • 1990–2000
    • University of Waterloo
      Waterloo, Ontario, Canada
  • 1989
    • University of Maryland, College Park
      • Department of Computer Science
      College Park, Maryland, United States