ABSTRACT: How to improve learning efficiency and optimize the encapsulation of subtasks is a key problem that hierarchical reinforcement learning needs to solve. This paper proposes a modular hierarchical reinforcement learning algorithm, named MHRL, in which the modularized hierarchical subtasks are trained with their own independent reward systems. During learning, MHRL produces an optimization strategy for the different modular layers, which allows independent modules to execute concurrently. In addition, the paper presents experimental results for application problems with nested learning processes. The results show that MHRL can increase learning reusability and improve learning efficiency dramatically.
Proceedings of the 8th international conference on Intelligent Computing Theories and Applications; 07/2012
ABSTRACT: Blogs have become popular social networking platforms in recent years, and blog opinion retrieval is one of the key issues that need to be solved. In this paper, we investigate whether the Condorcet fusion and the weighted Condorcet fusion can be used to improve the effectiveness of blog opinion retrieval. The experiments, carried out with the data set from the TREC 2008 Blog track, show that Condorcet fusion is effective and that weighted Condorcet fusion, with its weights trained by linear discriminant analysis, is very effective. Both outperform the best component result by a clear margin.
Database and Expert Systems Applications (DEXA), 2012 23rd International Workshop on; 01/2012
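The Condorcet fusion used in that work can be sketched as follows. This is a Copeland-style tally of weighted pairwise majority wins, a common way to realize Condorcet fusion for ranked lists; the function name, tie handling, and treatment of missing documents are illustrative assumptions, not necessarily the paper's exact procedure.

```python
from itertools import combinations

def condorcet_fuse(rankings, weights=None):
    """Fuse ranked lists by weighted pairwise majority voting, tallied
    Copeland-style (one common realization of Condorcet fusion).

    rankings: list of ranked lists of document ids (best first).
    weights:  optional per-system vote weights (uniform if omitted).
    A document missing from a list is treated as ranked below every
    document that list contains.
    """
    if weights is None:
        weights = [1.0] * len(rankings)
    docs = sorted({d for r in rankings for d in r})
    pos = [{d: i for i, d in enumerate(r)} for r in rankings]
    score = {d: 0.0 for d in docs}
    for a, b in combinations(docs, 2):
        vote = 0.0
        for p, w in zip(pos, weights):
            ra, rb = p.get(a, len(p)), p.get(b, len(p))
            if ra < rb:
                vote += w          # this system prefers a over b
            elif rb < ra:
                vote -= w
        if vote > 0:               # weighted majority prefers a
            score[a] += 1
            score[b] -= 1
        elif vote < 0:
            score[b] += 1
            score[a] -= 1
    return sorted(docs, key=lambda d: -score[d])
```

With uniform weights this is plain Condorcet fusion; non-uniform weights (e.g. trained on past performance) let a trusted system overturn the unweighted majority.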
ABSTRACT: In information retrieval systems and digital libraries, retrieval result evaluation is a very important aspect. Up to now, almost all commonly used metrics, such as average precision and recall-level precision, have been ranking-based metrics. In this work, we investigate whether it is a good option to use a score-based method, the Euclidean distance, for retrieval evaluation. Two variations of it are discussed: one uses a linear model to estimate the relation between rank and relevance in resultant lists, and the other uses a more sophisticated cubic regression model. Our experiments with two groups of results submitted to TREC demonstrate that the introduced metrics correlate strongly with ranking-based metrics when averaged over all 50 queries. On the other hand, our experiments also show that one of the variations (the linear model) has better overall quality than all the ranking-based metrics involved. Another surprising finding is that a commonly used metric, average precision, may not be as good as previously thought.
Advances in Databases - 28th British National Conference on Databases, BNCOD 28, Manchester, UK, July 12-14, 2011, Revised Selected Papers; 01/2011
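A minimal illustration of the score-based idea above. The paper's actual linear and cubic regression models for relating rank to relevance are not reproduced here; the min-max normalization and the direct comparison of normalized scores against binary judgments are assumptions of this sketch only.

```python
import numpy as np

def euclidean_distance_eval(scores, judged_rel):
    """Score-based retrieval evaluation sketch: normalize a system's
    document scores to [0, 1] and measure the Euclidean distance to the
    binary relevance judgments. A smaller distance indicates that the
    scores track the judgments more closely (better run)."""
    s = np.asarray(scores, dtype=float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # min-max normalize
    r = np.asarray(judged_rel, dtype=float)
    return float(np.linalg.norm(s - r))
```

Unlike ranking-based metrics, this uses the magnitude of the scores, not only their order, which is the contrast the abstract draws.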
ABSTRACT: In information retrieval, data fusion has been investigated by many researchers. Previous investigation and experimentation demonstrate that the linear combination method is an effective data fusion method for combining multiple information retrieval results. One advantage is its flexibility, since different weights can be assigned to different component systems so as to obtain better fusion results. However, how to obtain suitable weights for all the component retrieval systems is still an open problem. In this paper, we use the multiple linear regression technique to obtain optimum weights for all involved component systems. The weights are optimum in the least-squares sense: they minimize the difference between the scores of all documents estimated by linear combination and the judged scores of those documents. Our experiments with four groups of runs submitted to TREC show that the linear combination method with such weights steadily outperforms the best component system and other major data fusion methods, such as CombSum, CombMNZ, and the linear combination method with performance-level or performance-square weighting schemas, by large margins.
Database and Expert Systems Applications - 22nd International Conference, DEXA 2011, Toulouse, France, August 29 - September 2, 2011, Proceedings, Part II; 01/2011
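The least-squares weighting described above maps directly onto an ordinary linear regression. The sketch below uses `numpy.linalg.lstsq`; the score matrix layout and the assumption that scores are already normalized and judged relevance is numeric are simplifications for illustration.

```python
import numpy as np

def regression_weights(X, y):
    """Least-squares weights for linear-combination fusion.

    X: (n_docs, n_systems) matrix of normalized document scores from the
       training queries, one column per component system.
    y: judged relevance scores for the same documents.
    Returns the weight vector w minimizing ||X @ w - y||^2.
    """
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fuse(X, w):
    """Fused score of each document: the weighted linear combination."""
    return X @ w
```

At fusion time the learned weights are applied to the component scores of unseen queries; documents are then ranked by the fused score.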
ABSTRACT: In this paper we present a new data fusion method in information retrieval, which uses the ranking information of resultant documents. Our method models the rank–probability-of-relevance relationship in resultant document lists using logarithmic models. The proposed method is more effective than other data fusion methods that also use ranking information, and is as effective as some data fusion methods that rely on reliable scoring information.
Advanced Computer Theory and Engineering (ICACTE), 2010 3rd International Conference on; 09/2010
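A sketch of rank-based fusion under a logarithmic rank–relevance model. The functional form P(rel | rank) ≈ a − b·ln(rank) and the coefficient values below are illustrative assumptions; the paper fits its models to training data.

```python
import math

def log_model_score(rank, a=0.6, b=0.1):
    """Estimated probability of relevance at a given rank under an
    illustrative logarithmic model P(rel|rank) ~ a - b*ln(rank),
    clipped at zero. In practice a and b are fitted on training data."""
    return max(0.0, a - b * math.log(rank))

def fuse_by_rank(rankings):
    """Sum each document's estimated relevance probability across the
    component ranked lists, then rank by the accumulated score."""
    score = {}
    for ranked in rankings:
        for i, doc in enumerate(ranked, start=1):
            score[doc] = score.get(doc, 0.0) + log_model_score(i)
    return sorted(score, key=score.get, reverse=True)
```

Because only ranks are used, this works even when component systems report no scores, which is the setting the abstract targets.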
ABSTRACT: The sensitivity of a neural network's output to parameter variation is an important issue in both theoretical research and practical applications of neural networks. This paper proposes a quantified sensitivity measure of Radial Basis Function Neural Networks (RBFNNs) to input variation. The sensitivity is defined as the mathematical expectation of the squared output deviations caused by input variations. To quantify the sensitivity, the input is treated as a statistical variable and a numerical integration technique is employed to approximately compute the expectation. Experimental verifications show very good agreement between the proposed sensitivity computation and computer simulation. The quantified sensitivity measure could serve as a general tool for evaluating RBFNNs' performance.
International Joint Conference on Neural Networks, IJCNN 2010, Barcelona, Spain, 18-23 July, 2010; 01/2010
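The expectation defined above, E[(f(x+Δx) − f(x))²], can be estimated straightforwardly. The paper computes it by numerical integration; the sketch below substitutes Monte Carlo sampling for brevity, and the Gaussian input distribution and all network parameters are illustrative assumptions.

```python
import numpy as np

def rbf_net(x, centers, widths, weights):
    """Minimal RBF network: weighted sum of Gaussian basis functions.
    x: (n, dim) inputs; centers: (m, dim); widths, weights: (m,)."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return (np.exp(-d2 / (2 * widths ** 2)) * weights).sum(-1)

def sensitivity(centers, widths, weights, sigma=0.1, n=20000, dim=2, seed=0):
    """Monte Carlo estimate of E[(f(x+dx) - f(x))^2], with x ~ N(0, I)
    and input perturbation dx ~ N(0, sigma^2 I). (The paper evaluates
    this expectation by numerical integration instead.)"""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n, dim))
    dx = rng.normal(scale=sigma, size=(n, dim))
    dev = rbf_net(x + dx, centers, widths, weights) \
        - rbf_net(x, centers, widths, weights)
    return float((dev ** 2).mean())
```

As expected, the estimated sensitivity grows with the magnitude of the input perturbation.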
ABSTRACT: The computation of the sensitivity of a Madaline's output to parameter perturbation is systematically discussed. First, in line with the discrete nature of Adalines, a method based on a discrete stochastic technique is proposed, which derives analytical formulas for computing Adalines' sensitivity. The method can theoretically solve some problems that are unsolvable by existing methods based on continuous stochastic techniques, relax some impractical constraints, and make it possible to theoretically analyze the approximation error of Adalines' sensitivity. Second, on the basis of the sensitivity of Adalines and the structural characteristics of Madalines, a new selection strategy depending on a type of dedication degree is proposed for computing Madalines' sensitivity, which is superior to the currently popular approach of simple averaging in both precision and complexity. The proposed formulas and algorithm have the advantages of simplicity, low computational complexity, small approximation error, and high generality, as verified by a large number of experimental simulations.
Science China Information Sciences 01/2010; 53:2399-2414.
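The quantity at the heart of that paper, an Adaline's sensitivity, can be illustrated by simulation. An Adaline here is taken as y = sign(w·x) over discrete inputs drawn uniformly from {−1, +1}ⁿ; the sketch merely estimates the output-flip probability under weight perturbation, whereas the paper derives analytical formulas for it.

```python
import numpy as np

def adaline_flip_prob(w, dw, n_trials=50000, seed=0):
    """Monte Carlo estimate of an Adaline's sensitivity: the probability
    that sign(w . x) != sign((w + dw) . x) over inputs x drawn uniformly
    from {-1, +1}^n. (The referenced paper computes this quantity
    analytically via a discrete stochastic technique.)"""
    rng = np.random.default_rng(seed)
    w = np.asarray(w, dtype=float)
    x = rng.choice([-1.0, 1.0], size=(n_trials, len(w)))
    y0 = np.sign(x @ w)
    y1 = np.sign(x @ (w + np.asarray(dw, dtype=float)))
    return float((y0 != y1).mean())
```

A Madaline-level estimate would then combine the layer-wise Adaline sensitivities, which is where the paper's dedication-degree selection strategy improves on simple averaging.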
ABSTRACT: Ensemble learning is one of the main directions in machine learning and data mining; it enables learners to achieve higher training accuracy and better generalization ability. In this paper, with the aim of improving generalization performance, a novel approach to constructing an ensemble of neural networks is proposed. Its main contributions are a diversity measure for selecting diverse individual neural networks and a weighted fusion technique for assigning proper weights to the selected individuals. Experimental results demonstrate that the proposed approach is effective.
Intelligent Data Engineering and Automated Learning - IDEAL 2008, 9th International Conference, Daejeon, South Korea, November 2-5, 2008, Proceedings; 01/2008
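The two ingredients named above, a diversity measure for member selection and weighted fusion, can be sketched generically. The paper's specific measure and weighting scheme are not reproduced; greedy selection by prediction disagreement and externally supplied weights are assumptions of this sketch.

```python
import numpy as np

def select_diverse(preds, k):
    """Greedy diversity-based selection: start from the first member,
    then repeatedly add the member whose predictions disagree most, on
    average, with those already chosen.
    preds: (n_members, n_samples) array of class predictions."""
    chosen = [0]
    while len(chosen) < k:
        rest = [i for i in range(len(preds)) if i not in chosen]
        def disagreement(i):
            return np.mean([np.mean(preds[i] != preds[j]) for j in chosen])
        chosen.append(max(rest, key=disagreement))
    return chosen

def weighted_vote(preds, weights):
    """Weighted majority vote over binary {0, 1} predictions; weights
    would typically reflect each member's validation performance."""
    w = np.asarray(weights, dtype=float)
    avg = (w[:, None] * preds).sum(0) / w.sum()
    return (avg >= 0.5).astype(int)
```

Selecting diverse members keeps the ensemble small while preserving the error-cancelling effect that drives the generalization gain.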
ABSTRACT: In information retrieval, the linear combination method is a very flexible and effective data fusion method, since different weights can be assigned to different component systems. However, it remains an open question which weighting schema is good. Previously, a simple weighting schema was very often used: a system's weight is assigned as its average performance over a group of training queries. In this paper, we investigate the weighting issue through extensive experiments. We find that a series of power functions of average performance, which can be implemented as efficiently as the simple weighting schema, is more effective than the simple schema for data fusion.
Foundations of Intelligent Systems, 17th International Symposium, ISMIS 2008, Toronto, Canada, May 20-23, 2008, Proceedings; 01/2008
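The power-function weighting is a one-line change over the simple schema: w_i = perf_iᵖ instead of w_i = perf_i. The exponent below and the toy score lists are illustrative; the paper evaluates a series of power functions rather than a single fixed exponent.

```python
def power_weights(avg_perf, p=2.0):
    """Power-of-performance weighting for linear-combination fusion:
    w_i = perf_i ** p. With p = 1 this reduces to the simple
    performance-based weighting; p > 1 amplifies the gap between
    strong and weak component systems."""
    return [perf ** p for perf in avg_perf]

def linear_combination(score_lists, weights):
    """Rank documents by the weighted sum of their component scores.
    score_lists: one {doc_id: score} dict per component system."""
    fused = {}
    for scores, w in zip(score_lists, weights):
        for doc, s in scores.items():
            fused[doc] = fused.get(doc, 0.0) + w * s
    return sorted(fused, key=fused.get, reverse=True)
```

Since the weights are computed once from training-query performance, the fusion cost is identical to that of the simple schema.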
ABSTRACT: Ensemble learning for constructing learners in regression and classification has been proved, both practically and theoretically, to improve the generalization capability of the learners. Nowadays, most neural network ensembles are obtained by manipulating training data or network architectures, as in Bagging, Boosting, and evolutionary techniques. In this paper, a new method for constructing neural network ensembles is presented, which selects, by means of the output sensitivity of each individual network, the most diverse members from a pool of trained networks. Conceptually, the sensitivity reflects a network's output behavior at a given data point, for example the trend of the network's output nearby, so it can be used to explicitly measure the output diversity among individuals in the pool. Our research focuses on Multilayer Perceptrons (MLPs), and the sensitivity is taken as the partial derivative of an MLP's output with respect to its input at a data point. Based on this sensitivity, we developed four different measures for selecting the most diverse individuals from a given pool of trained MLPs. Experiments on the UCI benchmark data were conducted, and comparisons of our results with those from Bagging and Boosting show that our method has advantages over the existing ensemble methods in both ensemble size and generalization performance.
Proceedings of the International Joint Conference on Neural Networks, IJCNN 2007, Celebrating 20 years of neural networks, Orlando, Florida, USA, August 12-17, 2007; 01/2007
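The sensitivity defined above, the partial derivative of an MLP's output with respect to its input, has a closed form for a single-hidden-layer network. The sketch below computes it and uses mean pairwise gradient distance as one plausible diversity measure; the paper's four actual measures are not reproduced, so treat `gradient_diversity` as an assumption.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Single-hidden-layer MLP with tanh activation and scalar output.
    x: (d,); W1: (m, d); b1: (m,); W2: (m,); b2: scalar."""
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2

def mlp_input_gradient(x, W1, b1, W2, b2):
    """Sensitivity at x: the partial derivative of the output w.r.t.
    the input, dy/dx = W1^T (W2 * (1 - h^2)), shape (d,)."""
    h = np.tanh(W1 @ x + b1)
    return (W2 * (1 - h ** 2)) @ W1

def gradient_diversity(nets, x):
    """Mean pairwise Euclidean distance between the networks'
    sensitivities at x -- one possible sensitivity-based diversity
    measure, used here only for illustration."""
    grads = [mlp_input_gradient(x, *net) for net in nets]
    n = len(grads)
    dists = [np.linalg.norm(grads[i] - grads[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dists))
```

Two networks with identical input gradients at the sampled points behave alike locally and contribute no diversity, which is exactly what such a measure is meant to detect.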