Applied Intelligence (APPL INTELL)

Publisher: Springer Science+Business Media, Springer Verlag

Journal description

The international journal Applied Intelligence provides a medium for exchanging scientific research and technological achievements accomplished by the international community. The focus of the work is on research in artificial intelligence and neural networks. The journal addresses issues involving solutions to real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches to solving complex problems is of particular importance. The emphasis of the reported work is on new and original research and technological developments rather than reports on the application of existing technology to different sets of data. Earlier work reported in these fields has been limited in application and has solved simplified, structured problems that rarely occur in real-life situations. Only recently have researchers started addressing real and complex issues applicable to difficult problems. The journal welcomes such developments and functions as a catalyst in disseminating the original research and technological achievements of the international community in these areas.

Current impact factor: 1.85

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2012 Impact Factor 1.853
2011 Impact Factor 0.849
2010 Impact Factor 0.881
2009 Impact Factor 0.988
2008 Impact Factor 0.775
2007 Impact Factor 0.5
2006 Impact Factor 0.329
2005 Impact Factor 0.569
2004 Impact Factor 0.477
2003 Impact Factor 0.776
2002 Impact Factor 0.686
2001 Impact Factor 0.493
2000 Impact Factor 0.42
1999 Impact Factor 0.291
1998 Impact Factor 0.326
1997 Impact Factor 0.268
1996 Impact Factor 0.139
1995 Impact Factor 0.05

Impact factor over time

[Chart: impact factor by year; values as tabulated above]

Additional details

5-year impact 1.94
Cited half-life 5.90
Immediacy index 0.19
Eigenfactor 0.00
Article influence 0.30
Website Applied Intelligence website
Other titles Applied intelligence (Dordrecht, Netherlands)
ISSN 0924-669X
OCLC 25272842
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Springer Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as arXiv.org
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository after 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    • green

Publications in this journal

  • ABSTRACT: In this paper we explore prediction intervals and how they can be used for model evaluation and discrimination in the supervised regression setting of medium-sized datasets. We review three different methods for constructing prediction intervals and the statistics used for their evaluation. What the prediction intervals look like, how the different methods behave, and how prediction intervals can be used for the graphical evaluation of models is illustrated with the help of simple datasets. We then propose a combined method for constructing prediction intervals and explore its performance with two voting schemes for combining the predictions of a diverse ensemble of models. All methods are tested on a large set of datasets, on which we evaluate the individual methods and their aggregated variants for their ability to select the best predictions. The analysis of correlations between the root mean squared error and our evaluation statistic shows that both the stability and the reliability of the results increase as the techniques become more elaborate. We confirm that the methodology is suitable for the graphical comparison of individual models and is a viable way of discriminating among model candidates.
    Applied Intelligence 06/2015; 42(4). DOI:10.1007/s10489-014-0632-z
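    The abstract does not spell out the three interval-construction methods it reviews; below is a minimal sketch of one common baseline, a normal-approximation interval taken from the spread of a bagged ensemble's per-model predictions, assuming scikit-learn is available (toy data; 1.96 is the ~95 % normal quantile).

      import numpy as np
      from sklearn.ensemble import BaggingRegressor
      from sklearn.tree import DecisionTreeRegressor

      rng = np.random.default_rng(0)
      X = rng.uniform(-3, 3, size=(200, 1))
      y = np.sin(X).ravel() + rng.normal(0, 0.2, size=200)

      # Spread of per-model predictions gives a normal-approximation interval.
      ens = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(X, y)
      preds = np.stack([m.predict(X) for m in ens.estimators_])   # (50, 200)
      mean, std = preds.mean(axis=0), preds.std(axis=0)
      lo, hi = mean - 1.96 * std, mean + 1.96 * std               # ~95% interval
      print("empirical coverage:", np.mean((y >= lo) & (y <= hi)))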
  • ABSTRACT: In data-mining algorithms, contingency tables are frequently built from ADtrees, as ADtrees have been shown to be an efficient data structure for caching sufficient statistics. This paper introduces three modifications. The first two use a one-dimensional array and a hash map, respectively, for representing contingency tables, and the third uses a non-recursive approach to build contingency tables from sparse ADtrees. We implement algorithms in Python to construct contingency tables with a two-dimensional array, a tree, a one-dimensional array, and a hash map, using both recursive and non-recursive approaches. We empirically test these algorithms in five respects on a large number of randomly generated datasets. We also apply the modified algorithms to Bayesian network learning and test the performance improvements on three real-life datasets. We demonstrate experimentally that all three modifications improve algorithm performance. The improvements are more significant with higher arities and larger arity values.
    Applied Intelligence 06/2015; 42(4). DOI:10.1007/s10489-014-0624-z
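    For illustration, a hash-map (dict) representation of a contingency table, one of the representations the paper compares, built here by plain counting over raw records rather than by the authors' ADtree-based construction.

      from collections import Counter

      def contingency_table(records, attrs):
          """Map each combination of values of `attrs` to its count."""
          table = Counter()
          for rec in records:
              table[tuple(rec[a] for a in attrs)] += 1
          return table

      data = [{"A": 0, "B": 1}, {"A": 0, "B": 1}, {"A": 1, "B": 0}]
      print(contingency_table(data, ["A", "B"]))
      # Counter({(0, 1): 2, (1, 0): 1})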
  • ABSTRACT: Link prediction in social networks has attracted increasing attention in various fields such as sociology, anthropology, information science, and computer science. Most existing methods adopt a static graph representation to predict new links. However, these methods lose some important topological information of dynamic networks. In this work, we present a method for link prediction in dynamic networks that integrates temporal information, community structure, and node centrality. Information on all of these aspects is highly beneficial in predicting potential links in social networks. Temporal information captures link-occurrence behavior in the dynamic network, while community clustering shows how strong the connection between two individual nodes is, based on whether they share the same community. The centrality of a node, which measures its relative importance within a network, is highly related to future links in social networks. We predict a node’s future importance by eigenvector centrality and use this for link prediction. Merging the topological information, including community structure and centrality, with temporal information produces a more realistic model for link prediction in dynamic networks. Experimental results on real datasets show that our method based on the integrated time model can predict future links efficiently in temporal social networks and achieves higher-quality results than traditional methods.
    Applied Intelligence 06/2015; 42(4). DOI:10.1007/s10489-014-0631-0
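    A minimal sketch of eigenvector centrality computed by power iteration, the centrality measure the method feeds into link prediction; the adjacency matrix below is a made-up toy graph.

      import numpy as np

      def eigenvector_centrality(adj, iters=100, tol=1e-9):
          """Power iteration: repeatedly apply the adjacency matrix and normalize."""
          x = np.ones(adj.shape[0])
          for _ in range(iters):
              x_new = adj @ x
              x_new /= np.linalg.norm(x_new)
              if np.linalg.norm(x_new - x) < tol:
                  break
              x = x_new
          return x_new

      adj = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
      print(eigenvector_centrality(adj).round(3))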
  • ABSTRACT: Permutation-based encoding is used by many evolutionary algorithms dealing with combinatorial optimization problems. An important aspect of the evolutionary search process is the recombination of existing individuals to generate new, potentially better-fit offspring leading to more promising areas of the search space. In this paper, we describe and analyze the best-order recombination operator for permutation-based encoding. The proposed operator uses genetic information from the two parents and from the best individual obtained up to the current generation. These sources of information are integrated to determine the best order of values in the new permutation. To evaluate the performance of best-order crossover, we address three well-known \(\mathcal{NP}\)-hard optimization problems: the Travelling Salesman Problem, the Vehicle Routing Problem, and the Resource-Constrained Project Scheduling Problem. For each of these problems, a set of benchmark instances is considered in a comparative analysis of the proposed operator against eight other crossover schemes designed for permutation representations. All crossover operators are integrated into the same standard evolutionary framework with the same parameter settings to allow a comparison focused on the recombination process. Numerical results emphasize the good performance of the proposed crossover scheme, which leads to better-quality solutions overall.
    Applied Intelligence 06/2015; 42(4). DOI:10.1007/s10489-014-0623-0
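    The abstract does not define the proposed operator itself; for reference, here is a sketch of the classic order crossover (OX), one of the standard permutation operators such proposals are typically compared against.

      import random

      def order_crossover(p1, p2):
          """OX: copy a slice from parent 1, fill the rest in parent-2 order."""
          n = len(p1)
          i, j = sorted(random.sample(range(n), 2))
          child = [None] * n
          child[i:j] = p1[i:j]                           # slice from parent 1
          remaining = [g for g in p2 if g not in child]  # parent-2 order preserved
          for k in list(range(j, n)) + list(range(i)):
              child[k] = remaining.pop(0)
          return child

      random.seed(1)
      print(order_crossover([0, 1, 2, 3, 4, 5], [5, 3, 0, 4, 2, 1]))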
  • ABSTRACT: With increasing globalization, supplier selection has become more and more important. In the process of determining the best supplier, expert judgements may be vague or incomplete due to the inherent uncertainty and imprecision in their perception. In addition, the sub-criteria are interrelated in the selection of the right supplier. In this paper, a novel methodology based on fuzzy set theory and the analytic network process (FEANP) is developed to address both the uncertain information involved and the interrelationships among the attributes. The paper includes a case study describing the implementation of this model in a real-world supplier selection scenario. Finally, a comparison with existing methods demonstrates the effectiveness of the proposed model.
    Applied Intelligence 05/2015;
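    A minimal sketch of one typical ingredient of fuzzy multi-criteria methods such as this one: aggregating expert judgements expressed as triangular fuzzy numbers and defuzzifying to a crisp weight. All numbers are made up, and the paper's actual FEANP aggregation may differ.

      import numpy as np

      # Each judgement is a triangular fuzzy number (low, mode, high);
      # rows are experts rating one criterion.
      judgements = np.array([[0.2, 0.4, 0.6],
                             [0.3, 0.5, 0.7],
                             [0.1, 0.3, 0.5]])
      l, m, u = judgements.mean(axis=0)   # aggregate by averaging
      crisp = (l + m + u) / 3.0           # centroid defuzzification
      print(f"crisp weight: {crisp:.3f}")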
  • ABSTRACT: Existing causal discovery algorithms are usually not sufficiently effective or efficient on high-dimensional data, because high dimensionality reduces discovery accuracy and increases computational complexity. To alleviate these problems, we present a three-phase approach to learning the structure of nonlinear causal models that takes advantage of a feature selection method and two state-of-the-art causal discovery methods. In the first phase, a greedy search method based on Max-Relevance and Min-Redundancy is employed to discover the candidate causal set, and a rough skeleton of the causal network is generated accordingly. In the second phase, a constraint-based method is used to discover the accurate skeleton from the rough skeleton. In the third phase, the direction-learning algorithm IGCI is applied to orient the causalities in the accurate skeleton. The experimental results show that the proposed approach is both effective and scalable, with particularly interesting findings on high-dimensional data.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0607-0
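    A sketch of a greedy Max-Relevance Min-Redundancy (mRMR) selection step of the kind used in the first phase, assuming discrete features, mutual information as the dependence measure, and scikit-learn availability; this is the generic mRMR scheme, not necessarily the authors' exact variant.

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.metrics import mutual_info_score

      def mrmr(X, y, k):
          """Greedily pick k features maximizing relevance minus redundancy."""
          relevance = mutual_info_classif(X, y, discrete_features=True,
                                          random_state=0)
          selected, candidates = [], list(range(X.shape[1]))
          while len(selected) < k:
              def score(f):
                  if not selected:
                      return relevance[f]
                  red = np.mean([mutual_info_score(X[:, f], X[:, s])
                                 for s in selected])
                  return relevance[f] - red
              best = max(candidates, key=score)
              selected.append(best)
              candidates.remove(best)
          return selected

      rng = np.random.default_rng(0)
      X = rng.integers(0, 3, size=(200, 6))      # discrete toy features
      y = (X[:, 0] + X[:, 2] > 2).astype(int)
      print(mrmr(X, y, 3))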
  • ABSTRACT: As is well known, the Greedy Ensemble Pruning (GEP) algorithm, also called the Directed Hill Climbing Ensemble Pruning (DHCEP) algorithm, offers relatively good performance and high speed. However, because the algorithm explores only a relatively small subspace of the whole solution space, it often produces suboptimal solutions to the ensemble pruning problem. To address this drawback, we propose a novel Randomized GEP (RandomGEP) algorithm, also called the Randomized DHCEP (RandomDHCEP) algorithm, that effectively enlarges the search space of the classical DHCEP while maintaining the same level of time complexity with the help of a randomization technique. The randomization of the classical DHCEP algorithm achieves a good tradeoff between the effectiveness and efficiency of ensemble pruning. Moreover, the RandomDHCEP algorithm naturally inherits two advantages that randomized algorithms usually possess. First, in most cases, its running time or space requirement is smaller than that of well-behaved deterministic ensemble pruning algorithms. Second, it is easy to understand and implement. Experimental results on three benchmark classification datasets verify the practicality and effectiveness of the RandomGEP algorithm.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0605-2
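    A sketch of the randomization idea in a greedy forward pruner: each step scores only a random subset of the remaining candidates instead of all of them. The `score` callable stands in for a validation-performance function; this illustrates the general technique, not the authors' exact RandomDHCEP procedure.

      import random

      def randomized_greedy_prune(models, score, k, sample_size=3, seed=0):
          """Greedy forward selection scoring a random candidate subset per step."""
          rng = random.Random(seed)
          chosen, pool = [], list(models)
          while pool and len(chosen) < k:
              cands = rng.sample(pool, min(sample_size, len(pool)))
              best = max(cands, key=lambda m: score(chosen + [m]))
              chosen.append(best)
              pool.remove(best)
          return chosen

      # Toy usage: "models" are just numbers; score favors sums near a target.
      print(randomized_greedy_prune(range(10),
                                    lambda ens: -abs(sum(ens) - 12), k=3))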
  • ABSTRACT: Class imbalance has been reported to compromise the performance of most standard classifiers, such as Naive Bayes, decision trees, and neural networks. Various solutions have been explored to solve this problem, mainly by balancing the skewed class distribution or by improving existing classification algorithms. However, these methods pay more attention to the imbalanced distribution and ignore the discriminative ability of features in class-imbalanced data. From this perspective, a dissimilarity-based method is proposed for the classification of imbalanced data. Our proposed method first removes useless and redundant features from the given data set by feature selection; it then extracts representative instances from the reduced data as prototypes; finally, it projects the reduced data into a dissimilarity space by constructing new features and builds the classification model on the data in the dissimilarity space. Extensive experiments on 24 benchmark class-imbalanced data sets show that, compared with seven other solutions for handling imbalanced data, our proposed method greatly improves the performance of imbalanced learning and outperforms the other solutions with all of the classification algorithms considered.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0610-5
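    A minimal sketch of the dissimilarity-space projection step: each instance is re-described by its distances to a set of prototypes, so the prototypes define the new feature axes. Euclidean distance and the toy data are assumptions for illustration.

      import numpy as np

      def to_dissimilarity_space(X, prototypes):
          """New feature j of each instance = Euclidean distance to prototype j."""
          diffs = X[:, None, :] - prototypes[None, :, :]
          return np.linalg.norm(diffs, axis=2)   # (n_samples, n_prototypes)

      X = np.array([[0.0, 0.0], [1.0, 1.0], [4.0, 5.0]])
      prototypes = X[[0, 2]]           # e.g. instances chosen as prototypes
      print(to_dissimilarity_space(X, prototypes).round(2))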
  • ABSTRACT: The least squares twin support vector machine (LS-TSVM) obtains two non-parallel hyperplanes by directly solving two systems of linear equations instead of the two quadratic programming problems (QPPs) of the conventional twin support vector machine (TSVM), which makes LS-TSVM computationally faster than TSVM. However, LS-TSVM ignores the structural information of the data, which may contain vital prior domain knowledge for training a classifier. In this paper, we incorporate the prior structural information of the data into LS-TSVM to build a better classifier, called the structural least squares twin support vector machine (S-LSTSVM). Since it incorporates information about the data distribution into the model, S-LSTSVM has good generalization performance. Furthermore, S-LSTSVM requires less time than other existing methods based on structural information, as it solves two systems of linear equations. Experimental results on twelve benchmark datasets demonstrate that S-LSTSVM performs well. Finally, we apply it to Alzheimer’s disease diagnosis to further demonstrate the advantages of our algorithm.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0611-4
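    For context, a sketch of the standard linear LS-TSVM solve that S-LSTSVM extends: each non-parallel hyperplane comes from one linear system over the augmented class matrices. A small ridge term is added here for numerical stability, and the toy data are made up.

      import numpy as np

      def lstsvm_fit(A, B, c1=1.0, c2=1.0, reg=1e-6):
          """A: class +1 samples (rows), B: class -1 samples (rows)."""
          E = np.hstack([A, np.ones((len(A), 1))])   # class +1, augmented
          F = np.hstack([B, np.ones((len(B), 1))])   # class -1, augmented
          I = reg * np.eye(E.shape[1])               # small ridge for stability
          z1 = np.linalg.solve(F.T @ F + E.T @ E / c1 + I, -F.T @ np.ones(len(B)))
          z2 = np.linalg.solve(E.T @ E + F.T @ F / c2 + I,  E.T @ np.ones(len(A)))
          return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])

      def lstsvm_predict(x, p1, p2):
          """Assign x to the class whose hyperplane is nearer."""
          d1, d2 = (abs(x @ w + b) / np.linalg.norm(w) for w, b in (p1, p2))
          return 1 if d1 < d2 else -1

      A = np.array([[2.0, 2.0], [3.0, 2.5]])         # toy class +1
      B = np.array([[0.0, 0.0], [0.5, -1.0]])        # toy class -1
      planes = lstsvm_fit(A, B)
      print(lstsvm_predict(np.array([2.5, 2.0]), *planes))   # expect 1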
  • ABSTRACT: Pattern mining is a data mining technique used for discovering significant patterns and has been applied to various applications such as disease analysis in medical databases and decision making in business. Frequent pattern mining based on item frequencies is the most fundamental topic in the pattern mining field. However, it is difficult to discover important patterns on the basis of frequencies alone, since characteristics of real-world databases such as the relative importance of items and non-binary transactions are not reflected. In this regard, utility pattern mining has emerged as a research topic that deals with these characteristics. Meanwhile, in real-world applications, data newly generated by continuous operation, or data from other databases intended for integrated analysis, may be gradually added to the current database. To deal efficiently with both existing and new data as one database, it is necessary to incorporate the added data into previous analysis results without analyzing the whole database again. In this paper, we propose an algorithm called HUPID-Growth (High Utility Patterns in Incremental Databases Growth) for mining high utility patterns in incremental databases. Moreover, we suggest a tree structure constructed with a single database scan, named HUPID-Tree (High Utility Patterns in Incremental Databases Tree), and a restructuring method with a novel data structure called TIList (Tail-node Information List), in order to process incremental databases more efficiently. We conduct various experiments comparing performance with state-of-the-art algorithms. The experimental results show that the proposed algorithm processes real datasets more efficiently than previous ones.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0601-6
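    A minimal sketch of the quantity high-utility pattern mining thresholds on: the utility of a pattern, i.e. the profit-weighted quantity of its items summed over the transactions containing it. Profits and quantities below are made up.

      # External utility (profit per item) and transactions (item -> quantity).
      profit = {"a": 3, "b": 1, "c": 5}
      db = [{"a": 2, "b": 3}, {"a": 1, "c": 2}, {"b": 4, "c": 1}]

      def utility(pattern, db, profit):
          """Sum profit * quantity of the pattern's items over supporting txns."""
          total = 0
          for tx in db:
              if all(item in tx for item in pattern):
                  total += sum(profit[i] * tx[i] for i in pattern)
          return total

      print(utility({"a", "c"}, db, profit))   # 3*1 + 5*2 = 13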
  • ABSTRACT: In recent years, outlier detection has attracted considerable attention. The identification of outliers is important for many applications, including those related to intrusion detection, credit card fraud, criminal activity in electronic commerce, medical diagnosis, and anti-terrorism. Various outlier detection methods have been proposed for solving problems in different domains. In this paper, a new outlier detection method is proposed from the perspectives of granular computing (GrC) and rough set theory. First, we give a definition of outliers called GR (GrC and rough set)-based outliers. Second, to detect GR-based outliers, an outlier detection algorithm called ODGrCR is proposed. Third, the effectiveness of ODGrCR is evaluated on a number of real data sets. The experimental results show that our algorithm is effective for outlier detection and, in particular, takes much less running time than other outlier detection methods.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0591-4
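    The abstract leaves the GR-based definition unspecified; as background, a sketch of the basic rough-set machinery such definitions build on, the lower and upper approximations of a concept under the equivalence classes induced by attribute values (toy records).

      from collections import defaultdict

      records = {1: ("a", 0), 2: ("a", 0), 3: ("b", 1), 4: ("b", 1)}
      X = {1, 2, 3}                        # concept of interest

      blocks = defaultdict(set)            # equivalence classes by attributes
      for rid, attrs in records.items():
          blocks[attrs].add(rid)

      lower = set().union(*(b for b in blocks.values() if b <= X))  # certain
      upper = set().union(*(b for b in blocks.values() if b & X))   # possible
      print(lower, upper)                  # {1, 2} {1, 2, 3, 4}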
  • ABSTRACT: Reinforcement learning is a subfield of artificial intelligence, and learning automata are considered among the most powerful tools in this research area. In the development of learning automata, the rate of convergence is the primary goal in designing a learning algorithm. In this paper, we propose a deterministic-estimator-based learning automaton (LA) in which the estimate of each action is the upper bound of a confidence interval, rather than the maximum likelihood estimate (MLE) that has been widely used in current estimator LA schemes. The philosophy here is to assign more confidence to actions that have been selected only a few times, so that the automaton is encouraged to explore uncertain actions. When all the actions have been fully explored, the automaton behaves just like the Generalized Pursuit Algorithm. A refined analysis is presented to show the ε-optimality of the proposed algorithm. Extensive simulations demonstrate that the presented LA is faster than any deterministic-estimator learning automaton reported to date. Moreover, we extend our algorithm to stochastic estimator schemes. The extended LA achieves a significant performance improvement compared with the current state-of-the-art learning automata, especially in complex and confusing environments.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0594-1
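    A sketch of the kind of optimistic, confidence-interval-based estimate described, here in generic upper-confidence-bound form: the MLE plus a radius that shrinks as an action is selected more often. The constant c and the exact radius are illustrative assumptions, not the paper's formula.

      import math

      def ucb_estimate(successes, pulls, total_pulls, c=2.0):
          """Optimistic estimate: MLE plus a confidence radius that shrinks
          with the number of times this action has been selected."""
          if pulls == 0:
              return float("inf")           # unexplored actions come first
          mle = successes / pulls
          return mle + math.sqrt(c * math.log(total_pulls) / pulls)

      print(ucb_estimate(successes=3, pulls=10, total_pulls=100))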
  • ABSTRACT: Negative selection algorithms are important for artificial immune systems to produce detectors. However, traditional negative selection algorithms suffer from high time complexity, large numbers of detectors, and considerable redundant coverage between detectors, resulting in low efficiency of detector generation and limiting the application of immune algorithms. Based on the distribution of the self set in morphological space, the algorithm proposed in this paper (IO-RNSA) introduces an immune optimization mechanism and produces candidate detectors hierarchically from far to near, with the selves as the center. First, the self set is treated as the evolving population. After immune optimization operations, first-level detectors are generated that are located far from the self space and cover large regions of the non-self space, so that fewer detectors cover as much non-self space as possible. Then the process is repeated to obtain second-level detectors, which are located close to the first-level detectors and near the self space and cover smaller regions of the non-self space, reducing detection holes. Repeating this process yields the final qualified detector set. During detector generation, the range in which detectors are randomly produced is limited and the self-reaction rate between candidate detectors is lower, which effectively reduces the number of mature detectors and the redundant coverage. Theoretical analysis demonstrates that the time complexity is linear in the size of the self set, which greatly reduces the influence of growth in self-set scale on the time complexity. Experimental results show that IO-RNSA has better time efficiency and generation quality than classical negative selection algorithms, improving the detection rate and decreasing the false alarm rate.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0599-9
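    A sketch of the basic real-valued negative-selection step such algorithms build on: keep only candidate detectors that match no self sample. This is plain RNSA with made-up data, not the hierarchical IO-RNSA procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      self_set = rng.uniform(0.4, 0.6, size=(50, 2))   # toy "self" region
      r_self = 0.05

      detectors = []
      while len(detectors) < 20:
          cand = rng.uniform(0, 1, size=2)
          dists = np.linalg.norm(self_set - cand, axis=1)
          if dists.min() > r_self:                     # does not match self
              detectors.append((cand, dists.min()))    # radius up to nearest self
      print(len(detectors), "detectors generated")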
  • ABSTRACT: Coalitional Resource Games (CRGs) are a natural and formal framework in which agents form coalitions to pool their scarce resources in order to achieve a set of goals that satisfy all members of a coalition. Many computational questions surrounding CRGs have been studied, but to our knowledge a number of natural decision problems in CRGs remain unsolved. In this paper, we therefore investigate the possibility of using binary particle swarm optimization (BPSO) as a stochastic search process for the Maximal Successful Coalition (MAXSC) problem in CRGs, which is DP-complete. For this purpose, we develop a one-dimensional binary encoding scheme, propose encoding-repair strategies to ensure that each encoding in every iteration is approximately valid and logically consistent, and discuss some key properties of the repair strategies. To evaluate the effectiveness of our algorithms, we compare them with the only other algorithm available in the literature for identifying MAXSC (due to Shrot, Aumann, and Kraus). The results show that our algorithms are significantly faster, especially on large-scale instances.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0589-y
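    A sketch of the canonical BPSO step the paper builds on: a real-valued velocity update followed by a sigmoid transfer that gives each bit's probability of being 1. The pbest/gbest vectors and coefficients are made-up placeholders, not the paper's configuration.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 8
      x     = rng.integers(0, 2, n).astype(float)   # current binary position
      pbest = rng.integers(0, 2, n).astype(float)   # personal best (placeholder)
      gbest = rng.integers(0, 2, n).astype(float)   # global best (placeholder)
      v = np.zeros(n)

      # One BPSO iteration: velocity update as in real-valued PSO, then a
      # sigmoid of the velocity is used as the probability of each bit = 1.
      w, c1, c2 = 0.7, 1.5, 1.5
      v = w * v + c1 * rng.random(n) * (pbest - x) + c2 * rng.random(n) * (gbest - x)
      x = (rng.random(n) < 1.0 / (1.0 + np.exp(-v))).astype(int)
      print(x)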