Applied Intelligence (APPL INTELL)

Publisher: Springer Science+Business Media

Journal description

The international journal of Applied Intelligence provides a medium for exchanging scientific research and technological achievements accomplished by the international community. The focus of the work is on research in artificial intelligence and neural networks. The journal addresses issues involving solutions of real-life manufacturing, defense, management, government, and industrial problems which are too complex to be solved through conventional approaches and which require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The emphasis of the reported work is on new and original research and technological developments rather than reports on the application of existing technology to different sets of data. Earlier work reported in these fields has been limited in application and has solved simplified, structured problems which rarely occur in real-life situations. Only recently have researchers started addressing real and complex issues applicable to difficult problems. The journal welcomes such developments and functions as a catalyst in disseminating the original research and technological achievements of the international community in these areas.

Current impact factor: 1.85

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2012 Impact Factor 1.853
2011 Impact Factor 0.849
2010 Impact Factor 0.881
2009 Impact Factor 0.988
2008 Impact Factor 0.775
2007 Impact Factor 0.5
2006 Impact Factor 0.329
2005 Impact Factor 0.569
2004 Impact Factor 0.477
2003 Impact Factor 0.776
2002 Impact Factor 0.686
2001 Impact Factor 0.493
2000 Impact Factor 0.42
1999 Impact Factor 0.291
1998 Impact Factor 0.326
1997 Impact Factor 0.268
1996 Impact Factor 0.139
1995 Impact Factor 0.05

Additional details

5-year impact 1.94
Cited half-life 5.90
Immediacy index 0.19
Eigenfactor 0.00
Article influence 0.30
Website Applied Intelligence website
Other titles Applied intelligence (Dordrecht, Netherlands)
ISSN 0924-669X
OCLC 25272842
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publications in this journal

  • ABSTRACT: With increasing globalization, supplier selection has become more important than ever. In the process of determining the best supplier, expert judgements may be vague or incomplete owing to the inherent uncertainty and imprecision of human perception. In addition, the sub-criteria relevant to selecting the right supplier are interrelated. In this paper, a novel methodology based on fuzzy set theory and the analytic network process (FEANP) is developed to address both the uncertain information involved and the interrelationships among the attributes. The paper concludes with a case study describing the implementation of this model in a real-world supplier selection scenario. Finally, a comparison with existing methods demonstrates the effectiveness of the proposed model.
    Applied Intelligence 05/2015;
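    A minimal illustration of the fuzzy side of such a method, assuming triangular fuzzy numbers as the judgement format: the hypothetical helpers below average several experts' vague ratings and defuzzify the result by centroid. This is a generic sketch, not the paper's FEANP, which further applies the analytic network process to interdependent sub-criteria.

    ```python
    # Hedged sketch: aggregating vague expert judgements as triangular
    # fuzzy numbers (TFNs) and defuzzifying by centroid. Illustrative
    # only; helper names are hypothetical, not the paper's interface.

    def aggregate_tfn(judgements):
        """Average a list of TFNs (l, m, u) given by several experts."""
        n = len(judgements)
        l = sum(j[0] for j in judgements) / n
        m = sum(j[1] for j in judgements) / n
        u = sum(j[2] for j in judgements) / n
        return (l, m, u)

    def defuzzify(tfn):
        """Centroid defuzzification of a triangular fuzzy number."""
        l, m, u = tfn
        return (l + m + u) / 3.0

    # Three experts rate a supplier on one sub-criterion (1-9 scale).
    score = defuzzify(aggregate_tfn([(3, 5, 7), (4, 5, 6), (5, 7, 9)]))
    ```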
  • ABSTRACT: Existing causal discovery algorithms are usually neither effective nor efficient enough on high-dimensional data, because high dimensionality reduces discovery accuracy and increases computational complexity. To alleviate these problems, we present a three-phase approach that learns the structure of nonlinear causal models by taking advantage of a feature selection method and two state-of-the-art causal discovery methods. In the first phase, a greedy search method based on Max-Relevance and Min-Redundancy is employed to discover the candidate causal set, from which a rough skeleton of the causal network is generated. In the second phase, a constraint-based method refines the rough skeleton into an accurate skeleton. In the third phase, the direction-learning algorithm IGCI is applied to orient the causal edges of the accurate skeleton. The experimental results show that the proposed approach is both effective and scalable, with particularly interesting findings on high-dimensional data.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0607-0
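    The first phase lends itself to a compact sketch. Below is a hedged, generic implementation of greedy Max-Relevance/Min-Redundancy candidate selection using scikit-learn's mutual-information estimator; the paper's exact scoring function and its later constraint-based and IGCI phases are not reproduced.

    ```python
    # Hedged sketch: greedy mRMR selection of k candidate causes of a
    # target y. Relevance is I(X_i; y); redundancy is the mean MI between
    # a candidate and the already-selected features.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def mrmr_candidates(X, y, k):
        n_features = X.shape[1]
        relevance = mutual_info_regression(X, y)      # I(X_i; y)
        selected, remaining = [], list(range(n_features))
        while len(selected) < k and remaining:
            best, best_score = None, -np.inf
            for i in remaining:
                red = (np.mean([mutual_info_regression(X[:, [j]], X[:, i])[0]
                                for j in selected]) if selected else 0.0)
                score = relevance[i] - red            # max-relevance, min-redundancy
                if score > best_score:
                    best, best_score = i, score
            selected.append(best)
            remaining.remove(best)
        return selected
    ```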
  • ABSTRACT: As is well known, the Greedy Ensemble Pruning (GEP) algorithm, also called the Directed Hill Climbing Ensemble Pruning (DHCEP) algorithm, offers relatively good performance and high speed. However, because the algorithm explores only a relatively small subspace of the whole solution space, it often produces suboptimal solutions to the ensemble pruning problem. To address this drawback, in this work we propose a novel Randomized GEP (RandomGEP) algorithm, also called the Randomized DHCEP (RandomDHCEP) algorithm, that effectively enlarges the search space of the classical DHCEP while maintaining the same level of time complexity with the help of a randomization technique. The randomization of the classical DHCEP algorithm achieves a good tradeoff between the effectiveness and efficiency of ensemble pruning. Moreover, the RandomDHCEP algorithm naturally inherits two intrinsic advantages of randomized algorithms. First, in most cases, its running time or space requirements are smaller than those of well-behaved deterministic ensemble pruning algorithms. Second, it is easy to understand and implement. Experimental results on three benchmark classification datasets verify the practicality and effectiveness of the RandomGEP algorithm.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0605-2
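    The randomization idea can be sketched as follows: where classical directed hill climbing scores every remaining ensemble member at each step, the randomized variant scores only a random sample of them. This is a hedged, generic sketch; the `evaluate` callback and `sample_frac` parameter are assumptions, not the paper's interface.

    ```python
    # Hedged sketch: randomized greedy (forward) ensemble pruning.
    import random

    def randomized_greedy_prune(members, evaluate, target_size, sample_frac=0.5):
        """members: list of trained classifiers; evaluate(subset) -> validation score."""
        pruned = []
        pool = list(members)
        while len(pruned) < target_size and pool:
            k = max(1, int(sample_frac * len(pool)))
            candidates = random.sample(pool, k)       # the randomized step
            best = max(candidates, key=lambda m: evaluate(pruned + [m]))
            pruned.append(best)
            pool.remove(best)
        return pruned
    ```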
  • ABSTRACT: Class imbalance has been reported to compromise the performance of most standard classifiers, such as Naive Bayes, decision trees and neural networks. Various solutions have been explored to solve this problem, mainly by balancing the skewed class distribution or improving existing classification algorithms. However, these methods concentrate on the imbalanced distribution and ignore the discriminative ability of features in class-imbalanced data. From this perspective, a dissimilarity-based method is proposed for classifying imbalanced data. Our proposed method first removes useless and redundant features from the given data set via feature selection; it then extracts representative instances from the reduced data as prototypes; finally, it projects the reduced data into a dissimilarity space by constructing new features, and builds the classification model on the data in the dissimilarity space. Extensive experiments on 24 benchmark class-imbalanced data sets show that, compared with seven other solutions for handling imbalanced data, our proposed method greatly improves the performance of imbalance learning and outperforms the other solutions with all of the given classification algorithms.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0610-5
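    The dissimilarity-space step can be illustrated generically: each instance is re-represented by its distances to a set of prototypes, after which any standard classifier can be trained on the new features. In this hedged sketch the prototypes come from plain k-means, which stands in for the paper's feature selection and prototype extraction.

    ```python
    # Hedged sketch: projecting data into a dissimilarity space.
    from sklearn.cluster import KMeans
    from sklearn.metrics import pairwise_distances

    def to_dissimilarity_space(X_train, X_test, n_prototypes=10, random_state=0):
        km = KMeans(n_clusters=n_prototypes, random_state=random_state).fit(X_train)
        prototypes = km.cluster_centers_
        # New features: Euclidean distance of every instance to every prototype.
        return (pairwise_distances(X_train, prototypes),
                pairwise_distances(X_test, prototypes))
    ```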
  • ABSTRACT: The least squares twin support vector machine (LS-TSVM) obtains two non-parallel hyperplanes by directly solving two systems of linear equations instead of the two quadratic programming problems (QPPs) required by the conventional twin support vector machine (TSVM), which makes LS-TSVM computationally faster than TSVM. However, LS-TSVM ignores the structural information of the data, which may carry vital prior domain knowledge for training a classifier. In this paper, we incorporate prior structural information of the data into LS-TSVM to build a better classifier, called the structural least squares twin support vector machine (S-LSTSVM). Because it incorporates the data distribution information into the model, S-LSTSVM has good generalization performance. Furthermore, by solving two systems of linear equations, S-LSTSVM trains faster than other existing methods based on structural information. Experimental results on twelve benchmark datasets demonstrate that our S-LSTSVM performs well. Finally, we apply it to Alzheimer’s disease diagnosis to further demonstrate the advantage of our algorithm.
    Applied Intelligence 04/2015; 42(3). DOI:10.1007/s10489-014-0611-4
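    For context, a minimal sketch of the baseline LS-TSVM solve that the abstract refers to: each of the two non-parallel hyperplanes is obtained from one regularized linear system, following the commonly cited closed form. The structural-information terms that define S-LSTSVM are not reproduced here.

    ```python
    # Hedged sketch: linear LS-TSVM via two linear systems.
    import numpy as np

    def lstsvm_fit(A, B, c1=1.0, c2=1.0):
        """A: class +1 samples (m1 x n); B: class -1 samples (m2 x n)."""
        E = np.hstack([A, np.ones((A.shape[0], 1))])   # [A  e]
        F = np.hstack([B, np.ones((B.shape[0], 1))])   # [B  e]
        z1 = -np.linalg.solve(F.T @ F + (1.0 / c1) * (E.T @ E),
                              F.T @ np.ones(F.shape[0]))
        z2 = np.linalg.solve(E.T @ E + (1.0 / c2) * (F.T @ F),
                             E.T @ np.ones(E.shape[0]))
        return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])    # (w1, b1), (w2, b2)

    def lstsvm_predict(x, plane1, plane2):
        (w1, b1), (w2, b2) = plane1, plane2
        d1 = abs(w1 @ x + b1) / np.linalg.norm(w1)     # distance to each hyperplane
        d2 = abs(w2 @ x + b2) / np.linalg.norm(w2)
        return +1 if d1 <= d2 else -1
    ```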
  • ABSTRACT: In recent years, outlier detection has attracted considerable attention. The identification of outliers is important for many applications, including those related to intrusion detection, credit card fraud, criminal activity in electronic commerce, medical diagnosis and anti-terrorism. Various outlier detection methods have been proposed for solving problems in different domains. In this paper, a new outlier detection method is proposed from the perspectives of granular computing (GrC) and rough set theory. First, we give a definition of outliers called GR (GrC and rough sets)-based outliers. Second, to detect GR-based outliers, an outlier detection algorithm called ODGrCR is proposed. Third, the effectiveness of ODGrCR is evaluated on a number of real data sets. The experimental results show that our algorithm is effective for outlier detection and, in particular, requires much less running time than other outlier detection methods.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0591-4
  • ABSTRACT: Pattern mining is a data mining technique for discovering significant patterns and has been applied to various tasks such as disease analysis in medical databases and decision making in business. Frequent pattern mining based on item frequencies is the most fundamental topic in the pattern mining field. However, it is difficult to discover important patterns on the basis of frequencies alone, since characteristics of real-world databases, such as the relative importance of items and non-binary transactions, are not reflected. In this regard, utility pattern mining has emerged as a research topic that deals with these characteristics. Meanwhile, in real-world applications, data newly generated by continuous operation, or data from other databases included for integrated analysis, can gradually be added to the current database. To deal efficiently with both existing and new data as one database, it is necessary to reflect the added data in previous analysis results without analyzing the whole database again. In this paper, we propose an algorithm called HUPID-Growth (High Utility Patterns in Incremental Databases Growth) for mining high utility patterns in incremental databases. Moreover, we propose HUPID-Tree (High Utility Patterns in Incremental Databases Tree), a tree structure constructed with a single database scan, and a restructuring method with a novel data structure called TIList (Tail-node Information List) in order to process incremental databases more efficiently. We conduct various experiments comparing against state-of-the-art algorithms. The experimental results show that the proposed algorithm processes real datasets more efficiently than previous ones.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0601-6
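    The notion of utility that the algorithm mines can be stated in a few lines: the utility of an itemset in a transaction is quantity times unit profit, summed over the transactions that contain the whole itemset. The toy database below is illustrative; HUPID-Tree construction and TIList restructuring are not reproduced.

    ```python
    # Hedged sketch: utility of an itemset in a quantitative database.
    profit = {"a": 5, "b": 2, "c": 1}                  # external (unit) utility
    db = [                                             # transaction: item -> quantity
        {"a": 2, "b": 3},
        {"a": 1, "c": 4},
        {"b": 2, "c": 1},
    ]

    def utility(itemset, database, unit_profit):
        total = 0
        for tx in database:
            if all(i in tx for i in itemset):          # itemset fully contained
                total += sum(tx[i] * unit_profit[i] for i in itemset)
        return total

    # u({a, b}) = 2*5 + 3*2 = 16, from the first transaction only.
    print(utility({"a", "b"}, db, profit))
    ```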
  • ABSTRACT: Reinforcement learning is a subject of artificial intelligence, and learning automata are considered among the most powerful tools in this research area. In the evolution of learning automata, the rate of convergence is the primary goal when designing a learning algorithm. In this paper, we propose a deterministic-estimator-based learning automaton (LA) in which the estimate for each action is the upper bound of a confidence interval, rather than the Maximum Likelihood Estimate (MLE) that is widely used in current estimator LA schemes. The philosophy is to assign more confidence to actions that have been selected only a few times, so that the automaton is encouraged to explore uncertain actions. When all the actions have been fully explored, the automaton behaves just like the Generalized Pursuit Algorithm. A refined analysis is presented to show the ε-optimality of the proposed algorithm. Extensive simulations demonstrate that the presented LA is faster than any deterministic-estimator learning automaton reported to date. Moreover, we extend our algorithm to stochastic estimator schemes. The extended LA is shown to achieve a significant performance improvement over the current state-of-the-art learning automata, especially in complex and confusing environments.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0594-1
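    The estimator idea is easy to sketch: score each action by the upper bound of a confidence interval on its reward estimate, so that rarely tried actions retain an exploration bonus. This hedged, UCB-style sketch (the constant `c` is an assumption) omits the pursuit-style probability updates of the actual automaton.

    ```python
    # Hedged sketch: confidence-upper-bound action scoring.
    import math

    def choose_action(successes, pulls, t, c=2.0):
        """successes[i]/pulls[i]: reward stats for action i; t: total trials so far."""
        best, best_ucb = None, -1.0
        for i in range(len(pulls)):
            if pulls[i] == 0:
                return i                               # try untried actions first
            mle = successes[i] / pulls[i]
            bonus = math.sqrt(c * math.log(t) / pulls[i])  # shrinks as pulls grow
            if mle + bonus > best_ucb:
                best, best_ucb = i, mle + bonus
        return best
    ```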
  • ABSTRACT: Negative selection algorithms are important for artificial immune systems to produce detectors. However, traditional negative selection algorithms suffer from high time complexity, large numbers of detectors, and heavily redundant coverage between detectors, which makes detector generation inefficient and limits the application of immune algorithms. Based on the distribution of the self set in the shape space, the algorithm proposed in this paper (IO-RNSA) introduces an immune optimization mechanism and produces candidate detectors hierarchically from far to near, centered on the self samples. First, the self set is taken as the evolution population; after immune optimization operations, first-level detectors are generated that lie far from the self space and cover large non-self regions, so that few detectors cover as much non-self space as possible. The process is then repeated to obtain second-level detectors, which lie close to the first-level detectors and near the self space and cover smaller non-self regions, reducing detection holes. Continuing in this way yields the final qualified detector set. During detector generation, the random generation range of detectors is limited and the overlap among candidate detectors is smaller, which effectively reduces the number of mature detectors and their redundant coverage. Theoretical analysis demonstrates that the time complexity is linear in the size of the self set, which greatly reduces the influence of growing self scales on the time complexity. Experimental results show that IO-RNSA has better time efficiency and generation quality than classical negative selection algorithms, improving the detection rate and decreasing the false alarm rate.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0599-9
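    For reference, the classical real-valued negative selection baseline that IO-RNSA improves on can be sketched in a few lines: random candidates become detectors only if they lie farther than a self radius from every self sample. The hierarchical far-to-near generation and immune optimization operations are not reproduced.

    ```python
    # Hedged sketch: plain real-valued negative selection.
    import numpy as np

    def generate_detectors(self_set, n_detectors, self_radius, dim, rng=None):
        rng = rng or np.random.default_rng(0)
        detectors = []
        while len(detectors) < n_detectors:
            candidate = rng.random(dim)                # random point in [0,1]^dim
            dists = np.linalg.norm(self_set - candidate, axis=1)
            if dists.min() > self_radius:              # reject detectors covering self
                detectors.append(candidate)
        return np.array(detectors)
    ```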
  • ABSTRACT: Biological entities, such as birds with their flocking behavior, ants with their social colonies, fish with their shoaling behavior and honey bees with their complex nest construction, are a great source of inspiration in the optimization and data mining domains. Following this line of thought, we propose the Communicating Ants for Clustering with Backtracking strategy (CACB) algorithm, which is based on a dynamic, adaptive aggregation threshold and a backtracking strategy in which artificial ants are allowed to revisit their previous aggregation decisions. The CACB algorithm is a hierarchical clustering algorithm that generates compact dendrograms, since it allows the aggregation of more than two clusters at a time. Its high performance is shown experimentally on several real benchmark data sets and a content-based image retrieval system.
    Applied Intelligence 03/2015; 42(2):174-194. DOI:10.1007/s10489-014-0573-6
  • ABSTRACT: Coalitional Resource Games (CRGs) are a natural and formal framework in which agents wish to form coalitions to pool their scarce resources in order to achieve a set of goals that satisfy all members of a coalition. Thus far, many computational questions surrounding CRGs have been studied, but to our knowledge, a number of natural decision problems in CRGs remain unsolved. In this paper we therefore investigate the possibility of using binary particle swarm optimization (BPSO) as a stochastic search process for finding a Maximal Successful Coalition (MAXSC) in CRGs, which is a DP-complete problem. For this purpose, we develop a one-dimensional binary encoding scheme, propose encoding-repair strategies to ensure that each encoding in every iteration is approximately valid and logically consistent, and discuss some key properties of the repair strategies. To evaluate the effectiveness of our algorithms, we compare them with the only other algorithm available in the literature for identifying MAXSC (due to Shrot, Aumann, and Kraus). The results show that our algorithms are significantly faster, especially on large-scale datasets.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0589-y
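    A minimal sketch of one binary PSO iteration, the search engine named above: velocities are updated as in standard PSO, and each bit of a particle's position is resampled through a sigmoid transfer function. The coalition-encoding repair strategies are specific to the paper and omitted.

    ```python
    # Hedged sketch: one binary PSO update (Kennedy-Eberhart style).
    import numpy as np

    def bpso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
        rng = rng or np.random.default_rng()
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        prob = 1.0 / (1.0 + np.exp(-v))                # sigmoid transfer function
        x = (rng.random(x.shape) < prob).astype(int)   # resample each bit
        return x, v
    ```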
  • ABSTRACT: The economic dispatch (ED) problem exhibits highly nonlinear characteristics, such as prohibited operating zones, ramp-rate limits, and non-smooth cost functions. Because of these characteristics, classical methods find it hard to reach the expected solution. To overcome this difficulty, this paper proposes an improved firefly algorithm (FA) for the ED problem. The improved FA employs two strategies to enhance its search ability and avoid the premature convergence from which the standard FA usually suffers. The first adaptively adjusts the light absorption coefficient based on the distance information among the fireflies; the other is a decreasing schedule for the randomization parameter. Additionally, a crossover operation is employed to create potential solutions with high diversity. These designs enhance the search ability and performance of FA, as demonstrated on six benchmark functions. To validate the proposed algorithm, we also use three different systems to demonstrate its efficiency and feasibility in solving the ED problem. The experimental results show that the proposed FA method achieves higher-quality solutions to ED problems.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0593-2
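    For context, the standard firefly movement step that the improvements build on can be sketched as below; parameter names follow common FA notation and are assumptions here. The paper's adaptive absorption coefficient, decreasing randomization and crossover are not reproduced.

    ```python
    # Hedged sketch: one standard firefly movement step.
    import numpy as np

    def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
        """Move firefly xi toward a brighter firefly xj."""
        rng = rng or np.random.default_rng()
        r2 = np.sum((xi - xj) ** 2)
        beta = beta0 * np.exp(-gamma * r2)             # attractiveness decays with distance
        noise = alpha * (rng.random(xi.shape) - 0.5)   # randomization term
        return xi + beta * (xj - xi) + noise
    ```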
  • ABSTRACT: In this paper, we present a multiobjective credibilistic model for commercial off-the-shelf (COTS) product selection in a modular software system under a fuzzy environment. To treat imprecise parameters, we use a credibility-based approach that combines the expected value and chance-constrained programming techniques. The model simultaneously minimizes the total cost, size, and execution time of the modular software system subject to many realistic constraints, including system reliability, delivery time, and compatibility among the available COTS products. We use a two-phase interactive approach as the solution methodology. An empirical study demonstrates the applicability of the proposed model and the solution approach in real-world applications of COTS selection. Further, a thorough performance analysis and comparison is carried out to establish the superiority of the proposed methodology over existing fuzzy programming approaches for the COTS product selection problem.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0602-5
  • ABSTRACT: The intuitionistic fuzzy set, as a generalization of Zadeh’s fuzzy set, can express and process uncertainty much better by introducing a hesitation degree. Similarity measures between intuitionistic fuzzy sets (IFSs) indicate the degree of similarity between the information carried by IFSs. Although several similarity measures for intuitionistic fuzzy sets have been proposed in previous studies, some of them fail to satisfy the axioms of similarity or produce counter-intuitive cases. In this paper, we first review several widely used similarity measures and then propose new ones. Capturing the consistency of two IFSs, the proposed similarity measure is defined by operating directly on the membership function, non-membership function, hesitation function and the upper bound of the membership function of the two IFSs, rather than on a distance measure or on the relationship between the membership and non-membership functions. We prove that the proposed similarity measures satisfy the axiomatic definition of similarity measures. A comparison between previous similarity measures and the proposed one indicates that the proposed measure does not produce any counter-intuitive cases. Moreover, it is demonstrated that the proposed similarity measure is capable of discriminating differences between patterns.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0596-z
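    For orientation, a widely used baseline in this literature is the normalized-Hamming similarity between two IFSs A and B over X = {x_1, ..., x_n}, where μ and ν denote the membership and non-membership degrees (hesitation is π = 1 - μ - ν). The paper's proposed measure differs by also operating on the hesitation function and the upper bound of the membership function.

    ```latex
    % Baseline (normalized-Hamming) similarity between IFSs A and B;
    % shown for orientation only, not the paper's proposed measure.
    S(A,B) = 1 - \frac{1}{2n} \sum_{i=1}^{n}
      \bigl( \lvert \mu_A(x_i) - \mu_B(x_i) \rvert
           + \lvert \nu_A(x_i) - \nu_B(x_i) \rvert \bigr)
    ```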
  • ABSTRACT: This paper presents an integrated framework for a suite of dynamic path planning algorithms for real-time, concurrent movement of multiple robotic carts across a parking garage floor layout without driving lanes. Planning and search algorithms were formulated to address the major aspects of automating the storage and retrieval of cars loaded onto robotic mobile carts. Path planning algorithms including A*, D* Lite and Uniform Cost Search were implemented and integrated within a unified framework to guide the robotic carts from a starting point to their destination during the storage and retrieval processes. For circumstances in which there is no clear path for a car being stored or retrieved, a procedure was developed to unblock the obstacles in the path. A policy that minimizes obstructions was defined for assigning the parking spots on a given floor to arriving cars. Performance evaluation of the overall proposed system was carried out using a multithreaded software application. A variety of rectangular parking lot layouts, including those with 20 × 20, 20 × 40, 30 × 40, and 40 × 40 parking spaces, were considered in the simulation study. Performance metrics of path length, planning or search time, and memory space requirements were monitored. Simulation results demonstrate that the proposed design produces near-optimal paths and is able to handle tens of concurrent requests in real time and in the presence of immobilized carts.
    Applied Intelligence 03/2015; 42(2). DOI:10.1007/s10489-014-0598-x
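    One of the named planners, A*, can be sketched on a grid model of a parking floor where occupied cells are blocked. This is a generic, hedged sketch; the D* Lite replanning and the obstacle-unblocking procedure are not reproduced.

    ```python
    # Hedged sketch: A* on a 4-connected grid with a Manhattan heuristic.
    import heapq

    def a_star(grid, start, goal):
        """grid[r][c] == 0 means free; returns a list of cells or None."""
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier, came, cost = [(h(start), start)], {start: None}, {start: 0}
        while frontier:
            _, cur = heapq.heappop(frontier)
            if cur == goal:
                path = []
                while cur is not None:                 # walk back to the start
                    path.append(cur)
                    cur = came[cur]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                        and grid[nxt[0]][nxt[1]] == 0
                        and cost[cur] + 1 < cost.get(nxt, float("inf"))):
                    cost[nxt] = cost[cur] + 1
                    came[nxt] = cur
                    heapq.heappush(frontier, (cost[nxt] + h(nxt), nxt))
        return None
    ```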
  • ABSTRACT: We propose a hybrid evolutionary algorithm with guided mutation (EA/G) to solve the minimum weight dominating set problem (MWDS), which is NP-hard not only for general graphs but also for unit disk graphs (UDGs). MWDS finds practical applications in diverse domains such as clustering in wireless networks, intrusion detection in ad hoc networks, multi-document summarization in information retrieval, and query selection in web databases. EA/G is a recently proposed evolutionary algorithm that tries to overcome the shortcomings of both genetic algorithms (GAs) and estimation of distribution algorithms (EDAs), and can be considered a cross between the two. The solution obtained by the EA/G algorithm is further refined by an improvement operator. We have compared the performance of our hybrid approach with state-of-the-art approaches on general graphs as well as UDGs. Computational results show the superiority of our approach in terms of both solution quality and execution time.
    Applied Intelligence 02/2015; DOI:10.1007/s10489-015-0654-1
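    The problem itself admits a classical greedy heuristic that is useful for orientation: repeatedly pick the vertex that dominates the most not-yet-dominated vertices per unit weight. This hedged sketch is not the paper's EA/G or its improvement operator.

    ```python
    # Hedged sketch: greedy heuristic for minimum weight dominating set.
    def greedy_mwds(adj, weight):
        """adj: {v: set(neighbors)}; weight: {v: positive weight}."""
        undominated = set(adj)
        chosen = set()
        while undominated:
            def gain(v):
                # Newly dominated vertices (v plus neighbors) per unit weight.
                return len((adj[v] | {v}) & undominated) / weight[v]
            best = max((v for v in adj if v not in chosen), key=gain)
            chosen.add(best)
            undominated -= adj[best] | {best}
        return chosen
    ```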
  • ABSTRACT: This paper presents CCMM (acronym for image Clique Matching), a new deterministic algorithm for the visual feature matching problem when images have low distortion. CCMM is multi-hypothesis: for each feature to be matched in the original image, it builds an association graph that captures pairwise compatibility with a subset of candidate features in the target image, and then solves for optimum joint compatibility by searching for a maximum clique. CCMM is shown to be more robust than traditional RANSAC-based single-hypothesis approaches. Moreover, the order of the graph grows linearly with the number of hypotheses, which keeps the computational requirements bounded for real-life applications such as UAV image mosaicing or digital terrain model extraction. The paper also includes extensive empirical validation.
    Applied Intelligence 02/2015; DOI:10.1007/s10489-015-0646-1
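    The joint-compatibility search can be sketched generically with networkx: nodes are candidate matches, edges join pairwise-compatible matches, and a maximum clique gives the largest jointly consistent set. The `compatible` test below is an assumed placeholder for CCMM's geometric compatibility criterion.

    ```python
    # Hedged sketch: association graph + maximum clique for joint matching.
    import networkx as nx

    def best_joint_matches(candidates, compatible):
        """candidates: list of (feature_src, feature_dst) match hypotheses;
        compatible(m1, m2) -> bool: pairwise geometric consistency test."""
        g = nx.Graph()
        g.add_nodes_from(range(len(candidates)))
        for i in range(len(candidates)):
            for j in range(i + 1, len(candidates)):
                if compatible(candidates[i], candidates[j]):
                    g.add_edge(i, j)
        # Largest maximal clique = largest jointly compatible match set.
        clique = max(nx.find_cliques(g), key=len)
        return [candidates[i] for i in clique]
    ```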