Applied Intelligence Journal Impact Factor & Information

Publisher: Springer Science+Business Media, Springer-Verlag

Journal description

The international journal Applied Intelligence provides a medium for exchanging scientific research and technological achievements accomplished by the international community. The focus of the work is on research in artificial intelligence and neural networks. The journal addresses solutions to real-life manufacturing, defense, management, government, and industrial problems that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance. The emphasis of the reported work is on new and original research and technological developments rather than reports on the application of existing technology to different sets of data. Earlier work in these fields was limited in application and solved simplified, structured problems that rarely occur in real-life situations; only recently have researchers started addressing the real, complex issues behind difficult problems. The journal welcomes such developments and functions as a catalyst in disseminating the original research and technological achievements of the international community in these areas.

Current impact factor: 1.85

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2012 Impact Factor 1.853
2011 Impact Factor 0.849
2010 Impact Factor 0.881
2009 Impact Factor 0.988
2008 Impact Factor 0.775
2007 Impact Factor 0.5
2006 Impact Factor 0.329
2005 Impact Factor 0.569
2004 Impact Factor 0.477
2003 Impact Factor 0.776
2002 Impact Factor 0.686
2001 Impact Factor 0.493
2000 Impact Factor 0.42
1999 Impact Factor 0.291
1998 Impact Factor 0.326
1997 Impact Factor 0.268
1996 Impact Factor 0.139
1995 Impact Factor 0.05

Additional details

5-year impact 1.94
Cited half-life 5.90
Immediacy index 0.19
Eigenfactor 0.00
Article influence 0.30
Website Applied Intelligence website
Other titles Applied intelligence (Dordrecht, Netherlands)
ISSN 0924-669X
OCLC 25272842
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Springer Verlag

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on pre-print servers such as
    • Author's post-print on author's personal website immediately
    • Author's post-print on any open access repository after 12 months after publication
    • Publisher's version/PDF cannot be used
    • Published source must be acknowledged
    • Must link to publisher version
    • Set phrase to accompany link to published version (see policy)
    • Articles in some journals can be made Open Access on payment of additional charge
  • Classification
    • green

Publications in this journal

  • Jianwei Zheng · Hong Qiu · Wanliang Wang · Chenchen Kong · Hailun Wang
    Applied Intelligence 09/2015; DOI:10.1007/s10489-015-0709-3
  • ABSTRACT: DNA reassembling is an NP-hard problem (Brun, Theor Comput Sci 395:31–46, 2008; Medvedev et al 2007; Ma and Lombardi 2008). The present article presents a locally guided global learning system for the genome reassembling problem. We use a reference DNA sequence that is 99 % similar to an unknown DNA sequence; two different sequences from the same organism generally have around 99 % similarity (Wei et al 2007). We took DNA sequences from the NCBI website (http://www.ncbi.nlm.nih.gov), then simulated cloning each sequence and shearing the clones into a number of short reads. Our algorithm introduces a new concept in DNA reassembling using Ant Colony Optimization, in which the pheromone concentration is proportional to the alignment score of assembled DNA fragments against known reference sequences from the same organism. Unlike local overlapping, we use the local alignment score of short reads against a known local reference region as the heuristic information. The results show that our algorithm reassembles on par with the state of the art. DNA reassembling may require massive parallel computation and huge memory (Kurniawan et al 2008) because mammalian DNA sequences are of size ~10⁹ bp (Miller et al, Genomics 95:315–327, 2010; Blazewicz et al, Comput Biol Chem 33:224–230, 2009; Butler et al, Genome Res 18:810–820, 2008; Joshi et al 2011; Stupar et al, Arch Oncol 19:3–4, 2011; Quail et al, BMC Genomics 13:1471–2164, 2012), and ACO is inherently concurrent in nature (Dorigo and Stutzle 2004). For lack of appropriate computational resources, we confined ourselves to sequences of length up to ~10⁵ bp. We considered 22 sequences from different organisms, including the Homo sapiens BRCA1 gene (127,429 bp). For large sequences, we applied hierarchical BAC-by-BAC sequencing (Fig. 2) (Myers, Comput Sci Eng 1:33–43, 1999) to stitch the individual segments together and retrieve the original DNA sequence.
    Applied Intelligence 09/2015; 43(2). DOI:10.1007/s10489-015-0650-5
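    The pheromone idea in the abstract above (deposits proportional to a candidate assembly's alignment score against a known reference) can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: the function names, the ungapped scoring scheme, and all parameters are hypothetical stand-ins.

    ```python
    import random

    def alignment_score(read, reference):
        """Toy alignment heuristic (hypothetical): best ungapped placement
        of `read` inside `reference`, scored by the number of matching bases."""
        best = 0
        for i in range(len(reference) - len(read) + 1):
            window = reference[i:i + len(read)]
            best = max(best, sum(a == b for a, b in zip(read, window)))
        return best

    def aco_order_reads(reads, reference, iterations=50, evaporation=0.5, seed=0):
        """Order short reads ACO-style: selection probability follows
        pheromone times a heuristic, and pheromone deposits are proportional
        to the assembled candidate's alignment score against the reference."""
        rng = random.Random(seed)
        n = len(reads)
        pheromone = [1.0] * n
        heuristic = [1 + alignment_score(r, reference) for r in reads]
        best_order, best_score = list(range(n)), -1
        for _ in range(iterations):
            remaining = list(range(n))
            order = []
            while remaining:
                weights = [pheromone[i] * heuristic[i] for i in remaining]
                pick = rng.choices(remaining, weights=weights)[0]
                remaining.remove(pick)
                order.append(pick)
            assembled = "".join(reads[i] for i in order)
            score = alignment_score(assembled[:len(reference)], reference)
            if score > best_score:
                best_order, best_score = order, score
            for rank, i in enumerate(order):
                # evaporate, then deposit more on reads chosen earlier
                pheromone[i] = (1 - evaporation) * pheromone[i] + score / (1 + rank)
        return [reads[i] for i in best_order]
    ```

    A real assembler would use proper local alignment (e.g. Smith–Waterman) and overlap graphs; the sketch only shows how an alignment score can drive the pheromone update instead of plain read overlap.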
  • ABSTRACT: Geo-demographic analysis is an essential part of a geographical information system (GIS) for predicting people’s behavior based on statistical models and their residential location. Fuzzy Geographically Weighted Clustering (FGWC) is one of the most efficient algorithms in geo-demographic analysis. Despite its effectiveness, FGWC is sensitive to initialization: the random selection of cluster centers can easily make the iterative process fall into a local optimum. Artificial Bee Colony (ABC), one of the most popular meta-heuristic algorithms, can be regarded as a tool for achieving globally optimal solutions. This research proposes a novel geo-demographic analysis algorithm that integrates FGWC into the optimization scheme of ABC to improve geo-demographic clustering accuracy. Experimental results on various datasets show that the clustering quality of the proposed algorithm, called FGWC-ABC, is better than those of other relevant methods. The proposed algorithm is also applied to a decision-making application for analyzing crime behavior in the population using the US communities and crime dataset. It provides fuzzy rules that determine the violent crime rate, expressed in linguistic labels, from socioeconomic variables. These results are significant for predicting future US violent crime rates and for facilitating appropriate preventive decisions.
    Applied Intelligence 09/2015; 43(2):1-22. DOI:10.1007/s10489-015-0705-7
  • ABSTRACT: Twin support vector machine (TWSVM) is regarded as a milestone in the development of powerful SVMs. However, there are some inconsistencies with TWSVM that can lead to many reasonable modifications with different outputs. In order to obtain better performance, we propose a novel combined outputs framework that combines rational outputs. Based on this framework, an optimal output model, called the linearly combined twin bounded support vector machine (LCTBSVM), is presented. Our LCTBSVM is based on the outputs of several TWSVMs, and produces the optimal output by solving an optimization problem. Furthermore, two heuristic algorithms are suggested in order to solve the optimization problem. Our comprehensive experiments show the superior generalization performance of our LCTBSVM compared with SVM, PSVM, GEPSVM, and some current TWSVMs, thus confirming the value of our theoretical analysis.
    Applied Intelligence 09/2015; 43(2). DOI:10.1007/s10489-015-0655-0
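    The core idea above, producing one output as a linear combination of several classifiers' real-valued outputs, with the weights chosen by an optimization step, can be sketched as follows. This is a toy stand-in, not the paper's LCTBSVM: the grid search replaces the paper's optimization problem, and all names and values are hypothetical.

    ```python
    import itertools

    def combined_output(outputs, weights):
        """Linearly combine real-valued classifier outputs (one per model)."""
        return sum(w * o for w, o in zip(weights, outputs))

    def fit_weights(model_outputs, labels, grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
        """Pick combination weights by exhaustive grid search on a
        validation set (a toy stand-in for the paper's optimization).
        model_outputs[m][i] is model m's score on sample i; labels are +/-1."""
        n_models = len(model_outputs)
        best_w, best_acc = None, -1.0
        for w in itertools.product(grid, repeat=n_models):
            if sum(w) == 0:
                continue  # skip the degenerate all-zero combination
            correct = 0
            for i, y in enumerate(labels):
                score = combined_output([m[i] for m in model_outputs], w)
                correct += (score >= 0) == (y > 0)
            acc = correct / len(labels)
            if acc > best_acc:
                best_w, best_acc = w, acc
        return best_w, best_acc
    ```

    With a perfect base model in the pool, the search finds a weight vector that reproduces its decisions; in general the combined output can beat every individual model.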
  • ABSTRACT: Credit scoring, also called credit risk assessment, has attracted the attention of many financial institutions, and much research has been carried out. In this work, a new Extreme Learning Machines (ELMs) Ensemble Selection algorithm based on the Greedy Randomized Adaptive Search Procedure (GRASP), referred to as ELMsGraspEnS, is proposed for credit risk assessment of enterprises. On the one hand, the ELM is used as the base learner for ELMsGraspEnS owing to its significant advantages, including an extremely fast learning speed, good generalization performance, and avoidance of issues like local minima and overfitting. On the other hand, to ameliorate the local-optima problem faced by classical greedy ensemble selection methods, in our previous work we incorporated GRASP, a meta-heuristic multi-start algorithm for combinatorial optimization problems, into ensemble selection and proposed a GRASP-based ensemble selection algorithm (GraspEnS). The GraspEnS algorithm has three advantages. (1) By incorporating a random factor, a solution is often able to escape local optima. (2) GraspEnS realizes a multi-start search to some degree. (3) A better-performing subensemble can usually be found with GraspEnS. Moreover, little research on applying ensemble selection approaches to credit scoring has been reported in the literature. In this paper, we integrate the ELM with GraspEnS to form ELMsGraspEnS, which naturally inherits and effectively combines the advantages of both. The experimental results of applying ELMsGraspEnS to three benchmark real-world credit datasets show that in most cases ELMsGraspEnS significantly improves the performance of credit risk assessment compared with several state-of-the-art algorithms. Thus, ELMsGraspEnS simultaneously exhibits relatively high efficiency and effectiveness.
    Applied Intelligence 09/2015; 43(2). DOI:10.1007/s10489-015-0653-2
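    GRASP's two ingredients named above, multi-start search and a randomized greedy step, can be sketched for ensemble selection: at each step the next ensemble member is drawn at random from a restricted candidate list (RCL) of the best-scoring additions. A minimal sketch under assumed names; predictions are +/-1 votes, and the RCL fraction `alpha`, `starts`, and all identifiers are hypothetical, not the paper's ELMsGraspEnS.

    ```python
    import random

    def ensemble_accuracy(subset, preds, labels):
        """Majority-vote accuracy of the chosen subensemble (votes are +/-1)."""
        correct = 0
        for i, y in enumerate(labels):
            votes = sum(preds[m][i] for m in subset)
            correct += (votes > 0) == (y > 0)
        return correct / len(labels)

    def grasp_select(preds, labels, size, alpha=0.3, starts=20, seed=0):
        """GRASP-style ensemble selection: multi-start greedy construction
        where each step picks at random from the RCL of top additions."""
        rng = random.Random(seed)
        best_subset, best_acc = None, -1.0
        for _ in range(starts):
            subset = []
            while len(subset) < size:
                candidates = [m for m in range(len(preds)) if m not in subset]
                scored = sorted(
                    candidates,
                    key=lambda m: ensemble_accuracy(subset + [m], preds, labels),
                    reverse=True)
                rcl = scored[:max(1, int(alpha * len(scored)))]
                subset.append(rng.choice(rcl))  # the random factor
            acc = ensemble_accuracy(subset, preds, labels)
            if acc > best_acc:
                best_subset, best_acc = sorted(subset), acc
        return best_subset, best_acc
    ```

    The random RCL draw is exactly what lets a start escape the purely greedy path; restarting several times then approximates a multi-start search.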
  • ABSTRACT: Detection and analysis of activities of daily living (ADLs) are important in activity tracking, security monitoring, and life support in elderly healthcare. Recently, many research projects have employed wearable devices to detect and analyze ADLs. However, most wearable devices obstruct natural movement of the body, and the analysis of activities lacks adequate consideration of various real attributes. To tackle these issues, we propose a two-fold solution. First, for unobtrusive detection of ADLs, only one small device is worn on a finger to sense and collect activity information, and identifiable features are extracted from the finger-related signals to identify various activities. Second, to reflect realistic life situations, a weighted sequence alignment approach is proposed to analyze an activity sequence detected by the device, as well as the attributes of each activity in the sequence. The system is validated using 10 daily activities and 3 activity sequences. Results show 96.8 % accuracy in recognizing activities and demonstrate the effectiveness of the sequence analysis.
    Applied Intelligence 09/2015; 43(2). DOI:10.1007/s10489-015-0649-y
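    A weighted sequence alignment of the kind mentioned above can be sketched with the classic Needleman–Wunsch dynamic program, where matching an activity earns a per-activity weight. This is a generic illustration of the technique, not the paper's scoring scheme; the weight table, gap cost, and function name are hypothetical.

    ```python
    def weighted_alignment(seq_a, seq_b, weights, gap=1.0):
        """Weighted global alignment score between two activity sequences.
        Matching activity `x` earns weights[x]; mismatches and gaps cost
        `gap`. Standard dynamic-programming recurrence over a score table."""
        n, m = len(seq_a), len(seq_b)
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):          # aligning a prefix against nothing
            dp[i][0] = dp[i - 1][0] - gap
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] - gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if seq_a[i - 1] == seq_b[j - 1]:
                    diag = dp[i - 1][j - 1] + weights[seq_a[i - 1]]
                else:
                    diag = dp[i - 1][j - 1] - gap
                dp[i][j] = max(diag, dp[i - 1][j] - gap, dp[i][j - 1] - gap)
        return dp[n][m]
    ```

    Weighting matches per activity lets critical activities (say, taking medication) dominate the score, which is the point of a weighted rather than uniform alignment.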
  • ABSTRACT: This work investigates a possibility-degree-based micro immune optimization approach to seek the optimal solution of nonlinear interval number programming with constraints. The approach is designed under the guidance of theoretical results acquired in the current work, relying upon interval arithmetic rules, interval order relations, and immune theory. It involves two phases of optimization. The first phase, based on a new possibility degree approach, searches for efficient solutions of the natural interval extension optimization. It executes five modules - constraint bound handling, population division, dynamic proliferation, mutation, and selection - with the help of a varying threshold on the interval bound. The second phase collects the optimal solution(s) from these efficient solutions after optimizing the bounds of their objective intervals, in terms of the theoretical results. Numerical experiments illustrate that the approach is highly efficient, performs well against a recent nested genetic algorithm, and is of potential use for complex interval number programming.
    Applied Intelligence 09/2015; 43(2). DOI:10.1007/s10489-014-0639-5
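    A possibility degree makes interval objectives comparable: it measures how plausible it is that one interval is smaller than another. The sketch below uses one common formulation from the interval-programming literature, not necessarily the paper's own variant; the threshold-based ranking rule is likewise an assumed illustration.

    ```python
    def possibility_degree(a, b):
        """Possibility degree P(a <= b) for intervals a=(a1,a2), b=(b1,b2),
        using a common formulation: clip (b2 - a1) / (width(a) + width(b))
        to [0, 1]. Returns 1 when a lies entirely below b, 0 when above."""
        a1, a2 = a
        b1, b2 = b
        width = (a2 - a1) + (b2 - b1)
        if width == 0:  # both degenerate: compare the point values
            return 1.0 if a1 <= b1 else 0.0
        return min(1.0, max(0.0, (b2 - a1) / width))

    def interval_better(a, b, threshold=0.5):
        """Rank interval objectives for minimization: prefer `a` when the
        possibility that a <= b exceeds the threshold."""
        return possibility_degree(a, b) > threshold
    ```

    Such a degree turns the partial order on intervals into a usable selection rule, which is what lets an evolutionary loop compare candidate solutions whose objectives are intervals rather than numbers.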
  • ABSTRACT: With its unique migration and mutation operators, Biogeography-Based Optimization (BBO), which simulates the migration of species in natural biogeography, differs from existing evolutionary algorithms, but it has shortcomings such as poor convergence precision and slow convergence speed when applied to complex optimization problems. Therefore, we put forward a Cooperative Coevolutionary Biogeography-Based Optimizer (CBBO) in this paper. In CBBO, the whole population is first divided into multiple sub-populations, and then each sub-population is evolved separately with an improved BBO. The fitness evaluation of the habitats of a sub-population is conducted by constructing context vectors with selected habitats from the other sub-populations. Our CBBO tests are based on 13 benchmark functions and are compared with several other evolutionary algorithms. Experimental results demonstrate that CBBO achieves better results than the other evolutionary algorithms on most of the benchmark functions.
    Applied Intelligence 07/2015; 43(1). DOI:10.1007/s10489-014-0627-9
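    The context-vector evaluation described above can be sketched generically: each sub-population owns one block of variables, and a candidate block is scored by splicing it into a full vector built from the other blocks' current bests. This is a bare cooperative-coevolution skeleton with a crude Gaussian mutation standing in for BBO's migration/mutation operators; all names and parameters are hypothetical.

    ```python
    import random

    def sphere(x):
        """Benchmark objective to minimize: sum of squares."""
        return sum(v * v for v in x)

    def cooperative_coevolve(dim=6, groups=3, pop=10, gens=40, seed=0):
        """Cooperative-coevolution skeleton: evaluate each candidate block
        inside a context vector assembled from the other blocks' bests."""
        rng = random.Random(seed)
        size = dim // groups
        # subpops[g] holds candidate blocks for dimensions [g*size, (g+1)*size)
        subpops = [[[rng.uniform(-5, 5) for _ in range(size)] for _ in range(pop)]
                   for _ in range(groups)]
        best_blocks = [sp[0][:] for sp in subpops]  # initial context vector
        for _ in range(gens):
            for g in range(groups):
                for cand in subpops[g]:
                    context = [b[:] for b in best_blocks]
                    context[g] = cand[:]       # splice the candidate in
                    full = [v for block in context for v in block]
                    if sphere(full) < sphere([v for b in best_blocks for v in b]):
                        best_blocks[g] = cand[:]
                for cand in subpops[g]:
                    # crude mutation standing in for BBO migration/mutation
                    cand[rng.randrange(size)] += rng.gauss(0, 0.5)
        return [v for b in best_blocks for v in b]
    ```

    Because a block is only accepted when the spliced full vector strictly improves, the tracked best never gets worse as generations pass.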
  • ABSTRACT: Erasable itemset mining, first proposed in 2009, is an interesting problem in supply chain optimization. The dPidset structure, a very effective structure for mining erasable itemsets, was introduced in 2014; it outperforms previous structures such as PID_List and NC_Set, and algorithms based on it can effectively mine erasable itemsets. However, for very dense datasets, the mining time and memory usage are large. Therefore, this paper proposes an effective approach that uses the subsume concept for mining erasable itemsets in very dense datasets. The subsume concept helps determine the information of a large number of erasable itemsets early, without the usual computational cost. The erasable itemsets for very dense datasets (EIFDD) algorithm, which uses the subsume concept and the dPidset structure, is then proposed. An illustrative example is given to demonstrate the proposed algorithm. Finally, an experiment is conducted to show the effectiveness of EIFDD.
    Applied Intelligence 07/2015; 43(1). DOI:10.1007/s10489-014-0644-8
  • ABSTRACT: The problem of retrieving time series similar to a specified query pattern has recently been addressed within the case-based reasoning (CBR) literature. Providing a flexible and efficient way of dealing with this issue is of paramount importance in many domains (e.g., medicine), where the evolution of specific parameters is collected in the form of time series. In the past, we developed a framework for retrieving time series by applying temporal abstractions. With respect to more classical (mathematical) approaches, our framework provides significant advantages: multi-level abstraction mechanisms and proper indexing techniques allow for flexible query issuing and for efficient, interactive query answering. In this paper, we present an extension of the framework that supports sub-series matching as well. Sub-series retrieval may be crucial when the whole time series evolution is not of interest and the critical patterns to be searched for are only “local”. Moreover, sometimes the relative order of patterns, but not their precise location in time, may be known. Finally, an interactive search at different abstraction levels may be required by the decision maker. Our extended framework (currently being applied in haemodialysis, but domain independent) deals with all these issues.
    Applied Intelligence 07/2015; 43(1). DOI:10.1007/s10489-014-0628-8
  • ABSTRACT: The shuffled frog leaping algorithm (SFLA) has shown good performance on many optimization problems. This paper proposes a Mnemonic Shuffled Frog Leaping Algorithm with Cooperation and Mutation (MSFLACM), inspired by the competition and cooperation mechanisms of different evolutionary computing methods such as PSO and GA. The algorithm combines shuffled frog leaping with an improved local search strategy, cooperation, and mutation to improve accuracy, and it exhibits strong robustness and high accuracy for high-dimensional continuous function optimization. A modified shuffled frog leaping algorithm (MSFLA) is investigated that improves the leaping rule by incorporating the velocity-updating equation of PSO. To further improve accuracy, if the worst position in a memeplex cannot reach a better position during the local exploration procedure of the MSFLA, cooperation and mutation are introduced, which prevent premature convergence to a local optimum and update the worst position in the memeplex. Comparative experiments on several widely used benchmark functions show that the performance of the improved variant is more promising than that of the recently developed SFLA for finding the optimum of unimodal and multimodal continuous functions.
    Applied Intelligence 07/2015; 43(1). DOI:10.1007/s10489-014-0642-x
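    The PSO-flavored leaping rule described above, the worst frog keeping a velocity pulled toward the best frog, can be sketched as follows. This is an assumed illustration of the general idea, not the paper's exact update equation; `w` and `c` are hypothetical inertia and attraction coefficients, and the mutation fallback is likewise a stand-in for the paper's cooperation/mutation step.

    ```python
    import random

    def leap(worst, best, velocity, w=0.7, c=1.5, rng=None):
        """PSO-style leap: update a velocity toward the best frog
        (inertia term plus random attraction), then move by it."""
        rng = rng or random.Random(0)
        new_velocity = [w * v + c * rng.random() * (b - x)
                        for v, b, x in zip(velocity, best, worst)]
        new_position = [x + v for x, v in zip(worst, new_velocity)]
        return new_position, new_velocity

    def leap_or_mutate(worst, best, velocity, fitness, rng=None):
        """If the leap fails to improve fitness (minimization), fall back
        to a random mutation of the worst position."""
        rng = rng or random.Random(0)
        pos, vel = leap(worst, best, velocity, rng=rng)
        if fitness(pos) < fitness(worst):
            return pos, vel
        mutated = [x + rng.gauss(0, 1.0) for x in worst]
        return mutated, velocity
    ```

    The fallback mutation is what keeps a stuck memeplex from repeatedly leaping to the same dominated region.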
  • ABSTRACT: The task assignment problem is an important topic in multi-agent systems research. Distributed real-time systems must accommodate a number of communication tasks, and the difficulty in building such systems lies in task assignment (i.e., where to place the tasks). This paper presents a novel approach based on the artificial bee colony (ABC) algorithm to address dynamic task assignment problems in multi-agent cooperative systems. The initial bee population (solution) is constructed by the initial task assignment algorithm through a greedy heuristic. Each bee is formed by the number of tasks and agents, and the number of employed bees is equal to the number of onlooker bees. After being generated, a solution is improved through a local search process called greedy selection, implemented by the onlooker and employed bees: if the fitness value of the candidate source is greater than that of the current source, the bee forgets the current source and memorizes the new candidate source. Experiments are performed with two test suites (TIGs representing real-life tree and Fork–Join problems, and randomly generated TIGs). Results are compared with other nature-inspired approaches, such as genetic and particle swarm optimization algorithms, in terms of CPU time and communication cost. The findings show that ABC improves both criteria significantly with respect to the other approaches.
    Applied Intelligence 07/2015; 43(1). DOI:10.1007/s10489-014-0640-z
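    The greedy-selection step described above can be sketched on a toy task-assignment instance: each bee holds a task-to-agent assignment, a neighbor solution reassigns one random task, and the bee keeps whichever of the two has lower communication cost. A minimal sketch, not the paper's ABC with its employed/onlooker split or TIG inputs; the cost model and all names are hypothetical.

    ```python
    import random

    def comm_cost(assign, comm):
        """Communication cost: each task pair placed on different agents
        contributes its edge weight."""
        return sum(w for (t1, t2), w in comm.items() if assign[t1] != assign[t2])

    def abc_assign(n_tasks, n_agents, comm, bees=10, cycles=100, seed=0):
        """ABC-style local search: neighbor solutions reassign one random
        task; greedy selection keeps the better of current and candidate."""
        rng = random.Random(seed)
        swarm = [[rng.randrange(n_agents) for _ in range(n_tasks)]
                 for _ in range(bees)]
        for _ in range(cycles):
            for b in range(bees):
                cand = swarm[b][:]
                cand[rng.randrange(n_tasks)] = rng.randrange(n_agents)
                if comm_cost(cand, comm) < comm_cost(swarm[b], comm):
                    swarm[b] = cand  # greedy selection: memorize the better source
        return min(swarm, key=lambda a: comm_cost(a, comm))
    ```

    On an instance where tasks 0–1 and 2–3 communicate heavily, the search co-locates each pair, driving the cost to zero.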