A.I.Lab
Institution: Tomas Bata University in Zlín
About the lab
Our lab is focused on the development and deployment of novel artificial intelligence algorithms in the following areas:
- Evolutionary computation
- Swarm intelligence
- Artificial neural networks
- Chaos theory
- Complex systems
- Data mining
- Image processing
- Pattern recognition
- Dynamic data prediction
Featured research
Sequential pattern mining in general, and one particular form, clickstream pattern mining, are data mining topics that have recently attracted attention due to their potential for discovering useful patterns. However, to offer them as real-world service applications, one issue must be addressed: traditional algorithms treat databases as static, while in practice databases grow over time, and updates invalidate parts of the previous results, forcing the algorithms to rerun from scratch on the updated databases to obtain up-to-date frequent patterns. This is inefficient as a service application because of its resource cost, and returning results to users takes longer as databases grow. Response time can be shortened if the algorithms update their results based on incremental changes to the databases. We therefore propose PF-CUP (pre-frequent clickstream mining using pseudo-IDList), an approach to incremental clickstream pattern mining as a service. The algorithm is based on the pre-large concept for maintaining and updating results, and on a data structure called a pre-frequent hash table for maintaining information about patterns. Experiments on different databases show that the proposed algorithm is efficient at incremental clickstream pattern mining.
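The pre-large idea at the core of PF-CUP can be illustrated with a toy sketch: keep two support thresholds, treat patterns whose support falls between them as a "pre-large" buffer, and update counts when an incremental batch arrives, so a full rescan of the original database is rarely needed. The sketch below works on single items with thresholds of our own choosing; it does not reproduce PF-CUP's pre-frequent hash table or pseudo-IDLists.

```python
from collections import Counter

# Illustrative pre-large bookkeeping for single items (not the paper's PF-CUP):
# patterns with support >= S_U are frequent; those in [S_L, S_U) are "pre-large"
# and kept as a buffer so that small incremental batches can update counts
# without a full rescan of the original database.

S_U, S_L = 0.6, 0.3   # upper (frequent) and lower (pre-large) support thresholds

def classify(counts, n):
    frequent  = {p for p, c in counts.items() if c / n >= S_U}
    pre_large = {p for p, c in counts.items() if S_L <= c / n < S_U}
    return frequent, pre_large

db = [["a", "b"], ["a", "c"], ["a", "b"], ["b", "c"]]
counts = Counter(item for seq in db for item in set(seq))
n = len(db)
print(classify(counts, n))   # ({'a', 'b'}, {'c'})

# Incremental batch: merge the new counts. Only patterns that were neither
# frequent nor pre-large before could require touching the original database.
batch = [["a", "c"], ["c"]]
for seq in batch:
    counts.update(set(seq))
n += len(batch)
print(classify(counts, n))   # 'c' is promoted to frequent, 'b' drops to pre-large
```

After the batch, items move between the frequent and pre-large sets using only the merged counts, which is the efficiency argument behind the pre-large concept.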
Benchmarking various metaheuristics and their new enhancements, strategies, and adaptation mechanisms has become standard in computational intelligence research. Recently, many challenges and issues regarding fair comparisons, along with recommendations towards good practices for benchmarking metaheuristic algorithms, have been identified. This paper addresses an important issue in metaheuristics design and benchmarking: boundary strategies, also known as boundary control methods (BCMs). This work investigates whether the choice of a BCM can significantly influence the performance of competitive algorithms. The experiments encompass the top three performing algorithms from the IEEE CEC 2017 and 2020 competitions, each combined with six different boundary control methods. We provide extensive statistical analysis and rankings, resulting in conclusions and recommendations for metaheuristics researchers and, possibly, for the future direction of benchmark definitions. We conclude that the BCM should be considered another vital metaheuristic input variable, both for unambiguous reproducibility of benchmarking results and for a better understanding of population dynamics, since the BCM setting can impact the performance of the optimization method.
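For readers unfamiliar with boundary control methods, here is a minimal sketch of four common strategies (clipping, reflection, wrapping, and random reinitialization); the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def apply_bcm(x, lo, hi, method="clip", rng=None):
    """Return a copy of candidate solution x forced back into [lo, hi]."""
    x = np.asarray(x, dtype=float).copy()
    if method == "clip":                       # saturate at the violated bound
        return np.clip(x, lo, hi)
    if method == "reflect":                    # mirror the overshoot back inside
        span = hi - lo
        x = (x - lo) % (2 * span)              # fold onto one mirror period
        x = np.where(x > span, 2 * span - x, x)
        return x + lo
    if method == "wrap":                       # periodic (toroidal) boundary
        return lo + (x - lo) % (hi - lo)
    if method == "random":                     # resample violated components
        rng = rng or np.random.default_rng()
        bad = (x < lo) | (x > hi)
        x[bad] = rng.uniform(lo, hi, size=bad.sum())
        return x
    raise ValueError(f"unknown BCM: {method}")

print(apply_bcm([-1.5, 0.3, 2.7], lo=0.0, hi=1.0, method="reflect"))
```

In a metaheuristic, such a function would typically be applied to every candidate solution produced by mutation or crossover before it is evaluated, which is why the choice of method can alter population dynamics.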
Recent progress on a whitefly detection tool.
In recent years, mining informative data and discovering hidden information have become increasingly in demand. One popular means to achieve this is sequential pattern mining, which finds informative patterns stored in databases. Its applications span many areas, and many methods have been proposed. Recently, pseudo-IDLists were proposed to improve both runtime and memory usage in the mining process. However, the idea cannot be used directly for general sequential pattern mining, as it only works on clickstream patterns, a more restricted type of sequential pattern. We propose adaptations and changes to the original idea and introduce SUI (Sequential pattern mining Using Indices). Comparing SUI with two other state-of-the-art algorithms on six test databases, we show that SUI is effective and efficient in both runtime and memory usage.
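The vertical, IDList-style representation behind such algorithms can be sketched as follows: each item maps to the positions where it occurs in each sequence, and a pattern is extended by joining its occurrence list with that of the next item. This toy example is our own illustration of the general idea; SUI's pseudo-IDList indexing is considerably more compact than what is shown here.

```python
from collections import defaultdict

# Toy vertical (IDList-style) representation: for each item, the positions at
# which it occurs in each sequence. Support of a pattern = the number of
# sequences that contain it.

db = [["a", "b", "a", "c"],
      ["a", "c", "b"],
      ["b", "a", "c"]]

idlists = defaultdict(lambda: defaultdict(list))   # item -> sid -> [positions]
for sid, seq in enumerate(db):
    for pos, item in enumerate(seq):
        idlists[item][sid].append(pos)

def extend(pat_idlist, item):
    """Join: keep sequences where `item` occurs after the pattern's earliest match."""
    out = {}
    for sid, positions in pat_idlist.items():
        first = positions[0]
        later = [p for p in idlists[item].get(sid, []) if p > first]
        if later:
            out[sid] = later
    return out

ab = extend(dict(idlists["a"]), "b")      # occurrences of pattern <a, b>
abc = extend(ab, "c")                     # occurrences of pattern <a, b, c>
print("support(<a,b>)   =", len(ab))      # 2 sequences contain a then b
print("support(<a,b,c>) =", len(abc))     # 1 sequence contains a, b, then c
```

The join touches only the occurrence lists of the pattern being extended, never the raw database, which is why IDList-style methods tend to win on runtime once the lists are built.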
Comparing various metaheuristics based on an equal number of objective function evaluations has become standard practice. Many contemporary publications use the specific number of objective function evaluations fixed by benchmark set definitions. Furthermore, many publications deal with the recurrent theme of late stagnation, which may give the impression that continuing the optimization process would be a waste of computational capability. But is it? Recently, many challenges, issues, and questions have been raised regarding fair comparisons and recommendations towards good practices for benchmarking metaheuristic algorithms. The aim of this work is not to compare the performance of several well-known algorithms but to investigate issues that can appear in benchmarking and comparing the performance of metaheuristics, regardless of the problem being solved. This paper studies the impact of a higher evaluation number on a selection of metaheuristic algorithms: we examine the effect of a raised evaluation budget on the overall performance, mean convergence, and population diversity of selected swarm algorithms and IEEE CEC competition winners. Even though the final impact varies with the algorithms selected, it may significantly affect the final verdict of a metaheuristics comparison. This work picks an important benchmarking issue and analyzes it extensively, resulting in conclusions and possible recommendations for users working on real engineering optimization problems and for researchers studying metaheuristic algorithms. Especially nowadays, when metaheuristic algorithms are applied to increasingly complex optimization problems and meet machine learning in AutoML frameworks, we conclude that the objective function evaluation budget should be considered another vital optimization input variable.
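To make the role of the evaluation budget concrete, the following sketch runs a textbook DE/rand/1/bin loop under an explicit function-evaluation budget and records the best-so-far fitness together with a simple population diversity measure (mean distance to the population centroid). All names and parameter values here are our own assumptions, not the paper's experimental setup.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x * x))

def diversity(pop):
    """Mean Euclidean distance of individuals from the population centroid."""
    centroid = pop.mean(axis=0)
    return float(np.linalg.norm(pop - centroid, axis=1).mean())

def de_rand_1_bin(f, dim=10, pop_size=20, budget=20_000,
                  F=0.5, CR=0.9, lo=-100.0, hi=100.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    evals = pop_size
    history = []                              # (evals, best-so-far, diversity)
    while evals < budget:
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            evals += 1
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
            if evals >= budget:
                break
        history.append((evals, float(fit.min()), diversity(pop)))
    return history

for evals, best, div in de_rand_1_bin(sphere)[::100]:
    print(f"{evals:6d} FEs  best={best:.3e}  diversity={div:.2f}")
```

Rerunning the same loop with a larger `budget` and inspecting where best-so-far fitness and diversity actually flatten out is exactly the kind of comparison the abstract argues should inform the choice of evaluation budget.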