# Habib Izadkhah

University of Tabriz · Department of Computer Science

Ph.D. in Computer Science

## About

- 122 Publications
- 12,716 Reads


- 581 Citations

**Introduction**

A recent study identified Dr. Izadkhah as one of the top 5 most influential authors in software modularization research.
Ref: Software Module Clustering: An In-Depth Literature Analysis, IEEE Transactions on Software Engineering, 2020.

**Skills and Expertise**

**Additional affiliations**

December 2013 - present

**Education**

September 2009 - September 2013

## Publications (122)

Solving combinatorial optimization problems (COPs) poses a significant challenge in various application domains. The NP-hardness of many COPs necessitates the integration of meta-heuristics to effectively tackle these problems by leveraging the strengths of each meta-heuristic. However, hybrid meta-heuristics lack a mechanism to determine when to a...

Data anonymization is a technique that safeguards individuals’ privacy by modifying attribute values in published data. However, increased modifications enhance privacy but diminish the utility of published data, necessitating a balance between privacy and utility levels. K-Anonymity is a crucial anonymization technique that generates k-anonymous c...
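The k-anonymity property described here can be checked mechanically: every combination of quasi-identifier values must appear in at least k records. The sketch below is a minimal illustration of that property only; the attribute names, generalized values, and function name are invented for the example and are not taken from the paper:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Return True if every quasi-identifier combination occurs >= k times."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())

# Illustrative generalized records (age ranges, masked ZIP codes)
data = [
    {"age": "30-40", "zip": "537**"},
    {"age": "30-40", "zip": "537**"},
    {"age": "20-30", "zip": "515**"},
    {"age": "20-30", "zip": "515**"},
]
print(is_k_anonymous(data, ["age", "zip"], 2))  # True
print(is_k_anonymous(data, ["age", "zip"], 3))  # False
```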

Drug repurposing is an exciting field of research toward recognizing a new FDA-approved drug target for the treatment of a specific disease. It has received extensive attention regarding the tedious, time-consuming, and highly expensive procedure with a high risk of failure of new drug discovery. Data-driven approaches are an important class of met...

This chapter talks about 13 string-based problems. These challenges are explained with some examples and then programmed in Python. The problems are listed as follows: 1. Performing Pancake Scramble into texts. 2. Reversing the vowels of a given text. 3. Word shape from a Text Corpus. 4. Word height from a Text Corpus. 5. Combining adjacent colors...
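As one illustration, the second problem, reversing the vowels of a given text, can be sketched in Python (a minimal sketch, not the book's solution):

```python
def reverse_vowels(text):
    """Keep consonants in place; vowels appear in reversed order."""
    vowels = [c for c in text if c.lower() in "aeiou"]
    return "".join(vowels.pop() if c.lower() in "aeiou" else c
                   for c in text)

print(reverse_vowels("hello world"))  # "hollo werld"
```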

This chapter discusses six miscellaneous problems. These challenges are elucidated with illustrative examples and subsequently implemented using the Python programming language. The problems are enumerated as follows: 1. Performing a perfect riffle to items. 2. Exact change money into an array of coin denominations. 3. Just keeping items whose Freq...
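The first problem, performing a perfect riffle, might be sketched as follows. This assumes an out-shuffle convention in which the list is split in half and interleaved with the first half's first item staying first; the book's exact convention may differ:

```python
def perfect_riffle(items):
    """Split the list in half and interleave the halves (out-shuffle)."""
    mid = (len(items) + 1) // 2
    left, right = items[:mid], items[mid:]
    out = []
    for i in range(mid):
        out.append(left[i])
        if i < len(right):
            out.append(right[i])
    return out

print(perfect_riffle([1, 2, 3, 4, 5, 6]))  # [1, 4, 2, 5, 3, 6]
```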

This chapter talks about 21 number-based problems. These challenges are explained with some examples and then programmed in Python. 1. Checking if a number is Cyclop. 2. Checking if there is a domino cycle in a list of numbers. 3. Extracting increasing digits from a given string. 4. Expanding integer intervals. 5. Collapsing integer intervals. 6. C...
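As one illustration, the "collapsing integer intervals" exercise can be read as turning a sorted list of integers into compact range notation; the exact input and output format assumed below is a guess, not the book's specification:

```python
def collapse_intervals(nums):
    """Collapse a sorted integer list into ranges, e.g. [1,2,3,5] -> '1-3,5'."""
    if not nums:
        return ""
    parts, start, prev = [], nums[0], nums[0]
    for n in nums[1:]:
        if n == prev + 1:          # still inside the current run
            prev = n
        else:                      # run ended: emit it and start a new one
            parts.append(f"{start}-{prev}" if start != prev else str(start))
            start = prev = n
    parts.append(f"{start}-{prev}" if start != prev else str(start))
    return ",".join(parts)

print(collapse_intervals([1, 2, 3, 5, 7, 8]))  # "1-3,5,7-8"
```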

This chapter provides an overview of the fundamental concepts and knowledge of Python required for this book. It includes instructions for installing and running a simple Python program, covers the concept of variables and the basic data types of boolean, integer, float, string, and the sequence type of range. The chapter also discusses data struct...

This chapter talks about 8 count-based problems. These challenges are explained with some examples and then programmed in Python. The problems are listed as follows: 1. Counting the number of carries when adding two given numbers. 2. Counting the number of animals that are growling animals. 3. Counting the number of consecutive summers for a poli...
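The first problem, counting the carries produced when adding two numbers digit by digit, admits a short sketch (a minimal illustration, not the book's solution):

```python
def count_carries(a, b):
    """Count how many column additions of a + b produce a carry."""
    carries, carry = 0, 0
    while a or b:
        if a % 10 + b % 10 + carry >= 10:
            carry = 1
            carries += 1
        else:
            carry = 0
        a //= 10
        b //= 10
    return carries

print(count_carries(555, 555))  # 3
print(count_carries(123, 456))  # 0
```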

SARS-CoV-2, a member of the coronavirus family, is an RNA virus characterized by a single-stranded genome and is responsible for the development of COVID-19. The emergence of the Omicron variant of SARS-CoV-2 in 2021 marked a significant variation recognized by the World Health Organization. The primary objective of this study is to investigate the...

Breast cancer is one of the most prevalent cancers among women worldwide, and early detection of the disease can be lifesaving. Detecting breast cancer early allows for treatment to begin faster, increasing the chances of a successful outcome. Machine learning helps in the early detection of breast cancer even in places where there is no access to...

A graph is a nonlinear data structure that has a wide range of applications. Most of the concepts in computer science and the real world can be visualized and represented in terms of graph data structure. This chapter provides 70 exercises for addressing different applications of graph algorithms. To this end, this chapter provides exercises on gra...
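A typical starting point for such exercises is breadth-first traversal of a graph stored as an adjacency list. A minimal sketch (the example graph is invented for illustration):

```python
from collections import deque

def bfs(adj, start):
    """Return nodes in breadth-first visiting order from `start`."""
    order, seen = [], {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in adj.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']
```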

In this chapter, we examine the four basic data structures including arrays, stacks, queues, and linked lists. To this end, this chapter provides 191 exercises on the basic data structures in terms of theoretical and application.
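As a small taste of the stack material, the classic balanced-brackets check is a standard application of that data structure (an illustration, not necessarily one of the chapter's 191 exercises):

```python
def balanced(s):
    """Check whether (), [], {} pairs in `s` are properly nested."""
    stack = []
    pairs = {")": "(", "]": "[", "}": "{"}
    for ch in s:
        if ch in "([{":
            stack.append(ch)                      # push every opener
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False                      # mismatched closer
    return not stack                              # leftovers mean unbalanced

print(balanced("{[()]}"))  # True
print(balanced("([)]"))    # False
```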

The purpose of sorting is to arrange the elements of a given list in a specific order (ascending or descending). Sorting algorithms are categorized according to the following criteria:
- By number of comparisons
- By number of swaps
- By memory (space) usage
- By recursion
- By stability
- By adaptability

This chapter provides 431 exercises on the 35 sorting algorithms...
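As one example that touches several of those criteria (stable, adaptive, in-place, comparison-based), insertion sort can be sketched as follows (a standard implementation, not a solution from the chapter):

```python
def insertion_sort(arr, key=lambda x: x):
    """Stable insertion sort; returns a new sorted list."""
    a = list(arr)
    for i in range(1, len(a)):
        item = a[i]
        j = i - 1
        while j >= 0 and key(a[j]) > key(item):
            a[j + 1] = a[j]        # shift larger elements right
            j -= 1
        a[j + 1] = item
    return a

# Stability: equal keys ('a', 1) and ('c', 1) keep their input order.
pairs = [("b", 2), ("a", 1), ("c", 1)]
print(insertion_sort(pairs, key=lambda p: p[1]))  # [('a', 1), ('c', 1), ('b', 2)]
```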

The theory of computational complexity examines the difficulty of solving problems by computer (more precisely, algorithmically). In this theory, problems are classified into two sets: P, which denotes “Polynomial” time, and NP, which indicates “Non-deterministic Polynomial” time. There are also NP-Hard and NP-Complete sets...

Recursive relations are useful methods for analyzing recursive algorithms. This chapter provides exercises for developing skills in solving recurrence relations. You may be familiar with how to analyze the time complexity of algorithms. However, to analyze recursive algorithms, we need more sophisticated techniques such as solving recursive relatio...
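As an illustration of the expansion (iteration) technique these exercises practice, consider the recurrence T(n) = 2T(n/2) + n, assuming n is a power of two:

```latex
\begin{aligned}
T(n) &= 2\,T(n/2) + n \\
     &= 4\,T(n/4) + 2n \\
     &= 2^{k}\,T(n/2^{k}) + kn \\
     &= n\,T(1) + n\log_2 n && \text{taking } k = \log_2 n \\
\Rightarrow T(n) &= \Theta(n \log n).
\end{aligned}
```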

There is usually more than one algorithm for solving a problem. The question that arises in such cases is which of these algorithms works best. Algorithms are compared based on runtime and memory usage; an efficient algorithm, therefore, is one that uses less running time and memory. Because there is enough memory today, it is very important to...

Mathematical induction is a technique for proving results or establishing statements for natural numbers. This chapter illustrates the method through a variety of examples and provides 50 exercises on mathematical induction. To this end, this chapter provides exercises on summations, inequalities, floors and ceilings, divisibility, postage stamps,...
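A classic instance of the summation proofs practiced here is showing that the first n natural numbers sum to n(n+1)/2:

```latex
\textbf{Base: } n = 1:\quad 1 = \tfrac{1 \cdot 2}{2}. \\[4pt]
\textbf{Step: } \text{assume } \sum_{i=1}^{k} i = \tfrac{k(k+1)}{2}. \text{ Then} \\
\sum_{i=1}^{k+1} i \;=\; \tfrac{k(k+1)}{2} + (k+1) \;=\; \tfrac{(k+1)(k+2)}{2},
```

which is the claim for n = k + 1, completing the induction.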

Searching algorithms are used to retrieve a given element from data structures. According to the search strategy, these algorithms are generally classified into two categories: sequential search and interval search. This chapter provides 76 exercises on the search methods including linear search, binary search, ternary search, binary search tree, F...
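Binary search, the canonical interval search, can be sketched as follows (a standard implementation on a sorted list, not the chapter's solution):

```python
def binary_search(a, target):
    """Return the index of `target` in sorted list `a`, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1           # target lies in the upper half
        else:
            hi = mid - 1           # target lies in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```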

Greedy algorithms are simple algorithms used in optimization problems. A greedy algorithm makes the locally optimal choice at each step in the hope of finding the optimal solution to the whole problem. This chapter provides 169 exercises for addressing different aspects of greedy algorithms. To this end, this chapter provides exercises on activity selection prob...
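The activity selection problem mentioned above has the classic greedy solution of sorting by finish time and repeatedly taking the next compatible activity. A minimal sketch:

```python
def select_activities(intervals):
    """Pick a maximum set of pairwise non-overlapping (start, finish) intervals."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:        # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(activities))  # [(1, 4), (5, 7), (8, 11)]
```

Sorting by finish time is what makes the local choice safe: the activity that frees the timeline earliest can never exclude a better schedule.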

Dynamic programming is another way to design algorithms. Dynamic programming is similar to the divide and conquer techniques in that the problem is divided into smaller sub-problems. But in this method, we first solve the smaller sub-problems, save the results, and then retrieve them whenever needed, instead of recalculating. Dynamic p...
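The save-and-reuse idea can be illustrated with a memoized Fibonacci, a standard textbook example (not taken from the chapter itself):

```python
from functools import lru_cache

@lru_cache(maxsize=None)              # each sub-problem is solved exactly once
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache the same sub-problems are recomputed exponentially many times; with it the running time drops to linear in n.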

The divide and conquer method solves a problem by dividing it into sub-problems that are structurally similar to the main problem but smaller in size. The first phase of this method, as its name suggests, is to break or divide the problem into sub-problems, and the second phase is to solve smaller proble...
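Merge sort is a standard illustration of these phases. A minimal sketch with the divide and combine steps marked (not a solution from the chapter):

```python
def merge_sort(a):
    if len(a) <= 1:                     # base case: trivially sorted
        return a
    mid = len(a) // 2                   # divide
    left = merge_sort(a[:mid])          # conquer each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0             # combine: merge two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```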

Having come up with your new and innovative algorithm, how do you measure its efficiency? Certainly, we would prefer to design an algorithm that is as efficient as possible, so we require some method to prove that it really operates as well as we expected. Also, some algorithms are more efficient than others. But what do we...

The backtracking algorithm is a problem-solving algorithm that tests possible solutions, goes back wherever a partial solution does not match the constraints of the problem, corrects itself, and tries a new way. This is unlike brute force, which tests all possibilities and then finds the answer among them. This algorithm is usually us...
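The n-queens puzzle is the classic showcase for this test-and-retreat behavior; a minimal sketch in a standard formulation (not the chapter's solution):

```python
def n_queens(n):
    """Return all placements of n non-attacking queens, one column index per row."""
    solutions = []

    def place(cols):
        row = len(cols)
        if row == n:                     # all rows filled: record a solution
            solutions.append(tuple(cols))
            return
        for col in range(n):
            # prune (backtrack) if the new queen shares a column or diagonal
            if all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols)):
                place(cols + [col])

    place([])
    return solutions

print(len(n_queens(6)))  # 4
```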

Diagnosis of COVID-19 using deep learning on CT scan images can play an important role in helping doctors. In this research, deep transfer learning is used, with the EfficientNet-B2 and ViT_l_32 neural network architectures as a base, and the proposed model is built by combining these models. For evaluation, the confusion matrix, precision, accur...

Multiprocessor systems with parallel computing play an important role in data processing. Considering the optimal use of existing computing systems, scheduling on parallel systems has gained great significance. Usually, a sequential program to run on parallel systems is modeled by a task graph. Because scheduling of task graphs onto processors is c...

In recent years, convolutional neural networks (CNNs) have outperformed conventional methods in end-to-end speaker identification (SI) systems. The CNN training time is considerably long due to the need for large amounts of training data and high costs of computation and memory consumption. This paper proposes a new CNN for text-independent SI insp...

Breast cancer is one of the diseases that is gradually becoming more prevalent in today’s society. Machine learning is helping in early detection of mammographic lesions. In this work, the fully connected neural network (FCNN) deep learning architecture is used to diagnose breast cancer. In addition, we studied the effect of different techniques t...

Clustering techniques are used to extract the structure of software for understanding, maintaining, and refactoring. In the literature, most of the proposed approaches for software clustering are divided into hierarchical algorithms and search-based techniques. In the former, clustering is a process of merging (splitting) similar (non-similar) clus...

We would like to take a step further and discuss building a system that analyzes patterns in data and can then learn to generate novel data based on learned information. This is a newly emerging field in deep learning. It has brought a great deal of success and attention in deep learning in recent years. This chapter introduces generative networks,...

According to the World Health Organization (WHO), cardiovascular disease (CVD) is the leading cause of death and disability worldwide, killing more than 17 million people every year. This toll accounts for a third of all deaths worldwide and half of all noncommunicable disease-related deaths. More than 80% of these deaths occur in low- and middle-i...

A large amount of the data such as speech, protein sequence, bases in a DNA molecule, time series (e.g., weather forecast data and financial data), videos, and texts are inherently serial (sequential). The current value of sequential data depends on previous values. Recurrent neural networks (RNNs, for short) are a powerful type of neural networks...

This chapter discusses deep learning applications, which show that deep learning is embedded in all aspects of our lives. In fact, people deal with these applications on a daily basis. The chapter will then point out why deep learning is important in bioinformatics. Finally, the concepts presented in all chapters of the book will be reviewed.

This chapter discusses the legendary deep learning architectures first. It then presents several deep learning applications in bioinformatics and discusses several challenges in deep learning as well as the methods of overcoming those challenges.

Neural networks are a subset of machine learning, and they are at the heart of deep learning algorithms. This chapter introduces neural networks, which have been among the most widely used and successful models for machine learning. Most of the modern deep learning models are also based on neural networks. These networks have a long history of deve...

In bioinformatics, classification of the input data is a very important task that refers to a predictive modeling problem where a class label is predicted for any given input data. An instance is to classify the breast cancer into benign and malignant categories. The Pima Indians Diabetes Dataset is employed to predict the onset of diabetes based o...

Known as a high-level programming language, Python is easy to use and learn. It also works quickly; furthermore, it is very versatile and may become the dominant platform for machine learning. If you would like to set up a career in deep learning, you need to know Python programming along with the Python ecosystem. Creating effective deep learning...

The state-of-the-art image classification techniques have emerged from submissions by academics and industry leaders to the ImageNet large-scale visual recognition challenge (ILSVRC). ILSVRC is an annual contest aimed at developing and improving computer vision techniques on a subset of the ImageNet dataset. This chapter addresses eight well-known...

This chapter introduces the convolutional neural networks (CNNs, or ConvNets for short), deep neural networks with special image processing applications. These networks significantly improve the processing of information by providing several new concepts, and are still a great option in many problems involving continuously sampled data in a rectang...

A single-layer neural network is the simplest type of neural networks laying the foundation for understanding other kinds of neural networks. There are no hidden layers in this network that only contains input and output layers. After the advent of the single-layer neural network, it took nearly 30 years for the multilayer neural perceptron (MLP) n...

Ability to use deep learning is among the most highly sought skills in technology. In recent years, it has drawn a great deal of research attention and led to many practical applications. Deep learning is a specialized subset of machine learning that has improved the ability to classify, recognize, detect, and generate—in one word, understand. This...

Deep learning and relevant techniques are classified as a subset of machine learning, whereas machine learning is considered a subset of artificial intelligence. Therefore, this chapter reviews the definitions of these concepts and their differences. It also outlines some major challenges in deep learning. Moreover, some of the basic mathematical c...

Topological data analysis (TDA) is a novel and rapidly growing area of modern data science that uses topological, geometric, and algebraic tools to extract structural features from very complex and large-scale data that are usually incomplete and noisy. The primary motivation for studying this method was to study the shape of data, which has been c...

Multiplex link prediction is the problem of finding missing links between nodes based on information from other layers. Although the link prediction problem in online social networks has been studied comprehensively, most approaches only employ internal features of the layer under prediction and do not consider additional link information from other...

Nowadays, in scientific and computational programs, increasing the execution speed of programs is very important. In scientific programs, loops account for a large part of the execution time. Therefore, the parallel execution of loop iterations reduces the runtime of the whole program. This paper works on the parallelism of...

Comprehensive analysis of proteins to evaluate their genetic diversity, study their differences, and respond to the tensions is the main subject of an interdisciplinary field of study called proteomics. The main objective of the proteomics is to detect and quantify proteins and study their post-translational modifications and interactions using pro...

Evaluating the reliability of component-based software systems from their architecture is of great importance. This paper proposes metrics to assess the reliability of software systems considering the self-healing effect of components on software reliability. A self-healing component, when broken, heals itself with a probability and returns to...

Users of cloud computing technology can lease resources instead of spending an excessive charge for their ownership. For service delivery in the infrastructure-as-a-service model of the cloud computing paradigm, virtual machines (VMs) are created by the hypervisor. This software is installed on a bare-metal server, called the host, and acts as a b...

Context: Clustering algorithms, as a modularization technique, are used to modularize a program aiming to understand large software systems as well as software refactoring. These algorithms partition the source code of the software system into smaller and easy-to-manage modules (clusters). The resulting decomposition is called the software system st...

Software refactoring is a software maintenance action to improve the software internal quality without changing its external behavior. During the maintenance process, structural refactoring is performed by remodularizing the source code. Software clustering is a modularization technique to remodularize artifacts of source code aiming to improve rea...

A software system evolves over time to meet the needs of users. Understanding a program is the most important step in applying new requirements. Clustering techniques, by dividing a program into small and meaningful parts, make it possible to understand the program. In general, clustering algorithms are classified into two categories: hierarchical and no...

Clustering (modularisation) techniques are often employed for the meaningful decomposition of a program aiming to understand it. In the software clustering context, several external metrics are presented to evaluate and validate the resultant clustering obtained by an algorithm. These metrics use a ground‐truth decomposition to evaluate a resultant...

In the literature, loop fusion is an effective optimization technique which tries to enhance parallelizing compilers’ performance via memory hierarchy management, and all its competing criteria create an NP-hard problem. This paper proposes an evolutionary algorithm that aims to achieve a profitable loop order which maximizes fusion taking into acc...

Multiplex networks are the general representative of complex systems composed of distinct interactions between the same entities on multiple layers. Community detection in the multiplex networks is the problem of finding a shared structure under all layers, which combines the information of the entire network. Most of the existing methods for commu...

Program comprehension plays a significant role in the maintenance of software systems. There has recently been a significant increase in written large-scale applications with a collaboration of several programming languages. Due to the indirect collaboration between different components of a multilingual program, it is difficult to understand such...

Security is a growing concern in developing software systems. It is important to face unknown threats in order to make the system continue operating properly. Threats are vague and attack methods change frequently. Coping with such changes is a major feature of an adaptive software. Therefore, designing an adaptive secure software is an appropriate...

Software clustering is usually used for program comprehension. Since it is considered to be the most crucial NP-complete problem, several genetic algorithms have been proposed to solve this problem. In the literature, there exist some objective functions (i.e., fitness functions) which are used by genetic algorithms for clustering. These...

Software clustering is usually used for program comprehension. Since it is considered to be the most crucial NP-complete problem, several genetic algorithms have been proposed to solve this problem. In the literature, there exist some objective functions (i.e., fitness functions) which are used by genetic algorithms for clustering. These objective...

A software system evolves over time to meet the user’s new requirements as well as to adapt to the environment. This causes it to deviate from its original and documented structure. Hence, after a while, due to its low understandability, making new changes will not be easy. Modularization is utilized to extract the software system structure from the...

Software modularization techniques are employed to understand a software system. The purpose of modularization is to decompose a software system from a source code into meaningful and understandable subsystems (modules). Since modularization of a software system is an NP-hard problem, the modularization quality obtained using evolutionary algorithm...

Clustering is used as an important technique to extract patterns from big data in various fields. Graph clustering as a subset of clustering has a lot of practical applications. Due to the NP-hardness of the graph clustering problem, many evolutionary algorithms, particularly the genetic algorithm have been presented. One of the most effective oper...

Programs are under continuous attack for disclosing secret information, and defending against these attacks is becoming increasingly vital. An attractive approach for protection is to measure the amount of secret information that might leak to attackers. A fundamental issue in computing information leakage is that given a program and attackers with...

Clustering is used as an important technique to extract patterns from big data in various fields. Graph clustering as a subset of clustering has a lot of practical applications. Due to the NP-hardness of the graph clustering problem, many evolutionary algorithms, particularly the genetic algorithm have been presented. One of the most effective oper...

Graph clustering has wide applications in different areas such as machine learning, bioinformatics, data mining, social networks and understanding a software. Since it is an NP-hard problem, most approaches use the meta-heuristic and search-based evolutionary methods for solving it. Inspired by optimization algorithms like krill herd (KH) and genet...

Clustering techniques are usually utilized to partition a software system, aiming to understand it. Understanding a program helps to maintain the legacy source code. Since the partitioning of a software system is an NP-hard problem, using evolutionary approaches seems reasonable. The krill herd (KH) evolutionary algorithm is an effective...

Nowadays, parallel and distributed based environments are used extensively; hence, for using these environments effectively, scheduling techniques are employed. The scheduling algorithm aims to minimize the makespan (i.e., completion time) of a parallel program. Due to the NP-hardness of the scheduling problem, in the literature, several genetic al...

Context: Software systems evolve over time to meet the new requirements of users. These new requirements, usually, are not reflected in the original documents of these software systems. Therefore, the new version of a software system deviates from the original and documented architecture. This way, it will be more difficult to understand it after a...

Planning a powerful server imposes an enormous cost for providing ideal performance. Given that a server responding to web requests is more likely to consume RAM than other resources, it is desirable to provide an appropriate RAM capacity for optimal server performance in congested situations. This can be done through RAM usage modeling...

A lack of up-to-date software documentation hinders the processes of software evolution and maintenance, as the structure and code of the software can be misunderstood. One approach to overcoming such problems is to extract and reconstruct the software architecture from the available source code so that it can be assessed against the required chang...

In view of the very large search space in software system modularization, the use of evolutionary approaches and, especially, genetic algorithms seems reasonable. Research on these approaches has shown that they can achieve better results than hierarchical techniques, when used to modularize software systems to extract the architecture. Therefore,...