Faramarz Safi

Islamic Azad University, Najafabad Branch, Najafabad, Isfahan, Iran. · Computer Engineering

Ph.D.

About

67
Publications
15,884
Reads
1,015
Citations

Publications

Publications (67)
Article
Full-text available
With the rapid expansion of the Internet of Things (IoT), sensors, smartphones, and wearables have become integral to daily life, powering smart applications in home automation, healthcare, and intelligent transportation. However, these advancements face significant challenges due to latency and bandwidth constraints imposed by traditional cloud-ba...
Article
Full-text available
In the cloud environment, task scheduling has always been a challenge. Failure to use a proper scheduling approach in cloud computing may cause high energy consumption and low resource efficiency. Due to the dynamism and limitation of cloud resources to execute diverse and time-varying requests of users, an effective scheduling mechanism is require...
Article
Full-text available
Planning and sequencing for hot strip mills in the steel industry is a challenging, complex problem that has fascinated optimization researchers and practitioners alike. This paper applies a combinatory heuristic search and a multi-objective metaheuristic in a novel approach called HSMO-NSGA-II, which employs the HSMO heuristic search method and...
Article
Full-text available
One of the critical challenges in brain-computer interfaces is the classification of brain activities through the analysis of EEG signals. This paper seeks to improve the efficacy of deep learning-based rehabilitation systems, aiming to deliver superior services for individuals with physical disabilities. The research introduces the time distribute...
Preprint
Full-text available
Many engineering optimization problems can be solved using meta-heuristics. Despite their merits, such algorithms face common challenges of early convergence rate and the imbalance between the exploitation and exploration phases. These algorithms have strengths and weaknesses considering the convergence rate, local search, and global search criteri...
Article
Full-text available
Due to the increasing use of sensors/devices in smart cities, IoT/cloud data centers must provide adequate computing resources. Efficient resource management is one of the biggest challenges in distributed computing. This research proposes a solution that uses the activity logs of sensors to extract their activity patterns. These patterns contribute to the...
Article
Agglomerative Hierarchical Clustering (AHC) is a general type of Hierarchical Clustering (HC) that forms clusters from the “bottom-up.” This paper focuses on the development of AHC methods based on ensemble-based approaches. Accordingly, we develop an AHC framework based on the clustering of clusters, along with an innovative similarity criterion that perf...
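The “bottom-up” merging that AHC performs can be sketched minimally in Python. This is illustrative only: it uses plain single-linkage distance on 1-D points, not the paper's ensemble-based similarity criterion, and all names here are hypothetical.

```python
# Illustrative bottom-up agglomerative clustering on 1-D points using
# single-linkage distance (not the paper's ensemble-based criterion).
def agglomerative_cluster(points, num_clusters):
    # Start with every point in its own cluster.
    clusters = [[p] for p in points]
    while len(clusters) > num_clusters:
        # Find the pair of clusters with the smallest single-linkage distance.
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        # Merge the closest pair -- the "bottom-up" step.
        clusters[i].extend(clusters.pop(j))
    return clusters

print(agglomerative_cluster([1.0, 1.2, 5.0, 5.1, 9.0], 2))
```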
Article
Full-text available
Although IoT technology provides a promising future for human life, significant challenges such as routing, security, low-cost equipment, energy consumption, privacy, and reliability can considerably affect its performance. In recent studies, routing has been considered one of the most critical challenges in IoT due to many existing IoT...
Article
Full-text available
The cloud runtime environment is dynamic; therefore, allocating tasks to computing resources might include various scenarios. Metaheuristic algorithms are usually used to choose appropriate scheduling scenarios; however, they suffer from premature convergence, trapping in local optima, and imbalance between the exploration and exploitation of searc...
Article
Full-text available
Cloud computing adopts virtualization technology, including migration and consolidation of virtual machines, to overcome resource utilization problems and minimize energy consumption. Most of the approaches have focused on minimizing the number of physical machines and rarely have devoted attention to minimizing the number of migrations. They also...
Article
Full-text available
Cloud computing maps tasks to resources in a scalable fashion. Scheduling is an NP-hard problem; thus, the scheduler chooses one solution from among many, which is why finding the optimal solution, especially at large system scales, is not possible. Applying metaheuristic algorithms to find a near-to-optimal solution, not...
Article
Full-text available
The MapReduce framework is used for the distribution and parallelization of large-scale data processing. This framework breaks a job into several MapReduce tasks and assigns them to different nodes. Weak performance of a node in executing a task may prolong the execution of the whole job; such a task is called a straggler task. Also, detecting the nodes with t...
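A common heuristic for spotting straggler tasks is to flag any task whose running time exceeds a multiple of the median. The sketch below is an assumption-laden illustration of that idea, not the detection method of the paper; the function and task names are hypothetical.

```python
# Hedged sketch: flag tasks as stragglers when their running time exceeds
# a factor of the median task time (a common heuristic, names illustrative).
def find_stragglers(task_times, factor=1.5):
    times = sorted(task_times.values())
    median = times[len(times) // 2]
    return [t for t, dur in task_times.items() if dur > factor * median]

# Three normal tasks around 10-12s and one slow task at 40s.
print(find_stragglers({"t1": 10, "t2": 12, "t3": 11, "t4": 40}))
```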
Article
Full-text available
One of the significant objectives of artificial intelligence is to design learning algorithms that are executed on general-purpose computational machines inspired by the human brain. Neural Turing Machine (NTM) is a step towards realizing such a computational machine. In the literature, a variety of approaches have been presented for the NTM; howev...
Article
Full-text available
One method for solving optimization problems is applying metaheuristic algorithms that find near-optimal solutions. The dragonfly algorithm is a metaheuristic that searches the problem space, inspired by the hunting and migration behavior of dragonflies in nature. However, it suffers from the premature convergence of the...
Article
Full-text available
One of the most important aspects of distributed systems is automatic failure recovery. In general, systems must be able to confront any type of failure. However, one issue is commonly overlooked when confronting failures in services: Byzantine failures, the worst kind of arbitrary failure. The client should be ready for the wor...
Article
One recent line of research in deep learning extends Artificial Neural Networks (ANNs) by coupling them to external memory resources. Neural Turing Machine (NTM) and Differentiable Neural Computer (DNC) are two counterparts in this field. Research activities fall into the two categories of either interacting with memory or choosing controller co...
Article
Full-text available
The development of cities has caused the expansion of urban areas, especially in metropolitan areas. These areas involve a complex mix of physical, social, economic, and environmental problems that intensify exhaustion, poverty, environmental pollution, and social anomalies. In this situation, there is a need for urban management systems and urban...
Article
The live migration of virtual machines among physical machines aims at efficient utilization of resources, load balancing, maintenance, energy management, fault tolerance, sharing resources and mobile computing. There are several methods for the live migration of virtual machines. In the post-copy approach, a virtual machine starts working in a tar...
Article
Full-text available
Despite the proliferation of online items in the market, item-based recommender systems play an essential role in helping consumers reach their targets rapidly. Research shows that users (consumers) in various societies have different purchasing behaviors. In the latest studies, the similarity of users was calculated based on different i...
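Item-based recommenders typically quantify item similarity from user rating vectors; a minimal cosine-similarity sketch, which is illustrative only and not the similarity measure the paper derives, looks like this.

```python
# Minimal item-based similarity sketch: cosine similarity between two
# items' user-rating vectors (illustrative; the paper's measure differs).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Ratings by three users for two items (0 = unrated).
item_a = [5, 3, 0]
item_b = [4, 2, 1]
print(round(cosine(item_a, item_b), 3))
```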
Article
Full-text available
Meta-heuristic algorithms are divided into two categories: biological and non-biological. Biological algorithms are divided into evolutionary and swarm-based intelligence, where the latter is divided into imitation-based and sign-based. The whale algorithm is a meta-heuristic biological swarm-based intelligence algorithm (based on imitation). This...
Article
Full-text available
One of the newest bio-inspired meta-heuristic algorithms is the chicken swarm optimization (CSO) algorithm. This algorithm is inspired by the hierarchical behavior of chickens in a swarm for finding food. The diverse movements of the chickens create a balance between the local and the global search for finding the optimal solution. Raven roosting o...
Article
Full-text available
Scheduling in cloud computing is the assignment of tasks to resources for maximum performance, which is a multi-objective problem. Scheduling is NP-hard, which is why meta-heuristic algorithms are used for scheduling problems. The meta-heuristic scheduling algorithms are divided into the two categories of biological and non-biologi...
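The objective such meta-heuristic schedulers minimize is often the makespan of a task-to-VM assignment. The following is a minimal sketch of that fitness function under an assumed model (task lengths divided by VM speeds); it is not the paper's scheduling model, and all names are hypothetical.

```python
# Illustrative makespan fitness for a task-to-VM assignment: the kind of
# objective a metaheuristic scheduler minimizes (not the paper's model).
def makespan(task_lengths, vm_speeds, assignment):
    finish = [0.0] * len(vm_speeds)
    for task, vm in zip(task_lengths, assignment):
        finish[vm] += task / vm_speeds[vm]  # execution time on that VM
    return max(finish)  # completion time of the busiest VM

# Tasks of length 4, 8, 6; VM 1 is twice as fast as VM 0.
print(makespan([4, 8, 6], [1.0, 2.0], [0, 1, 1]))
```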
Article
There is an essential requirement to support people with speech and communication disabilities. A brain-computer interface using electroencephalography (EEG) is applied to satisfy this requirement. A number of research studies to recognize brain signals using machine learning and deep neural networks (DNNs) have been performed to increase the brain...
Article
Full-text available
The runtime environment of cloud computing requires a scheduler that adapts to current runtime conditions or prepares itself for prospective events by predicting the future, which calls for adaptive scheduling algorithms. A formula-based description of the runtime environment can help the scheduler not only to control the current state...
Article
In expert systems, data mining methods are algorithms that simulate humans’ problem-solving capabilities. Clustering methods, as unsupervised machine learning methods, are crucial approaches to categorizing similar samples into the same categories. Applying different clustering algorithms to a given dataset produces clusters of different quality. H...
Article
Full-text available
Cloud datacenters consume enormous amounts of electrical energy, which increases their operational costs and shows the importance of investing in energy-reduction techniques. Dynamic placement of virtual machines on appropriate physical nodes using metaheuristic algorithms is among the methods of reducing energy consumption. In metaheuristic algo...
Article
Today, information is increasing rapidly. For most of this information, data security and protection from unauthorized access are of great importance. Information may be created by an individual or a few people, but securing it requires all assets: hardware, software, and people. This entails organizing all ele...
Preprint
Full-text available
One of the major objectives of Artificial Intelligence is to design learning algorithms that are executed on a general-purpose computational machine such as the human brain. Neural Turing Machine (NTM) is a step towards realizing such a computational machine. Here, we attempt a systematic review of the Neural Turing Machine. First, the min...
Article
Deep learning in artificial intelligence looks for a general-purpose computational machine to execute complex algorithms similar to the human brain. The Neural Turing Machine (NTM), as a tool to realize the deep learning approach, brings together a Turing machine, a general-purpose machine equipped with long-term memory, and a neural network as a controlle...
Article
Full-text available
Development of modern techniques, such as virtualization, underlies new solutions to the problem of reducing energy consumption in cloud computing. However, for Infrastructure as a Service providers, guaranteeing energy savings is a difficult process. Analysis of application workloads shows that the average utilization of virtual...
Article
Workflows are a set of tasks and the dependencies among them, and they are divided into scientific and business categories. To avoid the problems of centralized execution of workflows, they are broken into segments, which is known as fragmentation. To fragment a workflow, it is highly important to consider the dependencies among tasks and runtime conditions....
Article
Full-text available
Scheduling means distributing tasks among computational resources with specific goals in mind. Cloud computing faces a dynamic and rapidly evolving situation, and tasks could be assigned to computational resources in numerous different ways. As a consequence, scheduling of tasks in cloud computing is considered an NP-hard problem. Meta-heuri...
Article
Full-text available
Cloud computing is a scalable computing infrastructure in which the number of resources and requests changes dynamically. There are usually a huge number of tasks and resources in cloud computing. A scheduler allocates resources to tasks, an operation with a large number of parameters that is NP-hard. Approaches such as me...
Article
Placement of virtual machines (VMs) on physical nodes, as a sub-problem of dynamic VM consolidation, has been driven mainly by energy efficiency and performance objectives. However, due to varying workloads in VMs, placement of the VMs can cause a violation of the Service Level Agreement (SLA). In this paper, the VM placement is regarded as a bin packing...
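Viewing VM placement as bin packing, a first-fit-decreasing sketch shows the basic idea. This is illustrative only: the paper's SLA-aware algorithm and constraints differ, and all names are hypothetical.

```python
# First-fit-decreasing sketch of VM-to-host placement as bin packing
# (illustrative only; real placement adds SLA and energy constraints).
def place_vms(vm_loads, host_capacity):
    hosts = []  # each host is the list of VM loads placed on it
    for load in sorted(vm_loads, reverse=True):  # largest VMs first
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # fits on an already-open host
                break
        else:
            hosts.append([load])  # open a new physical machine
    return hosts

# Five VM loads packed into unit-capacity hosts.
print(len(place_vms([0.5, 0.7, 0.3, 0.2, 0.4], 1.0)))
```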
Article
Optimization means finding the best solution from among an infinite number of possible solutions to a complex problem. Several methods are generally used for solving complex problems, such as meta-heuristic algorithms inspired by living organisms. Raven Roosting Optimization (RRO) is an algorithm inspired by the mimicking behavior of rav...
Article
Resource management plays a key role in a cloud environment in which applications face dynamically changing workloads. However, such dynamic and unpredictable workloads can lead to performance degradation of applications. To meet the Quality of Service (QoS) requirements based on Service Level Agreements (SLA), the resource management strategi...
Article
Full-text available
One of the current discussions concerning cloud computing environments involves the issue of failure prediction, which influences the delivery of on-demand services through the Internet. Proactive failure prediction techniques play an important role in reducing the undesirable consequences produced by failures within high-performance systems. Accordingly,...
Article
Full-text available
Plagiarism, defined as "the wrongful appropriation of other writers' or authors' works and ideas without citing or informing them", poses a major challenge to the spread and publication of knowledge. Plagiarism falls into the four categories of direct, paraphrasing (re-writing), translation, and combinatory. This paper addresses the translational plag...
Article
Full-text available
A significant aspect of Cloud-computing is scheduling of a large number of real-time concurrent workflow instances. Most of the existing scheduling algorithms are designed for a single complex workflow instance. This study examined instance-intensive workflows bounded by SLA constraints, including user-defined deadlines. The scheduling method for t...
Article
A workflow model is the computerized representation of a business or scientific process. It defines the starting and ending conditions of the process, the activities in the process, control flow and data flow among these activities, etc. A partitioning method creates workflow fragments that group some of the workflow model elements (activities, con...
Article
Full-text available
In recent years, it has become possible to compose existing services when a user's request cannot be satisfied by a single web service. Web service composition faces several challenges, among which is the rapid growth in the number of available web services, leading to an increased number of web services offering the same functionalities. The...
Article
Full-text available
Software as a Service (SaaS) in Cloud Computing offers reliable access to software applications for end users over the Internet without direct investment in infrastructure and software. SaaS providers utilize resources of internal datacenters or rent resources from a public Infrastructure as a Service (IaaS) provider in order to serve their custome...
Article
Full-text available
Dynamic consolidation of virtual machines (VMs) is an effective technique, which can lead to improvement of energy efficiency and resource utilization in cloud data centers. However, due to varying workloads in applications, consolidating the virtual machines can cause a violation in Service Level Agreement. The main goal of the dynamic VM consolid...
Article
Full-text available
In this decade, grid computing has been a well-known solution for harnessing a large collection of connected heterogeneous systems and sharing various combinations of resources. It creates a simple but large, powerful, and self-managing virtual computer, which leads to the problem of load balancing. The main goal of load balancing is to provide a distribute...
Article
Full-text available
Workload and resource management are two essential functions provided at the service level of a Grid software infrastructure. Accordingly, efficient load balancing algorithms are fundamentally important for improving the global throughput of these environments. Although previous work shows that the ant colony algorithm works well for load balancing, the...
Article
Full-text available
Background: Nowadays, web services are among the most widely used components of SOA and service computing. This paper investigates the problem of dynamically selecting a web service based on QoS and composing a set of web services to conduct a business task. One of its main objectives is selecting web services based on non-funct...
Article
Full-text available
Plagiarism is one of the common problems present in all organizations that deal with electronic content. At present, plagiarism detection tools only detect word-for-word or exact copies of phrases, and paraphrasing is often missed. One of the successful and applicable methods in paraphrase detection is the fuzzy method. In this study, a new fuzzy ap...
Article
In recent years, adaptive job schedulers have attracted many researchers, but despite many efforts, there are still many challenges in this area. The aim of this paper is to provide a better understanding of adaptive job schedulers in MapReduce and to identify important research directions in this area. In this pa...
Article
A web service is an application that is published on the web and can be discovered and used automatically. In many cases, a web service cannot provide the capability that the user has requested. Therefore, according to the user request, it should be possible to combine various services and produce a new composite service for user request. AI planni...
Conference Paper
The rapidly growing demand for computational power by modern applications in Cloud Computing has led to the creation of large-scale data centers. Such data centers consume huge amounts of electrical energy, emitting a great deal of CO2 into the environment. One effective way to reduce energy consumption is consolidating the VMs using dynamic migra...
Article
Centralized business process execution in the Service Oriented Architecture (SOA) suffers from lack of scalability. Decentralization of business processes is introduced as an alternative approach to address this shortcoming. However, decentralization methods do not consider adaptability of created fragments with runtime environment. The Adaptable a...
Article
Full-text available
In the Service Oriented Architecture (SOA), BPEL specified business processes are executed by non-scalable centralized orchestration engines. In order to address the scalability issue, decentralized orchestration engines are applied, which decentralize BPEL processes into static fragments at design time without considering runtime requirements. The...
Article
Full-text available
BPEL-specified business processes in the Service Oriented Architecture (SOA) are executed by non-scalable centralized orchestration engines. In order to resolve scalability issues, the centralized engines are clustered, which is not a final solution either. Alternatively, several decentralized orchestration engines have emerged with the purpos...
Conference Paper
Full-text available
Service Oriented Architecture (SOA) is by far the most pervasive architecture, comprising several building blocks, among which the orchestration engine receives special focus. Although there are a number of centralized orchestration engines that execute business processes described by the BPEL language in SOA, one may find several decentralized orchestrat...
Conference Paper
Service oriented architecture (SOA) includes several building blocks, among which the orchestration engine demands special attention. Although there are a number of centralized orchestration engines that execute business processes described by the BPEL language in SOA, one may find several decentralized orchestration engines, and their purpose is to decompose...
Article
Full-text available
Business processes in Service Oriented Architecture (SOA) are run using an orchestration engine. The point here is that running a huge number of business processes under a centralized orchestration engine degrades the capabilities of the runtime environment. Apart from this, running clustered orchestration engines as an alternative way to obviate cent...