About
208 Publications · 93,678 Reads
2,075 Citations
Additional affiliations: January 2000 – April 2016
Publications (208)
The occurrence of dynamic and interactive (D&I) events in high-performance computing (HPC) systems, which cannot be accounted for statically in the initial structure of scientific applications, can disrupt the behavior of the system load balancer, leading to malfunction or even a complete halt of its operation. The load balancer is responsible for cre...
With the rapid expansion of the Internet of Things and the surge in the volume of data it exchanges, cloud computing has become more significant. To address the challenges of the cloud, the idea of fog computing was formed. The heterogeneity, distribution, and resource limitations of fog nodes in turn led to the formation of the s...
The continual increase in the amount of data generated by social media, IoT devices, and monitoring systems has motivated the use of Distributed Data Stream Processing (DSP) systems to harness data in real time. The scheduling of processing tasks in DSP systems across the machines of a cluster or cloud environment is an NP-hard problem. D...
Existing Data Stream Processing (DSP) systems perform poorly when encountering heavy workloads, particularly on clustered sets of (heterogeneous) computers. Elasticity and changing the application's degree of parallelism can limit the performance degradation in the face of varying workloads that negatively impact the overall application response time. Elast...
Fast execution of functions is an inevitable challenge in the serverless computing landscape. Inefficient dispatching, fluctuations in invocation rates, burstiness of workloads, and the wide range of execution times of serverless functions result in load imbalance and overloading of the worker nodes of serverless platforms, imposing high latency on invo...
Due to the rapid growth in the production and dissemination of big data from various sources, the speed of data processing must inevitably increase. In distributed big data processing systems such as cloud computing, the task scheduler is responsible for mapping a large set of various tasks to a set of possibly heterogeneous computing nodes in a way to...
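The teaser above describes the heterogeneous task-mapping problem only in general terms. Purely for orientation, here is a minimal sketch of a classic greedy minimum-completion-time baseline for this kind of mapping; the task sizes, node speeds, and function names are hypothetical, and this is not the paper's algorithm.

```python
# Minimal sketch of a minimum-completion-time (MCT) baseline for mapping
# tasks to heterogeneous nodes; task sizes and node speeds are hypothetical.

def mct_schedule(task_sizes, node_speeds):
    """Greedily assign each task to the node that finishes it earliest."""
    finish = [0.0] * len(node_speeds)      # current finish time per node
    assignment = []
    for size in task_sizes:
        # completion time of this task on each candidate node
        candidates = [(finish[n] + size / node_speeds[n], n)
                      for n in range(len(node_speeds))]
        end, node = min(candidates)
        finish[node] = end
        assignment.append(node)
    return assignment, max(finish)          # mapping and makespan

if __name__ == "__main__":
    tasks = [8.0, 3.0, 5.0, 2.0, 9.0]       # hypothetical task sizes
    speeds = [1.0, 2.0]                      # hypothetical node speeds
    print(mct_schedule(tasks, speeds))
```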
A key feature of distributed stream processing (DSP) systems is the scheduling of operators on clustered computers. In scheduling, the assignment plan of operators to nodes of the cluster, the requirements of operators, and the computational power of each worker node must be considered, with the goal of finding a tradeoff between the communication laten...
The recent explosion of data of all kinds (persistent and short-lived) has imposed processing speed constraints on big data processing systems (BDPSs). One such constraint on running these systems in Cloud computing environments is to utilize as many parallel processors as required to process data fast. Consequently, the nodes in a Cloud environme...
The execution of complex event processing (CEP) applications on a set of clustered homogeneous computing nodes is latency-sensitive, especially when workload conditions change widely at runtime. To manage the varying workloads of nodes in a scalable and cost-effective manner, adjusting the application parallelism at runtime is critical. To tackle th...
The concept of unstructured Peer-to-Peer (P2P) systems, free from any structural constraints, has put forward an appropriate paradigm for sharing a wide assortment of resources efficiently in a distributed manner. More importantly, the architecture and concepts of unstructured P2P systems have permeated diverse spheres of today's successful...
In order to evaluate the tolerance to water deficit stress of 47 different ecotypes of Iranian cannabis (Cannabis sativa L.), a split-plot experiment based on a randomized complete block design with two replications was conducted at the Research Station of the University of Tehran in 2015. Ecotypes were considered as the minor factor and water defi...
Computer systems are designed to make resources available to users, and users may be interested in some resources more than others; therefore, a coordination scheme is required to satisfy the users' requirements. This scheme may implement certain policies such as "never allocate more than X units of resource Z". One policy that is of particular inte...
One of the pivotal challenges of unstructured Peer-to-Peer (P2P) systems is resource discovery. Search mechanisms generally utilize either blind or informed search strategies; in informed strategies, nodes locally store metadata to shorten resource discovery time compared to blind search mechanisms. The dynamic behavior of P2P systems profoundly affects the performance of an...
Software systems development has nowadays moved towards the dynamic composition of services that run on distributed infrastructures, in line with continuous changes in system requirements. Consequently, software developers need to tailor project-specific methodologies to fit their methodology requirements. Process patterns present a suitable solutio...
Service-orientation is a promising paradigm that enables the engineering of large-scale distributed software systems using rigorous software development processes. The existing problem is that every service-oriented software development project often requires a customized development process that provides specific service-oriented software engineer...
Idle-state governors partially turn off idle CPUs, allowing them to go to states known as idle-states to save power. Exiting from these idle-states, however, imposes delays on the execution of tasks and aggravates tail latency. Menu, the default idle-state governor of Linux, predicts periods of idleness based on historical data and the disk I/O...
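Menu's actual predictor is considerably more elaborate than this teaser can convey; the sketch below only illustrates the general idea of history-based idle-period prediction with an exponentially weighted average. The class name, smoothing constant, and C-state table are hypothetical.

```python
# Minimal sketch of history-based idle-period prediction, the general idea
# behind idle-state governors such as Menu; constants are hypothetical.

class IdlePredictor:
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # smoothing factor for the EWMA
        self.estimate_us = 0.0      # predicted next idle period (microseconds)

    def observe(self, idle_us):
        """Fold a measured idle period into the running estimate."""
        self.estimate_us = (self.alpha * idle_us
                            + (1.0 - self.alpha) * self.estimate_us)

    def pick_state(self, states):
        """Pick the deepest idle state whose exit cost pays off.
        `states` is a list of (name, target_residency_us), shallow to deep."""
        chosen = states[0]
        for state in states:
            if state[1] <= self.estimate_us:
                chosen = state
        return chosen

if __name__ == "__main__":
    p = IdlePredictor()
    for idle in [120, 480, 500, 450]:   # hypothetical observed idle periods
        p.observe(idle)
    # hypothetical C-state table: (name, target residency in microseconds)
    print(p.pick_state([("C1", 2), ("C3", 150), ("C6", 600)]))
```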
The potential for product recovery of metabolites can be improved by several methods; one of the most common techniques is elicitation. Elicitors can adjust multiple control points and trigger the expression of some key genes. This project was conducted with the aim of increasing valuable secondary metabolites such as Δ9-tetrahydrocannabinol (...
Next-generation sequencing (NGS) technologies are clearly poised to be the next big revolution in personalized healthcare and have caused the amount of available sequencing data to grow exponentially. While NGS data processing has become a major challenge for individual genomic research, commodity computers as a cost-effective platform...
The need for fast processing of high-volume event streams has triggered the deployment of parallel processing models and techniques for complex event processing. It is, however, hard to parallelize stateful operators, making the implementation of distributed complex event processing systems very challenging. A well-defined parallel processing mod...
An essential requirement of large-scale event-driven systems is the real-time detection of complex patterns of events from a large number of basic events and the derivation of higher-level events using complex event processing (CEP) mechanisms. Centralized CEP mechanisms are, however, not scalable and thus inappropriate for large-scale domains with many...
Complex event processing (CEP) techniques have been used for business process (BP) monitoring of large organizations with a high number of complex BPs and high rates of BP instance generation, resulting in a high number of monitoring rules and high rates of events. To circumvent the scale limitation of centralized CEP techniques, we present a decentral...
Although virtualization technology has recently been applied to next-generation distributed high-performance computing systems, the theoretical aspects of scheduling jobs in these virtualized environments have not been sufficiently studied, especially in online and non-clairvoyant cases. Virtualization of computing resources results in interference and virtualizat...
IaaS cloud providers typically leverage virtualization technology (VT) to multiplex underlying physical resources among virtual machines (VMs), thereby enhancing the utilization of physical resources. However, the contention on shared physical resources brought about by VT is one of the main causes of the performance variability that acts as a barr...
Background
One of the pivotal challenges in today's genomic research domain is the fast processing of voluminous data such as that engendered by high-throughput Next-Generation Sequencing technologies. On the other hand, BLAST (Basic Local Alignment Search Tool), a long-established and renowned tool in Bioinformatics, has shown itself to be incredibly...
Cloud computing users are faced with a wide variety of services to choose from. Consequently, a number of cloud service brokers (CSBs) have emerged to help users in their service selection process. This paper reviews the recent approaches that have been introduced and used for cloud service brokerage and discusses their challenges accordingly. We p...
Large-scale online services parallelize the sub-operations of a user's request across a large number of physical machines (service components) so as to enhance responsiveness. Even a temporary spike in the latency of any service component can notably inflate the end-to-end delay; therefore, the tail of the latency distribution of service components has...
Large-scale interactive Web services break a user's request into many sub-requests and send them to a large number of independent servers so as to consult multi-terabyte datasets instantaneously. Service responsiveness hinges on the slowest server, making the tail of the latency distribution of individual servers a matter of great concern. A large nu...
Extremely heterogeneous software stacks have encouraged the use of system virtualization technology for execution of composite high performance computing (HPC) applications to enable full utilization of extreme-scale HPC systems (ExaScale). Parts of composite applications, called loosely-coupled components, consist of a set of loosely-coupled CPU-i...
Sequence similarity, as a special case of data-intensive applications, is one of the applications most in need of parallelization. Clustered commodity computers, as a cost-effective platform for distributed and parallel processing, can be leveraged to parallelize sequence similarity. However, manually designing and developing parallel programs on comm...
Smart homes are the new generation of homes where pervasive computing is employed to make the lives of the residents more convenient. Human activity recognition (HAR) is a fundamental task in these environments. Since critical decisions will be made based on HAR results, accurate recognition of human activities with low uncertainty is of crucial im...
This paper presents a high performance technique for virtualization-unaware scheduling of compute-intensive synchronized (i.e., tightly-coupled) jobs in virtualized high performance computing systems. Online tightly-coupled jobs are assigned/reassigned to clustered virtual machines based on synchronization costs. Virtual machines are in turn assign...
Detection of stateful complex event patterns using parallel programming features is a challenging task because of the statefulness of event detection operators. Parallelization of event detection tasks needs to be implemented in a way that keeps track of state changes caused by newly arriving events. In this paper, we describe our implementation of a customize...
Cloud computing environments (CCEs) are expected to deliver their services with the qualities specified in service level agreements. On the other hand, they typically employ virtualization technology to consolidate multiple workloads on the same physical machine, thereby enhancing the overall utilization of physical resources. Most existing virtualization techno...
Although Cloud Computing Environments (CCEs) host many I/O-intensive applications such as Web services, big data, and virtual desktops, virtual machine monitors like Xen impose high overhead on the performance CCEs deliver when hosting such applications. Studies have shown that hypervisors such as Xen favor compute-intensive workloads whil...
Exponential growth of information in the Cyberspace alongside rapid advancements in its related technologies has created a new mode of competition between societies to gain information domination in this critical and invaluable space. It has thus become quite critical to all stakeholders to play a leading and dominant role in the generation of info...
We describe an approach for a custom complex event processing engine using the Message Passing Interface (MPI) in the C++ programming language. Our approach utilizes a multi-processor infrastructure and distributes its load across multiple processes, expecting each process to run on one processor. A dispatching process receives events and distributes them on s...
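The abstract is truncated before the dispatching details; below is a minimal Python stand-in for the dispatch pattern it describes (the paper itself uses C++/MPI). One dispatching process hands incoming events out round-robin to worker processes; the event names are hypothetical.

```python
# Minimal sketch of a dispatcher/worker event-processing layout using
# multiprocessing as a stand-in for MPI ranks; events are hypothetical.
from multiprocessing import Process, Queue

def worker(wid, inbox):
    """Each worker process consumes events from its own queue."""
    while True:
        event = inbox.get()
        if event is None:           # poison pill: shut down
            break
        print(f"worker {wid} handled {event}")

if __name__ == "__main__":
    queues = [Queue() for _ in range(3)]
    procs = [Process(target=worker, args=(i, q))
             for i, q in enumerate(queues)]
    for p in procs:
        p.start()
    # The dispatching process receives events and distributes them
    # round-robin, one queue per worker process.
    for i, event in enumerate(["e1", "e2", "e3", "e4", "e5"]):
        queues[i % len(queues)].put(event)
    for q in queues:
        q.put(None)
    for p in procs:
        p.join()
```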
Cloud computing systems have emerged as a type of distributed system in which a multitude of interconnected machines are gathered and recruited over the Internet to help solve a computation- or data-intensive problem. There are many cases in which Cloud techniques alone are not able to solve the job due to the nature of the tasks. To o...
Cloud computing environments have introduced a new model of computing by shifting the location of computing infrastructure to the Internet network to reduce the cost associated with the management of hardware and software resources. The Cloud model uses virtualization technology to effectively consolidate virtual machines (VMs) into physical machin...
Peer-to-peer (P2P) systems have been developed with the goal of providing support for transparent and efficient sharing of scalable distributed resources, wherein size scalability is limited by the costs of all types of transparencies, especially data access transparency, which are due to the need for frequent data exchanges between peers an...
Not very long ago, organizations used to identify their customers by means of one-factor authentication mechanisms. In today's world, however, these mechanisms cannot overcome the new security threats, at least when it comes to high-risk situations. Hence, identity providers have introduced a variety of two-factor authentication mechanisms. It may b...
Resource overloading is one of the main challenges in computing environments. In this case, a new resource should be discovered to which the extra load can be transferred; otherwise, drastic performance degradation results. Thus, it is highly important to discover the appropriate resource in the first place. So far, several resource discovery mechanisms have bee...
Service orientation is a promising paradigm that enables the engineering of large-scale distributed software systems using rigorous software development processes. The existing problem is that every service-oriented software development project often requires a customized development process that provides specific service-oriented software engineer...
In this paper, we propose an efficient resource discovery framework allowing pure unstructured peer-to-peer systems to respond to requests at run time with a high success rate while preserving the local autonomy of member machines. There are five units in the proposed framework that respectively gather information about the status of resources, mak...
High Performance Cluster Computing Systems (HPCSs) deliver the best performance because their configuration is customized at design time to the features of the problem to be solved. Therefore, if the problem has a static nature and features, the best customized configuration can be achieved. New generations of scientific and industrial problems...
One of the benefits of virtualization technology is the provision of secure and isolated computing environments on a single physical machine. However, the use of virtual machines for this purpose often degrades overall system performance due to emulation costs, for example, packet filtering on every virtual machine. To allow virtual mac...
The use of virtualization technology (VT) has become widespread in modern datacenters and Clouds in recent years. In spite of their many advantages, such as provisioning of isolated execution environments and migration, current implementations of VT do not provide effective performance isolation between virtual machines (VMs) running on a physical...
Extraordinarily large datasets of high-performance computing applications require improvement in existing storage and retrieval mechanisms. Moreover, the widening gap between data processing throughput and I/O throughput will bind system performance to storage and retrieval operations and markedly reduce the overall performance of high...
This paper proposes two complementary virtual machine monitor (VMM) detection methods. These methods can be used to detect any VMM that is designed for the x86 architecture. The first method works by finding probable discrepancies in the hardware privilege levels of the guest operating system's kernel on which user applications run. The second method works b...
Wireless Sensor Actor Networks (WSANs) have contributed to the advancement of ubiquitous computing, wherein time and energy considerations for performing the tasks of ubiquitous applications are critical. Therefore, real-timeliness and energy-awareness are amongst the grand challenges of WSANs. In this paper, we present a context-aware task distributio...
In recent years, Cloud Computing has been one of the top ten new technologies, providing various services such as software, platform, and infrastructure for Internet users. Cloud Computing is a promising IT paradigm which enables the Internet's evolution into a global market of collaborating services. In order to provide better services...
This chapter describes one approach with which legacy systems can be augmented to provide additional functionality. This is a helpful approach to quickly upgrading systems, and to adapt them to new requirements. We describe this in the context of authentication.
High-performance computing (HPC) clusters are currently faced with two major challenges, namely the dynamic nature of the new generation of applications and the heterogeneity of platforms, if they are going to be useful for exascale computing. Processes running these applications may well demand unpredictable requirements and changes to system confi...
This paper presents a model for resolving two main issues of time in e-commerce. The first issue is the time value of e-commerce, which represents the value of each moment of the commerce time from the perspective of buyers and sellers. Buyers and sellers can use this model to calculate the time value at each moment of time and accordingly decide whe...
Spelling errors in digital documents are often caused by operational and cognitive mistakes, or by a lack of full knowledge about the language of the written documents. Computer-assisted solutions can help detect such errors and suggest replacements. In this paper, we present a new string distance metric for the Persian language to rank respelling suggest...
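The metric itself is truncated out of the teaser. For orientation only, here is a minimal sketch of the general family it belongs to: a Levenshtein distance with a weighted substitution table. The Persian-specific weights below are hypothetical, not the paper's.

```python
# Minimal sketch of a weighted edit distance for ranking respelling
# suggestions; the substitution-cost table is hypothetical.

def weighted_edit_distance(a, b, sub_cost):
    """Levenshtein distance with a per-pair substitution cost table."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i-1] == b[j-1] else sub_cost.get((a[i-1], b[j-1]), 1.0)
            d[i][j] = min(d[i-1][j] + 1.0,        # deletion
                          d[i][j-1] + 1.0,        # insertion
                          d[i-1][j-1] + sub)      # substitution
    return d[m][n]

if __name__ == "__main__":
    # hypothetical: visually/orthographically close pairs cost less
    costs = {("ي", "ی"): 0.1, ("ك", "ک"): 0.1}
    print(weighted_edit_distance("كتاب", "کتاب", costs))
```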
Power efficiency is one of the main challenges in large-scale distributed systems such as datacenters, Grids, and Clouds. One can study the scheduling of applications in such large-scale distributed systems by representing each application as a set of precedence-constrained tasks and modeling it by a Directed Acyclic Graph. In this paper we address t...
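As background for the DAG formulation mentioned above, the sketch below shows a plain list-scheduling baseline over a topological order. The graph, task costs, and processor count are hypothetical, and the paper's power-awareness is deliberately not modeled here.

```python
# Minimal sketch of list scheduling for precedence-constrained tasks on a
# DAG; communication costs and power are ignored, inputs are hypothetical.
from collections import deque

def list_schedule(succ, cost, num_procs):
    """Schedule DAG tasks in topological order onto the earliest-free processor."""
    indeg = {t: 0 for t in succ}
    for t in succ:
        for s in succ[t]:
            indeg[s] += 1
    ready = deque(t for t in succ if indeg[t] == 0)
    proc_free = [0.0] * num_procs
    finish = {}
    while ready:
        t = ready.popleft()
        # earliest start: all predecessors done and some processor free
        pred_done = max((finish[p] for p in succ if t in succ[p]), default=0.0)
        p = min(range(num_procs), key=lambda i: proc_free[i])
        start = max(proc_free[p], pred_done)
        finish[t] = start + cost[t]
        proc_free[p] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return finish                      # per-task finish times; max is makespan

if __name__ == "__main__":
    dag = {"a": ["c"], "b": ["c"], "c": ["d"], "d": []}   # hypothetical DAG
    costs = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 2.0}
    print(list_schedule(dag, costs, num_procs=2))
```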
The combination of sensor and actor nodes in wireless sensor actor networks (WSANs) has created new challenges, notably in coordination. In this paper, we survey, categorize, and bring into perspective existing research on weak connectivity and its impacts on coordination, ranging from node failures to the inability of actor nodes to communicate with...
One of the main challenges of unstructured peer-to-peer (P2P) systems that greatly affects performance is resource searching. The early proposed mechanisms use blind searching, but they have many shortcomings. Informed search strategies perform better than blind ones, but they still suffer from low success rates and long...
The deployment of sensors without enough coverage can result in unreliable outputs in wireless sensor networks (WSNs). Thus, sensing coverage is one of the most important quality-of-service factors in WSNs. A useful metric for quantifying coverage reliability is the coverage rate, that is, the area covered by sensor nodes in a region of interest....
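One simple way to make the coverage-rate metric concrete is Monte Carlo estimation: sample random points in the region and count the fraction within sensing range of some node. The sketch below is illustrative only; the deployment, sensing radius, and region size are hypothetical, not drawn from the paper.

```python
# Minimal sketch of estimating the coverage rate (covered area / region area)
# of a WSN by Monte Carlo sampling; positions and radii are hypothetical.
import random

def coverage_rate(sensors, radius, width, height, samples=100_000):
    """Fraction of random points in the region within `radius` of any sensor."""
    covered = 0
    for _ in range(samples):
        x, y = random.uniform(0, width), random.uniform(0, height)
        if any((x - sx) ** 2 + (y - sy) ** 2 <= radius ** 2
               for sx, sy in sensors):
            covered += 1
    return covered / samples

if __name__ == "__main__":
    nodes = [(random.uniform(0, 100), random.uniform(0, 100))
             for _ in range(20)]                      # hypothetical deployment
    print(f"coverage rate ~ {coverage_rate(nodes, 15, 100, 100):.3f}")
```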
Structured peer-to-peer (P2P) systems have been recognized as an efficient approach to solve the resource discovery problem in large-scale dynamic distributed systems. The efficiency of structured P2P resource discovery approaches is attributed to their structured property. However, system dynamism caused by changes in the system membership, i.e.,...
Load balancing is one of the main challenges of structured P2P systems that use distributed hash tables (DHT) to map data items (objects) onto the nodes of the system. In a typical P2P system with N nodes, the use of random hash functions for distributing keys among peer nodes can lead to O(log N) imbalance. Most existing load balancing algorithms...
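To make the O(log N) imbalance concrete, the classic mitigation underlying many of the algorithms the teaser alludes to is to give each physical node several virtual positions on the hash ring. The sketch below contrasts 1 versus 32 virtual servers per node; node names and counts are hypothetical, and this is not the paper's own algorithm.

```python
# Minimal sketch of virtual servers for DHT load balancing: more positions
# per physical node on the hash ring evens out key ownership.
import hashlib
from bisect import bisect
from collections import Counter

def h(key):
    """Map a string to a point on the hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

def build_ring(nodes, vnodes):
    """Place `vnodes` positions per physical node on the ring."""
    return sorted((h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))

def owner(ring, key):
    """A key is owned by its clockwise successor on the ring."""
    points = [p for p, _ in ring]
    return ring[bisect(points, h(key)) % len(ring)][1]

if __name__ == "__main__":
    nodes = [f"node{i}" for i in range(8)]
    for vnodes in (1, 32):            # without and with virtual servers
        ring = build_ring(nodes, vnodes)
        load = Counter(owner(ring, f"key{k}") for k in range(10_000))
        print(vnodes, "vnode(s):", max(load.values()), "max keys per node")
```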
Interprocess communication (IPC) is a well-known technique commonly used by programs running on homogeneous distributed systems. However, it cannot be used readily and efficiently by programs running on heterogeneous distributed systems. This is because it must be given a uniform interface, either by a set of middleware or, more efficiently, properly...
In many applications of wireless sensor actor networks (WSANs) that often run in harsh environments, the reduction of completion times of tasks is highly desired. We present a new time-aware, energy-aware, and starvation-free algorithm called Scate for assigning tasks to actors while satisfying the scalability and distribution requirements of WSANs...
Existing self-healing mechanisms for Web services constantly monitor services and their computational environment, analyze system state, determine failure occurrences, and execute built-in recovery plans (the MAPE loop). We propose a more proactive self-healing mechanism that uses a multi-layer perceptron artificial neural network (ANN) and a health-score mechanism to learn abou...
In this paper, a method for fast processing of data stream tuples in the parallel execution of continuous queries over a multiprocessing environment is proposed. A copy of the query plan is assigned to each of the processing units in the multiprocessing environment. Dynamic and continuous routing of input data stream tuples among the graph constructed by t...
Being informed in a timely manner of what is registered in the Web space can greatly help psychologists, marketers, and political analysts to familiarize themselves, analyze, make decisions, and act correctly based on society's different needs. The great volume of information in the Web space hinders continuous online investigation of the whole space of the W...
With the ever-increasing advancement of mobile device technology and its pervasive usage, users expect to run their applications on mobile devices and get the same performance as if they were running them on powerful non-mobile computers. There is a challenge, though, in that mobile devices deliver lower performance than traditional...
Cloud computing enjoys the many attractive attributes of virtualization technology, such as consolidation, isolation, migration, and suspend/resume support. In this model of computing, some desirable features such as scalability are provided by means of a new type of building block called the virtual machine (VM). As with any other construction block...
In this paper, we present a mathematical approach based on queuing theory to minimize the average number of tasks allocated at the singleton network sink node that have not yet been dispatched to actors for execution, in wireless sensor actor networks with semi-automated architecture. We calculate the best rate of dispatching of tasks by t...
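The teaser doesn't reveal the paper's actual model. Purely as a reminder of the kind of quantity queuing theory yields here, the sketch below computes the mean number of tasks in the system and still awaiting dispatch using textbook M/M/1 formulas; the arrival and dispatch rates are hypothetical.

```python
# Minimal sketch of a queuing-theoretic sizing calculation using textbook
# M/M/1 formulas; the rates are hypothetical, not the paper's model.

def mm1_metrics(arrival_rate, service_rate):
    """Mean number in system (L) and waiting (Lq) for an M/M/1 queue."""
    rho = arrival_rate / service_rate        # utilization, must be < 1
    if rho >= 1:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    L = rho / (1 - rho)                      # mean tasks in the system
    Lq = rho ** 2 / (1 - rho)                # mean tasks still awaiting dispatch
    return L, Lq

if __name__ == "__main__":
    # hypothetical: tasks arrive at the sink at 8/s, dispatched at 10/s
    L, Lq = mm1_metrics(8.0, 10.0)
    print(f"avg in system: {L:.1f}, avg awaiting dispatch: {Lq:.1f}")
```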
Packet compression is a well-known technique for improving the performance of low-speed networks such as WANs. This technique is also effective in networks with a high cost per transmitted byte, namely wireless networks. Most implementations of this technique as a network service, like IPComp, are not transparent and require modifications either to...
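To illustrate the basic mechanism (not the paper's transparent network service), the sketch below compresses a payload before sending and falls back to the raw bytes when compression doesn't pay off, which is also how IPComp-style services avoid inflating incompressible packets. The tag bytes and message are hypothetical.

```python
# Minimal user-level sketch of per-payload compression with a raw fallback;
# the one-byte tag scheme and the sample message are hypothetical.
import zlib

def compress_packet(payload: bytes) -> bytes:
    """Return a tagged payload: b'C' + deflated data, or b'R' + raw data."""
    deflated = zlib.compress(payload, level=6)
    if len(deflated) + 1 < len(payload):
        return b"C" + deflated
    return b"R" + payload            # incompressible: send raw

def decompress_packet(packet: bytes) -> bytes:
    tag, body = packet[:1], packet[1:]
    return zlib.decompress(body) if tag == b"C" else body

if __name__ == "__main__":
    msg = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 20
    wire = compress_packet(msg)
    assert decompress_packet(wire) == msg
    print(f"{len(msg)} bytes -> {len(wire)} bytes on the wire")
```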
Task allocation is a critical issue in proper engineering of cooperative applications in embedded systems with latency and energy constraints, as in wireless sensor and actor networks (WSANs). Existing task allocation algorithms are mostly concerned with energy savings and ignore time constraints and thus increase the makespan of tasks in the netwo...
Cluster computing systems require managing their resources and running processes dynamically in an efficient manner. Preemptive process migration is one such mechanism, which tries to improve the overall performance of a cluster system running independent processes. In this paper, we show that blind migration of processes at runtime by such a mechanism...
Mosix has long been recognized as a distributed operating system leader in the high performance computing community. In this paper, we analyze the load-balancing capabilities of a Mosix cluster in handling requests for different types of resources through real experiments on a Mosix cluster comprising heterogeneous machines.
The ever-increasing demand for using resource-constrained mobile devices to run more resource-intensive applications has initiated the development of cyber foraging solutions that offload computationally intensive tasks, in part or in whole, to more powerful surrogate stationary computers and run them on behalf of mobile devices as required....
With the advent of virtualization technology and its propagation into the infrastructure of Cloud distributed systems, there is an emerging demand for more effective means of communication between virtual machines (VMs) residing on distributed memory than traditional message-based communication. This paper presents a distributed virtual shared me...
Request forwarding is an efficient approach to discovering resources in distributed systems because it achieves one of the main goals of distributed systems, namely scalability. Despite achieving reasonable scalability, this approach suffers from long response times to resource requests. Several solutions such as learning-based request forw...