Hardware design of a new genetic based disk scheduling method

Real-Time Systems (Impact Factor: 1). 01/2011; 47(1):41-71. DOI: 10.1007/s11241-010-9111-8
Source: DBLP


Disk management is an increasingly important aspect of operating-system research and development because it has a great effect
on system performance. As the gap between processor and disk performance continues to widen in modern systems, access to
mass storage is a common bottleneck that ultimately limits overall system performance. In this paper, we propose a hardware
architecture for a new genetic-based real-time disk scheduling method. In addition, to make the simulation precise, a neural network
is proposed to model the seek time of disks. Simulation results show that the hardware implementation of the proposed algorithm
outperforms the software implementation in terms of execution time, and outperforms other related works in terms of the number
of tasks that miss their deadlines and the average seek.
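The paper's hardware design is not reproduced here, but the scheduling idea can be sketched in software. Below is a minimal illustration of a genetic scheduler of this kind, in which chromosomes are service orders for pending requests and fitness penalizes both seek distance and missed deadlines. The request representation, the linear seek-time model, and all GA parameters are assumptions for illustration, not the authors' actual design:

```python
import random

# Illustrative pending requests as (track, deadline_ms) pairs -- an assumed
# representation, not the paper's.
REQUESTS = [(50, 40), (180, 90), (20, 25), (120, 70), (95, 55)]
SEEK_MS_PER_TRACK = 0.1  # assumed linear seek-time model
HEAD_START = 60          # assumed initial head position

def fitness(order):
    """Lower is better: total seek distance plus a heavy penalty per missed deadline."""
    pos, t, seek, missed = HEAD_START, 0.0, 0, 0
    for i in order:
        track, deadline = REQUESTS[i]
        dist = abs(track - pos)
        seek += dist
        t += dist * SEEK_MS_PER_TRACK
        if t > deadline:
            missed += 1
        pos = track
    return seek + 1000 * missed

def order_crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest in b's order."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

def evolve(pop_size=30, generations=100, mutation_rate=0.2):
    n = len(REQUESTS)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:2]                     # elitism: keep the two best
        while len(next_pop) < pop_size:
            a, b = random.sample(pop[:10], 2)  # truncation selection
            child = order_crossover(a, b)
            if random.random() < mutation_rate:
                x, y = random.sample(range(n), 2)
                child[x], child[y] = child[y], child[x]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The paper's contribution is implementing this kind of loop in hardware (where fitness evaluation and crossover can run in parallel) and feeding it a neural-network seek-time model instead of the linear approximation assumed above.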


Available from: Hossein Rahmani
  • ABSTRACT: Nine selection-survival strategies were implemented in a genetic algorithm experiment, and differences in terms of evolution were assessed. The moments of evolution (expressed as generation numbers) were recorded in a contingency of three strategies (i.e., proportional, tournament, and deterministic) for two moments (i.e., selection for crossover and mutation and survival for replacement). The experiment was conducted for the first 20,000 generations in 46 independent runs. The relative moments of evolution (where evolution was defined as a significant increase in the determination coefficient relative to the previous generation) when any selection-survival strategy was used fit a Log-Pearson type III distribution. Moreover, when distributions were compared to one another, functional relationships were identified between the population parameters, revealing a degeneration of the Log-Pearson type III distribution in a one-parametrical distribution that can be assigned to the chosen variable—evolution strategy. The obtained theoretical population distribution allowed comparison of the selection-survival strategies that were used. © 2012 Wiley Periodicals, Inc. Complexity, 2012. © 2012 Wiley Periodicals, Inc.
    No preview · Article · Jul 2012 · Complexity
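The three selection strategies compared in the abstract above (proportional, tournament, and deterministic) are standard GA operators. A compact sketch of each, assuming fitness scores where higher is better (illustrative code, not the authors' implementation):

```python
import random

def proportional_select(pop, scores):
    """Roulette wheel: pick with probability proportional to fitness."""
    r = random.uniform(0, sum(scores))
    acc = 0.0
    for ind, s in zip(pop, scores):
        acc += s
        if acc >= r:
            return ind
    return pop[-1]

def tournament_select(pop, scores, k=3):
    """Pick k individuals at random and return the fittest of them."""
    idx = random.sample(range(len(pop)), k)
    return pop[max(idx, key=lambda i: scores[i])]

def deterministic_select(pop, scores):
    """Always return the current best individual."""
    return pop[max(range(len(pop)), key=lambda i: scores[i])]

pop = ["a", "b", "c", "d"]
scores = [1.0, 4.0, 2.0, 3.0]
print(deterministic_select(pop, scores))  # "b"
```

The strategies differ in selection pressure: deterministic is greediest, tournament's pressure grows with k, and proportional is the gentlest, which is why the study's distribution of "moments of evolution" depends on which strategy is plugged in.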
  • ABSTRACT: It is known that evolution may lead to a new species, while adaptation may lead to a new variety. In this manuscript we present an analysis of the number of evolutions (defined as improvements of the score associated with the objective function of a genetic algorithm) in an experiment supervised by a genetic algorithm and conducted on the octan-1-ol/H2O partition coefficient of polychlorinated biphenyls. The numbers of evolutions resulting from nine implemented evolution strategies were investigated. Evolutions arising in the first 20,000 generations of 46 independent runs were recorded, and a distribution analysis was conducted for each evolution strategy. Without exception, the Weibull distribution fits the number of evolutions well at a significance level of 5% for every evolution strategy. Furthermore, the Weibull distribution could not be rejected when different merged samples were investigated. This article is protected by copyright. All rights reserved.
    No preview · Article · Jul 2013 · Chemical Biology & Drug Design
  • ABSTRACT: Server performance is one of the critical factors in the data grid environment. A large number of applications require access to huge volumes of data on grid servers, so an efficient, scalable and robust grid server that can handle many concurrent large file transfers is needed. In this paper, we analyze the bottlenecks of our grid servers and introduce user-space I/O scheduling, zero copy and an event-driven architecture in our grid server to improve server performance. User-space I/O scheduling can save almost 50% of I/O time when transferring a huge number of small files. With zero copy, grid servers can eliminate the CPU consumption of copying between kernel and user space and cut context-switch time by 63%. The event-driven architecture saves 30% of the CPU usage that a thread-driven architecture needs to reach its best performance. Combining these three optimizations in our grid servers, the full-load throughput of our system is 30% higher than traditional solutions while consuming only 60% of the CPU.
    No preview · Article · Mar 2014 · Simulation Modelling Practice and Theory
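Of the three grid-server optimizations described above, the event-driven architecture is the easiest to sketch in isolation. The minimal readiness-driven echo loop below (an illustrative sketch using Python's selectors module, not the paper's grid-server code) shows the pattern: a single thread multiplexes accept and I/O events instead of dedicating a thread to each connection.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def serve_once(listener):
    """Accept one connection and echo one message, driven entirely by readiness events."""
    sel.register(listener, selectors.EVENT_READ, data="accept")
    done = False
    while not done:
        for key, _events in sel.select(timeout=1):
            if key.data == "accept":
                conn, _addr = key.fileobj.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ, data="echo")
            else:
                msg = key.fileobj.recv(4096)
                if msg:
                    key.fileobj.sendall(msg)  # echo back
                sel.unregister(key.fileobj)
                key.fileobj.close()
                done = True
    sel.unregister(listener)

# Demo: one client round-trip through the event loop.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # ephemeral port
listener.listen()
listener.setblocking(False)
port = listener.getsockname()[1]

client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
serve_once(listener)
reply = client.recv(4096)
print(reply)
client.close()
listener.close()
sel.close()
```

A production server would keep connections registered across many events and combine this loop with zero-copy sends (e.g. os.sendfile) for large transfers; the sketch only shows the dispatch structure.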