ABSTRACT: In most existing studies of wavelength-division multiplexing networks, the problems of routing and wavelength assignment are generally treated separately, since producing optimal solutions for the two problems at the same time is NP-complete. We present four adaptive routing algorithms that, in contrast, consider the availability of wavelengths during the routing process. Our algorithms favor paths with the near-maximum number of available wavelengths between two nodes, resulting in improved load balancing. Simulations show that our algorithms reduce call blocking by nearly half compared with the least-loaded and the k-fixed routing algorithms in some small networks using the first-fit wavelength assignment policy. In addition, simulation and analysis show that the path lengths produced by our algorithms are almost the same as those of the other algorithms.
IEEE Journal on Selected Areas in Communications 11/2003
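To make the routing policy concrete, here is a minimal sketch (with a made-up four-node topology and wavelength sets, not the paper's actual algorithms or data) of an adaptive route choice that favors the candidate path with the most available wavelengths, followed by first-fit wavelength assignment:

```python
# Hypothetical topology: free_wavelengths[link] is the set of wavelength
# indices still unused on that (undirected) link.
free_wavelengths = {
    frozenset({"A", "B"}): {0, 1, 3},
    frozenset({"B", "C"}): {1, 3},
    frozenset({"C", "D"}): {0, 1, 2, 3},
    frozenset({"A", "D"}): {0, 2},
}

def common_wavelengths(path):
    """Wavelengths free on every link of the path (no wavelength conversion)."""
    common = None
    for u, v in zip(path, path[1:]):
        free = free_wavelengths[frozenset({u, v})]
        common = free if common is None else common & free
    return common or set()

def pick_route(candidate_paths):
    """Adaptive choice: favor the path with the most available wavelengths."""
    return max(candidate_paths, key=lambda p: len(common_wavelengths(p)))

def first_fit(path):
    """First-fit assignment: lowest-index wavelength free along the whole path."""
    avail = common_wavelengths(path)
    return min(avail) if avail else None

paths_A_to_C = [["A", "B", "C"], ["A", "D", "C"]]
best = pick_route(paths_A_to_C)
print(best, first_fit(best))
```

Counting wavelengths that are free end to end (rather than per-link load) is what steers traffic toward paths where first-fit is least likely to block.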
Information Processing Letters 01/2003; 85:93-97.
Web Communication Technologies and Internet-Related Social Issues - HSI 2003, Second International Conference on Human Society@Internet, Seoul, Korea, June 18-20, 2003, Proceedings; 01/2003
ABSTRACT: This paper proposes a new optical path restoration method for wavelength division multiplexing (WDM) networks. In the method, the protection path is established by connecting several sub-lightpaths from the source node to the destination node of the original working lightpath, as opposed to conventional path restoration, where a single protection lightpath between the source-destination pair performs restoration. The set of one or more consecutive lightpaths connecting a source-destination pair is called a semi-lightpath. Semi-lightpath-based restoration provides enhanced flexibility in protection path provisioning and reduces the cost of spare capacity reservation and wavelength conversion at intermediate optical cross-connect nodes. In terms of spare capacity utilization, our method shows a substantial reduction in spare capacity overhead compared with dedicated path restoration in all-optical networks without wavelength conversion, and shows similar capacity efficiency compared with shared path restoration in opaque networks with full wavelength conversion capability.
Global Telecommunications Conference, 2002. GLOBECOM '02. IEEE; 12/2002
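A toy model (made-up data, not the paper's provisioning algorithm) of stitching sub-lightpaths into one protection path: consecutive sub-lightpaths must share an endpoint, and a wavelength change at a junction implies conversion at that node only, rather than at every intermediate cross-connect.

```python
def semi_lightpath(sublightpaths):
    """Concatenate sub-lightpaths, each given as (node sequence, wavelength).

    Returns the full protection route and the junction nodes where
    wavelength conversion is needed (i.e., where the wavelength changes).
    """
    conversions = []
    for (nodes_a, wl_a), (nodes_b, wl_b) in zip(sublightpaths, sublightpaths[1:]):
        assert nodes_a[-1] == nodes_b[0], "sub-lightpaths must be contiguous"
        if wl_a != wl_b:
            conversions.append(nodes_a[-1])
    full_route = [sublightpaths[0][0][0]]
    for nodes, _ in sublightpaths:
        full_route.extend(nodes[1:])
    return full_route, conversions

# Hypothetical example: three sub-lightpaths, one wavelength change at B
subs = [(["S", "A", "B"], 0), (["B", "C"], 2), (["C", "D"], 2)]
print(semi_lightpath(subs))
```

The point of the model: conversion cost is paid only at sub-lightpath junctions where wavelengths differ, which is the flexibility the abstract attributes to semi-lightpaths.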
IEEE Transactions on Computers 01/2002; 50(12):1352-1361.
ABSTRACT: Cache memory is used in almost all computer systems today to bridge the ever-increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to the reloading of memory blocks that are replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems, where predictability is of utmost importance. We propose an enhanced technique for analyzing, and thus bounding, the cache-related preemption delay in fixed-priority preemptive scheduling, focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers the phasing of tasks to eliminate many infeasible task interactions. These two features are expressed as constraints of a linear programming problem whose solution gives a guaranteed upper bound on the cache-related preemption delay. This paper also compares the proposed technique with previous techniques using randomly generated task sets. The results show that the improvement in the worst-case response time prediction by the proposed technique over previous techniques ranges between 5 percent and 18 percent, depending on the cache refill time, when the task set utilization is 0.6. The results also show that the improvement grows as the cache refill time increases, which indicates that accurate prediction of the cache-related preemption delay by the proposed technique becomes increasingly important if the current trend of a widening speed gap between the processor and main memory continues.
IEEE Transactions on Software Engineering 10/2001
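For context on where a cache-related preemption delay (CRPD) bound is consumed: the standard fixed-point recurrence for worst-case response time under fixed-priority preemptive scheduling charges an extra term for each preemption. The sketch below uses that standard recurrence with a per-task CRPD bound gamma[j], not the paper's linear-programming refinement, and the task-set constants are made up.

```python
import math

def response_time(C, T, gamma, i):
    """Worst-case response time of task i (tasks 0..i-1 have higher priority).

    Each preemption by a higher-priority task j costs its execution time
    C[j] plus a cache-related preemption delay bound gamma[j]. Iterates
    R = C[i] + sum_j ceil(R / T[j]) * (C[j] + gamma[j]) to a fixed point;
    returns None if R exceeds the period (taken as the deadline).
    """
    R = C[i]
    while True:
        interference = sum(
            math.ceil(R / T[j]) * (C[j] + gamma[j]) for j in range(i)
        )
        R_next = C[i] + interference
        if R_next == R:
            return R
        if R_next > T[i]:
            return None
        R = R_next

# Hypothetical task set: execution times, periods, CRPD bounds
C = [1, 2, 4]
T = [5, 12, 30]
gamma = [0.2, 0.5, 0.0]
print(response_time(C, T, gamma, 2))
```

A tighter gamma (which is what the paper's LP formulation provides) directly tightens R, which is why the abstract reports improved response-time predictions as cache refill time grows.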
ABSTRACT: This paper proposes a backup network planning method for survivable WDM mesh networks. The proposed method centers around multiple backup cycles, where each network link is assigned m backup cycles and each cycle protects 1/m of the working capacity of its target link. Distributed link restoration is performed using the preplanned cycles, in which both the backup paths and the spare capacity can be shared. The preconfiguration of the cycles and the spare capacity placement are derived directly from the network topology off-line, independent of the primary traffic status or its dynamic changes over time. The proposed method brings efficiency and simplicity to survivable network design and management, as well as to runtime recovery operation. Experimental results show that the proposed method requires, on average, less than 60% spare-capacity redundancy for single-link failure protection while preserving the speed of cycle-based restoration.
Computer Communications and Networks, 2001. Proceedings. Tenth International Conference on; 02/2001
ABSTRACT: Efficient and effective buffering of disk blocks in main memory is critical for better file system performance due to the wide speed gap between main memory and hard disks. In such a buffering system, one of the most important design decisions is the block replacement policy, which determines which disk block to replace when the buffer is full. In this paper, we show that there exists a spectrum of block replacement policies that subsumes the two seemingly unrelated and independent Least Recently Used (LRU) and Least Frequently Used (LFU) policies. The spectrum is called the LRFU (Least Recently/Frequently Used) policy and is formed by how much more weight we give to recent history than to older history. We also show that there is a spectrum of implementations of the LRFU that again subsumes the LRU and LFU implementations. This spectrum is again dictated by how much weight is given to recent and older histories, and the time complexity of the implementations lies between O(1) (the time complexity of LRU) and O(log n) (the time complexity of LFU), where n is the number of blocks in the buffer. Experimental results from trace-driven simulations show that the performance of the LRFU is at least competitive with that of previously known policies for the workloads we considered.
IEEE Transactions on Computers 01/2001; 50:1352-1361.
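A minimal sketch of the recency/frequency spectrum the LRFU abstract describes. It scores each block by a combined recency-and-frequency value built from the weighting function F(x) = (1/2)**(lam * x); lam = 0 counts references (LFU-like), while larger lam weighs recent references more heavily (LRU-like). The eviction uses a linear scan for clarity, not the paper's efficient heap-based implementations.

```python
class LRFUCache:
    """Toy LRFU sketch. The multiplicative property F(x + d) = F(x) * F(d)
    lets us keep a single (value, last-reference-time) pair per block
    instead of the full reference history."""

    def __init__(self, capacity, lam):
        self.capacity = capacity
        self.lam = lam
        self.clock = 0
        self.blocks = {}  # block id -> (combined value, last reference time)

    def _F(self, x):
        return 0.5 ** (self.lam * x)

    def _value_now(self, block):
        value, last = self.blocks[block]
        return value * self._F(self.clock - last)

    def reference(self, block):
        """Record a reference; return the evicted block id, or None."""
        self.clock += 1
        victim = None
        if block in self.blocks:
            value = self._value_now(block) + 1.0
        else:
            if len(self.blocks) >= self.capacity:
                victim = min(self.blocks, key=self._value_now)
                del self.blocks[victim]
            value = 1.0
        self.blocks[block] = (value, self.clock)
        return victim
```

With lam = 0 the least-referenced block is evicted; with large lam the least-recently referenced block is, exhibiting the LFU-to-LRU spectrum in a few lines.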
ABSTRACT: In many previous low Earth orbit (LEO) satellite networks, intersatellite links (ISLs) are used to enhance the capability of the networks. The ISLs are continuously torn down and re-established as the visibility between satellites changes over time. These topological changes require rerouting of the traffic on ISLs, which unavoidably involves dropping some ongoing calls. In general, dropping an ongoing call degrades the quality of service (QoS) more severely than blocking a newly initiated call. We propose a call admission control scheme that allows a flexible tradeoff between dropped ongoing calls and blocked initiated calls. The call admission control scheme makes use of two techniques: traffic recasting and traffic projection. Traffic recasting maps the current traffic onto the network after the topological change, whereas traffic projection projects the recast traffic from the current time to the time of the topological change. The projected traffic is used to estimate the increase in ongoing call dropping, whose QoS penalty is compared against that of increased newly initiated call blocking to make the call admission decision. Simulation results show that the proposed call admission control scheme significantly reduces the ongoing call dropping probability with only a marginal increase in the initiated call blocking and call completion...
IEEE Transactions on Vehicular Technology 12/2000
ABSTRACT: The Least Recently Used (LRU) block replacement scheme is still widely used due to its simplicity. While simple, it adapts well to changes in the workload, and it has been shown to be efficient when recently referenced blocks are likely to be re-referenced in the near future. The main drawback of the LRU scheme, however, is that its performance degrades because it does not make use of reference regularities such as sequential and looping references. In this paper, we present a Unified Buffer Management (UBM) scheme that exploits these regularities and yet is simple to deploy. The UBM scheme automatically detects sequential and looping references and stores the detected blocks in separate partitions of the buffer cache. These partitions are managed by replacement schemes appropriate to their detected patterns. The allocation problem among the divided partitions is also tackled with the use of the notion of marginal gains. The performance gains obtained through ...
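The pattern classification step can be illustrated with simple heuristics over one file's block reference stream. These heuristics are an illustration only, not the UBM paper's actual detector:

```python
def classify(refs):
    """Classify a list of block numbers, in reference order, as:
      - "sequential": consecutive blocks, each referenced exactly once
      - "looping":    a consecutive run repeated from its start
      - "other":      anything else
    """
    consecutive = all(b == a + 1 for a, b in zip(refs, refs[1:]))
    if consecutive and len(set(refs)) == len(refs):
        return "sequential"
    n = len(refs)
    for period in range(1, n // 2 + 1):
        run = refs[:period]
        if (all(b == a + 1 for a, b in zip(run, run[1:]))
                and all(refs[i] == run[i % period] for i in range(n))):
            return "looping"
    return "other"

print(classify([0, 1, 2, 3]))        # one sequential scan
print(classify([0, 1, 2, 0, 1, 2]))  # the same run repeated: a loop
```

Once a stream is classified, the natural per-partition policies follow: sequential blocks can be discarded immediately after use, looping blocks are best kept for the loop period, and the rest falls back to LRU.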
ABSTRACT: In this paper, we present an algorithm for efficiently aggregating the link state information needed for source routing in PNNI networks. In this algorithm, each border node in a peer group is mapped to a node of a shufflenet. By this mapping, the number of links for which state information is maintained becomes pN (where p is an integer and N is the number of border nodes), which is significantly smaller than the N^2 of the full-mesh approach. Another novel aspect of our algorithm is that it can be applied to asymmetric networks, while many previous algorithms, such as the spanning tree approach, can be applied only to symmetric networks. Experimental results show that our shufflenet algorithm performs as well as the full-mesh approach while maintaining a much smaller amount of information.
Global Telecommunications Conference, 2000. GLOBECOM '00. IEEE; 02/2000
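The pN-versus-N^2 saving can be checked by constructing a shufflenet directly. The sketch below builds a standard (p, k) shufflenet (N = k * p**k nodes in k columns, each node with p outgoing links to the next column) and compares its link count with a full mesh; the mapping from border nodes to shufflenet nodes is the paper's contribution and is not modeled here.

```python
def shufflenet_links(p, k):
    """Directed links of a (p, k) shufflenet: node x in column c connects
    to the p nodes (p*x + i) mod p**k, i = 0..p-1, in column (c+1) mod k.
    The returned list has length p * N, where N = k * p**k."""
    n_col = p ** k
    links = []
    for c in range(k):
        for x in range(n_col):
            for i in range(p):
                links.append(((c, x), ((c + 1) % k, (p * x + i) % n_col)))
    return links

p, k = 2, 2
links = shufflenet_links(p, k)
N = k * p ** k
print(len(links), N * (N - 1))  # pN shufflenet links vs. full-mesh links
```

Even at N = 8 the gap is 16 links versus 56; it widens rapidly, since pN grows linearly in N while the full mesh grows quadratically.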
ACM SIGMETRICS Performance Evaluation Review 07/1999; 27(1):134-143.
ABSTRACT: In the design and analysis of real-time systems, accurate timing measurement of various events in the system is required for validation purposes. In this paper, we present the design and implementation of time-tracing hardware that gathers timing information on the execution of programs, including the operating system. The hardware traces the timing behavior of programs, including the total execution times, the time consumed in the system calls of the operating system, the time spent by the scheduler, etc. The results show that the hardware measures the execution times of programs with a resolution of 50 ns while incurring only a small amount of overhead in the execution time. The timing information that this hardware provides can be used for validating various aspects of real-time systems, such as worst-case execution time analysis and schedulability analysis. 1. Introduction The integrity of real-time systems is affected by the timeliness as well as the correctness of the computati...
ABSTRACT: ...ion), which contains detailed timing information of the program construct. By defining a concatenation operation on WCTAs, our revised timing schema accurately accounts for the timing effects of buffered threaded prefetching not only within but also across program constructs. This paper shows, through analysis using a timing tool based on the extended timing schema, that the buffered prefetch scheme significantly improves the worst-case execution times of tasks. 1. Introduction Due to their unpredictable performance, cache memories have not been widely used in hard real-time systems, where guaranteed worst-case performance is far more important than average-case performance. On the other hand, threaded prefetching is a predictable scheme. In the threaded prefetch scheme, each instruction block has a pointer called a thread. The thread indicates the instruction block to be prefetched once the block containing it is accessed by the processor. The threaded prefetching uses only two instruc...
ABSTRACT: As in other application areas, there is an increasing need for managing large amounts of data in the real-time area. This need gave birth to a system called a real-time database system (RTDBS), which provides database operations with timing constraints. One typical timing constraint in an RTDBS is temporal consistency, which states that a transaction must read temporally valid data objects. This requires the data objects to be updated repeatedly. This paper proposes three novel schemes that aim to minimize the number of updates of data objects needed for guaranteeing the temporal consistency requirements of transactions with hard deadlines. The three guarantee schemes differ from each other in the number of updates and the implementation complexity. This paper also gives a framework for integrating transactions with soft deadlines into the three guarantee schemes. 1 Introduction Recently, computer systems are increasingly used for monitoring and controlling time-critical systems such as ...
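As a baseline for what "minimizing updates" is measured against: a common reference point in temporal-consistency work (not necessarily one of the paper's three schemes) is the half-half rule, which updates a data object with validity interval V every V/2 time units, with a per-job relative deadline of V/2, so that the age of the stored value never exceeds V.

```python
def half_half_periods(validity):
    """Half-half baseline: update period = validity interval / 2.

    validity maps each object to its validity interval; the returned
    periods guarantee every read sees a temporally valid value, at the
    cost of roughly twice the minimum possible update rate.
    """
    return {obj: v / 2 for obj, v in validity.items()}

# Hypothetical sensor objects and validity intervals (ms)
validity = {"temperature": 100, "pressure": 60}
print(half_half_periods(validity))
```

Schemes that stretch periods beyond V/2 while still meeting every transaction's read requirement reduce update workload, which is the trade-off the abstract describes.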
ABSTRACT: Cache memory is used in almost all computer systems today to bridge the ever increasing speed gap between the processor and main memory. However, its use in multitasking computer systems introduces additional preemption delay due to reloading of memory blocks that were replaced during preemption. This cache-related preemption delay poses a serious problem in real-time computing systems where predictability is of utmost importance. In this paper, we propose an enhanced technique for analyzing and thus, bounding the cache-related preemption delay in fixed-priority preemptive scheduling focusing on instruction caching. The proposed technique improves upon previous techniques in two important ways. First, the technique takes into account the relationship between a preempted task and the set of tasks that execute during the preemption when calculating the cache-related preemption delay. Second, the technique considers phasing of tasks to eliminate many infeasible task interactions. These tw...
Real-Time Systems. 01/1999; 17:257-282.
ABSTRACT: Survivability is one of the important issues in broadband networks, since even a single network element failure may cause a serious amount of data loss. There have been two main approaches to connection restoration: the dynamic approach and the backup approach. We discuss the trade-offs between the two approaches and propose a new restoration mechanism that centers around the concept of a virtual backup network, which provides backup path sharing as well as spare bandwidth sharing. The proposed mechanism can combine the advantages of the two existing approaches, with particular attention to resource efficiency and routing.
Communications, 1998. ICC 98. Conference Record.1998 IEEE International Conference on; 07/1998