We show that several combinatorial optimization problems on an interval graph, given its interval representation in sorted order, are highly parallelizable in the sense of Berkman et al. [2]. For each of these problems, we present an O(log log n)-time parallel algorithm that uses O(n/log log n) processors on the Common CRCW PRAM, the weakest of the CRCW PRAM models. Our algorithms are optimal, since all of the problems under consideration can be solved sequentially in O(n) time given a sorted interval set. The problems we solve are: finding a minimum dominating set (MDS), a minimum connected dominating set (MCDS), a minimum total dominating set (MTDS), and a maximum independent set (MIS), and constructing a depth-first search (DFS) tree and a breadth-first search (BFS) tree starting from the vertex corresponding to an arbitrary interval. No combinatorial problems, whether on graph structures or on their geometric representations, were previously known to be highly parallelizable. Optimal parallel algorithms with O(log n) running time were known for the MDS, DFS, BFS, and MIS problems.
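As a point of reference, the O(n) sequential baseline mentioned above is easy to see for one of the listed problems: a maximum independent set of sorted intervals is found by a single greedy scan. A minimal sketch (my own illustration; closed intervals as `(left, right)` pairs, not the paper's notation):

```python
def max_independent_set(intervals):
    """Greedy maximum independent set of an interval graph: scan intervals in
    order of right endpoint and keep every interval that does not overlap the
    last one kept. O(n) once the intervals are sorted."""
    chosen = []
    last_end = float("-inf")
    for left, right in sorted(intervals, key=lambda iv: iv[1]):
        if left > last_end:          # no overlap with the last chosen interval
            chosen.append((left, right))
            last_end = right
    return chosen

# Three mutually disjoint intervals out of five.
print(max_independent_set([(1, 3), (2, 5), (4, 6), (5, 8), (7, 9)]))
```

The greedy choice is safe because the interval finishing earliest can always be exchanged into an optimal solution.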

The objective of this paper is to advance the view that solving the all-pairs shortest path (APSP) problem for a chordal graph G is a two-step process: the first step is determining the vertex pairs at distance two (i.e., computing G²), and the second step is finding the vertex pairs at distance three or more. The main technical result here is that the APSP problem for a chordal graph can be solved in O(n²) time (optimally) if G² is already known. It can be shown that computing G² for chordal graphs is as hard as for general graphs. We then show certain subclasses of chordal graphs for which G² can be computed more efficiently. This leads to optimal APSP algorithms for these classes of graphs in a more natural way than previously known results. Finally, we present an optimal parallel algorithm for the APSP problem on chordal graphs by exploiting new structural properties of shortest paths. Our parallel algorithm uses O(M(n)) operations, where M(n) is the time needed by the fastest known algorithm for multiplying two n × n matrices over a ring.
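The first step of the two-step view, computing G², can be stated very compactly in sequential form. A hedged sketch (adjacency as a dict of neighbour sets; the function name and example graph are mine, not the paper's):

```python
def square(adj):
    """Return G^2: connect u and v (u != v) whenever their distance in G
    is at most 2. adj maps each vertex to its set of neighbours."""
    adj2 = {u: set(nbrs) for u, nbrs in adj.items()}
    for u, nbrs in adj.items():
        for w in nbrs:                 # every neighbour of a neighbour ...
            adj2[u] |= adj[w]          # ... is within distance two of u
        adj2[u].discard(u)             # no self-loops
    return adj2

# Path a-b-c-d: in G^2, a additionally reaches c, and b additionally reaches d.
g = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
print(square(g))
```

This direct computation is essentially a Boolean matrix product, which is why computing G² is as hard as for general graphs and why the parallel bound is stated in terms of M(n).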

Evidence is given to suggest that minimally vertex colouring an interval graph may not be in NC¹. This is done by showing that 3-colouring a linked list is NC¹-reducible to minimally colouring an interval graph. However, it is shown that an interval graph with a known interval representation and an O(1) chromatic number can be minimally coloured in NC¹.

For the CRCW PRAM model, an o(log n)-time, polynomial-processor algorithm is obtained for minimally colouring an interval graph with o(log n) chromatic number and a known interval representation. In particular, when the chromatic number is O((log n)^(1−ε)), 0 < ε < 1, the algorithm runs in O(log n/log log n) time. Also, an O(log n)-time, O(n)-cost EREW PRAM algorithm is given for interval graphs of arbitrary chromatic number. The following lower bound is also obtained: even when the left and right endpoints of the intervals are separately sorted, minimally colouring an interval graph requires Ω(log n/log log n) time on a CRCW PRAM with a polynomial number of processors.
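For context, the sequential counterpart of minimal interval colouring is a classic sweep that uses exactly as many colours as the largest clique. A sketch of that baseline (my own illustration, not one of the parallel algorithms above):

```python
import heapq

def color_intervals(intervals):
    """Minimally colour an interval graph from its intervals: sweep by left
    endpoint and reuse the colour of any interval that has already ended.
    Uses chi(G) = maximum clique size colours; O(n log n) sequentially.
    Intervals are closed, so a shared endpoint counts as an overlap."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colour = [None] * len(intervals)
    active = []          # heap of (right endpoint, colour) of live intervals
    free = []            # heap of colours freed by finished intervals
    next_colour = 0
    for i in order:
        left, right = intervals[i]
        while active and active[0][0] < left:   # interval ended: recycle colour
            heapq.heappush(free, heapq.heappop(active)[1])
        if free:
            colour[i] = heapq.heappop(free)
        else:
            colour[i] = next_colour
            next_colour += 1
        heapq.heappush(active, (right, colour[i]))
    return colour

print(color_intervals([(1, 4), (2, 6), (5, 8), (7, 9)]))
```

The lower bound quoted above says precisely that this inherently sequential-looking sweep cannot be simulated in o(log n/log log n) CRCW time with polynomially many processors.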

In this paper, an O(n log n) time algorithm for finding all the maximal cliques of an interval graph is proposed. This algorithm can also be implemented in parallel in O(log n) time using O(n²) processors. The maximal cliques of an interval graph contain important structural information, and many problems on interval graphs can be solved once all the maximal cliques are known. It is shown that cut vertices, bridges, and vertex connectivities can all be determined easily after the maximal cliques are known. Finally, the all-pairs shortest path problem for interval graphs is solved based on the relationship between maximal cliques. The all-pairs shortest path algorithm can also be parallelized in O(log n) time using O(n²) processors.
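The sequential O(n log n) clique enumeration can be sketched as an endpoint sweep: a maximal clique forms exactly when a right endpoint immediately follows a left endpoint. A hedged illustration (my own code, closed intervals, indices as vertex names):

```python
def maximal_cliques(intervals):
    """Maximal cliques of an interval graph by an endpoint sweep: whenever a
    right endpoint immediately follows a left endpoint, the currently active
    intervals form a maximal clique. O(n log n) including the sort."""
    events = []
    for i, (l, r) in enumerate(intervals):
        events.append((l, 0, i))      # left endpoints sort first on ties,
        events.append((r, 1, i))      # so touching closed intervals overlap
    events.sort()
    active, cliques, last_was_left = set(), [], False
    for _, kind, i in events:
        if kind == 0:
            active.add(i)
            last_was_left = True
        else:
            if last_was_left:         # active set cannot grow further: maximal
                cliques.append(sorted(active))
            active.discard(i)
            last_was_left = False
    return cliques

print(maximal_cliques([(1, 4), (2, 6), (5, 8), (7, 9)]))
```

Since every maximal clique is reported as a sorted run of the sweep, consecutive cliques overlap in exactly the vertices that remain active, which is the relationship the APSP algorithm above exploits.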

We establish that several problems are highly parallelizable. For each of these problems, we design an optimal O(log log n) time parallel algorithm on the Common CRCW PRAM model, which is the weakest among the CRCW PRAM models. These problems include all nearest smaller values, preprocessing for answering range maxima queries, several problems in computational geometry, and string matching. A new lower bound technique is presented, showing that some of the new O(log log n) upper bounds cannot be improved even when nonoptimal algorithms are used. The technique extends Ramsey-like lower bound arguments.
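For readers unfamiliar with the all nearest smaller values (ANSV) problem, the sequential version is a one-pass stack scan. A minimal sketch (my own illustration of the problem, not the parallel algorithm):

```python
def all_nearest_smaller_values(a):
    """For each position, report the value of the nearest element to its left
    that is strictly smaller (None if no such element exists). O(n) with a
    monotone stack; the abstract above gives an O(log log n)-time parallel
    counterpart on the Common CRCW PRAM."""
    result, stack = [], []
    for x in a:
        while stack and stack[-1] >= x:   # pop values that are not smaller
            stack.pop()
        result.append(stack[-1] if stack else None)
        stack.append(x)
    return result

print(all_nearest_smaller_values([3, 1, 4, 1, 5, 9, 2, 6]))
```

The stack-based scan looks inherently sequential, which is what makes the doubly-logarithmic parallel bound notable.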

A model for synchronized parallel computation is described in which all p processors have access to a common memory. This model is used to solve the problems of finding the maximum, merging, and sorting with p processors. The main results are: (1) finding the maximum of n elements (1 < p ≤ n) within a depth of O(n/p + log log n) (optimal for p ≤ n/log log n); (2) merging two sorted lists of lengths m and n (m ≤ n) within a depth of O(n/p + log log n) for p ≤ n (optimal for p ≤ n/log log n); (3) sorting n elements within a depth of O(k log n) using n^(1+1/k) processors. The depth of O(k log n) with n^(1+1/k) processors was also achieved by Hirschberg (Comm. ACM 21(8) (1978) 657–661) and Preparata (IEEE Trans. Computers C-27 (July 1978) 669–673); our algorithm is substantially simpler. All elementary operations, including the allocation of processors to their jobs, are taken into account in deriving the depth complexity, not only comparisons.
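The log log n depth for maximum finding comes from recursing on √n-sized blocks, since √n block maxima can be combined in a constant number of parallel rounds with enough processors. A sequential simulation that only counts the conceptual rounds (my own sketch, not a PRAM implementation):

```python
import math

def doubly_log_max(a):
    """Maximum by the doubly-logarithmic paradigm: split the n inputs into
    blocks of about sqrt(n), recurse on each block (conceptually in parallel),
    then combine the block maxima in one more round. Returns (max, depth);
    the depth grows as O(log log n)."""
    n = len(a)
    if n <= 2:
        return max(a), 1               # one comparison round at the base
    b = max(2, math.isqrt(n))          # block size ~ sqrt(n)
    maxima, depth = [], 0
    for i in range(0, n, b):
        m, d = doubly_log_max(a[i:i + b])
        maxima.append(m)
        depth = max(depth, d)          # parallel blocks share the same rounds
    return max(maxima), depth + 1      # one combine round over the block maxima

value, depth = doubly_log_max(list(range(256)))
print(value, depth)
```

Squaring the problem size (256 = 16² = 4⁴ = 2⁸) adds only one round per level, which is the doubly-logarithmic recursion in miniature.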

Parallel algorithms are given for finding a maximum weighted clique, a maximum weighted independent set, a minimum clique cover, and a minimum weighted dominating set of an interval graph. Parallel algorithms are also given for finding a Hamiltonian circuit and the minimum bandwidth of a proper interval graph. The shared memory model (SMM) of parallel computers is used to obtain fast algorithms.

We design efficient parallel algorithms for solving several problems on interval graphs. The problems include finding a BFS tree and a DFS tree, articulation points and bridges, and a minimum coloring. Each of our algorithms requires O(log n) time employing O(n) processors on the EREW PRAM model, where n is the number of vertices. The proposed algorithms for computing articulation points and bridges have better performance in terms of cost (i.e., the processor-time product) than the existing algorithms in [RR90], while having the same time complexity. Our novel approach to the construction of a BFS tree is based on elegantly capturing the structure of a given collection of intervals. This structure reveals important properties of the corresponding interval graph and is found to be instrumental in solving many other problems on such graphs. Although the time and processor complexities of the proposed DFS-tree construction and minimum coloring are comparable with the best known [KI89], our approaches to these problems are new. For example, the approach to the construction of a DFS tree is based on reducing the problem to an all dominating neighbors (ADN) problem, while that to minimum coloring is by transforming the problem into a linked-list ranking problem.
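One structural property that makes BFS on interval graphs tractable is that the vertices within k hops of the source cover a contiguous span of the line, and level k+1 is exactly the set of intervals intersecting that span. A sequential sketch of this property (my own illustration, not the paper's O(log n) EREW algorithm):

```python
def bfs_levels(intervals, s):
    """BFS levels in an interval graph, exploiting interval structure: the
    vertices within k hops of s cover a contiguous span [lo, hi], and level
    k+1 is exactly the intervals that intersect that span. O(n^2) worst case
    as written; the paper achieves O(log n) time with O(n) processors."""
    n = len(intervals)
    level = [None] * n
    level[s] = 0
    lo, hi = intervals[s]
    k = 0
    changed = True
    while changed:
        changed = False
        k += 1
        new_lo, new_hi = lo, hi
        for i, (l, r) in enumerate(intervals):
            if level[i] is None and l <= hi and r >= lo:  # meets current span
                level[i] = k
                new_lo, new_hi = min(new_lo, l), max(new_hi, r)
                changed = True
        lo, hi = new_lo, new_hi          # span covered after k levels
    return level

print(bfs_levels([(1, 3), (2, 5), (4, 7), (6, 9), (8, 10)], 0))
```

The correctness hinges on the fact that a connected set of intervals has a contiguous union, so any interval meeting the span must intersect some interval already reached.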

One of the frequently used models for a synchronous parallel computer is the parallel random access machine, where each processor can read from and write into a common random access memory. Different processors may read the same memory location at the same time, but simultaneous writing is disallowed. We show that even if we allow nonuniform algorithms, an arbitrary number of processors, and arbitrary instruction sets, Ω(log n) is a lower bound on the time required to compute various simple functions, including sorting n keys and finding the logical "or" of n bits. We also prove a surprising time upper bound of 0.72 log₂ n steps for these functions, which beats the obvious algorithms requiring log₂ n steps. If simultaneous writes are allowed, there are simple algorithms to compute these functions in a constant number of steps.

A family of intervals on the real line provides a natural model for a vast number of scheduling and VLSI problems, and a number of parallel algorithms for a variety of practical problems on such families have recently been proposed in the literature. Computational tools are developed, and it is shown how they can be used to devise cost-optimal parallel algorithms for a number of interval-related problems, including finding a largest subset of pairwise nonoverlapping intervals and a minimum dominating subset of intervals, along with an algorithm to compute the shortest path between a pair of intervals and, based on the shortest path, a parallel algorithm to find the center of the family of intervals. More precisely, with an arbitrary family of n intervals as input, all algorithms run in O(log n) time using O(n) processors in the EREW PRAM model of computation.
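The shortest path between two intervals has a simple greedy characterization: repeatedly jump to the interval that extends furthest right past the current reach. A sequential sketch (my own illustration of the standard greedy, not the paper's parallel algorithm):

```python
def interval_distance(intervals, s, t):
    """Hop distance between intervals s and t in an interval graph: greedily
    extend the reach by the interval stretching furthest right. Each greedy
    jump contributes one edge of a shortest path. O(n) per jump as written;
    intervals are closed, so a shared endpoint counts as an intersection."""
    (ls, rs), (lt, rt) = intervals[s], intervals[t]
    if s == t:
        return 0
    if rs >= lt and rt >= ls:            # the two intervals intersect
        return 1
    if ls > lt:                          # walk left-to-right w.l.o.g.
        (ls, rs), (lt, rt) = (lt, rt), (ls, rs)
    reach, dist = rs, 1
    while reach < lt:                    # target not yet reachable in one hop
        best = max((r for l, r in intervals if l <= reach), default=reach)
        if best <= reach:
            return None                  # reach cannot advance: disconnected
        reach, dist = best, dist + 1
    return dist

print(interval_distance([(1, 3), (2, 5), (4, 7), (6, 9), (8, 10)], 0, 4))
```

An exchange argument shows the furthest-reaching jump is always safe, which is also the basis of shortest-path and center computations on interval families.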

We study the number of comparison steps required for searching, merging, and sorting with P processors. We present a merging algorithm that is optimal up to a constant factor when merging two lists of equal size (independent of the number of processors); as a special case, with N processors it merges two lists, each of size N, in 1.893 lg lg N + 4 comparison steps. We use the merging algorithm to obtain a sorting algorithm that, in particular, sorts N values with N processors in 1.893 lg N lg lg N / lg lg lg N (plus lower-order terms) comparison steps. The algorithms can be implemented on a shared memory machine that allows concurrent reads from the same location with constant overhead at each comparison step.

We give the first efficient parallel algorithms for recognizing chordal graphs, finding a maximum clique and a maximum independent set in a chordal graph, finding an optimal coloring of a chordal graph, finding a breadth-first search tree and a depth-first search tree of a chordal graph, recognizing interval graphs, and testing interval graphs for isomorphism. The key to our results is an efficient parallel algorithm for finding a perfect elimination ordering.
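To illustrate why a perfect elimination ordering is the key primitive: once a PEO is in hand, each vertex together with its later neighbours in the ordering forms a clique, so a maximum clique falls out in linear time. A small sequential sketch (the function name and example graph are mine, not the paper's):

```python
def max_clique_size_from_peo(adj, peo):
    """Given a perfect elimination ordering of a chordal graph, each vertex v
    together with its neighbours that appear later in the ordering forms a
    clique; the maximum clique of the graph is the largest of these.
    O(n + m) given adjacency sets."""
    pos = {v: i for i, v in enumerate(peo)}
    best = 1
    for v in peo:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        best = max(best, 1 + len(later))
    return best

# A 4-cycle with the chord b-d is chordal; a and c are simplicial, so
# ["a", "c", "b", "d"] is a perfect elimination ordering.
g = {"a": {"b", "d"}, "b": {"a", "c", "d"}, "c": {"b", "d"},
     "d": {"a", "b", "c"}}
print(max_clique_size_from_peo(g, ["a", "c", "b", "d"]))
```

Optimal coloring works the same way: greedy coloring in reverse PEO order uses exactly this many colours, which is why the parallel PEO algorithm unlocks the other results listed above.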


Preprocess CL for ANLV queries. (c) For each interval a_i ∈ B, interval a_i points to the interval φ(rightmatch(CL, p(l_i))). (4) Construct the list CL as in step 3(a) and preprocess CL for RangeMaxima queries. The largest index con(j) such that a_con(j) and C_j intersect is RangeMaxima(CL, s(j), e(j)). This step has the same complexity as the previous step.
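For reference, the RangeMaxima preprocessing the steps above rely on can be sketched sequentially as a standard sparse table (function names and the example array are mine; the paper's version is a parallel O(log log n) preprocessing):

```python
def build_range_maxima(a):
    """Sparse-table preprocessing for range-maxima queries: table[k][i] holds
    the maximum of a[i : i + 2**k]. O(n log n) work; queries take O(1)."""
    n = len(a)
    table = [list(a)]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        table.append([max(prev[i], prev[i + (1 << (k - 1))])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def range_maxima(table, lo, hi):
    """Maximum of a[lo : hi + 1], covered by two overlapping blocks of
    length 2**k, where 2**k is the largest power of two fitting the range."""
    k = (hi - lo + 1).bit_length() - 1
    return max(table[k][lo], table[k][hi - (1 << k) + 1])

t = build_range_maxima([2, 7, 1, 8, 2, 8, 1, 8])
print(range_maxima(t, 2, 5))
```

Because any range is the union of two overlapping power-of-two blocks, each query reduces to two table lookups and one comparison.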

A. Moitra and R. Johnson, PT-optimal algorithms for interval graphs, in: Proc. 26th Allerton Conf. on Communication, Control and Computing, vol. 1 (1988) 274–282.

S.K. Kim, Optimal parallel algorithms on sorted intervals, in: Proc. 27th Annual Allerton Conf. on Communication, Control, and Computing (Sept. 1989) 766–775.