## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

... There has been a fruitful line of work tackling this question for spanners and emulators (e.g., [6]-[13], [16]). This has produced optimal bounds on the price of fault tolerance in some settings, distinctions between edge and vertex faults, and a suite of algorithmic techniques that allow for efficient computation of sparse fault tolerant spanners/emulators (including in distributed and parallel models of computation). ...

... After significant work following [11] (see in particular [7]-[10], [13]), we now have a generally good understanding of both vertex- and edge-fault tolerant spanners. Precise bounds are given in Table 1, but the high-level view is that the price of fault-tolerance is f^{1−1/k} for vertex fault tolerant spanners, and even smaller for edge fault tolerant spanners (approximately f^{1/2}, although the exact bounds remain open). ...

... While the "main" algorithm we analyze does not run in polynomial time (the obvious implementation of it would take time at least Ω(n^f)), in Section 7 we show a variant that runs in polynomial time, paying an additional O(k) factor in emulator size. This algorithm is based on ideas from [13]. ...

A $t$-emulator of a graph $G$ is a graph $H$ that approximates its pairwise shortest path distances up to multiplicative $t$ error. We study fault tolerant $t$-emulators, under the model recently introduced by Bodwin, Dinitz, and Nazari [ITCS 2022] for vertex failures. In this paper we consider the version for edge failures, and show that they exhibit surprisingly different behavior. In particular, our main result is that, for $(2k-1)$-emulators with $k$ odd, we can tolerate a polynomial number of edge faults for free. For example: for any $n$-node input graph, we construct a $5$-emulator ($k=3$) on $O(n^{4/3})$ edges that is robust to $f = O(n^{2/9})$ edge faults. It is well known that $\Omega(n^{4/3})$ edges are necessary even if the $5$-emulator does not need to tolerate any faults. Thus we pay no extra cost in the size to gain this fault tolerance. We leave open the precise range of free fault tolerance for odd $k$, and whether a similar phenomenon can be proved for even $k$.
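The emulator guarantee above is easy to state operationally. As a minimal illustration (my own sketch, not code from the paper), the checker below BFS-computes distances and verifies the multiplicative-$t$ upper bound for an unweighted graph G and a candidate emulator H; note that real emulators carry edge weights equal to the G-distances of their endpoints, which this unit-weight sketch ignores, so only the upper-bound direction is checked.

```python
from collections import deque

def bfs_dists(n, adj, src):
    """Unweighted single-source shortest-path distances via BFS."""
    dist = [None] * n
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] is None:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def adjacency(n, edges):
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

def is_t_emulator(n, g_edges, h_edges, t):
    """Check dist_H(u, v) <= t * dist_G(u, v) for all reachable pairs.

    H may use arbitrary (non-subgraph) edges, which is what
    distinguishes an emulator from a spanner.
    """
    ga, ha = adjacency(n, g_edges), adjacency(n, h_edges)
    for s in range(n):
        dg, dh = bfs_dists(n, ga, s), bfs_dists(n, ha, s)
        for v in range(n):
            if dg[v] is None:
                continue
            if dh[v] is None or dh[v] > t * dg[v]:
                return False
    return True
```

For instance, on the path 0-1-2-3, the non-subgraph edge set {(0,1), (0,2), (2,3)} is a valid 3-emulator, while dropping vertex 3's only edge is not.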

... Unlike the (fault-free) greedy algorithm of Althöfer et al. [ADD+93], the naive implementation of the FT-greedy algorithms of [BDPW18,BP19] requires exponential time in the number of faults f. Dinitz and Robelle [DR20] presented an elegant implementation of these greedy algorithms to run in O(k · f^{2−1/k} n^{1+1/k} · m) time, and with nearly optimal sparsity of O(k f^{1−1/k} n^{1+1/k}) edges. In a more recent work, Bodwin, Dinitz and Robelle [BDR21a] obtained truly optimal spanners in time O(f^{1−1/k} · n^{2+1/k} + m f²). ...

... Indeed, despite the fact that the key motivation for fault tolerant spanners comes from distributed networks, currently we are lacking time-efficient algorithms for optimal FT spanners. The known algorithms by Dinitz and Krauthgamer [DK11] and Dinitz and Robelle [DR20] provide FT-spanners with sub-optimal size of O(f^{2−1/k} n^{1+1/k}) edges, and using O(f^{2−1/k}) congest rounds [Pel00a]. We note that even for the simpler setting of edge-FT spanners (resilient to f edge faults), currently there are no local solutions, i.e., with O(1) congest rounds, even when settling for spanners with sub-optimal sparsity. ...

... This connection immediately led to O(λ)-round distributed algorithms for nearly sparse λ-edge certificates. Using the algorithm of [DR20], one can obtain λ-vertex certificates with O(λ² n) edges using O(λ) rounds. In this paper, we will use this connection to provide nearly-sparse vertex-certificates, i.e., with O(λn) edges, and in nearly optimal parallel and distributed runtime (e.g., in O(1) congest rounds). ...

We (nearly) settle the time complexity for computing vertex fault-tolerant (VFT) spanners with optimal sparsity (up to polylogarithmic factors). VFT spanners are sparse subgraphs that preserve distance information, up to a small multiplicative stretch, in the presence of vertex failures. These structures were introduced by [Chechik et al., STOC 2009] and have received a lot of attention since then. We provide algorithms for computing nearly optimal $f$-VFT spanners for any $n$-vertex $m$-edge graph, with near optimal running time in several computational models:
- A randomized sequential algorithm with a runtime of $\widetilde{O}(m)$ (i.e., independent of the number of faults $f$). The state-of-the-art time bound is $\widetilde{O}(f^{1-1/k}\cdot n^{2+1/k}+f^2 m)$ by [Bodwin, Dinitz and Robelle, SODA 2021].
- A distributed congest algorithm of $\widetilde{O}(1)$ rounds, improving upon [Dinitz and Robelle, PODC 2020], which obtained FT spanners with near-optimal sparsity in $\widetilde{O}(f^{2})$ rounds.
- A PRAM (CRCW) algorithm with $\widetilde{O}(m)$ work and $\widetilde{O}(1)$ depth. Prior bounds implied by [Dinitz and Krauthgamer, PODC 2011] obtained sub-optimal FT spanners using $\widetilde{O}(f^3m)$ work and $\widetilde{O}(f^3)$ depth.
An immediate corollary provides the first nearly-optimal PRAM algorithm for computing nearly optimal $\lambda$-\emph{vertex} connectivity certificates using polylogarithmic depth and near-linear work. This improves the state-of-the-art parallel bounds of $\widetilde{O}(1)$ depth and $O(\lambda m)$ work, by [Karger and Motwani, STOC'93].

... At this point, it appeared that the EFT setting might be substantially easier than the VFT setting, in the sense that it allowed for a smaller dependence on f in spanner size. However, a recent series of papers has developed a set of techniques that apply equally well to both settings, yielding the same improved bounds for each [5]-[7], [13]. This has culminated in the following theorem: Theorem 1.1. ...

... The spanner construction algorithm that we use to prove Theorem 1.4 is the same greedy algorithm as in [5,7] (adapted for edge fault tolerance), which requires exponential time. However, by combining our new analysis with the ideas used by [13], we can obtain polynomial time at the price of a slightly worse dependence on k: Theorem 1.5. There is a polynomial time algorithm that, given positive integers f, k and an n-node input graph, outputs an f-EFT (2k − 1)-spanner H with ...

... 2. The following analyses [6,7,13] took an alternate view that greedy FT-spanners are structurally similar to high-girth graphs; in particular, they certify sparsity by showing that the output spanners of the FT-greedy algorithm have high-girth subgraphs that keep most of the density of the original graph. This high-girth subgraph must be sparse, since the Moore bounds apply to this subgraph directly, and the way the subgraph is constructed ensures that its density cannot be too far away from that of the original output spanner. ...

... After significant work following [17], we now completely understand the achievable bounds on fault-tolerant spanners: Bodwin and Patel [14] proved that every graph has an f-VFT (2k − 1)-spanner with at most O(f^{1−1/k} n^{1+1/k}) edges (and the same bounds were shown to be achievable in polynomial time by [11,18]), and Bodwin, Dinitz, Parter, and Williams [10] gave examples (under the girth conjecture) of graphs on which this bound cannot be improved in any range of parameters. ...

... The algorithm we design to prove Theorem 1.1 starts from the basic greedy VFT spanner algorithm of [10] (and its polytime extension in [18]), where we consider edges in nondecreasing weight order and add an edge if there is a fault set that forces us to add it. To take advantage of the power of emulators, though, we augment this with an extra "path sampling" step: intuitively, when we decide to add a spanner edge, we also flip a biased coin for every k-path that it completes to decide whether to also add an emulator edge between the endpoints of the path. ...

... We begin by proving Theorem 1.1 in the special case k = 3 in Section 2. This introduces the main ideas and approach that we use to prove Theorem 1.1 in general, but it also happens to avoid a few technical details that become necessary only when we move to larger k (allowing us to replace the complicated SALAD paths with simpler "middle-heavy fault-avoiding" paths). We then prove Theorem 1.1 in its full generality: in Section 3 we design an exponential-time algorithm which proves existence of sparse fault-tolerant emulators for all k, and then in Section 4 we show how to use ideas from [18] to make the algorithm polynomial-time without significant loss in emulator sparsity. We then prove our lower bounds (Theorems 1.2 and 1.3) in Section 5, and we conclude with our results on additive spanners in Section 6. ...

A $k$-spanner of a graph $G$ is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of $k$, and a $k$-emulator is similar but not required to be a subgraph of $G$. A classic theorem by Thorup and Zwick [JACM '05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners. We complement our emulator upper bound with a lower bound construction that is essentially tight (within $\log n$ factors of the upper bound) when the stretch is $2k-1$ and $k$ is either a fixed odd integer or $2$. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.

... At this point, it appeared that the EFT setting might be substantially easier than the VFT setting, in the sense that it allowed for a smaller dependence on f in spanner size. However, a recent series of papers has developed a set of techniques that apply equally well to both settings, yielding the same improved bounds for each [5]-[7], [11]. This has culminated in the following theorem: Theorem 1.1 (FT upper bounds [6]). ...

... Known size bounds for f-EFT (2k − 1)-spanners:

| Size | Polytime? | Citation |
| --- | --- | --- |
| O(f · n^{1+1/k}) | | [8] |
| O(exp(k) f^{1−1/k} · n^{1+1/k}) | k = 2 only | [5] |
| O(f^{1−1/k} · n^{1+1/k}) | k = 2 only | [7] |
| O(k f^{1−1/k} · n^{1+1/k}) | k = 2 only | [11] |
| O(f^{1−1/k} · n^{1+1/k}) | k = 2 only | [6] |
| O(k² f^{1/2} · n^{1+1/k} + kfn) for even k | k = 2 only (*) | this paper |

...

... The spanner construction algorithm that we use to prove Theorem 1.4 is the same greedy algorithm as in [5,7] (adapted for edge fault tolerance), which requires exponential time. However, by combining our new analysis with the ideas used by [11], we can obtain polynomial time at the price of a slightly worse dependence on k: Theorem 1.6. There is a polynomial time algorithm that, given positive integers f, k and an n-node input graph, outputs an f-EFT (2k − 1)-spanner H with ...

Recent work has established that, for every positive integer $k$, every $n$-node graph has a $(2k-1)$-spanner on $O(f^{1-1/k} n^{1+1/k})$ edges that is resilient to $f$ edge or vertex faults. For vertex faults, this bound is tight. However, the case of edge faults is not as well understood: the best known lower bound for general $k$ is $\Omega(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} +fn)$. Our main result is to nearly close this gap with an improved upper bound, thus separating the cases of edge and vertex faults. For odd $k$, our new upper bound is $O_k(f^{\frac12 - \frac{1}{2k}} n^{1+1/k} + fn)$, which is tight up to hidden $poly(k)$ factors. For even $k$, our new upper bound is $O_k(f^{1/2} n^{1+1/k} +fn)$, which leaves a gap of $poly(k) f^{1/(2k)}$. Our proof is an analysis of the fault-tolerant greedy algorithm, which requires exponential time, but we also show that there is a polynomial-time algorithm which creates edge fault tolerant spanners that are larger only by factors of $k$.

... The edge test in the FT greedy algorithm, i.e., whether or not there exists a fault set under which (1.2) holds, is an NP-hard problem known as length-bounded cut [BEH+06], and hence the algorithm inherently runs in exponential time. Addressing this, a greedy algorithm with slack was recently proposed in [DR20]. This algorithm is an adaptation of the FT-greedy algorithm which replaces the exponential-time edge test with a different subroutine test(u, v), which accepts every edge (u, v) where there exist |F| ≤ f faults under which (1.2) fails, and possibly some other edges too. ...
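To make the exact edge test concrete, here is a toy brute-force sketch (my own illustration for unweighted graphs, not the slack algorithm of [DR20]): an edge is kept iff some fault set of at most f already-chosen spanner edges pushes the spanner distance of its endpoints above 2k − 1. Because the test enumerates every fault set, the runtime is exponential in f, which is exactly the obstacle the approximate test subroutine is designed to remove.

```python
from itertools import combinations
from collections import deque

def dist(n, edges, blocked, s, t):
    """BFS distance from s to t in the unweighted graph edges minus blocked."""
    adj = [[] for _ in range(n)]
    for e in edges:
        if e in blocked:
            continue
        u, v = e
        adj[u].append(v)
        adj[v].append(u)
    seen = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return seen[u]
        for v in adj[u]:
            if v not in seen:
                seen[v] = seen[u] + 1
                q.append(v)
    return float("inf")

def ft_greedy_spanner(n, edges, f, k):
    """Naive f-EFT (2k-1)-spanner via the exact FT-greedy edge test.

    Edge (u, v) is added iff some fault set F of at most f
    already-chosen spanner edges makes dist_{H \\ F}(u, v) > 2k - 1.
    Enumerating fault sets is exponential in f: toy instances only.
    """
    H = []
    for (u, v) in edges:
        for r in range(f + 1):
            if any(dist(n, H, set(F), u, v) > 2 * k - 1
                   for F in combinations(H, r)):
                H.append((u, v))
                break
    return H
```

On K4 with f = 0 this reduces to the classic greedy spanner (a 3-edge star for stretch 3), while f = 1 forces two extra edges in.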

... This slack maintains correctness and allows one to escape NP-hardness, but it introduces the challenge of bounding the number of additional edges added. The approach in [DR20] is to design an O(k)-approximation algorithm for length-bounded cut and use this in an efficiently computable test subroutine. This gives a polynomial runtime, but pays the approximation ratio of O(k) in spanner size over optimal. ...

... So the result in [DR20] takes an important step forward (polynomial time) but also a step back (non-optimal size, by a factor of O(k)). ...

... This technique is inspired by the color-coding technique [2], and provides a general recipe for translating a given fault-free algorithm for a given task into a fault-tolerant one while paying a relatively small overhead in terms of computation time and other complexity measures of interest (e.g., space). Indeed this approach has been applied in the context of distance sensitivity oracles [15,7], fault-tolerant spanners [11,5,12], fault-tolerant reachability preservers [6], distributed minimum-cut computation [20], and resilient distributed computation [24,23,10,17]. The high-level idea of this technique is based on sampling a (relatively) small number of subgraphs G_1, . . . ...
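The sampling recipe can be sketched in a few lines (parameter choices here are illustrative, not the tuned ones from the cited works): each of r independent subgraphs keeps every edge with probability 1 − 1/L, so a fixed replacement path of at most L edges survives in a sample with probability about (1 − 1/L)^L ≈ 1/e, while a fixed fault set of at most f edges is wiped out with probability at least (1/L)^f; taking r = O(L^f log n) samples then succeeds for all such pairs with high probability.

```python
import random

def sample_subgraphs(edges, L, reps, seed=0):
    """Sample `reps` random subgraphs; each keeps an edge w.p. 1 - 1/L."""
    rng = random.Random(seed)
    return [{e for e in edges if rng.random() < 1 - 1 / L}
            for _ in range(reps)]

def covers(subgraphs, path_edges, fault_edges):
    """True if some sampled subgraph contains all of P and none of F."""
    return any(path_edges <= G and not (fault_edges & G)
               for G in subgraphs)
```

A fault-free algorithm run separately on each sampled subgraph then yields a fault-tolerant answer: for any small fault set, some subgraph already avoided it.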

... Comparison with a recent independent work of [4]. Independent of our work, [4] presented a new slack version of the greedy algorithm from [3,12] to obtain (vertex) fault-tolerant spanners with optimal size bounds. Their main algorithm is randomized, and the emphasis there is on optimizing the size of the output spanner. ...

... Distributed constructions of fault-tolerant preservers. Distributed constructions of FT preservers attracted attention recently [15,20,16,29]. In the context of exact distance preservers, Ghaffari and Parter [20] presented the first distributed constructions of fault tolerant distance preserving structures. ...

The restoration lemma by Afek, Bremler-Barr, Kaplan, Cohen, and Merritt [Dist. Comp. '02] proves that, in an undirected unweighted graph, any replacement shortest path avoiding a failing edge can be expressed as the concatenation of two original shortest paths. However, the lemma is tiebreaking-sensitive: if one selects a particular canonical shortest path for each node pair, it is no longer guaranteed that one can build replacement paths by concatenating two selected shortest paths. They left as an open problem whether a method of shortest path tiebreaking with this desirable property is generally possible. We settle this question affirmatively with the first general construction of restorable tiebreaking schemes. We then show applications to various problems in fault-tolerant network design. These include a faster algorithm for subset replacement paths, more efficient fault-tolerant (exact) distance labeling schemes, fault-tolerant subset distance preservers and $+4$ additive spanners with improved sparsity, and fast distributed algorithms that construct these objects. For example, an almost immediate corollary of our restorable tiebreaking scheme is the first nontrivial distributed construction of sparse fault-tolerant distance preservers resilient to three faults.

... This technique is inspired by the color-coding technique [AYZ95], and provides a general recipe for translating a given fault-free algorithm for a given task into a fault-tolerant one while paying a relatively small overhead in terms of computation time and other complexity measures of interest (e.g., space). Indeed this approach has been applied in the context of distance sensitivity oracles [GW20, CC20b], fault-tolerant spanners [DK11,BCPS15,DR20], fault-tolerant reachability preservers [CC20a], distributed minimum-cut computation [Par19a], and resilient distributed computation [PY19b,PY19a,CPT20,HP20]. The high-level idea of this technique is based on sampling a (relatively) small number of subgraphs G_1, . . . ...

In this article, we provide a unified and simplified approach to derandomize central results in the area of fault-tolerant graph algorithms. Given a graph $G$, a vertex pair $(s,t) \in V(G)\times V(G)$, and a set of edge faults $F \subseteq E(G)$, a replacement path $P(s,t,F)$ is an $s$-$t$ shortest path in $G \setminus F$. For integer parameters $L,f$, a replacement path covering (RPC) is a collection of subgraphs of $G$, denoted by $\mathcal{G}_{L,f}=\{G_1,\ldots, G_r \}$, such that for every set $F$ of at most $f$ faults (i.e., $|F|\le f$) and every replacement path $P(s,t,F)$ of at most $L$ edges, there exists a subgraph $G_i\in \mathcal{G}_{L,f}$ that contains all the edges of $P$ and does not contain any of the edges of $F$. The covering value of the RPC $\mathcal{G}_{L,f}$ is then defined to be the number of subgraphs in $\mathcal{G}_{L,f}$. We present efficient deterministic constructions of $(L,f)$-RPCs whose covering values almost match the randomized ones, for a wide range of parameters. Our time and value bounds improve considerably over the previous construction of Parter (DISC 2019). We also provide an almost matching lower bound for the value of these coverings. A key application of our above deterministic constructions is the derandomization of the algebraic construction of the distance sensitivity oracle by Weimann and Yuster (FOCS 2010). The preprocessing and query time of our deterministic algorithm nearly match the randomized bounds. This resolves the open problem of Alon, Chechik and Cohen (ICALP 2019).

A highly desirable property of networks is robustness to failures. Consider a metric space $(X,d_X)$. A graph $H$ over $X$ is a $\vartheta$-reliable $t$-spanner if, for every set of failed vertices $B\subset X$, there is a superset $B^+\supseteq B$ such that the induced subgraph $H[X\setminus B]$ preserves all the distances between points in $X\setminus B^+$ up to a stretch factor $t$, while the expected size of $B^+$ is at most $(1+\vartheta)|B|$. Such a spanner could withstand a catastrophe: failure of even $90\%$ of the network. Buchin, Har-Peled, and Oláh [2019,2020] constructed very sparse reliable spanners with stretch $1+\epsilon$ for Euclidean space using locality-sensitive orderings. Har-Peled and Oláh [2020] constructed reliable spanners for various non-Euclidean metric spaces using sparse covers. However, this second approach has an inherent dependency on the aspect ratio (a.k.a. spread) and gives sub-optimal stretch and sparsity parameters. Our contribution is twofold: 1) We construct a locality-sensitive ordering for doubling metrics with a small number of orderings. As a corollary, we obtain reliable spanners for doubling metrics matching the sparsity parameters of known reliable spanners for Euclidean space. 2) We introduce new types of locality-sensitive orderings suitable for non-Euclidean metrics and construct such orderings for various metric families. We then construct reliable spanners from the newly introduced locality-sensitive orderings via reliable 2-hop spanners for paths. The number of edges in our spanner has no dependency on the spread.

We show an improved parallel algorithm for decomposing an undirected unweighted graph into small diameter pieces with a small fraction of the edges in between. These decompositions form critical subroutines in a number of graph algorithms. Our algorithm builds upon the shifted shortest path approach introduced in [Blelloch, Gupta, Koutis, Miller, Peng, Tangwongsan, SPAA 2011]. By combining various stages of the previous algorithm, we obtain a significantly simpler algorithm with the same asymptotic guarantees as the best sequential algorithm.

A decomposition of a graph G=(V,E) is a partition of the vertex set into subsets (called blocks). The diameter of a decomposition is the least d such that any two vertices belonging to the same connected component of a block are at distance ≤ d. In this paper we prove (nearly best possible) statements of the form: any n-vertex graph has a decomposition into a small number of blocks, each having small diameter. Such decompositions provide a tool for efficiently decentralizing distributed computations. In [4] it was shown that every graph has a decomposition into at most s(n) blocks of diameter at most s(n) for $s(n) = n^{O(\sqrt {\log \log n/\log n})}$. Using a technique of Awerbuch [3] and Awerbuch and Peleg [5], we improve this result by showing that every graph has a decomposition of diameter O(log n) into O(log n) blocks. In addition, we give a randomized distributed algorithm that produces such a decomposition and runs in time O(log² n). The construction can be parameterized to provide decompositions that trade off between the number of blocks and the diameter. We show that this trade-off is nearly best possible for two families of graphs: the first consists of skeletons of certain triangulations of a simplex, and the second consists of grid graphs with added diagonals. The proofs in both cases rely on basic results in combinatorial topology: Sperner's lemma for the first class and Tucker's lemma for the second.
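As a point of reference for the decompositions discussed above, the following is a bare-bones deterministic ball-carving sketch (my own illustration, not the randomized scheme of the paper): it guarantees each block is a BFS ball of radius at most R in the remaining graph, so any two vertices of a block are within 2R of each other through the center, but it makes no attempt to bound the number of inter-block edges, which is precisely what the random shifts / radii add.

```python
from collections import deque

def ball_carving(n, adj, radius):
    """Partition vertices into blocks by repeatedly carving BFS balls.

    Each block is a ball of radius <= `radius` around a center in the
    *remaining* graph, so every block has weak diameter <= 2 * radius.
    """
    unassigned = set(range(n))
    blocks = []
    while unassigned:
        center = min(unassigned)          # arbitrary deterministic choice
        ball, frontier = {center}, deque([(center, 0)])
        while frontier:
            u, d = frontier.popleft()
            if d == radius:               # stop expanding at the radius cap
                continue
            for v in adj[u]:
                if v in unassigned and v not in ball:
                    ball.add(v)
                    frontier.append((v, d + 1))
        blocks.append(ball)
        unassigned -= ball
    return blocks
```

On a 6-vertex path with radius 1 this carves three blocks of two vertices each.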

We investigate the problem of constructing spanners for a given set of points that are tolerant to edge/vertex faults. Let S ⊆ ℝ^d be a set of n points and let k be an integer. A k-edge/vertex fault tolerant spanner for S has the property that after the deletion of k arbitrary edges/vertices, each pair of points in the remaining graph is still connected by a short path. Recently it was shown that for each set S of n points there exists a k-edge/vertex fault tolerant spanner with O(k²n) edges which can be constructed in O(n log n + k²n) time. Furthermore, it was shown that for each set S of n points there exists a k-edge/vertex fault tolerant spanner whose degree is bounded by O(c^{k+1}) for some constant c. Our first contribution is a construction of a k-vertex fault tolerant spanner with O(kn) edges, which is a tight bound. The computation takes O(n log^{d−1} n + kn log log n) time. Then we show that the same k-vertex fault tolerant spanner is also k-edge fault tolerant. Thereafter, we construct a k-vertex fault tolerant spanner with O(k²n) edges whose degree is bounded by O(k²). Finally, we give a more natural but stronger definition of k-edge fault tolerance which cannot necessarily be satisfied if one allows only simple edges between the points of S. We investigate the question of whether Steiner points help. We answer this question affirmatively and prove Θ(kn) bounds on the number of Steiner points and on the number of edges in such spanners.

We improve on random sampling techniques for approximately solving problems that involve cuts and flows in graphs. We give a near-linear-time construction that transforms any graph on n vertices into an O(n log n)-edge graph on the same vertices whose cuts have approximately the same value as the original graph's. In this new graph, for example, we can run the O(m^{3/2})-time maximum flow algorithm of Goldberg and Rao to find an s-t minimum cut in O(n^{3/2}) time. This corresponds to a (1 + ε)-times minimum s-t cut in the original graph. In a similar way, we can approximate a sparsest cut to within O(log n) in O(n²) time using a previous O(mn)-time algorithm. A related approach leads to a randomized divide and conquer algorithm producing an approximately maximum flow in O(m√n) time.

We give a short and easy upper bound on the worst-case size of fault tolerant spanners, which improves on all prior work and is fully optimal at least in the setting of vertex faults.

In the Steiner k-Forest problem, we are given an edge weighted graph, a collection D of node pairs, and an integer k ⩽ |D|. The goal is to find a min-weight subgraph that connects at least k pairs. The best known ratio for this problem is min{O(√n), O(√k)} [Gupta et al. 2010]. In Gupta et al. [2010], it is also shown that ratio ρ for Steiner k-Forest implies ratio O(ρ · log² n) for the related Dial-a-Ride problem. The only other algorithm known for Dial-a-Ride, besides the one resulting from Gupta et al. [2010], has ratio O(√n) [Charikar and Raghavachari 1998].
We obtain approximation ratio n^{0.448} for Steiner k-Forest and Dial-a-Ride with unit weights, breaking the O(√n) approximation barrier for this natural case. We also show that if the maximum edge-weight is O(n^ε), then one can achieve ratio O(n^{(1 + ε) · 0.448}), which is less than √n if ε is small enough. The improvement for Dial-a-Ride is the first progress for this problem in 15 years. To prove our main result, we consider the following generalization of the Minimum k-Edge Subgraph (Mk-ES) problem, which we call Min-Cost ℓ-Edge-Profit Subgraph (MCℓ-EPS): Given a graph G = (V, E) with edge-profits p = {p_e: e ∈ E} and node-costs c = {c_v: v ∈ V}, and a lower profit bound ℓ, find a minimum node-cost subgraph of G of edge-profit at least ℓ. The Mk-ES problem is a special case of MCℓ-EPS with unit node costs and unit edge profits. The currently best known ratio for Mk-ES is n^{3−2√2+ε} [Chlamtac et al. 2012]. We extend this ratio to MCℓ-EPS for general node costs and profits bounded by a polynomial in n, which may be of independent interest.

We use exponential start time clustering to design faster parallel graph algorithms involving distances. Previous algorithms usually rely on graph decomposition routines with strict restrictions on the diameters of the decomposed pieces. We weaken these bounds in favor of stronger local probabilistic guarantees. This allows more direct analyses of the overall process, giving:
• Linear work parallel algorithms that construct spanners with O(k) stretch and size O(n^{1+1/k}) in unweighted graphs, and size O(n^{1+1/k} log k) in weighted graphs.
• Hopsets that lead to the first parallel algorithm for approximating shortest paths in undirected graphs with O(m poly log n) work.

We give a Polynomial-Time Approximation Scheme (PTAS) for the Steiner tree problem in planar graphs. The running time is O(n log n).

Given a graph G = (V, E), a subgraph G' = (V, E') is a t-spanner of G if for every u, v ∈ V, the distance from u to v in G' is at most t times longer than that distance in G. This paper presents some results concerning the existence and efficient constructability of sparse spanners for various classes of graphs, including general undirected graphs, undirected chordal graphs, and general directed graphs.
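The fault-free existence results throughout this page rest on the classic greedy construction; as a quick illustration (the standard algorithm, in my own unweighted sketch), the greedy (2k − 1)-spanner keeps an edge only when its endpoints are currently more than 2k − 1 apart in the partial spanner, which forces the output to have girth greater than 2k and hence at most O(n^{1+1/k}) edges.

```python
from collections import deque

def greedy_spanner(n, edges, k):
    """Unweighted greedy (2k-1)-spanner: keep edge (u, v) iff the
    current spanner distance between u and v exceeds 2k - 1."""
    adj = [[] for _ in range(n)]
    H = []
    for u, v in edges:
        # BFS from u in the partial spanner, cut off at depth 2k - 1.
        dist = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if dist[x] >= 2 * k - 1:
                continue
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        if v not in dist:          # dist_H(u, v) > 2k - 1: must keep edge
            H.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return H
```

On K4 with k = 2 (stretch 3) the output is a 3-edge star; with k = 1 every edge must be kept.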

The synchronizer is a simulation methodology introduced by Awerbuch [J. Assoc. Comput. Mach., 32 (1985), pp. 804–823] for simulating a synchronous network by an asynchronous one, thus enabling the execution of a synchronous algorithm on an asynchronous network. In this paper a novel technique for constructing network synchronizers is presented. This technique is developed from some basic relationships between synchronizers and the structure of a t-spanning subgraph over the network. As a special result, a synchronizer for the hypercube with optimal time and communication complexities is obtained.

An L-length-bounded cut in a graph G with source s and sink t is a cut that destroys all s-t-paths of length at most L. An L-length-bounded flow is a flow in which only flow paths of length at most L are used. We show that the minimum length-bounded cut problem in graphs with unit edge lengths is NP-hard to approximate within a factor of at least 1.1377 for L ≥ 5 in the case of node-cuts and for L ≥ 4 in the case of edge-cuts. We also give approximation algorithms of ratio min{L, n/L} in the node case and min{L, n²/L², √m} in the edge case, where n denotes the number of nodes and m denotes the number of edges. We discuss the integrality gaps of the LP relaxations of length-bounded flow and cut problems, analyze the structure of optimal solutions, and present further complexity results for special cases.
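The problem is compact enough to state as code. Here is a brute-force reference sketch (my own illustration; since the problem is NP-hard, this exhaustive search is only usable on toy instances where the approximation ratios above are unnecessary): it finds a smallest edge set whose removal destroys every s-t path of at most L edges, while longer s-t paths are allowed to survive.

```python
from itertools import combinations
from collections import deque

def short_path_exists(n, edges, s, t, L):
    """Is there an s-t path of at most L edges? (BFS, cut off at depth L.)"""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        if dist[u] >= L:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return t in dist

def min_length_bounded_cut(n, edges, s, t, L):
    """Smallest edge set killing all s-t paths of <= L edges.

    Exhaustive search over all edge subsets by increasing size, so the
    runtime is exponential in the cut size: toy instances only.
    """
    for size in range(len(edges) + 1):
        for F in combinations(edges, size):
            if not short_path_exists(
                    n, [e for e in edges if e not in F], s, t, L):
                return list(F)
```

In the example below, two disjoint 2-edge s-t paths must both be hit, while a 3-edge detour may remain; the minimum 2-length-bounded cut therefore has size 2.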

Aimed at an audience of researchers and graduate students in computational geometry and algorithm design, this book uses the Geometric Spanner Network Problem to showcase a number of useful algorithmic techniques, data structure strategies, and geometric analysis techniques with many applications, practical and theoretical. The authors present rigorous descriptions of the main algorithms and their analyses for different variations of the Geometric Spanner Network Problem. Though the basic ideas behind most of these algorithms are intuitive, very few are easy to describe and analyze. For most of the algorithms, nontrivial data structures need to be designed, and nontrivial techniques need to be developed in order for analysis to take place. Still, there are several basic principles and results that are used throughout the book. One of the most important is the powerful well-separated pair decomposition. This decomposition is used as a starting point for several of the spanner constructions.

A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults $r$, for all stretch bounds.
For stretch $k \geq 3$ we design a simple transformation that converts every $k$-spanner construction with at most $f(n)$ edges into an $r$-fault-tolerant $k$-spanner construction with at most $O(r^3 \log n) \cdot f(2n/r)$ edges. Applying this to standard greedy spanner constructions gives $r$-fault tolerant $k$-spanners with $\tilde O(r^{2} n^{1+\frac{2}{k+1}})$ edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on $n$ but exponentially on $r$ (approximately like $k^r$).
For the case $k=2$ and unit-length edges, an $O(r \log n)$-approximation algorithm is known from recent work of Dinitz and Krauthgamer [arXiv 2010], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to $O(\log n)$, which is, notably, independent of the number of faults $r$. We further strengthen this bound in terms of the maximum degree by using the Lovász Local Lemma.
Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.

This paper provides a novel technique for the analysis of randomized algorithms for optimization problems on metric spaces, by relating the randomized performance ratio for any metric space to the randomized performance ratio for a set of "simple" metric spaces. We define a notion of a set of metric spaces that probabilistically-approximates another metric space. We prove that any metric space can be probabilistically-approximated by hierarchically well-separated trees (HST) with a polylogarithmic distortion. These metric spaces are "simple" as being: (1) tree metrics; (2) natural for applying a divide-and-conquer algorithmic approach. The technique presented is of particular interest in the context of on-line computation. A large number of on-line algorithmic problems, including metrical task systems, server problems, distributed paging, and dynamic storage rearrangement are defined in terms of some metric space. Typically for these problems, there are linear lower bounds on the competitive ratio of deterministic algorithms. Although randomization against an oblivious adversary has the potential of overcoming these high ratios, very little progress has been made in the analysis. We demonstrate the use of our technique by obtaining substantially improved results for two different on-line problems.

We describe several compact routing schemes for general weighted undirected networks. Our schemes are simple and easy to implement. The routing tables stored at the nodes of the network are all very small. The headers attached to the routed messages, including the name of the destination, are extremely short. The routing decision at each node takes constant time. Yet, the stretch of these routing schemes, i.e., the worst ratio between the cost of the path on which a packet is routed and the cost of the cheapest path from source to destination, is a small constant. Our schemes achieve a near-optimal tradeoff between the size of the routing tables used and the resulting stretch. More specifically, we obtain: 1. A routing scheme that uses only Õ(n^{1/2}) bits of memory at each node of an n-node network that has stretch 3. The space is optimal, up to logarithmic factors, in the sense that every routing scheme with stretch < 3 must use, on some networks, routing tables of total size Ω(n²), and every routing scheme with stretch < 5 must use, on some networks, routing tables of total size Ω(n^{3/2}). The headers used are only (1 + o(1)) log₂ n bits long and each routing decision takes constant time. A variant of this scheme with ⌈log₂ n⌉-bit headers makes routing decisions in O(log log n) time.

Let S be a set of n points in ℝ^d, and k an integer such that 1 ≤ k ≤ n − 2. Algorithms are given that construct fault-tolerant spanners for S. If in such a spanner at most k edges or vertices are removed, then each pair of points in the remaining graph is still connected by a "short" path. Our results include (i) an algorithm with running time O(n log^{d−1} n + kn log log n + k²n) that constructs a spanner with O(k²n) edges, that is resilient to k edge faults, (ii) an algorithm with running time O(n log n + k²n) that constructs a spanner with O(k²n) edges, that is resilient to k vertex faults, and (iii) an algorithm with running time O(n log n + c^k n) that constructs a spanner of degree O(c^k), whose total edge length is bounded by O(c^k) times the weight of a minimum spanning tree of S, and that is resilient to k edge or vertex faults. Here, c is a constant that is independent of n and k. Our algorithms are based on well-separated pairs, and approximate n...