Michael T. Goodrich

University of California, Irvine, Irvine, California, United States

Publications (331) · 38.67 Total Impact Points

  • ABSTRACT: We study balanced circle packings and circle-contact representations for planar graphs, where the ratio of the largest circle's diameter to the smallest circle's diameter is polynomial in the number of circles. We provide a number of positive and negative results for the existence of such balanced configurations.
    08/2014;
  • Source
    ABSTRACT: Many well-known graph drawing techniques, including force directed drawings, spectral graph layouts, multidimensional scaling, and circle packings, have algebraic formulations. However, practical methods for producing such drawings ubiquitously use iterative numerical approximations rather than constructing and then solving algebraic expressions representing their exact solutions. To explain this phenomenon, we use Galois theory to show that many variants of these problems have solutions that cannot be expressed by nested radicals or nested roots of low-degree polynomials. Hence, such solutions cannot be computed exactly even in extended computational models that include such operations.
    08/2014;
  • ABSTRACT: We study wear-leveling techniques for cuckoo hashing, showing that it is possible to achieve a memory wear bound of $\log\log n+O(1)$ after the insertion of $n$ items into a table of size $Cn$, for a suitable constant $C$. Moreover, we study our cuckoo hashing method empirically, showing that it significantly improves on the memory-wear performance of classic cuckoo hashing and linear probing in practice.
    04/2014;
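    A minimal sketch of the setting, assuming only classic two-table cuckoo hashing instrumented with per-cell write counters; the class name, table size, and eviction cap are illustrative choices, not the paper's wear-leveled variant.

      import random

      class CuckooHashTable:
          """Two-table cuckoo hashing with per-cell write ("wear") counters."""

          def __init__(self, size=1024, max_evictions=64):
              self.size = size
              self.max_evictions = max_evictions
              self.tables = [[None] * size, [None] * size]
              self.wear = [[0] * size, [0] * size]   # writes per memory cell
              self.seeds = (random.random(), random.random())

          def _slot(self, which, key):
              return hash((self.seeds[which], key)) % self.size

          def insert(self, key):
              for _ in range(self.max_evictions):
                  for which in (0, 1):
                      i = self._slot(which, key)
                      if self.tables[which][i] is None:
                          self.tables[which][i] = key
                          self.wear[which][i] += 1
                          return True
                  # both candidate cells occupied: evict from table 0, retry
                  i = self._slot(0, key)
                  key, self.tables[0][i] = self.tables[0][i], key
                  self.wear[0][i] += 1
              return False   # a full implementation would rehash here

          def max_wear(self):
              return max(max(w) for w in self.wear)

      t = CuckooHashTable()
      for k in range(700):
          t.insert(k)
      print("max cell wear:", t.max_wear())   # wear-leveling aims to keep this small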
  • Source
    Michael T. Goodrich
    ABSTRACT: We study sorting algorithms based on randomized round-robin comparisons. Specifically, we study Spin-the-bottle sort, where comparisons are unrestricted, and Annealing sort, where comparisons are restricted to a distance bounded by a temperature parameter. Both algorithms are simple, randomized, data-oblivious sorting algorithms, which are useful in privacy-preserving computations, but, as we show, Annealing sort is much more efficient. We show that there is an input permutation that causes Spin-the-bottle sort to require $\Omega(n^2 \log n)$ expected time in order to succeed, and that in $O(n^2 \log n)$ time this algorithm succeeds with high probability for any input. We also show there is a specification of Annealing sort that runs in $O(n \log n)$ time and succeeds with very high probability.
    Algorithmica 03/2014; 68(4):835-858. · 0.49 Impact Factor
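    A toy rendition of the temperature-restricted idea; the halving schedule and repetition count below are made-up placeholders (the paper's analyzed annealing schedule is what yields the $O(n \log n)$ guarantee), so this sketch only promises a nearly sorted output.

      import random

      def annealing_sort_sketch(a, rng=random.Random(0)):
          """Compare-exchange random pairs whose index distance is bounded by
          a shrinking "temperature".  The compared positions depend only on
          the random choices, never on the data values, which is the
          data-oblivious property used for privacy-preserving computation."""
          n = len(a)
          temp = n
          while temp >= 1:
              for _ in range(8):                     # arbitrary repetition count
                  for i in range(n - 1):
                      j = min(n - 1, i + rng.randint(1, temp))
                      if a[i] > a[j]:                # compare-exchange
                          a[i], a[j] = a[j], a[i]
              temp //= 2
          return a

      print(annealing_sort_sketch(random.sample(range(20), 20)))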
  • Source
    Olga Ohrimenko, Michael T. Goodrich, Roberto Tamassia, Eli Upfal
    ABSTRACT: We present a simple, efficient, and secure data-oblivious randomized shuffle algorithm. This is the first secure data-oblivious shuffle that is not based on sorting. Our method can be used to improve previous oblivious storage solutions for network-based outsourcing of data.
    02/2014;
  • Source
    Michael T. Goodrich, Paweł Pszona
    ABSTRACT: In streamed graph drawing, a planar graph, G, is given incrementally as a data stream and a straight-line drawing of G must be updated after each new edge is released. To preserve the mental map, changes to the drawing should be minimized after each update, and Binucci et al. show that exponential area is necessary and sufficient for a number of streamed graph drawings for trees if edges are not allowed to move at all. We show that a number of streamed graph drawings can, in fact, be done with polynomial area, including planar streamed graph drawings of trees, tree-maps, and outerplanar graphs, if we allow for a small number of coordinate movements after each update. Our algorithms involve an interesting connection to a classic algorithmic problem - the file maintenance problem - and we also give new algorithms for this problem in a framework where bulk memory moves are allowed.
    08/2013;
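    To fix ideas about the file maintenance problem, a minimal sketch that keeps keys sorted in an array with gaps, shifting a short run of cells on each insert and re-spreading everything evenly when no gap is available; the function names and re-spread rule are illustrative, and real solutions (including the paper's bulk-move framework) bound the moves per update far more carefully.

      def fm_insert(arr, key):
          """Insert key into arr (sorted values interleaved with None gaps).
          Assumes the array never becomes completely full."""
          n = len(arr)
          pos = next((i for i, v in enumerate(arr)
                      if v is not None and v >= key), n)
          gap = pos
          while gap < n and arr[gap] is not None:
              gap += 1
          if gap == n:                             # no gap to the right: re-spread
              vals = [v for v in arr if v is not None]
              vals.insert(next((j for j, v in enumerate(vals) if v >= key),
                               len(vals)), key)
              for i in range(n):
                  arr[i] = None
              for j, v in enumerate(vals):         # distinct slots since len(vals) <= n
                  arr[j * n // len(vals)] = v
          else:
              arr[pos + 1:gap + 1] = arr[pos:gap]  # shift the run right by one cell
              arr[pos] = key

      f = [None] * 16
      for x in [5, 3, 9, 1, 7, 8, 2]:
          fm_insert(f, x)
      print([v for v in f if v is not None])       # [1, 2, 3, 5, 7, 8, 9]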
  • Source
    Michael T. Goodrich, Paweł Pszona
    ABSTRACT: We study a three-dimensional analogue to the well-known graph visualization approach known as arc diagrams. We provide several algorithms that achieve good angular resolution for 3D arc diagrams, even for cases when the arcs must project to a given 2D straight-line drawing of the input graph. Our methods make use of various graph coloring algorithms, including an algorithm for a new coloring problem, which we call localized edge coloring.
    08/2013;
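    For orientation, a sketch of standard greedy proper edge coloring, the baseline that coloring-based methods like these build on; the paper's localized edge coloring adds locality constraints that are not modeled here.

      from collections import defaultdict

      def greedy_edge_coloring(edges):
          """Give each edge the smallest color unused at either endpoint;
          this uses at most 2*Delta - 1 colors for maximum degree Delta."""
          used = defaultdict(set)          # vertex -> colors on incident edges
          color = {}
          for u, v in edges:
              c = 0
              while c in used[u] or c in used[v]:
                  c += 1
              color[(u, v)] = c
              used[u].add(c)
              used[v].add(c)
          return color

      print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0), (0, 3)]))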
  • Source
    David Eppstein, Michael T. Goodrich, Joseph A. Simons
    ABSTRACT: We introduce the problem of performing set-difference range queries, where answers to queries are set-theoretic symmetric differences between sets of items in two geometric ranges. We describe a general framework for answering such queries based on a novel use of data-streaming sketches we call signed symmetric-difference sketches. We show that such sketches can be realized using invertible Bloom filters (IBFs), which can be composed, differenced, and searched so as to solve set-difference range queries in a wide range of scenarios.
    06/2013;
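    A minimal sketch of an invertible Bloom filter with the compose/difference/peel behavior described above; the cell count, hash count, and checksum scheme are illustrative choices (keys are assumed to be positive integers), and the paper's signed symmetric-difference sketches add further structure on top.

      class IBF:
          def __init__(self, m=64, k=3, seed=0):
              self.m, self.k, self.seed = m, k, seed
              self.count = [0] * m
              self.key_sum = [0] * m     # XOR of keys hashed to each cell
              self.chk_sum = [0] * m     # XOR of per-key checksums

          def _cells(self, key):
              return [hash((self.seed, i, key)) % self.m for i in range(self.k)]

          def _chk(self, key):
              return hash((self.seed, "chk", key))

          def insert(self, key, sign=+1):
              for i in self._cells(key):
                  self.count[i] += sign
                  self.key_sum[i] ^= key
                  self.chk_sum[i] ^= self._chk(key)

          def subtract(self, other):     # cell-wise difference of two IBFs
              for i in range(self.m):
                  self.count[i] -= other.count[i]
                  self.key_sum[i] ^= other.key_sum[i]
                  self.chk_sum[i] ^= other.chk_sum[i]

          def peel(self):
              """After subtract(): recover (keys only in self, keys only in
              other), or None if the difference is too large to decode."""
              mine, theirs = set(), set()
              progress = True
              while progress:
                  progress = False
                  for i in range(self.m):
                      c = self.count[i]
                      if c in (1, -1) and self.chk_sum[i] == self._chk(self.key_sum[i]):
                          key = self.key_sum[i]        # cell is "pure"
                          (mine if c == 1 else theirs).add(key)
                          self.insert(key, sign=-c)    # remove it from all cells
                          progress = True
              if any(self.count) or any(self.key_sum):
                  return None
              return mine, theirs

      A, B = IBF(seed=1), IBF(seed=1)
      for x in [10, 20, 30, 40]: A.insert(x)
      for x in [30, 40, 50]:     B.insert(x)
      A.subtract(B)
      print(A.peel())              # ({10, 20}, {50})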
  • Source
    Michael T. Goodrich, Paweł Pszona
    ABSTRACT: Parametric search has been widely used in geometric algorithms. Cole's improvement provides a way of saving a logarithmic factor in the running time over what is achievable using the standard method. Unfortunately, this improvement comes at the expense of making an already complicated algorithm even more complex; hence, this technique has been mostly of theoretical interest. In this paper, we provide an algorithm engineering framework that allows for the same asymptotic complexity to be achieved probabilistically in a way that is both simple and practical (i.e., suitable for actual implementation). The main idea of our approach is to show that a variant of quicksort, known as boxsort, can be used to drive comparisons, instead of using a sorting network, like the complicated AKS network, or an EREW parallel sorting algorithm, like the fairly intricate parallel mergesort algorithm. This results in a randomized optimization algorithm with a running time matching that of using Cole's method, with high probability, while also being practical. We show how this yields practical implementations of some geometric algorithms utilizing parametric search and provide experimental results that demonstrate the practicality of the method.
    06/2013;
  • Source
    David Eppstein, Michael T. Goodrich, Daniel S. Hirschberg
    ABSTRACT: We formalize a problem we call combinatorial pair testing (CPT), which has applications to the identification of uncooperative or unproductive participants in pair programming, massively distributed computing, and crowdsourcing environments. We give efficient adaptive and nonadaptive CPT algorithms and we show that our methods use an optimal number of testing rounds to within constant factors. We also provide an empirical evaluation of some of our methods.
    05/2013;
  • Source
    A.U. Asuncion, M.T. Goodrich
    ABSTRACT: In this paper, we study sparsity-exploiting Mastermind algorithms for attacking the privacy of an entire database of character strings or vectors, such as DNA strings, movie ratings, or social network friendship data. Based on reductions to nonadaptive group testing, our methods are able to take advantage of minimal amounts of privacy leakage, such as contained in a single bit that indicates if two people in a medical database have any common genetic mutations, or if two people have any common friends in an online social network. We analyze our Mastermind attack algorithms using theoretical characterizations that provide sublinear bounds on the number of queries needed to clone the database, as well as experimental tests on genomic information, collaborative filtering data, and online social networks. By taking advantage of the generally sparse nature of these real-world databases and modulating a parameter that controls query sparsity, we demonstrate that relatively few nonadaptive queries are needed to recover a large majority of each database.
    IEEE Transactions on Knowledge and Data Engineering 01/2013; 25(1):131-144. · 1.89 Impact Factor
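    A toy illustration of the nonadaptive group-testing core, using the classic rule that any item occurring in a pool that tests negative cannot belong to the hidden set; the pool parameters and function names here are arbitrary, and the paper's contribution is reducing Mastermind-style database queries to tests of this kind while tuning query sparsity.

      import random

      def recover_sparse_set(n, is_positive, tests=200, pool_prob=0.05, seed=7):
          """Nonadaptive group testing: draw all random pools up front, then
          rule out every item that appears in a pool the oracle calls negative.
          Returns a superset of the hidden set (exact w.h.p. for sparse sets)."""
          rng = random.Random(seed)
          pools = [[i for i in range(n) if rng.random() < pool_prob]
                   for _ in range(tests)]
          candidates = set(range(n))
          for pool in pools:
              if not is_positive(pool):
                  candidates -= set(pool)
          return candidates

      hidden = {12, 47, 300}                       # sparse "database row"
      oracle = lambda pool: any(i in hidden for i in pool)
      print(recover_sparse_set(512, oracle))       # typically exactly {12, 47, 300}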
  • L. Arge, M.T. Goodrich, F. van Walderveen
    ABSTRACT: Betweenness centrality is one of the most well-known measures of the importance of nodes in a social-network graph. In this paper we describe the first known external-memory and cache-oblivious algorithms for computing betweenness centrality. We present four different external-memory algorithms exhibiting various tradeoffs with respect to performance. Two of the algorithms are cache-oblivious. We describe general algorithms for networks with weighted and unweighted edges and a specialized algorithm for networks with small diameters, as is common in social networks exhibiting the “small worlds” phenomenon.
    Big Data, 2013 IEEE International Conference on; 01/2013
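    For reference, the standard in-memory baseline, Brandes' algorithm for unweighted graphs, which the external-memory and cache-oblivious algorithms reorganize into I/O-efficient passes.

      from collections import deque

      def betweenness(adj):
          """Brandes' algorithm: one BFS per source to count shortest paths,
          then a reverse-order dependency accumulation.  adj: {v: [neighbors]}."""
          bc = {v: 0.0 for v in adj}
          for s in adj:
              sigma = {v: 0 for v in adj}; sigma[s] = 1
              dist = {v: -1 for v in adj}; dist[s] = 0
              preds = {v: [] for v in adj}
              order, q = [], deque([s])
              while q:
                  v = q.popleft(); order.append(v)
                  for w in adj[v]:
                      if dist[w] < 0:
                          dist[w] = dist[v] + 1; q.append(w)
                      if dist[w] == dist[v] + 1:
                          sigma[w] += sigma[v]; preds[w].append(v)
              delta = {v: 0.0 for v in adj}
              for w in reversed(order):
                  for v in preds[w]:
                      delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                  if w != s:
                      bc[w] += delta[w]
          return bc                                # halve values for undirected graphs

      adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
      print(betweenness(adj))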
  • Michael T. Goodrich, Joseph A. Simons
    ABSTRACT: We give a new efficient data-oblivious PRAM simulation and several new data-oblivious graph-drawing algorithms with application to privacy-preserving graph-drawing in a cloud computing context.
    Proceedings of the 20th international conference on Graph Drawing; 09/2012
  • Michael T. Goodrich, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework where the client can maintain privacy while constructing a drawing of her graph.
    Proceedings of the 20th international conference on Graph Drawing; 09/2012
  • ABSTRACT: A graph is 1-planar if it can be drawn in the plane such that each edge is crossed at most once. It is maximal 1-planar if the addition of any edge violates 1-planarity. Maximal 1-planar graphs have at most 4n−8 edges. We show that there are sparse maximal 1-planar graphs with only $\frac{45}{17} n + \mathcal{O}(1)$ edges. With a fixed rotation system there are maximal 1-planar graphs with only $\frac{7}{3} n + \mathcal{O}(1)$ edges. This is sparser than maximal planar graphs. There cannot be maximal 1-planar graphs with fewer than $\frac{21}{10} n - \mathcal{O}(1)$ edges, or fewer than $\frac{28}{13} n - \mathcal{O}(1)$ edges with a fixed rotation system. Furthermore, we prove that a maximal 1-planar rotation system of a graph uniquely determines its 1-planar embedding.
    Proceedings of the 20th international conference on Graph Drawing; 09/2012
  • Source
    ABSTRACT: Force-directed layout algorithms produce graph drawings by resolving a system of emulated physical forces. We present techniques for using social gravity as an additional force in force-directed layouts, together with a scaling technique, to produce drawings of trees and forests, as well as more complex social networks. Social gravity assigns mass to vertices in proportion to their network centrality, which allows vertices that are more graph-theoretically central to be visualized in physically central locations. Scaling varies the gravitational force throughout the simulation, and reduces crossings relative to unscaled gravity. In addition to providing this algorithmic framework, we apply our algorithms to social networks produced by Mark Lombardi, and we show how social gravity can be incorporated into force-directed Lombardi-style drawings.
    09/2012;
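    A toy force-directed loop illustrating both ideas, with degree standing in for network centrality and all constants chosen arbitrarily; the paper also uses closeness and betweenness centrality as the mass function.

      import math, random

      def social_gravity_layout(adj, iters=300):
          """Fruchterman-Reingold-style forces plus "social gravity": each
          vertex is pulled toward the centroid with strength proportional to
          its (degree) centrality, and gravity is scaled up over the run."""
          k = math.sqrt(1.0 / len(adj))                 # ideal edge length
          pos = {v: [random.random(), random.random()] for v in adj}
          deg = {v: len(adj[v]) for v in adj}
          maxdeg = max(deg.values()) or 1
          for step in range(iters):
              g = 0.5 * step / iters                    # gravity grows over time
              disp = {v: [0.0, 0.0] for v in adj}
              for v in adj:                             # pairwise repulsion
                  for u in adj:
                      if u == v:
                          continue
                      dx = pos[v][0] - pos[u][0]; dy = pos[v][1] - pos[u][1]
                      d = math.hypot(dx, dy) or 1e-9
                      disp[v][0] += dx / d * k * k / d
                      disp[v][1] += dy / d * k * k / d
              for v in adj:                             # attraction along edges
                  for u in adj[v]:
                      dx = pos[v][0] - pos[u][0]; dy = pos[v][1] - pos[u][1]
                      d = math.hypot(dx, dy) or 1e-9
                      disp[v][0] -= dx / d * d * d / k
                      disp[v][1] -= dy / d * d * d / k
              cx = sum(p[0] for p in pos.values()) / len(pos)
              cy = sum(p[1] for p in pos.values()) / len(pos)
              for v in adj:                             # centrality-weighted gravity
                  m = deg[v] / maxdeg
                  disp[v][0] += g * m * (cx - pos[v][0])
                  disp[v][1] += g * m * (cy - pos[v][1])
                  d = math.hypot(*disp[v]) or 1e-9
                  s = min(0.1, d)                       # cap the step length
                  pos[v][0] += disp[v][0] / d * s
                  pos[v][1] += disp[v][1] / d * s
          return pos

      path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
      print(social_gravity_layout(path))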
  • Source
    Michael T. Goodrich, Olga Ohrimenko, Roberto Tamassia
    ABSTRACT: We study graph drawing in a cloud-computing context where data is stored externally and processed using a small local working storage. We show that a number of classic graph drawing algorithms can be efficiently implemented in such a framework where the client can maintain privacy while constructing a drawing of her graph.
    09/2012;
  • Source
    Michael T. Goodrich, Michael Mitzenmacher
    ABSTRACT: We study the question of how to shuffle $n$ cards when faced with an opponent who knows the initial position of all the cards and can track every card when permuted, except when one takes $K < n$ cards at a time and shuffles them in a private buffer "behind your back," which we call buffer shuffling. The problem arises naturally in the context of parallel mixnet servers as well as other security applications. Our analysis is based on related analyses of load-balancing processes. We include extensions to variations that involve corrupted servers and adversarially injected messages, which correspond to an opponent who can peek at some shuffles in the buffer and who can mark some number of the cards. In addition, our analysis makes novel use of a sum-of-squares metric for anonymity, which leads to improved performance bounds for parallel mixnets and can also be used to bound well-known existing anonymity measures.
    05/2012;
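    A toy model of buffer shuffling, assuming a streaming variant in which the shuffler always emits a uniformly random card from its size-$K$ private buffer; the pass count is a demo value, and how quickly such a process mixes is exactly the kind of question the paper analyzes.

      import random

      def buffer_shuffle(cards, K, passes=4, seed=0):
          """Stream the deck through a size-K buffer the adversary cannot see,
          always emitting a uniformly random buffered card; repeat for several
          passes, since a single pass gives only partial mixing."""
          rng = random.Random(seed)
          cards = list(cards)
          for _ in range(passes):
              buf, out = [], []
              for c in cards:
                  buf.append(c)
                  if len(buf) == K:
                      out.append(buf.pop(rng.randrange(K)))
              while buf:                       # drain the buffer at pass end
                  out.append(buf.pop(rng.randrange(len(buf))))
              cards = out
          return cards

      print(buffer_shuffle(range(10), K=3))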
  • Source
    ABSTRACT: Searching accounts for one of the most frequently performed computations over the Internet as well as one of the most important applications of outsourced computing, producing results that critically affect users' decision-making behaviors. As such, verifying the integrity of Internet-based searches over vast amounts of web contents is essential. We provide the first solution to this general security problem. We introduce the concept of an authenticated web crawler and present the design and prototype implementation of this new concept. An authenticated web crawler is a trusted program that computes a special "signature" $s$ of a collection of web contents it visits. Subject to this signature, web searches can be verified to be correct with respect to the integrity of their produced results. This signature also allows the verification of complicated queries on web pages, such as conjunctive keyword searches. In our solution, along with the web pages that satisfy any given search query, the search engine also returns a cryptographic proof. This proof, together with the signature $s$, enables any user to efficiently verify that no legitimate web pages are omitted from the result computed by the search engine, and that no pages that are non-conforming with the query are included in the result. An important property of our solution is that the proof size and the verification time both depend solely on the sizes of the query description and the query result, but not on the number or sizes of the web pages over which the search is performed. Our authentication protocols are based on standard Merkle trees and the more involved bilinear-map accumulators. As we experimentally demonstrate, the prototype implementation of our system gives a low communication overhead between the search engine and the user, and allows for fast verification of the returned results on the user side.
    04/2012;
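    A minimal sketch of the Merkle-tree half of such a construction; the bilinear-map accumulators, which are what make proof size depend only on the query description and result, are not modeled here.

      import hashlib

      def H(data: bytes) -> bytes:
          return hashlib.sha256(data).digest()

      def merkle_root(leaves):
          """Hash leaves pairwise up to a single root, duplicating the last
          hash at odd-sized levels."""
          level = [H(x) for x in leaves]
          while len(level) > 1:
              if len(level) % 2:
                  level.append(level[-1])
              level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
          return level[0]

      def merkle_proof(leaves, idx):
          """Sibling hashes (with side flags) from leaf idx up to the root."""
          level = [H(x) for x in leaves]
          proof = []
          while len(level) > 1:
              if len(level) % 2:
                  level.append(level[-1])
              sib = idx ^ 1
              proof.append((level[sib], sib < idx))
              level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
              idx //= 2
          return proof

      def merkle_verify(leaf, proof, root):
          h = H(leaf)
          for sib, sib_is_left in proof:
              h = H(sib + h) if sib_is_left else H(h + sib)
          return h == root

      pages = [b"page0", b"page1", b"page2", b"page3", b"page4"]
      root = merkle_root(pages)
      proof = merkle_proof(pages, 2)
      print(merkle_verify(b"page2", proof, root))   # True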
  • ABSTRACT: We study oblivious storage (OS), a natural way to model privacy-preserving data outsourcing, where a client, Alice, stores sensitive data at an honest-but-curious server, Bob. We show that Alice can hide both the content of her data and the pattern in which she accesses her data, with high probability, using a method that achieves $O(1)$ amortized rounds of communication between her and Bob for each data access. We assume that Alice and Bob exchange small messages, of size $O(N^{1/c})$, for some constant $c \ge 2$, in a single round, where $N$ is the size of the data set that Alice is storing with Bob. We also assume that Alice has a private memory of size $2N^{1/c}$. These assumptions model real-world cloud storage scenarios, where trade-offs occur between latency, bandwidth, and the size of the client's private memory.
    01/2012;

Publication Stats

5k Citations
38.67 Total Impact Points

Institutions

  • 2000–2014
    • University of California, Irvine
      • Department of Computer Science
      • Secure Computing and Networking Center (SCONCE)
      Irvine, California, United States
    • University of Waterloo
      Waterloo, Ontario, Canada
  • 2008
    • UC Irvine Health
      Santa Ana, California, United States
  • 1988–2006
    • Johns Hopkins University
      • Department of Computer Science
      Baltimore, MD, United States
  • 1999
    • Technion - Israel Institute of Technology
Haifa, Haifa District, Israel
  • 1997
    • The University of Memphis
      • Department of Mathematical Sciences
      Memphis, Tennessee, United States
  • 1994
    • Texas A&M University
      • Department of Computer Science and Engineering
      College Station, TX, United States
  • 1986–1990
    • Purdue University
      • Department of Computer Science
      West Lafayette, IN, United States
  • 1970
    • Stanford University
      • Department of Computer Science
      Palo Alto, CA, United States