John Brevik

University of California, Santa Barbara, Santa Barbara, CA, United States

Publications (42) · 5.95 Total Impact

  • Source
    John Brevik, Scott Nollet
    ABSTRACT: For the completion B of a local geometric normal domain, V. Srinivas asked which subgroups of Cl B arise as the image of the map from Cl A to Cl B on class groups as A varies among normal geometric domains with B isomorphic to the completion of A. For two dimensional rational double point singularities we show that all subgroups arise in this way. We also show that in any dimension, every normal hypersurface singularity has completion isomorphic to that of a geometric UFD. Our methods are global, applying Noether-Lefschetz theory to linear systems with non-reduced base loci.
    03/2014;
  • Source
    John Brevik, Scott Nollet
    ABSTRACT: We study the fixed singularities imposed on members of a linear system of surfaces in P^3_C by its base locus Z. For a 1-dimensional subscheme Z \subset P^3 with finitely many points p_i of embedding dimension three and d >> 0, we determine the nature of the singularities p_i \in S for general S in |H^0 (P^3, I_Z (d))| and give a method to compute the kernel of the restriction map Cl S \to Cl O_{S,p_i}. One tool developed is an algorithm to identify the type of an A_n singularity via its local equation. We illustrate the method for representative Z and use Noether-Lefschetz theory to compute Pic S.
    08/2012;
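    As a point of reference for the A_n singularities discussed in the entry above, the standard local model and its completed local class group are shown below; this is a classical fact about rational double points, not a statement of the paper's algorithm.

    ```latex
    % A_n rational double point: germ at the origin in C^3 (classical model)
    \[
      A_n:\qquad xy - z^{\,n+1} = 0,
      \qquad
      \operatorname{Cl}\bigl(\widehat{\mathcal{O}}_{S,p}\bigr) \;\cong\; \mathbb{Z}/(n+1)\mathbb{Z},
    \]
    % generated, for example, by the class of the Weil divisor \{x = z = 0\}.
    ```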
  • Source
    John Brevik, Scott Nollet
    ABSTRACT: We use our extension of the Noether-Lefschetz theorem to describe generators of the class groups at the local rings of singularities of very general hypersurfaces containing a fixed base locus. We give several applications, including (1) every subgroup of the class group of the completed local ring of a rational double point arises as the class group of such a singularity on a surface in complex projective 3-space and (2) every complete local ring arising from a normal hypersurface singularity over the complex numbers is the completion of a unique factorization domain of essentially finite type over the complex numbers.
Proceedings of the Steklov Institute of Mathematics 10/2011; 267(1).
  •
    ABSTRACT: We examine various algebraic/combinatorial properties of Low-Density Parity-Check codes as predictors for the performance of the sum-product algorithm on the AWGN channel in the error floor region. We consider three families of check matrices, two algebraically constructed and one sampled from an ensemble, expurgated to remove short cycles. The three families have similar properties: all are (3,6)-regular, have girth 8, and have code length roughly 280. The best predictors are small trapping sets, and the predictive value is much higher for the algebraically constructed families than the random ones. I. INTRODUCTION As is well known, the performance of the Sum-Product Algorithm (SPA), measured in either bit-error rate or frame-error rate as a function of the signal-to-noise ratio, tends to have two regions: a "waterfall" portion, where the curve descends ever more steeply until it reaches the second region, the "error floor," where the curve flattens out considerably. Obtaining an accurate estimate of the error floor using simulation is costly, since decoding failure is so rare; therefore, there has been a great deal of effort to identify properties of the check matrix that are associated with decoding failure. This article compares algebraic and combinatorial characteristics of the check matrix to see how well each serves as a predictor for performance. II. COMBINATORIAL PROPERTIES USED AS PREDICTORS We consider binary codes defined by a check matrix H. The bipartite graph of H has a check node for each row r of H and a bit node for each column c of H, with an edge between r and c when H_{r,c} = 1. An early observation was that short cycles in the bipartite graph of the parity-check matrix are undesirable since inaccurate received values on such a cycle are self-reinforcing (14). Therefore, it is considered desirable that the bipartite graph have large girth. Our earlier results (9) showed that the girth was certainly not a definitive indicator of decoding performance; matrices constructed using the same algebraic method and with the same girth could have performance differing by orders of magnitude, while a code of girth 6 could outperform one of girth 8. It is still possible that the number of small cycles might be a determining factor for the error rate. In our study, all graphs have girth 8, so one predictor variable we consider is the number of 8-cycles in the graph.
    01/2011;
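    To make the bipartite-graph setup in the abstract above concrete, here is a minimal sketch (not the authors' code; the small check matrix is illustrative only) that builds the Tanner graph of a binary check matrix H and computes its girth by deleting each edge in turn and searching for the shortest alternative path:

    ```python
    import numpy as np
    from collections import deque

    def tanner_graph(H):
        """Adjacency lists of the Tanner graph of a binary check matrix H:
        check node r is vertex r, bit node c is vertex n_checks + c."""
        n_checks, n_bits = H.shape
        adj = [[] for _ in range(n_checks + n_bits)]
        for r in range(n_checks):
            for c in range(n_bits):
                if H[r, c]:
                    adj[r].append(n_checks + c)
                    adj[n_checks + c].append(r)
        return adj

    def girth(adj):
        """Shortest cycle length: for each edge (u, v), delete it and BFS for the
        shortest remaining u-v path; the cycle through (u, v) has that length + 1."""
        best = float("inf")
        for u in range(len(adj)):
            for v in adj[u]:
                if v < u:
                    continue  # consider each undirected edge once
                dist = {u: 0}
                queue = deque([u])
                while queue:
                    x = queue.popleft()
                    for y in adj[x]:
                        if {x, y} == {u, v}:
                            continue  # skip the deleted edge
                        if y not in dist:
                            dist[y] = dist[x] + 1
                            queue.append(y)
                if v in dist:
                    best = min(best, dist[v] + 1)
        return best

    # Toy 3x4 check matrix (not (3,6)-regular); its Tanner graph has girth 4.
    H = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1]])
    print(girth(tanner_graph(H)))  # -> 4
    ```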
  • Source
    D. Nurmi, R. Wolski, J. Brevik
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific, and often partially hidden, policy designed to maximize machine utilization while providing tolerable turnaround times. In practice, while most HPC systems experience good utilization levels, the amount of time experienced by individual jobs waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One method for dealing with this uncertainty that has been proposed is the ability to predict the amount of time that individual jobs will wait in batch queues once they are submitted, thus allowing a user to reason about the total time between job submission and job completion (which we term a job's "overall turnaround time"). Another related but distinct method for handling the uncertainty is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources, again allowing them to reason about job turnaround time. To date, however, few if any HPC centers provide either job-queue delay prediction services or advanced reservation capabilities to their general user populations. In this paper, we describe QBETS, VARQ, and CO-VARQ, new methods for allowing users to reason about and control the overall turnaround time of their batch-queue jobs submitted to busy HPC systems in existence today. QBETS is an online, non-parametric system for predicting statistical bounds on the amount of time individual batch jobs will wait in queue. VARQ is a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best effort batch schedulers and policies, and CO-VARQ utilizes this capability to implement a general coallocation service. QBETS, VARQ and CO-VARQ operate as overlays, requiring no modification to the local scheduler implementation or policies. We describe the statistical methods we use to implement the systems, detail empirical evaluations of their effectiveness in a number of HPC settings, and explore the potential future impact of these systems should they become widely used.
    IEEE Systems Journal 04/2009; · 1.75 Impact Factor
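    The statistical core of QBETS-style prediction is a non-parametric confidence bound on a quantile of the observed wait-time distribution. The sketch below shows the standard order-statistic / binomial argument on synthetic data; it illustrates the idea only and is not the authors' implementation.

    ```python
    import random
    from math import comb

    def quantile_upper_bound(samples, q=0.95, conf=0.95):
        """Non-parametric upper confidence bound on the q-quantile from i.i.d. samples:
        the k-th order statistic X_(k) is an upper bound on the q-quantile with
        probability P(Binomial(n, q) <= k - 1), so return the smallest order
        statistic whose coverage probability reaches the requested confidence."""
        xs = sorted(samples)
        n = len(xs)
        for k in range(1, n + 1):
            coverage = sum(comb(n, i) * q**i * (1 - q)**(n - i) for i in range(k))
            if coverage >= conf:
                return xs[k - 1]
        return None  # too few samples for this (q, conf) pair

    # Example: a 95%-confidence bound on the 0.95 quantile of simulated queue waits (seconds).
    random.seed(0)
    waits = [random.expovariate(1.0 / 3600.0) for _ in range(200)]
    print(round(quantile_upper_bound(waits)))
    ```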
  •
    ABSTRACT: Using stratified sampling, a desired confidence level and a specified margin of error can be achieved with a smaller sample size than under standard sampling. We apply stratified sampling to the simulation of the sum-product algorithm on a binary low-density parity-check code.
    Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, 18th International Symposium, AAECC-18 2009, Tarragona, Catalonia, Spain, June 8-12, 2009. Proceedings; 01/2009
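    A minimal illustration of the stratified-sampling estimator the abstract above refers to, on a toy rare-event problem; the strata, weights, and error probabilities are made up, whereas the paper applies the idea to sum-product decoding simulations.

    ```python
    import random

    def stratified_estimate(strata):
        """Stratified estimator of a mean: each stratum contributes its known
        probability weight w and a list of i.i.d. samples drawn inside that stratum.
        Returns (point estimate, variance of the estimate)."""
        est, var = 0.0, 0.0
        for w, xs in strata:
            n = len(xs)
            mean = sum(xs) / n
            s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
            est += w * mean
            var += w * w * s2 / n
        return est, var

    # Toy rare-event example: decoding-failure indicators sampled separately in a
    # "high-noise" stratum (weight 0.1) and a "low-noise" stratum (weight 0.9).
    random.seed(1)
    high = [1.0 if random.random() < 0.05 else 0.0 for _ in range(2000)]
    low = [1.0 if random.random() < 1e-4 else 0.0 for _ in range(200)]
    print(stratified_estimate([(0.1, high), (0.9, low)]))
    ```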
  •
    ABSTRACT: This article summarizes work in progress on theoretical analysis of the sum-product algorithm. Two families of graphs with quite different characteristics are studied: graphs in which all checks have degree two and graphs with a single cycle. Each family has a relatively simple structure that allows for precise mathematical results about the convergence of the sum-product algorithm.
    01/2009;
  • Source
    ABSTRACT: Conventional approaches to either information flow security or intrusion detection are not suited to detecting Trojans that steal information such as credit card numbers using advanced cryptovirological and inference channel techniques. We propose a technique based on repeated deterministic replays in a virtual machine to detect the theft of private information. We prove upper bounds on the average amount of information an attacker can steal without being detected, even if they are allowed an arbitrary distribution of visible output states. Our intrusion detection approach is more practical than traditional approaches to information flow security. We show that it is possible to, for example, bound the average amount of information an attacker can steal from a 53-bit credit card number to less than a bit by sampling only 11 of the 2^53 possible outputs visible to the attacker, using a two-pronged approach of hypothesis testing and information theory.
    Transactions on Computational Science. 01/2009; 4:244-262.
  • Source
    John Brevik, Scott Nollet
    ABSTRACT: We compute the class groups of very general normal surfaces in complex projective three-space containing an arbitrary base locus $Z$, thereby extending the classic Noether-Lefschetz theorem (the case when $Z$ is empty). Our method is an adaptation of Griffiths and Harris' degeneration proof, simplified by a cohomology and base change argument. We give applications to computing Picard groups, which generalize several known results.
    International Mathematics Research Notices 06/2008; · 1.12 Impact Factor
  •
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific policy designed to maximize machine utilization while providing tolerable turn-around times. To these users, the functioning of the batch scheduler and the policies it implements are both critical operating system components since they control how each job is serviced. In practice, while most HPC systems experience good utilization levels, the amount of time experienced by individual jobs waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One method for dealing with this uncertainty that has been proposed is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources. To date, however, few HPC centers provide an advanced reservation capability to their general user populations since previous research indicates that diminished machine utilization will occur if and when advanced reservations are introduced. In this work, we describe VARQ, a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best effort batch schedulers. VARQ functions as an overlay, submitting jobs that are indistinguishable from the normal workload serviced by a scheduler. We describe the statistical methods we use to implement VARQ, detail an empirical evaluation of its effectiveness in a number of HPC settings, and explore the potential future impact of VARQ should it become widely used. Without requiring HPC sites to support advanced reservations, we find that VARQ can implement a reservation capability probabilistically and that the effects of this probabilistic approach are unlikely to negatively affect resource utilization.
    Proceedings of the 17th International Symposium on High-Performance Distributed Computing (HPDC-17 2008), 23-27 June 2008, Boston, MA, USA; 01/2008
  • Source
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific, and often partially hidden, policy designed to maximize machine utilization while providing tolerable turn-around times. In practice, while most HPC systems experience good utilization levels, the amount of time experienced by individual jobs waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One method for dealing with this uncertainty that has been proposed is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources. To date, however, few if any HPC centers provide an advanced reservation capability to their general user populations for fear (supported by previous research) that diminished machine utilization will occur if and when advanced reservations are introduced. In this work, we describe VARQ, a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best effort batch schedulers and policies. VARQ functions as an overlay, submitting jobs that are indistinguishable from the normal workload serviced by a scheduler. We describe the statistical methods we use to implement VARQ, detail an empirical evaluation of its effectiveness in a number of HPC settings, and explore the potential future impact of VARQ should it become widely used. Without requiring HPC sites to support advanced reservations, we find that VARQ can implement a reservation capability probabilistically and that the effects of this probabilistic approach are unlikely to negatively affect resource utilization.
    Proceedings of the 13th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP 2008, Salt Lake City, UT, USA, February 20-23, 2008; 01/2008
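    The "virtual reservation" idea in the two VARQ entries above can be caricatured in a few lines: submit a job far enough ahead of the desired start time that, based on the empirical distribution of past waits, it will very likely have left the queue by then. The sketch below only illustrates that intuition with an empirical quantile; VARQ's actual statistical machinery and its guarantees are described in the papers.

    ```python
    def submit_lead_time(wait_samples, start_prob=0.95):
        """Lead time to submit before the desired start time so that, empirically,
        the job has probability >= start_prob of having left the queue by then:
        simply the start_prob empirical quantile of past observed waits."""
        xs = sorted(wait_samples)
        k = min(len(xs) - 1, int(start_prob * len(xs)))
        return xs[k]

    # Hypothetical history of observed queue waits, in seconds.
    history = [120, 300, 450, 600, 900, 1200, 1800, 2400, 3600, 5400,
               7200, 9000, 10800, 12600, 14400, 18000, 21600, 28800, 36000, 43200]
    print(submit_lead_time(history), "seconds of lead time for a 95% empirical start probability")
    ```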
  • Source
    A. Mutz, R. Wolski, J. Brevik
    ABSTRACT: Markets and auctions have been proposed as mechanisms for efficiently and fairly allocating resources in a number of different computational settings. Economic approaches to resource allocation in batch-controlled systems, however, have proved difficult due to the fact that, unlike reservation systems, every resource allocation decision made by the scheduler affects the turnaround time of all jobs in the queue. Economists refer to this characteristic as an "externality", where a transaction affects more than just the immediate resource consumer and producer. The problem is particularly acute for computational grid systems where organizations wish to engage in service-level agreements but are not at liberty to abandon completely the use of space-sharing and batch scheduling as the local control policies. Grid administrators desire the ability to make these agreements based on anticipated user demand, but eliciting truthful reportage of job importance and priority has proved difficult due to the externalities present when resources are batch controlled. In this paper we propose and evaluate the application of the Expected Externality Mechanism as an approach to solving this problem that is based on economic principles. In particular, this mechanism provides incentives for users to reveal information honestly about job importance and priority in an environment where batch-scheduler resource allocation decisions introduce "externalities" that affect all users. Our tests indicate that the mechanism meets its theoretical predictions in practice and can be implemented in a computationally tractable manner.
    2007 8th IEEE/ACM International Conference on Grid Computing; 10/2007
  • Source
    ABSTRACT: Today, internet researchers, engineers, and application writers have at their disposal a number of methods for measuring end-to-end internet performance. Additionally, many wide-area applications make heavy use of measurement techniques to optimize their performance. Despite this, there is no widely accepted method for determining if two tools or techniques produce equivalent results, or if feedback from a tool is relevant to the application that employs it. In this paper, we apply current technologies in time series databases and network performance modeling to the problem of comparing network bandwidth time series. Using these techniques, we present a methodology to evaluate the level of similarity between two time series.
    01/2007;
  • Source
    ABSTRACT: Most space-sharing parallel computers presently operated by production high-performance computing centers use batch-queuing systems to manage processor allocation. In many cases, users wishing to use these batch-queued resources may choose among different queues (charging different amounts) potentially on a number of machines to which they have access. In such a situation, the amount of time a user's job will wait in any one batch queue can be a significant portion of the overall time from job submission to job completion. It thus becomes desirable to provide a prediction for the amount of time a given job can expect to wait in the queue. Further, it is natural to expect that attributes of an incoming job, specifically the number of processors requested and the amount of time requested, might impact that job's wait time. In this work, we explore the possibility of generating accurate predictions by automatically grouping jobs having similar attributes using model-based clustering. Moreover, we implement this clustering technique for a time series of jobs so that predictions of future wait times can be generated in real time. Using trace-based simulation on data from 7 machines over a 9-year period from across the country, comprising over one million job records, we show that clustering either by requested time, requested number of processors, or the product of the two generally produces more accurate predictions than earlier, more naive, approaches and that automatic clustering outperforms administrator-determined clustering.
    Parallel Processing Letters. 01/2007; 17:21-46.
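    As an illustration of the idea of grouping jobs by their requested attributes before predicting per-group wait times, here is a hedged sketch that fits a Gaussian mixture to log requested runtime on synthetic job records and reports an empirical wait-time quantile per cluster; the paper's model-based clustering and prediction pipeline differ in the details.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic job records: requested runtime (s) and the wait (s) each job experienced.
    rng = np.random.default_rng(0)
    req = np.exp(rng.uniform(np.log(60.0), np.log(86400.0), size=500))
    wait = req * rng.lognormal(0.0, 1.0, size=500)  # waits loosely tied to request size

    # Cluster jobs by log requested runtime, then summarize waits within each cluster.
    features = np.log(req).reshape(-1, 1)
    gm = GaussianMixture(n_components=3, random_state=0).fit(features)
    labels = gm.predict(features)
    for k in range(3):
        w = np.sort(wait[labels == k])
        if len(w):
            print(f"cluster {k}: {len(w)} jobs, empirical 0.95-quantile wait ~ {w[int(0.95 * len(w))]:.0f} s")
    ```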
  • Source
    Job Scheduling Strategies for Parallel Processing, 13th International Workshop, JSSPP 2007, Seattle, WA, USA, June 17, 2007. Revised Papers; 01/2007
  • Source
    ABSTRACT: In this article, we investigate the dynamics exhibited by the production Condor pool at the University of Wisconsin with the goal of understanding its distributional properties. Condor is a cycle-harvesting service originally designed to launch and control "guest" user jobs (in batch mode) on idle workstations. Since its inception in 1985, however, it has expanded to include the ability to run in dedicated mode on clusters, to "glide in" to systems that are not strictly dedicated to Condor, and to "flock" jobs from one site to another based on pre-determined service level agreements (SLAs). Thus it has developed from an enterprise-wide desktop system into a full-fledged global computing infrastructure over its lifetime.
    21st International Parallel and Distributed Processing Symposium (IPDPS 2007), Proceedings, 26-30 March 2007, Long Beach, California, USA; 01/2007
  • Source
    J. Brevik, D. Nurmi, R. Wolski
    ABSTRACT: Most space-sharing resources presently operated by high performance computing centers employ some sort of batch queueing system to manage resource allocation to multiple users. In this work, we explore a new method for providing end-users with predictions of the bounds on queuing delay individual jobs will experience when waiting to be scheduled to a machine partition. We evaluate this method using scheduler logs that cover a 10-year period from 10 large HPC systems. Our results show that it is possible to predict delay bounds with specified confidence levels for jobs in different queues, and for jobs requesting different ranges of processor counts.
    IEEE Workload Characterization Symposium. 10/2006;
  • Source
    ABSTRACT: Large-scale distributed systems offer computational power at unprecedented levels. In the past, HPC users typically had access to relatively few individual supercomputers and, in general, would assign a one-to-one mapping of applications to machines. Modern HPC users have simultaneous access to a large number of individual machines and are beginning to make use of all of them for single-application execution cycles. One method that application developers have devised in order to take advantage of such systems is to organize an entire application execution cycle as a workflow. The scheduling of such workflows has been the topic of a great deal of research in the past few years and, although very sophisticated algorithms have been devised, a very specific aspect of these distributed systems, namely that most supercomputing resources employ batch queue scheduling software, has heretofore been omitted from consideration, presumably because it is difficult to model accurately. In this work, we augment an existing workflow scheduler through the introduction of methods which make accurate predictions of both the performance of the application on specific hardware, and the amount of time individual workflow tasks will spend waiting in batch queues. Our results show that although a workflow scheduler alone may choose correct task placement based on data locality or network connectivity, this benefit is often compromised by the fact that most jobs submitted to current systems must wait in overcommitted batch queues for a significant portion of time. However, incorporating the enhancements we describe improves workflow execution time in settings where batch queues impose significant delays on constituent workflow tasks.
    01/2006;
  • Source
    ABSTRACT: Large-scale distributed systems offer computational power at unprecedented levels. In the past, HPC users typically had access to relatively few individual supercomputers and, in general, would assign a one-to-one mapping of applications to machines. Modern HPC users have simultaneous access to a large number of individual machines and are beginning to make use of all of them for single-application execution cycles. One method that application developers have devised in order to take advantage of such systems is to organize an entire application execution cycle as a workflow. The scheduling of such workflows has been the topic of a great deal of research in the past few years and, although very sophisticated algorithms have been devised, a very specific aspect of these distributed systems, namely that most supercomputing resources employ batch queue scheduling software, has heretofore been omitted from consideration, presumably because it is difficult to model accurately. In this work, we augment an existing workflow scheduler through the introduction of methods which make accurate predictions of both the performance of the application on specific hardware, and the amount of time individual workflow tasks will spend waiting in batch queues. Our results show that although a workflow scheduler alone may choose correct task placement based on data locality or network connectivity, this benefit is often compromised by the fact that most jobs submitted to current systems must wait in overcommitted batch queues for a significant portion of time. However, incorporating the enhancements we describe improves workflow execution time in settings where batch queues impose significant delays on constituent workflow tasks.
    Proceedings of the ACM/IEEE SC2006 Conference on High Performance Networking and Computing, November 11-17, 2006, Tampa, FL, USA; 01/2006
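    In its simplest form, the placement decision described in the two entries above amounts to minimizing predicted queue wait plus predicted runtime over the candidate resources; the few lines below sketch that selection rule with hypothetical site names and numbers, not the scheduler from the paper.

    ```python
    def best_site(predictions):
        """Pick the resource with the smallest predicted turnaround time,
        i.e. predicted queue wait + predicted runtime (both in seconds)."""
        return min(predictions, key=lambda site: sum(predictions[site]))

    # Hypothetical per-site predictions: (queue wait, runtime) in seconds.
    sites = {"cluster_a": (7200, 3600), "cluster_b": (600, 5400), "cluster_c": (60, 9000)}
    print(best_site(sites))  # -> "cluster_b" (6000 s total)
    ```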
  • Source
    ABSTRACT: Most space-sharing parallel computers presently operated by high-performance computing centers use batch-queuing systems to manage processor allocation. In many cases, users wishing to use these batch-queued resources have accounts at multiple sites and have the option of choosing at which site or sites to submit a parallel job. In such a situation, the amount of time a user's job will wait in any one batch queue can significantly impact the overall time a user waits from job submission to job completion. In this work, we explore a new method for providing end-users with predictions for the bounds on the queuing delay individual jobs will experience. We evaluate this method using batch scheduler logs for distributed-memory parallel machines that cover a 9-year period at 7 large HPC centers. Our results show that it is possible to predict delay bounds reliably for jobs in different queues, and for jobs requesting different ranges of processor counts. Using this information, scientific application developers can intelligently decide where to submit their parallel codes in order to minimize overall turnaround time.
    Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPOPP 2006, New York, New York, USA, March 29-31, 2006; 01/2006

Publication Stats

1k Citations
5.95 Total Impact Points

Institutions

  • 2003–2009
    • University of California, Santa Barbara
      • Department of Computer Science
      Santa Barbara, CA, United States
    • Wheaton College
      Norton, Massachusetts, United States
  • 1970–2009
    • California State University, Long Beach
      • Department of Mathematics & Statistics
      Long Beach, California, United States
  • 2004
    • Wheaton College
      • Department of Mathematics and Computer Science
      Wheaton, IL, United States
  • 2001
    • College of the Holy Cross
      Worcester, Massachusetts, United States
    • The University of Tennessee Medical Center at Knoxville
      Knoxville, Tennessee, United States
  • 1999
    • University of California, Berkeley
      Berkeley, California, United States