John Brevik

California State University, Long Beach, Long Beach, California, United States

Publications (50)

  • ABSTRACT: This paper examines the use of partial least squares regression to predict glycemic variability in subjects with Type 1 Diabetes Mellitus using measurements from continuous glucose monitoring devices and consumer-grade activity monitoring devices. It illustrates a methodology for generating automated predictions from current and historical data and shows that activity monitoring can improve prediction accuracy substantially. (A minimal sketch of a PLS-regression pipeline of this kind appears after this publication list.)
    Chapter · Oct 2016
  • John Brevik · Scott Nollet
    ABSTRACT: Let A be the local ring at a point of a normal complex variety with completion R. Srinivas has asked about the possible images of the induced map from Cl A to Cl R over all geometric normal domains A with fixed completion R. We use Noether-Lefschetz theory to prove that all finitely generated subgroups are possible in some familiar cases. As a byproduct we show that every finitely generated abelian group appears as the class group of the local ring at the vertex of a cone over some smooth complex variety of each positive dimension.
    Article · Jun 2016
  • Source
    John Brevik · Scott Nollet
    ABSTRACT: We compute the divisor class group of the general hypersurface Y of a complex projective normal variety X of dimension at least four containing a fixed base locus Z. We deduce that completions of normal local complete intersection domains of finite type over ℂ of dimension ≥ 4 are completions of UFDs of finite type over ℂ.
    Full-text Article · Jan 2016 · Israel Journal of Mathematics
  • Source
    John O. Brevik · Michael E. O'Sullivan
    ABSTRACT: The sum-product algorithm for decoding of binary codes is analyzed for bipartite graphs in which the check nodes all have degree 2. The algorithm simplifies dramatically and may be expressed using linear algebra. Exact results about the convergence of the algorithm are derived and applied to trapping sets. (The standard message-passing formulas that make degree-2 checks linear are recalled after this publication list.)
    Full-text Article · Nov 2014
  • Richard Wolski · John Brevik
    ABSTRACT: Cloud computing has become a popular metaphor for dynamic and secure self-service access to computational and storage capabilities. In this study, we analyze and model workloads gathered from enterprise-operated commercial private clouds that implement “Infrastructure as a Service.” Our results show that 3-phase hyperexponential distributions, fit using the Expectation-Maximization (EM) algorithm, capture workload attributes accurately. In addition, these models of individual attributes compose to produce estimates of overall cloud performance that our results verify to be accurate. As an early study of commercial enterprise private clouds, this work provides guidance to those researching, designing, or maintaining such installations. In particular, the cloud workloads under study do not exhibit “heavy-tailed” distributional properties in the same way that “bare metal” operating systems do, potentially leading to different design and engineering tradeoffs. (A generic EM fit of a hyperexponential mixture is sketched after this publication list.)
    Article · Oct 2014 · IEEE Transactions on Services Computing
  • Source
    John Brevik · Scott Nollet
    ABSTRACT: For the completion B of a local geometric normal domain, V. Srinivas asked which subgroups of Cl B arise as the image of the map from Cl A to Cl B on class groups as A varies among normal geometric domains with B isomorphic to the completion of A. For two-dimensional rational double point singularities we show that all subgroups arise in this way. We also show that in any dimension, every normal hypersurface singularity has completion isomorphic to that of a geometric UFD. Our methods are global, applying Noether-Lefschetz theory to linear systems with non-reduced base loci.
    Full-text Article · Mar 2014 · The Michigan Mathematical Journal
  • John Brevik · Scott Nollet
    Article · Jan 2014
  • John Brevik · Scott Nollet
    ABSTRACT: We study the fixed singularities imposed on members of a linear system of surfaces in ℙ³ over ℂ by its base locus Z. For a one-dimensional subscheme Z ⊂ ℙ³ with finitely many points p_i of embedding dimension three and d ≫ 0, we determine the nature of the singularities p_i ∈ S for general S in |H⁰(ℙ³, I_Z(d))| and give a method to compute the kernel of the restriction map Cl S → Cl O_{S,p_i}. One tool developed is an algorithm to identify the type of an A_n singularity via its local equation (the standard A_n normal form is recalled after this publication list). We illustrate the method for representative Z and use Noether-Lefschetz theory to compute Pic S.
    Article · Aug 2012
  • Source
    John Brevik · Scott Nollet
    ABSTRACT: We use our extension of the Noether-Lefschetz theorem to describe generators of the class groups at the local rings of singularities of very general hypersurfaces containing a fixed base locus. We give several applications, including (1) every subgroup of the class group of the completed local ring of a rational double point arises as the class group of such a singularity on a surface in complex projective 3-space and (2) every complete local ring arising from a normal hypersurface singularity over the complex numbers is the completion of a unique factorization domain of essentially finite type over the complex numbers.
    Full-text Article · Oct 2011
  • ABSTRACT: We examine various algebraic/combinatorial properties of Low-Density Parity-Check codes as predictors for the performance of the sum-product algorithm on the AWGN channel in the error floor region. We consider three families of check matrices, two algebraically constructed and one sampled from an ensemble, expurgated to remove short cycles. The three families have similar properties: all are (3, 6)-regular, have girth 8, and have code length roughly 280. The best predictors are small trapping sets, and the predictive value is much higher for the algebraically constructed families than for the random one.
    Article · May 2011
  • Daniel Nurmi · Rich Wolski · John Brevik
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific, and often partially hidden, policy designed to maximize machine utilization while providing tolerable turnaround times. In practice, while most HPC systems experience good utilization levels, the amount of time individual jobs spend waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One proposed method for dealing with this uncertainty is to predict the amount of time that individual jobs will wait in batch queues once they are submitted, thus allowing a user to reason about the total time between job submission and job completion (which we term a job's "overall turnaround time"). Another related but distinct method for handling the uncertainty is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources, again allowing them to reason about job turnaround time. To date, however, few if any HPC centers provide either job-queue delay prediction services or advanced reservation capabilities to their general user populations. In this paper, we describe QBETS, VARQ, and CO-VARQ, new methods for allowing users to reason about and control the overall turnaround time of their batch-queue jobs submitted to busy HPC systems in existence today. QBETS is an online, non-parametric system for predicting statistical bounds on the amount of time individual batch jobs will wait in queue. VARQ is a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best-effort batch schedulers and policies, and CO-VARQ utilizes this capability to implement a general co-allocation service. QBETS, VARQ, and CO-VARQ operate as overlays, requiring no modification to the local scheduler implementation or policies. We describe the statistical methods we use to implement the systems, detail empirical evaluations of their effectiveness in a number of HPC settings, and explore the potential future impact of these systems should they become widely used. (A minimal sketch of the binomial quantile-bound idea behind this kind of queue-wait prediction appears after this publication list.)
    Article · Apr 2009 · IEEE Systems Journal
  • ABSTRACT: This article summarizes work in progress on theoretical analysis of the sum-product algorithm. Two families of graphs with quite different characteristics are studied: graphs in which all checks have degree two and graphs with a single cycle. Each family has a relatively simple structure that allows for precise mathematical results about the convergence of the sum-product algorithm.
    Article · Feb 2009
  • Source
    Jedidiah R. Crandall · John Brevik · Shaozhi Ye · [...] · Frederic T. Chong
    ABSTRACT: Conventional approaches to either information flow security or intrusion detection are not suited to detecting Trojans that steal information such as credit card numbers using advanced cryptovirological and inference channel techniques. We propose a technique based on repeated deterministic replays in a virtual machine to detect the theft of private information. We prove upper bounds on the average amount of information an attacker can steal without being detected, even if they are allowed an arbitrary distribution of visible output states. Our intrusion detection approach is more practical than traditional approaches to information flow security. We show that it is possible to, for example, bound the average amount of information an attacker can steal from a 53-bit credit card number to less than a bit by sampling only 11 of the 2^53 possible outputs visible to the attacker, using a two-pronged approach of hypothesis testing and information theory.
    Full-text Article · Jan 2009
  • ABSTRACT: Using stratified sampling, a desired confidence level and a specified margin of error can be achieved with a smaller sample size than under standard sampling. We apply stratified sampling to the simulation of the sum-product algorithm on a binary low-density parity-check code. (A generic illustration of the variance reduction from stratification appears after this publication list.)
    Conference Paper · Jan 2009
  • Source
    John Brevik · Scott Nollet
    ABSTRACT: For an arbitrary curve Z ⊂ ℙ³ (possibly reducible, non-reduced, of mixed dimension) lying on a normal surface, the general surface S of high degree containing Z is also normal but often singular. We compute the class groups of the very general such surface, thereby extending the Noether-Lefschetz theorem (the special case when Z is empty). Our method is an adaptation of Griffiths and Harris' degeneration proof, simplified by a cohomology and base change argument. We give applications to computing Picard groups. Dedicated to Robin Hartshorne on his 70th birthday.
    Full-text Article · Jun 2008 · International Mathematics Research Notices
  • Daniel Nurmi · Richard Wolski · John Brevik
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific policy designed to maximize machine utilization while providing tolerable turn-around times. To these users, the functioning of the batch scheduler and the policies it implements are both critical operating system components since they control how each job is serviced. In practice, while most HPC systems experience good utilization levels, the amount of time experienced by individual jobs waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One method for dealing with this uncertainty that has been proposed is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources. To date, however, few HPC centers provide an advanced reservation capability to their general user populations since previous research indicates that diminished machine utilization will occur if and when advanced reservations are introduced. In this work, we describe VARQ, a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best effort batch schedulers. VARQ functions as an overlay, submitting jobs that are indistinguishable from the normal workload serviced by a scheduler. We describe the statistical methods we use to implement VARQ, detail an empirical evaluation of its effectiveness in a number of HPC settings, and explore the potential future impact of VARQ should it become widely used. Without requiring HPC sites to support advanced reservations, we find that VARQ can implement a reservation capability probabilistically and that the effects of this probabilistic approach are unlikely to negatively affect resource utilization.
    Conference Paper · Jan 2008
  • Daniel Nurmi · Richard Wolski · John Brevik
    ABSTRACT: In high-performance computing (HPC) settings, in which multiprocessor machines are shared among users with potentially competing resource demands, processors are allocated to user workload using space sharing. Typically, users interact with a given machine by submitting their jobs to a centralized batch scheduler that implements a site-specific, and often partially hidden, policy designed to maximize machine utilization while providing tolerable turn-around times. In practice, while most HPC systems experience good utilization levels, the amount of time experienced by individual jobs waiting to begin execution has been shown to be highly variable and difficult to predict, leading to user confusion and/or frustration. One method for dealing with this uncertainty that has been proposed is to allow users who are willing to plan ahead to make "advanced reservations" for processor resources. To date, however, few if any HPC centers provide an advanced reservation capability to their general user populations for fear (supported by previous research) that diminished machine utilization will occur if and when advanced reservations are introduced. In this work, we describe VARQ, a new method for job scheduling that provides users with probabilistic "virtual" advanced reservations using only existing best effort batch schedulers and policies. VARQ functions as an overlay, submitting jobs that are indistinguishable from the normal workload serviced by a scheduler. We describe the statistical methods we use to implement VARQ, detail an empirical evaluation of its effectiveness in a number of HPC settings, and explore the potential future impact of VARQ should it become widely used. Without requiring HPC sites to support advanced reservations, we find that VARQ can implement a reservation capability probabilistically and that the effects of this probabilistic approach are unlikely to negatively affect resource utilization.
    Conference Paper · Jan 2008
  • Andrew Mutz · Rich Wolski · John Brevik
    ABSTRACT: Markets and auctions have been proposed as mechanisms for efficiently and fairly allocating resources in a number of different computational settings. Economic approaches to resource allocation in batch-controlled systems, however, have proved difficult due to the fact that, unlike reservation systems, every resource allocation decision made by the scheduler affects the turnaround time of all jobs in the queue. Economists refer to this characteristic as an "externality", where a transaction affects more than just the immediate resource consumer and producer. The problem is particularly acute for computational grid systems where organizations wish to engage in service-level agreements but are not at liberty to abandon completely the use of space-sharing and batch scheduling as the local control policies. Grid administrators desire the ability to make these agreements based on anticipated user demand, but eliciting truthful reportage of job importance and priority has proved difficult due to the externalities present when resources are batch controlled. In this paper we propose and evaluate the application of the Expected Externality Mechanism as an approach to solving this problem that is based on economic principles. In particular, this mechanism provides incentives for users to reveal information honestly about job importance and priority in an environment where batch-scheduler resource allocation decisions introduce "externalities" that affect all users. Our tests indicate that the mechanism meets its theoretical predictions in practice and can be implemented in a computationally tractable manner.
    Conference Paper · Oct 2007
  • M.E. O’Sullivan · J. Brevik · R. Wolski
    Article · Jul 2007
  • Article: QBETS
    Daniel Charles Nurmi · John Brevik · Rich Wolski
    Article · Jun 2007 · ACM SIGMETRICS Performance Evaluation Review
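
The first entry in the list above predicts glycemic variability with partial least squares (PLS) regression on continuous-glucose-monitor and activity-monitor features. The following is a minimal sketch of that kind of pipeline, assuming scikit-learn; the synthetic arrays, feature layout, and component count are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a PLS-regression prediction pipeline, in the spirit of the
# glycemic-variability chapter above. All data here is synthetic; in the real
# setting the rows would summarize windows of CGM readings and activity counts.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_features = 500, 12                # hypothetical windowed features
X = rng.normal(size=(n_samples, n_features))
y = X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n_samples)  # synthetic "variability" target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=4)            # component count is a tuning choice
pls.fit(X_train, y_train)
print("held-out R^2:", pls.score(X_test, y_test))
```

In practice the number of latent components would be chosen by cross-validation against held-out glucose data rather than fixed in advance.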
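
Two entries above analyze the sum-product decoder on Tanner graphs whose check nodes all have degree 2. The standard log-likelihood-ratio message-passing formulas below show why that case linearizes; the papers' specific linear-algebraic formulation and convergence results are not reproduced here.

```latex
% Check-to-variable update at a degree-2 check c with neighbors v_1, v_2
% (standard sum-product formula, specialized to a single "other" neighbor):
\[
  m_{c \to v_2}
    = 2\,\operatorname{artanh}\!\Bigl(\tanh \tfrac{m_{v_1 \to c}}{2}\Bigr)
    = m_{v_1 \to c}.
\]
% Variable-to-check update at v with channel LLR \ell_v:
\[
  m_{v \to c} = \ell_v + \sum_{\substack{c' \ni v \\ c' \neq c}} m_{c' \to v}.
\]
% Degree-2 checks therefore simply forward messages, so every update is affine
% in the message vector and one iteration is a fixed linear map applied to the
% messages plus the channel LLRs.
```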
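
The private-cloud workload study above fits 3-phase hyperexponential distributions with the EM algorithm. A hyperexponential is a probability-weighted mixture of exponentials, so the EM updates are short enough to sketch. The code below is a generic fit on synthetic data, not the authors' implementation; fit_hyperexponential is a hypothetical helper name.

```python
# Generic EM fit of a k-phase hyperexponential (mixture of exponentials).
# Synthetic data only; not the authors' code, data, or initialization scheme.
import numpy as np

def fit_hyperexponential(x, k=3, iters=200):
    x = np.asarray(x, dtype=float)
    weights = np.full(k, 1.0 / k)
    rates = 1.0 / np.quantile(x, np.linspace(0.2, 0.8, k))    # rough initialization
    for _ in range(iters):
        # E-step: responsibility of phase j for observation i
        dens = weights * rates * np.exp(-np.outer(x, rates))  # shape (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights and exponential rates
        nj = resp.sum(axis=0)
        weights = nj / len(x)
        rates = nj / (resp * x[:, None]).sum(axis=0)
    return weights, rates

# Example: recover the parameters of a known 3-phase hyperexponential
rng = np.random.default_rng(1)
true_w, true_rates = np.array([0.6, 0.3, 0.1]), np.array([5.0, 1.0, 0.1])
phase = rng.choice(3, size=20_000, p=true_w)
data = rng.exponential(scale=1.0 / true_rates[phase])
print(fit_hyperexponential(data))
```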
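
Several of the algebraic-geometry entries refer to A_n (rational double point) singularities. For reference only, the standard local analytic normal form of a surface A_n singularity, viewed as a hypersurface singularity at the origin of ℂ³, is:

```latex
% Standard normal form of a surface A_n singularity (a well-known fact,
% not a quotation from the papers listed above):
\[
  A_n:\qquad x^2 + y^2 + z^{\,n+1} = 0, \qquad n \ge 1.
\]
```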
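
The QBETS-related entries above predict statistical bounds on batch-queue wait times non-parametrically. The sketch below illustrates the underlying binomial order-statistic bound under an i.i.d. assumption on the history, using SciPy; quantile_upper_bound is a hypothetical helper, and the change-point detection and clustering used in the real system are omitted.

```python
# Non-parametric upper confidence bound on a queue-wait quantile via order
# statistics and the binomial distribution. Illustration only; the deployed
# predictor also handles non-stationarity, which this sketch ignores.
import numpy as np
from scipy.stats import binom

def quantile_upper_bound(history, q=0.95, confidence=0.95):
    """Smallest order statistic that exceeds the true q-quantile of the
    wait-time distribution with the requested confidence (i.i.d. history)."""
    waits = np.sort(np.asarray(history, dtype=float))
    n = len(waits)
    for k in range(1, n + 1):
        # P(k-th order statistic >= q-quantile) = P(Binomial(n, q) <= k - 1)
        if binom.cdf(k - 1, n, q) >= confidence:
            return waits[k - 1]
    return None  # history too short for the requested confidence

# Example with synthetic wait times (seconds)
rng = np.random.default_rng(2)
history = rng.lognormal(mean=6.0, sigma=1.5, size=300)
print("bound on the 0.95 wait-time quantile:", quantile_upper_bound(history))
```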
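
Finally, the stratified-sampling entry reduces the sample size needed for a given confidence level and margin of error in decoder simulation. The generic sketch below shows the variance-reduction idea on a stand-in integrand; it does not reproduce the LDPC decoder simulation itself.

```python
# Plain versus stratified Monte Carlo on a simple stand-in integrand.
# Over repeated runs the stratified estimate has noticeably lower variance.
import numpy as np

rng = np.random.default_rng(3)
f = lambda u: np.exp(-3 * u)          # stand-in for the simulated quantity
n, strata = 10_000, 50
per = n // strata                     # equal allocation per stratum

# Plain Monte Carlo over U(0, 1)
plain = f(rng.random(n)).mean()

# Stratified: restrict each batch of `per` samples to one slice of (0, 1)
width = 1.0 / strata
lows = np.arange(strata) * width
samples = lows[:, None] + width * rng.random((strata, per))
stratified = f(samples).mean()

exact = (1 - np.exp(-3)) / 3
print(f"exact {exact:.6f}  plain {plain:.6f}  stratified {stratified:.6f}")
```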

Publication Stats

2k Citations

Institutions

  • 1970-2014
    • California State University, Long Beach
      • Department of Mathematics & Statistics
      Long Beach, California, United States
  • 2003-2005
    • University of California, Santa Barbara
      • Department of Computer Science
      Santa Barbara, California, United States
  • 2003-2004
    • Wheaton College
      • Department of Mathematics and Computer Science
      Norton, Massachusetts, United States
  • 2001
    • Holy Cross College
      South Bend, Indiana, United States
    • The University of Tennessee Medical Center at Knoxville
      Knoxville, Tennessee, United States
    • College of the Holy Cross
      Worcester, Massachusetts, United States
  • 1999
    • University of California, Berkeley
      Berkeley, California, United States