Mark Hereld

Argonne National Laboratory (ANL) · Mathematics and Computer Science Division

Ph.D.

About

135 Publications · 12,684 Reads
1,832 Citations

Publications (135)
Article
Accurate, precise, and rapid particle tracking in three dimensions remains a challenge, yet its achievement will significantly enhance our understanding of living systems. We developed a multifocal microscopy (MFM) system that allows snapshot acquisition of the imaging data and an associated image processing approach, which together allow simultaneous 3D t...
Article
Full-text available
Digital technology presents us with new and compelling opportunities for discovery when focused on the world's natural history collections. The outstanding barrier to applying existing and forthcoming computational methods for large-scale study of this important resource is that it is (largely) not yet in the digital realm. Without development of n...
Article
Full-text available
Despite recent advances, high performance single-shot 3D microscopy remains an elusive task. By introducing designed diffractive optical elements (DOEs), one can convert a microscope into a 3D “kaleidoscope,” in which the snapshot image consists of an array of tiles, each focused on a different depth. However, the acquired...
Article
Full-text available
Realizing both high temporal and spatial resolution across a large volume is a key challenge for 3D fluorescent imaging. Towards achieving this objective, we introduce an interferometric multifocus microscopy (iMFM) system, a combination of multifocus microscopy (MFM) with two opposing objective lenses. We show that the proposed iMFM is capable of...
Conference Paper
Full-text available
We present a Bayesian approach for 3D image reconstruction of an extended object imaged with multi-focus microscopy (MFM). MFM simultaneously captures multiple sub-images of different focal planes to provide 3D information of the sample. The naive method to reconstruct the object is to stack the sub-images along the z-axis, but the result suffers f...
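For orientation, the naive stacking baseline mentioned above can be sketched in a few lines. This is a minimal illustration under assumed conventions: the 3x3 tile grid, its row-major ordering, and the lack of registration are placeholders, not details from the paper, whose contribution is the Bayesian reconstruction that replaces this step.

```python
import numpy as np

def naive_mfm_stack(snapshot, grid=(3, 3)):
    """Cut an MFM snapshot into its focal-plane tiles and stack them
    along z (the naive baseline; tile layout and ordering are assumptions)."""
    rows, cols = grid
    h = snapshot.shape[0] // rows   # tile height in pixels
    w = snapshot.shape[1] // cols   # tile width in pixels
    tiles = [snapshot[r * h:(r + 1) * h, c * w:(c + 1) * w]
             for r in range(rows) for c in range(cols)]
    return np.stack(tiles, axis=0)  # (z, y, x): one plane per focal depth
```

Each tile in an MFM snapshot also records defocused light from the other focal planes, so a simple stack mixes planes together; that cross-plane coupling is what a joint Bayesian reconstruction can model rather than ignore.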
Conference Paper
Full-text available
We present a primal-dual interior point method (IPM) with a novel preconditioner to solve the 1-norm regularized least-squares problem for nonnegative sparse signal reconstruction. IPM is a second-order method that uses both gradient and Hessian information to compute effective search directions and achieve super-linear convergence rates. It therefo...
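For context, the optimization problem this class of solvers targets has the standard form below; the notation is generic (A, b, and λ are not symbols taken from the paper):

\[ \min_{x \ge 0} \; \tfrac{1}{2}\,\lVert Ax - b \rVert_2^2 + \lambda\,\lVert x \rVert_1 \]

Because the constraint x ≥ 0 forces ||x||_1 = 1^T x, the objective reduces to a smooth convex quadratic over the nonnegative orthant, which is what lets a second-order primal-dual IPM take Newton steps and reach the super-linear convergence rates mentioned above.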
Preprint
Full-text available
Despite recent advances, high performance single-shot 3D microscopy remains an elusive task. By introducing designed diffractive optical elements (DOEs), one can convert a microscope into a 3D "kaleidoscope", in which the snapshot image consists of an array of tiles, each focused on a different depth. However, the acquired...
Article
Accurate and rapid particle tracking is essential for addressing many research problems in single molecule and cellular biophysics and colloidal soft condensed matter physics. We developed a novel three-dimensional interferometric fluorescent particle tracking approach that does not require any sample scanning. By periodically shifting the interfer...
Article
Full-text available
An interferometric fluorescent microscope and a novel theoretical image reconstruction approach were developed and used to obtain super-resolution images of live biological samples and to enable dynamic real-time tracking. The tracking utilizes the information stored in the interference pattern of both the illuminating incoherent light and the emitte...
Article
Full-text available
Volumetric biological imaging often involves trading high temporal resolution for high spatial resolution when popular scanning methods are used to capture 3D information. We introduce an integrated experimental and image reconstruction method for capturing dynamic 3D fluorescent extended objects as a series of synchronously meas...
Article
Full-text available
Integrated Dynamic 3D Imaging of Microbial Processes and Communities in Rhizosphere Environments: The Argonne Small Worlds Project - Volume 23 Issue S1 - K. M. Kemner, M. Hereld, N. Scherer, A. Selewa, X. Wang, I. Gdor, M. Daddysman, J. Jureller, T. Huynh, O. Cossairt, A. Katsaggelos, K. He, S. Yoo, N. Matsuda, B. Glick, P. La Riviere, J. Austin, K...
Article
The authors present a graph-based computational framework that facilitates the construction, instantiation, and analysis of large-scale optimisation and simulation applications of coupled infrastructure networks. The framework integrates the optimisation modelling package PLASMO and the simulation package DMNetwork (built around PETSc). These tools...
Conference Paper
Today's leadership computing facilities have enabled the execution of transformative simulations at unprecedented scales. However, analyzing the huge amount of output from these simulations remains a challenge. Most analyses of this output are performed in post-processing mode at the end of the simulation. The time to read the output for the analysi...
Chapter
Computational models of brain tissue provide important insights for understanding pathological behavior within neuronal networks. Validating these models poses difficult challenges due to the number of neurons and synaptic connections in even the most modest samples. An important step toward validation is determining connectivity within the biologi...
Article
Full-text available
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing an...
Conference Paper
Adaptive Mesh Refinement (AMR) is a popular approach for allocating scarce computing resources to the most important portions of the simulation domain. This approach implies spatial compression, a necessity at the large simulation sizes for which it is designed. We present a novel cluster- and GPU-parallel rendering scheme for AMR data, which is built on previous work...
Article
Full-text available
Constructing integrative visualizations that simultaneously cater to a variety of data types is challenging. Hybrid-reality environments blur the line between virtual environments and tiled display walls. They incorporate high-resolution, stereoscopic displays, which can be used to juxtapose large, heterogeneous datasets while providing a range of...
Conference Paper
Computational science is generating increasingly unwieldy datasets created by complex and high-resolution simulations of physical, social, and economic systems. Traditional post processing of such large datasets requires high bandwidth to large storage resources. In situ processing approaches can reduce I/O requirements but steal processing cycles...
Article
Full-text available
Molecular surfaces at atomic and subatomic scales are inherently ill-defined. In many computational chemistry problems, boundaries are better represented as volumetric regions than as discrete surfaces. Molecular structure of a system at equilibrium is given by the self-consistent field, commonly interpreted as a scalar field of electron density....
Conference Paper
The light from early galaxies had a dramatic impact on the gases filling the universe. This video highlights the spatial structure of the light's effect by comparing two simulations: one with a self-consistent radiation field (radiative), and one without (non-radiative), each with a very high dynamic range. Looking at the simulations side-by-side...
Conference Paper
The light from early galaxies had a dramatic impact on the gases filling the universe. This video highlights the spatial structure of the light's effect by comparing two simulations: one with a self-consistent radiation field (radiative), and one without (non-radiative), each with a very high dynamic range. Ionization fraction is the amount of th...
Conference Paper
Climate models are outputting ever larger amounts of data on increasingly sophisticated numerical grids. The tools climate scientists have used to analyze climate output, an essential component of climate modeling, are single-threaded and assume rectangular structured grids in their analysis algorithms. We are bringing both task...
Article
Full-text available
One of the most pressing issues with petascale analysis is the transport of simulation results data to a meaningful analysis. Traditional workflow prescribes storing the simulation results to disk and later retrieving them for analysis and visualization. However, at petascale this storage of the full results is prohibitive. A solution to this prob...
Conference Paper
Scalability and time-to-solution studies have historically been focused on the size of the problem and run time. We consider a more strict definition of "solution" whereby a live data analysis (co-visualization of either the full data or in situ data extracts) provides continuous and reconfigurable insight into massively parallel simulations. Speci...
Conference Paper
This simulation uses a flux-limited diffusion solver to explore the radiation hydrodynamics of early galaxies, in particular, the ionizing radiation created by Population III stars. At the time of this rendering, the simulation has evolved to a redshift of 3.5. The simulation volume is 11.2 comoving megaparsecs, and has a uniform grid of 1024³ cell...
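A quick unit check of the numbers quoted above gives the grid's spatial resolution:

\[ \Delta x \;=\; \frac{11.2\ \text{comoving Mpc}}{1024\ \text{cells}} \;\approx\; 10.9\ \text{comoving kpc per cell} \]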
Conference Paper
Full-text available
There is growing concern that I/O systems will be hard pressed to satisfy the requirements of future leadership-class machines. Even current machines are found to be I/O bound for some applications. In this paper, we identify existing performance bottlenecks in data movement for I/O on the IBM Blue Gene/P (BG/P) supercomputer currently deployed at...
Article
Full-text available
The performance mismatch between computing and I/O components of current-generation HPC systems has made I/O the critical bottleneck for scientific applications. It is therefore critical to make data movement as efficient as possible and to facilitate simulation-time data analysis and visualization to reduce the data written to storage. These wil...
Conference Paper
Full-text available
Simulations running on the top supercomputers are routinely producing multi-terabyte data sets. Enabling scientists, at their home institutions, to analyze, visualize and interact with these data sets as they are produced is imperative to the scientific discovery process. We report on interactive visualizations of large simulations performed on Kra...
Article
Full-text available
Science gateways have dramatically simplified the work required by science communities to run their codes on TeraGrid resources. Gateway development typically spans the duration of a particular grant, with the first production runs occurring some months after the award and concluding near the end of the project. Scientists use gateways as a means t...
Article
The top supercomputers typically have aggregate memories in excess of 100 TB, with simulations running on these systems producing datasets of comparable size. The size of these datasets and the speed with which they are produced define the minimum performance that modern analysis and visualization must achieve. We report on interactive visualizatio...
Conference Paper
Full-text available
Large-scale scientific simulations routinely produce data of increasing resolution. Analyzing this data is key to scientific discovery. A critical bottleneck facing data analysis is the I/O time to access the data due to the disparity between a simulation's data layout and the data layout requirements of analysis applications. One method of address...
Conference Paper
Full-text available
Current leadership-class machines suffer from a significant imbalance between their computational power and their I/O bandwidth. I/O forwarding is a paradigm that attempts to bridge the increasing performance and scalability gap between the compute and I/O components of leadership-class machines to meet the requirements of data-intensive applicatio...
Article
Two existing models of brain dynamics in epilepsy, one detailed (i.e., realistic) and one abstract (i.e., simplified) are compared in terms of behavioral range and match to in vitro mouse recordings. A new method is introduced for comparing across computational models that may have very different forms. First, high-level metrics were extracted from...
Conference Paper
Full-text available
A significant obstacle to building usable, web-based interfaces for computational science in a Grid environment is how to deploy scientific applications on computational resources and expose these applications as web services. To streamline the development of these interfaces, we propose a new application framework that can deliver user-defined sci...
Article
A basic understanding of the relationship between activity of individual neurons and macroscopic electrical activity of local field potentials, or electroencephalogram (EEG), may provide guidance for experimental design in neuroscience, improve development of therapeutic approaches in neurology, and offer opportunities for computer-aided design o...
Article
Full-text available
A Science Gateway is a computational web portal that includes a community-developed set of tools, applications, and data customized to enable scientists to run scientific simulations, data analysis, and visualization through their web browsers. The major problem of building a science gateway in a Grid environment such as TeraGrid is how to deploy s...
Article
Full-text available
The Computer Supported Collaborative Work research community has identified that the technologies used to support distributed teams of researchers, such as email, instant messaging, and conferencing environments, are not enough. Building from a list of areas where it is believed technology can help support distributed teams, we have divided our effor...
Article
Full-text available
Increasingly massive datasets produced by simulations beg the question: How will we connect this data to the computational and display resources that support visualization and analysis? This question is driving research into new approaches to allocating computational, storage, and network resources. In this paper we explore potential solutions that...
Article
Full-text available
Connecting expensive and scarce visual data analysis resources to end-users is a major challenge today. We describe a flexible mechanism for meeting this challenge based on commodity compression technologies for streaming video. The advantages of this approach include simplified application development, access to generic client components for viewi...
Conference Paper
Full-text available
Collaboration is often an afterthought to a project or development. In this paper we describe and analyze our experiences in developing collaborative technologies, most often involving the sharing of visual information. We have often developed these in a context that required us to retrofit existing analysis applications with collaboration capabili...
Article
Full-text available
The SIDGrid architecture provides a framework for distributed annotation, archiving, and analysis of the rapidly growing volume of multimodal data. The framework integrates three main components: an annotation and analysis client, a web-accessible data repository, and a portal to the distributed processing capability of the TeraGrid. The archi...
Article
With the dramatic increases in simulation complexity and resolution comes an equally dramatic challenge for resources, both computational and storage, needed to facilitate analysis and understanding of the results. Traditionally these needs have been met by powerful workstations equipped with sophisticated analysis tools and special purpose visuali...
Article
Seizures in pediatric epilepsy are often associated with spreading, repetitive bursting activity in neocortex. The authors examined onset and propagation of seizure-like activity using a computational model of cortical circuitry. The model includes two pyramidal cell types and four types of inhibitory interneurons. Each neuron is represented by a m...
Article
Large simulations have become increasingly complex in many fields, tending to incorporate scale-dependent modeling and algorithms and wide-ranging physical influences. This scale of simulation sophistication has not yet been matched in neuroscience. The authors describe a framework aimed at enabling natural interaction with complex simulations: the...
Conference Paper
Full-text available
This paper proposes the study of a new computation model that attempts to address the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit locality management and scheduling, performance tuning, fragmented memory, and synchronous global barri...
Article
Full-text available
The Social Informatics Data Grid is a new infrastructure designed to transform how social and behavioral scientists collect and annotate data, collaborate and share data, and analyze and mine large data repositories. An important goal of the project is to be compatible with existing databases and tools that support the sharing, storage and retrieva...
Article
Full-text available
Most types of electrographic epileptiform activity can be characterized by isolated or repetitive bursts in brain electrical activity. This observation is our motivation to determine mechanisms that underlie bursting behavior of neuronal networks. Here we show that the persistent sodium (Na(P)) current in mouse neocortical slices is associated with...
Conference Paper
Full-text available
This paper addresses the underlying sources of performance degradation (e.g. latency, overhead, and starvation) and the difficulties of programmer productivity (e.g. explicit locality management and scheduling, performance tuning, fragmented memory, and synchronous global barriers) to dramatically enhance the broad effectiveness of parallel process...
Conference Paper
Full-text available
Demand for high-resolution visualization, collaborative workspaces with large pixel real estate, and interactive computer interfaces continues to drive researchers to develop new physical portals connecting them to their computational tools, to their data, and to their colleagues around the world. In this paper we describe CupHolder, a high performance w...
Conference Paper
Full-text available
Multiprojector tiled displays offer a scalable path to high-resolution display systems, often in a large format. Such displays have been employed in a range of applications from scientific visualization to collaborative virtual environments. Images projected onto these systems have always looked less sharp than expected owing to the image warping r...