Chandrika Sivaramakrishnan

Pacific Northwest National Laboratory, Richland, Washington, United States

Publications (14) · 5.79 Total impact

  •
    ABSTRACT: The U.S. Department of Energy (DOE) recently invested in developing a numerical modeling toolset called ASCEM (Advanced Simulation Capability for Environmental Management) to support modeling analyses at legacy waste sites. This investment includes the development of an open-source user environment called Akuna that manages subsurface simulation workflows. Core toolsets accessible through the Akuna user interface include model setup, grid generation, sensitivity analysis, model calibration, and uncertainty quantification. Additional toolsets are used to manage simulation data and visualize results. This new workflow technology is demonstrated by streamlining model setup, calibration, and uncertainty analysis using high performance computation for the BC Cribs Site, a legacy waste area at the Hanford Site in Washington State. For technetium-99 transport, the uncertainty assessment for potential remedial actions (e.g., surface infiltration covers) demonstrates that using multiple realizations of the geologic conceptual model results in greater variation in concentration predictions than when a single model is used.
    Environmental Modelling and Software 05/2014; 55:176–189. DOI:10.1016/j.envsoft.2014.01.030 · 4.54 Impact Factor
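    An illustrative Python sketch of the closing point above: propagating the same recharge (infiltration-cover) scenario uncertainty through several geologic conceptual-model realizations versus a single model, then comparing the spread of predicted concentrations. All function and parameter names are hypothetical stand-ins, not the Akuna/ASCEM implementation.

import random
import statistics

def run_transport_model(realization_id, recharge_mm_per_yr):
    """Hypothetical surrogate for one forward transport simulation.

    Each geologic realization responds differently to the same recharge
    scenario; returns a peak concentration value.
    """
    sensitivity = 0.5 + 0.1 * realization_id  # toy realization-specific response
    return sensitivity * recharge_mm_per_yr + random.gauss(0.0, 1.0)

def predict(realizations, recharge_samples):
    """Propagate recharge uncertainty through one or more realizations."""
    return [run_transport_model(r, q) for r in realizations for q in recharge_samples]

if __name__ == "__main__":
    random.seed(1)
    recharge_samples = [random.uniform(5.0, 50.0) for _ in range(200)]
    single = predict([0], recharge_samples)         # a single conceptual model
    multiple = predict(range(5), recharge_samples)  # five alternative realizations
    print("spread (stdev), single model    :", round(statistics.stdev(single), 2))
    print("spread (stdev), multiple models :", round(statistics.stdev(multiple), 2))

    With realization-specific responses, the multi-model ensemble shows the wider spread in predictions, which is the behavior the study reports.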
  •
    ABSTRACT: Numerical simulation is a standard practice used to support designing, operating, and monitoring CO2 injection projects. Although a variety of computational tools have been developed that support the numerical simulation process, many are single-purpose or platform-specific and have a prescribed workflow that may or may not be suitable for a particular project. We are developing an open-source, flexible framework named Velo that provides a knowledge management infrastructure and tools to support modeling and simulation for various types of projects in a number of scientific domains. The Geologic Sequestration Software Suite (GS3) is a version of this framework with features and tools specifically tailored for geologic sequestration studies. Because of its general nature, GS3 is being employed in a variety of ways on projects with differing goals. GS3 is being used to support the Sim-SEQ international model comparison study by providing a collaborative framework for the modeling teams and providing tools for model comparison. Another customized deployment of GS3 has been made to support the Class VI Well geologic sequestration permit application process. In this case, GS3 is being used to manage data in support of conceptual model development and provide documentation and provenance for numerical simulations. An additional customized deployment of GS3 is being created for use by the United States Environmental Protection Agency (US EPA) Underground Injection Control (UIC) Program to aid in the Class VI Well geologic sequestration permit application review process. These use cases demonstrate GS3's flexibility, utility, and broad applicability.
    Energy Procedia 12/2013; 37:3971-3979. DOI:10.1016/j.egypro.2013.06.296
  •
    ABSTRACT: Velo is a reusable, domain-independent knowledge-management infrastructure for modeling and simulation. Velo leverages, integrates, and extends Web-based open source collaborative and data-management technologies to create a scalable and flexible core platform tailored to specific scientific domains. As the examples here describe, Velo has been used in both the carbon sequestration and climate modeling domains.
    Computing in Science and Engineering 03/2012; 14(2):12-23. DOI:10.1109/MCSE.2011.116 · 1.25 Impact Factor
  •
    ABSTRACT: Geologic storage projects associated with large anthropogenic sources of greenhouse gases (GHG) will have lifecycles that may easily span a century, involve several numerical simulation cycles, and have distinct modeling teams. The process used for numerical simulation of the fate of GHG in the subsurface follows a generally consistent sequence of steps that often are replicated by scientists and engineers around the world. Site data is gathered, assembled, interpreted, and assimilated into conceptualizations of a solid-earth model; assumptions are made about the processes to be modeled; a computational domain is specified and spatially discretized; driving forces and initial conditions are defined; the conceptual models, computational domain, and driving forces are translated into input files; simulations are executed; and results are analyzed. Then, during and after the GHG injection, the reservoir is monitored continuously and models are updated with the newly collected data. Typically, the working files generated during all these steps are maintained on workstations with local backups and archived, along with any modeling notes and records, once the project has concluded. We are proposing a new concept for supporting the management of full-scale GHG storage projects where collaboration, flexibility, accountability, and long-term access will be essential features: The Geologic Sequestration Software Suite, GS3.
    Energy Procedia 12/2011; 4:3825-3832. DOI:10.1016/j.egypro.2011.02.318
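    A minimal sketch of the record-keeping idea motivating GS3 (hypothetical class and method names, not GS3's implementation): each step in the simulation sequence described above is logged with its inputs, outputs, and a timestamp, so a long-lived project keeps an accountable, shareable record rather than loose files on a workstation.

import datetime
import json

class ModelingProject:
    """Hypothetical container that records every modeling step as provenance."""

    def __init__(self, name):
        self.name = name
        self.steps = []

    def record_step(self, step, inputs, outputs):
        self.steps.append({
            "step": step,
            "inputs": inputs,
            "outputs": outputs,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })

    def export(self):
        """Serialize the record, e.g., for archiving alongside the model files."""
        return json.dumps({"project": self.name, "steps": self.steps}, indent=2)

if __name__ == "__main__":
    project = ModelingProject("demo_storage_site")
    project.record_step("assemble_site_data", {"well_logs": 12}, {"dataset": "site_v1"})
    project.record_step("build_conceptual_model", {"dataset": "site_v1"}, {"model": "geomodel_v1"})
    project.record_step("run_simulation", {"model": "geomodel_v1"}, {"results": "plume_v1"})
    print(project.export())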
  •
    ABSTRACT: Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as geosciences, climate, and biology require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and additional simulations. Further, these results must be managed and archived to provide justifications for regulatory decisions and publications that are based on these models. In this paper we introduce our Velo framework that is designed as a reusable, domain independent knowledge management infrastructure for modeling and simulation. Velo leverages, integrates, and extends open source collaborative and content management technologies to create a scalable and flexible core platform that can be tailored to specific scientific domains. We describe the architecture of Velo for managing and associating the various types of data that are used and created in modeling and simulation projects, as well as the framework for integrating domain-specific tools. To demonstrate a realization of Velo, we describe the Geologic Sequestration Software Suite (GS3) that has been developed to support geologic sequestration modeling. This provides a concrete example of the inherent extensibility and utility of our approach.
  •
    ABSTRACT: A context-aware scientific workflow is a typical scientific workflow that is enhanced with context binding and awareness mechanisms. Context facilitates further configuration of the scientific workflow at runtime such that it is tuned to its environment during execution and responds intelligently based on such awareness without customized coding of the workflow. In this paper, we present a context annotation framework, which supports rapid development of context-aware scientific workflows. Context annotation enables diverse types of actors in Kepler to bind different sensed environmental information as part of the actor's regular data. Context-aware actors simplify the construction of scientific workflows that would otherwise require intricate knowledge to initialize and configure a large number of parameters covering all execution conditions. This paper presents the motivation, system design, implementation, and usage of context annotation in relation to the Kepler scientific workflow system.
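    The context-aware actor idea described above can be illustrated roughly as follows (a language-agnostic Python sketch with made-up names; Kepler actors are actually implemented against Kepler's Java API): annotated parameters are bound to context providers that are queried at runtime, so the actor adapts to its environment without extra control-flow wiring in the workflow.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class ContextAwareActor:
    """Hypothetical actor whose annotated parameters are resolved at runtime."""
    name: str
    # Each annotated parameter maps to a provider that senses the environment
    # (a sensor reading, host property, user or site setting, etc.).
    context_bindings: Dict[str, Callable[[], Any]] = field(default_factory=dict)

    def resolve_context(self) -> Dict[str, Any]:
        """Query every bound context source just before the actor fires."""
        return {key: provider() for key, provider in self.context_bindings.items()}

    def fire(self, token: Any) -> Dict[str, Any]:
        context = self.resolve_context()
        # The sensed context travels with the actor's regular data instead of
        # being hard-coded as workflow parameters.
        return {"input": token, **context}

if __name__ == "__main__":
    actor = ContextAwareActor(
        name="SampleProcessor",
        context_bindings={
            "air_temperature_c": lambda: 21.5,   # stand-in for a live sensor
            "output_dir": lambda: "/tmp/run01",  # stand-in for a site setting
        },
    )
    print(actor.fire({"sample_id": 7}))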
  •
    ABSTRACT: A drawback of existing scientific workflow systems is the lack of support for domain scientists in designing and executing their own scientific workflows. Many domain scientists avoid developing and using workflows because the basic objects of workflows are too low-level, and high-level tools and mechanisms to aid in workflow construction and use are largely unavailable. In our research, we are prototyping higher-level abstractions and tools to better support scientists in their workflow activities. Specifically, we are developing generic actors that provide abstract interfaces to specific functionality, workflow templates that encapsulate workflow and data patterns that can be reused and adapted by scientists, and context-awareness mechanisms to gather contextual information from the workflow environment on behalf of the scientist. To evaluate these scientist-centered abstractions on real problems, we apply them to construct and execute scientific workflows in the specific domain area of groundwater modeling and analysis.
    2011 IEEE World Congress on Services (SERVICES); 01/2011
  •
    ABSTRACT: Scientific research products are the result of long-term collaborations between teams. Scientific workflows are capable of helping scientists in many ways including collecting information about how research was conducted (e.g., scientific workflow tools often collect and manage information about datasets used and data transformations). However, knowledge about why data was collected is rarely documented in scientific workflows. In this paper we describe a prototype system built to support the collection of scientific expertise that influences scientific analysis. Through evaluating a scientific research effort underway at the Pacific Northwest National Laboratory, we identified features that would most benefit PNNL scientists in documenting how and why they conduct their research, making this information available to the entire team. The prototype system was built by enhancing the Kepler Scientific Workflow System to create knowledge-annotated scientific workflows and to publish them as semantic annotations.
    Scientific and Statistical Database Management - 23rd International Conference, SSDBM 2011, Portland, OR, USA, July 20-22, 2011. Proceedings; 01/2011
  •
    ABSTRACT: Modern scientific enterprises are inherently knowledge-intensive. In general, scientific studies in domains such as geoscience, chemistry, physics and biology require the acquisition and manipulation of large amounts of experimental and field data in order to create inputs for large-scale computational simulations. The results of these simulations must then be analyzed, leading to refinements of inputs and models and further simulations. In this paper we describe our efforts in creating a knowledge management platform to support collaborative, wide-scale studies in the area of geologic sequestration modeling. The platform, known as GS3 (Geologic Sequestration Software Suite), exploits and integrates off-the-shelf software components including semantic wikis, content management systems and open source middleware to create the core architecture. We then extend the wiki environment to support the capture of provenance, the ability to incorporate various analysis tools, and the ability to launch simulations on supercomputers. The paper describes the key components of GS3 and demonstrates its use through illustrative examples. We conclude by assessing the suitability of our approach for geologic sequestration modeling and generalization to other scientific problem domains.
    43rd Hawaii International Conference on System Sciences (HICSS-43 2010), Proceedings, 5-8 January 2010, Koloa, Kauai, HI, USA; 01/2010
  •
    ABSTRACT: The Support Architecture for Large-Scale Subsurface Analysis (SALSSA) provides an extensible framework, sophisticated graphical user interface, and underlying data management system that simplifies the process of running subsurface models, tracking provenance information, and analyzing the model results. Initially, SALSSA supported two styles of job control: user-directed execution and monitoring of individual jobs, and load balancing of jobs across multiple machines taking advantage of many available workstations. Recent efforts in subsurface modeling have been directed at scaling simulators to take advantage of leadership-class supercomputers. We describe two approaches, current progress, and plans toward enabling efficient application of the subsurface simulator codes via the SALSSA framework: automating sensitivity analysis problems through task parallelism, and task-parallel parameter estimation using the PEST framework.
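    A hedged sketch of the task-parallel sensitivity idea mentioned above (not SALSSA's implementation; the simulator call is a hypothetical placeholder): the perturbed cases are independent of one another, so they can be dispatched as concurrent tasks across workstations or an HPC allocation.

from concurrent.futures import ProcessPoolExecutor
import math

BASE_PARAMS = {"permeability": 1e-12, "porosity": 0.35, "recharge": 8.0}

def run_simulation(params):
    """Hypothetical stand-in for one forward model run; returns an output metric."""
    return math.log10(params["permeability"]) * params["porosity"] + params["recharge"]

def perturbed_cases(base, fraction=0.1):
    """One-at-a-time perturbations: one independent task per parameter."""
    for name, value in base.items():
        case = dict(base)
        case[name] = value * (1.0 + fraction)
        yield name, case

if __name__ == "__main__":
    base_result = run_simulation(BASE_PARAMS)
    cases = list(perturbed_cases(BASE_PARAMS))
    # Every case is independent of the others, so the runs can proceed in parallel.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_simulation, [c for _, c in cases]))
    for (name, _), result in zip(cases, results):
        print(f"change in output from +10% {name}: {result - base_result:+.4f}")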
  •
    ABSTRACT: Data-intensive scientific workflows are often modeled using a dataflow-oriented model. The simplicity of a dataflow model facilitates intuitive workflow design, analysis, and optimization. However, some amount of control-flow modeling is often necessary for engineering fault-tolerant, robust, and adaptive workflows. Modeling the control flow with inherent dataflow constructs quickly produces a workflow that is hard to comprehend, reuse, and maintain. In this paper, we propose a context-aware architecture for scientific workflows. By incorporating contexts within a dataflow-oriented scientific workflow system, we enable the development of context-aware scientific workflows without the need to use numerous low-level control-flow actors. This results in a workflow that is aware of its environment during execution with minimal user input and responds intelligently based on such awareness at runtime. A further advantage of our approach is that the defined contexts can be reused and shared across other workflows. We demonstrate our approach with two prototype implementations of context-aware actors in Kepler.
  •
    ABSTRACT: Scientific applications are often structured as workflows that execute a series of interdependent, distributed software modules to analyze large data sets. The order of execution of the tasks in a workflow is commonly controlled by complex scripts, which over time become difficult to maintain and evolve. In this paper, we describe how we have integrated the Kepler scientific workflow platform with the MeDICi Integration Framework, which has been specifically designed to provide a standards-based, lightweight, and flexible integration platform. The MeDICi technology provides a scalable, component-based architecture that efficiently handles integration with heterogeneous, distributed software systems. This paper describes the MeDICi Integration Framework and the mechanisms we used to integrate MeDICi components with Kepler workflow actors. We illustrate this solution with a workflow for an atmospheric sciences application. The resulting solution promotes a strong separation of concerns, simplifying the Kepler workflow description and promoting the creation of a reusable collection of components available for other workflow applications in this domain.
    2009 World Conference on Services - I; 08/2009
  •
    ABSTRACT: Systems biology research demands the availability of tools and technologies that span a comprehensive range of computational capabilities, including data management, transfer, processing, integration, and interpretation. To address these needs, we have created the Bioinformatics Resource Manager (BRM), a scalable, flexible, and easy-to-use tool for biologists to undertake complex analyses. This paper describes the underlying software architecture of the BRM that integrates multiple commodity platforms to provide a highly extensible and scalable software infrastructure for bioinformatics. The architecture integrates a J2EE 3-tier application with an archival Experimental Data Management System, the GAGGLE framework for desktop tool integration, and the MeDICi Integration Framework for high-throughput data analysis workflows. This architecture facilitates a systems biology software solution that enables the entire spectrum of scientific activities, from experimental data access to high-throughput processing and analysis of data for biologists and experimental scientists.
  •
    ABSTRACT: A next-generation, open-source subsurface simulator and user environment for environmental management is being developed through a collaborative effort across Department of Energy National Laboratories. The flow and transport simulator, Amanzi, will be capable of modeling complex subsurface environments and processes using both unstructured and adaptive meshes at very fine spatial resolutions that require supercomputing-scale resources. The user environment, Akuna, provides users with a range of tools to manage environmental and simulator data sets, create models, manage and share simulation data, and visualize results. Underlying the user interface are core toolsets that provide algorithms for sensitivity analysis, parameter estimation, and uncertainty quantification. Akuna is open-source, cross-platform software that is initially being demonstrated on the Hanford BC Cribs remediation site. In this paper, we describe the emerging capabilities of Akuna and illustrate how these are being applied to the BC Cribs site.