ABSTRACT: For many scientific applications, it is highly desirable to be able to compare metabolic models of closely related genomes. In this short report, we draw attention to the fact that taking annotated genomes from public repositories and using them for metabolic model reconstruction is far from trivial because of annotation inconsistencies. We propose a protocol for comparative analysis of metabolic models of closely related genomes, using fifteen strains of the genus Brucella, which contains pathogens of both humans and livestock. This study led to the identification and subsequent correction of inconsistent annotations in the SEED database, as well as to the identification of 31 biochemical reactions common to Brucella that were not identified by the original automated metabolic reconstructions. We are currently implementing this protocol to improve automated annotations within the SEED database, and these improvements have been propagated into PATRIC, Model-SEED, KBase and RAST. This method is an enabling step toward the future creation of consistent annotation systems and high-quality model reconstructions that will support the prediction of phenotypes such as pathogenicity, media requirements, or type of respiration.
Electronic supplementary material
The online version of this article (doi:10.1007/s13205-014-0202-4) contains supplementary material, which is available to authorized users.
ABSTRACT: The remarkable advance in sequencing technology and the rising interest in medical and environmental microbiology, biotechnology, and synthetic biology resulted in a deluge of published microbial genomes. Yet, genome annotation, comparison, and modeling remain a major bottleneck to the translation of sequence information into biological knowledge, hence computational analysis tools are continuously being developed for rapid genome annotation and interpretation. Among the earliest, most comprehensive resources for prokaryotic genome analysis, the SEED project, initiated in 2003 as an integration of genomic data and analysis tools, now contains >5,000 complete genomes, a constantly updated set of curated annotations embodied in a large and growing collection of encoded subsystems, a derived set of protein families, and hundreds of genome-scale metabolic models. Until recently, however, maintaining current copies of the SEED code and data at remote locations has been a pressing issue. To allow high-performance remote access to the SEED database, we developed the SEED Servers (http://www.theseed.org/servers): four network-based servers intended to expose the data in the underlying relational database, support basic annotation services, offer programmatic access to the capabilities of the RAST annotation server, and provide access to a growing collection of metabolic models that support flux balance analysis. The SEED servers offer open access to regularly updated data, the ability to annotate prokaryotic genomes, the ability to create metabolic reconstructions and detailed models of metabolism, and access to hundreds of existing metabolic models. This work offers and supports a framework upon which other groups can build independent research efforts.
Large integrations of genomic data represent one of the major intellectual resources driving research in biology, and programmatic access to the SEED data will provide significant utility to a broad collection of potential users.
ABSTRACT: Many enzymes and other proteins are difficult subjects for bioinformatic analysis because they exhibit variant catalytic, structural, regulatory, and fusion mode features within a protein family whose sequences are not highly conserved. However, such features reflect dynamic and interesting scenarios of evolutionary importance. The value of experimental data obtained from individual organisms is instantly magnified to the extent that given features of the experimental organism can be projected upon related organisms. But how can one decide how far along the similarity scale it is reasonable to go before such inferences become doubtful? How can a credible picture of evolutionary events be deduced within the vertical trace of inheritance in combination with intervening events of lateral gene transfer (LGT)? We present a comprehensive analysis of a dehydrogenase protein family (TyrA) as a prototype example of how these goals can be accomplished through the use of cohesion group analysis. With this approach, the full collection of homologs is sorted into groups by a method that eliminates bias caused by an uneven representation of sequences from organisms whose phylogenetic spacing is not optimal. Each sufficiently populated cohesion group is phylogenetically coherent and defined by an overall congruence with a distinct section of the 16S rRNA gene tree. Exceptions that are occasionally found implicate a clearly defined LGT scenario whereby the recipient lineage is apparent and the donor lineage of the gene transferred is localized to those organisms that define the cohesion group. Systematic procedures to manage and organize otherwise overwhelming amounts of data are demonstrated.
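The core idea of sorting a homolog collection into groups can be sketched in a few lines. The following is a deliberately minimal illustration, not the published cohesion group algorithm: it assumes simple positional identity as the similarity measure and a greedy single-linkage grouping with an arbitrary 90% threshold, and the sequence names and sequences are made up.

```python
# Hypothetical sketch of sorting homologs into groups by single-linkage
# clustering on pairwise identity. A generic illustration only; the
# identity measure and the 0.9 threshold are assumptions for the example.

def pairwise_identity(a, b):
    """Fraction of matching positions over the shorter sequence."""
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / n

def cohesion_groups(seqs, threshold=0.9):
    """Greedy single-linkage grouping: a sequence joins the first group
    in which it matches any member at or above the identity threshold."""
    groups = []
    for name in seqs:
        for group in groups:
            if any(pairwise_identity(seqs[name], seqs[m]) >= threshold
                   for m in group):
                group.append(name)
                break
        else:
            groups.append([name])
    return groups

homologs = {
    "tyrA_1": "MKLVVTAQER",
    "tyrA_2": "MKLVVTAQEK",   # 9/10 positions match tyrA_1
    "tyrA_3": "GHWSNPYFCD",   # unrelated
}
print(cohesion_groups(homologs))  # → [['tyrA_1', 'tyrA_2'], ['tyrA_3']]
```

The published method additionally corrects for uneven taxon sampling and checks congruence against the 16S rRNA tree, which this sketch does not attempt.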
ABSTRACT: The number of prokaryotic genome sequences becoming available is growing steadily, and faster than our ability to accurately annotate them.
We describe a fully automated service for annotating bacterial and archaeal genomes. The service identifies protein-encoding, rRNA and tRNA genes, assigns functions to the genes, predicts which subsystems are represented in the genome, uses this information to reconstruct the metabolic network and makes the output easily downloadable for the user. In addition, the annotated genome can be browsed in an environment that supports comparative analysis with the annotated genomes maintained in the SEED environment. The service normally makes the annotated genome available within 12-24 hours of submission, but ultimately the quality of such a service will be judged in terms of accuracy, consistency, and completeness of the produced annotations. We summarize our attempts to address these issues and discuss plans for incrementally enhancing the service.
By providing accurate, rapid annotation freely to the community we have created an important community resource. The service has now been utilized by over 120 external users annotating over 350 distinct genomes.
ABSTRACT: Despite its being a leading cause of nosocomial and community-acquired infections, surprisingly little is known about Staphylococcus aureus stress responses. In the current study, Affymetrix S. aureus GeneChips were used to define transcriptome changes in response to cold shock, heat shock, stringent, and SOS response-inducing conditions. Additionally, the RNA turnover properties of each response were measured. Each stress response induced distinct biological processes, subsets of virulence factors, and antibiotic determinants. The results were validated by real-time PCR and stress-mediated changes in antimicrobial agent susceptibility. Collectively, many S. aureus stress-responsive functions are conserved across bacteria, whereas others are unique to the organism. Sets of small stable RNA molecules with no open reading frames were also components of each response. Induction of the stringent, cold shock, and heat shock responses dramatically stabilized most mRNA species. Correlations between mRNA turnover properties and transcript titers suggest that S. aureus stress response-dependent alterations in transcript abundances can, in part, be attributed to alterations in RNA stability. This phenomenon was not observed within SOS-responsive cells.
Article · Nov 2006 · Journal of Bacteriology
ABSTRACT: Bacterial pathogens regulate virulence factor expression at both the level of transcription initiation and mRNA processing/turnover. Within Staphylococcus aureus, virulence factor transcript synthesis is regulated by a number of two-component regulatory systems, the DNA binding protein SarA, and the SarA family of homologues. However, little is known about the factors that modulate mRNA stability or influence transcript degradation within the organism. As our entrée to characterizing these processes, S. aureus GeneChips were used to simultaneously determine the mRNA half-lives of all transcripts produced during log-phase growth. It was found that the majority of log-phase transcripts (90%) have a short half-life (<5 min), whereas others are more stable, suggesting that cis- and/or trans-acting factors influence S. aureus mRNA stability. In support of this, it was found that two virulence factor transcripts, cna and spa, were stabilized in a sarA-dependent manner. These results were validated by complementation and real-time PCR and suggest that SarA may regulate target gene expression in a previously unrecognized manner by posttranscriptionally modulating mRNA turnover. Additionally, it was found that S. aureus produces a set of stable RNA molecules with no predicted open reading frame. Based on the importance of the S. aureus agr RNA molecule, RNAIII, and small stable RNA molecules within other pathogens, it is possible that these RNA molecules influence biological processes within the organism.
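Half-lives like those measured above are conventionally obtained by fitting first-order exponential decay to transcript abundances sampled after transcription arrest: ln N(t) is linear in t with slope -k, and t½ = ln 2 / k. A minimal sketch of that fit, using made-up abundance values rather than data from the study:

```python
import math

# Estimate an mRNA half-life by least-squares fit of first-order decay,
# ln(N_t) = ln(N_0) - k*t, so t_half = ln(2) / k.
# The abundance values below are illustrative, not from the study.

def half_life(times, abundances):
    """Linear regression of ln(abundance) on time; returns t_half in the
    same units as `times`."""
    logs = [math.log(a) for a in abundances]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(logs) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, logs))
             / sum((t - mean_t) ** 2 for t in times))
    return math.log(2) / -slope

# Samples taken 0, 2, 4, 8 min after transcription arrest; values follow
# N_0 * 2**(-t/3), i.e. a 3-minute half-life.
times = [0, 2, 4, 8]
abundances = [100.0, 62.996, 39.685, 15.749]
print(round(half_life(times, abundances), 2))  # → 3.0
```

A transcript with t½ below 5 min, as reported for ~90% of log-phase transcripts, would show a correspondingly steeper decay slope.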
Article · Apr 2006 · Journal of Bacteriology
ABSTRACT: While many issues in the area of virtual reality (VR) research have been addressed in recent years, the constant leaps forward in technology continue to push the field forward. VR research is no longer focused only on computer graphics, but instead has become even more interdisciplinary, combining the fields of networking, distributed computing, and even artificial intelligence. In this article we discuss some of the issues associated with distributed, collaborative virtual reality, as well as lessons learned during the development of two distributed virtual reality applications. 1 Introduction The Futures Laboratory at Argonne National Laboratory has been exploring what is needed to support large-scale shared-space virtual environments (VE) for wide-area collaborations. Our research has focused on the system architecture, software design, and features needed to implement such environments. In this article we discuss two prototype systems under development at Argonne. Shared virtual spaces a...
ABSTRACT: The Futures Lab was founded within the Mathematics and Computer Science Division of Argonne National Laboratory in the fall of 1994. The goal of the lab is to develop new technology and systems to support collaborative science. In order to meet this goal, the lab is organized around three research areas: advanced networking, multimedia, and virtual environments. The Argonne Computing and Communications Infrastructure Futures Laboratory (Futures Lab) was created in 1994 to explore, develop, and prototype next-generation computing and communications infrastructure systems. An important goal of the Futures Lab project is to understand how to incorporate advanced display and media server systems into scientific computing environments. The objective is to create new collaborative environment technologies that combine advanced networking, virtual space technology, and high-end virtual environments to enable the construction of virtual teams for scientific research. We believe that digital medi...
[Show abstract][Hide abstract] ABSTRACT: Virtual reality has become an increasingly familiar part of the science of visualization and communication of information. This, combined with the increase in connectivity of remote sites via high-speed networks, allows for the development of a collaborative distributed virtual environment. Such an environment enables the development of supercomputer simulations with virtual reality visualizations that can be displayed at multiple sites, with each site interacting, viewing, and communicating about the results being discovered.
The early results of an experimental collaborative virtual reality environment are discussed in this paper. The issues that need to be addressed in the implementation, as well as preliminary results, are covered. Also provided are a discussion of plans and a generalized application programmer's interface for CAVE to CAVE.
ABSTRACT: The use of virtual reality has been demonstrated to be very useful in the visualization of science and the synthesis of vast amounts of data. Further, advances in high-speed networks make feasible the coupling of remote supercomputer simulations and virtual environments. The critical performance issue to be addressed with this coupling is the end-to-end lag time (i.e., the delay between a user input and the result of the input). In this paper we quantify the lag times for four different applications. These applications range from coupling virtual environments with supercomputers to coupling multiple virtual environments for collaborative design. The lag times for each application are given in terms of rendering, network, simulation, tracking, and synchronization times. The results of the survey indicate that for some applications it is feasible, in terms of lag times, to couple supercomputers with virtual environments and to couple multiple virtual environments. For the applications ...
ABSTRACT: When coupling supercomputer simulations to "virtual reality" for real-time interactive visualization, the critical performance metric is the end-to-end lag time in system response. Measuring the simulation, tracking, rendering, network, and synchronization components of lag time shows the feasibility of coupling supercomputers with virtual environments for some applications. For others, simulation time makes interactivity difficult. The article analyzes the components of lag for four applications that use virtual environments: Monte, a simple application to calculate π using a parallel Monte Carlo algorithm; Automotive Disk Brake, which uses a parallel finite element code to allow users to design and analyze an automotive disk braking system under different conditions; BoilerMaker, which lets users design and analyze the placement of pollution control system injectors in boilers and incinerators; and Calvin (Collaborative Architectural Layout Via Immersive Navigation), which allows people at different sites to work collaboratively on the design and viewing of architectural spaces.
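The parallel Monte Carlo π calculation underlying the Monte application can be sketched as follows. This is a generic illustration of the technique, not the original code; the worker count, sample size, and process-pool approach are arbitrary choices for the example.

```python
import random
from multiprocessing import Pool

# Generic parallel Monte Carlo estimate of pi: sample points uniformly in
# the unit square and count the fraction inside the quarter circle of
# radius 1. Each worker uses its own seeded RNG so runs are reproducible.

def count_hits(args):
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(total_samples=400_000, workers=4):
    per_worker = total_samples // workers
    with Pool(workers) as pool:
        hits = pool.map(count_hits,
                        [(seed, per_worker) for seed in range(workers)])
    return 4.0 * sum(hits) / (per_worker * workers)

if __name__ == "__main__":
    print(estimate_pi())  # close to 3.14159 for large sample counts
```

In the lag-time framing above, the simulation component for an application like this is the per-step sampling time, while tracking, rendering, network, and synchronization costs come from the virtual environment side of the coupling.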
Article · Feb 1996 · IEEE Computational Science and Engineering
ABSTRACT: When performing communications mapping experiments for massively parallel processors, it is important to be able to visualize the mappings and resulting communications. In a molecular dynamics model, visualization of the atom-to-atom interaction and the processor mappings provides insight into the effectiveness of the communications algorithms. The basic quantities available for visualization in a model of this type are the number of molecules per unit volume, and the mass and velocity of each molecule. The computational information available for visualization is the atom-to-atom interaction within each time step, the atom-to-processor mapping, and the energy rescaling events. We use the CAVE (CAVE Automatic Virtual Environment) to provide interactive, immersive visualization experiences. 1 Introduction Molecular dynamics simulations are increasingly being used to perform simulations of materials and in the modeling of drug compounds. The validation of these simulations is done by comp...
ABSTRACT: Aurora is a prototype or-parallel implementation of the full Prolog language for shared-memory multiprocessors, developed as part of an informal research collaboration known as the "Gigalips Project". It currently runs on Sequent and Encore machines. It has been constructed by adapting Sicstus Prolog, a fast, portable, sequential Prolog system. The techniques for constructing a portable multiprocessor version follow those pioneered in a predecessor system, ANL-WAM. The SRI model was adopted as the means to extend the Sicstus Prolog engine for or-parallel operation. We describe the design and main implementation features of the current Aurora system, and present some experimental results. For a range of benchmarks, Aurora on a 20-processor Sequent Symmetry is 4 to 7 times faster than Quintus Prolog on a Sun 3/75. Good performance is also reported on some large-scale Prolog applications.
Article · Jan 1988 · New Generation Computing