
iTools: A Framework for Classification, Categorization and Integration of Computational Biology Resources

Center for Computational Biology, University of California Los Angeles, Los Angeles, California, United States of America.
PLoS ONE. 02/2008; 3(5):e2265. DOI: 10.1371/journal.pone.0002265
Source: PubMed

ABSTRACT: The advancement of the computational biology field hinges on progress in three fundamental directions: the development of new computational algorithms, the availability of informatics resource management infrastructures, and the capability of tools to interoperate and synergize. There is an explosion in algorithms and tools for computational biology, which makes it difficult for biologists to find, compare and integrate such resources. We describe a new infrastructure, iTools, for managing the query, traversal and comparison of diverse computational biology resources. Specifically, iTools stores information about three types of resources: data, software tools and web-services. The iTools design, implementation and resource meta-data content reflect the broad research, computational, applied and scientific expertise available at the seven National Centers for Biomedical Computing. iTools provides a system for classification, categorization and integration of different computational biology resources across space and time scales, biomedical problems, computational infrastructures and mathematical foundations. A large number of resources are already iTools-accessible to the community, and this infrastructure is growing rapidly. iTools includes human and machine interfaces to its resource meta-data repository. Investigators or computer programs may use these interfaces to search, compare, expand, revise and mine meta-data descriptions of existing computational biology resources. We propose two ways to browse and display the iTools dynamic collection of resources: the first is based on an ontology of computational biology resources, and the second is derived from hyperbolic projections of manifolds or complex structures onto planar discs. iTools is an open source project, both in terms of its source code development and its meta-data content. iTools employs a decentralized, portable, scalable and lightweight framework for long-term resource management. We demonstrate several applications of iTools as a framework for integrated bioinformatics. Complete details about the iTools specifications, usage and interfaces are available at the iTools web page, http://iTools.ccb.ucla.edu.
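As a rough illustration of the kind of programmatic access to the meta-data repository described above, the Python sketch below queries a resource repository for matching tools. The endpoint URL, query parameters and response fields are assumptions made for illustration only and do not reproduce the documented iTools machine interface; see http://iTools.ccb.ucla.edu for the actual specifications.

    # Hypothetical sketch only: the endpoint, parameters and JSON fields below
    # are illustrative assumptions, not the documented iTools web-service API.
    import json
    import urllib.parse
    import urllib.request

    BASE_URL = "http://iTools.ccb.ucla.edu/api/resources"  # assumed endpoint

    def search_resources(keyword, resource_type="software"):
        """Query the meta-data repository for resources of a given type
        (data, software tools or web-services) matching a keyword."""
        query = urllib.parse.urlencode({"q": keyword, "type": resource_type})
        with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
            return json.loads(response.read().decode("utf-8"))

    # Example: list name, category and URL for tools related to sequence alignment.
    for record in search_resources("sequence alignment"):
        print(record.get("name"), record.get("category"), record.get("url"))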

  • ABSTRACT: Nanotechnology represents an area of particular promise and significant opportunity across multiple scientific disciplines. Ongoing nanotechnology research ranges from the characterization of nanoparticles and nanomaterials to the analysis and processing of experimental data seeking correlations between nanoparticles and their functionalities and side effects. Due to their special properties, nanoparticles are suitable for cellular-level diagnostics and therapy, offering numerous applications in medicine, e.g. the development of biomedical devices, tissue repair, drug delivery systems and biosensors. In nanomedicine, recent studies are producing large amounts of structural and property data, highlighting the role of computational approaches in information management. While in vitro and in vivo assays are expensive, the cost of computing is falling. Furthermore, improvements in the accuracy of computational methods (e.g. data mining, knowledge discovery, modeling and simulation) have enabled effective tools to automate the extraction, management and storage of these vast data volumes. Since this information is widely distributed, one major issue is how to locate and access data where it resides (which also poses data-sharing limitations). The novel discipline of nanoinformatics addresses the information challenges related to nanotechnology research. In this paper, we summarize the needs and challenges in the field and present an overview of existing initiatives and efforts.
    Computational Science & Discovery, 11/2013; 6:014011.
  • ABSTRACT: The volume, diversity and velocity of biomedical data are increasing exponentially, providing petabytes of new neuroimaging and genetics data every year. At the same time, tens of thousands of computational algorithms are developed and reported in the literature, along with thousands of software tools and services. Users demand intuitive, quick and platform-agnostic access to data, software tools, and infrastructure from millions of hardware devices. This explosion of information, scientific techniques, computational models, and technological advances leads to enormous challenges in data analysis, evidence-based biomedical inference and reproducibility of findings. The Pipeline workflow environment provides a crowd-based distributed solution for consistent management of these heterogeneous resources. The Pipeline allows multiple (local) clients and (remote) servers to connect, exchange protocols, control the execution, monitor the states of different tools or hardware, and share complete protocols as portable XML workflows. In this paper, we demonstrate several advanced computational neuroimaging and genetics case studies and end-to-end pipeline solutions. These are implemented as graphical workflow protocols in the context of analyzing imaging (sMRI, fMRI, DTI), phenotypic (demographic, clinical), and genetic (SNP) data.
    Brain Imaging and Behavior, 08/2013.
  • ABSTRACT: A well-known bioinformatics metaphor refers to the biological universe of resources (databases, computational tools, case studies, etc.) as the Resourceome. Consequently, a virtual desktop for biologists, where in-silico experiments can automatically integrate and access heterogeneous and distributed resources, must be equipped with tools suitable for managing a Resourceome. From this perspective, we exploit the Resourceome KMS and the Resourceome WMS to realize semantics-driven formulations of in-silico biological experiments as workflows, where activities are semantically linked to the involved resources (context, roles, objects, documents, etc.). Overall, by combining domain ontologies and workflow techniques, Resourceome acts as a semantic guide for domain experts, bioinformaticians and biologists, providing, respectively, a flexible organization of domain and operational knowledge, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
    2010 International Conference on Biosciences, 01/2010.
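The semantic linking of workflow activities to resources described in the last item above can be pictured with a small data-model sketch. The classes and fields below are hypothetical illustrations of the general idea (ontology-annotated resources attached to workflow activities); they are not the Resourceome KMS/WMS or iTools data models.

    # Illustrative sketch: ontology-annotated resources linked to workflow
    # activities. Class and field names are assumptions, not an actual API.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Resource:
        name: str
        resource_type: str          # e.g. "data", "software tool", "web-service"
        ontology_terms: List[str] = field(default_factory=list)

    @dataclass
    class Activity:
        name: str
        resources: List[Resource] = field(default_factory=list)

        def uses_term(self, term: str) -> bool:
            """True if any linked resource is annotated with the given ontology term."""
            return any(term in r.ontology_terms for r in self.resources)

    # Example: a workflow composer could use such annotations to check that an
    # alignment step is backed by a tool annotated with the expected ontology term.
    blast = Resource("BLAST", "software tool", ["sequence alignment", "genomics"])
    step = Activity("align sequences", [blast])
    print(step.uses_term("sequence alignment"))  # True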
