A vision for a biomedical cloud

Institute for Genomics and Systems Biology, The University of Chicago, Chicago, IL 60637, USA.
Journal of Internal Medicine, 12/2011; 271(2): 122–130. DOI: 10.1111/j.1365-2796.2011.02491.x
Source: PubMed


We present a vision for a Biomedical Cloud that draws on progress in the fields of genomics, systems biology and biomedical data mining. The successful fusion of these areas will combine biomarkers, genetic variants and environmental variables to build predictive models that drastically increase the specificity and timeliness of diagnosis for a wide range of common diseases, whilst delivering accurate predictions about the efficacy of treatment options. However, the amount of data being generated by each of these fields is staggering, as is the task of managing and analysing it. Adequate computing infrastructure needs to be developed to assemble, manage and mine the enormous and rapidly growing corpus of 'omics' data along with clinical information. We have now arrived at an intersection point between genome technology, cloud computing and biological data mining. This intersection provides a launch pad for developing a globally applicable cloud computing platform capable of supporting a new paradigm of data-intensive, cloud-enabled predictive medicine.

Cited by
    • "The application of cloud computing for genomics, systems biology, biomedical data mining, health care and biodiversity research could be described as a new concept – biocloud. As a type of community cloud, biocloud could be filled with data relevant to biology, medicine and health care with the computing intensive operations, virtualization function, large number of machines and parallel computing (e.g., MapReduce, Hadoop) [2], [3]. Researches in next-generation sequencing (NGS), comparative genomics, and proteomics have already adopted biocloud to deal with data processing operation successfully [4]. "

    • "These technologies create an opportunity to explore more fully the genome and investigate novel hypotheses about contributors to complex diseases. Moreover, these technologies result in an explosion of data [Grossman and White, 2012]. The current bottleneck in research is no longer the generation of large-scale genetic data, but the availability of computational tools to effectively analyze the data [Green and Guyer, 2011] as well as the means to compare and contrast new tools. "
    ABSTRACT: Genetic simulation programs are used to model data under specified assumptions to facilitate the understanding and study of complex genetic systems. Standardized data sets generated using genetic simulation are essential for the development and application of novel analytical tools in genetic epidemiology studies. With continuing advances in high-throughput genomic technologies and generation and analysis of larger, more complex data sets, there is a need for updating current approaches in genetic simulation modeling. To provide a forum to address current and emerging challenges in this area, the National Cancer Institute (NCI) sponsored a workshop, entitled “Genetic Simulation Tools for Post-Genome Wide Association Studies of Complex Diseases” at the National Institutes of Health (NIH) in Bethesda, Maryland on March 11–12, 2014. The goals of the workshop were to (1) identify opportunities, challenges, and resource needs for the development and application of genetic simulation models; (2) improve the integration of tools for modeling and analysis of simulated data; and (3) foster collaborations to facilitate development and applications of genetic simulation. During the course of the meeting, the group identified challenges and opportunities for the science of simulation, software and methods development, and collaboration. This paper summarizes key discussions at the meeting, and highlights important challenges and opportunities to advance the field of genetic simulation.
    Genetic Epidemiology, 01/2015; 39(1). DOI: 10.1002/gepi.21870
    • "The third party is the knowledge agent who links together the knowledge user and the knowledge expert on demand. Great potential of cloud services in the area of biomedicine is identified by Grossman and White [7], who made a vision of biomedical cloud in the future. Since amount of data which hospitals and medical institutions are dealing with is growing rapidly, big data technologies will have indispensable role in data analysis. "
    ABSTRACT: Rapid growth and storage of biomedical data enabled many opportunities for predictive modeling and improvement of healthcare processes. On the other side analysis of such large amounts of data is a difficult and computationally intensive task for most existing data mining algorithms. This problem is addressed by proposing a cloud based system that integrates metalearning framework for ranking and selection of best predictive algorithms for data at hand and open source big data technologies for analysis of biomedical data.
    The Scientific World Journal, 04/2014; 2014(859279): 10 pages. DOI: 10.1155/2014/859279
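The selection step that such a metalearning system automates, ranking candidate predictive algorithms by measured performance on the data at hand, can be sketched under toy assumptions. The two hand-rolled "models" and the tiny dataset below are hypothetical, not part of the cited system:

```python
# Toy labelled data: (features, label) pairs split into train and test sets.
train = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]
test = [((0, 0), 0), ((1, 1), 1), ((1, 0), 1)]

def majority_model(train):
    """Baseline: always predict the most common training label."""
    labels = [y for _, y in train]
    guess = max(set(labels), key=labels.count)
    return lambda x: guess

def first_feature_model(train):
    """Trivial rule: predict the value of the first feature."""
    return lambda x: x[0]

def accuracy(model, data):
    """Fraction of examples the fitted model labels correctly."""
    return sum(model(x) == y for x, y in data) / len(data)

# Fit every candidate, then rank them by held-out accuracy.
candidates = {"majority": majority_model, "first_feature": first_feature_model}
fitted = {name: build(train) for name, build in candidates.items()}
ranking = sorted(fitted, key=lambda name: accuracy(fitted[name], test),
                 reverse=True)
print(ranking[0])  # → first_feature (it classifies this toy test set perfectly)
```

A real metalearning framework would replace the exhaustive evaluation with dataset meta-features and prior experience to predict which algorithms are worth running, but the ranking-by-estimated-performance idea is the same.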