Jesper Tegnér

Karolinska University Hospital, Stockholm, Sweden

Publications (134) · 564.28 Total impact

  • Narsis A. Kiani · Hector Zenil · Jakub Olczak · Jesper Tegnér
    ABSTRACT: Network inference is advancing rapidly, and new methods are proposed on a regular basis. Understanding the advantages and limitations of different network inference methods is key to their effective application in different circumstances. The common structural properties shared by diverse networks naturally pose a challenge when it comes to devising accurate inference methods, but surprisingly, there is a paucity of comparison and evaluation methods. Historically, every new methodology has only been tested against "gold standard" (ground-truth) purpose-designed synthetic networks and validated real-world biological networks. In this paper we aim to assess the impact of taking topological and information-theoretic complexity aspects into consideration when evaluating the final accuracy of an inference procedure. Specifically, we compare the best inference methods, in both graph-theoretic and information-theoretic terms, for preserving topological properties and the original information content of synthetic and biological networks. New methods for performance comparison are introduced by borrowing ideas from gene set enrichment analysis and by applying concepts from algorithmic complexity. Experimental results show that no individual algorithm outperforms all others in all cases, and that the challenging and non-trivial nature of network inference is evident in the struggle of some of the algorithms to perform better than random guesswork. Therefore, special care should be taken to match the method to the specific purpose. Finally, we show that evaluations based on data generated from different underlying topologies have different signatures that can be used to better choose a network reconstruction method.
    No preview · Article · Jan 2016 · Seminars in Cell and Developmental Biology
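    Below is a minimal, illustrative sketch (not from the paper) of the kind of evaluation discussed in the entry above: comparing an inferred network against a gold-standard network on topological properties rather than edge overlap alone. It assumes networkx and scipy are available; the edge lists are hypothetical placeholders.

    ```python
    # Illustrative sketch only: compare an inferred network to a gold-standard
    # network on topological properties, not just edge-by-edge overlap.
    # The edge lists below are hypothetical placeholders.
    import networkx as nx
    from scipy.stats import ks_2samp

    def topology_report(true_edges, inferred_edges):
        G_true, G_inf = nx.Graph(true_edges), nx.Graph(inferred_edges)
        # Classic "gold standard" comparison: edge-level precision/recall
        tp = len(set(map(frozenset, inferred_edges)) & set(map(frozenset, true_edges)))
        precision = tp / max(len(inferred_edges), 1)
        recall = tp / max(len(true_edges), 1)
        # Topology-level comparison: degree distributions and clustering
        ks_stat, _ = ks_2samp([d for _, d in G_true.degree()],
                              [d for _, d in G_inf.degree()])
        return {"precision": precision, "recall": recall,
                "degree_KS": ks_stat,  # 0 means identical degree distributions
                "clustering_true": nx.average_clustering(G_true),
                "clustering_inferred": nx.average_clustering(G_inf)}

    true_net = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
    inferred_net = [("A", "B"), ("B", "C"), ("A", "C")]
    print(topology_report(true_net, inferred_net))
    ```

    A fuller evaluation along the lines of the abstract would add information-theoretic measures (e.g. estimates of algorithmic complexity) on top of such graph-theoretic summaries.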
  • Source
    Josep Roca · Isaac Cano · David Gomez-Cabrero · Jesper Tegnér
    ABSTRACT: Systems medicine, using and adapting methods and approaches as developed within systems biology, promises to be essential in ongoing efforts of realizing and implementing personalized medicine in clinical practice and research. Here we review and critically assess these opportunities and challenges using our work on COPD as a case study. We find that there are significant unresolved biomedical challenges in how to unravel complex multifactorial components in disease initiation and progression producing different clinical phenotypes. Yet, while such a systems understanding of COPD is necessary, there are other auxiliary challenges that need to be addressed in concert with a systems analysis of COPD. These include information and communication technology (ICT)-related issues such as data harmonization, systematic handling of knowledge, computational modeling, and importantly their translation and support of clinical practice. For example, clinical decision-support systems need a seamless integration with new models and knowledge as systems analysis of COPD continues to develop. Our experience with clinical implementation of systems medicine targeting COPD highlights the need for a change of management including design of appropriate business models and adoption of ICT providing and supporting organizational interoperability among professional teams across healthcare tiers, working around the patient. In conclusion, in our hands the scope and efforts of systems medicine need to concurrently consider these aspects of clinical implementation, which inherently drives the selection of the most relevant and urgent issues and methods that need further development in a systems analysis of disease.
    Full-text · Article · Dec 2015 · Methods in molecular biology (Clifton, N.J.)
  • Source
    ABSTRACT: Since the decoding of the Human Genome, techniques from bioinformatics, statistics, and machine learning have been instrumental in uncovering patterns in the increasing amounts and types of data produced by profiling technologies applied to clinical samples, animal models, and cellular systems. Yet, progress on unravelling the biological mechanisms causally driving diseases has been limited, in part due to the inherent complexity of biological systems. Whereas we have witnessed progress in the areas of cancer, cardiovascular and metabolic diseases, the area of neurodegenerative diseases has proved to be very challenging. This is in part because the aetiology of neurodegenerative diseases such as Alzheimer's disease or Parkinson's disease is unknown, rendering it very difficult to discern early causal events. Here we describe a panel of bioinformatics and modeling approaches that have recently been developed to identify candidate mechanisms of neurodegenerative diseases based on publicly available data and knowledge. We identify two complementary strategies: data mining techniques that use genetic data as a starting point and enrich it with other data types, or, alternatively, encoding prior knowledge about disease mechanisms in a model-based framework supporting reasoning and enrichment analysis. Our review illustrates the challenges entailed in integrating heterogeneous, multiscale and multimodal information in the area of neurology in general and neurodegeneration in particular. We conclude that progress would be accelerated by increased efforts to systematically collect multiple data types over time from each individual suffering from a neurodegenerative disease. The work presented here has been driven by the AETIONOMY project, funded under the Innovative Medicines Initiative (IMI), a public-private partnership of the European Federation of Pharmaceutical Industry Associations (EFPIA) and the European Commission (EC).
    Full-text · Article · Dec 2015 · International Journal of Molecular Sciences
  •
    ABSTRACT: Folate metabolism is central to cell proliferation and a target of commonly used cancer chemotherapeutics. In particular, the mitochondrial folate-coupled metabolism is thought to be important for proliferating cancer cells. The enzyme MTHFD2 in this pathway is highly expressed in human tumors and broadly required for survival of cancer cells. Although the enzymatic activity of the MTHFD2 protein is well understood, little is known about its larger role in cancer cell biology. We here report that MTHFD2 is co-expressed with two distinct gene sets, representing amino acid metabolism and cell proliferation, respectively. Consistent with a role for MTHFD2 in cell proliferation, MTHFD2 expression was repressed in cells rendered quiescent by deprivation of growth signals (serum) and rapidly re-induced by serum stimulation. Overexpression of MTHFD2 alone was sufficient to promote cell proliferation independent of its dehydrogenase activity, even during growth restriction. In addition to its known mitochondrial localization, we found MTHFD2 to have a nuclear localization and co-localize with DNA replication sites. These findings suggest a previously unknown role for MTHFD2 in cancer cell proliferation, adding to its known function in mitochondrial folate metabolism.
    No preview · Article · Oct 2015 · Scientific Reports
  • Source
    ABSTRACT: CD4(+)FOXP3(+) regulatory T (Treg) cells are essential for maintaining immunological self-tolerance. Treg cell development and function depend on the transcription factor FOXP3, which is present in several distinct isoforms due to alternative splicing. Despite the importance of FOXP3 in the proper maintenance of Treg cells, the regulation and functional consequences of FOXP3 isoform expression remain poorly understood. Here, we show that in human Treg cells IL-1β promotes excision of FOXP3 exon 7. FOXP3 is not only expressed by Treg cells but is also transiently expressed when naïve T cells differentiate into Th17 cells. Forced splicing of FOXP3 into FOXP3Δ2Δ7 strongly favored Th17 differentiation in vitro. We also found that patients with Crohn's disease express increased levels of FOXP3 transcripts lacking exon 7, which correlate with disease severity and IL-17 production. Our results demonstrate that alternative splicing of FOXP3 modulates T cell differentiation. These results highlight the importance of characterizing FOXP3 expression on an isoform basis and suggest that immune responses may be manipulated by modulating the expression of FOXP3 isoforms, which has broad implications for the treatment of autoimmune diseases.
    Full-text · Article · Oct 2015 · Scientific Reports
  • Hector Zenil · James A. R. Marshall · Jesper Tegnér
    ABSTRACT: We apply methods for estimating the algorithmic complexity of sequences to behavioural sequences from three landmark studies of animal behaviour, each of increasing sophistication, including foraging communication by ants, flight patterns of fruit flies, and tactical deception and competition strategies in rodents. In each case, we demonstrate that approximations of Logical Depth and Kolmogorov-Chaitin complexity capture and validate previously reported results, in contrast to other measures such as Shannon entropy, compression, or ad hoc measures. Our method is practically useful when dealing with short sequences, such as those often encountered in cognitive-behavioural research. Our analysis supports and reveals non-random behaviour (LD and K complexity) in flies even in the absence of external stimuli, and confirms the "stochastic" behaviour of transgenic rats when faced with a competitor that they cannot defeat by counter-prediction. The method constitutes a formal approach for testing hypotheses about the mechanisms underlying animal behaviour.
    No preview · Article · Sep 2015
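    As a rough illustration of the contrast drawn in the entry above (not the authors' actual estimators), the sketch below compares Shannon entropy with a lossless-compression ratio on two short, hypothetical behavioural strings; the paper itself relies on approximations of Kolmogorov-Chaitin complexity and Logical Depth, for which plain compression is only a crude stand-in, particularly on short sequences.

    ```python
    # Rough stand-in measures for short behavioural sequences (illustrative only).
    import math
    import zlib
    from collections import Counter

    def shannon_entropy(seq):
        """Bits per symbol from symbol frequencies; insensitive to ordering."""
        counts, n = Counter(seq), len(seq)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def compression_ratio(seq):
        """Compressed size / raw size; lower suggests more regular structure.
        Compressor overhead dominates on very short strings, which is one reason
        the paper prefers algorithmic-probability-based estimators."""
        raw = seq.encode()
        return len(zlib.compress(raw, 9)) / len(raw)

    regular = "LRLRLRLRLRLRLRLRLRLR"    # hypothetical alternating turn sequence
    irregular = "LRRLLRLRRRLLRLLRRLRL"  # same symbol counts, shuffled order
    for s in (regular, irregular):
        print(s, round(shannon_entropy(s), 3), round(compression_ratio(s), 3))
    ```

    Both strings have identical Shannon entropy because entropy ignores ordering, whereas order-sensitive measures can distinguish the regular from the irregular trace.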
  • Source
    Hector Zenil · Angelika Schmidt · Jesper Tegnér
    ABSTRACT: Information and computation have transformed the way we look at the world beyond statistical correlations, the way we can perform experiments, through simulations, and the way we can test these hypotheses. In previous work we introduced the concepts of "algorithmicity" and "programmability". Biology has already taken steps towards becoming a computer science aiming at reprogramming nature, after the realisation that nature herself has reprogrammed organisms by harnessing the power of natural selection. We further unpack these ideas related to computability, algorithmic information theory and software engineering, in the context of the extent to which biology can be (re)programmed, and of how we may go about doing so in a more systematic way with all the tools and concepts offered by theoretical computer science, in a translation exercise from computing to molecular biology and back. These concepts provide a means to a hierarchical organization, thereby blurring previously clear-cut lines between concepts like matter and life, or between tumour types that are otherwise taken as different and may not, however, have a different cause. This does not diminish the properties of life or make its components and functions less interesting. On the contrary, this approach makes for a more encompassing and integrated view of nature, one that subsumes observer and observed within the same system, and can generate new perspectives and tools with which to view complex diseases like cancer, approaching them afresh from a software-engineering viewpoint that casts evolution in the role of programmer, cells as computer programs, the immune system as a program debugging tool, and diseases as a battlefield where these forces deploy.
    Full-text · Article · Aug 2015
  • Source
    ABSTRACT: The high-throughput analysis of microRNAs (miRNAs) circulating within the blood of healthy and diseased individuals is an active area of biomarker research. Whereas quantitative real-time reverse transcription polymerase chain reaction (qPCR)-based methods are widely used, it is yet unresolved how the data should be normalized. Here, we show that a combination of different algorithms results in the identification of candidate reference miRNAs that can be exploited as normalizers, in both discovery and validation phases. Using the methodology considered here, we identify normalizers that are able to reduce nonbiological variation in the data and we present several case studies, to illustrate the relevance in the context of physiological or pathological scenarios. In conclusion, the discovery of stable reference miRNAs from high-throughput studies allows appropriate normalization of focused qPCR assays.
    Full-text · Article · Aug 2015 · Briefings in Bioinformatics
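    The sketch below illustrates the general idea behind selecting stable reference miRNAs for normalization, using a simple coefficient-of-variation ranking on a made-up Cq matrix; the paper combines several established algorithms rather than this single criterion.

    ```python
    # Illustrative only: rank candidate reference miRNAs by expression stability
    # across samples, here via the coefficient of variation of (made-up) Cq values.
    import numpy as np

    mirnas = ["miR-A", "miR-B", "miR-C"]          # hypothetical candidates
    cq = np.array([[24.1, 28.3, 22.0],            # rows = samples
                   [24.3, 26.9, 25.4],
                   [24.0, 29.8, 23.1],
                   [24.2, 27.5, 21.6]])

    cv = cq.std(axis=0, ddof=1) / cq.mean(axis=0)  # lower = more stable
    for name, score in sorted(zip(mirnas, cv), key=lambda kv: kv[1]):
        print(f"{name}: CV = {score:.4f}")

    # The most stable candidate(s) would then serve as normalizers for targets,
    # e.g. delta-Cq = Cq(target) - Cq(reference), before group comparisons.
    ```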
  •
    ABSTRACT: An increase in the number of older people experiencing disability and dependence is a critical aspect of the demographic change that will emerge within Europe due to the rise in life expectancy. In this scenario, prevention of these conditions is crucial for the well-being of older citizens and for the sustainability of our healthcare systems. Thus, the diagnosis and management of conditions like frailty, which identifies the people at the highest risk for developing those adverse outcomes, is of critical relevance. Currently, assessment of frailty relies primarily on measuring functional parameters, which have limited clinical utility. In this viewpoint article, we describe the FRAILOMIC Initiative, an international, large-scale, multi-endpoint, community- and clinic-based research study funded by the European Commission. The aim of the study is to develop validated measures, comprising both classic and ‘omics-based' laboratory biomarkers, which can predict the risk of frailty, improve the accuracy of its diagnosis in clinical practice and provide a prognostic forecast on the evolution from frailty to disability. The initiative includes eight established cohorts of older adults, encompassing >75,000 subjects, most of whom (∼70%) are aged >65 years. Data on function, nutritional status and exercise habits have been collected, and cardiovascular health has been evaluated at baseline. Subjects will be stratified as ‘non-frail' or ‘frail' using Fried's definition, all adverse outcomes of interest will be recorded and differentially expressed biomarkers associated with the risk of frailty will be identified. Genomic, proteomic and transcriptomic investigations will be carried out using array-based systems. As circulating microRNAs in plasma have been identified in the context of senescence, ageing and age-associated diseases, a miRNome-wide analysis will also be undertaken to identify a miRNA-based signature of frailty. Blood concentrations of secreted proteins known to be upregulated significantly in senescent endothelial cells and other hypothesis-driven biomarkers will be measured using ELISAs. The FRAILOMIC Initiative aims to issue a series of interim scientific reports as key results emerge. Ultimately, it is hoped that this study will contribute to the development of new clinical tools, which may help individuals to enjoy an old age that is healthier and free from disability.
    No preview · Article · Jul 2015 · Gerontology
  • Source
    ABSTRACT: Common variable immunodeficiency (CVID), the most frequent primary immunodeficiency characterized by loss of B-cell function, depends partly on genetic defects, and epigenetic changes are thought to contribute to its aetiology. Here we perform a high-throughput DNA methylation analysis of this disorder using a pair of CVID-discordant MZ twins and show predominant gain of DNA methylation in CVID B cells with respect to those from the healthy sibling in critical B lymphocyte genes, such as PIK3CD, BCL2L1, RPS6KB2, TCF3 and KCNN4. Individual analysis confirms hypermethylation of these genes. Analysis in naive, unswitched and switched memory B cells in a CVID patient cohort shows impaired ability to demethylate and upregulate these genes in transitioning from naive to memory cells in CVID. Our results not only indicate a role for epigenetic alterations in CVID but also identify relevant DNA methylation changes in B cells that could explain the clinical manifestations of CVID individuals.
    Preview · Article · Jun 2015 · Nature Communications
  • Source
    ABSTRACT: The biological functions of VEGF-B in cancer progression remain poorly understood. Here, we report that VEGF-B promotes cancer metastasis through the remodeling of tumor microvasculature. Knockdown of VEGF-B in tumors resulted in increased perivascular cell coverage and impaired pulmonary metastasis of human melanomas. In contrast, the gain of VEGF-B function in tumors led to pseudonormalized tumor vasculatures that were highly leaky and poorly perfused. Tumors expressing high levels of VEGF-B were more metastatic, although primary tumor growth was largely impaired. Similarly, VEGF-B in a VEGF-A-null tumor resulted in attenuated primary tumor growth but substantial pulmonary metastases. VEGF-B also led to highly metastatic phenotypes in Vegfr1 tk(-/-) mice and mice treated with anti-VEGF-A. These data indicate that VEGF-B promotes cancer metastasis through a VEGF-A-independent mechanism. High expression levels of VEGF-B in two large-cohort studies of human patients with lung squamous cell carcinoma and melanoma correlated with poor survival. Taken together, our findings demonstrate that VEGF-B is a vascular remodeling factor promoting cancer metastasis and that targeting VEGF-B may be an important therapeutic approach for cancer metastasis.
    Full-text · Article · May 2015 · Proceedings of the National Academy of Sciences
  • Source
    Hector Zenil · Narsis A. Kiani · Jesper Tegnér
    ABSTRACT: Network biology approaches have over the last decade proven to be very useful for the integration and generation of functional hypotheses by providing a context for specific molecular components and processes. Recent experimental and computational techniques yield networks of increased size and sophistication. The study of these complex cellular networks is emerging as a new challenge in biology. A number of dimensionality reduction techniques for graphs have been developed to cope with the complexity of networks. However, it is not yet clear to what extent information is lost or preserved when these techniques are applied to reduce the complexity of large networks. Here we therefore develop a rigorous framework, based on algorithmic information theory, to quantify the capability to preserve information when network motif analysis, graph spectra, and sparsification methods, respectively, are applied to over twenty different well-established networks. We find that the sparsification method is highly sensitive to the deletion of edges, leading to significant inconsistencies with respect to the loss of information, and that graph spectral methods were the most irregular measure, capturing only algebraic information in a condensed fashion and in the process largely losing the information content of the original networks. Our algorithmic information methodology therefore provides a rigorous framework enabling fundamental assessment and comparison between different methods for reducing the complexity of networks while preserving key structures in the networks, thereby facilitating the identification of such core processes.
    Full-text · Article · Apr 2015
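    Purely as an illustration of the question posed in the entry above (how much information survives graph reduction), the sketch below sparsifies a network by dropping low-betweenness edges and uses compressed adjacency-matrix size as a crude proxy for information content; the paper uses estimates of algorithmic (Kolmogorov) complexity, which ordinary compression only loosely approximates.

    ```python
    # Illustrative only: sparsify a graph and compare a compression-based proxy
    # of its information content before and after edge removal.
    import zlib
    import networkx as nx
    import numpy as np

    def compressed_size(G):
        A = nx.to_numpy_array(G, dtype=np.uint8)   # adjacency matrix as bytes
        return len(zlib.compress(A.tobytes(), 9))

    G = nx.barabasi_albert_graph(n=60, m=3, seed=0)
    baseline = compressed_size(G)

    # Drop the 20% of edges with the lowest edge betweenness
    eb = nx.edge_betweenness_centrality(G)
    to_drop = sorted(eb, key=eb.get)[: int(0.2 * G.number_of_edges())]
    H = G.copy()
    H.remove_edges_from(to_drop)

    print("edges kept:", H.number_of_edges(), "/", G.number_of_edges())
    print("compressed size ratio (sparsified / original):",
          round(compressed_size(H) / baseline, 3))
    ```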
  • Source
    Hector Zenil · Narsis A. Kiani · Jesper Tegnér
    ABSTRACT: We undertake an extensive numerical investigation of the graph spectra of thousands of regular graphs, a set of random Erdős-Rényi graphs, the two most popular types of complex networks, and an evolving genetic network, using novel conceptual and experimental tools. Our objective in so doing is to contribute to an understanding of the meaning of the eigenvalues of a graph relative to its topological and information-theoretic properties. We introduce a technique for identifying the most informative eigenvalues of evolving networks by comparing the behavior of graph spectra to their algorithmic complexity. We suggest that these techniques can be extended to further investigate the behavior of evolving biological networks. In the extended version of this paper we apply these techniques to seven tissue-specific regulatory networks as static examples, and to the network of a naïve pluripotent immune cell in the process of differentiating towards a Th17 cell as an evolving example, finding the most and least informative eigenvalues at every stage.
    Full-text · Article · Apr 2015 · Lecture Notes in Computer Science
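    A minimal sketch of the basic operation behind the study above: computing the adjacency spectrum of a graph and tracking how it changes as the network evolves by gaining edges. It assumes networkx and numpy; the paper's ranking of eigenvalues by algorithmic complexity is not reproduced here.

    ```python
    # Illustrative only: adjacency spectrum of a graph and its change as the
    # network "evolves" by adding edges.
    import networkx as nx
    import numpy as np

    def adjacency_spectrum(G):
        A = nx.to_numpy_array(G)
        return np.sort(np.linalg.eigvalsh(A))[::-1]   # symmetric matrix -> eigvalsh

    G = nx.erdos_renyi_graph(n=8, p=0.3, seed=1)
    print("initial spectrum:", np.round(adjacency_spectrum(G), 3))

    # Add a few edges one at a time and recompute the spectrum after each step
    for u, v in [(0, 7), (2, 5), (1, 6)]:
        G.add_edge(u, v)
        print(f"after adding ({u},{v}):", np.round(adjacency_spectrum(G), 3))
    ```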
  • Source
    ABSTRACT: The FRAILOMIC consortium (available at: http://www.frailomic.org/) was created and funded under the European FP7 framework in order to overcome these limitations. The consortium comprises seven small and medium-sized companies, six universities, two leading research centres, two hospital-based research groups and researchers affiliated with the World Health Organization (WHO). The primary aim is to create a European network for developing clinical tools and validating biomarkers that can assist in the managed care of frailty. More specifically, a large number of molecular and biochemical biomarkers will be measured in as many as 75,000 participants, in order to develop predictive, diagnostic and prognostic models in the older general population and people at a higher risk of frailty. The analytical and diagnostic performance of these biomarkers will be compared against the current quality specifications to define whether the current techniques are suitable for use in this specific population. A selected set of biomarkers will then be validated prospectively and assessed to identify the best-fit models, which will guide the development of panels of tests or risk estimation models to be used in the clinical setting. The homeostatic impairments of frailty will hence be assessed using a multidisciplinary approach within this network. Hopefully, the outcomes of this initiative will underline the important contribution of laboratory diagnostics for reducing the prevalence or severity of this increasingly frequent condition, thus resulting in large benefits for individuals, society and healthcare systems.
    Full-text · Article · Mar 2015 · Clinical Chemistry and Laboratory Medicine
  • Source
    Nicolas Gauvrit · Hector Zenil · Jesper Tegnér
    ABSTRACT: We survey concepts at the frontier of research connecting artificial, animal and human cognition to computation and information processing---from the Turing test to Searle's Chinese Room argument, from Integrated Information Theory to computational and algorithmic complexity. We start by arguing that passing the Turing test is a trivial computational problem and that its pragmatic difficulty sheds light on the computational nature of the human mind more than it does on the challenge of artificial intelligence. We then review our proposed algorithmic information-theoretic measures for quantifying and characterizing cognition in various forms. These are capable of accounting for known biases in human behavior, thus vindicating a computational algorithmic view of cognition as first suggested by Turing, but this time rooted in the concept of algorithmic probability, which in turn is based on computational universality while being independent of computational model, and which has the virtue of being predictive and testable as a model theory of cognitive behavior.
    Full-text · Chapter · Jan 2015
  •
    ABSTRACT: The epigenome is the collection of all epigenetic modifications occurring on a genome. Properly generating, analyzing, and understanding epigenomic data has become increasingly important in basic and applied research, because epigenomic modifications have been broadly associated with differentiation, development, and disease processes, thereby also constituting attractive drug targets. In this chapter, we introduce the reader to the different aspects of epigenomics (e.g., DNA methylation and histone marks, among others), by briefly reviewing the most relevant underlying biological concepts and by describing the different experimental protocols and the analysis of the associated data types. Furthermore, for each type of epigenetic modification we describe the most relevant analysis pipelines, data repositories, and other resources. We conclude that any epigenomic investigation needs to carefully align the selection of the experimental protocols with the subsequent bioinformatics analysis and vice versa, as the effect sizes can be small and thereby escape detection if an integrative design is not well considered.
    No preview · Chapter · Jan 2015
  • Source
    ABSTRACT: Regular endurance exercise training induces beneficial functional and health effects in human skeletal muscle. The putative contribution of the epigenome, as a mediator between genes and environment, to the training response has not been clarified. Here we investigated the contribution of DNA methylation and associated transcriptomic changes in a well-controlled human intervention study. Training effects were mirrored by significant alterations in DNA methylation and gene expression in regions with a homogeneous muscle energetics and remodeling ontology. Moreover, a signature of DNA methylation and gene expression separated the samples based on training and gender. Differential DNA methylation was predominantly observed in enhancers, gene bodies and intergenic regions and less in CpG islands or promoters. We identified transcriptional regulator binding motifs of MRF, MEF2 and ETS proteins in the proximity of the changing sites. A transcriptional network analysis revealed modules harboring distinct ontologies and, interestingly, the overall direction of the changes of methylation within each module was inversely correlated to expression changes. In conclusion, we show that highly consistent and associated modifications in methylation and expression, concordant with observed health-enhancing phenotypic adaptations, are induced by a physiological stimulus.
    Full-text · Article · Dec 2014 · Epigenetics: official journal of the DNA Methylation Society
  • Source
    ABSTRACT: Chronic Obstructive Pulmonary Disease (COPD) is a major challenge for healthcare. Heterogeneities in clinical manifestations and in disease progression are relevant traits in COPD with impact on patient management and prognosis. It is hypothesized that COPD heterogeneity results from the interplay of mechanisms governing three conceptually different phenomena: 1) pulmonary disease, 2) systemic effects of COPD and 3) co-morbidity clustering. The objectives were to assess the potential of systems medicine to better understand non-pulmonary determinants of COPD heterogeneity, and to transfer the acquired knowledge to healthcare, enhancing subject-specific health risk assessment and stratification to improve the management of chronic patients. Underlying mechanisms of skeletal muscle dysfunction and of co-morbidity clustering in COPD patients were explored with strategies combining deterministic modelling and network medicine analyses using the Biobridge dataset. An independent data-driven analysis of co-morbidity clustering, examining associated genes and pathways, was performed (ICD9-CM data from Medicare, 13 million people). A targeted network analysis of the two studies (skeletal muscle dysfunction and co-morbidity clustering) explored shared pathways between them. The main findings were: (1) evidence of abnormal regulation of pivotal skeletal muscle biological pathways and increased risk for co-morbidity clustering in COPD; (2) shared abnormal pathway regulation between skeletal muscle dysfunction and co-morbidity clustering; and (3) the technological achievements of the project: (i) the COPD Knowledge Base; (ii) novel modelling approaches; (iii) a Simulation Environment; and (iv) three layers of Clinical Decision Support Systems. The project demonstrated the high potential of a systems medicine approach to address COPD heterogeneity. Limiting factors for the project development were identified; they were relevant for shaping strategies fostering 4P Medicine for chronic patients. The concept of a Digital Health Framework and the proposed roadmap for its deployment constituted relevant project outcomes.
    Full-text · Article · Nov 2014 · Journal of Translational Medicine
  • Source
    ABSTRACT: Chronic Obstructive Pulmonary Disease (COPD) patients are characterized by heterogeneous clinical manifestations and patterns of disease progression. Two major factors that can be used to identify COPD subtypes are muscle dysfunction/wasting and co-morbidity patterns. We hypothesized that COPD heterogeneity is in part the result of complex interactions between several genes and pathways. We explored the possibility of using a Systems Medicine approach to identify such pathways, as well as to generate predictive computational models that may be used in clinical practice. Our overarching goal is to generate clinically applicable predictive models that characterize COPD heterogeneity through a Systems Medicine approach. To this end we have developed a general framework, consisting of three steps/objectives: (1) feature identification, (2) model generation and statistical validation, and (3) application and validation of the predictive models in the clinical scenario. We used muscle dysfunction and co-morbidity as test cases for this framework. In the study of muscle wasting we identified relevant features (genes) by a network analysis and generated predictive models that integrate mechanistic and probabilistic models. This allowed us to characterize muscle wasting as a general de-regulation of pathway interactions. In the co-morbidity analysis we identified relevant features (genes/pathways) by the integration of gene-disease and disease-disease associations. We further present a detailed characterization of co-morbidities in COPD patients that was implemented into a predictive model. In both use cases we were able to achieve predictive modeling, but we also identified several key challenges, the most pressing being the validation and implementation into actual clinical practice. The results confirm the potential of the Systems Medicine approach to study complex diseases and generate clinically relevant predictive models. Our study also highlights important obstacles and bottlenecks for such approaches (e.g. data availability and normalization of frameworks, among others) and suggests specific proposals to overcome them.
    Full-text · Article · Nov 2014 · Journal of Translational Medicine
  • Source
    ABSTRACT: This article describes a Digital Health Framework (DHF), benefitting from the lessons learnt during the three-year life span of the FP7 Synergy-COPD project. The DHF aims to embrace the emerging requirements - data and tools - of applying systems medicine in healthcare with a three-tier strategy articulating formal healthcare, informal care and biomedical research. Accordingly, it has been constructed based on three key building blocks, namely novel integrated care services with the support of information and communication technologies, a personal health folder (PHF) and a biomedical research environment (DHF-research). Details on the functional requirements and necessary components of the DHF-research are extensively presented. Finally, the specifics of the building-blocks strategy for deployment of the DHF, as well as the steps toward adoption, are analyzed. The proposed architectural solutions and implementation steps constitute a pivotal strategy to foster and enable 4P medicine (Predictive, Preventive, Personalized and Participatory) in practice and should provide a head start to any community and institution currently considering implementing a biomedical research platform.
    Full-text · Article · Nov 2014 · Journal of Translational Medicine

Publication Stats

7k Citations
564.28 Total Impact Points

Institutions

  • 2009-2015
    • Karolinska University Hospital
      • Center for Molecular Medicine (CMM)
      Stockholm, Stockholm, Sweden
  • 1994-2015
    • Karolinska Institutet
      • Center for Molecular Medicine - CMM
      • Department of Clinical Neuroscience
      • Department of Neuroscience
      Solna, Stockholm, Sweden
  • 2003-2014
    • Stockholm University
      Stockholm, Stockholm, Sweden
  • 2012
    • University College London
      London, England, United Kingdom
  • 2005-2010
    • Linköping University
      • Department of Physics, Chemistry and Biology (IFM)
      • Institute of Technology
      Linköping, Östergötland, Sweden
    • Stockholm Spine Center
      Stockholm, Stockholm, Sweden
  • 1998-2007
    • KTH Royal Institute of Technology
      • School of Computer Science and Communication (CSC)
      • Division of Numerical Analysis
      Stockholm, Stockholm, Sweden
  • 2000-2004
    • Brandeis University
      • Volen Center for Complex Systems
      Waltham, MA, United States
  • 2002-2003
    • Boston University
      • Department of Biomedical Engineering
      Boston, Massachusetts, United States