Lucila Ohno-Machado

University of California, San Diego, San Diego, California, United States

Publications (276) · 505.44 Total Impact Points

  •
    ABSTRACT: Objective: Delayed diagnosis of Kawasaki disease (KD) may lead to serious cardiac complications. We sought to create and test the performance of a natural language processing (NLP) tool, the KD-NLP, in the identification of emergency department (ED) patients for whom the diagnosis of KD should be considered. Methods: We developed an NLP tool that recognizes the KD diagnostic criteria based on standard clinical terms and medical word usage using 22 pediatric ED notes augmented by Unified Medical Language System (UMLS) vocabulary. With high suspicion for KD defined as fever and ≥3 KD clinical signs, KD-NLP was applied to 253 ED notes from children ultimately diagnosed with either KD or another febrile illness. We evaluated KD-NLP performance against ED notes manually reviewed by clinicians and compared the results to a simple keyword search. Results: KD-NLP identified high suspicion patients with a sensitivity of 93.6% and specificity of 77.5% as compared to notes manually reviewed by clinicians. The tool outperformed a simple keyword search (sensitivity 41.0%; specificity 76.3%). Conclusions: KD-NLP showed comparable performance to clinician manual chart review for identification of pediatric ED patients with a high suspicion for KD. This tool could be incorporated into the ED electronic health record system to alert providers to consider the diagnosis of KD. KD-NLP could serve as a model for decision support for other conditions in the ED.
    No preview · Article · Jan 2016 · Academic Emergency Medicine
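The evaluation above boils down to applying the stated rule (fever plus ≥3 KD clinical signs) and scoring it against clinician review. A minimal sketch of that scoring, with illustrative note structures and labels that are not from the KD-NLP implementation:

```python
# Hypothetical sketch: the "high suspicion for KD" rule (fever and >= 3
# clinical signs) scored against manual review via sensitivity/specificity.
# The note fields and labels below are invented for illustration.

def high_suspicion(note):
    """Apply the paper's stated rule: fever and >= 3 KD clinical signs."""
    return note["fever"] and sum(note["signs"].values()) >= 3

def sensitivity_specificity(predictions, labels):
    tp = sum(p and l for p, l in zip(predictions, labels))
    tn = sum(not p and not l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    return tp / (tp + fn), tn / (tn + fp)

notes = [
    {"fever": True,  "signs": {"rash": 1, "conjunctivitis": 1, "oral": 1, "extremity": 0}},
    {"fever": True,  "signs": {"rash": 1, "conjunctivitis": 0, "oral": 0, "extremity": 0}},
    {"fever": False, "signs": {"rash": 1, "conjunctivitis": 1, "oral": 1, "extremity": 1}},
]
labels = [True, False, False]  # clinician manual review (illustrative)
preds = [high_suspicion(n) for n in notes]
sens, spec = sensitivity_specificity(preds, labels)
```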
  • Source
    Yuan Wu · Xiaoqian Jiang · Shuang Wang · Wenchao Jiang · Pinghao Li · Lucila Ohno-Machado
    ABSTRACT: Multi-category response models are very important complements to binary logistic models in medical decision-making. Decomposing model construction by aggregating computation developed at different sites is necessary when data cannot be moved outside institutions due to privacy or other concerns. Such decomposition makes it possible to conduct grid computing to protect the privacy of individual observations. This paper proposes two grid multi-category response models for ordinal and multinomial logistic regressions. Grid computation to test model assumptions is also developed for these two types of models. In addition, we present grid methods for goodness-of-fit assessment and for classification performance evaluation. Simulation results show that the grid models produce the same results as those obtained from corresponding centralized models, demonstrating that it is possible to build models using multi-center data without losing accuracy or transmitting observation-level data. Two real data sets are used to evaluate the performance of our proposed grid models. The grid fitting method offers a practical solution for resolving privacy and other issues caused by pooling all data in a central site. The proposed method is applicable for various likelihood estimation problems, including other generalized linear models.
    Preview · Article · Dec 2015 · BMC Medical Informatics and Decision Making
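The grid-computing idea described above works because, for likelihood-based models, the pooled gradient and Hessian are just sums of per-site contributions. A toy sketch for the binary case (the paper's ordinal and multinomial models follow the same pattern), assuming Newton-Raphson fitting; this is illustrative, not the authors' implementation:

```python
import numpy as np

# Sketch of "grid" (horizontally partitioned) logistic regression: each site
# contributes only its local gradient and Hessian at the current beta; the
# server sums them and takes a Newton step, so patient-level rows never leave
# a site. Illustrative of the idea, not the published code.

def local_terms(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    grad = X.T @ (y - p)                        # local score vector
    hess = X.T @ (X * (p * (1 - p))[:, None])   # local information matrix
    return grad, hess

def grid_logistic(sites, dim, iters=25):
    beta = np.zeros(dim)
    for _ in range(iters):
        g = np.zeros(dim)
        H = np.zeros((dim, dim))
        for X, y in sites:                      # only aggregates cross sites
            gi, Hi = local_terms(X, y, beta)
            g += gi
            H += Hi
        beta += np.linalg.solve(H, g)           # Newton update on pooled sums
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])
true_beta = np.array([0.5, 1.0, -1.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)

# Split rows across two "institutions" and compare with the pooled fit.
beta_grid = grid_logistic([(X[:100], y[:100]), (X[100:], y[100:])], 3)
beta_pooled = grid_logistic([(X, y)], 3)
```

Because the summed statistics are mathematically identical to the pooled ones, the distributed fit matches the centralized fit, which is the paper's central claim.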
  • Source
    Jose E. Kroll · Jihoon Kim · Lucila Ohno-Machado · Sandro J. de Souza
    ABSTRACT: Motivation. Alternative splicing events (ASEs) are prevalent in the transcriptome of eukaryotic species and are known to influence many biological phenomena. The identification and quantification of these events are crucial for a better understanding of biological processes. Next-generation DNA sequencing technologies have allowed deep characterization of transcriptomes and made it possible to address these issues. ASE analysis, however, represents a challenging task, especially when many different samples need to be compared. Some popular tools for the analysis of ASEs are known to report thousands of events without annotations and/or graphical representations. Here we describe a new tool for the identification and visualization of ASEs that can be used by biologists without a solid bioinformatics background. Results. A software suite named Splicing Express was created to perform ASE analysis on transcriptome sequencing data derived from next-generation DNA sequencing platforms. Its major goal is to serve the needs of biomedical researchers who do not have bioinformatics skills. Splicing Express performs automatic annotation of transcriptome data (GTF files) using gene coordinates available from the UCSC genome browser and allows the analysis of data from all available species. The identification of ASEs is done by a known algorithm previously implemented in another tool named Splooce. As a final result, Splicing Express creates a set of HTML files composed of graphics and tables designed to describe the expression profile of ASEs among all analyzed samples. Using RNA-Seq data from the Illumina Human Body Map and the Rat Body Map, we show that Splicing Express is able to perform all tasks in a straightforward way, identifying well-known specific events. Availability and Implementation. Splicing Express is written in Perl and runs only on UNIX-like systems. More details can be found at:
    Preview · Article · Nov 2015 · PeerJ
  • Source
    Yong Li · Xiaoqian Jiang · Shuang Wang · Hongkai Xiong · Lucila Ohno-Machado
    ABSTRACT: Objective: To develop an accurate logistic regression (LR) algorithm to support federated data analysis of vertically partitioned distributed data sets. Material and methods: We propose a novel technique that solves the binary LR problem by dual optimization to obtain a global solution for vertically partitioned data. We evaluated this new method, VERTIcal Grid lOgistic regression (VERTIGO), in artificial and real-world medical classification problems in terms of the area under the receiver operating characteristic curve, calibration, and computational complexity. We assumed that the institutions could "align" patient records (through patient identifiers or hashed "privacy-protecting" identifiers), and also that they both had access to the values for the dependent variable in the LR model (eg, that if the model predicts death, both institutions would have the same information about death). Results: The solution derived by VERTIGO has the same estimated parameters as the solution derived by applying classical LR. The same is true for discrimination and calibration over both simulated and real data sets. In addition, the computational cost of VERTIGO is not prohibitive in practice. Discussion: There is a technical challenge in scaling up federated LR for vertically partitioned data. When the number of patients m is large, our algorithm has to invert a large Hessian matrix. This is an expensive operation of time complexity O(m(3)) that may require large amounts of memory for storage and exchange of information. The algorithm may also not work well when the number of observations in each class is highly imbalanced. Conclusion: The proposed VERTIGO algorithm can generate accurate global models to support federated data analysis of vertically partitioned data.
    Full-text · Article · Nov 2015 · Journal of the American Medical Informatics Association
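The key observation behind dual approaches to vertically partitioned data such as VERTIGO is that the n × n Gram (kernel) matrix of the combined data is simply the sum of each site's local Gram matrix, so a dual solver never needs another site's raw covariate columns. A minimal numeric illustration of that identity (not the published algorithm itself):

```python
import numpy as np

# Vertical partitioning: the same n patients, with different covariates held
# at different sites. The full-data Gram matrix equals the sum of the local
# Gram matrices, which is what lets a dual optimizer work on aggregates only.

rng = np.random.default_rng(1)
n = 6
X_site_a = rng.normal(size=(n, 3))   # site A holds 3 covariates per patient
X_site_b = rng.normal(size=(n, 2))   # site B holds 2 more, same patients

X_full = np.hstack([X_site_a, X_site_b])
gram_full = X_full @ X_full.T                              # needs pooled data
gram_sum = X_site_a @ X_site_a.T + X_site_b @ X_site_b.T   # needs only local Grams
```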
  • Source
    L. Ohno-Machado

    Preview · Article · Nov 2015 · Journal of the American Medical Informatics Association
  • Wei Wei · Dina Demner-Fushman · Shuang Wang · Xiaoqian Jiang · Lucila Ohno-Machado
    ABSTRACT: Automatically assigning MeSH (Medical Subject Headings) to articles is an active research topic. Recent work demonstrated the feasibility of improving the existing automated Medical Text Indexer (MTI) system, developed at the National Library of Medicine (NLM). Encouraged by this work, we propose a novel data-driven approach that uses semantic distances in the MeSH ontology for automated MeSH assignment. Specifically, we developed a graphical model to propagate belief through a citation network to provide robust MeSH main heading (MH) recommendation. Our preliminary results indicate that this approach can reach high Mean Average Precision (MAP) in some scenarios.
    No preview · Article · Aug 2015
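The belief-propagation idea above can be pictured as iteratively blending an article's own evidence for a MeSH heading with its citation neighbors' current scores. A toy stand-in for the paper's graphical model, with an invented damping weight and graph:

```python
# Hypothetical sketch: propagating a MeSH-heading score through a citation
# network. Each article's score mixes its local evidence with the average
# score of its neighbors; alpha and the toy graph are illustrative, not the
# paper's actual model or parameters.

def propagate(graph, evidence, alpha=0.5, iters=20):
    scores = dict(evidence)
    for _ in range(iters):
        new = {}
        for node, neighbors in graph.items():
            neighbor_avg = sum(scores[v] for v in neighbors) / len(neighbors)
            new[node] = alpha * evidence[node] + (1 - alpha) * neighbor_avg
        scores = new
    return scores

citations = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
evidence = {"a": 0.0, "b": 1.0, "c": 1.0}  # local evidence for one heading
scores = propagate(citations, evidence)
# Article "a" has no local evidence, but its citing/cited neighbors pull its
# score up, which is how the network recommends headings the local model misses.
```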
  • Chia-Lun Lu · Shuang Wang · Zhanglong Ji · Yuan Wu · Li Xiong · Xiaoqian Jiang · Lucila Ohno-Machado
    ABSTRACT: Objective: The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, it usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power. Methods and materials: The authors developed a web service for distributed Cox model learning (WebDISCO), which focuses on proof of concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally, and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. Results: The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. Limitation: The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but the results should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data.
    No preview · Article · Jul 2015 · Journal of the American Medical Informatics Association
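The exchange described above can be sketched at one parameter value: for the Cox partial likelihood, each site only needs to report, per global event time, its local risk-set sums of exp(βx) and x·exp(βx); the coordinator combines them to evaluate the pooled gradient. A toy one-covariate version under Breslow tie handling, illustrative rather than the WebDISCO implementation:

```python
import math

# Toy federated Cox gradient at a fixed beta: sites share only aggregate
# risk-set sums (S0, S1) per event time, never patient-level rows. Data and
# variable names are invented for illustration.

def local_sums(site, beta, event_times):
    """site: list of (follow-up time, covariate). Returns {t: (S0, S1)}."""
    out = {}
    for t in event_times:
        s0 = sum(math.exp(beta * x) for time, x in site if time >= t)
        s1 = sum(x * math.exp(beta * x) for time, x in site if time >= t)
        out[t] = (s0, s1)
    return out

def gradient(sites, events, beta):
    """events: (time, covariate) for patients with an observed event."""
    event_times = sorted({t for t, _ in events})
    combined = {t: (0.0, 0.0) for t in event_times}
    for site in sites:                       # only (S0, S1) pairs cross sites
        for t, (s0, s1) in local_sums(site, beta, event_times).items():
            combined[t] = (combined[t][0] + s0, combined[t][1] + s1)
    return sum(x - combined[t][1] / combined[t][0] for t, x in events)

site_a = [(1.0, 0.2), (3.0, -0.5)]           # (follow-up time, covariate)
site_b = [(2.0, 1.1), (4.0, 0.0)]
events = [(1.0, 0.2), (2.0, 1.1)]            # patients with observed events
g_distributed = gradient([site_a, site_b], events, beta=0.1)
g_pooled = gradient([site_a + site_b], events, beta=0.1)
```

Since the combined sums equal the pooled sums, the distributed gradient matches the centralized one, mirroring the paper's derivation.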
  •
    ABSTRACT: Centralized and federated models for sharing data in research networks currently exist. To build multivariate data analysis for centralized networks, transfer of patient-level data to a central computation resource is necessary. The authors implemented distributed multivariate models for federated networks in which patient-level data is kept at each site and data exchange policies are managed in a study-centric manner. The objective was to implement infrastructure that supports the functionality of some existing research networks (e.g., cohort discovery, workflow management, and estimation of multivariate analytic models on centralized data) while adding additional important new features, such as algorithms for distributed iterative multivariate models, a graphical interface for multivariate model specification, synchronous and asynchronous response to network queries, investigator-initiated studies, and study-based control of staff, protocols, and data sharing policies. Based on the requirements gathered from statisticians, administrators, and investigators from multiple institutions, the authors developed infrastructure and tools to support multisite comparative effectiveness studies using web services for multivariate statistical estimation in the SCANNER federated network. The authors implemented massively parallel (map-reduce) computation methods and a new policy management system to enable each study initiated by network participants to define the ways in which data may be processed, managed, queried, and shared. The authors illustrated the use of these systems among institutions with highly different policies and operating under different state laws. Federated research networks need not limit distributed query functionality to count queries, cohort discovery, or independently estimated analytic models. 
Multivariate analyses can be efficiently and securely conducted without patient-level data transport, allowing institutions with strict local data storage requirements to participate in sophisticated analyses based on federated research networks.
    No preview · Article · Jul 2015 · Journal of the American Medical Informatics Association
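The map-reduce computation pattern mentioned above can be sketched with the simplest federated statistic: each site "maps" its patient-level rows to sufficient statistics, and the coordinator "reduces" by summing, recovering a pooled mean and variance without moving patient-level data. A minimal, illustrative example:

```python
from functools import reduce

# Map-reduce flavor of the federated pattern: sites emit (count, sum, sum of
# squares) for some clinical value; the coordinator only ever sees these
# aggregates. Values below are invented for illustration.

def site_map(values):
    return (len(values), sum(values), sum(v * v for v in values))

def combine(a, b):
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2])

sites = [[120.0, 140.0], [100.0, 160.0, 130.0]]   # e.g. one lab value per patient
n, s, ss = reduce(combine, map(site_map, sites))
mean = s / n
variance = ss / n - mean ** 2                     # population variance from moments
```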
  • Katherine K Kim · Jill G Joseph · Lucila Ohno-Machado
    ABSTRACT: New models of healthcare delivery such as accountable care organizations and patient-centered medical homes seek to improve quality, access, and cost. They rely on a robust, secure technology infrastructure provided by health information exchanges (HIEs) and distributed research networks, and on the willingness of patients to share their data. There are few large, in-depth studies of US consumers’ views on privacy, security, and consent in electronic data sharing for healthcare and research together. Objective: This paper addresses this gap, reporting on a survey of California consumers’ views of data sharing for healthcare and research. Materials and Methods: The survey was a representative, random-digit-dial telephone survey of 800 Californians, conducted in Spanish and English. Results: There is a great deal of concern that HIEs will worsen privacy (40.3%) and security (42.5%). Consumers are in favor of electronic data sharing, but elements of transparency are important: individual control, who has access, and the purpose for use of data. Respondents were more likely to agree to share deidentified information for research than to share identified information for healthcare (76.2% vs 57.3%, p < .001). Discussion: While consumers show willingness to share health information electronically, they value individual control and privacy. Responsiveness to these needs, rather than mere reliance on the Health Insurance Portability and Accountability Act (HIPAA), may improve support of data networks. Conclusion: Responsiveness to the public’s concerns regarding their health information is a prerequisite for patient-centeredness. This is one of the first in-depth studies of attitudes about electronic data sharing that compares the same individuals’ attitudes toward healthcare and research.
    No preview · Article · Mar 2015 · Journal of the American Medical Informatics Association
  • Source
    ABSTRACT: About half of the known miRNA genes are located within protein-coding host genes and are thus subject to co-transcription. Accumulating data indicate that this coupling may be an intrinsic mechanism to directly regulate the host gene's expression, constituting a negative feedback loop. Inevitably, the cell requires a yet largely unknown repertoire of methods to regulate this control mechanism. We propose alternative polyadenylation (APA) as one possible mechanism by which negative feedback of intronic miRNAs on their host genes might be regulated. Using in-silico analyses, we found that host genes that contain seed-matching sites for their intronic miRNAs yield longer 3′UTRs with more polyadenylation sites. Additionally, the distribution of polyadenylation signals differed significantly between these host genes and host genes of miRNAs that do not contain potential miRNA binding sites. We then transferred these in-silico results to a biological example and investigated the relationship between ZFR and its intronic miRNA miR-579 in a U87 cell line model. We found that ZFR is targeted by its intronic miRNA miR-579 and that alternative polyadenylation allows differential targeting. We additionally used bioinformatics analyses and RNA-Seq to evaluate a potential cross-talk between intronic miRNAs and alternative polyadenylation. CPSF2, a gene previously associated with alternative polyadenylation signal recognition, might be linked to intronic miRNA negative feedback by altering polyadenylation signal utilization.
    Preview · Article · Mar 2015 · PLoS ONE
  •
    ABSTRACT: Background: Adoption of a common data model across health systems is a key infrastructure requirement to allow large-scale distributed comparative effectiveness analyses. There are a growing number of common data models (CDMs), such as the Mini-Sentinel and the Observational Medical Outcomes Partnership (OMOP) CDMs. Objective: In this case study, we describe the challenges and opportunities of a study-specific use of the OMOP CDM by two health systems, and describe three comparative effectiveness use cases developed from the CDM. Methods: The project transformed two health system databases (using crosswalks provided) into the OMOP CDM. Cohorts were developed from the transformed CDMs for three comparative effectiveness use case examples. Administrative/billing, demographic, order history, medication, and laboratory data were included in the CDM transformation and cohort development rules. Results: Record counts per person-month are presented for the eligible cohorts, highlighting differences between the civilian and federal datasets; e.g., the federal dataset had more outpatient visits (6.44 vs. 2.05 per person-month). The count of medications per person-month reflected the fact that one system's medications were extracted from orders while the other system had pharmacy fills and medication administration records. The federal system also had a higher prevalence of the conditions in all three use cases. Both systems required manual coding of some types of data to convert to the CDM. Conclusion: The data transformation to the CDM was time consuming, and the resources required were substantial, beyond the requirements for collecting native source data. The need to manually code subsets of data limited the conversion. However, once the native data were converted to the CDM, both systems were able to use the same queries to identify cohorts. Thus, the CDM minimized the effort to develop cohorts and analyze the results across the sites.
    No preview · Article · Jan 2015 · Applied Clinical Informatics
  • Source
    Xiaoqian Jiang · Yuan Wu · Keith Marsolo · Lucila Ohno-Machado
    ABSTRACT: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of a multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. We recently developed and published a web service for model construction and data analysis in a distributed environment. That paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. We followed a two-stage development approach, first implementing the backbone system and then incrementally improving the user experience through interactions with potential users during development. Our system went through various stages, such as proof of concept, algorithm validation, user interface development, and system testing. We used the Zoho Projects management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support cross-platform deployment. During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We identified solutions, as well as enabling factors, to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes.
    Preview · Article · Dec 2014
  • Source
    ABSTRACT: To answer the need for the rigorous protection of biomedical data, we organized the Critical Assessment of Data Privacy and Protection initiative as a community effort to evaluate privacy-preserving dissemination techniques for biomedical data. We focused on the challenge of sharing aggregate human genomic data (e.g., allele frequencies) in a way that preserves the privacy of the data donors, without undermining the utility of genome-wide association studies (GWAS) or impeding their dissemination. Specifically, we designed two problems for disseminating the raw data and the analysis outcome, respectively, based on publicly available data from HapMap and from the Personal Genome Project. A total of six teams participated in the challenges. The final results were presented at a workshop of the iDASH (integrating Data for Analysis, 'anonymization,' and SHaring) National Center for Biomedical Computing. We report the results of the challenge and our findings about the current genome privacy protection techniques.
    Full-text · Article · Dec 2014 · BMC Medical Informatics and Decision Making
  • Lucila Ohno-Machado

    No preview · Article · Nov 2014 · Journal of the American Medical Informatics Association
  • Source
    Yongan Zhao · Xiaofeng Wang · Xiaoqian Jiang · Lucila Ohno-Machado · Haixu Tang
    ABSTRACT: Objective: To propose a new approach to privacy-preserving data selection, which helps data users access human genomic datasets efficiently without undermining patients’ privacy. Methods: Our idea is to let each data owner publish a set of differentially private pilot data, on which a data user can test-run arbitrary association-test algorithms, including those not known to the data owner a priori. We developed a suite of new techniques, including a pilot-data generation approach that leverages the linkage disequilibrium in the human genome to preserve both the utility of the data and the privacy of the patients, and a utility evaluation method that helps the user assess the value of the real data from its pilot version with high confidence. Results: We evaluated our approach on real human genomic data using four popular association tests. Our study shows that the proposed approach can help data users make the right choices in most cases. Conclusions: Even though the pilot data cannot be directly used for scientific discovery, they provide a useful indication of which datasets are more likely to be useful to data users, who can therefore approach the appropriate data owners to gain access to the data.
    Preview · Article · Oct 2014 · Journal of the American Medical Informatics Association
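The simplest building block behind differentially private release of aggregate genomic statistics is perturbing a count with Laplace noise scaled to sensitivity/ε. The paper's pilot-data generator additionally exploits linkage disequilibrium; the sketch below shows only the basic noise mechanism, with illustrative numbers:

```python
import math
import random

# Laplace mechanism sketch for an allele count over n donors: adding or
# removing one donor changes the count by at most 1, so sensitivity = 1.
# Parameters and the seed are illustrative, not from the paper.

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via inverse-CDF from a uniform draw."""
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_allele_frequency(count, n, epsilon, rng):
    sensitivity = 1.0
    noisy = count + laplace_noise(sensitivity / epsilon, rng)
    return min(max(noisy / n, 0.0), 1.0)   # clamp to a valid frequency

rng = random.Random(42)
freq = private_allele_frequency(count=300, n=1000, epsilon=1.0, rng=rng)
# With epsilon = 1 the noise on the count is on the order of 1, so the
# released frequency stays close to the true 0.30 for a 1000-donor cohort.
```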
  • Source
    Petra Stepanowsky · Eric Levy · Jihoon Kim · Xiaoqian Jiang · Lucila Ohno-Machado
    ABSTRACT: MicroRNAs (miRNAs) are a class of short noncoding RNAs that regulate gene expression through base pairing with messenger RNAs. Given the interest in studying miRNA dysregulation in disease and the limited number of validated miRNA references, identification of novel miRNAs is a critical task. The performance of different models to predict novel miRNAs varies with the features chosen as predictors; however, no study has systematically compared published feature sets. We constructed a comprehensive feature set using the minimum free energy of the secondary structure of precursor miRNAs, a set of nucleotide-structure triplets, and additional extracted sequence and structure characteristics. We then compared the predictive value of our comprehensive feature set to those from three previously published studies, using logistic regression and random forest classifiers. We found that classifiers containing as few as seven highly predictive features are able to predict novel precursor miRNAs as well as classifiers that use larger feature sets. In a real dataset, our method correctly identified the holdout miRNAs relevant to renal cancer.
    Preview · Article · Oct 2014 · Cancer informatics
  • Source
    ABSTRACT: MicroRNAs (miRNAs) are a class of small (∼22 nucleotides) non-coding RNAs that post-transcriptionally regulate gene expression by interacting with target mRNAs. A majority of miRNAs is located within intronic or exonic regions of protein-coding genes (host genes), and increasing evidence suggests a functional relationship between these miRNAs and their host genes. Here, we introduce miRIAD, a web-service to facilitate the analysis of genomic and structural features of intragenic miRNAs and their host genes for five species (human, rhesus monkey, mouse, chicken and opossum). miRIAD contains the genomic classification of all miRNAs (inter- and intragenic), as well as classification of all protein-coding genes into host or non-host genes (depending on whether they contain an intragenic miRNA or not). We collected and processed public data from several sources to provide a clear visualization of relevant knowledge related to intragenic miRNAs, such as host gene function, genomic context, names of and references to intragenic miRNAs, miRNA binding sites, clusters of intragenic miRNAs, miRNA and host gene expression across different tissues and expression correlation for intragenic miRNAs and their host genes. Protein–protein interaction data are also presented for functional network analysis of host genes. In summary, miRIAD was designed to help the research community to explore, in a user-friendly environment, intragenic miRNAs, their host genes and functional annotations with minimal effort, facilitating hypothesis generation and in-silico validations. Database URL:
    Full-text · Article · Oct 2014 · Database The Journal of Biological Databases and Curation
  • M K Ross · W Wei · L Ohno-Machado
    ABSTRACT: Objectives: Implementation of Electronic Health Record (EHR) systems continues to expand. The massive number of patient encounters results in high amounts of stored data. Transforming clinical data into knowledge to improve patient care has been the goal of biomedical informatics professionals for many decades, and this work is now increasingly recognized outside our field. In reviewing the literature for the past three years, we focus on "big data" in the context of EHR systems and we report on some examples of how secondary use of data has been put into practice. Methods: We searched PubMed database for articles from January 1, 2011 to November 1, 2013. We initiated the search with keywords related to "big data" and EHR. We identified relevant articles and additional keywords from the retrieved articles were added. Based on the new keywords, more articles were retrieved and we manually narrowed down the set utilizing predefined inclusion and exclusion criteria. Results: Our final review includes articles categorized into the themes of data mining (pharmacovigilance, phenotyping, natural language processing), data application and integration (clinical decision support, personal monitoring, social media), and privacy and security. Conclusion: The increasing adoption of EHR systems worldwide makes it possible to capture large amounts of clinical data. There is an increasing number of articles addressing the theme of "big data", and the concepts associated with these articles vary. The next step is to transform healthcare big data into actionable knowledge.
    No preview · Article · Aug 2014 · Yearbook of medical informatics
  • Source
    David W Bates · Suchi Saria · Lucila Ohno-Machado · Anand Shah · Gabriel Escobar
    ABSTRACT: The US health care system is rapidly adopting electronic health records, which will dramatically increase the quantity of clinical data that are available electronically. Simultaneously, rapid progress has been made in clinical analytics-techniques for analyzing large quantities of data and gleaning new insights from that analysis-which is part of what is known as big data. As a result, there are unprecedented opportunities to use big data to reduce the costs of health care in the United States. We present six use cases-that is, key examples-where some of the clearest opportunities exist to reduce costs through the use of big data: high-cost patients, readmissions, triage, decompensation (when a patient's condition worsens), adverse events, and treatment optimization for diseases affecting multiple organ systems. We discuss the types of insights that are likely to emerge from clinical analytics, the types of data needed to obtain such insights, and the infrastructure-analytics, algorithms, registries, assessment scores, monitoring devices, and so forth-that organizations will need to perform the necessary analyses and to implement changes that will improve care while reducing costs. Our findings have policy implications for regulatory oversight, ways to address privacy concerns, and the support of research on analytics.
    Full-text · Article · Jul 2014 · Health Affairs
  • Source
    Haoran Li · Li Xiong · Lucila Ohno-Machado · Xiaoqian Jiang
    ABSTRACT: Data sharing is challenging but important for healthcare research. Methods for privacy-preserving data dissemination based on the rigorous differential privacy standard have been developed, but they do not consider the characteristics of biomedical data or make full use of the available information, which often results in too much noise in the final outputs. We hypothesized that this situation can be alleviated by leveraging a small portion of open-consented data to improve utility without sacrificing privacy. We developed a hybrid, differentially private support vector machine (SVM) model that uses public data and private data together. Our model leverages the RBF kernel and can handle nonlinearly separable cases. Experiments showed that this approach outperforms two baselines: (1) SVMs that only use public data, and (2) differentially private SVMs that are built from private data only. Our method demonstrated performance metrics very close to those of nonprivate SVMs trained on the private data.
    No preview · Article · Jun 2014

Publication Stats

6k Citations
505.44 Total Impact Points


  • 2010-2015
    • University of California, San Diego
      • Department of Medicine
      San Diego, California, United States
  • 2014
    • Duke University
      Durham, North Carolina, United States
  • 2013
    • Shanghai Jiao Tong University
      • Department of Electronic Engineering
      Shanghai, Shanghai Shi, China
  • 1993-2013
    • Stanford Medicine
      • Department of Pediatrics
      Stanford, CA, United States
  • 1996-2010
    • Harvard Medical School
      • Department of Radiology
      • Department of Genetics
      • Department of Medicine
      Boston, Massachusetts, United States
  • 2000-2009
    • Harvard University
      Cambridge, Massachusetts, United States
    • Consorcio Hospital General Universitario de Valencia
      • Departamento de Cardiología
      Valencia, Valencia, Spain
  • 1996-2009
    • Brigham and Women's Hospital
      • Decision Systems Group
      • Department of Radiology
      Boston, Massachusetts, United States
  • 2000-2008
    • Massachusetts Institute of Technology
      • Division of Health Sciences and Technology
      Cambridge, Massachusetts, United States
  • 2006
    • Universidade Federal de São Paulo
      São Paulo, São Paulo, Brazil
  • 2005
    • Teikyo University Hospital
      Tōkyō, Japan
  • 2004
    • University of Massachusetts Boston
      Boston, Massachusetts, United States
  • 2002
    • Federal University of Rio de Janeiro
      Rio de Janeiro, Rio de Janeiro, Brazil
    • University of Hertfordshire
      • School of Computer Science
      Hatfield, ENG, United Kingdom
  • 1999-2002
    • Stanford University
      Palo Alto, California, United States
  • 1998
    • Columbia University
      New York, New York, United States
  • 1997
    • University of Southern California
      • Department of Chemical Engineering and Materials Science
      Los Angeles, CA, United States