Joseph Lau’s research while affiliated with Brown University and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (408)


Fig. 1 Annual and cumulative numbers of projects with publicly available data on the Systematic Review Data Repository (SRDR) website since the year after SRDR's inception in 2012 (i.e., 2013 to 2019). Note: Data for 2019 include only January 1, 2019 to November 12, 2019. The spike in the number of projects in 2019, even though it covers only approximately 10.5 months, reflects our recent outreach to the leads of all existing SRDR projects encouraging them to make their project data publicly available. Blue bars = annual number of projects. Green bars = cumulative number of projects
Numbers of research questions and included studies in projects with data available publicly on the Systematic Review Data Repository (SRDR) website as of November 12, 2019, sorted by whether or not the review was funded by the Agency for Healthcare Research and Quality (AHRQ)
Topics of the 152 projects with publicly available data on the Systematic Review Data Repository (SRDR) website as of November 12, 2019, categorized by health area
The Systematic Review Data Repository (SRDR): Descriptive characteristics of publicly available data and opportunities for research
  • Article
  • Full-text available

December 2019 · 578 Reads · 22 Citations · Systematic Reviews

Bryant T. Smith · [...] · Joseph Lau

Background: Conducting systematic reviews ("reviews") requires a great deal of effort and resources. Making data extracted during reviews available publicly could offer many benefits, including reducing unnecessary duplication of effort, standardizing data, supporting analyses to address secondary research questions, and facilitating methodologic research. Funded by the US Agency for Healthcare Research and Quality (AHRQ), the Systematic Review Data Repository (SRDR) is a free, web-based, open-source data management and archival platform for reviews. Our specific objectives in this paper are to describe (1) the current extent of usage of SRDR and (2) the characteristics of all projects with publicly available data on the SRDR website. Methods: We examined all projects with data made publicly available through SRDR as of November 12, 2019. We extracted information about the characteristics of these projects. Two investigators extracted and verified the data. Results: SRDR has had 2552 individual user accounts belonging to users from 80 countries. Since SRDR's launch in 2012, data have been made available publicly for 152 of the 735 projects in SRDR (21%), at a rate of 24.5 projects per year, on average. Most projects are in clinical fields (144/152 projects; 95%); most have evaluated interventions (therapeutic or preventive) (109/152; 72%). The most frequent health areas addressed are mental and behavioral disorders (31/152; 20%) and diseases of the eye and ocular adnexa (23/152; 15%). Two-thirds of the projects (104/152; 67%) were funded by AHRQ, and one-sixth (23/152; 15%) are Cochrane reviews. The 152 projects each address a median of 3 research questions (IQR 1-5) and include a median of 70 studies (IQR 20-130).
Conclusions: Until we arrive at a future in which the systematic review and broader research communities are comfortable with the accuracy of automated data extraction, re-use of data extracted by humans has the potential to help reduce redundancy and costs. The 152 projects with publicly available data through SRDR, and the more than 15,000 studies therein, are freely available to researchers and the general public who might be working on similar reviews or updates of reviews or who want access to the data for decision-making, meta-research, or other purposes.



Fig. 1. Participant flow during the DAA trial. The 26 pairs were formed by pairing each of the first 26 less experienced abstractors with the next available participant from the first 26 more experienced abstractors.
Baseline characteristics of all 52 participants in the DAA trial
Proportion of errors across all approaches, by type of error, type of data abstracted, and systematic review topic
Between-approach comparisons of error proportions by type of data abstracted
Between-approach comparisons of autorecorded time by type of data abstracted
A randomized trial provided new evidence on the accuracy and efficiency of traditional vs electronically annotated abstraction approaches in systematic reviews

July 2019 · 116 Reads · 34 Citations · Journal of Clinical Epidemiology

Objectives: The Data Abstraction Assistant (DAA) is a software application for linking items abstracted into a data collection form for a systematic review to their locations in a study report. We conducted a randomized cross-over trial that compared DAA-facilitated single data abstraction plus verification ("DAA verification"), single data abstraction plus verification ("regular verification"), and independent dual data abstraction plus adjudication ("independent abstraction"). Study design and setting: This study is an online randomized cross-over trial with 26 pairs of data abstractors. Each pair abstracted data from six articles, two per approach. Outcomes were the proportion of errors and the time taken. Results: The overall proportion of errors was 17% for DAA verification, 16% for regular verification, and 15% for independent abstraction. DAA verification was associated with higher odds of errors when compared with regular verification (adjusted odds ratio [OR] = 1.08; 95% confidence interval [CI]: 0.99 to 1.17) or independent abstraction (adjusted OR = 1.12; 95% CI: 1.03 to 1.22). For each article, DAA verification took 20 minutes (95% CI: 1 to 40) longer than regular verification, but 46 minutes (95% CI: 26 to 66) shorter than independent abstraction. Conclusion: Independent abstraction may only be necessary for complex data items. DAA provides an audit trail that is crucial for reproducible research.
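The trial's adjusted odds ratios come from a regression model fit to the trial data. As a rough illustration of the underlying quantity, the sketch below computes an unadjusted odds ratio with a Wald confidence interval from hypothetical error counts; the counts and the function are illustrative, not taken from the trial.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio of errors for approach 1 vs approach 2,
    with a Wald 95% confidence interval.
    a, b = error / non-error item counts under approach 1
    c, d = error / non-error item counts under approach 2
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts (not from the trial): 170 errors / 830 correct items
# under one approach vs 150 errors / 850 correct items under another.
or_, lo, hi = odds_ratio_ci(170, 830, 150, 850)
print(f"OR = {or_:.2f} (95% CI: {lo:.2f} to {hi:.2f})")
```

A confidence interval whose lower bound straddles 1, as in the trial's comparison with regular verification, indicates the difference in error odds is not clearly distinguishable from none.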



Response to 'Increasing value and reducing waste in data extraction for systematic reviews: Tracking data in data extraction forms'

December 2018 · 52 Reads · 1 Citation · Systematic Reviews

This is a response to a Letter. Data abstraction is a time-consuming and error-prone systematic review task. Shokraneh and Adams categorize available techniques for tracking data during data abstraction into three methods: simple annotation, descriptive addressing, and the Cartesian coordinate system. While we agree with this categorization, we disagree with the authors' statement that descriptive addressing is a PDF-independent method: any descriptive address must reference a specific version of the PDF file, not just any PDF of the report. Different versions of PDFs of the same report might place text and tables at different locations on the same page and/or on different pages. Consequently, it is our opinion that any kind of source location information should be accompanied by the source itself or linked through an intermediary service such as the Data Abstraction Assistant (DAA).
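The letter's central point, that a location reference is only meaningful relative to one specific rendering of the report, can be made concrete by pinning each location record to a hash of the exact PDF bytes it refers to. This is an illustrative data structure under that assumption, not DAA's actual implementation; all names are hypothetical.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceLocation:
    """A location reference valid only for one exact version of a PDF."""
    document_sha256: str  # hash of the exact PDF bytes the address refers to
    page: int
    description: str      # e.g. "Table 2, row 'mean age'"

def locate(pdf_bytes: bytes, page: int, description: str) -> SourceLocation:
    """Record a descriptive address pinned to this exact document version."""
    return SourceLocation(hashlib.sha256(pdf_bytes).hexdigest(), page, description)

def is_valid_for(loc: SourceLocation, pdf_bytes: bytes) -> bool:
    """Reject the address when the reader holds a different PDF version."""
    return hashlib.sha256(pdf_bytes).hexdigest() == loc.document_sha256
```

A reader with a different version of the same report (different pagination, different layout) would fail the hash check, which is exactly why a descriptive address alone is not PDF-independent.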


Features and functioning of Data Abstraction Assistant, a software application for data abstraction during systematic reviews

October 2018 · 62 Reads · 11 Citations · Research Synthesis Methods

Introduction: During systematic reviews, data abstraction is labor- and time-intensive and error-prone. Existing data abstraction systems do not track the specific locations and contexts of abstracted information. To address this limitation, we developed a software application, the Data Abstraction Assistant (DAA), and surveyed early users about their experience using it. Features of DAA: We designed DAA to encompass three essential features: (1) a platform for indicating the source of abstracted information; (2) compatibility with a variety of data abstraction systems; and (3) user-friendliness. How DAA functions: DAA (1) converts source documents from PDF to HTML format (to enable tracking of the source of abstracted information); (2) transmits the HTML to the data abstraction system; and (3) displays the HTML in an area adjacent to the data abstraction form in the data abstraction system. The data abstractor can mark locations on the HTML that DAA associates with items on the data abstraction form. Experiences of early users of DAA: When we surveyed 52 early users of DAA, 83% reported that using DAA was very or somewhat easy; 71% are very or somewhat likely to use DAA in the future; and 87% are very or somewhat likely to recommend that others use DAA in the future. Discussion: DAA, user-friendly software for linking abstracted data with their exact source, is likely to be a very useful tool in the toolbox of systematic reviewers. DAA facilitates verification of abstracted data and provides an audit trail that is crucial for reproducible research.
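DAA's core linking step, associating a marked span in the HTML rendering with an item on the abstraction form, can be sketched as wrapping the selected text in an element keyed by a form-item ID. This is a minimal sketch of the idea, not DAA's code; the function names and attribute are hypothetical.

```python
import re

def mark_location(html: str, start: int, end: int, item_id: str) -> str:
    """Wrap the character span [start, end) of the HTML in a <mark> element
    carrying the ID of the abstraction-form item it supports."""
    return (html[:start]
            + f'<mark data-item-id="{item_id}">' + html[start:end] + '</mark>'
            + html[end:])

def linked_items(html: str) -> list:
    """Recover which form items have marked source locations in the HTML."""
    return re.findall(r'data-item-id="([^"]+)"', html)

doc = "<p>Mean age was 63.4 years (SD 8.1).</p>"
doc = mark_location(doc, 3, 35, "mean_age")
print(linked_items(doc))  # ['mean_age']
```

Because the marks live in the HTML copy of the report, a verifier can jump straight from a form item to the highlighted passage it came from, which is the audit trail the abstract describes.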




Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference

January 2017 · 224 Reads · 6 Citations · Journal of Clinical Epidemiology

Objective: To compare statistical methods for meta-analysis of the sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). Study design and setting: We constructed a database of PubMed-indexed meta-analyses of test performance from which 2×2 tables for each included study could be extracted. We re-analyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. Results: We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from re-analysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared with models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated substantially greater uncertainty around those estimates. All bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with an increasing proportion of studies that were small or required a continuity correction. Conclusion: The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty, and their ability to incorporate external evidence may be useful for imprecisely estimated parameters.
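The contrast behind the conclusion (normal approximation vs binomial likelihood) can be sketched for a univariate pooled sensitivity. With made-up counts, the sketch below pools logit sensitivities by inverse variance (the normal approximation) and fits a logistic-normal random-effects model by maximum likelihood with Gauss-Hermite quadrature (the binomial likelihood). It is a simplified univariate illustration, not the authors' bivariate analysis, and the data are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln, logit, logsumexp, roots_hermite

# Hypothetical per-study true-positive counts x out of n diseased subjects.
x = np.array([45, 18, 30, 9, 60])
n = np.array([50, 25, 40, 10, 75])

# --- Normal approximation: inverse-variance pooling on the logit scale ---
y = logit(x / n)
v = 1 / x + 1 / (n - x)          # delta-method variance of logit(sens)
w = 1 / v
sens_normal = expit(np.sum(w * y) / np.sum(w))

# --- Binomial likelihood: logistic-normal random-effects model, with the ---
# --- marginal likelihood integrated by Gauss-Hermite quadrature          ---
nodes, weights = roots_hermite(40)

def neg_log_lik(params):
    mu, log_tau = params
    eta = mu + np.exp(log_tau) * np.sqrt(2) * nodes   # logits at quadrature nodes
    logp = -np.logaddexp(0, -eta)                     # log expit(eta)
    log1mp = -np.logaddexp(0, eta)                    # log (1 - expit(eta))
    # log Binomial(x | n, expit(eta)) for every (study, node) pair
    ll = (gammaln(n + 1)[:, None] - gammaln(x + 1)[:, None]
          - gammaln(n - x + 1)[:, None]
          + x[:, None] * logp[None, :] + (n - x)[:, None] * log1mp[None, :])
    # integrate out the random effect: sum_k (w_k / sqrt(pi)) * lik(study, node k)
    per_study = logsumexp(ll + np.log(weights / np.sqrt(np.pi))[None, :], axis=1)
    return -np.sum(per_study)

fit = minimize(neg_log_lik, x0=[1.0, -1.0], method="Nelder-Mead")
sens_binomial = expit(fit.x[0])

print(f"normal approx: {sens_normal:.3f}  binomial RE: {sens_binomial:.3f}")
```

The binomial model needs no continuity correction for studies with zero cells, which is one reason the abstract finds the two approaches diverge most when many included studies are small or require such a correction.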



Citations (81)


... We will use the AHRQ's Systematic Review Data Repository (SRDR+) software for data extraction. 49 We will attempt to contact the systematic review authors to clarify any missing outcome data. ...

Reference:

What is the association between the microbiome and cognition? An umbrella review protocol
The Systematic Review Data Repository (SRDR): Descriptive characteristics of publicly available data and opportunities for research

Systematic Reviews

... 9 Similar to several studies that sought to identify factors predicting the fundoplication outcomes, 12 our data also revealed that demographic factors including age, sex, and BMI and clinical parameters such as PPI treatment duration and history of GERD were not associated with surgical outcomes. 9, 13 Lundell et al reported that surgery was more effective in controlling overall symptoms in patients with reflux esophagitis or chronic GERD. 14 This study also demonstrated that LNF could predict favorable outcomes in patients with reflux esophagitis. ...

Predictors of Clinical Outcomes Following Fundoplication for Gastroesophageal Reflux Disease Remain Insufficiently Defined: A Systematic Review
  • Citing Article
  • March 2009

The American Journal of Gastroenterology

... Fig. 3). For several steps of the SR process, only a few studies that evaluated methods or tools were identified: deduplication: n = 6 [31,37,55,58,72,81], additional search: n = 2 [34,98], update search: n = 6 [37,51,62,78,110,112], full-text selection: n = 4 [86,114,115,126], data extraction: n = 11 [32,37,47,68,70,71,75,104,113,125,126] (one study evaluated both a method and a tool [75]); critical appraisal: n = 9, [27,28,37,50,63,69,76,102,116], and combination of abbreviated methods/tools: n = 6 [10,26,77,79,101,117] (see Fig. 3). No studies were found for some steps of the SR process, such as administration/project management, formulating the review question, searching for existing reviews, writing the protocol, full-text retrieval, synthesis/meta-analysis, certainty of evidence assessment, and report preparation. ...

A randomized trial provided new evidence on the accuracy and efficiency of traditional vs electronically annotated abstraction approaches in systematic reviews

Journal of Clinical Epidemiology

... Fig. 3). For several steps of the SR process, only a few studies that evaluated methods or tools were identified: deduplication: n = 6 [31,37,55,58,72,81], additional search: n = 2 [34,98], update search: n = 6 [37,51,62,78,110,112], full-text selection: n = 4 [86,114,115,126], data extraction: n = 11 [32,37,47,68,70,71,75,104,113,125,126] (one study evaluated both a method and a tool [75]); critical appraisal: n = 9, [27,28,37,50,63,69,76,102,116], and combination of abbreviated methods/tools: n = 6 [10,26,77,79,101,117] (see Fig. 3). No studies were found for some steps of the SR process, such as administration/project management, formulating the review question, searching for existing reviews, writing the protocol, full-text retrieval, synthesis/meta-analysis, certainty of evidence assessment, and report preparation. ...

Features and functioning of Data Abstraction Assistant, a software application for data abstraction during systematic reviews
  • Citing Article
  • October 2018

Research Synthesis Methods

... To this, we added a second set of 19 reviews known to us (the "Adam dataset"), 18 of which were designed for systematic reviews conducted in the Agency for Healthcare Research and Quality's (AHRQ) Evidence-based Practice Center (EPC) program [21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37] or in the Department of Veterans Affairs (VA). 38 These searches were all peer reviewed by another systematic review librarian using the Peer Review Electronic Search Strategies (PRESS) checklist. ...

Omega-3 Fatty Acids and Cardiovascular Disease: An Updated Systematic Review
  • Citing Article
  • August 2016

Evidence Report/technology Assessment

... SRDR can serve as a valuable platform for conducting methodologic research. Examples of such research that has already been conducted using SRDR are the Data Abstraction Assistant (DAA) Trial (a randomized controlled trial that compared different data extraction approaches [26][27][28]) and the current study and the six other methodologic projects described in this paper [29][30][31][32][33][34]. SRDR also can serve as a source of data for metaresearch (i.e., methodologic and other types of research on research [4]). ...

Response to 'Increasing value and reducing waste in data extraction for systematic reviews: Tracking data in data extraction forms'

Systematic Reviews

... Based on the American dietary guidelines (2015-2020), good dietary practice is defined as a diet that includes a variety of vegetables from all subgroups (dark green, red and orange, legumes, starchy, and other), fruits, grains (at least half of which are whole grains), fat-free or low-fat dairy (including milk, yogurt, cheese, and/or fortified soy beverages), a variety of protein foods (including meat, poultry, eggs, legumes, and nuts), and oils. Regular exercise is also included in good dietary practice [18]. All of these were asked about in the questionnaire, in the perceived benefits dimension, and were ranked accordingly. ...

Redesigning the process for establishing the Dietary Guidelines for Americans
  • Citing Book
  • September 2017

... Additionally, in 2017, the National Academies of Sciences, Engineering, and Medicine (NASEM) produced a Congressionally mandated report on the DGA process in which the Academies issued a four-part recommendation 'to enhance transparency, manage biases and COI to promote independent decision making' (13) . Specifically, the NASEM recommended that USDA-HHS should disclose how provisional DGAC nominees' biases and COI are identified and managed, by, among other things, 'creating and publicly posting a policy and form to explicitly disclose financial and nonfinancial biases and conflicts' (13) . ...

Optimizing the process for establishing the dietary guidelines for Americans: The selection process.
  • Citing Book
  • January 2017

... Similar to [6], the analyses here relied on separate random-effects models for pooling sensitivity and specificity values. More complex analyses have been recommended (e.g., [35]), but it is possible that the results here are similar to what may be observed using more complex analyses [36], [37]. The analyses here are also an improvement on [1], which was akin to a fixed-effects meta-analysis and had lower k values. ...

Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference
  • Citing Article
  • January 2017

Journal of Clinical Epidemiology