BMC Medical Research Methodology

Published by Springer Nature

Online ISSN: 1471-2288

Articles


Table 6 Example of epistemological approaches that may be used in case study research
Table 8 Potential pitfalls and mitigating actions when undertaking case study research
The Case Study Approach
  • Literature Review

June 2011 · 31,845 Reads

The case study approach allows in-depth, multi-faceted explorations of complex issues in their real-life settings. The value of the case study approach is well recognised in the fields of business, law and policy, but somewhat less so in health services research. Based on our experiences of conducting several health-related case studies, we reflect on the different types of case study design, the specific research questions this approach can help answer, the data sources that tend to be used, and the particular advantages and disadvantages of employing this methodological approach. The paper concludes with key pointers to aid those designing and appraising proposals for conducting case study research, and a checklist to help readers assess the quality of case study reports.

Coding process during an admission to a hospital in Denmark.
Thygesen SK, Christiansen CF, Christensen S, Lash TL, Sorensen HT. The predictive value of ICD-10 diagnostic coding used to assess Charlson comorbidity index conditions in the population-based Danish National Registry of Patients. BMC Med Res Methodol 11: 83

May 2011 · 276 Reads

The Charlson comorbidity index is often used to control for confounding in research based on medical databases. There are few studies of the accuracy of the codes obtained from these databases. We examined the positive predictive value (PPV) of the ICD-10 diagnostic coding in the Danish National Registry of Patients (NRP) for the 19 Charlson conditions. Among all hospitalizations in Northern Denmark between 1 January 1998 and 31 December 2007 with a first-listed diagnosis of a Charlson condition in the NRP, we selected 50 hospital contacts for each condition. We reviewed discharge summaries and medical records to verify the NRP diagnoses, and computed the PPV as the proportion of confirmed diagnoses. A total of 950 records were reviewed. The overall PPV for the 19 Charlson conditions was 98.0% (95% CI; 96.9, 98.8). The PPVs ranged from 82.0% (95% CI; 68.6%, 91.4%) for diabetes with diabetic complications to 100% (one-sided 97.5% CI; 92.9%, 100%) for congestive heart failure, peripheral vascular disease, chronic pulmonary disease, mild and severe liver disease, hemiplegia, renal disease, leukaemia, lymphoma, metastatic tumour, and AIDS. The PPV of NRP coding of the Charlson conditions was consistently high.
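The headline PPV is a simple binomial proportion. As a minimal sketch, it can be computed with a Wilson score interval; the abstract does not state which interval method the authors used, so the interval choice here is illustrative:

```python
import math

def ppv_wilson(confirmed, reviewed, z=1.96):
    """PPV (confirmed / reviewed) with a Wilson score confidence interval."""
    p = confirmed / reviewed
    denom = 1 + z**2 / reviewed
    centre = (p + z**2 / (2 * reviewed)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / reviewed
                                   + z**2 / (4 * reviewed**2))
    return p, centre - half, centre + half

# Overall figures from the abstract: 98.0% of 950 reviewed records confirmed,
# i.e. 931 confirmed diagnoses.
p, lo, hi = ppv_wilson(931, 950)
print(round(p, 3), round(lo, 3), round(hi, 3))
```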

Mean estimates and confidence limits for adjusted hazard ratio methods from Scenarios 2, 6, 10 and 14. Note: Mean upper confidence limits truncated at β = 4. Vertical lines show true treatment effect (β = 0.7).
Mean estimates and confidence limits for AFT methods from Scenarios 2, 6, 10 and 14. Note: Mean upper confidence limits truncated at eψ = 5. Vertical lines show true treatment effect (eψ = 2.04).
Scatter plot matrix of hazard ratio method point estimates from Scenario 14.
Scatter plot matrix of AFT method point estimates from Scenario 14.
Morden JP, Lambert PC, Latimer N, Abrams KR, Wailoo AJ. Assessing methods for dealing with treatment switching in randomised controlled trials: a simulation study. BMC Med Res Methodol 11: 4

January 2011 · 312 Reads

We investigate methods used to analyse the results of clinical trials with survival outcomes in which some patients switch from their allocated treatment to another trial treatment. These include simple methods that are commonly used in the medical literature but may be subject to selection bias if the patients who switch are not typical of the population as a whole. Methods that attempt to adjust the estimated treatment effect, either through adjustment to the hazard ratio or via accelerated failure time (AFT) models, were also considered. A simulation study was conducted to assess the performance of each method in a number of different scenarios. Sixteen scenarios were identified, differing in the proportion of patients switching, the underlying prognosis of switchers and the size of the true treatment effect. 1000 datasets were simulated for each scenario and all methods applied. Selection bias was observed in the simple methods when the difference in survival between switchers and non-switchers was large. A number of methods, particularly the AFT method of Branson and Whitehead, were found to give less biased estimates of the true treatment effect in these situations. Simple methods are often not appropriate for dealing with treatment switching. Alternative approaches, such as the Branson and Whitehead method to adjust for switching, should be considered.
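The selection bias affecting the simple methods can be sketched with a small simulation: if poor-prognosis control patients preferentially switch treatment, then excluding switchers (a simple per-protocol-style analysis) inflates apparent control-arm survival. All parameters below are illustrative, not the paper's scenarios:

```python
import random

random.seed(1)

N = 10_000
control_times = []        # all control patients (intention-to-treat view)
per_protocol_times = []   # control patients after excluding switchers

for _ in range(N):
    poor_prognosis = random.random() < 0.5
    rate = 2.0 if poor_prognosis else 1.0     # poor prognosis -> shorter survival
    t = random.expovariate(rate)
    control_times.append(t)
    # Poor-prognosis patients are much more likely to switch treatment
    switched = poor_prognosis and random.random() < 0.8
    if not switched:
        per_protocol_times.append(t)

mean_all = sum(control_times) / len(control_times)
mean_pp = sum(per_protocol_times) / len(per_protocol_times)
# Dropping switchers removes mostly poor-prognosis patients, so the
# per-protocol control arm looks healthier than it really is.
print(mean_all < mean_pp)
```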

Using the framework method for the analysis of qualitative data in multi-disciplinary health research. <http://www.biomedcentral.com/1471-2288/13/117> (accessed 5 August 2015)

September 2013 · 5,935 Reads

The Framework Method is becoming an increasingly popular approach to the management and analysis of qualitative data in health research. However, there is confusion about its potential application and limitations. The article discusses when it is appropriate to adopt the Framework Method and explains the procedure for using it in multi-disciplinary health research teams, or those that involve clinicians, patients and lay people. The stages of the method are illustrated using examples from a published study. Used effectively, with the leadership of an experienced qualitative researcher, the Framework Method is a systematic and flexible approach to analysing qualitative data and is appropriate for use in research teams even where not all members have previous experience of conducting qualitative research.

Figure 1: Meta-Analysis of random data
Table 1: The best formula for estimation, by distribution.
Figure 2: Actual pooled mean difference and estimated pooled mean difference
Figure 3: An example: meta-analysis with all eligible trials included
Hozo SP, Djulbegovic B, Hozo I. Estimating the mean and variance from the median, range, and the size of a sample. BMC Med Res Methodol 5: 13

April 2005 · 5,216 Reads

Researchers performing meta-analyses of continuous outcomes from clinical trials usually need the mean value and the variance (or standard deviation) in order to pool data. However, sometimes the published reports of clinical trials only report the median, range and the size of the trial. In this article we use simple and elementary inequalities and approximations in order to estimate the mean and the variance for such trials. Our estimation is distribution-free, i.e., it makes no assumption about the distribution of the underlying data. We found two simple formulas that estimate the mean using the values of the median (m), the low and high ends of the range (a and b, respectively), and n (the sample size). Using simulations, we show that the median can be used to estimate the mean when the sample size is larger than 25. For smaller samples our new formula, devised in this paper, should be used. We also estimated the variance of an unknown sample using the median, the low and high ends of the range, and the sample size. Our estimate performs best in our simulations for very small samples (n ≤ 15). For moderately sized samples (15 < n ≤ 70), our simulations show that the formula range/4 is the best estimator for the standard deviation (variance). For large samples (n > 70), the formula range/6 gives the best estimator for the standard deviation (variance). We also include an illustrative example of the potential value of our method using reports from the Cochrane review on the role of erythropoietin in anemia due to malignancy. Using these formulas, we hope to help meta-analysts use clinical trials in their analyses even when not all of the information is available and/or reported.
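A sketch of the resulting estimators: the n-based rules (25, 70, range/4, range/6) are as stated in the abstract, while the closed-form small-sample expressions are the Hozo et al. formulas from the full paper:

```python
import math

def estimate_mean(a, m, b, n):
    """Estimated sample mean from median m and range endpoints a, b."""
    if n > 25:
        return m                      # the median suffices for larger samples
    return (a + 2 * m + b) / 4        # small-sample formula from the paper

def estimate_sd(a, m, b, n):
    """Estimated sample standard deviation from median and range."""
    if n <= 15:
        # small-sample variance formula from the paper
        var = ((a - 2 * m + b) ** 2 / 4 + (b - a) ** 2) / 12
        return math.sqrt(var)
    if n <= 70:
        return (b - a) / 4            # range/4 for moderately sized samples
    return (b - a) / 6                # range/6 for large samples

print(estimate_mean(1, 5, 13, 20))   # -> 6.0
print(estimate_sd(1, 5, 13, 100))    # -> 2.0
```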

Royston P, Altman DG. External validation of a Cox prognostic model: principles and methods. BMC Med Res Methodol 13: 33

March 2013 · 2,006 Reads

Background: A prognostic model should not enter clinical practice unless it has been demonstrated that it performs a useful role. External validation denotes evaluation of model performance in a sample independent of that used to develop the model. Unlike for logistic regression models, external validation of Cox models is sparsely treated in the literature. Successful validation of a model means achieving satisfactory discrimination and calibration (prediction accuracy) in the validation sample. Validating Cox models is not straightforward because event probabilities are estimated relative to an unspecified baseline function.
Methods: We describe statistical approaches to external validation of a published Cox model according to the level of published information, specifically (1) the prognostic index only, (2) the prognostic index together with Kaplan-Meier curves for risk groups, and (3) the first two plus the baseline survival curve (the estimated survival function at the mean prognostic index across the sample). The most challenging task, requiring level 3 information, is assessing calibration, for which we suggest a method of approximating the baseline survival function.
Results: We apply the methods to two comparable datasets in primary breast cancer, treating one as derivation and the other as validation sample. Results are presented for discrimination and calibration. We demonstrate plots of survival probabilities that can assist model evaluation.
Conclusions: Our validation methods are applicable to a wide range of prognostic studies and provide researchers with a toolkit for external validation of a published Cox model.
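Discrimination in a validation sample is commonly summarised by a concordance (C) statistic over the prognostic index. The abstract does not name the paper's exact measures, so this is a minimal Harrell-style sketch that assumes no tied survival times:

```python
def c_index(times, events, pi):
    """Harrell-style concordance: among usable pairs, the patient with the
    higher prognostic index (pi) should fail earlier. events: 1 = event,
    0 = censored. Assumes no tied survival times."""
    conc = usable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            # order the pair so that patient a has the shorter follow-up
            a, b = (i, j) if times[i] < times[j] else (j, i)
            if not events[a]:
                continue              # earlier time censored: pair not usable
            usable += 1
            if pi[a] > pi[b]:
                conc += 1             # higher risk failed first: concordant
            elif pi[a] == pi[b]:
                conc += 0.5           # tied prognostic indices count half
    return conc / usable

times  = [2, 4, 5, 7, 9]
events = [1, 1, 0, 1, 0]
pi     = [2.0, 1.5, 1.0, 0.5, 0.1]   # perfectly ordered risk scores
print(c_index(times, events, pi))    # -> 1.0
```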

Figure 1: Flowchart demonstrating papers included in the stroke review.
Table 1 NPT based coding framework
Qualitative systematic reviews of treatment burden in stroke, heart failure and diabetes - Methodological challenges and solutions (vol 13, 10, 2013)
Background: Treatment burden can be defined as the self-care practices that patients with chronic illness must perform to respond to the requirements of their healthcare providers, as well as the impact that these practices have on patient functioning and well-being. Increasing levels of treatment burden may lead to suboptimal adherence and negative outcomes. Systematic review of the qualitative literature is a useful method for exploring the patient experience of care, in this case the experience of treatment burden. There is no consensus on methods for qualitative systematic review. This paper describes the methodology used for qualitative systematic reviews of the treatment burdens identified in three different common chronic conditions, using stroke as our exemplar.
Methods: Qualitative studies in peer-reviewed journals seeking to understand the patient experience of stroke management were sought. Limits of English language and year of publication 2000 onwards were set. An exhaustive search strategy was employed, consisting of a scoping search, database searches (Scopus, CINAHL, Embase, Medline & PsycINFO) and reference, footnote and citation searching. Papers were screened, data extracted, quality appraised and analysed by two individuals, with a third party arbitrating disagreements. Data analysis was carried out using a coding framework underpinned by Normalization Process Theory (NPT).
Results: A total of 4364 papers were identified; 54 were included in the review. Of these, 51 (94%) were retrieved from our database search. Methodological issues included: creating an appropriate search strategy; investigating a topic not previously conceptualised; sorting through irrelevant data within papers; the quality appraisal of qualitative research; and the use of NPT as a novel method of data analysis, shown to be useful for the purposes of this review.
Conclusion: The creation of our search strategy may be of particular interest to other researchers carrying out syntheses of qualitative studies. Importantly, the successful use of NPT to inform a coding frame for the analysis of qualitative data describing processes related to self-management highlights the potential of a new method for analyses of qualitative data within systematic reviews.

Whiting P, Rutjes AWS, Reitsma JB, Bossuyt PMM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 3: 1-13

December 2003 · 4,990 Reads

In the era of evidence-based medicine, with systematic reviews as its cornerstone, adequate quality assessment tools should be available. There is currently no systematically developed and evaluated tool for the assessment of diagnostic accuracy studies. The aim of this project was to combine empirical evidence and expert opinion in a formal consensus method to develop a tool to be used in systematic reviews to assess the quality of primary studies of diagnostic accuracy. We conducted a Delphi procedure to develop the quality assessment tool by refining an initial list of items. Members of the Delphi panel were experts in the area of diagnostic research. The results of three previously conducted reviews of the diagnostic literature were used to generate a list of potential items for inclusion in the tool and to provide an evidence base upon which to develop it. A total of nine experts in the field of diagnostics took part in the Delphi procedure, which consisted of four rounds, after which agreement was reached on the items to be included in the tool, which we have called QUADAS. The initial list of 28 items was reduced to 14 items in the final tool. The items cover patient spectrum, reference standard, disease progression bias, verification bias, review bias, clinical review bias, incorporation bias, test execution, study withdrawals, and indeterminate results. The QUADAS tool is presented together with guidelines for scoring each of its items. This project has produced an evidence-based quality assessment tool to be used in systematic reviews of diagnostic accuracy studies. Further work to determine the usability and validity of the tool continues.

Figure 1: Flow Diagram of Patient Recruitment
Table 1: Characteristics of Patients Who Consented to or Refused Study Participation
Table 2: Characteristics of Patients Enrolled or Withdrawn After Consenting
Patient recruitment to a randomized clinical trial of behavioral therapy for chronic heart failure. <http://www.biomedcentral.com/1471-2288/4/8> (accessed on 6 March 2014)

April 2004 · 367 Reads

Patient recruitment is one of the most difficult aspects of clinical trials, especially for research involving elderly subjects. In this paper, we describe our experience with patient recruitment for the behavioral intervention randomized trial "The relaxation response intervention for chronic heart failure (RRCHF)". In particular, we identify factors that, according to patient reports, motivated study participation. The RRCHF was a three-armed, randomized controlled trial designed to evaluate the efficacy and cost of a 15-week relaxation response intervention in veterans with chronic heart failure. Patients from the Veterans Affairs (VA) Boston Healthcare System in the United States were recruited in the clinic and by telephone. Patients' reasons for declining study participation were recorded during screening. A qualitative sub-study in the trial consisted of telephone interviews with participating patients about their experiences in the study; it included the first 57 patients who completed the intervention and/or the first follow-up outcome measures. Factors that distinguished patients who consented from those who refused study participation were identified using a t-test or a chi-square test. The reason for study participation was abstracted from the qualitative interview. We successfully consented 134 patients, slightly more than our target number, in 27 months. Ninety-five of the consented patients enrolled in the study. The enrollment rate among the patients approached was 18% through clinic and 6% through telephone recruitment. The most commonly cited reason for declining study participation given by patients recruited in the clinic was 'Lives Too Far Away'; for patients recruited by telephone it was 'Not Interested in the Study'. One factor that significantly distinguished patients who consented from patients who declined was the distance between their residence and the study site (t-test: p < .001). The most frequently reported reason for study participation was some benefit to the patient him/herself. Other reasons included helping others, being grateful to the VA, positive comments by trusted professionals, certain characteristics of the recruiter, and monetary compensation. The enrollment rate was low primarily because of travel considerations, but we were able to identify valuable information for planning recruitment for future similar studies.

Figure 1: Chouaid et al. study
Figure 2: Tree diagram showing the relationships between the variables in the re-randomization process
Figure 3: The 0.05 level curves for the relative difference between the risk difference before and after the re-randomization
Figure 4: Meta-analysis conducted with Review Manager using the reported data from Anaissie et al. with re-randomized data included
Figure 5: Meta-analysis conducted with Review Manager using our estimates for the data from Anaissie et al. without including the re-randomized data
Use of re-randomized data in meta-analysis. BMC Med Res Methodol 5: 17

February 2005 · 297 Reads

Outcomes collected in randomized clinical trials are observations of random variables that should be independent and identically distributed. In some trials, however, patients are randomized more than once, violating both of these assumptions. The probability of an event is not always the same when a patient is re-randomized, and there is probably a non-zero covariance between observations on the same patient. This is of particular importance to meta-analysts. We developed a method to estimate the relative error in the risk differences with and without re-randomization of patients. The relative error can be estimated by an expression depending on the percentage of patients who were re-randomized, multipliers (how many times more likely it is for an event to recur) for the probability of recurrences, and the ratio of the total events reported to the initial number of patients entering the trial. We illustrate our methods using two randomized trials testing growth factors in febrile neutropenia. We showed that under some circumstances the relative error from taking re-randomized patients into account was sufficiently small to allow using the results in a meta-analysis. Our findings indicate that if the study in question is of similar size to the other studies included in the meta-analysis, the error introduced by re-randomization will only minimally affect the meta-analytic summary point estimate. We also show that in our model the risk ratio remains constant during re-randomization; therefore, if a meta-analyst is concerned about the effect of re-randomization on a meta-analysis, one way to sidestep the issue and still obtain reliable results is to use the risk ratio as the measure of interest. Our method should be helpful in understanding the results of clinical trials, and particularly helpful to meta-analysts assessing whether re-randomized patient data can be used in their analyses.

Analyzing repeated data collected by mobile phones and frequent text messages. An example of low back pain measured weekly for 18 weeks

July 2012 · 100 Reads

Repeated data collection is desirable when monitoring fluctuating conditions. Mobile phones can be used to gather such data from large groups of respondents by sending and receiving frequently repeated short questions and answers as text messages. The analysis of repeated data involves some challenges; vital issues to consider are the within-subject correlation, the between-measurement-occasion correlation and the presence of missing values. The overall aim of this commentary is to describe different methods of analyzing repeated data. It is meant to give the clinical researcher an overview so that complex outcome measures can be interpreted in a clinically meaningful way. A model data set was formed using data from two clinical studies in which patients with low back pain were followed with weekly text messages for 18 weeks. Different research questions and analytic approaches were illustrated and discussed, as well as the handling of missing data. In the applications, the weekly outcome "number of days with pain" was analyzed in relation to the patients' "previous duration of pain" (categorized as more or less than 30 days in the previous year).

Research questions with appropriate analytical methods:
1. How many days with pain do patients experience? Answered with data summaries.
2. What is the proportion of participants "recovered" at a specific time point? Answered using logistic regression analysis.
3. What is the time to recovery? Answered using survival analysis, illustrated with Kaplan-Meier curves, proportional hazards regression analyses and spline regression analyses.
4. How are the repeatedly measured data associated with baseline (predictor) variables? Answered using generalized estimating equations, Poisson regression and mixed linear model analyses.
5. Are there subgroups of patients with similar courses of pain within the studied population? Answered with a visual approach and hierarchical cluster analyses, which revealed different subgroups using subsets of the model data.

We have illustrated several ways of analysing repeated measures, using both traditional analytic approaches available in standard statistical packages and recently developed statistical methods that utilize all the vital features inherent in the data.
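Research question 3 (time to recovery) uses Kaplan-Meier estimation. A minimal product-limit sketch with made-up weekly data (not the study's data):

```python
def kaplan_meier(times, events):
    """Product-limit estimator: return [(t, S(t))] at each event time.
    events: 1 = event observed (e.g. recovery), 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_t = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            at_t += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            surv *= 1 - deaths / n_at_risk          # step down at event times
            out.append((t, surv))
        n_at_risk -= at_t                           # events and censorings leave
    return out

weeks = [3, 5, 5, 8, 10, 12, 12, 18]   # illustrative follow-up times (weeks)
event = [1, 1, 0, 1, 0, 1, 1, 0]       # 1 = recovered, 0 = censored
print(kaplan_meier(weeks, event))
```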

Figure 1: Number and Size of RCTs published in eight German language general health care journals.
The demise of the randomised controlled trial: Bibliometric study of the German-language health care literature, 1948 to 2004

February 2006 · 60 Reads

In order to reduce systematic errors (such as language bias) and increase the precision of the summary treatment effect estimate, comprehensive identification of randomised controlled trials (RCTs), irrespective of publication language, is crucial in systematic reviews and meta-analyses. We identified trials in the German-language general health care literature. Eight German-language general health care journals were searched for randomised controlled trials and analysed with respect to the number of RCTs published each year and the size of the trials. A total of 1618 trials were identified, with a median of 43 patients per trial. Between 1970 and 2004 a small but steady rise in sample size, from a median of 30 to 60 patients per trial, can be observed. The number of published trials was very low between 1948 and 1970, but increased between 1970 and 1986 to a maximum of 11.2 RCTs per journal and year. In the following period a striking decline in the number of RCTs was observed: between 1999 and 2001 only 0.8 RCTs per journal and year were published. In the next three years, the number of published trials increased to 1.7 RCTs per journal and year. German-language general health care journals no longer have a role in the dissemination of trial results. The slight rise in the number of published RCTs in the last three years can be explained by a change in publication language from German to English in three of the analysed journals.

Table 3: Women by number of identified births according to data source (restricted to women for whom SMR 2 linkage was attempted)
Frequency distribution of linkage scores greater than 22 used for the probabilistic linkage of females in the Children of the 1950's study linkage to SMR 2 maternity records. The bar width is equivalent to 1.0 unit increase in linkage score. Vertical line: used cut-off score of 22.0.
Profile of mailing questionnaires and attempts of linkage with subpopulation considered for further validation of individual births (box with double borders).
How good is probabilistic record linkage to reconstruct reproductive histories? Results from the Aberdeen Children of the 1950s study

February 2006 · 44 Reads

Probabilistic record linkage is widely used in epidemiology, but studies of its validity are rare. Our aim was to validate its use to identify births to a cohort of women drawn from a large cohort of people born in Scotland in the early 1950s. The Children of the 1950s cohort includes 5868 females born in Aberdeen in 1950-56 who were in primary schools in the city in 1962. In 2001 a postal questionnaire was sent to the cohort members resident in the UK requesting information on offspring. Probabilistic record linkage (based on surname, maiden name, initials, date of birth and postcode) was used to link the females in the cohort to birth records held by the Scottish Maternity Record System (SMR 2). We attempted to mail a total of 5540 women; 3752 (68%) returned a completed questionnaire, and of these, 86% reported having had at least one birth. Linkage to SMR 2 was attempted for 5634 women; one or more maternity records were found for 3743. There were 2604 women who reported at least one birth in the questionnaire and who were linked to one or more SMR 2 records. When judged against the questionnaire information, the linkage correctly identified 4930 births and missed 601 others, which mostly occurred outside Scotland (147) or prior to full coverage by SMR 2 (454). There were 134 births incorrectly linked to SMR 2. Probabilistic record linkage of routine maternity records to a population-based cohort, using name, date of birth and place of residence, can have high specificity and as such may be used reliably in epidemiological research.
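A score cut-off like the 22.0 used in the study is typical of Fellegi-Sunter-style probabilistic linkage, where each compared field contributes a log-likelihood-ratio weight. The m/u probabilities below are illustrative, not the study's actual parameters:

```python
import math

FIELDS = {
    # field: (m = P(agree | true match), u = P(agree | non-match))
    # All values are made up for illustration.
    "surname":       (0.95, 0.01),
    "maiden_name":   (0.90, 0.01),
    "initials":      (0.95, 0.05),
    "date_of_birth": (0.99, 0.003),
    "postcode":      (0.85, 0.02),
}

def link_score(agreements):
    """Sum of log2 likelihood-ratio weights over the compared fields."""
    score = 0.0
    for field, agrees in agreements.items():
        m, u = FIELDS[field]
        if agrees:
            score += math.log2(m / u)            # agreement weight (positive)
        else:
            score += math.log2((1 - m) / (1 - u))  # disagreement weight (negative)
    return score

candidate = {"surname": True, "maiden_name": True, "initials": True,
             "date_of_birth": True, "postcode": False}
score = link_score(candidate)
print(score > 22)   # accept the pair only if the score clears the cut-off
```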

Seasonal variation in incidence rates of stroke in AF patients. The estimated seasonal variation in incidence rates of hospitalizations with stroke in AF patients in Denmark from 1980 to 2011, adjusted for an overall trend. The seasonal variation is presented as the percentage deviation from the annual median at four time points.
Trend in incidence rates of stroke in AF patients. The overall trend is presented as the underlying level of incidence rates of hospitalizations with stroke per 100 person-years for patients with AF in Denmark, adjusted for seasonal variations.
Peak-to-trough ratios. Peak-to-trough ratio of the seasonal variation in incidence rates of stroke in AF patients in Denmark estimated by a dynamic generalized linear model. The solid line represents the dynamic peak-to-trough ratio estimated by including a dynamic seasonal variation component in a generalized linear model, whereas the dashed line represents a static seasonal variation.
Times for peaks and troughs. Time of year for peak (dashed line) and trough (solid line) through the study period.
Modeling gradually changing seasonal variation in count data using state space models: A cohort study of hospitalization rates of stroke in atrial fibrillation patients in Denmark from 1977 to 2011

November 2012 · 421 Reads

Background: Seasonal variation in the occurrence of cardiovascular diseases has been recognized for decades. In particular, incidence rates of hospitalization with atrial fibrillation (AF) and stroke have been shown to exhibit seasonal variation. Stroke in AF patients is common and often severe. Describing a possible seasonal variation in the occurrence of stroke in AF patients is crucial for clarifying risk factors for developing stroke and for initiating prophylactic treatment.
Methods: Using a dynamic generalized linear model, we were able to model gradually changing seasonal variation in hospitalization rates of stroke in AF patients from 1977 to 2011. The study population consisted of all Danes registered with a diagnosis of AF, comprising 270,017 subjects. During follow-up, 39,632 subjects were hospitalized with stroke. Incidence rates of stroke in AF patients were analyzed assuming the seasonal variation to be a sum of two sinusoids plus a local linear trend.
Results: The peak-to-trough ratio decreased from 1.25 to 1.16 during the study period, and the times of year for peak and trough changed slightly.
Conclusion: The present study indicates that dynamic generalized linear models provide a flexible approach for studying changes in the seasonal variation of stroke in AF patients and yield plausible results.
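With the seasonal component modelled as a sum of two sinusoids on the log-rate scale, the peak-to-trough ratio can be read off the fitted curve numerically. The coefficients below are illustrative, not the fitted Danish values:

```python
import math

# Illustrative harmonic coefficients on the log-rate scale (not fitted values)
A1, B1 = 0.10, 0.05   # annual harmonic
A2, B2 = 0.02, 0.01   # semi-annual harmonic

def log_rate(day, base=0.0):
    """Log incidence rate for a given day of the year (trend omitted)."""
    w = 2 * math.pi * day / 365.25
    return (base + A1 * math.cos(w) + B1 * math.sin(w)
                 + A2 * math.cos(2 * w) + B2 * math.sin(2 * w))

# Evaluate the seasonal curve over a year; the peak-to-trough ratio is the
# maximum rate divided by the minimum rate.
rates = [math.exp(log_rate(d)) for d in range(366)]
ptt = max(rates) / min(rates)
print(round(ptt, 3))
```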

Table 1 Methodological characteristics of four cross-sectional surveys
Table 2 Parental questionnaire compliance
Table 4 Prevalence of socio-demographic characteristics of household respondents
Table 5 Townsend scores in responders or non-responders to specific questions
Geographical location of primary schools in Liverpool and Wallasey surveys.
Parental compliance - An emerging problem in Liverpool community child health surveys 1991-2006

April 2012 · 128 Reads

Background: Compliance is a critical issue for parental questionnaires in school-based epidemiological surveys, and high compliance is difficult to achieve. The objective of this study was to determine trends in, and factors associated with, parental questionnaire compliance during respiratory health surveys of school children in Merseyside between 1991 and 2006.
Methods: Four cross-sectional respiratory health surveys employing a core questionnaire and methodology were conducted in 1991, 1993, 1998 and 2006 among 5-11 year old children in the same 10 schools in Bootle and 5 schools in Wallasey, Merseyside. Parental compliance fell sequentially in consecutive surveys. This analysis aimed to determine the association of questionnaire compliance with variation in response rates to specific questions across surveys, and with the demographic profiles of parents of children attending participant schools.
Results: Parental questionnaire compliance was 92% (1872/2035) in 1991, 87.4% (3746/4288) in 1993, 78.1% (1964/2514) in 1998 and 30.3% (1074/3540) in 2006. The trend to lower compliance in later surveys was consistent across all surveyed schools. Townsend score estimations of socio-economic status did not differ between schools with high or low questionnaire compliance and were comparable across the four surveys, with only small differences between responders and non-responders to specific core questions. Respiratory symptom questions were mostly well answered, with fewer than 15% non-responders across all surveys. There were significant differences in mean child age, maternal and paternal smoking prevalence, and maternal employment between the four surveys (all p

Kaplan-Meier curve of time to registration with an HIV diagnosis in the Danish National Hospital Registry from the date of first HIV positive test, according to the Danish HIV Cohort Study.
Kaplan-Meier curve of the time to death for patients with diagnoses registered early in the Danish National Hospital Registry [within 3 months of HIV diagnosis in the Danish HIV Cohort Study] [broken line] and for patients diagnosed later (after 3 months) [solid line].
Retrievability in the Danish National Hospital Registry of HIV and hepatitis B and C coinfection diagnoses of patients managed in HIV centers, 1995–2004

April 2008 · 47 Reads

Hospital-based discharge registries are used increasingly for longitudinal epidemiological studies of HIV. We examined the completeness of registration of HIV infections and of chronic hepatitis B (HBV) and hepatitis C (HCV) coinfections in the Danish National Hospital Registry (DNHR), which covers all Danish hospitals. The Danish HIV Cohort Study (DHCS) encompasses all HIV-infected patients treated in Danish HIV clinics since 1 January 1995. All 2,033 Danish patients in the DHCS diagnosed with HIV-1 during the 10-year period from 1 January 1995 to 31 December 2004 were included in the current analysis. We used the DHCS as a reference to examine the completeness of HIV and of HBV and HCV coinfections recorded in the DNHR. Cox regression analysis was used to estimate hazard ratios for time to diagnosis of HIV in the DNHR compared to the DHCS. Of the 2,033 HIV patients in the DHCS, a total of 2,006 (99%) were registered with HIV in the DNHR. Of these, 1,888 (93%) were registered in the DNHR within one year of their first positive HIV test. A CD4 count < 200 cells/µl, a viral load ≥ 100,000 copies/ml, and being diagnosed after 1 January 2000 were associated with earlier registration in the DNHR, in both crude and adjusted analyses. Thirty (23%) of the HIV patients registered with chronic HBV (n = 129) in the DHCS and 126 (48%) of the HIV patients with HCV (n = 264) in the DHCS were registered with these diagnoses in the DNHR. A further 17 and 8 patients were registered with HBV and HCV, respectively, in the DNHR but not in the DHCS. The positive predictive values of being registered with HBV and HCV were thereby estimated at 0.88 and 0.97 in the DHCS and at 0.32 and 0.54 in the DNHR. The study demonstrates that secondary data from national hospital databases may be reliable for identification of patients diagnosed with HIV infection. However, the predictive value of co-morbidity data may be low.

Table 2: Prevalence of NHST and CI across periods, languages and research areas.
Table 3: Frequency of occurrence of the significance fallacy across periods, languages and research areas.
Table 5: Frequency of presence of the term Significance (or statistical significance) in conclusions across periods, languages and research areas.
Flow chart of the selection process for eligible papers.
The null hypothesis significance test in health sciences research (1995-2006): Statistical analysis and interpretation

May 2010

·

477 Reads

The null hypothesis significance test (NHST) is the most frequently used statistical method, although its inferential validity has been widely criticized since its introduction. In 1988, the International Committee of Medical Journal Editors (ICMJE) warned against sole reliance on NHST to substantiate study conclusions and suggested supplementary use of confidence intervals (CI). Our objective was to evaluate the extent and quality of the use of NHST and CI in both English- and Spanish-language biomedical publications between 1995 and 2006, taking into account the ICMJE recommendations, with particular focus on the accuracy of the interpretation of statistical significance and the validity of conclusions. Original articles published in three English and three Spanish biomedical journals in three fields (General Medicine, Clinical Specialties and Epidemiology - Public Health) were considered for this study. Papers published in 1995-1996, 2000-2001, and 2005-2006 were selected through a systematic sampling method. After excluding the purely descriptive and theoretical articles, analytic studies were evaluated for their use of NHST with P-values and/or CI for interpretation of statistical "significance" and "relevance" in study conclusions. Among 1,043 original papers, 874 were selected for detailed review. The exclusive use of P-values was less frequent in English-language publications as well as in Public Health journals; overall such use decreased from 41% in 1995-1996 to 21% in 2005-2006. While the use of CI increased over time, the "significance fallacy" (equating statistical and substantive significance) appeared very often, mainly in journals devoted to clinical specialties (81%). In papers originally written in English and Spanish, 15% and 10%, respectively, mentioned statistical significance in their conclusions.
Overall, our review shows some improvement in the statistical management of results, but further efforts by scholars and journal editors are clearly required to move communication towards the ICMJE advice, especially in the clinical setting; this appears particularly imperative for publications in Spanish.
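The "significance fallacy" the review describes is easiest to see numerically: with large samples, a trivially small effect can yield a small P-value while the CI shows the effect is negligible. A minimal sketch with hypothetical counts (not data from the review):

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_prop(x1: int, n1: int, x2: int, n2: int):
    """Two-sample z-test for proportions, plus a 95% CI for the risk difference."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se_test = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se_test
    p_value = 2 * (1 - norm_cdf(abs(z)))
    se_ci = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return p_value, (diff - 1.96 * se_ci, diff + 1.96 * se_ci)

# Hypothetical: 10.2% vs 10.0% event rates in two groups of 500,000.
# The test is "significant", yet the CI shows a risk difference
# well under half a percentage point -- statistically, not substantively, significant.
p_value, (lo, hi) = two_prop(51000, 500000, 50000, 500000)
```

This is exactly the pattern the ICMJE recommendation targets: the CI, not the P-value alone, conveys whether the effect matters.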

Figure 1 Flow diagram of the search strategy and review process. 
Table 1 Characteristics of included RCTs 
Table 2 Reporting quality of key methodological items 
Table 3 Methodological reporting in 2008 according to center and funding source 
Flow diagram of the search strategy and review process.
Methodological reporting of randomized controlled trials in major hepato-gastroenterology journals in 2008 and 1998: A comparative study

July 2011

·

64 Reads

It remained unclear whether the methodological reporting quality of randomized controlled trials (RCTs) in major hepato-gastroenterology journals improved after the Consolidated Standards of Reporting Trials (CONSORT) Statement was revised in 2001. RCTs published in 1998 or 2008 in five major hepato-gastroenterology journals were retrieved from MEDLINE using a high-sensitivity search method, and their reporting of methodological details was evaluated based on the CONSORT Statement and the Cochrane Handbook for Systematic Reviews of Interventions. Changes in methodological reporting quality between 2008 and 1998 were expressed as risk ratios with 95% confidence intervals. A total of 107 RCTs published in 2008 and 99 RCTs published in 1998 were found. Compared to 1998, the proportion of RCTs that reported sequence generation (RR, 5.70; 95% CI 3.11-10.42), allocation concealment (RR, 4.08; 95% CI 2.25-7.39), sample size calculation (RR, 3.83; 95% CI 2.10-6.98), how incomplete outcome data were addressed (RR, 1.81; 95% CI 1.03-3.17), and intention-to-treat analyses (RR, 3.04; 95% CI 1.72-5.39) increased in 2008. Blinding and intention-to-treat analysis were reported better in multi-center trials than in single-center trials. Allocation concealment and blinding were reported better in industry-sponsored trials than in publicly funded trials. Compared with historical studies, methodological reporting quality improved over time. Although the reporting of several important methodological aspects improved in 2008 compared with 1998, which may indicate increased awareness of and compliance with the revised CONSORT Statement among researchers, some items were still reported poorly. There is much room for future improvement.
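The risk ratios quoted above follow the standard construction: the CI is built on the log scale and exponentiated back. A minimal sketch, using hypothetical counts (the abstract does not give the underlying item counts, only the denominators 107 and 99):

```python
import math

def risk_ratio(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    """Risk ratio of group 1 vs group 2, with a 95% CI computed on the log scale."""
    rr = (a / n1) / (b / n2)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    log_rr = math.log(rr)
    return rr, math.exp(log_rr - z * se_log), math.exp(log_rr + z * se_log)

# Hypothetical: 60/107 RCTs reporting an item in 2008 vs 10/99 in 1998
rr, lo, hi = risk_ratio(60, 107, 10, 99)
```

With these illustrative counts the RR comes out near the abstract's 5.70 for sequence generation, but the true counts are not stated in the abstract.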

Table 1 Telephone (landline) and mobile status of household, and whether the household has a landline and/or mobile telephone listed in the Australian Electronic White Pages, by year
Table 3 Proportion of people living in households where mobile phone or landline telephone is listed in the White Pages by selected demographic, health conditions, and health related risk factors from 2006 to 2008
Sampling and coverage issues of telephone surveys used for collecting health information in Australia: Results from a face-to-face survey from 1999 to 2008

August 2010

·

121 Reads

To examine the trend of "mobile-only" households, and households that have a mobile phone or landline telephone listed in the telephone directory, and to describe these groups by various socio-demographic and health indicators. Representative face-to-face population health surveys of South Australians, aged 15 years and over, were conducted in 1999, 2004, 2006, 2007 and 2008 (n = 14,285; response rates 51.9% to 70.6%). Participants provided self-reported information on mobile phone ownership and usage (1999 to 2008) and listings in the White Pages telephone directory (2006 to 2008), and on landline telephone connection and White Pages listings (1999 to 2008). Additional information was collected on self-reported health conditions and health-related risk behaviours. Mobile-only households increased steadily from 1.4% in 1999 to 8.7% in 2008. In terms of sampling frames for telephone surveys, 68.7% of South Australian households in 2008 had at least a mobile phone or landline telephone listed in the White Pages (73.8% in 2006; 71.5% in 2007). The proportion of mobile-only households was highest among young people, the unemployed, people who were separated, divorced or never married, low-income households, low-SES areas, rural areas, current smokers, people with current asthma, and people in the normal weight range. The proportion with landline or mobile telephone numbers listed in the White Pages was highest among older people, people who were married, in a de facto relationship or widowed, low-SES areas, rural areas, people classified as overweight, and those diagnosed with arthritis or osteoporosis. The rate of mobile-only households has been increasing in Australia, following worldwide trends, but has not reached the high levels seen internationally (12% to 52%).
In general, the exclusion or non-listing of mobile-only households in current sampling frames may have only a small impact on health estimates obtained from telephone surveys. However, researchers need to be aware that mobile-only households are distinctly different from households with a landline connection, and that the increase in the number of mobile-only households is not uniform across all groups in the community. Listing in the White Pages directory continues to decrease, and only a small proportion of mobile-only households are listed. Researchers need to be aware of these sampling issues when considering telephone surveys.

Table 2 Estimates of treatment effects for MS clinical trials in Japan
Schematic forms of the various extended time-to-event Cox regression models. Each arrow represents a stratum. The arrow diagrams describe the behavior of ID = 1 in the sample data (DATA = MS) in the Appendix (Additional file 1), who experiences the first event at day 51, the second event at day 185, the third event at day 413, and is finally censored at day 692. (A) The "time-to-first-event Cox model" uses only the information on the time to the first event (day 51). (B) The "AG model" shows the renewal process of the events. (C) The "PWP model" shows how the conditional models are constructed. (D) The "WLW model" and "LWA model" model the marginal distribution of each event occurrence time.
Alternative statistical methods for estimating efficacy of interferon beta-1b for multiple sclerosis clinical trials

May 2011

·

116 Reads

In randomized studies of interferon beta-1b (IFN beta-1b) for multiple sclerosis (MS), the simple annual relapse rate has usually been evaluated as the study endpoint. This study aimed to investigate the performance of various regression models that use information on the time to each recurrent event and account for the MS-specific data generation process, and to estimate the treatment effect using data from an MS clinical trial. We conducted a simulation study with consideration of the pathological characteristics of MS, and applied alternative efficacy estimation methods to real clinical trial data, including 5 extended Cox regression models for time-to-event analysis, a Poisson regression model, and a Poisson regression model with generalized estimating equations (GEE). We adjusted for other important covariates that may have affected the outcome. We compared the simulation results for each model. The hazard ratios for the real data were estimated for each model, including the effects of other covariates. The results (hazard ratios of high dose to low dose) of all models were approximately 0.7 (range, 0.613-0.769), whereas the annual relapse rate ratio was 0.714. The precision of the treatment estimate was increased by applying the alternative models. This suggests that the use of alternative models that include recurrent-event data may provide better analyses.
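The annual relapse rate ratio that the models are benchmarked against (0.714) is itself just a Poisson rate ratio. A minimal sketch with hypothetical event counts and person-years (chosen to give a ratio near the abstract's 0.7; these are not the trial's data):

```python
import math

def rate_ratio(events_a: int, pyears_a: float, events_b: int, pyears_b: float,
               z: float = 1.96):
    """Relapse rate ratio (group A vs group B) with a 95% CI,
    treating event counts as Poisson."""
    rr = (events_a / pyears_a) / (events_b / pyears_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)  # SE of log rate ratio
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)

# Hypothetical: 70 relapses over 100 person-years (high dose)
# vs 100 relapses over 100 person-years (low dose)
rr, lo, hi = rate_ratio(70, 100, 100, 100)
```

The extended Cox and GEE models in the paper go further by using the timing of each recurrence and adjusting for covariates, which is what tightens the estimate relative to this crude ratio.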

Table 1 Characteristics of Participants Eligible for participation in Validation Study
Table 2 Participant Demographics by language of Edinburgh Claudication Questionnaire
Table 4 Sensitivity, Specificity, Positive and Negative predictive values of each question of the Edinburgh Claudication Questionnaire
Translation process of the Edinburgh Claudication Questionnaire.
Recruitment of subjects.
Validation of the Edinburgh Claudication Questionnaire in 1st generation Black African-Caribbean and South Asian UK migrants: A sub-study to the Ethnic-Echocardiographic Heart of England Screening (E-ECHOES) study

June 2011

·

1,098 Reads

We determined the diagnostic accuracy of the Edinburgh Claudication Questionnaire (ECQ) in 1st generation Black African-Caribbean UK migrants, as previous diagnostic questionnaires have been found to be less accurate in this population. We also determined the diagnostic accuracy of translated versions of the ECQ in 1st generation South Asian UK migrants, as this has not been investigated before. Subjects were recruited from the Ethnic-Echocardiographic Heart of England Screening (E-ECHOES) study, a community-based screening survey for heart failure in minority ethnic groups. Translated versions of the ECQ were prepared following a recognised protocol. All participants attending screening between October 2007 and February 2009 were asked to complete the ECQ in the language of their choice (English, Punjabi, Bengali, Urdu, Hindi or Gujarati). Subjects answering positively to experiencing leg pain or discomfort on walking were asked to return to have their Ankle Brachial Pressure Index (ABPI) measured. 154 of the 2,831 subjects participating in E-ECHOES (5.4%) were eligible to participate in this sub-study, of whom 74.3% returned for ABPI assessment. Non-responders were younger than participants (59 [9] vs. 65 [11] years; p = 0.015). The Punjabi, English and Bengali questionnaires identified participants with intermittent claudication, so these questionnaires were assessed. The sensitivities (SN), specificities (SP), and positive (PPV) and negative (NPV) predictive values were calculated. English: SN 50%; SP 68%; PPV 43%; NPV 74%. Punjabi: SN 50%; SP 87%; PPV 43%; NPV 90%. Bengali: SN 33%; SP 50%; PPV 13%; NPV 73%. There were significant differences in diagnostic accuracy between the 3 versions (Punjabi: 83.8%; Bengali: 45%; English: 62.2%; p < 0.0001).
No significant differences were found in sensitivity or specificity between illiterate and literate participants for any of the questionnaires, and there was no significant difference between those under and over 60 years of age. Our findings suggest that the ECQ is not as sensitive or specific a diagnostic tool in 1st generation Black African-Caribbean and South Asian UK migrants as in the Edinburgh Artery Study, reflecting the findings of other diagnostic questionnaires in these minority ethnic groups. However, this study is limited by sample size, so conclusions should be interpreted with caution.
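The four accuracy measures quoted per language all come from a single 2×2 table of questionnaire result against the ABPI reference standard. A minimal sketch; the cell counts below are hypothetical, chosen only to reproduce figures close to the Punjabi version's reported values (the abstract does not give the raw tables):

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    questionnaire result (index test) vs ABPI (reference standard)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives among the diseased
        "specificity": tn / (tn + fp),  # true negatives among the non-diseased
        "ppv": tp / (tp + fp),          # diseased among test positives
        "npv": tn / (tn + fn),          # non-diseased among test negatives
    }

# Hypothetical counts mirroring the Punjabi version (SN 50%, SP 87%, PPV 43%, NPV 90%)
m = diagnostic_accuracy(tp=3, fp=4, fn=3, tn=26)
```

With cell counts this small, all four estimates carry wide confidence intervals, which is consistent with the abstract's caution about sample size.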

Table 2: Source of recruitment
Recruitment flow chart. * Two participants were excluded from analysis due to a combination of failing memory and excessive number of falls, i.e. more than two per day.
Trials and tribulations of recruiting 2,000 older women onto a clinical trial investigating falls and fractures: Vital D study

November 2009

·

134 Reads

Randomised, placebo-controlled trials are needed to provide evidence of safe, effective interventions that reduce falls and fractures in the elderly. The quality of a clinical trial depends on successful recruitment of the target participant group. This paper documents the successes and failures of recruiting over 2,000 women aged at least 70 years and at higher risk of falls or fractures onto a placebo-controlled trial of six years' duration. The characteristics of study participants at baseline are also described. The Vital D Study recruited older women identified as at high risk of fracture through the use of an eligibility algorithm, adapted from identified risk factors for hip fracture. Participants were randomised to receive orally either 500,000 IU vitamin D3 (cholecalciferol) or placebo every autumn for five consecutive years. A variety of recruitment strategies were employed to attract potential participants. Of the 2,317 participants randomised onto the study, 74% (n = 1716/2317) were consented in the last five months of recruiting. This was largely due to the success of a targeted mail-out; prior to this, only 541 women had been consented in 18 months of recruiting. A total of 70% of all participants were recruited as a result of the targeted mail-out. The response rate from the letters increased from 2% to 7% following revision of the material by a public relations company. Participant demographic and risk factor profiles did not differ between those recruited by targeted mail-outs and those recruited by other methods. The most successful recruitment strategy was the targeted mail-out, and the response rate was no higher in the local region where the study had extensive exposure through other recruiting strategies. The strategies that were labour-intensive and did not result in successful recruitment included the activities directed towards GP medical centres.
Comprehensive recruitment programs employ overlapping strategies simultaneously, with ongoing assessment of recruitment rates. In our experience, and that of others, direct mail-outs work best, although rights to privacy must be respected. ISRCTN83409867 and ACTR12605000658617.

Figure 1 CONSORT score in RCTs: a comparison between oral and poster presentation in 2000 and 2008.
Figure 2 Timmer score in RCTs: a comparison between oral and poster presentation in 2000 and 2008.
Figure 3 STROBE score in observational studies: a comparison between oral and poster presentation in 2000 and 2008.
Figure 4 Timmer score in observational studies: a comparison between oral and poster presentation in 2000 and 2008.
Quality of reporting according to the CONSORT, STROBE and Timmer instrument at the American Burn Association (ABA) annual meetings 2000 and 2008

November 2011

·

496 Reads

The quality of oral and poster conference presentations may differ. We hypothesized that the quality of reporting is better in oral abstracts than in poster abstracts at the American Burn Association (ABA) conference meetings. All 511 abstracts (2000: n = 259; 2008: n = 252) from the ABA annual meetings in 2000 and 2008 were screened. RCTs and observational studies were analyzed by two independent examiners regarding study design and quality of reporting: randomized controlled trials (RCTs) by the CONSORT criteria, observational studies by the STROBE criteria, and both additionally by the Timmer instrument. Overall, 13 RCTs in 2000 and 9 in 2008, and 77 observational studies in 2000 and 98 in 2008, were identified. Of the presented abstracts, 5% (oral: 7% (n = 9) vs. poster: 3% (n = 4)) in 2000 and 4% (oral: 5% (n = 7) vs. poster: 2% (n = 2)) in 2008 were randomized controlled trials. The numbers of observational and experimental studies accepted for presentation did not differ significantly between oral and poster presentations in either year. Reporting quality of RCTs for oral vs. poster abstracts was similar in 2000 (CONSORT: 7.2 ± 0.8 vs. 7 ± 0, p = 0.615, CI -0.72 to 1.16; Timmer: 7.8 ± 0.7 vs. 7.5 ± 0.6) and in 2008 (CONSORT: 7.2 ± 1.4 vs. 6.5 ± 1; Timmer: 9.7 ± 1.1 vs. 9.5 ± 0.7). While in 2000 oral and poster abstracts of observational studies did not differ significantly in reporting quality according to STROBE (STROBE: 8.3 ± 1.7 vs. 8.9 ± 1.6, p = 0.977, CI -37.3 to 36.3; Timmer: 8.6 ± 1.5 vs. 8.5 ± 1.4, p = 0.712, CI -0.44 to 0.64), in 2008 oral observational abstracts were significantly better than posters (STROBE score: 9.4 ± 1.9 vs. 8.5 ± 2, p = 0.005, CI 0.28 to 1.54; Timmer: 9.4 ± 1.4 vs. 8.6 ± 1.7, p = 0.013, CI 0.32 to 1.28). Poster abstract reporting quality at the American Burn Association annual meetings in 2000 and 2008 was thus not necessarily inferior to that of oral abstracts as far as study design and reporting quality of clinical trials are concerned.
The primary hypothesis therefore has to be rejected. However, endorsement of the comprehensive use of the CONSORT and STROBE criteria might further increase the quality of reporting of ABA conference abstracts in the future.

Identifying unusual performance in Australian and New Zealand intensive care units from 2000 to 2010

April 2014

·

146 Reads

The Australian and New Zealand Intensive Care Society (ANZICS) Adult Patient Database (APD) collects voluntary data on patient admissions to Australian and New Zealand intensive care units (ICUs). This paper presents an in-depth statistical analysis of risk-adjusted mortality of ICU admissions from 2000 to 2010 for the purpose of identifying ICUs with unusual performance. A cohort of 523,462 patients from 144 ICUs was analysed. For each ICU, the natural logarithm of the standardised mortality ratio (log-SMR) was estimated from a risk-adjusted, three-level hierarchical model. This is the first time a three-level model has been fitted to such a large ICU database anywhere. The analysis was conducted in three stages, which included the estimation of a null distribution to describe usual ICU performance. Log-SMRs with appropriate estimates of standard errors are presented in a funnel plot using 5% false discovery rate thresholds. False coverage-statement rate confidence intervals are also presented. The observed numbers of deaths for ICUs identified as unusual are compared to the predicted true worst numbers of deaths under the model for usual ICU performance. Seven ICUs were identified as performing unusually over the period 2000 to 2010, in particular demonstrating high risk-adjusted mortality compared to the majority of ICUs. Four of the seven were ICUs in private hospitals. Our three-stage approach detected outlying ICUs that were not identified in a conventional (single) risk-adjusted model for mortality using SMRs to compare ICUs. We also observed a significant linear decline in mortality over the decade. Distinct yearly and weekly respiratory seasonal effects were observed across regions of Australia and New Zealand for the first time. The statistical approach proposed in this paper is intended to be used for the review of observed ICU and hospital mortality.
Two important messages from our study are, firstly, that comprehensive risk adjustment is essential when modelling patient mortality to compare performance, and secondly, that the appropriate statistical analysis is complicated.
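The building blocks of such a funnel plot are the log-SMR, its standard error, and control limits that widen as the expected count shrinks. The paper's three-level hierarchical model and FDR thresholds are more sophisticated; the sketch below shows only the conventional Poisson approximation, with a hypothetical ICU (130 observed vs 100 expected deaths):

```python
import math

def log_smr_point(observed: int, expected: float):
    """Point estimate and approximate SE of the log standardised mortality ratio.
    SE(log SMR) ~ 1/sqrt(observed) under a Poisson model for deaths."""
    log_smr = math.log(observed / expected)
    se = 1 / math.sqrt(observed)
    return log_smr, se

def funnel_limits(expected: float, z: float = 1.96):
    """Approximate funnel-plot control limits for the SMR at a given
    expected death count (conventional, not the paper's FDR thresholds)."""
    half = z / math.sqrt(expected)
    return math.exp(-half), math.exp(half)

# Hypothetical ICU: 130 observed deaths against 100 expected -> SMR 1.3
log_smr, se = log_smr_point(130, 100)
lo, hi = funnel_limits(100)
```

Here the hypothetical ICU's SMR of 1.3 falls above the upper control limit, which is the kind of signal a funnel plot flags for review rather than a verdict on performance.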

Table 1 Prevalence of average daily alcohol consumption in the European Union
Table 5 Alcohol-attributable deaths for people aged 15 years and older in the European Union calculated using capped (at 150 grams per day) and uncapped alcohol consumption distributions and capped relative risk functions
Original and normalized gamma distributions for men in Latvia.
The effects of capping the alcohol consumption distribution and relative risk functions on the estimated number of deaths attributable to alcohol consumption in the European Union in 2004

February 2013

·

93 Reads

When calculating the number of deaths attributable to alcohol consumption (i.e., the number of deaths that would not have occurred if everyone were a lifetime abstainer), alcohol consumption is most often modelled using a capped exposure distribution, so that the maximum average daily consumption is 150 grams of pure alcohol. However, the effect of capping the exposure distribution on the estimated number of alcohol-attributable deaths has yet to be systematically evaluated. Thus, the aim of this article is to estimate the number of alcohol-attributable deaths by means of capped and uncapped gamma distributions and capped and uncapped relative risk functions, using data from the European Union (EU) for 2004. Sex- and disease-specific alcohol relative risks were obtained from the ongoing Global Burden of Disease, Comparative Risk Assessment Study. Adult per capita consumption estimates were obtained from the Global Information System on Alcohol and Health. Data on the prevalence of current drinkers, former drinkers, and lifetime abstainers by sex and age were obtained from various population surveys. Alcohol-attributable deaths were calculated using Alcohol-Attributable Fractions computed from capped (at 150 grams of alcohol) and uncapped alcohol consumption distributions and capped and uncapped relative risk functions. Alcohol-attributable mortality in the EU may have been underestimated by 25.5% for men and 8.0% for women when using the capped alcohol consumption distribution and relative risk functions, amounting to a potential underestimation of over 23,000 deaths in men and 1,100 deaths in women in 2004. Capping of the relative risk functions leads to an estimated 9,994 and 468 fewer deaths for men and women, respectively, when using an uncapped gamma distribution to model alcohol consumption, accounting for slightly less than half of the potential underestimation.
Although the distribution of drinkers in the population and the exact shape of the relative risk functions at high average daily alcohol consumption levels are not known, the findings of our study stress the importance of further research focusing on exposure and risk among very heavy drinkers.
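Capping the exposure distribution amounts to replacing X with min(X, 150) when X follows the fitted gamma distribution, which shifts exposure mass out of the heavy-drinking tail. A minimal numerical sketch of that effect, with a hypothetical heavy-drinking population (shape 1.6, scale 25, i.e. mean 40 g/day; not the study's fitted parameters):

```python
import math

def gamma_pdf(x: float, k: float, theta: float) -> float:
    """Gamma density with shape k and scale theta."""
    return x ** (k - 1) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

def capped_mean(k: float, theta: float, cap: float = 150.0, steps: int = 20000) -> float:
    """E[min(X, cap)] for X ~ Gamma(k, theta), by trapezoidal integration."""
    upper = cap + 40 * theta  # integrate far enough into the tail
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * min(x, cap) * gamma_pdf(max(x, 1e-12), k, theta)
    return total * h

# Hypothetical population: mean daily consumption 40 g (k=1.6, theta=25)
uncapped = 1.6 * 25          # gamma mean = k * theta
capped = capped_mean(1.6, 25)
```

Even here the capped mean sits below the uncapped mean; with heavier-tailed national distributions (as in the study), the gap, and hence the underestimation of attributable deaths, grows.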
