Clare Bankhead

Cardiff University, Cardiff, WLS, United Kingdom

Publications (36) · 217.32 total impact

  •
    ABSTRACT: Background: Experts recommend screening for albuminuria in patients at risk for kidney disease. Purpose: To systematically review evidence about the diagnostic accuracy of point-of-care (POC) tests for detecting albuminuria in individuals for whom guidelines recommend such detection. Data Sources: Cochrane Library, EMBASE, Medion database, MEDLINE, and Science Citation Index from 1963 through 5 December 2013; hand searches of other relevant journals; and reference lists. Study Selection: Cross-sectional studies, published in any language, that compared the accuracy of machine-read POC tests of urinary albumin-creatinine ratio with that of laboratory measurement. Data Extraction: Two independent reviewers extracted study data and assessed study quality using the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2) tool. Data Synthesis: Sixteen studies (n = 3356 patients) that evaluated semiquantitative or quantitative POC tests and used random urine samples collected in primary or secondary ambulatory care settings met inclusion criteria. Pooling results from a bivariate random-effects model gave sensitivity and specificity estimates of 76% (95% CI, 63% to 86%) and 93% (CI, 84% to 97%), respectively, for the semiquantitative test. Sensitivity and specificity estimates for the quantitative test were 96% (CI, 78% to 99%) and 98% (CI, 93% to 99%), respectively. The negative likelihood ratios for the semiquantitative and quantitative tests were 0.26 (CI, 0.16 to 0.40) and 0.04 (CI, 0.01 to 0.25), respectively. Limitation: Accuracy estimates were based on data from single-sample urine measurement, but guidelines require that diagnosis of albuminuria be based on at least 2 of 3 samples collected in a 6-month period. Conclusion: A negative semiquantitative POC test result does not rule out albuminuria, whereas quantitative POC testing meets required performance standards and can be used to rule out albuminuria. Primary Funding Source: None.
    Annals of Internal Medicine 04/2014; 160(8):550-7. · 13.98 Impact Factor
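As an aside on the arithmetic behind these figures: a negative likelihood ratio follows directly from sensitivity and specificity. A minimal sketch (the formula is standard; the inputs are the pooled point estimates quoted above, and the confidence intervals would need the full bivariate model):

```python
# LR- = (1 - sensitivity) / specificity: how much a negative result
# lowers the odds of disease. Inputs are the pooled point estimates
# quoted in the abstract.

def negative_likelihood_ratio(sensitivity, specificity):
    """Probability of a negative test in the diseased divided by the
    probability of a negative test in the non-diseased."""
    return (1 - sensitivity) / specificity

semiquant = negative_likelihood_ratio(0.76, 0.93)  # ~0.26: cannot rule out
quant = negative_likelihood_ratio(0.96, 0.98)      # ~0.04: effectively rules out
```

An LR- near 1 leaves the pre-test probability essentially unchanged; values below roughly 0.1 are conventionally taken as good rule-out performance, which is why only the quantitative test qualifies.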
  •
    ABSTRACT: Background: The publication of clinical prediction rule (CPR) studies has risen significantly. It is unclear whether this reflects increasing use of these tools in clinical practice, or how use may vary across clinical areas. Aim: To review clinical guidelines in selected areas and survey GPs in order to explore CPR usefulness in the opinion of experts and use at the point of care. Design and setting: A review of clinical guidelines and a survey of UK GPs. Method: Clinical guidelines in eight clinical domains with published CPRs were reviewed for recommendations to use CPRs: primary prevention of cardiovascular disease, transient ischaemic attack (TIA) and stroke, diabetes mellitus, fracture risk assessment in osteoporosis, lower limb fractures, breast cancer, depression, and acute infections in childhood. An online survey of 401 UK GPs was also conducted. Results: Guideline review: of 7637 records screened by title and/or abstract, 243 clinical guidelines met inclusion criteria. CPRs were most commonly recommended in guidelines on primary prevention of cardiovascular disease (67%) and depression (67%). There was little consensus across guidelines as to which CPR to use preferentially. Survey: of 401 responders to the GP survey, most were aware of and applied named CPRs in the clinical areas of cardiovascular disease and depression. The commonest reasons for using CPRs were to guide management and to conform to local policy requirements. Conclusion: GPs use CPRs to guide management, but also to comply with local policy requirements. Future research could focus on identifying the clinical areas in which clinicians would most benefit from CPRs, and on promoting the use of robust, externally validated CPRs.
    British Journal of General Practice 04/2014; 64(621):e233-42. · 2.03 Impact Factor
  • British Journal of General Practice 01/2014; · 2.03 Impact Factor
  •
    Diabetologia 10/2012; · 6.49 Impact Factor
  •
    ABSTRACT: Observational studies suggest that metformin may reduce cancer risk by approximately one-third. We examined cancer outcomes and all-cause mortality in published randomised controlled trials (RCTs). RCTs comparing metformin with active glucose-lowering therapy or placebo/usual care, with minimum 500 participants and 1-year follow-up, were identified by systematic review. Data on cancer incidence and all-cause mortality were obtained from publications or by contacting investigators. For two trials, cancer incidence data were not available; cancer mortality was used as a surrogate. Summary RRs, 95% CIs and I² statistics for heterogeneity were calculated by fixed-effects meta-analysis. Of 4,039 abstracts identified, 94 publications described 14 eligible studies. RRs for cancer were available from 11 RCTs with 398 cancers during 51,681 person-years. RRs for all-cause mortality were available from 13 RCTs with 552 deaths during 66,447 person-years. Summary RRs for cancer outcomes in people randomised to metformin compared with any comparator were 1.02 (95% CI 0.82, 1.26) across all trials, 0.98 (95% CI 0.77, 1.23) in a subgroup analysis of active-comparator trials and 1.36 (95% CI 0.74, 2.49) in a subgroup analysis of placebo/usual care comparator trials. The summary RR for all-cause mortality was 0.94 (95% CI 0.79, 1.12) across all trials. Meta-analysis of currently available RCT data does not support the hypothesis that metformin lowers cancer risk by one-third. Eligible trials also showed no significant effect of metformin on all-cause mortality. However, limitations include heterogeneous comparator types, absent cancer data from two trials, and short follow-up, especially for mortality.
    Diabetologia 08/2012; 55(10):2593-603. · 6.49 Impact Factor
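The fixed-effects pooling used here can be sketched in a few lines: inverse-variance weighting of log relative risks, with Cochran's Q and I² for heterogeneity. The function and the trial figures below are illustrative assumptions, not data from the review.

```python
import math

def pool_fixed_effect(rrs, cis):
    """Inverse-variance fixed-effect pooling of relative risks on the
    log scale, with Cochran's Q and the I^2 heterogeneity statistic.
    cis: per-trial 95% CIs, used to back out each SE(log RR)."""
    logs = [math.log(rr) for rr in rrs]
    # SE(log RR) recovered from CI width: (ln(upper) - ln(lower)) / (2 * 1.96)
    ses = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in cis]
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    q = sum(w * (l - pooled) ** 2 for w, l in zip(weights, logs))
    i2 = max(0.0, (q - (len(rrs) - 1)) / q) * 100 if q > 0 else 0.0
    return {"rr": math.exp(pooled),
            "ci": (math.exp(pooled - 1.96 * se_pooled),
                   math.exp(pooled + 1.96 * se_pooled)),
            "i2": i2}

# Made-up trial results for illustration (NOT the 11 RCTs in the review)
result = pool_fixed_effect([0.95, 1.10, 1.02],
                           [(0.70, 1.29), (0.85, 1.42), (0.80, 1.30)])
```

Random-effects pooling would add a between-trial variance term to each weight; the fixed-effects version shown matches the method named in the abstract.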
  •
    ABSTRACT: Warfarin is used as an oral anticoagulant. However, there is wide variation in patient response to warfarin dose. This variation, as well as the necessity of keeping within a narrow therapeutic range, means that selection of the correct warfarin dose at the outset of treatment is not straightforward. To assess the effectiveness of different initiation doses of warfarin in terms of time in-range, time to INR in-range and effect on serious adverse events. We searched CENTRAL, DARE and the NHS Health Economics Database on The Cochrane Library (2012, Issue 4); MEDLINE (1950 to April 2012) and EMBASE (1974 to April 2012). All randomised controlled trials which compared different initiation regimens of warfarin were eligible. Review authors independently assessed studies for inclusion, assessed the risk of bias and extracted data from the included studies. We identified 12 studies of patients commencing warfarin for inclusion in the review. The overall risk of bias was found to be variable, with most studies reporting adequate methods for randomisation but only two studies reporting adequate data on allocation concealment. Four studies (355 patients) compared 5 mg versus 10 mg loading doses. All four studies reported INR in-range by day five. Although there was notable heterogeneity, pooling of these four studies showed no overall difference between 5 mg and 10 mg loading doses (RR 1.17, 95% CI 0.77 to 1.77, P = 0.46, I² = 83%). Two of these studies used two consecutive INRs in-range as the outcome and showed no difference between a 5 mg and 10 mg dose by day five (RR 0.86, 95% CI 0.62 to 1.19, P = 0.37, I² = 22%); two other studies used a single INR in-range as the outcome and showed a benefit for the 10 mg initiation dose by day 5 (RR 1.49, 95% CI 1.01 to 2.21, P = 0.05, I² = 72%). 
Two studies compared a 5 mg dose to other doses: a 2.5 mg initiation dose took longer to achieve the therapeutic range (2.7 versus 2.0 days; P < 0.0001), but those receiving a calculated initiation dose achieved a target range quicker (4.2 days versus 5 days, P = 0.007). Two studies compared age adjusted doses to 10 mg initiation doses. More elderly patients receiving an age adjusted dose achieved a stable INR compared to those receiving a 10 mg initial dose (the Fennerty regimen). Four studies used genotype guided dosing in one arm of each trial. Three studies reported no overall differences; the fourth study, which reported that the genotype group spent significantly more time in-range (P < 0.001), had a control group whose INRs were significantly lower than expected. Adverse event data were too sparse in either arm to support an overall conclusion. The studies in this review compared loading doses in several different situations. There is still considerable uncertainty about the choice between a 5 mg and a 10 mg loading dose for the initiation of warfarin. In the elderly, there is some evidence that lower initiation doses or age adjusted doses are more appropriate, leading to fewer high INRs. However, there is insufficient evidence to warrant genotype guided initiation.
    Cochrane Database of Systematic Reviews (Online) 01/2012; 12:CD008685. · 5.70 Impact Factor
  •
    ABSTRACT: Uptake of self-testing and self-management of oral anticoagulation has remained inconsistent, despite good evidence of their effectiveness. To clarify the value of self-monitoring of oral anticoagulation, we did a meta-analysis of individual patient data addressing several important gaps in the evidence, including an estimate of the effect on time to death, first major haemorrhage, and thromboembolism. We searched Ovid versions of Embase (1980-2009) and Medline (1966-2009), limiting searches to randomised trials with a maximally sensitive strategy. We approached all authors of included trials and requested individual patient data: primary outcomes were time to death, first major haemorrhage, and first thromboembolic event. We did prespecified subgroup analyses according to age, type of control-group care (anticoagulation-clinic care vs primary care), self-testing alone versus self-management, and sex. We analysed patients with mechanical heart valves or atrial fibrillation separately. We used a random-effects model to calculate pooled hazard ratios, did tests for interaction and heterogeneity, and calculated a time-specific number needed to treat. Of 1357 abstracts, we included 11 trials with data for 6417 participants and 12,800 person-years of follow-up. We reported a significant reduction in thromboembolic events in the self-monitoring group (hazard ratio 0·51; 95% CI 0·31-0·85) but not for major haemorrhagic events (0·88, 0·74-1·06) or death (0·82, 0·62-1·09). Participants younger than 55 years showed a striking reduction in thrombotic events (hazard ratio 0·33, 95% CI 0·17-0·66), as did participants with mechanical heart valves (0·52, 0·35-0·77). Analysis of major outcomes in the very elderly (age ≥85 years, n=99) showed no significant adverse effects of the intervention for all outcomes. Our analysis showed that self-monitoring and self-management of oral anticoagulation is a safe option for suitable patients of all ages. 
Patients should also be offered the option to self-manage their disease with suitable health-care support as back-up. Funding: UK National Institute for Health Research (NIHR) Technology Assessment Programme, UK NIHR National School for Primary Care Research.
    The Lancet 11/2011; 379(9813):322-34. · 39.21 Impact Factor
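The "time-specific number needed to treat" mentioned above can be derived from a hazard ratio under a proportional-hazards assumption. A sketch with a hypothetical control-arm survival figure (not a number from the paper):

```python
def nnt_from_hr(control_event_free, hazard_ratio):
    """Time-specific number needed to treat, assuming proportional
    hazards so that S_treated(t) = S_control(t) ** HR.
    control_event_free: event-free proportion in controls at time t."""
    treated_event_free = control_event_free ** hazard_ratio
    return 1 / (treated_event_free - control_event_free)

# Hypothetical illustration: if 95% of control patients were free of
# thromboembolism at some time point, the pooled HR of 0.51 implies
nnt = nnt_from_hr(0.95, 0.51)  # about 41 patients self-monitoring to prevent one event
```

The NNT depends on both the hazard ratio and the baseline risk at the chosen time point, which is why the paper reports it as time-specific.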
  •
    ABSTRACT: Theory and simulation suggest that randomized controlled trials (RCTs) stopped early for benefit (truncated RCTs) systematically overestimate treatment effects for the outcome that precipitated early stopping. To compare the treatment effect from truncated RCTs with that from meta-analyses of RCTs addressing the same question but not stopped early (nontruncated RCTs) and to explore factors associated with overestimates of effect. Search of MEDLINE, EMBASE, Current Contents, and full-text journal content databases to identify truncated RCTs up to January 2007; search of MEDLINE, Cochrane Database of Systematic Reviews, and Database of Abstracts of Reviews of Effects to identify systematic reviews from which individual RCTs were extracted up to January 2008. Selected studies were RCTs reported as having stopped early for benefit and matching nontruncated RCTs from systematic reviews. Independent reviewers with medical content expertise, working blinded to trial results, judged the eligibility of the nontruncated RCTs based on their similarity to the truncated RCTs. Reviewers with methodological expertise conducted data extraction independently. The analysis included 91 truncated RCTs asking 63 different questions and 424 matching nontruncated RCTs. The pooled ratio of relative risks in truncated RCTs vs matching nontruncated RCTs was 0.71 (95% confidence interval, 0.65-0.77). This difference was independent of the presence of a statistical stopping rule and the methodological quality of the studies as assessed by allocation concealment and blinding. Large differences in treatment effect size between truncated and nontruncated RCTs (ratio of relative risks <0.75) occurred with truncated RCTs having fewer than 500 events. In 39 of the 63 questions (62%), the pooled effects of the nontruncated RCTs failed to demonstrate significant benefit. Truncated RCTs were associated with greater effect sizes than RCTs not stopped early. 
This difference was independent of the presence of statistical stopping rules and was greatest in smaller studies.
    JAMA: The Journal of the American Medical Association 03/2010; 303(12):1180-7. · 29.98 Impact Factor
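The headline comparison here is a ratio of relative risks (truncated vs matching non-truncated trials); values below 1 mean the truncated trial showed the larger benefit. A sketch under the usual independence assumption on the log scale, with made-up inputs (not figures from the paper beyond the formula):

```python
import math

def se_log_rr(lo, hi):
    """Standard error of log RR backed out of a reported 95% CI."""
    return (math.log(hi) - math.log(lo)) / (2 * 1.96)

def ratio_of_rrs(rr_trunc, ci_trunc, rr_full, ci_full):
    """Ratio of relative risks (truncated / non-truncated) with a 95% CI,
    treating the two estimates as independent on the log scale."""
    log_ratio = math.log(rr_trunc) - math.log(rr_full)
    se = math.sqrt(se_log_rr(*ci_trunc) ** 2 + se_log_rr(*ci_full) ** 2)
    return (math.exp(log_ratio),
            math.exp(log_ratio - 1.96 * se),
            math.exp(log_ratio + 1.96 * se))

# Made-up pair: a truncated trial reporting RR 0.50 (0.30-0.83) vs a
# meta-analysis of non-truncated trials reporting RR 0.80 (0.65-0.98)
ratio, lo, hi = ratio_of_rrs(0.50, (0.30, 0.83), 0.80, (0.65, 0.98))  # ratio 0.625
```

The study's pooled value of 0.71 says truncated trials reported effects about 29% larger, on this scale, than their non-truncated counterparts.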
  •
    ABSTRACT: The introduction of portable monitors (point-of-care devices) for the management of patients on oral anticoagulation allows self-testing by the patient at home. Patients who self-test can either adjust their medication according to a pre-determined dose-INR schedule (self-management) or they can call a clinic to be told the appropriate dose adjustment (self-monitoring). Several trials of self-monitoring of oral anticoagulant therapy suggest this may be equal to or better than standard monitoring. To evaluate the effects of self-monitoring or self-management of oral anticoagulant therapy compared to standard monitoring. We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2007, Issue 4), MEDLINE, EMBASE and CINAHL (to November 2007). We checked bibliographies and contacted manufacturers and authors of relevant studies. No language restrictions were applied. Outcomes analysed were thromboembolic events, mortality, major haemorrhage, minor haemorrhage, tests in therapeutic range, frequency of testing, and feasibility of self-monitoring and self-management. The review authors independently extracted data. We used a fixed-effect model with the Mantel-Haenszel method to calculate the pooled risk ratio (RR) and Peto's method to verify the results for uncommon outcomes. We examined heterogeneity amongst studies with the Chi² and I² statistics. We identified 18 randomized trials (4723 participants). Pooled estimates showed significant reductions in both thromboembolic events (RR 0.50, 95% CI 0.36 to 0.69) and all-cause mortality (RR 0.64, 95% CI 0.46 to 0.89). This reduction in mortality remained significant after the removal of low-quality studies (RR 0.65, 95% CI 0.46 to 0.90). 
Trials of self-management alone showed significant reductions in thromboembolic events (RR 0.47, 95% CI 0.31 to 0.70) and all-cause mortality (RR 0.55, 95% CI 0.36 to 0.84); self-monitoring did not (thrombotic events RR 0.57, 95% CI 0.32 to 1.00; mortality RR 0.84, 95% CI 0.50 to 1.41). Self-monitoring significantly reduced major haemorrhages (RR 0.56, 95% CI 0.35 to 0.91) whilst self-management did not (RR 1.12, 95% CI 0.78 to 1.61). Twelve trials reported improvements in the percentage of mean INR measurements in the therapeutic range. No heterogeneity was identified in any of these comparisons. Compared to standard monitoring, patients who self-monitor or self-manage can improve the quality of their oral anticoagulation therapy. The number of thromboembolic events and mortality were decreased without increases in harms. However, self-monitoring or self-management were not feasible for up to half of the patients requiring anticoagulant therapy. Reasons included patient refusal, exclusion by their general practitioner, and inability to complete training.
    Cochrane Database of Systematic Reviews (Online) 01/2010; · 5.70 Impact Factor
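The Mantel-Haenszel fixed-effect pooled risk ratio used in this review reduces to a simple weighted ratio of event counts. A minimal sketch with invented 2x2 tables (not the 18 trials in the review):

```python
def mantel_haenszel_rr(tables):
    """Mantel-Haenszel fixed-effect pooled risk ratio.
    Each table: (events_intervention, n_intervention,
                 events_control, n_control)."""
    # Each trial contributes a_i * n0_i / N_i to the numerator and
    # c_i * n1_i / N_i to the denominator, where N_i is the trial total.
    numerator = sum(a * n0 / (n1 + n0) for a, n1, c, n0 in tables)
    denominator = sum(c * n1 / (n1 + n0) for a, n1, c, n0 in tables)
    return numerator / denominator

# Invented trials: (events with self-monitoring, n, events with usual care, n)
trials = [(10, 200, 20, 200), (8, 150, 15, 150)]
pooled = mantel_haenszel_rr(trials)  # about 0.51
```

Unlike inverse-variance weighting, the Mantel-Haenszel weights stay well behaved when some trials have few or zero events, which is why Cochrane reviews favour it for dichotomous outcomes.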
  •
    ABSTRACT: Selection of the right warfarin dose at the outset of treatment is not straightforward, and current evidence is lacking to determine the optimal strategy for initiation of therapy. We included randomized controlled trials in patients commencing anticoagulation with warfarin, comparing different loading doses or different regimens. We searched Medline, EMBASE, the Cochrane Library and the NHS Health Economics Database up to June 2009. Primary outcomes were time to stable INR and adverse events. We summarised results as the proportion of INRs in range from the date of initiation and compared dichotomous outcomes using relative risks (RR) with 95% confidence intervals (CIs). We included 11 studies of 1,340 patients newly initiated on warfarin. In two studies that used single INR measures, a loading dose of 10 mg compared to 5 mg led to more patients in range on day five. However, in two studies which measured two consecutive INRs, a loading dose of 10 mg compared to 5 mg did not lead to more patients in range on day five (RR = 0.86, 95% CI, 0.62 to 1.19, p = 0.37). Patients receiving a 2.5 mg initiation dose took longer to achieve the therapeutic range, whilst those receiving a calculated initiation dose achieved the target range 0.8 days quicker (4.2 days vs. 5 days, p = 0.007). More elderly patients receiving an age adjusted dose achieved a stable INR compared to the Fennerty protocol (48% vs. 22%, p = 0.02), and significantly fewer patients on the age adjusted regimens had high out-of-range INRs. Two studies reported no significant differences between genotype guided and 5 mg or 10 mg initiation doses, and in the one significant genotype study the control group INRs were significantly lower than expected. Our review findings suggest there is still considerable uncertainty about the choice between a 10 mg and a 5 mg loading dose for initiation of warfarin. In the elderly, lower initiation doses or age adjusted doses are more appropriate, leading to fewer high INRs. 
There is currently insufficient evidence to warrant genotype guided initiation, and adequately powered trials to detect effects on adverse events are needed.
    BMC Cardiovascular Disorders 01/2010; 10:18. · 1.46 Impact Factor
  •
    ABSTRACT: Low cancer awareness contributes to delay in presentation for cancer symptoms and may lead to delay in cancer diagnosis. The aim of this study was to review the evidence for the effectiveness of interventions to raise cancer awareness and promote early presentation with cancer, in order to inform policy and future research. We searched bibliographic databases and reference lists for randomised controlled trials of interventions delivered to individuals, and controlled or uncontrolled studies of interventions delivered to communities. We found some evidence that interventions delivered to individuals modestly increase cancer awareness in the short term and insufficient evidence that they promote early presentation. We found limited evidence that public education campaigns reduce stage at presentation of breast cancer, malignant melanoma and retinoblastoma. Interventions delivered to individuals may increase cancer awareness. Interventions delivered to communities may promote cancer awareness and early presentation, although the evidence is limited.
    British Journal of Cancer 12/2009; 101 Suppl 2:S31-9. · 5.08 Impact Factor
  •
    ABSTRACT: Objective: To identify and quantify symptoms of ovarian cancer in women in primary care. Design: Case-control study, with coding of participants' primary care records for one year before diagnosis. Setting: 39 general practices in Devon, England. Participants: 212 women aged over 40 with a diagnosis of primary ovarian cancer, 2000-7; 1060 controls matched by age and general practice. Main outcome measures: Odds ratios and positive predictive values for symptoms, from conditional logistic regression analyses. Results: Seven symptoms were associated with ovarian cancer in multivariable analysis. The univariable positive predictive values and multivariable odds ratios (with 95% confidence intervals) for these were 2.5% (1.2% to 5.9%) and 240 (46 to 1200) for abdominal distension; 0.5% (0.2% to 0.9%) and 24 (9.3 to 64) for postmenopausal bleeding; 0.6% (0.3% to 1.0%) and 17 (6.1 to 50) for loss of appetite; 0.2% (0.1% to 0.3%) and 16 (5.6 to 48) for increased urinary frequency; 0.3% (0.2% to 0.3%) and 12 (6.1 to 22) for abdominal pain; 0.2% (0.1% to 0.4%) and 7.6 (2.5 to 23) for rectal bleeding; and 0.3% (0.2% to 0.6%) and 5.3 (1.8 to 16) for abdominal bloating. In 181 (85%) cases and 164 (15%) controls at least one of these seven symptoms was reported to primary care before diagnosis. After exclusion of symptoms reported in the 180 days before diagnosis, abdominal distension, urinary frequency, and abdominal pain remained independently associated with a diagnosis of ovarian cancer. Conclusions: Women with ovarian cancer usually have symptoms and report them to primary care, sometimes months before diagnosis. This study provides an evidence base for selection of patients for investigation, both for clinicians and for developers of guidelines.
    BMJ (online) 02/2009; 339:b2998. · 17.22 Impact Factor
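The study derives its odds ratios from conditional logistic regression on matched sets; as a simpler illustration of where an odds ratio and its confidence interval come from, here is the unmatched 2x2-table version with a Woolf (log-scale) CI, using invented counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio from an unmatched 2x2 table with a Woolf 95% CI.
    a/b: cases with/without the symptom; c/d: controls with/without."""
    or_ = (a * d) / (b * c)
    # Woolf's SE of log OR: sqrt of summed reciprocal cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented counts: 10 of 100 cases vs 5 of 100 controls reported a symptom
estimate, lower, upper = odds_ratio_ci(10, 90, 5, 95)  # OR about 2.1
```

Conditional logistic regression does the same job while respecting the age and practice matching, which is why the study's intervals (e.g. 46 to 1200 for abdominal distension) are so much wider for rare symptoms.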
  •
    ABSTRACT: Background: Randomized clinical trials (RCTs) stopped early for benefit often receive great attention and affect clinical practice, but pose interpretational challenges for clinicians, researchers, and policy makers. Because the decision to stop the trial may arise from catching the treatment effect at a random high, truncated RCTs (tRCTs) may overestimate the true treatment effect. The Study Of Trial Policy Of Interim Truncation (STOPIT-1), which systematically reviewed the epidemiology and reporting quality of tRCTs, found that such trials are becoming more common, but that reporting of stopping rules and decisions were often deficient. Most importantly, treatment effects were often implausibly large and inversely related to the number of the events accrued. The aim of STOPIT-2 is to determine the magnitude and determinants of possible bias introduced by stopping RCTs early for benefit. Methods/Design: We will use sensitive strategies to search for systematic reviews addressing the same clinical question as each of the tRCTs identified in STOPIT-1 and in a subsequent literature search. We will check all RCTs included in each systematic review to determine their similarity to the index tRCT in terms of participants, interventions, and outcome definition, and conduct new meta-analyses addressing the outcome that led to early termination of the tRCT. For each pair of tRCT and systematic review of corresponding non-tRCTs we will estimate the ratio of relative risks, and hence estimate the degree of bias. We will use hierarchical multivariable regression to determine the factors associated with the magnitude of this ratio. 
Factors explored will include the presence and quality of a stopping rule, the methodological quality of the trials, and the number of total events that had occurred at the time of truncation. Finally, we will evaluate whether Bayesian methods using conservative informative priors to "regress to the mean" overoptimistic tRCTs can correct observed biases. Discussion: A better understanding of the extent to which tRCTs exaggerate treatment effects and of the factors associated with the magnitude of this bias can optimize trial design and data monitoring charters, and may aid in the interpretation of the results from trials stopped early for benefit.
    Trials 01/2009; · 2.21 Impact Factor
  •
    ABSTRACT: To date, there has been no systematic examination of the relationship between international normalized ratio (INR) control measurements and the prediction of adverse events in patients with atrial fibrillation on oral anticoagulation. We searched MEDLINE, EMBASE, and Cochrane through January 2008 for studies of atrial fibrillation patients receiving vitamin-K antagonists that reported INR control measures (percentage of time in therapeutic range [TTR] and percentage of INRs in range) and major hemorrhage and thromboembolic events. In total, 47 studies were included from 38 published articles. TTR ranged from 29% to 75%; percentage of INRs ranged from 34% to 84%. From studies reporting both measures, TTR significantly correlated with percentage of INRs in range (P<0.001). Randomized controlled trials had better INR control than retrospective studies (64.9% versus 56.4%; P=0.01). TTR negatively correlated with major hemorrhage (r=-0.59; P=0.002) and thromboembolic rates (r=-0.59; P=0.01). This effect was significant in retrospective studies (major hemorrhage, r=-0.78; P=0.006 and thromboembolic rate, r=-0.88; P=0.03) but not in randomized controlled trials (major hemorrhage, r=0.18; P=0.33 and thromboembolic rate, r=-0.61; P=0.07). For retrospective studies, a 6.9% improvement in the TTR significantly reduced major hemorrhage by 1 event per 100 patient-years of treatment (95% CI, 0.29 to 1.71 events). In atrial fibrillation patients receiving orally administered anticoagulation treatment, TTR and percentage of INRs in range effectively predict INR control. Data from retrospective studies support the use of TTR to accurately predict reductions in adverse events.
    Circulation Cardiovascular Quality and Outcomes 11/2008; 1(2):84-91. · 5.66 Impact Factor
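The correlations reported here are ordinary Pearson coefficients computed across study-level summaries. A minimal sketch with invented study-level data (not the 47 studies analysed):

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Invented study-level data: TTR (%) against major haemorrhage rate
# (events per 100 patient-years); higher TTR, fewer bleeds
ttr = [40, 50, 60, 70, 75]
bleed_rate = [4.0, 2.9, 3.0, 1.8, 1.6]
r = pearson_r(ttr, bleed_rate)  # strongly negative, about -0.96
```

A negative r of the size reported (about -0.59) reflects a looser version of this pattern: studies achieving better INR control tended to report fewer adverse events.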
  •
    ABSTRACT: Although the effectiveness of mammography for women under the age of 50 years with a family history of breast cancer (FHBC) has not yet been proven, annual screening is being offered to these women to manage breast cancer risk. This study investigates women's awareness and interpretation of their familial risk and knowledge and views about mammographic screening. A total of 2231 women from 21 familial/breast/genetics centres who were assessed as moderate risk (17-30% lifetime risk) or high risk (>30% lifetime risk) completed a questionnaire before their mammographic screening appointment. Most women (70%) believed they were likely, very likely or definitely going to develop breast cancer in their lifetime. Almost all women (97%) understood that the purpose of mammographic screening was to allow the early detection of breast cancer. However, 20% believed that a normal mammogram result meant there was definitely no breast cancer present, and only 4% understood that screening has not been proven to save lives in women under the age of 50 years. Women held positive views on mammography but did not appear to be well informed about the potential disadvantages. These findings suggest that further attention should be paid to improving information provision to women with an FHBC being offered routine screening.
    British Journal of Cancer 11/2008; 99(7):1007-12. · 5.08 Impact Factor
  •
    ABSTRACT: To explore English women's experiences of cervical screening result communication. Qualitative study consisting of seven focus groups conducted between May 2005 and April 2006. 33 women with a range of screening results (normal, inadequate, borderline and abnormal) who had recently been for cervical screening, and five women who had attended a colposcopy appointment for the first time following screening. Three screening centres (Hampshire, Reading and Sheffield) and one colposcopy clinic (Oxford) in England. Unsatisfactory result communication (eg, delivery of out-of-date and conflicting information) on the part of both screening centres and primary care teams was highlighted. Variable levels of general practitioner involvement in screening result provision were experienced; result-giving strategies included personal as well as generic letters and telephone calls. Means for improving women's understanding of abnormal results were described including the use of diagrams to explain the progression of cell changes, the provision of updates regarding any changes in cell abnormalities between screening tests (ie, lesion progression or regression) and contact with a knowledgeable "intermediary" outside primary care. The timely provision of appropriate information is an important aspect of any screening programme. Our findings suggest that there is scope for improvement in both the delivery and content of cervical screening result notifications. Regular review of patient result-giving strategies on the part of screening centres and general practices could help ensure that screening programme standards for written information are met. Enhanced communication between primary care teams and screening centres could facilitate the provision of consistent and clear result messages thereby improving women's cervical screening experiences.
    Quality and Safety in Health Care 11/2008; 17(5):334-8. · 2.16 Impact Factor
  •
    ABSTRACT: Background: Symptoms of ovarian cancer are often vague and consequently a high proportion of women with ovarian cancer are not referred to the appropriate clinic. Objective: To identify diagnostic factors for ovarian cancer. Design: A qualitative and quantitative study. Setting: Four UK hospitals. Population: One hundred and twenty-four women referred to hospital with suspected ovarian malignancy. Methods: Women were interviewed prior to diagnosis (n = 63), or soon after. A thematic analysis was conducted. Emergent symptoms were quantitatively analysed to identify distinguishing features of ovarian cancer. Main outcome measures: Symptoms in women with and without ovarian cancer. Results: Diagnoses comprised 44 malignancies, 59 benign gynaecological pathologies and 21 normal findings. Of the malignancies, 25 women had stage III or more disease, with an average age of 59 years. The benign/normal cohort was significantly younger (48 years). Multivariate analysis revealed persistent abdominal distension (OR 5.2, 95% CI 1.3-20.5), postmenopausal bleeding (OR 9.2, 95% CI 1.1-76.1), appetite loss (OR 3.2, 95% CI 1.1-9.2), early satiety (OR 5.0, 95% CI 1.6-15.7) and progressive symptoms (OR 3.6, 95% CI 1.3-9.8) as independent, statistically significant variables associated with ovarian cancer. Fluctuating distension was not associated with ovarian cancer (OR 0.4, 95% CI 0-4.1). Women frequently used the term bloating, but this represented two distinct events: persistent abdominal distension and fluctuating distension/discomfort. Conclusions: Ovarian cancer is not a silent killer. Clinicians should distinguish between persistent and fluctuating distension. Recognition of the significance of symptoms described by women could lead to earlier and more appropriate referral.
    BJOG: An International Journal of Obstetrics & Gynaecology 07/2008; 115(8):1008-14. · 3.76 Impact Factor
  •
    ABSTRACT: This longitudinal study investigated pre-screening factors that predicted breast cancer-specific distress among 1286 women who were undergoing annual mammography screening as part of a UK programme for younger women (i.e., under 50) with a family history of breast cancer. Women completed questionnaires one month prior to screening, and one and six months after receiving screening results. Factors measured were breast cancer worry, perceived risk, cognitive appraisals, coping, dispositional optimism, and background variables relating to screening history and family history. Pre-screening cancer worry was the most important predictor of subsequent worry, explaining 56/61% and 54/57% of the variance at one and six months follow-up, respectively. Other salient pre-screening predictors included high perceived risk of breast cancer, appraisals of high relevance and threat associated with the family history, and low perceived ability to cope emotionally. Women who had previously been part of the screening programme and those with a relative who had recently died from breast cancer were also vulnerable to longer-term distress. A false positive screening result, pessimistic personality, and coping efforts relating to religion and substance use predicted outcomes of screening at one month follow-up, but were not predictive in the longer-term. Early intervention to ameliorate high levels of cancer-related distress and negative appraisals would benefit some women as they progress through the familial breast screening programme.
    Psycho-Oncology 05/2008; 17(12):1180-8. · 3.51 Impact Factor
  •
    ABSTRACT: This multi-centre study examined factors associated with breast cancer-specific distress in 2321 women under 50 who were on a mammographic screening programme on account of their family history. Women were recruited from 21 UK centres and completed a questionnaire one month before their screening appointment. The transactional theory of stress, appraisal, and coping provided the theoretical framework for the study. Factors measured included screening history, family history, perceived risk, cognitive appraisals, coping, optimism, and cancer worry. The findings indicate that the majority of women appraised their family history as relevant and somewhat threatening to personal well-being, but as something they could deal with emotionally. Acceptance was the most commonly used coping strategy. Hierarchical regression analysis identified that the factors most significantly associated with distress were an appraisal of high relevance and threat, increased risk perception, low dispositional optimism, and the use of both avoidant and task-orientated coping strategies. Women with children and those with relatives who had died from breast cancer were also more distressed. To conclude, most women appraised their situation positively, but there is a potential profile of risk factors which may help clinicians identify those women who need extra psychological support as they progress through screening.
    Psycho-Oncology 02/2008; 17(1):74-82. · 3.51 Impact Factor
  • Alexander Swanton, Clare R Bankhead, Sean Kehoe
    ABSTRACT: Borderline ovarian tumours account for 10-15% of all ovarian cancers, and numerous studies have indicated their excellent long-term prognosis. As this disease commonly affects younger women, the issue of fertility-preserving surgery is increasingly important. A systematic review of the literature was performed, searching the relevant electronic databases for studies of conservative surgery for borderline ovarian tumours that reported pregnancy rates/fertility outcomes. Overall, 19 studies met the inclusion criteria. Across these studies, 2479 patients had borderline ovarian tumours, of whom 923 (37%) were treated by conservative surgery. Nine studies recorded data on pregnancy outcome. From these data, where recorded, a pregnancy rate of 48% was calculated as the number of pregnancies achieved relative to the number of women wanting to conceive. The recurrence rate after conservative treatment was 16%, with only five recorded disease-related deaths. Knowledge of the pregnancy rates is important to permit appropriate counselling of women diagnosed with this malignancy.
    European Journal of Obstetrics & Gynecology and Reproductive Biology 12/2007; 135(1):3-7. · 1.84 Impact Factor