Carl Heneghan

University of Oxford, Oxford, England, United Kingdom


Publications (195) · 1474.39 Total impact

  •
    ABSTRACT: Post-marketing withdrawal of medicinal products because of deaths can be occasioned by evidence obtained from case reports, observational studies, randomized trials, or systematic reviews. There have been no studies of the pattern of withdrawals of medicinal products to which deaths have been specifically attributed and the evidence that affects such decisions. Our objectives were to identify medicinal products that were withdrawn after marketing in association with deaths, to search for the evidence on which withdrawal decisions were based, and to analyse the delays involved and the worldwide patterns of withdrawal. We searched the World Health Organization's Consolidated List of [Medicinal] Products, drug regulatory authorities' websites, PubMed, Google Scholar, and textbooks on adverse drug reactions. We included medicinal products for which death was specifically mentioned as a reason for withdrawal from the market. Non-human medicines, herbal products, and non-prescription medicines were excluded. One reviewer extracted the data and a second reviewer verified them independently. We found 95 drugs for which death was documented as a reason for withdrawal between 1950 and 2013. All were withdrawn in at least one country, but at least 16 remained on the market in some countries. Withdrawals were more common in European countries; few were recorded in Africa (5.3%). The more recent the launch date, the sooner deaths were reported. However, in 47% of cases more than 2 years elapsed between the first report of a death and withdrawal of the drug, and the interval between the first report of a death attributed to a medicinal product and eventual withdrawal of the product has not improved over the last 60 years. These results suggest that some deaths associated with these products could have been avoided. 
Manufacturers and regulatory authorities should expedite investigations when deaths are reported as suspected adverse drug reactions and consider early suspensions. Increased transparency in the publication of clinical trials data and improved international co-ordination could shorten the delays in withdrawing dangerous medicinal products after reports of deaths and obviate discrepancies in drug withdrawals in different countries.Please see related article: http://dx.doi.org/10.1186/s12916-015-0270-2.
    BMC Medicine 12/2015; 13(1):26. DOI:10.1186/s12916-014-0262-7 · 7.28 Impact Factor
  •
ABSTRACT: General practice is increasingly used as a learning environment in undergraduate medical education in the UK. The aim of this project was to identify, summarise and synthesise research about undergraduate medical education in general practice in the UK. We systematically identified studies of undergraduate medical education within a general practice setting in the UK from 1990 onwards. All papers were summarised in a descriptive report and categorised into two in-depth syntheses: a quantitative and a qualitative in-depth review. 169 papers were identified, representing research from 26 UK medical schools. The in-depth review of quantitative papers (n = 7) showed that medical students learned clinical skills as well as, or better than, they did in hospital settings. Students receive more teaching, and clerk and examine more patients, in the general practice setting than in hospital. Patient satisfaction and enablement are similar whether or not a student is present in a consultation; however, patients experience lower relational empathy. Two main thematic groups emerged from the qualitative in-depth review (n = 10): the interpersonal interactions within the teaching consultation, and the socio-cultural spaces of learning which shape these interactions. The GP has a role as a broker of the interactions between patients and students. General practice is a socio-cultural and developmental learning space for students, who need to negotiate the competing cultures of hospital and general practice. Lastly, patients are transient members of the learning community, and their role requires careful facilitation. General practice teaching of clinical skills is as good as, if not better than, hospital-based delivery. Our meta-ethnography has produced rich understandings of the complex relationships shaping possibilities for active participation in learning by students and patients.
    Medical Teacher 05/2015; DOI:10.3109/0142159X.2015.1032918 · 2.05 Impact Factor
  •
ABSTRACT: Background: Description of the condition: Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia (Wyndham 2000). It has been estimated that there are more than 2.3 million cases in the United States, with an estimated increase to up to 15.9 million persons by 2050. More than 46,000 new cases are diagnosed each year in the United Kingdom (Agarwal 2005; Testai 2010). The prevalence of AF increases with age, from an estimated 0.5% in the age group 50 to 59 years to approximately 9% in individuals older than 70 years. The lifetime risk of developing AF is approximately one in four (Agarwal 2005; Brieger 2009). The majority of cases of AF, be it paroxysmal or permanent, are ascribed to cardiovascular disorders such as ischaemic heart disease, hypertension, cardiac failure and valvular heart abnormalities. Other, non-cardiac causes include hyperthyroidism, and only a minority of cases (estimated at 11%) have no identifiable cause (lone AF) (Agarwal 2005). The resultant arrhythmia leads to increased blood stasis within the atria. This, in combination with other factors such as an ageing vessel wall and changes in blood components, leads to an increased risk of venous thromboembolus formation (Watson 2009). As a result, the main morbidity and mortality associated with atrial fibrillation relate to the risk of ischaemic stroke, which is increased five-fold (Hart 2001). However, this risk is thought to vary from one individual to another, with the leading risk factors being previous history of stroke or transient ischaemic attack (TIA), increasing age, hypertension, and structural heart disease in the presence of AF (Hughes 2008). These have led to several clinical prediction rules to estimate the risk of stroke in paroxysmal and permanent AF, along with the best option for pharmacological prophylaxis.
Of these, the CHADS2 risk stratification score was found to have the highest ability to correctly rank-order patients by risk (Hughes 2008). The mainstay of venous thromboembolism prophylaxis and stroke prevention in AF has thus far been either a vitamin K antagonist (VKA) such as warfarin or an antiplatelet agent such as aspirin. An earlier systematic review comparing long-term anticoagulation (warfarin) with antiplatelet treatment (aspirin) suggested that the included trials (all pre-1989) were too weak to establish any value of long-term anticoagulation (Taylor 2001). However, a more recent meta-analysis of 28,044 participants showed that stroke was reduced by 64% in those on dose-adjusted warfarin and by 22% in those on antiplatelet agents; warfarin, in comparison with aspirin, leads to a 39% relative risk reduction in stroke (Hart 2007). The decision as to whether a patient receives warfarin or aspirin depends on weighing risk against benefit. Those at low risk, or in whom warfarin is contraindicated, may well be managed on aspirin alone, whereas patients at higher risk may benefit from warfarin. Patients who fall into an intermediate risk category may benefit from either treatment, and the decision is largely based on individual risk. Table 1 summarises the criteria for low, intermediate and high risk stratification (Lafuente-Lafuente 2009). Description of the intervention: The benefits of warfarin therapy in stroke reduction for AF patients are well established. However, these benefits are offset by increased side effects and the need for regular monitoring. The most serious complication of warfarin use is increased haemorrhagic risk. Two meta-analyses have suggested that there is a greater than two-fold increase in the risk of serious major haemorrhagic bleeding with warfarin use compared with placebo or aspirin (Segal 2001; Hart 2007). This risk is increased when warfarin and aspirin are combined, without any benefit in stroke prevention (Flaker 2006).
Another significant problem with warfarin use is its narrow therapeutic window. To prevent under- and over-anticoagulation, patients on warfarin require regular monitoring of their international normalised ratio (INR). Most guidelines suggest that patients on warfarin for AF should have an INR of between 2 and 3 (Lip 2007). Sub-optimal levels are associated with a greater risk of complications. One study looked at mortality within 30 days of admission to hospital with stroke: among patients taking warfarin at the time of the stroke, 16% of those with an INR <2 died within 30 days, compared with 6% of those with an INR >2 (Hylek 2003). The same study also showed that increased haemorrhagic risk was associated with an INR >4. Tight INR control requires regular monitoring and is thought to be one of the contributing factors to poor adherence to warfarin. A prospective cohort study of patients presenting to secondary care with AF found that in 56% of patients on anticoagulation, treatment did not adhere to international guidelines. Reasons for this were thought to include poor understanding of treatment, the logistics of regular monitoring, and the reluctance of physicians to prescribe warfarin correctly for fear of potential drug interactions and complications (Mehta 2004). Several alternatives to warfarin have emerged over the past ten years. These include direct thrombin inhibitors (DTIs) and factor Xa inhibitors. These newer drugs have several potential advantages over traditional VKAs: for example, they do not require regular monitoring, have a faster onset of action, and potentially have fewer adverse interactions. However, it is not yet clear how efficacious they are, or what the associated risks of adverse events might be (Verheugt 2010). Parenteral DTIs, such as hirudin, argatroban, and bivalirudin, have been evaluated in clinical trials in acute settings such as percutaneous coronary intervention (PCI) and acute coronary syndrome (ACS), with mixed results (Hirsh 2005).
The two oral DTIs furthest into clinical trials are ximelagatran and dabigatran. Trials of ximelagatran in preventing venous thromboemboli have shown superiority over placebo without increased risks of bleeding (Schulman 2003). The Stroke Prevention Using Oral Thrombin Inhibitor in Atrial Fibrillation (SPORTIF) III and SPORTIF V trials concluded ximelagatran was non-inferior to warfarin in preventing stroke with no increase in bleeding events (Olsson 2003; Albers 2005). However, serious concerns of hepatotoxicity have resulted in ximelagatran being withdrawn from the market (Kaul 2005). Early trials of dabigatran have shown promise. In the Randomized Evaluation of Long Term Anticoagulation Therapy (RE-LY) trial, oral dabigatran, when given at a dose of 110 mg, was found to be associated with rates of stroke and systemic embolism that were similar to those associated with warfarin. At a dose of 150 mg, compared with warfarin, dabigatran was associated with lower rates of stroke and systemic embolism but similar rates of major haemorrhage (Connolly 2009). Factor Xa inhibitors currently include idraparinux, apixaban, rivaroxaban, and edoxaban. The AMADEUS trial compared once weekly subcutaneous injections of idraparinux with oral warfarin (Amadeus Investigators 2008). While idraparinux was non-inferior to warfarin, there were significantly higher rates of bleeding. A biotinylated version entered phase III clinical trials but the trial was terminated early by the manufacturer. A number of clinical trials for oral factor Xa inhibitors are currently underway. The ROCKET AF study is a non-inferiority study comparing rivaroxaban to warfarin in atrial fibrillation patients (Investigators 2010). A recent randomised controlled trial looked at the effect of the oral factor Xa inhibitor apixaban against aspirin in those patients unsuitable for warfarin (Connolly 2011). 
This trial was stopped early as a result of the clear benefit of apixaban over aspirin in reducing stroke and adverse bleeding events. Apixaban also appeared superior when compared with warfarin (Granger 2011). Other direct thrombin and factor Xa inhibitors in clinical trials include AZD0837 (Lip 2009) and YM150 (Astellas 2007). Table 2 summarises the main direct thrombin and factor Xa inhibitors currently being investigated for thromboembolic prevention in atrial fibrillation. How the intervention might work: Both the intrinsic and extrinsic coagulation pathways result in fibrin activation. Directly before this step is the conversion of prothrombin to thrombin, which in turn is dependent upon activation of factor Xa. Warfarin interrupts this cascade indirectly, through the inhibition of the vitamin K dependent factors II, VII, IX, and X. In contrast, direct thrombin inhibitors (DTIs) bind to and inhibit thrombin, the most potent platelet agonist. This has the added advantage of preventing feedback activation of factors V, VIII, and XI. Inhibitors of factor Xa inhibit the formation of thrombin by binding directly to its precursor (Eriksson 2011). Why it is important to do this review: Current management of anticoagulation for reducing stroke risk in AF patients involves a clinical judgement of risk versus benefit when deciding who should receive anticoagulation and which type to use. Current guidelines advocate the use of either aspirin or warfarin. While warfarin has shown clear superiority over aspirin in risk reduction, it may not be suitable for some patients. Long-term use of warfarin requires regular monitoring through blood tests. This may be less suitable for patients with poor mobility, those who are housebound, or those with poor access to regular blood testing within the community.
In addition, many of the patients being considered for warfarin are likely to be on other medications, and there are established risks of interactions with other drugs, with subsequent effects on the INR. Most importantly, warfarin use is associated with higher rates of both minor and major bleeding events. Newer anticoagulant drugs act on the coagulation pathways through, for example, direct inhibition of thrombin or of other factors such as factor Xa. These newer drugs have potential benefits over warfarin: they require no monitoring, their onset and offset of action occur over a shorter time frame than warfarin's, and they have less potential for drug interactions. One study also found that dabigatran was as cost-effective as warfarin, although it is unclear whether this is the case for the other newer inhibitors (Patel 2010). However, there are concerns over the safety profile and risk of adverse events for some of these drugs. In addition, it remains unclear how these drugs could be monitored, or rapidly reversed in the scenario of overdosing. Some drug trials have been stopped early, and their results not published, for unclear reasons, which has further raised concerns over the safety profile of these newer drugs. At least two other reviews in the Cochrane Library (Aguilar 2007; Bruins 2011) suggest some overlap with this one. However, the Aguilar 2007 review compares only warfarin versus antiplatelet agents for stroke prevention in non-valvular AF, and the protocol from Bruins 2011 evaluates only factor Xa inhibitors against VKAs for AF. Therefore a systematic review is needed to assess the effects of both direct thrombin inhibitors and factor Xa inhibitors in the prevention of stroke in AF. Objectives: To assess the effectiveness of direct thrombin inhibitors and factor Xa inhibitors on clinical outcomes in patients with AF.
    Cochrane database of systematic reviews (Online) 05/2015; In press. · 5.70 Impact Factor
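For context, the CHADS2 rule cited in the abstract above (Hughes 2008) is a simple additive score: one point each for congestive heart failure, hypertension, age 75 or over, and diabetes, and two points for a prior stroke or TIA. A minimal sketch of the scoring, with illustrative parameter names not taken from the review:

```python
def chads2_score(chf: bool, hypertension: bool, age: int,
                 diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """Compute the CHADS2 stroke-risk score (range 0-6) for a patient with AF."""
    score = 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if age >= 75 else 0       # age threshold is 75 years
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_or_tia else 0  # prior stroke/TIA carries double weight
    return score

# Illustrative example: a hypertensive 80-year-old with a previous TIA
# scores 1 (hypertension) + 1 (age >= 75) + 2 (TIA) = 4.
example = chads2_score(chf=False, hypertension=True, age=80,
                       diabetes=False, prior_stroke_or_tia=True)
```

Higher scores correspond to higher annual stroke risk and hence a stronger case for anticoagulation over antiplatelet therapy, as discussed in the abstract.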
  •
ABSTRACT: The aim of this systematic review was to evaluate the evidence for or against the effectiveness of grapefruit (Citrus paradisi) on body weight, blood pressure and lipid profile. Electronic searches were conducted in MEDLINE, EMBASE, AMED and the Cochrane Clinical Trials databases to identify relevant human randomized clinical trials (RCTs). Hand searches of bibliographies were also conducted. Only overweight and obese subjects were included. Reporting quality was assessed using the CONSORT checklist, and the strength of the overall body of evidence was rated using the GRADE criteria. 154 citations were identified, and three RCTs with a total of 250 participants were included. The RCTs were of moderate quality. A meta-analysis for change in body weight failed to reveal a significant difference between grapefruit and controls, MD: -0.45 kg (95% CI: -1.06 to 0.16; I(2) = 53%), but analysis revealed a significant decrease in systolic blood pressure, MD: -2.43 mmHg (95% CI: -4.77 to -0.09; I(2) = 0%). The paucity of RCTs, the short durations of the interventions, and the lack of an established minimum effective dose limit the conclusions that can be drawn about the effects of grapefruit on body weight and metabolic parameters. Further clinical trials evaluating the effects of grapefruit are warranted.
    Critical reviews in food science and nutrition 04/2015; DOI:10.1080/10408398.2014.901292 · 5.55 Impact Factor
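The pooled mean differences reported above come from standard meta-analysis. As a hedged illustration of the underlying arithmetic (this is generic inverse-variance fixed-effect pooling, and the study-level numbers below are invented for demonstration, not the trials from the review):

```python
import math

def pool_fixed_effect(mds, ses):
    """Inverse-variance fixed-effect pooling of mean differences.

    mds: per-study mean differences; ses: their standard errors.
    Returns (pooled MD, 95% CI lower bound, 95% CI upper bound).
    """
    weights = [1.0 / se ** 2 for se in ses]          # weight = 1 / variance
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))        # SE of the pooled estimate
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Invented systolic-BP mean differences (mmHg) from three small trials:
md, lo, hi = pool_fixed_effect(mds=[-3.0, -1.5, -2.5], ses=[2.0, 1.5, 2.5])
```

A confidence interval that excludes zero, as in the systolic blood pressure result quoted above, is what marks the pooled effect as statistically significant.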
  •
ABSTRACT: Objective: To assess the diagnostic accuracy of recommendations for self-monitoring of blood pressure for diagnosing hypertension in primary care. Methods: 247 consecutive participants with raised (>130 mm Hg systolic) blood pressure (BP) measured by their general practitioner (GP), from four primary care practices in the UK, underwent 28 days of self-monitoring followed by 24-hour ambulatory blood pressure monitoring (ABPM). We assessed the diagnostic accuracy of the first seven days of self-monitored BP (minimum four days, discarding readings on day one) in detecting hypertension, with ambulatory blood pressure as the reference standard. Results: 203 participants were included, 109 (53.7%) of whom were diagnosed with hypertension using daytime ambulatory BP. The average of days two to seven of self-monitored BP correctly classified 150 of 203 participants (sensitivity 93.6%; 95% CI 87.2%-97.4%; specificity 51.1%; 95% CI 40.5%-61.5%). However, the average of days two to five correctly classified 152 of 203 participants, owing to better specificity (53.2%; 95% CI 42.6%-63.6%). In sensitivity analysis, diagnostic accuracy was not improved by including readings beyond day five, and including readings taken on day one had no impact on diagnostic accuracy. Self-monitoring in the clinic was more accurate than readings taken by the GP, but not more accurate than self-monitoring outside the clinic. Conclusion: Hypertension can be ruled out in the majority of patients with elevated clinic BP using the average of the first five consecutive days of self-monitored BP, supporting the lower limits for self-monitoring readings in current guidelines. Performing readings beyond day five and including readings taken on the first day have no clinically relevant impact on diagnostic accuracy.
    Journal of Hypertension 04/2015; 33:755-72. · 4.22 Impact Factor
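The accuracy figures in the abstract above follow directly from the 2x2 classification table against the ambulatory reference. A sketch of that calculation, with counts reconstructed illustratively from the reported percentages (109 hypertensive on ABPM, of whom roughly 102 screened positive on days two to seven; 94 non-hypertensive, of whom roughly 48 screened negative):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Counts reconstructed (illustratively) from the reported results for the
# days-two-to-seven average of self-monitored BP vs daytime ambulatory BP.
# Note 102 + 48 = 150 correctly classified of 203, matching the abstract.
sens, spec = sensitivity_specificity(tp=102, fn=7, tn=48, fp=46)
```

The high sensitivity and modest specificity are why the abstract frames self-monitoring as a rule-out test: a negative result makes hypertension unlikely, while a positive result still needs ambulatory confirmation.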
  •
ABSTRACT: Description of the condition: Hypertension is the leading mortality risk factor in middle- and high-income countries and the second biggest mortality risk factor in low-income countries, where elevated BP is expected to represent a greater disease burden than communicable disease by 2030 (WHO 2013; Hendriks 2012; WHO 2009). Hypertension is defined as a systolic blood pressure (BP) of >140 mmHg or a diastolic BP of >90 mmHg (European Society of Hypertension 2013; NICE 2011; Chobanian 2003). The risk associated with hypertension is due to its role as a precursor to cardiovascular, cerebrovascular, renal and various metabolic disorders. The majority of cases represent primary hypertension, where the exact cause is unknown. However, lifestyle factors such as high body mass index (BMI), low physical activity levels and high salt consumption have a long-term association with hypertension (WHO 2013). In addition, multiple pathophysiological processes may also contribute to hypertension, such as insulin resistance, dyslipidaemia, deregulated angiotensin II and aldosterone activity, enhanced oxidative stress, sympathetic overactivity, raised levels of inflammatory mediators and renal dysfunction (Yanai 2008; El-Atat 2004). Regardless of the underlying mechanism, there is a strong positive association between lowering blood pressure and reduced risk of cardiovascular disease (Lewington 2002). Estimates indicate that lowering systolic BP by 20 mmHg and diastolic BP by 10 mmHg can achieve a five-fold reduction in stroke risk and a two-fold reduction in deaths related to ischaemic heart disease and other cardiovascular disease (Lewington 2002). Elevated BP is associated with increased sympathetic or decreased parasympathetic activity of the nervous system (Guyenet 2006). Therefore, lifestyle factors influencing the sympathovagal balance are thought to play an important role in the development of hypertension.
For example, chronic stress stimulates the sympathetic system, whereas recreational physical exercise decreases the resting sympathetic tone (Huang 2013). Hypertensive patients have reported low levels of physical activity and high levels of work stress, both contributing to elevated basal sympathetic tone (Hayes 2008, Vrijkotte 2000). Description of the intervention: Pharmacotherapies are the mainstay of hypertension treatments. However, medication adherence is generally inadequate and it has been estimated that less than 50% of hypertension patients achieve their target BP (Wang 2005). Proposed non-pharmacological interventions to reduce BP have included device-guided breathing exercises, which slow the user's breathing rate and have been shown to have a positive impact on reducing blood pressure (Schein 2001). The RESPeRATE® (InterCure Ltd, Israel) device is an FDA-approved adjunct antihypertensive treatment that guides the periodicity and depth of the user's breathing and is intended for adult use in a home setting (FDA 2002). The device consists of a control box, chest sensor and headphones. Users listen to a melody that guides the listener to slow their breathing rate, aiming for 10 breaths per minute or lower. According to the manufacturer, the device should be used a minimum of 3-4 times a week for 15 minutes, aiming for at least 40 minutes of slow breathing per week. How the intervention might work: The mechanism whereby slow breathing can lower blood pressure is unclear. Several research studies suggest that effects are achieved via modulating sympathetic and parasympathetic nervous system activity through respiratory sinus arrhythmia and increased vagal tone (Sharma 2011, Bernardi 2002). Deep breathing activates arterial and cardiopulmonary stretch receptors, leading to an increase in vagal tone, decreased central sympathetic outflow and a consequent decrease in blood pressure.
Long-term effects on blood pressure are thought to be achieved by increased sensitivity of arterial and cardiopulmonary baroreceptors and augmentation of baroreflex sensitivity (Bernardi 2002; Raupach 2008) (Figure 1). It has also been hypothesized that slow breathing lowers blood pressure by affecting the relationship between cardiac and respiratory nuclei in the central nervous system and thereby positively altering efferent sympathetic outflow to peripheral vasculature (Brook 2013). Why it is important to do this review: RESPeRATE® device-guided breathing instruments are readily available and marketed as effective non-pharmacological treatments for high blood pressure despite equivocal evidence of their efficacy. Our previous review, which included eight randomised controlled trials (RCTs), demonstrated that short-term use of device-guided breathing reduced both systolic and diastolic blood pressure (Mahtani 2012). However, due to a limited number of small, low-quality trials, our review was inconclusive regarding the long-term efficacy of device-guided breathing. Furthermore, no clinical efficacy was observed after removal of five trials sponsored by the device manufacturer from our analysis. Since our original review, new trials have been published showing mixed results (Howorka 2013, Landman 2013). Therefore, a Cochrane systematic review that includes the most up-to-date RCTs investigating the blood-pressure-lowering effects of device-guided breathing is needed. Objectives: Primary objective: To assess the difference in systolic or diastolic blood pressure or both between participants randomised to a device that guides breathing rate or a control group. Secondary objectives: To assess differences in heart rate, quality of life, adverse events and side effects between participants randomised to a device that guides breathing rate or a control group. Compliance will also be assessed in the intervention group.
    Cochrane database of systematic reviews (Online) 04/2015; In press. · 5.70 Impact Factor
    ABSTRACT: Objective To assess the diagnostic accuracy of recommendations for self-monitoring blood pressure for diagnosing hypertension in primary care. Methods 247 consecutive participants with raised (>130 mm Hg systolic) blood pressure (BP) measured by their general practitioner (GP) from four primary care practices in the UK underwent 28 days of self-monitoring followed by 24-hour ambulatory blood pressure monitoring (ABPM). We assessed the diagnostic accuracy of the first seven days of self-monitored BP (minimum four days, discarding readings on day one) in detecting hypertension, with ambulatory BP as the reference standard. Results 203 participants were included, 109 (53.7%) of whom were diagnosed with hypertension using daytime ambulatory BP. The average of days two to seven of self-monitored BP correctly classified 150 of 203 participants (sensitivity, 93.6%; 95% CI, 87.2%-97.4%; specificity, 51.1%; 95% CI, 40.5%-61.5%). However, the average of days two to five of self-monitoring correctly classified 152 of 203 participants owing to better specificity (53.2%; 95% CI, 42.6%-63.6%). In sensitivity analysis, diagnostic accuracy was not improved by inclusion of readings beyond day five, and inclusion of readings taken on day one had no impact on diagnostic accuracy. Self-monitoring in the clinic was more accurate than readings taken by the GP, but not more accurate than self-monitoring outside the clinic. Conclusion Hypertension can be ruled out in the majority of patients with elevated clinic BP using the average of the first five consecutive days of self-monitored BP, supporting lower limits for self-monitoring readings in current guidelines. Performing readings beyond day five and including readings taken on the first day have no clinical impact on diagnostic accuracy.
    Journal of Hypertension 04/2015; In press(4). DOI:10.1097/HJH.0000000000000489 · 4.22 Impact Factor
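The sensitivity and specificity in the abstract above follow from a standard 2×2 diagnostic table. As an illustrative sketch, the counts below are reconstructed from the reported totals (109 hypertensive of 203 participants; 150 correctly classified by the days two-to-seven average) and should be treated as an assumption, not data taken directly from the paper:

```python
# Sensitivity and specificity from a 2x2 diagnostic table.
# Counts are reconstructed from the reported totals and are illustrative only.

def diagnostic_accuracy(tp, fp, fn, tn):
    """Return (sensitivity, specificity) as fractions."""
    sensitivity = tp / (tp + fn)   # proportion of true hypertensives detected
    specificity = tn / (tn + fp)   # proportion of normotensives correctly ruled out
    return sensitivity, specificity

# Hypothetical 2x2 implied by the abstract: 109 hypertensive, 94 not,
# 150 of 203 correctly classified by the days two-to-seven average.
sens, spec = diagnostic_accuracy(tp=102, fp=46, fn=7, tn=48)
print(round(sens * 100, 1), round(spec * 100, 1))  # 93.6 51.1
```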
    ABSTRACT: In addition to mean blood pressure, blood pressure variability is said to have important prognostic value in evaluating cardiovascular risk. We aimed to assess the potential for 24-hour blood pressure variability as a prognostic index. Using MEDLINE, EMBASE and Cochrane Library to April 2013, we conducted a systematic review of prospective studies of adults, with at least one year follow-up and any blood pressure variability measure up to 24 hours as a predictor of one or more of the following outcomes: all-cause mortality, cardiovascular mortality, all cardiovascular events, stroke and coronary heart disease. We examined how 24-hour blood pressure variability is defined and how its prognostic use is reported. For studies reporting relative risks adjusted for covariates including 24-hour blood pressure, we considered the potential for applications of meta-analysis. Our analysis of methods included 24 studies and analysis of predictions included 15 studies. There were 31 different measures of blood pressure variability and 13 definitions of night- and day-time periods. Median follow-up was 5.5 years (interquartile range 4.2-7.0). Comparing measures of dispersion, coefficient of variation was less well researched than standard deviation. Night dipping based on percentage change was the most researched measure and the only measure for which data could be meaningfully pooled. Night dipping or lower night-time blood pressure was associated with lower risk of cardiovascular events. We concluded that the interpretation and use in clinical practice of 24-hour blood pressure variability, as an important prognostic indicator of cardiovascular events, is hampered by insufficient evidence and divergent methodologies. We recommend greater standardisation of methods.
    PLoS ONE 04/2015; In press. · 3.53 Impact Factor
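Of the variability measures surveyed above, night dipping based on percentage change (the only measure the authors could pool) has a simple formula: the percentage fall in mean BP from the daytime to the night-time period. A minimal sketch, using hypothetical readings:

```python
# Percentage nocturnal dip: (day mean - night mean) / day mean * 100.
# A dip of >= 10% is the conventional threshold separating "dippers"
# from "non-dippers"; the readings below are hypothetical.

def night_dip_percent(day_mean, night_mean):
    """Nocturnal dip in mean BP, as a percentage of the daytime mean."""
    return (day_mean - night_mean) / day_mean * 100.0

# Hypothetical 24-hour systolic averages in mmHg:
dip = night_dip_percent(day_mean=135.0, night_mean=120.0)
print(round(dip, 1))  # 11.1
```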
    ABSTRACT: In recent years there has been much debate and controversy surrounding the efficacy and safety of neuraminidase inhibitors for influenza, in part because the data underlying certain efficacy claims were not available for independent scrutiny. In 2014, a Cochrane review was published, based exclusively on an almost complete set of clinical study reports and other regulatory documents. Clinical study reports can run to thousands of pages, providing an extensive amount of information on the planning, conduct and results of each trial. After a protracted campaign to obtain the reports, the manufacturers of the medications provided them unconditionally. The review authors subsequently published the underlying documents simultaneously with the Cochrane review, endorsing the concept of open science. In the following commentary, the background to and results of this review are summarized and put into clinical context.
    Clinical Microbiology and Infection 03/2015; DOI:10.1016/j.cmi.2014.10.011 · 5.20 Impact Factor
    ABSTRACT: Community-based self-screening may provide opportunities to increase detection of hypertension, and identify raised blood pressure (BP) in populations who do not access healthcare. This systematic review aimed to evaluate the effectiveness of non-physician screening and self-screening of BP in community settings. We searched the Cochrane Central Trials Register, Medline, Embase, CINAHL, and Science Citation Index & Conference Proceedings Citation Index-Science to November 2013 to identify studies reporting community-based self-screening or non-physician screening for hypertension in adults. Results were stratified by study site, screener, and the cut-off used to define high screening BP. We included 73 studies, which described screening in 9 settings, with pharmacies (22%) and public areas/retail (15%) most commonly described. We found high levels of heterogeneity in all analyses, despite stratification. The highest proportions of eligible participants screened were achieved by mobile units (range 21%-88%) and pharmacies (range 40%-90%). Self-screeners had similar median rates of high BP detection (25%-35%) to participants in studies using other screeners. Few (16%) studies reported referral to primary care after screening. However, where participants were referred, a median of 44% (range 17%-100%) received a new hypertension diagnosis or antihypertensive medication. Community-based non-physician or self-screening for raised BP can detect raised BP, which may lead to the identification of new cases of hypertension. However, current evidence is insufficient to recommend specific approaches or settings. Studies with good follow-up of patients to definitive diagnosis are needed.
    American Journal of Hypertension 03/2015; DOI:10.1093/ajh/hpv029 · 3.40 Impact Factor
    ABSTRACT: To understand GP trainees' experience of out-of-hours (OOH) training in England; whether it is achieving educational aims, and to highlight potential improvements. Additionally, to explore factors that influence the decision to work in OOH care. An online survey was sent to 1091 GP trainees in England. Odds ratios were calculated for factors correlating with intention to work in OOH, or confidence and effectiveness in OOH work. Free-text responses were coded and organised thematically. Trainees' experience of OOH care influences the decision to work there once qualified. Although this experience has positively influenced over three-quarters of trainees, it can be improved. Training is not achieving competencies in managing psychiatric emergencies and personal safety. Half of trainees received formal teaching in OOH skills; only 3% received assessments in telephone triage. Only a quarter of trainees had worked with their usual GP trainer. Influential features of training included trainer enthusiasm and continuity, familiarity with the workplace, and confidence in OOH skills. Financial and lifestyle considerations were also important. OOH training in England has an impact on the future workforce and could be improved. The planned transition to a 4-year GP training structure offers an opportunity to address this.
    Education for Primary Care 03/2015; 26(2):95-101. · 1.07 Impact Factor
  • British Journal of General Practice 03/2015; 65(632):156-8. DOI:10.3399/bjgp15X684229 · 2.36 Impact Factor
  • Source
    ABSTRACT: The prevalence of diagnosed chronic obstructive pulmonary disease (COPD) in the UK is 1.8%, although it is estimated that this represents less than half of the total disease in the population as much remains undiagnosed. Case finding initiatives in primary care will identify people with mild disease and symptoms. The majority of self-management trials have identified patients from secondary care clinics or following a hospital admission for exacerbation of their condition. This trial will recruit a primary care population with mild symptoms of COPD and use telephone health coaching to encourage self-management. In this study, using a multi-centred randomised controlled trial (RCT) across at least 70 general practices in England, we plan to establish the effectiveness of nurse-led telephone health coaching to support self-management in primary care for people who report only mild symptoms of their COPD (MRC grade 1 and 2) compared to usual care. The intervention focuses on taking up smoking cessation services, increasing physical activity, medication management and action planning and is underpinned by behavioural change theory. In total, we aim to recruit 556 patients with COPD confirmed by spirometry with follow up at six and 12 months. The primary outcome is health-related quality of life using the St George's Respiratory Questionnaire (SGRQ). Spirometry and BMI are measured at baseline. Secondary outcomes include self-reported health behaviours (smoking and physical activity), physical activity measured by accelerometry (at 12 months), psychological morbidity, self-efficacy and cost-effectiveness of the intervention. Longitudinal qualitative interviews will explore how engaged participants were with the intervention and how embedded behaviour change was in every day practices.
This trial will provide robust evidence about the effectiveness of a novel telephone health coaching intervention to promote behaviour change and prevent disease progression in patients with mild symptoms of dyspnoea in primary care. Current controlled trials ISRCTN06710391.
    BMC Pulmonary Medicine 02/2015; 15(16). DOI:10.1186/s12890-015-0011-5 · 2.49 Impact Factor
  • Source
    ABSTRACT: Email is one of the most widely used methods of communication, but its use in healthcare is still uncommon. Where email communication has been utilised in health care, its purposes have included clinical communication between healthcare professionals, but the effects of using email in this way are not well known. We updated a 2012 review of the use of email for two-way clinical communication between healthcare professionals. To assess the effects of email for clinical communication between healthcare professionals on healthcare professional outcomes, patient outcomes, health service performance, and service efficiency and acceptability, when compared to other forms of communicating clinical information. We searched: the Cochrane Consumers and Communication Review Group Specialised Register, Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 9 2013), MEDLINE (OvidSP) (1946 to August 2013), EMBASE (OvidSP) (1974 to August 2013), PsycINFO (1967 to August 2013), CINAHL (EbscoHOST) (1982 to August 2013), and ERIC (CSA) (1965 to January 2010). We searched grey literature: theses/dissertation repositories, trials registers and Google Scholar (searched November 2013). We used additional search methods: examining reference lists and contacting authors. Randomised controlled trials, quasi-randomised trials, controlled before and after studies, and interrupted time series studies examining interventions in which healthcare professionals used email for communicating clinical information in the form of: 1) unsecured email, 2) secure email, or 3) web messaging. All healthcare professionals, patients and caregivers in all settings were considered. Two authors independently assessed studies for inclusion, assessed the included studies' risk of bias, and extracted data. We contacted study authors for additional information and have reported all measures as per the study report. 
The previous version of this review included one randomised controlled trial involving 327 patients and 159 healthcare providers at baseline. It compared an email to physicians containing patient-specific osteoporosis risk information and guidelines for evaluation and treatment versus usual care (no email). This study was at high risk of bias for the allocation concealment and blinding domains. The email reminder changed health professional actions significantly, with professionals more likely to provide guideline-recommended osteoporosis treatment (bone density measurement or osteoporosis medication, or both) when compared with usual care. The evidence for its impact on patient behaviours or actions was inconclusive. One measure found that the electronic medical reminder message impacted patient behaviour positively (patients had a higher calcium intake), and two found no difference between the two groups. The study did not assess health service outcomes or harms. No new studies were identified for this update. Only one study was identified for inclusion, providing insufficient evidence for guiding clinical practice in regard to the use of email for clinical communication between healthcare professionals. Future research should aim to utilise high-quality study designs that use the most recent developments in information technology, with consideration of the complexity of email as an intervention.
    Cochrane database of systematic reviews (Online) 02/2015; 2:CD007979. DOI:10.1002/14651858.CD007979.pub3 · 5.94 Impact Factor
  • Source
    ABSTRACT: Screening for atrial fibrillation (AF) using 12-lead-electrocardiograms (ECGs) has been recommended; however, the best method for interpreting ECGs to diagnose AF is not known. We compared accuracy of methods for diagnosing AF from ECGs. We searched MEDLINE, EMBASE, CINAHL and LILACS until March 24, 2014. Two reviewers identified eligible studies, extracted data and appraised quality using the QUADAS-2 instrument. Meta-analysis, using the bivariate hierarchical random effects method, determined average operating points for sensitivities, specificities, positive and negative likelihood ratios (PLR, NLR) and enabled construction of Summary Receiver Operating Characteristic (SROC) plots. 10 studies investigated 16 methods for interpreting ECGs (n=55,376 participant ECGs). The sensitivity and specificity of automated software (8 studies; 9 methods) were 0.89 (95% C.I. 0.82-0.93) and 0.99 (95% C.I. 0.99-0.99), respectively; PLR 96.6 (95% C.I. 64.2-145.6); NLR 0.11 (95% C.I. 0.07-0.18). Indirect comparisons with software found healthcare professionals (5 studies; 7 methods) had similar sensitivity for diagnosing AF but lower specificity [sensitivity 0.92 (95% C.I. 0.81-0.97), specificity 0.93 (95% C.I. 0.76-0.98), PLR 13.9 (95% C.I. 3.5-55.3), NLR 0.09 (95% C.I. 0.03-0.22)]. Sub-group analyses of primary care professionals found greater specificity for GPs than nurses [GPs: sensitivity 0.91 (95% C.I. 0.68-1.00); specificity 0.96 (95% C.I. 0.89-1.00). Nurses: sensitivity 0.88 (95% C.I. 0.63-1.00); specificity 0.85 (95% C.I. 0.83-0.87)]. Automated ECG-interpreting software most accurately excluded AF, although its ability to diagnose this was similar to all healthcare professionals. Within primary care, the specificity of AF diagnosis from ECG was greater for GPs than nurses.
    International Journal of Cardiology 02/2015; 184C:175-183. DOI:10.1016/j.ijcard.2015.02.014 · 6.18 Impact Factor
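The likelihood ratios reported above are simple functions of sensitivity and specificity: PLR = sens / (1 − spec) and NLR = (1 − sens) / spec. A sketch using the pooled point estimates for automated software; note that the review's own PLR and NLR come from the bivariate model, so they differ somewhat from this naive recomputation from the pooled point estimates:

```python
# Positive and negative likelihood ratios from sensitivity and specificity.
def likelihood_ratios(sensitivity, specificity):
    plr = sensitivity / (1.0 - specificity)   # how much a positive ECG raises the odds of AF
    nlr = (1.0 - sensitivity) / specificity   # how much a negative ECG lowers them
    return plr, nlr

# Pooled point estimates for automated software, taken from the abstract:
plr, nlr = likelihood_ratios(sensitivity=0.89, specificity=0.99)
print(round(plr, 1), round(nlr, 2))  # 89.0 0.11
```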
  • M C McCall, A Ward, C Heneghan
    ABSTRACT: Depending on interest, knowledge, and skills, oncologists are adapting clinical behaviour to include integrative approaches, supporting patients to make informed complementary care decisions. The present study sought to improve the knowledge base in three ways: (1) test the acceptability of a self-reported online survey for oncologists; (2) provide preliminary data concerning knowledge, attitudes, beliefs, and current referral practices among oncologists with respect to yoga in adult cancer; and (3) list the perceived benefits of and barriers to yoga intervention from a clinical perspective. A 38-item self-report questionnaire was administered online to medical, radiation, and surgical oncologists in British Columbia. Of the 29 oncologists who completed the survey, 10 reported having recommended yoga to patients to improve physical activity, fatigue, stress, insomnia, and muscle or joint stiffness. Other responding oncologists were hesitant or unlikely to suggest yoga for their patients because they had no knowledge of yoga as a therapy (n = 15) or believed that scientific evidence to support its use is lacking (n = 11). All 29 respondents would recommend that their patients participate in a clinical trial to test the efficacy of yoga. In qualitative findings, oncologists compared yoga with exercise and suggested that it might have similar psychological and physical health benefits that would improve patient capacity to endure treatment. Barriers to and limitations of yoga in adult cancer are also discussed. An online self-report survey is feasible, but has response rate limitations. A small number of oncologists are currently recommending yoga to improve health-related outcomes in adult cancer. Respondents would support clinical yoga interventions to improve the evidence base in cancer patients, including men and women in all tumour groups.
    02/2015; 22(1):13-9. DOI:10.3747/co.22.2129
  • Igho J Onakpoya, Carl J Heneghan
    ABSTRACT: Hundreds of dietary supplements are marketed as weight loss pills, but the evidence for effectiveness for most is unproven. The objective of this review was to critically appraise and evaluate the evidence from published randomized clinical trials (RCTs) examining the effectiveness of polyglycoplex (PGX), a novel functional fibre, on body weight and metabolic parameters. We conducted electronic searches in Medline, Embase, Amed, Cinahl and The Cochrane Library. Only double-blinded RCTs were considered for inclusion. The reporting quality of included studies was assessed using the Cochrane criteria. Two reviewers independently determined eligibility, assessed the quality of reporting, and extracted the data. We included four RCTs with a total of 217 participants. The RCTs varied in the quality of their reporting. The evidence from the RCTs suggested that PGX has no significant effects on body weight; however, significant reductions were noted for total and LDL cholesterol. Adverse events reported included diarrhea and abdominal bloating. The evidence from available RCTs does not indicate that PGX intake causes reductions in body weight. PGX may cause reductions in total and LDL cholesterol. Few trials examining the effects of PGX have been conducted; they are characterized by small sample sizes, deficiencies in reporting quality, and are funded by a single manufacturer. Future clinical trials evaluating its effect should be adequately powered and better reported.
    Clinical nutrition (Edinburgh, Scotland) 01/2015; DOI:10.1016/j.clnu.2015.01.004 · 3.94 Impact Factor
  • Igho J. Onakpoya, Jack O’Sullivan, Carl J. Heneghan
    ABSTRACT: Hundreds of dietary supplements are currently marketed as weight loss supplements. However, the advertised health claims of effectiveness for most of these have not been proven. The aim of this study was to critically appraise and evaluate the evidence for effectiveness of cactus pear, Opuntia ficus-indica (OFI), using data from published randomized clinical trials. We conducted electronic searches in Medline, Embase, Amed, Cinahl, and the Cochrane Library. No restrictions on age, time, or language were imposed. The risk for bias in the studies included was assessed using the Cochrane Collaboration criteria. Two reviewers independently determined the eligibility of included studies, assessed reporting quality, and extracted data. We identified seven eligible studies, of which five were included. The studies varied in design and reporting quality. Meta-analysis revealed a nonsignificant difference in body weight between OFI and controls (mean difference = -0.83 kg; 95% confidence interval, -2.49 to 0.83; I² = 93%). Significant reductions in body mass index, percentage body fat, systolic and diastolic blood pressures, and total cholesterol were observed. Adverse events included gastric intolerance and flu symptoms. The evidence from randomized clinical trials does not indicate that supplementation with OFI generates statistically significant effects on body weight. Consumption of OFI can cause significant reductions in percentage body fat, blood pressure, and total cholesterol. Few clinical trials evaluating the effects of OFI have been published. They vary in design and methodology, and are characterized by inconsistent quality of reporting. Further clinical trials evaluating the effects of OFI on body composition and metabolic parameters are warranted.
    Nutrition 12/2014; 31(5). DOI:10.1016/j.nut.2014.11.015 · 3.05 Impact Factor
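The I² = 93% heterogeneity statistic quoted for the weight meta-analysis above is conventionally derived from Cochran's Q as I² = max(0, (Q − df)/Q) × 100. The Q value below is hypothetical, chosen only so that five studies (df = 4) reproduce a figure near the reported 93%:

```python
# I-squared heterogeneity statistic from Cochran's Q.
# The Q value used here is hypothetical and purely illustrative.

def i_squared(q, df):
    """Percentage of total variability attributed to heterogeneity rather than chance."""
    return max(0.0, (q - df) / q) * 100.0

# Five studies -> df = 4; a hypothetical Q of 57.1 gives roughly 93%:
print(round(i_squared(q=57.1, df=4), 0))  # 93.0
```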

Publication Stats

3k Citations
1,474.39 Total Impact Points

Institutions

  • 2006–2015
    • University of Oxford
      • Department of Primary Care Health Sciences
      • Oxford Centre for Evidence Based Medicine
      Oxford, England, United Kingdom
  • 2014
    • University of Queensland
      • School of Population Health
      Brisbane, Queensland, Australia
  • 2013
    • Oregon Health and Science University
      • Department of Family Medicine
      Portland, Oregon, United States
    • University of Exeter
      Exeter, England, United Kingdom
  • 2011
    • The Cochrane Collaboration
      Oxford, England, United Kingdom