Elizabeth A Spencer

University of Oxford, Oxford, England, United Kingdom

Publications (63) · 406.09 Total impact

  •
    ABSTRACT: Background. Description of the condition: Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia (Wyndham 2000). It has been estimated that there are more than 2.3 million cases in the United States, with an estimated increase of up to 15.9 million persons by 2050. More than 46,000 new cases are diagnosed each year in the United Kingdom (Agarwal 2005; Testai 2010). The prevalence of AF increases with age, from an estimated 0.5% in the age group 50 to 59 years to approximately 9% in individuals older than 70 years. The lifetime risk of developing AF is approximately one in four (Agarwal 2005; Brieger 2009). The majority of cases of AF, whether paroxysmal or permanent, are ascribed to cardiovascular disorders such as ischaemic heart disease, hypertension, cardiac failure and valvular heart abnormalities. Other non-cardiac causes include hyperthyroidism, and only a minority of cases (estimated at 11%) have no identifiable cause (lone AF) (Agarwal 2005). The resultant arrhythmia leads to an increase in blood stasis within the atria. This, in combination with other factors such as an ageing vessel wall and blood component changes, leads to an increased risk of venous thromboemboli formation (Watson 2009). As a result, the main morbidity and mortality associated with atrial fibrillation relate to the risk of ischaemic stroke, which is increased five-fold (Hart 2001). However, this risk is thought to vary from one individual to another, with the leading risk factors being previous history of stroke or transient ischaemic attack (TIA), increasing age, hypertension, and structural heart disease in the presence of AF (Hughes 2008). These factors have led to several clinical prediction rules to estimate the risk of stroke in paroxysmal and permanent AF, along with the best option for pharmacological prophylaxis. Of these, the CHADS2 risk stratification score was found to have the highest ability to correctly rank-order patients by risk (Hughes 2008). The mainstay of venous thromboembolism prophylaxis and stroke prevention in AF has thus far been either a vitamin K antagonist (VKA) such as warfarin or an antiplatelet agent such as aspirin. An earlier systematic review of long term anticoagulants (warfarin) compared with antiplatelet treatment (aspirin) suggested that the included trials (all pre-1989) were too weak to establish any value of long term anticoagulation (Taylor 2001). However, a more recent meta-analysis of 28,044 participants showed stroke was reduced by 64% for those on dose-adjusted warfarin and 22% for those on antiplatelet agents; warfarin in comparison to aspirin leads to a 39% relative risk reduction in stroke (Hart 2007). The decision as to whether a patient receives warfarin or aspirin depends on risk versus benefit. Those at low risk, or in whom warfarin is contraindicated, may well be managed on aspirin alone, whereas patients at higher risk may benefit from warfarin. Patients who fall into an intermediate risk category may benefit from either treatment and this decision is largely based on individual risk. Table 1 summarises the criteria for low, intermediate and high risk stratification (Lafuente-Lafuente 2009).
    Description of the intervention: The benefits of warfarin therapy in stroke reduction for AF patients are well established. However, these benefits are offset by increased side effects and the need for regular monitoring. The most serious complication of warfarin use is increased haemorrhagic risk. Two meta-analyses have suggested that there is a greater than two-fold increase in the risk of serious major haemorrhagic bleed with warfarin use when compared to placebo or aspirin (Segal 2001; Hart 2007). This risk is increased when warfarin and aspirin are combined, without any benefit in stroke prevention (Flaker 2006). Another significant problem with warfarin use is its narrow therapeutic window. To prevent under- and over-anticoagulation, patients on warfarin require regular monitoring of their international normalised ratio (INR). Most guidelines suggest patients on warfarin for AF should have an INR of between 2 and 3 (Lip 2007). Sub-optimal levels are associated with a greater risk of complications. One study looked at mortality within 30 days of admission to hospital with stroke. Among patients taking warfarin at the time of the stroke, 16% of those with an INR <2 died within 30 days compared to 6% with INR >2 (Hylek 2003). The same study also showed that increased haemorrhagic risk was associated with an INR >4. Tight INR control requires regular monitoring and is thought to be one of the contributing factors to poor adherence to warfarin. A prospective cohort study of patients presenting to secondary care with AF found 56% of patients on anticoagulation treatment did not adhere to international guidelines. Reasons for this were thought to include poor understanding of treatment, the logistics of regular monitoring, and reluctance of physicians to correctly prescribe warfarin for fear of potential drug interactions and complications (Mehta 2004). Several alternatives to warfarin have emerged over the past ten years. These include direct thrombin inhibitors (DTIs) and factor Xa inhibitors. These newer drugs have the potential for several advantages over traditional VKAs. For example, they do not require regular monitoring, have a faster onset of action, and potentially fewer adverse interactions. However, what is not yet clear is how efficacious they are or the associated risks of adverse events (Verheugt 2010). Parenteral DTIs such as hirudin, argatroban and bivalirudin have been evaluated in acute settings such as percutaneous coronary intervention (PCI) and acute coronary syndrome (ACS), with mixed results (Hirsh 2005). The two oral DTIs furthest into clinical trials are ximelagatran and dabigatran. Trials of ximelagatran in preventing venous thromboemboli have shown superiority over placebo without increased risks of bleeding (Schulman 2003). The Stroke Prevention Using Oral Thrombin Inhibitor in Atrial Fibrillation (SPORTIF) III and SPORTIF V trials concluded ximelagatran was non-inferior to warfarin in preventing stroke, with no increase in bleeding events (Olsson 2003; Albers 2005). However, serious concerns about hepatotoxicity have resulted in ximelagatran being withdrawn from the market (Kaul 2005). Early trials of dabigatran have shown promise. In the Randomized Evaluation of Long Term Anticoagulation Therapy (RE-LY) trial, oral dabigatran given at a dose of 110 mg was associated with rates of stroke and systemic embolism similar to those associated with warfarin. At a dose of 150 mg, compared with warfarin, dabigatran was associated with lower rates of stroke and systemic embolism but similar rates of major haemorrhage (Connolly 2009). Factor Xa inhibitors currently include idraparinux, apixaban, rivaroxaban, and edoxaban. The AMADEUS trial compared once weekly subcutaneous injections of idraparinux with oral warfarin (Amadeus Investigators 2008). While idraparinux was non-inferior to warfarin, there were significantly higher rates of bleeding. A biotinylated version entered phase III clinical trials but the trial was terminated early by the manufacturer. A number of clinical trials for oral factor Xa inhibitors are currently underway. The ROCKET AF study is a non-inferiority study comparing rivaroxaban to warfarin in atrial fibrillation patients (Investigators 2010). A recent randomised controlled trial compared the oral factor Xa inhibitor apixaban with aspirin in patients unsuitable for warfarin (Connolly 2011). This trial was stopped early as a result of the clear benefit of apixaban over aspirin in reducing stroke and adverse bleeding events. Apixaban also appeared superior when compared to warfarin (Granger 2011). Other direct thrombin and factor Xa inhibitors in clinical trials include AZD0837 (Lip 2009) and YM150 (Astellas 2007). Table 2 summarises the main direct thrombin and factor Xa inhibitors currently being investigated for thromboembolic prevention in atrial fibrillation.
    How the intervention might work: Both the intrinsic and extrinsic coagulation pathways result in fibrin activation. Directly before this step is the conversion of prothrombin to thrombin, which in turn is dependent upon activation of factor Xa. Warfarin interrupts this cascade indirectly through the inhibition of the vitamin K dependent factors II, VII, IX, and X. In contrast, direct thrombin inhibitors (DTIs) bind to and inhibit thrombin, which is the most potent platelet agonist. This also has the advantage of preventing feedback activation of factors V, VIII, and XI. Inhibitors of factor Xa prevent the formation of thrombin by binding directly to factor Xa (Eriksson 2011).
    Why it is important to do this review: Current management of anticoagulation for reducing stroke risk in AF patients involves a clinical judgement of risk versus benefit in deciding who should receive anticoagulation and which type to use. Current guidelines advocate the use of either aspirin or warfarin. While warfarin has shown clear superiority over aspirin in risk reduction, it may not be suitable for use in some patients. Long term use of warfarin requires regular monitoring through blood tests. This may be less suitable for patients with poor mobility, or for those who are housebound or have poor access to regular means of blood testing within the community. In addition, many of the patients being considered for warfarin are likely to be on other medications, and there are established risks of interactions with other drugs and subsequent effects on INR. Most importantly, warfarin use is associated with higher rates of both minor and major bleeding events. Newer anticoagulation drugs act along the coagulation pathways through, for example, direct inhibition of thrombin or of other factors such as factor Xa. These newer drugs have the potential benefits over warfarin that they require no monitoring, have a faster onset and offset of action, and have less potential for drug interactions. One study also found that dabigatran was as cost effective as warfarin, although it is unclear if this is the case for other newer inhibitors (Patel 2010). However, there are concerns over the safety profile and risk of adverse events for some of these drugs. In addition, it remains unclear how these drugs could be monitored or rapidly reversed in the event of overdose. Some drug trials have been stopped early and their results not published, for unclear reasons, which has further raised concerns over the safety profile of these newer drugs. At least two other reviews in the Cochrane Library (Aguilar 2007; Bruins 2011) overlap with this one. However, the Aguilar 2007 review compares only warfarin versus antiplatelet agents for stroke prevention in non-valvular AF, and the Bruins 2011 protocol evaluates only factor Xa inhibitors against VKAs for AF. Therefore a systematic review is needed to assess the effects of both direct thrombin inhibitors and factor Xa inhibitors in the prevention of stroke in AF.
    Objectives: To assess the effectiveness of direct thrombin inhibitors and factor Xa inhibitors on clinical outcomes in patients with AF.
    Cochrane Database of Systematic Reviews (Online) 05/2015; in press. · 5.70 Impact Factor
  • Source
    ABSTRACT: Objectives: To assess the diagnostic accuracy of three personal breathalyser devices available for sale to the public and marketed to test whether it is safe to drive after drinking alcohol. Design: Prospective comparative diagnostic accuracy study comparing two single-use breathalysers and one digital multiuse breathalyser (index tests) with a police breathalyser (reference test). Setting: Establishments licensed to serve alcohol in a UK city. Participants: Of 222 participants recruited, 208 were included in the main analysis. Participants were eligible if they were 18 years old or over, had consumed alcohol and were not intending to drive within the following 6 h. Main outcome measures: Sensitivity and specificity of the breathalysers for the detection of being at or over the UK legal driving limit (35 µg/100 mL breath alcohol concentration). Results: 18% of participants (38/208) were at or over the UK driving limit according to the police breathalyser. The digital multiuse breathalyser had a sensitivity of 89.5% (95% CI 75.9% to 95.8%) and a specificity of 64.1% (95% CI 56.6% to 71.0%). The single-use breathalysers had sensitivities of 94.7% (95% CI 75.4% to 99.1%) and 26.3% (95% CI 11.8% to 48.8%), and specificities of 50.6% (95% CI 40.4% to 60.7%) and 97.5% (95% CI 91.4% to 99.3%), respectively. A self-reported alcohol consumption threshold of 5 UK units or fewer had a higher sensitivity than all personal breathalysers. Conclusions: One breathalyser had a sensitivity of 26%, corresponding to false reassurance for roughly three in every four people who are over the limit by the reference standard, at least on the evening of drinking alcohol. The other devices tested had sensitivities of 90% or higher. All estimates were subject to uncertainty. There is no clearly defined minimum sensitivity for this safety-critical application. We conclude that current regulatory frameworks do not ensure high sensitivity for these devices, which are marketed to consumers for a decision with potentially catastrophic consequences.
    BMJ Open 12/2014; 4(12):e005811. DOI:10.1136/bmjopen-2014-005811 · 2.06 Impact Factor
  • Journal of Epidemiology & Community Health 09/2014; 68(Suppl 1):A7-A7. DOI:10.1136/jech-2014-204726.11 · 3.29 Impact Factor
  • Source
    ABSTRACT: Introduction: Many different dietary supplements are currently marketed for the management of hypertension, but the evidence for effectiveness is mixed. The aim of this systematic review was to evaluate the evidence for or against the effectiveness of green tea (Camellia sinensis) on blood pressure and lipid parameters. Methods: Electronic searches were conducted in Medline, Embase, Amed, Cinahl and the Cochrane Library to identify relevant human randomized clinical trials (RCTs). Hand searches of bibliographies were also conducted. The reporting quality of included studies was assessed using a checklist adapted from the CONSORT Statement. Two reviewers independently determined eligibility, assessed the reporting quality of the included studies, and extracted the data. Results: 474 citations were identified and 20 RCTs comprising 1,536 participants were included. There were variations in the designs of the RCTs. A meta-analysis revealed a significant reduction in systolic blood pressure favouring green tea (MD: -1.94 mmHg; 95% CI: -2.95 to -0.93; I2=8%; p = 0.0002). Similar results were also observed for total cholesterol (MD: -0.13 mmol/l; 95% CI: -0.2 to -0.07; I2=8%; p < 0.0001) and LDL cholesterol (MD: -0.19 mmol/l; 95% CI: -0.3 to -0.09; I2=70%; p = 0.0004). Adverse events included rash, elevated blood pressure, and abdominal discomfort. Conclusion: Green tea intake results in significant reductions in systolic blood pressure, total cholesterol, and LDL cholesterol. The effect size on systolic blood pressure is small, but the effects on total and LDL cholesterol appear moderate. Longer-term independent clinical trials evaluating the effects of green tea are warranted.
    Nutrition, metabolism, and cardiovascular diseases: NMCD 08/2014; 24(8). DOI:10.1016/j.numecd.2014.01.016 · 3.88 Impact Factor
  • Source
    ABSTRACT: Several dietary supplements are currently marketed for management of hypertension, but the evidence for effectiveness is conflicting. Our objective was to critically appraise and evaluate the evidence for the effectiveness of chlorogenic acids (CGAs) on blood pressure, using data from published randomized clinical trials (RCTs). Electronic searches were conducted in Medline, Embase, Amed, Cinahl and The Cochrane Library. We also hand-searched the bibliographies of all retrieved articles. Two reviewers independently determined the eligibility of studies and extracted the data. The reporting quality of all included studies was assessed by the use of a quality assessment checklist adapted from the Consolidated Standard of Reporting Trials Statement. Disagreements were resolved through discussion. Seven eligible studies were identified, and five including 364 participants were included. There were variations in the reporting quality of the included RCTs. Meta-analysis revealed a statistically significant reduction in systolic blood pressure in favour of CGA (mean difference (MD): -4.31 mm Hg; 95% confidence interval (CI): -5.60 to -3.01; I(2)=65%; P<0.00001). Meta-analysis also showed a significant reduction in diastolic blood pressure favouring CGA (MD: -3.68 mm Hg; 95% CI: -3.91 to -3.45; I(2)=97%; P<0.00001). All studies reported no adverse events. In conclusion, the evidence from published RCTs suggests that CGA intake causes statistically significant reductions in systolic and diastolic blood pressures. The size of the effect is moderate. Few clinical trials have been conducted; they vary in design and methodology and are confined to Asian populations and funded by CGA manufacturers. Large independent trials evaluating the effects of CGA on blood pressure are warranted.
    Journal of Human Hypertension 06/2014; 29(2). DOI:10.1038/jhh.2014.46 · 2.69 Impact Factor
  •
    ABSTRACT: Background: Neuraminidase inhibitors (NIs) are stockpiled and recommended by public health agencies for treating and preventing seasonal and pandemic influenza. They are used clinically worldwide.
    Objectives: To describe the potential benefits and harms of NIs for influenza in all age groups by reviewing all clinical study reports of published and unpublished randomised, placebo-controlled trials and regulatory comments.
    Search methods: We searched trial registries, electronic databases (to 22 July 2013) and regulatory archives, and corresponded with manufacturers to identify all trials. We also requested clinical study reports. We focused on the primary data sources of manufacturers but we checked that there were no published randomised controlled trials (RCTs) from non-manufacturer sources by running electronic searches in the following databases: the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, MEDLINE (Ovid), EMBASE, Embase.com, PubMed (not MEDLINE), the Database of Reviews of Effects, the NHS Economic Evaluation Database and the Health Economic Evaluations Database.
    Selection criteria: Randomised, placebo-controlled trials on adults and children with confirmed or suspected exposure to naturally occurring influenza.
    Data collection and analysis: We extracted clinical study reports and assessed risk of bias using purpose-built instruments. We analysed the effects of zanamivir and oseltamivir on time to first alleviation of symptoms, influenza outcomes, complications, hospitalisations and adverse events in the intention-to-treat (ITT) population. All trials were sponsored by the manufacturers.
    Main results: We obtained 107 clinical study reports from the European Medicines Agency (EMA), GlaxoSmithKline and Roche. We accessed comments by the US Food and Drug Administration (FDA), EMA and the Japanese regulator. We included 53 trials in Stage 1 (a judgement of appropriate study design) and 46 in Stage 2 (formal analysis), including 20 oseltamivir (9623 participants) and 26 zanamivir trials (14,628 participants). Inadequate reporting put most of the zanamivir studies and half of the oseltamivir studies at a high risk of selection bias. There were inadequate measures in place to protect 11 studies of oseltamivir from performance bias due to non-identical presentation of placebo. Attrition bias was high across the oseltamivir studies and there was also evidence of selective reporting for both the zanamivir and oseltamivir studies. The placebo interventions in both sets of trials may have contained active substances.
    Time to first symptom alleviation: For the treatment of adults, oseltamivir reduced the time to first alleviation of symptoms by 16.8 hours (95% confidence interval (CI) 8.4 to 25.1 hours, P < 0.0001). This represents a reduction in the time to first alleviation of symptoms from 7 to 6.3 days. There was no effect in asthmatic children, but in otherwise healthy children there was a reduction by a mean difference of 29 hours (95% CI 12 to 47 hours, P = 0.001). Zanamivir reduced the time to first alleviation of symptoms in adults by 0.60 days (95% CI 0.39 to 0.81 days, P < 0.00001), equating to a reduction in the mean duration of symptoms from 6.6 to 6.0 days. The effect in children was not significant. In subgroup analysis we found no evidence of a difference in treatment effect for zanamivir on time to first alleviation of symptoms in adults in the influenza-infected and non-influenza-infected subgroups (P = 0.53).
    Hospitalisations: Treatment of adults with oseltamivir had no significant effect on hospitalisations: risk difference (RD) 0.15% (95% CI -0.78 to 0.91). There was also no significant effect in children or in prophylaxis. Zanamivir hospitalisation data were unreported.
    Serious influenza complications or those leading to study withdrawal: In adult treatment trials, oseltamivir did not significantly reduce those complications classified as serious or those which led to study withdrawal (RD 0.07%, 95% CI -0.78 to 0.44), nor did it in child treatment trials; neither did zanamivir in the treatment of adults or in prophylaxis. There were insufficient events to compare this outcome for oseltamivir in prophylaxis or zanamivir in the treatment of children.
    Pneumonia: Oseltamivir significantly reduced self-reported, investigator-mediated, unverified pneumonia (RD 1.00%, 95% CI 0.22 to 1.49); number needed to treat to benefit (NNTB) = 100 (95% CI 67 to 451) in the treated population. The effect was not significant in the five trials that used a more detailed diagnostic form for pneumonia. There were no definitions of pneumonia (or other complications) in any trial. No oseltamivir treatment studies reported effects on radiologically confirmed pneumonia. There was no significant effect on unverified pneumonia in children. There was no significant effect of zanamivir on either self-reported or radiologically confirmed pneumonia. In prophylaxis, zanamivir significantly reduced the risk of self-reported, investigator-mediated, unverified pneumonia in adults (RD 0.32%, 95% CI 0.09 to 0.41); NNTB = 311 (95% CI 244 to 1086), but oseltamivir did not.
    Bronchitis, sinusitis and otitis media: Zanamivir significantly reduced the risk of bronchitis in adult treatment trials (RD 1.80%, 95% CI 0.65 to 2.80); NNTB = 56 (36 to 155), but oseltamivir did not. Neither NI significantly reduced the risk of otitis media or sinusitis in either adults or children.
    Harms of treatment: Oseltamivir in the treatment of adults increased the risk of nausea (RD 3.66%, 95% CI 0.90 to 7.39); number needed to treat to harm (NNTH) = 28 (95% CI 14 to 112) and vomiting (RD 4.56%, 95% CI 2.39 to 7.58); NNTH = 22 (14 to 42). The proportion of participants with four-fold increases in antibody titre was significantly lower in the treated group compared to the control group (RR 0.92, 95% CI 0.86 to 0.97, I2 statistic = 0%) (5% absolute difference between arms). Oseltamivir significantly decreased the risk of diarrhoea (RD 2.33%, 95% CI 0.14 to 3.81); NNTB = 43 (95% CI 27 to 709) and cardiac events (RD 0.68%, 95% CI 0.04 to 1.0); NNTB = 148 (101 to 2509) compared to placebo during the on-treatment period. There was a dose-response effect on psychiatric events in the two oseltamivir "pivotal" treatment trials, WV15670 and WV15671, at 150 mg (standard dose) and 300 mg daily (high dose) (P = 0.038). In the treatment of children, oseltamivir induced vomiting (RD 5.34%, 95% CI 1.75 to 10.29); NNTH = 19 (95% CI 10 to 57). There was a significantly lower proportion of children on oseltamivir with a four-fold increase in antibodies (RR 0.90, 95% CI 0.80 to 1.00, I2 = 0%).
    Prophylaxis: In prophylaxis trials, oseltamivir and zanamivir reduced the risk of symptomatic influenza in individuals (oseltamivir: RD 3.05% (95% CI 1.83 to 3.88); NNTB = 33 (26 to 55); zanamivir: RD 1.98% (95% CI 0.98 to 2.54); NNTB = 51 (40 to 103)) and in households (oseltamivir: RD 13.6% (95% CI 9.52 to 15.47); NNTB = 7 (6 to 11); zanamivir: RD 14.84% (95% CI 12.18 to 16.55); NNTB = 7 (7 to 9)). There was no significant effect on asymptomatic influenza (oseltamivir: RR 1.14 (95% CI 0.39 to 3.33); zanamivir: RR 0.97 (95% CI 0.76 to 1.24)). Non-influenza, influenza-like illness could not be assessed because the data were not fully reported. In oseltamivir prophylaxis studies, psychiatric adverse events were increased in the combined on- and off-treatment periods (RD 1.06%, 95% CI 0.07 to 2.76); NNTH = 94 (95% CI 36 to 1538) in the study treatment population. Oseltamivir increased the risk of headaches whilst on treatment (RD 3.15%, 95% CI 0.88 to 5.78); NNTH = 32 (95% CI 18 to 115), renal events whilst on treatment (RD 0.67%, 95% CI -2.93 to 0.01); NNTH = 150 (NNTH 35 to NNTB > 1000) and nausea whilst on treatment (RD 4.15%, 95% CI 0.86 to 9.51); NNTH = 25 (95% CI 11 to 116).
    Authors' conclusions: Oseltamivir and zanamivir have small, non-specific effects on reducing the time to alleviation of influenza symptoms in adults, but not in asthmatic children. Using either drug as prophylaxis reduces the risk of developing symptomatic influenza. Treatment trials with oseltamivir or zanamivir do not settle the question of whether the complications of influenza (such as pneumonia) are reduced, because of a lack of diagnostic definitions. The use of oseltamivir increases the risk of adverse effects, such as nausea, vomiting, psychiatric effects and renal events in adults, and vomiting in children. The lower bioavailability may explain the lower toxicity of zanamivir compared to oseltamivir. The balance between benefits and harms should be considered when making decisions about use of both NIs for either the prophylaxis or treatment of influenza. The influenza virus-specific mechanism of action proposed by the producers does not fit the clinical evidence.
    Cochrane Database of Systematic Reviews (Online) 04/2014. · 5.70 Impact Factor
  • Source
    ABSTRACT: Objective: To describe the potential benefits and harms of zanamivir. Design: Systematic review of clinical study reports of randomised placebo controlled trials and regulatory information. Data sources: Clinical study reports, trial registries, electronic databases, regulatory archives, and correspondence with manufacturers. Eligibility criteria: Randomised placebo controlled trials in adults and children who had confirmed or suspected exposure to natural influenza. Main outcome measures: Time to first alleviation of symptoms, influenza outcomes and complications, admissions to hospital, and adverse events in the intention to treat (ITT) population. Results: We included 28 trials in stage 1 (judgment of appropriate study design) and 26 in stage 2 (formal analysis). For treatment of adults, zanamivir reduced the time to first alleviation of symptoms of influenza-like illness by 0.60 days (95% confidence interval 0.39 to 0.81, P<0.001, I(2)=9%), which equates to an average 14.4 hours' reduction, or a 10% reduction in mean duration of symptoms from 6.6 days to 6.0 days. Time to first alleviation of symptoms was shorter in all participants when any relief drugs were allowed compared with no use. Zanamivir did not reduce the risk of self-reported, investigator-mediated pneumonia (risk difference 0.17%, -0.73% to 0.70%) or radiologically confirmed pneumonia (-0.06%, -6.56% to 2.11%) in adults. The effect on pneumonia in children was also not significant (0.56%, -1.64% to 1.04%). There was no significant effect on otitis media or sinusitis in either adults or children, with only a small effect noted for bronchitis in adults (1.80%, 0.65% to 2.80%), but not in children. There were no data to assess effects on admissions in adults and children. Zanamivir tended to be well tolerated. In zanamivir prophylaxis studies, symptomatic influenza in individuals was significantly reduced (risk difference 1.98%, 0.98% to 2.54%), reducing event rates from 3.26% to 1.27%, which means 51 people need to be treated to prevent one influenza case (95% confidence interval 40 to 103). In contrast, the prophylaxis effect on asymptomatic influenza cases was not significant in individuals (risk difference 0.14%, -1.10% to 1.10%) or in households (1.32%, -2.20% to 3.84%). In households treated prophylactically there was an effect on symptomatic influenza (14.84%, 12.18% to 16.55%), but this was based on only two small studies including 824 participants. Prophylaxis in adults reduced unverified pneumonia (0.32%, 0.09% to 0.41%; NNTB (number needed to treat to benefit) 311, 244 to 1086) but had no effect on pneumonia in children or on bronchitis or sinusitis in adults or children. Conclusions: Based on a full assessment of all trials conducted, zanamivir reduces the time to symptomatic improvement in adults (but not in children) with influenza-like illness by just over half a day, although this effect might be attenuated by symptom relief medication. Zanamivir also reduces the proportion of patients with laboratory confirmed symptomatic influenza. We found no evidence that zanamivir reduces the risk of complications of influenza, particularly pneumonia, or the risk of hospital admission or death. Its harmful effects were minor (except for bronchospasm), perhaps because of low bioavailability.
    BMJ (online) 04/2014; 348:g2547. DOI:10.1136/bmj.g2547 · 16.38 Impact Factor
  • Source
    ABSTRACT: Background: Organically produced foods are less likely than conventionally produced foods to contain pesticide residues. Methods: We examined the hypothesis that eating organic food may reduce the risk of soft tissue sarcoma, breast cancer, non-Hodgkin lymphoma and other common cancers in a large prospective study of 623 080 middle-aged UK women. Women reported their consumption of organic food and were followed for cancer incidence over the next 9.3 years. Cox regression models were used to estimate adjusted relative risks for cancer incidence by the reported frequency of consumption of organic foods. Results: At baseline, 30%, 63% and 7% of women reported never, sometimes, or usually/always eating organic food, respectively. Consumption of organic food was not associated with a reduction in the incidence of all cancer (n=53 769 cases in total) (RR for usually/always vs never=1.03, 95% confidence interval (CI): 0.99–1.07), soft tissue sarcoma (RR=1.37, 95% CI: 0.82–2.27), or breast cancer (RR=1.09, 95% CI: 1.02–1.15), but was associated with a reduced incidence of non-Hodgkin lymphoma (RR=0.79, 95% CI: 0.65–0.96). Conclusions: In this large prospective study there was little or no decrease in the incidence of cancer associated with consumption of organic food, except possibly for non-Hodgkin lymphoma.
    British Journal of Cancer 03/2014; 110(9). DOI:10.1038/bjc.2014.148 · 4.82 Impact Factor
  • Source
    Cancer Causes and Control 09/2011; 22(9):1351. DOI:10.1007/s10552-011-9815-7 · 2.96 Impact Factor
  • Source
    ABSTRACT: To describe the development of the Oxford WebQ, a web-based 24 h dietary assessment tool developed for repeated administration in large prospective studies; and to report the preliminary assessment of its performance for estimating nutrient intakes. We developed the Oxford WebQ by repeated testing until it was sufficiently comprehensive and easy to use. For the latest version, we compared nutrient intakes from volunteers who completed both the Oxford WebQ and an interviewer-administered 24 h dietary recall on the same day. Oxford, UK. A total of 116 men and women. The WebQ took a median of 12·5 (interquartile range: 10·8-16·3) min to self-complete and nutrient intakes were estimated automatically. By contrast, the interviewer-administered 24 h dietary recall took 30 min to complete and 30 min to code. Compared with the 24 h dietary recall, the mean Spearman's correlation for the 21 nutrients obtained from the WebQ was 0·6, with the majority between 0·5 and 0·9. The mean differences in intake were less than ±10 % for all nutrients except for carotene and vitamins B12 and D. On rare occasions a food item was reported in only one assessment method, but this was not more frequent or systematically different between the methods. Compared with an interviewer-based 24 h dietary recall, the WebQ captures similar food items and estimates similar nutrient intakes for a single day's dietary intake. The WebQ is self-administered and nutrients are estimated automatically, providing a low-cost method for measuring dietary intake in large-scale studies.
    Public Health Nutrition 06/2011; 14(11):1998-2005. DOI:10.1017/S1368980011000942 · 2.48 Impact Factor
  • Source
    ABSTRACT: Hip fracture risk is known to increase with physical inactivity and decrease with obesity, but there is little information on their combined effects. We report on the separate and combined effects of body mass index (BMI) and physical activity on hospital admissions for hip fracture among postmenopausal women in a large prospective UK study. Baseline information on body size, physical activity, and other relevant factors was collected in 1996-2001, and participants were followed for incident hip fractures by record linkage to National Health Service (NHS) hospital admission data. Cox regression was used to calculate adjusted relative risks of hip fracture. Among 925,345 postmenopausal women followed for an average of 6.2 years, 2582 were admitted to hospital with an incident hip fracture. Hip fracture risk increased with decreasing BMI: Compared with obese women (BMI of 30+ kg/m(2) ), relative risks were 1.71 [95% confidence interval (CI) 1.47-1.97)] for BMI of 25.0 to 29.9 kg/m(2) and 2.55 (95% CI 2.22-2.94) for BMI of 20.0 to 24.9 kg/m(2). The increase in fracture risk per unit decrease in BMI was significantly greater among lean women than among overweight women (p < .001). For women in every category of BMI, physical inactivity was associated with an increased risk of hip fracture. There was no significant interaction between the relative effects of BMI and physical activity. For women who reported that they took any exercise versus no exercise, the adjusted relative risk of hip fracture was 0.68 (95% CI 0.62-0.75), with similar results for strenuous exercise. In this large cohort of postmenopausal women, BMI and physical activity had independent effects on hip fracture risk.
    Journal of bone and mineral research: the official journal of the American Society for Bone and Mineral Research 06/2011; 26(6):1330-8. DOI:10.1002/jbmr.315 · 6.59 Impact Factor
  • Source
    ABSTRACT: Until now, studies examining the relationship between socioeconomic status and pancreatic cancer incidence have been inconclusive. To prospectively investigate to what extent pancreatic cancer incidence varies according to educational level within the European Prospective Investigation into Cancer and Nutrition (EPIC) study. In the EPIC study, socioeconomic status at baseline was measured using the highest level of education attained. Hazard ratios by educational level and a summary index, the relative indices of inequality (RII), were estimated using Cox regression models stratified by age, gender, and center and adjusted for known risk factors. In addition, we conducted separate analyses by age, gender and geographical region. Within the source population of 407,944 individuals at baseline, 490 first incident primary pancreatic adenocarcinoma cases were identified in 9 European countries. The crude difference in risk of pancreatic cancer according to level of education was small and not statistically significant (RII=1.14, 95% CI 0.80-1.62). Adjustment for known risk factors reduced the inequality estimates to only a small extent. In addition, no statistically significant associations were observed for age groups (adjusted RII(≤ 60 years)=0.85, 95% CI 0.44-1.64, adjusted RII(>60 years)=1.18, 95% CI 0.73-1.90), gender (adjusted RII(male)=1.20, 95% CI 0.68-2.10, adjusted RII(female)=0.96, 95% CI 0.56-1.62) or geographical region (adjusted RII(Northern Europe)=1.14, 95% CI 0.81-1.61, adjusted RII(Middle Europe)=1.72, 95% CI 0.93-3.19, adjusted RII(Southern Europe)=0.75, 95% CI 0.32-1.80). Despite large educational inequalities in many risk factors within the EPIC study, we found no evidence for an association between educational level and the risk of developing pancreatic cancer in this European cohort.
    12/2010; 34(6):696-701. DOI:10.1016/j.canep.2010.08.004
  • Source
    ABSTRACT: Epidemiologic evidence for an association between colorectal cancer (CRC) risk and total dietary fat, saturated fat (SF), monounsaturated fat (MUFA) and polyunsaturated fat (PUFA) is inconsistent. Previous studies have used food frequency questionnaires (FFQ) to assess diet, but data from food diaries may be less prone to severe measurement error than data from FFQ. We conducted a case-control study nested within seven prospective UK cohort studies, comprising 579 cases of incident CRC and 1996 matched controls. Standardized dietary data from 4- to 7-day food diaries and from FFQ were used to estimate odds ratios for CRC risk associated with intake of fat and subtypes of fat using conditional logistic regression. We also calculated multivariate measurement error corrected odds ratios for CRC using repeated food diary measurements. We observed no associations between intakes of total dietary fat or types of fat and CRC risk, irrespective of whether dietary data were obtained using food diaries or FFQ. Our results do not support the hypothesis that intakes of total dietary fat, SF, MUFA or PUFA are linked to risk of CRC.
    10/2010; 34(5):562-7. DOI:10.1016/j.canep.2010.07.008
  • Source
    ABSTRACT: The authors investigated associations between serum C-reactive protein (CRP) concentrations and colon and rectal cancer risk in a nested case-control study within the European Prospective Investigation into Cancer and Nutrition (1992-2003) among 1,096 incident cases and 1,096 controls selected using risk-set sampling and matched on study center, age, sex, time of blood collection, fasting status, menopausal status, menstrual cycle phase, and hormone replacement therapy. In conditional logistic regression with adjustment for education, smoking, nutritional factors, body mass index, and waist circumference, CRP showed a significant nonlinear association with colon cancer risk but not rectal cancer risk. Multivariable-adjusted relative risks for CRP concentrations of > or = 3.0 mg/L versus <1.0 mg/L were 1.36 (95% confidence interval (CI): 1.00, 1.85; P-trend = 0.01) for colon cancer and 1.02 (95% CI: 0.67, 1.57; P-trend = 0.65) for rectal cancer. Colon cancer risk was significantly increased in men (relative risk = 1.74, 95% CI: 1.11, 2.73; P-trend = 0.01) but not in women (relative risk = 1.06, 95% CI: 0.67, 1.68; P-trend = 0.13). Additional adjustment for C-peptide, glycated hemoglobin, and high density lipoprotein cholesterol did not attenuate these results. These data provide evidence that elevated CRP concentrations are related to a higher risk of colon cancer but not rectal cancer, predominantly among men and independently of obesity, insulin resistance, and dyslipidemia.
    American journal of epidemiology 08/2010; 172(4):407-18. DOI:10.1093/aje/kwq135 · 4.98 Impact Factor
  • Source
    ABSTRACT: Results of epidemiological studies of dietary fiber and colorectal cancer risk have not been consistent, possibly because of attenuation of associations due to measurement error in dietary exposure ascertainment. To examine the association between dietary fiber intake and colorectal cancer risk, we conducted a prospective case-control study nested within seven UK cohort studies, which included 579 case patients who developed incident colorectal cancer and 1,996 matched control subjects. We used standardized dietary data obtained from 4- to 7-day food diaries that were completed by all participants to calculate the odds ratios for colorectal, colon, and rectal cancers with the use of conditional logistic regression models that adjusted for relevant covariates. We also calculated odds ratios for colorectal cancer by using dietary data obtained from food-frequency questionnaires that were completed by most participants. All statistical tests were two-sided. Intakes of absolute fiber and of fiber intake density, ascertained by food diaries, were statistically significantly inversely associated with the risks of colorectal and colon cancers in both age-adjusted models and multivariable models that adjusted for age; anthropometric and socioeconomic factors; and dietary intakes of folate, alcohol, and energy. For example, the multivariable-adjusted odds ratio of colorectal cancer for the highest vs the lowest quintile of fiber intake density was 0.66 (95% confidence interval = 0.45 to 0.96). However, no statistically significant association was observed when the same analysis was conducted using dietary data obtained by food-frequency questionnaire (multivariable odds ratio = 0.88, 95% confidence interval = 0.57 to 1.36). Intake of dietary fiber is inversely associated with colorectal cancer risk. Methodological differences (ie, study design, dietary assessment instruments, definition of fiber) may account for the lack of convincing evidence for the inverse association between fiber intake and colorectal cancer risk in some previous studies.
    CancerSpectrum Knowledge Environment 05/2010; 102(9):614-26. DOI:10.1093/jnci/djq092 · 15.16 Impact Factor
  •
    ABSTRACT: Some but not all epidemiological studies have reported that high intakes of red and processed meat are associated with an increased risk of colorectal cancer. In the UK Dietary Cohort Consortium, we examined associations of meat, poultry and fish intakes with colorectal cancer risk using standardised individual dietary data pooled from seven UK prospective studies. Four- to seven-day food diaries were analysed, disaggregating the weights of meat, poultry and fish from composite foods to investigate dose-response relationships. We identified 579 cases of colorectal cancer and matched them with 1,996 controls on age, sex and recruitment date. Conditional logistic regression models were used to estimate odds ratios for colorectal cancer associated with meat, poultry and fish intakes, adjusting for relevant covariables. Disaggregated intakes were moderately low, e.g. mean red meat intakes were 38.2 g/day among male and 28.7 g/day among female controls. There was little evidence of association between the food groups examined and risk for colorectal cancer: odds ratios (95% confidence intervals) for a 50 g/day increase were 1.01 (0.84-1.22) for red meat, 0.88 (0.68-1.15) for processed meat, 0.97 (0.84-1.12) for red and processed meat combined, 0.80 (0.65-1.00) for poultry, 0.92 (0.70-1.21) for white fish and 0.89 (0.70-1.13) for fatty fish. This study, using pooled data from prospective food diaries among cohorts with low to moderate meat intakes, shows little evidence of association between consumption of red and processed meat and colorectal cancer risk.
    Cancer Causes and Control 05/2010; 21(9):1417-25. DOI:10.1007/s10552-010-9569-7 · 2.96 Impact Factor
  •
    ABSTRACT: Colorectal cancer (CRC) is the third most common malignant tumor and the fourth leading cause of cancer death worldwide. The crucial role of fatty acids for a number of important biological processes suggests a more in-depth analysis of inter-individual differences in fatty acid metabolizing genes as contributing factor to colon carcinogenesis. We examined the association between genetic variability in 43 fatty acid metabolism-related genes and colorectal risk in 1225 CRC cases and 2032 controls participating in the European Prospective Investigation into Cancer and Nutrition study. Three hundred and ninety two single-nucleotide polymorphisms were selected using pairwise tagging with an r(2) cutoff of 0.8 and a minor allele frequency of >5%. Conditional logistic regression models were used to estimate odds ratios and corresponding 95% confidence intervals. Haplotype analysis was performed using a generalized linear model framework. On the genotype level, hydroxyprostaglandin dehydrogenase 15-(NAD) (HPGD), phospholipase A2 group VI (PLA2G6) and transient receptor potential vanilloid 3 were associated with higher risk for CRC, whereas prostaglandin E receptor 2 (PTGER2) was associated with lower CRC risk. A significant inverse association (P < 0.006) was found for PTGER2 GGG haplotype, whereas HPGD AGGAG and PLA2G3 CT haplotypes were significantly (P < 0.001 and P = 0.003, respectively) associated with higher risk of CRC. Based on these data, we present for the first time the association of HPGD variants with CRC risk. Our results support the key role of prostanoid signaling in colon carcinogenesis and suggest a relevance of genetic variation in fatty acid metabolism-related genes and CRC risk.
    Carcinogenesis 03/2010; 31(3):466-72. DOI:10.1093/carcin/bgp325 · 5.27 Impact Factor
  • Miranda E. Armstrong, Elizabeth A. Spencer, Valerie Beral
    Medicine & Science in Sports & Exercise 01/2010; 42. DOI:10.1249/01.MSS.0000385561.58902.ad · 4.46 Impact Factor
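
The direct thrombin inhibitor protocol above leans on the CHADS2 score to stratify stroke risk in AF. As a rough illustration of how that score is assembled, here is a minimal sketch in Python using the standard CHADS2 components (congestive heart failure, hypertension, age 75 or over and diabetes score 1 point each; prior stroke or TIA scores 2); it is not code from the review itself.

    def chads2(chf, hypertension, age, diabetes, prior_stroke_tia):
        """Return the CHADS2 stroke-risk score (0 to 6) for a patient with AF."""
        score = int(chf) + int(hypertension) + int(age >= 75) + int(diabetes)
        score += 2 * int(prior_stroke_tia)  # prior stroke or TIA carries 2 points
        return score

    # Example: a 78-year-old with hypertension and a previous TIA scores 4,
    # a level at which anticoagulation is usually considered.
    print(chads2(chf=False, hypertension=True, age=78, diabetes=False, prior_stroke_tia=True))  # 4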
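
The breathalyser study above (BMJ Open 2014) reports sensitivity and specificity against a police reference device. As a rough check of what a 26% sensitivity implies, the sketch below reconstructs approximate 2x2 counts from the published percentages (38 of 208 participants over the limit); the exact counts are assumptions for illustration, not figures taken from the paper.

    over_limit = 38             # positive by the police (reference) breathalyser
    under_limit = 208 - 38      # negative by the reference breathalyser
    true_positives = 10         # over-limit drinkers flagged by the least sensitive device (approx.)
    true_negatives = 166        # under-limit drinkers correctly cleared by that device (approx.)

    sensitivity = true_positives / over_limit                        # ~26%
    specificity = true_negatives / under_limit                       # ~98%
    falsely_reassured = (over_limit - true_positives) / over_limit   # ~74%, i.e. roughly 3 in 4

    print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
          f"over-limit drinkers missed={falsely_reassured:.1%}")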
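
The green tea and chlorogenic acid reviews above pool trial results as inverse-variance weighted mean differences and quote I2 as the heterogeneity statistic. The sketch below shows a minimal fixed-effect pooling of that kind; the three trial results are invented placeholders, not data from either review.

    import math

    # (mean difference in systolic blood pressure, standard error) per hypothetical trial
    studies = [(-2.1, 0.9), (-1.5, 1.2), (-2.4, 0.8)]

    weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
    pooled = sum(w * md for (md, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    # Cochran's Q and the I2 statistic quoted in the abstracts
    q = sum(w * (md - pooled) ** 2 for (md, _), w in zip(studies, weights))
    df = len(studies) - 1
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

    print(f"pooled MD = {pooled:.2f} mmHg (95% CI {ci_low:.2f} to {ci_high:.2f}), I2 = {i_squared:.0%}")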
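
The neuraminidase inhibitor reviews above quote numbers needed to treat alongside absolute risk differences; the NNTB is simply the reciprocal of the risk difference. A small check against the prophylaxis figures reported in those abstracts (risk differences of 3.05% and 1.98%) reproduces the published NNTBs of 33 and 51.

    import math

    def nntb(risk_difference):
        """Number needed to treat to benefit: reciprocal of the absolute risk difference, rounded up."""
        return math.ceil(1 / risk_difference)

    print(nntb(0.0305))  # oseltamivir prophylaxis, RD 3.05% -> 33 (as reported)
    print(nntb(0.0198))  # zanamivir prophylaxis, RD 1.98% -> 51 (as reported)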

Publication Stats

4k Citations
406.09 Total Impact Points

Institutions

  • 2005–2014
    • University of Oxford
      • Department of Primary Care Health Sciences
      • Cancer Epidemiology Unit
      Oxford, England, United Kingdom
  • 2008–2010
    • German Institute of Human Nutrition
      • Department of Epidemiology
      Potsdam, Brandenburg, Germany
  • 2009
    • Fondazione IRCCS Istituto Nazionale dei Tumori di Milano
      Milano, Lombardy, Italy
  • 2002
    • Catalan Institute of Oncology
      • Cancer Epidemiology Research Programme (PREC)
      Badalona, Catalonia, Spain