ABSTRACT: We sought to describe patient recruitment and experiences in a randomised controlled trial of a 12-week (thrice weekly) supervised exercise program for patients with small abdominal aortic aneurysm (AAA). Potential patients were identified via AAA surveillance lists and vascular clinics and invited to participate in the study. Upon completion of baseline assessments, patients were randomly allocated 1:1 to exercise or usual care. Patients completing the exercise arm were invited to attend a focus group session to explore experiences of diagnosis, management of the condition, trial recruitment, and expectations and experiences of the exercise program. Between January 2010 and September 2011, 545 patients were identified. The response rate to the postal invitation was 81.7% (445/545), with 108 patients responding as "interested." Only 28 of these patients were eligible and recruited (46.7% of the recruitment target), yielding an overall recruitment rate of 5.1%. However, the estimated recruitment rate among eligible patients was 23.7%. Twenty-five patients (89.3%) completed the study, and compliance with the exercise program was 94%. Participants attending the focus group session indicated that the exercise program was manageable, beneficial, and enjoyable. The feasibility of supervised exercise training in individuals with small AAA remains unclear. Our study revealed a poorer than expected recruitment rate, but good compliance with, and feedback for, the exercise intervention. We present potential explanations for these findings and suggestions for future trials involving similar populations.
Journal of vascular nursing: official publication of the Society for Peripheral Vascular Nursing 03/2014; 32(1):4-9. DOI:10.1016/j.jvn.2013.05.002
ABSTRACT: Background: Implementing semi-automated processes to efficiently match patients to clinical trials at the point of care requires both detailed patient data and authoritative information about open studies. Objective: To evaluate the utility of the ClinicalTrials.gov registry as a data source for semi-automated trial eligibility screening. Methods: Eligibility criteria and metadata for 437 trials open for recruitment in four different clinical domains were identified in ClinicalTrials.gov. Trials were evaluated for up-to-date recruitment status, and eligibility criteria were evaluated for obstacles to automated interpretation. Finally, phone or email outreach to coordinators at a subset of the trials was made to assess the accuracy of contact details and recruitment status. Results: 24% (104 of 437) of trials declaring an open recruitment status list a study completion date in the past, indicating out-of-date records. Substantial barriers to automated interpretation of free-text eligibility criteria are present in 81% to 94% of all trials. We were unable to contact coordinators at 31% (45 of 146) of the trials in the subset, either by phone or by email. Only 53% (74 of 146) would confirm that they were still recruiting patients. Conclusion: Because ClinicalTrials.gov has entries on most US and many international trials, the registry could be repurposed as a comprehensive trial-matching data source. Semi-automated point-of-care recruitment would be facilitated by matching the registry's eligibility criteria against clinical data from electronic health records, but the current entries fall short. Ultimately, improved techniques in natural language processing will facilitate semi-automated complex matching. As immediate next steps, we recommend augmenting ClinicalTrials.gov data entry forms to capture key eligibility criteria in a simple, structured format.
PLoS ONE 10/2014; 9(10):e111055. DOI:10.1371/journal.pone.0111055
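The structured eligibility format the authors recommend can be illustrated with a minimal sketch. The field names and matching logic below are hypothetical assumptions for illustration, not ClinicalTrials.gov's actual schema or API: a small record of age limits, sex, and condition codes, checked against a patient summary drawn from an electronic health record.

```python
# Hypothetical sketch of structured eligibility matching. The criteria
# fields (min_age, max_age, sexes, condition sets) are illustrative
# assumptions, not the registry's real data model.
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuredCriteria:
    min_age: int = 0
    max_age: int = 120
    sexes: frozenset = frozenset({"male", "female"})
    required_conditions: frozenset = frozenset()   # all must be present
    excluded_conditions: frozenset = frozenset()   # none may be present

def is_eligible(patient: dict, criteria: StructuredCriteria) -> bool:
    """Return True only if the patient satisfies every structured criterion."""
    conditions = set(patient.get("conditions", []))
    return (
        criteria.min_age <= patient["age"] <= criteria.max_age
        and patient["sex"] in criteria.sexes
        and criteria.required_conditions <= conditions       # subset check
        and not (criteria.excluded_conditions & conditions)  # no overlap
    )

# Illustrative use: an AAA exercise trial restricted to older adults.
aaa_trial = StructuredCriteria(
    min_age=55,
    required_conditions=frozenset({"AAA"}),
    excluded_conditions=frozenset({"unstable_angina"}),
)
patient = {"age": 67, "sex": "male", "conditions": ["AAA", "hypertension"]}
print(is_eligible(patient, aaa_trial))  # prints True
```

Set operations make each criterion a single comparison, which is exactly the kind of deterministic screening that free-text criteria currently prevent; the paper's point is that capturing even this much structure at registration would sidestep the NLP problem for common criteria.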
ABSTRACT: Randomised trials are at the heart of evidence-based healthcare, but the methods and infrastructure for conducting these sometimes complex studies are largely evidence-free. Trial Forge (www.trialforge.org) is an initiative that aims to increase the evidence base for trial decision making and, in doing so, to improve trial efficiency.
This paper summarises a one-day workshop held in Edinburgh on 10 July 2014 to discuss Trial Forge and how to advance this initiative. We first outline the problem of inefficiency in randomised trials and go on to describe Trial Forge. We present participants’ views on the processes in the life of a randomised trial that should be covered by Trial Forge.
There was general support at the workshop for the Trial Forge approach to increasing the evidence base for randomised trial decision making and improving trial efficiency. Agreed-upon key processes included choosing the right research question; logistical planning for delivery, training of staff, recruitment, and retention; data management and dissemination; and close-down. Linking to existing initiatives wherever possible was considered crucial. Trial Forge will not be a guideline or a checklist but a 'go to' website for research on randomised trial methods, with a linked programme of applied methodology research, coupled to an effective evidence-dissemination process. Moreover, it will support an informal network of interested trialists who meet virtually (online) and occasionally in person to build capacity and knowledge in the design and conduct of efficient randomised trials.
Some of the resources invested in randomised trials are wasted because of limited evidence upon which to base many aspects of design, conduct, analysis, and reporting of clinical trials. Trial Forge will help to address this lack of evidence.