Improving the performance of the health service delivery system? Lessons from the Towards Unity for Health projects.

Technical Officer Quality of Health Systems and Services, WHO Regional Office for Europe, Copenhagen, Denmark.
Education for Health 12/2006; 19(3):298-307. DOI: 10.1080/13576280600937861
Source: PubMed

ABSTRACT: The World Health Organization developed the Towards Unity for Health (TUFH) strategy in 2000 to improve health system performance. Twelve projects worldwide were supported to put the strategy into practice, and a standard evaluation and monitoring framework was developed, on the basis of which project coordinators prepared technical progress reports.
Objective: To review the utility and effectiveness of the evaluation criteria recommended by TUFH and their application in four of the original twelve projects.
Methods: We reviewed status reports provided by European project coordinators and developed a standardized reporting template to extract information using the original TUFH evaluation criteria.
Results: The original TUFH evaluation framework is comprehensive but was only partly followed by the field projects. The evaluation strategies employed by the projects were insufficient to link the interventions to the desired process improvements, and few of the evaluation measures addressed outcomes.
Conclusions: The evaluation of complex community interventions poses many challenges; nevertheless, tools are available to assess impact on structures and processes, and selected outcome indicators can be identified to monitor progress in future projects. Based on this review of the evaluation status of the TUFH projects and the resources available, we recommend moving away from uniform evaluation and towards monitoring a minimal set of context-specific performance indicators.

Available from: Oliver Groene, Jun 29, 2015
    ABSTRACT: The view is widely held that experimental methods (randomised controlled trials) are the "gold standard" for evaluation and that observational methods (cohort and case control studies) have little or no value. This ignores the limitations of randomised trials, which may prove unnecessary, inappropriate, impossible, or inadequate. Many of the problems of conducting randomised trials could often, in theory, be overcome, but the practical implications for researchers and funding bodies mean that this is often not possible. The false conflict between those who advocate randomised trials in all situations and those who believe observational data provide sufficient evidence needs to be replaced with mutual recognition of the complementary roles of the two approaches. Researchers should be united in their quest for scientific rigour in evaluation, regardless of the method used.
    BMJ Clinical Research 06/1996; 312(7040):1215-8. DOI: 10.1136/bmj.312.7040.1215
    ABSTRACT: Public health interventions tend to be complex, programmatic, and context dependent. The evidence for their effectiveness must be sufficiently comprehensive to encompass that complexity. This paper asks whether and to what extent evaluative research on public health interventions can be adequately appraised by applying well established criteria for judging the quality of evidence in clinical practice. It is adduced that these criteria are useful in evaluating some aspects of evidence. However, there are other important aspects of evidence on public health interventions that are not covered by the established criteria. The evaluation of evidence must distinguish between the fidelity of the evaluation process in detecting the success or failure of an intervention, and the success or failure of the intervention itself. Moreover, if an intervention is unsuccessful, the evidence should help to determine whether the intervention was inherently faulty (that is, failure of intervention concept or theory), or just badly delivered (failure of implementation). Furthermore, proper interpretation of the evidence depends upon the availability of descriptive information on the intervention and its context, so that the transferability of the evidence can be determined. Study design alone is an inadequate marker of evidence quality in public health intervention evaluation.
    Journal of Epidemiology & Community Health 03/2002; 56(2):119-27.
    ABSTRACT: Given the pressures for health care reform, interest in the concept of integrated or organized delivery systems as a means to offer more coordinated cost-effective care is growing. This article has two primary objectives: (1) to clarify the different types of integration associated with the notion of an organized delivery system, and (2) to share the results from an ongoing study of 12 organized delivery systems. The findings indicate a moderate level of integration overall, particularly in the areas of culture, financial planning, and strategic planning. The study found that corporate staff respondents perceive their systems to be more integrated and effective than do operating unit managers, and that some functional integration areas are positively associated with both physician-system and clinical integration that, in turn, are positively related to each other. Overall, perceived integration was found to be positively associated with perceived effectiveness.
    Hospital & health services administration 02/1993; 38(4):467-89.