Article

The effect of sex education on adolescents' use of condoms: applying the Solomon four-group design.

Institute of Psychology, University of Oslo, Norway.
Health Education Quarterly 03/1996; 23(1):34-47. DOI: 10.1177/109019819602300103
Source: PubMed

ABSTRACT A school-based sex education program was developed in order to prevent sexually transmitted diseases and unwanted pregnancies. A Solomon four-group design, with random assignment to the different conditions, was used to evaluate an intervention based on cognitive social learning theory and social influence theory. The main goal of the intervention was to increase use of condoms. A stratified sample of 124 classes (2,411 students) was drawn at random from all the upper secondary schools (high schools/colleges) in one county in Norway. The results indicate a consistent interaction between pretest and intervention, which seems to have an effect on condom use. Neither pretest nor intervention alone contributed to this effect. The interaction effect appeared among the students with few sexual partners. Several possible explanations for the observed interaction effect, and the implications for future interventions, are discussed.

  •
    ABSTRACT: Substance-using men who have sex with men (MSM) are among the groups at highest risk for HIV infection in the United States. We report the results of a randomized trial testing the efficacy of a small-group sexual and substance use risk reduction intervention based on empowerment theory, compared to an enhanced efficacious control condition, among 515 high-risk, not-in-treatment MSM substance users. Effect sizes for sexual risk and substance use outcomes were moderate to large: HIV transmission risk frequency, d = 0.71 in the control versus 0.66 in the experimental group; number of anal sex partners, d = 1.04 versus 0.98; substance dependence symptoms, d = 0.49 versus 0.53; significant differences were not observed between conditions. Black MSM reduced their risks at a greater rate than White or Latino men. The findings point to a critically important research agenda to reduce HIV transmission among MSM substance users.
    AIDS and Behavior 06/2013; 3.49 Impact Factor
  •
    ABSTRACT: Behavioral intervention trials may be susceptible to poorly understood forms of bias stemming from research participation. This article considers how assessment and other prerandomization research activities may introduce bias that is not fully prevented by randomization. This is a hypothesis-generating discussion article. An additivity assumption underlying conventional thinking in trial design and analysis is problematic in behavioral intervention trials. Postrandomization sources of bias are somewhat better known within the clinical epidemiological and trials literatures. Neglect of attention to possible research participation effects means that unintended participant behavior change stemming from artifacts of the research process has unknown potential to bias estimates of behavioral intervention effects. Studies are needed to evaluate how research participation effects are introduced, and we make suggestions for how research in this area may be taken forward, including how these issues may be addressed in the design and conduct of trials. It is proposed that attention to possible research participation effects can improve the design of trials evaluating behavioral and other interventions and inform the interpretation of existing evidence.
    Journal of Clinical Epidemiology 12/2013; 5.33 Impact Factor
  •
    ABSTRACT: The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Assessments may introduce bias in randomised controlled trials by altering receptivity to intervention in experimental groups and differentially impacting on the behaviour of control groups. In a Solomon 4-group design, participants are randomly allocated to one of four arms: (1) assessed experimental group; (2) unassessed experimental group; (3) assessed control group; or (4) unassessed control group. This design provides a test of the internal validity of effect sizes obtained in conventional two-group trials by controlling for the effects of baseline assessment, and by assessing interactions between the intervention and baseline assessment. The aim of this systematic review is to evaluate evidence from Solomon 4-group studies with behavioural outcomes that baseline research assessments themselves can introduce bias into trials. Electronic databases were searched, supplemented by citation searching. Studies were eligible if they reported appropriately analysed results in peer-reviewed journals and used Solomon 4-group designs in non-laboratory settings with behavioural outcome measures and sample sizes of 20 per group or greater. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review. There were too few high-quality completed studies to infer conclusively that biases stemming from baseline research assessments do or do not exist. There is, therefore, a need for new rigorous Solomon 4-group studies that are purposively designed to evaluate the potential for research assessments to cause bias in behaviour change trials.
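    The logic of the four-arm comparison described above can be sketched in a short simulation. This is an illustrative sketch only, not an analysis from any of the studies: the arm sizes and response probabilities are hypothetical, chosen so that the intervention "works" only in the pretested arm, reproducing the pretest-by-intervention interaction pattern the review discusses. The interaction is estimated as the difference of the two intervention-versus-control contrasts.

    ```python
    import random

    random.seed(0)

    def simulate_arm(n, p):
        """Simulate n participants; each reports the target behaviour
        (e.g. condom use) with probability p. Both n and p are
        hypothetical values for illustration."""
        return [1 if random.random() < p else 0 for _ in range(n)]

    # The four arms of a Solomon 4-group design. Assumed effect structure:
    # neither pretest nor intervention alone shifts the outcome, but the
    # combination does -- i.e. a pure pretest x intervention interaction.
    arms = {
        ("pretest", "intervention"):    simulate_arm(500, 0.55),  # arm 1
        ("no_pretest", "intervention"): simulate_arm(500, 0.40),  # arm 2
        ("pretest", "control"):         simulate_arm(500, 0.40),  # arm 3
        ("no_pretest", "control"):      simulate_arm(500, 0.40),  # arm 4
    }

    # Posttest mean for each cell of the 2x2 layout
    mean = {k: sum(v) / len(v) for k, v in arms.items()}

    # Intervention effect within each pretest stratum, then their difference
    effect_pretested = (mean[("pretest", "intervention")]
                        - mean[("pretest", "control")])
    effect_unpretested = (mean[("no_pretest", "intervention")]
                          - mean[("no_pretest", "control")])
    interaction = effect_pretested - effect_unpretested

    print(f"intervention effect, pretested arms:   {effect_pretested:+.3f}")
    print(f"intervention effect, unpretested arms: {effect_unpretested:+.3f}")
    print(f"pretest x intervention interaction:    {interaction:+.3f}")
    ```

    A conventional two-group trial observes only arms 1 and 3, so its effect estimate equals `effect_pretested`; the extra unassessed arms are what allow the interaction contrast to be estimated at all.
    
    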
    PLoS ONE 01/2011; 6(10):e25223. · 3.73 Impact Factor
