JOURNAL OF APPLIED SPORT PSYCHOLOGY, 25: 61–71, 2013
Copyright © Association for Applied Sport Psychology
ISSN: 1041-3200 print / 1533-1571 online
DOI: 10.1080/10413200.2012.663859
The Value of Social Validation in Single-Case Methods in Sport
and Exercise Psychology
JENNY PAGE AND RICHARD THELWELL
University of Portsmouth
Social validation is used to determine satisfaction with an intervention and has been utilized
in many single-case studies within sport and exercise psychology research and consultancy.
This article reviews current social validation procedures and makes recommendations of how
a more thorough consideration of the technique could add greater value to the understanding
of single-case protocols within research and applied practice. Recommendations include using
semi-structured interviews for data collection, using content analysis to analyze these data,
reporting social validation results in a thorough manner, collecting social validation information
from significant others, and collecting social validation data as frequently as possible.
This commentary aims to provide a comprehensive review of social validation procedures
employed in single-case research within the sport and exercise psychology literature. We aim
to inform interested readers of the typical approaches adopted, and signpost practitioners
and researchers to how they might employ social validation procedures based on the limited
literature comparing social validation techniques. This commentary will review five main areas.
First, we will address the key components of what social validation is and what social validation
procedures aim to address, with particular reference to efficacy and effectiveness, and statistical
and clinical significance. Second, we will address the issues of variable and inconsistent usage
of methods used to collect social validation data. Third, we will address the variable and
inconsistent usage of methods used to analyze social validation data. Fourth, we will propose
suggestions for social validation research in the sport and exercise psychology domain, and
finally, we will make recommendations for applied practice and single-case research.
WHAT IS SOCIAL VALIDATION AND WHAT IS ITS PURPOSE?
The use of social validation as a tool to determine the satisfaction with an intervention
is important in intervention studies because it is alleged to tie the intervention effects to the
social context (Storey & Horner, 1991). For example, social validation procedures have enabled
researchers to demonstrate that increases in rugby performance as a result of a goal-setting
intervention were perceived as effective by the players and that the changes in performance
were viewed as useful to the team (Mellalieu, Hanton, & O’Brien, 2006). A consequence of the
data being available is that they can help guide research and applied work (Storey & Horner,
1991). With this in mind, it is important to note that social validity refers to the “consideration of
Received 5 September 2011; accepted 1 February 2012.
Address correspondence to Jenny Page, Department of Sport and Exercise Science, University of
Portsmouth, Cambridge Road, Portsmouth, PO1 2ER, UK. E-mail: jenny.page@port.ac.uk
social criteria for evaluating the focus of treatment, the procedures that are used and the effects
that they have” (Kazdin, 1982, p. 479). It is a measure of social importance (Wolf, 1978) and
has been documented as a measure that supplements statistical analyses of objective data by
subjectively assessing socially important outcomes (Dempsey & Matson, 2009). Furthermore,
it is argued that social validity assessments offer an appropriate framework for examining client
satisfaction (Milne, 1987). Hrycaiko and Martin (1996) argued that seeking “social validation
helps to ensure that practitioners do the best job that they can in helping consumers of their
service function to the best of their ability” (p. 187). This, itself, is a key point worthy of note
given that practitioners should be aware of their duty to be accountable (Smith, 1989). Social
validation has been defined as a “supplemental method that facilitates involvement of multiple
participants in the evaluation process” (Busse, Kratochwill, & Elliott, 1995, p. 273), and adds
subjective data to objective data (Wolf, 1978). Hence, three key areas that require attention
through the social validation process include the social significance of the goal(s), the social
appropriateness of the procedures, and the social importance of the effects (Wolf, 1978).
The social significance of the goal(s) of the intervention relates to whether the specific
behavioral goals identified by the researcher or practitioner are congruent with what society
(i.e., the client), really wants (Kazdin, 1982; Wolf, 1978). For example, if a study aims to
identify whether a psychological intervention has a significant impact on confidence, it seems
logical to check that changes in confidence are of value to the participant or client, and
potentially to significant others. Furthermore, there would be little point in an athlete
perceiving an improvement in attribute “x” to be essential when the coach instead perceives
attribute “y” to be essential. Assessing the social significance of the goal(s) of the intervention is important for a
number of reasons with one of the most prominent being the potential adherence to the
program. Indeed, several researchers have identified that adherence to interventions is linked
to the individuals’ perceived importance of changes to the target behaviors/variables (Bandura,
1986; Godin, Valois, & Lepage, 1993; Rosenstock, Strecher, & Becker, 1988), and practitioners
and researchers alike should be aware of this potential issue.
The social appropriateness of the procedures relates to whether the clients, athletes, and
participants consider the treatment procedures acceptable (Kazdin, 1982; Wolf, 1978). As
such, the measurement of social appropriateness is related to the intervention (independent
variable) rather than the dependent variables being measured. Of particular interest within
social appropriateness measurement is whether the intervention is considered acceptable by
people other than the participants themselves, for example, parents, peers, and significant
others (Martin & Hrycaiko, 1983).
Finally, the social importance of the intervention effects relates to whether the consumers
are satisfied with the results, which may also include those that are unpredicted, such as
changes in mood when confidence was the dependent measure (Kazdin, 1982; Wolf, 1978).
The measurement of social importance therefore relates to the idiosyncratic effect of the
intervention on the variables identified. It seems reasonable to suggest that if participants or
clients are not satisfied with the impacts, then advocating the same techniques and variables
to other participants or client groups may be unethical.
In relation to practices in sport and exercise science and psychology, it seems prudent
to understand how social validation may allow researchers and practitioners to differentiate
between, and assess differences in, efficacy and effectiveness, and statistical and clinical
significance. An efficacy study contrasts some kind of therapy/intervention to a comparison
group under well-controlled conditions (Seligman, 1995), and therefore single-case design
studies do not produce data that would allow researchers or practitioners to conclude on the
efficacy of an intervention. This in the main could be argued based on the precedent that
within single-case designs each participant takes part in both the control and experimental
phases of the study. Seligman (1995), suggested that efficacy studies may not be the best way of
finding out which treatments actually work in the field and that effectiveness studies examining
how patients fare under the actual conditions of treatment in the field can yield useful and
credible empirical validation. Therefore, the social validation procedures employed in single-
case designs allow individuals to subjectively explore their thoughts on the intervention and,
thus, are a measure of effectiveness rather than efficacy.
With regard to statistical and clinical significance, the traditional use of statistical signifi-
cance (in sport and exercise psychology) is limited in at least two respects (Jacobson & Truax,
1991). The first relates to its inability to show within-treatment variability; the second, and
perhaps most important when assessing single-case data, is that it does not fully detail the
efficacy of the intervention. As such, it could be the case that the user’s satisfaction may be
low despite statistically significant changes. Therefore, the ability to document both statistical
and clinical significance should be considered of utmost importance to researchers and prac-
titioners working with individual performers. Documenting the individual responses to the
intervention is particularly important given that applied-based researchers and practitioners
must remember that in an applied setting the nomothetic science must at some point be applied
to the individual (Dunn, 1994). Prior to much of the current thinking, Kazdin (1982) suggested
two methods for assessing social validation. The first is related to social comparisons in which
behaviors pre- and post-intervention are compared with non-deviant peer behavior. Practi-
tioners and researchers would therefore compare clients with low self-efficacy to individuals
showing higher levels of self-efficacy. The second is related to subjective evaluation in which
the importance of the response to the intervention is assessed by individuals likely to have
contact with the client. Using this approach, practitioners and researchers could gain social
validation information from coaches, parents, and teammates.
Despite its use in sport being somewhat limited to the psychological domain, the impor-
tance of social validation has been considered in many disciplines. Storey and Horner (1991)
provided an evaluative review of social validation research involving persons with handicaps,
and concluded that “social validation procedures were an important component of applied be-
havior analysis” (p. 352) because they can help guide research and service support strategies.
Following this, Goldstein (2002) produced a review of treatment efficacy of communication
interventions for children with autism. It was concluded that social validation could play an
important role to help validate the measures taken by observers and researchers and alleviate
some of the concerns that changes are attributable to biased recording on their part. Given
the obvious potential for social validation in enhancing understanding of results from single-
case design research, it is of no surprise that other domains, including sport and exercise
psychology, have also utilized these measures. The use of social validation procedures within
sport and exercise psychology became more prevalent in the 1990s, which coincided with
two review papers (Dunn, 1994; Hrycaiko & Martin, 1996) promoting the use of single-case
designs within this field.
Within the sport and exercise literature it would appear that social validation is fundamental
to understanding, evaluating, and documenting the impact that interventions have on clients’
and participants’ performance. There have been many calls for sport scientists and psychologists to
document the effectiveness of their work to, among others, national governing bodies, coaches
and performance directors in order to enhance accountability of the practitioner in an applied
setting (Anderson, Miles, Mahoney, & Robinson, 2002, 2004; Smith, 1989). Strean (1998)
suggested that effective evaluation is essential in applied sport psychology and necessary for
moving the domain forward. Social validation techniques have the potential
to enhance accountability in these settings through measuring the importance of changes in the
measured variables, the appropriateness of the intervention and the overall satisfaction with
the outcomes of the specific intervention. The ability to document and publish these data could
have a considerable impact on further enhancing the reputation of the profession and could add
value to the work that is being conducted in applied settings. If clients continue to document
that the changes brought about by the interventions used are important and satisfactory, such
accolades may encourage other potential clients to make use of the services of sport scientists
and psychologists. Furthermore, if participants or clients are making statistically significant
changes in variables that researchers or practitioners perceive to be important, but such changes
are not viewed as important to the client, a drop-out or disengagement with the intervention
may occur. Similarly, if an intervention is not viewed as appropriate to the user, withdrawal
may be likely. Such conclusions have the ability to influence and enhance the service delivery
of sport scientists and psychologists and given the performance developments that many
practitioners claim, they would provide useful supplementary evidence.
METHODS USED TO COLLECT SOCIAL VALIDATION DATA
Given the necessity to document the effectiveness of interventions in applied research, it is
unsurprising that most published literature in sport psychology has utilized measures of
social validity. In a 30-year review of the sport psychology literature, it was found that 26 of
40 single-case studies conducted a formal social validity evaluation with participants (Martin,
Thompson, & Regehr, 2004). Within the review, Martin et al. (2004) indicated that positive
reactions to questions such as (a) what do the participants (and perhaps significant others)
think about the goals of the intervention? (b) what do they think about the procedures that
were applied? and (c) what do they think about the results produced by those procedures?
were found in most instances. It is important to note that 14 studies did not use formal social
validation and therefore add to the observations by Hardy and Jones (1994) that systematic
evaluation of practice is not customary in the field. This very issue is one of concern and limits
the development of accountability within the profession.
The studies that have used formal social validation procedures have used different methods
to obtain the data and Schwartz and Baer (1991) support the need to focus on methodological
issues in social validation. One inconsistency within the social validation procedures within
the sport and exercise psychology literature is the depth and breadth of the questions used to
collect social validation data. Some studies have used an all-encompassing question such as
“do you think the procedures were acceptable and were you satisfied with the results” (see
Lindsay, Maynard, & Thomas, 2005). Others (e.g., Pates, Cummings, & Maynard, 2002; Pates
& Maynard, 2000) have opted to ask three questions. These questions have largely been based
on Hrycaiko and Martin’s (1996) and Wolf ’s (1978) recommendations and have been related
to whether the participants: (a) perceived the task to be important, (b) thought the procedures
of study were acceptable, and (c) felt satisfied with the results. Conversely, Thelwell et al. (e.g.,
Thelwell & Greenlees, 2001; Thelwell & Maynard, 2003), utilized more extensive questions
with numerical responses required by the participants. Such questions included (a) “How
important is an improvement in consistency of performance to you?” with responses ranging
from 1 (not at all important) to 7 (extremely important); (b) “Do you consider the changes
in performance to be significant?” with responses ranging from 1 (not at all significant) to 7
(extremely significant); (c) “How satisfied were you with the mental skills training program?”
with responses ranging from 1 (not at all satisfied) to 7 (extremely satisfied); (d) “Has the
intervention proved useful to you?” with responses ranging from 1 (not at all useful) to
7 (extremely useful). Although such an approach may seem attractive in terms of the data
collected, one must exercise caution in relation to the depth of the data generated.
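As an illustration only, the minimal sketch below (in Python, a choice of convenience rather than anything used in the cited studies) shows how such fixed-response items could be represented so that they are administered identically to every participant; the item wordings echo those reported above, but the structure itself is hypothetical.

```python
# Illustrative sketch only (not taken from any of the studies cited above):
# a simple structure for fixed-response social validation items on 1-7 scales.
from dataclasses import dataclass


@dataclass
class SocialValidationItem:
    question: str
    low_anchor: str   # meaning of a rating of 1
    high_anchor: str  # meaning of a rating of 7


ITEMS = [
    SocialValidationItem("How important is an improvement in consistency of performance to you?",
                         "not at all important", "extremely important"),
    SocialValidationItem("Do you consider the changes in performance to be significant?",
                         "not at all significant", "extremely significant"),
    SocialValidationItem("How satisfied were you with the mental skills training program?",
                         "not at all satisfied", "extremely satisfied"),
    SocialValidationItem("Has the intervention proved useful to you?",
                         "not at all useful", "extremely useful"),
]


def record_responses(participant_id: str, ratings: list[int]) -> dict:
    """Pair each item with one participant's 1-7 rating, kept at the individual level."""
    if len(ratings) != len(ITEMS) or not all(1 <= r <= 7 for r in ratings):
        raise ValueError("one rating per item, each on the 1-7 scale")
    return {
        "participant": participant_id,
        "responses": [{"question": item.question, "rating": r}
                      for item, r in zip(ITEMS, ratings)],
    }
```

Closed items of this kind are easy to administer consistently, but, as noted here and below, they generate little depth on their own and typically need to be supplemented with open-ended questioning.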
To exacerbate the issues surrounding the inconsistency in the breadth and depth of data col-
lected, the methods used to collect the data have also been inconsistent. Many researchers have
utilized a questionnaire approach (e.g., Kendall, Hrycaiko, Martin, & Kendall, 1990; Mellalieu,
Hanton, & Thomas, 2009; Thelwell, Greenlees, & Weston, 2006). The exact questionnaire used
by Kendall et al. (1990) was not reported in the study. However, they drew conclusions based
on enjoyment of participants within the study, whether intervention procedures were helpful
and worthwhile, and whether they would recommend the techniques (imagery, relaxation, and
self-talk) to other elite athletes. Despite the perception that positive data were collected, the
absence of the details relating to its collection limits the impact of the argument. Interestingly,
the study also used a log book to monitor the participants’ feelings and program throughout
the study and these data were reported in addition to the social validation data. The content of
the questionnaires used by Thelwell et al. (2006) and Mellalieu et al. (2009) has been alluded
to previously. Interestingly, Mellalieu et al. also provided an open-ended question to invite
participants to explain their perceived underlying reasons for the relative success or failure of
the intervention. Despite the apparent widespread usage of social validation questionnaires,
there seems to be a lack of consistency to the questions asked, and this is one area that might
benefit from immediate attention.
In contrast to the questionnaire-focussed social validation data collection, other studies have
used an all-encompassing question delivered verbally such as “do you think the procedures
were acceptable and were you satisfied with the results” (see Lindsay et al., 2005). Some studies
(e.g., Pates et al., 2002; Pates & Maynard, 2000) have preferred to ask three verbal questions,
based on Hrycaiko and Martin’s (1996) and Wolf ’s (1978) recommendations. Conversely,
Thelwell et al. (e.g., Thelwell & Greenlees, 2001; Thelwell & Maynard, 2003), utilized more
extensive questions in a verbal protocol with numerical responses required by the participants.
To elicit information regarding the precise impact of the intervention, participants were also
asked to consider potential underlying reasons as to why the intervention procedure was a
success or failure. This was assessed via an open-ended question which read, “If, from your
perceptions, the procedure has contributed to changing your performance, can you state why
you perceive this to be the case?” It would appear that this more systematic approach to
gaining perceptions of how effective the intervention was, and why, offers more potential for
understanding the idiosyncrasies of single-case data.
A review of the single-case data within sport and exercise psychology also highlights the
lack of detail given in many papers in relation to how the social validity data were collected.
For example, Barker and Jones (2006, p. 98) state that “finally, social validation data was
collected at the end of the follow-up period to ascertain the participant’s perceptions and
feelings of the intervention” (Hanton & Jones, 1999; Kazdin, 1982), and Kendall et al. (1990)
state that “subjects were asked to respond to a social validation questionnaire at the completion
of the study” (p. 163). Such omissions in detail make it difficult to determine the potential
effectiveness of these data and limit the development of social validation procedures within
research and professional practice.
Another factor to consider is the temporal element of social validity. A consistent feature
of the use of social validation in single-case design i n sport and exercise psychology is
the employment of the techniques towards the ends of the intervention period. It has been
recommended that social validity should be assessed in as many phases of a single-case design
as possible (Storey & Horner, 1991), and therefore only assessing at the end of the intervention
could be limiting the potential usefulness of social validation. For example, if clients were
able to report that the dependent variables selected are not acceptable to them at the beginning
of the program, attempts could be made to change the variables used or to educate the clients
of the importance of the measures selected prior to undergoing the intervention. Furthermore,
if the client is only able to express his or her dissatisfaction with the procedures employed
during the intervention at the end of the intervention, practitioners do not have the chance to
alter the intervention to suit the individual. The lack of feedback from clients throughout the
intervention is problematic given that practitioners often aim to deliver an individualized and
client-centered program.
Another factor to consider relates to the individuals who, in addition to the participants
or clients, can be asked to assess social validity. Indeed, Wolf (1978) suggested that the
social appropriateness of the procedures should be measured by individuals other than the
participants, or clients. However, a concern within the sport and exercise psychology literature
is that most single-case studies measure social validity only from the perspective of the
participants, rather than addressing significant others such as parents, peers or coaches. Such
practices seem to contradict the essence of social validation, which states that “satisfaction
can be assessed by people other than the participants” (Kazdin, 1982; Wolf, 1978). Within
sport and exercise psychology, Martin and Hrycaiko (1983) suggested that in a coaching
context, programs should be evaluated by athletes, parents, and others involved in the sporting
environment. However, most studies within sport and exercise psychology have focused on
collecting data only from the participants themselves (e.g., Mellalieu et al., 2009), with a
limited number of studies collecting data from participants and coaches (e.g., Mellalieu et al.,
2006). Turner and Barker (2013/this issue) assessed youth cricketers’ parents’ and coaches’
perceptions of rational-emotive behavior therapy in addition to those of the participants, and
were able to gain valuable comments from the significant others regarding the effectiveness of
the intervention, providing further support for it. This is an excellent demonstration of how
significant others can contribute to the measurement of effectiveness of an intervention.
DATA ANALYSES
In addition to the inconsistencies in methods used to collect these data, the methods used
to analyze these data also differ. Furthermore, very few studies have reported the data analysis
techniques used to obtain social validation information. It appears that many studies have
used quotes gathered through raw data collected during the social validation procedure to
substantiate numerical changes in the dependent variables in the results sections of the studies
(e.g., Barker & Jones, 2008). In these studies (e.g., Barker & Jones, 2008), it would appear that
social validation data are used as a supplemental method, rather than being discussed in their
own right. This subordinate treatment of the data, perhaps due to a lack of understanding
in sport and exercise psychology about how to collect social validation data and the value of
doing so, appears to devalue the use of social validation. Such practices need to be reduced
to enable researchers and practitioners to document their effectiveness in a more coherent
manner and to reinforce the benefits of the intervention.
The studies that have assessed social validity on a Likert scale (Hanton & Jones, 1999;
Mellalieu et al., 2009; Swain & Jones, 1995; Thelwell & Maynard, 2003; Thelwell et al.,
2006) have typically reported the numerical values as means or grouped values across the
participants (see Swain & Jones, 1995, for an exception). It would appear that taking grouped
values contradicts the fundamental purpose of social validation, which by definition aims
to understand individual clients’ responses to interventions (Kazdin, 1982). The benefits
of reporting individual participant data are reinforced by Franck (1986) who argued that
“nomothetic science can never escape the individual” (p. 24). If social validity refers to
individual client satisfaction, taking means or grouping these data detracts from the ability
to make sound conclusions based on individual changes in the dependent variables and their
perceived usefulness.
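To make the argument concrete, the brief sketch below uses invented ratings (not data from any published study) to contrast individual reporting with a grouped mean for a single satisfaction item.

```python
# Illustrative sketch only, using invented 1-7 ratings: individual reporting
# versus a group mean for a "satisfaction with results" item.
ratings = {"Participant 1": 7, "Participant 2": 3, "Participant 3": 6}

# Reporting each participant separately preserves the idiographic information
# that social validation is intended to capture.
for participant, rating in ratings.items():
    print(f"{participant}: satisfaction = {rating}/7")

# A grouped value hides Participant 2's clear dissatisfaction.
mean_rating = sum(ratings.values()) / len(ratings)
print(f"Group mean = {mean_rating:.1f}/7")
```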
Perhaps the most substantial analysis technique reported within the sport and exercise
psychology literature with regard to social validation is by Mellalieu et al. (2009). This study
used standard content analysis to analyze written responses to an open-ended question relating
to the reasons for the success or failure of the intervention. The study discussed three key
themes identified through the content analysis.
Furthermore, once reported in the results section, many studies include very little discus-
sion of these data in the discussion sections of their studies (e.g., Barker & Jones, 2008;
Kendall et al., 1990; Thelwell et al., 2006). This omission hinders researchers’ and practition-
ers’ abilities to draw conclusions based on a thorough understanding of an individual and this
would appear crucial not only for when working with single-case data, but also for enhancing
accountability. Where social validation findings have been discussed, researchers have referred
to the valuable additional information gleaned such as the use of social validation as a manip-
ulation check to enable understanding of what might effect change in performance (Swain &
Jones, 1995).
Interestingly, some papers (e.g., Shambrook & Bull, 1996) included no mention of the results
from the social validation, despite stating that they had used a social validation questionnaire
in the method section. However, such practice is rare in the sport psychology literature with
most papers making some attempt to relay the social validity findings.
Despite the inconsistencies in the methods used and the analyses of the social validation
data, many conclusions have been drawn in relation to the social validity of interventions. In
particular, studies have attempted to draw conclusions about (a) whether the procedures are acceptable
to the participant (e.g., Barker & Jones 2006, 2008; Hanton & Jones, 1999; Kendall et al.,
1990; Thelwell & Greenlees, 2001; Thelwell & Maynard, 2003; Thelwell et al., 2006), (b) the
perceived effects of the intervention on performance (e.g., Hanton & Jones, 1999; Lindsay
et al., 2005; Marlow, Bull, Heath, & Shambrook, 1998; Pates & Maynard, 2000; Thelwell &
Greenlees, 2001; Thelwell et al., 2006), (c) the perceived changes in consistency of perfor-
mance (e.g., Pates & Maynard 2000; Thelwell & Maynard, 2003), (d) the perceived enjoyment
of intervention (Barker & Jones, 2006, 2008; Kendall et al., 1990; Templin & Vernacchia,
1995), (e) whether participants would recommend techniques to other athletes (Barker &
Jones, 2008), (f) athletes’ commitment to intervention (Hanton & Jones, 1999; Thelwell &
Maynard 2003), and (g) why the changes had occurred (Hanton & Jones, 1999; Thelwell &
Greenlees, 2001; Thelwell & Maynard, 2003).
Suggestions for Social Validation Research
To date, no attempt has been made to compare and contrast, or even establish, the effec-
tiveness of the different questions asked within the social validation process, the different
methods used to collect social validation, the most effective people that can assess social
validation other than the participant or client, when to collect social validation information,
or what analyses are the most appropriate for social validation research. Studies comparing
the effectiveness of the different methods used to collect and analyze social validation data
would enable researchers and applied practitioners to select the appropriate social validation
procedures based on the aims of their data collections.
CONCLUSION
Social validation procedures have the potential to strengthen the external validity of research
and consultancy within sport and exercise psychology (Storey & Horner, 1991). However, to
enable this to occur, there is a need to focus on methodological issues and measurement of
social validation to ensure that such data are treated with the same rigor associated with other
qualitative data collected within the discipline. Despite the lack of key research examining
the impact of social validation within sport and exercise psychology, we provide a number of
recommendations for sport and exercise psychologists based on reviews in other domains and
our own opinions on current practice.
Data Collection Methods
It appears that greater consistency in the methods used to collect social validation data is
necessary. There are obvious merits to using an interview technique as it gives participants,
clients, and significant others the opportunity to expand on answers that could influence the
future choice of dependent variables and delivery of interventions.
Question Formulation
To enhance the quality of social validation data it is important to consider the questions
that are asked. It would appear that questions asked in a systematic and concise manner yield
the most beneficial social validation data. Furthermore, questions should be based on a single
item, be unambiguous (Shaw & Wright, 1967), and use simple wording (Robinson,
Rush, & Head, 1964). The recommendation is therefore to deliver semi-structured interviews
that enable sound conclusions to be drawn regarding the three areas of social validity identified
by Wolf (1978).
Data Analysis
It is advised that more thorough, systematic analyses be conducted of these qualitative
data. With social validation data argued to have great value (Dempsey & Matson, 2009), the
methodological principles that underpin the analysis of qualitative data need to be adhered
to so as to provide confidence in the information gained. As such, the use of content analysis to
create themes as to why dependent variables are useful to change, or why an intervention might be
appropriate, may provide invaluable information for a practitioner when designing future inter-
vention studies. Similarly, once such themes are established, a more deductive type of analysis
may allow researchers to add to their understanding of why particular dependent variables
are important. Although the varying research paradigms are beyond the scope of this paper,
researchers and practitioners need to be fully aware that the collection and analysis of data
should follow accepted procedures such as content analyses, as used by Mellalieu et al. (2009).
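By way of illustration, the sketch below shows how raw responses might be organized into first-order and higher-order themes mapped onto Wolf’s (1978) three areas; the quotes and theme labels are hypothetical and are not drawn from Mellalieu et al. (2009) or any other cited study.

```python
# Illustrative sketch only: hypothetical raw responses tagged with first-order
# themes, then grouped into higher-order themes reflecting Wolf's (1978)
# three areas of social validity.
raw_responses = {
    "P1": "The pre-shot routine helped me feel calmer before each attempt.",
    "P2": "I kept going with the program because the goals mattered to my coach as well.",
    "P3": "The imagery sessions were hard to fit in around training.",
}

first_order_themes = {
    "P1": "reduced pre-performance anxiety",
    "P2": "goal importance supported adherence",
    "P3": "procedures difficult to schedule",
}

higher_order_themes = {
    "social importance of the effects": ["reduced pre-performance anxiety"],
    "social significance of the goals": ["goal importance supported adherence"],
    "social appropriateness of the procedures": ["procedures difficult to schedule"],
}

# Reporting raw quotes alongside the themes they informed keeps the analysis
# transparent for the reader.
for higher, firsts in higher_order_themes.items():
    quotes = [raw_responses[p] for p, f in first_order_themes.items() if f in firsts]
    print(f"{higher}: {firsts} -> {quotes}")
```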
Reporting Data
In addition to adopting more detailed analyses, it is important that social validation data
are reported in a manner that is clear and concise and that they add to the other data collected
throughout the study. Therefore, the inclusion of raw data and the resultant first- and
second-order themes identified would be beneficial for the reader. In addition, to make
improvements in the documentation of social validation data in the results sections of studies,
social validation data should be discussed with regard to understanding clients’ and other
people’s perceptions of the intervention within the discussion sections. This should help
explain why interventions were or were not useful.
Using Significant Others
A further recommendation relates to who is used to provide social validation information,
other than, or in addition to, the participants or clients. Given the benefits associated with
triangulation of data (Kazdin, 1982), it would appear that assessing social validity from a
variety of perspectives may be useful in understanding the impacts of our work and may enhance
practitioners’ efficacy to employ interventions. Social validation information collected from
others also has the potential to generate further recommendations and to enhance accountability,
thereby adding value to our trade. With regard to who to access, it is recommended that social
validation data are collected from as many relevant people as necessary, including coaches,
family members, and teammates. In addition, videotapes can be used to record and show
significant others the intervention procedures and/or changes in target behaviors if they do not
have regular access to the client (Storey & Horner, 1991).
Temporal Aspects
It is recommended that social validation procedures are employed before, during, and after
the intervention period. If participants agree that the measures being investigated are important
to them prior to the intervention, this may influence the expectations of the impact of the inter-
vention and also the adherence to the intervention. Checking the importance of such variables
can also be made during and after the intervention, to either reinforce that the variables selected
are still important to the client or highlight where the variables are no longer considered impor-
tant. More regular checks of social validation would enable the practitioner to alter the measures
or further educate the individual about the importance of these variables. An understanding of the
individual’s perceptions of the intervention during the intervention may be of particular benefit
when using an alternating-treatments design whereby the researcher and/or practitioner will
discontinue a treatment if it is not working, and therefore it would seem logical to suggest that
social validation data may be a further means of evidence for making an informed choice.
The acceptableness of the procedures could also be considered before, during, and after the
intervention. Knowing the procedures that will be employed, and whether they are
appropriate to the individual, is important prior to starting the intervention, as changes
can be made to the intervention if any features within it are highlighted as problematic.
Practitioners can also be flexible with the delivery of the intervention; therefore, assessing
the acceptableness of the procedures throughout the intervention could inform any changes
that need to be made. Satisfaction with the results of the intervention can also be assessed
throughout the intervention, which would also have the ability to influence future intervention
sessions.
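As a simple illustration of this recommendation, social validation probes could be planned alongside the phases of a single-case design rather than appended at the end; the phase labels and probe wordings below are hypothetical examples only.

```python
# Illustrative sketch only: planning social validation probes across the phases
# of a single-case design, rather than only after the intervention.
schedule = {
    "baseline": [
        "Are the target variables (goals) important to you and to significant others?",
    ],
    "intervention": [
        "Are the procedures acceptable and workable for you?",
        "How satisfied are you with progress so far?",
    ],
    "post-intervention / follow-up": [
        "How satisfied are you with the overall results?",
        "How important are the changes that have occurred?",
    ],
}

for phase, probes in schedule.items():
    for probe in probes:
        print(f"[{phase}] {probe}")
```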
In conclusion, social validation is a useful tool for understanding single-case data because
it is alleged to tie the intervention effects to the social context, guide research, and support
services (Storey & Horner, 1991), validate measures, and alleviate concerns with regards to
biased recording (Goldstein, 2002). However, to maximize the usefulness of these data it
would appear that social validation information should be collected, analyzed and reported
using systematic methods. The key points to consider relate to the questions being asked, who
is being asked, when they are asked, how the data are going to be analyzed, and how such
data may be used to support other dependent variable data. Future research should concentrate
on maximizing the value of these data and make stronger links between changes in depen-
dent variables and the perceptions of social validity from those involved in the intervention
process.
REFERENCES
Anderson, A. G., Miles, A., Mahoney, C., & Robinson, P. (2002). Evaluating the effectiveness of applied
sport psychology practice: Making the case for a case study approach. The Sport Psychologist, 16,
433–454.
Anderson, A. G., Miles, A., Mahoney, C., & Robinson, P. (2004). Evaluating the athlete’s perception
of the sport psychologist’s effectiveness: What should we be assessing? Psychology of Sport and
Exercise, 5, 255–277.
Bandura, A. (1986). Social foundations of thought and action: A social-cognitive theory. Englewood
Cliffs, NJ: Prentice-Hall.
Barker, J. B., & Jones, M.V. (2006). Using hypnosis, technique refinement and self-modelling to enhance
self-efficacy: A case study in cricket. The Sport Psychologist, 20, 94–110.
Barker, J. B., & Jones, M.V. (2008). The effects of hypnosis on self-efficacy, affect, and soccer perfor-
mance: A case study. Journal of Clinical Sport Psychology, 2, 127–147.
Busse, R.T., Kratochwill, T. R., & Elliott, S. N. (1995). Meta-analysis for single-case consulta-
tion outcomes: Applications to research and practice. Journal of School Psychology, 33, 269–
285.
Dempsey, T., & Matson, J. L. (2009). General methods of treatment. In J.L. Matson (Ed.), Social behavior
and skills in children (pp. 77–96). New York: Springer.
Dunn, J. G. H. (1994). Toward the combined use of nomothetic and idiographic methodologies in sport
psychology: An empirical example. The Sport Psychologist, 8, 376–392.
Franck, I. (1986). Psychology as a science: Resolving the idiographic-nomothetic controversy. In J.
Valsiner (Ed.), The individual subject and scientific psychology (pp. 17–36). New York: Plenum
Press.
Godin, G., Valois, P., & Lepage, L. (1993). The pattern of influence of perceived behavioral control upon
exercising behavior: An application of Ajzen’s theory of planned behavior. Journal of Behavioral
Medicine, 16, 81–102.
Goldstein, H. (2002). Communication intervention for children with autism: A review of treatment
efficacy. Journal of Autism and Developmental Disorders, 32, 373–396.
Hanton, S., & Jones, G. (1999). The effects of a multimodal intervention program on performers: II.
Training the butterflies to fly in formation. The Sport Psychologist, 13, 22–41.
Hardy, L., & Jones, G. (1994). Current issues and future directions for performance-related research in
sport psychology. Journal of Sports Sciences, 61–92.
Hrycaiko, D. W., & Martin, G. L. (1996). Applied research studies with single-subject designs: Why so
few? Journal of Applied Sport Psychology, 8, 183–199.
Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful
change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12–19.
Kazdin, A. (1982). Single-case experimental designs. In P. C. Kendall & J. N. Butcher (Eds.), Handbook
of research methods in clinical psychology (pp. 461–490). New York: Wiley.
Kendall, G., Hrycaiko, D., Martin, G. L., & Kendall, T. (1990). The effects of an imagery rehearsal,
relaxation, and self-talk package on basketball game performance. Journal of Sport and Exercise
Psychology, 12, 157–166.
Lindsay, P., Maynard, I., & Thomas, O. (2005). Effects of hypnosis on flow states and cycling performance.
The Sport Psychologist, 19, 164–177.
Marlow, C., Bull, S., Heath, B., & Shambrook, C. (1998). The use of a single case design to investigate
the effect of a pre-performance routine on the water polo penalty shot. Journal of Science and
Medicine in Sport, 1, 143–155.
Martin, G., & Hrycaiko, D. (1983). Effective behavioral coaching: What’s it all about? Journal of Sport
Psychology, 5, 8–20.
Martin, G., Thompson, K., & Regehr, K. (2004). Studies using single-subject designs in sport psychology:
30 years of research. The Behavior Analyst, 27, 263–280.
Mellalieu, S. D., Hanton, S., & O’Brien, M. (2006). The effects of goal setting on rugby performance.
Journal of Applied Behavior Analysis, 39, 257–261.
Mellalieu, S. D., Hanton, S., & Thomas, O. (2009). The effects of a motivational general arousal imagery
intervention upon preperformance symptoms in male rugby union players. Psychology of Sport and
Exercise, 10, 175–185.
Milne, D. (Ed.). (1987). Evaluating mental health practice. Worcester, UK: Billin & Sons.
Pates, J. K., Cummings, A., & Maynard, I. (2002). The effects of hypnosis on flow states and three-point
shooting performance in basketball players. The Sport Psychologist, 16, 34–47.
Pates, J. K., & Maynard, I. (2000). Effects of hypnosis on flow states and golf performance. Perceptual
and Motor Skills, 91, 1057–1075.
Robinson, J. P., Rush, J. G., & Head, K. B. (1964). Criteria for an attitude scale. In G. M. Maranell (Ed.),
Scaling: A sourcebook for behavioural scientists (pp. 244–257). Chicago: Aldine.
Rosenstock, I. M., Strecher, V. J., & Becker, M. J. (1988). Social learning theory and the health belief
model. Health Education Quarterly, 15, 175–183.
Schwartz, N., & Baer, D. M. (1991). Social validity assessments: Is current practice state of the art?
Journal of Applied Behavior Analysis, 24, 189–204.
Seligman, M. E. P. (1995). The effectiveness of psychotherapy: The consumer reports study. American
Psychologist, 50, 965–974.
Shambrook, C. J., & Bull, S. J. (1996). The use of single-case research design to investigate the efficacy
of imagery training. Journal of Applied Sport Psychology, 8, 27–43.
Shaw, M. E., & Wright, J. M. (1967). Scales for the measurement of attitude. New York: McGraw-Hill.
Smith, R. E. (1989). Applied sport psychology in an age of accountability. Journal of Applied Sport
Psychology, 1, 166–180.
Storey, K., & Horner, R. H. (1991). An evaluative review of social validation research involving persons
with handicaps. Journal of Special Education, 25, 352–401.
Strean, W. (1998). Possibilities for qualitative research in sport psychology. The Sport Psychologist, 12,
333–345.
Swain, A., & Jones, G. (1995). Effects of goal-setting interventions on selected basketball skills: A
single-subject design. Research Quarterly for Exercise and Sport, 66, 51–63.
Templin, D. P., & Vernacchia, R.A. (1995). The effect of highlight music videotapes upon the game
performance of intercollegiate basketball players. The Sport Psychologist, 9, 41–50.
Thelwell, R. C., & Greenlees, I. A. (2001). The effects of a mental skills training package on gymnasium
triathlon performance. The Sport Psychologist, 15, 127–141.
Thelwell, R. C., Greenlees, I. A., & Weston, N. J. V. (2006). Using psychological skills training to develop
soccer performance. Journal of Applied Sport Psychology, 18, 254–270.
Thelwell, R. C., & Maynard, I. W. (2003). The effects of a mental skills package on ‘repeatable good
performance’ in cricketers. Psychology of Sport and Exercise, 4, 377–396.
Turner, M., & Barker, J. (2013/this issue). Examining the efficacy of Rational-Emotive Behavior Ther-
apy (REBT) on irrational beliefs and anxiety in elite youth cricketers. Journal of Applied Sport
Psychology, 25, 132–148.
Wolf, M. W. (1978). Social validity: The case for subjective measurement or how applied behavior
analysis is finding its heart. Journal of Applied Behavior Analysis, 11, 203–214.