The Knowledge, Attitudes, & Behaviors Approach:
How to Evaluate Performance and Learning in Complex Environments

by P.G. Schrader, PhD, and Kimberly A. Lawless, PhD

Performance Improvement • Volume 43 • Number 9 • October 2004 • www.ispi.org

In the fields of education and performance improvement, we find ourselves in the
business of facilitating development and change. The methods practitioners select
are often simplistic: engage students with an instructional intervention and then
assess the change that results. To do this, we typically employ self-constructed
knowledge tests or essays but may also use other forms of assessment, such as rubrics
or performance-based measures. Educational researchers often adopt slightly more
complex methods, such as quasi-experimental or randomized trials. They also use
different measures, such as IQ, self-efficacy, or other psychometrically sound
instruments, and then apply the same basic principle of measuring change.
Unfortunately, these visions of change are often too narrow, focusing only on
academic or knowledge-based development. While our understanding of knowledge and
our ability to assess it have improved over time, there are many instructional
contexts in which knowledge is only a portion of the specified learning objectives.
The notion that the complex aspects of learning correspond with more than one out-
come measure is not a new idea. Bloom began developing a taxonomy of instructional
objectives in three domains—the cognitive, affective, and psychomotor—as early as
1956 (Bloom, 1976; Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956; Bloom, Hastings, &
Madaus, 1971). Research has not only confirmed the importance of these constructs as
outcomes of learning but describes a relationship among the cognitive, affective, and
behavioral dimensions as well (Woolfolk, 1998). Further, Alexander (2003) has found
strong ties between the cognitive and affective attributes of the learner and their impact
on the acquisition and comprehension of information. Ajzen and Fishbein (1977) report
that while it is not the sole indicator, attitude is a factor in determining behavior, and
Kim and Hunter (1993) add that the higher the attitudinal relevance, the stronger the
relation between attitude and behavior.
With these arguments in mind, a growing body of researchers from different areas has
ventured to adapt Bloom's taxonomy of instructional objectives into a multi-construct
approach to assessment that evaluates not only knowledge, but attitude and behavioral
change as well (Bruvold, 1990; Byrd-Bredbenner, O'Connell, & Shannon, 1982; Coyle,
Basen-Engquist, Kirby, Parcel, Banspach, & Harrist, 1999; Donovan & Singh, 1999;
Heppner, Humphrey, Hillenbrand-Gunn, & DeBord, 1995;
Kapoor, 1989; Kirby, 1985; Lawless, Brown, & Cartter, 1997;
Looker & Shannon, 1984; Miller, Booraem, Flowers, &
Iversen, 1990). The most common examples of the knowledge,
attitude, and behavior (KAB) approach arise from the
medical literature examining a range of areas from primary
care to AIDS prevention (e.g., Miller et al., 1990).
In general, these researchers have all espoused an assessment
approach that seeks to measure not only knowledge gains,
but the heightening of learner attitudes and the impact of
knowledge and attitude on behavioral change. As researchers
have discovered, however, the assessment of each of these
constructs is not without challenge. Issues as simple as defining
each construct constitutively and as complex as reliably
capturing quantitative data as indicators of the constructs
have confounded much of what we know about these constructs.
In the following sections of this paper, we delineate
many of these issues, provide some consistency in terms of
how we communicate about these constructs, and provide
guidance from the literature regarding methods of best practice
when using these constructs to portray a model of
outcome-based change from instructional interventions.
Knowledge

With respect to Bloom's taxonomy, the cognitive domain of
learning is concerned with knowledge and understanding.
Within a domain, knowledge embodies all information that a
person possesses or accrues related to a particular field of
study (Alexander & Jetton, 2000; Alexander, Jetton, &
Kulikowich, 1995). Knowledge is generally defined as comprising
three forms: (1) declarative, or knowing what; (2) procedural,
or knowing how; and (3) conditional, or knowing
when and why. For example, in biology, being able to
define the word mitosis demonstrates one's
declarative knowledge. Comparatively, knowing how to use a
microscope to identify slides that depict various phases of
mitosis would exemplify procedural knowledge. Final
arrangement of the slides in terms of order for cell division is
an example that demonstrates conditional knowledge, that is,
knowing when one phase ends and another begins.
The influence of domain knowledge on learning has been
illustrated in a number of fields, including biology, history,
psychology, and kinesiology (Alexander, 2003). It has been
found to be a strong predictor of new information acquisition
from a variety of instructional contexts, such as textbooks,
the Internet, and problem-solving environments (Alexander,
2003; Chen, Shen, Scrabis, & Tolley, 2002; Lawless, Brown,
Mills, & Mayall, 2003) and has been consistently related to
competence when processing new information from a
related domain in a strategic and efficient manner
(Alexander, 1992; Alexander & Judy, 1988).
There is also a substantial body of research indicating that
knowledge provides more than just a simple foundation for
the acquisition of new knowledge (e.g., Halford, 1993).
Knowledge directs an individual’s attention to either dis-
count or to focus on particular environmental elements
(e.g., Ericsson, Patel, & Kintsch, 2000; Marshall, 1995). It
also allows people to make inferences and therefore colors
the perceived meaningfulness of new information (Gagne,
Yekovich, & Yekovich, 1993; Marshall, 1995).
In general, knowledge has been a pesky construct for
researchers to capture reliably and validly. Perhaps this is
why we find such a varied set of approaches for its mea-
surement. Some researchers have taken the approach of
simply asking a learner to self-report how much knowledge
he or she has on a given topic or within a given domain (e.g.,
Mautone & Mayer, 2001). While this is an easy and often
cost-effective way to measure knowledge, research has indi-
cated that self-report data of any kind are problematic in
two main areas: participants' ability to respond with
accurate information and their potential to provide
intentionally false or non-representative information
(Kuh, 2002). What is actually being measured is not knowl-
edge but instead a person’s confidence within a particular
topic area or domain (Lawless, Kulikowich, & Smith, 2002).
In particular, these data reflect one’s perceptions of truth
without any actual index such as measured knowledge,
observed action, or other indicator.
However, under certain conditions, self-report data have
been shown to be both reliable and valid (Baird, 1976; Pace,
1984; Pohlmann, 1974). Kuh, Carini, and Klein (2004) sug-
gest that survey question items should target information
that is known to the respondents. The items should also be
clearly written and free of any ambiguity. In all cases, the
items should merit serious and thoughtful responses.
Generally, responses should not threaten, embarrass, or vio-
late the privacy of the respondent or encourage participants
to respond in socially desirable ways. In the case of behav-
ioral self-reports, items should pertain to events that have
taken place during a fixed time frame or with a specific
frame of reference. Researchers using self-report scales are
encouraged to review Brandt (1958), DeNisi and Shaw
(1977), and Kuh (2002) for a more detailed review.
Given the issues with self-report data, many researchers
have attempted to develop more objective tests for knowl-
edge that include multiple choice or other similar forced
choice item formats. By and large, these tests are at best dif-
ficult to construct appropriately. Common psychometric
issues with these measures include insufficient item sam-
pling from the domain, poor item or distractor construction,
and lack of linkages between the items and the instructional
content (Tobias, 1994; Alexander et al., 1995).
More recently, researchers have begun to explore alternative
assessment formats for knowledge, including concept map-
ping, portfolio development, and performance-based mea-
sures. For example, Schrader, Leu, Kinzer, Ataya, Teale, and
Labbo (2003) used a concept-mapping strategy to assess
domain knowledge relative to early literacy instruction.
Participants were asked to complete a conceptual map asso-
ciated with a particular domain, and those concept maps
were evaluated. In another example, Baume and Yorke
(2002) used portfolios to assess teachers’ development in
higher education. Baume and Yorke emphasize the impor-
tance of attaining reliable assessment. Although these
assessment tools are becoming more widely accepted, they
are relatively new. As such, the methodological issues asso-
ciated with assessing knowledge in this way are still becom-
ing clear. Figure 1 lists several KAB studies and the manner
in which they assessed knowledge, attitudes, and behaviors.
Even though knowledge has proven to be such an elusive
construct to measure well, we are in the business of education.
This means knowledge increases will always be a prin-
cipal outcome variable for us. Thus, the selection of a scale
format and the construction of its contents should be a rig-
orous endeavor, involving multiple iterations and pilot
tests. If we cannot be assured that we are measuring what
we purport to measure and do so consistently across and
within subjects, then we can make no claims about the
effectiveness of our interventions. For individuals inter-
ested in learning more in-depth approaches to creating
sound knowledge measures, see Cangelosi (1990), Ebel
(1965), and Payne (1992).
Attitude

Similar to knowledge, the concept of attitude has multiple
meanings to researchers. Historically, the literature reveals
two separate frameworks in which attitude is defined:
behavioral and cognitive (see Ajzen & Fishbein, 1977, for a
review). Allport (1967) and LaPiere (1967) define attitude in
a behavioral sense, as a mental and neural state of readiness
conditioned by stimuli directing an individual’s response to
all objects with which it is related. In contrast, Thurstone
takes the position that an “attitude is the affect for or against
a psychological object” (1931, p. 261) rather than a behav-
ioral object as others suggested. Thurstone (1967) adds that
attitudes are also subjective because they are viewed as the
sum or aggregate of all feelings and dispositions toward a
particular concept, idea, or action.
More contemporary psychologists have further expanded the
understanding and definition of attitude (Ajzen, 1993;
Albert, Aschenbrenner, & Schmalhofer, 1989; Eagly &
Chaiken, 1993; Erwin, 2001; Gable & Wolf, 1993) to include
three components: cognitive, affective, and conative. The
cognitive component is a belief or idea associated with a particular
psychological object. The affective component represents
the individual's evaluation of the psychological object
as well as the emotion associated with that object. The
conative—or behavioral—component represents the overt
action or predisposition toward action directed toward that
object. Though perspectives may vary, commonalities among
the viewpoints regarding attitudes are evident. The affective
domain specified by Bloom (1976) represents the emotions
and feelings attached to a particular action or thought that
are related to behaviors. Ajzen also states, "although formal
definitions of attitude vary, most contemporary theorists
agree that the characteristic attribute of attitude is its evaluative
(pro-con, positive-negative) dimension" (1993, p. 41).
As a result, most assessment and scaling techniques (see
Gable & Wolf, 1993) result in a score that locates the individual
on an evaluative continuum. From a cognitive perspective,
researchers use this scaling technique to assess the
affective domain in Bloom's taxonomies.

Figure 1. KAB Studies and Their Measures.
The measurement of attitudes is not without concern, how-
ever. Seeman (1993) indicates a problem with the multidi-
mensionality of attitude measurement. He describes issues
associated with isolating strong and stable factors and
admits the difficulty in predicting behavior from attitudes
alone. This may be due to the fact that an attitude is situated
in both the environment and the individual (Seeman, 1993).
As such, the attitude is a function of the situation in which
it occurs. This can become very problematic for researchers
in complex environments like the schools and businesses
Gable and Wolf (1993) investigate. Seeman addresses this
concern by advising researchers to attend to the environ-
ment as best as possible, ensuring consistency between
measures and rigorous test construction.
Examples of attitude assessment in KAB research typically
focus on some form of self-report data (Schrader, 2003).
Several researchers have simply used Likert-type scales to
measure attitudes relating to a topic (e.g., Heppner et al.,
1995; Miller et al., 1990). Other researchers have used
semantic differentials to assess attitudes (Morrison, Baker, &
Gillmore, 1994). In addition to the scale, researchers must
also decide how to evaluate the scores. For example, Miller
et al. (1990) calculated a unified score for the attitude scale
in research on community-based AIDS prevention.
However, due to the multidimensionality of his attitude scale,
Schrader (2003) was compelled to evaluate factor variables.
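The scoring decision described above (a single unified score versus separate factor scores) can be illustrated with a short sketch. The code below computes a unified score from hypothetical 5-point Likert items, including the reverse-coding of negatively worded items; the item names and responses are invented for illustration and are not drawn from the cited studies.

```python
# Illustrative sketch: unified scoring of a 5-point Likert attitude scale.
# Item identifiers and responses are hypothetical.

def score_likert(responses, reverse_keyed=(), points=5):
    """Return a unified attitude score (the mean across items).

    responses: dict mapping item id -> response (1..points)
    reverse_keyed: ids of negatively worded items, whose responses
                   are flipped before averaging.
    """
    total = 0
    for item, value in responses.items():
        if item in reverse_keyed:
            value = points + 1 - value  # flip 1<->5, 2<->4, etc.
        total += value
    return total / len(responses)

respondent = {"a1": 4, "a2": 5, "a3": 2, "a4": 4}  # a3 is negatively worded
print(score_likert(respondent, reverse_keyed={"a3"}))  # → 4.25
```

A factor-based evaluation, as in Schrader (2003), would instead average each factor's items separately rather than collapsing the scale into one number.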
Issues with evaluating attitudes are both compelling and
frustrating. For this reason, researchers are encouraged to
review the work of researchers such as Gable and Wolf
(1993), who offer a comprehensive review of developing
scales in the affective domain.
Behavior

Most psychologists agree that a behavior is an observable
action. Researchers use the constitutive definition: the way
in which a person, organism, or group responds to a certain
set of conditions. Although this understanding is simple,
researchers have operationally defined a multitude of
assessment techniques to record and measure behavior.
Researchers have applied direct measurement techniques,
such as recording the frequency of behaviors during a set
time, but they have also used less direct methods, like interviews
with peers or close friends, to understand the dynamics
of a participant's behavior. Although less direct,
participant reflection on behaviors through some form of
self-report, such as a journal or survey, is used as well.
While most of the behavioral data collected have been in
some form of self-report surveys or frequency reports (e.g.,
Donovan & Singh, 1999; Lawless et al., 1997; Schrader,
2003), there are important examples of other measurement
strategies. In particular, researchers can use diaries and logs
(Byrd-Bredbenner et al., 1982), direct observation (Kapoor,
1989), or participant interviews (Heppner et al., 1995). In
addition to these methods, there are other, less frequently
used techniques to collect behavioral data. These include
video logs; interviews with family, friends, and peers; and
direct outcome measures, such as pregnancies and
births as indicators of sexual conduct (see Kirby, 1985). In
most cases, the research conditions direct the assessment
strategies. See Figure 1 for a list of methods researchers have
used to assess behavior.
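The direct measurement technique described above—recording the frequency of a target behavior during a set time—can be sketched in a few lines. The behavior codes and timestamps below are hypothetical, standing in for an observer's coded log.

```python
# Illustrative sketch of direct behavioral measurement: counting how
# often a coded target behavior occurs within a fixed observation
# window. The log entries (minute, behavior code) are hypothetical.

def frequency_in_window(log, behavior, start, end):
    """Count occurrences of `behavior` with start <= time < end."""
    return sum(1 for t, b in log if b == behavior and start <= t < end)

log = [(2, "on_task"), (5, "question"), (7, "on_task"),
       (12, "on_task"), (15, "question")]
print(frequency_in_window(log, "on_task", 0, 10))  # → 2
```

Fixing the window in advance matters for the same reason noted earlier for behavioral self-reports: frequencies are only comparable across participants when they refer to the same time frame.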
Interaction of Knowledge, Attitudes, and Behaviors
One of the justifications for integrating the multiple con-
struct assessment methodology is the unclear relationship
between knowledge and behavior (Valente, Paredes, & Polle,
1998). Researchers have debated the directionality of the
relationship as well as the actual presence of the relation-
ship (Ajzen, Timko, & White, 1982; Fazio, 1986; Fishbein,
1967). More recent research indicates that the relationship
is considerably more complex, indicating that it is poten-
tially reciprocal and dynamic (Alexander & Dochy, 1995;
Ajzen & Fishbein, 1977; Bruvold, 1990; Kim & Hunter, 1993;
Kirby, 1985). From one perspective, what an individual
knows may inform his or her attitude about that topic, and
how he or she feels about that topic may influence behavior.
Alternatively, as some have previously indicated, attitudes
can also be aligned with behavior, indicating that behaviors
can inform attitudes (Fishbein, 1967), and attitudes are
influential in attention (Hoffman, 1986). Thus, attitudes can
impact what an individual perceives and therefore impacts
knowledge gains. Furthermore, knowledge—or attitude, for
that matter—is not necessarily a strong predictor of behav-
ior alone (Ajzen & Fishbein, 1977; Beavers, Kelley, &
Flenner, 1982). Taking all these arguments into account, one
may conclude that the relationship between these three
dimensions—knowledge, attitude, and behavior—is
dynamic and sometimes reciprocal. It is therefore beneficial
and prudent to conduct research of this sort from the per-
spective that these three dimensions can and do interact.
Designing and Conducting KAB Research
Researchers are trained to follow several specific steps
when conducting research. In this sense, KAB research is
the same as other research methods. However, there are a
few vital areas, outlined in Figure 2, where the KAB
methodology demands additional attention and rigor. Once
the research questions have been identified, KAB research
efforts have typically implemented some form of pretest/
post-test research design (Byrd-Bredbenner et al., 1982;
Kapoor, 1989). In many cases, researchers collect data at
multiple points in time (Coyle et al., 1999; Heppner et al.,
1995; Looker & Shannon, 1984). Due to the nature of mea-
suring change, a pretest/post-test research design is highly
recommended (see Campbell & Stanley, 1963). In the event
that a true experimental design is not feasible and equating
the groups is desirable, experts recommend a covariate that
does not share a great deal of variance with the KAB measures,
such as grade point average (GPA), job performance rating,
and so on.
As indicated earlier, the three dimensions in the KAB
methodology are assessed in various ways. Knowledge, for
example, has been evaluated using a range of techniques,
including content measures, conceptual mapping exercises,
or self-report scales. Although assessment of the attitude
and behavior dimensions is generally not as complex,
researchers also have a number of measurement strategies.
As a result, determining the assessment strategy, which is
heavily influenced by the research constraints and objec-
tives, is a crucial step in KAB research.
Regardless of the assessment strategy selected, newly constructed
KAB measures must be psychometrically sound.
Although validation strategies also vary, it is evident that
pilot testing and other practices are extremely important at
this stage. The validation and pilot process for research
involving first-year college students and their academic and
social adjustment, for example, was conducted over the
course of several years (Schrader, Ataya, & Brown, 2000;
Schrader, Brown, & Ouimette, 2002). As indicated, the KAB
methodology evaluates multiple cognitive constructs.
Consequently, the instruments are often more complex than
other measures. Due to this complexity and the desire for
valid inferences from the results, excellent psychometric
properties are a crucial component of KAB research.
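One concrete pilot-stage check on the psychometric soundness discussed above is internal consistency, commonly summarized with Cronbach's alpha. The sketch below computes alpha from hypothetical item-level responses; it illustrates only one of the many validation practices the paragraph alludes to and is not a procedure taken from the cited studies.

```python
# Minimal sketch: Cronbach's alpha for internal consistency,
# alpha = k/(k-1) * (1 - sum(item variances) / variance(totals)).
# The response data are hypothetical.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item."""
    k = len(item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # per-respondent totals
    item_var = sum(pvariance(col) for col in item_scores)
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

items = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]  # 3 items x 4 respondents
print(round(cronbach_alpha(items), 2))  # → 0.82
```

In practice, low alpha at the pilot stage usually prompts item revision or deletion before the instrument is used to make claims about change.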
With the exceptions already noted, KAB methods follow the
same logical research steps. Once the idiosyncrasies have
been addressed, one is prepared to administer a KAB instru-
ment, analyze the data, and evaluate the results. This is
done following standard research methods. This work has
little to add in that area.

Figure 2. Conducting KAB Research

In general, however, the literature
on KAB investigations has revealed several examples of
statistical analyses used in pretest and post-test designs.
Schrader (2003) describes the application of a MANCOVA
design in detail, while Coyle et al. (1999) refer to a multi-
level modeling procedure. Researchers are encouraged to
review statistical practices that adequately address issues of
change and the unique issues associated with KAB research
(see Stevens, 1996, and Tabachnick & Fidell, 1996, for a
review of multivariate methods).
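For readers who want the simplest numeric picture of pretest/post-test change, the sketch below computes a paired t statistic on hypothetical knowledge scores. The studies cited above used more appropriate multivariate procedures (e.g., MANCOVA or multilevel modeling); this example shows only the underlying logic of analyzing change scores.

```python
# Hypothetical sketch of the simplest pretest/post-test change analysis:
# a paired t statistic on one construct. Scores are invented.
from math import sqrt
from statistics import mean, stdev

def paired_t(pre, post):
    """t = mean(differences) / standard error of the differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

pre  = [10, 12, 9, 14, 11]   # pretest knowledge scores
post = [13, 15, 10, 16, 14]  # post-test knowledge scores
print(round(paired_t(pre, post), 2))  # → 6.0
```

A full KAB analysis would test knowledge, attitude, and behavior jointly, which is why the multivariate texts cited above are the recommended starting point.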
Research has shown that knowledge instruction alone is a
poor agent for influencing changes in behavior (Bruvold,
1990; Morrison et al., 1994; Valente et al., 1998). Successful
outcomes of interventions in education and performance
improvement involve more than knowledge gains.
Furthermore, researchers are not only concerned with eval-
uating change but with predicting behavior. Unfortunately,
some interventions may influence behavior for a short time
without lasting effects (Lawless et al., 1997). Unless one
assesses the cognitive constructs associated with the inter-
vention, one is unable to offer a justifiable reason for this
phenomenon. Further, attitudes are typically heightened
immediately following an intervention but often dissipate
over time, negatively influencing the future likelihood of
performing a particular behavior. As a result, research sug-
gests that interventions and their evaluation should involve
all three domains. The growing body of KAB research sug-
gests that this design has profound potential as well as pro-
found utility in this area.
Although the KAB method is more complex than most, this
complexity affords a more comprehensive understanding of
the cognitive constructs associated with development and
change. This is particularly true of complex learning environments,
where traditional, simple evaluation strategies are
often inadequate. While each specific KAB evaluation is distinct,
research has shown the KAB method to be a reliable and
valid method to evaluate change as a result of interventions.
Studying learning is not a simple undertaking, but it is an
important one. There are many variables to consider and
many types of outcomes that can be examined. However, as
the fields of education and training continue to present
learners with environments that house an abundance of
materials and resources, it would seem very important to
explore the relationships among variables that can be used
to indicate complex human processing within these envi-
ronments. Researchers need to continue to identify these
constructs and develop methods that facilitate their exami-
nation. By doing so, researchers and designers alike will be
more capable of developing practical applications from
their findings for teaching and learning environments. The
KAB approach summarized in this article is one promising
method for examining such changes and isolating the out-
comes that will lead to instructional improvement in any
arena of human performance.
References

Ajzen, I. (1993). Attitude theory and the attitude-behavior relation.
In D. Krebs & P. Schmidt (Eds.), New directions in attitude
measurement (pp. 41-57). New York: Walter de Gruyter.
Ajzen, I., & Fishbein, M. (1977). Attitude-behavior rela-
tions: A theoretical analysis and review of empirical
research. Psychological Bulletin, 84(5), 888-918.
Ajzen, I., Timko, C., & White, J.B. (1982). Self-monitoring
and the attitude-behavior relation. Journal of Personality
and Social Psychology, 42(3), 426-435.
Albert, D., Aschenbrenner, K.M., & Schmalhofer, F. (1989).
Cognitive choice processes and the attitude-behavior rela-
tion. In A. Upmeyer (Ed.), Attitudes and behavioral dimen -
sions (pp. 61-99). New York: Springer-Verlag.
Alexander, P.A. (1992). Domain knowledge: Evolving
themes and emerging concerns. Educational Psychologist,
Alexander, P.A. (2003). The development of expertise: The
journey from acclimation to proficiency. Educational
Researcher, 32(8), 10-14.
Alexander, P.A., & Dochy, F.J.R.C. (1995). Conceptions of
knowledge and beliefs: A comparison across varying cul-
tural and educational communities. American Educational
Research Journal, 32(2), 413-442.
Alexander, P.A., & Jetton, T.L. (2000). Learning from text: A
multidimensional and developmental perspective. In M.L.
Kamil, P.B. Mosenthal, P.D. Pearson, & R. Barr (Eds.),
Handbook of reading research: Volume III (pp. 285-310).
Mahwah, NJ: Lawrence Erlbaum Associates.
Alexander, P.A., Jetton, T.L., & Kulikowich, J.M. (1995).
Interrelationship of knowledge, interest, and recall:
Assessing a model of domain learning. Journal of
Educational Psychology, 87, 559-575.
Alexander, P.A., & Judy, J.E. (1988). The interaction of
domain-specific and strategic knowledge in academic per-
formance. Review of Educational Research, 58, 375-404.
Allport, G.W. (1967). Attitudes. In M. Fishbein (Ed.),
Readings in attitude theory and measurement (pp. 1-13).
New York: John Wiley & Sons.
Baird, L.L. (1976). Biographical and educational correlates of
graduate and professional school admissions test scores.
Educational and Psychological Measurement, 36(2), 415-420.
Baume, D., & Yorke, M. (2002). The reliability of assess-
ment by portfolio on a course to develop and accredit teachers
in higher education. Studies in Higher Education, 27(1), 7-25.
Beavers, I., Kelley, M., & Flenner, J. (1982). Nutrition
knowledge, attitudes, and food purchasing practices of par-
ents. Home Economics Research Journal, 11(2), 134-142.
Bloom, B.S. (1976). Human characteristics and school
learning. New York: McGraw Hill.
Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H., &
Krathwohl, D.R. (1956). Taxonomy of educational objectives.
Handbook I: Cognitive domain. New York: David McKay.
Bloom, B.S., Hastings, J.T., & Madaus, G.F. (1971).
Handbook on formative and summative evaluation of stu -
dent learning. New York: McGraw Hill.
Brandt, R.M. (1958). The accuracy of self estimates.
Genetic Psychology Monographs, 58, 55-99.
Brown, S.W., & King, F.B. (2000). Constructivist pedagogy
and how we learn: Educational psychology meets interna-
tional studies. I n t e r national Studies Perspectives, 1, 245-254.
Bruvold, W.H. (1990). A meta-analysis of the California
school-based risk reduction program. Journal of Drug
Education, 20(2), 139-152.
Byrd-Bredbenner, C., O’Connell, L.H., & Shannon, B.
(1982). Junior high home economics curriculum: Its effect
on students' knowledge, attitude, and behavior. Home
Economics Research Journal, 11(2), 124-133.
Campbell, D.T., & Stanley, J.C. (1963). Experimental and
quasi-experimental designs for research. Chicago, IL: Rand McNally.
Cangelosi, J.S. (1990). Designing tests for evaluating stu -
dent achievement. White Plains, NY: Longman.
Chen, A., Shen, B., Scrabis, K.A., & Tolley, C. (2002).
Motivation effects of achievement goals and interests on
learning in physical education. Manuscript submitted for
publication.
Coyle, K., Basen-Engquist, K., Kirby, D., Parcel, G.,
Banspach, S., Harrist, R., et al. (1999). Short-term impact of
Safer Choices: A multicomponent, school-based HIV, other
STD, and pregnancy prevention program. Journal of School
Health, 69(5), 181-188.
DeNisi, A.S., & Shaw, J.B. (1977). Investigation of the uses
of self-reports of abilities. Journal of Applied Psychology,
Donovan, D.T., & Singh, S.N. (1999). Sun-safety behavior
among elementary school children: The role of knowledge,
social norms, and parental involvement. Psychological
Reports, 84, 831-836.
Eagly, A.H., & Chaiken, S. (1993). The psychology of atti -
tudes. New York: Harcourt Brace & Company.
Ebel, R.L. (1965). Measuring educational achievement.
Englewood Cliffs, NJ: Prentice-Hall.
Ericsson, K.A., Patel, V.L., & Kintsch, W. (2000). How
experts’ adaptations to representative task demands
account for the expertise effect in memory recall: Comment
on Vicente and Wang. Psychological Review, 107, 578-592.
Erwin, P. (2001). Attitudes and persuasion. Philadelphia:
Taylor & Francis Inc.
Fazio, R.H. (1986). How do attitudes guide behavior? In
R.M. Sorrentino and E.T. Higgins (Eds.), Handbook of
motivation and cognition (pp. 204-243). New Yo r k :
G u i l f o r d Pre s s .
Fishbein, M. (1967). Attitude and the prediction of behav-
ior. In M. Fishbein (Ed.), Readings in attitude theory and
measurement (pp. 477-492). New York: John Wiley & Sons.
Gable, R.K., & Wolf, M.B. (1993). Instrument development
in the affective domain: Measuring attitudes and values in
corporate and school settings (2nd ed.). Boston, MA:
Kluwer Academic Publishers.
Gagne, E.D., Yekovich, C.W., & Yekovich, F.R. (1993). The
cognitive psychology of school learning (2nd ed.). New
York: HarperCollins Publishers.
Halford, G.S. (1993). Children’s understanding: The devel -
opment of mental models. Hillsdale, NJ: Lawrence Erlbaum.
Heppner, M.J., Humphrey, C.F., Hillenbrand-Gunn, T.L., &
DeBord, K.A. (1995). The differential effects of rape pre-
vention programming on attitudes, behavior, and knowl-
edge. Journal of Counseling Psychology, 42(4), 508-518.
Hoffman, M.L. (1986). Affect, cognition, and motivation. In
R.M. Sorrentino and E.T. Higgins (Eds.), Handbook of
motivation and cognition (pp. 244-280). New Yo r k :
G u i l f o rd Pre s s .
Kapoor, S.A. (1989). Help for the significant others of bulimics.
Journal of Applied Social Psychology, 19(1), 50-66.
Kim, M., & Hunter, J.E. (1993). Attitude-behavior relations:
A meta-analysis of attitudinal relevance and topic. Journal
of Communication, 43(1), 101-141.
Kirby, D. (1985). Sexuality education: A more realistic view
of its effects. Journal of School Health, 55(10), 421-424.
Kuh, G.D. (2002). The national survey of student engagement:
Conceptual framework and overview of psychometric
properties [Online]. National Survey of Student
Engagement: The College Student Report. Available at:
Kuh, G.D., Carini, R.M., & Klein, S.P. (2004). Student
engagement and student learning: Insights from a construct
validation study. Paper presented at the annual
meeting of the American Educational Research
Association, April 2004, San Diego, CA.
LaPiere, R.T. (1967). Attitude versus actions. In M.
Fishbein (Ed.), Readings in attitude theory and measure -
ment (pp. 26-31). New York: John Wiley & Sons.
Lawless, K.A., Brown, S.W., & Cartter, M. (1997). Applying
educational psychology and instructional technology to
health care issues: Combating Lyme disease. International
Journal of Instructional Media, 24(2), 287-297.
Lawless, K.A., Brown, S.W., Mills, R.J., & Mayall, H.J.
(2003). Knowledge, interest, recall and navigation: A look
at hypertext processing. Journal of Literacy Research,
Lawless, K.A., Gerber, B., & Smolin, L. (2004, April).
Comparison of multimedia navigational strategies of first
and second language learners. Paper presented at the
annual meeting of the American Educational Research
Association, San Diego, CA.
Lawless, K.A., Kulikowich, J.M., & Smith, E.V. (2002,
April). Examining the relationships among knowledge and
interest and perceived knowledge and interest. Paper pre-
sented at the annual meeting of the American Educational
Research Association, New Orleans, LA.
Looker, A., & Shannon, B. (1984). Threat vs. benefit
appeals: Effectiveness in adult nutrition education. Journal
of Nutrition Education, 16(4), 173-176.
Marshall, S.P. (1995). Schemas in problem solving.
Cambridge: Cambridge University Press.
Mautone, P.D., & Mayer, R.E. (2001). Signaling as a cognitive
guide in multimedia learning. Journal of Educational Psychology.
Miller, T.E., Booraem, C., Flowers, J.V., & Iversen, A.E.
(1990). Changes in knowledge, attitudes, and behavior as a
result of a community-based AIDS prevention program.
AIDS Education and Prevention, 2(1), 12-23.
Morrison, D.M., Baker, S.A., & Gillmore, M.R. (1994).
Sexual risk behavior, knowledge, and condom use among
adolescents in juvenile detention. Journal of Youth and
Adolescence, 23(2), 271-288.
Pace, C.R. (1984). Measuring the quality of college student
experiences. Los Angeles: University of California, Higher
Education Research Institute.
Payne, D.A. (1992). Measuring and evaluating educational
outcomes. New York: Merrill.
Pohlmann, J.T. (1974). A description of effective college
teaching in five disciplines as measured by student ratings.
Research in Higher Education, 4(4), 335-346.
Schrader, P.G. (2003). Knowledge, attitudes, and behaviors of college freshmen: An exploratory study. Unpublished doctoral dissertation.
Schrader, P.G., Ataya, R., & Brown, S.W. (2000, October).
Freshman college students: Knowledge, attitudes and
behaviors of academic and life skills. Paper presented at
the Northeastern Educational Research Association confer-
ence, Ellenville, NY.
Schrader, P.G., Brown, S.W., & Ouimette, D. (2002, October). Freshman year experience: Measuring incoming life skills. Paper presented at the annual meeting of the Northeastern Educational Research Association, Kerhonkson, NY.
Schrader, P.G., Leu, D.J., Kinzer, C.K., Ataya, R., Teale, W.H., Labbo, L.D., et al. (2003). The effects of using multimedia case-based instruction, delivered over the Internet, on preservice teacher education students' understanding of effective K-3 reading instruction: An exploratory study. Instructional Science, 31, 317-340.
Seeman, M. (1993). A historical perspective on attitude research. In D. Krebs & P. Schmidt (Eds.), New directions in attitude measurement (pp. 3-20). New York: Walter de Gruyter.
Stevens, J. (1996). Applied multivariate statistics for the social sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum.
Tabachnick, B.G., & Fidell, L.S. (1996). Using multivariate
statistics (3rd ed.). New York: Harper Collins.
Thurstone, L.L. (1931). The measurement of social attitudes. Journal of Abnormal and Social Psychology, 26, 249-269.
Thurstone, L.L. (1967). Attitudes can be measured. In M. Fishbein (Ed.), Readings in attitude theory and measurement (pp. 77-89). New York: John Wiley & Sons.
Tobias, S. (1994). Interest, prior knowledge, and learning.
Review of Educational Research, 64, 37-54.
Valente, T.W., Paredes, P., & Poppe, P.R. (1998). Matching
the message to the process: Relative ordering of knowl-
edge, attitudes, and practices in behavior change research.
Human Communication Research, 24(3), 366-385.
Woolfolk, A.E. (1998). Educational psychology (7th ed.).
Boston, MA: Allyn and Bacon.
Dr. P.G. Schrader is an Assistant Professor of Educational Multimedia in the
College of Education and Integrative Studies at Cal Poly, Pomona. Prior to
working in education, Dr. Schrader was a swimmer and ranked among the top
100 backstrokers in the world. After a successful athletic career, Dr. Schrader
pursued his degree in Educational Psychology. During that time, he instructed
students of all ages in the areas of mathematics, educational psychology, and
technology. Dr. Schrader has received awards honoring his commitment to academics, the community, and higher education in general. His dissertation
focused on the manner in which newly matriculated students adjust to the col-
lege environment, particularly with respect to electronic and technological
resources. Dr. Schrader’s current work emphasizes the importance of learning
theory in educational multimedia. He has recently published work on Internet-delivered case-based instruction in Instructional Science. He has also been
published in the areas of instrument development, multimedia, distance learn-
ing, and games in education and has presented at more than 20 national and
regional conferences. Dr. Schrader may be reached at firstname.lastname@example.org.
Dr. Kimberly Lawless is an Associate Professor of Educational Technology
in the department of Curriculum, Instruction and Evaluation at the University of
Illinois, Chicago. Her research focuses on the comprehension of digital text
and teacher beliefs about the effectiveness of technology in the classroom.
She has published more than 60 articles and book chapters in the areas of
educational technology, instructional science, and reading. Currently, Dr. Lawless is the lead principal investigator and project director for Project TITUS,
funded by the Department of Education’s Preparing Tomorrow’s Teachers to
Use Technology program, and is the university partner for Chicago Public
School’s No Child Left Behind initiative. In addition, Dr. Lawless serves on the
editorial review boards for several professional journals, including the
International Journal of Instructional Media and the Journal of Research on
Computers in Education. Most recently, she has served as the co-guest editor
for a special issue of Instructional Science focusing on innovations in web-
based education. Dr. Lawless served as the program chair for the educational
technology research section of the American Educational Research
Association for 2004 and was recently awarded the Outstanding Young Alumni
Researcher Award from the University of Connecticut and the AACTE Best
Practice Award for Technology in Teacher Education. She may be reached at
Performance Improvement • Volume 43 • Number 9