ORIGINAL PAPER
Investigating efficacy expectancy as criterion
for comparison of teacher- versus student-regulated
learning in higher education
Cornelis J. de Brabander · Jeroen S. Rozendaal · Rob L. Martens
Received: 26 September 2007 / Accepted: 20 March 2008 / Published online: 2 September 2009
© The Author(s) 2009. This article is published with open access at Springerlink.com
Abstract In this study, we assessed the feasibility of a specific elaboration of the efficacy
construct, distinguishing between personal and contextual aspects, as a criterion for
comparing learning environments. The participants were 163 students from two student-
regulated and two teacher-regulated programs in higher education. We measured students’
perceptions of autonomy and various aspects of perceptions of efficacy in common
learning tasks. Using principal components analyses, we assessed the structure of all the
relevant variables. Subsequently, analyses of variance were performed with regulation
source, discipline and grade level as factors. All variables emerged as separate scales with
high internal consistency. Students in student-regulated programs reported a higher level of
organisational efficacy expectancy, implying that these students perceived better organi-
sational conditions for supporting their task performance. It is concluded that the dis-
tinction between personal and contextual aspects of efficacy is a promising distinction.
However, a more fine-grained conceptualisation of teacher and student regulation is
needed.
Keywords Autonomy · Efficacy · Higher education · Motivation · New learning · Self-regulation · Subjective task value
Introduction
Developments in education in general and in tertiary vocational education in particular
illustrate a shifting balance from external teacher regulation to student (self-) regulation. In
recent years, there has been an abundance of literature which contrasts the ‘new’ learning
C. J. de Brabander (✉) · J. S. Rozendaal · R. L. Martens
Department of Education, Leiden University, P.O. Box 9555, 2300 RB Leiden, The Netherlands
e-mail: brabander@fsw.leidenuniv.nl
J. S. Rozendaal
e-mail: rozendaal@fsw.leidenuniv.nl
R. L. Martens
e-mail: rmartens@fsw.leidenuniv.nl
Learning Environ Res (2009) 12:191–207
DOI 10.1007/s10984-009-9062-y
with the ‘old’ learning (e.g. Boekaerts et al. 2000; Grabinger 1996; Simons et al. 2000).
According to self-determination theory (e.g. Deci and Ryan 2000), student autonomy is
beneficial for intrinsic motivation. This is in line with the socio-constructivist plea for
activating students, stimulating inquiry, self-regulation and collaboration. Self-regulated
learners are motivated, independent and metacognitively active participants in their own
learning (e.g. Bastiaens and Martens 2000; Boekaerts and Martens 2006). But others
contest this. For instance, Mayer (2004) argues against discovery learning because it makes learning environments disordered and unpredictable, and thus ineffective; such environments increase task complexity in an unwanted way. This debate on self-regulation is important, but there has
not been much research to provide empirical evidence. It is clear that student-controlled
learning environments can have motivational benefits, because ‘‘students fed a continuous
diet of well-structured tasks might shortcut learning and self-regulation’’ (Lodewyk and
Winne 2005, p. 3). On the other hand, research on cognitive load, for instance, shows that
teacher-controlled learning environments lead to more effective learning with less extra-
neous load (Mayer 2004). Because empirical explorations of student-regulated programs
are scarce, we initiated the current study.¹
Self-determination theory (SDT; Deci and Ryan 1985, 1995; Ryan and Deci 2000) puts
forward three basic psychological needs that are conditional to personal growth, integrity
and well-being. These are the needs for autonomy, competence and relatedness. The need
for autonomy refers to freedom of action, mainly being self-initiating, and to self-regu-
lating one’s own actions. It is defined as the awareness of sovereignty in choosing and
designing courses of action. Notably, sense of autonomy involves the level of autonomy
that people experience, not the autonomy actually granted. The level of experienced
autonomy depends on whether people’s opportunities for autonomous decision making are
within the realm of their proximal development. Therefore, even small opportunities for
choice can already increase self-determination (Anderman and Midgley 1997). In other
words, SDT stresses the importance of students’ perceptions of their learning environment.
The need for competence involves understanding how to attain various external and
internal outcomes and being effective in performing the requisite actions. The need for
competence indicates a need to experience satisfaction in exercising and extending one’s
capabilities. Naturally, people seem to seek out challenges that are optimal for their level
of development (Levesque et al. 2004). The need for relatedness involves developing
secure and satisfying connections with others in one’s social milieu (Deci and Ryan 2000).
The conceptualisation of these three needs opens up the possibility to specify conditions
that are relevant to learning and personal growth. There is ample evidence that a learning
environment that satisfies students’ basic needs of autonomy, competence and relatedness
promotes learning (e.g. Connell and Wellborn 1991; Deci and Ryan 2002; Deci et al. 1991;
Grolnick and Ryan 1989). So, the level of need satisfaction provides an informative
criterion in the comparison of learning environments that differ in the amount of self-
regulation: if student and teacher regulation make a difference, then this difference would
surface in measurements of students’ sense of autonomy, competence and relatedness.
At this point, one can raise the worry that it is hard to combine all the ‘basic needs’ into
the same learning environment. Indeed, research has shown that, although it is clear that
perceptions of control, relatedness and competence are related to intrinsic motivation, it is
unclear how exactly they are interrelated (Sheldon and Niemiec 2006). This study focused
on the difficulty of combining all these aspects into a learning environment. After all, too
much freedom and low levels of teacher control in ill-structured tasks will cause a
¹ The material for this study was collected as part of a Master's thesis (Van den Ende 2004).
perception of high autonomy, but might cause a perception of low self-efficacy. On the
other hand, high teacher control can occur simultaneously with perception of low auto-
nomy. Critics of SDT and ‘new learning’ argue that students still might perceive personal
efficacy but will perceive less ‘contextual’ efficacy. According to Lodewyk and Winne
(2005, p. 3):
…well-structured tasks can be identified as those with straightforward operations for
constructing products, predictable evaluations, and agreed-upon standards for their
products. In contrast, ill-structured tasks do not make obvious the operations to use in
creating products, offer erratic evaluations, and have moot standards for judging the
product.
On the distinction between ill- and well-structured tasks, see also Hew and Knapczyk
(2006).
Many researchers point at this complicated relation between student’s self-regulation
and external regulation in the learning environment (e.g. Ten Cate et al. 2004; Vermunt
2007). In the first place, we need a clear distinction between learning environments with
different amounts or degrees of regulation. As stated above, too much or too little guidance
in the learning environment can hinder students’ development or (intrinsic) motivation. A
balance should be found between guidance and self-regulation. Vermunt and Verloop
(1999) call this constructive friction between learning and teaching, which refers to the
distance between the actual developmental level as determined by independent problem
solving capability, and the level of potential development with the assistance of others (see
also Ten Cate et al. 2004). These authors distinguish between learning environments that
almost completely rely on teacher guidance and those that rely fully on internal guidance.
Between those versions, they describe a stage that is often found in ‘new’ learning
environments such as those described above: shared guidance (from both teacher and
student) in which, at a cognitive level, students are helped to determine the importance of
issues themselves, at an affective level, students are stimulated to figure out their motives
and, finally, at a metacognitive level, students are given neither more nor less help than
actually needed. As the authors state, this internalisation of teacher functions in the learner
is not an easy task and might involve constructive friction.
Solving this problem is multi-faceted and complex. One of the key issues is the fact that,
although many authors agree that students’ perceptions of these different types of regu-
lation are highly important, the exact measurement of this distinction between efficacy
related to personal and to contextual aspects, as criteria for the comparison of learning
environments, has proven to be very difficult. As yet, SDT does not provide such a potentially relevant distinction in measurement, and neither do other tools for assessing students'
perceptions of learning environments such as the What Is Happening In this Class?
(WIHIC) questionnaire (den Brok et al. 2006), Questionnaire on Teacher Instructional
Behaviour (QIB) or other instruments (e.g. den Brok et al. 2004; Masui and de Corte
2005). Therefore, this article aims to unravel this distinction in learning environments
which vary in their level of student regulation.
Student-regulated learning in tertiary vocational education
The basic features of student-regulated learning environments are displayed in Table 1.
These are taken from Simons et al. (2000). Students typically receive an educational credit
to be spent on fields and topics which they judge to be important. A pivotal role is ascribed
to the establishment of a personal development plan. Learning takes place in authentic
environments, namely, either in simulated or in real-work conditions. Students are
encouraged to actively (re)create knowledge from their concrete work experiences through
reflection and investigation. They are responsible for initiating their learning activities.
They monitor and evaluate their progress independently, but in consultation with their
teachers.
Sense of efficacy
To conceptualise sense of competence in this article, we elaborate on Bandura’s (1986,
1994) concept of efficacy. Perceived self-efficacy is defined as ‘‘people’s judgments of
their capabilities to organize and execute courses of action required to attain designated
types of performances’’ (Bandura 1986, p. 391). According to Bandura, self-efficacy
cannot be conceived of as a general characteristic but is task specific: it can be incorrect to
extrapolate a positive level of self-efficacy from one domain of tasks to other domains.
Depending on their efficacy expectancies, people anticipate likely outcomes. The antici-
pation of outcomes as such is an idiosyncratic process: every person foresees different
outcomes. Outcomes can be compared, however, with respect to the value that they rep-
resent. Thus, the efficacy construct entails two important aspects that are quantifiable:
efficacy expectancies (how successfully I can perform specific courses of action); and
valence expectancies (how valuable to me the likely outcomes of these courses of action
are). Of course, valence expectancy is akin to the concept of ‘subjective task value’ (Eccles
and Wigfield 2002; Pintrich 2003; Pintrich and Schunk 2002), but we prefer to handle these
concepts within the unifying domain of a single theory.
The learning environment, of course, is not immediately reflected in students’ self-
efficacy. Therefore, we theorised that, for the investigation of their relationship, it is
necessary to expand the efficacy construct by drawing a distinction between personal and
environmental aspects. Despite its importance, to date much is still unclear about the
concept, such as the conceptual distinction between goal orientation and related constructs
like self-efficacy (Zweig and Webster 2004). In this study, we tried to deepen under-
standing of the efficacy concept by drawing a distinction between the person and his or her
environment. The personal side of efficacy expectancies involves judgements of personal
capabilities, and thus is more or less equivalent to the original definition of self-efficacy.
The environmental side, however, involves judgements of the extent to which conditions in
the environment are conducive or obtrusive to the execution of courses of action aimed at
designated types of performances. The environmental side of efficacy expectancy makes
explicit what remains implicit in Bandura’s theory: deliberating the viability of a certain
course of action, people not only estimate their personal capabilities, but they also make
judgements about the context in which a specific course of action is to be executed: are the conditions favourable, and are there any obstacles in the context that have to be dealt with?

Table 1 Guidelines for developing student-regulated learning (Simons et al. 2000)

Increased self-directed learning | Increased experiential learning
More active learning | More discovery learning
More cumulative learning | More contextual learning
More constructive learning | More problem-based learning
More goal-oriented learning | More case-based learning
More diagnostic learning | More social learning
More reflective learning | More intrinsically-motivated learning
In an analysis of teachers’ efficacy expectancies, Imants and de Brabander (1996) showed
that the distinction between perceived self-efficacy and perceived school efficacy shed
interesting light on the efficacy expectancies of male and female teachers. Depending on
the characteristics of the context, environmental efficacy expectancies can have more
specific names. In the context of school organisation, for instance, the label of ‘organi-
sational’ efficacy expectancies seems natural.
The distinction between person and environment applies equally well to valence
expectancies. Personal benefits indeed can be the most important determinants of action
choices or behavioural persistence. However, this does not preclude people from consid-
ering benefits that might result for other people. Reflecting on courses of action in which
they might get involved, people do take into account the benefits that might result for their
social support group, for the organisation for which they work, or even for planet earth.
Indeed, in many contexts, the primary goal of courses of action that people undertake is not
personal benefit, but benefit for other people. Teaching, for instance, is a pre-eminent
example in this regard. In this study, however, because we concentrated on the expectancies of students, who are clients of the school organisation rather than members, we did not consider non-personal valence expectancies relevant enough. Therefore, we addressed
only personal valence expectancies that we define as the total value of the outcomes of a
course of action that people anticipate for themselves personally.
The aim of this study was to investigate personal and organisational efficacy expec-
tancies and personal valence expectancies in student- and teacher-controlled learning
environments. We studied the empirical feasibility of these constructs and their capacity to
discriminate between learning environments. This feasibility question is not itself translated into an expectation. If, however, these two aspects prove to be separately measurable, it becomes possible to test hypotheses. Based on what is described in the sections above about the debate on ill-
structured versus well-structured environments, we formulate one hypothesis. This
hypothesis illustrates constructive friction. We anticipate that students in the student-
regulated environment have a higher level of personal efficacy expectancy, a lower level of
organisational efficacy expectancy, and a higher level of personal valence expectancy.
Method
Sample
‘Student-regulated’ programs were recruited from a Dutch network of schools for tertiary
vocational education that aims to promote student-regulated learning. Confronted with
discrepancies and incompatibilities between the knowledge with which students are
equipped by completion of their preservice education and the needs of employing com-
panies, these institutions have developed educational programs which aim to increase
student regulation. The development of these programs is based on heuristics such as the
12 guidelines in Table 1 formulated by Simons et al. (2000). All these schools have
learning environments that can be characterised as ‘shared guidance’ learning environ-
ments (see introduction). In order to introduce some variation in disciplines, an ‘infor-
matics’ program and a ‘small business and retail management’ program were selected.
Recruiting second and fourth year students allowed the tracing of quasi-developmental
aspects. The sample was completed by recruiting two comparable programs from traditional, teacher-regulated schools. The final sample consisted of 163 participants (see Table 5)
from four schools. The age of the students in the sample ranged from 18 to 29 years with a
mean of 21.56 years and a standard deviation of 1.89 years. The male domination of about
85% in the attendance of these programs was reflected in the sample: only 21 female
students participated (15%).
Learning tasks
In accordance with the task-specific nature of efficacy judgements, we used a task-specific approach (Bakkenes
et al. 1993; Imants and de Brabander 1996) to develop measures of sense of efficacy. In
this approach, judgements are acquired with respect to a series of specific tasks. For such
an approach, we needed to identify learning tasks that were general enough to apply to
different types of programs and different environments. Vermunt (1992) compiled a set of
cognitive, affective and regulative processing activities (Table 2) that met this prerequisite.
These activities were transformed into descriptions of learning tasks that would be com-
prehensible to students. However, this was not possible for all the processing activities.
The category of affective processing activities especially proved to be difficult. Eventually,
we were able to come up with a list of 17 tasks (Appendix). A few examples of these task
descriptions are: ‘‘Identifying relationships between different parts of the subject matter,
and between new information and what you know already’’ (Relating), ‘‘Motivating
yourself to realise the learning goals you planned, building and sustaining the willpower to
learn’’ (Motivating), ‘‘Handling distractions, thoughts, and emotions, that threaten to dis-
turb the learning process’’ (Concentrating).
Variables
Sense of efficacy
To measure the three aspects of sense of efficacy, the 17 learning tasks described above
were used. The student reported on all three aspects of sense of efficacy using a five-point
scale, ranging from 1 (Not At All or Hardly) to 5 (Very Much So) for each of these 17
tasks. We adapted the precise wording of the scale positions to the specific response-
eliciting statement that we used for each efficacy aspect. Personal efficacy expectancy was
measured in response to the statement: ‘‘I have enough skills and abilities to accomplish
this task successfully.’’ Organisational efficacy expectancy was measured in response to
the statement: ‘‘In our school all conditions are fulfilled that are necessary to accomplish
this task successfully.’’ Personal valence expectancy was measured in response to the statement: ‘‘Accomplishing this task, and the results I obtain in doing so, are very valuable to me personally.’’

Table 2 Catalogue of processing activities according to Vermunt (1992)

Cognitive | Affective | Regulative
Relating | Attributing | Orientating
Structuring | Motivating | Planning
Analysing | Concentrating | Process monitoring
Concretising | Self-judging | Testing
Applying | Valuing | Diagnosing
Memorising | Making an effort | Adjusting
Criticising | Arousing emotions | Assessing
Selecting | Expecting | Reflecting
The three a priori aspects of efficacy normally would suggest the desirability of using confirmatory factor analysis. However, the psychological structure of this type of tasks was unknown in advance. The logical categories of cognitive, affective and regulative tasks do not necessarily also constitute a psychological structure. Therefore, we based the devel-
opment of efficacy scales on exploratory factor analysis. The eigenvalues of a principal
components analysis of sense of efficacy for the data in our final sample allowed for the
extraction of four or even five components. However, the fifth component in the unrotated
solution appeared to be dominated by the fifth task, namely, preparing for tests. Moreover,
from the loadings on the fourth component, we were not able to identify a clear contrast
between different types of tasks. Therefore, we limited the number of components to three.
These three components undoubtedly dealt with the three efficacy aspects defined in
advance (eigenvalues: 14.109, 4.133 and 3.399). To ease interpretation, we rotated
the principal components solution (varimax method with Kaiser normalisation). The
component loadings in the rotated solution are given in Table 3. The first component was
defined by the organisational efficacy expectancies, the second by the personal efficacy
expectancies, and the third by the personal valence expectancies. All items had their
highest loading in their own category. Subsequently, we formed three scales and analysed
their reliability. For each scale, all tasks appeared to contribute to the reliability. Calculated
with Cronbach's α coefficient, the reliability of the personal efficacy, the personal valence,
and the organisational efficacy scales were 0.90, 0.89 and 0.93, respectively.
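The scale construction just described (a principal components analysis of the 17 × 3 efficacy items, varimax rotation of the retained components, and Cronbach's α per scale) can be illustrated with a minimal Python sketch. This is not the authors' analysis code; the placeholder data, the assumed column layout of `items`, and the omission of Kaiser normalisation are simplifications for illustration only.

```python
# Minimal sketch (not the authors' code) of PCA, varimax rotation and Cronbach's alpha.
# `items` is a hypothetical placeholder for the 163 x 51 matrix of efficacy ratings
# (17 tasks x 3 response statements, 5-point scale).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

def varimax(loadings, max_iter=100, tol=1e-6):
    """Raw varimax rotation of a loading matrix (Kaiser normalisation omitted)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    criterion = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p)
        )
        rotation = u @ vt
        new_criterion = s.sum()
        if new_criterion < criterion * (1 + tol):
            break
        criterion = new_criterion
    return loadings @ rotation

def cronbach_alpha(scale: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the sum score)."""
    k = scale.shape[1]
    return (k / (k - 1)) * (1 - scale.var(ddof=1).sum() / scale.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(163, 51)))      # placeholder responses

pca = PCA(n_components=3).fit(items)                          # three components, as in the text
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
rotated_loadings = varimax(loadings)                          # rotated solution (cf. Table 3)

personal_efficacy_cols = items.columns[:17]                   # assumed column layout
print("alpha (personal efficacy):", round(cronbach_alpha(items[personal_efficacy_cols]), 2))
```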
Sense of autonomy
A measure of autonomy was developed using the same list of tasks. Friedman (1999) has
used a comparable approach in the field of autonomy perception. His Appropriate Teacher
Work Autonomy instrument comprises a set of 32 teacher tasks. The teacher was asked to
judge the level of autonomy for each task on a five-point scale. Likewise, for our instru-
ment, we asked the student to indicate the level of autonomy which he or she experienced
in the fulfilment of each of 17 tasks on a five-point scale ranging from 1 (Not Autonomous
At All) to 5 (Fully Autonomous).
Because of our uncertainty about the psychological structure of the tasks, sense of
autonomy in different tasks again was analysed with principal components analysis. The
scree plot did not warrant extraction of more than one component. Moreover, we could not
find a plausible interpretation of subsequent components. With an eigenvalue of 5.325, the
first component explained 31.3% of the variance. A reliability analysis with Cronbach's α
coefficient showed that the reliability could be improved by removing Item 5. This item
also had a low factor loading of 0.231 (Table 4). The task addressed in this item involved
preparation for tests which, with respect to autonomy, is apparently different from other
tasks; no-one else but the students themselves can prepare for tests. Without this item, the
final value of Cronbach's α for the autonomy scale was 0.861.
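The item-removal check reported for the autonomy scale (reliability improves when Item 5 is dropped) amounts to an 'alpha if item deleted' analysis. The sketch below is a hypothetical illustration of that procedure, not the authors' code; the data frame `autonomy`, its column names, and the placeholder data are assumptions.

```python
# Self-contained sketch of an 'alpha if item deleted' check (not the authors' code);
# `autonomy` is a hypothetical placeholder for the 163 x 17 matrix of autonomy ratings.
import numpy as np
import pandas as pd

def cronbach_alpha(scale: pd.DataFrame) -> float:
    k = scale.shape[1]
    return (k / (k - 1)) * (1 - scale.var(ddof=1).sum() / scale.sum(axis=1).var(ddof=1))

def alpha_if_item_deleted(scale: pd.DataFrame) -> pd.Series:
    """Cronbach's alpha of the remaining items after dropping each item in turn."""
    return pd.Series({item: cronbach_alpha(scale.drop(columns=item))
                      for item in scale.columns})

rng = np.random.default_rng(1)
autonomy = pd.DataFrame(rng.integers(1, 6, size=(163, 17)),
                        columns=[f"task_{i}" for i in range(1, 18)])
print("full-scale alpha:", round(cronbach_alpha(autonomy), 3))
print(alpha_if_item_deleted(autonomy).sort_values(ascending=False).head())
# An item whose deleted-alpha exceeds the full-scale alpha (as Item 5 did in the
# authors' data) lowers internal consistency and is a candidate for removal.
```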
Procedures
We invited the respondents by email to participate. This email presented a link to an online
version of the questionnaire where the respondent could enter his or her responses. The
student received a reminder if he or she had not responded within 2 weeks.
Table 3 Component factor loadings for sense of efficacy in learning tasks

Task | Personal efficacy expectancy (Component 1, Component 2, Component 3) | Personal valence expectancy (Component 1, Component 2, Component 3) | Organisational efficacy expectancy (Component 1, Component 2, Component 3)
Identifying relationships 0.091 0.622 0.224 0.117 0.133 0.567 0.667 0.143 0.229
Designing a personal development plan 0.299 0.585 0.044 0.262 0.294 0.462 0.682 0.147 0.197
Applying knowledge 0.179 0.537 0.161 0.060 0.170 0.521 0.617 0.207 0.139
Critically processing -0.006 0.630 0.113 0.003 0.335 0.623 0.632 0.061 0.306
Preparing for assessments 0.026 0.503 0.080 -0.106 0.053 0.466 0.416 0.201 0.173
Motivating yourself 0.121 0.617 0.138 0.037 0.179 0.732 0.730 0.228 0.055
Monitoring 0.171 0.673 0.029 0.079 0.356 0.575 0.692 0.233 -0.020
Diagnosing deficiencies 0.104 0.576 0.187 0.102 0.222 0.598 0.727 0.240 0.120
Selecting learning strategies 0.126 0.589 0.151 0.201 0.286 0.622 0.764 0.172 0.098
Evaluating learning results 0.125 0.580 0.208 0.166 0.422 0.510 0.693 0.247 0.181
Working together in a group 0.153 0.356 0.100 0.270 0.039 0.471 0.529 -0.067 0.300
Attending selectively 0.063 0.635 0.060 0.207 0.184 0.491 0.566 0.130 0.190
Critically reflecting 0.280 0.502 0.174 0.277 0.103 0.523 0.745 0.058 0.226
Handling distractions 0.149 0.568 0.011 0.235 -0.159 0.533 0.629 0.188 -0.057
Selecting learning activities 0.224 0.522 0.116 0.300 0.052 0.507 0.664 0.165 0.143
Evaluating mastery 0.185 0.624 0.052 0.196 -0.070 0.579 0.647 0.260 0.199
Adjusting planning 0.284 0.643 0.164 0.321 0.156 0.474 0.739 0.134 0.105
% of Variance explained 16.8 13.9 11.7
The table refers to one principal components analysis with 17 (tasks) × 3 (efficacy aspects) items. The component loadings are laid out horizontally per aspect for convenience.
Design
First, the composition and characteristics of the sample were explored in different ways.
Next, the effect of regulation source was investigated with analysis of variance. In all
analyses, regulation source, discipline and year of study served as independent variables.
The model of analysis was limited to main effects and to all two-way interactions of the independent variables. Autonomy perceptions and sense of efficacy were analysed separately as dependent variables; depending on the number of dependent variables involved, we used a univariate or a multivariate analysis.
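As an illustration of this analysis model, the following sketch shows how the main effects, the two-way interactions, the age covariate mentioned in the Results section, and Type III sums of squares could be specified with Python's statsmodels. It is not the authors' code; the data frame `df`, its column names, and the placeholder data are assumptions for illustration only.

```python
# Hypothetical sketch of the analysis model: main effects of regulation source,
# discipline and year of study, all two-way interactions, age as covariate,
# and Type III (unique) sums of squares. Column names in `df` are assumed.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 163
df = pd.DataFrame({                                    # placeholder data for illustration
    "autonomy": rng.normal(3.6, 0.5, n),
    "age": rng.normal(21.6, 1.9, n),
    "source": rng.choice(["teacher", "student"], n),
    "discipline": rng.choice(["informatics", "small_business"], n),
    "year": rng.choice(["second", "fourth"], n),
})

# Sum-to-zero coding so that Type III sums of squares are meaningful.
formula = (
    "autonomy ~ age"
    " + C(source, Sum) + C(discipline, Sum) + C(year, Sum)"
    " + C(source, Sum):C(discipline, Sum)"
    " + C(source, Sum):C(year, Sum)"
    " + C(discipline, Sum):C(year, Sum)"
)
model = smf.ols(formula, data=df).fit()
print(sm.stats.anova_lm(model, typ=3))                 # Type III (unique) sums of squares
```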
Results
Descriptive analyses
In total, 163 students gave a usable response. Table 5 gives the distribution over regulation
sources, disciplines and year of study. The response rate was low (21%), but similar to
other comparable inquiries. We considered this response rate acceptable given that the
objective of this study was not generalisation, but testing the feasibility of an approach. In
the student-regulated programs, the response rate was slightly higher than in the teacher-
regulated programs. The number of responses from second-year small-business students in
the teacher-regulated program was very low. A univariate analysis of variance with age as
dependent variable and source of regulation, discipline and year of study as independent
variables, and with all main effects and all two-way interactions in the analysis model,
showed a significant year-of-study effect (F[1, 154] = 16.666, p < 0.0001) but also a significant interaction between discipline and year of study (F[1, 154] = 15.538, p = 0.0001).
Table 4 Component loadings for sense of autonomy in learning tasks
Task Component 1 loading
Identifying relationships 0.551
Designing a personal development plan 0.416
Applying knowledge 0.493
Critically processing 0.593
Preparing for assessments 0.231
Motivating yourself 0.650
Monitoring 0.562
Diagnosing deficiencies 0.606
Selecting learning strategies 0.658
Evaluating learning results 0.616
Working together in a group 0.466
Attending selectively 0.591
Critically reflecting 0.577
Handling distractions 0.615
Selecting learning activities 0.586
Evaluating mastery 0.559
Adjusting planning 0.591
In the group of small-business students, the age difference between second-year (mean = 20.7 years) and fourth-year (mean = 22.7 years) students was two years, as was to be expected, but the second-year (21.1 years) and fourth-year (21.7 years) informatics students were on average clearly closer in age.
Though the chi-square value of a cross tabulation of preliminary education and source
of regulation was not statistically significant, the percentage of students with vocational
secondary education in the student-regulated program was 10 points higher than the per-
centage of students with general secondary education. This might be interpreted as an
indication that students with vocational secondary education as preliminary education
found the student-regulated programs more attractive, possibly because of the practice and
action-oriented components of a typical student-regulated program.
Neither regulation source nor year of study was related to sex: the unequal distribution
between men and women applied to all regulation sources and all years of study. But there
was a significant association between sex and discipline (χ²[1] = 11.481, p = 0.0007),
showing that male overrepresentation was stronger in the informatics programs than in the
small business programs (Table 6).
Sense of autonomy
Differences in sense of autonomy were examined with a univariate analysis of variance.
We used regulation source, discipline and year of study as independent variables. Because
of the interaction effect between discipline and year of study on age level, age level was
added as a covariate. In addition to all main effects, the two-way interactions between
regulation source, discipline and year of study were tested. Significance tests were based
on Type III sums of squares (unique effects). The analysis revealed a significant main
effect for regulation source (F[1, 153] = 3.991, p = 0.0475) and a significant interaction effect between regulation source and discipline (F[1, 153] = 4.919, p = 0.028). The
graphical representation of this interaction (Fig. 1) shows that the main effect can be
explained by the interaction between regulation source and discipline: in the small business
programs students in the student-regulated program had a slightly higher sense of auton-
omy than students in the teacher-regulated programs (means of 3.83 and 3.44), but there
Table 5 Sample distribution

Discipline | Year | Teacher-regulated | Student-regulated | Total
Informatics | Second year | 18 | 26 | 44
Informatics | Fourth year | 19 | 16 | 35
Informatics | Total | 37 | 42 | 79
Small business | Second year | 4 | 40 | 44
Small business | Fourth year | 12 | 28 | 40
Small business | Total | 16 | 68 | 84
Total | | 53 | 110 | 163
Table 6 Distribution of men and women

Discipline | Male | Female | Total
Informatics | 76 | 3 | 79
Small business | 65 | 18 | 83
Total | 141 | 21 | 162
was no difference between regulation sources among informatics students (means of 3.57
and 3.61).
The interaction between regulation source and year of study was not statistically sig-
nificant, although a trend can be signalled (F[1, 153] = 2.984, p = 0.086). According to the observed means, there was only one group with a lower sense of autonomy, namely, the second-year students in the teacher-regulated environment: means of 3.39 (second year, teacher regulated), 3.75 (second year, student regulated), 3.67 (fourth year, teacher regulated) and 3.70 (fourth year, student regulated).
Sense of efficacy
A multivariate analysis of variance was used to reveal any differences in sense of efficacy.
Independent variables were source of regulation, discipline and year of study, with age
level again as covariate. In addition to the covariate, the model of analysis included the
main effects of regulation source, discipline and year of study and the three two-way interactions that could be formed with them. Significance tests were based on Type III
sums of squares (unique effects).
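The multivariate model just described could be set up along the lines of the sketch below. Again, this is only a hedged illustration with assumed variable names (personal_efficacy, personal_valence, organisational_efficacy, and so on) and placeholder data, not the analysis the authors actually ran.

```python
# Hypothetical sketch of the multivariate analysis described above: the three efficacy
# scale scores as dependent variables, with source of regulation, discipline, year of
# study, their two-way interactions and the age covariate as predictors.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
n = 163
df = pd.DataFrame({                                    # placeholder data for illustration
    "personal_efficacy": rng.normal(3.7, 0.5, n),
    "personal_valence": rng.normal(3.6, 0.5, n),
    "organisational_efficacy": rng.normal(3.2, 0.6, n),
    "age": rng.normal(21.6, 1.9, n),
    "source": rng.choice(["teacher", "student"], n),
    "discipline": rng.choice(["informatics", "small_business"], n),
    "year": rng.choice(["second", "fourth"], n),
})

manova = MANOVA.from_formula(
    "personal_efficacy + personal_valence + organisational_efficacy ~ age"
    " + C(source) + C(discipline) + C(year)"
    " + C(source):C(discipline) + C(source):C(year) + C(discipline):C(year)",
    data=df,
)
print(manova.mv_test())   # multivariate tests (Wilks' lambda, Pillai's trace) per term
```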
The multivariate tests yielded two significant results, namely, the main effect of reg-
ulation source (F[3, 148] = 4.630, p = 0.004) and the interaction effect between regulation source and discipline (F[3, 148] = 2.713, p = 0.047). Subsequent univariate tests revealed that the main effect of regulation source was attributable to an effect on organisational efficacy expectancy (F[1, 150] = 11.651, p = 0.0008): student regulation was superior to teacher regulation in terms of organisational efficacy expectancy (Table 7). The effect on personal valence expectancy was not significant (F[1, 150] = 3.023, p = 0.084),
but there was a slight trend in the same direction that might be meaningful (Table 7).
Fig. 1 Interaction between regulation source and discipline for sense of autonomy
Univariate tests for the interaction effect between regulation source and discipline,
however, failed to reach statistical significance, although the effect on organisational
efficacy expectancy showed a trend (F[1, 150] = 3.212, p = 0.075). The graphical rep-
resentation of the interaction (Fig. 2) suggests that the difference in organisational efficacy
expectancy between student regulation and teacher regulation was more distinct in the
informatics programs. According to the univariate tests, it might also be worthwhile to
investigate the main effect of discipline on organisational efficacy expectancy
(F[1, 150] = 3.932, p = 0.049).
Discussion
Consistent with the objectives of this study, the discussion of results below concentrates on
the usefulness and the feasibility of specific constructs and measures which we used in the
evaluation of student-regulated versus teacher-regulated ‘shared guidance’ learning
environments.
Table 7 Mean personal valence expectancy and organisational efficacy expectancy by type of regulation

Type of regulation | Personal valence | Organisational efficacy
Student regulation | 3.74 | 3.41
Teacher regulation | 3.56 | 2.84

The difference between the student-regulated and teacher-regulated groups in organisational efficacy expectancy was statistically significant (p < 0.001)
Fig. 2 Interaction between regulation source and discipline for organisational efficacy expectancy
Sense of efficacy
In an elaboration of the SDT concept of perceived competence, the three efficacy aspects
consistently surfaced in the principal components analysis. The response-eliciting state-
ments apparently had clear and different meanings to the respondents.
With respect to sense of efficacy, we anticipated that the students in the student-
regulated environment would have a higher level of personal efficacy expectancy, a lower
level of organisational efficacy expectancy, and a higher level of personal valence
expectancy. Although the analysis revealed interesting results, these expectations were not
confirmed. First, we found no difference in personal efficacy expectancy between student-
regulated and teacher-regulated environments. Second, there was a sizable difference
between teacher and student regulation in terms of organisational efficacy expectancy, but
the size of the difference might be bigger in the informatics programs. However, contrary
to our hypothesis, students judged the conditions in the student-regulated environment in
general as more favourable than in the teacher-regulated environment. They clearly saw
the conditions in the student-regulated environments as more conducive to the execution of
their learning tasks than in the teacher-regulated environment. Apparently, a low level of
organisational efficacy expectancy is not a hallmark of a student-regulated learning
environment as such, but appears to depend on how support in that environment is
organised. The results in this study suggest at the very least that student-regulated
environments have potential to offer adequate guidance. Regarding the third aspect,
personal valence expectancy, we found no significant differences but, if we take the
observed tendency seriously, then the students found that the learning activities in the
student-regulated environment were slightly more valuable than in the teacher-regulated
environment. This conforms to our expectation.
The primary goal of this study was the conceptualisation of personal and environmental
aspects of the efficacy construct. In regard to this objective, we conclude that the proposed
distinction appears to be appropriate and fruitful: the two types of expectancies made sense
to the respondents and made the instrument clearly more sensitive to the variability in the
learning environments. If the difference between the learning environments is not strong
enough to produce differences in personal efficacy expectancy, a measurement of organisational efficacy expectancy apparently still can detect environmental variability.
Task-specific approach
Both with respect to sense of autonomy and sense of efficacy, we find it somewhat
discomforting that the principal components analysis did not detect multiple components
contrasting different types of tasks. If the task descriptions had a clear and discriminating
meaning for the respondents, then the principal components analysis would have detected
components that were related to contrasts between different types of tasks. We explain the
absence of task contrasts in terms of a deficiency in the task descriptions, which apparently
were not clear enough to enable the respondents to grasp a clear understanding of the
differences between the tasks. This might have several causes. In the first place, the
descriptions of the tasks could have been too abstract, making it difficult for the respon-
dents to connect them with their concrete activities. Another possibility is that these tasks
indeed describe courses of action that contain a set of activities that is rather heteroge-
neous with respect to regulation/autonomy and efficacy. In either case, what was clear to
the respondents, however, was that all tasks had something to do with learning. This
general resemblance between tasks elicited more or less equal responses to different items.
Ironically, then, the very satisfactory internal consistency of our scales was actually
boosted by one or more inadequacies in the task descriptions.
Regulation source
We found some partial effects for sense of autonomy that were consistent with expectation.
Students in the student-regulated environment in the small-business program reported a
higher sense of autonomy, but this was not the case for students in the informatics program.
We also found a tendency for second-year students in the teacher-regulated environment to
demonstrate a lower sense of autonomy than the other groups. Although sense of autonomy
was not our primary interest, we did not expect differences of this small size. With respect
to personal efficacy expectancy, we found no influence of regulation source at all. There
was a possibly meaningful difference in personal valence expectancy in favour of student
regulation. Setting aside organisational efficacy expectancy, overall the impact of regu-
lation source was rather small.
This lack of effects of regulation source cannot be attributed to the less-than-optimal
characteristics of the dependent measures. The task descriptions were not perfect. Nev-
ertheless, because there is no doubt that all task descriptions were related to activities in a
learning context, sense of autonomy and sense of efficacy still were adequately measured,
admittedly in a global manner. If source of regulation had made a substantial impact, it
would have surfaced in more measures. Therefore we can conclude that student and teacher
regulation are either insufficiently different or not relevant to the dependent measures.
Presumably, source of regulation was conceived mistakenly as an all-or-none distinc-
tion. Teacher-regulated environments also vary substantially with respect to the level of
self-regulation. Educational programs that, according to the subject matter and transmis-
sion model, would be characterised as teacher regulated can allow for considerable self-
regulation (e.g. when they stress independent learning). On the other hand, because
educational traditions are rather persistent, it is safe to assume that actual practice in
student-regulated environments is less student regulated than it is assumed to be. And what
if student regulation were in fact implemented as a laissez-faire type of coaching? The
opportunity to act autonomously would be hampered severely. Maybe it would be possible
to isolate effects of weak differences in terms of source of regulation in a very large
sample. It would be wiser, however, to analyse the type of regulation actually employed in
educational programs in a more fine-grained manner. Such analyses can be based on the characteristics of these learning environments or on the use of more specific instruments
to assess students’ opinions, such as the What Is Happening In this Class? (WIHIC)
questionnaire described by den Brok et al. (2006) and Rickards et al. (2005) or the
Questionnaire on Teacher Instructional Behaviour (QIB; den Brok et al. 2004).
The relation between regulation context and self-regulation is further complicated by
the fact that they are not directly linked. Even the most rigorous external regulation by
itself would not preclude people from self-regulation. As deCharms (1976) remarked long
ago, ‘origins’ and ‘pawns’ are equally subject to external constraints but, while pawns feel
defeated by their constraints and complain about them, origins are not obsessed with them
and strive to visualise their paths to their personal goals through the requirements with
which they are faced. The contexts in which students of more than 20 years of age follow
their courses of study might be different on multiple aspects, but students will find their
path through their obligations using the resources that they see fit. This would imply that there is no immediate relationship between the regulative context and either subjective sense of autonomy or personal efficacy expectancy. Finally, as also indicated by den Brok
et al. (2006) and Dhindsa and Fraser (2004), differences between male and female stu-
dents’ opinions of the learning environment, or more precisely, of the distinction between
personal and environmental aspects, might be an interesting subject for future
investigations.
In summary, it can be concluded that the distinction between personal and environ-
mental aspects of efficacy is promising, although we have no data on non-personal valence
expectancies. The measurement of environmental efficacy expectancies proved very sen-
sitive to source of regulation, despite the difficulties that we identified with the concept of
regulation source. Their importance is that environmental efficacy expectancies involve
perceptions of the support offered by the environment and, therefore, have a more
immediate relationship with the regulative context. If the description of learning tasks can
be improved to permit the comparison of different types of tasks, our conceptualisation of
efficacy could contribute substantially to the evaluation of teacher and student regulation,
but also more generally to understanding of phenomena for which task performance is
relevant. This eventually could contribute to the difficult and lengthy discussion about the
benefits and disadvantages of student-regulated learning environments. The notion of
regulation, however, needs further conceptual clarification. Important dimensions of reg-
ulation must be identified, and the relation between subjective and objective perspectives
must be enlightened.
With these comments in mind, the results of this study are taken not as definitive
answers, but as encouragement to pursue research on this topic in the direction chosen
here.
Open Access This article is distributed under the terms of the Creative Commons Attribution Noncom-
mercial License which permits any noncommercial use, distribution, and reproduction in any medium,
provided the original author(s) and source are credited.
Appendix
All items are literal translations of Dutch phrases. There is no guarantee, therefore, that
they are also connotatively equivalent in English.
Learning task descriptions
1. Identifying relationships between different parts of the subject matter, and between
new information and what you already know (Cognitive: Relating).
2. Designing a personal development plan, reflecting on possible and desirable learning
goals (Regulative: Planning).
3. Applying acquired knowledge in new situations (Cognitive: Applying).
4. Critically processing of subject matter, drawing personal conclusions on basis of
facts and arguments, developing personal judgements about the correctness of
information at hand (Cognitive: Criticising).
5. Preparing for assessments (Cognitive: Memorising, Criticising, Analysing, Selecting;
Regulative: Testing).
6. Motivating yourself to realise the learning goals you planned, building and sustaining
the willpower to persevere (Affective: Motivating).
7. Monitoring and evaluating the learning process (Regulative: Process monitoring).
8. Diagnosing deficiencies in knowledge and skills (Regulative: Diagnosing).
9. Selecting learning strategies to master new knowledge and skills (Cognitive:
Analysing).
10. Evaluating learning results against planned learning goals (Regulative: Assessing).
11. Working together in a group, and with representatives of different vocational fields in
solving vocational problems (Affective: Making an effort).
12. Attending selectively to subject matter, distinguishing major and minor points
(Cognitive: Analysing, Selecting).
13. Critically reflecting on personal behaviour and personal competencies (Regulative:
Reflecting).
14. Handling distractions, thoughts and emotions which threaten to disturb the learning
process (Affective: Concentrating).
15. Selecting learning activities for the acquisition of new knowledge and skills
(Cognitive: Analysing; Regulative: Orienting).
16. Adjusting the original planning based on the results of assessments or guidance
consultations (Regulative: Adjusting).
17. Evaluating mastery of subject matter, demonstration of acquired knowledge and
skills (Regulative: Testing).
References
Anderman, L. H., & Midgley, C. (1997). Motivation and middle school students. In J. L. Irvin (Ed.), What
current research says to the middle level practitioner (pp. 41–48). Columbus, OH: National Middle
School Association.
Bakkenes, I., de Brabander, C. J., & Imants, J. (1993). Professional isolation in primary schools and
teachers’ task perceptions. In F. K. Kieviet & R. Vandenberghe (Eds.), School culture, school
improvement, and teacher development (pp. 171–197). Leiden, The Netherlands: DSWO-Press.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs,
NJ: Prentice Hall.
Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behaviour (Vol. 4,
pp. 71–81). New York: Academic Press.
Bastiaens, Th., & Martens, R. (2000). Conditions for web-based learning with real events. In B. Abbey
(Ed.), Instructional and cognitive impacts of web-based education (pp. 1–32). Hershey/London: Idea
Group Publishing.
Boekaerts, M., & Martens, R. (2006). Motivated learning: What is it and how can it be enhanced? In L.
Verschaffel, F. Dochy, M. Boekaerts, & S. Vosniadou (Eds.), Instructional psychology: Past, present
and future trends. A look back and a look forward (pp. 113–130). London: Elsevier.
Boekaerts, M., Pintrich, P. R., & Zeidner, M. (2000). Handbook of self-regulation. San Diego, CA: Aca-
demic Press.
Connell, J. P., & Wellborn, J. G. (1991). Competence, autonomy, and relatedness: A motivational analysis
of self-system processes. In M. R. Gunnar & L. A. Sroufe (Eds.), Self processes and development: The
Minnesota Symposia on Child Psychology (Vol. 23, pp. 43–77). Hillsdale, NJ: Lawrence Erlbaum.
deCharms, R. (1976). The learning of personal causation. In R. deCharms (Ed.), Enhancing motivation:
Change in the classroom (pp. 200–213). New York: Irvington Publishers.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behaviour. New
York: Plenum Publishing Co.
Deci, E. L., & Ryan, R. M. (1995). Human autonomy: The basis for true self-esteem. In M. Kernis (Ed.),
Efficacy, agency, and self-esteem (pp. 31–49). New York: Plenum.
Deci, E. L., & Ryan, R. M. (2000). The ‘‘what’’ and ‘‘why’’ of goal pursuits: Human needs and the self-
determination of behavior. Psychological Inquiry, 11, 227–268.
Deci, E. L., & Ryan, R. M. (Eds.). (2002). Handbook of self-determination research. Rochester, NY:
University of Rochester Press.
Deci, E. L., Vallerand, R. J., Pelletier, L. G., & Ryan, R. M. (1991). Motivation and education: The self-
determination perspective. Educational Psychologist, 26, 325–346.
den Brok, P., Bergen, T., Stahl, R. J., & Brekelmans, M. (2004). Students’ perceptions of teacher control
behaviours. Learning and Instruction, 14, 425–443.
den Brok, P., Fisher, D., Rickards, T., & Bull, E. (2006). Californian science students’ perceptions of their
classroom learning environments. Educational Research and Evaluation, 12, 3–25.
Dhindsa, H. S., & Fraser, B. J. (2004). Culturally-sensitive factors in teacher trainees’ learning environ-
ments. Learning Environments Research, 7, 165–181.
Eccles, J. S., & Wigfield, A. (2002). Motivational beliefs, values, and goals. Annual Review of Psychology,
53, 109–132.
Friedman, I. A. (1999). Teacher-perceived work autonomy: The concept and its measurement. Educational
and Psychological Measurement, 59, 58–76.
Grabinger, R. S. (1996). Rich environments for active learning. In D. H. Jonassen (Ed.), Handbook of
research for educational communications and technology (pp. 665–692). New York: Macmillan
Library Reference USA.
Grolnick, W. S., & Ryan, R. M. (1989). Parent styles associated with children’s self-regulation and com-
petence in school. Journal of Educational Psychology, 81, 143–154.
Hew, K., & Knapczyk, D. (2006). Analysis of ill-structured problem solving, mentoring functions, and
perceptions of practicum teachers and mentors toward online mentoring in a field-based practicum.
Instructional Science, 35, 1–40.
Imants, J. G. M., & de Brabander, C. J. (1996). Sense of efficacy of teachers and principals for learner and
school oriented tasks in primary schools. Teaching and Teacher Education, 12, 179–195.
Levesque, C., Zuehlke, A. N., Stanek, L. R., & Ryan, R. M. (2004). Autonomy and competence in German
and American university students: A comparative study based on self-determination theory. Journal of
Educational Psychology, 96, 68–84.
Lodewyk, K. R., & Winne, P. H. (2005). Relations among the structure of learning tasks, achievement, and
changes in self-efficacy in secondary students. Journal of Educational Psychology, 97, 3–12.
Masui, C., & De Corte, E. (2005). Learning to reflect and to attribute constructively as basic components of
self-regulated learning. British Journal of Educational Psychology, 75, 351–372.
Mayer, R. E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for
guided methods of instruction. American Psychologist, 59, 14–19.
Pintrich, P. R. (2003). A motivational science perspective on the role of student motivation in learning and
teaching contexts. Journal of Educational Psychology, 95, 667–686.
Pintrich, P. R., & Schunk, D. H. (2002). Motivation in education: Theory, research, and applications (2nd
ed.). Upper Saddle River, NJ: Prentice Hall.
Rickards, T., Den Brok, P., & Fisher, D. (2005). The Australian science teacher: A typology of teacher-
student interpersonal behaviour in Australian science classes. Learning Environments Research, 8,
267–287.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation,
social development, and well-being. American Psychologist, 55, 68–78.
Sheldon, K. M., & Niemiec, C. P. (2006). It’s not just the amount that counts: Balanced need satisfaction
also affects well-being. Journal of Personality and Social Psychology, 91, 331–341.
Simons, R. J., van der Linden, J., & Duffy, T. (2000). New learning. Dordrecht/Boston: Kluwer.
Ten Cate, O., Snell, L., Mann, K., & Vermunt, J. (2004). Orienting teaching towards the learning process.
Academic Medicine, 79, 219–228.
Van den Ende, E. J. (2004). De juiste weg. Een exploratief onderzoek naar instructie- en vraaggestuurde
leeromgevingen in het Hoger Beroepsonderwijs [The right road. An explorative investigation of
instruction and demand oriented learning environments in Higher Education]. Unpublished Master’s
thesis, Centre for the Study of Education and Instruction, Leiden University, The Netherlands.
Vermunt, J. D. H. M. (1992). Leerstijlen en sturen van leerprocessen in het hoger onderwijs: Naar pro-
cesgerichte instructie in zelfstandig denken [Learning styles and regulation of learning processes in
higher education: Towards process oriented instruction in independent thinking]. Lisse, The Nether-
lands: Swets & Zeitlinger.
Vermunt, J. D. (2007). The power of teaching-learning environments to influence student learning. In N.
Entwistle & P. Tomlinson (Eds.), Student learning and university teaching (pp. 73–90). Leicester, UK:
British Psychological Society.
Vermunt, J. D., & Verloop, N. (1999). Congruence and friction between learning and teaching. Learning
and Instruction, 9, 257–280.
Zweig, D., & Webster, J. (2004). Validation of a multidimensional measure of goal orientation. Canadian
Journal of Behavioural Science, 36, 232–248.