19th Bled eConference
eValues
Bled, Slovenia, June 5 - 7, 2006
The Worth of Anonymous Feedback
Heinz Dreher
Curtin University of Technology, Australia
h.dreher@curtin.edu.au
Hermann Maurer
Graz University of Technology, Austria
hmaurer@iicm.edu
Abstract
Prima facie it is accepted that anonymity is an important feature of eLearning systems. It
is easy to implement. But this simplicity belies its more serious implications, especially
regarding the worth which can be attached to results derived from the interpretation and
analysis of anonymously collected data. A sample of eLearning implementation cases is
reviewed from the point of view of anonymously collected data for evaluation of
educational quality. A reiteration of an earlier analysis of levels of anonymity leads to the
introduction of the Anonymity-Purpose-Worth matrix. It is contended that if the matrix is
used to characterise the data collection and analysis in eLearning evaluation settings, the
worth of feedback can be better appraised and acted upon. Practitioners, researchers,
and students in the eLearning field will benefit as the anonymity conditions will be clearly
documented in a standardised and comparable manner.
Keywords: anonymity; anonymous data; Computer Mediated Communications;
eLearning; evaluation of educational quality; Learning Management Systems
Introduction
In this article we aim to persuade eLearning practitioners and researchers to document
the anonymity conditions associated with eLearning investigations and evaluations. We
contend that if anonymity levels and matching data collection purposes were documented
in a standard and comparable manner, such as is suggested in our Anonymity-Purpose-
Worth Matrix (Table 2), consumers of eLearning evaluations and appraisals would be better
able to judge the worth of the results of the investigations.
The article begins with an overview of anonymity in the classrooms of the pre-computing
era, and then observes the effect of computer mediated communications on the collection
of anonymous feedback in eLearning situations. An analysis of the importance of
anonymity and its categorisation into the six levels suggested by Flinn and Maurer in
1995 precedes our consideration of the value of the evaluative claims able to be made in
five eLearning situations.
We conclude by offering an example of using the Anonymity-Purpose-Worth matrix and
show that the reported worth of the claims in the two cases analysed has fallen short of
the original aim. In completing the above steps we hope to have persuaded the reader of
the value of using our Anonymity-Purpose-Worth matrix.
Anonymity in the Classroom
Prior to the enlistment of the ‘new technologies’ to support teaching and learning, little
thought was given to the issue of anonymity. Teachers went about their business of
teaching, and students faced their challenge of learning, in public view of the entire class.
Students asked questions of the teacher, usually under the curious surveillance of their
nearest neighbours, if not many more of their peers. They asked each other questions
under the same conditions. The teacher asked questions of the students, often in public,
soliciting for the first or best answer. Test results were frequently announced to
individuals by reading aloud the scores in front of the entire class, or by handing back test
papers in order of score and perhaps accompanied by some comment from the teacher in
some way thought to be appropriate for each student. Such activities, in the absence of
technology, were efficiently carried out during the class and in public.
For students who performed well, were confident, gregarious and outgoing, such a
‘public’ system functioned rather well, and for students who were not so bright, and
perhaps tended towards introversion, the prospects were entirely different: ridicule,
reduced confidence, lack of motivation, and poor performance giving rise to undesirable
expressions of superiority were some possible outcomes.
Not only was teacher-to-individual-student communication public, so too was student-to-
teacher communication. In many situations this may be desirable, but if students wanted
further elaboration on an aspect of a lesson there was little opportunity to ask in private
and save the possible embarrassment in case of ‘ignorance’ or ‘error’. We would all
readily recall personal embarrassment at asking the so-called ‘stupid question’ in
public. It is bad enough to ask such a question of the teacher in private, but to face the
agony of ridicule by one’s peers was something to be carefully avoided. Our natural
embarrassment aversion behaviour thus tends to result in a plethora of unasked
questions with the concomitant lack of learning to full potential.
Thoughtful teachers, those who realised that individual students varied in psychological
disposition as well as in intellectual ability, and those who questioned the very methods
which were accepted as good contemporary practice, were able to help their students
realise their full potential by accommodating for individual differences, but these were in
the minority. And it was hard work. The advent of technology brought with it the possibility
of teaching larger numbers of students, even personalising the learning experience, and
monitoring the effectiveness of the educational experience through the efficient collection
of student opinion.
We could safely conclude that respecting and preserving the anonymity of individual
students was not achieved, or even attempted in the main, despite the negative
consequences. The advent of technology, not simply computer and communications
technology, but also the availability of cheap and immediate reproduction facilities such
as photocopying, and all manner of other media (audio, 35mm slides, filmstrips, and later
video) provided a major impetus to the emergence of a new branch of education known
as “educational technology” in the 1970s and 1980s.
Computer Mediated Communications Facilitates
Anonymity
Computer Mediated Communications (CMC) technology burst onto the world scene in the
closing decade of the last century, first in boardrooms frequented by rich company
executives, but soon enough, as the power of personal computing spread and the
Internet reached out to the world, the technology was within reach of universities and
schools and finally private individuals working or studying from home. CMC technology
made possible Learning Management Systems (LMS) and through them, the
management of anonymity so as to promote learning and take cognisance of individual
learner needs and abilities in interacting with teacher and peer alike.
Of particular interest in this article is the collection of student opinion of educational
quality in Learning Management Systems. Good teachers, whether using an LMS or not, are
very interested in monitoring the effectiveness of their own performance and that of the
systems they enlist in the education enterprise, and through the use of CMC it is relatively
simple and cost-efficient to collect such responses. We know that if a respondent is
identifiable, the opinion submitted may not be as truthful as if that opinion was
anonymously offered. This is precisely why democratic elections are conducted as secret
ballots. In such elections the anonymity of the voter is safeguarded, but at the same time,
controls exist to ensure that one and only one vote is counted for each elector (or some
slight variation if voting is not compulsory).
Looking at the education literature we found a selection of cases in which student opinion
of educational quality was collected. Of interest are the cases where anonymity was
safeguarded, but the matching controls to ensure one and only one input from an
authoritative source were weak or absent. We observed that, despite such an absence of
controls, the data was used to derive supposedly ‘valid’ and in any case, useful
interpretations and conclusions about the quality of the education. This is surely a matter
of concern and warrants further attention. Using CMC it is relatively simple to preserve
the anonymity of contributors, however much more thought needs to go into a process of
‘secret ballot’-like controls to ensure the veracity of the data and thus the reliability and
validity of the interpretations and conclusions drawn from such data.
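To make the ‘secret ballot’ analogy concrete, the following sketch (our own illustration, not drawn from any of the systems discussed in this article; the class and method names are hypothetical) shows one minimal way an LMS could accept at most one anonymous response per enrolled student: a single-use random token is issued per student, kept apart from the responses, and a submission is counted only if it carries an unspent token.

import secrets

class AnonymousBallotBox:
    """Minimal sketch of a 'secret ballot'-like feedback channel.

    Each enrolled student receives one single-use random token. A response is
    accepted only if it carries a token that was issued and not yet spent, so
    at most one response per student is counted, while the stored responses
    carry no student identifier at all.
    """

    def __init__(self):
        self._unspent_tokens = set()   # issued but not yet used
        self._responses = []           # anonymous responses only

    def issue_token(self) -> str:
        """Create one single-use token (to be handed to exactly one student)."""
        token = secrets.token_urlsafe(16)
        self._unspent_tokens.add(token)
        return token

    def submit(self, token: str, response: dict) -> bool:
        """Accept a response if the token is valid and unused; spend the token."""
        if token not in self._unspent_tokens:
            return False               # unknown or already-used token
        self._unspent_tokens.discard(token)
        self._responses.append(response)  # stored without any link to the token
        return True

    def responses(self) -> list:
        return list(self._responses)


# Usage: issue one token per enrolled student; at most one response each is counted.
box = AnonymousBallotBox()
tokens = {student: box.issue_token() for student in ["s1", "s2", "s3"]}
box.submit(tokens["s1"], {"rating": 5, "comment": "clear lectures"})
box.submit(tokens["s1"], {"rating": 1, "comment": "second attempt"})  # rejected: token spent
print(len(box.responses()))  # -> 1

Of course, the issuing step itself knows which student received which token, so genuine unlinkability would require further measures (for example, blind issuance by a third party); the point of the sketch is simply that anonymity of the stored responses and the one-response-per-respondent control are separable concerns.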
In this article we argue that despite the desire or need to collect data, and the relative
ease with which it can be collected via eLearning systems, care must be taken as to the
worth which is placed on data from anonymous sources. For the purpose of drawing
attention to some possible problems, we conjecture that data collected from anonymous
sources is of limited worth - worse still, it (data collected from anonymous sources) may
be useless and lead to false expectations and unrealisable hopes on the part of
respondents.
Feedback is provided with an expectation that it is collected for a worthwhile purpose and
the hope that the insight gained from its interpretation and analysis will make a difference,
presumably for the better. It is important therefore that educational technology
researchers arrange data gathering, analysis, and reporting so as to justify the implied
trust of those participating in the research. Whilst it may be unrealistic to conduct our
feedback on educational quality with the same rigour and controls of a democratic
election, there are some guidelines which can usefully be borne in mind. On the one
hand we wish to promote participation and anonymity, and on the other we want to
ensure the data can be relied upon in drawing some useful insights to improve the
educational offering.
Importance of Anonymity
The notion of anonymity is generally accepted, but the balance between preserving the
anonymity of the sources of research data and the value or worth we purport to attach to
the results derived from such data merits closer scrutiny. In modern LMS it is
perfectly straightforward to offer an online data collection device, ask students to use it,
and at some time gather the results. The simplicity and ease with which this can be done
may tend to lull us into a comfort zone from which we are not sensitive to the possible
negative aspects of anonymous data.
In her qualitative research study of seven mid-life adult learners studying an
undergraduate program delivered online, Dianne Conrad (2002) observed that student
public identification was an important aspect of learning. She found that a combination of
three factors: online learners’ lack of anonymity; the societal inclination to be ‘nice’
people; and learners’ prolonged commitment to a program of learning; “created in them
an increased sense of inhibition” (p12). This makes it clear that anonymity is not always a
positive aspect.
At the Lrnlab Course Website, Faculty of Education and Social Work, The University of
Sydney, one finds a report by Paul Love (2003, p6) in which he states:
“A further note that I would add to the benefits of online synchronous
communication is that the concept of anonymity … can help increase the students’
opportunity for participation. Discussions where students can log on as someone
anonymous can enable them to risk ideas that they might feel uncomfortable
raising in person”.
This is the positive effect commonly attributed to anonymity, and is well understood
by all people – perhaps one should say all people living in a democratic society. One
student participating in an on-line computer science course at the Open University of
Israel (Gal-Ezer and Lanzberg, 2003) said that the “partial anonymity” gave him the
courage to ask questions and actively participate in the tutorial. We thus see degrees or
levels of anonymity which may be usefully implemented by our Learning Management
Systems.
Of course, whatever we choose to implement, we need to have good control over access
levels and anonymity. As Lennon and Maurer (2003, p1252) point out “In chat groups and
forums, the system also provides levels of anonymity. This can be of considerable
advantage when there are discrimination problems in the class”. The matter is not so
simple as at first glance. In the recent report of the “DELPHI” project (Barajas, 2004, p13)
the advice is that “pedagogues need special training for online-education. They must
especially be qualified in knowing how to:
• decrease anonymity and to establish the atmosphere of a learning community;
• motivate and keep the motivation of learners high;
• avoid student frustrations;
• establish and maintain interaction among students, between teacher and students
and between the user and the system; and how to
• moderate discussions.”
Levels of Anonymity
In their 1995 paper, Flinn and Maurer provide a comprehensive analysis of the
anonymity issue as it pertains to networked computer systems – what we today may refer
to as Computer Mediated Communications Systems, the facilities of which are present in
all eLearning systems. They propose six levels of anonymity numbered from 5 (Level 5 –
Super-identification, the strongest and akin to a secret ballot) through to Level 0 at which
there is no identification of the user (Table 1).
Flinn and Maurer (1995, p45) present a scenario of the positive role of anonymity in an
educational setting,
“where the facility to have an anonymous electronic discussion removes the
authoritarian role from the teacher or lecturer and enables the more diffident
students to advance ideas without threat. Perhaps the most useful mode here is
level 3. The teacher may wish to review or assess the degree and quality of
statements and ideas expressed by participants, and in order to do this one needs
access to the system records to link pseudonyms to actual student ids. One very
useful aspect of being able to use multiple pseudonyms comes into play in this
example; the teacher (or any other participant) is able to present several different
viewpoints or sides of an argument using different pen-names. We feel that this is
particularly valuable in an educational setting: it would appear a useful skill to be
able to look at an argument, without being forced to be identified with or even to
strongly hold a particular point of view. What is important is being able to marshal
the appropriate facts to support or cast doubt on a particular hypothesis.”
Wallace (1999) in her book The Psychology of the Internet suggests a continuum of
effects on our behaviour. The anonymity variable leads to dis-inhibition at one end of the
continuum – “a lowering of the normal social constraints on behaviour. It is not an all or
nothing variable, especially on the Internet, but we feel more or less anonymous in
different Internet locales, and this affects the way we act.” (p.9). Whilst this may be more
realistic than a six-point scale such as that suggested by Flinn and Maurer, the latter lends
itself well to implementation in LMSs, and may provide a suitable level of granularity for
most purposes.
Table 1 – The six levels of anonymity

Level 5. Super-identification: The user is authenticated and identified in a completely secure way – this implies zero anonymity.
Level 4. Usual identification: The user is known within the system by a user-name and password which must be provided prior to admission to the system.
Level 3. Latent (potential) identification: The user is known as a person to the system and may develop and use a set of unique pseudonyms.
Level 2. Pen-name identification: The user is known within the system by some user-name and password, but there is no proper identification of the user as a person.
Level 1. Anonymous identification: The user is identified by the system, but not as a specific individual and without pseudonym or pen-name; a user logs on anonymously (probably using a password), and the system keeps an event log.
Level 0. No identification: This is typical when using a stand-alone workstation; however it is possible for an application to log user activity, as in Level 1.

Note: The existence of a log permits profiling the (unknown) user, and can be used to gain information about user characteristics. True anonymity would require the absence of any personal history being associated with a user.
Source: compiled from Flinn and Maurer (1995)
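To illustrate how these levels might be carried inside an LMS (again a sketch of our own, not code from Flinn and Maurer or from any cited system; all names are hypothetical), the six levels can be represented as a simple enumeration, and the defining feature of Level 3 – pseudonyms which the system can, when required, map back to a real student id – as a small registry:

from enum import IntEnum

class AnonymityLevel(IntEnum):
    """The six levels of anonymity of Flinn and Maurer (1995), cf. Table 1."""
    NO_IDENTIFICATION = 0         # stand-alone use; at most an activity log
    ANONYMOUS_IDENTIFICATION = 1  # system logs events; user not a known individual
    PEN_NAME = 2                  # user-name/password, but no identification as a person
    LATENT_IDENTIFICATION = 3     # known person who may use unique pseudonyms
    USUAL_IDENTIFICATION = 4      # ordinary user-name and password login
    SUPER_IDENTIFICATION = 5      # fully authenticated; zero anonymity


class PseudonymRegistry:
    """Sketch of Level 3 (latent identification): pseudonyms are visible to other
    participants, while the system retains the pseudonym -> student id mapping so
    a teacher can later review contributions if assessment requires it."""

    def __init__(self):
        self._owner = {}  # pseudonym -> real student id

    def register(self, student_id: str, pseudonym: str) -> None:
        if pseudonym in self._owner:
            raise ValueError("pseudonym already in use")
        self._owner[pseudonym] = student_id

    def resolve(self, pseudonym: str) -> str:
        """Latent identification: the link exists, but is consulted only on demand."""
        return self._owner[pseudonym]


# A participant may hold several pen-names and argue different sides under each.
registry = PseudonymRegistry()
registry.register("student-42", "devils_advocate")
registry.register("student-42", "cautious_optimist")
print(registry.resolve("devils_advocate"))  # -> student-42

A teacher reviewing contributions would call resolve() only when assessment demands it, which is precisely the ‘latent’ character of Level 3 described above.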
The Role of Anonymous Data in Selected Studies
In an early report of using the WWW to facilitate teaching, Rada (1996) used a standard
university evaluation instrument for the anonymous rating of teachers via the Web. Out of
the total of 17 students, seven answered, yielding an average score of 4.63 (on a 5-point
scale with 5 meaning “excellent”) where the average of courses in that school was about
4. On that basis it was claimed that: “Overall, students were satisfied with the course”
(p741). Any data may be better than no data, but with roughly 60% of the responses missing
(seven of 17 students answered), it may be safer to claim that around 40% of students were
satisfied and that some proportion of the remaining 60% may have been too. Apparently there
were other factors in addition to “anonymity”
which resulted in students not responding to the questionnaire. Note also that it is
legitimate to ask about the controls which would ensure each student only responded
once – this is not reported in the paper. In such a case, if we wanted to have results of
high worth we would need to hold the anonymity variable at a level much higher than the
present Level 0 (presumably). In any event, reporting the anonymity level associated with the
study would assist readers and researchers in making appropriate use of the study
results.
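The response-rate arithmetic behind this caution can be made explicit (a back-of-the-envelope calculation of our own, using only the figures reported by Rada):

# Figures reported by Rada (1996): 17 enrolled students, 7 anonymous responses,
# mean rating 4.63 on a 5-point scale, school-wide course average about 4.
enrolled, responded = 17, 7
mean_rating, school_average = 4.63, 4.0

response_rate = responded / enrolled                 # ~0.41
print(f"response rate: {response_rate:.0%}")         # -> 41%
print(f"data missing:  {1 - response_rate:.0%}")     # -> 59%

# Conservative reading: only the respondents are known to have been satisfied.
print(f"demonstrably satisfied: {responded / enrolled:.0%} of the class")
print(f"respondents' rating exceeds the school average by {mean_rating - school_average:.2f} points")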
Schrum and Lamb (1996, p721) also posted an online evaluation survey near the end of
their course in which “collaborative distance learning and training through electronic
networks and groupware” technology was deployed to provide a learning environment for
18 students. Out of these, ten provided anonymous responses, on the basis of which the
authors make appropriately cautious claims and balance this data with other feedback
such as open-ended comments. This analysis and reporting seems to be more in the
spirit of formative evaluation where the intent is incremental improvement as distinct from
Rada’s summative evaluation mentioned above.
In a second project Schrum and Lamb (1996, p727) alert the reader to the inherent
difficulties associated with evaluation, even when rather elaborate and careful measures
to get at the truth are implemented. They conclude that “Results suggest positive
outcomes for content learning and some strong suggestions for improving the structure
and process of this type of online activity” (p728). They are unable to be more definitive,
in this case because of a lack of anonymity - subjects felt they were being watched or
identified, and controlled.
In a course on teaching Hypertext and Hypermedia on the web, Paul De Bra (1996) notes
that “The teacher cannot even deduce from the server log how many students are taking
the course. Each student completing the assignment is a surprise to the teacher.” (p800).
Imagine taking a traditional course and having all the students fully anonymous – nothing
visible – no eyes, no hands, no shape, no voice-print, nothing! Teachers would surely not
proceed under such circumstances, and yet we still claim that anonymity in eLearning
scenarios is desirable. De Bra seems to be clear about the worth of data collected from
such a system, and perhaps that accounts for the absence of course evaluation based on
student opinion – it would be of low worth.
Finally, in a recent report of the use of an LMS in Austria (Dreher et al., 2004, p2597) the
claim that the “results show a high acceptance of the system by students, especially for
the purposes of submitting their contributions to the teacher, and for accessing the
feedback from the teacher” can surely only be worthy of further serious attention if we
know the anonymity conditions among other things (statistical representativeness, time of
data collection, and so on).
It is our contention that all these studies would have benefited from the inclusion of a
thorough analysis, control, and reporting of anonymity conditions. Naturally, we cannot
alter the past, but we can be guided by these insights into anonymity for the future, and it
is in that spirit in which we offer our conclusion.
Discussion
In a number of cases studied, we have noticed that a Level 0 anonymity is afforded the
respondents and yet the results derived from the analysis of such data are confidently
offered as support for the superior outcomes of the educational intervention or strategy. In
other studies the researchers have been cautious and restrained in the claims they make
based on anonymous input. We note that all researchers whose studies we have read
have proceeded with their work in good faith, and our observations in this article are
aimed at spreading the word of care and caution. In that spirit we suggest the use of
Table 2 for making explicit the anonymity, purpose, and worth parameters of eLearning
evaluations.
The construction of a device such as the Anonymity-Purpose-Worth matrix (Table 2), in
which the researchers match the purpose of their data collection with the desired or
intended worth of their results, and assign an appropriate anonymity level, may illuminate
this crucial aspect of collecting feedback in eLearning systems and help clarify the
legitimacy of conclusions being drawn from the analysis of such data.
Table 2 – Anonymity-Purpose-Worth matrix
(columns: Anonymity level | Purpose of data collection – why is the data being collected? | Worth of results – of what value are the results intended to be?)

Level 5. Super-identification (can be costly to implement and cumbersome to use)
  Purpose: provide specific, accurate, highly reliable data; e.g. leading to specifications for educational intervention affecting individuals; summative evaluation of performance
  Worth: high value; high cost associated with failure or wrong decision making

Level 4. Usual identification
  Purpose: summative evaluation of programmes and courses
  Worth: accurate and reliable advice, but perhaps not comprehensive as respondents may be identified

Level 3. Latent (potential) identification
  Purpose: for decision making, but acknowledging that further data collected at a higher level of anonymity may be needed
  Worth: potentially high, but depends on pseudonym control and respondent trust regarding confidentiality

Level 2. Pen-name identification
  Purpose: trend discovery
  Worth: indicative of trends; unsuited to underpin or justify change, e.g. where individuals may be affected

Level 1. Anonymous identification
  Purpose: informal evaluation; to highlight trends and extremes
  Worth: indicative of the need for further investigation and may be used to justify such proposals

Level 0. No identification
  Purpose: formative evaluation data is sought
  Worth: indicative of intervention; probably not generalisable; suitable for informing micro-interventions not directly affecting individuals
The Purpose and Worth columns have been populated with indications of the particular
entries designers and researchers may consider as they plan the appropriate anonymity
levels to be applied to their eLearning feedback systems.
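One way to make such documentation routine (a sketch of our own devising, not a feature of any existing LMS; the names are hypothetical) is to keep the matrix itself as data, so that every evaluation report can emit a short, standardised statement of its anonymity conditions:

# Minimal sketch: the Anonymity-Purpose-Worth matrix (Table 2) kept as data, so
# that each evaluation report carries a standardised anonymity statement.
APW_MATRIX = {
    5: ("Super-identification",
        "specific, accurate, highly reliable data; summative evaluation of performance",
        "high value; high cost associated with failure or wrong decisions"),
    4: ("Usual identification",
        "summative evaluation of programmes and courses",
        "accurate and reliable, but perhaps not comprehensive (respondents identifiable)"),
    3: ("Latent (potential) identification",
        "decision making; further data at a higher anonymity level may be needed",
        "potentially high; depends on pseudonym control and respondent trust"),
    2: ("Pen-name identification",
        "trend discovery",
        "indicative of trends; unsuited to justify change affecting individuals"),
    1: ("Anonymous identification",
        "informal evaluation; highlighting trends and extremes",
        "indicative of the need for further investigation"),
    0: ("No identification",
        "formative evaluation",
        "indicative only; probably not generalisable; micro-interventions at most"),
}

def anonymity_statement(level: int, n_invited: int, n_responded: int) -> str:
    """Produce the standardised documentation this article argues for."""
    name, purpose, worth = APW_MATRIX[level]
    return (f"Anonymity level {level} ({name}); purpose: {purpose}; "
            f"worth of results: {worth}; responses: {n_responded}/{n_invited}.")

# Example: the conditions we assume for Case 1 below (Rada, 1996).
print(anonymity_statement(level=1, n_invited=17, n_responded=7))

Used in this way, the statement travels with the reported results, and a reader can immediately see against which row of Table 2 the claimed worth should be measured.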
As an example of using the Anonymity-Purpose-Worth matrix we now consider two of the
studies mentioned previously and compare the claims in respect of the worth of the
results that we contend can be made with those that have actually been reported.
Case 1:
Teaching on the WWW: Assignment Focus and Information Indexing (Rada, 1996)
The anonymity associated with this study is unclear; Rada reports: “The standard
university evaluation form by which students rate teachers was placed onto the WWW.
Students answered anonymously. Seven students in the course completed this optional
course evaluation.” (p740).
The above statement does not permit the reader to understand whether, for example, one
anonymous student provided all of the seven feedback items, or whether each of the
seven feedback items came from individual and qualified students. It is possible that the
feedback is legitimate, but the reader is entitled to know what conditions prevailed if the
study outcomes are to be taken seriously. In the matrix for this case we have highlighted
the possible study characteristics – namely Anonymity Level 1 (assumed to be the most
likely situation given the description by Rada), and a Worth associated with a Level 0
anonymity.
Table 3 – Case 1 Anonymity-Purpose-Worth analysis
(columns as in Table 2: Anonymity level | Purpose of data collection | Worth of results)

Level 1. Anonymous identification
  Purpose: informal evaluation; to highlight trends and extremes – “In the end, what did the students think of the course? Did the teacher want to repeat the experience in another course?” (p740)

Level 0. No identification
  Worth: indicative of intervention; probably not generalisable; suitable for informing micro-interventions not directly affecting individuals – “This course had already proven a success under the conditions in which it was offered. New markets of students could be reached with this virtual mode for educational delivery. The methods and tools are not difficult to fashion.” (p742)
Thus it is safe to say that Rada’s claim that “The students rated the course as an above-
average learning experience.” (p732) is acceptably authentic in its basis and of worth for
formative and informal evaluation purposes. It is not acceptable as a summative
evaluation, which would require far higher levels of anonymity to strengthen the
plausibility of the claim. One would be rather cautious in using these results to argue for
replicating the course for many students, expecting similarly positive outcomes, or for the
purpose of justifying tenure and promotion decisions.
In addition to the anonymity factor, there are many other aspects relating to the worth
which can be placed on results, but these are beyond the scope of the present article.
Case 2:
Groupware for Collaborative Learning: A Research Perspective on Processes,
Opportunities, and Obstacles (Schrum & Lamb 1996).
Participants in a mandatory course at a military service academy were studied to
“determine the viability of using groupware to support collaborative writing and problem
solving within the context of projects that are assigned in courses that meet in a
traditional face to face model.” (p725). They were videotaped. The researchers report:
“Equally onerous for the students was the use of video cameras during the pilots and this
was mentioned by all those who were individually interviewed.” (p727).
Table 4 – Case 2 Anonymity-Purpose-Worth analysis
(columns as in Table 2: Anonymity level | Purpose of data collection | Worth of results)

Level 5. Super-identification (can be costly to implement and cumbersome to use)
  Purpose: provide specific, accurate, highly reliable data – video recording of participants – “to determine the viability of using groupware to support collaborative writing and problem solving within the context of projects that are assigned in courses that meet in a traditional face to face model.” (p725)
  Worth: Potentially the results would be of high worth, but in the study the researchers found that the lack of anonymity (i.e. the constant surveillance) was “onerous”, possibly meaning that one could not assume the observed interactions were natural.

Levels 1, 2, 3, 4

Level 0. No identification
  Worth: indicative of intervention; probably not generalisable; suitable for informing micro-interventions not directly affecting individuals – “Lessons learned from these experiences will inform future planning for using this and other groupware packages, and assist instructors in creating viable courses and assignments for both teaching and learning. Results suggest positive outcomes for content learning, and some strong suggestions for improving the structure and process of this type of online activity.” (p728)
Under such conditions the researchers were clearly not able to meet their objective of
“determining the viability of …” and were rather more cautious in their conclusion, which
in part reads “Lessons learned from these experiences will inform future planning for
using this and other groupware packages.” (p728). Such a claim might be characteristic
of a study associated with Level 0 anonymity, and with hindsight one can certainly see
that video recording was not needed to make their cautious and suggestive
claim.
Readers should note that the research in Case 2 has been analysed in part and only from
the standpoint of anonymity of feedback provided by study participants. There is of
course much more one needs to consider to make a judgement about the plausibility of
research claims in a general sense, and again, this is beyond the scope of this article.
Conclusion
Our aim has been to put before the reader a persuasive argument for careful
documentation and reporting of the anonymity characteristics associated with particular
instances of eLearning research and practice. We have created a matrix named the
Anonymity-Purpose-Worth Matrix to facilitate the documentation and analysis of study
conditions pertaining to anonymity and concomitant data collection purposes and claims
about the worth of results.
A few short examples have been given to indicate the use of the matrix in aligning the
worth which may be legitimately attached to study outcomes with the anonymity level
pertaining to the participant feedback collected during eLearning scenarios.
References
Barajas, M. (2004). DELPHI: European Observatory of e-Learning Practice, Report on
New Methods to Learning in Technology-based Environments, & Related Policy
Matters: IST and IHP projects. EU FP6 Contract No: 2002-4075/001-001 EDU-
ELEARN.
Conrad, D. (2002). Deep in the Hearts of Learners: Insights into the Nature of Online
Community. Journal of Distance Education/Revue de l'enseignement à distance.
Volume 17.1. Online: http://cade.icaap.org
De Bra, P.M.E. (1996). Teaching Hypertext and Hypermedia through the Web. Journal of
Universal Computer Science, Vol. 2, No. 12, pp. 797-804.
Dreher, H., Scerbakov, N., & Helic, D. (2004). Thematic Driven Learning. Proceedings of
E-Learn 2004 Conference, pp. 2594-2600. AACE. Washington DC, USA,
November 1-5, 2004.
Flinn, B., & Maurer, H. (1995). Levels of Anonymity. Journal of Universal Computer
Science, Vol. 1, No. 1, pp. 35-47.
Gal-Ezer, J., & Lanzberg, D. (2003). Using synchronous and asynchronous online
learning in computer science courses. Proceedings of the 33rd ASEE/IEEE
Frontiers in Education Conference, session T3F1, November 5-8, Boulder,
Colorado.
Lennon, J. & Maurer, H. (2003). Why it is Difficult to Introduce e-Learning into Schools
And Some New Solutions. Journal of Universal Computer Science, Vol. 9, No. 10,
pp. 1244-1257.
Love, P. (2003). Can a chat session accomplish and achieve the finer points of theoretical
argument without having the face-to-face stimulants and reactions that are readily
apparent in a traditional classroom? Lrnlab Course Website, Faculty of Education
and Social Work, The University of Sydney, http://lrnlab.edfac.usyd.edu.au/topics
Rada, R. (1996). Teaching on the WWW: Assignment Focus and Information Indexing.
Journal of Universal Computer Science, Vol. 2, No. 10, pp. 732-743.
Schrum, L. & Lamb, T.H. (1996). Groupware for Collaborative Learning: A Research
Perspective on Processes, Opportunities, and Obstacles. Journal of Universal
Computer Science, Vol. 2, No. 10, pp. 717-731.
Wallace, P.M. (1999). The psychology of the Internet. Cambridge, UK: Cambridge
University Press.