CritViz: A Network Peer Critique Structure for Large Creative Classrooms
John Sadauskas
Mary Lou Fulton Teachers College/
School of Arts, Media & Engineering
Arizona State University
United States
john.sadauskas@asu.edu
David Tinapple
School of Arts, Media & Engineering
Arizona State University
United States
david.tinapple@asu.edu
Loren Olson
School of Arts, Media & Engineering
Arizona State University
United States
loren.olson@asu.edu
Robert K. Atkinson
School of Computing, Informatics & Decision Systems Engineering/
Mary Lou Fulton Teachers College
Arizona State University
United States
robert.atkinson@asu.edu
Abstract: CritViz is an experimental web-based tool created to support peer critique in large classrooms. Critique sessions are an effective tool in classrooms where students’ work is not clearly “right” or “wrong” and where students need detailed critical feedback in order to
clarify their direction. Face-to-face critiques work well in smaller groups where all can have their
work seen and discussed. CritViz’s development has been driven by the question of whether a
critique can be scaled up to larger classes of 100 students or more. Through CritViz’s countdown
timers, assignment uploading, and randomized peer ranking, we can effectively scale the use of
critiques to much larger classes and change the “motivational structure” of the students who use it,
resulting in increases in student learning, motivation, and reflection on work. Folk wisdom
suggests that when it comes to class size, “Smaller is Better,” but our tool points toward
possibilities for increasing class sizes with little negative impact, even leading to a “Bigger is
Better” phenomenon in the classroom.
Introduction
The quality and aesthetic value of works such as digital art, music, and photography are subjective. Thus, in
classrooms where student work is creative and does not necessarily have “right” and “wrong” answers, it is often
beneficial for students to receive feedback from multiple perspectives, rather than just from a single instructor.
Accordingly, a common practice in such classrooms is the peer critique, where classmates offer constructive
feedback on each other’s work in person. This works well in classes of 20 or fewer students, but does not scale well to
large classes. Faced with the challenge of teaching digital art courses of 50+ students, we sought a solution for
implementing peer critiques in these large classes to offer students quality feedback on their work. This ultimately
led to developing CritViz, an online peer feedback system specifically designed to scale to large classes of any size.
In implementing this system in our classes, we have found that CritViz can not only effectively manage large-class
critiques, but also impacts student learning and motivation by cultivating social connections between students and
providing a self-curated bank of exemplar projects. Large classes are typically viewed as impersonal and not serving
students on the individual level—essentially the notion that smaller classes are “better” than larger classes.
However, we have seen CritViz foster a large, active community of learners, implying the possibility that “bigger”
classrooms can be “better.” In this paper, we will first outline the rationale for creating CritViz, then provide an
overview of how CritViz works, and finally present initial findings regarding the impact it has had on the students’
learning, motivation, and reflection on their work.
Background & Rationale
This section will outline the rationale behind CritViz, including discussion of the current state of online education,
educational implications of social media’s popularity, the nature of traditional in-person critiques, and the challenges
we addressed in scaling such interactions to our large classrooms.
Higher Education Goes Online
Higher education is changing rapidly, and due to online content and assignment management systems such as
Blackboard, essentially all current college classes are at least partially online. In addition to purely online courses,
more and more in-person courses are becoming hybrids in order to extend limited class time by asking students to
participate in online activities such as posting to discussion boards. Furthermore, despite their controversy,
“massive, open online courses” (MOOCs) are steadily growing in popularity (Bombardieri & Landergan, 2013;
Lewin, 2012). Thus, whether a course is as small as 20 students or a MOOC serving several thousand, instructors
and students are increasingly facing the challenges of fostering educational experiences similar to in-person classes
in an online form.
Social Media in the Classroom
Perhaps the appeal of MOOCs is due in part to the current success and pervasiveness of social media. Students are
already accustomed to communicating through web-based media and do so daily (Lenhart, Purcell, & Zickuhr,
2010). Furthermore, in contrast to a traditional classroom environment where the only evaluator is a teacher, today’s
“digital natives” have become accustomed to sharing their thoughts and ideas with a “real” audience of peers who
offer feedback through comments, likes, etc. (Atkins, 2011; Kitsis, 2008; Pascopella & Richardson, 2009; Prensky,
2001). In fact, social media research has even demonstrated that peer feedback received via social media can impact
one’s self-esteem (Ellison, Steinfield, & Lampe, 2007; Gentile, Twenge, Freeman, & Campbell, 2012; Steinfield,
Ellison, & Lampe, 2008). Findings such as these have spawned major interest among educators in how to learn
from, integrate, and leverage social media to create a better classroom (Aydin, 2012). This fascination with all types
of social networking tools may stem from one central property common to social networks, and alien to the
classroom: with social network tools, bigger means better. More users of social media platforms such as Facebook,
Twitter, and Google+ mean more options, more connections, more data, and more richness for that network’s users.
However, social media’s appeal to educators does not stem from the excitement of using any specific platform in the
classroom—they simply want to repurpose the engagement that social media brings for educational purposes. In
opposition to the “bigger is better” of social networks, in the classroom, larger class sizes generally lead to less
attention and bandwidth from instructors, more anonymity, and less peer interaction. Essentially, in classrooms,
“bigger is worse.” Yet, if there is a way to flip this equation and make classes improve as the number of students
increases, there is potential to improve the quality of teaching and learning for large numbers of people.
Implementing “Bigger is Better” in the Classroom
One particular challenge in a large, digitally-mediated classroom is for an instructor to offer quality feedback to a
large number of students. The standard classroom model where a single instructor offers feedback to all students
does not scale well to large classes; the larger the class, the more inundated with grading the instructor becomes.
While factual knowledge can be assessed with multiple-choice machine grading, the challenge lies in offering
feedback on work such as open written responses and creative projects. Although considerable research has explored
the use of machine-graded writing (Vojak, Kline, Cope, McCarthey, & Kalantzis, 2011) and many online courses
(particularly MOOCs) are experimenting with such possibilities (Markoff, 2013), machine-graded writing—in its
current state—is still no substitute for a human grader, as such systems can often be “gamed” with grammatically
correct, yet nonsensical content (Vojak et al., 2011). Additionally, offering feedback on creative works such as digital art, photography, and film presents a challenge, as such products can only truly be evaluated by a human. In the typical in-person
creative classroom, feedback is delivered via a formal critique.
The Power of Critique
The “crit session” is well known to anyone who attends art school, but critiques are used as a form of feedback in many other creative classrooms, including design, architecture, journalism, writing, and, increasingly, engineering and the sciences.
Teachers use critique for a number of reasons. With many kinds of creative work, it’s not enough to simply receive a
grade as the primary mode of feedback. What is needed in order for a student to improve is critical advice on what
aspects of their work are successful or not, and why. The critique does more than simply offer feedback—it does so
in full view of and with participation of peer students who are all working to learn the same skills. Peer critique
makes every student’s work, struggles, and effort visible and visceral to every other student. What a critique offers in
addition to feedback and pressure is practice at the important skills of giving and receiving criticism. These soft
skills are valuable, and come naturally to almost no one. Finally, a critique session can provide valuable feedback to
the instructor or teacher, giving a clear sense of how course material is being received in general, and allowing for
rapid retooling and reteaching of course instruction.
The Challenge of Digital Culture Classes
At Arizona State University, we recently created a new undergraduate major called “Digital Culture,” which aims to
combine technical and creative skills into a flexible curriculum geared toward preparing students to be cultural
producers of the 21st century (game designers, musicians, interface designers, inventors, scientists, engineers, makers, and hackers). Our physical classrooms consist of large reconfigurable “maker labs” where we
commonly have 100 students in the same room, learning hands-on to make creative digital work.
Although students do still receive traditional letter grades, individual assignments rarely have clear “right” or
“wrong” answers. In addition to technical skill, we focus on experimentation, discussion, critical thinking, invention
and creative risk-taking. In this setting, making mistakes is often the very best way to learn. Peer critique would be
ideal for structuring the feedback process in these classes, except the size of the class presents several logistical
problems, including the challenge of finding enough reviewers for each student’s work, distributing that work to
those reviewers, and getting reviewer feedback to the author in an orderly fashion. Additionally, taking the time to
conduct peer reviews for so many students during class time would leave little room for other class activities, and
little time for reflecting on student work before giving feedback to the author.
Online Peer Critique
With these issues in mind, using online tools to help facilitate large critiques was an obvious direction to explore.
Computational tools have proven effective in orchestrating the paper shuffle and assigning reviewers to work
automatically and instantly delivering feedback to authors (Goldin, Ashley, & Schunn, 2012), particularly in large
classes. Furthermore, conducting peer reviews online not only frees up in-person class time, but also allows the
reviewer more time to reflect on the work and construct quality feedback than a candid response shared minutes
after viewing a piece for the first time (Sadauskas, Byrne, & Atkinson, 2013). An additional affordance of an online
peer review system is the ability to anonymize work as it is being reviewed by peers, placing focus on the work
itself and not allowing the author’s name to color the reviewer’s response.
Although we found existing technology for sharing and turning in students’ digital projects (e.g. uploading files to Blackboard, blogs, message boards, email attachments), these tools were less than optimal for both instructor and peer evaluation, as they all seemed to cause more cognitive overhead in the form of complexity, message overload,
and student confusion. We decided to build an experimental online tool specifically intended for the purpose of
facilitating large classroom critique sessions, the product of which is CritViz (Tinapple, Olson, & Sadauskas, 2013).
CritViz: An Overview
CritViz was designed and implemented by two Digital Culture instructors whose initial motivation for creating it
was the need to use it immediately in the classroom. For this reason, the system was intentionally kept as brutally simple as possible, and the primary intention for the software was (and still is) to do one thing well: support peer
critique in a large creative classroom. In the following section, we will outline the structure and flow of the CritViz
framework, from the creation of assignments by an instructor to the randomized peer critiques by students, to the
calculation of the full class rankings.
Instructor Creates the Assignment
CritViz revolves around assignments. The instructor creates an assignment, which consists of a title, description,
publication date/time, response deadline date/time, critique deadline date/time, and critique instructions. Text fields such as “description” and “critique instructions” support the lightweight markup language Markdown (http://daringfireball.net/projects/markdown/). The
description is used to indicate the assignment’s intent and instructions for completion. The publication date indicates
when students will be able to see the assignment. Prior to the publication date, only the instructor and teaching
assistant are able to view the assignment.
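For concreteness, a minimal sketch of the assignment record as just described (the field and method names are illustrative, not CritViz’s actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Assignment:
    """Illustrative assignment record; field names are ours, not CritViz's."""
    title: str
    description: str             # Markdown; states the assignment's intent
    publish_at: datetime         # when students can first see the assignment
    response_deadline: datetime  # when uploads close and critiques are created
    critique_deadline: datetime  # when written critiques are due
    critique_instructions: str   # Markdown; frames the expected feedback

    def visible_to_students(self, now: datetime) -> bool:
        # Before the publication date, only the instructor and TA can view it.
        return now >= self.publish_at
```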
Instructor Adds Questions to the Assignment
An assignment also has one or more questions. These questions are elements of the assignment which the students
must answer in order for the assignment to be considered “complete.” Only students that complete an assignment are
included in the critique phase. A question consists of a description and a type. We have implemented several types
of questions.
text: a simple text box is provided for the student to type a text answer.
file: the student can upload a file. Image files of certain types (jpg, png, gif) are automatically recognized
and displayed inline. We encourage the use of a zip archive as a method to upload a project that consists of
more than one file.
YouTube: the student provides a YouTube video id, and the YouTube video is displayed embedded on the CritViz response page.
processing: the student uploads a zipped Processing.js (http://processing.org) “sketch.” This sketch, and its source code, can then be displayed inline on the CritViz response page.
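A minimal sketch of how these question types and the inline-image rule might be represented (names are illustrative, not CritViz’s actual code):

```python
from enum import Enum

class QuestionType(Enum):
    """Question types as described above; names are illustrative."""
    TEXT = "text"              # simple text box for a typed answer
    FILE = "file"              # uploaded file; zip recommended for multi-file work
    YOUTUBE = "youtube"        # YouTube video id, embedded on the response page
    PROCESSING = "processing"  # zipped Processing.js sketch, displayed inline

INLINE_IMAGE_EXTENSIONS = (".jpg", ".png", ".gif")

def shows_inline(filename: str) -> bool:
    # Image files of recognized types are automatically displayed inline.
    return filename.lower().endswith(INLINE_IMAGE_EXTENSIONS)
```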
Students do the Work
As soon as an assignment has reached its “publish date”, it becomes visible to all students in the class and appears on their screens as an “open” assignment. While the assignment is open, students can upload their work, but cannot see
what other students have uploaded. If students want to make changes to what they have uploaded they may re-
upload as long as the assignment is open. “Open” is indicated on screen by a green color, and also by the presence of
a countdown timer associated with that assignment. The countdown timer is an actively ticking clock, showing
precisely how many days, hours, minutes and seconds remain before the assignment “closes”. We decided to
implement a countdown timer in addition to the due date/time, not only to add a sense of urgency, but because often
students (and sometimes professors) are not very good at managing longer term deadlines. The addition of a
countdown has been surprisingly effective in helping keep students accountable for managing their time. The timer, much like a NASA countdown before the launch of a rocket or shuttle, implies that all of the work, all of the effort
toward completing the assignment is just the beginning (the launch) of the “mission,” which is ultimately to give and
receive criticism, improve skills and proficiencies, and to help others improve. When the CritViz countdown timer
reaches zero, the assignment closes and students may no longer upload their work. They are then automatically
assigned to critique each other’s work, and begin the critique process. Through informal student feedback, we have
found that the deadline seems more authentic to students, less about an arbitrary moment in time, and more about a
shared, irreversible moment at which critiques are assigned. It does not eliminate late work, but it appears
to reduce it significantly.
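For illustration, a sketch of the remaining-time computation behind such a timer (the display format is our approximation, not CritViz’s exact rendering; the deadline is assumed to be timezone-aware):

```python
from datetime import datetime, timezone
from typing import Optional

def countdown_display(deadline: datetime, now: Optional[datetime] = None) -> str:
    """Render the time remaining before an assignment closes, in the
    days/hours/minutes/seconds form the on-screen timer shows."""
    now = now or datetime.now(timezone.utc)
    remaining = deadline - now
    if remaining.total_seconds() <= 0:
        return "closed"  # the assignment closes; uploads are no longer accepted
    hours, rest = divmod(remaining.seconds, 3600)
    minutes, seconds = divmod(rest, 60)
    return f"{remaining.days}d {hours:02d}h {minutes:02d}m {seconds:02d}s"
```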

“Create Crits”
At the point when an assignment “closes,” the instructor is presented with the option to start the critiques. When the “create crits” button is pressed, the system first creates a list of all students who successfully completed the assignment; each student in the list is then assigned five other students from the list to critique. The critique
assignments are randomized such that each student critiques five others, and is critiqued by another five.
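The paper does not specify the assignment algorithm; a shuffled circular assignment, sketched below, is one simple scheme that satisfies the stated constraints (every completing student critiques five peers, is critiqued by exactly five, and never reviews their own work):

```python
import random

def create_crits(completed: list[str], k: int = 5) -> dict[str, list[str]]:
    """Assign each student who completed the assignment k peers to critique,
    so that every student critiques k others and is critiqued by exactly k.
    A shuffled circular assignment is one scheme that meets that constraint."""
    if len(completed) <= k:
        raise ValueError("need more than k completing students")
    order = completed[:]
    random.shuffle(order)
    n = len(order)
    # The student at position i critiques the k students that follow them in
    # the shuffled circle; in-degree and out-degree are therefore both k.
    return {order[i]: [order[(i + j) % n] for j in range(1, k + 1)]
            for i in range(n)}
```

Because the offsets run from 1 to k, a student is never assigned their own work, and the shuffle keeps the pairings unpredictable from one assignment to the next.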
After the critiques have been assigned, students see that the critique phase of the assignment is now “open” and they
see the five pieces of work they are supposed to critique, along with another countdown timer now associated with
the deadline for the critiques. Students see only the list of five works, not the identities of the works’ authors. They
also see “critique instructions” written by the instructor, intended to help them frame and organize their critical
feedback, and a text field where they are to write the kinds of critical feedback indicated. These text fields are not length-limited and can expand from a few sentences to several pages if necessary.
While we have experimented with several different critique formats, the most-used and most successful format has
been what we call “rank order random 5”. In this kind of critique, students not only provide written feedback to the five randomly assigned students, but also put those five pieces of work into ranked order from strongest to weakest in
that group. This ranking—as opposed to a more traditional rating system—asks the students to make clear-cut decisions about their peers’ work.
Crits End, “Calculate Rank”
When the critique countdown timer ends, the critiques are over and the instructor can press the “calculate ranks”
button, which computes the average of all five received ranks for each student. This composite rank is used to infer a
global class ranking list. This full class ranking is then visible to all students in the class, as are all the written
critiques. This allows students to see who is performing best in the class on assignments, and to read the critical feedback to understand why each critic ranked each piece of work as they did. While averaging is a rather crude way of using the matrix of rank data, it is effective and, importantly, easy to understand. We are actively researching the use of
more complex analyses of the rank data including Markov chains and Monte Carlo methods and are seeing
promising initial results. At the same time we are looking for ways to make these analyses transparent and
understandable by the students, so that they retain their authenticity and credibility.
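A minimal sketch of the averaging step described above (names are ours; a lower composite rank indicates stronger work):

```python
from statistics import mean

def calculate_ranks(received: dict[str, list[int]]) -> list[tuple[str, float]]:
    """Average the ranks each student received from peer critics (1 = strongest,
    5 = weakest within a critic's group) and sort into the full-class ranking."""
    composite = {student: mean(ranks) for student, ranks in received.items()}
    return sorted(composite.items(), key=lambda item: item[1])

# e.g. calculate_ranks({"a": [1, 2, 1, 3, 2], "b": [4, 3, 5, 4, 4]})
# -> [("a", 1.8), ("b", 4.0)]   (lower composite rank = stronger work)
```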
Finally, based on the ranks that everyone receives, a full-class ranking is calculated and displayed for the students,
essentially showing a “leaderboard” indicating which students received the highest marks from peers. In addition to
students indicating that ranking is authentic and motivating (Tinapple et al., 2013), students are also later able to use
these rankings as a resource, as they know where to look for both excellent and poor examples of work.
The Use of Anonymity
One question we have grappled with in designing CritViz is: When is it important to display students’ names to each other, and when is it important for students to be anonymous? Currently, the only use of anonymity comes in the
critique phase. When a student is asked to critique five others, they initially see only the work without attribution
(placing the critique focus on the work itself and not its author). Then, after the critiques are complete, the author
names become visible, as do the names of the reviewers, and these names remain visible as new assignments are
added. While future work will explore other uses of anonymity, the current iteration is intended to foster unbiased critiques while still asking students to stand behind their words and works.
Research Questions & Hypotheses
Having implemented CritViz in several courses, we have observed a change in the “motivational structure” in
students who use CritViz (Tinapple et al., 2013). Sharing work with an audience outside of a single instructor
creates an authentic environment and “elevates” student effort, urging students to turn in work they are proud of.
CritViz has also become a valuable student resource for viewing other students’ work as examples to generate ideas
for future work. Furthermore, CritViz has enhanced the feeling of community and camaraderie in large classes.
Essentially, students seem to be more motivated and reflective than students in our classes prior to CritViz’s
implementation, and also seem to produce a higher quality of work. With this in mind, during the Spring 2013
semester, we conducted a study guided by the following research questions:
1) Does CritViz increase student motivation?
2) Does CritViz increase students’ reflection on their work?
3) Does CritViz increase student learning?
Based on the effects we have observed as instructors, we hypothesized that CritViz does, in fact, increase
motivation, reflection, and learning.
Method
Eight instructors used CritViz at Arizona State University during the Spring 2013 semester. Each instructor taught one class, for a total of eight classes and 302 students. In order to address the above research
questions, near the end of the semester, these students were invited via email to participate in an online survey
regarding CritViz. Additionally, for further insight into students’ CritViz use, we explored the site’s analytics. These
two activities are further described below.
Online Survey
The online survey was completed by 42 students (35 undergraduate students and 7 graduate students). These
participants ranged in age from 18 to 37 years, with a mean age of 23.17 (SD = 4.34), and came from five of the
eight Spring 2013 courses (Media Installations, Programming for Media Arts, Digital Art and Culture, Engineering
Business Practice for Engineers, and Engineering Mechanics II). The survey included a set of statements about
CritViz with Likert-scaled responses. The Likert statements included items about student motivation (e.g.
“Receiving peer feedback increased my motivation in the class”), reflection (e.g. “I have gone back to look at past assignment critiques”), and learning (e.g. “Critiquing my peers’ work helped me learn”). Likert responses were on a five-point scale: 1 - strongly disagree; 2 - disagree; 3 - neutral; 4 - agree; and 5 - strongly agree.
CritViz Site Analytics
Additionally, we analyzed the site analytics (Google Analytics) for CritViz to determine how often students were viewing and reflecting on the work posted on CritViz—both their own work and the work of their peers. The
primary data source for this analysis was pageviews—that is, the number of views for each unique web page within
CritViz.
Results & Discussion
Overall, findings were not only consistent with our hypotheses, but also supported prior findings related to CritViz use,
providing evidence that CritViz does increase student motivation, reflection, and learning.
Online Survey
Survey data indicated that a majority of respondents felt CritViz had increased their motivation, reflection, and
learning in their courses. As shown in Table 1, descriptive statistics indicate that students tended to “agree” or
“strongly agree” with eight statements regarding motivation, reflection, and learning.
Research Question 1: Does CritViz increase student motivation?
As shown in Table 1, results indicate that students who have used CritViz enjoy both giving and receiving feedback,
and that this feedback does tend to increase students’ motivation in their classes. Consistent with prior findings
(Tinapple et al., 2013), an additional feature that appears to impact student motivation is the countdown timer, which
encourages them to complete work on time.
Survey Questions M SD
MOTIVATION
I enjoy receiving peer feedback about my work. 4.31 0.64
I enjoy giving peer feedback to others. 3.95 0.70
Receiving peer feedback increased my motivation in the class. 3.48 0.94
The countdown timers encouraged me to get my assignments and critiques in on time. 3.64 1.08
REFLECTION
I read the feedback provided to me. 4.14 0.93
I have gone back to look at past assignment critiques. 3.55 1.06
LEARNING
Receiving feedback from others is helpful in completing my work. 4.02 0.68
Critiquing my peers' work helped me learn. 3.79 0.81
Table 1: Descriptive Statistics for CritViz Online Survey Responses
(Note: a score of 1 = “strongly disagree” and 5 = “strongly agree”)
Research Question 2: Does CritViz increase students’ reflection on their work?
The survey also indicated that students do, in fact, read the feedback provided to them by peers, and that they even
take the initiative to look back at past assignments, again confirming prior findings that students do use CritViz as a
resource for reflecting on their work (Tinapple et al., 2013).
Research Question 3: Does CritViz increase student learning?
Finally, respondents reported that the feedback received from others was helpful in completing their work and that
they feel critiquing their peers’ work helped them learn, providing support for the notion that CritViz enhances the
feeling of community in large classrooms through social connection and shared ideas (Tinapple et al., 2013).
CritViz Site Analytics
As mentioned above, we analyzed the CritViz site analytics to quantify how critiques affect the extent to which students reflect upon their own work and the work of their peers. Table 2 below shows data from the Spring 2013 semester.
Assignment Type            Number of Student Responses    Average Pageviews
Critique assignment        1429                           22.7
Non-critique assignment    2571                           6.8
Table 2: Pageviews on critique assignments vs. non-critique assignments
We see that responses in critiqued assignments are viewed more than three times as often as responses in non-critique assignments. Several factors may contribute to this effect. First, there will naturally be
additional pageviews from the students that are performing critiques. Critics might even view responses more than
once while deciding on critique ranks. Students may be motivated to review and revise their work more often when
they know it will be viewed and critiqued by peers. Additional pageviews will also come from students that return to
view and reflect upon the feedback from peer critiques. Finally, some of the additional pageviews come from other
students in the class. Doing the critiques and seeing the ranks makes students curious to view more work by other
students in the class. Further, we find students are most likely to view the highly rated responses. For example, the
most viewed response of the semester received 243 pageviews. This response was the highest rated work in an
assignment for the class Engineering Mechanics II. The most viewed responses are top-ranked responses in critique
assignments. Overall, these results provide further support for our hypothesis that CritViz increases the amount of
reflection on student work.
Future Work
CritViz began as an experiment but is rapidly gaining traction in classrooms at ASU. It is also being refined for
other educational settings, including K-12 schools, where we are piloting CritViz in a high school writing classroom.
A major area of research interest is the analysis and use of rank ordering as a peer evaluation method. While rubrics are a time-honored evaluative method, reviewers must be trained in their use to be effective (Goldin & Ashley, 2012), a practice employed in numerous computational peer feedback tools (Vojak et al., 2011). However, such training
takes time. CritViz employs an intuitive rank-order system which does not require review of an external rubric.
Accordingly, work is being done to determine how a rank-order peer review approach measures up against rubric-
based peer review. Initial tests show that more sophisticated statistical analyses (Markov chain Monte Carlo and weighted random walks) of the sparse matrix of rank orderings produced by a class on an assignment can yield a wealth of valuable inferences about students’ performance, not only on the assignment itself, but also as reviewers of other students’ work.
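As a hedged illustration of the kind of analysis described here, the sketch below aggregates rank orderings with a damped random walk over a pairwise preference matrix, in the spirit of PageRank-style rank aggregation; it is one plausible approach, not CritViz’s actual method:

```python
import numpy as np

def random_walk_scores(rank_lists: list[list[str]], students: list[str],
                       damping: float = 0.85, iterations: int = 200) -> dict[str, float]:
    """Score work via a weighted random walk over pairwise preferences.
    Each element of rank_lists is one critic's group of works, strongest
    first. The walk steps from a work to works ranked above it, so
    probability mass accumulates on consistently strong work. This is a
    sketch of one standard approach, not CritViz's actual analysis."""
    index = {s: i for i, s in enumerate(students)}
    n = len(students)
    prefs = np.zeros((n, n))
    for ranking in rank_lists:
        for pos, stronger in enumerate(ranking):
            for weaker in ranking[pos + 1:]:
                prefs[index[weaker], index[stronger]] += 1.0  # weaker -> stronger
    # Row-normalize into a transition matrix; works that never lost a
    # comparison (zero rows) transition uniformly.
    row_sums = prefs.sum(axis=1, keepdims=True)
    transition = np.where(row_sums > 0, prefs / np.maximum(row_sums, 1e-12), 1.0 / n)
    # Damped power iteration for the stationary distribution.
    pi = np.full(n, 1.0 / n)
    for _ in range(iterations):
        pi = (1.0 - damping) / n + damping * (pi @ transition)
    return {s: float(pi[index[s]]) for s in students}
```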
Conclusion
We believe the future of grading is collaborative, social, transparent, statistical and real-time. While opaque grading
systems lead to students logically trying to get the best grade for the least effort, transparency through feedback from
the instructor and peers leaves no room to “disagree” with grades, “game” the grading system, or cheat. In fact, in
creative classrooms, such transparency can encourage hard work, time investment, risk-taking, and creative thinking. We have found
that using CritViz in our own classrooms results in a dynamic, highly motivational experience for students that also
fosters both learning and reflection. Knowing they have an authentic audience interested in helping them improve
their work makes the educational experience less about grades and more about honing skills that students find
valuable. By regularly sharing their assignments through CritViz and receiving a steady stream of quality feedback,
students in large classes become part of a learning community whose members share their struggles and triumphs in ways
similar to a smaller, more traditional critique-based environment. Although tradition suggests that “Smaller is
Better” when it comes to classrooms, our work with CritViz points toward the possibility of erasing some of the
negative effects of having a very large classroom and allowing professors to increase the sizes of their classes with
little negative impact, leading to “Bigger is Better” in the classroom.
References
Atkins, J. (2011). Reading and Writing with Purpose: In and Out of School. The English Journal, 101(2), 12–13.
Aydin, S. (2012). A review of research on Facebook as an educational environment. Educational Technology
Research and Development, 1–14. doi:10.1007/s11423-012-9260-7
Bombardieri, M., & Landergan, K. (2013, May 22). 15 schools join online classroom initiative. The Boston Globe.
Boston, MA, US. Retrieved from http://www.bostonglobe.com/metro/2013/05/21/edx-pioneer-online-courses-
founded-harvard-and-mit-doubles-size/VmMGcNYD37lLuVWkyKDwCK/story.html
Calibrated Peer Review: Web-based Writing and Peer Review. (n.d.). Retrieved December 4, 2012, from
http://cpr.molsci.ucla.edu/
Ellison, N. B., Steinfield, C., & Lampe, C. (2007). The Benefits of Facebook “Friends:” Social Capital and College
Students’ Use of Online Social Network Sites. Journal of Computer-Mediated Communication, 12(4), 1143–
1168. doi:10.1111/j.1083-6101.2007.00367.x
Gentile, B., Twenge, J. M., Freeman, E. C., & Campbell, W. K. (2012). The effect of social networking websites on
positive self-views: An experimental investigation. Computers in Human Behavior, 28(5), 1929–1933.
doi:10.1016/j.chb.2012.05.012
Goldin, I. M., & Ashley, K. D. (2012). Eliciting formative assessment in peer review. Journal of Writing Research,
4(2), 203–237.
Goldin, I. M., Ashley, K. D., & Schunn, C. D. (2012). Redesigning Educational Peer Review Interactions Using
Computer Tools: An Introduction. Journal of Writing Research, 4(2), 111–119.
Han, A. (2012). Using Calibrated Peer Review to Encourage Writing. In P. Smith (Ed.), Proceedings of the 2012
Association of Small Computer Users in Education (ASCUE) Summer Conference (pp. 23–32). Vancouver,
BC, Canada.
Kitsis, S. M. (2008). The Facebook Generation: Homework as Social Networking. The English Journal, 98(2), 30–
36.
Lenhart, A., Purcell, K., & Zickuhr, K. (2010). Social Media & Mobile Internet Use Among Teens and Young
Adults. Washington, DC, USA. Retrieved from http://pewinternet.org/Reports/2010/Social-Media-and-Young-
Adults.aspx
Lewin, T. (2012, November 19). College of Future Could Be Come One, Come All. The New York Times. New
York, NY, USA. Retrieved from http://www.nytimes.com/2012/11/20/education/colleges-turn-to-crowd-
sourcing-courses.html
Markoff, J. (2013, April 4). Essay-Grading Software Offers Professors a Break. The New York Times. New York,
NY, USA. Retrieved from http://www.nytimes.com/2013/04/05/science/new-test-for-computers-grading-
essays-at-college-level.html
Pascopella, A., & Richardson, W. (2009). The New Writing Pedagogy: Using social networking tools to keep up
with student interests. District Administration, 45(10), 44–50.
Prensky, M. (2001). Digital Natives, Digital Immigrants Part 1. On the Horizon, 9(5), 1–6.
doi:10.1108/10748120110424816
Robinson, R. (2001). Calibrated Peer Review: An Application to Increase Student Reading and Writing Skills. The
American Biology Teacher, 63(7), 474–480. Retrieved from http://www.jstor.org/stable/4451167
Sadauskas, J., Byrne, D., & Atkinson, R. K. (2013). Toward Social Media Based Writing. In A. Marcus (Ed.),
Proceedings of the 15th International Conference on Human-Computer Interaction - DUXU/HCII 2013, Part
II, LNCS 8013 (pp. 276–285). Las Vegas, NV, USA: Springer.
Steinfield, C., Ellison, N. B., & Lampe, C. (2008). Social capital, self-esteem, and use of online social network sites:
A longitudinal analysis. Journal of Applied Developmental Psychology, 29(6), 434–445.
doi:10.1016/j.appdev.2008.07.002
Tinapple, D., Olson, L., & Sadauskas, J. (2013). CritViz: Web-Based Software Supporting Peer Critique in Large
Creative Classrooms. Bulletin of the IEEE Technical Committee on Learning Technology, 15(1), 29–35.
Vojak, C., Kline, S., Cope, B., McCarthey, S., & Kalantzis, M. (2011). New Spaces and Old Places: An Analysis of
Writing Assessment Software. Computers and Composition, 28(2), 97–111.
doi:10.1016/j.compcom.2011.04.004