Computers and Education Open 3 (2022) 100087
Available online 27 April 2022
2666-5573/© 2022 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
The influence of social network prestige on in-service teachers' learning outcomes in online peer assessment
Ning Ma a,b,*, Lei Du b,c, Yao Lu b, Yi-Fan Sun b
a Advanced Innovation Center for Future Education, Beijing Normal University, Beijing, 100875, People's Republic of China
b School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, 100875, People's Republic of China
c Shenzhen Longhua Songhe School, Shenzhen, Guangdong, 518109, People's Republic of China
ARTICLE INFO
Keywords:
distance education and online learning
teacher professional development
evaluation methodologies
informal learning
learning communities
pedagogical issues
ABSTRACT
Online peer assessment has been widely applied in online teacher training. However, not all learners participate equally, so a detailed characterization and impact analysis of peer assessment is required. This study draws on a sociological term, social network prestige, to evaluate the participation gap of learners in online peer assessment. A teacher training course was offered to in-service primary and secondary school teachers, and 457 participants were ranked according to their prestige. The top 30% of learners were considered the high-prestige group (142 participants) and the bottom 30% the low-prestige group (128 participants). Social network analysis and behavioral sequence analysis were used to explore the differences in the learning outcomes of these two groups. The results showed significant differences in learning performance, social network structure and learning behaviors among learners with different prestige. High-prestige learners have better learning performance, and this advantage is affected by their prior knowledge. Learners with different levels of prestige differ in social network structure and learning behavior. Based on these findings, this study suggests improvements to reduce the participation gap.
1. Introduction
With the rapid development of Internet technology, teacher training is no longer limited to face-to-face formats, and online teacher training has emerged as an important avenue for teacher professional development (TPD). Interaction is an important part of online teacher training and takes a variety of forms, such as peer assessment. Research has found that interaction behaviors have a positive impact on training effectiveness [1]. Tang [2] likewise confirmed the importance of interactions in online training and showed that they predict whether teachers complete the course. Therefore, ensuring the quality of online interactions has become an important concern.
Research on online interactions has addressed peer facilitation
strategies [3], the technical design of collaborative learning environ-
ments, interaction models [1], and collaborative community structures
[4]. Despite the extensive current research, a persistent problem with
online learning is the participation gap among learners. As early as
2013, Vaquero and Cebrian [5] described the participation gap as a "rich
club" phenomenon, meaning that high-achieving students are more
likely to have social interactions with each other, while low-achieving
students struggle to engage in such interactions. The participation gap is a dynamic and complex social phenomenon reflected in the unequal and uneven nature of learners' interactions in online learning [6]. Several studies have reported the significant damage this gap does to learning performance, but further in-depth exploration of its impact is needed [7].
Participation gaps change constantly and can be described in socio-
logical terms [6]. Social network prestige (‘prestige’) is a sociological
term that captures the directionality and reciprocity of learners’ in-
teractions. In directed social networks, the connection between nodes points from the sender to the receiver; prestige reflects the degree to which a participant successfully attracts the attention of peers relative to their activity level. It is an important indicator for measuring the participation gap [8]. However, current research lacks multidimensional investigations of the effects of social network prestige on learning performance, an important aspect worthy of in-depth analysis.
2. Literature review
2.1. Online teacher training
Social participation is important for TPD. Interactions in online
* Corresponding author: Ning Ma, School of Educational Technology, Faculty of Education, Beijing Normal University, Beijing, 100875, People’s Republic of China
E-mail address: horsening@bnu.edu.cn (N. Ma).
https://doi.org/10.1016/j.caeo.2022.100087
Received 29 September 2021; Received in revised form 25 April 2022; Accepted 25 April 2022
teacher training can facilitate interaction and communication among
learners [9]. Interactive communities and information networks can
provide rich learning resources [10] and use group knowledge construction to form a positive learning atmosphere that significantly enhances training effectiveness (Ma et al. [11]). Despite the clear advantages of interaction, many studies have reported problems with it; in particular, social participation often varies among learners. In online training courses for student teachers, some scholars found a significant participation gap (Vásquez-Colina et al., 2017; [12]). Similar findings have been reported for in-service teachers. For example, Zhang and Liu [13] found that in-service teachers in online professional learning communities show differences in various aspects of participation. A literature review indicated that in an Internet-based community, the participation gap may exert a significant impact on learners' emotional support and professional development [14]. Thus, many scholars have called for more research on the participation gap in online teacher training [15,16].
2.2. Peer assessment
Online peer assessment is an important part of online interaction, with many advantages such as convenience [17], reduced workload for teachers [18], cost-effectiveness, flexibility and self-sustainability [19] in terms of activity design and technical implementation. Peer assessment has also been gradually applied to teacher training [20], where it facilitates the development of teachers' learning design skills and teaching abilities (Ma et al. [21]). Peer assessment is not only a form of evaluation but also a form of learning: it supports learners' deeper understanding of teaching requirements while promoting self-reflection [22] and critical thinking [23]. In fact, promoting learners' self-reflection may be the most significant advantage of peer assessment [24]. Despite these advantages, online peer assessment also poses challenges, among which the level of participation and the participation gap are considered pressing issues for research [25,26]. Falchikov [27] reported that learners were sometimes unable to accept peer feedback and were reluctant to provide comments to their peers. Gao et al. [28] argued that the level of learner participation in online peer assessment is uneven and that there are gaps in the gains of learners with different levels of participation. This situation leaves some students with little or no feedback, placing them at a disadvantage in the interactive network [29] and greatly reducing the effectiveness of their participation in peer assessment.
2.3. Social network prestige
Social network prestige comes from the level of recognition and attention of peers and the sphere of influence that a user radiates [30]. The sphere of influence comprises the individuals in the network who can be reached through relational ties. Conceptually, social network prestige refers to the proximity or reach of a person to the network, i.e., how many peers are interested in becoming widely connected to that person [31]. Based on the broad definitions in the literature and the characteristics of peer assessment in this study, we argue that the social network prestige of learners in online peer assessment refers to the strength with which a learner's published work triggers peer commenting behavior, and that prestige is positively related to the number of directed links received in the social network.

The concept of prestige allows researchers to explore the characteristics of interactions and the learning impacts generated by social networks from the detailed perspective of individual nodes. Atkisson et al. [32] found that in learning communities, learners tend to acquire knowledge from high-prestige learners. Learners with higher prestige can enhance reciprocal social network connections and facilitate the sharing of knowledge and experience among learning groups. Therefore, the study of prestige helps to probe potential explanations of the participation gap [33], suggesting ways to reduce it and ultimately promoting effective online learning.
The algebraic form can express all the quantitative information in a social network and thus supports further mathematical analysis. The most basic form of a social network that can be used for mathematical analysis is the matrix. In a social network generated by the online peer assessment of learners with n nodes, X(n, n) denotes a social matrix X with n rows and n columns, and x_ij denotes the value in the i-th row and j-th column of this matrix (both i and j range from 1 to n), that is, the frequency of interactions sent from node i to node j. Prestige is calculated as follows [34,35]:

prestige (for node i) = ( ∑_{j=1}^{n} x_ji ) / (n − 1)
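Under the matrix definition above, prestige is straightforward to compute. The following is a minimal sketch of the calculation; the function name and the toy three-learner matrix are our own illustration, not data from the study.

```python
def prestige(X, i):
    """Prestige of node i: total directed ties received, divided by (n - 1)."""
    n = len(X)
    return sum(X[j][i] for j in range(n)) / (n - 1)

# Toy 3-learner network; X[i][j] = comments sent from learner i to learner j.
X = [[0, 2, 1],
     [1, 0, 0],
     [3, 0, 0]]

# Learner 0 receives the most comments and so has the highest prestige.
scores = [prestige(X, i) for i in range(len(X))]
```

Ranking learners by these scores and splitting at the 30th and 70th percentiles reproduces the grouping procedure described in Section 3.2.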
2.4. Research questions
Learning is social in nature, and peer assessment in online courses is important to learners. Many researchers have reported participation gaps in online peer assessment and have confirmed that such gaps can directly undermine learners' outcomes and learning mood. Focusing on the participation gap in online learning, this study was conducted from the perspective of social network prestige and explored the differences in the learning outcomes of learners with different levels of prestige, specified as learning performance, social network structure and learning behavior, with the following specific research questions:
1 Do learners with different prestige differ in learning performance? If
so, what are the differences?
2 Do learners with different prestige differ in social network structure?
If so, what are the differences?
3 Do learners with different prestige differ in learning behaviors? If so,
what are the differences?
3. Methodology
3.1. Experimental procedure
The experimental design is shown in Fig. 1. Participants were
recruited via the Internet and given pre-experimental training. Pre-test
data were collected, followed by a 5-week online course on 'Project-based Learning under Blended Concepts', which began with two weeks of basic knowledge teaching, followed by three weeks of intensive lectures and peer assessment. The peer assessment required each learner to upload their own assignment and freely select about 10 other learners' assignments to assess, giving specific scores and comments.
course was offered on the Learning Cell system (http://lcell.cn/), which
could automatically record learners’ interaction and learning data,
including online learning behavior, learning time, and activity access
sequence. At the beginning and end of the peer assessment phase, each
learner was required to submit a PBL design scheme for pre-test and
post-test as a basis of knowledge application ability. Finally, data from
the Learning Cell system were analyzed.
3.2. Participants
In order to recruit in-service teachers of various subjects from pri-
mary and secondary schools across China, we widely distributed posters
of this study on the Internet through various platforms, and 1438 candidates for the study were confirmed through voluntary registration.
Then, a total of 457 teachers who participated in 3 peer assessment
activities and provided complete data were selected as the study par-
ticipants. All 457 participants had clear learning goals and appropriate
learning intentions. Most teachers had bachelor’s or master’s degrees.
All had online learning experience and were competent in their use of
information technology. To characterize the participation gap in
teachers’ online peer assessment, 457 learners were divided into high
and low prestige groups based on prestige values, i.e., all learners were
Fig. 1. Experimental design.
Fig. 2. The design of peer assessment activity.
ranked in descending order of prestige; the top 30% (142 learners with high social network prestige values, defined as Group H) were selected as the high-prestige group, and the bottom 30% (128 learners with low social network prestige values, defined as Group L) as the low-prestige group.
3.3. Course design and learning activity
The purpose of peer assessment in this study was to guide learners to learn from others, reflect on their own project-based learning (PBL) design scheme, and finally create a PBL design scheme independently at the end of the course. Each learner was free to choose the assignments
they wanted to evaluate. They were also advised to evaluate approxi-
mately 10 assignments and provide qualitative comments and quanti-
tative scores using the provided scale.
Three peer assessment activities were designed in conjunction with
the teaching objectives and learning contents. Each of the three peer
assessment activities followed the same three-stage process: an autonomous learning stage, an evaluation stage and a reflection stage, as shown in Fig. 2. First, learners studied the learning resources, such as videos and text materials, participated in the course activities, and submitted their assignments. Then, they entered the evaluation stage, where they could browse the assignments others had submitted in the assignment display area, voluntarily select assignments that interested them, and score and comment on them using the scale provided. Finally, learners proceeded to the reflection stage, where they could view the evaluation radar chart, read the comments and scores given by other learners, and review their own assignment for reflection. After the reflection stage, learners could review the learning materials again or participate in other learning activities to fill in knowledge gaps.
During the evaluation stage, learners could see the basic information of each assignment, such as the title, author and upload time, and could perform operations such as checking, downloading and reviewing assignments, as shown in Fig. 3. They could freely select the assignments they wanted to review in the assignment display interface.
Once the learner had selected the assignment to be evaluated, he or
she would come to the interface of assignment assessment, as shown in
Fig. 4. After browsing through the assignments uploaded by peers,
learners scored assignments based on the provided scale and wrote
comments. To improve the accuracy of scoring, we followed Brown's suggestion of using star scoring for each dimension [36], automatically populating the scoring rules for the dimension at each scoring step. Learners did not have to evaluate an assignment after viewing it; they could choose to return to the previous page without reviewing the assignment.
The evaluation stage was followed by the reflection stage, in which learners could click on 'My Assignments' to enter the assessment result interface, as shown in Fig. 5. Learners were provided with a radar chart to visualize the scores of their assignments in each dimension. They could also see who gave each comment, along with the detailed scores and comments. In the follow-up, learners could self-reflect and further revise their assignments based on the suggestions made in the comments.
3.4. Instruments
3.4.1. Evaluation of the level of knowledge application
In this study, a professor and two researchers whose research area is PBL were invited to propose each indicator and its related weight based on the PBL evaluation index system proposed by Qiang and Zhang [37] and the focus of this online teacher training course. The researchers of this study implemented the Analytic Hierarchy Process (AHP) method using the yaahp 10.1 software to calculate the weightings of the indicators given by these three experts (as shown in Table 1), and found that the consistency coefficients of the matrices were all less than 0.1, satisfying the consistency requirements of AHP and indicating that the weights assigned by each expert were reliable. These three experts then scored and evaluated the PBL design schemes submitted by the participants based on the scale.
3.4.2. Learning behavior coding scheme
Lag Sequential Analysis (LSA) is a method proposed by Sackett [38] to examine the significance of behavioral sequences. It can capture the underlying behavioral patterns of learners and explain technology-enhanced learning from a behavioral perspective, thus effectively guiding the design and implementation of subsequent teaching and learning activities. The method is widely used in the field of learning analytics [39,40] and is particularly applicable to the discovery of behavioral patterns in online environments [41,42].
In this study, LSA was used to analyze learners’ online learning be-
haviors in peer assessment. Considering the basic learning activities in
the course, the main activities in the peer assessment, as well as the data
that could be recorded by the learning platform, the researchers of this
study developed a coding table for learning behavior (Table 2).
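At the core of LSA is the adjusted residual (z-score) of each lag-1 transition, which compares observed and expected transition frequencies. The sketch below illustrates this computation in a minimal form; the function name and the small transition table are hypothetical, not the study's data or the GSEQ implementation.

```python
import math

def adjusted_residual(table, a, b):
    """Adjusted residual (z-score) for the lag-1 transition a -> b.
    table[x][y] holds the observed frequency of behavior y
    immediately following behavior x."""
    total = sum(sum(row) for row in table)
    row = sum(table[a])             # occurrences of the given behavior
    col = sum(r[b] for r in table)  # occurrences of the following behavior
    expected = row * col / total
    denom = math.sqrt(expected * (1 - row / total) * (1 - col / total))
    return (table[a][b] - expected) / denom

# Hypothetical 2-behavior transition table.
table = [[10, 2],
         [3, 5]]
z = adjusted_residual(table, 0, 0)  # z > 1.96 marks a significant sequence
```

A z-score above 1.96 corresponds to the significance criterion applied to the sequences reported in Section 4.4.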
4. Analysis and results
4.1. Learners’ participation gap in online peer assessment
Social network analysis was used to identify the characteristics of the social network formed by the peer assessment activities. Gephi 0.9.4 was used to provide a basic description of the whole social network generated
Fig. 3. The interface of assignment display
Fig. 4. The interface of assignment assessment.
Fig. 5. The interface of assessment result.
Table 1
Scale of knowledge application ability
The selection of project topic:
- Fit: Consistent with current curriculum standards. (Weight 0.105, 10 points)
- Situation: Topic selection and driving issues are combined with current events and hot issues or real-life scenes. (0.135, 13)
- Interest: The topic is interesting and challenging, which can stimulate lasting learning enthusiasm. (0.065, 7)
Project objectives:
- Knowledge: Design teaching objectives based on subject knowledge content. (0.055, 6)
- Ability: Agree with the training goal for subject ability. (0.069, 7)
- Competency: Reflect core competencies' requirements. (0.069, 7)
Project implementation:
- Feasibility: The project content is feasible and in line with students' development zone. (0.089, 9)
- Project results: Project results are of important theoretical and practical significance. (0.107, 10)
Blended learning environment:
- Construction of technical environment: Support learners to better achieve their learning goals through the Internet, financial media and other technologies using blended learning. (0.069, 7)
- Abundance of resources: Provide learners with abundant learning resources. (0.082, 8)
- Tools and strategies of project implementation: Provide auxiliary tools such as a timetable and function distribution table. (0.057, 6)
Project evaluation:
- Evaluation tools: Provide clear, effective and operable evaluation tools. (0.048, 5)
- Evaluation method: The evaluation method can truly reflect the learning performance of learners. (0.056, 5)
by peer assessment activities. The whole social network is shown in Fig. 6, followed by an explanation of some technical indicators. The commonly used tool UCINET was then applied for the social network analysis [43]. The specific values and descriptions of the indicators can be found in Table 3. The peer assessment social network comprised 457 learners (nodes) and 11,354 directed interactions (edges), so the average degree of the social network is 24.84. The network density describes the overall level of connectivity in the network; its value of 0.215 implies a relatively strong relationship between this network and the interaction behavior of the learners, so the influence of the network on learning performance can be further explored. The average path length of 1.933 indicates that the network is well linked, but the centralization of 0.081 indicates that the network is quite scattered [44].
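The whole-network indicators above can be derived directly from the edge list. The sketch below uses the standard directed-network definitions; the function name and toy data are our own, and reported values from tools such as UCINET or Gephi can differ depending on the convention used (e.g., whether repeated interactions are weighted).

```python
def network_indicators(edges, n):
    """Average degree and density of a directed network with n nodes.
    edges is a list of (sender, receiver) interaction pairs."""
    m = len(edges)
    return {
        "average_degree": m / n,       # interactions per participant
        "density": m / (n * (n - 1)),  # actual edges / possible directed edges
    }

# Toy network: 3 learners, 4 directed interactions.
stats = network_indicators([(0, 1), (1, 0), (1, 2), (2, 0)], n=3)
```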
To explore the participation gap of learners in online peer assess-
ment, the prestige differences of learners in social networks were
explored. Learners' prestige was ranked from high to low according to the prestige formula; the top 30% were defined as Group H (n=142, M=142, SD=3.69) and the bottom 30% as Group L (n=128, M=19.47, SD=2.88), as shown in Table 4. The two groups differed significantly, with a high effect size (t=23.875, p<.001, d=0.87), indicating that the difference in prestige between the two groups was sufficient for the division into Groups H and L for further study.
To clearly show the position of learners in Group H and L in the social
network, a social network diagram (force-directed network) was drawn
to mark the prestige of learners. Due to the large sample size of this study, selected representative data were used to indicate social network positions more clearly, as shown in Fig. 7. Each node represents a learner, and each directed edge represents an evaluation from one learner to another.
A force-directed network can be used to reflect the frequency of interaction: the higher the interaction frequency, the larger the node and the closer it sits to the center of the network. Fig. 7 shows that some high-prestige learners are not in the core of the force-directed network, while some low-prestige learners are, indicating that prestige level is not always consistent with the frequency of interaction. The force-directed network is a commonly used tool for interactive visualization, but such a visualization cannot fully capture the prestige differences found in this study.
4.2. Analysis of learning performance
To clarify the differences in learning performance between Groups H and L in online peer assessment, we collected the PBL design schemes submitted by learners at the start and end of peer assessment. Three PBL experts gave scores out of 100 using the scoring scale to evaluate pre- and post-test knowledge application. To ensure the validity of the scoring, 10 assignments were randomly selected and scored independently by the three experts. Kendall's coefficient of concordance was .787 (p<.05); therefore, the experts' ratings were considered consistent. Table 5 summarizes the pre- and post-test scores of the two groups.
Considering the influence of the pre-test on statistical outcomes, one-way ANCOVA was adopted. However, the assumption of homogeneity of regression coefficients was violated, indicating a significant interaction effect between the independent variable and the covariate (F=24.891, p<.01). Following Huitema's [45] suggestion, the Johnson-Neyman technique was applied to identify the range of covariate values over which the difference between the groups on the dependent variable is significant.
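For an interaction model of the form post = b0 + b1*group + b2*pre + b3*group*pre, the Johnson-Neyman boundaries are the pre-test values at which the group difference b1 + b3*pre just reaches significance; they can be found in closed form by solving a quadratic in the covariate. The sketch below illustrates the computation; the coefficient and variance values are hypothetical, not the study's estimates.

```python
import math

def johnson_neyman(b1, b3, v11, v13, v33, t_crit=1.96):
    """Covariate values where the group difference b1 + b3*x is exactly
    at the significance threshold: solve
    (b1 + b3*x)^2 = t^2 * (v11 + 2*x*v13 + x^2*v33),
    where v11, v33 are the variances of b1, b3 and v13 their covariance."""
    A = b3 ** 2 - t_crit ** 2 * v33
    B = 2 * (b1 * b3 - t_crit ** 2 * v13)
    C = b1 ** 2 - t_crit ** 2 * v11
    disc = B ** 2 - 4 * A * C
    if disc < 0:
        return None  # the difference never crosses the threshold
    r = math.sqrt(disc)
    return sorted(((-B - r) / (2 * A), (-B + r) / (2 * A)))

# Hypothetical coefficients: the group gap shrinks as the pre-test score rises.
bounds = johnson_neyman(b1=10.0, b3=-0.2, v11=4.0, v13=-0.05, v33=0.001)
```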
Fig. 8 presents the results for the two groups. The solid red line and the dashed blue line represent the regression slopes of Groups H and L, respectively. The point at which the difference between the two groups becomes significant is a pre-test score of 49.40. When the pre-test score was above 49.40, the post-test score of Group H was significantly higher than that of Group L (t = −2.08, p<.05); when the pre-test score was below 49.40, there was no significant difference between the two groups. That is to say, if learners' prior knowledge level is too low, the difference in learners' social network prestige in online peer assessment does not affect their learning performance. For the vast majority of learners with a certain learning basis (more than 87% of learners), those with higher prestige have better performance.
4.3. Analysis of social network structure
4.3.1. Out-degree and in-degree
In the online peer-assessment social network, out-degree is the number of comments sent by a learner, while in-degree is the number of comments the learner receives. A high out-degree indicates that a learner actively participates in peer assessment and actively comments on others' assignments; a high in-degree denotes that the learner received more comments from peers. Learners in Group H received 30.43 comments from their peers on average, while Group L learners received about 19.65 comments, so the average number of comments received by Group H learners was 1.54 times that of Group L learners. The two groups were significantly different in in-degree (t=27.75, p<.001), with a high effect size (d=0.87). However, there was no significant difference in out-degree: Group H learners gave 23.98 comments on average, while Group L learners gave about 24.42 comments (t=-1.95, p=.25).
Given that the two groups had comparable out-degrees, Group L's lower in-degree was not due to Group L making less effort to establish connections with peers, but rather to the tendency of low-prestige learners to receive unequal feedback in peer assessment activities. Table 6 provides descriptive statistics and t-test results for the in-degree and out-degree of the two groups.
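In-degree and out-degree follow directly from the directed edge list of evaluations. A minimal sketch, with a hypothetical edge list of our own:

```python
from collections import Counter

def degree_counts(edges):
    """Out-degree: comments a learner sent; in-degree: comments received."""
    out_deg, in_deg = Counter(), Counter()
    for sender, receiver in edges:
        out_deg[sender] += 1
        in_deg[receiver] += 1
    return out_deg, in_deg

# Toy edge list of evaluations, as (sender, receiver) pairs.
out_deg, in_deg = degree_counts([("a", "b"), ("a", "c"), ("c", "a")])
```

Note that in-degree is simply the unnormalized numerator of the prestige formula in Section 2.3, which is why the in-degree comparison mirrors the prestige grouping.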
4.3.2. Centrality
Centrality is a quantitative analysis of individual "power" in social
networks, generally including degree centrality, betweenness centrality,
closeness centrality, and eigenvector centrality (Table 7).
Degree centrality indicates the number of other nodes directly connected to a node, which directly reflects the degree of interaction between members [34]. T-test results indicated that the degree centrality of Group H was significantly higher than that of Group L (t=22.38, p<.001, d=0.80), indicating that Group H experienced a higher interaction frequency in peer assessment.
Table 2
Coding table for learning behavior
SC (Study course): Browse the video, text and other learning materials provided by the course.
MA (Materials annotation): Learners annotate the learning materials.
VE (View evaluation scheme and results): Learners enter the "evaluation scheme" to view the scoring criteria and their own scoring results on each learning content.
PD (Participate in a discussion): Learners express their views, ask questions or answer others in the discussion board.
PA-SA (Peer assessment: submit assignments): Learners submit the assignments required in the peer assessment activity.
PA-RA (Peer assessment: review assignments): After submitting their assignments, learners click to view their own work.
PA-VA (Peer assessment: view assignments): Learners click to view others' assignments in the peer assessment activity.
PA-EA (Peer assessment: evaluate assignments): Learners evaluate the assignments of others, including scoring and writing comments.
PA-VR (Peer assessment: view results): Learners check the marks and comments given by others in the peer assessment activity.

Betweenness centrality measures the degree to which a node appears on the shortest path between any two nodes in a network [34]. If a
learner always appears on the shortest path between two other nodes in the network, that learner can be seen to occupy an important position, essentially controlling the communication between those two learners; if they refuse to communicate, it becomes difficult for the other learners to contact each other. In this study, the betweenness centrality of Group H was significantly higher than that of Group L (t=18.44, p<.001, d=0.75). Learners in Group H are more likely to serve as 'bridges' in peer assessment, implying stronger control over the interaction.
Closeness centrality is based on the sum of the shortest distances between a node and all other nodes in the graph. Nodes with higher closeness centrality are closer to the geometric center of the network [46], and the higher a node's closeness centrality, the more independent it is in transmitting information. In this study, there was no significant difference in closeness centrality between Group H and Group L (t=1.04, p=.079), suggesting that learners in both groups had a degree of independence in transmitting information in peer assessment activities.
The eigenvector centrality of a node depends on the number and quality of its connections: the more connected or important a learner's neighbors are, the higher that learner's eigenvector centrality [46]. In this study, the eigenvector centrality of Group H was significantly higher than that of Group L (t=24.04, p<.001, d=0.83), indicating that high-prestige learners interact and communicate with other influential learners.
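Of the four measures, eigenvector centrality is the least obvious to compute by hand. A minimal power-iteration sketch (our own illustration, not the UCINET implementation) shows the idea that a node scores highly when its neighbors themselves score highly:

```python
def eigenvector_centrality(adj, iters=100):
    """Power iteration: repeatedly set each node's score to the sum of
    its neighbors' scores, renormalizing so the largest score is 1."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        nxt = [sum(adj[j][i] * x[j] for j in range(n)) for i in range(n)]
        norm = max(nxt) or 1.0
        x = [v / norm for v in nxt]
    return x

# Toy undirected network: triangle 0-1-2 plus a pendant node 3 attached to node 2.
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
scores = eigenvector_centrality(adj)  # node 2 has the best-connected position
```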
4.3.3. Ego-network
Ego-network refers to the network structure centered on a node
(ego), consisting of the ego's directly connected nodes and the interactions between them. Unlike the whole network, the ego-network measures the characteristics of interactions among members from a micro perspective [47]. The differences between the two groups in the ego-network are shown in Table 8.
Fig. 6. The whole social network and the explanation of some indicators
First, the network size of an ego-network is the number of other learners directly connected to the node [48]. Group H was significantly higher than Group L on this index (t=22.38, p<.001, d=0.78). The size of an ego-network is closely related to the acquisition of information, with larger networks being more likely to access a greater variety of information [49].

Second, the average distance of learners in Group H was significantly lower than that of learners in Group L (t=-11.24, p<.001, d=0.59). Because Group H has a shorter average distance and a larger network size in the ego-network, the networks it belongs to have a wider range and faster information transmission [50].

Finally, network density refers to the tightness of the ego-network, and self-centrality measures a learner's influence on his or her ego-network. Learners with different prestige levels did not show significant differences in density (t=1.52, p=.104), although they did in self-centrality (t=9.70, p<.001, d=0.58), with Group H significantly higher than Group L. This indicates that Group H has a strong influence compared with Group L, given that network density did not differ significantly.
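The ego-network size and density measures above can be sketched compactly; the function name and toy edge list are hypothetical, and density here uses the directed-pair convention.

```python
def ego_network_stats(edges, ego):
    """Network size and density of a node's ego-network: the ego, its
    directly connected alters, and all ties among those nodes."""
    alters = ({v for u, v in edges if u == ego}
              | {u for u, v in edges if v == ego})
    nodes = alters | {ego}
    ties = [(u, v) for u, v in edges if u in nodes and v in nodes]
    possible = len(nodes) * (len(nodes) - 1)  # possible directed pairs
    return {"size": len(alters),
            "density": len(ties) / possible if possible else 0.0}

# Toy edge list; the tie ("d", "e") lies outside learner "a"'s ego-network.
stats = ego_network_stats([("a", "b"), ("b", "a"), ("a", "c"), ("d", "e")], "a")
```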
4.4. Analysis of learning behaviors
To further explore the differences in learning behaviors between Group H and Group L, this study conducted lag sequential analysis on the learning behaviors of the two groups. GSEQ 5.1, developed by Quera et al. [51], was used to perform the sequence analysis. Z-scores exceeding 1.96 indicate that a sequence is statistically significant [52]. Tables 9 and 10 are the adjusted residual tables for Groups H and L. Table 9 shows that high-prestige learners have 10 significant behavioral sequences. Three repetitive behaviors were noted: SC (studying course)→SC, PD (participating in a discussion)→PD and PA-VA (peer assessment-viewing assignments)→PA-VA. There were also 7
Table 3
Indicators and statistics of learners' whole social network in peer assessment.

| Indicator | Value | Description |
|---|---|---|
| Number of nodes | 457 | The number of nodes in the online peer assessment network, i.e. the number of participants involved in peer assessment. |
| Number of edges | 11354 | The number of interactions among all participants. |
| Average degree | 24.84 | The number of edges divided by the number of nodes, reflecting the average number of interactions per participant. |
| Network density | 0.215 | The ratio of actual edges to all possible edges, ranging from 0 to 1 [59]. A higher network density indicates more frequent learner interaction. |
| Average path length | 1.933 | The average distance between any two nodes in the network [60]. A shorter average path length indicates better network connectivity and more efficient information exchange. |
| Centralization | 0.081 | Describes the differences in degree between nodes, ranging from 0 to 1, and reflects the concentration trend of the network [61]. |
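The whole-network indicators above were computed with UCINET [43]. As a minimal sketch of how three of them can be derived from a directed edge list (illustrative only; function names are ours):

```python
from collections import defaultdict, deque

def network_indicators(nodes, edges):
    """Whole-network indicators for a directed interaction graph."""
    n, m = len(nodes), len(edges)
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
    # Average degree: edges per node.
    avg_degree = m / n
    # Density: observed edges over all possible directed edges.
    density = m / (n * (n - 1))
    # Average path length over all reachable ordered pairs, via BFS.
    total, pairs = 0, 0
    for s in nodes:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    avg_path = total / pairs if pairs else float("inf")
    return {"avg_degree": avg_degree, "density": density,
            "avg_path_length": avg_path}
```

On the study's network this would reproduce the order of magnitude in Table 3 (457 nodes, 11354 edges, average degree 24.84, density 0.215).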
Table 4
Descriptive statistics and t-test analysis of the two groups' prestige.

| Variable | Group H (Top 30%, n=142) M (SD) | Group L (Bottom 30%, n=128) M (SD) | t | df | p | d |
|---|---|---|---|---|---|---|
| Prestige | 142 (3.69) | 19.47 (2.88) | 23.87 | 254.56 | .000*** | 0.87 |

*** p<.001
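The grouping step itself is straightforward: rank learners by their prestige score and take the top and bottom 30%. A sketch, assuming a generic learner-id to score mapping (the paper's exact prestige measure and its tie-handling, which likely explain group sizes of 142 and 128 rather than an exact 30% of 457, are not reproduced here):

```python
def split_by_prestige(prestige, top=0.30, bottom=0.30):
    """Rank learners by prestige score and return (group_h, group_l).

    `prestige` maps learner id -> score; `top`/`bottom` are cohort fractions.
    """
    ranked = sorted(prestige, key=prestige.get, reverse=True)
    n = len(ranked)
    return ranked[:round(n * top)], ranked[n - round(n * bottom):]
```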
Fig. 7. Social network diagram of prestige.
Table 5
Descriptive statistics of learning performance scores of the two groups of learners.

| | Pre-test, Group H | Pre-test, Group L | Post-test, Group H | Post-test, Group L |
|---|---|---|---|---|
| n | 142 | 128 | 142 | 128 |
| Mean score | 58.32 | 57.73 | 82.71 | 78.40 |
| Median | 57 | 58 | 86 | 76.5 |
| Standard deviation | 8.63 | 6.89 | 8.23 | 6.45 |
| Interquartile range | 6.5 | 5 | 14 | 8 |
Fig. 8. Illustration of the significant difference in the learning performance of the two groups.
Table 6
Descriptive statistics and t-test results of the two groups' in-degree and out-degree.

| Variable | Group H (n=142) M (SD) | Group L (n=128) M (SD) | t | df | p | d |
|---|---|---|---|---|---|---|
| In-degree | 30.43 (3.74) | 19.65 (2.58) | 27.75 | 251.72 | .000*** | 0.87 |
| Out-degree | 23.98 (2.89) | 24.42 (2.86) | -1.95 | 268 | .065 | |

*** p<.001
Table 7
Descriptive statistics and t-test results for the centrality of the two groups.

| Variable | Group H (n=142) M (SD) | Group L (n=128) M (SD) | t | df | p | d |
|---|---|---|---|---|---|---|
| Degree | 41.47 (3.19) | 33.39 (2.73) | 22.38 | 267.38 | .000*** | 0.80 |
| Betweenness | 0.61 (0.11) | 0.40 (0.07) | 18.44 | 268 | .000*** | 0.75 |
| Closeness | 44.02 (1.21) | 42.99 (1.03) | 1.04 | 268 | .079 | |
| Eigenvector | 14.16 (1.00) | 11.29 (0.95) | 24.04 | 268 | .000*** | 0.83 |

*** p<.001
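Of the centrality indicators in Table 7, eigenvector centrality is the least self-explanatory: a learner scores highly when connected to other well-connected learners. A sketch of the usual power-iteration computation (illustrative; the paper computed these indicators with UCINET):

```python
import numpy as np

def eigenvector_centrality(adj, iters=500, tol=1e-12):
    """Eigenvector centrality by power iteration on an adjacency matrix.

    Adding the identity (a +1 shift of all eigenvalues) keeps the iteration
    from oscillating on bipartite graphs without changing the ranking.
    Returns a unit-length score vector.
    """
    a = np.asarray(adj, dtype=float)
    x = np.ones(a.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        nxt = a @ x + x          # (A + I) x
        nxt /= np.linalg.norm(nxt)
        done = np.linalg.norm(nxt - x) < tol
        x = nxt
        if done:
            break
    return x
```

For a star-shaped interaction pattern, the hub learner receives the highest score, mirroring how high-prestige learners who attract many ties from other well-connected peers dominate this indicator.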
Table 8
Descriptive statistics and t-test results of the two groups of learners' ego-network.

| Variable | Group H (n=142) M (SD) | Group L (n=128) M (SD) | t | df | p | d |
|---|---|---|---|---|---|---|
| Size | 38.93 (3.71) | 28.60 (3.31) | 22.38 | 267.37 | .000*** | 0.78 |
| Average distance | 1.86 (0.03) | 1.92 (0.04) | -11.24 | 232.43 | .000*** | 0.59 |
| Density | 22.02 (0.95) | 21.65 (0.91) | 1.52 | 268 | .064 | |
| Self-centrality | 12.90 (2.32) | 10.60 (1.40) | 9.70 | 204.54 | .000*** | 0.58 |

*** p<.001
Table 9
Adjusted residual table of Group H.

| Z | SC | MA | VE | PD | PA-SA | PA-VA | PA-EA | PA-VR | PA-RA |
|---|---|---|---|---|---|---|---|---|---|
| SC | 8.12* | 3.97* | -0.04 | 3.65* | -1.36 | -2.82 | -4.81 | -0.44 | -3.48 |
| MA | -2.44 | -2.33 | -0.71 | -1.64 | 1.07 | -0.07 | -2.44 | 1.58 | -1.66 |
| VE | -3.62 | -0.91 | 0.92 | -0.62 | 0.93 | 1.87 | -1.69 | 1.16 | -1.04 |
| PD | 3.31* | -1.95 | 1.15 | 4.14* | 0.84 | -0.54 | 0.31 | 0.64 | -3.03 |
| PA-SA | -6.32 | -0.56 | -0.37 | -1 | -0.14 | -4.79 | -0.59 | -2.32 | -0.23 |
| PA-VA | -9.1 | 1.65 | -0.45 | 1.21 | -0.9 | 3.41* | 10.44* | -3.63 | -2.74 |
| PA-EA | -2.15 | 1.14 | 1.44 | -1.19 | -0.57 | -0.13 | -1.98 | 1.52 | -1.58 |
| PA-VR | 1.14 | -2.62 | -3.75 | 3.81* | -0.64 | -4.09 | -0.05 | -3.64 | 4.22* |
| PA-RA | 3.04* | 0.33 | -1.16 | -4.48 | -0.23 | -2.41 | 1.12 | -1.88 | 0.03 |

* p<.05.
significant learning behavior sequences: SC→MA (making annotation), PD→SC, PA-RA (peer assessment-reviewing assignments)→SC, PA-VA→PA-EA (peer assessment-evaluating assignments), SC→PD, PA-VR (peer assessment-viewing results)→PD, and PA-VR→PA-RA.
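The adjusted residuals reported in Tables 9 and 10 follow the standard lag-sequential formula of Bakeman and Gottman [52]; the paper computed them with GSEQ 5.1. As an illustrative sketch (not the authors' code), the z-score for each lag-1 transition can be computed as:

```python
import math
from collections import Counter

def adjusted_residuals(sequence):
    """Lag-1 adjusted residuals (z-scores) for a coded behavior sequence.

    z_ij = (O_ij - E_ij) / sqrt(E_ij * (1 - r_i/N) * (1 - c_j/N)),
    where E_ij = r_i * c_j / N, r_i and c_j are the row and column totals
    of the transition table, and N is the number of transitions.
    """
    pairs = list(zip(sequence, sequence[1:]))
    n = len(pairs)
    obs = Counter(pairs)
    row = Counter(s for s, _ in pairs)
    col = Counter(t for _, t in pairs)
    z = {}
    for i in set(sequence):
        for j in set(sequence):
            e = row[i] * col[j] / n
            denom = math.sqrt(e * (1 - row[i] / n) * (1 - col[j] / n))
            z[(i, j)] = (obs[(i, j)] - e) / denom if denom else 0.0
    return z
```

A z-score above 1.96 marks a transition that occurs significantly more often than chance, which is the criterion applied to Tables 9 and 10.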
As can be seen from Table 10, the low-prestige learners showed 8 significant behavioral sequences. In addition to 3 repetitive learning behaviors (SC→SC, PD→PD, and PA-VA→PA-VA), there were 5 significant learning sequences: PA-EA→SC, PA-VR→VE, PA-VR→PA-RA, PA-VA→PA-EA, and SC→PD.
To display the learning behavior sequences of the two groups, two corresponding transition diagrams of learning behavior were drawn; they appear in Figs. 9 and 10. In each transition diagram, a node represents a learning behavior, and a link indicates that the transition between the two connected behaviors is significant. The direction of each arrow indicates the direction of the behavior transition. The numbers in these figures are z-scores: the larger the number, the more significant the behavioral sequence.
4.4.1. Low-prestige learners mainly derive inspiration and reflection from their peers' assignments and pay more attention to their own learning performance
First, low-prestige learners would review their assignments after viewing the assessment results (PA-VR→PA-RA). Second, another significant sequence of learning behavior associated with low-prestige learners was that they demonstrated reflective behavior after viewing and evaluating others' homework; that is, they would study the course content again and participate in discussion (PA-VA→PA-EA→SC→PD). This indicates that low-prestige learners' reflection came mainly from the inspiration of their peers' assignments. However, self-reflection that relies mainly on oneself has limitations [53], which may be why Group L was not as effective as Group H in knowledge application. Third, after viewing their assessment results, learners in Group L would click on the evaluation scheme to check their scores on various dimensions and then further study the course content (PA-VR→VE→SC), possibly because low-prestige learners receive more limited feedback from peers.
4.4.2. High-prestige learners engage in more deep learning behaviors in peer assessment and promote reflection through a combination of peer assessment and self-thinking
First, analysis of behavior sequences showed that learners in Group H reflect more on their assignments by viewing peer assessment results (PA-VR→PA-RA). Second, high-prestige learners conduct further learning after reviewing their assignments; that is, they browse relevant course materials and comment on the learning content, or go to the discussion board to post (PA-VR→PA-RA→SC→MA, PA-VR→PA-RA→SC→PD). These learners review what they have learned based on peer assessment. Third, after viewing peer assessment results, learners in Group H would use the discussion board to express their opinions and then study the course content again (PA-VR→PD→SC→MA). This suggests that high-prestige learners were more willing to participate in interaction after being evaluated by others, and that interaction also promotes their learning of course content.
5. Discussion and conclusion
This study draws on social network prestige to study the participation gap in online peer assessment and explores the differences in learning performance, social network structure, and learning behaviors of learners with different levels of prestige (i.e., Group H and Group L).
5.1. Learning performance
In this study, a knowledge application scale was developed using AHP to evaluate the learning performance of learners with different levels of prestige, and the Johnson-Neyman method was adopted to compare the post-test knowledge application scores of Group H and Group L. The findings are consistent with other research results [54]: social network prestige affects learning performance, the average post-test scores of Group H were better than those of Group L, and learners who are at a disadvantage in online interaction (such as those with low prestige) are less likely to achieve well
Table 10
Adjusted residual table of Group L.

| Z | SC | MA | VE | PD | PA-SA | PA-VA | PA-EA | PA-VR | PA-RA |
|---|---|---|---|---|---|---|---|---|---|
| SC | 6.99* | 1.55 | -2.33 | 4.22* | -0.31 | -2.42 | -3.06 | 1.09 | -0.73 |
| MA | 1.17 | -0.67 | -0.74 | -0.22 | -0.45 | 1.37 | -5.69 | -0.55 | 0.12 |
| VE | -3.65 | -0.73 | 1.04 | -0.3 | -0.63 | 0.02 | -0.97 | 0.24 | -3.18 |
| PD | -5.34 | -2.23 | 0.31 | 2.09* | -0.19 | -1.34 | -0.29 | 1.08 | -0.05 |
| PA-SA | 1.06 | -2.03 | -0.72 | -0.21 | -0.44 | 1.02 | -5.67 | -0.45 | -0.12 |
| PA-VA | -7.17 | -1.84 | 0.36 | 2.11 | 0.51 | 2.17* | 12.19* | -2.51 | -2.31 |
| PA-EA | 2.48* | -0.69 | -0.99 | -4.48 | 1.12 | 1.45 | -0.93 | -0.5 | -0.17 |
| PA-VR | -1.99 | -0.56 | 4.46* | -2.49 | -0.14 | -4.55 | 0.34 | 1.4 | 5.31* |
| PA-RA | -3.35 | 1.61 | -2.18 | 1.22 | 1.11 | -0.77 | -0.17 | -0.45 | -0.03 |

* p<.05.
Fig. 9. The learning behavioral transition diagram for Group H.
Fig. 10. The learning behavioral transition diagram for Group L.
academically. This study delved into these findings and found that the effect of prestige on learning performance is influenced by learners' initial level of knowledge: differences in social network prestige in online peer assessment do not affect learners' ultimate level of knowledge application when their prior knowledge is very low. For the vast majority of learners with a basic foundation, however, high-prestige learners ultimately performed significantly better in terms of knowledge application.
Based on these findings, this study concludes that promoting social network prestige contributes to learning performance. However, focusing on social network prestige alone is not enough for learners with very low prior knowledge; such learners need more individualized attention and support.
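The Johnson-Neyman procedure used above probes at which prior-knowledge scores the Group H vs. Group L difference becomes significant. A sketch of the core computation, with hypothetical variable names and the large-sample critical value of about 1.96:

```python
import numpy as np

def conditional_effect(y, group, prior):
    """Fit y = b0 + b1*group + b2*prior + b3*group*prior by least squares
    and return a function giving the group effect and its t-statistic at
    any prior-knowledge score. The Johnson-Neyman region is the set of
    prior scores where |t| exceeds the critical value."""
    X = np.column_stack([np.ones_like(prior), group, prior, group * prior])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)

    def at(x):
        effect = beta[1] + beta[3] * x      # group difference at prior = x
        se = np.sqrt(cov[1, 1] + 2 * x * cov[1, 3] + x * x * cov[3, 3])
        return effect, effect / se

    return at
```

Scanning `at(x)` over the observed prior-knowledge range identifies where the prestige-group difference is and is not significant, which is the pattern the paper reports: no reliable difference at very low prior knowledge, a clear advantage for Group H elsewhere.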
5.2. Social network structure
There are significant differences in social network structure among learners with different prestige. First, the analysis of out-degree and in-degree shows that the difference between Group H and Group L is not significant in out-degree but is significant in in-degree, suggesting that Group L did not put less effort into establishing connections with peers but somehow achieved lower interaction gains. This phenomenon has also been described by other scholars as "positive peer interaction with unequal feedback" [55]. Second, compared with Group L, Group H have higher centrality and influence, implying that Group H have greater control over learning resources such as information and knowledge in the network, which is consistent with Andrews' findings [56]. However, previous research [57] argues that the control exercised by high-prestige learners has the potential to interfere with other learners in the network, which may further widen the participation gap. Third, the eigenvector centrality indicator shows that high-prestige learners interact with more influential learners, while low-prestige learners mostly interact with learners of their own prestige level. This "monopoly" of learning capital is not conducive to good learning performance for low-prestige learners [34].
It can be seen that Group L actively interact without receiving reciprocal responses and can only communicate with learners of their own prestige level, while Group H have more influence and even achieve a certain degree of control over the information in the network. What can be done to reduce the damage the participation gap causes to the low-prestige group? In this study, learners freely chose which assignments to review, and they may have preferred to review assignments from high-prestige learners, which could have widened the participation gap. Therefore, this study suggests algorithmically recommending assignments from Group L to Group H to increase the likelihood of low-prestige learners' assignments being seen by high-prestige learners, thereby reducing the damage caused by the participation gap.
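Such a recommendation rule need not be elaborate. One possible greedy sketch (entirely our illustration, not the platform's algorithm): rank submissions by how often they have already been assigned and by their author's prestige, so low-prestige authors' work reaches high-prestige reviewers first:

```python
from collections import defaultdict

def recommend_reviews(reviewers, submissions, prestige, per_reviewer=3):
    """Greedily route under-reviewed, low-prestige submissions to reviewers.

    `submissions` are identified by their author's id (one per author, an
    assumption of this sketch); `prestige` maps author id -> prestige score.
    """
    review_count = {s: 0 for s in submissions}
    assignments = defaultdict(list)
    for r in reviewers:
        # Prefer submissions with the fewest reviews so far, then the
        # lowest-prestige authors; never assign a reviewer's own work.
        ranked = sorted((s for s in submissions if s != r),
                        key=lambda s: (review_count[s], prestige[s]))
        for s in ranked[:per_reviewer]:
            assignments[r].append(s)
            review_count[s] += 1
    return dict(assignments)
```

Balancing on the running review count keeps any single submission, high- or low-prestige, from absorbing all reviewer attention.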
In addition, many technology-supported learning scaffolds use social network analysis tools to build visual interaction networks of learners to help instructors identify marginal learners who interact less and might benefit from specific intervention strategies [6]. However, such scaffolds, including commonly used force-directed network visualization tools, cannot fully capture the prestige differences found in this study. As Fig. 7 shows, Group L learners who have the same external interaction (out-degree) but achieve a lower level of responses (in-degree) are not all found in marginal positions. Considering the significant differences in learning outcomes among learners with different prestige, it is recommended that prestige indicators be considered when designing learning intervention scaffolds so that instructors can identify and provide timely support to low-prestige learners who cannot obtain feedback equivalent to that of their high-prestige peers.
5.3. Learning behaviors
The behavior sequences showed that both high- and low-prestige learners actively reflect on their past assignments in peer assessment. However, learners with different prestige gain reflection in different ways and show different depths of reflection. Group L gain insight from evaluating the assignments of others and then engage in self-reflection. Unlike Group L, Group H gain reflection not only by evaluating the assignments of others but also, possibly, from reflections triggered by being evaluated by others. They show more reflective learning behaviors after being evaluated, and they are more willing to participate in interactive activities beyond peer assessment, such as actively asking questions and checking peers' messages, through which they can continuously deepen their reflection on the course content.
Based on these findings, learners' prestige levels need to be considered when designing feedback strategies. Low-prestige learners interact inefficiently, and previous research indicates that this leads to poor knowledge application and assignment quality [2]. Therefore, this study suggests that, in addition to focusing on enhancing their prestige, instructors should push learning materials to low-prestige learners that are appropriate to their ability level. In addition, the course should more strongly support low-prestige learners in revising their assignments, thus promoting deeper thinking. For high-prestige learners, we suggest that higher-level cognitive demands be made of them, which can be achieved by providing evaluation scaffolding [58]. High-prestige learners post high-quality evaluations in peer assessment, which in turn motivates them to contribute higher-quality interactions across the network.
6. Limitations and future directions
This research has some limitations, although it makes some contributions to understanding the influence of social network prestige on in-service teachers' learning outcomes in online peer assessment. First, this study mainly used the indicator of social network prestige to describe the participation gap and its impact on learning outcomes from a static perspective. Future studies could incorporate the time dimension to depict dynamic changes in learners' social network prestige and the effect of such changes on their interactions and learning outcomes. Second, this study did not explore the dimension of learner characteristics, because data on characteristics such as age, education, gender, and online learning experience were relatively concentrated or did not differ significantly when exploring the factors of social network prestige formation. Third, the analysis of online interaction texts mainly focused on scale-based quality ratings; future research can combine other instruments to mine more deeply the texts generated by online interactions.
Declaration of competing interest
No potential conict of interest was reported by the authors.
Funding
This research was funded by the "Research on Time-Emotion Cognition Analysis Model and Automatic Feedback Mechanism of Online Asynchronous Interaction" project [grant number 62077007], supported by the National Natural Science Foundation of China.
References
[1] Jo I, Park Y, Lee H. Three interaction patterns on asynchronous online discussion behaviors: A methodological comparison. Journal of Computer Assisted Learning 2017;33(2):106–22. https://doi.org/10.1111/jcal.12168.
[2] Tang H. Teaching teachers to use technology through massive open online course: Perspectives of interaction equivalency. Computers and Education 2021;174. https://doi.org/10.1016/j.compedu.2021.104307.
[3] Chan JCC, Hew KF, Cheung WS. Asynchronous online discussion thread development: examining growth patterns and peer-facilitation techniques: Influence of peer facilitation. Journal of Computer Assisted Learning 2009;25(5):438–52. https://doi.org/10.1111/j.1365-2729.2009.00321.x.
[4] Wise AF, Cui Y. Learning communities in the crowd: Characteristics of content related interactions and social relationships in MOOC discussion forums. Computers and Education 2018;122:221–42. https://doi.org/10.1016/j.compedu.2018.03.021.
[5] Vaquero LM, Cebrian M. The rich club phenomenon in the classroom. Scientific Reports 2013;3(1):1174. https://doi.org/10.1038/srep01174.
[6] Chen BD, Huang TH. It is about timing: Network prestige in asynchronous online discussions. Journal of Computer Assisted Learning 2019;35(4):503–15. https://doi.org/10.1111/jcal.12355.
[7] Dawson S. 'Seeing' the learning community: An exploration of the development of a resource for monitoring online student networking. British Journal of Educational Technology 2010;41(5):736–52. https://doi.org/10.1111/j.1467-8535.2009.00970.x.
[8] Russo TC, Koesten J. Prestige, Centrality, and Learning: A Social Network Analysis of an Online Class. Communication Education 2005;54(3):254–61. https://doi.org/10.1080/03634520500356394.
[9] Laurillard D. The educational problem that MOOCs could solve: Professional development for teachers of disadvantaged students. Research in Learning Technology 2016;24(1):28–42. https://doi.org/10.3402/rlt.v24.29369.
[10] Akiba M, Murata A, Howard CC, Wilkinson B. Lesson study design features for supporting collaborative teacher learning. Teaching and Teacher Education 2019;77:352–65. https://doi.org/10.1016/j.tate.2018.10.012.
[11] Ma N, Du L, Zhang YL, Cui ZJ, Ma R. The effect of interaction between knowledge map and collaborative learning strategies on teachers' learning performance and self-efficacy of group learning. Interactive Learning Environments 2020:1–15. https://doi.org/10.1080/10494820.2020.1855204.
[12] Demir M. Using online peer assessment in an Instructional Technology and Material Design course through social media. Higher Education 2018;75(3):399–414. https://doi.org/10.1007/s10734-017-0146-9.
[13] Zhang S, Liu Q. Investigating the relationships among teachers' motivational beliefs, motivational regulation, and their learning engagement in online professional learning communities. Computers and Education 2019;134:145–55. https://doi.org/10.1016/j.compedu.2019.02.013.
[14] Macia M, Garcia I. Informal online communities and networks as a source of teacher professional development: A review. Teaching and Teacher Education 2016;55:291–307. https://doi.org/10.1016/j.tate.2016.01.021.
[15] Zhang JJ, Skryabin M, Song XW. Understanding the dynamics of MOOC discussion forums with simulation investigation for empirical network analysis (SIENA). Distance Education 2016;37(3):270–86. https://doi.org/10.1080/01587919.2016.1226230.
[16] Sato T, Haegele JA. Physical educators' engagement in online adapted physical education graduate professional development. Professional Development in Education 2018;44(2):272–86. https://doi.org/10.1080/19415257.2017.1288651.
[17] Liu XY, Li L, Zhang ZH. Small group discussion as a key component in online assessment training for enhanced student learning in web-based peer assessment. Assessment and Evaluation in Higher Education 2018;43(2):207–22. https://doi.org/10.1080/02602938.2017.1324018.
[18] Liang JC, Tsai CC. Learning through science writing via online peer assessment in a college biology course. The Internet and Higher Education 2010;13(4):242–7. https://doi.org/10.1016/j.iheduc.2010.04.004.
[19] Xiong Y, Suen HK. Assessment approaches in massive open online courses: Possibilities, challenges and future directions. International Review of Education 2018;64(2):241–63. https://doi.org/10.1007/s11159-018-9710-5.
[20] Cabello VM, Topping KJ. Peer assessment of teacher performance. What works in teacher education? International Journal of Cognitive Research in Science, Engineering and Education 2020;8(2):121–32. https://doi.org/10.5937/IJCRSEE2002121C.
[21] Ma N, Xin S, Du JY. A Peer Coaching-based Professional Development Approach to Improving the Learning Participation and Learning Design Skills of In-Service Teachers. Educational Technology & Society 2018;21(2):291–304.
[22] Topping KJ. Using peer assessment to inspire reflection and learning. New York: Routledge; 2018. p. 39–41.
[23] Lin H, Hwang G, Chang S, Hsu Y. Facilitating critical thinking in decision making-based professional training: An online interactive peer-review approach in a flipped learning context. Computers and Education 2021;173. https://doi.org/10.1016/j.compedu.2021.104266.
[24] Chen NS, Wei CW, Wu KT, Uden L. Effects of high level prompts and peer assessment on online learners' reflection levels. Computers and Education 2009;52(2):283–91. https://doi.org/10.1016/j.compedu.2008.08.007.
[25] Sekendiz B. Utilisation of formative peer-assessment in distance online education: A case study of a multi-model sport management unit. Interactive Learning Environments 2018;26(5):682–94. https://doi.org/10.1080/10494820.2017.1396229.
[26] Yuan JM, Kim CM. The effects of autonomy support on student engagement in peer assessment. Educational Technology Research and Development 2018;66(1):25–52. https://doi.org/10.1007/s11423-017-9538-x.
[27] Falchikov N. Peer feedback marking: Developing peer assessment. Innovations in Education and Training International 1995;32:175–87.
[28] Gao Y, Schunn CDD, Yu QY. The alignment of written peer feedback with draft problems and its impact on revision in peer assessment. Assessment and Evaluation in Higher Education 2019;44(2):294–308. https://doi.org/10.1080/02602938.2018.1499075.
[29] Zou Y, Schunn CD, Wang YQ, Zhang FH. Student attitudes that predict participation in peer assessment. Assessment and Evaluation in Higher Education 2018;43(5):800–11. https://doi.org/10.1080/02602938.2017.1409872.
[30] Andrews NCZ, Hanish LD, Updegraff KA, Martin CL, Santos CE. Targeted Victimization: Exploring Linear and Curvilinear Associations Between Social Network Prestige and Victimization. Journal of Youth and Adolescence 2016;45(9):1772–85. https://doi.org/10.1007/s10964-016-0450-1.
[31] Andrews NCZ. Prestigious Youth are Leaders but Central Youth are Powerful: What Social Network Position Tells us About Peer Relationships. Journal of Youth and Adolescence 2020;49(3):631–44. https://doi.org/10.1007/s10964-019-01080-5.
[32] Atkisson C, O'Brien MJ, Mesoudi A. Adult Learners in a Novel Environment Use Prestige-Biased Social Learning. Evolutionary Psychology 2012;10(3):519–37. https://doi.org/10.1177/147470491201000309.
[33] Lin Y, Huang X. Reputation incentive mechanism in online learning space: based on the learner's perspective. Advances in Education 2021;11(2):513–25. https://doi.org/10.12677/AE.2021.112080.
[34] Knoke D, Yang S. Social network analysis. John Wiley & Sons, Ltd; 2008.
[35] Tsvetovat M. Social network analysis for startups. O'Reilly Media; 2011.
[36] Brown J. Developing, using, and analyzing rubrics in language assessment with case studies in Asian and Pacific languages (NFLRC monograph series). Honolulu, HI: National Foreign Language Resource Center, University of Hawaii; 2012.
[37] Qiang F, Zhang WL. Research on the Indicator System of Project-based Learning Evaluation based on Curriculum Reconstruction. Modern Educational Technology 2018;28(11):47–53.
[38] Sackett GP. Observing behavior: Theory and applications in mental retardation (Vol. 1). Baltimore: University Park Press; 1978.
[39] Tlili A, Wang H, Gao B, Shi Y, Zhiying N, Looi C, Huang R. Impact of cultural diversity on students' learning behavioral patterns in open and online courses: A lag sequential analysis approach. Interactive Learning Environments 2021:1–20. https://doi.org/10.1080/10494820.2021.1946565.
[40] Hopcan S, Polat E, Albayrak E. Collaborative Behavior Patterns of Students in Programming Instruction. Journal of Educational Computing Research 2022. https://doi.org/10.1177/07356331211062260.
[41] Eryilmaz E, Chiu MM, Thoms B, Mary J, Kim R. Design and evaluation of instructor-based and peer-oriented attention guidance functionalities in an open source anchored discussion system. Computers and Education 2014;71:303–21. https://doi.org/10.1016/j.compedu.2013.08.009.
[42] Yang TC, Chen SY, Hwang GJ. The influences of a two-tier test strategy on student learning: A lag sequential analysis approach. Computers & Education 2015;82:366–77. https://doi.org/10.1016/j.compedu.2014.11.021.
[43] Borgatti SP, Everett MG, Freeman LC. UCINET for Windows: Software for social network analysis [Computer software]. Harvard, MA: Analytic Technologies; 2002.
[44] Ergün E, Usluel YK. An Analysis of Density and Degree-Centrality According to the Social Networking Structure Formed in an Online Learning Environment. Educational Technology & Society 2016;19(4):34–46. https://www.jstor.org/stable/jeductechsoci.19.4.34.
[45] Huitema B. The analysis of covariance and alternatives: Statistical methods for experiments, quasi-experiments, and single-case studies. 2nd ed. Hoboken, NJ: Wiley; 2011.
[46] Anaya AR, Luque M, Letón E, Hernández-del-Olmo F. Automatic assignment of reviewers in an online peer assessment task based on social interactions. Expert Systems 2019;36(4). https://doi.org/10.1111/exsy.12405.
[47] Liu J. Lectures on whole network approach: a practical guide to UCINET. Shanghai Renmin Press; 2014.
[48] Everett M, Borgatti SP. Ego network betweenness. Social Networks 2005;27(1):31–8. https://doi.org/10.1016/j.socnet.2004.11.007.
[49] Arnaboldi V, Conti M, La Gala M, Passarella A, Pezzoni F. Ego network structure in online social networks and its impact on information diffusion. Computer Communications 2016;76:26–41. https://doi.org/10.1016/j.comcom.2015.09.028.
[50] Kim J, Hastak M. Social network analysis: Characteristics of online social networks after a disaster. International Journal of Information Management 2018;38(1):86–96. https://doi.org/10.1016/j.ijinfomgt.2017.08.003.
[51] Quera V, Bakeman R, Gnisci A. Observer agreement for event sequences: Methods and software for sequence alignment and reliability estimates. Behavior Research Methods 2007;39(1):39–49. https://doi.org/10.3758/BF03192842.
[52] Bakeman R, Gottman JM. Observing interaction: An introduction to sequential analysis. Cambridge: Cambridge University Press; 1997.
[53] Nicol D, Thomson A, Breslin C. Rethinking feedback practices in higher education: A peer review perspective. Assessment and Evaluation in Higher Education 2014;39(1):102–22. https://doi.org/10.1080/02602938.2013.795518.
[54] Lowenthal PR, Dunlap JC. Social presence and online discussions: A mixed method investigation. Distance Education 2020;41(4):490–514. https://doi.org/10.1080/01587919.2020.1821603.
[55] Hsia LH, Huang I, Hwang GJ. Effects of different online peer-feedback approaches on students' performance skills, motivation and self-efficacy in a dance course. Computers & Education 2016;96:55–71. https://doi.org/10.1016/j.compedu.2016.02.004.
[56] Andrews NCZ. Prestigious Youth are Leaders but Central Youth are Powerful: What Social Network Position Tells us About Peer Relationships. Journal of Youth and Adolescence 2019;49(3):631–44. https://doi.org/10.1007/s10964-019-01080-5.
[57] Lee A. Socially shared regulation in computer-supported collaborative learning [Ph.D. thesis]. New Brunswick: Rutgers, The State University of New Jersey; 2014.
[58] Konings KD, van Zundert M, van Merrienboer JJG. Scaffolding peer-assessment skills: Risk of interference with learning domain-specific skills? Learning and Instruction 2019;60:85–94. https://doi.org/10.1016/j.learninstruc.2018.11.007.
[59] Lee B, Bonk CJ. Social network analysis of peer relationships and online interactions in a blended class using blogs. The Internet and Higher Education 2016;28:35–44. https://doi.org/10.1016/j.iheduc.2015.09.001.
[60] de Marcos-Ortega L, Garcia-Cabot A, Garcia-Lopez E, Ramirez-Velarde R, Teixeira A, Martínez-Herráiz J-J. Gamifying Massive Online Courses: Effects on the Social Networks and Course Completion Rates. Applied Sciences 2020;10(20):7065. https://doi.org/10.3390/app10207065.
[61] Scott J. Social network analysis. London, UK: Sage; 2017.