Citation: Yao, D.; Lin, J. Identifying Key Factors Influencing Teaching Quality: A Computational Pedagogy Approach. Systems 2023, 11, 455. https://doi.org/10.3390/systems11090455
Received: 18 July 2023
Revised: 28 August 2023
Accepted: 31 August 2023
Published: 2 September 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Identifying Key Factors Influencing Teaching Quality:
A Computational Pedagogy Approach
Dunhong Yao 1,2,3 and Jing Lin 1,2,3,*
1 School of Computer and Artificial Intelligence, Huaihua University, Huaihua 418000, China; dh_yao@hhtc.edu.cn
2 Key Laboratory of Intelligent Control Technology for Wuling-Mountain Ecological Agriculture in Hunan Province, Huaihua 418000, China
3 Key Laboratory of Wuling-Mountain Health Big Data Intelligent Processing and Application in Hunan Province Universities, Huaihua 418000, China
* Correspondence: linjing@hhtc.edu.cn
Abstract:
Although previous research has explored the correlation between teacher characteristics
and teaching quality, effective methods for identifying key factors that influence teaching quality
are still lacking. This study aims to address this issue by developing an identification methodology
based on a computational pedagogy research paradigm to identify the key characteristics of teachers
and courses that influence their teaching quality. We developed quantitative models to quantify the
characteristics of teaching quality, based on those identified in previous studies. Correlation and
multiple correlation analyses were conducted to identify the key influencing characteristics, and grey
correlation analysis was used to calculate the degree of correlation between these key characteristics
and teaching quality. Our methodology was applied to 27 computer science discipline teachers and
82 courses, and validated with teaching data from eight additional teachers. Our findings demonstrate
the effectiveness of our method in identifying the key influence characteristics of teachers and courses
on teaching quality and confirm significant correlations between these key influential characteristics
and teaching quality. This innovative approach provides new insights and tools for predicting and
improving the teaching quality across disciplinary majors. Our research has significant implications
for future education studies, particularly for the development of effective methods for identifying
key factors that influence teaching quality. By providing a more comprehensive understanding of the
key factors that influence teaching quality, our study can inform the development of evidence-based
strategies to improve the teaching effectiveness for different disciplinary majors.
Keywords:
quantitative characteristics model; correlation analysis; multiple correlation analysis;
gray correlation analysis; teaching quality; key influence characteristics
1. Introduction
Enhancing undergraduate education quality involves improving course-teaching.
However, teaching quality varies between teachers and courses. Scholars [1–28] have conducted research
to determine whether a correlation exists between the quality of course teaching and factors
such as teachers’ educational background, degree, professional title, gender, age, teaching
experience, Pedagogical Content Knowledge (PCK), Technological Pedagogical Content
Knowledge (TPACK), teacher burnout, teaching style, academic achievement, course diffi-
culty, course type, and teaching evaluation. These studies suggest a complex relationship
between course-teaching quality and the teacher and course-object characteristics. Never-
theless, research has yet to disclose the nature and laws of these issues, owing to insufficient
computing technology, data support, and theoretical and practical difficulties in technology
integration in education during the offline classroom teaching era.
Implementing a new generation of information technology, particularly artificial intel-
ligence (AI), has significantly impacted course teaching and made course information con-
struction and teaching the norm, resulting in vast data. Computational pedagogy [29–35],
an educational research paradigm based on massive data computing, has become an es-
sential approach to educational research. It aims to construct educational theories; solve
educational problems; reveal the laws of teaching and learning; explore the nature and
laws of education in depth; and effectively apply data, algorithms, and technologies in
educational practice. This shift in perspective and value in educational research provides
new research methods and approaches to uncovering the covariation between teacher
characteristics and course quality.
Using computational pedagogy as a perspective and computer science courses as an example, this study aims to analyze the key influencing characteristics that covary with course-teaching quality; calculate the degree of association of these characteristics with course-teaching quality; and uncover the hidden covariation among these relationships by intelligently collecting, organizing, and quantifying a large number of relevant characteristics of teachers and courses. Universities can use the results of this study to improve teaching quality when determining course-teaching arrangements.
2. Literature Review
In recent years, scholars have studied the characteristics that affect the quality of
teaching in courses and computational pedagogy.
2.1. Correlation between Teacher Characteristics and Teaching Quality and Evaluation
(1) Teachers’ Characteristics and Teaching Quality.
Various studies have explored the relationship between teacher characteristics and teaching quality in order to improve educational outcomes. Saloviita et al. [1] identified a correlation between teacher burnout and teaching quality, and Tan et al. [2] found that teaching age had a significant influence on burnout. Palali et al. [3] found a nonlinear positive association between teacher scholarship and teaching quality. Sacre et al. [4] demonstrated that research-active teachers exhibit a higher teaching quality. Kulgemeyer et al. [5] discovered a correlation between teachers' PCK and their teaching quality. Li et al. [6] observed variations in the PCK sub-dimensions based on teachers' educational levels. Ma et al. [7] identified significant differences in teaching quality between teachers with different academic titles. Han [8] determined that students' evaluation of teaching quality was not significantly influenced by teachers' gender, but varied based on their titles. Deng et al. [9] conducted ANOVA tests on student evaluations, and found no significant differences based on different semesters or teacher titles. Gabalán-Coello et al. [10] analyzed the determinants of teaching quality in a Master of Engineering program at a Colombian university, and found that students valued the teaching methods and research experience of their professors. These studies provide a strong scientific foundation for investigating the connection between teachers' characteristics and their teaching quality.
(2) Teacher characteristics and teaching evaluation.
Education scholars have increasingly analyzed the relationship between teacher characteristics and teaching evaluation data to enhance teaching quality. Notable findings include Gordon et al.'s [11] observation that female teachers received lower teaching evaluation scores than did male teachers. Santiesteban et al. [12] found a gender bias in computer science teaching evaluations that mainly affected the evaluation results of professors and had a smaller impact on the evaluation results of student teachers. Arrona-Palacios et al. [13] noted that undergraduate students rarely consider gender when evaluating professors but prefer male teachers when recommending the best professor. Flegl et al. [14] revealed that experience affects evaluation results significantly more than gender, and that age has an even greater influence in some fields. Bianchini et al. [15] revealed that students' evaluation of teaching effectiveness was influenced by the instructor's age, seniority, gender, and research output, with the effect of seniority varying by discipline and research output having a positive impact on evaluation. Bao et al. [16] found that factors such as teachers' professional titles could significantly affect students' evaluations. Joye et al. [17] showed that teachers' age and gender affect their teaching evaluations. Han et al. [18] noted that teachers' education and professional titles had a significant impact on students' evaluation scores; however, the interaction between them was not insignificant. Tian et al. [19] found that teachers' age, title, professional background, and course credit have a positive effect on evaluation scores, whereas teachers' educational background has a negative impact on student evaluations. Additionally, the administrative positions of the evaluated teachers did not affect the student evaluations. Using one-way ANOVA, Li et al. [20] determined that the gender, age, and title of college teachers had no significant effect on students' evaluation scores. However, academic characteristics significantly affected evaluation scores, with notable differences between faculty members with master's degrees and those with doctoral degrees. Binderkrantz et al. [21] found no gender bias in Danish universities, but students tended to rate teachers of the same gender more highly, and this gender preference may be related to students' different perceptions of teachers' behavior and characteristics. These studies offer insights into the correlation between teacher characteristics and teaching evaluations, thereby guiding improvement in teaching quality.
(3) Other aspects.
In addition to the correlations between teacher characteristics and teaching quality
and evaluation, other scholars have also investigated the relationships between teacher
characteristics and other aspects of teaching. For instance, Jaekel et al. [22] found that a teacher's communication skills, ability to maintain good relationships with students, and time spent in the classroom are closely related to teaching quality ratings and student learning experiences. Rodríguez-García et al. [23] discovered that a teacher's instructional method had a significant impact on students' reading scores, and that positive teaching styles were associated with higher achievement. Zhang et al. [24] found that a teacher's professional characteristics had a significant impact on job satisfaction and that professional collaboration and teaching self-efficacy were key factors. Aldahdouh et al. [25] found that teachers' learning styles and background characteristics were related to their ability to implement teaching reforms during the COVID-19 pandemic. Asare [26] discovered that teachers' cognitive and emotional characteristics, as well as their teaching practices, had an impact on graduate students' statistical learning anxiety and attitudes. Marici et al. [27] found that a teacher's appearance had an impact on students' learning attitudes and evaluations of the teacher, with students being more likely to accept teachers who were more attractive. Finally, Khokhlova et al. [28] found that students exhibited gender bias in the personality trait ratings of male and female teachers, with male teachers receiving higher scores in promoting learning and engagement.
In summary, there was a correlation between teacher characteristics and teaching
quality. However, inconsistencies in research findings can be attributed to the limitations of
the classical educational science research paradigm, which decomposes complex objects
into lower level components. Empirical research with human subjects often faces challenges,
such as irreproducibility, unverifiability, and limited applicability to real-world settings.
Overcoming these difficulties is crucial to ensuring consistency in research outcomes and
meeting the needs of contemporary educational science research.
2.2. Correlation between Course Characteristics and Teaching Evaluation
Course difficulty is a complex educational issue that lacks a consensus or unified model.
Questionnaires are commonly used to examine the relationship between course difficulty
and teaching evaluations. Researchers have also explored this relationship. For instance,
Huang et al. [29] found that three dimensions of MOOCs (course vividness, teachers' subject knowledge, and interactivity) had a positive impact on students' willingness to revisit MOOCs, but course difficulty had different moderating effects on these influences. Addison et al. [30] observed that difficult courses negatively affected students' evaluations. Cheng et al. [31] found a positive correlation between students' perceptions of course difficulty and their preference for teacher-directed strategies. Cavanaugh et al. [32] noted that students' perceptions of course difficulty decreased by 6% during the COVID-19 pandemic, with a slightly greater decrease in course difficulty observed among professors without online teaching experience.
Furthermore, researchers have examined course types and their impacts on teaching
evaluations. Liu et al. [33] identified significant effects of course type on instructional effectiveness and its importance to students. Johnson et al. [34] determined that course characteristics, such as class size, course level, and elective/required status, significantly influenced student teaching evaluations. Cai [35] applied descriptive statistics and ANOVA to demonstrate that course type did not affect the teaching evaluation scores.
In summary, the correlation between course characteristics and teaching evaluation is
an important measure of teaching quality. However, inconsistencies in research findings
persist because of the limitations of the classical educational science research paradigms.
2.3. Research on Computational Pedagogy
Computational pedagogy is a research paradigm that utilizes data mining, machine
learning, and emerging technologies to analyze educational behavior. It values interdis-
ciplinarity and practical applications and addresses real educational problems. In recent
years, computational pedagogy has gained attention and matured.
Li et al. [36] discussed the necessity of establishing computational pedagogy in the era of big data. Wang et al. [37–39] explained its emergence in relation to the characteristics of the big data era and the limitations of the traditional research paradigms. Liu et al. [40,41] described computational pedagogy as an interdisciplinary discipline that uses technology- and data-intensive methods to understand educational activities and complex systems. Zheng et al. [42] emphasized the importance of computational pedagogy in China, suggesting that it can provide new research paths and improve educational decision making.
This study aimed to identify the limitations of the classical educational science research
paradigm and advocate for the adoption of computational pedagogy. By applying complex
systems science, leveraging large-scale data, and studying the characteristics and relation-
ships within the teaching and learning process, this study seeks to enhance educational
decision making and quality. Computational pedagogy offers both theoretical and technical
guidance, and this study provides empirical tests to promote theory development.
The literature review above provides clear evidence of the correlation between teacher
characteristics, course characteristics, and course-teaching quality, thus providing a direct
theoretical and empirical basis for this study. However, the following problems remain in
existing studies.
(1) Further analysis of the influential characteristics and their covariate relationships with course-teaching quality is required. Existing studies have verified only one or a few characteristics that influence teaching quality, or have dealt with only one kind of correlation, while ignoring how many influencing characteristics jointly covary with teaching quality and lacking a method to analyze the covariation among influencing characteristics.
(2) There is a need for more effective methods to verify the credibility of the quanti-
tative models of influence characteristics. Several studies have been conducted to quan-
titatively extract and model these characteristics. However, there is a lack of methods to
verify the reliability of the quantitative model of influence characteristics to ensure that it
objectively and accurately reflects changes in course teaching quality.
(3) Existing research has not been able to identify the essential characteristics contributing to teaching quality, which makes it challenging to rank course teachers based on these characteristics. Therefore, the current study aims to develop more effective methods to evaluate and identify the key influencing characteristics.
Therefore, this study adopts a computational pedagogy research paradigm to pinpoint the key influencing characteristics that affect teaching quality by exploring the correlations between various characteristics and teaching quality, and it uses this knowledge to arrange courses for teachers scientifically. This approach is expected to offer a novel and efficient means of enhancing the quality of university teaching.
3. Methods
3.1. Collection and Analysis of Teaching Quality Influence Characteristics Data
Complex and diverse characteristics influence the quality of teaching. Correlation
analyses of a single or a few characteristics are limited. We propose a human-machine
approach to intelligently acquire multiple influence characteristic data to address this issue.
These data were collected from the databases of various business platforms and multiple
data sources using machine learning and natural language processing techniques. These
data can be divided into two types: simple influence characteristics, such as gender, age, professional title, educational background, and degree; and complex influence characteristics, such as teaching evaluation data (the mean values of student, peer, and supervisor evaluations of course teaching) and TPACK characteristic data obtained from questionnaire surveys, including Technological Knowledge (TK), Content Knowledge (CK), Pedagogical Knowledge (PK), PCK, Technological Pedagogical Knowledge (TPK), Technological Content Knowledge (TCK), and TPACK.
3.2. Credible Quantitative Modeling of the Influence Characteristics of Teaching Quality
A scientific measurement method has been proposed in response to the lack of effective
methods for constructing quantitative models of influence characteristics and validating
their reliability. This method constructs quantitative models of different influence charac-
teristics and conducts reliability tests to provide credible data for identifying key influence
characteristics. Influence characteristics can be classified as simple or complex, based on
the complexity of their quantification processes.
3.2.1. Quantification of Simple Influence Characteristics
Basic teacher information was obtained using intelligent data-collection technology
on a faculty-management platform. Characteristic values, such as gender, educational
background, degree, professional title, age, and teaching age, were defined by experts’
argumentation and literature review according to the requirements of pedagogical theo-
ries and methods and machine learning data attribution to ensure the credibility of the
quantification results.
Definition 1.
Educational background (Eb). If the teacher's highest level of education is doctoral, the Eb characteristic value is set to 1; if it is master's, it is set to 0.7; and if it is bachelor's, it is set to 0.5.
Definition 2.
Degree (Dg). If the teacher’s degree is Ph.D., the Dg value is set to 1, 0.7 if it is a
master’s degree, 0.5 if it is a bachelor’s degree, and 0 otherwise.
Definition 3.
Professional title (Pt). If the teacher has a full senior title (professor or other full
seniors), the Pt characteristic value of the title is set to 1; if it is an associate senior title (associate
professor or other associates senior), it is set to 0.8; if it is an intermediate title (lecturer or other
intermediate), it is set to 0.5; and if it is a junior title (assistant professor or other junior), it is set
to 0.2.
Definition 4.
Gender (Gd). Gender characteristics were set to 1 for male teachers and 0.5 for
female teachers.
Definition 5.
Age (Ag). According to reference [43] and expert argumentation, the age of university teachers can be divided into six stages, with the Ag characteristic value set to 0.3 for those aged 35 and below, 0.4 for those aged 36–40, 0.6 for those aged 41–45, 0.7 for those aged 46–50, 0.8 for those aged 51–55, and 1 for those aged 55 and above.
Definition 6.
Teaching age (Ta). This indicates the years the teacher has been engaged in teaching, reflects the teacher's teaching development course, and is a marker of their professional growth stage in teaching. According to reference [2] and expert arguments, teaching age can be divided into six segments, with the teaching age factor set to 0.1 for less than two years, 0.3 for 3–5 years, 0.5 for 6–9 years, 0.7 for 10–15 years, 0.9 for 16–20 years, and 1 for more than 21 years.
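As a concrete illustration of Definitions 1–6, the sketch below maps a raw teacher record to the characteristic values defined above. It is a minimal sketch: the field names, category labels, and the sample record are hypothetical, and the lookup tables simply transcribe the definitions (ties at the band boundaries left ambiguous in the text are resolved to the lower band).

```python
# Minimal sketch of the simple-characteristic quantification in Definitions 1-6.
# Field names and category labels (e.g., "full_senior") are illustrative only.

EDU_BACKGROUND = {"doctoral": 1.0, "master": 0.7, "bachelor": 0.5}      # Eb (Definition 1)
DEGREE = {"phd": 1.0, "master": 0.7, "bachelor": 0.5}                   # Dg (Definition 2), 0 otherwise
TITLE = {"full_senior": 1.0, "associate_senior": 0.8,
         "intermediate": 0.5, "junior": 0.2}                            # Pt (Definition 3)
GENDER = {"male": 1.0, "female": 0.5}                                   # Gd (Definition 4)

def age_value(age: int) -> float:
    """Ag (Definition 5): six age bands; age 55 is assigned to the 51-55 band here."""
    if age <= 35: return 0.3
    if age <= 40: return 0.4
    if age <= 45: return 0.6
    if age <= 50: return 0.7
    if age <= 55: return 0.8
    return 1.0

def teaching_age_value(years: int) -> float:
    """Ta (Definition 6): six teaching-age segments; exactly 2 years falls in the lowest segment here."""
    if years <= 2: return 0.1
    if years <= 5: return 0.3
    if years <= 9: return 0.5
    if years <= 15: return 0.7
    if years <= 20: return 0.9
    return 1.0

def quantify_simple(record: dict) -> dict:
    """Turn one raw teacher record into the quantified simple characteristics."""
    return {
        "Eb": EDU_BACKGROUND[record["education"]],
        "Dg": DEGREE.get(record["degree"], 0.0),
        "Pt": TITLE[record["title"]],
        "Gd": GENDER[record["gender"]],
        "Ag": age_value(record["age"]),
        "Ta": teaching_age_value(record["teaching_years"]),
    }

# Example with a hypothetical record:
print(quantify_simple({"education": "master", "degree": "master", "title": "intermediate",
                       "gender": "female", "age": 38, "teaching_years": 7}))
```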
3.2.2. Quantification of the Complex Influence Characteristics
Owing to the complexity and diversity of the factors that influence teaching quality
and the large amount of data involved, we selected multiple characteristics that have been
verified by relevant studies as research elements closely related to teaching quality. The
following section describes the complex influencing characteristics and the methods used
to obtain them. To ensure the accuracy and reliability of our measurements, we referred to
validation methods described in the literature [44,45].
(1) Teaching quality of courses (Tq)
Teaching quality is a complex and multifaceted concept encompassing various aspects
of the teaching process. This study focuses on the teaching quality of courses, which refers
to the quality of the instruction provided by teachers in a specific course. To create a reliable
and valid scale for evaluating course-teaching quality, we adopted a standardized scale
development process.
First, we used semi-structured interviews to construct the initial measurement items
based on the connotations and influencing factors of teaching quality found in existing
literature. Second, we invited experts and students to conduct content validity tests to
ensure that the measurement items were consistent with concepts. We then conducted
a pretest using an online questionnaire to determine whether it was suitable for formal
research. Exploratory Factor Analysis (EFA) was used to streamline the items and determine
their dimensions by eliminating items that were inconsistent with the concept. The KMO
test and Bartlett’s sphericity test were conducted to determine whether the requirements
for conducting EFA were met. In the EFA process, principal component analysis and
orthogonal rotation were used to obtain the total contribution of the explained variables
and determine whether the division of the question items into dimensions was reasonable.
Confirmatory Factor Analysis (CFA) was used to verify the overall goodness of fit of the
factor structure.
Finally, we assessed the scale using Cronbach’s alpha coefficient, Construct Reliability
of the scale dimensions, and the Average Variance Extraction value to ensure that the
scale had high reliability, construct, and convergent validity, and met dimensionality and
discriminant validity requirements.
The resulting scale provides a comprehensive survey of course-teaching quality by
gathering objective data from the teaching management platform at the end of each course.
Information was collected and analyzed using intelligent data collection technology to
produce quantitative values for course-teaching quality. This approach allowed us to
identify the key influencing characteristics of course-teaching quality and their impacts on
students’ learning experiences.
(2) Teacher burnout (Tb)
Teacher burnout is a form of negative emotion and attitude related to prolonged job
stress. It typically appears in three dimensions: emotional exhaustion, depersonalization,
and personal accomplishment. Teacher burnout can negatively affect students, and may
even lead to psychological disorders. To gauge teacher burnout quantitatively, the survey by Maslach et al. [46] was used in this study. The survey was part of the Burnout Scale (MBI-Educators Survey), and consisted of 22 questions, each with five response options that corresponded to a score of 1–5. This survey has been widely used in the field of education. The quantitative model used in this study is illustrated in Figure 1.
Figure 1. Developing a quantitative model for teacher burnout.
To evaluate teacher burnout, we used the MBI educators’ survey tool, which computes
overall scores and scores for emotional exhaustion, depersonalization, and personal accom-
plishment. The sum of the scores for each dimension determined the presence or absence of
burnout. Burnout severity can be assessed by using various score ranges and combinations.
To obtain a more comprehensive and accurate understanding of teacher burnout, it is
necessary to utilize various techniques and tools, such as interviews and observations.
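A minimal sketch of how the three dimension scores and the overall score described above could be aggregated from the 22 item responses. The item-to-dimension grouping used here is an assumption for illustration (the actual MBI-ES item key is not reproduced), and the 1–5 scoring follows the description in the text.

```python
# Hypothetical item-to-dimension grouping for a 22-item burnout questionnaire scored 1-5.
# The indices below are placeholders, not the real MBI-ES item assignments.
DIMENSIONS = {
    "emotional_exhaustion": list(range(0, 9)),      # 9 items (assumed)
    "depersonalization": list(range(9, 14)),        # 5 items (assumed)
    "personal_accomplishment": list(range(14, 22)), # 8 items (assumed)
}

def burnout_scores(responses: list[int]) -> dict:
    """Sum each dimension and the overall scale for one teacher's 22 responses."""
    assert len(responses) == 22, "expected 22 item responses"
    scores = {dim: sum(responses[i] for i in idx) for dim, idx in DIMENSIONS.items()}
    scores["total"] = sum(responses)
    return scores

# Example with made-up responses (all items answered with option 3):
print(burnout_scores([3] * 22))
```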
(3) Teaching style (Ts)
Teachers’ teaching styles uniquely express their personal preferences, habits, and
long-term teaching experiences. This is a combination of personality, teaching skills, and
techniques. Scholars such as Sternberg have studied teaching styles and proposed various
theories to categorize them, including cognitive-style theory and psychological self-control
theory [
47
]. In this study, we chose Sternberg’s classification system and used the Thinking
Styles in Teaching Inventory (TSTI) to measure teachers’ teaching styles.
The TSTI contains seven subscales, each with seven questions rated on a 7-point scale.
These questions covered seven teaching styles: legislative (Leg), executive (Exe), judicial
(Jud), global (Glo), local (Loc), liberal (Lib), and conservative (Con). We incorporated the
TSTI into our teaching assessment system to ensure accurate data, and regularly distributed
it to students based on the instructor’s instructions. We used Cronbach’s alpha to verify
the credibility of the seven items, and obtained quantitative values for each teaching style.
Figure 2 illustrates the design of the quantitative model.
Figure 2. A quantitative model of teaching style.
(4) Academic Achievement (Aa).
The relationship between course-teaching and academic and pedagogical research
is significant. Academic research can improve course content to cater effectively to stu-
dents’ learning needs, whereas course instruction can promote advances in science and
technology. Research can provide teachers with more theoretical knowledge and more
effective teaching techniques to enhance the quality of their education. Thus, scholarship,
pedagogical research, and course teaching are interdependent, and their achievements
reflect scholarly excellence.
Scholarly achievements include research output and impacts. Journal articles and
monographs characterize research output, while academic impact stems from the peer
evaluation of research results. The evaluation depends on how well others value, recognize,
and cite research results.
To facilitate the evaluation, intelligent data collection techniques were employed to
gather relevant data from research management and third-party paper search platforms,
including the number of published papers, monographs, patents, and academic influence.
Data were weighed to determine mean values. Academic influence was measured using the
H-index proposed by Hirsch [
48
], which considers the frequency of citations and number
of articles to prevent the one-sided pursuit of the number of papers and stimulate faculty
members’ enthusiasm to create high-quality papers.
(5) Technological Pedagogical Content Knowledge (TPACK)
In 1986, a study by Shulman discovered that good teachers do not rely on specific
teaching methods, but rather use effective methods to help students quickly acquire and
understand knowledge [
49
]. Shulman argued that teachers’ professionalism was reflected in
their ability to transfer knowledge. During his research, Shulman identified the intersection
between the CK and PK. Park Soonhye at the University of Iowa has conducted a series of
PCK studies based on science subject areas since 2007, and has developed a PCK pentagonal
structure. This structure considers PCK to be an orientation toward teaching, student
understanding, curriculum knowledge, PK, and assessment knowledge, which are mutually
influential and interrelated.
With the continued evolution of information technology in education, teachers need
to be proficient not only in traditional CK and PK but also in information technology
knowledge and integrating technology into subject teaching. In 2009, Schmidt et al. [
50
]
proposed a new framework, TPACK, to integrate teachers’ knowledge of the effective use
of technology in teaching and learning into their expertise. TPACK encompasses PCK and
emphasizes how teachers can integrate subject matter and educational knowledge when
using technology to make teaching and learning more effective.
This study primarily used the TPACK structural framework as the theoretical basis,
and measured the TPACK level of each teacher using a questionnaire. The questionnaire
consisted of 49 questions on seven component dimensions of TPACK: TK, CK, PK, PCK,
TPK, TCK, and TPACK. The questions were divided into five categories using a Likert scale
and assigned a value of 1 to 5 for data analysis.
Before the formal questionnaire survey was conducted, a reliability test was conducted
to ensure the validity and reliability of the questionnaire. Factor analysis was conducted
on the TPACK scale using SPSS to verify the correspondence between the factors and
questions, explore the internal logical structure between the questions, and assess the
structural validity of the questionnaire.
Finally, we analyzed the reliability of the questionnaire using Cronbach’s alpha coeffi-
cient to verify consistency among the questions in the questionnaire. The modeling process
is illustrated in Figure 3.
Figure 3. The TPACK quantitative model.
Once the reliability test was completed, the TPACK scale was integrated into the
teaching assessment system. Teachers were then required to measure it regularly in order
to obtain accurate TPACK data for each dimension.
(6) Course difficulty (Cd):
There is no consensus on the concept of course difficulty in the academic community
and there is no unified model of course difficulty. Therefore, in this study, we used a
questionnaire to obtain the course difficulty of the discipline majors, using a scale ranging
from 0.1 to 1.0, with ten levels indicating varying degrees of difficulty, and in the statistics
we use Equation (1) to organize the course difficulty.
$$cdf_i = \lambda \frac{1}{n}\sum_{k=1}^{n} cd_{i,k} + (1-\lambda)\frac{1}{m}\sum_{r=1}^{m} cd_{i,r} \qquad (1)$$

where $cdf_i$ denotes the course difficulty of course i; n and m are the numbers of teacher and student questionnaires, respectively; $\lambda$ (0 < $\lambda$ < 1) is the proportion of the teacher questionnaires in the course difficulty; $cd_{i,k}$ denotes the difficulty given by the k-th teacher of course i; and $cd_{i,r}$ denotes the difficulty given by the r-th student of course i.
By embedding the course difficulty questionnaire into the teaching management
system and distributing it in a targeted manner, the characteristic difficulty values for each
course were obtained by applying Equation (1) after completing the questionnaires. This
research method can effectively solve the difficulties in measuring the difficulty of college
courses and ensure the objectivity and reliability of the research results.
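A minimal sketch of Equation (1): the weighted combination of teacher and student difficulty ratings for one course. The ratings and the weight λ used below are illustrative values, not the study's data.

```python
def course_difficulty(teacher_ratings, student_ratings, lam=0.6):
    """Equation (1): lam * mean(teacher ratings) + (1 - lam) * mean(student ratings).

    Ratings are on the 0.1-1.0 ten-level scale described in the text;
    0 < lam < 1 is the weight given to the teacher questionnaires.
    """
    t_mean = sum(teacher_ratings) / len(teacher_ratings)
    s_mean = sum(student_ratings) / len(student_ratings)
    return lam * t_mean + (1 - lam) * s_mean

# Illustrative ratings for one course from 3 teachers and 5 students:
print(round(course_difficulty([0.7, 0.8, 0.6], [0.5, 0.6, 0.7, 0.5, 0.6], lam=0.6), 3))
```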
(7) Teaching Evaluation (Te)
Using the Analytic Hierarchy Process (AHP) [51] proposed by Saaty et al., a quantitative evaluation index system was built, and multilevel, multifaceted, and diversified quantitative teaching evaluation results were used as the basis. Through the teaching management system, evaluation subjects such as supervisors, peers, and students gave their evaluations each semester, and a weighted average was computed to obtain a comprehensive evaluation result; this standard teaching evaluation system has been adopted by various schools.
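The sketch below illustrates the AHP idea used here: deriving weights for the evaluation subjects from a pairwise comparison matrix via its principal eigenvector and applying them to one teacher's scores. The comparison judgments and the scores are illustrative assumptions, not the scheme actually used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over (supervisor, peer, student) evaluations.
# A[i, j] states how much more important subject i is than subject j (Saaty's 1-9 scale).
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

# AHP weights: normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()

# Weighted-average teaching evaluation for one teacher (illustrative scores).
scores = np.array([88.0, 85.0, 90.0])   # supervisor, peer, student
print(weights.round(3), round(float(weights @ scores), 2))
```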
3.3. Designing a Method to Identify Key Influence Characteristics of Teaching Quality
We developed an analytical technique that accurately identified the key influencing
characteristics of course-teaching quality. Our approach integrates correlation, multiple
correlation, and grey correlation analyses to extract the key influencing characteristics
and ensure precise analysis results. The credibility and reliability of the outcomes were
assessed using rigorous testing. Our method employs correlation and multiple correlation
analyses to investigate the linear and nonlinear relationships between characteristics while
utilizing gray correlation analysis to evaluate the degree of gray correlation among them.
This comprehensive methodology provides valuable insights into the factors that influence
the quality of teaching.
3.3.1. Standardizing Influence Characteristics
Various characteristics can influence a course’s teaching quality, each with its own
value. To collect data, we quantified the characteristics of 27 teachers and their course-
teaching data from computer science majors at a university over a five-year period.
We standardized the data using a range normalization method to eliminate the impact
of differing magnitudes among these characteristics, ensuring that all values fell within the
normalized range of 0.1 to 0.99.
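As a concrete form of the range normalization described above, the sketch below rescales one characteristic column onto the stated 0.1–0.99 interval; the toy values are illustrative.

```python
import numpy as np

def range_normalize(x: np.ndarray, low: float = 0.1, high: float = 0.99) -> np.ndarray:
    """Rescale a 1-D characteristic vector linearly onto [low, high]."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:                      # constant column: map to the midpoint
        return np.full_like(x, (low + high) / 2)
    return low + (x - x_min) * (high - low) / (x_max - x_min)

# Example: normalizing a column of raw H-index values (illustrative).
print(range_normalize(np.array([2, 5, 9, 14])).round(3))
```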
Within this process, we prioritized the most recent values for teachers’ educational
background, degree, and professional title upgrades over the past five years. The average
score was calculated for each course’s evaluation and quality. The age and teaching age
were based on the latest available years.
By employing these methods, we obtained a normalized dataset of teaching quality
influence characteristics, denoted as Cf = (T_id (teacher ID), C_id (course ID), Tq, Eb, Dg, Pt, Gd, Ag, Ta, Tb, Leg, Exe, Jud, Glo, Loc, Lib, Con, Aa, TK, PK, CK, PCK, TCK, TPK, TPACK, Cd, Te).
3.3.2. Correlation Analysis and Multiple Correlation Analysis of Key
Influence Characteristics
A correlation analysis was conducted on the influence characteristics to examine the
relationship between teaching quality (Tq) and other characteristic variables within dataset
Cf. This analysis resulted in the construction of a correlation matrix for the characteristic
samples, denoted by matrix (2).
$$r(i,j)_{n \times n} = \begin{pmatrix} 1 & r_{1,2} & \cdots & r_{1,n} \\ r_{2,1} & 1 & \cdots & r_{2,n} \\ \cdots & \cdots & \cdots & \cdots \\ r_{n,1} & r_{n,2} & \cdots & 1 \end{pmatrix} \qquad (2)$$

where $r_{i,j} = \frac{\sum (x-\bar{x})(y-\bar{y})}{\sqrt{\sum (x-\bar{x})^2 \sum (y-\bar{y})^2}}$ represents the correlation coefficient between characteristic variables i and j, n is the number of influence characteristics, and x and y represent the values of characteristic variables i and j. It is essential to emphasize the significance of this analysis in determining the relationship between Tq and other variables in the dataset.
Through a correlation coefficient test of the correlation matrix and considering the r-value and p-value matrices, we can identify the candidate key influence characteristics that exhibit strong correlations with Tq. These characteristics, denoted as Ckf = (Tq, $x_1$, $x_2$, ..., $x_p$), where p ≤ n, are considered candidate key influences on teaching quality.
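A minimal sketch of this screening step: building the r-value and p-value matrices for a characteristic dataset and keeping the characteristics whose correlation with Tq is significant. The DataFrame layout, the synthetic data, and the significance threshold are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

def candidate_key_characteristics(cf: pd.DataFrame, target: str = "Tq", alpha: float = 0.05):
    """Return characteristics whose Pearson correlation with `target` is significant."""
    cols = [c for c in cf.columns if c != target]
    r_vals, p_vals = {}, {}
    for c in cols:
        r, p = pearsonr(cf[target], cf[c])   # correlation coefficient and its p-value
        r_vals[c], p_vals[c] = r, p
    keep = [c for c in cols if p_vals[c] < alpha]
    return keep, pd.Series(r_vals), pd.Series(p_vals)

# Illustrative dataset with a few normalized characteristics.
rng = np.random.default_rng(0)
cf = pd.DataFrame({"Tq": rng.uniform(0.1, 0.99, 60)})
cf["Te"] = 0.8 * cf["Tq"] + 0.2 * rng.uniform(0.1, 0.99, 60)   # correlated with Tq
cf["Gd"] = rng.uniform(0.1, 0.99, 60)                           # unrelated noise
keep, r_vals, p_vals = candidate_key_characteristics(cf)
print(keep)
```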
Multiple correlation analyses must be performed within the Ckf dataset to determine
whether these candidate key characteristics influence Tq. This involves creating a multi-
ple linear regression model with Tq as the dependent variable and other candidate key
characteristics as independent variables.
$$Tq = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p + \varepsilon \qquad (3)$$
To obtain a prediction model for the multiple linear regression, we used linear regression to estimate $b_0$–$b_p$:

$$\widehat{Tq} = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p \qquad (4)$$
By conducting a multiple correlation analysis between Tq and the candidate key characteristic variables, which can be transformed into a simple correlation analysis between Tq and $\widehat{Tq}$, we obtain Equation (5):

$$R = \mathrm{corr}\left(Tq, x_1, \ldots, x_p\right) = \mathrm{corr}\left(Tq, \widehat{Tq}\right) = \frac{\mathrm{cov}(Tq, \widehat{Tq})}{\sqrt{\mathrm{var}(Tq)\,\mathrm{var}(\widehat{Tq})}} = \sqrt{\frac{\sum_i \left(\widehat{Tq}_i - \overline{Tq}\right)^2}{\sum_i \left(Tq_i - \overline{Tq}\right)^2}} \qquad (5)$$
where $R^2 = \frac{\sum_i (\widehat{Tq}_i - \overline{Tq})^2}{\sum_i (Tq_i - \overline{Tq})^2}$ represents the determination coefficient that indicates the goodness-of-fit of the model. Higher $R^2$ values indicate a greater degree of explanation of the dependent variable by the independent variables. Thus, different combinations of variables were utilized to select the subset with the highest $R^2$ as representing the key influencing characteristics affecting teaching quality. The resulting subset is denoted as Kf = ($y_1$, $y_2$, ..., $y_n$), where n ≤ p. F-tests and bilateral t-tests were conducted on the correlation and determination coefficients to ensure the reliability of the analysis results. These tests validated the significance and accuracy of the analysis.
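A minimal sketch of the subset search just described: fitting a linear model of Tq on each combination of candidate characteristics and keeping the subset with the highest $R^2$ (the multiple correlation R is its square root). The exhaustive search and the use of scikit-learn are implementation choices for illustration, not the study's exact procedure.

```python
from itertools import combinations

import numpy as np
from sklearn.linear_model import LinearRegression

def best_subset(cf, candidates, target="Tq"):
    """Exhaustively search candidate subsets and return the one with the highest R^2."""
    y = cf[target].to_numpy()
    best = (None, -np.inf)
    for k in range(1, len(candidates) + 1):
        for subset in combinations(candidates, k):
            X = cf[list(subset)].to_numpy()
            r2 = LinearRegression().fit(X, y).score(X, y)   # coefficient of determination
            if r2 > best[1]:
                best = (subset, r2)
    subset, r2 = best
    return subset, r2, float(np.sqrt(r2))    # key characteristics, R^2, multiple correlation R

# Continuing the illustrative `cf` DataFrame from the previous sketch:
# print(best_subset(cf, ("Te", "Gd")))
```

Because plain $R^2$ never decreases as variables are added, in practice the F-tests and bilateral t-tests mentioned above (or an adjusted $R^2$) would be applied alongside such a search to retain only significant characteristics.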
3.3.3. Analysis of Association Degree between Key Influence Characteristics and
Teaching Quality
Correlation analysis alone does not measure the strength of the association between
the characteristic variables and teaching quality. To determine the relative strength of the
association between each key influence characteristic and teaching quality, Grey Relation
Analysis (GRA) [
52
] was employed. GRA evaluates the degree of influence of each key
influence characteristic on teaching quality.
In GRA, the reference column is denoted as Tq = Tq(k), and the comparison columns are denoted as $y_i$ = $y_i$(k), i = 1, 2, ..., p, where i indexes the p key influence characteristics, k denotes the year (k = 1, 2, ..., m), and m is the number of years. The correlation coefficient was calculated using Equation (6):
$$\zeta_i(k) = \frac{\min_i \min_k \left|Tq(k) - y_i(k)\right| + \rho \max_i \max_k \left|Tq(k) - y_i(k)\right|}{\left|Tq(k) - y_i(k)\right| + \rho \max_i \max_k \left|Tq(k) - y_i(k)\right|} \qquad (6)$$
The correlation coefficient signifies the correlation value between the comparison series for each year and the reference series. We used the annual average correlation value, $r_i = \frac{1}{m}\sum_{k=1}^{m} \zeta_i(k)$, for a holistic comparison and studied the correlation between the key influence characteristics and teaching quality.
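A minimal sketch of Equation (6) and the annual mean $r_i$: grey relational coefficients between the teaching-quality reference series and each key characteristic's comparison series, using the customary resolution coefficient ρ = 0.5. The series values below are illustrative, already-normalized numbers.

```python
import numpy as np

def grey_relational_grades(tq: np.ndarray, ys: np.ndarray, rho: float = 0.5) -> np.ndarray:
    """Equation (6) plus the per-characteristic mean r_i.

    tq : reference series, shape (m,)      -- teaching quality per year
    ys : comparison series, shape (p, m)   -- one row per key characteristic
    """
    diff = np.abs(ys - tq)                  # |Tq(k) - y_i(k)| for every i and k
    d_min, d_max = diff.min(), diff.max()   # global minima/maxima over i and k
    zeta = (d_min + rho * d_max) / (diff + rho * d_max)   # coefficients zeta_i(k)
    return zeta.mean(axis=1)                # annual averages r_i

# Illustrative 5-year series (normalized values); a larger grade means a stronger association.
tq = np.array([0.62, 0.70, 0.74, 0.71, 0.80])
ys = np.array([
    [0.60, 0.68, 0.75, 0.70, 0.82],   # e.g., Te
    [0.30, 0.55, 0.40, 0.65, 0.50],   # e.g., Gd
])
print(grey_relational_grades(tq, ys).round(3))
```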
Through the above correlation analysis, multiple correlation analysis, and grey corre-
lation degree analysis, the key characteristics of different disciplines and majors affecting
teaching quality can be identified, providing a scientific basis for the intelligent recommen-
dation of teachers of subsequent courses.
4. Results
4.1. Experimental Data
In this experiment, we collected basic information from 27 computer science teachers
at a university, including their gender, educational background, degree, professional title,
age, and teaching age. These characteristics were quantified according to predetermined
criteria. Subsequently, data related to the complex influencing characteristics were gathered
and quantified using an appropriate method.
4.1.1. Evaluation of Teaching Quality
To assess course-teaching quality, a survey was conducted on 82 courses taught by
teachers over the past five years, resulting in the collection of 1286 valid questionnaires.
By calculating the weighted average for courses taught by the same teacher, we obtained
143 sets of course-teaching-quality data. The validity of the data was verified using SPSS
and the results are presented in Table 1.
Table 1.
Teaching quality questionnaire validity analysis. Column 1 of the table is the final question
obtained based on the aforementioned scale generation method, column 2 is the maximum factor
loading coefficient of the survey results, and column 3 is the degree of commonality.
Items The Largest Factor Loadings Communalities
Q1: Well-prepared for teaching, skillful content, and full spirit. 0.615 0.752
Q2: Lecturing seriously and actively maintaining the order of classroom teaching. 0.714 0.592
Q3: The teaching content is systematic and full of information, the pace and progress are reasonably arranged, and the important and difficult points are outstanding. 0.623 0.636
Q4: Establish morality and educate people with professional knowledge and quality at the right time. 0.821 0.745
Q5: The ability to link theory with practice and reflect on the discipline frontier. 0.825 0.708
Q6: Focus on inspiration and guidance and the ability to flexibly use various teaching methods to help students accept and understand relevant knowledge and develop their thinking and creative abilities. 0.705 0.620
Q7: Using board books and modern teaching techniques in a timely and appropriate manner with good results. 0.885 0.811
Q8: Guiding students in actively participating in teaching sessions and the classroom atmosphere is enthusiastic. 0.823 0.680
Q9: Students learn something and have a high degree of mastery of their course knowledge. 0.777 0.724
Q10: Cultivate students' learning habits, learning methods, inquiring spirit, and innovative ability. 0.730 0.636
Q11: Organized, accurate lecture knowledge, focused, and difficult breakthroughs. 0.767 0.627
Q12: Fluent expression, concise language, and strong appeal. 0.716 0.583
KMO 0.704
Bartlett’s Test of Sphericity (Chi-Square) 58.789
p-value 0.001
We conducted a validity study using a factor analysis to ensure the meaningfulness
and reasonableness of the variables. This analysis assessed the appropriateness and effec-
tiveness of study items. Three key indicators were examined: KMO values, commonalities,
and factor loading.
All research items exhibited a communality value exceeding 0.4, indicating the effec-
tive extraction and summarization of information. The KMO value of 0.704 exceeded the accepted threshold of 0.7, demonstrating the adaptability of data extraction and generalization, thus reflecting high data validity.
Moreover, the maximum loading coefficient for each factor exceeded 0.4, indicating
clear correspondence between the options and factors. After confirming the validity of the
study data, reliability analysis was conducted to assess its consistency. The standardized
Cronbach’s alpha coefficient was 0.726, indicating the high consistency and stability of the
collected data.
Overall, based on our comprehensive analysis of validity and reliability, the collected
data on teaching quality were highly reliable and valid. This provided scientific and
reasonable data to support our research questions.
4.1.2. Teacher Burnout
A detailed questionnaire was administered to the 27 teachers using an educational
version of the Burnout Scale (MBI-Educators Survey). Subsequently, validity and reliability
tests were conducted on the collected data, and the results are presented in Table 2.
Table 2.
Validity and reliability statistics for the teacher burnout and their sum, including factor
loadings, corrected item-total correlation (CITC), commonalities, corresponding Cronbach’s alpha
values, and KMO.
Items Factor Loadings CITC Communalities
Emotional exhaustion 0.806 0.529 0.785
Depersonalization 0.790 0.673 0.712
Personal accomplishment 0.932 0.359 0.878
Teacher burnout 0.977 1.000 1.000
Cronbach α 0.741
KMO 0.624
Based on Table 2, the commonalities corresponding to all research items were greater
than 0.4, indicating that information on the research items can be effectively extracted.
Factor loadings greater than 0.4 indicate a correspondence between the options and the
factors. A CITC value greater than 0.3 indicates a positive correlation between the research
items and the total test score. In addition, the KMO value was 0.624, which was greater
than 0.6, indicating that the data could be extracted effectively. A Cronbach's α value of 0.741 indicated the strong consistency and stability of the collected data.
Figure 4 shows the varying burnout situations of university teachers, highlighting
the need for more extensive and comprehensive research on their work environments and
psychological conditions.
Figure 4.
A comparison of 27 teachers’ burnout. As can be seen from the comparison results, each
teacher had inconsistent results on all three dimensions of burnout and differences in total burnout.
4.1.3. Teaching Styles
The TSTI assesses teachers’ teaching styles. Student questionnaires were used to collect
1568 valid responses, enabling an analysis of the teaching styles of 27 teachers. Validity
and reliability tests were performed to ensure data accuracy; the results are presented
in Table 3.
Table 3. Validity and reliability statistics for the seven dimensions of teaching style, including factor loadings, CITC, communalities, the corresponding Cronbach's alpha value, and KMO.

Style Types     Factor Loadings   CITC    Communalities
Legislative     0.891             0.853   0.794
Executive       0.860             0.803   0.739
Judicial        0.954             0.936   0.909
Global          0.946             0.924   0.896
Local           0.937             0.916   0.878
Liberal         0.937             0.914   0.878
Conservative    0.903             0.867   0.815
Cronbach's α (all dimensions)     0.968
KMO                               0.866
Based on the results presented in Table 3, the data collected on teaching styles were of high quality and reliability. The communality value for all research items was higher than 0.4, indicating effective information extraction. Additionally, the factor-loading coefficients were above 0.4, confirming the correspondence between options and factors, and the CITC values were also above 0.4, indicating a significant correlation between the analyzed items. A KMO value of 0.866, greater than the desirable criterion of 0.6, indicated efficient information extraction from the data, and a Cronbach's alpha coefficient of 0.968 indicated high internal consistency. These high-quality data provide a solid foundation for the subsequent analyses and support scientific and effective decision making.
4.1.4. Academic Achievement
We comprehensively assessed teachers' academic achievements over the past decade by reviewing the papers they published as first or corresponding author in reputable databases (e.g., CNKI, Web of Science, and EI). We also considered the number of published monographs and filed patents as indicators of their academic contributions and innovation skills. To gauge teachers' academic influence accurately, we relied on the H-index, which considers both the number of papers and the frequency of citations and therefore provides a more precise reflection of academic impact.
All data were normalized to ensure a fair comparison of different scholarly achievements. This normalization eliminates the magnitude effect between data points, thereby evaluating diverse accomplishments using the same criteria. The results of this analysis are shown in Figure 5.
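As an illustration, the minimal sketch below shows one common way to perform such a min-max normalization in Python; the column names and example values are hypothetical and are not the authors' actual field names.

```python
import pandas as pd

# Hypothetical per-teacher academic achievement records (column names are assumptions).
achievements = pd.DataFrame({
    "teacher_id": ["T01", "T02", "T03"],
    "papers": [12, 30, 5],
    "monographs": [1, 3, 0],
    "patents": [2, 0, 4],
    "h_index": [6, 14, 3],
})

# Min-max scaling removes the magnitude effect so that all achievement
# indicators are compared on the same 0-1 scale.
metric_cols = ["papers", "monographs", "patents", "h_index"]
normalized = achievements.copy()
normalized[metric_cols] = (
    (achievements[metric_cols] - achievements[metric_cols].min())
    / (achievements[metric_cols].max() - achievements[metric_cols].min())
)
print(normalized)
```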
Figure 5. Results of academic achievement of 27 teachers.
4.1.5. TPACK
We modified the TPACK-level measurement tool for pre-service teachers to better
align with the computer science discipline. This adaptation was aimed at ensuring a
more accurate assessment of TPACK levels, specifically for computer science teachers.
The revised questionnaires were then distributed to teachers, and their feedback was collected. A thorough analysis of the questionnaire data was conducted to verify their validity and accuracy; Table 4 presents the results of this analysis.
Upon examining the results in Table 4, it is evident that all 49 questions exhibited strong
correlations with their respective factors, as indicated by factor loadings that exceeded
0.4. Furthermore, the high commonality values, surpassing 0.7, suggest that the gathered
questionnaire data are of excellent quality and consistency.
Our analysis also revealed that the KMO values for all seven dimensions exceeded
0.5, and small p-values indicated the suitability of the data for factor analysis, thereby
confirming the validity of our findings. In addition, we assessed the reliability of the data and obtained a Cronbach's alpha coefficient of 0.619; although not exceptionally high, this value fell within the acceptable range, indicating that the data were acceptably reliable.
Considering the validity and reliability analyses, we can confidently assert that the
TPACK-level data are scientifically robust and valid. The values obtained for each TPACK
dimension from 27 teachers provided a solid foundation for this study.
Table 4. Validity analysis. The table gives the factor loadings and communalities for the 49 questions of the TPACK scale, the KMO and p-values for the seven dimensions, and the overall KMO, p-value, and Cronbach's alpha.

Dimensions   Items   Factor Loadings   Communalities   KMO     p
TK           1       0.742             0.820           0.630   0.000
             2       0.459             0.915
             3       0.679             0.786
             4       0.904             0.907
             5       0.824             0.941
             6       0.439             0.965
             7       0.855             0.969
             8       0.503             0.942
             9       0.548             0.842
             10      0.458             0.890
             11      0.567             0.957
             12      0.790             0.889
             13      0.889             0.969
             14      0.856             0.899
             15      0.701             0.937
             16      0.545             0.922
PK           17      0.468             0.968           0.882   0.000
             18      0.569             0.904
             19      0.890             0.913
             20      0.542             0.907
             21      0.819             0.890
CK           22      0.791             0.940           0.631   0.000
             23      0.960             0.976
             24      0.430             0.916
             25      0.867             0.963
             26      0.553             0.868
PCK          27      0.553             0.883           0.643   0.000
             28      0.590             0.814
             29      0.825             0.789
             30      0.545             0.940
TCK          31      0.889             0.946           0.858   0.000
             32      0.622             0.931
             33      0.836             0.922
             34      0.579             0.904
             35      0.502             0.890
             36      0.469             0.925
             37      0.844             0.872
             38      0.416             0.939
TPK          39      0.840             0.902           0.822   0.000
             40      0.778             0.871
             41      0.860             0.883
             42      0.909             0.935
             43      0.409             0.904
             44      0.711             0.878
             45      0.803             0.913
TPACK        46      0.416             0.879                   0.000
             47      0.428             0.827
             48      0.790             0.896
             49      0.924             0.907
Overall KMO                  0.668
Overall p-value              0.004
Cronbach's alpha             0.619
4.1.6. Course Difficulty
To evaluate the difficulty levels of the 82 computer science courses, we conducted an
extensive survey using a detailed questionnaire. The questionnaire used a scale ranging
from 0.1 to 1.0, with ten levels indicating varying degrees of difficulty. Teachers and
students participated in the survey and provided comprehensive and insightful feedback.
To account for potential disparities in the perceptions of course difficulty between teachers and students, we introduced a weighting parameter λ, set at 0.6. This ensured fairness and comprehensiveness in our analysis, with 60% of the final difficulty assessment value attributed to teachers' assessments and 40% attributed to students' assessments. By incorporating the professional opinions of teachers and the learning experiences of students, our results achieved greater accuracy and impartiality.
We calculated the comprehensive difficulty assessment value for each course by using
the weighted average method. These values are presented in Figure 6, which clearly
illustrates our findings.
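For clarity, the weighted combination can be sketched as follows; the rating arrays are hypothetical examples, and only the weighting λ = 0.6 reflects the setting described above.

```python
import numpy as np

LAMBDA = 0.6  # weight attributed to teachers' assessments (1 - LAMBDA goes to students)

# Hypothetical mean difficulty ratings (0.1-1.0 scale) for three courses.
teacher_ratings = np.array([0.7, 0.4, 0.9])
student_ratings = np.array([0.8, 0.5, 0.6])

# Comprehensive difficulty: weighted average of the two assessment sources.
course_difficulty = LAMBDA * teacher_ratings + (1 - LAMBDA) * student_ratings
print(course_difficulty)  # [0.74 0.44 0.78]
```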
Figure 6. The difficulty of 82 computer science courses.
4.1.7. Course Teaching Evaluations
Using the Teaching Service Management System, we gathered 1014 evaluations from 27 teachers who had taught relevant courses over the past five years. These evaluations combined scores from students, peers, and supervisors to comprehensively assess teaching quality.
Multiple evaluations of the same course were averaged to eliminate bias, resulting in
141 comprehensive datasets. We then normalized the data to ensure fairness and to enable
easy analysis. The normalized data are presented in Figure 7 for a visual representation.
Figure 7. Comprehensive evaluation scores of 27 teachers’ course lectures.
Using the data acquisition and processing processes described above, we successfully
constructed an experimental dataset, Cf = (T_id,C_id,Tq,Eb,Dg,Pt,Gd,Ag,Ta,Tb,Leg,Exe,
Jud,Glo,Loc,Lib,Con,Aa,TK,PK,CK,PCK,TCK,TPK,TPACK,Cd,Te). The dataset contains
130 teaching records from 27 teachers.
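A minimal sketch of this assembly step, under the assumption of a simple tabular schema (the column names and example values are illustrative, not the actual fields of the Teaching Service Management System), might look like this:

```python
import pandas as pd

# Hypothetical raw evaluations: one row per (teacher, course, evaluator) score.
evaluations = pd.DataFrame({
    "T_id": ["T01", "T01", "T01", "T02"],
    "C_id": ["C10", "C10", "C11", "C10"],
    "score": [86, 90, 78, 92],
})
# Hypothetical teacher- and course-level characteristics (column names are assumptions).
teacher_features = pd.DataFrame({"T_id": ["T01", "T02"], "Eb": [0.42, 0.61], "TK": [0.7, 0.5]})
course_features = pd.DataFrame({"C_id": ["C10", "C11"], "Cd": [0.74, 0.44]})

# Average multiple evaluations of the same course by the same teacher,
# then min-max normalize the averaged score to obtain the teaching quality Tq.
tq = evaluations.groupby(["T_id", "C_id"], as_index=False)["score"].mean()
tq["Tq"] = (tq["score"] - tq["score"].min()) / (tq["score"].max() - tq["score"].min())

# Join teacher and course characteristics to form one record per teaching instance (Cf).
cf = (tq.drop(columns="score")
        .merge(teacher_features, on="T_id")
        .merge(course_features, on="C_id"))
print(cf)
```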
4.2. Experimental Results and Analysis
4.2.1. Correlation Analysis
Using Python’s Pandas and Numpy libraries, we performed a correlation analysis
on the Cf dataset as part of our experiment. We aimed to identify characteristics that
strongly correlate with teaching quality (Tq). To accomplish this, we calculated Pearson’s
correlation coefficient for each characteristic in relation to Tq. Table 5 presents the candidate influencing characteristics obtained by selecting the top N characteristics with correlation values exceeding a threshold of 0.05.
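A minimal sketch of this selection step, assuming the Cf records are held in a pandas DataFrame with a Tq column and one column per characteristic, is given below.

```python
import pandas as pd

def select_candidates(cf: pd.DataFrame, target: str = "Tq", threshold: float = 0.05) -> pd.Series:
    """Rank characteristics by their Pearson correlation with the target and keep
    those whose correlation exceeds the threshold (identifier columns are excluded)."""
    feature_cols = cf.columns.difference([target, "T_id", "C_id"])
    correlations = cf[feature_cols].corrwith(cf[target], method="pearson")
    return correlations.sort_values(ascending=False).loc[lambda s: s > threshold]

# candidates = select_candidates(cf)  # cf assembled as in Section 4.1.7
```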
Table 5. Correlation coefficients of Tq with different characteristics. The bold characteristics have correlation coefficients greater than the threshold value.
Characteristics Correlation Coefficients
Jud 0.186820
Exe 0.164657
Glo 0.157737
Ta 0.147954
Lib 0.143140
Con 0.137340
Loc 0.109996
Pt 0.105862
TCK 0.080511
Cd 0.076653
Leg 0.075396
Gd 0.065621
PK 0.058221
Eb 0.056243
TK 0.050682
Aa 0.044805
TPACK 0.039777
Te 0.036798
CK 0.033117
Tb 0.032526
PCK 0.031813
Ag 0.031474
Dg 0.023780
Our analysis of Table 5 identified candidate characteristics that were strongly correlated with the teaching quality (Tq) of computer courses. These include teacher characteristics and course difficulty, defined as Ckf = (T_id, C_id, Tq, Eb, Pt, Gd, Ta, Leg, Exe, Jud, Glo, Loc, Lib, Con, TK, PK, TCK, Cd).
At this stage, we had identified the candidate key characteristics associated with the quality of computer course teaching. This dataset served as the foundation for the subsequent multiple correlation analysis, which we conducted to ascertain the specific impact of these influential characteristics on teaching quality.
4.2.2. Multiple Correlation Analysis
Multiple correlation analyses were conducted to assess the combined effects of the various factors on teaching quality (Tq). The Ckf dataset contains potential correlations between the candidate key influencing characteristics and Tq. Using a multiple linear regression model (Equation (3)), we identified the key influential variables by treating the candidate key influence characteristics as independent variables and Tq as the dependent variable.
We estimated the coefficients for each independent variable through regression analysis, as
presented in Table 6.
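The fitting step itself can be sketched as follows; scikit-learn's LinearRegression is used purely for illustration, since the paper does not name the specific fitting routine, and the function and variable names are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Candidate key influence characteristics retained after the correlation analysis.
CANDIDATES = ["Eb", "Pt", "Gd", "Ta", "Leg", "Exe", "Jud", "Glo",
              "Loc", "Lib", "Con", "TK", "PK", "TCK", "Cd"]

def fit_teaching_quality_model(ckf: pd.DataFrame):
    """Regress Tq on the candidate characteristics and return the fitted model
    together with its per-characteristic coefficients, intercept, and R-squared."""
    X, y = ckf[CANDIDATES], ckf["Tq"]
    model = LinearRegression().fit(X, y)
    coefficients = pd.Series(model.coef_, index=CANDIDATES)
    return model, coefficients, model.intercept_, model.score(X, y)
```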
Table 6. Coefficients of each item of the multiple regression model.
Items Regression Coefficients
Eb 0.08762497
Pt 0.18192852
Gd 0.16146295
Ta 0.16760372
Leg 0.91275239
Exe 0.11504773
Jud 0.31069709
Glo 0.00707843
Loc 0.18233648
Lib 0.69943627
Con 0.06296348
TK 0.04605904
PK 0.09686554
TCK 0.02922831
Cd 0.09414485
Intercept 0.7192219170075398
R-squared 0.14485157622728662
These coefficients indicate the predicted effect of each independent variable on teaching quality. We established a prediction model (Equation (4)) by combining these coefficients and calculated the correlation between Tq and the predicted teaching quality using the regression model to determine the coefficient of determination, which quantifies the extent to which the independent variable explains the dependent variable (Table 7).
Table 7. Coefficients of determination for each characteristic.
Characteristics Coefficient of Determination R2
Eb 0.74849181
Pt 0.31818084
Gd 0.50910173
Ta 0.43757353
Leg 0.27792175
Exe 0.72262906
Jud 0.67067184
Glo 0.33800404
Loc 0.58113391
Lib 0.23069742
Con 0.3413322
TK 0.73740502
PK 0.45785315
TCK 0.29118119
Cd 0.54061881
Among the characteristics analyzed, those with a coefficient of determination exceeding 0.5 are highlighted in bold in Table 7. Influential characteristics such as Eb, Gd, Exe, Jud, Loc, TK, and Cd demonstrated a goodness-of-fit above 0.5 and thus contributed significantly to teaching quality (Tq). We therefore identified this set of characteristics as the key influencing factors for teaching quality in the examined computer science courses, denoted as Kf = (Eb, Gd, Exe, Jud, Loc, TK, Cd). Moving forward, our analysis focused on these key characteristics, with the aim of better understanding how they influence the quality of teaching.
4.2.3. Analysis of Association Degree between Key Influence Characteristics and
Teaching Quality
We employed GRA to accurately assess the impact of various influence characteristics
on teaching quality (Tq). GRA is a quantitative method that facilitates the evaluation
of the relationships between system factors, enabling us to identify the key influential
characteristics that have the strongest association with teaching quality.
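A minimal sketch of the GRA computation is shown below, assuming min-max normalized inputs and the conventional distinguishing coefficient ρ = 0.5 (the paper does not report the value it used).

```python
import numpy as np
import pandas as pd

def grey_relational_degree(data: pd.DataFrame, target: str = "Tq", rho: float = 0.5) -> pd.Series:
    """Grey relational degree of each characteristic with respect to the target series.

    Assumes all columns are already normalized to a comparable (e.g., 0-1) range and
    that `data` contains only the target and the key influence characteristics.
    rho is the distinguishing coefficient; 0.5 is the conventional default.
    """
    reference = data[target].to_numpy()
    features = data.drop(columns=[target])
    # Absolute differences between every characteristic series and the reference series.
    delta = np.abs(features.to_numpy() - reference[:, None])
    d_min, d_max = delta.min(), delta.max()
    # Grey relational coefficients, averaged over samples for each characteristic.
    xi = (d_min + rho * d_max) / (delta + rho * d_max)
    return pd.Series(xi.mean(axis=0), index=features.columns)

# association = grey_relational_degree(cf[["Tq", "Eb", "Gd", "Exe", "Jud", "Loc", "TK", "Cd"]])
```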
Table 8 presents the degree of grey correlation between each influencing characteristic within the Kf dataset and Tq. As the correlation value approaches 1, the influence of the characteristic on teaching quality becomes more pronounced.
Table 8. Analysis of the correlation between key impact characteristics and teaching quality.
Key Influence Characteristics Association Degree
Eb 0.65991873
Gd 0.50405441
Exe 0.72121656
Jud 0.77624262
Loc 0.73482096
TK 0.71722238
Cd 0.71548622
The data in Table 8 indicate that all the key characteristics are strongly associated with teaching quality, with association degrees exceeding 0.5. This reaffirms our earlier findings from the multiple correlation analyses and deepens our understanding of the factors that impact teaching quality. The grey correlation analysis underscores the significance of these key influence characteristics and establishes a basis for future, more targeted optimization efforts.
4.2.4. Validation of Teaching Quality Based on Key Influence Characteristics
To validate the interpretability and impact of the identified key influence characteristics, we conducted an empirical analysis using teaching data from 29 courses taught by eight teachers over five years.
Our initial step involved constructing a validation dataset (Vf ), following the same
data collection and standardization process. Subsequently, we employed the multiple
linear regression model outlined in Equation (4) to predict the teaching quality based on the
identified key influence characteristics. The predicted results were compared with actual
teaching quality, as shown in Figure 8.
Figure 8 illustrates a close alignment between the predicted teaching quality derived
from the key influence characteristics and actual teaching quality. The Mean Squared Error
(MSE) calculation yielded a small value of 0.008051157, indicating the accuracy of our
predictions. This validation outcome confirmed the significant impact of the identified key
influence characteristics on teaching quality. Furthermore, our multiple linear regression
model demonstrated high prediction accuracy, offering a valuable perspective on teaching
quality and establishing a robust foundation for improving teaching practice.
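In code, the validation step amounts to applying the fitted regression model to the new records and comparing its predictions with the observed quality; a hedged sketch reusing the illustrative names from the regression sketch above is:

```python
from sklearn.metrics import mean_squared_error

def validate_model(model, vf, characteristics):
    """Predict teaching quality for the validation records and return the MSE
    against the observed quality scores (vf is assumed to share the Cf schema)."""
    predicted = model.predict(vf[characteristics])
    return mean_squared_error(vf["Tq"], predicted)

# mse = validate_model(model, vf, CANDIDATES)  # the paper reports an MSE of about 0.008
```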
Figure 8. Comparison of teaching quality and predicted value. The dark blue color in the figure shows the teaching quality scale survey results, and the red dotted line shows the predicted quality of teaching based on key influential characteristics.
5. Conclusions and Discussion
This study utilized a computational pedagogy approach to identify the key influential
characteristics that impact the quality of teaching in computer science courses. Our findings
revealed that teacher and course characteristics significantly influence teaching quality,
which is consistent with the existing literature [53,54], emphasizing the importance of teacher and course characteristics in improving educational quality.
In addition to the teacher and course characteristics identified in our study, other
factors such as teaching interaction, communication skills, the ability to maintain good
relationships, instructional methods, and cognitive and emotional factors may also play
a role in teaching quality. Future research could explore these factors to provide further
insight into improving the quality of computer science courses.
To gain a deeper understanding of the complex interplay between gender, burnout,
teaching style, and teaching quality, future studies should investigate how these factors
interact with teachers and the course characteristics identified in our study. A mixed-
methods approach that combines quantitative analysis with qualitative data collection is
ideal for this purpose.
However, further exploration of this research paradigm based on computational pedagogy is necessary. First, the accuracy and persuasiveness of this study largely depended on data quality and credibility [55]. Therefore, future research should focus on quality control
during the data collection, processing, and analysis. In addition, this study proposes a novel
computational model that requires further validation for applicability and effectiveness in
different contexts. Second, the characteristics that affect the quality of teaching may vary
across disciplines [56], necessitating broader research to expand the scope of this study.
Moreover, the applicability of the model constructed in this study to other research areas
related to teaching quality should be tested.
Despite these challenges, our research provides a new perspective and tool to improve
the teaching quality of computer science courses. Our approach emphasizes the correlation
between teaching quality and teacher and course characteristics. This methodology can be
adjusted and optimized according to various research backgrounds and requirements.
In summary, our study offers an effective approach to enhancing the teaching quality of
computer science courses. Although further research is needed to explore additional factors
influencing teaching quality and validate our proposed model, our findings contribute to a
better understanding of the factors that impact teaching quality. Future research can build
on our findings to develop more effective strategies to improve educational outcomes in
computer science courses.
Author Contributions: Conceptualization, D.Y. and J.L.; methodology, D.Y.; software, J.L.; validation, D.Y. and J.L.; formal analysis, D.Y. and J.L.; investigation, D.Y.; resources, J.L.; data curation, D.Y. and J.L.; writing—original draft preparation, D.Y.; writing—review and editing, J.L.; visualization, D.Y.; supervision, J.L.; funding acquisition, D.Y. All authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the Natural Science Foundation of Hunan Province (grant number 2022JJ50316).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Saloviita, T.; Pakarinen, E. Teacher Burnout Explained: Teacher-, Student-, and Organisation-Level Variables. Teach. Teach. Educ. 2021, 97, 103221. [CrossRef]
2. Tan, J.; Deng, Y.; Yang, L. A Study on the Effects of Teaching Age on Burnout, Appreciative Social Support, and Trait Coping Styles of Young College Teachers. Heilongjiang Res. High. Educ. 2011, 2011, 110–112. [CrossRef]
3. Palali, A.; van Elk, R.; Bolhaar, J.; Rud, I. Are Good Researchers Also Good Teachers? The Relationship between Research Quality and Teaching Quality. Econ. Educ. Rev. 2018, 64, 40–49. [CrossRef]
4. Sacre, H.; Akel, M.; Haddad, C.; Zeenny, R.M.; Hajj, A.; Salameh, P.