International Journal on Emerging Technologies 12(2): 177-182(2021)
ISSN No. (Print) : 0975-8364
ISSN No. (Online) : 2249-3255
Sentiment Analysis of Students’ Feedback before and after COVID-19 Pandemic
M. Umair1, A. Hakim1*, A. Hussain1 and S. Naseem2
1Researcher, Department of Computer Science, MNS University of Agriculture, Multan, Pakistan.
1Assistant Professor, Department of Computer Science, MNS University of Agriculture, Multan, Pakistan.
1Assistant Professor, Department of Computer Science, MNS University of Agriculture, Multan, Pakistan.
2Lecturer, Department of Statistics, MNS University of Agriculture, Multan, Pakistan.
(Corresponding author: A. Hakim)
(Received 05 April 2021, Revised 26 May 2021, Accepted 22 June 2021)
(Published by Research Trend, Website: www.researchtrend.net)
ABSTRACT: The COVID-19 pandemic has impacted most higher education institutes by forcing a shift to online teaching. Due to this shift, students increasingly use e-text, online learning management systems, social media apps, and micro-blogging platforms to provide feedback and comments about a course and the online classroom experience. This feedback is important for education institutes and can be used to improve the teaching and learning experience. One of the major problems is extracting useful information from large volumes of comments and feedback. This study presents a hybrid approach to student sentiment analysis based on class feedback collected through Google survey forms and the WhatsApp social media platform before and during the pandemic. Support Vector Machine and Naïve Bayes algorithms have been used for classification and comparative analysis, and an average accuracy of 85.62% has been achieved using the Support Vector Machine with k-fold cross-validation.
Keywords: COVID-19, Opinion mining, Sentiment analysis, Online feedback, Natural language processing.
Abbreviations: COVID, Coronavirus Disease; AI, Artificial Intelligence; ML, Machine Learning; LMS, Learning Management System; LIWC, Linguistic Inquiry and Word Count; NB, Naïve Bayes; MNB, Multinomial Naïve Bayes; RF, Random Forest; SGD, Stochastic Gradient Descent; SVM, Support Vector Machine; NLTK, Natural Language Toolkit; VADER, Valence Aware Dictionary and sEntiment Reasoner; NLP, Natural Language Processing; GSP, Generalized Sequential Pattern; TF-IDF, Term Frequency-Inverse Document Frequency; BOW, Bag of Words.
I. INTRODUCTION
Sentiment analysis, also known as opinion mining or opinion AI, is one of the leading research areas due to the increasing use of e-text, online learning management systems (LMS), and microblogging platforms, including Twitter, WhatsApp, Facebook, AnswerGarden, Poll Everywhere, and online blogs. People use these platforms to express their thoughts and opinions regarding any product, place, or event throughout the world. This feedback is critical for the organizations concerned to satisfy their customers. One of the major problems is extracting useful information from feedback and comments to improve the quality of products or services.
Sentiment analysis is the process of classifying data into categories such as positive, negative, or neutral. In this work, we gathered student feedback in two scenarios, before and during the COVID-19 pandemic, from WhatsApp groups and Google forms. Students' textual comments in both scenarios were analyzed using sentiment analysis to identify their opinions about the quality of teaching and learning. One objective of this system is to help university faculty and administration find gaps between students' learning and tutors' teaching quality.
Sentiment analysis is mostly done using three techniques: a lexicon-based approach, a machine learning approach, and a hybrid technique that combines both [1, 2]. In the lexicon-based technique, a pre-defined dictionary is used in which words are already weighted according to their sentiment. The most common tools for performing lexicon-based analysis are SenticNet, SentiStrength, and Linguistic Inquiry and Word Count (LIWC) [3]. In the machine learning approach, the common algorithms for sentiment classification are Naïve Bayes (NB), Multinomial Naïve Bayes (MNB), Random Forest (RF), Stochastic Gradient Descent (SGD), and Support Vector Machine (SVM), along with hybrid techniques that combine these algorithms.
Due to the COVID-19 pandemic, almost all higher education institutes shifted from face-to-face or blended teaching to fully online teaching, and many students faced difficulties with online learning. To facilitate them, it is critical to design a system that takes feedback from students and provides an overall class opinion to teachers and administrators so they can revise their teaching and assessment methodology. This can be done by classifying textual data into positive, negative, and neutral classes using natural language processing algorithms. It helps determine students' level of understanding and any difficulties they face after each class, and the resulting positive, negative, and neutral feedback can be used to evaluate tutor performance.
In this work, we have performed classification and analysis of students' feedback before and after the COVID-19 pandemic using machine learning techniques. Data was collected through online Google forms, the LMS, and
WhatsApp group messages of selected courses. Annotation of students' feedback on classes before and during the pandemic has been performed using the open-source tools TextBlob and VADER [4]. These tools were integrated with the Natural Language Toolkit (NLTK) Python library using Jupyter Notebook.
VADER (Valence Aware Dictionary and sEntiment Reasoner) is a rule-based, lexicon-based sentiment analysis tool attuned to the sentiment expressed on social media and other types of data [5]. VADER differs from other sentiment analysis tools in that it does not classify text directly into the discrete categories of positive, negative, or neutral. Instead, VADER generates a compound score between -1 and +1 that indicates where the text falls on the range from negative to positive.
TextBlob is a Python library for processing textual data that provides a simple application programming interface for common natural language processing (NLP) tasks such as part-of-speech tagging, noun phrase extraction, sentiment analysis, classification, and translation. It likewise generates a score between -1 and +1. For both TextBlob and VADER, the score grading is such that values below zero are considered negative, zero is considered neutral, and values above zero are considered positive. For classification, Naïve Bayes and Support Vector Machine [6] have been used due to their high accuracy on textual data [7].
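A minimal sketch of this scoring scheme in Python (assuming the textblob and nltk packages are installed; the snippet is illustrative rather than the study's exact pipeline):

# Annotate one feedback message with TextBlob and VADER and map the
# scores to the three classes described above.
import nltk
from textblob import TextBlob
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon', quiet=True)  # one-time lexicon download
vader = SentimentIntensityAnalyzer()

def label(score):
    # Score grading: below zero negative, zero neutral, above zero positive.
    if score < 0:
        return 'negative'
    return 'positive' if score > 0 else 'neutral'

feedback = "Today's lessons are understandable and the topic is awesome"
tb_score = TextBlob(feedback).sentiment.polarity        # in [-1, +1]
vd_score = vader.polarity_scores(feedback)['compound']  # in [-1, +1]
print(label(tb_score), label(vd_score))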
This paper is divided into five sections. The remainder of this section reviews the literature related to sentiment analysis and opinion mining and discusses the main approaches and techniques for sentiment classification. The proposed methodology is described in Section II. Results and discussion are presented in Section III, followed by the conclusion and future work, which are discussed in Sections IV and V.
Sentiment analysis of text data on the Internet can be useful in many applications, such as product reviews, political campaigns, popular topics, and educational improvements, but it is most often applied to feedback analysis [8]. Feedback is received from several domains, including advertising [9], movies [3, 10, 11, 12], products [4, 10, 13], automobiles, tourism, smartphones [7, 14], and education and learning [15-17]. We focus on an overview of previous work related to sentiment analysis of education-related data.
An opinion is someone's feelings, beliefs, or judgments about an important issue in a particular situation and is generally considered subjective. Studies have shown that opinions have a major impact not only on facts but also on individual decisions, as well as on communities such as organizations and government sectors. The terms sentiment analysis and opinion mining are often used interchangeably in the field of text data mining, which extracts opinions from evaluation texts and classifies the polarity of opinions into positive or negative rankings based on the valence of the text [18].
In the field of education, very few studies have focused on sentiment analysis of online learning, such as [19-23], and only one feedback study was conducted on classroom-related data [24]. In [25], the authors used two pattern mining algorithms, Apriori and Generalized Sequential Pattern (GSP), to extract opinion words from student feedback datasets for evaluation of tutor performance. The evaluation showed that GSP performs better than Apriori in extracting opinion words.
In [26], the authors detected polarity (positive, negative, and neutral) in Facebook comments related to e-learning. The positive class consisted of happy and excited emotions, and the negative class included emotions such as anger, sadness, and profanity. The method was evaluated on 1000 positive and negative text messages and Facebook statuses.
Most sentiment analysis techniques have focused on after-class feedback and classified students' feedback into discrete categories. In this work, we perform a comparative analysis of students' feedback in blended and fully online learning, corresponding to the scenarios before and during the COVID-19 pandemic, respectively.
II. MATERIALS AND METHODS
The proposed framework for this research (shown in Fig. 1) consists of five phases: dataset collection and pre-processing, data annotation/polarity calculation, feature extraction, classification, and visualization.
Fig. 1. Proposed Framework for Sentiment Analysis.
A. Data Collection
For data collection, three different sources were used. The first was before the COVID-19 pandemic, when classes were conducted in traditional or blended mode: feedback from undergraduate and postgraduate students was collected during and after each class of the selected courses in the Department of Computer Science at MNS-University of Agriculture, using a Google questionnaire form designed in coordination with the supervisor. The second source was feedback collected through an online Google form provided to students at the end of each class.
During the COVID-19 pandemic, all classes were shifted to fully online mode, and the third source was feedback extracted from students' WhatsApp groups (see Appendix-I for details). The students sent textual comments/messages expressing their opinions and feedback about the lecture. An unlabelled dataset of around 2000 instances was collected from all sources. A word cloud of all words in the dataset is shown in Fig. 2.
Fig. 2. Word cloud of the dataset.
There were several challenges in classifying sentiments. For instance, the data collected from WhatsApp messages was unstructured, which made it hard to annotate and classify using machine learning techniques. The students' feedback data was also noisy. For example, in feedback such as "Today's lessons are understandable and the topic is awesome", the word 'awesome' appeared with several spelling deviations, such as "awsem", "awssuumm", and "awsomee". The process of data cleaning is discussed in the next section.
B. Pre-processing
Pre-processing the input data is a significant step before classification. At this stage, the dataset is first normalized and then prepared for training the classification algorithms, so that the proposed algorithms work efficiently and produce accurate results in less time [32]. We have used the pre-processing steps of stemming, stop-word handling, lowercase conversion, and tokenization [9, 27, 28, 29] to clean up junk data and increase accuracy by reducing data errors. Pre-processing involves a trade-off: without it, the system may lose the importance of words, while extensive pre-processing may cause the loss of important data. The open-source web application Jupyter Notebook (Anaconda 3) has been used with the Python language for statistical modelling, numerical simulation, data visualization, data cleaning and transformation, and machine learning model implementation.
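A hedged sketch of such a cleaning pipeline with NLTK (the exact parameters used in the study may differ, and the resource names can vary by NLTK version):

# Pre-processing sketch: tokenize, lowercase, remove stop-words, and stem.
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download('punkt', quiet=True)
nltk.download('stopwords', quiet=True)

stop_words = set(stopwords.words('english'))
stemmer = PorterStemmer()

def preprocess(text):
    tokens = word_tokenize(text.lower())                 # lowercase + tokenize
    tokens = [t for t in tokens if t.isalpha()]          # drop punctuation/junk
    tokens = [t for t in tokens if t not in stop_words]  # stop-word handling
    return [stemmer.stem(t) for t in tokens]             # stemming

print(preprocess("Today's lessons are understandable and the topic is awesome"))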
C. Data Annotation
Polarity detection of the textual data has been performed using two methods: TextBlob and VADER. TextBlob annotated the dataset into 650 negative, 900 neutral, and 450 positive feedback instances, as shown in Table 1. The problem with TextBlob annotation is that if feedback received from a student is simply 'yes' or 'no' to a question, TextBlob annotates it as neutral instead of positive or negative based on the context. To perform more context-aware annotation, we used VADER sentiment analysis, which annotated the dataset into 633 negative, 793 neutral, and 574 positive feedback instances, as shown in Table 2.
Table 1: Dataset annotation using TextBlob.
Class       Before COVID    During COVID
Negative         160             490
Neutral          400             500
Positive         240             210
Total            800            1200
Table 2: Dataset annotation using Vader.
Class       Before COVID    During COVID
Negative         134             499
Neutral          450             343
Positive         323             251
Total            800            1200
D. Feature Extraction
After annotating the dataset using the two techniques, we applied vectorization, Term Frequency-Inverse Document Frequency (TF-IDF), and bag-of-words techniques for feature extraction.
Vectorization: Machine learning (ML) algorithms operate best on numeric values, typically two-dimensional feature matrices in which rows represent instances and columns represent features. To apply ML to text, each document therefore needs to be converted into a vector representation. This numerical representation of the documents enables meaningful analysis by ML algorithms.
Term Frequency-Inverse Document Frequency (TF-IDF): TF-IDF provides a numerical statistic that states the importance of a word in a document, which helps in sentiment analysis [1]. The number of occurrences of a term in a given document is referred to as its term frequency. Frequency evaluation of commonly used words has been performed in the TF-IDF phase to identify important words in the dataset; the most frequently used words are 'yes', 'no', 'excellent', 'bad', 'sad', and 'happy'.
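An illustrative TF-IDF sketch using scikit-learn (an assumed library choice; the study's exact configuration is not stated), where a term's weight grows with its frequency in a document and shrinks with how common it is across documents:

# TF-IDF sketch: turn a small corpus of feedback into weighted vectors.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "yes the lecture was excellent",
    "no the class was bad and sad",
    "yes happy with the lecture",
]
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)           # sparse matrix: rows=docs, cols=terms
print(tfidf.get_feature_names_out())    # learned vocabulary
print(X.shape)                          # (3, number_of_terms)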
Bag of words (BOW): Extracting information is a key phase in text mining and serves as the starting point for many data mining algorithms. Extracting entities and their relationships from text may reveal meaningful semantic information beyond a general bag-of-words representation, and it is necessary for understanding the hidden knowledge in text data. For feature extraction, we have used the common bag-of-words (BOW) approach. In this approach, a list of the unique words in the dataset is created, referred to as the vocabulary. The approach analyses the histogram of words within the text, treating each word as a feature and assigning a value of 1 if the word is present and 0 if it is absent in the vector representation of the data.
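A minimal sketch of this binary bag-of-words representation (again assuming scikit-learn as a stand-in for however the vocabulary was built in the study):

# Binary BOW sketch: 1 if a vocabulary word occurs in the feedback, else 0.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["class was awesome", "class was bad", "awesome awesome lecture"]
bow = CountVectorizer(binary=True)      # presence/absence, not raw counts
X = bow.fit_transform(docs)
print(bow.get_feature_names_out())      # vocabulary: unique words in dataset
print(X.toarray())                      # one 0/1 vector per document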
E. Classification
After feature extraction, we used Naïve Bayes and Support Vector Machine (SVM) for classification. Both are well-known supervised machine learning algorithms that different researchers have used for classification in the literature [7, 30], and comparisons have shown that both achieve better accuracy on text classification than other supervised machine learning classifiers [31].
The SVM classifier works well for classifying sparse text data by defining linear partitions of the dataset that divide it into different classes. SVM can also use kernel functions to transform the data so that hyperplanes can separate the classes efficiently. The Naïve Bayes classifier, on the other hand, is the most commonly used text mining classifier; it uses Bayes' theorem to calculate the probability of a given label given a particular feature.
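A hedged sketch of the two classifiers on TF-IDF features (the tiny inline dataset and hyper-parameters are placeholders, not the study's data):

# Classification sketch: Multinomial Naïve Bayes vs. a linear-kernel SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

texts = ["lecture was excellent", "class was bad and boring", "ok lecture",
         "awesome topic today", "sad and confusing class", "average class"]
labels = ["positive", "negative", "neutral",
          "positive", "negative", "neutral"]

for clf in (MultinomialNB(), SVC(kernel='linear')):
    model = make_pipeline(TfidfVectorizer(), clf)   # vectorize + classify
    model.fit(texts, labels)
    print(type(clf).__name__, model.predict(["the lecture was awesome"]))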
III. RESULTS AND DISCUSSION
To analyze the classifiers' performance within the sentiment analysis framework on both the TextBlob and Vader annotated datasets, three measures (recall, precision, and F-measure) have been used to evaluate the proposed method.
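These class-wise measures can be computed as in the following sketch (y_true and y_pred are placeholders for the annotated labels and model predictions):

# Evaluation sketch: class-wise precision, recall, and F-measure.
from sklearn.metrics import classification_report

y_true = ["positive", "negative", "neutral", "positive", "negative"]
y_pred = ["positive", "negative", "positive", "positive", "neutral"]
print(classification_report(y_true, y_pred, zero_division=0))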
A. TextBlob annotated dataset classification results
To optimize results and reduce model overfitting, we used k-fold cross-validation, selecting the best value of k through hyper-parameter tuning with grid search for both the TextBlob and Vader annotated datasets. For the TextBlob annotated dataset, k = 5 gave the maximum scores after grid-search tuning: mean recall, precision, and F-measure of 70.0%, 71.0%, and 69.0%, with 70.8% accuracy using SVM and 69.8% accuracy using Naïve Bayes. Class-wise (negative, neutral, positive) results are given in Table 3, and a sketch of the tuning loop follows Table 3. Fig. 3 shows the classification results after TextBlob annotation as a pie chart: 63% neutral, 29% positive (high positive + positive), and 8% negative (high negative + negative) responses in the entire dataset.
Fig. 3. Classification results of TextBlob annotated data.
Table 3: Results for TextBlob annotated dataset using train-test split with SVM.
Class       Precision    Recall    F-Measure
Negative       0.61       0.72       0.70
Neutral        0.50       0.57       0.53
Positive       0.71       0.69       0.70
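A sketch of the fold-count selection and grid search described above (the parameter grid and the tiny dataset are assumptions for illustration):

# Cross-validation sketch: grid-search SVM hyper-parameters with k folds.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

texts = ["great lecture", "awesome topic", "excellent class", "happy today",
         "loved the session", "bad class", "boring lecture", "sad session",
         "confusing topic", "terrible audio", "ok class", "average lecture",
         "fine session", "normal topic", "usual class"]
labels = ["positive"] * 5 + ["negative"] * 5 + ["neutral"] * 5

pipe = Pipeline([("tfidf", TfidfVectorizer()), ("svm", SVC())])
grid = {"svm__kernel": ["linear", "rbf"], "svm__C": [0.1, 1, 10]}
search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")  # k = 5 folds
search.fit(texts, labels)
print(search.best_score_, search.best_params_)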
B. Vader annotated dataset classification results
One issue with TextBlob annotation is that feedback consisting only of 'Yes' or 'No' to a question is annotated as neutral rather than positive or negative, which is why the neutral share of the graph is so large. To deal with this problem we moved to Vader sentiment analysis, whose annotation results are more reliable than TextBlob's. To optimize results and overcome the model overfitting problem we again used k-fold cross-validation; k = 15 provided the maximum scores after grid-search hyper-parameter tuning for both algorithms (SVM and Naïve Bayes). The obtained mean accuracies are 73.3% using Naïve Bayes and 85.6% using the Support Vector Machine, the latter being clearly higher. According to the results, the precision, recall, and F-measure of the Support Vector Machine are 85.0%, 83.0%, and 82.0%, respectively. Class-wise (negative, neutral, positive) results are shown in Table 4.
Table 4: Results for Vader annotated dataset using train-test split with SVM.
Class       Precision    Recall    F-Measure
Negative       0.71       0.99       0.83
Neutral        0.55       0.60       0.63
Positive       0.97       0.94       0.53
Fig. 4 shows a pie chart of all Vader-annotated feedback after classification: 42% neutral, 41% positive, and 17% negative responses in the entire dataset.
Fig. 4. Pie chart of all polarity values (annotated using Vader).
Fig. 5. Pie chart of manual annotation.
After TextBlob and Vader, we also annotated the dataset based on the opinions of three human experts. Fig. 5 shows that the human experts assigned 54% positive, 19% neutral, and 27% negative responses in the entire dataset.
Table 5 compares the annotation percentages of TextBlob, VADER, and manual annotation. The differences in polarity distribution arise because TextBlob and VADER use different libraries and lexicons for the polarity distribution process. VADER sentiment analysis is well suited to social media and even educational text; it relies on a dictionary of emotional words in which each word is assigned a positive or negative score.
Table 5: Comparison of dataset annotation using different methods.
Annotation Method     Positive (%)    Negative (%)    Neutral (%)
TextBlob                   29               8              63
VADER                      41              17              42
Manual Annotation          54              27              19
IV. CONCLUSION
The purpose of this research is to present a comparative analysis of students' feedback before and during the COVID-19 pandemic, when educational institutes shifted from face-to-face learning to fully online learning systems. Students' feedback was collected through different online platforms, including WhatsApp and Google forms. We applied both Naïve Bayes and Support Vector Machine supervised machine learning algorithms for classification and compared their performance on the given dataset. The Support Vector Machine worked best for text-polarity classification in our study. It was found that there were more negative feedback instances during fully online classes than in the blended teaching mode (Table 2). The findings can help tutors design strategies for improving teaching in fully online classes after each class.
V. DISCUSSION & FUTURE SCOPE
Due to the COVID-19 pandemic, most universities around the world have shifted to blended or fully online learning systems. In this scenario, there is a need to stay informed about students' sentiments and opinions when student-teacher interaction is minimal. There is potential to merge the proposed system with real-time facial expression analysis to develop an information board for tutors. The results presented in this paper show the performance of different annotation methods and classification techniques for sentiment analysis of textual data. Based on the results, it is recommended to use VADER for annotation and SVM for classification of textual data. We are also working on automated detection of learners' engagement by analyzing their facial expressions. We plan to compare the results of both systems to find the difference (if any) between the written (through comments/text messages/forms) and expressed (through facial) sentiments of the students. The comparative analysis could be applied in the university's quality assurance program to assess and improve the student-tutor relationship.
Conflict of Interest: There is no conflict of interest relevant to the publication of this paper.
REFERENCES
[1]. Pang, B., & Lee, L. (2009). Opinion mining and
sentiment analysis. Comput. Linguist, 35(2), 311-312.
[2]. Saif, H., He, Y., Fernandez, M., & Alani, H. (2016).
Contextual semantics for sentiment analysis of Twitter.
Information Processing & Management, 52(1), 5-19.
[3]. Ahmad, M., Aftab, S., Muhammad, S. S., &
Waheed, U. (2017). Tools and techniques for lexicon
driven sentiment analysis: a review. Int. J. Multidiscip.
Sci. Eng, 8(1), 17-23.
[4]. Elli, M. S., & Wang, Y.-F. (2016). Amazon Reviews,
business analytics with sentiment analysis.
[5]. Hutto, C., & Gilbert, E. (2014). VADER: A parsimonious rule-based model for sentiment analysis of social media text. In Proceedings of the International AAAI Conference on Web and Social Media, 8(1).
[6]. Cortes, C., & Vapnik, V. (1995). Support-vector
networks. Machine learning, 20(3), 273-297.
[7]. Ahmad, M., Aftab, S., Bashir, M. S., & Hameed, N.
(2018). Sentiment Analysis using SVM: A Systematic
Literature Review. International Journal of Advanced
Computer Science and Applications, 9(2), 182-188.
[8]. Jagtap, V., & Pawar, K. (2013). Analysis of different
approaches to sentence-level sentiment classification.
International Journal of Scientific Engineering and
Technology, 2(3), 164-170.
[9]. Altawaier, M.M., & Tiun, S. (2016). Comparison of
machine learning approaches on arabic twitter
sentiment analysis. International Journal on Advanced
Science, Engineering and Information Technology, 6(6),
1067-1073.
[10]. Ahmad, M., Aftab, S., & Ali, I. (2017). Sentiment
analysis of tweets using SVM. Int. J. Comput. Appl.,
177(5), 25-29.
[11]. Asur, S., & Huberman, B. A. (2010). Predicting the
future with social media. Paper presented at the 2010
IEEE/WIC/ACM international conference on web
intelligence and intelligent agent technology.
[12]. Pang, B., Lee, L., & Vaithyanathan, S. (2002).
Thumbs up? Sentiment classification using machine
learning techniques. Paper presented at the
Proceedings of the ACL-02 conference on Empirical
methods in natural language processing-Volume 10.
[13]. Thelwall, M., & Buckley, K. (2013). Topic‐based
sentiment analysis for the social web: The role of mood
and issue‐related words. Journal of the American
Society for Information Science and Technology, 64(8),
1608-1617.
[14]. Zavvar, M., Rezaei, M., & Garavand, S. (2016).
Email spam detection using combination of particle
swarm optimization and artificial neural network and
support vector machine. International Journal of Modern
Education and Computer Science, 8(7), 68.
[15]. Qiuhan, L., Afzaal, M., Alaudan, R., & Younas, M.
(2020). COVID 19 Pandemic and Online Education in
Hong Kong: An Exploratory Study. International Journal
on Emerging Technologies, 11(5), 411-418.
[16]. Arshad, M., Almufarreh, A., Noaman, K. M., &
Saeed, M. N. (2020). Academic Semester Activities by
Learning Management System during COVID-19
Pandemic: A Case of Jazan University. International
Journal on Emerging Technologies, 11(5), 213-219.
[17]. Poulos, A., & Mahony, M. J. (2008). Effectiveness
of feedback: The students’ perspective. Assessment &
Evaluation in Higher Education, 33(2), 143-154.
[18]. Medhat, W., Hassan, A., & Korashy, H. (2014).
Sentiment analysis algorithms and applications: A
survey. Ain Shams engineering journal, 5(4), 1093-
1113.
[19]. Altrabsheh, N., Cocea, M., & Fallahkhair, S.
(2014b). Sentiment analysis: towards a tool for
analysing real-time students’ feedback. Paper presented
at the 2014 IEEE 26th international conference on tools
with artificial intelligence.
[20]. Aung, K.Z., & Myo, N.N. (2017). Sentiment
analysis of students' comment using lexicon-based
approach. Paper presented at the 2017 IEEE/ACIS 16th
International Conference on Computer and Information
Science (ICIS).
[21]. Chamlertwat, W., Bhattarakosol, P., Rungkasiri, T.,
& Haruechaiyasak, C. (2012). Discovering Consumer
Insight from Twitter via Sentiment Analysis. J. UCS,
18(8), 973-992.
[22]. Jing, L., & Yanqing, Z. (2012). Teaching evaluation
method based on least squares support vector machine
and chaos particle swarm optimization algorithm.
JDCTA: International Journal of Digital Content
Technology and Its Applications, 6, 343-351.
[23]. Patel, T., Undavia, J., & Patela, A. (2015).
Sentiment analysis of parents’ feedback for educational
institutes. International Journal of Innovative and
Emerging Research in Engineering, 2(3), 75-78.
[24]. Altrabsheh, N., Cocea, M., & Fallahkhair, S.
(2014a). Learning sentiment from students’ feedback for
real-time interventions in classrooms. Paper presented
at the International Conference on Adaptive and
Intelligent Systems.
[25]. Rashid, A., Asif, S., Butt, N. A., & Ashraf, I. (2013).
Feature level opinion mining of educational student
feedback data using sequential pattern mining and
association rule mining. International Journal of
Computer Applications, 81(10).
[26]. Ortigosa, A., Martín, J. M., & Carro, R. M. (2014).
Sentiment analysis in Facebook and its application to e-
learning. Computers in human behavior, 31, 527-541.
[27]. Jeong, H., Shin, D., & Choi, J. (2011). Ferom:
Feature extraction and refinement for opinion mining.
Etri Journal, 33(5), 720-730.
[28]. Hu, M., & Liu, B. (2004). Mining and summarizing
customer reviews. Paper presented at the Proceedings
of the tenth ACM SIGKDD international conference on
Knowledge discovery and data mining.
[29]. Kobayashi, N., Inui, K., & Matsumoto, Y. (2007).
Opinion mining from web documents: Extraction and
structurization. Information and Media Technologies,
2(1), 326-337.
[30]. Yadav, S. K. (2015). Sentiment analysis and
classification: a survey. International Journal of Advance
Research in Computer Science and Management
Studies, 3(3), 113-121.
[31]. Shivaprasad, T. K., & Shetty, J. (2017, March).
Sentiment analysis of product reviews: a review. In 2017
International Conference on Inventive Communication
and Computational Technologies (ICICCT) (pp. 298-
301). IEEE.
[32]. Zainudin, S., Jasim, D. S., & Bakar, A. A. (2016).
Comparative analysis of data mining techniques for
Malaysian rainfall prediction. Int. J. Adv. Sci. Eng. Inf.
Technol, 6(6), 1148-1153.
How to cite this article: Umair, M., Hakim, A., Hussain, A. and Naseem, S. (2021). Sentiment Analysis of Students' Feedback before and after COVID-19 Pandemic. International Journal on Emerging Technologies, 12(2): 177–182.