Figure 1 - available via license: CC BY
An example of aspect-based sentiment analysis (ABSA). Sentiment expressions such as "nice" or "bad" appear near their related aspect expressions "view of river" and "sushi rolls," respectively.


Source publication
Article
Aspect-based sentiment analysis (ABSA) is the task of classifying the sentiment of a specific aspect in a text. Because a single text usually has multiple aspects which are expressed independently, ABSA is a crucial task for in-depth opinion mining. A key point of solving ABSA is to align sentiment expressions with their proper target aspect in a t...

Contexts in source publication

Context 1
... main problem of the context vector is that it loses the positional information of salient words, even though this information is important in ABSA for knowing where the aspect expressions of a target aspect appear. For instance, Figure 1 shows an example review of a restaurant with two different target aspects and their corresponding sentiment expressions. In this figure, one target aspect is LOCATION, represented by the aspect expression "view of river", and the other is FOOD, expressed as "sushi rolls". ...
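A toy illustration of this point (not taken from the paper): a context vector obtained by averaging word embeddings is permutation-invariant, so it cannot encode where a salient word occurred. The token list and 8-dimensional embeddings below are hypothetical.

```python
# Toy sketch: a pooled context vector cannot tell WHERE salient words appear.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "view", "of", "river", "is", "nice", "but", "sushi", "rolls", "are", "bad"]
emb = {w: rng.normal(size=8) for w in tokens}           # hypothetical 8-d embeddings

sent = tokens
shuffled = sent[::-1]                                   # reverse the word order

ctx = np.mean([emb[w] for w in sent], axis=0)           # averaged context vector
ctx_shuffled = np.mean([emb[w] for w in shuffled], axis=0)

print(np.allclose(ctx, ctx_shuffled))                   # True: positional information is lost
```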
Context 2
... the aspect map is extracted, the second CNN classifies the sentiment of a target aspect. The convolutional layer of this CNN produces feature maps that activate on sentiment expressions such as "nice" and "bad" in Figure 1. Generally, in ABSA, only the sentiment expressions near the aspect expressions of a target aspect are of interest. ...
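A minimal sketch of the kind of convolutional feature maps described here (shapes and the 50-dimensional embeddings are assumptions, not the authors' exact configuration): a 1-D convolution over word embeddings produces one activation per position, which can peak at sentiment words.

```python
# Hedged sketch: 1-D convolution over word embeddings -> per-position feature maps.
import torch
import torch.nn as nn

batch, seq_len, emb_dim, n_kernels, width = 1, 11, 50, 100, 3
x = torch.randn(batch, seq_len, emb_dim)            # [B, T, D] word embeddings

conv = nn.Conv1d(emb_dim, n_kernels, kernel_size=width, padding=width // 2)
feature_maps = torch.relu(conv(x.transpose(1, 2)))  # [B, K, T]: one activation per word position
print(feature_maps.shape)                           # torch.Size([1, 100, 11])
```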
Context 3
... ATAE-LSTM [22], the aspect vector is concatenated to every word representation, and then an attention mechanism finds the sentiment expressions relevant to the aspect vector. The attention mechanism in ATAE-LSTM can capture the relation between aspects and aspect-specific sentiment words such as price/cheap, but it fails to classify the text in Figure 1 because of general sentiment words such as "nice" and "bad". Because general sentiment words are used with too many aspects, the attention mechanism cannot determine their correct aspect. ...
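A rough sketch of the ATAE-LSTM idea summarized above (all dimensions are illustrative assumptions, not the published configuration): the aspect embedding is appended to every word embedding before the LSTM, and attention over the hidden states selects aspect-relevant words.

```python
# Hedged sketch of aspect-concatenated LSTM followed by attention.
import torch
import torch.nn as nn

T, d_word, d_aspect, d_hid = 11, 50, 50, 64
words = torch.randn(1, T, d_word)                     # [B, T, d_word] word embeddings
aspect = torch.randn(1, d_aspect)                     # one aspect vector per text

aspect_rep = aspect.unsqueeze(1).expand(-1, T, -1)    # repeat the aspect along the sequence
lstm_in = torch.cat([words, aspect_rep], dim=-1)      # [B, T, d_word + d_aspect]

lstm = nn.LSTM(d_word + d_aspect, d_hid, batch_first=True)
h, _ = lstm(lstm_in)                                  # [B, T, d_hid]

score = nn.Linear(d_hid, 1)
alpha = torch.softmax(score(h).squeeze(-1), dim=-1)   # attention weights over words
context = torch.bmm(alpha.unsqueeze(1), h).squeeze(1) # [B, d_hid] aspect-conditioned summary
print(alpha.shape, context.shape)
```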
Context 4
... sentiment analysis (ABSA) is the task of making a sentiment decision s for a given text t and a target aspect c. For instance, if the very short review text in Figure 1 is considered as t, it contains two aspects, LOCATION and FOOD. Thus, two input tuples should be considered for ABSA, as in Figure 2. If the tuple (t, LOCATION) is given to an ABSA system, the system should output positive as the sentiment label, while negative is the correct decision when the tuple is (t, FOOD). ...
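A toy illustration of this (text, aspect) → sentiment formulation; the review string and labels simply mirror the Figure 1 example.

```python
# Two ABSA input tuples over the same text, with different target aspects.
text = "Nice view of river, but the sushi rolls were bad."

examples = [
    ((text, "LOCATION"), "positive"),   # (t, LOCATION) -> positive
    ((text, "FOOD"), "negative"),       # (t, FOOD) -> negative
]

for (t, aspect), label in examples:
    print(f"aspect={aspect:8s} sentiment={label}")
```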
Context 5
... softmax activation and categorical cross-entropy loss are used for both AMEN and AMSAN, because they empirically show better performance than sigmoid activation and binary cross-entropy loss on this task. The filter size h is (1, 2, 3) for AMEN and (3, 4, 5) for AMSAN, and the number of kernels K is set to 100 for every h. The dimension of the fully connected layer of AMSAN is 100. ...
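A minimal sketch of the hyperparameters quoted above (filter sizes, K = 100 kernels, and a 100-unit fully connected layer); the module layout, embedding size, and max-pooling step are assumptions for illustration rather than the authors' code.

```python
# Hedged sketch: multi-filter CNNs with the quoted filter sizes and kernel counts.
import torch
import torch.nn as nn

class MultiFilterCNN(nn.Module):
    def __init__(self, emb_dim, filter_sizes, n_kernels=100):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_kernels, kernel_size=h, padding=h // 2)
            for h in filter_sizes
        )

    def forward(self, x):                       # x: [B, T, emb_dim]
        x = x.transpose(1, 2)                   # -> [B, emb_dim, T]
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)         # [B, len(filter_sizes) * n_kernels]

amen = MultiFilterCNN(emb_dim=300, filter_sizes=(1, 2, 3), n_kernels=100)   # h = 1, 2, 3
amsan = MultiFilterCNN(emb_dim=300, filter_sizes=(3, 4, 5), n_kernels=100)  # h = 3, 4, 5
fc = nn.Linear(3 * 100, 100)                    # AMSAN fully connected layer of size 100

features = amsan(torch.randn(2, 20, 300))       # [2, 300]
print(fc(features).shape)                       # torch.Size([2, 100])
```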

Citations

... However, all these methods lack semantic logicality and fluency when the context is divided by aspect; the representation of the context is kept continuous, so the semantic information is not fluent. Noh et al. (2019) applied a CNN as an Aspect-Map Extraction Network (AMEN) to extract aspect representations that constitute an aspect map, and then used the aspect and context representations as input for sentiment classification. An attention-over-attention mechanism was introduced to model the interactive relationship between the aspect and every word of the text, which assigns greater weights to the words that matter most for sentiment classification. ...
Article
Aspect-level sentiment classification aims to integrate the context to predict the sentiment polarity of a specific aspect in a text, which is useful and popular in applications such as opinion surveys and product recommendation in e-commerce. Many recent studies exploit Long Short-Term Memory (LSTM) networks to perform aspect-level sentiment classification, but the limitation of long-term dependencies is not solved well, so the semantic correlations between pairs of words in the text are ignored. In addition, traditional classification models adopt a SoftMax function based on probability statistics as the classifier, but ignore the words' features in the semantic space. A Support Vector Machine (SVM) can fully exploit feature information and is well suited to classification in a high-dimensional space; however, it only considers the maximum distance between different classes and ignores the similarities between different features of the same class. To address these defects, we propose a novel two-stage architecture named Self Attention Networks and Adaptive SVM (SAN-ASVM) for aspect-level sentiment classification. In the first stage, in order to overcome the long-term dependency limitation, a Multi-Head Self-Attention (MHSA) mechanism is applied to extract the semantic relationships between pairs of words; furthermore, a 1-hop attention mechanism is designed to pay more attention to important words related to the specific aspect. In the second stage, ASVM is designed to substitute for the SoftMax function to perform sentiment classification, which can effectively perform multi-class classification in a high-dimensional space. Extensive experiments on the SemEval2014, SemEval2016 and Twitter datasets are conducted, and comparative experiments show that the SAN-ASVM model obtains better performance.
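A brief sketch of the multi-head self-attention (MHSA) step this abstract describes, using PyTorch's built-in module; the dimensions are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: multi-head self-attention over word features (Q = K = V).
import torch
import torch.nn as nn

T, d_model, n_heads = 12, 64, 4
tokens = torch.randn(1, T, d_model)                     # [B, T, d_model] word features

mhsa = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
out, attn = mhsa(tokens, tokens, tokens)                # self-attention over the sequence
print(out.shape, attn.shape)                            # [1, 12, 64], [1, 12, 12]
```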
... SA's focus has shifted from detecting the polarity of an entire document, paragraph, or sentence to product aspects [5]. The idea behind aspect-based sentiment analysis (ABSA) is to identify the polarity of an aspect in a review sentence [6]. This type of sentiment analysis is a finer-grained version of earlier approaches such as document-level and sentence-level sentiment analysis [1]. ...
Article
Sentiment analysis of product reviews on e-commerce platforms aids in determining the preferences of customers. Aspect-based sentiment analysis (ABSA) assists in identifying the contributing aspects and their corresponding polarity, thereby allowing for a more detailed analysis of the customer's inclination toward product aspects. This analysis helps in the transition from the traditional rating-based recommendation process to an improved aspect-based process. To automate ABSA, a labelled dataset is required to train a supervised machine learning model. As the availability of such datasets is limited due to the human effort involved, an annotated dataset is provided here for performing ABSA on customer reviews of mobile phones. The dataset, comprising product reviews of the Apple iPhone 11, has been manually annotated with predefined aspect categories and aspect sentiments. The dataset's accuracy has been validated using machine learning techniques such as Naïve Bayes, Support Vector Machine, Logistic Regression, Random Forest, K-Nearest Neighbor and a Multi-Layer Perceptron (a sequential model built with the Keras API). The MLP model built through the Keras Sequential API produced the most accurate result for classifying review text into aspect categories, with 67.45 percent accuracy; K-Nearest Neighbor performed the worst with only 49.92 percent accuracy. The Support Vector Machine had the highest accuracy for classifying review text into aspect sentiments, at 79.46 percent, while the Keras model had the lowest at 76.30 percent. The contribution is beneficial as a benchmark dataset for ABSA of mobile phone reviews.
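A hedged sketch of the kind of validation this abstract describes: training a simple classifier to map review text to aspect categories. The file name and column names are hypothetical stand-ins for the annotated dataset, and TF-IDF with a linear SVM is just one of the listed model families.

```python
# Hedged sketch: text -> aspect-category classification on an annotated review dataset.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

df = pd.read_csv("iphone11_reviews_annotated.csv")      # hypothetical file and columns
X_train, X_test, y_train, y_test = train_test_split(
    df["review_text"], df["aspect_category"], test_size=0.2, random_state=42
)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```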
... Udit et al. [14] developed an improved sentiment analysis method using image processing techniques based on visual data. Other research related to aspect-based sentiment analysis (ABSA) classifies the sentiment of a specific aspect in a text, as presented in [15][16][17]. Earlier research has shown that these approaches are appropriate only under certain conditions. ...
Article
Customer reviews on the Internet reflect users’ sentiments about the product, service, and social events. As sentiments can be divided into positive, negative, and neutral forms, sentiment analysis processes identify the polarity of information in the source materials toward an entity. Most studies have focused on document-level sentiment classification. In this study, we apply an unsupervised machine learning approach to discover sentiment polarity not only at the document level but also at the word level. The proposed topic document sentence (TDS) model is based on joint sentiment topic (JST) and latent Dirichlet allocation (LDA) topic modeling techniques. The IMDB dataset, comprising user reviews, was used for data analysis. First, we applied the LDA model to discover topics from the reviews; then, the TDS model was implemented to identify the polarity of the sentiment from topic to document, and from document to word levels. The LDAvis tool was used for data visualization. The experimental results show that the analysis not only obtained good topic partitioning results, but also achieved high sentiment analysis accuracy in document- and word-level sentiment classifications.
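A small sketch of the LDA topic-discovery stage this abstract describes, using scikit-learn; the toy documents and parameter choices are illustrative only and do not reproduce the TDS/JST model itself.

```python
# Hedged sketch: LDA topic discovery over a handful of toy review texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "great acting and a moving story",
    "the plot was dull and the pacing terrible",
    "beautiful cinematography but weak dialogue",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(reviews)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)          # per-document topic distribution
print(doc_topics.round(2))
```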
... However, all these methods lack semantic logicality and fluency if the context is divided by aspects. Noh et al. [39] applied a CNN as an aspect-map extraction network (AMEN) to extract representations of aspect and sentiment and constitute an aspect map, then used the representations of the sentence and the aspect as input to conduct sentiment classification. Nevertheless, the fine-grained correlation between the context and the aspect still failed to be captured. ...
Article
Aspect-level sentiment analysis aims to predict the sentiment orientation of a specific aspect in a given context. Recent studies have achieved great success in modeling aspect and context by applying long short-term memory networks (LSTMs) and an attention mechanism. However, the semantic correlations between the words of the aspect and those of the context are not captured, which reduces the effectiveness of the feature representations. Given this problem, in order to comprehensively analyze semantic correlations at both the word level and the feature level, a co-attention mechanism is proposed to capture the interactions between aspect and context; it interactively concentrates the semantic influences on context and aspect to generate more informative representations. Specifically, the co-attention mechanism consists of a 1-pair hop mechanism and an interactive mechanism, in which the 1-pair hop mechanism pays more attention to the important words of the aspect or context, and the interactive mechanism highlights the significant features of the aspect or context by calculating an interactive attention matrix at the feature level. In addition, considering that one context contains more than one aspect, a novel loss function is designed to fully employ the attention weights of different aspects on every word of the given context. Extensive comparative experiments are conducted based on the GloVe and BERT pre-trained models, and the results show that the proposed method achieves state-of-the-art performance on the Restaurant and Twitter datasets. Furthermore, ablation studies are designed to validate the necessity and importance of the 1-pair hop mechanism and the interactive mechanism.
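A condensed sketch of an interactive (co-)attention matrix between aspect words and context words, in the spirit of the mechanism this abstract describes; the bilinear scoring form and all dimensions are assumptions, not the paper's exact model.

```python
# Hedged sketch: interactive attention matrix between context and aspect words.
import torch
import torch.nn as nn

Tc, Ta, d = 10, 3, 64
context = torch.randn(1, Tc, d)                         # context word features
aspect = torch.randn(1, Ta, d)                          # aspect word features

W = nn.Parameter(torch.randn(d, d) * 0.01)              # bilinear interaction weights
scores = context @ W @ aspect.transpose(1, 2)           # [1, Tc, Ta] interaction matrix

ctx2asp = torch.softmax(scores, dim=2)                  # each context word attends to aspect words
asp2ctx = torch.softmax(scores, dim=1)                  # each aspect word attends to context words

aspect_aware_context = ctx2asp @ aspect                 # [1, Tc, d]
context_aware_aspect = asp2ctx.transpose(1, 2) @ context  # [1, Ta, d]
print(aspect_aware_context.shape, context_aware_aspect.shape)
```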
... Moreover, it can be applied at different levels, including analysis of words, sentences or whole documents. Recently, aspect-based sentiment analysis has also gained attention as a text may contain multiple aspects having different sentiments [13], [14]. ...
Article
Sentiment analysis is one of the prominent research areas in data mining and knowledge discovery, which has proven to be an effective technique for monitoring public opinion. The big data era with a high volume of data generated by a variety of sources has provided enhanced opportunities for utilizing sentiment analysis in various domains. In order to take best advantage of the high volume of data for accurate sentiment analysis, it is essential to clean the data before the analysis, as irrelevant or redundant data will hinder extracting valuable information. In this paper, we propose a hybrid feature selection algorithm to improve the performance of sentiment analysis tasks. Our proposed sentiment analysis approach builds a binary classification model based on two feature selection techniques: an entropy-based metric and an evolutionary algorithm. We have performed comprehensive experiments in two different domains using a benchmark dataset, Stanford Sentiment Treebank, and a real-world dataset we have created based on World Health Organization (WHO) public speeches regarding COVID-19. The proposed feature selection model is shown to achieve significant performance improvements in both datasets, increasing classification accuracy for all utilized machine learning and text representation technique combinations. Moreover, it achieves over 70% reduction in feature size, which provides efficiency in computation time and space.
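A hedged sketch of the first, entropy-based stage of the hybrid feature selection this abstract describes, using mutual information as the filter metric; the evolutionary second stage is only indicated by a comment, and the data are synthetic.

```python
# Hedged sketch: entropy-based filter stage of a hybrid feature selection pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=200, n_features=50, n_informative=8, random_state=0)

selector = SelectKBest(score_func=mutual_info_classif, k=15)
X_filtered = selector.fit_transform(X, y)               # keep the 15 highest-scoring features
print(X.shape, "->", X_filtered.shape)                  # (200, 50) -> (200, 15)

# A second, evolutionary stage (e.g. a genetic algorithm searching subsets of
# the surviving features) would refine this selection further.
```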
Article
Aspect-based sentiment classification, a fine-grained sentiment analysis task, aims to predict the sentiment polarity for a specified aspect. However, existing aspect-based sentiment classification approaches cannot fully model the dependency relationship between words and are easily disturbed by irrelevant aspects. To address this problem, we propose a novel approach named Dependency-Relationship Embedding and Attention Mechanism-based LSTM (DA-LSTM). DA-LSTM first merges the word hidden vector output by the LSTM with the dependency-relationship embedding to form a combined vector. This vector is then fed into the attention mechanism together with the aspect information, which avoids interference from irrelevant aspects, to calculate the final word representation for sentiment classification. Our extensive experiments on benchmark data sets clearly show the effectiveness of DA-LSTM.
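A rough sketch of the DA-LSTM idea as summarized above: LSTM hidden states are merged with dependency-relationship embeddings, and attention over the combined vectors is conditioned on the aspect. All sizes, the dependency-relation vocabulary, and the additive scoring form are assumptions for illustration.

```python
# Hedged sketch: hidden state + dependency embedding, attended with aspect information.
import torch
import torch.nn as nn

T, d_word, d_dep, d_hid = 12, 50, 16, 64
words = torch.randn(1, T, d_word)                       # word embeddings
dep_ids = torch.randint(0, 40, (1, T))                  # dependency-relation id per word (assumed)
aspect = torch.randn(1, d_hid)                          # aspect information vector

lstm = nn.LSTM(d_word, d_hid, batch_first=True)
dep_emb = nn.Embedding(40, d_dep)

h, _ = lstm(words)                                      # [1, T, d_hid]
combined = torch.cat([h, dep_emb(dep_ids)], dim=-1)     # [1, T, d_hid + d_dep] merged vector

score = nn.Linear(d_hid + d_dep + d_hid, 1)             # scores each word jointly with the aspect
aspect_rep = aspect.unsqueeze(1).expand(-1, T, -1)
alpha = torch.softmax(score(torch.cat([combined, aspect_rep], dim=-1)).squeeze(-1), dim=-1)
sentence_rep = torch.bmm(alpha.unsqueeze(1), combined).squeeze(1)  # [1, d_hid + d_dep]
print(sentence_rep.shape)
```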