This paper summarizes our solution for the shared task on text classification (scope detection) of peer-review articles at the SDPRA (Scope Detection of the Peer Review Articles) workshop at PAKDD 2021. By participating in this challenge, we are particularly interested in how well pre-trained word embeddings from different neural models, specifically transformer models such as BERT, perform on this text classification task. Additionally, we are interested in whether utilizing entity embeddings can further improve classification performance. Our main finding is that using SciBERT to obtain sentence embeddings for this task provides the best performance of any individual model compared to the other approaches. In addition, combining sentence embeddings with entity embeddings for the entities mentioned in each text can further improve a classifier’s performance. Finally, a hard-voting ensemble of seven classifiers achieves over 92% accuracy on our local test set as well as on the final test set released by the task organizers. The source code is publicly available at https://github.com/parklize/pakdd2021-SDPRA-sharedtask.
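
Below is a minimal sketch, not the authors' exact pipeline, of the two ingredients described above: mean-pooled SciBERT sentence embeddings and a hard-voting ensemble of classifiers. It assumes the Hugging Face transformers and scikit-learn libraries; the classifier choices and the toy inputs are illustrative (the paper's ensemble combines seven classifiers and also incorporates entity embeddings).

```python
# Sketch only: SciBERT sentence embeddings + hard-voting ensemble.
# Model name and classifiers are illustrative assumptions, not the paper's exact setup.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

def embed(texts):
    """Mean-pooled SciBERT embeddings for a list of texts."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc).last_hidden_state          # (batch, seq_len, hidden)
    mask = enc["attention_mask"].unsqueeze(-1)        # zero out padding tokens
    return ((out * mask).sum(1) / mask.sum(1)).numpy()

# Toy data; the shared task provides abstracts labeled with their scope.
X = embed(["A study of graph neural networks.", "Quantum error correction codes."])
y = np.array([0, 1])

# Hard voting: each classifier casts one vote, the majority label wins.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("svm", LinearSVC()),
                ("rf", RandomForestClassifier())],
    voting="hard",
)
ensemble.fit(X, y)
print(ensemble.predict(X))
```

Hard voting is used here (rather than soft voting) because it only requires each classifier's predicted label, matching the ensembling strategy named in the abstract.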