Conference Paper

Abstract

Editorial pre-screening is the first step in academic peer review. The deluge of research papers and the huge volume of submissions that journals receive these days make editorial decision-making a very challenging task. The current work investigates certain factors that may have a role in the editorial decision-making process. The proposed work shows potential for the development of an AI-assisted peer review system that could aid both editors and authors in making appropriate decisions in reasonable time and thus accelerate the overall process of scholarly publishing.


... With the increasing amount of scientific research work being done, there is a need to speed up the process of evaluation so as to handle a large number of papers and encourage a large number ...
... In addition, for other domains or areas, they cannot generalise well. Deep learning has emerged in recent years as a powerful way of solving problems of sentiment classification in academia [10,11]. However, the word sequence information is not fully utilized by the semantic representation of existing methods. ...
... Ghosal et al. [11] investigated the impact of various features in the editorial pre-screening process. Ghosal et al. [17] used the full ...
Article
Peer reviews form an essential part of scientific communication. Research papers and proposals are reviewed by several peers before they are finally accepted or rejected for publication or funding, respectively. With the steady increase in the number of research domains, scholarly venues (journals and/or conferences), researchers, and papers, managing the peer-review process is becoming a daunting task. The application of recommender systems to assist peer reviewing is therefore being explored and is becoming an emerging research area. In this paper, we present MRGen, a deep-learning-network-based meta-review generation system built on peer-review decision prediction for scholarly articles. MRGen provides solutions for (i) peer-review decision prediction (Task 1) and (ii) meta-review generation (Task 2). First, the system takes the peer reviews as input and produces a draft meta-review. It then employs an integrated framework of a convolution layer, a long short-term memory (LSTM) model, a Bi-LSTM model, and an attention mechanism to predict the final decision (accept/reject) for the scholarly article. Based on the final decision, MRGen incorporates Pointer-Generator-Network-based abstractive summarization to generate the final meta-review. The focus of our approach is to produce a concise meta-review that maximizes information coverage, coherence, and readability while reducing redundancy. Extensive experiments conducted on the PeerRead dataset demonstrate good consistency between the recommended decisions and the original decisions. We also compare the performance of MRGen with some of the existing state-of-the-art multi-document summarization methods; the system outperforms several existing models on accuracy, ROUGE scores, readability, non-redundancy, and cohesion.
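As a rough illustration of the decision-prediction component described in the abstract above (and not the authors' released implementation), the sketch below combines a convolution layer, a Bi-LSTM, and an attention mechanism to map peer-review tokens to accept/reject logits. The vocabulary size, embedding width, and other dimensions are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a review-text classifier that stacks a
# convolution layer, a Bi-LSTM, and additive attention to predict accept/reject,
# loosely following the architecture described in the MRGen abstract.
import torch
import torch.nn as nn

class ReviewDecisionClassifier(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, conv_channels=64,
                 hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # 1-D convolution over the token dimension extracts local n-gram features.
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        # Bi-LSTM captures long-range context in both directions.
        self.bilstm = nn.LSTM(conv_channels, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Additive attention pools the Bi-LSTM states into one review vector.
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                 # (batch, seq_len, emb_dim)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.bilstm(x)                     # (batch, seq_len, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)
        pooled = (weights * h).sum(dim=1)         # attention-weighted sum
        return self.out(pooled)                   # accept/reject logits

# Illustrative usage with random token ids.
model = ReviewDecisionClassifier()
logits = model(torch.randint(1, 30000, (4, 200)))
print(logits.shape)  # torch.Size([4, 2])
```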
... In spite of having merit, many papers are rejected because they do not fall within the scope of the intended journal. Studies [1,5] show that around 25-30% of desk rejections are due to misfit submissions. Editors invest a considerable amount of time judging the appropriateness of submissions at the desk before forwarding them to reviewers for meticulous evaluation. ...
... Finally, we perform T = A B. To get the semantic representation of T, we take the word2vec [2] representations of the individual words present in T and concatenate them to form the semantic document representation. We generate the word2vec word vectors from 400k Elsevier Computer Science journal papers [1] to preserve scholarly domain knowledge. Lexical view: we adopt a similar approach and use term frequency-inverse document frequency (tf-idf) as the lexical representation. ...
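The excerpt above describes a semantic view built from word2vec vectors and a lexical view built from tf-idf. The following is a minimal, self-contained approximation under stated assumptions: a tiny toy corpus stands in for the 400k Elsevier Computer Science papers, the vectors are 50-dimensional, and the per-word concatenation is padded/truncated to a fixed 100-token budget.

```python
# Hedged sketch (not the cited pipeline): a "semantic" view from word2vec vectors
# and a "lexical" view from tf-idf, as the excerpt above describes.
# The toy corpus, 50-dim vectors, and 100-token cap are all assumptions.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "editorial pre screening of journal submissions",
    "deep learning for peer review decision support",
]
tokenised = [d.split() for d in docs]

# Semantic view: train word2vec (the cited work trains on scholarly full text),
# then concatenate per-word vectors, padding/truncating to a fixed token budget.
w2v = Word2Vec(tokenised, vector_size=50, min_count=1, epochs=20)

def semantic_view(tokens, max_tokens=100, dim=50):
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv][:max_tokens]
    flat = np.concatenate(vecs) if vecs else np.zeros(0)
    padded = np.zeros(max_tokens * dim)
    padded[:flat.size] = flat
    return padded

# Lexical view: tf-idf over the same documents.
tfidf = TfidfVectorizer()
lexical = tfidf.fit_transform(docs).toarray()

print(semantic_view(tokenised[0]).shape, lexical.shape)
```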
... Qiao et al. [42] applied a recurrent convolutional network to predict the aspect scores of a paper using both modularity and attention mechanisms. Ghosal et al. [43] examined the impact of various characteristics on the pre-screening of research papers. ...
Article
Full-text available
One key frontier of artificial intelligence (AI) is the ability to comprehend research articles and validate their findings, a formidable problem for AI systems competing with human intelligence and intuition. As a benchmark of research validation, the existing peer-review system still stands strong despite being criticized at times by many. However, the paper-vetting system has been severely strained by an influx of research paper submissions and a growing number of conferences and journals. As a result, problems including insufficient reviewers, finding the right experts, and maintaining review quality are steadily and strongly surfacing. To ease the workload of the stakeholders associated with the peer-review process, we probed into what an AI-powered review system would look like. In this work, we leverage the interaction between the paper's full text and the corresponding peer-review text to predict the overall recommendation score and final decision. We do not envisage AI reviewing papers in the near future. Still, we intend to explore the possibility of a human-AI collaboration in the decision-making process to make the current system FAIR. The idea is to have an assistive decision-making tool for the chairs/editors to help them with an additional layer of confidence, especially with borderline and contrastive reviews. We use a deep attention network between the review text and the paper to learn the interactions and predict the overall recommendation score and final decision. We also use sentiment information encoded within peer-review texts to guide the outcome further. Our proposed model outperforms recent state-of-the-art competitive baselines. We release the code of our implementation here: https://github.com/PrabhatkrBharti/PEERRec.git.
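For readers who want a concrete picture of the paper-review interaction mentioned above, here is a minimal sketch (not the released PEERRec code at the linked repository) in which review tokens attend over paper tokens via cross-attention before two small heads predict the overall recommendation score and the accept/reject decision. All dimensions and the mean-pooling step are assumptions.

```python
# Minimal sketch (not the released PEERRec code): cross-attention that lets review
# tokens attend over paper tokens before predicting a recommendation score and an
# accept/reject decision. Dimensions and the pooling choice are assumptions.
import torch
import torch.nn as nn

class PaperReviewCrossAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score_head = nn.Linear(dim, 1)      # overall recommendation score
        self.decision_head = nn.Linear(dim, 2)   # accept / reject logits

    def forward(self, review_emb, paper_emb):
        # Review tokens query the paper representation.
        attended, _ = self.cross_attn(query=review_emb, key=paper_emb, value=paper_emb)
        pooled = attended.mean(dim=1)            # simple mean pooling
        return self.score_head(pooled), self.decision_head(pooled)

# Illustrative usage with pre-computed token embeddings.
review = torch.randn(2, 40, 256)   # (batch, review tokens, dim)
paper = torch.randn(2, 300, 256)   # (batch, paper tokens, dim)
score, decision = PaperReviewCrossAttention()(review, paper)
print(score.shape, decision.shape)  # torch.Size([2, 1]) torch.Size([2, 2])
```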
... Qiao et al. [18] used a recurrent convolutional network model incorporating both modularity and attention mechanisms to predict the aspect scores of an academic paper. Ghosal et al. [10] investigated the impact of various characteristics on the pre-screening of research papers. Superchi et al. [22] provide a comprehensive overview of criteria and tools used to assess the quality of peer review reports in the biomedical field. ...
Chapter
Full-text available
Peer review is the widely accepted method of research validation. However, with the deluge of research paper submissions accompanied by the rising number of venues, the paper-vetting system has come under a lot of stress. Problems like the dearth of adequate reviewers, finding appropriate expert reviewers, and maintaining the quality of reviews are steadily and strongly surfacing. To ease the peer-review workload to some extent, here we investigate what an Artificial Intelligence (AI)-powered review system would look like. We leverage the paper-review interaction to predict the decision in the reviewing process. We do not envisage an AI reviewing papers in the near future, but seek to explore a human-AI collaboration in the decision-making process where the AI would leverage the human-written reviews and the paper full text to predict the fate of the paper. The idea is to have an assistive decision-making tool for the chairs/editors to help them with an additional layer of confidence, especially with borderline and contrastive reviews. We use cross-attention between the review text and the paper full text to learn the interactions and thereby generate the decision. We also make use of sentiment information encoded within peer-review texts to guide the outcome. Our initial results show encouraging performance on a dataset of papers plus peer reviews curated from ICLR OpenReview. We make our code and dataset (https://github.com/PrabhatkrBharti/PEERAssist) public for further exploration. We reiterate that we are at an early stage of investigation and showcase our initial, exciting results to justify our proposition.
... Here, the editors, who are generally domain experts, primarily look into the suitability of the submitted article to the journal's aims, scope, and standards [5]. Along with this, they also consider certain other factors like plagiarism [8], template inconsistencies, language, grammar, etc. [6] for the initial screening, which is better known as the editorial review. More or less with these factors, editors decide whether to forward the article to expert reviewers for meticulous evaluation or to reject the paper outright from the desk. ...
Chapter
Deciding the appropriateness of a manuscript to the aims and scope of a journal is very important in the first stage of peer review. Editors should be confident about an article's suitability for the intended journal before channelling it further through the steps of the review process. However, not all sections of a research article contribute equally to determining its aptness for the journal under consideration. In this work, we investigate which sections of a manuscript are most significant for deciding whether it belongs within the intended journal's scope. Our empirical studies on two Computer Science journals suggest that meta-information from the bibliography and author profiles can reach a benchmark competitive with full-text performance. The features we develop in this study show the potential to evolve into a decision support system that helps journal editors identify out-of-scope submissions.
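A minimal way to probe the question raised in this abstract, i.e., which manuscript sections best signal scope fit, is to train the same simple classifier on one section at a time and compare accuracies. The sketch below does this with tf-idf features and logistic regression on placeholder data; the sections, labels, and texts are illustrative assumptions, not the chapter's actual features or corpora.

```python
# Hedged sketch (not the chapter's system): comparing how well individual manuscript
# sections predict in-scope vs. out-of-scope for a journal, using tf-idf + logistic
# regression. The sections, labels, and data are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# One toy record per manuscript: text of each section plus an in-scope label.
manuscripts = [
    {"abstract": "neural networks for image segmentation", "references": "cvpr iccv neurips", "in_scope": 1},
    {"abstract": "crop rotation effects on soil nitrogen", "references": "agronomy journal field crops", "in_scope": 0},
    {"abstract": "transformers for document classification", "references": "acl emnlp naacl", "in_scope": 1},
    {"abstract": "dietary fibre and gut microbiota", "references": "nutrition reviews gut", "in_scope": 0},
]
labels = [m["in_scope"] for m in manuscripts]

# Train and cross-validate the same pipeline on one section at a time.
for section in ("abstract", "references"):
    texts = [m[section] for m in manuscripts]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, texts, labels, cv=2)
    print(f"{section}: mean CV accuracy = {scores.mean():.2f}")
```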
... The famous Toronto Paper Matching system (Charlin and Zemel, 2013) was developed to match papers with reviewers. Recently, we (Ghosal et al., 2018b,a) investigated the impact of various features in the editorial pre-screening process. Wang and Wan (2018) explored a multi-instance learning framework for sentiment analysis of peer-review texts. ...
Chapter
In this paper, we present the results of our shared task at The First Workshop & Shared Task on Scope Detection of the Peer Review Articles (SDPRA), co-located with PAKDD 2021. The task aims to develop systems that can help with the initial screening in the peer-review process, usually performed by the editor(s). We received four submissions in total: three from academic institutions and one from industry. The quality of the submissions shows a strong interest in the task from the research community.
Article
A widely used measure of scientific impact is citations. However, due to their heavy-tailed distribution, citations are fundamentally difficult to predict. Instead, to characterize scientific impact, we address two analogous questions asked by many scientific researchers: "How will my h-index evolve over time, and which of my previously or newly published papers will contribute to it?" To answer these questions, we perform two related tasks. First, we develop a model to predict authors' future h-indices based on their current scientific impact. Second, we examine the factors that drive papers, either previously or newly published, to increase their authors' predicted future h-indices. By leveraging relevant factors, we can predict an author's h-index in five years with an R^2 value of 0.92 and whether a previously (newly) published paper will contribute to this future h-index with an F1 score of 0.99 (0.77). We find that topical authority and publication venue are crucial to these effective predictions, while topic popularity is surprisingly inconsequential. Further, we develop an online tool that allows users to generate informed h-index predictions. Our work demonstrates the predictability of scientific impact and can help researchers to effectively leverage their scholarly position of "standing on the shoulders of giants."
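As a toy analogue of the prediction task described above (not the article's model or data), the sketch below regresses a synthetic future h-index on a handful of current-impact features, including stand-in proxies for topical authority and publication venue, and reports held-out R^2.

```python
# Hedged sketch (not the article's model): regressing an author's future h-index on
# a few current-impact features with gradient-boosted trees. The feature names and
# synthetic data are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 500
current_h = rng.integers(1, 40, n)
num_papers = current_h * 3 + rng.integers(0, 30, n)
venue_rank = rng.random(n)                      # proxy for publication venue quality
topical_authority = rng.random(n)               # proxy for authority on own topics
# Synthetic target: future h-index grows with current impact and authority.
future_h = current_h + 2 + 4 * topical_authority + 2 * venue_rank + rng.normal(0, 1, n)

X = np.column_stack([current_h, num_papers, venue_rank, topical_authority])
X_train, X_test, y_train, y_test = train_test_split(X, future_h, random_state=0)

model = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out authors:", round(r2_score(y_test, model.predict(X_test)), 2))
```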
An AI aid to the editors. Exploring the possibility of an AI assisted article classification system
Tirthankar Ghosal, Rajeev Verma, Asif Ekbal, Sriparna Saha, and Pushpak Bhattacharyya. 2018. An AI aid to the editors. Exploring the possibility of an AI assisted article classification system. arXiv preprint arXiv:1802.01403 (2018).