Article

Citation contagion: a citation analysis of selected predatory marketing journals

Authors:
  • Salim Moussa, University of Gafsa, Tunisia

Abstract

To date, few studies have examined the citations of articles published in predatory journals, and none appears to have been conducted in marketing. Using Google Scholar (GS) as a citation source, this study examines the extent to which articles published in 10 predatory marketing journals are cited. Citation analyses indicate that the most cited predatory marketing journal has gathered 6,296 citations since it was first published in 2008. Four of the 10 predatory journals gathered over 732 citations each since they were launched (i.e., highly cited). Three other journals were cited between 147 and 732 times (i.e., moderately cited). The three remaining journals received fewer than 147 citations each (i.e., trivially cited). Findings show that the 1,246 articles published in these 10 predatory journals and visible to GS received 10,935 citations, an average of 8.776 citations per paper. About 11.624% of these 1,246 articles were cited 13 times or more. The most cited article received 217 citations, of which 21 are from journals indexed in Clarivate Analytics’ Social Sciences Citation Index. Based on these findings, this study concludes that the conventional marketing literature has already been contaminated by predatory marketing journals.
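The arithmetic behind these figures can be illustrated with a short script. This is only a hedged sketch: the citation-band thresholds (over 732, 147 to 732, under 147) and the sample totals (10,935 citations over 1,246 articles) come from the abstract, while the journal names and per-journal counts below are hypothetical placeholders, not data from the study.

```python
# Hedged sketch: classify journals by total Google Scholar citations using the
# thresholds reported in the abstract, and compute citations per paper.
# Journal labels and per-journal counts are illustrative placeholders.

def classify(total_citations: int) -> str:
    """Map a journal's total citation count to the abstract's three bands."""
    if total_citations > 732:
        return "highly cited"
    if total_citations >= 147:
        return "moderately cited"
    return "trivially cited"

journals = {
    "Journal A": 6296,   # hypothetical: matches the most cited journal's total
    "Journal B": 450,
    "Journal C": 90,
}

for name, cites in journals.items():
    print(f"{name}: {cites} citations -> {classify(cites)}")

# Citations per paper for the whole sample, as reported in the abstract:
total_citations = 10_935
total_articles = 1_246
print(f"citations per paper: {total_citations / total_articles:.3f}")  # ~8.776
```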


... Therefore, an unbiased and more quantitative approach is required to reveal the questionable publications. A few researchers have, in fact, quantitatively analysed the questionable publications, yet only gave limited insights into their behaviour (Moussa, 2021; Yan et al., 2011). For instance, a study reported that papers published in questionable journals (QJs) received fewer citations on average (Moussa, 2021); however, it only presented a simple citation count. Prior journal impact, which is often not considered in questionable publication studies (Yan et al., 2011), is a crucial factor determining whether a paper will receive future citations. ...
... With the large-scale data of 48,579,504 papers, 277,218 journals, and 2,714 publishers indexed and published in Scopus between 1996 and 2018 (Elsevier), we analysed the hidden citation patterns of questionable publications that can reflect graft. In addition to the simple citation count discussed in Moussa (2021), we also collected a set of journals closest to the QJs in terms of fame for comparison (see Materials and methods). This approach allowed us to systematically monitor the unique patterns of questionable publications that distinguish them from the unquestioned publications. ...
Article
Full-text available
Questionable publications have been accused of “greedy” practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis with various participants from academia and compare the results with those of their unaccused counterparts using billions of citation records, including liaisons, i.e., journals and publishers, and prosumers, i.e., authors. Questionable publications attribute publisher-level self-citations to their journals while limiting journal-level self-citations; yet, conventional journal-level metrics are unable to detect these publisher-level self-citations. We propose a hybrid journal-publisher metric for detecting publishers' self-favouring citations to their own questionable journals (QJs). Additionally, we demonstrate that the questionable publications were less disruptive and influential than their counterparts. Our findings indicate an inflated citation impact of suspicious academic publishers. The findings provide a basis for actionable policy-making against questionable publications.
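The abstract above does not spell out how the hybrid journal-publisher metric is computed. Purely as an illustrative sketch of the idea it describes (separating journal-level self-citations from publisher-level self-citations coming from sibling journals of the same publisher), one might decompose incoming citations as follows; the function, names and toy data are assumptions, not the authors' method.

```python
# Illustrative sketch only: distinguish journal-level self-citations from
# publisher-level self-citations (citations from sibling journals of the same
# publisher). This is NOT the metric proposed in the cited paper, just a toy
# decomposition of the idea it describes.
from collections import namedtuple

Citation = namedtuple("Citation", ["citing_journal", "citing_publisher",
                                   "cited_journal", "cited_publisher"])

def self_citation_rates(citations, journal, publisher):
    """Return (journal_self_rate, publisher_self_rate) for one journal."""
    incoming = [c for c in citations if c.cited_journal == journal]
    if not incoming:
        return 0.0, 0.0
    journal_self = sum(c.citing_journal == journal for c in incoming)
    publisher_self = sum(c.citing_publisher == publisher and
                         c.citing_journal != journal for c in incoming)
    n = len(incoming)
    return journal_self / n, publisher_self / n

# Toy data: journal J1 of publisher P receives most of its citations from its
# sibling journal J2 (same publisher), which journal-level metrics would miss.
cites = [
    Citation("J2", "P", "J1", "P"),
    Citation("J2", "P", "J1", "P"),
    Citation("X1", "Q", "J1", "P"),
    Citation("J1", "P", "J1", "P"),
]
print(self_citation_rates(cites, "J1", "P"))  # -> (0.25, 0.5)
```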
... Oermann et al. (2019) analysed Scopus citations of seven predatory nursing journals. Moussa (2021), using Google Scholar, examined citations of 10 predatory marketing journals. Frandsen (2017) analysed how 124 potential predatory journals are cited in Scopus. ...
... Nwagwu and Ojemeni (2015) reported an average of 2.25 citations per article in a predatory journal. In contrast to other research, Moussa (2021) argues that predatory journals in marketing have a relatively high number of citations in Google Scholar, with an average of 8.8 citations per article. ...
... Oermann et al. (2020) studied further articles in which predatory journals are cited and found that most of the citations are used substantively and placed in the introduction or literature review sections. Moreover, by analysing a small sample of the best-cited articles from predatory journals, Moussa (2021) found that around 10% of citing articles from Google Scholar were published in journals indexed in the Social Sciences Citation Index, and Oermann et al. (2019) did not find any significant difference in citations in Scopus from impact-factor and non-impact-factor nursing journals. ...
Article
Full-text available
One of the most fundamental issues in academia today is understanding the differences between legitimate and questionable publishing. While decision-makers and managers consider journals indexed in popular citation indexes such as Web of Science or Scopus as legitimate, they use two lists of questionable journals (Beall's and Cabell's), one of which has not been updated for a few years, to identify the so-called predatory journals. The main aim of our study is to reveal the contribution of the journals accepted as legitimate by the authorities to the visibility of questionable journals. For this purpose, 65 questionable journals from social sciences and 2338 Web-of-Science-indexed journals that cited these questionable journals were examined in-depth in terms of index coverages, subject categories, impact factors and self-citation patterns. We have analysed 3234 unique cited papers from questionable journals and 5964 unique citing papers (6750 citations of cited papers) from Web of Science journals. We found that 13% of the questionable papers were cited by WoS journals and 37% of the citations were from impact-factor journals. The findings show that neither the impact factor of citing journals nor the size of cited journals is a good predictor of the number of citations to the questionable journals.
... Citation contamination is a serious science integrity issue that should warrant the attention of researchers in every scientific domain or discipline. To the best of the author's knowledge, only Moussa (2021a) has dealt with citation contamination in the marketing discipline. ...
... Partially entitled "citation contagion", the recent study by Moussa (2021a) is perhaps the first to provide solid evidence that some predatory marketing journals are being (extensively) cited, with some of them receiving thousands of citations from various sources, a few peer-reviewed marketing journals included (see Moussa, 2021a, p. 500). Though it provides hard proof that some marketing journals are contaminated by citations to predatory journals, the study by Moussa (2021a) remains descriptive in nature and mainly focused on the predatory marketing journals, not the peer-reviewed marketing journals. Furthermore, it uses Google Scholar as the citation source. ...
Article
Full-text available
Purpose – Predatory publishing is a growing and global issue infecting all scientific domains. Predatory publishers create counterfeit, not (properly) peer-reviewed journals to exploit the Open Access model in which the author pays. The plethora of predatory marketing journals along with the sophisticated deceptive practices of their publishers may create total confusion. One of the many highly likely risks of that confusion is when peer-reviewed, prestigious marketing journals cite these pseudo-marketing journals. This phenomenon is called citation contamination. This study aims to investigate the extent of citation contamination in the peer-reviewed marketing literature. Design/methodology/approach – Using Google Scholar as a citation gathering tool, this study investigates references to four predatory marketing journals in 68 peer-reviewed marketing journals listed in the 2018 version of the Academic Journal Guide by the Chartered Association of Business Schools (CABS). Findings – Results indicate that 59 of the 68 CABS-ranked peer-reviewed marketing journals were, up to late January 2021, contaminated by at least one of the four sampled predatory journals. Together, these four pseudo-journals received (at least) 605 citations. Findings from nonparametric statistical procedures show that citation contamination occurred irrespective of the age of a journal or its 2019 Journal Impact Factor. They also point out that citation contamination happened independently of whether a journal is recognized by Clarivate Analytics or not. Research limitations/implications – This study investigated citations to only four predatory marketing journals in only 68 CABS-listed peer-reviewed marketing journals. Practical implications – These findings should sound an alarm to the entire marketing community (including academics and practitioners). To counteract citation contamination, recommendations are provided for researchers, practitioners, journal editors, and academic and professional associations. Originality – This study is the first to offer a systematic assessment of references to predatory journals in the peer-reviewed marketing literature.
... Oermann et al. (2019) analysed Scopus citations of seven predatory nursing journals. Moussa (2020), using Google Scholar, examined citations of 10 predatory marketing journals. Frandsen (2017) analysed how 124 potential predatory journals are cited in Scopus. ...
... Nwagwu and Ojemeni (2015) reported an average of 2.25 citations per article in a predatory journal. In contrast to other research, Moussa (2020) argues that predatory journals in marketing have a relatively high number of citations in Google Scholar, with an average of 8.8 citations per article. ...
... Oermann et al. (2020) studied further articles in which predatory journals are cited and found that most of the citations are used substantively and placed in the introduction or literature review sections. Moreover, by analysing a small sample of the best-cited articles from predatory journals, Moussa (2020) found that around 10% of citing articles from Google Scholar were published in journals indexed in the Social Sciences Citation Index, and Oermann et al. (2019) did not find any significant difference in citations in Scopus from impact-factor and non-impact-factor nursing journals. ...
Preprint
Full-text available
One of the most fundamental issues in academia today is understanding the differences between legitimate and predatory publishing. While decision-makers and managers consider journals indexed in popular citation indexes such as Web of Science or Scopus as legitimate, they use two blacklists (Beall's and Cabell's), one of which has not been updated for a few years, to identify predatory journals. The main aim of our study is to reveal the contribution of the journals accepted as legitimate by the authorities to the visibility of blacklisted journals. For this purpose, 65 blacklisted journals in social sciences and 2,338 Web-of-Science-indexed journals that cited these blacklisted journals were examined in-depth in terms of index coverages, subject categories, impact factors and self-citation patterns. We have analysed 3,234 unique cited papers from blacklisted journals and 5,964 unique citing papers (6,750 citations of cited papers) from Web of Science journals. We found that 13% of the blacklisted papers were cited by WoS journals and 37% of the citations were from impact-factor journals. As a result, although the impact factor is used by decision-makers to determine the levels of the journals, it has been revealed that there is no significant relationship between the impact factor and the number of citations to blacklisted journals. On the other hand, country and author self-citation practices of the journals should be considered. All the findings of this study underline the importance of the second part of this study, which will examine the contents of citations to articles published in predatory journals because understanding the motivations of the authors who cited blacklisted journals is important to correctly understand the citation patterns between impact-factor and blacklisted journals.
... In this paper, we argue that the simplifications of the two hidden phenomena are strongly related. Previous studies that counted the number of citations to articles in predatory journals (Frandsen, 2017; Moussa, 2021) were unable to show the more complex nature of the phenomenon of predatory publishing due to the limitation of the method, that is, citation counting. ...
... These practices are the result of unequal power relations across central, semi-peripheral and peripheral countries and institutions. The debate over predatory publishing focuses almost entirely on journals published in English in non-English-speaking countries (Eykens et al., 2019; Grudniewicz et al., 2019; Moussa, 2021). Various lists of predatory journals such as the discontinued Beall's List or Cabell's Predatory Reports are perceived as useful tools for indicating undesirable journals and provide a dichotomous view: good journals published mostly in central countries in English and bad journals published mostly in semi-peripheral countries in English. ...
Preprint
Full-text available
This study uses content-based citation analysis to move beyond the simplified category of predatory (or questionable) journals. We show that when we analyze papers not only in terms of the number of their citations but also the content of these citations, we are able to reveal the much more complicated role of papers published in journals accused of being predatory. We analyzed the content of 9,995 citances from 6,706 papers indexed in the Web of Science Core Collection that cite papers published in so-called questionable journals. The analysis revealed that the vast majority of such citances are neutral (97.3%) and that negative citations of articles from questionable journals are almost completely nonexistent (0.8%). Moreover, the analysis revealed that the most frequently mentioned countries in the citances are India, Pakistan, and Iran, and mentions of Western countries are rare. This highlights geopolitical bias and shows the usefulness of looking at such journals as mislocated centers of scholarly communication. Apparently, the analyzed journals provide data needed in mainstream scholarly discussions, and the idea of predatory publishing hides geopolitical inequalities in global scholarly publishing. These findings also contribute to the further development of content-based citation analysis.
... [22] Nwagwu, Ojemeni (2015). [23] Moussa (2020). [24] Macháček, Srholec (2021). ...
... [48] Oermann et al. (2020). [49] Moussa (2020). [50] Oermann et al. (2019). ...
Article
Full-text available
One of the problems facing contemporary academia is understanding the differences between recognized scholarly journals and so-called predatory journals. While science policy makers and research managers consider journals indexed in popular citation indexes such as Web of Science or Scopus to be reliable, they use two blacklists (Beall's list and Cabell's) to identify predatory journals, one of which has not been updated for several years. The main aim of our article is to show how journals regarded as legitimate raise the visibility of articles published in blacklisted journals. For this purpose, we examined 65 blacklisted journals from the social sciences and 2,338 Web-of-Science-indexed journals that cited them. We analysed 3,234 articles from blacklisted journals and 5,964 citing articles (6,750 citations) from Web-of-Science-indexed journals. Our results show that 13% of blacklisted articles were cited by Web of Science journals and that 37% of the citations came from impact-factor journals. It turns out that there is no significant relationship between the impact factor and the number of citations to blacklisted journals, even though the impact factor is used by science policy makers to determine journal rankings. On the other hand, the country and author self-citation practices of the journals should be taken into account as an explanatory factor.
... A few researchers have in fact quantitatively analysed the questionable publications. For instance, a study reported that papers published in QJs received fewer citations on average [19]; however, they only presented a simple citation count. Prior journal impact, which is often not considered in questionable publication studies [20], is a crucial factor determining if the paper should receive future citations. ...
... With the large-scale data of 48,579,504 papers, 277,218 journals, and 2,714 publishers indexed and published in Scopus between 1996 and 2018 [28], we analysed the hidden citation patterns of questionable publications that can reflect graft. In addition to the simple citation count discussed in [19], we also collected the set of journals closest to the QJs in terms of fame for comparison (see Methods). This approach allowed us to systematically monitor the unique patterns of questionable publications that distinguish them from the unquestioned publications. ...
Preprint
Full-text available
Questionable publications have been accused of "greedy" practices; however, their influence on academia has not been gauged. Here, we probe the impact of questionable publications through a systematic and comprehensive analysis with various participants from academia and compare the results with those of their unaccused counterparts using billions of citation records, including liaisons, e.g., journals and publishers, and prosumers, e.g., authors. The analysis reveals that questionable publications embellished their citation scores by attributing publisher-level self-citations to their journals while also controlling the journal-level self-citations to circumvent the evaluation of journal-indexing services. This approach makes it difficult to detect malpractice by conventional journal-level metrics. We propose journal-publisher-hybrid metric that help detect malpractice. We also demonstrate that the questionable publications had a weaker disruptiveness and influence than their counterparts. This indicates the negative effect of suspicious publishers in the academia. The findings provide a basis for actionable policy making against questionable publications.
... Thus, authors can receive credit for having articles appearing in the database even if the journal has been delisted. Their articles can continue to be cited (even self-cited), allowing, to some extent, the bibliometric indicators of scientists, research groups and entire institutions to increase (Cortegiani et al., 2020; Moussa, 2021). This feature of indexing content in Scopus and WoS creates difficulties for bibliometric analyses or research evaluation because it might distort the results. ...
Article
In Ukraine, Scopus data are used to evaluate academics. Existing shortcomings in the Ukrainian evaluation system allow academics to publish in titles that have been delisted from Scopus and to continue presenting those papers as credible research output for evaluation. The purpose of this study was to analyse the publishing activity of Ukrainian institutions in Scopus-delisted titles (as of September 2021) in different fields between 2011 and 2020 and to assess how common this practice is among Ukrainian authors. Scopus was used to collect bibliographic and citation-related data, while SciVal was used to analyse these data. The findings suggest that, for 17 Ukrainian institutions, papers from titles that have been delisted from Scopus still form an important part of the publication output of their employees. In particular, in the field of economics, econometrics and finance, 46.92% of Ukrainian papers were published in a title that was excluded from Scopus. Moreover, the analysis indicated that in two Ukrainian institutions the level of citation of such papers significantly exceeds the average number of citations to Scopus-indexed papers in the same year and in the same field. Given that bibliometric indicators are also used for research assessment in other Eastern European countries, the results of this paper are applicable to a wider geographic context.
... Web of Science was retained as the citation source instead of Google Scholar as the latter "indexes all output regardless of whether or not it is peer-reviewed" (Halevi et al., 2017, p. 825). Prior studies indicate that Google Scholar covers citations in master's theses, working papers, preprints, and any other document type visible to Google Scholar, including articles in predatory journals (Moussa, 2019a, 2021a). ...
Article
A retraction is the removal of an article from the scientific record at any time after its publication. This study investigates the characteristics of retracted articles in marketing. A total of 30 retracted articles published in 18 marketing journals were identified using Google Scholar and then analyzed. The analysis shows that the main reason for retracting marketing articles is duplication, followed by errors in data and data fabrication. On average, it took 2.371 years for each of these articles to be retracted. Using Clarivate Analytics’ Web of Science, it was found that these retracted articles received 421 citations, 196 of which are post-retraction citations. More specifically, 22 of the 30 retracted marketing articles continue to be cited several years after their retraction. The most cited retracted marketing article gathered 67 citations, 30 of which are post-retraction citations, all of them positive citations referencing it as valid and legitimate work. The citation pollution caused by that retracted article transcends marketing to cover such disciplines as information science, psychology, and health nutrition.
... • Instructing researchers in responsible authorship and ethical publishing. Researchers should be instructed not to engage in suspect activities that add no value to the scientific enterprise (Moussa, 2021b). ...
Article
- Hijacked journals mimic the name (and the ISSN) of a reputable journal with the sole purpose of financial exploitation. - A hijacked journal is an even more pernicious scam than a predatory journal. - Grounded in stakeholder theory, this opinion piece indicates that the hijackers are the sole stakeholder group that benefits from journal hijacking. - Hijacked journals will continue to menace scholarly research and publishing unless all stakeholders take specific and coordinated actions against them.
... -It is a predatory journal listed in Cabells' Predatory Reports (the subscription-based service by Cabells Scholarly Analytics). -The publisher of that journal (i.e., ARIPD) is a predatory publisher listed in three free and updated lists of predatory publishers: the Dolos list, the Kscien list, and the Stop Predatory Journals list (for further information on these lists, see Koerber et al., 2020 and Moussa, 2021). -Those at the UK-based Academy of Marketing and Taylor & Francis group (i.e., the publisher of the genuine journal) have "made repeated attempts to contact ARIPD about the possible confusion this duplication in journal name may cause for authors, but as yet have received no response". ...
Article
Hijacked journals are publication outlets that are created by fraudulent entities for financial gain. They deceitfully use the names of genuine journals to dupe researchers. A hijacked journal publishes papers in return for article publication charges similar to those of gold open access journals, but they are not authentic. By using the same title as a genuine journal, a hijacked journal may confuse authors who send their manuscripts to it. A hijacked journal may also confuse authors who cite articles published in it, wrongly assuming that they appeared in an authentic journal (a phenomenon herein called citation infiltration). Adopting a case study methodology, the main aim of this paper is to investigate the extent of citations received by a hijacked marketing journal from marketing journals indexed in Clarivate Analytics’ Social Sciences Citation Index (SSCI). Results indicate that the hijacked journal received 25 citations from 13 SSCI-indexed marketing journals. The list of the infiltrated journals includes some of marketing’s most “prestigious” journals. Ironically, the SSCI-indexed marketing journal that cited the hijacked journal the most (with nine citations) is none other than the genuine journal whose identity has been stolen.
Article
Full-text available
Predatory journals and publishers are a growing concern in the scholarly publishing arena. As one type of attempt to address this increasingly important issue, numerous individuals, associations, and companies have begun curating journal watchlists or journal safelists. This study uses a qualitative content analysis to explore the inclusion/exclusion criteria stated by scholarly publishing journal watchlists and safelists to better understand the content of these lists, as well as the larger controversies that continue to surround the phenomenon that has come to be known as predatory publishing. Four watchlists and ten safelists were analyzed through an examination of their published mission statements and inclusion/exclusion criteria. Notable differences that emerged include the remaining influence of librarian Jeffrey Beall in the watchlists, and the explicit disavowal of his methods for the safelists, along with a growing recognition that the “list” approach may not fully address systemic aspects of predatory publishing that go beyond the individual author's ethical decision-making agency.
Article
Full-text available
On January 15, 2017, a blog that was maintained by a US librarian, Jeffrey Beall, was suddenly shut down. That blog was famed for its divisive and controversial content, namely two blacklists that in essence labelled open access journals and publishers as “predatory”. Beall showed that the entries on his lists increased annually, yet several publishing entities that had been blacklisted by Beall felt that they had been unfairly listed, causing, in some cases, reputational damage. In the vacuum that ensued in academic publishing quality control, a few entities tried to fill the gap to serve as a warning to academics. One of the organizations that stepped in was US-based Cabell’s International, which created a blacklist of journals that did not fulfill their established criteria. This brief communication reports on a structured interview that was held in June of 2017 between the author and Kathleen Berryman, Cabell’s project manager. Some perspectives on Cabell’s whitelists and blacklists are provided.
Article
Full-text available
Background: The increase in the number of predatory journals puts scholarly communication at risk. In order to guard against publication in predatory journals, authors may use checklists to help detect predatory journals. We believe there are a large number of such checklists yet it is uncertain whether these checklists contain similar content. We conducted a systematic review to identify checklists that help to detect potential predatory journals and examined and compared their content and measurement properties. Methods: We searched MEDLINE, Embase, PsycINFO, ERIC, Web of Science and Library, and Information Science & Technology Abstracts (January 2012 to November 2018); university library websites (January 2019); and YouTube (January 2019). We identified sources with original checklists used to detect potential predatory journals published in English, French or Portuguese. Checklists were defined as having instructions in point form, bullet form, tabular format or listed items. We excluded checklists or guidance on recognizing "legitimate" or "trustworthy" journals. To assess risk of bias, we adapted five questions from A Checklist for Checklists tool a priori as no formal assessment tool exists for the type of review conducted. Results: Of 1528 records screened, 93 met our inclusion criteria. The majority of included checklists to identify predatory journals were in English (n = 90, 97%), could be completed in fewer than five minutes (n = 68, 73%), included a mean of 11 items (range = 3 to 64) which were not weighted (n = 91, 98%), did not include qualitative guidance (n = 78, 84%), or quantitative guidance (n = 91, 98%), were not evidence-based (n = 90, 97%) and covered a mean of four of six thematic categories. Only three met our criteria for being evidence-based, i.e. scored three or more "yes" answers (low risk of bias) on the risk of bias tool. Conclusion: There is a plethora of published checklists that may overwhelm authors looking to efficiently guard against publishing in predatory journals. The continued development of such checklists may be confusing and of limited benefit. The similarity in checklists could lead to the creation of one evidence-based tool serving authors from all disciplines.
Article
Full-text available
Predatory journals are Open Access journals of highly questionable scientific quality. Such journals pretend to use peer review for quality assurance, and spam academics with requests for submissions in order to collect author payments. In recent years predatory journals have received a lot of negative media attention. While much has been said about the harm that such journals cause to academic publishing in general, an overlooked aspect is how much articles in such journals are actually read and in particular cited, that is, whether they have any significant impact on the research in their fields. Other studies have already demonstrated that only some of the articles in predatory journals contain faulty and directly harmful results, while many of the articles present mediocre and poorly reported studies. We studied citation statistics over a five-year period in Google Scholar for 250 random articles published in such journals in 2014 and found an average of 2.6 citations per article, and that 56% of the articles had no citations at all. For comparison, a random sample of articles published in the approximately 25,000 peer-reviewed journals included in the Scopus index had an average of 18.1 citations in the same period, with only 9% receiving no citations. We conclude that articles published in predatory journals have little scientific impact.
Article
Full-text available
Objective To conduct a Delphi survey informing a consensus definition of predatory journals and publishers. Design This is a modified three-round Delphi survey delivered online for the first two rounds and in-person for the third round. Questions encompassed three themes: (1) predatory journal definition; (2) educational outreach and policy initiatives on predatory publishing; and (3) developing technological solutions to stop submissions to predatory journals and other low-quality journals. Participants Through snowball and purposive sampling of targeted experts, we identified 45 noted experts in predatory journals and journalology. The international group included funders, academics and representatives of academic institutions, librarians and information scientists, policy makers, journal editors, publishers, researchers involved in studying predatory journals and legitimate journals, and patient partners. In addition, 198 authors of articles discussing predatory journals were invited to participate in round 1. Results A total of 115 individuals (107 in round 1 and 45 in rounds 2 and 3) completed the survey on predatory journals and publishers. We reached consensus on 18 items out of a total of 33 to be included in a consensus definition of predatory journals and publishers. We came to consensus on educational outreach and policy initiatives on which to focus, including the development of a single checklist to detect predatory journals and publishers, and public funding to support research in this general area. We identified technological solutions to address the problem: a ‘one-stop-shop’ website to consolidate information on the topic and a ‘predatory journal research observatory’ to identify ongoing research and analysis about predatory journals/publishers. Conclusions In bringing together an international group of diverse stakeholders, we were able to use a modified Delphi process to inform the development of a definition of predatory journals and publishers. This definition will help institutions, funders and other stakeholders generate practical guidance on avoiding predatory journals and publishers.
Article
Full-text available
The trend through which academia finances scientific publication has been developing in recent years towards open access publishing, which can be exploited by predatory journals and publishers. The aim of this paper is to review the current situation of predatory publishing and introduce Kscien's list of predatory journals and publishers. Kscien has recruited a special committee consisting of 23 young researchers. They are working to keep the list up-to-date, expose the current tactics of the predators and guide authors. The list is designed to be updated daily. Currently, the criteria used to recognize predatory journals and publishers depend on the journal's conduct, evidence of fabrication and levels of peer review. Research is ongoing to provide more solid criteria and objective evidence to address the criticisms directed at Beall. Kscien's list could help to fill the gap left by Beall's list.
Article
Full-text available
Predatory journals have emerged as an unintended consequence of the Open Access paradigm. Predatory journals only supposedly or very superficially conduct peer review and accept manuscripts within days to skim off publication fees. In this provocation piece, we first explain how predatory journals exploit deficiencies of the traditional peer review process in times of Open Access publishing. We then explain two ways in which predatory journals may harm the management discipline: as an infrastructure for the dissemination of pseudo-science and as a vehicle to portray management research as pseudo-scientific. Analyzing data from a journal blacklist, we show that without the ability to validate their claims to conduct peer review, most of the 639 predatory management journals are quite difficult to demarcate from serious journals. To address this problem, we propose open peer review as a new governance mechanism for management journals. By making parts of their peer review process more transparent and inclusive, reputable journals can differentiate themselves from predatory journals and additionally contribute to a more developmental reviewing culture. Eventually, we discuss ways in which editors, reviewers and authors can advocate reform of peer review.
Article
Full-text available
Purpose – The purpose of this paper is to assess the viability of the scholarly search engine Microsoft Academic (MA) as a citation source for evaluating/ranking marketing journals. Design/methodology/approach – This study performs a comparison between MA and Google Scholar (GS) in terms of journal coverage, h-index values and journal rankings. Findings – Findings indicate that: MA (vs GS) covers 96.80 percent (vs 97.87 percent) of the assessed 94 marketing-focused journals; the MA-based h-index exhibits values that are 35.45 percent lower than the GS-based h-index; and the MA-based ranking and the GS-based ranking are highly consistent. Based on these findings, MA seems to constitute a rather viable citation source for assessing a marketing journal’s impact. Research limitations/implications – This study focuses on one discipline, that is, marketing. Originality/value – This study identifies some issues that would need to be fixed by the MA’s development team. It recommends some further enhancements with respect to journal title entry, publication year allocation and field classification. It also provides two up-to-date rankings for more than 90 marketing-focused journals based on actual cites (October 2018) of articles published between 2013 and 2017.
Article
Full-text available
The purpose of this research note is to define and review the extent of “predatory” publishing practices in academic journals in recent years, to ascertain what, if any, substantive damage can result from these practices, and to derive “warning signs” for those embarking on the road to creating/distributing what they have learned. (The authors use “predatory” in its broader dictionary sense: of, relating to, or practicing plunder, pillage, or rapine; inclined or intended to injure or exploit others for personal gain or profit, as in predatory pricing practices; living by predation, predaceous; adapted to predation; source: https://www.merriam-webster.com/dictionary/predatory#synonyms. The broader sense is intended to avoid connotative inaccuracies and strive toward denotative accuracy.) “Predatory publishing” is the charging of a fee or proving value in exchange for the publication of research material without providing the publication services an author would have reasonably expected, such as peer review and editing, to mention only a few. There is evidence that this practice has sadly grown in prevalence in recent years. Following a review of the literature and using case study methodology, it was found that damages in the case of one large publisher were estimated to be over $50m, as per a court adjudication in the US. With open access publishing becoming more popular as a result of institutional, funder and national mandates, it is likely more authors could be tempted or even baited into making poor decisions and publishing their research articles in illegitimate journals, wasting funding resources and damaging their research reputations. These and other implications are considered, as well as enumeration of such behaviors with an eye toward fostering deterrence. Further research and actions that could mitigate the problems are outlined.
Article
Full-text available
Available from: http://doi.org/10.3346/jkms.2019.34.e99
Article
Full-text available
Background. Scholarly communication is an ever-evolving practice. As publishing advanced from the printed format to digital formats, new trends, practices and platforms emerged in academia. As reputable publishers adapted their business models to accommodate open access, many non-reputable publishers have emerged with questionable business models and less-than-favourable or unacceptable publishing services. Objectives. This paper discusses changing trends in scholarly publishing, the advent of and problems caused by pervasive predatory publishing practices, and possible solutions. The paper also investigates possible alternatives to Beall's list and whether a "one-stop shop" black- or white list would serve as a comprehensive tool for scholarly authors. Results. The paper concludes that there is no "one-stop shop" or comprehensive resource or guidelines available at this stage for scholarly authors to consult before publishing. It alerts scholars to be cautious and to do research about potential publishers, before submitting manuscripts for publication. Contributions. It provides recommendations and some useful resources to assist authors before they publish their works.
Article
Full-text available
Despite citation counts from Google Scholar (GS), Web of Science (WoS), and Scopus being widely consulted by researchers and sometimes used in research evaluations, there is no recent or systematic evidence about the differences between them. In response, this paper investigates 2,448,055 citations to 2299 English-language highly-cited documents from 252 GS subject categories published in 2006, comparing GS, the WoS Core Collection, and Scopus. GS consistently found the largest percentage of citations across all areas (93%-96%), far ahead of Scopus (35%-77%) and WoS (27%-73%). GS found nearly all the WoS (95%) and Scopus (92%) citations. Most citations found only by GS were from non-journal sources (48%-65%), including theses, books, conference papers, and unpublished materials. Many were non-English (19%-38%), and they tended to be much less cited than citing sources that were also in Scopus or WoS. Despite the many unique GS citing sources, Spearman correlations between citation counts in GS and WoS or Scopus are high (0.78-0.99). They are lower in the Humanities, and lower between GS and WoS than between GS and Scopus. The results suggest that in all areas GS citation data is essentially a superset of WoS and Scopus, with substantial extra coverage.
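For readers who want to run this kind of source comparison on their own data, the rank correlation reported above can be computed in a few lines of Python. This is only a hedged illustration: the citation counts below are invented, and only the use of Spearman's rho mirrors the study.

```python
# Hedged sketch: Spearman rank correlation between citation counts from two
# sources (e.g., Google Scholar vs. Scopus) for the same set of documents.
# The citation counts below are invented for illustration only.
from scipy.stats import spearmanr

gs_citations     = [120, 45, 300, 10, 75, 220]
scopus_citations = [ 60, 25, 150,  3, 80, 110]

rho, p_value = spearmanr(gs_citations, scopus_citations)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # high rho, as in the 0.78-0.99 range reported
```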
Article
Full-text available
Predatory publishing, a destructive phenomenon that has been highlighted and discussed since 2011, is the consequence of the gold (author-pays) open access publishing model [1]. Predatory journals are money-making stations characterized by charging publication fees and an absence of ‘true’ peer review [2]. These journals and publishers have grown to a very large number in recent years [2,3]. The general assumption is that the most common victims of such journals are young, naive, early career researchers, especially from developing countries, but this may not always be the case [3-5]. Recent evidence suggests that predatory publishing is a global phenomenon affecting authors from both developing and developed countries, and that even experienced authors get duped [5,6]. How can we deal with these publishers and how can authors avoid getting duped by them? This essay presents a few incidents to build on for answering these questions.
Article
Full-text available
The extent of publishing in predatory journals in economics is examined. A simple model of researcher behavior is presented to explore those factors motivating an academic to publish in predatory journals as defined by Beall (Criteria for determining predatory open access publishers, Unpublished document, 3rd edn, 2015. https://scholarlyoa.com/publishers/). Beall’s lists are used to identify predatory journals included in the Research Papers in Economics archives. The affiliations of authors publishing in these outlets indicate that the geographic dispersion of authorship is widespread. A very small subset of authors is registered on RePEc. A surprising number of authors who are in the RePEc top 5% also published in predatory journals in 2015.
Article
Full-text available
Warnings against publishing in predatory journals are plentiful and so are the suggested solutions to the problem. The existing studies all confirm that authors of articles published in potential predatory journals are typically young, inexperienced and from Asia or Africa. To what extent we can consider the problem negligible is determined by the impact such journals are having on scholarly communication in terms of publications and citations. The existing literature can provide more information about the former than the latter. This paper is an analysis of potential predatory journals as well as potential poor-scientific-standards journals. Citations to 124 potential predatory journals and poor-scientific-standards journals are looked up in Scopus and the citing authors analysed with regard to geographic location, publications and citations. The results show that the characteristics of the citing author indeed resemble those of the publishing author. Implications for recommendations and future research are discussed.
Article
Full-text available
In the internet era, spam has become a big problem. Researchers are troubled with unsolicited or bulk spam emails inviting them to publish. However, this strategy has helped predatory journals hunt their prey and earn money. These journals have grown tremendously during the past few years despite serious efforts by researchers and scholarly organizations to hinder their growth. Predatory journals and publishers are often based in developing countries, and they potentially target researchers from these countries by using different tactics identified in previous research. In response to the spread of predatory publishing, scientists are trying to develop criteria and guidelines to help avoid them, for example the recently reported “predatory rate”. This article attempts to (a) highlight the strategies used by predatory journals to convince researchers to publish with them, (b) report their article processing charges, (c) note their presence in Jeffrey Beall’s List of Predatory Publishers, (d) rank them based on the predatory rate, and (e) put forward suggestions for junior researchers (especially in developing countries), who are the most likely targets of predatory journals.
Article
Full-text available
An investigation finds that dozens of academic titles offered ‘Dr Fraud’ — a sham, unqualified scientist — a place on their editorial board. Full article is freely available at: http://www.nature.com/polopoly_fs/1.21662!/menu/main/topColumns/topLeftColumn/pdf/543481a.pdf
Article
Full-text available
Available here http://rdcu.be/DHkL Predatory journals operate as vanity presses, typically charging large submission or publication fees and requiring little peer review. The consequences of such journals are wide reaching, affecting the integrity of the legitimate journals they attempt to imitate, the reputations of the departments, colleges, and universities of their contributors, the actions of accreditation bodies, the reputations of their authors, and perhaps even the generosity of academic benefactors. Using a stakeholder analysis, our study of predatory journals suggests that most stakeholders gain little in the short run from such publishing and only the editors or owners of these journals benefit in the long run. We also discuss counter-measures that academic and administrative faculty can employ to thwart predatory publishing.
Article
Full-text available
This paper describes and discusses the phenomenon ‘predatory publishing’, in relation to both academic journals and books, and suggests a list of characteristics by which to identify predatory journals. It also raises the question whether traditional publishing houses have accompanied rogue publishers upon this path. It is noted that bioethics as a discipline does not stand unaffected by this trend. Towards the end of the paper it is discussed what can and should be done to eliminate or reduce the effects of this development. The paper concludes that predatory publishing is a growing phenomenon that has the potential to greatly affect both bioethics and science at large. Publishing papers and books for profit, without any genuine concern for content, but with the pretence of applying authentic academic procedures of critical scrutiny, brings about a worrying erosion of trust in scientific publishing.
Article
Full-text available
Predatory journals are a well-known issue for scholarly publishing and they are repositories for bogus research. In recent years, the number of predatory journals has risen and it is necessary to present a solution for this challenge. In this paper, we discuss a possible ranking of predatory journals. Our ranking approach is based on Beall’s criteria for the detection of predatory journals and it can help editors to improve their journals or convert their questionable journals to non-predatory ones. Moreover, our approach could help young editors to protect their journals against predatory practices. Finally, we present a case study to clarify our approach.
Article
Full-text available
The aim of the present paper is to introduce some online-based approaches for evaluating scientific journals and publishers and for differentiating them from hijacked ones, regardless of their disciplines. With the advent of open-access journals, many hijacked journals and publishers have deceitfully assumed the mantle of authenticity in order to take advantage of researchers and students. Although these hijacked journals and publishers can be identified by checking their advertisement techniques and their websites, these methods do not always result in their identification. There exist certain online-based approaches, such as using the Master Journal List provided by Thomson Reuters, the Scopus database, or the DOI of a paper, to certify the authenticity of a journal or publisher. It is indispensable that inexperienced students and researchers know these methods so as to identify hijacked journals and publishers with a higher level of probability.
Article
Full-text available
Background A negative consequence of the rapid growth of scholarly open access publishing funded by article processing charges is the emergence of publishers and journals with highly questionable marketing and peer review practices. These so-called predatory publishers are causing unfounded negative publicity for open access publishing in general. Reports about this branch of e-business have so far mainly concentrated on exposing lacking peer review and scandals involving publishers and journals. There is a lack of comprehensive studies about several aspects of this phenomenon, including extent and regional distribution. Methods After an initial scan of all predatory publishers and journals included in the so-called Beall’s list, a sample of 613 journals was constructed using a stratified sampling method from the total of over 11,000 journals identified. Information about the subject field, country of publisher, article processing charge and article volumes published between 2010 and 2014 were manually collected from the journal websites. For a subset of journals, individual articles were sampled in order to study the country affiliation of authors and the publication delays. Results Over the studied period, predatory journals have rapidly increased their publication volumes from 53,000 in 2010 to an estimated 420,000 articles in 2014, published by around 8,000 active journals. Early on, publishers with more than 100 journals dominated the market, but since 2012 publishers in the 10–99 journal size category have captured the largest market share. The regional distribution of both the publisher’s country and authorship is highly skewed, in particular Asia and Africa contributed three quarters of authors. Authors paid an average article processing charge of 178 USD per article for articles typically published within 2 to 3 months of submission. Conclusions Despite a total number of journals and publishing volumes comparable to respectable (indexed by the Directory of Open Access Journals) open access journals, the problem of predatory open access seems highly contained to just a few countries, where the academic evaluation practices strongly favor international publication, but without further quality checks.
Article
Full-text available
Scholars beware! For years, researchers have lamented the long lag times endemic in conventional academic publishing, where even the highest quality papers have often taken more than two years from initial submission to publication. Luckily, advances in digital technologies and the advent of online, open-access (OA) journals are rendering such delays obsolete. Society can now directly benefit from published research within months (and sometimes weeks) of a study being completed. Unfortunately however, open-access, online technologies are interacting with new revenue-generating business models and historic assessment systems, leading to the rise of predatory open-access (POA) journals that prioritize profit over the integrity of academic scholarship. Such interaction is leading to disruptive distortions that are systematically undermining academia’s ability to disseminate the highest quality scholarship and to benefit from free, timely access.
Article
Full-text available
This paper presents the bibliometric characteristics of 32 biomedical open access journals published by Academic Journals and International Research Journals – the two Nigerian publishers on Jeffrey Beall's 2012 list of 23 predatory open access publishers. Data about the journals and the authors of their articles were collected from the websites of the publishers, Google Scholar and Web of Science. As at December 2012, the journals had together produced a total of 5,601 papers written by 5,599 authors, and received 12,596 citations. Authors from Asia accounted for 56.79% of the publications; those from Africa wrote 28.35% while Europe contributed 7.78%. Authors from Africa accounted for 18.25% of the citations these journals received, and this is about one-third the number of citations by authors in Asia (54.62%). More in-depth studies are required to develop further information about the journals, such as how much scientific information the journals contain, as well as the science literacy of the authors and the editorial.
Article
Full-text available
Researchers from diverse disciplines have examined the many factors that contribute to the influence of published research papers. Such influence dynamics are in essence a marketing of science issue. In this paper, we propose that in addition to known established, overt drivers of influence such as journal, article, author, and Matthew effects, a latent factor "citability" influences the eventual impact of a paper. Citability is a mid-range latent variable that captures the changing relationship of an article to a field. Our analysis using a discretized Tobit model with hidden Markov processes suggests that there are two states of citability, and these dynamic states determine eventual influence of a paper. Prior research in marketing has relied on models where the various effects such as author and journal effects are deemed static. Unlike ours, these models fail to capture the continuously evolving impact dynamics of a paper and the differential effect of the various drivers that depend on the latent state a paper is in at any given point of time. Our model also captures the impact of uncitedness, which other models fail to do. Our model is estimated using articles published in seven leading marketing journals during the years 1996–2003. Findings and implications are discussed.
Article
Full-text available
The number of citations a paper receives is the most commonly used measure of scientific impact. In this paper, we study not only the number but also the type of citations that 659 marketing articles generated. We discern five citation types: application, affirmation, negation, review and perfunctory mention (i.e., citing an article only indirectly without really using it). Prior literature in scientometrics recognizes that the former three types, on average, signal a higher level of scientific indebtedness than the latter two types. In our sample, these three types of citation represent only 15% of all citations. We also find different determinants of citation behavior across citation types. Across the 49 determinants we included, only 13 have the same effect across all citation types, of which only 5 are statistically significant across all citation types. For instance, we find a significant inverted U-effect of challenging commonly held beliefs on citations counts, but only for three of the citation types: affirmation, review and perfunctory mention. Our results encourage scientific stakeholders to move beyond mere citation counts to assess a paper’s or a scholar’s scientific contribution, as well as to devote greater attention to the citation process itself.
Article
Full-text available
The authors investigate the overall and subarea influence of a comprehensive set of marketing and marketing-related journals at three points in time during a 30-year period using a citation-based measure of structural influence. The results show that a few journals wield a disproportionate amount of influence in the marketing journal network as a whole and that influential journals tend to derive their influence from many different journals. Different journals are most influential in different subareas of marketing; general business and managerially oriented journals have lost influence, whereas more specialized marketing journals have gained in influence over time. The Journal of Marketing emerges as the most influential marketing journal in the final period (1996-97) and as the journal with the broadest span of influence across all subareas. Yet the Journal of Marketing is notably influential among applied marketing journals, which themselves are of lesser influence. The index of structural influence is significantly correlated with other objective and subjective measures of influence but least so with the impact factors reported in the Social Sciences Citation Index. Overall, the findings demonstrate the rapid maturation of the marketing discipline and the changing role of key journals in the process.
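One common way to operationalize influence in a journal citation network is a recursive centrality measure over a weighted directed graph. The sketch below uses PageRank on a made-up set of journal-to-journal citation counts purely for illustration; the index of structural influence used in the paper is a different, citation-based measure.

```python
import networkx as nx

# Edge (A, B, w): journal A cites journal B w times in the study window (hypothetical counts)
citing = [
    ("J. Marketing Res.", "J. Marketing", 120),
    ("Marketing Sci.", "J. Marketing", 80),
    ("J. Marketing", "J. Consumer Res.", 60),
    ("J. Consumer Res.", "J. Marketing", 90),
    ("Marketing Sci.", "J. Marketing Res.", 70),
]

G = nx.DiGraph()
G.add_weighted_edges_from(citing)

# Recursive influence: a journal is influential if it is cited by influential journals
influence = nx.pagerank(G, weight="weight")
for journal, score in sorted(influence.items(), key=lambda kv: -kv[1]):
    print(f"{journal}: {score:.3f}")
```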
Article
The emergence of open access (OA) publishing has altered incentives and opportunities for academic stakeholders and publishers. These changes have yielded a variety of new economic and academic niches, including journals with questionable peer‐review systems and business models, commonly dubbed “predatory publishing.” Empirical analysis of Cabellʼs Journal Blacklist reveals substantial diversity in types and degrees of predatory publishing. While some blacklisted publishers produce journals with many severe violations of academic norms, “gray” journals and publishers occupy borderline or ambiguous niches between predation and legitimacy. Predation in academic publishing is not a simple binary phenomenon and should instead be perceived as a spectrum with varying types and degrees of illegitimacy. Conceptions of predation are based on overlapping evaluations of academic and economic legitimacy. High institutional status benefits publishers by reducing conflicts between—if not aligning—professional and market institutional logics, which are more likely to conflict and create illegitimacy concerns in downmarket niches. High rejection rates imbue high‐status journals with value and pricing power, while low‐status OA journals face “predatory” incentives to optimize revenue via low selectivity. Status influences the social acceptability of profit‐seeking in academic publishing, rendering lower‐status publishers vulnerable to being perceived and stigmatized as illegitimate.
Article
Leading scholars and publishers from ten countries have agreed a definition of predatory publishing that can protect scholarship. It took 12 hours of discussion, 18 questions and 3 rounds to reach.
Article
Following Hartley (Scientometrics 118:375–381, 2019), I attempted to draw lessons from my personal Google citations by reviewing over 100,000 of them. The review asked eight questions: Do papers in high-impact journals necessarily lead to higher personal citations? Does innovative research attract more citations than replications and refinements? Do reviews and meta-analyses attract more citations than empirical studies? Which gets cited more: books, chapters, or presentations? What determines the pattern of individual paper citations over time? Do citations vary across academic disciplines? Is it better to focus on a few specific journals or to “spread the word” to maximize citations? How important is it to devise one’s own tests (statistical/diagnostic) to maximize citations? All these questions were answered by inspecting this N = 1 data set. It provides hypotheses for other researchers to explore and test. Limitations are acknowledged.
Article
Background: Nursing journals from predatory publication outlets may look authentic and seem to be a credible source of information. However, further inspection may reveal otherwise. Purpose: The purpose of this study was to analyze publication and dissemination patterns of articles published in known predatory nursing journals. Method: Using Scopus, reference lists were searched for citations from seven identified predatory nursing journals. Bibliographic information and subsequent citation information were then collected and analyzed. Findings: A total of 814 citations of articles published in predatory nursing journals were identified. Further analysis indicated that these articles were cited in 141 nonpredatory nursing journals of various types. Discussion: Predatory nursing journals continue to persist, yet fewer may now be in existence. Education and information may help authors and reviewers identify predatory journals, thereby discouraging submissions to these outlets and encouraging hesitancy among authors to cite articles published in them.
Article
This contribution focuses on the scholarly social network ResearchGate (RG). We take our cue from a recent change in the information shown on each researcher’s profile page, which now discloses the number of full-text reads in addition to the already provided number of overall reads. Building on the findings of two previous studies (Orduna-Malea et al. in Scientometrics 112(1):443–460, 2017. https://doi.org/10.1007/s11192-017-2396-9; Copiello and Bonifaci in Scientometrics 114(1):301–306, 2018. https://doi.org/10.1007/s11192-017-2582-9), we delve into the relationship among full-text research items uploaded to the platform, full-text reads of those items, and the so-called RG Score. The dataset examined here provides conflicting results. Firstly, the number of full-text publications and reads is significantly different, along with the RG Score, for the analyzed samples. Secondly, the RG Score implicitly rewards the ratio between the full-texts available to users and total research items. Moreover, the same score seems to be affected to a greater degree by the level of overall reads. However, apart from an indirect relationship, it does not reward how much attention the full-texts get in comparison to the other research items featured in the scholars’ profile pages.
Article
In recent years the academic world has witnessed the mushrooming of journals that falsely pretend to be legitimate academic outlets. We study this phenomenon using information from 46,000 researchers seeking promotion in Italian academia. About 5% of them have published in journals included in the blacklist of ‘potential, possible, or probable predatory journals’ compiled by the scholarly librarian Jeffrey Beall. Data from a survey that we conducted among these researchers confirm that at least one third of these journals either do not provide peer review or engage in some other type of irregular editorial practice. We identify two factors that may have spurred publications in dubious journals. First, some of these journals have managed to be included in citation indexes such as Scopus, which many institutions consider a guarantee of quality. Second, we show that authors who publish in these journals are more likely to receive positive assessments when they are evaluated by (randomly selected) committee members who lack research expertise. Overall, our analysis suggests that the proliferation of ‘predatory’ journals reflects the existence of severe information asymmetries in scientific evaluations.
Article
Using a database of potential, possible, or probable predatory scholarly open-access journals, the objective of this research is to study the penetration of predatory publications in the Brazilian academic system and the profile of their authors in a cross-sectional empirical study. Based on a massive amount of publications from Brazilian researchers of all disciplines during the 2000–2015 period, we were able to analyze the extent of predatory publications using econometric modeling. Descriptive statistics indicate that predatory publications represent a small overall proportion but grew exponentially in the last 5 years. Departing from prior studies, our analysis shows that experienced researchers with a high number of non-indexed publications and a PhD obtained locally are more likely to publish in predatory journals. Further analysis shows that once a journal regarded as predatory is listed in the local ranking system, the Qualis, it starts to receive more publications than non-predatory ones.
Article
This study examines the reasons why authors publish in ‘predatory’ OA journals. In total, 50 journals were randomly selected from Beall's list of ‘predatory’ journals. Different methods, including WHOIS tracking, were utilized to query basic information about the selected journals, including location and registrant. Then, 300 articles were randomly selected from within selected journals in various scientific fields. Authors of the selected articles were contacted and sent survey questions to complete. A grounded theory qualitative methods approach was used for data collection and analysis. The results demonstrated that most of these journals were located in the developing world, usually Asia or Africa, even when they claimed they were in the USA or UK. Furthermore, four themes emerged after authors’ survey responses were coded, categorized, and sub-categorized. The themes were: social identity threat, unawareness, high pressure, and lack of research proficiency. Scholars in the developing world felt that reputable Western journals might be prejudiced against them and sometimes felt more comfortable publishing in journals from the developing world. Other scholars were unaware of the reputation of the journals in which they published and would not have selected them had they known. However, some scholars said they would still have published in the same journals if their institution recognised them. The pressure to ‘publish or perish’ was another factor influencing many scholars’ decisions to publish in these fast-turnaround journals. In some cases, researchers did not have adequate guidance and felt they lacked the knowledge of research to submit to a more reputable journal. More needs to be done by institutions and reputable journals to make researchers aware of the problem of ‘predatory’ journals.
Article
Predatory journals are a global and growing problem contaminating all domains of science. A coordinated response by all stakeholders (researchers, institutions, funders, regulators and patients) will be needed to stop the influence of these illegitimate journals.
Article
As Google Scholar (GS) gains more ground as a free scholarly literature retrieval source, it is becoming important to understand its quality and reliability in terms of scope and content. Studies comparing GS to controlled databases such as Scopus, Web of Science (WOS) and others have been published almost since GS’s inception. These studies focus on its coverage, quality and ability to replace controlled databases as a source of reliable scientific literature. In addition, GS’s introduction of citation tracking and journal metrics has spurred a body of literature focusing on its ability to produce reliable metrics. In this article we review studies in these areas in an effort to provide insights into GS’s ability to replace controlled databases in various subject areas. We reviewed 91 comparative articles from 2005 until 2016 that compared GS to various databases, especially Web of Science (WOS) and Scopus, in an effort to determine whether GS can be used as a suitable source of scientific information and as a source of data for scientific evaluation. Our results show that GS has significantly expanded its coverage over the years, which makes it a powerful database of scholarly literature. However, the quality of the resources indexed and its overall policy remain unknown. Caution should be exercised when relying on GS for citations and metrics, mainly because it can be easily manipulated and its indexing quality remains a challenge.
Article
Predatory journals are out to get you and your work. Awareness of predatory publishers and their practices is now much higher than even three years ago: predatory being defined by the Oxford English Dictionary as 'preying naturally on' and 'seeking to exploit' others.
Article
‘Continuous effort, not strength or intelligence, is the key to understanding our potential.’ Margaret J Wheatley

The focus of any academic or research author is to share his or her findings, and to gain respect and reward for publishing. The ideal journal is one that not only publishes an article quickly but also helps the author to improve the article before publication through peer review, selects only the best research so that the author’s article lies alongside other high quality articles, and provides maximum (and long-term) visibility and access to the article. Unfortunately, in the real world, authors need to make trade-offs between high quality journals, those that work quickly, those that are willing to accept the article and those that provide the best access. Into this mix has come the potential of open access as a means of increasing visibility: journals publish the article without a subscription barrier so anyone, anywhere, can read the article. However, the growth of open access (pushed by institutions, grant bodies and governments as a means of improving human health and knowledge) has come with some unforeseen consequences.

In this article, Jeffrey Beall discusses one recent phenomenon that has arisen from the open access movement: that of ‘predatory publishers’. These are individuals or companies that use the open access financial system (author pays, rather than library subscribes) to defraud authors and readers by promising reputable publishing platforms but delivering nothing of the sort. They frequently have imaginary editorial boards, do not operate any peer review or quality control, are unclear about payment requirements and opaque about ownership or location, include plagiarised content and publish whatever somebody will pay them to publish. Predatory publishers generally make false promises to authors and behave unethically. They also undermine the scholarly information and publishing environment with a deluge of poor quality, unchecked and invalidated articles often published on temporary sites, thus losing the scholarly record.

Jeffrey Beall, a librarian in Denver, US, has watched the rise of such fraudulent practice, and manages a blog site that names publishers and journals that he has identified as predatory. While Beall’s lists can provide librarians and knowledgeable authors with information on which journals and publishers to be cautious about, several legitimate publishers, library groups and others have joined forces to educate and inform authors in what to look for when selecting journals to publish in (or read). This initiative, called Think. Check. Submit. (http://thinkchecksubmit.org/), was launched in the latter half of 2015 and hopes to raise awareness of disreputable journals while clearly separating them from valid, high quality, open access journals (of which there are many).

Pippa Smart, Guest Editor
Article
Publication in refereed journals is critical to career success for most marketing faculty members, and the peer review process is the gatekeeper for a refereed journal. The study reported here examines marketing academics' perceptions of this peer review process. Based on responses from 653 marketing academics, we find favorable overall impressions of the review process as being fair/unbiased, truly double-blind, timely, and successful in improving the quality of research. But the process is not without concerns. In particular, there are concerns about the ability to have (and adhere to) a truly double-blind review process given the close-knit nature of the academy in conjunction with today's interconnected world. Problematically, researchers express dissatisfaction with the lack of timeliness in the review process. Given these concerns, we offer a strategically oriented recommendation for a code of publishing ethics and a tactically oriented recommendation focused on improvements in manuscript turnaround time.
Article
Many open access journals have a reputation for being of low quality and being dishonest with regard to peer review and publishing costs. Such journals are labeled “predatory” journals. This study examines author profiles for some of these “predatory” journals as well as for groups of more well-recognized open access journals. We collect and analyze the publication record, citation count, and geographic location of authors from the various groups of journals. Statistical analyses verify that each group of journals has a distinct author population. Those who publish in “predatory” journals are, for the most part, young and inexperienced researchers from developing countries. We believe that economic and sociocultural conditions in these developing countries have contributed to the differences found in authorship between “predatory” and “nonpredatory” journals.
Article
This paper investigates citations of influential papers in the marketing and management area. These papers are successful in terms of the direct citations they receive (i.e., primary citations). To be truly influential, however, the papers citing them must in turn be used and cited by subsequent papers (i.e., have secondary citations) to demonstrate their long-run relevance. We propose a measure of enduring impact that takes into account (1) both primary and secondary citations and (2) the number of citations in the bibliography. The measure is non-linearly (exponentially) related to the traditional influence measure (i.e., primary citations to a paper) and captures the dissemination of knowledge from a paper more completely than cumulative or average primary citations.
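The abstract does not give the exact formula, so the sketch below only illustrates the ingredients it names: counting primary and secondary citations for a focal paper from a citation graph and discounting each citing paper by the length of its bibliography. The toy graph, the 0.5 weight on secondary citations, and the combination rule are hypothetical.

```python
# cites[p] = list of papers that p cites (its bibliography); toy data only
cites = {
    "P": ["X1", "X2"],
    "A": ["P", "X1"],     # A cites the focal paper P (primary citation)
    "B": ["P"],
    "C": ["A"],           # C cites A, a citer of P (secondary citation)
    "D": ["A", "B"],
}

def primary_citers(graph, focal):
    """Papers that cite the focal paper directly."""
    return {p for p, refs in graph.items() if focal in refs}

def secondary_citers(graph, focal):
    """Papers that cite a primary citer of the focal paper (excluding primaries)."""
    primaries = primary_citers(graph, focal)
    return {p for p, refs in graph.items() if any(r in primaries for r in refs)} - primaries

primaries = primary_citers(cites, "P")
secondaries = secondary_citers(cites, "P")
print(len(primaries), len(secondaries))   # 2 primary, 2 secondary in this toy graph

# One hypothetical way to combine them, discounting papers with long bibliographies
score = (sum(1 / len(cites[p]) for p in primaries)
         + 0.5 * sum(1 / len(cites[p]) for p in secondaries))
print(round(score, 3))
```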
Article
Purpose – The purpose of this paper is to see whether it is possible to reliably detect, prospectively, superior intellectual contributions to marketing’s literature. Design/methodology/approach – Citation data accessed on the Institute of Scientific Information Web of Science were used to compare the impact of award-winning marketing articles with that of lead articles and non-lead articles in the same journal issues. Findings – Award-winners gathered more citations than those in the two comparison groups. It is shown, however, that this finding should not be taken for granted. The peer review system frequently fails to identify high quality, innovative research. Research limitations/implications – The paper only considers US marketing journals. Originality/value – This is the only in-depth study of the impact of award-winning research in the marketing community.
Article
Journals that exploit the author-pays model damage scholarly publishing and promote unethical behaviour by scientists, argues Jeffrey Beall.