Scientific Publishing - Science topic
Explore the latest questions and answers in Scientific Publishing, and find Scientific Publishing experts.
Questions related to Scientific Publishing
Quantity vs. Quality in Scientific Publishing: Where Do We Stand?
Reimagining Scientific Publishing: Your Voice Matters! We want to hear from researchers like you! Share your thoughts on the current state of scientific publishing and what changes you believe are needed. Your answers will remain anonymous unless you choose to provide identifying information in open-ended responses. Link to the survey: https://wsu.co1.qualtrics.com/jfe/form/SV_bf7yG0U7H3rqZo2
How can Big Data Analytics and Data Science, supported by generative artificial intelligence technology, support conducting scientific research and publishing its results?
In the age of digitalisation, where science generates unprecedented amounts of data, big data analytics and data science, supported by generative artificial intelligence, are becoming key tools to support the research process. They enable researchers not only to process and analyse this data effectively, but also to discover hidden patterns and trends that would be inaccessible using traditional methods. Thanks to machine learning algorithms, researchers can identify complex relationships, formulate new hypotheses and generate innovative theories, which significantly accelerates scientific progress. Generative artificial intelligence, which is capable of creating new content based on existing data, opens up new possibilities for automating analysis, generating hypotheses and supporting the publication of research results, allowing scientists to focus on interpreting and formulating conclusions. However, to fully utilise the potential of these technologies, it is necessary to continuously develop methodologies and algorithms, as well as to consider the ethical aspects of their application, which emphasises the key role of scientific research in this field.
The research and observations that I conduct show that artificial intelligence technology has been developing rapidly in recent years and is finding new applications, with new opportunities and threats emerging. I have described the main determinants, including the potential opportunities and threats to the development of artificial intelligence technology, in my article below:
OPPORTUNITIES AND THREATS TO THE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE APPLICATIONS AND THE NEED FOR NORMATIVE REGULATION OF THIS DEVELOPMENT
I have described the issue of Industry 4.0/5.0 technology applications, including Big Data Analytics, with the aim of improving data and information transfer and processing systems, in the following articles:
THE QUESTION OF THE SECURITY OF FACILITATING, COLLECTING AND PROCESSING INFORMATION IN DATA BASES OF SOCIAL NETWORKING
IMPORTANCE AND SECURITY OF INFORMATION PROVIDED BY THE INTERNET IN THE CONTEXT OF THE DEVELOPMENT OF ECONOMIC ENTITIES IN POLAND
APPLICATION OF DATA BASE SYSTEMS BIG DATA AND BUSINESS INTELLIGENCE SOFTWARE IN INTEGRATED RISK MANAGEMENT IN ORGANISATION
The postpandemic reality and the security of information technologies ICT, Big Data, Industry 4.0, social media portals and the Internet
The Big Data technologies as an important factor of electronic data processing and the development of computerised analytical platforms, Business Intelligence
And what is your opinion on this topic?
What is your opinion on this matter?
Please answer,
I invite everyone to the discussion,
Thank you very much,
Best wishes,
I invite you to scientific cooperation,
Dariusz Prokopowicz

The scientific publishing industry is facing several significant challenges today:
- Sustainability of the Subscription-Based Model
- Open Access and Funding Mandates
- Peer Review Challenges
- Predatory Journals and Questionable Practices
We invite researchers in Islamic, Jewish, Christian, and Abrahamic finance to submit chapter proposals for the upcoming book, “Legal and Regulatory Aspects of Abrahamic Finance.”
Submission Deadline: January 29th, 2025
Submit chapter proposals at: Call for Chapters: Legal and Regulatory Aspects of Abrahamic Finance | IGI Global Scientific Publishing
Contact at: paldi16@gmail.com
Ecosystem Services (Elsevier) is open to receiving proposals for innovative new Special Issues. Ecosystem Services is a high-ranking journal, among the 15 best in environmental sciences (impact factor 6.1, 2023), ranked Q1. Our position gives high visibility to the articles we publish.
Special Issues are article collections (12 or more articles) that share well-defined, scientifically innovative, and coherent aims and scope. For an overview of present and past SIs, please visit the journal's webpage.
If you are interested in proposing an appealing, high-quality topic for a Special Issue in the Ecosystem Services journal, please send me a one-page description of the SI's aims and scope, including the team of guest editors (3-5). We will carefully assess the proposal in terms of scientific quality and aims and scope. The scope must be clearly framed by a conceptual outline that ties the article collection together with sufficient coherence.
Looking forward to hearing from you!
Many journals have implemented a soft policy on the use of generative AI, such as ChatGPT, and now require authors to disclose any use of AI in assisting with manuscript writing. This policy distinguishes between the use of tools like Grammarly's generative AI for syntax improvement and cases where AI is used to perform and write up formal analysis.
Experienced journal editors can typically identify AI-generated content, as it often lacks coherence with the rest of the manuscript. In my view, this emerging issue needs to be addressed promptly and decisively.
I believe that AI-"assisted" research writing should be considered unethical and prohibited in the scientific community, with measures put in place to prevent its use entirely. Allowing this trend to persist could have detrimental effects on scientific research in the long term. In my opinion, it removes all the human creativity, intuition, personality, and fun from writing scientific research papers.
However, I'm open to hearing different perspectives on this matter.
SDGs = UN's Sustainable Development Goals
HEI = Higher Education Institutions
If any relation between them is detected, you should specify which relations these are and how they work or should work. Feel free to offer any ideas for optimizing these processes or actions.
Dear colleagues and fellow scientific researchers,
We would like to invite you to publish your next paper in our upcoming issues of DYSONA – Applied Science (ISSN: 2708-6283).
Main subjects:
- Agriculture
- Applied environmental sciences
- Engineering and Technology
- Food science
Publication fees: Free of charge
Time from submission to publication: Average 4 weeks (depending on peer review/revision process)
Indexing and abstracting: The journal is not yet included in Scopus or WoS. However, it is indexed in AGRIS (FAO), Google Scholar, FSTA, ROAD, ZDB, and many other scientific repositories
Indexing page: https://applied.dysona.org/page_2003.html
Website: https://applied.dysona.org
We welcome your questions here; for more information, kindly contact us at: dysona@e-namtila.com

Dear Friends and Colleagues,
I have a question regarding the review process for journal submissions. When we submit an article to a journal, is it illegal for a reviewer to provide authors with a list of their own articles and request that these articles be cited in the manuscript? I understand that this practice is certainly unethical, but I am curious about its legality.
In the situations I am referring to, the articles suggested for citation are entirely unrelated to the manuscript under review. Reviewers often claim that their final decision on the acceptance of a manuscript does not depend on whether the authors cite their articles, stating that these suggestions are optional.
However, I have spoken with several authors in my country who recognize that this practice is unethical. They feel pressured to comply with these requests because their papers have already been under review for several months, and they fear the risk of rejection or further delay if the journal has to find new reviewers to replace the previous ones.
I appreciate your insights on this matter.
Preprint servers play a valuable role in the scientific publishing workflow by accelerating the sharing of research, promoting openness and transparency, and diversifying the publication landscape. As the scientific community continues to evolve, the role of preprint servers will likely continue to be an important and dynamic aspect of the scholarly communication ecosystem.
The choice between open-access and subscription-based publishing models involves weighing the trade-offs between accessibility, sustainability, quality, and the evolving landscape of scholarly communication. Institutions, funders, and researchers must carefully consider their priorities and the broader implications for the scientific publishing ecosystem.
Here is a partial list of JCR (Web of Science) Impact Factors.
Title: Factors Contributing to the Decline of Journal Impact Factors
Abstract:
The Journal Citation Reports (JCR) Impact Factor is a widely recognized metric used to evaluate the influence and prestige of scientific journals. However, in recent years, there has been a noticeable decline in the impact factors of many journals indexed in the Web of Science's JCR database. Possible causes include:
- Evolving Publication Landscape: The scientific publishing landscape has undergone significant transformations in recent years. The rise of open access publishing, preprint servers, and alternative metrics has led to a diversification of research dissemination channels. As a result, traditional subscription-based journals may face increased competition, leading to a redistribution of citations across different platforms.
- Field-Specific Trends: Impact factors can vary significantly across different scientific disciplines due to variations in publication practices, citation patterns, and research culture. Changes in funding priorities or emerging research areas may result in shifts in citation patterns, impacting the impact factors of certain journals.
- Quality vs. Quantity: The pressure to publish more articles within shorter timeframes can lead to an increase in the overall number of publications. While this can enhance scientific output, it may also dilute the impact factors of individual journals if the focus shifts from quality to quantity.
- Citational Behavior: Changes in the way researchers cite literature can affect impact factors. The increasing use of self-citations and the concentration of citations towards a limited number of highly influential papers can impact the overall citation metrics of journals.
- Editorial Practices and Policies: The editorial policies and practices of journals can influence their impact factors. Factors such as rigorous peer review, editorial selectivity, and adherence to ethical publishing standards can attract high-quality submissions and subsequently increase impact factors.
Conclusion:
The decline in the impact factors of journals listed in the JCR database can be attributed to a combination of factors related to evolving publication practices, field-specific trends, citational behavior, and editorial practices. Understanding these factors is crucial for researchers, publishers, and other stakeholders to interpret impact factors accurately and make informed decisions regarding journal selection and evaluation.
Further research and analysis are needed to delve deeper into the dynamics of impact factors and explore potential strategies for maintaining the quality and relevance of journals in an evolving scholarly publishing landscape.
The attachment contains List of impact Factors 2024.
Dear Colleagues,
Peer review isn't working well, and it needs an overhaul. In the age of artificial intelligence, blockchain, and remote work, it doesn't make sense to wait months just to receive a few lines rejecting an excellent manuscript or accepting a poor one!
Would you spend five minutes answering a questionnaire on Google Forms and help SCIENEUM.io solve this problem for all of us?
This is it: https://forms.gle/2BskfDeAoeqKf5Wt5
Are you one of us? https://youtu.be/ewOuhohAjWc
Write your comment below!
Dear all,
I would like to share with you publishing opportunities in our journal, Ecosystem Services, a Q1 journal with a 2023 impact factor of 7.6, according to the 2023 Journal Citation Reports, and a CiteScore (Scopus) of 12.5. We currently have 3 Special Issues (SIs) open for submissions:
Payments for Ecosystem Services and Motivations: exploring the driving conditions for success or failure.
Innovative governance of ecosystem services: from hierarchical to collaborative models and from single instrument to “blended” approaches.
Ecosystem services towards planning healthy and resilient landscapes
The scope and full information can be found here:
Regards
Luis
Recently I visited <https://research.com/> (R) (a platform that lists top scientists around the world from the areas of computer science and electronics) and later I also visited <https://atlas.cern/> (A) (the ATLAS Experiment at CERN). I made the following observations (I intentionally don't mention names):
1. One of the top authors at R has published 1,816 papers.
If one's professional career lasts 40 years, the calculation says:
40 years x 365 days = 14,600 days; 14,600 / 1,816 ≈ 8 days per paper
That means one paper published every 8 days throughout an entire professional life! That's about 45 papers a year... every year!
2. The same author at R has an h-index of 167.
"The h-index is defined as the maximum value of h such that the given author/journal has published at least h papers that have each been cited at least h times."
R's top author has at least 167 papers, each cited at least 167 times!
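For the curious, the back-of-the-envelope arithmetic above and the textbook h-index definition can be sketched in a few lines of Python (the career length and paper count are simply the figures quoted from R; the small citation list at the end is an invented example):

```python
# Back-of-the-envelope check of the productivity figures quoted above.
career_years = 40
papers = 1816

days = career_years * 365               # 14,600 days in a 40-year career
days_per_paper = days / papers          # ~8 days per paper
papers_per_year = papers / career_years # ~45 papers per year

def h_index(citations):
    """h = max h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: five papers with these citation counts give h = 4.
example_h = h_index([10, 8, 5, 4, 3])
```

Note that an h-index of 167 only guarantees the *minimum*: at least 167 papers with at least 167 citations each, which is why the claim above is stated with "at least".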
3. A paper published by A's researchers had 78 authors!
I realize that CERN is something "big" and quite complex. But... there are 78 authors anyway...
Probably all those people are high-level scientists. But... what makes them hyperprolific? Is it real? How is it possible? Is it more for the benefit of science or is it a kind of business?
What's your opinion?
Is it a good idea to send pre-submission inquiries to multiple journals to accelerate the publication process? Or do we have to wait for an editor's response to one pre-submission inquiry before sending another?
Thank you!
Greetings fellow researchers,
We have researched Frankl's theory, which suggests that subscribing to nihilism can lead to experiencing an existential vacuum (Man's Search for Meaning, p. 111). Furthermore, we have studied how this phenomenon poses a threat to psychotherapy (). We have investigated how internet usage and existential nihilism affect the experience of existential vacuum.
As we have decided to take this research further for publication, I would like to gather your insights on selecting appropriate journals, approaching them, how the university affiliation can help, the ethical process, and whether we can take assistance from a guide or the university. We appreciate any additional insights or experiences you may have on the topic.
Thank you!
Remark_1: a PDF of this draft has been added to this discussion to allow the readers to have access to the hyperlinks.
Remark_2: this discussion is aimed at drawing attention to the seriousness of the current man-made global warming in which science has much to do in order to avoid the uncertainty spreading.
Last November 17 and 18, a very concerning event took place for the first time in modern recorded history: the global surface air temperature exceeded by 2 °C the pre-industrial average temperature (1850-1900), measured prior to the extensive and widespread use of fossil fuels. Although scientists assure us that this exceedance, which lasted only a limited number of days, does not mean the Paris Agreement targets are already compromised, it is urgent and mandatory to keep a precautionary watch on the atmosphere to elucidate whether a threshold is gaining momentum, pushing the atmosphere to settle around 2 °C of overheating and making that the main feature of anthropogenic climate warming within the next ten years.
What happened last November 17 is a serious issue that cannot be overlooked or dismissed, either by the irresponsible "optimism" which says things will get better thanks to technology-based fairy tales, or by the institutional denialism that exists around the seriousness of human-sparked global warming and everything related to its speed (or, if you prefer, its rate of advance). For those reasons, a conservative perspective will not be helpful, bearing in mind the last twenty years' trends in global CO2 emissions.
As expected, COP 28 was unable to move beyond its 1.5 °C goal, as nothing serious is being done with regard to how fast human-boosted warming is going to exceed 2 °C above the pre-industrial average.
Almost in parallel, the tipping-points narrative has been warning that humanity cannot exceed 1.5 °C, even as it is also said that humans are "near climate tipping points". The bad news is that we still have not developed the hard models and measurements needed to obtain accurate metrics of how far we are from reaching those tipping points. Furthermore, the "tipping points" discourse is too vague, and it is becoming another meaningless concept that many around the world talk about without having any measurable parameters or a quantifiable perception of those potential thresholds.
For decades we have been told that remote sensing and everything that comes from Earth Observation (EO) systems would help achieve a sustainable path while planning for sustainable development (SD) and for a tough future under severe climate strikes. Tonnes of papers using satellite-provided data have been published and, no doubt, will keep being published at a high rate, while so far remaining unable to show evidence of an overall improvement in the global situation, as human dynamics seem unstoppable.
Although the lack of decisive, global, and integrated climate action will persist as one of the main features and drivers of the international system in the near term, it is time to start thinking about implementing a global-coverage alert system to inform the world when, and how often, the global mean Earth temperature approaches or exceeds 2 °C above the pre-industrial average. That alert system should also have a straightforward design for displaying the information, so as to obtain trends (the speed of atmospheric overheating is crucial) and the frequency of those events.
That alert system should be very "sonorous". That is, it should, among other means and devices, reach people's cell phones in much the same way that, for instance, earthquake alarm systems work. In a few words, each time the global mean temperature approaches and/or exceeds 2 °C above the pre-industrial average, people must know.
Concrete progress is needed on the sense of urgency and on situational awareness among global citizens, to end the self-deceiving attitude that can be witnessed not only in rich countries but in middle-income and poor countries too. The warming is faster than predicted and expected.
Humans lost this war twenty years ago, when it was finally accepted that the warming was faster than previously acknowledged. Unfortunately, despite the huge amount of data and the number of satellites orbiting Earth, it remains nearly impossible to provide any measure of that speed, or even to agree on how humans should measure that rate of change.
It is time to stop over-discussing and get serious. It is quite advisable to carry out a sustained observing effort on what is going on in Brazil and in the middle of Amazonia, following the situation there throughout the Southern Hemisphere summer of 2023. It is important to be able to know how many times it could happen during the next six months.
It is also advisable that science make its best effort to avoid publishing papers that leave room for ambiguity about timing. It should be mandatory to be quite clear in stating the scope and conclusions of any paper within concrete time-frames. Leaving the door open to speculation about the timing that can be inferred from those publications has a very negative impact on everything that pertains to figuring out the right time scales for climate action globally.
An explicit acknowledgment of which version of the sustainable development (SD) concept, the weak or the strong, is being used as the main analytical tool is a complementary publishing strategy that could be of great assistance when evaluating the reach and strength of the conclusions. It is worth mentioning that the "weak" version has been adopted for so long that it may be the explanatory root of the aggregate failure both to achieve higher levels of sustainability and to shape the urgent collective human self-restraint needed to improve the response to the climate and ecological crisis.
Science is not exempt from governance, and any governance regime should be vigilant about the undesired and counterproductive effects of scientific papers on the political process that, regrettably, has taken control of everything concerning the climate discussion, and of the institutions designed to institutionalize supposedly collective action.
The bottom line is that nineteen years have been lost. In December 2015, it was projected that the world would reach 1.5 °C by March 2045. Reassessed estimates suggest the world risks breaching that benchmark by February 2034.
Remark_3: as always, I am willing to build network capabilities aimed at publishing papers with policy implications, participating in workshops, and/or finding paths toward setting up the structure of a good, well-funded research project.
Dear colleagues,
As you know, on 15.11.2023 ResearchGate announced its partnership with MDPI (https://www.researchgate.net/press-newsroom/researchgate-and-mdpi-partner-to-boost-the-visibility-of-open-access-journals-through-journal-home). MDPI is known for its very questionable practices (there are many discussions about MDPI here on RG; there is also a good analysis by Paolo Crosetto https://paolocrosetto.wordpress.com/2021/04/12/is-mdpi-a-predatory-publisher/ and a fresh preprint about general problems with scientific publishing involving this publisher https://arxiv.org/pdf/2309.15884.pdf), and, at least in my opinion, it should not be promoted publicly.
Such a partnership poses a danger to Good Scientific Practices (GSP) and legitimizes questionable approaches, especially those related to scientific publishing and peer review. Beyond all that, there is also an ethical issue that should not be ignored.
At the same time, RG was a relatively good platform for exchanging and discussing research. It was helpful for me for networking and other science-related activities. And it seems that there is no good alternative to it at the moment.
That is why I would like to know whether you plan to stay there, leave this platform, or take any other actions.
Thank you.
As a regular user of ResearchGate, I'm disappointed by their decision to favor MDPI journals over numerous society journals (https://twitter.com/ResearchGate/status/1724759715358351423). Like many others, I'm considering deleting my account unless this unwise decision is reconsidered. What are your thoughts on this matter?
My manuscript was accepted by an Elsevier journal on Sep. 26, 2018. The corrected proof has been available online since Oct. 5, 2018, but I haven't received the final version yet. At the same time, I see more recent publications that are already available as final versions. What could be the reasons?
Please recommend resources where one can search for dissertations using keywords. We need sites where you can filter by the type of scientific work (article, dissertation, etc.). Thank you!
I have a confusion in the results of some sanger sequence reactions for IBDV viruses.
The sequencing reactions either fail to give any reads or give nonspecific, poor-quality, short fragments, although the PCR products for those reactions are strongly positive and the sequenced samples were positive for both classic and vvIBDV by real-time PCR.
Could this failure be related to mixed infection in the samples or to the quasispecies phenomenon?
I urgently need to find the scientific causes of these results, supported by published scientific papers.
Thanks for your help
Dear researchers
I received an invitation from Universe Scientific Publishing Pte. Ltd. I checked the website: the journal is not indexed and is still at an early stage of creation. I searched online for whether the publisher is predatory, but there is no information. Do you have any suggestions?
Thank you
Teguh

What is a helpful tactic for evaluating the quality of your academic writing from the reader's perspective? And what are the main elements to assess while proofreading your final draft?
Exciting news for researchers and academics!
Leading academic publishers like #Elsevier and #Cambridge University Press have announced that researchers can use applications like #ChatGPT for academic writing, provided that the work is original.
This means that texts created using tools like ChatGPT and Bing can be used to accelerate the process of scientific and academic publishing, but such tools cannot be listed as authors or contributors of the publication. This is an important step forward in reconciling the role of global publishing and research with modern technological advancements. Learn more about this statement on the publishers' official websites, on their editorial policies and scientific publishing pages.
Link: https://lnkd.in/dXnctutf
Hello,
I am a long-time user of R, but I basically always do the same thing: generate an "ugly" table of descriptive statistics with summaryBy and do ANOVA, post-hoc tests, etc.
I recently discovered R Markdown and got really excited about its great potential for creating nice statistical reports and more.
I have attached a screenshot of a simplified version of the raw data I usually produce in my research and the type of table I eventually publish.
I searched the web for R code to produce the second table in my screenshot, but I did not find exactly what I was looking for.
Could someone on ResearchGate help me?
Thanks in advance!

With the volatility of the scientific publishing world these days, what should PhD graduates do when they discover that the hard-earned, meticulous scientific findings they published in a journal, for which they were awarded their doctorate and possibly a grant, and which were listed on the CVs that landed them a job, suddenly disappear overnight, with the publishing company nowhere to be found?
Research trends are in many cases impacted by the gate-keeper's attitude or approval of a prospective research paper. The majority of those gate-keepers are specialized chief editors, editors, and reviewing-scholars in their fields.
Is it justified that a research paper gets rejected based on the criteria of being/not being fashionable or popular in the field?
Would you go with the current in order to get through, or would you rather be concerned with having your say in the field? Suppose there is evidence for both the popular attitude and the one that is "not fashionable" nowadays.
I'm in the humanities, researching American poetry.
Thanks for your comments!
Hello researchers,
I'm interested in working on research about the language barrier as the primary reason for scientific isolation in global publishing. If you are interested in taking part in this research from the perspective of your native language, please contact me.
Please recommend this to interested colleagues.
Best regards
Hello,
Aside from Beall's List of Predatory Journals and Publishers, are there other lists of the same content? For example, World Scientific Publishing Company is not on the list of Beall, but how to know if this publisher is predatory?
Thanks.
Many of these serious journals charge authors for publishing (more than pirate journals do), and the acceptance process is unreasonably long and overly formalistic. I have found many very good papers (of high scientific quality) published in pirate journals. In some serious journals, the engineering approach and practical use are neglected, and the most important criterion for paper acceptance is the use of some new statistical method and/or model (recently, especially machine learning). Remember Klemeš and his papers: "Dilettantism in hydrology: Transition or destiny?"; "Political pressures in water resources management - do they influence predictions?"; etc.
With the advances in community review and Web3 on the horizon, I've been starting to wonder if the way in which traditional peer-review works is outdated. Have y'all found any systems out there that feel like the future of peer-review?
Many academicians and researchers think that the number of citations does not matter. But another school of thought relates the quality of a paper to its number of citations. What are some successful strategies for increasing the number of citations?
The Dolos list has started to work well, but I do not think this opposition to predatory publishing is enough. From what I have seen, there is no collective or institute of reference (consisting of researchers) that denounces this sector. It could be a collective whose objective would be to produce regular reports and publish statements about the sector. The collective aspect would give more weight to this opposition. For the moment, I have the impression that these are mostly isolated actions by researchers. What do you think of this idea?
I just have to warn you of one thing: a researcher who publicly opposes this sector must be prepared for unpleasant consequences. For my part, a few days after launching the Dolos list, I was already receiving threatening messages.
Best regards,
Alexandre.
(Edited)
World Scientific Publishing Co Pte Ltd is a Singapore-based publishing company that has been in the business since 1981. They publish a lot of edited books each year. How well accepted are those books in academia? Are they peer-reviewed?
We've all been in circumstances when we couldn't find the literature we were looking for. Other times, we found exactly what we wanted, but very late.
Thus, beginners in research are pushed to formulate clear and innovative titles. Keywords rescue us, but is that enough?
In these days of data mining, what tools can we use to structure our research paper titles so that they are search-engine optimized (SEO)?
How can we write technical titles that are both engaging and readable?
1. Google Trends
Track changes in word usage over time, within a region, etc.
2. Web-of-Science
Regular keyword literature search, to find jargon usage.
3. Semrush
Helps with Keyword Research, Competitive Research, PR, etc. It is a marketing tool, perhaps used by industrial researchers and product developers?
4. What am I missing? What are your tricks and tips?
Thank you!
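To make the idea concrete, here is a toy sketch of the kind of check an SEO-minded author might automate before consulting the tools above. Everything in it is illustrative: the keyword list, the length threshold, and the example title are invented for the sake of the demo, not drawn from any real SEO tool.

```python
# Toy heuristic for screening a draft title against target keywords.
# The keyword list and the length threshold are illustrative assumptions only.

def screen_title(title, keywords, max_words=15):
    """Report title length and which target keywords it contains."""
    words = title.split()
    lowered = title.lower()
    hits = [k for k in keywords if k.lower() in lowered]
    return {
        "word_count": len(words),
        "within_length": len(words) <= max_words,
        "keywords_found": hits,
        "keywords_missing": [k for k in keywords if k not in hits],
    }

# Hypothetical draft title and target keyword list.
report = screen_title(
    "Search Engine Optimization of Research Paper Titles",
    ["search engine", "research paper", "data mining"],
)
```

A script like this only checks presence and length; it says nothing about how searchers actually phrase queries, which is where tools such as Google Trends or Semrush come in.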
I have personally experienced that it is relatively tough to publish in a good journal in the social sciences, for example in fields such as finance, marketing, and so on. In comparison, I have seen that in the natural sciences the chances and frequency of publication are higher. What are the possible reasons for this?
I have been asked to submit a paper for a special issue of Genes (MDPI). The journal's impact factor is shown as 3.4, but I have seen conflicting articles about MDPI journals being predatory. Although the editor of the issue is a reputed person in the field, I am a bit confused about the journal in general. What are your thoughts?
I recently read a publication that was well done experimentally; the results and discussion were excellent, but the abstract contained claims that were not results of this study and even contradicted them. It was possible to list the sentences of the abstract next to the corresponding parts of the main text and the discussion and see the difference. The main author did not answer my question about it.
How is this possible? Can abstracts be changed after the review process and go to print without a check? Or do you think something like this may have been overlooked by the referees?
In Germany it is possible to send such findings to a DFG commission.
Dear community,
this seems to have been around since 2018 in the medical sciences, but I only stumbled upon it recently:
T&F are offering extremely fast review and publication times for a higher fee, and they pay reviewers for handing in reviews on time. How do you feel about this? Will it bias acceptance of papers and just be a new way to buy a publication? Or do you think this is the right incentive for reviewers and a way to recognize the importance of fast and constructive review?
I would be interested in your opinion.
Differentiating Science from Pseudoscience is becoming a challenge at so many levels these days. How can we separate the two and acknowledge a grey area in between?
Hi everyone,
I am about to define an experiment in which we want to investigate 10-20 de novo small proteins. We are mainly interested in affinity but also want to show that the proteins fold properly. For that we are thinking about using circular dichroism. I am having second thoughts, though, about whether this is the right method in the long run. When it comes to publishing, I have the gut feeling that reviewers might ask for a crystal structure of the protein or even of the complex. I am working on getting an impression myself by reading Nature and Science papers, but I would like to hear your advice and experience on the matter. What methods are best suited to give our research the credibility that might be expected in high-impact journals?
cheers
Martin
Does anyone have experience with Columbus Publishers?
trustworthy or predatory journals?
As a researcher, I suffer a lot from the dilemma of global research cooperation. Since I am from a poor African country, I have no research support and rely in my research on the laboratory capabilities available at my workplace, so I face great difficulties with scientific publishing, on which I waste a lot of time and effort.
Meanwhile, research cooperation produces scientific papers and innovations like an ant colony, at frequent intervals, because each member of the group has a small and specific task!
So far I have failed to find a serious research group that suits me and would like to work with me. What are the possible reasons?
Hi all,
I submitted a paper for publication in June 2017 and I am wondering how long people generally wait. I submitted to a relatively small journal (impact factor ~1) and waited over 8 months for my first response, while their home page says it generally takes 185 days (about 6 months). I emailed them a couple of times during the process to check, but they just said there was nothing they could do and they were waiting for the reviewer. I worked on the revisions promptly and returned them after a week. I was then told again that I would have to wait (this time about 2-3 weeks for a second revision). I got it back (2 months later) and it was conditionally accepted with more revisions requested. Again I worked on it promptly and resubmitted. Now it has been almost 3 months again. Previously they told me that revisions at this stage only take 2-3 weeks (and last time, after my requests, they had to switch out a reviewer because that reviewer took so long). Now I am not sure what to do; it feels like their deadlines keep getting pushed back, they say there is nothing they can do, and there has been no update since the day after I submitted it. Should I request new reviewers, or is that just the way it is? It has been over 16 months since I originally submitted the article.
I have heard both good and bad commentary about this scientific publisher. In my case, I feel the general perception is that this publisher is predatory. Can anyone share any experience (good or bad) with this publisher? Your comments can help me decide whether to publish with this group.
I have just published a book with a big international science publisher (CRC Press, a branch of Taylor and Francis). The multi-author edited book is nice and hopefully useful for many (https://www.researchgate.net/publication/321016401_Grasslands_of_the_world_diversity_management_and_conservation), but the experiences with the publisher were so disappointing that some co-authors and I decided to start a public discussion on writing scientific books in the age of greedy publishers.
Here are some key facts of our collaboration with CRC/Taylor and Francis:
· The communication with the publisher was very unreliable and inefficient: e.g. we received various requests multiple times, and the publisher "forgot" about previous written agreements.
· The typesetting, as the only service provided by the publisher, was very poor: about 90% of the changes made by the publisher introduced errors into previously correct text or tables, and it was very time-consuming for us to find all these errors and remove them again.
· Instead of paying the authors an honorarium for their work, the publisher forced us to pay for the colour figures in our articles.
· The publisher refused to give the authors a complimentary print copy of their book (only the editors got one).
· At first the publisher wanted to provide an electronic version of the chapter/book only to each corresponding author, not to all authors, and only after serious negotiations did they agree to provide e-books to all authors. We assumed that these would be functional PDFs, but instead the authors received the books in a very odd e-book format with an ugly and barely readable layout (e.g. all text in bold), allowing neither proper printing nor sharing parts of the content (e.g. single pages or figures) with others. This means that the authors did not receive any printed or electronic copy that exactly corresponds to the published version of their own work.
I am extremely frustrated about the behaviour of CRC/Taylor and Francis and consider the last point to be on the edge of unethical. My feeling is that CRC might simply reflect the strategy of most international science publishers: maximise profit by pressing money out of both authors and readers/libraries, while minimising the service they provide. On the other hand, my gut feeling tells me that nowadays, with cheap print-on-demand technology and the possibility of distributing printed or open-access e-books without a big marketing/distribution machinery, other solutions should be possible.
Therefore, I would like to ask you two questions:
· Have you had similar experiences with other science publishers, or are they better or even worse?
· Do you see ways in which those of us who would like to continue writing nice and useful books can do so without sacrificing ourselves to the profit-maximisation strategies of the big international science publishers?
Looking forward to your responses and hoping for a lively debate,
Prof. Dr. Jürgen Dengler
(ZHAW, Wädenswil, Switzerland)
Colleagues,
do you know some UAV-dedicated Special Issues that are open for submission now? Both Magazines and Journals SIs will be highly appreciated! No discrimination on the publishers (IEEE, Frontiers, MDPI, Elsevier, River....)!
Thank you :)
PS: I think following this discussion will be useful. both to find Special Issues and/or to advertise them
Despite certain disadvantages, peer review is generally accepted as a quality control for manuscripts submitted to scientific journals. The higher the rank of the journal, the more often manuscripts are rejected by the editor for various reasons (lack of novelty, routine work, low technical quality, etc.). Getting your manuscript back with staggering reviewer comments is a rather frustrating experience. What are your personal tips and tricks to avoid rejection?
The Delhi, Tamil Nadu, and Kerala ophthalmological societies have their own journals, which are unfortunately not indexed in PubMed. What is your take on publishing work in these journals as a first choice?
Recently, one of my papers was published five years after its final acceptance by an SCI-indexed journal, and that too after a lot of reminders. Can anybody suggest how an upper time limit for the publication of manuscripts could be fixed, at least in peer-reviewed journals?
What is the future of scientific publishing in light of the trend of most journals to open access?
And will this affect the quality of scientific publications, given that open-access journals are still viewed as being of lower quality and less prestigious than non-open ones?
I received an unsolicited email from Scientia Global and I can't tell if they are a predatory publisher of scientific journal articles or news articles or if they are legit.
"Dear Dr. April Robin Martinig,
I hope you do not mind me emailing you directly, I thought it would be the easiest way to make first contact. If you have time for a short discussion I was hoping to speak with you about your research and our interest to feature your work in an upcoming issue of our science communication publication, Scientia.
I will run you through this in more detail when we talk. But to give you a very quick insight into Scientia and the style in which we publish, I have attached a few example articles from research groups we have recently worked with. I have attached these as HTML files to reduce the size, but I can send PDF versions if you would prefer.
You may also view one of our recent full publications here: https://www.scientia.global/scientia-issue-132/
Please let me know if you might have 10 minutes for a short phone call and advise when would be a good time and day for you to discuss further?
I look forward to talking soon.
Kind regards,
Paris Allen"
Dear RG colleagues,
Let us discuss on what basis the author sequence in multi-authored publications is arranged. Does it depend only on the weight of each author's contribution? How can the weight of each author's contribution be estimated? Are there other criteria that determine this arrangement? Thank you very much for sharing your opinions with us.
Kind regards
As all of us are familiar with the different journal ranking systems and their requirements, in many cases we encounter various fees and charges for publishing our research. Often the submission and review alone cost US$50-250, which is non-refundable even though the paper may be rejected by the editors without being sent for review. Others charge publication fees of US$500-1,000 (with extra fees for colour graphs, maps, etc., or for appeals against a chief editor's decision). For good English, they offer links to grammar-review services: US$100-200. Besides all of this, they impose embargoes of 12-36 months and ask US$600-2,500 for open access. I think these fees are sometimes unreasonable, so it is hard not to see the business factor behind them.
What are some of the difficulties or disadvantages, if any, of publishing in high impact journals?
Thanks in advance for your participation!
How can we distinguish between a predatory journal and a genuine journal when publishing our research work? Every now and then we get emails from publishers asking us to submit our work to their journals...
Dear Colleagues, I hope someone can provide some answer :
I was recently notified by ResearchGate that Elsevier's editorial office had asked them to take down one of the scientific articles among my research items, due to violation of Elsevier's copyright.
This article was published in the Elsevier journal "Nano Energy", and I appear as the first author.
Is there a way to keep such articles among your RG items without infringing Elsevier's copyright?
Can I try to upload it again, this time in "private" mode (not open sharing, but available on request)?
Or is it better to leave the matter alone, meaning that none of Elsevier's articles can be shared freely on ResearchGate?
Thank You! Best Regards !
There are several journals with varying impact factors, and still we find journals with no impact factor at all. I want to know whether the impact or importance of a researcher diminishes in the eyes of the scientific community when he or she publishes a paper in a journal with a low impact factor or no impact factor.
I need a co-author for a Scopus article to participate in ICSF 2021, Innovative Approaches for Solving Environmental Issues Workshop (IASEI-WS'2021), Kyiv. The article has already been written and is ready for publication. I am the main author of this article. It is an overview article about means of remote monitoring of air quality and the possibility of their use for operational monitoring in Ukraine. I place a discreet emphasis on the use of UAVs.
link to the conference website: https://icsf.ccjournals.eu/2021/index2.html
Requirements for a co-author: student, PhD student, or any other researcher who works in a scientific institution at a given time. The institution should not be located in Ukraine or in the Russian Federation (such co-authors already exist).
If you are interested, please contact me as soon as possible.
Deadline of this offer - 28.12.2020
Anastasiia Turevych,
e-mail: ognetyr@gmail.com
Thank you!
We in the scientific community often hear about, and are aware of, unworthy individuals copying and reproducing results without due recognition of the authors' rights. How can we instil responsibility and raise awareness among those who may commit such unlawful acts, presumably without knowing?
The peer review system has been the cornerstone of scientific publishing for centuries.
I am looking for research papers copied (in full or in part) by other papers, in order to identify a "percentage of plagiarism". The authors can be different. Can you help me find any such papers?
I just found Viper but I'm having some problems with it. Is there any other plagiarism software online?
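For a quick, rough estimate of a "percentage of plagiarism" between two texts, Python's standard-library difflib can be used. This is only an illustrative sketch, not a substitute for dedicated plagiarism software: it measures shared word sequences and ignores paraphrasing, translation, and heavy word reordering. The example texts are made up.

```python
import difflib

def overlap_percentage(text_a: str, text_b: str) -> float:
    """Rough percentage of shared word sequences between two texts,
    based on difflib's sequence-matching ratio (0-100)."""
    matcher = difflib.SequenceMatcher(None, text_a.split(), text_b.split())
    return round(matcher.ratio() * 100, 1)

original = "the quick brown fox jumps over the lazy dog"
suspect = "the quick brown fox leaps over a sleeping dog"
print(overlap_percentage(original, suspect))
```

For real detection across paper collections, tools like Turnitin, iThenticate, or Viper add large reference corpora and more robust matching; the sketch above only shows the basic idea behind the reported percentage.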
Hi,
I'm working with a set of soil analyses obtained from an external laboratory.
Studying the results, I am highly confident that one of the analyses gave incorrect results because the values are extremely unlikely (in total disagreement with what is normally naturally occurring).
Besides, I have conducted additional analyses to triple-check this analysis.
The results I have obtained contradict, as I expected, the anomalous data.
The problem is, that the method I used is not the same as the initial method (unavailable at my lab), but is supposed to measure the same variable.
Now that it is time to write a research article, what would you do to overcome this problem?
Should I explain that for this particular analysis, results were abnormal and were not considered further?
Should it be done early in the results section, or later in the discussion section?
How have you dealt with unexpected/erroneous data with your research, when you cannot repeat the same analysis?
Will a journal accept to publish results which include one bad apple, while the rest of the basket is fine?
I know "Reject & Resubmit" gives another chance to resubmit after major revision. But could the corresponding author treat it as a simple rejection and submit the article to another journal? Should the corresponding author ask the previous journal for permission?
In some journal papers I am included in the acknowledgements section, because I helped the authors take some measurements. How should such contributions be mentioned in a CV?
Or, put another way, are all the new journals that are created really necessary?
Here you are our contribution to the debate:
Urbano, C.; Rodrigues, R. S.; Somoza-Fernández, M.; Rodríguez-Gairín, J. M. (2020). "Why are new journals created?". Profesional de la Información, 29(4), 1-19.
Are you a publisher or an editor who has recently participated in the launch of a new journal? What is your opinion?
Frontiers in Psychology has a decent Impact Factor and is one of the highly cited journals in the field, but the journal is not listed in the ABS journal quality guidelines. I was wondering if it's a good idea for someone working in a business school to consider the journal as a potential outlet for organizational psychology related topics.
I am curious to know why a methodology will get acceptance in one journal for a given topic and get rejected in another journal for another topic.
Take Artificial Neural Networks (ANN) application in Civil Engineering problems for an example. You find Civil Engineering journals accepting or rejecting papers with ANN methodology.
How can one overcome these barriers?
As a reviewer, in your view what are the criteria you use to accept or reject a paper with machine learning applications?
If a machine learning method that has not been previously used is applied to a model (a common topic ), how do you as a reviewer respond?
I am graduating from business school as part of my med school program in two weeks. I'm working on a paper with a team that will be ready to submit for publication this week. Most likely, I will actually have my degree by the time the manuscript is in front of a reviewer, and definitely by the time it's published. Is it permissible for me to include my MBA in the author's section since I will have it before publication, or am I required to leave it off because I do not officially have the degree at the time of submission?
The highest impact factor journals are often criticized for rejecting too many too fast, and often too unfairly, a high proportion of the manuscripts they receive. Since they receive a larger amount of manuscripts relative to lower IF Journals, they are considered (assumed?) able to select the best quality research in their field. But, are they really publishing better science than lower impact journals? Many excellent scientists across the globe are unable to publish in High IF journals because they are unable to afford the publication fees. One may think that in many (most?) cases the quality of science high IF journals publish is not necessarily better than lower IF journals (note that High vs Low IF is a relative comparison, there is not a line/value separating both). What do you think? For instance, are the papers you have published in the highest IF Journal, your best quality papers? Do you see a positive relationship between IF and the quality of the science being published?
I just received a request for a review from "Internal Medicine Journal". I had a surprise recently with Journal of rare disease research and treatment for whom I wrote an "invited commentary" that appeared to be not so free (over 1000$), and I wonder if this one could also be a "predatory publisher". Has anyone information about this journal? Thanks
I was and am interested in this topic because I was wondering how much it would actually cost (annually) to run a technical journal for environment, including all costs starting with office space (if one sticks to that tradition) paying staff and expenses (servers etc.). I was wondering if it would be possible to run a small technical journal for environment without publication costs on alterative funding (without excessive need of volunteers). I thank you for your helps and cooperation that will be given to us!
Lets say you know very well the basics about writing, contents, formatting and style. Right before submission (your research is complete), do you focus only on the instructions for authors of the selected journal? Do you try to make the best/catchy title ever? Select a specific editor? do you contact the editor/journal before you submit? In short, Do you have a formula/method you apply? would you share it?
We know that rejection of research papers is very common in academic publishing, but what really bothers authors is having their papers rejected without the editors stating the reasons, particularly when the paper matches the scope of the journal and the writing is academically acceptable, yet it is still somehow not enough to get accepted. The decision letter only informs the authors of the decision, without any further explanation of why it was taken. Many editors do this; it has happened to me a couple of times. Why do editors do this? Why don't they care to explain the reasons for rejection? It is completely unfair to authors, especially if the submitted paper complies with the journal's requirements. It is very frustrating. What should authors do in this case?
Suppose a paper has been reviewed by three different reviewers.
Two reviewers suggested accepting the article, and one suggested rejecting it.
In such situations, what does the editor do?
In the medical disciplines, the first and last authors are the ones who contributed the most, and the middle authors the least (in a U-shaped curve). In contrast, in the engineering disciplines, the contribution level runs from the author who contributed the most to the one who contributed the least: first (most contribution), second, third, fourth, ..., last author (least contribution).
If an author is working in both medical and Engineering disciplines, how should he/she fix the author order in publications?
As the scientific community does not follow a uniform standard to express authorship, the readers have no clue about the contributions made by each co-author. Any thoughts?
Does that imply different levels in the process of admission to JCR?
Emerging Sources Citation Index (Is the waiting list to be evaluated for JCR?): http://wokinfo.com/media/pdf/S024651_Flyer.pdf
Does it mean they found it interesting at first?
I'm a new researcher. All inputs would be appreciated. Thank you very much.
There are times when I think that articles are rejected or comments are too mean based on ethnicity and just being female.
Based on your experience with scientific publishing in the field of marketing, which scientific outlets would you recommend, taking into consideration copyright conditions, cost, time, and ranking?
Dear one and all, kindly give details of the similarities and differences between an article, communication, note, report, full paper, featured article, perspective, review, and tutorial. How do we select the suitable one? What are the criteria? Most of us select on the basis of the length of our findings and a summary of present and previous work. This discussion may seem very simple, but it is always better to get a clear idea of each type; it may help research beginners.
I have a short article, which I would like to submit to a broad ecological journal and I'd value a few suggestions.
Dear fellows,
I would like to know your opinion, as authors of academic papers, on your preferred peer-review type(s). Different journals implement different policies, from completely open to double-blind in various shades.
Please feel free to discuss your preferences in the comments!
Here is a starting list of existing and more creative options:
- Single-blind review. A classic. The name of the reviewers is not known to the authors, but the reviewers know the names of the authors. The reviewers may choose to disclose their identity during the peer review process or upon publication.
- Double-blind review. Increasingly popular. The name of the reviewers is not known to the authors, and the reviewers do not know the names of the authors. The identity of the reviewers might be disclosed upon publication.
- Open review. The name of the reviewers is known to the authors at any stage of the peer review, and it's published with the paper afterwards.
- Fully open (all-in). The name of the reviewers is known at any stage of the peer review and afterwards. In addition, the reviews are published online with the accepted paper.
- Open discussion. The manuscript is immediately published as a preprint. Reviewers (which may choose to remain anonymous or not) are appointed by an editor. Their reviews appear online as soon as they are submitted. The authors post their response online together with the revised manuscript, and so on until the editor makes a final decision.
- Very open discussion. Like the open discussion, but also members of the community can post (signed) online comments to the manuscript, and the authors may reply and account for them in the revised manuscript.
- Triple-blind. The review process is not public. The authors don't know the reviewers, the reviewers don't know the authors, and even the handling editor doesn't know the authors.
- Quadruple-blind (poker). The authors don't know the reviewers, the reviewers don't know the authors, the editor doesn't know the authors, the editor doesn't know the reviewers (which are chosen from a pool of eligible reviewers through keywords).
- Quintuple-blind. Like the quadruple blind, but the authors don't know the name of the handling editor.
- Sextuple-blind. Like the quadruple blind, but even the reviewers don't know the name of the handling editor.
- Hardcore-blind. The authors submit their manuscript to the publisher, without the possibility of indicating the target journal. After anonymous peer review, the publisher suggests the suitable journal based on the reviewers scores. The authors may agree or appeal.
Other variants/suggestions are welcome!
I am going to have a scientific book with illustrations printed. The illustrations are small screenshots grabbed from historical books (e.g. 18th-century ones), which are freely available online in digital libraries across the world. Can I use such illustrations (small-area images cropped from single pages) in my book? The book will be sold by a scientific publishing house. Any experience regarding the legal aspects of this would be welcome. Do I need to obtain a licence or permission? This is neither a use of an entire scan made by someone else, nor a re-edition of such a work.
Some researchers acknowledge anonymous reviewers for reviewing their manuscripts. Do you think that reviewers know that they have been acknowledged? Do you think that reviewers care to read these acknowledgements? Any benefits to reviewers if they are acknowledged when their identities are not known/revealed?
Studies have shown that as many as 50% of submissions are declined directly by editors after being submitted. If the paper receives a “yay” instead of a “nay,” the journal sends it to reviewers. How do journals select competent reviewers?
Common sense says that more experience and a higher rank translate to better reviewing skills. However, a PLOS Medicine study in 2007 showed no such relationship. The authors examined 2,856 reviews by 308 reviewers for Annals of Emergency Medicine, a revered journal that for over 15 years has rated the quality of each review using a numerical scoring system. The results showed that experience, academic rank, and formal training in epidemiology or statistics did not significantly predict subsequent performance of higher-quality reviews. It also suggested that, in general, younger reviewers submitted stronger reviews.
So what? When presented the opportunity, any physician can and would produce a scrupulous review of a manuscript — right? Wrong.
Flashback to 1998, when Annals of Emergency Medicine cleverly put together a fictitious manuscript riddled with errors and distributed it to 203 reviewers for evaluation. The errors were divided into major and minor categories. The major errors included such blunders as faulty or plainly unscientific methods, as well as blatantly erroneous data analyses. Minor errors consisted of failure to observe or report negative effects on study participants, incorrect statistical analysis, and fabricated references — just to mention a few. According to the authors, the majority of peer reviewers failed to identify two-thirds of the major errors in the manuscript. Forty-one percent of reviewers indicated that the manuscript should be accepted for publication.
What about consistency? In 1982, two authors took twelve papers that had been published by prestigious psychology journals within the previous three years and resubmitted them to the respective journals. The names of the authors for the resubmitted papers, and the names of their affiliations, were all changed to fictitious ones. Three manuscripts were recognized as being duplicates. Of the nine remaining papers, eight were rejected for “methodological inconsistencies,” not for lack of originality. One paper was accepted again.
Last week, I received an email from a well-respected medical journal. The editor wanted my help reviewing a manuscript that was being considered for publication. Noticing the request was addressed to “Dr. Spencer,” I shot back a quick reply saying there’d been a mistake. I’m not a doctor; I’m a medical student.
Hours later, I got this response:
Thank you for your email. We would very much appreciate your review of this manuscript. Your degree information has been corrected.
The peer review process clearly has flaws. It’s no wonder so many publications are retracted every year, or that each issue of a journal includes several corrections of previously published articles. Without universal standards, manuscript reviews remain subjective, imperfect, and inconsistent at best. The time has come to re-examine the quality of this system.
Meanwhile, those who rely on medical journals for practice guidelines should be educated on the flawed nature of this system and should be encouraged to judge publications on their merit rather than the apparent prestige of a journal.
Now, if you’ll please excuse me, I have a manuscript to review.
Robert Spencer is a medical student.
I am interested in exploring whether criteria for academic success for professors are culture-sensitive. Is it the academic rank, publications, or teaching excellence?
Hi all,
I have found my experience with Elsevier to be increasingly frustrating over the years, especially with the copy-editing. Once a paper gets accepted (assuming it is in TeX format with perfect formatting), a month can easily pass by until the corrected proofs are actually available. The publisher seems to be careless and irresponsible about the typesetting and copy-editing (which is the only job they actually do in addition to selling our work!), e.g. in my recent Corrosion Science paper I had to submit a long list of comments when they simply messed up the paper layout completely and inserted incomprehensible symbols ("-->") everywhere in the text.
I am wondering what is your experience with other publishers, e.g. AIP, IOP, or Springer? Now I am deliberately trying to choose suitable journals that are not published by Elsevier for my next publication, and so far I've found one belonging to AIP and another to IOP.
Thanks
N.B.: There's almost a monopoly of the above-mentioned publisher in certain scientific fields, which is alarming to me!
Hey,
please, can anyone explain to me who the major scientific publishers worldwide are, how many there are, and name them?
On what basis are they rated?
Also, what is the relation between scientific publishers and the databases (Scopus and ISI)?
Previously, in my country and several others, the first author was considered more important than the corresponding author. Recently the situation has changed.
I believe that the first author usually carries out most of the practical part of the research and should be the corresponding author too; co-authors do part of the work.
What is the difference between the first and the corresponding author? Can PhD students be corresponding authors?
Dear all respected researchers; kindly let me know your opinion.
What do you think is the future of scientific publishing? Do you think all journals will move to open access, or do you think open-access journals will gradually disappear? What is the future of this area?
In the era of open-access publication and the emergence of a huge number of journals that barely follow publishing ethics (no peer review; pay and publish), it becomes necessary that all journals be screened by an authority on the basis of their editorial/reviewer boards, scientific content, and other criteria.
I have noted some online journals publishing >150 papers quarterly.
I understand that, today, scientific publishing must follow specific strategies in order to contribute effectively to science. What are the main tactics that should be followed?
(If you liked this question, please recommend it to extend the scope of this discussion.)