Federal Agency for Cartography and Geodesy
Started 20th Mar, 2023
Quality vs. quantity of research articles: which is the better way to establish an eminent scientist?
I have seen some researchers who publish 4 to 5 articles per year, but only in high-impact-factor journals (IF > 7.5), as well as many others who publish 20 to 30 articles per year, including Q1, Q2, Q3 and Q4 journals as well as book chapters.
However, I would like to know which metrics a university uses to recognise a scientist as qualified to receive grants and to become a professor.
Responses are welcome.
All replies (3)
Hello Shuraik Kader,
Further to Dr Genick Bar-Meir's remark that "... no one can judge quality ...", that seems to be true. "True" quality seems best recognised only by people from your own field; they are the only ones who fully understand the worth of your work.
Having said that, appointments and funding decisions are often made by people who do NOT know or understand the field. In that case, quantity is more helpful for impressing them.
Similar questions and discussions
Old, obsolete knowledge refers to any publication older than ten years, whether or not it is core knowledge in its area.
- Elias Mjaika Ndifon
I always wonder what will happen to all these high-impact articles in ten years. Will they become obsolete? I can neither agree nor disagree.
Why is self-publishing becoming the new normal in academia?
- Shaukat Mazari
Liu et al. find that 24% of editors of Elsevier journals self-publish at least 10% of their papers in the journals they edit; 12% of editors self-publish at least 20% of their papers; and 3% self-publish one out of every two papers.
Liu, F., Holme, P., Chiesa, M., AlShebli, B. & Rahwan, T. Nature Human Behaviour, https://doi.org/10.1038/s41562-022-01498-1 (2023).
Can I request the names of the referees?
- Barış Kurt
I prepared a 23-page article on parameterization and submitted it to a journal regarded as "good" in its field. After holding my article for 4 months, the journal suggested a revision. Although the proposed revision appeared to be "major", it was actually "minor", because it stemmed from the reviewers' lack of knowledge of the subject. The reviewers demanded impossible changes on a topic they did not know (for example, they asked why I wrote my program in Python rather than C++, and why I did not use an FTIR calculation in the parameterization), and neither of them had any command of the subject of parameterization.
I submitted my explanations, and 6 months later I received a rejection for the absurd reason that I "did not respond to the referees' questions". In fact I had responded with 17 pages, and the email they sent after my responses had nothing to do with those responses. These two referees, who wanted to present themselves as proficient in the subject, ignored my responses and played various word games. For example, they claimed that my program would not work without the Antechamber program used in the molecular dynamics procedure, and asked why I did not publish it in the AMBER package, even though that step had nothing to do with my program; it was already part of the molecular dynamics procedure. They also claimed that the VFFDT program would number atoms automatically; I copied the relevant part from the manual, explained that it cannot, and challenged them to show how it is done if it can. They never responded to that either. The other referee first said that the formula I took from the AMBER Manual was wrong (not incomplete or imprecise, but wrong), and then, once he understood that it was not wrong, used the excuse "why did you write the formula in a way that will confuse the reader" in the reason for rejection. You will appreciate that I had to restrain myself from swearing. And so I was rejected.
I was very surprised that a journal with such a good impact factor appointed two referees who really did not know the subject yet insisted on pretending to understand it. The journal is well known in its field, and we corresponded occasionally with the editor-in-chief during the review process; he is in fact quite proficient in the subject and would never have made the comments these two referees made. Whether through bad luck or the incompetence of the field editor, I lost 6 months. I therefore want to learn the names of these two reviewers and list them as "opposing referees" in my next submissions. Do you think doing this would create an ethical problem? Since this happened to me, I cannot make an unbiased judgement, so I wanted it discussed here before sending the email: what would you do in my place? Thank you.
How can I respond to the situation described below?
- Jerrold H. Zar
I often receive ResearchGate e-mail notifications that someone has requested one of my publications on file with ResearchGate, and I have responded to the requester.
However, when I received the most recent such e-mail, I clicked "View request" but could not see how to send the stored document to the requester. What must I do?
Scientific scamming and bias from journals and/or research groups
- Hussein Muhammed
How can this be avoided? According to my corresponding author, I have an accepted manuscript in Applied Geophysics, a Springer journal (https://www.springer.com/journal/11770). However, I discovered that he withdrew the paper from the journal for personal reasons, so it was not published in the previously announced issue; I blame both the editorial board and the corresponding author for this. I am asking researchers who have faced such a serious issue before to give their sincere opinion, and I will listen to all criticisms and follow your guidance.
Replacing The Hirsch Index
- Ohad Manor
I find the idea of the H-index problematic, as increasing it becomes a motivation in itself. It is problematic for another reason as well: I believe it indirectly reflects, to some extent, the magnitude and depth of a researcher's networking and collaborations.
In other words, researchers can raise this index simply by writing papers with peers, in small or large groups, without actually making a discovery or being truly innovative. It does not filter out citations from peers at the same institution, for example.
I would like to replace it (for physics only) with an index based on a single criterion: how many testable predictions a physicist has made, and how many of them turned out to be roughly correct within the margin of error. That is the only thing that counts, the only measure by which a theoretical physicist's impact should be judged, and it should take only integer values, zero or more.
Physicists who made wrong predictions should have the number of wrong predictions subtracted. Overall, I think this would better clarify who is at the top of the field than the current Hirsch method.
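For reference, the Hirsch index that this proposal would replace is easy to state: h is the largest number such that the researcher has h papers with at least h citations each. A minimal sketch in Python (the citation counts below are made up purely for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical citation counts: 4 papers have >= 4 citations, so h = 4.
print(h_index([50, 18, 7, 5, 2, 1]))
```

As the critique above notes, nothing in this computation distinguishes self-citations, institutional citations, or large-collaboration papers from independent recognition; every citation counts equally.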
What to do with fake/faulty citations?
- Siyaves Azeri
Recently, I came across an article that is also available on Researchgate claiming to cite an article of mine. It attributes to me (particularly in the allegedly cited article) this sentence:
"According to Siyaves Azeri (2013), the people who are the most deprived and poverty-stricken have a desperate hungry rebellion." with Azeri 2013 being "Conceptual Cognitive Organs: Toward an Historical-materialist Theory of Scientific Knowledge" which has been published in Philosophia.
The citing article is
The problem is that there is no such sentence in my article; the "cited" article deals specifically with problems in the general philosophy of science and the role of concepts in scientific theories.
I know there are no sanctions against such "abuses" or "misuses", but I am still quite disturbed by the fact that I am referenced for something I have not said, and by the ignorance or disregard of the referees and/or editors (if there truly are any) of the journal that published such a piece.
Maybe outlets such as ResearchGate should devise ways to filter out such articles.
What do you suggest?
What are the sustainable practices available within the construction industry?
- Shuraik Kader
The construction industry is a major contributor to global carbon emissions and environmental degradation. In recent years, there has been a growing interest in adopting sustainable practices in the built environment. Based on recent research studies, what are some of the most promising sustainable practices being implemented in the construction industry?
The poor quality of PDF document displays on ResearchGate
- Philip Mulholland
Dear Fellow Researchers
I notice that PDF text documents I upload to ResearchGate as preprints are poorly imaged and not easy to read. Is anyone else experiencing this, and do you have any best-practice tips for avoiding this issue?