Questions related to Science 2.0 and Open Access
For example, for a large lab with a number of research projects: a document management system that would work in a collaborative environment and would also include connecting figures to the actual data (numbers and images), as well as standard tools for creating manuscript templates.
ResearchGate is the leading SNS for researchers in the world. Meanwhile, there are several similar websites, such as ResearcherID and Academia. Some people argue that ResearchGate is not doing anything fundamentally different from the others; researchers have long been present on LinkedIn and Facebook as well. In China, we have a similar website, Researchmate, but its active users are quite limited. Therefore, I have 3 questions:
1. Why do you use ResearchGate?
2. What real benefits does ResearchGate bring to you?
3. What barriers do you encounter when using ResearchGate?
Beall's website scholarly-oa.com no longer hosts Beall's list of predatory journals and publishers.
I have recently found a website, https://predatoryjournals.com/, which claims to build on this list and expand it (see https://predatoryjournals.com/about/ ).
What do you think of it?
Update [August 1, 2019]: The question was originally posted on December 26, 2017, but the site in question now appears dormant and has not been updated since 2017, which makes the question somewhat moot.
Let’s imagine that at the beginning of the coronavirus pandemic we locked all the world’s leading scientists from a particular field in one room for a few months to make sure they were safe. Since they have a lot of common interests and many of them already know each other, they started discussing their research and the state of the field, which resulted in an intensive exchange of knowledge, the explication of commonly shared assumptions and long discussions about the most controversial topics. By the end of the quarantine they had written one textbook for their students in which only the most important knowledge of the field is aggregated and structured: the axioms one has to accept to be able to work in the field, and the open questions, each followed by a list of all possible answers. Of course, this textbook should be constantly updated, like a Wikipedia of this discipline, so the scientists decided to publish it online as a website. Moreover, every chapter of this book ends with a list of relevant literature, and everyone promised to add their new papers there, if they are relevant to the topic, so that other people do not have to search for new publications across multiple journals – and still miss some of them.
What disappointing news! The quarantine has been prolonged by several weeks. The scientists have already discussed everything they wanted. But they can use this additional time to plan further research! Since they have already decided what is clear in the field and what has been done, no one will plan uninformative and unnecessary experiments that did not work out in other labs. Instead, the scientists can focus on the most relevant issues and – something really new! – they can distribute tasks to make sure that all current problems are covered and every lab gets the tasks which fit best with its capacities and competences, even though these labs are sometimes located on opposite sides of the Earth.
After the lockdown the scientists continued their work in their labs. They noticed that their students now understand the tasks and new articles much better – because they have already read the online textbook of the field and have practically the best possible level of knowledge of the topic. The same applies to all new students and researchers coming to the field. New labs don’t have to search for their identity or position themselves loudly – they just take on one of the important issues to which not enough resources have been allocated and immediately become significant actors. Communication between different labs became easier, since they have a shared background. Projects became much more ambitious, because the tasks are distributed and coordinated across labs, so they are no longer limited to one or two labs. Projects also run more smoothly, because related labs constantly share their experience in solving technical and methodological problems. Finally, the field progresses faster, since all relevant labs are focused on common goals and issues instead of promoting their own agendas.
How did we spend our quarantine?
Why we have what we have
The described process consists of two steps: (1) explication of a common background and (2) formulation of a research strategy. The latter is the more important here, but the former facilitates the process. This is nothing radically new: many elements of these two sub-processes are already present in research communication. There are reviews, there is collaboration between labs leading to the development of a common background on some issue, and there are multiple textbooks in each discipline. Many authors formulate questions for future research in their papers, and some of them even do it for the whole field. However, strategic planning is often not systematic, is not reflected on enough and does not happen at a level as global as it could be nowadays.
Why is our common background fragmented or distorted? Science is a special type of activity, because its goal is to professionally generate new knowledge. This means that the scientific discourse should be a function of the knowledge acquisition process itself: there should be more texts about topics that are more important and fewer texts about topics that are less important; more texts about problematic issues where controversial opinions exist and fewer texts about issues where everyone agrees and there is nothing to discuss. This (ideal) discourse can be called undistorted – it could be like this if there were no other factors influencing scientific communication. Some of these factors are:
- Noise and randomness in scientific communication: We miss some important papers in the flow of literature, which has become extremely dense. It is often a matter of luck whether or not we attend a particular conference and get acquainted with a highly relevant person there.
- Barriers in scientific communication: Not all results from all labs are published, especially negative ones – so we don’t even have a chance to learn about these unsuccessful experiments from journal publications. We might have no access to some publications because our institution did not pay for them.
- Limited capacities at the individual level: We have several projects at once, including student projects, plus teaching and administrative work, so we simply cannot read all the literature from every field for every single project in order to evaluate how important or innovative some of the ideas are, unless there is a good recent review on exactly this issue.
- Institutional pressure: We need to carry out projects and publish papers to apply for funding, obtain a degree or get a position. The time to familiarize ourselves with the literature is limited, the circle of minimally necessary literature is not determined, and as a result we are often forced to start a project with wrong assumptions or an incomplete picture of the field in mind, and only learn in the process how it should have been done (hopefully not too late) – who among us has never been in this situation? Another downside of institutional pressure is that we may take a safer path and test some trivial hypothesis with a slight variation, in order to increase the probability of getting publishable positive results, while important but risky problems remain untouched.
- Varying terminology: The same terms can mean different things, while the same things can be called differently, which sometimes makes keywords misleading as a search tool. Again, it takes time to resolve all these complications – precious time which we do not always have.
- Fashion, hype and personality of other researchers: Some topics receive more attention than they deserve, because they resonate with current political or cultural circumstances or because they are promoted by charismatic researchers, while some more important issues may remain overshadowed because they don’t sound that cool or because people who generated these ideas are not able to communicate them loudly enough.
- Competitive and market-like culture: Scientific work is largely structured as a market, where everyone has to advertise themselves, sell their ideas and collect publications, citations and impact, which is then converted into financial and symbolic benefits. Marketing promotes distinction, not cooperation: everyone has to make a unique offer and cultivate their difference from others, not their similarity – a new effect, a new abbreviation, a catchy title for a paper or conference, a description of how the project is unique and different from all other projects in the field, etc.
All of these factors have been recognized and discussed before. What is new here is the idea that they are all parts of one and the same problem: they all lead to distortion and fragmentation of our knowledge structures and scientific communication. We need to reflect on the consequences and disadvantages they bring and compensate for them in our work. That is, we need to lock all leading scientists in one room and let them formulate a common background and develop a research strategy for the field – and not for themselves as individuals or small teams.
Luckily, we have new technologies to help us with this. The internet does not only intensify the flows of information in our lives. It also makes us less dependent on local context and gives us an opportunity to build virtual communities according to thematic fit and, most importantly, to coordinate our work as a community at a global level. It can help us to better structure the discourse, to distribute tasks, to reduce the role of personality and focus on explicit rational arguments instead and, finally, to learn to cooperate rather than to compete.
Building virtual scientific communities
Knowledge is social in nature – we gain and shape it together, share it with each other and evaluate it collectively. Knowledge is embedded in social structures, and yet we often have the illusion that we possess it alone. In some sense our individual mind is not individual at all, since almost everything we know is just a compilation of things that other people know too. Although frightening to an individualistic culture, this idea highlights our interconnections with others – the basis of communication and social life.
Scientific knowledge, like every other kind of knowledge, is embedded in scientific social structures. These structures are, for example, faculties and departments, labs, professional organizations or long-term collaborations between individual scientists or labs. Journals and regular conferences build a virtual or real community around themselves too. While many virtual communities are built just for informal communication on interesting topics, such as communities of cat lovers or fans of a band on Facebook, a scientific community has the goal of generating new knowledge. This means that any distortion or incompleteness of the picture results in wasted resources – testing wrong hypotheses, losing time reading unrelated literature or missing important publications on the topic. That is why it is so important that information flows are organized in the best possible way, allowing the minimally required knowledge structure to be achieved in every individual mind in that community and thus ensuring effective communication and coordination. Whether you learn about a new important paper or not should not be a matter of luck. Neither can we rely on the creative chaos which results in new ideas: we all know astonishing examples of the same discoveries being made in science simultaneously and independently of each other – not because there is some mystery about it, but simply because people who share the same background and work on the same problem come to the same solutions. These people could work together and probably get their results faster. So, we need communities with structured knowledge and effective communication and coordination.
But if we look at the existing communities, we will notice that they are still largely organized according to the physical presence of researchers (labs, departments, faculties, regional or macro-regional organizations) or static topics (research field, subject of research, particular approach). This makes sense because, on the one hand, resources for research are distributed in the physical world, still mostly at the national or regional level; on the other hand, the model of an object is something very natural and basic which we understand intuitively – although I cannot touch or see cognition, it is natural for me to think about cognition as if it were a thing which exists somewhere in the physical world, has parts, etc. And yet, although it looks logical and natural to keep things this way, I claim that we need another basis for building our research communities, and this basis is a global research strategy.
A global research strategy is what the scientists got in the thought experiment described at the beginning of this essay – an explicit list of the most relevant questions and goals defining the nearer and more distant future of the field, the intention to work towards these goals, and concrete agreements about the coordination of this work. This strategy is global because it is not bound to a particular lab or a regional, national or macro-regional context – everyone can join it regardless of their location. It is called a strategy because it is dynamic and prospective, future-oriented, rather than static and descriptive, presenting what is already known. A research strategy is a unit which is bigger than a single project but smaller than a discipline or a field. Importantly, it is not only about the size – a research strategy is also qualitatively different from the other units that we use to structure our work and communication.
Questions and goals, not topics and projects
The most important thing about a research strategy is that it is future-oriented. It is about goals and questions, not descriptions and answers. The mental units that we use to think about science are field, topic and project. Topics and fields define how we position ourselves in the virtual community (“I am a cognitive scientist working on attention and the motor system” – by saying this I identify myself with some group and distinguish myself from all other groups), while projects define our operational level, the work we do (“I have a project where I investigate how the motor system responds to horizontal attentional shifts”). Topics and fields are very natural for our thinking, since they represent very complex and often abstract phenomena or constructs as concrete objects, so that we can think about them as located somewhere, attached to some other phenomena, consisting of parts or modules, etc. Projects come from management and they are indeed very useful: a project is time-limited and has measurable outcomes, which makes it effective to think about our work in these terms – it can be planned, split into steps and evaluated. Actually, a project is future-oriented too. But there is also a conflict here, because scientific knowledge is not structured in projects: some questions are too broad to be answered in a single project, and some projects are relevant for several questions or even disciplines at the same time. Another problem is that a project, when formulated by a single researcher or lab, is a compromise between what the field really needs (as this person or lab sees it), the resources this person or lab has, what would be most beneficial for this lab in terms of resulting impact, and safety (don’t forget about negative results!). One more limitation is that a lab or a person simply cannot start a project which requires more resources than they have – this greatly limits the range of possible projects.
How many brilliant ideas were formulated – and then forgotten for decades until someone accidentally found one of them in an old paper and realized it as a project? How many useless projects do we carry out without knowing that other labs have already failed with the same idea? How many times have we had the feeling that we are doing something very special on a narrow topic that no one actually needs or ever will? And how can we solve these problems? Ideally, projects should be subordinated to goals and questions relevant for the discipline, not to technical capacities, financial situations or the fragmented understanding of the field developed by a particular person or lab under time pressure. Scientific work is driven by questions and goals – not projects, however convenient they are.
Topics and fields are represented in journals by keywords. If you have ever searched for papers by keywords, you have probably noticed that they are merely knowledge bags into which everything is thrown without any structure – more and less important papers, publications central to the field and loosely related ones, reviews, experimental reports, commentaries and everything else that somehow touches the topic. Every new scientist has to develop his or her knowledge structure from scratch and constantly track and filter new literature to update this structure. If a global virtual community existed that worked on a particular question or towards a particular goal, this community could structure the knowledge collectively by sorting the literature into more and less relevant for the question or goal and by keeping track of the knowledge, updating its parts with new publications – imagine it as a thread in an online forum, where every new message is in dialog with previous messages, and all participants try to answer the initial question. I can just come, read the thread up to the most recent point and join the discussion. This can save a lot of time for everyone and facilitate the development of shared understanding in the field, overcoming the problem of fragmentation.
Questions and goals can have structure: they can be parts of higher-order questions and goals or be themselves split into smaller ones. Keywords are all equal; they are associatively related to each other, since some of them appear together – but they do not build any hierarchy. Questions can be answered and goals can be achieved. This progress can be evaluated and the direction can be corrected. Keywords have no answer – they come and disappear, creating isolated islands of unstructured knowledge around them. Our very way of thinking is fragmentary and associative because keywords, instead of goals and questions, are used to structure knowledge.
One might object that if someone else has already defined all the important questions and goals, and even selected the most relevant literature, then no space for individual creativity remains. First, questions and goals are defined collectively, which means that all labs and researchers experienced in this field should have influence on this process. Second, questions and goals give direction, but they do not define the exact steps – how to reach a convincing answer or how to achieve the goal in the best possible way leaves a lot of space for creativity. Third, in the process of answering, we can transform the question by reformulating it, specifying it or splitting it into smaller parts – it is not a closed system. Finally, participation in a research strategy is voluntary – if someone thinks they have a better idea of what should be done, they are still free to do it outside of the research strategy.
You might think that it is trivial to formulate questions and goals for future research, because everyone is already doing it – at the end of a paper or a thesis, sometimes in reviews, in grant proposals or in informal discussions after conferences. There are two problems with this. First, it is not systematic – sometimes we do it, sometimes not. We are all so busy with what we have just found and what we want to present right now that we have little time to think of the broader context of the whole field. And not all of us are even able to do it, because our knowledge is distorted or fragmented. Thus, formulating a research strategy for the field must be a special explicit effort, maybe the most important genre we need now – and yet it is missing, scattered across many texts, often merely formal, written just because everyone expects something like this to be said somewhere in the paper or thesis. The second problem is that we all think small. Science requires us to go deeper and deeper into the details of our work to become experts, and at some point even the most experienced of us can lose the broader picture, not to mention master’s and PhD students and postdocs. If you ask the most prominent researchers in your field what the most relevant questions and goals at the moment are, you might get surprisingly different answers, or even notice that they need some time to figure out what to say. The question of why exactly these questions and goals are important right now might cause even more confusion. Note that a global research strategy is not what I am going to do in my next project, and not how our lab is going to spend the next few months, but how the whole field should develop and where the efforts of multiple labs and researchers should be focused.
Coordination, not competition
As discussed before, scientific culture is largely competitive and organized according to marketing principles. In fact, the internet even promotes this: the more intensive the flow of information and the more noise that surrounds us, the louder one has to be to catch attention. As a result, this market-like culture distorts our could-be-ideal discourse, and competitiveness prevents us from collaborating. We have an invisible boundary between silence and self-promotion – and this boundary is publication in a journal. Before publication we try to tell others as little as possible about the project, afraid that they might steal our ideas. After publication we try to tell others as much and as loudly as possible, to make sure that everyone has heard our ideas. Quite a strange practice, if we think about it, which is only justified by the fact that the number of publications is the basis for getting funding or obtaining a position. So in the end it is all about limited local resources, which prevents us from unlimited international collaboration. And even international funding programs do not solve the problem – they just shift the competition to a higher level, since everyone has to compete against other researchers from their field instead of collaborating with them. It looks even more ridiculous if we remember that we all live in the same physical world, have the same bodies and very similar genes and brains, read more or less the same literature as other specialists from our field and work on the same problems. Remember, we think we possess some unique knowledge, but this is just an illusion – this knowledge does not belong to us, we just borrowed it from someone else.
So what is it that gives us an advantage in this competition? There are two components: (1) formulated questions or goals and (2) methods to answer or achieve them. Projects that ask deeper and more relevant questions and suggest better research methods get the funding. But what will happen if all the most relevant questions are already formulated by the leading researchers in the field and are freely available to everyone? Only methods remain as a potential field of competition, and it is much easier to reach an agreement and collaborate with someone if you are both trying to answer the same question, even if your approaches are somewhat different. This means that simply by formulating a research strategy we already increase the probability of collaboration in the field and decrease competitiveness, even if we cannot change the system of financing research at this point.
It is well known that the average number of co-authors in the natural and technical sciences is higher than in the social sciences and humanities. Co-authoring a paper means accepting and sharing the views expressed there. The number of co-authors is then an indicator of how unified and coherent the knowledge in a given field is – and this is not good news for the humanities and social sciences. Formulating questions which will intensify collaboration and decrease competitiveness is one of the steps towards building virtual communities whose coordinated work will ultimately lead to a more coherent understanding of the world.
It is possible that with increasing coordination more specialization will come: some labs will only collect data, others will perform analyses and meta-analyses, and a group of people may only write reviews and theoretical papers. As long as everyone’s contribution is recognized by including them in the list of co-authors and submitting grant proposals together, and this whole work is well coordinated, this is not a problem. A lab can collect data in one research network while being an analyst or a theoretical hub in another – this is what many of us already do as individuals. The only difference here is the level of coordination. This allows newly established labs and young scientists to actively participate in the most cutting-edge research regardless of how methodologically rich and theoretically advanced their actual local environment is. More advanced and established labs lose nothing: answering questions and reaching goals usually leads to new questions and new goals in science, so there will always be something to do for everyone.
Continuity, not fashion
Apart from the gaps in our discourse that are related to physical location and the need to establish ourselves on the market with our unique scientific products, there are other gaps which are associated with the dimension of time. Scientists are also people, and thus they are also prone to fashion. Some new hot topic appears and everyone starts working on it – either because we find it really interesting, or because we understand that it will bring us more attention and the associated citations, or both. Then everyone gets tired of the topic and we abandon it, waiting for some new inspiring keywords. This is natural – but it is not strategic. If we carefully read old papers we may notice how many important ideas were forgotten for a long time, only to reappear nowadays for some external reason – formulated differently and starting again from zero unless someone points out the congruence between the two topics. This effect is also a result of the missing explicit research strategy, in which all important questions and goals would be documented, along with their changes, so that everyone starting to work on some question can trace back how it appeared, how it was reformulated, where it comes from and which important discussions on this issue have already happened – regardless of the specific terms used in one or another period. This is similar to what we all do in the introductions of our research theses, but again the difference is that a global research strategy is collective, not individual. The more information we have, the more important it is to organize it in order to get knowledge – and we are reaching a point where there is so much information that we cannot organize it individually anymore. An explicit research strategy ensures continuity of the discourse not only between researchers and labs, but also between generations of scientists.
This continuity does not mean that science steps out of time and becomes preserved in itself. Researchers can still leave the strategy for some time to work on other pressing projects and then come back to the strategy with new ideas. The strategy is open to reasonable and well-justified changes, but it creates a barrier against random noise and fashion.
Subject of research first
The subject of research must be the reference point in all fields where possible. National contexts distort the discourse – some labs in some countries receive more financial support than other labs in other countries, which gives them an opportunity to develop their lines of research more intensively, without any guarantee that these lines of research are indeed the most important for the field at this moment. Coordination will ensure that a common plan is carried out and tasks are distributed across labs and researchers all over the world in the best possible way. Thus, if some very important task does not receive enough funding in one country for some reason, and the field cannot progress without this task being performed, other labs can take it over. On the other hand, if a lab sees that many other actors are already involved in a particular task and the work is progressing, this lab can look for other tasks that are also important but have received less attention.
Once again: the subject of research and the needs of a particular research field should explicitly prevail over financial situations, individual self-promotion and the random noise coming from the local environment. Even if we cannot reach this goal now, since we are still part of many other systems, the act of formulating a research strategy as if we were free of these distorting factors will itself push the field towards new principles of thinking and communication. While planning their next projects, scientists will read the research strategy and try to coordinate their (still very much individual) projects with it, which will naturally lead to more and more convergence around these goals and questions.
How to develop a research strategy: Globally, decentrally, inclusively
Organizations, formal and informal networks, and regular conferences are the natural agents which could start this process of transformation. What they should start doing to transform the discourse into a structured one is to formulate questions rather than promote themselves, to coordinate the distribution of tasks among members, and to keep track of progress. However, even if these agents are not interested in developing a research strategy, an initiative group of scientists from the field can start the process. It is not necessary that the members of this group all be leading scientists in the field. Note that the field here can be defined very differently – it can be a very narrow topic, a particular theory, or a subject of research corresponding (more or less) to a “departmental” level or to the whole discipline – it does not matter. Since there is a natural hierarchy of topics and problems, even if not an explicit one, the process of formulating a strategy can start at any level, and this strategy can later be aggregated at higher levels or specified at lower levels, if the actors there decide to join the initiative.
1. The first step is to identify potential participants of this new virtual community – leading researchers in the field. The criteria here can be discussed, for example:
- Everyone who has a PhD and at least 10 publications in the field
- Everyone who is a professor and works in the field
- Everyone who has at least 15 publications with particular keywords
- Ask each of 5 leading scientists from your field to name 5 other important scientists working in the same field, contact them and repeat the question. Continue doing so until you exhaust the professional networks. Then select those who satisfy particular criteria (e.g., everyone, or those who were named by at least 3 other people, etc.)
Regardless of the exact criteria, this process of pre-selection should be objective, strict and inclusive, to ensure that as many actors from the field as possible are included and there is no systematic distortion in the selection procedure. The result of this step is a list of potential participants who are able to formulate a research strategy.
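The snowball-style criterion above is essentially a small algorithm, and it can be sketched in a few lines of Python. Everything here is illustrative: `seed_experts` stands for the initial 5 leading scientists, and `ask_for_nominations` is a hypothetical callable standing in for whatever real means (e-mail, survey form) is used to collect an expert's nominations.

```python
def snowball_selection(seed_experts, ask_for_nominations, min_nominations=3):
    """Expand a candidate pool by repeated nomination (snowball sampling).

    `ask_for_nominations` is a hypothetical placeholder: given an expert,
    it returns the list of other experts that person names.
    """
    nominations = {}            # candidate -> how many times they were named
    contacted = set()           # experts we have already asked
    to_contact = list(seed_experts)

    while to_contact:           # stop once the professional network is exhausted
        expert = to_contact.pop()
        if expert in contacted:
            continue
        contacted.add(expert)
        for name in ask_for_nominations(expert):
            nominations[name] = nominations.get(name, 0) + 1
            if name not in contacted:
                to_contact.append(name)

    # keep only those named by at least `min_nominations` other people
    return {c for c, n in nominations.items() if n >= min_nominations}
```

The loop terminates because each person is contacted at most once, which matches the instruction to continue "until you exhaust professional networks"; the final filter implements the example criterion of requiring at least 3 independent nominations.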
2. The second step is to contact all potential participants, explain the idea and advantages of having a common research strategy to them, and get their agreement to participate. Importantly, they should understand that (a) this participation does not require much effort from them; they are not obliged to follow the research strategy or to change their work in any way; the only thing we need from them is their expert opinion about the needs and future of the field; and (b) this is not just another opportunity for self-promotion, but a collective act of strategic planning; all individual answers will later be collectively evaluated and accepted or rejected. So ask the experts to think big.
The result of this step is a list of actual participants. Again, it should be as inclusive as possible with respect to the previous list.
3. The third step is to collect questions and goals formulated by participants. Examples of questions are:
- What are components of X?
- Which factors influence development of Y and how?
- Which theory is true – A or B?
Examples of goals are:
- We need a computational model of M. It should have such and such properties.
- We need a review/meta-analysis of research on N with a particular focus on aspects D and F.
- We need a systematic cross-cultural investigation of L.
Questions and goals can be more or less concrete at the beginning; this is not a problem, since they will either be specified or combined into bigger ones later. Whenever possible, such questions and goals should be formulated in language that can be understood by non-specialists and the general public. This will facilitate interdisciplinary interaction and allow non-scientific actors (e.g., journalists, policy makers, or professional organizations) to discuss and evaluate the strategy, giving their feedback on it or using it for their long-term planning.
It should be underlined again that the ultimate goal for participants is to develop a strategy for the field, not for themselves. This does not exclude their own current research interests, but those should be evaluated as objectively as possible; the initiative cannot be used for promoting one’s own research. Importantly, every question or goal should be justified, i.e., if you suggest a question or a goal, you should also briefly explain why you think it is important (a word limit can be helpful here). A moderator or a group of moderators collects all answers and tries to organize them by bringing together identical or almost identical questions and goals. Note that moderators cannot remove anything at this point, especially if they are uncertain whether they understand it correctly – after all, we are dealing here with the knowledge of the best experts in the field, so we should assume they thought a lot about every word.
4. Collective evaluation of the questions and goals. At this stage the outline of the final document – the research strategy – should be defined. This can happen in different formats, depending on the number of experts, the density of their personal contacts, or the suspicion that they ignored the request to be objective at the previous stage:
- Experts can receive a document with all questions and goals, plus explanations of why these questions and goals are important. Experts comment on it and suggest changes; the moderator tries to apply these changes and sends the new version out. The process repeats until everyone is satisfied with the final document. This could be an option for smaller groups of participants.
- The same process can happen online in a shared document, where experts can directly suggest changes and have short discussions.
- An online forum can be used where experts have longer discussions, either anonymously or not. Anonymous discussion may help to remove personal attitudes and factors from the discussion and focus on arguments. In any case it should be ensured that only authorized people participate in the discussion – which means that there should be a moderator or technical support. The forum can be publicly available, since the discussion itself might be interesting for those who do not participate in the strategic planning, or hidden and only available by invitation.
Very similar questions or goals should be merged in order to reduce the fragmentation of the field. A vertical structure can also appear at this stage: some questions are subordinate to others. It is also possible that there will be incompatible interpretations – problematize them too as sub-questions and build them into the strategy with both conflicting interpretations listed, since they will also serve future interaction. If too many participants are involved and the strategy becomes too dispersed, large groups of questions and goals can be separated from each other and form smaller strategies. One and the same person can participate in different strategies – this is not a problem, since strategies are first of all abstract impersonal structures which direct the thoughts and work of participants in a particular direction, and only then concrete embodied networks.
The result of this step is a document where the most important strategic questions and goals for the field are defined, with an explanation following each question or goal – why it is important and what we gain from answering the question or reaching the goal.
5. Prioritizing goals and questions (optional): at this stage, experts may agree on prioritizing goals and questions. This can happen anonymously, by voting. It should be underlined again that this procedure is not for self-promotion; even if priorities are set, everyone is still free to decide which question they work on, or whether they participate in the research strategy at all. Another aspect to highlight is that prioritizing should be based not only on the intuitive perception of the question or goal – how familiar or cool it sounds – but mostly on the explanation following it: how important it is and why.
6. Publication of the research strategy. When published, the research strategy becomes a collective statement and a basis for new social interactions in the field, so it should be made publicly available to everyone. One option is to publish it with all participants listed as co-authors – this will attract a lot of attention to the publication and facilitate its discussion, so it starts working. Another option is to create a website and publish the strategy there, since it will be constantly updated. The website can be integrated with a forum for discussion. Both can also be done – a journal publication and an online presentation.
7. Implementation of the research strategy. While the strategy was developed by the leading scientists of the field, it is open and can be implemented by everyone. This is what it was developed for – to focus everyone’s efforts on the most important issues. Implementation means nominating ourselves to work on some question or goal, creating virtual teams (regardless of whether we know each other personally), discussing approaches and distributing tasks, informing others about our progress, and presenting results. It is possible that multiple teams or researchers work on the same question in parallel, if they cannot combine their approaches – but they will at least know about each other and follow each other’s results interactively, which can lead to convergence in the long term. Another outcome of multiple teams working in parallel can be higher reliability of results (similarly to replication initiatives) and cross-cultural comparisons. An optimal format for communication and coordination could be an online forum where only authorized users can register. A moderator or a group of moderators could monitor it, so that no one uses this joint space for unrelated self-promotion, but only for goal-oriented communication. Discussions can be open up to some point, so that everyone willing to join the initiative can do so, and after a certain point they can become private, in order to coordinate very specific details that are not interesting for other participants. Participants should be open to collaboration with unknown users as long as they like their ideas or approaches and appreciate their contribution to the discussion. This is a major difference from our traditional research networks, where the personal contact comes first and collaboration follows with some probability.
8. Updating the research strategy. The research strategy should structure our work, but it is not carved in stone. It must be updated as new data and ideas come in; new necessary distinctions can be made, or new merges between different questions can happen. The progress on each question or goal should be formally evaluated and communicated, to update everyone’s knowledge. Finally, some questions can be answered and goals achieved – and these must be removed from the strategy. All important changes in the strategy can be communicated via publications with as many authors as possible (potentially everyone working on that particular question or goal), thus guiding other people in the scientific community in the direction defined by the strategy, even if they do not participate in it explicitly.
An additional step, made either by the experts (step 4) or by the community actually working on each question or goal (step 7), can be defining a list of the minimal necessary literature that everyone has to be familiar with in order to participate in work on that particular question. This is different from the thought experiment at the beginning, where researchers first defined common assumptions and then came to the strategy. Reverse engineering is needed here, in the real world, because we do not have unlimited time and one space where all relevant people can share their knowledge until it is merged – so it is easier to start with the strategy and then define the necessary background for each question separately. This is important not only for people who are already working on a particular question (although it will help them to understand each other better), but also for newcomers, e.g., new students and labs, who are willing to participate in the strategy – don’t forget, they are still under institutional and time pressure. This list of minimal required literature should always have a limited volume, so that everyone is able to read it within a few weeks and start understanding the discussion on the forum. At the same time, the list should not be closed and can change as the field changes. Its primary goal is to represent the common knowledge of everyone who works on this particular question. It does not mean that everyone knows only what is listed – people also have other interests, different backgrounds, and read other literature around. The complex picture of the field in every individual mind will be achieved by reading discussions and updates on other questions and goals within the research strategy, especially on its theoretical aspects – pretty much the same as what we already do now, just in a more structured way.
Three key words for research strategies are globality, decentralization, and inclusion. The strategy is global: it does not depend on local contexts, and everyone can join it if they have enough resources and competence to work on a particular question or goal. The strategy coordinates scientific work globally, despite the fact that most resources are still provided by local agents – states, national funding agencies, and macro-regional organizations. The strategy is decentralized: there are no formal leaders in the strategy, and no one can benefit from its existence alone, only the whole field. There is a danger of monopolization of intellectual resources here: some actors may formulate ideas and interpret results, whereas others are reduced to mere data collection stations, which limits their intellectual development. This should never happen, and it is prevented by the explanations of importance attached to every goal and question: even if I just collect or analyze the data in some project, I still understand the broader context – why I am doing it and for what purpose – which allows me to participate in further discussion too. Finally, the strategy is inclusive: every project should try to incorporate all people willing to join it, not to exclude them. Even if it is impossible to include someone, participants should give feedback, explain why they do not want this person in the project, and suggest other nearby tasks that might fit this person’s interests and abilities. A research strategy is about coordination and the best distribution of resources, including human resources.
How professional organizations and associations will change
Professional organizations and associations are a natural basis for developing research strategies, and many of them already do so in some form, more or less systematically. Members of such organizations already constitute networks that are highly congruent with the structure of knowledge in their field, and introducing a formal research strategy will not change much at the beginning. However, having a research strategy explicated and agreed upon by everyone will make a difference in the long term: first, newcomers will be able to understand the field better and identify their interests; second, it increases the commitment and engagement of the members of the organization, since they define the future of the field together; third, developing a strategy increases the value of the organization for the field. Importantly, although an organization can take the initiative and start working on a research strategy, this does not make the organization the “owner” of that strategy – the strategy belongs to everyone working in the field, including those who are not members of the organization. The organization can provide its resources for developing the strategy and supporting its technical needs (such as a forum for discussions), but it should not use the strategy for self-promotion or for forcing people to join the organization.
How journals and publications will change
Journals that completely cover a narrow field could initiate the development of a global research strategy in that field in the same way as organizations do – with the same requirements.
Authorship in publications resulting from interactions in research strategies should be rather inclusive, as long as this is compatible with the guidelines of the journal. If there is a conflict about whether to include a person as a co-author, an anonymous vote can be held among the other authors, with a certain threshold needed for a positive decision, e.g., 80%. In any case, some form of reward should be developed for those whose contribution was valuable but did not result in authorship, such as a rating on the forum.
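The voting rule just described can be sketched in a few lines; the function name and the boolean ballot format are illustrative assumptions, not part of any journal's guidelines:

```python
def coauthor_vote_passes(ballots, threshold=0.8):
    """Decide a contested co-authorship by anonymous vote.

    ballots: one boolean per voting author (True = include the person).
    Returns True when the share of positive votes reaches the
    threshold (0.8 matches the 80% example in the text).
    """
    if not ballots:
        return False  # no voters, no inclusion
    return sum(ballots) / len(ballots) >= threshold
```

For five voting authors, four positive ballots (exactly 80%) would just meet the suggested threshold.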
Preregistration will be facilitated by the presence of the research strategy: distant and often mutually unfamiliar participants who distribute tasks have to define them clearly in advance anyway, so it is just a matter of a few hours to generate a protocol out of these discussions and submit it to a journal prior to the study. It might even be introduced as part of the rules for participants of a strategy – to preregister all studies, since this prevents misunderstandings at early stages.
Groups working in parallel on the same issue can opt for two separate publications or consider one joint publication – the latter requires additional effort from them in order to make the studies more comparable, and it leads to more convergence in the field. Journals should be ready to deal with such combined publications and, if needed, develop guidelines about the coherence of such texts. A similar change can appear in theoretical papers and reviews: if two or three people are willing to write a paper on X, as formulated in the research strategy, but have very different opinions about some aspects of X, they can simply co-author one paper and present their different opinions in it, with arguments pro and contra. Again, this structures the discourse more, since everyone reading the paper will learn about all possible opinions at once, instead of occasionally finding only one or two out of three papers on the topic and having to match their arguments against each other and reconstruct the discussion by themselves.
The reviewing process might change too: currently, other experts from the field review papers on an issue. The task of an editor is to find experts who work as close to the topic as possible and, at the same time, have no conflicts of interest with the authors. This can become impossible if all relevant experts already co-author the paper or participated in the preparation of the study, due to the greater convergence within a research strategy. In that case, however, only technical reviewing is needed, which can be done by specialists from a neighboring field, since the content of the paper is already at the best possible level: all experts in the field have discussed or even co-authored it. The fact that reviewing will be rather technical can speed up the publication process; most of the important discussions will happen less formally, well before the paper is submitted.
There are two types of publications with respect to the research strategy: publications that directly discuss the strategy (“We need to formulate another additional question in the strategy…” or “We found a final answer to the question N. The answer is…”) and publications that are relevant for the strategy but do not even mention it explicitly. Publications explicitly related to the research strategy could carry some indicator, e.g., keywords with a special code for a particular research strategy unit (question or goal) or, if the majority of a journal’s authors already participate in a strategy, an additional tag along with the keywords. The more traditional format of special issues is quite slow and inflexible, but it may be appropriate for some important shifts in the strategy, e.g., the appearance of a new theory with implications for many questions and goals, which should be reflected and discussed.
Someone could use discussions on the forum of a research strategy and publish a study or a theoretical paper based on them without referring to the strategy. If participants of the strategy find this critical, they can use the forum, with dates and times tracked, to try to prove their authorship. On the other hand, in most cases this does not prevent them from performing the study and publishing it themselves, while discussing the recently published study by N in their paper. In the worst case – for example, if the study was performed on a unique sample which N has exhausted – participants of the strategy can publish a commentary where they explicitly drag the cheating publication into the magnetic field of the strategy by discussing its implications for the strategy, despite the fact that N does not recognize it. The same kind of publication can follow any related publication whose author for some reason does not participate in the research strategy, although de facto working on it. This kind of publication behavior fulfils the goal of the research strategy – to bring more convergence into the field around the most important goals and questions.
Publications will no longer be defined by individual needs (“Now I need to write a review, then three papers with experiments.”) nor by journals (“Let’s make a special issue on X, because it’s a hot topic now and people will buy it.”), but first of all by the needs of the field. Some people say we need more reviews and meta-analyses; others say we need more data. Maybe both sides are right, within their own fields. An explicit research strategy will show what is needed at the moment, and someone will undertake these tasks.
How conferences will change
Conferences are often random and chaotic: we may or may not attend a particular conference for external reasons, and we may or may not get in touch with someone who has critical knowledge for our work. Conferences are competitive, from symposia, where we compete for the right to organize a meeting on one topic or another, up to individual talks and posters, where we try to get as much attention as possible from everyone, in the hope that exactly the right people will happen to hear us. Conferences are justified by the fact that we can learn there about ongoing research which has not yet been published – but if we have a research strategy and a forum where we can read all these discussions online, why do we need conferences at all? We will practically have a never-ending online conference with the most relevant people from our field, without any associated costs, so the importance of conferences might decrease.
But we will still need conferences, for two reasons. First, we are not cyborgs yet, and personal contact and discussion can bring a new impression of a person and a deeper understanding of their ideas. Second, it is important for scientists to travel and get in touch with different cultures and contexts – this broadens our minds and improves our understanding of colleagues who live and work in these various environments, which might be even more important when research activity becomes more focused due to an explicit strategy.
A possible option for conferences on a given research strategy is to offer broader cross-sectional discussions which do not happen on the forum of the research strategy, particularly those focusing on theoretical, philosophical, and methodological issues with implications for everyone in the field. Regardless of the concrete solution, conferences should be seen as local resources subordinated to the global research strategy of the field, like any other resource we have.
How individual scientific development and graduation will change
Perhaps the main concern regarding individual scientific development is that many scientists will no longer be able to formulate questions, but will instead become mere workers in projects outlined by someone else. This concern is related to our current research training, which necessarily includes reading, formulating a question, further literature search, specifying the question, performing the study, and interpreting the results – all performed by the person who wants to obtain a degree. However, many PhD positions are in fact created after funding is obtained on a particular topic, i.e., the general framework for these projects is already outlined by someone else. Questions do not limit creativity; they guide it. The method used to answer the question, the way to interpret it, and the sub-questions formulated can vary and leave enough space for individual freedom. What is different in the case of a global research strategy is only that (1) these questions are coordinated at a higher level and (2) everyone in the world has access to the full list of them.
The ability to formulate questions oneself is an issue. Should a bachelor’s, master’s, or PhD student formulate new questions leading scientific research all over the world? Even the most experienced scientists do not always formulate completely new questions; often these questions are inspired by philosophers or other scientists. Clearly, after a certain point individuals should start participating in formulating the questions and evaluating the state of knowledge. Probably a broader educational perspective will be needed to compensate for the resulting unification of intellectual development: a second field of interest, parallel projects, philosophical education, and more interaction with fields outside of science could be options for keeping individuals open-minded. This will be even easier, since the main field will be better structured and will thus require less effort and time for reading tens and hundreds of unrelated or weakly related papers, leaving more time for other subjects. Moreover, having a research strategy may facilitate interdisciplinarity in itself, because goals and questions formulated in a clear way can be better understood by students or researchers coming from other fields, and the minimal necessary literature organized by people working on the question will make it possible for everyone to reach the necessary competence easily. Questions and goals lead to more convergence – also across disciplines.
Finally, not all questions we formulate are as new as we might think, as discussed before (continuity vs. fashion). By recognizing questions that have already been formulated collectively, we might achieve a better evaluation of emerging ideas: how new they are, which preceding lines of research are relevant for them, and what exactly is new – all this should be reflected on and explicated from the point of view of the field, not of individual research interests and development.
Voluntariness of participation
A global research strategy is not only an abstract knowledge structure or an embodied research network, but also a different way of thinking: focusing on the coordination of work to achieve the best outcome for the whole field. Those who acquire this way of thinking are already participating in the initiative, and it is just a matter of time before they join or outline the strategy formally.
Participation in the research strategy cannot be compulsory, only voluntary. Researchers and labs will join the initiative not because they have to, but because they will understand the benefits of this participation for their work:
- Newly established labs and researchers who are only starting to work in a field will get a better overview of the whole field; it will be easier for them to outline their interests and position themselves in a broader research context.
- It will be easier to find collaborators, particularly from distant regions or other countries, which is often an advantage when submitting grant proposals.
- It will be easier to write grant proposals, theses, or papers, since the broad research context, the general research question or goal, and its relevance are already specified by the scientific community. Such projects will have better chances of receiving funding.
- Increasing specialization will allow researchers and labs to be more efficient and to participate in a larger number of projects.
- By working on the same problem in parallel, researchers and labs can exchange their innovations and technical or methodological solutions more efficiently already during the research process.
Along with the discourse structured by the research strategy, there can of course be another field of exploratory research, where everyone develops their own ideas and projects that they find relevant or interesting but that have no place in the research strategy. Exploration is important too, and all our current research can be called exploratory. It is impossible to absorb all current research in some field into one initiative – but the effort to do so can bring more convergence to the field than we have now. It is always possible that some important idea or question remains unnoticed by the leading researchers in the field – yet the chance of this is rather small, so we should not rely on exploratory science alone.
In an age where there is an app for everything, reading scientific articles remains woefully linear and static. This was brought to my attention after trying the SciVerse online article viewer (paid subscription through ScienceDirect, sadly), which offers a multitude of apps (>50) to enhance article reading, such as: clickable links to related articles; grant opportunities based on an article's domain (!!!); links to species summaries; links to molecule renderings, etc., all in a discrete side panel next to the full-text article. No mere bells and whistles – this felt like a much more intuitive way to explore science than a static PDF.
My question: is there a similar product that is (a) free (at the very least) and (b) open source, preferably, or that offers an API to develop new apps? Like a Zotero for PDF viewing? UtopiaDocs and ReadCube seem promising, but both merely offer a few recommendations and minor details on the article in question, not a litany of customizable apps.
- Stated that, "By 1 January 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant OA Journals or on compliant OA Platforms,”
- Let us suppose all journals become OA – then what about scientists from low-income countries, or with limited or no funding?
University ranking is a hot topic nowadays. Where does your university stand in world rankings, and how does that make you feel?
Open access, open review, public review...too many faces for just one coin.
I read most interesting comments from several colleagues about the meaning and the potentially dangerous side effects of open access journals... we could discuss them for years; we can also discuss the opportunity of open review, but nobody mentions that simply making reviews public, along with the authors' answers, would dramatically improve both the review process and the quality of articles.
Few, very few journals (whether open access or not) are working on this issue, and I hope that many more will do so in the near future.
A new monoclonal antibody to a cell surface receptor has been produced in the laboratory. When the cells are incubated with the antibody solution, they get activated instead of inhibited, implying that the antibody fails to block the receptor. What could be the reason for this result? How could you modify the antibody to prevent the activation reaction?
Open Innovation intermediaries perform specific services in problem solving, technology scouting, co-development, IP transfer, and general crowdsourcing. The field has many players, and terminology can mean very different things to different users. In your experience, what are the major shortcomings of the methodologies you have encountered, and what would you like to see from service providers going forward? I represent IdeaConnection.com and would like to know what obstacles to growth are inherent in our system, or in any others that you know about.
We have just published the draft policy recommendations based on our research on the implications of Science 2.0 for European policies on R&I.
Our draft is available in a commentable format at http://science20study.wordpress.com/ so I strongly encourage you to comment on it and to complement it.
Looking forward to having your views on this.
Do you have a review study of Science 2.0? It doesn't need to be a publication. Actually, it would be better if it were not a publication.
We are working on a new section of the Multitude Project website, which will be about the democratization process of science, along with other social processes. We also want to propose new tools for scientists.
You will be acknowledged as the author of your contribution. If you want to remain anonymous we respect that too.
If you are willing and passionate about this subject, you can become the main driver behind this new section of the Multitude Project. You can make it your project. We are a decentralized collaborative network.
Here's the link to the main site:
Thank you all!
I am wondering if there are any copyright issues when we post our published papers on ResearchGate. Is there any rule we should follow, or can we simply upload the papers and hope that we do not break the publisher's copyright? I would be more than happy to know more about this.
1) list your profile(s) in your comment - I am @scigrrlz and @acdbio
2) follow other community members who post or tweet #science2point0
3) tweet #science2point0 and see how big the reach of this community is on twitter
I find twitter a great tool for keeping up with hot science news, colleagues, tracking meetings, following funding agencies and other organizations. What do you use it for?
I guess this is not a very subject-specific question. However: I don't like reading papers on my computer screen, but I don't want to print them out either. What is your experience with e-readers for reading papers (esp. the Kindle)? Is there a possibility to mark text passages (e.g., underlining)?
Some journals on ScienceDirect have started to offer this new service. How would that reflect on the merit of the article?
How do OA journals deal with liability issues such as plagiarism or misinformation in a submitted article... if there is little budget? Is there any history that points to the likelihood of legal issues arising?
Looking for any comments and views on how health information seeking will affect health literacy, if it does so significantly, especially in the contexts of new media.
I would like to get an overview which datasets the community uses when working with altmetrics and whether they are publicly available. Also, are there any "standard" datasets? Do you think such datasets would drive research further?
See abstract of the paper. Obviously this is posted in response to the criteria stated in ResearchGate's Open Review introduction. I would be pleased to see ResearchGate take over http://oPeer.org and do it right, but this notion of evaluating strictly on the basis of "reproducibility" is as silly as counting Facebook "likes". (IMNSHO)
Scan the question sections for #OpenScience, #Science20 and #OpenAccess, and you will notice that the vast majority, by far, spend significant Q&A time on what is wrong with Open Science, what does not work, and what could (or has) horribly gone wrong. It makes you wonder how high the h-index on those questions gets, and how representative that is of reality.
Although exposing fraud, plagiarism, bad publisher service, and poor quality is essential to prevent others from falling victim, it is hardly as inspiring or motivating. Actually, it can give the wrong impression to the novice, and be dangerous.
So let's balance the discussion and focus on what works, by applying the scientific method to gauge the positive side of #OpenScience (if any). There are many shining examples of how #OpenScience can boost your career profile on the way to that tenure.
OS practitioners, we know you are out there, so don't be shy: tell us how you integrate OS into your daily workflow, and in what measurable ways #OpenScience contributes to your profile and impact.
With the high cost of journals and electronic databases, many people cannot afford to purchase them. Open Educational Resources are now an alternative; do you trust these resources?
As a researcher and therefore a reader of papers, I often encounter poorly written papers, full of grammatical and spelling errors. Of course, one cannot fully blame the authors -- many researchers are not native English speakers (myself included), and some only speak English at a very basic level, or not at all. However, some publication platforms (journals, magazines, transactions and so on) have editors listed: a group of people that tends to change per issue or a set of issues. My impression was that editors edit the papers for publication, mainly focusing on formatting and language (with feedback from the authors, of course).
However, reading papers from a Lecture Notes in Computer Science issue, I'm quite certain that language isn't part of what editors do. Some articles, especially those that come from workshop and conference proceedings, often lack editing, sometimes to the extent that the paper is (for me, at least) no longer understandable. I've even noticed this problem in journals. In addition, my reading experience seems to indicate that the problem is increasing, which would be a rather troubling trend that could be addressed by editors or by reviewers; after all, the quality of the text is important to the correct communication of information. On the other hand, it may be unfair to non-native speakers to consider language in the review process.
Thus, my question actually consists of two parts:
1. What is the 'job description' of an editor?
2. How can the research community as a whole (authors, reviewers, PCs, readers, editors, ...) improve the situation?
Are any researchers who are undertaking systematic reviews also adding a search of Google Scholar? And if so, what numerical limit are you putting on the results that you inspect? In some earlier trials, I found that Scholar returned on the order of at least 10x more results than the more usual sources (like Medline), which I feel would artificially distort the number of excluded articles in your flow diagram of articles to be included.
I cannot find it in the official lists of Thomson Reuters or similar databases, and the impact factor listed on their webpage is based on citations of their articles, but it is an unofficial one.
Some journals (like those under Elsevier) have two types of policies. The first is making your article subscription-based (i.e., one has to pay a fee for access); the second is paying an open access publishing fee (around USD 2500-3500), after which anyone can freely download the manuscript from the journal website.
My question is: do you think that paying the open access publishing fee enhances the chance of manuscript acceptance compared with the subscription (non-open-access) route?
I would like to provide copies of papers I have published, but I do not wish to violate the rights of the journal or the copyright laws. I would welcome the experience others have had in answering this question.
Do researchers appreciate a piece of scientific work (paper, abstract, thesis, etc.) on the basis of its content, or on the basis of the journal score, the institutions, or the prestige of the authors?
Science, at least in some cases, is reaching an intellectual status higher than politics, religion and humanism. This is because it is based on rigorous thinking and evidence; authorities or social influences are, in theory, minimized.
How can science be more democratic and based on reason? What is our responsibility in this? Is, for instance, unconventional thinking of independent researchers or minorities penalized or stimulated? Do they have ways to publish their research in a fair peer-review system? What are these ways?
Here are the presentation slides for a half-day seminar I will be giving at Universiti Sains Malaysia (USM) on 28 June 2013. Your thoughts, experiences and ideas on this topic would be highly appreciated. Still in ultra learning mode :)
What should be a new or improved peer review model, do we need it, and why?
A number of modifications to the existing peer review model have already been proposed, e.g.
1. open peer review, see e.g. http://www.nature.com/nature/peerreview/debate/nature05535.html
2. giving the reviewers bonus points which are later required for article submission, http://aclinks.wordpress.com/2012/02/17/more-on-peer-review-2-0/
3. creating a channel (say, a secure form at the journal web site) through which the reviewers could anonymously contact the authors (and get response from them) in order to clarify something or check some subtle details not included in the paper (proposed by Igor Belegradek at this discussion: http://mathoverflow.net/questions/50947/on-referee-author-communications )
What do you think of these modifications? Which modifications to the peer review system would *you* suggest?
Thanks for your input.
Is it difficult to categorize the roles each author had in a multi-authored paper (for example: analysis, experiments, review, etc.), to be listed alongside the authors' affiliations?
I have created a special interest group about Science 2.0 on LinkedIn.
More than 150 people have already subscribed.
If you want to join the group please go to:
I am researching on the data sharing behaviour of academic researchers in a particular research field - with the aim of conducting an empirical study in 2014. Currently I am working on a systematic review of studies in order to build a decent framework for the interview. Though I have already checked the most prominent multidisciplinary databanks, I wanted to make sure that nothing important slips through my fingers. In advance - thank you very much for the feedback. Much appreciated.
I am a PhD scholar in Computer Science. Can anyone please suggest how to get funds to attend a research conference or to publish a research article in a reputed journal? Are there any Scopus-indexed journals in computer science that publish research articles for free? If so, what is the procedure for sending a paper to those journals?
More and more open access journals are emerging. Would you like to submit your paper to such journals? Additionally, open access is also provided as an option in a lot of conventional journals, which can surely increase the visibility of your paper. Thus, do you prefer this option? Finally, does your institute or project pay the costs?
Most journals charge a fee to authors for publishing their work Open Access. Some research suggests publishing OA increases citations, some says it does not. Regardless, this bill can be footed by the author him/herself, the University/Institute or the Grants agency. But what benefit does the funding agency get by paying $500-5000 worth of Open Access fee?
Suppose a scientific article is signed by a single author. In another situation, suppose the same article is signed by more than one author. In your opinion, does the number of authors influence how easy or difficult it is for a scientific article to be published and to receive citations after publication?
I do not use Firefox, but I've come across a note on Google+ reporting the launch of the Mozilla Science Lab. All of you using it, please tell us about your experiences and practices working with it.
The role of social media in building scientific and medical knowledge has been changing rapidly, and seems to be increasingly reliable, usable, and accepted within the field. How do you use Wikipedia in your medical work, and how do you recommend others (especially students) use this resource? Also, does anyone have experience writing medical articles for Wikipedia? If so, do you have any advice to offer on the process?
Our lab has an up-to-date website with a nice news section. However, it seems certain that most of our target audience will not visit the website regularly, so the readership of our news is very limited.
The university is active in LinkedIn, Facebook and Twitter, but naturally only shares the most important news. Therefore it seems that we should be active ourselves in sharing links to our news in social media. Some individual researchers do promote their own research online, but a concentrated lab-level effort would seem more effective.
We have considered setting up a LinkedIn group for our lab. This would be used for sharing links to our website news, new papers and job opportunities. A joint SlideShare account also seems worth the effort. Other obvious alternatives are Twitter and Facebook, and of course ResearchGate.
Do you think this would work, or would there be a better way?
As a motivation, increased publicity potentially brings new contacts, collaborations, projects, research, funding and so on.
Edit: Prof. Ravi Sharma nicely clarified the motivation for participating in social media below:
2. Popularising/sharing of services/ products/land mark achievements/papers/articles/presentations etc.
3. Attracting desired human resources
4. Networking among participating scientist resulting in better circumstances for productive and collaborative research & development - institutional as well as individual levels.
For our institute's publication strategy, we aim to identify core conferences and journals that deal with open science, science 2.0, open access, altmetrics and so on.
Since these topics touch upon quite a few disciplines and research areas, it is not that easy to gain a good overview. This is why I think it is an ideal question for the expert crowd here.
Particularly considering social networking and discussion forums, such as RG (for scientific matters) and LinkedIn (for business matters): if we spend time on any of those, we basically have less time for face-to-face discussions, paper writing, etc. Is it worth it? Is it still feasible to also write articles? How much time should be spent on which kind of activity? And, ultimately, is this worsening our work-life balance?
In contrast to most other fields, CS uses conference proceedings as the main outlet for its research results. This leads to huge competitiveness in conference acceptance (acceptance rates of 10-15% are quite common for top conferences) and long papers (say, 12 or even 14 pages double-column). And because the papers are already seen as a final product, the incentive to write a journal version seems low, especially when other researchers go on citing the conference version anyway.
I find that this special position has some negative impact on CS:
- there is only a single round of reviews, leaving little room to improve the paper (shepherding is not fixing this problem)
- good papers get rejected at good conferences (not a "hot topic," reviewer randomness)
- papers must be "perfect" at time of submission, large but fixable problems lead to rejection
- no consistent reviews, if you fix all problems a new set of reviewers will have different problems (or even want you to revert to its previous state), wasting your time and the reviewers'
- the evaluation of research quality across fields is skewed because the widely used impact factor is bad for CS and good for everyone else
- the editing of papers is done by the authors themselves in a short timeframe, resulting in a poor style and print quality
A good point:
- the time-to-print of journals is high (1-2 years), conferences have fixed deadlines and only about 6 months (and you can also tell that something is happening)
So what is your opinion on this? Should we change to the classical model of journals (maybe with faster journals), try to improve the proceedings, or switch to something else entirely?
A public budget is the budget of a government, operating at different levels. It is a document containing the government's policy towards the development of the economy, with emphasis on maximising social welfare. The budget also indicates which sections of the community benefit and which lose out. This shows the necessity of studying the public budget and its interpretation.
In the era of ubiquitous internet communication, it seems natural that knowledge (science), treated as a common human heritage and accessible to all, might be the best tool for breaking down barriers and driving global progress.
The costs of electronic publication are incomparably lower than those of traditional paper production. Yet only a small fraction of renowned publishers have decided to offer open access to scientific publications; others, such as Springer, propose an Open Choice option for a fee of US$3000 per publication. Most authors around the world are employed at state universities, where salaries are dramatically low, leaving them no chance of paying such costs.
Should governments cover these publishers' costs (if they are really necessary) to open access to knowledge for their citizens? Is there room for common open-access publishers to compete in the monopolised market of high-ranked journals? Should governments initiate a discussion about common open-access regulations for science, instead of the ACTA law conspiracy?
Most digital versions of published papers use PostScript or PDF as their document format, which helps ensure that the document looks as intended on various platforms and in print. One of the drawbacks is that the text is just a large array of glyphs to be placed on a page; semantics such as a piece of text being a heading, a caption, text in a figure, or a reference are lost.
This causes problems when you want to make automated use of a paper, e.g., to extract the references inside, enable full-text search, or generate BibTeX entries. For example, for inclusion into the ACM Digital Library you have to provide the LaTeX source and BibTeX file to help them get the proper metadata in the first place. Google Scholar or RG seem to invest a lot of effort in finding out which papers are yours, who is cited by whom, etc.
There has been much research on topics like the Semantic Web; are there already solutions to this problem that could be used for research documents at scale? To improve the situation, why not introduce a new data format (or simply include this metadata in the PDF) and make this information mandatory when publishing?
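As an illustration of why structured metadata matters, here is a minimal Python sketch (the function name, field choices, and metadata values are all hypothetical) showing that once a paper's metadata is machine-readable, generating a BibTeX entry becomes a trivial string transformation rather than an error-prone glyph-extraction problem:

```python
# Hypothetical sketch: with structured metadata shipped alongside the PDF,
# producing a BibTeX entry is a direct mapping, not text mining.

def to_bibtex(meta):
    """Render a metadata dict as a BibTeX @article entry."""
    key = meta["author"].split(",")[0].lower() + str(meta["year"])
    fields = "\n".join(
        f"  {k} = {{{meta[k]}}}," for k in ("author", "title", "journal", "year")
    )
    return f"@article{{{key},\n{fields}\n}}"

meta = {
    "author": "Doe, Jane",
    "title": "Semantics for Scholarly PDFs",
    "journal": "Journal of Hypothetical Examples",
    "year": 2013,
}
print(to_bibtex(meta))
```

The same dict could just as easily be rendered as Dublin Core or embedded as XMP in the PDF itself; the point is that the semantic labels survive, instead of being flattened into positioned glyphs.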
This is becoming an important issue, as the distribution of the means to interact (computation and the internet are good examples) and to obtain competence (MOOCs) makes the integration of amateurs with professionals on research topics practical and inevitable.
As a researcher, every so often one encounters publications that are not of the high quality one might expect from a 'good' publisher (such as the IEEE). Sometimes it is simply a low-quality workshop or conference, where peer review is simply not as high a bar as at high-profile conferences or high-quality journals. We all know there is a difference in quality between each workshop, conference or journal. This is something we can deal with: lower-quality work is not necessarily wrong, but it is often less innovative, or it is on-going research.
However, very rarely, one encounters publications that simply contain many errors. In this case, I'm referring to errors in the content, rather than spelling and grammar (because spelling and grammar errors are virtually everywhere). Sometimes 'errors' are simply assumptions you do not agree with (which is acceptable most of the time), but in other cases, 'errors' may refer to falsifiable facts (for example, claiming to be able to protect integrity by using HMAC-MD4 -- MD4 has been known to be broken for many years, and this algorithm isn't part of major communication security standards like TLS).
What do you do in such a situation? Do you ignore the paper, or do you cite it and explain why its solution is bad? However, citing increases its citation count, thereby increasing the legitimacy of the paper - a confusing dilemma, for me at least.
Secondly, do you think the world would benefit from a place where this type of (motivated!) criticism can be published? Something like arXiv, where authors can publish technical reports explaining the issues, with the option for the original authors to respond to the reports as desired? Or would this lead to a strongly negative spiral that only serves for authors to attack each other? Finally, could ResearchGate fulfill this position, using its significant user base as a kick-start?
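To make the HMAC-MD4 example above concrete, here is a minimal sketch using Python's standard hmac module with SHA-256, the kind of hash function that current standards such as TLS actually pair with HMAC (the key and message are placeholder values, not a recommended key-management scheme):

```python
import hashlib
import hmac

# MD4 has been broken for many years; modern protocols use HMAC with a
# SHA-2 family hash instead. Minimal integrity-tag example with SHA-256:
key = b"shared-secret-key"
message = b"payload whose integrity we want to protect"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag)

# Verification should use a constant-time comparison to avoid timing attacks:
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True for an untampered message
```

A paper claiming integrity protection via HMAC-MD4 is making exactly the kind of falsifiable claim described above: the construction exists, but the underlying hash no longer meets the security assumptions the claim rests on.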
Recently, I've been reading a collection of papers, and I noticed that almost none of these include any error bars (neither standard deviation, nor confidence intervals or some other indication of error). This is really surprising to me - I've always been taught that error bars are essential to the correct interpretation of results.
Is it bad practice to leave them out? If so, why are there so many conference and journal publications that contain graphs without them? Or should I not even bother including them in my publications? How does this affect trust we can have in the research results?
The immediate context is a number of IEEE conference publications I was reading, which included simulation results as part of the evaluation. This was for work performed on VANETs, but I've noticed this practice across computer science and across publishers of all quality levels, both open- and closed-access.
I'm not sure whether this is also common in other fields (comments to that end are also welcome!).
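For readers wondering what the missing error bars would contain, here is a minimal Python sketch (the run data are made up) computing the usual candidates from repeated simulation runs: the sample standard deviation and an approximate 95% confidence interval for the mean:

```python
import statistics

# Toy data: five repeated runs of the same simulated metric.
runs = [12.1, 11.7, 12.4, 11.9, 12.2]

mean = statistics.mean(runs)
sd = statistics.stdev(runs)      # sample standard deviation (n-1 denominator)
sem = sd / len(runs) ** 0.5      # standard error of the mean
ci95 = 1.96 * sem                # normal approximation; use a t-value for small n

print(f"{mean:.2f} +/- {ci95:.2f} (95% CI, normal approximation)")
```

Either quantity can be drawn as an error bar; which one is appropriate depends on whether you want to show the spread of individual runs (standard deviation) or the uncertainty in the estimated mean (confidence interval), and papers should state which they use.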
Websites such as ResearchGate, Academia.edu, Mendeley Web, and Zotero Web have a lot of common features: 1) they are social tools targeting scholars; 2) they have "real names" policies (which make it easier for academics to follow others, connect with others, and self-promote); 3) they enable publication/citation/reading-list sharing; and 4) they encourage self-archiving and open access.
My advisors and I usually use "Academic Social Networking Services" (ASNS) to refer to these services in our work; we are from the field of Library and Information Science. However, I have to admit that ASNS is sometimes not the best name, because the term seems unclear. Moreover, ASNS does not capture some research-oriented features that we find on these websites. For example, Mendeley and Zotero are more likely to be identified as "social reference managers", while RG seems to feature as scholars' social Q&A. So I am very curious whether there are better choices for describing them all together.
Here are some other terms that researchers have used to describe these services:
-Networked Participatory Scholarship (NPS) that I borrowed from the field of Education
(Veletsianos, G. and Kimmons, R. 2011. Networked Participatory Scholarship: Emergent techno-cultural pressures toward open and digital scholarship in online networks. Computers & Education. 58, 2 (2011), 766–774.)
-Research Networking (RN) platforms, from the biomedical domain
(Schleyer, T., Spallek, H., Butler, B.S., Subramanian, S., Weiss, D., Poythress, M.L., Rattanathikun, P. and Mueller, G. 2008. Facebook for Scientists: Requirements and Services for Optimizing How Scientific Collaborations Are Established. Journal of Medical Internet Research. 10, 3 (2008), e24.)
The peer review system is often unfair and biased. Very often, when I am asked to review a paper, I personally know the authors. Shall I abuse this to "get even"? Shall I be more strict because I actually value the author as a scientist, as a person, as a colleague? Remaining impartial is close to impossible. Also, since the author does not know who I am, I cannot have an open discussion, honestly pointing out my view of things, or how I understand the theory.
Sometimes, to circumvent these problems, I bluntly state my name in the review. A move which is usually suppressed by editors-in-chief. More often, however, I feel I have to turn down the review.
Conversely, one often finds reviewers doing a really sloppy job: they cannot be criticised, so they review a paper in (what appears to be) a few minutes, criticise parts of the paper unjustly (having missed parts of the argument laid out in the paper), or are not familiar enough with the field to criticise justly. This is what one almost invariably observes in robotics conference reviews, e.g., at ICRA and IROS. This is the major reason why I do not like to publish there, and why such papers are often not (and indeed should not be) acknowledged as publications on par with journal papers.
The often-mentioned double-blind approach seemingly solves one of the two issues---but only seemingly, since even then I can often, in about 50% of the cases, guess who the author is.
A way out would be to use a double-open approach, in which reviewers are acknowledged and, better still, reviews or commentaries are published. The new open-access journal series Frontiers uses the former approach, and it appears to work well.
An improvement on this approach would be to use a grading system, in which everyone---with names being named, of course---can comment on or grade papers online. This will not only lead to majority voting, but also to accreditation of accreditors. Of course, blind votes must be prevented.
Wikipedia lists such ventures under http://en.wikipedia.org/wiki/Open_Peer_Commentary, of which Behavioral and Brain Sciences, a journal initiated by (the legendary open-access promoter) Stevan Harnad, was a brilliant example (unfortunately, Cambridge University Press seems to have decided to close its traditional open access, and papers now cost a daring $45). The journal "Papers in Physics" uses a comparable approach, I think, but I don't know the journal (nor the field), so I cannot comment on it.
The suggestion would therefore be that a "journal" consists of an open-access web platform where logins are given to accredited users (a step that requires some thinking). Each user can post an article or a commentary on an article, and a rating system along the lines of RG's silly score (silly, since at RG it is heavily based on discussion ratings and only very lightly on publications and impact factor (caveat!)) would lead to something like an impact factor for the paper. Can a person build a reputation on reviewing alone? I think not; the effort of writing a scientific article is much higher, while its impact (= influence on others) is much more profound, and should be honoured accordingly. Yet only publishing, without engaging in discussions, is not right either.
Advantages: fair reviews, but also a better understanding of the papers, since commentaries by peers are included. And much more impact for papers, since those that are controversial or important will receive high grades. High grades are better earned this way than by being cited from papers in obscure conferences!