Science 2.0 and Open Access - Science topic

Information exchange on Open Access topics in scientific publishing
Questions related to Science 2.0 and Open Access
  • asked a question related to Science 2.0 and Open Access
Question
8 answers
For example, for a large lab with a number of research projects: a document management system that works in a collaborative environment, connects figures to the underlying data (numbers and images), and offers standard tools for creating manuscript templates.
Thanks
Relevant answer
Answer
A data management tool combines and manages data from numerous data sources. It extracts, cleans, transforms, and integrates data without compromising integrity, so that you can access it in an easy-to-use format.
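As a minimal sketch of the extract-clean-transform-integrate steps such a tool performs (plain Python for illustration; all record and field names here are invented, not from any particular product):

```python
# Hypothetical illustration of an extract -> clean -> integrate pipeline.
# Field names ("name", "value", "image") are invented for this sketch.

def extract(sources):
    """Pull raw records from several sources into one flat list."""
    return [record for source in sources for record in source]

def clean(records):
    """Drop incomplete records and normalize the text key."""
    return [
        {**r, "name": r["name"].strip().lower()}
        for r in records
        if r.get("name") and r.get("value") is not None
    ]

def integrate(records):
    """Merge records by key without losing data: later sources add fields."""
    merged = {}
    for r in records:
        merged.setdefault(r["name"], {}).update(r)
    return merged

sources = [
    [{"name": " Sample-A ", "value": 1}, {"name": None, "value": 2}],
    [{"name": "sample-a", "value": 1, "image": "fig1.png"}],
]
data = integrate(clean(extract(sources)))
```

After running this, the incomplete record is dropped and the two versions of "sample-a" are merged into one entry that links the value to its image, which is the kind of figure-to-data connection the question asks about.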
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
Does anyone have experience with Columbus Publishers?
Are their journals trustworthy or predatory?
Relevant answer
Answer
This publisher is new (and certainly too new to be mentioned in, for example, Beall's list, if they turn out to be predatory). I do see some red flags:
- The location is suspect: Google the address and you end up with some pretty nice-looking real estate, but a highly unlikely location for an office.
- In literally the first paper I noticed that this publisher is not careful about copyright permissions for images; this is a red flag for a lack of proper peer review and of well-established scientific standards.
- The photo used on their homepage is probably not original, since it is already used here: https://professionals.hartstichting.nl/samenwerking-en-financiering/samenwerking/talentontwikkeling
- They are new and consequently non-established, but they still offer a membership at ridiculous prices: https://www.columbuspublishers.com/membership
- The APCs are way too high for a basically non-indexed journal: https://www.columbuspublishers.com/journal/research-journal-of-gastroenterology-and-hepatology?submenu=article-process-fee (for a research/review article they charge 1,499 USD).
- The journals I checked are empty (no papers and no board members…).
Even if this might turn out to be a genuine and legit initiative, I would go for another journal. Looking at your publication list, you have found far better ones than this new player.
Best regards.
  • asked a question related to Science 2.0 and Open Access
Question
96 answers
ResearchGate is the leading social networking site for researchers in the world. Meanwhile, there are several similar websites, such as ResearcherID and Academia.edu. Some people argue that ResearchGate is not doing anything fundamentally different from the others; researchers have long been present on LinkedIn and Facebook. In China, we have a similar website, Researchmate, but its active users are quite limited. Therefore, I have three questions:
1. Why do you use ResearchGate?
2. What real benefits does ResearchGate bring you?
3. What barriers do you face when using ResearchGate?
Thanks very much!
Relevant answer
Answer
Digital academic networks have recently become vital, and ResearchGate enables academics to meet at the same time and in the same place.
  • asked a question related to Science 2.0 and Open Access
Question
102 answers
Beall's website scholarly-oa.com no longer hosts Beall's list of predatory journals and publishers.
I recently found the website https://predatoryjournals.com/, which claims to build on and expand this list (see https://predatoryjournals.com/about/).
What do you think of it?
Update [August 1, 2019]: The question was originally posted on December 26, 2017, but it now looks like the site in question remains dormant and has not been updated since 2017, which makes the question somewhat moot.
Relevant answer
Answer
Dear Michael John McAleer , we have already discussed the definition of predatory publishing in another thread.
Leading scholars and publishers from ten countries have agreed on a definition of predatory publishing that can protect scholarship. It took 12 hours of discussion, 18 questions, and 3 rounds to reach...
  • asked a question related to Science 2.0 and Open Access
Question
35 answers
Do you share your research ideas openly with others? Do you make your research process transparent? Do you make your research findings accessible?
If so, which online tools are useful?
Relevant answer
Answer
I made a list of open science tools, including data repositories, preprint storage servers, open-access publishing tools, search engines, and more: https://peerrecognized.com/open-science/
  • asked a question related to Science 2.0 and Open Access
Question
6 answers
Let’s imagine that at the beginning of the coronavirus pandemic we locked all leading scientists of the world from a particular field in one room for a few months to make sure they are safe. Since they have a lot of common interests and many of them already know each other, they started discussing their research and the state of the field, which resulted in an intensive exchange of knowledge, explication of commonly shared assumptions and long discussions about the most controversial topics. By the end of the quarantine they wrote one textbook for their students where only the most important knowledge of the field is aggregated and structured with axioms one has to accept to be able to work in the field and open questions formulated with all possible answers listed after each of them. Of course, this textbook should be constantly updated, like Wikipedia of this discipline, so the scientists decided to publish it online as a website. Moreover, every chapter of this book ends with a list of relevant literature, and everyone promised to add their new papers there, if they are relevant for this topic, so that other people do not have to search for new publications across multiple journals – and yet miss some of them.
What disappointing news! The quarantine has been prolonged by several weeks. The scientists have already discussed everything they wanted. But they can use this additional time to plan further research! Since they have already decided what is clear in the field and what has been done, no one will plan uninformative and unnecessary experiments that didn't work out in other labs. Instead, scientists can focus on the most relevant issues and – something really new! – they can distribute tasks to make sure that all current problems are covered and every lab gets the tasks that fit best with its capacities and competences, even though these labs are sometimes located on opposite sides of the Earth.
After the lockdown, the scientists continued their work in their labs. They noticed that their students now understood the tasks and new articles much better – because they had already read the online textbook of the field and had practically the best possible level of knowledge of the topic. The same applies to all new students and researchers coming into the field. New labs don't have to search for their identity or position themselves loudly – they simply take on one of the important issues to which not enough resources have been allocated and immediately become significant actors. Communication between different labs became easier, since they share a common background. Projects became much more ambitious, because tasks are distributed and coordinated across labs, so they are no longer limited to one or two labs. Projects also run more smoothly, because related labs constantly share their experience in solving technical and methodological problems. Finally, the field progresses faster, since all relevant labs focus on common goals and issues instead of promoting their own agendas.
How did we spend our quarantine?
Why we have what we have
The described process consists of two steps: (1) explication of a common background and (2) formulation of a research strategy. The latter is more important here, but the former would facilitate the process. This is nothing radically new: many elements of these two sub-processes are already present in research communication. There are reviews, there is collaboration between labs leading to the development of a common background on some issue, and there are multiple textbooks in each discipline. Many authors formulate questions for future research in their papers, and some even do it for the whole field. However, strategic planning is often not systematic, is not reflected upon enough, and does not happen at a level as global as it could be nowadays.
Why is our common background fragmented or distorted? Science is a special type of activity, because its goal is to professionally generate new knowledge. This means that the scientific discourse should be a function of the knowledge acquisition process itself: there should be more texts about topics that are more important and fewer texts about topics that are less important; more texts about problematic issues where controversial opinions exist and fewer texts about issues where everyone agrees and there is nothing to discuss. This (ideal) discourse can be called undistorted – it would look like this if there were no other factors influencing scientific communication. Some of these factors are:
- Noise and randomness in scientific communication: We miss some important papers in the flow of literature, which has become extremely dense. It is often a matter of luck whether we attend a particular conference and get acquainted with a highly relevant person there or not.
- Barriers in scientific communication: Not all results from all labs are published, especially negative ones – so we don't even have a chance to learn about these unsuccessful experiments from journal publications. We might have no access to some publications because our institution did not pay for them.
- Limited capacities at the individual level: We have several projects at once, including student projects, plus teaching and administrative work, so we simply cannot read all the literature from every field for every single project in order to evaluate how important or innovative some of the ideas are, unless there is a good recent review on exactly this issue.
- Institutional pressure: We need to carry out projects and publish papers to apply for funding, obtain a degree, or get a position. The time to familiarize ourselves with the literature is limited and the circle of minimally necessary literature is not defined; as a result, we are often forced to start a project with wrong assumptions or an incomplete picture of the field in mind, and only learn along the way how it should have been done (hopefully not too late) – who among us has never been in this situation? Another downside of institutional pressure is that we may take the safer route and test some trivial hypothesis with a slight variation, in order to increase the probability of getting publishable positive results, while important but risky problems remain untouched.
- Varying terminology: The same terms can mean different things, while the same things can be called differently, which makes keywords a sometimes misleading search tool. Again, it takes time to resolve all these complications – precious time which we do not always have.
- Fashion, hype, and the personality of other researchers: Some topics receive more attention than they deserve, because they resonate with current political or cultural circumstances or because they are promoted by charismatic researchers, while some more important issues may remain overshadowed because they don't sound that cool or because the people who generated these ideas are not able to communicate them loudly enough.
- Competitive and market-like culture: Scientific work is largely structured as a market, where everyone has to advertise themselves, sell their ideas, and collect publications, citations, and impact, which is then converted into financial and symbolic benefits. Marketing promotes distinction, not cooperation: everyone has to make a unique offer and cultivate their difference from others, not their similarity – a new effect, a new abbreviation, a catchy title for the paper or conference, a description of how the project is unique and different from all other projects in the field, etc.
All of these factors have been recognized and discussed before. What is new here is the idea that they are all parts of one and the same problem: they all lead to the distortion and fragmentation of our knowledge structures and scientific communication. We need to reflect on the consequences and disadvantages they bring and compensate for them in our work. That is, we need to put all the leading scientists in one room and let them formulate a common background and develop a research strategy for the field – not for themselves as individuals or small teams.
Luckily, we have new technologies to help us with this. The internet not only intensifies the flows of information in our lives. It also makes us less dependent on local context and gives us an opportunity to build virtual communities according to thematic fit and, most importantly, to coordinate our work as a community at a global level. It can help us to better structure the discourse, to distribute tasks, to reduce the role of personality and focus on explicit rational arguments instead, and, finally, to learn to cooperate rather than compete.
Building virtual scientific communities
Knowledge is social in nature – we gain and shape it together, share it with each other and evaluate it collectively. Knowledge is embedded into social structures, and yet we often have an illusion that we possess it alone. In some sense our individual mind is not individual at all, since almost everything we know is just a compilation of things that other people know too. Although frightening for the individualistic culture, this idea highlights our interconnections with others – the basis of communication and social life.
Scientific knowledge, like every other kind of knowledge, is embedded in scientific social structures. These structures are, for example, faculties and departments, labs, professional organizations, or long-term collaborations between individual scientists or labs. Journals and regular conferences build a virtual or real community around themselves too. While many virtual communities are built just for informal communication on interesting topics, such as communities of cat lovers or fans of a band on Facebook, the scientific community has the goal of generating new knowledge. This means that any distortion or incompleteness of the picture results in wasted resources – testing wrong hypotheses, losing time reading unrelated literature, or missing important publications on the topic. That's why it is so important that information flows are organized in the best possible way, one that allows everyone in the community to achieve the minimal required knowledge structure and thereby ensures effective communication and coordination. Whether you learn about a new important paper should not be a matter of luck. Neither can we rely on the creative chaos that results in new ideas: we all know astonishing examples of the same discoveries being made simultaneously and independently of each other – not because there is some mystery about that, but simply because people who share the same background and work on the same problem come to the same solutions. These people could work together and probably get their results faster. So, we need communities with structured knowledge and effective communication and coordination.
But if we look at the existing communities, we will notice that they are still largely organized according to the physical presence of researchers (labs, departments, faculties, regional or macro-regional organizations) or static topics (research field, subject of research, particular approach). This makes sense because, on the one hand, resources for research are distributed in the physical world, still mostly at the national or regional level; on the other hand, a model of an object is something very natural and basic which we understand intuitively – although I cannot touch or see cognition, it is natural for me to think about cognition as if it were a thing which exists somewhere in the physical world, has parts, etc. And yet, although it looks logical and natural to keep things this way, I claim that we need another basis for building our research communities, and this basis is a global research strategy.
A global research strategy is what the scientists got in the thought experiment described at the beginning of this essay – an explicit list of the most relevant questions and goals defining the closer and more distant future of the field, the intention to work towards these goals, and concrete agreements about the coordination of this work. This strategy is global because it is not bound to a particular lab or a regional, national, or macro-regional context – everyone can join regardless of their location. It is called a strategy because it is dynamic, prospective, and future-oriented, rather than static and descriptive, presenting what is already known. A research strategy is a unit bigger than a single project but smaller than a discipline or a field. Importantly, it is not only about size – a research strategy is also qualitatively different from the other units we use to structure our work and communication.
Questions and goals, not topics and projects
The most important thing about a research strategy is that it is future-oriented. It is about goals and questions, not descriptions and answers. The mental units we use to think about science are the field, the topic, and the project. Topics and fields define how we position ourselves in the virtual community (“I am a cognitive scientist working on attention and the motor system” – by saying this I identify myself with some group and distinguish myself from all other groups), while projects define our operational level, the work we do (“I have a project where I investigate how the motor system responds to horizontal attentional shifts”). Topics and fields are very natural for our thinking, since they represent very complex and often abstract phenomena or constructs as concrete objects, so that we can think of them as located somewhere, attached to some other phenomena, consisting of parts or modules, etc. Projects come from management, and they are indeed very useful: a project is time-limited and has measurable outcomes, which makes it effective to think about our work in these terms – it can be planned, split into steps, and evaluated. Actually, a project is future-oriented too. But there is also a conflict here, because scientific knowledge is not structured in projects: some questions are too broad to be answered in a single project, and some projects are relevant for several questions or even disciplines at the same time. Another problem is that a project, when formulated by a single researcher or lab, is a compromise between what the field really needs (as this person or lab sees it), which resources this person or lab has, and what would be most beneficial for this lab in terms of resulting impact and safety (don't forget about negative results!). One more limitation is that a lab or a person simply cannot start a project which requires more resources than they have – this limits the range of possible projects a lot.
How many brilliant ideas were formulated – and then forgotten for decades, until someone happened to find them in an old paper and realized them as a project? How many useless projects do we carry out without knowing that other labs have already failed with the same idea? How many times have we had the feeling that we are doing something very special on a narrow topic that no one actually needs or will ever need in the future? And how can we solve these problems? Ideally, projects should be subordinated to the goals and questions relevant for the discipline, not to technical capacities, the financial situation, or the fragmented understanding of the field developed by a particular person or lab under time pressure. Scientific work is driven by questions and goals – not projects, however convenient they are.
Topics and fields are represented in journals by keywords. If you have ever searched for papers using keywords, you have probably noticed that they are merely knowledge bags into which everything is thrown without any structure – more and less important papers, publications central to the field and loosely related ones, reviews, experimental reports, commentaries, and everything else somehow in touch with the topic. Every new scientist has to develop his or her knowledge structure from scratch and constantly track and filter new literature to update it. If a global virtual community existed that worked on a particular question or towards a particular goal, this community could structure the knowledge collectively by sorting the literature into more and less relevant for the question or goal and by keeping track of the knowledge, updating its parts with new publications – imagine it as a topic in an online forum, where every new message is in dialog with previous messages and all participants try to answer the initial question. I can just come, read the forum up to the most recent point, and join the discussion. This can save a lot of time for everyone and facilitate the development of shared understanding in the field, overcoming the problem of fragmentation.
Questions and goals can have structure: they can be parts of higher-order questions and goals, or themselves be split into smaller questions and goals. Keywords are all equal; they are associatively related to each other, since some of them appear together, but they do not build any hierarchy. Questions can be answered and goals can be achieved; this progress can be evaluated and the direction corrected. Keywords have no answer – they come and disappear, creating isolated islands of unstructured knowledge around them. Our very way of thinking is fragmentary and associative because knowledge is structured by keywords instead of goals and questions.
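The contrast can be made concrete with a toy data structure (all names here are invented for illustration, not an existing tool): keywords form a flat, unordered set, while research questions form a tree that can be refined, answered at the leaves, and rolled up into measurable progress.

```python
# Toy illustration (hypothetical names): keywords are a flat set with no
# notion of "answered", while questions form a hierarchy with progress.

from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    answered: bool = False
    subquestions: list = field(default_factory=list)

    def split(self, *texts):
        """Refine this question into smaller sub-questions."""
        self.subquestions = [Question(t) for t in texts]
        return self.subquestions

    def leaves(self):
        """All open-ended leaf questions below (or including) this node."""
        if not self.subquestions:
            return [self]
        return [leaf for q in self.subquestions for leaf in q.leaves()]

    def progress(self):
        """Share of answered leaf questions below this node."""
        leaves = self.leaves()
        return sum(q.answered for q in leaves) / len(leaves)

# Keywords: unstructured, no hierarchy, no notion of being answered.
keywords = {"attention", "motor system", "embodiment"}

# Questions: hierarchical, with evaluable progress.
root = Question("How does the motor system contribute to attention?")
a, b = root.split(
    "Do horizontal attentional shifts evoke motor activity?",
    "Is this activity causal or epiphenomenal?",
)
a.answered = True
```

With one of two sub-questions answered, the root question reports 50% progress, whereas the keyword set can only grow or shrink; that asymmetry is exactly the point of the paragraph above.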
One might object that if someone else has already defined all the important questions and goals, and even selected the most relevant literature, then no space remains for individual creativity. First, questions and goals are defined collectively, which means that all labs and researchers experienced in the field should have influence on this process. Second, questions and goals give direction, but they do not define the exact steps – how to reach a convincing answer or how to achieve the goal in the best possible way leaves a lot of space for creativity. Third, in the process of answering, we can transform the question by reformulating it, specifying it, or splitting it into smaller parts – it is not a closed system. Finally, participation in a research strategy is voluntary – if someone thinks they have a better idea of what should be done, they are still free to pursue it outside the research strategy.
You might think that formulating questions and goals for future research is trivial, because everyone is already doing it – at the end of a paper or thesis, sometimes in reviews, in grant proposals, or in informal discussions after conferences. There are two problems with this. First, it is not systematic – sometimes we do it, sometimes not. We are all so busy with what we have just found and want to present right now that we have little time to think about the broader context of the whole field. And not all of us are even able to do it, because our knowledge is distorted or fragmented. Thus, formulating a research strategy for the field must be a special, explicit effort, maybe the most important genre we need now – and yet it is missing, scattered across many texts, often merely formal, just because everyone expects something like this to be said somewhere in a paper or thesis. The second problem is that we all think small. Science requires us to go deeper and deeper into the details of our work to become experts, and at some point even the most experienced of us can lose the broader picture, not to mention master's and PhD students and postdocs. If you ask the most prominent researchers in your field what the most relevant questions and goals are at the moment, you might get surprisingly different answers, or even notice that they need some time to figure out what to say. The question of why exactly these questions and goals are important right now might cause even more confusion. Note that a global research strategy is not what I am going to do in my next project, and not how our lab is going to spend the next few months, but how the whole field should develop and where the efforts of multiple labs and researchers should be focused.
Coordination, not competition
As discussed before, scientific culture is largely competitive and organized according to marketing principles. In fact, the internet even promotes this: the more intensive the flow of information and the more noise surrounds us, the louder one has to be to catch attention. As a result, this market-like culture distorts our could-be-ideal discourse, and competitiveness prevents us from collaborating. We have an invisible boundary between silence and self-promotion – and this boundary is publication in a journal. Before publication we try to tell others as little as possible about the project, afraid that they might steal our ideas. After publication we try to tell others as much and as loudly as possible, to make sure that everyone has heard our ideas. A rather strange practice, if we think about it, which is only justified by the fact that the number of publications is the basis for getting funding or obtaining a position. So, in the end it is all about limited local resources, which prevent us from unlimited international collaboration. And even international funding programs do not solve the problem – they just shift the competition to a higher level, since everyone has to compete against other researchers from their field instead of collaborating with them. It looks even more ridiculous if we remember that we all live in the same physical world, have the same bodies and very similar genes and brains, read more or less the same literature as other specialists in our field, and work on the same problems. Remember: we think we possess some unique knowledge, but this is just an illusion – this knowledge does not belong to us; we have just borrowed it from someone else.
So what is it that gives us an advantage in competition? Two components: (1) formulated questions or goals and (2) methods to answer or achieve them. Projects that ask deeper and more relevant questions and suggest better research methods get the funding. But what happens if all the most relevant questions are already formulated by the leading researchers in the field and are freely available to everyone? Only methods remain as a potential field of competition, and it is much easier to reach an agreement and collaborate with someone if you are both trying to answer the same question, even if your approaches are somewhat different. This means that simply by formulating a research strategy we already increase the probability of collaboration in the field and decrease competitiveness, even if we cannot change the system of financing research at this point.
It is well known that the average number of co-authors in the natural and technical sciences is higher than in the social sciences and humanities. Co-authoring a paper means accepting and sharing the views expressed there. The number of co-authors is thus an indicator of how unified and coherent the knowledge in a given field is – and this is not good news for the humanities and social sciences. Formulating questions that will intensify collaboration and decrease competitiveness is one of the steps towards building virtual communities whose coordinated work will ultimately lead to a more coherent understanding of the world.
It is possible that increasing coordination will bring more specialization: some labs will only collect data, others will perform analyses and meta-analyses, and a group of people may only write reviews and theoretical papers. As long as everyone's contribution is recognized by including them in the list of co-authors and submitting grant proposals together, and this whole work is well coordinated, this is not a problem. A lab can collect data in one research network while being an analytical or theoretical hub in another – this is what many of us already do as individuals; the only difference is the level of coordination. This allows newly established labs and young scientists to actively participate in the most cutting-edge research regardless of how methodologically rich and theoretically advanced their local environment is. More advanced and established labs lose nothing: answering questions and reaching goals in science usually leads to new questions and new goals, so there will always be something for everyone to do.
Continuity, not fashion
Apart from the gaps in our discourse related to physical location and the need to establish ourselves on the market with our unique scientific products, there are other gaps associated with the dimension of time. Scientists are people too, and thus they are prone to fashion. New hot topics appear and everyone starts working on them – either because we find them really interesting, or because we understand that they will bring us more attention and the associated citations, or both. Then everyone gets tired of the topic and we abandon it, waiting for some new inspiring keywords. This is natural – but it is not strategic. If we carefully read old papers, we may notice how many important ideas were forgotten for a long time, only to reappear nowadays for some external reason – differently formulated and starting again from zero, unless someone points out the congruency between the two topics. This effect is also a result of the missing explicit research strategy, in which all important questions and goals would be documented along with their changes, so that everyone starting to work on some question can trace how it appeared, how it was reformulated, where it comes from, and which important discussions on this issue have already happened – regardless of the specific terms used in one period or another. This is similar to what we all do in the introductions of our research theses, but again, the difference is that a global research strategy is collective, not individual. The more information we have, the more important it is to organize it into knowledge – and we are reaching a point where there is so much information that we cannot organize it individually anymore. An explicit research strategy ensures continuity of the discourse not only between researchers and labs, but also between generations of scientists.
This continuity does not mean that science steps out of time and becomes preserved in itself. Researchers can still leave the strategy for a while to work on other pressing projects and then come back to it with new ideas. The strategy is open to reasonable and well-justified changes, but it creates a barrier against random noise and fashion.
Subject of research first
The subject of research must be the reference point in all fields where possible. National contexts distort the discourse – some labs in some countries receive more financial support than labs in other countries, which gives them the opportunity to develop their lines of research more intensively, without any guarantee that these lines of research are indeed the most important ones for the field at this moment. Coordination will ensure that a common plan is pursued and that tasks are distributed across labs and researchers all over the world in the best possible way. Thus, if some very important task does not receive enough funding in one country for some reason, and the field cannot progress without this task being performed, other labs can take it over. On the other hand, if a lab sees that many other actors are already involved in a particular task and the work is progressing, this lab can look for other tasks that are also important but have received less attention.
Once again: the subject of research and the needs of a particular research field should explicitly prevail over financial situations, individual self-promotion and random noise coming from the local environment. Even if we cannot reach this goal now, since we are still part of many other systems, the act of formulating a research strategy as if we were free of these distorting factors will itself push the field towards new principles of thinking and communication. While planning their next projects, scientists will read the research strategy and try to coordinate their (still very much individual) projects with it, which will naturally lead to more and more convergence around these goals and questions.
How to develop a research strategy: globally, decentrally, inclusively
Organizations, formal and informal networks, and regular conferences are the natural agents that could start this process of transformation. To transform the discourse into a structured one, they should start formulating questions rather than promoting themselves, coordinating the distribution of tasks among members, and keeping track of progress. However, even if these agents are not interested in developing a research strategy, an initiative group of scientists from the field can start the process. It is not necessary that the members of this group all be leading scientists in the field. Note that the field here can be defined very differently – it can be a very narrow topic, a particular theory, or a subject of research corresponding (more or less) to a “departmental” level or the whole discipline – it does not matter. Since there is a natural hierarchy of topics and problems, even if not an explicit one, the process of formulating a strategy can start at any level, and this strategy can later be aggregated at higher levels or specified at lower levels, if the actors there decide to join the initiative.
1. The first step is to identify potential participants of this new virtual community – leading researchers in the field. The criteria here can be discussed, for example:
- Everyone who has a PhD and at least 10 publications in the field
- Everyone who is a professor and works in the field
- Everyone who has at least 15 publications with particular keywords
- Ask each of 5 leading scientists from your field to name 5 other important scientists working in the same field, contact them and repeat the question. Continue doing so until you exhaust the professional networks. Then select those who satisfy particular criteria (e.g., everyone, or those who were named by at least 3 other people, etc.)
Regardless of the exact criteria, this pre-selection process should be objective, strict and inclusive, to ensure that as many actors from the field as possible are included and that there is no systematic distortion in the selection procedure. The result of this step is a list of potential participants who are able to formulate a research strategy.
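The snowball criterion above (ask experts to nominate experts, repeat until the networks are exhausted, then keep those named often enough) can be sketched as a simple breadth-first expansion. This is a minimal illustration, not a prescribed procedure: `nominate` is a hypothetical stand-in for actually contacting an expert, and the threshold of 3 nominations follows the example in the text.

```python
from collections import Counter, deque

def snowball_experts(seed_experts, nominate, min_nominations=3, max_rounds=10):
    """Expand a seed list of experts by repeated nomination.

    `nominate(expert)` is assumed to return the (up to five) scientists
    that `expert` names. Returns the set of people named by at least
    `min_nominations` colleagues. `max_rounds` bounds the expansion.
    """
    nominations = Counter()        # how often each person was named
    contacted = set(seed_experts)  # people we have already asked
    queue = deque(seed_experts)
    rounds = 0
    while queue and rounds < max_rounds:
        rounds += 1
        for _ in range(len(queue)):  # process one "round" of the network
            expert = queue.popleft()
            for named in nominate(expert):
                nominations[named] += 1
                if named not in contacted:
                    contacted.add(named)
                    queue.append(named)
    return {p for p, n in nominations.items() if n >= min_nominations}
```

With a toy nomination network such as `{"A": ["B", "C"], "B": ["C", "D"], "C": ["B", "D"], "D": ["B"]}` and seed `["A"]`, only "B" collects three nominations, so only "B" passes the threshold of 3.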
2. The second step is to contact all potential participants, explain to them the idea and advantages of having a common research strategy and get their agreement to participate. Importantly, they should understand that (a) this participation does not require much effort from them; they are not obliged to follow the research strategy or to change their work in any way – the only thing we need from them is their expert opinion about the needs and future of the field; and (b) this is not just another opportunity for self-promotion, but a collective act of strategic planning; all individual answers will later be collectively evaluated and accepted or rejected. So ask experts to think big.
The result of this step is a list of actual participants. Again, it should be as inclusive as possible with respect to the previous list.
3. The third step is to collect questions and goals formulated by participants. Examples of questions are:
- What are components of X?
- Which factors influence development of Y and how?
- Which theory is true – A or B?
Examples of goals are:
- We need a computational model of M. It should have such and such properties.
- We need a review/meta-analysis of research on N with a particular focus on aspects D and F.
- We need a systematic cross-cultural investigation of L.
Questions and goals can be more or less concrete at the beginning; this is not a problem, since they will either be specified or combined into bigger ones later. Whenever possible, such questions and goals should be formulated in a language that can be understood by non-specialists and the general public. This will facilitate interdisciplinary interaction and allow non-scientific actors (e.g., journalists, policy makers or professional organizations) to discuss and evaluate the strategy, thus giving their feedback on it, or to use it for their long-term planning.
It should be underlined again that the ultimate goal for participants is to develop a strategy for the field, not for themselves. This does not exclude their own current research interests, but these should be evaluated as objectively as possible. The initiative cannot be used for promoting one's own research. Importantly, every question or goal should be justified, i.e., if you suggest a question or a goal, you should also briefly explain why exactly you think it is important (a word limit can be helpful here). A moderator or a group of moderators collects all answers and tries to organize them by bringing together identical or almost identical questions and goals. Note that moderators cannot remove anything at this point, especially if they are uncertain whether they understand it correctly – after all, we are dealing here with the knowledge of the best experts in the field, so we should assume they thought a lot about every word.
4. Collective evaluation of the questions and goals. At this stage the outline of the final document – the research strategy – should be defined. This can happen in different formats, depending on the number of experts, the density of their personal contacts, or the suspicion that they ignored the request to be objective at the previous stage:
- Experts can receive a document with all questions and goals plus explanations why these questions and goals are important. Experts comment on it and suggest their changes, the moderator tries to apply these changes and sends the new version out. The process repeats until everyone is satisfied with the final document. This could be an option for smaller groups of participants.
- The same process can happen online in a shared document, where experts can directly suggest changes and have short discussions.
- An online forum can be used where experts have longer discussions, either anonymously or not. Anonymous discussion may help to remove personal attitudes and factors from the discussion and focus on arguments. In any case, it should be ensured that only authorized people participate in the discussion – which means that there should be a moderator or technical support. The forum can be either publicly available, since the discussion itself might be interesting for those who do not participate in the strategic planning, or hidden and available by invitation only.
Very similar questions or goals should be merged in order to reduce the fragmentation of the field. A vertical structure can also appear at this stage: some questions are subordinate to other questions. It is also possible that there will be incompatible interpretations – problematize them too as sub-questions and build them into the strategy with both conflicting interpretations listed, since they will also serve future interaction. If too many participants are involved and the strategy becomes too dispersed, large groups of questions and goals can be separated from each other and form smaller strategies. One and the same person can participate in different strategies – this is not a problem, since strategies are first of all abstract impersonal structures which direct the thoughts and work of participants in a particular direction, and only then concrete embodied networks.
The result of this step is a document where most important strategic questions and goals for the field are defined, with explanations following each question or goal – why it is important and what we get from answering this question or reaching this goal.
5. Prioritizing goals and questions (optional): at this stage, experts may agree on prioritizing goals and questions. This can happen anonymously, by voting. It should be underlined again that this procedure is not for self-promotion, and even if priorities are set, everyone is still free to decide which question they work on, or whether they participate in the research strategy at all. Another aspect to highlight here is that prioritizing should be based not only on the intuitive perception of the question or goal – how familiar or cool it sounds – but mostly on the explanation following it: how important it is and why.
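One simple way such an anonymous vote could be aggregated is to rank items by their mean score across ballots. This is an illustrative sketch only: the scoring scale and the handling of skipped items are assumptions, not part of the procedure described above.

```python
def prioritize(ballots):
    """Rank strategy items by their mean anonymous score.

    `ballots` is a list of dicts mapping item -> score (e.g. on a 1-5
    scale); items a voter skips simply do not count toward that item's
    mean. Returns items sorted from highest to lowest mean score.
    """
    totals, counts = {}, {}
    for ballot in ballots:
        for item, score in ballot.items():
            totals[item] = totals.get(item, 0) + score
            counts[item] = counts.get(item, 0) + 1
    means = {item: totals[item] / counts[item] for item in totals}
    return sorted(means, key=lambda item: means[item], reverse=True)
```

For example, with ballots `[{"Q1": 5, "Q2": 3}, {"Q1": 4, "Q2": 5}, {"Q2": 2}]`, Q1 averages 4.5 and Q2 about 3.33, so Q1 is ranked first. Mean scoring is only one option; rank-based schemes would work equally well here.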
6. Publication of the research strategy. When published, the research strategy becomes a collective statement and a basis for new social interactions in the field, so it should be made publicly available to everyone. One option could be to publish it with all participants listed as co-authors – this will attract a lot of attention to the publication and facilitate its discussion, so it starts working. Another option is to create a website and publish the strategy there, since it will be constantly updated. The website can be integrated with a forum for discussion. Both can also be done – a journal publication and an online presentation.
7. Implementation of the research strategy. While the strategy was developed by the leading scientists of the field, it is open and can be implemented by everyone. This is what it was developed for – to focus everyone’s efforts on the most important issues. Implementation means nominating ourselves to work on some question or goal, creating virtual teams (regardless of whether we know each other personally), discussing approaches and distributing tasks, informing others about our progress and presenting results. It is possible that multiple teams or researchers work on the same question in parallel, if they cannot combine their approaches – but they will at least know about each other and interactively follow each other’s results, which can lead to convergence in the long term. Other outcomes of multiple teams working in parallel can be higher reliability of results (similar to replication initiatives) and cross-cultural comparisons. An optimal format for communication and coordination could be an online forum where only authorized users can register. A moderator or a group of moderators could monitor this joint space, so that no one uses it for unrelated self-promotion, but only for goal-oriented communication. Discussions can be open up to some point, so that everyone willing to join the initiative can do so, and after a certain point they can become private, in order to coordinate very specific details that are not interesting for other participants. Participants should be open to collaboration with unknown users as long as they like their ideas or approaches and appreciate their contribution to the discussion. This is a major difference from our traditional research networks, where personal contact comes first and collaboration follows with some probability.
8. Updating the research strategy. The research strategy should structure our work, but it is not carved in stone. The strategy must be updated as new data and ideas come in; new necessary distinctions can be made, or different questions can be merged. The progress on each question or goal should be formally evaluated and communicated, to update everyone’s knowledge. Finally, some questions can be answered and goals achieved – and these must be removed from the strategy. All important changes in the strategy can be communicated via publications with as many authors as possible (potentially everyone working on that particular question or goal), thus guiding other people in the scientific community in the direction defined by the strategy, even if they do not participate in the strategy explicitly.
An additional step, made either by the experts (step 4) or by the community actually working on each question or goal (step 7), can be defining a list of the minimal necessary literature that everyone has to be familiar with if he or she wants to participate in work on that particular question. This is different from the thought experiment at the beginning, where researchers first defined common assumptions and then came to the strategy. Reverse engineering is needed here, in the real world, because we do not have unlimited time and one space where all relevant people can share their knowledge until it is merged – so it is easier to start with the strategy and then define the necessary background for each question separately. This is important not only for people who are already working on a particular question (although it will help them to understand each other better), but also for newcomers, e.g., new students and labs, who are willing to participate in the strategy – don’t forget, they are still under institutional and time pressure. This list of minimal required literature should always have a limited volume, so that everyone is able to read it within a few weeks and start understanding the discussion on the forum. At the same time, the list should not be closed and can change with the field. Its primary goal is to represent the common knowledge of everyone who works on this particular question. It does not mean that everyone knows only what is listed – people also have other interests, different backgrounds and read other literature. The complex picture of the field in every individual mind will be achieved by reading discussions and updates on other questions and goals within the research strategy, especially on its theoretical aspects – pretty much the same as what we already do now, just in a more structured way.
Three key words about research strategies are: globality, decentralization, inclusion. The strategy is global: it does not depend on local contexts and everyone can join it if he or she has enough resources and competence to work on a particular question or goal. The strategy coordinates scientific work globally, despite the fact that most resources are still provided by local agents – states, national funding agencies and macro-regional organizations. The strategy is decentralized: there are no formal leaders in the strategy and no one can benefit from its existence alone, only the whole field. There is a danger of monopolization of intellectual resources here: some actors formulate ideas and interpret results, whereas others are reduced to mere data collection stations, which limits their intellectual development. This should never happen, and this is what the explanations of importance for every goal and question prevent: even if I just collect or analyze the data in some project, I still understand the broader context – why I am doing it and for what purpose – which allows me to participate in further discussion too. Finally, the strategy is inclusive: every project should try to incorporate all people willing to join it, not exclude them. Even if it is impossible to include someone, participants should give feedback, explain why they do not want this person in the project, and make suggestions about other nearby tasks that might fit this person’s interests and abilities. Research strategy is about coordination and the best distribution of resources, including human resources.
How professional organizations and associations will change
Professional organizations and associations are a natural basis for developing research strategies, and many of them already do so in some form, more or less systematically. Members of such organizations already constitute networks that are highly congruent with the structure of knowledge in their field, and introducing a formal research strategy will not change much at the beginning. However, having a research strategy explicated and agreed upon by everyone will make a difference in the long term: first, newcomers will be able to better understand the field and identify their interests; second, it increases the commitment and engagement of members of the organization, since they define the future of the field together; third, developing a strategy increases the value of the organization for the field. Importantly, although the organization can take the initiative and start working on a research strategy, this does not make the organization the “owner” of the research strategy – the strategy belongs to everyone working in the field, including those who are not members of the organization. The organization can provide its resources for developing the strategy and supporting its technical needs (such as a forum for discussions), but not use the strategy for self-promotion or for forcing people to join the organization.
How journals and publications will change
Journals completely covering a narrow field could initiate development of a global research strategy in that field in the same way as organizations do – with the same requirements.
Authorship in publications resulting from interactions in research strategies should be rather inclusive, as long as this is compatible with the guidelines of the journal. If there is a conflict about whether to include a person as a co-author, an anonymous vote can be held among the other authors, with a certain threshold needed for a positive decision, e.g., 80%. In any case, some form of reward should be developed for those whose contribution was valuable but did not result in authorship, such as a rating on the forum.
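The anonymous authorship vote amounts to a simple supermajority check. Here is a minimal sketch using the 80% threshold mentioned above as the default; everything else (the function name, the boolean ballot format) is an illustrative assumption:

```python
def include_as_author(votes, threshold=0.8):
    """Decide on co-authorship from anonymous yes/no votes.

    `votes` is a list of booleans, one per voting author. The candidate
    is included only if the share of 'yes' votes meets the threshold.
    An empty vote (no ballots cast) defaults to exclusion.
    """
    if not votes:
        return False
    return sum(votes) / len(votes) >= threshold
```

With five voting authors, four "yes" votes (exactly 80%) would be enough, while three would not. Whether the threshold is inclusive or strict is a design choice the group would have to agree on in advance.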
Preregistration will be facilitated by the presence of the research strategy: distant participants, often unfamiliar with each other, who distribute tasks have to clearly define them in advance anyway, so it is just a matter of a few hours to generate a protocol out of these discussions and submit it to a journal prior to the study. Preregistration of all studies might even be introduced as part of the rules for participants of a strategy, since it prevents misunderstandings at early stages.
Groups working in parallel on the same issue can decide on two separate publications or consider one joint publication – the latter requires additional effort from them to make the studies more comparable and leads to more convergence in the field. Journals should be ready to deal with such combined publications and, if needed, develop guidelines about the coherence of such texts. A similar change can appear in theoretical papers and reviews: if two or three people are willing to write a paper on X as it is formulated in the research strategy, but they have very different opinions about some aspects of X, they can simply co-author one paper and present their different opinions in it with arguments pro and contra. Again, this structures the discourse more, since everyone reading this paper will learn about all the opinions at once, instead of occasionally finding only one or two out of three papers on the topic and having to match their arguments against each other and reconstruct the discussion by her- or himself.
Reviewing processes might change too: currently, other experts from the field review papers on a given issue, and the task of an editor is to find experts who work as closely as possible to the topic while having no conflicts of interest with the authors. With more convergence within a research strategy, this can become impossible, because all relevant experts may already co-author the paper or have participated in the preparation of the study. In that case, however, only technical reviewing is needed, which can be done by specialists from a neighboring field, since the content of the paper is already at the best possible level: all experts in the field have discussed or even co-authored it. The fact that reviewing becomes rather technical can speed up the publication process; most important discussions will happen less formally, well before the paper is submitted.
There are two types of publications with respect to the research strategy: publications that directly discuss the strategy (“We need to formulate another additional question in the strategy…” or “We found a final answer to question N. The answer is…”) and publications that are relevant for the strategy but do not even mention it explicitly. Publications explicitly related to the research strategy could carry some indicator, e.g., keywords with a special coding of a particular research strategy unit (question or goal), or, if the majority of a journal’s authors already participate in a strategy, this can become an additional tag alongside the keywords. The more traditional format of special issues is quite slow and inflexible, but it may be appropriate for some important shifts in the strategy, e.g., the appearance of a new theory with implications for many questions and goals, which should be reflected and discussed.
Someone could use discussions on the forum of a research strategy and publish a study or a theoretical paper based on them without referring to the strategy. If participants of the strategy find this critical, they can try to use the forum, with dates and times tracked, to prove their authorship. On the other hand, in most cases this does not prevent them from performing the study and publishing it, while discussing in the paper the recently published study by N. In the worst case, for example if the study was performed on a unique sample which is exhausted by N, participants of the strategy can publish a commentary where they explicitly drag the cheating publication into the magnetic field of the strategy by discussing its implications for the strategy, despite the fact that N does not recognize it. The same kind of publication can follow any related publication whose author for some reason does not participate in the research strategy, although de facto works on it. This kind of publication behavior fulfils the goal of the research strategy – to bring more convergence into the field around the most important goals and questions.
Publications will no longer be defined by individual needs (“Now I need to write a review, then three papers with experiments.”), nor by journals (“Let’s make a special issue on X, because it’s a hot topic now and people will buy it.”), but first of all by the needs of the field. Some people say we need more reviews and meta-analyses, others say we need more data. Maybe both sides are right, each in their own fields. An explicit research strategy will show what is needed at the moment, and someone will undertake these tasks.
How conferences will change
Conferences are often random and chaotic, since we may or may not attend a particular conference for some external reasons, and we may or may not get in touch with someone having critical knowledge for our work. Conferences are competitive, starting from symposia where we compete for the right to organize our meeting on one or another topic, up to individual talks and posters, where we try to get as much attention as possible from everyone, with the hope that exactly the right people will occasionally hear us. Conferences are justified by the fact that we can learn there about ongoing research which has not yet been published – but if we have a research strategy and a forum where we can read all these discussions online, why do we need conferences at all? We will practically have a never-ending online conference with most relevant people from our field without any associated costs, so the importance of conferences might decrease.
But we will still need conferences for two reasons. First, we are not cyborgs yet, and personal contact and discussion can bring a new impression of a person and a deeper understanding of their ideas. Second, it is important for scientists to travel and get in touch with different cultures and contexts – this broadens our minds and improves our understanding of colleagues who live and work in these various environments, which might be even more important when research activity becomes more focused due to an explicit strategy.
A possible option for conferences on a given research strategy is to offer broader cross-sectional discussions which do not happen on the forum of the research strategy, particularly focusing on theoretical, philosophical and methodological issues, which have implications for everyone in the field. Regardless of the concrete solution, conferences should be seen as local resources that should be subordinated to the global research strategy of the field, like any other resource that we have.
How individual scientific development and graduation will change
Perhaps the main concern regarding individual scientific development is that many scientists will no longer be able to formulate questions, but will become mere workers in projects outlined by someone else. This concern is related to our current research training, which includes reading, formulating a question, further literature search, specifying the question, performing the study and interpreting the results – all necessarily performed by the person who wants to obtain a degree. However, many PhD positions are in fact created after funding is obtained for a particular topic, i.e., the general framework for these projects is already outlined by someone else. Questions do not limit creativity but guide it. The method to answer the question, the way to interpret it and to formulate sub-questions can vary, leaving enough space for individual freedom. What is different in the case of a global research strategy is only that (1) these questions are coordinated at a higher level and (2) everyone in the world has access to the full list of these questions.
The ability to formulate questions is an issue in itself. Should a bachelor’s, master’s or PhD student formulate new questions leading scientific research all over the world? Even the most experienced scientists do not always formulate completely new questions; often these questions are inspired by philosophers or other scientists. Clearly, after a certain point individuals should start participating in formulating the questions and evaluating the state of knowledge. Probably, a broader educational perspective will be needed to compensate for the increasing unification of intellectual development: a second field of interest, parallel projects, philosophical education, and more interaction with fields outside of science could be options to keep individuals open-minded. This will be even easier, since the main field will be better structured and thus will require less effort and time for reading tens and hundreds of unrelated or weakly related papers, leaving more time for other subjects. Moreover, having a research strategy may facilitate interdisciplinarity in itself, because goals and questions formulated in a clear way can be better understood by students or researchers coming from other fields, and the minimal necessary literature organized by people working on the question will make it possible for everyone to easily reach the necessary competence. Questions and goals lead to more convergence – also across disciplines.
Finally, not all questions we formulate are as new as we might think, as discussed before (continuity vs. fashion). By recognizing questions that are already formulated collectively, we might achieve a better evaluation of emerging ideas: how new they are, which preceding lines of research are relevant for these ideas and what exactly is new – all this should be reflected and explicated from the point of view of the field, not individual research interests and development.
Voluntariness of participation
A global research strategy is not only an abstract knowledge structure or an embodied research network, but also a different way of thinking: focusing on the coordination of work to achieve the best outcome for the whole field. Those who acquire this way of thinking are already participating in the initiative, and it is just a matter of time before they join or outline the strategy formally.
Participation in the research strategy cannot be compulsory, but only voluntary. Researchers and labs will join the initiative not because they have to, but because they will understand benefits of this participation for their work:
- Newly established labs and researchers who are only starting to work in a field will get a better overview of the whole field; it will be easier for them to outline their interests and position themselves in a broader research context.
- It will be easier to find collaborators, particularly from distant regions / other countries, which is often an advantage when submitting grant proposals.
- It will be easier to write grant proposals, theses or papers, since the broad research context, the general research question or goal, and its relevance are already specified by the scientific community. Such projects will have better chances of receiving funding.
- Increasing specialization will allow researchers and labs to be more efficient and to participate in a larger number of projects.
- By working on the same problem in parallel, researchers and labs can exchange their innovations and technical or methodological solutions more efficiently already during the research process.
Along with the discourse structured by the research strategy, there can of course be another field of exploratory research, where everyone develops their own ideas and projects that they find relevant or interesting but that have no place in the research strategy. Exploration is important too, and all our current research can be called exploratory. It is impossible to absorb all current research in some field into one initiative – but the effort to do so can bring more convergence to the field than we have now. It is always possible that some important idea or question remains unnoticed by the leading researchers in the field – yet the chance of this is rather small, so we should not rely on exploratory science alone.
Relevant answer
Answer
Great!
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
In an age where there is an app for everything, reading scientific articles remains woefully linear and static. This was brought to my attention after trying the SciVerse online article viewer (paid-subscription through ScienceDirect, sadly) which offers a multitude of apps (>50) to enhance article reading, such as: clickable links to related articles; grant opportunities based on an article's domain (!!!); links to species summaries; links to molecule renderings, etc, all in a discrete side-panel next to the fulltext article. No mere bells and whistles, this felt a much more intuitive way to explore science than a static pdf.
My question: is there a similar product that is a) free (at the very least), b) open-source, preferably, or offers an API to develop new apps? Like a Zotero for PDF viewing? UtopiaDocs and ReadCube seem promising, but both merely offer a few recommendations and minor details on the article in question, not a litany of customizable apps.
Thanks
Relevant answer
I started working on a solution for this.
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
  • Stated that, "By 1 January 2020 scientific publications that result from research funded by public grants provided by participating national and European research councils and funding bodies, must be published in compliant OA Journals or on compliant OA Platforms,”
  • Suppose all journals become OA – then what about scientists from low-income countries, with limited funding, or without any funding?
Relevant answer
Answer
That will be a great service to science.
  • asked a question related to Science 2.0 and Open Access
Question
11 answers
Dear All,
Please suggest a scientific journal where I can submit a manuscript free of charge.
I have written a paper on HIV coreceptor variation but have been unable to publish it, as I don't have enough financial support.
Relevant answer
Answer
We publish research papers free of cost after a plagiarism check.
If interested, please forward your paper to
  • asked a question related to Science 2.0 and Open Access
Question
274 answers
In your opinion, what is the single most important quality of a good student?
Relevant answer
Answer
Abdullah, since you are focusing your question on the single most important quality, I choose, among other relevant qualities, the tandem of curiosity and motivation (for me it is impossible to disentangle them).
Studying is about learning, discovering, exploring... Therefore it is about wanting to learn, it is an active process and it requires fundamentally the will to learn which I think comes from this tandem of motivation-curiosity.
Having this, other qualities enhance the learning process but without it there is a problem at the heart of it.
  • asked a question related to Science 2.0 and Open Access
Question
54 answers
University Ranking is a hot topic nowadays. Where does your university stand in world Rankings and How does it make you feel?
Relevant answer
Answer
These figures are the figures currently (June 2018) appearing in Wikipedia for the University of the Witwatersrand.
Take your choice of which of the following to believe!
8th Bloomberg Billionaire Ranking (2014)
24th Times Higher Education Alma Mater Index (2013)
139th Tredence-Emerging Global Employability University Ranking (2013)
149th Center for World University Rankings (2015)
201st-300th Academic Ranking of World Universities (2015)
251st-300th Times Higher Education World University Rankings (2018)
316th University Ranking by Academic Performance (2016/7)
318th QS World University Rankings (2014/5)
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
Open access, open review, public review...too many faces for just one coin.
I read most interesting comments from several colleagues about the meaning and the potentially dangerous side-effects of open access journals; we could discuss them for years. We can also discuss the merits of open review, but nobody mentions that simply making reviews public, along with the authors' public answers, would dramatically improve both the review process and article quality.
Few, very few journals (whether open access or not) are working on this, and I hope many more will in the near future.
Relevant answer
  • asked a question related to Science 2.0 and Open Access
Question
3 answers
I have prepared a review paper on wireless sensor networks and their applications. Can somebody suggest good journals that publish review articles?
Relevant answer
Answer
If you already have the title and abstract of your article, you can use the following tools from Elsevier and Springer to help you pick the right journal among those belonging to the respective publisher: http://journalfinder.elsevier.com and http://journalsuggester.springer.com . The output of these tools shows, inter alia, average article processing times and impact factors of the journals.
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
A new monoclonal antibody to a cell surface receptor has been produced in the laboratory. When the cells are incubated with the antibody solution, they are activated rather than inhibited, implying that the antibody fails to block the receptor. What could be the reason for this result? How could you modify the antibody to prevent the activation reaction?
Relevant answer
Answer
An antibody is not necessarily an antagonist; it can act as either an agonist or an antagonist depending on its binding site on your receptor.
Try to generate a new antibody against a mutated form of your receptor!
  • asked a question related to Science 2.0 and Open Access
Question
1 answer
Open Innovation intermediaries perform specific services in problem solving, technology scouting, co-development, IP transfer, and general crowdsourcing. The field has many players, and terminology can mean very different things to different users. In your experience, what are the major shortcomings of the methodologies you have encountered, and what would you like to see from service providers going forward? I represent IdeaConnection.com and would like to know what obstacles to growth are inherent in our system, or any others that you know about.
Relevant answer
Answer
Dear Jim.
My answer cannot cover all open innovation intermediaries, but it provides useful information about possible shortcomings of the well-known and increasingly used Lean Startup method.
Have a look at the paper; if it matches your interests and needs, I will provide you with the full text.
BR
Zornitsa
  • asked a question related to Science 2.0 and Open Access
Question
1 answer
We have just published the draft policy recommendations based on our research on science2.0 implications for European policies on R&I.
Our draft is available in a commentable format at http://science20study.wordpress.com/ so I strongly encourage you to comment on it and to complement it.
Looking forward to having your views on this.
Relevant answer
Answer
Is it about science in a 2.0 teaching and learning environment, or about science research on a large scale? Could you narrow down the question?
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
Do you have a review study of Science 2.0? It doesn't need to be a publication. Actually, it would be better if it were not a publication.
We are working on a new section of the Multitude Project website, which will be about the democratization process of science, along with other social processes. We also want to propose new tools for scientists.
You will be acknowledged as the author of your contribution. If you want to remain anonymous we respect that too.
If you are willing and passionate about this subject you can actually become the main driver behind this new section of the Multitude Project. You can make it your project. We are a decentralized collaborative network.
Here's the link to the main site:
Thank you all!
Relevant answer
Answer
Hey Tiberius,
What you are asking for is already published in various white papers, high level blogs, books and even peer-reviewed publications.
If the objectives of the section are clearly outlined, any enthusiastic young scientist around should be able to tell a story within 5-10 pages.
Tip: use "Open Science" as a keyword when searching for materials. "Science 2.0" is dated and may lead you to dead ends.
Ivo
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
Relevant answer
Answer
Joomla, Wordpress and Moodle here :)
  • asked a question related to Science 2.0 and Open Access
Question
328 answers
I am wondering if there are any copyright issues when we post our published papers on ResearchGate. Is there any rule we should follow, or can we simply upload the papers and hope that we do not break the publisher's copyright? I will be more than happy to know more about this.
Relevant answer
Answer
Here's my take: there are a lot of researchers out there. We work in a system that is acknowledged from all sides to be broken:
- I write papers
- My colleagues review them
- I have to pay large amounts of money for open access publications, and even in closed access journals I have to pay a large amount for color figures
- The publishers sell the work back to my University for a lot of money. In fact, just a few days ago I published a paper and had to ask on Facebook for people to send me the full text because I don't have access to the journal (!).
Here is my recommendation: always upload all your final papers to your website and ResearchGate. Make sure to note somewhere that this is not for mass dissemination but just for students and colleagues. The worst that could happen is that a publisher writes you an email and asks you to take a publication down... so be it. But they will not do that, because they are happy that the papers are read and cited (we know that open access papers attract more citations, everything else being equal).
Let's fight for the right to get our work online. It's silly that many of us are funded by taxpayers, but taxpayers can't actually read the final work because they have no access to university libraries and so forth.
If we all put everything online, it will change the system over time. Elsevier will not write to 150,000 researchers asking them to remove papers from their websites...
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
1) list your profile(s) in your comment - I am @scigrrlz and @acdbio
2) follow other community members who post or tweet #science2point0
3) tweet #science2point0 and see how big the reach of this community is on twitter
I find twitter a great tool for keeping up with hot science news, colleagues, tracking meetings, following funding agencies and other organizations. What do you use it for?
Relevant answer
Answer
@Jacic012 is my twitter account. Please, be my guest.
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
Pay $259 once and publish as many papers as you want
Relevant answer
Answer
Hi Mirelys,
The fee is for publishing, not for accessing the article; access to the article is free. The publishing fee is pretty decent considering that many open access journals ask researchers for thousands of dollars just to publish their articles, while at the same time researchers contribute all their referee/peer-review work to the publisher for free.
Best
rolando
  • asked a question related to Science 2.0 and Open Access
Question
43 answers
Guess this is not a very subject-specific question. However: I don't like reading papers on my computer screen, but I don't want to print them out either. What is your experience with e-readers for reading papers (especially the Kindle)? Is there a way to mark text passages (e.g. underlining)?
Relevant answer
Answer
I realize this question is old, but in case someone is still looking for an answer, I am quite happy with K2pdfopt (http://willus.com/k2pdfopt/). For most papers the result is decent, and it handles the two-column format as well.
My reader is a Kindle Paperwhite, but other devices/dimensions are supported.
Cheers!
  • asked a question related to Science 2.0 and Open Access
Question
1 answer
Some journals on ScienceDirect have started to offer this new service. How would that reflect on the merit of the article?
Relevant answer
Answer
I watched an AudioSlide presentation on identification of human remains in a mass casualty incident the other day.  It was a little dry, but extremely informative. I got something out of it and thought that my time was well spent.
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
How do OA journals deal with liability issues such as plagiarism or misinformation in a submitted article if there is little budget? Is there any history that points to the likelihood of legal issues arising?
Relevant answer
Answer
I look forward to learning more about what you are doing and what your students are doing.  My husband, Dallas Smith, comes to India every year with "Mynta," Swedish Indian-Jazz fusion musical group.  We have been to India many times and have Indian culture around us every day!  Dallas studies with Ali Akbar Khan and continues to play the bansuri.  We have much to talk about!
Our work in healthcare draws upon all of our musical experiences.  I am a professional harpist for many decades!
Susan
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
Looking for any comments and views on how health information seeking will affect health literacy, if it does so significantly, especially in the contexts of new media.
Relevant answer
Answer
Hi Mohammad,
Interesting question, but I think that health information seeking will have little direct short- to medium-term impact on health literacy. I think the converse is more likely: health literacy impacts health information seeking. To what extent is the question, and it leads us to the chasm of what is being done to address poor health literacy across the globe.
  • asked a question related to Science 2.0 and Open Access
Question
8 answers
I would like to get an overview which datasets the community uses when working with altmetrics and whether they are publicly available. Also, are there any "standard" datasets? Do you think such datasets would drive research further?
Relevant answer
Answer
Dear Elisabeth,
If you want to do research on altmetrics, or social media metrics (I prefer this term, because we already know that social media counts are no alternative to citations), Altmetric.com has been quite generous with providing their data for that. Just contact Euan Adie and his team to let them know what you have planned and ask!
Elsevier is also offering data but you would have to submit a proposal before they can tell you what they can/want to deliver: http://emdp.elsevier.com/
Then there is also the PLOS data, which is pretty detailed but of course limited to PLOS journals. Currently CrossRef is working on a metrics dataset for papers with DOIs.
If you are interested in research results, have a look at the PLOS ONE altmetrics collection, the altmetrics workshops, the recently launched 1:am and other relevant papers on arXiv, in JASIST.
My colleagues at the University of Montreal and Indiana University and I also recently received a grant from the Sloan Foundation "to support greater understanding of social media in scholarly communication and the actual meaning of various altmetrics", so maybe you are also interested in what we are doing: crc.ebsi.umontreal.ca/sloan
If you are doing research on social media in scholarly communication, you might want to consider submitting to this forthcoming special issue of the Aslib Journal of Information Management: http://www.emeraldgrouppublishing.com/products/journals/call_for_papers.htm?id=5754
  • asked a question related to Science 2.0 and Open Access
Question
10 answers
-
Relevant answer
Answer
Hi
You'll find an answer in the attached file.
  • asked a question related to Science 2.0 and Open Access
Question
8 answers
I need to submit the impact factor of this journal along with my published article.
Relevant answer
Answer
I usually use this website: http://www.bioxbio.com/if/
  • asked a question related to Science 2.0 and Open Access
Question
1 answer
Is there anyone who can give me the latest Journal Citation Reports (JCR) from Thomson ISI?
Relevant answer
Answer
Hi there,
The latest JCR was finally released by Thomson Reuters after a one-month delay.
  • asked a question related to Science 2.0 and Open Access
Question
14 answers
See abstract of the paper. Obviously this is posted in response to the criteria stated in ResearchGate's Open Review introduction. I would be pleased to see ResearchGate take over http://oPeer.org and do it right, but this notion of evaluating strictly on the basis of "reproducibility" is as silly as counting Facebook "likes". (IMNSHO)
Relevant answer
Answer
@Balázs: I think we are both running out of new things to say here. Sort of a "quibble-off".... I should just let you have the last word, but I can't help one last quibble: it seems to me that an individual theorist can never use your version of The Scientific Method, because the required experiments or fresh observations may not be possible for that one person to perform; in that case a theory becomes "scientific" only after its predictions are thoroughly tested by others, sometimes after the original theorist is deceased. This seems awfully retroactive for a "Method".
I regard my original Question as "answered"; RG should revise its criteria in the Open Review tool! Meanwhile we have raised a number of more philosophically interesting questions; anyone feel like enshrining them in new Question threads?
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
Scan the questions sections for #OpenScience, #Science20, #OpenAccess and you will notice that the vast majority by far spend significant Q&A on what is wrong with Open Science, what does not work, and what could (or has) gone horribly wrong. Makes you wonder how high the h-index on those questions gets, and how representative that is of reality.
Although exposing fraud, plagiarism, bad publisher service, and poor quality is essential to prevent others from falling victim, it is hardly as inspiring or motivating. Actually, it can give the wrong impression to the novice, and be dangerous.
So let's balance the discussion and focus on what works, by applying the scientific method to gauge the positive side of #OpenScience (if any). There are many shining examples of how #OpenScience can boost your career profile on the way to that tenure.
OS practitioners, we know you are out there, so don't be shy: tell us how you integrate OS into your daily workflow, and in what measurable ways #OpenScience contributes to your profile and impact.
Relevant answer
Answer
I agree. We need easier ways to help researchers share data and publications and get credit for their work. Impactstory could be helpful in this respect.
'Impactstory is an open-source, web-based tool that helps researchers explore and share the diverse impacts of all their research products—from traditional ones like journal articles, to emerging products like blog posts, datasets, and software. By helping researchers tell data-driven stories about their impacts, we're helping to build a new scholarly reward system that values and encourages web-native scholarship.'
  • asked a question related to Science 2.0 and Open Access
Question
11 answers
With the high cost of journals and electronic databases, many people cannot afford to purchase them. Open Educational Resources are now an alternative; do you trust these resources?
Relevant answer
Answer
My dear @Mardene, please read about this experience and initiative; it was held in Belgrade, Serbia! It is about promoting the adoption of open educational practice.
  • asked a question related to Science 2.0 and Open Access
Question
29 answers
As a researcher and therefore a reader of papers, I often encounter poorly written papers, full of grammatical and spelling errors. Of course, one cannot fully blame the authors: many researchers are not native English speakers (myself included), and some only speak English at a very basic level, or not at all. However, some publication platforms (journals, magazines, transactions and so on) have editors listed: a group of people that tends to change per issue or a set of issues. My impression was that editors edit the papers for publication, mainly focusing on formatting and language (with feedback from the authors, of course).
However, reading papers from a Lecture Notes in Computer Science issue, I'm quite certain that language isn't part of what editors do. Some articles, especially those that come from Workshop and Conference proceedings, often lack editing, sometimes to the extent that the paper is (for me, at least) no longer understandable. However, I've even noticed this problem for Journals. In addition, my reading experience seems to indicate that the problem is increasing, which would be a rather troubling fact that could be addressed by editors or by reviewers. I expected this to be addressed by reviewers; the quality of text is important to the correct communication of information. On the other hand, it may be unfair to non-native speakers to consider language in the review process.
Thus, my question actually consists of two parts:
1. What is the 'job description' of an editor?
2. How can the research community as a whole (authors, reviewers, PCs, readers, editors, ...) improve the situation?
Relevant answer
Answer
An editor's main jobs are to follow a strict peer-review process through learned, dedicated expert reviewers; to select qualified reviewers for this process; to ask reviewers to suggest good changes, using tracked edits, and to pass on valuable suggestions and comments for improving a paper; to reject low-quality papers; to appoint an English proofreader; to have the support of a cross-reference manager and a plagiarism checker; and finally to accept only quality papers. With the help of the editorial board members and all available links and resources, editors should attract quality papers to the journal and maintain the regularity of the issues being published. Not every paper can be edited by the editors themselves, but where needed it must be done.
  • asked a question related to Science 2.0 and Open Access
Question
33 answers
Are any researchers who are undertaking systematic reviews also adding a search of Google Scholar? And if so, what numerical limit are you putting on the results that you inspect? In some earlier trials, I found that Scholar returned on the order of at least 10x more results than the more usual sources (like Medline), which I feel would artificially distort the number of excluded articles in the flow diagram of articles to be included.
Relevant answer
Answer
From review of this issue I would conclude that:
(1) neither database (Google Scholar (GS), PUBMED) is sufficient for optimal discovery of all highly relevant content in a topical medical search;
(2) that there are several myths and misunderstanding concerning their true differences and divergent focus and specialization;
(3) that there is data to support the contention that Google Scholar can indeed be used effectively for systematic review provided one understands its unique modes of operation and execution;
(4) that one is better served seeing GS and PUBMED as complementary rather than exclusive of one another; and that
(5) an optimal search process would involve multiple general search databases coupled with specialized collections, with searches being executed best in a highly articulated form (in GS for instance, using scope qualifiers in Boolean expressions). Below I provide the basis for these conclusions:
To begin with, the two search databases, Google Scholar and PUBMED, reflect different relevancy algorithms: PubMed uses algorithms based on MeSH terms, with the most recent articles reported at the top of the list (which likely have not had adequate time to be appropriately cited), while in stark contrast, Google Scholar's proprietary algorithms (first released in 2004 in beta) have been found to favor the number of citations as an important criterion in the initial list of articles, with date of publication not an important criterion [1,2].
GOING HEAD-TO-HEAD: GOOGLE SCHOLAR versus PUBMED
Several studies [3,4] suggest that Google Scholar searches compare favorably with PubMed searches but have both advantages and disadvantages. Another recent study [5], building on these foundations, focused on content relevance and article quality and suggests that the Google Scholar search engine retrieves more relevant, higher quality articles. And in a comparative study as to locating primary literature to answer drug-related questions, no significant differences were identified in the number of target primary literature articles located between the Google Scholar versus PUBMED databases [6]. In addition, as to a single focused query (risk factors for sarcoma) Google Scholar resulted in a higher sensitivity (proportion of relevant articles, meeting the search criteria), compared to PubMed which resulted in a higher specificity (proportion of lower quality articles not meeting the criteria, that are not retrieved) [7].
Similarly, one of the most recent and comprehensive robust studies, this one from the University of Rouen [8], examined explicitly the core question of "Is the coverage of Google Scholar enough to be used alone for systematic reviews", performing a study to assess the coverage of GS specifically for the studies included in systematic reviews, and to evaluate if GS was sensitive enough to be used alone for systematic reviews, in order ultimately to assess the percentage of studies which could have been identified by searching only GS; there were 738 original studies included in a specially constructed gold standard database. The results: GS retrieved all 738 studies (100% hits) mined from 29 systematic reviews, allowing the authors to conclude that "The coverage of GS for the studies included in the systematic reviews is 100%. If the authors of the 29 systematic reviews had used only GS, no reference would have been missed. With some improvement in the research options, to increase its precision, GS could become the leading bibliographic database in medicine and could be used alone for systematic reviews".
What this entails is that although GS does not cover all the medical literature, its coverage of the studies of sufficient quality or relevance to be included in a systematic review was complete, so that if the authors of these 29 systematic reviews had relied only on GS, they would have obtained the very same results. In contrast, it has been shown that the recall ratios of Medline for RCTs range only between 35% and 56% [9,10]. This is in essential agreement with still another recent study [5] in which PubMed and Google Scholar searches were compared by evaluating the first 20 articles recovered for four clinical questions for relevance and quality: GS provided more relevant results than PubMed (although the difference was not significant), serving as another reminder that we should not overestimate the precision of PubMed in real life [8].
LOOKING FORWARD
We await further enhancements to GS to provide reliable advanced search functions, a controlled vocabulary, and improved scope of coverage and currency, but even in its latest instantiation GS performs at a respectable level of recall and precision, and can be enhanced with judicious Boolean expressions and some undocumented qualifiers. All told, as the Rouen study concludes: "the coverage of GS is much higher than previously thought for high quality studies". (And note that other comparisons have found GS more than credible; in a comparison with Web of Science/WoS [11], the study authors concluded that "since its inception, GS has shown substantial expansion, and that the majority of recent works indexed in WoS are now also retrievable via GS".) And I would add one further caution: despite the correct claim that many advanced search features are absent from GS but present in PUBMED, this has less relevance than one might believe: only 7% of respondents used these features in their searches in the Canadian study [12], only 37% used controlled vocabularies, and only 20% used filters such as the Clinical Queries feature in PubMed [13,14]. Therefore, in the real world rather than the theoretical domain, the two search technologies are less far apart than the advanced features suggest, when we look at actual usage patterns.
But the debate will certainly continue for a while, with divergent opinions [15,16]. It is increasingly clear from a critical review of the data to date, however, that the two databases should be considered complementary rather than mutually exclusive, each with unique advantages and tradeoffs. Noting that recent evidence suggests Google Scholar may have closed the gap with PUBMED, and that it now often leads in searches (one family of journals reports that 60% of its traffic comes from Google Scholar, ahead of PUBMED and other traditional medical databases), University of Utah researchers [17] assessed the efficiency and completeness of searching for known moderate- and high-quality RCTs in PubMed versus Google Scholar. They found that each database consistently identified one of the two highest quality studies, but neither identified both; yet the difference in search time was nearly three-fold (63 minutes in GS versus 194 minutes in PUBMED for experienced researchers, without the latter providing any superior results). This again reflects what I have called the INCOMPLETENESS THEOREM OF MEDICAL SEARCH: no single search is sufficient to identify all relevant quality studies. This is cross-confirmed in still another recent study in which Canadian researchers [12] evaluated the recall (proportion of relevant articles found) and precision (proportion of retrieved articles that are relevant) of searches performed in PubMed and Google Scholar, with the primary studies included in the systematic reviews serving as the reference standard for relevant articles. For quick clinical searches, compared with PubMed, the average search in Google Scholar retrieved twice as many relevant articles (PubMed: 11%; Google Scholar: 22%), with precision being similar in both databases (PubMed: 6%; Google Scholar: 8%).
And note that it would be tempting but erroneous to attribute the two-fold greater retrieval to differences in content coverage, since 78% of the tested articles were available in BOTH databases.
These and other studies assessing different medical databases have demonstrated that no single search engine provides all the related articles; fully capturing the complete body of available literature on a subject requires searches over multiple databases, depending on the topic. Thus, a much more comprehensive search would include cross-spectrum searching [18]; as an example, I note that I myself use an extensive collection of approximately 18 databases and tools, including ones for specialized content (see "METHODOLOGY OF THE REVIEW" below).
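The recall and precision figures quoted above can be made concrete with a small sketch. This is an illustration only: the function name and the article IDs are hypothetical, not data from the cited studies.

```python
def recall_precision(retrieved, relevant):
    """Recall = fraction of the relevant articles that were retrieved;
    precision = fraction of the retrieved articles that are relevant."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant  # relevant articles actually found
    recall = len(hits) / len(relevant) if relevant else 0.0
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    return recall, precision

# Hypothetical search: 4 relevant articles exist, our search returns 4
# articles of which 2 are relevant -> recall 0.5, precision 0.5.
r, p = recall_precision(retrieved=["a", "b", "x", "y"],
                        relevant=["a", "b", "c", "d"])
```

Computed this way over a gold-standard set of included studies (as in the Canadian study [12]), a database can double recall without changing precision, which is exactly the pattern reported above.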
As to peer review, often claimed to be a major factor distinguishing Google Scholar (unrestricted) from PUBMED: in fact, despite the widely held but erroneous belief that PubMed considers only peer-reviewed literature, its website explicitly states that this is not the case: “Most journals in PubMed are peer-reviewed or refereed. Non-editorial journal-staff review original articles before the articles are accepted for publication. Criteria for peer review and the qualifications of peers or referees vary among publishers. We have no list of peer-reviewed/refereed journals in PubMed; and you cannot limit your search to peer-reviewed journals using PubMed” [http://www.nlm.nih.gov/services/peerrev.html] [19].
USING ARTICULATED/SMART SEARCHES
Finally, it pays to learn how to execute articulated ("smart") searches in GS: thus, in answer to a question in another topic concerning searching for all systematic reviews and meta-analyses concerning HRV (heart rate variability), I advised the Google Scholar smart search (besides a MeSH-enriched PUBMED search):
insubject:"heart rate variability" intitle:("systematic review" | meta-analysis)
or the somewhat more permissive relaxed smart search:
insubject:"heart rate variability" intext:("review" | meta-analysis)
leveraging the power of the scope qualifiers "insubject", "intitle" and "intext" when coupled with appropriate Boolean operators. It also pays to remember that GS is an "opportunistic" search engine: it will try to data-mine any resources that could be of relevance rather than hewing to the narrower constraints of a formal PUBMED search. This often provides riches not otherwise easily uncovered, so its claimed lesser precision is not necessarily a disadvantage: some of the discovered resources (like dissertations, commissioned monographs, peer-reviewed CMEs, etc.) may themselves, as I have often found, contain bibliographical references to invaluable materials not located through PUBMED, which could greatly enrich the quality of any paper.
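When running (or logging) many such smart searches, it can help to script the query construction. A minimal sketch using only the standard library; the function name `scholar_query_url` is mine, the endpoint is Google Scholar's standard /scholar URL, and the qualifier syntax is exactly the one shown above:

```python
from urllib.parse import urlencode

def scholar_query_url(subject, title_terms):
    """Compose a scope-qualified query string (insubject:, intitle:,
    Boolean OR via '|') and URL-encode it for Google Scholar."""
    query = 'insubject:"{}" intitle:({})'.format(
        subject, " | ".join(title_terms))
    return "https://scholar.google.com/scholar?" + urlencode({"q": query})

# The HRV example from above; the inner quotes keep
# "systematic review" as an exact phrase.
url = scholar_query_url("heart rate variability",
                        ['"systematic review"', "meta-analysis"])
```

`urlencode` handles the percent-escaping of quotes, parentheses and the pipe character, so the composed query survives the trip through the URL intact.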
METHODOLOGY OF THE REVIEW
A search of the PUBMED, Cochrane Library / Cochrane Register of Controlled Trials, MEDLINE/MedlinePlus, EMBASE, AMED (Allied and Complementary Medicine Database), CINAHL (Cumulative Index to Nursing and Allied Health Literature), PsycINFO, ISI Web of Science (WoS), BIOSIS, LILACS (Latin American and Caribbean Health Sciences Literature), ASSIA (Applied Social Sciences Index and Abstracts), SCEH (NHS Evidence Specialist Collection for Ethnicity and Health), and scope-qualified Boolean searches submitted to Google Scholar and SLIM, was conducted without language or date restrictions, and updated to remain current as of the date of publication, with systematic reviews and meta-analyses extracted separately. The search was expanded in parallel to include just-in-time (JIT) medical feed sources as returned from Terkko (provided by the National Library of Health Sciences - Terkko at the University of Helsinki). Unpublished studies were located via contextual search, and relevant dissertations were located via NDLTD (Networked Digital Library of Theses and Dissertations), OpenThesis or ProQuest. Sources in languages foreign to this reviewer were translated by language translation software.
REFERENCES
1. Beel, J. & Gipp, B. Google Scholar's ranking algorithm: the impact of citation counts (an empirical study). Proceedings of the 3rd International Conference on Research Challenges in Information Science 2009a, 439–446.
2. Beel, J. & Gipp, B. Google Scholar's Ranking Algorithm: The impact of articles' age (an empirical study). Proceedings of the 6th International Conference on Information Technology: New Generations 2009b, 160–164.
3. Shultz, M. Comparing test searches in PubMed and Google Scholar. J Med Libr Assoc 2007, 95, 442–445.
4. Anders, M. E. & Evans, D. P. Comparison of PubMed and Google Scholar literature searches. Respiratory Care 2010, 55, 578–583.
5. Nourbakhsh E, Nugent R, Wang H, Cevik C, Nugent K. Medical literature searches: a comparison of PubMed and Google Scholar. Health Info Libr J 2012; 29(3):214-22.
6. Freeman MK, Lauderdale SA, Kendrach MG, Woolley TW. Google Scholar versus PubMed in locating primary literature to answer drug-related questions. Ann Pharmacother 2009; 43(3):478-84.
7. Mastrangelo, G., Fadda, E., Rossi, C., Zamprogno, E., Buja, A. & Cegolon, L. Literature search on risk factors for sarcoma: PubMed and Google Scholar may be complementary sources. BMC Res Notes 2010, 3, 131–134.
8. Gehanno JF, Rollin L, Darmoni S. Is the coverage of Google Scholar enough to be used alone for systematic reviews. BMC Med Inform Decis Mak 2013; 13:7.
9. Türp JC, Schulte J, Antes G: Nearly half of dental randomized controlled trials published in German are not included in Medline. Eur J Oral Sci 2002, 110:405-411.
10. Hopewell S, Clarke M, Lusher A, Lefebvre C, Westby M: A comparison of hand searching versus MEDLINE searching to identify reports of randomized controlled trials. Stat Med 2002, 21:1625-1634.
11. de Winter JCF, Zadpoor AA, Dodou D. The expansion of Google Scholar versus Web of Science: a longitudinal study. Scientometrics 2014; 98(2): 1547-1565.
12. Shariff SZ, Bejaimal SA, Sontrop JM, et al. Retrieving clinical evidence: a comparison of PubMed and Google Scholar for quick clinical searches. J Med Internet Res 2013; 15(8):e164.
13. Shariff SZ, Bejaimal SA, Sontrop JM, Iansavichus AV, Weir MA, Haynes RB, et al. Searching for medical information online: a survey of Canadian nephrologists. J Nephrol 2011;24(6):723-732.
14. Shariff SZ, Sontrop JM, Haynes RB, Iansavichus AV, McKibbon KA, Wilczynski NL, et al. Impact of PubMed search filters on the retrieval of evidence by physicians. CMAJ 2012 Feb 21;184(3):E184-E190.
15. Bramer WM, Giustini D, Kramer BM, Anderson P. The comparative recall of Google Scholar versus PubMed in identical searches for biomedical systematic reviews: a review of searches used in systematic reviews. Syst Rev 2013; 2:115.
16. Boeker M, Vach W, Motschall E. Google Scholar as replacement for systematic literature searches: good relative recall and precision are not enough. BMC Med Res Methodol 2013; 13:131.
17. Thiese M, Effiong A, Passey D, Ott U, Hegmann K. Pubmed vs. Google Scholar: A Database Arms Race? BMJ Qual Saf 2013;22:A33.
18. Zheng B, Zheng W, Zhu Y, Guo C, Wu W, Chen C. Are PubMed alone and English literature only enough for a meta-analysis? Ann Oncol 2013; 24(4):1130.
19. Kejariwal D, Mahawar KK. Is Your Journal Indexed in PubMed? Relevance of PubMed in Biomedical Scientific Literature Today. WebmedCentral MISCELLANEOUS 2012;3(3):WMC003159.
  • asked a question related to Science 2.0 and Open Access
Question
8 answers
I cannot find it in the official lists of Thomson Reuters or similar databases and the impact factor listed on their webpage is based on citations of their articles but an unofficial one.
Relevant answer
Answer
Hi,
the journal is published by the "OMICS Publishing Group". The publisher is listed on Beall's List as a predatory publisher. Most of the group's journals (if not all) are not indexed in PubMed Central due to questionable peer-review practices and other issues. I would think twice about publishing in these journals and spending money on something only a few people are going to see.
  • asked a question related to Science 2.0 and Open Access
Question
3 answers
I'm looking for a simple and easy to use tool or software to help collect, organize, cite, and share research sources.
Relevant answer
Answer
I also vote for Mendeley; it is very good. I maintain shared literature and citations with more than 10 different networks of scientists... a breeze.
  • asked a question related to Science 2.0 and Open Access
Question
18 answers
Some journals (like those published by Elsevier) have two types of policies. The first is making your article subscription-based (i.e., one has to pay a fee for access); the second is paying an open access publishing fee (around USD 2500-3500), after which anyone can freely download the manuscript from the journal website.
My question is: Do you think that paying the Open Access Publishing Fee enhances the chance of manuscript acceptance rather than making it subscribed (not open access)?
Relevant answer
Answer
I am not well placed to answer that question and I do not even know if anyone can. These journals have made it possible to publish open access so that they can accept studies whose grants require publications to be publicly available. If they start publishing too many open-access articles, subscribing to the journal will become less attractive and they could start losing a part of the market. The biggest threat for them is losing public library subscriptions. They therefore need to provide a number of quality articles that require a subscription to access. Therefore, if an article is of interest and is methodologically sound, I would say it has more chance of being rapidly put into print.
I would advise targeting journals for their aims and scope, their audience, and the interest editors have shown in a topic. If the research and the article are good, they will accept it whether you choose open access or not.
  • asked a question related to Science 2.0 and Open Access
Question
10 answers
I would like to provide copies of papers I have published, but I do not wish to violate the rights of the journal or the copyright laws. I would welcome the experience others have had in answering this question.
Relevant answer
Answer
I very much appreciate the link to SHERPA/RoMEO. It provides exactly the information I was looking for. Many thanks to Jens Peter Andersen. (And thanks to Sharon for asking the exact question I had.)
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
see above
Relevant answer
Answer
which deals with _cost_ allocation. Here Amar is apparently looking for the formula or algorithm that calculated B from A and C.
B=Blocking probability; A=Amount of traffic to be carried; C=Number of Circuits (resources) that must be allocated.
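If what is being asked for is the classical Erlang B loss formula from teletraffic engineering — which matches the B/A/C variables described above — the blocking probability can be computed with the standard recursion B(0) = 1, B(n) = A·B(n−1) / (n + A·B(n−1)), which avoids the factorials of the closed form that overflow quickly. A minimal sketch (the function name is my own):

```python
def erlang_b(traffic: float, circuits: int) -> float:
    """Blocking probability B for offered traffic A (in Erlangs)
    carried on C circuits, via the numerically stable Erlang B
    recursion rather than the direct (A**C / C!) / sum(...) form."""
    b = 1.0  # B(0) = 1: with zero circuits every call is blocked
    for n in range(1, circuits + 1):
        b = traffic * b / (n + traffic * b)
    return b

# Example: 2 Erlangs offered to 2 circuits gives B ≈ 0.4,
# i.e. about 40% of call attempts are blocked.
print(erlang_b(2.0, 2))
```

In practice one typically inverts this numerically: fix a target blocking probability (say 1%) and increase C until `erlang_b(A, C)` drops below it, which yields the number of circuits to provision.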
  • asked a question related to Science 2.0 and Open Access
Question
16 answers
Are researchers appreciating a piece of the scientific work (paper, abstract, thesis etc) on the basis of its content or on the basis of the journal score, the institutions or the prestige of the authors?
Science, at least in some cases, is reaching an intellectual status higher than politics, religion and humanism. This is because it is based on rigorous thinking and evidence; authorities or social influences are, in theory, minimized.
How can science be more democratic and based on reason? What is our responsibility in this? Is, for instance, unconventional thinking of independent researchers or minorities penalized or stimulated? Do they have ways to publish their research in a fair peer-review system? What are these ways?
Relevant answer
Answer
I don't know about others, but when I do a literature search, it's based on keywords, and I'm paying a lot more attention to the title and abstract than I am to the journal it's published in, at least in the first go-round. Usually, if the title and abstract sound interesting, I'll do a quick read-through, and only then pay attention to the source. If the source isn't obviously wonky (an unknown journal, or one with a reputation for predatory publishing, etc.), I look around for other work by the paper's authors, so I can see if they've got a track record on the subject, and I look for citations to the paper. This works pretty well for me.
Your questions about how to make science more "democratic and based on reason" seems to be something else entirely. There's far too much that's going unstated in the second paragraph of your question, and it's too broad. I think that in the U.S., the status of science has been diminishing, especially since the circles of unreason (and by this, I mean right wing politicians and their followers, who are steeped in anti-intellectualism, and who can't tell the difference between expertise and "bias") wield unreasonable power, and are doing their best to drive us back into some weird fundamentalist Dark Age. Maybe 30 years ago (definitely pre-Reagan) we could talk about idolizing and elevating science beyond the status it deserved, but not anymore. Today, scientists struggle to make a largely science-illiterate public understand the most basic things about what we are doing. Not that the social sciences or humanities are doing much better -- the tide of Willful Ignorance rose over them first. But science was not a victor in that struggle. It's just the latest loser.
If you want answers to the stack of questions in your second paragraph, I suggest you sort them out and ask each one separately, with some details about the kinds of answers you want.
  • asked a question related to Science 2.0 and Open Access
Question
7 answers
Here is a presentation slides to a 1/2 day seminar I will be giving at Universiti Sains Malaysia (USM) on the 28th June, 2013. Your thoughts, experiences and ideas on this topic would be highly appreciated. Still in ultra learning mode :)
Relevant answer
Answer
Social media are useful to research in many ways, such as concept development, research topic development, research visibility, and science communication to meet the specific needs of stakeholders in research. A wiki is good for concept development and research topic generation within a working group. Web 2.0 tools are good for science communication, picture communication, packaging of research results, and linkages among scientists around the globe. However, skill, knowledge, and a positive attitude towards new ICTs are needed to apply and enjoy them.
  • asked a question related to Science 2.0 and Open Access
Question
31 answers
What should be a new or improved peer review model, do we need it, and why?
There already is a number of proposed modifications to the existing peer review model, e.g.
2. giving the reviewers bonus points which are later required for article submission, http://aclinks.wordpress.com/2012/02/17/more-on-peer-review-2-0/
3. creating a channel (say, a secure form at the journal web site) through which the reviewers could anonymously contact the authors (and get response from them) in order to clarify something or check some subtle details not included in the paper (proposed by Igor Belegradek at this discussion: http://mathoverflow.net/questions/50947/on-referee-author-communications )
What do you think of these modifications? Which modifications to the peer review system would *you* suggest?
Thanks for your input.
Relevant answer
Answer
What is open peer review? Peer review did not exist until not so long ago, and publications were of better quality, without readers having to wade through all the "not even wrong" papers. Science has become more administrative than scientific, and scientists act more to preserve their reputation than to be creative.
This question has something out of place, since the format of publications is bound to change. The constraint of paper no longer applies; there is no reason to keep encountering those series of articles by the same author, in different journals, differing only by some further development in the same vein. There could be "live publications" in the manner of wikis. The HTML format, initially developed for scientific publication, allows regrouping materials from different sources into the same document — for example, a figure extracted from another article elsewhere on the Web, or even from another author, into a new text. Articles need not be self-contained if they use hyperlinks.
We then have a choice: keep peer review, with an old-fashioned and inefficient mode of publication, even if "n.0", or move forward, drawing on collective intelligence and trusting the ability of peers to judge a contribution for and by themselves. In any case, there will be a lot of articles, possibly of high quality and very interesting, that will be published while avoiding any form of peer review. Science is not a game in which points delivered by some authority have to be earned.
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
I like the idea of a public space for discussing and commenting on journal articles. Anyone here on ResearchGate using it? What's your experience been?
Relevant answer
Answer
Many papers have zero comments, and those with a nonzero number of comments are mostly in the field of life sciences.
  • asked a question related to Science 2.0 and Open Access
Question
7 answers
.
Relevant answer
Answer
In physics, it has long been a tradition that papers uploaded to the arXiv preprint server are citable, which is how physicists work around long journal peer-review and publication times and can gather citations before the publication date. arXiv identifiers are more prevalent than DOIs within physics. Other fields are not as uniform (PubMed identifiers are quite popular within the biomedical community, but PubMed only lists articles post-publication).
This, however, means you have to later "merge" the citations when/if the article is actually published. For arXiv this is easy, as you can later fill in the DOI and journal reference. However, when you have a multitude of such repositories, and in particular institutional repositories with variable quality of availability and metadata entries, it gets much trickier.
Also there is the question of what is a "single unit of publication" - if I upload an 8 page preprint that I submitted, got rejected, then added 5 more pages and got accepted somewhere else - is that all the same unit? Which one of them did you cite?
Can also non-peer-reviewed documents, reports and white papers be published and cited in the same way? I think you should be able to - but with a clear indication of their provenance.
  • asked a question related to Science 2.0 and Open Access
Question
7 answers
Is it difficult to categorize the roles an author had in a multi-authored paper — for example (analysis, experiments, review, etc.) — to be indicated alongside the authors' affiliations?
Relevant answer
Answer
In principle, it could be a good idea. However, as Tiia notes, it could also result in a very complicated and bureaucratic categorization system.
What is sure is that the current system is flawed, and the choice of the ordering of authors of a scientific paper changes too much across different disciplines.
What is worse, even if authors are listed alphabetically, still the first author is almost automatically perceived as the main contributor to the work, and the last author as the group leader.
With the inevitable consequences when applying for a grant or a tenure...
  • asked a question related to Science 2.0 and Open Access
Question
11 answers
Is it due to removal of amorphous carbon?
Relevant answer
Answer
Shivani, a decreased Id/Ig ratio means the defects in the CNTs have decreased upon nanoparticle attachment, which is itself interesting, since attachment should increase defects in CNTs. Does it mean that, in some way, these nanoparticles are actually attaching to the defect sites in the CNTs, so that there is no generation of the second-order phonon in Raman spectroscopy? I think so.
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
Dear all,
I have created a special interest group about Science 2.0 on Linkedin.
More than 150 people have already subscribed
If you want to join the group please go to:
Kindest regards
Andrea Gaggioli
Relevant answer
Answer
Now approaching 1000 members!
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
I am researching on the data sharing behaviour of academic researchers in a particular research field - with the aim of conducting an empirical study in 2014. Currently I am working on a systematic review of studies in order to build a decent framework for the interview. Though I have already checked the most prominent multidisciplinary databanks, I wanted to make sure that nothing important slips through my fingers. In advance - thank you very much for the feedback. Much appreciated.
Relevant answer
Answer
I'd also suggest 'Data Sharing by Scientists: Practices and Perceptions' at
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0021101 and possibly a series of forward, backward, and related citation searches on already retrieved relevant articles. Since we do not know the field in which you are looking, I am hesitant to provide any bibliographies that may be irrelevant to that particular group.
On another note, I would recommend this paper by Stiles and Boothroyd on "Ethical Use of Administrative Data for Research Purposes" located at http://www.sp2.upenn.edu/aisp_test/wp-content/uploads/2012/12/0033_12_SP2_Ethical_Admin_Data_001.pdf. It provides an overview of the ethical issues and considerations associated with the maintenance, integration, and use of administrative data for research purposes. Although it is meant as a guide for data custodians and data users/researchers, it raises the larger question of sharing data that may have adverse consequences for the individuals who participated in studies.
  • asked a question related to Science 2.0 and Open Access
Question
4 answers
I am a PhD scholar in Computer Science. Can anyone please suggest how to get funds to attend a research conference or to publish a research article in a reputed journal? Are there any Scopus-indexed journals in computer science which publish research articles for free? If so, what is the procedure for submitting a paper to those journals?
Relevant answer
Answer
Apply to the University Grants Commission, New Delhi; they have funds allotted for research projects and attending conferences! The same holds true for the Department of Science and Technology, New Delhi.
  • asked a question related to Science 2.0 and Open Access
Question
29 answers
More and more open access journals are emerging. Would you like to submit your paper in such journals? Additionally, open access is also provided as an option in a lot of conventional journals, which can surely increase the visibility of your paper. Thus, do you prefer this option? Finally, does your institute or project pay the costs?
Relevant answer
Answer
Open access allows greater readership of the paper, and more citations; however, most of the high-IF journals do not allow it, or allow it only with very high fees!
I think this issue will be solved in the future, and open access will be the only choice.
  • asked a question related to Science 2.0 and Open Access
Question
16 answers
Most journals charge a fee to authors for publishing their work Open Access. Some research suggests publishing OA increases citations, some says it does not. Regardless, this bill can be footed by the author him/herself, the University/Institute or the Grants agency. But what benefit does the funding agency get by paying $500-5000 worth of Open Access fee?
Relevant answer
Answer
Well, @Hansika, you commented further above in this thread "I don't understand how the model is moving slowly from charging consumers of the information to charging producers. It's like charging a farmer for producing wheat", but at the end of the day it's not like that. In academics we share ideas (at best, knowledge). If you share your idea with me, and I share mine with you - we both end up with two ideas. But if you give me a bunch of wheat (or an apple), and I give you mine, we both are left with only one item each.
So, it's much more rewarding to share ideas than apples (or wheat). Makes me wonder, is it that what makes OA so interesting to many? We don't distribute volumes of journals we have to pay for but ideas that are free for us to read.
  • asked a question related to Science 2.0 and Open Access
Question
15 answers
Suppose a scientific article signed by a single author. In another situation, suppose this same article, but signed by more than one author. In your opinion, does the number of authors influence the facility or difficulty for a scientific article be published and receive citations after its publication?
Relevant answer
Answer
Managing more than 6 coauthors is like trying to herd cats!
Never include names of non-contributors: it is dishonest and unethical.
  • asked a question related to Science 2.0 and Open Access
Question
2 answers
I do not use Firefox, but I've come over a note at Google+ reporting the launch of Mozilla Science Lab. All of you using it, please tell us about your experiences and practices working with it.
Relevant answer
Answer
Hi Michael,
The Mozilla Science Lab project has just launched, so it is obviously very early days from what I can see.
Reading the announcement, but also looking at the lead of Science Lab, I do not think that the Science Lab will be a product (such as Firefox or even a Firefox add-on), but more of a project (as is their Drumbeat initiative) aimed at fostering web innovation for science (again, my personal take here).
From the Techcrunch article linked in the references below:
"For now, the initiative wants to focus on *bringing digital literacy to science* through teaching students and academics digital literacy and about the tools that are already at their disposal, as well as through teaching researchers basic computing skills. Mostly, though, the project wants to help start a conversation around how the approaches that built the open web can also help shape the future of science."
This is obviously close to what folks at Software Carpentry are focusing on (their CEO is one of the lead of Science Lab): http://software-carpentry.org/4_0/python/index.html, but I am certainly more looking forward for the discussion between open-science advocates and web hackers, and see how innovative web technologies can change the way we publish and promote our science. Which I think is kind of what ResearchGate is trying to do :)
More to come soon it seems, you can also read more at the following pages:
Cheers,
P
  • asked a question related to Science 2.0 and Open Access
Question
38 answers
The role of social media in building scientific and medical knowledge has been changing rapidly, and seems to be increasingly reliable, usable, and accepted within the field. How do you use Wikipedia in your medical work and how do you recommend others (especially students) use this resource. Also, does anyone have experience writing medial articles for Wikipedia? If so, do you have any advice to offer on the process?
Relevant answer
Answer
You also have to take into account the risk that it might contain obsolete, controversial or unverified information, as with any service of the kind. While in research this is easily verifiable, there are many people who simply follow articles on Wikipedia to treat themselves, and do so without any critical thinking. So I would advise students to think critically and try to verify the information before even considering any real application of Wikipedia knowledge. An article on Wikipedia, however, might be a good starting point, provided one reads all the references and follows up on one's own investigation. As for writing — try to make every piece of information you put into your article well documented and easily verifiable, which is (or should be) no different than in a proper scientific paper.
  • asked a question related to Science 2.0 and Open Access
Question
16 answers
Our lab has an up-to-date website with a nice news section. However, it seems certain that most of our target audience will not visit the website regularly, so the readership of our news is very limited.
The university is active in LinkedIn, Facebook and Twitter, but naturally only shares the most important news. Therefore it seems that we should be active ourselves in sharing links to our news in social media. Some individual researchers do promote their own research online, but a concentrated lab-level effort would seem more effective.
We have considered setting up a LinkedIn group for our lab. This would be used for sharing links to our website news, new papers and job opportunities. A joint SlideShare account also seems worth the effort. Other obvious alternatives are Twitter and Facebook, and of course ResearchGate.
Do you think this would work, or would there be a better way?
As a motivation, increased publicity potentially brings new contacts, collaborations, projects, research, funding and so on.
-----
Edit: Prof. Ravi Sharma nicely clarified the motivation for participating in social media below:
1.Creating awareness,
2. Popularising/sharing of services/ products/land mark achievements/papers/articles/presentations etc.
3. Attracting desired human resources
4. Networking among participating scientist resulting in better circumstances for productive and collaborative research & development - institutional as well as individual levels.
Relevant answer
Answer
This is such a good question, and the answer should be — a strong one. Social media, whether via a lab Facebook page, a Twitter account, or a blog, is such a great way to communicate with the public at large. Of course the caveat is that you need to be able to accurately communicate your message in an interesting manner. Science needs to (further) embrace social media as a way for scientists to directly
communicate with the public. I try to share interesting research-related news and events via Twitter ( https://twitter.com/jgryall ) and I follow a number of great scientists (see Dr. Darren Saunders @whereisdaz, Dave Hughes @HughesDC_MCMP, and Dr Krystal @dr_krystal) who regularly contribute random thoughts, an interesting result, an interesting paper recently published, etc. For those who dislike the 140-character limit, a lab blog (or website) is a great way to engage the public as well as other scientists. On my blog ( http://www.jgryall.wordpress.com ) I discuss my research on the metabolic reprogramming of stem cells, but I also provide detailed methods for techniques that I use. Blogging has allowed me to interact with people from Peru to India to Saudi Arabia.
I think that we are really building towards a dramatic shift in the way we publish and disseminate our research results, and social media are likely to play a big role. I think that sites like ResearchGate are also likely to play an important role in Science2.0
  • asked a question related to Science 2.0 and Open Access
Question
19 answers
For our institute's publication strategy, we aim to identify core conferences and journals that deal with open science, science 2.0, open access, altmetrics and so on.
Since these topics touch upon quite a few disciplines and research areas, it is not that easy to gain a good overview. This is why I think it is an ideal question for expert crowd here.
Relevant answer
Answer
For the 'dark side' of open access (predatory publishers) see this nice blog: http://scholarlyoa.com/publishers/
  • asked a question related to Science 2.0 and Open Access
Question
12 answers
Particularly considering social networking and discussion forums - such as RG (for scientific matters), LinkedIn (for business matters) –, if we spend time on any of those, we basically have less time for face-to-face discussions, paper writing etc. Is it worth it? Is it legitimate to still also write articles? How much time should be spent on which kind of activity? And, ultimately, is this worsening work-life balance?
Relevant answer
Answer
Indeed, reactions to a publication tell you much more than information on 'most downloaded' or 'most cited' articles. The former may mean only that you selected a catchy title (and maybe abstract), while the latter does not distinguish whether the citation was part of a background summary, a critique, etc. In this sense, even without peer review, direct comments on papers are useful. Analysis of such information, as well as the context of the paper in citations, might provide a step in the right direction.
Still, much of this is in its infancy and it is unclear when (and if) we will move forward. I doubt that we will take one large step to a Web 3.0, but would expect many little ones... kids grow up in this environment and there won't be any chance to 'keep them away'. Ultimately, they will shape this whole setting in the next 10 to 20 years ;-)
  • asked a question related to Science 2.0 and Open Access
Question
87 answers
In contrast to most other fields, CS uses conference proceedings as the main outlet of its research results. This leads to a huge competitiveness in conference acceptance (10-15% are quite common for top conferences) and long papers (say 12 or even 14 pages double-column). And because the papers are already seen as a final product, the incentive to write a journal version seems to be low, especially when other researchers go on citing the conference version anyway.
I find that this special position has some negative impact on CS:
- there is only a single round of reviews, leaving little room to improve the paper (shepherding is not fixing this problem)
- good papers get rejected at good conferences (not a "hot topic," reviewer randomness)
- papers must be "perfect" at time of submission, large but fixable problems lead to rejection
- no consistent reviews, if you fix all problems a new set of reviewers will have different problems (or even want you to revert to its previous state), wasting your time and the reviewers'
- the evaluation of research quality across fields is skewed because the widely used impact factor is bad for CS and good for everyone else
- the editing of papers is done by the authors themselves in a short timeframe, resulting in a poor style and print quality
A good point:
- the time-to-print of journals is high (1-2 years), conferences have fixed deadlines and only about 6 months (and you can also tell that something is happening)
So what is your opinion on this? Should we change to the classical model of journals (maybe with faster journals), try to improve the proceedings, or switch to something else entirely?
Relevant answer
Answer
All the conferences and journals that I have published in have a review process and all are aiming to improve the process to ensure quality publications.
However, since I struggle to find funding for my research, I am increasingly looking at publishing materials for open review. Why not have a review/rating system similar to social networking likes? Take the emphasis off meeting standards for publication and allow the research community to review, rate and cite as appropriate. Under such a system the cream will rise to the top, and those who publish at every opportunity, regardless of the quality of the research, simply to obtain a research ranking will slowly drop out of the mix.
We have technologies that enable new publishing models. I wonder why we are not exploring these possibilities. We are not just publishers of research; we are also consumers of research, and as a consequence we should be willing to rate the articles that we consume for the benefit of others. I suspect that if we put such processes in place we might actually obtain better-quality research and publications.
The model that we use is based on outdated print publishing, and we should be looking at alternatives.
  • asked a question related to Science 2.0 and Open Access
Question
7 answers
A public budget is a budget of the government, operating at different levels. It is a document containing the government's policy towards the development of the economy, with emphasis on maximising social welfare. The budget also indicates which sections of the community benefit and which lose out. This shows the necessity of studying the public budget and its interpretation.
Relevant answer
Answer
I would even more broadly argue that analyzing politics and political economy in general is highly important because in most countries around 50% of GDP goes through the hands of collective institutions (government, etc.) and the remaining 50% are influenced by the rules set up by collective institutions.
  • asked a question related to Science 2.0 and Open Access
Question
19 answers
In the era of ubiquitous internet communication, it seems natural that knowledge (science), treated as common human heritage and accessible to all, might be the best tool for breaking down barriers and making global progress.
The costs of electronic publication are incomparably lower than those of traditional paper production. Meanwhile, only a small part of the renowned publishers have decided to offer open access to scientific publications; others, such as Springer, propose an "open choice" option after payment of a US$3,000 fee per publication. Most authors in the world are employed at state universities, where salaries are dramatically low, and they have no chance of paying such costs themselves.
Should governments cover these publishers' costs (if they are really necessary) to open access to knowledge for their citizens? Is there room for open-access publishers to compete with the monopolised market of high-ranked journals? Should governments initiate a discussion about regulations for common open access to science, instead of the ACTA law conspiracy?
Relevant answer
Answer
From a developing-country point of view, this would be a very wonderful idea. In my undergraduate training we had free access to some scientific journals through HINARI. This really helped us so much. However, most of the renowned services, such as JSTOR and ScienceDirect and many more, were restricted.
I am very sure most scientists in low- and middle-income countries are constrained by payments. Even if they could afford the money, credit cards, Mastercard, Visa, PayPal and all the other cash-transfer portals are unheard of in this part of the world. That is even before we consider access to reliable internet services. I know many people who only have internet for just a few minutes at a time, and only in internet cafés. Now imagine you see a wonderful and resourceful article online but unfortunately you can't access it. This is the dilemma we face.
Thus, your idea, Mamcarz, is a very good one.
I wish that, on top of all the aid that is sent this way, knowledge was included. It would be more like teaching a man how to fish rather than giving him fish.
  • asked a question related to Science 2.0 and Open Access
Question
5 answers
Most digital versions of published papers use PostScript or PDF as their document format, which helps ensure that the document looks as intended on various platforms and in print. One of the drawbacks is that the text is just a large array of glyphs to be placed on a page; semantics, such as text being a heading, a caption, text in a figure, or a reference, are lost.
This causes problems when you want to make automated use of a paper, e.g., to extract the references it contains, enable full-text search, or generate BibTeX entries. For example, for inclusion in the ACM Digital Library you have to provide the LaTeX source and BibTeX file to help them get the proper metadata in the first place. Google Scholar or RG seem to invest a lot of effort in finding out which papers are yours, who is cited by whom, etc.
There has been much research on topics like the Semantic Web; are there already solutions to this problem that could be used for research documents at scale? To improve the situation, why not introduce a new data format (or simply include this metadata in PDF) and make this information mandatory when publishing?
Relevant answer
Answer
There is no need to "invent" something new for this. PDF already supports metadata by default.
There is the basic PDF metadata, which is not really much but is good for a start.
And then there is also XMP, Adobe's metadata framework, which supports adding RDF data to all objects in a PDF. The downside is that you need to add all the metadata by hand in Acrobat or other (in most cases commercial) tools.
Tighter integration into the editing process would be the most helpful thing for the "world of metadata".
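To make the XMP idea concrete, here is a minimal sketch of the kind of packet involved: an RDF/XML fragment carrying a Dublin Core title, parsed back with Python's standard library. The title string and the `extract_title` helper are illustrative assumptions; in a real PDF the packet would be embedded in a /Metadata stream rather than held as a standalone string.

```python
import xml.etree.ElementTree as ET

# Minimal XMP packet with a Dublin Core title (hypothetical content).
# In a PDF file, this XML lives inside a /Metadata stream.
XMP = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
           xmlns:dc="http://purl.org/dc/elements/1.1/">
    <rdf:Description rdf:about="">
      <dc:title>
        <rdf:Alt>
          <rdf:li xml:lang="x-default">On Semantic Metadata in PDF</rdf:li>
        </rdf:Alt>
      </dc:title>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def extract_title(xmp: str) -> str:
    """Read the Dublin Core title back out of an XMP packet."""
    root = ET.fromstring(xmp)
    li = root.find(".//dc:title/rdf:Alt/rdf:li", NS)
    return li.text

print(extract_title(XMP))
```

The point of the round trip is that the metadata is machine-readable without any layout analysis, which is exactly what indexing services like the ACM Digital Library or Google Scholar would need.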
  • asked a question related to Science 2.0 and Open Access
Question
9 answers
This is becoming an important issue, as the spread of the means to interact (computation and the internet are good examples) and to obtain competence (MOOCs) makes the integration of amateurs with professionals on research topics practical and inevitable.
Relevant answer
Answer
I have been approached by TV producers to take part in a program. They offered me no money. It would have used my precious research time. It would not have advanced teaching of the topic, just filled air time, and made money for the producers. It would not have advanced my career as I was judged by my student output and papers. I did not need 'exposure'.
On the other hand, I have had e-mail requests out of the blue from overseas researchers beginning their research, and I have been happy to provide pointers and papers and advice for free. That, I consider, was why the university was paying me a salary.
  • asked a question related to Science 2.0 and Open Access
Question
30 answers
As a researcher, every so often one encounters publications that are not of the high quality one might expect from a 'good' publisher (such as the IEEE). Sometimes it is simply a low-quality workshop or conference, where peer review is not that high a bar compared to high-profile conferences or high-quality journals. We all know there is a difference in quality between each workshop, conference or journal. This is something we can deal with: lower-quality work is not necessarily wrong, but it is often less innovative, or it is ongoing research.
However, very rarely, one encounters publications that simply contain many errors. In this case, I'm referring to errors in the content rather than spelling and grammar (because spelling and grammar errors are virtually everywhere). Sometimes 'errors' are simply assumptions you do not agree with (which is acceptable most of the time), but in other cases 'errors' may refer to falsifiable claims (for example, claiming to protect integrity by using HMAC-MD4 -- MD4 has been known to be broken for many years, and the algorithm is not part of major communication security standards like TLS).
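As an aside on the HMAC-MD4 example above: the standard choice in TLS and elsewhere is HMAC with a SHA-2 hash. A minimal sketch using Python's standard library, with a hypothetical key and message chosen purely for illustration:

```python
import hashlib
import hmac

# MD4 is broken; HMAC-SHA256 is the widely standardised alternative.
key = b"shared-secret"             # hypothetical key, illustration only
message = b"payload to authenticate"

# The sender computes an authentication tag over the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key: bytes, message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time
    (compare_digest avoids timing side channels)."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, message, tag))      # valid message
print(verify(key, b"tampered", tag))  # modified message is rejected
```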
What do you do in such a situation? Do you ignore the paper, or do you cite it and explain why its solution is bad? However, citing increases its citation count, thereby increasing the legitimacy of the paper - a confusing dilemma, for me at least.
Secondly, do you think the world would benefit from a place where this type of (motivated!) criticism can be published? Something like arXiv, where authors can publish technical reports explaining the issues, with the option for the original authors to respond to the reports as desired? Or would this lead to a strongly negative spiral that only serves for authors to attack each other? Finally, could ResearchGate fulfill this position, using its significant userbase as a kick-start?
Relevant answer
Answer
Thank you for asking that question! I've been thinking about this as well for some time, but I've not come to any conclusions yet.
I do think that posting article-size responses on arXiv.org without going through peer review first is dangerous. Every once in a while, anybody has a really, really bad idea. It's good for everybody involved if such things are caught before being circulated!
I agree with you that "boring" papers are not necessarily bad. I prefer a solid work with boring outcome over an "innovative" piece full of errors!
I think your suggestion of a culture of commenting here on ResearchGate, like in blogs, would be nice for academic papers. Others could ask questions or voice concern easily and more "low-profile", and a reader can get a better image of what the community thinks of a work. I always enjoy reading the comments on a controversial journalistic online article. At the same time, the criticism would not boost the impact of the paper like traditional responses.
Related to this, I think that both authors and journals have an interest in posting "troll" papers, i.e. papers that everybody will have an opinion on and that contain subtle errors that will create a lively discussion between believers and critics. The infamous "arsenic bacteria" paper in Science < http://www.sciencemag.org/content/332/6034/1163.short > is in my opinion a perfect example, as is the 10^8 improvement of ionic conductivity < http://www.sciencemag.org/content/321/5889/676.short > (also Science...). I do not take Nature or Science seriously anymore, because they publish such material with obvious errors that should never have been printed. In my opinion, it is clearly either trolling or utter incompetence on the part of the authors, editors and reviewers. I'd like to have a downvote button for such nonsense!
  • asked a question related to Science 2.0 and Open Access
Question
9 answers
Recently, I've been reading a collection of papers, and I noticed that almost none of them include any error bars (no standard deviations, confidence intervals, or other indications of error). This is really surprising to me - I've always been taught that error bars are essential to the correct interpretation of results.
Is it bad practice to leave them out? If so, why are there so many conference and journal publications that contain graphs without them? Or should I not even bother including them in my publications? How does this affect the trust we can place in the research results?
For context:
The immediate context is a number of IEEE conference publications I was reading, which included simulation results as part of the evaluation. This was for work on VANETs, but I've noticed this practice across computer science and across publishers of all quality levels, both open- and closed-access.
I'm not sure whether this is also common in other fields (comments to that end are also welcome!).
Relevant answer
Answer
Error estimation requires a clear explanation of what kind of error is meant and whether there is a benchmark acceptable to the community you are communicating with. It can be difficult to explain what a model is being compared against, and how a reduced error (or improvement against a benchmark) really means your model works better as a result of the research. In climate analyses, for example, errors are often expressed as differences from the previous iteration of the model, not against reality. Errors could be expressed as differences from an accurate spatial observation (for example, atmospheric depth from observations) or against a time series (error in the ability of the model to capture temporal variability). Many model outputs do not have clear observational correlates that can be used as a benchmark, so 'errors' are entirely model-dependent, reducing their utility outside the field.
NASA has a project called the Carbon Monitoring System that seeks to incorporate environmental observations from satellites into climate models of land, ocean and atmosphere. In this project, the objective is to develop data products that might be useful to policy makers in various fields. The problem that immediately had to be addressed was how to express the errors in our models to each other (each discipline has different standards and terminology), and to communities outside of science. 'How good is good enough?' is the question we seek to address - does the model really provide the information needed to sufficient accuracy? This requires both an understanding of which errors matter in the model (for example, can you simulate seasons accurately?) and then conveying these errors simply and clearly. If the community you are trying to communicate with has no idea of their requirements (which errors are important), then coming up with useful measures of model error is very difficult.
Thus, to answer the question about error bars - it depends entirely on whom you are trying to communicate with and whether you've done the considerable work beforehand to determine what your benchmark is. If you know 'how good is good enough', then yes, absolutely, you need to show that your results are 'good enough' to be trusted, cited and emulated, or simply used to make a decision. But it is very hard to do right.
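For the simplest case raised in the question: when repeated simulation runs are available, a normal-approximation confidence interval around the mean gives the half-width to draw as an error bar. This is a minimal sketch with made-up measurements; for small samples a t-distribution would be more appropriate, and (as argued above) the right benchmark is domain-dependent.

```python
import math
import statistics

def confidence_interval(samples, level=0.95):
    """Return (mean, half_width) of a normal-approximation CI.

    half_width = z * standard error of the mean, where z is the
    normal quantile for the requested confidence level.
    """
    n = len(samples)
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / math.sqrt(n)
    z = statistics.NormalDist().inv_cdf(0.5 + level / 2)
    return mean, z * sem

# Hypothetical throughput measurements from repeated simulation runs.
runs = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]
mean, half = confidence_interval(runs)
print(f"{mean:.2f} ± {half:.2f}")  # the error bar around the plotted point
```

In a plotting library the pair would then be passed as the point estimate and the symmetric error bar (e.g. matplotlib's `errorbar(x, y, yerr=...)`).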
  • asked a question related to Science 2.0 and Open Access
Question
3 answers
Websites such as ResearchGate, Academia.edu, Mendeley Web, and Zotero Web have a lot of features in common: 1) they are social tools targeting scholars; 2) they have "real names" policies (which make it easier for academics to follow others, connect with others, and self-promote); 3) they enable sharing of publications, citations, and reading lists; and 4) they encourage self-archiving and open access.
My advisors and I usually use "Academic Social Networking Sites" (ASNS) to refer to these services in our work; we are from the field of Library and Information Science. However, I have to admit that ASNS is sometimes not the best name, because the term seems unclear. Moreover, ASNS doesn't capture some research-oriented features that we can find on these websites. For example, Mendeley and Zotero are more likely to be identified as "social reference managers", and RG seems to function as a "scholars' social Q&A". So I am very curious whether there are better terms for describing them altogether.
Here are some other terms that researchers have used to describe these services:
-Networked Participatory Scholarship (NPS) that I borrowed from the field of Education
(Veletsianos, G. and Kimmons, R. 2011. Networked Participatory Scholarship: Emergent techno-cultural pressures toward open and digital scholarship in online networks. Computers & Education. 58, 2 (2011), 766–774.)
-Research Networking (RN) platforms, from the biomedical domain
(Schleyer, T., Spallek, H., Butler, B.S., Subramanian, S., Weiss, D., Poythress, M.L., Rattanathikun, P. and Mueller, G. 2008. Facebook for Scientists: Requirements and Services for Optimizing How Scientific Collaborations Are Established. Journal of Medical Internet Research. 10, 3 (2008), e24.)
Relevant answer
Answer
That's an interesting question. Such services could be used to provide a social networking environment for researchers ("Facebook for researchers"); a repository service for one's research content; or a shared bookmarking service about others' research. In a paper which asked "Can LinkedIn and Academia.edu Enhance Access to Open Repositories?" we tended to use the term "researcher profiling services", since a researcher's profile may include combinations of sharing their papers, the papers they are reading, and their interactions with their peers.
Brian Kelly, UK Web Focus, UKOLN, University of Bath
  • asked a question related to Science 2.0 and Open Access
Question
12 answers
The peer review system is often unfair and biased. Very often, when I am asked to review a paper, I personally know the authors. Shall I abuse this to "get even"? Shall I be more strict because I actually value the author as a scientist, as a person, as a colleague? Remaining impartial is close to impossible. Also, since the author does not know who I am, I cannot have an open discussion, honestly pointing out my view of things, or how I understand the theory.
Sometimes, to circumvent these problems, I bluntly state my name in the review. A move which is usually suppressed by editors-in-chief. More often, however, I feel I have to turn down the review.
Conversely, one often finds reviewers doing a really sloppy job: they cannot be criticised, so they review a paper in (what appears to be) a few minutes, criticise parts of the paper unjustly (having missed parts of the argument laid out in the paper), or are not familiar enough with the field to criticise justly. This is what one almost invariably observes in robotics conference reviews at, e.g., ICRA and IROS. It is the major reason why I do not like to publish there, and why such papers are often not (and indeed should not be) acknowledged as publications on a par with journal papers.
The often-mentioned double-blind approach seemingly solves one of the two issues---but only seemingly, since even there I can often, in about 50% of cases, guess who the author is.
A way out would be a double-open approach, in which reviewers are acknowledged and, better still, reviews or commentaries are published. The new, open-access journal series Frontiers uses the former approach, and it appears to work well.
An improvement on this approach would be to use a grading system, in which everyone---with names being named, of course---can comment on or grade papers online. This will not only lead to majority voting, but also to accreditation of accreditors. Of course, blind votes must be prevented.
Wikipedia lists such ventures under http://en.wikipedia.org/wiki/Open_Peer_Commentary, of which Behavioral and Brain Sciences, a journal initiated by (the legendary open-access promoter) Stevan Harnad, was a brilliant example (unfortunately, Cambridge University Press seems to have decided to close its traditional open access, and papers now cost a daring $45). The journal "Papers in Physics" uses a comparable approach, I think, but I don't know the journal (nor the field), so I cannot comment on it.
The suggestion would therefore be that a "journal" would consist of an open-access web platform where logins are given to accredited users (a step that requires some thinking). Each user can post an article or a commentary on an article, and a rating system along the lines of RG's silly score (silly, since at RG it is heavily based on discussion ratings and only very lightly on publications and impact factor (caveat!)) will lead to something like an impact factor for the paper. Can a person build a reputation on reviewing alone? I think not; the effort of writing a scientific article is much higher, while its impact (= influence on others) is much more profound, and should be honoured accordingly. Yet only publishing, without engaging in discussions, is not right either.
Advantages: fair reviews, but also a better understanding of the papers, since commentaries from peers are included. And papers gain much more impact, since those which are controversial or important will receive high grades. High grades are better earned in such a way than by being referred to from papers in obscure conferences!
Relevant answer
Answer