Science topic
Consensus - Science topic
Consensus is general agreement or collective opinion; the judgment arrived at by most of those concerned.
Questions related to Consensus
Writing up methods, especially for qualitative work, is important so that the reader can contextualise the data. However, when publishing multiple papers based on the same data/methods, is there a consensus on how to do this?
There will, of course, be some small differences given that different articles will have a different focus, but there are also many aspects of the methods which will be the same.
It feels wrong to copy and paste the methods with some small tweaks. We have also tried writing a briefer summary in the second paper of a series based on the same data, referencing the first paper where the methods are written out in full, but this also seems unsatisfactory.
Are there agreed norms/a consensus/guidance on how best to handle this?
Dear all.
We are working on a project whose main subject is to detect cyberattacks on Smart Inverters. Specifically, on two Smart Inverters in two separate PV microgrids.
Currently, an IoT device is used to measure the output data from the Smart Inverters. We plan to deploy a machine learning model trained offline to detect cyberattacks, adversarial attacks, or even FDIAs (False Data Injection Attacks).
However, we are currently stuck on where blockchain can fit into and contribute to this research structure without interfering with each component. From what we currently understand about blockchain technology, it is a data structure of blocks chained together that can be used as a distributed ledger. Within that category, it has a P2P network, a consensus mechanism, and smart contracts.
We have surveyed quite a lot of research articles, mostly review or survey papers; few are original research articles. Of those few, most focused on energy trading using blockchain; specifically, P2P networks and smart contracts were employed the most, followed by consensus mechanisms/algorithms. Some suggested using blockchain as a distributed ledger but did not specify how exactly it was implemented.
My apologies for posting such a long question, and thanks to all who read it word by word. We would appreciate any guidance, references, and/or implemented code examples that could help us push this project forward and contribute to the research field.
Thank you.
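One concrete role for blockchain in a setup like this is a tamper-evident ledger of inverter telemetry, so that a detected FDIA can be distinguished from after-the-fact manipulation of the stored measurements. Below is a minimal, hypothetical Python sketch of a hash-chained ledger (all record fields are invented for illustration); a real deployment would replicate this chain across nodes under a consensus protocol rather than keep it in one process:

import hashlib, json, time

def record_hash(record):
    # Deterministic SHA-256 of one telemetry record.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class TelemetryLedger:
    # Toy hash-chained ledger for smart-inverter readings.
    def __init__(self):
        self.chain = [{"prev": "0" * 64, "data": "genesis"}]

    def append(self, reading):
        self.chain.append({"prev": record_hash(self.chain[-1]),
                           "ts": time.time(), "data": reading})

    def verify(self):
        # Any retroactive tampering breaks a hash link downstream.
        return all(self.chain[i]["prev"] == record_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = TelemetryLedger()
ledger.append({"inverter": "PV1", "v_out": 398.2, "p_kw": 4.7})
ledger.append({"inverter": "PV2", "v_out": 401.5, "p_kw": 4.9})
print(ledger.verify())                    # True
ledger.chain[1]["data"]["v_out"] = 9999   # simulated tampering with stored data
print(ledger.verify())                    # False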
As there are differing opinions on muscle atrophy, what is the consensus on its use in tongue reconstruction?
I am looking to conduct a three-round Delphi study. The first round will consist of an open-ended question, which will shape Round 2, where participants will be asked to rate the answers from Round 1 with categorical answers (yes/no/unsure/not qualified). Round 3 will also ask questions on the suitability of the answers and their use in clinical settings, with an option to select multiple answers.
Is there a way to define consensus (e.g. 70%) through the platform, or is this to be done manually?
Thanks so much!
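If the platform cannot apply the threshold for you, it is straightforward to compute offline from the exported responses. A minimal Python sketch (the responses and the 70% rule are invented placeholders; adjust to your protocol, e.g. whether "unsure" and "not qualified" count in the denominator):

from collections import Counter

THRESHOLD = 0.70   # assumed consensus rule: >= 70% choose the same option

responses = {      # hypothetical Round 2 exports, one list per statement
    "statement_1": ["yes"] * 8 + ["no", "unsure"],
    "statement_2": ["yes", "no", "no", "unsure", "yes", "no",
                    "not qualified", "yes", "no", "no"],
}

for item, votes in responses.items():
    counted = Counter(v for v in votes if v != "not qualified")
    option, n = counted.most_common(1)[0]
    share = n / sum(counted.values())
    print(item, option, f"{share:.0%}",
          "consensus" if share >= THRESHOLD else "no consensus")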
I wish to create a consensus sequence of viruses on a high taxonomic level (family).
I have several thousand sequences of variable length (300-20,000 nt), representing partial or whole genome sequences of viruses. The viruses all belong to the same taxonomic family, but they are different genera and species, which means they have some similarity, but also quite a lot of diversity.
I have different numbers of sequences for each species, so I cannot just throw all the sequences into the same alignment, because that would bias the consensus sequence to over-represent the species with the highest representation in the alignment. So I am looking for strategies to curate the sequences before the final alignment to make sure that the alignment best represents the diversity in the family.
I am considering creating separate alignments for each species. Then I might align the species level consensus sequences to create genus level consensus sequences. Perhaps I will even be able to align the genus level consensus sequences to a family level consensus.
BUT I worry that the species and genus consensus sequences will lose the information on the original ratio of ambiguous nucleotides, which would mean that the family level consensus would also not contain any true information on these ratios.
So my question is -
Is there any way to align multiple consensus nucleotide sequences while retaining the information on the correct ratios of ambiguous nucleotides?
Thanks in advance.
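One way to avoid losing the ratios is to carry position frequency profiles up the hierarchy instead of collapsed consensus letters: build a profile per species, average species profiles with equal weight into a genus profile, and so on, only converting to IUPAC codes at the very end. A minimal numpy sketch with toy, pre-aligned sequences (gap handling and ambiguity codes omitted for brevity):

import numpy as np

BASES = "ACGT"

def profile(alignment):
    # Column-wise base frequencies of an alignment (equal-length strings).
    counts = np.zeros((len(alignment[0]), 4))
    for seq in alignment:
        for i, b in enumerate(seq.upper()):
            if b in BASES:
                counts[i, BASES.index(b)] += 1
    totals = counts.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1
    return counts / totals

species_a = ["ACGT", "ACGA", "ACGT"]   # 3 toy sequences
species_b = ["ACTT"]                   # 1 toy sequence

# Equal-weight averaging removes the over-representation bias while
# preserving the true base ratios at every column.
genus_profile = (profile(species_a) + profile(species_b)) / 2
print(genus_profile)   # last column: T ~ 0.83, A ~ 0.17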
In my understanding, hyperspectral remote sensing data is equivalent to imaging spectroscopy. But more and more often I see the term used for point spectroscopy (field or laboratory measurements), which of course also fulfills the literal sense of the word, since such measurements have lots of spectral bands.
Some people have argued for abolishing the term altogether and only using imaging or point spectroscopy instead.
Is there a consensus on using the term?
Should we use it for
a) all reflectance measurements using many bands, or
b) only for imaging spectroscopy, or
c) not use it at all?
I want to construct a plasmid (for a Drosophila cell system) containing an intron in order to study the splicing process. In detail, I would like to insert either a weak or a strong splice donor site (followed by a small intron + splice acceptor) into the firefly luciferase gene, to be able to easily monitor splicing activity by a standard luciferase assay. However, while I can easily find the consensus sites for the 5'SS and 3'SS, I am struggling to find the appropriate full DNA sequences that I could practically insert into my plasmid. Can someone help me find this sequence? (An Addgene reference? A detailed publication with the DNA sequence fully available?)
Thanks in advance for your help,
J
Hi,
I am looking to build up a consensus group in clinical biochemistry for mutual exchange of scientific ideas and research protocols, cooperation in proposal preparation, and sharing of books, research articles, and reviews.
How could politicians and scientists better work together to address issues in our world? For example, could ResearchGate provide opportunities for politicians to get involved in some sort of discussion forum on a specific issue, to exchange information and ideas between researchers and politicians?
I want to find the most essential and reliable academic research AI tools and collections to save time and provide better research outcomes. I have some suggestions for you here (time-by-time, I'll try to update it). Let me know your tips and suggestions!
- Scite - research on scholarly articles and analyze citations
- Consensus - study findings on a range of subjects easily accessible
- Trinka - 3000+ grammar checks, tone, and style enhancements
- Elicit - deduce summaries and visualizations for practical data interpretation
- Rayyan - organize, manage, and accelerate systematic literature reviews
- Scholarcy - automating the process of reading, summarizing, and extracting information
- Gradescope - grading and feedback tool
- Knewton - analyze student performance data, strengths, weaknesses, and progress.
- Watson - has its own Watson Discovery and Watson Natural Language Understanding features
- Tableau - explore, understand, and identify data, trends, patterns, and outliers
- Semantic Scholar - academic search engine tool
- Mendeley - organize, share, and cite your research papers properly in one place
- Zotero - collect, organize, annotate, cite, and share research documents
- Wordvice AI - real-time, all-in-one text editor
- Typeset.io - a comprehensive platform that provides predefined manuscript templates and automated formatting tools
- SciSpace - provides a one-stop shop for everything from manuscript submission to peer review to publication
- Scite.ai - gives you accurate citations to published papers
- Quillbot - writing assistant that helps people create high-quality content
- Scholarcy - an online research tool that reads and summarizes articles, reports, and book chapters
- ResearchRabbit - track citations, create bibliographies, and generate summaries of papers
- ProofHub - All-in-One Project and Team Management
- ChatPDF - creates a semantic index for each paragraph
- Consensus - answers and summaries based on peer-reviewed literature
- Gradescope (for teachers) - administer, organize, access, grade, and regrade students' work
- Flot.ai - Improve, summarize, translate, and reply to any text
Thanks to your feedback, here are the new ones ...
- Connected Papers - find related papers
- Hypothes.is - annotate the web / pdfs - share with other people
- Endnote - Straightforward tool for organizing and citing papers
I have a dataset of approximately 170 CT cases. The idea is that the gold standard is the consensus evaluation of two radiologists on 12 descriptive parameters and 1 conclusion. Because 170 x 2 readings are quite demanding, is it conceivable that I test inter-rater agreement on a portion of the cases (say 40) and that the remaining 130 cases are read by just one of the two readers, randomly assigned, provided the kappa on the 40 cases is above, say, 0.7? In this way, each of the two readers would read 40 + (130/2) = 105 cases instead of 170.
Is this a possible shortcut? Thanks a lot
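Whatever the methodological verdict, the gatekeeping step itself is easy to script. A sketch with invented binary ratings, using scikit-learn's kappa (repeat per parameter, or use a weighted kappa for ordinal ones):

from sklearn.metrics import cohen_kappa_score

# Hypothetical conclusions of the two radiologists on the 40 double-read cases.
reader_1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1] * 4
reader_2 = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1] * 4

kappa = cohen_kappa_score(reader_1, reader_2)
print(f"kappa = {kappa:.2f}")
if kappa > 0.7:
    print("agreement acceptable -> split the remaining 130 cases between readers")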
Hi,
Could anyone please suggest appropriate bioinformatics tools to generate a consensus sequence? I tried EMBOSS Cons, but I am getting a number of 'N's in the generated consensus. Using this consensus I need to design primers for my further work.
Thanks
Deepti
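The 'N's usually appear where no base reaches the plurality threshold, so lowering that threshold (I believe EMBOSS Cons exposes this as -plurality) or applying a simple majority rule often helps. A minimal Python sketch of a majority-rule consensus over an existing alignment (toy sequences; tune the threshold to your data):

from collections import Counter

def consensus(aligned_seqs, threshold=0.5):
    # Call a base when its column frequency (gaps excluded) reaches
    # `threshold`; otherwise emit N.
    out = []
    for column in zip(*aligned_seqs):
        counts = Counter(b for b in column if b.upper() not in "-.N")
        if not counts:
            out.append("N")
            continue
        base, n = counts.most_common(1)[0]
        out.append(base if n / sum(counts.values()) >= threshold else "N")
    return "".join(out)

seqs = ["ACG-TACGT", "ACGATACGT", "ACGATANGT"]
print(consensus(seqs))   # ACGATACGT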
Non-insane automatism is a legal defense used in some jurisdictions to argue that a person's actions were committed involuntarily due to a state of automatism, which is a condition where a person performs actions without conscious control or awareness. Unlike the defense of "insane automatism," which involves actions resulting from a mental disorder, the defense of non-insane automatism involves actions resulting from external factors that temporarily impair the person's consciousness or control over their actions.
The concept of non-insane automatism caused by electromagnetic fields (EMF) is a topic that has been debated and researched in various contexts, including legal, medical, and scientific realms. Some individuals claim that exposure to electromagnetic fields can lead to involuntary actions or states of automatism, but it's important to understand that the mainstream scientific consensus does not support a direct causal link between EMF exposure and non-insane automatism.
Electromagnetic fields are generated by the movement of charged particles and are present in various forms in our environment, including from power lines, electronic devices, and wireless technologies. While EMF exposure is a legitimate area of concern and research due to potential health effects, the idea that EMF exposure can directly cause a person to engage in involuntary actions or lose control over their behavior is not well-substantiated by scientific evidence.
Here are some important points to consider:
- Scientific Consensus: The mainstream scientific consensus does not support the notion that exposure to typical levels of electromagnetic fields can lead to non-insane automatism or involuntary behavior.
- Health Effects: EMF exposure has been studied primarily in relation to potential health effects, such as the risk of certain illnesses or conditions. Research on EMF and its effects on human health is ongoing, but the evidence for causing involuntary actions is limited.
- Individual Differences: Responses to EMF can vary among individuals, but the idea that EMF exposure universally causes non-insane automatism is not supported by the available research.
- Legal Considerations: In legal cases involving claims of non-insane automatism due to EMF, the courts typically rely on established scientific evidence and expert testimony to determine the validity of such claims.
- Causation and Evidence: For any claim that EMF exposure caused non-insane automatism, there would need to be robust scientific evidence demonstrating a direct cause-and-effect relationship between the two. Establishing causation in legal cases involves rigorous scientific analysis and evaluation.
If you have concerns about EMF exposure, it's important to seek information from reputable scientific sources, government health agencies, and expert organizations. If you believe that EMF exposure has caused you or someone else to engage in involuntary actions, it's advisable to consult with medical and legal professionals who can provide accurate guidance based on the available evidence and expertise.
I am interested in the relationship between gene dosage and the amount of protein expression. Does anyone have experience with this? Is there a consensus?
I am working on blockchain-based energy sharing. I have a consensus mechanism implemented in Matlab, and now I want to implement a complete system. My question: can we implement the blockchain in Hyperledger etc. and run the consensus in Matlab?
This question explores the role of the consensus mechanism in ensuring the security of blockchain networks, discussing concepts such as proof-of-work, proof-of-stake, and their impact on network security.
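For intuition, proof-of-work security reduces to the cost of a brute-force search: a block is valid only if its hash falls below a target, so rewriting history means redoing that search faster than the honest majority. A toy Python sketch (the difficulty and payload are arbitrary):

import hashlib

def mine(block_data, difficulty_bits=20):
    # Find a nonce whose SHA-256, read as an integer, is below the target.
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine("block 42: alice -> bob 5")
print(nonce, digest)   # ~2^20 attempts on average; each extra bit doubles the work

Proof-of-stake replaces this energy cost with an economic one: validators are chosen in proportion to locked funds and forfeit that stake if they sign conflicting blocks.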
Hello everybody, I'm a master's degree student. I'm working with 16S data on some environmental samples. After all the cleaning, denoising, etc., I now have an object that stores my sequences, their taxonomic classification, and a table of counts of ASVs per sample linked to their taxonomic classification.
The question is, what should I do with the counts when assessing diversity metrics? Should I transform them prior to calculating the indexes, or should I transform them according to the index/distance I want to assess? Where can I find resources on these problems and related ones?
I know that these questions may be very simple ones, but I'm lost.
As far as I know there is no consensus on the statistical operation of transforming the data, but I cannot leave the counts raw because of their compositional nature.
Please help
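A common compromise (see, e.g., Gloor et al. 2017, "Microbiome datasets are compositional", for the compositional argument) is to pick the transform per analysis: rarefied or raw counts for richness-type alpha diversity, and a centered log-ratio (CLR) transform before Euclidean/PCA-style analyses. A minimal numpy sketch of CLR with a pseudocount (the counts are invented):

import numpy as np

def clr(counts, pseudocount=1.0):
    # Centered log-ratio transform for one sample of ASV counts; the
    # pseudocount handles the zeros that are ubiquitous in 16S tables.
    x = np.asarray(counts, dtype=float) + pseudocount
    logx = np.log(x)
    return logx - logx.mean()

sample = [120, 0, 34, 7, 0, 512]
print(clr(sample))   # transformed values sum to ~0 by construction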
A general methodology question about reaching consensus in the Delphi method:
When we have a Likert-scale questionnaire for our experts to fill in, the consensus criterion is a mode above 5, and we have 11 items in total. In the first round, many items reached a mode above 5. Do we exclude them from the next round and only ask about the items which haven't reached the consensus criterion? Or can we do the opposite, excluding the items which haven't reached the criterion and keeping the items that have, to narrow them down, as the goal is to limit the number of items to 4-5?
Thank you in advance
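Both designs appear in the Delphi literature; if the end goal is a 4-5 item shortlist, it is common to carry only the consensus items forward and ask the panel to prioritize among them. Either way, the filtering itself is trivial to script; a sketch with invented 7-point ratings:

from statistics import mode

ratings = {                       # hypothetical round-1 ratings per item
    "item_1": [6, 7, 6, 5, 7, 6],
    "item_2": [3, 4, 2, 5, 3, 4],
    "item_3": [7, 7, 6, 7, 5, 7],
}

retained = {k: v for k, v in ratings.items() if mode(v) > 5}
print("carried to next round:", list(retained))   # item_1, item_3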
Some propose that the presence of gravitational time dilation and the effect on redshifted photons means that gravitons can travel faster than light. Can someone elaborate and explain why exactly this should indicate FTL gravitons? What is the consensus on this effect? And, in general, do you think it is true?
I am new to the world of Bayesian phylogenetics and I am trying to get my head around the two types of consensus tree MrBayes offers. I understand the Majority-Rule consensus but I am struggling to grasp the allcompat option. Is there another name for it which I may be more familiar with? Any help would be much appreciated!
Hannah
In recent decades I've noticed a tendency in organizations to attempt to solve problems by assembling, early on, large groups of varying experience levels and backgrounds, featuring lots of discussion meetings and pursuit of consensus. Often the results have not been excellent, which I find unsurprising. Thoughts?
Democracy and Consensus in African Traditional Politics: A Plea for a Non-party Polity?
I need the full text
Does anyone have experience purifying protein directly from buffered complex media (recipe from the EasySelect system) over a Ni-NTA column? I am trying to decide if I will need to run the media directly from my AKTA sample pump or if I should use TFF to concentrate and exchange first. Either way is fine, just curious what the consensus is. Also considering ammonium sulfate precipitation as one process pathway.
What is the current consensus among historians and other scholars? Apart from his alleged relationship with Mrs. Crawford that compromised his political career, did Dilke have other clandestine romantic liaisons?
I have performed a DAP-seq experiment to enrich putative NAC binding sites in the genome. By Sanger sequencing I have obtained certain reads with the consensus NAC motif. Is it possible to identify the CDS of the gene downstream of the obtained promoter region harbouring the NAC site?
The statistic most commonly used for interrater reliability is Cohen’s kappa, but some argue that it’s overly conservative in some situations (Feinstein & Cicchetti, 1990. High agreement but low kappa). For binary outcomes with a pair of coders, for example, if the probability of chance agreement is high, as few as two disagreements out of 30 could be enough to pull kappa below levels considered acceptable. I’m wondering whether any consensus has emerged for a solution to this problem, or how others address it.
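No single replacement has won out, but Gwet's AC1 (Gwet, 2008) is one frequently cited option precisely because it stays stable under the skewed-prevalence conditions Feinstein & Cicchetti describe; reporting it alongside kappa and raw agreement is a common compromise. A sketch for two raters and binary codes (the 2x2 table below reproduces the high-agreement/low-kappa pathology):

def gwet_ac1(a, b, c, d):
    # Gwet's AC1 from the 2x2 table [[a, b], [c, d]]
    # (a = both raters say yes, d = both say no).
    n = a + b + c + d
    pa = (a + d) / n                       # observed agreement
    pi = ((a + b) + (a + c)) / (2 * n)     # mean 'yes' prevalence
    pe = 2 * pi * (1 - pi)                 # AC1-style chance agreement
    return (pa - pe) / (1 - pe)

# 28 yes-yes cases, 2 disagreements, no no-no cases:
print(gwet_ac1(a=28, b=1, c=1, d=0))   # ~0.93, where Cohen's kappa is ~ -0.04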
I do not think that there is a consensus, and I would like to collect opinions on the most reliable soluble platelet activation marker in plasma.
Thank you
It can be said that Thomas Kuhn’s loop is active only when the working of paradigms generates abnormalities. If a paradigm does not generate abnormalities it is a golden paradigm.
Hence, the Kuhn’s loop can be envisioned as moving from paradigm to paradigm correcting abnormalities until there are no more abnormalities to correct.
In other words, the Kuhn’s loop works its way up from non-golden paradigms to the golden paradigm.
And this raises the question; Can Thomas Kuhn’s scientific revolution loop be seen as the road that leads in the end to a golden paradigm ruled world?
I think the answer is Yes, what do you think?
Feel free to share your own views on the question!
I am using the OVL (overlapping coefficient) to determine the amount of overlap between two probability distributions. I have a relatively high degree of overlap, averaging around 0.808342. I would like to know if there is a consensus in the field on what OVL threshold is considered good overlap.
thank you for your input.
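I am not aware of a field-wide threshold; OVL is usually interpreted relative to a benchmark. For equal-variance normals, OVL = 2*Phi(-d/2) for a mean difference of d standard deviations, so two unit normals half an SD apart already overlap by about 0.80. A numpy sketch estimating OVL from two samples via common-grid histograms:

import numpy as np

def ovl(sample_1, sample_2, bins=100):
    # OVL ~ sum over shared bins of min(p, q) * bin_width.
    lo = min(sample_1.min(), sample_2.min())
    hi = max(sample_1.max(), sample_2.max())
    p, _ = np.histogram(sample_1, bins=bins, range=(lo, hi), density=True)
    q, edges = np.histogram(sample_2, bins=bins, range=(lo, hi), density=True)
    return np.minimum(p, q).sum() * (edges[1] - edges[0])

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10_000)
b = rng.normal(0.5, 1.0, 10_000)
print(ovl(a, b))   # ~0.80 for unit normals half a SD apart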
I was only able to find these articles. But there is no consensus yet.
- K.I. Triantou, D.I. Pantelis, V. Guipont, M. Jeandin, Microstructure and tribological behavior of copper and composite copper+alumina cold sprayed coatings for various alumina contents, Wear 336-337 (2015) 96-107.
- T. Chandanayaka, F. Azarmi, Investigation on the Effect of Reinforcement Particle Size on the Mechanical Properties of the Cold Sprayed Ni-Ni3Al, Journal of Materials Engineering and Performance 23(5) (2014) 1815-1822.
Are there other works on this problem?
What possible problems or topics in blockchain could be addressed in a master's thesis?
So far I have been researching the variety of consensus mechanisms, including their pros and cons, though my idea to use this as a topic was shattered by the realization that some very recent studies have already done extensive work here. I did not feel there was enough room to go over this again, since I could not find another angle for new research.
I can't really put my finger on an actual scientific question to answer in my thesis yet.
I'm looking for some inspiration, guidance, and tips.
Kind regards to the community & stay healthy everyone!
There seems to be an apparent consensus that the activation and function of AhR influence the landscape of TC populations, but is there a group that is more severely affected?
Focusing on the vulnerabilities in consensus protocols
I would like to use a practical and yet rigorous tool to build a consensus among a large number of stakeholders in an educational intervention project.
I would appreciate experts' opinions in suggesting research tools, comparing popular tools e.g. Delphi, Group Concept Mapping, etc.
I would also appreciate practical advice from all researchers.
Many Thanks in Advance
The Fuzzy Delphi is a more advanced version of the Delphi Method in that it utilizes triangulation statistics to determine the distance between the levels of consensus within the expert panel. Yet is there any other method that can be used instead of the fuzzy Delphi method to select the most suitable criteria?
Delphi method: to develop consensus.
Content validation: a number of experts rate each item of the questionnaire according to its relevance and clarity.
I need code for Consensus + Innovations and OCD in any programming language, preferably Matlab or R.
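I do not have an OCD implementation to hand, but the consensus + innovations update itself (in the Kar & Moura style) is short enough to sketch. Below is a minimal Python/numpy version for agents on a ring estimating a common scalar from noisy local measurements; constant gains are used for brevity, whereas the literature typically lets them decay over iterations. Porting to Matlab or R is direct:

import numpy as np

rng = np.random.default_rng(1)

N, theta = 10, 3.0                                  # agents, true parameter
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}   # ring graph
x = np.zeros(N)                                     # local estimates
alpha, beta = 0.05, 0.2                             # innovation / consensus gains

for k in range(500):
    y = theta + rng.normal(0, 0.5, N)               # fresh noisy observations
    x_next = x.copy()
    for i in range(N):
        consensus_term = sum(x[i] - x[j] for j in neighbors[i])
        innovation_term = y[i] - x[i]
        x_next[i] = x[i] - beta * consensus_term + alpha * innovation_term
    x = x_next

print(x)   # all estimates hover around theta = 3.0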
Hello researchers, I am interested in finding new alternatives to blockchain consensus algorithms, published in academia within the last two years at most, describing new methods for how a scalable cryptocurrency should work. Please feel free to answer the question and link some references.
COVID-19 is mainly a respiratory disease that affects the lungs, although other organ structures with endothelium seem to be affected too.
When should we do imaging?
What is the aim of the imaging?
How can it help with management?
Do you agree with the following consensus statement?
How will you adjust your own practice and difficulties encountered? Why?
Ref:
The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society. Chest. 2020 Apr 07.
Dear all,
Recently I met a problem. We screened out a gene A which behaves like a tumor suppressor gene: it negatively correlates with clinical patients' survival, and it strongly affects tumor cell proliferation. When we knock it down, the proliferation of a lung tumor cell line increases, while radiosensitivity seems to decrease. My question is: what is radiosensitivity commonly expected to do when proliferation is upregulated by one gene? If my results are repeatable, is it strange that one would inhibit a tumor suppressor gene to achieve radiosensitivity? I have read some papers on the relationship between cell proliferation and radiosensitivity, but found no consensus opinion. What do you think about this kind of thing?
Hi everyone
I am starting a Delphi consensus study, which will include ranking responses into the top 5 (most important). Can anyone help (with any references) to guide the data analysis?
Many thanks,
Concettina
I am planning a study using concept mapping that will involve Procrustes analysis to compare across-group visualizations, and I am wondering if it is possible to analyze the residuals or consensus proportions from GPA alongside data from surveys, scores on a behavioral measure, etc., by way of ANOVA or a correlational study?
Dear all,
First I will share some details about my research:
- My dataset consists of 23 statements.
- I would like to analyze my dataset for two subgroups (advisers: HCPs who recommend intervention X; non-advisers: HCPs who do not recommend intervention X).
- The aim of my study is to identify differences in how these two groups (slightly) agreed or (slightly) disagreed with the 23 statements.
- Hence, we performed a PCA. The PCA graph of variables and their loadings shows us the statements advisers/non-advisers agreed upon (most of them (slightly) agreed or (slightly) disagreed; there was 'consensus') and the statements they did not agree upon (there was a lot of variation in how they answered the statement).
- A lot of variance (high loadings) indicates that advisers or non-advisers did not reach 'consensus' as a group; hence, that particular statement is probably not related to being an adviser or non-adviser.
- At this moment I am not sure whether what I am doing is valid or whether I am overlooking some important points.
- Furthermore, I am wondering whether there is a technique for comparing the outcomes of PCAs of two groups within the same dataset? (See the sketch after this post.)
Kind regards,
Anne
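One established way to compare component structures across two groups is Tucker's congruence coefficient between the loading vectors (Lorenzo-Seva & ten Berge, 2006, suggest that values of .85-.94 indicate fair similarity and >= .95 near-equality). A minimal numpy sketch with invented loadings for the 23 statements:

import numpy as np

def tucker_congruence(loadings_a, loadings_b):
    # phi = <a, b> / sqrt(<a, a> * <b, b>)
    a, b = np.asarray(loadings_a), np.asarray(loadings_b)
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

rng = np.random.default_rng(7)
pc1_advisers = rng.normal(size=23)                          # hypothetical PC1 loadings
pc1_non_advisers = pc1_advisers + rng.normal(scale=0.3, size=23)
print(tucker_congruence(pc1_advisers, pc1_non_advisers))    # close to 1 -> similar structure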
[Information] Special Issue - Intelligent Control and Robotics
How can traditional distributed systems algorithms like XFT, RAFT, Paxos, Sieve, BFT, and DAG be classified alongside modern blockchain algorithms like PoW, PoS, PoA, etc.?
Please refer to these diagrams:
1. Validation, Voting and Authentication based consensus algorithms: https://www.researchgate.net/profile/Saraju_Mohanty/publication/335854956/figure/download/fig3/AS:803962764677121@1568691068916/Various-Consensus-Algorithms-used-in-the-Blockchain.png
2. Permissionless and Permissioned consensus algorithms: https://www.lianapress.hk/media/userfiles/125080/1519729103/distributed_consensus_mechanisms.png
3. A diagram in Mandarin: https://imgs.developpaper.com/imgs/695257476-5d42a9ea1e981_articlex.png
I want to create consensus fasta sequence for long-read sequencing BAM files. I have used
samtools mpileup -uf reference.fasta file.bam | bcftools call -c | vcfutils.pl vcf2fq > sample.fq
seqtk seq -a sample.fq > sample.fasta
but variants present (in abundance) in the reads do not make it into the fasta file. I have tried adding a lot of parameters, without success. Is there any other tool that I could use to create a consensus fasta file from long-read BAM files?
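One alternative worth trying is letting bcftools apply the called variants to the reference itself (the commands below assume a reasonably recent bcftools; adjust the call filters to your ploidy and read type):
bcftools mpileup -f reference.fasta file.bam | bcftools call -mv -Oz -o calls.vcf.gz
bcftools index calls.vcf.gz
bcftools consensus -f reference.fasta calls.vcf.gz > consensus.fasta
For noisy long reads, dedicated consensus/polishing tools (e.g. medaka for Nanopore, or racon) may recover abundant variants that the mpileup route misses.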
I am attempting to perform Consensus Molecular Subtyping (CMS) on colorectal cancer specimens that my lab has collected over the years. I am struggling to find a method that is streamlined, and since I am not very familiar with bioinformatics, I was hoping that someone might be able to help me understand what I need to do step-by-step to accomplish this. Thank you!
I am using PAUP4 for parsimony analysis. The differences in branch length seem important enough to retain, but the number of trees that are being retained necessitates building a consensus tree.
Is it appropriate/possible to build a consensus tree that averages the branch lengths, and if so, how would this be achieved in PAUP4?
In group decision making, consensus is very important to enable a group of decision makers to arrive at a mutually agreed importance of the factors.
According to Popov (2018), the IOTA DAG, through a GHOST-style protocol, turns the blockchain data structure from a chain into a tree, which improves the confirmation times and overall security of the network. Popov's argument is mainly about improving on the consensus and block-creation weaknesses of Bitcoin's blockchain and its data structure. As such, employing a DAG should provide significant speed gains for the DLT protocol.
On the other hand, it is not certain at this point how the IOTA DAG is able to improve performance; moreover, IoT devices are predominantly relatively small-scale chips which could not support a heavy hashing algorithm such as SHA-256 with a proof-of-work scheme for solving the nonce.
In recent years a question has been arising from my experience of different academic and scientific activities. Contemporary humanism, obviously very good and peaceful, including everybody, is more and more changing its approach towards truth, treating it as something that actually equals the consensus of the largest possible (inclusive and democratic) group. It seems that we have somehow forgotten the mission or quest of past centuries: that there IS some Truth and we are to discover it, or at least come one step closer than our predecessors. Now we tend to be more and more satisfied with "having OUR truth" about something, actually a mere consensus within a particular group. We are somewhat confusing this consensual semi-truth with the general truth (not speaking about Truth as eternal spirit or even person). In the humanities, as usual, this is more visible. The theory of a firm Truth is understood as something "ideological" and thereby dangerous, potentially threatening misuse in the service of a political party or religious authority. Are we still able to see this? Or is the comfortable consensus already here as "the truth"? Is it SUSTAINABLE?
What are the standards and recommendations for data anonymization, with consideration for dataset traceability?
In order to save energy on consensus in a consortium blockchain, I want to limit the number of members. But I am curious how this impacts consensus resilience and vulnerability to attacks. Also, is there an optimum size for the consortium? I would appreciate any help, guidance, comments, or references to the literature.
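If the consortium runs a BFT-style protocol (PBFT and relatives, which is an assumption on my part), the classic bound n >= 3f + 1 makes the trade-off concrete: each member you remove lowers the number of simultaneously compromised members the network can survive. A tiny Python illustration:

# BFT resilience: n members tolerate f Byzantine members iff n >= 3f + 1.
for n in range(4, 20):
    f = (n - 1) // 3          # largest tolerable number of faulty members
    print(f"n = {n:2d} -> tolerates f = {f}, PBFT-style quorum ~ {2 * f + 1}")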
I'm curious why Bitcoin's inter-block time is 10 minutes while Ethereum's is only about 15 seconds. Given that both Bitcoin and Ethereum use the PoW consensus algorithm, why not reduce Bitcoin's inter-block time to match Ethereum's and thus increase system throughput?
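A large part of the answer is the stale (orphan) rate: blocks found while a previous block is still propagating are wasted, which weakens PoW security. A back-of-the-envelope Python sketch, assuming a 5-second propagation delay (an illustrative figure) and Poisson block arrivals:

import math

tau = 5.0                                  # assumed propagation delay, seconds
for T in (600.0, 15.0):                    # Bitcoin vs Ethereum-like intervals
    stale = 1 - math.exp(-tau / T)         # P(another block during propagation)
    print(f"interval {T:5.0f}s -> stale rate ~ {stale:.1%}")

At a 600 s interval the waste is under 1%, while at 15 s it approaches 30%, which is why Ethereum needed GHOST-style uncle rewards to make short block times workable.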
What was the light-year distance to the original departure point of light arriving here and now from the most distant stellar objects?
I am not asking for the travel distance, but feel free to also mention that.
Assume the current consensus of ongoing cosmic expansion over the course of roughly 13 billion years, so that the current visible universe is 46.5 billion light-years in radius; the original departure point would then be __x__ light-years at maximum.
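For a source of known redshift z the standard FLRW relations give the answer directly: the proper distance at emission is today's comoving distance shrunk by the expansion factor (stated here without derivation, assuming standard LCDM):

\[
d_{\text{emit}} = \frac{d_{\text{comoving}}}{1+z},
\qquad
d_{\text{comoving}} = c \int_0^{z} \frac{dz'}{H(z')} .
\]

As a hedged worked example: for the CMB, z is about 1090 and the comoving distance about 45.7 billion light-years, so the emission-time distance is roughly 45.7/1091, i.e. about 42 million light-years; the most distant observed galaxies (z of roughly 10-13) emitted their light from proper distances of a few billion light-years.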
There is a strong scientific consensus that the Earth is warming and that this warming is mainly caused by human activities. But many years of consensus can mean a blockage of other theories and opinions, and the physics of the process remains unclearly explained. How can this "dimensionless" quantity of "ppm" be used in equations? The notation "ppm" is not part of the International System of Units (SI), and its meaning is ambiguous.
There is still no consensus on whether the fields are composed of particles or not. For example:
- Art Hobson, There are no particles, there are only fields, American Journal of Physics 81, 211 (2013);
- Robert J. Sciamanda, There are no particles, and there are no fields, American Journal of Physics 81, 645 (2013).
This problem arises because modern physics describes quantum phenomena at the quantum scale (subatomic particles). So, to describe the fields, we have to cross below the quantum scale and reconsider quantum phenomena at the sub-quantum level, to understand what fields are made of.
You are invited to participate as expert panel member in our consensus study on International Online Collaboration Competencies (IOCC). The purpose of this study is to build consensus on the key competencies for International Online Collaboration, which gained importance specifically during the COVID-19 pandemic, where global virtual teams were becoming increasingly important.
We are interested in getting perspectives and experiences on IOCC from an expert panel drawn from the academic field as well as from practice. As an academic expert, you have published on IOCC or virtual teamwork within the last 10 years. As a practice-based expert, you are or have been a member or leader of a virtual team. Your experiences, perspectives, and comments on key competencies for International Online Collaboration are highly appreciated.
Timeframe for the study
The iterative nature of a Delphi technique means that participants are anonymous to each other, but not to the researcher (quasi-anonymity). Your participation in the survey is voluntary. The survey will take about 20-30 minutes and can be interrupted at any time:
The first round is already completed and aimed at answering whether the given competence domains completely and accurately reflect IOCC, and whether a change of domain titles or wording is needed.
The second round (12th – 23rd of November 2020) aims at reaching consensus on the results of the first round, and you are asked about your opinion on a given set of competencies developed in a previous systematic review.
The third round (3rd - 13th of December 2020) aims at reaching consensus on the aligned competencies and your opinion on which competencies you consider as most relevant for evaluating IOCC.
What is in it for you?
The final outcome of this study will be an expert consensus on International Online Collaboration Competencies, which is important for training future workforce and for the continuous development of current workforce.
There is no compensation for participation in this study, but the results will be of importance for higher education and the future workforce.
In case you have a colleague being interested in this topic or research, please feel free to forward this invitation for participating in our consensus study, thank you!
Thank you for your time and consideration. In case of questions, please do not hesitate to contact me: alexandra.kolm@fhstp.ac.at
Sincerely,
Alexandra Kolm, Jascha de Nooijer, Janneke Frambach, Jeroen J.G. van Merriënboer,
School of Health Professions Education, Maastricht University, The Netherlands
University of Applied Sciences St. Pölten GmbH, Austria
At the beginning of 2017, Donald Trump was about to be inaugurated as the next United States president. In anticipation of President Trump's policy changes in the United States, with possible consequences for the world economy, we sourced copper price forecasts from analysts and research organizations. The graph shows two copper price scenarios versus the subsequent actual outcome:
· The forecasts made before Donald Trump won the US election (i.e., before November 2016).
· The forecasts made after Donald Trump won the US election (i.e., post-November 2016).
· The LME average market prices for the years Donald Trump has been president (Source IMF).
                        2017        2018        2019        2020 (Y-t-D Sep)
Pre-election average    $5'137/mt   $5'490/mt   $6'063/mt   $6'305/mt
Post-election average   $5'490/mt   $5'689/mt   $5'864/mt   $6'305/mt
Actual average          $6'170/mt   $6'530/mt   $6'010/mt   $5'838/mt
Difference vs pre       $1'033/mt   $1'040/mt   ($53/mt)    ($467/mt)
Difference vs post      $680/mt     $841/mt     $146/mt     ($467/mt)
When comparing the forecasts against the subsequent outcomes, could they be considered to have been reliable? The consensus forecasts for the pre-election group consisted of 8 participants. For the post-election consensus forecasts, the group consisted of 40 participants, including some prominent research organizations in the sector. In defense of the forecasts, the copper price had been tracking lower in 2016, which could justifiably account for the underestimates in 2017 and 2018. By comparison, 2019 was remarkably accurate, and perhaps without the COVID outbreak in 2020, the overall comparative results might have proven reliable again. However, for some mining industry executives, the results might support their skepticism about using consensus forecasts.
I am open to considering the use of consensus forecasts, but with some modifications in the compilation process. Just asking someone for an estimate without allowing them to share their thought process would seem akin to aggregating all the votes in an election without allowing them to explain their position on the issues. The challenge in implementing a more transparent consensus forecasting approach is creating the scope for participants to share their views alongside their forecasts and simultaneously see the predictions and justifications of other industry experts.
In order to evaluate the possibility that an appropriately structured consensus forecasting panel could yield reliable results, a web-based application was designed to evaluate the concept for a doctoral research project. The web application allows participants to register for the research project and make anonymous copper and gold forecasts together with their justification for their predictions. As mentioned, to fully evaluate the concept of using an "open source" approach to developing consensus metal forecasts, all registered participants can see the anonymous forecasts of all other participants, as well as the evolving consensus forecasts. For those interested in participating in the research project, the web application can be found at https://consensusmetals.herokuapp.com
In a week, the next US presidential elections will be held, and once again, the question will be what lies in store for the coming four years for miners. I acknowledge my forecasting track record is questionable and feel that at times my approach is best described as "gut feel" rather than any systematic approach. If my guesses are partly right and are aggregated with other similarly partially correct forecasts, perhaps together, we can achieve a more reliable outcome for the benefit of all contributors!
Has anyone found any adverse effects of high-intensity infrasound on people living within close distances of wind turbines? A long while ago I published a paper with an undergraduate in which she collected many anecdotes of adverse effects of this type, but the article received some rather harsh criticism, specifically from researchers funded by the wind-generator manufacturers! Is there a consensus today about this phenomenon?
Thank you,
Peter
I am wondering what the current practices are for the timing of chest drain removal. I am looking at doing a study on the relationship between removal timing and the development of pleural effusions requiring invasive drainage. In my experience, drainage thresholds and removal times are fairly arbitrary, and there is no real consensus on how much is too much. When you balance that against pain, mobility, atelectasis, etc., you have to consider what risk is acceptable in waiting longer to remove them. I have seen protocols based on the serous nature of the drainage, and volume-based protocols ranging from 40 mL in 4 hours, to 150 mL in 4 hours, to 50 mL in 24 hours (considering this is less than physiological production of fluid, it seems extreme).
I would love to get people's thoughts. I have seen one weight-based guideline discussed, but no published results.
Also, what is people's general experience with the incidence of pleural effusions significant enough to require drainage after OHS? My institution sits at around 10-15%, which is why we are looking to determine whether drain removal timing can impact this number.
Thanks
What is the general consensus on one- vs. two-tailed hypothesis testing in planned contrasts? I have a repeated-measures mixed design. The study consists of three groups (A, B, C) and we have three assessment comparisons, i.e. Time 1 vs. Time 2, Time 1 vs. Time 3, and Time 1 vs. Time 4.
Group A is our intervention group
Group B is an active control group
Group C is a control group
Our hypotheses are directional:
Group A > Group B
Group A > Group C
Group B > Group C
In this case, would applying the one-tailed significance test be ok?
Best
Martin
Hello everyone, recently I have met some problems in my study. I want to implement a multi-agent system in DC microgrids, but I have never used Java, so I can't use JADE. Can I use only Matlab/Simulink to model the network consensus problem of a multi-agent system in DC microgrids?
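In principle yes: the standard averaging protocol is just matrix algebra, so a plain MATLAB script or a Simulink block can integrate it without any agent framework. Here is a minimal sketch of the discrete-time consensus update (written in Python/numpy for brevity, but line-for-line portable to MATLAB; the graph and initial set-points are invented):

import numpy as np

A = np.array([[0, 1, 0, 1],          # adjacency of 4 microgrid agents
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
x = np.array([48.0, 52.0, 47.5, 50.5])   # hypothetical initial local values
eps = 0.2                            # step size; needs eps < 1/max_degree

for _ in range(50):
    x = x - eps * (L @ x)            # x(k+1) = x(k) - eps * L x(k)

print(x)                             # all agents converge to the average, 49.5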
It seems that there is, more or less, some sort of consensus on academic standards. Who is responsible for drawing up the guidelines that shape the way academia functions? Who do you think sets the standards for research publishing in influential journals?
Recommendations by ordinary researchers? Decisions by elite researchers? Do policy makers have a say in this? What connects these academic decision makers, whether individuals or institutions, and what governs them?
I would appreciate your views. Thanks!
Different miRNA target prediction algorithms use different scores, which are not directly comparable. So what is the most reliable way to get a consensus of the results produced by different miRNA target prediction tools?
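One common, score-agnostic approach is rank aggregation: convert each tool's scores to within-tool ranks, then combine the ranks (mean rank, or rank product) across tools. A Python sketch with invented scores for five candidate targets (toolB is treated as "lower is better" purely to show how score direction is handled):

import numpy as np

scores = {   # hypothetical per-tool scores for the same candidate targets
    "toolA": {"t1": 0.91, "t2": 0.40, "t3": 0.75, "t4": 0.10, "t5": 0.66},
    "toolB": {"t1": -22.1, "t2": -8.3, "t3": -19.0, "t4": -2.0, "t5": -15.5},
    "toolC": {"t1": 88, "t2": 35, "t3": 70, "t4": 12, "t5": 59},
}
higher_is_better = {"toolA": True, "toolB": False, "toolC": True}

targets = sorted(scores["toolA"])
rank_rows = []
for tool, s in scores.items():
    vals = np.array([s[t] for t in targets])
    order = (-vals).argsort() if higher_is_better[tool] else vals.argsort()
    ranks = np.empty(len(targets))
    ranks[order] = np.arange(1, len(targets) + 1)    # 1 = best
    rank_rows.append(ranks)

consensus_rank = np.mean(rank_rows, axis=0)
for t, r in sorted(zip(targets, consensus_rank), key=lambda p: p[1]):
    print(t, r)    # t1 first: top-ranked by all three tools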
When analyzing bacterial DNA sequences, the consensus doesn't match the reference sequence when BLASTing them, and many bad spectra are seen in the analysis software. What should be done in this case?
What would be an acceptable CV% and SD for copy-number results between stool samples in qPCR (same group)? I am evaluating different bacteria (E. coli, Lactobacillus, etc.) using the absolute quantification method. My CVs after calculating the number of copies were very high (some over 25%). I read that this would be acceptable for gene expression; would it also be acceptable for bacterial quantification? If you can give me a reference, I'd appreciate it. There also doesn't seem to be a consensus on how to express the final result; I have seen articles expressing it as number of copies, log10, copies/ng of feces, among others.
Blockchain in its current state, though providing secure and decentralized transactions, makes a computation hundreds or thousands of times less efficient, because conceptually each node in the network must redundantly perform the same computations as all other nodes. Similarly, the key idea of blockchain technologies lies in the consensus mechanism, which requires these nodes to communicate with nodes all around the world, further introducing latency issues.
What are the recommended databases? Any consensus available?
There are a lot of methods to measure consensus in Delphi, but I need to know more about the method using the median and interquartile range. My Delphi panel is smaller than 10; is it possible to use the median and IQR to measure consensus? Please reply ASAP.
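Median/IQR consensus rules are used with small panels, though with fewer than 10 experts the IQR is coarse, so interpret it cautiously. A Python sketch with invented 5-point ratings from 9 experts, using the common (but not universal) rule "consensus if IQR <= 1":

import numpy as np

items = {"item_1": [4, 5, 4, 4, 5, 4, 5, 4, 4],   # hypothetical ratings
         "item_2": [2, 5, 3, 1, 4, 5, 2, 3, 4]}

for name, r in items.items():
    med = np.median(r)
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1
    print(f"{name}: median = {med}, IQR = {iqr} -> "
          f"{'consensus' if iqr <= 1 else 'no consensus'}")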
The Ottawa technology-enhanced assessment consensus statement was adopted in 2011. Richard Fuller addressed the issue in a presentation on 'Technology in assessment' at the Ottawa 2020 conference, but to date I have failed to understand it. Can anyone explain what is actually meant by 'technology-enhanced assessment'?
I am looking for a simulator to run a proof of concept in Hyperledger Fabric, varying the number of orderers, endorsers, and organisations, and also applying consensus in the network. Is there any simulator that can help me in this regard? I also want to make transactions through code (e.g., JavaScript, Java).
Any suggestions?
Thanks in advance.