Consensus - Science topic

Consensus is general agreement or collective opinion; the judgment arrived at by most of those concerned.
Questions related to Consensus
  • asked a question related to Consensus
Question
4 answers
Writing up methods, especially for qualitative work, is important so that the reader can contextualise the data. However, when publishing multiple papers based on the same data/methods, is there a consensus on how to do this?
There will, of course, be some small differences given that different articles will have a different focus, but there are also many aspects of the methods which will be the same.
It feels wrong to copy and paste the methods with some small tweaks. We have also tried, in the second paper of a series based on the same data, writing a briefer summary that references the first paper where the methods are fully written out, but this also seems unsatisfactory.
Are there agreed norms/a consensus/guidance on how best to handle this?
Relevant answer
Answer
Thanks Sarah and David for your responses (and the 'text recycling' link, Sarah!).
We did use the anchoring technique in a previous project, but it sometimes felt like editors then viewed the second and third papers as inferior/less important, and often reviewers then wanted details about the methods added back in. But it is one good way to deal with this.
In our current project we have decided that two members of the study team will lead on drafting a paper each which we plan to submit simultaneously, and so at least the methods section, whilst describing the same process, will be written in a different voice. I'll let you know how that goes!
  • asked a question related to Consensus
Question
7 answers
Dear all.
We are working on a project whose main subject is to detect cyberattacks on Smart Inverters. Specifically, on two Smart Inverters in two separate PV microgrids.
Currently, an IoT device is used to measure the output data from Smart Inverters. It is expected to deploy a machine learning model trained offline to detect any cyberattacks, adversarial attacks, or even FDIA (False Data Injection Attack).
However, we could not figure out, and are currently stuck on, where blockchain can fit into and contribute to this research structure without interfering with each component. From what we currently understand about blockchain technology, it is a data structure of chained-together blocks and it can be used as a distributed ledger. Within this category, it has a P2P network, a consensus mechanism, and smart contracts.
We have surveyed quite a lot of research articles, mostly review or survey papers; few are research articles. Of those few research articles, most focused on energy trading using blockchain; specifically, P2P networks and smart contracts were employed the most, and then consensus mechanisms/algorithms. Some suggested using blockchain as a distributed ledger but did not specify how exactly it was implemented.
My apologies for posting such a long question, and thank you to everyone who read it word by word. We are asking whether anyone could provide guidance, references, and/or implemented program code examples that could help us push this project forward and contribute to the research field.
Thank you.
Relevant answer
Answer
Dear Chou-Mo Yang Blockchain technology can enhance the security and resiliency of smart inverters against cyberattacks through several mechanisms. These mechanisms focus on creating a decentralized, tamper-proof ledger of transactions and data that can help detect, prevent, and respond to cyber threats effectively. Here are the key ways in which blockchain can be used to detect cyberattacks on smart inverters:
1. Immutable Data Records
  • Logging Events: Blockchain can provide an immutable record of all operational data and events from smart inverters, such as power generation, energy storage, configuration changes, and communications. This allows for effective monitoring of normal operations and immediate identification of anomalies that could indicate a cyberattack.
  • Tamper-Proof Auditing: Any changes to the inverter settings or software (such as firmware updates) can be logged on the blockchain. If any unauthorized changes occur, they can be easily traced and verified against the blockchain records, alerting operators to potential attacks. (A minimal code sketch of such a hash-chained log appears after this list.)
2. Decentralized Security Model
  • Distributed Ledger: Using a decentralized system reduces the risk of a single point of failure. Each smart inverter could have its own record in a distributed ledger, making it harder for attackers to compromise the entire network at once.
  • Consensus Mechanisms: Blockchain can utilize consensus models to validate any changes or transactions made by the inverter. If a majority of nodes (or inverters) do not agree with a proposed change, it could be flagged as suspicious and investigated further.
3. Smart Contracts for Automated Responses
  • Automated Threat Detection: Smart contracts can be programmed to trigger specific actions when certain conditions are met, such as recognizing unusual patterns of behavior. For example, if an inverter sends out a large number of error messages beyond a predefined threshold, a smart contract could initiate a protocol to isolate the affected inverter from the network.
  • Self-Healing Mechanisms: Smart contracts can be designed to implement automatic corrective actions when a potential cyber threat is detected, such as reverting settings to a known good state or performing a software rollback.
4. Enhanced Identity Management
  • Secure Device Authentication: Blockchain can provide a secure framework for authenticating devices communicating within the network. Each smart inverter can have a unique identity stored on the blockchain, ensuring that only authorized devices can connect and interact with one another.
  • Certificate Management: Blockchain can manage certificates and credentials for devices within the network, making it harder for attackers to gain unauthorized access and providing transparency in the authentication process.
5. Anomaly Detection and Behavior Monitoring
  • Aggregating Data: Blockchain allows for the secure aggregation of data across multiple inverters. Anomalous behavior detected in one inverter can be compared against the behavior of similar devices in the network, identifying deviations that could signal an attack.
  • Real-Time Monitoring: Monitoring can be enhanced using blockchain's secure data logging capabilities, allowing real-time analysis of inverter performance and operational events to quickly detect potential cyber threats.
6. Transparency and Accountability
  • Chain of Custody: Blockchain provides a clear audit trail of all actions taken concerning the inverters. If a cyberattack occurs, the history can be reviewed to determine how the attack was orchestrated and which vulnerabilities were exploited.
  • Stakeholder Visibility: All parties involved in the energy production and management process (manufacturers, operators, regulators) can have visibility into the status and security of the inverters, creating an environment of shared responsibility for surveillance and defense against cyber threats.
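To make the "immutable data records" point concrete, here is a minimal Python sketch of a hash-chained telemetry log, the core ingredient behind tamper-proof auditing. It is not a blockchain by itself (no P2P network, consensus, or smart contracts), and all field names are invented for illustration:

import hashlib
import json

def block_hash(record, prev_hash):
    # Hash the canonical JSON of the record together with the previous hash.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(ledger, record):
    # Chain each new entry to the hash of the previous entry.
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    ledger.append({"record": record, "prev": prev, "hash": block_hash(record, prev)})

def verify(ledger):
    # Recompute every hash; any edited record breaks the chain.
    prev = "0" * 64
    for entry in ledger:
        if entry["prev"] != prev or entry["hash"] != block_hash(entry["record"], prev):
            return False
        prev = entry["hash"]
    return True

ledger = []
append(ledger, {"inverter": "PV-A", "p_out_kw": 3.2, "t": 1})  # hypothetical fields
append(ledger, {"inverter": "PV-A", "p_out_kw": 3.1, "t": 2})
print(verify(ledger))                  # True
ledger[0]["record"]["p_out_kw"] = 9.9  # simulate a false data injection
print(verify(ledger))                  # False: tampering detected

A real deployment would replace the local Python list with a replicated ledger (e.g. Hyperledger Fabric), so that no single compromised node can rewrite the chain and silently recompute all subsequent hashes.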
  • asked a question related to Consensus
Question
1 answer
As there are different opinions on muscle atrophy, what is the consensus on its use in tongue reconstruction?
Relevant answer
Answer
Muscle atrophy
  • asked a question related to Consensus
Question
2 answers
I am looking to conduct a 3-round Delphi study. The first round will consist of an open-ended question, which will shape Round 2. Here participants will be asked to rate the answers from Round 1, with binary answers (yes/no/unsure/not qualified). Round 3 will also ask questions on the suitability of the answers and their use in clinical settings, with an option to select multiple answers.
Is there a way to define consensus (e.g. 70%) through the platform, or is this to be done manually?
Thanks so much!
Relevant answer
Answer
So sorry for the delay, Vanessa! I had to do this manually, then remove each question that reached consensus from the next round - hope that helps!
  • asked a question related to Consensus
Question
2 answers
I wish to create a consensus sequence of viruses on a high taxonomic level (family).
I have several thousand sequences of variable length (300-20,000 nt), representing partial or whole genome sequences of viruses. The viruses all belong to the same taxonomic family, but they are from different genera and species, which means they have some similarity, but also quite a lot of diversity.
I have different numbers of sequences for each species, so I cannot just throw all the sequences into the same alignment, because that would bias the consensus sequence to over-represent the species with the highest representation in the alignment. So I am looking for strategies to curate the sequences before the final alignment to make sure that the alignment best represents the diversity in the family.
I am considering creating separate alignments for each species. Then I might align the species level consensus sequences to create genus level consensus sequences. Perhaps I will even be able to align the genus level consensus sequences to a family level consensus.
BUT I worry that the species and genus consensus sequences will lose the information on the original ratio of ambiguous nucleotides, which would mean that the family level consensus would also not contain any true information on these ratios.
So my question is -
Is there any way to align multiple consensus nucleotide sequences while retaining the information on the correct ratios of ambiguous nucleotides?
Thanks in advance.
Relevant answer
Answer
Thank you, Hazrat! I think that is indeed a viable approach.
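One way to avoid losing the ambiguity information is to propagate position frequency matrices (profiles) up the hierarchy rather than consensus letters, and only collapse to IUPAC ambiguity codes at the very end. A toy Python sketch, under the strong assumption that the per-species alignments already share a common coordinate system (in practice a profile-profile aligner such as MAFFT or MUSCLE would be needed for that step):

from collections import Counter

def profile(aligned_seqs):
    # Per-column base frequencies for one species alignment (gaps ignored).
    out = []
    for col in zip(*aligned_seqs):
        counts = Counter(b for b in col if b in "ACGT")
        total = sum(counts.values()) or 1
        out.append({b: counts[b] / total for b in "ACGT"})
    return out

def merge(profiles):
    # Average profiles across species, each species weighted equally,
    # so a species with many sequences cannot outvote a rarer one.
    return [{b: sum(c[b] for c in cols) / len(cols) for b in "ACGT"}
            for cols in zip(*profiles)]

species_a = ["ACGT", "ACGA"]   # toy data
species_b = ["ACTT"]           # a single sequence that would otherwise be outvoted
family = merge([profile(species_a), profile(species_b)])
print(family[2])  # position 3 keeps G (species A) and T (species B) at equal weight

The family-level profile retains the true per-position ratios, which can then be thresholded into ambiguity codes or kept as-is for probability-aware primer design.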
  • asked a question related to Consensus
Question
6 answers
In my understanding, hyperspectral remote sensing data is equivalent to imaging spectroscopy. But more and more often I see the term used for point spectroscopy (field or laboratory measurements), which of course also fulfills the literal sense of the word, since such measurements have lots of spectral bands.
Some people have argued for abolishing the term altogether and only using imaging or point spectroscopy instead.
Is there a consensus on using the term?
Should we use it for
a) all reflectance measurements using many bands, or
b) only for imaging spectroscopy, or
c) not use it at all?
Relevant answer
Answer
Hey Henning,
I also stumbled upon that a few times. I guess the term hyperspectral does not imply any information on the technique itself.
In our publication on the topic, we therefore always used "hyperspectral imaging" to be clear about the technique.
That's why I'd answer your question with a), however, I'm happy to hear about different concepts.
Best,
Chris
  • asked a question related to Consensus
Question
2 answers
I want to construct a plasmid (for a Drosophila cell system) containing an intron in order to study the splicing process. In detail, I would like to insert either a weak or a strong splice donor site (followed by a small intron + splice acceptor) into the firefly luciferase gene, to be able to easily monitor splicing activity by a standard luciferase assay. However, while I can easily find the consensus sites for the 5'SS and 3'SS, I am struggling to find the appropriate full DNA sequences that I would like to practically insert into my plasmid. Is there someone who can help me find this sequence? (An Addgene reference? A detailed publication with the DNA sequence fully available?)
Thanks in advance for your help,
J
Relevant answer
Answer
It's a very good question. I'm now focusing on intron splicing. My previous data suggest that a G|gt-intron-ag|N structure is advised. However, this rule was not enough: I found that despite the artificial intron being located behind the G, it had only a 1/3 chance of being correctly spliced in expression vectors. If you'd like to insert an intron into an ORF, you'd better try several positions; the splicing rules are still too far from being easily usable. Wish you good luck.
  • asked a question related to Consensus
Question
8 answers
Hi,
I am looking to build up a consensus group in clinical biochemistry for mutual exchange of scientific ideas, research protocols, cooperation in proposal preparation, and sharing of books, research articles, and reviews.
Relevant answer
Answer
I am in for such a collaboration. Can't wait to get started.
  • asked a question related to Consensus
Question
8 answers
How could politicians and scientists better work together to address issues in our world? For example, can Researchgate provide opportunities for politicians to get involved in some sort of discussion forum for a specific issue to exchange information and ideas between researchers and politicians?
Relevant answer
Many thanks, Lemma Lessa, for your points and contributions to this discussion!
  • asked a question related to Consensus
Question
5 answers
I want to find the most essential and reliable academic research AI tools and collections to save time and provide better research outcomes. I have some suggestions for you here (time-by-time, I'll try to update it). Let me know your tips and suggestions!
  • Scite - research on scholarly articles and analyze citations
  • Consensus - study findings on a range of subjects easily accessible
  • Trinka - 3000+ grammar checks, tone, and style enhancements
  • Elicit- deduce summaries and visualizations for practical data interpretation
  • Rayyan- organize, manage, and accelerate systematic literature reviews
  • Scholarcy - automating the process of reading, summarizing, and extracting information
  • Gradescope - grading and feedback tool
  • Knewton - analyze student performance data, strengths, weaknesses, and progress.
  • Watson - has its own Watson Discovery and Watson Natural Language Understanding features
  • Tableau - explore, understand, and identify data, trends, patterns, and outliers
  • Semantic Scholar - academic search engine tool
  • Mendeley - organize, share, and cite your research papers properly in one place
  • Zotero - collect, organize, annotate, cite, and share research documents
  • Wordvice AI - real-time, all-in-one text editor
  • Typeset.io - a comprehensive platform that provides predefined manuscript templates and an automated formatting tool
  • SciSpace - provides a one-stop shop for everything from manuscript submission to peer review to publication
  • Scite.ai - gives you accurate citations to published papers
  • Quillbot - writing assistant that helps people create high-quality content
  • Scholarcy - an online research tool that reads and summarizes articles, reports, and book chapters
  • ResearchRabbit - track citations, create bibliographies, and generate summaries of papers
  • ProofHub - All-in-One Project and Team Management
  • ChatPDF - creates a semantic index for each paragraph
  • Consensus - answers and summaries based on peer-reviewed literature
  • Gradescope (for teachers) - administer, organize, access, grade, and regrade students' work
  • Flot.ai - Improve, summarize, translate, and reply to any text
Thanks to your feedback, here are the new ones ...
  • Connected Papers - find related papers
  • Hypothes.is - annotate the web / pdfs - share with other people
  • Endnote - Straightforward tool for organizing and citing papers
Relevant answer
Answer
https://www.connectedpapers.com/ -> find related papers
https://web.hypothes.is/ -> annotate the web / pdfs - share with other people
  • asked a question related to Consensus
Question
1 answer
I have a dataset of approximately 170 CT cases. The idea is that the gold standard is the consensus evaluation of two radiologists on 12 descriptive parameters and 1 conclusion. Is it conceivable that, because 170x2 readings are quite demanding, I test inter-rater agreement on a portion of the cases (like 40?) and that the remaining 130 cases are randomly read by one of the two readers if the Kappa on the 40 cases is >, say, 0.7? In this way, each of the two readers would read 40+(130/2)=105 cases instead of 170.
Is this a possible shortcut? Thanks a lot
Relevant answer
Answer
You can explore this further by reading articles on Google Scholar and the IEEE Xplore site.
  • asked a question related to Consensus
Question
1 answer
Hi,
Could anyone please suggest appropriate bioinformatics tools to generate a consensus sequence? I tried EMBOSS cons, but I am getting a number of 'n's in the generated consensus. Using this consensus I need to design primers for my further work.
Thanks
Deepti
Relevant answer
Answer
Well, it depends on what sequences/files you plan to align, but in general BioEdit or UGENE can both make alignments and generate a consensus sequence.
  • asked a question related to Consensus
Question
4 answers
Non-insane automatism is a legal defense used in some jurisdictions to argue that a person's actions were committed involuntarily due to a state of automatism, which is a condition where a person performs actions without conscious control or awareness. Unlike the defense of "insane automatism," which involves actions resulting from a mental disorder, the defense of non-insane automatism involves actions resulting from external factors that temporarily impair the person's consciousness or control over their actions.
The concept of non-insane automatism caused by electromagnetic fields (EMF) is a topic that has been debated and researched in various contexts, including legal, medical, and scientific realms. Some individuals claim that exposure to electromagnetic fields can lead to involuntary actions or states of automatism, but it's important to understand that the mainstream scientific consensus does not support a direct causal link between EMF exposure and non-insane automatism.
Electromagnetic fields are generated by the movement of charged particles and are present in various forms in our environment, including from power lines, electronic devices, and wireless technologies. While EMF exposure is a legitimate area of concern and research due to potential health effects, the idea that EMF exposure can directly cause a person to engage in involuntary actions or lose control over their behavior is not well-substantiated by scientific evidence.
Here are some important points to consider:
  1. Scientific Consensus: The mainstream scientific consensus does not support the notion that exposure to typical levels of electromagnetic fields can lead to non-insane automatism or involuntary behavior.
  2. Health Effects: EMF exposure has been studied primarily in relation to potential health effects, such as the risk of certain illnesses or conditions. Research on EMF and its effects on human health is ongoing, but the evidence for causing involuntary actions is limited.
  3. Individual Differences: Responses to EMF can vary among individuals, but the idea that EMF exposure universally causes non-insane automatism is not supported by the available research.
  4. Legal Considerations: In legal cases involving claims of non-insane automatism due to EMF, the courts typically rely on established scientific evidence and expert testimony to determine the validity of such claims.
  5. Causation and Evidence: For any claim that EMF exposure caused non-insane automatism, there would need to be robust scientific evidence demonstrating a direct cause-and-effect relationship between the two. Establishing causation in legal cases involves rigorous scientific analysis and evaluation.
If you have concerns about EMF exposure, it's important to seek information from reputable scientific sources, government health agencies, and expert organizations. If you believe that EMF exposure has caused you or someone else to engage in involuntary actions, it's advisable to consult with medical and legal professionals who can provide accurate guidance based on the available evidence and expertise.
Relevant answer
Answer
  1. Thank you for your insightful explanation of non-insane automatism and its potential connection to electromagnetic fields (EMF). It's clear that this is a complex and debated topic that spans legal, medical, and scientific domains.
  2. In the legal realm, the defence of non-insane automatism has been applied in various cases to argue that an individual's actions were involuntary due to external factors affecting their consciousness or control. However, the courts typically require rigorous evidence and expert testimony to determine the validity of such claims. A notable example is the case of R v. Quick (1973) 2 WLR 291, where the defendant experienced a hypoglycemic episode and committed a violent act. The court considered whether the involuntary state caused by the medical condition could lead to the defense of non-insane automatism.
  3. Your explanation of the relationship between EMF exposure and involuntary actions is well-rounded. While some individuals claim that EMF exposure can lead to automatism, the consensus within the scientific community does not firmly support this notion. The effects of EMF exposure have primarily been studied in the context of potential health implications, rather than direct causation of involuntary behavior. For instance, in the case of R v. Sullivan [1984] AC 156, the court evaluated a claim of automatism arising from an epileptic seizure and held that automatism arising from an internal condition is classed as insane automatism, highlighting the need for medical evidence and expert opinion.
  4. The point that responses to EMF can vary among individuals underscores the complexity of this issue. It's crucial to distinguish between concerns over potential health effects and claims of direct causation of involuntary actions. The legal consideration you provided highlights that establishing a causal link between EMF exposure and non-insane automatism would require robust scientific evidence and expert analysis.
  5. Incorporating case law, your response has effectively conveyed the importance of relying on established scientific consensus and expert testimony in legal cases involving claims of non-insane automatism due to EMF exposure. For anyone with concerns about EMF exposure, seeking guidance from reputable sources and experts is paramount. If a situation arises where EMF exposure is believed to have caused involuntary actions, consulting both medical and legal professionals would be prudent to ensure informed decisions based on the available evidence and expertise.
Thank you for shedding light on this intriguing and multifaceted topic. It serves as a reminder of the intricate interplay between law, science, and human behavior.
  • asked a question related to Consensus
Question
4 answers
I am interested in the relationship between gene dosage and the amount of protein expression. Does anyone have experience with this? Is there a consensus?
Relevant answer
Answer
As I am not certain what the product is that you may be working on, my response may be a bit more general.
Gene dosage is the number of copies of a particular gene present in a genome. Gene dosage is related to the amount of gene product (proteins or functional RNAs) the cell is able to express. Since a gene acts as a template, the number of templates in the cell contributes to the amount of gene product able to be produced.
Generally speaking, more copies of a gene — or higher gene dosage — will result in increased expression of the proteins for which the genes code. However, this is not always the case, as some genes are regulated by other factors that affect their expression levels. For example, some genes are dosage-sensitive, meaning that changes in their copy number can have significant phenotypic consequences, such as diseases or developmental defects.
An example of a dosage-sensitive gene is HBB, which codes for the beta-subunit of hemoglobin. Humans normally have two copies of this gene, one from each parent. However, some people inherit a mutated version of this gene that causes sickle cell anemia, a blood disorder that affects the shape and function of red blood cells. People who have one normal and one mutated copy of HBB are carriers of sickle cell anemia, and they produce half normal and half abnormal hemoglobin. People who have two mutated copies of HBB have sickle cell anemia, and they produce mostly abnormal hemoglobin. Therefore, the amount of protein expression from HBB depends on the gene dosage and the type of alleles inherited.
  • asked a question related to Consensus
Question
2 answers
I am working on blockchain-based energy sharing. I have a consensus mechanism implemented in Matlab; now I want to implement a complete system. My question: can we implement the blockchain in Hyperledger etc. and run the consensus in Matlab?
Relevant answer
Answer
  • asked a question related to Consensus
Question
3 answers
This question explores the role of the consensus mechanism in ensuring the security of blockchain networks, discussing concepts such as proof-of-work, proof-of-stake, and their impact on network security.
Relevant answer
Answer
FAKE ANSWER WARNING
The answer by
Rana Hamza Shakil
was analysed by https://gptzero.me, and the result was
Your text is likely to be written entirely by AI
This person uses ChatGPT to answer questions without apparently knowing or caring whether they are right or wrong.
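Returning to the question itself: the security argument for proof-of-work is that block creation is made artificially expensive, so rewriting history means redoing the work for every subsequent block faster than the honest majority of hash power; proof-of-stake replaces burned electricity with bonded capital that can be slashed. A minimal, illustrative Python sketch of the proof-of-work search (the header string is hypothetical):

import hashlib

def mine(header, difficulty):
    # Search for a nonce whose SHA-256 digest starts with `difficulty`
    # zero hex digits; expected work grows as 16**difficulty hashes.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("prev_hash|merkle_root|timestamp", difficulty=4)
print(nonce, digest)  # verifying takes one hash; finding it took ~65,000 on average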
  • asked a question related to Consensus
Question
3 answers
Hello everybody, I'm a master's degree student. I'm working with 16S data on some environmental samples. After all the cleaning, denoising, etc., I now have an object that stores my sequences, their taxonomic classification, and a table of ASV counts per sample linked to their taxonomic classification.
The question is, what should I do with the counts for assessing diversity metrics? Should I transform them prior to the calculation of the indices, or should I transform them according to the index/distance I want to assess? Where can I find some resources on these problems and related ones to study this further?
I know that these questions may be very simple ones, but I'm lost.
As far as I know there is no consensus on the statistical operation of transforming the data, but I cannot leave the counts raw because of the compositional nature of the data.
Please help
Relevant answer
Answer
Assessing diversity metrics in 16S data is an important step in analyzing microbial communities. Handling count data in this context can be challenging due to the compositional nature of the data, as you mentioned. While there is no one-size-fits-all approach, there are several techniques and considerations you can explore. Here are some suggestions:
  1. Transformations for diversity metrics: The choice of transformation depends on the diversity metric you want to assess. Common transformations include rarefaction, normalization (e.g., by library size or cumulative sum scaling), or transformations that aim to address compositionality, such as log-ratio transformations (e.g., centered log-ratio, clr transformation) or Hellinger transformation. Different transformations may be more suitable for specific diversity metrics, so it's essential to consider the metric's assumptions and properties. (A small numeric sketch of the clr transform appears after this answer.)
  2. Compositional data analysis (CoDA): Compositional data analysis provides a statistical framework to analyze and interpret compositional data. It accounts for the constrained nature of relative abundance data by working on transformed data. CoDA methods, such as ALDEx2 or ANCOM, can help identify differentially abundant features between groups while considering the compositional structure.
  3. Multivariate analyses: If you want to explore the overall community structure and relationships, multivariate techniques like principal component analysis (PCA), correspondence analysis (CA), or non-metric multidimensional scaling (NMDS) can be employed. It's advisable to perform these analyses on transformed data to mitigate the effects of compositionality.
  4. Research articles and resources: To delve deeper into the subject, you can refer to scientific articles and resources that discuss the statistical analysis of 16S data. Some useful references include: "Microbiome Analysis Methods" by Paul J. McMurdie and Susan Holmes; "A guide to statistical analysis in microbial ecology: a community-focused, living review of multivariate data analyses" by P. L. Buttigieg and A. Ramette; "Statistical analysis of microbiome data with R" by Yinglin Xia et al.; and "microbiomeSeq: An R package for analysis of microbial communities in an environmental context" by Alfred Ssekagiri et al. These resources provide insights into various statistical approaches, transformations, and analysis techniques for 16S data.
Remember that there is ongoing research in the field, and best practices continue to evolve. It's important to critically evaluate the methods, consider the specific characteristics of your data, and consult with your advisor or peers with expertise in microbiome analysis to make informed decisions about data transformations and diversity metric assessment.
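To illustrate point 1 above, here is a small numeric sketch of the centered log-ratio (clr) transform in Python; note that the pseudocount used to handle zeros is itself an assumption for which no consensus value exists:

import numpy as np

counts = np.array([[10, 90, 0],      # rows = samples, columns = ASVs
                   [20, 60, 20]], dtype=float)
pseudo = counts + 0.5                # simple pseudocount, since log(0) is undefined
comp = pseudo / pseudo.sum(axis=1, keepdims=True)           # closure to proportions
clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
print(clr)  # each row sums to ~0; Euclidean distance on clr rows = Aitchison distance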
  • asked a question related to Consensus
Question
2 answers
A general methodology question about reaching consensus in the Delphi method:
When we have a Likert-scale questionnaire for our experts to fill in, the consensus criterion is a mode above 5 for an item, and we have 11 items in total. In the first round, many items reached a mode above 5. Do we exclude them from the next round and only ask about the other items which haven't reached the consensus criterion? Or can we do the opposite and exclude the items which haven't reached the criterion, and include the items that have, to narrow them down, as the goal is to limit the number of items to 4-5?
Thank you in advance
Relevant answer
Answer
In a quantitative Delphi method, the goal is to reach a consensus among a panel of experts on a particular topic through multiple rounds of surveys/questionnaires. The consensus criteria may vary depending on the study's objectives, but in general, it is agreed upon beforehand and should be adhered to throughout the Delphi process.
In your case, the consensus criteria for each item is having a mode above 5 on a Likert scale questionnaire. If multiple items have reached this consensus in the first round, there are two possible ways to proceed in the next round:
  1. Exclude the items that have already reached consensus and focus on the remaining items that have not yet reached the consensus criteria. This approach can help to identify which items still need further discussion and refinement to reach consensus.
  2. Include the items that have already reached consensus in the next round but narrow down the focus of the discussion to those items. This approach can help to refine the wording and phrasing of the items that have already reached consensus and may lead to further refinement and consensus.
Both approaches have their advantages and disadvantages, and the decision on which approach to use ultimately depends on the study's objectives and the consensus criteria. However, it is important to ensure that the consensus criteria are consistently applied throughout the Delphi process and that any changes to the process are justified and transparent.
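Whichever set of items is carried forward, the round-to-round bookkeeping for a mode-based criterion is easy to script. A toy pandas sketch, with invented item names and ratings on a 1-7 scale:

import pandas as pd

ratings = pd.DataFrame({        # rows = experts, columns = Likert items
    "item1": [6, 7, 6, 5, 6],
    "item2": [3, 4, 6, 2, 3],
    "item3": [7, 7, 6, 7, 5],
})
modes = ratings.mode().iloc[0]  # first mode per item
reached = modes[modes > 5].index.tolist()
print(reached)                  # ['item1', 'item3'] meet the mode-above-5 rule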
  • asked a question related to Consensus
Question
1 answer
Some propose that the presence of gravitational time dilation and the effect on redshifted photons means that gravitons can travel faster than light. Can someone elaborate and explain why exactly this should indicate FTL gravitons? What is the consensus on this effect? And, in general, do you think it is true?
Relevant answer
Answer
  • asked a question related to Consensus
Question
1 answer
I am new to the world of Bayesian phylogenetics and I am trying to get my head around the two types of consensus tree MrBayes offers. I understand the Majority-Rule consensus but I am struggling to grasp the allcompat option. Is there another name for it which I may be more familiar with? Any help would be much appreciated!
Hannah
Relevant answer
Answer
Hi Hannah,
allcompat is not another name for the strict consensus: in MrBayes it is the 50% majority-rule tree with all remaining compatible groupings added (sometimes called a 'greedy' consensus). A strict consensus, by contrast, keeps only the clades found in every sampled tree.
  • asked a question related to Consensus
Question
1 answer
In recent decades I've noticed a tendency in organizations to attempt to solve problems by assembling, early on, large groups of varying experience levels and backgrounds, featuring lots of discussion meetings and the pursuit of consensus. Often the results have not been excellent, which I find unsurprising. Thoughts?
Relevant answer
Answer
Fairly on point
  • asked a question related to Consensus
Question
2 answers
Democracy and Consensus in African Traditional politics: A Plea for a non-party??
I need the full text
Relevant answer
Answer
Thank you Sir @Mohamed
  • asked a question related to Consensus
Question
5 answers
Does anyone have experience purifying protein directly from buffered complex media (recipe from EasySelect system) over a Ni-NTA column? I am trying to decide if I will need to run the media directly from my AKTA sample pump or if I should use TFF to concentrate and exchange first. Either way is fine, just curious what the consensus is. Also considering ammonium sulfate precip as one process pathway.
Relevant answer
Answer
Ryan Garrigues thanks for the follow-up. This now seems like ages ago, but still good to address. For our paper we outsourced the fermentation to achieve a scale sufficient for the animal study. Our CRO at the University of GA did use TFF to concentrate the supe prior to Ni-NTA purification. We also purchased a small-scale TFF system (MiniMate from Pall) not long after posting this question and it works great. However, I have found that by simply adding 10xPBS to the media and supplementing with NaCl to a final 0.5M concentration, I am able to directly run the media over a HiTrap column using a sample pump and purify the same day. The limitation is the flow rate of the column, so I tend to grow no more than 1 liter at a time. I have used this system to successfully purify nanobody constructs and scFvs with excellent initial purity and then polish them by SEC.
Best,
Thomas
  • asked a question related to Consensus
Question
2 answers
What is the current consensus among historians and other scholars? Apart from his alleged relationship with Mrs. Crawford that compromised his political career, did Dilke have other clandestine romantic liaisons?
Relevant answer
Answer
I posted this question after reading the fascinating account of Dilke's career by Roy Jenkins, Sir Charles Dilke: A Victorian Tragedy, because I saw that a number of researchers on RG had posted material about Dilke. Is there no current interest in the Dilke issue?
  • asked a question related to Consensus
Question
3 answers
I have performed a DAP-seq experiment to enrich the putative NAC binding sites in the genome. By Sanger sequencing I have obtained certain reads with the consensus NAC motif. Is it possible to identify the CDS of the gene downstream of the obtained promoter region harbouring the NAC site?
Relevant answer
Answer
Hi
just submit the sequence you obtained to a BLAST-type tool, such as BLAT on the UCSC website;
be sure of the genome and assembly.
all the best
fred
  • asked a question related to Consensus
Question
2 answers
The statistic most commonly used for interrater reliability is Cohen’s kappa, but some argue that it’s overly conservative in some situations (Feinstein & Cicchetti, 1990. High agreement but low kappa). For binary outcomes with a pair of coders, for example, if the probability of chance agreement is high, as few as two disagreements out of 30 could be enough to pull kappa below levels considered acceptable. I’m wondering whether any consensus has emerged for a solution to this problem, or how others address it.
Relevant answer
Answer
Perfect agreement
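The paradox is easy to reproduce numerically: with skewed marginals, expected chance agreement is so high that even 28 agreements out of 30 can yield a kappa near zero (here slightly negative). Prevalence-adjusted bias-adjusted kappa (PABAK) and Gwet's AC1 are the remedies most often proposed for exactly this situation. A short Python sketch with invented ratings:

from sklearn.metrics import cohen_kappa_score

rater_a = ["yes"] * 28 + ["yes", "no"]  # 30 binary codings, 2 disagreements
rater_b = ["yes"] * 28 + ["no", "yes"]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"raw agreement = {agreement:.3f}")                    # 0.933
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.3f}")  # about -0.03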
  • asked a question related to Consensus
Question
1 answer
I do not think that there is a consensus, and I would like to collect opinions on the most reliable soluble platelet activation marker in plasma.
Thank you
Relevant answer
A reliable plasma marker of platelet activation: does it exist?
  • asked a question related to Consensus
Question
24 answers
It can be said that Thomas Kuhn’s loop is active only when the working of paradigms generates abnormalities. If a paradigm does not generate abnormalities it is a golden paradigm.
Hence, the Kuhn’s loop can be envisioned as moving from paradigm to paradigm correcting abnormalities until there are no more abnormalities to correct.
In other words, the Kuhn’s loop works its way up from non-golden paradigms to the golden paradigm.
And this raises the question: can Thomas Kuhn's scientific revolution loop be seen as the road that leads, in the end, to a golden-paradigm-ruled world?
I think the answer is Yes, what do you think?
Feel free to share your own views on the question!
Relevant answer
Over the past decades, a number of sources of globalization have emerged. One of them is technological progress, which has led to a sharp reduction in transport and communication costs, a significant reduction the costs of processing, storing and using information.
The second source of globalization is trade liberalization and other forms of economic liberalization that have curtailed protectionist policies and made world trade freer. As a result, tariffs have been substantially reduced and many other barriers to trade in goods and services have been removed. Other liberalization measures have led to an increase in the movement of capital and other factors of production.
The third source of globalization can be considered the significant expansion of the scope of organizations, which became possible both as a result of technological progress and of the wider horizons of management opened up by new means of communication. Thus, many companies that previously focused only on local markets have expanded their production and marketing capabilities, reaching the national, multinational, international and even global level.
Globalization brings not only benefits, it is fraught with negative consequences or potential problems, which some of its critics see as a great danger.
One of the main problems is related to the question: who benefits from globalization? In fact, most of the benefits are rich countries or individuals. The unfair distribution of the benefits of globalization gives rise to the threat of conflicts at the regional, national and international levels.
The second problem is related to potential regional or global instability due to the interdependence of national economies at the global level. Local economic fluctuations or crises in one country may have regional or even global implications.
The third set of problems posed by globalization is caused by the fear that control over the economies of individual countries may shift from sovereign governments to other hands, including the most powerful states, multinational or global corporations and international organizations.
Because of this, some see globalization as an attempt to undermine national sovereignty. For this reason, globalization can make national leaders feel helpless before its forces, and the electorate feel antipathy towards it. Such sentiments can easily turn into extreme nationalism and xenophobia with calls for protectionism, and lead to the growth of extremist political movements, which is potentially fraught with serious conflicts.
The problem generated by globalization - the infringement of national sovereignty and the independence of political leaders - can also be largely resolved on the basis of international cooperation, for example, by a clear delineation of the powers of the parties, i.e. national governments and their leaders, on the one hand, and international organizations and multinational or global corporations, on the other. The very involvement of political leaders in building the necessary institutions to deal with these and other globalization-related issues will help them regain the sense that they are in control of their future and in control of their positions in the world.
A globalized world? In the meantime, unfortunately, the world is moving in the opposite direction, along the path of political and military dictate of the strong over the weak, which, in the context of the globalization of all aspects of the life of the world community, is fraught with global confrontation.
The current crisis of the Western economy is not a recession because it is not cyclical and is not limited to 12-16 months. What is happening in the US and Europe today is a structural crisis, a process that began in the fourth quarter of 2021 and will continue for at least five years without interruption. However, the West does not understand the causes and essence of the crisis, because they do not have theories describing it. That is why, according to the economist, the American and European authorities are doing stupid things instead of effective measures to resolve problems.
It was impossible to avoid this crisis, because they went too far. They have expanded private consumption so much that they can no longer sustain it. One key number should be named. There is an indicator in the United States that is not disclosed in public discussion: the level of price growth for all industrial goods, not only for final goods entering wholesale trade, but for everything from raw materials to the final product. For the first time, the rise in prices for manufactured goods has exceeded the level of the late 1970s. The previous peak was at the end of 1947, at twenty-three-and-something percent.
The entire system of socio-political management in the West, both in the USA and in Europe, is built through representatives of the middle class, qualified consumers. Today this instrument is being destroyed. Instead of the middle class, new poor appear, who have middle-class attitudes but no money.
The sanctions pressure on Russia has exacerbated the economic problems of the West. European financiers note that EU politicians are afraid to take responsibility for decisions taken under the slogans of transatlantic solidarity and assistance to Ukraine.
In fact, this whole situation with global confrontation and the breakdown of the dollar system is disastrous for the United States not by economic factors, but by intellectual ones. Roughly speaking, Washington will undoubtedly lose to Moscow only because the US does not even have a concept of a plan to solve the colossal economic problems and save the dollar system.
Intellectual life in the US and Russia goes in opposite directions. The US has nothing left for a long time. There, no one can imagine even a weak positive scenario. The complete absence of any thought, not to mention the concept.
  • asked a question related to Consensus
Question
1 answer
I am using OVL to determine the amount of overlap between two probability distributions. I have a relatively high degree of overlap, averaging around 0.808342. I would like to know if there is a consensus in the field on what threshold for OVL is considered good overlap.
thank you for your input.
Relevant answer
Answer
I don't think much consideration of OVL exists in mainstream statistics, as it seems to have obvious flaws in not being suited to various obvious questions that would be asked of data-sets. However this reference might be a start (found by a google search); https://www.proquest.com/openview/8d0f05a8ff707a5bb6891295edb8de9c
"BEHAVIOR AND PROPERTIES OF THE OVERLAPPING COEFFICIENT AS A MEASURE OF AGREEMENT BETWEEN DISTRIBUTIONS (ASSOCIATION, DISSIMILARITY)", 1984
  • asked a question related to Consensus
Question
3 answers
I was only able to find these articles. But there is no consensus yet.
- K.I. Triantou, D.I. Pantelis, V. Guipont, M. Jeandin, Microstructure and tribological behavior of copper and composite copper+alumina cold sprayed coatings for various alumina contents, Wear 336-337 (2015) 96–107.
- T. Chandanayaka, F. Azarmi, Investigation on the Effect of Reinforcement Particle Size on the Mechanical Properties of the Cold Sprayed Ni-Ni3Al, Journal of Materials Engineering and Performance 23(5) (2014) 1815–1822.
Are there other works on this problem?
Relevant answer
Answer
Here it is necessary to distinguish between the size of inclusions of aluminum oxide in the coating and the particle size of corundum in the powder. Of course, the smaller the size of the inclusions of aluminum oxide in the coating, the better (the higher the wear resistance of the coating, the better its mechanical properties, machinability, etc.). However, in the powder, on the contrary, sufficiently large particles of the abrasive component are required, which improve the flowability of the powder mixture and contribute to better adhesion of the metal component of the coating. To overcome this contradiction (large grains of corundum in the powder, but small inclusions of corundum in the coating), it is necessary to ensure that the grains of corundum are broken into small pieces from hitting the surface of the substrate during the spraying process. In order for the abrasive grains to break upon impact, a sufficiently high speed is required, but the properties of the abrasive itself are no less important: the less strong (more brittle) the abrasive grains, the smaller will be its inclusions in the coating. In this sense, corundum is not an ideal abrasive for use in powder mixtures for spraying with low pressure gas-dynamic spraying devices (Dimet and others). It is much more profitable to use more brittle abrasive materials: glass powders, cristobalite slags, etc.
  • asked a question related to Consensus
Question
4 answers
What could be possible problems or topics of the blockchain to address in a master thesis ?
So far I have been researching the variety of consensus mechanisms, including their pros and cons. However, my idea to use this as a topic was shattered by the realization that some very recent studies had already done extensive work here. I did not feel there was enough room to go over this again, since I could not find another angle for new research.
I can't really put my finger on an actual scientific question to answer in my thesis yet.
I'm looking for some inspiration, guidance and tips.
Kind regards to the community & stay healthy everyone!
Relevant answer
Answer
You can apply blockchain to drone communication and data acquisition. Both are vulnerable to cyber threats. Also, to automate the process, AI can be an excellent candidate for merging with it. I have some papers regarding this. All the best.
Blockchain in AI-based drone communication
Blockchain in data collection using drone
  • asked a question related to Consensus
Question
1 answer
There seems to be a consensus that the activation and function of AhR influence the landscape of T cell populations, but is there a group that is more severely affected?
Relevant answer
Answer
T helper 17 cells (Th17) are the most affected. AhR activation may directly or indirectly also modulate the commitment of Tregs, Th1 and Th2.
Please refer to the articles attached for more information.
Best.
  • asked a question related to Consensus
Question
3 answers
Focusing on the vulnerabilities in consensus protocols
Relevant answer
Answer
In a permissioned blockchain, building a consensus protocol becomes largely a matter of accounting for the vulnerabilities that may exist within other protocols. It is similar to making some variables in a programming language general to the set, or looking for a subset to accommodate such variables.
Check this:
I hope this is helpful.
  • asked a question related to Consensus
Question
2 answers
I would like to use a practical and yet rigorous tool to build a consensus among a large number of stakeholders in an educational intervention project.
I would appreciate experts' opinions in suggesting research tools, comparing popular tools e.g. Delphi, Group Concept Mapping, etc.
I would also appreciate practical advice from all researchers.
Many Thanks in Advance
Relevant answer
Answer
Hello, please take a look at this research. These statistical coefficients are used for determining the conformity or reliability of experts' evaluations, and the Kendall coefficient with a value greater than or equal to 0.7 was considered as the stopping index for the procedure of the Delphi method.
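For readers who want to compute that stopping index themselves: Kendall's W compares the observed spread of the items' rank sums with the spread under perfect agreement (W = 1 means all experts rank the items identically, W = 0 means no agreement). A small Python sketch with invented expert scores and no tie correction:

import numpy as np
from scipy.stats import rankdata

scores = np.array([[7, 5, 3, 1],   # rows = experts, columns = items
                   [6, 5, 4, 2],
                   [6, 7, 3, 2]])
m, n = scores.shape
ranks = np.apply_along_axis(rankdata, 1, scores)  # rank items within each expert
s = ((ranks.sum(axis=0) - m * (n + 1) / 2) ** 2).sum()
w = 12 * s / (m ** 2 * (n ** 3 - n))
print(round(w, 3))  # 0.911 here, above the 0.7 stopping threshold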
  • asked a question related to Consensus
Question
3 answers
The Fuzzy Delphi is a more advanced version of the Delphi Method in that it utilizes triangulation statistics to determine the distance between the levels of consensus within the expert panel. Yet, is there any other method that can be used instead of the fuzzy Delphi method to select the most suitable criteria?
Relevant answer
Answer
Hello, please take a look at this research. These statistical coefficients are used for determining the conformity or reliability of experts' evaluations, and the Kendall coefficient with a value greater than or equal to 0.7 was considered as the stopping index for the procedure of the Delphi method.
  • asked a question related to Consensus
Question
3 answers
Delphi method to develop consensus.
Content validation: a number of experts rate each item of the questionnaire according to its relevance and clarity.
Relevant answer
Answer
Hello, please take a look at this research. These statistical coefficients are used for determining the conformity or reliability of experts' evaluations, and the Kendall coefficient with a value greater than or equal to 0.7 was considered as the stopping index for the procedure of the Delphi method.
  • asked a question related to Consensus
Question
4 answers
I need code for Consensus + Innovations and OCD (Optimality Conditions Decomposition) in any programming language, preferably Matlab or R.
Relevant answer
Answer
Aamir Nawaz, can you provide code for Consensus + Innovations and Optimality Conditions Decomposition?
I would appreciate your help.
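While I cannot point to ready-made code, the consensus + innovations recursion itself is only a few lines: each agent mixes a consensus term (pulling toward its neighbours' estimates) with an innovation term (pulling toward its own measurement). Below is a hedged Python sketch for a scalar parameter on a ring network; the gain sequences and topology are illustrative assumptions rather than tuned values, and the structure translates directly to Matlab or R:

import numpy as np

rng = np.random.default_rng(1)
n = 6                                      # number of agents
theta = 5.0                                # unknown common parameter
y = theta + 0.5 * rng.standard_normal(n)   # one noisy measurement per agent

A = np.zeros((n, n))                       # ring topology adjacency matrix
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
deg = A.sum(axis=1)

x = np.zeros(n)                            # local estimates
for k in range(2000):
    a_k = 1.0 / (k + 2)                    # innovation gain (decays faster)
    b_k = 0.25 / (k + 2) ** 0.5            # consensus gain (decays slower)
    x = x + b_k * (A @ x - deg * x) + a_k * (y - x)

print(x)          # all entries close to one another and to y.mean()
print(y.mean())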
  • asked a question related to Consensus
Question
3 answers
Hello researchers, I am interested in finding new alternatives to the blockchain consensus algorithms that have existed so far. I am looking for academic work from the last two years at most, describing new methods for how a scalable cryptocurrency should work. Please feel free to answer the question and link some references.
Relevant answer
Answer
You can take a look at the following papers:
  • asked a question related to Consensus
Question
4 answers
COVID-19 is mainly a respiratory disease that affects the lung, although other organ structures with endothelium seem to be affected too.
When should we do imaging?
What is the aim of the imaging?
How can it help with management?
Do you agree with the following consensus statement?
How will you adjust your own practice and difficulties encountered? Why?
Ref:
The Role of Chest Imaging in Patient Management during the COVID-19 Pandemic: A Multinational Consensus Statement from the Fleischner Society. Chest. 2020 Apr 07.
Relevant answer
Answer
I personally believe that imaging examinations in COVID should be rapid, simple and executable at the patient's bedside, and therefore I believe that the most useful is LUS (lung ultrasound).
Unfortunately, the chest X-ray, which has been shown to be of little use, is still used today.
The purpose of LUS is to stage the pathology in order to predict its evolution, unfavorable or favorable. With LUS and blood gas analysis we can determine which patients should be discharged home in a period when hospital beds are scarce everywhere.
  • asked a question related to Consensus
Question
14 answers
Dear all,
Recently I met a problem. We screened out a gene A which behaves like a tumor suppressor gene. It negatively correlates with clinical patients' survival, and it strongly affects tumor cell proliferation. When we knock it down, the proliferation of a lung tumor cell line increases, while the radiosensitivity seems to decrease. My question is: what is commonly expected of radiosensitivity when proliferation is upregulated by one gene? If my results are repeatable, is it strange that inhibiting a tumor suppressor gene changes radiosensitivity in this way? I read some papers about the relationship between cell proliferation and radiosensitivity, but found no consensus opinion. What do you think about this kind of thing?
Relevant answer
Answer
What Malcom refers to is the "Law of Bergonie & Tribondeau". This is misunderstood in the Anglophone literature, due to a mistranslation (from their French) of their reported observations. A more correct translation of their reported "appear to be more radiosensitive" is not the "are more radiosensitive" as translated in Radiation Research. What best explains the B&T observations is that a cell that has been sterilized (lost its reproductive integrity or clonogenicity) will not be seen to be sterilized until it attempts to divide (mitosis) and fails (most apoptosis in solid tissues occurs after a failed mitosis). This is best exemplified in lethally irradiated hibernating squirrels, which are very radioresistant compared to active squirrels, but once brought out of hibernation, die with the same time course post-hibernation as that observed for normally active squirrels post-irradiation (see CS Lange, DPhil thesis, Oxford University, Fac Med, Bodleian Library, 1968). Hence, radiosensitivity should be measured in terms of loss of reproductive integrity (ability to proliferate ad infinitum, or clonogenicity, where a sufficiently large number of divisions (at least 10) are taken as the endpoint).
  • asked a question related to Consensus
Question
2 answers
Hi everyone
I am starting a Delphi consensus study, which will include ranking responses into the top 5 (most important). Can anyone help (with any references) to guide the data analysis?
Many thanks,
Concettina
  • asked a question related to Consensus
Question
3 answers
I am planning a study using concept mapping that will involve procrustes analysis to compare across group visualizations, and I am wondering if it is possible to analyze the residuals or consensus proportions from GPA alongside data from surveys, scores on a behavioral measure, etc., by way of ANOVA or correlational study?
Relevant answer
Answer
Use the ANOVA to get more useful information in the study
  • asked a question related to Consensus
Question
3 answers
Dear all,
First I will share some details about my research:
- My dataset consists of 23 statements.
- I would like to analyze my dataset for two subgroups (Advisers, HCP that do recommend intervention x and Non-advisers, HCP that do not recommend intervention x)
- The aim of my study is to identify differences in how these two groups (slightly) agreed or (slightly) disagreed with the 23 statements.
- Hence, we performed a PCA. This PCA graph of variables and their loadings shows us statements advisers/non-advisers agreed upon (most of them (slightly) agreed or (slightly) disagreed; there was 'consensus') and statements they didn't agree upon (there was a lot of variation in how they answered the statement).
- Lots of variance (high loadings) indicates that advisers or non-advisers did not reach "consensus" as a group. Hence, that particular statement is probably not related to being an adviser or non-adviser.
- At this moment I am not sure if what I am doing is valid or that I overlook some important points.
- Furthermore, I am wondering if there is a technique on how to compare outcomes of PCA's of two groups within the same dataset?
Kind regards,
Anne
Relevant answer
Answer
If you do two different PCAs and plot them side by side, you should see the differences like two different hours on a clock.
But why don't you use a decision tree, e.g., with rpart? Here is the package: https://cran.r-project.org/web/packages/rpart/index.html
Just take the example of the Titanic data that ships with the rpart library and use OpinionA/OpinionB instead of Survived/NotSurvived. Play around and maybe you'll get something.
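For readers working in Python rather than R, the same idea can be sketched with scikit-learn; the Likert answers and the adviser/non-adviser labels below are made up:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(80, 23))        # hypothetical answers to the 23 statements
y = (X[:, 4] >= 4).astype(int)               # hypothetical adviser / non-adviser label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"statement_{i+1}" for i in range(23)]))

The printed splits show which statements best separate the two groups, which is exactly the comparison the PCA was being used for.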
Best!!
  • asked a question related to Consensus
Question
9 answers
[Information] Special Issue - Intelligent Control and Robotics
Relevant answer
Thanks for sharing.
  • asked a question related to Consensus
Question
2 answers
How can one classify distributed-systems (traditional) algorithms like XFT, RAFT, Paxos, Sieve, BFT, and DAG together with blockchain (modern) algorithms like PoW, PoS, PoA, etc.?
Please refer to these diagrams:
Relevant answer
Answer
Roughly the following should work:

Centralized
    Traditional
        Authentication, Authorization, and Accounting
        Trusted Third Party Authentication
    Permissioned
        Authentication Based

Partially Decentralized
    Distributed Permissioned
        Centralized Source
            Authentication Based
            Shared Stake
            Voting Based
        Reputation-Based

Decentralized
    Permissionless
        Consensus Techniques
            Validation Based
                Block-Mining
                Non-Mining

Data Structure
    Non-branching
        Blockchain
    Branching
        DAG
        Merkle Tree
This is just what I would do, based on current knowledge. As to what goes where, I'll let you dig into that. Note that DAG isn't an algorithm but a data structure; some of the same algorithms can be applied, but many DAG systems aren't mining-based but transaction-based, because DAG is generally used for a speed advantage over blockchain or for use in IoT systems. I hope this helps.
  • asked a question related to Consensus
Question
2 answers
I want to create a consensus FASTA sequence from long-read sequencing BAM files. I have used
samtools mpileup -uf reference.fasta file.bam | bcftools call -c | vcfutils.pl vcf2fq > sample.fq
seqtk seq -a sample.fq > sample.fasta
but variants present (in abundance) in the reads do not make it into the FASTA file. I have added a lot of parameters, without success. Is there any other tool that I could use to create a consensus FASTA file from long-read sequencing BAM files?
Relevant answer
Answer
Try samtools and bcftools to generate a consensus FASTQ, then convert the FASTQ to FASTA using seqtk:
samtools mpileup -uf REFERENCE.fasta SAMPLE.bam | bcftools call -c | vcfutils.pl vcf2fq > output_cns.fastq
seqtk seq -aQ64 -q20 -n N output_cns.fastq > output_cns.fasta
The seqtk command above masks bases with quality lower than 20 as N; you may adjust this according to the aim of your study.
  • asked a question related to Consensus
Question
1 answer
I am attempting to perform Consensus Molecular Subtyping (CMS) on colorectal cancer specimens that my lab has collected over the years. I am struggling to find a method that is streamlined, and since I am not very familiar with bioinformatics, I was hoping that someone might be able to help me understand what I need to do step-by-step to accomplish this. Thank you!
Relevant answer
Answer
As far as I know, the practical, step-by-step route to your question is the 'CMSclassifier' R package released alongside Guinney et al. (Nature Medicine, 2015), which takes a normalized gene-expression matrix as input; the study summarized below applied that established classifier. Consensus molecular subtyping is an RNA expression-based classification system for colorectal cancer (CRC). Genomic alterations accumulate during CRC pathogenesis, including the premalignant adenoma stage, leading to changes in RNA expression. Only a minority of adenomas progress to malignancies, a transition that is associated with specific DNA copy number aberrations or microsatellite instability (MSI). We aimed to investigate whether colorectal adenomas can already be stratified into consensus molecular subtype (CMS) classes, and whether specific CMS classes are related to the presence of specific DNA copy number aberrations associated with progression to malignancy. RNA sequencing was performed on 62 adenomas and 59 CRCs. MSI status was determined with polymerase chain reaction-based methodology. DNA copy number was assessed by low-coverage DNA sequencing (n = 30) or array-comparative genomic hybridisation (n = 32). Adenomas were classified into CMS classes together with CRCs from the study cohort and from The Cancer Genome Atlas (n = 556), by use of the established CMS classifier. As a result, 54 of 62 (87%) adenomas were classified according to the CMS. The CMS3 'metabolic subtype', which was least common among CRCs, was most prevalent among adenomas (n = 45; 73%). One of the two adenomas showing MSI was classified as CMS1 (2%), the 'MSI immune' subtype. Eight adenomas (13%) were classified as the 'canonical' CMS2. No adenomas were classified as the 'mesenchymal' CMS4, consistent with the fact that adenomas lack invasion-associated stroma. The distribution of the CMS classes among adenomas was confirmed in an independent series. CMS3 was enriched with adenomas at low risk of progressing to CRC, whereas relatively more high-risk adenomas were observed in CMS2. We conclude that adenomas can be stratified into the CMS classes. Considering that CMS1 and CMS2 expression signatures may mark adenomas at increased risk of progression, the distribution of the CMS classes among adenomas is consistent with the proportion of adenomas expected to progress to CRC. © 2018 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland.
  • asked a question related to Consensus
Question
2 answers
I am using PAUP4 for parsimony analysis. The differences in branch length seem important enough to retain, but the number of trees that are being retained necessitates building a consensus tree.
Is it appropriate/possible to build a consensus tree that averages the branch lengths, and if so, how would this be achieved in PAUP4?
Relevant answer
Answer
When you say build a consensus tree - this is a bootstrap consensus tree or a strict consensus of a set of equally parsimonious trees? In both cases, since each tree will have a different topology and set of branch lengths, it is not appropriate to calculate some average, thus PAUP does not do this for you.
What you want to do is re-optimize branch lengths on your consensus tree. It has been a long time since I used PAUP so I cannot offer more specifics about the options, but you can optimize branch lengths on a fixed tree (for example your bootstrap consensus tree) with the "PScores" command. To see the options for PScores in the PAUP command-line interface, try "PScores ?". To see all PAUP commands in the command-line interface, simply type "?".
You might also consider using the "DescribeTrees" command to look at the consistency index (CI) and retention index (RI) values. This will give you an idea of how much homoplasy is in the data and how informative the data are, respectively.
I say this with the caveat that you are assuming the consensus tree is the true tree. The branch lengths on that tree might not be particularly meaningful and depending on your goals, presenting the cladogram may be more appropriate. I assumed that you are working on morphological data.
  • asked a question related to Consensus
Question
1 answer
In group decision making, consensus is very important, as it makes it possible for a group of decision makers to arrive at a mutual weighting of the importance of factors.
Relevant answer
Answer
Can you please provide another formulation of the question?
  • asked a question related to Consensus
Question
3 answers
According to Popov, the IOTA DAG, via the GHOST protocol, changes the blockchain data structure from a chain into a tree, which improves confirmation times and the overall security of the network (2018). Popov's argument is essentially an improvement on the Bitcoin blockchain's consensus and block-creation weaknesses and its data structure. As such, the employment of a DAG should provide a significant speed improvement for the DLT protocol.
On the other hand, it is not certain at this point how the IOTA DAG is able to improve performance; moreover, IoT devices are predominantly relatively small-scale chips that could not support a heavy hashing algorithm such as SHA-256 with a proof-of-work algorithm for solving the nonce.
Relevant answer
Answer
The engineering perspective on it is that it increases the computation rate by dividing the hashing work over many users, and invests in the high submission rate to aggregate a sequence that is more difficult to backtrack, comparable to Bitcoin's PoW.
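Since the question is about structure rather than any specific implementation, a toy Python sketch (emphatically not IOTA's actual protocol: parent selection here is uniform random, whereas IOTA uses weighted random walks) may make the chain-vs-DAG difference concrete:

import hashlib, random

class Tx:
    def __init__(self, payload, parents):
        self.parents = parents                   # earlier transactions this one approves
        data = payload + "".join(p.hash for p in parents)
        self.hash = hashlib.sha256(data.encode()).hexdigest()

random.seed(7)
txs = [Tx("genesis", [])]
for i in range(12):
    k = min(2, len(txs))
    parents = random.sample(txs, k)              # approve two random earlier transactions
    txs.append(Tx(f"tx{i}", parents))

approved = {p for t in txs for p in t.parents}
tips = [t for t in txs if t not in approved]
print(len(tips), "tips out of", len(txs), "transactions")  # several tips: the ledger branches

With more than one unapproved tip at any moment, the ledger forms a branching DAG rather than a single chain, which is where the claimed throughput advantage comes from.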
  • asked a question related to Consensus
Question
20 answers
In recent years, a question has been arising for me, based on my experience of different academic and scientific activities. Contemporary humanism, obviously very good and peaceful, including everybody, is more and more changing its approach to truth, treating it as something that actually equals the consensus of the largest possible (inclusive and democratic) group. It seems we have somehow forgotten the mission or quest of past centuries: that there IS some Truth, and that we are to discover it, or at least come one step closer than our predecessors. Now we tend to be more and more satisfied with "having OUR truth" about something, actually a mere consensus within a particular group. We are somewhat confusing this consensual semi-truth with the general truth (not to speak of Truth as eternal spirit or even person). In the humanities, as usual, this is more visible. The theory of a firm Truth is understood as something "ideological" and thereby dangerous, potentially threatening misuse in the service of a political party or religious authority. Are we still able to recognize this? Or is the comfortable consensus already here as "the truth"? Is it SUSTAINABLE?
Relevant answer
Answer
'Truth' has a variety of meanings, but the most common definitions refer to the state of being in accordance with facts or reality
  • asked a question related to Consensus
Question
3 answers
What are the standards and recommendations for data anonymization, with consideration of dataset traceability?
  • asked a question related to Consensus
Question
5 answers
In order to save energy on consensus in a consortium blockchain, I want to limit the number of members. But I am curious how this impacts consensus resilience and vulnerability to attacks. Also, is there an optimum size for the consortium? I would appreciate any guidance, comments, or references to the literature.
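For the resilience side, the classic BFT bound gives a first anchor: protocols in the PBFT family require n >= 3f + 1, so a consortium of n members tolerates at most f = floor((n - 1) / 3) Byzantine members, and shrinking the consortium directly shrinks attack tolerance. A quick Python illustration:

# n >= 3f + 1  =>  f = (n - 1) // 3 Byzantine members tolerated
for n in [4, 7, 10, 16, 22]:
    f = (n - 1) // 3
    print(f"consortium of {n}: tolerates {f} Byzantine member(s)")

There is no single agreed optimum size; it is a trade-off between this fault tolerance, message complexity (PBFT communication grows on the order of n^2), and the energy and latency budget.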
  • asked a question related to Consensus
Question
5 answers
I'm curious why Bitcoin's inter-block time is 10 minutes while Ethereum's is only about 15 seconds. Given that both Bitcoin and Ethereum use the PoW consensus algorithm, why not reduce the inter-block time in Bitcoin to match that of Ethereum and thus increase system throughput?
Relevant answer
Answer
The Bitcoin block time was chosen to make sure disk space would not become an issue. Another reason is to minimize orphan blocks: blocks that have been solved within the blockchain network but were not accepted due to lag within the network itself. Such a block is valid but was broadcast to the network too late. In the Bitcoin blockchain these orphan blocks go to waste, as the miner that mined one gets no reward for it, which is a waste of computing power.
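A rough back-of-envelope model (in the spirit of Decker & Wattenhofer's propagation analysis) shows why short block times inflate the orphan/stale rate: if propagation takes about tau seconds and blocks arrive with mean interval T, the stale rate is roughly 1 - e^(-tau/T). The 10-second delay below is an assumption for illustration only:

import math

tau = 10.0                                   # assumed network propagation delay (s)
for label, T in [("Bitcoin", 600.0), ("Ethereum", 15.0)]:
    stale = 1 - math.exp(-tau / T)           # P(a competing block appears in transit)
    print(f"{label}: block interval {T:>5.0f}s -> ~{stale:.1%} stale blocks")

With a 600 s interval the loss is under 2%, while at 15 s it would be severe under the same delay, which is part of why Ethereum relies on GHOST-style uncle handling rather than simply discarding stale blocks.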
  • asked a question related to Consensus
Question
17 answers
What was the light-year distance to the original departure point of light arriving here and now from the most distant stellar objects?
I am not asking the travel distance, but it is fine to also mention that.
Assume the current consensus of ongoing cosmic expansion over the course of roughly 13 billion years, so that the current visible universe has a radius of 46.5B LY, meaning the original departure point would be __x__ LY maximum.
Relevant answer
Answer
I have added a short video presentation called How Far Away which helps with this question:
(The research item mentioned here has been deleted.)
Richard
  • asked a question related to Consensus
Question
2 answers
There is a strong scientific consensus that the Earth is warming and that this warming is mainly caused by human activities. But many years of consensus can mean a blockage of other theories and opinions, and the physics of the process remains unclearly explained. How can this "dimensionless" quantity of "ppm" be used in equations? The notation "ppm" is not part of the International System of Units (SI), and its meaning is ambiguous.
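For concreteness, in the climate literature "ppm" for CO2 is normally read as the dimensionless mole fraction (μmol per mol), so it enters equations as a plain number of order 10^-4; converting it to a mass concentration is then straightforward. A minimal Python sketch, assuming ideal-gas behaviour at 25 °C and 1 atm:

ppm = 420.0                        # CO2 mole fraction in umol/mol
mole_fraction = ppm * 1e-6         # dimensionless, ~4.2e-4
M_co2 = 44.01                      # molar mass of CO2, g/mol
V_m = 24.45                        # molar volume of an ideal gas at 25 C and 1 atm, L/mol
mg_per_m3 = ppm * M_co2 / V_m      # standard ppm -> mg/m3 conversion for gases
print(f"{ppm} ppm CO2 = mole fraction {mole_fraction:.1e} = {mg_per_m3:.0f} mg/m3")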
Relevant answer
Answer
Currently it is estimated that, on a global scale, approximately 97 percent of all research projects and results of research conducted in this field confirm that the development of human civilization since the first industrial revolution is responsible for the ever faster progressing global warming process. In my opinion, this is not a full consensus, and it does not imply stagnation in research and scientific development, as research is continuing and will continue. In addition, the research concerns not only climatology but also many other fields and scientific disciplines, in which the results are multifaceted even though the disciplines are organizationally independent.
Greetings, have a nice day, stay healthy!
Dariusz Prokopowicz
  • asked a question related to Consensus
Question
20 answers
There is still no consensus on whether fields are composed of particles or not. For example:
Art Hobson, There are no particles, there are only fields, American Journal of Physics 81, 211 (2013);
Robert J. Sciamanda, THERE ARE NO PARTICLES, AND THERE ARE NO FIELDS, American Journal of Physics 81, 645 (2013).
This problem arises because modern physics describes quantum phenomena at the quantum scale (subatomic particles). So, to describe fields, we have to go beyond the quantum scale and reconsider quantum phenomena at the sub-quantum level to understand what fields are made of.
Relevant answer
Answer
Theorist Sean Carroll thinks it’s time you learned the truth: All of the particles you know—including the Higgs—are actually fields, 2013
Charles Sebens, If you think of electrons as a field, then you can think of photons the same way, 2019
Art Hobson, There are no particles, there are only fields, 2013
  • asked a question related to Consensus
Question
2 answers
You are invited to participate as expert panel member in our consensus study on International Online Collaboration Competencies (IOCC). The purpose of this study is to build consensus on the key competencies for International Online Collaboration, which gained importance specifically during the COVID-19 pandemic, where global virtual teams were becoming increasingly important. 
We are interested in getting perspectives and experiences on IOCC from a panel of experts from the academic field as well as practice-based experts. As an academic expert, you have published on IOCC or virtual teamwork within the last 10 years. As a practice-based expert, you either are or have been a member/leader of a virtual team. Your experiences, perspectives and comments on key competencies for International Online Collaboration are highly appreciated.
Timeframe for the study
The iterative nature of a Delphi technique means that participants are anonymous to each other, but not to the researcher (quasi-anonymity). Your participation in the survey is voluntary. The survey will take about 20-30 minutes and can be interrupted at any time:
The first round is already completed and aimed at answering the question if the given competence domains are completely and accurately reflecting IOCC, and if a change of domain titles or wording is needed. 
The second round (12th – 23rd of November 2020) aims at reaching consensus on the results of the first round, and you are asked about your opinion on a given set of competencies developed in a previous systematic review. 
The third round (3rd - 13th of December 2020) aims at reaching consensus on the aligned competencies and your opinion on which competencies you consider as most relevant for evaluating IOCC.
What is in it for you?
The final outcome of this study will be an expert consensus on International Online Collaboration Competencies, which is important for training future workforce and for the continuous development of current workforce.
There is no compensation for participation in this study, but the results will be of importance for higher education and the future workforce.
In case you have a colleague who is interested in this topic or research, please feel free to forward this invitation to participate in our consensus study. Thank you!
Thank you for your time and consideration. In case of questions, please do not hesitate to contact me: alexandra.kolm@fhstp.ac.at
Sincerely,
Alexandra Kolm, Jascha de Nooijer, Janneke Frambach, Jeroen J.G. van Merriënboer,
School of Health Professions Education, Maastricht University, The Netherlands 
University of Applied Sciences St. Pölten GmbH, Austria
Relevant answer
Unfortunately I read your post too late. Please let me know about the results. I am very interested in the topic.
  • asked a question related to Consensus
Question
4 answers
KK
Relevant answer
Answer
Yes, Andrew, you are right. The text has been changed to something unreadable.
  • asked a question related to Consensus
Question
4 answers
At the beginning of 2017, Donald Trump was about to be inaugurated as the next United States president. In anticipation of President Trump's policy changes in the United States, with possible consequences for the world economy, we sourced copper price forecasts from analysts and research organizations. The graph shows two copper price scenarios versus the subsequent actual outcome:
· The forecasts made before Donald Trump won the US election (i.e., before November 2016).
· The forecasts made after Donald Trump won the US election (i.e., post-November 2016).
· The LME average market prices for the years Donald Trump has been president (Source IMF).
                              2017         2018         2019         2020 (YtD Sep)
Pre-election average          $5'137/mt    $5'490/mt    $6'063/mt    $6'305/mt
Post-election average         $5'490/mt    $5'689/mt    $5'864/mt    $6'305/mt
Actual (LME) average          $6'170/mt    $6'530/mt    $6'010/mt    $5'838/mt
Difference vs pre-election    $1'033/mt    $1'040/mt    ($53/mt)     ($467/mt)
Difference vs post-election   $680/mt      $841/mt      $146/mt      ($467/mt)
When comparing the forecasts against the subsequent outcomes, could they be considered to have been reliable? The consensus forecasts for the pre-election group consisted of 8 participants. For the post-election consensus forecasts, the group consisted of 40 participants, including some prominent research organizations in the sector. In defense of the forecasts, the copper price had been tracking lower in 2016, which could justifiably account for the underestimates in 2017 and 2018. By comparison, 2019 was remarkably accurate, and perhaps without the COVID outbreak in 2020, the overall comparative results might have proven reliable again. However, for some mining industry executives, the results might support their skepticism about using consensus forecasts.
I am open to considering the use of consensus forecasts, but with some modifications in the compilation process. Just asking someone for an estimate without allowing them to share their thought process would seem akin to aggregating all the votes in an election without allowing them to explain their position on the issues. The challenge in implementing a more transparent consensus forecasting approach is creating the scope for participants to share their views alongside their forecasts and simultaneously see the predictions and justifications of other industry experts.
In order to evaluate the possibility that an appropriately structured consensus forecasting panel could yield reliable results, a web-based application was designed to evaluate the concept for a doctoral research project. The web application allows participants to register for the research project and make anonymous copper and gold forecasts together with their justification for their predictions. As mentioned, to fully evaluate the concept of using an "open source" approach to developing consensus metal forecasts, all registered participants can see the anonymous forecasts of all other participants, as well as the evolving consensus forecasts. For those interested in participating in the research project, the web application can be found at https://consensusmetals.herokuapp.com
In a week, the next US presidential elections will be held, and once again, the question will be what lies in store for the coming four years for miners. I acknowledge my forecasting track record is questionable and feel that at times my approach is best described as "gut feel" rather than any systematic approach. If my guesses are partly right and are aggregated with other similarly partially correct forecasts, perhaps together, we can achieve a more reliable outcome for the benefit of all contributors!
Relevant answer
Answer
Following
  • asked a question related to Consensus
Question
2 answers
Has anyone found any adverse effects of high-intensity infrasound on people living within close distances of wind turbines? A long while ago I published a paper with an undergraduate in which she collected many anecdotes of adverse effects of this type, but the article received some rather harsh criticism, specifically from researchers funded by the wind-generator manufacturers! Is there a consensus today about this phenomenon?
Thank you,
Peter
Relevant answer
Answer
Thanks, Vadym. Any ideas about how far away one must be to avoid the effects of the wind turbines?
Peter
  • asked a question related to Consensus
Question
4 answers
I am wondering what current practices are for the timing of chest drain removal. I am looking at doing a study on the relationship between removal timing and the development of pleural effusions requiring invasive drainage. In my experience, drainage thresholds and removal times are fairly arbitrary, and there is no real consensus on how much is too much. When you balance that against pain, mobility, atelectasis, etc., you have to consider what risk is acceptable in waiting longer to remove them. I have seen protocols that use the serous nature of the drainage, and volume-based thresholds anywhere from 40 mL in 4 hours, to 150 mL in 4 hours, to 50 mL in 24 hours (considering this is less than physiological production of fluid, it seems extreme).
I would love to get people's thoughts. I have seen one weight-based guideline discussed, but no published results.
Also, what is people's general experience with the incidence of pleural effusions significant enough to require drainage after OHS? My institution sits at around 10-15%, which is why we are looking to determine whether drain removal timing can impact this number.
Thanks
Relevant answer
Answer
The criteria for chest tube removal after cardiac surgery differ significantly among institutions, and moreover among surgeons within the same institution.
It always depends on the quality and quantity of the drainage, as well as BSA, especially in the pediatric population.
Our formula is 5 cc/kg/day for pediatric patients, and less than 150 cc/day for adults.
These values are not valid for mediastinal tubes, particularly following pericardial drainage, as in cases of tamponade or ventricular injury; in those cases we require an echo prior to removal to ensure there is no pericardial fluid.
  • asked a question related to Consensus
Question
13 answers
What is the general consensus on one vs. two-tailed hypothesis testing in planned contrasts? I have a repeated measures mixed design. The study consists of three groups (A, B, C) and we have three assessment comparisons, i.e. Time 1 vs Time 2, Time 1 vs. Time 3 and Time 1 vs. Time 4.
Group A is our intervention group
Group B is an active control group
Group C is a control group
Our hypotheses are directional:
Group A > Group B
Group A > Group C
Group B > Group C
In this case, would applying the one-tailed significance test be ok?
Best
Martin
Relevant answer
Answer
Let me put it this way: If you plan to use one-tailed tests for A-B, A-C and B-C (with the expectation that all of those differences will be positive), if any difference is negative and large enough to be statistically significant by a two-tailed test, you're stuck with treating that large difference in the "wrong" direction exactly the same as a zero difference. To do otherwise would be cheating.
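To see the mechanics, here is a small Python illustration with made-up group data (the 'alternative' argument requires SciPy 1.6 or newer): the one-sided p-value is half the two-sided one when the effect goes in the predicted direction, but stays near 1 when it goes the "wrong" way, which is exactly the trap described above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=1.0, size=30)   # hypothetical intervention scores
group_b = rng.normal(loc=0.0, size=30)   # hypothetical control scores

two_sided = stats.ttest_ind(group_a, group_b)
one_sided = stats.ttest_ind(group_a, group_b, alternative="greater")  # H1: A > B
print(f"two-sided p = {two_sided.pvalue:.4f}")
print(f"one-sided p = {one_sided.pvalue:.4f}")
# Had the difference pointed the other way, the one-sided p would be near 1,
# no matter how large the difference.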
  • asked a question related to Consensus
Question
4 answers
Hello everyone. Recently I have met some problems in my study. I want to implement a multi-agent system in DC micro-grids, but I have never used Java, so I can't use JADE to implement it. Can I use only Matlab/Simulink to model the network consensus problem of a multi-agent system in DC micro-grids?
Relevant answer
Answer
Dear Jilin Zheng,
It should be noted that, in scientific and technological research, a large majority use Matlab/Simulink for its simplicity as well as its scientific credibility. Regarding the field that interests you, research work has been carried out using Matlab/Simulink; I suggest you see the links and attached files on this topic.
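As a further fallback if JADE is unavailable: the standard discrete-time consensus update x_i(k+1) = x_i(k) + eps * sum_j a_ij * (x_j(k) - x_i(k)) needs no agent framework at all, and is a few lines in Matlab or, as sketched here for illustration, in Python; the communication graph and initial values below are made up:

import numpy as np

# Adjacency matrix of a hypothetical 4-agent ring communication graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
eps = 0.2                                    # step size; needs eps < 1/max_degree
x = np.array([48.0, 52.0, 50.5, 49.0])       # hypothetical initial bus-voltage estimates

for k in range(50):
    x = x + eps * (A @ x - A.sum(axis=1) * x)  # x_i += eps * sum_j a_ij (x_j - x_i)

print(x)   # all agents converge to the average, 49.875

The same update transliterates directly into a Matlab/Simulink model, so the consensus dynamics themselves can be studied without any agent middleware.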
Best regards
  • asked a question related to Consensus
Question
6 answers
It seems that there is, more or less, some sort of consensus on academic standards. Who is responsible for drawing the guidelines that shape the way academia functions? Who do you think puts the standards for research publishing in influential journals?
Recommendations by ordinary researchers? Decisions by elite researchers? Do policy makers have a say in this? What connects these academic decision makers, whether individuals or institutions, and governs them?
I would appreciate your views. Thanks!
Relevant answer
Answer
"Informality in Metagovernance"! So that's what it's called. This is so intriguing, Remi. Thanks!
  • asked a question related to Consensus
Question
4 answers
Different miRNA target prediction algorithms use different scores, which are not directly comparable. So what is the most reliable way to get a consensus of the results produced by different miRNA target prediction tools?
Relevant answer
Answer
But all the algorithms have different criteria, and a direct comparison of the top results is not recommended, in my view.
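One pragmatic, score-agnostic option is to aggregate at the rank level instead, for example with a simple Borda count across tools. TargetScan, miRanda, and PITA are real tools, but the ranked lists in this minimal Python sketch are invented:

from collections import defaultdict

tool_ranks = {                                # hypothetical ranked targets, best first
    "TargetScan": ["GENE1", "GENE2", "GENE3", "GENE4"],
    "miRanda":    ["GENE2", "GENE1", "GENE4", "GENE5"],
    "PITA":       ["GENE1", "GENE4", "GENE2", "GENE6"],
}

scores = defaultdict(float)
for ranking in tool_ranks.values():
    n = len(ranking)
    for pos, gene in enumerate(ranking):
        scores[gene] += n - pos               # Borda points: higher rank, more points

consensus = sorted(scores, key=scores.get, reverse=True)
print(consensus)    # targets supported by several tools rise to the top

Requiring a target to appear in at least two tools' lists is another common, even simpler, consensus rule.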
  • asked a question related to Consensus
Question
10 answers
When analyzing bacterial DNA sequences, the consensus doesn't match the reference sequence when BLASTing them. Many bad spectra are seen in the analysis software. What should be done in this case?
Relevant answer
Answer
You are welcome! BioEdit should also give you the option to trim your sequence to avoid low-quality reads. Good luck!
  • asked a question related to Consensus
Question
5 answers
What would be an acceptable CV% and SD for copy-number results between stool samples in qPCR (same group)? I am evaluating different bacteria (E. coli, Lactobacillus, etc.) using the absolute quantification method. My CVs after calculating the number of copies were very high (some over 25%). I read that for gene expression this would be acceptable; would it also be acceptable for bacterial quantification? If you can give me a reference, I would appreciate it. There also doesn't seem to be a consensus on how to express the final result; I have seen articles expressing the result in number of copies, log10, number of copies/ng of feces, among others.
Relevant answer
Answer
I wouldn't be an expert in bacterial molecular work, but from those details I would expect a high variation between samples; there are so many variables we cannot account for. If you can figure out a standardisation method for the calculation, that might be the best thing, maybe something like Ct/no. of CFU on the final streaks, etc. Definitely a topic to chat about with others in your lab.
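For reference, the calculation itself, and the effect of log-transforming first (one reason linear-scale copy-number CVs often look alarming), can be sketched in a few lines of Python with made-up copy numbers:

import numpy as np

copies = np.array([1.2e5, 0.8e5, 1.6e5, 1.0e5])   # hypothetical copies per ng of stool DNA

cv_linear = copies.std(ddof=1) / copies.mean() * 100
log10_copies = np.log10(copies)
cv_log = log10_copies.std(ddof=1) / log10_copies.mean() * 100

print(f"CV on copy numbers: {cv_linear:.1f}%")    # ~30% here
print(f"CV on log10 copies: {cv_log:.1f}%")       # ~2.5%: the scale chosen changes the verdict

So when comparing your CVs against published values, check whether the paper computed them on raw copy numbers or on log-transformed values.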
Best of luck
  • asked a question related to Consensus
Question
3 answers
Blockchain in its current state, though providing secure and decentralized transactions, makes a computer hundreds or thousands of times less efficient, because conceptually each node in the network must redundantly make the same computations as all other nodes in the network. Similarly, the key idea of blockchain technologies lies in the consensus mechanism, which requires these nodes to communicate with nodes all around the world, further introducing latency issues.
Relevant answer
Answer
I agree with Helge Egil Seime Pettersen that PoS, like many other consensus algorithms, can solve the problem. Also, you should take into consideration the private blockchain model as a centralised or decentralised database (no mining is required).
  • asked a question related to Consensus
Question
16 answers
What are the recommended databases? Any consensus available?
Relevant answer
Answer
It depends on the question and the field, but some journals, e.g. BMJ Open, do not accept articles based on fewer than four databases.
A comprehensive search should include searches of the relevant databases within the field, e.g. in biomedical research at least PubMed/Medline, Embase, Scopus, Web of Science, and Cochrane. Furthermore, a search of the grey literature, e.g. ProQuest and OpenGrey, should be conducted. And finally, you should do backward snowballing, where you look at the references of the included studies.
An article about the number of databases has been published:
  • asked a question related to Consensus
Question
4 answers
There are a lot of methods to measure consensus in a Delphi study, but I need to know more about the method using the median and interquartile range (IQR). My Delphi panel has fewer than 10 members; is it still possible to use the median and IQR to measure consensus? Please advise ASAP.
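The median and IQR are routinely used even for small panels, though results from fewer than 10 experts should be interpreted cautiously; one common rule of thumb treats an IQR <= 1 on a 5-point scale as consensus (the exact threshold varies across studies). A minimal Python sketch with hypothetical ratings from an 8-member panel:

import numpy as np

ratings = np.array([4, 5, 4, 4, 3, 5, 4, 4])   # hypothetical 5-point Likert ratings, n=8

median = np.median(ratings)
q1, q3 = np.percentile(ratings, [25, 75])
iqr = q3 - q1

print(f"median={median}, IQR={iqr}")
print("consensus reached" if iqr <= 1 else "no consensus")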
Relevant answer
Following
  • asked a question related to Consensus
Question
6 answers
The Ottawa technology-enhanced assessment consensus statement was adopted in 2011. Richard Fuller addressed the issue in a presentation on 'Technology in assessment' at the Ottawa 2020 conference, but to date I have failed to understand it. Can anyone explain what is actually meant by 'technology-enhanced assessment'?
Relevant answer
Answer
Thanks, Ibraheem Kadhom Faroun and Leo Atwood for your excellent insight.
  • asked a question related to Consensus
Question
8 answers
I was looking for a simulator to simulate a proof of concept in Hyperledger Fabric by changing the number of orderers, endorsers, and organisations, and I also wanted to apply consensus in the network. Is there any simulator which can help me in this regard? I also want to make transactions by coding (e.g., JavaScript, Java).
Any suggestions?
Thanks in advance.