Workflow - Science topic
Description of pattern of recurrent functions or procedures frequently found in organizational processes, such as notification, decision, and action.
Questions related to Workflow
I have observed that some highly reputed researchers seem to have their papers reviewed and published within a relatively short time (2-3 months), while others, including myself, experience significantly longer waiting periods (e.g., 7+ months just for the first decision). Interestingly, the quality of work in these quickly published papers often seems comparable to those that take much longer for acceptance. Is this due to biases in the review process, prioritization of certain submissions, or differences in journal workflows? How can early-career researchers better navigate or mitigate these delays? Insights or shared experiences from the community would be greatly appreciated.
Hi all,
I am working with mouse brain T2 MRI data and want to analyze volumetric differences in specific brain regions, including the hippocampus, lateral ventricles (LV), cortex, and corpus callosum (CC), between normal and Alzheimer's disease (AD) mouse models. As I am new to this type of analysis and interpretation, I would greatly appreciate your guidance on the following:
- Workflow: What should the overall workflow look like, from preprocessing the MRI data to statistical analysis and interpretation?
- Tools and Software: Which tools and software are recommended for preprocessing, segmentation, registration, alignment, and volumetric analysis?
- Steps Involved: Could you outline the specific steps (e.g., preprocessing, ROI segmentation, template registration) and provide suggestions for resources or tutorials?
- Statistical Analysis: How should I perform statistical comparisons to identify significant differences between groups?
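For the statistical-analysis step, a minimal sketch of a per-region group comparison in Python, assuming the region volumes have already been extracted into a CSV (the file name and column names are hypothetical placeholders):

# Sketch: compare regional volumes between AD and control mice.
# Hypothetical CSV columns: group ("AD"/"control"), hippocampus,
# lateral_ventricles, cortex, corpus_callosum (volumes in mm^3).
import pandas as pd
from scipy import stats

df = pd.read_csv("region_volumes.csv")
regions = ["hippocampus", "lateral_ventricles", "cortex", "corpus_callosum"]

results = []
for region in regions:
    ad = df.loc[df["group"] == "AD", region]
    ctrl = df.loc[df["group"] == "control", region]
    t, p = stats.ttest_ind(ad, ctrl, equal_var=False)  # Welch's t-test
    results.append({"region": region, "t": t, "p_uncorrected": p})

res = pd.DataFrame(results)
res["p_bonferroni"] = (res["p_uncorrected"] * len(regions)).clip(upper=1.0)  # correct for 4 tests
print(res)

Normalizing each regional volume to total brain volume before testing, and reporting effect sizes alongside corrected p-values, are common additions to this step.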
This question explores the transformative potential of machine learning in scenography and stage design, focusing on its ability to optimize creative workflows, enhance real-time adaptability, and foster innovative storytelling techniques. It invites specialists to reflect on how data-driven technologies can balance technical advancements with the artistic vision of theatrical productions.
I sent in my prepped plasmids for DNA sequencing. Unfortunately, the results are very noisy (I attached a screenshot). I double-checked and the samples have the right concentration (100 ng/ul) and a good 260/280 ratio. I didn't send any primers because the company offers the ones that I need. Therefore, I am quite sure that the primers should work.
Maybe it is helpful to briefly describe my workflow; I did site directed mutagenesis via PCR and subsequent gel extraction. I then transformed competent bacteria and selected them. I used single clones for cultivation and performed plasmid preparation with a kit.
Does anybody have an idea what might be the problem? Are there any things that I am missing or pitfalls that can happen during my workflow and might not be obvious?
Many thanks in advance!

Hi everyone,
I am currently working on lattice structure optimization using ModeFrontier integrated with ANSYS (2022). My workflow involves:
- Generating .inp files from nTop and directing them through a File Transfer Node in ModeFrontier.
- Using the Direct ANSYS Node in ModeFrontier to run ANSYS simulations with the transferred .inp files.
The issue I am facing is that ANSYS does not update the input file (.inp) with each iteration. It seems that ANSYS External Model is fixed to the initial .inp file path, and subsequent files generated during iterations are not being used.
I suspect the problem lies with the External Model Input Path in ANSYS. Even when I specify it initially, ANSYS does not seem to dynamically read the updated files provided during iterations.
Has anyone encountered a similar issue? How can I configure ANSYS to dynamically update and use the new .inp file during each iteration in ModeFrontier?
Thank you for your help!
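One possible workaround, sketched below purely as an illustration (this is not a modeFRONTIER or ANSYS API, and all paths are placeholders): stage each iteration's .inp file to the single fixed path that the External Model was configured with, before the ANSYS node runs.

# Sketch: copy the newest generated .inp to the fixed path the ANSYS External Model reads.
import shutil
from pathlib import Path

generated_dir = Path("nTop_output")            # where the per-iteration .inp files land
fixed_input = Path("ansys_model/current.inp")  # the path the External Model points at

latest = max(generated_dir.glob("*.inp"), key=lambda p: p.stat().st_mtime)  # newest file
fixed_input.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(latest, fixed_input)
print(f"Staged {latest.name} -> {fixed_input}")

Called from a script node placed before the ANSYS node, this keeps the External Model path constant while the file contents change at every iteration.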




Subject: Request for Insights on Research Workflows
I hope you’re well. I’m Dipro Paul, an undergraduate student at Daffodil International University, working on an AI tool to simplify research workflows. I would be grateful if you could share some insights from your research experience, especially regarding areas where AI support could be valuable.
Your perspective would be invaluable, and I would truly appreciate any time you could spare.
Link : https://forms.gle/PqSN6tvz4tgb1LQA9
Thank you for considering my request.
Best regards,
Dipro Paul
Undergraduate Student, Daffodil International University
Workflow for best results in MLPA deletion/duplication kits
Despite the extensive academic research and new technologies in the AEC industry, their real-world impact remains limited. How, when, and by whom can the AEC sector effectively leverage these advancements—owners, CEOs, contractors, or organizations?
Hello researchers,
Sorry for my stupid question. I am learning the QIIME2 workflow for analyzing some 16S amplicon NGS FASTQ data. I found a very nice paper with data and code publicly available ( ), so I decided to reproduce their QIIME2 results.
In their steps, they cut off the barcode and primer sequences from the raw FASTQ sequences (yes, there really are barcode & primer sequences at the front of the sequences). However, I did not find any primer sequences in my own NGS FASTQ files. Does this mean that the sequencing company has removed the barcodes, adapters, and primer sequences for me?
Also, should I perform quality control before importing my raw FASTQ data into QIIME2?
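One quick way to answer the primer question is to count how many reads begin with the expected primer sequence. A minimal Python sketch (the file name is a placeholder and the 515F primer shown is only an example; substitute your actual primer):

# Sketch: count reads that start with a (possibly degenerate) primer sequence.
import gzip
import re

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[CG]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "B": "[CGT]",
         "D": "[AGT]", "H": "[ACT]", "V": "[ACG]", "N": "[ACGT]"}

primer = "GTGYCAGCMGCCGCGGTAA"            # example 515F primer; use your own
pattern = re.compile("^" + "".join(IUPAC[b] for b in primer))

hits = total = 0
with gzip.open("sample_R1.fastq.gz", "rt") as fh:   # placeholder file name
    for i, line in enumerate(fh):
        if i % 4 == 1:                    # every 4th line in a FASTQ is the sequence
            total += 1
            if pattern.match(line):
                hits += 1
print(f"{hits}/{total} reads start with the primer")

If most reads match, the primers are still present and should be trimmed (for example with the q2-cutadapt plugin) before denoising; if almost none match, the provider has most likely removed them already.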
Hi,
I am very new to bioinformatics. Recently, I started a project that aims to perform taxonomic analysis on raw reads from mixed samples taken from the environment.
For example, we performed NGS (whole genome) on an insect and we want to identify the taxonomy of every symbiotic bacterium in the raw reads.
Currently, I can use Kraken2 to perform the analysis. However, I have a few questions.
1. How can I focus only on the bacteria, remove the rest of the data, and make a summary table or visualization (the percentage of each bacterial strain)?
2. Because in the future we will switch to mixed environmental samples focused on 16S rRNA, I would like to know how to perform the taxonomic analysis by first identifying 16S rRNA in the raw reads and then restricting the analysis to those reads. Because mixed environmental samples will contain not only bacterial but also eukaryotic DNA reads, I want to separate these out and analyze them later to reduce processing time.
Following that, what processes and tools should I use?
I found the tutorial "16S Microbial Analysis with mothur (with Galaxy)"; however, when I tried it with the current insect NGS data, it took a very long time making contigs from non-bacterial reads. I am wondering whether there is any method that can remove non-bacterial reads in the first place.
Additionally, I found other tools such as RNAmmer, Barrnap, and Prokka. However, these tools seem to accept only bacterial whole genomes, not mixed reads.
If you can share some experience and good workflows or tools to try, I would very much appreciate it.
Thank you very much for your great help.
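For the first question above (restricting the summary to bacteria), one lightweight option is to parse the Kraken2 report directly. A minimal sketch, assuming the standard six-column report format (the file name is a placeholder):

# Sketch: summarize species-level bacterial hits from a Kraken2 report.
import pandas as pd

rows, current_domain = [], None
with open("sample.kreport") as fh:
    for line in fh:
        pct, clade_reads, direct_reads, rank, taxid, name = line.rstrip("\n").split("\t")
        name = name.strip()
        if rank == "D":                              # domain/superkingdom line
            current_domain = name
        if rank == "S" and current_domain == "Bacteria":
            rows.append({"species": name,
                         "reads": int(clade_reads),
                         "percent_of_total": float(pct)})

df = pd.DataFrame(rows).sort_values("reads", ascending=False)
df["percent_of_bacteria"] = 100 * df["reads"] / df["reads"].sum()   # relative to bacteria only
print(df.head(20))

The resulting table can be plotted directly (for example as a bar chart of the top species) or written to CSV for downstream work.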
Dear Colleagues,
I hope you’re all doing well. As we know, systematic reviews are crucial for synthesizing evidence and providing comprehensive insights into specific research questions. However, the process of conducting these reviews can be complex, demanding and time consuming.
We are in the process of developing an AI-based software solution designed to assist with the article screening and data extraction stages of systematic reviews. To ensure our tool effectively addresses the needs of the research community, we’d like to learn more about your experiences and get your perspective on common problems.
- What are the most time-consuming aspects of your workflow?
- Are there particular pain points or frustrations you regularly encounter?
- How do you currently address these challenges, and what improvements would you like to see in an AI tool?
If you are interested in trying the tool or have any questions, please don't hesitate to send me a message (or visit www.revisio.ai )
Looking forward to hearing your thoughts!
Hi ResearchGate community,
I'm currently working on a project that involves the structural alignment of multiple protein sequences. However, I do not have PDB IDs for these proteins. I was wondering if anyone could provide advice or share their experience on how to approach this problem.
Specifically, I'm looking for methods or tools that can help with:
- Predicting the 3D structures of proteins from their sequences.
- Performing structural alignment with these predicted structures.
I've come across several protein structure prediction tools like AlphaFold, Rosetta, and I-TASSER. However, I'm not entirely sure how to integrate these predictions into a workflow for structural alignment. Additionally, any recommendations for software or online tools that can handle multiple structure alignments would be greatly appreciated.
Has anyone tackled a similar challenge? What tools or approaches worked best for you?
Thank you in advance for your insights!
Best regards
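For the alignment step, a pairwise superposition of two predicted models can be sketched with Biopython; the file names are placeholders, and the crude length truncation assumes the two models cover essentially the same residues. For many models at once, dedicated multiple-structure-alignment programs are the better choice.

# Sketch: superimpose two predicted structures on their CA atoms (Biopython).
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
ref = parser.get_structure("ref", "model_A.pdb")   # placeholder predicted models
mov = parser.get_structure("mov", "model_B.pdb")

def ca_atoms(structure):
    return [res["CA"] for res in structure[0].get_residues() if "CA" in res]

ref_ca, mov_ca = ca_atoms(ref), ca_atoms(mov)
n = min(len(ref_ca), len(mov_ca))        # crude guard if the models differ slightly in length

sup = Superimposer()
sup.set_atoms(ref_ca[:n], mov_ca[:n])    # compute the optimal rotation/translation
sup.apply(mov[0].get_atoms())            # move the second model onto the first
print(f"CA RMSD after superposition: {sup.rms:.2f} Å")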
Greetings,
As we look to the future of scientific research and communication, it's clear that advances in AI will have a major impact. AI-powered tools are already augmenting many aspects of the research workflow, from literature synthesis to data analysis.
My question is: can we expect to see a new form of scientific publication?
F CHELLAI
What do you do if your Technological Innovation workflow needs a time-saving boost?
Based on the research article “Red and white blood cell morphology characterization and hands-on time analysis by the digital cell imaging analyzer DI-60”: in its conclusion, the DI-60 demonstrated acceptable performance for normal samples, but for abnormal WBC differentials and RBC morphology characterization it should be used carefully. Therefore, does this mean that the Sysmex DI-60 is not that accurate when it comes to abnormal samples? Will this hinder its performance in terms of efficiency or feasibility?
This question is directed at the performance of the Sysmex DI-60 digital morphology analyzer and its contribution to improving laboratory workflow. I am curious as to which capabilities of the analyzer could potentially reduce the long TAT and the tedious process of performing manual methods, and possibly eliminate or lessen the need to perform them.
The question is based on the article “Tenets of Specimen Management in Diagnostic Microbiology” by Rajeshwar and Pathak. It explores how advances in technology, specifically barcoding systems, can enhance specimen management processes in diagnostic microbiology. The aim is to investigate the potential benefits of barcoding systems, such as improved tracking, accuracy, and efficiency in specimen identification and tracking throughout the laboratory workflow.
In the past years I've been creating ENMs using dismo and its related packages like raster.
I have my own workflow but for didactic purposes I also use modleR workflow (https://github.com/Model-R/modleR) which is very good for students learning ENMs.
Recently, the raster package was retired, and a lot of my analyses and workflows rely on raster and dismo, which has been causing me some issues.
As far as I am able, I have been changing my code to use the terra package instead of raster, but it has been a nightmare.
Is there any workflow or package I can follow/use as an alternative to dismo/raster? Any package or workflow that already uses terra to manipulate spatial data?
Thanks for your attention!
A subfield of medical physics/molecular biology; the overall aim of the effort is to improve the understanding of the molecular architecture of unknown or understudied biological systems, for example the human sperm (flagellum component), using cryo-electron tomography and advanced image processing workflows.
In this example, male fertility issues arising from accidents, excessive therapeutic radiation, and pathological development during puberty call for scientists to solve the structures of key flagellar macromolecular complexes in order to understand the molecular mechanisms controlling sperm function in both health and disease.
Do you know the current degree of effectiveness of cryo-electron tomography for structure elucidation in biological systems compared to other approaches, and its reliability?
Can I get a workflow to perform beam dynamics simulations using ASTRA?
My current workflow needs the following features:
- Daily jottings, and from those, making reminders and to-do lists
- Making mind maps to create relations between strings
- Cross-platform without losing features
- Integration with Zotero or GitHub
- Dumping any thoughts or ideas on the go and reflecting on them later
- Support for LaTeX, not just math but full typesetting if possible
- Weekly history or review summaries should be generated
Hello everyone,
I am calculating the DOS of Li2TiO3 with HSE06, using VASP.
But my calculation takes a lot of time, and now I think it has stopped.
My workflow:
1. Relaxation calculation with PBE, obtain CONTCAR - KPOINTS 4 4 4
2. SCF with PBE using the CONTCAR, obtain CHGCAR and WAVECAR - KPOINTS 4 4 4
3. DOS calculation with HSE06 - KPOINTS 9 9 9
I have put my input files below. Can anyone help me?


What tools/software/apps do you use to track milestones and monitor the workflow of your research?
How can end-to-end workflow automation be achieved for cloud-based machine and deep learning pipelines?
I am working on petrophysical property modelling. After creating an average velocity log from check-shot data for depth conversion, I find it difficult to successfully upscale the log. A dialog keeps popping up: "No wells with the selected log found. Scale up failed".
Log upscaling, which is supposed to be a very simple operation in Petrel, has me seriously puzzled. I have double-checked my workflow to see whether I missed anything. I really need assistance with this.
I have an Isight workflow which simply includes an optimization tool and an Abaqus component. My Abaqus model is a single lap joint which has some pins. Thus, I'm using Isight to calculate the maximum force the joint withstands (which I calculate from the .odb file with an external Python script) for a variable pin length.
However, when I run the workflow, I get the errors detailed in the picture.
Has anyone encountered a similar problem before, or does anyone have an idea of how to solve it?
Thanks in advance!
Does anyone use the software CUE for workflows? Any suggestions?
Snakemake is a versatile workflow management system that can be applied to various fields, including plant pathology. In plant pathology, Snakemake can streamline and automate complex analysis pipelines, making research more efficient and reproducible. Here's a brief overview of how Snakemake is used in plant pathology:
1. **Automated Analysis Pipelines**: Plant pathologists often deal with diverse datasets, such as DNA/RNA sequences, microscopy images, and phenotypic data. Snakemake enables researchers to create automated pipelines that handle data preprocessing, quality control, analysis, and visualization. This automation reduces manual errors and ensures consistent analysis across different samples.
2. **Bioinformatics Workflows**: Snakemake is particularly useful in plant pathology for managing bioinformatics workflows. It can integrate various tools and software packages for tasks like sequence alignment, variant calling, and phylogenetic analysis. Researchers define rules that describe dependencies and data transformations, allowing complex analyses to be executed seamlessly.
3. **Reproducibility and Traceability**: Snakemake ensures reproducibility by capturing all dependencies and steps in a workflow. Researchers can easily reproduce their analyses by rerunning the same Snakemake script. This is crucial in plant pathology, where accurate and reproducible results are essential for understanding disease mechanisms and developing mitigation strategies.
4. **Iterative Studies**: Plant pathologists often conduct iterative studies to investigate disease progression or response to treatments. Snakemake simplifies these studies by automating repetitive tasks and adjusting the workflow as new data or hypotheses emerge.
5. **Data Integration and Visualization**: Snakemake can incorporate data integration and visualization steps in the workflow. For instance, it can merge multiple types of data (genomic, transcriptomic, and phenotypic) to provide a comprehensive view of plant-pathogen interactions.
6. **Customized Analysis**: Snakemake allows researchers to customize their analysis pipelines based on the specific needs of their plant pathology studies. This flexibility ensures that the workflow is tailored to address research questions effectively.
7. **Parallel Processing**: Large-scale plant pathology studies often involve analyzing extensive datasets. Snakemake's parallel processing capabilities enable researchers to distribute tasks across multiple processors or compute nodes, significantly reducing analysis time.
8. **Collaboration and Sharing**: Snakemake workflows can be easily shared with collaborators, making it simpler to collaborate on complex analyses. This promotes knowledge sharing and accelerates research progress.
In summary, Snakemake plays a vital role in plant pathology by automating and streamlining analysis pipelines, enhancing reproducibility, and facilitating complex bioinformatics workflows. Its flexibility, parallel processing capabilities, and user-friendly syntax make it a valuable tool for researchers studying plant-pathogen interactions, disease mechanisms, and mitigation strategies.
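As a concrete illustration of points 1-2, a minimal Snakefile sketch; the sample names, paths, and the fastp trimming command are placeholder assumptions, and the workflow is run with snakemake --cores 4:

# Minimal Snakefile sketch: trim reads for a set of samples.
SAMPLES = ["leaf_infected_1", "leaf_control_1"]      # placeholder sample names

rule all:
    input:
        expand("results/{sample}.trimmed.fastq.gz", sample=SAMPLES)

# One rule per step; Snakemake infers the dependency graph from the file names.
rule trim_reads:
    input:
        "raw/{sample}.fastq.gz"
    output:
        "results/{sample}.trimmed.fastq.gz"
    threads: 4
    shell:
        "fastp -i {input} -o {output} --thread {threads}"

Because Snakemake only rebuilds outputs that are missing or older than their inputs, rerunning the same command after adding samples or changing a rule recomputes only what is actually out of date.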
I see that there are many screw-top tubes that are prefilled with glass beads, but these are usually used with bench-top homogenizers. I'm curious if it would work to use these tubes with a vortex instead of having to buy a new instrument. Has anyone tried this when homogenizing animal-tissue samples? If so, was the rest of your protocol successful?
FYI: We are interested in using this in a protocol for RNA extraction because our current workflow hasn't been giving us the best results.
Hello, I'm trying to find pre-labelled 90 mm Petri dishes to be included in an automation workflow for a new biotech lab. Does anyone know a brand? I can't find one! Thanks!
Nidia
We are looking for software to manage the wet lab workflow and DNA storage, through the use of QR-codes (or similar) and consumables compatible with QR-codes and software.
Based on the latest capabilities and API of ChatGPT, how can we realize an image-to-text workflow that derives a rational analysis of common scientific graphics?
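A minimal sketch of such an image-to-text call with the OpenAI Python SDK; the model name, prompt, and file name are assumptions, an API key must be configured, and the exact interface may change between SDK versions:

# Sketch: ask a vision-capable chat model to describe a scientific figure.
import base64
from openai import OpenAI

client = OpenAI()                                  # reads OPENAI_API_KEY from the environment

with open("figure1.png", "rb") as fh:              # placeholder figure file
    image_b64 = base64.b64encode(fh.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",                                # assumption: any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe the axes, trends, and key findings shown in this plot."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)

Whether the returned analysis is rational still has to be checked against the underlying data, since such models can misread axis labels or invent trends.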
We frequently have mixed results for a qPCR assay. We'd like to determine whether this is non-specific amplification or target amplification when the target is low-frequency. DNA TapeStation analysis shows multiple bands that are unique to different samples. The bands are too close together to reliably isolate. I'd like to sequence them all using next-gen sequencing. I'm trying to figure out whether this could work using the AmpliSeq workflow for Ion Torrent, which is something we already do regularly. The distributor will not advise, so I'm asking here. Thanks.
There are three MATLAB scripts provided with the old CAFFA3D package. The code was developed by Gabriel Usera in 2004 during his PhD. An example of cavity flow is provided with the code. It is quite easy to view the grid with the provided code, but the provided MATLAB scripts do not work properly for viewing the final results such as the U, V and P profiles. When running the code in MATLAB, it shows some errors, but I cannot understand what to do. Please help me if you have worked with this code before. Here is the code I am running:
>> filnam='cavmp1.10';
>> caffa3dMBs;
>> figure; hold on; grid on;
>> plot(GR(1).XC(42,:,42)*2-1,FL(1).VV(42,:,42),'b','LineWidth',2)
>> plot(GR(2).XC(42,:,42)*2-1,FL(2).VV(42,:,42),'b','LineWidth',2)
>> plot(FL(2).UU(:,3,42),GR(2).YC(:,3,42)*2-1,'b','LineWidth',2)
>> axis square; axis([-1,1,-1,1]);
I am also attaching the caffa3dMBs code; the errors mainly come from GR and FL.
Now I am working with flow around a NACA 0012 profile to visualize the pressure coefficients with the same software. I have run the code successfully. I have also viewed the grid with the provided MATLAB scripts. But I cannot see the results because the provided MATLAB scripts do not work properly. Please help.
Regards,
Waliur Rahman
The above question is part of the workflow for an inversion process using 3D seismic wave data.
Among hundreds of Linux distributions, which are the most suitable for a biological science researcher whose workflow is largely based on genomic data science and its analytics (bioinformatics), scientific manuscript preparation, illustration, etc.?
Often I need to give some feedback to the students on their work or assignments which come to me as PDFs. Basically, I comment on some parts of the PDF, add notes, and highlight or strike words or sentences.
What are your recommendations in terms of software and workflow?
I have a mutant bacterial strain and want to confirm its phenotype by TEM. A copper mesh grid with a carbon membrane was used; here is my workflow:
1. 2.5 µl diluted bacterial suspension, 2 min
2. 5 µl UA (uranyl acetate), 10 s
3. 5 µl UA (uranyl acetate), 10 s
4. 5 µl UA (uranyl acetate), 1 min
5. Dry, 10 min.
Unfortunately, I can't see any bacteria with negative staining. How can I improve my experiment? Thanks in advance.
Dear scientific community,
I would be very interested to hear your input regarding the scaling-up of LCA studies to a portfolio level. I know there is a plethora of product LCAs and plenty of them consider several individual products or product variants in parallel. However, I have not found an awful lot of studies that extend to several hundred, let alone thousands of individual products within the scope of one study (as opposed to equally as many individual case studies).
Surely, more people have approached this apparent research gap. So, for anyone who has been active in this area: I would greatly appreciate you sharing whatever experience you have gained, or pointing me to any related publications in the field.
Many thanks and best regards
Tobias
P.S.: If you are interested in what my colleagues and I have done in this field, feel free to check this framework article and the case study we presented at the LCM 2021 conference:
Conference Paper The Sustainability Data Science Life Cycle for automating mu...
Conference Paper Detecting environmental hotspots in extensive portfolios thr...
My lab wants to try to do as much of our pre-processing, processing, and analysis in R as possible, for ease of workflow and replicability. We use a lot of psychophysiological measures and have historically used MATLAB for our workflow with this type of data. We want to know if anyone has been successful in using R for these types of tasks.
Any good R packages?
Note:
I am looking in particular for packages for the preprocessing, processing, and analysis of *physiological* signals.
Suggestions needed for a workflow to compare and analyze the primary data obtained from the construction phase using VR technology.
Dear all,
I have the following problem when I am working with an Abaqus python script:
- Every time I update a module that is loaded as part of my main script, after compiling the corresponding module (and generating the associated *.pyc file) in order to test it, I have to close and reopen the Abaqus GUI; otherwise, the new *.pyc version is not loaded.
I am asking this question because, during the debugging phase of my code, every time a change is made, having to close and reopen Abaqus interrupts the workflow (as it takes time to start the Abaqus GUI).
Thanks in advance.
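One common workaround, sketched below, is to reload the edited module inside the running Abaqus Python session instead of restarting the GUI; the module name is a placeholder, and the exact behaviour depends on the Python version shipped with your Abaqus release:

# Sketch: re-import an edited helper module without closing Abaqus/CAE.
import sys

try:
    from importlib import reload          # Python 3 (newer Abaqus releases)
except ImportError:
    pass                                  # Python 2: reload() is already a built-in

def refresh(module_name="my_helpers"):    # placeholder module name
    """Reload a module so the freshly edited source is picked up."""
    if module_name in sys.modules:
        return reload(sys.modules[module_name])
    return __import__(module_name)

Calling refresh() from the kernel command line after each edit avoids the GUI restart, although modules with import-time side effects may still need a clean session.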
We have a hybridization-based, TruSeq-compatible targeted enrichment protocol from Twist Biosciences that we have been using in our lab with great success.
We would like to use the Illumina® DNA Prep, (S) Tagmentation #20025520 library prep kit with IDT® for Illumina® DNA/RNA UD Indexes Set A, Tagmentation (96 Indexes, 96 Samples) #20027213 in our current targeted enrichment workflow with as few changes as possible.
A problem we have is that the Twist Universal Blockers #100767 do not work well for these libraries, as they are not TruSeq-based.
Neither Illumina nor IDT sell their blocking oligos for Illumina® DNA Prep, (S) Tagmentation as a stand-alone product.
Has anyone found an alternative blocking solution that works well for these libraries? Do you have any other suggestions on how to improve the on-target rate in the absence of an optimal blocking solution?
Are there any recommendations for tools or workflows that would allow me to do this? Thanks!
I have one PCR workstation.
I performed my RNA extraction/cDNA synthesis on a separate bench (outside hood).
I now have very concentrated cDNA.
In previous labs I had 3 work stations.
1: Prepare primer/probe working stock
2: Prepare serial dilution of concentrated template
3: Preparing master mix and plating serial dilutions
I now have one workstation.
Would you perform your cDNA dilution outside of the workstation?
Or would you do it inside and just turn on UV light/spray DNAzap before plating?
Any advice on PCR workflow would be useful.
Thank you!
Hello everyone,
I need help understanding whether my two groups are paired or not.
I am collecting data from one group of cells. We have developed two different workflows (let's call them A, and B) for data analysis. We want to test whether these two workflows give the 'same' results for the same set of cells.
At the end, I obtain:
- Group 1 (contains the variable obtained with workflow A)
- Group 2 (contains the variable obtained with workflow B).
I have been considering the two groups as independent because the two workflows do not interfere with each other. However, the fact that both workflows operate on the same cells is throwing me off and I am wondering if these groups are actually paired.
Could you advise me on this and on what test is best to use?
The hypothesis for the test would be:
- the distribution of the variable is the same for workflow A and workflow B; and/or
- the median of the distribution from workflow A equals the one from workflow B
Thank you.
GN
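Because both values come from the same cell, such data are normally treated as paired rather than independent. A minimal sketch of a paired comparison (the file and column names are hypothetical):

# Sketch: paired comparison of two analysis workflows applied to the same cells.
import pandas as pd
from scipy import stats

df = pd.read_csv("per_cell_results.csv")     # columns: cell_id, workflow_A, workflow_B
diff = df["workflow_A"] - df["workflow_B"]

t_stat, p_t = stats.ttest_rel(df["workflow_A"], df["workflow_B"])   # paired t-test
w_stat, p_w = stats.wilcoxon(df["workflow_A"], df["workflow_B"])    # non-parametric alternative

print(f"mean difference = {diff.mean():.3g}, paired t p = {p_t:.3g}, Wilcoxon p = {p_w:.3g}")

For questions about agreement between two methods, a Bland-Altman plot of the per-cell differences is also commonly used alongside these tests.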
Apart from the available sources from http://www.open-systems-pharmacology.org/ on the PBPK model building workflow, how can you optimize parameters in a case where simulated data differ from observed data?
Within the 3DForEcoTech COST Action, we want to create a workflow database of all solutions for processing detailed point clouds of forest ecosystems. Currently, we are collecting all solutions out there.
So if you are a developer, tester or user do not hesitate to submit the solution/algorithm here: https://forms.gle/xmeKtW3fJJMaa7DXA
You can follow the project here: https://twitter.com/3DForEcoTech
I am working on a project in which we seek the best way to optimize the energy and cost of workflows in distributed data centers, but I have a problem with modeling the cost. It would be highly appreciated if you could help me or share my question with computer engineers.
Hey guys, I was just wondering whether there are any public clusters for evaluation of NGS data like ? Thanks in advance for your comments!
Theoretically, Stage III includes IIIA, IIIB, and IIIC. So I wonder why there is a fourth group (Stage III) in the Subtype profile workflow in the GENT2 database.
My other question: does anyone know a database where mRNA and protein data from blood samples are available for cancer patients and healthy controls?
Thanks,
Anita Kurilla
I want to analyze the responses and conformational states of proteins at various temperatures in silico. Can I do this with molecular dynamics simulation programs like GROMACS? Is there a particular workflow, program, or methodology?
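Yes, this kind of temperature study is a standard use of MD. One common pattern is to run otherwise identical GROMACS simulations at several reference temperatures; a minimal driver sketch, assuming gmx is on the PATH and a template .mdp file contains a REF_T placeholder (all file names are hypothetical):

# Sketch: launch GROMACS runs at several temperatures from one .mdp template.
import subprocess
from pathlib import Path

template = Path("md_template.mdp").read_text()     # must contain REF_T where ref_t belongs

for temp in (300, 320, 340):                       # temperatures in K
    mdp = Path(f"md_{temp}.mdp")
    mdp.write_text(template.replace("REF_T", str(temp)))
    subprocess.run(["gmx", "grompp", "-f", str(mdp), "-c", "system.gro",
                    "-p", "topol.top", "-o", f"md_{temp}.tpr"], check=True)
    subprocess.run(["gmx", "mdrun", "-deffnm", f"md_{temp}"], check=True)

The resulting trajectories can then be compared with RMSD/RMSF or clustering analyses to characterize the conformational states sampled at each temperature.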
Hello, I wanted to know what possible project ideas I could work on using proteome datasets from ProteomeXchange or any other proteome database, which I could carry out while working from home. A very basic workflow would also be appreciated. Thank you!
Hi,
I'm using Proteome Discoverer 2.5 to analyze my peptide spectra from a Q Exactive. But after the process completes, I get two error warnings in the consensus workflow. In the Peptide Validator node, it says "no decoy search was performed for the following search nodes:-SequestHT(A2) in workflow Workflow.FDR+fixed threshold validation is used instead". Meanwhile, in the Protein FDR Validator node, the error warning says "Cannot validate proteins for group SequestHT, because at least one search node has no decoy PSMs."
Do you have any idea what is the problem?
Thank you :)
The steps include:
1) Raw data format transformation for five companies
2) Updating positions for all SNPs to the hg37 version
3) Quality control within companies
4) Pre-phasing (SHAPEIT2) and imputation (IMPUTE2) for all SNPs of each company
5) Performing GWAS using two logistic models for 27 phenotypes
6) Statistical and downstream bioinformatic analysis
7) Estimation of genetic parameters (rg and hg)
8) PRS analysis
However, my dataset consists of only a little more than 1,000 people. With no background knowledge, how long would this take for a bioinformatics master's student?
For example, AI can automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection and management decisions. But what are the clinical challenges for its application?
#AI #cancer #clinical #oncology #biomedical
I have to use mass spectrometry proteome data (analyzed data from MaxQuant) to carry out post-translational modification analysis using the Proteome Discoverer software. I'm searching for a proper workflow for this purpose. Any suggestions/links to useful sources, etc. would be very welcome.
Hello everyone,
I need simple instructions for modeling End-of-Life scenarios in the GaBi software. I have reviewed every document available on the internet on this topic, but all of them explain the background methodology, not the modeling procedure. So rather than referring me to documents, I kindly ask you to tell me if you know a simple workflow in the software for end-of-life modeling. Thank you!
Dear community, can we build a customer relationship management (CRM) system using Python for workflow automation? If it's possible, any sources would be helpful. Thank you.
Best regards
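It is certainly possible; below is a minimal sketch of the storage layer using only the Python standard library (table and field names are arbitrary placeholders):

# Sketch: a tiny CRM-style contact store with the Python standard library only.
import sqlite3
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    email: str
    status: str = "lead"          # e.g. lead -> contacted -> customer

con = sqlite3.connect("crm.db")
con.execute("""CREATE TABLE IF NOT EXISTS contacts (
                   id INTEGER PRIMARY KEY,
                   name TEXT, email TEXT UNIQUE, status TEXT)""")

def add_contact(c: Contact) -> None:
    con.execute("INSERT OR IGNORE INTO contacts (name, email, status) VALUES (?, ?, ?)",
                (c.name, c.email, c.status))
    con.commit()

def advance_status(email: str, new_status: str) -> None:
    con.execute("UPDATE contacts SET status = ? WHERE email = ?", (new_status, email))
    con.commit()

add_contact(Contact("Jane Doe", "jane@example.com"))
advance_status("jane@example.com", "contacted")
print(con.execute("SELECT name, email, status FROM contacts").fetchall())

For a full web-based CRM with user accounts and an admin interface, a framework such as Django is a common starting point; the workflow-automation part then reduces to scheduled scripts or task queues acting on this data.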
Are there any alternatives to SequenceMatrix that can be deployed through Linux in a Snakemake workflow?
I can only find such processes for specific applications. A more generic step-by-step description of the workflow for building digital twins would be more helpful. Can someone recommend relevant literature on this topic? Thanks in advance!
I am very interested in VS and MD. I am now looking for a simple workflow protocol with freely available software for virtual screening. Thanks in advance.
G. Varatharaju
Dear friends!
For the project, we have 8 groups of rats (n=6), and my task is to perform PCR. It includes the expression of 8 genes, 3 microRNAs, and 4 lncRNAs. I'm a bit stressed about the number of samples and expression targets to work with simultaneously...
Some of the primers are not new and have been stored frozen - I have no documentation on them, only the name and the presumed concentration (100 mM).
Some will be purchased with all needed documentation.
My question is: how should I organize the PCR experiment workflow, which includes trying different concentrations of primers/probes/cDNA and troubleshooting?
I assume it is not reasonable (and too expensive) to try all 48 cDNA samples each time? How should I proceed then?
One more thing: how should I number the samples? Now they are simply 1-48. But maybe it is worth differentiating control/experimental/experimental+treatment samples via a specific numbering scheme?
Please share your algorithms and considerations :) I will gladly answer any questions about the details.
Thanks in advance!!!
Have a great day!
P.S. Soon we'll have 3 more groups (n=6) for the same 8 genes, 3 microRNAs, and 4 lncRNAs.
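For the numbering question above, one option is to generate sample IDs that encode group and animal, and to keep a machine-readable sample sheet instead of bare numbers 1-48; a small sketch (the group codes and counts are placeholders based on the description):

# Sketch: build a sample sheet with informative IDs for 8 groups x 6 rats.
import pandas as pd

groups = [f"G{i}" for i in range(1, 9)]        # placeholder codes, e.g. G1 = control
rows = [{"sample_id": f"{g}-R{r:02d}", "group": g, "rat": r}
        for g in groups for r in range(1, 7)]

sheet = pd.DataFrame(rows)
sheet.to_csv("sample_sheet.csv", index=False)
print(sheet.head())

Primer concentrations and efficiencies are usually optimized first on a small pilot subset (for example, a standard curve from pooled cDNA), not on all 48 samples.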
I have done LC-MS of a plant extract and analysed the data to generate a list of metabolites that now need to be categorized into categories such as anti-diabetic, anti-cancer, anti-microbial, and various others. Can you please suggest a workflow other than a manual literature search, as the data is too large to be processed manually?
Where can I find common benchmark datasets for task/workflow scheduling in edge-cloud computing that include information about the processing and transmission latency or energy consumption of all the tasks to be executed on each available processing device (end devices, edge, cloud)? Thank you.
Hello! In my last question, I asked a question about how to analyze a dataset with an ordinal dependent variable and multiple categorical independent variables. Here's the question if you'd like to check it out: https://www.researchgate.net/post/How_can_I_analyze_multiple_binary_independent_variables_against_an_ordinal_dependent_variable
My dataset is questionnaire data that has a field about skill level in a certain sport. This skill level is the target variable in the study. The questionnaire also had a question about which of four sports the respondent's answers concern. The aim is to compare whether the correlations between skill level and answers are similar or not in each sport. So I would like to find which variables predict skill level best in each sport, and how important the variables are in the prediction.
It was suggested that I use ordinal logistic regression, and test for proportional odds assumption. If the proportional odds assumption is not met, I should use consecutive binary logistic regressions to construct an ordinal model myself. It was also suggested that I could use a Boosted regression tree. I would like to use these both as a cross validating method, as there seem to be uncertainties in ordinal logistic regression.
I understand that the workflow should be as follows:
- Make ordinal logistic regression
- Check for proportional odds assumption
- Regardless of whether or not the assumption is violated, create binary logistic regressions to study the details of the data, and also to check if the ordinal logistic regression model did indeed provide an accurate summary of the correlations in the data
- Train a boosted regression tree and find the importances for each independent variable. Use as cross validation method.
The binary logistic regressions should be run as follows: Class 1 vs Class 2-4, Class 1-2 vs Class 3-4, Class 1-3 vs Class 4. I understand this and know how to do this, but I do not know if the boosted regression tree should be done in the same manner. Should I make three different boosted regression trees and calculate the importances separately, or should I only create one tree model that I train with all four target classes at once? It seems boosted regression trees don't perform well with target variables with more than two values.
I would truly appreciate your help. Also, if you know of studies that have used a similar method, I would really appreciate if you could link them.
Best regards,
Timo Ijäs
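A minimal sketch of the consecutive binary splits plus a single multiclass boosted tree for variable importances, on hypothetical data (column names, sizes, and values are placeholders):

# Sketch: consecutive dichotomizations of an ordinal target + one boosted tree.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.integers(0, 2, size=(200, 5)),           # placeholder binary answers
                 columns=[f"item_{i}" for i in range(5)])
y = rng.integers(1, 5, size=200)                               # ordinal skill level 1-4

# Splits: 1 vs 2-4, 1-2 vs 3-4, 1-3 vs 4
for cut in (1, 2, 3):
    y_bin = (y > cut).astype(int)
    model = LogisticRegression(max_iter=1000).fit(X, y_bin)
    print(f"split >{cut}:", dict(zip(X.columns, model.coef_[0].round(2))))

# One multiclass boosted tree over all four classes, used only for importances
gbt = GradientBoostingClassifier().fit(X, y)
print("importances:", dict(zip(X.columns, gbt.feature_importances_.round(2))))

A single tree model trained on all four classes is usually enough for overall variable importances; fitting three separate trees on the dichotomized targets is mainly worthwhile if you want importances per threshold.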
Should we perform radiometric calibration, multi-looking, etc., for the processing?
What pre-processing steps must be performed before forming the coherency matrix?
Hello,
What are the best tools and workflows to do DFT inside an active pocket of a given protein and ligand?
Thanks
Home office during the Corona pandemic has changed the workflow of project management in our research group - things have become much more cumbersome.
Therefore we are looking for a project management tool.
We need limited functions - but these should be easy and intuitive to use:
- A project structure (certain people have access to certain projects)
- Assignment of ToDos
- Definition of milestones
- A gantt chart to visualize milestones
- Compatible with Linux, Mac, and Windows
Would be great if you could share your experiences/suggestions/dos and don'ts.
I am assisting an endocrinology office in eastern NC with improving their office throughput. Downloading patient device data is causing appointments to run over and creating bottlenecks.
Could someone point me in a good direction for best-practice information and workflow recommendations in this area?