Science topic

Workflow - Science topic

Description of a pattern of recurrent functions or procedures frequently found in organizational processes, such as notification, decision, and action.
Questions related to Workflow
  • asked a question related to Workflow
Question
1 answer
I have observed that some highly reputed researchers seem to have their papers reviewed and published within a relatively short time (2-3 months), while others, including myself, experience significantly longer waiting periods (e.g., 7+ months just for the first decision). Interestingly, the quality of work in these quickly published papers often seems comparable to those that take much longer for acceptance. Is this due to biases in the review process, prioritization of certain submissions, or differences in journal workflows? How can early-career researchers better navigate or mitigate these delays? Insights or shared experiences from the community would be greatly appreciated.
Relevant answer
Answer
Let me be blunt about this. Usually, journal editors respect deadlines; some peer reviewers may not. Author-wise, there are things that shock reviewers (and I am one of them), such as sweeping statements (I am the first one to be making one here), the level of language proficiency (unintelligible English in places), the methodology (non-existent or beside the point), etc., which put them off and make them take their time weighing the pros and cons of the manuscript's outcome. Let's face it: in the Anglo-American higher education system, students are initiated into publishing very early in their careers. One of my own MA students in the US was already a published author, though not in a highly ranked, reputed journal. Thus, what we really need as young researchers is enough modesty, patience, and willingness to learn from our own mistakes and from others.
Sorry for the long message.
  • asked a question related to Workflow
Question
1 answer
Hi all,
I am working with mouse brain T2 MRI data and want to analyze volumetric differences in specific brain regions, including the hippocampus, lateral ventricles (LV), cortex, and corpus callosum (CC), between normal and Alzheimer's disease (AD) mouse models. As I am new to this type of analysis and interpretation, I would greatly appreciate your guidance on the following:
  1. Workflow: What should the overall workflow look like, from preprocessing the MRI data to statistical analysis and interpretation?
  2. Tools and Software: Which tools and software are recommended for preprocessing, segmentation, registration, alignment, and volumetric analysis?
  3. Steps Involved: Could you outline the specific steps (e.g., preprocessing, ROI segmentation, template registration) and provide suggestions for resources or tutorials?
  4. Statistical Analysis: How should I perform statistical comparisons to identify significant differences between groups?
Relevant answer
Answer
Dear Dr. Sharma,
As you are new to this area, unless you are willing to set up, test, and validate some automated measurement software, you will be doing these measurements manually by tracing regions of interest (ROIs) by eye on your images.
For this you will need a program that reads your image format and allows making these measurements. A freely available one is FIJI at https://fiji.sc/. It is based on the NIH ImageJ software but with a lot of added functions. Second, you will need a mouse brain atlas to guide you on the neuroanatomy of the MRI images based on the locations the images were taken from. As you are using T2-weighted images, you will easily identify the ventricles, as they will be hyperintense. Depending on the MR acquisition parameters (TE), you may also have good grey/white matter delineation. From here you will have most of the visual cues needed to identify the structures from the atlas.
Go check out FIJI; it includes an MRI data set that can be practiced on. Try to load your mouse images and see if you can trace out a structure with the ROI tool. You can estimate your reproducibility error by tracing an ROI on the same structure, say, 5 different times. This will give you the standard error of your measurement method. You can then use that to assess the differences between your experimental cohorts. For completeness, you will want to perform this standard-error test on the extrema and the median of your cohort to make certain you don't have a systematic error from size differences.
That should get you started. Good luck.
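To make the reproducibility check above concrete, here is a minimal pure-Python sketch (the trace values are hypothetical, and the function names are illustrative, not part of FIJI): turn the 5 repeated ROI traces into a standard error, then compare two cohorts with a Welch t-statistic.

```python
import math
from statistics import mean, stdev

def standard_error(repeated_traces):
    """Standard error of the mean from repeated traces of one structure."""
    return stdev(repeated_traces) / math.sqrt(len(repeated_traces))

def welch_t(group_a, group_b):
    """Welch's t-statistic comparing ROI measurements between two cohorts."""
    va = stdev(group_a) ** 2 / len(group_a)
    vb = stdev(group_b) ** 2 / len(group_b)
    return (mean(group_a) - mean(group_b)) / math.sqrt(va + vb)

# Hypothetical hippocampus areas (mm^2) from tracing the same slice 5 times:
traces = [3.10, 3.05, 3.18, 3.02, 3.11]
se = standard_error(traces)
```

If the measurement error (`se`) is small relative to the between-group difference, the Welch statistic gives a first indication of whether the difference is real; a full analysis would use a proper t-test or nonparametric alternative.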
  • asked a question related to Workflow
Question
2 answers
This question explores the transformative potential of machine learning in scenography and stage design, focusing on its ability to optimize creative workflows, enhance real-time adaptability, and foster innovative storytelling techniques. It invites specialists to reflect on how data-driven technologies can balance technical advancements with the artistic vision of theatrical productions.
Relevant answer
Answer
Machine learning (ML) can revolutionize modern stage design by enhancing creativity, sustainability, and audience engagement while preserving artistic integrity. Here's how:
1. Enhancing Creativity
  • Generative Design: ML algorithms like Generative Adversarial Networks (GANs) can produce novel design concepts, from stage layouts to costumes and lighting effects, offering fresh and diverse ideas for creative exploration.
  • Adaptive Scenography: By analyzing live audience feedback or performer movements, ML can dynamically adjust stage elements such as lighting, projections, or soundscapes to create immersive, interactive environments.
  • Inspiration from Data: By analyzing historical stage designs, cultural motifs, and visual trends, ML tools can provide designers with inspiration that bridges tradition and innovation.
2. Promoting Sustainability
  • Efficient Resource Management: ML can optimize material usage by simulating construction and deconstruction processes, reducing waste in stage construction.
  • Energy Optimization: Intelligent systems can monitor and adapt lighting and HVAC systems during rehearsals and performances to minimize energy consumption without compromising artistic goals.
  • Recyclable and Modular Designs: Machine learning can assist in creating designs that are modular and easier to repurpose for multiple productions, thereby reducing the environmental footprint.
3. Increasing Audience Engagement
  • Personalized Experiences: By analyzing audience preferences and demographic data, ML can tailor performances, such as recommending specific designs or effects to resonate with particular audiences.
  • Interactive Storytelling: ML can integrate real-time audience feedback into performances, allowing viewers to influence the narrative or visual elements through interactive devices.
  • Virtual and Augmented Reality: ML can enhance VR and AR technologies, offering audiences immersive experiences that expand the boundaries of traditional stage design.
4. Preserving Artistic Integrity
  • Creative Control for Designers: ML serves as a tool rather than a replacement, empowering designers to experiment and refine their vision while retaining full creative control.
  • Ethical Design Practices: ML can be programmed to respect cultural and artistic traditions, ensuring that innovations are contextually and culturally appropriate.
  • Collaboration over Replacement: Positioning ML as a collaborative assistant ensures that human artistry remains at the forefront while benefiting from technological augmentation.
  • asked a question related to Workflow
Question
9 answers
I sent in my prepped plasmids for DNA sequencing. Unfortunately, the results are very noisy (I attached a screenshot). I double-checked and the samples have the right concentration (100 ng/ul) and a good 260/280 ratio. I didn't send any primers because the company offers the ones that I need. Therefore, I am quite sure that the primers should work.
Maybe it is helpful to briefly describe my workflow; I did site directed mutagenesis via PCR and subsequent gel extraction. I then transformed competent bacteria and selected them. I used single clones for cultivation and performed plasmid preparation with a kit.
Does anybody have an idea what might be the problem? Are there any things that I am missing or pitfalls that can happen during my workflow and might not be obvious?
Many thanks in advance!
Relevant answer
Answer
The results for 13-19 are a surprise, but I am not convinced that secondary-structure effects are to blame for the size differences. How sure are you that this is a perfectly clean plasmid? It has happened that researchers have borrowed a plasmid from a colleague and the plasmid already carried an insert or inserts. 15, 17, and 18 run smaller than the expected size, and I would have to blame secondary structure for that effect. Either sequencing or an RE digest of some of the smaller bands and one correct-size band might give some information as to whether insertion/deletion effects are a possibility.
  • asked a question related to Workflow
Question
1 answer
Hi everyone,
I am currently working on lattice structure optimization using ModeFrontier integrated with ANSYS (2022). My workflow involves:
  1. Generating .inp files from nTop and directing them through a File Transfer Node in ModeFrontier.
  2. Using the Direct ANSYS Node in ModeFrontier to run ANSYS simulations with the transferred .inp files.
The issue I am facing is that ANSYS does not update the input file (.inp) with each iteration. It seems that ANSYS External Model is fixed to the initial .inp file path, and subsequent files generated during iterations are not being used.
I suspect the problem lies with the External Model Input Path in ANSYS. Even when I specify it initially, ANSYS does not seem to dynamically read the updated files provided during iterations.
Has anyone encountered a similar issue? How can I configure ANSYS to dynamically update and use the new .inp file during each iteration in ModeFrontier?
Thank you for your help!
Relevant answer
Answer
Dear Dr. Numan Khan, I am Mauro, a researcher at ESTECO. Thank you for sharing your question regarding the integration between ANSYS and modeFRONTIER. To address this issue in detail and receive specific support, we kindly invite you to contact our technical team at support@esteco.com. We will be happy to assist you and help resolve the problem. Best regards, Mauro
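While waiting for vendor support, one generic workaround for this class of problem (a sketch only, using no modeFRONTIER- or ANSYS-specific API; `stage_latest_inp` and the paths are hypothetical) is to keep the solver pointed at one fixed .inp path and overwrite that file with the newest generated input before each run.

```python
import shutil
from pathlib import Path

def stage_latest_inp(generated_dir, fixed_input):
    """Copy the newest generated .inp onto the fixed path the solver reads.

    Sketch of a common workaround: the solver configuration never changes;
    only the contents of the file at the fixed path do, once per iteration.
    """
    candidates = sorted(Path(generated_dir).glob("*.inp"),
                        key=lambda p: p.stat().st_mtime)
    if not candidates:
        raise FileNotFoundError(f"no .inp files in {generated_dir}")
    shutil.copy(candidates[-1], fixed_input)
    return candidates[-1].name
```

A script node calling something like this before the solver node may sidestep the fixed-path limitation, but the vendor team is the authority on the supported configuration.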
  • asked a question related to Workflow
Question
2 answers
Subject: Request for Insights on Research Workflows
I hope you’re well. I’m Dipro Paul, an undergraduate student at Daffodil International University, working on an AI tool to simplify research workflows. I would be grateful if you could share some insights from your research experience, especially regarding areas where AI support could be valuable. Your perspective would be invaluable, and I would truly appreciate any time you could spare.
Link: https://forms.gle/PqSN6tvz4tgb1LQA9
Thank you for considering my request.
Best regards,
Dipro Paul
Undergraduate Student, Daffodil International University
Relevant answer
Answer
I hope you’re doing well. I’ve had the chance to learn about your experiences and research contributions, and I’m truly inspired by the impact you’re making. I would love to connect on LinkedIn or any social platform to stay updated on your journey and explore potential synergies.
Looking forward to connecting!
  • asked a question related to Workflow
Question
1 answer
Workflow for best results in MLPA deletion/duplication kits
Relevant answer
Answer
Dear Nitish K Mohanta , try Peak Scanner™ Software v1.0.
  • asked a question related to Workflow
Question
2 answers
Despite the extensive academic research and new technologies in the AEC industry, their real-world impact remains limited. How, when, and by whom can the AEC sector effectively leverage these advancements—owners, CEOs, contractors, or organizations?
Relevant answer
Answer
I propose creating a repository for academic research and new technologies in the AEC industry, managed by artificial intelligence, to provide easy access to cutting-edge information. This is only a dream.
  • asked a question related to Workflow
Question
2 answers
Hello researchers,
Sorry for my stupid question. I am learning the QIIME2 workflow for analyzing some 16S amplicon NGS fastq data. I found a very nice paper with data and code publicly available ( ), so I decided to reproduce their QIIME2 results.
In their steps, they cut off the barcode and primer sequences from the raw fastq sequences (yes, there really are barcode and primer sequences at the front of the sequences). However, I did not find any primer sequences in my own NGS fastq files. Does this mean that the sequencing company has removed the barcodes, adapters, and primer sequences for me?
Also, should I perform quality control before importing my raw fastq data into QIIME2?
Relevant answer
Answer
Sometimes FastQC fails to detect barcode and primer sequences. In this paper ( ), I found barcode and primer sequences at the 5' end of the sequences by checking manually, but FastQC told me there were no adapter sequences. You can download the 16S amplicon raw data from the NCBI database under BioProject PRJNA788265. @Péter Gyarmati
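For anyone wanting to replicate that manual check, here is a small plain-Python sketch that counts how many reads begin with a possibly degenerate primer. The 515F primer shown is only an example; verify the actual primer used in your protocol.

```python
# IUPAC degenerate-base codes, so primers like 515F ("GTGYCAGCMGCCGCGGTAA",
# a common 16S V4 forward primer, used here purely as an example) can match.
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "GC", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def matches_primer(read, primer):
    """True if the read's 5' end matches the (possibly degenerate) primer."""
    if len(read) < len(primer):
        return False
    return all(base in IUPAC[p] for base, p in zip(read, primer))

def primer_fraction(fastq_lines, primer):
    """Fraction of FASTQ reads (4 lines per record) starting with the primer."""
    reads = [line.strip() for i, line in enumerate(fastq_lines) if i % 4 == 1]
    return sum(matches_primer(r, primer) for r in reads) / len(reads) if reads else 0.0
```

If the fraction is near zero, the provider has most likely already trimmed primers; if it is near one, you should trim them (e.g. with cutadapt or QIIME2's q2-cutadapt plugin) before denoising.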
  • asked a question related to Workflow
Question
3 answers
Hi,
I am very new to bioinformatics. I recently started a project that aims to perform taxonomic analysis on raw reads from mixed environmental samples.
For example, we performed NGS (whole genome) on an insect, and we want to identify the taxonomy of every symbiotic bacterium in the raw reads.
Currently, I can use Kraken2 to perform the analysis. However, I have the following questions.
1. How can I focus only on the bacteria, remove the rest of the data, and make a summary table or visualization (the percentage of each bacterial strain)?
2. Because we will switch to mixed environmental samples in the future and focus on 16S rRNA, I would like to know how to perform the taxonomic analysis by first identifying 16S rRNA in the raw reads and then restricting the analysis to it. Because mixed environmental samples will contain not only bacteria but also eukaryotic DNA reads, I want to identify those and set them aside for later analysis to reduce processing time.
Given that, what processes and tools should I use?
I found the tutorial "16S Microbial Analysis with mothur (with Galaxy)"; however, when I tried it with my current insect NGS data, it spent a very long time making contigs from non-bacterial reads. I am wondering if there is any method that can remove non-bacterial reads up front.
Additionally, I found other tools such as RNAmmer, barrnap, and Prokka. However, these tools seem to accept only bacterial whole genomes, not mixed reads.
If you can share some experience and a good workflow or tools to try, I would very much appreciate it.
Thank you very much for your great help.
Relevant answer
Answer
Hi Dear Sonja,
I prepared the answer as a PDF; I hope it will be useful.
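Since the details are in an attached PDF, here is an independent minimal sketch (not from that PDF) addressing question 1: filtering a Kraken2 report down to bacterial species. It relies only on Kraken2's standard report columns (percentage, clade read count, direct read count, rank code, taxid, indented name).

```python
def bacterial_species_table(report_lines):
    """Summarize species under the Bacteria domain from a Kraken2 report.

    Each tab-separated line: pct, clade_reads, direct_reads, rank, taxid, name.
    Rank code "D" marks a domain, "S" a species; names are indented by depth.
    Returns {species_name: clade_read_count}.
    """
    in_bacteria = False
    table = {}
    for line in report_lines:
        pct, clade, direct, rank, taxid, name = line.rstrip("\n").split("\t")
        name = name.strip()
        if rank == "D":                     # entering a new domain block
            in_bacteria = (name == "Bacteria")
        elif in_bacteria and rank == "S":   # species row inside Bacteria
            table[name] = int(clade)
    return table
```

Dividing each count by the total bacterial clade count then gives the per-species percentages for a summary table or bar plot. For question 2, extracting 16S reads first with a dedicated tool (e.g. SortMeRNA against an rRNA database) before classification is one common approach.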
  • asked a question related to Workflow
Question
3 answers
Dear Colleagues,
I hope you’re all doing well. As we know, systematic reviews are crucial for synthesizing evidence and providing comprehensive insights into specific research questions. However, the process of conducting these reviews can be complex, demanding and time consuming. We are in the process of developing an AI-based software solution designed to assist with the article screening and data extraction stages of systematic reviews. To ensure our tool effectively addresses the needs of the research community, we’d like to learn more about your experiences and get your perspective on common problems.
· What are the most time-consuming aspects of your workflow?
· Are there particular pain points or frustrations you regularly encounter?
· How do you currently address these challenges, and what improvements would you like to see in an AI tool?
If you are interested in trying the tool or have any questions, please don't hesitate to send me a message (or visit www.revisio.ai )
Looking forward to hearing your thoughts!
Relevant answer
Answer
Dear Sreenivas, thank you for your comment. Streamlining multiple article screening and data extraction are our main goals. We hope you will find the tool helpful when it is released.
  • asked a question related to Workflow
Question
1 answer
Hi ResearchGate community,
I'm currently working on a project that involves the structural alignment of multiple protein sequences. However, I do not have PDB IDs for these proteins. I was wondering if anyone could provide advice or share their experience on how to approach this problem.
Specifically, I'm looking for methods or tools that can help with:
  1. Predicting the 3D structures of proteins from their sequences.
  2. Performing structural alignment with these predicted structures.
I've come across several protein structure prediction tools like AlphaFold, Rosetta, and I-TASSER. However, I'm not entirely sure how to integrate these predictions into a workflow for structural alignment. Additionally, any recommendations for software or online tools that can handle multiple structure alignments would be greatly appreciated.
Has anyone tackled a similar challenge? What tools or approaches worked best for you?
Thank you in advance for your insights!
Best regards
Relevant answer
Answer
Dear Aryaman Sajwan,
To perform structural alignment of proteins, you first have to get their 3D structures - either from databases (if they were solved by X-Ray / NMR / CryoEM) or by generating with modeling tools like Modeller, I-Tasser, AlphaFold, etc.
Have you checked the Structure section of the Uniprot records for your proteins?
After you have obtained the 3D coordinates for your proteins of interest, you can easily align them in most molecular modeling / visualization programs like UCSF Chimera, PyMOL, or Maestro, which are free for academia.
E.g., in Pymol, if you have proteins named ProtA and ProtB, one can superimpose them using a simple command:
>super ProtB, ProtA
Please feel free to ask more questions, if something is unclear.
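As a reference point for interpreting the fit these commands report: superposition tools quote an RMSD, which over already-superimposed, paired coordinates is simply the following (a minimal pure-Python sketch with made-up coordinates; real tools also compute the optimal rotation first).

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between paired 3D coordinate lists.

    Assumes the two structures are already superimposed and that the
    points (e.g. C-alpha atoms) are matched one-to-one.
    """
    assert len(coords_a) == len(coords_b), "need paired coordinates"
    sq = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
             for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b))
    return math.sqrt(sq / len(coords_a))
```

As a rough rule of thumb, an RMSD under ~2 Å over C-alpha atoms usually indicates closely similar folds; predicted models should be interpreted together with their confidence scores (e.g. AlphaFold's pLDDT).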
  • asked a question related to Workflow
Question
3 answers
Salam,
As we look to the future of scientific research and communication, it's clear that advances in AI will have a major impact. AI-powered tools are already augmenting many aspects of the research workflow, from literature synthesis to data analysis.
My question is: can we see a new form of scientific publication ?
F CHELLAI
Relevant answer
Answer
Given that the peer-review process as it stands is not functioning well in many cases, I think this is a huge opportunity to make the process leaner and make research available more quickly. Typical economics journals have lead times of several years until something is published, and reviewer comments are often contradictory, so I think it would be great to build more automated processes around it.
  • asked a question related to Workflow
Question
1 answer
What do you do if your Technological Innovation workflow needs a time-saving boost?
Relevant answer
Answer
If your technological innovation workflow needs a time-saving boost, there are several strategies you can consider implementing:
  1. Automation: Identify repetitive tasks in your workflow and explore automation tools and technologies to streamline these processes. This could involve using scripting languages, workflow automation platforms, or robotic process automation (RPA) solutions to automate routine tasks and free up time for more creative work.
  2. Standardization: Establish standardized procedures and templates for common tasks and processes. This can help eliminate ambiguity, reduce errors, and speed up the execution of routine activities.
  3. Collaborative Tools: Utilize collaborative tools and platforms to facilitate communication, coordination, and project management within your team. Tools such as project management software, version control systems, and collaborative document editing platforms can help streamline collaboration and keep projects on track.
  4. Lean Principles: Apply lean principles to identify and eliminate waste in your workflow. This could involve conducting value stream mapping exercises to identify inefficiencies, implementing continuous improvement initiatives, and empowering team members to suggest and implement process improvements.
  5. Agile Methodologies: Adopt agile methodologies such as Scrum or Kanban to manage your innovation projects more efficiently. Agile approaches emphasize iterative development, frequent feedback, and adaptive planning, allowing you to deliver value more quickly and respond to changing requirements effectively.
  6. Outsourcing and Partnerships: Consider outsourcing non-core tasks or partnering with external vendors or experts to accelerate certain aspects of your innovation workflow. Outsourcing can help leverage specialized expertise, access additional resources, and expedite project delivery.
  7. Continuous Learning and Skill Development: Invest in continuous learning and skill development to enhance the efficiency and effectiveness of your innovation workflow. Stay updated on emerging technologies, tools, and best practices in your field to leverage the latest advancements and stay ahead of the curve.
  8. Prioritization: Prioritize tasks and focus your efforts on high-impact activities that contribute most directly to your innovation objectives. Use techniques such as the Eisenhower Matrix or MoSCoW prioritization to identify and prioritize tasks based on their urgency and importance.
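As a toy illustration of point 8, the Eisenhower Matrix reduces to a two-flag classification. The quadrant labels follow the common do/schedule/delegate/eliminate convention; the task tuples are hypothetical.

```python
def eisenhower_quadrant(urgent, important):
    """Classify a task by the two Eisenhower axes."""
    if urgent and important:
        return "do first"
    if important:
        return "schedule"
    if urgent:
        return "delegate"
    return "eliminate"

def prioritize(tasks):
    """Group (name, urgent, important) tuples into the four quadrants."""
    buckets = {q: [] for q in ("do first", "schedule", "delegate", "eliminate")}
    for name, urgent, important in tasks:
        buckets[eisenhower_quadrant(urgent, important)].append(name)
    return buckets
```

The value of the exercise is less in the code than in forcing an explicit urgent/important judgment for every task before work begins.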
  • asked a question related to Workflow
Question
2 answers
What do you do if your Technological Innovation workflow needs a time-saving boost?
Relevant answer
Answer
Due to the lack of details, I can only answer generically. You can either boost resources until diminishing returns set in, or reduce the scope so the work fits within the allotted time frame.
Regards
  • asked a question related to Workflow
Question
1 answer
Based on the research article “Red and white blood cell morphology characterization and hands-on time analysis by the digital cell imaging analyzer DI-60”: its conclusion was that the DI-60 demonstrated acceptable performance for normal samples, but that for abnormal WBC differentials and RBC morphology characterization it should be used carefully. Does this mean that the Sysmex DI-60 is not that accurate for abnormal samples? Will this hinder its efficiency or feasibility?
Relevant answer
Answer
To enhance the Sysmex DI-60's lab performance for WBC differentials and RBC morphology analysis without impacting workflow efficiency, we can employ a variety of methods. These include improving image analysis algorithms to better identify abnormal cells, offering thorough user training programs for precise result validation, implementing routine quality control checks to monitor system performance, encouraging feedback for ongoing enhancements, providing software customization options, and ensuring smooth integration with Laboratory Information Systems (LIS). By focusing on these areas, the DI-60 can deliver more precise and dependable results while still supporting efficient lab operations.
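One of the routine quality control checks mentioned above can be sketched numerically. The following is a generic Westgard-style 1-3s rule (an illustration only, not Sysmex software): flag a control result more than three SDs from the running mean of previous control results.

```python
from statistics import mean, stdev

def violates_13s(history, value, n_sd=3.0):
    """Westgard 1-3s rule: flag a control value more than n_sd standard
    deviations from the mean of prior control results.

    history must contain at least two prior control measurements.
    """
    m, s = mean(history), stdev(history)
    return abs(value - m) > n_sd * s
```

In practice a full Westgard scheme combines several such rules (1-2s warning, 2-2s, R-4s, etc.) and would be tied into the LIS rather than run ad hoc.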
  • asked a question related to Workflow
Question
1 answer
This question is directed on the performance of the Sysmex DI-60 digital morphology analyzer and its contributions on the improvement in the laboratory workflow. I am curious as to what capabilities of the analyzer could potentially reduce the long TAT and tedious process of performing manual methods which may possibly eliminate/lessen the need to perform manual methods.
Relevant answer
Over the years, several hematology analyzers have been developed, but they have been limited in their ability to provide information about abnormal morphologies. This limitation has caused many laboratories to continue relying heavily on manual counts, which are labor-intensive, time-consuming, and prone to reviewer-subjectivity errors. However, the Sysmex DI-60 analyzer has been shown to significantly improve the speed and accuracy of sample processing by reducing long TAT and manual processes. Its ability to locate, identify, and characterize both normal and abnormal RBCs and WBCs sets it apart from previous analyzers. The system and criteria set for this machine also enable it to present flagged and unidentified cells for review, which can then be checked and verified by medical technologists. With this in mind, TAT can be reduced considerably, because medical technologists will no longer have to laboriously examine manual PBS samples, especially those working at tertiary hospitals and heavily loaded laboratories that run hundreds of samples every day. Combining the efficient and precise performance of the DI-60 with the skills and mastery of medical technologists in verifying results makes the Sysmex DI-60 a game-changer in hematology and a valuable asset to laboratories seeking to improve their efficiency and accuracy.
  • asked a question related to Workflow
Question
1 answer
The question is based on the article “Tenets of Specimen Management in Diagnostic Microbiology” by Rajeshwar and Pathak. It explores how advances in technology, specifically barcoding systems, can enhance specimen management processes in diagnostic microbiology. The aim is to investigate the potential benefits of barcoding systems, such as improved tracking, accuracy, and efficiency in specimen identification and tracking throughout the laboratory workflow.
Relevant answer
Answer
Innovations in technology have been beneficial in the field of medicine, particularly in laboratory diagnostics, over the past years. As noted by Harpreet (2023) in his article Current Trends and Innovations in Medical Laboratory Technology, Medical Laboratory Science is a field that is continually developing. It is integral in the diagnosis and treatment of innumerable diseases; thus, it is crucial to be up-to-date with the latest trends and innovations to bring about more promising health assistance. Some of the latest innovations Harpreet mentioned include Advanced Automation and Robotics, Next Generation Sequencing, Artificial Intelligence and Machine Learning, Point-of-Care Testing, and Telemedicine and Remote Monitoring. In addition, the Barcoding System as well is a product of growing technology. Further, this too has its role in diagnostic laboratories, which includes the Microbiology Section.
Rezaei‐Hachesu et al. (2016) defined in their article, Recommendations for Using Barcode in Hospital Process, that barcodes are reference numbers that allow document searching, including descriptive data and other important information, through the use of computers. As medical errors are one of the primary concerns in on-site healthcare, the authors cite barcodes as the most economical method of improving patient safety. Barcoding systems can be beneficial in considerable ways, in that they aid specimen collection, patient identification, patient care, and the sharing of information in real time (Vinutha, 2024). A barcode tracking system provides the laboratory with accurate collection and processing of specimens through verification between patient wristband data and specimen containers, thus reducing errors in labeling and processing. Further, by linking data to wristband barcodes, it enables real-time information sharing and immediate access for medical staff.
Innovations in technology, such as the Barcoding System, do have an impact on laboratory diagnostics. The Microbiology Section, as part of the clinical laboratory, also shares benefits from the barcode technology as it enhances patient safety by allowing a more accurate specimen collection to verification of treatment by the medical staff, thus improving medical staff’s productivity and further being able to provide better health assistance.
References:
Harpreet. (2023, October 20). Current Trends and Innovations in Medical Laboratory Technology - BizTech College. BizTech College. https://biztechcollege.com/biztalk/medical-laboratory-technology/
Rezaei‐Hachesu, P., Zyaei, L., & Hassankhani, H. (2016). Recommendations for using barcode in hospital process. Acta Informatica Medica, 24(3), 206. https://doi.org/10.5455/aim.2016.24.206-210
Vinutha. (2024, January 22). All about Barcode Technology in Healthcare Industry 2024 | Infraon. Infraon Blogs - Experts Insights on ITOps and Customer Success. https://infraon.io/blog/barcode-technology-in-healthcare-industry/
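One concrete mechanism by which barcodes catch labeling errors is the built-in check digit. As an illustration (EAN-13 weighting per the GS1 specification; the example number is arbitrary), a single mistyped digit changes the expected check digit, so a scanner rejects the code.

```python
def ean13_check_digit(body12):
    """Check digit for a 12-digit EAN-13 body.

    Digits are weighted 1 and 3 alternately from the left; the check digit
    completes the weighted sum to a multiple of 10.
    """
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(body12))
    return (10 - total % 10) % 10

def is_valid_ean13(code13):
    """Validate a full 13-digit code against its final check digit."""
    return ean13_check_digit(code13[:12]) == int(code13[12])
```

Laboratory specimen labels typically use other symbologies (e.g. Code 128), but the error-detection principle is the same.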
  • asked a question related to Workflow
Question
2 answers
In the past years I've been creating ENMs using dismo and its related packages like raster.
I have my own workflow but for didactic purposes I also use modleR workflow (https://github.com/Model-R/modleR) which is very good for students learning ENMs.
Recently, the package raster was retired and a lot of my analysis and workflow rely on raster and dismo which has been causing me some issues.
As far as I'm capable I've been changing my codes to use the package terra instead of raster, but it has been a nightmare.
Is there any workflow or package I can follow/use as an alternative to dismo/raster? Any package or workflow that already uses terra to manipulate spatial data?
Thanks for the attention !
Relevant answer
Answer
Hi Alexandre,
The new package “predicts”, which is based on terra, is replacing dismo. This package was created by Robert Hijmans, who is also the creator of terra, raster, and dismo.
  • asked a question related to Workflow
Question
1 answer
A subfield of medical physics/molecular biology. The overall aim of the effort is to improve the understanding of the molecular architecture of unknown or understudied biological systems, for example the human sperm (flagellum component), using cryo-electron tomography and advanced image processing workflows.
In this example, male fertility issues arising from accidents, excessive therapeutic radiation, and pathological development in puberty call for scientists to solve the structures of key flagellar macromolecular complexes to understand the molecular mechanisms controlling sperm function in both health and disease.
Do you know the current degree of effectiveness, compared with other approaches, and the reliability of structure elucidation of biological systems by cryo-electron tomography?
Relevant answer
Answer
Cryo-electron tomography (cryo-ET) is a technique that allows the study of the 3D structure of cells and tissues at near-native conditions⁴. It can provide unprecedented insights into the inner working of molecular machines in their native environment, as well as their functional relevant conformations and spatial distribution within biological cells or tissues¹.
The field of structure elucidation by cryo-ET has evolved rapidly in recent years, thanks to the advances in instrumentation, sample preparation, image processing, and data analysis. Some of the current trends and challenges in the field are:
- Achieving higher resolution and better contrast for cryo-ET images, by using phase plates, direct electron detectors, and improved alignment and reconstruction algorithms¹³.
- Developing new methods and protocols for cryo-ET sample preparation, such as cryo-focused ion beam milling, cryo-correlative light and electron microscopy, and cryo-electron microscopy of vitreous sections¹⁴.
- Applying cryo-ET to study a wide range of biological systems and processes, such as viruses, bacteria, organelles, cytoskeleton, membrane proteins, and macromolecular complexes¹²³.
- Integrating cryo-ET with other complementary techniques, such as mass spectrometry, X-ray crystallography, nuclear magnetic resonance, and computational modeling, to obtain a comprehensive understanding of the structure and function of biological macromolecules¹⁵.
- Disseminating and standardizing cryo-ET data and structures in public databases, such as the Protein Data Bank (PDB) and the Electron Microscopy Data Bank (EMDB), to facilitate data sharing and reproducibility⁵.
(2) High-resolution in situ structure determination by cryo-electron .... https://www.nature.com/articles/s41596-021-00648-5.pdf.
(3) Cryo-electron tomography: 3-dimensional imaging of soft matter. https://pubs.rsc.org/en/content/articlehtml/2011/sm/c0sm00441c.
(4) JoF | Free Full-Text | Preliminary Structural Elucidation of β-(1,3 .... https://www.mdpi.com/2309-608X/7/2/120/htm.
(5) Evolution of standardization and dissemination of cryo-EM structures .... https://www.jbc.org/article/S0021-9258%2821%2900338-0/pdf.
  • asked a question related to Workflow
Question
2 answers
Can I get workflow to perform beam dynamics using ASTRA?
Relevant answer
Answer
Can you please share more details?
  • asked a question related to Workflow
Question
2 answers
My current workflow needs the following features
  • Daily jots and from that making reminders and to-do lists
  • Making mind maps to create relations between strings
  • Cross-platform without losing features
  • Integration with Zotero or GitHub
  • To dump any thoughts or ideas on the go and reflect on it later
  • To support LaTeX, not just math but, if possible, full typesetting
  • A weekly history or review summary should be generated.
Relevant answer
Answer
While it might be challenging to find a single application that covers every aspect perfectly, a combination of tools might serve your purpose effectively. Here are some suggestions:
1. Notion
  • Features: Notion is excellent for daily jots, to-do lists, and setting reminders. It also supports creating databases and integrating various types of content.
  • Cross-Platform: Available on multiple platforms without losing features.
  • Mind Maps: While Notion doesn't natively support mind maps, it can be integrated with third-party mind mapping tools.
  • Integration: Limited direct integration with Zotero or GitHub, but can be partially achieved through third-party integration tools like Zapier.
  • LaTeX Support: Supports inline LaTeX for math, but full LaTeX typesetting might be limited.
2. Evernote
  • Features: Good for quick note-taking, to-do lists, and reminders.
  • Cross-Platform: Consistent experience across platforms.
  • Mind Maps: Limited native support but can be integrated with external mind mapping tools.
  • Integration: No direct integration with Zotero or GitHub.
  • Reflection and Review: Offers features to review past notes but doesn't generate a weekly summary natively.
3. Microsoft OneNote
  • Features: Strong for daily notes, to-do lists, and has good organizational features.
  • Cross-Platform: Available across multiple platforms.
  • Mind Maps: You can create basic mind maps, but for complex ones, integration with external tools is better.
  • Integration: Limited with Zotero and GitHub.
  • LaTeX Support: Supports math LaTeX, but full typesetting is not its strength.
4. Obsidian
  • Features: Excellent for linking thoughts and creating a knowledge base.
  • Cross-Platform: Consistent experience across platforms.
  • Mind Maps: Has plugins for mind mapping.
  • Integration: Can integrate with GitHub for version control of your notes; Zotero integration might require additional setup.
  • LaTeX Support: Good support for LaTeX.
  • Review Summary: Plugins may be available for weekly reviews.
5. Roam Research
  • Features: Great for networked thoughts and daily notes.
  • Cross-Platform: Web-based, accessible on any platform with internet.
  • Mind Maps: Native support for creating mind maps.
  • Integration: Limited with Zotero and GitHub.
  • Review Summary: Offers daily notes and can be customized for weekly reviews.
6. TiddlyWiki
  • Customizable: Highly customizable and can be tweaked to fit various needs.
  • LaTeX Support: With plugins, it can support LaTeX.
  • Cross-Platform: As a local HTML file, it works across platforms, but with a slightly different approach.
Additional Tools
  • Zotero: For research and reference management.
  • GitHub: For version control and collaboration.
  • LaTeX Editors: For comprehensive typesetting, standalone LaTeX editors might be necessary.
Integration Tools
  • Zapier/IFTTT: For integrating different apps where native integration is not available.
Considering your specific needs, you might have to use a combination of these tools and explore available plugins or integrations to fully optimize your workflow. Experiment with a few to see which combination aligns best with your working style.
  • asked a question related to Workflow
Question
5 answers
Hello everyone,
I am calculating the DOS of Li2TiO3 with HSE06, using VASP.
But my calculation takes a lot of time, and now I think it has stopped.
My workflow:
1. Relaxation calculation with PBE, obtain CONTCAR - KPOINTS 4 4 4
2. SCF with PBE using CONTCAR, obtain CHGCAR, WAVECAR - KPOINTS 4 4 4
3. DOS calculation with HSE06 - KPOINTS 9 9 9
I put my input file below. Can anyone help me?
Relevant answer
Answer
Yes, you should. Step 3 is a non-SCF calculation, which must start from a converged wavefunction and charge density; otherwise the non-SCF DOS calculation will start from a random wavefunction, which is completely different from the converged wavefunction of step 2. Therefore, you should use the wavefunction and charge density from step 2 to ensure that your DOS calculation uses exactly the same structure, wavefunction and charge density as step 2.
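As an illustration of that restart, a hypothetical INCAR fragment for the HSE06 DOS step might look like the following. The tag names are standard VASP tags, but every value here is illustrative and the exact non-SCF setup for hybrids should be checked against the VASP documentation for your system:

```
ISTART   = 1        # read the converged WAVECAR from step 2
ICHARG   = 1        # read the converged CHGCAR from step 2
LHFCALC  = .TRUE.   # switch on the hybrid functional
HFSCREEN = 0.2      # HSE06 screening parameter
ALGO     = Normal   # algorithm choice; hybrids are often run with ALGO = All/Damped
ISMEAR   = -5       # tetrahedron method for an accurate DOS
LORBIT   = 11       # write projected DOS
NEDOS    = 2001     # number of DOS grid points
```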
  • asked a question related to Workflow
Question
3 answers
What tools/software/apps do you use to track milestones and monitor the workflow of your research?
Relevant answer
Answer
MATLAB
  • asked a question related to Workflow
Question
1 answer
How can end-to-end workflow automation be achieved for cloud-based machine and deep learning pipelines?
Relevant answer
Answer
Here is a comprehensive approach to performing end-to-end automation: start by defining the requirements of your ML/DL pipeline; develop models and train them on large datasets stored in secure cloud-based data lakes; then deploy the models and monitor them for real-time insights.
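To make that concrete, here is a toy sketch of how such a pipeline can be expressed as ordered, individually automatable stages. Nothing here is tied to any particular cloud provider; all stage names and logic are hypothetical stand-ins for managed services (data lake ingestion, training jobs, deployment targets, monitoring hooks):

```python
# Toy sketch of an end-to-end ML pipeline as ordered, testable stages.
def ingest():
    return [1.0, 2.0, 3.0, 4.0]          # stand-in for reading from a data lake

def train(data):
    return sum(data) / len(data)         # the "model" is just the mean here

def deploy(model):
    return f"deployed model with parameter {model}"

def run_pipeline(stages=(ingest, train, deploy)):
    # Run each stage in order, feeding the previous result into the next
    result = None
    for stage in stages:
        result = stage() if result is None else stage(result)
    return result

print(run_pipeline())  # deployed model with parameter 2.5
```

In a real deployment, each stage would be a separately scheduled, monitored job (e.g. in an orchestrator), but the contract between stages stays the same.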
  • asked a question related to Workflow
Question
3 answers
I am working on petrophysical property modelling. After creating an average velocity log from check-shot data for depth conversion, I find it difficult to upscale the log successfully. A dialogue keeps popping up: "No wells with the selected log found. Scale up failed".
Log upscaling, which is supposed to be a very simple operation in Petrel, is giving me serious trouble. I have double-checked my workflow to see if I missed anything. I really need assistance on this.
Relevant answer
Answer
A model was brought to me with the same problem and I was able to identify the reason: it was a problem with the depth domain between the surfaces and the 3D grid. The depth conversion of the surfaces was not correct.
When we used well tops as input to create the surfaces it worked, so I advised them to recheck/create a new TDR and try again.
  • asked a question related to Workflow
Question
2 answers
I have an Isight workflow which simply includes an optimization tool and an Abaqus component. My Abaqus model is a single lap joint which has some pins. Thus, I'm trying to use Isight to calculate the maximum force the joint withstands (which I calculate from the .odb file with an external Python script) for a variable pin length.
However, when I run the workflow, I get the errors detailed in the picture.
Has anyone encountered a similar problem before, or has any idea of how to solve it?
Thanks beforehand!
Relevant answer
Answer
Talina Terrazas Monje Have you solved this problem yet?
  • asked a question related to Workflow
Question
1 answer
Does anyone use the software CUE for workflows....any suggestions???
Relevant answer
Answer
Thanks, Doctor. If you have worked with CUE before, could you help me with some troubleshooting on how to declare variables and make some calculations to report concentrations in the workflow? I will recommend your works.
  • asked a question related to Workflow
Question
1 answer
Snakemake is a versatile workflow management system that can be applied to various fields, including plant pathology. In plant pathology, Snakemake can streamline and automate complex analysis pipelines, making research more efficient and reproducible. Here's a brief overview of how Snakemake is used in plant pathology:
1. **Automated Analysis Pipelines**: Plant pathologists often deal with diverse datasets, such as DNA/RNA sequences, microscopy images, and phenotypic data. Snakemake enables researchers to create automated pipelines that handle data preprocessing, quality control, analysis, and visualization. This automation reduces manual errors and ensures consistent analysis across different samples.
2. **Bioinformatics Workflows**: Snakemake is particularly useful in plant pathology for managing bioinformatics workflows. It can integrate various tools and software packages for tasks like sequence alignment, variant calling, and phylogenetic analysis. Researchers define rules that describe dependencies and data transformations, allowing complex analyses to be executed seamlessly.
3. **Reproducibility and Traceability**: Snakemake ensures reproducibility by capturing all dependencies and steps in a workflow. Researchers can easily reproduce their analyses by rerunning the same Snakemake script. This is crucial in plant pathology, where accurate and reproducible results are essential for understanding disease mechanisms and developing mitigation strategies.
4. **Iterative Studies**: Plant pathologists often conduct iterative studies to investigate disease progression or response to treatments. Snakemake simplifies these studies by automating repetitive tasks and adjusting the workflow as new data or hypotheses emerge.
5. **Data Integration and Visualization**: Snakemake can incorporate data integration and visualization steps in the workflow. For instance, it can merge multiple types of data (genomic, transcriptomic, and phenotypic) to provide a comprehensive view of plant-pathogen interactions.
6. **Customized Analysis**: Snakemake allows researchers to customize their analysis pipelines based on the specific needs of their plant pathology studies. This flexibility ensures that the workflow is tailored to address research questions effectively.
7. **Parallel Processing**: Large-scale plant pathology studies often involve analyzing extensive datasets. Snakemake's parallel processing capabilities enable researchers to distribute tasks across multiple processors or compute nodes, significantly reducing analysis time.
8. **Collaboration and Sharing**: Snakemake workflows can be easily shared with collaborators, making it simpler to collaborate on complex analyses. This promotes knowledge sharing and accelerates research progress.
In summary, Snakemake plays a vital role in plant pathology by automating and streamlining analysis pipelines, enhancing reproducibility, and facilitating complex bioinformatics workflows. Its flexibility, parallel processing capabilities, and user-friendly syntax make it a valuable tool for researchers studying plant-pathogen interactions, disease mechanisms, and mitigation strategies.
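The points above can be made concrete with a minimal, hypothetical Snakefile for a toy plant-pathology sequencing pipeline. All file paths, sample names, and the fastp/bwa/samtools commands are placeholders, not part of the original post; they only illustrate how rules declare inputs, outputs, and dependencies:

```
# Hypothetical Snakefile: quality control, then alignment, for a set of samples
SAMPLES = ["leaf1", "leaf2"]

rule all:
    input:
        expand("aligned/{sample}.bam", sample=SAMPLES)

rule quality_control:
    input:
        "raw/{sample}.fastq"
    output:
        "trimmed/{sample}.fastq"
    shell:
        "fastp -i {input} -o {output}"

rule align:
    input:
        "trimmed/{sample}.fastq"
    output:
        "aligned/{sample}.bam"
    shell:
        "bwa mem ref.fa {input} | samtools sort -o {output}"
```

Snakemake infers the dependency graph from the filenames, so asking for the targets in `rule all` automatically runs QC before alignment for each sample, in parallel where possible.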
Relevant answer
Answer
This is not a question, but some sort of advertising/publicity plug.
Delete/Ignore
  • asked a question related to Workflow
Question
1 answer
What is innovation process workflow approach?
Relevant answer
Answer
An innovation process workflow is a structured approach to exploring or introducing a new method, set of action steps, or work package for carrying out an activity in a compressed way, helping an organization remain competitive and satisfy actual customer demands.
  • asked a question related to Workflow
Question
1 answer
I see that there are many screw-top tubes that are prefilled with glass beads, but these are usually used with bench-top homogenizers. I'm curious if it would work to use these tubes with a vortex instead of having to buy a new instrument. Has anyone tried this when homogenizing animal-tissue samples? If so, was the rest of your protocol successful?
FYI: We are interested in using this in a protocol for RNA extraction because our current workflow hasn't been giving us the best results.
Relevant answer
Answer
There is actually some configuration designed specifically for this. See the general guidelines for beads selection, and at the bottom there is a commercial Bead Tube Holder on a Vortex-Genie 2
  • asked a question related to Workflow
Question
4 answers
Hello, I'm trying to find pre-labelled 90 mm Petri dishes to be included in an automation workflow for a new biotech lab. Does anyone know a brand? I can't find one! THX
Nidia
Relevant answer
Answer
Binyamin Kerman, very kind of you to answer, thanks. Exploring all your suggestions!
N
  • asked a question related to Workflow
Question
1 answer
We are looking for software to manage the wet lab workflow and DNA storage, through the use of QR-codes (or similar) and consumables compatible with QR-codes and software.
Relevant answer
Answer
I created a platform with a series of applications for laboratories. I don't know if it covers everything you need; if you're interested in looking at it, it's at www.locatorapps.com. I am also developing an application for laboratory quality management, but I have not finished it yet.
  • asked a question related to Workflow
Question
1 answer
Based on the latest capabilities and API of ChatGPT, how can we realize an image-to-text workflow for deriving a rational analysis of common scientific graphics?
  • asked a question related to Workflow
Question
5 answers
We frequently have mixed results for a qPCR assay. We'd like to determine whether this is non-specific amplification or target amplification when the target is low-frequency. The DNA TapeStation shows multiple bands that are unique to different samples. The bands are too close together to reliably isolate. I'd like to sequence them all using next-gen. I'm trying to figure out if this could work using the AmpliSeq workflow for Ion Torrent, which is something we already do regularly. The distributor will not advise, so I'm asking here. Thanks.
Relevant answer
Answer
In this case, I would say T-RFLP could be a better or simpler choice rather than sequencing.
  • asked a question related to Workflow
Question
1 answer
There are three MATLAB scripts provided with the old CAFFA3D package. The code was developed by Gabriel Usera in 2004 during his PhD. There is an example provided with the code for cavity flow. It is quite easy to see the grid with the provided code, but the provided MATLAB scripts do not work properly for viewing the final results such as the U, V and P profiles. When running the code in MATLAB, it shows some errors, but I cannot understand what to do. Please help me if you have worked with this code before. Here is the code:
>> filnam='cavmp1.10';
>> caffa3dMBs;
>> figure; hold on; grid on;
>> plot(GR(1).XC(42,:,42)*2-1,FL(1).VV(42,:,42),'b','LineWidth',2)
>> plot(GR(2).XC(42,:,42)*2-1,FL(2).VV(42,:,42),'b','LineWidth',2)
>> plot(FL(2).UU(:,3,42),GR(2).YC(:,3,42)*2-1,'b','LineWidth',2)
>> axis square; axis([-1,1,-1,1]);
Also I am attaching the caffa3dMBs code and the errors are mainly from GR and FL.
Now I am working on flow around a NACA 0012 profile, to visualize the pressure coefficients with the same software. I have run the code successfully and have also seen the grid with the provided MATLAB scripts. But I cannot see the results because the provided MATLAB scripts do not work properly. Please do help.
Regards,
Waliur Rahman
Relevant answer
Answer
Hi, hope you have solved your problem. I have a question: the old CAFFA3D package provides a tutorial to generate a mesh for a cavity flow; how did you generate the mesh for your NACA 0012 profile? Thanks
  • asked a question related to Workflow
Question
2 answers
The above question is part of the workflow for the inversion process using 3D seismic wave data.
Relevant answer
Answer
The wavelet transform is an attempt to see the order in the rock layers
  • asked a question related to Workflow
Question
3 answers
Among hundreds of Linux distributions, which are the most suitable for a biological science researcher whose workflow is largely based on genomic data science and its analytics (bioinformatics), scientific manuscript preparation, illustration, etc.?
Relevant answer
Answer
Hi,
I can suggest using Ubuntu as a Debian-based Linux OS for bioinformatics workflows, since it is quite popular (for a Linux distribution) and tons of packages can be installed. It also has a GUI. 22.04 LTS is probably not the best option since it is the most recent release; it may be better to use 20.04 or 18.04 LTS.
But as already answered, data analysis is not "distribution"-specific, so there is no "best distribution for bioinformatics", only the one that is best for you.
  • asked a question related to Workflow
Question
4 answers
Often I need to give some feedback to the students on their work or assignments which come to me as PDFs. Basically, I comment on some parts of the PDF, add notes, and highlight or strike words or sentences.
What are your recommendations in terms of software and workflow?
Relevant answer
Hello Alberto!
I have purchased Acrobat Pro, and I use it to edit and correct manuscripts and other things. I think it is very useful, so the investment is not expensive.
Regards from Mexico!
  • asked a question related to Workflow
Question
2 answers
I have a mutant bacterial strain and want to confirm its phenotype by TEM. A copper mesh grid with a carbon membrane was used; here is my workflow:
1. 2.5 µl diluted bacterial suspension, 2 min,
2. 5 µl UA (uranyl acetate), 10 s,
3. 5 µl UA (uranyl acetate), 10 s,
4. 5 µl UA (uranyl acetate), 1 min,
5. Dry 10 min.
Unfortunately, I can't see any bacteria with negative staining. How can I improve my experiment? Thanks in advance.
Relevant answer
Answer
If you want to see the bacteria under TEM, you can use the negative staining method. Put a drop of bacterial suspension onto the Cu grid and let it settle for about 1 min, then remove the excess solution with a filter paper and add the uranyl acetate stain; after 1 min, remove the excess stain with a filter paper. Next, dry the grid in an incubator for some time, and it will be ready to view under TEM.
  • asked a question related to Workflow
Question
4 answers
Dear scientific community,
I would be very interested to hear your input regarding the scaling-up of LCA studies to a portfolio level. I know there is a plethora of product LCAs and plenty of them consider several individual products or product variants in parallel. However, I have not found an awful lot of studies that extend to several hundred, let alone thousands of individual products within the scope of one study (as opposed to equally as many individual case studies).
Surely, more people have approached this apparent research gap. So for anyone that has been active in this area: I would greatly appreciate you sharing what experience you have made or you pointing me at any related publications in the field.
Many thanks and best regards
Tobias
P.S: If you are interested what my colleagues and I have done in this field, feel free to check this framework article and the case study we presented at LCM 2021 conference:
Relevant answer
Answer
Tobias Manuel Prenzel Cradle-to-gate is an evaluation of a portion of a product's life cycle, from resource extraction (cradle) to the factory gate (i.e., before the product is transported to the consumer). Cradle-to-gate evaluations are occasionally used as the foundation for business-to-business environmental product declarations (EPDs).
  • asked a question related to Workflow
Question
4 answers
My lab wants to try to do as much of our pre-processing, processing, and analysis in R as possible, for ease of workflow and replicability. We use a lot of psychophysiological measures and have historically used MATLAB for our workflow with this type of data. We want to know if anyone has been successful in using R for these types of tasks.
Any good R packages?
Note:
I am looking in particular for packages for the preprocessing, processing, and analysis of *physiological* signals.
Relevant answer
Answer
Begin by consulting this very useful book by Jared Lander, R for Everyone, to get started; it is available from the Z-Library and contains much useful research-grade code. To find useful packages in R, Google what you want to do alongside "R package". Example attached. Best wishes, David Booth
  • asked a question related to Workflow
Question
2 answers
Suggestions needed for a workflow to compare and analyse the primary data obtained from the construction phase using VR technology.
Relevant answer
Answer
Gopika Balasubramanian, can you please share more insights into the data format and what you are trying to accomplish?
  • asked a question related to Workflow
Question
1 answer
Dear all,
I have the following problem when I am working with an Abaqus python script:
- Every time I update a module that is loaded as part of my main script, after compiling the corresponding module (and generating the associated *.pyc file) in order to test it, I have to close and reopen the Abaqus GUI; otherwise, the new *.pyc file version is not loaded into the system.
I am asking this because, during the debugging phase of my code, every time a change is made, closing and reopening Abaqus interrupts the workflow (as it takes time to start the Abaqus GUI).
Thanks in advance.
Relevant answer
Answer
I have the same issue and I found a solution using the built-in function reload() (Python 2.7 only). In the main file I write:
import myfunctions as myfunc
reload(myfunc)
where myfunctions is the Python file myfunctions.py, in the same folder as the main file. This way the file is reloaded on each run and a new compiled myfunctions.pyc is created.
However, reloading each time might be time-consuming, so it is better to disable reloading when you are done changing myfunctions.py.
If you already found another better solution please share it.
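For what it's worth, if your environment runs Python 3, the built-in reload() no longer exists and importlib.reload() is used instead. Here is a self-contained sketch of the same trick, using a throwaway module written to a temp directory as a stand-in for myfunctions.py:

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway module to stand in for myfunctions.py
tmpdir = tempfile.mkdtemp()
mod_path = pathlib.Path(tmpdir) / "myfunctions.py"
mod_path.write_text("VERSION = 1\n")
sys.path.insert(0, tmpdir)

import myfunctions as myfunc
print(myfunc.VERSION)  # 1

# Edit the module on disk, then reload it to pick up the change
# without restarting the interpreter (or the Abaqus GUI)
mod_path.write_text("VERSION = 2  # changed\n")
importlib.reload(myfunc)
print(myfunc.VERSION)  # 2
```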
  • asked a question related to Workflow
Question
1 answer
We have a hybridization-based, TruSeq-compatible targeted enrichment protocol from Twist Biosciences that we have been using in our lab with great success.
We would like to use the Illumina® DNA Prep, (S) Tagmentation #20025520 library prep kit with IDT® for Illumina® DNA/RNA UD Indexes Set A, Tagmentation (96 Indexes, 96 Samples) #20027213 in our current targeted enrichment workflow with as few changes as possible.
A problem we have is that the Twist Universal Blockers #100767 do not work well for these libraries, as they are not TruSeq-based.
Neither Illumina nor IDT sell their blocking oligos for Illumina® DNA Prep, (S) Tagmentation as a stand-alone product.
Has anyone found an alternative blocking solution that works well for these libraries? Do you have any other suggestions on how to improve the on-target rate in the absence of an optimal blocking solution?
Relevant answer
Answer
For any of you following this question: IDT now offers Nextera-compatible blocking oligos as a commercial product. Problem solved!
  • asked a question related to Workflow
Question
1 answer
Are there any recommendations for tools or workflows that would allow me to do this? Thanks!
Relevant answer
Answer
Use NCBI BLAST and choose human as the reference genome.
  • asked a question related to Workflow
Question
1 answer
I have one PCR workstation.
I performed my RNA extraction/cDNA synthesis on a separate bench (outside hood).
I now have very concentrated cDNA.
In previous labs I had 3 work stations.
1: Prepare primer/probe working stock
2: Prepare serial dilution of concentrated template
3: Preparing master mix and plating serial dilutions
I now have one workstation.
Would you perform your cDNA dilution outside of the workstation?
Or would you do it inside and just turn on UV light/spray DNAzap before plating?
Any advice on PCR workflow would be useful.
Thank you!
  • asked a question related to Workflow
Question
7 answers
Hello everyone,
I need help understanding whether my two groups are paired or not.
I am collecting data from one group of cells. We have developed two different workflows (let's call them A, and B) for data analysis. We want to test whether these two workflows give the 'same' results for the same set of cells.
At the end, I obtain:
  1. Group 1 (contains the variable obtained with workflow A)
  2. Group 2 (contains the variable obtained with workflow B).
I have been considering the two groups as independent because the two workflows do not interfere with each other. However, the fact that both workflows operate on the same cells is throwing me off and I am wondering if these groups are actually paired.
Could you advise me on this and on what test is best to use?
The hypothesis for the test would be:
  • the distribution of the variable is the same for workflows A and B; and/or
  • the median of the distribution from workflow A equals that from workflow B
Thank you.
GN
Relevant answer
Answer
If you have two samples, from two measurement methods (workflows), but from the same subject (single group of cells), then you can use a paired samples test.
The benefit of this is that it has more statistical power than an unpaired test.
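To make the paired analysis concrete, here is a stdlib-only sketch of a paired sign test on invented data (in practice a Wilcoxon signed-rank test, e.g. scipy.stats.wilcoxon, would be the usual choice for the median hypothesis; the numbers below are purely illustrative):

```python
from math import comb

# Same cells measured twice: once with workflow A, once with workflow B
a = [1.1, 2.3, 1.8, 2.9, 3.1, 2.2, 1.7, 2.5]
b = [1.0, 2.5, 1.9, 3.0, 3.3, 2.1, 1.8, 2.7]

# Paired sign test: look only at the sign of each within-cell difference
diffs = [x - y for x, y in zip(a, b) if x != y]
n = len(diffs)
k = sum(d > 0 for d in diffs)  # number of positive differences

# Two-sided binomial p-value under H0: the median difference is zero
tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
p = min(1.0, 2 * tail)
print(round(p, 3))  # 0.289
```

The key point is that the test is computed on within-cell differences, which is exactly what treating the two workflows as paired means.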
  • asked a question related to Workflow
Question
3 answers
Apart from the available sources from http://www.open-systems-pharmacology.org/ on the PBPK model-building workflow, how can you optimize parameters in a case where the simulated data differ from the observed data?
Relevant answer
Answer
You can refer to the "Parameter Identification" section of the PK-Sim documentation.
There are three optimization algorithms available in PK-Sim: Monte-Carlo, Levenberg-Marquardt (L-M) and Nelder-Mead. While for simple optimization problems (e.g. 1-3 identification parameters that are well informed by sufficient, non-contradicting observed data) each of the algorithms works stably and fast, there can be big differences in applicability, robustness and performance in more complex situations. In such cases, some optimization experience is often required. The descriptions and hints given here can only provide basic support; for more detailed information, follow the references.
Some Hints:
  • You should start with the Levenberg-Marquardt algorithm and the option Standard (= single optimization run), which are the default settings.
  • If the result of a parameter identification is not satisfying, choose one of the following options. In case you do not have enough time and/or hardware resources available, switch to the Monte-Carlo algorithm with the option Standard; afterwards, you may perform an additional Levenberg-Marquardt run using the optimal parameter values produced by Monte-Carlo as the new start values. Otherwise (enough time and hardware resources), perform the parameter identification using the Levenberg-Marquardt algorithm with the option Multiple optimization.
  • asked a question related to Workflow
Question
2 answers
Within the 3DForEcoTech COST Action, we want to create a workflow database of all solutions for processing detailed point clouds of forest ecosystems. Currently, we are collecting all solutions out there.
So if you are a developer, tester or user do not hesitate to submit the solution/algorithm here: https://forms.gle/xmeKtW3fJJMaa7DXA
You can follow the project here: https://twitter.com/3DForEcoTech
Relevant answer
Answer
Dear Martin Mokros,
Got this Project! I‘ll share this with corresponding workmates
Thanks for sharing this info. Kind Regards!
  • asked a question related to Workflow
Question
3 answers
I am working on a project in which we seek the best way to optimize the energy and cost of workflows in distributed data centers, but I have a problem with modeling the cost. It would be highly appreciated if you could help me or share my question with computer engineers.
Relevant answer
Answer
To be more realistic, you can look at the pricing models of cloud providers: AWS, Google Cloud, etc.
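One deliberately simplified way to start modeling the cost is to charge each task for its runtime at the hosting data center's instance price plus the energy it draws at that center's electricity price. Everything below (field names, prices, speeds) is invented for illustration, not taken from any provider:

```python
# Hypothetical cost model for workflow tasks placed across data centers:
# each task pays for instance time plus the energy consumed while running.
def workflow_cost(tasks, placement, centers):
    total = 0.0
    for task in tasks:
        dc = centers[placement[task["id"]]]
        runtime_h = task["work"] / dc["speed"]      # hours to finish the task
        energy_kwh = dc["power_kw"] * runtime_h     # energy drawn while running
        total += runtime_h * dc["price_per_hour"] + energy_kwh * dc["price_per_kwh"]
    return total

centers = {
    "dc1": {"speed": 2.0, "power_kw": 0.3, "price_per_hour": 0.10, "price_per_kwh": 0.12},
    "dc2": {"speed": 1.0, "power_kw": 0.2, "price_per_hour": 0.05, "price_per_kwh": 0.20},
}
tasks = [{"id": "t1", "work": 4.0}, {"id": "t2", "work": 2.0}]
print(round(workflow_cost(tasks, {"t1": "dc1", "t2": "dc2"}, centers), 4))  # 0.452
```

A model in this form gives an objective function that an optimizer can minimize over placements, and it can be extended with data-transfer costs, deadlines, or time-varying electricity prices.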
  • asked a question related to Workflow
Question
9 answers
Hey guys, I was just wondering whether there are any public clusters for evaluation of NGS data like ? Thanks in advance for your comments!
Relevant answer
Answer
I have the best experience with the Galaxy cluster of bioinformatics tools https://usegalaxy.eu/ (it has cutting-edge workflows for nearly all types of sequencing: DNA, RNA, long reads/short reads, metagenomics, epigenomics, ...).
Hope this helps
Martin
  • asked a question related to Workflow
Question
2 answers
Theoretically, Stage III includes IIIA, IIIB and IIIC. So I wonder why there is a fourth group (Stage III) in the Subtype profile workflow in the Gent2 database.
My other question: does anyone know a database where mRNA and protein data from blood samples are available for both cancer patients and healthy controls?
Thanks,
Anita Kurilla
Relevant answer
Answer
Probably Stage III stands for samples with missing information regarding the exact T or N stage, so further classification into the A, B or C stages cannot be done...
  • asked a question related to Workflow
Question
3 answers
I want to analyze the responses and conformational states of proteins at various temperatures in silico. Can I do this with molecular dynamics simulation programs like GROMACS? Is there a particular workflow, program, or methodology?
Relevant answer
Answer
It is possible, but I'd suggest looking into some kind of enhanced sampling technique, such as accelerated MD, as conformational changes require long timescales and hence heavy computational resources to calculate.
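For the temperature series itself, a hypothetical GROMACS .mdp fragment for the temperature-coupling settings might look like this. The parameter names are standard .mdp options, but all values are illustrative and should be adapted to your system; you would run one copy of the simulation per target temperature and compare the resulting ensembles:

```
; temperature control - run one simulation per target temperature
integrator = md
dt         = 0.002         ; 2 fs time step
nsteps     = 50000000      ; 100 ns of sampling
tcoupl     = V-rescale     ; velocity-rescaling thermostat
tc-grps    = Protein Non-Protein
tau-t      = 0.1 0.1       ; coupling time constants (ps)
ref-t      = 310 310       ; target temperature in K; repeat at e.g. 280, 300, 320
```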
  • asked a question related to Workflow
Question
2 answers
Hello, I wanted to know what possible project ideas I could work on using proteome datasets from ProteomeXchange or other proteome databases, which I can carry out while sitting at home. A very basic workflow would also be appreciated. Thank you!
Relevant answer
Answer
Thank you so much for guidance!! Mohammed Gagaoua
  • asked a question related to Workflow
Question
1 answer
Hi,
I'm using Proteome Discoverer 2.5 to analyze my peptide spectra from a Q Exactive. But after the processing completes, I get two error warnings in the consensus workflow. In the Peptide Validator node, it says "no decoy search was performed for the following search nodes:-SequestHT(A2) in workflow Workflow.FDR+fixed threshold validation is used instead". Meanwhile, in the Protein FDR Validator node, the error warning says "Cannot validate proteins for group SequestHT, because at least one search node has no decoy PSMs."
Do you have any idea what is the problem?
Thank you :)
Relevant answer
Answer
Hey, not an MS or PD expert here, but I managed to circumvent this error by setting a different protein FDR in Percolator. The defaults here were 0.01 and 0.05, respectively. You can increase each of these two values until your file finally gets processed. Again, I'm not an expert, but I noticed that by setting the strict target FDR higher, you get more decoys; generally 0.03 corresponds to 3% decoys, meaning that if you only identified 50 high-confidence peptides, you might not find a decoy using 0.01 but will need roughly 0.03 for this parameter.
  • asked a question related to Workflow
Question
6 answers
Including these steps: 1) raw-data format transformation for five companies; 2) updating positions for all SNPs to GRCh37; 3) quality control within companies; 4) pre-phasing (SHAPEIT2) and imputation (IMPUTE2) for all SNPs of each company; 5) performing GWAS using two logistic models for 27 phenotypes; 6) statistical and downstream bioinformatic analysis; 7) estimation of genetic parameters (rg and hg); 8) PRS analysis. However, my dataset consists of only a little more than 1,000 people. With no background knowledge, how long would this take as a bioinformatics master's student?
Relevant answer
Answer
More than 1000? Please give the exact number of samples and the size of the data.
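To make step 5 in the list above concrete, the core of a single-SNP association test can be sketched in plain Python (real pipelines use PLINK or similar; the genotype counts below are invented for illustration):

```python
import math

# Allelic chi-square association test for one SNP, cases vs. controls.
# Genotypes are coded 0/1/2 = number of copies of the alternate allele.

def allelic_chi2(case_gt, ctrl_gt):
    """Returns (chi2, p) for the 2x2 allele-count table (1 df)."""
    a1_case = sum(case_gt); a0_case = 2 * len(case_gt) - a1_case
    a1_ctrl = sum(ctrl_gt); a0_ctrl = 2 * len(ctrl_gt) - a1_ctrl
    table = [[a1_case, a0_case], [a1_ctrl, a0_ctrl]]
    total = a1_case + a0_case + a1_ctrl + a0_ctrl
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            row = sum(table[i]); col = table[0][j] + table[1][j]
            exp = row * col / total
            chi2 += (table[i][j] - exp) ** 2 / exp
    p = math.erfc(math.sqrt(chi2 / 2))  # chi-square survival function, 1 df
    return chi2, p

cases = [2, 1, 2, 1, 2, 2, 1, 2]  # alternate allele enriched in cases
ctrls = [0, 1, 0, 0, 1, 0, 0, 1]
chi2, p = allelic_chi2(cases, ctrls)
print(round(chi2, 2), round(p, 4))
```

The GWAS step then amounts to looping this (or a logistic model with covariates) over every imputed SNP and correcting for multiple testing.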
  • asked a question related to Workflow
Question
7 answers
For example, AI can automate the initial interpretation of images and shift the clinical workflow of radiographic detection and management decisions. But what are the clinical challenges for its application?
#AI #cancer #clinical #oncology #biomedical
Relevant answer
Answer
The first is the sample-size issue: a predictive model trained on limited imaging data from a particular hospital or region faces bias from various factors when dealing with a wider range of diagnoses, and it may not generalize when making diagnostic decisions for images from other regions.
The second is the inability to balance the patient's treatment intentions, the patient's financial situation, and different regional customs when making treatment decisions.
  • asked a question related to Workflow
Question
3 answers
I have to use mass spectrometry proteome data (analyzed data from MaxQuant) to carry out post-translational modification analysis using Proteome Discoverer software. I'm searching for a proper workflow for this purpose. Any suggestions, links to useful sources, etc. would be very welcome.
Relevant answer
Answer
Hi Joshua,
when you already have analyzed data from MaxQuant, a search with Proteome Discoverer is (to my knowledge) not possible, as this programme uses the machine raw files for analysis. However, MaxQuant has an option to include PTMs and also gives you information on their positions. You can process your MaxQuant files with Perseus to analyse your PTMs. If you want to use PD for PTM analysis, you have to use the raw files and include your PTMs in the search engine tab.
  • asked a question related to Workflow
Question
4 answers
Hello everyone,
I need simple instructions for modeling End-of-Life scenarios in GaBi software. I have reviewed every document on the internet on this topic, but all of them explain the underlying methodology, not the modeling procedure. So rather than referring me to documents, please tell me if you know a simple workflow in the software for end-of-life modeling. Thank you!
Relevant answer
Answer
Dear Shahin, many thanks for posting this interesting technical question on RG. As an inorganic chemist I'm absolutely not an expert in this field, and personally I never used this software. I assume that you already checked all handbooks and tutorials which are available on the gabi-software.com homepage. In addition to the potentially useful links suggested by Mohamed-Mourad Lafifi, please also have a look at the following freely available handbook:
GaBi Manual
This manual is freely available as public full text (see attached pdf file).
It might also be worth having a look at the answers given to the following closely related question which has been asked earlier on RG:
Anybody familiar with End of Life modelling(i.e. recycling) on GaBi?
(6 answers)
I hope this helps. Good luck with your work and best wishes!
  • asked a question related to Workflow
Question
4 answers
Dear community, can we build a customer relationship management (CRM) system using Python for workflow automation? If it's possible, any sources would be helpful. Thank you.
Best regards
Relevant answer
Answer
This webinar might be useful, have a look
Kind Regards
Qamar Ul Islam
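It is certainly possible; as a minimal illustration, a CRM record store with one workflow automation rule can be sketched in plain Python (all class and field names below are invented; a real project would typically build on a web framework plus a task scheduler):

```python
from dataclasses import dataclass, field
from datetime import date

# Toy CRM: contacts plus one automated workflow rule (follow-up reminders).

@dataclass
class Contact:
    name: str
    last_contacted: date
    stage: str = "lead"  # e.g. lead -> qualified -> customer

@dataclass
class CRM:
    contacts: list = field(default_factory=list)

    def follow_ups_due(self, today, max_days=14):
        """Workflow rule: flag contacts not reached within max_days."""
        return [c for c in self.contacts
                if (today - c.last_contacted).days > max_days]

crm = CRM([Contact("Alice", date(2024, 1, 1)),
           Contact("Bob", date(2024, 1, 20))])
due = crm.follow_ups_due(today=date(2024, 1, 25))
print([c.name for c in due])  # Alice was last contacted 24 days ago
```

Running such a rule on a schedule (cron, Celery, or similar) and sending notifications is what turns the data model into workflow automation.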
  • asked a question related to Workflow
Question
2 answers
Are there any alternatives to SequenceMatrix that can be deployed through Linux in a Snakemake workflow?
Relevant answer
Answer
It cannot be used in a continuous Snakemake workflow, for example.
  • asked a question related to Workflow
Question
2 answers
I can only find such processes for specific applications. A more generic step-by-step description of the workflow for building digital twins would be more helpful. Can someone recommend relevant literature on this topic? Thanks in advance!
Relevant answer
Answer
Hello, I suggest you take a look at studies on optimization science.
  • asked a question related to Workflow
Question
4 answers
I am very interested in VS and MD. I am now looking for a simple workflow protocol for virtual screening using freely available software. Thanks in advance.
G. Varatharaju
Relevant answer
Answer
You can also use the PASS software to obtain screening scores for multiple bioactivities of a single compound. Choose what you need from the enormous list of predicted activities!
  • asked a question related to Workflow
Question
6 answers
Dear friends!
For the project, we have 8 groups of rats (n=6), and my task is to perform PCR: expression of 8 genes, 3 microRNAs, and 4 lncRNAs. I'm a bit stressed about the number of samples and expression targets to work with simultaneously...
Some of the primers are not new and have been frozen; I have no documents for them, only the name and a presumed concentration (100 mM).
Some will be purchased with all the needed documentation.
My question is: how should I organize the PCR experiment workflow, which includes trying different concentrations of primers/probes/cDNA and troubleshooting?
I assume it is not reasonable (and too expensive) to run all 48 cDNA samples each time. How then should I proceed?
One more question: how should I number the samples? Now they are simply 1-48, but maybe it is worth differentiating control/experimental/experimental+treatment samples via a specific numbering scheme?
Please, share your algorithms and considerations :) Will answer any needed details gladly)
Thanks in advance!!!
Have a great day!
P.S. Soon we'll have 3 more groups (n=6) for the same 8 genes, 3 microRNAs, and 4 lncRNAs.
Relevant answer
Answer
(1) Organize your samples, including any positive and negative controls. You should at minimum have a negative "no template" control (just your primers and master mix) so you can figure out which genes aren't expressed in any sample at all and just show primer dimer signal.
(2) Organize your plate layouts on Excel. Color-code 8x12 cells (assuming you are using a 96 well plate) and label A-H on one side, 1-12 on the other side. Know exactly where your sample and gene combinations will be.
(3) Group & separate your samples so that the downstream analysis will be correct. If you are doing ddCt analysis to compare gene expression, then pro tip, you just need to have ALL your samples *OR* ALL your genes (or gene groups in your case) on one plate. Obviously it's too many samples to have all variables on one plate, but if you pick one or the other, the ddCt will work out and cancel out plate-to-plate batch effects better that way. I recommend treating the genes, long noncoding, and miRNA as 3 separate experiments. The long noncoding are likely to have weaker expression so you may need more cDNA. The miRNAs are the length of a primer so you need to increase the length during the cDNA step so you are able to qPCR them. If you put all your samples on one plate, you can space out the genes on different plates.
(4) When pipetting everything, open a new tip box and take advantage of the 8x12 layout to match your plate. Keep your place that way. Don't rely on memory.
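Step (2) above can also be scripted rather than colored by hand in Excel; here is a hedged sketch that assigns sample x gene x replicate combinations to a 96-well grid (sample and gene names are placeholders):

```python
import string

# Generate a 96-well plate map: wells A1..H12 filled row by row with
# (sample, gene) combinations, each in the given number of replicates.

def plate_layout(samples, genes, replicates=2):
    rows, cols = string.ascii_uppercase[:8], range(1, 13)
    wells = [f"{r}{c}" for r in rows for c in cols]
    combos = [(s, g) for g in genes for s in samples for _ in range(replicates)]
    if len(combos) > len(wells):
        raise ValueError("does not fit on one 96-well plate")
    return dict(zip(wells, combos))

# 8 samples x 3 targets x 2 replicates = 48 wells (half a plate).
layout = plate_layout(samples=[f"S{i}" for i in range(1, 9)],
                      genes=["GeneA", "GeneB", "NTC"])
print(layout["A1"], layout["D12"])  # first and last filled wells
```

Printing such a map (or exporting it to CSV) gives a pipetting guide that matches the 8x12 tip-box trick in step (4).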
  • asked a question related to Workflow
Question
5 answers
I have done LC-MS of a plant extract and analyzed the data to generate a list of metabolites that now need to be categorized (anti-diabetic, anti-cancer, anti-microbial, and so on). Can you please suggest a workflow other than manual literature search, as the dataset is too large to be processed manually?
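One common automation strategy is to map metabolite names against a bioactivity annotation table built from a database export (e.g. a ChEBI ontology dump or published reviews) instead of searching the literature by hand. A minimal sketch, with an invented lookup table standing in for the real database query:

```python
# Hypothetical activity table; in a real workflow this would be generated
# programmatically from a curated database rather than typed in.
ACTIVITY_TABLE = {
    "quercetin": ["antidiabetic", "anticancer"],
    "berberine": ["antidiabetic", "antimicrobial"],
    "luteolin":  ["anticancer"],
}

def categorize(metabolites):
    """Group metabolites by annotated activity; unknowns go to 'unmapped'."""
    out = {}
    for m in metabolites:
        for cat in ACTIVITY_TABLE.get(m.lower(), ["unmapped"]):
            out.setdefault(cat, []).append(m)
    return out

result = categorize(["Quercetin", "Berberine", "UnknownX"])
print(result)
```

The "unmapped" bucket then tells you which hits still need manual curation, which is usually a much shorter list than the full metabolite table.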
Relevant answer
  • asked a question related to Workflow
Question
2 answers
Where can I find common benchmark datasets for task/workflow scheduling at edge cloud computing that have information about the processing and transmission latency or energy consumption of all the tasks to be executed on each available processing devices (end devices, edge, cloud)? Thank You.
Relevant answer
Answer
Thanks for the reply. So I will probably need a simulator to generate the dataset; there seem to be no common benchmark datasets for this problem yet. Anyway, thanks, I appreciate it!
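In the absence of a common benchmark, a synthetic dataset with per-device latency and energy fields can be generated in a few lines (a sketch; the field names, device speeds, and cost constants below are arbitrary choices, not from any published simulator):

```python
import random

# Generate synthetic task records: for each task, one row per candidate
# device with processing latency, transmission latency, and energy cost.

random.seed(42)
DEVICES = ["end_device", "edge", "cloud"]
SPEED = {"end_device": 1, "edge": 5, "cloud": 20}   # relative CPU speed
BW_MBPS = {"end_device": 0, "edge": 10, "cloud": 2} # uplink (0 = local)

def make_tasks(n):
    rows = []
    for i in range(n):
        size_mb = random.uniform(0.1, 50)  # input data size per task
        for dev in DEVICES:
            proc_ms = size_mb / SPEED[dev] * 100
            tx_ms = 0 if BW_MBPS[dev] == 0 else size_mb / BW_MBPS[dev] * 1000
            energy_mj = proc_ms * 0.5 + tx_ms * 0.2
            rows.append({"task": i, "device": dev,
                         "proc_ms": round(proc_ms, 2),
                         "tx_ms": round(tx_ms, 2),
                         "energy_mj": round(energy_mj, 2)})
    return rows

tasks = make_tasks(100)
print(len(tasks), tasks[0])
```

Dumping these rows to CSV gives a reproducible benchmark for comparing scheduling heuristics, with the seed controlling repeatability.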
  • asked a question related to Workflow
Question
7 answers
Hello! In my last question, I asked how to analyze a dataset with an ordinal dependent variable and multiple categorical independent variables. Here's the question if you'd like to check it out: https://www.researchgate.net/post/How_can_I_analyze_multiple_binary_independent_variables_against_an_ordinal_dependent_variable
My dataset is questionnaire data with a field about skill level in a certain sport; this skill level is the target variable in the study. The questionnaire also asked which of four sports the respondent's answers concern. The aim is to compare whether the correlations between skill level and answers are similar in each sport. So I would like to find which variables predict skill level best in each sport, and how important the variables are in the prediction.
It was suggested that I use ordinal logistic regression, and test for proportional odds assumption. If the proportional odds assumption is not met, I should use consecutive binary logistic regressions to construct an ordinal model myself. It was also suggested that I could use a Boosted regression tree. I would like to use these both as a cross validating method, as there seem to be uncertainties in ordinal logistic regression.
I understand that the workflow should be as follows:
  1. Make ordinal logistic regression
  2. Check for proportional odds assumption
  3. Regardless of whether or not the assumption is violated, create binary logistic regressions to study the details of the data, and also to check if the ordinal logistic regression model did indeed provide an accurate summary of the correlations in the data
  4. Train a boosted regression tree and find the importances for each independent variable. Use as cross validation method.
The binary logistic regressions should be run as follows: Class 1 vs Class 2-4, Class 1-2 vs Class 3-4, Class 1-3 vs Class 4. I understand this and know how to do this, but I do not know if the boosted regression tree should be done in the same manner. Should I make three different boosted regression trees and calculate the importances separately, or should I only create one tree model that I train with all four target classes at once? It seems boosted regression trees don't perform well with target variables with more than two values.
I would truly appreciate your help. Also, if you know of studies that have used a similar method, I would really appreciate if you could link them.
Best regards,
Timo Ijäs
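The consecutive binary splits described above (Class 1 vs 2-4, Class 1-2 vs 3-4, Class 1-3 vs 4) can be sketched as three separate logistic fits. This is an illustration on synthetic data, with plain gradient descent standing in for a real solver such as statsmodels or scikit-learn:

```python
import math, random

# Fit a binary logistic regression by stochastic gradient descent.
def fit_logistic(X, y, lr=0.1, steps=2000):
    w = [0.0] * (len(X[0]) + 1)  # bias + one weight per predictor
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

# Synthetic data: one predictor that increases with skill class 1..4.
random.seed(0)
X = [[c + random.gauss(0, 0.3)] for c in (1, 2, 3, 4) for _ in range(20)]
y = [c for c in (1, 2, 3, 4) for _ in range(20)]

# One binary model per split: P(y > 1), P(y > 2), P(y > 3).
models = {k: fit_logistic(X, [1 if yi > k else 0 for yi in y])
          for k in (1, 2, 3)}
probs = [predict(models[k], [2.5]) for k in (1, 2, 3)]
print([round(p, 2) for p in probs])  # cumulative probabilities decrease
```

If the fitted coefficients of the three models are roughly equal, that is informal evidence in favor of the proportional odds assumption; if they differ a lot, the separate binary models tell you where the ordinal model oversimplifies.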
Relevant answer
Answer
I'm pleased I could help, Timo Ijäs.
I did think about multicollinearity after I posted my first response. ANOVA and t-tests will not remove that problem. Multicollinearity can increase or decrease the values of coefficients and, in extreme cases, even change the sign of a coefficient. With different data sets (sport_A, sport_B) you almost certainly will have different coefficients purely because of the multicollinearity problem.
A straightforward approach would be to systematically add one variable at a time to your Ordinal Logit model, removing a variable if it causes other variables' coefficients to change much. Of course, that means you won't be able to include all of your desired predictor variables.
Another approach is to apply a Data Reduction technique on your data - usually Principal Components Analysis (PCA). PCA takes a set of correlated variables and reduces them to a smaller number of uncorrelated components, thus removing the multicollinearity problem. This comes at the cost of not having an easy grasp of which variables contribute to the solution. Rotating the components, as in Factor Analysis, can help a little, but probably not much.
Another respondent, David Eugene Booth , knows much more than I do about the technicalities of these things. Perhaps he can help.
  • asked a question related to Workflow
Question
3 answers
Should we perform radiometric calibration, multi looking etc. for the processing?
What pre-processing steps must be performed for further forming the coherency matrix?
Relevant answer
Answer
Thank you, Mounira Ouarzeddine, for the answer; that clears it up a lot.
  • asked a question related to Workflow
Question
4 answers
Hello,
What are the best tools and workflows to do DFT inside an active pocket of a given protein and ligand?
Thanks
Relevant answer
Answer
Thank you for the suggestion. What is your workflow for doing it? I have the impression mine is not so efficient.
  • asked a question related to Workflow
Question
11 answers
Home office during the Corona pandemic has changed the workflow of project management in our research group - things have become much more cumbersome.
Therefore we are looking for a project management tool.
We need limited functions - but these should be easy and intuitive to use:
  • A project structure (certain people have access to certain projects)
  • Assignment of ToDos
  • Definition of milestones
  • A gantt chart to visualize milestones
  • Compatible with Linux, Mac, and Windows
Would be great if you could share your experiences/suggestions/dos and don'ts.
Relevant answer
Answer
There is a lot of project management software on the market with these features; from my own experience I recommend MS Project, Asana, and Wrike for the above needs. It seems to me that only Wrike lacks the ability to manage milestones, while the others have that feature.
  • asked a question related to Workflow
Question
3 answers
I am assisting an endocrinology office in eastern NC with improving their office throughput. Downloading patient device data is causing appointments to run over and creating bottlenecks.
Could someone point me in a good direction for best-practice information and workflow recommendations in this area?
Relevant answer
Answer
Did you use POCT for the glucose assessment? Was it fasting or random glucose? Could you explain further, as it is not clear to me?