
Baseline - Science topic

Explore the latest questions and answers in Baseline, and find Baseline experts.
Questions related to Baseline
  • asked a question related to Baseline
Question
3 answers
Hello Mass Spec Wizzes,
I recently started working on a Q Exactive Orbitrap for quantification of small molecules.
The sensitivity for the analytes in the current PRM method is low, and the baseline level in samples is quite high (~3×10^7). Has anyone faced similar issues?
Also, how sensitive is the Q Exactive Plus compared to the QE?
Thank you
Relevant answer
Answer
Hello,
I am also facing a sensitivity issue with the PRM method. Would you please share how you fixed it?
  • asked a question related to Baseline
Question
2 answers
Is it possible to compare a 5-minute baseline HRV (SDNN or LF/HF) with a 10-minute task HRV?
Relevant answer
Answer
In my personal opinion, it is possible to compare 5-min baseline HRV against 10-min task HRV. Even durations as short as 1 min have been suggested to correlate with 5-min segments (PMID: 34211411). However, whether it is technically correct to compare 5-min vs 10-min segments will depend on whether the expected HRV changes with your task are time dependent (do you expect a difference between the first 5 and the last 5 min of the task?). Without knowing the exact task at hand, it is difficult to conclude.
If unsure, I'd suggest segregating the post-task HRV into 5-min blocks and analyzing them to see whether any differences exist (as has been done in PMID: 33300852). In any case, peer review may still request this information to justify the difference in analysis durations pre- and post-task.
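A minimal sketch of the block-wise comparison suggested above (assuming RR intervals in milliseconds, exported as numpy arrays; all names are hypothetical):
```python
import numpy as np

def sdnn_per_block(rr_ms, block_s=300):
    """Split an RR-interval series into fixed-duration blocks and
    compute SDNN (SD of normal-to-normal intervals) per block."""
    t = np.cumsum(rr_ms) / 1000.0             # elapsed time of each beat, s
    block_idx = (t // block_s).astype(int)    # which 5-min block each beat falls in
    return {b: np.std(rr_ms[block_idx == b], ddof=1)
            for b in np.unique(block_idx)}

# e.g. compare a 5-min baseline SDNN against each 5-min block of a 10-min task:
# baseline_sdnn = np.std(baseline_rr, ddof=1)
# task_blocks = sdnn_per_block(task_rr)
```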
  • asked a question related to Baseline
Question
4 answers
Hi! Can I ask a favor of everyone? I really need to know the required sample size, and any citations will be much appreciated.
Relevant answer
Answer
Do you have any citations??
  • asked a question related to Baseline
Question
8 answers
Hello, we are using a Thermo TSQ Fortis triple-quadrupole mass spec with an UltiMate 3000 HPLC system, also from Thermo.
We are having a lot of issues with the baseline: it suddenly starts rising and makes the chromatogram look like a thick black line with really high-intensity signals (around 10^6).
Sometimes it comes back down quickly, within two injections (15-30 minutes). Other times it stays up for a long time, which has us dumbfounded. We don't know what else to try. Sometimes we pause the MS acquisition and it drops easily and stays normal; other times it shoots back up again without our even starting an injection.
Any suggestions?
Thanks in advance
Relevant answer
Answer
William Letter, thank you so much for your resources.
  • asked a question related to Baseline
Question
5 answers
Hi all,
I want to conduct a correlation analysis, to check whether changes in learning outcomes correlate with the number of lessons students attend. Learner outcomes are measured at baseline and endline, and the measurement is ordinal. To be specific, students get assessed and assigned to one of 5 learning categories (for example for literacy, this would be Beginner, Letter, Word, Sentence, or Paragraph). The number of lessons students attended is measured quantitatively.
Since we are interested in the changes in learning outcomes, I initially planned to calculate a difference score between endline and baseline and correlate that with the number of lessons attended. However, having discovered that learner outcomes are measured ordinally, this does not make sense. What would be the best way to compute a correlation between changes in learner outcomes (between baseline and endline) and the number of lessons students attend?
Thank you in advance for your responses!
Best,
Sharon
Relevant answer
Answer
Administer the same test at the beginning (pre-test) and at the end (post-test), and find the difference between the pre-test and post-test scores.
  • asked a question related to Baseline
Question
2 answers
Hi everyone,
I wanted to ask for some advice on EPSC detection using NeuroMatic in Igor Pro. I'm quite new to the software and I'm struggling to detect true events; a lot of noise is still included.
A few things I struggle with are:
- It generally detects a lot of noise as events. In some recordings the noise level is a bit higher, and even when I increase the threshold, quite some noise is still picked up. Do you use a threshold of x amplitude below baseline, or a standard deviation? Or does template matching work better? What threshold do you use?
- It does not accurately place the onset of the event. Often it puts the onset too early, well before the event actually starts. I've played quite a bit with the detection parameters for onset/baseline but haven't found a good setting. Do you have any advice?
- I wonder if I should filter the data before starting event detection. Do people use a filter, and if so, what kind of filter do you use?
- I was wondering if there is a way in the software to filter events by certain criteria and then visualize them. I now copy the output to Excel, sort it, and, for example, exclude some events based on rise time, but it is then really hard to find those events back in the software to check whether what I'm excluding should actually be excluded. Does anyone know of a way to sort data into the different Sets based on a certain parameter being higher or lower than x?
I use the software for data from in vitro cultured neurons and brain slices. It is especially a struggle with cultured neurons, where our noise level is usually a bit higher.
It's a lot of questions but I hope someone can give me some advice on how to perform this analysis using IgorPro Neuromatic.
Thanks already!!
Anouk
Relevant answer
Answer
Thanks a lot for your thoughts, Apostolos. I appreciate it. I guess I'll have to do some post-processing then.
  • asked a question related to Baseline
Question
9 answers
I am performing some DSC (PerkinElmer 8500) measurements (same materials, same mass, same program). Following calibration, I encounter a different baseline every day; the baseline was flat only on the first day. Any idea what's happening? Does the instrument need to be calibrated every day?
Relevant answer
Answer
Just a few thoughts:
1. The baseline should be calibrated over a temperature range extending beyond the range of your runs.
2. While 600 °C is below the melting point of your alloys, it is possible that at such a high temperature the aluminum pans deform or react with the alloy.
3. Maybe you should try performing your baseline calibrations and runs using platinum or ceramic pans.
  • asked a question related to Baseline
Question
3 answers
I installed a Superdex 75 pg column myself. The conductance baseline was flat at the beginning of equilibration and remained flat until the sample had eluted to about 50%, but over the last 50% it jittered up and down and was unstable.
Relevant answer
Answer
Could be a bubble stuck in the conductivity meter.
  • asked a question related to Baseline
Question
2 answers
Sometimes, I have a high fluorescence baseline (see attached png file). Any suggestions?
Relevant answer
Answer
Kindly use the automatic baseline option if you have it; otherwise you can set the baseline manually, which may be helpful.
  • asked a question related to Baseline
Question
1 answer
I ran the same qPCR test (SYBR Green) on a lot of samples that didn't fit on one plate (over 20 plates all together) and want to compare all of them to the same standard curve that was included in one of those plates. I did have a few positive controls on each plate to check variability between plates.
Now I am aware that I need to set the parameters in the same way for all plates in order to compare them. The threshold will be a fixed value, but I am wondering about the baseline. Ideally, it would be fixed at a certain intensity for all plates. That is not possible in the software I'm using (Eppendorf MasterCycler ep realplex, v. 2.2), so I can either use the automatic baseline or a manual baseline with specified start and end cycles.
In a few cases, the automatic baseline gives me much better results than the manual baseline (with the manual setting the curves are not parallel, no matter which cycle range I choose, provided I stay below the first amplification; with the automatic baseline the curves are nicely parallel).
I think the automatic baseline is not a good option for comparison, but I'm still wondering what the best approach is. Even if I choose the manual baseline and specify start and end cycles, I don't think it will be exactly comparable between plates, but that's probably the best option; I'll just have to keep in mind that some results might be of slightly lower quality because of that.
Any thoughts?
Relevant answer
Answer
The software allows you to opt for analysis with or without ROX. However, it is highly recommended that you use high ROX when supported by your real-time thermocycler.
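On the manual start/end-cycle baseline discussed in the question: the same correction can also be reproduced outside the instrument software, which guarantees identical settings across all plates. A minimal sketch, assuming raw fluorescence exported as a wells × cycles array (the layout is an assumption):
```python
import numpy as np

def baseline_correct(fluorescence, start_cycle=3, end_cycle=15):
    """Subtract each well's mean signal over a fixed baseline-cycle window,
    mimicking a manual start/end-cycle baseline applied uniformly to plates.
    fluorescence: (n_wells, n_cycles) raw readings; cycles are 1-indexed."""
    window = fluorescence[:, start_cycle - 1:end_cycle]
    return fluorescence - window.mean(axis=1, keepdims=True)
```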
  • asked a question related to Baseline
Question
1 answer
I am a beginner in this field and have read in papers that a stable baseline current has to be obtained, that the MFC should start its operation at open circuit potential (OCP), etc. How do I check for OCP?
Relevant answer
Answer
OCP means that there is no change in potential when feed is injected into the MFC, so the system is stable. This can be evaluated when two consecutive (potential) peaks appear.
You can also take a look at this paper for more illustration (Fig. 2):
  • asked a question related to Baseline
Question
3 answers
We are getting results that are too similar to each other on our UV-VIS spectra for very different molecules. We need a procedure, or help with building an appropriate approach to this characterization. We are working with fullerenol, sulfo-SMCC, and an antibody. The buffer of this solution is PBS-EDTA plus some DI water. What should our baseline be? Is water fine? Should we use all PBS?
As for the reference, is a clear cuvette appropriate, or should we put PBS in the reference as well?
Relevant answer
Answer
First, blank the spectrophotometer. You do this with a solution that has ALL of the components that you would expect to find in your sample EXCEPT the molecule of interest. So, if you want to measure antibody that is in PBS/EDTA your blank has to be PBS/EDTA.
However, you should also consider how the sample has been prepared. If you desalted the antibody into the PBS/EDTA buffer, the blank is the column equilibration buffer. If you dialysed the Ab into buffer, the strictly correct blank is the spent dialysis buffer (not fresh buffer). If you diluted 10 mg/ml Ab in 50 mM Tris/0.1% azide into PBS/EDTA to give a 1 mg/ml Ab solution, the strictly correct blank is a 1/10 dilution of the Tris/azide buffer in the PBS/EDTA buffer, so that the ONLY difference between the blank and the sample is the presence of the molecule that you wish to measure.
Finally, you will also need to use a cuvette that can transmit UV wavelengths e.g., quartz.
  • asked a question related to Baseline
Question
3 answers
When processing results curves, I frequently need to correct the baseline before peak fitting and intensity comparison. This happens with XRD, XPS, Raman, and many other kinds of data. Can anyone give some advice on the general principles of baseline correction, and any specific rules for each type of measurement? For example, I have heard that the FWHM cannot exceed a specific eV value when processing XPS data.
Relevant answer
Answer
Speaking in general terms, baseline subtraction removes whatever contribution your data has that is completely irrelevant to your analysis. I can only speak to Raman, but the best way to subtract a baseline is to measure your sample and your substrate using the same laser power, number of acquisitions, and acquisition time, and then subtract the substrate signal from the sample signal. If your sample has some contribution from luminescence, one way to remove the luminescent background is to measure your sample in a parallel polarization configuration and follow that with a cross polarization configuration. Generally, the PL is unpolarized, and your measurement in cross polarization should remove the totally symmetric Raman modes, leaving only the other types of modes and the PL, which you can then use to remove the PL from the parallel configuration. Again, if you know enough about Raman, and even PL, you'll know that these are suggestions rather than rules, because there is no single perfect rule for all cases. As with most people, though, you'll eventually have enough experience to do background subtraction easily.
Hope that helps.
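When no matched substrate measurement or polarization trick is available, one generic fallback (not mentioned above, and instrument-agnostic) is to fit a low-order polynomial through peak-free regions and subtract it; a minimal sketch:
```python
import numpy as np

def polynomial_baseline_subtract(x, y, anchor_mask, order=3):
    """Fit a low-order polynomial only through user-chosen peak-free points
    (anchor_mask, boolean array) and subtract it from the whole spectrum."""
    coeffs = np.polyfit(x[anchor_mask], y[anchor_mask], order)
    return y - np.polyval(coeffs, x)
```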
  • asked a question related to Baseline
Question
1 answer
I would like to measure baseline anxiety in adolescent (34 to 50 days old) C57BL/6J mice. Is there a best time of day during the light cycle to perform these experiments?
Relevant answer
Answer
I have done this test many times. As far as I know, it doesn't matter much, provided you test under homogeneous light conditions and test all experimental groups at about the same time. For example, testing the control group at 8 am and the experimental group at 3 pm can make a difference owing to circadian rhythm. Since you will be comparing the experimental groups with the control group, testing them at the same time will be sufficient.
  • asked a question related to Baseline
Question
2 answers
I am using Agilent Infinity II ELSD (G4260B) to develop a new analysis method and facing these problems.
1. Large baseline offset: when the system is turned on (parameters below), the signal is about 700 mV, even though I spent a few days washing as recommended in the Agilent manual.
- Column: XBridge C18, 150 x 4.6 mm, 3.5 µm, 25 °C
- Mobile phase: 100% Methanol (LC grade)
- Flow Rate: 1.0 mL/min
- Evaporator temperature: 60 °C
- Nebulizer temperature: 25 °C
- Evaporator gas flow: 1.6 SLM (nitrogen)
2. Baseline increases when using water in the mobile phase: the parameters were the same as above, except the mobile phase was methanol-water. Whenever the percentage of water was higher than 20%, e.g. methanol-water (6:4, isocratic), the baseline began to increase steadily (approximately 3.5 mV/min, or 210 mV/h). After a few hours the baseline became flat because the signal had reached the upper limit (1250 mV). On changing the mobile phase back to 100% methanol, the baseline slowly decreased back to its previous value.
Does anyone know about or experience these problems? Please help me!
Thank you!
Relevant answer
Answer
  • Thành Nguyễn Tri wrote: "Baseline increases when using water in mobile phase".
Yes, it should change when a less volatile liquid, water, is added to methanol. What you have observed is normal. Water is far less volatile than methanol and takes different nebulizer, heat, and gas-flow settings. Each mobile phase must be evaluated to determine the best ELSD settings for your application. The signal-to-noise (S/N) ratio of a standard peak must be evaluated at each setting change to find the optimized conditions (to evaluate "noise", never rely on the baseline signal alone: often, as the baseline noise changes, so does the signal level, so monitoring the baseline by itself is unreliable; judge by the S/N of an actual signal). Failure to optimize the ELSD for each mobile phase used may result in signal instability and internal contamination. Once the settings are optimized, autozero the signal and monitor the ripple.
  • You can not use one set of ELSD settings for all mobile phases.
  • The ELSD is NOT a "Universal" detector.
  • "Optimization" of the detector to the method must take place before any data is collected, and this takes time and detailed HPLC system knowledge of how the ELSD system works.
Please find a professional chromatographer in your area with practical experience in HPLC, and especially ELSD detectors, to assist you. To use an ELSD, you must first receive professional training in this complex HPLC detector. Few have experience with them, and you cannot learn to use one in a day. This is not a simple detector like an RID or UV/VIS; an ELSD module is one of the more complex detectors you can use (similar to an MS or MS/MS system, requiring years of training).
We only recommend ELSD/CAD for advanced level users (> 10 years professional exp) as they are very difficult to use and obtain reliable data. *Be cautious of the sales people/vendors who provide misinformation about them to make a sale.
  • IMHO: Due to the ELSD's (or CAD, same issues) complexity of operation, advanced level skill needed to use, lack of linearity and selectivity, high maintenance requirements and expense, they should also only be used when no other detector will work for the application.
  • asked a question related to Baseline
Question
9 answers
Hi,
I conducted an intervention study with two groups (treatment group = 1, control group = 0) and three time points (baseline = T0, post-intervention = T1, follow-up = T2). My outcome variables are quality of life and anxiety level (both measured on continuous scales). As my outcome variables didn't follow a normal distribution, I am conducting GEE. I would like to know whether I need to adjust for the baseline values of the outcome variables. If yes, how should I interpret the output tables? If anyone has an example of a similar study, I would be grateful to read it. Appreciating your support.
Relevant answer
Answer
Just a note: if your goal is to report the findings to regulatory agencies (FDA, EMA), baseline adjustment is commonly advised; please find the EMA guideline linked below. This is not because the baseline differences matter in themselves (it's an RCT, so even if present, differences are due to chance and must be ignored), but to "filter out" these potentially spurious differences from the treatment effect and improve inference. The adjustment applies both to analysing post-values and to change from baseline*.
----
* Just a warning about modelling change-from-baseline in RCTs. It poses a serious problem if there is an actual "virtual" (regardless of the randomization) difference at baseline and also a between-arm difference in post-values of similar magnitude. In this case the slopes connecting pre and post in both arms may be close to parallel, suggesting "no difference in differences", EVEN if the post-values DO indicate a statistically significant (and hopefully also clinical) difference. This is called Lord's paradox, and it is a sound reason to use change scores in observational studies and post-values in RCTs. The change-score (parallelism-of-slopes) approach actually "looks" at the baseline differences (which makes a lot of sense in non-RCTs and not in RCTs); the post-values approach ignores the baseline difference and looks only at the follow-up, post-treatment findings (which makes a lot of sense in RCTs and not in non-RCTs). It's up to you which approach is best for you.
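A minimal sketch of the baseline-adjusted analysis of post-values described above, using Python's statsmodels (the file and column names are hypothetical):
```python
import pandas as pd
import statsmodels.formula.api as smf

# Columns (hypothetical): 'post' = outcome at follow-up, 'baseline' = same
# outcome at T0, 'arm' = 0 control / 1 treatment.
df = pd.read_csv("trial.csv")
fit = smf.ols("post ~ arm + baseline", data=df).fit()
print(fit.summary())  # the 'arm' coefficient is the baseline-adjusted effect
```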
  • asked a question related to Baseline
Question
2 answers
We have a Shimadzu UV-1800 spectrophotometer. We have a problem with the baseline, especially in the UV range (see the attached photo). Who can help us solve this problem?
Relevant answer
Answer
I am using a quartz cuvette.
  • asked a question related to Baseline
Question
2 answers
I am fitting a mixed-effects linear model using Stata and need your advice. The study is an RCT with 4 groups and 3 time points. We would like to test a group × time interaction to see the change in score over the treatment period. However, the main outcome measure was significantly different at baseline (P = 0.001). My question is: how do I adjust for the baseline measure in a mixed-effects model (long-shape data) using Stata?
Relevant answer
Answer
I agree with Dr Ali. However, the programming is up to the researcher, IMHO. Best wishes, David Booth
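Not Stata, but for illustration of one common model structure (baseline entering as a covariate rather than as a time point, with a random intercept per subject), here is a minimal statsmodels sketch with hypothetical variable names; in Stata the analogue would be the mixed command with the baseline value added as a covariate:
```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data (hypothetical columns): id, group, time, score, baseline.
# Baseline rows are dropped from 'score' and carried as a covariate instead.
df = pd.read_csv("rct_long.csv")
post = df[df["time"] != 0]
m = smf.mixedlm("score ~ C(group) * C(time) + baseline",
                data=post, groups=post["id"]).fit()
print(m.summary())  # group:time terms test differential change across groups
```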
  • asked a question related to Baseline
Question
3 answers
Dear Community,
I have performed a PCA on an ASTER image in ENVI 5.3. I displayed the statistics and obtained several statistical tables (baseline statistics, covariance, correlation between bands...). However, none of them shows the correlation matrix between the PCs and the bands of the input image. How can I display this specific table?
Any information or help is very appreciated.
Relevant answer
Answer
To generate the statistical correlation matrix between the principal components (PCs) and the input image bands in ENVI, you can follow these steps:
  1. Open the input image in ENVI.
  2. Click on "Dimensionality Reduction" from the "Raster" menu.
  3. In the "Dimensionality Reduction" dialog box, select "Principal Components Analysis" as the method and choose the input bands that you want to use for generating the PCs.
  4. Click "OK" to run the Principal Components Analysis.
  5. Once the analysis is completed, the PCs will be displayed as individual layers in the ENVI Layer Manager.
  6. Click on "Tools" from the ENVI toolbar and select "Matrix/Vector Operations."
  7. In the "Matrix/Vector Operations" dialog box, select "Matrix" as the operation type.
  8. Select the PC layers that you want to include in the correlation matrix.
  9. Click "OK" to generate the correlation matrix.
The correlation matrix will display the statistical correlation coefficients between the PCs and the input image bands.
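If the menus don't expose that table, it is also straightforward to compute outside ENVI once the PC layers are exported: stack the flattened PC and band arrays and take the cross-block of the joint correlation matrix. A minimal numpy sketch (the array shapes are assumptions):
```python
import numpy as np

def pc_band_correlation(pcs, bands):
    """pcs: (n_pcs, n_pixels); bands: (n_bands, n_pixels), both flattened.
    Returns an (n_pcs, n_bands) matrix of Pearson correlations."""
    full = np.corrcoef(np.vstack([pcs, bands]))   # joint correlation matrix
    return full[:pcs.shape[0], pcs.shape[0]:]     # the PCs-vs-bands block
```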
  • asked a question related to Baseline
Question
3 answers
I need to run 27Al NMR on a liquid sample and I want to know whether quartz NMR tubes have a lower background than glass NMR tubes.
If someone knows how to reduce the background signal, please let me know.
Thanks
Relevant answer
Answer
To complement Clemens' answer, I would like to mention backward linear prediction, which is a standard procedure in almost all NMR processing software packages. It helped me a lot when measuring 11B NMR spectra in standard (non-quartz) NMR tubes.
Good luck,
Vladimir
  • asked a question related to Baseline
Question
15 answers
Hello
I received the FTIR graph after analysis of the sample, but the graph isn't aligned to the baseline. I am attaching the file. Kindly guide me: is it a sample error or a machine error?
Relevant answer
Answer
  • Run a standard material
  • Contact the manufacturer or distributor/agent with the above
  • asked a question related to Baseline
Question
4 answers
Currently I'm struggling with choosing between various analysis options, ranging from repeated measures design, to 4-way ANOVA to ANCOVA or to moderation analysis with the PROCESS macro of Hayes.
Some background information: my main research question is: to what extent can subgroup membership predict changes in X scores six months after participation in an intervention, and is this effect moderated by Y (controlled for Z)?
I am not sure whether I should work with a repeated measures design or with a change score of AUDIT (calculated as T1 − T0). What I have read is that the difference-scores ANOVA (1) tests whether the change or difference from T0 to T1 is equal across all groups, whereas ANCOVA (2) tests whether the T1 scores are equal across groups while controlling for their T0 scores. I've read that this is a small but potentially impactful distinction, which became famous through Lord's paradox (https://m-clark.github.io/docs/lord/index.html / ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference: Multivariate Behavioral Research: Vol 48, No 6 (tandfonline.com)). It's been stated that if your groups are randomly assigned experimental groups, both methods are equivalent and you can choose whichever you prefer; if they are naturally occurring groups, the literature suggests using the difference-scores method.
Since the subgroups I'm working with are latent classes that do 'naturally' occur, I am wondering if I should indeed go with change scores (despite the drawbacks that have been written about in the literature, e.g. the addition of measurement errors).
What is important to keep in mind is that my data (in both options, i.e. taking AUDIT change or AUDIT_T1 as the DV) have been violating the assumptions of normality and homogeneity throughout, and I am not sure how best to deal with that in my current situation. Hayes's PROCESS moderation macro seemed like a good solution, but whether I can use it depends on question 1 (because I would need to use the AUDIT change score for that).
All in all, I am unsure how to proceed. Thank you in advance for thinking along.
Relevant answer
Answer
I suggest you use Structural Equation Modeling (SEM), widely used for moderation analysis.
  • asked a question related to Baseline
Question
1 answer
In a scenario where I have about 25 protein sequences of a lytic enzyme (variants) of bacterial origin, I would like to estimate the mutational distance of each of these sequences; selecting one of them as a baseline, I would like to see whether the other sequences are its predecessors or successors and estimate their mutational distance from the baseline. Can building a specific phylogenetic tree accomplish this? If yes, what is the procedure for building such a tree, and what tools should I use?
Relevant answer
How about MEGA software, or the EMBL web server, for doing multiple sequence alignment followed by phylogenetic tree building?
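As a minimal illustration of the distance-based route in Biopython (the alignment file and the baseline sequence id are hypothetical): build a neighbor-joining tree from pairwise identity distances. Note that a tree by itself shows relatedness, not predecessor/successor direction; inferring direction requires rooting, e.g. on an outgroup or the chosen baseline:
```python
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("variants_aligned.fasta", "fasta")   # pre-aligned variants
dm = DistanceCalculator("identity").get_distance(aln)   # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)                 # neighbor-joining tree
tree.root_with_outgroup("baseline_variant")             # hypothetical sequence id
Phylo.draw_ascii(tree)                                  # quick text rendering
```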
  • asked a question related to Baseline
Question
2 answers
Dear colleagues,
I'm performing LTD experiments in the mouse hippocampus (CA1) using coronal brain slices and recording fEPSP.
Mice are 30 days old, and I'm trying low-frequency stimulation (900 stimuli at 1 Hz) to induce long-lasting LTD. However, after LFS the fEPSP response comes back to baseline within 10-15 minutes of the conditioning. Any suggestions from your side?
Below you can find my experimental setting:
- 400 um slice thickness;
- Ice-cold cutting solution: sucrose (195 mM), NaCl (10 mM), glucose (10 mM), NaHCO3 (84 mM), KCl (74.55 mM), NaH2PO4 (1.25 mM), sodium pyruvate (2 mM), CaCl2 (0.5 mM), MgCl2 (7 mM);
- After the cutting procedure, slices are incubated in aCSF for 40 min at 35 °C, then kept at RT;
- Standard aCSF as recording solution (CaCl2 2 mM and MgCl2 1 mM), no blockers;
- 50% of the maximal fEPSP response as baseline for 10 minutes (pair of stimuli every 20 s, ISI 50 ms);
- Borosilicate capillaries (filled with aCSF) as stimulating (< 1 MΩ resistance) and recording electrodes (around 2 MΩ resistance);
- LTD protocol (900 single stimuli at 1 Hz or 900 pair of stimuli at 1 Hz, ISI 50ms).
Thanks in advance for helping me.
Relevant answer
Answer
Thanks for your suggestions.
I would like to point out that the protocol I used was 15 min long (and it is reported to induce LTD in the hippocampus).
Regarding your second suggestion: are you sure that blocking NMDARs with APV is correct? LFS-mediated LTD is an NMDAR-dependent mechanism; how can I induce LTD while blocking NMDARs? Do you have a reference for that?
  • asked a question related to Baseline
Question
4 answers
I’ve been having numerous issues with achieving stable baselines recording from the TA-CA1 synapse from juvenile (P12-P24) rat hippocampus slices. In addition, when applying drugs such as antagonists/inhibitors which should not show any effect on baseline, I have been seeing gradual increases in synaptic transmission that differ from what other students have previously shown in my lab.
I cull my rats by cervical dislocation and slice in ice cold sucrose aCSF and allow the slices to rest for 1 h at RT in regular aCSF. I then stimulate and record from the TA-CA1 and my first slice usually takes 2-3 hours to stabilise. I oxygenate my aCSF for at least 40 minutes prior to putting a slice on the rig and I use a platinum harp to hold it down in the bath. My rig uses a gravity feed system and the flow rate is 2.5 mL/min. My recording electrode is filled with aCSF and I bleach the silver wire every few days.
When the slice eventually stabilises for 20 min, I add my drug which has been oxygenating for at least 10 min. I can often see strange increases caused by the drugs that have not previously been seen. I thought it might be down to changes in oxygenation but I’ve been keeping all of my solutions in similar sized cylinders and have increased my oxygen so that everything is saturated.
Can anyone advise me how I can improve this and shed some light onto why I am seeing such instability and increases when switching drug?
Any help would be much appreciated, as I feel as though I’ve exhausted all ideas at this point.
Thank you!
Relevant answer
Answer
I think that the speed of the flow rate can influence fEPSP amplitude. You may believe that your flow rate is constant between conditions, but if your system is gravity fed, it could be that the flow rate varies depending on the height of the solution.
Apostolos' idea about reference electrode is worth considering, but I believe that changes between reference (ground) and recording electrode will influence the absolute baseline values, but not the amplitude of the fEPSP.
  • asked a question related to Baseline
Question
3 answers
I am doing research to assess the impact of counseling on appetite level. Appetite level was categorized as poor, moderate, or good, and was assessed at baseline, after 2 weeks, and after 6 weeks. What would be the most appropriate statistical test for analyzing this type of data?
Relevant answer
If the sample is small, we use non-parametric statistics; if the sample is large, we can use parametric statistics.
  • asked a question related to Baseline
Question
1 answer
Hi,
I am conducting a meta-analysis of RCTs only, and I am interested in the changes between baseline and outcome assessments in response to an intervention, for two study arms.
For most of my studies, I have just mean and SD values and nothing else, so I am not able to calculate the correlation and thus the SD of the changes. I could not find anything on how to find the SD of the changes using the mean and SD of the baseline and outcome assessments. There is a formula in the Cochrane handbook for combining two subgroups' SDs into a common one, which you can see in the attachment; however, I am not sure it is reasonable to calculate the SD of the changes with this formula. If you have any information about this, or can suggest another way to solve my problem, I will be pleased.
Kind Regards
Relevant answer
Answer
You should not calculate the effect size via the correlation. I think the best approach in this situation is to calculate the mean difference between baseline and after-treatment for the intervention and control groups separately, and use this mean difference as the effect size. Because there are equal numbers at baseline and after treatment, you can sum the standard deviations and divide by 2.
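For reference, the Cochrane Handbook's own imputation (section 16.1.3.2, the approach the question alludes to) derives the SD of the change from the baseline and final SDs plus an assumed pre-post correlation; a minimal sketch:
```python
import math

def sd_change(sd_baseline, sd_final, corr=0.5):
    """Cochrane Handbook 16.1.3.2: impute the SD of change scores from
    the two time-point SDs and an assumed (or borrowed) correlation."""
    return math.sqrt(sd_baseline**2 + sd_final**2
                     - 2 * corr * sd_baseline * sd_final)

# A sensitivity analysis over the assumed correlation is usually advised:
# [sd_change(10.7, 7.2, r) for r in (0.3, 0.5, 0.7)]
```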
  • asked a question related to Baseline
Question
3 answers
In a single-case experimental design of ABA type (n = 3), can data collection (A) (minimum three points) be started for all three participants at the same time, and can the intervention (B) be started after establishing a baseline trend for participant 1 and then withdrawn (A) while the other two are still at baseline? If all three achieve a baseline, can the mindfulness intervention (12 weeks) be given simultaneously to each individual, one to one, on different days (e.g. Monday, Wednesday, Friday)? Or do I have to proceed one at a time (i.e. complete the A-B-A chain fully for participant 1, then move on to participant 2, then participant 3, starting fresh)? If data collection and intervention cannot be carried out simultaneously for all individuals in ABA, can multiple baselines be implemented to overcome these shortcomings?
There are 4 variables in the study.
Kindly provide your valuable input.
Relevant answer
Answer
If the intervention seeks an individual effect on the subject and not a group response, there is no problem with the methodological design, even when the effect of the intervention is to be evaluated in aggregate. The problem arises when the group's reaction to the intervention interferes with the expected effect on each subject. Under that assumption, the intervention must be carried out simultaneously with the whole test group.
  • asked a question related to Baseline
Question
6 answers
Hi everybody, I need some help with an analysis of pupillometric data; it’s the first time that I use pupillometry, so I hope I didn’t make too many mistakes or at least that they won’t jeopardize the whole analysis.
I ran a between-subjects experiment in which the participants watched the same visual stimulus in three different conditions; during the stimulus presentation I recorded their eye-tracking data. I'm very interested in pupillometry but here's my problem:
  1. the software I use (iMotions) provides me with the aggregated and auto-scaled data for each of the three conditions: these data are apparently very clean and consistent (there has to be some kind of automated correction of blinks and artifacts).
  2. The software output basically has two columns: timestamp (in milliseconds, identical in the three conditions) and pupil diameter (in cm, strangely enough, but never mind…)
  3. I ran an ANOVA with condition as the factor and pupil diameter as the dependent variable, F(2, 5141) = 119.38, p < .001, ηp2 = .044, (1-β) > .99. Bonferroni-corrected post-hocs were all significant, p < .001 (see graph 1 in attachment).
  4. I got suspicious: the significance was too high and, above all, the three conditions do not start at the same point on the y axis (ycond1 = 0.47; ycond2 = 0.47; ycond3 = 0.50). I thought that maybe the significant difference could be due to this (let’s say the participants in Condition 3 had larger pupils for some reason); so, to baseline the data, I tried to let them start at the same point.
  5. To do this, I rearranged the columns to show not the pupil diameter but the pupil dilation: the new y value at, say, x = 1 (time frame = 132) is the old y value (pupil diameter) minus the y value at x = 0 (see attached screenshot). In the example, for condition 1, the new value for time frame 132 is 0.48 − 0.47 = 0.01.
  6. I ran the same ANOVA and now the results appear to be more reliable, F(2, 5141) = 42.15, p < .001 ηp2 = .016, (1-β) > .99. There has been a drastic decrease in the F value and partial eta squared. Bonferroni corrected post-hoc analyses revealed that the condition 2 only was significantly different from the other ones (p < .001) (see graph 2).
…and now…question time!
  1. Would you say that this procedure is right? I guess there could be many errors in it, but I’m not an expert and I didn’t manage to take great advantage from reading many papers on this matter.
  2. Would you have trusted the ANOVA results at point 3? Or rather I was right to baseline those data?
  3. To baseline the data, I acted freely and according to nothing but a rule of thumb that came into my mind. Would you suggest other processes?
Relevant answer
Answer
As a matter of fact, when you subtract the baseline value to the actual value, you do have the possibility of going below 0. This is fairly normal in several circumstances.
Let me give you an example: your stimuli are very bright in luminance, but, in the baseline, your participants watch a black screen. In this case, you will record a very high value in your baseline (say, 6 mm, that is, mydriasis). Then, as soon as the stimuli are presented, your participants' pupil dilation will normally decrease due to the high luminance (i.e., myosis). So, the average pupil size could be - as an example: 2 mm. When you subtract your baseline, you will reach the value of -4 mm. This does not mean, of course, that the pupil dilation was negative, but that it was below the baseline level. In cases like this, the baseline value should never be considered as it's unrepresentative by definition.
To avoid such a result, you have two possibilities:
1. Convert your pupil dilation data into percentage change pupil dilation. This procedure was described by Lemercier and colleagues (2014). I used it and explained it thoroughly in this paper (attached in pdf):
Ansani, A., Marini, M., D’Errico, F., and Poggi, I. (2020) How Soundtracks Shape What We See: Analyzing the Influence of Music on Visual Scenes Through Self-Assessment, Eye Tracking, and Pupillometry. Frontiers in Psychology 11:2242. doi: 10.3389/fpsyg.2020.02242
2. As a secondary choice, after the baseline subtraction, you can normalize your pupil dilation data; namely, the variable's values will range between 0 and 1.
Here's a link with the formula:
In both cases, you will lose the real data indicating the pupil dimension in mm. However, unless you're doing clinical research in ophthalmology or a similar domain, it is fairly rare to need pupil dilation in mm. What counts most is the extent to which the pupils dilate in response to the stimuli.
I hope I helped you ;)
Please don't hesitate to contact me for any further information.
All the best,
Alessandro
Lemercier, A., Guillot, G., Courcoux, P., Garrel, C., Baccino, T., and Schlich, P. (2014). “Pupillometry of taste: Methodological guide-from acquisition to data processing-and toolbox for MATLAB,” in Quantitative Methods for Psychology, Vol. 10, (University of Ottawa; School of Psychology), 179–195
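For readers implementing option 1, a minimal sketch of percent-change-from-baseline (following the general idea of the Lemercier et al., 2014 approach cited above; the variable names are hypothetical):
```python
import numpy as np

def percent_change_from_baseline(pupil_trace, baseline_mean):
    """Express a pupil-size trace as % change relative to a pre-stimulus
    baseline mean; negative values simply mean 'below baseline'."""
    return 100.0 * (np.asarray(pupil_trace) - baseline_mean) / baseline_mean
```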
  • asked a question related to Baseline
Question
7 answers
Hello, everyone:
When an ANOVA reveals a main effect on our primary dependent variable, which should we consider: the absolute value, or the percent change from baseline?
We usually learn in statistics lessons to choose the absolute value, but I don't understand why.
Thank you for your reply.
Relevant answer
Answer
This really isn't a question for stats folks; it depends on what your model is for the mechanism that might reasonably produce a change. So, more information about this is necessary.
  • asked a question related to Baseline
Question
4 answers
How do people propose a new optimization algorithm? I mean, what is the baseline? Is there any intuition or mathematical foundation behind it?
Relevant answer
Answer
The question is too general. To generate the basic idea of the algorithm, you need to know the detailed statement of the problem. Typically, the process of developing an algorithm starts with the identification of your problem complexity class. If there is no evidence or clear feeling that the problem is NP-hard, then it is reasonable to try to develop a polynomial algorithm for solving it. Such algorithms are usually based on the use of specific properties of the problem. Sometimes it is possible to construct a polynomial algorithm based on the general scheme of dynamic programming, taking into account the specific properties of the problem.
If the problem is known to be NP-hard, then branch-and-bound methods, dynamic programming, and their modifications often work well for a relatively small problem dimension. Sometimes it is possible to build a successful formulation of the problem in the form of an integer programming model, followed by the use of appropriate methods or ready-made software. For high-dimensional problems, you can either use well-known metaheuristics, or develop your own approximate algorithm. In the latter case, success is usually based on the use of the problem properties. As you can see, in any case, it is useful to start by studying the specific properties of your particular problem.
  • asked a question related to Baseline
Question
2 answers
Hi,
I am trying to use the Ratio Profiler plugin in ImageJ to analyze GCaMP data. We followed the instructions on the ImageJ wiki page and still cannot figure out why we aren't getting the data we'd expect from the plugin: the graph ImageJ produces after running the plugin is just one vertical line. We are confused. Has anyone else run into this issue? If so, how did you resolve it?
Further, if ImageJ does not work for us to analyze ratiometric data, how would you recommend analyzing GCaMP data? We are looking for a program to measure peaks in fluorescence and baseline fluorescence.
Thanks!! 
Relevant answer
Answer
Here is an interesting video that helps with that
  • asked a question related to Baseline
Question
1 answer
I am working on longitudinal data with 4 time points. I want to fit a model exploring how a binary outcome Y changes over time (no missingness at any time point) with the exposures X, W, T and Z (these exposures are also time-varying - some are fully observed and some have missing points - so the panel is unbalanced). In the same model, I want to account for other exposure variables which do not change over time (constant baseline covariates).
I have tried GEE models, and have also explored some references that recommend generating a separate binary variable for each exposure per time point, but this generates quite many exposures: 4 covariates × 4 time points = 16 new variables, plus the 6 constant covariates, and the model won't converge.
Please can you advise on how best to analyse this type of data, or provide any reference or analysis program from which I can learn more?
Relevant answer
Answer
Dr. Mwoya Byaro, kindly assist.
  • asked a question related to Baseline
Question
4 answers
The problem: We are interested in how ketamine modulates hedonic experiences such as chills. Participants listen to music while they are in the scanner both on ketamine and placebo. Participants also pre-rate all the songs outside the scanner a week before. For our analysis we need to know when participants experience their peak emotional moment during the scan session.
Our ‘solutions’ that did not work: 1. Live rating - people press a button while they experience their peak moment. Problem: we confound our neural signal of interest (pleasure) with motor activity. 2. Rating the music again after the scan session. Problem: even though we think that the peak for most songs won't shift between the baseline rating and the in-scanner session, it might do so in the ketamine condition (ketamine makes everything more pleasurable, and maybe even earlier). So, if participants rate the music again afterwards, when most of ketamine's effects have already subsided, we might not find the same peak moments as during the scan session (plus, we also doubt that participants could reliably remember when those peaks occurred during the scan session...).
3. Measure physiological responses. Problem: yes, skin conductance does correlate with chills, but we do not have the equipment for that...
Does anyone of you have an idea how we could measure the peak moment during the scan session without majorly confounding our measurements? I would really appreciate your help!
Relevant answer
Answer
Sorry, for the long delay!
Our music excerpts are 70 seconds long and we have 20 songs altogether (10 neutral/10 positive).
Would you mind sharing this groove literature with me? I would be really interested in reading papers showing this connection between moderate music complexity and peak moments!
ad 1. Yes, we thought about that, but many scientists I talked to argued that the brain's network activity is so complex and sophisticated that one cannot (should not) simply subtract this confound from the signal... But I will look into it again! Thank you!
ad 4. Mhm, this is a very good idea! I will look into that as well.
Thank you for your responses Connor!
  • asked a question related to Baseline
Question
2 answers
I think I understand now that you ignore the baseline (pre-test) data when calculating Hedges' g, instead subtracting the intervention post-treatment mean from the control post-treatment mean and dividing the result by the pooled standard deviation (of both samples at post-treatment). However, I am now wondering what that means for studies where, despite randomisation, there were significant baseline differences in the outcome of interest.
For example, take the data below, where the mean value (M) pertains to a depression score. Would it be appropriate to calculate Hedges' g by the method above, or would something have to be done differently, given that the intervention and control groups' baseline scores were not similar?
Thankfully, I think only a couple of studies had this problem, but I am unsure whether to exclude them, perform a correction, or run them as normal in the meta-analysis.
Intervention Group Pre-treatment: M=63.92; SD=10.67; N=63
Intervention Group Post-treatment: M=59.43; SD=7.23; N=63
Control Group Pre-treatment: M=74.57; SD=9.79; N=65
Control Group Post-treatment: M=72.69; SD=4.84; N=65
Many thanks for any help.
Relevant answer
Answer
Dear John,
Please, read my paper and see some options:
Regards,
Gokhan
  • asked a question related to Baseline
Question
4 answers
I am trying to calculate the standard deviation of the change for my meta-analysis and would like to know the correct way of calculating it.
I have the following data available:
1. mean for control group at baseline and endpoint
2. mean for intervention group at baseline and endpoint
3. 95% Confidence interval for control group at baseline and endpoint
4. 95% Confidence interval for intervention group at baseline and endpoint
5. Number of subjects in control and intervention group
I would like to calculate the standard deviation of the change for the control group (from sd_baseline and sd_endpoint) and for the intervention group (from sd_baseline and sd_endpoint).
I would like to use the Cochrane handbook as a reference. It states:
"When there is not enough information available to calculate the standard deviations for the changes, they can be imputed."
Does this mean we need to impute the correlation coefficient for all the studies, for every outcome separately?
Relevant answer
Answer
Dear Ambrin,
I am late to write, but this may be helpful for people newly experiencing the same problem. For a solution, please read my paper's methodology:
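For others with the same data layout, the Cochrane Handbook gives a closed-form conversion from a 95% CI of a mean back to an SD (section 7.7.3.2); the change SD can then be imputed from the two time-point SDs and an assumed correlation, as in section 16.1.3.2. A minimal sketch of the CI step:
```python
import math

def sd_from_ci95(lower, upper, n):
    """Cochrane Handbook 7.7.3.2: SD = sqrt(N) * CI width / 3.92
    (valid for a 95% CI around a single group's mean, large-ish N)."""
    return math.sqrt(n) * (upper - lower) / 3.92
```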
  • asked a question related to Baseline
Question
5 answers
Dear Research Gate community,
I'm conducting a study on how different management practices in wetlands (ponds) affect the diversity and abundance of species in the wetlands. The ponds are next to each other; some ponds are controls without any management practices. The water in the treatment ponds is regularly drawn down and refilled with water from the river. We recorded the waterbird species number and the abundance in each pond regularly (bird data for all the ponds were recorded at the same time in each survey).
- In the first year, we conducted a baseline study in which no treatment was done for all the ponds (data were collected monthly).
- In the second year, we conducted the treatment (operational study), and data of birds were collected weekly.
We’re now trying to study:
1) first, if there is any difference between the treatment ponds and control ponds during the operation
2) if there is any difference between baseline study and operational study of the same pond.
We wonder what kind of statistics are suitable for statistically analysing our data.
Some problems we are encountering are:
1. The data do not look normally distributed. The data collected are a time series, and there is natural seasonal variation in the number of waterbirds in our region (a lot of migratory birds in fall and winter). How do we take the time of survey into account?
2. The sampling frequency differs between the baseline year (12 surveys) and the operational year (52 surveys). How do we compare the baseline and operational years?
Highly appreciate any help or suggestion!
Best regards
Relevant answer
Answer
Try to check a book on: Design and Analysis of Experiments, just be sure that you are doing the right design. Regards.
  • asked a question related to Baseline
Question
1 answer
At what specific time should I measure the baseline BGL? What about the pre-test and post-test blood glucose measurements in the oral glucose tolerance test? I want to apply a 4-hour fast to white mice.
Relevant answer
Answer
If you want to measure fasting glucose, you must take it before your mice get food, after they have not eaten for 8 hours. If you want to measure prandial glucose, you must take it 2 hours after a meal. You can use this for the baseline blood glucose level.
  • asked a question related to Baseline
Question
1 answer
Hi,
I was wondering if anyone has experience statistically analyzing Ca2+ oscillation patterns?
Specifically: which models did you use to quantify what you considered a "peak" in the oscillation pattern, how did you determine the baseline, and what programs did you use to accomplish this?
Thank you!
Relevant answer
Answer
You may like this publication of the topic:
doi: 10.1096/fj.201801460R
Best, Karoly
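If you end up scripting the analysis yourself, SciPy's find_peaks is a common starting point; a minimal sketch (the file name, the percentile baseline, and the thresholds are placeholder choices, not a validated pipeline):
```python
import numpy as np
from scipy.signal import find_peaks

trace = np.loadtxt("roi_trace.txt")        # hypothetical 1-D fluorescence trace
f0 = np.percentile(trace, 10)              # crude baseline: 10th percentile
dff = (trace - f0) / f0                    # delta-F over F0
peaks, props = find_peaks(dff, height=3 * np.std(dff), distance=20)
print(len(peaks), "peaks at frames:", peaks)
```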
  • asked a question related to Baseline
Question
8 answers
I am looking to do a meta-analysis of intervention RCTs, but all of the papers provide only baseline and post-intervention mean (SD) for the groups. I have looked at the Cochrane handbook page, as I am using RevMan ( https://handbook-5-1.cochrane.org/index.htm#chapter_16/16_1_3_2_imputing_standard_deviations_for_changes_from_baseline.htm ).
However, I am not from a statistical background, so this is getting quite complex. I just wanted to see whether there are resources to guide me, as this must be very common, and I would hope there are fairly simple ways to deal with it. I have come across using the SD from baseline or post-intervention as the SD for the change, but I am obviously hesitant to just go along with this.
As only baseline and post-intervention mean and SD are reported for the majority of the studies, I am thinking I may need to stop at the narrative synthesis and leave out the meta-analysis. But given that the trials are all randomised, with similar baseline characteristics and biomarker parameters, could I just enter the final measurements for both groups, rather than the mean change and SD from baseline?
Appreciate of any input.
Relevant answer
Answer
Piyush,
Please read the methodology of my paper:
you will see what you can do.
  • asked a question related to Baseline
Question
5 answers
I am working with a 7820A GC and a 5977B MS. Recently we changed the helium tank, and since then we have been seeing baseline noise in the chromatogram that is high enough to interfere with my peaks. We also found that the N2 and O2 counts have been higher than we used to see: 5-8% and 1-1.8%, respectively, versus the previous 1-2% and <1%.
We replaced the helium regulator, purged the inlet, changed the septum, baked out the MS and the baseline noise still exists. Any ideas why this might be happening?
Relevant answer
Perhaps you have already found the answer, but apparently the helium cylinder can be contaminated with N2, which would explain the high N2 ratio.
  • asked a question related to Baseline
Question
1 answer
Hopefully someone can help me out with how to quantify significant representation of a group.
I am analysing a bias in protein detections. To see a pattern, I group the proteins into their families. I know the baseline occurrence of each family, based on the proteome. The sample data consist of a list of how many times a protein family occurred in my sample (always equal to or less than the baseline).
Data | Baseline | %
1 | 6 | 16.67%
2 | 15 | 13.33%
11 | 141 | 7.80%
3 | 18 | 16.67%
58 | 361 | 16.07%
1 | 3 | 33.33%
1 | 21 | 4.76%
7 | 421 | 1.66%
1 | 2 | 50.00%
I could take the percentage of representation, but since the families are not equally represented, this creates a bias: here the value of 58 out of 361 carries more weight than 1 out of 2.
Is there a way to calculate the 'magnitude' of representation beyond a percentage, taking the frequency into account?
Chi-square testing doesn't work, since the dataset consists of more than 600 family groups.
I struggle a bit to put my problem into words; please ask if I can do anything to clarify.
Relevant answer
Answer
Perhaps I am being dense here, but percentage does take frequency into account:
%A = (freqA ÷ total) × 100
Am I missing something here?
Best wishes David Booth
  • asked a question related to Baseline
Question
4 answers
Hello,
I am hoping that someone who is well versed in statistics can help me with my analysis and design. I am investigating the torque produced via stimulation from different quadriceps muscles. I have two groups (INJ & CON), three muscles (VM, RF, VL), three timepoints (Pre, Post, 48H) in which torque is measured at two different frequencies (20 & 80 Hz). In addition to the torque, we also want to look at the relative change from baseline for immediately Post and 48H in order to remove some of the baseline variability between muscles or subjects. A ratio of 1.0 indicates same torque values post and Pre. This is a complex design so I have a few questions.
If I want to use repeated-measures ANOVA, I first have to test for normality. When I run the normality test on the raw data in SPSS, one condition fails and others are close (p < 0.1). When I run the ratios, I also have a condition that fails normality. Does this mean I now have to do a non-parametric test for each? If so, which one? I am having a difficult time finding a non-parametric test that can account for all my independent variables. Friedman's test handles repeated measures, but it is not going to account for group/frequency/muscle differences the way an ANOVA would.
Is repeated-measures ANOVA robust enough to handle this? If so, should I set it up as a four-way repeated-measures ANOVA? It seems like I would really be increasing my risk of Type I error. It could be separated by frequency (20 and 80 Hz), because it is established that a higher frequency produces higher torque, but as you can tell, I have a lot of uncertainties in the design. I apologize if I am leaving out vital information needed to get answers; please let me know and I can elaborate further.
Thank you,
Chris
Relevant answer
Answer
You can use a plain ANOVA for repeated measures (time), per Bhogaraju Anand's suggestion. As for the normality test, forget it... it is not essential, and parametric tests are sufficiently robust to deviations from normality (see attached file).
  • asked a question related to Baseline
Question
1 answer
As per literature,
"The net peak heights were determined by subtracting the height of the baseline directly from the total peak height. The same baseline was taken for each peak before and after exposure to UV.
The carbonyl index was calculated as: carbonyl index = (IC / IR) × 100,
where IC represents the intensity of the carbonyl peak and IR is the intensity of the reference band."
Now, how do I subtract the baseline height from the peak height?
Relevant answer
Answer
Dear Gitashree Gogoi,
I suggest you calculate the carbonyl index according to the method explained in the attached article. I hope it is useful.
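For the mechanics of the subtraction itself, a minimal sketch (assumes the spectrum as x/y arrays with x sorted ascending; the anchor wavenumbers and the carbonyl position would be your own choices):
```python
import numpy as np

def net_peak_height(x, y, peak_x, base_x1, base_x2):
    """Draw a straight baseline between two anchor points flanking the peak;
    net height = total peak height minus the baseline value under the peak."""
    idx = np.searchsorted(x, [base_x1, peak_x, base_x2])
    y1, y_peak, y2 = y[idx[0]], y[idx[1]], y[idx[2]]
    base_at_peak = y1 + (y2 - y1) * (peak_x - base_x1) / (base_x2 - base_x1)
    return y_peak - base_at_peak

# carbonyl index = 100 * (net carbonyl height) / (net reference-band height)
```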
  • asked a question related to Baseline
Question
2 answers
Edit: the paper was approved so if you want to see it just message me :)
I'm writing a paper on a multimodal active sham device for placebo interventions with electrostimulators. We believe it has a low manufacturing cost, but it's probably better to have some baseline for comparison. Have any of you ever asked a manufacturer to produce a sham replica of an electrostimulator for use in blinded trials? If so, how much did it cost? Was it an easy procedure?
Relevant answer
Answer
I would say, if you need it for a study purpose and not for exhibition (just kidding), I would suggest checking whether it is possible to use the original working device, with the wires simply unplugged, in the sham group. Just an idea; good luck with your paper!
  • asked a question related to Baseline
Question
17 answers
What is the origin of the upward shift in the baseline of the UV-VIS spectrum, noticeable from 300 nm to 800 nm in the screenshot attached? I'm measuring phenobarbital in 0.2 NaOH against a 0.2 NaOH blank. I have tried turning off fluorescent lights and CRT monitors, and capping the cuvette while measuring the sample, on my HP 8453 ChemStation.
Relevant answer
Answer
Baseline drift is a common problem in UV-vis spectroscopy - see https://ibsen.com/resources/detector-resources/subtracting-dark-spectra/ for a useful outline of the issue. I am glad that you were able to pinpoint the source of error in your experiment, Zaid Assaf. Increases in dark current (e.g. due to heating) seem to be a frequent offender, though loss of minor absorbing species (e.g. consumption of acid during a reaction) or changes in the solvent environment (e.g. in gradient chromatography) could also contribute.
My advice to William Letter is to read questions carefully before jumping to conclusions regarding the poster's knowledge and experience level. Condescending comments like "This is basic spectrophotometry" are not constructive and may deter other researchers from seeking assistance online. In this instance, you mistook the poster's issue with baseline drift as a misunderstanding of the Beer-Lambert law, despite repeated clarification. I hope that you will take greater care in your future replies.
  • asked a question related to Baseline
Question
3 answers
I use the BV2 cell line to record calcium signals with a dye called Calbryte 520 AM. I added ATP into the perfusion system at 5 min and could see a peak in the figure. I repeated this experiment four times, but the calcium baseline decreased continuously. Generally, the calcium baseline should be stable.
Relevant answer
Answer
Nice signal. The decrease is probably due to bleaching, as both answers said: continuous, and more exponential than linear. Nothing dramatic; you can either decrease the illumination power to reduce it or detrend it post hoc with your software of choice. The fact that it comes back up on the blue trace is a bit strange, but probably due to the way you calculate your ΔF/F0.
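For the post-hoc detrending option, a minimal sketch fitting a mono-exponential bleaching model and dividing it out (hypothetical trace; on real data, fit only to baseline segments so stimulus transients do not bias the fit):

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical fluorescence trace: 1200 frames of baseline with bleaching
t = np.arange(1200, dtype=float)
trace = 1000 * np.exp(-t / 800) + 20 * np.random.default_rng(2).standard_normal(t.size)

def bleach(t, a, tau, c):
    return a * np.exp(-t / tau) + c

popt, _ = curve_fit(bleach, t, trace, p0=(trace[0], 500.0, trace.min()))
f0 = bleach(t, *popt)        # estimated bleaching baseline
dff = (trace - f0) / f0      # detrended dF/F0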
  • asked a question related to Baseline
Question
4 answers
Hi
I am developing a method for computing fuzzy similarity in WordNet. Previous work mainly focused on the similarity of synsets (concepts).
I am searching for a standard baseline for comparison. My question is: what is the standard baseline for computing the similarity of words in WordNet?
Thank you.
Relevant answer
Answer
You might also try to take a look at this article:
We used a slightly more sophisticated approach than the one Olga Seminck indicated. However, I would simply try to combine both approaches.
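As a concrete starting point, the classic synset-based measures in NLTK are easy to run, and a common word-level baseline takes the maximum similarity over all synset pairs; a minimal sketch (requires nltk and the WordNet data):

from nltk.corpus import wordnet as wn  # run nltk.download('wordnet') once

def max_similarity(word1, word2, metric):
    # Word-level baseline: max of the synset-pair similarities
    scores = [metric(s1, s2) for s1 in wn.synsets(word1) for s2 in wn.synsets(word2)]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else None

print(max_similarity('car', 'automobile', wn.path_similarity))  # 1.0 (shared synset)
print(max_similarity('car', 'bicycle', wn.wup_similarity))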
  • asked a question related to Baseline
Question
3 answers
I want to find each peak's enthalpy change (area under the curve). Can I make the baseline as in the photos below?
Relevant answer
Answer
The question is about the baseline shown in the figures attached to Ong Hui Ling's question. My answer is not to use such a baseline, and I have given a suggestion above.
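For what it's worth, a minimal sketch of integrating a peak above a straight-line baseline in Python (the data, peak limits, and heating rate are hypothetical; a sigmoidal baseline may suit overlapping transitions better):

import numpy as np

# Hypothetical DSC trace: temperature (deg C) and heat flow (W/g)
x = np.linspace(100.0, 160.0, 601)
y = 0.02 + 0.5 * np.exp(-((x - 130) / 5) ** 2)

# Straight baseline between assumed peak onset (115 C) and end (145 C)
i1, i2 = np.argmin(np.abs(x - 115)), np.argmin(np.abs(x - 145))
baseline = np.interp(x, [x[i1], x[i2]], [y[i1], y[i2]])

# Area above the baseline; dividing by the heating rate converts to J/g
area = np.trapz((y - baseline)[i1:i2 + 1], x[i1:i2 + 1])   # W*K/g
enthalpy = area / (10.0 / 60.0)                            # assumes 10 K/min heating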
  • asked a question related to Baseline
Question
5 answers
Over the last couple of days, my colleagues have noticed a significant amount of baseline drift in their chromatograms from one HPLC (see attached). Across columns, methods, and samples it shows a consistent type of baseline shift. I've monitored the pressure and it is what I expect it to be/stable. Additionally, the washes (see attached) show a sharp increase, a plateau, and then a drop in all of them. Just a day or two prior, the baselines were perfectly fine. I am not sure what is causing this sudden issue nor how to resolve it. Please advise. Thank you.
Relevant answer
Answer
Also, I see that you are running at 210 nm. Such a low wavelength will show just about every change in the system (normal, as 210 nm is not selective for anything). Additionally, you have the software "Reference Wavelength" feature turned ON. Please turn that feature OFF and read the article below for more information on why it is so important not to have it ON, and seek professional training in HPLC operation (especially the DAD) before you run samples.
  • asked a question related to Baseline
Question
3 answers
I am conducting a meta-analysis of continuous data using RevMan 5.4.
Included studies express their results as mean, SD at baseline and end of study for intervention and control arms. With these data I can impute change from baseline which I will use to perform a meta-analysis of change scores.
However, in a few studies, due to patients lost to follow-up, the number of patients at the end of the trial is lower than at baseline. RevMan 5.4 requires mean change from baseline, SD of change and sample size to perform the meta-analysis. Which number of patients should I use to perform the meta-analysis (the sample size at baseline or the follow-up)? Or would it be better to exclude these studies?
Thanks in advance for your help.
Relevant answer
Answer
Studies with bigger sample sizes should have higher weights; that is the main principle here. In practice, the usual choice for change-score analyses is the number of patients actually analysed, i.e., those contributing both a baseline and a follow-up measurement (the follow-up sample size), rather than the number randomised, so there is no need to exclude these studies.
  • asked a question related to Baseline
Question
8 answers
This is what the baseline looks like running 30% acetonitrile/70% water, using a UV/VIS detector set at 195 nm. I thought it might be air bubbles, but running 100% water gives a perfectly flat baseline. I degassed the mobile phases, and primed the lines several times. I also ran isopropanol through the system for a while, but that didn't help. The pressure of the system is consistent (around 750 PSI), and I cannot find any leaks. Could this be a solvent mixing issue? I know that acetonitrile absorbs at this wavelength, but I've never seen it cause this sort of issue. I'd really appreciate it if anyone could provide some suggestions, thank you!
Relevant answer
Answer
A few comments:
  • Running at 195nm is NOT advisable. EVERYTHING will absorb at that wavelength providing NO selectivity for the sample under analysis. Consider a different detection method that provides selectivity.
  • Your chromatogram shows ~ 0.5 AU of signal and noise. The detector is not balanced and the mobile phase is NOT uniform in composition.
  • The cyclical peak shapes imply a few possible causes: poor-quality degassing of the mobile phase (the degasser may be broken) resulting in pump cavitation and/or sticking check valve(s); water and/or ACN that is not pure (not HPLC grade); or a problem in one of the mixing channels of your system. Running just one solvent alone usually gives a good result, and the problem reappears when mixing the two. If it is a mixing problem (and it may or may not be), try PRE-MIXING THE TWO LIQUIDS together in one bottle, then run the mixture through one channel and monitor the signal output. Is the baseline flat? Does this fix the problem? If so, you have a mixing problem; if not, it might be due to large amounts of impurities in one or both of the liquids used.
  • Most importantly of all, you appear to be using HPLC without any formal training. It takes the average scientist 5 years of professional work just to achieve a basic level of skill in HPLC. Please have someone trained in HPLC locally help you. This technique is best utilized with the help of an experienced chromatographer.
  • asked a question related to Baseline
Question
10 answers
I'm looking for software able to process HPLC-UV chromatograms (especially baseline correction and peak alignment) in order to feed the data into statistical analysis.
Relevant answer
Answer
You could also try the package I am developing in R: https://github.com/ethanbass/chromatographR. It doesn't have a parser for chromeleon files, but I could probably write one for you if it's really just a text file?
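If you end up rolling your own preprocessing, asymmetric least squares (Eilers & Boelens, 2005) is a widely used baseline-correction algorithm that is short enough to implement directly; a minimal sketch (lam and p need tuning per dataset):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, niter=10):
    # Asymmetric least squares: a smooth baseline that hugs the signal from below
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    w = np.ones(L)
    for _ in range(niter):
        W = sparse.spdiags(w, 0, L, L)
        Z = (W + lam * D @ D.T).tocsc()
        z = spsolve(Z, w * y)
        w = p * (y > z) + (1 - p) * (y < z)  # down-weight points above the baseline
    return z

# Usage: corrected = chromatogram - als_baseline(chromatogram)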
  • asked a question related to Baseline
Question
10 answers
Dear colleagues
I have a question regarding HPLC baseline issues. Please see attached file and your suggestions would be greatly appreciated.
Xiao
Relevant answer
Answer
As it appears you are using HPLC for the first time, and it takes many years of professional work just to achieve a basic level of skill with HPLC, please contact someone local at your school to help you. As a student, you should not be doing this alone: having an experienced chromatographer on-site to help you set up a method and run the analysis will help you be successful. The manual that came with your HPLC column also has a lot of useful information regarding use, cleaning/washing/regeneration/storage (do not store it in water!). Be sure to contact Bio-Rad if you have column questions.
Also, the fact that your column temperature is set to 80 °C and your RID to 55 °C will result in a slow temperature gradient forming, which may cause long-term baseline drift. Since temperature is used as a variable in method development (for your specific application) and temperature changes take a VERY LONG TIME to equilibrate on-column, be sure to insulate all capillary lines leading from the column to the detector (see the article linked below) and please allow for long equilibration times.
  • asked a question related to Baseline
Question
1 answer
Hello,
Recently I have been getting an extremely unstable baseline throughout my runs. I allow the HPLC to equilibrate for about an hour by running mobile phase at the flow rate we use for our method. I also purge the RID by running mobile phase at the same flow rate through the reference channel, opening it via the software. The UV baseline is also unstable.
I have also noticed a problem where the elution times vary greatly between samples within the same sequence; the difference can be as high as 7 minutes. I switched columns and am still seeing the same problems.
Does anyone know how to fix these issues?
Flow rate - 0.6 mL/min
Mobile phase - 5 mM H2SO4
Column and RID temperature - 55 °C
Our samples are mixtures of sugars diluted 1:5 in 16 mM NaOH.
I attached an image of what the peaks look like.
Relevant answer
Answer
Hi Chris,
I am facing similar issues with baseline stability and varying retention times. What I understand is that the RID is highly sensitive to temperature, and it takes a good amount of time for the baseline to stabilise. I am operating both the column oven and the detector at 40 degrees.
  • asked a question related to Baseline
Question
8 answers
N/A
Relevant answer
Answer
"What test would offer insight as to group x condition?"
Only a parametric model can do this. As soon as you use ranks (i.e., some kind of "non-parametric analysis"), an interaction is not meaningfully interpretable.
There are more important things to consider when analysing an interaction: it makes a difference whether you assume that effects are additive or multiplicative. A meaningful interpretation also requires that the observed interaction is not due to ceiling or floor effects.
It's easy to do "some test" and to get "some result", but it is tricky to get a meaningful interpretation. I suggest to collaborate with a statistician.
  • asked a question related to Baseline
Question
2 answers
Hello everyone,
I have a question regarding event/task-related EEG data. Are there any indications that baseline power increases from trial to trial? For example, in a fine motor task (e.g., finger movement), does baseline power slowly accumulate from trial to trial through the task (baseline power in trial 1 < trial 100)?
I am aware that the time between trials should be long enough for a return to baseline. However, if power increases slowly, could it be that an effect is only seen after 70-80 trials?
Does anyone have experience or know of studies that can provide guidance on this issue?
Regards
Niko
Relevant answer
Given that the EEG signal is non-stationary, I'd say that it is perfectly possible for the signal to 'drift' across trials. In fact, I have at some point recorded data with such a behaviour. In any case, there are drifts that can be removed with the appropriate type of filtering (see, for example, this link: https://benediktehinger.de/blog/science/electrode-drift-in-eeg/). I would recommend normalizing your task data by the power at baseline, so that, if such a drift is indeed present, you can consider it in both task and baseline.
You could also always compute the baseline power and check if it, indeed, increases over time between your trials.
Does that make sense?
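A minimal sketch of that last check, assuming the baseline segments are already epoched into a NumPy array:

import numpy as np

# Hypothetical baseline epochs: (n_trials, n_samples)
baseline = np.random.default_rng(0).standard_normal((100, 256))

# Per-trial baseline power (mean squared amplitude), then a linear trend over trials
power = np.mean(baseline ** 2, axis=1)
slope, intercept = np.polyfit(np.arange(len(power)), power, deg=1)
print(f"baseline power slope across trials: {slope:.4g}")  # positive = accumulating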
  • asked a question related to Baseline
Question
4 answers
I have prepared an article that is a baseline study, but I do not understand to which section I should submit it in Marine Pollution Bulletin (Baseline or regular). Moreover, I already have two baseline-study articles under review; can I submit another paper to Baseline? Is there any difference in quality between the two kinds of publication (Baseline and regular) in Marine Pollution Bulletin? I want to submit the article as corresponding author and would appreciate a good suggestion. Thank you in advance.
Relevant answer
Answer
"The objective of BASELINE is to publish short communications on different aspects of pollution of the marine environment. Only those
papers which clearly identify the quality of the data will be considered for publication. Contributors to Baseline should refer to
‘Baseline—The New Format and Content’ (Mar. Pollut. Bull. 60, 1–2)."
  • asked a question related to Baseline
Question
10 answers
Hello. I am using reversed-phase HPLC with a C18 ODS column and a PDA detector. My mobile phase is 15 mM CH3COONa with 6% v/v CH3CN (pH 5.5). The baseline starts at zero, gradually drops to negative values, and never stabilizes. I have run multiple washes with methanol:water (50:50) but still face the same problem. I was wondering what could be the reason for this drop?
Thank you in advance!
Relevant answer
Answer
Typically, a drop in absorbance is due to the change in RI or UV spectrum particularly between the mobile phase and extraction solvent and is apparent in the solvent front. This is different. I would ignore it because your sensitivity is set too high. I would be concerned if it was ~0.1 mAU or 10X higher than what you have in your chromatogram. Your molecule of interest will be at least 100 mAU or 50,000-100,000 peak area units.
  • asked a question related to Baseline
Question
4 answers
Hello,
I am conducting an intervention study with two groups (control and experimental). The study subjects were monitored at baseline and then at the endline of the study. I would like to:
1. Compare the data at baseline between the two groups
2. Compare the data at endline between the two groups
3. Show the effect of the intervention on the parameters of the study subjects
4. Check for differences within the groups eg control at baseline compared to control at endline
Kindly advise on the appropriate statistical tests I should perform
Thanks
Relevant answer
Answer
Jane Mbijiwe: For comparisons between the two groups at baseline and at endline (points 1 and 2), the Wilcoxon rank-sum (Mann-Whitney U) test is a suitable non-parametric option. For the within-group comparisons of baseline versus endline (point 4), use the paired Wilcoxon signed-rank test, since the same subjects are measured twice. Comparing the change scores between groups (point 3) will show you which group benefited most from the intervention; a sketch follows below.
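A minimal Python/SciPy sketch of those comparisons, with hypothetical data:

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
control_base, control_end = rng.normal(10, 2, 30), rng.normal(10, 2, 30)
treat_base, treat_end = rng.normal(10, 2, 30), rng.normal(12, 2, 30)

# 1-2) Between groups at baseline or endline: Wilcoxon rank-sum (Mann-Whitney U)
print(stats.mannwhitneyu(control_base, treat_base))

# 4) Within a group, baseline vs endline (paired): Wilcoxon signed-rank
print(stats.wilcoxon(treat_base, treat_end))

# 3) Intervention effect: compare the change scores between groups
print(stats.mannwhitneyu(control_end - control_base, treat_end - treat_base))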
  • asked a question related to Baseline
Question
3 answers
Live cell imaging using the IncucyteZoom is the best way to measure the effect of drugs on the rate of cell proliferation. I noticed that no matter how accurately we try to count the cells using automated cell counters (Countess), the data points do not always start at the same value.
If we normalise the data to time point 0, the gradient of the slopes changes, which is not ideal.
One researcher suggested I seed cells at different densities and choose the densities that start at the same point. I believe this is not correct, because it just adds error on top of that of the Countess, and it additionally assumes all cells have the same size and shape.
I have been subtracting the baseline value at time point 0 from all time points of the respective condition, so that all curves start at 0. Reason: this subtraction does not alter any slopes.
The only problem is when cells reach 100% confluence and plateau. For example, if confluence started at 20% at time point 0, reached 100% at 96 h, and remained at plateau until day 7, subtracting the baseline will show the cells plateauing at 80% from 96 h onward. I think plotting only the log-phase data and subtracting the baseline is the best option.
Can anyone please comment on this and help me out.
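For concreteness, a minimal sketch contrasting the two normalizations discussed above, with hypothetical confluence values:

import numpy as np

# Hypothetical confluence (%) time courses: rows = conditions, cols = time points
t = np.array([0, 24, 48, 72, 96, 120])
confluence = np.array([
    [20, 35, 60, 90, 100, 100],   # condition A
    [25, 40, 66, 95, 100, 100],   # condition B (seeded slightly denser)
])

# Subtractive normalization: curves start at 0, slopes unchanged, plateau distorted
sub_norm = confluence - confluence[:, [0]]

# Fold-change normalization: divides by t0, preserves relative growth rates
fold_norm = confluence / confluence[:, [0]]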
Relevant answer
Answer
David J Walker, is it possible to cite the publication in which you found the information about normalisation, please?
Thanks
  • asked a question related to Baseline
Question
2 answers
Hi all
I am having trouble getting LTP in hippocampal slices (CA3-CA1) from 9-16-week-old WT mice.
I can get LTP fine in aCSF containing 1 µM gabazine (I cut CA3 to prevent epileptic activity), but in plain aCSF I don't see any LTP. When I record using gabazine I often see spiking after LTP induction, and I'm worried these spikes are skewing my data, so it would be good if I could record LTP without gabazine.
In both conditions I get stable baselines and the slices look healthy. For the baseline I use 40% of the maximal response. I extract in choline chloride aCSF (I have also tried slicing in aCSF containing sucrose). After slicing, the slices are maintained in standard aCSF and left to rest for 1 hour before being transferred to the rig at 30 degrees. I induce LTP using a theta-burst stimulus.
Please can anyone help explain why I cannot get LTP without Gabazine? And please let me know if you have any suggestions of what I could try to get LTP.
Thanks in advance
Relevant answer
Answer
Hi Lauren. For me, for some reason, LTP is inducible in only about 20% of slices when the baseline stimulus is set at 50% of the maximal first PS response. But if instead I use ~60-70% of that amplitude, the success rate increases to 75% (with no PS during the LTP recording).
I also think it is better to insert the stimulating electrode deeper (penetrating more than half the slice thickness), so that more fibers are stimulated, but I have not tried this yet.
By the way, since I'm answering quite late and have similar problems (I wish to have a 100% success rate at 50% of max stimulation), could you please tell me whether you solved the issue? If so, how?
  • asked a question related to Baseline
Question
3 answers
Hello Everyone,
I am having trouble with baseline removal when creating an epoch containing 1 trial. After creating the epoch, a window pops up that reads "remove mean of each data channel", rather than the window that allows you to specify the time window for baseline removal. I am also unable to later remove baseline by going to "tools" --> "remove baseline".  How may I baseline correct this segment using a window of -200 to 0 (rather than the default -1000 to 0)? Is there a way to change the default settings of the baseline removal in eeglab?
Thank you!
Best,
Ariel
Relevant answer
Answer
Thank you Joao and Nik for your replies.
I was able to baseline correct from the command line/MATLAB workspace, as you suggested, Nik. I used the following code, which applies the baseline correction and saves a new dataset:
EEG = pop_rmbase(EEG, [-200 0]);  % remove the -200 to 0 ms baseline
EEG = pop_saveset(EEG, 'filename', 'PSVTR08_hard_baseline_corrected.set', 'filepath', 'C:\Users\s15aa\Downloads\eeglab_current\eeglab2021.1\EEG Epochs and Baseline Correction');
  • asked a question related to Baseline
Question
3 answers
The input data is between 2006 and 2020, and the number of years I chose is 100. So the generated data gives 100 years of values, from year 1 to year 100, without mentioning any dates.
Relevant answer
Answer
Hi dear Sara Guemar,
The first and most important thing is that LARS-WG works as a stochastic tool, so it is important that you understand how it works.
First, a key point: you mentioned that you chose 2006-2020 as your baseline. If you are going to publish your research as a scientific article, you should choose the standard period of 1980-2010; LARS-WG is calibrated for that period.
Second, you asked what the start of the 100-year period is. After choosing a GCM model and scenario, you must select a period. If the selected period is, for example, 2041-2060, it does not matter whether you choose 100 years or 10 years; the model will generate data for that period. My explanation is as follows.
As I said, LARS-WG is a stochastic weather generator, which means its output is not a time series: the numbers for 2042 do not come after those for 2041. Thus only averages (or standard deviations) of the series can be used for analysis. LARS-WG is not a prediction tool but a projection tool.
You can test this yourself: for the period 2041-2060, generate data once for 20 years and once for 100 years, then compare the average temperature and precipitation of the two series. You will see that the means agree to a very good approximation.
Finally, you should analyze the results on a monthly basis, since LARS-WG does not support daily changes unless you manually create scenarios based on daily GCMs.
  • asked a question related to Baseline
Question
2 answers
Hello,
I was wondering what are some more concrete examples of baseline imbalances? Thanks
Relevant answer
Answer
To understand baseline imbalance, you first need an idea of baseline characteristics: the demographic, medical, and prognostic variables measured in clinical trial participants before the intervention. When the creation of the intervention groups goes wrong, such that participants in different arms do not correspond in prognosis, this is called baseline imbalance.
Hope that helps.
  • asked a question related to Baseline
Question
5 answers
Hi everyone,
I hope you have had a great day so far!
Well, I wonder how I can run a mixed-effects analysis in Stata with the following features:
Research question: What baseline variables predict my dependent variable over time?
Dependent variable: discrete --> Poisson distribution
Independent variables: both categorical and continuous variables
The following model is what I have planned so far, but I don't know how to restrict the model to the baseline values of my IVs.
xtmepoisson DV ID##time || participant_ID:time, irr
My question is: What do I need to do to consider only the baseline data from my IVs?
Thank you in advance and happy holidays!
Relevant answer
Answer
Dear Lisandra Almeida,
If I can be of any help, my email is miregech897@gmail.com.
Good luck!
  • asked a question related to Baseline
Question
3 answers
I am investigating the efficacy of Gestalt therapy with adolescents engaging in self-harm, using a single case experimental design. I have administered some tools to measure the level of self-harm, anxiety and depression at baseline, after 15 sessions and after 30 sessions. What statistical measures would you suggest I use to show the effect of the treatment besides visual analysis?
Relevant answer
Answer
Paired-sample t-tests could be a simple approach to look at differences in a single variable from pre-treatment to post-treatment.
  • asked a question related to Baseline
Question
7 answers
Hi there. My research aim is to reduce latency in a fog environment, and I have a baseline I would like to compare my work to. In the baseline paper, the proposed method was compared to a method called "no offloading", and it reduced latency by 40%. In my work, I compared my proposed method to the same "no offloading" method and reduced latency by 80%. My question is: do I have to implement the baseline in my simulation in order to officially compare my work to it? The problem is that the baseline method considers factors I do not (such as deadlines), and the parameter values used in the baseline differ from mine.
Relevant answer
Answer
Yes, you can compare your findings with someone else's findings to the same research question, even though you used different methods. Findings should be objective, not method dependent. Using the same method is a test of their replicability.
  • asked a question related to Baseline
Question
6 answers
Can I use the mean difference (difference in means between endpoint and baseline) and the associated standard error to calculate a standardized mean difference (SMD) and its 95% CI? (I also have an exact p-value and the sample size.)
For context: this is for a meta-analysis; all of my other studies provided the means and SDs needed to get the SMD.
n=239, mean difference: −1.2 [SE 1.48]; p=0.4154
either formulas or online calculators or references are welcome :)
Thank you
Relevant answer
Answer
OK, I assume you have baseline and endpoint measurements for each of the 239 subjects. I was lazy and used the Comprehensive Meta-Analysis (CMA) software to do the calculations for me. There are two possible ways you might calculate the standardized mean difference (SMD). The first one (see Test1.jpg below) is probably the best procedure. It makes use of your paired t-test p-value. You do not have to make assumptions about the correlation between the baseline and endpoint values like I used in Test2.jpg. In Test2.jpg (rows 2 and 3), I used assumptions of r=0.9 and r=0. These are extreme correlations but it is nice to see they are similar to the SMD calculated using the p value (i.e., that in Test1.jpg). If you want a reference for the CMA software, use the book by Michael Borenstein et al. (i.e., Introduction to Meta-Analysis, Publisher: Wiley [2nd edition], 2021). SD of Difference = SE x square root of 239
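To make the final step concrete, a minimal sketch of the SD-of-difference arithmetic (this standardizes by the SD of the change scores; CMA can also standardize using the pre/post correlation, which is what the Test2 rows explore):

import math

n = 239
mean_diff = -1.2   # endpoint minus baseline
se_diff = 1.48     # standard error of the mean difference

sd_diff = se_diff * math.sqrt(n)   # SD of the change scores, ~22.9
smd = mean_diff / sd_diff          # ~ -0.052
print(f"SD of difference = {sd_diff:.2f}, SMD = {smd:.3f}")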