Questions related to Baseline
Hello Mass Spec Wizards,
I recently started working on QExactive orbitrap for quantification of small molecules.
Analyte sensitivity in the current PRM method is low, and the baseline level in samples is quite high (~3×10^7). Has anyone faced similar issues?
Also, how sensitive is the Q Exactive Plus compared to the QE?
Hi! Can I ask a favor of everyone? I really need to know the required sample size, and any citations will be much appreciated.
Hello, we are using a Thermo TSQ Fortis triple quadrupole mass spectrometer with a Thermo UltiMate 3000 HPLC system.
We are having a lot of issues with the baseline: it suddenly starts rising and makes the chromatogram look like a thick black line with very high-intensity signals (around 10^6).
Sometimes it comes back down quickly, over two injections (15-30 minutes). Other times it stays up for a long time, which has us dumbfounded, and we don't know what else to try. If we pause the MS acquisition it drops easily and stays normal; other times it shoots back up without even starting an injection.
Thanks in advance
I want to conduct a correlation analysis, to check whether changes in learning outcomes correlate with the number of lessons students attend. Learner outcomes are measured at baseline and endline, and the measurement is ordinal. To be specific, students get assessed and assigned to one of 5 learning categories (for example for literacy, this would be Beginner, Letter, Word, Sentence, or Paragraph). The number of lessons students attended is measured quantitatively.
Since we are interested in the changes in learning outcomes, I was initially planning to calculate a difference score between endline and baseline and correlate that with the number of lessons attended. However, since learner outcomes are measured ordinally, a simple difference score does not seem to make sense. What would be the best way to compute a correlation between changes in learner outcomes (between baseline and endline) and the number of lessons students attend?
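For concreteness, here is a rough sketch (plain Python, made-up data) of the change-score idea I was considering, using a rank (Spearman) correlation since the outcome is ordinal; I am not sure this is valid, which is exactly my question:

```python
from statistics import mean

def rank(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

# Made-up data: literacy categories coded 1-5 (Beginner..Paragraph)
baseline = [1, 2, 2, 3, 1, 4]
endline  = [2, 3, 2, 5, 3, 4]
lessons  = [10, 15, 4, 30, 22, 8]
change   = [e - b for e, b in zip(endline, baseline)]
rho = spearman(lessons, change)
```

The worry remains that differences between adjacent categories may not be comparable, so treating the coded change as a single number is itself an assumption.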
Thank you in advance for your responses!
I wanted to ask for some advice on EPSC detection using NeuroMatic in Igor Pro. I'm quite new to the software and I'm struggling to detect true events; the detection still includes a lot of noise.
A few things I struggle with are:
- It generally detects a lot of noise as events. In some recordings the noise level is a bit higher, and even when I increase the threshold, quite a lot of noise is still picked up. Do you use a threshold of x amplitude below baseline, or a multiple of the standard deviation? Or does template matching work better? What threshold do you use?
- It does not accurately place the onset of the event; often the onset is set too early, well before the event actually starts. I've played quite a bit with the detection parameters for onset/baseline but haven't found a good setting. Do you have any advice?
- I wonder if I should filter the data before starting event detection. Do people use a filter, and if so, what kind?
- Is there a way in the software to filter events by certain criteria and then visualize them? I currently copy the output to Excel, sort it, and exclude some events based on, for example, rise time, but it is then really hard to find those events back in the software to check whether what I'm excluding should actually be excluded. Does anyone know a way to sort events into the different Sets based on a parameter being higher or lower than some value x?
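For reference, this is roughly the SD-based threshold logic I have in mind (a plain Python sketch with synthetic data, not NeuroMatic code; the baseline window, 4-SD threshold, and negative-going events are all assumptions):

```python
from statistics import mean, stdev
import random

def detect_events(trace, baseline_n=200, n_sd=4):
    """Flag sample indices where the signal drops more than n_sd
    standard deviations below the baseline mean (negative-going EPSCs).
    baseline_n samples at the start are assumed to be event-free."""
    base = trace[:baseline_n]
    thresh = mean(base) - n_sd * stdev(base)
    events, below = [], False
    for i, v in enumerate(trace):
        if v < thresh and not below:   # downward threshold crossing = onset
            events.append(i)
            below = True
        elif v >= thresh:
            below = False
    return events

# Synthetic trace: Gaussian noise with one downward deflection at sample 300
random.seed(0)
trace = [random.gauss(0, 1) for _ in range(500)]
for i in range(300, 310):
    trace[i] -= 20.0
onsets = detect_events(trace)
```

A simple crossing detector like this will obviously still catch noise when the SD estimate is off, which is why I am asking what threshold conventions people actually use.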
I use the software for data from in vitro cultured neurons and brain slices. It is especially a struggle with cultured neurons, where our noise level is usually a bit higher.
It's a lot of questions, but I hope someone can give me some advice on how to perform this analysis with NeuroMatic in Igor Pro.
I am performing DSC measurements (PerkinElmer 8500) on the same materials, with the same mass and the same program. Following calibration, I get a different baseline every day; it was flat only on the first day. Any idea what's happening? Does the instrument need to be calibrated every day?
I installed a Superdex 75 pg column myself. The conductance baseline was flat at the start of equilibration and stayed flat until the sample had eluted to about 50% of the run; over the last 50% it jittered up and down and was unstable.
I ran the same qPCR assay (SYBR Green) on a large number of samples that didn't fit on one plate (over 20 plates altogether) and want to compare all of them against the same standard curve, which was included on one of those plates. I did include a few positive controls on each plate to check variability between plates.
Now I am aware that I need to set parameters in the same way for all the plates in order to compare them. The threshold will be a fixed value, but I am wondering about the baseline. Ideally, it would be fixed at a certain intensity for all plates. That is not possible in the software I'm using (Eppendorf MasterCycler ep realplex, v. 2.2), so I can either use automatic baseline or manual baseline with specifying start and end cycle.
In a few cases, automatic baseline gives me much better results than manual baseline (with manual setting the curves are not parallel, no matter which cycle range I choose provided I stay below the first amplification; while with automatic baseline the curves are nicely parallel).
I think automatic baseline is not a good option for comparison, but I'm still wondering what is the best thing to do. Even if I choose manual baseline and specify start and end cycle, I think it won't be exactly comparable between plates, but that's probably the best option, just to keep in mind that some results might be of a slightly lower quality because of that.
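To make explicit what I mean by a fixed manual baseline, here is a sketch (plain Python, made-up numbers; the cycle window 3-15 is arbitrary, and real software applies this per well):

```python
def baseline_correct(fluorescence, start_cycle=3, end_cycle=15):
    """'Manual baseline': subtract the mean fluorescence over a fixed
    cycle window (1-based, inclusive) from the whole curve, so the
    same rule can be applied to every plate."""
    window = fluorescence[start_cycle - 1:end_cycle]
    base = sum(window) / len(window)
    return [f - base for f in fluorescence]

# Hypothetical 40-cycle curve: flat background, then exponential amplification
curve = [100.0] * 20 + [100.0 * 2 ** (i / 3) for i in range(1, 21)]
corrected = baseline_correct(curve)
```

My concern is that the automatic baseline presumably picks a different window per curve, which is what makes me doubt its comparability across plates.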
I am a beginner in this field and have read in papers that a stable baseline current has to be obtained, that the MFC should start operation at open circuit potential (OCP), etc. How do I check the OCP?
We are getting UV-Vis spectra that are too similar to each other for very different molecules. We need a procedure, or help with building an appropriate approach to this characterization. We are working with fullerenol, sulfo-SMCC, and an antibody; the buffer is PBS-EDTA with some DI water. What should our baseline be? Is water fine, or should we use PBS throughout?
As for the reference, is an empty cuvette appropriate, or should we put PBS in the reference as well?
While processing results curves, I frequently need to correct the baseline before peak fitting and intensity comparison. This applies to XRD, XPS, Raman, and many other kinds of data. Can anyone give some advice on the general principles of baseline correction, and on specific rules for each type of measurement? For example, I have heard that the FWHM cannot exceed a certain value in eV when fitting XPS data.
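As an example of the kind of general rule I mean: the simplest correction I know is subtracting a straight line drawn between two peak-free anchor points (plain Python sketch, synthetic data; real spectra usually need polynomial or Shirley-type backgrounds, which is part of my question):

```python
def linear_baseline(x, y, i_left, i_right):
    """Subtract the straight line through two peak-free anchor points
    (indices i_left and i_right) from the whole spectrum."""
    slope = (y[i_right] - y[i_left]) / (x[i_right] - x[i_left])
    return [yi - (y[i_left] + slope * (xi - x[i_left]))
            for xi, yi in zip(x, y)]

# Synthetic spectrum: sloping background plus a triangular peak at x = 50
x = list(range(101))
background = [0.5 * xi + 10 for xi in x]
peak = [max(0, 30 - abs(xi - 50)) for xi in x]
y = [b + p for b, p in zip(background, peak)]
flat = linear_baseline(x, y, 0, 100)   # recovers the bare peak here
```

The choice of anchor points is obviously subjective, which is why I am asking whether there are accepted rules per technique.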
I would like to measure baseline anxiety in adolescent (34 to 50 days old) C57BL/6J mice. Is there a best time of day during the light cycle to perform these experiments?
I am using an Agilent Infinity II ELSD (G4260B) to develop a new analysis method and am facing the following problems.
1. Large baseline offset: when the system is turned on (parameters below), the signal is about 700 mV, even though I have spent a few days flushing the system as recommended in the Agilent manual.
- Column: XBridge C18, 150 x 4.6 mm, 3.5 µm, 25 °C
- Mobile phase: 100% Methanol (LC grade)
- Flow Rate: 1.0 mL/min
- Evaporator temperature: 60 °C
- Nebulizer temperature: 25 °C
- Evaporator gas flow: 1.6 SLM (nitrogen)
2. Baseline increase when using water in the mobile phase: the parameters were the same as above, except that the mobile phase was methanol-water. Whenever the percentage of water was higher than 20%, e.g. methanol-water (6:4, isocratic), the baseline began to increase steadily (approximately 3.5 mV/min, or 210 mV/h). After a few hours the baseline flattened out because the signal had reached the upper limit (1250 mV). If the mobile phase was changed back to 100% methanol, the baseline slowly decreased back to its previous value.
Has anyone encountered these problems or have any experience with them? Please help!
I conducted an intervention study with two groups (treatment = 1, control = 0) and three time points (baseline = T0, post-intervention = T1, follow-up = T2). My outcome variables are quality of life and anxiety level, both measured on continuous scales. As my outcome variables did not follow a normal distribution, I am using GEE. I would like to know: do I need to adjust for the baseline values of the outcome variables? If yes, how should I interpret the output tables? If anyone has an example of a similar study, I would be grateful to read it. I appreciate your support.
I am doing a mixed-effects linear model in Stata and need your advice. The study is an RCT with 4 groups and 3 time points. We would like to test a group × time interaction to see the change in score over the treatment period. However, the main outcome measure differed significantly between groups at baseline (P = 0.001). My question is: how do I adjust for the baseline measure in a mixed-effects model (long-format data) in Stata?
I performed a PCA on an ASTER image in ENVI 5.3. I displayed the statistics and got several statistical tables (baseline statistics, covariance, correlation between bands, ...). However, it does not show the correlation matrix between the PCs and the bands of the input image. How can I display this specific table?
Any information or help is very appreciated.
I need to run 27Al NMR on a liquid sample, and I want to know whether quartz NMR tubes have a lower background than glass NMR tubes.
If anyone knows how to reduce the background signal, please let me know.
Currently I'm struggling to choose between various analysis options, ranging from a repeated-measures design to a four-way ANOVA, ANCOVA, or a moderation analysis with Hayes's PROCESS macro.
Some background information: my main research question is: to what extent can subgroup membership predict changes in X scores six months after participation in an intervention, and is this effect moderated by Y (controlling for Z)?
I am not sure whether I should work with a repeated-measures design or with a change score for AUDIT (calculated as T1 − T0). What I have read is that the difference-score ANOVA (1) tests whether the change from T0 to T1 is equal across all groups, whereas ANCOVA (2) tests whether the T1 scores are equal across groups while controlling for their T0 scores. This is a small but potentially impactful distinction, made famous by Lord's paradox (https://m-clark.github.io/docs/lord/index.html; "ANCOVA Versus CHANGE From Baseline in Nonrandomized Studies: The Difference", Multivariate Behavioral Research, Vol. 48, No. 6, tandfonline.com). It has been stated that if your groups are randomly assigned experimental groups, both methods are equivalent and you can choose whichever you prefer; if they are naturally occurring groups, the literature indeed suggests using the difference-score method.
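To make sure I understand the distinction, here is a toy sketch of the two estimates with one binary group (plain Python, fabricated data; a real analysis would of course use a proper regression package):

```python
from statistics import mean

def change_score_effect(pre, post, group):
    """(1) Difference-score analysis: mean change in group 1 minus group 0."""
    change = [b - a for a, b in zip(pre, post)]
    g1 = [c for c, g in zip(change, group) if g == 1]
    g0 = [c for c, g in zip(change, group) if g == 0]
    return mean(g1) - mean(g0)

def ancova_effect(pre, post, group):
    """(2) ANCOVA: group difference in post scores, adjusted using the
    pooled within-group regression slope of post on pre."""
    num = den = 0.0
    for g in (0, 1):
        xs = [a for a, gg in zip(pre, group) if gg == g]
        ys = [b for b, gg in zip(post, group) if gg == g]
        mx, my = mean(xs), mean(ys)
        num += sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den += sum((x - mx) ** 2 for x in xs)
    b_pre = num / den
    d_post = (mean([b for b, g in zip(post, group) if g == 1])
              - mean([b for b, g in zip(post, group) if g == 0]))
    d_pre = (mean([a for a, g in zip(pre, group) if g == 1])
             - mean([a for a, g in zip(pre, group) if g == 0]))
    return d_post - b_pre * d_pre

# Fabricated data: group 1 starts higher at baseline (like my latent classes)
group = [0, 0, 0, 0, 1, 1, 1, 1]
pre   = [0.0, 1.0, 2.0, 3.0, 2.0, 3.0, 4.0, 5.0]
post  = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
cs  = change_score_effect(pre, post, group)   # -> 0.0 for these data
adj = ancova_effect(pre, post, group)         # -> 1.0 for these data
```

With a baseline imbalance and a within-group slope different from 1, the two estimates diverge, which is exactly the Lord's-paradox situation I am worried about with my latent classes.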
Since the subgroups I'm working with are latent classes that do occur "naturally", I am wondering whether I should indeed go with change scores (despite the drawbacks described in the literature, e.g. the addition of measurement errors).
What is important to keep in mind is that my data (under both options, i.e. taking the AUDIT change score or AUDIT_T1 as DV) violate the assumptions of normality and homogeneity throughout, and I am not sure how best to deal with that in my situation. Hayes's PROCESS moderation macro seemed like a good solution, but whether I can use it depends on question 1 (because I would need to use the AUDIT change score for that).
All in all, I am unsure how to proceed. Thank you in advance for thinking along.
I have about 25 protein sequences of a lytic enzyme (variants) of bacterial origin. I would like to estimate the mutational distance of each of these sequences: selecting one of them as a baseline, I want to see whether the other sequences are its predecessors or successors, and estimate their mutational distance from the baseline. Can building a phylogenetic tree accomplish this? If yes, what is the procedure for building such a tree, and what tools should I use?
I'm performing LTD experiments in the mouse hippocampus (CA1) using coronal brain slices and recording fEPSP.
Mice are 30 days old and I'm using low-frequency stimulation (900 stimuli at 1 Hz) to induce long-lasting LTD. However, after LFS the fEPSP response comes back to baseline within 10-15 minutes of the conditioning. Any suggestions?
Below you can find my experimental setting:
- 400 um slice thickness;
- Ice cold cutting solution: Sucrose (195 mM), NaCl (10 mM), Glucose (10 mM), NaHCO3 (84 mM), KCl (74.55 mM), NaH2PO4 (1.25mM), Sodium Pyruvate (2mM), CaCl2 (0.5 mM), MgCl2 (7 mM);
- After the cutting procedure, slices are incubated in aCSF for 40 min at 35 °C, then kept at RT;
- Standard aCSF as recording solution (CaCl2 2 mM, MgCl2 1 mM), no blockers;
- 50% of the maximal fEPSP response as baseline for 10 minutes (pairs of stimuli every 20 s, ISI 50 ms);
- Borosilicate capillaries (filled with aCSF) as stimulating (< 1 MΩ resistance) and recording (around 2 MΩ resistance) electrodes;
- LTD protocol: 900 single stimuli at 1 Hz, or 900 paired stimuli at 1 Hz with ISI 50 ms.
Thanks in advance for helping me.
I've been having numerous issues achieving stable baseline recordings from the TA-CA1 synapse in juvenile (P12-P24) rat hippocampal slices. In addition, when applying drugs such as antagonists/inhibitors that should have no effect on baseline, I have been seeing gradual increases in synaptic transmission that differ from what other students in my lab have previously shown.
I cull my rats by cervical dislocation and slice in ice cold sucrose aCSF and allow the slices to rest for 1 h at RT in regular aCSF. I then stimulate and record from the TA-CA1 and my first slice usually takes 2-3 hours to stabilise. I oxygenate my aCSF for at least 40 minutes prior to putting a slice on the rig and I use a platinum harp to hold it down in the bath. My rig uses a gravity feed system and the flow rate is 2.5 mL/min. My recording electrode is filled with aCSF and I bleach the silver wire every few days.
When the slice eventually stabilises for 20 min, I add my drug which has been oxygenating for at least 10 min. I can often see strange increases caused by the drugs that have not previously been seen. I thought it might be down to changes in oxygenation but I’ve been keeping all of my solutions in similar sized cylinders and have increased my oxygen so that everything is saturated.
Can anyone advise me on how to improve this, and shed some light on why I am seeing such instability and such increases when applying drugs?
Any help would be much appreciated, as I feel as though I’ve exhausted all ideas at this point.
I am doing research to assess the impact of counseling on appetite level. Appetite was categorized as poor, moderate, or good, and was assessed at baseline, after 2 weeks, and after 6 weeks. What would be the most appropriate statistical test for analyzing this type of data?
I am conducting a meta-analysis with just RCTs and I am interested in the changes between baseline and outcome assessments as a response to an intervention for two study arms.
For most of my studies, I have only mean and SD values and nothing else, so I am not able to calculate the correlation and then find the SD of the changes. I could not find anything on how to derive the SD of the changes from the baseline and outcome means and SDs alone. There is a formula in the Cochrane handbook for combining the SDs of two subgroups into a common one (see attachment), but I am not sure this is a reasonable way to calculate the SD of the changes. If you have any information about this, or can suggest another way to solve my problem, I would be very pleased.
In a single-case experimental design of the ABA type (n = 3): can baseline data collection (A, minimum three sessions) be started for all three participants at the same time? Can the intervention (B) be started after establishing a baseline trend for participant 1 and then withdrawn (A), while the other two participants are still at baseline? If all three reach a stable baseline, can the mindfulness intervention (12 weeks) be given to each individual one-to-one simultaneously, on different days (e.g. Monday, Wednesday, Friday), or do I have to proceed one at a time (i.e. complete the A-B-A sequence fully for participant 1, then move on to participant 2 and then participant 3, starting fresh)? If data collection and intervention cannot be carried out simultaneously for all individuals in an ABA design, can a multiple-baseline design be used to overcome these shortcomings?
There are 4 variables in the study.
Kindly provide your valuable input.
Hi everybody, I need some help with an analysis of pupillometric data. It's the first time I've used pupillometry, so I hope I haven't made too many mistakes, or at least that they won't jeopardize the whole analysis.
I ran a between-subjects experiment in which the participants watched the same visual stimulus in three different conditions; during the stimulus presentation I recorded their eye-tracking data. I'm very interested in pupillometry but here's my problem:
- the software I use (iMotions) provides me with the aggregated and auto-scaled data for each of the three conditions: these data are apparently very clean and consistent (there has to be some kind of automated correction of blinks and artifacts).
- The software output basically has two columns: timestamp (in milliseconds, identical in the three conditions) and pupil diameter (in cm, strangely enough, but never mind…)
- I ran an ANOVA with condition as the factor and pupil diameter as the dependent variable, F(2, 5141) = 119.38, p < .001, ηp² = .044, (1-β) > .99. Bonferroni-corrected post hoc tests were all significant, p < .001 (see graph 1 in attachment).
- I got suspicious: the significance was too high and, above all, the three conditions do not start at the same point on the y axis (ycond1 = 0.47; ycond2 = 0.47; ycond3 = 0.50). I thought that maybe the significant difference could be due to this (let’s say the participants in Condition 3 had larger pupils for some reason); so, to baseline the data, I tried to let them start at the same point.
- To do this, I rearranged the columns so that they show not the pupil diameter but the pupil dilation: the new y value at, say, x = 1 (time frame = 132) is the old y value (pupil diameter) minus the y value at x = 0 (see attached screenshot). In the example, for condition 1, the new value for time frame 132 is: 0.48 − 0.47 = 0.01.
- I ran the same ANOVA and now the results appear to be more reliable, F(2, 5141) = 42.15, p < .001 ηp2 = .016, (1-β) > .99. There has been a drastic decrease in the F value and partial eta squared. Bonferroni corrected post-hoc analyses revealed that the condition 2 only was significantly different from the other ones (p < .001) (see graph 2).
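For clarity, the correction I applied is equivalent to this (plain Python sketch with made-up values; I gather a multi-sample pre-stimulus window is sometimes used instead of a single sample, but that is part of my question):

```python
def subtractive_baseline(series, baseline_n=1):
    """Convert pupil diameter into pupil dilation by subtracting the
    mean of the first baseline_n samples; baseline_n = 1 reproduces
    the single-sample correction I described above."""
    base = sum(series[:baseline_n]) / baseline_n
    return [v - base for v in series]

# Made-up condition 1 samples (diameter), as in my example
cond1 = [0.47, 0.48, 0.50, 0.49]
dilation = subtractive_baseline(cond1)   # first value becomes 0.0
```

Applied per condition, every curve then starts at zero on the y axis, which is what I wanted to achieve before re-running the ANOVA.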
…and now…question time!
- Would you say that this procedure is right? I guess there could be many errors in it, but I’m not an expert and I didn’t manage to take great advantage from reading many papers on this matter.
- Would you have trusted the ANOVA results at point 3? Or was I right to baseline those data?
- To baseline the data, I acted freely, following nothing but a rule of thumb that came to mind. Would you suggest other procedures?
When an ANOVA reveals a main effect on our primary dependent variable, which should we consider: the absolute value or the percent change from baseline?
In statistics courses we are usually taught to use the absolute value, but I don't understand why.
Thank you for your reply.
I am trying to use the Ratio Profiler plugin in ImageJ to analyze GCaMP data. We followed the instructions on the ImageJ wiki page and still cannot figure out why we aren't getting the data we'd expect from the plugin: the graph ImageJ produces after running the plugin is just one vertical line. We are confused. Has anyone else run into this issue, and if so, how did you resolve it?
Furthermore, if ImageJ does not work for analyzing our ratiometric data, how would you recommend analyzing GCaMP data? We are looking for a program to measure fluorescence peaks and baseline fluorescence.
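In case it helps to show what we are after, this is roughly the ΔF/F0-plus-peak-picking computation we want a program to do (plain Python sketch with a made-up trace; using the lowest 10% of samples as F0 is just one rough convention, not a recommendation):

```python
def dff(trace, baseline_frac=0.1):
    """Compute ΔF/F0, estimating F0 as the mean of the lowest
    baseline_frac of samples (a rough baseline-fluorescence estimate)."""
    s = sorted(trace)
    n = max(1, int(len(s) * baseline_frac))
    f0 = sum(s[:n]) / n
    return [(f - f0) / f0 for f in trace]

def find_peaks(sig, thresh):
    """Indices of local maxima that exceed thresh."""
    return [i for i in range(1, len(sig) - 1)
            if sig[i] > thresh and sig[i] >= sig[i - 1] and sig[i] > sig[i + 1]]

# Made-up fluorescence trace with one transient around sample 5
trace = [100, 101, 99, 100, 180, 250, 190, 120, 100, 101]
peaks = find_peaks(dff(trace), thresh=0.5)
```

We would still prefer an established package over hand-rolled code like this, hence the question.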
I am working on longitudinal data with 4 time points. I want to fit a model exploring how a binary outcome Y changes over time (no missingness at any time point) with exposures X, W, T, and Z (these exposures are also time-varying; some are fully observed and some are missing at certain points, so the panel is unbalanced). In the same model, I want to account for other exposure variables that do not change over time (constant baseline covariates).
I have tried GEE models, and have also explored some references that recommend generating a separate binary variable for each exposure at each time point, but this generates too many terms: 4 covariates × 4 time points = 16 new variables, plus the 6 constant covariates, and the model won't converge.
Can you please advise on how best to analyse this type of data, or point me to any reference or analysis program from which I can learn more?
The problem: We are interested in how ketamine modulates hedonic experiences such as chills. Participants listen to music while they are in the scanner both on ketamine and placebo. Participants also pre-rate all the songs outside the scanner a week before. For our analysis we need to know when participants experience their peak emotional moment during the scan session.
Our 'solutions' that did not work: 1. Live rating: people press a button while they experience their peak moment. Problem: we confound our neural signal of interest (pleasure) with motor activity. 2. Rating the music again after the scan session. Problem: even though we think the peak for most songs won't shift between the baseline rating and the in-scanner sessions, it might do so in the ketamine condition (ketamine makes everything more pleasurable, and the peak may even occur earlier). So if participants rate the music again afterwards, when most of ketamine's effects have already subsided, we might not find the same peak moments as during the scan session (plus, we doubt that participants could reliably remember when those peaks occurred during the scan session...).
3. Measuring physiological responses. Problem: yes, skin conductance does correlate with chills, but we do not have the equipment for that...
Does anyone of you have an idea how we could measure the peak moment during the scan session without majorly confounding our measurements? I would really appreciate your help!
I think I understand now that you ignore the baseline (pre-test) data when calculating Hedges' g, instead subtracting the intervention post-treatment mean from the control post-treatment mean and dividing the result by the pooled standard deviation of both samples at post-treatment. However, I am now wondering what that means for studies where, despite randomisation, there were significant differences in the outcome of interest at baseline.
For example, take the data below, where let's say the mean value (M) is a depression score. Would it be appropriate to calculate Hedges' g by the method above, or would something have to be done differently if the intervention and control group baseline scores were not similar?
Thankfully I think only a couple of studies had this problem, but I am unsure whether to exclude them, apply a correction, or run them as normal in the meta-analysis.
Intervention Group Pre-treatment: M=63.92; SD=10.67; N=63
Intervention Group Post-treatment: M=59.43; SD=7.23; N=63
Control Group Pre-treatment: M=74.57; SD=9.79; N=65
Control Group Post-treatment: M=72.69; SD=4.84; N=65
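For concreteness, the post-treatment-only calculation I described gives the following for the data above (plain Python sketch; the small-sample correction J uses the usual approximation, and the sign convention of control minus intervention follows my first paragraph):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: mean difference over the pooled SD, with the
    small-sample correction J = 1 - 3/(4*df - 1), df = n1 + n2 - 2."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                   / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return d * j

# Post-treatment values from the table above: control minus intervention
g = hedges_g(72.69, 4.84, 65, 59.43, 7.23, 63)
```

For these numbers g comes out a little above 2, i.e. a very large effect, which is exactly why the baseline imbalance worries me.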
Many thanks for any help.
I am trying to calculate the standard deviation of the change from baseline for my meta-analysis and would like to know the correct way to calculate it.
I have the following data available:
1. mean for control group at baseline and endpoint
2. mean for intervention group at baseline and endpoint
3. 95% Confidence interval for control group at baseline and endpoint
4. 95% Confidence interval for intervention group at baseline and endpoint
5. Number of subjects in control and intervention group
I would like to calculate the standard deviations for the control group (sd_baseline and sd_endpoint) and the intervention group (sd_baseline and sd_endpoint), and from those the SD of the change.
I would like to use the cochrane handbook as reference:
It is stated that
"When there is not enough information available to calculate the standard deviations for the changes, they can be imputed."
Does this mean we need to impute the correlation coefficient separately for every study and for every different outcome?
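For reference, here is how I currently read the two conversions involved (plain Python sketch; I believe the 3.92 divisor assumes a 95% CI and a reasonably large sample, with a t-based divisor needed for small samples, and the correlation r has to be imputed, which is exactly my question):

```python
import math

def sd_from_ci(lower, upper, n):
    """Approximate SD of a group from a 95% CI of its mean:
    SE = (upper - lower) / 3.92, SD = SE * sqrt(n)."""
    return math.sqrt(n) * (upper - lower) / 3.92

def sd_change(sd_baseline, sd_endpoint, corr):
    """SD of change scores given an imputed baseline-endpoint
    correlation (Cochrane handbook 16.1.3.2)."""
    return math.sqrt(sd_baseline**2 + sd_endpoint**2
                     - 2 * corr * sd_baseline * sd_endpoint)

# Made-up numbers: n = 25 per group
sd_b = sd_from_ci(10.0, 13.92, 25)       # baseline SD
sd_e = sd_from_ci(9.0, 12.92, 25)        # endpoint SD
sd_ch = sd_change(sd_b, sd_e, corr=0.5)  # imputed r = 0.5
```

With r = 0.5 and equal SDs the change SD equals the group SD, so the imputed r drives the result, which is why a sensitivity analysis over a range of r values is often advised.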
Dear Research Gate community,
I'm conducting a study on how different management practices in wetland ponds affect the diversity and abundance of species. The ponds are next to each other; some are controls without any management. The water in the treatment ponds is regularly drawn down and refilled with river water. We recorded waterbird species number and abundance in each pond regularly (all ponds surveyed at the same time in each survey).
- In the first year, we conducted a baseline study in which no treatment was done for all the ponds (data were collected monthly).
- In the second year, we conducted the treatment (operational study), and data of birds were collected weekly.
We’re now trying to study:
1) first, if there is any difference between the treatment ponds and control ponds during the operation
2) if there is any difference between baseline study and operational study of the same pond.
We wonder what kind of statistics are suitable for statistically analysing our data.
Some problems we are encountering are:
1. The data do not look normally distributed. The data are a time series, and there is natural seasonal variation in the number of waterbirds in our region (many migratory birds in fall and winter). How do we take the timing of each survey into account?
2. The sampling frequency differs between the baseline year (12 surveys) and the operational year (52 surveys). How do we compare the baseline and operational years?
Highly appreciate any help or suggestion!
At what specific time should I measure the baseline BGL? And what about the pre-test and post-test blood glucose measurements in the oral glucose tolerance test? I want to impose a 4-hour fast on the white mice.
I was wondering if anyone has experience statistically analyzing Ca2+ oscillation patterns?
Specifically: what models you used to quantify what you considered a "peak" in the oscillation pattern, how you determined the baseline, and what programs you used to accomplish this.
I am looking to do a meta-analysis of intervention RCTs, but all of the papers provide only baseline and post-intervention mean (SD) for the groups. I have looked at the Cochrane page, as I am using RevMan ( https://handbook-5-1.cochrane.org/index.htm#chapter_16/16_1_3_2_imputing_standard_deviations_for_changes_from_baseline.htm ).
However, I am not from a statistical background, so this is getting quite complex. I just wanted to see whether there are any resources to guide me, as this must be very common, and I would hope there are fairly simple ways to deal with it. I have come across using the SD of the baseline or post-intervention values as the SD of the change, but I am obviously hesitant to just go along with this.
As only baseline and post-intervention mean and SD are reported for the majority of the studies, I am thinking I may need to stop at the narrative synthesis and leave out the meta-analysis. But given that the trials are all randomised, with similar baseline characteristics and biomarker parameters, could I just enter the final measurements for both groups rather than the mean change and SD from baseline?
Appreciate of any input.
I am working with a 7820A GC and 5977B MS. Recently we changed the helium tank, and since then we have been seeing baseline noise in the chromatogram high enough to interfere with my peaks. We also found that the N2 and O2 counts are higher than we used to see: 5-8% and 1-1.8% respectively, whereas previously they were 1-2% and <1%.
We replaced the helium regulator, purged the inlet, changed the septum, baked out the MS and the baseline noise still exists. Any ideas why this might be happening?
Hopefully someone can help me out with how to quantify a significant representation of a group.
I am analysing a bias in protein detections. To see a pattern, I group the proteins into their families. I know the baseline occurrence of each family, based on the proteome. The sample data consist of a list of how many times a protein family occurred in my sample (always equal to or less than the baseline).
Data   Baseline   %
  1        6      16.67%
  2       15      13.33%
 11      141       7.80%
  3       18      16.67%
 58      361      16.07%
  1        3      33.33%
  1       21       4.76%
  7      421       1.66%
  1        2      50.00%
I could use the percentage representation, but since the families are not equally sized, this creates a bias: the value of 58 out of 361 should carry more weight than 1 out of 2.
Is there a way to calculate the 'magnitude' of representation beyond percentage to take the frequency into account?
Chi-square testing doesn't work, since the dataset consists of more than 600 family groups.
I struggle a bit to state my problem clearly; please ask if there is anything I can do to clarify.
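One direction I have been considering (not sure it is appropriate, hence the question) is to attach a confidence interval to each proportion, so that small denominators count for less; for example a Wilson score interval (plain Python sketch using two rows from my table):

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n. The interval is
    much wider for small n, so 1 out of 2 is treated as far less
    certain than 58 out of 361."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo_big, hi_big = wilson_interval(58, 361)   # narrow interval near 16%
lo_small, hi_small = wilson_interval(1, 2)  # very wide interval
```

Whether ranking families by interval bounds (rather than raw percentages) is a sound way to express the "magnitude" of representation is exactly what I would like feedback on.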
I am hoping that someone well versed in statistics can help me with my analysis and design. I am investigating the torque produced via stimulation of different quadriceps muscles. I have two groups (INJ and CON), three muscles (VM, RF, VL), and three time points (Pre, Post, 48H), with torque measured at two different frequencies (20 and 80 Hz). In addition to the torque, we also want to look at the relative change from baseline at Post and 48H, in order to remove some of the baseline variability between muscles and subjects; a ratio of 1.0 indicates the same torque value Post as Pre. This is a complex design, so I have a few questions.
If I want to use repeated-measures ANOVA, I first have to test for normality. When I run the normality test on the raw data in SPSS, one condition fails and others are close (p < 0.1). When I run it on the ratios, I also have a condition that fails normality. Does this mean I now have to use a non-parametric test for each? If so, which one? I am having a difficult time finding a non-parametric test that can account for all my independent variables. Friedman's test handles repeated measures, but it cannot account for group/frequency/muscle differences the way an ANOVA would.
Is repeated-measures ANOVA robust enough to handle this? If so, should I set it up as a four-way repeated-measures ANOVA? It seems like I would really be increasing my risk of Type I error. I could separate the analysis by frequency (20 and 80 Hz), since it is established that higher frequencies produce higher torque, but as you can tell I have a lot of uncertainties in the design. I apologize if I am leaving out information vital to getting answers; please let me know and I can elaborate further.
As per literature,
"The net peak heights were determined by subtracting the height of the baseline directly from the total peak height. The same baseline was taken for each peak before and after exposure to UV.
The carbonyl index was calculated as: carbonyl index = (I_C / I_R) × 100,
where I_C represents the intensity of the carbonyl peak and I_R is the intensity of the reference band."
Now, how do I subtract the baseline height from the peak height?
Edit: the paper was approved so if you want to see it just message me :)
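The arithmetic the quoted passage describes is just a per-peak baseline subtraction followed by a ratio. A minimal sketch (all absorbance values below are made-up numbers, not from the paper):

```python
# Net peak height = total peak height minus the baseline height read
# directly under that peak; the same baseline point is used for the
# spectra before and after UV exposure.

def net_height(total_peak_height, baseline_height):
    return total_peak_height - baseline_height

IC = net_height(0.82, 0.10)   # hypothetical carbonyl peak
IR = net_height(1.25, 0.10)   # hypothetical reference band

carbonyl_index = (IC / IR) * 100
print(round(carbonyl_index, 1))   # 62.6 for these made-up heights
```

In practice, the baseline height is read off the spectrum at the peak position (or interpolated between the two baseline anchor points flanking the peak) and subtracted from the apex height, for both the carbonyl and the reference band.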
I'm writing a paper on a multimodal active sham device for placebo interventions with electrostimulators. We believe it has a low manufacturing cost, but it's probably better to have some baseline for comparison. Have any of you ever requested a manufacturer to produce a sham replica of an electrostimulator to be used on blind trials? If so, how much did it cost? Was it an easy procedure?
What is the origin of the shift (up) in the baseline of the UV-VIS spectrum as noticed from 300 nm to 800 nm in the screenshot attached? I'm measuring phenobarbital in 0.2 NaOH against 0.2 NaOH blank. I have tried turning off fluorescent lights, CRT monitors, and capping the cuvette while measuring the sample on my HP 8453 chemstation.
I use the BV2 cell line to record calcium signals with a dye called Calbryte 520 AM. I added ATP into the perfusion system at 5 min and could see a peak in the figure. I ran this experiment four times, but the calcium baseline decreased continuously. Generally, the calcium baseline should be stable.
I am developing a method for computing fuzzy similarity in WordNet. Previous work mainly focused on the similarity of synsets (concepts).
I am searching for a standard baseline for comparison. My question is: what is the standard baseline for computing the similarity of words in WordNet?
Over the last couple of days, my colleagues noticed a significant amount of baseline drift in their chromatograms from one HPLC (see attached). Across columns, methods and samples, the baseline shift looks consistent. I've monitored the pressure and it is stable and where I expect it to be. Additionally, the washes (see attached) all show a sharp increase, a plateau and then a drop. Just a day or two prior, the baselines were perfectly fine. I am not sure what is causing this sudden issue or how to resolve it. Please advise. Thank you.
I am conducting a meta-analysis of continuous data using RevMan 5.4.
Included studies express their results as mean and SD at baseline and at end of study for the intervention and control arms. With these data I can impute the change from baseline, which I will use to perform a meta-analysis of change scores.
However, in a few studies, due to patients lost to follow-up, the number of patients at the end of the trial is lower than at baseline. RevMan 5.4 requires the mean change from baseline, the SD of the change, and the sample size to perform the meta-analysis. Which number of patients should I use (the sample size at baseline or at follow-up)? Or would it be better to exclude these studies?
Thanks in advance for your help.
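For anyone doing the same imputation: the change-from-baseline SD cannot be computed from the two SDs alone; you also need (or must assume) the baseline-endpoint correlation r. A small sketch of the usual formula, with made-up summary statistics:

```python
import math

# Hypothetical arm-level summary statistics (replace with your study's data)
mean_b, sd_b = 140.0, 15.0   # baseline
mean_e, sd_e = 128.0, 14.0   # end of study
r = 0.5                      # ASSUMED baseline-endpoint correlation;
                             # borrow it from a study reporting full data
                             # and vary it in a sensitivity analysis

mean_change = mean_e - mean_b
sd_change = math.sqrt(sd_b**2 + sd_e**2 - 2 * r * sd_b * sd_e)
print(mean_change, round(sd_change, 2))
```

On the sample-size question: a common choice is the number of patients actually analysed at follow-up, since that is the n underlying the endpoint mean and SD, but it is worth checking the Cochrane Handbook's guidance and, if in doubt, running the analysis both ways to see whether it matters.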
This is what the baseline looks like running 30% acetonitrile/70% water, using a UV/VIS detector set at 195 nm. I thought it might be air bubbles, but running 100% water gives a perfectly flat baseline. I degassed the mobile phases, and primed the lines several times. I also ran isopropanol through the system for a while, but that didn't help. The pressure of the system is consistent (around 750 PSI), and I cannot find any leaks. Could this be a solvent mixing issue? I know that acetonitrile absorbs at this wavelength, but I've never seen it cause this sort of issue. I'd really appreciate it if anyone could provide some suggestions, thank you!
Recently I have been getting an extremely unstable baseline throughout my runs. I let the HPLC equilibrate for about an hour by running mobile phase at our method's flow rate. I also purge the RID by running mobile phase at the same flow rate through the reference channel, opened via the software. The UV baseline is also unstable.
I have also noticed that elution times vary greatly between samples within the same sequence; the difference can be as high as 7 minutes. I switched columns and still see the same problems.
Does anyone know how to fix these issues?
Flow rate - 0.6 mL/minute
Mobile phase - 5 mM H2SO4
Column and RID temperature - 55 °C
Our samples are mixtures of sugars diluted 1:5 in 16 mM NaOH.
I attached an image of what the peaks look like.
I have a question regarding event/task-related EEG data. Are there any indications that baseline power increases from trial to trial? For example, in a fine motor task (e.g. finger movement), could baseline power slowly accumulate over the course of the task (baseline power in trial 1 < trial 100)?
I am aware that the time between trials should be such that there is a return to baseline. However, if the power increases slowly, could it be that an effect is only seen after 70-80 trials?
Does anyone have experience or know of studies that can provide guidance on this issue?
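One way to check this in your own data is to compute the mean power in the pre-stimulus baseline window for each trial and regress it against trial number. A minimal sketch with synthetic data (sampling rate, window, and drift are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, fs = 100, 250          # hypothetical: 100 trials at 250 Hz
t = np.arange(-0.5, 0.0, 1 / fs) # pre-stimulus baseline window (-500 to 0 ms)

# Synthetic baseline segments whose amplitude drifts up across trials,
# mimicking slow accumulation of baseline power
drift = 1.0 + 0.005 * np.arange(n_trials)
epochs = drift[:, None] * rng.standard_normal((n_trials, t.size))

baseline_power = (epochs ** 2).mean(axis=1)   # mean power per trial
slope, intercept = np.polyfit(np.arange(n_trials), baseline_power, 1)
print(f"power trend per trial: {slope:+.4f}") # positive slope = accumulation
```

With real data you would use your band-limited power estimate instead of raw squared amplitude; a clearly positive slope (e.g. against a permutation null) would support the accumulation scenario and could explain an effect emerging only after many trials.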
I prepared an article with a baseline focus, but I do not understand which article type to choose when submitting to Marine Pollution Bulletin (Baseline vs. normal). Moreover, I already have two Baseline articles under review; can I submit another Baseline paper? Is there any difference in quality standards between the two kinds of publication (Baseline and normal) in Marine Pollution Bulletin? I want to submit the article as corresponding author and would appreciate a good suggestion. Thanks in advance.
Hello. I am using reversed-phase HPLC with a C18 ODS column and a PDA detector. My mobile phase is 15 mM CH3COONa with 6% v/v CH3CN (pH 5.5). The baseline starts from zero, gradually drops to negative values and never stabilizes. I have run multiple washes with methanol:water (50:50) but am still facing the same problem. I was wondering what could be the reason for this drop?
Thank you in advance!
I am conducting an intervention study with two groups (control and experimental). The study subjects were monitored at baseline and at the endline of the study. I would like to:
1. Compare the data at baseline between the two groups
2. Compare the data at endline between the two groups
3. Show the effect of the intervention on the parameters of the study subjects
4. Check for differences within the groups eg control at baseline compared to control at endline
Kindly advise on the appropriate statistical tests I should perform
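Assuming a continuous, roughly normal outcome, the four comparisons listed above map onto standard tests. A hedged sketch in Python with SciPy (the data here are simulated; for ordinal or skewed outcomes you would swap in Mann-Whitney U and Wilcoxon signed-rank tests instead):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical continuous outcome, 30 subjects per group
ctrl_base = rng.normal(100, 10, 30)
ctrl_end = ctrl_base + rng.normal(0, 5, 30)   # no intervention effect
expt_base = rng.normal(100, 10, 30)
expt_end = expt_base + rng.normal(8, 5, 30)   # simulated effect of +8

# 1. Baseline comparison between groups (Welch's t-test)
print(stats.ttest_ind(ctrl_base, expt_base, equal_var=False))
# 2. Endline comparison between groups
print(stats.ttest_ind(ctrl_end, expt_end, equal_var=False))
# 3. Intervention effect: compare change scores between groups
print(stats.ttest_ind(expt_end - expt_base, ctrl_end - ctrl_base, equal_var=False))
# 4. Within-group change, e.g. control baseline vs endline (paired t-test)
print(stats.ttest_rel(ctrl_base, ctrl_end))
```

For point 3, an ANCOVA (endline outcome with group as factor and baseline as covariate) is often preferred over change scores because it adjusts for any chance baseline imbalance; the change-score t-test above is the simpler alternative.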
Live cell imaging using the Incucyte Zoom is the best way to measure the effect of drugs on the rate of cell proliferation. I noticed that no matter how accurately we try to count the cells using automated cell counters (Countess), sometimes not all the data points start at the same value.
If we normalise the data to timepoint 0, the gradient of the slopes changes, which is not ideal.
One researcher suggested I seed cells at several densities, accepting the counting error, and pick the densities that happen to start at the same point. I believe this is not correct, because it just adds errors on top of the Countess counting error, and it also assumes all cells have the same size and shape.
Instead, I have been subtracting the baseline value at timepoint 0 from all timepoints of the respective condition, so that all curves start at 0. Reason: this subtraction does not alter any of the slopes.
The only problem is when cells reach 100% confluence and plateau. For example, if confluence started at 20% at timepoint 0, reached 100% at 96 h, and stayed at that plateau until day 7, subtracting the baseline will show the cells plateauing at 80% from 96 h onward. I think the best option is to plot only the log-phase data and subtract the baseline there.
Can anyone please comment on this and help me out.
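The key claim above, that subtracting the t0 value preserves the slope while dividing by it does not, is easy to verify numerically. A toy example with made-up confluence values:

```python
import numpy as np

# Hypothetical confluence (%) for one condition, read every 24 h up to 96 h
t = np.arange(0, 97, 24.0)
confluence = np.array([20.0, 35.0, 50.0, 65.0, 80.0])

shifted = confluence - confluence[0]   # subtract the t0 baseline
scaled = confluence / confluence[0]    # divide by t0 ("fold change")

slope_raw = np.polyfit(t, confluence, 1)[0]
slope_shift = np.polyfit(t, shifted, 1)[0]
slope_scaled = np.polyfit(t, scaled, 1)[0]

# Subtracting a constant leaves the fitted slope unchanged;
# dividing rescales the slope by the starting confluence.
print(slope_raw, slope_shift, slope_scaled)
```

This supports the subtraction approach for comparing growth rates, with the caveat already noted: the linear-slope logic only holds in the log phase, so the fit should be restricted to timepoints before the plateau.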
I am having trouble getting LTP in hippocampal slices (CA3-CA1) from 9-16-week-old WT mice.
I can get LTP fine in aCSF containing 1 µM gabazine (I cut CA3 to prevent epileptic activity), but in plain aCSF I don't see any LTP. When I record with gabazine I often see spiking after LTP induction, and I'm worried these spikes are skewing my data, so it would be good if I could record LTP without gabazine.
In both conditions I get stable baselines and the slices look healthy. For the baseline I use 40% of the max response. I extract in choline chloride aCSF (i have also tried slicing in aCSF containing sucrose). After slicing, the slices are maintained in standard aCSF and left to rest for 1 hour before being transferred to the rig at 30 degrees. I induce LTP using a theta burst stimulus.
Please can anyone help explain why I cannot get LTP without Gabazine? And please let me know if you have any suggestions of what I could try to get LTP.
Thanks in advance
I am having trouble with baseline removal when creating an epoch containing 1 trial. After creating the epoch, a window pops up that reads "remove mean of each data channel", rather than the window that allows you to specify the time window for baseline removal. I am also unable to later remove baseline by going to "tools" --> "remove baseline". How may I baseline correct this segment using a window of -200 to 0 (rather than the default -1000 to 0)? Is there a way to change the default settings of the baseline removal in eeglab?
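While waiting for an EEGLAB-specific answer (in EEGLAB scripting, `pop_rmbase(EEG, [-200 0])` should apply exactly this window), the correction itself is simple enough to do by hand on the epoch array: subtract each channel's mean over the -200 to 0 ms window. A sketch in Python with made-up dimensions:

```python
import numpy as np

fs = 1000                               # hypothetical sampling rate (Hz)
t = np.arange(-1000, 1000) / 1000.0     # epoch spanning -1000 to +1000 ms
rng = np.random.default_rng(3)
epoch = rng.standard_normal((32, t.size)) + 5.0  # 32 channels with a DC offset

# Baseline window: -200 to 0 ms (instead of the whole pre-stimulus period)
win = (t >= -0.2) & (t < 0.0)
corrected = epoch - epoch[:, win].mean(axis=1, keepdims=True)

# After correction, each channel's mean over the window is zero
print(np.allclose(corrected[:, win].mean(axis=1), 0.0))
```

The "remove mean of each data channel" dialog you are seeing corresponds to using the whole epoch as the baseline; restricting the window as above recovers the -200 to 0 behaviour regardless of the GUI defaults.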
The input data is between 2006 and 2020, and the number of years I chose is 100. So the generated data gives 100 years of values, from year 1 to year 100, without mentioning any dates.
I hope you have had a great day so far!
Well, I wonder how I can run a mixed-effect analysis on Stata with the following features:
Research question: What baseline variables predict my dependent variable over time?
Dependent variable: discrete --> Poisson distribution
Independent variables: both categorical and continuous variables
The following model is what I have planned so far. But I don't know how to consider only the baseline data from my IDs.
xtmepoisson DV ID##time || participant_ID:time, irr
My question is: What do I need to do to consider only the baseline data from my IVs?
Thank you in advance and happy holidays!
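If the goal is for the model to see only the baseline measurement of each predictor, one common approach is to copy each ID's time-0 value of the IV onto all of that ID's rows before fitting. A language-agnostic sketch in Python (variable names are invented; in Stata the equivalent is, if I remember the syntax correctly, `bysort participant_ID (time): gen stress_bl = stress[1]`):

```python
# Long-format records: one row per (participant, time); made-up values.
rows = [
    {"id": 1, "time": 0, "dv": 2, "stress": 10},
    {"id": 1, "time": 1, "dv": 3, "stress": 14},
    {"id": 2, "time": 0, "dv": 0, "stress": 7},
    {"id": 2, "time": 1, "dv": 1, "stress": 9},
]

# Take each IV's value at time 0 and copy it onto every row for that ID,
# so the model only ever sees the baseline measurement of the predictor.
baseline = {r["id"]: r["stress"] for r in rows if r["time"] == 0}
for r in rows:
    r["stress_bl"] = baseline[r["id"]]

print([r["stress_bl"] for r in rows])   # [10, 10, 7, 7]
```

You would then use `stress_bl` (not `stress`) as the fixed-effect covariate in the mixed-effects Poisson model, optionally interacted with time to test whether the baseline value predicts the trajectory of the DV.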
I am investigating the efficacy of Gestalt therapy with adolescents engaging in self-harm, using a single case experimental design. I have administered some tools to measure the level of self-harm, anxiety and depression at baseline, after 15 sessions and after 30 sessions. What statistical measures would you suggest I use to show the effect of the treatment besides visual analysis?
Hi there. My research aim is to reduce latency in a fog environment, and I have a baseline that I would like to compare my work to. In the baseline research paper, their proposed method was compared against a method called "no offloading", and they reduced latency by 40%. In my work, I compared my proposed method against the same "no offloading" method and reduced latency by 80%. My question: do I have to implement the baseline in my simulation to officially compare my work to it? The problem is that the baseline method considers factors that I don't, such as deadlines, and the parameter values used in the baseline differ from mine.
Can I use the mean difference (difference in means between endpoint and baseline) and its associated standard error to calculate a standardised mean difference (SMD) and its 95% CI? (I also have an exact p value and the sample size.)
For context: it's for a meta-analysis; all of my other studies provided the means and SDs needed to get my SMD.
n=239, mean difference: −1.2 [SE 1.48]; p=0.4154
Formulas, online calculators, or references are all welcome :)
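One possible route, assuming the SE reported is the standard error of the mean change: recover the SD of the change scores via SD = SE × √n, then standardise. A sketch with the numbers above (note the caveat that this standardises by the SD of change, which is only directly poolable with your other studies if their SMDs are computed the same way):

```python
import math

n = 239
md, se = -1.2, 1.48   # reported mean change and its standard error

sd_change = se * math.sqrt(n)   # SE = SD/sqrt(n)  =>  SD = SE*sqrt(n)
smd = md / sd_change            # standardised mean change

# One common approximation for the SE of a standardised mean change
# (when standardising by the SD of the change scores)
se_smd = math.sqrt(1 / n + smd**2 / (2 * n))
lo, hi = smd - 1.96 * se_smd, smd + 1.96 * se_smd
print(round(smd, 3), (round(lo, 3), round(hi, 3)))
```

For these inputs the SMD comes out very small (around -0.05) with a CI spanning zero, consistent with the reported p = 0.4154. It is worth double-checking the chosen variance formula against a meta-analysis reference before pooling, since conventions differ between change-score and within-group standardisation.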