Questions related to Baseline
The problem: We are interested in how ketamine modulates hedonic experiences such as chills. Participants listen to music in the scanner under both ketamine and placebo, and they also pre-rate all the songs outside the scanner a week beforehand. For our analysis we need to know when participants experience their peak emotional moment during the scan session.
Our ‘solutions’ that did not work: 1. Live rating – participants press a button while they experience their peak moment. Problem: we confound our neural signal of interest (pleasure) with motor activity. 2. Rating the music again after the scan session. Problem: even though we think the peak for most songs won’t shift between the baseline rating and the in-scanner session, it might do so under ketamine (which makes everything more pleasurable and may even shift the peak earlier). So if participants rate the music again afterwards, when most of ketamine's effects have subsided, we might not find the same peak moments as during the scan session. (Plus, we also doubt that participants could reliably remember when those peaks occurred during the scan session…)
3. Measuring physiological responses. Problem: yes, skin conductance does correlate with chills, but we do not have the equipment for that...
Does any of you have an idea of how we could measure the peak moment during the scan session without majorly confounding our measurements? I would really appreciate your help!
I think I understand now that you ignore the baseline (pre-test) data when calculating Hedges' g. Instead, you subtract the intervention post-treatment mean from the control post-treatment mean and divide the result by the pooled standard deviation (of both samples at post-treatment). However, I am now wondering what that means for studies where, despite randomisation, there were significant differences in the outcome of interest at baseline.
For example, with the data below, where the mean value (M) pertains to a depression score: would it be appropriate to calculate Hedges' g with the method above, or what would have to be done differently, given that the intervention and control group baseline scores were not similar?
Thankfully, I think only a couple of studies had this problem, but I am unsure whether I should exclude them, apply a correction, or run them as normal in the meta-analysis.
Intervention Group Pre-treatment: M=63.92; SD=10.67; N=63
Intervention Group Post-treatment: M=59.43; SD=7.23; N=63
Control Group Pre-treatment: M=74.57; SD=9.79; N=65
Control Group Post-treatment: M=72.69; SD=4.84; N=65
Many thanks for any help.
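The post-treatment-only calculation described in the question can be sketched in a few lines of Python, using the numbers posted above (a minimal illustration, not meta-analysis software):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g from post-treatment summaries of two independent groups."""
    # Pooled SD of both samples at post-treatment
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)      # small-sample correction factor
    return j * d

# Post-treatment values from the table above (intervention vs control)
g = hedges_g(59.43, 7.23, 63, 72.69, 4.84, 65)   # roughly -2.15
```

Note that with the baseline gap in the example (63.92 vs 74.57), a post-only g partly reflects the baseline imbalance; effect sizes based on change scores or on ANCOVA-adjusted means are common alternatives in that situation.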
In a meta-analysis of several studies, one study didn't provide change-from-baseline results, so I have to calculate them. I only have the following data: pre-treatment mean & SD and post-treatment mean & SD. It is easy to get the mean difference by subtraction, but I can't calculate the SD of the change!
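For reference, the standard workaround (Cochrane Handbook 16.1.3.2) is to impute a pre-post correlation r, e.g. from a similar study that reported full data, and derive the SD of change from it. A minimal sketch, with r as an explicit assumption:

```python
import math

def sd_of_change(sd_pre, sd_post, r):
    """SD of within-person change from pre/post SDs and an assumed
    pre-post correlation r (imputed, e.g., from a comparable study)."""
    return math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)

# Hypothetical numbers: SD_pre = 10, SD_post = 8, imputed r = 0.7
sd_change = sd_of_change(10, 8, 0.7)
```

Since the imputed r drives the result, a sensitivity analysis over a range of plausible values (say 0.5 to 0.9) is usually advisable.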
I am trying to calculate the standard deviation of change for my meta-analysis and would like to know the correct way of calculating it.
I have the following data available:
1. mean for control group at baseline and endpoint
2. mean for intervention group at baseline and endpoint
3. 95% Confidence interval for control group at baseline and endpoint
4. 95% Confidence interval for intervention group at baseline and endpoint
5. Number of subjects in control and intervention group
I would like to calculate the standard deviation of change for the control group (from sd_baseline and sd_endpoint) and for the intervention group (from sd_baseline and sd_endpoint).
I would like to use the Cochrane Handbook as a reference. It states:
"When there is not enough information available to calculate the standard deviations for the changes, they can be imputed."
Does this mean we need to impute the correlation coefficient for all the studies, for every outcome separately?
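As a side note on the data listed above: the per-group, per-timepoint SDs can be recovered from the 95% CIs before any change-score imputation. A minimal sketch, assuming the CIs are normal-approximation intervals (mean ± 1.96·SE; for small samples a t-based divisor should replace 3.92):

```python
import math

def sd_from_ci(lower, upper, n):
    """SD of a single group mean recovered from its 95% CI."""
    se = (upper - lower) / 3.92   # CI width = 2 * 1.96 * SE
    return se * math.sqrt(n)

# Hypothetical: baseline mean CI of (10.2, 13.8) from n = 40 subjects
sd_baseline = sd_from_ci(10.2, 13.8, 40)
```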
Dear Research Gate community,
I’m conducting a study on how different management practices in wetlands (ponds) affect the diversity and abundance of species in the wetlands. The ponds are next to each other. Some ponds are controls without any management practices; the water in the treatment ponds is regularly drawn down and refilled with water from the river. We recorded the waterbird species number and abundance of each pond regularly (recording the bird data for all ponds at the same time in each survey).
- In the first year, we conducted a baseline study in which no treatment was applied to any of the ponds (data were collected monthly).
- In the second year, we conducted the treatment (operational study), and bird data were collected weekly.
We’re now trying to study:
1) first, if there is any difference between the treatment ponds and control ponds during the operation
2) if there is any difference between baseline study and operational study of the same pond.
We wonder what kind of statistics are suitable for statistically analysing our data.
Some problems we are encountering are:
1. The data do not look normally distributed. The data are time series, and there is natural seasonal variation in the number of waterbirds in our region (many migratory birds in fall and winter). How do we take the timing of the survey into account?
2. The sampling frequency differs between the baseline year (12 surveys) and the operational year (52 surveys); how do we compare the baseline and operational years?
Highly appreciate any help or suggestion!
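On problem 2, one simple (if lossy) option is to collapse the weekly operational-year counts to monthly summaries before comparing with the baseline year. A sketch with hypothetical counts; a real analysis would group by calendar month rather than fixed 4-week blocks:

```python
def monthly_means(weekly_counts, weeks_per_month=4):
    """Average consecutive weekly surveys into monthly values so the
    operational year (52 surveys) is on the baseline year's (12) scale."""
    return [sum(block) / len(block)
            for block in (weekly_counts[i:i + weeks_per_month]
                          for i in range(0, len(weekly_counts), weeks_per_month))]

# Hypothetical waterbird counts for 8 weeks -> two monthly means
means = monthly_means([10, 12, 14, 16, 30, 28, 32, 30])
```

The alternative that keeps all the data is a count model with a month/season term (e.g. a Poisson or negative-binomial mixed model with pond as a random effect), which also addresses the seasonal variation raised in problem 1.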
At what specific time should I measure the baseline BGL? And when should the pre-test and post-test blood glucose measurements be taken in an oral glucose tolerance test (OGTT)? I want to impose a 4-hour fast on white mice.
I was wondering if anyone has experience statistically analyzing Ca2+ oscillation patterns?
Specifically: the models you used to quantify what you considered a "peak" in the oscillation pattern, how you determined the baseline, and what programs you used to accomplish this.
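For a flavour of the kind of operational definition people use: a sample is often called a peak if it is a local maximum rising some multiple of the noise SD above the estimated baseline. A minimal pure-Python sketch (in practice scipy.signal.find_peaks implements the same idea with prominence/width options):

```python
def find_peaks(trace, baseline, noise_sd, k=3.0):
    """Indices of local maxima more than k noise-SDs above baseline."""
    threshold = baseline + k * noise_sd
    return [i for i in range(1, len(trace) - 1)
            if trace[i] > trace[i - 1]       # rising into the point
            and trace[i] >= trace[i + 1]     # falling (or flat) after it
            and trace[i] > threshold]

# Hypothetical trace: baseline ~10, one clear oscillation peak at index 4
trace = [10.0, 10.5, 11.0, 14.0, 18.0, 13.0, 10.8, 10.2]
peaks = find_peaks(trace, baseline=10.0, noise_sd=1.0)
```

The baseline is commonly estimated as the median (or a low percentile) of the trace, or as a rolling minimum when the recording drifts.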
I am looking to do a meta-analysis on intervention RCTs, but all of the papers have provided baseline and post-intervention mean (SD) for the groups. I have looked at the Cochrane page, as I am using RevMan ( https://handbook-5-1.cochrane.org/index.htm#chapter_16/16_1_3_2_imputing_standard_deviations_for_changes_from_baseline.htm ).
However, I am not from a statistical background, so this is getting quite complex. I just wanted to see if there are any resources to guide me, as this must be very common, and I would hope there are fairly simple ways to deal with it. I have come across using the SD from baseline or from post-intervention as the SD for the change, but am obviously hesitant to just go along with this.
As only baseline and post-intervention mean and SD are reported for the majority of the studies, I am thinking I may need to stop at the narrative synthesis and leave out the meta-analysis. But given that the trials are all randomised with similar baseline characteristics and biomarker parameters, could I just enter the final measurements for both groups, rather than the mean change and SD from baseline?
I'd appreciate any input.
I am working with a 7820A GC and 5977B MS. Recently we changed the helium tank, and since then we have been seeing baseline noise in the chromatogram high enough to interfere with my peaks. We also found that the N2 and O2 counts are higher than we used to see: 5-8% and 1-1.8%, respectively. Previously, the counts were 1-2% and <1%.
We replaced the helium regulator, purged the inlet, changed the septum, baked out the MS and the baseline noise still exists. Any ideas why this might be happening?
Hopefully someone can help me out with how to quantify a significant representation of a group.
I am analysing a bias in protein detections. To see a pattern, I group the proteins into their families. I know the baseline occurrence of each family, based on the proteome. The sample data consist of a list of how many times each protein family occurred in my sample (always equal to or less than the baseline).
Data   Baseline   %
1      6          16.67%
2      15         13.33%
11     141        7.80%
3      18         16.67%
58     361        16.07%
1      3          33.33%
1      21         4.76%
7      421        1.66%
1      2          50.00%
I could take a percentage of representation, but since the families are not equally sized, this creates a bias.
Here, the observation of 58 out of 361 should weigh more than 1 out of 2.
Is there a way to calculate the 'magnitude' of representation beyond percentage to take the frequency into account?
Chi-square testing doesn't work, since the dataset consists of more than 600 family groups.
I'm struggling a bit to explain my problem; please ask if I can do anything to clarify.
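One standard way to get a "magnitude" that rewards well-supported fractions is a lower confidence bound on the proportion, e.g. the Wilson score lower bound: small denominators get pulled toward zero, so 58/361 (16%) outranks 1/2 (50%). A stdlib-only sketch:

```python
import math

def wilson_lower(k, n, z=1.96):
    """Lower 95% Wilson score bound on the proportion k/n."""
    p = k / n
    centre = p + z**2 / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin) / (1 + z**2 / n)

big = wilson_lower(58, 361)    # ~0.126
small = wilson_lower(1, 2)     # ~0.095
```

Ranking families by this bound (or by a hypergeometric/binomial test against each family's proteome frequency) takes the sample size behind each percentage into account.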
I am hoping that someone well versed in statistics can help me with my analysis and design. I am investigating the torque produced via stimulation of different quadriceps muscles. I have two groups (INJ & CON), three muscles (VM, RF, VL), and three timepoints (Pre, Post, 48H), at which torque is measured at two different frequencies (20 & 80 Hz). In addition to the raw torque, we also want to look at the relative change from baseline at Post and 48H, in order to remove some of the baseline variability between muscles or subjects; a ratio of 1.0 indicates the same torque value Post as Pre. This is a complex design, so I have a few questions.
If I want to use repeated-measures ANOVA, I first have to test for normality. When I run the normality test on the raw data in SPSS, I have one condition that fails and others that are close (p < 0.1). When I run it on the ratios, I also have a condition that fails normality. Does this mean I now have to do a non-parametric test for each? If so, which one? I am having a difficult time finding a non-parametric test that can account for all my independent variables. Friedman's test handles repeated measures, but it is not going to account for group/frequency/muscle differences the way an ANOVA would.
Is repeated-measures ANOVA robust enough to account for this? If so, should I set this up as a four-way repeated-measures ANOVA? It seems like I am really increasing my risk of Type I error. It could be separated by frequency (20 and 80 Hz), because it's established that a higher frequency produces higher torque, but as you can tell I have a lot of uncertainties in the design. I apologise if I am leaving out information vital to getting answers; please let me know and I can elaborate further.
As per literature,
"The net peak heights were determined by subtracting the height of the baseline directly from the total peak height. The same baseline was taken for each peak before and after exposure to UV.
The carbonyl index was calculated as: carbonyl index = IC/IR(100),
where IC represents the intensity of the carbonyl peak and IR is the intensity of the reference band."
Now, how do I subtract the baseline height from the peak height?
Edit: the paper was approved so if you want to see it just message me :)
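In case it helps others: the usual recipe is to draw a straight line between two anchor points flanking the peak where the spectrum returns to background, interpolate that line's height at the peak position, and subtract it from the total peak height. A minimal sketch on a hypothetical 5-point spectrum:

```python
def net_height(x, y, peak_x, left_x, right_x):
    """Peak height above a straight baseline anchored at left_x/right_x."""
    def y_at(x0):
        # value at the sampled point closest to x0
        i = min(range(len(x)), key=lambda j: abs(x[j] - x0))
        return y[i]
    yl, yr = y_at(left_x), y_at(right_x)
    # Height of the interpolated baseline directly under the peak
    base = yl + (yr - yl) * (peak_x - left_x) / (right_x - left_x)
    return y_at(peak_x) - base

# Hypothetical absorbance values; the peak at x=2 sits on a sloping baseline
x = [0, 1, 2, 3, 4]
y = [0.10, 0.12, 0.80, 0.16, 0.18]
ic = net_height(x, y, peak_x=2, left_x=1, right_x=3)
```

Repeating the same subtraction (with the same anchor points) for the reference band gives IR, and then carbonyl index = 100 × IC/IR.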
I'm writing a paper on a multimodal active sham device for placebo interventions with electrostimulators. We believe it has a low manufacturing cost, but it's probably better to have some baseline for comparison. Have any of you ever requested a manufacturer to produce a sham replica of an electrostimulator to be used on blind trials? If so, how much did it cost? Was it an easy procedure?
What is the origin of the upward shift in the baseline of the UV-VIS spectrum, noticeable from 300 nm to 800 nm in the attached screenshot? I'm measuring phenobarbital in 0.2 M NaOH against a 0.2 M NaOH blank. I have tried turning off fluorescent lights and CRT monitors, and capping the cuvette while measuring the sample, on my HP 8453 ChemStation.
I use the BV2 cell line to record calcium signals with a dye called Calbryte 520 AM. I added ATP into the perfusion system at 5 min, and I could see a peak in the figure. I repeated this experiment four times, but the calcium baseline decreased continuously. Normally, the calcium baseline should be stable.
I am developing a method for computing fuzzy similarity in WordNet. Previous work mainly focused on the similarity of SynSets (concepts).
I am searching for a standard baseline for comparison. My question is: what is the standard baseline for computing the similarity of words in WordNet?
Over the last couple of days, my colleagues have noticed a significant amount of baseline drift in their chromatograms from one HPLC (see attached). Across columns, methods and samples, it shows a consistent type of baseline shift. I've monitored the pressure and it is stable and what I expect it to be. Additionally, the washes (see attached) all show a sharp increase, a plateau and then a drop. Just a day or two prior, the baselines were perfectly fine. I am not sure what is causing this sudden issue nor how to resolve it. Please advise. Thank you.
I am conducting a meta-analysis of continuous data using RevMan 5.4.
Included studies express their results as mean, SD at baseline and end of study for intervention and control arms. With these data I can impute change from baseline which I will use to perform a meta-analysis of change scores.
However, in a few studies, due to patients lost to follow-up, the number of patients at the end of the trial is lower than at baseline. RevMan 5.4 requires the mean change from baseline, the SD of the change, and the sample size to perform the meta-analysis. Which number of patients should I use (the sample size at baseline or at follow-up)? Or would it be better to exclude these studies?
Thanks in advance for your help.
This is what the baseline looks like running 30% acetonitrile/70% water, using a UV/VIS detector set at 195 nm. I thought it might be air bubbles, but running 100% water gives a perfectly flat baseline. I degassed the mobile phases, and primed the lines several times. I also ran isopropanol through the system for a while, but that didn't help. The pressure of the system is consistent (around 750 PSI), and I cannot find any leaks. Could this be a solvent mixing issue? I know that acetonitrile absorbs at this wavelength, but I've never seen it cause this sort of issue. I'd really appreciate it if anyone could provide some suggestions, thank you!
Recently I have been getting an extremely unstable baseline throughout my runs. I allow the HPLC to equilibrate for about an hour by running mobile phase at the flow rate we use for our method. I also flush the RID detector by running mobile phase at the same flow rate through the RID reference channel, opened via the software. The UV baseline is also unstable.
I have also noticed a problem where the elution times vary greatly between samples within the same sequence; the difference can be as high as 7 minutes. I switched columns and am still having the same problems.
Does anyone know how to fix these issues?
Flow rate: 0.6 mL/min
Mobile phase: 5 mM H2SO4
Column and RID temperature: 55 °C
Our samples are mixtures of sugars diluted 1:5 in 16 mM NaOH.
I attached an image of what the peaks look like.
I have a question regarding event-/task-related EEG data. Are there any indications that baseline power increases from trial to trial? For example, in the case of a fine motor task (e.g. finger movement), does baseline power slowly accumulate from trial to trial through the task (baseline power in trial 1 < trial 100)?
I am aware that the time between trials should be long enough for a return to baseline. However, if the power increases slowly, could it be that an effect is only seen after 70-80 trials?
Does anyone have experience or know of studies that can provide guidance on this issue?
I have prepared an article with a baseline aspect. But I do not understand to which section I should submit the article in Marine Pollution Bulletin (Baseline or regular). Moreover, I already have two articles under review as baseline studies. Can I submit another paper to the Baseline section? Is there any difference in quality between the two kinds of publication (Baseline and regular) in Marine Pollution Bulletin? I want to submit the article as corresponding author and would appreciate a good suggestion. Thank you in advance.
Hello. I am using reversed-phase HPLC with a C18 ODS column and a PDA detector. My mobile phase is 15 mM CH3COONa with 6% v/v CH3CN (pH 5.5). The baseline starts from zero, gradually drops to negative values, and never stabilises. I have run multiple washes with methanol:water (50:50) but still face the same problem. I was wondering what could be the reason for this drop?
Thank you in advance!
I am conducting an intervention study with two groups (control and experimental). The study subjects were monitored at baseline and then at the endline of the study. I would like to:
1. Compare the data at baseline between the two groups
2. Compare the data at endline between the two groups
3. Show the effect of the intervention on the parameters of the study subjects
4. Check for differences within the groups eg control at baseline compared to control at endline
Kindly advise on the appropriate statistical tests I should perform
Live-cell imaging using the IncuCyte Zoom is the best way to measure the effect of drugs on the rate of cell proliferation. I noticed that, no matter how accurately we try to count the cells using automated cell counters (Countess), the data points do not always start at the same value.
If we normalise the data to time point 0, the gradient of the slopes changes, which is not ideal.
One researcher said I should seed cells at a range of densities, deliberately introducing variation in counting, and choose the densities that start at the same point. I believe this is not correct, because you are just adding errors on top of those of the Countess, and additionally assuming all cells have the same size and shape.
I had been subtracting the baseline value at time point 0 from all time points of the respective condition, so that all curves start at 0. Reason: this subtraction does not alter any slopes.
The only problem is when cells reach 100% confluence and plateau. For example: if confluence started at 20% at time 0 and reached 100% at 96 h, remaining at a plateau until day 7, subtracting the baseline will show the cells plateauing at 80% at 96 h and 120 h. I think the best option is to plot only the log phase and subtract the baseline there.
Can anyone please comment on this and help me out.
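For comparison, a common alternative to subtraction is to divide each curve by its own time-0 value (fold change). All curves then start at 1.0, the 100% confluence ceiling is not artificially lowered, and on a log axis the exponential-phase slope is untouched by the normalisation. A small sketch with hypothetical confluence values:

```python
def fold_change(confluence):
    """Normalise a growth curve to its own starting value."""
    c0 = confluence[0]
    return [c / c0 for c in confluence]

# Hypothetical confluence (%): starts at 20%, plateaus at 100%
curve = fold_change([20, 40, 80, 100, 100])   # -> [1.0, 2.0, 4.0, 5.0, 5.0]
```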
I am having trouble getting LTP in hippocampal slices (CA3-CA1) from 9-16-week-old WT mice.
I can get LTP fine in aCSF containing 1 µM gabazine (I cut CA3 to prevent epileptic activity), but in plain aCSF I don't see any LTP. When I record with gabazine, I often see spiking after LTP induction, and I'm worried this is skewing my data. So it would be good if I could record LTP without gabazine to prevent this.
In both conditions I get stable baselines and the slices look healthy. For the baseline I use 40% of the maximal response. I extract the brain in choline chloride aCSF (I have also tried slicing in sucrose-containing aCSF). After slicing, the slices are maintained in standard aCSF and left to rest for 1 hour before being transferred to the rig at 30 °C. I induce LTP using theta-burst stimulation.
Can anyone help explain why I cannot get LTP without gabazine? And please let me know if you have any suggestions for what I could try.
Thanks in advance
I am having trouble with baseline removal when creating an epoch containing 1 trial. After creating the epoch, a window pops up that reads "remove mean of each data channel", rather than the window that allows you to specify the time window for baseline removal. I am also unable to remove the baseline later via "Tools" --> "Remove baseline". How can I baseline-correct this segment using a window of -200 to 0 ms (rather than the default -1000 to 0)? Is there a way to change the default settings of baseline removal in EEGLAB?
The input data are between 2006 and 2020, and the number of years I chose is 100. So the generated data give 100 years of values, from year 1 to year 100, without mentioning any dates.
I hope you have had a great day so far!
Well, I wonder how I can run a mixed-effects analysis in Stata with the following features:
Research question: What baseline variables predict my dependent variable over time?
Dependent variable: discrete --> Poisson distribution
Independent variables: both categorical and continuous variables
The following model is what I have planned so far. But I don't know how to consider only the baseline data from my IDs.
xtmepoisson DV ID##time || participant_ID:time, irr
My question is: What do I need to do to consider only the baseline data from my IVs?
Thank you in advance and happy holidays!
I am investigating the efficacy of Gestalt therapy with adolescents engaging in self-harm, using a single case experimental design. I have administered some tools to measure the level of self-harm, anxiety and depression at baseline, after 15 sessions and after 30 sessions. What statistical measures would you suggest I use to show the effect of the treatment besides visual analysis?
Hi there. My research aim is to reduce latency in a fog environment, and I have a baseline that I would like to compare my work to. In the baseline research paper, the proposed method was compared to a method called "no offloading", and it reduced latency by 40%. In my work, I compared my proposed method to the same "no offloading" method, and I reduced latency by 80%. My question is: do I have to implement the baseline in my simulation to officially compare my work to it? The problem is that the baseline method considers factors I don't, such as deadlines, and the parameter values used in the baseline differ from mine.
Can I use the mean difference (the difference in means between endpoint and baseline) and its associated standard error to calculate a standardised mean difference (SMD) and its 95% CI? (I also have an exact p-value and the sample size.)
For context: it's for a meta-analysis; all of my other studies provided the means and SDs needed to get my SMD.
n=239, mean difference: −1.2 [SE 1.48]; p=0.4154
Either formulas, online calculators or references are welcome :)
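If the reported mean difference is a within-group change with its SE (as the wording above suggests), then SE = SD_change/√n, and the SD and SMD follow directly. A sketch using the posted numbers; whether n = 239 is the paired sample behind that SE is an assumption worth checking against the paper:

```python
import math

def smd_from_change(md, se, n, z=1.96):
    """SMD from a within-group mean change, its SE and sample size,
    assuming SE = SD_change / sqrt(n) (paired design)."""
    sd_change = se * math.sqrt(n)
    smd = md / sd_change
    ci_md = (md - z * se, md + z * se)   # 95% CI of the raw mean difference
    return smd, sd_change, ci_md

smd, sd_change, ci = smd_from_change(md=-1.2, se=1.48, n=239)
```

Mind that SMDs built on change scores and SMDs built on endpoint scores are not automatically interchangeable in one pooled analysis; the Cochrane Handbook discusses when mixing them is acceptable.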
Essentially I am carrying out a process evaluation of an online intervention that is delivered to CYP with tic disorders. The main outcome measure is the Total Tic Severity Score (TTSS) as measured on the Yale Global Tic Severity Scale which is rated from 0-50 (higher scores indicating higher severity). The participants were measured on this scale at baseline and then 3-months later (primary end-point) to see if the intervention worked or not.
As part of the process evaluation, I need to see whether there were any mediators or moderators in the intervention group only (PEs do not look at the control group). So I would like to know the best statistical test to carry out this analysis, please.
So to break it down:
Dependent variables (both continuous):
TTSS at baseline
TTSS at primary end-point (just looking at one group only i.e. the intervention group)
Potential mediators/moderators:
Index of deprivation (continuous)
Method of referral (dichotomous)
Medication use (dichotomous)
Parental education level (categorical)
Treatment acceptance and satisfaction (continuous)
Level of engagement with the intervention (continuous)
Change in mental health (continuous)
Any help would be much appreciated! Many thanks.
PS. I only know how to use SPSS!
When we run housekeeping genes (such as GAPDH) as a baseline comparison control in RT-PCR analysis, the CT values vary a lot between different animals (in our case, 5 control rats), from 17 to 22. But we have to choose one CT value for the baseline comparison of other genes of interest. Has anyone encountered the same situation, and what is the solution?
Thanks a lot in advance!
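One way around choosing a single reference CT is the standard ΔΔCt approach: pair each animal's gene-of-interest CT with that same animal's housekeeping CT, so the 17-22 spread is absorbed animal by animal. A sketch with hypothetical CT values:

```python
def rel_expression(ct_target, ct_ref, calibrator_dct):
    """2^-ddCt: each animal's target CT is normalised to its own
    housekeeping CT before comparison with a calibrator delta-CT."""
    dct = ct_target - ct_ref        # per-animal normalisation
    ddct = dct - calibrator_dct     # vs. control-group mean delta-CT
    return 2 ** -ddct

# Hypothetical rat: target CT 25, its own GAPDH CT 19;
# mean delta-CT of the control group = 5
fold = rel_expression(25, 19, calibrator_dct=5)   # -> 0.5
```

If GAPDH itself varies with the treatment, the usual fix is to validate a more stable reference gene (or use the geometric mean of several) rather than pick one fixed CT value.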
I have enrolled more patients (362 against the planned sample size of 153) due to loss to follow-up. Now the baseline data of all patients show an interesting picture, and I want to publish it. The question is: do I have to mention the original design and the background of these samples in the methods? Thank you.
Hope you are Healthy and doing well,
In my study I have two experimental groups (patients with neck pain and sleep disturbances at baseline); these two groups are equal at baseline on almost all variables. I also have one control group (healthy participants).
After 6 weeks of intervention I have taken the post readings.
Is a two-way repeated-measures ANOVA a suitable test to use, or would another test be more accurate?
If a two-way repeated-measures ANOVA is suitable, should I incorporate a covariate in the analysis or leave it out?
I have a problem calculating the final value in a study.
Knowing the baseline of a parameter in mean & SD form, and the change rate after treatment in mean & SD% form, how can I calculate the final effect in mean & SD form?
E.g. a biomarker:
baseline: 0.7 ± 0.1 mg/kg
change: 6 ± 5% after a year of treatment
final effect: ?
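Assuming the 6 ± 5% change is relative to baseline and roughly independent of it (rarely exactly true, so treat the result as approximate), first-order error propagation gives one answer:

```python
import math

# Baseline: 0.7 +/- 0.1 mg/kg; relative change: +6% +/- 5 percentage points
m_b, s_b = 0.7, 0.1
c, s_c = 0.06, 0.05

m_final = m_b * (1 + c)   # final mean: baseline scaled by the mean change

# var(final) ~= (1+c)^2 * var(baseline) + baseline^2 * var(change)
s_final = math.sqrt((1 + c)**2 * s_b**2 + m_b**2 * s_c**2)
```

If individual-level data or the baseline-change correlation were available, they should be used instead; the independence assumption can noticeably misstate the final SD.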
I am looking for a calculation procedure or advice on how to determine the duration of a baseline (resting-state) recording for an EEG experiment.
Let's say I'm interested in the occurrence of theta activity during a 60-min task, which will be recorded.
Is there an optimal resting-state recording duration?
Most of the literature uses a 5-minute baseline recording; is there evidence for these 5 minutes?
Thank you for any help on this,
I'm trying to determine sample sizes for some straightforward studies looking at quality-of-life measurements pre/post intervention using multiple survey tools. I'm using effect sizes from previous studies in my sample-size estimation via power analysis, but most don't provide the standard deviation of the paired changes needed to calculate a Cohen's d for dependent samples. What most of these do have, however, is the mean change from baseline with a 95% confidence interval.
I can see in the Cochrane Handbook (https://training.cochrane.org/handbook/current/chapter-06#section-6-5-2) a method of obtaining the SD for a difference in means by using the CI to calculate the SE, then the SE and group n's to calculate the SD. My question is whether this method is only valid for getting the SD of the difference between two independent groups (as the example in the handbook shows), or whether it could also be valid for getting the SD of the difference between baseline and post-intervention within the SAME group?
P.S. I know that imputing SDs for dependent samples when you only have summary statistics has been discussed here before. After hours of reading, I'm still having trouble determining whether this is even possible.
We have two columns we use for our HPLC, both Restek: an amino column and a C18 column. I just ran 90+ samples on the C18 column with no baseline issues at all, so that rules out mechanical issues with the HPLC itself (pump, UV detector, etc.).
Now, with the amino column, we have been trying to get separation of sugars. A past lab mate got great separation with this column back in 2019; since then, we haven't been able to get the same kind of separation with the same instrument parameters she used. We are currently using 75% acetonitrile / 25% water as the mobile phase at a 2 mL/min flow rate, which is what was used previously as well. The baseline isn't stable at all and the UV detector just keeps saying "OVER".
We have backflushed the column with different solvents per Restek recommendation and we still have the same problem.
We also did an 8-hour 0.2 mL/min IPA flush with no column. Still no separation, and a wildly negative baseline. There are absolutely no pressure problems either.
Does anyone have any suggestions?
I am going to perform behavioural tests (rotarod and Morris water maze) in rats before and after drug administration (7 and 14 days after the first application). Each test includes training sessions and baseline data. I am not sure whether I have to include training sessions at each time point, or whether the training performed before treatment is sufficient.
Thanks in advance.
I am analysing data from an intervention study seeking to improve antiretroviral treatment (ART) adherence using depression management (a package of defined strategies ranging from psychoeducation and interpersonal psychotherapy to pharmacotherapy, depending on depression severity). My baseline data showed the depression score to be significantly correlated with the ART adherence score, and major depression (adjusted for other covariates) to be significantly associated with poor adherence. Also, for those who had depression or minimal symptoms (in a per-protocol analysis), the intervention significantly reduced depression scores compared to controls. Now I want to estimate the real effect of this change in depression on the change in adherence to treatment, i.e. adjusting for the other covariates used in the baseline regression model. I have two questions: Is this the right approach to this analysis? If yes, how do I proceed in getting the change in adherence adjusted for the other covariates used in the model at baseline?
I want to see whether a biomarker level at baseline can be used to predict the prognosis after a treatment, as compared to a clinical parameter.
Which statistical model will be best to investigate it?
We are recording field EPSPs at CA3-CA1 synapses from 2-3-month-old WT mice. We can get good EPSPs (size trace in blue), but during the baseline the EPSP gets smaller (yellow trace), then larger again, and cycles over the course of the baseline.
As you can see in the traces, the EPSP changes in size but the fibre volley remains the same, so I don't think the electrodes are drifting. We also wrap our recording electrodes with cotton so droplets of aCSF do not form and land on the slice.
Please let me know if you have any suggestions of other things we can check. Thanks!
Hello, I want to calculate the percentage change in rainfall by comparing future rainfall values against the baseline period. My baseline period is 1961-1990 and the future periods are 2041-2070 and 2071-2099. I want to know whether the baseline period values should be taken from the observed rainfall data or from the model-simulated data for the same period.
Thank you in advance.
The study aims to investigate the relationship of meal frequency and timing with changes in BMI, based on cohort data in which meal frequency was obtained at the last follow-up and the change in BMI compares BMI at baseline with BMI at the last follow-up.
We have been doing single-molecule TIRF experiments with the virus protein Gag to look into its assembly process. When analysing the data, some traces look nice, but some have this baseline drifting problem. We tried some optimisations, but the problem is still there. Our problems are:
1) as shown in Fig 1 (attached), we think there is clear baseline drift (the signal gradually decreases to around 0 intensity), and
2) in some of the slides, the overall signal is very intense for the first couple of seconds (Fig 2, attached), and then the image gradually becomes more "normal" (Fig 3, attached). This has been making our data processing very difficult, and we are not sure whether it is related to the baseline drift.
The buffer we are using contains 100 µM propyl gallate, 2 mM DTT and 4.5 mM Trolox. We are fairly new to single-molecule TIRF and would really value expert opinions on how to improve/optimise the system.
Thanks in advance for any input! We really appreciate your help!
I would be grateful if anyone could provide me with a real data set consisting of residential customers’ smart meter readings to use for customer baseline load (CBL) estimation.
We have different sets of data for MHD patients at different time points, and the problem is that we are missing patients at each time point (death, transfer to another hospital, transplant, or drop-out for unknown reasons); the number of surviving patients is now almost half the number at baseline. It would be really helpful if anyone could suggest a suitable, advanced statistical procedure by which we could assess the changes in patients' health outcomes over time.
I have the following design: the independent variable is categorized as present/absent, and the dependent variable is BMI measured at baseline, week 24, week 48, week 96 and week 144.
A woman at 33 weeks' gestation presented with bleeding and a non-hypoxic trace; a subsequent trace showed a deceleration of 4 minutes but recovered to a baseline of 140 bpm, consistent with previous monitoring and antenatal auscultation.
Following magnesium sulphate, as the woman was thought to be labouring, the baseline was 110 bpm with all other features normal, cycling present and accelerations. Why?
The baseline of PVA jumps suddenly towards higher intensity (on the Y axis) on reaching 200 °C.
Hello, I recently ran into some issues with Bruker's DataAnalysis 4.2 software. For example, I take a chromatogram from an analysis for processing, smooth it, and then subtract the baseline of the smoothed chromatogram. In the sample field we then have a tree with the original chromatogram (level 1), the smoothed chromatogram (level 2) and the baseline-subtracted chromatogram (level 3). After saving this analysis and reopening it, the level 3 data is not shown in the chromatogram field, but if I delete level 2, the baseline-subtracted chromatogram moves to level 2 and the data appears. Any suggestions on how to solve this, so that all levels are shown after reopening a saved analysis? Thanks.
There's an article on Ashwagandha at:
It's a crossover study, switching between Ashwagandha and placebo. For the first period (8 weeks), testosterone levels in the Ashwagandha group seem to have decreased from a baseline of 354.22 pmol/L to 332.77 pmol/L. This is not explicitly stated in the study, but I would appreciate it if anyone could verify whether that is true; I'm not sure I'm reading the crossover study correctly. It also seems unusual given the research indicating that Ashwagandha increases testosterone.
I have downloaded SDSS spectra of dwarf galaxies for my study, and I am studying the strongest emission lines. I am a bit confused about whether we need to perform a baseline correction before the measurement. I need your help. Thanks.
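For emission-line measurements the "baseline" is usually the stellar/instrumental continuum, and one simple approach (a generic sketch, not an SDSS-pipeline recipe; dedicated tools do this more carefully) is to fit a low-order polynomial to line-free regions and subtract it before integrating the line flux:

```python
import numpy as np

def subtract_continuum(wave, flux, line_windows, order=2):
    """Fit a low-order polynomial continuum to the spectrum,
    excluding the given emission-line windows, and subtract it.
    line_windows: list of (lo, hi) wavelength ranges to mask out."""
    mask = np.ones(wave.size, dtype=bool)
    for lo, hi in line_windows:
        mask &= ~((wave >= lo) & (wave <= hi))
    coeffs = np.polyfit(wave[mask], flux[mask], order)
    return flux - np.polyval(coeffs, wave)

# Synthetic spectrum: flat continuum at 1.0 plus a Gaussian "line" near 6563 A.
wave = np.linspace(6400.0, 6700.0, 301)
flux = 1.0 + 5.0 * np.exp(-0.5 * ((wave - 6563.0) / 3.0) ** 2)
clean = subtract_continuum(wave, flux, [(6550.0, 6576.0)], order=1)
print(abs(clean[0]) < 0.01)  # continuum removed outside the line -> True
```

After subtraction, the line flux can be measured by summing `clean` over the line window times the wavelength step. Whether this step is needed at all depends on how strong the continuum is under your lines.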
Good afternoon. I am recording with Fura-2AM to observe the intracellular calcium response to different compounds. I observe the response at two different times, at 60 seconds and then at 400 seconds. Between the first and second responses, the baseline tends to rise, and I have not been able to determine the cause of this problem. Could somebody help me?
I asked a question on disaggregating daily rainfall data to hourly data here:
Now, my question is whether, in your opinion, the following way of disaggregating rainfall data works well.
We have both the daily rainfall time series for the future period (i.e., the output of GCM models) and the observed hourly rainfall data for the baseline period. We distribute the future rainfall within each day according to the temporal distribution pattern of the observed data (assuming the within-day rainfall pattern is the same in the future as in the baseline).
I'd like to know if this method seems reasonable.
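If I understand the scheme correctly, each future daily total is split into hours using the hourly fractions of a matched observed day. A minimal sketch of that step (the day-matching logic, which is the hard part, is omitted, and the numbers are hypothetical):

```python
def disaggregate_day(daily_total, observed_hourly):
    """Split a daily rainfall total into hourly values using the
    within-day distribution of a matched observed day."""
    obs_total = sum(observed_hourly)
    if obs_total == 0:
        return [0.0] * len(observed_hourly)   # matched observed day was dry
    fractions = [h / obs_total for h in observed_hourly]
    return [daily_total * f for f in fractions]

# Hypothetical: a 12 mm future day, observed pattern concentrated in 3 hours.
observed = [0.0] * 10 + [2.0, 4.0, 2.0] + [0.0] * 11
hourly = disaggregate_day(12.0, observed)
print(sum(hourly))  # mass is conserved: 12.0
print(hourly[11])   # peak hour: 12 * 4/8 = 6.0
```

The method conserves daily totals by construction; its main weakness is the stationarity assumption, i.e. that future storms keep the observed within-day structure.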
If I want to take UV spectra of an enzymatic reaction, what is the best way to do it? What troubles me specifically is the blank: what is the best way to set my baseline? Using buffer + substrate? Or buffer + substrate + inactivated (boiled) enzyme (crude cell extract with the overexpressed enzyme)? Thank you all in advance for your help.
In my systematic review, I have mostly included studies that report baseline, endpoint and change-from-baseline data for the outcome; however, one study presents only the change from baseline for the two treatment groups. I don't know how I can make a forest plot from these data. I'm using RevMan5.
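One generic option (not RevMan-specific; check the Cochrane Handbook's guidance before mixing change scores with endpoint scores in one SMD analysis) is to treat each arm's change-from-baseline mean and SD as the outcome and compute the effect size from those summaries. A sketch of Hedges' g from such data, using the common small-sample correction:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g from two-group summary statistics (e.g. change-from-
    baseline means and SDs): Cohen's d times the small-sample
    correction J = 1 - 3 / (4*(n1 + n2) - 9)."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)
    return d * j

# Hypothetical change-from-baseline summaries (intervention vs control):
g = hedges_g(-6.0, 4.0, 30, -2.0, 4.0, 30)
print(round(g, 3))  # -> -0.987
```

Note that change-score SDs are often not reported directly and may have to be reconstructed from baseline/endpoint SDs and a pre-post correlation, which is its own can of worms.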
I am doing a thesis. The aim of the study is to see the effect of STI infection during pregnancy on adverse birth outcomes. All volunteer pregnant women were screened for STIs at a certain week. A few of them were positive, others were not infected at that moment, and the rest did not know their status. I consider this screening the baseline. Exposure status for STI infection was then assessed every two weeks until delivery, so the infection status of each individual varies. The gestational age at baseline varies from individual to individual, so the delivery dates also vary, which means the outcome measurement date is the delivery date. My questions are:
- Can I call this a prospective follow-up or cohort study design?
- Which date is considered the baseline: the first STI screening date or the STI infection date (in which case each individual can have a different baseline date)?
- If it is a follow-up/cohort study, each woman has a different number of follow-ups, because gestational age at the start of the survey and expected delivery dates vary. How, then, can I handle the administrative missing data?
For my bachelor thesis I am planning a baseline measurement, an intervention and a follow-up test on the treatment group.
How should I deal with dropouts? Do I have to include them in the statistical analyses, for instance if a participant completes the baseline test but does not participate in the intervention?
Can anyone explain to me how to calculate the IRD (improvement rate difference) in single-case research when most of the data points in the baseline are higher than the data points in the treatment phase?
I have read Reichow's article describing the method, but in some cases I don't understand it. In particular, I don't understand how exactly to draw a straight line across the data points to find the overlapping data.
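My reading of the IRD procedure (please verify it against Parker, Vannest & Brown's original description) is: remove the fewest data points needed to eliminate overlap between phases, then IRD = improvement rate of the treatment phase minus improvement rate of the baseline phase, where removed baseline points count as "improved" in baseline and kept treatment points as "improved" in treatment. A brute-force sketch under that reading, assuming higher scores mean improvement (negate both phases first if lower is better, as in the question):

```python
def ird(baseline, treatment):
    """Improvement Rate Difference, assuming HIGHER scores = improvement.
    Tries every cut value c; removing baseline points > c and treatment
    points <= c eliminates overlap, and we keep the c that removes the
    fewest points overall."""
    candidates = sorted(set(baseline) | set(treatment))
    best = None
    for c in candidates + [min(candidates) - 1]:
        rem_b = sum(1 for x in baseline if x > c)
        rem_t = sum(1 for x in treatment if x <= c)
        if best is None or rem_b + rem_t < best[0] + best[1]:
            best = (rem_b, rem_t)
    rem_b, rem_t = best
    ir_treatment = (len(treatment) - rem_t) / len(treatment)
    ir_baseline = rem_b / len(baseline)
    return ir_treatment - ir_baseline

print(ird([1, 2, 3], [4, 5, 6]))  # complete nonoverlap -> 1.0
```

For the case in the question (treatment below baseline), call it as `ird([-x for x in baseline], [-x for x in treatment])` so that improvement points downward.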
I am using a linear mixed model to analyze longitudinal data. In my restructured database, Time is named Index1. SPSS automatically uses the final time point as the reference category. I want to see changes relative to baseline, so I would really like to see estimates for Index1 = 2, Index1 = 3 and Index1 = 4 instead of Index1 = 1, Index1 = 2 and Index1 = 3.
I think I can do this mathematically, but it would be much nicer to do it directly in SPSS... does anyone have an idea?
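If I recall correctly, SPSS MIXED aliases the highest-coded category of a factor, so a common workaround is to recode Index1 so that baseline gets the highest code (e.g. reverse-code the time variable); all fixed-effect estimates are then contrasts against baseline. The dummy coding this amounts to can be sketched and cross-checked outside SPSS (a generic illustration, not SPSS syntax):

```python
import numpy as np

def baseline_reference_dummies(time_codes, baseline=1):
    """Dummy-code a time variable with the baseline level as the
    reference: one 0/1 column per non-baseline time point, so each
    coefficient is a contrast against baseline."""
    levels = sorted(set(time_codes))
    levels.remove(baseline)
    X = np.array([[1 if t == lev else 0 for lev in levels]
                  for t in time_codes])
    return X, levels

X, levels = baseline_reference_dummies([1, 2, 3, 4, 1, 2, 3, 4])
print(levels)   # columns correspond to time points [2, 3, 4]
print(X[0])     # baseline row codes to all zeros -> [0 0 0]
```

The coefficients on these columns are exactly the "change from baseline" contrasts you are after, whichever software fits the model.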