Handling (Psychology) - Science topic
Handling (Psychology) is the physical manipulation of animals or humans to induce a behavioral or other psychological reaction. In experimental psychology, an animal is handled to induce a stress situation or to study the effects of "gentling" or "mothering".
Questions related to Handling (Psychology)
Good morning all,
Maybe a stupid question, but how do you handle this? I am planning a keyword analysis in my literature review on family businesses. I compiled almost 20 years of research and took the keywords from the papers to analyse them. After running my bibliometric analysis, I saw that, for example, both "family firm" and "family business" are used. In my opinion, these two terms mean the same thing. Can I combine them by renaming "business" to "firm", or vice versa?
Best, Christian
How can I efficiently handle and analyze large remote sensing datasets in Python, specifically for vegetation indices calculation?
I am currently working with a dataset containing multiple years of Landsat imagery, and I want to calculate various vegetation indices such as NDVI, EVI, SAVI, etc. However, my code runs extremely slowly and I keep running out of memory.
How can I optimize my code and handle this large dataset in a more efficient manner?
[Important] I don't want to use Google Earth Engine or any online platforms such as Google Colab.
--- Here is my sample code ---
import numpy as np
import rasterio

# Open the input once and create a matching full-size output, then process
# window by window so only one block is held in memory at a time.
with rasterio.open("landsat.tif") as src:
    profile = src.profile
    profile.update(count=3, dtype="float32")

    with rasterio.open("output.tif", "w", **profile) as dst:
        # block_windows yields ((row, col), window) pairs
        for _, window in src.block_windows(1):
            data = src.read(window=window).astype("float32")

            # Band indices assume blue, red, NIR are data[0], data[2], data[3];
            # adjust for your sensor / stack order.
            blue, red, nir = data[0], data[2], data[3]

            # Suppress divide-by-zero warnings for empty/nodata pixels
            with np.errstate(divide="ignore", invalid="ignore"):
                ndvi = (nir - red) / (nir + red)
                evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)
                savi = ((nir - red) / (nir + red + 0.5)) * 1.5

            # Write each index into the same window of the single output file
            dst.write(ndvi, 1, window=window)
            dst.write(evi, 2, window=window)
            dst.write(savi, 3, window=window)
Because of the HF content of Kroll's reagent, which is very dangerous to handle, I would like to know whether there are other etchants for titanium-aluminium alloys. Thanks.
Hello. I was curious what I should do about preprints after they have been peer reviewed and published in a journal: do I delete the preprint, or do I simply mark the preprint as published and add the journal details? I was curious what you, as a community, tend to do.
Thank you.
Techniques to handle vanishing or exploding gradients in deep learning neural networks?
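Common answers include gradient clipping, careful weight initialization, batch/layer normalization, residual connections, and gated architectures (LSTM/GRU). As one framework-agnostic illustration, global-norm gradient clipping can be sketched in plain NumPy (function name and parameters are my own, not from any particular library):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Scale a list of gradient arrays so their combined L2 norm
    does not exceed max_norm (mitigates exploding gradients)."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# Example: a gradient whose norm (5.0) exceeds the cap of 1.0 is rescaled
clipped = clip_by_global_norm([np.array([3.0, 4.0])], max_norm=1.0)
```

Deep learning frameworks ship equivalent utilities, so in practice you would call the built-in rather than roll your own.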
Like solve_ivp or odeint in Python, which show warning messages if any discrepancy arises during runtime; in Boost odeint we need to create an observer for that. I just want to know whether any predefined observer exists to handle all kinds of error and warning messages, or whether we have to create our own.
I have checked that Boost odeint provides odeint_error.hpp and exception.hpp, but they can't be used directly.
I have heard that Bayesian Networks handle noisy data sets with more accuracy than other machine learning approaches, including improved learning and predictions. Are there any citations that support this?
How should you properly handle the relationship between yourself and your teachers in a situation where your advisor gives you no academic guidance while you are pursuing your master's degree, you have to worry about graduating all the time,
and your teacher also attacks you psychologically? I thought I was going to become depressed.
Hello,
I have tried different methods of making a refolding buffer containing cystine. I dissolve the cystine in HCl first, but when I raise the pH to 8 I can see it precipitating. How can I handle the cystine solution?
Hi everyone. My question is: how do I handle missing data in a paired t-test?
I am looking at pre-post survey data for a college course. The survey questions examine students' attitudes on multiple subjects from the beginning of the course to the end. I used a Likert scale, and course instructors were hoping that attitudes towards the topics would improve in a positive direction.
In the pre-test I have about 598 responses; in the post-test I have about 363 responses, out of 700 respondents in total. Some of the students who did the pre-survey dropped the course, and some of the students who did the post-survey added the course later in the quarter. My analytical sample with completed responses for both pre and post is about 250. When I run the paired t-test, it seems that attitudes towards the topics improved. I am worried about non-response bias.
After conducting expectation maximization, I have complete data for all 700 respondents. When I run the paired t-test on these data, it appears most attitudes towards the topics became more negative. With more than half of the values imputed, I am worried that this method skewed the data.
What would be your recommended course of action?
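For reference, the complete-case option described above (keeping only respondents observed at both waves) can be sketched in Python with toy scores; the `np.nan` entries stand in for non-response:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post Likert scores; np.nan marks non-response
pre = np.array([3, 4, 2, 5, np.nan, 4, 3])
post = np.array([4, 4, 3, np.nan, 5, 5, 4])

# Complete-case analysis: keep only pairs where both waves have a response
mask = ~np.isnan(pre) & ~np.isnan(post)
t_stat, p_value = stats.ttest_rel(pre[mask], post[mask])
```

The statistical worry in the post (non-response bias vs. over-imputation) is not solved by the code, of course; sensitivity analyses comparing complete-case and imputed results are a common compromise.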
Hello All,
I have a question regarding the SEM-based path analysis. I construct my model as the following:
"Y~M1+M2+X1+X2
M1~X1
M2~X2"
As you can see, both M1 and M2 are used as mediators in my model.
Here comes my question: All my variables (Y, M1, M2, X1, X2) are categorical variables. By running the above model in R ("lavaan" package), I can get the results (i.e., coefficient, p-value) of each category of Y, M1, and M2. However, my results do not show each level of X1 and X2.
Are there any ways to solve this issue?
Any help will be much appreciated!
I just want to know if like titanium isopropoxide, titanium (IV) sulfate also needs to be handled in a glovebox? Thanks!
Peer review can often become delayed when trying to find suitable handling editors or reviewers for a manuscript. Increasing the size of the Editorial Board can create a ready-made resource of potential reviewers or handling editors to be invited more quickly - https://docs.google.com/forms/d/e/1FAIpQLSfjqWsx2Sh5ftjK6pwds7Lq7eqsj_OF_11pwpxCviiKfHS2Zg/viewform?usp=sf_link
I would like to develop an agent-based model in which agents are imposed with cognitive constraints.
I plan to initialize these constraints by means of a special distribution that tells us how many individuals can handle, say, 5 posts on social media per day, how many individuals can take in 10 posts per day, and so on.
Are there any empirically supported distributions that can model this issue?
It seems that the majority of people can handle only a limited amount of information (recall the Dunbar number), but at the same time, I suppose, there are individuals who can (and are willing to) process much more content. For example, many of us read posts and only the top comments (suggested by ranking algorithms) on social media, whereas others read all comments.
The question is whether such patterns have been already analyzed in quantitative empirical studies. In particular, it would be interesting to figure out if individuals who read many posts on social media make the objective distribution heavy-tailed or not.
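As a purely illustrative sketch (the lognormal choice and its parameters are my assumption, not an empirically validated distribution), heterogeneous per-agent capacities with a heavy right tail could be initialized like this:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical heavy-tailed capacity: most agents handle a handful of
# posts per day, a small minority handle many (parameters are made up)
capacities = np.maximum(1, rng.lognormal(mean=1.5, sigma=1.0, size=10_000)).astype(int)
```

Whether a lognormal, a power law, or something else best matches empirical attention data is exactly the open question in the post.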
One of the easiest ways to handle missing or corrupted data is to drop those rows or columns or replace them entirely with some other value.
There are two useful methods in Pandas:
- isnull() and dropna() will help find the columns/rows with missing data and drop them
- fillna() will replace the missing values with a placeholder value
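A minimal sketch of the two approaches above on a toy DataFrame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [4.0, 5.0, np.nan]})

# Locate missing values, then either drop or fill them
missing_mask = df.isnull()   # boolean mask of missing cells
dropped = df.dropna()        # drop rows containing any NaN
filled = df.fillna(0)        # replace NaN with a placeholder value
```

Whether dropping or filling is appropriate depends on how much data is missing and why; a constant placeholder like 0 can bias downstream statistics.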
How can we handle similar temperatures in the rainy season (20 °C) and the spring season (20 °C) in time series analysis if we have only one year of data? How can we classify the seasons?
While operating an isocratic HPLC (Elico make) with a C-18 column,
1 mL/min flow rate, manual injection,
I get only hill-like peaks and nothing else.
I have also tried various mobile phases, but in the end I get the same result.
Can anyone help with this?
What is the simplest method to determine the ratio of Cr(III) to Cr(VI) in a real matrix solution?
I was thinking about ion chromatography, are there other techniques suitable for my purpose?
What precautions should I use to avoid altering the ratio between the two species when handling the sample prior to injection into the instrument?
Thank you
I'm really confused about handling missing data. Some say MAR and MCAR are ignorable, while others suggest other methods.
I am working on a project which uses waves 5, 6, and 7 of a longitudinal study (Millennium Cohort Study). I am trying to understand how to deal with missing data and how to weight the data to ensure that the sample is representative. The data include both design weights and non-response weights. I’m really confused about which weighting variable I should use. I will be using a cross-lagged panel model (SEM) to examine the bidirectional relationships between the variables of interest at the three waves. My question is, do I weight the data from each wave using the weight from that wave (e.g., wave 5 weight for wave 5 data) or do I use the wave 7 weight for all of the data? I assume I would use the design weight for this.
I am also trying to deal with missing data due to attrition, and it is recommended to use multiple imputation. Does a non-response weight need to be applied when using multiple imputation? If so, do I impute data for each wave separately using the weight for that wave, or do I impute data for the entire dataset using the wave 7 weight?
I have read the documentation for the study, but it is incredibly confusing and I'm still not sure how to handle weighting and missing data.
Any suggestions on how to handle weighting and missing data in longitudinal surveys with a complex design would be much appreciated.
Is it possible to use SEM in AMOS to model the relationship between two continuous IVs, a continuous mediator, and two binary dependent variables?
I am interested in using SEM because I want to model the relationship between the IVs, mediator, and DVs and the relationship between the two DVs within three different groups.
I have access to AMOS but not MPLUS. If this isn’t possible with AMOS, can someone recommend a free or low-cost alternative capable of handling models like this?
Thank you
Hello,
This is my first time handling the 3T3-L1 cell line.
I've been working with these cells for almost a month. I always work from a main culture at P12-13. After the 1st and 2nd subcultures the cells still look fine, but after the 3rd subculture they always look contaminated. On the 1st day after subculture the cells look fine at ~40-50% confluence, but on the 2nd day they always look contaminated: there is something white in the media, and under the microscope the cells grow too much. I have attached some pictures for better observation. If anyone has advice or has had the same experience, please help!
I have ecological data and a physiological response variable. I suspect there are circadian rhythms at play, but the current COSINOR models are a bit heavy for what I'm trying to do. What is the best way to handle day of the year in a GLM (if there is one!) that accounts for its cyclical nature (i.e., day 366 wraps back to day 1)?
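One common approach (essentially the same harmonic terms a cosinor model uses, but entered as ordinary GLM covariates) is to encode day of year as sine/cosine pairs. A minimal sketch, assuming a 365-day period so that day 366 lands on day 1:

```python
import numpy as np

def cyclic_features(day_of_year, period=365):
    """Map day of year onto a circle so day 366 coincides with day 1."""
    angle = 2 * np.pi * np.asarray(day_of_year, dtype=float) / period
    return np.sin(angle), np.cos(angle)

# These two columns can then enter a GLM as ordinary covariates
s1, c1 = cyclic_features(1)
s366, c366 = cyclic_features(366)
```

Fitting both the sine and cosine terms lets the model estimate the amplitude and phase of the annual cycle without any discontinuity at the year boundary.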
Hi all,
I am going to be cryogrinding 800 plant samples in a SPEX Genogrinder with the 5ml cryoblock adapters and the 5 ml tubes (see image). My main issue is that I cannot find a marker that does not rub off during grinding and also when handling the tubes. I tried Staedtler lumocolour permanent markers and a variety of Artline markers. Whilst these markers work great on normal eppies, they do not work on these tubes at all.
Any suggestions would be greatly appreciated
Hello!
I am trying to solve an optimization task (curve fitting, actually: a normalised sum of nonlinear functions, each with 3 parameters to estimate).
I know that this task has been solved using the BARON optimizer and Matlab, but BARON is quite expensive and can't be used for free.
Maybe there is a solver that is free, open source, and usable with Python?
I have used the IPOPT package through GEKKO with the MA-57 linear solver for my task, but this solver unfortunately gave me unstable results or didn't find a solution at all... I am fitting 15 parameters in one equation with noisy data, about 100+ points.
Another version of the task involves about 300 parameters, so I am looking for a good alternative. Can anyone suggest one?
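One free, open-source route worth trying before reaching for a global solver is `scipy.optimize.least_squares`, whose robust losses and box bounds often stabilize fits on noisy data. A toy sketch with a made-up 3-parameter model (the same pattern scales to larger parameter vectors):

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical noisy data from a 3-parameter exponential-plus-offset model
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 120)
true = np.array([2.0, 1.3, 0.5])
y = true[0] * np.exp(-true[1] * x) + true[2]
y = y + rng.normal(scale=0.01, size=x.size)

def residuals(p):
    return p[0] * np.exp(-p[1] * x) + p[2] - y

# Bounds and a robust loss help with noise and unstable solutions
fit = least_squares(residuals, x0=[1.0, 1.0, 1.0], bounds=(0, 10), loss="soft_l1")
```

For genuinely multimodal problems, restarting from several random `x0` values (or a global wrapper such as `scipy.optimize.differential_evolution`) is the usual free substitute for a deterministic global solver like BARON.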
Dear all
I have a lyophilized PEG-modified aptamer, and I need a protocol for handling this aptamer for detection purposes, like the protocols for handling amino- and thiol-modified aptamers.
I just want to know whether a PEG-modified aptamer needs a special folding buffer, or whether I can use the general handling protocol.
Thanks in advance
Paratonia is one of the symptoms present in people with dementia. Even in the early stages there are signs that it influences walking/balance as well as the selectivity of the hands (Bieke Van Deun, Hans Hobbelen).
But we know that it can develop into an extreme form and create an extreme posture. How could we handle this increase in tone, and is our handling itself also a cause of the increase?
The brain damage is one part, but during transfers etc., in which the person must counter (for example, when someone pulls him upright), will this increase the paratonia?
For the largest, most speciose genera (more than 1,000 species), which have a lot of gene sequences, the data are very difficult to handle. I would therefore like to ask the experts to share their input on making world revisions based on morphology or molecular phylogeny. What are the steps for initiation, elongation, and successful termination?
Spending survey asked two questions:
The first question was how much you spent per day; the second, how much you spent on each of five categories (meals, lodging, etc.).
I have 1000 survey respondents, with some answering the first question but leaving the second question empty. We know they spent, but we don't know in which categories. Do I weight-average that amount across the categories?
The bigger issue is that some will fill out one or two categories but leave the other categories blank. Are they telling me they spent $0 in those categories, or did they just not want to answer? If I put $0, that lowers my average spending; if I leave it blank, it could inflate my average spending.
A fellow economist suggested that if they answered at least one category, you could assume they spent $0 in the other categories; after all, they took the time to answer at least one. If they left all categories blank, then leave them blank. Do you think this is good practice, or do you suggest another approach?
I did research on missing survey data, but couldn't find anything that related to my situation.
Hi everyone. Currently, I'm doing research on colon adenocarcinoma using the HT-29 cell line with natural product treatments. However, I'm having problems reviving CCD-18Co, a normal colon cell line used as the control in my study. I have already thawed almost 5 vials, but the cells didn't grow at all. Several of my friends said that maybe we need to add a growth factor solution to the media, and we also discussed that maybe we need to change the cryopreservation solution, as DMSO (10%) may be toxic to this kind of cell. I would be very grateful if another researcher could help me with how to handle CCD-18Co. Thanks.
I am currently writing my thesis on the effects of prolonged handling on the behavior and physiology of mice, comparing a handled and a non-handled group. I have worked out the duration spent per observed behavior for four handled individuals and four non-handled individuals. I discovered that the data from certain individuals in each group are not normally distributed, while the data from other individuals are. Can anyone advise which statistical test I can use to determine whether there is a significant difference in the duration spent per behavior between the two groups? I'd also like to test for significance in the duration spent per behavior between handled and non-handled individuals only. Hope this makes sense.
I'm not very clued up on statistics.
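When normality is doubtful and the groups are this small (n = 4 per group), a rank-based test such as the Mann-Whitney U is the usual fallback, since it makes no normality assumption. A hedged sketch with made-up durations:

```python
import numpy as np
from scipy import stats

# Hypothetical durations (s) spent on one behaviour, n = 4 per group
handled = np.array([120.0, 95.0, 150.0, 110.0])
non_handled = np.array([200.0, 180.0, 240.0, 210.0])

# Mann-Whitney U: rank-based, no normality assumption, suits small samples
u_stat, p_value = stats.mannwhitneyu(handled, non_handled, alternative="two-sided")
```

If you test many behaviours this way, remember to correct for multiple comparisons (e.g. Bonferroni or Benjamini-Hochberg).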
As I understand it, photoelectrons from p, d, f... orbitals can produce pairs of similar peaks in XPS that differ in size, caused by spin-orbit coupling.
Now, I'm analysing my metal oxide sample with XPS.
My Ni scan shows the two Ni 2p1/2 and Ni 2p3/2 peaks as expected, but their shapes are different:
the latter has one or two extra peaks, even at a glance.
So, my questions are:
1) What can make this happen?
2) Which peak should I use?
How can we handle the multi-label, multi-class data in the DEAP dataset using Python?
I'm starting a new project on production of monoclonal antibodies (mab) in hybridoma cells.
And I would appreciate it if someone could share some good reference materials on how to handle mice (including governmental regulations pertaining to animal-based research), injection techniques, splenectomy, splenocyte isolation, fusion of splenocytes with myeloma cells, media for cultivation and cryopreservation of hybridomas, mAb production induction, etc.
Thanks for tips and references. I appreciate it.
A universal controller with fixed parameters is designed by the derivatives balance method. The controller can simultaneously handle the problems of stabilization, adaptation, reference tracking, and disturbance rejection for both linear and nonlinear systems without retuning the controller parameters.
more info about derivatives balance technique https://t.me/universalcontrol
I worked during 1986-88 on self-tuning control of a pressure control process.
The self-tuning controller provided superior performance compared to a PID controller, both in simulation and in real-time process control.
The main issues studied were the robustness of the parameter estimator, the handling of nonlinear process models, and oscillation detection to ensure process safety in the case of large identified model error.
I don't know of many practical applications of self-tuning control in the process industry.
Please advise if you know of any practical application case studies.
I have multiple time-series data over the same timespan (from 1989 to 2019) at an interval of 3 years. I am using SPSS 26 to make forecasts of these time series for 2022. I have changed the timeline data into date type and defined the date and time criteria following SPSS.
However, whenever SPSS tries to forecast from the time series, the timeline changes from triennial to annual (1989, 1992, 1995 becomes 1989, 1990, 1991). Can anyone tell me what is going wrong here? Is this a limitation of SPSS, that it can't handle time series with intervals longer than annual? Or am I missing something?
I think this is a future problem for electrical engineers. By 2030 the number of electric vehicles (EVs) will be increasing day by day in the USA and other countries. Can our power stations generate enough electricity to charge those EVs? I need a solution to this. Can anyone give me a suggestion?
I could not find how "gamultiobj" (an NSGA-II algorithm) penalizes the fitness function for population members violating constraints. Any clues would be appreciated.
Thank you!
How can I process the raw kinematic biomechanics data? For example, how can I process the C3D files exported from VICON to obtain usable, accurate kinematic and kinetic data? Do you have any book recommendations?
I am new to the subject, thank you all!
As many batteries are assembled and disassembled in our glove box for Li-ion battery work, a lot of lithium foil waste has accumulated in two glass bottles. I wonder what measures should be taken to quench this waste safely. Thanks for your attention and any good suggestions.
Hello
To test the power (pressure/force) which a participant exerts on a device according to a changing stimulus,
I'm searching for a sensor or device which collects this type of data.
I need the component to be controllable from Python (2.7) or PsychoPy.
I would appreciate referrals to devices for purchase, articles or any other information on the subject.
with gratitude,
Avishai
We have been trying to purify a chimeric protein of MW 31 kDa in insect and mammalian cells using the pOET5 and pTT5 vectors, respectively. Unfortunately, we can detect only very low levels of the protein in the supernatants by Western blot, and any attempt to purify it using SEC reveals that the protein is forming aggregates. Can anyone help with a protocol to handle this kind of situation?
When handling scRNA sequencing data, Python or R?
Question-9: How may a nondeterministic intelligent programming language be created in order to enable run-time intelligent behavior generation for handling indeterministic events and autonomous decision-making requirements in SSE?
Hi everyone,
I am looking for a way to remotely collect blood from captive red deer, with the intent of minimizing the effect of handling on hormone concentrations. Ideally, it should be a device that can be remotely set to collect the blood sample at a certain time of day, with the sample subsequently retrieved during handling and processed. Also, I'd like something not too invasive and whose weight won't be a burden for adult individuals (females: 80-110 kg, males: 160-220 kg).
I hope my question is clear and I'll appreciate any input you'll give me!
Take care,
Bruno
+ and/or any other tools about the photoelectric effect generally (not necessarily specialized for biomaterials), provided they allow changing material parameters so they can be adjusted for biomaterials.
Thanks so much to anyone who has any ideas.
It's been 2 months and I still fail to culture PC12 (ATCC® CRL-1721™). I use RPMI as the basal medium with 10% Fetal Bovine Serum and 1% P/S. I have tried culturing in a T-25 flask both horizontally and vertically. The thawing process seems fine for 2-3 days, and then suddenly the cells become contaminated with either bacteria or fungus.
If any of you already do research on this kind of cell, please let me know and kindly tell me what I should do.
I need an all-in-one software package, other than R, that can handle quantitative analysis and is also easy to operate.
I want to automate fluid handling from well plates, but I need to know the depths of a range of well plate formats. I was wondering if someone has already compiled all the technical information.
I’ve been asked to revise a chapter in a textbook. The chapter was written by another author. In revision, is it acceptable to retain any of the original author‘s exact work, or does it need to be completely rewritten in original language? I’ve only revised my own chapters before, and no one seems to know how this is properly handled.
Hybrid capture PCR can handle more than 600 genes in a panel, but anchored multiplex PCR can handle no more than 100.
In processors, complex and challenging operations need to be handled to meet demand, which leads to an increase in processor cores. This increases the load on the processor, which can be limited by adding co-processors for specific functions like signal processing. Even so, the speed of the ALU relies on the multiplier, since multipliers are the major components performing operations in the CPU.
Dear colleagues,
I am struggling to get an acceptable Cronbach's alpha score for a scale on opportunity recognition. Opportunity recognition was measured using Nicolaou, Shane, Cherkas, & Spector’s 5-item scale (2009) with the answer categories ranging from “strongly disagree” to “strongly agree.” These scale questions are drawn from the literature on opportunity recognition (Ozgen & Baron, 2007; Singh et al., 1999). In order to get a Cronbach's alpha score that is acceptable, I need to eliminate items 4 & 5. How do I justify that in my write-up? Is there a better way to handle that?
In the domain of visual recognition tasks, does our model give poor performance if the data are imbalanced?
Hey guys,
I am conducting a measurement invariance test with categorical data in Mplus. The items are measured on a 5-point-scale. I am working with the script by Svetina et al. ( ) (The script is slightly adapted as data is clustered, but that is not the problem, I have done this before).
Now, when I am trying to do the configural invariance testing, Mplus gives me this error:
"Group 2 does not contain all values of categorical variable" and lists three variables (or one when testing for other invariance aspects).
I use listwise deletion, so it is indeed possible that some of the rarely chosen categories are not included in some groups. However, obviously, this ruins the measurement invariance test, which I need.
Does anyone know how to handle this?
I already tried freeing the number of categories (by using (*) when defining the categorical data), but this does not work with WLSMV-estimation, which I need for the categorical data (I tried switching to MLR, which caused new problems).
Do I need to impute missing data? If yes, how? Or is there any other way?
All ideas are highly appreciated! (I feel a little lost right now)
Thanks!
Saskia
Hi,
I am currently working on a project where we examine the effect of an intervention on fatigue. The project has been carried out according to an one-group pre-test post-test design. I am uncertain about the best way to conduct a mediation analysis.
In our project we started with a 12week control period. So we have a baseline measurement at T0, and the pre-test measurement at T12. So we have two measurements for 1 condition. The following 12 weeks are the intervention period, and we have post-test measurement after these 12 weeks at T24.
Our outcome (fatigue) and potential mediators were measured at all three time points. We conducted a mediation analysis according to Montoya & Hayes (2017). However, this analysis only took into account the measurements at T12 and T24, so the baseline measurement is not considered at all. Now we are wondering: is this the correct way of conducting this analysis, or do we also need to include the baseline measurement? We had the following ideas:
- Conduct the analysis with T12 (pre-test) compared to T24 (post-test), as we did.
- Conduct the analysis with the average of T12 and T0 (pre-test) compared to T24 (post-test).
- Conduct two separate analyses, so compare T0 and T12, and compare T12 with T24.
- Conduct an analysis to compare the change score (T12-T0) with the change score (T24-T12)
What would you guys do?
Good morning,
I should label peptides with cyanine dyes, which are very unstable molecules. Do you have any advice to handle and purify the product?
Thank you,
Margherita
Use case: to provide data security by building next-generation firewalls. Or is there a better firewall type to handle normal systems? Please suggest any answers!
I have variables with missing values on a 1-5 Likert scale (technically 1-7), coded as "6" = I do not know and "7" = Not applicable.
Should I replace all 6s and 7s with blanks/dots (i.e., system-missing values) and then use multiple imputation to handle the missing data?
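Recoding 6 and 7 to system-missing before imputation is a common first step; a minimal pandas sketch with toy data:

```python
import numpy as np
import pandas as pd

# Hypothetical item: 1-5 substantive, 6 = "don't know", 7 = "not applicable"
item = pd.Series([1, 4, 6, 3, 7, 5])

# Recode 6 and 7 to system-missing (NaN) before imputation
recoded = item.replace({6: np.nan, 7: np.nan})
```

Note the substantive caveat: "don't know" and "not applicable" are not always missing-at-random, so it can be worth checking whether respondents who chose them differ systematically before treating both codes identically.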
Hi. I'm a 3rd-year student currently taking a course that involves research writing. I would like to ask if you can help me with recommendations for protocols to follow in terms of handling rats and administering plant extracts to the test animals. My research will tackle the potential contraceptive effect of a certain plant from the Terminalia genus (Terminalia microcarpa Decne). Any studies in line with my topic would be a great help. Thank you very much for lending your time to read my question.
First, I need to analyze and try to answer predictive and prescriptive maintenance questions.
Pulsator technology is widely used in water treatment, but it is an innovation to use it in an SBR process, where it can enhance the handling of sludge in the settling and draw phases and finally enable efficient denitrification.
I want to know about the primary considerations or impediments in using this technology, of course if there are any!
Hi,
I wonder if someone could point me in the direction of a paper or article on the effectiveness of manual handling training as a workplace intervention to reduce the prevalence of low back pain and injury.
Thanks in advance for your help,
Javi
Hi Researchers Community!!
I am initiating a very relevant and useful discussion aimed at generating focused ideas and knowledge about how a teacher handles, or should handle, gifted and brilliant students in a normal class with wide individual differences. It is an established fact that 21st-century learners are comparatively more creative, innovative, and smart, the reason being their easy access to technology from an early age. They are more techno-friendly than older-generation teachers, and today's classes are more diverse than ever. When a teacher faces bright and gifted learners in a normal class, they are often overlooked and their special educational needs are hardly met. This problem has worsened in modern times, and we can no longer treat creative students unjustly. The teacher should organise well-thought-out strategies and approaches to engage exceptionally good students. How do you carry out this task in your class? Please share your experience and knowledge here for the benefit of fellow researchers. All your views are most welcome. Thank you in anticipation.
I am working on a large dataset from a cross-sectional survey. About 15% of cases in this dataset have missing data for both the exposure and the outcome. Is it possible to conduct a complete case analysis to address this type of missingness instead of multiple imputation? If not, what bias might a complete case analysis introduce by deleting cases missing both exposure and outcome variables? Also, what could be the best method for handling this kind of missingness?
P.S. recommendation for reads is also very much appreciated!
Thank you!
I will prepare denatonium benzoate and denatonium saccharide from lidocaine and benzyl chloride.
I need to know more about safety precautions during preparation, and about any health problems from exposure to the product or during handling.
For handling municipal sewage in the lab, e.g., transfer from container to tubes, do we need BSL 1 or BSL 2?
Does working with activated sludge samples require a biological hood (BSL 2)?
E-commerce has transformed the way we buy things and created jobs for millions of people, among other benefits. However, I have personally observed a significant amount of waste associated with online shopping in the form of unnecessary plastic packaging, a large amount of wrapping to deliver hot meals, and a variety of other things. Delivery is handled by integrated logistic companies in some countries, such as CaiNiao in China. One of the main aims of the logistics industry is to reduce logistic costs in order to keep prices competitive. In that case, how can they be persuaded to adopt sustainable (reusable) packaging and other similar solutions? Please share your ideas for resolving this issue.
How do I handle four latent variables in SEM? Out of the four I have one to be a mediator, one to be a moderator, one as a dependent variable, and lastly one to be an independent variable. It's a relational study. How do I handle all the four in one model as a beginner in SEM? Thank you.
I had read that there are various ways to handle missing data and that modern imputation methods might be the best solution. However, following the planned statistical analysis plan in our RCT protocol, the last observation carried forward (LOCF) method was chosen.
I have done a complete case / per-protocol analysis using repeated-measures ANOVA. As for the power of the study, we achieved the minimum sample size (since we included the attrition rate in the sample size calculation).
As for the ITT analysis, I have a problem with drop-outs: some have missing values at particular time points, and some are missing all measurements including baseline (no data at all). So, is it okay to use mean imputation for those who have no data at all, combined with the LOCF method for drop-outs who have a baseline value?
Please advise; I would really appreciate it, since statistics always confuses me. Thank you.
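For what it's worth, the mechanical LOCF step itself is straightforward in pandas (toy wide-format data; whether combining it with mean imputation for fully missing cases is defensible is a separate statistical question best checked against your protocol):

```python
import numpy as np
import pandas as pd

# Hypothetical wide-format outcome: one row per participant, one column per visit
df = pd.DataFrame({
    "t0": [10.0, 12.0, np.nan],
    "t1": [11.0, np.nan, np.nan],
    "t2": [np.nan, np.nan, np.nan],
})

# LOCF: carry each participant's last observed value forward across visits
locf = df.ffill(axis=1)

# Participant 3 has no observations at all, so LOCF leaves the row empty;
# one (debatable) fallback is to fill baseline with the baseline sample mean
locf["t0"] = locf["t0"].fillna(df["t0"].mean())
locf = locf.ffill(axis=1)
```

A sensitivity analysis comparing LOCF against multiple imputation is a common way to show the conclusions do not hinge on the imputation choice.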