Experimental Psychology - Science topic
Explore the latest questions and answers in Experimental Psychology, and find Experimental Psychology experts.
Questions related to Experimental Psychology
I am trying to use a new Emotiv EPOC+ headset that was bought in 2018. Half of the electrodes show as green, like in this picture, but my question is: why is the overall contact quality 0% (written in red)?
I have a simple control–experimental research design with a pre-post exam and 12 persons in each group.
What is the appropriate way to extract the effect size? (What is the right formula: Cohen's d, eta squared, omega squared, or something else?)
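For a pre-post design with a control group, one commonly used option is the pre-post-control effect size described by Morris (2008): the difference between the two groups' gain scores, divided by the pooled pretest standard deviation. A minimal sketch in Python; the function name and the scores are made up for illustration only:

```python
from statistics import mean, stdev

def cohens_d_ppc(pre_t, post_t, pre_c, post_c):
    """Pre-post-control effect size (Morris, 2008): the difference
    between treatment and control gain scores, divided by the
    pooled pretest standard deviation."""
    n_t, n_c = len(pre_t), len(pre_c)
    gain = (mean(post_t) - mean(pre_t)) - (mean(post_c) - mean(pre_c))
    sd_pre = (((n_t - 1) * stdev(pre_t) ** 2 +
               (n_c - 1) * stdev(pre_c) ** 2) / (n_t + n_c - 2)) ** 0.5
    return gain / sd_pre

# Hypothetical exam scores: the treatment group gains 4 points,
# the control group gains 1 point
d = cohens_d_ppc(pre_t=[10, 12, 14, 16], post_t=[14, 16, 18, 20],
                 pre_c=[10, 12, 14, 16], post_c=[11, 13, 15, 17])
print(round(d, 2))
```

With only 12 per group the estimate will be noisy, so reporting a confidence interval alongside the point estimate is advisable.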
I am looking for a neutral video clip for my control group. I am not looking for specific face datasets, but rather a neutral video about mundane content (e.g. a person explaining something, or a neutral/ordinary day-to-day activity such as doing the laundry). The video clip should be around 4-5 minutes long and free to use for academic purposes (without copyright issues etc.). If anyone has any suggestions I would highly appreciate them! Thank you!
say one wants to report a linear mixed model of a psychological experiment (mixed design: predictor A between, three levels; predictor B within, four levels).
What would be the best way to report it?
The F test is not that informative when the predictors (fixed factors) have more than two levels (see Schad et al., 2020)
The summary() command only provides the comparison to the reference category (however, let's say we are interested in more than these to-reference-category comparisons)
The usual repeated-measures-ANOVA-type-of-reporting would be F test and post-hoc tests. However, would you say this applies also to LMM?
I would also argue that it makes sense to report the model comparison (LRT) of the model that contains the fixed effects of the interaction (the “interaction model”) against the main-effects-only model. And, before that, to evaluate the main effect of predictor A by comparing its model to the intercept-only model.
so take the model comparison lmm1 <- dv ~ 1 + (1 | id) vs. lmm2 <- dv ~ 1 + A + (1 | id) and then report the LRT results for the main effect of A
and take the model comparison lmm3 <- dv ~ 1 + A + B + (1 | id) vs. lmm4 <- dv ~ 1 + A * B + (1 | id) and then report the LRT results for the interaction effect
then go ahead and report the individual custom contrasts, simple effects, or pairwise comparisons or whatever one is interested in?
or rather report an F test and then report the individual tests?
the fixed effects estimates seem to be of little use as not only the comparison with the reference category is of interest
but still, coming from an rmANOVA perspective originally, it is difficult not to report some sort of “the main effect of A was qualified by the A*B interaction” and then report the F test or LRT.
ps: Also, would you say it makes sense to report both results from custom contrasts vs pairwise comparisons to indicate some sort of consistency or rather choose one of them?
THANK YOU ALL SO MUCH
Schad, D. J., Vasishth, S., Hohenstein, S., & Kliegl, R. (2020). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language, 110, 104038.
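As a concrete anchor for the LRT question above: in lme4 the comparison is simply `anova(lmm3, lmm4)` on ML-fitted models (lme4 refits REML models with ML automatically), which reports the χ² statistic, df, and p value. The arithmetic behind that output is sketched below; the log-likelihood values are made up, and the chi-square tail formula used is exact only for even df, which happens to cover both comparisons here (df = 2 for the main effect of A with three levels, df = 6 for the A × B interaction):

```python
from math import exp, factorial

def chi2_sf_even_df(x, df):
    """Chi-square survival function, exact for even df:
    P(X > x) = exp(-x/2) * sum_{k=0}^{df/2 - 1} (x/2)^k / k!"""
    h = x / 2.0
    return exp(-h) * sum(h ** k / factorial(k) for k in range(df // 2))

def lrt(loglik_reduced, loglik_full, df_diff):
    """Likelihood-ratio test between two nested ML-fitted models."""
    stat = 2.0 * (loglik_full - loglik_reduced)
    return stat, chi2_sf_even_df(stat, df_diff)

# Made-up log-likelihoods: main-effects model vs. interaction model,
# df difference = (3 - 1) * (4 - 1) = 6 for the A x B interaction
stat, p = lrt(loglik_reduced=-512.4, loglik_full=-504.1, df_diff=6)
```

The p value from this sketch corresponds to what `anova()` would print in its `Pr(>Chisq)` column.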
Could we use path analysis in a between-subjects experiment where we have assumed that the effect of the independent variable on the dependent one is mediated? Specifically, the sample will be randomly assigned to one of two experimental conditions.
Are there any examples of experimental-type questionnaires?
I need scales to measure sustainable behavior; I'm thinking of using both experimental and questionnaire methods.
Thanks in advance
Today I come to you with a quick request. I am looking for participants in an online experiment for a course in my master's degree and it is important that we get as many replies as possible.
The experiment is super simple, you only need 15 mins and a computer. This is the link: https://farm.pcibex.net/p/HTTBux
So, if you are reading this and you have 15 mins, I'd really appreciate your participation.
Thanks in advance!
The same-different task requires subjects to indicate whether a pair of stimuli, seen or heard, are the same (say AA or BB) or different (say AB or BA). Researchers often collect offline measures (e.g. response accuracy and latency) in the task.
Is there a way that I can collect online measures using eye-tracking, ERP or some other experimental techniques in psychology? In other words, instead of people reporting whether the pair of stimuli are different, I hope to infer their knowledge based on their fixations and brain potentials. Please recommend papers that I can read (if any). Thank you!
I want to investigate the effect of a "HELP and Mindfulness-Based Flourishing Program" on the cognitive emotion regulation strategies of kindergarten teachers.
What are the appropriate statistical tests and research design?
thank you in advance
I know that it could be different for each situation, but when I was gathering data from participants in the pilot phase of my study, I realized that explaining the instructions aloud could be more effective than letting participants read them. Now my supervisor insists that I refer to a scientific document for this. The experiment was designed with a masking paradigm, and the instructions contained content with positive and negative reinforcement.
Does anyone know any reference?
I asked the same respondents how much luck, their own effort, and connections they need to achieve Goals #1, #2, #3, and #4 (first group). The second group had the same questions and the same goals, but under different conditions.
I analyze my data as a mixed design, BUT the peculiarity of my design is that each respondent answered in the format: % luck + % effort + % connections = 100%. Other answers were not accepted by the system; the sum was always 100 (a + b + c = 100).
In SPSS I have 3 different variables: Luck_Goal1, Effort_Goal1, Connection_Goal1, and so on for Goals 2, 3, and 4. But these aren't exactly three different variables, are they? It's more like one variable, a triad, that can be compared to other triads.
My question is: how do I combine this three-part variable into one whole triad variable, and how would within-group and between-group analysis of such triads be done?
I put up this question on RG in order to find out what is being studied about the effects of social media memes on people as they attempt to find reliable information.
In my original data set about the addictive power of memes to shape memory storage and alter personality, I was mainly looking at political memes.
It may be also important to study the effects of memes upon people's ability to find verifiable information. So please post any studies that you are aware of so that we can compile these in one place. I hope this inspires some study because I already know the power of memes from my past work on rhetoric, communication theory, and meme addictive behavior.
Here are an initial couple of links to studies which I have not yet read, but which may be of interest. Check the bibliographies or works-cited lists as well.
Social Media Reigned by Information or Misinformation About COVID-19: A Phenomenological Study
Social Sciences & Humanities Open Online journal:
MIT psychologists' study:
Fighting COVID-19 misinformation on social media: Experimental evidence for a scalable accuracy nudge intervention
Peer-reviewed Polish journal:
SOMEBODY TO BLAME: ON THE CONSTRUCTION OF THE OTHER IN THE CONTEXT OF THE COVID-19 OUTBREAK
I have found smart shopper self-perception / emotions / feelings scales. However, I am not able to find any scale for the cognitive part of the process: beliefs about smart shoppers (identification?). Although Green et al. (2011) seem to conceptualize and measure the term smart shopping by supporting evidence about three constructs (time-consciousness, right purchase, and money savings), the items related to these constructs are not exactly beliefs but shoppers' experiences. I would like to know if there is some research about beliefs by attributions such as "The smart shopper is..."
We sometimes face a situation of small sample size. If the assumptions of the Pearson correlation are fulfilled, is it valid to calculate the correlation?
Are Bayesian tests not dependent, or less dependent, on sample size?
I am planning cross-cultural study in psychology.
I've read various articles, but I can't understand what the requirements for translators are. I'm at stage one: translating the original instrument. Let's say my translator 1 is fluent in the target language with a good understanding of the original language, works in a translation agency, and has a university degree in some field (not in philology). Translator 2: the same. Translator 3 (for a synthesized translated version) is fluent in the target language, with a good understanding of the original language, and has a higher education in Philology!
My question: is this OK? I mean, “translator” doesn't automatically mean that he/she has a bachelor's, master's, or PhD degree in Philology.
What do you think about it?
I calculated Cohen's d and have obtained the following value:
Cohen's d = (4.6 - 7.88) ⁄ 0.791148 = 4.145876.
Wherever I looked, the highest Cohen's d value for a large effect was 0.8. Just wondering if this could be a mistake?
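For what it's worth, the arithmetic itself checks out, and 0.8 is only Cohen's benchmark for calling an effect "large", not an upper bound: d is unbounded, and a value above 4 simply means the two means are more than four pooled standard deviations apart (which is worth double-checking against the SD that was used). A quick re-check of the computation:

```python
# Re-checking the arithmetic from the question: d = (M1 - M2) / SD_pooled
m1, m2, sd_pooled = 4.6, 7.88, 0.791148
d = (m1 - m2) / sd_pooled
print(round(abs(d), 4))  # magnitude of the standardized mean difference
```

A d of this size usually signals either a genuinely huge effect or a standard deviation computed over the wrong quantity (e.g. a standard error instead of an SD).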
Wouldn't experimental psychology (the "lab" setting) have a necessary bias AGAINST the existence and availability of some SKILLS & against any thinking of (across/about) multiple circumstances?
I contend: There are some skills developed (or discriminated) across circumstances or between circumstances, that develop over more time and/or more circumstances (usually both), than can be detected or manipulated in the "lab" (using presently used procedures, at least) . AND, there may well be thinking of concepts FORMED (naturalistically) ABOUT existing or not existing "things" AND/OR (also) relationships (relatedness (or NOT)) which involve mentally comparing [representations] between situations/circumstances that are very important in REAL, ACTUAL conceptualizations and thinking (in real "internal" phenomenology -- though based on ACTUAL EXTERNAL SITUATIONS/CIRCUMSTANCES that could be seen if OBSERVATIONS were more realistic __and__ [(relatedly)] imagination about imagination was more reasonably thorough). WE CANNOT SEE THIS (presently); we may NOT MANIPULATE THIS action by the organism IN THE LAB.
There is no doubt we (including AT LEAST even older children) must, can, and do these things BUT WE CANNOT DETECT (measure)(yet, at present) any KEY behavior patterns related to such activities AND we cannot, and will not be able to, fundamentally manipulate such activities.
It is quite possible (if not likely): MOST HUMAN THOUGHT, realistically OR naturalistically considered, IS THEREFORE NOT CONSIDERED (at all, or at all realistically) IN THE "LAB". (Thus, the existence of the homunculus (or homunculi) of executive control and all the "meta" this-es or "meta" thats -- NEITHER strange type of concept NECESSARY IN ETHOGRAM THEORY.)
This IS NOT A LIMITATION OF SCIENCE or OBSERVATION, but a limitation of the lab and of typical experimental psychology.
Based on testable particular hypotheses from Ethogram Theory:
I should add that [still], based on the nature of the Memories, at least THE INCEPTION of each new qualitatively different level/stage of cognition would occur at some KEY times and "places" "locally" in circumstances, i.e. could be seen within the time/space frame of the lab: AS DIRECTLY OBSERVABLE OVERT BEHAVIOR PATTERNS -- and these discoveries, by using new sophisticated eye tracking (and, perhaps, computer-assisted analysis) technologies (<-- these basically being our "microscope"). BUT, you would have to know what to look for in what sort of settings _AND_ (at the same time) be able to recognize the KEY junctures in ontogeny and the development of learnings that THESE shifts (starting as very basic and essential "perceptual shifts"; then becoming perceptual/attentional shifts) WOULD OCCUR.
I am looking into using a divided field task, and we may need to purchase a chin rest. My question is simple. How do I go about getting one for a reasonable cost? This is a pilot study and we do not need something that costs thousands of dollars. The lab is also unlikely to continue using this kind of equipment once I graduate.
Some key things that would help are: 1) The kinds of keyword searches that I should be using to avoid endless results about violins, 2) What should be some realistic expectations in terms of cost?
Also, to preempt any questions about what type of chin rest we need: If you know of something really cheap, I will make it work. Something that clamps to a table would be ideal but I am flexible on this point.
I will try to keep this abstract:
10 treatment and 10 control samples were measured every other day over the course of ca. two weeks, for a total of 7 levels (including Day 0). The treatment samples were initially acclimated to a certain environmental condition and then transferred to a different environmental condition at the start of the experiment, while the controls were exposed only to the 2nd environmental condition. The purpose of the experiment was to determine at which point the treatment acclimated to the new condition, i.e. when does the treatment stop being different from the control?
1) Seven different physiological variables were measured to track these changes, each with their own rate of change.
2) This experiment was performed on two different species, with the idea of determining which changes occurred faster or slower in each species. I am not particularly interested in comparing rates, though… I really just want to know how many days it takes for acclimation to complete.
Is repeated measures ANOVA the wrong approach? If so, what would you do? If not, how can I perform a post-hoc on only two groups?
Dear fellow researchers,
for my PhD project I want to build an app to fight climate change. In this app I want to show users their CO2 footprint and take them on journeys (collections of tasks) to commit to a climate-positive cause. To be more specific: I measure how much energy your household uses and give you all the data you need to change your energy provider.
Therefore, I would love to hear what features, ideas and functionalities you would absolutely want in a climate change app.
Hi, I am a German university student (business administration and psychology) and I am about to write my bachelor's thesis.
I would like to research the correlation between stress and language. For the following points I need your help:
- different options for stress induction
- or unsolvable tasks for stress induction
- or a questionnaire for stress induction
I know about the Trier Social Stress Test and the socially evaluative cold-water stress test, so I need other options. The best option for me would be a computer-aided stress test.
I hope you can help me and make my student life a little easier :-).
I am analyzing a data set of 100 subjects. I am looking at reaction time in different bins of 50 ms. However, each participant has very few trials. I tried grouping all trials from all subjects and treating them as if they came from a single participant. Is this correct? Any other suggestions?
Thinking in terms of a social setting such as a dance, a concert, a meal, if an experiment were to be designed in such a way, how can the method be validated? Similarly, what role would reliability play in an experiment set in a social setting? How can you recreate social settings for further empirical study?
I would love to read some examples of studies if you are familiar with any!
For example, if 200 seconds of images and videos were used in the negative mood induction protocol, is it mandatory to have the same 200 seconds for the positive induction protocol, or can it differ by +/- 100 seconds or more?
Dear esteemed colleagues,
I recently conducted an experiment where I measured reaction time (RT) with a 2x2 factorial design. At first I tried analyzing the data using standard ANOVA, but found no significant difference. I then realized that RTs have a distinctive distribution: they are always positively skewed. After an extensive literature review, I concluded that the two most commonly used approaches to analyzing RTs are robust statistics and transformation. I tried a log transformation, which yielded no satisfactory result. I am reluctant to transform the scores further, worried that it may produce false significance. I am planning to try robust statistics, but I have difficulty understanding them. Do you have any recommendation for a beginner's guide to robust statistics? Or do you have other suggestions as to which method is better for RT analysis? I look forward to your answer(s).
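On robust statistics for RTs, Wilcox's work on trimmed means is a common starting point, and the core idea is simple enough to sketch. Below is a minimal illustrative implementation (not taken from any particular package) of a 20% trimmed mean, which discounts the slow right tail without transforming the scores at all:

```python
def trimmed_mean(values, proportion=0.2):
    """Trimmed mean: drop the lowest and highest `proportion`
    of the sorted values, then average what remains. Robust to
    the long right tail typical of reaction-time distributions."""
    xs = sorted(values)
    k = int(len(xs) * proportion)
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

rts = [310, 320, 295, 305, 2400]  # one extreme slow response
print(trimmed_mean(rts))          # the 2400 ms outlier is discounted
```

In practice one would compute this per participant per condition and feed the trimmed means into the factorial analysis (or use dedicated robust ANOVA routines, e.g. those in Wilcox's WRS2 package for R).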
I am interested in studying the minimal group paradigm (MGP; introduced in 1970s by Henri Tajfel) in the context of social categorization and prejudice. I was reviewing literature for the same. Is there any literature available on the topic that is in an Indian context or written by an Indian author(s). If not, could you suggest any studies from Asia, in general?
I'm interested in improving my skills in image analysis and processing for experimental psychology purposes. I'm talking about matching low-level features (contrast, luminance, spatial frequency) and analyzing the images I need to use, for the sake of improving the paradigms.
I'm looking for some resources (articles, textbooks, etc.) with a good theoretical framework. I'm not able to narrow the topic down enough for a targeted internet search.
I am an MSc student in Psychology and right now I am in search of a research idea for my thesis. I am interested in neuropsychology / neuroscience / computational neuroscience / cognitive robotics, and I want to do a thesis in one of these areas because I am planning on applying for a PhD in a related field. The problem is that, unfortunately, my University lacks lab equipment, and therefore any ideas I have had so far are not testable because they require lab equipment.
Therefore, I was wondering if there are any alternative experimental methods for research in these areas that do not require lab equipment and if they do, I can run them on my laptop or a University computer.
Any suggestions (either experimental methods or reading) are more than welcome.
Thank you very much for your time.
I would like to test whether or not environmental awareness (or connectedness to nature) improves after watching a short film (10-15 minutes). There are three different films and the design is a between subjects design (each participant will only see one film).
I see the following options:
1) Either use the same test before and after the film, but then participants will be answering the same questions twice within a short time, which may reduce the measured improvement.
2) I split the test in half (by random or by design) and use one half first and then another half afterwards. This way each question should be answered only once. But the problem here is that the reliability of each measurement is lower as only half the items are used.
3) I use two different scales that aim to capture the same construct, one before and one afterwards. I could switch the tasks for half the participants to get an average baseline for each test.
4) I use just one scale after the intervention to just compare group means and not mean group improvements. May not be so bad as there will be 40 participants in each group (n = 120)
5) Baseline test a week earlier would probably be best, but not so easy to achieve organisationally.
Which one should I choose?
Do any other options come to mind?
Thank you very much in advance.
I am looking for a database that offers portraits of children that we would like to use in a research project. Any information or resources would be most welcome!
I was wondering if I could ask about gesture and sports performance.
I think that gesturing some movements before the actual performance has an effect on the subsequent actual movement performance (accuracy, fluency, timing, and so on).
In fact, in some sports such as baseball, table tennis, or boxing, swinging the bat or racket without a real ball, or moving alone, is a popular form of practice.
I found some gesture studies involving speech and recognition for classification, but I couldn't find studies investigating whether gestures, pantomimes, or mimicking influence the subsequent action, using experimental psychological methods.
It would be great if anybody could tell me about this research field.
Does anybody know a good source for finding an appropriate ego depletion task? I am looking for some kind of review/overview that summarizes and describes different types of tasks used in previous studies.
I am currently searching for a conference, congress, or workshop in the field of experimental psychology and methods, or cognitive psychology, in the time span from October 2018 to March 2019. Any ideas or recommendations?
I want my participants to evaluate a video (on a simple bidimensional scale) in real time; because of logistical issues this needs to be online. Does anyone know of an online platform to do this, or a way I can send participants a link where they can open and perform it? Thanks!
Hi all. I have a stimuli set of pictures, all human faces. I need to create a new scrambled version of every one of them, while keeping the same luminance of the original photo. What software do you recommend? What's the more effective way to scramble them? Thanks!
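A common approach in face-perception work is Fourier phase scrambling, which preserves the amplitude spectrum (and with it the mean luminance); NumPy/MATLAB scripts for this circulate widely. The simplest luminance-preserving alternative is a pixel (or tile) shuffle, since repositioning pixels changes no values at all. A minimal sketch of the shuffle idea on a flat list of grayscale values (function name and data are illustrative only):

```python
import random

def pixel_scramble(pixels, seed=0):
    """Shuffle pixel positions. Every value is kept, only moved,
    so mean luminance and the full intensity histogram are
    preserved exactly."""
    rng = random.Random(seed)  # seeded, so stimuli are reproducible
    scrambled = list(pixels)
    rng.shuffle(scrambled)
    return scrambled

face = [12, 40, 200, 180, 90, 60, 30, 220]  # toy grayscale values
scrambled = pixel_scramble(face)
assert sum(scrambled) == sum(face)  # identical mean luminance
```

Phase scrambling keeps low-level spectral content closer to the original than a raw pixel shuffle, so the right choice depends on which low-level properties the control condition needs to match.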
Currently I've been working on the replication and adaptation of a computerized memory test using the open-access software OpenSesame, but unfortunately I have some trouble with the script code.
I would really appreciate some help; any comment would be useful.
I'm interested in estimating the tendency to lapse on the trait level per participant. For this I need a list of well established experiments, that have a good amount of literature behind them (e.g. psychophysical experiments, specific forced choice tasks, etc.).
The idea is for a given participant to take part in multiple experiments and to then estimate parameters within the participant. I am specifically looking for experiments that can be modeled with mixture modeling, but even if it hasn't been done yet, that's not necessarily a problem (I'll just have to come up with a mixture model on my own in that case for that given task).
Any and all suggestions would be much appreciated!
Let me describe my experimental question:
I actually have 395 children for whom heart rate variability was measured while sitting and observing a runner.
DV (rMSSD/ continuous variable)
IV1 age (12, 13, 14, 15, 16, 17, 18/ categorical)
IV2 condition (still, speed1, speed2/ categorical)
IV3 children sex (male, female/ categorical)
IV4 runner sex (male, female/ categorical)
I would like to know if there is an interaction between condition and age on the DV. An important point is that the number of children is not the same for each age class.
Could you give me some advice on the best analysis to conduct?
I want to compare two populations, but we can only measure 6 participants at a time at most (the total sample is larger of course). Therefore running the task classically is difficult.
A possible solution is having participants play against an algorithm (tit-for-tat, or adaptive pavlov). However, I can't find any literature of humans vs. algorithm in the prisoner's dilemma.
Am I missing something?
I'm designing a research project for my experimental psychology class on the calming effects of a baby swing whose swinging and music are in sync. I've already decided I'm going to measure its calming ability through the change in cortisol levels in the infant's blood when it's stressed and then after being in the swing.
I need to figure out a way to stress an infant that raises its cortisol levels but is also considered ethical. Any recommendations would be greatly appreciated!
In my study, participants are required to perform an online hazard perception test. During the test they will either have to do a voice-command to Siri / Google assistant, a tap-based command on their phone, or nothing (control). For example, they may be asked to set an alarm.
So there are 3 conditions. It is a within-subjects design; I want all participants to participate in all 3 conditions. I am just struggling to work out how many participants I need, and how many trials they will need to perform in each condition.
alpha level = 0.05
I'm unsure how to calculate Cohen's d, etc.
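G*Power is the standard free tool for this kind of a priori calculation, but the arithmetic behind the simplest within-subjects case (a single paired contrast between two of the conditions) can be sketched with a normal approximation. Here dz is the standardized effect size of the paired differences (mean difference divided by the SD of the differences); the exact noncentral-t answer is slightly larger than this approximation, and all numbers are illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_for_paired_contrast(dz, alpha=0.05, power=0.80):
    """Normal-approximation sample size for a paired (within-subjects)
    contrast: n = ((z_{1-alpha/2} + z_{power}) / dz)^2, rounded up."""
    z = NormalDist().inv_cdf
    return ceil(((z(1 - alpha / 2) + z(power)) / dz) ** 2)

# A medium within-subject effect (dz = 0.5), alpha = .05, power = .80
print(n_for_paired_contrast(0.5))
```

The number of trials per condition matters separately: more trials per condition shrink the measurement error in each participant's condition mean, which effectively increases dz, so pilot data on trial-level variability is the most direct way to settle that.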
This may be a silly question but I am new to quantitative psychology experiments! I am looking to conduct an experiment which uses stimuli from 4 different groups of people. These will then be used in 3 blocks (a different question is asked of the same stimuli in each block). Is there a way to determine the number of stimuli that should be used from each group for the experiment to be valid?
In experimental psychology, what is the difference between within- and between-subjects experimental designs when you calculate effect size? Is there any empirical knowledge in the field about this?
I have to do a replication study on a model. How much different should it be from the original research? Obviously, it is going to have some aspects similar to the original research, but, in order to make the replication study meaningful, some new perspective is also expected. So my question has two aspects, regarding how much difference is expected so that the study is considered a replication and what novelty should be there.
First aspect: Which differences are acceptable? Like in respect to sample size, gender distribution of participants, age range, geographic distribution, cultural variations, study design etc.
Second aspect: What novelty can I add in the replication study? Like in respect to statistical analysis, study design etc.
On p. 12 of their article "Effect Size Estimates: Current Use, Calculations, and Interpretation," Fritz, Morris, and Richler (2011) say, "The z value can be used to calculate an effect size, such as the r proposed by Cohen (1988); Cohen's guidelines for r are that a large effect is .5, a medium effect is .3, and a small effect is .1 (Coolican, 2009, p. 395)." Shouldn't r be squared to determine the effect size with nonparametric data? Does anyone have a source to verify or refute this sentence?
The article can be found in the Journal of Experimental Psychology: General.
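For reference, the conversion Fritz et al. describe is Rosenthal's r = z / √N, and Cohen's .1/.3/.5 benchmarks apply to r itself; squaring gives r², the proportion of variance explained, which has its own (much smaller) benchmarks, so the two scales should not be mixed. A sketch of the conversion with illustrative numbers:

```python
import math

def r_from_z(z, n):
    """Effect size r for a test reported as a z statistic
    (e.g. Mann-Whitney / Wilcoxon): r = z / sqrt(N)."""
    return z / math.sqrt(n)

r = r_from_z(z=3.0, n=100)   # r = .30, 'medium' by Cohen's guidelines
print(round(r, 2), round(r ** 2, 2))  # r, and variance explained r^2
```

So whether to square depends only on which metric is being reported and benchmarked, not on whether the data are nonparametric.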
I've attached a figure depicting an interaction between one within-subjects variable (training) and one between-subjects variable (respite experience). This is output from a repeated measures analysis with one between-subjects variable employed. How do I compare specific points on this graph post-hoc? For example, what if I want to know if the two respite experience groups differ at training Time 1? Or perhaps I want to know if respite group 1 has changed from Time 1 to Time 2. I know I can run simple t-tests to answer these questions, but isn't it a better practice to set up some contrasts? I have a Ph.D. in experimental psychology with a minor in statistics but it's been 15 years since I've analyzed data and written results for academic purposes, and I just don't remember how to set up the contrasts. Thanks.
There seems to be a handful of papers on single-subject resting-state fMRI data but not for EEG data. Is this simply because it is too variable to be done? If not, what are the things to watch out for in data collection and in analysis?
I am planning an experiment and need some feedback. I will test how well a test person reacts to a moving object. The object will be presented in three different forms. The interaction between the test person and the object will be via a computer “game”. I will not go into details, but would just like to get general recommendations about the experimental design.
The object will be presented in 3 different forms, each moving across the screen separately. The test person has to detect and respond to the moving objects (3 different forms that vary in detectability). I will score how successful the test person is in responding (success vs. no success).
Response variable, binomial: ‘success vs. no success’
Explanatory variable: ‘form of the object’ (3 object categories w/different difficulty levels)
Co-variable: ‘Age of the test person’, although I am considering using students of approximately the same age to avoid this variable.
Co-factor: ‘Gender of the test person’
Random effect: 'Test person ID'
I would like to get recommendations about:
1. How many survey participants (test persons) do I need?
Each test person will only be tested once (i.e. one trial). In one trial, I will repeat each object category a number of times in a random order.
2. How many times should each object category be repeated?
The 3 object categories have different difficulty levels, by repeating them I will get a sufficient sample size. Although, the drawback by repeating them is that the test person will also improve his/her search image and become more effective. This could be controlled for by including a “time” or “stage” effect as a co-variable.
I would like to get the right balance between the number of survey participants and number of repeated moving objects per trial.
We have the opportunity to build a new laboratory for our psychological experiments with eye-trackers, emotion recognition, psychophysiological parameters, virtual reality devices and so on.
To get a really good laboratory, we want it to be soundproof and to have light control, temperature control, and moisture control. In particular, building it soundproof seems to be the biggest problem.
Does somebody know about a soundproof laboratory for psychological experiments?
Does somebody know about an analogous structure that could be adapted into an optimal psychological laboratory?
Thank you very much for every useful hint and detailed information.
I want to use a paradigm where subjects have to detect near-threshold targets in flickering noise at random moments, but without any classical trial structure, meaning that there is a continuous sequence of stimuli. The continuous structure with no trials has the consequence that I cannot compute sensitivity (d’) in classical signal detection framework: There is no number of stimulus absent trials and thus no probability to respond yes when there is no target (P(yes|absent)). In other words, although I have number of false alarms, I cannot translate this to false alarm rate, and am dependent on hit rate alone.
Does anyone know a potential measure alternative to d’, which could help me in this situation to quantify the participant’s perceptual sensitivity? I would appreciate any hint or idea.
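One pragmatic workaround (an assumption on my part, not an established standard; free-response detection theory offers more principled treatments) is to discretize the continuous stream into "virtual" stimulus-absent windows of roughly the target's duration. That yields a false-alarm rate, so the classical d′ = z(H) − z(F) goes through. A sketch of this window-based version, with made-up numbers:

```python
from statistics import NormalDist

def dprime_virtual_trials(n_hits, n_targets, n_false_alarms,
                          total_time_s, window_s):
    """Discretize a continuous detection stream into virtual
    stimulus-absent windows of length window_s, then apply the
    classical d' = z(H) - z(F). The windowing is a modeling
    assumption, not an established convention."""
    n_absent = int(total_time_s / window_s) - n_targets
    # clamp rates away from 0 and 1 so the z-transform stays finite
    hit_rate = min(max(n_hits / n_targets, 1e-3), 1 - 1e-3)
    fa_rate = min(max(n_false_alarms / n_absent, 1e-3), 1 - 1e-3)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative: 50 targets in a 600 s stream, 1 s analysis windows
d = dprime_virtual_trials(n_hits=35, n_targets=50, n_false_alarms=55,
                          total_time_s=600, window_s=1.0)
```

Note that the resulting d′ depends on the chosen window length, so it should be reported alongside the estimate and held constant across conditions.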
We bought a few good response boxes and happily began running our experiments with them. Yes, they reduce RTs by an average of ~40 ms, but they also really f%$#ked up a few of our RT effects that we have replicated numerous times using keyboards as the response medium. For example, we have repeatedly shown that inserting a >300 ms temporal lag between an action and its perceptual effect "kills" the speeding-up of the response (vs. a no own-action-effect condition). Using the response box, participants with the 450 ms lag actually performed faster than the no-effect group. Any thoughts? Similar experiences?
I would like to find a reference demonstrating empirically how long an elicited emotion lasts within the individual after the elicitation event (e.g., frightening someone). I am referring here mostly to facial expression generation studies.
I am trying to design an experiment to establish the relationship between inhibitory control and theory of mind, and I would like to know if somebody has used similar experiments. My hypothesis is that inhibitory control modulates the expression of theory of mind. Your help will be valuable to me.
I have recently programmed a Stroop and a vigilance test in E-Prime for use in my PhD; however, I am struggling to program an auditory-visual dual-task. Ideally it would require participants to respond rapidly to randomly presented visual and auditory stimuli (i.e., two different tones of beeps). I would be grateful for any advice on this - many thanks in advance.
I am working with DMDX to record vocal responses. With four stimuli, everything looks fine, but with 5, the program just stops working.
I am hoping to run a cognitive load manipulation while participants respond to an attribution questionnaire. However, rather than using a typical working-memory task in which participants memorize and recall a string of digits while responding to the questionnaire, I would prefer a distractor task: for example, a stream of digits appears and participants have to press a key whenever they see the digit "5". Does anyone know of a program in which I can run this type of task?
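The presentation and timing side of such a task would normally live in an experiment package (PsychoPy, E-Prime, jsPsych, and similar tools can all present timed digit streams alongside a questionnaire). The stimulus logic itself is simple, though; here is a hedged, illustrative sketch of generating the distractor stream, where the target digit, rate, and length are all arbitrary choices and not prescriptions:

```python
import random

def digit_stream(n_items, target='5', target_rate=0.2, rng=None):
    """Generate a distractor stream of single digits in which `target`
    appears with roughly probability `target_rate` per item; in the
    running task, participants would press a key on each target."""
    rng = rng or random.Random()
    nontargets = [d for d in '0123456789' if d != target]
    return [target if rng.random() < target_rate else rng.choice(nontargets)
            for _ in range(n_items)]
```

A presentation loop would then show each item for a fixed duration and log keypresses with timestamps, so that hits and false alarms on the distractor task can be scored afterwards.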
Can anyone tell me if there is a specific range for the immobility time/percentage that 'normal' adult C57BL/6 mice display in Porsolt's forced swim test (5-minute analysis, only one trial performed, i.e. no priming 24 hours previously)? The graphs in papers seem to show widely varying values.
If anyone has performed the experiment in multiple instances, how much variation do you observe between experiments with respect to the immobility values?
The dual-task will comprise a simultaneous auditory digit span (DS) test and a visual response time (RT) test. Participants will hear a series of five-, seven- and nine-digit sequences presented in a random order that remains consistent for each participant. They will be asked to verbally repeat the digits back in the correct order 2 s after the onset of the final digit, with a time limit of 1 s per digit, and will be awarded 2 points for every correct digit recalled in the correct place and 1 point for each correct digit recalled in the incorrect location; 1 point is deducted for each recalled digit that was not in the original sequence. The accuracy score is then converted into a percentage. While engaged in the auditory task, participants will simultaneously perform a visual RT test in which an image of a small football is presented for 200 ms on a white background in one of four quadrants of a computer screen. Participants will be required to press an allocated button with their dominant hand in response to the stimulus as quickly as possible. Images will be presented at pseudorandom interludes between the words "ready" and "go" in the DS test, appearing 750-1,000 ms prior to the onset of the next auditory stimulus. During each test, 96 footballs will be presented, and the reaction time score will be recorded as the average time (ms) taken to respond across these trials.
Any help would be much appreciated! So far I have four slides with the football image in each quadrant... so I've got a long way to go!
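The digit-span scoring rules described above (+2 per digit in the correct place, +1 per presented digit in the wrong place, -1 per intrusion) are concrete enough to sketch in code. This is only an illustration of one reading of those rules; how to handle repeated digits and whether negative totals floor at zero are my assumptions, not part of the described protocol:

```python
def score_digit_span(presented, recalled):
    """Score a digit-span recall:
    +2 for a correct digit in the correct position,
    +1 for a presented digit recalled in the wrong position,
    -1 for each recalled digit not in the presented sequence.
    Returns the score as a percentage of the maximum (2 points/digit).
    Assumes negative totals floor at zero (an interpretation, not given
    in the protocol description).
    """
    max_score = 2 * len(presented)
    score = 0
    remaining = list(presented)  # digits still creditable for partial marks
    # First pass: exact position matches earn full credit.
    for i, d in enumerate(recalled):
        if i < len(presented) and presented[i] == d:
            score += 2
            remaining.remove(d)
    # Second pass: right digit in the wrong place, or an intrusion.
    for i, d in enumerate(recalled):
        if i < len(presented) and presented[i] == d:
            continue  # already credited above
        if d in remaining:
            score += 1
            remaining.remove(d)
        else:
            score -= 1
    return max(score, 0) / max_score * 100
```

For example, recalling 3-4-1-1-5 for the sequence 3-1-4-1-5 gives three digits in place (6 points) plus two transposed digits (2 points), i.e. 80%.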
What sort of tests or scales are there to test how moral people are or not, by any definition?
There is the famous trick-or-treat objective self-awareness test, in which children were told to take one sweet (candy) in the presence or absence of a mirror (Beaman, Klentz, & Diener, 1979).
There is another test in which a confederate drops items in a corridor and the researcher records whether passers-by stop to help, as a measure of helping behaviour (e.g., Monk-Turner, Blake, Chniel, et al., 2002).
There is the lost-letter test (Milgram, 1965).
Research on eye-staring posters (pictures of eyes) used a coffee-donation piggy bank as a dependent variable (Ernest-Jones, Nettle, & Bateson, 2011).
I have used test-time-limit cheating, measuring how much longer after a test's time limit subjects keep responding (Heine, Takemoto, Moskalenko, Lasaleta, & Henrich, 2008), but as with all of the above, subjects have to be tested individually, in a lab or in the wild.
I am looking for a test/scale/questionnaire, but there is a "liar paradox" sort of problem in that liars and other immoral people claim that they are not.
E.g., Guttman (1984) found that religious schoolchildren were more moral on paper but cheated more on a test than secular children.
The Overclaiming Questionnaire by Delroy Paulhus is a measure of self-enhancement but may also show how much of a dirty liar people are :-)
My students and I developed a liar test based on a similar idea, asking whether subjects had done really bad things and then mixing in some questions such as "Told a lie to a friend," rating "never" responses as indications of immorality (based on an FBI lie-detection technique that uses similar questions to calibrate lie responses). It is in Japanese, but I could translate it into English if anyone were interested. It correlated with something.
Perhaps one could use an "intelligence test" or "kindness test" with incorrect "best answers" printed on the reverse or upside down at the bottom of the page, and see the extent to which subjects use the wrong answers in an attempt to bump up their score (especially if they are told their score is going to be made public).
Suggestions of other tests and scales would be gratefully received.
I am looking for a free matrices-type IQ test to use in my research. The tests I usually encounter in the literature, such as Raven's, are quite expensive, and I am looking for a more affordable alternative. Please let me know if any of you have used or are aware of a test that fits this description.
I would like to keep my animals group-housed after they have been implanted with a microdrive array for electrophysiological recording. So far, we started singly-housing them right after the surgery to exclude the risk of other animals chewing on the implant. However, as we are investigating social behaviour, group-housing would be desirable. Does anyone have experience with group-housing them after implantation, for example in customized cages?
- For example, applying the brakes
- The preparatory behavior involved in it
- The role of automaticity and readiness
Thank you for your time and effort.
In an operant conditioning experiment in which an animal is presented with stimuli on either the left or the right side for around 150 trials a day, is it OK to use a truly random series, or should one control the number of consecutive presentations on the same side? I know that in the long run the two sides will be fairly equally distributed, but I am worried that long consecutive runs on the same side will disturb my experiment, especially when they occur early in training. Should I use a Gellermann-series-like approach?
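One simple middle ground between a truly random series and a full Gellermann series is rejection sampling: draw balanced random sequences and keep the first one whose longest same-side run stays under a cap. The sketch below illustrates that idea only; note that the classic Gellermann (1933) series imposes additional constraints (e.g., limits on alternations and balance within sequence halves) that this simplified version does not enforce.

```python
import random

def sided_sequence(n_trials, max_run=3, rng=None):
    """Generate a pseudorandom left/right sequence with equal counts of
    each side and no run of the same side longer than `max_run`.
    Uses rejection sampling, so every valid sequence is equally likely."""
    rng = rng or random.Random()
    while True:
        seq = ['L'] * (n_trials // 2) + ['R'] * (n_trials - n_trials // 2)
        rng.shuffle(seq)
        # Find the longest run of identical sides.
        run, longest = 1, 1
        for a, b in zip(seq, seq[1:]):
            run = run + 1 if a == b else 1
            longest = max(longest, run)
        if longest <= max_run:
            return seq
```

For 150 trials with max_run=3 the rejection loop can take a few thousand draws before accepting, which is still well under a second; a tighter cap or longer sequence may warrant a constructive algorithm instead.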
In the lab, we use DirectRT to construct stimulus presentation trials for experimental research purposes. I use the BIOPAC MP150 hardware to record EEG data. Data is analyzed in BIOPAC AcqKnowledge Software.
Record ERP differences in response to armed and unarmed black and white targets in a simulation.
How do I time-lock stimulus onset and participant responses in AcqKnowledge to DirectRT so that I can extract ERP data?
Would doing this require additional software/hardware?
What are other options commonly used for syncing neural activity to stimulus presentations?
Upon acquiring synced data, how do I extract ERPs at stimulus onset and participant responses in BIOPAC AcqKnowledge?
Thank you for any comments or links.
Milgram (1974) proposed that humans exist in two different states: autonomy and agency. In an autonomous state, a human acts according to his or her own free will. However, when given instruction by an authority figure, humans switch to an agentic state of mind, in which they see themselves as acting as agents for the authority figure. Milgram observed that many participants in his obedience study (1963) experienced moral strain when ordered to harm another person. Moral strain occurs when people are asked to do something they would not choose to do themselves and feel is immoral or unjust. This moral strain leaves the individual feeling very uncomfortable in the situation and, in extreme circumstances, produces anxiety and distress. This anxiety is felt as the individual contemplates dissent and considers behaving in a way that contradicts what he or she has been socialized to do. The shift into an agentic state of mind relieves moral strain, as the individual displaces responsibility for the situation onto the authority figure, thereby absolving him- or herself of the consequences of his or her actions.
I want to test an incubation effect on divergent thinking using the alternative uses test (AUT), with an incubation interval between the first and second AUT administrations. I do not know how to instruct the participants: in the second AUT, should they repeat the ideas they wrote down the first time,
or should I tell them they are not allowed to write down the same ideas?
I would like to know if there is a standardized cut-off concerning missing data in terms of completion rate in a questionnaire.
I mean: above which percentage of missing values per subject do we choose to remove that subject from the analysis?
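Whatever threshold one settles on, applying it is straightforward to automate. The sketch below uses a 20% default purely as an illustration of a convention sometimes seen in survey research, not as a standard; the function name and data layout (None marking a missing answer) are my own choices:

```python
def filter_by_completion(responses, max_missing=0.20):
    """Drop subjects whose proportion of missing answers (None) exceeds
    `max_missing`. The 0.20 default is an illustrative convention, not
    a universally agreed cut-off."""
    kept = {}
    for subject, answers in responses.items():
        missing = sum(a is None for a in answers) / len(answers)
        if missing <= max_missing:
            kept[subject] = answers
    return kept
```

Reporting the threshold, the number of subjects excluded, and a sensitivity check at a second threshold is usually more defensible than any single cut-off.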
I believe this is an effect that can be found for reading text aloud vs. reciting text aloud as well, but I'm having a hard time finding references for instrumental performance.
I want to use a chi-square test on two unequal samples. Both are in the hundreds, so there is no issue with minimum expected cell counts. I know that unequal sample sizes are not a problem for the chi-square test; however, I am trying to find a statistics book or published article that I can cite to make this argument in a manuscript. Does anyone know of such a reference?
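The reason unequal sample sizes are unproblematic is visible in the computation itself: the expected counts are built from the row and column marginals, so each group's expected frequencies automatically scale with its size. A minimal sketch for the 2x2 case (in practice one would just call a library routine such as scipy.stats.chi2_contingency):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table, no
    continuity correction. Expected counts come from the marginals,
    which is why unequal row (group) totals need no special handling."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, r, col in [(a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)]:
        exp = r * col / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    return chi2
```

With groups of 100 and 400 sharing the same success proportion, the statistic is exactly zero despite the 4:1 size imbalance, which illustrates the point: only the proportions, not the group sizes, drive the test.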
I would like to use skin conductance responses to indicate when a participant has seen something that is very salient to them and grabs their attention during unconstrained viewing of real, natural stimuli. This would involve combining eye-tracking with SCR measurement and seeing what the participant was looking at when an SCR was evoked. However, I am new to the SCR method and am not sure whether this approach is feasible and valid from a theoretical standpoint. Any advice or links to relevant papers would be greatly appreciated.