Discussion
Started 5th Aug, 2021

Video Database for Emotional Psychological Experiments

Hi everyone,
I am looking for a neutral video clip for my control group. I am not looking for specific face datasets, but rather for a neutral video with mundane content (e.g. a person explaining something, or an ordinary day-to-day activity such as doing the laundry). The clip should be around 4-5 minutes long and free to use for academic purposes (without copyright issues etc.). If anyone has any suggestions, I would highly appreciate them! Thank you!

All replies (4)

Frank T. Edelmann
Otto-von-Guericke-Universität Magdeburg
Dear Sophie, thanks for posting this technical question on RG. As an inorganic chemist I'm certainly not a proven expert in your field of research. However, I can suggest a few potentially useful links. For example, please have a look at the following article, which is freely available as public full text right here on RG:
Database of Emotional Videos from Ottawa (DEVO)
Also please check this link:
The Chieti Affective Action Videos database, a resource for the study of emotions in psychology
This article has been published Open Access so that it is freely accessible on the internet (see attached pdf file).
I hope this helps. Good luck with your work!
2 Recommendations
Sophie Slawik
Humboldt-Universität zu Berlin
Dear Frank, thank you so much for your reply. I came across some of your suggestions in the meantime too (DEVO and CAAV). I also had a look at the database by Schaefer et al. (2010). Unfortunately, the (neutral) videos I am looking for are always too short.
Thanks again for your help. I really appreciate it!
1 Recommendation
João Lucas Hana Frade
University of São Paulo
Hello. I am not sure whether you have already found something, but you can try the resources of Stanford's Psychophysiology Laboratory (https://spl.stanford.edu/resources#films) or the videos from Soleymani et al. (2011) (https://mahnob-db.eu/hci-tagging/media/uploads/manual.pdf). Maybe one of their videos can meet your needs.

Similar questions and discussions

How do I report the results of a linear mixed models analysis?
Question
46 answers
  • Subina Saini
1) Because I am a novice at reporting the results of a linear mixed models analysis: how do I report the fixed effect, including the estimate, confidence interval, and p-value, in addition to the size of the random effects? I am not sure how to report these in writing. For example, how do I report the confidence interval in APA format, and how do I report the size of the random effects?
2) How do you determine the significance of the size of the random effects (i.e. how do you determine if the size of the random effects is too large and how do you determine the implications of that size)?
3) Our study consisted of 16 participants, 8 of whom were assigned a technology with a privacy setting and 8 of whom were not. Survey data were collected weekly. Our fixed effect was whether or not participants were assigned the technology. Our random effects were week (for the 8-week study) and participant. How do I justify using a linear mixed model for this study design? Is it accurate to say that we used a linear mixed model to account for missing data (i.e. non-response; technology issues) and participant-level effects (i.e. how frequently each participant used the technology; differences in technology experience; high variability in each individual participant's responses to survey questions across the 8-week period)? Is this a sufficient justification?
I am very new to mixed models analyses, and I would appreciate some guidance. 
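For question 1, one minimal sketch of how such a model could be fitted and its pieces pulled out for APA-style reporting, using Python's statsmodels. The data below are simulated stand-ins for the 16-participant, 8-week design described in the question; all numbers and variable names are illustrative, not from the actual study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
participants = np.repeat(np.arange(16), 8)        # 16 participants x 8 weeks
week = np.tile(np.arange(8), 16)
group = (participants < 8).astype(int)            # 8 with the privacy setting, 8 without
subj_effect = rng.normal(0, 1, 16)[participants]  # simulated participant-level intercepts
y = 2.0 + 0.8 * group + subj_effect + rng.normal(0, 0.5, 16 * 8)

df = pd.DataFrame({"y": y, "group": group, "week": week, "pid": participants})

# Random intercept for participant; week effects could be added as well.
model = smf.mixedlm("y ~ group", df, groups=df["pid"])
result = model.fit()

# Pieces typically reported in writing:
est = result.params["group"]                 # fixed-effect estimate (b)
ci_low, ci_high = result.conf_int().loc["group"]
p = result.pvalues["group"]
var_participant = result.cov_re.iloc[0, 0]   # random-intercept variance

print(f"b = {est:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}], p = {p:.3f}")
print(f"participant random-intercept variance = {var_participant:.2f}")
```

In prose this would read along the lines of "b = …, 95% CI [… , …], p = …", with the random-effect variances reported alongside; the exact APA phrasing is a separate question from the computation.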
Converting from partial eta^2 to cohen's d for repeated measures (within-subject) designs?
Question
7 answers
  • Caitlin Duncan
Dear all,
Is there a way to convert partial eta^2 to cohen's d for repeated measures designs? I ask because I am conducting a meta-analysis and need to convert the studies' results into a common effect size.
I read that it can be calculated in 2 steps: first cohen's f and then cohen's d.
(1) cohen's f can be calculated from partial eta^2 as follows:
cohen's f = sqrt(partial eta^2 / (1 - partial eta^2))
(2) cohen's f can be converted to cohen's d as follows:
cohen's d = f*2
When I try this with an example from a paper in which there was a partial eta^2 of .42, I get the following:
f=.85
d=1.7
However, I also have the pre and post means and SDs, and when I calculate the cohen's drm according to Lakens (2013), I get very different values. (Note that I do not have the r-value so I will assume one of .5). Mpre = .92, SDpre = .09, Mpost = .98, SDpost = .02.
Cohen's drm = (Mdiff/sqrt(SDpre^2 + SDpost^2 - 2 * r * SDpre * SDpost)) * sqrt(2(1-r))
drm = .73.
This is a huge difference. Even when changing the r-values, this effect size does not get close to the d-value of 1.7 estimated from the partial eta^2 value.
As such, I presume that the calculation I used from partial eta^2 to cohen's d is incorrect for a repeated measures design, and that a correction needs to be applied at step 1 or 2 (or both). I have not been able to find information about this so far. Does anyone know how to do this conversion for repeated measures designs properly?
Thank you in advance for your help.
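Both calculations in the question can be reproduced numerically; the sketch below (plain Python, with r = .5 assumed for the pre-post correlation, as stated) confirms that the discrepancy is in the formulas as applied, not in the arithmetic.

```python
import math

# Steps (1)-(2) from the question: partial eta^2 -> Cohen's f -> d (d = 2f).
peta2 = 0.42
f = math.sqrt(peta2 / (1 - peta2))
d_from_eta = 2 * f
# f ≈ 0.85, d ≈ 1.70, matching the values in the question

# Lakens (2013) d_rm from the reported means and SDs, assuming r = .5.
m_pre, sd_pre, m_post, sd_post, r = 0.92, 0.09, 0.98, 0.02, 0.5
sd_diff = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
d_rm = ((m_post - m_pre) / sd_diff) * math.sqrt(2 * (1 - r))
# d_rm ≈ 0.73

print(f"d from partial eta^2: {d_from_eta:.2f}; d_rm: {d_rm:.2f}")
```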
What is the best way to crush the soul of an assistant professor?
Discussion
10 replies
  • Thomas E. Becker
Say that your department has just hired a smart, inspired assistant professor. This person has a great education, is thoroughly knowledgeable about theory and methodology, and is genuinely enthusiastic about working for the department. What is the most effective way to destroy this person's inspiration and long-term (post-tenure) research productivity?
I argue that the most effective way is to:
1. Try to manage the person through transactional means. That is, emphasize the carrots and sticks related to their job. An example would be making it clear that if they don't perform to some (often arbitrary) standard, they will be fired. (The Stick.) In contrast, if they do perform at or above the standard, they will have lifetime job security, almost regardless of future performance. (The Carrot.) Also, be sure to conduct annual performance appraisals emphasizing their short-term productivity.
2. Make their evaluations contingent on publishing in journals on some list. This will redirect their focus from topics that they love and that maximize their reading audience to topics that are likely to get published in journals on the list and that address a narrower audience. It also reinforces the transactional nature of the management system.
3. Introduce a large dose of politics into evaluations. One way is to have people who are negligibly qualified evaluate their work. For instance, have someone outside their field conduct the annual evaluation. Staff T&P committees with people who cannot read and understand their work, and weight the opinions of these committees more strongly than, say, the external letters of experts in the field who have deep knowledge of the person's publications. Give substantial weight to variables like citizenship and collegiality, which can cover up personal biases and other factors irrelevant to productivity.
That's my list. What's yours?
