Carl Viggo N. G. Lövenstierne’s research while affiliated with KTH Royal Institute of Technology and other places


Publications (1)


Figures and tables (from the article): Figure 1, Analysis Pipeline; Figure 4, Examples of Original Coder-Model Disagreement and the New Coders' Codes; Figure 5, Language Significantly Associated With the Big Three Implicit Motives; Table, Overview of Study Data; Table, Summary of Implicit Motive Model Performance on the Holdout Data
Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster
  • Article
  • Full-text available

April 2025 · 61 Reads · 1 Citation · J. Malte Runge · [...]
Implicit motives, nonconscious needs that influence individuals’ behaviors and shape their emotions, have been part of personality research for nearly a century but differ from personality traits. Assessing implicit motives is highly resource-intensive, requiring expert coding of individuals’ written stories about ambiguous pictures, and this cost has hampered implicit motive research. Using large language models and machine learning techniques, we aimed to create high-quality implicit motive models that are easy for researchers to use. We trained models to code the need for power, achievement, and affiliation (N = 85,028 sentences). The person-level assessments converged strongly with the holdout data, intraclass correlation coefficient, ICC(1,1) = .85, .87, and .89 for achievement, power, and affiliation, respectively. We demonstrated causal validity by reproducing two classical experimental studies that aroused implicit motives. We had three coders recode sentences where our models and the original coders strongly disagreed; the new coders agreed with our models in 85% of the cases (p < .001, ϕ = .69). Using topic and word embedding analyses, we found that the specific language associated with each motive has high face validity. We argue that these models can be used in addition to, or instead of, human coders. We provide a free, user-friendly framework in the established R-package text, together with a tutorial for researchers to apply the models to their data; the models reduce coding time by over 99% and require no cognitive effort for coding. We hope this coding automation will facilitate a historic renaissance in implicit motive research.
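The convergence statistic reported above, ICC(1,1), is the one-way random-effects intraclass correlation from Shrout and Fleiss's taxonomy, computed from the one-way ANOVA mean squares. As a minimal illustration (not the authors' pipeline, and with made-up scores), it can be computed like this in Python:

```python
import numpy as np

def icc_1_1(ratings):
    """One-way random-effects ICC(1,1).

    ratings: array of shape (n_targets, k_raters), e.g. one column of
    model codings and one column of human codings per person.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    target_means = ratings.mean(axis=1)

    # One-way ANOVA decomposition: between-target and within-target mean squares
    ms_between = k * ((target_means - grand_mean) ** 2).sum() / (n - 1)
    ms_within = ((ratings - target_means[:, None]) ** 2).sum() / (n * (k - 1))

    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical person-level motive scores: model vs. one human coder
scores = [[2, 2], [5, 4], [1, 1], [4, 4], [3, 2], [6, 6]]
print(round(icc_1_1(scores), 2))  # → 0.95
```

High values (the paper reports .85–.89) indicate that swapping the model in for a human coder changes person-level scores very little.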


Citations (1)


... Currently, the scoring of tests for implicit motivation is far more time-consuming and cumbersome. Nevertheless, this may very soon become a problem of the past, since modern computing with deep learning algorithms has already been applied to the analysis of texts originating from the picture story exercise (Gruber, 2022; Gruber & Jockisch, 2020; Nilsson et al., 2025; Pang & Ring, 2020; Young, 2024). ...

Reference:

Sexual Motivation (Desire): Problems with Current Preclinical and Clinical Evaluations of Treatment Effects and a Solution
Automatic Implicit Motive Codings Are at Least as Accurate as Humans’ and 99% Faster