Advice-giving is a common theme of wisdom in daily life. Prior work showed that resting-state neural oscillations were associated with wisdom when advising from a second- but not a third-person perspective. In the current study, we hypothesized that resting-state neural activity should be associated with wise advising as a function of psychological...
Advice-giving is an important way to share life experiences and promote wisdom in society. We hypothesized that resting-state Default Mode Network (DMN) activity should be associated with increased wisdom when advising from a self-related perspective, due to the DMN's involvement in reflection on personal life experiences. In our study, 52 participa...
Emotion regulation is an integral part of mental health, dynamically impacting brain function, as one’s emotions change continuously throughout the day. Impairments in emotion regulation are associated with a range of psychiatric disorders. Although the implications of emotion regulation are crucial to mental health, few studies have examined train...
In this essay, I will explore the hard problem of consciousness and its implications for guiding neuroscience. Firstly, I will explicate how the zeitgeist of the twenty-first century is inevitably guided by philosophical assumptions in scientific disciplines such as cognitive neuroscience, while presenting how this field has fundamentally neglected...
The traditional hierarchical model of face processing proposes dissociable pathways comprising the fusiform face area (FFA), which handles static face information such as identity, and the superior temporal sulcus (STS), which processes dynamic face information such as expression. However, to the best of our knowledge, no studies to date have examined...
I am currently working on a project with a colleague that uses EEG data to classify emotions using the circumplex model (i.e., valence/arousal). We plan to use the DEAP dataset for emotion calibration. However, one point on which my colleague and I differed was whether:
- Pre-recorded EEG data from the DEAP dataset can be used directly to train a classifier, OR
- It is necessary to record participants' live EEG data while they are instructed to view items from the DEAP database, in order to categorize emotions effectively with a classifier.
My concern is that a classifier trained on pre-recorded EEG data from the DEAP database would not classify as accurately as one trained on EEG recorded from our own participants while they view items from the DEAP database. However, my colleague suggests that recording raw EEG data from participants would be too time-consuming and less effective. Does anyone familiar with EEG and emotion classification have any insights? Any suggestions are appreciated.
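For what the "train on pre-recorded data, apply to a new session" option looks like in practice, here is a minimal sketch. The band-power features and binarized valence labels are synthetic stand-ins (not real DEAP data), and the feature dimensions, classifier choice, and distribution shift are all illustrative assumptions:

```python
# Sketch: train an SVM on features from a pre-recorded dataset, then apply it
# to features from a new "live" session. Synthetic data stands in for DEAP;
# with real data you would first extract comparable features (e.g. band
# powers) from the preprocessed EEG.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "pre-recorded" training set: 200 trials x 32 band-power features.
X_train = rng.normal(size=(200, 32))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)  # high/low valence

# Synthetic "live" session with a small distribution shift, mimicking the
# session-to-session variability that makes cross-dataset transfer hard.
X_live = rng.normal(loc=0.2, size=(50, 32))
y_live = (X_live[:, 0] + 0.5 * X_live[:, 1] > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
accuracy = clf.score(X_live, y_live)
```

The drop between training accuracy and live-session accuracy is exactly the quantity at issue in the question: if it is small, pre-recorded DEAP data may suffice for calibration; if large, recording your own participants becomes worthwhile.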
I am currently working on a project with a colleague using a Brain-Computer Music Interface (BCMI) to generate music from EEG signals. Affective states will be recorded with EEG signals and sent to a generative music algorithm. A couple of questions I had about the design:
1) We plan to use the Emotiv Epoc+ (14 channels). Does anyone know of a way to stream raw EEG data into EEGLAB in real time, or must the EEG data be recorded with external data acquisition software and analyzed separately?
2) To run the generative music algorithm, is it necessary to train a classifier to model emotion? Can an SVM or random-forest classifier be used to classify emotion from EEG signals, which can then be fed into the generative music algorithm? Or is this step unnecessary?
3) We plan to use the DEAP dataset for emotion calibration. However, one point on which my colleague and I differed was whether:
- Pre-recorded EEG data from the DEAP dataset can be sent directly into the generative music algorithm, OR
- Participants' EEG data needs to be recorded while they are instructed to view items from the DEAP database, in order to gauge their affective brain states.
The generative algorithm is inspired by Ehrlich et al. (2019), and is designed to generate sounds reflective of the user's affective state.
I have a multiple-choice quiz in which each question has one correct answer, and I am trying to determine whether people's scores are statistically significantly above chance. There are 8 questions, each with 4 options, so the probability of guessing a question correctly is 1/4. The average score is 83%.
Obviously anything above 0.25 (25%) exceeds the chance level, but a score above 25% could still arise from guessing. I am unsure which statistical test to use to determine this; I have not run a statistical test on multiple-choice data before.
Bonus points: if you know how to statistically analyze each question individually, that would be helpful too.
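The standard tool for this situation is an exact binomial test: for a participant who answers k of n questions correctly, the one-sided p-value is P(X ≥ k) for X ~ Binomial(n, 0.25). A sketch using only the standard library (the example counts of 7/8 and 20/30 are hypothetical):

```python
# Exact one-sided binomial test for above-chance quiz performance.
# With 4 options per question, the chance level is p = 0.25.
from math import comb

def binom_pvalue(k: int, n: int, p: float = 0.25) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-sided exact p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A participant scoring 7 of 8 (87.5%, near the reported 83% average):
p_one = binom_pvalue(7, 8)

# Per-question analysis: did more participants than chance answer a given
# question correctly? E.g. a hypothetical 20 of 30 participants correct:
p_question = binom_pvalue(20, 30)
```

SciPy's `scipy.stats.binomtest` computes the same quantity if you prefer a library call. The per-question version answers the bonus question: test each question's count of correct responders against Binomial(n_participants, 0.25), ideally with a multiple-comparison correction across the 8 questions.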
Rather than publishing in a scientific journal, I am working independently on a project that is not affiliated with any lab. I will be writing a review article, and I was wondering whether anyone knows of third-party venues for publishing such work. Any help is appreciated.
A collaboration of wisdom researchers interested in uncovering the functional activity of brain regions that may be involved in wisdom, with the potential to identify the neural networks it engages.
We are using transcranial magnetic stimulation to create a virtual lesion in the STS, to understand its role in face recognition and how it is specialized for dynamic emotional expression, which is dissociated from the static identity processing in the OFA.