Emotion Recognition - Science topic

Questions related to Emotion Recognition
  • asked a question related to Emotion Recognition
Question
6 answers
Hello,
Can anyone share with me a health dataset related to emotion recognition, i.e., datasets that contain text, audio, visual, and physiological data?
Relevant answer
Thank you so much,
  • asked a question related to Emotion Recognition
Question
34 answers
Are emotions just a bodily reaction? The "Perceptual Theory of Emotion" (see Prinz) assumes that a feeling represents the internal state of the body, signaled by interoceptors located throughout the body, in all internal organs. They represent the quality of the organs' functioning, i.e., the state of homeostasis. In addition to interoceptor signals, information is transmitted to all cells of the body in the form of hormones, neuromodulators, and neurotransmitters released into the bloodstream.
Of course, we also have higher mental states related to the awareness of our own emotional state. But here we are talking not about propositional and access awareness, but about perceptual and phenomenal awareness. So love is just butterflies in the stomach, general and sexual arousal, etc., according to every psychology textbook. Of course, there are also hopes for a better financial position and housing, travel, joint children, etc., but this belongs precisely to the propositional sphere.
Similarly, other feelings can be recognized and associated with the body's corresponding homeostatic, somatic, and behavioral responses. Bodily reactions represent the internal states of the organism and are closely related to the internal organs. We can mention here a plethora of examples: contraction and relaxation of smooth muscles, constriction of the bronchi, trachea, larynx, and pharynx, obstruction of the airways, convulsive gasping for air. Among behavioral reactions: emotions are expressed in facial expressions and body posture; the diaphragm tightens, which causes shallow breathing. Under stress, people unknowingly tighten the anus and buttocks, and the weight of the body shifts from the metatarsus to the heels, which is why people move and stand differently. The kneecaps are pulled up and the thighs are stiffened; the muscles lying along the spine are also strained; the hair stands on end; the eyes blink; the heart rhythm changes, with palpitations. Next, let us mention somatic reactions: sleep disturbances, headaches of various kinds, pain in the spine and joints, lack of energy, hunger, thirst, heartburn, itching, burning, numbness, colic, tingling, redness, pain in various parts of the body, and sweating. On the part of the digestive system: spasms of the intestines and stomach, flatulence, belching, vomiting, nausea, indigestion, constipation, irritable bowel syndrome, etc. And other psychogenic reactions such as teeth grinding, dizziness, euphoria, and so on.
Can we add anything more to this list? How about other emotions?
Relevant answer
Answer
Emotions (each of those that are basic to all humans) are patterns of reaction to certain TYPES* of situations. All emotions (except, basically, interest and distress, these being present at birth or very soon after) ARE SOCIAL in nature and in good part (initially) a response to how behavior-patterns-in-an-environmental-situation ARE RECEIVED (viewed + or -) BY OTHERS. Of course, the reactions of others become internalized, so the same basic emotions can and will "come up" without others present.
* FOOTNOTE: the TYPES of situations reacted to initially have clear, discoverable (definable) concrete aspects. TYPES, at least at first, are not an abstraction.
  • asked a question related to Emotion Recognition
Question
1 answer
I am currently working on setting up the EVAL-AD5940BIOZ (Bio-Electric Evaluation Board: https://www.analog.com/en/design-center/evaluation-hardware-and-software/evaluation-boards-kits/eval-ad5940bioz.html#eb-overview) to measure skin impedance for my project. Due to the lack of proper documentation from the Analog Devices team, I am having difficulty obtaining human skin impedance data.
They have provided documentation only for acquiring data from their Z-Test board (which has various combinations of R, L, and C parameters to mimic skin), but not for acquiring EDA data from human skin.
If anyone has experience with the setup, could you share your insights on it? It would be helpful to many researchers who are working in this domain.
P.S.: I have tried asking for help in their forum, but nothing has worked for me as of now.
Regards
Lokesh
Relevant answer
In fact, you can use any impedance meter to measure skin impedance. All you need is to stick two metallic probes on the skin and measure the impedance between the two electrodes with the impedance meter. There are many RLC meters that measure the impedance of the skin, which is equivalent to a parallel combination of a resistance and a capacitance. You have to press the electrodes against the skin to avoid air gaps between the electrode and the skin. You can use salty solutions to wet the electrodes. In this way you will get reproducible results.
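As a side note, the parallel R-C model mentioned above has a simple closed form, Z = R / (1 + jωRC). Here is a minimal Python sketch of it; the component values are illustrative assumptions, not measured skin data:

```python
import numpy as np

def parallel_rc_impedance(r_ohm, c_farad, freq_hz):
    """Complex impedance of a resistor and capacitor in parallel:
    Z = R / (1 + j*2*pi*f*R*C)."""
    omega = 2 * np.pi * freq_hz
    return r_ohm / (1 + 1j * omega * r_ohm * c_farad)

# Illustrative values only: dry skin is often modelled with R in the
# hundreds of kilo-ohms and C in the tens of nanofarads.
z = parallel_rc_impedance(r_ohm=200e3, c_farad=20e-9, freq_hz=1e3)
print(abs(z), np.angle(z, deg=True))  # magnitude (ohms) and phase (degrees)
```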
You can also use the method in the paper at the given link:
I used this method to measure the skin impedance and it gave good results.
Best wishes
  • asked a question related to Emotion Recognition
Question
8 answers
Hi everybody,
Given the different methods of speech feature extraction (IS09, IS10,...), which one do you suggest for EMODB and IEMOCAP datasets?
Relevant answer
Answer
The key to speech emotion recognition is the feature extraction process. The quality of the features directly influences the accuracy of the classification results. If you are interested in typical feature extraction, the Mel-frequency cepstral coefficients (MFCCs) are the most widely used representation of the spectral properties of voice signals; you can also try energy, pitch, formant frequencies, Linear Prediction Cepstral Coefficients (LPCC), and modulation spectral features (MSFs).
As for your question of whether IS09 or IS10 is better: both work well and there is no big difference between them, but I recommend trying high-level (deep learning) features, which will definitely outperform low-level ones.
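To make the low-level baseline concrete, here is a minimal sketch of MFCC extraction with the librosa library; the file name and the frame/hop choices are illustrative assumptions:

```python
import librosa

# Load an utterance (placeholder path); EMODB recordings are 16 kHz WAVs.
y, sr = librosa.load("emodb_utterance.wav", sr=16000)

# 13 MFCCs per ~25 ms frame with a 10 ms hop, a common configuration.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                            n_fft=int(0.025 * sr),
                            hop_length=int(0.010 * sr))
print(mfcc.shape)  # (13, n_frames)
```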
  • asked a question related to Emotion Recognition
Question
9 answers
I am working on a paper and using Schwartz's (1992) value scale. I have translated each value into my own language. Should I conduct a pilot study first in order to establish the validity and reliability of the translated scale?
Relevant answer
Answer
I do recommend pilot testing. First, we must review the literature, explore the concept, list the themes, formulate the items, and select the judges.
It is at this point that we can say that we have created the instrument and therefore the instrument thus developed has content validity, but we have not yet assessed any of its metric properties.
Up to this point, we have not made use of statistics to corroborate the suitability of the instrument we are evaluating, therefore, at this point, we start the quantitative phase of instrument validation and this corresponds to the evaluation of its metric properties.
After the pilot test, consistency must be evaluated; the possible reduction of items and dimensions must be analysed; and finally, the identification of a criterion must be achieved.
  • asked a question related to Emotion Recognition
Question
22 answers
I am working on my Master's project. I need to assess learners' emotions in real time. Is there any free software available for this purpose?
I found
and MathWorks code for behavioral detection and some others, but they are too costly to afford. Can anyone help in this regard?
Relevant answer
Answer
I think the go-to non-commercial (open source) tool is OpenFace: https://github.com/TadasBaltrusaitis/OpenFace
If you want to widen your scope from the face a bit (much of emotion research has focused on facial expression), have a look at OpenPose as well: https://github.com/CMU-Perceptual-Computing-Lab/openpose
For the voice, consider OpenSmile: https://www.audeering.com/research/opensmile/
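For completeness, audEERING also ships a Python wrapper for openSMILE; a minimal sketch, assuming the `opensmile` pip package and a placeholder file name:

```python
import opensmile

# eGeMAPS functionals: a compact, widely used acoustic feature set.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,
    feature_level=opensmile.FeatureLevel.Functionals,
)
features = smile.process_file("speech_sample.wav")  # one row per file
print(features.shape)
```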
As concerns commercial systems for facial expression recognition, we did, for example, a comparison of 8 commercial classifiers a while ago:
For the voice, have a look at, e.g., this recent paper in Emotion Review (Schuller & Schuller, 2020):
  • asked a question related to Emotion Recognition
Question
4 answers
I need a set of images of emotional faces (angry or sad; neutral; happy) of infants. I would like to present these faces frontally on a screen in a computer experiment. If the infant images were matched in size, luminance, position, etc., with other images of adults, that would be great! Thank you
  • asked a question related to Emotion Recognition
Question
2 answers
I am looking for research papers regarding emotion recognition using big data analytics and Spark MLlib, i.e., face and emotion detection using Spark MLlib.
Relevant answer
Answer
Michael Uebel Thank you
  • asked a question related to Emotion Recognition
Question
3 answers
Is there a Python library for detecting emotion accurately from Bengali text?
Relevant answer
Answer
Hi,
  • asked a question related to Emotion Recognition
Question
4 answers
What are the main features you would extract from a social network to model wellbeing and mental health?
Also, what are the common formulas for the following features: engagement, popularity, participation, ego?
Relevant answer
Answer
Eiman Kanjo Here is a paper which created a Mental Well-Being Index ( https://precog.iiitd.edu.in/pubs/mental-wellbeing-2017.pdf ) and analyzed college campuses in the US using Reddit. Hope this is helpful. Feel free to reach out if you need any further information.
  • asked a question related to Emotion Recognition
Question
8 answers
The triangular theory of love by Sternberg characterises love in an interpersonal relationship using three different dimensions: intimacy, passion, and commitment. According to this theory, different stages and types of love can be categorised by different combinations of these three elements. My question is this: does this model fully capture the essence of what we call love? Can love not also have other attributes? We can have feelings of love towards a person without commitment, passion, or intimacy; some people are asexual, for example, and lack commitment, and others, such as teenagers, lack an understanding of what love is. Do you agree with this theory, or do you, like me, see problems with such a sharp division of the elements of human affect? Best wishes, Henrik
Relevant answer
Answer
For a complete answer to this interesting question (without any desire for prominence or any petulance), you can read some of my contributions here on RG, specifically the article "Desire and love: the unfinished man", in which the two "Triangular Models" of love, and ways to evaluate and measure them, are presented:
- That of John Lee, from the University of Toronto, which offers us a wide typology of the ways of loving. Based on a questionnaire to measure falling in love, he established three primary types of love that are quite independent from each other (EROS, LUDUS and STORGE), identifying three "pure" combinations of these primary types: MANIA, PRAGMA and AGAPE.
- And, above all, that of Sternberg in his "Triangular Theory of Love", which considers three primary components (INTIMACY, PASSION and COMMITMENT), one at each vertex of the triangle, thus giving rise to seven possibilities. INTIMACY: liking, friendship and affection without commitment or passion. ROMANTIC LOVE: intimacy plus passion, with feelings of closeness and outbursts of passion. PASSION: love at first sight, fickleness, mental and physical excitement. FATUOUS OR FALSE LOVE: passion plus commitment, a lightning engagement and wedding before intimacy develops, which usually leads to failure. COMMITMENT: the decision that one loves another person, without intimacy or passion; it is an empty love or a love of convenience. COMPANIONATE LOVE: commitment plus intimacy, a solid but not romantic friendship.
As expected, in real life all these elements are mixed, with COMPLETE or CONSUMMATE LOVE, which combines intimacy, passion and commitment, at the center of the triangle.
  • asked a question related to Emotion Recognition
Question
2 answers
This is not a question:
We used an emotion-evaluated corpus consisting of 10,000 English sentences from 7 genres. We applied a specific phonemic decomposition based on the phonetic transcripts. The result of the applied principal component regression showed that the phonemic content is very strongly correlated (r = 0.96) with the ratings of emotional valence (positive vs. negative emotion) provided by readers. We designed an online experiment that aims at evaluating whether the discovered dependencies hold outside the corpus, and how all this depends on the native language of the reader.
We will be grateful, dear colleagues, if you find time to assist this emotional sound-symbolic statistical analysis by participating in the online experiment here:
It will, unfortunately, take you about 20 minutes.
The experiment is horrible - reading eight texts and deciding how to group them depending on your feelings. The texts are one page long!
But we have no other possibility than to ask colleagues for scientific assistance.
Now - the question is this one:
Could you, please, find 20 minutes to participate?
Thank you in advance!
Velina Slavova
Relevant answer
Answer
I would love to know the results when you publish the study.
  • asked a question related to Emotion Recognition
Question
7 answers
I am engaged in a project on "Human affect based threat prediction". I am interested to see whether human emotions can be correlated with, and used to predict, human behavioral responses. I am collecting references that make such claims, or literature on similar ideas. Help is very much appreciated.
Relevant answer
Answer
Yes, reactions result from intense emotions.
  • asked a question related to Emotion Recognition
Question
5 answers
In all the emotion recognition through EEG research papers for the DEAP dataset, I noticed that the 60-second data was divided into many small chunks, and then feature extraction methods were applied to each chunk. So we generally get several hundred values after feature extraction from a single 60-second recording (the DEAP dataset has 1280 60-second recordings at a 128 Hz sampling rate).
Now, while training the model, all the research works just shuffled the data and divided it into training and testing datasets. So, for example, let's say from a single 60-second recording we got 100 values after feature extraction; 60 of them go to the training dataset and 40 go to the testing dataset. Would this be a fair evaluation when training a classification model, given that part of the video used for evaluation is in the training dataset?
And when trying the same model such that all the values extracted from a single 60-second recording go either to the training or to the testing dataset, with no splitting between them, the accuracy drops significantly compared to the above case, or the model sometimes doesn't learn anything at all.
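The leakage concern in the question is well founded. A minimal sketch of a trial-wise (grouped) split with scikit-learn, where every chunk carries the ID of its source 60-second trial; the array shapes are illustrative assumptions:

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# X: one row per chunk; groups: the 60-second trial each chunk came from.
X = np.random.rand(1280 * 100, 32)        # illustrative shapes only
y = np.random.randint(0, 2, len(X))
groups = np.repeat(np.arange(1280), 100)  # 100 chunks per trial

# All chunks of a trial land on the same side of the split, so no trial
# contributes to both training and testing.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.4, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))
assert not set(groups[train_idx]) & set(groups[test_idx])
```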
Relevant answer
Answer
Look at the link; maybe it is useful.
Regards,
Shafagat
  • asked a question related to Emotion Recognition
Question
1 answer
What are the next steps in speaker identification after we extract the MFCC features? Thank you so much.
  • asked a question related to Emotion Recognition
Question
7 answers
Dear community, I am currently working on emotion recognition. As a first step I'm trying to extract features. While checking some resources, I found that they used the SEED dataset, which contains EEG signals of 15 subjects recorded while the subjects were watching emotional film clips. Each subject was asked to carry out the experiment in 3 sessions, so there are 45 experiments in this dataset in total. Different film clips (positive, neutral, and negative emotions) were chosen so as to obtain the highest match across participants. The length of each film clip is about 4 minutes.
The EEG signals of each subject were recorded as separate files containing the name of the subject and the date. These files contain a preprocessed, down-sampled, and segmented version of the EEG data. The data was down-sampled to 200 Hz, and a bandpass frequency filter from 0-75 Hz was applied. The EEG segments associated with each movie were extracted. There are a total of 45 .mat files, one for each experiment (every person carried out the experiment three times within a week). Every subject file includes 16 arrays: 15 arrays contain the preprocessed and segmented EEG data of the 15 trials in one experiment, and an array named LABELS contains the corresponding emotional labels (-1 for negative, 0 for neutral, and +1 for positive).
I found that they loaded each class separately (negative, neutral, positive), fixed the length of each signal at 4096 samples and the number of signals per class at 100, and fixed the number of features extracted from wavelet packet decomposition at 83. My question is: why did they select exactly 83, 4096, and 100?
I know that my question is a bit long, but I tried to explain the situation clearly. I appreciate your help, thank you.
Relevant answer
Answer
As far as the problem is concerned, when DWT is executed as a feature extraction method, lots of wavelet coefficients are generated as the decomposition level increases. Therefore, choosing the necessary features from the DWT coefficients becomes crucial for the effectiveness and conciseness of the classifier. Generally, this issue is solved by manual selection of the features. However, to determine the practicability of DWT, an active feature selection (AFS) strategy for the DWT coefficients can be adopted. The AFS process contains two phases. In phase 1, to reduce and provide a uniform feature dimensionality, the relative power ratio (RPR) is used to pre-process each DWT coefficient obtained from the recorded responses. In phase 2, the optimal DWT coefficients for classification are identified with an automatic search method, to ensure that the outcome of the whole searching process is the most useful set of features for classification. Through these phases, the original DWT features are refined and selected according to their RPR values, and concise features are achieved in an active manner with no human designation.
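To illustrate how such per-segment feature vectors arise, here is a minimal sketch of wavelet packet features with the PyWavelets library. The wavelet, level, and feature (log sub-band energy) are illustrative assumptions; the exact counts of 83, 4096, and 100 are design choices of the cited paper, not something derivable here:

```python
import numpy as np
import pywt

signal = np.random.randn(4096)  # one fixed-length EEG segment (illustrative)

# Wavelet packet tree; the leaves at the chosen level partition the band
# up to Nyquist into 2**level equal sub-bands.
wp = pywt.WaveletPacket(data=signal, wavelet="db4", maxlevel=4)
leaves = wp.get_level(4, order="freq")

# One simple feature per leaf: log sub-band energy.
features = np.array([np.log(np.sum(node.data ** 2)) for node in leaves])
print(features.shape)  # (16,) features per segment at level 4
```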
  • asked a question related to Emotion Recognition
Question
4 answers
Emotion recognition is an emerging research area with strong applications in next-generation communications using digital modes. To develop robust algorithms for this, we are looking for some good datasets.
Relevant answer
Answer
Thank you Muhammad Ali, Omar Bouhamed and Moreno Colombo for sharing these.
  • asked a question related to Emotion Recognition
Question
5 answers
Hi,
I am working on a project whose ultimate goal is emotion classification from speech, and I want to try several approaches. One of them is training a convolutional neural network on MFCC coefficients extracted from audio. I can easily extract them, since several Python libraries are capable of doing so, but I am not quite sure how to use them: I have a matrix of 13xN values, where N depends on how long the audio input is, and that obviously is not suitable as input for a neural network.
I understand that the coefficients are calculated for very short frames, and that's my N. I could feed the network frame after frame, but since emotions do not change rapidly within milliseconds, I'd like to work in a wider context; let's say I'd like to have a 13x1 vector for every 3-4 seconds. Now, suppose I am able to isolate the coefficients for a given time span (e.g., a 13x200 matrix for about 3 seconds of audio): how do I turn that into a 13x1 vector suited to emotion recognition? Am I supposed to calculate, e.g., the mean and use the 13 means as the neural network input? Or standard deviations, or something else, or a combination of a few? What about normalisation or some other preprocessing of the coefficients?
Most papers covering this issue are very vague about this part of the whole process, usually saying only something like "we used 13 MFCC coefficients as neural network input", with no details about how to actually use them.
Can someone tell me what the best practices with MFCCs in emotion recognition are, or recommend some papers covering this problem?
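One common recipe (though not the only one) is exactly the statistics pooling the question guesses at: compute per-coefficient "functionals" over each segment. A minimal sketch, assuming a 13xN MFCC matrix:

```python
import numpy as np

def pool_segment(mfcc_segment):
    """Collapse a (13, n_frames) MFCC block into one fixed-length vector
    by concatenating per-coefficient statistics ("functionals")."""
    return np.concatenate([
        mfcc_segment.mean(axis=1),  # 13 means
        mfcc_segment.std(axis=1),   # 13 standard deviations
        mfcc_segment.min(axis=1),   # 13 minima
        mfcc_segment.max(axis=1),   # 13 maxima
    ])                              # -> 52-dimensional segment vector

segment = np.random.randn(13, 200)  # placeholder for ~3 s of coefficients
vec = pool_segment(segment)
print(vec.shape)  # (52,)
```

A common follow-up step is to z-normalise these vectors across the training set before feeding them to the network.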
  • asked a question related to Emotion Recognition
Question
4 answers
I am new to machine learning and I am currently doing research on speech emotion recognition (SER) using deep learning. I found that recent literature mostly uses CNNs, and there are only a few studies on SER using RNNs. I also found that most approaches use MFCCs.
My questions are:
- Is it true that CNN has been proved to outperform RNN in SER?
- If yes, what are the limitations that RNN have compared with CNN?
- Also, what are the limitations of the existing CNN approaches in SER?
- Why is MFCC used the most in SER? Does MFCC have any limitations?
Any help or guidance would be appreciated.
Relevant answer
Answer
Answers to some of your queries are as follows:
- It depends on the network configuration and the way one creates training examples and datasets. A lack of systematic benchmarking of existing methods, however, creates confusion. There are several studies which show that LSTM outperforms CNN for speech emotion recognition; especially LSTM with an attention mechanism helps to boost emotion recognition performance. Some other studies report the opposite, and CNN seems to be the better choice. You can do a quick literature survey of the latest papers published in the last couple of INTERSPEECH, ICASSP & ASRU editions.
- MFCC is the default choice for most speech processing tasks, including speech emotion recognition. However, MFCC is not optimal, as it lacks prosody information and long-term information. That's why MFCC is often augmented with pitch (to be more specific, log F0) and/or shifted delta coefficients; this additional information helps to boost emotion recognition performance. MFCC also lacks phase information, but the role of phase in emotion recognition has not been much investigated. The parameters for MFCC computation, such as the number of filters and the frequency scale, are chosen experimentally, and they depend on the dataset and the backend classifier.
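To illustrate the augmentation described above, here is a minimal sketch with librosa; the file path is a placeholder, and the exact recipe (delta orders, F0 range, log-F0) varies across papers:

```python
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=16000)  # placeholder path
hop = int(0.010 * sr)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, hop_length=hop)
delta = librosa.feature.delta(mfcc)             # first-order dynamics
delta2 = librosa.feature.delta(mfcc, order=2)   # second-order dynamics

# Frame-level F0 track (prosody), aligned to the same hop length.
f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr, hop_length=hop)
n = min(mfcc.shape[1], len(f0))

features = np.vstack([mfcc[:, :n], delta[:, :n], delta2[:, :n],
                      np.log(f0[:n])[None, :]])
print(features.shape)  # (40, n_frames)
```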
  • asked a question related to Emotion Recognition
Question
9 answers
I am currently working on emotion recognition. I am looking for a dataset that contains labeled image files for emotion (i.e., happiness, sadness, etc.).
I've found several datasets, but not exactly what I need:
· Google facial expression comparison dataset
There are no labels (like emotion categories) in this dataset.
· FER
The dataset does not include image files. Instead, pixel values of images are stored in CSV files.
· The Japanese Female Facial Expression (JAFFE) Database
Small dataset
· CK+
Small dataset
If you know a large dataset, could you please inform me? Thanks a lot.
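On the FER point above: the pixel strings can be converted back into image files in a few lines. A minimal sketch, assuming the standard fer2013.csv layout (48x48 grayscale, space-separated pixel values):

```python
import csv
import numpy as np
from PIL import Image

with open("fer2013.csv", newline="") as f:
    for i, row in enumerate(csv.DictReader(f)):
        # Each row: an emotion label (0-6) and 48*48 space-separated pixels.
        pixels = np.array(row["pixels"].split(), dtype=np.uint8)
        img = Image.fromarray(pixels.reshape(48, 48), mode="L")
        img.save(f"fer_{i:05d}_label{row['emotion']}.png")
```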
Relevant answer
Answer
You can easily find many datasets for speech emotion recognition.
Search for IEMOCAP and EMO-DB, because both are very popular and publicly available. Both of them are acted SER datasets.
  • asked a question related to Emotion Recognition
Question
5 answers
I'm working on a research project and I need references to studies, papers, web pages, etc., on how a computer-based system could in the future detect when a student is not understanding a topic of a lecture by detecting the emotions expressed on the student's face. So, imagine you are a professor and you are explaining a new concept to the class. Maybe you see on your students' faces a confused or surprised expression that tells you they aren't understanding the concept well.
That's what I need: a source, a reference to a paper, or any official source of information to show that what I'm saying isn't just words, but is supported by research. I want to show that certain emotions can tell us the weaker knowledge points of students. All I've found is about how positive/negative emotions affect academic achievement, performance, etc., but I want to focus on the association of weak knowledge points with emotions: how we may know a student isn't understanding a topic from the emotion he or she is expressing.
Thank you so much and I hope I've explained myself.
Relevant answer
I agree with Steve Schneider.
  • asked a question related to Emotion Recognition
Question
11 answers
Affective technologies are interfaces based on the branch of emotional artificial intelligence known as affective computing (Picard, 1997): applications such as facial emotion recognition technologies, wearables that can measure your emotional and internal states, social robots that interact with the user by extracting and perhaps generating emotions, voice assistants that can detect your emotional states through modalities such as voice pitch and frequency, and so on.
Since these technologies are relatively invasive of our private sphere (feelings), I am trying to find influencing factors that might enhance user acceptance of these types of technologies in everyday life (I am measuring the effects with the TAM). Factors such as trust and privacy might be very obvious, but moderating factors such as gender and age are also very interesting. Furthermore, I need relevant literature on which I can ground my work, since I am writing a literature review on this topic.
I am thankful for any kind of help!
Relevant answer
Answer
Affective technologies like social robots must respond appropriately according to context. For example, if the goal is to build empathy (towards human acceptance), the social robot must imitate the affective state of humans. In any case, affective technologies need to recognize human emotions first. In this context, we developed this paper:
I hope it will be useful
  • asked a question related to Emotion Recognition
Question
8 answers
I am researching Affective Algorithmic Music Composition and would like to know more about the factors that influence whether emotions are perceived or induced.
Relevant answer
Answer
A good question; it would be interesting to know more about what you find out.
It's almost certain that memory plays a role in whether or not someone feels or perceives an emotion in music, and the lyrics are also likely to be a contributory factor.
It's probably worth reading The Handbook of Music and Emotion by Patrik Juslin and John Sloboda (https://global.oup.com/academic/product/handbook-of-music-and-emotion-9780199604968?cc=gb&lang=en&), if you've not already!
  • asked a question related to Emotion Recognition
Question
3 answers
Emotion recognition is part of social-emotional skills. The development of these skills is very important in the modern world, especially in childhood. We want to use a differential-psychological approach to measuring the recognition of emotions in childhood. What literature would you recommend studying?
Emotion recognition is a complex and multifaceted construct. What is included in emotion recognition? Which construct should be measured? We want to create an objective test.
Which age is the most interesting for studying emotion recognition? And how, in your view, could the resulting test be used?
Relevant answer
Answer
In terms of currently available measures, I commonly use the social perception subtests (emotion recognition and theory of mind) from the NEPSY-II. I’ve also been looking into the literature on children and Alexithymia, which might be of interest.
  • asked a question related to Emotion Recognition
Question
6 answers
I am currently working on facial valence and arousal emotion prediction. I have found only the AffectNet dataset, but it has many mislabeled images. Please share dataset links for affective emotion recognition.
Relevant answer
Answer
Hello, apart from AffectNet, two more widely used in-the-wild databases that are annotated in terms of valence and arousal (and were developed by myself and the IBUG lab at Imperial College London) are Aff-Wild and its extension Aff-Wild2. The databases, along with their respective publications (IJCV, CVPR, BMVC), can be found here:
You can also drop me an email if you need assistance.
  • asked a question related to Emotion Recognition
Question
15 answers
I need help with a research project I'm doing. I want to know which expressions appear on students' faces in class when they don't understand a topic the professor is explaining. Do you know where I can find information? Any recommendable paper? All help is welcome.
Thank you so much in advance.
Relevant answer
When you do not understand something, you usually feel frustration. This feeling can be expressed in the form of anger, irritation, dislike for the professor, and so on. If you do not have anyone who can help, you may feel dissatisfaction, which can translate into a sense of being more stupid than the other students. I attach something about students' coping mechanisms:
  • asked a question related to Emotion Recognition
Question
6 answers
Dear Researchers
We are in the process of developing a multimodal, multisensor wrist band with a variety of sensors, including a heart monitor, EDA, an accelerometer, body temperature, and others. Please drop a message here if you think you would be interested in using such a device.
Best wishes
Eiman
Relevant answer
Answer
Absolutely, I would use such a device.
  • asked a question related to Emotion Recognition
Question
4 answers
I am currently working on "emotional voice conversion" but suffering from a lack of emotional speech databases. Is there any emotional speech database that can be downloaded for academic purposes? I have checked a few databases, but they have only limited linguistic content or few utterances for each emotion. IEMOCAP has many overlaps, which makes it unsuitable for speech synthesis. I would like to know whether there is any database that has many utterances with different contents for different emotions, with high speech quality and no overlap.
Relevant answer
  • asked a question related to Emotion Recognition
Question
1 answer
I'm working on a little project to measure emotions with different methods (k-NN, DNN, NN, Random Forest, ...).
At the moment I have two datasets: with one of them I train the different methods (the public DEAP dataset), and then I want to test the results on the other dataset.
Now I am trying to bring the two datasets to a comparable level, and I noticed that the second dataset, which was recorded with the EMOTIV EPOC+, has large peaks in different places.
Is it possible that the peaks were caused by blinking?
Furthermore, both datasets are sampled at 128 Hz and have µV as their unit, but the EPOC+ dataset has a value range of [-1000; 1000] while the DEAP dataset has a value range of only [-10; 10]. What could be the cause?
I uploaded a plot from an example of each dataset.
Relevant answer
Answer
The large peaks in the EPOC dataset do look like blinks/other eye movements. If you used the experiment outlined in the DEAP paper, you likely have a good deal of ocular artifacts in your data. For example, the large spike around ~24 seconds in the EPOC dataset looks like an eyeblink, whereas the large perturbations later on (38-47 seconds) look like generalized eye movement (the participant scanning the image, moving the eyes freely). Given the size of the artifacts in the EPOC dataset, it's not necessarily surprising that your y-axis is so large compared to the DEAP data. Of note, the EPOC data also have some pretty noticeable electrical artifacts (the very fast oscillations seen across all channels from ~25-39 seconds).
I think with some basic artifact rejection and filtering you could knock out a lot of your artifacts. There are a number of approaches you could use depending on the additional sensors you used to record your data (e.g., EOG, earlobe/mastoid references, etc.). It looks like you are using the preprocessed DEAP data, which have already been corrected for these artifacts. To best match your data sets, I would recommend following the same procedures as outlined on the DEAP data website ( https://www.eecs.qmul.ac.uk/mmv/datasets/deap/readme.html ). Good luck!
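As a starting point for that filtering step, a minimal sketch with SciPy, assuming the 4-45 Hz band-pass described for the preprocessed DEAP release and illustrative array shapes:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs=128.0, lo=4.0, hi=45.0, order=4):
    """Zero-phase Butterworth band-pass along the time axis.
    eeg: array of shape (n_channels, n_samples)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

raw = np.random.randn(14, 128 * 60)  # illustrative: 14 EPOC channels, 60 s
clean = bandpass(raw)
```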
  • asked a question related to Emotion Recognition
Question
3 answers
We are trying to build a CNN model to classify emotions from music. We know that only certain neurons will be triggered while identifying positive and negative emotions, but we cannot identify those neurons or a way to trigger them.
Thanks
Relevant answer
Answer
A possible approach would be to observe the absolute values of the weights in your layers, as done in:
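A minimal sketch of that weight inspection in Keras, assuming a trained model saved to a placeholder path and an illustrative Dense layer name:

```python
import numpy as np
from tensorflow import keras

model = keras.models.load_model("music_emotion_cnn.h5")  # placeholder path

# Rank units in a chosen layer by the mean absolute value of their incoming
# weights; heavily weighted units are candidates for further probing.
layer = model.get_layer("dense_1")      # illustrative layer name
weights, bias = layer.get_weights()     # weights: (n_inputs, n_units)
influence = np.abs(weights).mean(axis=0)
print(np.argsort(influence)[::-1][:10])  # ten most heavily weighted units
```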
  • asked a question related to Emotion Recognition
Question
2 answers
Our lab is very interested in using different behavioral measures to probe attentional biases in normal and clinical populations. Our purpose is to collect and analyze data that allow us to build hypotheses about how different cognitive and emotional operations subserve specific cognitive and affective processes.
Relevant answer
Answer
You can use the PsychoPy software.
For facial image databases, you can see this link:
  • asked a question related to Emotion Recognition
Question
12 answers
Hello, I work with convolutional neural networks and LSTMs in speech emotion recognition. In my results I see that the CNN shows better performance than the traditional LSTM.
Why is this?
Normally the LSTM should be better in speech recognition, as I use sequential data.
Thanks
Relevant answer
Answer
The offset is the displacement from sample to sample. If you measure your sliding window from the beginning of one frame to the beginning of the next, a sliding window offset of 25 ms means there is no overlap between frames. Suppose you slice a sequence of phonemes every 25 ms with a sliding window offset of 25 ms: what guarantee do you have of capturing all phonemes within those frames? That is why you do it with some offset x < 25 ms.
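A minimal sketch of that windowing arithmetic, assuming a 25 ms window and a 10 ms offset (hop):

```python
import numpy as np

def frame_signal(x, fs, win_ms=25.0, hop_ms=10.0):
    """Slice a 1-D signal into overlapping frames. With hop < window,
    consecutive frames overlap, so short events near frame boundaries
    are still fully contained in at least one frame."""
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - win) // hop)
    return np.stack([x[i * hop: i * hop + win] for i in range(n_frames)])

x = np.random.randn(16000)          # 1 s of audio at 16 kHz (illustrative)
frames = frame_signal(x, fs=16000)
print(frames.shape)                 # (98, 400)
```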
  • asked a question related to Emotion Recognition
Question
9 answers
I wonder whether there is any way to map emotions onto the two-dimensional circumplex space model based on valence and arousal generated from either heart rate or GSR (or any other biometrics)?
I presume there should be coordinates for each emotion on the circumplex model, but I couldn't really find any. I have read several papers using self-report questionnaires, where you can say, for instance, that (5, 1) refers to happiness. But what if we use the results from biometrics such as GSR, heart rate, etc.?
Thank you!
Relevant answer
Answer
You need valence, arousal, and a basic emotion label. With just two dimensions you cannot discriminate hunger from fear.
  • asked a question related to Emotion Recognition
Question
10 answers
Social media is one of the top distribution channels for user-created content. By analyzing what users share on social media platforms (tweets, photos, likes, re-tweets, etc.), one can see which types of posts receive the most engagement and use that rich information to shape marketing, communication, channel, and content strategy. We have a project and articles on social media analytics: https://www.researchgate.net/project/Multi-Channel-Social-Media-Predictive-Analytics
However, we would like to hear suggestions and alternative recommendations from researchers who are working on social media analytics.
What algorithms / tools / software packages do you recommend for social media analytics? How do you relate your solution to machine learning or deep learning frameworks?
Relevant answer
Answer
I agree with Osama Thamer Hassan
  • asked a question related to Emotion Recognition
Question
3 answers
I would like to hear from researchers who have used smart rings with heart rate monitors or any other biometric sensing mechanisms. What is your experience? Do you have a favourite?
regards,
Eiman
Relevant answer
Answer
We have used plethysmography data. A plethysmograph measures blood volume, in our case in the subject's thumb. We have used these data to detect the emotions of participants, but plethysmography measurements can also be used to compute the heart rate (HR). We have also seen that blood pressure and heart rate variability correlate with emotions. Further details can be found via
  • asked a question related to Emotion Recognition
Question
2 answers
I have a set of audio files, each annotated by >= 5 annotators with ratings of valence, activation, and dominance (continuous units of affect). I want to measure the inter-rater agreement (and perhaps plot it). What metric should be used here?
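One metric commonly used for continuous ratings from multiple raters is the intraclass correlation coefficient (ICC); Krippendorff's alpha is another option. A minimal sketch with the pingouin package (this is a suggestion, not from the thread; the file and column names are illustrative):

```python
import pandas as pd
import pingouin as pg

# Long-format table: one row per (clip, annotator) pair.
df = pd.read_csv("valence_ratings.csv")  # columns: clip, annotator, valence

icc = pg.intraclass_corr(data=df, targets="clip",
                         raters="annotator", ratings="valence")
print(icc[["Type", "ICC", "CI95%"]])  # e.g. ICC2k for average-rating agreement
```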
Relevant answer
Answer
Not my area of expertise
  • asked a question related to Emotion Recognition
Question
1 answer
Hello everyone!
I would like to know whether there is currently a database of video clips and/or images depicting virtual agents expressing emotions. I've searched everywhere, but I have not been able to find one...
Relevant answer
Answer
For the work on my master's thesis (back in 2011) I needed an expressive virtual character, but like you I couldn't find anything, so I decided to create one myself.
I found Daz3D Studio, which is free to download and use, although you need to buy each element you use, with the oldest models becoming free after some time.
So I installed the studio and found a free female model (back then it was Victoria; now I found Genesis, check the links below), and so far I didn't pay for anything.
Eventually, when I wanted to manipulate the facial expressions of the virtual character (model), I had to buy a relevant package and paid about $20, which I see is still priced about the same.
I am not an expert in 3D graphics, I only had a little experience, and to be honest, in the beginning I found the interface a bit confusing, so it took me a couple of weeks to manage to create the expressions; but for me at that time it was the only solution.
Since I was already familiar with this method, I did the same for a conference paper back in 2016.
Hope that helps! Good luck!!
  • asked a question related to Emotion Recognition
Question
3 answers
Dear Researchers,
I'm looking for neuroscience articles on different types of emotion and how they influence our daily routine behaviours. I'm looking for types of emotion, feeling/affect, and causes.
Any suggestion would be much appreciated.
Thanks
Angok
Relevant answer
Answer
Dear Angok,
Neuroscientist Antonio Damasio has written a lot about emotions and feelings and their influence on one's behavior. You can profit a lot from reading him.
As I see it, emotions are, say, the energizer of one's behavior.
Best regards,
Olando
  • asked a question related to Emotion Recognition
Question
26 answers
We are looking for a large database (200+) of pictures of human faces with neutral facial expressions, in order to conduct an experiment on nonverbal learning mechanisms. We are having difficulty finding appropriate pictures because we need the people in the pictures to be Caucasian, aged 20 to 40, with neutral facial expressions, on a neutral background. Also, it would be very good if the database were free for research purposes. Can somebody please suggest an existing database that he or she knows?
Thank you in advance, 
Jovana
Relevant answer
Answer
Hi! For anyone else seeking such a database, I have combed through all of the resources listed here in addition to numerous other sources in order to construct the Face Image Meta-Database (fIMDb): https://cliffordworkman.com/resources/
The fIMDb includes info or estimates on: the number of photo sets per source (and the numbers of neutral and other sets, e.g., facial emotions), the number of subjects per source (with approximate sex distributions), the total number of images, the approximate number of viewpoints, whether the source includes photos from more than one ethnicity, whether it includes more than one age group, whether meta-data are available, the photo category (e.g., posed, wild), and the reference(s) for the source (e.g., DOIs). I hope this will aid others interested in conducting research on responses to faces.
  • asked a question related to Emotion Recognition
Question
10 answers
My research group needs to annotate emotions through video observations, with recordings of students' faces and screens as they interact with educational software. These annotations will be used in algorithms for detecting emotions. The annotations need to be made by observers, not by the students themselves. We're struggling to find protocols for emotion annotation from videos. Does anyone know of a protocol with these characteristics, or could you help us with this?
Relevant answer
Answer
Thanks Jeffrey M Girard and Anastasia Pampouchidou for the suggestion. I am now studying CARMA and verifying whether it meets, even partially, our needs. We need a method of annotating emotions through video: coders (note-takers) need to observe recordings of students' faces and screens while they are interacting with a computer-based learning environment, and we are interested in nonbasic emotions like flow, confusion, frustration, and boredom, which are common in complex learning environments and in computer-based learning environments (our study target).
  • asked a question related to Emotion Recognition
Question
3 answers
Emotion recognition is a broad area within sentiment analysis. In the current scenario, most research on emotion recognition uses facial images, gestures, and signals like EEG, ECG, and EMG, among many others.
What about the use of EOG for emotion recognition? Is eye-movement-based emotion recognition possible with EOG?
It is a fact that the human eye tends to show differences for each emotion by means of pupil dilation.
Is it possible to predict human emotion using EOG alone?
Relevant answer
Answer
You can also find additional information in the book Artificial Cognition Architectures (Springer). We discuss modeling human emotions, among other things.
  • asked a question related to Emotion Recognition
Question
4 answers
I want to take a deeper look at the facial expressions of people with profound intellectual and multiple disabilities (PIMD) to analyse their emotional expressions.
For this purpose, I am searching for ready-to-use software which combines image processing (e.g., OpenPose, OpenFace) and machine learning. In addition, I would prefer software that is free (i.e., open source), or at least free for non-commercial research purposes.
So, I am not looking for methods, but for ready-to-use software which includes a feature to train my own models. The reason is that every person with PIMD shows very unique behavioural signals and, therefore, you need one model for each emotion of a person.
Finally, I do not need a GUI or visualization; a simple command-line application would be enough.
A hint would be very helpful.
Relevant answer
Answer
I do not know whether there is "ready-made" software available for this, but my collaborators at the University of Basel, in the GRAVIS group in the Department of Mathematics and Informatics, have been working on facial recognition software.
Their project that comes closest to what you seek is "Social judgement of faces", led by Sandro Schönborn.
You can visit their webpage here:
The software they use - scalismo - is open source.
  • asked a question related to Emotion Recognition
Question
23 answers
Climate change is among the most pressing issues facing human society today. Even though the science of climate change is complex, many studies have shown that the burning of fossil fuels is a major contributor. To reduce or mitigate the effects of climate change and create a sustainable society, sustainable development demands that we move towards a low-carbon society. One sustainable energy technology that has emerged as a potential solution, in addition to renewable energy, is carbon capture and storage (CCS). If you are not familiar with CCS, the video below by ZEP can give you an insight.
According to the Intergovernmental Panel on Climate Change report, we are unlikely to meet our climate targets (such as the Paris climate target) without CCS. The International Energy Agency's reports also show that renewable energy technologies cannot do it alone. CCS is an important part of the portfolio of low-carbon options we need to move our society in a sustainable direction in the short, medium, and long term. What is the problem? While some think CCS is a good solution, others think it is a bad idea. For example, Greenpeace tagged CCS as a scam. Do you think it is a scam? I'd like to hear your opinion. Is carbon capture and storage a good idea or not? Give reasons for your opinion.
P.S
Relevant answer
Answer
There are uncertainties around the viability of CCS, since CCS requires energy and thus reduces the efficiency of the plants where it is used. CCS cannot be used on all emission sources, e.g. vehicles. The stored CO2 could potentially leak. CCS does not solve the cause of the problem, but rather deals with the consequences. This is not the optimal solution, but it could be needed if we are not able to reduce emissions fast enough by targeting the root causes.
Carbon Capture and Utilisation (CCU) is where the captured CO2 is used instead of being stored, e.g. to produce fuels. This could perhaps be a better way forward than CCS, since the CO2 is exploited and some of the challenges of CCS, e.g. leakage from storage sites, are avoided. CCS or CCU combined with biomass could provide a net reduction of CO2 from the atmosphere.
  • asked a question related to Emotion Recognition
Question
7 answers
I am working on a facial emotion recognition task, for which I need either dynamic facial images or morphed ones as experimental stimuli (I have already used static images). For that purpose, please suggest a relevant database.
Relevant answer
Answer
Hi Kushi, you may find the RAVDESS helpful in your paradigm. It's a validated multimodal database of emotional speech and song. It contains 7356 recordings in English, with 8 emotions: calm, happy, sad, angry, fearful, surprise, disgust, and neutral, each at two emotional intensities.
  • asked a question related to Emotion Recognition
Question
7 answers
Dear All,
I am interested in automatic emotion recognition from human speech signals and would like to use an interview for emotional speech elicitation. I would appreciate any recommendations of validated interviews for eliciting discrete emotions. Many thanks in advance!
Kind regards,
Claudia
Relevant answer
  • asked a question related to Emotion Recognition
Question
2 answers
We are working on emotion recognition in pre-symptomatic patients with Huntington's disease. We have read about the Bell Lysaker Emotion Recognition Task (BLERT) and its strong psychometric properties, and we would like to use it in our research.
Relevant answer
Answer
Dear Stephen Joy,
Thanks for your suggestion.
  • asked a question related to Emotion Recognition
Question
3 answers
Has anyone encountered research literature surrounding the connection between emotion recognition or emotional intelligence and lineup identification or facial recognition?
  • asked a question related to Emotion Recognition
Question
5 answers
Out of interest, and as part of my MSc, I am looking to write a "research grant proposal" (as part of an assignment) for a study using TMS to manipulate facial emotion recognition.
I understand from my research that the fusiform gyrus (face area) is not accessible with rTMS, but I was wondering whether there are any other areas/pathways to which rTMS can be applied to manipulate facial emotion recognition.
If anyone has any information, could you kindly link it below? I would be very grateful.
Relevant answer
Answer
rTMS is mostly used for depression therapy and similar psychiatric treatments. On the other hand, rTMS may cause some side effects in the respective subjects (e.g., hearing problems, headache, etc.). Based on my experience, these kinds of problems may affect data quality and cause problems during analysis.
There are some papers which analyze the rTMS and emotion relationship.
For example:
Choi, K.M., Scott, D.T. and Lim, S.L., 2016. The modulating effects of brain stimulation on emotion regulation and decision-making. Neuropsychiatric Electrophysiology, 2(1), p.4.
  • asked a question related to Emotion Recognition
Question
5 answers
I want to implement the Deep Retinal Convolution Neural Network for speech emotion recognition described in this paper: https://arxiv.org/ftp/arxiv/papers/1707/1707.09917.pdf. The authors of this paper achieved 99% accuracy on the IEMOCAP and EMO-DB databases.
What I understood from this paper is that I first have to convert the audio into spectrograms using the Data Augmentation Algorithm Based on Retinal Imaging Principle (DAARIP) and then input these into a DCNN.
I am having a hard time breaking this approach down into easy steps.
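For the first step (audio to spectrogram), a minimal sketch with librosa; this covers only the plain log-mel spectrogram, not the paper's DAARIP lens-distortion augmentation, and the file path and parameters are illustrative:

```python
import librosa
import numpy as np

y, sr = librosa.load("iemocap_utterance.wav", sr=16000)  # placeholder path

# Log-mel spectrogram: a common "image" input for AlexNet-style CNNs.
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128,
                                     n_fft=1024, hop_length=256)
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)  # (128, n_frames), ready to resize/stack for the CNN
```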
Relevant answer
Answer
Hello Saad,
I took a brief look at the attached paper. As far as I understood, the training/evaluation procedure is just the standard one for CNNs. The paper uses an AlexNet-like architecture (similar to the literature cited in the introduction); there is nothing special about the architecture, training, or evaluation.
The only novelty is the data augmentation scheme the authors use. The authors treat the speech spectrograms as images and apply a lens distortion to them, which they call the "DAARIP algorithm". Multiple distorted versions (steps 3, 4 and 5) are added to the augmented dataset.
However, I would be careful with the claim of achieving 99% accuracy with this approach. The catch is that the paper applies data augmentation BEFORE dividing the dataset into training, validation, and test sets. This means that an augmented version of a spectrogram can be in the test dataset while the original is in the training dataset. So I am not sure whether this result is just strong overfitting of the network that the authors failed to notice.
You can use this technique to reproduce the result of that paper; however, I expect that the performance of the approach will drop significantly when tested on data that is truly unseen by the network.
Regards,
Michael
  • asked a question related to Emotion Recognition
Question
3 answers
For building a speech emotion detection and recognition system, which approach would be better: a Hidden Markov Model or a deep learning (RNN-LSTM) approach? I have to build an SER system and I am torn between the two. If there are better models than these two, kindly tell me.
In addition to the question above, I watched a Siraj Raval video ( https://www.youtube.com/watch?v=u9FPqkuoEJ8 ) in which he says that HMMs were previously the state of the art but deep learning is now much more accurate. I need a rationale for this statement.
Relevant answer
Answer
"previously HMM were state of the art but now Deep Learning is much more accurate."
That is a rather big generalisation. In practice it depends on many factors. What are the criteria for assessing performance: algorithmic complexity (in training or in evaluation mode), accuracy, or some other measure on the confusion matrix? What type of data? How accurate are the annotations (neural networks require labelled data)? Is it important to understand what you are actually modelling?
If you are happy to use a poorly understood structure like a deep learning neural network, with hundreds of layers, thousands of parameters, etc., possibly requiring many orders of magnitude more computation to train than a hidden Markov model, you can do that. It is true that the accuracy in certain image recognition and language processing problems is superior for deep learning. But the question should be: why is it superior? Do people really understand what is happening with this method?
For HMMs, you are doing a lot of the modelling based on knowledge of the problem domain in order to restrict or constrain the structure of the probabilistic (Markov chain) models. You use knowledge of the measurement process to construct sensible probabilistic models for the HMM. It is the combination of accurate modelling and a powerful theoretical framework that gives confidence in such a method.
This is in contrast to what happens in deep learning, where you hope that the network parameters will converge to some useful setting via gradient descent optimisation. I would call this a black box - the method performs but you don't understand why. If all you are worried about is overall accuracy, then maybe this is OK. The question really relates to whether you are doing science or just applying an interesting technique to some data.
  • asked a question related to Emotion Recognition
Question
4 answers
This is way outside my field and I wouldn't know where to begin.
Relevant answer
Answer
Not directly. There is some research showing that the amount of time spent online appears to relate to a decrease in gray matter, as well as in other key functional parts of the brain (although what it means to be functional, and for what purposes, is not discussed here; being good at doing things online could be the 'functional' of tomorrow, for example). Tie this to the idea that a more social brain (which presumably will be better at recognizing the facial expressions that convey emotion, however much emotion is recognized consciously versus subconsciously) will most likely feature larger versions of these brain regions (Robert Sapolsky argues in his book Behave that the bigger the social group, the more social the brain, and thus the more evolved these brain regions will be, to deal with deception, which includes recognizing as well as suppressing emotions, and so on), and you get an indirect answer: more time spent online most likely reduces your ability to recognize emotions displayed in facial expressions in vivo. If we reduce everything to these dimensions, of course.
Sources:
  • Hong, Soon-Beom, Jae-Won Kim, Eun-Jung Choi, Ho-Hyun Kim, Jeong-Eun Suh, Chang-Dai Kim, Paul Klauser, et al. “Reduced Orbitofrontal Cortical Thickness in Male Adolescents with Internet Addiction.” Behavioral and Brain Functions 9, no. 1 (2013): 11. doi:10.1186/1744-9081-9-11.
  • Hong, Soon-Beom, Andrew Zalesky, Luca Cocchi, Alex Fornito, Eun-Jung Choi, Ho-Hyun Kim, Jeong-Eun Suh, Chang-Dai Kim, Jae-Won Kim, and Soon-Hyung Yi. “Decreased Functional Brain Connectivity in Adolescents with Internet Addiction.” Edited by Xi-Nian Zuo. PLoS ONE 8, no. 2 (February 25, 2013): e57831. doi:10.1371/journal.pone.0057831.
  • Lin, Fuchun, Yan Zhou, Yasong Du, Lindi Qin, Zhimin Zhao, Jianrong Xu, and Hao Lei. “Abnormal White Matter Integrity in Adolescents with Internet Addiction Disorder: A Tract-Based Spatial Statistics Study.” PloS One 7, no. 1 (2012): e30253. doi:10.1371/journal.pone.0030253.
  • Yuan, Kai, Wei Qin, Guihong Wang, Fang Zeng, Liyan Zhao, Xuejuan Yang, Peng Liu, et al. “Microstructure Abnormalities in Adolescents with Internet Addiction Disorder.” Edited by Shaolin Yang. PLoS ONE 6, no. 6 (June 3, 2011): e20708. doi:10.1371/journal.pone.0020708.
  • Zhou, Yan, Fu-Chun Lin, Ya-Song Du, Ling-di Qin, Zhi-Min Zhao, Jian-Rong Xu, and Hao Lei. “Gray Matter Abnormalities in Internet Addiction: A Voxel-Based Morphometry Study.” European Journal of Radiology 79, no. 1 (July 2011): 92–95. doi:10.1016/j.ejrad.2009.10.025.
  • Sapolsky, Robert M. Behave: The biology of humans at our best and worst. Penguin, (2017): 429, 513, 430.
  • asked a question related to Emotion Recognition
Question
3 answers
I want to compare "neutral" baseline data with data recorded in a test session, to finally be able to evaluate the arousal/affect of the infant.
Which software would you recommend? Do you have any literature advice?
Any advice would be appreciated!
All the best
Sam
Relevant answer
Answer
Do you know this paper?
Shigeaki Amano, Tadahisa Kondo, and Sachiyo Kajikawa (2001). Analysis on infant speech with longitudinal recordings. The Journal of the Acoustical Society of America, 110, 2703.
  • asked a question related to Emotion Recognition
Question
6 answers
Research on the recognition of facial expressions of emotion often seems to rely on pictures in which actors show different acted facial emotions (i.e., they are not genuinely surprised or angry, etc.). I was wondering whether the results would be different with the use of real, "authentic" facial emotions. Does anyone have one or two papers on the subject to suggest?
Relevant answer
Answer
Michelle,
Lie to Me is a good show, for which Paul Ekman was an adviser. Most of the facts about facial expressions, and about what we can and cannot know from them, are supported by scientific evidence. But on the deception detection topic, the focus on facial expressions, and more specifically on micro-expressions, is exaggerated. A better focus on interviewing techniques and strategies, and on the manipulation of cognitive load by these techniques and strategies, would have given a better picture of the possibilities offered by research on lie detection.
But a very good show, I'm a fan ^^
Julien
  • asked a question related to Emotion Recognition
Question
4 answers
I have analysed a number of online articles and have their emotion analysis scores along with their sentiment (pos/neg/neutral) and the sentiment value. The fields are: Anger, Disgust, Fear, Sadness and Joy. What I would like to know is whether it is possible to somehow combine the values of these fields to represent them as one value. I also have comments related to those articles, with their sentiment and emotion scores in similar fields.
This would permit me to find a threshold that I can use to grade the articles and the comments according to that single value. For example, an article might be: Anger=0.100637, Disgust=0.327951, Fear=0.243857, Joy=0.043951 and Sadness=0.364933.
Clearly in this example the sadness value is the highest, followed by disgust, but would it be right to ignore the lower-scoring fields and classify that article as "sadness"-related when "disgust" is that close? Would the "sadness" value be representative of the article? And what if another has 0.148988, 0.14043, 0.070271, 0.609123 and 0.103031? Nearly equal parts "anger" and "disgust", but with 60% "joy"?
My first thought was to use some sort of mean, but that would not be accurate at all, as the differences between the scores would certainly be lost.
Can someone please help me a little with this problem? Can all five values somehow be represented as one? Thank you.
Relevant answer
Answer
On your first example: when forwarding your interpretation, you could say that the article was mostly associated with sadness and disgust; nothing wrong with that. You could also add that the article was least associated with joy and anger.
On the other hand, you may decide to use only one emotion (presumably the one with the highest value) in your interpretation of the article.
A third option would be to combine both methods by setting a threshold, i.e., a value that determines whether you use one or an n number of emotions in your interpretation. Let's say in your example you use a threshold of 0.05. Since the difference between disgust and sadness is below that threshold, you include both emotions in your interpretation: "The article was mostly associated with disgust and sadness." And since the other values fall outside it, they no longer need to be included.
Make sure to state which methodology you will be using for your interpretations. The point is to use that same method for all of your evaluations, once stated. As such, there should be no problem, regardless of which option you choose.
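A minimal MATLAB sketch of that third option, using the five fields from the question and the illustrative 0.05 threshold:
emotions  = {'Anger', 'Disgust', 'Fear', 'Joy', 'Sadness'};
scores    = [0.100637 0.327951 0.243857 0.043951 0.364933];
threshold = 0.05;
keep  = scores >= max(scores) - threshold;   % top emotion plus near-ties
label = strjoin(emotions(keep), ' and ');    % here: 'Disgust and Sadness'
Applied to the second article in the question, only Joy survives the cut, which matches the single-emotion reading.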
  • asked a question related to Emotion Recognition
Question
4 answers
Detecting emotion from Bangla text. We will use Facebook posts at the beginning.
Relevant answer
Answer
Our software, EventIDE (www.okazolab.com), allows you to perform real-time emotion recognition in the background of any behavioral task. The recognition is fully automated, it works with any webcam, and the results can be recorded, monitored and synchronized with other data, such as eye-tracking, EEG, etc. The license costs are very affordable.
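For the text side of the question, a rough starting point (before any trained model) is a lexicon count. The word lists below are hypothetical stand-ins; for Bangla posts you would substitute a Bangla emotion lexicon:
lexicon = struct('joy',   {{'happy','great','love'}}, ...
                 'anger', {{'hate','angry','worst'}}, ...
                 'sad',   {{'sad','cry','alone'}});
post   = lower('no one loves me, I feel so sad and alone');
tokens = regexp(post, '\w+', 'match');       % crude tokenisation
emos   = fieldnames(lexicon);
scores = zeros(numel(emos), 1);
for k = 1:numel(emos)
    scores(k) = sum(ismember(tokens, lexicon.(emos{k})));   % count lexicon hits
end
[~, best] = max(scores);
fprintf('Predicted emotion: %s\n', emos{best});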
  • asked a question related to Emotion Recognition
Question
3 answers
Hi everyone,
I am totally puzzled about how to proceed with the classification step, after feature extraction. Let's start from the beginning.
I designed an emotion recognition system which produces a massive number of features. The number of features varies depending on four parameters, averaging around 7000. My observations also depend on the database I use each time; one of them has 132 individuals, so my final feature array is feat_vec(132, 7000). At this point, in order to save time and optimise my system's accuracy, I thought it would be a good idea not to feed the whole array to my classifier, but instead to perform dimensionality reduction on it first. After a couple of weeks of reading, I decided that principal component analysis (PCA) was the best option.
I believe the sequence of the process is: after feature extraction, use cross-validation (5 folds in my case), then perform PCA on the test and training features respectively, and finally feed my training data and test data along with my labels to a classifier.
In order to do what I described above I wrote the following code:
Folds = 5;
PCA_Comp = 50;
Features = Feat_Vec;
Indices = crossvalind('Kfold', Labels, Folds);
for i = 1:Folds
    test = (Indices == i); train = ~test;
    % Fit PCA on the training fold only; fitting it separately on the test
    % fold would produce an incomparable component space (and leak test data).
    [TRcoeff, TRscore] = pca(Features(train,:), 'NumComponents', PCA_Comp);
    % My 1st question was how to calculate the new reduced data arrays:
    % centre the test fold with the training mean, then project it onto
    % the training components.
    TEscore = (Features(test,:) - mean(Features(train,:), 1)) * TRcoeff;
    class = classify(TEscore, TRscore, Labels(train));
end
For my project, I will be using Linear Discriminant Analysis (LDA). The reason I used the above method to classify my data is that I know how to find the accuracy of my system based on 5-fold CV. I also know that discriminant analysis can be performed with this command/code: http://uk.mathworks.com/help/stats/discriminant-analysis.html. My 2nd question is: how do I find the model's accuracy using fitcdiscr(X,Y) instead of the traditional classify(Xte,Xtr,L)?
Thank you for your time,
Costa
Relevant answer
Answer
Hi Konstantinos,
Process with the cross-validation method:
When loading the whole dataset, LDA will train on the features extracted by principal component analysis (i.e., the LDA model is built on the training portion of each split, e.g., 4 of the 5 folds). The model is then tested on the remaining portion of the data.
NB: You could also apply an attribute-selection technique to the extracted features by ranking them with a "Ranker" method. This may enhance the accuracy of your prediction model.
HTH.
Samer
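On the second question, a minimal sketch: fitcdiscr builds the model, crossval partitions it, and kfoldLoss returns the cross-validated misclassification rate, so accuracy is one minus that loss. Here Scores is a placeholder for the PCA-reduced feature array (fitting LDA on all 7000 raw features with 132 observations would be rank-deficient); also note that crossval will not refit PCA inside each fold, so for a fully leak-free estimate the reduction has to be repeated per fold, as in the loop above:
Mdl   = fitcdiscr(Scores, Labels);     % LDA on the reduced features
CVMdl = crossval(Mdl, 'KFold', 5);     % 5-fold cross-validated model
acc   = 1 - kfoldLoss(CVMdl);          % kfoldLoss returns the CV error
fprintf('5-fold CV accuracy: %.2f%%\n', 100 * acc);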
  • asked a question related to Emotion Recognition
Question
3 answers
I'm looking for a test to evaluate emotion recognition (face, voice and body) in children, covering basic emotions rather than social/secondary emotions. I think the DANVA-2 can be used, but I'm not sure whether it includes body recognition, and the pictures may be outdated.
Thanks.
Relevant answer
Answer
Hi, you can search the website
and you will find many tools there that might be useful for your studies. Even though those tools were created for subjects with autism, they can also be used with the general population.
Bye
Luca
  • asked a question related to Emotion Recognition
Question
7 answers
I am working on a facial emotion recognition task, and for that I need a database of facial emotions from a US sample. Specifically, the facial expressions should differ in terms of emotion (defined by valence and arousal).
Relevant answer
Answer
This may help to point you in the right direction.
  • asked a question related to Emotion Recognition
Question
3 answers
We conducted an experiment automatically measuring emotions in the face during three n-back tasks of different difficulty. Often the emotions show fast vacillation, i.e., significant changes in intensity, mostly of low amplitude.
In the literature, oscillating or vacillating emotions are in most cases defined as changes between emotions. In papers that report vacillation within the same emotion, the changes are often not as fast as those we observed (within three seconds or less).
Is there a theory about fast vacillating emotions (i.e., fast changes of amplitude)?
Do you know of good literature with more information (e.g., experiments, theory, observations) about fast vacillating emotions?
Relevant answer
Answer
Hello,
We have done some experiments on objectively quantifying vacillating emotions expressed through facial expressions. You may find this paper useful; it was published at the IEEE International Conference on Automatic Face and Gesture Recognition in 2013, and I have attached it. Let me know.
Swapna Agararwal
Research Scholar
Indian Statistical Institute
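If it helps to put a number on "fast", one simple operationalisation is to count direction reversals of the intensity trace per sliding three-second window. A MATLAB sketch, where intensity and fs are placeholder names for the frame-wise emotion intensity and its sampling rate:
fs        = 30;                          % e.g., 30 video frames per second
winLen    = 3 * fs;                      % three-second window
d         = sign(diff(intensity));       % +1 rising, -1 falling
reversals = abs(diff(d)) > 0;            % frames where the direction flips
rate      = movsum(double(reversals), winLen) / 3;   % reversals per second
High values of rate would flag the fast, low-amplitude vacillation described in the question.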
  • asked a question related to Emotion Recognition
Question
7 answers
Hi all, 
I want to use a chi square test on 2 unequal samples.  Both are in the hundreds so there is no issue with minimal cell count.  I know that unequal sample size is not a problem for chi square test.  However, I'm trying to find a stats book or a published article that I could refer to in order to make this argument in a manuscript.  Anyone know of such a reference?
Relevant answer
Answer
Hi. I would not even mention it or try to justify it; that's just what the test is designed to do. If all the groups were equal, there would be nothing to test.
liz
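To second that: the test statistic already accounts for group sizes through the expected counts. A toy illustration in MATLAB (crosstab is in the Statistics Toolbox; the data here are made up) with groups of 300 and 550:
group   = [repmat({'A'}, 300, 1); repmat({'B'}, 550, 1)];   % unequal n
outcome = randi([0 1], 850, 1);                             % binary response
[tbl, chi2stat, p] = crosstab(group, outcome);              % chi-square test
fprintf('chi2 = %.3f, p = %.3f\n', chi2stat, p);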
  • asked a question related to Emotion Recognition
Question
8 answers
Hello everyone,
I want to know how to detect emotions from a web page and analyse them. Is there any tool available to collect and store emotions?
Please help.
Thanks in advance.
Relevant answer
Answer
Thanks a lot @Valentina :)
Really helpful to me.
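For anyone landing on this thread later: absent a dedicated tool, a rough pipeline is to pull the page text and hand it to any text-based emotion scorer (such as the lexicon count sketched earlier in this feed). A minimal MATLAB fetch, with a placeholder URL:
html = webread('https://example.com/article');   % fetch the raw page
txt  = regexprep(html, '<[^>]*>', ' ');          % crude HTML tag stripping
txt  = lower(strtrim(txt));                      % normalise for scoring
The per-page scores can then be stored, e.g. in a table written out with writetable.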
  • asked a question related to Emotion Recognition
Question
4 answers
Furthermore, can we say that depression is an emotion, or is it something more like a mood?
Relevant answer
Answer
I think that depression should be embedded in robots to avoid humans to mistreat them.
  • asked a question related to Emotion Recognition
Question
4 answers
We are conducting a meta-analysis on the relationship between emotion recognition ability and intelligence. More specifically, we are seeking to investigate how different facets of cognitive intelligence correlate with people’s ability to accurately detect emotions in others and how test and study features moderate this relationship.
If you have conducted any published or unpublished studies in which you administered performance-based measures of both emotion recognition and cognitive intelligence to non-clinical adults, we would be very happy to include this data in our meta-analysis.
In this case, please write the following information:
Names of the emotion recognition and cognitive intelligence tests used (if the tests were custom-built for your study, please provide a short description), the zero-order correlation(s) between the tests, and study characteristics (sample size, mean age, gender composition, ethnic composition, year of the study, publication status – published or unpublished). If available, please also specify the number of items and Cronbach’s alpha of each test. In case you used tests that consist of different subtests (e.g., a vocal and a facial emotion recognition test; a test battery for different facets of intelligence), preferably provide the correlations for the subtests rather than the total scores.
Thank you in advance for your help!
Katja Schlegel and Judith Hall (Northeastern University)
Marianne Schmid Mast and Tristan Palese (University of Lausanne)
Nora Murphy (Loyola Marymount University)
Thomas Rammsayer (University of Berne)
Relevant answer
Answer
Thanks Fadi for pointing me to those papers, I have now requested access to them. Many thanks also to Dmitry and Romina, I have now included your results in our meta-analysis.
  • asked a question related to Emotion Recognition
Question
3 answers
Dear fellow researcher,
In a project, we aim to measure emotions in real time and loop this information back into a gamification module. We plan to integrate mainly body cues and facial expressions, but at least in the lab we also look at GSR, heart rate and brain signals. Can anyone recommend a solution (a software library, development kit, etc., rather than lab software) which integrates such signals and generates emotional states? We are searching for something like the SHORE kit (facial expressions), only with more modalities.
Best, Oliver 
Relevant answer
Answer
Dear Oliver,
I believe that our package, EventIDE (www.okazolab.com), can cover all the functions you need on its own. It 1) can be used to build rich, interactive games (including 3D graphics and VR), 2) ensures millisecond accuracy, and 3) supports acquisition, synchronization and online processing of multiple bio-signals (GSR, HRV, EEG, eye-tracking, etc.). Real-time emotion recognition via a webcam is in development and will be available shortly. You can contact me for a demo at i.korjoukov@okazolab.com
  • asked a question related to Emotion Recognition
Question
4 answers
In emotion recognition from static images, we apply Gabor filters to each face part. The result is a set of response matrices across wavelengths and orientations.
How do we construct the feature vector from these Gabor filter responses?
Relevant answer
Answer
Yes, the vector will be very long (in one case I had a vector of 32 thousand features from another descriptor); you then apply feature reduction, e.g. PCA, LDA, etc.
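A sketch of the usual construction, assuming a grayscale face image I and the Image Processing Toolbox: build a filter bank over the wavelengths and orientations, take the magnitude responses, downsample to tame the dimensionality, and concatenate everything into one row vector (feature reduction such as PCA then follows, as noted above):
wavelengths  = [4 8 16];
orientations = 0:45:135;
bank = gabor(wavelengths, orientations);   % 3 x 4 = 12 filters
mag  = imgaborfilt(I, bank);               % H x W x 12 magnitude responses
mag  = mag(1:4:end, 1:4:end, :);           % downsample each response by 4
featureVector = mag(:)';                   % concatenate into one row vector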
  • asked a question related to Emotion Recognition
Question
6 answers
I want to compare our results with existing systems for emotion detection in text. Can I get some recent papers on emotion detection in text?
Relevant answer
Answer
Yes, it is specific to your algorithm, data set and domain.
All the best
  • asked a question related to Emotion Recognition
Question
15 answers
I am looking for a dataset that contains a list of phrases and the emotion associated with each. For example, x="what the hell just happened", y='surprise'; and x="no one loves me", y='sad'; etc.
Please help; it's kind of urgent.
Thank you.
Relevant answer
Answer
  • asked a question related to Emotion Recognition
Question
10 answers
Assume we have these types of emotion:
1- Hate 2- Hope 3- Fear 4- Love 5- Dislike 6- Relief 7- Anger 8- Admiration 9- Shame 10- Disappointment 11- Resent 12- Joy 13- Like 14- Sadness
What events or actions would change any of these emotions to "Admiration"?
I would appreciate it if you support your answer with authentic references.
Relevant answer
Answer
Mani, thanks to your response, I've been giving my own comments some thought.
Your question is a challenge to me, not so much in the task of giving examples as in answering it consistently with my world view and conceptual framework (Rogers, M. 1965, Science of Unitary Human Beings). Thus I may not be much help except to give grist for the mill. I admire you for pursuing this discussion; it gives me pleasure.
How do we reach the admiration state from cognition and emotion (or is it emotion and cognition?)? The operative word is "and", since there is simultaneity there.
The best I can offer right now is that (with extant knowledge) the elements involved include:
  • observation
  • appraisal
  • emotion
  • passage of time
Any way one would like to order or sequence them is correct. They travel in a pattern much like a "Slinky" toy, in expanding, evolving concentric circles. Things are not linear; we think in linear ways because we crave order.
Every experience is unique to each human being. We do not know how any single person gets to a feeling. All we can do is come to some logical theory- based conclusions that will add to the body of knowledge.
Again I admire you. Your questions elicited many feelings. 
  • asked a question related to Emotion Recognition
Question
5 answers
OpenCV is normally used for face recognition applications. I want to implement an emotion recognition algorithm on top of OpenCV. How do I do that? Also, which version of OpenCV is suitable for Windows 7?
Relevant answer
Answer
Dear Geetha Venkat,
Whenever you hear the term face recognition, you instantly think of surveillance videos, so performing face recognition in video (e.g., from a webcam) is one of the most requested features. Here is an application that shows you how to do face recognition in videos: for the face detection part it uses the CascadeClassifier, and FaceRecognizer for the recognition itself. The example uses the Fisherfaces method for face recognition because it is robust against large changes in illumination.
Regards, Shafagat
  • asked a question related to Emotion Recognition
Question
7 answers
I need to detect small emotional responses to audio stimuli. Unfortunately, I don't have access to MRI or other complex technologies to track emotional changes, so I decided to use an eye-tracker, but I am not sure it will give any useful information.
Relevant answer
Answer
Hey Ana,
you could use pupil size (more exactly, its change) as a signal for the participant's arousal. If you have an emotional and a neutral stimulus category, this could serve as a very coarse measure of emotion.
Another option would be to use EMG: record the "frowning" and "smiling" muscles (corrugator and zygomaticus). This might give you a better measure, and it is also rather simple to set up.
Greetings, David
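To make the first suggestion concrete, a baseline-corrected pupil trace is easy to compute. A sketch where pupil holds one trial's diameter samples at fs Hz and the first second is pre-stimulus baseline (all names are placeholders):
fs       = 60;                                   % eye-tracker sampling rate
baseline = mean(pupil(1:fs));                    % first second, pre-stimulus
change   = (pupil - baseline) / baseline * 100;  % percent signal change
arousal  = mean(change(fs+1:end));               % mean evoked change per trial
Comparing arousal between emotional and neutral stimulus categories gives the coarse measure described above.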
  • asked a question related to Emotion Recognition
Question
4 answers
I am new to counseling clients with schizoaffective disorder. I notice they have limited insight, an impoverished model of self, and frequently go off on perseverative tangents. I am looking for ways to hold their attention and boost their level of insight and self-awareness.
Relevant answer
Answer
Hello Harvey, 
In addition to Stephen's great suggestions, please see the attached article entitled "Individualizing Treatment for Patients With Schizoaffective Disorder" (author: Professor Eduard Vieta, MD, PhD). Hope it helps.
Best wishes, 
Julio 
  • asked a question related to Emotion Recognition
Question
3 answers
I have provided links to two separate YouTube videos, each containing three short excerpts of music. One video focuses on orchestral scored music, whilst the other takes an ambient/soundscape approach to production.
I am looking to gain feedback on whether the work truly reflects the emotions intended. Feel free to leave comments within the YouTube comments box in order to help me improve my work.
Kind Regards
Jamie
Relevant answer
Answer
My subjective opinion is that the orchestral pieces are fairly good stimuli for the emotions you've labeled them with. However, as Yevgen points out, the feelings evoked change after a short time as the themes in the music change.
The ambient music seems less effective: I don't see the 'wonder and anticipation' clip as conveying anticipation because it doesn't change much, and it's also not the kind of music that I would expect to change much. The 'expression of joy' clip seems more about movement / drive with a slight positive valence to it. Finally, the 'loss' clip was quite pleasant and calming for me, and didn't convey any negative sentiment at all - you could try exploring a more minor tonality to evoke sadness, or more dissonance to evoke tension. I also think the 'muffled' sound of the ambient clips tended to dampen the emotional impact.
I found myself comparing these clips to movie soundtracks, imagining what the music might convey about the characters or the action of the scenes. You could potentially find scenes from movies that convey the emotions you are looking for, and then use the music in those scenes as inspiration for the music you create.
How do you intend to use the musical clips? As stimuli in a study? Perhaps in a therapy session? Also of interest, did you create the music yourself?
  • asked a question related to Emotion Recognition
Question
7 answers
Thank you so much!
Relevant answer
Answer
I think you should use the Revised UCLA Loneliness Scale (the 20-item revised version).
  • asked a question related to Emotion Recognition