Research Article
Recognition of the Impulse of Love at First Sight Based on
Electrocardiograph Signal
Jin Zhang,¹ Guangjie Yuan,² Huan Lu,³ and Guangyuan Liu¹,²,³

¹College of Electronic and Information Engineering, Southwest University, Chongqing, China
²Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
³Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
Correspondence should be addressed to Guangyuan Liu; liugy@swu.edu.cn
Received 10 December 2020; Revised 19 February 2021; Accepted 10 March 2021; Published 24 March 2021
Academic Editor: Fivos Panetsos
Copyright ©2021 Jin Zhang et al. is is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The impulse of love at first sight (ILFS) is a well-known but rarely studied phenomenon. Despite the privacy of these emotions,
knowing how attractive one finds a partner may be beneficial for building a future relationship in an open society, where partners
accept each other. Therefore, this study adopted the electrocardiograph (ECG) signal collection method, which has been
widely used in wearable devices, to collect signals and conduct the corresponding recognition analysis. First, we used photos to induce
the ILFS and obtained ECG signals from 46 healthy students (24 women and 22 men) in a laboratory. Second, we extracted the time-
and frequency-domain features of the ECG signals and performed a nonlinear analysis. We subsequently used a feature selection
algorithm and a set of classifiers to classify the features. Combined with the sequence floating forward selection and random forest
algorithms, the identification accuracy of the ILFS was 69.07%. The sensitivity, specificity, F1, and area under the curve
were all greater than 0.6. The classification of ECG signals according to their characteristics demonstrated that the
signals could be recognized. Through the information provided by the ECG signals, it can be determined whether a participant
possesses the desire to fall in love, helping to identify a suitable partner quickly; this is conducive to establishing a
romantic relationship.
1. Introduction
The impulse of love at first sight (ILFS) is a significant initial
attraction [1], that is, a strong desire to relate with another
person, and is a complex phenomenon that includes eval-
uation, appreciation, and subjective experience of physio-
logical changes. ILFS can be observed in many literary and
artistic works. In real life, the concept of ILFS is accepted by
most people. For example, approximately one-third of
westerners report that they have experienced ILFS [2].
Moreover, studies have observed that ILFS can affect rela-
tionships [3, 4]. A relationship between couples involving
the ILFS is more passionate, causing the relationship to be
more stable and satisfying [5]. Vico et al. [6] observed that, in
the presence of a favorite face, heart rate and skin conductance
activity increase, along with valence and arousal ratings, while
the dominance rating decreases. Fisher [7] found
that the psychological responses of ILFS include excitement,
increased energy, tremor, rapid heartbeats, and shortness of
breath. Nevertheless, almost no research has been conducted
on recognizing the ILFS. In general, the ILFS is a type of
emotional state that can be studied by referring to previous
methods of emotion recognition.
In recent years, physiological signals, such as electro-
encephalograms [8], electrocardiograms (ECGs) [9–11],
electromyography [12], photoplethysmography [13], galvanic
skin response [14], and respiration, have been widely
applied in the field of emotion recognition. On the one hand,
behavioral data (such as facial expressions and body pos-
tures) and voice data are easily manipulated by subjective
consciousness [15]; on the other hand, physiological signals
are real-time and continuous signals that can be used to
better analyze the expression and conversion between dif-
ferent emotional states. Among these physiological signals,
emotion recognition using ECG signals has become an
important topic in the field of emotion computing. First,
ECG signal-derived features, such as heart rate (HR) and
heart rate variability (HRV), have been observed as reliable
physiological indicators of emotion recognition [16, 17]. For
example, Kreibig [18] demonstrated that happiness results in
a reduction in HRV while joy and entertainment increase
HRV. Research by Lichtenstein et al. [19] indicated that
there are significant differences in HRV corresponding to
anger and happiness, anger and satisfaction, and sadness and
happiness. In addition, Rainville [20] demonstrated that HR
and HRV characteristics can be used to distinguish four
emotions: anger, fear, happiness, and sadness. Second, ECG
signals have been widely used for emotion recognition owing
to the low cost, portability, wearability, and wireless ad-
vantages of ECG devices. Karthikeyan et al. [21] distin-
guished between relaxed and stressed states using ECG
signals and achieved a classification accuracy of 94.6%. Guo
et al. [22] extracted HRV features from ECG signals and used
support vector machines (SVMs) to classify different
emotional states. e results demonstrated that the two
emotional states (positive/negative) attained 71.4% accuracy.
Castaldo et al. [23] evaluated the potential of stress detection
using an ultra-short-term HRV analysis. e experimental
results showed that the sensitivity, specificity, and accuracy
of classification surpassed 60% using ultra-short-term HRV
features for classification. Hsu et al. [15] proposed an ECG-
based automatic emotion recognition algorithm. e clas-
sification accuracy of positive/negative valence, high/low
arousal, and three types of emotions (joy, sadness, and
peacefulness) using a least-squares SVM were 82.78%,
72.91%, and 61.25%, respectively.
In addition, in the study of emotion recognition, pictures
[24], music [25], movies [26], and text [27] are frequently
used to elicit emotions. is study examines ILFS when two
people meet each other. Conducting a speed dating scenario
with hundreds of participants in a laboratory environment is
not feasible. Moreover, the ILFS studied in this study can be
generated in a very short time. erefore, in this study, we
used images to induce ILFS and used ECG signals to classify
and recognize ILFS. Also, we designed an accurate experi-
ment to collect ECG signals from participants during the
viewing period. Subsequently, we developed an automatic
ILFS recognition algorithm to detect the Rwave, generate
important features related to the ILFS, and effectively
identify the ILFS.
The remainder of this paper is organized as follows:
Section 2 describes the experimental setup and protocol. The
proposed ECG-based ILFS recognition algorithm is intro-
duced in Section 3. Section 4 presents the results and cor-
responding discussion. Section 5 presents the conclusions of
this study.
2. Experimental Setup
2.1. Experiment Material. In this study, various factors were
comprehensively considered to select photos as the stimulus
material; 800 photos of smiling men and women were
purchased and downloaded from a photo website. Subse-
quently, these photos were cropped into bust photos with
uniform properties, for example, size, brightness, and
resolution.
The uniformly processed pictures were then scored, and the formal test
materials were selected. Psychologists have shown that
facial attractiveness is strongly linked to the ILFS [28, 29]:
each one-level increase in rated attractiveness increases the
likelihood of the ILFS by a factor of nine.
Therefore, 60 college students (30 men and 30 women) with
no color blindness or physical/mental health problems were recruited
from Southwestern University to evaluate the facial
attractiveness of photos of the opposite sex; they were asked to
rate facial attractiveness subjectively on a scale of 1
(not at all) to 9 (extremely). We then selected 240 male and
240 female photos from those evaluated as the material used to
induce the ILFS (high attraction : average : low
attraction = 0.25 : 0.6 : 0.15).
2.2. Participants. e researchers recruited 46 healthy
Southwestern University students (24 women and 22 men;
mean age, 19.7 ±1.6 years). e participants were required to
abstain from vigorous exercise for 2 h before the experiment
to avoid a rapid heart rate, which would affect the experi-
mental data and results. However, owing to equipment
problems, the data of the three students were not used.
All participants provided written informed consent.
Before data collection, all methods were approved by the
Human Ethics Research Committee of Southwestern
University.
2.3. Experimental Context. is experiment was divided into
two sessions (two sessions were performed at least one day
apart). Each session contained 120 stimuli. Each session had
two blocks and each block contained 60 stimulus materials.
In the experiment, the presentation time of each stimulus
material was 10 s and the participants were evaluated
according to their emotional state. After each block, a
neutral landscape and a piece of light music were presented
for 4 min. e experimental paradigm is illustrated in
Figure 1.
At the beginning of the experiment, the subjects sat
quietly in a chair and their bodies were in a state of natural
relaxation. e corresponding picture stimulus materials
were then presented according to the written emotion-in-
duced experimental paradigm to induce the ILFS. After the
subjects watched the stimulus materials, they performed the
emotion induction evaluation and subjectively reported the
ILFS induction intensity for each stimulus material, in the
range of 0 (none) to 3 (extreme). ECG signals were collected
using an MP150 system and the sampling frequency was set
to 1000 Hz. After the experiment was completed, the subjects
were asked to look at the pictures again and subjectively
report their arousal, valence, dominance, and attraction, in
the range of 1 to 7. e self-report rating scale used here was
a Likert table [30].
3. Methodology
In summary, ECG signals were recorded for 46 participants
observing 240 pictures of the opposite sex. Subsequently, the
ECG signals were preprocessed to remove the interference
and noise. After noise removal, feature extraction was
performed on the signals, and the time-domain, frequency-domain,
and nonlinear features of the ECG signal were extracted.
After extracting these statistical features (indices), we
employed a feature selection algorithm to reduce the feature
dimensions, thereby reducing the computational cost. Finally,
different classifiers were used for emotion classification.
The frame diagram of the state recognition of the ILFS
is shown in Figure 2.
3.1. Preprocessing. Before preprocessing, the ECG signal was
downsampled to 200 Hz.
The ECG signal is a weak, nonstationary signal that is easily
contaminated by interference from the body itself and the external
environment; this interference and noise may conceal useful
information. Before feature extraction, the original ECG signal
must therefore be preprocessed. ECG recordings frequently include baseline
drift below 1 Hz, power-line interference at 50 Hz, and
electromyographic interference. During preprocessing, a
discrete wavelet transform, a common method for removing
noise [31], was used. The original ECG signal was
decomposed with the discrete wavelet transform, the approximation
coefficients and detail coefficients of each level were
extracted, and a soft-threshold function was applied to the
detail coefficients. Subsequently, a denoised ECG
signal was reconstructed.
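As a hedged illustration of this preprocessing step, the sketch below shows how such a wavelet-based denoising pass could be implemented in Python with NumPy, SciPy, and PyWavelets. The mother wavelet ('db6'), the decomposition depth, the universal-threshold rule, and zeroing the approximation band to suppress baseline drift are illustrative assumptions, not the authors' reported settings.

```python
# Sketch of DWT-based ECG denoising with soft thresholding of detail coefficients.
# Assumptions: 'db6' wavelet, 6 decomposition levels, universal threshold.
import numpy as np
import pywt
from scipy.signal import decimate

def denoise_ecg(ecg_1000hz):
    # Downsample from 1000 Hz to 200 Hz, as described in Section 3.1.
    ecg = decimate(ecg_1000hz, 5, zero_phase=True)

    # Multilevel discrete wavelet decomposition.
    coeffs = pywt.wavedec(ecg, wavelet="db6", level=6)
    approx, details = coeffs[0], coeffs[1:]

    # Soft-threshold each detail band; the noise level is estimated from
    # the finest-scale coefficients (universal threshold).
    sigma = np.median(np.abs(details[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(ecg)))
    details = [pywt.threshold(d, thr, mode="soft") for d in details]

    # Zero the coarsest approximation to attenuate baseline drift (< ~1.5 Hz here).
    approx = np.zeros_like(approx)

    # Reconstruct the denoised signal (trim the possible extra sample).
    return pywt.waverec([approx] + details, wavelet="db6")[: len(ecg)]
```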
The denoised ECG signal was divided into 10 s segments,
with the onset of each stimulus material as the
starting point. Subsequently, the Pan–Tompkins peak
detection algorithm was used to locate the R-wave peaks and
obtain the RR intervals [32]. HRV parameters can then be obtained
through feature extraction on the RR intervals. HRV is
a reliable marker of autonomic nervous system activity
and reflects the beat-to-beat variation of the heart period [33].
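A minimal sketch of the segmentation and R-peak detection step is given below. It uses a simplified Pan–Tompkins-style pipeline (band-pass filter, differentiation, squaring, moving-window integration, peak picking); the filter band, the fixed detection threshold, and the refractory period are illustrative assumptions rather than the exact parameters used in the study.

```python
# Simplified Pan-Tompkins-style R-peak detection on a 10 s ECG segment (200 Hz).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 200  # sampling rate in Hz after downsampling

def rr_intervals(segment):
    # 1) Band-pass filter around the QRS energy band (~5-15 Hz, assumed).
    b, a = butter(2, [5 / (FS / 2), 15 / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, segment)

    # 2) Differentiate, square, and integrate over a ~150 ms moving window.
    diff = np.diff(filtered)
    squared = diff ** 2
    win = int(0.150 * FS)
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")

    # 3) Peak picking with a 250 ms refractory period and an adaptive height.
    peaks, _ = find_peaks(integrated,
                          distance=int(0.250 * FS),
                          height=0.35 * integrated.max())

    # 4) RR intervals in milliseconds.
    return np.diff(peaks) / FS * 1000.0

def segment_at_onsets(ecg, onsets_s, fs=FS, length_s=10):
    # Split a denoised recording into 10 s windows starting at each stimulus onset
    # (onsets_s: stimulus presentation times in seconds, from the experiment log).
    return [ecg[int(t * fs): int((t + length_s) * fs)] for t in onsets_s]
```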
3.2. Feature Extraction. In this study, twenty-five features
were extracted from the ECG signals, including HRV
time-domain, frequency-domain, and nonlinear characteristics;
details of these features are presented in
Table 1.
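To make the feature definitions in Table 1 concrete, the sketch below computes a representative subset of the listed time-domain, nonlinear (Poincaré), and frequency-domain HRV features from an RR-interval series. The 4 Hz resampling rate before the Welch PSD and the cubic interpolation are illustrative assumptions; the paper does not report its exact spectral-estimation settings.

```python
# Sketch: a representative subset of the HRV features in Table 1,
# computed from an RR-interval series given in milliseconds.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_features(rr_ms):
    rr = np.asarray(rr_ms, dtype=float)
    drr = np.diff(rr)
    feats = {
        "Mean_RR": rr.mean(),                                  # 1
        "CVRR": rr.std(ddof=1) / rr.mean(),                    # 2
        "SDRR": rr.std(ddof=1),                                # 3
        "RMSSD": np.sqrt(np.mean(drr ** 2)),                   # 4
        "MSD": np.mean(np.abs(drr)),                           # 5
        "SDSD": drr.std(ddof=1),                               # 6
        "NN50": int(np.sum(np.abs(drr) > 50)),                 # 7
        "PNN50": np.mean(np.abs(drr) > 50) * 100,              # 8
        "NN20": int(np.sum(np.abs(drr) > 20)),                 # 9
        "PNN20": np.mean(np.abs(drr) > 20) * 100,              # 10
        "Mean_HR": 60000.0 / rr.mean(),                        # 11
        "QD": (np.percentile(rr, 75) - np.percentile(rr, 25)) / 2,  # 12
        # Poincare-plot descriptors (13-15).
        "SD1": np.sqrt(0.5) * drr.std(ddof=1),
        "SD2": np.sqrt(max(2 * rr.std(ddof=1) ** 2
                           - 0.5 * drr.std(ddof=1) ** 2, 0.0)),
    }
    feats["SD1_SD2"] = feats["SD1"] / feats["SD2"]

    # Frequency-domain features (20-25): resample the RR series evenly
    # (assumed 4 Hz) and estimate the PSD with Welch's method.
    t = np.cumsum(rr) / 1000.0
    fs_interp = 4.0
    t_even = np.arange(t[0], t[-1], 1.0 / fs_interp)
    rr_even = interp1d(t, rr, kind="cubic")(t_even)
    f, psd = welch(rr_even - rr_even.mean(), fs=fs_interp, nperseg=len(rr_even))
    band = lambda lo, hi: np.trapz(psd[(f >= lo) & (f < hi)], f[(f >= lo) & (f < hi)])
    lf, hf = band(0.04, 0.15), band(0.15, 0.40)
    feats.update({"TP": lf + hf, "LF": lf, "HF": hf, "LF/HF": lf / hf,
                  "nLFP": lf / (lf + hf), "nHFP": hf / (lf + hf)})
    return feats
```

Note that on a single 10 s segment only a handful of RR intervals are available, so the spectral estimates are ultra-short-term approximations, as discussed for HRV analysis in [23].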
3.3. Construction of ILFS and Non-ILFS Datasets. Before
constructing the datasets, we first removed the abnormal
data, which would have affected the classification results.
The median absolute deviation (MAD) algorithm can
effectively remove outliers from the data [34]. The MAD and
the outlier-removal criterion are given by the following
equations, respectively:

$$\mathrm{MAD} = \mathrm{median}_i\left(\left|x_i - \mathrm{median}_j\left(x_j\right)\right|\right), \qquad (1)$$

$$\begin{cases} x_i < \mathrm{median}\left(x_i\right) - 5 \times \mathrm{MAD}, \\ x_i > \mathrm{median}\left(x_i\right) + 5 \times \mathrm{MAD}, \end{cases} \qquad (2)$$

where $x_j$ is one of the $n$ sample values and $\mathrm{median}_i$ is the
median of the series; samples satisfying either condition in (2) were treated as outliers and removed.
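A compact sketch of this MAD-based screening, following equations (1) and (2), is shown below. Whether the screening was applied per feature column or per participant is not specified in the text, so the per-column form here is an assumption.

```python
# Sketch of MAD-based outlier removal per equations (1) and (2):
# samples farther than 5 x MAD from the median are discarded.
import numpy as np

def remove_outliers_mad(x, k=5.0):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))     # equation (1)
    keep = np.abs(x - med) <= k * mad    # complement of the conditions in (2)
    return x[keep], keep

# Usage on one feature column (e.g., Mean_RR values across all trials):
# cleaned, mask = remove_outliers_mad(feature_matrix[:, 0])
```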
Figure 1: Experimental paradigm. Each of the two sessions contained 120 stimuli, presented in two blocks of 60 stimuli (a mixture of high-attraction, average, and low-attraction photos) separated by rest periods.
By summarizing previous studies on the ILFS
[6, 7, 35], we consider that the ILFS exhibits the characteristics
of high arousal, high valence, high attractiveness,
and high dominance. Therefore, combining the two
evaluations in the experiment, the data with high arousal,
high valence, high attractiveness, and high dominance were
screened from the ILFS data (data with an ILFS intensity
rating of 1 were not used) as the dataset of ILFS states. In
addition, for the non-ILFS data, data with low arousal, low
valence, low attractiveness, and low dominance were
selected from the trials without the ILFS as the non-ILFS
dataset.
3.4. Feature Selection. A feature selection algorithm can
remove redundant features and reduce the quantity of data,
thereby improving the classification accuracy and significantly
reducing the computational cost [36]. We thus selected the
sequence floating forward selection (SFFS) algorithm.
Table 1: ECG feature descriptions.
Number Symbol Feature description
Time-domain features
1 Mean_RR Mean of RR intervals
2 CVRR Coefficient of variation of RR intervals
3 SDRR Standard deviation of RR intervals
4 RMSSD Root mean square of successive differences of RR intervals
5 MSD Mean of the absolute values of the first differences of RR intervals
6 SDSD Standard deviation of successive differences of RR intervals
7 NN50 Number of successive RR-interval differences greater than 50 ms
8 PNN50 Percentage of NN50 relative to the total number of RR intervals
9 NN20 Number of successive RR-interval differences greater than 20 ms
10 PNN20 Percentage of NN20 relative to the total number of RR intervals
11 Mean_HR Average heart rate
12 QD Quartile deviation of RR intervals
Nonlinear features
13 SD1 Standard deviation along the transverse (T) direction of the Poincaré plot
14 SD2 Standard deviation along the longitudinal (L) direction of the Poincaré plot
15 SD1_SD2 SD1/SD2
16 CSI Cardiac sympathetic index
17 CVI Cardiac vagal index
18 modified_CSI Modified CSI
19 LZC Lempel–Ziv complexity
Frequency-domain features
20 TP Power in the 0.04–0.4 Hz range of the PSD of RR intervals
21 LF Power in the 0.04–0.15 Hz range of the PSD of RR intervals
22 HF Power in the 0.15–0.4 Hz range of the PSD of RR intervals
23 LF/HF Ratio of LF to HF
24 nLFP Ratio of LF to LF + HF
25 nHFP Ratio of HF to LF + HF
Figure 2: Frame diagram of the state recognition of the ILFS: (a) data acquisition (picture stimuli, ECG sensor); (b) data preprocessing and waveform detection; (c) feature processing (feature extraction and feature selection); (d) emotion recognition (classification model, impulse-of-love recognition).
The SFFS algorithm selects an optimal feature
subset as the classification input and can solve the local
optimization problem of the feature set to a certain extent
[37].
SFFS combines the sequential forward selection (SFS) and
sequential backward selection (SBS) algorithms. The SFFS
has three parts: inclusion, conditional exclusion, and
termination.
First, let $F_k = \{f_i : 1 \le i \le k\}$ be a feature subset composed of $k$ features selected from the original feature set $Y = \{y_i : 1 \le i \le n\}$, where $n$ is the total number of features. The evaluation function of the optimal feature subset is $J(\cdot)$.

Step 1 (Inclusion). Beginning from the empty set $F_0$, use the SFS method to select the most significant feature $f^{+}$ from $Y - F_k$ and add it to $F_k$ to form a new feature subset $F_{k+1} = F_k + f^{+}$. Set $k = k + 1$ and execute Step 2.

Step 2 (Conditional exclusion). Determine the least significant feature $f^{-}$ in $F_{k+1}$. If $J(F_{k+1} - f^{-}) > J(F_k)$, delete $f^{-}$ from $F_{k+1}$ to form a new feature subset $F_k' = F_{k+1} - f^{-}$ and then perform Step 3; otherwise, if $J(F_{k+1} - f^{-}) \le J(F_k)$, return to Step 1.

Step 3 (Termination). Set $k = k - 1$; if $k$ equals the expected number of features, stop. Otherwise, set $F_k = F_k'$, $J(F_k) = J(F_k')$, and return to Step 1.
In this study, two nested 10-fold cross-validation
schemes were used to obtain reliable model estimates for
feature selection and model training [35]. The best feature
subset was selected in the inner loop. In the outer loop, the
classifier was evaluated with 10-fold cross-validation using
the selected best feature subset.
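As a hedged sketch of how this feature selection and nested evaluation could be wired together, the example below uses the floating forward selector from mlxtend inside an outer 10-fold loop with a random forest. The use of mlxtend, the `k_features="best"` stopping rule, and the forest size are assumptions on our part; the paper does not name its implementation.

```python
# Sketch: SFFS feature selection inside a nested 10-fold cross-validation.
# The mlxtend selector and the random forest settings are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

def nested_sffs_evaluation(X, y, n_outer=10, n_inner=10):
    outer = StratifiedKFold(n_splits=n_outer, shuffle=True, random_state=0)
    accuracies = []
    for train_idx, test_idx in outer.split(X, y):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)

        # Inner loop: SFFS (forward + floating) picks the feature subset with
        # the best inner 10-fold cross-validated accuracy.
        sffs = SFS(clf,
                   k_features="best",
                   forward=True,
                   floating=True,
                   scoring="accuracy",
                   cv=n_inner)
        sffs.fit(X[train_idx], y[train_idx])
        cols = list(sffs.k_feature_idx_)

        # Outer loop: evaluate the selected subset on the held-out fold.
        clf.fit(X[train_idx][:, cols], y[train_idx])
        accuracies.append(clf.score(X[test_idx][:, cols], y[test_idx]))
    return float(np.mean(accuracies))
```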
4. Results and Discussion
This section presents a series of results (feature analysis and
classification results) to evaluate the effectiveness of the
proposed approach. In addition, the results are discussed in
detail.
4.1. Feature Analysis. We first evaluated whether the ECG
features listed in Table 1 differed significantly between the
ILFS and non-ILFS data samples. The Wilcoxon signed-rank
test is one of the most widely used nonparametric rank-sum test
methods for two independent groups [38]. A p value of less
than 0.05 indicates that a significant difference exists between
the ILFS and non-ILFS states. As Figure 3 illustrates, the
results of the Wilcoxon test indicated that the differences between
the ILFS and non-ILFS states for features #3, #7, #8, #9, #12, #17,
#20, and #21 were not significant. Consistent with the results
in [6, 7], the Wilcoxon test result for feature #11 indicates that
the ILFS state exhibits a higher heart rate.
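The per-feature comparison could be reproduced with SciPy roughly as sketched below. The paper's wording leaves it ambiguous whether the ILFS and non-ILFS samples were treated as paired or independent, so the independent-samples rank-sum form used here is an assumption.

```python
# Sketch: feature-wise Wilcoxon rank-sum comparison between the ILFS and
# non-ILFS feature matrices (rows = trials, columns = the 25 features of Table 1).
import numpy as np
from scipy.stats import ranksums

def featurewise_wilcoxon(X_ilfs, X_non_ilfs, alpha=0.05):
    # p value for each of the 25 feature columns.
    p_values = np.array([ranksums(X_ilfs[:, j], X_non_ilfs[:, j]).pvalue
                         for j in range(X_ilfs.shape[1])])
    # 1-based indices of features with a significant difference (Figure 3).
    significant = np.flatnonzero(p_values < alpha) + 1
    return p_values, significant
```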
Although some features are not significantly different
between the ILFS and non-ILFS states, the classification
performance can be significantly improved when used in
combination with other features [39]. erefore, we used 25
heartbeat feature vectors to represent each sample in the
ILFS and non-ILFS state datasets.
Feature selection involves selecting the fewest features
without degrading the classification performance. Thus, in this study,
10-fold cross-validation schemes based on the SFFS algorithm
were used for feature selection. The number of features
was varied from 1 to 25 during training and the best feature
subset was selected. Figure 4 shows the accuracy of the five
classifiers for different numbers of selected features. The
features corresponding to the maximum accuracy of each
classifier were used as the optimal feature subset of
that classifier.
Table 2 lists the best feature subsets of the different
classifiers. It can be seen from Table 2 that feature #1
(Mean_RR) appears in the best feature subset of most classifiers
and that its value is reduced in the ILFS state. This is consistent
with the literature [7, 35]: the ILFS produces physiological
reactions such as excitement and a rapid heartbeat, that is, an
increased heart rate in the excited state and therefore a reduced
average RR interval.
Figure 3: Feature p values between the ILFS and non-ILFS states for features #1–#25 (reference line at p = 0.05).
4.2. Classification Result. In this research, the ILFS and non-
ILFS samples were classified using a set of widely used
classifiers, such as SVM, random forest (RF), and naive
Bayes (NB). In addition, sensitivity (Se), specificity (Sp), F1-
score (F1), area under the curve (AUC), accuracy (ACC),
and other parameters were used to evaluate the performance
of the classification scheme. Table 3 and Figure 5 present the
classification performance of five classifiers without feature
selection for ECG signals. Among these classifiers, RF ex-
hibits the best classification accuracy, with a result of 66.04%.
The other classifiers also recognized the ILFS, with
classification accuracies of approximately 60%. The parameters Se, Sp, F1,
and AUC of the classifiers were all approximately 0.6.
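A sketch of how such figures could be produced is given below: each classifier is evaluated with 10-fold cross-validation and the Se, Sp, F1, AUC, and ACC values are computed from the pooled out-of-fold predictions. The specific hyperparameters and the pooling strategy are illustrative assumptions, not the paper's reported settings.

```python
# Sketch: 10-fold cross-validated evaluation of the five classifiers with the
# metrics reported in Tables 3 and 4 (Se, Sp, F1, AUC, ACC).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import (accuracy_score, f1_score, roc_auc_score,
                             recall_score, confusion_matrix)

CLASSIFIERS = {
    "SVM": SVC(kernel="rbf", probability=True),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "NB": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=0),
}

def evaluate(X, y):
    # X: feature matrix (trials x features); y: 1 for ILFS, 0 for non-ILFS.
    results = {}
    for name, clf in CLASSIFIERS.items():
        y_pred = cross_val_predict(clf, X, y, cv=10)
        y_prob = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
        tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
        results[name] = {
            "Se": recall_score(y, y_pred),   # sensitivity (recall of the ILFS class)
            "Sp": tn / (tn + fp),            # specificity
            "F1": f1_score(y, y_pred),
            "AUC": roc_auc_score(y, y_prob),
            "ACC": accuracy_score(y, y_pred),
        }
    return results
```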
During the analysis presented in the previous section, the
optimal feature subset of the classifier was obtained based on
the SFFS algorithm and the optimal feature subset was used
to evaluate the classifier using 10-fold cross-validation.
Table 4 and Figure 6 show the classification performance of
the five classifiers after feature selection. The results demonstrate
that the highest accuracy rate of 69.07% was obtained
with the RF classifier, for which features #1, #3, #8, #12, and #24
constituted the best feature subset; the parameters Se, Sp, F1,
and AUC of the RF classifier were better than those of the
other classifiers. However, the corresponding parameters of all
classifiers were greater than 0.6, indicating that the ILFS can be
classified and recognized.
Figure 7 shows that, after feature selection, the
classification performance of all five classifiers improved. In addition, it
can be seen that the SFFS feature selection method combined
with the RF classifier is optimal for identifying the ILFS
compared with the other machine learning configurations.
In previous studies, few researchers have examined the
mapping pattern between the ILFS and physiological signals.
Therefore, in this paper, a study on the classification and
recognition of the ILFS based on ECG signals is proposed,
that is, the use of an ECG signal to identify whether someone
is in a state of ILFS.
Figure 4: Accuracy of the five classifiers (RF, KNN, SVM, NB, and DT) for different numbers of selected features (1–25).
Table 2: e best feature subsets of five classifiers.
Classifier Selected features
SVM 1, 3, 8, 15, 14
RF 1, 3, 8, 12, 24
NB 1, 2, 11
KNN 4, 7, 8, 12
DT 1, 8, 24
Table 3: Classification performance of the five classifiers without
feature selection.
Classifier Se Sp F1 AUC ACC
SVM 0.7103 0.5305 0.6512 0.6209 0.6221
RF 0.6984 0.6186 0.6717 0.6616 0.6604
NB 0.5 0.6363 0.5513 0.6017 0.5988
KNN 0.5949 0.6363 0.6037 0.62 0.6128
DT 0.6001 0.6037 0.5977 0.6122 0.6011
SVM: support vector machine; RF: random forest; NB: naive Bayes; KNN:
K-nearest neighbor; DT: decision tree.
Figure 5: Classification performance (Se, Sp, F1, AUC, and ACC) of the five classifiers without feature selection.
is in a state of ILFS. e best classification accuracy rate was
66.04% for all signal characteristics. With the SFFS feature
selection algorithm, the best classification accuracy in-
creased to 69.03%. e experimental results show that ILFS
can be classified and identified based on ECG signals;
however, the recognition and classification of the ILFS based
on ECG signals are not very accurate. e following may be
factors that affect the classification of ILFS:
(1) The recognition performance for the ILFS depends on the
classifier used; selecting a more advanced classification
algorithm could improve the classification performance.
(2) The ILFS is highly related to the subjects’ aesthetic
preferences, and the emotional intensity induced by
the selected stimulus photos may have been insufficient.
(3) The ILFS is a complex emotional state, and accurately
reflecting changes in the ILFS using only an ECG
signal is difficult.
Therefore, in future research, for better classification and
recognition of the ILFS, it is necessary to (1) identify a
more advanced classification algorithm, (2) use different
stimuli (e.g., video) to induce a higher intensity of the ILFS,
and (3) use a variety of physiological signals.
5. Conclusions
This study attempted to identify the ILFS based on ECG
signals. Our research demonstrated that the ILFS is separable.
Based on the recognition of the ILFS using the ECG,
the information provided by the physiological signal allows
people to determine whether they are experiencing the ILFS.
Identifying a suitable partner quickly is conducive
to establishing a relationship. Moreover, owing to the
low cost, portability, and wearability of ECG devices, the
ECG-based ILFS recognition algorithm can be combined
with wearable devices, which could help match users with
partners who elicit this cardiac response in specific scenarios, online or offline.
Data Availability
The data used to support the findings of this study are
available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Acknowledgments
The authors thank all the subjects and the experimenters
who participated in the experiment. This work was supported in
part by the National Natural Science Foundation of China
(nos. 61472330 and 61872301).
References
[1] F. Zsok, M. Haucke, C. Y. De Wit, and D. P. H. Barelds, “What
kind of love is love at first sight? An empirical investigation,”
Personal Relationships, vol. 24, no. 4, pp. 869–885, 2017.
[2] E. Naumann, “Love at first sight: the stories and science
behind instant attraction,” Frontiers in Psychology, vol. 7,
2001.
[3] L. Custer, D. Holmberg, K. L. Blair, and T. Orbuch, “‘So How
Did You Two Meet?’ Narratives of Relationship Initiation,” in
Handbook of Relationship Beginnings, Lawrence Erlbaum
Associates, Mahwah, NJ, USA, 2008.
[4] B. Fehr, Love: Conceptualization and Experience, American
Psychological Association, Worcester, MA, USA, 2015.
[5] N. Alea and S. C. Vick, “e first sight of love: relationship-
defining memories and marital satisfaction across adulthood,”
Memory, vol. 18, no. 7, pp. 730–742, 2010.
[6] C. Vico, P. Guerra, H. Robles, J. Vila, and L. Anllo-Vento,
“Affective processing of loved faces: contributions from pe-
ripheral and central electrophysiology,” Neuropsychologia,
vol. 48, no. 10, pp. 2894–2902, 2010.
Table 4: Classification performance of five classifiers after feature
selection.
Classifier Se Sp F1 AUC ACC
SVM 0.7684 0.5631 0.6908 0.656 0.6616
RF 0.7194 0.6635 0.6984 0.6903 0.6907
NB 0.627 0.6101 0.6021 0.6194 0.6232
KNN 0.6307 0.631 0.627 0.634 0.6325
DT 0.6343 0.5897 0.6216 0.6209 0.6163
Figure 6: Classification performance (Se, Sp, F1, AUC, and ACC) of the five classifiers after feature selection.
Figure 7: Comparison of the classification accuracy (ACC) of the five classifiers (SVM, RF, NB, KNN, and DT) with and without feature selection.
[7] H. E. Fisher, A. Aron, D. Mashek, H. Li, and L. L. Brown,
“Defining the brain systems of lust, romantic attraction, and
attachment,” Archives of Sexual Behavior, vol. 31, no. 5,
pp. 413–419, 2002.
[8] R. Nawaz, K. H. Cheah, H. Nisar, and V. V. Yap, “Comparison
of different feature extraction methods for EEG-based
emotion recognition,” Biocybernetics and Biomedical Engi-
neering, vol. 40, no. 3, pp. 910–926, 2020.
[9] G. Valenza, A. Lanata, and E. P. Scilingo, “e role of
nonlinear dynamics in affective valence and arousal recog-
nition,” IEEE Transactions on Affective Computing, vol. 3,
no. 2, pp. 237–249, 2012.
[10] A. O. Akmandor and N. K. Jha, “Keep the stress away with
SoDA: stress detection and alleviation system,” IEEE Trans-
actions on Multi-Scale Computing Systems, vol. 3, no. 4,
pp. 269–282, 2017.
[11] A. Tjolleng, K. Jung, W. Hong et al., “Classification of a
Driver’s cognitive workload levels using artificial neural
network on ECG signals,” Applied Ergonomics, vol. 59,
pp. 326–332, 2017.
[12] B. Cheng and G. Liu, “Emotion recognition from surface
EMG signal using wavelet transform and neural network,”
Journal of Computer Applications, vol. 28, no. 2, pp. 1363–
1366, 2008.
[13] Y. K. Lee, O. W. Kwon, H. S. Shin, J. Jo, and Y. Lee, “Noise
reduction of PPG signals using a particle filter for robust
emotion recognition,” in Proceedings of the 2011 IEEE In-
ternational Conference on Consumer Electronics-Berlin (ICCE-
Berlin), pp. 202–205, Berlin, Germany, September 2011.
[14] W. Wen, G. Liu, N. Cheng, J. Wei, P. Shangguan, and
W. Huang, “Emotion recognition based on multi-variant
correlation of physiological signals,” IEEE Transactions on
Affective Computing, vol. 5, no. 2, pp. 126–140, 2014.
[15] Y.-L. Hsu, J.-S. Wang, W.-C. Chiang, and C.-H. Hung,
“Automatic ECG-based emotion recognition in music lis-
tening,” IEEE Transactions on Affective Computing, vol. 11,
no. 1, pp. 85–99, 2020.
[16] O. Alzoubi, S. K. D’Mello, and R. A. Calvo, “Detecting nat-
uralistic expressions of nonbasic affect using physiological
signals,” IEEE Transactions on Affective Computing, vol. 3,
no. 3, pp. 298–310, 2012.
[17] S. H. Fairclough and C. Dobbins, “Personal informatics and
negative emotions during commuter driving: effects of data
visualization on cardiovascular reactivity & mood,” Inter-
national Journal of Human-Computer Studies, vol. 144, Article
ID 102499, 2020.
[18] S. D. Kreibig, “Autonomic nervous system activity in emotion: a
review,” Biological Psychology, vol. 84, no. 3, pp. 394–421, 2010.
[19] A. Lichtenstein, A. Oehme, S. Kupschick, and T. Jürgensohn,
“Comparing Two Emotion Models for Deriving Affective States
from Physiological Data,” in Affect and Emotion in Human-
Computer Interaction, Springer Berlin Heidelberg, New York,
NY, USA, 2008.
[20] P. Rainville, A. Bechara, N. Naqvi, and A. R. Damasio, “Basic
emotions are associated with distinct patterns of cardiore-
spiratory activity,” International Journal of Psychophysiology,
vol. 61, no. 1, pp. 5–18, 2006.
[21] P. Karthikeyan, M. Murugappan, and S. Yaacob, “Analysis of
stroop color word test-based human stress detection using
electrocardiography and heart rate variability signals,” Ara-
bian Journal for Science and Engineering, vol. 39, no. 3,
pp. 1835–1847, 2014.
[22] H. Guo, Y. Huang, C. Lin, J. Chien, K. Haraikawa, and
J. Shieh, “Heart rate variability signal features for emotion
recognition by using principal component analysis and
support vectors machine,” in Proceedings of the IEEE Inter-
national Conference on Bioinformatics & Bioengineering,
pp. 274–277, IEEE, Taichung, Taiwan, October 2016.
[23] R. Castaldo, L. Montesinos, P. Melillo, S. Massaro, and
L. Pecchia, “To what extent can we shorten HRV analysis in
wearable sensing? a case study on mental stress detection,” in
Proceedings of the EMBEC & NBC 2017: Joint Conference of the
European Medical and Biological Engineering Conference
(EMBEC) and the Nordic-Baltic Conference on Biomedical
Engineering and Medical Physics (NBC), Tampere, Finland,
June 2017.
[24] M. Codispoti, M. M. Bradley, and P. J. Lang, “Affective re-
actions to briefly presented pictures,” Psychophysiology,
vol. 38, no. 3, 2001.
[25] J. Kim and E. Andre, “Emotion recognition based on phys-
iological changes in music listening,” IEEE Transactions on
Pattern Analysis and Machine Intelligence, vol. 30, no. 12,
pp. 2067–2083, 2008.
[26] X.-W. Wang, D. Nie, and B.-L. Lu, “Emotional state classi-
fication from EEG data using machine learning approach,”
Neurocomputing, vol. 129, no. 10, pp. 94–106, 2014.
[27] C. O. Alm, D. Roth, and R. Sproat, “Emotions from text:
machine learning for text-based emotion prediction,” in
Proceedings of the Conference on Human Language, Tech-
nology and Empirical Methods in Natural Language Processing
- HLT ’05, Vancouver, CA, USA, October 2005.
[28] J. L. Sangrador and C. Yela, “‘What is beautiful is loved’:
physical attractiveness in love relationships in a representative
sample,” Social Behavior and Personality: An International
Journal, vol. 28, no. 3, pp. 207–218, 2000.
[29] P. W. Eastwick, L. B. Luchies, E. J. Finkel, and L. L. Hunt, “e
predictive validity of ideal partner preferences: a review and
meta-analysis,” Psychological Bulletin, vol. 140, no. 3,
pp. 623–665, 2014.
[30] R. Likert, “A technique for the measurement of attitudes,”
Archives of Psychology, vol. 140, no. 22, pp. 1–55, 1932.
[31] H.-Y. Lin, S.-Y. Liang, Y.-L. Ho, Y.-H. Lin, and H.-P. Ma,
“Discrete-wavelet-transform-based noise removal and feature
extraction for ECG signals,” IRBM, vol. 35, no. 6, pp. 351–361,
2014.
[32] J. Pan and W. J. Tompkins, “A real-time QRS detection al-
gorithm,” IEEE Transactions on Biomedical Engineering,
vol. BME-32, no. 3, pp. 230–236, 1985.
[33] P. Ekman, W. V. Friesen, M. O’Sullivan et al., “Universals and
cultural differences in the judgments of facial expressions of
emotion,” Journal of Personality and Social Psychology, vol. 53,
no. 4, pp. 712–717, 1987.
[34] C. Leys, C. Ley, O. Klein, P. Bernard, and L. Licata, “Detecting
outliers: do not use standard deviation around the mean, use
absolute deviation around the median,” Journal of Experi-
mental Social Psychology, vol. 49, no. 4, pp. 764–766, 2013.
[35] S. Pourmohammadi and A. Maleki, “Stress detection using
ECG and EMG signals: a comprehensive study,” Computer
Methods and Programs in Biomedicine, vol. 193, Article ID
105482, 2020.
[36] J. Schenk, M. Kaiser, and G. Rigoll, “Selecting features in on-
line handwritten whiteboard note recognition: SFS or SFFS,”
in 10th International Conference on Document Analysis and
Recognition, ICDAR 2009, Barcelona, Spain, July 2009.
[37] P. Pudil and J. Novovičová, “Novel Methods for Feature Subset
Selection with Respect to Problem Knowledge,” in Feature Ex-
traction, Construction and Selection, pp. 101–116, Springer US,
New York, NY, USA, 1998.
[38] F. Wilcoxon, S. K. Katti, and R. A. Wilcox, “Critical Values and
Probability Levels for the Wilcoxon Rank Sum Test and the
Wilcoxon Signed Rank Test,” Selected Tables in Mathematical
Statistics, pp. 171–259, American Cyanamid, Bridgewater
Township, NJ, USA, 1970.
[39] I. Guyon and A. Elisseeff, “An introduction to variable and
feature selection,” Journal of Machine Learning Research,
vol. 3, no. 6, pp. 1157–1182, 2003.