© The Author(s) 2025. Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0
International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long
as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you
modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of
it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise
in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted
by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy
of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
RESEARCH
Singh et al. Journal of Big Data (2025) 12:22
https://doi.org/10.1186/s40537-025-01069-x
Journal of Big Data
Quantum-inspired framework for big data analytics: evaluating the impact of movie trailers and its financial returns
Jaiteg Singh1*, Kamalpreet Singh Bhangu1, Farman Ali2*, Ahmad Ali AlZubi3 and Babar Shah4
Abstract
In the context of the growing influence of businesses and marketers on social media
platforms, understanding the impact of emotionally charged content on consumer
behavior has become increasingly crucial. This study proposes a novel framework,
leveraging quantum computing principles, to assess the emotional impact of movie
trailers. The framework incorporates big data analytics and utilizes Quantum Walk and
Quantum Time Series models to investigate the relationship between a movie trailer’s
emotional intensity and its financial performance. Unlike the sequential problem-solving
approach of traditional computing models, quantum superposition allows exploring
multiple options at once. An analysis of 141 movie trailers released after January 1,
2022, revealed a positive correlation between a trailer’s emotive score and its financial
success. These findings suggest that trailers evoking a stronger emotional response
tend to achieve greater box office returns compared to those with a lower emotional
impact. This research underscores the pivotal role of emotionally resonant content
in shaping consumer behavior and cinematic outcomes. It offers valuable
insights for filmmakers and marketers seeking to optimize audience engagement
and financial returns.
Keywords: Quantum computing, Big data analytics, Fama–French model, Movie trailer
reviews, Financial returns
Introduction
Data-driven techniques, particularly machine learning approaches, have significantly
enhanced prediction accuracy across various domains, including healthcare, public
policy, finance, and entertainment. Social media is a powerful platform for businesses
to influence consumer opinions and purchasing behavior [1]. Marketing experts frequently
analyze social media content when making marketing-mix decisions [2]. The
movie industry has witnessed tremendous growth. It has generated billions of dollars
through the sale of movie tickets and video-on-demand services. Accurate estimation of
box office collection is instrumental for timely business decisions and for guiding
film production and distribution companies. Such estimation is indispensable for the
sustained growth of the global movie industry [3, 4]. The movie trailer is a crucial promotional
*Correspondence:
jaitegkhaira@gmail.com;
farman0977@skku.edu
1 Chitkara University Institute
of Engineering and Technology,
Chitkara University,
Punjab 140401, India
2 Department of Applied AI,
School of Convergence, College
of Computing and Informatics,
Sungkyunkwan University, Seoul,
Republic of Korea
3 Department of Computer
Science, Community College,
King Saud University, Riyadh,
Saudi Arabia
4 College of Technological
Innovation, Zayed University,
Dubai, United Arab Emirates
Content courtesy of Springer Nature, terms of use apply. Rights reserved.
medium for any film and its cast. It has the potential to significantly impact the film’s
reception, popularity, and financial success. A pre-release analysis of the theatrical
trailer could significantly contribute to optimizing its content and sequence. This is
particularly important because critical marketing decisions, such as trailer launch, advertising
strategies, distribution channels, and release timing, are made well in advance of
the actual movie premiere [5]. The US movie industry produces a large number of movies
with an average investment of $60 million per month [6]. However, despite this
significant investment, the profitability of a movie remains highly uncertain [7]. For
movie houses and marketing managers, understanding the financial impact of movie
trailers prior to a movie’s release is crucial. Several studies have shown a strong
positive correlation between viewers’ emotional engagement with movie trailers and
the box office performance of a movie [8]. Movie trailers are advertisements that not only
capture viewers’ attention but also aim to generate interest and anticipation for the
upcoming film [9]. Trailers often showcase actual scenes from the movie and intend
to build audience expectations [10]. Despite the significant cost associated with the production
of movie trailers, the research community has largely overlooked the impact
of movie trailer design on the financial valuation of the film [11]. Investors also rely
heavily on movie trailers to predict the box office success or failure of a film [12]. The
effectiveness of a movie trailer is likely directly proportional to its ability to instigate
emotional responses in viewers. Viewers evaluate films based on visual quality and
storyline, regardless of filming style, and form emotional responses, which ultimately affect
the film’s revenue [13–16].
Extant literature has predominantly relied on readily available metadata, such as genre
classifications, budget figures, and historical market performance of comparable films,
to generate predictive models [17]. Further, traditional linear statistical models and rudi-
mentary machine learning algorithms often prove inadequate in capturing the intricate,
non-linear relationships among the multifarious factors influencing a film’s commercial
success [18, 19]. To the best of our understanding, advanced computing
techniques such as quantum computing have only recently enabled researchers to harness
the wealth of information contained in critical reviews and blog content.
Quantum computing, leveraging the principles of quantum mechanics, provides a rev-
olutionary approach towards computation, promising unprecedented computing speeds
compared to traditional methods. By processing multiple solutions concurrently, it drastically
reduces the time taken to solve complex optimization problems. Unlike traditional
computing, its probability-based structure means it excels at navigating vast solution
spaces. Greater computational power and efficiency make it ideal for tackling today’s
complex data-rich problems like anticipating movie success.
This research intends to employ advanced quantum computing techniques, such as
Quantum Walk and Quantum Time Series modeling, to identify significant patterns in
the emotional dynamics underlying trailer performance and their effect on box office
results [20]. The Quantum Walk approach will model the complex, nonlinear relationships
between various factors impacting a film’s financial success, including the emotional
intensity of trailers, critic reviews, and social media engagement. Moreover,
Quantum Time Series analysis will be applied to capture the temporal dynamics and
higher-order correlations within the data, offering a comprehensive understanding of
how trailer content evolves over time and its influence on a movie’s commercial perfor-
mance [21].
The rest of this paper is structured as follows: Sect. "Related work" presents a review of
the relevant literature, Sect. "Results" details the experimental design, Sect. "Discussion
of key findings" discusses the outcomes, and Sect. "Conclusion" provides the conclusion.
Related work
Presently, revenue forecasts for the opening weekend box office earnings are categorized
according to the prediction algorithm [3, 4, 22–26] or the metadata [24, 27, 28] associ-
ated with the films. Several studies have focused on the development of prediction
models because predictions of movie box-office revenues are accurate only to a
limited extent. The development of a multimodal framework that utilizes film trailers to
forecast box office performance during the opening weekend of a motion picture is a
relatively recent trend.
Movie trailers are intended to attract audiences to theaters and their impact on the
financial success of a film has been extensively studied by researchers [29]. Investors
are willing to allocate significant resources to trailer advertising and use advanced tech-
nologies to create personalized trailers that captivate audiences [30]. Forecasts of movie
demand made during the pre-release stage have been found to be reasonably accurate
and new information can affect expectations about a film’s financial performance, lead-
ing to changes in stock prices [31]. Stock returns immediately following a movie’s release
are primarily driven by its performance [32]. Trailer advertising prior to a film’s release
can provide valuable information to viewers and investors and can generate expectations
about its future success [33]. Consequently, movie studios require a means of assessing
the profitability of their investment in a film well before it is released. This assessment
should concentrate on predictive factors that can be determined in advance [34].
The Mehrabian–Russell PAD model suggests that advertising evokes emotional
responses related to pleasure and arousal [35]. The study of emotions was originally conducted
by psychologists, but it was later discovered that emotions play a crucial role in
determining consumer behavior [36]. Consumer research uses two primary approaches,
the Facial Action Coding System (FACS) and facial electromyography (EMG), to analyze
facial expressions and emotional responses [37]. Although these techniques are effec-
tive in capturing emotional responses, there is limited empirical evidence regarding
their effectiveness [38]. Advertisements include various factors that are designed to elicit
emotional responses from viewers. Recent advances in image analysis and pattern recog-
nition have made it possible to automatically detect and classify emotional and conver-
sational facial signals [39].
According to the available literature, emotions can be categorized as positive or nega-
tive valence [40]. Emotions can also be viewed as either Dimensional, which includes
Arousal and Valence, or Discrete, which consists of specific emotional states such as
’happy’, ’sad’, ’anger’, ’disgust’, and ’neutral’ [40]. Measuring discrete emotional states
requires additional hardware such as fMRI, EEG, and Galvanic Skin Response (GSR)
sensors, which can be costly and impractical for industries [41]. Instead, researchers
have focused on facial expressions as a key indicator of a person’s mood or emotional
states [42]. Several studies have attempted to use facial expressions to make inferences
about emotional states [43]. Content-based psycholinguistic features have also been
studied to predict social media messages, and supervised machine learning has been
proposed to overcome the limitations of human and computer coding procedures [44].
Although facial expression recognition has improved over the years, it still remains chal-
lenging due to the subtle and variable nature of facial expressions [45]. Effective feature
extraction techniques such as Dlib-ml, which identifies key features of a face that con-
tribute to generating emotions, have been proposed to overcome these difficulties [46].
Movie revenue prediction based on purchase intention mining using YouTube trailer
reviews is a potential application of natural language processing (NLP) techniques. In
this approach, sentiment analysis and opinion mining are applied to user-generated
content, such as comments and reviews, to extract information on viewers’ purchase
intentions [47]. The Affective-Knowledge-Enhanced Graph Convolutional Networks
(AKE-GCN) model has been proposed for aspect-based sentiment analysis. It aims
to incorporate both affective knowledge and graph convolutional networks into the task
of sentiment analysis, employing multi-head attention to learn better representations
of aspect-based sentiment [48]. A methodology has also been proposed for predicting the
box office revenue of movies using sentiment analysis on Twitter data. The authors leverage
the vast amount of information available on Twitter to extract insights into the
public’s perception of upcoming movies, and use this information to make predictions
about their success at the box office [49].
The proposed quantum framework uses a quantum computer to manipulate qubits
representing input data from YouTube movie reaction videos. It uses a Quantum Walk
model to encode, detect, and quantify emotions with high accuracy and speed, represented
through a quantum circuit using a combination of quantum gates. As far as we
are aware, there has been no prior research on predicting human emotions from facial
expressions using a quantum walk model empowered by quantum time series. In the
upcoming section, a case study will be presented, followed by a detailed
explanation of the proposed approach.
A case study to gain insight into market behavior
The concept of event studies is based on the idea that an event of interest can have an
immediate effect on the stock price of a company. The event window is a specific time
frame used to measure this effect. In the case of a movie trailer release, the event window
is typically a five- to six-day period leading up to the release, as well as a one-day
period after the release [50]. By using a short event window, researchers can limit the
impact of confounding factors that might influence stock prices, such as other events
related to the movie or its competitors. This allows researchers to focus specifically on
the impact of the trailer release itself. Event studies have been widely used in the finance
industry to analyze the impact of events on stock prices, and they can be a useful tool
for investors and analysts seeking to understand the relationship between specific events
and stock price movements.
The case study was evaluated taking into consideration the two hypotheses given below:

H1: The release of a movie trailer does not have an impact on the stock value of the
movie.
H2: The emotional content of a movie trailer is not associated with the financial value
of the movie.
Calculating the normal return of a particular stock is a crucial step in event studies as
it provides a baseline against which to compare the stock’s performance during the event
window. e normal return is typically calculated as the average return over a period of
time when no significant events are taking place.
In the context of a movie trailer launch, the normal return for a given movie "m" is
determined by calculating the average of the returns over the period leading up to the
trailer release, from t = −6 to t = −1, using Eq. (1). To compute the return for a particular
day, the natural logarithm of the ratio of the stock’s trading value for that day to its
trading value for the previous day is taken.
Once the normal return has been calculated, the abnormal return (ABR) can be determined
by subtracting the expected return ($E_m$) from the actual return ($A_m$) during the
event window [51], as outlined in Eq. (2). The ABR represents the stock price fluctuation
that occurs directly due to the event, and it is computed for each day within the event
window. Here, $ABR_{mt}$ is the abnormal return on day t for movie m, $A_{mt}$ is the actual return on day t for
movie m, $E_m$ is the expected (normal) return for movie m, and $\sigma_m$ is the standard deviation
of returns for movie m over the estimation window. In this equation, the ABR for
each day is calculated as the difference between the actual return on that day and the
expected return, divided by the standard deviation of returns for the estimation window.
This helps to normalize the ABR and make it comparable across different stocks or
events.
By calculating the ABR for each day in the event window, researchers can analyze the
impact of the event on the stock price over time and identify any significant changes
or trends. This information can be used to inform investment decisions or provide
insights into market behavior. The ABR has a relationship with the emotional scores of
the trailer, including the positive (PVE) and negative (NVE) emotional scores [52], as
explained in the algorithms in the next section. Quantified emotions are used to predict economic
value following recommendations from the Fama–French model [53]. The Fama–French
model addresses common risk factors associated with abnormal returns (ABR)
on various stocks and bonds. Movie stock pricing data is sourced from the Hollywood
Stock Exchange (HSX), a virtual movie stock market with over two million participants,
where active traders are typically heavy consumers and early adopters of movies [54].
These traders use virtual currency to enhance their net worth by trading movie stocks
and related financial products [55]. Studies have found that HSX traders’ forecasts are
fairly accurate in predicting actual box office returns [56]. Additionally, investors use
virtual stock markets to predict movie demand before a movie’s release [57]. The proposed
framework implements Fama–French-inspired three-factor equations to predict
the impact of positive and negative valence emotions (PVE and NVE) on stock pricing.
(1)   $E_m = \Big( \sum_{t=-6}^{-1} M_t \Big) \big/ 6$

(2)   $ABR_{mt} = (A_{mt} - E_m) / \sigma_m$
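As a concrete illustration, Eqs. (1) and (2) can be computed as follows. The function names, the log-return convention, and the toy price series are our assumptions for this sketch, not the authors' implementation; `statistics.stdev` stands in for the estimation-window standard deviation $\sigma_m$.

```python
import math
import statistics

def daily_log_return(price_today, price_yesterday):
    # Daily return: natural log of the ratio of the day's trading value
    # to the previous day's trading value (as described in the text).
    return math.log(price_today / price_yesterday)

def normal_return(pre_event_returns):
    # Eq. (1): average return over the estimation days t = -6 ... -1.
    return sum(pre_event_returns) / len(pre_event_returns)

def abnormal_return(actual_return, expected_return, sigma):
    # Eq. (2): standardized abnormal return for one event-window day.
    return (actual_return - expected_return) / sigma

# Hypothetical closing prices for a movie stock, days t = -7 ... 0,
# where t = 0 is the trailer-release day.
prices = [10.0, 10.2, 10.1, 10.3, 10.4, 10.5, 10.6, 11.2]
rets = [daily_log_return(prices[i], prices[i - 1]) for i in range(1, len(prices))]
pre_event, event_day = rets[:-1], rets[-1]

E_m = normal_return(pre_event)            # expected (normal) return
sigma_m = statistics.stdev(pre_event)     # dispersion over the estimation window
ABR = abnormal_return(event_day, E_m, sigma_m)
```

A positive ABR on the release day would indicate a price move above what the pre-release baseline predicts.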
The procedure to calculate PVE, NVE, and ABR is detailed in Algorithm 1. The Fama–French
model’s relevance is particularly significant in relation to abnormal returns and
the timing window [58].
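One way to operationalize the Fama–French-inspired factor equations is as a linear model relating the emotive scores to abnormal returns. The sketch below fits ABR = α + β₁·PVE + β₂·NVE by ordinary least squares; the two-factor simplification, the variable names, and the toy data are our assumptions, not the authors' exact specification.

```python
def ols_fit(X, y):
    # Ordinary least squares via the normal equations (X^T X) b = X^T y,
    # solved by Gauss-Jordan elimination with partial pivoting.
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    A = [XtX[i] + [Xty[i]] for i in range(k)]      # augmented system
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

# Toy data: ABR generated from hypothetical coefficients 0.1, 2.0, -1.5
pve = [0.6, 0.7, 0.5, 0.8, 0.4]
nve = [0.2, 0.1, 0.4, 0.1, 0.5]
abr = [0.1 + 2.0 * p - 1.5 * n for p, n in zip(pve, nve)]

design = [[1.0, p, n] for p, n in zip(pve, nve)]   # intercept, PVE, NVE
alpha, beta_pve, beta_nve = ols_fit(design, abr)
```

Since the toy ABR series is exactly linear in PVE and NVE, the fit recovers the generating coefficients; on real data the residual term would carry the market factors the Fama–French model is meant to absorb.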
The experimental results discuss the statistical tests used for the evaluation of hypothesis
H1. To evaluate hypothesis H2, the study considered two theatrical trailers for the
same movie, released on different dates. The algorithms in the subsequent section evaluated
the emotional response of viewers for these two movie trailers. This procedure was
repeated for the entire set of movies studied.
Methodology
This section describes the methodology used to test the hypothesis that emotional
responses evoked by movie trailers are associated with their financial value. The effectiveness
of traditional methods such as surveys, questionnaires, and interviews in measuring
emotional responses generated by a movie trailer is limited due to their inability
to capture temporal emotions and their reliance on a limited set of questions and
choices. Moreover, in some cases, it may not be feasible to question someone about
personal opinions and emotional responses. In contrast, facial expressions provide a
continuous and reliable source of emotional expression, accounting for 55% of the
message conveyed by a person, followed by intonations and verbal expressions [59].
Therefore, analyzing facial expressions is an effective way to reveal the actual emotional
response of a person towards a movie trailer.
This paper proposes a Fama–French and Dlib-ml inspired unified framework for predicting
the economic value of movie trailers. The approach involves sentiment analysis
based on facial expressions, performed as a quantum walk algorithm. The algorithm is
formalized as a quantum circuit, where the input wires are the qubits encoding 68 points
of the human face. The states of these qubits are changed via a quantum walk, which is
implemented as a sequence of rotations and controlled phase flips, with the rotation
angles depending on facial movements. Finally, a Quantum Fourier Transform is
applied to detect the contribution, or intensity, of five target emotions: happiness, anger,
sadness, surprise, and neutrality. The quantified emotions are then used to predict the
economic value of the movie trailer using the recommendations of the Fama–French
model. The component diagram in Fig. 1 illustrates the key elements of the proposed
framework.
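To make the pipeline concrete, here is a deliberately simplified, classically simulated sketch: landmark displacements set phase angles, a diagonal phase evolution stands in for the walk's rotations and controlled phase flips, and a discrete Fourier transform stands in for the QFT that reads out the five emotion intensities. The encoding, the five-state reduction, and the emotion ordering are our assumptions, offered for illustration only.

```python
import cmath

EMOTIONS = ["angry", "happy", "sad", "surprised", "neutral"]

def landmark_angles(landmarks, reference):
    # One angle per facial landmark: proportional to its displacement
    # from the corresponding landmark on a neutral reference face.
    return [((x - rx) ** 2 + (y - ry) ** 2) ** 0.5
            for (x, y), (rx, ry) in zip(landmarks, reference)]

def emotion_profile(angles, n=5):
    # Start in the uniform superposition over the n emotion basis states.
    state = [1 / n ** 0.5] * n
    # Diagonal phase evolution: each angle imprints an index-dependent
    # phase (a toy stand-in for the walk's rotations / phase flips).
    for theta in angles:
        state = [a * cmath.exp(1j * theta * k) for k, a in enumerate(state)]
    # Unitary discrete Fourier transform (stand-in for the QFT readout).
    out = [sum(state[j] * cmath.exp(-2j * cmath.pi * j * k / n)
               for j in range(n)) / n ** 0.5
           for k in range(n)]
    probs = [abs(a) ** 2 for a in out]
    total = sum(probs)                 # ~1.0; renormalize for float safety
    return {e: p / total for e, p in zip(EMOTIONS, probs)}
```

With zero displacement, all weight lands on the first basis state — an artifact of the arbitrary emotion ordering here, which underscores that this sketches the mechanics of the circuit, not the trained model.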
The Hollywood Stock Exchange (HSX) was used to observe the stock value of the
movie in near real-time [60]. Assuming that there are no unexpected fluctuations in
stock prices or other market variables related to the movie, abnormal returns (ABR) can
occur if the release of a movie trailer causes a change in the movie production company’s
stock price. Efficient-markets theory suggests that the stock price reflects all available
information, including the impact of events on a business, due to the rationality of investors
and the availability of perfect information [61]. Therefore, studying the variation in
the stock price of a production company following the release of a movie trailer can provide
valuable insights into the impact of such events on the company’s financial performance.
In this study, a total of 141 movie trailers released between January 1, 2022,
and March 31, 2023, were examined, which constituted only a subset of all 1334 movie
stocks listed on the HSX market during that period. The criteria for selecting the movies
were based on the release date of the trailer, the initial release of the movie on 650 or
more screens (considered as "wide releases" for HSX), and the availability of at least 90
days of trading history on HSX prior to the release date [51, 62, 63].
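The three inclusion criteria can be expressed as a simple filter; the record fields and the catalog entries below are hypothetical stand-ins for the HSX data described in the text.

```python
def is_eligible(movie, study_start="2022-01-01", study_end="2023-03-31"):
    # Criteria from the text: trailer released in the study window,
    # a wide release (>= 650 screens, the HSX threshold), and at least
    # 90 days of HSX trading history before the trailer release date.
    return (study_start <= movie["trailer_release"] <= study_end
            and movie["opening_screens"] >= 650
            and movie["hsx_trading_days"] >= 90)

# Hypothetical catalog: only the first record meets all three criteria.
catalog = [
    {"trailer_release": "2022-05-10", "opening_screens": 3100, "hsx_trading_days": 210},
    {"trailer_release": "2022-08-01", "opening_screens": 420, "hsx_trading_days": 365},
    {"trailer_release": "2021-11-20", "opening_screens": 2800, "hsx_trading_days": 180},
]
eligible = [m for m in catalog if is_eligible(m)]
```

ISO-formatted date strings compare correctly as plain strings, so no date parsing is needed for the window check.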
The economic worth of a movie trailer is commonly evaluated by abnormal returns
(ABR), which refer to the variation in the stock value of the movie production company
following the release of the trailer. To analyze this variation, this study used HSX, a virtual
movie stock market (VSM) with more than two million participants, including heavy
movie consumers and early adopters [64]. Traders use virtual currency to trade movie
stocks and other movie-related financial products on the platform, aiming to increase
their net worth. Previous research has demonstrated the reasonably accurate forecasting
abilities of HSX traders in predicting actual box office returns [65]. However, this study
aimed to examine the impact of trailer release on financial returns during the pre-release
period of a movie. The financial return measure was the movie’s "stock price" as traded
on HSX.
Overview of the work
The process of feature extraction utilizes the Dlib-ml library, which is a freely available,
cross-platform open-source software library created to simplify software development
and research. It comprises separate software components, each with comprehensive
documentation and debugging modes, making it appropriate for both research and commercial
ventures [66]. The process of recognizing facial expressions is performed by
Dlib-ml using a set of 68 key facial landmarks [67]. These landmarks are then utilized
as input states in a quantum-inspired model that is employed for quantifying emotions.
The 68 facial landmarks correspond to specific facial muscles located in the eyebrows,
eyes, nose, and mouth regions. These muscles have been identified as contributing significantly
to the formation of different emotions.
To perform classification, the proposed framework extracts these 68 facial landmarks
from the input video frames using Dlib-ml. The resulting feature vector is then passed
through a quantum circuit to predict the emotional state of the subject. A typical
case of identifying emotions and facial expressions involves 68 landmarks, as shown in
Fig. 2 below.

[Figure 1 schematic: Video Source → Frame-by-Frame Analysis of Video → Quantum Walk → Identification of Emotions → Classification of Emotions → Fama–French Model for compensating uncontrolled market factors → Calculating and Predicting Positive and Negative Valence → Economic Value of Investment]
Fig. 1 The key elements of the proposed framework
The following points explain the process.

1. Data collection: We collected a dataset of movie reaction videos from YouTube [68].
The videos were selected based on their popularity and their variety in terms of the movies
and the emotions expressed in them [69]. To develop our framework, we collected YouTube
videos that included crowd-sourced trailer reviews. These videos were shared
voluntarily by individuals on the platform. We gathered 153 reaction sequences or
videos related to the trailers of 141 movies from YouTube to collect facial expressions,
as shown in Fig. 3.
Fig. 2 A Dlib-ml markup of the 68 annotated facial landmarks
Fig. 3 Sample emotions identified during different reaction sequences
The reliability of these channels was evaluated based on two parameters: the number
of subscribers and the total views on the videos [70]. These criteria were used to
select the reaction sequences.

The selection process involved the following minimum requirements:

a) The YouTube channel for the reaction sequence must have at least 5,000 subscribers.

b) The YouTube channel for the reaction sequence must have at least 1,000,000
views in total.

Prominent YouTube channels that met these requirements were considered
to obtain the reaction sequences, as shown in Table 1. These channels were
selected because their large numbers of subscribers and views indicate
that the reaction sequences are genuine and reliable indicators of authenticity.
2. Preprocessing: The collected videos are preprocessed to extract frames and to
annotate the emotions expressed in them using established emotion recognition
frameworks. In the given context, pre-processing is applied to video frames and
involves resizing the frames to a specific size of 224 × 224 × 64, which is suitable
for the subsequent feature extraction step. This pre-processing makes the frames
compatible with a classification model so that they can be used for training and testing.
The proposed model uses facial expressions to identify and measure the intensity
of emotions such as smiles, eyebrow raises, anger, disgust, and positive and negative
reactions. These expressions are relevant to understanding the viewer’s response and
are validated by the research community. Classifiers are used to generate continuous
emotive outputs based on probabilities, as shown in Fig. 4.
3. Proposed Quantum Framework: The preprocessed data is analyzed using quantum
computing to process emotions in a video. It does so by analyzing the facial landmarks
in each frame of the video and encoding them into a quantum circuit. A quantum
walk is used to encode the facial landmarks into quantum states. A quantum
walk is a type of quantum mechanical process that describes the behavior of a particle
(such as an electron or photon) moving through a lattice or graph. In contrast
to classical random walks, where the particle moves randomly, the quantum walk is
a coherent process that involves the superposition of states. In a quantum walk, the
Table 1 Sample list of YouTube channels that met the minimum selection criteria based on their
number of subscribers and total views
YouTube channel Views Subscribers
ScreenJunkies 615 thousand 1.6 million
Beyond The Trailer 944 thousand 1.2 million
Dwayne N Jazz 1.7 billion 3.3 million
Tyrone Magnus 2.2 billion 2.9 million
The Reel Rejects 978 million 1.3 million
Blind Wave 515 million 837 thousand
particle is described by a quantum state, which is a superposition of states corresponding
to different positions in the lattice or graph. As the particle moves, the superposition
evolves in a way that depends on the structure of the lattice or graph and the
interactions between the particle and the lattice. The resulting quantum state can be
used to describe the probability distribution of finding the particle at different positions.
Quantum walks have a wide range of applications in quantum computing and
quantum information processing, such as quantum algorithms for search and optimization,
quantum simulation of complex systems, and quantum cryptography. They
also have potential applications in other areas, such as quantum sensing and metrology,
and the study of complex systems in condensed matter physics. The quantum
circuit consists of a series of quantum gates that perform rotations on the qubits
based on the facial landmark values. These rotations are then followed by a Fourier
transform, which is used to extract features from the circuit. Finally, measurements
are made on the qubits, and the resulting bit strings are used to determine the probability
of each emotion (Angry, Happy, Sad, Surprised, Neutral) being expressed in
the frame.
4. Model evaluation: The model is evaluated on a test dataset to assess its performance
in detecting the probability distribution and the positive and negative valence estimates
(PVE and NVE), based on whether the Neutral emotion has the highest probability in
movie reaction videos.
5. Result analysis: The results obtained from the model evaluation are analyzed to gain insights into the emotional content of the movie reaction videos and the effectiveness of the proposed quantum model in quantifying emotions. The proposed model detected five emotions (happy, angry, sad, neutral, and surprise) for each face in every video frame of the reaction sequences extracted from YouTube. The classifier returned a probability value for each emotion corresponding to every face identified in each frame. In the context of a theatrical movie trailer, the emotions happy, sad, surprise, and angry were treated as having positive valence, whereas neutral was treated as having negative valence. Each probability value represented the likelihood of the viewer experiencing that emotion at a given time. For example, the probabilities for happy, angry, sad, surprise, and neutral in the first frame of a viewer's reaction sequence could be represented as [Ha1, An1, Sa1, Su1, Ne1]. The total probability of each detected emotion was calculated as the sum of its probabilities over all frames of the video. The Positive Valence Emotive score (PVEs) of the whole reaction sequence for a theatrical movie trailer with "n" frames was determined by adding up the probabilities for happy, angry, sad, and surprise, whereas the Negative Valence Emotive score (NVEs) was determined by adding up the probability for neutral.

Fig. 4 Probability-based outputs for identified emotions as generated by the proposed quantum model (panels: Marketing Stimulus, Plotted Facial Landmarks, Calculated Emotion Metrics)
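As a concrete illustration, the per-frame scoring just described can be sketched in plain Python; the probability values below are invented, and the paper's actual pipeline derives them from the quantum circuit rather than from hand-written dictionaries:

```python
# Sketch of the PVE/NVE aggregation described above (illustrative values only).
# Happy, angry, sad and surprise are treated as positive valence; neutral as
# negative valence, summed over all frames of a reaction sequence.

POSITIVE = {"Happy", "Angry", "Sad", "Surprise"}

def valence_scores(frames):
    """frames: list of dicts mapping emotion -> probability for one viewer."""
    pve = sum(p for f in frames for e, p in f.items() if e in POSITIVE)
    nve = sum(f.get("Neutral", 0.0) for f in frames)
    return pve, nve

# Two hypothetical frames of a reaction sequence:
frames = [
    {"Happy": 0.4, "Angry": 0.1, "Sad": 0.1, "Surprise": 0.2, "Neutral": 0.2},
    {"Happy": 0.5, "Angry": 0.1, "Sad": 0.1, "Surprise": 0.1, "Neutral": 0.2},
]
pve, nve = valence_scores(frames)
```

The same summation extends unchanged to an "n"-frame sequence.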
Architecture
Quantum walks are a generalization of classical random walks that can be used to study quantum algorithms and simulate quantum systems related to emotion transmission [71]. They are described by a unitary evolution operator that acts on a Hilbert space representing the state of the system. The evolution of a quantum walk is governed by the Schrödinger equation given in Eq. 3 [72]:

(3)   i \frac{\partial}{\partial t} |\Psi(t)\rangle = H |\Psi(t)\rangle

where |\Psi(t)\rangle is the state of the system at time t and H is the Hamiltonian of the system. The Hamiltonian for a quantum walk can be written as in Eq. 4:

(4)   H = SW

where S is the coin operator that acts on the internal degree of freedom of the walker (similar to a classical coin flip) and W is the shift operator that describes the movement of the walker. The coin operator is usually written as a 2 × 2 matrix:

S = \begin{pmatrix} s_{0,0} & s_{0,1} \\ s_{1,0} & s_{1,1} \end{pmatrix}

where the indices (0,0), (0,1), (1,0), and (1,1) refer to the internal basis states |0⟩ and |1⟩, corresponding to the movement directions |L⟩ and |R⟩. The shift operator is a unitary operator that moves the walker one step to the left or right depending on the internal state of the walker, as shown in Eq. 5:

(5)   W = \sum_{x} \big( |x-1\rangle\langle x| \otimes |0\rangle\langle 0| + |x+1\rangle\langle x| \otimes |1\rangle\langle 1| \big)

where |x⟩ represents the position of the walker and the tensor product ⊗ indicates that the shift operator acts on both the position and internal degrees of freedom of the walker.

The time evolution of the quantum walk is then given by Eq. 6:

(6)   |\Psi(t)\rangle = e^{-iHt} |\Psi(0)\rangle

where |\Psi(0)\rangle is the initial state of the system. The evolution operator can be written as a product of the coin and shift operators, as in Eq. 7:

(7)   e^{-iHt} = e^{-iSWt} = e^{-iWt} e^{-iSt} + O(t^2)

where the second equality follows from the Trotter–Suzuki decomposition. This allows the time evolution of the quantum walk to be efficiently simulated using a sequence of coin and shift operations.
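For intuition, one coin-and-shift step of the walk described by Eqs. 4 and 5 can be simulated classically with NumPy; the ring size and step count here are illustrative choices, not the paper's landmark-graph encoding:

```python
import numpy as np

# One step of a discrete-time quantum walk: coin flip (S) then shift (W),
# on a ring of `n_pos` positions so that the shift stays unitary.
n_pos = 8
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# State amplitudes indexed by (position, coin state).
psi = np.zeros((n_pos, 2), dtype=complex)
psi[0, 0] = 1.0                                 # walker at x = 0, coin |0>

def step(psi):
    psi = psi @ H.T                             # apply the coin to the internal d.o.f.
    out = np.zeros_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)          # coin |0>: move left  (x -> x - 1)
    out[:, 1] = np.roll(psi[:, 1], +1)          # coin |1>: move right (x -> x + 1)
    return out

for _ in range(3):
    psi = step(psi)
probs = (np.abs(psi) ** 2).sum(axis=1)          # position distribution after 3 steps
```

Because both the coin and the shift are unitary, the position probabilities always sum to one, exactly as Eq. 6 requires.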
The facial landmark data encoded through the quantum walk is transformed from the time domain to the frequency domain using a Fourier transform circuit. This is achieved using the Quantum Fourier Transform (QFT), which generates a quantum circuit that performs the Fourier transform on a set of input qubits [73]. The idea behind this transformation is to analyze the facial landmark data in terms of its frequency components. The Fourier transform expresses a time-domain signal as a sum of sine and cosine functions of different frequencies. By performing the Fourier transform on the facial landmark data, one can identify which frequency components are present in the data and how much each component contributes to the overall signal. This frequency-domain representation of the facial landmark data can be useful in applications such as facial expression recognition, where certain facial expressions may be characterized by specific frequency components [74].

The transformation of facial landmark data from the time domain to the frequency domain using a quantum Fourier transform is a key aspect of this work. It is necessary in order to extract, from the facial landmark data, the features relevant for emotion recognition [75]. By transforming the time series of facial landmark data into the frequency domain, the model is able to capture the patterns and variations in the data that are important for distinguishing between different emotions. The use of a quantum Fourier transform is particularly interesting because it allows for the exploration of the potential of quantum computing for processing and analyzing complex data sets [76].
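The time-to-frequency idea can be illustrated classically with NumPy's FFT; the landmark trajectory below is synthetic, whereas the paper performs this step with a quantum Fourier transform:

```python
import numpy as np

# Synthetic y-coordinate trace of one facial landmark over 64 frames: a slow
# 2-cycle oscillation (e.g. a smile forming and relaxing) plus a weaker
# 10-cycle jitter. The spectrum cleanly separates the two components.
t = np.arange(64)
signal = 1.0 * np.sin(2 * np.pi * 2 * t / 64) + 0.3 * np.sin(2 * np.pi * 10 * t / 64)

spectrum = np.fft.rfft(signal)
magnitudes = np.abs(spectrum)
dominant = int(np.argmax(magnitudes))   # bin index = cycles per 64 frames
```

The dominant bin recovers the slow component, and the secondary peak at bin 10 the jitter; it is exactly this kind of frequency signature that the frequency-domain representation makes available to the classifier.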
The quantum circuit is constructed using the Qiskit library with n_qubits = 4. First, the circuit applies a series of rotations, qc.rx() and qc.ry(), to each qubit, using the x- and y-coordinates of each facial landmark as inputs. This step encodes the facial landmark information into the quantum circuit. Then, a series of controlled-X (CNOT) gates, qc.cx(), is applied to entangle the qubits, and the FourierTransformCircuits() function from the Qiskit library is used to construct the quantum Fourier transform (QFT) circuit. After the QFT is applied, the circuit measures the qubits and outputs a binary string that represents the state of the qubits after measurement. This binary string is used to calculate the probability of the circuit being in each of the five possible states: Angry, Happy, Sad, Surprised, and Neutral.

The circuit is executed using the QASM simulator backend from the Aer module of Qiskit. The execute() function runs the circuit with 1024 shots and returns a set of measurement results. The probability of each emotion is calculated by counting the number of times each outcome is observed and dividing by the total number of shots. The explanation of the quantum circuit and the states involved at each step is detailed subsequently.
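Because the Aqua-era FourierTransformCircuits API is deprecated, the pipeline just described can be sketched as a plain NumPy statevector simulation: RX/RY rotations encode four hypothetical (x, y) landmark coordinates, a CNOT chain entangles the qubits, the QFT is applied as the DFT matrix, and multinomial sampling stands in for the 1024-shot QASM run:

```python
import numpy as np

n = 4
I2 = np.eye(2)

def rx(theta):
    """Single-qubit rotation about the x axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def on_qubit(gate, q):
    """Lift a 1-qubit gate to the 4-qubit register (qubit 0 leftmost)."""
    ops = [I2] * n
    ops[q] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full

def cnot(c, t):
    """CNOT as a permutation matrix (control c, target t)."""
    U = np.zeros((2 ** n, 2 ** n))
    for x in range(2 ** n):
        bit = (x >> (n - 1 - c)) & 1
        U[x ^ (bit << (n - 1 - t)), x] = 1.0
    return U

# QFT applied directly as the DFT matrix on the full statevector.
N = 2 ** n
qft = np.array([[np.exp(2j * np.pi * j * k / N) for k in range(N)]
                for j in range(N)]) / np.sqrt(N)

# Hypothetical normalized (x, y) coordinates for four facial landmarks.
landmarks = [(0.1, 0.7), (0.4, 0.2), (0.9, 0.5), (0.3, 0.8)]

state = np.zeros(N, dtype=complex)
state[0] = 1.0                                   # start in |0000>
for q, (x, y) in enumerate(landmarks):           # encode coordinates as rotations
    state = on_qubit(ry(y), q) @ (on_qubit(rx(x), q) @ state)
for q in range(n - 1):                           # entangle neighbouring qubits
    state = cnot(q, q + 1) @ state
state = qft @ state                              # Fourier-transform the register

probs = np.abs(state) ** 2                       # ideal measurement distribution
rng = np.random.default_rng(0)
counts = rng.multinomial(1024, probs / probs.sum())  # emulate 1024 shots
```

The resulting 16-outcome count vector plays the role of the measurement results returned by execute() in the Qiskit implementation.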
Explanation of the algorithms
The algorithms used in this study aim to predict human emotions through quantum computing. The procedure involves processing a frame using the OpenCV library to detect faces, and subsequently applying a landmark detector to extract facial landmarks.
Quantum circuits are then constructed based on these landmarks, and the resulting probabilities are used to predict the corresponding emotions. Specifically, the procedure begins with converting the frame into grayscale and applying the face cascade classifier to detect faces. The resulting faces are then used to extract facial landmarks, which are normalized and used to construct quantum circuits. A quantum walk is used to encode graph information for the facial landmarks extracted from a detected face. The quantum circuit is initialized with a specific number of qubits (specified by n_qubits), and the quantum walk is performed by applying quantum gates to the qubits. The angles of the rotation gates (ry) and the placement of the controlled-Z gates (cz) are chosen based on the size of the input graph (i.e., the number of landmarks).
The graph information encoded via the quantum walk is then used to calculate probabilities for the different emotions associated with the face, using quantum time series analysis. The probabilities are obtained by measuring the output of the quantum circuit on the qasm_simulator backend. The resulting probability distribution is then used to calculate the positive and negative scores for the detected face. The quantum circuits are constructed using the QuantumCircuit and FourierTransformCircuits libraries, and the resulting probabilities are obtained by executing the quantum circuits using Aer.
The emotions predicted in this study are drawn from a pre-defined set: 'Angry', 'Happy', 'Sad', 'Surprise', and 'Neutral'. The resulting probabilities are used to predict the corresponding emotion, with 'Neutral' being a special case. If the probability of 'Neutral' is greater than 0.5, the emotion is classified as 'Negative' and the probabilities of the other emotions are summed to obtain the 'Negative' probability. If the probability of 'Neutral' is less than 0.5, the emotion is classified as 'Positive' and the probabilities of the other emotions are summed to obtain the 'Positive' probability. The algorithm shown in Algorithm 1 processes the videos of a movie trailer individually.
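The structure of that per-video loop can be sketched as follows; read_frames and process_frame are stand-ins for the OpenCV capture loop and the quantum frame analysis, and the toy inputs are invented:

```python
# Structural sketch of process_video(): aggregate per-frame scores per video.
# `read_frames` and `process_frame` are injected stand-ins for the OpenCV
# capture loop and the quantum frame analysis described in the text.

def process_video(video_paths, read_frames, process_frame):
    all_probs, all_pve, all_nve = [], [], []
    for path in video_paths:
        probs, pves, nves = [], [], []
        for frame in read_frames(path):
            p, pve, nve = process_frame(frame)
            probs.append(p)
            pves.append(pve)
            nves.append(nve)
        all_probs.append(probs)
        all_pve.append(sum(pves))       # video-level positive valence score
        all_nve.append(sum(nves))       # video-level negative valence score
    return all_probs, all_pve, all_nve

# Toy stand-ins: one "video" containing two "frames".
toy_frames = {"clip.mp4": [0, 1]}
out = process_video(
    ["clip.mp4"],
    lambda p: toy_frames[p],
    lambda f: ({"Happy": 0.6, "Neutral": 0.4}, 0.6, 0.4),
)
```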
Algorithm 1 illustrates the procedure process_video. The process_video() procedure takes a list of video paths as input and returns three lists containing the probabilities, positive emotions, and negative emotions of each video frame in each video file.
The algorithm shown in Algorithm 2 below converts the input frame to grayscale and detects faces using the face_cascade object. The grayscale image is stored in the variable "gray", while the detected faces are stored in the variable "faces". Next, an empty list called "probs" is created to store the probabilities of each emotion, and pve and nve are initialized to zero. For each face in "faces", a region of interest (ROI) is defined using the x, y, w, and h coordinates, and the face is extracted from the frame. The ROI is converted to grayscale, and facial landmarks are detected using the "landmark_detector" object. The landmark coordinates are normalized using mean and standard deviation normalization and stored in the "landmarks" variable as a NumPy array. A quantum circuit is then constructed using the "QuantumCircuit" object and the number of qubits specified by "n_qubits". The Rx and Ry gates are applied to each qubit using a for loop that iterates over the "landmarks" array. The circuit uses a quantum walk algorithm to encode the graph information into the quantum circuit. The quantum walk is performed using a sequence of rotations and controlled-Z gates. The rotation angle is determined by the number of landmarks in the graph, and the rotation axis is the y-axis. The loop iterates over all but the last qubit in the circuit, applying the rotation gate to each qubit and a controlled-Z gate between each adjacent pair of qubits. The last qubit is then rotated by the same angle. CNOT gates are applied to neighboring qubits using another for loop, and the Fourier transform is applied to the circuit using the "FourierTransformCircuits" object.
Finally, the qubits are measured, and the counts are obtained using the execute() method with the "Aer" backend. The probabilities of each emotion are then calculated using another for loop that iterates over the "emotions" list. The probability of the "Neutral" emotion is obtained by setting the key to '0000', while the probabilities of the other emotions are obtained by formatting the index of the emotion in the list as a binary string with four bits. If the probability of the "Neutral" emotion is greater than 0.5, pve is set to 0, and nve is set to the sum of the probabilities of the other emotions. Otherwise, pve is set to the sum of the probabilities of the other emotions, and nve is set to the probability of the "Neutral" emotion. Finally, the probabilities, pve, and nve are returned.
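The count-to-probability mapping and the Neutral rule can be sketched as below. Note one simplification: the text assigns the key '0000' to Neutral, whereas this sketch keys every emotion, Neutral included, by its list index in four bits to keep the mapping collision-free; the counts are invented:

```python
# Sketch of the valence rule described above: per-emotion probabilities are
# read off the measurement counts, then split into pve/nve around Neutral.
# The counts below are invented stand-ins for a 1024-shot run.

EMOTIONS = ["Angry", "Happy", "Sad", "Surprise", "Neutral"]

def emotion_probs(counts, shots=1024):
    """Map each emotion's 4-bit key (its list index, here) to its frequency."""
    keys = {e: format(i, "04b") for i, e in enumerate(EMOTIONS)}
    return {e: counts.get(k, 0) / shots for e, k in keys.items()}

def valence(probs):
    others = sum(p for e, p in probs.items() if e != "Neutral")
    if probs["Neutral"] > 0.5:
        return 0.0, others           # pve = 0, nve = sum of the rest
    return others, probs["Neutral"]  # pve = sum of the rest, nve = P(Neutral)

counts = {"0000": 100, "0001": 500, "0010": 60, "0011": 150, "0100": 214}
probs = emotion_probs(counts)
pve, nve = valence(probs)
```

With these counts Neutral's probability is about 0.21, so the frame is scored as positive-valence.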
Algorithm 2 illustrates the procedure process_frame. This pseudocode represents a procedure called "process_frame" that takes a frame as input and returns the probabilities, pve (positive valence estimate), and nve (negative valence estimate) of the emotion expressed in the frame.
The role of the quantum time series is to represent the landmarks extracted from the face detected in each frame of a video as a quantum state. The quantum circuit built from these quantum states is then used to compute the probability of different emotions based on the facial landmarks. The idea is to use quantum computing to process and analyze the facial landmarks in a more efficient and accurate way compared to classical computing. The Fourier transform circuits from qiskit.circuit.library are used to convert the time series of facial landmark data into the frequency domain. This conversion can help identify patterns or features in the data that may be more useful for emotion recognition.
The three-factor asset pricing model employed to estimate the expected return is given in Eq. 8 [77]. The equation describes how the daily returns of a company are calculated relative to the risk-free rate (i.e., the one-month Treasury bill rate) and the excess return on a value-weighted market portfolio consisting of stocks from the NYSE, AMEX, and NASDAQ. The Fama–French factors are based on six value-weighted portfolios formed on size and book-to-market. The SMBt (Small Minus Big) term represents the difference between the average return on three small portfolios and the average return on three big portfolios for day t, while the HMLt (High Minus Low) term represents the difference between the average return on two value portfolios and the average return on two growth portfolios for day t. These terms, along with RMt − RFt, are used to evaluate the impact of the market, size, and book-to-market factors on returns [77].

The daily abnormal return (AR) can be calculated by subtracting the return predicted by the three-factor model from the actual return, as shown in Eq. 9 [78]. These values are subsequently used in Algorithm 3 as coefficients of the Fama–French three-factor model for the calculation of the abnormal return (ABR) value.
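Equations 8 and 9 can be evaluated directly once the coefficients are known; all numbers below are hypothetical placeholders rather than fitted values from the paper:

```python
# Daily abnormal return from the Fama-French three-factor model (Eqs. 8-9).
# Every input here is an invented illustration value.

def expected_return(a, rf, m, rm_rf, s, smb, h, hml):
    """E(R_it) = a_i + RF_t + m_i (RM_t - RF_t) + s_i * SMB_t + h_i * HML_t."""
    return a + rf + m * rm_rf + s * smb + h * hml

def abnormal_return(actual, **kwargs):
    """AR_it = R_it - E(R_it), as in Eq. 9."""
    return actual - expected_return(**kwargs)

ar = abnormal_return(
    actual=0.012,            # observed daily return
    a=0.001, rf=0.0002,      # intercept and risk-free rate
    m=1.1, rm_rf=0.005,      # market loading and excess market return
    s=0.3, smb=0.002,        # size loading and SMB factor
    h=-0.2, hml=0.001,       # value loading and HML factor
)
```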
Algorithm 3 implements a quantum algorithm for computing the alpha and beta values of a stock using the Fama–French three-factor model. The algorithm uses amplitude estimation, a quantum algorithm for estimating the amplitude of a specific state in a superposition of states, to perform a linear algebra operation that computes the alpha and beta values.

The quantum_ABR function takes two arguments: the positive and negative stock returns (pve and nve, respectively). It first defines a quantum circuit with two qubits and applies the Quantum Fourier Transform (QFT) to prepare a superposition of all possible states. It then uses the Amplitude Estimation component from the Qiskit Aqua library to estimate the amplitude of the state that corresponds to the linear algebra operation defined in the linear_algebra_operation function. This function computes the matrix multiplication of a 2 × 2 matrix A and a 2 × 1 vector b to obtain the alpha and beta values.

After obtaining the estimated alpha value, the quantum_ABR function calculates the beta value as the difference between the negative stock return and the product of the positive stock return and the estimated alpha value, divided by the sum of the positive and negative stock returns. Finally, the function returns both the alpha and beta values. The compute_ABR function uses the main function to compute the alpha and beta values and then calculates the ABR value using the Fama–French three-factor model, which takes into account three factors that are believed to affect stock returns: market risk (Rm_Rf), size risk (SMB), and value risk (HML). The values 2.51, −5.59, and −9.01, respectively, are the three-factor Fama–French values for March 2023, as reported by Dartmouth College. The
(8)   R_{it} - RF_t = a_i + m_i (RM_t - RF_t) + s_i \, SMB_t + h_i \, HML_t

(9)   AR_{it} = R_{it} - E(R_{it}) = R_{it} - [\, a_i + RF_t + m_i (RM_t - RF_t) + s_i \, SMB_t + h_i \, HML_t \,]
function takes five arguments: the positive and negative stock returns, and the values of the three factors. It returns the computed abnormal return (ABR) value.
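A classical mirror of the alpha/beta arithmetic described above can be sketched as follows; the amplitude-estimation output is replaced by a fixed placeholder alpha_hat, and the pve/nve inputs are invented:

```python
# Classical sketch of the alpha/beta step of Algorithm 3. The amplitude
# estimation that produces alpha is replaced by a placeholder `alpha_hat`;
# pve/nve are hypothetical positive/negative return inputs.

def quantum_abr_classical(pve, nve, alpha_hat=0.5):
    """beta = (nve - pve * alpha) / (pve + nve), per the description above."""
    beta = (nve - pve * alpha_hat) / (pve + nve)
    return alpha_hat, beta

alpha, beta = quantum_abr_classical(pve=0.8, nve=0.2)
```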
Algorithm 3 illustrates the procedure to calculate ABR values based on the Fama–French three-factor model
Quantum circuit
The circuit starts by initializing n_qubits qubits in the zero state. Then, for each landmark in the face detected in the current frame, the corresponding qubit in the circuit is rotated around the x and y axes by the x and y coordinates of the landmark, respectively. This creates a quantum state that depends on the facial landmarks. Next, a series of CNOT gates is applied to the qubits to create entanglement between them. This is followed by the application of a Fourier transform circuit, which applies a series of Hadamard and phase gates to the qubits to create a superposition of all possible states.

Finally, the circuit is measured in the computational basis, which collapses the superposition into a classical probability distribution. The probability of each possible state is then calculated by running the circuit shots times and counting the number of times each state is observed. These probabilities are returned by the function, along with the probabilities of positive and negative emotions based on the values of the probability distribution.
Four qubits are used to represent the 2D coordinates (x and y) of 4 facial landmarks. Each qubit is used to encode one coordinate of a landmark, with the amplitude of the quantum state representing the value of the coordinate. The quantum circuit applies rotations around the x and y axes to each qubit based on the corresponding coordinate value, effectively transforming the input landmark data into quantum state amplitudes.

The Fourier transform circuit converts the state from the time domain to the frequency domain. This transformation allows the quantum algorithm to analyze the input landmark data in a different representation, which can potentially reveal different patterns or features [47].
The quantum circuit shown in Fig. 5 takes in 4 qubits. It first applies RX and RY rotations to each qubit, with the angle of rotation determined by the landmark coordinates of the detected face. Then, CNOT gates are applied between adjacent qubits, except for the last qubit. Finally, a quantum Fourier transform (QFT) is applied to all the qubits, and measurements are performed on each qubit.

The measurements result in a binary string of length 4, which represents the state of the qubits at that moment. The circuit counts the number of times each possible binary string occurs by executing the circuit multiple times using a simulator.
Starting from the left, the first two boxes represent the input to the quantum circuit. In this case, there are 68 facial landmarks detected in the frame, so the input to the circuit is a state vector of size 2^6 = 64, representing the probabilities of each possible combination of the landmarks being present or absent. The "Prep" box initializes the input state vector, with the qubits in the |0⟩ state.

The next box, labeled "Hadamard", applies a Hadamard gate to each qubit in the input state. This gate puts each qubit into a superposition of the |0⟩ and |1⟩ states, which is a key step in the quantum Fourier transform (QFT) algorithm. The remaining boxes in the circuit apply a sequence of controlled-phase (CP) gates, which are used to implement the QFT. Each CP gate has a control qubit and a target qubit, and applies a phase shift to the target qubit if the control qubit is in the |1⟩ state. The phase shift depends on the position of the qubits in the circuit and is determined by the formula e^{2\pi i kj / 2^n}, where k is the position of the control qubit, j is the position of the target qubit, and n is the total number of qubits in the circuit.
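The Hadamard-plus-controlled-phase construction can be checked numerically: building the transform gate by gate, with phases 2π/2^k as in the formula above and the standard final qubit-reversal swaps, reproduces the DFT matrix. This is a NumPy sketch of the textbook construction, not the paper's code:

```python
import numpy as np

n = 4
N = 2 ** n
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def on_qubit(gate, q):
    """Lift a 1-qubit gate to n qubits (qubit 0 = most significant bit)."""
    ops = [I2] * n
    ops[q] = gate
    full = ops[0]
    for op in ops[1:]:
        full = np.kron(full, op)
    return full

def bit(x, q):
    return (x >> (n - 1 - q)) & 1

def cphase(q1, q2, theta):
    """Controlled-phase gate: phase e^{i theta} when both qubits are |1>."""
    d = np.array([np.exp(1j * theta) if bit(x, q1) and bit(x, q2) else 1.0
                  for x in range(N)])
    return np.diag(d)

def swap(q1, q2):
    """SWAP two qubits as a permutation matrix."""
    U = np.zeros((N, N))
    for x in range(N):
        d = bit(x, q1) ^ bit(x, q2)
        U[x ^ (d << (n - 1 - q1)) ^ (d << (n - 1 - q2)), x] = 1.0
    return U

# Standard QFT circuit: H on each qubit, controlled phases 2*pi/2^k between
# later qubits and earlier ones, then reverse the qubit order with swaps.
U = np.eye(N, dtype=complex)
for j in range(n):
    U = on_qubit(H, j) @ U
    for k in range(j + 1, n):
        U = cphase(j, k, 2 * np.pi / 2 ** (k - j + 1)) @ U
for j in range(n // 2):
    U = swap(j, n - 1 - j) @ U

# Direct DFT matrix for comparison: F[y, x] = exp(2*pi*i*x*y/N) / sqrt(N).
dft = np.array([[np.exp(2j * np.pi * x * y / N) for x in range(N)]
                for y in range(N)]) / np.sqrt(N)
```

Equality of the two matrices confirms that the H and CP boxes in Fig. 5, wired in this order, implement the Fourier transform exactly.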
The CP gates are arranged in "reverse" order, with the qubits furthest to the right in the circuit acting as the control qubits and the qubits furthest to the left acting as the target qubits. This is because the QFT algorithm operates in reverse order, starting with the most significant bit of the input state. The final box in the circuit is the "Measure" box, which measures each qubit and collapses it to either the |0⟩ or |1⟩ state. The resulting measurement outcomes are classical bits, which are used to calculate the probability amplitudes of each possible state in the input vector.

Fig. 5 Quantum circuit representation of our experiment

Overall, the quantum circuit implements the QFT algorithm on the input state vector, which transforms the probabilities of each possible combination of facial landmarks into their corresponding Fourier coefficients. These coefficients can be used to analyze the frequency components of the landmark data and identify patterns in the facial expressions over time.
The circuit can be written in the form given in Eq. 10, where |q_0 q_1 q_2 q_3⟩ represents the initial state of the qubits and ⊕ denotes bitwise addition modulo 2. The state after the application of the H gates is a superposition of all possible basis states, and the CNOT gates entangle the qubits in such a way that the resulting state is inverted, i.e., each basis state is mapped to its bitwise complement.
Various states of the quantum circuit
• Input state: The input to the quantum circuit is a 4-qubit register initialized to the |0000⟩ state, which can be represented as |ψ⟩ = |0000⟩.
• Hadamard gate: The first operation in the circuit is a Hadamard gate applied to each qubit, which creates a superposition state on all qubits. The Hadamard gate can be represented by the matrix H given below, and its application to each qubit can be represented as in Eq. 11. Expanding the tensor product gives Eqs. 12, 13, and 14. The resulting state is a uniform superposition of all possible 4-qubit states.
• Controlled-Z gate: The next operation in the circuit is a controlled-Z gate between qubits 1 and 2. This gate flips the sign of the state |11⟩ and leaves all other states
(10)   |q_0 q_1 q_2 q_3\rangle \;\xrightarrow{\text{Hadamard gates}}\; \frac{1}{\sqrt{2^4}} \sum_{x=0}^{2^4-1} |x\rangle \;\xrightarrow{\text{CNOT gates}}\; \frac{1}{\sqrt{2^4}} \sum_{x=0}^{2^4-1} |x \oplus 15\rangle

H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}

(11)   |\psi\rangle = (H \otimes H \otimes H \otimes H) \, |0000\rangle

(12)   |\psi\rangle = (H|0\rangle) \otimes (H|0\rangle) \otimes (H|0\rangle) \otimes (H|0\rangle)

(13)   |\psi\rangle = \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle) \otimes \tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle)

(14)   |\psi\rangle = \tfrac{1}{4}\big( |0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle + |0110\rangle + |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle + |1110\rangle + |1111\rangle \big)
unchanged. The controlled-Z gate can be represented by the matrix in Eq. 15, where I is the identity matrix and Z is the Pauli-Z matrix. The controlled-Z gate applied to qubits 1 and 2 can be represented as in Eq. 16. The state |11⟩ is the only state affected by the controlled-Z gate, so the resulting state is given in Eq. 17.
• Hadamard gate: The next operation in the circuit is a Hadamard gate applied to each qubit, which again creates a superposition state on all qubits. The Hadamard gate is represented by the matrix in Eq. 18, and its application to each qubit can be represented as in Eq. 19. Expanding the tensor product gives Eq. 20. Applying the Hadamard gate to the |0⟩ state gives Eq. 21. Substituting this into the expanded equation, we get Eq. 22. The state |ψ⟩ after the Hadamard gate operation is a superposition of all 16 possible binary combinations of 4 qubits.
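The claim that four Hadamard gates produce a uniform superposition (Eqs. 11–14) is quick to verify numerically:

```python
import numpy as np

# H applied to each of 4 qubits maps |0000> to a uniform superposition (Eq. 14).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H4 = np.kron(np.kron(H, H), np.kron(H, H))   # H (x) H (x) H (x) H

psi0 = np.zeros(16)
psi0[0] = 1.0                                 # |0000>
psi = H4 @ psi0                               # every amplitude becomes 1/4
```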
(15)   CZ = |0\rangle\langle 0| \otimes I + |1\rangle\langle 1| \otimes Z

(16)   |\psi\rangle = CZ_{(1,2)} \, |\psi\rangle

(17)   |\psi\rangle = \tfrac{1}{4}\big( |0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle - |0110\rangle - |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle - |1110\rangle - |1111\rangle \big)

(18)   H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}

(19)   |\psi\rangle = (H \otimes H \otimes H \otimes H) \, |\psi\rangle

(20)   |\psi\rangle = (H|0\rangle) \otimes (H|0\rangle) \otimes (H|0\rangle) \otimes (H|0\rangle)

(21)   H|0\rangle = \tfrac{1}{\sqrt{2}}|0\rangle + \tfrac{1}{\sqrt{2}}|1\rangle

(22)   |\psi\rangle = \tfrac{1}{4}\big( |0000\rangle + |0001\rangle + |0010\rangle + |0011\rangle + |0100\rangle + |0101\rangle + |0110\rangle + |0111\rangle + |1000\rangle + |1001\rangle + |1010\rangle + |1011\rangle + |1100\rangle + |1101\rangle + |1110\rangle - |1111\rangle \big)

Results
A Shapiro–Wilk W test was employed to assess the normality of abnormal returns. The observed values of W (0.91) and p (0.01) suggest a statistically significant positive impact on abnormal returns, leading to the rejection of H1. To further investigate the impact of trailer release timing, the researchers analyzed two theatrical trailers for the same film, released on different dates. The empirical evidence suggests that the release of movie trailers can lead to positive abnormal returns in stock prices following the film's release, thereby refuting the null hypothesis that there is no relationship between trailer
release and stock value (H2). A comparative analysis of the Positive Valence Emotions (PVE), Negative Valence Emotions (NVE), and abnormal returns (ABR) for the top-performing movie trailers of a film, released at different time points, is summarized in Table 2. The results indicate a strong correlation between ABR and the emotional valence of the trailer (PVE and NVE). An increase in PVE tends to increase ABR, whereas an increased NVE results in a lower ABR. Furthermore, the first trailer exhibited a higher PVE than the second trailer, suggesting that it evoked a wider range of emotions among viewers than the latter.
This suggests that pre-release analysis of movie trailers can be a valuable tool for predicting a film's financial performance and for crafting trailers that may evoke higher positive emotions. Additionally, this study highlights the importance of trailer release timing, suggesting that strategic scheduling of multiple trailer releases for a single film can influence stock value returns. Figure 6 evaluates the effectiveness of movie trailers by examining their ability to elicit positive and negative valence emotions and their
Table 2 Calculated values of PVEs, NVEs, and ABR for ten best performing theatrical trailers released on different dates

Evaluated movie                        Trailer 1                   Trailer 2
                                       ABR   PVE    NVE           ABR   PVE    NVE
Samaritan                              16    0.51   0.002         6     0.47   0.12
Operation Fortune                      15    0.46   0.002         6     0.38   0.14
Fall                                   12    0.51   0.0012        5     0.35   0.2
The Invitation                         11    0.313  0.001         5     0.39   0.21
Transformers: Rise of the Beasts       10    0.35   0.07          3     0.11   0.1
The Enforcer                           9     0.494  0.011         3     0.21   0.16
Pinocchio                              9     0.31   0.02          3     0.2    0.11
Strange World                          8     0.47   0             2     0.29   0.14
Blood                                  8     0.47   0.018         3     0.41   0.18
The Whale                              7     0.32   0.065         1     0.2    0.1
Fig. 6 Abnormal return (ABR) of two movie trailers based on values of PVEs and NVEs
impact on ABR. The findings further identify pre-release analysis of movie trailers as a valuable tool for predicting financial performance and for optimizing trailer content to evoke positive emotions. Furthermore, the study establishes a direct link between theatrical trailer releases and stock value fluctuations. Emotionally intense trailers may lead to higher abnormal returns compared to those with moderate emotional intensity. Additionally, platforms such as YouTube can serve as a reliable source of data for training emotion-recognition models.

By employing abnormal return (ABR) values to represent movie trailer review videos on YouTube, we were able to quantitatively assess the emotional responses of viewers through the proposed quantum-inspired computing model. While previous research has explored the application of quantum computing for facial expression recognition and emotion quantification [79, 80], the proposed quantum model demonstrates superior performance compared to classical models such as CNN, Autoencoder, GoogleNet with One-Class Support Vector Machines (OCSVM), Histogram of Oriented Gradients (HOG) with OCSVM, and RGB and Flow two-stream networks, as shown in Table 3.
Table 3 Performance comparison of proposed and classical models
Models Accuracy Precision Recall F1-score
Proposed Quantum Model 95.65 0.86 0.98 0.92
CNN 85.12 0.84 0.85 0.84
AutoEncoder 83.84 0.79 0.72 0.75
GoogleNet with OCSVM 81.75 0.82 0.78 0.79
HOG with OCSVM 78.15 0.75 0.83 0.78
RGB and flow two-stream networks 86.41 0.85 0.76 0.8
Fig. 7 Comparison of different models based on AUC values
The experimental results show that the quantum model performed better on various metrics, including accuracy, precision, recall, F1-score, ROC, and AUC. The quantum model's accuracy was 95.65%, significantly higher than that of the other machine learning models, with improvements in overall classification accuracy. The Receiver Operating Characteristic (ROC) curve visualizes the trade-off between the true positive rate (sensitivity) and the false positive rate (1 − specificity) across different classification thresholds. This analysis reveals an inverse relationship between these two metrics. To further compare the performance of the various algorithms, the Area Under the Curve (AUC) metric was employed. Figure 7 presents a comparison of AUC values for popular classification algorithms and the proposed quantum model. The results indicate that the proposed quantum model achieves a superior AUC value of 0.99, outperforming conventional classifiers.
Table 4 shows the emotive scores of the ten theatrical trailers that yielded the best performance. Figure 8 summarizes the measured PVEs and NVEs of the best performing theatrical
Table 4 Calculated values of PVEs and NVEs by proposed model for best performing theatrical
trailers
Evaluated movies                        Stock price fluctuation    PVE     NVE
Samaritan 16 0.71 0.001
Operation Fortune 15 0.67 0.012
Fall 12 0.51 0.0012
The Invitation 11 0.313 0.001
Transformers: Rise of the Beasts 10 0.35 0.07
The Enforcer 9 0.494 0.011
Pinocchio 9 0.31 0.02
Strange World 8 0.47 0
Blood 8 0.47 0.018
The Whale 7 0.34 0.063
Fig. 8 Measured PVEs and NVEs of best performing theatrical trailers
trailers. Subsequently, Table 5 shows the emotive scores of the ten theatrical trailers with the worst performance, and Figure 9 summarizes their measured PVEs and NVEs. Tables 4 and 5 suggest that theatrical trailers with high PVE and low NVE resulted in better returns on stock value.
Discussion of key findings
The central finding of this study is the identification of a positive correlation between the emotional intensity of movie trailers and the financial success of the corresponding films at the box office. Specifically, the results reveal that trailers evoking higher emotional responses from viewers tend to translate into greater commercial performance for the movies.
From a theoretical perspective, these findings lend support to the Mehrabian-Russell
PAD (Pleasure, Arousal, Dominance) model, which posits that advertising content can
elicit specific emotional responses that drive consumer behavior and decision-making.
Table 5 Calculated values of PVEs and NVEs by proposed model for worst performing theatrical
trailers
Evaluated movies                        Stock price fluctuation    PVE     NVE
Green Lantern 6 0.31 0.14
Ambush 6 0.33 0.01
Scream VI 5 0.17 0.21
The Son 5 0.21 0.1
Poker Face 4 0.1 0.099
The Fabelmans 4 0.11 0.13
Barbie 4 0.17 0.099
Plane 4 0.21 0.13
Till 3 0.1 0.21
Kaleidoscope 3 0.14 0.2
Fig. 9 Measured PVEs and NVEs of worst performing theatrical trailers
The current study demonstrates the applicability of this theoretical framework in the context of movie marketing and trailer design.

On a practical level, the results provide valuable insights for filmmakers, producers, and marketing teams. By understanding the importance of emotional content in movie trailers, they can optimize the design and execution of trailer campaigns to better engage audiences and maximize the financial returns of their films. This knowledge can inform strategic decisions regarding trailer content, messaging, and distribution channels to enhance audience appeal and drive increased box office revenues.
Implications and contributions
e use of advanced quantum computing techniques, such as Quantum Walk and Quan-
tum Time Series modeling, represents a significant methodological advancement over con-
ventional linear and basic machine learning approaches. is quantum-inspired framework
allows for the capture of complex, nonlinear relationships between various factors influenc-
ing a film’s commercial performance, providing a more accurate and nuanced understand-
ing of the key drivers of box office success.
By leveraging the power of quantum computing, the proposed approach can uncover
meaningful patterns in the emotional dynamics underlying movie trailer performance
and consumer decision-making. This represents a novel, data-driven approach to predicting box office outcomes, which can potentially aid filmmakers, producers, and marketing teams in their decision-making processes.
Limitations and future research
While the study offers important insights, it is not without limitations. Primarily, it relies on HSX as the sole virtual stock market for measuring the financial returns of movie trading. While existing literature supports the authenticity of HSX, a comprehensive validation against another virtual stock market remains elusive. Nevertheless, this study offers valuable insights into the relationship between emotional content in movie trailers and relative box office returns, and it highlights the potential benefits of pre-release emotive analysis. Furthermore, the analysis was conducted on a relatively small sample of 141 movie trailers, all released after a specific cutoff date. Expanding the dataset to include a larger and more diverse set of trailers over a longer time period could further validate the findings and improve the generalizability of the results.
Additionally, the study focused solely on the emotional intensity of movie trailers and its
relationship with financial success. Other factors, such as the specific types of emotions
evoked, the narrative structure of the trailers, and their interaction with other marketing
channels, were not explored in depth. Future research could delve deeper into these aspects
to provide a more comprehensive understanding of the drivers of a film’s commercial
performance.
The current model relies on the identification of macro-facial expressions; micro-expressions, though indicative of core emotional valence, are not explicitly considered due to the limited datasets available. Future research could explore the integration of micro-expression analysis to enhance the model's accuracy. Incorporating additional data sources,
such as audience sentiment analysis from social media, critic reviews, and box office data,
could further strengthen the predictive capabilities of the proposed technique. Exploring
the application of this approach to industries and domains beyond the movie industry, such as advertising and marketing, could also be a fruitful avenue for future investigation.
Overall, this study represents an important step forward in understanding the role of
emotional content in movie marketing and its impact on financial success. The innovative
use of quantum computing techniques opens up new avenues for researchers and indus-
try practitioners to uncover meaningful insights and enhance decision-making in the fast-
paced and highly competitive entertainment industry.
Conclusion
The proposed quantum-inspired model has the potential to empower filmmakers and production houses to create emotionally resonant movie trailers. It showed promising results in predicting the emotional intensity of movie trailers, which could help in designing trailers that elicit positive emotions. Future research could explore the model's effectiveness in predicting the financial success of low-budget and regional movies that are not listed on HSX. Overall, the proposed quantum model holds promise as a powerful tool for the movie and advertising industries, enabling production houses and marketers to make data-driven decisions and create emotionally engaging content that resonates with their target audience. Such an application could have a substantial impact on allocating advertising budgets efficiently and maximizing profits. Additionally, the framework could be extended to evaluate product design, optimize advertising efforts, and inform marketing decisions for various businesses.
Author contributions
JS: prepared the first draft of the manuscript, performed the experimentation, and supervised the analysis; KB: compiled the data, performed analysis and validation of results; FA: helped in reviewing and proofreading; AZ: arranged funding; BS: helped in revising proofs.
Funding
This work was supported by the Researchers Supporting Project number (RSP2025R395), King Saud University,
Riyadh, Saudi Arabia.
Data availability
No datasets were generated or analysed during the current study.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Received: 18 December 2023 Accepted: 9 January 2025
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Content courtesy of Springer Nature, terms of use apply. Rights reserved.