
Frank Earl Pollick
- PhD
- University of Glasgow
About
- 228 Publications
- 35,717 Reads
- 6,720 Citations
Publications (228)
This paper presents the first in-car VR motion sickness (VRMS) detection model based on lower face action units (LF-AUs). Initially developed in a simulated in-car environment with 78 participants, the model’s generalizability was later tested in real-world driving conditions. Motion sickness was induced using visual linear motion in the VR headset...
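As a hedged illustration of how a detection model of this kind might be set up (the snippet does not give the authors' features, data, or classifier, so every name, shape, and choice below is an assumption), the following Python sketch cross-validates a simple classifier on placeholder lower-face AU intensities.

```python
# Minimal sketch (not the authors' model): classifying a motion-sickness
# label from lower-face action-unit (AU) intensity features.
# All feature names, shapes, and the classifier choice are assumptions.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: one row per time window, columns are mean intensities
# of lower-face AUs (e.g. AU12, AU14, AU15, AU17, AU20, AU23, AU25, AU26).
X = rng.normal(size=(200, 8))          # placeholder AU features
y = rng.integers(0, 2, size=200)       # 0 = no sickness, 1 = sickness reported

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```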
Semi-autonomous vehicles allow drivers to engage with non-driving related tasks (NDRTs). However, these tasks interfere with the driver’s situational awareness, which is key when they need to safely retake control of the vehicle. This paper investigates whether Augmented Reality (AR) could be used to present NDRTs to reduce their impact on situational awareness...
When viewing the actions of others, we not only see patterns of body movements, but we also "see" the intentions and social relations of people. Experienced forensic examiners – Closed Circuit Television (CCTV) operators – have been shown to demonstrate superior performance in identifying and predicting hostile intentions from surveillance footage than...
Cyclists encounter drivers in many traffic scenarios; good communication is key to avoiding collisions. Little is known about everyday driver-cyclist interaction and communication. This is important in designing Automated Vehicles (AVs) that must drive safely around cyclists. We explored driver-cyclist interaction across diverse scenarios through i...
We use functional Magnetic Resonance Imaging (fMRI) to explore synchronized neural responses between observers of audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical...
When viewing the actions of others, we not only see patterns of body movements, but we also “see” the intentions and social relations of people, enabling us to understand the surrounding social environment. Previous research has shown that experienced forensic examiners—Closed Circuit Television (CCTV) operators—demonstrate superior performance in ident...
How the performance of autonomic physiological features and human vestibular network (HVN)-based brain functional connectivity (BFC) features differs in a VR sickness classification task is underexplored. Therefore, this paper presents an AI-aided comparative study of the two. Results from different AI models all show that autonomic physiological features r...
In everyday life, emotional information is often conveyed by both the face and the voice. Consequently, information presented by one source can alter the way in which information from the other source is perceived, leading to emotional incongruence. Here, we used functional magnetic resonance imaging (fMRI) to examine neural correlates of two diff...
Successful adoption of autonomous systems requires appropriate trust from human users, with trust calibrated to reflect true system performance. Autonomous image classifiers are one such example and can be used in a variety of settings to independently identify the contents of image data. We investigated users’ trust when collaborating with an auto...
Promoting mental health in the workplace and creating a supportive environment for those experiencing poor mental health are important strategies that can be implemented by workplaces. The present research evaluates the effectiveness of Headtorch WORKS, a mental health and well-being intervention consisting of three online episodes, including origi...
fMRI Neurofeedback (NF) is a promising tool to study the relationship between behaviour and brain activity. It enables people to self-regulate their brain signal. Here we applied fMRI NF to train healthy participants to increase activity in their supplementary motor area (SMA) during a Motor Imagery (MI) task of complex body movements while they re...
This study aimed to extend previous research on the experiences and factors that impact law enforcement personnel when working with distressing materials such as child sexual abuse content. A sample of 22 law enforcement personnel working within one law enforcement organisation in England, United Kingdom participated in anonymous semi-structured in...
With the development of consumer virtual reality (VR), people have increasing opportunities to experience cybersickness (CS)-a kind of visually-induced motion sickness (MS). In view of the importance of CS mitigation (CSM), this paper reviews the methods of electrostimulation-based CSM (e-CSM), broadly categorised as either "VR-centric" or "Human-c...
Many head-mounted virtual reality display (VR-HMD) applications that involve moving visual environments (e.g., virtual rollercoaster, car and airplane driving) will trigger cybersickness (CS). Previous research (Arshad et al., 2015) has explored the inhibitory effect of cathodal transcranial direct current stimulation (tDCS) on vestibular cortical e...
Multivariate Pattern Analysis (MVPA) has grown in importance due to its capacity to use both coarse and fine scale patterns of brain activity. However, a major limitation of multivariate analysis is the difficulty of aligning features across brains, which makes MVPA a subject specific analysis. Recent work by Haxby et al. (2011) introduced a method...
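For readers unfamiliar with the alignment idea referenced here, the sketch below shows the core step of functional alignment in the spirit of hyperalignment, using an orthogonal Procrustes rotation on synthetic data. It is not the Haxby et al. (2011) implementation; all shapes and variable names are assumptions.

```python
# Illustrative sketch: rotate one subject's time-by-voxel response matrix
# into the space of a reference subject with an orthogonal Procrustes fit.
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 300, 50

reference = rng.normal(size=(n_timepoints, n_voxels))        # "template" subject
true_rotation = np.linalg.qr(rng.normal(size=(n_voxels, n_voxels)))[0]
subject = reference @ true_rotation + 0.1 * rng.normal(size=(n_timepoints, n_voxels))

# Find the rotation that best maps this subject onto the reference space.
R, _ = orthogonal_procrustes(subject, reference)
aligned = subject @ R

print("correlation before:", np.corrcoef(subject.ravel(), reference.ravel())[0, 1])
print("correlation after: ", np.corrcoef(aligned.ravel(), reference.ravel())[0, 1])
```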
Background:
Biological motion, namely the movement of others, conveys information that allows the identification of affective states and intentions. This makes it an important avenue of research in autism spectrum disorder where social functioning is one of the main areas of difficulty. We aimed to create a quantitative summary of previous finding...
Previous research using reverse correlation to explore the relationship between brain activity and presented image information found that Fusiform Face Area (FFA) activity could be related to the appearance of faces during free viewing of the Hollywood movie The Good, the Bad, and the Ugly (Hasson et al., 2004). We applied this approach to the natu...
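As a toy illustration only of the reverse-correlation logic mentioned above (not the Hasson et al., 2004 pipeline), the Python sketch below correlates a synthetic ROI time course with a frame-level face-present annotation at several hemodynamic lags; every signal and parameter here is made up.

```python
# Toy reverse-correlation sketch: relate an ROI time course to a
# frame-by-frame "face present" annotation, allowing for hemodynamic lag.
import numpy as np

rng = np.random.default_rng(2)
n_trs = 400
face_present = (rng.random(n_trs) > 0.7).astype(float)   # 1 = face on screen

lag = 2  # assumed hemodynamic delay in TRs
roi = np.roll(face_present, lag) + 0.5 * rng.normal(size=n_trs)  # fake FFA-like signal

# Correlate the ROI signal with the annotation at a range of lags;
# the peak should fall at the assumed hemodynamic delay.
for shift in range(0, 5):
    r = np.corrcoef(roi[shift:], face_present[:n_trs - shift])[0, 1]
    print(f"lag {shift} TRs: r = {r:.2f}")
```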
The goal of Information Retrieval (IR) systems is to satisfy searchers' Information Need (IN). Our research focuses on next-generation IR engines, which can proactively detect, identify, and serve INs without receiving explicit queries. It is essential, therefore, to be able to detect when INs occur. Previous research has established that a realisa...
The right anterior insula (AI), known to have a key role in the processing and understanding of social emotions, is activated during tasks that involve the act of empathising. Neurofeedback provides individuals with a visualisation of their own brain activity, enabling them to regulate and modify this activity. Following previous research investiga...
It is believed that Mirror Visual Feedback (MVF) increases interlimb transfer, but the exact mechanism is still a matter of debate. The aim of this study was to compare a bimanual task (BM) and an MVF task within functionally rather than geometrically defined cortical domains. The Measure Projection Analysis (MPA) approach was applied to com...
The main goal of information retrieval (IR) is to satisfy information need (IN). IN refers to a complex concept: at the very initial state of the phenomenon (that is, at a visceral level), even the searcher may not be aware of its existence. Thus, despite advances in the past few decades in both the IR and relevant scientific communities, we do not...
Background
Autism spectrum disorders (ASD) are lifelong neurodevelopmental disorders. It is not clear whether working memory (WM) deficits are commonly experienced by individuals with ASD.
Aim
To determine whether individuals with ASD experience significant impairments in WM and whether there are specific domains of working memory that are impaire...
Full search strategy of Medline database. (DOCX)
References of studies included in the meta-analyses. (DOCX)
Haptic feedback has been widely studied for in-car interactions. However, most of this research has used vibrotactile cues. This paper presents two studies that examine novel thermal feedback for navigation during simulated driving for a lane change task. In the first, we compare the distraction and time differences of audio and thermal feedback. T...
Inter-subject correlation (ISC) based analysis is a conceptually simple approach to analyze functional magnetic resonance imaging (fMRI) data acquired under naturalistic stimuli such as a movie. We describe and validate the statistical approaches for comparing ISCs between two groups of subjects implemented in the ISC toolbox, which is an open sour...
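To make the ISC idea concrete, here is a minimal Python sketch of pairwise and leave-one-out ISC for a single voxel on synthetic data. It is illustrative only and is not the ISC toolbox code or its group-comparison statistics.

```python
# Minimal ISC sketch for one voxel: synthetic subject time series sharing
# a common stimulus-driven component plus subject-specific noise.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n_subjects, n_timepoints = 18, 240

shared = rng.normal(size=n_timepoints)                   # stimulus-driven component
data = shared + rng.normal(scale=1.5, size=(n_subjects, n_timepoints))

# Pairwise ISC: correlate every pair of subjects, then average.
pairwise_r = [np.corrcoef(data[i], data[j])[0, 1]
              for i, j in combinations(range(n_subjects), 2)]
print(f"mean pairwise ISC: {np.mean(pairwise_r):.2f}")

# Leave-one-out ISC: correlate each subject with the mean of all others.
loo_r = [np.corrcoef(data[i], data[np.arange(n_subjects) != i].mean(axis=0))[0, 1]
         for i in range(n_subjects)]
print(f"mean leave-one-out ISC: {np.mean(loo_r):.2f}")
```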
Multisensory processing is a core perceptual capability, and the need to understand its neural bases provides a fundamental problem in the study of brain function. Both synchrony and temporal order judgments are commonly used to investigate synchrony perception between different sensory cues and multisensory perception in general. However, extensiv...
To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether the prolonged recalibration process of passively sensed visual and auditory signals is affected by naturally oc...
Search is one of the most performed activities on the World Wide Web. Various conceptual models postulate that the search process can be broken down into distinct emotional and cognitive states of searchers while they engage in a search process. These models significantly contribute to our understanding of the search process. However, they are typi...
The fields of neural prosthetic technologies and Brain-Computer Interfaces (BCIs) have witnessed unprecedented development in the past 15 years, bringing together theories and methods from different scientific fields, digital media, and the arts. In particular, artists have been amongst the pioneers of the design of relevant applications si...
How the brain contends with naturalistic viewing conditions when it must cope with concurrent streams of diverse sensory inputs and internally generated thoughts is still largely an open question. In this study, we used fMRI to record brain activity while a group of 18 participants watched an edited dance duet accompanied by a soundtrack. After sca...
In this demonstration we show a thermal interaction design on the steering wheel for navigational cues in a car. Participants will be able to use a thermally enhanced steering wheel to follow instructions given in a turn-to-turn based navigation task in a virtual city. The thermal cues will be provided on both sides of the steering wheel and will i...
Emotional engagement and aesthetic appreciation can be prime motivations for engaging with dance. Dance can therefore offer a valuable tool for the neuroscientific study of emotion processing. This idea underpinned the project Watching Dance, which investigated the neural correlates of subjective emotional response. Participants watched a four-minu...
With advancement in research in a given field, there should be parallel development in visualisation methods to understand the data accrued. 3D visualisation and interactive visual applications can facilitate synthesis and understanding of high dimensional data. This concept has been applied within varying fields of research, though it has yet to b...
Until full autonomy is achieved in cars, drivers will still be expected to take over control of driving, and critical warnings will be essential. This paper presents a comparison of abstract versus language-based multimodal warnings signifying handovers of control in autonomous cars. While using an autonomous car simulator, participants were distra...
Experienced CCTV operators are better at predicting violent behaviour than novices. It has been suggested that experienced, trained operators acquire gaze strategies to process complex visual scenes more efficiently. We examined whether the coherence of the gaze path of an operator aids in perceiving harmful intent and how this compares to the situ...
The new commercial-grade Electroencephalography (EEG)-based Brain-Computer Interfaces (BCIs) have led to a phenomenal development of applications across health, entertainment and the arts, while an increasing interest in multi-brain interaction has emerged. In the arts, there are already a number of works that involve the interaction of more than on...
The uncanny valley effect (UVE) is a negative emotional response experienced when encountering entities that appear almost human. Research on the UVE typically investigates individual, or collections of, near human entities but may be prone to methodological circularity unless the properties that give rise to the emotional response are appropriatel...
Introduction: A major limitation of Multivariate Pattern Analysis (MVPA) is the difficulty of aligning features of fMRI data across brains due to high variability in individual responses. To address this issue, Haxby et al. (2011) suggested a method called Hyperalignment to align participants’ patterns of ventral temporal cortex during object recog...
Using fMRI decoding techniques we recently demonstrated that early visual cortex contains content-specific information from sounds in the absence of visual stimulation (Vetter, Smith & Muckli, Current Biology, 2014). Here we studied whether the emotional valence of sounds can be decoded in early visual cortex during emotionally ambiguous visual sti...
Understanding the intentions of others by viewing their actions in a complex visual scene is a challenging task. Does experience change looking behavior, such that gaze to specific action cues creates a fixation "signature" unique to experienced observers? To address this question we analyzed eye movements between experienced surveillance (CCTV) opera...
To discern intention, based on the observation of complex human behaviour, typically requires eye-movements that actively explore the visual context. We examined whether the gaze patterns of novice and experienced observers would differentially inform intention judgments of naïve viewers. To achieve this we first obtained real-life surveillance (CC...
The raison d'être of IR is to satisfy human information need. But, do we really understand information need? Despite advances in the past few decades in both the IR and relevant scientific communities, this question is largely unanswered. We do not really understand how an information need emerges and how it is physically manifested. Information ne...
The objective of the present work was the characterization of mechanisms by which affective experiences are elicited in observers when watching dance movements. A total of 203 dance stimuli from a normed stimuli library were used in a series of independent experiments. The following measures were obtained: (i) subjective measures of 97 dance-naïve...
EPoster. Introduction: Recent human neuroimaging studies using univariate analyses (Vogt et al., 2013) have shown common activation areas between action observation (AO), execution (AE) and motor imagery (MI) in premotor cortex (PM), Supplementary Motor Area (SMA) and Primary motor cortex (M1), as well as primary sensory cortex (S1), superior parie...
We utilize qualitative audience research and functional brain imaging (fMRI) to examine the aesthetic experience of watching dance both with and without music. This transdisciplinary approach was motivated by the recognition that the aesthetic experience of dance revealed through conscious interpretation could have neural correlates in brain activi...
We describe the creation of the first multisensory stimulus set that consists of dyadic, emotional, point-light interactions combined with voice dialogues. Our set includes 238 unique clips, which present happy, angry and neutral emotional interactions at low, medium and high levels of emotional intensity between nine different actor dyads. The set...
This paper presents a first evaluation of multimodal language-based warnings for handovers of control in autonomous cars. A set of possible handover situations varying in urgency is described. A set of multimodal, language-based warnings for these situations is then introduced. All combinations of audio, tactile and visual warnings for handovers we...
This paper examines the response of 10 typical and 10 autism spectrum participants in their reaction to a set of uni- and multisensory warning signals designed to indicate different levels of urgency. The warnings were composed of auditory, visual and tactile signals that were presented alone or in combination. Two experiments were conducted, a fir...
Relevance is a central notion in Information Retrieval, but it is considered to be a difficult concept to define. We analyse brain signals for the first 800 milliseconds (ms) of a relevance assessment process to answer the question "when is relevance happening in the brain?" with the belief that it will lead to better operational definitions of rel...
Intersubject correlation (ISC) analysis of functional magnetic resonance imaging (fMRI) data provides insight into how continuous streams of sensory stimulation are processed by groups of observers. Although edited movies are frequently used as stimuli in ISC studies, there has been little direct examination of the effect of edits on the resulting...
Interactive new media art and games belong to distinctive fields, but nevertheless share common grounds, tools, methodologies, challenges, and goals, such as the use of applications and devices for engaging multiple participants and players, and more recently electroencephalography (EEG)-based brain-computer interfaces (BCIs). At the same time, an...
Audiovisual perception of emotions has been typically examined using displays of a solitary character (e.g., the face-voice and/or body-sound of one actor). However, in real life humans often face more complex multisensory social situations, involving more than one person. Here we ask if the audiovisual facilitation in emotion recognition previousl...
Multimodal displays are increasingly being utilized as driver warnings. Abstract warnings, without any semantic association to the signified event, and language-based warnings are examples of such displays. This paper presents a first comparison between these two types, across all combinations of audio, visual and tactile modalities. Speech, text a...
Although the use of Brain-Computer Interfaces (BCIs) in the arts originates in the 1960s, there is a limited number of known applications in the context of real-time audio-visual and mixed-media performances and accordingly the knowledge base of this area has not been developed sufficiently. Among the reasons are the difficulties and the unknown pa...
Human beings often observe other people's social interactions without being a part of them. Whereas the involvement of some brain regions (e.g. the amygdala) has been extensively examined, the involvement of the precuneus remains to be determined. Here we examined the involvement of the precuneus in third-person perspective of social interaction...
This paper describes two experiments evaluating a set of speech and tactile driver warnings. Six speech messages of three urgency levels were designed, along with their tactile equivalents, Speech Tactons. These new tactile warnings retained the rhythm of speech and used different levels of roughness and intensity to convey urgency. The perceived u...
Music is an integral part of dance. Over the last 10 years, however, dance stimuli (without music) have been repeatedly used to study action observation processes, increasing our understanding of the influence of observer’s physical abilities on action perception. Moreover, beyond trained skills and empathy traits, very little has been investigated...
The ability to integrate auditory and visual information is crucial to everyday life and there are mixed results regarding how Autism Spectrum Disorder (ASD) influences audiovisual integration. The audiovisual Temporal Integration Window (TIW) indicates how precisely sight and sound need to be temporally aligned to perceive a unitary audiovisual ev...
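As background on the TIW measure itself (a generic illustration, not the analysis in the study above), one common approach fits a Gaussian to the proportion of "synchronous" responses across audiovisual asynchronies and reads the window width off the fit. The SOA values and response proportions below are invented.

```python
# Sketch: estimate a Temporal Integration Window by fitting a Gaussian to
# the proportion of "synchronous" responses at each stimulus onset asynchrony.
import numpy as np
from scipy.optimize import curve_fit

soa_ms = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])  # audio lead < 0
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

def gaussian(soa, amplitude, centre, width):
    return amplitude * np.exp(-((soa - centre) ** 2) / (2 * width ** 2))

(amplitude, centre, width), _ = curve_fit(gaussian, soa_ms, p_sync,
                                          p0=[1.0, 0.0, 150.0])
print(f"point of subjective simultaneity: {centre:.0f} ms")
print(f"window width (SD of fitted Gaussian): {abs(width):.0f} ms")
```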
Background / Purpose: In this work we derive the dynamics of the blood oxygen level dependent (BOLD) signal based on a generic description of the dynamics of a group of neurons. Main conclusion: Considering all participants, aesthetic activation areas are larger than emotion activation areas. This implies that humans have more diverse responses to aest...
Time perception may often be difficult to define in terms of processes and specific brain areas but it is ever-present in all aspects of our daily life. The study of timing has been central in Cognitive Sciences but recently research has been more vigorous extending from static and/or unimodal stimulations to moving and/or multimodal presentations....
We used a combination of behavioral, computational vision and fMRI methods to examine human brain activity while viewing a 386 s video of a solo Bharatanatyam dance. A computational analysis provided us with a Motion Index (MI) quantifying the silhouette motion of the dancer throughout the dance. A behavioral analysis using 30 naïve observers provi...
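For readers curious what a silhouette-based motion index might look like computationally (the study's exact computation may well differ), here is a rough Python sketch that frame-differences binary silhouette masks; the masks here are random placeholders.

```python
# Rough motion-index sketch: per-frame fraction of silhouette pixels that
# changed relative to the previous frame.
import numpy as np

rng = np.random.default_rng(4)
n_frames, height, width = 100, 120, 160
silhouettes = rng.random((n_frames, height, width)) > 0.5   # binary mask per frame

diffs = np.abs(np.diff(silhouettes.astype(np.int8), axis=0))
motion_index = diffs.reshape(n_frames - 1, -1).mean(axis=1)

print("motion index for first five frame transitions:", np.round(motion_index[:5], 3))
```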
The uncanny valley hypothesis states that the acceptability of an artificial character will not increase linearly in relation to its likeness to human form. Instead, after an initial rise in acceptability there will be a pronounced decrease when the character is similar, but not identical to human form (Mori, 1970/2012). Moreover, it has been claim...
We show that the way people observe video sequences, and not only what they observe, is important for the understanding and the prediction of human activities. In this study, we consider 36 surveillance videos, organized in four categories (confront, nothing, fight, play): the videos are observed by 19 people, ten of whom are experienced operators an...
Previous studies have evaluated Audio, Visual and Tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested while using a driving simulator. This paper describes two experiments...
It has been proposed that we make sense of the movements of others by observing fluctuations in the kinematic properties of their actions. At the neural level, activity in the human motion complex (hMT+) and posterior superior temporal sulcus (pSTS) has been implicated in this relationship. However, previous neuroimaging studies have largely utiliz...
The ability to integrate auditory and visual information is a crucial part of everyday life. The Temporal Integration Window (TIW) provides a measure of how much asynchrony can be tolerated between auditory and visual streams before one loses the perception of a unitary audiovisual event. Previous investigations of the TIW in individuals with Autis...
The superior temporal sulcus (STS) and gyrus (STG) are commonly identified to be functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies on AV integration used stimuli of short duration in explicit evaluative tasks. Importantly though, many of our AV experiences are of a long duration and a...
Relevance is one of the key concepts in Information Retrieval (IR). A huge body of research exists that attempts to understand this concept so as to operationalize it for IR systems. Despite advances in the past few decades, answering the question "How does relevance happen?" is still a big challenge. In this paper, we investigate the connection be...
Synchrony judgments involve deciding whether cues to an event are in synch or out of synch, while temporal order judgments involve deciding which of the cues came first. When the cues come from different sensory modalities these judgments can be used to investigate multisensory integration in the temporal domain. However, evidence indicates that th...
This panel will discuss opportunities and challenges involved in applying cognitive neuroscience and neuroimaging in information science. The panelists will discuss lessons learned from related disciplines and will consider how neuroimaging tools, such as fMRI, fNIRS, and EEG, could contribute to information science.
Many discussions of biological motion perception involve a description of observers' attunements for recognizing gender, emotion, action, and identity from point-light displays. This chapter describes an often-neglected determinant of biological motion perception: the role of expertise. First, the authors describe how variability among observers is...
This book explores new developments in the dialogues between science and theatre and offers an introduction to a fast-expanding area of research and practice. The cognitive revolution in the humanities is creating new insights into the audience experience, performance processes and training. Scientists are collaborating with artists to investigate...
Our recent work has focused on using fMRI to investigate how experience in observing action is reflected in brain activity. In this talk I will concentrate on two studies that span a range of experience. The first study investigates brain activity of inexperienced observers when they view a complex 6-minute dance to which they have no familiarity....
Background / Purpose:
We investigated whether emotional context as communicated through auditory stimulation can influence the activity patterns in early visual cortex.
Main conclusion:
When viewing an emotionally ambiguous interaction between two point-light walkers, activity patterns in early visual areas are influenced by emotional informat...
Background:
Grey matter (GM) volume and cortical thickness (CT) have been shown to correlate with performance and expertise on a number of tasks in typically developed individuals (e.g., Maguire et al., 2003). In people with ASDs it has been suggested that GM abnormalities are linked to behavioural deficits (Hadjikhani et al., 2006). Structural m...
Our understanding of the perceived actions of those around us includes an ability to segment this continuous stream of activity into discrete events. We studied naïve observers' abilities to segment a video of an unfamiliar dance style into events using a combination of behavioural, computational vision and brain imaging methods. A 386 s video of a...