Luis Patino
University of Reading

Ph.D.

About

48 Publications
5,652 Reads
1,119 Citations
Introduction
I am a researcher in the Computer Vision Group at the University of Reading, UK. My research spans human behaviour characterisation and information extraction from video, knowledge-based systems for activity recognition, and coherence analysis and interpretation of biomedical signals. The methodology I employ draws on signal correlation, clustering and data mining, and knowledge discovery and machine learning.

Publications

Publications (48)
Article
Full-text available
Wide area surveillance has become of critical importance, particularly for border control between countries where vast forested land border areas are to be monitored. In this paper, we address the problem of the automatic detection of activity in forbidden areas, namely forested land border areas. In order to avoid false detections, often triggered...
Chapter
Full-text available
Chapter 16, “FOLDOUT: A Through Foliage Surveillance System for Border Security” was previously published non-open access. It has now been changed to open access under a CC BY 4.0 license and the copyright holder updated to ‘The Author(s)’. The book has also been updated with this change.
Article
The objective of the European Union (EU) in the field of external border protection is to safeguard the freedom of movement within the Schengen area, and to ensure efficient monitoring of people who cross EU's external borders. To achieve an effective and efficient border management, there is a need for applying enhanced technologies and methods th...
Article
In this work, we present a Fusion and Tracking system developed within the EU project FOLDOUT, aimed at facilitating border guards' work by fusing separate sensor information and presenting automatic tracking of objects detected in the surveillance area. The focus of FOLDOUT is on through-foliage detection in the inner and outermost regions of the EU....
Article
The objective of the European Union (EU) in the field of external border protection is to safeguard the freedom of movement within the Schengen area, and to ensure efficient monitoring of people who cross EU's external borders. To achieve an effective and efficient border management, there is a need for applying enhanced technologies and methods th...
Chapter
Full-text available
Improved methods for border surveillance are necessary to ensure an effective and efficient EU border management. In the border control context, as defined by the Schengen Border Code, border surveillance is defined as “the surveillance of borders between border crossing points and the surveillance of border crossing points outside the fixed openin...
Article
Full-text available
Pervasive and useR fOcused biomeTrics bordEr projeCT (PROTECT) is an EU project funded by the Horizon 2020 Research and Innovation Programme. The main aim of PROTECT was to build an advanced biometric-based person identification system that works robustly across a range of border crossing types and that has strong user-centric features. This work p...
Conference Paper
In the EU FP7 project IPATCH, we are researching components for a maritime piracy early detection and avoidance system for deployment on merchant vessels. The system combines information from on-board sensors with intelligence from external sources in order to give early warnings about piracy threats. In this paper we present the ongoing work with...
Article
This paper addresses the issue of activity understanding from video and its semantics-rich description. A novel approach is presented where activities are characterised and analysed at different resolutions. Semantic information is delivered according to the resolution at which the activity is observed. Furthermore, the multiresolution activity cha...
Conference Paper
This paper describes the dataset and vision challenges that form part of the PETS 2014 workshop. The datasets are multisensor sequences containing different activities around a parked vehicle in a parking lot. The dataset scenarios were filmed from multiple cameras mounted on the vehicle itself and involve multiple actors. In the PETS2014 workshop, 22...
Conference Paper
In this paper we propose an innovative approach for behaviour recognition, from a multicamera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering employing soft computing techniques. Then, we introduce a higher-level module able to translate fused tracks into semantic inf...
Article
We present a method for the recognition of complex actions. Our method combines automatic learning of simple actions and manual definition of complex actions in a single grammar. Contrary to the general trend in complex action recognition, that consists in dividing recognition into two stages, our method performs recognition of simple and complex a...
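As an illustration of layering manually defined complex actions over automatically recognised simple ones, here is a minimal sketch in which complex actions are regular-expression patterns over a stream of simple-action labels; the symbols and patterns are hypothetical and only stand in for the grammar formalism described in the paper.

```python
# Minimal sketch: recognise complex actions as patterns over a stream of
# simple-action labels. Simple actions are assumed to be already recognised
# per time step; the regex-style "grammar" is an illustrative stand-in.
import re

# Manually defined complex actions over simple-action symbols
# (w = walk, s = stop, p = pick_up) -- symbols are hypothetical.
COMPLEX_ACTIONS = {
    "loitering": r"(ws+){2,}",        # repeated walk-then-stop cycles
    "collect_object": r"w+s?p",       # approach, optional pause, pick up
}

def recognise_complex(simple_action_stream):
    """Map a sequence of simple-action labels to matched complex actions."""
    symbols = {"walk": "w", "stop": "s", "pick_up": "p"}
    encoded = "".join(symbols.get(a, "?") for a in simple_action_stream)
    return [name for name, pattern in COMPLEX_ACTIONS.items()
            if re.search(pattern, encoded)]

stream = ["walk", "walk", "stop", "walk", "stop", "pick_up"]
print(recognise_complex(stream))      # ['loitering', 'collect_object']
```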
Conference Paper
Full-text available
In this paper we present a set of activity recognition and localization algorithms that together assemble a large amount of information about activities on a parking lot. The aim is to detect and recognize events that may pose a threat to truck drivers and trucks. The algorithms perform zone-based activity learning, individual action recognition an...
Article
Chapter outline: Introduction · State of the Art · Pre-Processing of the Data · Activity Analysis and Automatic Classification · Results and Evaluations · Conclusion · Bibliography
Conference Paper
Full-text available
The present work introduces a new method for activity extraction from video. To achieve this, we focus on the modelling of context by developing an algorithm that automatically learns the main activity zones of the observed scene by taking as input the trajectories of detected mobiles. Automatically learning the context of the scene (activity zones...
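A minimal sketch of the underlying idea, learning spatial activity zones from the points of detected trajectories; the clustering algorithm (DBSCAN) and its parameters below are illustrative choices, not the learning algorithm described in the paper.

```python
# Minimal sketch: learn coarse activity zones from the points of detected
# trajectories. Trajectories are assumed to be lists of (x, y) positions;
# DBSCAN and its parameters are illustrative, not the paper's method.
import numpy as np
from sklearn.cluster import DBSCAN

def learn_activity_zones(trajectories, eps=25.0, min_samples=10):
    """Cluster all trajectory points into spatial zones (centroid + spread)."""
    points = np.vstack([np.asarray(t, dtype=float) for t in trajectories])
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    zones = {}
    for label in set(labels):
        if label == -1:                    # DBSCAN noise points
            continue
        zone_points = points[labels == label]
        zones[int(label)] = {"centroid": zone_points.mean(axis=0),
                             "extent": zone_points.std(axis=0)}
    return zones

# Two synthetic trajectories crossing different parts of the scene
trajs = [[(10 + i, 12 + 0.5 * i) for i in range(50)],
         [(200 - i, 180) for i in range(50)]]
print(learn_activity_zones(trajs))
```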
Article
Full-text available
In this work we present a system to extract, in an unsupervised manner, the main activities that can be observed by a camera monitoring a scene over the long term, with the ultimate aim to discover abnormal events. To allow for semantically interpretable results, the activities are characterised by referring them to contextual elements of the obse...
Conference Paper
Full-text available
In this work we present a novel approach for activity extraction and knowledge discovery from video employing fuzzy relations. Spatial and temporal properties from detected mobile objects are modeled with fuzzy relations. These can then be aggregated employing typical soft-computing algebra. A clustering algorithm based on the transitive closure ca...
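A minimal sketch of the core mechanism, clustering by alpha-cutting the max-min transitive closure of a fuzzy similarity relation; the similarity relation used here is an illustrative spatial-closeness measure, not the spatial and temporal relations defined in the paper.

```python
# Minimal sketch: cluster objects via the max-min transitive closure of a
# fuzzy similarity relation, then cut the closure at a threshold (alpha-cut).
# The similarity definition below is illustrative, not the paper's relations.
import numpy as np

def transitive_closure(R, max_iter=100):
    """Max-min transitive closure of a fuzzy relation R (square matrix in [0, 1])."""
    T = R.copy()
    for _ in range(max_iter):
        # max-min composition: (T o T)[i, k] = max_j min(T[i, j], T[j, k])
        composed = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        T_new = np.maximum(T, composed)
        if np.allclose(T_new, T):
            break
        T = T_new
    return T

def alpha_cut_clusters(T, alpha=0.7):
    """Group indices whose closure similarity exceeds alpha (equivalence classes)."""
    clusters, assigned = [], set()
    for i in range(len(T)):
        if i in assigned:
            continue
        members = [j for j in range(len(T)) if T[i, j] >= alpha]
        clusters.append(members)
        assigned.update(members)
    return clusters

# Illustrative spatial "closeness" relation between 4 detected mobiles
positions = np.array([[0, 0], [1, 1], [10, 10], [10.5, 9.5]])
d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
R = np.exp(-d / 5.0)                       # fuzzy relation with values in [0, 1]
print(alpha_cut_clusters(transitive_closure(R), alpha=0.7))   # [[0, 1], [2, 3]]
```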
Article
Full-text available
During steady muscle contractions, the human sensorimotor cortex generates oscillations in the beta-frequency range (15-30 Hz) that are coherent with the activity of contralateral spinal motoneurons. This corticospinal coherence is thought to favor stationary motor states, but its mode of operation remains elusive. We hypothesized that corticospina...
Article
Full-text available
Scene understanding corresponds to the real time process of perceiving, analysing and elaborating an interpretation of a 3D dynamic scene observed through a network of cameras. The whole challenge consists in managing this huge amount of information and in structuring all the knowledge. On-line Clustering is an efficient manner to process such huge...
Article
Full-text available
The present work presents a new method for activity extraction and reporting from video based on the aggregation of fuzzy relations. Trajectory clustering is first employed mainly to discover the points of entry and exit of mobiles appearing in the scene. In a second step, proximity relations between resulting clusters of detected mobiles and conte...
Conference Paper
Full-text available
The present work presents a novel approach for activity extraction and knowledge discovery from video. Spatial and temporal properties from detected mobile objects are modeled employing fuzzy relations. These can then be aggregated employing typical soft-computing algebra. A clustering algorithm based on the transitive closure calculation of the fu...
Article
Full-text available
In this paper, we review the recently finished CARETAKER project outcomes from a system point of view. The IST FP6-027231 CARETAKER project aimed at studying, developing and assessing multimedia knowledge-based content analysis, knowledge extraction components, and metadata management sub-systems in the context of automated situation awareness and...
Article
This work aims at recognizing activities from large video datasets, using the object trajectories as the activity descriptors. We make use of a compact structure based on 6 features to represent trajectories. This structure allows us to apply standard techniques for unsupervised clustering. We present a method to optimize trajectory clustering by...
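Since the excerpt does not list the six trajectory features, the sketch below uses illustrative placeholders (start point, end point, path length, duration) to show how such a compact descriptor can feed standard unsupervised clustering.

```python
# Minimal sketch: a compact 6-feature trajectory descriptor feeding standard
# unsupervised clustering. The features below are illustrative placeholders,
# not the descriptor used in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def trajectory_features(points, fps=25.0):
    """6-D descriptor: start x/y, end x/y, path length, duration in seconds."""
    p = np.asarray(points, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1))
    duration = len(p) / fps
    return np.array([p[0, 0], p[0, 1], p[-1, 0], p[-1, 1], path_length, duration])

def cluster_trajectories(trajectories, n_clusters=3):
    X = np.vstack([trajectory_features(t) for t in trajectories])
    X = StandardScaler().fit_transform(X)      # put features on comparable scales
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)

# Three synthetic trajectories with different shapes
trajs = [[(i, 0) for i in range(40)],
         [(0, i) for i in range(40)],
         [(i, i) for i in range(40)]]
print(cluster_trajectories(trajs, n_clusters=3))
```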
Article
Full-text available
Extracting the hidden and useful knowledge embedded within video sequences and thereby discovering relations between the various elements to help an efficient decision-making process is a challenging task. The task of knowledge discovery and information analysis is possible because of recent advancements in object detection and tracking. The author...
Article
Recently, we studied corticomuscular coherence (CMC) in a visuomotor task and showed for the first time gamma-range (30-45 Hz) CMC during isometric compensation of a periodically modulated dynamic force. We speculated that for the control of such forces, the sensorimotor system resonates at gamma-range frequencies to rapidly integrate the visual an...
Article
Full-text available
The exploration of large video data is a task which is now possible because of the advances made on object detection and tracking. Data mining techniques such as clustering are typically employed. Such techniques have mainly been applied for segmentation/indexation of video but knowledge extraction of the activity contained in the video has been on...
Article
Full-text available
The management and extraction of structured knowledge from large video recordings is at the core of urban/environmental planning and resource optimization. We have addressed this issue for the networks of cameras deployed in two underground systems in Italy. In this paper we show how meaningful events are detected directly from the streams of video. Late...
Article
Although corticomuscular synchronization in the beta range (15-30 Hz) was shown to occur during weak steady-state contractions, an examination of low-level forces around 10% of the maximum voluntary contraction (MVC) is still missing. We addressed this question by investigating coherence between electroencephalogram (EEG) and electromyogram (EMG) a...
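For readers unfamiliar with corticomuscular coherence, the sketch below computes magnitude-squared EEG-EMG coherence on synthetic signals and summarises it over the beta band (15-30 Hz); the sampling rate, window length, and the shared 22 Hz component are illustrative assumptions, not the study's recording or analysis pipeline.

```python
# Minimal sketch: magnitude-squared coherence between one EEG channel and one
# EMG channel on synthetic data, summarised over the beta band (15-30 Hz).
# Parameters are illustrative; in practice the EMG is often rectified first.
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)               # 60 s of data
rng = np.random.default_rng(0)

shared = np.sin(2 * np.pi * 22 * t)        # common beta-range drive at 22 Hz
eeg = 0.5 * shared + rng.standard_normal(t.size)
emg = 0.3 * shared + rng.standard_normal(t.size)

f, cxy = coherence(eeg, emg, fs=fs, nperseg=1024)
beta = (f >= 15) & (f <= 30)
peak_idx = np.flatnonzero(beta)[np.argmax(cxy[beta])]
print(f"peak beta-band coherence {cxy[peak_idx]:.2f} at {f[peak_idx]:.1f} Hz")
```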
Article
Full-text available
The exploration of large video data is a task which is now possible because of the advances made on object detection and tracking. Data mining techniques such as clustering are typically employed. Such techniques have mainly been applied for segmentation/indexation of video but knowledge extraction on the activity contained in the video has been on...
Article
The steady-state motor output, occurring during static force, is characterized by synchronization between oscillatory cortical motor and muscle activity confined to the beta frequency range (15-30 Hz). The functional significance of this beta-range coherence remains unclear. We hypothesized that if the beta-range coherence had a functional role, it...
Article
Full-text available
Most video applications fail to capture in an efficient knowledge representation model interactions between subjects themselves and interactions between subjects and contextual objects of the observed scene. In this paper we propose a knowledge modelling format which allows efficient knowledge representation. Furthermore, we show how advanced algor...
Article
The beta-range synchronization between cortical motor and muscular activity as revealed by EEG/MEG-EMG coherence has been extensively investigated for steady-state motor output. However, there is a lack of information on the modulation of the corticomuscular coherence in conjunction with dynamic force output. We addressed this question comparing th...
Article
Full-text available
Over the last few years much research has been devoted to investigating the synchronization between cortical motor and muscular activity as measured by EEG/MEG-EMG coherence. The main focus so far has been on corticomuscular coherence (CMC) during static force condition, for which coherence in beta-range has been described. In contrast, we showed i...
Article
Little is known about the influence of the afferent peripheral feedback on the sensorimotor cortex activation. To answer this open question we investigated the alpha and beta band task-related spectral power decreases (TRPow) in the deafferented patient G.L. and compared the results to those of six healthy subjects. The patient has been deafferente...
Article
This paper presents a novel method to minimise the over-segmentation that inherently results after applying a watershed algorithm. The proposed technique characterises each of the segmented regions and then employs the composition of fuzzy relations to group together similar regions.
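A minimal sketch of the two-step idea, characterising watershed regions and grouping similar ones; a simple intensity-tolerance merge stands in here for the composition of fuzzy relations described in the paper, and the threshold and marker rule are illustrative choices.

```python
# Minimal sketch: reduce watershed over-segmentation by characterising each
# region (here by its mean intensity) and merging regions with similar
# descriptions. The tolerance test is a stand-in for the paper's fuzzy
# relation composition; threshold and marker rule are illustrative.
import numpy as np
from scipy import ndimage as ndi
from skimage import data, filters, measure, segmentation

image = data.coins()
gradient = filters.sobel(image)

# Deliberately permissive markers -> an over-segmenting watershed
markers, _ = ndi.label(gradient < 0.05)
labels = segmentation.watershed(gradient, markers)

# Characterise every watershed region by its mean grey level
props = measure.regionprops(labels, intensity_image=image)

# Greedy merge: regions whose mean intensities lie within `tolerance`
# of an existing group's representative share that group's id
tolerance = 20.0
group_means, new_label_of = [], {}
for p in props:
    for gid, gmean in enumerate(group_means):
        if abs(p.intensity_mean - gmean) < tolerance:
            new_label_of[p.label] = gid + 1
            break
    else:
        group_means.append(p.intensity_mean)
        new_label_of[p.label] = len(group_means)

merged = np.vectorize(lambda l: new_label_of.get(int(l), 0))(labels)
print(f"{labels.max()} watershed regions merged into {len(group_means)} groups")
```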
Article
Intelligent image analysis is becoming increasingly important in biological and medical imaging applications. We present here an adaptable and intelligent image analysis system based on a combination of neural networks and fuzzy logic. The system has been applied successfully to diagnostic applications such as recognition of the left ventricle in b...
Conference Paper
The authors have developed an algorithm to extract ventricular contours in gated single photon emission computed tomography (G-SPECT) images of the blood pool. In this kind of image, the authors have to deal mainly with three problems. First of all, there is a superposition of nuclear emission sources within the epicardium, making difficult the separat...

Questions

Questions (5)
Question
The International BMTT-PETS 2017 workshop on tracking and surveillance (in conjunction with CVPR 2017) has a new extended submission deadline: April 12th, 2017.
The BMTT Challenge (Benchmarking of Multi-Target Tracking) and PETS (Performance Evaluation of Tracking and Surveillance) have joined to organise the
*** First BMTT-PETS workshop on tracking and surveillance ***
in conjunction with CVPR 2017.
Due to numerous requests, we have extended the Call for Papers deadline for the IEEE International BMTT-PETS 2017 workshop on tracking and surveillance to April 12th, 2017.
We have five exciting challenges for the first edition of the BMTT-PETS workshop:
Challenge 1: Detection and tracking in low-density scenarios
Challenge 2: Detection in crowded scenarios
Challenge 3: Tracking in crowded scenarios
Challenge 4: Atomic event detection
Challenge 5: Complex threat event detection
We invite you to visit our webpage https://motchallenge.net/workshops/bmtt-pets2017/
Question
The International BMTT-PETS 2017 workshop on tracking and surveillance has an extended deadline: April 5th, 2017.
Remember, the author instructions and submission guidelines can be found here:
 
Question
The BMTT Challenge (Benchmarking of Multi-Target Tracking) and PETS (Performance Evaluation of Tracking and Surveillance) have joined to organise the
*** First BMTT-PETS workshop on tracking and surveillance ***
in conjunction with CVPR 2017.
The idea behind PETS 2017 is to continue the evaluation theme of on-board surveillance systems for protection of mobile critical assets as set in PETS 2016. From the BMTT 2017 side, we want to shift our attention to detections and their interaction with tracking.
We are looking forward to welcoming researchers and industry affiliates in computer vision, machine learning, image analysis and related fields, to present and discuss their work. A single-track program with keynote talks, oral and poster presentations shall provide ample opportunities for scientific exchange and discussion.
BMTT-PETS 2017 invites submissions of high-quality research results as full papers.
* Important Dates: *
Paper submission deadline: March 24th, 2017
Notification of acceptance: April 25th, 2017
Camera-ready: May 1st, 2017
Workshop date: July 26th, 2017
There will be 5 challenges, 3 on object detection/tracking and 2 on surveillance/event detection. We encourage authors to submit their results to one or more of the challenges. For more details, please visit the website for each of the challenges.
Question
The International Workshop on Performance Evaluation of Tracking and Surveillance (PETS 2016) has an extended deadline: March 29, 2016.
The goal of the PETS workshop has been to foster the emergence of computer vision technologies for detection and tracking by providing evaluation datasets and metrics that allow an accurate assessment and comparison of such methodologies.
PETS 2016 addresses the application of on-board multi-sensor surveillance for the protection of mobile critical assets. Such assets (including trucks, trains, and shipping vessels) could be considered targets for criminals, activists or even terrorists. The sensors (visible and thermal cameras) are mounted on the asset itself and surveillance is performed around the asset. Two datasets are provided: the first is a land-based dataset, the second a maritime dataset.
Visit www.pets2016.net for all details.
Question
Let me kindly inform you that there is an open call for papers for the International Workshop PETS 2014(www.pets2014.net). The 2014 International Workshop on Performance Evaluation of Tracking and Surveillance (PETS) continues the series of highly successful PETS workshops held for over ten years. The goal of the PETS workshop has been to foster the emergence of computer vision technologies for detection and tracking by providing evaluation datasets and metrics that allow an accurate assessment and comparison of such methodologies.
PETS 2014 is sponsored by the EU project ARENA, which is making available for this workshop the 'ARENA Dataset'. The dataset comprises a series of multi-camera video recordings whose main subject is the detection and understanding of human behaviour around a parked vehicle, with a focus on discriminating between normal behaviour, abnormal/rare behaviour, and real threats. The main objective is to detect and understand the different behaviours from four visual (RGB) cameras mounted on the vehicle itself.
Detailed information can be found in the workshop website www.pets2014.net
