Journal of Mathematical Imaging and Vision (2018) 60:1306–1323
https://doi.org/10.1007/s10851-018-0814-0
Rate-Invariant Analysis of Covariance Trajectories
Zhengwu Zhang1·Jingyong Su2·Eric Klassen3·Huiling Le4·Anuj Srivastava5
Received: 10 August 2016 / Accepted: 31 March 2018 / Published online: 24 April 2018
© Springer Science+Business Media, LLC, part of Springer Nature 2018
Abstract
Statistical analysis of dynamic systems, such as videos and dynamic functional connectivity, is often translated into a problem of analyzing trajectories of relevant features, particularly covariance matrices. As an example, in video-based action recognition, a natural mathematical representation of activity videos is as parameterized trajectories on the set of symmetric, positive-definite matrices (SPDMs). The variable execution rates of actions, implying arbitrary parameterizations of trajectories, complicate their analysis. To handle this challenge, we represent covariance trajectories using transported square-root vector fields (TSRVFs), constructed by parallel translating scaled-velocity vectors of trajectories to their starting points. The space of such representations forms a vector bundle on the SPDM manifold. Using a natural Riemannian metric on this vector bundle, we approximate geodesic paths and geodesic distances between trajectories in the quotient space of this vector bundle. This metric is invariant to the action of the reparameterization group and leads to a rate-invariant analysis of trajectories. In the process, we remove the parameterization variability and temporally register trajectories. We demonstrate this framework in multiple contexts, using both generative statistical models and discriminative data analysis. The latter is illustrated using several applications involving video-based action recognition and dynamic functional connectivity analysis.
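The TSRVF construction described in the abstract can be illustrated with a small numerical sketch for discretely sampled SPD trajectories. This is a minimal, illustrative implementation (not the authors' code), assuming the affine-invariant Riemannian metric on SPDMs; the function names (`spd_log`, `spd_transport`, `spd_norm`, `tsrvf`) and the finite-difference approximation of the velocity are our own choices.

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def spd_log(P, Q):
    """Riemannian log map at P under the affine-invariant metric."""
    Ph = sqrtm(P)                        # P^{1/2}
    Phi = np.linalg.inv(Ph)              # P^{-1/2}
    return Ph @ logm(Phi @ Q @ Phi) @ Ph

def spd_transport(v, P, Q):
    """Parallel transport of tangent vector v from T_P to T_Q,
    v -> E v E^T with E = (Q P^{-1})^{1/2}."""
    Ph = sqrtm(P)
    Phi = np.linalg.inv(Ph)
    E = Ph @ sqrtm(Phi @ Q @ Phi) @ Phi
    return E @ v @ E.T

def spd_norm(v, P):
    """Norm of v in T_P: Frobenius norm of P^{-1/2} v P^{-1/2}."""
    Phi = np.linalg.inv(sqrtm(P))
    return np.linalg.norm(Phi @ v @ Phi, 'fro')

def tsrvf(traj):
    """Transported square-root vector field of an SPD trajectory:
    scale each velocity by the square root of its norm, then
    parallel translate it to the starting point traj[0]."""
    n = len(traj)
    h = []
    for t in range(n - 1):
        v = spd_log(traj[t], traj[t + 1]) * (n - 1)  # finite-difference velocity
        nv = spd_norm(v, traj[t])
        q = v / np.sqrt(nv) if nv > 1e-12 else np.zeros_like(v)
        h.append(spd_transport(q, traj[t], traj[0]))
    return h
```

Because parallel transport under this metric is an isometry, the norm of each scaled velocity is preserved when it is translated to the starting point, so the TSRVF retains the speed profile of the trajectory while placing all vectors in a common tangent space.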
Keywords SPDM Riemannian structure · SPDM parallel transport · Invariant metrics · Covariance trajectories · Vector bundles · Rate-invariant classification
Electronic supplementary material The online version of this article (https://doi.org/10.1007/s10851-018-0814-0) contains supplementary material, which is available to authorized users.
Zhengwu Zhang (corresponding author)
zhengwu_zhang@urmc.rochester.edu
Jingyong Su
jingyong.su@ttu.edu
Eric Klassen
klassen@math.fsu.edu
Huiling Le
huiling.le@nottingham.ac.uk
Anuj Srivastava
anuj@stat.fsu.edu
1 Department of Biostatistics and Computational Biology, University of Rochester, Rochester, NY, USA
2 Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX, USA
3 Department of Mathematics, Florida State University, Tallahassee, FL, USA
4 School of Mathematical Sciences, University of Nottingham, Nottingham, UK
5 Department of Statistics, Florida State University, Tallahassee, FL, USA
1 Introduction

The problem of studying dynamical systems using image sequences (such as videos) is both important and challenging. It has applications in many areas, including video surveillance, lip reading, pedestrian tracking, hand gesture recognition, human–machine interfaces, brain functional connectivity analysis, and medical diagnosis. Since the dimension of video data is generally very high, analyses are often performed by extracting certain low-dimensional features of interest—geometric, motion, and colorimetric features, etc.—from each frame and then forming temporal sequences of these features for full videos. Consequently, analysis of videos gets replaced by analysis of longitudinal observations in a certain feature space. (Some papers, e.g., [13,40], discard temporal structure by pooling all the features together, but that may represent a severe loss of information.) Since many features are naturally constrained to lie on nonlinear manifolds, the corresponding representations form parameterized trajectories on these manifolds. Examples of these manifolds
... More recently, the Fisher-Rao Riemannian metric has been used to separate the phase and amplitude parts of 1D functional data on [0, 1] in [28]. Alignment of functional data on the manifold, i.e., g : [0, 1] → M where M is a nonlinear manifold, has been investigated in [29]- [31]. However, the ConCon function f is defined on a product manifold domain Ω × Ω, which is significantly different from the previous works. ...
Preprint
Brain networks are typically represented by adjacency matrices, where each node corresponds to a brain region. In traditional brain network analysis, nodes are assumed to be matched across individuals, but the methods used for node matching often overlook the underlying connectivity information. This oversight can result in inaccurate node alignment, leading to inflated edge variability and reduced statistical power in downstream connectivity analyses. To overcome this challenge, we propose a novel framework for registering high-resolution continuous connectivity (ConCon), defined as a continuous function on a product manifold space, specifically the cortical surface, capturing structural connectivity between all pairs of cortical points. Leveraging ConCon, we formulate an optimal diffeomorphism problem to align both connectivity profiles and cortical surfaces simultaneously. We introduce an efficient algorithm to solve this problem and validate our approach using data from the Human Connectome Project (HCP). Results demonstrate that our method substantially improves the accuracy and robustness of connectome-based analyses compared to existing techniques.
... For example, neuroscientists study dynamic functional connectivity, where one records longitudinal samples of time-varying covariances at increasingly high resolutions [10], whereas collections of multiple functional time series will generate multiple spectral density operators [33]. Furthermore, data in the form of time-varying covariance matrices is increasingly common, and has recently attracted significant interest from the statistical community [10,18,48,35] mostly due to a growing interest in studying the brain functional connectome generated from functional MRI [45]. ...
Preprint
Full-text available
We develop a statistical framework for conducting inference on collections of time-varying covariance operators (covariance flows) over a general, possibly infinite-dimensional, Hilbert space. We model the intrinsically non-linear structure of covariances by means of the Bures-Wasserstein metric geometry. We make use of the Riemannian-like structure induced by this metric to define a notion of mean and covariance of a random flow, and develop an associated Karhunen-Loève expansion. We then treat the problem of estimation and construction of functional principal components from a finite collection of covariance flows, observed fully or irregularly. Our theoretical results are motivated by modern problems in functional data analysis, where one observes operator-valued random processes – for instance when analysing dynamic functional connectivity and fMRI data, or when analysing multiple functional time series in the frequency domain. Nevertheless, our framework is also novel in finite dimensions (the matrix case), and we demonstrate what simplifications can be afforded then. We illustrate our methodology by means of simulations and data analyses.
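In the finite-dimensional (matrix) case discussed above, the Bures-Wasserstein distance has a simple closed form, d(A, B)² = tr(A) + tr(B) − 2 tr((A^{1/2} B A^{1/2})^{1/2}). A minimal sketch (the function name is ours; illustrative only):

```python
import numpy as np
from scipy.linalg import sqrtm

def bures_wasserstein(A, B):
    """Bures-Wasserstein distance between SPD matrices A and B:
    d(A, B)^2 = tr(A) + tr(B) - 2 tr((A^{1/2} B A^{1/2})^{1/2})."""
    Ah = sqrtm(A)
    cross = sqrtm(Ah @ B @ Ah)
    d2 = np.trace(A) + np.trace(B) - 2.0 * np.trace(cross)
    # clip tiny negative values caused by floating-point error
    return np.sqrt(max(np.real(d2), 0.0))
```

For commuting (e.g., diagonal) matrices this reduces to the Euclidean distance between matrix square roots; for instance, with 2x2 matrices A = 4I and B = I the distance is sqrt(2).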
... Another metric that also provides a Riemannian symmetric structure on Sym+(n) was used in [29,36]. It was introduced directly by the quotient structure detailed in Proposition 3.1 but with the submersion π : A ∈ GL+(n) → √(AAᵀ) ∈ Sym+(n) based on the polar decomposition of A (and without the coefficient 4). ...
Article
Full-text available
Symmetric Positive Definite (SPD) matrices are ubiquitous in data analysis under the form of covariance matrices or correlation matrices. Several O(n)-invariant Riemannian metrics were defined on the SPD cone, in particular the kernel metrics introduced by Hiai and Petz. The class of kernel metrics interpolates between many classical O(n)-invariant metrics and it satisfies key results of stability and completeness. However, it does not contain all the classical O(n)-invariant metrics. Therefore in this work, we investigate super-classes of kernel metrics and we study which key results remain true. We also introduce an additional key result called cometric-stability, a crucial property to implement geodesics with a Hamiltonian formulation. Our method to build intermediate embedded classes between O(n)-invariant metrics and kernel metrics is to give a characterization of the whole class of O(n)-invariant metrics on SPD matrices and to specify requirements on metrics one by one until we reach kernel metrics. As a secondary contribution, we synthesize the literature on the main O(n)-invariant metrics, we provide the complete formula of the sectional curvature of the affine-invariant metric and the formula of the geodesic parallel transport between commuting matrices for the Bures-Wasserstein metric.
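A concrete reference point within the class of O(n)-invariant metrics discussed above is the affine-invariant metric, whose geodesic distance has a compact closed form. A minimal sketch (the function name is ours), which also demonstrates the metric's invariance under congruence by any invertible matrix:

```python
import numpy as np
from scipy.linalg import sqrtm, logm

def affine_invariant_dist(P, Q):
    """Affine-invariant geodesic distance on SPD matrices:
    d(P, Q) = || logm(P^{-1/2} Q P^{-1/2}) ||_F.
    Invariant under P -> G P G^T, Q -> G Q G^T for any invertible G."""
    Phi = np.linalg.inv(sqrtm(P))
    return np.linalg.norm(logm(Phi @ Q @ Phi), 'fro')
```

For example, the distance from the 2x2 identity I to e²·I is ||2I||_F = 2·sqrt(2).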
Chapter
The square root velocity transformation is crucial for efficiently employing the elastic approach in functional and shape data analysis of curves. We study fundamental geometric properties of curves under this transformation. Moreover, utilizing natural geometric constructions, we employ the approach for intrinsic comparison within several classes of surfaces and augmented curves, which arise in real-world applications such as tubes, ruled surfaces, spherical strips, protein molecules and hurricane tracks.
Article
Full-text available
Functional data analysis (FDA) is a fast-growing area of research and development in statistics. While most FDA literature imposes the classical L² Hilbert structure on function spaces, there is an emergent need for a different, shape-based approach for analyzing functional data. This paper reviews and develops fundamental geometrical concepts that help connect traditionally diverse fields of shape and functional analyses. It showcases that focusing on shapes is often more appropriate when structural features (number of peaks and valleys and their heights) carry salient information in data. It recaps recent mathematical representations and associated procedures for comparing, summarizing, and testing the shapes of functions. Specifically, it discusses three tasks: shape fitting, shape fPCA, and shape regression models. The latter refers to the models that separate the shapes of functions from their phases and use them individually in regression analysis. The ensuing results provide better interpretations and tend to preserve geometric structures. The paper also discusses an extension where the functions are not real-valued but manifold-valued. The article presents several examples of this shape-centric functional data analysis using simulated and real data.
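The phase-amplitude separation underlying this shape-based approach rests on the square-root velocity function (SRVF), q(t) = f'(t)/√|f'(t)|, under which the elastic Fisher-Rao metric becomes the standard L² metric. A minimal numerical sketch for sampled 1D functions (our own illustrative code):

```python
import numpy as np

def srvf(f, t):
    """Square-root velocity function of samples f at times t:
    q = f' / sqrt(|f'|), with q = 0 wherever f' = 0."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))
```

Note that the squared L² norm of q equals the total variation of f, so comparisons of SRVFs (after optimizing over warpings) compare the shapes of functions rather than their parameterizations.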
Chapter
This chapter reviews some past and recent developments in shape comparison and analysis of curves based on the computation of intrinsic Riemannian metrics on the space of curve modulo shape-preserving transformations. We summarize the general construction and theoretical properties of quotient elastic metrics for Euclidean as well as non-Euclidean curves before considering the special case of the square root velocity metric for which the expression of the resulting distance simplifies through a particular transformation. We then examine the different numerical approaches that have been proposed to estimate such distances in practice and in particular to quotient out curve reparametrization in the resulting minimization problems.
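In practice, the minimization over reparameterizations surveyed in this chapter is typically solved by dynamic programming on a discrete grid. The following is a deliberately simplified, dynamic-time-warping-style sketch of that idea (our own stand-in, not the elastic-metric DP of the references), computing the minimal cost of a monotone matching between two discretized SRVFs:

```python
import numpy as np

def dp_align_cost(q1, q2):
    """Minimal cost of a monotone matching between two sampled
    sequences q1, q2, via dynamic programming over step moves
    (i-1,j), (i,j-1), (i-1,j-1). A simplified stand-in for
    elastic alignment over reparameterizations."""
    n, m = len(q1), len(q2)
    D = np.full((n, m), np.inf)
    D[0, 0] = np.sum((q1[0] - q2[0]) ** 2)
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            c = np.sum((q1[i] - q2[j]) ** 2)
            prev = min(D[i - 1, j] if i > 0 else np.inf,
                       D[i, j - 1] if j > 0 else np.inf,
                       D[i - 1, j - 1] if i > 0 and j > 0 else np.inf)
            D[i, j] = c + prev
    return D[-1, -1]
```

Since the purely diagonal path is one admissible matching, the optimal aligned cost never exceeds the unaligned sum of squared differences; warped copies of the same function therefore match at far lower cost after alignment.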
Conference Paper
Full-text available
This paper focuses on the study of open curves in a manifold M, and its aim is to define a reparameterization invariant distance on the space of such paths. We use the square root velocity function (SRVF) introduced by Srivastava et al. in [11] to define a reparameterization invariant metric on the space of immersions M=Imm([0,1],M)\mathcal {M}=\text {Imm}([0,1],M) by pullback of a metric on the tangent bundle TM\text {T}\mathcal {M} derived from the Sasaki metric. We observe that such a natural choice of Riemannian metric on TM\text {T}\mathcal {M} induces a first-order Sobolev metric on M\mathcal {M} with an extra term involving the origins, and leads to a distance which takes into account the distance between the origins and the distance between the image curves by the SRVF parallel transported to a same vector space, with an added curvature term. This provides a generalized theoretical SRV framework for curves lying in a general manifold M.
Book
From the reviews: "This book provides a very readable introduction to Riemannian geometry and geometric analysis. The author focuses on using analytic methods in the study of some fundamental theorems in Riemannian geometry, e.g., the Hodge theorem, the Rauch comparison theorem, the Lyusternik and Fet theorem and the existence of harmonic mappings. With the vast development of the mathematical subject of geometric analysis, the present textbook is most welcome. It is a good introduction to Riemannian geometry. The book is made more interesting by the perspectives in various sections, where the author mentions the history and development of the material and provides the reader with references." Math. Reviews. The second edition contains a new chapter on variational problems from quantum field theory, in particular the Seiberg-Witten and Ginzburg-Landau functionals. These topics are carefully and systematically developed, and the new edition contains a thorough treatment of the relevant background material, namely spin geometry and Dirac operators. The new material is based on a course "Geometry and Physics" at the University of Leipzig that was attended by graduate students, postdocs and researchers from other areas of mathematics. Much of the material is included here for the first time in a textbook, and the book will lead the reader to some of the hottest topics of contemporary mathematical research.
Conference Paper
We study the question of feature sets for robust visual object recognition, adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of Histograms of Oriented Gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.
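The core computation in the HOG descriptor described above is a magnitude-weighted orientation histogram over each spatial cell, followed by block-level contrast normalization. A minimal single-cell sketch (ours; it omits the bilinear vote interpolation and the overlapping-block normalization that the paper shows are important for best results):

```python
import numpy as np

def cell_orientation_histogram(cell, n_bins=9):
    """Unsigned-gradient orientation histogram of one image cell,
    weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(cell.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0       # unsigned orientation
    idx = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.bincount(idx.ravel(), weights=mag.ravel(), minlength=n_bins)
    return hist / (np.linalg.norm(hist) + 1e-12)
```

A vertical step edge produces purely horizontal gradients, so its histogram energy concentrates in the first (0-degree) orientation bin.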
Chapter
Action videos are multidimensional data and can be naturally represented as data tensors. While tensor computing is widely used in computer vision, the geometry of tensor space is often ignored. The aim of this paper is to demonstrate the importance of the intrinsic geometry of tensor space which yields a very discriminating structure for action recognition. We characterize data tensors as points on a product manifold and model it statistically using least squares regression. To this aim, we factorize a data tensor relating to each order of the tensor using higher order singular value decomposition (HOSVD) and then impose each factorized element on a Grassmann manifold. Furthermore, we account for underlying geometry on manifolds and formulate least squares regression as a composite function. This gives a natural extension from Euclidean space to manifolds. Consequently, classification is performed using geodesic distance on a product manifold where each factor manifold is Grassmannian. Our method exploits appearance and motion without explicitly modeling the shapes and dynamics. We assess the proposed method using three gesture databases, namely the Cambridge hand-gesture, the UMD Keck body-gesture, and the CHALEARN gesture challenge data sets. Experimental results reveal that not only does the proposed method perform well on the standard benchmark data sets, but also it generalizes well on the one-shot-learning gesture challenge. Furthermore, it is based on a simple statistical model and the intrinsic geometry of tensor space.
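The HOSVD factorization this approach relies on can be sketched in a few lines: unfold the data tensor along each mode, take the left singular vectors of each unfolding as that mode's factor matrix, and contract to obtain the core tensor. An illustrative NumPy implementation (function names are ours):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold for a target tensor shape."""
    rest = [s for k, s in enumerate(shape) if k != mode]
    return np.moveaxis(M.reshape([shape[mode]] + rest), 0, mode)

def mode_product(T, U, mode):
    """Multiply tensor T by matrix U along axis `mode`."""
    shape = list(T.shape)
    shape[mode] = U.shape[0]
    return fold(U @ unfold(T, mode), mode, shape)

def hosvd(T):
    """Higher-order SVD: factor matrices U_k and core S such that
    T = S x_1 U_1 x_2 U_2 ... x_d U_d."""
    Us = [np.linalg.svd(unfold(T, k), full_matrices=False)[0]
          for k in range(T.ndim)]
    S = T
    for k, U in enumerate(Us):
        S = mode_product(S, U.T, k)
    return S, Us
```

With full (untruncated) factor matrices, multiplying the core back by each U_k reconstructs the original tensor exactly; truncating the U_k gives the low-multilinear-rank representations used for the Grassmann-manifold analysis above.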
Conference Paper
This paper seeks to discover common change-point patterns, associated with functional connectivity (FC) in human brain, across multiple subjects. FC, represented as a covariance or a correlation matrix, relates to the similarity of fMRI responses across different brain regions, when a brain is simply resting or performing a task under an external stimulus. While the dynamical nature of FC is well accepted, this paper develops a formal statistical test for finding change-points in time series associated with FC observed over time. It represents instantaneous connectivity by a symmetric positive-definite matrix, and uses a Riemannian metric on this space to develop a graphical method for detecting change-points in a time series of such matrices. It also provides a graphical representation of estimated FC for stationary subintervals in between detected change-points. Furthermore, it uses a temporal alignment of the test statistic, viewed as a real-valued function over time, to remove temporal variability and to discover common change-point patterns across subjects, tasks, and regions. This method is illustrated using HCP database for multiple subjects and tasks.
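The change-point idea can be illustrated with a toy sketch: compute a Riemannian-type distance between successive connectivity matrices and look for peaks in the resulting statistic. Here we use the log-Euclidean distance as a simple stand-in for the paper's metric and formal test (the function names and this simplification are ours):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_dist(P, Q):
    """Log-Euclidean distance between SPD matrices."""
    return np.linalg.norm(logm(P) - logm(Q), 'fro')

def change_point_statistic(covs):
    """Distance between successive covariance matrices in a time
    series; large values suggest a change in connectivity."""
    return np.array([log_euclidean_dist(covs[t], covs[t + 1])
                     for t in range(len(covs) - 1)])
```

On a simulated sequence whose covariance switches abruptly at one time point, the statistic is near zero within the stationary subintervals and peaks exactly at the switch.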
Book
This textbook for courses on functional data analysis and shape data analysis describes how to define, compare, and mathematically represent shapes, with a focus on statistical modeling and inference. It is aimed at graduate students in statistics, engineering, applied mathematics, neuroscience, biology, bioinformatics, and other related areas. The interdisciplinary nature of the broad range of ideas covered—from introductory theory to algorithmic implementations and some statistical case studies—is meant to familiarize graduate students with an array of tools that are relevant in developing computational solutions for shape and related analyses. These tools, gleaned from geometry, algebra, statistics, and computational science, are traditionally scattered across different courses, departments, and disciplines; Functional and Shape Data Analysis offers a unified, comprehensive solution by integrating the registration problem into shape analysis, better preparing graduate students for handling future scientific challenges. Recently, a data-driven and application-oriented focus on shape analysis has been trending. This text offers a self-contained treatment of this new generation of methods in shape analysis of curves. Its main focus is shape analysis of functions and curves—in one, two, and higher dimensions—both closed and open. It develops elegant Riemannian frameworks that provide both quantification of shape differences and registration of curves at the same time. Additionally, these methods are used for statistically summarizing given curve data, performing dimension reduction, and modeling observed variability. It is recommended that the reader have a background in calculus, linear algebra, numerical analysis, and computation.
• Presents a complete and detailed exposition on statistical analysis of shapes that includes appendices, background material, and exercises, making this text a self-contained reference
• Addresses and explores the next generation of shape analysis
• Focuses on providing a working knowledge of a broad range of relevant material, foregoing in-depth technical details and elaborate mathematical explanations

Anuj Srivastava is a Professor in the Department of Statistics and a Distinguished Research Professor at Florida State University. His areas of interest include statistical analysis on nonlinear manifolds, statistical computer vision, functional data analysis, and statistical shape theory. He has been an associate editor for the Journal of Statistical Planning and Inference and several IEEE journals. He is a fellow of the International Association for Pattern Recognition (IAPR) and a senior member of the Institute of Electrical and Electronics Engineers (IEEE). Eric Klassen is a Professor in the Department of Mathematics at Florida State University. His mathematical interests include topology, geometry, and shape analysis. In his spare time, he enjoys playing the piano, riding his bike, and contra dancing.