About
135 Publications · 44,455 Reads
6,872 Citations
Introduction
Additional affiliations
May 2018 - July 2018
August 2007 - present
Publications (135)
CANDECOMP/PARAFAC (CP) approximates multiway data by a sum of rank-1 tensors. Unlike matrix decomposition, the procedure which estimates the best rank-R tensor approximation through R sequential best rank-1 approximations does not work for tensors, because the deflation does not always reduce the tensor rank. In this paper, we propose a novel defla...
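As a minimal illustration of the CP model these abstracts refer to (not the deflation algorithm itself), a rank-R tensor is a sum of R rank-1 outer products of factor-matrix columns; the shapes and random factors below are arbitrary:

```python
import numpy as np

# Hypothetical sizes and random factor matrices, for illustration only.
I, J, K, R = 4, 5, 6, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# CP model: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Deflation-style check: subtracting one rank-1 term leaves the sum of the rest.
T_minus_1 = T - np.einsum('i,j,k->ijk', A[:, 0], B[:, 0], C[:, 0])
T_rest = np.einsum('ir,jr,kr->ijk', A[:, 1:], B[:, 1:], C[:, 1:])
assert np.allclose(T_minus_1, T_rest)
```

The subtlety the abstract points at is that, unlike the matrix case, subtracting the best rank-1 approximation of a rank-R tensor does not in general yield a rank-(R-1) tensor.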
In Part 1 of the study of the tensor deflation for CANDECOMP/PARAFAC, we have shown that the rank-1 tensor deflation is applicable under some conditions. Part 2 of the study presents several initialization algorithms suitable for the algorithm proposed in Part 1. In addition, Part 2 contains an algorithm for the case when one or more factor matrice...
CANDECOMP/PARAFAC (CPD) approximates multiway data by a sum of rank-1 tensors. Our recent study has presented a method for rank-1 tensor deflation, i.e. sequential extraction of the rank-1 components. In this paper, we extend the method to the block deflation problem. When at least two factor matrices have full column rank, one can extract two rank-1 tens...
Tensor decomposition of convolutional and fully-connected layers is an effective way to reduce the number of parameters and FLOPs in neural networks. Due to memory and power consumption limitations of mobile or embedded devices, the quantization step is usually necessary when pre-trained models are deployed. A conventional post-training quantization approach appl...
"Faster, higher, stronger" is the motto of any professional athlete. Does that apply to brain dynamics as well? In our paper, we performed a series of EEG experiments on Visually Evoked Potentials and a series of cognitive tests-reaction time and visual search, with professional eSport players in Counter-Strike: Global Offensive (CS:GO) and novices...
Paper: https://www.researchgate.net/publication/370766076_Inexact_higher-order_proximal_algorithms_for_tensor_factorization
Pedestrian Attribute Recognition (PAR) deals with the problem of identifying features in a pedestrian image. It has found interesting applications in person retrieval, suspect re-identification and soft biometrics. In the past few years, several Deep Neural Networks (DNNs) have been designed to solve the task; however, the developed DNNs predominan...
This paper presents a pixel selection method for compact image representation based on superpixel segmentation and tensor completion. Our method divides the image into several regions that capture important textures or semantics and selects a representative pixel from each region to store. We experiment with different criteria for choosing the repr...
In the last decades, Matrix Factorization (MF) models and their multilinear extension, Tensor Factorization (TF) models, have been shown to be powerful tools for high-dimensional data analysis and feature extraction. Computing MFs or TFs is commonly achieved by solving a constrained optimization subproblem on each block of variables, where the su...
In this paper, we propose a new adaptive cross algorithm for computing a low tubal rank approximation of third-order tensors, which requires less memory and lower computational complexity than the truncated tensor SVD (t-SVD). This makes it applicable for decomposing large-scale tensors. We conduct numerical experiments on synthetic and real-world da...
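For context, a minimal sketch of the truncated t-SVD baseline the abstract compares against: FFT along the tube (third) mode, a truncated matrix SVD per frontal slice in the Fourier domain, then the inverse FFT. This is the standard construction, not the paper's adaptive cross algorithm.

```python
import numpy as np

def truncated_tsvd(X, r):
    """Tubal-rank-r t-SVD approximation of a third-order tensor X (n1 x n2 x n3)."""
    Xf = np.fft.fft(X, axis=2)                     # to the Fourier domain
    Yf = np.empty_like(Xf)
    for k in range(X.shape[2]):                    # truncated SVD per frontal slice
        U, s, Vh = np.linalg.svd(Xf[:, :, k], full_matrices=False)
        Yf[:, :, k] = (U[:, :r] * s[:r]) @ Vh[:r, :]
    return np.real(np.fft.ifft(Yf, axis=2))        # back to the original domain

# Usage on a synthetic tensor of tubal rank <= 2, built as a t-product G * H
# (the t-product is slice-wise matrix multiplication in the Fourier domain).
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 2, 5))
H = rng.standard_normal((2, 7, 5))
Xf = np.einsum('ijk,jlk->ilk', np.fft.fft(G, axis=2), np.fft.fft(H, axis=2))
X = np.real(np.fft.ifft(Xf, axis=2))
Y = truncated_tsvd(X, 2)
assert np.allclose(X, Y, atol=1e-8)                # rank-2 truncation is exact here
```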
In this paper, we extend the Discrete Empirical Interpolation Method (DEIM) to the third-order tensor case based on the t-product and use it to select important/significant lateral and horizontal slices/features. The proposed Tubal DEIM (TDEIM) is investigated both theoretically and numerically. The experimental results show that the TDEIM can pro...
Low-rank matrix/tensor decompositions are promising methods for reducing the inference time, computation, and memory consumption of deep neural networks (DNNs). This group of methods decomposes the pre-trained neural network weights through low-rank matrix/tensor decomposition and replaces the original layers with lightweight factorized layers. A m...
This paper proposes a general framework to use the cross tensor approximation or tensor CUR approximation for reconstructing incomplete images and videos. The new algorithms are simple and easy to implement, with low computational complexity. For the case of data tensors with 1) structural missing components or 2) a high missing rate, we propos...
We present a novel procedure for optimization based on the combination of efficient quantized tensor train representation and a generalized maximum matrix volume principle. We demonstrate the applicability of the new Tensor Train Optimizer (TTOpt) method for various tasks, ranging from minimization of multidimensional functions to reinforcement lea...
Electroencephalogram (EEG) is a noninvasive method to detect spatio-temporal electric signals in the human brain, actively used in the recent development of Brain Computer Interfaces (BCI). EEG patterns are affected by the task, but other variable factors also influence the subject's focus on the task and result in noisy EEG signals that are difficult to deciph...
A rising problem in the compression of Deep Neural Networks is how to reduce the number of parameters in convolutional kernels and the complexity of these layers by low-rank tensor approximation. Canonical polyadic tensor decomposition (CPD) and Tucker tensor decomposition (TKD) are two solutions to this problem and provide promising results. Howev...
Cross Tensor Approximation (CTA) is a generalization of Cross/skeleton matrix and CUR Matrix Approximation (CMA) and is a suitable tool for fast low-rank tensor approximation. It facilitates interpreting the underlying data tensors and decomposing/compressing tensors in such a way that their structures such as nonnegativity, smoothness, or sparsity...
Prediction of the real-time multiplayer online battle arena (MOBA) games' match outcome is one of the most important and exciting tasks in Esports analytical research. This research paper predominantly focuses on building predictive machine and deep learning models to identify the outcome of the Dota 2 MOBA game using the new method of multi-forwar...
Structured Tucker tensor decomposition models complete or incomplete multiway data sets (tensors), where the core tensor and the factor matrices can obey different constraints. The model includes block-term decomposition or canonical polyadic decomposition as special cases. We propose a very flexible optimization method for the structured Tucker de...
Big data analysis has become a crucial part of new emerging technologies such as the internet of things, cyber-physical analysis, deep learning, anomaly detection, etc. Among many other techniques, dimensionality reduction plays a key role in such analyses and facilitates feature selection and feature extraction. Randomized algorithms are efficient...
Optimal rank selection is an important issue in tensor decomposition problems, especially for Tensor Train (TT) and Tensor Ring (TR) (also known as Tensor Chain) decompositions. In this paper, a new rank selection method for TR decomposition has been proposed for automatically finding near-optimal TR ranks, which result in a lower storage cost, esp...
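For context, the classical TT-SVD with a simple relative singular-value threshold is the baseline form of automatic rank selection that more sophisticated TT/TR rank-selection methods improve on. A sketch (this is not the paper's TR method):

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Classical TT-SVD: sequential truncated SVDs produce 3-way TT cores.
    Singular values below eps * s_max are discarded (naive rank selection)."""
    dims = T.shape
    cores, r = [], 1
    M = T.reshape(r * dims[0], -1)
    for n in range(len(dims) - 1):
        U, s, Vh = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(U[:, :rank].reshape(r, dims[n], rank))
        r = rank
        M = (s[:rank, None] * Vh[:rank]).reshape(r * dims[n + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back into the full tensor."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return T.reshape([c.shape[1] for c in cores])

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 4, 5, 6))
cores = tt_svd(T)
assert np.allclose(T, tt_full(cores))   # lossless at full TT ranks
```

A TR decomposition is analogous but closes the chain into a ring, so the first and last cores also share a rank index, which makes rank selection harder.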
Most state-of-the-art deep neural networks are overparameterized and exhibit a high computational cost. A straightforward approach to this problem is to replace convolutional kernels with their low-rank tensor approximations, whereas the Canonical Polyadic tensor Decomposition is one of the most suited models. However, fitting the convolutional tenso...
Randomized algorithms are efficient techniques for big data tensor analysis. In this tutorial paper, we review and extend a variety of randomized algorithms for decomposing large-scale data tensors in Tensor Ring (TR) format. We discuss both adaptive and nonadaptive randomized algorithms for this task. Our main focus is on the random projection tec...
Decoding brain signals has gained much attention and found many applications in recent years, such as Brain Computer Interfaces, which enable communication with and control of external devices using the user's intentions. It is an emerging field with the potential of changing the world, with diverse applications from rehabilitation to human augmentation. Thi...
Modern convolutional neural networks, although they achieve great results in solving complex computer vision tasks, still cannot be effectively used in mobile and embedded devices due to the strict requirements for computational complexity, memory, and power consumption. The CNNs have to be compressed and accelerated before deployment. In order to solv...
A novel algorithm to solve the quadratic programming problem over ellipsoids is proposed. This is achieved by splitting the problem into two optimisation sub-problems: quadratic programming over a sphere, and orthogonal projection. Next, an augmented-Lagrangian algorithm is developed for this multiple-constraint optimisation. Benefiting from the fact t...
Tensor networks are linear algebraic representations of quantum many-body states based on their entanglement structure. People are exploring their applications to machine learning. Deep convolutional neural networks achieve state of the art results in computer vision and other areas. Supposedly this happens because of parameter sharing, locality, a...
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model that represents data as an ordered network of subtensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-called...
Neurons selective for faces exist in humans and monkeys. However, characteristics of face cell receptive fields are poorly understood. In this theoretical study, we explore the effects of complexity, defined as algorithmic information (Kolmogorov complexity) and logical depth, on possible ways that face cells may be organized. We use tensor decompo...
Canonical polyadic (CP) tensor decomposition is an important task in many applications. Many times, the true tensor rank is not known, or noise is present, and in such situations, different existing CP decomposition algorithms provide very different results. In this paper, we introduce a notion of sensitivity of CP decomposition and suggest to...
The canonical polyadic decomposition (CPD) is a convenient and intuitive tool for tensor factorization; however, for higher order tensors, it often exhibits high computational cost and permutation of tensor entries, and these undesirable effects grow exponentially with the tensor order. Prior compression of tensor in-hand can reduce the computation...
In CANDECOMP/PARAFAC tensor decomposition, degeneracy often occurs in some difficult scenarios, especially when the rank exceeds the tensor dimension, or when the loading components are highly collinear in several or all modes, or when CPD does not have an optimal solution. In such cases, norms of some rank-1 tensors become significantly large and...
The Canonical Polyadic decomposition (CPD) is a convenient and intuitive tool for tensor factorization; however, for higher-order tensors, it often exhibits high computational cost and permutation of tensor entries, and these undesirable effects grow exponentially with the tensor order. Prior compression of the tensor in hand can reduce the computational c...
In this article, we introduce a matrix product operator, also called tensor train matrix, representation of discrete-time multilinear state space models and develop a corresponding system identification method. Using matrix product operators allows us to store the exponential number of model coefficients with a linear storage complexity. The derive...
A novel algorithm is proposed for CANDECOMP/PARAFAC tensor decomposition to exploit best rank-1 tensor approximation. Different from the existing algorithms, our algorithm updates rank-1 tensors simultaneously in parallel. In order to achieve this, we develop new all-at-once algorithms for best rank-1 tensor approximation based on the Levenberg-Mar...
In CANDECOMP/PARAFAC tensor decomposition, degeneracy often occurs in some difficult scenarios, e.g., when the rank exceeds the tensor dimension, or when the loading components are highly collinear in several or all modes, or when CPD does not have an optimal solution. In such cases, norms of some rank-1 terms become significantly large and can...
Tensor diagonalization means transforming a given tensor to an exactly or nearly diagonal form through multiplying the tensor by non-orthogonal invertible matrices along selected dimensions of the tensor. It has a link to an approximate joint diagonalization (AJD) of a set of matrices. In this paper, we derive (1) a new algorithm for a symmetric AJ...
Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emp...
Decomposition of symmetric tensors has found numerous applications in blind source separation, blind identification, clustering, and analysis of social interactions. In this paper, we consider fourth-order symmetric tensors and their symmetric tensor decomposition. By imposing unit-length constraints on components, we recast the optimisation probl...
This paper deals with estimation of structured signals such as damped sinusoids, exponentials, polynomials, and their products from single channel data. It is shown that building tensors from this kind of data results in tensors with hidden block structure which can be recovered through the tensor diagonalization. The tensor diagonalization means m...
This monograph builds on Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions by discussing tensor network models for super-compressed higher-order representation of data/parameters and cost functions, together with an outline of their applications in machine learning and data analytics. A...
Tensor train (TT) is one of the modern tensor decomposition models for low-rank approximation of high-order tensors. For nonnegative multiway array data analysis, we propose a nonnegative TT (NTT) decomposition algorithm for the NTT model and a hybrid model called the NTT-Tucker model. By employing the hierarchical alternating least squares approac...
Decompositions of tensors into factor matrices, which interact through a core tensor, have found numerous applications in signal processing and machine learning. A more general tensor model which represents data as an ordered network of sub-tensors of order-2 or order-3 has, so far, not been widely considered in these fields, although this so-calle...
Canonical polyadic decomposition (CPD), also known as PARAFAC, is a representation of a given tensor as a sum of rank-one tensors. The traditional method for accomplishing CPD is the alternating least squares (ALS) algorithm. This algorithm is easy to implement with very low computational complexity per iteration. A disadvantage is that in difficult sc...
Machine learning and data mining algorithms are becoming increasingly important in analyzing large volume, multi-relational and multi-modal datasets, which are often conveniently represented as multiway arrays or tensors. It is therefore timely and valuable for the multidisciplinary research community to review tensor decompositions and tensor net...
Canonical polyadic decomposition (CPD), also known as parallel factor analysis, is a representation of a given tensor as a sum of rank-one components. The traditional method for accomplishing CPD is the alternating least squares (ALS) algorithm. Convergence of ALS is known to be slow, especially when some factor matrices of the tensor contain nearly co...
Canonical polyadic decomposition approximates or expresses a tensor by a sum of rank-1 tensors. When all or almost all components of the factor matrices are highly collinear, the decomposition becomes difficult. Algorithms, e.g., the alternating algorithms, require plenty of iterations, and may get stuck in false local mini...
In this paper, a numerical method is proposed for canonical polyadic (CP) decomposition of small-size tensors. The focus is primarily on decomposition of tensors that correspond to small matrix multiplications. Here, the rank of the tensors is equal to the smallest number of scalar multiplications that are necessary to accomplish the matrix multiplicat...
Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning algorithms typically scale exponentially with data volume and complexity of cross-modal couplings (the so-called curse of dimensionality), which is prohibit...
CANDECOMP/PARAFAC (CP) approximates multiway data by a sum of rank-1 tensors. Our recent study has presented a method for rank-1 tensor deflation, i.e. sequential extraction of rank-1 tensor components. In this paper, we extend the method to the block deflation problem. When at least two factor matrices have full column rank, one can extract two rank-1...
In this paper, we propose a low-rank tensor deconvolution problem which seeks multiway replicative patterns and corresponding activating tensors of rank-1. An alternating least squares (ALS) algorithm has been derived for the model to sequentially update loading components and the patterns. In addition, together with a good initialisation method us...
This paper presents algorithms for two-sided diagonalization of order-three tensors. It is another expression for joint non-symmetric approximate diagonalization of a set of square matrices, say T_1, ..., T_M: we seek two non-orthogonal matrices A and B such that the products A T_m B^T are close to diagonal in a sense. The algorithms can be used for...
Nonnegative Matrix Factorization (NMF) is a key tool for model dimensionality reduction in supervised classification. Several NMF algorithms have been developed for this purpose. In a majority of them, the training process is improved by using discriminant or nearest-neighbor graph-based constraints that are obtained from the knowledge on class lab...
CANDECOMP/PARAFAC tensor decomposition (CPD) approximates multiway data by rank-1 tensors. Unlike matrix decomposition, the procedure which estimates the best rank-R tensor approximation through R sequential best rank-1 approximations does not work for tensors, because the deflation does not always reduce the tensor rank. In this paper we propose a...
We propose algorithms for Tucker tensor decomposition, which can avoid computing singular value decomposition or eigenvalue decomposition of large matrices as in the work-horse higher order orthogonal iteration (HOOI) algorithm. The novel algorithms require a computational cost of O(I^3 R), which is cheaper than the O(I^3 R + I R^4 + R^6) of HOOI for multilinea...
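For reference, a sketch of the baseline HOOI scheme that the proposed algorithms improve upon; it performs exactly the truncated SVDs of large matrices that the paper seeks to avoid. Sizes in the usage below are illustrative.

```python
import numpy as np

def hooi(T, ranks, n_iter=20):
    """Baseline higher-order orthogonal iteration for Tucker decomposition."""
    N = T.ndim
    # HOSVD initialization: leading left singular vectors of each unfolding.
    U = [np.linalg.svd(np.moveaxis(T, n, 0).reshape(T.shape[n], -1),
                       full_matrices=False)[0][:, :ranks[n]] for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            # Project T on all modes except n, then refresh U[n] by truncated SVD.
            Y = T
            for m in range(N):
                if m != n:
                    Y = np.moveaxis(np.tensordot(U[m].T, np.moveaxis(Y, m, 0),
                                                 axes=1), 0, m)
            Yn = np.moveaxis(Y, n, 0).reshape(T.shape[n], -1)
            U[n] = np.linalg.svd(Yn, full_matrices=False)[0][:, :ranks[n]]
    # Core tensor: project T on all modes.
    G = T
    for m in range(N):
        G = np.moveaxis(np.tensordot(U[m].T, np.moveaxis(G, m, 0), axes=1), 0, m)
    return G, U

# Usage on an exact rank-(2,3,2) Tucker tensor: HOOI recovers it exactly.
rng = np.random.default_rng(4)
G0 = rng.standard_normal((2, 3, 2))
U0 = [np.linalg.qr(rng.standard_normal((d, r)))[0]
      for d, r in zip((6, 7, 5), (2, 3, 2))]
T = np.einsum('abc,ia,jb,kc->ijk', G0, *U0)
G, U = hooi(T, (2, 3, 2))
Th = np.einsum('abc,ia,jb,kc->ijk', G, *U)
assert np.linalg.norm(T - Th) / np.linalg.norm(T) < 1e-8
```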
The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essential...
This paper presents an algorithm for analysis of order-3 tensors, which is similar to CANDECOMP/PARAFAC (CP) decomposition, but has a different criterion of success. The tensor diagonalization allows exploratory analysis of the multilinear data in cases for which the CP decomposition is best replaced by a block term decomposition. The algorithm is...
Blind separation of underdetermined instantaneous mixtures is a popular solution to inverse problems encountered in audio or biomedical applications where the number of sources exceeds the number of sensors. There are two non-equivalent tasks: to identify the mixing matrix and to separate the original sources. In this paper, we focus on the latter...
We propose a novel decomposition approach to impute missing values in tensor data. The method uses smaller-scale multiway patches to model the whole data or a small volume encompassing the observed missing entries. Simulations on color images show that our method can recover color images using only 5-10% of pixels, and outperforms other available...
A novel approach is proposed to extract high-rank patterns from multiway data. The method is useful when signals comprise collinear components or complex structural patterns. Alternating least squares and multiplicative algorithms are developed for the new model with/without nonnegativity constraints. Experimental results on synthetic data and rea...
Various tensor decompositions use different arrangements of factors to explain multi-way data. Components from different decompositions can vary in the number of parameters. Allowing a model to contain components from different decompositions results in a combinatoric number of possible models. Model selection balances approximation error and the n...
In general, algorithms for order-3 CANDECOMP/PARAFAC (CP), also coined canonical polyadic decomposition (CPD), are easy to implement and can be extended to higher-order CPD. Unfortunately, the algorithms become computationally demanding, and they are often not applicable to higher-order and relatively large-scale tensors. In this paper, by exploiti...
CANDECOMP/PARAFAC (CP) has found numerous applications in a wide variety of areas such as chemometrics, telecommunication, data mining, neuroscience, and separated representations. For an order-N tensor, most CP algorithms can be computationally demanding due to computation of gradients, which are related to products between tensor unfoldings and Kha...
Several variants of Nonnegative Matrix Factorization (NMF) have been proposed for supervised classification of various objects. Graph regularized NMF (GNMF) incorporates the information on the data geometric structure to the training process, which considerably improves the classification results. However, the multiplicative algorithms used for upd...
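For context, the classical Lee-Seung multiplicative updates are the unregularized baseline that GNMF-type algorithms extend; the graph-regularization term discussed in the abstract is omitted in this sketch, and all sizes are illustrative.

```python
import numpy as np

def nmf_mu(V, r, n_iter=500, seed=0, eps=1e-12):
    """Multiplicative updates for NMF: V ~ W @ H under a Frobenius objective.
    The updates keep W and H nonnegative by construction; eps avoids
    division by zero. GNMF would add a graph term to the H update."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Usage on exactly factorizable nonnegative data of rank 2.
rng = np.random.default_rng(5)
V = rng.random((8, 2)) @ rng.random((2, 10))
W, H = nmf_mu(V, 2)
assert (W >= 0).all() and (H >= 0).all()
assert np.linalg.norm(V - W @ H) / np.linalg.norm(V) < 0.05
```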