Article

Bounds on the Spectral Norm and the Nuclear Norm of a Tensor Based on Tensor Partitions

Authors: Z. Li

Abstract

It is known that computing the spectral norm and the nuclear norm of a tensor is NP-hard in general. In this paper, we provide neat bounds for the spectral norm and the nuclear norm of a tensor based on tensor partitions. The spectral norm (respectively, the nuclear norm) can be lower and upper bounded by manipulating the spectral norms (respectively, the nuclear norms) of its subtensors. The bounds are sharp in general. When a tensor is partitioned into its matrix slices, our inequalities provide polynomial-time worst-case approximation bounds for computing the spectral norm and the nuclear norm of the tensor.
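To make the slice-based bounds concrete, here is a minimal numerical sketch in Python (an illustration of the sandwich, not the paper's algorithm or notation): for a 2 × m × n tensor with mode-1 matrix slices A1 and A2, the slice spectral norms give the lower bound max(‖A1‖_2, ‖A2‖_2) and the upper bound (‖A1‖_2^2 + ‖A2‖_2^2)^{1/2}, while the tensor spectral norm itself is evaluated by a fine grid over the unit circle, which is feasible only because the sliced mode has dimension two.

```python
# A minimal numerical check (not the paper's algorithm) of the slice-based
# sandwich: for a 2 x m x n tensor with mode-1 matrix slices A1, A2,
#   max(||A1||_2, ||A2||_2)  <=  ||A||_sigma  <=  sqrt(||A1||_2^2 + ||A2||_2^2).
# The spectral norm itself is evaluated by a fine grid over the unit circle,
# using ||A||_sigma = max over unit (x1, x2) of || x1*A1 + x2*A2 ||_2.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 5, 6))
A1, A2 = A[0], A[1]

s1 = np.linalg.norm(A1, 2)          # matrix spectral norms of the slices
s2 = np.linalg.norm(A2, 2)

thetas = np.linspace(0.0, np.pi, 20001)   # half circle suffices: ||-M|| = ||M||
spec = max(np.linalg.norm(np.cos(t) * A1 + np.sin(t) * A2, 2) for t in thetas)

lower = max(s1, s2)
upper = np.sqrt(s1**2 + s2**2)
print(f"lower {lower:.4f} <= ||A||_sigma ~ {spec:.4f} <= upper {upper:.4f}")
```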


... The nuclear norm is fundamental for tensor products of Banach spaces. It was introduced quite a while ago by Schatten [31] and Grothendieck [14], but is enjoying a renewed interest presently; see, e.g., [8], [12], [13], [19], and [21]. β-stars are hypergraphs such that all edges share the same vertex and no two edges share other vertices. ...
... and (19) implies that ... Proof of Theorem 2: Suppose that A is a symmetric nonnegative r-matrix of order n. First, we prove the assertion for p > r, and then obtain the case p = r by passing to the limit. ...
... (n_1 ··· n_r)^{1/2} |A|_2, and could be better than the more complicated version of Li [19]. However, bound (24) is ill-suited to matrices with small slice sums; e.g., if the slice sums are zero, then bound (24) is vacuous. ...
Preprint
The spectral p-norm of r-matrices generalizes the spectral 2-norm of 2-matrices. In 1911 Schur gave an upper bound on the spectral 2-norm of 2-matrices, which was extended in 1934 by Hardy, Littlewood, and Polya to r-matrices. Recently, Kolotilina, and independently the author, strengthened Schur's bound for 2-matrices. The main result of this paper extends the latter result to r-matrices, thereby improving the result of Hardy, Littlewood, and Polya. The proof is based on combinatorial concepts like r-partite r-matrix and symmetrant of a matrix, which appear to be instrumental in the study of the spectral p-norm in general. Thus, another application shows that the spectral p-norm and the p-spectral radius of a symmetric nonnegative r-matrix are equal whenever p ≥ r. This result contributes to a classical area of analysis, initiated by Mazur and Orlicz around 1930. Additionally, a number of bounds are given on the p-spectral radius and the spectral p-norm of r-matrices and r-graphs.
... It is NP-hard to compute them [1]. Studying them further is an active research topic [2,3,4]. ...
... Definition 2.2. The spectral norm of A ∈ ℜ^{d_1×d_2×d_3} is defined [1,2,3,4] as ...
... Then the spectral norm of A is equal to the largest singular value of A [1,2,3,4]. ...
Article
Full-text available
The spectral norm and the nuclear norm of a third order tensor play an important role in the tensor completion and recovery problem. We show that the spectral norm of a third order tensor is equal to the square root of the spectral norm of three positive semi-definite biquadratic tensors, and the square roots of the nuclear norms of those three positive semi-definite biquadratic tensors are lower bounds of the nuclear norm of that third order tensor. This provides a way to estimate and to evaluate the spectral norm and the nuclear norm of that third order tensor. Some upper and lower bounds for the spectral norm and nuclear norm of a third order tensor, by spectral radii and nuclear norms of some symmetric matrices, are presented.
... The results of Corollary 1 are a particular case of a more general result, proven in [23]. That result states that the nuclear norm of a tensor T can be bounded by the nuclear norm of any regular partition of T (defined in [23]) ...
... The lower bound is attained following [23]. First, we can bound the spectral norm in the following way: ...
... Wang et al. [29] systematically studied the tensor spectral p-norm via various matrix unfoldings and tensor unfoldings. Li [14] proposed a novel approach to study the tensor spectral norm and nuclear norm via tensor partitions, a concept generalizing block tensors by Ragnarsson and Van Loan [25]. Some neat bounds of the tensor spectral norm (respectively, nuclear norm) via the spectral norms (respectively, nuclear norms) of subtensors in any regular partition were established, and a conjecture [14, Conjecture 3.5] on the bounds in any tensor partition was proposed. ...
... In this paper, we systematically study the tensor spectral p-norm and nuclear p-norm via the partition approach in [14]. We prove that for the most general type of partition, called an arbitrary partition, the bounds of the tensor spectral p-norm and nuclear p-norm via subtensors can be established for any 1 ≤ p ≤ ∞. ...
Article
Full-text available
This paper presents a generalization of the spectral norm and the nuclear norm of a tensor via arbitrary tensor partitions, a much richer concept than block tensors. We show that the spectral p-norm and the nuclear p-norm of a tensor can be lower and upper bounded by manipulating the spectral p-norms and the nuclear p-norms of subtensors in an arbitrary partition of the tensor for 1 ≤ p ≤ ∞. Hence, it generalizes and answers affirmatively the conjecture proposed by Li (SIAM J Matrix Anal Appl 37:1440–1452, 2016) for a tensor partition and p = 2. We study the relations of the norms of a tensor, the norms of matrix unfoldings of the tensor, and the bounds via the norms of matrix slices of the tensor. Various bounds of the tensor spectral and nuclear norms in the literature are implied by our results.
... However, most tensor norms are NP-hard to compute [19], such as the tensor spectral norm [17] and the tensor nuclear norm [12]. Just as block matrices are a useful tool for approximating matrix norms, the computation of tensor norms via block tensors is straightforward and becomes increasingly important within the field of numerical linear algebra [9,29,41,42]. When a tensor is partitioned into subtensors, not necessarily having the same size, some tensor norms of these subtensors form a tensor called a norm compression tensor. ...
... They [42] further applied block tensors to symmetric embeddings of tensors. Extending block tensors, Li [29] proposed more general concepts of tensor partitions and provided bounds of the spectral norm and the nuclear norm of a tensor via norms of subtensors in a regular partition of the tensor. The results were further generalized to the spectral p-norm and the nuclear p-norm of a tensor and to arbitrary partitions of the tensor [9]. ...
... The results were further generalized to the spectral p-norm and the nuclear p-norm of a tensor and to arbitrary partitions of the tensor [9]. This paper explores the structure of block tensors instead of treating subtensors merely as elements as in [9,29], and proposes a more accurate estimation of the spectral norm of a tensor, although block tensors are only a special, albeit the most common, type of regular partition [29] and arbitrary partition [9]. It is worth mentioning that bounds of the tensor spectral p-norm have been extensively studied in the literature [16-18,20,38,44,48], in particular in the area of polynomial optimization [30]. ...
Article
Full-text available
When a tensor is partitioned into subtensors, some tensor norms of these subtensors form a tensor called a norm compression tensor. Norm compression inequalities for tensors focus on the relation of the norm of this compressed tensor to the norm of the original tensor. We prove that for the tensor spectral norm, the norm of the compressed tensor is an upper bound of the norm of the original tensor. This result can be extended to a general class of tensor spectral norms. We discuss various applications of norm compression inequalities for tensors. These inequalities improve many existing bounds of tensor norms in the literature, in particular tightening the general bound of the tensor spectral norm via tensor partitions. We study the extremal ratio between the spectral norm and the Frobenius norm of a tensor space, provide a general way to estimate its upper bound, and in particular, improve the current best upper bound for third order nonnegative tensors and symmetric tensors. We also propose a faster approach to estimate the spectral norm of a large tensor or matrix via sequential norm compression inequalities with theoretical and numerical evidence. For instance, the complexity of our algorithm for the matrix spectral norm is O(n^{2+ε}), where ε ranges from 0 to 1 depending on the partition and the estimate ranges correspondingly from some close upper bound to the exact spectral norm.
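As a hedged illustration of the norm-compression idea at the matrix level (a classical block-matrix analogue, not the tensor algorithm of this paper), the sketch below partitions a matrix into blocks, replaces each block by its spectral norm, and uses the spectral norm of the resulting small matrix as an upper bound on the spectral norm of the original. The helper name block_norm_compression and the chosen partition are illustrative only.

```python
# Matrix-level sketch of norm compression: partition A into blocks, replace
# each block by its spectral norm, and bound ||A||_2 by the spectral norm of
# the small "compression" matrix B with B_ij = ||A_ij||_2 (a known matrix
# inequality; this is only an analogue of the tensor result in the paper).
import numpy as np

def block_norm_compression(A, row_splits, col_splits):
    """Return the matrix of block spectral norms; splits are index lists."""
    row_blocks = np.split(A, row_splits, axis=0)
    return np.array([[np.linalg.norm(blk, 2)
                      for blk in np.split(rows, col_splits, axis=1)]
                     for rows in row_blocks])

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 300))
B = block_norm_compression(A, row_splits=[100], col_splits=[100, 200])  # 2 x 3 blocks

exact = np.linalg.norm(A, 2)
upper = np.linalg.norm(B, 2)
print(f"||A||_2 = {exact:.4f} <= compressed bound {upper:.4f}")
```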
... The results of Corollary 1 are a particular case of a more general result, proven in [23]. That result states that the nuclear norm of a tensor T can be bounded by the nuclear norm of any regular partition of T (defined in [23]) ...
... The lower bound is attained following [23]. First, we can bound the spectral norm in the following way: ...
Preprint
Full-text available
We consider a synthetic aperture imaging configuration, such as synthetic aperture radar (SAR), where we want to first separate reflections from moving targets from those coming from a stationary background, and then to image separately the moving and the stationary reflectors. For this purpose, we introduce a representation of the data as a third order tensor formed from data coming from partially overlapping sub-apertures. We then apply a tensor robust principal component analysis (TRPCA) to the tensor data which separates them into the parts coming from the stationary and moving reflectors. Images are formed with the separated data sets. Our analysis shows a distinctly improved performance of TRPCA, compared to the usual matrix case. In particular, the tensor decomposition can identify motion features that are undetectable when using the conventional motion estimation methods, including matrix RPCA. We illustrate the performance of the method with numerical simulations in the X-band radar regime.
... They proposed to minimize a tensor nuclear norm directly and proved that such an approach improves the sample size requirement. This has led to research enthusiasm for the tensor nuclear norm and its dual norm, i.e., the tensor spectral norm [4,5,6,7,8,9], even though computing them is in fact an NP-hard problem [2]. ...
... is a rank-one kth order tensor. The nuclear norm of A is defined [2,4,7] as ...
... Then the spectral norm of A is defined [2,4,7,11] as ...
Preprint
We show that the nuclear norm of the tensor product of two tensors is not greater than the product of the nuclear norms of these two tensors. As an application, we give lower bounds for the nuclear norm of an arbitrary tensor. We show that the spectral norm of the tensor product of two tensors is not greater than the product of the spectral norm of one tensor and the nuclear norm of the other. By this result, we give some lower bounds for the product of the nuclear norm and the spectral norm of an arbitrary tensor. The first result also shows that the nuclear norm of a square matrix is a matrix norm. We then extend the concept of matrix norm to tensor norm. A real function defined for all real tensors is called a tensor norm if it is a norm for any tensor space with fixed dimensions, and the norm of the tensor product of two tensors is never greater than the product of the norms of these two tensors. We show that the 1-norm, the Frobenius norm and the nuclear norm of tensors are tensor norms, but the infinity norm and the spectral norm of tensors are not tensor norms.
... It is NP-hard to compute them [1]. Studying them further is an active research topic [2,3,4]. ...
... Definition 2.2. The spectral norm of A ∈ ℜ^{d_1×d_2×d_3} is defined [1,2,3,4] as ...
... Then the spectral norm of A is equal to the largest singular value of A [1,2,3,4]. ...
Preprint
The spectral norm and the nuclear norm of a third order tensor play an important role in the tensor completion and recovery problem. We show that the spectral norm of a third order tensor is equal to the square root of the spectral norm of three fourth order positive semi-definite bisymmetric tensors, and the square roots of the nuclear norms of those three fourth order positive semi-definite bisymmetric tensors are lower bounds of the nuclear norm of that third order tensor. This provides a way to estimate and to evaluate the spectral norm and the nuclear norm of that third order tensor. Some upper and lower bounds for the spectral norm and nuclear norm of a third order tensor, by spectral radii and nuclear norms of some symmetric matrices, are presented.
... In [15], Li proposed an efficient way for the estimation of the tensor spectral and nuclear norms based on tensor partitions, which is defined as follows. ...
... Definition 1.3. [15] A partition {T_1, T_2, ..., T_m} is called a general tensor partition of a tensor T ∈ R^{n_1×n_2×···×n_d} if every T_j (j = 1, 2, ..., m) is a subtensor of T, every pair of subtensors {T_i, T_j} with i ≠ j has no common entry of T, and every entry of T belongs to one of the subtensors in {T_1, T_2, ..., T_m}. ...
... Li also proposed a special tensor partition, called a regular tensor partition, based on which the bounds of tensor norms were established [15]. The partition is obtained via several tensor cuts. ...
Article
Full-text available
On estimations of the lower and upper bounds for the spectral and nuclear norm of a tensor, Li established neat bounds for the two norms based on regular tensor partitions, and proposed a conjecture for the same bounds to hold based on general tensor partitions [Z. Li, Bounds on the spectral norm and the nuclear norm of a tensor based on tensor partitions, SIAM J. Matrix Anal. Appl., 37 (2016), pp. 1440-1452]. Later, Chen and Li provided a solution to the conjecture [Chen B., Li Z., On the tensor spectral p-norm and its dual norm via partitions]. In this short paper, we present a concise and different proof for the validity of the conjecture, which also offers a new and simpler proof of the bounds of the spectral and nuclear norms established by Li for regular tensor partitions. Two numerical examples are provided to illustrate the tightness of these bounds.
... and inequality (20) follows. ...
... The nuclear norm is fundamental for tensor products of Banach spaces. It was introduced quite a while ago by Schatten [30] and Grothendieck [15], but is enjoying a renewed interest presently; see, e.g., [8], [12], [13], [14], [20], and [22]. ...
... and could be better than the more complicated version of Li [20]. However, bound (24) is ill-suited to matrices with small slice sums; e.g., if the slice sums are zero, then bound (24) is vacuous. ...
Article
The spectral p-norm of r-matrices generalizes the spectral 2-norm of 2-matrices. In 1911 Schur gave an upper bound on the spectral 2-norm of 2-matrices, which was extended in 1934 by Hardy, Littlewood, and Polya to r-matrices. Recently, Kolotilina, and independently the author, strengthened Schur's bound for 2-matrices. The main result of this paper extends the latter result to r-matrices, thereby improving the result of Hardy, Littlewood, and Polya. The proof is based on new combinatorial concepts like r-partite r-matrix and symmetrant of a matrix, which appear to be instrumental in the study of the spectral p-norm in general. Thus, another application shows that the spectral p-norm and the p-spectral radius of a symmetric nonnegative r-matrix are equal whenever p ≥ r. This result contributes to a classical area of analysis, initiated by Mazur and Orlicz around 1930. Additionally, a number of bounds are given on the p-spectral radius and the spectral p-norm of r-matrices and r-graphs.
... Another version of this method was used for non-symmetric tensors [19]. In addition, the SVD of matrix flattening of a tensor has been used to find a rank-1 decomposition that approximates its nuclear norm [31]. ...
... Another relaxation can be defined for problem (3.1) by using the L1-norm. The L1-norm [31] of the tensor A ∈ R^{n_1×n_2×···×n_m} is defined as ...
Article
The best rank-1 approximation of a real mth-order tensor is equivalent to solving m 2-norm optimization problems, each of which corresponds to a factor of the best rank-1 approximation. In this paper, these problems are relaxed by using the Frobenius and L1-norms instead of the 2-norm. It is shown that the solution of the Frobenius relaxation is the leading eigenvector of a positive semi-definite matrix that is closely related to the higher-order singular value decomposition, and that the solution of the L1-relaxation can be obtained efficiently by summing over all modes of the associated tensor but one. The numerical examples show that these relaxations can be used to initialize the alternating least-squares (ALS) method and that they are reasonably close to the solutions obtained by the ALS method.
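For readers who want to see the baseline that the relaxations are compared against, here is a minimal sketch of the standard rank-1 ALS (higher-order power) iteration mentioned in the abstract; it is the generic method, not the paper's Frobenius or L1 relaxation, and the function name rank1_als is ours.

```python
# A minimal sketch of the standard rank-1 ALS / higher-order power iteration
# for a third-order tensor: alternately update each unit factor with the other
# two fixed. The returned multilinear value is a lower bound on ||T||_sigma.
import numpy as np

def rank1_als(T, iters=100, seed=0):
    """Approximate the best rank-1 approximation sigma * x o y o z of T."""
    rng = np.random.default_rng(seed)
    _, d2, d3 = T.shape
    y = rng.standard_normal(d2); y /= np.linalg.norm(y)
    z = rng.standard_normal(d3); z /= np.linalg.norm(z)
    for _ in range(iters):
        x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
        y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
        z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
    sigma = np.einsum('ijk,i,j,k->', T, x, y, z)
    return sigma, x, y, z

T = np.random.default_rng(2).standard_normal((4, 5, 6))
sigma, x, y, z = rank1_als(T)
print("multilinear value (lower bound on the spectral norm):", sigma)
```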
... This crucial fact has resulted in alternative concepts for the tensor nuclear norm in practice, such as the average of the nuclear norms of the matrix flattenings along the three different modes. In terms of approximating the tensor nuclear norm, the best polynomial-time worst-case approximation bound is Ω(1/√n), via matrix flattenings [18] or partitions into matrix slices [22]. This bound is worse than the best known one, Ω(√(ln n / n)), for the tensor spectral norm. ...
... 1/√(n_k). There are two methods to achieve this bound: one is via matrix flattenings of the tensor [18] and the other is via partitioning the tensor into matrix slices [22]. This bound is worse than the best one for the tensor spectral norm. ...
Preprint
The matrix spectral and nuclear norms appear in enormous applications. The generalizations of these norms to higher-order tensors are becoming increasingly important, but unfortunately they are NP-hard to compute or even approximate. Although the two norms are dual to each other, the best known approximation bound achieved by polynomial-time algorithms for the tensor nuclear norm is worse than that for the tensor spectral norm. In this paper, we bridge this gap by proposing deterministic algorithms with the best bound for both tensor norms. Our methods not only improve the approximation bound for the nuclear norm, but are also data independent and easily implementable compared to existing approximation methods for the tensor spectral norm. The main idea is to construct a selection of unit vectors that can approximately represent the unit sphere, in other words, a collection of spherical caps to cover the sphere. For this purpose, we explicitly construct several collections of spherical caps for sphere covering with adjustable parameters for different levels of approximations and cardinalities. These readily available constructions are of independent interest as they provide a powerful tool for various decision making problems on spheres and related problems. We believe the ideas of constructions and the applications to approximate tensor norms can be useful to tackle optimization problems over other sets such as the binary hypercube.
... Perhaps the only known method to compute the tensor nuclear norm is based on the sums-of-squares relaxation by Nie [30], but it only works for symmetric tensors and is efficient only in low dimensions. In terms of polynomial-time approximation methods, the best approximation bound is 1/√ℓ, either via matrix flattenings of the tensor [18] or via partitioning the tensor into matrix slices [25]. We will also show in this paper that for fixed ℓ, the nuclear norm of T ∈ R^{ℓ×m×n} can be computed in polynomial time. ...
... see [25] for details. Both ‖Mat(T)‖_σ^2 and Σ_{i=1} ‖T_i‖_σ^2 can be proven to be at most ‖Mat(T)‖_σ^2 and are indeed easy to compute. ...
Preprint
The recent decade has witnessed a surge of research in modelling and computing from two-way data (matrices) to multiway data (tensors). However, there is a drastic phase transition for most tensor optimization problems when the order of a tensor increases from two (a matrix) to three: most tensor problems are NP-hard while those for matrices are easy. It triggers a question on where exactly the transition occurs. The paper aims to study this kind of question for the spectral norm and the nuclear norm. Although computing the spectral norm for a general ℓ × m × n tensor is NP-hard, we show that it can be computed in polynomial time if ℓ is fixed. This is the same for the nuclear norm. While these polynomial-time methods are not implementable in practice, we propose fully polynomial-time approximation schemes (FPTAS) for the spectral norm based on spherical grids and for the nuclear norm with further help of duality theory and semidefinite optimization. Numerical experiments on simulated data show that our FPTAS can compute these tensor norms for small ℓ ≤ 6 but large m, n ≥ 50. To the best of our knowledge, this is the first method that can compute the nuclear norm of general asymmetric tensors. Both our polynomial-time algorithms and FPTAS can be extended to higher-order tensors as well.
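The fixed-ℓ idea can be sketched as follows (a rough illustration, assuming the identity that the spectral norm of an ℓ × m × n tensor equals the maximum over unit w ∈ R^ℓ of the matrix spectral norm of Σ_i w_i T_i; this is not the paper's FPTAS and carries none of its error analysis): with ℓ = 3, a grid on the 2-sphere already gives a usable estimate.

```python
# Spherical-grid sketch for a 3 x m x n tensor: ||T||_sigma equals the maximum
# over unit w in R^3 of || w_1*T[0] + w_2*T[1] + w_3*T[2] ||_2, so scanning a
# grid on the sphere gives a lower estimate that improves with resolution.
import numpy as np

def spectral_norm_grid(T, n_grid=60):
    """Grid-based estimate of the spectral norm of a 3 x m x n tensor."""
    best = 0.0
    phis = np.linspace(0.0, np.pi, n_grid)          # polar angle
    psis = np.linspace(0.0, 2 * np.pi, 2 * n_grid)  # azimuth
    for phi in phis:
        for psi in psis:
            w = np.array([np.sin(phi) * np.cos(psi),
                          np.sin(phi) * np.sin(psi),
                          np.cos(phi)])
            best = max(best, np.linalg.norm(np.tensordot(w, T, axes=(0, 0)), 2))
    return best

T = np.random.default_rng(3).standard_normal((3, 40, 50))
print("grid estimate of ||T||_sigma:", spectral_norm_grid(T))
```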
... In addition, we found the following definition of the tensor spectral norm from Li (2016). ...
... Definition 2.1 (Li 2016) For a given tensor T ∈ R^{I_1×I_2×···×I_N}, the spectral norm of T, denoted by ‖T‖_σ, is defined as ...
Article
The spectral norm of an even-order tensor is defined and investigated. An equivalence between the spectral norm of tensors and matrices is given. Using derived representations of some tensor expressions involving the Moore–Penrose inverse, we investigate the perturbation theory for the Moore–Penrose inverse of a tensor via the Einstein product. The classical results derived by Stewart (SIAM Rev 19:634–662, 1977) and Wedin (BIT 13:217–232, 1973) for the matrix case are extended to even-order tensors. An implementation in the Matlab programming language is developed and used in deriving appropriate numerical examples.
... we need to find an upper bound to the operator norm. We invoke Theorem 3.1 in Ref. [97]: ...
Preprint
Full-text available
The double descent phenomenon challenges traditional statistical learning theory by revealing scenarios where larger models do not necessarily lead to reduced performance on unseen data. While this counterintuitive behavior has been observed in a variety of classical machine learning models, particularly modern neural network architectures, it remains elusive within the context of quantum machine learning. In this work, we analytically demonstrate that quantum learning models can exhibit double descent behavior by drawing on insights from linear regression and random matrix theory. Additionally, our numerical experiments on quantum kernel methods across different real-world datasets and system sizes further confirm the existence of a test error peak, a characteristic feature of double descent. Our findings provide evidence that quantum models can operate in the modern, overparameterized regime without experiencing overfitting, thereby opening pathways to improved learning performance beyond traditional statistical learning theory.
... Theorem 4.3 with an appropriate example in Theorem 3.1, as long as they are not tall tensors, whose ratio is provided by (17) in Proposition 4.1. In order to get an explicit upper bound instead of an order of magnitude, we now apply Theorem 3.1 again to estimate φ(R_+^{n_1×n_2×···×n_d}) using the power of two. ...
Preprint
Full-text available
One of the fundamental problems in multilinear algebra, the minimum ratio between the spectral and Frobenius norms of tensors, has received considerable attention in recent years. While most values are unknown for real and complex tensors, the asymptotic order of magnitude and tight lower bounds have been established. However, little is known about nonnegative tensors. In this paper, we present an almost complete picture of the ratio for nonnegative tensors. In particular, we provide a tight lower bound that can be achieved by a wide class of nonnegative tensors under a simple necessary and sufficient condition, which helps to characterize the extreme tensors and obtain results such as the asymptotic order of magnitude. We show that the ratio for symmetric tensors is no more than that for general tensors multiplied by a constant depending only on the order of tensors, hence determining the asymptotic order of magnitude for real, complex, and nonnegative symmetric tensors. We also find that the ratio is in general different to the minimum ratio between the Frobenius and nuclear norms for nonnegative tensors, a sharp contrast to the case for real tensors and complex tensors.
... The nuclear and spectral norms of tensors play an important role in tensor completion problems [33]. Different methods to estimate and to evaluate the spectral norm and the nuclear norm and their upper and lower bounds have been studied by several authors (see [16,20,21,25]). ...
Preprint
Full-text available
We develop algebraic methods for computations with tensor data. We give 3 applications: extracting features that are invariant under the orthogonal symmetries in each of the modes, approximation of the tensor spectral norm, and amplification of low rank tensor structure. We introduce colored Brauer diagrams, which are used for algebraic computations and in analyzing their computational complexity. We present numerical experiments whose results show that the performance of the alternating least square algorithm for the low rank approximation of tensors can be improved using tensor amplification.
Article
Tensor spectral p-norms are generalizations of matrix induced norms. Matrix induced norms are an important type of matrix norms, and tensor spectral p-norms are also important in applications. We discuss some basic properties of tensor spectral p-norms. We extend the submultiplicativity of the matrix spectral 2-norm to the tensor case, based on which we give a bound of the tensor spectral 2-norm and provide a fast method for computing spectral 2-norms of sum-of-squares tensors. To compute tensor spectral p-norms, we propose a higher order power method. Experiments show the high efficiency of the proposed methods and numerical results on spectral p-norms of random tensors are also given.
Article
In this paper, we extend Hardy's inequality to infinite tensors. To do so, we introduce Cesàro tensors ℭ, and consider them as tensor maps from sequence spaces into tensor spaces. In fact, we prove inequalities of the form ‖ℭx^k‖_{t,1} ≤ U‖x‖_{l_p}^k (k = 1, 2), where x is a sequence, ℭx^k is a tensor, and ‖·‖_{t,1}, ‖·‖_{l_p} are the tensor and sequence norms, respectively. The constant U is independent of x, and we seek the smallest possible value of U.
Article
Several basic properties of tensor nuclear norms are established in [S. Friedland and L.-H. Lim, Math. Comp., 87 (2018), pp. 1255–1281]. In this work, we give further studies on tensor nuclear norms. We present some special cases of tensor nuclear decompositions. We list some examples to show basic relationships among tensor rank, orthogonal rank and nuclear rank. Spectral and nuclear norms of Hermitian tensors are studied. We show that spectral and nuclear norms of real Hermitian decomposable tensors do not depend on the choice of base field. At last, we extend matrix polar decompositions to the tensor case, which is the product of a Hermitian tensor and a tensor whose spectral norm equals one. That is, we establish a link between any tensor and a Hermitian tensor. Bounds of nuclear rank are given based on tensor polar decompositions.
Article
Full-text available
Finding the rank of a tensor is a problem that has many applications. Unfortunately, it is often very difficult to determine the rank of a given tensor. Inspired by the heuristics of convex relaxation, we consider the nuclear norm instead of the rank of a tensor. We determine the nuclear norm of various tensors of interest. Along the way, we also make a systematic study of various measures of orthogonality in tensor product spaces, and we give a new generalization of the Singular Value Decomposition to higher order tensors.
Article
Full-text available
We discuss a technique that allows blind recovery of signals or blind identification of mixtures in instances where such recovery or identification were previously thought to be impossible: (i) closely located or highly correlated sources in antenna array processing, (ii) highly correlated spreading codes in CDMA radio communication, (iii) nearly dependent spectra in fluorescent spectroscopy. This has important implications: in the case of antenna array processing, it allows for joint localization and extraction of multiple sources from the measurement of a noisy mixture recorded on multiple sensors in an entirely deterministic manner. In the case of CDMA, it allows the possibility of having a number of users larger than the spreading gain. In the case of fluorescent spectroscopy, it allows for detection of nearly identical chemical constituents. The proposed technique involves the solution of a bounded coherence low-rank multilinear approximation problem. We show that bounded coherence allows us to establish existence and uniqueness of the recovered solution. We will provide some statistical motivation for the approximation problem and discuss greedy approximation bounds. To provide the theoretical underpinnings for this technique, we develop a corresponding theory of sparse separable decompositions of functions, including notions of rank and nuclear norm that specialize to the usual ones for matrices and operators but apply also to hypermatrices and tensors.
Article
The Hölder p-norm of an m × n matrix has no explicit representation unless p = 1, 2, or ∞. It is shown here that the p-norm can be estimated reliably in O(mn) operations. A generalization of the power method is used, with a starting vector determined by a technique with a condition estimation flavour. The algorithm nearly always computes a p-norm estimate correct to the specified accuracy, and the estimate is always within a factor n^{1−1/p} of ‖A‖_p. As a by-product, a new way is obtained to estimate the 2-norm of a rectangular matrix; this method is more general and produces better estimates in practice than a similar technique of Cline, Conn and Van Loan.
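Below is a sketch of the kind of p-norm power iteration described here, written from memory of the Boyd/Higham method rather than transcribed from the paper (in particular it uses a random starting vector instead of the condition-estimation start, and the helper names are ours); it returns a lower bound on ‖A‖_p that is usually the exact norm in practice.

```python
# Hedged sketch of a p-norm power iteration (1 < p < inf), from memory of the
# Boyd/Higham approach: alternate y = A x and z = A^T dual(y), mapping back to
# the primal sphere via dual vectors. The result is a lower bound on ||A||_p.
import numpy as np

def dual_vector(y, p):
    """Vector d with ||d||_q = 1 and d^T y = ||y||_p, where 1/p + 1/q = 1."""
    ynorm = np.linalg.norm(y, p)
    return np.sign(y) * (np.abs(y) / ynorm) ** (p - 1)

def pnorm_estimate(A, p, iters=50, seed=0):
    q = p / (p - 1.0)                      # conjugate exponent
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[1])
    x /= np.linalg.norm(x, p)
    est = 0.0
    for _ in range(iters):
        y = A @ x
        est = np.linalg.norm(y, p)
        z = A.T @ dual_vector(y, p)
        if np.linalg.norm(z, q) <= z @ x + 1e-12:   # stationarity test
            break
        x = dual_vector(z, q)
    return est

A = np.random.default_rng(4).standard_normal((30, 20))
print("estimate of ||A||_3:", pnorm_estimate(A, p=3.0))
print("p = 2 sanity check:", pnorm_estimate(A, p=2.0), np.linalg.norm(A, 2))
```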
Article
In this paper we propose an algorithm to estimate missing values in tensors of visual data. Our methodology is built on recent studies about matrix completion using the matrix trace norm. The contribution of our paper is to extend the matrix case to the tensor case by proposing the first definition of the trace norm for tensors and then by building a working algorithm. First, we propose a definition for the tensor trace norm that generalizes the established definition of the matrix trace norm. Second, similar to matrix completion, the tensor completion is formulated as a convex optimization problem. We developed three algorithms: SiLRTC, FaLRTC, and HaLRTC. The SiLRTC algorithm is simple to implement; it employs a relaxation technique to separate the dependent relationships and uses the block coordinate descent (BCD) method to achieve a globally optimal solution. The FaLRTC algorithm utilizes a smoothing scheme to transform the original nonsmooth problem into a smooth one. The HaLRTC algorithm applies the alternating direction method of multipliers (ADMM) to our problem. Our experiments show potential applications of our algorithms, and the quantitative evaluation indicates that our methods are more accurate and robust than heuristic approaches.
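A minimal sketch of the tensor trace norm that this line of work relies on, assuming the common definition as a weighted sum of the nuclear norms of the mode-n unfoldings (equal weights here); the unfolding convention and helper names are assumptions of this sketch, and the SiLRTC/FaLRTC/HaLRTC solvers themselves are not reproduced.

```python
# Sketch of a tensor trace norm: a weighted sum of the nuclear norms of the
# mode-n unfoldings (equal weights by default). The unfolding convention used
# here is an assumption of this sketch, not a transcription from the paper.
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def tensor_trace_norm(X, weights=None):
    N = X.ndim
    weights = [1.0 / N] * N if weights is None else weights
    return sum(w * np.linalg.norm(unfold(X, n), 'nuc')
               for n, w in zip(range(N), weights))

X = np.random.default_rng(5).standard_normal((4, 5, 6))
print("tensor trace norm (equal weights):", tensor_trace_norm(X))
```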
Article
In this paper, we consider approximation algorithms for optimizing a generic multi-variate homogeneous polynomial function, subject to homogeneous quadratic constraints. Such optimization models have wide applications, e.g., in signal processing, magnetic resonance imaging (MRI), data training, approximation theory, and portfolio selection. Since polynomial functions are non-convex in general, the problems under consideration are all NP-hard. In this paper we shall focus on polynomial-time approximation algorithms. In particular, we first study optimization of a multi-linear tensor function over the Cartesian product of spheres. We shall propose approximation algorithms for such problems and derive worst-case performance ratios, which are shown to depend only on the dimensions of the models. The methods are then extended to optimize a generic multi-variate homogeneous polynomial function with spherical constraints. Likewise, approximation algorithms are proposed with provable relative approximation performance ratios. Furthermore, the constraint set is relaxed to be an intersection of co-centered ellipsoids. In particular, we consider maximization of a homogeneous polynomial over the intersection of ellipsoids centered at the origin, and propose polynomial-time approximation algorithms with provable worst-case performance ratios. Numerical results are reported, illustrating the effectiveness of the approximation algorithms studied.