December 2024 · 14 Reads · 2 Citations
Optimization
October 2024 · 81 Reads · 2 Citations
Deep nonnegative matrix factorization (deep NMF) has recently emerged as a valuable technique for extracting multiple layers of features across different scales. However, all existing deep NMF models and algorithms have primarily centered their evaluation on the least squares error, which may not be the most appropriate metric for assessing the quality of approximations on diverse data sets. For instance, when dealing with data types such as audio signals and documents, it is widely acknowledged that β-divergences offer a more suitable alternative. In this article, we develop new models and algorithms for deep NMF using some β-divergences, with a focus on the Kullback-Leibler divergence. Subsequently, we apply these techniques to the extraction of facial features, the identification of topics within document collections, and the identification of materials within hyperspectral images.
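As a rough illustration (not the paper's algorithm), here is a minimal layer-wise sketch assuming the factorization X ≈ W1 W2 ⋯ WL HL, with each layer fitted by the classical multiplicative updates for the Kullback-Leibler divergence; the sequential (non-joint) fitting and all function names are illustrative assumptions.

```python
import numpy as np

def kl_nmf(X, r, n_iter=200, eps=1e-10, seed=0):
    """Classical multiplicative updates for KL-divergence NMF, X ≈ W @ H."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    ones = np.ones_like(X)
    for _ in range(n_iter):
        W *= ((X / (W @ H + eps)) @ H.T) / (ones @ H.T + eps)
        H *= (W.T @ (X / (W @ H + eps))) / (W.T @ ones + eps)
    return W, H

def deep_kl_nmf(X, ranks, n_iter=200):
    """Layer-wise deep NMF: X ≈ W1 @ W2 @ ... @ WL @ HL, each layer fitted by KL-NMF."""
    Ws, H = [], X
    for r in ranks:
        W, H = kl_nmf(H, r, n_iter=n_iter)
        Ws.append(W)
    return Ws, H

# Example: two layers of nonnegative features for a random nonnegative data matrix
X = np.random.default_rng(1).random((100, 60))
Ws, H2 = deep_kl_nmf(X, ranks=[20, 8])
```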
March 2024 · 41 Reads · 5 Citations
Journal of Global Optimization
In this paper, we propose an inertial alternating direction method of multipliers for solving a class of non-convex multi-block optimization problems with nonlinear coupling constraints. Distinctive features of our method, compared with other alternating direction methods of multipliers for non-convex problems with nonlinear coupling constraints, include: (i) we apply an inertial technique to the update of the primal variables, and (ii) we use a non-standard update rule for the multiplier, first scaling the multiplier by a factor and then moving along the ascent direction, where a relaxation parameter is allowed. Subsequential convergence and global convergence are established for the proposed algorithm.
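A hedged sketch of one iteration reflecting the two distinctive features listed above, for an abstract two-block problem min f(x) + g(z) subject to c(x, z) = 0; the block-minimization oracles and parameter names are illustrative placeholders, not the paper's notation.

```python
# Hypothetical one-iteration pattern: (i) inertial extrapolation of the primal
# blocks, (ii) a multiplier update that first scales the multiplier and then
# takes a relaxed ascent step along the constraint residual. `argmin_x`,
# `argmin_z` and the coupling map `c` are problem-specific oracles.

def inertial_admm_step(x, x_prev, z, z_prev, y, rho, alpha, theta, omega,
                       argmin_x, argmin_z, c):
    # (i) inertial (heavy-ball) extrapolation of the primal variables
    x_bar = x + alpha * (x - x_prev)
    z_bar = z + alpha * (z - z_prev)

    # block-wise minimization of the augmented Lagrangian around the extrapolated points
    x_new = argmin_x(x_bar, z, y, rho)
    z_new = argmin_z(x_new, z_bar, y, rho)

    # (ii) non-standard multiplier step: scale y by theta, then move along the
    # ascent direction c(x_new, z_new) with relaxation parameter omega
    y_new = theta * y + omega * rho * c(x_new, z_new)

    return x_new, z_new, y_new
```

Choosing theta = 1 and omega = 1 recovers the standard dual ascent step; the scaling factor theta and the relaxation parameter omega are the non-standard ingredients described in the abstract.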
December 2023 · 264 Reads · 32 Citations
Journal of Optimization Theory and Applications
We develop an inexact primal-dual first-order smoothing framework to solve a class of non-bilinear saddle point problems with primal strong convexity. Compared with existing methods, our framework significantly improves the primal oracle complexity while retaining a competitive dual oracle complexity. In addition, we consider the situation where the primal-dual coupling term has a large number of component functions. To efficiently handle this situation, we develop a randomized version of our smoothing framework, which allows the primal and dual sub-problems in each iteration to be solved by randomized algorithms inexactly in expectation. The convergence of this framework is analyzed both in expectation and with high probability. In terms of the primal and dual oracle complexities, this framework significantly improves over its deterministic counterpart. As an important application, we adapt both frameworks for solving convex optimization problems with many functional constraints. To obtain an ε-optimal and ε-feasible solution, both frameworks achieve the best-known oracle complexities (in terms of their dependence on ε).
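For orientation, a hedged sketch of the problem template suggested by the abstract: a non-bilinear saddle point problem whose coupling term may be a large finite sum (the exact form, notation, and assumptions in the paper may differ).

```latex
\min_{x \in X}\ \max_{y \in Y}\ f(x) + \Phi(x,y) - h(y),
\qquad
\Phi(x,y) = \frac{1}{n}\sum_{i=1}^{n} \Phi_i(x,y),
```

with f strongly convex (the primal strong convexity assumed in the abstract) and the coupling term convex in x and concave in y; the randomized variant samples the components Φ_i. The application to convex problems with many functional constraints g_i(x) ≤ 0 corresponds to the Lagrangian saddle point min_x max_{y ≥ 0} f(x) + Σ_i y_i g_i(x).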
September 2023 · 52 Reads
A significant limitation of one-class classification anomaly detection methods is their reliance on the assumption that unlabeled training data only contains normal instances. To overcome this impractical assumption, we propose two novel classification-based anomaly detection methods. Firstly, we introduce a semi-supervised shallow anomaly detection method based on an unbiased risk estimator. Secondly, we present a semi-supervised deep anomaly detection method utilizing a nonnegative (biased) risk estimator. We establish estimation error bounds and excess risk bounds for both risk minimizers. Additionally, we propose techniques to select appropriate regularization parameters that ensure the nonnegativity of the empirical risk in the shallow model under specific loss functions. Our extensive experiments provide strong evidence of the effectiveness of the risk-based anomaly detection methods.
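As a rough sketch of the flavor of a nonnegative (biased) risk estimator, in the spirit of PU learning, assuming labeled anomalies play the role of positives and the unlabeled data is a mixture with anomaly rate pi; the paper's exact estimators, losses, and weights may differ.

```python
import numpy as np

def nonnegative_risk(scores_anom, scores_unlab, loss, pi):
    """Nonnegative (biased) empirical risk, PU-learning style.

    scores_anom: model scores on labeled anomalies (treated as positives);
    scores_unlab: scores on unlabeled data; loss(s, y) returns per-sample
    losses for labels y in {+1, -1}; pi is the assumed anomaly proportion
    in the unlabeled data. Generic sketch, not the paper's exact estimator.
    """
    r_anom = pi * loss(scores_anom, +1).mean()
    # unlabeled risk for the normal class, corrected for the anomalies it contains
    r_norm = loss(scores_unlab, -1).mean() - pi * loss(scores_anom, -1).mean()
    # clipping at zero keeps the empirical risk nonnegative
    return r_anom + max(r_norm, 0.0)

# example plug-in loss: logistic (sigmoid) loss
sigmoid_loss = lambda s, y: 1.0 / (1.0 + np.exp(y * s))
```

Clipping the corrected term at zero is the standard way to obtain a nonnegative (biased) estimator, as in the deep model above; the shallow model in the abstract instead uses an unbiased estimator together with regularization parameters chosen so that the empirical risk stays nonnegative.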
December 2022 · 116 Reads
In this paper, we propose an inertial alternating direction method of multipliers for solving a class of non-convex multi-block optimization problems with nonlinear coupling constraints. Distinctive features of our method, compared with other alternating direction methods of multipliers for non-convex problems with nonlinear coupling constraints, include: (i) we apply an inertial technique to the update of the primal variables, and (ii) we use a non-standard update rule for the multiplier, first scaling the multiplier by a factor and then moving along the ascent direction, where a relaxation parameter is allowed. Subsequential convergence and global convergence are established for the proposed algorithm.
September 2022 · 171 Reads · 17 Citations
Computational Optimization and Applications
In this paper, we propose an algorithmic framework, dubbed inertial alternating direction methods of multipliers (iADMM), for solving a class of nonconvex nonsmooth multiblock composite optimization problems with linear constraints. Our framework employs the general majorization-minimization (MM) principle to update each block of variables, which not only unifies the convergence analysis of previous ADMM schemes that use specific surrogate functions in the MM step, but also leads to new efficient ADMM schemes. To the best of our knowledge, in the nonconvex nonsmooth setting, neither ADMM combined with the MM principle for the block updates nor ADMM with inertial terms for the primal variables has been studied in the literature. Under standard assumptions, we prove subsequential convergence and global convergence for the generated sequence of iterates. We illustrate the effectiveness of iADMM on a class of nonconvex low-rank representation problems.
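To make the MM block update concrete, here is a hedged sketch of one standard surrogate choice covered by the MM principle (a proximal-linearized, Lipschitz-gradient majorizer); the surrogates actually analyzed in the paper may differ, and all names are illustrative.

```python
# One MM block update inside an ADMM sweep: instead of minimizing the augmented
# Lagrangian L (with the other blocks and the multiplier fixed) exactly in block x,
# minimize a surrogate that majorizes it at the current point x_k. With the
# Lipschitz-gradient upper bound below, the surrogate minimizer is a prox-gradient step.

def mm_block_update(x_k, grad_L, lipschitz, prox_g):
    """Minimize u(x | x_k) = L(x_k) + <grad_L(x_k), x - x_k> + (lipschitz/2)||x - x_k||^2 + g(x),
    which majorizes L + g when grad L is lipschitz-Lipschitz; prox_g(v, t) is the proximal
    operator of the nonsmooth block term g with step size t."""
    step = 1.0 / lipschitz
    return prox_g(x_k - step * grad_L(x_k), step)
```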
March 2022 · 260 Reads · 17 Citations
SIAM Journal on Mathematics of Data Science
January 2022 · 298 Reads
This paper considers a multiblock nonsmooth nonconvex optimization problem with nonlinear coupling constraints. Building on the idea of the information zone and adaptive regime proposed in [J. Bolte, S. Sabach and M. Teboulle, Nonconvex Lagrangian-based optimization: Monitoring schemes and global convergence, Mathematics of Operations Research, 43: 1210-1232, 2018], we propose a multiblock alternating direction method of multipliers for solving this problem. We specify the update of the primal variables by employing a majorization-minimization procedure in each block update. An independent convergence analysis is conducted to prove subsequential as well as global convergence of the generated sequence to a critical point of the augmented Lagrangian. We also establish iteration complexity and provide preliminary numerical results for the proposed algorithm.
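For reference, a generic template (not necessarily the paper's exact formulation) of the augmented Lagrangian whose critical points the method targets, for blocks x_1, ..., x_p with nonlinear coupling constraint c(x_1, ..., x_p) = 0:

```latex
\mathcal{L}_{\rho}(x_1,\dots,x_p, y)
  = \sum_{i=1}^{p} f_i(x_i) + g(x_1,\dots,x_p)
  + \langle y,\, c(x_1,\dots,x_p) \rangle
  + \frac{\rho}{2}\,\| c(x_1,\dots,x_p) \|^{2}.
```

One sweep updates each block x_i via the majorization-minimization step mentioned in the abstract, then updates the multiplier y along the constraint residual.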
August 2021 · 27 Reads · 8 Citations
... Although convergence analysis does not rely on the boundedness of the multiplier sequence, it requires dynamically generated parameters during the backtracking process to ensure the boundedness of the generated sequence. Paper [13] used the upper bound minimization method to update the primal block variables. Building on this, paper [4] introduced inertia techniques and proposed a convergent alternating direction multiplier method with scaling factors to establish update rules. ...
December 2024
Optimization
... LMMs are most widely used because of their simplicity and effectiveness; they are based on the assumption that the photons reaching the hyperspectral sensor must interact with only one material so that each mixed pixel can be expressed as a linear combination of a finite number of endmembers weighted by the corresponding abundances [1]. In this study, among LMMs, we present a comparative analysis between two particular fully unsupervised approaches: deep nonnegative matrix factorization (DNMF) [2,3] and artificial neural network autoencoder (AE)-based methods [4]. In the peculiar context of HU, DNMF represents the recent deep extension of nonnegative matrix factorizations (NMF), which largely demonstrated their capabilities of automatically extracting latent feature representation from HSIs also preserving the physical nonnegativity of data [5]. ...
October 2024
... The typical example of problem (2) is the following Logistic Matrix Factorization problem [4]: ...
March 2024
Journal of Global Optimization
... Since the proposed model is non-convex, we adopt the alternating direction method of multipliers (ADMM) as the outer-layer optimization strategy. While ADMM was originally designed for 2-block convex optimization problems, recent theoretical advances have demonstrated its empirical effectiveness in handling non-convex objective functions or non-convex sets [26][27][28][29][30][31]. This motivates our application of ADMM to the AO retinal image restoration task. ...
September 2022
Computational Optimization and Applications
... In contrast, our problem (9) presents unique computational advantages when its multi-block structure is properly leveraged. Although some studies, such as BMME [25], have addressed multi-block optimization by applying BPG to individual subproblems, our approach is fundamentally distinguished by the W-subproblem in (9) admitting a closed-form solution, a distinctive feature that sets our method apart from existing frameworks. A comprehensive summary is presented in Table 1. ...
March 2022
SIAM Journal on Mathematics of Data Science
... Leveraging the convergence outcomes established for BSUM, TITAN, and BMMe, numerous algorithms addressing low-rank factorization problems come with guaranteed convergence. For example, BSUM assures the convergence of a perturbed Multiplicative Update (MU) and a block mirror descent method for KL NMF, see ; TITAN provides convergence guarantees for accelerated algorithms dealing with min-vol NMF [Thanh et al., 2021], sparse NMF and matrix completion [Hien et al., 2023]; BMMe guarantees convergence of MU with extrapolation for β-NMF with β ∈ [1, 2]. ...
August 2021
... Ahookhosh, Hien, Gillis and Patrinos [3] proposed a multi-block transformation of the proximal alternating linearized minimization method, and an adaptive version for minimizing the sum of a multi-block relatively smooth function and a block separable (nonconvex) nonsmooth function. Differently from the multi-block relative smoothness condition in [3] that uses a fixed kernel function for all blocks, the authors in [2] gave a block relative smoothness condition permitting disparate kernel functions for different blocks. Then they proposed a block inertial Bregman proximal algorithm and established the sequence convergence. ...
June 2021
Journal of Optimization Theory and Applications
... where f : R^n → R has block coordinate-wise Lipschitz gradient, ψ : R^n → R is twice differentiable (both functions possibly nonseparable and nonconvex), and φ : R^n → R is the indicator function of a convex closed separable set Q = Q_1 × ⋯ × Q_n. Optimization problems having this composite structure arise in many applications such as orthogonal nonnegative matrix factorization [2] and distributed control [9]. When the dimension of these problems is large, the usual methods based on full gradient and Hessian perform poorly. ...
June 2021
Computational Optimization and Applications
... We considered other NMF schemes minimizing the matrix KL divergence [14,15], but we found that the multiplicative update (MU) scheme worked best. The initial condition for the NMF computation was based on the SVD of the target matrix following the prescription in [16]. ...
June 2021
Journal of Scientific Computing
... These include, for instance, multiplicative updates [26], hierarchical alternating least-squares [51], alternating direction method of multipliers [18] related to non-negative matrix factorization, or more general interior-point methods [48] for quadratic programs; see, e.g., [5], [22, Section 5.6], and [10, Chapter 4] for overviews. Extending vanilla alternating non-negative strategies, further acceleration and extrapolation methods are developed in order to improve (empirical) convergence speed for alternating non-negative matrix and tensor factorization; see, e.g., [47, Section 3.4] as well as [29,31] for some recent works. ...
March 2021
Numerical Linear Algebra with Applications