Le Thi Khanh Hien’s research while affiliated with University of Mons and other places


Publications (36)


Multiblock ADMM for nonsmooth nonconvex optimization with nonlinear coupling constraints
  • Article

December 2024 · 14 Reads · 2 Citations · Optimization

Le Thi Khanh Hien

Figures:
  • Deep NMF applied to the Urban hyperspectral image, an aerial image of a Walmart in Copperas Cove, Texas. The rooftop and the parking lot of the store are easy to identify; see the fourth and fifth images in panel (a), respectively. Using deep NMF with two layers: (a) layer 1 with r1 = 6 contains the abundance maps H1 corresponding to the spectral signatures in W1, and (b) layer 2 with r2 = 2 contains the abundance maps H2H1 corresponding to the spectral signatures in W2. As the factorization unfolds, deep NMF generates denser abundance maps, which are combinations of abundance maps from previous layers. Here, the first level extracts six materials (grass, rooftops and dirt, trees, other rooftops, road, and dirt), which are merged into vegetation versus non-vegetation at the second layer.
  • Evolution of the median errors at the different levels of deep β-NMF with β = 3/2 (initialized with multilayer β-NMF after 500 iterations), divided by the error of multilayer β-NMF after 1000 iterations.
  • Example of facial features extracted by multilayer β-NMF versus deep β-NMF for β = 3/2.
  • Evolution of the error at the different levels of deep KL-NMF divided by the error of multilayer KL-NMF.
  • HSI data set: Moffett Field, acquired by AVIRIS in 1997, with the region of interest (right) shown in synthetic colors. Source: figure reproduced from Dobigeon et al. (2009); image (left) courtesy NASA/JPL-Caltech.

Deep Nonnegative Matrix Factorization With Beta Divergences
  • Article
  • Full-text available

October 2024 · 81 Reads · 2 Citations

Deep nonnegative matrix factorization (deep NMF) has recently emerged as a valuable technique for extracting multiple layers of features across different scales. However, all existing deep NMF models and algorithms have primarily centered their evaluation on the least squares error, which may not be the most appropriate metric for assessing the quality of approximations on diverse data sets. For instance, when dealing with data types such as audio signals and documents, it is widely acknowledged that β-divergences offer a more suitable alternative. In this article, we develop new models and algorithms for deep NMF using some β-divergences, with a focus on the Kullback-Leibler divergence. Subsequently, we apply these techniques to the extraction of facial features, the identification of topics within document collections, and the identification of materials within hyperspectral images.
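As a rough illustration of the layered decomposition described in the abstract and figure captions, here is a minimal Python sketch of the sequential (multilayer) scheme using scikit-learn's KL-divergence NMF (β = 1). The data matrix is a random stand-in, the ranks r1 = 6 and r2 = 2 mirror the Urban example above, and the paper's deep β-NMF additionally refits all layers jointly, which this sequential pass omits.

```python
# Multilayer KL-NMF sketch: each layer factors the previous layer's basis.
# The data matrix is a random stand-in for a (bands x pixels) hyperspectral
# matrix; ranks follow the Urban example in the figure captions above.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
X = rng.random((162, 9409))  # stand-in data, not the actual Urban image

def kl_nmf_layer(M, rank):
    """Fit one NMF layer M ~ W @ H under the KL divergence."""
    model = NMF(n_components=rank, beta_loss="kullback-leibler",
                solver="mu", init="nndsvda", max_iter=500)
    W = model.fit_transform(M)
    return W, model.components_

# Layer 1: X ~ W1 @ H1, with H1 the abundance maps for the signatures in W1.
W1, H1 = kl_nmf_layer(X, rank=6)
# Layer 2 factors the layer-1 basis: W1 ~ W2 @ H2, hence X ~ W2 @ (H2 @ H1);
# the layer-2 abundance maps are H2 @ H1, matching the figure captions above.
W2, H2 = kl_nmf_layer(W1, rank=2)
```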


Figures:
  • iADMMn for solving Problem (1).
  • Evolution of the log of the mean of the objective values with respect to time.
An inertial ADMM for a class of nonconvex composite optimization with nonlinear coupling constraints

March 2024 · 41 Reads · 5 Citations · Journal of Global Optimization

In this paper, we propose an inertial alternating direction method of multipliers for solving a class of non-convex multi-block optimization problems with nonlinear coupling constraints. Distinctive features of our proposed method, compared with other alternating direction methods of multipliers for solving non-convex problems with nonlinear coupling constraints, are that (i) we apply the inertial technique to the update of the primal variables, and (ii) we apply a non-standard update rule for the multiplier, scaling the multiplier by a factor before moving along the ascent direction, where a relaxation parameter is allowed. Subsequential convergence and global convergence of the proposed algorithm are established.
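As a purely schematic illustration of feature (ii), the multiplier step can be sketched as below; the symbols (scaling factor ω_k, penalty β, relaxation parameter γ, coupling map h) are illustrative placeholders, not the paper's exact notation.

```latex
% Schematic of the scaled multiplier update (illustrative notation, not the
% paper's exact symbols): the multiplier is first scaled by a factor
% \omega_k, then moved along the ascent direction given by the constraint
% residual, with a relaxation parameter \gamma > 0 allowed in the step.
\[
  y^{k+1} \;=\; \omega_k\, y^{k}
  \;+\; \gamma\,\beta\, h\bigl(x_1^{k+1}, \dots, x_s^{k+1}\bigr),
\]
% where h(x_1, ..., x_s) = 0 collects the nonlinear coupling constraints,
% \beta > 0 is the penalty parameter, and x_i^{k+1} are the freshly updated
% primal blocks.
```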


Figures:
  • Deterministic IPDS framework.
  • Randomized primal-dual smoothed gap reduction framework.
An Inexact Primal-Dual Smoothing Framework for Large-Scale Non-Bilinear Saddle Point Problems

December 2023 · 264 Reads · 32 Citations · Journal of Optimization Theory and Applications

We develop an inexact primal-dual first-order smoothing framework to solve a class of non-bilinear saddle point problems with primal strong convexity. Compared with existing methods, our framework yields a significant improvement in the primal oracle complexity, while having competitive dual oracle complexity. In addition, we consider the situation where the primal-dual coupling term has a large number of component functions. To efficiently handle this situation, we develop a randomized version of our smoothing framework, which allows the primal and dual sub-problems in each iteration to be solved by randomized algorithms inexactly in expectation. The convergence of this framework is analyzed both in expectation and with high probability. In terms of the primal and dual oracle complexities, this framework significantly improves over its deterministic counterpart. As an important application, we adapt both frameworks to solving convex optimization problems with many functional constraints. To obtain an ε-optimal and ε-feasible solution, both frameworks achieve the best-known oracle complexities (in terms of their dependence on ε).
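For orientation, the non-bilinear saddle point template treated here can be written in the generic form below; the notation is standard but illustrative, and the precise structural assumptions are those stated in the article.

```latex
% Generic non-bilinear saddle point template (illustrative notation):
\[
  \min_{x \in \mathcal{X}} \; \max_{y \in \mathcal{Y}}
  \;\; f(x) \;+\; \Phi(x, y) \;-\; g(y),
\]
% with f strongly convex (the "primal strong convexity"), g convex, and a
% coupling term \Phi that need not be bilinear in (x, y). The randomized
% framework targets the finite-sum case
%   \Phi(x, y) = \tfrac{1}{n} \sum_{i=1}^{n} \Phi_i(x, y)
% with a large number n of component functions.
```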


Anomaly detection with semi-supervised classification based on risk estimators

September 2023 · 52 Reads

A significant limitation of one-class classification anomaly detection methods is their reliance on the assumption that unlabeled training data only contains normal instances. To overcome this impractical assumption, we propose two novel classification-based anomaly detection methods. Firstly, we introduce a semi-supervised shallow anomaly detection method based on an unbiased risk estimator. Secondly, we present a semi-supervised deep anomaly detection method utilizing a nonnegative (biased) risk estimator. We establish estimation error bounds and excess risk bounds for both risk minimizers. Additionally, we propose techniques to select appropriate regularization parameters that ensure the nonnegativity of the empirical risk in the shallow model under specific loss functions. Our extensive experiments provide strong evidence of the effectiveness of the risk-based anomaly detection methods.
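The nonnegative (biased) risk estimator mentioned above follows the standard nonnegative-correction idea; the display below is a generic sketch of that device with illustrative notation (class prior π, empirical risk terms R̂ on labeled and unlabeled samples), not the paper's exact estimator.

```latex
% Generic nonnegative-correction device behind biased risk estimators
% (illustrative notation, not the paper's exact formulation): \pi is a class
% prior, and \widehat{R}^{+}, \widehat{R}^{-} denote empirical risks on the
% two sides of the decision, estimated from labeled (l) and unlabeled (u)
% samples. An unbiased estimator lets the bracketed term go negative on
% finite samples; clamping it at zero yields the nonnegative (biased) variant.
\[
  \widetilde{R}(f) \;=\; \pi\, \widehat{R}^{+}_{l}(f)
  \;+\; \max\Bigl\{\, 0,\; \widehat{R}^{-}_{u}(f)
  \;-\; \pi\, \widehat{R}^{-}_{l}(f) \Bigr\}.
\]
% Dropping the max(0, .) clamp recovers an unbiased estimator; per the
% abstract, the paper instead selects regularization parameters ensuring
% nonnegativity of the empirical risk in the shallow model under specific
% loss functions.
```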


An inertial ADMM for a class of nonconvex composite optimization with nonlinear coupling constraints

December 2022 · 116 Reads

In this paper, we propose an inertial alternating direction method of multipliers for solving a class of non-convex multi-block optimization problems with nonlinear coupling constraints. Distinctive features of our proposed method, compared with other alternating direction methods of multipliers for solving non-convex problems with nonlinear coupling constraints, are that (i) we apply the inertial technique to the update of the primal variables, and (ii) we apply a non-standard update rule for the multiplier, scaling the multiplier by a factor before moving along the ascent direction, where a relaxation parameter is allowed. Subsequential convergence and global convergence of the proposed algorithm are established.


Figures:
  • Evolution of the segmentation error rate and the objective function value with respect to time. For Hopkins155, the results are the average values over 156 sequences.
  • Fig. 2: Evolution of the average value of the segmentation error rate and the objective function value with respect to time on Hopkins155.
  • Fig. 6: Evolution of the average objective function value of Problem (64) with respect to time on the image data sets CBCL (top left), ORL (top right), Frey (bottom left), and Umist (bottom right).
Inertial alternating direction method of multipliers for non-convex non-smooth optimization

September 2022 · 171 Reads · 17 Citations · Computational Optimization and Applications

In this paper, we propose an algorithmic framework, dubbed inertial alternating direction method of multipliers (iADMM), for solving a class of nonconvex nonsmooth multiblock composite optimization problems with linear constraints. Our framework employs the general majorization-minimization (MM) principle to update each block of variables, which not only unifies the convergence analysis of previous ADMM schemes that use specific surrogate functions in the MM step, but also leads to new efficient ADMM schemes. To the best of our knowledge, in the nonconvex nonsmooth setting, ADMM combined with the MM principle to update each block of variables, and ADMM combined with inertial terms for the primal variables, have not been studied in the literature. Under standard assumptions, we prove subsequential convergence and global convergence of the generated sequence of iterates. We illustrate the effectiveness of iADMM on a class of nonconvex low-rank representation problems.
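To make the MM-plus-inertia combination concrete, here is a hedged Python sketch of one inertial MM block update of the kind such a framework performs inside each sweep. The proximal-linearized surrogate is one admissible choice of majorizer; all names (grad_block, prox_g, L_i, alpha) are illustrative, not the paper's notation.

```python
# One inertial MM update of a single block: minimize a surrogate that
# majorizes the augmented Lagrangian in that block, evaluated at an
# inertially extrapolated point. The proximal-linearized surrogate used
# here is a common choice; the framework admits other MM surrogates.

def inertial_mm_block_step(x_i, x_i_prev, grad_block, prox_g, L_i, alpha=0.3):
    """Sketch of one inertial MM update of block i.

    x_i, x_i_prev -- current and previous iterates of block i;
    grad_block    -- gradient of the smooth part of the augmented Lagrangian
                     with respect to block i (all other blocks held fixed);
    prox_g        -- proximal operator of block i's nonsmooth term;
    L_i           -- Lipschitz bound making the quadratic surrogate a majorizer;
    alpha         -- inertial (extrapolation) weight.
    """
    z = x_i + alpha * (x_i - x_i_prev)      # inertial extrapolation point
    return prox_g(z - grad_block(z) / L_i, 1.0 / L_i)
```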



Tables:
  • The final fitting error of mADMM and prox-linear in solving Problem (67) with real data sets.
  • The final optimality gap and feasibility error of mADMM and GenELin in solving the GEV problem.
Multiblock ADMM for nonsmooth nonconvex optimization with nonlinear coupling constraints

January 2022 · 298 Reads

This paper considers a multiblock nonsmooth nonconvex optimization problem with nonlinear coupling constraints. Developing the idea of the information zone and adaptive regime proposed in [J. Bolte, S. Sabach and M. Teboulle, Nonconvex Lagrangian-based optimization: Monitoring schemes and global convergence, Mathematics of Operations Research, 43: 1210–1232, 2018], we propose a multiblock alternating direction method of multipliers for solving this problem. We specify the update of the primal variables by employing a majorization-minimization procedure in each block update. An independent convergence analysis proves subsequential as well as global convergence of the generated sequence to a critical point of the augmented Lagrangian. We also establish iteration complexity and provide preliminary numerical results for the proposed algorithm.
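For reference, the augmented Lagrangian whose critical points are the convergence targets can be written in the standard form below (illustrative notation: primal blocks x_1, …, x_s, multiplier y, penalty ρ).

```latex
% Standard augmented Lagrangian for the multiblock problem (illustrative
% notation): f is the nonsmooth nonconvex objective, h(x_1,...,x_s) = 0 the
% nonlinear coupling constraint, y the multiplier, rho > 0 the penalty.
\[
  \mathcal{L}_{\rho}(x_1, \dots, x_s, y)
  \;=\; f(x_1, \dots, x_s)
  \;+\; \bigl\langle y,\, h(x_1, \dots, x_s) \bigr\rangle
  \;+\; \frac{\rho}{2}\, \bigl\| h(x_1, \dots, x_s) \bigr\|^2 .
\]
% Each block x_i is updated (here via a majorization-minimization step) with
% the other blocks and y fixed, followed by the multiplier update.
```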



Citations (18)


... Although the convergence analysis does not rely on the boundedness of the multiplier sequence, it requires dynamically generated parameters during the backtracking process to ensure the boundedness of the generated sequence. Paper [13] used the upper-bound minimization method to update the primal block variables. Building on this, paper [4] introduced inertial techniques and proposed a convergent alternating direction method of multipliers with scaling factors in the multiplier update rule. ...

Reference:

The Proximal Alternating Direction Method of Multipliers for a Class of Nonlinear Constrained Optimization Problems
Multiblock ADMM for nonsmooth nonconvex optimization with nonlinear coupling constraints
  • Citing Article
  • December 2024

Optimization

... LMMs are most widely used because of their simplicity and effectiveness; they are based on the assumption that the photons reaching the hyperspectral sensor must interact with only one material, so that each mixed pixel can be expressed as a linear combination of a finite number of endmembers weighted by the corresponding abundances [1]. In this study, among LMMs, we present a comparative analysis between two particular fully unsupervised approaches: deep nonnegative matrix factorization (DNMF) [2,3] and artificial neural network autoencoder (AE)-based methods [4]. In the particular context of HU, DNMF is the recent deep extension of nonnegative matrix factorization (NMF), which has amply demonstrated its capability to automatically extract latent feature representations from HSIs while preserving the physical nonnegativity of the data [5]. ...

Deep Nonnegative Matrix Factorization With Beta Divergences

... Since the proposed model is non-convex, we adopt the alternating direction method of multipliers (ADMM) as the outer-layer optimization strategy. While ADMM was originally designed for 2-block convex optimization problems, recent advances have demonstrated its effectiveness in handling non-convex objective functions or non-convex sets [26][27][28][29][30][31]. This motivates our application of ADMM to the AO retinal image restoration task. ...

Inertial alternating direction method of multipliers for non-convex non-smooth optimization

Computational Optimization and Applications

... In contrast, our problem (9) presents unique computational advantages when its multi-block structure is properly leveraged. Although some studies, such as BMME [25], have addressed multi-block optimization by applying BPG to individual subproblems, our approach is fundamentally distinguished by the W-subproblem in (9) admitting a closed-form solution, a distinctive feature that sets our method apart from existing frameworks. A comprehensive summary is presented in Table 1. ...

Block Bregman Majorization Minimization with Extrapolation
  • Citing Article
  • March 2022

SIAM Journal on Mathematics of Data Science

... Leveraging the convergence outcomes established for BSUM, TITAN, and BMMe, numerous algorithms addressing low-rank factorization problems come with guaranteed convergence. For example, BSUM assures the convergence of a perturbed multiplicative update (MU) and a block mirror descent method for KL NMF; TITAN provides convergence guarantees for accelerated algorithms dealing with min-vol NMF [Thanh et al., 2021], sparse NMF, and matrix completion [Hien et al., 2023]; BMMe guarantees convergence of MU with extrapolation for β-NMF with β ∈ [1, 2]. ...

Inertial Majorization-Minimization Algorithm for Minimum-Volume NMF
  • Citing Conference Paper
  • August 2021

... Ahookhosh, Hien, Gillis and Patrinos [3] proposed a multi-block transformation of the proximal alternating linearized minimization method, and an adaptive version, for minimizing the sum of a multi-block relatively smooth function and a block separable (nonconvex) nonsmooth function. Unlike the multi-block relative smoothness condition in [3], which uses a fixed kernel function for all blocks, the authors in [2] gave a block relative smoothness condition permitting disparate kernel functions for different blocks. They then proposed a block inertial Bregman proximal algorithm and established convergence of the iterate sequence. ...

A Block Inertial Bregman Proximal Algorithm for Nonsmooth Nonconvex Problems with Application to Symmetric Nonnegative Matrix Tri-Factorization

Journal of Optimization Theory and Applications

... where f : R^n → R has a block coordinate-wise Lipschitz gradient, ψ : R^n → R is twice differentiable (both functions possibly nonseparable and nonconvex), and φ : R^n → R is the indicator function of a closed convex separable set Q = Q_1 × ⋯ × Q_n. Optimization problems having this composite structure arise in many applications such as orthogonal nonnegative matrix factorization [2] and distributed control [9]. When the dimension of these problems is large, the usual methods based on full gradient and Hessian perform poorly. ...

Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization

Computational Optimization and Applications

... We considered other NMF schemes minimizing the matrix KL divergence [14,15], but we found that the multiplicative update (MU) scheme worked best. The initial condition for the NMF computation was based on the SVD of the target matrix following the prescription in [16]. ...

Algorithms for Nonnegative Matrix Factorization with the Kullback–Leibler Divergence

Journal of Scientific Computing

... These include, for instance, multiplicative updates [26], hierarchical alternating least-squares [51], alternating direction method of multipliers [18] related to non-negative matrix factorization, or more general interior-point methods [48] for quadratic programs; see, e.g., [5], [22,Section 5.6], and [10,Chapter 4] for overviews. Extending vanilla alternating non-negative strategies, further acceleration and extrapolation methods are developed in order to improve (empirical) convergence speed for alternating non-negative matrix and tensor factorization; see, e.g., [47,Section 3.4] as well as [29,31] for some recent works. ...

Accelerating block coordinate descent for nonnegative tensor factorization
  • Citing Article
  • March 2021

Numerical Linear Algebra with Applications