Nicolas Nadisic, PhD
Ghent University and Royal Institute for Cultural Heritage (KIK-IRPA)
20 Publications · 2,322 Reads · 35 Citations
Publications (20)
Paintings deteriorate over time due to aging and storage conditions, with cracks being a common form of degradation. Detecting and mapping these cracks is crucial for art analysis and restoration, but it presents challenges. Traditional methods often require tedious manual effort, while deep learning (DL) relies on large annotated datasets, which ar...
The simultaneous sparse coding (SSC) problem consists in approximating several data points as linear combinations of the same few basis elements selected within a given dictionary. It is key in many applications of machine learning and signal processing. Solving SSC up to global optimality has never been explored in the literature, to the best of o...
Nonnegative least squares problems with multiple right-hand sides (MNNLS) arise in models that rely on additive linear combinations. In particular, they are at the core of most nonnegative matrix factorization algorithms and have many applications. The nonnegativity constraint is known to naturally favor sparsity, that is, solutions with few non-ze...
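The MNNLS problem decomposes column-wise: each right-hand side yields an independent NNLS subproblem. As a baseline illustration only (not the algorithm proposed in the paper), a minimal sketch using SciPy's NNLS solver:

```python
import numpy as np
from scipy.optimize import nnls

def mnnls(A, B):
    """Solve min ||A X - B||_F subject to X >= 0 by treating each
    column of B as an independent NNLS problem (naive baseline)."""
    return np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
```

Note that plain NNLS already tends to zero out some entries; the abstract's point is that explicit sparsity constraints enhance this effect further.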
The successive projection algorithm (SPA) is a widely used algorithm for nonnegative matrix factorization (NMF). It is based on the separability assumption. In hyperspectral unmixing, that is, the extraction of materials in a hyperspectral image, separability is equivalent to the pure-pixel assumption and states that for each material present in th...
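Under the pure-pixel (separability) assumption, SPA greedily selects the column with the largest residual norm, then projects all columns onto the orthogonal complement of the selected one. A compact NumPy sketch of this standard greedy loop (index selection only, without the paper's robustness analysis):

```python
import numpy as np

def spa(X, r):
    """Successive Projection Algorithm: return r column indices of X
    that act as the "pure" columns under the separability assumption."""
    R = X.astype(float).copy()
    indices = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))  # largest residual norm
        u = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(u, u @ R)  # project out the selected direction
        indices.append(j)
    return indices
```

In the noiseless separable case (full-rank basis, mixing weights summing to at most one), this greedy scheme provably recovers the pure columns.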
Given a set of data points belonging to the convex hull of a set of vertices, a key problem in data analysis and machine learning is to estimate these vertices in the presence of noise. Many algorithms have been developed under the assumption that there is at least one nearby data point to each vertex; two of the most widely used ones are vertex co...
The k-sparse nonnegative least squares (NNLS) problem is a variant of the standard least squares problem, where the solution is constrained to be nonnegative and to have at most k nonzero entries. Several methods exist to tackle this NP-hard problem, including fast but approximate heuristics, and exact methods based on brute-force or branch-and-bou...
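For small problem sizes, the brute-force exact method referred to here can be written directly: enumerate every candidate support of size at most k, solve the NNLS subproblem restricted to that support, and keep the best solution. A hedged sketch, exponential in the number of columns and purely illustrative:

```python
import itertools
import numpy as np
from scipy.optimize import nnls

def k_sparse_nnls_bruteforce(A, b, k):
    """Exact k-sparse NNLS: min ||Ax - b||_2 s.t. x >= 0, ||x||_0 <= k,
    by exhaustive enumeration of supports (illustrative, not scalable)."""
    n = A.shape[1]
    best_x, best_res = np.zeros(n), np.linalg.norm(b)  # empty support
    for size in range(1, k + 1):
        for S in itertools.combinations(range(n), size):
            cols = list(S)
            x_S, res = nnls(A[:, cols], b)
            if res < best_res:
                best_res = res
                best_x = np.zeros(n)
                best_x[cols] = x_S
    return best_x, best_res
```

Branch-and-bound methods, as discussed in the abstract, prune most of this search tree while retaining optimality guarantees.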
We propose a new variant of nonnegative matrix factorization (NMF), combining separability and sparsity assumptions. Separability requires that the columns of the first NMF factor are equal to columns of the input matrix, while sparsity requires that the columns of the second NMF factor are sparse. We call this variant sparse separable NMF (SSNMF),...
Nonnegative least squares (NNLS) problems arise in models that rely on additive linear combinations. In particular, they are at the core of nonnegative matrix factorization (NMF) algorithms. The nonnegativity constraint is known to naturally favor sparsity, that is, solutions with few non-zero entries. However, it is often useful to further enhance...
We propose a novel approach to solve exactly the sparse nonnegative least squares problem, under hard L0 sparsity constraints. This approach is based on a dedicated branch-and-bound algorithm. This simple strategy is able to compute the optimal solution even in complicated cases such as noisy or ill-conditioned data, where traditional approaches fa...
Many data mining tasks rely on pattern mining. To identify the patterns of interest in a dataset, an analyst may define several measures that score, in different ways, the relevance of a pattern. Until recently, most algorithms have only handled constraints in an efficient way, i.e., every measure had to be associated with a user-defined threshold,...
Transactional datasets are 0/1 matrices, which generically stand for objects having Boolean properties. If every cell of the matrix is additionally associated with a real number called utility, a high-utility itemset relates to an all-ones sub-matrix with utilities that sum to a high-enough value. This article shows that “having a total utility exce...
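To make the definition concrete, the total utility of an itemset is the sum of the utilities over the all-ones sub-matrix formed by the itemset's columns and the transactions that contain all of its items. A minimal sketch under that reading (the function name is illustrative, not from the article):

```python
import numpy as np

def itemset_utility(M, U, itemset):
    """Total utility of an itemset in a 0/1 transactional matrix M
    (rows = transactions, cols = items) with per-cell utilities U:
    sum of U over the supporting rows restricted to the itemset's columns."""
    items = list(itemset)
    support = np.all(M[:, items] == 1, axis=1)  # rows containing all items
    return U[np.ix_(np.where(support)[0], items)].sum()
```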