Conference Paper

# A fast weighted median algorithm based on Quickselect

Dept. of Electr. & Comput. Eng., Univ. of Delaware, Newark, DE, USA

DOI: 10.1109/ICIP.2010.5651855 Conference: Image Processing (ICIP), 2010 17th IEEE International Conference on Source: IEEE Xplore

**ABSTRACT**

Weighted median filters are increasingly used in signal processing applications, so fast implementations are of importance. This paper introduces a fast algorithm to compute the weighted median of N samples with linear time and space complexity, as opposed to the O(N log N) time complexity of traditional sorting-based approaches. The proposed algorithm is based on Quickselect, which is closely related to the well-known Quicksort. We compare the runtime and complexity to Floyd and Rivest's algorithm SELECT, which to date has been the fastest median-finding algorithm, and show that our algorithm is up to 30% faster.
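To illustrate the idea behind a Quickselect-style weighted median, the sketch below partitions the samples around a pivot and recurses only into the side that contains the weighted median, adjusting the remaining weight target. This is a minimal illustrative sketch, not the paper's algorithm: the function name and the random pivot choice are assumptions, and the paper's method differs in its pivot strategy and optimizations.

```python
import random

def weighted_median(values, weights):
    """Lower weighted median via Quickselect-style partitioning.

    Returns the smallest value v such that the total weight of samples
    <= v is at least half the total weight. Average-case O(N) time.
    """
    target = sum(weights) / 2.0
    pairs = list(zip(values, weights))
    while True:
        if len(pairs) == 1:
            return pairs[0][0]
        pivot = random.choice(pairs)[0]
        less = [(v, w) for v, w in pairs if v < pivot]
        equal_w = sum(w for v, w in pairs if v == pivot)
        greater = [(v, w) for v, w in pairs if v > pivot]
        w_less = sum(w for _, w in less)
        if w_less >= target:
            pairs = less                 # median lies strictly below the pivot
        elif w_less + equal_w >= target:
            return pivot                 # pivot itself carries the median weight
        else:
            target -= w_less + equal_w   # discard left side, reduce the target
            pairs = greater
```

With uniform weights this reduces to the ordinary (lower) median, e.g. `weighted_median([1, 2, 3], [1, 1, 1])` returns `2`.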



**ABSTRACT:** In this paper, we propose a simple and robust algorithm for compressive sensing (CS) signal reconstruction based on the weighted median (WM) operator. The proposed approach addresses the reconstruction problem by solving an l<sub>0</sub>-regularized least absolute deviation (l<sub>0</sub>-LAD) regression problem with a tunable regularization parameter, making it suitable for applications where the underlying contamination follows a statistical model with heavier-than-Gaussian tails. The solution to this regularized LAD regression problem is efficiently computed, under a coordinate descent framework, by an iterative algorithm that comprises two stages. In the first stage, an estimate of the sparse signal is found by recasting the reconstruction problem as a parameter location estimation for each entry in the sparse vector, leading to the minimization of a sum of weighted absolute deviations. The solution to this one-dimensional minimization problem turns out to be the WM operator acting on a shifted-and-scaled version of the measurement samples, with weights taken from the entries of the measurement matrix. The resulting estimate is then passed to a second stage that identifies whether the corresponding entry is relevant or not. This stage is achieved by a hard threshold operator with an adaptable thresholding parameter that is suitably tuned as the algorithm progresses. This two-stage operation, a WM operator followed by a hard threshold operator, adds the desired robustness to the estimation of the sparse signal and, at the same time, ensures the sparsity of the solution. Extensive simulations demonstrate the reconstruction capability of the proposed approach under different noise models. We compare the performance of the proposed approach to that of state-of-the-art CS reconstruction algorithms, showing that our approach achieves better performance for different noise distributions. In particular, as the distribution tails become heavier, the performance gain achieved by the proposed approach increases significantly.
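The one-dimensional location step described above, minimizing a sum of weighted absolute deviations, is solved exactly by the weighted median. The sketch below uses a simple O(N log N) sorting-based weighted median to make that property concrete; the function names are hypothetical and not from the paper.

```python
def wm_location(samples, weights):
    """Weighted median: the minimizer of sum_i w_i * |samples_i - beta|."""
    pairs = sorted(zip(samples, weights))
    half = sum(weights) / 2.0
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:          # first sample whose cumulative weight crosses half
            return v

def wad(samples, weights, beta):
    """Weighted absolute deviation objective evaluated at beta."""
    return sum(w * abs(v - beta) for v, w in zip(samples, weights))
```

For example, with samples `[0.0, 1.0, 5.0]` and weights `[1.0, 1.0, 3.0]`, the heavily weighted sample dominates and `wm_location` returns `5.0`; no other sample value achieves a lower `wad` objective.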

**ABSTRACT:** In this paper, we address the compressive sensing signal reconstruction problem by solving an ℓ0-regularized Least Absolute Deviation (LAD) regression problem. A coordinate descent algorithm is developed to solve this ℓ0-LAD optimization problem, leading to a two-stage operation for signal estimation and basis selection. In the first stage, an estimate of the sparse signal is found by a weighted median operator acting on a shifted-and-scaled version of the measurement samples, with weights taken from the entries of the projection matrix. The resulting estimate is then passed to the second stage, which identifies whether the corresponding entry is relevant or not. This stage is achieved by a hard threshold operator with an adaptable thresholding parameter that is suitably tuned as the algorithm progresses.
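The basis-selection stage described above can be sketched as an elementwise hard threshold: entries whose magnitude falls below the current threshold are declared irrelevant and zeroed. This is an illustrative sketch only; the function name and the fixed threshold are assumptions, whereas the papers tune the parameter adaptively as the iterations progress.

```python
def hard_threshold(x, tau):
    """Keep entries with magnitude above tau; zero the rest.

    Acts as the basis-selection step: surviving entries are treated
    as relevant components of the sparse signal estimate.
    """
    return [v if abs(v) > tau else 0.0 for v in x]
```

For example, `hard_threshold([0.1, -2.0, 0.5], 0.4)` zeroes only the first entry, keeping the two components whose magnitudes exceed the threshold.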

**ABSTRACT:** Low-rank tensor factorization (LRTF) provides a useful mathematical tool to reveal and analyze multi-factor structures underlying data in a wide range of practical applications. One challenging issue in LRTF is how to recover a low-rank higher-order representation of the given high-dimensional data in the presence of outliers and missing entries, i.e., the so-called robust LRTF problem. The L1-norm LRTF is a popular strategy for robust LRTF due to its intrinsic robustness to heavy-tailed noise and outliers. However, few L1-norm LRTF algorithms have been developed, owing to its non-convexity and non-smoothness, as well as the high-order structure of the data. In this paper we propose a novel cyclic weighted median (CWM) method to solve the L1-norm LRTF problem. The main idea is to recursively optimize each coordinate involved in the L1-norm LRTF problem with all the others fixed. Each of these single-scalar-parameter sub-problems is convex and can be easily solved by a weighted median filter, and thus an effective algorithm can be readily constructed to tackle the original complex problem. Our extensive experiments on synthetic data and real face data demonstrate that the proposed method performs more robustly than previous methods in the presence of outliers and/or missing entries.
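The cyclic weighted median idea can be illustrated in the simplest rank-1 matrix case: fitting X ≈ u vᵀ under the L1 norm, the update of each u_i with v fixed minimizes Σ_j |X_ij − u_i v_j| = Σ_j |v_j| · |X_ij / v_j − u_i|, whose exact solution is a weighted median. The sketch below shows one such coordinate sweep; function names are hypothetical, and the full CWM method cycles over all factors of a higher-order tensor.

```python
def wmed(vals, wts):
    """Sorting-based lower weighted median of vals with positive weights wts."""
    pairs = sorted(zip(vals, wts))
    half = sum(wts) / 2.0
    cum = 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v

def update_u(X, v):
    """One CWM sweep over u for a rank-1 L1 fit X ~ u v^T, with v held fixed.

    Each u_i is the weighted median of {X_ij / v_j} with weights {|v_j|},
    the exact minimizer of the row's L1 residual. Zero entries of v
    contribute nothing to the objective and are skipped.
    """
    wts = [abs(vj) for vj in v if vj != 0]
    u = []
    for row in X:
        vals = [x / vj for x, vj in zip(row, v) if vj != 0]
        u.append(wmed(vals, wts))
    return u
```

On a matrix that is exactly rank 1, a single sweep recovers the consistent factor: for `X = [[2, 4], [3, 6]]` and `v = [1, 2]`, the update returns `u = [2, 3]`.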