Jian Xu’s research while affiliated with Xi’an University of Posts and Telecommunications and other places

Publications (21)


Image Super-Resolution Based on Frequency Division Generative Adversarial Network
  • Conference Paper

March 2022 · 4 Reads

Jian Xu · Qiannan Gao

An Efficient Infrared Pedestrian Segmentation Algorithm Based on Weighted Maximum Entropy Thresholding

January 2022 · 18 Reads · 1 Citation

Infrared pedestrian segmentation is a challenging problem in infrared image processing. Thresholding is one of the most commonly used image segmentation techniques, but classical entropy-based thresholding methods cannot detect pedestrians in infrared images with complex backgrounds. In this paper, a weighted maximum entropy thresholding algorithm is proposed for infrared pedestrian segmentation. The algorithm weights the entropies of the object and the background by a Gaussian-filtered histogram, so that the optimal threshold not only has a large entropy value but also lies close to a flat valley of the histogram. The time complexity of the proposed algorithm is linear in the number of gray levels of the image. The weighted entropy thresholding algorithm detects pedestrians in infrared images quickly and effectively. In the experiments, a number of infrared images of different scenes from the OTCBVS data set were tested. The results verify the performance of the weighted entropy thresholding method and show that the proposed algorithm is a simple and effective infrared pedestrian segmentation method.
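
The abstract does not give the weighting formula, so the following Python sketch pairs the classical Kapur maximum-entropy criterion with an assumed valley-favoring factor derived from a Gaussian-filtered histogram; the function name, `sigma`, and the `(1 - smooth[t])` weight are illustrative stand-ins, not the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def weighted_max_entropy_threshold(image, sigma=3.0):
    # Kapur-style maximum-entropy split of the gray-level histogram,
    # re-weighted so thresholds near flat valleys of the Gaussian-
    # filtered histogram score higher (assumed weighting).
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    smooth = gaussian_filter1d(hist.astype(float), sigma)
    smooth /= max(smooth.max(), 1e-12)        # in [0, 1], small at valleys
    cdf = np.cumsum(p)
    eps = 1e-12
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):   # cumulative sums would make this O(L), as the paper notes
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 < eps or w1 < eps:
            continue
        p0, p1 = p[: t + 1] / w0, p[t + 1 :] / w1
        h0 = -np.sum(p0 * np.log(p0 + eps))   # object entropy
        h1 = -np.sum(p1 * np.log(p1 + eps))   # background entropy
        score = (h0 + h1) * (1.0 - smooth[t]) # favor histogram valleys
        if score > best_score:
            best_score, best_t = score, t
    return best_t
```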


Two-direction Self-learning Super-Resolution Propagation Based on Neighbor Embedding

February 2021 · 22 Reads · 7 Citations

Signal Processing

Jian Xu · Yan Gao · Jun Xing · [...]

Neighbor embedding (NE) is a widely used super-resolution (SR) algorithm, but the one-to-many problem always degrades its performance. The simplest way to avoid this degradation is to extract image features from low-resolution (LR) patches that correctly reflect the features of the corresponding high-resolution (HR) patches. In this paper, we propose several feature extraction methods to extract patch features in LR space, use coarse-to-fine patch matching to select matching patches for each test patch, and find the best matching candidate to update each matching patch. Since finding matching patches in a training set is exhaustive work in NE, we exploit the traditional local/non-local similarity prior and propose a vertical similarity prior in the image pyramid to accelerate the matching patch search. To this end, we propose a random oscillation + horizontal propagation + vertical propagation strategy to update the matching patches. The experimental results show that the proposed method is superior to many existing self-learning-based methods, although it is inferior to many external-learning-based methods. These results show that the proposed feature extraction and oscillation propagation methods are useful for finding proper matching patches in NE.
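
As background for the matching step, here is a minimal sketch of the classic neighbor-embedding reconstruction (Chang et al., CVPR 2004) that the paper builds on; the proposed oscillation/propagation search and feature extraction are not reproduced, and all names are illustrative.

```python
import numpy as np

def ne_weights(x, lr_neighbors, reg=1e-4):
    # Classic NE reconstruction weights: express the LR feature x as a
    # combination of its K matched LR patches with weights summing to 1.
    K = lr_neighbors.shape[0]
    D = lr_neighbors - x                          # K x d differences
    G = D @ D.T                                   # local Gram matrix
    G += reg * max(np.trace(G), 1.0) * np.eye(K)  # regularize for stability
    w = np.linalg.solve(G, np.ones(K))
    return w / w.sum()                            # sum-to-one constraint

def ne_reconstruct(x, lr_neighbors, hr_neighbors):
    # Transfer the LR-space weights to the corresponding HR patches.
    return ne_weights(x, lr_neighbors) @ hr_neighbors
```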


[Figures: the task of image upscaling; a blurred vs. desired image; jagged and screw artefacts in degraded images and their enhancement by the IBP algorithm, traditional TV regularisation, and the proposed algorithm, alongside the desired image; a deforming curve C whose arc length is shortened toward the ideal edge to reduce jagged artefacts. 19 further figures omitted.]

Discarding jagged artifacts in image-upscaling with total variation regularization
  • Article

October 2019 · 138 Reads · 3 Citations

Image upscaling is needed in many areas. There are two types of methods: methods based on a simple hypothesis and methods based on machine learning. Most machine-learning-based methods have disadvantages: they do not support a variety of upscaling factors, they require a training process with a high time cost, and they demand a large amount of storage space and high-end equipment. To avoid these disadvantages, upscaling images with a simple hypothesis is a promising strategy, but a simple hypothesis always produces jagged artifacts. The authors propose a new method to remove these artifacts. They consider an edge in an image as a deformed curve, so that removing jagged artifacts is equivalent to shortening the full arc length of the curve. By optimizing the regularization model, the severity of the artifacts decreases as the number of iterations increases. They compare nine existing methods on the Set5, Set14, and Urban100 datasets. Without using any external data, the proposed algorithm achieves high visual quality with few jagged artifacts and performs similarly to very recent state-of-the-art deep convolutional network-based approaches. Compared to other methods that use no external data, the proposed algorithm balances quality and time cost well.
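
For orientation, a minimal sketch of generic TV-regularized upscaling with iterative back projection (IBP) follows; scikit-image's `denoise_tv_chambolle` stands in for the TV step, and the paper's arc-length-shortening term is not reproduced.

```python
import numpy as np
from scipy.ndimage import zoom
from skimage.restoration import denoise_tv_chambolle

def tv_ibp_upscale(lr, scale=2, iters=30, tv_weight=0.05):
    # Expects a float image in [0, 1]. Alternates an IBP data-fidelity
    # step with a generic TV smoothing step.
    hr = zoom(lr, scale, order=3)                 # bicubic initial guess
    for _ in range(iters):
        lr_sim = zoom(hr, 1.0 / scale, order=3)   # simulate degradation
        hr = hr + zoom(lr - lr_sim, scale, order=3)  # back-project residual
        hr = denoise_tv_chambolle(hr, weight=tv_weight)  # TV step
    return np.clip(hr, 0.0, 1.0)
```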

Self-Learning Super-Resolution Using Convolutional Principal Component Analysis and Random Matching

September 2018 · 46 Reads · 13 Citations

IEEE Transactions on Multimedia

Self-learning super-resolution (SLSR) algorithms have the advantage of being independent of an external training database. This paper proposes a self-learning super-resolution algorithm that uses convolutional principal component analysis (CPCA) and random matching, two technologies that greatly improve the efficiency of self-learning. The algorithm has two main steps: forming the training and testing data sets, and patch matching. In the data set forming step, we propose CPCA to extract low-dimensional features of the data set; CPCA uses a convolutional method to quickly extract the PCA features of every image patch in each training and testing image. In the patch matching step, we propose a two-step random oscillation accompanied by propagation to accelerate the matching process. This patch matching method avoids exhaustive searching by exploiting the local similarity prior of natural images. The two-step random oscillation first performs coarse patch matching using the variance feature and then detailed matching using the PCA feature, which helps find reliable matching patches. The propagation strategy lets patches propagate good matches to their neighbors. The experimental results demonstrate that the proposed algorithm has a substantially lower time cost than many existing self-learning algorithms while achieving better reconstruction quality.
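
A sketch of the convolutional PCA idea follows: since a PCA projection of a patch is an inner product, filtering the whole image with each flipped principal component yields every patch's feature at once. Patch size, sampling counts, and names are assumptions for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def cpca_features(image, patch=7, n_comp=8, n_samples=5000, seed=0):
    # Learn a PCA basis from randomly sampled patches, then obtain every
    # patch's projection at once by convolving the image with each
    # flipped principal component (convolution with a flipped kernel
    # equals correlation).
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys = rng.integers(0, h - patch + 1, n_samples)
    xs = rng.integers(0, w - patch + 1, n_samples)
    X = np.stack([image[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    feats = []
    for v in Vt[:n_comp]:
        k = v.reshape(patch, patch)[::-1, ::-1]
        resp = fftconvolve(image, k, mode='valid')  # v . p for all patches
        feats.append(resp - v @ mean)               # center: v . (p - mean)
    return np.stack(feats, axis=-1)  # (h-patch+1, w-patch+1, n_comp)
```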


3-D weighted CBCT iterative reconstruction with TV minimization from angularly under-sampled projection data

June 2018 · 38 Reads · 3 Citations

Optik

With increasing cone angle in cone beam computed tomography (CBCT), the cone-beam (CB) artifacts in reconstructed images worsen, mainly because the data sufficiency condition (DSC) is violated more severely. The CB artifacts in analytic reconstruction can be partially attributed to the data inconsistency between conjugate rays that are 180° apart in view angle, while those in iterative reconstruction result from errors propagating preferentially along rays with larger cone angles. In both sorts of reconstruction, the generation of CB artifacts is closely related to the cone angle. In this paper, we analyze how errors in CB reconstructions are induced and propagated; meanwhile, inspired by the success of the three-dimensional (3-D) weighted CB-FBP algorithm and in a similar way, we incorporate a 3-D weighting into a total-variation (TV) minimization based CB iterative algorithm to reconstruct images from angularly under-sampled projection data and suppress the CB artifacts in the reconstructed images. Numerical simulation studies with the FORBILD thoracic phantom have been conducted. Although the CB artifacts appear distinctly different in analytic and iterative reconstructions, the 3-D weighted CB iterative reconstruction reduces them effectively. In this work, we have developed a 3-D weighted CB iterative reconstruction with TV minimization that suppresses the CB artifacts and improves the quality of the reconstructed images.
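
A heavily simplified 2-D sketch of a weighted POCS/TV loop is given below, assuming a dense toy system matrix `A`; the optional per-ray weight `w` merely stands in for the paper's 3-D cone-angle weighting, whose exact form is not reproduced.

```python
import numpy as np

def tv_norm_grad(u, eps=1e-8):
    # Gradient of the smoothed isotropic TV norm of a 2-D slice.
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def weighted_pocs_tv(A, b, shape, w=None, n_iter=50, n_tv=10, alpha=0.2):
    # ASD-POCS-style skeleton: a weighted SIRT-like data-consistency
    # step followed by a few TV gradient-descent steps.
    x = np.zeros(A.shape[1])
    w = np.ones_like(b) if w is None else w       # per-ray weights
    col_norm = (A * A).sum(axis=0) + 1e-12        # SIRT column scaling
    for _ in range(n_iter):
        x = x + A.T @ (w * (b - A @ x)) / col_norm  # data consistency
        x = np.clip(x, 0.0, None)                   # positivity
        u = x.reshape(shape)
        for _ in range(n_tv):                       # TV descent
            g = tv_norm_grad(u)
            u = u - alpha * g / (np.abs(g).max() + 1e-12)
        x = u.ravel()
    return u
```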


Super-resolution via adaptive combination of color channels

January 2017 · 3,348 Reads · 5 Citations

Super-resolution (SR) is a technology for reconstructing a clear high-resolution image with plausible details from one or a group of observed low-resolution images. However, many existing methods require the help of “tools”, which results in high time and memory costs for training and storing them. This paper proposes an SR method that can be executed without such training “tools”. The method comprises two stages: adaptive color channel combination and regularization. In the first stage, adaptive color channel combination assembles the textures captured by the different color channels into the luminance component, which helps effectively utilize the luminance information of different light bands. In the second stage, an improved total variation (TV) regularization method is proposed to suppress artifacts and sharpen edges. The TV regularization adds a new term to the iteration formula so that the edges become similar to those of the desired high-resolution image. Next, iterative back projection is used to fit the high-resolution image to the observed low-resolution image. The experimental results demonstrate that the proposed algorithm is superior to many existing learning-based methods and has a low time cost.
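
The exact combination rule is not stated in the abstract; one plausible reading, sketched below, weights each channel per pixel by its local gradient energy so that the channel carrying the most texture dominates the luminance. All parameters and names are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def adaptive_luminance(rgb, sigma=2.0, eps=1e-8):
    # Assumed scheme: per-pixel channel weights proportional to local
    # gradient energy, smoothed to measure texture in a neighborhood.
    weights = []
    for c in range(3):
        ch = rgb[..., c]
        energy = sobel(ch, axis=0) ** 2 + sobel(ch, axis=1) ** 2
        weights.append(gaussian_filter(energy, sigma))  # local texture
    W = np.stack(weights, axis=-1) + eps
    W /= W.sum(axis=-1, keepdims=True)                  # normalize
    return (W * rgb).sum(axis=-1)                       # luminance
```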


Noisy image magnification with total variation regularization and order-changed dictionary learning

December 2015 · 132 Reads · 5 Citations

EURASIP Journal on Advances in Signal Processing

In real applications, the obtained low-resolution (LR) images are always noisy, but many existing image magnification algorithms cannot obtain good results from a noisy LR image. We propose a two-step image magnification algorithm to solve this problem. The proposed algorithm combines the advantages of regularization-based and learning-based methods: the first step is based on total variation (TV) regularization and the second on sparse representation. In the first step, we add a constraint to the TV regularization model to magnify the LR image while suppressing its noise. In the second step, we propose an order-changed dictionary training algorithm to train dictionaries dominated by texture details. Experimental results demonstrate that the proposed algorithm performs better than many other algorithms when the noise is not severe, and it also provides better visual quality on natural LR images.
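
A minimal sketch of the two-step pipeline under stated assumptions: a generic TV step (via scikit-image) stands in for the paper's constrained TV model, and plain mini-batch dictionary learning stands in for the order-changed training, which is not reproduced.

```python
from scipy.ndimage import zoom
from skimage.restoration import denoise_tv_chambolle
from sklearn.decomposition import MiniBatchDictionaryLearning

def tv_magnify(lr_noisy, scale=2, tv_weight=0.1):
    # Step 1 (stand-in): magnify, then denoise with generic TV
    # regularization instead of the paper's constrained TV model.
    hr = zoom(lr_noisy, scale, order=3)
    return denoise_tv_chambolle(hr, weight=tv_weight)

def learn_texture_dictionary(patches, n_atoms=256, sparsity=5):
    # Step 2 (stand-in): plain mini-batch dictionary learning on rows
    # of flattened patches; the order-changed training that biases the
    # dictionary toward texture details is not reproduced.
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm='omp',
                                     transform_n_nonzero_coefs=sparsity)
    return dl.fit(patches)
```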


Learning-based Super-resolution via Canonical Correlation Analysis

June 2015 · 90 Reads · 3 Citations

International Journal of Signal Processing Image Processing and Pattern Recognition

The task of image super-resolution is to upsample a low-resolution (LR) image while recovering sharp edges and high-frequency details. In this paper, a single-image super-resolution algorithm via canonical correlation analysis (CCA) is proposed. The method is based on the assumption that corresponding LR and high-resolution (HR) images have high correlation coefficients when transformed into a special space. The proposed approach includes two stages: training and testing. In the training stage, a pair of canonical bases for the transformation is calculated from the prepared coupled training sets. In the testing stage, the HR image is recovered using the canonical bases obtained in training. In addition, an iterative back projection algorithm is used to further improve the image quality. The experiments demonstrate that our algorithm reconstructs richer details with fewer artifacts, and it also has lower complexity.
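
A minimal sketch of the training/testing stages using scikit-learn's `CCA`; patch extraction, feature design, and the back-projection refinement are omitted, and the paper's recovery rule may differ from `CCA.predict`.

```python
from sklearn.cross_decomposition import CCA

def train_cca(lr_feats, hr_patches, n_comp=32):
    # Training stage: fit the pair of canonical bases on coupled
    # LR-feature / HR-patch training sets (rows are samples).
    cca = CCA(n_components=n_comp)
    cca.fit(lr_feats, hr_patches)
    return cca

def recover_hr(cca, lr_feats):
    # Testing stage: map LR features through the fitted canonical
    # bases back to the HR patch space.
    return cca.predict(lr_feats)
```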


Wavelet Domain Multidictionary Learning for Single Image Super-Resolution

June 2015 · 264 Reads · 5 Citations

Image super-resolution (SR) aims at recovering the high-frequency (HF) details of a high-resolution (HR) image from the given low-resolution (LR) image and some priors about natural images, so the relationship between the LR image and its corresponding HF details must be learned to guide the reconstruction of the HR image. To alleviate the uncertainty in HF detail prediction, the HR and LR images are usually decomposed into four subbands by a 1-level discrete wavelet transform (DWT): an approximation subband and three detail subbands. We observed that the approximation subbands of the HR image and of the corresponding bicubic-interpolated image are very similar, while the respective detail subbands differ. Therefore, this paper proposes an algorithm that learns four coupled principal component analysis (PCA) dictionaries to describe the relationship between the approximation subband and the detail subbands. Experimental comparisons with various state-of-the-art methods show that the proposed algorithm is superior to several related works.
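
A minimal sketch of the wavelet-domain setup with PyWavelets: a 1-level DWT yields the approximation subband (LL) and three detail subbands (LH, HL, HH); the coupled PCA dictionaries that predict the detail subbands are the paper's contribution and are only indicated in comments.

```python
import pywt

def dwt_subbands(image, wavelet='haar'):
    # 1-level 2-D DWT: approximation (LL) plus three detail subbands.
    LL, (LH, HL, HH) = pywt.dwt2(image, wavelet)
    return LL, LH, HL, HH

def reconstruct(LL, LH, HL, HH, wavelet='haar'):
    # After predicting the detail subbands from LL (e.g., with the
    # learned coupled PCA dictionaries), invert the transform to
    # obtain the HR image.
    return pywt.idwt2((LL, (LH, HL, HH)), wavelet)
```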


Citations (13)


... They used the concept of nested optimization for two level image segmentation. The idea of weighted maximum entropy thresholding is discussed in [15]. The method is used for infrared pedestrian segmentation. ...

Reference:

A Novel Practical Decisive Row-Class Entropy-Based Technique for Multilevel Threshold Selection Using Opposition Flow Directional Algorithm
An Efficient Infrared Pedestrian Segmentation Algorithm Based on Weighted Maximum Entropy Thresholding
  • Citing Chapter
  • January 2022

... Learning-based methods achieve SR reconstruction by learning the mapping relationship between HR images and LR images in feature space. Learning-based methods can be divided into three categories: neighborhood embedding methods [15], sparse representation methods [16], and deep learning methods [17]. In recent years, the mature application of artificial intelligence technology in many fields has also brought many new solutions to the challenges faced by image SR reconstruction technology. ...

Two-direction Self-learning Super-Resolution Propagation Based on Neighbor Embedding
  • Citing Article
  • February 2021

Signal Processing

... Total variation regularization pursues minimizing the total variation over pixels; meanwhile, allowing insignificant or little changes to be made. That is, the method fulfills smoothing the change of pixel values while it still retains edges [15]. ...

Discarding jagged artifacts in image-upscaling with total variation regularization

... To fairly evaluate the SR performance of DSRCNN, quantitative and qualitative analysis are used to conduct experiments. The quantitative analysis includes PSNR [44] and SSIM [44] of popular methods, i.e., Bicubic, A+ [7], jointly optimized regressors (JOR) [45], RFL [6], self-exemplars super-resolution (SelfEx) [36], CSCN [18], RED [19], a denoising convolutional neural network (DnCNN) [46], trainable nonlinear reaction diffusion (TNRD) [47], fast dilated residual SR convolutional network (FDSR) [48], SRCNN [10], fast SR CNN (FSRCNN) [14], very deep SR network (VDSR) [19], deeply-recursive convolutional network (DRCN) [13], context wise network fusion (CNF) [49], Laplacian SR network (LapSRN) [50], deep persistent memory network (MemNet) [11], CARN-M [22], wavelet domain residual network (WaveResNet) [51], convolutional principal component (CPCA) [52], new architecture of deep recursive convolution networks for SR (NDRCN) [53], LESRCNN [8], LESRCNN-S [8], and DSRCNN on four public datasets, i.e., Set5, Set14, B100, and U100. In terms of quantitative analysis, our proposed DSRCNN has obtained the best SR results in most circumstances as shown in Tables 2-5. ...

Self-Learning Super-Resolution Using Convolutional Principal Component Analysis and Random Matching
  • Citing Article
  • September 2018

IEEE Transactions on Multimedia

... The regularization term can be built on various prior models, among which the total variation (TV) [10][11][12] is the most commonly used. Based on the TV minimization, several CBCT reconstruction algorithms have been proposed [13,14]. However, these TV-based methods suffer from the staircase effect due to the piecewise assumption being used. ...

3-D weighted CBCT iterative reconstruction with TV minimization from angularly under-sampled projection data
  • Citing Article
  • June 2018

Optik

... Developing a method to adaptively select or combine these channels when constructing the final Mel-spectrogram could be a promising avenue for improving the quality of the generated audio. Each R, G, and B channel exhibits distinct characteristics, as noted in previous research (Xu et al., 2017). In the VAE encoder, we replicate the gray mel-spectrogram across three identical channels, but the VAE decoder does not enforce channel consistency. ...

Super-resolution via adaptive combination of color channels

... For machine learning methods, various strategies for learning parameters (including convolutional neural networks (CNNs) [3][4][5][6][7][8][9][10], generative adversarial networks (GANs) [11], dictionaries for sparse representations [12][13][14][15][16][17][18][19][20][21], canonical correlation analysis [22][23][24], and combinations of different tools [25][26][27]) are utilised to obtain identification of the relationship between low-resolution (LR) and high-resolution (HR) images. ...

Learning-based Super-resolution via Canonical Correlation Analysis
  • Citing Article
  • June 2015

International Journal of Signal Processing Image Processing and Pattern Recognition

... where p and q are the dimensions of the high- and low-resolution patch vectors, respectively, and λ is the regularization parameter. Eq. 8 is efficiently solved by the existing coupled K-SVD algorithm [46]. Next, for the GSR, we learn self-adaptive group dictionaries $D_{g_i}$ for the individual patch groups $Y_{g_i}$ rather than learning a single overcomplete dictionary $D_g$. ...

Coupled K-SVD dictionary training for super-resolution
  • Citing Article
  • January 2015

... The main difficulty in solving (1.10) is the linearization of the highly nonlinear term $-\tau\,\nabla\cdot\!\left(\frac{\nabla u}{\sqrt{|\nabla u|^{2}+\delta}}\right)$. In practice, a large literature is devoted to the numerical study of (1.10) (e.g., [71, 88]); moreover, authors have proved that TV regularisation can remove noise very well while preserving sharp edges. However, this approach is criticized for the loss of contrast in the restored image. ...

Noisy image magnification with total variation regularization and order-changed dictionary learning

EURASIP Journal on Advances in Signal Processing