Reliable Recurrence Algorithm for High-Order
Krawtchouk Polynomials
Khaled A. AL-Utaibi 1,*,†, Sadiq H. Abdulhussain 2,†, Basheera M. Mahmmod 2,†, Marwah Abdulrazzaq Naser 3,†, Muntadher Alsabah 4,† and Sadiq M. Sait 5,†


Citation: AL-Utaibi, K.A.; Abdulhussain, S.H.; Mahmmod, B.M.; Naser, M.A.; Alsabah, M.; Sait, S.M. Reliable Recurrence Algorithm for High-Order Krawtchouk Polynomials. Entropy 2021, 23, 1162. https://doi.org/10.3390/e23091162

Accepted: 1 September 2021
Published: 3 September 2021
1 Department of Computer Engineering, University of Ha'il, Ha'il 682507, Saudi Arabia; alutaibi@uoh.edu.sa
marwahabdalkhafaji@gmail.com
4 Department of Electronic and Electrical Engineering, University of Sheffield, Sheffield S1 4ET, UK; mqalsabah@gmail.com
5 Department of Computer Engineering, Interdisciplinary Research Center for Intelligent Secure Systems, Research Institute, King Fahd University of Petroleum & Minerals, Dhahran 31261, Saudi Arabia
* Correspondence: alutaibi@uoh.edu.sa
† These authors contributed equally to this work.
Abstract: Krawtchouk polynomials (KPs) and their moments are promising techniques for applications in information theory, coding theory, and signal processing, owing to the special capabilities of KPs in feature extraction and classification processes. The main challenge in existing KP recurrence algorithms is that of numerical errors, which occur during the computation of the coefficients for large polynomial sizes, particularly when the KP parameter ($p$) deviates from 0.5 toward 0 or 1. To this end, this paper proposes a new recurrence relation to compute the coefficients of high-order KPs. In particular, this paper discusses the development of a new algorithm and presents a new mathematical model for computing the initial value of the KP parameter. In addition, a new diagonal recurrence relation is introduced and used in the proposed algorithm. The diagonal recurrence algorithm is derived from the existing $n$-direction and $x$-direction recurrence algorithms. The diagonal and existing recurrence algorithms are then exploited to compute the KP coefficients: first, the KP coefficients are computed for one partition after dividing the KP plane into four; the coefficients in the other partitions are then obtained through the symmetry relations. The performance of the proposed recurrence algorithm is evaluated against state-of-the-art works in terms of reconstruction error, polynomial size, and computation cost. The obtained results indicate that the proposed algorithm is reliable and computes fewer coefficients than the existing algorithms across wide ranges of the parameter $p$ and polynomial size $N$. The results also show that the improvement ratio of the computed coefficients ranges from 18.64% to 81.55% in comparison with the existing algorithms. Besides this, the proposed algorithm can generate polynomials of an order approximately 8.5 times larger than those generated using state-of-the-art algorithms.
Keywords: discrete Krawtchouk polynomials; Krawtchouk moments; propagation error; energy compaction; computation cost
1. Introduction
Digital image processing plays an essential role in several aspects of our daily lives.
Image signals are subject to several processes such as transmission [1], enhancement [2], transformation [3], hiding [4], and compression [5,6]. Similarly to image processing, speech signal processing is also essential [7], and involves several stages such as transfer [8], acquisition [9], and coding [10]. Pattern recognition, which is considered an automated process, is widely used in various applications such as computer vision [11], statistical data analysis [12], information retrieval [13], shot boundary detection [14], and bio-informatics [15].
However, the accuracy of extracting the significant features in these essential signal processing approaches is crucial [16]. Feature extraction, in particular, is used to reduce the dimensionality of the signals to a finite size [17,18]. Specifically, a finite number of features can be used to represent the signals. These finite features can be considered the most significant ones and need to be extracted using efficient methods. As such, to achieve the best signal representation, a fast and robust feature extraction mechanism becomes necessary. To this end, such a feature extraction mechanism needs to meet the desired accuracy requirements by extracting the most significant features efficiently with low processing times. Furthermore, the energy compaction and localization of the signals can also be considered essential factors in signal compression [19]. This is attributed to the fact that using fewer effective coefficients results in a more accurate representation of the signals. Hence, orthogonal polynomials are an effective tool that can be applied to meet these desired requirements and feature characterization.
Continuous and discrete orthogonal polynomials are commonly used in many signal processing applications and for feature characterization. Continuous orthogonal polynomials are used in speech and image applications, for example, in pattern recognition, robot vision, face recognition, object classification, information hiding, data compression, template matching, and edge detection for image data compression [20–23]. The performance of orthogonal polynomials is evaluated according to their ability to extract distinct features from signals in a fast and efficient way. This special feature extraction ability can be quantified using properties such as (a) energy compaction; (b) signal representation without redundancy; (c) numerical stability; and (d) localization [24–26].
Discrete orthogonal polynomials are widely used to extract features from images [27]. There are different types of discrete polynomials. Examples of these include discrete Tchebichef polynomials [28], Chebyshev polynomials [29], discrete Charlier polynomials [26], discrete Krawtchouk polynomials (DKPs), and discrete Meixner polynomials [30]. Among these polynomials, DKPs are widely exploited in image processing. This is due to their salient characteristics, which can be used to extract local features from images. Specifically, by exploiting the localization property of the DKPs, images can be efficiently represented using a finite number of features [31]. The localization property is controlled through the parameter value ($p$) of the DKPs. Typically, discrete orthogonal moments are generated using DKPs. Discrete orthogonal moments are extensively exploited in image and signal processing [11,32–34], coding theory [35], and information theory [36,37]. However, reconstructing a signal using moments while maintaining orthogonality has to date been considered a challenging task. In addition to this, the discretization error is another challenge that appears when reconstructing the signal, especially when moments are used in the implementation. This discretization error grows with the moments' order: when the order of the moments increases, the discretization error increases accordingly. As such, the accuracy of the moments' computation is reduced, resulting in an inaccurate representation of images [38–40].
Several studies have been performed on the discrete Krawtchouk polynomials, and methods have been developed to efficiently compute their coefficients; for example, see [25,31,41–44]. These research works utilize a three-term recurrence algorithm [25]. Hypergeometric series and gamma functions are widely applied in image processing [25]. However, the aforementioned research works use functions that require a long time to execute and process the signals. Furthermore, these functions become numerically unstable when the order of the moments increases. Instead, a three-term recurrence algorithm can be applied to overcome the aforementioned time and accuracy issues. To this end, Yap et al. [41] presented a recurrence algorithm in the $n$ direction to calculate the Krawtchouk polynomial coefficients (KPCs). Due to the propagation error, this recurrence algorithm becomes unstable, especially when the polynomial size increases. In general, such a propagation error grows through the computation of the polynomial coefficients.
This is attributed to the fact that pitfalls may happen even when small floating-point errors occur. As such, there is an essential need to reduce the number of recurrences, especially when the polynomial size is increased. Furthermore, such a reduction could also lead to a reduction in the propagation error, thereby leading to a more stable computation of the polynomial coefficients, as desired. The work in [31] proposes a modified recurrence algorithm in the $n$ direction (RAN) by partitioning the KP array into two partitions; therefore, only 50% of the coefficients need to be computed. However, the partitioning of the KP array generates a larger polynomial size, which is undesirable. On a similar basis, the work in [42] proposes a recurrence algorithm in the $x$ direction (RAX) by partitioning the KP plane into two partitions. Specifically, the $x$-direction recurrence algorithm is used to compute the KPCs. The results show that the RAX algorithm outperforms the RAN algorithm. It is worth noting that the RAN and RAX algorithms use a symmetry property to compute the polynomial coefficients of the second portion. A novel bi-recursive relation algorithm in the $n$ direction (BRRN) was proposed in [43]. In this method, the KP array is divided into four partitions; however, the coefficients are computed for two partitions only, i.e., 50% of the coefficients are computed. Then, a symmetry property is used to compute the KPCs for the remaining partitions. The results indicate that the BRRN algorithm provides a higher gain than the RAX algorithm for limited values of the parameter $p$ and polynomial size. Abdulhussain et al. [44] developed an algorithm and presented new properties of the orthogonal polynomials such that the KP plane is divided into four portions and only the KPCs for one portion are computed. With this, the size of the generated polynomials is increased, but it is still limited, especially for parameter $p$ less than 0.25 or greater than 0.75. This is because the initial values or sets become zero as the polynomial size increases. Recently, the work in [25] proposed a recurrence relation algorithm that has the ability to compute KPCs with very large sizes. However, that algorithm is limited to the parameter value $p = 0.5$.
The existing algorithms suffer from the following limitations: (1) no initial value is provided; (2) the propagation error is high; and (3) the implementation of these algorithms is limited to a specific value of the parameter $p$. They also suffer from numerical instabilities, especially when the polynomial orders and sizes become high. Therefore, an advanced and reliable recurrence algorithm for high-order polynomials and large sizes is required.

Therefore, a new recurrence algorithm is presented in this paper, which handles the numerical instability issue of using high polynomial orders and large sizes. The proposed algorithm is able to compute the KPCs for all values of the parameter $p$. In addition to this, this paper presents the development of a new mathematical model for computing the initial value; in particular, the initial value is accurately computed for all values of the parameter $p$. Furthermore, a new relation to compute the values of the initial sets is derived. To this end, a diagonal recurrence relation is introduced. The proposed diagonal recurrence algorithm is derived from the existing $n$- and $x$-direction recurrence algorithms. The diagonal and the existing recurrence algorithms are exploited to compute the KP coefficients. The KP coefficients are then computed for one partition after dividing the KP plane into four partitions. To compute the KP coefficients in the other partitions, a symmetry relation is utilized.
Organization of the paper: This paper is organized as follows. Section 2 presents the mathematical formulations of the orthogonal polynomials and moments. In Section 3, the methodology of the proposed recurrence algorithm is provided, covering the initial value selection of the parameter $p$ and explaining how the Krawtchouk polynomial coefficients can be computed. In order to characterize the performance of the proposed approaches, Section 4 provides the numerical results. Finally, conclusions are discussed in Section 5.

Notation: In this paper, the transpose operator is denoted by $(\cdot)^T$ and $\binom{a}{b}$ denotes the binomial coefficient.
2. Preliminaries

This section presents the Krawtchouk polynomials and their recurrence relation. To this end, the $n$-th order Krawtchouk polynomial based on the hypergeometric series is given as
$$\hat{K}_n^p(x) = {}_2F_1\!\left(-n, -x; -N+1; \frac{1}{p}\right). \tag{1}$$
The weight function $\omega(x,p)$ and the norm function $\rho(n,p)$ are used to generate the weighted and normalized KP coefficients as given in [31]. To this end, the weight and norm functions can be written as in (2) and (3), respectively:
$$\omega(x,p) = \binom{N-1}{x} p^x (1-p)^{N-x-1}, \tag{2}$$
$$\rho(n,p) = (-1)^n \left(\frac{1-p}{p}\right)^n \frac{n!}{(-N+1)_n}. \tag{3}$$
The Pochhammer symbol $(\cdot)_c$, known as the ascending or rising factorial, can be written as [45]
$$(a)_c = \frac{\Gamma(a+c)}{\Gamma(a)} = a(a+1)(a+2)\cdots(a+c-1), \tag{4}$$
where $\Gamma(\cdot)$ denotes the Gamma function. To this end, using the weight and norm functions, the weighted and normalized Krawtchouk polynomials of the $n$-th order for a signal of size $N$ are given as [31]
$$K_n^p(x) = \sqrt{\frac{\omega(x,p)}{\rho(n,p)}}\, \hat{K}_n^p(x), \tag{5}$$
$$K_n^p(x) = \sqrt{\binom{N-1}{n}\binom{N-1}{x}\left(\frac{p}{1-p}\right)^{n+x}(1-p)^{N-1}}\; {}_2F_1\!\left(-n, -x; -N+1; \frac{1}{p}\right), \tag{6}$$
$$n, x = 0, 1, \ldots, N-1; \quad p \in (0,1),$$
where ${}_2F_1$ denotes the hypergeometric series and can be written as
$$ {}_2F_1\!\left(-n, -x; -N+1; \frac{1}{p}\right) = \sum_{k=0}^{\infty} \frac{(-n)_k (-x)_k}{(-N+1)_k \, k!} \left(\frac{1}{p}\right)^k. \tag{7}$$
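As a sanity check on the definitions above, (6) and (7) can be evaluated directly for small $N$, since the hypergeometric sum terminates once $k$ exceeds $\min(n, x)$. The following Python sketch is illustrative (it is not the paper's implementation, and the function name is ours); it becomes numerically useless for large $N$, which is precisely the motivation for the recurrence scheme of Section 3:

```python
import math

def kp_direct(n: int, x: int, N: int, p: float) -> float:
    """Weighted, normalized Krawtchouk value K_n^p(x) via Eqs. (6)-(7).

    The 2F1 series terminates because (-n)_k and (-x)_k vanish for
    k > min(n, x), so the sum is finite."""
    m = min(n, x)
    s, term = 0.0, 1.0
    for k in range(m + 1):
        s += term
        if k < m:
            # next term of 2F1(-n, -x; -N+1; 1/p)
            term *= (k - n) * (k - x) / ((k - N + 1) * (k + 1) * p)
    w = (math.comb(N - 1, n) * math.comb(N - 1, x)
         * (p / (1 - p)) ** (n + x) * (1 - p) ** (N - 1))
    return math.sqrt(w) * s
```

For small $N$, rows built this way are orthonormal, $\sum_x K_n^p(x) K_m^p(x) = \delta_{nm}$, which is a convenient check when implementing the recurrences below.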
3. Proposed Recurrence Algorithm
This section describes the methodology of the proposed recurrence algorithm.
3.1. Computing the Initial Value

The problem with traditional approaches for computing the initial value in the $n = 0$ and $x = 0$ directions is numerical instability. For example, the traditional methods provide a zero value for the initial coefficient $K_0^p(0)$, which is unstable. The initial value can be computed as
$$K_0^p(0) = \sqrt{(1-p)^{N-1}}. \tag{8}$$
The expression in (8) makes the initial value $K_0^p(0)$ decrease to zero for various values of the parameter $p$, especially for large polynomial sizes $N$. Figure 1 shows the values of $K_0^p(0)$ for different values of the parameter $p$ and size $N$. The results show that the value of $K_0^p(0)$ starts to fall to zero when $p$ becomes larger than 0.1. Specifically, as the value of the parameter $p$ increases, $K_0^p(0)$ falls to zero for smaller sizes. For example, for $p = 0.15$ the value of $K_0^p(0)$ becomes zero for $N > 5000$, while for $p = 0.4$ it becomes zero earlier, for $N > 2000$. This makes it impossible to compute the rest of the KP coefficient values. Therefore, there is an essential need for an efficient method of computing the initial value, one which prevents the initial value $K_0^p(0)$ from dropping to zero. To this end, this paper identifies suitable non-zero values in the KP plane to be used as initial values. As such, the values of the coefficient $K_0^p(x)$ need to be examined for different values of the parameter $p$; Figure 2 shows the corresponding plots. Clearly, the results in Figure 2 show that most of the values of $K_0^p(x)$ drop to a very small number. In addition, we observe that using the non-small values as an initial set for a given parameter $p$ is useful for computing the other values of the KP coefficients.
Figure 1. The computed initial values $K_0^p(0)$ as a function of the polynomial size, using Equation (8).

Figure 2. Plots of $K_0^p(x)$ for a wide range of the parameter $p$ and $N = 500$.
Figure 2 demonstrates that the peak values are located at $x = Np$. In this paper, the value $x = Np$ is denoted by $x_0$. To this end, a general formula for computing $K_0^p(x_0)$ can be written as
$$K_0^p(x_0) = \hat{K}_0^p(Np) \times \sqrt{\frac{\omega(Np, p)}{\rho(0, p)}} = 1 \times \sqrt{\binom{N-1}{Np} p^{Np} (1-p)^{-Np+N-1}},$$
$$K_0^p(x_0) = \sqrt{\binom{N-1}{Np}\, p^{Np}\, (1-p)^{-Np+N-1}}. \tag{9}$$
Computing $K_0^p(x_0)$ using expression (9) may also lead to unstable numerical coefficient values with errors, especially for high-order polynomials. This is because the binomial coefficient tends to be very large, approaching infinity. Figure 3 demonstrates this behavior.

Figure 3. The computed initial values $K_0^p(x_0)$ as a function of the polynomial size, using Equation (9).

Figure 3 shows that the initial values $K_0^p(x_0)$ are still inaccurate: they evaluate to either NaN or Inf. This is due to the nature of the polynomial coefficients obtained by the expression in (9). Thus, the initial values $K_0^p(x_0)$ are difficult to compute for large polynomial sizes $N$. To overcome this issue, this paper proposes an efficient approach that makes the value of $K_0^p(x_0)$ computable. It is worth noting that the magnitudes obtained from the polynomial coefficients' formula, especially from the Gamma function, should be reduced when the coefficients become large as the argument value increases. This can be achieved by working in the logarithmic domain, using $\arg = \exp(\ln(\arg)) = e^{\ln(\arg)}$, so that the initial value can be computed as
$$K_0^p(x_0) = e^{0.5 \times z}, \tag{10}$$
where $z$ is given as
$$z = \ln\Gamma(N) - \ln\Gamma(x_0+1) - \ln\Gamma(N-x_0) - x_0 \ln\frac{1-p}{p} + (N-1)\ln(1-p).$$
A proof of expression (10) is presented in Appendix A.
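The log-domain evaluation of Eq. (10) can be sketched in a few lines with the standard log-Gamma function. This is an illustrative Python sketch (the function name is ours), assuming $x_0$ is rounded to the nearest integer:

```python
import math

def kp_initial(N: int, p: float) -> float:
    """Peak initial value K_0^p(x0), x0 = round(N*p), via Eq. (10).

    Working with log-Gamma keeps the binomial factor of Eq. (9)
    finite even for very large N."""
    x0 = round(N * p)
    z = (math.lgamma(N) - math.lgamma(x0 + 1) - math.lgamma(N - x0)
         - x0 * math.log((1 - p) / p) + (N - 1) * math.log(1 - p))
    return math.exp(0.5 * z)
```

For example, at $N = 10000$ and $p = 0.2$ the binomial coefficient in the direct expression (9) overflows, while the log-domain form returns a small, finite, non-zero value.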
Figure 4 shows a plot of the proposed initial values $K_0^p(x_0)$ from the developed expression in (10) for various values of the parameter $p$ as a function of the polynomial size $N$. The results show that the proposed initial values remain computable over wide ranges of the parameter $p$ and for large polynomial sizes $N$, as desired. Hence, this signifies the feasibility of the proposed formula for practical implementations compared with state-of-the-art equations.

Figure 4. The computed initial-set values $K_0^p(x_0)$ as a function of the polynomial size, using Equation (10).
3.2. The Fundamental Computation of the Initial Values

Typically, for any orthogonal polynomial, the computation of the coefficients requires the evaluation of a number of fundamental initial values. Thus, based on the first initial value $K_0^p(x_0)$, computed using the proposed formula, the KP coefficients $K_0^p(x_1)$, $K_1^p(x_0)$, and $K_1^p(x_1)$ are obtained (see Figure 5). This section shows how these coefficient values are computed.

First, $K_0^p(x_1)$ is computed using the proposed derived formula, which provides a two-term relation between $K_0^p(x_0)$ and $K_0^p(x_1)$. To this end, the ratio between the coefficients can be formulated as
$$\frac{K_0^p(x_1)}{K_0^p(x_0)} = \sqrt{\frac{\binom{N-1}{Np+1}\, p^{Np+1}\, (1-p)^{-Np+N-2}}{\binom{N-1}{Np}\, p^{Np}\, (1-p)^{-Np+N-1}}} = \sqrt{\frac{\frac{(N-1)!}{(Np+1)!\,(N-Np-2)!}}{\frac{(N-1)!}{(Np)!\,(N-Np-1)!}} \cdot \frac{p}{1-p}} = \sqrt{\frac{N-Np-1}{Np+1} \cdot \frac{p}{1-p}}, \tag{11}$$
where $x_1 = x_0 + 1 = Np + 1$. Thus, the expression in (11) can be rearranged as:
$$K_0^p(x_1) = \sqrt{\frac{N-Np-1}{Np+1} \cdot \frac{p}{1-p}}\; K_0^p(x_0). \tag{12}$$
Figure 5. The fundamental computation of the initial values according to the $x$ and $n$ directions in the KP plane.
Then, $K_1^p(x_0)$ and $K_1^p(x_1)$ can be computed using a two-term recurrence relation with $K_0^p(x_0)$ and $K_0^p(x_1)$, respectively. To derive the recurrence relation of the proposed approach, the following formulas are used:
$$K_0^p(x) = \sqrt{\omega_K^p(x, N)}, \tag{13}$$
$$K_1^p(x) = \sqrt{\omega_K^p(x, N)}\; \frac{p(N-1) - x}{\sqrt{p(1-p)(N-1)}}. \tag{14}$$
From (13) and (14), $K_1^p(x)$ can be simplified to:
$$K_1^p(x) = \frac{p(N-1) - x}{\sqrt{p(1-p)(N-1)}}\, K_0^p(x). \tag{15}$$
Using the expression in (15), $K_1^p(x_0)$ and $K_1^p(x_1)$ can be further simplified to
$$K_1^p(x_0) = \frac{p(N-1) - Np}{\sqrt{p(1-p)(N-1)}}\, K_0^p(x_0) = \frac{-p}{\sqrt{p(1-p)(N-1)}}\, K_0^p(x_0), \tag{16}$$
$$K_1^p(x_1) = \frac{p(N-1) - (Np+1)}{\sqrt{p(1-p)(N-1)}}\, K_0^p(x_1) = \frac{-(p+1)}{\sqrt{p(1-p)(N-1)}}\, K_0^p(x_1). \tag{17}$$
3.3. The Computation of the Initial Sets

In this section, the computation of the initial sets is discussed. These initial sets are shown in Figure 6, which marks the part of the KP coefficients covered in this section. The initial sets are defined in the ranges $x = x_0, x_1$ and $n = 2, 3, \ldots, x$.

Figure 6. A diagram showing the location of the initial sets in the KP plane.

To compute the values of the initial sets, the recurrence in the $n$ direction is used. To this end, the recurrence is formulated as
$$K_n^p(x) = \alpha_{1_{n,x}} K_{n-1}^p(x) - \alpha_{2_{n,x}} K_{n-2}^p(x), \tag{18}$$
$$\alpha_{1_{n,x}} = \frac{(N-2n+1)p + n - x - 1}{\sqrt{p(1-p)\,n(N-n)}}, \qquad \alpha_{2_{n,x}} = \sqrt{\frac{(n-1)(N-n+1)}{n(N-n)}},$$
$$x = x_0, x_1 \quad \text{and} \quad n = 2, 3, \ldots, x.$$
After computing the initial sets, the values in the ranges $x = x_0, x_1$ and $n = 0, 1, \ldots, x$ are used as the initial values to compute the rest of the KP coefficients.
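The initial-set computation of Eq. (18) fills one column of the KP plane from its first two entries. A minimal illustrative sketch (the function name is ours), assuming the column's first two values $K_0^p(x)$ and $K_1^p(x)$ are already known:

```python
import math

def kp_initial_set(col0: float, col1: float, x: int, N: int, p: float):
    """Fill one initial-set column K_n^p(x), n = 0..x, using the
    n-direction recurrence of Eq. (18), given K_0^p(x) and K_1^p(x)."""
    K = [col0, col1]
    for n in range(2, x + 1):
        a1 = ((N - 2 * n + 1) * p + n - x - 1) / math.sqrt(
            p * (1 - p) * n * (N - n))
        a2 = math.sqrt((n - 1) * (N - n + 1) / (n * (N - n)))
        K.append(a1 * K[n - 1] - a2 * K[n - 2])
    return K
```

For $N = 4$, $p = 0.5$, the column at $x = 3$ starts from $K_0 = \sqrt{0.125}$ and $K_1 = -\sqrt{0.375}$, and the recurrence reproduces the remaining entries of the orthonormal KP matrix.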
3.4. Computation of the Coefficient Values for KP

In this section, the rest of the coefficient values are computed. These coefficients are shown in Figure 7. As depicted in Figure 7, there are two main parts, located to the left (Part 1) and the right (Part 2) of the initial sets. The coefficients located to the right of the initial sets are further divided into three sub-parts: Part 2-1, Part 2-2, and Part 2-3. A detailed description of the computation of each part is presented in the following subsections.

Figure 7. A diagram showing the parts' locations in the KP coefficients' plane in the $x$ and $n$ directions based on the proposed algorithm.
3.4.1. Computation of the Coefficients Located in Part 1

In this section, the values of the KP coefficients in Part 1, shown in Figure 7, are computed using a backward $x$ recurrence relation. The backward recurrence relation is obtained from the traditional recurrence relation in the $x$ direction as
$$K_n^p(x-1) = \beta_{1_{n,x}} K_n^p(x) - \beta_{2_{n,x}} K_n^p(x+1), \tag{19}$$
$$\beta_{1_{n,x}} = \frac{(N-2x-1)p - n + x}{\sqrt{p(1-p)\,x(N-x)}}, \qquad \beta_{2_{n,x}} = \sqrt{\frac{(N-x-1)(x+1)}{x(N-x)}},$$
$$n = 0, 1, \ldots, x_0 \quad \text{and} \quad x = x_0, x_0 - 1, \ldots, n.$$
The values of the KP coefficients become unstable as the index $x$ approaches $n$, because the values of the coefficients fall below $10^{-7}$. To overcome this issue, a threshold condition is used to stop the recurrence for each value of the index $n$. The proposed condition is given by
$$|K_n(x)| < 10^{-5} \quad \text{and} \quad |K_n(x+1)| < 10^{-7}. \tag{20}$$
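The backward recurrence (19) with the stopping condition (20) can be sketched as follows. This is an illustrative Python sketch; `row` is a hypothetical dictionary holding one row of coefficients, pre-seeded with the two initial-set entries:

```python
import math

def kp_part1_row(row: dict, n: int, x0: int, N: int, p: float) -> None:
    """Extend one Part 1 row leftward with the backward x recurrence of
    Eq. (19), stopping early via the threshold condition (20).

    `row` maps x -> K_n^p(x) and must already hold x0 and x0 + 1."""
    for x in range(x0, n, -1):
        b1 = ((N - 2 * x - 1) * p - n + x) / math.sqrt(
            p * (1 - p) * x * (N - x))
        b2 = math.sqrt((N - x - 1) * (x + 1) / (x * (N - x)))
        row[x - 1] = b1 * row[x] - b2 * row[x + 1]
        if abs(row[x]) < 1e-5 and abs(row[x + 1]) < 1e-7:
            break  # condition (20): values have decayed; stop this row
```

Early termination is what keeps the propagation error from being amplified through coefficients that are already numerically negligible.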
3.4.2. Computation of the Coefficients Located in Part 2-1

In this section, the values of the KP coefficients in Part 2-1, shown in Figure 7, are computed using a forward $x$ recurrence relation, as given in (21):
$$K_n^p(x+1) = \gamma_{1_{n,x}} K_n^p(x) - \gamma_{2_{n,x}} K_n^p(x-1), \tag{21}$$
$$\gamma_{1_{n,x}} = \frac{(N-2x-1)p - n + x}{\sqrt{p(1-p)\,(x+1)(N-x-1)}}, \qquad \gamma_{2_{n,x}} = \sqrt{\frac{(N-x)x}{(x+1)(N-x-1)}},$$
$$n = 0, 1, \ldots, x_0 \quad \text{and} \quad x = x_0, x_0 + 1, \ldots, N-n-1.$$
The aforementioned recurrence relation, which is used to compute the values in Part 2-1, is subject to the following stopping condition:
$$|K_n(x)| < 10^{-5} \quad \text{and} \quad |K_n(x+1)| < 10^{-7}. \tag{22}$$
3.4.3. Computation of the Coefficients Located in Part 2-2

This section presents two new recurrence relation approaches to compute the KP coefficient values diagonally. This diagonal calculation covers Part 2-2 of Figure 7. The values along the diagonal are then used to compute the coefficient values in Part 2-3 of Figure 7. This is necessary because the recurrence relation in the $n$ direction cannot be used to compute these coefficient values; otherwise, some values in Part 2 would become zero as a result of the condition used to prevent the occurrence of unstable values.

This paper derives the recurrence relations illustrated in Figure 8. From Figure 8a, it can be seen that the elements computed for $x_0$ and $x_1$ can be used to compute the coefficients along the main diagonal $n = x$ and along $n = x - 1$. Furthermore, to compute the KP coefficient value $K_n^p(x+1)$, the coefficient value $K_{n+1}^p(x)$ is computed using the $n$-direction recurrence algorithm. The symmetry across the main diagonal ($n = x$) is exploited for simplicity, where $K_n^p(x+1) = K_{n+1}^p(x)$. To this end, the KP coefficients along $n = x - 1$ are computed as
$$K_n^p(x+1) = \delta_{1_{n,x}} K_n^p(x) - \delta_{2_{n,x}} K_{n-1}^p(x), \tag{23}$$
$$\delta_{1_{n,x}} = \frac{(N-2x-1)p - n + x}{\sqrt{p(1-p)\,(x+1)(N-x-1)}}, \qquad \delta_{2_{n,x}} = \sqrt{\frac{x(N-x)}{(x+1)(N-x-1)}},$$
$$x = x_0 + 1, x_0 + 2, \ldots, \frac{N}{2} - 1 \quad \text{and} \quad n = x.$$

Figure 8. A diagram showing the coefficients' locations that are used to compute the values in Part 2-2.
To compute the values on the main diagonal, where $n = x$, a new recurrence relation approach is developed. This is achieved by combining the recurrences in both the $n$ and $x$ directions. Suppose that the values at $(n, x+1)$ and $(n-1, x+1)$ are known (see the circled values I and K in Figure 8d). Then, the value at $(n+1, x+1)$ (see the circled value L in Figure 8d) can be computed using the $n$-direction recurrence relation as
$$K_{n+1}^p(x+1) = \alpha_{1_{n+1,x+1}} K_n^p(x+1) - \alpha_{2_{n+1,x+1}} K_{n-1}^p(x+1). \tag{24}$$
The value at $(n-1, x+1)$ can be computed using the $x$-direction recurrence relation as
$$K_{n-1}^p(x+1) = \gamma_{1_{n-1,x}} K_{n-1}^p(x) - \gamma_{2_{n-1,x}} K_{n-1}^p(x-1). \tag{25}$$
Substituting Equation (25) in (24) yields the following general expression of the recurrence relation:
$$\begin{aligned}
K_{n+1}^p(x+1) &= \alpha_{1_{n+1,x+1}} K_n^p(x+1) - \alpha_{2_{n+1,x+1}} \left(\gamma_{1_{n-1,x}} K_{n-1}^p(x) - \gamma_{2_{n-1,x}} K_{n-1}^p(x-1)\right) \\
&= \alpha_{1_{n+1,x+1}} K_n^p(x+1) - \alpha_{2_{n+1,x+1}} \gamma_{1_{n-1,x}} K_{n-1}^p(x) + \alpha_{2_{n+1,x+1}} \gamma_{2_{n-1,x}} K_{n-1}^p(x-1) \\
&= \eta_{1_{n,x}} K_n^p(x+1) - \eta_{2_{n,x}} K_{n-1}^p(x) + \eta_{3_{n,x}} K_{n-1}^p(x-1), \tag{26}
\end{aligned}$$
$$\eta_{1_{n,x}} = \alpha_{1_{n+1,x+1}} = \frac{(N-2n-1)p + n - x - 1}{\sqrt{p(1-p)\,(n+1)(N-n-1)}},$$
$$\eta_{2_{n,x}} = \alpha_{2_{n+1,x+1}}\, \gamma_{1_{n-1,x}} = \sqrt{\frac{n(N-n)\left((N-2x-1)p + x - n + 1\right)^2}{p(1-p)\,(n+1)(N-n-1)(x+1)(N-x-1)}},$$
$$\eta_{3_{n,x}} = \alpha_{2_{n+1,x+1}}\, \gamma_{2_{n-1,x}} = \sqrt{\frac{n(N-n)\,x(N-x)}{(n+1)(N-n-1)(x+1)(N-x-1)}},$$
$$x = x_1, x_1 + 1, \ldots, \frac{N}{2} - 1 \quad \text{and} \quad n = x.$$
This recurrence relation is termed the four-term recurrence relation in the $n$-$x$ direction. This newly developed approach is used to compute the KP coefficients in the ranges $x = x_1 + 1, x_1 + 2, \ldots, N/2 + 1$ with $n = x - 1$, and $x = x_1 + 1, x_1 + 2, \ldots, N/2$ with $n = x$, as shown in Figure 9.
Figure 9. A diagram showing the location of the coefficients in Part 2-2.
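The coefficients $\eta_1$, $\eta_2$, $\eta_3$ of the four-term relation (26) are simply products of the $\alpha$ and $\gamma$ coefficients evaluated at shifted indices, which the following sketch makes explicit (the helper name is ours):

```python
import math

def eta(n: int, x: int, N: int, p: float):
    """Coefficients of the four-term n-x recurrence, Eq. (26), built from
    the n-direction (alpha) and x-direction (gamma) coefficients."""
    a1 = ((N - 2 * (n + 1) + 1) * p + (n + 1) - (x + 1) - 1) / math.sqrt(
        p * (1 - p) * (n + 1) * (N - n - 1))               # alpha1 at (n+1, x+1)
    a2 = math.sqrt(n * (N - n) / ((n + 1) * (N - n - 1)))  # alpha2 at (n+1, x+1)
    g1 = ((N - 2 * x - 1) * p - (n - 1) + x) / math.sqrt(
        p * (1 - p) * (x + 1) * (N - x - 1))               # gamma1 at (n-1, x)
    g2 = math.sqrt(x * (N - x) / ((x + 1) * (N - x - 1)))  # gamma2 at (n-1, x)
    return a1, a2 * g1, a2 * g2
```

One diagonal step is then `K[n+1][x+1] = e1 * K[n][x+1] - e2 * K[n-1][x] + e3 * K[n-1][x-1]`, matching Eq. (26).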
3.4.4. Computation of the Coefficients Located in Part 2-3

This section presents the computation of the KP coefficients located in Part 2-3 of Figure 7. These values are computed using (21) in the ranges $n = x_1, x_1 + 1, \ldots, N/2 - 2$ and $n + 2 \le x \le N - n - 1$, subject to the following condition:
$$|K_n(x)| < 10^{-5} \quad \text{and} \quad |K_n(x+1)| < 10^{-7}. \tag{27}$$
3.5. Computation of the Rest of the KP Coefficients

This subsection provides the computation of the rest of the KP coefficients, which can be obtained using the symmetry relations of the KP. The coefficients in the ranges $x = 0, 1, \ldots, N/2 - 1$ and $n = x + 1, x + 2, \ldots, N - x - 1$ are given by
$$K_n^p(x) = K_x^p(n). \tag{28}$$
The coefficients in the ranges $x = 0, 1, \ldots, N - 1$ and $n = N - x, N - x + 1, \ldots, N - 1$ are computed using the following expression:
$$K_n^p(x) = (-1)^{N-n-x-1} K_{N-n}^p(N-x). \tag{29}$$
In addition, to calculate the KP coefficients for $p > 0.5$, the value of $p$ is first set to $1 - p$. Then, the KP coefficients are computed using the proposed methodology. Finally, the following formula is applied to all coefficients [44]:
$$K_n^p(x) = (-1)^n K_n^{1-p}(N-x-1). \tag{30}$$
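The $p > 0.5$ reflection of Eq. (30) amounts to a column flip with an alternating row sign. A minimal sketch (the function name is ours), assuming the full $N \times N$ matrix computed for parameter $1 - p$ is available as a list of rows:

```python
def kp_reflect(K):
    """Map a full KP matrix computed at parameter 1 - p to parameter p
    via Eq. (30): K_n^p(x) = (-1)^n K_n^{1-p}(N - x - 1)."""
    N = len(K)
    return [[(-1) ** n * K[n][N - x - 1] for x in range(N)]
            for n in range(N)]
```

For the trivial $N = 2$ case the KP matrix at parameter $q$ is $[[\sqrt{1-q}, \sqrt{q}], [\sqrt{q}, -\sqrt{1-q}]]$, and the reflection of the $q = 0.3$ matrix reproduces the $p = 0.7$ matrix, as expected.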
3.6. Summary of the Proposed Algorithm

In this subsection, a summary of the proposed algorithm is presented. A flowchart of the proposed recurrence is shown in Figure 10, and pseudo-code is presented in Algorithm 1 for further clarification. In addition, 3D plots of the KP coefficients are given in this subsection.
Figure 10. Flowchart of the proposed algorithm.
Algorithm 1 Computation of Krawtchouk polynomials using the proposed algorithm.

Input: N, p
  N represents the size of the Krawtchouk polynomial,
  p represents the parameter of the Krawtchouk polynomials.
Output: K_n^p(x)

1: Flag = False
2: if p > 0.5 then
3:   Flag = True; p ← 1 − p
4: end if
5: x0 ← Np; x1 ← x0 + 1
6: Compute K_0^p(x0) using (10)
7: Compute K_0^p(x1) using (12)
8: Compute K_1^p(x0) and K_1^p(x1) using (16) and (17)
9: ▷ Compute initial set
10: for x = x0 : x1 do
11:   for n = 2 : x do
12:     Compute K_n^p(x) using (18)
13:   end for
14: end for
15: ▷ Compute coefficient values in Part 1
16: for n = 0 : x0 do
17:   for x = x0 : −1 : n do            ▷ inner loop
18:     Compute K_n^p(x) using (19)
19:     if |K_n(x)| < 10⁻⁵ and |K_n(x+1)| < 10⁻⁷ then
20:       Exit inner loop
21:     end if
22:   end for
23: end for
24: ▷ Compute coefficient values in Part 2-1
25: for n = 0 : x0 do
26:   for x = x0 : N − n − 1 do         ▷ inner loop
27:     Compute K_n^p(x) using (21)
28:     if |K_n(x+1)| < 10⁻⁷ and |K_n(x)| < 10⁻⁵ then
29:       Exit inner loop
30:     end if
31:   end for
32: end for
33: ▷ Compute coefficient values in Part 2-2
34: for x = x0 : N/2 − 1 do
35:   n ← x
36:   Compute K_n^p(x) using (23)
37: end for
38: for x = x1 : N/2 − 1 do
39:   n ← x
40:   Compute K_n^p(x) using (26)
41: end for
42: ▷ Compute coefficient values in Part 2-3
43: for n = x1 : N/2 − 2 do
44:   for x = n + 2 : N − n − 1 do      ▷ inner loop
45:     Compute K_n^p(x) using (21)
46:     if |K_n(x)| < 10⁻⁵ and |K_n(x+1)| < 10⁻⁷ then
47:       Exit inner loop
48:     end if
49:   end for
50: end for
51: Compute the rest of the coefficients using the similarity relations (28) and (29)
52: if Flag = True then
53:   Apply (30)
54: end if
Figures 11 and 12 show 3D plots of the KP coefficients generated using the proposed recurrence algorithm with $N = 2000$ and values of the parameter $p$ below and above 0.5, respectively.
Figure 11. 3D plot of the KP coefﬁcients computed for N=2000 and p<0.5.
Figure 12. 3D plot of the KP coefﬁcients computed for N=2000 and p>0.5.
4. Numerical Results and Analyses
This section presents the results obtained using the proposed recurrence algorithm. In addition, a comprehensive comparison is conducted with the existing recurrence algorithms. The comparison is carried out in terms of the energy compaction, reconstruction error, and computation cost.
4.1. Energy Compaction Analysis
The order of moments n impacts the process of signal reconstruction, energy compaction, and information retrieval. The order of the KP moments is given by n = 0, 1, 2, …, N − 1. The energy compaction is utilized to check the ability of the KP transform to pack a large fraction of the signal energy into relatively few moment coefficients. To find the impact of the KP parameter (p) on the energy compaction property, the procedure given by [46] is employed. A stationary Markov sequence with length N and zero mean is analyzed. A matrix L with covariance coefficients (ρ) is defined as [27]:
$$L = \begin{bmatrix} 1 & \rho & \cdots & \rho^{N-1} \\ \rho & 1 & \cdots & \vdots \\ \vdots & \vdots & \ddots & \vdots \\ \rho^{N-1} & \cdots & \rho & 1 \end{bmatrix} \tag{31}$$
The matrix L is then transformed to the Krawtchouk domain, and the coefficients on the main diagonal of the transformed matrix (S) are computed. These diagonal entries represent the variances $\sigma_l^2$, where S is computed as
$$S = R\,L\,R^{T}, \tag{32}$$
where R denotes the KP matrix and $(\cdot)^{T}$ refers to the matrix transpose operation. In addition, the normalized restriction error ($J_m$) can be computed using:
$$J_m = \frac{\sum_{q=m}^{N-1} \sigma_q^2}{\sum_{q=0}^{N-1} \sigma_q^2}, \tag{33}$$
$$m = 0, 1, \ldots, N-1, \tag{34}$$
where $\sigma_q^2$ denotes the variances $\sigma_l^2$ sorted in descending order. In the experiment, the normalized restriction error is evaluated for different covariance coefficients (ρ) and different values of the parameters m, N, and p.
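The energy-compaction procedure above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: it builds the Markov covariance matrix L of Eq. (31) for a given ρ, transforms it with an orthonormal matrix R, and evaluates J_m of Eqs. (33) and (34). A random orthonormal basis stands in for the KP matrix, which is not constructed here.

```python
import numpy as np

def normalized_restriction_error(R, rho):
    """J_m of Eqs. (31)-(34): transform the Markov covariance matrix L with
    the orthonormal matrix R and measure how fast the sorted variances on
    the diagonal of S = R L R^T decay."""
    N = R.shape[0]
    idx = np.arange(N)
    # Covariance of a zero-mean stationary Markov sequence: L[i, j] = rho^|i-j|
    L = rho ** np.abs(idx[:, None] - idx[None, :])
    S = R @ L @ R.T
    var = np.sort(np.diag(S))[::-1]        # sigma_q^2 in descending order
    tail = np.cumsum(var[::-1])[::-1]      # sum_{q=m}^{N-1} sigma_q^2
    return tail / var.sum()                # J_m for m = 0, ..., N-1

if __name__ == "__main__":
    N = 64
    # Stand-in orthonormal basis (the paper would use the KP matrix R here)
    Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((N, N)))
    J = normalized_restriction_error(Q, rho=0.93)
    print(J[0])                            # J_0 is always 1 by construction
```

A transform with strong energy compaction drives J_m towards zero after only a few sorted variances, which is exactly what Figure 13 visualizes for the KP matrix.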
Figure 13 shows the normalized restriction error for different values of the parameter p with the covariance coefficient ρ = 0.93. It can be observed from Figure 13 that the normalized restriction errors for p and 1 − p are equal. For example, the normalized restriction error for p = 0.05 is equal to that for p = 1 − 0.05 = 0.95. Moreover, the energy compaction is influenced by the KP parameter p. For instance, as the parameter p increases from 0.05 to 0.45, the energy compaction performance of the KP changes. Furthermore, the energy compaction at p = 0.45 shows better performance than at p = 0.05 because the normalized restriction error (J_m) reaches zero values. However, small values of the parameter p show better performance in terms of feature extraction, as proven in [44]. Thus, it can be concluded that KP provides further improvement in energy compaction as the parameter p approaches 0.5, whereas more accurate feature extraction can be achieved when the parameter p deviates from 0.5 [44].
Figure 13. Energy compaction for different values of the parameter p.
4.2. Analysis of Reconstruction Error
In this section, the proposed recurrence algorithm is evaluated by carrying out recon-
struction error analysis (REA). This REA was conducted for the proposed and the existing
works. The REA was performed using an image formed from 16 well-known images as
shown in Figure 14. In addition, the comparison was performed on an image with a large
size, i.e., (4096
×
4096). Different values of parameter
p
were considered in the analysis.
These values are p=0.1, 0.2, 0.3, and 0.4.
Figure 14. The image used for the REA evaluation.
First, the WNKP (R) is generated using the proposed and existing algorithms. Then, the KMs (ψ) of the image are computed, and the image is reconstructed from a limited number of the computed moments. Finally, the normalized mean square error (NMSE) is calculated between the input image and the reconstructed version of the image. The NMSE is given as [44]
$$\mathrm{NMSE}(I, I_{Rec}) = \frac{\sum_{x,y}\big(I(x,y) - I_{Rec}(x,y)\big)^2}{\sum_{x,y}\big(I(x,y)\big)^2}, \tag{35}$$
where I and $I_{Rec}$ denote the original image and the reconstructed image, respectively.
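Eq. (35) translates directly into code. The sketch below assumes the images are supplied as numeric arrays; it is an illustration of the metric, not the paper's evaluation harness.

```python
import numpy as np

def nmse(I, I_rec):
    """Normalized mean square error of Eq. (35) between an image and its
    reconstruction from a limited number of moments."""
    I = np.asarray(I, dtype=float)
    I_rec = np.asarray(I_rec, dtype=float)
    return np.sum((I - I_rec) ** 2) / np.sum(I ** 2)
```

A perfect reconstruction yields an NMSE of 0, while reconstructing with all moments discarded (an all-zero image) yields an NMSE of 1, which bounds the curves plotted in Figures 15-18.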
The NMSE and the reconstructed images for p = 0.1 and p = 0.2 are shown in Figures 15 and 16, respectively. The first row depicts the images reconstructed using the FRK algorithm [44], while the second row shows the images reconstructed using the proposed algorithm. The FRK algorithm is unable to reconstruct the image from low-order moments, nor is it able to reconstruct the image at high orders. The proposed algorithm, however, is capable of fully reconstructing the image for different order values. In addition, the NMSE is minimized by the proposed algorithm at a moment order of 680, which is approximately 16% of the full order. Figure 15 shows that the proposed algorithm achieves an NMSE of 0.72, while the FRK algorithm records a value of 0.84. This implies that the proposed algorithm outperforms the FRK algorithm. Moreover, the NMSE reaches zero when the proposed algorithm is used, while it records 0.64 when the FRK is used. The limitation of the FRK algorithm is due to the computed initial set, which drives the KPC values towards zero and thus increases the NMSE. Figure 17 plots the experiments using p = 0.3. The results show that the proposed algorithm provides a better NMSE than the FRK starting from a moment order of 680. In addition, the performance improvement increases with the moment order until the full order of 4096 is reached. At the full order, the NMSE of the proposed algorithm reaches 0, while that of the FRK algorithm reaches 0.18.
Figure 15. The NMSE performance comparing the proposed algorithm and the FRK algorithm [44] with p = 0.1.
Figure 16. The NMSE performance comparing the proposed algorithm and the FRK algorithm [44] with p = 0.2.
Figure 17. The NMSE performance comparing the proposed algorithm and the FRK algorithm [44] with p = 0.3.
Figure 18 shows a further performance evaluation using p = 0.4. The results show that the proposed algorithm has the ability to accurately generate the KP coefficients, while those of the FRK remain inaccurate. This is attributed to the zero initial value obtained in the FRK algorithm. It is also worth noting that the proposed algorithm is able to generate KP coefficients for large polynomial sizes and at high polynomial orders, experimentally found to exceed 8192.
Figure 18. The NMSE performance comparing the proposed algorithm and the FRK algorithm [44] with p = 0.4.
Finally, this paper provides a comparison of the maximum polynomial size between the proposed and existing algorithms. The maximum size is obtained for different values of the parameter p. For each recurrence algorithm, the polynomial (R) is first computed with a size and order of N × N. Then, the following criterion is applied:
$$Err = \mathrm{sum}\big(\mathrm{diagonal}(R \times R^{T}) - I(N)\big) < Th, \tag{36}$$
where Th is the threshold value, chosen as $10^{-5}$, and I(N) is the identity matrix with a size of N × N. It is worth noting that the maximum value of N is set to 20,480. Table 1 lists the obtained maximum size for each algorithm for different values of the parameter p. This table demonstrates that the proposed algorithm is capable of generating an orthogonal polynomial with a large size for different values of the parameter p. The proposed algorithm outperforms the existing recurrence algorithms for all considered values of the parameter p. For example, at p = 0.05, the proposed algorithm generates a size 82× larger than the RAN algorithm, 243× larger than the RAX algorithm, and 16× larger than the FRK algorithm. Thus, the proposed recurrence algorithm can be used to process large signal sizes quickly and accurately.
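One reading of the criterion in Eq. (36) is that the diagonal of R × R^T may deviate from the diagonal of I(N) by less than Th in total; the exact reduction used in the paper is not spelled out, so the helper below (its name and signature are ours) is a sketch under that assumption.

```python
import numpy as np

def passes_orthogonality_check(R, th=1e-5):
    """One reading of Eq. (36): the diagonal of R R^T must deviate from the
    identity's diagonal by less than Th in total."""
    err = np.abs(np.diag(R @ R.T) - 1.0).sum()
    return err < th

if __name__ == "__main__":
    # An orthonormal matrix passes; a scaled (non-orthonormal) one fails.
    Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((64, 64)))
    print(passes_orthogonality_check(Q))        # True
    print(passes_orthogonality_check(2.0 * Q))  # False
```

In the experiment, N is increased until this check fails; the last N that passes is the maximum size reported in Table 1.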
Table 1. Maximum size of KP that is generated using the proposed and existing algorithms.

| p | RAN | RAX | FRK | Proposed | p | RAN | RAX | FRK | Proposed |
|---|---|---|---|---|---|---|---|---|---|
| 0.05 | 248 | 84 | 1236 | 20,480 | 0.55 | 926 | 932 | 2428 | 20,480 |
| 0.10 | 324 | 132 | 2250 | 20,480 | 0.60 | 808 | 812 | 2880 | 20,480 |
| 0.15 | 392 | 196 | 2252 | 20,480 | 0.65 | 706 | 708 | 3368 | 20,480 |
| 0.20 | 462 | 276 | 2980 | 20,480 | 0.70 | 618 | 618 | 3058 | 20,480 |
| 0.25 | 538 | 436 | 3400 | 20,480 | 0.75 | 538 | 490 | 3400 | 20,480 |
| 0.30 | 618 | 676 | 3058 | 20,480 | 0.80 | 462 | 318 | 2980 | 20,480 |
| 0.35 | 710 | 1234 | 3368 | 20,480 | 0.85 | 390 | 202 | 2252 | 20,480 |
| 0.40 | 814 | 1428 | 2880 | 20,480 | 0.90 | 322 | 140 | 2250 | 20,480 |
| 0.45 | 936 | 1220 | 2428 | 20,480 | 0.95 | 240 | 88 | 1236 | 20,480 |
4.3. Computation Cost Analysis
The computation cost is an important factor in evaluating the performance of the proposed recurrence algorithm [44]. Thus, the computation cost of the proposed algorithm is compared with that of the existing works: the recurrence algorithm in the x-direction (RAX), the recurrence algorithm in the n-direction (RAN), and the FRK algorithm. The computation cost is measured by the number of computed coefficients. Table 2 shows the ratio of computed coefficients (RCC) using the proposed algorithm for different polynomial sizes; full moment orders are considered. From Table 2, it can be observed that the RCC for small values of p and 1 − p reaches a low percentage, and this percentage increases as the value of p approaches 0.5. The average RCC for p = 0.05 and p = 0.95 is 9.22%, while for p = 0.5 the average RCC is 20.34%. This is because the effective coefficients form a circle for p = 0.5 and a rotated ellipse as the parameter p deviates towards 0 or 1, which reduces the number of effective coefficients and, consequently, the percentage of computed coefficients.
Table 2. The ratio of the computed coefficients (RCC, %) of the proposed recurrence algorithm. Columns give the Krawtchouk parameter (p).

| Polynomial size (N) | 0.05, 0.95 | 0.1, 0.9 | 0.15, 0.85 | 0.2, 0.8 | 0.25, 0.75 | 0.3, 0.7 | 0.35, 0.65 | 0.4, 0.6 | 0.45, 0.55 | 0.5 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1024 | 10.03 | 13.31 | 15.56 | 17.24 | 18.52 | 19.51 | 20.24 | 20.75 | 21.05 | 21.16 |
| 2048 | 9.48 | 12.74 | 14.99 | 16.68 | 17.97 | 18.96 | 19.69 | 20.20 | 20.50 | 20.60 |
| 3072 | 9.25 | 12.51 | 14.76 | 16.45 | 17.74 | 18.73 | 19.47 | 19.97 | 20.27 | 20.38 |
| 4096 | 9.13 | 12.38 | 14.63 | 16.32 | 17.61 | 18.60 | 19.34 | 19.85 | 20.14 | 20.24 |
| 5120 | 9.05 | 12.30 | 14.54 | 16.23 | 17.53 | 18.52 | 19.25 | 19.76 | 20.06 | 20.16 |
| 6144 | 8.99 | 12.24 | 14.48 | 16.17 | 17.47 | 18.46 | 19.19 | 19.70 | 20.00 | 20.10 |
| 7168 | 8.95 | 12.19 | 14.44 | 16.13 | 17.42 | 18.41 | 19.15 | 19.65 | 19.95 | 20.05 |
| 8192 | 8.91 | 12.15 | 14.40 | 16.09 | 17.38 | 18.38 | 19.11 | 19.62 | 19.92 | 20.02 |
| Average | 9.22 | 12.48 | 14.73 | 16.41 | 17.71 | 18.70 | 19.43 | 19.94 | 20.24 | 20.34 |
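As a consistency check, the Average row of Table 2 can be reproduced by averaging the per-size RCC values; the lists below are transcribed from the table's first and last value columns.

```python
# RCC (%) per polynomial size N = 1024, ..., 8192, transcribed from Table 2
rcc = {
    "p = 0.05, 0.95": [10.03, 9.48, 9.25, 9.13, 9.05, 8.99, 8.95, 8.91],
    "p = 0.5":        [21.16, 20.60, 20.38, 20.24, 20.16, 20.10, 20.05, 20.02],
}
for label, values in rcc.items():
    # Rounding to two decimals reproduces the table's Average row
    print(label, round(sum(values) / len(values), 2))
```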
Table 3 demonstrates the performance improvement ratio between the proposed and the existing algorithms, namely RAN, RAX, and FRK. The RAN and RAX algorithms compute 50% of the KPCs for all values of the parameter p because the n–x plane is divided into two portions. On the other hand, the FRK algorithm computes 25% of the KPCs as it divides the n–x plane into four partitions. The improvement ratio between the proposed and existing algorithms is obtained as
$$\text{Improvement Ratio} = 1 - \frac{\text{RCC of the proposed algorithm}}{\text{RCC of the existing algorithm}}. \tag{37}$$
According to Table 3, the proposed algorithm shows an improvement ratio of 18.64% at p = 0.5 when compared to the FRK algorithm and 59.32% when compared to the RAN and RAX algorithms. The improvement ratio increases as the value of the parameter p deviates towards 0 and 1. For example, the results show that at p = 0.25, the proposed method achieves an improvement ratio of 29.18% compared to the FRK algorithm and 64.59% compared to the other algorithms. In addition, a maximum improvement ratio of 63.11% is achieved by the proposed algorithm when compared with the FRK algorithm, while it is 81.55% when compared with the RAN and RAX algorithms.
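Eq. (37) is a one-liner; the calls below reproduce two of the Table 3 entries from the average RCCs at p = 0.5.

```python
def improvement_ratio(rcc_proposed, rcc_existing):
    """Eq. (37): fraction of coefficient computations saved relative to an
    existing algorithm; both RCCs are given in percent."""
    return 1.0 - rcc_proposed / rcc_existing

# Average RCCs from Tables 2 and 3 (percent), at p = 0.5
print(round(100 * improvement_ratio(20.34, 25.0), 2))  # vs. FRK    -> 18.64
print(round(100 * improvement_ratio(20.34, 50.0), 2))  # vs. RAN/RAX -> 59.32
```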
Table 3. Comparing the improvement ratio of the computed coefficients of the proposed algorithm and the existing algorithms. Columns give the Krawtchouk polynomial parameter (p); values are percentages.

| | 0.05, 0.95 | 0.1, 0.9 | 0.15, 0.85 | 0.2, 0.8 | 0.25, 0.75 | 0.3, 0.7 | 0.35, 0.65 | 0.4, 0.6 | 0.45, 0.55 | 0.5 |
|---|---|---|---|---|---|---|---|---|---|---|
| Proposed | 9.22 | 12.48 | 14.73 | 16.41 | 17.71 | 18.70 | 19.43 | 19.94 | 20.24 | 20.34 |
| FRK | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 | 25.00 |
| RAN and RAX | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 | 50.00 |
| Improvement over FRK | 63.11 | 50.09 | 41.10 | 34.35 | 29.18 | 25.22 | 22.28 | 20.25 | 19.05 | 18.64 |
| Improvement over RAN and RAX | 81.55 | 75.05 | 70.55 | 67.18 | 64.59 | 62.61 | 61.14 | 60.12 | 59.52 | 59.32 |
5. Conclusions
This paper described that state-of-the-art algorithms suffer from high error and that their implementation is limited to specific values of the parameter p. In addition, the state-of-the-art algorithms do not provide any computation of the initial value. Furthermore, to date, no algorithm has been proposed for computing the KP coefficients with high polynomial orders and large polynomial sizes. To address these limitations, a new recurrence relation was proposed in this paper. To this end, a new initial value was presented and derived. In addition, a new diagonal recurrence relation was introduced. The proposed algorithm divides the KP plane into four triangles and computes only the coefficients in the upper triangle. The coefficients in the upper triangle were divided into the fundamental initial set, the initial set, Part-1, and Part-2. The n-recurrence, x-recurrence, backward x-recurrence, and diagonal recurrence relations were used to compute the values of the coefficients in the upper triangle. In addition, the symmetry identities were used to compute the values of the coefficients in the other triangles. The proposed algorithm was evaluated and compared with the existing recurrence algorithms based on the reconstruction error, energy compaction, and computation cost. The experimental results showed that the proposed algorithm achieves a remarkable improvement over the existing algorithms in terms of the maximum size generated and the number of coefficients computed. Although the proposed algorithm outperforms state-of-the-art algorithms, its computational complexity is still high and can be further reduced. This can be achieved by implementing the proposed algorithm in a multi-processing (parallel) environment rather than in the sequential form considered in this paper. Our future work is also directed towards combining the KP with other orthogonal polynomials. This is expected to result in a new OP that combines the potential of those orthogonal polynomials with the properties of the KP coefficients.
Appendix A. Proof of Equation (10)
This appendix presents the proof of Equation (10):
$$K_0^p(x_0) = \sqrt{\binom{N-1}{Np}\, p^{Np}\, (1-p)^{N-1-Np}} \tag{A1}$$
$$= \left[\binom{N-1}{Np}\, p^{Np}\, (1-p)^{N-1-Np}\right]^{1/2}. \tag{A2}$$
Substituting $x_0 = Np$ gives
$$K_0^p(x_0) = \left[\frac{(N-1)!}{x_0!\,(N-x_0-1)!} \times \frac{p^{x_0}(1-p)^{N-1}}{(1-p)^{x_0}}\right]^{1/2} \tag{A3}$$
$$= \left[\frac{\Gamma(N)}{\Gamma(x_0+1)\,\Gamma(N-x_0)} \times \left(\frac{1-p}{p}\right)^{-x_0}(1-p)^{N-1}\right]^{1/2}. \tag{A4}$$
By taking the exponential of the natural logarithm ($e^{\log}$), (A4) can be simplified to:
$$K_0^p(x_0) = e^{\log\left[\frac{\Gamma(N)}{\Gamma(x_0+1)\,\Gamma(N-x_0)} \times \left(\frac{1-p}{p}\right)^{-x_0}(1-p)^{N-1}\right]^{1/2}} \tag{A5}$$
$$= e^{\frac{1}{2}\log\left[\frac{\Gamma(N)}{\Gamma(x_0+1)\,\Gamma(N-x_0)} \times \left(\frac{1-p}{p}\right)^{-x_0}(1-p)^{N-1}\right]} \tag{A6}$$
$$= e^{\frac{1}{2}\left[\log\Gamma(N) - \log\Gamma(x_0+1) - \log\Gamma(N-x_0) + \log\left(\left(\frac{1-p}{p}\right)^{-x_0}(1-p)^{N-1}\right)\right]} \tag{A7}$$
$$= e^{\frac{1}{2}\left[\log\Gamma(N) - \log\Gamma(x_0+1) - \log\Gamma(N-x_0) - x_0\log\left(\frac{1-p}{p}\right) + (N-1)\log(1-p)\right]}. \tag{A8}$$
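The point of the final log-gamma form is numerical: the binomial coefficient in (A1) overflows floating point long before N reaches the sizes used in Section 4, while the log-domain evaluation stays finite. The sketch below illustrates that final expression, assuming the reading x0 = ⌊Np⌋; the function name is ours, not the paper's.

```python
import math

def k0_initial(N, p):
    """Initial value K_0^p(x0) of Eq. (10), evaluated in the log-gamma
    domain as in the final step above so it stays finite for large N."""
    x0 = math.floor(N * p)  # assumed reading of x0 = Np
    log_val = 0.5 * (math.lgamma(N) - math.lgamma(x0 + 1)
                     - math.lgamma(N - x0)
                     - x0 * math.log((1 - p) / p)
                     + (N - 1) * math.log(1 - p))
    return math.exp(log_val)

if __name__ == "__main__":
    # Agrees with the direct binomial form for sizes where it still fits
    N, p = 50, 0.3
    x0 = math.floor(N * p)
    direct = math.sqrt(math.comb(N - 1, x0) * p**x0 * (1 - p)**(N - 1 - x0))
    print(math.isclose(k0_initial(N, p), direct, rel_tol=1e-9))
    # The log-domain form also survives sizes where p**x0 underflows
    print(k0_initial(20000, 0.3) > 0.0)
```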
Author Contributions: Conceptualization, S.H.A., K.A.A. and B.M.M.; methodology, S.H.A. and B.M.M.; software, S.H.A., K.A.A. and M.A.N.; validation, M.A., S.M.S. and M.A.N.; formal analysis, M.A.; investigation, M.A. and K.A.A.; writing—original draft preparation, S.H.A., M.A., M.A.N. and B.M.M.; writing—review and editing, K.A.A., S.M.S. and M.A.; visualization, S.H.A. and B.M.M.; supervision, S.H.A. and S.M.S.; project administration, S.H.A. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Data Availability Statement: All data are available within the manuscript.
Conﬂicts of Interest: The authors declare no conﬂict of interest.
References
1. Chen, Y.; Lin, W.; Wen, Y.; Wang, B.; Zhang, S.; Zhang, Y.; Yu, S. Image Signal Transmission Passing over a Barrier enabled by Optical Accelerating Beams. In Imaging Systems and Applications; Optical Society of America: Washington, DC, USA, 2020; p. JF1E-5.
2. Park, K.; Chae, M.; Cho, J.H. Image Pre-Processing Method of Machine Learning for Edge Detection with Image Signal Processor Enhancement. Micromachines 2021, 12, 73.
3. Xiao, H. A Nonlinear Modulation Algorithm based on Orthogonal Polynomial in MIMO Radar. In Proceedings of the 2020 International Conference on Microwave and Millimeter Wave Technology (ICMMT), Shanghai, China, 20–23 September 2020; pp. 1–3. doi:10.1109/ICMMT49418.2020.9386969.
4. Radeaf, H.S.; Mahmmod, B.M.; Abdulhussain, S.H.; Al-Jumaeily, D. A steganography based on orthogonal moments. In ICICT '19—International Conference on Information and Communication Technology; ACM Press: New York, NY, USA, 2019; pp. 147–153. doi:10.1145/3321289.3321324.
5. Naser, M.A.; Alsabah, M.; Mahmmod, B.M.; Noordin, N.K.; Abdulhussain, S.H.; Baker, T. Downlink Training Design for FDD Massive MIMO Systems in the Presence of Colored Noise. Electronics 2020, 9, 2155. doi:10.3390/electronics9122155.
6. Alsabah, M.; Naser, M.A.; Mahmmod, B.M.; Noordin, N.K.; Abdulhussain, S.H. Sum Rate Maximization Versus MSE Minimization in FDD Massive MIMO Systems With Short Coherence Time. IEEE Access 2021, 9, 108793–108808. doi:10.1109/ACCESS.2021.3100799.
7. Guido, R.C.; Pedroso, F.; Contreras, R.C.; Rodrigues, L.C.; Guariglia, E.; Neto, J.S. Introducing the Discrete Path Transform (DPT) and its applications in signal analysis, artefact removal, and spoken word recognition. Digit. Signal Process. 2021, 117, 103158.
8. Azam, M.H.; Berger, J.; Guernouti, S.; Poullain, P.; Musy, M. Parametric PGD model used with orthogonal polynomials to assess efficiently the building's envelope thermal performance. J. Build. Perform. Simul. 2021, 14, 132–154. doi:10.1080/19401493.2020.1868577.
9. Antonir, A.; Wijenayake, C.; Ignjatović, A. Acquisition of high bandwidth signals by sampling an analog chromatic derivatives filterbank. In Proceedings of the 2021 IEEE International Symposium on Circuits and Systems (ISCAS), Daegu, Korea, 22–28 May 2021; pp. 1–5. doi:10.1109/ISCAS51556.2021.9401478.
10. Vlašić, T.; Ralašić, I.; Tafro, A.; Seršić, D. Spline-like Chebyshev polynomial model for compressive imaging. J. Vis. Commun. Image Represent. 2020, 66, 102731.
11. Abdulhussain, S.H.; Mahmmod, B.M.; Naser, M.A.; Alsabah, M.Q.; Ali, R.; Al-Haddad, S.A.R. A Robust Handwritten Numeral Recognition Using Hybrid Orthogonal Polynomials and Moments. Sensors 2021, 21, 1999. doi:10.3390/s21061999.
12. Barranco-Chamorro, I.; Grentzelos, C. Some uses of orthogonal polynomials in statistical inference. Comput. Math. Methods 2020, e1144. doi:10.1002/cmm4.1144.
13. Krishnamoorthy, R.; Devi, S.S. Image retrieval using edge based shape similarity with multiresolution enhanced orthogonal polynomials model. Digit. Signal Process. 2013, 23, 555–568.
14. Idan, Z.N.; Abdulhussain, S.H.; Mahmmod, B.M.; Al-Utaibi, K.A.; Al-Hadad, S.; Sait, S.M. Fast Shot Boundary Detection Based on Separable Moments and Support Vector Machine. IEEE Access 2021, 9, 106412–106427. doi:10.1109/ACCESS.2021.3100139.
15. Nafees, S.; Rice, S.H.; Phillips, C. Analyzing Genomic Data Using Tensor-based Orthogonal Polynomials. In Proceedings of the 2018 ACM International Conference on Bioinformatics, Computational Biology, and Health Informatics, Washington, DC, USA, 29 August–1 September 2018; p. 584.
16. Igawa, R.A.; Barbon Jr., S.; Paulo, K.C.S.; Kido, G.S.; Guido, R.C.; Júnior, M.L.P.; da Silva, I.N. Account classification in online social networks with LBCA and wavelets. Inf. Sci. 2016, 332, 72–83.
17. Hameed, I.M.; Abdulhussain, S.H.; Mahmmod, B.M. Content-based image retrieval: A review of recent trends. Cogent Eng. 2021, 8, 1927469. doi:10.1080/23311916.2021.1927469.
18. Yang, L.; Su, H.; Zhong, C.; Meng, Z.; Luo, H.; Li, X.; Tang, Y.Y.; Lu, Y. Hyperspectral image classification using wavelet transform-based smooth ordering. Int. J. Wavelets Multiresolut. Inf. Process. 2019, 17, 1950050.
19. Guariglia, E.; Silvestrov, S. Fractional-Wavelet Analysis of Positive definite Distributions and Wavelets on D′(C). In Engineering Mathematics II; Silvestrov, S., Rančić, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 337–353.
20. Jassim, W.A.; Raveendran, P. Face recognition using discrete Tchebichef-Krawtchouk transform. In Proceedings of the 2012 IEEE International Symposium on Multimedia, Irvine, CA, USA, 10–12 December 2012; pp. 120–127.
21. Mahmmod, B.M.; Abdulhussain, S.H.; Naser, M.A.; Alsabah, M.; Mustafina, J. Speech Enhancement Algorithm Based on a Hybrid Estimator. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2021; Volume 1090, p. 012102. doi:10.1088/1757-899X/1090/1/012102.
22. Abdulhussain, S.H.; Ramli, A.R.; Hussain, A.J.; Mahmmod, B.M.; Jassim, W.A. Orthogonal polynomial embedded image kernel. In Proceedings of the International Conference on Information and Communication Technology, Baghdad, Iraq, 15–16 April 2019; pp. 215–221.
23. Wang, W.; Zhao, J. Hiding depth information in compressed 2D image/video using reversible watermarking. Multimed. Tools Appl. 2016, 75, 4285–4303.
24. Wang, Y.; Vilermo, M.; Yaroslavsky, L. Energy compaction property of the MDCT in comparison with other transforms. In Audio Engineering Society Convention 109; Audio Engineering Society: New York, NY, USA, 2000.
25. Mahmmod, B.M.; Abdul-Hadi, A.M.; Abdulhussain, S.H.; Hussien, A. On Computational Aspects of Krawtchouk Polynomials for High Orders. J. Imaging 2020, 6, 81. doi:10.3390/jimaging6080081.
26. Abdul-Hadi, A.M.; Abdulhussain, S.H.; Mahmmod, B.M. On the computational aspects of Charlier polynomials. Cogent Eng. 2020, 7, 1763553. doi:10.1080/23311916.2020.1763553.
27. Zhu, H.; Liu, M.; Shu, H.; Zhang, H.; Luo, L. General form for obtaining discrete orthogonal moments. IET Image Process. 2010, 4, 335. doi:10.1049/iet-ipr.2009.0195.
28. Mukundan, R.; Ong, S.; Lee, P. Image analysis by Tchebichef moments. IEEE Trans. Image Process. 2001, 10, 1357–1364. doi:10.1109/83.941859.
29. Mizel, A.K.E. Orthogonal functions solving linear functional differential equations using Chebyshev polynomial. Baghdad Sci. J. 2008, 5, 143–148.
30. Abdulhussain, S.H.; Mahmmod, B.M. Fast and efficient recursive algorithm of Meixner polynomials. J. Real-Time Image Process. 2021, 1–13. doi:10.1007/s11554-021-01093-z.
31. Yap, P.-T.; Paramesran, R.; Ong, S.-H. Image analysis by Krawtchouk moments. IEEE Trans. Image Process. 2003, 12, 1367–1377. doi:10.1109/TIP.2003.818019.
32. Yap, P.-T.; Paramesran, R. Local watermarks based on Krawtchouk moments. In Proceedings of the 2004 IEEE Region 10 Conference TENCON 2004, Chiang Mai, Thailand, 24 November 2004; pp. 73–76. doi:10.1109/TENCON.2004.1414534.
33. Akhmedova, F.; Liao, S. Face Recognition with Discrete Orthogonal Moments. In Recent Advances in Computer Vision; Springer: Cham, Switzerland, 2019; pp. 189–209.
34. Tsougenis, E.D.; Papakostas, G.A.; Koulouriotis, D.E. Image watermarking via separable moments. Multimed. Tools Appl. 2015, 74, 3985–4012. doi:10.1007/s11042-013-1808-y.
35. Zhou, Z.; Li, X.; Tang, C.; Ding, C. Binary LCD codes and self-orthogonal codes from a generic construction. IEEE Trans. Inf. Theory 2018, 65, 16–27.
36. Heo, J.; Kiem, Y.H. On characterizing integral zeros of Krawtchouk polynomials by quantum entanglement. Linear Algebra Appl. 2019, 567, 167–179.
37. Pierce, J.R. An Introduction to Information Theory: Symbols, Signals and Noise; Courier Corporation: Chelmsford, MA, USA, 2012.
38. Liao, S.X.; Pawlak, M. On the accuracy of Zernike moments for image analysis. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 1358–1364.
39. Liao, S.X.; Pawlak, M. On image analysis by moments. IEEE Trans. Pattern Anal. Mach. Intell. 1996, 18, 254–266.
40. Kamgar-Parsi, B.; Kamgar-Parsi, B. Evaluation of quantization error in computer vision. In Physics-Based Vision: Principles and Practice: Radiometry, Volume 1; CRC Press: Boca Raton, FL, USA, 1993; p. 293.
41. Yap, P.T.; Raveendran, P.; Ong, S.H. Krawtchouk moments as a new set of discrete orthogonal moments for image reconstruction. In Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02), Honolulu, HI, USA, 12–17 May 2002; pp. 908–912. doi:10.1109/IJCNN.2002.1005595.
42. Jassim, W.A.; Raveendran, P.; Mukundan, R. New orthogonal polynomials for speech signal and image processing. IET Signal Process. 2012, 6, 713–723.
43. Zhang, G.; Luo, Z.; Fu, B.; Li, B.; Liao, J.; Fan, X.; Xi, Z. A symmetry and bi-recursive algorithm of accurately computing Krawtchouk moments. Pattern Recognit. Lett. 2010, 31, 548–554. doi:10.1016/j.patrec.2009.12.007.
44. Abdulhussain, S.H.; Ramli, A.R.; Al-Haddad, S.A.R.; Mahmmod, B.M.; Jassim, W.A. Fast Recursive Computation of Krawtchouk Polynomials. J. Math. Imaging Vis. 2018, 60, 285–303. doi:10.1007/s10851-017-0758-9.
45. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions: With Formulas, Graphs, and Mathematical Tables; Dover Publications: Mineola, NY, USA, 1964.
46. Jain, A.K. Fundamentals of Digital Image Processing; Prentice-Hall, Inc.: Hoboken, NJ, USA, 1989.
... Tajeripour and Fekri-Ershad proposed detecting abnormalities in stone textures based on one-dimensional local binary patterns, and the proposed approach is fully automatic and all of the necessary parameters can be tuned [11]. Al-Utaibi et al. and Basheera M. Mahmmod used Krawtchouk and Hahn polynomials to reconstruct images and analyzed the influence of different parameters on the quality of reconstructed images to complete the detection of the targets [12,13]. Currently, widely used image segmentation algorithms include threshold segmentation, watershed algorithm, cluster segmentation, genetic algorithm, and so on. ...
... R(x, y) � log a S(x, y) − k log a S(x, y) * F(x, y), (12) where k is the light adjustment parameter; F(x,y) is the Gaussian kernel function; R(x, i) is the illumination image; and a is the number of image channels, usually 3. 6 Advances in Civil Engineering After Gaussian filtering is performed, the filtering results on different scales are averagely weighted to obtain the estimated illuminance image. ...
... Use hole filling to optimize the morphology of the binary image. (11) Detect the spread rate of gravel.(12) Establish the relationship between spreading rate and spreading amount. ...
Article
Full-text available
Synchronous chip seal is an advanced road constructing technology, and the gravel coverage rate is an important indicator of the construction quality. The traditional method to measure the gravel coverage rate usually depends on observation by human eyes, which is rough and inefficient. In this paper, a detection method of gravel coverage based on improved wavelet algorithm is proposed. By decomposing the image with two-dimensional discrete wavelet, the high-frequency and low-frequency coefficients are extracted. The noise of the high-frequency coefficients in the image is removed by improving the threshold function, and the contrast of the gravel target in the low-frequency coefficients is improved by the multiscale Retinex algorithm, and then two-dimensional wavelet reconstruction is carried out. Finally, the gravel target is segmented by the block threshold method, and the pixel ratio of the gravel is calculated to complete the detection of the gravel coverage. The experimental results show that the proposed method can effectively segment the gravel target and reduce the influence of environmental factors on the detection accuracy. The detection accuracy error is within ±2%, which can meet the detection requirements. The improved wavelet algorithm improves the signal-to-noise ratio of the denoised image, reduces the mean square error, and achieves a relatively good denoising effect.
... Likewise, different polynomials-based recurrence algorithms are promising techniques for signal processing due to their special capabilities in feature extraction. Nevertheless, they also just shorten the computational cost instead of calibration time [21][22][23]. Deep learning has been widely used in computer vision, natural language processing, and physiological signal analysis [24][25][26][27][28]. Nevertheless, deep learning needs lots of labelled samples from the target subject to show its superiority. ...
... However, it will yield computational burden. us, the overall loss function of sMEKT can be formulated by min P S ,P TL αD S,TL + βtr P T S S s w P S , subject to: P T S S s b P S � I. (22) Let P � [P S ; P TL ](P ∈ R 2 d×q ). en, the corresponding Lagrange function is ...
Article
Full-text available
A long calibration procedure limits the use in practice for a motor imagery (MI)-based brain-computer interface (BCI) system. To tackle this problem, we consider supervised and semisupervised transfer learning. However, it is a challenge for them to cope with high intersession/subject variability in the MI electroencephalographic (EEG) signals. Based on the framework of unsupervised manifold embedded knowledge transfer (MEKT), we propose a supervised MEKT algorithm (sMEKT) and a semisupervised MEKT algorithm (ssMEKT), respectively. sMEKT only has limited labelled samples from a target subject and abundant labelled samples from multiple source subjects. Compared to sMEKT, ssMEKT adds comparably more unlabelled samples from the target subject. After performing Riemannian alignment (RA) and tangent space mapping (TSM), both sMEKT and ssMEKT execute domain adaptation to shorten the differences among subjects. During domain adaptation, to make use of the available samples, two algorithms preserve the source domain discriminability, and ssMEKT preserves the geometric structure embedded in the labelled and unlabelled target domains. Moreover, to obtain a subject-specific classifier, sMEKT minimizes the joint probability distribution shift between the labelled target and source domains, whereas ssMEKT performs the joint probability distribution shift minimization between the unlabelled target domain and all labelled domains. Experimental results on two publicly available MI datasets demonstrate that our algorithms outperform the six competing algorithms, where the sizes of labelled and unlabelled target domains are variable. 
Especially for the target subjects with 10 labelled samples and 270/190 unlabelled samples, ssMEKT shows 5.27% and 2.69% increase in average accuracy on the two abovementioned datasets compared to the previous best semisupervised transfer learning algorithm (RA-regularized common spatial patterns-weighted adaptation regularization, RA-RCSP-wAR), respectively. Therefore, our algorithms can effectively reduce the need of labelled samples for the target subject, which is of importance for the MI-based BCI application.
... Te ANN requires features to be fed with. Tere are several kinds of features which may be considered, including polynomial features [41][42][43]. However, we opted for two kinds of popular and widely used hand-crafted features: gray-level co-occurrence matrix (GLCM) features and local binary pattern (LBP) features. ...
Article
Full-text available
Te deployment of photovoltaic (PV) cells as a renewable energy resource has been boosted recently, which enhanced the need to develop an automatic and swift fault detection system for PV cells. Prior to isolation for repair or replacement, it is critical to judge the level of the fault that occurred in the PV cell. Te aim of this research study is the fault-level grading of PV cells employing deep neural network models. Te experiment is carried out using a publically available dataset of 2,624 electroluminescence images of PV cells, which are labeled with four distinct defect probabilities defned as the defect levels. Te deep architectures of the classical artifcial neural networks are developed while employing hand-crafted texture features extracted from the EL image data. Moreover, optimized architectures of the convolutional neural network are developed with a specifc emphasis on lightweight models for real-time processing. Te experiments are performed for two-way binary classifcation and multiclass classifcation. For the frst binary categorization, the proposed CNN model outperformed the state-of-the-art solution with a margin of 1.3% in accuracy with a signifcant 50% less computational complexity. In the second binary classifcation task, the CPU-based proposed model outperformed the GPU-based solution with a margin of 0.9% accuracy with an 8× lighter architecture. Finally, the multiclass categorization of PV cells is performed and the state-of-the-art results with 83.5% accuracy are achieved. Te proposed models ofer a lightweight, efcient, and computationally cheaper CPU-based solution for the real-time fault-level categorization of PV cells.
... Several signal processing applications, such as denoising, have been implemented in many works in the literature [8][9][10][11]. ECG signal denoising aims to eliminate as much noise as feasible while preserving as much of the signal as possible. Daqrouq used the discrete wavelet transform (DWT) to reduce ECG baseline wandering [12]. ...
Article
Full-text available
Background: The electrocardiogram (ECG) is a widely used diagnostic that observes the heart activities of patients to ascertain a heart abnormality diagnosis. Artifacts, or noise, are the primary problem in ECG signal processing. Conventional denoising techniques have been proposed in previous literature; however, they have shortcomings: for example, determining a suitable wavelet basis function and threshold can be a time-consuming process. This paper presents end-to-end learning using a denoising auto-encoder (DAE) for the denoising algorithm and a convolutional bidirectional long short-term memory (ConvBiLSTM) network for ECG delineation, classifying ECG waveforms in terms of the PQRST waves and isoelectric lines. Denoising reconstruction using unsupervised learning based on the encoder-decoder process is proposed to overcome these drawbacks. First, the ECG signals are reduced to a low-dimensional vector by the encoder. Second, the decoder reconstructs the signals. Last, the reconstructed ECG signals are passed to the ConvBiLSTM. The proposed DAE-ConvBiLSTM architecture provides end-to-end diagnosis for heart abnormality detection. Results: The DAE-ConvBiLSTM obtained average accuracy, sensitivity, specificity, precision, and F1 score above 98.59%, exceeding existing studies. The DAE-ConvBiLSTM was also tested on detecting T-wave (due to ventricular repolarisation) morphology abnormalities. Conclusion: The developed architecture for detecting heart abnormalities, combining an unsupervised-learning DAE with a supervised-learning ConvBiLSTM, is proposed as an end-to-end learning algorithm. In the future, the accuracy of delineating the main ECG waveforms will affect heart abnormality detection in clinical practice.
... Feature extraction and analysis of heart sound signals are a significant part of establishing a cardiovascular disease diagnosis system [5][6][7]. Different features can reflect the state of heart function from various aspects. ...
Article
Full-text available
Cardiovascular disease (CVD) is considered one of the leading causes of death worldwide. In recent years, this research area has attracted researchers’ attention to investigating heart sounds to diagnose the disease. To effectively distinguish heart valve defects from normal heart sounds, adaptive empirical mode decomposition (EMD) and feature fusion techniques were used for heart sound classification. Adaptive EMD, based on the correlation coefficient and the Root Mean Square Error (RMSE), was proposed for screening the intrinsic mode function (IMF) components, and adaptive thresholds based on the Hausdorff distance were used to choose the IMF components used for reconstruction. The multidimensional features extracted from the reconstructed signal were ranked and selected. Waveform-transform, energy, and heart sound features can indicate the state of heart activity corresponding to various heart sounds. Here, a set of ordinary features was extracted from the time, frequency, and nonlinear domains. To extract more compelling features and achieve better classification results, another four cardiac reserve time features were fused. The fused features were sorted using six different feature selection algorithms. Three classifiers, random forest, decision tree, and K-nearest neighbor, were trained on open-source databases and our own database. Compared to previous work, our extensive experimental evaluations show that the proposed method achieves the best results, with the highest accuracy of 99.3% (a 1.9% improvement in classification accuracy). These excellent results verify the robustness and effectiveness of the fused features and the proposed method.
... To this end, the process of object localization and object normalization is considered essential for the feature extraction technique [37]. As such, an essential issue in object recognition and computer vision applications is the extraction of significant features from objects [38]. To date, object recognition remains a challenging problem in pattern recognition. ...
Article
Full-text available
Three-dimensional (3D) image and medical image processing, which are considered big data analysis, have attracted significant attention during the last few years. Efficient 3D object recognition techniques could be beneficial to such image and medical image processing. However, to date, most of the proposed methods for 3D object recognition face major challenges of high computational complexity. This is because the computational complexity and execution time increase as the dimensions of the object increase, which is the case in 3D object recognition. Therefore, finding an efficient method that obtains high recognition accuracy with low computational complexity is essential. To this end, this paper presents an efficient method for 3D object recognition with low computational complexity. Specifically, the proposed method uses a fast overlapped technique, which deals with higher-order polynomials and high-dimensional objects. The fast overlapped block-processing algorithm reduces the computational complexity of feature extraction. This paper also exploits Charlier polynomials and their moments along with a support vector machine (SVM). The evaluation of the presented method is carried out using a well-known dataset, the McGill benchmark dataset. In addition, comparisons are performed with existing 3D object recognition methods. The results show that the proposed 3D object recognition approach achieves high recognition rates under different noisy environments. Furthermore, the results show that the presented method has the potential to mitigate noise distortion and outperforms existing methods in terms of computation time under noise-free and various noisy environments.
... x = 0, 1, . . . , N − 1, and p ∈ (0, 1). Similar to DTchPs, recurrence algorithms are utilized to compute the DKraPs coefficients with low computation cost and without numerical error [8], [9], [21], [34], [35]. In this paper, we utilize the recurrence algorithm in the x-direction [9], which is given by: ...
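As a hedged illustration of how such recurrence computations work, the textbook three-term recurrence in the n-direction for the (unnormalized) Krawtchouk polynomials K_n(x; p, N) can be sketched as below. This is not the x-direction algorithm of [9]; for large N, the weighted/normalized form is required to avoid numerical overflow:

```python
import numpy as np
from math import comb

def krawtchouk(n_max, p, N):
    """Table K[n, x] of Krawtchouk polynomials K_n(x; p, N), n = 0..n_max,
    x = 0..N, built with the three-term recurrence in the n-direction:
      p(N-n) K_{n+1}(x) = [p(N-n) + n(1-p) - x] K_n(x) - n(1-p) K_{n-1}(x),
    started from K_0(x) = 1 and K_1(x) = 1 - x/(pN).
    """
    x = np.arange(N + 1, dtype=float)
    K = np.zeros((n_max + 1, N + 1))
    K[0] = 1.0
    if n_max >= 1:
        K[1] = 1.0 - x / (p * N)
    for n in range(1, n_max):
        K[n + 1] = ((p * (N - n) + n * (1 - p) - x) * K[n]
                    - n * (1 - p) * K[n - 1]) / (p * (N - n))
    return K
```

A quick sanity check is orthogonality under the binomial weight w(x) = C(N, x) p^x (1−p)^(N−x), where the squared norm of K_n is ((1−p)/p)^n / C(N, n).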
Article
Full-text available
Image pattern classification is considered a significant step for image and video processing. Although various image pattern algorithms have been proposed so far that achieve adequate classification, achieving higher accuracy while reducing the computation time remains challenging to date. A robust image pattern classification method is essential to obtain the desired accuracy. Such a method must accurately classify image blocks into plain, edge, and texture (PET) classes using an efficient feature extraction mechanism. Moreover, to date, most of the existing studies evaluate their methods on specific orthogonal moments, which limits the understanding of their potential application to the various Discrete Orthogonal Moments (DOMs). Therefore, finding a fast PET classification method that accurately classifies image patterns is crucial. To this end, this paper proposes a new scheme for accurate and fast image pattern classification using an efficient DOM. To reduce the computational complexity of feature extraction, an election mechanism is proposed to reduce the number of processed block patterns. In addition, a support vector machine is used to classify the extracted features for the different block patterns. The proposed scheme is evaluated by comparing the accuracy of the proposed method with the accuracy achieved by state-of-the-art methods. In addition, we compare the performance of the proposed method based on different DOMs to identify the most robust one. The results show that the proposed method achieves the highest classification accuracy compared with the existing methods in all the scenarios considered.
... Furthermore, a number of underwater image enhancement algorithms [24][25][26][27][28] have been proposed, which can be beneficial to the subsequent segmentation. If orthogonal polynomials are used [29,30], the accuracy of segmentation can be further improved. To improve the quality of images, some advanced underwater image sensors have been proposed [31][32][33]. ...
Article
Full-text available
Image segmentation plays an important role in the sensing systems of autonomous underwater vehicles for fishing. By accurately perceiving the marine organisms and surrounding environment, the automatic catch of marine products can be implemented. However, existing segmentation methods cannot precisely segment marine animals due to the low quality and complex shapes of collected marine images in the underwater situation. A novel multi-scale transformer network (MulTNet) is proposed for improving the segmentation accuracy of marine animals, and it simultaneously possesses the merits of a convolutional neural network (CNN) and a transformer. To alleviate the computational burden of the proposed network, a dimensionality reduction CNN module (DRCM) based on progressive downsampling is first designed to fully extract the low-level features, which are then fed into a proposed multi-scale transformer module (MTM). To capture rich contextual information from different subregions and scales, four parallel small-scale encoder layers with different heads are constructed and combined with a large-scale transformer layer to form the multi-scale transformer module. The comparative results demonstrate that MulTNet outperforms the existing advanced image segmentation networks, with MIOU improvements of 0.76% on the marine animal dataset and 0.29% on the ISIC 2018 dataset. Consequently, the proposed method has important application value for segmenting underwater images.
... Similarly, other classes of orthogonal polynomials have also shown promising results in representing higher-order signals in an orthogonal basis for efficient feature extraction. These polynomials include Krawtchouk polynomials (KPs) [14], Charlier polynomials [15], Meixner orthogonal polynomials [16], and Tchebichef polynomials [17], [18]. ...
Article
Full-text available
Compressive sensing allows the reconstruction of original signals from a much smaller number of samples than the Nyquist sampling rate requires. The effectiveness of compressive sensing has motivated researchers to deploy it in a variety of application areas. This paper presents the underlying concepts of compressive sensing as well as previous work in the targeted domains across the various application areas. To identify prospects within the available functional blocks of compressive sensing frameworks, a diverse range of application areas is investigated. The three fundamental elements of a compressive sensing framework (signal sparsity, subsampling, and reconstruction) are thoroughly reviewed in this work, with attention to the key research gaps previously identified by the research community. Similarly, the basic mathematical formulation is used to outline some primary performance evaluation metrics for 1D and 2D compressive sensing.
Article
Discrete Tchebichef polynomials (DTPs) and their moments are effectively utilized in different fields such as video and image coding, pattern recognition, and computer vision due to their remarkable performance. However, when the moment order becomes high, DTPs are prone to exhibit numerical instabilities. In this article, a computationally efficient and numerically stable recurrence algorithm is proposed for high moment orders. The proposed algorithm is based on combining two recurrence algorithms: the recurrence relations in the n- and x-directions. In addition, an adaptive threshold is used to stabilize the generation of the DTP coefficients. The designed algorithm can generate the DTP coefficients for high moment orders and large signal sizes; by large signal size, we mean that the number of samples of the discrete signal is large. To evaluate the performance of the proposed algorithm, a comparison study is performed with state-of-the-art algorithms in terms of computational cost and the capability of generating DTPs with large polynomial sizes and high moment orders. The results show that the proposed algorithm has a remarkably low computation cost and is numerically stable, being 27× faster than the state-of-the-art algorithm.
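For context, the widely used n-direction recurrence for the orthonormal DTPs (one of the two relations the article combines) can be sketched as follows. The coefficient expressions are the standard orthonormal form and are given here as an illustrative assumption, not as the article's combined algorithm:

```python
import numpy as np

def tchebichef(n_max, N):
    """Table t[n, x] of orthonormal discrete Tchebichef polynomials,
    n = 0..n_max, x = 0..N-1, via the three-term recurrence in n:
      t_n(x) = (a1*x + a2) t_{n-1}(x) + a3 t_{n-2}(x).
    """
    x = np.arange(N, dtype=float)
    t = np.zeros((n_max + 1, N))
    t[0] = 1.0 / np.sqrt(N)                                   # constant polynomial
    if n_max >= 1:
        t[1] = (2 * x + 1 - N) * np.sqrt(3.0 / (N * (N**2 - 1)))
    for n in range(2, n_max + 1):
        a1 = (2.0 / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a2 = ((1.0 - N) / n) * np.sqrt((4 * n**2 - 1) / (N**2 - n**2))
        a3 = ((1.0 - n) / n) * np.sqrt((2 * n + 1) / (2 * n - 3.0)) \
             * np.sqrt((N**2 - (n - 1)**2) / (N**2 - n**2))
        t[n] = (a1 * x + a2) * t[n - 1] + a3 * t[n - 2]
    return t
```

For moderate orders the rows are orthonormal (t @ t.T ≈ I); the numerical instability the article addresses appears when n approaches N for large N, which is where a combined n/x-direction algorithm with thresholding becomes necessary.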
Article
Full-text available
The increasing demand for higher data rates motivates the exploration of advanced techniques for future wireless networks. To this end, massive multiple-input multiple-output (mMIMO) is envisioned as the most essential technique to meet this demand. However, the expansion of the number of antennas in mMIMO systems with short coherence time makes the downlink channel estimation (DCE) overhead potentially overwhelming. As such, the number of training sequences (TS) needs to be significantly reduced. However, reducing the number of TSs reduces the mean-squared error (MSE) accuracy significantly, and to date it is not clear to what extent this TS reduction affects the achievable sum-rate performance. Therefore, this paper develops a low-complexity and tractable TS solution for DCE and establishes an analytical framework for the optimum TS. Furthermore, the tradeoff between the achievable sum-rate maximization criterion and the MSE minimization criterion is investigated. This investigation is essential to characterize the optimum TS length and the actual performance of mMIMO systems when the channel exhibits a limited coherence time. To this end, the statistical structure of mMIMO channels is exploited. In addition, this paper utilizes a random matrix theory (RMT) method to characterize the downlink achievable sum rate and MSE in closed form. This paper shows that maximizing the downlink sum-rate criterion is more important than minimizing the MSE of the SINR only, which is typically considered in conventional MIMO systems and/or in time division duplex (TDD) mMIMO systems. The results demonstrate that a feasible downlink achievable sum rate can be achieved in a frequency division duplex (FDD) mMIMO system. This finding is necessary to extend the benefit of mMIMO systems to high frequency bands such as millimeter-wave (mmWave) and terahertz (THz) communications.
Article
Full-text available
The large number of visual applications in multimedia sharing websites and social networks contribute to the increasing amounts of multimedia data in cyberspace. Video data is a rich source of information and considered the most demanding in terms of storage space. With the huge development of digital video production, video management becomes a challenging task. Video content analysis (VCA) aims to provide big data solutions by automating the video management. To this end, shot boundary detection (SBD) is considered an essential step in VCA. It aims to partition the video sequence into shots by detecting shot transitions. High computational cost in transition detection is considered a bottleneck for real-time applications. Thus, in this paper, a balance between detection accuracy and speed for SBD is addressed by presenting a new method for fast video processing. The proposed SBD framework is based on the concept of candidate segment selection with frame active area and separable moments. First, for each frame, the active area is selected such that only the informative content is considered. This leads to a reduction in the computational cost and disturbance factors. Second, for each active area, the moments are computed using orthogonal polynomials. Then, an adaptive threshold and inequality criteria are used to eliminate most of the non-transition frames and preserve candidate segments. For further elimination, two rounds of bisection comparisons are applied. As a result, the computational cost is reduced in the subsequent stages. Finally, machine learning statistics based on the support vector machine is implemented to detect the cut transitions. The enhancement of the proposed fast video processing method over existing methods in terms of computational complexity and accuracy is verified. The average improvements in terms of frame percentage and transition accuracy percentage are 1.63% and 2.05%, respectively. 
Moreover, for the proposed SBD algorithm, a comparative study is performed with state-of-the-art algorithms. The comparison results confirm the superiority of the proposed algorithm in computation time with improvement of over 38%.
Article
Full-text available
With the availability of internet technology and the low cost of digital image sensors, enormous image databases have been created in different kinds of applications. These image databases increase the demand for efficient image retrieval search methods that meet user requirements. Great attention and effort have been devoted to improving content-based image retrieval methods, with a particular focus on reducing the semantic gap between low-level features and human visual perception. Due to the increasing research in this field, this paper surveys, analyses, and compares the current state-of-the-art methodologies of the last six years in the CBIR field. This paper also provides an overview of the CBIR framework, recent low-level feature extraction methods, machine learning algorithms, similarity measures, and a performance evaluation to inspire further research efforts.
Article
Full-text available
Meixner polynomials (MNPs) and their moments are considered significant feature extraction tools because of their salient representation in signal processing and computer vision. However, the existing recurrence algorithm of MNPs exhibits numerical instabilities of coefficients for high-order polynomials. This paper proposes a new recurrence algorithm to compute the coefficients of MNPs for high-order polynomials. The proposed algorithm is based on a derived identity for MNPs that reduces the number of recurrence iterations and the number of computed MNP coefficients. To minimize the numerical errors, a new form of the recurrence algorithm is presented. The proposed algorithm computes ∼50% of the MNP coefficients. A comparison with different state-of-the-art algorithms is performed to evaluate the performance of the proposed recurrence algorithm in terms of computational cost and reconstruction error. In addition, an investigation is performed to find the maximum generated size. The results show that the proposed algorithm remarkably reduces the computational cost and increases the generated size of the MNPs. The proposed algorithm shows an average improvement of ∼77% in terms of computation cost and an improvement of ∼1269% in terms of generated size.
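For reference, the classical three-term recurrence in n for Meixner polynomials M_n(x; β, c) (the textbook relation, not the identity-based algorithm proposed in the paper) can be sketched as:

```python
import numpy as np

def meixner(n_max, beta, c, x_max):
    """Table M[n, x] of Meixner polynomials M_n(x; beta, c),
    n = 0..n_max, x = 0..x_max, via the three-term recurrence
      c(n+beta) M_{n+1}(x) = [(c-1)x + n + (n+beta)c] M_n(x) - n M_{n-1}(x),
    started from M_0(x) = 1 and M_1(x) = 1 + (c-1)x / (c*beta).
    """
    x = np.arange(x_max + 1, dtype=float)
    M = np.zeros((n_max + 1, x_max + 1))
    M[0] = 1.0
    if n_max >= 1:
        M[1] = 1.0 + (c - 1.0) * x / (c * beta)
    for n in range(1, n_max):
        M[n + 1] = (((c - 1.0) * x + n + (n + beta) * c) * M[n]
                    - n * M[n - 1]) / (c * (n + beta))
    return M
```

These polynomials are orthogonal on x = 0, 1, 2, ... under the negative-binomial weight (β)_x c^x / x! with 0 < c < 1, which gives a simple numerical check of the table; in moment applications the weighted (normalized) form is used instead to keep the coefficient magnitudes bounded.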
Conference Paper
Full-text available
Speech is the essential way for humans to interact with each other or with machines. However, it is always contaminated by different types of environmental noise. Therefore, speech enhancement algorithms (SEA) have emerged as a significant approach in the speech processing field to suppress background noise and recover the original speech signal. In this paper, a new efficient two-stage SEA with low distortion is proposed based on the minimum mean square error sense. The estimation of the clean signal takes advantage of Laplacian speech and noise modeling based on the distribution of orthogonal transform (Discrete Krawtchouk-Tchebichef transform) coefficients. The Discrete Krawtchouk-Tchebichef transform (DKTT) has high energy compaction and provides a close match between the Laplacian density and its coefficient distribution, which positively affects the reduction of residual noise without sacrificing speech components. Moreover, a cascade combination of hybrid speech estimators is proposed using two-stage filters (non-linear and linear) in the DKTT domain to lessen the residual noise effectively without distorting the speech signal. The linear estimator is considered a post-processing filter that reinforces noise suppression by regenerating speech components. To this end, the output results have been compared with existing work in terms of different quality and intelligibility measures. The comparative evaluation confirms the superior achievements of the proposed SEA in various noisy environments. The improvement ratios of the presented algorithm in terms of the PESQ measure are 5.8% and 1.8% for white and babble noise environments, respectively. In addition, the improvement ratios in terms of the OVL measure are 15.7% and 9.8% for white and babble noise environments, respectively.
Article
Full-text available
Citation: Abdulhussain, S.H.; Mahmmod, B.M.; Naser, M.A.; Alsabah, M.; Ali, R.; Al-Haddad, S.A.R.
Article
This article introduces the Discrete Path Transform (DPT). Designed to serve as a new tool for handcrafted feature extraction (FE), it improves the elementary analysis provided by signal energy (E) and enhances the humble spectral investigation granted by zero-crossing rates (ZCRs). C/C++ source codes realizing both the direct (DPT) and inverse (IDPT) forms are presented together with a few hypothetical numerical examples and an application involving general signal analysis, artefact removal from biomedical signals, and spoken word recognition (SWR), demonstrating how useful and effective the proposed transform is. Brief comparisons with the Teager Energy Operator (TEO) and a list of important references are also included.
Conference Paper
We demonstrate a new type of image signal transmission based on nonconvex accelerating beams, with an image encoded in its angular spectrum successfully retrieved after multiple self-bending without and with obstruction in the transmission link.