... Thus, quaternions have reduced the number of parameters and operations required for these applications [5]. ...
... Quaternions form an associative but noncommutative algebra over R. Next, this subsection specifies some definitions, properties, and notations of quaternion algebra according to [3], [4], [5], [10], [11], [12], [13], [14], [15], and [16]. ...
... B. SPLIT QUATERNION ALGEBRA Split quaternions likewise form an associative, noncommutative algebra over R. This subsection provides some definitions, properties, and notations for split quaternion algebra according to [3], [4], [5], [10], [11], [12], [13], [14], [15], [16], and [17]. Definition 10: A split quaternion is defined as a vector x in a four-dimensional vector space, which is ...
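The contrast between the two algebras can be made concrete with a short sketch. Below is a minimal Python/NumPy illustration (the function names and the [w, x, y, z] array layout are ours) of both products under the standard basis rules, i^2 = j^2 = k^2 = -1 for quaternions and i^2 = -1, j^2 = k^2 = +1 for split quaternions, together with the corresponding norms, Euclidean in the quaternion case and of Minkowski signature in the split case:

```python
import numpy as np

def hamilton_product(p, q):
    """Quaternion product, components [w, x, y, z], with
    i^2 = j^2 = k^2 = -1 and ij = k, jk = i, ki = j."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def split_product(p, q):
    """Split quaternion product: i^2 = -1, j^2 = k^2 = +1,
    ij = k, jk = -i, ki = j. Only the scalar and i rows change sign."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 + y1*y2 + z1*z2,
                     w1*x2 + x1*w2 - y1*z2 + z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_norm_sq(q):
    """Euclidean norm squared: w^2 + x^2 + y^2 + z^2."""
    return float(np.dot(q, q))

def split_norm_sq(q):
    """Minkowski-signature norm squared: w^2 + x^2 - y^2 - z^2."""
    w, x, y, z = q
    return w*w + x*x - y*y - z*z
```

Note how the two products differ only in the scalar and i rows; this sign pattern is exactly what turns the Euclidean norm of quaternions into the Minkowski-signature norm of split quaternions.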
In this study, two models of multilayer quaternionic feedforward neural networks are presented. Whereas the first model is based on quaternion algebra, the second uses split quaternion algebra. For both quaternionic neural networks, a learning algorithm was derived by adapting the extended Kalman filter. In addition, to analyze the performance of these two neural network models, they were applied to the problem of enhancing low-light color images, which in this work consists of recovering well-illuminated color images from underexposed ones through quaternionic neural network processing. The quaternion neural network enhances images in the RGB color space (Euclidean metric), whereas the split quaternion neural network enhances images in the HSV color space (Minkowski metric). From the results, we observe that the split quaternion neural network using the HSV color model shows advantages that were not previously published and that are not exhibited by the quaternion neural network using the RGB color model. Therefore, this article introduces a novel quaternionic neural network that uses the Minkowski metric for color image processing, which can be advantageously used by practitioners interested in working with the HSV color model.
... The work presented in this thesis focuses on quaternion neural networks trained with the classical and widely spread first-order backpropagation algorithm. Indeed, the recent advances in generalized HR calculus that derive new methods for training quaternion neural networks are not considered, and represent an entirely different topic of research (Dongpo Xu et al. 2015; Xu, L. Zhang, and H. Zhang 2017; Popa 2018). Consequently, this Chapter aims to list, describe and motivate all the milestone works in the literature on standard gradient-based quaternion neural networks (Section 3.2), quaternion Hopfield neural networks (Section 3.2.2), ...
... Chen, D. P. Kingma, et al. 2016; Razavi, Oord, and Vinyals 2019), or text processing (Pu et al. 2016), and sequence-to-sequence networks (Seq2Seq) on neural machine translation (Sutskever, Vinyals, and Le 2014), language modelling (Sriram et al. 2017) or even drug discovery (Z. Xu et al. 2017). ...
... Therefore, (Dongpo Xu et al. 2015) have proposed a generalized version of the HR calculus called the GHR calculus. Based on the GHR framework, new learning algorithms have been developed, such as the resilient backpropagation, the conjugate and scaled conjugate gradient, or the Gauss-Newton algorithms (Popa 2018; Xu, L. Zhang, and H. Zhang 2017). GHR-based algorithms have recently been compared to the traditional backpropagation algorithm on a toy task of time-prediction forecasting by (Popa 2018), and have reached higher accuracies. ...
In recent years, deep learning has become the leading approach to modern artificial intelligence (AI). The important improvement in processing time required for learning AI-based models, alongside the growing amount of available data, has made deep neural networks (DNN) the strongest solution for solving complex real-world problems. However, a major challenge for artificial neural architectures lies in better accounting for the high dimensionality of the data. To alleviate this issue, neural networks (NN) based on complex and hypercomplex algebras have been developed. The natural multidimensionality of the data is elegantly embedded within the complex and hypercomplex neurons composing the model. In particular, quaternion neural networks (QNN) have been proposed to deal with up to four-dimensional features, based on the quaternion representation of rotations and orientations. Unfortunately, and conversely to complex-valued neural networks that are nowadays known as a strong alternative to real-valued neural networks, QNNs suffer from numerous limitations that are carefully addressed in the different parts detailed in this thesis. The thesis consists of three parts that gradually introduce the missing concepts of QNNs, to make them a strong alternative to real-valued NNs. The first part introduces and lists previous findings on quaternion numbers and quaternion neural networks to define the context and strong basics for building elaborate QNNs. The second part introduces state-of-the-art quaternion neural networks for a fair comparison with real-valued neural architectures. More precisely, QNNs were limited by their simple architectures, mostly composed of a single, shallow hidden layer. In this part, we propose to bridge the gap between quaternion and real-valued models by presenting different quaternion architectures. First, basic paradigms such as autoencoders and deep fully-connected neural networks are introduced. Then, more elaborate convolutional and recurrent neural networks are extended to the quaternion domain. Experiments comparing QNNs to equivalent NNs have been conducted on real-world tasks across various domains, including computer vision, spoken language understanding and speech recognition. QNNs increase performance while reducing the number of neural parameters needed compared to real-valued neural networks. Then, QNNs are extended to unconventional settings. In a conventional QNN scenario, input features are manually segmented into three or four components, enabling further quaternion processing. Unfortunately, there is no evidence that such manual segmentation is the representation that best suits the considered task. Moreover, manual segmentation drastically reduces the field of application of QNNs to four-dimensional use cases. Therefore, the third part introduces a supervised and an unsupervised model to extract meaningful and disentangled quaternion input features from any real-valued input signal, enabling the use of QNNs regardless of the dimensionality of the considered task. Conducted experiments on speech recognition and document classification show that the proposed approaches outperform traditional quaternion features.
... Gates are defined in the quaternion space following [5]. Indeed, the gate mechanism implies a component-wise product of the components of the quaternion-valued signal with the gate potential in a split manner [15]. Let f_t, i_t, o_t, c_t, and h_t be the forget, input, and output gates, the cell state, and the hidden state of a QLSTM cell at time-step t. ...
... where σ and α are the sigmoid and tanh quaternion split activations [15, 9]. Bidirectional connections allow (Q)LSTM networks to consider both past and future information at a specific time step, enabling the model to capture a more global context [2]. ...
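To make the gate mechanism above concrete, here is a minimal NumPy sketch of one QLSTM step under the stated conventions: quaternion vectors are stored as arrays of shape (n, 4), quaternion affine maps use the Hamilton product, the gates are squashed by split sigmoid/tanh activations, and gating is a component-wise product. The helper names and the parameter-dictionary layout are our assumptions, not the cited paper's code.

```python
import numpy as np

def qmatvec(W, x):
    """Quaternion matrix-vector product via the Hamilton product.
    W: (m, n, 4) quaternion weights, x: (n, 4) quaternion inputs -> (m, 4)."""
    a, b, c, d = (W[..., t] for t in range(4))      # (m, n) each
    r, i, j, k = (x[..., t] for t in range(4))      # (n,) each
    return np.stack([a @ r - b @ i - c @ j - d @ k,
                     a @ i + b @ r + c @ k - d @ j,
                     a @ j - b @ k + c @ r + d @ i,
                     a @ k + b @ j - c @ i + d @ r], axis=-1)

def qlstm_step(x, h, c, p):
    """One QLSTM time step; p holds quaternion weights W*, U* and biases b*."""
    sig = lambda q: 1.0 / (1.0 + np.exp(-q))        # split sigmoid (per component)
    f = sig(qmatvec(p['Wf'], x) + qmatvec(p['Uf'], h) + p['bf'])      # forget gate
    i = sig(qmatvec(p['Wi'], x) + qmatvec(p['Ui'], h) + p['bi'])      # input gate
    o = sig(qmatvec(p['Wo'], x) + qmatvec(p['Uo'], h) + p['bo'])      # output gate
    g = np.tanh(qmatvec(p['Wc'], x) + qmatvec(p['Uc'], h) + p['bc'])  # candidate
    c = f * c + i * g                               # component-wise gating
    h = o * np.tanh(c)                              # split tanh on the new cell state
    return h, c
```

A bidirectional QLSTM, as mentioned above, simply runs this step forward and backward over the sequence and concatenates the two hidden states.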
Deep neural networks (DNNs), and more precisely recurrent neural networks (RNNs), are at the core of modern automatic speech recognition systems, due to their efficiency in processing input sequences. Recently, it has been shown that different input representations based on multidimensional algebras, such as complex and quaternion numbers, can provide neural networks with a more natural, compact and powerful representation of the input signal, outperforming common real-valued NNs. Indeed, quaternion-valued neural networks (QNNs) better learn both internal dependencies, such as the relation between the Mel-filter-bank value of a specific time frame and its time derivatives, and global dependencies describing the relations that exist between time frames. Nonetheless, QNNs are limited to quaternion-valued input signals, and it is difficult to benefit from this powerful representation with real-valued input data. This paper proposes to tackle this weakness by introducing a real-to-quaternion encoder that allows QNNs to process any one-dimensional input features, such as traditional Mel-filter-banks for automatic speech recognition.
... Based on [17], we propose to extend this mechanism to quaternion numbers. Therefore, the gate action is characterized by an independent modification of each component of the quaternion-valued signal following a component-wise product (i.e. in a split fashion [18]) with the quaternion-valued gate potential. Let f_t, i_t, o_t, c_t, and h_t be the forget, input, and output gates, the cell state, and the hidden state of an LSTM cell at time-step t. ...
... where σ and α are the sigmoid and tanh quaternion split activations [18, 11, 19, 10]. The quaternion weight and bias matrices are initialized following the proposal of [15]. ...
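The initialization mentioned above can be sketched in polar form: sample a magnitude phi, a phase theta, and a random unit pure quaternion u, then set w = phi * (cos(theta) + u sin(theta)). The following is a hedged reconstruction; the uniform magnitude distribution and the Glorot-style variance criterion are our assumptions rather than a verbatim transcription of [15].

```python
import numpy as np

def quaternion_init(n_in, n_out, rng=None):
    """Polar-form quaternion weight init sketch.
    Returns W of shape (n_out, n_in, 4) with components [w, x, y, z]."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = 1.0 / np.sqrt(2.0 * (n_in + n_out))        # variance criterion (assumed)
    phi = rng.uniform(-sigma, sigma, (n_out, n_in))    # magnitude (distribution assumed)
    theta = rng.uniform(-np.pi, np.pi, (n_out, n_in))  # random phase
    u = rng.normal(size=(n_out, n_in, 3))
    u /= np.linalg.norm(u, axis=-1, keepdims=True)     # random unit imaginary axis
    W = np.empty((n_out, n_in, 4))
    W[..., 0] = phi * np.cos(theta)                    # real part
    W[..., 1:] = (phi * np.sin(theta))[..., None] * u  # imaginary parts
    return W
```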
Recurrent neural networks (RNN) are at the core of modern automatic speech recognition (ASR) systems. In particular, long short-term memory (LSTM) recurrent neural networks have achieved state-of-the-art results in many speech recognition tasks, due to their efficient representation of long and short term dependencies in sequences of interdependent features. Nonetheless, internal dependencies within the elements composing multidimensional features are weakly considered by traditional real-valued representations. We propose a novel quaternion long short-term memory (QLSTM) recurrent neural network that takes into account both the external relations between the features composing a sequence, and these internal latent structural dependencies, with the quaternion algebra. QLSTMs are compared to LSTMs on a memory copy task and a realistic application of speech recognition on the Wall Street Journal (WSJ) dataset. QLSTM reaches better performance in the two experiments with up to 2.8 times fewer learning parameters, leading to a more expressive representation of the information.
... where α is a quaternion split activation function (Xu et al., 2017; Tripathi, 2016) defined as: ...
... and α is the quaternion split activation function (Xu et al., 2017) of a quaternion Q defined as: ...
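Both excerpts refer to the same construction: a real-valued nonlinearity f applied independently to each of the four components, α(Q) = f(r) + f(x)i + f(y)j + f(z)k for Q = r + xi + yj + zk. A one-line sketch (the [..., 4] array layout is our convention):

```python
import numpy as np

def split_activation(q, f=np.tanh):
    """Apply a real activation f to each component of q = r + xi + yj + zk,
    stored as an array with a trailing axis of length 4."""
    return f(q)   # elementwise, hence independently on (r, x, y, z)

# example: split tanh of a single quaternion
print(split_activation(np.array([0.5, -1.0, 2.0, 0.0])))
```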
Recurrent neural networks (RNNs) are powerful architectures for modeling sequential data, due to their capability to learn short and long-term dependencies between the basic elements of a sequence. Nonetheless, popular tasks such as speech or image recognition involve multidimensional input features that are characterized by strong internal dependencies between the dimensions of the input vector. We propose a novel quaternion recurrent neural network (QRNN), along with a quaternion long short-term memory neural network (QLSTM), that take into account both the external relations and these internal structural dependencies with the quaternion algebra. Similarly to capsules, quaternions allow the QRNN to code internal dependencies by composing and processing multidimensional features as single entities, while the recurrent operation reveals correlations between the elements composing the sequence. We show that both QRNN and QLSTM achieve better performance than RNN and LSTM in a realistic application of automatic speech recognition. Finally, we show that QRNN and QLSTM reduce by a maximum factor of 3.3x the number of free parameters needed, compared to real-valued RNNs and LSTMs, to reach better results, leading to a more compact representation of the relevant information.
... Thereafter, novel quaternion gradient algorithms using GHR calculus were proposed [107], [108]. Although [109] shows that a QMLP trained with a GHR-based algorithm obtains better prediction gains on the 4D Saito's chaotic signal task than other quaternion-based learning algorithms [18], [110], [57], further experimental analysis on a standard benchmark and a proper comparison with real and complex counterparts are required. To date, there is no published work on the use of GHR calculus in QCNNs. ...
Since their first applications, Convolutional Neural Networks (CNNs) have solved problems that have advanced the state-of-the-art in several domains. CNNs represent information using real numbers. Despite encouraging results, theoretical analysis shows that representations such as hyper-complex numbers can achieve richer representational capacities than real numbers, and that Hamilton products can capture intrinsic interchannel relationships. Moreover, in the last few years, experimental research has shown that Quaternion-Valued CNNs (QCNNs) can achieve similar performance with fewer parameters than their real-valued counterparts. This paper condenses research in the development of QCNNs from its very beginnings. We propose a conceptual organization of current trends and analyze the main building blocks used in the design of QCNN models. Based on this conceptual organization, we propose future directions of research.
... where the matrix of quaternion Hahn moments QM is convolved with the L-th filter of size N × N, S is the number of input channels, W is the quaternion weight matrix of size (S, N, N), a, b are the indices of the output position, c, d are the indices of the input position, b_L is the quaternion bias vector, and ψ is the quaternion split activation function [52] defined as: ...
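A naive sketch of the quaternion convolution just described, with valid padding, stride 1, a Hamilton product between each quaternion weight and quaternion feature, and the split activation ψ taken as tanh for illustration; all names here are ours, and the Hahn-moment matrix QM would play the role of the input X:

```python
import numpy as np

def hprod(w, q):
    """Hamilton product of one quaternion w = [a, b, c, d] with an array q[..., 4]."""
    a, b, c, d = w
    r, i, j, k = (q[..., t] for t in range(4))
    return np.stack([a*r - b*i - c*j - d*k,
                     a*i + b*r + c*k - d*j,
                     a*j - b*k + c*r + d*i,
                     a*k + b*j - c*i + d*r], axis=-1)

def qconv2d(X, W, bias, act=np.tanh):
    """X: (S, H, Wd, 4) quaternion input channels, W: (L, S, N, N, 4) quaternion
    kernels, bias: (L, 4). Returns (L, H-N+1, Wd-N+1, 4) after split activation."""
    L, S, N = W.shape[0], W.shape[1], W.shape[2]
    Ho, Wo = X.shape[1] - N + 1, X.shape[2] - N + 1
    out = np.broadcast_to(bias[:, None, None, :], (L, Ho, Wo, 4)).copy()
    for l in range(L):
        for s in range(S):
            for u in range(N):
                for v in range(N):
                    # accumulate the Hamilton product of kernel entry and patch
                    out[l] += hprod(W[l, s, u, v], X[s, u:u+Ho, v:v+Wo, :])
    return act(out)    # split activation psi (tanh assumed)
```

Practical implementations vectorize these loops, but the quadruple sum mirrors the layer equation above.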
Color image recognition has recently attracted more researchers' attention. Many methods based on quaternions have been developed to improve classification accuracy. Some approaches have recently combined quaternions with convolutional neural networks (CNN). Despite the results obtained, these approaches have some weaknesses, such as computational complexity. In fact, the large size of the input color images necessitates a high number of layers and parameters during the learning process, which can introduce calculation errors and hence influence the recognition rate. In this paper, a new architecture called quaternion discrete orthogonal Hahn moments convolutional neural network (QHMCNN) for color image classification and face recognition is proposed to reduce the computational complexity of CNNs while improving the classification rate. The quaternion Hahn moments are used to extract pertinent and compact features from images, which are then introduced into a quaternion convolutional neural network. Experimental simulations conducted on various databases demonstrate the performance of the proposed QHMCNN architecture against other relevant state-of-the-art methods, as well as its robustness under different noise conditions.
... Similar definitions hold for any real-valued activation function, and many QNNs utilize these split activation functions even when quaternionic functions, such as the quaternion-valued hyperbolic tangent function, are available. Research has indicated that true quaternionic activation functions can improve performance over split activation functions [18], but they require special considerations since their analyticity can only be defined over a localized domain, and the composition of two locally analytic quaternion functions is generally not locally analytic [19], providing limited utility in deep neural networks. Additionally, many complex and quaternion-valued elementary transcendental functions, including the hyperbolic tangent, are unbounded and contain singularities [20] that make neural network training difficult. ...
In recent years, real-valued neural networks have demonstrated promising, and often striking, results across a broad range of domains. This has driven a surge of applications utilizing high-dimensional datasets. While many techniques exist to alleviate issues of high-dimensionality, they all induce a cost in terms of network size or computational runtime. This work examines the use of quaternions, a form of hypercomplex numbers, in neural networks. The constructed networks demonstrate the ability of quaternions to encode high-dimensional data in an efficient neural network structure, showing that hypercomplex neural networks reduce the number of total trainable parameters compared to their real-valued equivalents. Finally, this work introduces a novel training algorithm using a meta-heuristic approach that bypasses the need for analytic quaternion loss or activation functions. This algorithm allows for a broader range of activation functions over current quaternion networks and presents a proof-of-concept for future work.
... is the output of the (k − 1)-th quaternion hidden layer, 1 ≤ k ≤ M, M − 1 is the number of quaternion hidden layers, W^(H_k) is the quaternion weight matrix connecting the previous layer to the next layer, b^(H_k) is the quaternion bias vector, Y^(H_0) = X is the input vector, Y^(H_M) = O is the output layer, and H_k(.) is the quaternion split activation function applied in quaternion hidden layer H_k, defined by [46], where (.) corresponds to a conventional real-valued activation function. Tanh and Softsign are adopted in this framework. ...
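As a sketch of this layer equation, each hidden layer applies a quaternion affine map followed by a split activation, with Tanh or Softsign as the underlying real function. The Hamilton-product matrix-vector helper is repeated from the QLSTM sketch above so this block stays self-contained; the names are ours.

```python
import numpy as np

def softsign(x):
    """Softsign split activation: x / (1 + |x|), applied component-wise."""
    return x / (1.0 + np.abs(x))

def qmatvec(W, x):
    """Hamilton-product matrix-vector product; W: (m, n, 4), x: (n, 4) -> (m, 4)."""
    a, b, c, d = (W[..., t] for t in range(4))
    r, i, j, k = (x[..., t] for t in range(4))
    return np.stack([a @ r - b @ i - c @ j - d @ k,
                     a @ i + b @ r + c @ k - d @ j,
                     a @ j - b @ k + c @ r + d @ i,
                     a @ k + b @ j - c @ i + d @ r], axis=-1)

def qmlp_forward(x, layers, act=softsign):
    """Forward pass Y_{H_0} = X -> Y_{H_M} = O through quaternion hidden layers.
    layers: hypothetical list of (W, b) with W: (m, n, 4) and b: (m, 4)."""
    y = x
    for W, b in layers:
        y = act(qmatvec(W, y) + b)   # split activation of the quaternion affine map
    return y
```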
In recent years, with the rapid development of multimedia technologies, color face recognition has attracted more attention in various areas related to computer vision. Extracting pertinent features from color images is a challenging problem due to the lack of efficient descriptors. Many methods have been reported in the literature. However, these methods suffer from some drawbacks: insufficient color information and time-consuming feature extraction. In this paper, a new model, quaternion discrete orthogonal moments neural networks (QDOMNN), is proposed to improve the accuracy of color face recognition. The quaternion representation is used to represent a color image in a holistic manner instead of as monochromatic intensity information. Furthermore, discrete orthogonal moments are used to extract compact and pertinent features from the quaternion representation of the image. The main purpose of using quaternion discrete orthogonal moments is to reduce the number of parameters in the input vector of the model, consequently decreasing the computational time of the training process while improving the classification rate. The performance of our model is evaluated on several face databases: we obtain 100% classification accuracy on faces94, grimace and GT, 91.93% on FEI, more than 94.72% on faces95 and more than 98.01% on faces96. Experimental results show that our model (QDOMNN) outperforms other existing methods in terms of classification rate and robustness in noisy conditions.
... Some optimisation methods typically used in the literature [9, 34]. All the previous algorithms are based on their real equivalents, but other quaternion-based methods have recently been developed [35]. Those methods define new quaternion operations with interesting properties, which allow one to define the product and the chain rule, as well as gradients and the Hessian matrix. ...
Machine Learning has recently emerged as a new paradigm for processing all types of information. In particular, Artificial Intelligence is attractive to corporations and research institutions as it provides innovative solutions for unsolved problems, and it enjoys great popularity among the general public. However, despite the fact that Machine Learning offers huge opportunities for the IT industry, Artificial Intelligence technology is still in its infancy, with many issues to be addressed. In this paper, we present a survey of quaternion applications in Neural Networks, one of the most promising research lines in artificial vision, which also has great potential in several other topics. The aim of this paper is to provide a better understanding of the design challenges of Quaternion Neural Networks and to identify important research directions in this increasingly important area.
... Therefore, Xu et al. (2015) have proposed a generalized version of the HR calculus called the GHR calculus. Based on the GHR framework, new learning algorithms have been developed, such as the resilient backpropagation, the conjugate and scaled conjugate gradient, or the Gauss-Newton algorithms (Popa 2018; Xu et al. 2017). GHR-based algorithms have been compared to the traditional backpropagation algorithm on a toy task of time-prediction forecasting by Popa (2018), and have reached higher accuracies. ...
Quaternion neural networks have recently received increasing interest due to noticeable improvements over real-valued neural networks on real-world tasks such as image, speech and signal processing. The extension of quaternion numbers to neural architectures has reached state-of-the-art performance with a reduction in the number of neural parameters. This survey provides a review of past and recent research on quaternion neural networks and their applications in different domains. The paper details the methods, algorithms and applications of each quaternion-valued neural network proposed.
... Similarly, a QCNN cannot work without an activation function. Many activation functions have been proposed for quaternions [11]; the split activation applied in the proposed model (the method is mentioned in [12], [13]) is defined as follows: ...
The convolutional neural network is widely popular for solving the problems of color image feature extraction. However, in the general network, the interrelationship of the color image channels is neglected. Therefore, a novel quaternion convolutional neural network (QCNN) is proposed in this paper, which always treats color triples as a whole to avoid information loss. The original quaternion convolution operation is presented and constructed to fully mix the information of color channels. The quaternion batch normalization and pooling operations are derived and designed in quaternion domain to further ensure the integrity of color information. Meanwhile, the knowledge of the attention mechanism is incorporated to boost the performance of the proposed QCNN. Experiments demonstrate that the proposed model is more efficient than the traditional convolutional neural network and another QCNN with the same structure, and has better performance in color image classification and color image forensics.
... where α is a quaternion split activation function [41] defined as: ...
Neural network architectures are at the core of powerful automatic speech recognition systems (ASR). However, while recent research focuses on novel model architectures, the acoustic input features remain almost unchanged. Traditional ASR systems rely on multidimensional acoustic features, such as the Mel filter bank energies alongside the first and second order derivatives, to characterize the time-frames that compose the signal sequence. Considering that these components describe three different views of the same element, neural networks have to learn both the internal relations that exist within these features, and the external or global dependencies that exist between the time-frames. Quaternion-valued neural networks (QNN) have recently received significant interest from researchers for processing and learning such relations in multidimensional spaces. Indeed, quaternion numbers and QNNs have shown their efficiency in processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with up to four times fewer learning parameters than real-valued models. We propose to investigate modern quaternion-valued models such as convolutional and recurrent quaternion neural networks in the context of speech recognition with the TIMIT dataset. The experiments show that QNNs always outperform real-valued equivalent models with far fewer free parameters, leading to a more efficient, compact, and expressive representation of the relevant information.
... Therefore, a traditional 1D convolutional layer, with a kernel that contains K × K feature maps, is split into 4 parts: the first part equal to r, the second one to xi, the third one to yj and the last one to zk of a quaternion Q = r1 + xi + yj + zk. The backpropagation is ensured by differentiable cost and activation functions that have already been investigated for quaternions in [15] and [16]. As a result, the so-called split approach [8,6,9,17] is used as a quaternion equivalence of real-valued activation functions. ...
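The four-part kernel split has an equivalent real-matrix picture that is often used to implement quaternion layers with ordinary real operations: left Hamilton multiplication by Q = r1 + xi + yj + zk equals multiplication of the stacked component vector [r; x; y; z] by a structured block matrix. A sketch, with the block arrangement following directly from the Hamilton product rules (the function name is ours):

```python
import numpy as np

def left_hamilton_block(A, B, C, D):
    """Real block matrix performing left Hamilton multiplication by
    W = A + Bi + Cj + Dk on a stacked component vector [r; x; y; z]."""
    return np.block([[ A, -B, -C, -D],
                     [ B,  A, -D,  C],
                     [ C,  D,  A, -B],
                     [ D, -C,  B,  A]])

# sanity check against the Hamilton product for scalar (1x1) blocks
a, b, c, d = 1.0, 2.0, -0.5, 0.3
M = left_hamilton_block(*(np.array([[v]]) for v in (a, b, c, d)))
q = np.array([0.7, -1.2, 0.4, 2.0])
print(M @ q)   # equals the Hamilton product (a+bi+cj+dk)(r+xi+yj+zk)
```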
Convolutional neural networks (CNN) have recently achieved state-of-the-art results in various applications. In the case of image recognition, an ideal model has to learn, independently of the training data, both the local dependencies between the three components (R, G, B) of a pixel and the global relations describing edges or shapes, making it efficient with small or heterogeneous datasets. Quaternion-valued convolutional neural networks (QCNN) address this problem by introducing multidimensional algebra to CNNs. This paper proposes to explore the fundamental reason for the success of QCNNs over CNNs by investigating the impact of the Hamilton product on a color image reconstruction task performed from gray-scale-only training. By learning both internal and external relations independently, and with fewer parameters than a real-valued convolutional encoder-decoder (CAE), quaternion convolutional encoder-decoders (QCAE) perfectly reconstructed unseen color images while the CAE produced worse, gray-scale versions.
... Nonetheless, an important condition for performing backpropagation in real, complex or quaternion neural networks is to have cost and activation functions that are differentiable with respect to each part of the real, complex or quaternion number. Many activation functions for quaternions have been investigated [15], and a quaternion backpropagation algorithm has been proposed in [16]. Consequently, the split activation function [17, 18] is applied to every layer and is defined as follows: ...
Recently, the connectionist temporal classification (CTC) model coupled with recurrent (RNN) or convolutional neural networks (CNN) has made it easier to train speech recognition systems in an end-to-end fashion. However, in real-valued models, time frame components such as mel-filter-bank energies and the cepstral coefficients obtained from them, together with their first and second order derivatives, are processed as individual elements, while a natural alternative is to process such components as composed entities. We propose to group such elements in the form of quaternions and to process these quaternions using the established quaternion algebra. Quaternion numbers and quaternion neural networks have shown their efficiency in processing multidimensional inputs as entities, encoding internal dependencies, and solving many tasks with fewer learning parameters than real-valued models. This paper proposes to integrate multiple feature views in a quaternion-valued convolutional neural network (QCNN), to be used for sequence-to-sequence mapping with the CTC model. Promising results are reported using simple QCNNs in phoneme recognition experiments with the TIMIT corpus. More precisely, QCNNs obtain a lower phoneme error rate (PER) with fewer learning parameters than a competing model based on real-valued CNNs.
The neurocomputing community has focused much interest on quaternionic-valued neural networks (QVNNs) due to their natural extension to quaternionic signals, their learning of internal and spatial relationships between features, and their remarkable improvements over real-valued neural networks (RVNNs) and complex-valued neural networks (CVNNs). The excellent learning capability of QVNNs has inspired researchers working on various applications in image processing, signal processing, computer vision, and robotic control systems. Apart from these applications, many researchers have proposed new structures of quaternionic neurons and extended the architecture of QVNNs for specific applications containing high-dimensional information. These networks have demonstrated their performance with a smaller number of parameters than conventional RVNNs. This paper focuses on past and recent studies of simple and deep QVNN architectures and their applications. It also provides future directions for prospective researchers to establish new architectures and to extend existing architectures of high-dimensional neural networks with the help of quaternions, octonions, or sedenions for appropriate applications.
This paper introduces a new architecture named QTMCNN for color image classification, based on quaternion discrete Tchebichef moments (QTM) and a quaternion convolutional neural network (QCNN), to improve classification accuracy and reduce the time of the learning process. A color image is represented as a single quaternion matrix where each color pixel is represented as a pure quaternion. From this representation, quaternion Tchebichef moments are used to generate a matrix of low-dimensional significant features that is fed to the QCNN as the input layer instead of the color image. The proposed architecture tremendously reduces the number of parameters and consequently decreases the computational complexity while improving classification rates. Experiments are conducted on the Coil-100 and ETH-80 datasets to demonstrate the performance of the proposed architecture. The obtained results outperform other approaches in terms of classification accuracy and GPU elapsed time. Keywords: Quaternion Tchebichef moments; Quaternion convolutional neural network; Classification; Color image; Complexity
Over the past few decades, machine learning has attracted considerable attention from researchers due to its many applications in different fields such as image processing and speech processing. Automatic Speech Recognition (ASR) is an application of speech processing that has achieved tremendous results through the use of recurrent neural networks (RNN). Long short-term memory (LSTM) is a special type of recurrent neural network designed to avoid the problem of long-term dependencies. In this paper, multidimensional octonion algebra is used to process multidimensional input entities efficiently; compared to real-valued models, octonion numbers and octonion neural networks solve many tasks with fewer learning parameters. We propose a new octonion-valued long short-term memory (OVLSTM) network to efficiently represent long-term dependencies among features in speech sequence prediction. Experiments are conducted on the TIMIT dataset for speech recognition, and results are compared with QLSTMs and LSTMs; the OVLSTM reaches better results with fewer learning parameters.
This document introduces a new class of adaptive filters, namely Geometric-Algebra Adaptive Filters (GAAFs). These are generated by formulating the underlying minimization problem (a least-squares cost function) from the perspective of Geometric Algebra (GA), a comprehensive mathematical language well suited to the description of geometric transformations. Also, differently from the usual linear-algebra approach, Geometric Calculus (the extension of Geometric Algebra to differential calculus) allows the same derivation techniques to be applied regardless of the type (subalgebra) of the data, i.e., real numbers, complex numbers, quaternions, etc. Exploiting these characteristics (among others), a general least-squares cost function is posed, from which the GAAFs are designed. They provide a generalization of regular adaptive filters for any subalgebra of GA. From the obtained update rule, it is shown how to recover the following least-mean-squares (LMS) adaptive filter variants: real-entries LMS, complex LMS, and quaternion LMS. Mean-square analysis and simulations in a system identification scenario are provided, showing almost perfect agreement for different levels of measurement noise.
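As an illustration of the quaternion special case recovered here, the following is a minimal quaternion LMS sketch for system identification. The update w_i <- w_i + mu * e (x) conj(x_i) is one common form of quaternion LMS; conjugation and scaling conventions vary across derivations, and the names, tap count, and step size are assumptions for illustration only.

```python
import numpy as np

def hprod(p, q):
    """Hamilton product of quaternions stored as arrays [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def conj(q):
    """Quaternion conjugate: negate the imaginary parts."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def qlms(x, d, taps=4, mu=0.01):
    """Quaternion LMS: x, d are (T, 4) input/desired quaternion sequences."""
    w = np.zeros((taps, 4))
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]          # most recent sample first
        y = sum(hprod(w[i], window[i]) for i in range(taps))
        e = d[n] - y                                   # a priori estimation error
        for i in range(taps):
            w[i] = w[i] + mu * hprod(e, conj(window[i]))
    return w
```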