FIGURE 3 - available via license: Creative Commons Attribution 4.0 International
Multiple quality recovery network (MQRNet) with classifiers. MQRNet learns its network parameters from the classification losses of multiple classifiers. After training, MQRNet generates recovered images that can serve as inputs even to classifiers that were not used during training.
Source publication
This paper proposes a quality recovery network (QRNet) that recovers image quality from distorted images and improves classification accuracy by using the recovered images as classifier inputs; the network is optimized with both an image quality loss and a classification loss. In certain image classification tasks, classifiers b...
Context in source publication
Context 1
... learning the parameters of QRNet with the losses of multiple classifiers, we expect the proposed method to perform well even for a classifier that was not used during training. Figure 3 depicts the framework for training with multiple classifiers. In this case, QRNet is trained with multiple classifiers, C1 and C2. ...
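The multi-classifier training objective described above can be sketched as a sum of an image-quality term and the classification losses of every classifier used during training. This is a minimal illustrative sketch, not the paper's implementation; the function names, the MSE quality term, the weighting factor `alpha`, and the toy classifiers `c1`/`c2` are all assumptions.

```python
import numpy as np

def cross_entropy(probs, label):
    # Classification loss for one classifier's softmax output.
    return -np.log(probs[label] + 1e-12)

def mqrnet_loss(recovered, clean, classifier_probs, label, alpha=1.0):
    """Combined objective: an image-quality loss (here, MSE between the
    recovered image and the clean image) plus the classification losses
    of all classifiers used during training."""
    quality = np.mean((recovered - clean) ** 2)
    classification = sum(cross_entropy(p, label) for p in classifier_probs)
    return quality + alpha * classification

# Toy example: a 4-pixel "image" and two classifiers C1, C2 over 3 classes.
clean = np.array([0.2, 0.4, 0.6, 0.8])
recovered = np.array([0.25, 0.35, 0.6, 0.75])
c1 = np.array([0.7, 0.2, 0.1])   # softmax output of classifier C1
c2 = np.array([0.6, 0.3, 0.1])   # softmax output of classifier C2
loss = mqrnet_loss(recovered, clean, [c1, c2], label=0)
```

Training against several classifiers at once is what is expected to make the recovered images useful even for classifiers absent from training.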
Similar publications
Image inpainting, the task of recovering missing information in an image, is of great practical interest to users who wish to restore damaged photographs or remove unwanted parts of images. Advances in convolutional neural networks and adversarial networks have produced large gains on this problem. Nevertheless, inpainting algorithms may fail to re...
Remarkable success has been achieved by deep convolutional neural network (CNN) models in semantic image segmentation. However, most segmentation models are based on classification networks, which tend to learn image-level features and lose abundant spatial information through repeated pooling and downsampling operations, and the CNN-based methods are no...
Prostate cancer is the most prevalent cancer among men in Western countries, with 1.1 million new diagnoses every year. The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To potentially assist pathologists, deep-learning-based cancer detection systems have been developed. Many of the state-of-the...
In traditional transportation-system construction, traffic-related information is collected by dedicated sensor networks, which are not only limited in coverage but also costly. With the emergence of the concepts of 'social sensors' and 'social transportation', Sparse Mobile Crowdsensing (MCS) has been proposed to coll...
Citations
... Modern achievements in the field of artificial intelligence are based on the solution of certain cognitive tasks in such areas as the search for and detection of hidden patterns, pattern recognition, understanding the meaning of natural language statements, decision support, generation of realistic images, etc. [Dushkin, 2019]. For some of the listed tasks, solutions based on artificial intelligence methods have already surpassed the human level in accuracy and efficiency [Takagi et al., 2019]. ...
This paper presents a model of hierarchical associative memory that can serve as a basis for building general-purpose artificial cognitive agents. With the help of this model, one of the most important problems of modern machine learning and artificial intelligence can be addressed: the ability of a cognitive agent to use "life experience" to process the context of the situation in which it has been, is, and may yet be. The model is applicable to artificial cognitive agents operating both in specially designed virtual worlds and in objective reality. Using hierarchical associative memory as the long-term memory of artificial cognitive agents will allow them to navigate effectively both the general knowledge accumulated by humankind and their own life experience. The novelty of the presented work lies in the author's approach to constructing context-dependent artificial cognitive agents on an interdisciplinary basis, drawing in particular on achievements in artificial intelligence, cognitive science, neurophysiology, psychology, and sociology. The relevance of this work rests on the keen interest of the scientific community and the high social demand for general-level artificial intelligence systems. Associative hierarchical memory, based on an approach similar to the hypercolumns of the human cerebral cortex, is becoming one of the important components of a general-level artificial intelligent agent. The article will be of interest to all researchers working on building artificial cognitive agents and in related fields.
... To recover image quality in distorted images, we adopt the training procedure proposed for a training quality recovery network [41]. We now briefly describe this training procedure. ...
Adversarial examples can be used to exploit vulnerabilities in neural networks and threaten their sensitive applications. Adversarial attacks are evolving daily, and are rapidly rendering defense methods that assume specific attacks obsolete. This paper proposes a new defense method that does not assume a specific adversarial attack, and shows that it can be used efficiently to protect a network from a variety of adversarial attacks. Adversarial perturbations are small values; consequently, an image quality recovery method is considered to be an effective way to remove adversarial perturbations because such a method often includes a smoothing effect. The proposed method, called the denoising-based perturbation removal network (DPRNet), aims to eliminate perturbations generated by an adversarial attack for image classification tasks. DPRNet is an encoder–decoder network that excludes adversarial images during training and can reconstruct a correct image from an adversarial image. To optimize DPRNet’s parameters for eliminating adversarial perturbations, we also propose a new perturbation removal loss (PRloss) metric, which consists of a reconstructed loss and a Kullback–Leibler divergence loss that expresses the class probability distribution difference between an original image and a reconstructed image. To remove adversarial perturbation, the proposed network is trained using various types of distorted images considering the proposed PRloss metric. Thus, DPRNet eliminates image perturbations, allowing the images to be classified easily. We evaluate the proposed method using the MNIST, CIFAR-10, SVHN, and Caltech 101 datasets and show that the proposed defense method invalidates 99.8%, 95.1%, 98.7%, and 96.0% of the adversarial images that are generated by several adversarial attacks in the MNIST, CIFAR-10, SVHN, and Caltech 101 datasets, respectively.
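The PRloss metric described above combines a reconstruction term with a Kullback–Leibler divergence between the class probability distributions of the original and reconstructed images. A minimal sketch follows, assuming an MSE reconstruction term and a weighting factor `lam`; these names and choices are illustrative, not taken from the paper.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between two class-probability distributions.
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def pr_loss(reconstructed, original, probs_orig, probs_recon, lam=1.0):
    """Perturbation removal loss (sketch): a reconstruction term (MSE)
    plus a KL term measuring how far the reconstructed image's class
    distribution drifts from the original image's distribution."""
    recon = np.mean((reconstructed - original) ** 2)
    return recon + lam * kl_divergence(probs_orig, probs_recon)

# Toy check: a perfect reconstruction with identical class
# distributions incurs (near-)zero loss.
x = np.array([0.1, 0.2, 0.3])
p = np.array([0.5, 0.3, 0.2])
identical = pr_loss(x, x, p, p)
```

The KL term is what ties image recovery to classification: it penalizes reconstructions that look clean pixel-wise but would still flip the classifier's decision.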
... Hence, there have been many related studies on data repair in image processing. Specifically, with the development of neural networks and deep learning in recent years, many algorithms based on these approaches have achieved good results in image reconstruction and denoising [9]-[15]. However, these deep learning algorithms often need to learn prior knowledge about the signals from a large amount of data, which increases the cost of the algorithm. ...
In recent years, the use of wireless sensor networks has become increasingly widespread. Because of the instability of wireless networks, packet loss occasionally occurs. To reduce the impact of packet loss on data integrity, we take advantage of the deep neural network’s excellent ability to understand natural data and propose a data repair method based on a deep convolutional neural network with an encoder–decoder architecture. Compared with common interpolation algorithms and compressed sensing algorithms, this method obtains better repair results, is suitable for a wider range of applications, and does not need prior knowledge. This method adopts measures such as preparing training set data as well as the design and optimization of loss functions to achieve faster convergence speed, higher repair accuracy, and better stability. To fairly compare the repair performance of different signals, the mean squared error, relative peak-to-peak average error, and relative peak-to-peak max error are adopted to quantitatively evaluate the repair results of different signals. Comparative experiments prove that this method has better data recovery performance than traditional interpolation and compressed sensing algorithms.
... The second classifier studied was the CNN classifier, based on multi-layer neural networks [17]. SVM and CNN were chosen as classifiers because of their excellent performance on various classification problems reported in the recent literature [18][19][20]. ...
This paper proposes to treat the jammer classification problem in the Global Navigation Satellite System bands as a black-and-white image classification problem, based on a time-frequency analysis and image mapping of a jammed signal. The paper also proposes to apply machine learning approaches in order to sort the received signal into six classes, namely five classes when the jammer is present with different jammer types and one class where the jammer is absent. The algorithms based on support vector machines show up to 94.90% accuracy in classification, and the algorithms based on convolutional neural networks show up to 91.36% accuracy in classification. The training and test databases generated for these tests are also provided in open access.
In image classification, a deep neural network (DNN) that is trained on undistorted images constitutes an effective decision boundary. Unfortunately, this boundary does not support distorted images, such as noisy or blurry ones, leading to accuracy drop-off. As a simple approach for classifying distorted images as well as undistorted ones, previous methods have optimized the trained DNN again on both kinds of images. However, in these methods, the decision boundary may become overly complicated during optimization because there is no regularization of the decision boundary. Consequently, this decision boundary limits efficient optimization. In this paper, we study a simple yet effective decision boundary for distorted image classification through the use of a novel loss, called a "neural activation pattern matching (NAPM) loss". The NAPM loss is based on recent findings that the decision boundary is a piecewise linear function, where each linear segment is constructed from a neural activation pattern in the DNN when an image is fed to it. The NAPM loss extracts the neural activation patterns when the distorted image and its undistorted version are fed to the DNN and then matches them with each other via the sigmoid cross-entropy. Therefore, it constrains the DNN to classify the distorted image and its undistorted version by the same linear segment. As a result, our loss accelerates efficient optimization by preventing the decision boundary from becoming overly complicated. Our experiments demonstrate that our loss increases the accuracy of the previous methods in all conditions evaluated.
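The NAPM loss described above matches the neural activation patterns of a distorted image and its undistorted version via a sigmoid cross-entropy. A minimal sketch under stated assumptions: the "activation pattern" is taken to be the binary ReLU on/off mask of pre-activations, with the clean image's pattern as the target; the function and variable names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def napm_loss(pre_acts_distorted, pre_acts_clean):
    """Neural activation pattern matching (sketch): the clean image's
    ReLU on/off pattern (1 where the pre-activation is positive) is the
    target, and the distorted image's pre-activations are pushed toward
    the same pattern with a sigmoid cross-entropy."""
    target = (pre_acts_clean > 0).astype(float)   # binary activation pattern
    p = sigmoid(pre_acts_distorted)               # probability a unit is "on"
    eps = 1e-12
    return float(-np.mean(target * np.log(p + eps)
                          + (1 - target) * np.log(1 - p + eps)))

# Toy check: matching pre-activations yield a small loss; sign-flipped
# (mismatched) patterns yield a much larger one.
acts = np.array([5.0, -5.0, 3.0])
matched = napm_loss(acts, acts)
mismatched = napm_loss(-acts, acts)
```

Because each linear segment of the decision boundary corresponds to one activation pattern, driving both images toward the same pattern constrains them to be classified by the same segment, which is the regularization effect the abstract describes.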