Masafumi HAGIWARA’s research while affiliated with Keio University and other places


Publications (3)


Defense Against Adversarial Examples Using Quality Recovery for Image Classification
  • Article

August 2020 · 43 Reads · 1 Citation

Journal of Japan Society for Fuzzy Theory and Intelligent Informatics

Motohiro TAKAGI · Masafumi HAGIWARA

Adversarial examples can be used to exploit vulnerabilities in neural networks and threaten their sensitive applications. Adversarial attacks evolve constantly and rapidly render obsolete any defense method that assumes a specific attack. This paper proposes a new defense method that does not assume a specific adversarial attack, and shows that it can be used efficiently to protect a network from a variety of adversarial attacks. Adversarial perturbations are small in magnitude; consequently, an image quality recovery method is considered to be an effective way to remove adversarial perturbations, because such a method often includes a smoothing effect. The proposed method, called the denoising-based perturbation removal network (DPRNet), aims to eliminate perturbations generated by an adversarial attack for image classification tasks. DPRNet is an encoder–decoder network that excludes adversarial images during training and can reconstruct a correct image from an adversarial image. To optimize DPRNet’s parameters for eliminating adversarial perturbations, we also propose a new perturbation removal loss (PRloss) metric, which consists of a reconstruction loss and a Kullback–Leibler divergence loss that expresses the difference between the class probability distributions of an original image and a reconstructed image. To remove adversarial perturbations, the proposed network is trained using various types of distorted images with the proposed PRloss metric. Thus, DPRNet eliminates image perturbations, allowing the images to be classified easily. We evaluate the proposed method using the MNIST, CIFAR-10, SVHN, and Caltech 101 datasets and show that the proposed defense method invalidates 99.8%, 95.1%, 98.7%, and 96.0% of the adversarial images generated by several adversarial attacks on the MNIST, CIFAR-10, SVHN, and Caltech 101 datasets, respectively.
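The PRloss described in the abstract (a reconstruction term plus a KL-divergence term between the classifier's class distributions for the original and reconstructed images) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed details: MSE as the reconstruction term, a weighting factor `alpha`, and raw classifier logits passed in directly. Function names and the weighting are assumptions, not the paper's notation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def perturbation_removal_loss(x_orig, x_recon, logits_orig, logits_recon, alpha=1.0):
    # Reconstruction term: mean squared error between images
    recon = np.mean((x_recon - x_orig) ** 2)
    # KL divergence KL(p_orig || p_recon) between class distributions,
    # averaged over the batch
    p = softmax(logits_orig)
    q = softmax(logits_recon)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
    return recon + alpha * kl
```

When the reconstructed image and its logits match the originals exactly, both terms vanish and the loss is zero; any residual perturbation increases either the pixel-space term or the distribution term.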


Discriminative Convolutional Neural Network for Image Quality Assessment with Fixed Convolution Filters

November 2019 · 39 Reads

IEICE Transactions on Information and Systems

Conventional image quality assessment (IQA) methods require the original (reference) image for evaluation. Recently, however, IQA methods based on machine learning have been proposed; these methods automatically learn the relationship between a distorted image and its quality. In this paper, we propose a deep-learning-based IQA method that does not require a reference image. We show that a convolutional neural network with distortion prediction and fixed convolution filters improves IQA accuracy.
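The idea of fixing the first-layer convolution filters can be illustrated with a small sketch: hand-crafted, non-learned kernels extract edge and high-frequency responses, and only the layers on top of these feature maps are trained. The specific filter bank (Sobel and Laplacian here) and all function names are assumptions, since the abstract does not state which fixed filters the paper uses.

```python
import numpy as np

# Fixed (non-learned) first-layer filters; a Sobel edge filter and a
# Laplacian high-frequency filter are used here purely for illustration.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv2d_valid(img, kernel):
    # Plain 'valid' 2-D cross-correlation (no padding, stride 1)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def fixed_filter_features(img):
    # Feature maps from the fixed filters; in the network, learned layers
    # on top of these maps would predict distortion type and quality score.
    return np.stack([conv2d_valid(img, SOBEL_X), conv2d_valid(img, LAPLACIAN)])
```

Because the filters are fixed, the feature extraction stage adds no trainable parameters; a distortion-free (constant) region produces zero response in every map.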


FIGURE 3. Multiple quality recovery network (MQRNet) with classifiers. MQRNet learns the network parameters with the classification losses of multiple classifiers. After learning, MQRNet generates the recovery images for the other classifiers that are not used during training.
FIGURE 6. Performance of Multiple QRNet at each distortion level on the Caltech101 dataset.
FIGURE 7. Relationship between the image quality metrics and QRNet performance at each distortion level on the Caltech101 dataset: (a) Relationship between the PSNR and accuracy for Gaussian noise images, (b) Relationship between the SSIM and accuracy for Gaussian noise images, (c) Relationship between the PSNR and accuracy for Gaussian blurred images, and (d) Relationship between the SSIM and accuracy for Gaussian blurred images.
FIGURE 8. Samples of the quality recovered images from the distorted images. The quality recovered images are reconstructed using the proposed network, QRNet.
Quality Recovery for Image Recognition
  • Article
  • Full-text available

August 2019 · 397 Reads · 8 Citations

IEEE Access

This paper proposes a quality recovery network (QRNet) that recovers image quality from distorted images and improves classification accuracy by using the recovered images as classifier inputs; the network is optimized for both an image quality loss and a classification loss. In certain image classification tasks, classifiers based on deep neural networks achieve higher performance than humans. However, these tasks are based on images that are not distorted. To address distorted images in practical applications, the classifier is fine-tuned with distorted images. However, fine-tuning is insufficient for classifying images that contain multiple distortion types with severe distortions, and it often requires the classifier to be retrained to adapt to distorted images, which is a time-consuming process. Therefore, we propose QRNet, which generates recovered images for input to the classifier. To address multiple severe distortions, the proposed network is trained using images with multiple distortion types under our proposed loss, which comprises the image quality and classification losses. Moreover, by training the proposed network with multiple classifiers, the recovered images can be easily classified by a new classifier that was not used during training. The new classifier can classify the recovered images without being retrained to adapt to distorted images. We evaluate our proposed network with classifiers on public datasets and demonstrate that it improves the classification accuracy for distorted images. Moreover, the experimental results demonstrate that our proposed network with the new classifier improves the classification accuracy.
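The combined objective described in the abstract (an image quality term plus classification terms from multiple classifiers) might be sketched as follows. The MSE quality term, the averaging over classifiers, and the `lam` weighting are assumptions for illustration; the paper's exact loss formulation may differ.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for a single example, computed stably
    z = logits - logits.max()
    log_p = z - np.log(np.exp(z).sum())
    return -log_p[label]

def qrnet_loss(x_clean, x_recovered, classifier_logits, label, lam=1.0):
    # Image quality term: pixel-wise MSE against the clean image
    quality = np.mean((x_recovered - x_clean) ** 2)
    # Classification term: cross-entropy averaged over the multiple
    # classifiers used during training (one logits vector each)
    cls = np.mean([cross_entropy(lg, label) for lg in classifier_logits])
    return quality + lam * cls
```

Training against several classifiers at once pushes the recovered images toward a representation that any reasonable classifier can handle, which is why (per the abstract) a new, unseen classifier can use them without retraining.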


Citations (2)


... It makes subtle changes to the image's detail or texture. Bit plane 7 is the most significant bit plane, having the most considerable impact on the overall intensity. Changes in this plane can dramatically alter the image's appearance by toggling between higher and lower intensity values. ...

Reference:

Enhancing Machine Learning Resilience to Adversarial Attacks through Bit Plane Slicing Optimized by Genetic Algorithms
Defense Against Adversarial Examples Using Quality Recovery for Image Classification
  • Citing Article
  • August 2020

Journal of Japan Society for Fuzzy Theory and Intelligent Informatics

... [Душкин, 2019]. For some of the listed tasks, solutions based on artificial intelligence methods have already surpassed the human level in accuracy and efficiency [Takagi et al., 2019]. ...

Quality Recovery for Image Recognition

IEEE Access