Conference Paper

Improved evolution of generative adversarial networks

... When the ideal number is unknown, the proposed process using variable-length chromosomes performs better, especially when computational resources are limited. Genetic algorithms and GANs are an interesting field for hyperparameter optimisation, as shown above and in additional research [12], [13], [14], [15], [16]. Since Pix2Pix has a variable length when training the generator, due to the skip connections described in Section 3, each training cycle uses a different number of neurons, leading to the variable length. The extension of the genetic algorithm described above would be interesting to test in future work to see how well it performs with Pix2Pix. ...
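The variable-length chromosome idea in the excerpt can be sketched as a mutation operator over a genotype encoding layer widths, which may grow or shrink as well as perturb individual genes. A minimal sketch; the gene values and probabilities are illustrative assumptions, not taken from the paper:

```python
import random

def mutate(chromosome, p_resize=0.3, p_point=0.2, rng=None):
    """Mutate a variable-length chromosome (a list of layer widths).

    With probability p_resize the chromosome grows or shrinks by one
    gene; each remaining gene is then perturbed with probability p_point.
    """
    rng = rng or random.Random()
    child = list(chromosome)
    if rng.random() < p_resize:
        if len(child) > 1 and rng.random() < 0.5:
            child.pop(rng.randrange(len(child)))          # shrink: drop a layer
        else:
            child.insert(rng.randrange(len(child) + 1),
                         rng.choice([16, 32, 64, 128]))   # grow: add a layer
    for i in range(len(child)):
        if rng.random() < p_point:
            child[i] = max(8, child[i] + rng.choice([-8, 8]))
    return child
```

Because the genotype length varies, crossover operators would also need to align parents of different lengths, which is where the extra bookkeeping of such schemes comes from.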
Chapter
Generative models and their possible applications are almost limitless, but such models still have problems. They are difficult to train: instability in training, mode collapse or non-convergence, together with the huge parameter space, make it extremely costly and difficult to train and optimize generative models. This paper proposes an optimization method limited to a few hyperparameters, using grid search with early stopping, which selects the best hyperparameter combination based on the Universal Image Quality Index (UIQ) obtained by creating a copy of the source image and comparing it with the generated target. The proposed method makes it possible to directly measure the impact of hyperparameter tuning by comparing the achieved UIQ score against a baseline.
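The UIQ-based selection described above can be sketched as follows. For brevity this uses the global (single-window) form of the index, whereas the original index averages it over sliding windows; `train_fn` is a hypothetical stand-in for a full Pix2Pix training run that returns a (generated, target) image pair:

```python
from itertools import product

import numpy as np

def uiq(x, y, eps=1e-12):
    """Universal Image Quality Index (global form):
    Q = 4*cov(x,y)*mean(x)*mean(y) / ((var(x)+var(y))*(mean(x)^2+mean(y)^2))
    """
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx ** 2 + my ** 2) + eps)

def grid_search(train_fn, grid):
    """Return the hyperparameter combination with the best UIQ score.

    grid maps each hyperparameter name to a list of candidate values;
    every combination is trained and scored, and the best one wins.
    """
    best = max(product(*grid.values()),
               key=lambda combo: uiq(*train_fn(dict(zip(grid, combo)))))
    return dict(zip(grid, best))
```

A perfect reconstruction scores Q = 1, so the UIQ of the tuned model against the untuned baseline gives a direct, scalar measure of the tuning's impact.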
... A stopping criterion halts the search once the criterion is met or the generations stop improving. There are several approaches that combine genetic algorithms and GANs for hyperparameter optimisation, as shown in additional research [22], [23], [24], [25], [26], [27]. Performance-wise, grid search and genetic algorithms took nearly the same time to find results of equal accuracy, and roughly twice as long as random search to reach the same accuracy on the CIFAR-10 dataset. ...
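A minimal sketch of such a stopping criterion, assuming a generic `step`/`fitness` interface rather than the specific GA of the cited work: the loop terminates either at a generation budget or after the best fitness has stalled for a fixed number of generations.

```python
def evolve(init_pop, step, fitness, patience=5, max_gen=100):
    """Run a GA until the best fitness stalls for `patience` generations.

    step(pop) produces the next generation; fitness(ind) scores one
    individual. Returns the final population and the best score seen.
    """
    pop, best, stale = init_pop, float("-inf"), 0
    for gen in range(max_gen):
        pop = step(pop)
        top = max(fitness(ind) for ind in pop)
        if top > best:
            best, stale = top, 0   # improvement: reset the stall counter
        else:
            stale += 1
            if stale >= patience:  # no improvement for `patience` generations
                break
    return pop, best
```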
Conference Paper
Hyperparameter tuning is an important aspect of machine learning, especially for deep generative models. Tuning models to stabilize training and obtain the best accuracy can be a time-consuming and protracted process. Generative models have a large search space, requiring resources and knowledge to find the best parameters. Therefore, in most cases the search space is reduced and the parameters are limited to a selected few to save time and computation. This paper explores three different strategies to predict high-impact hyperparameters for Pix2Pix. The results show that binary classification and regression achieve good results and reliably predict good hyperparameter combinations.
Chapter
Adversarial Evolutionary Learning (AEL) is concerned with competing adversaries that are adapting over time. This competition can be defined as a minimization–maximization problem. Different methods exist to model the search for solutions to this problem, such as the Competitive Coevolutionary Algorithm, Multi-agent Reinforcement Learning, Adversarial Machine Learning, and Evolutionary Game Theory. This chapter introduces an overview of AEL. We focus on spatially distributed competitive coevolution for adversarial evolutionary learning to deal with the Generative Adversarial Networks (GANs) training challenges. A population of multiple individual solutions, parameterized artificial neural networks (ANN), provides diversity to the gradient-based GAN learning and increases the robustness of the GAN training. The computational complexity is reduced by using a spatial topology that decreases the number of evaluations and facilitates scalability. In addition, the topology enables diverse hyper-parameters, objectives, search operators, and data. We present a design and an implementation of an AEL system with spatial competitive coevolution and gradient-based adversarial learning. We demonstrate how the increase in diversity improves the performance of generative learning tasks on image data. Moreover, the distributed population in AEL can help overcome some hardware limitations for ANN architectures.
Article
Recognized as a realistic image generator, the Generative Adversarial Network (GAN) occupies a prominent place in deep learning. Using generative modeling, the underlying generator model learns the real target distribution and outputs fake samples from the generated replica distribution. The discriminator attempts to distinguish the fake samples from the real ones and sends feedback to the generator so that the generator can improve the fake samples. Recently, GANs have been competing with the state of the art in various tasks, including image processing, missing-data imputation, text-to-image translation, and adversarial example generation. However, the architecture suffers from training instability, resulting in problems like non-convergence, mode collapse, and vanishing gradients. The research community has been studying and devising modified architectures, alternative loss functions, and techniques to address these concerns. A portion of publications has studied adversarial training alongside GANs. This review covers the existing work on the instability of GANs from square one, plus a selection of recent publications, to illustrate the trend of research. It also gives insight into studies on adversarial attacks and into research discussing adversarial attacks with GANs. In short, this study intends to guide researchers interested in the modifications made to GANs for stable training in the presence of adversarial attacks.
Conference Paper
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.
Conference Paper
One of the challenges in the study of generative adversarial networks is the instability of their training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on the CIFAR-10, STL-10, and ILSVRC2012 datasets, and experimentally confirmed that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
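Spectral normalization divides each weight matrix by an estimate of its largest singular value, obtained cheaply by power iteration. A standalone NumPy sketch; note that the SN-GAN implementation reuses the iteration vector across training steps so a single iteration per step suffices, whereas here the estimate is computed from scratch:

```python
import numpy as np

def spectral_normalize(W, n_iter=100, eps=1e-12):
    """Return W divided by its largest singular value, estimated by
    power iteration on W W^T / W^T W (as in spectral normalization)."""
    u = np.random.default_rng(0).standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps   # right singular vector estimate
        u = W @ v
        u /= np.linalg.norm(u) + eps   # left singular vector estimate
    sigma = u @ W @ v                  # Rayleigh-quotient estimate of sigma_max
    return W / sigma
```

After normalization the layer is 1-Lipschitz, which is the property the paper uses to constrain the discriminator.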
The relativistic discriminator: a key element missing from standard GAN
  • Alexia Jolicoeur-Martineau
Large Scale GAN Training for High Fidelity Natural Image Synthesis
  • Andrew Brock
  • Jeff Donahue
  • Karen Simonyan
Co-evolution of Generative Adversarial Networks (2019)
  • Victor Costa
  • Nuno Lourenço
  • Penousal Machado