Fig. 1: Simplified Pix2Pix architecture with generator G and discriminator D.
Source publication
Generative models and their possible applications are almost limitless, but such models still have problems. On the one hand, they are difficult to train: instability during training, mode collapse or non-convergence, together with the huge parameter space, make it extremely costly and difficult to train and optimize generative models. Th...
Context in source publication
Context 1
... Pix2Pix [3] is a paired image-to-image translation conditional Generative Adversarial Network (cGAN) with the ability to learn the features of an input image and translate them into a different output without changing the composition and shapes of the original image (ground truth). With Pix2Pix, the condition is the input image itself. Fig. 1 shows a simplified Pix2Pix architecture with a generator G and a discriminator D. Generator and discriminator are trained in turns. The input image used as the condition is x, and the image created by the generator is G(x). The label or target domain to which the network [3] should translate x is y. The ...
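To make the alternating training of G and D described above concrete, the following is a minimal sketch of one Pix2Pix-style training step in PyTorch. The tiny convolutional generator and discriminator, the optimizer settings, and the L1 weight are illustrative assumptions, not the U-Net/PatchGAN networks or hyperparameters used in the paper.

```python
# Minimal sketch of an alternating Pix2Pix-style cGAN training step (cf. Fig. 1).
# ToyGenerator/ToyDiscriminator are illustrative placeholders, not the original
# U-Net generator / PatchGAN discriminator; loss weights are assumptions.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Maps a conditioning image x to a translated image G(x)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class ToyDiscriminator(nn.Module):
    """Scores the pair (condition x, candidate image) as real or fake."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, padding=1),  # patch-wise real/fake logits
        )
    def forward(self, x, img):
        return self.net(torch.cat([x, img], dim=1))

G, D = ToyGenerator(), ToyDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # assumed weight of the L1 reconstruction term

def train_step(x, y):
    """One alternating update: first D, then G, on a paired batch (x, y)."""
    # Discriminator turn: real pair (x, y) vs. fake pair (x, G(x)).
    fake = G(x).detach()
    d_real, d_fake = D(x, y), D(x, fake)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator turn: fool D while staying close to the target y.
    fake = G(x)
    d_fake = D(x, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, y)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Toy paired batch: condition x and target y from the two domains.
x = torch.rand(4, 3, 64, 64) * 2 - 1
y = torch.rand(4, 3, 64, 64) * 2 - 1
print(train_step(x, y))
```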
Citations
... This data was used to train a Pix2Pix GAN to create exact copies [10] of the scans. Based on our previous work [11], [12] and [13], we optimised Pix2Pix to create high-quality samples: the generated images were optimised towards an ideal score on the Universal Quality Index (UIQ) metric [14] through hyperparameter tuning, and the results were then evaluated with UIQ. In addition, based on our experiments, we created a prediction network that predicts whether a given hyperparameter combination will generate better results or not. ...
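As a rough illustration of scoring generated images with UIQ and comparing hyperparameter candidates, the sketch below uses the global (single-window) form of the Universal Quality Index; the function names, the toy candidate settings, and the overall ranking procedure are assumptions, not the paper's actual evaluation pipeline or prediction network.

```python
# Hedged sketch: scoring generated images with the Universal Quality Index (UIQ)
# and ranking hyperparameter candidates by their mean score. The global
# (non-windowed) form of the index is used here for brevity.
import numpy as np

def uiq(reference: np.ndarray, generated: np.ndarray) -> float:
    """Universal Quality Index between two images, in [-1, 1] (1 = identical)."""
    x = reference.astype(np.float64).ravel()
    y = generated.astype(np.float64).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    num = 4 * cov * mx * my
    den = (vx + vy) * (mx**2 + my**2)
    return float(num / den) if den != 0 else 1.0

def rank_candidates(results: dict[str, list[tuple[np.ndarray, np.ndarray]]]) -> list[tuple[str, float]]:
    """Rank hyperparameter settings by mean UIQ over (target, generated) pairs."""
    scores = {name: float(np.mean([uiq(t, g) for t, g in pairs]))
              for name, pairs in results.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy example with two hypothetical hyperparameter settings.
rng = np.random.default_rng(0)
target = rng.random((64, 64))
results = {
    "lr=2e-4,lambda=100": [(target, target + 0.01 * rng.random((64, 64)))],
    "lr=1e-3,lambda=10":  [(target, rng.random((64, 64)))],
}
print(rank_candidates(results))
```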