
Brian Bernhard Moser
RPTU - Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau | TUK
Master of Science
Ph.D. student focusing on Deep Learning and Image Super-Resolution with Diffusion Models
About
12 Publications · 989 Reads
12 Citations (since 2017)
Introduction
I was born in Kaiserslautern, Germany, in 1995, with Thai and German heritage. From a young age, I aspired to delve into Computer Science, leading me to invest most of my free time into coding. My passion lies in problem-solving and the logical intricacies of coding. I firmly believe that transformative changes can emerge from a few precise lines of code or a groundbreaking mathematical concept. Presently, my research is focused on Image Super-Resolution and the vast expanse of Deep Learning.
Additional affiliations
November 2018 - present
Education
December 2018 - December 2021
Technische Universität Kaiserslautern
Field of study: Computer Science - Artificial Intelligence
December 2014 - December 2018
Technische Universität Kaiserslautern
Field of study: Computer Science
Publications (12)
Neural Architecture Search (NAS) defines the design of Neural Networks as a search problem. Unfortunately, NAS is computationally intensive because of various possibilities depending on the number of elements in the design and the possible connections between them. In this work, we extensively analyze the role of the dataset size based on several s...
With the advent of Deep Learning (DL), Super-Resolution (SR) has also become a thriving research area. However, despite promising results, the field still faces challenges that require further research, e.g., allowing flexible upsampling, more effective loss functions, and better evaluation metrics. We review the domain of SR in light of recent adv...
This paper presents a novel Diffusion-Wavelet (DiWa) approach for Single-Image Super-Resolution (SISR). It leverages the strengths of Denoising Diffusion Probabilistic Models (DDPMs) and Discrete Wavelet Transformation (DWT). By enabling DDPMs to operate in the DWT domain, our DDPM models effectively hallucinate high-frequency information for super...
This work introduces Differential Wavelet Amplifier (DWA), a drop-in module for wavelet-based image Super-Resolution (SR). DWA invigorates an approach recently receiving less attention, namely Discrete Wavelet Transformation (DWT). DWT enables an efficient image representation for SR and reduces the spatial area of its input by a factor of 4, the o...
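The abstract above notes that the Discrete Wavelet Transformation reduces the spatial area of its input by a factor of 4. As a minimal illustration of why, here is a single-level 2D Haar DWT in plain Python (the function name and the unnormalized 1/2 scaling convention are illustrative choices, not taken from the paper): each of the four sub-bands has half the height and half the width of the input.

```python
# Single-level 2D Haar DWT: splits an image into four sub-bands
# (LL, LH, HL, HH), each with half the height and width of the input,
# i.e. a 4x reduction in spatial area per sub-band.

def haar_dwt2(img):
    """img: list of lists (H x W, H and W even). Returns (LL, LH, HL, HH)."""
    H, W = len(img), len(img[0])
    LL = [[0.0] * (W // 2) for _ in range(H // 2)]
    LH = [[0.0] * (W // 2) for _ in range(H // 2)]
    HL = [[0.0] * (W // 2) for _ in range(H // 2)]
    HH = [[0.0] * (W // 2) for _ in range(H // 2)]
    for i in range(0, H, 2):
        for j in range(0, W, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 2.0  # low-frequency average
            LH[i // 2][j // 2] = (a - b + c - d) / 2.0  # horizontal detail
            HL[i // 2][j // 2] = (a + b - c - d) / 2.0  # vertical detail
            HH[i // 2][j // 2] = (a - b - c + d) / 2.0  # diagonal detail
    return LL, LH, HL, HH

# Example: a 4x4 image yields four 2x2 sub-bands.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
LL, LH, HL, HH = haar_dwt2(img)
print(len(LL), len(LL[0]))  # 2 2 -- each sub-band covers a quarter of the area
```

A wavelet-based SR model can thus process four quarter-size sub-bands instead of the full-resolution image, which is the efficiency DWA builds on.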
This work introduces "You Only Diffuse Areas" (YODA), a novel method for partial diffusion in Single-Image Super-Resolution (SISR). The core idea is to utilize diffusion selectively on spatial regions based on attention maps derived from the low-resolution image and the current time step in the diffusion process. This time-dependent targeting enabl...
In the main text, we complemented previous surveys by critically identifying current strategies and new research areas. This supplementary material provides further information and visualizations on the topics discussed, supporting the main concepts and ideas examined in the main text.
We present new Recurrent Neural Network (RNN) cells for image classification using a Neural Architecture Search (NAS) approach called DARTS. We are interested in the ReNet architecture, which is a RNN based approach presented as an alternative for convolutional and pooling steps. ReNet can be defined using any standard RNN cells, such as LSTM and G...
The goal of this paper is to explore the benefits of using RNNs instead of using CNNs for image transformation tasks. We are interested in two models for image transformation: U-Net (based on CNNs) and U-ReNet (partially based on CNNs and RNNs). In this work, we propose a novel U-ReNet which is almost entirely RNN based. We compare U-Net, U-ReNet (...