This thesis sheds new light on three topics at the intersection of inverse problems, variational regularization, and optimization in imaging.
In the first of its three major parts, we introduce a rather unconventional use of Bregman distances in order to reduce or entirely remove a systematic error, more precisely a bias, that arises when using convex variational methods. To this end, we explore the structure of the roots of Bregman distances (and of their infimal convolutions) and show that the related sets form a suitable tool to tackle the bias. We carry out the analysis for many well-established regularization functionals, such as isotropic and anisotropic total variation or polyhedral regularization, and show experimentally that the resulting method substantially improves the reconstructions.
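The central object of this part can be sketched as follows; the notation (convex functional $J$, subgradient $p$) is illustrative and not necessarily the thesis' exact formulation:

```latex
% Bregman distance of a convex functional J at v with subgradient p:
\[
  D_J^{\,p}(u,v) \;=\; J(u) - J(v) - \langle p,\, u - v \rangle,
  \qquad p \in \partial J(v).
\]
% The set of its roots (zero Bregman distance), which serves as the
% admissible set over which a bias-reducing second step can be taken:
\[
  \mathcal{B}(v,p) \;=\; \bigl\{\, u \;:\; D_J^{\,p}(u,v) = 0 \,\bigr\}.
\]
```

Intuitively, a second minimization restricted to such a set keeps the structural information selected by the first (biased) solve while relaxing the shrinkage of the regularizer.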
In the second part we build on these concepts in order to define a variational method for joint reconstruction. We generalize Bregman iterations to multiple channels and show that the resulting method, in the context of total variation regularization, is able to couple the edge information of the respective channels. Over the iterations this leads to a similar structure in all images, which we demonstrate to be very effective, e.g., for medical imaging in the context of combined positron emission tomography and magnetic resonance imaging.
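For a single channel, the classical Bregman iteration underlying this generalization can be sketched as follows (illustrative notation: data term $H(\cdot;f)$, regularizer $J$; the multichannel coupling of the thesis replaces $J$ and the subgradient update by shared, channel-coupled quantities):

```latex
% One Bregman iteration: minimize the data term plus the Bregman
% distance to the previous iterate, then update the subgradient.
\[
  u^{k+1} \in \operatorname*{arg\,min}_{u}\; H(u;f) + D_J^{\,p^k}(u, u^k),
\]
\[
  p^{k+1} \;=\; p^k - \nabla_u H(u^{k+1};f) \;\in\; \partial J(u^{k+1}).
\]
```

Carrying the subgradient $p^k$ (which, for total variation, encodes edge information) jointly across channels is what transports structure from one image to the others over the iterations.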
In the final part of this thesis we extend some well-established primal-dual methods for the solution of modern variational problems to an inexact setting. We allow different types of errors in the individual steps of the algorithms and analyze the convergence rates in dependence of these errors. We show that large errors substantially slow down the algorithms, while a sufficient decay of the errors over the iterations ensures the same rate as the error-free algorithm. In the context of nested algorithms, our analysis not only explains why many heuristic approaches work in practice, but also provides criteria for improving them.
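The flavor of such results can be sketched as follows; the symbols ($\varepsilon_k$ for the per-iteration error, $N$ for the iteration count) and the specific thresholds are illustrative rather than the thesis' exact statements:

```latex
% Illustrative trade-off for an inexact first-order method with
% O(1/N) exact rate: if the errors in the individual steps decay as
\[
  \varepsilon_k \;=\; \mathcal{O}\!\left(k^{-\alpha}\right),
\]
% then, schematically, a sufficiently fast decay (e.g. summable
% errors, \alpha > 1) preserves the error-free rate, while slower
% decay degrades it:
\[
  \text{rate} \;=\;
  \begin{cases}
    \mathcal{O}(1/N) & \text{errors decay fast enough},\\[2pt]
    \text{slower than } \mathcal{O}(1/N) & \text{otherwise}.
  \end{cases}
\]
```

In nested schemes, where each outer step is itself solved by an inner iterative method, such bounds translate directly into a rule for how accurately the inner problems must be solved at iteration $k$.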