Tomer Michaeli's research while affiliated with Technion - Israel Institute of Technology and other places

Publications (89)

Preprint
Stochastic restoration algorithms make it possible to explore the space of solutions that correspond to the degraded input. In this paper we reveal additional fundamental advantages of stochastic methods over deterministic ones, which further motivate their use. First, we prove that any restoration algorithm that attains perfect perceptual quality and whose o...
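The snippet above is cut off, but a classical calculation conveys the flavor of such results (stated here as background, not as this preprint's theorem): if a restoration algorithm outputs a sample X̂ drawn from the posterior p(X|Y), independently of X given the measurements Y, then

```latex
\mathbb{E}\|X-\hat{X}\|^2
  \;=\; \mathbb{E}\|X-\mathbb{E}[X\mid Y]\|^2 \;+\; \mathbb{E}\|\hat{X}-\mathbb{E}[X\mid Y]\|^2
  \;=\; 2\,\mathrm{MMSE},
```

because the cross term vanishes: given Y, both X and X̂ fluctuate independently around the conditional mean E[X|Y]. Perfect perceptual quality via posterior sampling thus costs only a factor of 2 in MSE relative to the MMSE estimator.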
Preprint
Full-text available
Power consumption is a major obstacle in the deployment of deep neural networks (DNNs) on end devices. Existing approaches for reducing power consumption rely on quite general principles, including avoidance of multiplication operations and aggressive quantization of weights and activations. However, these methods do not take into account the preci...
Preprint
The lower the distortion of an estimator, the more the distribution of its outputs generally deviates from the distribution of the signals it attempts to estimate. This phenomenon, known as the perception-distortion tradeoff, has captured significant attention in image restoration, where it implies that fidelity to ground truth images comes at the...
Preprint
Models for audio generation are typically trained on hours of recordings. Here, we illustrate that capturing the essence of an audio source is typically possible from as little as a few tens of seconds of a single training signal. Specifically, we present a GAN-based generative model that can be trained on one short audio signal from any domain (...
Article
Fast acquisition of depth information is crucial for accurate 3D tracking of moving objects. Snapshot depth sensing can be achieved by wavefront coding, in which the point-spread function (PSF) is engineered to vary distinctively with scene depth by altering the detection optics. In low-light applications, such as 3D localization microscopy, the pr...
Article
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training. In this paper, we analyze the popular spectral normalization scheme, find a significant drawback and introduce sparsity aware normalization (SAN), a new alternative approach for stabilizing GAN tr...
Preprint
Generative adversarial networks (GANs) are known to benefit from regularization or normalization of their critic (discriminator) network during training. In this paper, we analyze the popular spectral normalization scheme, find a significant drawback and introduce sparsity aware normalization (SAN), a new alternative approach for stabilizing GAN tr...
Conference Paper
In this talk I will describe how joint optimization of the microscope’s point-spread-function alongside the image processing algorithm, both using neural nets, enables dense emitter fitting for super-resolution microscopy and other challenging volumetric microscopy applications.
Preprint
Recent research has shown remarkable success in revealing "steering" directions in the latent spaces of pre-trained GANs. These directions correspond to semantically meaningful image transformations (e.g., shift, zoom, color manipulations), and have similar interpretable effects across all categories that the GAN can generate. Some methods focus on...
Preprint
Full-text available
Contrastive divergence (CD) learning is a classical method for fitting unnormalized statistical models to data samples. Despite its widespread use, the convergence properties of this algorithm are still not well understood. The main source of difficulty is an unjustified approximation which has been used to derive the gradient of the loss. In this...
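As background for readers unfamiliar with CD, here is a minimal CD-1 update for a Bernoulli restricted Boltzmann machine, the textbook setting in which the approximation mentioned above arises. This is an illustrative sketch (all variable names are ours), not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b, c, v0, lr=0.01):
    """One CD-1 step for a Bernoulli RBM with weights W, visible bias b, hidden bias c."""
    # Positive phase: sample hidden units given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one step of Gibbs sampling back to the visible layer.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # CD-1 gradient estimate: data statistics minus one-step reconstruction statistics.
    # (This truncated-chain estimate is the approximation whose justification is at issue.)
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage on random binary data.
v = (rng.random((32, 6)) < 0.5).astype(float)
W, b, c = rng.normal(0, 0.1, (6, 4)), np.zeros(6), np.zeros(4)
W, b, c = cd1_update(W, b, c, v)
```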
Preprint
We introduce a new generator architecture, aimed at fast and efficient high-resolution image-to-image translation. We design the generator to be an extremely lightweight function of the full-resolution image. In fact, we use pixel-wise networks; that is, each pixel is processed independently of others, through a composition of simple affine transfo...
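To make "pixel-wise" concrete, the sketch below builds a network entirely from 1×1 convolutions, so each output pixel depends only on its own input pixel. This is a simplification for illustration (the paper's design additionally makes these per-pixel transforms spatially varying); the class name and sizes are our own.

```python
import torch
import torch.nn as nn

class PixelwiseTranslator(nn.Module):
    """Every layer is a 1x1 convolution, so each pixel is mapped independently
    of all others, through a composition of affine transforms and nonlinearities."""
    def __init__(self, in_ch=3, hidden=64, out_ch=3, depth=4):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(depth):
            layers += [nn.Conv2d(ch, hidden, kernel_size=1), nn.LeakyReLU(0.2)]
            ch = hidden
        layers.append(nn.Conv2d(ch, out_ch, kernel_size=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Operates directly on the full-resolution image.
y = PixelwiseTranslator()(torch.randn(1, 3, 256, 256))
```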
Preprint
A long-standing challenge in multiple-particle-tracking is the accurate and precise 3D localization of individual particles at close proximity. One established approach for snapshot 3D imaging is point-spread-function (PSF) engineering, in which the PSF is modified to encode the axial information. However, engineered PSFs are challenging to localiz...
Article
Full-text available
An outstanding challenge in single-molecule localization microscopy is the accurate and precise localization of individual point emitters in three dimensions in densely labeled samples. One established approach for three-dimensional single-molecule localization is point-spread-function (PSF) engineering, in which the PSF is engineered to vary disti...
Article
Full-text available
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
Preprint
The ever-growing amounts of visual contents captured on a daily basis necessitate the use of lossy compression methods in order to save storage space and transmission bandwidth. While extensive research efforts are devoted to improving compression techniques, every method inevitably discards information. Especially at low bit rates, this informatio...
Preprint
It is well known that (stochastic) gradient descent has an implicit bias towards wide minima. In deep neural network training, this mechanism serves to screen out minima. However, the precise effect that this has on the trained network is not yet fully understood. In this paper, we characterize the wide minima in linear neural networks trained with...
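For context, a standard linear-stability fact used throughout this literature (background, not necessarily this paper's result): gradient descent with step size η can only converge to minima whose curvature satisfies

```latex
\lambda_{\max}\!\left(\nabla^{2} L(\theta^{*})\right) \;\le\; \frac{2}{\eta},
```

since along any Hessian eigendirection with eigenvalue λ > 2/η the iterates satisfy |1 − ηλ| > 1 and diverge. Larger step sizes therefore screen out sharper minima.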
Preprint
Single image super resolution (SR) has seen major performance leaps in recent years. However, existing methods do not allow exploring the infinitely many plausible reconstructions that might have given rise to the observed low-resolution (LR) image. These different explanations to the LR image may dramatically vary in their textures and fine detail...
Preprint
Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their images. This is the key ingredient in single/multiple-particle-tracking and several super-resolution microscopy approaches. Localization in three-dimensions (3D) can b...
Preprint
We introduce SinGAN, an unconditional generative model that can be learned from a single natural image. Our model is trained to capture the internal distribution of patches within the image, and is then able to generate high quality, diverse samples that carry the same visual content as the image. SinGAN contains a pyramid of fully convolutional GA...
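A schematic of the coarse-to-fine sampling loop that such a pyramid implies is sketched below. This is our own simplified illustration (architecture, sizes, and names are ours), not the released SinGAN code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(ch=32):
    # A tiny fully convolutional generator for one scale of the pyramid.
    return nn.Sequential(
        nn.Conv2d(3, ch, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(ch, 3, 3, padding=1),
    )

def sample_pyramid(generators, sizes, noise_amp=0.1):
    """Coarse-to-fine sampling: start from zeros at the coarsest scale, then
    repeatedly upsample, inject fresh noise, and apply a residual refinement."""
    x = torch.zeros(1, 3, *sizes[0])
    for G, size in zip(generators, sizes):
        x = F.interpolate(x, size=size, mode='bilinear', align_corners=False)
        z = noise_amp * torch.randn_like(x)
        x = x + G(x + z)   # residual generator at this scale
    return x

gens = [conv_block() for _ in range(4)]
sizes = [(32, 32), (64, 64), (128, 128), (256, 256)]
img = sample_pyramid(gens, sizes)
```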
Article
Full-text available
Deep learning has become an extremely effective tool for image classification and image restoration problems. Here, we apply deep learning to microscopy and demonstrate how neural networks can exploit the chromatic dependence of the point-spread function to classify the colors of single emitters imaged on a grayscale camera. While existing localiza...
Preprint
Lossy compression algorithms are typically designed and analyzed through the lens of Shannon's rate-distortion theory, where the goal is to achieve the lowest possible distortion (e.g., low MSE or high SSIM) at any given bit rate. However, in recent years, it has become increasingly accepted that "low distortion" is not a synonym for "high perceptu...
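This line of work augments Shannon's rate-distortion function with a constraint on perceptual quality. Up to notation, the resulting rate-distortion-perception function is

```latex
R(D, P) \;=\; \min_{p_{\hat{X}\mid X}} \; I\!\left(X; \hat{X}\right)
\quad \text{s.t.} \quad
\mathbb{E}\!\left[\Delta(X, \hat{X})\right] \le D, \qquad
d\!\left(p_{X}, p_{\hat{X}}\right) \le P,
```

where Δ is a distortion measure and d a divergence between the source and reconstruction distributions.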
Chapter
We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once a...
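A minimal sketch of the shared/private decomposition described above (our illustrative PyTorch module, not the authors' code; all names and sizes are ours):

```python
import torch
import torch.nn as nn

class SharedPrivateNet(nn.Module):
    """A shared trunk learns features common to all tasks;
    each task also gets a private branch for its unique aspects."""
    def __init__(self, in_dim, shared_dim, private_dim, num_tasks):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.private = nn.ModuleList([
            nn.Sequential(nn.Linear(in_dim, private_dim), nn.ReLU())
            for _ in range(num_tasks)])
        self.heads = nn.ModuleList([
            nn.Linear(shared_dim + private_dim, 1) for _ in range(num_tasks)])

    def forward(self, x, task):
        # Concatenate shared features with this task's private features.
        feats = torch.cat([self.shared(x), self.private[task](x)], dim=-1)
        return self.heads[task](feats)

net = SharedPrivateNet(in_dim=16, shared_dim=32, private_dim=8, num_tasks=3)
y = net(torch.randn(5, 16), task=1)
```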
Chapter
This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptual-driven methods...
Preprint
Deep net architectures have constantly evolved over the past few years, leading to significant advancements in a wide array of computer vision tasks. However, besides high accuracy, many applications also require a low computational load and limited memory footprint. To date, efficiency has typically been achieved either by architectural choices at...
Preprint
This paper reports on the 2018 PIRM challenge on perceptual super-resolution (SR), held in conjunction with the Perceptual Image Restoration and Manipulation (PIRM) workshop at ECCV 2018. In contrast to previous SR challenges, our evaluation methodology jointly quantifies accuracy and perceptual quality, therefore enabling perceptual-driven methods...
Preprint
Full-text available
Deep learning has become an extremely effective tool for image classification and image restoration problems. Here, we apply deep learning to microscopy, and demonstrate how neural networks can exploit the chromatic dependence of the point-spread function to classify the colors of single emitters imaged on a grayscale camera. While existing single-...
Article
Sparse representation over redundant dictionaries constitutes a good model for many classes of signals (e.g., patches of natural images, segments of speech signals, etc.). However, despite its popularity, very little is known about the representation capacity of this model. In this paper, we study how redundant a dictionary must be so as to allow a...
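For context, the model in question writes a signal x ∈ ℝ^d as a k-sparse combination of the columns (atoms) of a redundant dictionary D ∈ ℝ^{d×n} with n > d:

```latex
x \;=\; D\alpha, \qquad \|\alpha\|_{0} \le k \ll n,
```

and the question studied is, in our paraphrase, how large the redundancy n/d must be for every signal in a given class to admit such a k-sparse representation.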
Article
Lossy compression algorithms aim to compactly encode images in a way which enables restoring them with minimal error. We show that a key limitation of existing algorithms is that they rely on error measures that are extremely sensitive to geometric deformations (e.g. SSD, SSIM). These force the encoder to invest many bits in describing the exact g...
Article
We present an ultra-fast, precise, parameter-free method, which we term Deep-STORM, for obtaining super-resolution images from stochastically-blinking emitters, such as fluorescent molecules used for localization microscopy. Deep-STORM uses a deep convolutional neural network that can be trained on simulated data or experimental measurements, both...
Article
In recent years, deep neural networks (DNNs) achieved unprecedented performance in many low-level vision tasks. However, state-of-the-art results are typically achieved by very deep networks, which can reach tens of layers with tens of millions of parameters. To make DNNs implementable on platforms with limited resources, it is necessary to weaken...
Article
Image restoration algorithms are typically evaluated by some distortion measure (e.g. PSNR, SSIM) or by human opinion scores that directly quantify perceived quality. In this paper, we prove mathematically that distortion and perceptual quality are at odds with each other. Specifically, we study the optimal probability for discriminating...
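Up to notation, the tradeoff is formalized through the perception-distortion function,

```latex
P(D) \;=\; \min_{p_{\hat{X}\mid Y}} \; d\!\left(p_{X}, p_{\hat{X}}\right)
\quad \text{s.t.} \quad
\mathbb{E}\!\left[\Delta(X, \hat{X})\right] \le D,
```

which gives the best attainable perceptual quality (distance between the distributions of real and restored images) at distortion level D; the function is non-increasing in D and, for suitable divergences d, convex.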
Article
The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for pr...
Article
Spectral dimensionality reduction algorithms are widely used in numerous domains, including for recognition, segmentation, tracking and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the "repeated Eigen-directions" phenomenon. That is, many of the embedding coordinates they produce typical...
Conference Paper
Image priors play a key role in low-level vision tasks. Over the years, many priors have been proposed, based on a wide variety of principles. While different priors capture different geometric properties, there is currently no unified approach to interpreting and comparing priors of different nature. This limits our ability to analyze failures or...
Article
Canonical correlation analysis (CCA) is a fundamental technique in multi-view data analysis and representation learning. Several nonlinear extensions of the classical linear CCA method have been proposed, including kernel and deep neural network methods. These approaches restrict attention to certain families of nonlinear projections, which the use...
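As a reference point for these nonlinear extensions, classical linear CCA can be computed in closed form by whitening each view and taking an SVD of the cross-covariance. The following is an illustrative sketch (names are ours):

```python
import numpy as np

def linear_cca(X, Y, reg=1e-8):
    """Classical linear CCA: the canonical correlations are the singular
    values of the whitened cross-covariance matrix."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whiten each view via the inverse Cholesky factor, then SVD.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy.T)
    A = Wx.T @ U      # canonical projections for view X
    B = Wy.T @ Vt.T   # canonical projections for view Y
    return A, B, s    # s holds the canonical correlations

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 2))                      # shared latent source
X = Z @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = Z @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))
A, B, corrs = linear_cca(X, Y)
```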
Article
We present an algorithm for automatically detecting and visualizing small non-local variations between repeating structures in a single image. Our method makes it possible to automatically correct these variations, thus producing an 'idealized' version of the image in which the resemblance between recurring structures is stronger. Alternatively, it can be use...
Article
Extended target tracking arises in situations where the resolution of the sensor is high enough to allow multiple returns from the target of interest corresponding to its different parts. Various formulations and solutions may be found in the literature. We concentrate on the data association aspect involved in the tracking problem and propose util...
Conference Paper
Recurrence of small image patches across different scales of a natural image has been previously used for solving ill-posed problems (e.g. super- resolution from a single image). In this paper we show how this multi-scale property can also be used for “blind-deblurring”, namely, removal of an unknown blur from a blurry image. While patches repeat ‘...
Article
Full-text available
A generalized state space representation of dynamical systems with random modes switching according to a white random process is presented. The new formulation includes a term, in the dynamics equation, that depends on the most recent linear minimum mean squared error (LMMSE) estimate of the state. This can model the behavior of a feedback control...
Conference Paper
Full-text available
Super resolution (SR) algorithms typically assume that the blur kernel is known (either the Point Spread Function 'PSF' of the camera, or some default low-pass filter, e.g. a Gaussian). However, the performance of SR methods significantly deteriorates when the assumed blur kernel deviates from the true one. We propose a general framework for "blind...
Article
The superior resolution of optical coherence tomography (OCT) with respect to alternative imaging modalities makes it highly attractive, and some of its applications are already in extensive clinical use. However, one of the major limitations of OCT is that the tomographic picture it generates is depth-limited to approximately 1 mm in most biologic...
Conference Paper
We consider the problem of tracking a target that may split, several times, into separate targets, such that each split becomes an autonomous target. This setting arises, for example, when an aircraft launches a series of missiles, or a ballistic missile with multiple warheads breaks into pieces in the reentry phase. The problem is cast into a fram...
Article
Full-text available
A generalized state space representation of a dynamical system with random modes is presented. The new formulation includes a term, in the dynamics equation, which depends on the most recent state's linear minimum mean squared error (LMMSE) estimate. This can be used to model the behavior of a feedback control system featuring a state estimator. Th...
Article
We address the problems of multi-domain and single-domain regression based on distinct and unpaired labeled training sets for each of the domains and a large unlabeled training set from all domains. We formulate these problems as Bayesian estimation with partial knowledge of statistical relations. We propose a worst-case design strategy and study...
Conference Paper
We address the problem of estimating a random vector X from two sets of measurements Y and Z, such that the estimator is linear in Y. We show that the partially linear minimum mean squared error (PLMMSE) estimator requires knowing only the second-order moments of X and Y, making it of potential interest in various applications. We demonstrate the u...
Conference Paper
We address the problem of recovering signals from samples taken at their rate of innovation. Our only assumption is that the sampling system is such that the parameters defining the signal can be stably determined from the samples. As such, our analysis subsumes previously studied nonlinear acquisition devices and nonlinear signal classes. Our stra...
Conference Paper
We address the problems of multi-domain and single-domain regression based on distinct labeled training sets for each of the domains and a large unlabeled training set from all domains. We formulate these problems as ones of Bayesian estimation with partial knowledge of statistical relations. We propose a worst-case design strategy and study the re...
Conference Paper
Full-text available
We present a unified approach to the problem of state estimation under measurement and model uncertainties. The approach allows formulation of problems such as maneuvering target tracking, target tracking in clutter, and multiple target tracking using a single state-space system with random matrix coefficients. Consequently, all may be solved effic...
Article
Full-text available
A generalized state space representation of a dynamical system with random modes is presented. The dynamics equation includes the effect of the state's linear minimum mean squared error (LMMSE) optimal estimate, representing the behavior of a closed loop control system featuring a state estimator. The measurement equation is allowed to depend on pa...
Article
We address the problem of Bayesian estimation where the statistical relation between the signal and measurements is only partially known. We propose modeling partial Bayesian knowledge by using an auxiliary random vector called instrument. The statistical relations between the instrument and the signal and between the instrument and the measurement...
Article
We address the problem of recovering signals from samples taken at their rate of innovation. Our only assumption is that the sampling system is such that the parameters defining the signal can be stably determined from the samples, a condition that lies at the heart of every sampling theorem. Consequently, our analysis subsumes previously studied n...
Article
Full-text available
Causal processing of a signal's samples is crucial in on-line applications such as audio rate conversion, compression, tracking and more. This paper addresses the problems of predicting future samples and causally interpolating deterministic signals. We treat a rich variety of sampling mechanisms encountered in practice, namely in which each sampli...
Conference Paper
Causal processing of a signal's samples is crucial in on-line applications such as audio rate conversion, compression, tracking and more. This paper addresses the problem of causally reconstructing continuous-time signals from their samples. We treat a rich variety of sampling mechanisms encountered in practice, namely in which each sampling functi...
Article
We address the problem of estimating a random vector X from two sets of measurements Y and Z, such that the estimator is linear in Y. We show that the partially linear minimum mean squared error (PLMMSE) estimator does not require knowing the joint distribution of X and Y in full, but rather only its second-order moments. This renders it of potenti...
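In our paraphrase of the setup, an estimator that is linear in Y but unrestricted in Z has the form AY + b(Z), so the PLMMSE problem can be written as

```latex
\min_{A,\; b(\cdot)} \;\; \mathbb{E}\!\left[\left\| X - \left(A\,Y + b(Z)\right) \right\|^{2}\right],
```

which reflects why, per the abstract, only the second-order moments of X and Y are needed to determine the linear part.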
Conference Paper
In this paper, we derive lower bounds on the estimation error of finite rate of innovation signals from noisy measurements. We first obtain a fundamental limit on the estimation accuracy attainable regardless of the sampling technique. Next, we provide a bound on the performance achievable using any specific sampling method. Essential differences b...
Article
In this paper, we consider the problem of estimating finite rate of innovation (FRI) signals from noisy measurements, and specifically analyze the interaction between FRI techniques and the underlying sampling methods. We first obtain a fundamental limit on the estimation accuracy attainable regardless of the sampling method. Next, we provide a bou...
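A canonical example of a finite-rate-of-innovation signal (one standard instance from this literature, given here for context) is a stream of L pulses,

```latex
x(t) \;=\; \sum_{\ell=1}^{L} a_{\ell}\, g\!\left(t - t_{\ell}\right),
```

which is fully determined by 2L parameters (amplitudes and delays), so it can in principle be recovered from samples taken at its rate of innovation rather than at a Nyquist rate dictated by the bandwidth of g.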
Article
Full-text available
Gap junctions produce low resistance pathways between cardiomyocytes and are major determinants of electrical conduction in the heart. Altered distribution and function of connexin 43 (Cx43), the major gap junctional protein in the ventricles, can slow conduction, and thus contribute to arrhythmogenesis in experimental models such as ischemic rat h...
Article
Full-text available
Time-frequency analysis, such as the Gabor transform, plays an important role in many signal processing applications. The redundancy of such representations is often directly related to the computational load of any algorithm operating in the transform domain. To reduce complexity, it may be desirable to increase the time and frequency sampling int...
Conference Paper
We address the problem of Bayesian estimation where the statistical relation between the signal and measurements is only partially known. We propose modeling partial Bayesian knowledge by using an auxiliary random vector called instrument. The joint probability distributions of the instrument and the signal, and of the instrument and the measurement...
Conference Paper
In this paper, we address the problem of enhancement of a noisy GARCH process using a particle filter. We compare our approach experimentally to a previously developed recursive estimation scheme. Simulations indicate that a significant gain in performance is obtained, at the cost of higher sensitivity to errors in the GARCH parameters. The propose...
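To make the setting concrete, below is a minimal bootstrap particle filter for a GARCH(1,1) process observed in additive Gaussian noise. It is our own sketch under assumed, known parameters, not the scheme evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (known) GARCH(1,1) parameters and observation noise level.
omega, alpha, beta, sigma_obs = 0.1, 0.2, 0.7, 0.5
T, N = 200, 1000  # time steps, number of particles

# Simulate a GARCH process x_t and noisy observations y_t = x_t + noise.
x, h = np.zeros(T), np.zeros(T)
h[0] = omega / (1 - alpha - beta)      # stationary variance
x[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    h[t] = omega + alpha * x[t - 1]**2 + beta * h[t - 1]
    x[t] = np.sqrt(h[t]) * rng.normal()
y = x + sigma_obs * rng.normal(size=T)

# Bootstrap particle filter over the latent pair (x_t, h_t).
px = np.sqrt(h[0]) * rng.normal(size=N)
ph = np.full(N, h[0])
x_hat = np.zeros(T)
x_hat[0] = px.mean()
for t in range(1, T):
    ph = omega + alpha * px**2 + beta * ph           # propagate conditional variance
    px = np.sqrt(ph) * rng.normal(size=N)            # propagate latent return
    w = np.exp(-0.5 * ((y[t] - px) / sigma_obs)**2)  # observation likelihood
    w /= w.sum()
    x_hat[t] = w @ px                                # weighted (MMSE) estimate
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    px, ph = px[idx], ph[idx]

print("estimation MSE:", np.mean((x_hat - x)**2))
```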
Article
We revisit the problem of recovering a continuous-time signal lying within a known shift-invariant subspace from nonlinear and nonideal samples. Recently, an iterative algorithm for perfect recovery of such signals was proposed. This method requires operations which are not linear time-invariant (LTI), rendering it impractical due to its polynomial...
Conference Paper
We address the problem of motion blur removal from an image sequence that was acquired by a sensor with nonlinear response. Motion blur removal in purely linear settings has been studied extensively in the past. In practice however, sensors exhibit nonlinearities, which also need to be compensated for. In this paper we study the problem of joint mo...
Article
Sampling theory has benefited from a surge of research in recent years, due in part to intense research in wavelet theory and the connections made between the two fields. In this chapter we present several extensions of the Shannon theorem, which treat a wide class of input signals, as well as nonideal-sampling and constrained-recovery procedures....
Conference Paper
We address the problem of recovering a continuous-time (space) signal from several blurred and noisy sampled versions of it, a scenario commonly encountered in super-resolution (SR) and array-processing. We show that discretization, a key step in many SR algorithms, inevitably leads to inaccurate modeling. Instead, we treat the problem entirely in...
Article
Digital applications have developed rapidly over the last few decades. Since many sources of information are of analog or continuous-time nature, discrete-time signal processing (DSP) inherently relies on sampling a continuous-time signal to obtain a discrete-time representation. Consequently, sampling theories lie at the heart of signal processing...
Article
We address the problem of reconstructing a random signal from samples of its filtered version using a given interpolation kernel. In order to reduce the mean squared error (MSE) when using a nonoptimal kernel, we propose a high rate interpolation scheme in which the interpolation grid is finer than the sampling grid. A digital correction system tha...
Conference Paper
We address the problem of minimum mean-squared error (MMSE) estimation where the estimator is constrained to belong to a predefined set of functions. We derive a simple closed-form formula that reveals the structure of the restricted estimator for a wide class of constraints. Using this formula we study various types of constrained estimation pro...
Article
Sampling theory has benefited from a surge of research in recent years, due in part to the intense research in wavelet theory and the connections made between the two fields. In this survey we present several extensions of the Shannon theorem, that have been developed primarily in the past two decades, which treat a wide class of input signals as w...
Conference Paper
We address the problem of minimum mean-squared error (MMSE) estimation under convex constraints. The familiar orthogonality principle, developed for linear constraints, is generalized to include convex restrictions. Using the extended principle, we study two types of convex constraints: constraints on the estimated vector (e.g. bounded norm), and c...
Article
We address the problem of purely-translational super-resolution (SR) for signals in arbitrary dimensions. We show that discretization, a key step in many SR algorithms, inevitably leads to inaccurate modeling. Instead, we treat the problem entirely in the continuous domain by modeling the signal as a continuous-space random process and deriving its...

Citations

... Deep optics has been applied to low-level problems, such as color imaging and demosaicking [95], extended depth of field and super-resolution imaging [96], high dynamic range (HDR) imaging [97], and depth estimation [98], [99], [100], as well as to high-level problems, such as classification [101] and object detection [102]. Deep optics has also been used in time-of-flight imaging [103], [104], [105], computational microscopy [106], [107], [108], and imaging through scattering [109]. Existing work primarily focuses on optimizing the parameters of the optical element, sensor, and image signal processor (ISP), each of which can be considered a core block of the imaging pipeline and can be filled with different components in a plug-and-play manner. ...
... Implicit neural representations (INRs) [1,2,3], which parameterize a signal as a continuous function mapping low-dimensional spatial/temporal coordinates to the value space, have recently shown their superiority in computer vision. Previous INR approaches mainly focus on low-level tasks, e.g., image super-resolution [4,5,6,7,8], image/video compression [6,9,10,11], or image translation [12,13]. Despite this gratifying progress, little attention has been paid to high-level vision tasks, e.g., image classification or segmentation. ...
... A common estimation approach attempts to minimize the average discrepancy between the source image […]¹
[Footnote 1: While it is outside the scope of this paper to reference this vast literature, we do provide links to leading such techniques in Section 2.]
[Figure 1 caption: Reconstruction examples of highly compressed JPEG images using [5] and our proposed method. Our method produces stochastic outputs, all of which are perfectly consistent with the compressed image, like [5], but with far better perceptual quality.] ...
... Departing from the single-shot approach, Refs. (18,19) from the optical domain optimized exactly two illumination patterns for a specific task and one chosen noise level; again, no systematic analysis of the effect of noise on the learned illumination patterns was presented, and strong-noise regimes were not considered. ...
... Although the perceptual quality of those images is high, it is well known that some of these SR models introduce artifacts into the SR image that are not present in real images (Bhadra et al., 2020). In addition, most of these models do not enforce physically-based consistency between the super-resolved image and its low-resolution counterpart (Bahat and Michaeli, 2020). This limits the applicability of SR models in domains such as remote sensing, where safety and consistency are critical, e.g., for scientific instrumentation and decision making. ...