Categories
Uncategorized

New preprint

We uploaded our new preprint: “Towards optimal algorithms for the recovery of low-dimensional models with linear rates”, by Y. Traonmilin, J.-F. Aujol and A. Guennec.

Abstract: We consider the problem of recovering elements of a low-dimensional model from linear measurements. From signal and image processing to inverse problems in data science, this question has been at the center of many applications. Lately, with the success of models and methods relying on deep neural networks leading to non-convex formulations, traditional convex variational approaches have shown their limits. Furthermore, the multiplication of algorithms and recovery results makes identifying the best methods a complex task. In this article, we study recovery with a class of widely used algorithms without considering any underlying functional. This study leads to a class of projected gradient descent algorithms that recover a given low-dimensional model with linear rates. The obtained rates decouple the impact of the quality of the measurements with respect to the model from its intrinsic complexity. As a consequence, we can directly measure the performance of this class of projected gradient descents through a restricted Lipschitz constant of the projection. By optimizing this constant, we define optimal algorithms. Our general approach provides an optimality result in the case of sparse recovery. Moreover, we uncover underlying linear rates of convergence for some “plug and play” imaging methods relying on deep priors by interpreting our results in this context, thus linking low-dimensional recovery and recovery with deep priors under a unified theory, validated by experiments on synthetic and real data.
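As a rough illustration of this class of algorithms, here is a minimal numpy sketch (a simple instance of our choosing, not the paper's code) where the low-dimensional model is the set of s-sparse vectors: the projection is hard thresholding, the iteration is projected gradient descent on the least-squares data fit, and the per-iteration error ratio gives an empirical linear rate, playing the role of the restricted Lipschitz constant mentioned above.

import numpy as np

rng = np.random.default_rng(0)
n, m, s = 200, 100, 5                          # dimension, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurements
y = A @ x_true

def project_sparse(x, s):
    # projection onto the s-sparse model: keep the s largest entries
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    z[idx] = x[idx]
    return z

x = np.zeros(n)
errs = []
for _ in range(30):
    x = project_sparse(x + A.T @ (y - A @ x), s)   # projected gradient step
    errs.append(np.linalg.norm(x - x_true))

print("error decay:", ["%.1e" % e for e in errs[:6]])

With enough Gaussian measurements relative to the sparsity, the printed errors decay geometrically, which is exactly the linear rate the paper quantifies.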

Categories
Uncategorized

Paper accepted

Our paper “Sketched over-parametrized projected gradient descent for sparse spike estimation” (https://hal.science/hal-04584951v1) has been accepted to Signal Processing Letters.

This is the final work of P.-J. Bénard's PhD; his defense is next week! A nice application of compressed sensing in spaces of measures.

Categories
Paper

New preprint

“Joint structure-texture low dimensional modeling for image decomposition with a plug and play framework” (Guennec, Aujol, YT) https://hal.science/hal-04648963v1. We describe how structure-texture decomposition is directly linked to the (difficult) design of a regularizer for a complex combination of low-dimensional models. Thanks to the PnP approach and DNNs, we are able to explicitly design such a regularizer in practice, with promising results on natural images and inpainting problems.
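To make the idea concrete, here is a heavily simplified, hypothetical PnP-style alternation in numpy/scipy: the learned DNN denoisers of the paper are replaced by crude linear stand-ins (a Gaussian filter for the piecewise-smooth structure, its high-pass residual for the texture), purely to show the shape of the iteration.

import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(f, n_iter=20, tau=0.5):
    u = f.copy()                  # structure estimate
    v = np.zeros_like(f)          # texture estimate
    for _ in range(n_iter):
        r = u + v - f             # gradient of the coupling term ||u + v - f||^2
        u -= tau * r
        v -= tau * r
        u = gaussian_filter(u, sigma=2.0)       # stand-in "structure denoiser"
        v = v - gaussian_filter(v, sigma=2.0)   # stand-in "texture denoiser"
    return u, v

f = np.random.default_rng(1).standard_normal((64, 64))
u, v = decompose(f)
print("recomposition error ||u + v - f||:", np.linalg.norm(u + v - f))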

Categories
Paper Students

Papers accepted!

My two PhD students will present their latest work at @eusipco2024 in Lyon (congrats!):

P.-J. Bénard: Projected Block Coordinate Descent for sparse spike estimation https://hal.science/hal-04462779v1 (accelerating off-the-grid estimation by leveraging the structure of the problem; a small sketch of the block-coordinate idea follows this list)

Antoine Guennec: Adaptive parameter selection for gradient-sparse plus low patch-rank recovery: application to image decomposition https://hal.science/hal-04207313v1 (the first application of our work on optimal convex regularization)
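Here is the promised hedged numpy toy of the block-coordinate idea (not the actual algorithm, which works off-the-grid with a projection step): with many candidate spikes, each iteration updates only a small block of spike parameters, making the per-iteration cost much lower than a full gradient step.

import numpy as np

rng = np.random.default_rng(2)
m, k, block = 30, 12, 3
w = np.arange(m)                 # measured Fourier frequencies (an assumption)
y = np.exp(-2j * np.pi * np.outer(w, rng.uniform(0, 1, 4))).sum(axis=1)

a, t = rng.uniform(0.1, 0.4, k), rng.uniform(0, 1, k)
for _ in range(3000):
    j = rng.choice(k, block, replace=False)   # pick a random block of spikes
    E = np.exp(-2j * np.pi * np.outer(w, t))
    r = E @ a - y
    Ej = E[:, j]
    a[j] -= 1e-2 * np.real(Ej.conj().T @ r)   # amplitude step, this block only
    Dj = -2j * np.pi * w[:, None] * Ej * a[j][None, :]
    t[j] -= 1e-5 * np.real(Dj.conj().T @ r)   # position step, this block only

print("final residual:",
      np.linalg.norm(np.exp(-2j * np.pi * np.outer(w, t)) @ a - y))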

Categories
Uncategorized

New preprint

We uploaded a major piece of P.-J. Bénard's PhD work:

Estimation of off-the-grid sparse spikes with over-parametrized projected gradient descent: theory and application. P.-J. Bénard, Y. Traonmilin, J.-F. Aujol and E. Soubies, 2023.

Abstract: “In this article, we study the problem of recovering sparse spikes with over-parametrized projected gradient descent. We first provide a theoretical study of approximate recovery with our chosen initialization method: Continuous Orthogonal Matching Pursuit without Sliding. Then we study the effect of over-parametrization on the gradient descent, which highlights the benefits of the projection step. Finally, we show the improved calculation times of our algorithm compared to state-of-the-art model-based methods on realistic simulated microscopy data.”
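Here is a hedged toy pipeline in the spirit of the paper (not its code): (i) a greedy grid initialization standing in for Continuous Orthogonal Matching Pursuit without Sliding, (ii) gradient descent on an over-parametrized set of spikes, and (iii) a projection step that prunes small-amplitude spikes back to the target sparsity.

import numpy as np

rng = np.random.default_rng(3)
m, k_true, k_over = 40, 3, 9
w = np.arange(m)                              # Fourier frequencies (an assumption)
t_true = np.array([0.12, 0.45, 0.80])

def A(t):
    return np.exp(-2j * np.pi * np.outer(w, t))

y = A(t_true) @ np.ones(k_true)

# (i) greedy init: the k_over grid points most correlated with the data
grid = np.linspace(0, 1, 256, endpoint=False)
t = grid[np.argsort(np.abs(A(grid).conj().T @ y))[-k_over:]]
a = np.full(k_over, 0.3)

for it in range(3000):
    E = A(t)
    r = E @ a - y
    a -= 1e-2 * np.real(E.conj().T @ r)       # (ii) amplitude gradient step
    D = -2j * np.pi * w[:, None] * E * a[None, :]
    t -= 1e-5 * np.real(D.conj().T @ r)       #      position gradient step
    if (it + 1) % 1000 == 0:                  # (iii) projection: prune spikes
        a[np.argsort(np.abs(a))[:-k_true]] = 0.0

print("estimated positions:", np.sort(np.round(t[np.abs(a) > 1e-3], 3)))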

Categories
Uncategorized Paper

New preprint

We uploaded the final work of Hui Shi's PhD thesis:

Batch-less stochastic gradient descent for compressive learning of deep regularization for image denoising, H. Shi, Y. Traonmilin and J.-F. Aujol, 2023.

Abstract: “We consider the problem of denoising with the help of prior information taken from a database of clean signals or images. Denoising with variational methods is very efficient if a regularizer well adapted to the nature of the data is available. Thanks to the maximum a posteriori Bayesian framework, such a regularizer can be systematically linked with the distribution of the data. With deep neural networks (DNN), complex distributions can be recovered from a large training database. To reduce the computational burden of this task, we adapt the compressive learning framework to the learning of regularizers parametrized by DNN. We propose two variants of stochastic gradient descent (SGD) for the recovery of deep regularization parameters from a heavily compressed database. These algorithms outperform the initially proposed method, which was limited to low-dimensional signals, with each iteration using information from the whole database. They also benefit from classical SGD convergence guarantees. Thanks to these improvements, we show that this method can be applied to patch-based image denoising.”
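A miniature, hypothetical version of the compressive learning step in numpy (the paper learns a DNN regularizer; here we only fit the mean of a Gaussian, so the model's sketch is analytic): the whole dataset is reduced to a single sketch, the empirical mean of random Fourier features, and the parameters are then fit by sketch matching with no further access to the data.

import numpy as np

rng = np.random.default_rng(4)
d, n, m_sk = 2, 10000, 60
X = rng.standard_normal((n, d)) + np.array([2.0, -1.0])   # data ~ N(mu*, I)

W = 0.3 * rng.standard_normal((m_sk, d))       # sketching frequencies
z_data = np.exp(1j * X @ W.T).mean(axis=0)     # one pass: sketch of the dataset

c = np.exp(-0.5 * np.sum(W**2, axis=1))        # analytic sketch of N(mu, I)
mu = np.zeros(d)
for _ in range(500):
    z_model = c * np.exp(1j * W @ mu)
    r = z_model - z_data
    grad = np.real((1j * W * z_model[:, None]).conj().T @ r)
    mu -= 0.1 * grad                           # descent on the sketch mismatch

print("estimated mean:", mu)                   # should approach [2, -1]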

Categories
Paper

New preprint

Adaptive Parameter Selection For Gradient-sparse + Low Patch-rank Recovery: Application To Image Decomposition. A. Guennec, J.-F. Aujol, Y. Traonmilin. 2023.

Abstract: “In this work, we are interested in gradient-sparse + low patch-rank signal recovery for image structure-texture decomposition. We locally model the structure as gradient-sparse and the texture as having low patch-rank. Moreover, we propose a rule based upon theoretical results of sparse + low-rank matrix recovery in order to automatically tune our model depending on the local content, and we numerically validate this proposition.”
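As an illustration of the low patch-rank part of the model, here is a short numpy fragment (a stand-in under our own assumptions, not the paper's algorithm): the matrix whose columns are image patches is shrunk by singular value soft-thresholding, with the threshold lam playing the role of the parameter that the paper selects adaptively from the local content.

import numpy as np

def patch_matrix(img, p):
    # stack non-overlapping p x p patches as columns
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(0, H - p + 1, p)
                     for j in range(0, W - p + 1, p)], axis=1)

def low_patch_rank_prox(P, lam):
    # singular value soft-thresholding of the patch matrix
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

img = np.random.default_rng(5).standard_normal((32, 32))
P = patch_matrix(img, 8)
Q = low_patch_rank_prox(P, lam=5.0)      # lam: the adaptively chosen weight
print("rank before:", np.linalg.matrix_rank(P),
      "after:", np.linalg.matrix_rank(Q))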

Categories
Paper Talk

SampTA Paper

Our paper “Disentangled latent representations of images with atomic autoencoders” will be presented at the SampTA conference by A. Newson.

Abstract: “We present the atomic autoencoder architecture, which decomposes an image as the sum of elementary parts that are parametrized by simple separate blocks of latent codes. We show that this simple architecture is induced by the definition of a general atomic low-dimensional model of the considered data. We also highlight the fact that the atomic autoencoder achieves disentangled low-dimensional representations under minimal hypotheses. Experiments show that their implementation with deep neural networks is successful at learning disentangled representations on two different examples: images constructed with simple parametric curves and images of filtered off-the-grid spikes.”
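A minimal numpy sketch of the architecture (random weights and no training loop; the authors' implementation uses deep networks): the latent code is split into blocks, each block is decoded by its own small decoder, and the output is the sum of the resulting atoms.

import numpy as np

rng = np.random.default_rng(6)
n_atoms, block_dim, hidden, out_dim = 4, 3, 32, 64   # e.g. out_dim = an 8x8 image

# one small two-layer decoder per latent block (random weights for illustration)
decoders = [(0.1 * rng.standard_normal((hidden, block_dim)),
             0.1 * rng.standard_normal((out_dim, hidden)))
            for _ in range(n_atoms)]

def decode(z):
    # z: latent code of size n_atoms * block_dim, decoded block by block
    atoms = [W2 @ np.tanh(W1 @ z[k * block_dim:(k + 1) * block_dim])
             for k, (W1, W2) in enumerate(decoders)]
    return np.sum(atoms, axis=0), atoms              # image = sum of the atoms

img, atoms = decode(rng.standard_normal(n_atoms * block_dim))
print("output size:", img.shape, "| atoms:", len(atoms))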

Categories
Paper

New preprint

We uploaded the following preprint on the geometry of non-convex sparse spike estimation:

On strong basins of attraction for non-convex sparse spike estimation: upper and lower bounds, Y. Traonmilin, J.-F. Aujol, A. Leclaire and P.-J. Bénard. (EFFIREG)

Abstract: “In this article, we study the size of strong basins of attraction for the non-convex sparse spike estimation problem. We first extend previous results to obtain a lower bound on the size of sets where gradient descent converges with a linear rate to the minimum of the non-convex objective functional. We then give an upper bound that shows that the dependency of the lower bound with respect to the number of measurements reflects well the true size of basins of attraction for random Gaussian Fourier measurements. These theoretical results are confirmed by experiments.”
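A hedged numerical toy of the phenomenon studied in the paper (not its experiments): for a single-spike non-convex least squares in the position parameter, gradient descent started at increasing distances from the true position either converges or stalls, giving a crude empirical basin-of-attraction radius.

import numpy as np

m = 20
w = np.arange(m)
t_star = 0.5
y = np.exp(-2j * np.pi * w * t_star)           # noiseless single-spike data

def grad(t):
    # gradient of 0.5 * ||exp(-2i pi w t) - y||^2 with respect to t
    e = np.exp(-2j * np.pi * w * t)
    return np.real((-2j * np.pi * w * e).conj() @ (e - y))

for delta in [0.005, 0.01, 0.02, 0.04, 0.08]:
    t = t_star + delta                         # start at distance delta
    for _ in range(200):
        t -= 1e-5 * grad(t)
    print(f"offset {delta:.3f} -> final error {abs(t - t_star):.1e}")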

Categories
Uncategorized

Compressive learning of deep regularization for denoising

Hui Shi will be presenting “Compressive learning of deep regularization for denoising” at SSVM 2023.

Abstract: “Solving ill-posed inverse problems can be done accurately if a regularizer well adapted to the nature of the data is available. Such a regularizer can be systematically linked with the distribution of the data itself through the maximum a posteriori Bayesian framework. Recently, regularizers designed with the help of deep neural networks have achieved impressive success. Such regularizers are typically learned from voluminous training data. To reduce the computational burden of this task, we propose to adapt the compressive learning framework to the learning of regularizers parametrized by deep neural networks (DNN). Our work shows the feasibility of batch-less learning of regularizers from a compressed dataset. In order to achieve this, we propose an approximation of the compression operator that can be calculated explicitly for the task of learning a regularizer by DNN. We show that the proposed regularizer is capable of modeling complex regularity priors and can be used to solve the denoising inverse problem.”
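To illustrate the variational denoising step the abstract refers to, here is a hedged 1-D numpy sketch where a simple smooth edge-preserving function stands in for the learned DNN regularizer (the actual object of the paper): gradient descent on the MAP objective 0.5*||x - y||^2 + g(x).

import numpy as np

def g_grad(x, lam=0.5):
    # gradient of g(x) = lam * sum(log(1 + (x[i+1] - x[i])^2)), a smooth
    # edge-preserving stand-in for the learned regularizer
    d = np.diff(x)
    t = 2 * d / (1 + d**2)
    grad = np.zeros_like(x)
    grad[:-1] -= t
    grad[1:] += t
    return lam * grad

rng = np.random.default_rng(7)
clean = np.repeat([0.0, 1.0, 0.2], 50)          # piecewise-constant 1-D signal
y = clean + 0.1 * rng.standard_normal(clean.size)

x = y.copy()
for _ in range(300):
    x -= 0.2 * ((x - y) + g_grad(x))            # gradient step on the MAP objective

print("noisy error:   ", np.linalg.norm(y - clean))
print("denoised error:", np.linalg.norm(x - clean))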