On March 23rd, I am organizing a workshop on Imaging inverse problems – regularization, low dimensional models and applications. Check https://gdr-mia.math.cnrs.fr/events/journee_problemes_inverses2023/ if you want to participate.
New preprint
A preprint of our student P.J. Bénard's work on off-the-grid super-resolution is available:
“Fast off-the-grid sparse recovery with over-parametrized projected gradient descent”, P.J. Bénard, Y. Traonmilin and J.F. Aujol
Abstract: “We consider the problem of recovering off-the-grid spikes from Fourier measurements. Successful methods such as sliding Frank-Wolfe and continuous orthogonal matching pursuit (OMP) iteratively add spikes to the solution then perform a costly (when the number of spikes is large) descent on all parameters at each iteration. In 2D, it was shown that performing a projected gradient descent (PGD) from a gridded over-parametrized initialization was faster than continuous orthogonal matching pursuit. In this paper, we propose an off-the-grid over-parametrized initialization of the PGD based on OMP that permits to fully avoid grids and gives faster results in 3D.”
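For readers who want a toy picture of the approach, here is a minimal 1D sketch (not the paper's code, which works in 2D/3D and uses an OMP-based off-the-grid initialization): Fourier measurements of a few spikes, an over-parametrized gridded initialization, and a projected gradient descent on all amplitudes and positions. All numerical values, variable names and the simple clipping projection are illustrative choices of mine.

```python
import numpy as np

# Toy 1D illustration (the preprint works in 2D/3D with an OMP-based initialization):
# recover spike positions t_k and amplitudes a_k from Fourier samples
# y_m = sum_k a_k exp(-2i*pi*f_m*t_k), by projected gradient descent on (a, t)
# started from an over-parametrized set of candidate spikes.

rng = np.random.default_rng(0)
freqs = np.arange(-20, 21)                    # measured Fourier frequencies
t_true = np.array([0.21, 0.48, 0.77])         # ground-truth positions in [0, 1]
a_true = np.array([1.0, -0.7, 1.3])           # ground-truth amplitudes

def forward(t, a):
    """Fourier measurements of the spike train (positions t, amplitudes a)."""
    return np.exp(-2j * np.pi * np.outer(freqs, t)) @ a

y = forward(t_true, a_true)

# Over-parametrized initialization: many more candidate spikes than true spikes.
K = 15
t = np.linspace(0.0, 1.0, K)
a = np.zeros(K)

step_a, step_t = 5e-3, 2e-6
for _ in range(5000):
    E = np.exp(-2j * np.pi * np.outer(freqs, t))     # current measurement matrix
    r = E @ a - y                                    # residual
    a -= step_a * np.real(E.conj().T @ r)            # gradient step on amplitudes
    dE = -2j * np.pi * freqs[:, None] * E            # derivative of E w.r.t. positions
    t -= step_t * a * np.real(dE.conj().T @ r)       # gradient step on positions
    t = np.clip(t, 0.0, 1.0)                         # projection onto the admissible domain

spikes = sorted(zip(np.round(t, 3), np.round(a, 2)), key=lambda s: -abs(s[1]))
print("relative residual:", np.linalg.norm(forward(t, a) - y) / np.linalg.norm(y))
print("largest recovered spikes:", spikes[:4])
```

With these illustrative settings, the candidate spikes carrying most of the amplitude typically end up near the true positions, while the residual decreases over the iterations.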
New preprint
Our student Axel Baldanza uploaded a new preprint: “Piecewise linear prediction model for action tracking in sports”.
Abstract: “Recent tracking methods in professional team sports reach very high accuracy by tracking the ball and players. However, it remains difficult for these methods to perform accurate real-time tracking in amateur acquisition conditions where the vertical position or orientation of the camera is not controlled and cameras use heterogeneous sensors. This article presents a method for tracking interesting content in an amateur sport game by analyzing player displacements. Defining optical flow of the foreground in the image as the player motions, we propose a piecewise linear supervised learning model for predicting the camera global motion needed to follow the action.”
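To make the last sentence concrete, here is a purely illustrative sketch of a piecewise linear predictor (synthetic data; the feature, the bins and all parameter choices are mine, not the paper's): one linear regression per bin of a scalar foreground-flow feature, used to predict a global camera motion.

```python
import numpy as np

# Purely illustrative sketch (synthetic data; not the paper's model or code):
# predict a global camera pan from the mean foreground optical flow, with one
# linear regression per bin of the flow value, i.e. a piecewise linear model.

rng = np.random.default_rng(1)

# Synthetic training set: x = mean foreground flow (px/frame), y = camera pan to apply.
x = rng.uniform(-10, 10, 500)
y = np.where(np.abs(x) < 3, 0.2 * x, 1.5 * x - np.sign(x) * 3.9) + 0.1 * rng.normal(size=500)

# Fit one (slope, intercept) pair per bin of the feature axis.
edges = [-10.0, -3.0, 3.0, 10.0]
pieces = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (x >= lo) & (x <= hi)
    slope, intercept = np.polyfit(x[mask], y[mask], deg=1)
    pieces.append((lo, hi, slope, intercept))

def predict(flow):
    """Predicted camera motion for a new mean-flow value, using the matching piece."""
    for lo, hi, slope, intercept in pieces:
        if lo <= flow <= hi:
            return slope * flow + intercept
    lo, hi, slope, intercept = pieces[-1] if flow > edges[-1] else pieces[0]
    return slope * flow + intercept      # extrapolate with the nearest piece

print(predict(1.0), predict(8.0))        # small pan near the center, larger pan otherwise
```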
New preprint
We uploaded our new preprint:
"A theory of optimal convex regularization for low-dimensional recovery", Y. Traonmilin, R. Gribonval and S. Vaiter
Abstract: We consider the problem of recovering elements of a low-dimensional model from under-determined linear measurements. To perform recovery, we consider the minimization of a convex regularizer subject to a data-fit constraint. Given a model, we ask: what is the “best” convex regularizer to perform its recovery? To answer this question, we define an optimal regularizer as a function that maximizes a compliance measure with respect to the model. We introduce and study several notions of compliance. We give analytical expressions for compliance measures based on the best-known recovery guarantees with the restricted isometry property. These expressions allow us to show the optimality of the ℓ1-norm for sparse recovery and of the nuclear norm for low-rank matrix recovery for these compliance measures. We also investigate the construction of an optimal convex regularizer using the example of sparsity in levels.
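In symbols (A, y and R are my shorthand for the measurement operator, the measurements and the regularizer; the paper's notation may differ), the recovery program and the two regularizers whose optimality is established read:

\[
x^\star \in \arg\min_{x} R(x) \quad \text{subject to} \quad Ax = y,
\]
\[
R(x) = \|x\|_1 \ \text{(sparse vectors)}, \qquad R(X) = \|X\|_* = \sum_i \sigma_i(X) \ \text{(low-rank matrices)}.
\]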
I will be giving a talk on the basins of attraction of non-convex methods at the 2021 FNRS Contact Group on “Wavelets and Applications” Workshop on the 14th of December.
Minisymposium at SIAM IS 22
We are organizing a two-part minisymposium at SIAM IS 22 with Luca Calatroni and Paul Escande:
Non-Convex Optimization Methods for Inverse Problems in Imaging: From Theory to Applications
Come join us on Thursday, March 24, 2022!
Abstract:
Over the past decade, there has been growing interest in the imaging community in non-convex sparse optimization methods. These approaches are now ubiquitous in a plethora of real-world applications. Many recent theoretical contributions have proven the success of these methods. With respect to sparse regularization models, non-convexity arises naturally when dealing with efficient approximations of the ℓ0 pseudo-norm and/or with joint optimization problems where non-convexity is the by-product of a cross-regularization term and/or non-convex data models. The objective of this minisymposium is to gather experts in the field of non-convex regularization methods for inverse imaging problems to provide an overview of the field, ranging from recent theoretical results to the design of numerical optimization methods that can be used effectively in a variety of applications.
With respect to theoretical developments, this minisymposium will focus on convergence guarantees and derivation of convergence rates of non-convex methods and their relation to the specific structure of imaging problems and low-dimensional models. A selection of contributions dealing with the design of efficient algorithms for new non-convex formulations of imaging problems will then be presented. Finally, some presentations on the actual use of these methods in real applications such as microscopic imaging, medical imaging and sparse signal recovery will be given.
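As one concrete instance of the non-convexity mentioned in the abstract (my illustrative example, not part of the session description): the ℓ0 pseudo-norm, which counts non-zero entries, and one classical non-convex surrogate of it, the ℓp quasi-norm with 0 < p < 1:

\[
\|x\|_0 = \#\{ i : x_i \neq 0 \}, \qquad \|x\|_p^p = \sum_i |x_i|^p, \quad 0 < p < 1 .
\]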
New preprint
The final version of our work on sketched image denoising is available as a preprint: “Compressive learning for patch-based image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.
Habilitation à diriger des recherches
My “Habilitation à diriger des recherches” thesis defense will take place on October 6th at 10am at the Institut de Mathématiques de Bordeaux. It is possible to follow it online at the following link:
https://streaming.math.u-bordeaux.fr/soutenance-yann-traonmilin/
A. Baldanza at ORASIS 2021
Our student A. Baldanza (Rematch, IMB) will be presenting his work “Découpage automatique de vidéos de sport amateur par détection de personnes et analyse de contenu colorimétrique” (automatic cutting of amateur sport videos by person detection and colorimetric content analysis), A. Baldanza, J-F Aujol, Y. Traonmilin and F. Alary, at the ORASIS 2021 conference (13th-17th Sept 2021).
New preprint
We have uploaded a new preprint “Sketched learning for image denoising”, Hui Shi, Yann Traonmilin and Jean-François Aujol.
Abstract: The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performance for image denoising. EPLL estimates a Gaussian mixture model (GMM) from a training database of image patches and uses the GMM as a prior for denoising. In this work, we adapt the sketching framework to carry out the compressive estimation of Gaussian mixture models with low-rank covariances for image patches. With this method, we estimate models from a compressive representation of the training data, with a learning cost that does not depend on the number of items in the database. Our method adds another dimension reduction technique (low-rank modeling of covariances) to the existing sketching methods in order to reduce the dimension of model parameters and to add flexibility to the modeling. We test our model on synthetic data and on real large-scale data for patch-based image denoising. We show that we can obtain denoising performance close to that of models estimated from the original training database, opening the way for the study of denoising strategies using huge patch databases.
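As an illustration of the sketching step only (this is not the paper's code; the GMM estimation from the sketch and the low-rank covariance modeling are not shown, and all sizes and names below are my own choices), here is how a patch database can be compressed into a fixed-size sketch of random Fourier features:

```python
import numpy as np

# Illustration of the sketching step in compressive learning: the whole patch
# database is summarized by an average of random Fourier features, so the size
# of the summary (and the downstream learning cost) does not grow with the
# number of patches. (Illustrative only: fitting the GMM with low-rank
# covariances from this sketch, as in the preprint, is not shown.)

rng = np.random.default_rng(0)

patch_dim = 8 * 8            # flattened 8x8 image patches
n_patches = 100_000          # size of the training database
sketch_size = 512            # number of random frequencies, fixed in advance

patches = rng.normal(size=(n_patches, patch_dim))           # stand-in for real patches
W = rng.normal(scale=0.1, size=(sketch_size, patch_dim))    # random frequency vectors

# Empirical sketch: z_l = (1/n) * sum_j exp(i <w_l, x_j>), accumulated in batches
# so the patches can be streamed and never need to be held in memory all at once.
z = np.zeros(sketch_size, dtype=complex)
for start in range(0, n_patches, 10_000):
    batch = patches[start:start + 10_000]
    z += np.exp(1j * batch @ W.T).sum(axis=0)
z /= n_patches

print(z.shape)   # (512,): the same size however many patches were processed
```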