Greedy sparse recovery under structure: weighted generalization and deep unrolling.

Sina Mohammad-Taheri
Concordia

Sparse recovery methods seek the most parsimonious reconstruction from limited linear measurements, possibly corrupted by noise. In stark contrast, deep neural networks rely on ever-growing parameter counts to discover emergent patterns, which gives them remarkable approximation capabilities but renders them extremely hard to interpret and analyze. This raises serious safety concerns, especially in critical domains such as healthcare, since these networks lack “recovery guarantees”. In this talk, focusing on the class of greedy sparse recovery solvers as efficient alternatives to convex methods, I will consider two sparse reconstruction setups, each tied to the latent structure within the data. First, when a priori knowledge about the signal is available, we encode it as “weights” and design weighted algorithms that outperform their unweighted counterparts. Second, when such knowledge is not available, hidden structures within the data are learned by explainable deep learning models obtained by “unrolling” greedy sparse recovery algorithms. To this end, we resolve the non-differentiability of the argsort operator in these algorithms by approximating their permutation matrices in a differentiable manner. These settings span a range of applications, most notably medical signal processing and function approximation. Both settings come with recovery guarantees, which, for the deep networks, marks a significant step towards addressing their stability.
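To make the argsort relaxation concrete, here is a minimal sketch (not the talk's specific construction) of one standard way to approximate a sorting permutation matrix differentiably: each row of the soft permutation is a softmax over negative distances to the sorted values, with a temperature `tau`, and a soft top-k support mask — the kind of selection a greedy solver performs — is obtained by summing the first k rows. All function names and the choice of relaxation here are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable row-wise softmax."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def soft_permutation(s, tau=0.1):
    """Differentiable relaxation of the permutation matrix that sorts s
    in descending order. Row i is a softmax over -|sorted(s)[i] - s_j|;
    as tau -> 0 each row approaches a one-hot row of the exact
    permutation matrix, while for tau > 0 the map is smooth in s.
    (Illustrative construction, not necessarily the talk's.)"""
    s_sorted = np.sort(s)[::-1]                         # descending order
    pairwise = -np.abs(s_sorted[:, None] - s[None, :])  # (n, n) similarities
    return softmax(pairwise / tau)

def soft_top_k_mask(x, k, tau=0.1):
    """Soft indicator of the k largest-magnitude entries of x,
    mimicking the hard support selection of a greedy sparse solver."""
    P = soft_permutation(np.abs(x), tau)
    return P[:k].sum(axis=0)  # soft 0/1 mask over original indices

x = np.array([0.1, -2.0, 0.5, 3.0])
mask = soft_top_k_mask(x, k=2, tau=0.01)
# with small tau, mask is close to [0, 1, 0, 1]: indices 1 and 3 are kept
```

Because every operation is smooth for `tau > 0`, gradients flow through the support-selection step, which is what lets a greedy iteration be unrolled into a trainable network layer.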