Iteratively reweighted least squares minimization for sparse recovery

Ingrid Daubechies, Ronald DeVore, Massimo Fornasier, C. Sinan Güntürk

Research output: Contribution to journal › Article

Abstract

Under certain conditions (known as the restricted isometry property, or RIP) on the m × N matrix Φ (where m < N), vectors x ∈ ℝ^N that are sparse (i.e., have most of their entries equal to 0) can be recovered exactly from y := Φx even though Φ⁻¹(y) is typically an (N − m)-dimensional hyperplane; in addition, x is then equal to the element in Φ⁻¹(y) of minimal ℓ₁-norm. This minimal element can be identified via linear programming algorithms. We study an alternative method of determining x, as the limit of an iteratively reweighted least squares (IRLS) algorithm. The main step of this IRLS finds, for a given weight vector w, the element in Φ⁻¹(y) with smallest ℓ₂(w)-norm. If x^(n) is the solution at iteration step n, then the new weight w^(n) is defined by w_i^(n) := [(x_i^(n))² + ε_n²]^(−1/2), i = 1, …, N, for a decreasing sequence of adaptively defined ε_n; this updated weight is then used to obtain x^(n+1), and the process is repeated. We prove that when Φ satisfies the RIP conditions, the sequence x^(n) converges for all y, regardless of whether Φ⁻¹(y) contains a sparse vector. If there is a sparse vector in Φ⁻¹(y), then the limit is this sparse vector, and when x^(n) is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast (linear convergence in the terminology of numerical optimization). The same algorithm with the "heavier" weight w_i^(n) := [(x_i^(n))² + ε_n²]^(τ/2 − 1), with 0 < τ < 1, can recover sparse solutions as well; more importantly, we show its local convergence is superlinear and approaches a quadratic rate for τ approaching 0.
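
The iteration described in the abstract admits a compact implementation: each step is a weighted least-squares problem whose minimizer has the closed form x = D Φᵀ (Φ D Φᵀ)⁻¹ y, where D = diag(1/w_i). The NumPy sketch below is illustrative, not the authors' reference code: the function name, the initialization from the minimum ℓ₂-norm solution, the iteration budget, and the stopping tolerance are choices made here, and K is an assumed upper bound on the sparsity of the target, used in a simplified version of the paper's adaptive ε-update.

import numpy as np

def irls_sparse_recovery(Phi, y, K, n_iters=100, tau=1.0, tol=1e-10):
    """Illustrative IRLS iteration for sparse recovery from y = Phi x.

    tau = 1 gives the l1-type weight; 0 < tau < 1 gives the "heavier"
    weight with superlinear local convergence. K is an assumed bound
    on the sparsity of the target vector (not part of the original data).
    """
    m, N = Phi.shape
    # Initialize with the minimum l2-norm solution of Phi x = y.
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]
    eps = 1.0
    for _ in range(n_iters):
        # Weight update: w_i = ((x_i)^2 + eps^2)^(tau/2 - 1);
        # tau = 1 recovers the weight ((x_i)^2 + eps^2)^(-1/2).
        w = (x**2 + eps**2) ** (tau / 2.0 - 1.0)
        # Weighted least squares: minimize sum_i w_i x_i^2 subject to
        # Phi x = y, via x = D Phi^T (Phi D Phi^T)^{-1} y, D = diag(1/w).
        D = np.diag(1.0 / w)
        x = D @ Phi.T @ np.linalg.solve(Phi @ D @ Phi.T, y)
        # Shrink eps adaptively using the (K+1)-st largest magnitude
        # of x; eps stops decreasing once x is (nearly) K-sparse.
        r = np.sort(np.abs(x))[::-1]
        eps = min(eps, r[K] / N)
        if eps < tol:
            break
    return x

# Quick demo on a random Gaussian matrix (which satisfies the RIP with
# high probability when K is small relative to m).
rng = np.random.default_rng(0)
m, N, K = 40, 100, 5
Phi = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
x_hat = irls_sparse_recovery(Phi, Phi @ x_true, K)
print(np.linalg.norm(x_hat - x_true))  # small if recovery succeeded

In this regime the recovery error is typically near machine precision; with tau below 1, the local phase of the iteration contracts noticeably faster, consistent with the superlinear rate stated above.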

Original language: English (US)
Pages (from-to): 1-38
Number of pages: 38
Journal: Communications on Pure and Applied Mathematics
Volume: 63
Issue number: 1
DOIs: 10.1002/cpa.20303
State: Published - Jan 2010

ASJC Scopus subject areas

  • Mathematics (all)
  • Applied Mathematics

Cite this

Iteratively reweighted least squares minimization for sparse recovery. / Daubechies, Ingrid; DeVore, Ronald; Fornasier, Massimo; Güntürk, C. Sinan.

In: Communications on Pure and Applied Mathematics, Vol. 63, No. 1, 01.2010, p. 1-38.

Research output: Contribution to journal › Article

Daubechies, Ingrid ; DeVore, Ronald ; Fornasier, Massimo ; Güntürk, C. Sinan. / Iteratively reweighted least squares minimization for sparse recovery. In: Communications on Pure and Applied Mathematics. 2010 ; Vol. 63, No. 1. pp. 1-38.
@article{28bb166d26954dd99fd3fc0251848ed9,
title = "Iteratively reweighted least squares minimization for sparse recovery",
author = "Daubechies, Ingrid and DeVore, Ronald and Fornasier, Massimo and G{\"u}nt{\"u}rk, {C. Sinan}",
year = "2010",
month = jan,
doi = "10.1002/cpa.20303",
language = "English (US)",
volume = "63",
pages = "1--38",
journal = "Communications on Pure and Applied Mathematics",
issn = "0010-3640",
publisher = "Wiley-Liss Inc.",
number = "1",

}

TY - JOUR

T1 - Iteratively reweighted least squares minimization for sparse recovery

AU - Daubechies, Ingrid

AU - DeVore, Ronald

AU - Fornasier, Massimo

AU - Güntürk, C. Sinan

PY - 2010/1

Y1 - 2010/1

UR - http://www.scopus.com/inward/record.url?scp=77949704355&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77949704355&partnerID=8YFLogxK

U2 - 10.1002/cpa.20303

DO - 10.1002/cpa.20303

M3 - Article

VL - 63

SP - 1

EP - 38

JO - Communications on Pure and Applied Mathematics

JF - Communications on Pure and Applied Mathematics

SN - 0010-3640

IS - 1

ER -