### Abstract

We consider the problem of estimating a random vector x from noisy linear measurements y = Ax + w in the setting where parameters θ on the distribution of x and w must be learned in addition to the vector x. This problem arises in a wide range of statistical learning and linear inverse problems. Our main contribution shows that a computationally simple iterative message passing algorithm can provably obtain asymptotically consistent estimates in a certain high-dimensional large system limit (LSL) under very general parametrizations. Importantly, this LSL applies to all right-rotationally random A, a much larger class of matrices than the i.i.d. sub-Gaussian matrices to which many past message passing approaches are restricted. In addition, a simple testable condition is provided under which the mean square error (MSE) on the vector x matches the Bayes-optimal MSE predicted by the replica method. The proposed algorithm combines Expectation-Maximization (EM) with the recently developed Vector Approximate Message Passing (VAMP) technique. We develop an analysis framework showing that the parameter estimates at each iteration of the algorithm converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations. The SE equations, which extend those of VAMP without parameter adaptation, depend only on the initial parameter estimates and the statistical properties of the problem, and can be used to predict consistency and to precisely characterize other performance measures of the method.
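The setting in the abstract can be made concrete with a toy instance: recover x from y = Ax + w while simultaneously learning a parameter θ of the noise distribution. The sketch below is a minimal EM loop for the special case of a known Gaussian prior on x, where both EM steps are closed-form; it is an illustrative simplification under assumed problem sizes, not the paper's EM-VAMP algorithm, whose estimation step is VAMP rather than an exact posterior.

```python
import numpy as np

# Toy instance of the problem class: y = A x + w, with unknown noise
# variance theta = var(w) learned jointly with the estimate of x.
# Assumes x ~ N(0, tau_x I) with tau_x known, so the E-step (Gaussian
# posterior of x) and M-step (update of theta) are exact and closed-form.
rng = np.random.default_rng(0)
m, n, tau_x, theta_true = 200, 100, 1.0, 0.5

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n) * np.sqrt(tau_x)
y = A @ x + rng.standard_normal(m) * np.sqrt(theta_true)

theta = 5.0  # deliberately poor initial parameter estimate
for _ in range(50):
    # E-step: posterior N(mu, Sigma) of x given y and the current theta
    Sigma = np.linalg.inv(A.T @ A / theta + np.eye(n) / tau_x)
    mu = Sigma @ (A.T @ y) / theta
    # M-step: theta = expected residual power per measurement,
    # E||y - A x||^2 / m = (||y - A mu||^2 + tr(A Sigma A^T)) / m
    r = y - A @ mu
    theta = (r @ r + np.trace(A @ Sigma @ A.T)) / m

print(f"true theta = {theta_true:.3f}, EM estimate = {theta:.3f}")
```

In the paper's regime the E-step posterior is intractable for general priors; VAMP replaces it with a message passing approximation, and the SE analysis tracks both the estimate of x and the learned θ through the iterations.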

| Original language | English (US) |
|---|---|
| Pages (from-to) | 2546-2555 |
| Number of pages | 10 |
| Journal | Advances in Neural Information Processing Systems |
| Volume | 2017-December |
| State | Published - Jan 1 2017 |
| Event | 31st Annual Conference on Neural Information Processing Systems, NIPS 2017, Long Beach, United States. Duration: Dec 4 2017 → Dec 9 2017 |

### ASJC Scopus subject areas

- Computer Networks and Communications
- Information Systems
- Signal Processing

### Cite this

Fletcher, A. K., Sahraee-Ardakan, M., Rangan, S., & Schniter, P. (2017). Rigorous dynamics and consistent estimation in arbitrarily conditioned linear systems. *Advances in Neural Information Processing Systems*, *2017-December*, 2546-2555.

Research output: Contribution to journal › Conference article

TY - JOUR

T1 - Rigorous dynamics and consistent estimation in arbitrarily conditioned linear systems

AU - Fletcher, Alyson K.

AU - Sahraee-Ardakan, Mojtaba

AU - Rangan, Sundeep

AU - Schniter, Philip

PY - 2017/1/1

Y1 - 2017/1/1

AB - We consider the problem of estimating a random vector x from noisy linear measurements y = Ax + w in the setting where parameters θ on the distribution of x and w must be learned in addition to the vector x. This problem arises in a wide range of statistical learning and linear inverse problems. Our main contribution shows that a computationally simple iterative message passing algorithm can provably obtain asymptotically consistent estimates in a certain high-dimensional large system limit (LSL) under very general parametrizations. Importantly, this LSL applies to all right-rotationally random A, a much larger class of matrices than the i.i.d. sub-Gaussian matrices to which many past message passing approaches are restricted. In addition, a simple testable condition is provided under which the mean square error (MSE) on the vector x matches the Bayes-optimal MSE predicted by the replica method. The proposed algorithm combines Expectation-Maximization (EM) with the recently developed Vector Approximate Message Passing (VAMP) technique. We develop an analysis framework showing that the parameter estimates at each iteration of the algorithm converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations. The SE equations, which extend those of VAMP without parameter adaptation, depend only on the initial parameter estimates and the statistical properties of the problem, and can be used to predict consistency and to precisely characterize other performance measures of the method.

UR - http://www.scopus.com/inward/record.url?scp=85047020210&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85047020210&partnerID=8YFLogxK

M3 - Conference article

AN - SCOPUS:85047020210

VL - 2017-December

SP - 2546

EP - 2555

JO - Advances in Neural Information Processing Systems

JF - Advances in Neural Information Processing Systems

SN - 1049-5258

ER -