AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

Mark Borgerding, Philip Schniter, Sundeep Rangan

Research output: Contribution to journal › Article

Abstract

Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a "learned AMP" network that significantly improves upon Gregor and LeCun's "learned ISTA." Second, inspired by the recently proposed "vector AMP" (VAMP) algorithm, we propose a "learned VAMP" network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.
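The Onsager correction mentioned in the abstract can be illustrated with a generic AMP iteration for sparse recovery. The sketch below is not code from the paper; it uses a standard soft-thresholding denoiser, and the threshold rule (parameter `alpha`) is an illustrative heuristic. The `b * z` term in the residual update is the Onsager correction that decouples errors across iterations.

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding denoiser eta(v; lam) = sign(v) * max(|v| - lam, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amp(y, A, alpha=1.1, n_iters=50):
    """Minimal AMP sketch for y = A x + w with a soft-thresholding denoiser.

    alpha scales the threshold relative to the estimated effective noise
    level (an illustrative choice, not taken from the paper).
    """
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iters):
        # Threshold proportional to the per-measurement residual energy.
        lam = alpha * np.linalg.norm(z) / np.sqrt(M)
        r = x + A.T @ z                     # denoiser input
        x_new = soft_threshold(r, lam)
        # Onsager correction: b = (average derivative of the denoiser)
        # = (number of nonzeros in x_new) / M for soft thresholding.
        b = np.count_nonzero(x_new) / M
        z = y - A @ x_new + b * z           # corrected residual
        x = x_new
    return x
```

Dropping the `b * z` term recovers plain iterative soft thresholding (ISTA with a matched step), which typically needs far more iterations; the correction is what keeps the denoiser input statistically like "signal plus white Gaussian noise" when A is large and i.i.d. Gaussian.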

Original language: English (US)
Article number: 7934066
Pages (from-to): 4293-4308
Number of pages: 16
Journal: IEEE Transactions on Signal Processing
Volume: 65
Issue number: 16
DOIs: 10.1109/TSP.2017.2708040
State: Published - Aug 15 2017

Keywords

  • approximate message passing
  • compressive sensing
  • Deep learning
  • massive MIMO
  • random access

ASJC Scopus subject areas

  • Signal Processing
  • Electrical and Electronic Engineering

Cite this

AMP-Inspired Deep Networks for Sparse Linear Inverse Problems. / Borgerding, Mark; Schniter, Philip; Rangan, Sundeep.

In: IEEE Transactions on Signal Processing, Vol. 65, No. 16, 7934066, 15.08.2017, p. 4293-4308.


Borgerding, Mark; Schniter, Philip; Rangan, Sundeep. / AMP-Inspired Deep Networks for Sparse Linear Inverse Problems. In: IEEE Transactions on Signal Processing. 2017; Vol. 65, No. 16. pp. 4293-4308.
@article{cbadc712c01645dd9f17de272c3c183d,
title = "AMP-Inspired Deep Networks for Sparse Linear Inverse Problems",
abstract = "Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a {"}learned AMP{"} network that significantly improves upon Gregor and LeCun's {"}learned ISTA.{"} Second, inspired by the recently proposed {"}vector AMP{"} (VAMP) algorithm, we propose a {"}learned VAMP{"} network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.",
keywords = "approximate message passing, compressive sensing, Deep learning, massive MIMO, random access",
author = "Mark Borgerding and Philip Schniter and Sundeep Rangan",
year = "2017",
month = "8",
day = "15",
doi = "10.1109/TSP.2017.2708040",
language = "English (US)",
volume = "65",
pages = "4293--4308",
journal = "IEEE Transactions on Signal Processing",
issn = "1053-587X",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "16",

}

TY - JOUR

T1 - AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

AU - Borgerding, Mark

AU - Schniter, Philip

AU - Rangan, Sundeep

PY - 2017/8/15

Y1 - 2017/8/15

N2 - Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a "learned AMP" network that significantly improves upon Gregor and LeCun's "learned ISTA." Second, inspired by the recently proposed "vector AMP" (VAMP) algorithm, we propose a "learned VAMP" network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.

AB - Deep learning has gained great popularity due to its widespread success on many inference problems. We consider the application of deep learning to the sparse linear inverse problem, where one seeks to recover a sparse signal from a few noisy linear measurements. In this paper, we propose two novel neural-network architectures that decouple prediction errors across layers in the same way that the approximate message passing (AMP) algorithms decouple them across iterations: through Onsager correction. First, we propose a "learned AMP" network that significantly improves upon Gregor and LeCun's "learned ISTA." Second, inspired by the recently proposed "vector AMP" (VAMP) algorithm, we propose a "learned VAMP" network that offers increased robustness to deviations in the measurement matrix from i.i.d. Gaussian. In both cases, we jointly learn the linear transforms and scalar nonlinearities of the network. Interestingly, with i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP. Finally, we apply our methods to two problems from 5G wireless communications: compressive random access and massive-MIMO channel estimation.

KW - approximate message passing

KW - compressive sensing

KW - Deep learning

KW - massive MIMO

KW - random access

UR - http://www.scopus.com/inward/record.url?scp=85028375200&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85028375200&partnerID=8YFLogxK

U2 - 10.1109/TSP.2017.2708040

DO - 10.1109/TSP.2017.2708040

M3 - Article

VL - 65

SP - 4293

EP - 4308

JO - IEEE Transactions on Signal Processing

JF - IEEE Transactions on Signal Processing

SN - 1053-587X

IS - 16

M1 - 7934066

ER -