Inference for generalized linear models via alternating directions and Bethe free energy minimization

Sundeep Rangan, Alyson K. Fletcher, Philip Schniter, Ulugbek S. Kamilov

Research output: Contribution to journal › Article

Abstract

Generalized linear models, where a random vector x is observed through a noisy, possibly nonlinear, function of a linear transform z = Ax, arise in a range of applications in nonlinear filtering and regression. Approximate message passing (AMP) methods, based on loopy belief propagation, are a promising class of approaches for approximate inference in these models. AMP methods are computationally simple, general, and admit precise analyses with testable conditions for optimality for large i.i.d. transforms A. However, the algorithms can diverge for general A. This paper presents a convergent approach to the generalized AMP (GAMP) algorithm based on direct minimization of a large-system limit approximation of the Bethe free energy (LSL-BFE). The proposed method uses a double-loop procedure, where the outer loop successively linearizes the LSL-BFE and the inner loop minimizes the linearized LSL-BFE using the alternating direction method of multipliers (ADMM). The proposed method, called ADMM-GAMP, is similar in structure to the original GAMP method, but with an additional least-squares minimization. It is shown that for strictly convex, smooth penalties, ADMM-GAMP is guaranteed to converge to a local minimum of the LSL-BFE, thus providing a convergent alternative to GAMP that is stable under arbitrary transforms. Simulations are also presented that demonstrate the robustness of the method for non-convex penalties as well.
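The abstract's inner loop relies on the standard ADMM iteration: a quadratic (least-squares) update, a proximal update on the penalty, and a dual update. The sketch below is not the paper's ADMM-GAMP algorithm; it is a minimal, self-contained illustration of that ADMM structure on a toy scalar lasso problem, with arbitrary example values for a, b, lam, and rho.

```python
# ADMM sketch for the scalar lasso problem
#   minimize 0.5*(a*x - b)^2 + lam*|x|
# Illustrative only -- not the paper's ADMM-GAMP, just the
# generic ADMM loop structure the abstract refers to.

def soft_threshold(v, t):
    """Proximal operator of t*|.| (the l1 penalty)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def admm_lasso_scalar(a, b, lam, rho=1.0, iters=500):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: least-squares step (quadratic minimization)
        x = (a * b + rho * (z - u)) / (a * a + rho)
        # z-update: proximal step on the l1 penalty
        z = soft_threshold(x + u, lam / rho)
        # u-update: dual ascent on the splitting constraint x = z
        u += x - z
    return z

# Closed form for comparison: x* = sign(a*b)*max(|a*b| - lam, 0)/a^2
print(admm_lasso_scalar(a=2.0, b=3.0, lam=1.0))  # ≈ 1.25
```

For this convex toy problem the iteration contracts geometrically to the closed-form soft-thresholding solution; the paper's contribution is establishing analogous convergence guarantees for the linearized LSL-BFE objective under arbitrary transforms A.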

Original language: English (US)
Article number: 7600459
Pages (from-to): 676-697
Number of pages: 22
Journal: IEEE Transactions on Information Theory
Volume: 63
Issue number: 1
DOI: 10.1109/TIT.2016.2619373
State: Published - Jan 1 2017


Keywords

  • ADMM
  • Belief propagation
  • generalized linear models
  • message passing
  • variational optimization

ASJC Scopus subject areas

  • Information Systems
  • Computer Science Applications
  • Library and Information Sciences

Cite this

Inference for generalized linear models via alternating directions and Bethe free energy minimization. / Rangan, Sundeep; Fletcher, Alyson K.; Schniter, Philip; Kamilov, Ulugbek S.

In: IEEE Transactions on Information Theory, Vol. 63, No. 1, 7600459, 01.01.2017, p. 676-697.


Rangan, Sundeep; Fletcher, Alyson K.; Schniter, Philip; Kamilov, Ulugbek S. / Inference for generalized linear models via alternating directions and Bethe free energy minimization. In: IEEE Transactions on Information Theory. 2017; Vol. 63, No. 1. pp. 676-697.
@article{ddeaecfe174143a0b236760cda923266,
title = "Inference for generalized linear models via alternating directions and {B}ethe free energy minimization",
keywords = "ADMM, Belief propagation, generalized linear models, message passing, variational optimization",
author = "Sundeep Rangan and Fletcher, {Alyson K.} and Philip Schniter and Kamilov, {Ulugbek S.}",
year = "2017",
month = "1",
day = "1",
doi = "10.1109/TIT.2016.2619373",
language = "English (US)",
volume = "63",
pages = "676--697",
journal = "IEEE Transactions on Information Theory",
issn = "0018-9448",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",

}

TY - JOUR

T1 - Inference for generalized linear models via alternating directions and Bethe free energy minimization

AU - Rangan, Sundeep

AU - Fletcher, Alyson K.

AU - Schniter, Philip

AU - Kamilov, Ulugbek S.

PY - 2017/1/1

Y1 - 2017/1/1


KW - ADMM

KW - Belief propagation

KW - generalized linear models

KW - message passing

KW - variational optimization

UR - http://www.scopus.com/inward/record.url?scp=85008450538&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85008450538&partnerID=8YFLogxK

U2 - 10.1109/TIT.2016.2619373

DO - 10.1109/TIT.2016.2619373

M3 - Article

AN - SCOPUS:85008450538

VL - 63

SP - 676

EP - 697

JO - IEEE Transactions on Information Theory

JF - IEEE Transactions on Information Theory

SN - 0018-9448

IS - 1

M1 - 7600459

ER -