More data means less inference: A pseudo-max approach to structured learning

David Sontag, Ofer Meshi, Tommi Jaakkola, Amir Globerson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.
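
To make the idea concrete, here is a minimal illustrative sketch (not the authors' code) of a pseudo-max-style objective for a chain-structured model with pairwise potentials: the structured hinge loss's global max over all labelings, which would require MAP inference, is replaced by independent maxes over one coordinate at a time with every other variable clamped to its ground-truth value. The function names, the chain structure, and the 0/1 margin are assumptions made for illustration.

```python
import numpy as np

def local_score(W, V, x, y, i, label):
    # Score terms of a pairwise chain model that involve position i,
    # with y_i set to `label` and all other coordinates clamped to y.
    s = W[label] @ x[i]
    if i > 0:
        s += V[y[i - 1], label]
    if i < len(y) - 1:
        s += V[label, y[i + 1]]
    return s

def pseudo_max_loss(W, V, x, y):
    # Sum of per-coordinate hinge terms: each max ranges over a single
    # y_i only, so no global MAP inference is ever performed.
    n, k = len(y), W.shape[0]
    total = 0.0
    for i in range(n):
        truth = local_score(W, V, x, y, i, y[i])
        # 0/1 margin: the true label should beat every alternative by 1.
        worst = max(local_score(W, V, x, y, i, l) + (l != y[i])
                    for l in range(k))
        total += worst - truth
    return total

# Toy usage: a 5-node chain, 3 labels, 4-dimensional node features.
rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))          # node features
y = rng.integers(0, 3, size=5)       # ground-truth labeling
W = rng.normal(size=(3, 4))          # unary weights, one row per label
V = rng.normal(size=(3, 3))          # pairwise label-compatibility scores
print(pseudo_max_loss(W, V, x, y))   # 0 iff every local margin is satisfied
```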

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
State: Published - 2010
Event: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010 - Vancouver, BC, Canada
Duration: Dec 6, 2010 – Dec 9, 2010

Other

Other: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
Country: Canada
City: Vancouver, BC
Period: 12/6/10 – 12/9/10

ASJC Scopus subject areas

  • Information Systems

Cite this

APA

Sontag, D., Meshi, O., Jaakkola, T., & Globerson, A. (2010). More data means less inference: A pseudo-max approach to structured learning. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010.

Standard

More data means less inference: A pseudo-max approach to structured learning. / Sontag, David; Meshi, Ofer; Jaakkola, Tommi; Globerson, Amir.

Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010. 2010.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Sontag, D, Meshi, O, Jaakkola, T & Globerson, A 2010, More data means less inference: A pseudo-max approach to structured learning. in Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010. 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010, Vancouver, BC, Canada, 12/6/10.

Vancouver

Sontag D, Meshi O, Jaakkola T, Globerson A. More data means less inference: A pseudo-max approach to structured learning. In Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010. 2010.

Author

Sontag, David; Meshi, Ofer; Jaakkola, Tommi; Globerson, Amir. / More data means less inference: A pseudo-max approach to structured learning. Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010. 2010.

BibTeX

@inproceedings{368d6a3bd81047f1829daf179832508a,
title = "More data means less inference: A pseudo-max approach to structured learning",
abstract = "The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.",
author = "David Sontag and Ofer Meshi and Tommi Jaakkola and Amir Globerson",
year = "2010",
language = "English (US)",
isbn = "9781617823800",
booktitle = "Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010",
}

RIS

TY  - GEN
T1  - More data means less inference
T2  - A pseudo-max approach to structured learning
AU  - Sontag, David
AU  - Meshi, Ofer
AU  - Jaakkola, Tommi
AU  - Globerson, Amir
PY  - 2010
Y1  - 2010
N2  - The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.
AB  - The problem of learning to predict structured labels is of key importance in many applications. However, for general graph structure both learning and inference are intractable. Here we show that it is possible to circumvent this difficulty when the distribution of training examples is rich enough, via a method similar in spirit to pseudo-likelihood. We show that our new method achieves consistency, and illustrate empirically that it indeed approaches the performance of exact methods when sufficiently large training sets are used.
UR  - http://www.scopus.com/inward/record.url?scp=84860649508&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84860649508&partnerID=8YFLogxK
M3  - Conference contribution
AN  - SCOPUS:84860649508
SN  - 9781617823800
BT  - Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, NIPS 2010
ER  -