Ensemble Nystrom Method

Sanjiv Kumar, Mehryar Mohri, Ameet Talwalkar

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices. We introduce a new family of algorithms based on mixtures of Nystrom approximations, ensemble Nystrom algorithms, that yield more accurate low-rank approximations than the standard Nystrom method. We give a detailed study of variants of these algorithms based on simple averaging, an exponential weight method, or regression-based methods. We also present a theoretical analysis of these algorithms, including novel error bounds guaranteeing a better convergence rate than the standard Nystrom method. Finally, we report results of extensive experiments with several data sets containing up to 1M points demonstrating the significant improvement over the standard Nystrom approximation.
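The abstract describes combining several independent Nystrom approximations of a kernel matrix into a weighted mixture. As a rough illustration only, the sketch below implements the simplest variant mentioned there, simple averaging (uniform mixture weights), in NumPy. The function names, the toy RBF kernel, and the explicit construction of the full kernel matrix are assumptions made for readability; they are not the authors' implementation, which targets data sets far too large for the full matrix to be materialized.

```python
import numpy as np

def nystrom_approximation(K, idx):
    """Standard Nystrom approximation of K from the sampled columns idx:
    K ~= C W^+ C^T, with C = K[:, idx] and W = K[idx, idx]."""
    C = K[:, idx]                      # n x m block of sampled columns
    W = K[np.ix_(idx, idx)]            # m x m block among the sampled points
    return C @ np.linalg.pinv(W) @ C.T

def ensemble_nystrom(K, m, p, seed=None):
    """Uniform-weight ensemble of p Nystrom approximations, each built from
    m columns sampled uniformly at random without replacement."""
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    K_approx = np.zeros_like(K)
    for _ in range(p):
        idx = rng.choice(n, size=m, replace=False)
        K_approx += nystrom_approximation(K, idx)
    return K_approx / p                # simple averaging: each weight is 1/p

if __name__ == "__main__":
    # Toy usage: an RBF kernel on 500 random points (illustrative only).
    X = np.random.default_rng(0).normal(size=(500, 10))
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / 10.0)
    K_ens = ensemble_nystrom(K, m=50, p=5, seed=1)
    print("relative Frobenius error:",
          np.linalg.norm(K - K_ens) / np.linalg.norm(K))
```

The exponential-weight and regression-based variants mentioned in the abstract would replace the uniform weights 1/p with learned mixture weights.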

Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference
Pages: 1060-1068
Number of pages: 9
State: Published - 2009
Event: 23rd Annual Conference on Neural Information Processing Systems, NIPS 2009 - Vancouver, BC, Canada
Duration: Dec 7, 2009 - Dec 10, 2009



ASJC Scopus subject areas

  • Information Systems

Cite this

Kumar, S., Mohri, M., & Talwalkar, A. (2009). Ensemble Nystrom method. In Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference (pp. 1060-1068).

@inproceedings{8a9b299f2bb3403081ca4e99db0202cc,
title = "Ensemble nystrom method",
abstract = "A crucial technique for scaling kernel methods to very large data sets reaching or exceeding millions of instances is based on low-rank approximation of kernel matrices. We introduce a new family of algorithms based on mixtures of Nystrom approximations, ensemble Nystrom algorithms, that yield more accurate low-rank approximations than the standard Nystrom method. We give a detailed study of variants of these algorithms based on simple averaging, an exponential weight method, or regression-based methods. We also present a theoretical analysis of these algorithms, including novel error bounds guaranteeing a better convergence rate than the standard Nystrom method. Finally, we report results of extensive experiments with several data sets containing up to 1M points demonstrating the significant improvement over the standard Nystrom approximation.",
author = "Sanjiv Kumar and Mehryar Mohri and Ameet Talwalkar",
year = "2009",
language = "English (US)",
isbn = "9781615679119",
pages = "1060--1068",
booktitle = "Advances in Neural Information Processing Systems 22 - Proceedings of the 2009 Conference",

}
