Generalization bounds for learning kernels

Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^{1/4}. We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.
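The √(log p) and p^{1/4} terms in the abstract describe how the Rademacher complexity of the kernel-learning hypothesis sets grows with the number of base kernels p; that complexity enters the generalization error through a standard margin-based bound. The LaTeX sketch below restates that standard bound and the abstract's stated dependencies in schematic O-notation; the paper's exact constants, kernel-trace factors, and margin terms are not reproduced here, so this is illustrative only.

% Hedged sketch: standard margin-based Rademacher-complexity bound.
% With probability at least 1 - \delta over a sample of size m, for all h in H:
\[
  R(h) \;\le\; \widehat{R}_{\rho}(h)
    \;+\; \frac{2}{\rho}\,\mathfrak{R}_m(H)
    \;+\; \sqrt{\frac{\log(1/\delta)}{2m}} .
\]
% The paper bounds \mathfrak{R}_m(H) for hypothesis sets defined by learned
% combinations of p base kernels; schematically (constants omitted):
\[
  \mathfrak{R}_m\bigl(H_{L_1}\bigr) = O\!\left(\sqrt{\frac{\log p}{m}}\right),
  \qquad
  \mathfrak{R}_m\bigl(H_{L_2}\bigr) = O\!\left(\frac{p^{1/4}}{\sqrt{m}}\right).
\]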

Original language: English (US)
Title of host publication: ICML 2010 - Proceedings, 27th International Conference on Machine Learning
Pages: 247-254
Number of pages: 8
State: Published - 2010
Event: 27th International Conference on Machine Learning, ICML 2010 - Haifa, Israel
Duration: Jun 21, 2010 – Jun 25, 2010

Other

Other: 27th International Conference on Machine Learning, ICML 2010
Country: Israel
City: Haifa
Period: 6/21/10 – 6/25/10


ASJC Scopus subject areas

  • Artificial Intelligence
  • Education

Cite this

Cortes, C., Mohri, M., & Rostamizadeh, A. (2010). Generalization bounds for learning kernels. In ICML 2010 - Proceedings, 27th International Conference on Machine Learning (pp. 247-254).

Generalization bounds for learning kernels. / Cortes, Corinna; Mohri, Mehryar; Rostamizadeh, Afshin.

ICML 2010 - Proceedings, 27th International Conference on Machine Learning. 2010. p. 247-254.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Cortes, C, Mohri, M & Rostamizadeh, A 2010, Generalization bounds for learning kernels. in ICML 2010 - Proceedings, 27th International Conference on Machine Learning. pp. 247-254, 27th International Conference on Machine Learning, ICML 2010, Haifa, Israel, 6/21/10.
Cortes C, Mohri M, Rostamizadeh A. Generalization bounds for learning kernels. In ICML 2010 - Proceedings, 27th International Conference on Machine Learning. 2010. p. 247-254
Cortes, Corinna ; Mohri, Mehryar ; Rostamizadeh, Afshin. / Generalization bounds for learning kernels. ICML 2010 - Proceedings, 27th International Conference on Machine Learning. 2010. pp. 247-254
@inproceedings{3eb81643d75e4b628697fb83c22d8a15,
title = "Generalization bounds for learning kernels",
abstract = "This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^{1/4}. We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.",
author = "Corinna Cortes and Mehryar Mohri and Afshin Rostamizadeh",
year = "2010",
language = "English (US)",
isbn = "9781605589077",
pages = "247--254",
booktitle = "ICML 2010 - Proceedings, 27th International Conference on Machine Learning",

}

TY - GEN

T1 - Generalization bounds for learning kernels

AU - Cortes, Corinna

AU - Mohri, Mehryar

AU - Rostamizadeh, Afshin

PY - 2010

Y1 - 2010

N2 - This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^{1/4}. We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.

AB - This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits only a √(log p) dependency on the number of kernels, which is tight and considerably more favorable than the previous best bound given for the same problem. We also give a novel bound for learning with a non-negative combination of p base kernels with an L2 regularization whose dependency on p is also tight and only in p^{1/4}. We present similar results for Lq regularization with other values of q, and outline the relevance of our proof techniques to the analysis of the complexity of the class of linear functions. Experiments with a large number of kernels further validate the behavior of the generalization error as a function of p predicted by our bounds.

UR - http://www.scopus.com/inward/record.url?scp=77956550918&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77956550918&partnerID=8YFLogxK

M3 - Conference contribution

SN - 9781605589077

SP - 247

EP - 254

BT - ICML 2010 - Proceedings, 27th International Conference on Machine Learning

ER -