### Abstract

The choice of the kernel is critical to the success of many learning algorithms, but it is typically left to the user. Instead, the training data can be used to learn the kernel by selecting it out of a given family, such as that of non-negative linear combinations of p base kernels, constrained by a trace or L1 regularization. This paper studies the problem of learning kernels with the same family of kernels but with an L2 regularization instead, and for regression problems. We analyze the problem of learning kernels with ridge regression. We derive the form of the solution of the optimization problem and give an efficient iterative algorithm for computing that solution. We present a novel theoretical analysis of the problem based on stability and give learning bounds for orthogonal kernels that contain only an additive term O(√p/m) when compared to the standard kernel ridge regression stability bound. We also report the results of experiments indicating that L1 regularization can lead to modest improvements for a small number of kernels, but to performance degradations in larger-scale cases. In contrast, L2 regularization never degrades performance and in fact achieves significant improvements with a large number of kernels.
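The abstract describes an alternating procedure: fix the combination weights μ, solve standard kernel ridge regression for the combined kernel K_μ = Σ_k μ_k K_k, then update μ subject to an L2 constraint ‖μ − μ0‖₂ ≤ Λ. The snippet below is a minimal NumPy sketch of that kind of scheme, not the paper's exact algorithm: the fixed-point update μ ← μ0 + Λ·v/‖v‖ with v_k = αᵀK_kα, the damping parameter `eta`, and all function and variable names are assumptions made for illustration.

```python
import numpy as np

def learn_kernel_l2_krr(Ks, y, lam=1.0, Lam=1.0, eta=0.5, n_iter=100, tol=1e-8):
    """Hedged sketch of L2-regularized kernel learning with KRR.

    Ks  : (p, m, m) array of p base Gram matrices (assumed PSD)
    y   : (m,) target vector
    lam : ridge parameter; Lam : radius of the L2 ball around mu0
    eta : damping factor for the iterative update (illustrative choice)
    """
    p, m, _ = Ks.shape
    mu0 = np.ones(p) / p                                  # illustrative center of the L2 ball
    mu = mu0.copy()
    for _ in range(n_iter):
        K = np.tensordot(mu, Ks, axes=1)                  # K_mu = sum_k mu_k K_k
        alpha = np.linalg.solve(K + lam * np.eye(m), y)   # KRR dual solution for fixed mu
        v = np.array([alpha @ Kk @ alpha for Kk in Ks])   # v_k = alpha' K_k alpha >= 0 (PSD)
        target = mu0 + Lam * v / (np.linalg.norm(v) + 1e-12)
        mu_next = eta * mu + (1.0 - eta) * target         # damped fixed-point step
        if np.linalg.norm(mu_next - mu) < tol:
            return mu_next, alpha
        mu = mu_next
    return mu, alpha
```

Because each base Gram matrix is PSD, v is entrywise non-negative, so the iterate stays in the non-negative orthant whenever μ0 ≥ 0; convergence of this damped iteration is assumed here for the sketch, not established.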

| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009 |
| Pages | 109-116 |
| Number of pages | 8 |
| State | Published - 2009 |
| Event | 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009, Montreal, QC, Canada. Duration: Jun 18 2009 → Jun 21 2009 |

### Other

| Other | 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009 |
|---|---|
| Country | Canada |
| City | Montreal, QC |
| Period | 6/18/09 → 6/21/09 |


### ASJC Scopus subject areas

- Artificial Intelligence
- Applied Mathematics

### Cite this

Cortes, C., Mohri, M., & Rostamizadeh, A. (2009). **L2 regularization for learning kernels.** In *Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009* (pp. 109-116). 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009, Montreal, QC, Canada, 6/18/09 → 6/21/09.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

TY - GEN

T1 - L2 regularization for learning kernels

AU - Cortes, Corinna

AU - Mohri, Mehryar

AU - Rostamizadeh, Afshin

PY - 2009

Y1 - 2009

N2 - The choice of the kernel is critical to the success of many learning algorithms, but it is typically left to the user. Instead, the training data can be used to learn the kernel by selecting it out of a given family, such as that of non-negative linear combinations of p base kernels, constrained by a trace or L1 regularization. This paper studies the problem of learning kernels with the same family of kernels but with an L2 regularization instead, and for regression problems. We analyze the problem of learning kernels with ridge regression. We derive the form of the solution of the optimization problem and give an efficient iterative algorithm for computing that solution. We present a novel theoretical analysis of the problem based on stability and give learning bounds for orthogonal kernels that contain only an additive term O(√p/m) when compared to the standard kernel ridge regression stability bound. We also report the results of experiments indicating that L1 regularization can lead to modest improvements for a small number of kernels, but to performance degradations in larger-scale cases. In contrast, L2 regularization never degrades performance and in fact achieves significant improvements with a large number of kernels.

AB - The choice of the kernel is critical to the success of many learning algorithms, but it is typically left to the user. Instead, the training data can be used to learn the kernel by selecting it out of a given family, such as that of non-negative linear combinations of p base kernels, constrained by a trace or L1 regularization. This paper studies the problem of learning kernels with the same family of kernels but with an L2 regularization instead, and for regression problems. We analyze the problem of learning kernels with ridge regression. We derive the form of the solution of the optimization problem and give an efficient iterative algorithm for computing that solution. We present a novel theoretical analysis of the problem based on stability and give learning bounds for orthogonal kernels that contain only an additive term O(√p/m) when compared to the standard kernel ridge regression stability bound. We also report the results of experiments indicating that L1 regularization can lead to modest improvements for a small number of kernels, but to performance degradations in larger-scale cases. In contrast, L2 regularization never degrades performance and in fact achieves significant improvements with a large number of kernels.

UR - http://www.scopus.com/inward/record.url?scp=77958134983&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77958134983&partnerID=8YFLogxK

M3 - Conference contribution

AN - SCOPUS:77958134983

SP - 109

EP - 116

BT - Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009

ER -