On-line learning algorithms for path experts with non-additive losses

Corinna Cortes, Vitaly Kuznetsov, Mehryar Mohri, Manfred K. Warmuth

Research output: Contribution to journal › Article

Abstract

We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions, and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both families. We prove that the time complexity of our extensions of both FPL and RWM to rational losses is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.
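For context, the Randomized Weighted Majority (RWM) algorithm named in the abstract can be sketched in its standard additive-loss form as below. This is a minimal illustration only: the paper's contribution is extending RWM (and FPL) to rational and tropical non-additive losses over path experts, which this sketch does not implement, and the learning rate `eta` and the `exp(-eta * loss)` multiplicative update are textbook choices, not taken from the paper.

```python
import math

def rwm(expert_losses, eta=0.5):
    """Standard Randomized Weighted Majority over a fixed set of experts.

    expert_losses: a list of rounds, where each round is a list of
    per-expert losses in [0, 1]. Returns the learner's cumulative
    expected loss; in practice one would sample an expert from
    `probs` at each round instead of tracking the expectation.
    """
    n = len(expert_losses[0])
    weights = [1.0] * n  # uniform initial weights
    total = 0.0
    for losses in expert_losses:
        z = sum(weights)
        probs = [w / z for w in weights]
        # Expected loss of an expert drawn from the current distribution.
        total += sum(p * l for p, l in zip(probs, losses))
        # Multiplicative update: down-weight each expert by its loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total
```

With two experts, one always incurring loss 0 and the other loss 1, the weight of the bad expert decays geometrically, so the learner's cumulative expected loss stays bounded by a constant rather than growing with the number of rounds.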

Original language: English (US)
Journal: Journal of Machine Learning Research
Volume: 40
Issue number: 2015
State: Published - 2015

Keywords

  • Experts
  • Non-additive losses
  • On-line learning
  • Structured prediction

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

Cite this

On-line learning algorithms for path experts with non-additive losses. / Cortes, Corinna; Kuznetsov, Vitaly; Mohri, Mehryar; Warmuth, Manfred K.

In: Journal of Machine Learning Research, Vol. 40, No. 2015, 2015.

@article{81e5123160d946d1a5e7b0f75b117f3a,
title = "On-line learning algorithms for path experts with non-additive losses",
abstract = "We consider two broad families of non-additive loss functions covering a large number of applications: rational losses and tropical losses. We give new algorithms extending the Follow-the-Perturbed-Leader (FPL) algorithm to both of these families of loss functions, and similarly give new algorithms extending the Randomized Weighted Majority (RWM) algorithm to both families. We prove that the time complexity of our extensions of both FPL and RWM to rational losses is polynomial and present regret bounds for both. We further show that these algorithms can play a critical role in improving performance in applications such as structured prediction.",
keywords = "Experts, Non-additive losses, On-line learning, Structured prediction",
author = "Corinna Cortes and Vitaly Kuznetsov and Mehryar Mohri and {Manfred K.} Warmuth",
year = "2015",
language = "English (US)",
volume = "40",
journal = "Journal of Machine Learning Research",
issn = "1532-4435",
publisher = "Microtome Publishing",
number = "2015",
}
