Accelerating online convex optimization via adaptive prediction

Mehryar Mohri, Scott Yang

Research output: Contribution to conference (Paper)

Abstract

We present a powerful general framework for designing data-dependent online convex optimization algorithms, building upon and unifying recent techniques in adaptive regularization, optimistic gradient predictions, and problem-dependent randomization. We first present a series of new regret guarantees that hold at any time and under minimal assumptions, and then show how different relaxations recover existing algorithms, both basic ones and more recent, sophisticated ones. Finally, we show how combining adaptivity, optimism, and problem-dependent randomization can guide the design of algorithms that benefit from more favorable guarantees than recent state-of-the-art methods.
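To make these ingredients concrete, the sketch below combines adaptive regularization (AdaGrad-style per-coordinate step sizes) with optimistic gradient prediction (here, simply predicting that the next gradient equals the last one observed). In optimistic methods of this kind, regret typically scales with the accumulated squared prediction errors rather than with the raw gradient magnitudes, so accurate predictions accelerate learning. This is a minimal illustration in the spirit of the framework, not the paper's exact algorithm; the function names, the L2-ball domain, and all default parameters are assumptions.

# Minimal illustrative sketch (not the paper's exact algorithm): optimistic,
# AdaGrad-style online gradient descent over an L2 ball. Per-coordinate step
# sizes adapt to the squared errors of a gradient prediction, and each round
# the learner plays a point that hedges against the predicted gradient.
# All names, defaults, and the choice of prediction are assumptions.
import numpy as np

def project_l2(v, radius):
    # Euclidean projection onto the L2 ball of the given radius.
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def optimistic_adaptive_ogd(grad_fn, dim, num_rounds, lr=1.0, radius=1.0, delta=1.0):
    x = np.zeros(dim)        # base iterate
    g_pred = np.zeros(dim)   # gradient prediction: last observed gradient
    err_sq = np.zeros(dim)   # accumulated squared prediction errors
    played = []
    for t in range(num_rounds):
        # Adaptive per-coordinate step sizes; delta keeps early steps bounded.
        step = lr / np.sqrt(delta + err_sq)
        # Optimistic half-step: play a point that anticipates g_pred.
        x_play = project_l2(x - step * g_pred, radius)
        played.append(x_play)
        g = grad_fn(t, x_play)          # observe the actual gradient
        err_sq += (g - g_pred) ** 2     # regret scales with these errors
        x = project_l2(x - step * g, radius)  # update with the true gradient
        g_pred = g                      # next round's prediction
    return played

With slowly varying losses, the errors g - g_pred stay small, so the adaptive step sizes remain large and the iterates can track the comparator more quickly than a fixed 1/sqrt(t) schedule would allow.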

Original language: English (US)
Pages: 848-856
Number of pages: 9
State: Published - Jan 1, 2016
Event: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016 - Cadiz, Spain
Duration: May 9, 2016 - May 11, 2016

Conference

Conference: 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016
Country: Spain
City: Cadiz
Period: 5/9/16 - 5/11/16

ASJC Scopus subject areas

  • Artificial Intelligence
  • Statistics and Probability

Cite this

Mohri, M., & Yang, S. (2016). Accelerating online convex optimization via adaptive prediction. Paper presented at the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016), Cadiz, Spain, pp. 848-856.
