### Abstract

We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses, and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time of the algorithm follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape: universality is present.
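The flavor of the experiment can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact setup: the ensemble (a Wishart-type matrix `H = W Wᵀ / m`), the tolerance, the sample sizes, and the helper name `halting_time` are all assumptions chosen for the sketch. The idea is to run gradient descent on a random quadratic landscape until the gradient is small, record the iteration count, and compare the centered-and-scaled halting-time histograms across two different entry distributions.

```python
import numpy as np

def halting_time(n=100, dist="gauss", eps=1e-8, rng=None):
    """Iterations of gradient descent on a random quadratic until ||grad|| < eps.

    Landscape (an assumed ensemble, for illustration):
        f(x) = 0.5 x^T H x - b^T x,  H = W W^T / m,  W an n-by-m random matrix.
    """
    if rng is None:
        rng = np.random.default_rng()
    m = 2 * n
    if dist == "gauss":
        W = rng.standard_normal((n, m))
    else:  # Rademacher entries: +1 or -1 with equal probability
        W = rng.choice([-1.0, 1.0], size=(n, m))
    H = W @ W.T / m
    b = rng.standard_normal(n)
    b /= np.linalg.norm(b)
    x = np.zeros(n)
    lr = 1.0 / np.linalg.eigvalsh(H)[-1]  # step size below 1/lambda_max for stability
    for t in range(1, 100_000):
        g = H @ x - b
        if np.linalg.norm(g) < eps:
            return t
        x -= lr * g
    return t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for dist in ("gauss", "rademacher"):
        times = np.array([halting_time(n=50, dist=dist, rng=rng) for _ in range(100)])
        # Center and scale; universality predicts the two scaled histograms agree.
        scaled = (times - times.mean()) / times.std()
        print(dist, times.mean(), round(scaled.std(), 3))
```

With more trials and larger `n`, one would overlay the two scaled histograms; the universality claim is that they collapse onto a single curve even though the raw means and variances differ across ensembles.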

| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 289-301 |
| Number of pages | 13 |
| Journal | Quarterly of Applied Mathematics |
| Volume | 76 |
| Issue number | 2 |
| DOIs | https://doi.org/10.1090/qam/1483 |
| State | Published - Jan 1 2018 |


### ASJC Scopus subject areas

- Applied Mathematics

### Cite this

Sagun, L., Trogdon, T., & LeCun, Y. (2018). Universal halting times in optimization and machine learning. *Quarterly of Applied Mathematics*, *76*(2), 289-301. https://doi.org/10.1090/qam/1483

Research output: Contribution to journal › Article

```
TY - JOUR
T1 - Universal halting times in optimization and machine learning
AU - Sagun, Levent
AU - Trogdon, Thomas
AU - LeCun, Yann
PY - 2018/1/1
Y1 - 2018/1/1
N2 - We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time of the algorithm follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape - universality is present.
AB - We present empirical evidence that the halting times for a class of optimization algorithms are universal. The algorithms we consider come from quadratic optimization, spin glasses and machine learning. A universality theorem is given in the case of the quadratic gradient descent flow. More precisely, given an algorithm, which we take to be both the optimization routine and the form of the random landscape, the fluctuations of the halting time of the algorithm follow a distribution that, after centering and scaling, appears invariant under changes in the distribution on the landscape - universality is present.
UR - http://www.scopus.com/inward/record.url?scp=85039950078&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85039950078&partnerID=8YFLogxK
U2 - 10.1090/qam/1483
DO - 10.1090/qam/1483
M3 - Article
AN - SCOPUS:85039950078
VL - 76
SP - 289
EP - 301
JO - Quarterly of Applied Mathematics
JF - Quarterly of Applied Mathematics
SN - 0033-569X
IS - 2
ER -
```