Greedy Criterion in Orthogonal Greedy Learning

Lin Xu, Shaobo Lin, Jinshan Zeng, Xia Liu, Yi Fang, Zongben Xu

Research output: Contribution to journal › Article

Abstract

Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we show that SGD is not the only possible greedy criterion and introduce a new one, called the 'δ-greedy threshold', for learning. Based on this new greedy criterion, we derive a straightforward termination rule for OGL. Our theoretical study shows that the new learning scheme achieves the existing (almost) optimal learning rate of OGL. Numerical experiments are also provided, showing that the new scheme attains almost optimal generalization performance while requiring less computation than OGL.
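As a rough illustration of the scheme summarized above, the sketch below implements a classical OGL iteration in Python/NumPy: at each step the atom most correlated with the current residual is selected (the SGD-style greedy step), and the estimator is then refit by orthogonal projection onto the selected atoms. The function name, the data layout, and the simple correlation threshold used to stop the iteration are illustrative assumptions only; the precise δ-greedy threshold and the termination rule derived from it are defined in the article itself.

import numpy as np

def ogl_sketch(X, y, max_atoms=50, delta=1e-3):
    # X: (n, p) matrix whose columns are dictionary atoms evaluated on the sample
    #    (assumed column-normalized); y: (n,) vector of targets.
    # delta: illustrative stopping threshold on the residual correlation; a
    #        stand-in for the paper's delta-greedy threshold, not its exact form.
    p = X.shape[1]
    residual = y.astype(float).copy()
    selected = []            # indices of atoms chosen so far
    coef = np.zeros(p)
    beta = np.zeros(0)

    for _ in range(max_atoms):
        # Greedy step: pick the atom most correlated with the current residual
        # (the steepest-gradient-descent selection described in the abstract).
        corr = X.T @ residual
        k = int(np.argmax(np.abs(corr)))

        # Illustrative termination rule: stop once no atom correlates with the
        # residual above the threshold delta.
        if np.abs(corr[k]) < delta or k in selected:
            break
        selected.append(k)

        # Orthogonal projection: refit y on the span of the selected atoms by
        # least squares, then update the residual.
        beta, *_ = np.linalg.lstsq(X[:, selected], y, rcond=None)
        residual = y - X[:, selected] @ beta

    if selected:
        coef[selected] = beta
    return selected, coef

A call such as ogl_sketch(X, y, delta=0.05) would return the selected atom indices and the orthogonally projected coefficients; a larger delta stops earlier and yields a sparser estimator, which relates to the computation/generalization trade-off discussed in the paper.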

Original language: English (US)
Article number: 7862802
Pages (from-to): 955-966
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 48
Issue number: 3
DOI: 10.1109/TCYB.2017.2669259
State: Published - Mar 1 2018

Keywords

  • Generalization performance
  • greedy algorithms
  • greedy criterion
  • orthogonal greedy learning (OGL)
  • supervised learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Information Systems
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering

Cite this

Xu, L., Lin, S., Zeng, J., Liu, X., Fang, Y., & Xu, Z. (2018). Greedy Criterion in Orthogonal Greedy Learning. IEEE Transactions on Cybernetics, 48(3), 955-966. https://doi.org/10.1109/TCYB.2017.2669259
@article{d931d790091d421eb906a4a40797356b,
  title = "Greedy Criterion in Orthogonal Greedy Learning",
  abstract = "Orthogonal greedy learning (OGL) is a stepwise learning scheme that starts by selecting a new atom from a specified dictionary via steepest gradient descent (SGD) and then builds the estimator through orthogonal projection. In this paper, we show that SGD is not the only possible greedy criterion and introduce a new one, called the 'δ-greedy threshold', for learning. Based on this new greedy criterion, we derive a straightforward termination rule for OGL. Our theoretical study shows that the new learning scheme achieves the existing (almost) optimal learning rate of OGL. Numerical experiments are also provided, showing that the new scheme attains almost optimal generalization performance while requiring less computation than OGL.",
  keywords = "Generalization performance, greedy algorithms, greedy criterion, orthogonal greedy learning (OGL), supervised learning",
  author = "Lin Xu and Shaobo Lin and Jinshan Zeng and Xia Liu and Yi Fang and Zongben Xu",
  year = "2018",
  month = "3",
  day = "1",
  doi = "10.1109/TCYB.2017.2669259",
  language = "English (US)",
  volume = "48",
  pages = "955--966",
  journal = "IEEE Transactions on Cybernetics",
  issn = "2168-2267",
  publisher = "IEEE Advancing Technology for Humanity",
  number = "3",
}
