Transformation invariance in pattern recognition: Tangent distance and propagation

Patrice Y. Simard, Yann LeCun, John S. Denker, Bernard Victorri

Research output: Contribution to journal › Article

Abstract

In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. We introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, tangent distance and tangent propagation, which make use of these invariances to improve performance.
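
As a rough illustration of the ideas summarized above, the sketch below computes a one-sided tangent distance: the distance between a test pattern and the plane spanned, at a reference pattern, by its tangent vectors (finite-difference approximations of small transformations such as translations or rotations). This is a minimal sketch assuming the simplest one-sided variant; the paper also develops a two-sided distance and the tangent propagation training algorithm. The function and variable names (one_sided_tangent_distance, p, e, T) are illustrative and not taken from the paper.

import numpy as np

def one_sided_tangent_distance(p, e, T):
    """Distance from pattern e to the tangent plane {p + T @ a : a in R^k}.

    p : (n,) reference pattern (e.g., a flattened image)
    e : (n,) test pattern
    T : (n, k) matrix whose columns are tangent vectors of p
    """
    # Least-squares projection of (e - p) onto the span of the tangent vectors.
    a, *_ = np.linalg.lstsq(T, e - p, rcond=None)
    # The residual of that projection is the (one-sided) tangent distance.
    residual = (e - p) - T @ a
    return float(np.linalg.norm(residual))

# Hypothetical usage: a horizontal-translation tangent vector obtained by
# central differences on a 2-D image `img`, then compared against `other`.
#   t_x = ((np.roll(img, -1, axis=1) - np.roll(img, 1, axis=1)) / 2.0).ravel()
#   d = one_sided_tangent_distance(img.ravel(), other.ravel(), t_x[:, None])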

Original language: English (US)
Pages (from-to): 181-197
Number of pages: 17
Journal: International Journal of Imaging Systems and Technology
Volume: 11
Issue number: 3
DOIs: 10.1002/1098-1098(2000)11:3<181::AID-IMA1003>3.0.CO;2-E
State: Published - 2000

Fingerprint

  • Invariance
  • Tangents
  • Pattern recognition
  • Propagation
  • Resources
  • Regression analysis
  • Output

ASJC Scopus subject areas

  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering
  • Atomic and Molecular Physics, and Optics

Cite this

Transformation invariance in pattern recognition: Tangent distance and propagation. / Simard, Patrice Y.; LeCun, Yann; Denker, John S.; Victorri, Bernard.

In: International Journal of Imaging Systems and Technology, Vol. 11, No. 3, 2000, p. 181-197.

@article{f211344943204e469435bf9146e8a0e6,
title = "Transformation invariance in pattern recognition: Tangent distance and propagation",
abstract = "In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. We introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, tangent distance and tangent propagation, which make use of these invariances to improve performance.",
author = "Simard, {Patrice Y.} and Yann LeCun and Denker, {John S.} and Bernard Victorri",
year = "2000",
doi = "10.1002/1098-1098(2000)11:3<181::AID-IMA1003>3.0.CO;2-E",
language = "English (US)",
volume = "11",
pages = "181--197",
journal = "International Journal of Imaging Systems and Technology",
issn = "0899-9457",
publisher = "John Wiley and Sons Inc.",
number = "3",
}
