Transformation invariance in pattern recognition - Tangent distance and tangent propagation

Patrice Y. Simard, Yann LeCun, John S. Denker, Bernard Victorri

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
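As a rough illustration of the idea (not code from the chapter itself), a one-sided tangent distance can be sketched in NumPy: the transformation manifold of a pattern x is locally approximated by its tangent plane, spanned by a few tangent vectors, and the distance from another pattern y to that plane is found by a least-squares fit over the transformation coefficients. The function name and the toy vectors below are invented for illustration.

```python
import numpy as np

def one_sided_tangent_distance(x, y, T):
    """Distance from y to the tangent plane of x's transformation manifold.

    x, y : flattened patterns, shape (n,)
    T    : tangent vectors of x, shape (n, k); each column is the derivative
           of x with respect to one transformation (rotation, shift, ...).
    Minimizes ||x + T a - y|| over the coefficients a.
    """
    a, *_ = np.linalg.lstsq(T, y - x, rcond=None)
    return np.linalg.norm(x + T @ a - y)

# Toy example with one made-up tangent direction:
x = np.array([0.0, 1.0, 0.0, 0.0])
t = np.array([1.0, -1.0, 1.0, 0.0])   # hypothetical tangent vector of x
y = np.array([0.1, 0.9, 0.1, 0.0])    # y = x + 0.1 * t, a "transformed" x

d_euclid = np.linalg.norm(x - y)
d_tangent = one_sided_tangent_distance(x, y, t.reshape(-1, 1))
# Because y lies on x's tangent plane, d_tangent is ~0 while d_euclid is not;
# the tangent distance can never exceed the Euclidean distance.
```

The full ("two-sided") tangent distance of the chapter also linearizes the manifold of y and minimizes over both sets of coefficients; the one-sided version above keeps the sketch short.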

Original language: English (US)
Title of host publication: Neural Networks: Tricks of the Trade
Pages: 235-269
Number of pages: 35
Volume: 7700
DOIs: 10.1007/978-3-642-35289-8_17
State: Published - 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7700
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

ASJC Scopus subject areas

  • Computer Science (all)
  • Theoretical Computer Science

Cite this

Simard, P. Y., LeCun, Y., Denker, J. S., & Victorri, B. (2012). Transformation invariance in pattern recognition - Tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade (pp. 235-269). Lecture Notes in Computer Science, Vol. 7700. https://doi.org/10.1007/978-3-642-35289-8_17

@inbook{47d9539ba845406d95577251d0d0de84,
title = "Transformation invariance in pattern recognition - Tangent distance and tangent propagation",
abstract = "In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, {"}tangent distance{"} and {"}tangent propagation{"}, which make use of these invariances to improve performance.",
author = "Simard, {Patrice Y.} and Yann LeCun and Denker, {John S.} and Bernard Victorri",
year = "2012",
doi = "10.1007/978-3-642-35289-8_17",
language = "English (US)",
isbn = "9783642352881",
volume = "7700",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
pages = "235--269",
booktitle = "Neural Networks: Tricks of the Trade",
}

TY  - CHAP
T1  - Transformation invariance in pattern recognition - Tangent distance and tangent propagation
AU  - Simard, Patrice Y.
AU  - LeCun, Yann
AU  - Denker, John S.
AU  - Victorri, Bernard
PY  - 2012
Y1  - 2012
N2  - In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
AB  - In pattern recognition, statistical modeling, or regression, the amount of data is a critical factor affecting the performance. If the amount of data and computational resources are unlimited, even trivial algorithms will converge to the optimal solution. However, in the practical case, given limited data and other resources, satisfactory performance requires sophisticated methods to regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
UR  - http://www.scopus.com/inward/record.url?scp=84872577241&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=84872577241&partnerID=8YFLogxK
U2  - 10.1007/978-3-642-35289-8_17
DO  - 10.1007/978-3-642-35289-8_17
M3  - Chapter
SN  - 9783642352881
VL  - 7700
T3  - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP  - 235
EP  - 269
BT  - Neural Networks: Tricks of the Trade
ER  -