Abstract
In many machine learning applications, one has not only training data but also some high-level information about certain invariances that the system should exhibit. In character recognition, for example, the answer should be invariant with respect to small spatial distortions in the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that minimizes the derivative of the classifier outputs with respect to distortion operators of our choosing. This not only produces tremendous speed advantages, but also provides a powerful language for specifying what generalizations we wish the network to perform.
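The scheme described in the abstract can be sketched in a few lines: penalize the derivative of the classifier outputs along a chosen distortion direction so that small distortions leave the outputs nearly unchanged. The sketch below uses a toy linear model, 1-D "images", horizontal translation as the distortion operator, and a finite-difference tangent vector; all function names, data, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def translate(img, dx):
    # Shift a 1-D signal by a fractional amount via linear interpolation.
    n = img.size
    return np.interp(np.arange(n) - dx, np.arange(n), img)

def tangent_vector(img, eps=1e-3):
    # Finite-difference estimate of d translate(img, a)/da at a = 0:
    # the input-space direction along which the distortion moves the image.
    return (translate(img, eps) - translate(img, -eps)) / (2 * eps)

# Toy data: 20 random 16-pixel "images" with binary one-hot labels.
X = rng.normal(size=(20, 16))
Y = np.eye(2)[rng.integers(0, 2, size=20)]
W = 0.1 * rng.normal(size=(16, 2))   # linear classifier weights
lam, lr = 0.1, 0.05                  # invariance weight, step size

def objective(W):
    # Squared-error loss plus the tangent penalty 0.5*lam*||t @ W||^2,
    # which is small when outputs are locally invariant to translation.
    loss = 0.0
    for img, target in zip(X, Y):
        loss += 0.5 * np.sum((img @ W - target) ** 2)
        t = tangent_vector(img)
        loss += 0.5 * lam * np.sum((t @ W) ** 2)
    return loss

loss_before = objective(W)
for step in range(200):
    grad = np.zeros_like(W)
    for img, target in zip(X, Y):
        grad += np.outer(img, img @ W - target)   # classification term
        t = tangent_vector(img)
        grad += lam * np.outer(t, t @ W)          # invariance (tangent) term
    W -= lr * grad / len(X)
loss_after = objective(W)
```

For this linear model the directional derivative of the outputs along the tangent is exactly `t @ W`, so the penalty gradient has a closed form; for a nonlinear network the same quantity would be obtained by differentiating through the model.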
| Original language | English (US) |
| --- | --- |
| Title of host publication | Conference B |
| Subtitle of host publication | Pattern Recognition Methodology and Systems |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| Pages | 651-655 |
| Number of pages | 5 |
| Volume | 2 |
| ISBN (Print) | 0818629150 |
| DOIs | 10.1109/ICPR.1992.201861 |
| State | Published - Jan 1 1992 |
| Event | 11th IAPR International Conference on Pattern Recognition, IAPR 1992 - The Hague, Netherlands. Duration: Aug 30 1992 → Sep 3 1992 |
Other

| Other | 11th IAPR International Conference on Pattern Recognition, IAPR 1992 |
| --- | --- |
| Country | Netherlands |
| City | The Hague |
| Period | 8/30/92 → 9/3/92 |
ASJC Scopus subject areas
- Computer Vision and Pattern Recognition
Cite this
An efficient algorithm for learning invariances in adaptive classifiers. / Simard, P.; LeCun, Yann; Denker, J.; Victorri, B.
Conference B: Pattern Recognition Methodology and Systems. Vol. 2. Institute of Electrical and Electronics Engineers Inc., 1992. p. 651-655. 201861.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - An efficient algorithm for learning invariances in adaptive classifiers
AU - Simard, P.
AU - LeCun, Yann
AU - Denker, J.
AU - Victorri, B.
PY - 1992/1/1
Y1 - 1992/1/1
N2 - In many machine learning applications, one has not only training data but also some high-level information about certain invariances that the system should exhibit. In character recognition, for example, the answer should be invariant with respect to small spatial distortions in the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that minimizes the derivative of the classifier outputs with respect to distortion operators of our choosing. This not only produces tremendous speed advantages, but also provides a powerful language for specifying what generalizations we wish the network to perform.
AB - In many machine learning applications, one has not only training data but also some high-level information about certain invariances that the system should exhibit. In character recognition, for example, the answer should be invariant with respect to small spatial distortions in the input images (translations, rotations, scale changes, etcetera). We have implemented a scheme that minimizes the derivative of the classifier outputs with respect to distortion operators of our choosing. This not only produces tremendous speed advantages, but also provides a powerful language for specifying what generalizations we wish the network to perform.
UR - http://www.scopus.com/inward/record.url?scp=77953494628&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77953494628&partnerID=8YFLogxK
U2 - 10.1109/ICPR.1992.201861
DO - 10.1109/ICPR.1992.201861
M3 - Conference contribution
AN - SCOPUS:77953494628
SN - 0818629150
VL - 2
SP - 651
EP - 655
BT - Conference B
PB - Institute of Electrical and Electronics Engineers Inc.
ER -