### Abstract

In pattern recognition, statistical modeling, and regression, the amount of data is a critical factor affecting performance. If data and computational resources were unlimited, even trivial algorithms would converge to the optimal solution. In practice, however, with limited data and other resources, satisfactory performance requires sophisticated methods that regularize the problem by introducing a priori knowledge. Invariance of the output with respect to certain transformations of the input is a typical example of such a priori knowledge. In this chapter, we introduce the concept of tangent vectors, which compactly represent the essence of these transformation invariances, and two classes of algorithms, "tangent distance" and "tangent propagation", which make use of these invariances to improve performance.
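The core idea of tangent distance can be sketched in a few lines: approximate the manifold of transformed versions of a prototype `e` by its tangent plane, spanned by tangent vectors, and measure the distance from an input `x` to that plane instead of to `e` itself. The sketch below is illustrative, not the chapter's code; it assumes NumPy, uses a crude one-pixel circular shift as a finite-difference approximation of the translation tangent vector (the chapter uses smoothed derivatives), and implements only the one-sided variant of the distance.

```python
import numpy as np

def tangent_vector(image, dx=1, dy=0):
    # Finite-difference approximation of the tangent vector for a small
    # translation: the difference between the shifted image and the original.
    # (Hypothetical helper; a circular shift stands in for a smooth warp.)
    shifted = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return shifted - image

def tangent_distance(x, e, tangents):
    # One-sided tangent distance: minimize ||(e + T a) - x||^2 over the
    # transformation coefficients a, i.e. the Euclidean distance from x
    # to the tangent plane of the transformation manifold at e.
    T = np.stack([t.ravel() for t in tangents], axis=1)  # columns = tangent vectors
    r = (x - e).ravel()
    a, *_ = np.linalg.lstsq(T, r, rcond=None)
    return np.linalg.norm(r - T @ a)
```

With this setup, an input that is exactly a one-pixel translation of the prototype lies on the tangent plane, so its tangent distance is (numerically) zero even though its plain Euclidean distance is large; this is precisely the invariance the chapter exploits.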

Original language | English (US) |
---|---|
Title of host publication | Neural Networks |
Subtitle of host publication | Tricks of the Trade |
Editors | Grégoire Montavon, Geneviève B. Orr, Klaus-Robert Müller |
Pages | 235-269 |
Number of pages | 35 |
DOIs | https://doi.org/10.1007/978-3-642-35289-8_17 |
State | Published - Dec 1 2012 |

### Publication series

Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 7700 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |

### ASJC Scopus subject areas

- Theoretical Computer Science
- Computer Science (all)

## Fingerprint

Dive into the research topics of 'Transformation invariance in pattern recognition - Tangent distance and tangent propagation'. Together they form a unique fingerprint.

## Cite this

*Neural Networks: Tricks of the Trade* (pp. 235-269). (Lecture Notes in Computer Science; Vol. 7700). https://doi.org/10.1007/978-3-642-35289-8_17