Scene parsing with multiscale feature learning, purity trees, and optimal covers

Clément Farabet, Camille Couprie, Laurent Najman, Yann LeCun

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Scene parsing consists in labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features. In parallel to feature extraction, a tree of segments is computed from a graph of pixel dissimilarities. The feature vectors associated with the segments covered by each node in the tree are aggregated and fed to a classifier which produces an estimate of the distribution of object categories contained in the segment. A subset of tree nodes that cover the image is then selected so as to maximize the average "purity" of the class distributions, hence maximizing the overall likelihood that each segment will contain a single object. The system yields record accuracies on the SIFT Flow Dataset (33 classes) and the Barcelona Dataset (170 classes) and near-record accuracy on the Stanford Background Dataset (8 classes), while being an order of magnitude faster than competing approaches, producing a 320 × 240 image labeling in less than 1 second, including feature extraction.
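The cover-selection step described above lends itself to a simple recursion on the segmentation tree: for each node, either keep the node's own segment, or replace it by the best covers of its children (whose segments partition the parent in a segmentation hierarchy). Because every candidate cover partitions the same image, maximizing the size-weighted sum of purities maximizes the average purity. The sketch below is illustrative only, not the authors' implementation; the `Node` structure, the scalar `purity` score (e.g., the max of the classifier's class distribution), and the greedy size-weighted criterion are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical segment-tree node: `purity` would come from the
    # classifier's estimated class distribution (e.g., its maximum
    # probability); `size` is the pixel count of the segment.
    purity: float
    size: int
    children: list = field(default_factory=list)

def optimal_cover(node):
    """Return (nodes, weighted_purity) for the cover of this subtree
    maximizing size-weighted purity: keep the node itself, or take the
    union of its children's optimal covers, whichever scores higher."""
    if not node.children:
        return [node], node.purity * node.size
    child_nodes, child_score = [], 0.0
    for c in node.children:
        n, s = optimal_cover(c)
        child_nodes += n
        child_score += s
    own_score = node.purity * node.size
    if own_score >= child_score:
        return [node], own_score
    return child_nodes, child_score
```

For example, a 10-pixel segment of purity 0.5 whose two children (6 and 4 pixels) have purities 0.9 and 0.8 is split, since 0.9·6 + 0.8·4 = 8.6 exceeds 0.5·10 = 5.0.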

Original language: English (US)
Title of host publication: Proceedings of the 29th International Conference on Machine Learning, ICML 2012
Pages: 575-582
Number of pages: 8
Volume: 1
ISBN: 9781450312851
State: Published - 2012
Event: 29th International Conference on Machine Learning, ICML 2012 - Edinburgh, United Kingdom
Duration: Jun 26, 2012 - Jul 1, 2012




Cite this

Farabet, C., Couprie, C., Najman, L., & LeCun, Y. (2012). Scene parsing with multiscale feature learning, purity trees, and optimal covers. In Proceedings of the 29th International Conference on Machine Learning, ICML 2012 (Vol. 1, pp. 575-582).

