Indoor scene segmentation using a structured light sensor

Nathan Silberman, Rob Fergus

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.
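For illustration only, the abstract's CRF-based combination of intensity, depth, and a 3D location prior can be pictured as the toy energy sketched below. This is a minimal, hypothetical formulation written for this record (the function name, the simple additive unary costs, and the Potts pairwise weight are all assumptions), not the model actually used in the paper.

import numpy as np

def crf_energy(labels, rgb_unary, depth_unary, loc_prior, edges, pair_w=1.0):
    """Toy CRF energy for an indoor-scene labeling (illustrative sketch only).

    labels      : (P,) int array, one class index per pixel
    rgb_unary   : (P, C) per-pixel costs from an intensity-based classifier
    depth_unary : (P, C) per-pixel costs from a depth-based classifier
    loc_prior   : (P, C) costs from a prior over class given 3D location
    edges       : iterable of (i, j) index pairs for neighboring pixels
    pair_w      : Potts penalty when neighboring pixels take different labels
    """
    idx = np.arange(labels.shape[0])
    # Unary terms: intensity, depth, and 3D-location costs are simply summed.
    energy = (rgb_unary[idx, labels]
              + depth_unary[idx, labels]
              + loc_prior[idx, labels]).sum()
    # Pairwise Potts term: penalize label disagreement between neighbors.
    for i, j in edges:
        if labels[i] != labels[j]:
            energy += pair_w
    return energy

# Tiny synthetic example: 4 pixels, 3 classes, neighbors on a 2x2 grid.
rng = np.random.default_rng(0)
rgb_u, depth_u, loc_p = rng.random((4, 3)), rng.random((4, 3)), rng.random((4, 3))
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
labels = np.argmin(rgb_u + depth_u + loc_p, axis=1)  # greedy, unary-only labeling
print(crf_energy(labels, rgb_u, depth_u, loc_p, edges))

In practice such an energy would be minimized over labelings (e.g. with graph cuts or message passing) rather than evaluated once; the weights and features above are placeholders.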

Original language: English (US)
Title of host publication: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011
Pages: 601-608
Number of pages: 8
DOIs: 10.1109/ICCVW.2011.6130298
State: Published - 2011
Event: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011 - Barcelona, Spain
Duration: Nov 6 2011 - Nov 13 2011

Other

Other: 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011
Country: Spain
City: Barcelona
Period: 11/6/11 - 11/13/11


ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition

Cite this

Silberman, N., & Fergus, R. (2011). Indoor scene segmentation using a structured light sensor. In 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011 (pp. 601-608). [6130298] https://doi.org/10.1109/ICCVW.2011.6130298

@inproceedings{154fc65341424977979de9ec0e007d18,
title = "Indoor scene segmentation using a structured light sensor",
abstract = "In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.",
author = "Nathan Silberman and Rob Fergus",
year = "2011",
doi = "10.1109/ICCVW.2011.6130298",
language = "English (US)",
isbn = "9781467300629",
pages = "601--608",
booktitle = "2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011",

}

TY - GEN

T1 - Indoor scene segmentation using a structured light sensor

AU - Silberman, Nathan

AU - Fergus, Rob

PY - 2011

Y1 - 2011

N2 - In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.

AB - In this paper we explore how a structured light depth sensor, in the form of the Microsoft Kinect, can assist with indoor scene segmentation. We use a CRF-based model to evaluate a range of different representations for depth information and propose a novel prior on 3D location. We introduce a new and challenging indoor scene dataset, complete with accurate depth maps and dense label coverage. Evaluating our model on this dataset reveals that the combination of depth and intensity images gives dramatic performance gains over intensity images alone. Our results clearly demonstrate the utility of structured light sensors for scene understanding.

UR - http://www.scopus.com/inward/record.url?scp=84856656491&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84856656491&partnerID=8YFLogxK

U2 - 10.1109/ICCVW.2011.6130298

DO - 10.1109/ICCVW.2011.6130298

M3 - Conference contribution

SN - 9781467300629

SP - 601

EP - 608

BT - 2011 IEEE International Conference on Computer Vision Workshops, ICCV Workshops 2011

ER -