Semantic segmentation guided SLAM using Vision and LIDAR

Naman Patel, Prashanth Krishnamurthy, Farshad Khorrami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a novel framework for incorporating semantic information into a LIDAR- and camera-based Simultaneous Localization and Mapping (SLAM) framework to improve navigation accuracy and mitigate the drift caused by translation and rotation errors. Specifically, an unmanned ground vehicle (UGV) equipped with a camera and a LIDAR, operating in an indoor environment, is considered. The proposed method uses features extracted from the camera and their correspondences in the LIDAR depth map to obtain the pose relative to a keyframe, which is then refined using semantic features obtained from a deep neural network. Additionally, each point in the map is associated with a semantic label to perform semantically guided local and global pose optimization. Since semantically correlated features can be expected to have a higher likelihood of correct data association, the proposed coupling of semantic labeling and SLAM provides better robustness and accuracy. We demonstrate our approach on a UGV equipped with a camera and a LIDAR operating in an indoor environment.
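
To make the data-association idea concrete, below is a minimal Python sketch of one plausible reading of the front end described in the abstract; it is an illustration, not the authors' implementation. It assumes a LIDAR depth map already projected into the keyframe camera (depth_kf), per-pixel semantic label maps from a segmentation network (seg_kf, seg_cur), and pinhole intrinsics K; these names, and the use of ORB features via OpenCV, are assumptions made for the sketch. Matches whose semantic labels disagree are discarded before the relative pose is solved from the surviving 3D-2D correspondences.

# Sketch: semantically gated camera-LIDAR relative pose (illustrative only).
import cv2
import numpy as np

def backproject(u, v, z, K):
    # Lift pixel (u, v) with depth z (meters) to a 3D point in the camera frame.
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z], dtype=np.float32)

def relative_pose(img_kf, img_cur, depth_kf, seg_kf, seg_cur, K):
    # 1. Sparse visual features in the keyframe and the current frame.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_kf, None)
    kp2, des2 = orb.detectAndCompute(img_cur, None)
    if des1 is None or des2 is None:
        return False, None, None, None

    # 2. Appearance-only descriptor matching.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts3d, pts2d = [], []
    for m in matches:
        x1, y1 = kp1[m.queryIdx].pt
        x2, y2 = kp2[m.trainIdx].pt
        u1, v1, u2, v2 = int(x1), int(y1), int(x2), int(y2)
        z = float(depth_kf[v1, u1])
        if z <= 0.0:                        # no LIDAR return at this pixel
            continue
        # 3. Semantic gate: drop matches whose labels disagree; label-consistent
        #    features are more likely to be correct data associations.
        if seg_kf[v1, u1] != seg_cur[v2, u2]:
            continue
        pts3d.append(backproject(u1, v1, z, K))
        pts2d.append([u2, v2])

    if len(pts3d) < 4:                      # PnP needs at least 4 points
        return False, None, None, None

    # 4. Relative pose of the current frame w.r.t. the keyframe from the
    #    surviving 3D-2D correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(pts3d, dtype=np.float32),
        np.asarray(pts2d, dtype=np.float32),
        K, None)
    return ok, rvec, tvec, inliers

In the same spirit, the abstract's semantically guided local and global pose optimization could be read as softening this hard gate into a per-correspondence weight in the pose-graph or bundle-adjustment cost, so that label-consistent points dominate the optimization; that extension is not sketched here.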

Original language: English (US)
Title of host publication: 50th International Symposium on Robotics, ISR 2018
Publisher: VDE Verlag GmbH
Pages: 352-358
Number of pages: 7
ISBN (Electronic): 9781510870314
State: Published - Jan 1 2018
Event: 50th International Symposium on Robotics, ISR 2018 - Munich, Germany
Duration: Jun 20 2018 - Jun 21 2018

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction

Cite this

Patel, N., Krishnamurthy, P., & Khorrami, F. (2018). Semantic segmentation guided SLAM using Vision and LIDAR. In 50th International Symposium on Robotics, ISR 2018 (pp. 352-358). VDE Verlag GmbH.

