Sensor modality fusion with CNNs for UGV autonomous driving in indoor environments

Naman Patel, Anna Choromanska, Prashanth Krishnamurthy, Farshad Khorrami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a novel end-to-end learning framework that enables ground vehicles to autonomously navigate unknown environments by fusing raw pixels from a camera and depth measurements from a LiDAR. A deep neural network architecture is introduced to effectively perform modality fusion and to reliably predict steering commands even in the presence of sensor failures. The proposed network is trained on our own dataset, collected from a LiDAR and a camera mounted on a UGV driven through an indoor corridor environment. A comprehensive experimental evaluation is performed to demonstrate the robustness of our network architecture and to show that the proposed deep network can autonomously navigate the corridor environment. Furthermore, we demonstrate that fusing the camera and LiDAR modalities provides benefits beyond robustness to sensor failures: the fused multimodal system shows the potential to navigate around static and dynamic obstacles and to handle changes in environment geometry without being trained for these tasks.
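
To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch of a two-branch network in this spirit: one convolutional branch per modality, feature-level concatenation, and a small regression head that outputs a steering command. All layer sizes, input resolutions, and the optional modality-dropout mechanism (randomly zeroing one branch during training so the head does not over-rely on either sensor) are illustrative assumptions, not the architecture or training procedure reported in the paper.

import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Two-branch fusion CNN: camera and LiDAR branches, fused by
    concatenation before a steering-regression head. Shapes and layer
    sizes are illustrative assumptions, not the paper's architecture."""

    def __init__(self):
        super().__init__()
        # Camera branch: RGB image -> 32-dim feature vector.
        self.cam = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # LiDAR branch: single-channel range image -> 32-dim feature vector.
        self.lidar = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fusion head: concatenated features -> one continuous steering value.
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, rgb, depth, p_drop=0.0):
        f_cam, f_lidar = self.cam(rgb), self.lidar(depth)
        # Optional modality dropout during training: zero one branch at random
        # so the head cannot over-rely on a single sensor (an assumed mechanism
        # for sensor-failure robustness, not necessarily the paper's).
        if self.training and p_drop > 0 and torch.rand(()) < p_drop:
            if torch.rand(()) < 0.5:
                f_cam = torch.zeros_like(f_cam)
            else:
                f_lidar = torch.zeros_like(f_lidar)
        return self.head(torch.cat([f_cam, f_lidar], dim=1))

# Example: a batch of 4 camera frames and 4 LiDAR range images (resolutions assumed).
net = FusionNet().train()
steering = net(torch.randn(4, 3, 120, 160), torch.randn(4, 1, 120, 160), p_drop=0.2)
print(steering.shape)  # torch.Size([4, 1])

At inference time the same forward pass can be run with one branch zeroed in place of a failed sensor, which is one way a fused head of this kind can degrade gracefully.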

Original language: English (US)
Title of host publication: IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1531-1536
Number of pages: 6
Volume: 2017-September
ISBN (Electronic): 9781538626825
DOI: https://doi.org/10.1109/IROS.2017.8205958
State: Published - Dec 13 2017
Event: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017 - Vancouver, Canada
Duration: Sep 24 2017 – Sep 28 2017

Other

Other: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2017
Country: Canada
City: Vancouver
Period: 9/24/17 – 9/28/17

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Computer Vision and Pattern Recognition
  • Computer Science Applications

Cite this

Patel, N., Choromanska, A., Krishnamurthy, P., & Khorrami, F. (2017). Sensor modality fusion with CNNs for UGV autonomous driving in indoor environments. In IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems (Vol. 2017-September, pp. 1531-1536). [8205958] Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/IROS.2017.8205958

@inproceedings{8bc45601118c4ea290a2d3bf7be27445,
title = "Sensor modality fusion with CNNs for UGV autonomous driving in indoor environments",
abstract = "We present a novel end-to-end learning framework that enables ground vehicles to autonomously navigate unknown environments by fusing raw pixels from a camera and depth measurements from a LiDAR. A deep neural network architecture is introduced to effectively perform modality fusion and to reliably predict steering commands even in the presence of sensor failures. The proposed network is trained on our own dataset, collected from a LiDAR and a camera mounted on a UGV driven through an indoor corridor environment. A comprehensive experimental evaluation is performed to demonstrate the robustness of our network architecture and to show that the proposed deep network can autonomously navigate the corridor environment. Furthermore, we demonstrate that fusing the camera and LiDAR modalities provides benefits beyond robustness to sensor failures: the fused multimodal system shows the potential to navigate around static and dynamic obstacles and to handle changes in environment geometry without being trained for these tasks.",
author = "Naman Patel and Anna Choromanska and Prashanth Krishnamurthy and Farshad Khorrami",
year = "2017",
month = "12",
day = "13",
doi = "10.1109/IROS.2017.8205958",
language = "English (US)",
volume = "2017-September",
pages = "1531--1536",
booktitle = "IROS 2017 - IEEE/RSJ International Conference on Intelligent Robots and Systems",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
address = "United States",
}
