A deep learning gated architecture for UGV navigation robust to sensor failures

Naman Patel, Anna Choromanska, Prashanth Krishnamurthy, Farshad Khorrami

Research output: Contribution to journal › Article

Abstract

In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to navigate autonomously in an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (without being specifically trained for); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.
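The abstract's central idea — per-modality feature encoders whose outputs are combined through learned gates, with whole-modality dropout during training so the network tolerates a failed sensor — can be illustrated with a short sketch. The PyTorch code below is a minimal illustration under stated assumptions: the name GatedFusionNet, the layer sizes, the drop probability, and the scalar-gate design are hypothetical and are not taken from the paper.

# Minimal sketch of gating-based multimodal fusion with modality-level
# dropout (an illustrative reconstruction, NOT the authors' published
# architecture; all names and hyperparameters here are assumptions).
import torch
import torch.nn as nn

class GatedFusionNet(nn.Module):
    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Per-modality encoders: camera RGB image and LiDAR range image.
        self.cam_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lidar_enc = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Gating network: predicts one scalar weight per modality from the
        # concatenated features, so a failed sensor can be suppressed.
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Sigmoid())
        # Head mapping the fused feature to a steering command.
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))
        self.p_drop = 0.3  # chance of dropping a whole modality in training

    def forward(self, cam: torch.Tensor, lidar: torch.Tensor) -> torch.Tensor:
        f_cam = self.cam_enc(cam)
        f_lidar = self.lidar_enc(lidar)
        # Modality-level dropout: randomly zero one sensor's features so
        # the gates learn to rely on whichever sensor is still available.
        if self.training and torch.rand(1).item() < self.p_drop:
            if torch.rand(1).item() < 0.5:
                f_cam = torch.zeros_like(f_cam)
            else:
                f_lidar = torch.zeros_like(f_lidar)
        g = self.gate(torch.cat([f_cam, f_lidar], dim=1))  # (B, 2) in [0, 1]
        fused = g[:, :1] * f_cam + g[:, 1:] * f_lidar
        return self.head(fused)  # predicted steering command

# Usage: a batch of camera images (B,3,H,W) and LiDAR range images (B,1,H,W).
net = GatedFusionNet()
steer = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))

In this sketch the gate outputs act as soft per-sensor weights: when one encoder's features are zeroed (simulating a sensor failure), a trained gate can downweight that branch and produce steering commands from the remaining modality.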

Original language: English (US)
Pages (from-to): 80-97
Number of pages: 18
Journal: Robotics and Autonomous Systems
Volume: 116
DOI: 10.1016/j.robot.2019.03.001
State: Published - Jun 1 2019

Keywords

  • Autonomous vehicles
  • Deep learning for autonomous navigation
  • Learning from demonstration
  • Robustness to sensor failures
  • Sensor fusion
  • Vision/LiDAR based navigation

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Mathematics (all)
  • Computer Science Applications

Cite this

A deep learning gated architecture for UGV navigation robust to sensor failures. / Patel, Naman; Choromanska, Anna; Krishnamurthy, Prashanth; Khorrami, Farshad.

In: Robotics and Autonomous Systems, Vol. 116, 01.06.2019, p. 80-97.

Research output: Contribution to journal › Article

@article{55b6bbffa7d947be82b6ccceb8b785b9,
title = "A deep learning gated architecture for UGV navigation robust to sensor failures",
abstract = "In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to navigate autonomously in an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (without being specifically trained for); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.",
keywords = "Autonomous vehicles, Deep learning for autonomous navigation, Learning from demonstration, Robustness to sensor failures, Sensor fusion, Vision/LiDAR based navigation",
author = "Naman Patel and Anna Choromanska and Prashanth Krishnamurthy and Farshad Khorrami",
year = "2019",
month = jun,
day = "1",
doi = "10.1016/j.robot.2019.03.001",
language = "English (US)",
volume = "116",
pages = "80--97",
journal = "Robotics and Autonomous Systems",
issn = "0921-8890",
publisher = "Elsevier",

}

TY - JOUR

T1 - A deep learning gated architecture for UGV navigation robust to sensor failures

AU - Patel, Naman

AU - Choromanska, Anna

AU - Krishnamurthy, Prashanth

AU - Khorrami, Farshad

PY - 2019/6/1

Y1 - 2019/6/1

N2 - In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to navigate autonomously in an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (without being specifically trained for); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.

AB - In this paper, we introduce a novel methodology for fusing sensors and improving robustness to sensor failures in end-to-end learning-based autonomous navigation of ground vehicles in unknown environments. We propose the first learning-based camera–LiDAR fusion methodology for autonomous indoor navigation. Specifically, we develop a multimodal end-to-end learning system that maps raw depths and pixels, from the LiDAR and camera respectively, to steering commands. A novel gating-based dropout regularization technique is introduced that effectively performs multimodal sensor fusion and reliably predicts steering commands even in the presence of various sensor failures. The robustness of our network architecture is demonstrated by experimentally evaluating its ability to navigate autonomously in an indoor corridor environment. Specifically, we show through various empirical results that our framework is robust to sensor failures, partial image occlusions, modifications of the camera image intensity, and the presence of noise in the camera or LiDAR range images. Furthermore, we show that some aspects of obstacle avoidance are implicitly learned (without being specifically trained for); these learned navigation capabilities are demonstrated in ground vehicle navigation around static and dynamic obstacles.

KW - Autonomous vehicles

KW - Deep learning for autonomous navigation

KW - Learning from demonstration

KW - Robustness to sensor failures

KW - Sensor fusion

KW - Vision/LiDAR based navigation

UR - http://www.scopus.com/inward/record.url?scp=85063442642&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063442642&partnerID=8YFLogxK

U2 - 10.1016/j.robot.2019.03.001

DO - 10.1016/j.robot.2019.03.001

M3 - Article

AN - SCOPUS:85063442642

VL - 116

SP - 80

EP - 97

JO - Robotics and Autonomous Systems

JF - Robotics and Autonomous Systems

SN - 0921-8890

ER -