Reducing operator workload for indoor navigation of autonomous robots via multimodal sensor fusion

Naman Patel, Prashanth Krishnamurthy, Yi Fang, Farshad Khorrami

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a novel framework for operator assistance in indoor navigation and map building wherein the ground vehicle learns to navigate by imitating operator commands during training. Our framework reduces the workload on the human operator, simplifying human-robot interaction. An end-to-end architecture is presented that takes inputs from a camera and a LIDAR and outputs the steering angle for the ground vehicle to navigate through an indoor environment. The presented framework includes static obstacle avoidance during navigation and map building. The architecture is made more reliable by an online mechanism in which the robot introspects its output and decides whether to rely on it or to transfer vehicle control to a human pilot. The end-to-end trained framework implicitly learns to avoid obstacles. We show that our framework works in various cases where other frameworks fail.
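The abstract's pipeline — fuse camera and LIDAR features, regress a steering angle, and introspect on the output before trusting it — can be sketched as follows. This is a minimal illustration only, not the authors' architecture: the random weights, the perturbation-based stability check, and the 0.1 disagreement threshold are all hypothetical stand-ins for the trained network and its actual introspection mechanism.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights standing in for a trained end-to-end network.
W_cam = rng.normal(size=(8, 16))    # camera-feature branch
W_lidar = rng.normal(size=(8, 16))  # LIDAR-feature branch
w_out = rng.normal(size=16)         # fusion-to-steering head

def predict_steering(cam_feat, lidar_feat):
    """Fuse both modalities and regress a bounded steering command."""
    fused = np.tanh(cam_feat @ W_cam) + np.tanh(lidar_feat @ W_lidar)
    return float(np.tanh(fused @ w_out))

def introspect(cam_feat, lidar_feat, n_samples=20, noise=0.05, threshold=0.1):
    """Probe output stability under small input perturbations; if the
    predictions disagree too much, defer to the human operator."""
    preds = [
        predict_steering(
            cam_feat + rng.normal(scale=noise, size=cam_feat.shape),
            lidar_feat + rng.normal(scale=noise, size=lidar_feat.shape),
        )
        for _ in range(n_samples)
    ]
    autonomous = np.std(preds) < threshold
    return autonomous, float(np.mean(preds))

cam = rng.normal(size=8)
lidar = rng.normal(size=8)
autonomous, angle = introspect(cam, lidar)
print("autonomous" if autonomous else "hand control to operator", round(angle, 3))
```

The gate implements the paper's high-level idea — keep autonomy only when the model's own output looks reliable — with input-perturbation variance as one simple proxy for confidence.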

Original language: English (US)
Title of host publication: HRI 2017 - Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction
Publisher: IEEE Computer Society
Pages: 253-254
Number of pages: 2
Volume: Part F126657
ISBN (Electronic): 9781450348850
DOI: 10.1145/3029798.3038368
State: Published - Mar 6, 2017
Event: 12th Annual ACM/IEEE International Conference on Human-Robot Interaction, HRI 2017 - Vienna, Austria
Duration: Mar 6, 2017 - Mar 9, 2017



Keywords

  • computer vision
  • ground robots
  • machine learning
  • map building
  • navigation

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction
  • Electrical and Electronic Engineering

Cite this

Patel, N., Krishnamurthy, P., Fang, Y., & Khorrami, F. (2017). Reducing operator workload for indoor navigation of autonomous robots via multimodal sensor fusion. In HRI 2017 - Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction (Vol. Part F126657, pp. 253-254). IEEE Computer Society. https://doi.org/10.1145/3029798.3038368
