Human gaze-driven spatial tasking of an autonomous MAV

Liangzhe Yuan, Christopher Reardon, Garrett Warnell, Giuseppe Loianno

Research output: Contribution to journal › Article

Abstract

In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (e.g., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with a gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor, and decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3-D navigation waypoints to) the robot in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. In order to detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach, and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios including inspection and first response, as well as by people with disabilities that affect their mobility.
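
The abstract's key geometric step is that the gaze direction reported by the glasses lives in the head frame, so it must be rotated by the camera/IMU-estimated head orientation before it can define a 3-D waypoint in the world. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the function names (gaze_ray_world, waypoint_from_gaze) and the assumption that the waypoint is obtained by intersecting the gaze ray with a horizontal plane at the MAV's altitude are ours.

import numpy as np
from typing import Optional


def gaze_ray_world(R_wh: np.ndarray, g_h: np.ndarray) -> np.ndarray:
    """Rotate the eye tracker's gaze direction g_h (head/glasses frame) into
    the world frame using the head orientation R_wh estimated from camera +
    IMU data, and return it as a unit vector."""
    d_w = R_wh @ g_h
    return d_w / np.linalg.norm(d_w)


def waypoint_from_gaze(p_head_w: np.ndarray, d_gaze_w: np.ndarray,
                       plane_z: float) -> Optional[np.ndarray]:
    """Intersect the world-frame gaze ray with the horizontal plane z = plane_z
    (here taken as the quadrotor's flight altitude) to obtain a 3-D waypoint.
    Returns None if the ray is parallel to the plane or points away from it."""
    if abs(d_gaze_w[2]) < 1e-6:
        return None
    t = (plane_z - p_head_w[2]) / d_gaze_w[2]
    if t <= 0.0:          # intersection would lie behind the wearer
        return None
    return p_head_w + t * d_gaze_w


# Example: eyes 1.7 m above the ground, gaze pointing forward and slightly
# down, MAV flying at 1.0 m altitude.
R_wh = np.eye(3)                              # head frame aligned with world
g_h = np.array([1.0, 0.0, -0.2])              # gaze direction in glasses frame
waypoint = waypoint_from_gaze(np.array([0.0, 0.0, 1.7]),
                              gaze_ray_world(R_wh, g_h), plane_z=1.0)
print(waypoint)                               # -> roughly [3.5, 0.0, 1.0]

In the paper, the flying robot is detected with a deep neural network and the relative human-robot position is estimated from that detection; the plane-intersection step above is only one plausible way to turn a gaze ray into a 3-D waypoint.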

Original language: English (US)
Article number: 8626140
Pages (from-to): 1343-1350
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 4
Issue number: 2
DOIs: 10.1109/LRA.2019.2895419
State: Published - Apr 1 2019

Fingerprint

Micro air vehicle (MAV), Units of measurement, Robots, Glass, Navigation, Cameras, Eye Tracking, Virtual reality, Pipelines, Inspection, Unit, Disability, 3D, Human, Neural Networks, Scenarios, Motion

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Human-Computer Interaction
  • Biomedical Engineering
  • Mechanical Engineering
  • Control and Optimization
  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition

Cite this

Human gaze-driven spatial tasking of an autonomous MAV. / Yuan, Liangzhe; Reardon, Christopher; Warnell, Garrett; Loianno, Giuseppe.

In: IEEE Robotics and Automation Letters, Vol. 4, No. 2, 8626140, 01.04.2019, p. 1343-1350.

Research output: Contribution to journal › Article

Yuan, Liangzhe ; Reardon, Christopher ; Warnell, Garrett ; Loianno, Giuseppe. / Human gaze-driven spatial tasking of an autonomous MAV. In: IEEE Robotics and Automation Letters. 2019 ; Vol. 4, No. 2. pp. 1343-1350.
@article{a9228eb194124e72b23ca5503a40b8a9,
title = "Human gaze-driven spatial tasking of an autonomous MAV",
abstract = "In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor, and decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3-D navigation waypoints to) the robot in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. In order to detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach, and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios including inspection and first response, as well as by people with disabilities that affect their mobility.",
author = "Liangzhe Yuan and Christopher Reardon and Garrett Warnell and Giuseppe Loianno",
year = "2019",
month = "4",
day = "1",
doi = "10.1109/LRA.2019.2895419",
language = "English (US)",
volume = "4",
pages = "1343--1350",
journal = "IEEE Robotics and Automation Letters",
issn = "2377-3766",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "2",

}

TY - JOUR

T1 - Human gaze-driven spatial tasking of an autonomous MAV

AU - Yuan, Liangzhe

AU - Reardon, Christopher

AU - Warnell, Garrett

AU - Loianno, Giuseppe

PY - 2019/4/1

Y1 - 2019/4/1

N2 - In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor, and decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3-D navigation waypoints to) the robot in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. In order to detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach, and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios including inspection and first response, as well as by people with disabilities that affect their mobility.

AB - In this letter, we address the problem of providing human-assisted quadrotor navigation using a set of eye tracking glasses. The advent of these devices (i.e., eye tracking glasses, virtual reality tools, etc.) provides the opportunity to create new, noninvasive forms of interaction between humans and robots. We show how a set of glasses equipped with gaze tracker, a camera, and an inertial measurement unit (IMU) can be used to estimate the relative position of the human with respect to a quadrotor, and decouple the gaze direction from the head orientation, which allows the human to spatially task (i.e., send new 3-D navigation waypoints to) the robot in an uninstrumented environment. We decouple the gaze direction from head motion by tracking the human's head orientation using a combination of camera and IMU data. In order to detect the flying robot, we train and use a deep neural network. We experimentally evaluate the proposed approach, and show that our pipeline has the potential to enable gaze-driven autonomy for spatial tasking. The proposed approach can be employed in multiple scenarios including inspection and first response, as well as by people with disabilities that affect their mobility.

UR - http://www.scopus.com/inward/record.url?scp=85063310637&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85063310637&partnerID=8YFLogxK

U2 - 10.1109/LRA.2019.2895419

DO - 10.1109/LRA.2019.2895419

M3 - Article

AN - SCOPUS:85063310637

VL - 4

SP - 1343

EP - 1350

JO - IEEE Robotics and Automation Letters

JF - IEEE Robotics and Automation Letters

SN - 2377-3766

IS - 2

M1 - 8626140

ER -