Ontology-based state representations for intention recognition in human-robot collaborative environments

Craig Schlenoff, Anthony Pietromartire, Zeid Kootbally, Stephen Balakirsky, Sebti Foufou

Research output: Contribution to journal › Article

Abstract

In this paper, we describe a novel approach for representing state information for the purpose of intention recognition in cooperative human-robot environments. States are represented by a combination of spatial relationships in a Cartesian frame along with cardinal direction information. This approach is applied to a manufacturing kitting operation, where humans and robots are working together to develop kits. Based upon a set of predefined high-level state relationships that must be true for future actions to occur, a robot can use the detailed state information described in this paper to infer the probability of subsequent actions occurring. This would allow the robot to better help the human with the task or, at a minimum, better stay out of his or her way.
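
As the abstract and keywords indicate, the state representation combines RCC8 spatial relations with cardinal-direction information. Below is a minimal illustrative sketch, not taken from the paper: the Box, rcc8, and cardinal names and the axis-aligned 2D simplification are assumptions made here only to show how such combined state predicates might be derived for two objects on a work surface, e.g. a part and a kit tray in the kitting scenario.

# Illustrative sketch (assumed, not the authors' implementation): derive
# coarse RCC8 and cardinal-direction predicates for axis-aligned 2D regions.
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box of an object on the work surface."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

def rcc8(a: Box, b: Box) -> str:
    """Return a coarse RCC8 relation between boxes a and b."""
    # Disconnected: no overlap or contact at all.
    if a.xmax < b.xmin or b.xmax < a.xmin or a.ymax < b.ymin or b.ymax < a.ymin:
        return "DC"
    # Externally connected: boundaries touch but interiors do not overlap.
    if a.xmax == b.xmin or b.xmax == a.xmin or a.ymax == b.ymin or b.ymax == a.ymin:
        return "EC"
    a_in_b = a.xmin >= b.xmin and a.xmax <= b.xmax and a.ymin >= b.ymin and a.ymax <= b.ymax
    b_in_a = b.xmin >= a.xmin and b.xmax <= a.xmax and b.ymin >= a.ymin and b.ymax <= a.ymax
    if a_in_b and b_in_a:
        return "EQ"                       # identical regions
    if a_in_b:
        # Tangential vs. non-tangential proper part depends on boundary contact.
        touches = a.xmin == b.xmin or a.xmax == b.xmax or a.ymin == b.ymin or a.ymax == b.ymax
        return "TPP" if touches else "NTPP"
    if b_in_a:
        touches = b.xmin == a.xmin or b.xmax == a.xmax or b.ymin == a.ymin or b.ymax == a.ymax
        return "TPPi" if touches else "NTPPi"
    return "PO"                           # partial overlap

def cardinal(a: Box, b: Box) -> str:
    """Return the cardinal direction of a's center relative to b's center."""
    ax, ay = (a.xmin + a.xmax) / 2, (a.ymin + a.ymax) / 2
    bx, by = (b.xmin + b.xmax) / 2, (b.ymin + b.ymax) / 2
    ns = "N" if ay > by else "S" if ay < by else ""
    ew = "E" if ax > bx else "W" if ax < bx else ""
    return (ns + ew) or "SAME"

# Example: is the part inside the kit tray, and in which corner does it sit?
part = Box(2.0, 2.0, 3.0, 3.0)
tray = Box(0.0, 0.0, 10.0, 10.0)
print(rcc8(part, tray), cardinal(part, tray))   # -> NTPP SW

State facts of this kind (e.g. "part is NTPP of tray, to the SW") are the sort of detailed, predefined relationships the paper proposes checking against the preconditions of future actions to estimate their probability.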

Original language: English (US)
Pages (from-to): 1224-1234
Number of pages: 11
Journal: Robotics and Autonomous Systems
Volume: 61
Issue number: 11
DOIs: 10.1016/j.robot.2013.04.004
State: Published - Nov 1 2013

Keywords

  • Human-robot safety
  • Intention recognition
  • Ontologies
  • RCC8
  • State-based representation

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Software
  • Mathematics (all)
  • Computer Science Applications

Cite this

@article{88ad5d60f1fd4bb4a2d6bceb2db8ddcb,
  title     = "Ontology-based state representations for intention recognition in human-robot collaborative environments",
  abstract  = "In this paper, we describe a novel approach for representing state information for the purpose of intention recognition in cooperative human-robot environments. States are represented by a combination of spatial relationships in a Cartesian frame along with cardinal direction information. This approach is applied to a manufacturing kitting operation, where humans and robots are working together to develop kits. Based upon a set of predefined high-level state relationships that must be true for future actions to occur, a robot can use the detailed state information described in this paper to infer the probability of subsequent actions occurring. This would allow the robot to better help the human with the task or, at a minimum, better stay out of his or her way.",
  keywords  = "Human-robot safety, Intention recognition, Ontologies, RCC8, State-based representation",
  author    = "Craig Schlenoff and Anthony Pietromartire and Zeid Kootbally and Stephen Balakirsky and Sebti Foufou",
  year      = "2013",
  month     = "11",
  day       = "1",
  doi       = "10.1016/j.robot.2013.04.004",
  language  = "English (US)",
  volume    = "61",
  pages     = "1224--1234",
  journal   = "Robotics and Autonomous Systems",
  issn      = "0921-8890",
  publisher = "Elsevier",
  number    = "11",
}
