Automatically identifying targets users interact with during real world tasks

Amy Hurst, Scott E. Hudson, Jennifer Mankoff

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

Abstract

Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete because it does not support all targets found in modern interfaces, and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74% of the targets were correctly identified by the API alone.
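The hybrid strategy the abstract describes (prefer accessibility-API data, fall back to vision-derived bounds when the API misses a target) can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the `Target`, `api_lookup`, `vision_fallback`, and `identify_target` names are hypothetical, and the real system uses machine learning and frame-differencing of visual cues rather than the placeholder box returned here.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Target:
    """A clickable region, with the subsystem that reported it."""
    x: int
    y: int
    w: int
    h: int
    source: str  # "api" (accessibility API) or "vision" (computer vision)

    def contains(self, px: int, py: int) -> bool:
        return self.x <= px < self.x + self.w and self.y <= py < self.y + self.h

def api_lookup(point: Tuple[int, int], api_targets: List[Target]) -> Optional[Target]:
    """Return the smallest API-reported target containing the click, if any.
    Choosing the smallest hit approximates resolving nested widgets."""
    hits = [t for t in api_targets if t.contains(*point)]
    return min(hits, key=lambda t: t.w * t.h) if hits else None

def vision_fallback(point: Tuple[int, int]) -> Target:
    """Placeholder for a vision pass that would segment the region around
    the click (e.g. from hover-highlight affordances); here it just returns
    a fixed-size box centered near the click point."""
    px, py = point
    return Target(px - 20, py - 10, 40, 20, source="vision")

def identify_target(point: Tuple[int, int], api_targets: List[Target]) -> Target:
    """Hybrid resolution: trust the accessibility API when it reports a
    target under the click; otherwise fall back to the vision estimate."""
    t = api_lookup(point, api_targets)
    return t if t is not None else vision_fallback(point)
```

In this sketch, a click at (50, 15) inside an API-reported 100x30 button resolves via the API, while a click in a region the API does not cover falls through to the vision estimate, mirroring the incomplete-API-coverage problem the abstract identifies.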

Original language: English (US)
Title of host publication: IUI 2010 - Proceedings of the 14th ACM International Conference on Intelligent User Interfaces
Pages: 11-20
Number of pages: 10
DOIs: 10.1145/1719970.1719973
State: Published - Apr 26 2010
Event: 14th ACM International Conference on Intelligent User Interfaces, IUI 2010 - Hong Kong, China
Duration: Feb 7 2010 - Feb 10 2010

Publication series

Name: International Conference on Intelligent User Interfaces, Proceedings IUI

Conference

Conference: 14th ACM International Conference on Intelligent User Interfaces, IUI 2010
Country: China
City: Hong Kong
Period: 2/7/10 - 2/10/10

Keywords

  • Computer accessibility
  • Pointing input
  • Target identification
  • Usability analysis

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction

Cite this

Hurst, A., Hudson, S. E., & Mankoff, J. (2010). Automatically identifying targets users interact with during real world tasks. In IUI 2010 - Proceedings of the 14th ACM International Conference on Intelligent User Interfaces (pp. 11-20). (International Conference on Intelligent User Interfaces, Proceedings IUI). https://doi.org/10.1145/1719970.1719973

@inproceedings{1c224437ca8440ee9b22fcff0aadb0b4,
title = "Automatically identifying targets users interact with during real world tasks",
abstract = "Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete because it does not support all targets found in modern interfaces, and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89{\%} accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74{\%} of the targets were correctly identified by the API alone.",
keywords = "Computer accessibility, Pointing input, Target identification, Usability analysis",
author = "Amy Hurst and Hudson, {Scott E.} and Jennifer Mankoff",
year = "2010",
month = "4",
day = "26",
doi = "10.1145/1719970.1719973",
language = "English (US)",
isbn = "9781605585154",
series = "International Conference on Intelligent User Interfaces, Proceedings IUI",
pages = "11--20",
booktitle = "IUI 2010 - Proceedings of the 14th ACM International Conference on Intelligent User Interfaces",

}

TY - GEN

T1 - Automatically identifying targets users interact with during real world tasks

AU - Hurst, Amy

AU - Hudson, Scott E.

AU - Mankoff, Jennifer

PY - 2010/4/26

Y1 - 2010/4/26

AB - Information about the location and size of the targets that users interact with in real world settings can enable new innovations in human performance assessment and software usability analysis. Accessibility APIs provide some information about the size and location of targets. However, this information is incomplete because it does not support all targets found in modern interfaces, and the reported sizes can be inaccurate. These accessibility APIs access the size and location of targets through low-level hooks to the operating system or an application. We have developed an alternative solution for target identification that leverages visual affordances in the interface, and the visual cues produced as users interact with targets. We have used our novel target identification technique in a hybrid solution that combines machine learning, computer vision, and accessibility API data to find the size and location of targets users select with 89% accuracy. Our hybrid approach is superior to the performance of the accessibility API alone: in our dataset of 1355 targets covering 8 popular applications, only 74% of the targets were correctly identified by the API alone.

KW - Computer accessibility

KW - Pointing input

KW - Target identification

KW - Usability analysis

UR - http://www.scopus.com/inward/record.url?scp=77951109801&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=77951109801&partnerID=8YFLogxK

U2 - 10.1145/1719970.1719973

DO - 10.1145/1719970.1719973

M3 - Conference contribution

AN - SCOPUS:77951109801

SN - 9781605585154

T3 - International Conference on Intelligent User Interfaces, Proceedings IUI

SP - 11

EP - 20

BT - IUI 2010 - Proceedings of the 14th ACM International Conference on Intelligent User Interfaces

ER -