Crowdsourced facial expression mapping using a 3D avatar

Crystal Butler, Lakshminarayanan Subramanian, Stephanie Michalowicz

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Facial expression mapping is the process of attributing signal values to a particular set of muscle activations in the face. This paper proposes the development of a broad lexicon of quantifiable, reproducible facial expressions with known signal values, using an expressive 3D model and crowdsourced labeling data. Traditionally, coding muscle movements in the face is a time-consuming manual process performed by specialists. Identifying the communicative content of an expression generally requires generating large sets of posed photographs, with identifying labels chosen from a circumscribed list. Consequently, the widely accepted collection of configurations with known meanings is limited to six basic expressions of emotion. Our approach defines mappings from parameterized facial expressions displayed by a 3D avatar to their semantic representations. By collecting large, free-response label sets from naïve raters and using natural language processing techniques, we converge on a semantic centroid, or single label, quickly and with low overhead.
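The abstract's "semantic centroid" step can be pictured as: embed each free-response label as a vector, average the vectors, and return the label closest to that average. The sketch below is a toy illustration only — the hand-made 3-d "embeddings" and label sets are invented values standing in for a real word-embedding model, and the paper's actual NLP pipeline is not detailed in this abstract.

```python
import math

# Toy 3-d "embeddings" for rater-supplied labels (hypothetical values;
# a real system would use a trained word-embedding model).
embeddings = {
    "happy":   (0.9, 0.1, 0.0),
    "joyful":  (0.8, 0.2, 0.1),
    "pleased": (0.7, 0.3, 0.1),
    "angry":   (-0.8, 0.1, 0.5),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def semantic_centroid(labels):
    """Return the label whose vector lies closest to the mean of all
    the raters' label vectors (the 'semantic centroid')."""
    vecs = [embeddings[lab] for lab in labels]
    mean = tuple(sum(c) / len(vecs) for c in zip(*vecs))
    return max(labels, key=lambda lab: cosine(embeddings[lab], mean))

# Four "happy"-family responses outvote the lone "angry" outlier,
# so the consensus label comes from the happy cluster.
consensus = semantic_centroid(["happy", "joyful", "pleased", "angry", "happy"])
```

With real embeddings the same aggregation tolerates synonym variation ("joyful" vs. "pleased") without forcing raters to pick from a fixed list, which is the point of the free-response design.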

Original language: English (US)
Title of host publication: CHI EA 2016: #chi4good - Extended Abstracts, 34th Annual CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
Pages: 2798-2804
Number of pages: 7
Volume: 07-12-May-2016
ISBN (Electronic): 9781450340823
DOI: 10.1145/2851581.2892535
State: Published - May 7, 2016
Event: 34th Annual CHI Conference on Human Factors in Computing Systems, CHI EA 2016 - San Jose, United States
Duration: May 7, 2016 - May 12, 2016



Keywords

  • 3D facial modeling
  • Avatars
  • Crowdsourcing
  • Expression recognition
  • Facial expressions
  • FACS

ASJC Scopus subject areas

  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design
  • Software

Cite this

Butler, C., Subramanian, L., & Michalowicz, S. (2016). Crowdsourced facial expression mapping using a 3D avatar. In CHI EA 2016: #chi4good - Extended Abstracts, 34th Annual CHI Conference on Human Factors in Computing Systems (Vol. 07-12-May-2016, pp. 2798-2804). Association for Computing Machinery. https://doi.org/10.1145/2851581.2892535
