More than a feeling: The MiFace framework for defining facial communication mappings

Crystal Butler, Stephanie Michalowicz, Lakshminarayanan Subramanian, Winslow Burleson

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Facial expressions transmit a variety of social, grammatical, and affective signals. For technology to leverage this rich source of communication, tools that better model the breadth of information they convey are required. MiFace is a novel framework for creating expression lexicons that map signal values to parameterized facial muscle movements inferred by trained experts. The set of generally accepted expressions established in this way is limited to six basic displays of affect. In contrast, our approach generatively simulates muscle movements on a 3D avatar. By applying natural language processing techniques to crowdsourced free-response labels for the resulting images, we efficiently converge on an expression's value across signal categories. Two studies returned 218 discriminable facial expressions with 51 unique labels. The six basic emotions are included, but we additionally define such nuanced expressions as embarrassed, curious, and hopeful.

Original language: English (US)
Title of host publication: UIST 2017 - Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology
Publisher: Association for Computing Machinery, Inc
Pages: 773-786
Number of pages: 14
ISBN (Electronic): 9781450349819
DOI: 10.1145/3126594.3126640
State: Published - Oct 20 2017
Event: 30th Annual ACM Symposium on User Interface Software and Technology, UIST 2017 - Quebec City, Canada
Duration: Oct 22 2017 - Oct 25 2017

Other

Other: 30th Annual ACM Symposium on User Interface Software and Technology, UIST 2017
Country: Canada
City: Quebec City
Period: 10/22/17 - 10/25/17


Keywords

  • 3D modeling
  • Affective computing
  • Avatars
  • Facial expression recognition
  • Natural language processing
  • Social signal processing
  • Virtual humans

ASJC Scopus subject areas

  • Computer Graphics and Computer-Aided Design
  • Human-Computer Interaction
  • Software

Cite this

Butler, C., Michalowicz, S., Subramanian, L., & Burleson, W. (2017). More than a feeling: The MiFace framework for defining facial communication mappings. In UIST 2017 - Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology (pp. 773-786). Association for Computing Machinery, Inc. https://doi.org/10.1145/3126594.3126640

