Major cast detection in video using both speaker and face information

Zhu Liu, Yao Wang

Research output: Contribution to journal › Article

Abstract

Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the accumulated temporal and spatial presence of the cast. The proposed algorithm has been integrated into a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, helping the user effectively digest the theme of the video.
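The three-step pipeline in the abstract (cluster speakers, cluster face tracks, then match the two by temporal co-occurrence and rank by accumulated presence) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes clustering has already produced labeled time intervals, and it uses a simple greedy one-to-one assignment where the paper may use a different matching scheme.

```python
from collections import defaultdict

def overlap(a, b):
    """Temporal overlap (in seconds) between two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def detect_major_casts(speaker_segments, face_tracks):
    """Pair speaker clusters with face clusters by temporal co-occurrence,
    then rank the resulting casts by accumulated speech plus screen time.

    speaker_segments: list of (speaker_id, start, end)
    face_tracks:      list of (face_id, start, end)
    Returns a list of (speaker_id, face_id, total_time), most important first.
    """
    # Step 3a: accumulate co-occurrence time for every speaker/face pair.
    cooc = defaultdict(float)
    for spk, s0, s1 in speaker_segments:
        for face, f0, f1 in face_tracks:
            cooc[(spk, face)] += overlap((s0, s1), (f0, f1))

    # Step 3b: greedy one-to-one assignment, strongest pairing first.
    pairs = []
    used_spk, used_face = set(), set()
    for (spk, face), t in sorted(cooc.items(), key=lambda kv: -kv[1]):
        if t > 0 and spk not in used_spk and face not in used_face:
            pairs.append((spk, face))
            used_spk.add(spk)
            used_face.add(face)

    # Importance: accumulated speech time plus face-track time per cast.
    speech = defaultdict(float)
    for spk, s0, s1 in speaker_segments:
        speech[spk] += s1 - s0
    screen = defaultdict(float)
    for face, f0, f1 in face_tracks:
        screen[face] += f1 - f0

    casts = [(spk, face, speech[spk] + screen[face]) for spk, face in pairs]
    casts.sort(key=lambda c: -c[2])
    return casts
```

For example, a speaker heard at 0-10 s and 20-30 s whose face is on screen at 0-9 s and 21-29 s accumulates far more co-occurrence (and total presence) than a speaker heard only at 12-15 s, so the first cast is ranked above the second.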

Original language: English (US)
Pages (from-to): 89-101
Number of pages: 13
Journal: IEEE Transactions on Multimedia
Volume: 9
Issue number: 1
DOIs: 10.1109/TMM.2006.886360
State: Published - Jan 2007

Keywords

  • Content-based multimedia indexing
  • Face detection
  • Major cast detection
  • Media integration
  • Speaker segmentation
  • Video browsing
  • Video summary

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Computer Graphics and Computer-Aided Design
  • Software

Cite this

Major cast detection in video using both speaker and face information. / Liu, Zhu; Wang, Yao.

In: IEEE Transactions on Multimedia, Vol. 9, No. 1, 01.2007, p. 89-101.

@article{38d94c5b497347879c3c0582f815c89f,
title = "Major cast detection in video using both speaker and face information",
abstract = "Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the accumulated temporal and spatial presence of the cast. The proposed algorithm has been integrated into a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, helping the user effectively digest the theme of the video.",
keywords = "Content-based multimedia indexing, Face detection, Major cast detection, Media integration, Speaker segmentation, Video browsing, Video summary",
author = "Zhu Liu and Yao Wang",
year = "2007",
month = jan,
doi = "10.1109/TMM.2006.886360",
language = "English (US)",
volume = "9",
pages = "89--101",
journal = "IEEE Transactions on Multimedia",
issn = "1520-9210",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",

}

TY - JOUR

T1 - Major cast detection in video using both speaker and face information

AU - Liu, Zhu

AU - Wang, Yao

PY - 2007/1

Y1 - 2007/1

N2 - Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the accumulated temporal and spatial presence of the cast. The proposed algorithm has been integrated into a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, helping the user effectively digest the theme of the video.

AB - Major casts, for example, the anchor persons or reporters in news broadcast programs and the principal characters in movies, play an important role in video, and their occurrences provide meaningful indices for organizing and presenting video content. This paper describes a new approach for automatically generating a list of major casts in a video sequence based on multiple modalities, specifically, speaker information in the audio track and face information in the video track. The core algorithm is composed of three steps. First, speaker boundaries are detected and speaker segments are clustered in the audio stream. Second, face appearances are tracked and face tracks are clustered in the video stream. Finally, correspondences between speakers and faces are determined based on their temporal co-occurrence. A list of major casts is constructed and ranked in an order that reflects each cast's importance, which is determined by the accumulated temporal and spatial presence of the cast. The proposed algorithm has been integrated into a major-cast-based video browsing system, which presents the face icon and marks the speech locations in the time stream for each detected major cast. The system provides a semantically meaningful summary of the video content, helping the user effectively digest the theme of the video.

KW - Content-based multimedia indexing

KW - Face detection

KW - Major cast detection

KW - Media integration

KW - Speaker segmentation

KW - Video browsing

KW - Video summary

UR - http://www.scopus.com/inward/record.url?scp=33846216333&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33846216333&partnerID=8YFLogxK

U2 - 10.1109/TMM.2006.886360

DO - 10.1109/TMM.2006.886360

M3 - Article

VL - 9

SP - 89

EP - 101

JO - IEEE Transactions on Multimedia

JF - IEEE Transactions on Multimedia

SN - 1520-9210

IS - 1

ER -