REF 2021

Output details

13 - Electrical and Electronic Engineering, Metallurgy and Materials

University of Surrey

Article title

Model-Based Synthesis of Visual Speech Movements from 3D Video

Type
D - Journal article
Title of journal
EURASIP Journal on Audio, Speech, and Music Processing
Article number
-
Volume number
2009
Issue number
-
First page of article
1
ISSN of journal
1687-4722
Year of publication
2009
URL
-
Number of additional authors
-
Additional information

This paper explains how to synthesise visual speech movements using a parameterisation of 3D face dynamics based on lip shape and velocity. Although faces are vital to communication, human sensitivity to facial appearance makes such synthesis challenging. In this paper, visual speech animations are constructed from audio input by mapping the audio onto lip movements and selecting stored phonetic units that match the target utterance. Combining this representation with unit selection yields a visible improvement in animation quality, enabling facial animation solutions for film work on actors' digital doubles (Framestore - Nico Scapel, Head of Animation <n.scapel@framestore.com>) and for visual communication.
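The unit-selection step described above can be sketched as a dynamic-programming search: for each position in the target utterance, candidate stored units are scored by how well they match the target (target cost) and how smoothly they join the preceding unit (join cost), and the lowest-cost path is traced back. The sketch below is a minimal, hypothetical illustration of that general technique, not the paper's implementation; the scalar "units" stand in for the paper's 3D lip-shape-and-velocity parameters, and both cost functions are placeholder assumptions.

```python
def select_units(targets, candidates, target_cost, join_cost):
    """Viterbi-style unit selection (illustrative sketch).

    targets:     list of target specifications, one per utterance position
    candidates:  list (per position) of lists of stored candidate units
    target_cost: cost of using a unit for a given target specification
    join_cost:   cost of concatenating two consecutive units
    """
    n = len(targets)
    # best[i][j] = (cumulative cost, backpointer) for candidate j at position i
    best = [[(target_cost(targets[0], c), -1) for c in candidates[0]]]
    for i in range(1, n):
        row = []
        for c in candidates[i]:
            # cheapest way to reach candidate c from any previous candidate
            cost, prev = min(
                (best[i - 1][k][0] + join_cost(candidates[i - 1][k], c), k)
                for k in range(len(candidates[i - 1]))
            )
            row.append((cost + target_cost(targets[i], c), prev))
        best.append(row)
    # trace the lowest-cost path back through the backpointers
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    path = []
    for i in range(n - 1, -1, -1):
        path.append(candidates[i][j])
        j = best[i][j][1]
    return list(reversed(path))


# Toy usage: scalar units, absolute-difference target cost, and a join cost
# penalising jumps between consecutive units (both costs are assumptions).
chosen = select_units(
    targets=[1.0, 2.0, 3.0],
    candidates=[[0.9, 1.5], [1.8, 2.5], [2.9, 3.5]],
    target_cost=lambda t, c: abs(t - c),
    join_cost=lambda a, b: 0.5 * abs(a - b),
)
```

In a real system the candidates would be stored motion segments captured from 3D video, and the costs would compare phonetic context and lip-shape/velocity continuity rather than scalar differences.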

Interdisciplinary
-
Cross-referral requested
-
Research group
None
Proposed double-weighted
No
Double-weighted statement
-
Reserve for a double-weighted output
No
Non-English
No
English abstract
-