
Output details

11 - Computer Science and Informatics

University of Edinburgh

Output 325 of 401 in the submission
Output title

Speech-driven lip motion generation with a trajectory HMM

Type
E - Conference contribution
DOI
-
Name of conference/published proceedings
INTERSPEECH 2008, 9th Annual Conference of the International Speech Communication Association, Brisbane, Australia, September 22-26, 2008
Volume number
-
Issue number
-
First page of article
2314
ISSN of proceedings
-
Year of publication
2008
Number of additional authors
2
Additional information

Originality: An investigation into the optimal model unit for synchronising lip motion with speech in speech-driven animated agents based on statistical models. This was the first system to use trajectory hidden Markov models to map audio speech features to lip motions automatically, without heuristic smoothing filters.

Significance: Objective and subjective comparisons in the study showed that the quality of synthesised lip motions depends on the model unit chosen for lip motion (i.e. the viseme set), and suggested that the optimal unit can be selected automatically rather than defined heuristically.

Rigour: A 3D motion-capture dataset was developed for speech-driven lip synchronisation.
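The key property of the trajectory HMM mentioned above is that it produces smooth output trajectories directly from the statistical model, rather than smoothing frame-wise predictions with a heuristic filter. A minimal sketch of the underlying idea, maximum-likelihood parameter generation from static and delta (velocity) statistics, is given below; the function name, window definitions, and inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mlpg(static_mean, static_var, delta_mean, delta_var):
    """Trajectory generation from static and delta statistics (sketch).

    Solves (W' P W) c = W' P mu for the smooth trajectory c, where W
    stacks an identity (static) window and a centred first-difference
    (delta) window, and P is the diagonal precision matrix. This is the
    standard ML parameter-generation formulation, shown here for a
    one-dimensional feature stream for clarity.
    """
    T = len(static_mean)
    # Static window: each frame observes its own value.
    W_s = np.eye(T)
    # Delta window: centred difference 0.5 * (c[t+1] - c[t-1]),
    # clamped at the sequence boundaries.
    W_d = np.zeros((T, T))
    for t in range(T):
        W_d[t, max(t - 1, 0)] -= 0.5
        W_d[t, min(t + 1, T - 1)] += 0.5
    W = np.vstack([W_s, W_d])
    mu = np.concatenate([static_mean, delta_mean])
    prec = np.concatenate([1.0 / static_var, 1.0 / delta_var])
    # Normal equations: smoothness comes from the delta constraints,
    # not from any post-hoc filtering of the output.
    A = W.T @ (prec[:, None] * W)
    b = W.T @ (prec * mu)
    return np.linalg.solve(A, b)
```

Tightening the delta variances (i.e. trusting the velocity statistics more) pulls the generated trajectory toward the modelled dynamics, which is how the trajectory HMM avoids the jerky frame-by-frame output that heuristic filters were traditionally used to repair.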

Interdisciplinary
-
Cross-referral requested
-
Research group
D - Institute for Language, Cognition & Computation
Citation count
0
Proposed double-weighted
No
Double-weighted statement
-
Reserve for a double-weighted output
No
Non-English
No
English abstract
-