Speech perception involves two senses, hearing and seeing: we hear and lip-read speech. The synchrony between acoustic and visual speech is a driving force behind the development of facial animators, which predict facial motion from an acoustic speech input. A computationally and conceptually simple model of audio-visual speech production is examined, one that relates acoustic speech to visual speech through a linear transformation. The model is tested with a large set of generic sentences, as well as with sentences built from highly controlled contexts to elicit specific types of articulator motion. On average, the transformation accounts for 70% of facial motion, though this result varies with sentence context. Results are interpreted in terms of the model's assumptions in approximating the speech production system. This research is relevant to animating faces for multimedia or rehabilitation purposes and falls within the framework of current speech perception theories.
Edition | Availability |
---|---|
Exploring the mathematical relationship between acoustic and visual speech (2005, in English). ISBN 0494071001, 9780494071007 | |
Book Details
Edition Notes
Source: Masters Abstracts International, Volume: 44-02, page: 0957.
Thesis (M.A.Sc.)--University of Toronto, 2005.
Electronic version licensed for access by U. of T. users.
GERSTEIN MICROTEXT copy on microfiche (2 microfiches).
History
- Created October 21, 2008
- 2 revisions
Date | Action | Note |
---|---|---|
December 15, 2009 | Edited by WorkBot | link works |
October 21, 2008 | Created by ImportBot | Imported from University of Toronto MARC record |