Journal of Computer and Communications

Volume 5, Issue 10 (August 2017)

ISSN Print: 2327-5219   ISSN Online: 2327-5227

Google-based Impact Factor: 1.12

HMM-Based Photo-Realistic Talking Face Synthesis Using Facial Expression Parameter Mapping with Deep Neural Networks

PP. 50-65
DOI: 10.4236/jcc.2017.510006

ABSTRACT

This paper proposes a technique for synthesizing pixel-based photo-realistic talking face animation using two-step synthesis with HMMs and DNNs. We introduce facial expression parameters as an intermediate representation that corresponds well with both the input contexts and the output pixel data of the face images. The sequences of facial expression parameters are modeled using context-dependent HMMs with static and dynamic features. The mapping from the expression parameters to the target pixel images is trained using DNNs. We examine the amount of training data required for the HMMs and DNNs, and compare the performance of the proposed technique with that of a conventional PCA-based technique through objective and subjective evaluation experiments.
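The abstract describes a two-stage pipeline: context-dependent HMMs generate trajectories of facial expression parameters, and a DNN maps each parameter frame to face-image pixels. The following is a minimal sketch of that second, parameter-to-pixel stage as a plain feed-forward regression network in PyTorch; PARAM_DIM, the image resolution, the layer sizes, and the ParamToPixelDNN class are all illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

PARAM_DIM = 32          # assumed dimensionality of the facial expression parameters
IMG_H, IMG_W = 64, 64   # assumed resolution of the synthesized face images

class ParamToPixelDNN(nn.Module):
    """Regresses a flattened grayscale face image from one frame of expression parameters."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PARAM_DIM, 512), nn.Sigmoid(),
            nn.Linear(512, 512), nn.Sigmoid(),
            nn.Linear(512, IMG_H * IMG_W),  # one output unit per pixel
        )

    def forward(self, params):
        return self.net(params)

model = ParamToPixelDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical paired training data: expression-parameter frames and target images.
params = torch.randn(128, PARAM_DIM)
images = torch.rand(128, IMG_H * IMG_W)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(params), images)   # pixel-wise regression loss
    loss.backward()
    optimizer.step()

In the paper's setting, the input frames would come from HMM-generated expression-parameter trajectories (smoothed using static and dynamic features) rather than random tensors, and the flattened outputs would be reshaped to IMG_H x IMG_W images to form the frames of the talking face animation.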

Share and Cite:

Sato, K., Nose, T. and Ito, A. (2017) HMM-Based Photo-Realistic Talking Face Synthesis Using Facial Expression Parameter Mapping with Deep Neural Networks. Journal of Computer and Communications, 5, 50-65. doi: 10.4236/jcc.2017.510006.

Cited by

[1] Convolution-Based Design for Real-Time Pose Recognition and Character Animation Generation
Wireless Communications and Mobile Computing, 2022
[2] Face Modeling
2020
[3] DNN-Based Talking Movie Generation with Face Direction Consideration
Recent Advances in Intelligent Information Hiding and Multimedia Signal Processing, 2018

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.