Punchline-Driven Hierarchical Facial Animation via Multimodal Large Language Models
Abstract
Speech-driven 3D facial animation has achieved high phonetic realism, but current models often fail to convey the expressive peaks, such as punchlines, that are critical for engaging communication. This paper introduces a novel framework that addresses this gap by leveraging a Multimodal Large Language Model (MLLM) for a deep, semantic understanding of speech. Our core innovation is a system that explicitly models and animates the climax of an utterance. The framework first employs a multimodal punchline detection module to identify moments of high expressive intent from both acoustic and textual cues. This signal guides our Punchline-Driven Hierarchical Animator (PDHA), which functionally decomposes the face into distinct regions and generates motion in a coordinated cascade, allowing the punchline to dynamically amplify expression in the upper face while preserving articulatory precision in the mouth. A final cross-modal fusion decoder refines the output for precise temporal alignment. Comprehensive experiments on the VOCASET dataset show that our model not only sets a new state of the art in geometric fidelity, reducing Vertex Error by 7.8% relative to the FaceFormer baseline, but is also rated as significantly more expressive and natural in user studies (p < 0.01), confirming its ability to capture the emotional impact of a punchline.

DOI: https://doi.org/10.31449/inf.v49i25.11394
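
To make the described pipeline concrete, the following is a minimal, hypothetical PyTorch sketch of how a punchline-gated, region-wise animator could be wired together. All module names, feature dimensions (e.g., 768-dimensional speech and text features), the 5023-vertex mesh size, the two-region split, and the simple multiplicative gating are illustrative assumptions rather than the authors' PDHA implementation, and the cross-modal fusion decoder is omitted for brevity.

import torch
import torch.nn as nn


class PunchlineDetector(nn.Module):
    # Fuses frame-level acoustic and textual features into a punchline score in [0, 1].
    def __init__(self, audio_dim=768, text_dim=768, hidden=256):
        super().__init__()
        self.fuse = nn.Linear(audio_dim + text_dim, hidden)
        self.score = nn.Sequential(nn.ReLU(), nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, audio_feat, text_feat):
        # audio_feat, text_feat: (batch, frames, dim)
        x = torch.cat([audio_feat, text_feat], dim=-1)
        return self.score(self.fuse(x))  # (batch, frames, 1)


class HierarchicalAnimator(nn.Module):
    # Predicts per-region vertex offsets; the punchline score amplifies
    # upper-face motion while the mouth branch stays purely speech-driven.
    def __init__(self, feat_dim=768, n_verts=5023, region_masks=None):
        super().__init__()
        self.region_masks = region_masks  # {"mouth": (n_verts, 1), "upper": (n_verts, 1)}
        self.mouth_head = nn.Linear(feat_dim, n_verts * 3)
        self.upper_head = nn.Linear(feat_dim, n_verts * 3)

    def forward(self, speech_feat, punchline_score):
        b, t, _ = speech_feat.shape
        mouth = self.mouth_head(speech_feat).view(b, t, -1, 3)
        upper = self.upper_head(speech_feat).view(b, t, -1, 3)
        upper = upper * (1.0 + punchline_score.unsqueeze(-1))  # gate expressiveness
        return mouth * self.region_masks["mouth"] + upper * self.region_masks["upper"]


# Toy usage with random features; the two masks split the mesh into mouth vs. upper face.
n_verts = 5023
masks = {"mouth": torch.zeros(n_verts, 1), "upper": torch.zeros(n_verts, 1)}
masks["mouth"][:2000] = 1.0
masks["upper"][2000:] = 1.0
detector = PunchlineDetector()
animator = HierarchicalAnimator(region_masks=masks)
audio, text = torch.randn(1, 30, 768), torch.randn(1, 30, 768)
score = detector(audio, text)       # (1, 30, 1) punchline probability per frame
offsets = animator(audio, score)    # (1, 30, 5023, 3) vertex displacements

The point the sketch illustrates is the abstract's gating idea: the detected punchline signal only scales the non-articulatory branch, so lip articulation is left untouched while upper-face expression is amplified at expressive peaks.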