BVPEC: A Cross-modal BERT-ViT Framework for Performance Emotion Recognition from Multimodal Acting Data

Abstract

Performance emotion computing is a key technology for understanding and evaluating actors' artistic expression, with considerable value in film and television analysis, drama education, and related fields. Because traditional single-modal methods struggle to fully capture the rich textual and visual emotional information in a performance, this study proposes a performance emotion computing framework (BVPEC) based on BERT-Vision cross-modal pre-trained models. First, the framework deeply integrates the textual information of script lines with video of the actors' performances, using the BERT model to capture the semantics and emotional tendencies of the lines. Second, a Vision Transformer (ViT) extracts visual features such as the actors' facial expressions and body movements, and a cross-modal adaptive fusion mechanism is designed so that the modalities complement each other. Finally, experiments on public datasets (such as the LIRIS-ACCEDE affective video dataset) and a self-built dataset of performance clips show that the BVPEC framework significantly outperforms single-modal models and traditional fusion methods in emotion recognition accuracy (up to 89.7%), effectively improving the accuracy and robustness of performance emotion understanding and providing new ideas for intelligent performing-arts analysis.
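The abstract does not detail how the cross-modal adaptive fusion is implemented. As an illustration only, the PyTorch sketch below shows one common way such a mechanism could be wired: pooled BERT text features and ViT visual features are projected into a shared space, a learned gate assigns per-sample weights to the two modalities, and a classifier predicts the emotion label. All module names, dimensions, and the number of emotion classes here are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalAdaptiveFusion(nn.Module):
    """Illustrative gated fusion of BERT text features and ViT visual features.

    Assumes pre-pooled 768-d embeddings from each encoder; the actual BVPEC
    fusion design is not specified in the abstract.
    """

    def __init__(self, text_dim=768, vis_dim=768, hidden_dim=512, num_emotions=7):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)   # project script-line semantics
        self.vis_proj = nn.Linear(vis_dim, hidden_dim)     # project facial/body visual cues
        # Gate produces per-sample weights over the two modalities.
        self.gate = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 2),
            nn.Softmax(dim=-1),
        )
        self.classifier = nn.Linear(hidden_dim, num_emotions)

    def forward(self, text_feat, vis_feat):
        t = torch.tanh(self.text_proj(text_feat))          # (batch, hidden_dim)
        v = torch.tanh(self.vis_proj(vis_feat))            # (batch, hidden_dim)
        w = self.gate(torch.cat([t, v], dim=-1))           # (batch, 2) modality weights
        fused = w[:, 0:1] * t + w[:, 1:2] * v              # adaptive weighted sum
        return self.classifier(fused)                      # emotion logits


# Example: pooled [CLS]-style outputs from a BERT text encoder and a ViT
# visual encoder (both 768-d), batch of 4 performance clips.
model = CrossModalAdaptiveFusion()
text_feat = torch.randn(4, 768)
vis_feat = torch.randn(4, 768)
logits = model(text_feat, vis_feat)   # shape (4, 7)
```

A gated weighted sum is only one of several plausible designs; attention-based cross-modal fusion would slot into the same interface.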

Authors

  • Yizhu Lin, School of Fashion, Dalian Polytechnic University

DOI:

https://doi.org/10.31449/inf.v49i21.10434

Published

12/15/2025

How to Cite

Lin, Y. (2025). BVPEC: A Cross-modal BERT-ViT Framework for Performance Emotion Recognition from Multimodal Acting Data. Informatica, 49(21). https://doi.org/10.31449/inf.v49i21.10434