Sentiment Analysis Using Multimodal Fusion: A Weighted Integration of BERT, ResNet, and CNN
Abstract
With the rapid advancement of artificial intelligence, sentiment analysis has expanded beyond traditional text-based approaches to include speech and image modalities. Traditional sentiment analysis methods, which rely solely on single-modal data, fail to capture the complementary nature of different modalities, leading to suboptimal performance. This study proposes a novel multimodal sentiment analysis framework that integrates textual, speech, and image data through a weighted fusion mechanism. Text data is processed using a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model, which extracts contextualized semantic features. Speech data undergoes feature extraction using a hybrid Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) architecture to capture both temporal and local acoustic characteristics. Image data is analyzed with a Residual Network (ResNet) to extract facial expression features relevant to sentiment classification. A weighted fusion strategy is then applied to integrate the extracted features from the three modalities, assigning optimal weights dynamically based on their contribution to sentiment classification. Our model outperforms unimodal approaches, achieving an accuracy of 93.8%, which surpasses baseline models including single-modality BERT (91.2%), LSTM-CNN (89.7%), and ResNet (88.3%). Statistical significance tests confirm that the performance improvement is significant (p < 0.05). These results highlight the efficacy of multimodal fusion in sentiment analysis, providing new insights for sentiment classification tasks in complex environments.

DOI: https://doi.org/10.31449/inf.v49i24.8315
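The weighted fusion strategy described in the abstract can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the softmax normalization of the modality weights, the function names, and the assumption that all three modality feature vectors have already been projected to a shared dimension are assumptions of this sketch.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: normalized weights sum to 1,
    # so each modality's contribution is a learned proportion.
    e = np.exp(x - x.max())
    return e / e.sum()

def weighted_fusion(text_feat, speech_feat, image_feat, weight_logits):
    """Fuse three modality feature vectors (assumed to share one
    dimension) by a convex combination with learnable logits."""
    w = softmax(weight_logits)
    return w[0] * text_feat + w[1] * speech_feat + w[2] * image_feat

# Toy example: three 4-dimensional modality features with equal logits,
# so each modality receives weight 1/3.
text_feat = np.ones(4)           # e.g. pooled BERT embedding (illustrative)
speech_feat = 2 * np.ones(4)     # e.g. LSTM-CNN acoustic features
image_feat = 3 * np.ones(4)      # e.g. ResNet facial-expression features
fused = weighted_fusion(text_feat, speech_feat, image_feat, np.zeros(3))
```

In a trained system the logits would be optimized jointly with the classifier, letting the network shift weight toward whichever modality is most informative for a given task; with equal logits the fusion reduces to a simple average.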
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.







