Sentiment Analysis Using Multimodal Fusion: A Weighted Integration of BERT, ResNet, and CNN
Abstract
With the rapid advancement of artificial intelligence, sentiment analysis has expanded beyond traditional text-based approaches to include speech and image modalities. Traditional sentiment analysis methods, which rely solely on single-modal data, fail to capture the complementary nature of different modalities, leading to suboptimal performance. This study proposes a novel multimodal sentiment analysis framework that integrates textual, speech, and image data through a weighted fusion mechanism. Text data is processed using a pre-trained Bidirectional Encoder Representations from Transformers (BERT) model, which extracts contextualized semantic features. Speech data undergoes feature extraction using a hybrid Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) architecture to capture both temporal and local acoustic characteristics. Image data is analyzed with a Residual Network (ResNet) to extract facial expression features relevant to sentiment classification. A weighted fusion strategy is then applied to integrate the features extracted from the three modalities, assigning optimal weights dynamically based on each modality's contribution to sentiment classification. Our model outperforms unimodal approaches, achieving an accuracy of 93.8%, surpassing baseline models including single-modality BERT (91.2%), LSTM-CNN (89.7%), and ResNet (88.3%). Statistical significance tests confirm that the improvement is significant (p < 0.05). These results highlight the efficacy of multimodal fusion in sentiment analysis, providing new insights for sentiment classification tasks in complex environments.
DOI: https://doi.org/10.31449/inf.v49i24.8315
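The weighted fusion described in the abstract can be sketched as a convex combination of the three modality feature vectors, with softmax-normalised weights. This is a minimal illustrative sketch, not the authors' implementation: it assumes the BERT, LSTM-CNN, and ResNet features have already been projected to the same dimensionality, and the function names and fixed raw weights are placeholders for what the paper learns dynamically during training.

```python
import math

def softmax(raw_weights):
    """Normalise raw modality scores into positive weights summing to 1."""
    exps = [math.exp(w) for w in raw_weights]
    total = sum(exps)
    return [e / total for e in exps]

def weighted_fusion(text_feat, speech_feat, image_feat, raw_weights):
    """Fuse three equal-length modality feature vectors by a weighted sum.

    In the paper the weights are assigned dynamically from each modality's
    contribution; here they are passed in as fixed illustrative scores.
    """
    w_text, w_speech, w_image = softmax(raw_weights)
    return [w_text * t + w_speech * s + w_image * i
            for t, s, i in zip(text_feat, speech_feat, image_feat)]

# Toy 2-dimensional features; equal raw scores give each modality weight 1/3.
fused = weighted_fusion([1.0, 0.0], [0.0, 1.0], [0.5, 0.5], [0.0, 0.0, 0.0])
```

With equal raw scores the fusion reduces to a simple average; a trained model would instead shift the raw scores so that, for example, the text branch dominates when facial cues are ambiguous.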
License
I assign to Informatica, An International Journal of Computing and Informatics ("Journal") the copyright in the manuscript identified above and any additional material (figures, tables, illustrations, software or other information intended for publication) submitted as part of or as a supplement to the manuscript ("Paper") in all forms and media throughout the world, in all languages, for the full term of copyright, effective when and if the article is accepted for publication. This transfer includes the right to reproduce and/or to distribute the Paper to other journals or digital libraries in electronic and online forms and systems.
I understand that I retain the rights to use the pre-prints, off-prints, accepted manuscript and published journal Paper for personal use, scholarly purposes and internal institutional use.
In certain cases, I may request to retain the publishing rights of the Paper. The Journal may grant or deny such a request, to which I fully agree.
I declare that the submitted Paper is original, has been written by the stated authors and has not been published elsewhere nor is currently being considered for publication by any other journal and will not be submitted for such review while under review by this Journal. The Paper contains no material that violates proprietary rights of any other person or entity. I have obtained written permission from copyright owners for any excerpts from copyrighted works that are included and have credited the sources in my article. I have informed the co-author(s) of the terms of this publishing agreement.
Copyright © Slovenian Society Informatika