An Enhanced Aspect-Based Sentiment Analysis Model Based on RoBERTa for Text Sentiment Analysis
Abstract
Aspect-based sentiment analysis identifies the sentiment polarity expressed towards specific aspect phrases within a sentence or document. Sentiment analysis, the process of automatically determining the underlying attitude or opinion expressed in a text, is one of the core tasks of natural language processing. The RoBERTa transformer model was pretrained in a self-supervised manner on a substantial corpus of English text: inputs and labels were generated algorithmically from raw text, with no human labelling, which allowed the model to exploit vast amounts of publicly available data. This work presents a thorough investigation of aspect-based sentiment analysis with RoBERTa. The authors outline the RoBERTa model and its salient characteristics, analyse how they optimised it for aspect-based sentiment analysis, and compare it with other state-of-the-art models on multiple benchmark datasets. The experimental results show that the RoBERTa model is effective for this important natural language processing task, outperforming competing models on sentiment analysis tasks. On the SemEval-2014 benchmark datasets, the model achieves its highest accuracies in the restaurant and laptop domains, at 92.35% and 82.33% respectively.
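Aspect-based sentiment analysis with an encoder such as RoBERTa is commonly framed as sentence-pair classification: the full sentence is paired with each aspect term, yielding one labelled example per aspect so that a single review can carry opposing polarities. The sketch below illustrates this framing; the paper does not publish its code, so the function names, label set, and input template here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the paper's code): casting aspect-based
# sentiment analysis as sentence-pair classification for a
# RoBERTa-style encoder. The label set and input template are
# assumptions for demonstration.
from typing import List, Tuple

LABELS = ["negative", "neutral", "positive"]

def build_absa_input(sentence: str, aspect: str) -> str:
    """Pair the sentence with the aspect term. RoBERTa-style models
    separate the two segments with </s></s> (shown literally here;
    a real tokenizer inserts these special tokens itself)."""
    return f"<s> {sentence} </s></s> {aspect} </s>"

def make_examples(sentence: str,
                  aspects: List[Tuple[str, str]]) -> List[Tuple[str, int]]:
    """One (input text, label id) example per annotated aspect,
    so one sentence can contribute both positive and negative examples."""
    return [(build_absa_input(sentence, aspect), LABELS.index(polarity))
            for aspect, polarity in aspects]

examples = make_examples(
    "The food was great but the service was slow.",
    [("food", "positive"), ("service", "negative")],
)
for text, label in examples:
    print(label, text)
```

In practice the pairing and special tokens would be produced by a tokenizer (for example, Hugging Face's `RobertaTokenizer` with its `text` and `text_pair` arguments) before fine-tuning a sequence-classification head on the SemEval-2014 restaurant and laptop data.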
DOI: https://doi.org/10.31449/inf.v49i14.5423
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.