Cross-Domain Fake Review Detection Based on Deep Learning Multi-Level Generic Feature Extraction and Fusion
Abstract
Fake review detection aims to identify fake reviews that undermine fair competition in online marketplaces. Existing research on fake review detection mainly relies on deep learning and feature-based methods. Feature-based methods struggle to capture latent semantic information, while deep learning methods rarely consider multi-granularity information derived from text structure. Neither performs satisfactorily in cross-domain detection. In this paper, we present a cross-domain fake review detection method based on multi-level generic feature extraction and fusion. Guided by the structure of the review text, the method purposefully extracts information that remains informative across different domains. At the word level, to capture latent semantic information, GloVe embeddings weighted by TF-IDF are combined with a CNN to extract multi-granularity semantic information. At the sentence level, generic syntactic information is extracted from the reviews through lexical annotation techniques. At the document level, to obtain finer-grained emotion features, the reviews in the dataset are annotated with DistilBERT, a pre-trained language model fine-tuned on an emotion classification task. The generic, domain-independent features extracted at each level are then fused using a multi-head attention mechanism, and classification is performed in the classification layer. Experimental results on public datasets show that the proposed model significantly improves cross-domain detection performance, achieving 83% and 78.0% accuracy on the restaurant and doctor domain datasets, respectively, and outperforming the state-of-the-art method by 10.7% on the restaurant dataset.
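To make the described architecture concrete, the following is a minimal sketch (not the authors' code) of the fusion idea: a word-level branch using TF-IDF-weighted embeddings followed by a CNN, projections of sentence-level syntactic features and document-level emotion features (e.g. class probabilities produced offline by a DistilBERT emotion classifier), and a multi-head attention layer that fuses the three levels before a binary classification layer. All layer sizes, feature dimensions, names, and the use of PyTorch are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MultiLevelFusionClassifier(nn.Module):
    """Hypothetical multi-level feature extraction and fusion model."""

    def __init__(self, vocab_size, emb_dim=100, sent_feat_dim=32,
                 emo_feat_dim=8, hidden_dim=128, num_heads=4):
        super().__init__()
        # Word level: GloVe-style embeddings (pretrained vectors would be
        # loaded here), scaled per token by TF-IDF weights, then a CNN.
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, hidden_dim, kernel_size=3, padding=1)
        # Sentence level: syntactic features (e.g. tag-frequency vectors).
        self.sent_proj = nn.Linear(sent_feat_dim, hidden_dim)
        # Document level: emotion features from a fine-tuned classifier.
        self.emo_proj = nn.Linear(emo_feat_dim, hidden_dim)
        # Fusion of the three levels with multi-head attention.
        self.fusion = nn.MultiheadAttention(hidden_dim, num_heads,
                                            batch_first=True)
        self.classifier = nn.Linear(hidden_dim, 2)  # fake vs. genuine

    def forward(self, token_ids, tfidf_weights, sent_feats, emo_feats):
        # token_ids, tfidf_weights: (batch, seq_len); *_feats: (batch, dim)
        emb = self.embedding(token_ids) * tfidf_weights.unsqueeze(-1)
        word_vec = torch.relu(self.conv(emb.transpose(1, 2))).max(dim=2).values
        sent_vec = torch.relu(self.sent_proj(sent_feats))
        emo_vec = torch.relu(self.emo_proj(emo_feats))
        # Stack the three level-specific vectors as a 3-token "sequence"
        # and let multi-head attention weigh them against each other.
        levels = torch.stack([word_vec, sent_vec, emo_vec], dim=1)
        fused, _ = self.fusion(levels, levels, levels)
        return self.classifier(fused.mean(dim=1))  # classification logits
```

This sketch only illustrates how the three levels could be combined through attention; the exact feature definitions and training procedure are given in the full paper.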
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i18.7071

This work is licensed under a Creative Commons Attribution 3.0 License.