English Text Classification Model Based on Graph Neural Network Algorithm and Contrastive Learning
Abstract
Current English text classification methods rely mostly on bag-of-words models or CNNs (Convolutional Neural Networks), but these approaches have limitations in modeling text structure and semantics. In long texts and complex contexts in particular, they struggle to capture long-distance dependencies and structured semantic relations between words. To address this, this article combines GNNs (Graph Neural Networks) with contrastive learning to build an English text classification model. First, a text graph is constructed from word co-occurrence to capture long-distance dependencies between words. Then, a multi-layer graph convolutional network is designed, with residual connections and normalization applied to improve model performance. A contrastive learning module is added after each graph convolution layer to refine node features and semantic representations. Triplet Loss is adopted as the loss function, and Hard Negative Mining is used to select negative samples, improving training efficiency.
DOI: https://doi.org/10.31449/inf.v49i11.8454
This work is licensed under a Creative Commons Attribution 3.0 License.
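To make the pipeline described in the abstract concrete, the sketch below gives one possible reading of it in PyTorch: a word co-occurrence text graph with symmetric normalization, a graph convolution layer with a residual connection and LayerNorm, and a triplet loss with hard negative mining. All names and hyperparameters (window_size, GCNLayer, triplet_loss_hard_negatives, margin=1.0) are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def build_cooccurrence_graph(tokenized_docs, window_size=3):
    """Build a symmetric, degree-normalized word co-occurrence adjacency matrix."""
    vocab = {w: i for i, w in enumerate(sorted({w for d in tokenized_docs for w in d}))}
    n = len(vocab)
    adj = torch.zeros(n, n)
    for doc in tokenized_docs:
        for i, w in enumerate(doc):
            for j in range(i + 1, min(i + window_size, len(doc))):
                a, b = vocab[w], vocab[doc[j]]
                adj[a, b] += 1.0
                adj[b, a] += 1.0
    adj = adj + torch.eye(n)                      # add self-loops
    d_inv_sqrt = torch.diag(adj.sum(dim=1).pow(-0.5))
    return vocab, d_inv_sqrt @ adj @ d_inv_sqrt   # symmetric normalization


class GCNLayer(nn.Module):
    """One graph convolution layer with a residual connection and LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, adj):
        h = F.relu(adj @ self.linear(x))          # propagate features over the graph
        return self.norm(x + h)                   # residual connection + normalization


def triplet_loss_hard_negatives(z, labels, margin=1.0):
    """Triplet loss that mines, for each anchor, the closest negative (hard negative)
    and the farthest positive in the batch (a batch-hard variant, assumed here)."""
    dist = torch.cdist(z, z)
    idx = torch.arange(len(z))
    loss, count = z.new_zeros(()), 0
    for i in idx:
        pos = (labels == labels[i]) & (idx != i)
        neg = labels != labels[i]
        if pos.any() and neg.any():
            loss = loss + F.relu(dist[i][pos].max() - dist[i][neg].min() + margin)
            count += 1
    return loss / max(count, 1)


# Toy usage: two tokenized documents, random node features, placeholder labels.
docs = [["graph", "neural", "network", "text"],
        ["text", "classification", "graph", "model"]]
vocab, adj = build_cooccurrence_graph(docs)
x = torch.randn(len(vocab), 16)                   # initial word-node embeddings
z = GCNLayer(16)(x, adj)
labels = torch.randint(0, 2, (len(vocab),))       # hypothetical class labels for the demo
print(triplet_loss_hard_negatives(z, labels))
```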








