STA-ViT: A Spatiotemporal Self-Attention Vision Transformer for Learning Behavior Recognition and Intervention

Xiao Zhang

Abstract


This study proposes an improved Spatiotemporal Attention-enhanced Vision Transformer (STA-ViT) model to enhance the accuracy of learning behavior recognition and optimize intervention strategies. The model combines a Vision Transformer (ViT) backbone with spatiotemporal self-attention and a feature flow caching mechanism, which alleviates memory usage in long-video processing while strengthening spatiotemporal feature modeling. Experiments are conducted on three public datasets: Human Motion Database 51 (HMDB51), University of Central Florida 101 Actions (UCF101), and Something-Something V1 (Sth-Sth V1). Each sample contains 32 to 64 frames on average, and Top-1 and Top-5 accuracy serve as evaluation indicators. Compared to the baseline ViT model, STA-ViT achieves improvements of 13.5%, 9.37%, and 5.41% in Top-1 accuracy, and 2.04%, 0.82%, and 4.63% in Top-5 accuracy on these three datasets, respectively. Furthermore, on a self-collected dataset of student learning behaviors, STA-ViT demonstrates high recognition accuracy, with Top-1 and Top-5 accuracy reaching 83.2% and 96.5%, respectively, confirming its advantage in learning behavior recognition tasks. Based on the model's recognition capabilities, three intervention strategies are proposed: real-time feedback mechanisms, personalized learning path planning, and classroom management optimization. These strategies aim to improve student learning efficiency and classroom management, and are particularly suited to intelligent education and remote teaching scenarios. The findings provide effective technical support and promising application prospects for learning behavior analysis and intervention in intelligent education and remote teaching.
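The abstract does not include implementation details, so the following is only a minimal sketch of the general idea it describes: a ViT-style spatiotemporal self-attention block that buffers features from previously processed clips so long videos can be handled clip by clip at bounded memory. All class names, dimensions, and the cache size are assumptions, not the authors' code.

import torch
import torch.nn as nn


class SpatioTemporalAttentionBlock(nn.Module):
    """Sketch: self-attention over space-time tokens plus a rolling feature cache."""

    def __init__(self, dim=768, num_heads=8, cache_clips=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cache_clips = cache_clips   # how many past clips to retain (assumed)
        self.cache = []                  # list of cached (B, N, dim) token tensors

    def forward(self, tokens):
        # tokens: (B, N, dim) space-time tokens of the current clip.
        # Attend over the current tokens together with cached ones, so the
        # temporal context extends beyond one clip without storing the full video.
        context = torch.cat(self.cache + [tokens], dim=1) if self.cache else tokens
        q = self.norm(tokens)
        kv = self.norm(context)
        out, _ = self.attn(q, kv, kv)
        tokens = tokens + out

        # Update the rolling cache; detach to bound memory and stop gradients.
        self.cache.append(tokens.detach())
        if len(self.cache) > self.cache_clips:
            self.cache.pop(0)
        return tokens

In this sketch, feeding successive clips of a long video through the same block lets each clip attend to a fixed-size window of earlier features, which is one plausible reading of the "feature flow caching" described above.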




DOI: https://doi.org/10.31449/inf.v49i7.8653

This work is licensed under a Creative Commons Attribution 3.0 License.