Research on Dynamic Big Data Processing and Optimization Model Based on Optimized PSO and Deep Reinforcement Learning

Zhenwei Tang, Chuanjian Jiang, Xi Zhang

Abstract


In the era of big data, the efficient processing and optimization of dynamic data has become a core concern of both academia and industry, yet traditional methods often fail to meet strict real-time and accuracy requirements because of the complexity, variability, high velocity, and large scale of the data. To this end, this study proposes a dynamic big data processing and optimization model. It improves the particle swarm optimization (PSO) algorithm by introducing adaptive weights and dynamic learning coefficients to strengthen global exploration and accelerate convergence, and integrates it into a hybrid deep reinforcement learning (DRL) framework (combining policy-gradient methods such as proximal policy optimization (PPO) with Q-learning), exploiting DRL's feature extraction and policy adjustment capabilities for big data optimization. Evaluated on real datasets, including 1.2 million financial transaction records and 8.5 million social media travel records, the model outperforms traditional baselines (conventional PSO, DQN, and a standalone LSTM-Transformer model): data processing speed increases by 35%, classification accuracy by 20% (F1 score: 0.92 vs. DQN's 0.76), real-time responsiveness to dynamic data streams by 40%, and computing resource utilization efficiency by 25%. The study details the model's architectural innovations, dataset sources and scales, benchmarking methodology, and key performance indicators (processing speed, accuracy, real-time response, and resource efficiency), and offers an efficient, scalable solution for scenarios such as financial risk management and real-time recommendation systems.
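
The abstract names two concrete PSO modifications: adaptive weights and dynamic learning coefficients. The sketch below shows one common way such schedules are realized (a linearly decaying inertia weight, with the cognitive coefficient shrinking while the social coefficient grows over the run). The paper does not publish its exact update rules here, so the schedules, coefficient ranges, and the toy sphere objective below are illustrative assumptions, not the authors' implementation.

import numpy as np

def adaptive_pso(objective, dim, n_particles=30, n_iters=200,
                 w_max=0.9, w_min=0.4, bounds=(-5.0, 5.0), seed=0):
    """Illustrative PSO with an adaptive inertia weight and time-varying
    learning coefficients (assumed schedules; not the paper's exact rules)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # per-particle best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()    # swarm-wide best position

    for t in range(n_iters):
        frac = t / n_iters
        w = w_max - (w_max - w_min) * frac        # inertia decays: explore -> exploit
        c1 = 2.5 - 2.0 * frac                     # cognitive coefficient shrinks
        c2 = 0.5 + 2.0 * frac                     # social coefficient grows
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)

        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function, a stand-in for a real
# data-processing cost such as task scheduling latency.
best_x, best_f = adaptive_pso(lambda z: float(np.sum(z**2)), dim=10)
print(best_f)

Under these assumed schedules, early iterations favor broad exploration (high inertia, strong pull toward each particle's own best) and later iterations favor convergence around the swarm's best solution, which matches the exploration-versus-convergence trade-off the abstract attributes to the improved PSO.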



DOI: https://doi.org/10.31449/inf.v49i33.8627

This work is licensed under a Creative Commons Attribution 3.0 License.