Deep Q-Network-Based Reinforcement Learning for Medium and Short-Term Reserve Capacity Classification in Power Systems

Abstract

Modern power systems face significant challenges in maintaining reliability and operational balance due to the intermittent nature of renewable energy sources and variable demand. Accurate prediction and optimization of reserve capacity are essential for grid stability, especially over medium- and short-term regulation timeframes. Traditional reserve estimation methods often lack the adaptability required to handle dynamic operational data, leading to inefficient reserve allocation. This study introduces a Deep Reinforcement Learning (DRL) framework aimed at improving reserve capacity classification and regulation. A Deep Q-Network (DQN)-based agent is developed and trained on a Reserve Capacity Prediction (RCP) dataset consisting of 2,000 time steps and ten critical system features. The data are preprocessed through categorical encoding, normalization, and environment modeling. The DQN receives a 9-dimensional input vector and uses two ReLU-activated hidden layers (64 and 32 units) to predict the reserve capacity class: Low, Optimal, or High. A reward mechanism and experience replay are applied during training. Experimental results show that the DQN outperforms Logistic Regression, Random Forest, and SVM, achieving 90% accuracy, 92% precision, 88% recall, an 89.8% F1-score, and an MCC of 0.86. This approach shows promise for intelligent and adaptive reserve management in power systems.
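The abstract specifies the network shape but not its implementation. Below is a minimal sketch, in PyTorch, of a Q-network matching that description: a 9-dimensional state vector, two ReLU-activated hidden layers of 64 and 32 units, and one Q-value per reserve class (Low, Optimal, High). The replay-buffer capacity, epsilon value, and helper names (`ReserveDQN`, `select_action`) are illustrative assumptions, not details taken from the paper.

```python
import random
from collections import deque

import torch
import torch.nn as nn


class ReserveDQN(nn.Module):
    """Q-network as described in the abstract: 9 input features,
    hidden layers of 64 and 32 ReLU units, 3 output Q-values
    (one per reserve class: Low, Optimal, High)."""

    def __init__(self, state_dim: int = 9, n_actions: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


# Experience replay buffer (capacity is an assumed value).
replay_buffer = deque(maxlen=10_000)


def select_action(model: ReserveDQN, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy choice among the three reserve classes."""
    if random.random() < epsilon:
        return random.randrange(3)
    with torch.no_grad():
        return int(model(state).argmax().item())


if __name__ == "__main__":
    model = ReserveDQN()
    dummy_state = torch.randn(9)  # stands in for a normalized feature vector
    action = select_action(model, dummy_state, epsilon=0.1)
    print(["Low", "Optimal", "High"][action])
```

In a full training loop, transitions (state, action, reward, next state) would be pushed into `replay_buffer` and sampled in mini-batches to update the network, consistent with the experience replay mechanism mentioned in the abstract.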

Authors

  • Yi Wang
  • Gang Wu
  • Chuan He
  • Ruiguang Ma
  • Jing Xiang
  • Tiannan Ma
  • Feng Liu

DOI

https://doi.org/10.31449/inf.v49i34.9288

Published

08/26/2025

How to Cite

Wang, Y., Wu, G., He, C., Ma, R., Xiang, J., Ma, T., & Liu, F. (2025). Deep Q-Network-Based Reinforcement Learning for Medium and Short-Term Reserve Capacity Classification in Power Systems. Informatica, 49(34). https://doi.org/10.31449/inf.v49i34.9288