Optimization of Dynamic Energy Management Strategy for New Energy Vehicles Based on Multi-Agent Reinforcement Learning

Abstract

The development of New Energy Vehicles (NEVs), such as battery electric vehicles, is vital to addressing global issues like environmental pollution and fossil fuel depletion. However, optimizing their energy management strategies (EMSs) is complex due to conflicting goals, dynamic driving conditions, and system nonlinearity. This study proposes a dynamic EMS based on Multi-Agent Reinforcement Learning (MARL) using a Scalable Satin Bowerbird Optimizer-driven Multi-Agent Deep Q-Network (SSB-MADQN). The approach aims to enhance fuel economy, maintain battery State of Charge (SOC), and reduce battery degradation in real-time driving scenarios. Before training, the data are preprocessed with min-max normalization and Principal Component Analysis (PCA) to improve learning efficiency. The MADQN framework consists of agents representing subsystems such as the engine, battery, and regenerative braking, each trained using a deep Q-network with three hidden layers (128-64-32 neurons). The dataset comprises 5,000 samples with 13 features, including vehicle speed, power demand, and battery performance. Evaluated on the HWFET and WLTC driving cycles, the proposed strategy reduces fuel consumption by 0.912 L (WLTC) and 0.681 L (HWFET) compared to traditional methods. It effectively regulates SOC and reduces high-power discharge events, confirming the robustness of MARL for adaptive and efficient EMS in NEVs.
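The abstract fixes only a few concrete details: the preprocessing steps (min-max normalization followed by PCA), the dataset shape (5,000 samples, 13 features), the per-agent Q-network topology (three hidden layers of 128, 64, and 32 neurons), and one agent per subsystem (engine, battery, regenerative braking). The Python sketch below shows how those pieces could fit together; it is not the authors' code. The number of retained principal components, the action-space sizes, the placeholder data, and the use of scikit-learn and PyTorch are all assumptions, and the SSB-driven training loop is omitted entirely.

import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

# Preprocessing as described in the abstract: min-max normalization, then PCA.
# Random placeholder data with the stated shape (5,000 samples, 13 features);
# retaining 8 components is an assumption, not a detail from the paper.
rng = np.random.default_rng(0)
X = rng.random((5000, 13))
X_scaled = MinMaxScaler().fit_transform(X)
X_reduced = PCA(n_components=8).fit_transform(X_scaled)

class AgentDQN(nn.Module):
    """Per-agent Q-network with the stated 128-64-32 hidden layers."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_actions),  # one Q-value per discrete action
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

# One agent per subsystem, as in the abstract; the action-space sizes
# below are illustrative only.
agents = {
    "engine": AgentDQN(state_dim=8, n_actions=5),
    "battery": AgentDQN(state_dim=8, n_actions=5),
    "regen_braking": AgentDQN(state_dim=8, n_actions=3),
}

# Greedy (argmax) action selection for one preprocessed state.
state = torch.tensor(X_reduced[0], dtype=torch.float32)
actions = {name: int(q(state).argmax()) for name, q in agents.items()}
print(actions)

In a full implementation, each agent's weights would be updated by standard DQN training (replay buffer, target network), with the Satin Bowerbird Optimizer used to tune hyperparameters or network weights as the paper's SSB-MADQN name suggests; those components are not specified in the abstract and are therefore left out of the sketch.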

Authors

  • Xiaoyu Zhang

DOI

https://doi.org/10.31449/inf.v49i12.8907

Published

11/22/2025

How to Cite

Zhang, X. (2025). Optimization of Dynamic Energy Management Strategy for New Energy Vehicles Based on Multi-Agent Reinforcement Learning. Informatica, 49(12). https://doi.org/10.31449/inf.v49i12.8907