Adaptive UAV Inspection and Path Planning for Distribution Networks Using Multi-Agent Deep Reinforcement Learning

Abstract

Efficient inspection of distribution networks is crucial for ensuring the stable operation of the power system. To address the limitations of existing methods in complex environments, this paper proposes an adaptive UAV inspection path planning strategy based on multi-agent deep reinforcement learning, specifically the Multi-Agent Actor-Attention-Critic (MAAC) algorithm. The method constructs a reinforcement learning environment with a purpose-designed reward function that enables multiple UAVs to collaboratively learn optimal inspection paths. Simulation experiments on the AirSim platform demonstrate the proposed method's superior performance over Enhanced Particle Swarm Optimization (EPSO) and Double Deep Q-Network (DDQN): the MAAC-based model achieved a higher cumulative discounted reward (4.56 vs. 2.21 for EPSO and 4.37 for DDQN) and a shorter average running time (812.85 seconds vs. 833.45 and 923.41 seconds, respectively), validating its advantages in both solution quality and computational efficiency. The results indicate strong adaptability and high computational efficiency, giving the approach significant potential for practical UAV inspection applications. Future work will integrate advanced techniques to further improve robustness and learning efficiency.
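The abstract's core mechanism is MAAC's attention critic, in which each agent's critic attends over the other agents' encoded state-action information when estimating value. The sketch below illustrates only that attention-pooling step with plain NumPy; the embedding function, learned query/key/value projections, and the actor and critic networks of the full algorithm are omitted, and all names here are illustrative, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(embeds, i):
    """For agent i, pool the other agents' embeddings with scaled
    dot-product attention, in the spirit of MAAC's attention critic.

    embeds : (n_agents, d) array of per-agent state-action embeddings
             (in MAAC these come from learned encoders; here they are
             given directly for illustration).
    Returns the pooled context vector and the attention weights.
    """
    n, d = embeds.shape
    query = embeds[i]                          # agent i's query vector
    others = np.delete(embeds, i, axis=0)      # keys/values: other agents
    scores = others @ query / np.sqrt(d)       # (n-1,) attention logits
    weights = softmax(scores)                  # weights sum to 1
    context = weights @ others                 # weighted sum of values
    return context, weights

# Usage: three UAV agents with 2-dimensional embeddings.
embeds = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
context, weights = attention_pool(embeds, 0)
```

Each agent's critic would then consume its own embedding concatenated with this context vector, letting it weight teammates by relevance rather than averaging them uniformly.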

Authors

  • Tingting Yang, Shandong Business Institute
  • Xin Wang

DOI:

https://doi.org/10.31449/inf.v50i14.13609

Published

05/13/2026

How to Cite

Yang, T., & Wang, X. (2026). Adaptive UAV Inspection and Path Planning for Distribution Networks Using Multi-Agent Deep Reinforcement Learning. Informatica, 50(14). https://doi.org/10.31449/inf.v50i14.13609