RL-Tree: A Reinforcement Learning-Based Adaptive and Secure Routing Protocol for Wireless Sensor Networks

Abstract

This study proposes RL-Tree, a reinforcement learning (RL)-based adaptive and secure routing protocol for wireless sensor networks (WSNs). The protocol enables nodes to dynamically select optimal parent nodes by applying a Q-learning algorithm with a multi-objective reward function that combines energy efficiency, transmission delay, and link security. To enhance data reliability under non-Gaussian noise, an adaptive filter integrating a variable scale factor and the Half Quadratic Criterion (HQC) is designed. The experimental platform was implemented on low-power MCUs to simulate a real WSN environment, and performance was benchmarked against RPL, AODV, LEACH, and QELAR. Results demonstrate that RL-Tree reduces average node energy consumption by 30% and achieves a data transmission delay of 0.07 seconds, outperforming the baseline protocols. Integrated security mechanisms, including identity verification, encryption, and traffic monitoring, further improve network resilience under attack scenarios.
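The abstract does not give the exact update rule, but the core idea it describes, Q-learning over candidate parent nodes with a reward that weights energy, delay, and link security, can be sketched as below. The reward weights, the trust metric, and the helper functions are illustrative assumptions for exposition, not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): Q-learning parent selection
# with a multi-objective reward combining energy, delay, and link security,
# as outlined in the abstract. Weights and metrics are hypothetical placeholders.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration rate (assumed)
W_ENERGY, W_DELAY, W_SECURITY = 0.4, 0.3, 0.3  # assumed reward weights

Q = defaultdict(float)  # Q[(node, parent_candidate)] -> estimated long-term value

def reward(residual_energy, delay_s, trust_score):
    """Multi-objective reward: favor high residual energy and trust, low delay."""
    return (W_ENERGY * residual_energy      # normalized residual energy in [0, 1]
            - W_DELAY * delay_s             # measured transmission delay in seconds
            + W_SECURITY * trust_score)     # link trust / security score in [0, 1]

def choose_parent(node, candidates):
    """Epsilon-greedy selection of the next-hop parent node."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda p: Q[(node, p)])

def update(node, parent, r, next_candidates):
    """Standard Q-learning update after a transmission attempt through `parent`."""
    best_next = max((Q[(parent, c)] for c in next_candidates), default=0.0)
    Q[(node, parent)] += ALPHA * (r + GAMMA * best_next - Q[(node, parent)])
```

In such a scheme, each node would call choose_parent before forwarding, observe the resulting energy cost, delay, and trust feedback, and then call update; the specific state representation and security scoring used by RL-Tree are detailed in the full paper, not in this abstract.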

Authors

  • Jianzhen Zhang
  • Jiong Chen
  • Ya Dai
  • Shuo Wang
  • Yanjun Qi

DOI:

https://doi.org/10.31449/inf.v46i23.11214

Published

12/18/2025

How to Cite

Zhang, J., Chen, J., Dai, Y., Wang, S., & Qi, Y. (2025). RL-Tree: A Reinforcement Learning-Based Adaptive and Secure Routing Protocol for Wireless Sensor Networks. Informatica, 49(23). https://doi.org/10.31449/inf.v46i23.11214