RL-Tree: A Reinforcement Learning-Based Adaptive and Secure Routing Protocol for Wireless Sensor Networks
Abstract
This study proposes RL-Tree, a reinforcement learning (RL)-based adaptive and secure routing protocol for wireless sensor networks (WSNs). The protocol enables nodes to dynamically select optimal parent nodes using a Q-learning algorithm with a multi-objective reward function that combines energy efficiency, transmission delay, and link security. To enhance data reliability under non-Gaussian noise, an adaptive filter integrating a variable scale factor with the Half-Quadratic Criterion (HQC) is designed. The experimental platform was implemented on low-power MCUs to emulate a real WSN environment, and performance was benchmarked against RPL, AODV, LEACH, and QELAR. Results show that RL-Tree reduces average node energy consumption by 30% and achieves a data transmission delay of 0.07 seconds, outperforming the baseline protocols. Integrated security mechanisms, including identity verification, encryption, and traffic monitoring, further improve network resilience under attack scenarios.
DOI: https://doi.org/10.31449/inf.v46i23.11214
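The abstract describes parent selection driven by Q-learning with a reward that weighs energy efficiency, transmission delay, and link security. The sketch below is a minimal illustration of that general idea, not the paper's implementation: the weight values, the epsilon-greedy policy, and helper names such as `select_parent` and `update_q` are assumptions introduced for demonstration.

```python
import random
from collections import defaultdict

# Illustrative sketch: each node keeps a Q-table over candidate parent nodes and
# updates it with a multi-objective reward combining residual energy, delay, and
# a link-security score. Weights and normalization are assumed values.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # learning rate, discount, exploration rate
W_ENERGY, W_DELAY, W_SECURITY = 0.4, 0.3, 0.3  # assumed reward weights

q_table = defaultdict(float)  # key: (node_id, parent_id) -> Q-value

def reward(residual_energy, delay, security_score):
    """Multi-objective reward: favor high residual energy and security, low delay.
    Inputs are assumed to be normalized to [0, 1]."""
    return (W_ENERGY * residual_energy
            - W_DELAY * delay
            + W_SECURITY * security_score)

def select_parent(node_id, candidates):
    """Epsilon-greedy choice of the next-hop parent among candidate neighbors."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda p: q_table[(node_id, p)])

def update_q(node_id, parent_id, r, next_candidates):
    """One-step Q-learning update after a transmission attempt."""
    best_next = max((q_table[(parent_id, p)] for p in next_candidates), default=0.0)
    q_table[(node_id, parent_id)] += ALPHA * (
        r + GAMMA * best_next - q_table[(node_id, parent_id)]
    )

# Example round: node 7 picks a parent among neighbors 3, 5, 9, observes link
# metrics for that hop, and reinforces the choice.
parent = select_parent(7, [3, 5, 9])
r = reward(residual_energy=0.8, delay=0.07, security_score=0.9)
update_q(7, parent, r, next_candidates=[1, 2])
```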
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.