Hybrid MIRL and ACO-based Approach for Real-Time Path Planning in Visual SLAM

Abstract

For autonomous robotic systems, establishing paths in real time in dynamic environments remains a major challenge. Current Visual SLAM-integrated planners, including A* and Dijkstra, frequently perform poorly under uncertainty because they lack flexibility and collaborative intelligence. To improve navigation, this study presents MIRL-ACO-SLAM, a hybrid framework that combines Multi-Intelligence Reinforcement Learning (MIRL) with Ant Colony Optimization (ACO). The framework builds real-time spatial maps with Visual SLAM (ORB-SLAM3), which allows agents to make local decisions through reinforcement learning, while pheromone-based optimization guarantees global convergence toward optimal solutions. Pheromone-guided pruning and selective agent activation improve scalability on larger maps while lowering computational cost. In empirical evaluations on the Zurich MAV dataset, MIRL-ACO-SLAM delivers 18.6% shorter route lengths, 24.3% faster planning, and a 94% route completion rate in dynamic scenarios compared with traditional SLAM-based planners. By offering a dependable, scalable, and adaptive solution for real-time robotic navigation, the proposed system opens the door to deployment in mission-critical applications such as transportation and search-and-rescue.
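The pheromone-based optimization the abstract refers to can be illustrated with a minimal, generic ACO sketch. This is not the authors' implementation: the graph, parameters, and function name below are invented for illustration, and the paper additionally couples this kind of update with reinforcement-learning agents and an ORB-SLAM3 map.

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=50,
                      evaporation=0.5, q=1.0, seed=0):
    """Toy ant colony search for a short path on a weighted digraph.

    graph: dict mapping node -> {neighbor: edge_cost}.
    Returns (best_path, best_length)."""
    rng = random.Random(seed)
    # one pheromone value per directed edge, initialized uniformly
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:          # dead end: discard this ant
                    path = None
                    break
                # transition probability ∝ pheromone / edge cost
                weights = [tau[(node, v)] / graph[node][v] for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                length = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # evaporate everywhere, then deposit inversely to path length
        for e in tau:
            tau[e] *= (1.0 - evaporation)
        for path, length in completed:
            for e in zip(path, path[1:]):
                tau[e] += q / length
    return best_path, best_len
```

Shorter paths receive more pheromone per edge, so later ants are biased toward them; the evaporation term is what lets the colony forget stale routes, which is the property the paper exploits for pruning in dynamic maps.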

Author Biographies

Jin Li, College of Big Data and Artificial Intelligence, Xinyang University, Xinyang 464000, Henan, China

Chunyu Yang, School of Information Engineering, Nanyang Vocational College of Science and Technology, Dengzhou 474150, Henan, China

References

Roy, R., Tu, Y. P., Sheu, L. J., Chieng, W. H., Tang, L. C., & Ismail, H. (2023). Path Planning and Motion Control of Indoor Mobile Robot under Exploration-Based SLAM (e-SLAM). Sensors, 23(7), 3606.

Mughal, U. A., Ahmad, I., Pawase, C. J., & Chang, K. (2022). UAVs path planning by particle swarm optimization based on visual-SLAM algorithm. In Intelligent Unmanned Air Vehicles Communications for Public Safety Networks (pp. 169-197). Singapore: Springer Nature Singapore.

McDonald, J., Kaess, M., Cadena, C., Neira, J., & Leonard, J. J. (2013). Real-time 6-DOF multi-session visual SLAM over large-scale environments. Robotics and Autonomous Systems, 61(10), 1144-1158.

Dong, J., Yassine, A., Armitage, A., & Hossain, M. S. (2023). Multiagent reinforcement learning for intelligent V2G integration in future transportation systems. IEEE Transactions on Intelligent Transportation Systems, 24(12), 15974-15983.

El Fazazi, H., Elgarej, M., Qbadou, M., & Mansouri, K. (2021). Design of an adaptive e-learning system based on multiagent approach and reinforcement learning. Engineering, Technology & Applied Science Research, 11(1), 6637-6644.

Zhou, T., Tang, D., Zhu, H., & Zhang, Z. (2021). Multiagent reinforcement learning for online scheduling in smart factories. Robotics and Computer-Integrated Manufacturing, 72, 102202.

Firos, A. (2024). Fuzzy logic and bio-inspired Ant Colony Algorithm-based technique to find relative desirability in IoT-based healthcare system. In Bio-Inspired Data-driven Distributed Energy in Robotics and Enabling Technologies (pp. 182-202). CRC Press.

Abid, A., Kallel, I., Sanchez-Medina, J. J., & Ayed, M. B. (2023). Parameters sensitivity analysis of ant colony based clustering: Application for student grouping in collaborative learning environment. IEEE Access, 12, 24751-24761.

Najafi Mohsenabad, H., & Tut, M. A. (2024). Optimizing cybersecurity attack detection in computer networks: A comparative analysis of bio-inspired optimization algorithms using the CSE-CIC-IDS 2018 dataset. Applied Sciences, 14(3), 1044.

Ali, Z. A., Zhangang, H., & Hang, W. B. (2021). Cooperative path planning of multiple UAVs by using max–min ant colony optimization along with Cauchy mutant operator. Fluctuation and Noise Letters, 20(01), 2150002.

Al-Sarayrah, A. (2024). Recent advances and applications of Apriori algorithm in exploring insights from healthcare data patterns. PatternIQ Mining, 1(2), 27-39. https://doi.org/10.70023/piqm24123

Almusawi, A., & Pugazhenthi, S. (2024). Innovative resource distribution through multiagent supply chain scheduling leveraging honey bee optimization techniques. PatternIQ Mining, 1(3), 48-62. https://doi.org/10.70023/piqm24305

Xu, L., Feng, C., Kamat, V. R., & Menassa, C. C. (2019). An occupancy grid mapping enhanced visual SLAM for real-time locating applications in indoor GPS-denied environments. Automation in Construction, 104, 230-245.

Sud, A., Andersen, E., Curtis, S., Lin, M. C., & Manocha, D. (2008). Real-time path planning in dynamic virtual environments using multiagent navigation graphs. IEEE Transactions on Visualization and Computer Graphics, 14(3), 526-538.

Hu, Y., Xie, F., Yang, J., Zhao, J., Mao, Q., Zhao, F., & Liu, X. (2024). Efficient Path Planning Algorithm Based on Laser SLAM and an Optimized Visibility Graph for Robots. Remote Sensing, 16(16), 2938.

Cao, H., Xu, J., Yang, Z., Shangguan, L., Zhang, J., He, X., & Liu, Y. (2023). Scaling up edge-assisted real-time collaborative visual SLAM applications. IEEE/ACM Transactions on Networking, 32(2), 1823-1838.

Ubaid, M. M., Sana, M. S., Salim, K., Khalid, S., Batool, I., Gilani, S. H., & Gilani, S. S. (2023). UAVs Path Planning Using Visual-SLAM Technique Based Hybrid Particle Swarm Optimization. Journal of Smart Internet of Things, 2023(2), 133-141.

Kim, Y. G., Lee, S., Son, J., Bae, H., & Do Chung, B. (2020). Multiagent system and reinforcement learning approach for distributed intelligence in a flexible smart manufacturing system. Journal of Manufacturing Systems, 57, 440-450.

Xia, Z., Du, J., Wang, J., Jiang, C., Ren, Y., Li, G., & Han, Z. (2021). Multiagent reinforcement learning aided intelligent UAV swarm for target tracking. IEEE Transactions on Vehicular Technology, 71(1), 931-945.

Canese, L., Cardarilli, G. C., Dehghan Pir, M. M., Di Nunzio, L., & Spanò, S. (2024). Design and Development of Multiagent Reinforcement Learning Intelligence on the Robotarium Platform for Embedded System Applications. Electronics, 13(10), 1819.

Belgacem, A., Mahmoudi, S., & Kihl, M. (2022). Intelligent multiagent reinforcement learning model for resources allocation in cloud computing. Journal of King Saud University-Computer and Information Sciences, 34(6), 2391-2404.

Khan, F. A., Ullah, K., ur Rahman, A., & Anwar, S. (2023). Energy optimization in smart urban buildings using bio-inspired ant colony optimization. Soft Computing, 27(2), 973-989.

Wang, J., Cao, J., Li, B., Lee, S., & Sherratt, R. S. (2015). Bio-inspired ant colony optimization based clustering algorithm with mobile sinks for applications in consumer home automation networks. IEEE Transactions on Consumer Electronics, 61(4), 438-444.

Zhang, Z., Long, K., Wang, J., & Dressler, F. (2013). On swarm intelligence inspired self-organized networking: its bionic mechanisms, designing principles and optimization approaches. IEEE Communications Surveys & Tutorials, 16(1), 513-537.

Rokbani, N., Kumar, R., Abraham, A., Alimi, A. M., Long, H. V., Priyadarshini, I., & Son, L. H. (2021). Bi-heuristic ant colony optimization-based approaches for traveling salesman problem. Soft Computing, 25, 3775-3794.

Zurich Urban Micro Aerial Vehicle Dataset. Kaggle. https://www.kaggle.com/datasets/mrisdal/zurich-urban-micro-aerial

Khamis, A., Hussein, A., & Elmogy, A. (2018). Autonomous Robots, 46(5), 789-804.

Yang, S., Luo, C., & Liu, Y. (2022).

DOI:

https://doi.org/10.31449/inf.v50i11.9866

Published

04/23/2026

How to Cite

Li, J., & Yang, C. (2026). Hybrid MIRL and ACO-based Approach for Real-Time Path Planning in Visual SLAM. Informatica, 50(11). https://doi.org/10.31449/inf.v50i11.9866