Bio-Inspired Adaptive Rate Modulation for Multi-task Learning in Progressive Neural Networks

Abstract

Sequential multi-task learning faces the fundamental challenge of acquiring new tasks while retaining performance on previously learned ones. Progressive Neural Networks address catastrophic forgetting through architectural isolation but rely on fixed learning strategies that limit adaptive efficiency. Current adaptive methods typically optimize a single factor, missing the coordinated nature observed in biological adaptive systems. Inspired by neuromodulatory systems, in which four key neurotransmitters (dopamine, serotonin, norepinephrine, and acetylcholine) coordinate learning and adaptation, we propose a Bio-Inspired Adaptive Rate Modulation framework that coordinates four computational modules: reward-based adaptation, stability control, attention gating, and knowledge integration. These modules translate the organizational principles of biological neuromodulation into practical optimization mechanisms for Progressive Neural Networks using standard neural network components. Evaluation on four OpenAI Gymnasium environments demonstrates performance improvements ranging from 1.06% to 155.94% over baseline Progressive Neural Networks, with up to 92% variance reduction and performance retention averaging 98.0%. The framework achieves these gains with reasonable computational overhead. Comprehensive ablation studies confirm each module’s contribution, validating the four-factor design. The results demonstrate that neuromodulation-inspired coordination of multiple adaptive factors significantly outperforms fixed learning strategies, providing a principled approach to adaptive optimization in sequential multi-task learning.
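
To make the coordination idea concrete, the following minimal Python/NumPy sketch shows one way four neuromodulation-inspired signals could be folded into a single learning-rate multiplier. This is an illustrative assumption, not the authors' implementation: the class name AdaptiveRateModulator, the input signals (reward, loss, gradient norm, task similarity), and the specific factor formulas are placeholders chosen only to show a plausible wiring of reward-based adaptation, stability control, attention gating, and knowledge integration.

import numpy as np

class AdaptiveRateModulator:
    """Illustrative sketch only: combines four neuromodulation-inspired
    factors into a single learning-rate multiplier."""

    def __init__(self, base_lr=1e-3, ema_beta=0.9):
        self.base_lr = base_lr
        self.ema_beta = ema_beta
        self.reward_ema = 0.0   # slow reward estimate (reward-based adaptation)
        self.loss_ema = None    # slow loss estimate (stability control)

    def step(self, reward, loss, grad_norm, task_similarity):
        # Reward-based adaptation: raise the rate when recent reward improves.
        self.reward_ema = self.ema_beta * self.reward_ema + (1 - self.ema_beta) * reward
        reward_factor = 1.0 + np.tanh(reward - self.reward_ema)

        # Stability control: damp the rate when the loss becomes volatile.
        if self.loss_ema is None:
            self.loss_ema = loss
        volatility = abs(loss - self.loss_ema) / (abs(self.loss_ema) + 1e-8)
        self.loss_ema = self.ema_beta * self.loss_ema + (1 - self.ema_beta) * loss
        stability_factor = 1.0 / (1.0 + volatility)

        # Attention gating: scale updates by normalized gradient salience.
        attention_factor = grad_norm / (1.0 + grad_norm)

        # Knowledge integration: weight prior-column knowledge more strongly
        # when the new task is similar to earlier ones.
        integration_factor = 0.5 + 0.5 * float(np.clip(task_similarity, 0.0, 1.0))

        return self.base_lr * reward_factor * stability_factor * attention_factor * integration_factor

# Hypothetical usage with per-update statistics from a Gymnasium training loop:
modulator = AdaptiveRateModulator(base_lr=1e-3)
lr = modulator.step(reward=1.0, loss=0.42, grad_norm=0.8, task_similarity=0.6)

In a Progressive Neural Network setting, such a multiplier would scale the optimizer step for the active column only; how each factor is actually estimated and coordinated is defined in the paper itself.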

Author Biography

Chourouk Guettas

Assistant Professor at El Oued University and an affiliate of the Lesia Laboratory, Biskra University, Algeria.

References

Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671. DOI: 10.48550/arXiv.1606.04671.

Hervella, A. S., Rouco, J., Novo, J., & Ortega, M. (2024). Multi-adaptive optimization for multi-task learning with deep neural networks. Neural Networks, 170, 254–265. DOI: 10.1016/j.neunet.2023.11.038

Mirzadeh, S. I., Farajtabar, M., Pascanu, R., & Ghasemzadeh, H. (2020). Understanding the role of training regimes in continual learning. Advances in Neural Information Processing Systems, 33, 7308–7320. DOI: 10.48550/arXiv.2006.06958.

Mei, J., Meshkinnejad, R., & Mohsenzadeh, Y. (2023). Effects of neuromodulation-inspired mechanisms on the performance of deep neural networks in a spatial learning task. iScience, 26(2), 106043. DOI: 10.1016/j.isci.2023.106043.

Vecoven, N., Ernst, D., Wehenkel, A., & Drion, G. (2020). Introducing neuromodulation in deep neural networks to learn adaptive behaviours. PLOS ONE, 15(1), e0227922. DOI: 10.1371/journal.pone.0227922.

Schmidgall, S., Ziaei, R., Achterberg, J., Kirsch, L., Hajiseyedrazi, S. P., & Eshraghian, J. (2024). Brain-inspired learning in artificial neural networks: A review. APL Machine Learning, 2(2), 021501. DOI: 10.1063/5.0186054.

Cardozo Pinto, D. F., et al. (2024). Opponent control of reinforcement by striatal dopamine and serotonin. Nature. DOI: 10.1038/s41586-024-08412-x.

Avery, M. C., & Krichmar, J. L. (2017). Neuromodulatory systems and their interactions: A review of models, theories, and experiments. Frontiers in Neural Circuits, 11, 108. DOI: 10.3389/fncir.2017.00108.

van de Ven, G. M., Tuytelaars, T., & Tolias, A. S. (2024). Continual learning and catastrophic forgetting. arXiv preprint arXiv:2403.05175. DOI: 10.48550/arXiv.2403.05175.

Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521–3526. DOI: 10.1073/pnas.1611835114.

Mallya, A., Davis, D., & Lazebnik, S. (2018). PackNet: Adding multiple tasks to a single network by iterative pruning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 7765–7773. DOI: 10.1109/CVPR.2018.00805.

Xu, Z., Zhou, Y., Wu, T., Ni, B., Yan, S., & Liu, X. (2020). Learn to grow: A continual structure learning framework for overcoming catastrophic forgetting. ICLR (2020). DOI: 10.48550/arXiv.1808.03314.

Ardywibowo, R., Boluki, S., Gong, X., Wang, Z., & Qian, X. (2022). VariGrow: Variational architecture growing for task-agnostic continual learning based on Bayesian novelty. ICML. DOI: 10.48550/arXiv.2205.09325.

Van de Ven, G. M., Siegelmann, H. T., & Tolias, A. S. (2020). Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11(1), 4069. DOI: 10.1038/s41467-020-17866-2.

Aljundi, R., Lin, M., Goujaud, B., & Bengio, Y. (2019). Gradient based sample selection for online continual learning. Advances in Neural Information Processing Systems, pp. 11816–11825. DOI: 10.48550/arXiv.1903.08671.

Zenke, F., Poole, B., & Ganguli, S. (2017). Continual learning through synaptic intelligence. Proceedings of the International Conference on Machine Learning, PMLR, pp. 3987–3995. DOI: 10.48550/arXiv.1703.04200.

Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121–2159. DOI: 10.5555/1953048.2021065.

Zeiler, M. D. (2012). ADADELTA: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. DOI: 10.48550/arXiv.1212.5701.

Kingma, D. P., & Ba, J. (2015). Adam: A method for stochastic optimization. ICLR (2015). DOI: 10.48550/arXiv.1412.6980.

Loshchilov, I., & Hutter, F. (2019). Decoupled weight decay regularization. ICLR (2019). DOI: 10.48550/arXiv.1711.05101.

Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., & Han, J. (2019). On the variance of the adaptive learning rate and beyond. arXiv preprint arXiv:1908.03265. DOI: 10.48550/arXiv.1908.03265.

Shazeer, N., & Stern, M. (2018). Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. DOI: 10.48550/arXiv.1804.04235.

FitzGerald, T. H. B., Dolan, R. J., & Friston, K. (2015). Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9, 136. DOI: 10.3389/fncom.2015.00136.

Guerrero-Criollo, R. J., Castaño-López, J. A., Hurtado-López, J., & Ramirez-Moreno, D. F. (2023). Bio-inspired neural networks for decision-making mechanisms and neuromodulation for motor control in a differential robot. Frontiers in Neurorobotics, 17, 1078074. DOI: 10.3389/fnbot.2023.1078074.

Ostapenko, O., Puscas, M., Klein, T., Jahnichen, P., & Nabi, M. (2019). Learning to remember: A synaptic plasticity driven framework for continual learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11321–11329. DOI: 10.1109/CVPR.2019.01158.

Zhang, Z., Zou, Y., Lai, J., & Xu, Q. (2023). M2DQN: A robust method for accelerating deep Q-learning network. Proceedings of the International Conference on Machine Learning and Computing, pp. 116–120.

Authors

  • Chourouk Guettas
  • Foudil Cherif

DOI:

https://doi.org/10.31449/inf.v49i21.9873

Published

12/15/2025

How to Cite

Guettas, C., & Cherif, F. (2025). Bio-Inspired Adaptive Rate Modulation for Multi-task Learning in Progressive Neural Networks. Informatica, 49(21). https://doi.org/10.31449/inf.v49i21.9873