SwarmShield: A Fully Decentralized Trust-Based Defense Against Adversarial Poisoning in Federated Learning

Abstract

Adversarial poisoning attacks in federated learning systems can severely compromise model integrity, especially when malicious nodes inject corrupted updates. Existing defenses often rely on a trusted central aggregator, introducing a single point of failure and limiting scalability. To overcome these challenges, we propose SwarmShield, a decentralized, trust-aware defense framework based on Swarm Learning. SwarmShield eliminates the need for a central coordinator by redistributing trust evaluation and model merging across peer nodes. It selectively transmits intermediate model layers, applies dimensionality reduction, and clusters parameter vectors to assess similarity. Trust scores are dynamically computed for each node based on its proximity to the cluster centroid, and nodes with low trust are excluded from aggregation. A secure, trust-weighted averaging mechanism is used for model updates, with integrity ensured through cryptographic hashing and blockchain logging. Extensive experimentation with different types of adversarial data poisoning attacks on the CIFAR-10 dataset with a ResNet-50 model demonstrates an average accuracy improvement of 24.8%. Additionally, its generalizability is robustly demonstrated through successful application to the real-world DermaMNIST medical imaging dataset, where SwarmShield consistently maintained or improved model accuracy across diverse attack scenarios. We also evaluate SwarmShield on the TwoLeadECG time-series dataset, highlighting its behavior under temporal adversarial settings. These results validate SwarmShield's effectiveness, scalability, and resilience in adversarial federated learning settings. Further analysis through ablation studies validates the framework's design by quantifying the contribution of each component, while robustness tests demonstrate its resilience across varying ratios of malicious nodes. Our experimental results demonstrate that the proposed approach significantly outperforms existing state-of-the-art methods.
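The centroid-proximity trust scoring and trust-weighted averaging described in the abstract can be sketched as follows. This is an illustrative simplification in Python with NumPy, not the paper's exact formulation: the function name, the linear distance-to-trust mapping, and the 0.5 exclusion threshold are our assumptions, and the selective layer transmission, dimensionality reduction, clustering, and blockchain logging steps are omitted for brevity.

```python
import hashlib
import numpy as np

def trust_weighted_aggregate(updates, trust_threshold=0.5):
    """Score each node's flattened update by its proximity to the centroid,
    exclude low-trust nodes, and average the rest weighted by trust."""
    vecs = np.stack([u.ravel() for u in updates])
    centroid = vecs.mean(axis=0)
    dists = np.linalg.norm(vecs - centroid, axis=1)
    # Closer to the centroid -> higher trust, normalized into [0, 1].
    trust = 1.0 - dists / (dists.max() + 1e-12)
    keep = trust >= trust_threshold
    weights = trust[keep] / trust[keep].sum()
    merged = np.average(vecs[keep], axis=0, weights=weights)
    # Integrity digest of the merged parameters (SHA-256, standing in for
    # the paper's cryptographic hashing step).
    digest = hashlib.sha256(merged.tobytes()).hexdigest()
    return merged.reshape(updates[0].shape), trust, digest

# Three honest updates near 0.1 and one poisoned outlier at 5.0:
# the outlier receives the lowest trust and is excluded from the merge.
updates = [np.full((2, 2), v) for v in (0.10, 0.12, 0.11, 5.0)]
merged, trust, digest = trust_weighted_aggregate(updates)
```

With these inputs the poisoned node's distance to the centroid dominates, so its trust falls to roughly zero and the merged parameters stay close to the honest mean of about 0.11.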

Author Biographies

Cynara Justine, College of Engineering Trivandrum, APJ Abdul Kalam Technological University, Kerala, India

Cynara Justine received the B.Tech. degree in Electronics and Communication Engineering from the College of Engineering Trivandrum, India, in 2008, and the M.S. degree in Software Systems from BITS Pilani, India, in 2012. She is currently a Ph.D. candidate at the College of Engineering Trivandrum and works as a Senior Software Engineer at Microsoft R&D, India. She has over 10 years of R&D industry experience and earlier worked as a Senior Specialist Systems Software Engineer at Hewlett Packard Enterprise. She has filed patents and invention disclosures. Her research interests include artificial intelligence, federated learning, and computer security.

Sathyanarayanan Manamohan, Senior Principal Engineer (AI and ML), Hewlett Packard Enterprise and Shiv Nadar University, Chennai, India

Sathyanarayanan Manamohan (Member, IEEE) received the B.E. degree in Computer Science and Engineering from B.S. Abdur Rahman Crescent Institute of Science and Technology, India, in 2000, and the M.Tech. degree in Software Systems from BITS Pilani, India, in 2018. He is currently a Ph.D. candidate at Shiv Nadar University, Chennai, and works as a Senior Principal Engineer (AI and ML) at Hewlett Packard Enterprise Labs, India. He has over 23 years of R&D industry experience. He is the co-inventor of the HPE Swarm Learning Framework. He has several granted patents. His research interests include artificial intelligence, federated learning, and trustworthy AI.

Linu Shine, College of Engineering Trivandrum, APJ Abdul Kalam Technological University, Kerala, India

Linu Shine received her B.Tech. degree in Electronics Engineering from Cochin University of Science and Technology, Kerala, India, in 2002, the M.Tech. degree in Electronic Design Technology from the Indian Institute of Science, Bangalore, India, in 2012, and the Ph.D. in Electronics and Communication from the University of Kerala in 2022. She is working as an Associate Professor in the Electronics and Communication Engineering Department, College of Engineering Trivandrum. Her research interests include computer vision, deep learning, and their applications.

Jiji Victor Charangatt, Professor, Department of Computer Science and Engineering, Shiv Nadar University, Chennai, India

Jiji C. V. (Senior Member, IEEE) received his B.Tech. in Electronics and Communication from T K M College of Engineering (University of Kerala) in 1988, the M.Tech. in Communication Engineering from the Indian Institute of Technology, Mumbai, in 1997, and the Ph.D. from the Department of Electrical Engineering, Indian Institute of Technology, Mumbai, in 2007. He is currently a Professor with the Department of Computer Science and Engineering, Shiv Nadar University, Chennai, India. His research areas are computer vision, deep learning, image processing, computational photography, and signal processing.



DOI:

https://doi.org/10.31449/inf.v50i9.9501

Published

03/12/2026

How to Cite

Justine, C., Manamohan, S., Shine, L., & Charangatt, J. V. (2026). SwarmShield: A Fully Decentralized Trust-Based Defense Against Adversarial Poisoning in Federated Learning. Informatica, 50(9). https://doi.org/10.31449/inf.v50i9.9501