Exploiting the Exponent of Floating-Point: A Novel Pathway to Efficient Federated Learning
Abstract
Federated Learning (FL) enables decentralized model training by having clients process local data and transmit only learned updates to a central server, but it faces significant communication bottlenecks due to frequent exchanges of high-dimensional model parameters. While existing compression techniques such as sparsification and quantization reduce overhead, they fail to fully exploit the structural properties of floating-point representations. This paper introduces a novel compression strategy that exploits the exponent field of float16 representations to encode sequences of negligible parameter updates, achieving substantial communication reduction while preserving model integrity. The proposed method operates in four main steps: (1) client-side subtraction of the global model parameters from the local ones, (2) downcasting to float16, (3) threshold-based pruning to remove insignificant values, and (4) exponent-based encoding to compactly represent sequences of negligible updates, followed by server-side decompression through exponent extraction and sequence reconstruction. On MNIST, the method achieves an 88.4% size reduction (threshold 0.001) with only 0.2% accuracy loss versus the baseline, while a lighter threshold (0.0001) improves accuracy by 0.1% at 64.7% compression; on CIFAR-10, it yields 64% compression with maintained accuracy (+0.1%) and 42% faster convergence through critical-weight preservation, with the lighter threshold (0.0001) achieving a 51.4% size reduction and a 1.2% accuracy improvement. The technique's computational efficiency and compatibility with existing FL frameworks make it particularly suitable for resource-constrained edge environments, bridging a critical gap in communication-efficient FL through floating-point exponent manipulation for scalable, privacy-preserving distributed learning.
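The abstract does not specify the exact bit-level encoding, but the four steps can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it assumes run lengths of pruned values are stored in the 10-bit mantissa of a sentinel word whose 5-bit exponent field is all ones (the float16 inf/NaN space), and it assumes real updates are finite and below the float16 maximum, so that exponent pattern never occurs in genuine values. The names `THRESHOLD`, `compress`, and `decompress` are hypothetical.

```python
import numpy as np

THRESHOLD = 1e-3   # pruning threshold (one of the values reported in the abstract)
RUN_EXP = 0b11111  # all-ones float16 exponent marks a run-length sentinel

def compress(local, global_):
    """Client side: (1) subtract, (2) downcast, (3) prune, (4) run-encode zeros."""
    delta = (local - global_).astype(np.float16)   # steps 1-2
    delta[np.abs(delta) < THRESHOLD] = 0.0         # step 3: prune negligible updates
    bits = delta.view(np.uint16)
    out, i = [], 0
    while i < len(bits):
        if delta[i] == 0.0:
            run = 1
            while i + run < len(bits) and delta[i + run] == 0.0 and run < 1023:
                run += 1
            # step 4: one word with exponent=11111 and the run length in the mantissa
            out.append(np.uint16((RUN_EXP << 10) | run))
            i += run
        else:
            out.append(bits[i])
            i += 1
    return np.array(out, dtype=np.uint16)

def decompress(stream, n):
    """Server side: expand sentinels back into runs of zeros."""
    out = np.zeros(n, dtype=np.float16)
    bits = out.view(np.uint16)
    i = 0
    for w in stream:
        if (w >> 10) & 0x1F == RUN_EXP:   # exponent all-ones: run sentinel
            i += int(w & 0x3FF)           # mantissa holds the run length
        else:
            bits[i] = w                    # ordinary float16 update, copied through
            i += 1
    return out
```

Under this scheme a run of up to 1023 pruned updates collapses into a single 16-bit word, which is where the reported size reductions on mostly-negligible deltas would come from.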
DOI: https://doi.org/10.31449/inf.v50i7.8833
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and changes (if any) are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.