Learning Without Sharing: A Privacy-Aware Federated Learning Architecture for IoT Networks
Abstract
The proliferation of Internet of Things (IoT) devices in healthcare and everyday life has made it possible to continuously monitor personal data such as vital signs, activity patterns, and behavioral information. While this supports intelligent applications, centralized machine learning systems require the transmission of sensitive data to cloud servers, which risks data breaches, misuse, and cyber-attacks. Federated learning (FL) circumvents this problem by allowing models to be trained in a decentralized manner without sharing raw data. However, current federated frameworks assume that all participating devices are trustworthy, whereas real IoT environments often contain devices that are compromised, faulty, or resource-constrained. This research identifies the absence of dynamic trust assessment in existing federated systems as the gap to be addressed. To overcome this limitation, a trust-aware federated learning framework is proposed. The framework measures device reliability using update consistency, gradient deviation, and historical performance before aggregating model updates. Adaptive weighting is applied during model fusion, reducing the influence of suspicious or low-quality participants while preserving decentralized privacy. Experimental analysis under simulated adversarial scenarios shows enhanced robustness against poisoning attacks, stable convergence, and prediction accuracy on par with standard federated averaging. The proposed approach thus improves security and reliability, offering a scalable and practical solution for privacy-preserving intelligent IoT healthcare systems.
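The abstract describes the aggregation mechanism only at a high level. As a minimal sketch of how such trust-aware weighting could be realized, the Python snippet below scores each client by cosine similarity to the coordinate-wise median update (update consistency), by inverted distance to that median (gradient deviation), and by a scalar per-client history value, then aggregates updates with the scores as adaptive weights. The function names (trust_scores, trust_weighted_average), the choice of the median as the robust reference, and the equal weighting of the three signals are illustrative assumptions, not the framework's specification.

```python
import numpy as np

def trust_scores(updates, history, eps=1e-8):
    """Illustrative reliability scores from update consistency,
    gradient deviation, and past performance (all assumptions).

    updates : list of 1-D np.ndarray, one flattened model update per client
    history : list of floats in [0, 1], each client's historical reliability
    """
    stacked = np.stack(updates)              # (n_clients, n_params)
    median = np.median(stacked, axis=0)      # robust reference update

    # Update consistency: cosine similarity to the median update.
    sims = np.array([
        float(u @ median / (np.linalg.norm(u) * np.linalg.norm(median) + eps))
        for u in updates
    ])
    consistency = (sims + 1.0) / 2.0         # map [-1, 1] to [0, 1]

    # Gradient deviation: distance to the median, inverted and normalized.
    dists = np.linalg.norm(stacked - median, axis=1)
    deviation = 1.0 - dists / (dists.max() + eps)

    # Combine with historical performance (equal weights, a free choice).
    return (consistency + deviation + np.asarray(history)) / 3.0

def trust_weighted_average(updates, scores, eps=1e-8):
    """Aggregate client updates with trust scores as adaptive weights."""
    weights = scores / (scores.sum() + eps)
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0)

# Toy example: three honest clients and one crude poisoner.
rng = np.random.default_rng(0)
true_update = rng.normal(size=100)
updates = [true_update + 0.1 * rng.normal(size=100) for _ in range(3)]
updates.append(-5.0 * true_update)           # poisoned update
history = [0.9, 0.8, 0.85, 0.5]

scores = trust_scores(updates, history)
aggregated = trust_weighted_average(updates, scores)
print("trust scores:", np.round(scores, 3))
```

Using the median rather than the mean as the reference keeps a single poisoned update from dragging the reference toward itself, which is why the poisoned client in the example receives both low consistency and high deviation, and hence a near-zero weight in the fused model.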