The success of many machine learning algorithms, notably Deep Learning techniques, rests on the availability of vast amounts of data [Pou18]. In many scenarios, several companies or parties that own related data may be interested in building a common model, under the restriction that the privacy of their users’ data is preserved. Although several approaches have dealt with learning models while preserving data privacy [Wan18], Federated Learning (FL) provides a complete framework for building shared models among several parties, where all of them benefit from the global model while local data privacy is preserved [Yan19].
In most FL algorithms, each party updates the global model using its own data, while a central server orchestrates the process, aggregating the updates and sending the resulting global model back to each party [McM16]. Throughout the process, the data never leaves each location, and the updates are transmitted in encrypted form, so privacy is preserved at every step.
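The update-and-aggregate loop described above can be sketched as a toy version of federated averaging in the spirit of [McM16]. All names and the one-dimensional linear model are illustrative, not the paper's actual implementation: each party runs a local gradient step starting from the current global weights, and the server averages the resulting weights, weighted by local dataset size.

```python
# Toy sketch of a FedAvg-style round: models are plain lists of weights,
# and each party fits a 1-D linear model y = w * x on its private data.

def local_update(global_weights, local_data, lr=0.1):
    """Client step: one gradient-descent pass on local data only."""
    w = global_weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return [w - lr * grad]

def fed_avg(updates, sizes):
    """Server step: average client weights, weighted by local dataset size."""
    total = sum(sizes)
    return [sum(u[0] * n for u, n in zip(updates, sizes)) / total]

# Two parties with related but distinct local data (never shared with the server).
party_a = [(1.0, 2.1), (2.0, 4.2)]   # roughly y = 2.1 * x
party_b = [(1.0, 1.9), (3.0, 5.7)]   # roughly y = 1.9 * x

global_w = [0.0]
for _ in range(50):  # communication rounds
    updates = [local_update(global_w, d) for d in (party_a, party_b)]
    global_w = fed_avg(updates, [len(party_a), len(party_b)])
```

In this sketch the server never sees raw data points, only the weight updates; real deployments additionally encrypt or securely aggregate those updates, as noted above.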
Although the global model is expected to perform better than each isolated model built only with local data, this might not be true for some parties. In order to maintain the incentive of these parties to participate in the process, some work has been done on the personalization of the global model for each of the clients. These techniques rely on local adaptation of the global model, so each party not only collaborates in the global learning but also optimizes the model locally for itself, better fitting its own data characteristics. Most of the prior work on FL personalization still focuses on building a single shared model guided by the performance on global data, but this may not be very relevant if the models are then personalized [Kul20]; therefore, the performance on each local party should also be considered in the global learning, thus reducing the risk of ending up with poor local models.
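The local-adaptation idea above can be illustrated with a minimal sketch: after federated training finishes, a party takes the global weights and continues gradient descent on its own data only, so the personalized model fits its local distribution at least as well as the shared one. The function names and the 1-D linear model are hypothetical stand-ins, not any specific method from the cited works.

```python
# Illustrative local adaptation (fine-tuning) of a trained global model,
# assuming a 1-D linear model y = w * x and one party's private data.

def mse(w, data):
    """Mean squared error of the linear model on a dataset."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def fine_tune(w, data, lr=0.05, steps=100):
    """Personalize the global weight with further gradient steps on local data."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

global_w = 2.0                        # stand-in for the federated global model
party_b = [(1.0, 1.9), (3.0, 5.7)]    # this party's data favours y = 1.9 * x

personal_w = fine_tune(global_w, party_b)
# Local error never worsens: fine-tuning can only fit party_b's data better.
assert mse(personal_w, party_b) <= mse(global_w, party_b)
```

This is exactly the incentive argument in the paragraph above: even a party for which the global model underperforms can recover a competitive local model through adaptation.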
Key references
[Kul20] V. Kulkarni, M. Kulkarni, A. Pant. Survey of Personalization Techniques for Federated Learning. arXiv preprint arXiv:2003.08673. 2020.
[McM16] H. McMahan, E. Moore, D. Ramage, B. A. Arcas. Federated Learning of Deep Networks using Model Averaging. arXiv preprint arXiv:1602.05629. 2016.
[Yan19] Q. Yang, Y. Liu, Y. Cheng, Y. Kang, T. Chen, H. Yu. Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(3), 1-207. 2019.
References
[Ari19] M. G. Arivazhagan, V. Aggarwal, A. K. Singh, S. Choudhary. Federated learning with personalization layers. arXiv preprint arXiv:1912.00818. 2019.
[Bon19] K. Bonawitz, H. Eichner, W. Grieskamp, D. Huba, A. Ingerman, V. Ivanov, C. Kiddon, J. Konečný, S. Mazzocchi, H. B. McMahan, T. Van Overveldt, D. Petrou, D. Ramage, J. Roselander. Towards federated learning at scale: System design. arXiv preprint arXiv:1902.01046. 2019.
[Har18] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, D. Ramage. Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604. 2018.
[Jia19] Y. Jiang, J. Konečný, K. Rush, S. Kannan. Improving federated learning personalization via model agnostic meta learning. arXiv preprint arXiv:1909.12488. 2019.
[Kai19] P. Kairouz, et al. Advances and open problems in federated learning. arXiv preprint arXiv:1912.04977. 2019.
[Man20] Y. Mansour, M. Mohri, J. Ro, A. T. Suresh. Three approaches for personalization with applications to federated learning. arXiv preprint arXiv:2002.10619. 2020.
[Pou18] S. Pouyanfar, S. Sadiq, Y. Yan, H. Tian, Y. Tao, M. Presa-Reyes, M. Shyu, S. Chen, S. S. Iyengar. A survey on deep learning: Algorithms, techniques, and applications. ACM Computing Surveys (CSUR), 51(5), 1-36. 2018.
[Wan18] A. Wang, C. Wang, M. Bi, J. Xu. A Review of Privacy-Preserving Machine Learning Classification. In International Conference on Cloud Computing and Security, 671-682. 2018.
[Wu20] Q. Wu, K. He, X. Chen. Personalized federated learning for intelligent IoT applications: A cloud-edge based framework. IEEE Open Journal of the Computer Society, 1, 35-44. 2020.
[Yu20] T. Yu, E. Bagdasaryan, V. Shmatikov. Salvaging federated learning by local adaptation. arXiv preprint arXiv:2002.04758. 2020.