Private Federated Learning: From Theory to Practice


Principal Investigator

Harvard School of Engineering and Applied Sciences


This project aims to develop theory and algorithms for privacy-preserving federated learning through two intertwined thrusts. The first thrust will develop mathematical tools for quantifying and ensuring the privacy of distributed optimization algorithms used in federated learning (e.g., stochastic gradient descent). The privacy metric of choice is differential privacy and its variants. Specifically, Thrust 1 will apply tools from information theory to derive sharp characterizations of the fundamental trade-off between privacy and accuracy in federated learning. This trade-off inherently depends on the parameters of the learning algorithm: the number of training rounds, the number of parties, the sample size, and so on.
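To make the setting concrete, the sketch below shows the kind of private optimizer Thrust 1 would analyze: a single step of differentially private SGD that clips each per-example gradient to a fixed norm and perturbs the averaged gradient with Gaussian noise. The function name, parameter values, and noise scaling are illustrative assumptions for exposition, not part of the proposal.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                clip_norm=1.0, noise_mult=1.1, rng=None):
    """One illustrative DP-SGD update (assumed parameterization):
    clip each example's gradient, average, add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds clip_norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm and batch size
    noise = rng.normal(0.0, noise_mult * clip_norm / len(per_example_grads),
                       size=avg.shape)
    return params - lr * (avg + noise)
```

The noise multiplier controls the privacy-accuracy trade-off the proposal studies: larger noise yields stronger differential-privacy guarantees per round but slower, noisier convergence.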

The second research thrust will develop and test federated learning algorithms that achieve high accuracy on machine learning (ML) tasks for a target privacy guarantee. This will be accomplished by designing privacy mechanisms according to the theory developed in Thrust 1 and then benchmarking their performance on real-world datasets. The ultimate goal of Thrust 2 is to produce federated learning algorithms that achieve a favorable trade-off between accuracy and privacy while requiring minimal human tuning.
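As one concrete instance of the kind of algorithm Thrust 2 would benchmark, the sketch below pairs local SGD at each party with server-side clipping and Gaussian noising of the aggregated model updates, in the style of a federated-averaging round. All function names, step counts, and constants are illustrative assumptions, not the proposal's specified method.

```python
import numpy as np

def local_update(global_params, grad_fn, lr=0.1, steps=5):
    """Each party runs a few plain SGD steps on its local objective
    and returns only its model delta (illustrative client logic)."""
    w = global_params.copy()
    for _ in range(steps):
        w -= lr * grad_fn(w)
    return w - global_params

def fedavg_round(global_params, client_deltas,
                 clip_norm=1.0, noise_mult=0.5, rng=None):
    """Server clips each party's delta, averages, and adds Gaussian
    noise before updating the global model (assumed mechanism)."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = [d * min(1.0, clip_norm / max(np.linalg.norm(d), 1e-12))
               for d in client_deltas]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(client_deltas),
                       size=avg.shape)
    return global_params + avg + noise
```

Benchmarking such a round across datasets then reduces to sweeping the clipping norm and noise multiplier and measuring accuracy at each resulting privacy level, which is where theory-guided parameter choices could remove most of the manual tuning.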