Private Cross-Silo Federated Learning for Extracting Vaccine Adverse Event Mentions

Pallika Kanani, Virendra Marathe, Daniel Peterson, Rave Harpaz, Steve Bright

14 August 2021

Federated Learning (FL) is quickly becoming a go-to distributed training paradigm for users to jointly train a global model without physically sharing their data. Users can indirectly contribute to, and directly benefit from, a much larger aggregate data corpus used to train the global model. However, literature on successful applications of FL in real-world problem settings is somewhat sparse. In this paper, we describe our experience applying an FL-based solution to the Named Entity Recognition (NER) task for an adverse event detection application in the context of mass-scale vaccination programs. We present a comprehensive empirical analysis of the various dimensions of benefit gained from FL-based training. Furthermore, we investigate the effects of tighter Differential Privacy (DP) constraints in highly sensitive settings where federation users must enforce Local DP to ensure strict privacy guarantees. We show that Local DP can severely cripple the global model's prediction accuracy, thus disincentivizing users from participating in the federation. In response, we demonstrate how recent innovations in personalization methods can help significantly recover the lost accuracy. We focus our analysis on the Federated Fine-Tuning algorithm, FedFT, and prove that it is not PAC Identifiable, thus making it even more attractive for FL-based training.
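To make the pipeline sketched in the abstract concrete, the following is a minimal, self-contained Python sketch of the general pattern it describes: cross-silo federated averaging, Local DP applied to each silo's model update (clip the update, then add Gaussian noise), and a personalization pass in the spirit of federated fine-tuning, where each silo further trains the noisy global model on its own data. This is an illustrative toy (a linear model stands in for the NER model, and all function names, hyperparameters, and the three-silo setup are hypothetical), not the paper's actual implementation or privacy accounting.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(update, clip_norm=1.0, noise_multiplier=1.0):
    """Local DP treatment of a client update (Gaussian mechanism, sketch):
    clip the update's L2 norm, then add noise scaled to the clipping bound."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def local_sgd(weights, data, labels, lr=0.1, epochs=5):
    """A few epochs of plain gradient descent on one silo's data
    (least-squares loss as a stand-in for the real NER objective)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def federated_round(global_w, silos, noise_multiplier):
    """One FedAvg-style round: each silo trains locally and returns a
    DP-protected update; the server averages the updates."""
    updates = []
    for X, y in silos:
        local_w = local_sgd(global_w, X, y)
        updates.append(clip_and_noise(local_w - global_w,
                                      noise_multiplier=noise_multiplier))
    return global_w + np.mean(updates, axis=0)

# Toy cross-silo setup: three silos with mildly different data distributions.
d = 5
true_w = rng.normal(size=d)
silos = []
for _ in range(3):
    X = rng.normal(size=(200, d))
    y = X @ (true_w + 0.1 * rng.normal(size=d))  # per-silo distribution shift
    silos.append((X, y))

global_w = np.zeros(d)
for _ in range(20):
    global_w = federated_round(global_w, silos, noise_multiplier=0.5)

# Personalization in the spirit of FedFT (sketch): each silo fine-tunes the
# noisy global model on its own private data to recover accuracy lost to DP.
personalized = [local_sgd(global_w, X, y, epochs=10) for X, y in silos]
```

Under stricter Local DP (a larger `noise_multiplier`), the averaged global model degrades; the final fine-tuning step is the kind of personalization the abstract credits with recovering much of that lost accuracy.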


Venue : KDD'21 Applied Data Science Track

File Name : VAERS-KDD21-submission.pdf