Automatically extracting mentions of suspected drug or vaccine adverse events (potential side effects) from unstructured text is critical in the current pandemic, but the small amounts of labeled training data available remain siloed across organizations due to privacy concerns. Federated Learning (FL) is quickly becoming a go-to distributed training paradigm that lets such organizations jointly train a more accurate global model without physically sharing their data. However, literature on successful applications of FL in real-world problem settings remains sparse. In this paper, we describe our experience applying an FL-based solution to the Named Entity Recognition (NER) task for an adverse event detection application in the context of mass-scale vaccination programs. Furthermore, we show that Differential Privacy (DP), while offering stronger privacy guarantees, severely degrades the global model’s prediction accuracy, disincentivizing users from participating in the federation. We demonstrate how recent innovations in personalization methods can help significantly recover the lost accuracy.