In a federated learning (FL) system, users can collaborate to build a shared model without explicitly sharing data, but model accuracy degrades when differential privacy guarantees are required during training. We hypothesize that domain adaptation techniques can mitigate this accuracy loss while increasing per-user prediction accuracy, especially when user data comes from disparate distributions. We present and analyze a mixture-of-experts (MoE) based domain adaptation approach that allows effective collaboration between users in a differentially private FL setting. Each user contributes to (and benefits from) a general, shared model that performs a common task, while maintaining a private model that adjusts predictions to the user's particular domain. Using both synthetic and real-world datasets, we empirically demonstrate that these private models can increase accuracy while protecting against the release of users' private data.
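As an illustrative sketch of the gated combination described above, the snippet below blends a shared (DP-trained) expert with a per-user private expert via a learned gate. All names and the linear-model form are assumptions for illustration, not the paper's actual architecture or notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_model(x, w):
    # Global expert trained collaboratively across users
    # (with DP noise added during training, in practice).
    return x @ w

def private_model(x, v):
    # Per-user expert; its parameters are never shared.
    return x @ v

def gate(x, u):
    # Sigmoid gate in (0, 1): how much to trust the private expert
    # for this particular input.
    return 1.0 / (1.0 + np.exp(-(x @ u)))

def moe_predict(x, w, v, u):
    # Convex combination of the two experts, weighted by the gate.
    g = gate(x, u)
    return g * private_model(x, v) + (1.0 - g) * shared_model(x, w)

# Toy usage with random parameters.
d = 3
x = rng.normal(size=d)
w = rng.normal(size=d)  # shared-model weights
v = rng.normal(size=d)  # private-model weights
u = rng.normal(size=d)  # gate weights
prediction = moe_predict(x, w, v, u)
```

Because the gate output lies in (0, 1), the MoE prediction always lands between the shared and private experts' outputs, so a user whose data matches the global distribution can lean on the shared model while an outlier user leans on their private one.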