10 April 2017

Google Research Blog: “Federated Learning: Collaborative Machine Learning without Centralized Training Data”

It works like this: your device downloads the current model, improves it by learning from data on your phone, and then summarizes the changes as a small focused update. Only this update to the model is sent to the cloud, using encrypted communication, where it is immediately averaged with other user updates to improve the shared model. All the training data remains on your device, and no individual updates are stored in the cloud.

Federated Learning allows for smarter models, lower latency, and less power consumption, all while ensuring privacy. And this approach has another immediate benefit: in addition to providing an update to the shared model, the improved model on your phone can also be used immediately, powering experiences personalized by the way you use your phone.

Brendan McMahan & Daniel Ramage
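The round they describe — local training on-device, a small update sent up, server-side averaging — can be sketched in a few lines of plain Python. This is an illustrative toy, not Google's implementation: the "training" is a stand-in for real gradient steps, and the function names are made up.

```python
# Toy sketch of one federated averaging round.
# local_update and federated_average are illustrative names, not a real API.

def local_update(global_model, local_data, lr=0.1):
    """Simulate on-device training: nudge each weight toward the mean
    of the local data (a stand-in for real gradient descent), and
    return only the delta -- the 'small focused update' that is sent
    to the cloud. The raw data never leaves this function."""
    target = sum(local_data) / len(local_data)
    return [lr * (target - w) for w in global_model]

def federated_average(global_model, updates):
    """Server side: average the user updates into a consensus change
    and apply it to the shared model. Individual updates are used
    only transiently, never stored."""
    n = len(updates)
    avg = [sum(u[i] for u in updates) / n for i in range(len(global_model))]
    return [w + d for w, d in zip(global_model, avg)]

# One round: three simulated devices, each training on its own private data.
model = [0.0, 0.0]
device_data = [[1.0, 3.0], [2.0, 4.0], [0.0, 2.0]]
updates = [local_update(model, data) for data in device_data]
model = federated_average(model, updates)
```

In a real deployment each update would also be encrypted in transit and the round repeated many times, but the core loop — personalize locally, aggregate centrally — is just this.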

I’m no expert, but it sounds like Google has negated any privacy advantage Apple might have had from its use of ‘differential privacy’ on-device. Looking forward to seeing federated learning integrated into Gboard for iOS as well.

Federated Learning flow chart: Your phone personalizes the model locally, based on your usage (A). Many users’ updates are aggregated (B) to form a consensus change (C) to the shared model, after which the procedure is repeated.
