[theory-seminar] Theory Lunch 08/06: Margalit Glasgow
noaj at stanford.edu
Wed Aug 5 09:30:00 PDT 2020
Theory lunch will be happening tomorrow (PT), and Margalit will tell us
about *Approximate Gradient Codes with Optimal Decoding*. An abstract can
be found below.
Gradient codes are used in distributed machine learning problems to more
accurately compute batch gradients in the presence of slow machines, called
stragglers. A gradient code describes a pattern of replicating and
assigning data to each machine, and may guarantee that the full batch
gradient can be computed *exactly* or *approximately*. In this talk, I'll
discuss a new design for gradient codes based on expander graphs that
approximates the full gradient well under both random and adversarial
stragglers.
The performance of these codes can be analyzed by studying the connected
components in randomly sparsified expander graphs! I'll also show that
using these codes yields good convergence guarantees for gradient descent.
Joint work with Mary Wootters.
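To make the setup concrete, here is a minimal sketch of the replicate-and-assign idea the abstract describes. It uses random replication as a stand-in for the talk's expander-based construction, and a simple averaging decoder rather than the optimal decoding of the paper; all function names and parameters are illustrative, not from the talk.

```python
import random

def assign_blocks(n_machines, n_blocks, replication):
    """Assign each data block to `replication` distinct machines.
    (Random replication here; the talk's design uses expander graphs.)"""
    assignment = {m: [] for m in range(n_machines)}
    for b in range(n_blocks):
        for m in random.sample(range(n_machines), replication):
            assignment[m].append(b)
    return assignment

def approximate_gradient(assignment, block_grads, stragglers):
    """Approximate the full batch gradient (sum of per-block gradients)
    from non-straggler machines only: each block's surviving copies are
    averaged, and blocks with no surviving copy are dropped."""
    sums, counts = {}, {}
    for m, blocks in assignment.items():
        if m in stragglers:
            continue  # this machine is slow; its work never arrives
        for b in blocks:
            sums[b] = sums.get(b, 0.0) + block_grads[b]
            counts[b] = counts.get(b, 0) + 1
    return sum(sums[b] / counts[b] for b in sums)

# With no stragglers the decoder recovers the exact full gradient;
# with stragglers, higher replication makes the approximation better.
assignment = assign_blocks(n_machines=10, n_blocks=20, replication=3)
grads = [1.0] * 20  # scalar per-block gradients for illustration
exact = approximate_gradient(assignment, grads, stragglers=set())
approx = approximate_gradient(assignment, grads, stragglers={0, 1, 2})
```

In this toy decoder, the error with adversarial stragglers depends on how the assignment graph fragments when machines are removed, which is exactly the connected-components question the abstract points to.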
See you there!