
[theory-seminar] "Towards instance-optimal compression for distributed mean estimation" – Ananda Theertha Suresh (Thu, 10-Mar @ 4:00pm)

Tavor Baharav tavorb at stanford.edu
Thu Mar 10 14:15:01 PST 2022


Reminder: this talk will be today at 4pm via Zoom (link below).
Please join us for snacks at 3:30pm in the Grove outside Packard.

On Mon, Mar 7, 2022 at 6:07 PM Tavor Baharav <tavorb at stanford.edu> wrote:

> Towards instance-optimal compression for distributed mean estimation
> Ananda Theertha Suresh – Research Scientist, Google Research
>
> Thu, 10-Mar / 4:00pm / in person & streamed via Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). The talk will be streamed on
> Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> Abstract
>
> Distributed mean estimation is a commonly used subroutine in many
> distributed learning and optimization algorithms. In several distributed
> scenarios, communication cost is a bottleneck and quantization techniques
> have been proposed to improve communication efficiency. However, existing
> techniques often incur a quantization error that scales with the range of
> the data points. We propose a new non-interactive correlated quantization
> protocol whose error guarantee depends on the deviation of the data points
> rather than their absolute range. Furthermore, our algorithm and analysis
> do not make any distributional assumptions or require any prior knowledge
> of the concentration properties of the data. We prove the optimality of our protocol
> under mild assumptions and also show that applying it as a subroutine in
> distributed optimization leads to better convergence rates.
>
> Based on joint work with Jae Ro, Ziteng Sun, and Felix Yu.
> Bio
>
> Ananda Theertha Suresh is a research scientist at Google Research, New
> York. He received his PhD from the University of California, San Diego,
> where he was advised by Prof. Alon Orlitsky. His research focuses on
> theoretical and algorithmic aspects of machine learning, information
> theory, differential privacy, and statistics. He is a recipient of the
> 2017 Marconi Society Paul Baran Young Scholar Award and a co-recipient of
> best paper awards at NeurIPS 2015, ALT 2020, and CCS 2021, and of a best
> paper honorable mention at ICML 2017.
>
> This talk is hosted by the ISL Colloquium
> <https://isl.stanford.edu/talks/>. To receive talk announcements, subscribe
> to the mailing list isl-colloq at lists.stanford.edu
> <https://mailman.stanford.edu/mailman/listinfo/isl-colloq>.
> ------------------------------
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
> This talk:
> http://isl.stanford.edu/talks/talks/2022q1/ananda-theertha-suresh/
>
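For intuition on the abstract's range-versus-deviation distinction, here is a
toy one-dimensional sketch (an illustration only, not the protocol from the
talk): each of n clients holds a value in [0, 1] and sends a single bit to
the server. With independent randomized rounding the per-client errors add
up, while with a shared, stratified set of rounding thresholds each bit stays
unbiased but the errors largely cancel when the data are tightly clustered.
All function and variable names below are made up for the example.

import numpy as np

def independent_1bit(xs, rng):
    # Each client i holds x_i in [0, 1] and independently sends one bit
    # that is 1 with probability x_i; the server averages the bits.
    bits = rng.random(len(xs)) < xs
    return bits.mean()

def correlated_1bit(xs, rng):
    # Clients share randomness: the rounding thresholds form a stratified
    # sample of [0, 1] (one per client, randomly assigned), so each bit is
    # still unbiased but the rounding errors largely cancel in the average.
    n = len(xs)
    thresholds = (rng.permutation(n) + rng.random()) / n  # each marginally U[0, 1)
    bits = xs > thresholds
    return bits.mean()

def empirical_mse(estimator, xs, rng, trials=2000):
    true_mean = xs.mean()
    return np.mean([(estimator(xs, rng) - true_mean) ** 2 for _ in range(trials)])

rng = np.random.default_rng(0)
# Tightly clustered data: small deviation, even though the range is [0, 1].
xs = np.clip(0.5 + 0.01 * rng.standard_normal(1000), 0.0, 1.0)
print("independent rounding MSE:", empirical_mse(independent_1bit, xs, rng))
print("correlated  rounding MSE:", empirical_mse(correlated_1bit, xs, rng))

On clustered inputs like these, the correlated variant's empirical MSE should
come out well below the independent one's, mirroring how an error guarantee
tied to the deviation of the data can beat one tied to its absolute range.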

