[theory-seminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)
Junyao Zhao
junyaoz at stanford.edu
Thu May 26 10:07:07 PDT 2022
A gentle reminder: This is happening in 10 minutes (the talk is canceled, and we will have an hour of socializing instead).
________________________________
From: theory-seminar <theory-seminar-bounces at lists.stanford.edu> on behalf of Junyao Zhao <junyaoz at stanford.edu>
Sent: Wednesday, May 25, 2022 4:32 PM
To: theory-seminar at lists.stanford.edu <theory-seminar at lists.stanford.edu>; thseminar at cs.stanford.edu <thseminar at cs.stanford.edu>
Subject: Re: [theory-seminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)
Hi everyone,
Yeshwanth's talk is postponed due to unforeseen circumstances. Tomorrow we will have an hour of socializing instead.
Best,
Junyao
________________________________
From: theory-seminar <theory-seminar-bounces at lists.stanford.edu> on behalf of Junyao Zhao <junyaoz at stanford.edu>
Sent: Sunday, May 22, 2022 8:04 PM
To: theory-seminar at lists.stanford.edu <theory-seminar at lists.stanford.edu>; thseminar at cs.stanford.edu <thseminar at cs.stanford.edu>
Subject: [theory-seminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad<https://www.google.com/maps/place/Science+%26+Engineering+Quad+Courtyard/@37.4284882,-122.1765394,17z/data=!3m1!4b1!4m5!3m4!1s0x808fbb8ce58bcc27:0x677c06a883bb7bb7!8m2!3d37.428484!4d-122.1743507>. We'll start with some socializing, followed by a talk at 12:30pm. Yeshwanth will tell us about: Uniform Approximations for Randomized Hadamard Transforms
Abstract: In this talk, I will present some recent work establishing concentration properties for a class of structured random linear transformations based on Hadamard matrices. This class of matrices has been adopted as a computationally efficient alternative to "fully" random linear transformations (for instance, a matrix of iid Gaussians) in applications ranging from dimensionality reduction and compressed sensing to various high-dimensional machine learning scenarios. However, previous theoretical results only apply to the "low-dimensional" setting where a small number of rows are sampled from a full transformation matrix. I will present a full proof of our "high-dimensional" result, where we show that, as far as the distribution of the entries of the output is concerned, these structured transformations behave much the same as a fully random transformation. I will then describe an application of our inequality to the practically relevant setting of kernel approximation, where we obtain guarantees competitive with those for fully random matrices by Rahimi and Recht.
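For those curious how such a structured transformation looks in practice, here is a minimal sketch of a subsampled randomized Hadamard transform (SRHT), the standard construction of this type: flip signs with a random diagonal matrix D, apply the Hadamard matrix H via the fast Walsh-Hadamard transform, then sample a few coordinates. This is an illustrative sketch, not code from the paper; the function names and scaling convention are my own.

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform (unnormalized); len(x) must be a power of 2."""
    x = x.copy()
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # butterfly update
        h *= 2
    return x

def srht(x, m, rng):
    """Sample m coordinates of (1/sqrt(n)) * H D x, rescaled to preserve norms in expectation."""
    n = len(x)
    d = rng.choice([-1.0, 1.0], size=n)     # random sign flips: the diagonal of D
    y = fwht(d * x) / np.sqrt(n)            # orthonormal Hadamard transform of Dx
    idx = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * y[idx]          # subsample and rescale
```

Since H/sqrt(n) is orthonormal and D only flips signs, the full transform preserves Euclidean norms exactly; the subsampling step is where the concentration analysis discussed in the talk comes in.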
Based on joint work with Jelani Nelson. Link to paper: https://arxiv.org/abs/2203.01599
Cheers,
Junyao