[theory-seminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)

Junyao Zhao junyaoz at stanford.edu
Wed May 25 16:32:47 PDT 2022


Hi everyone,

Yeshwanth's talk is postponed due to unforeseen circumstances. Tomorrow we will have an hour of socializing instead.

Best,
Junyao

________________________________
From: theory-seminar <theory-seminar-bounces at lists.stanford.edu> on behalf of Junyao Zhao <junyaoz at stanford.edu>
Sent: Sunday, May 22, 2022 8:04 PM
To: theory-seminar at lists.stanford.edu <theory-seminar at lists.stanford.edu>; thseminar at cs.stanford.edu <thseminar at cs.stanford.edu>
Subject: [theory-seminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)

Hello everyone,

This week's theory lunch will take place Thursday at noon in the Engineering Quad<https://www.google.com/maps/place/Science+%26+Engineering+Quad+Courtyard/@37.4284882,-122.1765394,17z/data=!3m1!4b1!4m5!3m4!1s0x808fbb8ce58bcc27:0x677c06a883bb7bb7!8m2!3d37.428484!4d-122.1743507>. We'll start with some socializing, followed by a talk at 12:30pm. Yeshwanth will tell us about: Uniform Approximations for Randomized Hadamard Transforms

Abstract: In this talk, I will present some recent work establishing concentration properties for a class of structured random linear transformations based on Hadamard matrices. This class of matrices has been adopted as a computationally efficient alternative to "fully" random linear transformations (for instance, a matrix of i.i.d. Gaussians) in applications ranging from dimensionality reduction and compressed sensing to various high-dimensional machine learning scenarios. However, previous theoretical results only apply to the "low-dimensional" setting where a small number of rows are sampled from a full transformation matrix. I will present a full proof of our "high-dimensional" result, where we show that, as far as the distribution of the entries of the output is concerned, these structured transformations behave much the same as a fully random transformation. I will then describe an application of our inequality to the practically relevant setting of kernel approximation, where we obtain guarantees competitive with those obtained for fully random matrices by Rahimi and Recht.

Based on joint work with Jelani Nelson. Link to paper: https://arxiv.org/abs/2203.01599
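For anyone not familiar with the transform in the abstract, here is a minimal illustrative sketch (my own, not from the paper) of the randomized Hadamard transform HDx, where D is a random diagonal sign matrix and H is the normalized Hadamard matrix. It uses a dense scipy.linalg.hadamard matrix purely for small-dimension illustration; practical implementations would use a fast Walsh-Hadamard transform instead.

import numpy as np
from scipy.linalg import hadamard  # dense Hadamard matrix; fine only for small d

def randomized_hadamard_transform(x, rng):
    # Apply HD to x: random sign flips (D), then the orthonormal Hadamard matrix (H).
    # The dimension of x must be a power of two.
    d = x.shape[0]
    assert d & (d - 1) == 0, "dimension must be a power of two"
    signs = rng.choice([-1.0, 1.0], size=d)   # diagonal entries of D
    H = hadamard(d) / np.sqrt(d)              # normalized (orthonormal) Hadamard matrix
    return H @ (signs * x)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = randomized_hadamard_transform(x, rng)
print(np.linalg.norm(x), np.linalg.norm(y))   # norms agree: HD is orthogonal

Since both H and D are orthogonal, the transform preserves Euclidean norms while "spreading out" the coordinates of x, which is what makes subsequent subsampling of coordinates behave like a random projection.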


Cheers,
Junyao
