[theory-seminar] Theory Lunch 5/12: Max Hopkins (UCSD)
Megan D. Harris
mdharris at stanford.edu
Thu May 12 11:58:14 PDT 2022
Also, there is no vegetarian option; I made a mistake on the order. There is a salad, but it has chicken and bacon on it (the dressing is on the side, and you can remove the meat).
Best,
Megan Denise Harris | Faculty Administrator | Computer Science (Gates Building) |
353 Serra Mall, Rm 187, Stanford, CA 94305 |
Office Phone | 650.723.1658 | Cell 206-313-1390 |
Campus Days: Monday and Thursday 7AM-3:30PM
________________________________
From: theory-seminar <theory-seminar-bounces at lists.stanford.edu> on behalf of Junyao Zhao <junyaoz at stanford.edu>
Sent: Wednesday, May 11, 2022 10:21 PM
To: theory-seminar at lists.stanford.edu <theory-seminar at lists.stanford.edu>; thseminar at cs.stanford.edu <thseminar at cs.stanford.edu>
Subject: Re: [theory-seminar] Theory Lunch 5/12: Max Hopkins (UCSD)
A gentle reminder: This is happening in 10 minutes.
________________________________
From: theory-seminar <theory-seminar-bounces at lists.stanford.edu> on behalf of Junyao Zhao <junyaoz at stanford.edu>
Sent: Sunday, May 8, 2022 8:30 PM
To: theory-seminar at lists.stanford.edu <theory-seminar at lists.stanford.edu>; thseminar at cs.stanford.edu <thseminar at cs.stanford.edu>
Subject: [theory-seminar] Theory Lunch 5/12: Max Hopkins (UCSD)
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad<https://www.google.com/maps/place/Science+%26+Engineering+Quad+Courtyard/@37.4284882,-122.1765394,17z/data=!3m1!4b1!4m5!3m4!1s0x808fbb8ce58bcc27:0x677c06a883bb7bb7!8m2!3d37.428484!4d-122.1743507>. We'll start with some socializing, followed by a talk at 12:30pm. Max will tell us about: Realizable Learning is All You Need
Abstract: The equivalence of realizable and agnostic learning in Valiant's and Vapnik and Chervonenkis' Probably Approximately Correct (PAC) learning model is one of the most classical results in learning theory, dating back to Vapnik and Chervonenkis' 1974 book on the theory of pattern recognition. Roughly speaking, this surprising equivalence states that given a set X and a family of binary classifiers H, the ability to learn a classifier h ∈ H from labeled examples of the form (x, h(x)) is in fact sufficient for a (seemingly) much harder task: given samples from any distribution D over X × {0, 1}, find the best approximation to D in H.
Traditionally, the proof of this fact is complicated and brittle, relying on a third-party equivalence with a strong property called uniform convergence. In this talk, we review the basic definitions of realizable and agnostic PAC learning and give an elementary, 'model-independent' proof of their equivalence via a direct blackbox reduction. Time permitting, we'll also discuss some new implications of this framework beyond the original PAC model.
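(A toy illustration, not the construction from the talk: the flavor of a realizable-to-agnostic blackbox reduction can be sketched for 1-D threshold classifiers. The learner interface, subsample counts, and split sizes below are illustrative assumptions only.)

```python
import random

def predict(t, x):
    """Threshold classifier h_t(x) = 1 if x >= t, else 0."""
    return 1 if x >= t else 0

def realizable_learner(sample):
    """Toy realizable PAC learner for 1-D thresholds: returns the smallest
    threshold consistent with the positive examples. (Assumes labels of the
    form y = 1[x >= t]; on non-realizable input it still returns *some* h.)"""
    ones = [x for x, y in sample if y == 1]
    return min(ones) if ones else float("inf")

def agnostic_via_realizable(sample, n_candidates=100, seed=0):
    """Toy blackbox reduction: call the realizable learner on random
    subsamples of one half of the data to build a finite candidate list,
    then pick the candidate with the lowest empirical error on the
    held-out half (ERM over the candidates)."""
    rng = random.Random(seed)
    data = sample[:]
    rng.shuffle(data)
    mid = len(data) // 2
    train, val = data[:mid], data[mid:]
    candidates = {realizable_learner(rng.sample(train, max(1, mid // 2)))
                  for _ in range(n_candidates)}
    def val_err(t):
        return sum(predict(t, x) != y for x, y in val) / len(val)
    return min(candidates, key=val_err)
```

The point of the sketch is only the shape of the reduction: the agnostic learner never inspects how the realizable learner works, it just queries it as a black box and then does empirical risk minimization over the finitely many hypotheses it got back.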
Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.
Cheers,
Junyao