
[theory-seminar] Theory Lunch 5/12: Max Hopkins (UCSD)

Junyao Zhao junyaoz at
Sun May 8 20:30:19 PDT 2022

Hi everyone,

This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Max will tell us about: Realizable Learning is All You Need

Abstract: The equivalence of realizable and agnostic learning in Valiant's and Vapnik and Chervonenkis's Probably Approximately Correct (PAC) learning model is one of the most classical results in learning theory, dating all the way back to the latter pair's 1974 book on the theory of pattern recognition. Roughly speaking, this surprising equivalence states that given a set X and a family of binary classifiers H, the ability to learn a classifier h ∈ H from labeled examples of the form (x, h(x)) is in fact sufficient for a (seemingly) much harder task: given samples from any distribution D over X × {0, 1}, finding the best approximation to D within H.
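For readers new to the definitions, the contrast can be made concrete with a toy example (my own illustration, not from the talk): over a finite threshold class, empirical risk minimization achieves zero error in the realizable case, while in the agnostic case the benchmark is only the best error attainable within H.

```python
import random

# Toy setup (hypothetical, not from the talk): X = {0,...,7} and H the
# class of threshold classifiers h_t(x) = 1 iff x >= t.
X = list(range(8))
H = [lambda x, t=t: int(x >= t) for t in range(9)]

def err(h, sample):
    """Empirical 0-1 error of h on labeled pairs (x, y)."""
    return sum(h(x) != y for x, y in sample) / len(sample)

random.seed(0)

# Realizable case: labels come from some h* in H, so ERM finds a
# hypothesis with zero empirical error.
h_star = H[4]
realizable = [(x, h_star(x)) for x in random.choices(X, k=200)]
erm_real = min(H, key=lambda h: err(h, realizable))
print(err(erm_real, realizable))  # 0.0

# Agnostic case: roughly 10% of labels are flipped, so we can only ask
# for the best approximation within H, not zero error.
agnostic = [(x, h_star(x) ^ (random.random() < 0.1))
            for x in random.choices(X, k=200)]
erm_agn = min(H, key=lambda h: err(h, agnostic))
print(err(erm_agn, agnostic))  # small, but typically nonzero
```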

Traditionally, the proof of this fact is complicated and brittle, relying on a three-way equivalence with a strong intermediate property called uniform convergence. In this talk, we review the basic definitions of realizable and agnostic PAC learning and give an elementary, ‘model-independent’ proof of their equivalence via a direct blackbox reduction. Time permitting, we'll also discuss some new implications of this framework beyond the original PAC model.
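One way such a blackbox reduction can be pictured (a simplified sketch under my own toy assumptions, not the authors' exact construction): since the best h ∈ H correctly labels most of the data, some subsample of the training set is realizable; running the realizable learner on every subsample and validating the outputs on fresh data then recovers a near-best hypothesis, without ever opening up the learner.

```python
import itertools
import random

# Toy hypothesis class (hypothetical): thresholds on X = {0,...,7}.
H = [lambda x, t=t: int(x >= t) for t in range(9)]

def realizable_learner(sample):
    """A blackbox realizable learner: returns some h in H consistent
    with the sample, or None if no such h exists."""
    for h in H:
        if all(h(x) == y for x, y in sample):
            return h
    return None

def agnostic_from_realizable(A, train, holdout):
    """Sketch of a blackbox agnostic-to-realizable reduction: run A on
    every subsample of the training data (at least one subsample is
    consistent with the best h in H), then return the candidate with
    the lowest error on a fresh holdout set."""
    candidates = [h for k in range(1, len(train) + 1)
                  for sub in itertools.combinations(train, k)
                  if (h := A(list(sub))) is not None]
    return min(candidates,
               key=lambda h: sum(h(x) != y for x, y in holdout))

random.seed(1)
X = list(range(8))
truth = H[3]
# Noisy (agnostic) samples: labels of truth, flipped with prob 0.15.
noisy = lambda: [(x, truth(x) ^ (random.random() < 0.15))
                 for x in random.choices(X, k=12)]
h_hat = agnostic_from_realizable(realizable_learner, noisy(), noisy())
```

The exponential enumeration over subsamples is only for illustration on a tiny dataset; the point is that the realizable learner A is used purely as a black box.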

Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.


