[theory-seminar] Theory Lunch 08/12: Mingda Qiao

David Wajc wajc at stanford.edu
Mon Aug 9 10:08:41 PDT 2021


Hi all,

This week's theory lunch will take place Thursday at noon (PDT) in this
Zoom room
<https://stanford.zoom.us/j/98932206471?pwd=YXdubytLVGNTbXhGeXFxNmJaVnhrUT09>.
As usual, we'll start with some socializing, followed by a talk starting at
12:30pm. Pras will be MCing. Thanks Pras!
Mingda will tell us about: *Exponential Weights Algorithms for Selective
Learning*

*Abstract:* We study the selective learning problem, in which a learner
observes $n$ labeled data points one at a time. At a time of its choosing,
the learner picks a window length $w$ and a hypothesis $h$ from a given
hypothesis class $H$, and then labels the next $w$ data points using $h$.
The excess risk incurred by the learner is defined as the difference
between the average loss of $h$ over the $w$ data points and the smallest
possible average loss among all hypotheses in $H$ over those data points.
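
To make the objective concrete, here is a minimal sketch in Python of the
quantity being minimized; the names (avg_loss, excess_risk, window) are
illustrative choices for exposition, not from the paper:

    # Illustrative sketch: excess risk of labeling a window of w points
    # with a chosen hypothesis h, relative to the best hypothesis in H
    # on that same window. A hypothesis is any callable x -> prediction.
    def avg_loss(h, window, loss):
        # Average loss of hypothesis h over a list of (x, y) pairs.
        return sum(loss(h(x), y) for x, y in window) / len(window)

    def excess_risk(h, hypotheses, window, loss):
        # Average loss of h minus the best achievable average loss in H.
        best = min(avg_loss(g, window, loss) for g in hypotheses)
        return avg_loss(h, window, loss) - best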

We give an improved algorithm, termed the hybrid exponential weights
algorithm, that achieves an expected excess risk of $O((\log\log|H| +
\log\log n)/\log n)$. This gives a doubly exponential improvement in the
dependence on $|H|$ over the best known upper bound. We also prove an
almost matching lower bound, showing that the $\log\log|H|$ dependence is
necessary.
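
The abstract does not detail the hybrid algorithm itself, but for context,
the classic exponential weights update it is named after looks roughly as
follows. This is a standard textbook sketch, not the authors' method, and
the learning rate eta is an assumed parameter:

    import math

    # Standard exponential weights (multiplicative weights) step: each
    # hypothesis's weight decays exponentially in its incurred loss, and
    # the weights are renormalized to form a probability distribution.
    def exp_weights_update(weights, losses, eta=0.5):
        new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
        total = sum(new)
        return [w / total for w in new]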

Based on joint work with Greg Valiant.

Cheers, David