From wajc at stanford.edu Mon Mar 1 11:20:01 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 1 Mar 2021 11:20:01 -0800
Subject: [theory-seminar] Theory Lunch 03/04: Yang P. Liu
Message-ID:
Hi all,
This week's theory lunch will take place Thursday at noon (PDT), at our
gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Yang will tell us about *Discrepancy Minimization via a Self-Balancing Walk*
*Abstract:*
*We study discrepancy minimization for vectors in R^n under various
settings. The main result is the analysis of a new simple random process in
multiple dimensions through a comparison argument. As corollaries, we
obtain bounds which are tight up to logarithmic factors for online vector
balancing against oblivious adversaries, resolving several questions posed
by Bansal, Jiang, Singla, and Sinha (STOC 2020), as well as a linear time
algorithm for logarithmic bounds for the Komlós conjecture.*
Cheers,
David
PS: Pro tip: To join the talk (at 12:30): (1) go to the
lecture hall, (2) grab a seat, and (3) press X to join the zoom lecture.
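The "simple random process" in the abstract can be illustrated with a short, hedged Python sketch of online vector balancing: on each arriving vector, a sign is chosen with probability negatively correlated with the current signed sum, so the walk balances itself. The specific probability rule and the scale parameter c below are assumptions based on the abstract's description, not the speaker's actual algorithm or code.

```python
import math
import random

def self_balancing_walk(vectors, c, rng=random.random):
    """Assign signs +1/-1 to vectors online, nudging the signed sum toward 0.

    On each arriving vector v (Euclidean norm at most 1), pick sign
    eps = +1 with probability (1 - <w, v> / c) / 2, where w is the
    current signed prefix sum and c is a scale parameter (think
    c = Theta(log n)).  The negative correlation between eps and
    <w, v> keeps the norm of w small with high probability.
    """
    dim = len(vectors[0])
    w = [0.0] * dim
    signs = []
    for v in vectors:
        dot = sum(wi * vi for wi, vi in zip(w, v))
        # Clamp to [0, 1] so this stays a valid probability even if
        # the walk has drifted farther than c from the origin.
        p_plus = min(1.0, max(0.0, (1.0 - dot / c) / 2.0))
        eps = 1 if rng() < p_plus else -1
        signs.append(eps)
        for i in range(dim):
            w[i] += eps * v[i]
    return signs, w
```

Running this on a stream of random unit vectors, the coordinates of the signed sum stay small rather than growing like sqrt(T), which is the self-balancing behavior the talk analyzes.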
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wajc at stanford.edu Thu Mar 4 08:55:50 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 4 Mar 2021 08:55:50 -0800
Subject: [theory-seminar] Theory Lunch 03/04: Yang P. Liu
In-Reply-To:
References:
Message-ID:
Reminder: theory lunch is today at noon. See you then!
Cheers,
David
On Mon, 1 Mar 2021 at 11:20, David Wajc wrote:
> Hi all,
>
> This week's theory lunch will take place Thursday at noon (PDT), at our
> gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Yang will tell us about *Discrepancy Minimization via a Self-Balancing
> Walk*
>
> *Abstract:*
>
>
>
>
> *We study discrepancy minimization for vectors in R^n under various
> settings. The main result is the analysis of a new simple random process in
> multiple dimensions through a comparison argument. As corollaries, we
> obtain bounds which are tight up to logarithmic factors for online vector
> balancing against oblivious adversaries, resolving several questions posed
> by Bansal, Jiang, Singla, and Sinha (STOC 2020), as well as a linear time
> algorithm for logarithmic bounds for the Komlós conjecture.*
> Cheers,
> David
> PS: Pro tip: To join the talk (at 12:30): (1) go to the
> lecture hall, (2) grab a seat, and (3) press X to join the zoom lecture.
>
From wajc at stanford.edu Thu Mar 4 12:24:59 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 4 Mar 2021 12:24:59 -0800
Subject: [theory-seminar] Theory Lunch 03/04: Yang P. Liu
In-Reply-To:
References:
Message-ID:
Reminder: Yang's talk starts in 5 minutes.
Cheers,
David
On Thu, 4 Mar 2021 at 08:55, David Wajc wrote:
> Reminder: theory lunch is today at noon. See you then!
>
> Cheers,
> David
>
> On Mon, 1 Mar 2021 at 11:20, David Wajc wrote:
>
>> Hi all,
>>
>> This week's theory lunch will take place Thursday at noon (PDT), at our
>> gather space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> SongComplexity).
>> Yang will tell us about *Discrepancy Minimization via a Self-Balancing
>> Walk*
>>
>> *Abstract:*
>>
>>
>>
>>
>> *We study discrepancy minimization for vectors in R^n under various
>> settings. The main result is the analysis of a new simple random process in
>> multiple dimensions through a comparison argument. As corollaries, we
>> obtain bounds which are tight up to logarithmic factors for online vector
>> balancing against oblivious adversaries, resolving several questions posed
>> by Bansal, Jiang, Singla, and Sinha (STOC 2020), as well as a linear time
>> algorithm for logarithmic bounds for the Komlós conjecture.*
>> Cheers,
>> David
>> PS: Pro tip: To join the talk (at 12:30): (1) go to the
>> lecture hall, (2) grab a seat, and (3) press X to join the zoom lecture.
>>
>
From jneu at stanford.edu Sat Mar 6 22:47:37 2021
From: jneu at stanford.edu (Joachim Neu)
Date: Sat, 06 Mar 2021 22:47:37 -0800
Subject: [theory-seminar] "Robust Learning from Batches -- The Best Things
 in Life are (Almost) Free" - Alon Orlitsky (Thu, 11-Mar @ 4:30pm)
Message-ID: <40e5db9940a09a63191cc92965b7e106e63dbd28.camel@stanford.edu>
Robust Learning from Batches -- The Best Things in Life are (Almost)
Free
Alon Orlitsky - Professor, UC San Diego
Thu, 11-Mar / 4:30pm
/ Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
To avoid Zoom-bombing, we ask attendees to sign in
via the above URL to receive the Zoom meeting details by email.
Please join us before the talk for the Thursdays
4pm ISL coffee (half-)hour in Gathertown at:
https://gather.town/uaAn6GTFg40xKE2u/ISL (Password: isl-colloq)
Abstract
In many applications, including natural language processing, sensor
networks, collaborative filtering, and federated learning, data are
collected in batches, some potentially corrupt, biased, or even
adversarial. Learning algorithms for this setting have therefore
garnered considerable recent attention. We develop a general framework
for robust learning from batches, and determine the least number of
samples required for robust density estimation and classification over
both discrete and continuous domains. Perhaps surprisingly, we show that
robust learning can be achieved with essentially the same number of
samples as required for genuine data. For the important problems of
learning discrete and piecewise-polynomial densities, and of
interval-based classification, we achieve these limits with
polynomial-time algorithms.
Based on joint work with Ayush Jain.
Bio
Alon Orlitsky received B.Sc. degrees in Mathematics and Electrical
Engineering from Ben Gurion University, and M.Sc. and Ph.D. degrees in
Electrical Engineering from Stanford University. After a decade with the
Communications Analysis Research Department of Bell Laboratories and a
year as a quantitative analyst at D.E. Shaw and Company, he joined the
University of California San Diego, where he is currently a professor of
Electrical and Computer Engineering and of Computer Science and
Engineering and holds the Qualcomm Chair for Information Theory and its
Applications. His research concerns information theory, statistical
modeling, and machine learning, focusing on fundamental limits and
practical algorithms for extracting knowledge from data. Among other
distinctions, Alon is a recipient of the 2021 Information Theory Society
Shannon Award and a co-recipient of the 2017 ICML Best Paper Honorable
Mention Award, the 2015 NeurIPS Paper Award, and the 2006 Information
Theory Society Paper Award.
Mailing list:
https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk:
http://isl.stanford.edu/talks/talks/2021q1/alon-orlitsky/
From wajc at stanford.edu Mon Mar 8 12:24:32 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 8 Mar 2021 12:24:32 -0800
Subject: [theory-seminar] Theory Lunch 03/11: Mingda Qiao
Message-ID:
Hi all,
This week's theory lunch will take place Thursday at noon (PDT), at our
gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Mingda will tell us about: *Stronger Calibration Lower Bounds via
Sidestepping*
*Abstract:* We consider an online binary prediction setting where a
forecaster observes a sequence of T bits one by one. Before each bit is
revealed, the forecaster predicts the probability that the bit is 1. The
forecaster is called well-calibrated if for each p in [0, 1], among the n_p
bits for which the forecaster predicts probability p, the actual number of
ones, m_p, is indeed equal to p * n_p. The calibration error, defined as
\sum_p |m_p - p n_p|, quantifies the extent to which the forecaster
deviates from being well-calibrated. It has long been known that an
O(T^{2/3}) calibration error is achievable even when the bits are chosen
adversarially, and possibly based on the previous predictions. However,
little is known on the lower bound side, except an Omega(T^{1/2}) bound
that follows from the trivial example of independent coin flips.
In this work, we prove an Omega(T^{0.528}) bound on the calibration error,
which is the first super-T^{1/2} lower bound for this setting to the best
of our knowledge. Our technical contributions include a new lower bound
technique, termed "sidestepping", which circumvents the obstacles that have
previously hindered strong calibration lower bounds. We also propose an
abstraction of the prediction setting, termed the Sign-Preservation game,
which may be of independent interest. This game has a much smaller state
space than the full prediction setting and allows simpler analyses. The
Omega(T^{0.528}) lower bound follows from a general reduction theorem that
translates lower bounds on the game value of Sign-Preservation into lower
bounds on the calibration error.
This is a joint work with Greg Valiant.
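For concreteness, the calibration error defined in the abstract can be computed directly from a transcript of forecasts and outcomes. The following Python sketch just follows the abstract's definition; the function name and interface are illustrative choices, not the authors' code.

```python
from collections import defaultdict

def calibration_error(predictions, outcomes):
    """Calibration error sum_p |m_p - p * n_p| as defined in the abstract.

    predictions: forecast probabilities p_t in [0, 1], one per round
    outcomes:    observed bits b_t in {0, 1}
    For each distinct predicted value p, n_p is the number of rounds on
    which p was predicted and m_p is the number of those bits equal to 1.
    """
    n_p = defaultdict(int)
    m_p = defaultdict(int)
    for p, b in zip(predictions, outcomes):
        n_p[p] += 1
        m_p[p] += b
    return sum(abs(m_p[p] - p * n_p[p]) for p in n_p)
```

For instance, always predicting 0.5 on a sequence with exactly half ones gives error 0 (perfectly calibrated), while predicting 0.5 on the all-ones sequence of length T gives error T/2, matching the trivial Omega(T^{1/2}) intuition that random bits force sqrt(T) deviation.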
Cheers,
David
PS: Pro tip: To join the talk (at 12:30): (1) go to the lecture
hall, (2) grab a seat, and (3) press X to join the zoom lecture.
From baxelrod at stanford.edu Tue Mar 9 00:19:10 2021
From: baxelrod at stanford.edu (Brian Axelrod)
Date: Tue, 9 Mar 2021 08:19:10 +0000
Subject: [theory-seminar] Sign up for admit weekend!
Message-ID:
Hi Folks,
tl;dr: If you're a theory student or have a theory advisor, please commit to being present at admit weekend!
This weekend is admit weekend! We're going to be meeting the admits via
1) https://ohyay.co/s/stanford-cs-phd-admit-weekend-2021 for student meetings and the reception.
2) The theory lunch gather https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory. We will send out an updated password later this week after theory lunch. If you don't get the password by Friday (we'll be using a different list), please email me directly.
The full schedule can be found here: https://admitweekend.stanford.edu/
Please sign up here! https://docs.google.com/spreadsheets/d/1zjN_yE36mk-QlDqG5MG9_RZgWAk8zLyw57GaQlle-8I/edit?usp=sharing. It's good to have multiple people per cell so don't hesitate to add yourself even if there's already someone there! Please try to make sure we have even coverage among time slots and professors.
Thanks,
Brian & Guy
From huypham at stanford.edu Tue Mar 9 10:57:01 2021
From: huypham at stanford.edu (Huy Tuan Pham)
Date: Tue, 9 Mar 2021 10:57:01 -0800
Subject: [theory-seminar] Interesting seminar this Friday
Message-ID:
Hi everyone,
The Faculty Area of Research Seminar in the Math Department this Friday
(03/12) may be of interest to some of you.
*Title:* Random graphs and thresholds
*Speaker: *Jinyoung Park (IAS)
*Abstract: *I will briefly introduce the notion of random graphs and some
of their basic properties, mostly focusing on thresholds for increasing
properties. I will also introduce "Kahn-Kalai expectation threshold
conjecture" and explain the motivation behind it with some examples. If
time permits, we will talk about a recent result of Jeff Kahn, Bhargav
Narayanan, and myself stating that the threshold for the random graph
G(n,p) to contain the square of a Hamilton cycle is 1/sqrt n, resolving a
conjecture of Kühn and Osthus from 2012. The proof idea is motivated by the
recent work of Frankston and the three aforementioned authors on a
conjecture of Talagrand -- "a fractional version" of the Kahn-Kalai
conjecture.
The seminar runs from 11:30AM-12:30PM PST this Friday (03/12). This is the
Zoom link for the event.
Join from PC, Mac, Linux, iOS or Android:
https://stanford.zoom.us/j/94515934688?pwd=UG1rVzg0RVZrNWw4bWtVdFFMSGpXUT09
Password: 986571
Best wishes,
Huy
From moses at cs.stanford.edu Wed Mar 10 15:15:55 2021
From: moses at cs.stanford.edu (Moses Charikar)
Date: Wed, 10 Mar 2021 15:15:55 -0800
Subject: [theory-seminar] Theory Lunch 03/11: Mingda Qiao
In-Reply-To:
References:
Message-ID:
Hi folks,
Just wanted to add that we will have some of our PhD admits joining us at
theory lunch tomorrow, so do make an extra effort to show up on time and
talk to them.
Best,
Moses
On Mon, Mar 8, 2021 at 12:25 PM David Wajc wrote:
> Hi all,
>
> This week's theory lunch will take place Thursday at noon (PDT), at our
> gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Mingda will tell us about: *Stronger Calibration Lower Bounds via
> Sidestepping*
>
> *Abstract:* We consider an online binary prediction setting where a
> forecaster observes a sequence of T bits one by one. Before each bit is
> revealed, the forecaster predicts the probability that the bit is 1. The
> forecaster is called well-calibrated if for each p in [0, 1], among the n_p
> bits for which the forecaster predicts probability p, the actual number of
> ones, m_p, is indeed equal to p * n_p. The calibration error, defined as
> \sum_p |m_p - p n_p|, quantifies the extent to which the forecaster
> deviates from being well-calibrated. It has long been known that an
> O(T^{2/3}) calibration error is achievable even when the bits are chosen
> adversarially, and possibly based on the previous predictions. However,
> little is known on the lower bound side, except an Omega(T^{1/2}) bound
> that follows from the trivial example of independent coin flips.
>
> In this work, we prove an Omega(T^{0.528}) bound on the calibration error,
> which is the first super-T^{1/2} lower bound for this setting to the best
> of our knowledge. Our technical contributions include a new lower bound
> technique, termed "sidestepping", which circumvents the obstacles that have
> previously hindered strong calibration lower bounds. We also propose an
> abstraction of the prediction setting, termed the Sign-Preservation game,
> which may be of independent interest. This game has a much smaller state
> space than the full prediction setting and allows simpler analyses. The
> Omega(T^{0.528}) lower bound follows from a general reduction theorem that
> translates lower bounds on the game value of Sign-Preservation into lower
> bounds on the calibration error.
>
> This is a joint work with Greg Valiant.
>
>
>
> Cheers,
> David
> PS: Pro tip: To join the talk (at 12:30): (1) go to the lecture
> hall, (2) grab a seat, and (3) press X to join the zoom lecture.
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
>
From baxelrod at stanford.edu Thu Mar 11 08:13:26 2021
From: baxelrod at stanford.edu (Brian Axelrod)
Date: Thu, 11 Mar 2021 16:13:26 +0000
Subject: [theory-seminar] Sign up for admit weekend!
In-Reply-To:
References:
Message-ID:
Hi Folks!
Please sign up to hang out with the admits! To sign up, add your name to the cells in the rows for the time slots during which you'll be online, under the columns corresponding to your advisor(s). We want to have a dense matrix!
https://docs.google.com/spreadsheets/d/1zjN_yE36mk-QlDqG5MG9_RZgWAk8zLyw57GaQlle-8I/edit?usp=sharing
We need more people Friday evening especially!
Regards,
Brian
________________________________
From: Brian Axelrod
Sent: Tuesday, March 9, 2021 12:19 AM
To: theory-students at lists.stanford.edu ; theory-seminar at lists.stanford.edu ; thseminar at lists.stanford.edu
Subject: Sign up for admit weekend!
Hi Folks,
tl;dr: If you're a theory student or have a theory advisor, please commit to being present at admit weekend!
This weekend is admit weekend! We're going to be meeting the admits via
1) https://ohyay.co/s/stanford-cs-phd-admit-weekend-2021 for student meetings and the reception.
2) The theory lunch gather https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory. We will send out an updated password later this week after theory lunch. If you don't get the password by Friday (we'll be using a different list), please email me directly.
The full schedule can be found here: https://admitweekend.stanford.edu/
Please sign up here! https://docs.google.com/spreadsheets/d/1zjN_yE36mk-QlDqG5MG9_RZgWAk8zLyw57GaQlle-8I/edit?usp=sharing. It's good to have multiple people per cell so don't hesitate to add yourself even if there's already someone there! Please try to make sure we have even coverage among time slots and professors.
Thanks,
Brian&Guy
From wajc at stanford.edu Thu Mar 11 10:00:06 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 11 Mar 2021 10:00:06 -0800
Subject: [theory-seminar] Theory Lunch 03/11: Mingda Qiao
In-Reply-To:
References:
Message-ID:
Reminder: this is happening in two hours. Come hang out with PhD admits and
then hear a talk by Mingda.
See you there,
David
On Wed, 10 Mar 2021 at 15:16, Moses Charikar wrote:
> Hi folks,
>
> Just wanted to add that we will have some of our PhD admits joining us at
> theory lunch tomorrow, so do make an extra effort to show up on time and
> talk to them.
>
> Best,
> Moses
>
>
>
> On Mon, Mar 8, 2021 at 12:25 PM David Wajc wrote:
>
>> Hi all,
>>
>> This week's theory lunch will take place Thursday at noon (PDT), at our
>> gather space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> SongComplexity).
>> Mingda will tell us about: *Stronger Calibration Lower Bounds via
>> Sidestepping*
>>
>> *Abstract:* We consider an online binary prediction setting where a
>> forecaster observes a sequence of T bits one by one. Before each bit is
>> revealed, the forecaster predicts the probability that the bit is 1. The
>> forecaster is called well-calibrated if for each p in [0, 1], among the n_p
>> bits for which the forecaster predicts probability p, the actual number of
>> ones, m_p, is indeed equal to p * n_p. The calibration error, defined as
>> \sum_p |m_p - p n_p|, quantifies the extent to which the forecaster
>> deviates from being well-calibrated. It has long been known that an
>> O(T^{2/3}) calibration error is achievable even when the bits are chosen
>> adversarially, and possibly based on the previous predictions. However,
>> little is known on the lower bound side, except an Omega(T^{1/2}) bound
>> that follows from the trivial example of independent coin flips.
>>
>> In this work, we prove an Omega(T^{0.528}) bound on the calibration
>> error, which is the first super-T^{1/2} lower bound for this setting to the
>> best of our knowledge. Our technical contributions include a new lower
>> bound technique, termed "sidestepping", which circumvents the obstacles
>> that have previously hindered strong calibration lower bounds. We also
>> propose an abstraction of the prediction setting, termed the
>> Sign-Preservation game, which may be of independent interest. This game has
>> a much smaller state space than the full prediction setting and allows
>> simpler analyses. The Omega(T^{0.528}) lower bound follows from a general
>> reduction theorem that translates lower bounds on the game value of
>> Sign-Preservation into lower bounds on the calibration error.
>>
>> This is a joint work with Greg Valiant.
>>
>>
>>
>> Cheers,
>> David
>> PS: Pro tip: To join the talk (at 12:30): (1) go to the lecture
>> hall, (2) grab a seat, and (3) press X to join the zoom lecture.
>>
> _______________________________________________
>> theory-seminar mailing list
>> theory-seminar at lists.stanford.edu
>> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
>>
>
From wajc at stanford.edu Thu Mar 11 10:00:06 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 11 Mar 2021 10:00:06 -0800
Subject: [theory-seminar] Theory Lunch 03/11: Mingda Qiao
In-Reply-To:
References:
Message-ID:
Reminder: this is happening in two hours. Come hang out with PhD admits and
then hear a talk by Mingda.
See you there,
David
On Wed, 10 Mar 2021 at 15:16, Moses Charikar wrote:
> Hi folks,
>
> Just wanted to add that we will have some of our PhD admits joining us at
> theory lunch tomorrow, so do make an extra effort to show up on time and
> talk to them.
>
> Best,
> Moses
>
>
>
> On Mon, Mar 8, 2021 at 12:25 PM David Wajc wrote:
>
>> Hi all,
>>
>> This week's theory lunch will take place Thursday at noon (PDT), at our
>> gather space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> SongComplexity).
>> Mingda will tell us about*:* *Stronger Calibration Lower Bounds via
>> Sidestepping*
>>
>> *Abstract: *We consider an online binary prediction setting where a
>> forecaster observes a sequence of T bits one by one. Before each bit is
>> revealed, the forecaster predicts the probability that the bit is 1. The
>> forecaster is called well-calibrated if for each p in [0, 1], among the n_p
>> bits for which the forecaster predicts probability p, the actual number of
>> ones, m_p, is indeed equal to p * n_p. The calibration error, defined as
>> \sum_p |m_p - p n_p|, quantifies the extent to which the forecaster
>> deviates from being well-calibrated. It has long been known that an
>> O(T^{2/3}) calibration error is achievable even when the bits are chosen
>> adversarially, and possibly based on the previous predictions. However,
>> little is known on the lower bound side, except an Omega(T^{1/2}) bound
>> that follows from the trivial example of independent coin flips.
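The calibration error above has a direct one-pass implementation; as a sanity check of the definition, here is a minimal sketch (the forecaster and bit sequence below are hypothetical illustrations, not from the talk):

```python
from collections import defaultdict

def calibration_error(predictions, bits):
    """Compute sum_p |m_p - p * n_p| over the distinct predicted values p."""
    n = defaultdict(int)  # n_p: number of rounds with prediction p
    m = defaultdict(int)  # m_p: number of ones among those rounds
    for p, b in zip(predictions, bits):
        n[p] += 1
        m[p] += b
    return sum(abs(m[p] - p * n[p]) for p in n)

# A forecaster that always predicts 0.5 against an all-ones sequence of
# length T = 10 pays |m_p - p * n_p| = |10 - 5| = 5:
print(calibration_error([0.5] * 10, [1] * 10))  # 5.0
```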
>>
>> In this work, we prove an Omega(T^{0.528}) bound on the calibration
>> error, which is the first super-T^{1/2} lower bound for this setting to the
>> best of our knowledge. Our technical contributions include a new lower
>> bound technique, termed "sidestepping", which circumvents the obstacles
>> that have previously hindered strong calibration lower bounds. We also
>> propose an abstraction of the prediction setting, termed the
>> Sign-Preservation game, which may be of independent interest. This game has
>> a much smaller state space than the full prediction setting and allows
>> simpler analyses. The Omega(T^{0.528}) lower bound follows from a general
>> reduction theorem that translates lower bounds on the game value of
>> Sign-Preservation into lower bounds on the calibration error.
>>
>> This is a joint work with Greg Valiant.
>>
>>
>>
>> *Cheers,*
>> *David*
>>
>> *PS*
>> *Pro tip:* To join the talk (at 12:30):
>> (1) go to the lecture hall,
>> (2) grab a seat, and
>> (3) *press X to join the zoom lecture*
>>
> _______________________________________________
>> theory-seminar mailing list
>> theory-seminar at lists.stanford.edu
>> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wajc at stanford.edu Thu Mar 11 12:28:13 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 11 Mar 2021 12:28:13 -0800
Subject: [theory-seminar] Theory Lunch 03/11: Mingda Qiao
In-Reply-To:
References:
Message-ID:
Reminder: talk starts in a few minutes.
Cheers,
David
On Thu, 11 Mar 2021 at 10:00, David Wajc wrote:
> Reminder: this is happening in two hours. Come hang out with PhD admits
> and then hear a talk by Mingda.
>
> See you there,
> David
>
> On Wed, 10 Mar 2021 at 15:16, Moses Charikar
> wrote:
>
>> Hi folks,
>>
>> Just wanted to add that we will have some of our PhD admits joining us at
>> theory lunch tomorrow, so do make an extra effort to show up on time and
>> talk to them.
>>
>> Best,
>> Moses
>>
>>
>>
>> On Mon, Mar 8, 2021 at 12:25 PM David Wajc wrote:
>>
>>> Hi all,
>>>
>>> This week's theory lunch will take place Thursday at noon (PDT), at our
>>> gather space:
>>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>>> SongComplexity).
>>> Mingda will tell us about: *Stronger Calibration Lower Bounds via
>>> Sidestepping*
>>>
>>> *Abstract:* We consider an online binary prediction setting where a
>>> forecaster observes a sequence of T bits one by one. Before each bit is
>>> revealed, the forecaster predicts the probability that the bit is 1. The
>>> forecaster is called well-calibrated if for each p in [0, 1], among the n_p
>>> bits for which the forecaster predicts probability p, the actual number of
>>> ones, m_p, is indeed equal to p * n_p. The calibration error, defined as
>>> \sum_p |m_p - p n_p|, quantifies the extent to which the forecaster
>>> deviates from being well-calibrated. It has long been known that an
>>> O(T^{2/3}) calibration error is achievable even when the bits are chosen
>>> adversarially, and possibly based on the previous predictions. However,
>>> little is known on the lower bound side, except an Omega(T^{1/2}) bound
>>> that follows from the trivial example of independent coin flips.
>>>
>>> In this work, we prove an Omega(T^{0.528}) bound on the calibration
>>> error, which is the first super-T^{1/2} lower bound for this setting to the
>>> best of our knowledge. Our technical contributions include a new lower
>>> bound technique, termed "sidestepping", which circumvents the obstacles
>>> that have previously hindered strong calibration lower bounds. We also
>>> propose an abstraction of the prediction setting, termed the
>>> Sign-Preservation game, which may be of independent interest. This game has
>>> a much smaller state space than the full prediction setting and allows
>>> simpler analyses. The Omega(T^{0.528}) lower bound follows from a general
>>> reduction theorem that translates lower bounds on the game value of
>>> Sign-Preservation into lower bounds on the calibration error.
>>>
>>> This is a joint work with Greg Valiant.
>>>
>>>
>>>
>>> *Cheers,*
>>> *David*
>>>
>>> *PS*
>>> *Pro tip:* To join the talk (at 12:30):
>>> (1) go to the lecture hall,
>>> (2) grab a seat, and
>>> (3) *press X to join the zoom lecture*
>>>
>> _______________________________________________
>>> theory-seminar mailing list
>>> theory-seminar at lists.stanford.edu
>>> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
>>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tavorb at stanford.edu Fri Mar 12 16:48:07 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Fri, 12 Mar 2021 16:48:07 -0800
Subject: [theory-seminar] "Approximating cross-validation: guarantees for
 model assessment and selection" - Ashia Wilson (Thu, 18-Mar @ 4:30pm)
Message-ID:
Approximating cross-validation: guarantees for model assessment and
selection
Ashia Wilson - Professor, MIT
Thu, 18-Mar / 4:30pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
*To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
receive the Zoom meeting details by email.*
Abstract
Cross-validation (CV) is the de facto standard for selecting accurate
predictive models and assessing model performance. However, CV suffers from
a need to repeatedly refit a learning procedure on a large number of
training datasets. To reduce the computational burden, a number of works
have introduced approximate CV procedures that simultaneously reduce
runtime and provide model assessments comparable to CV when the prediction
problem is sufficiently smooth. An open question however is whether these
procedures are suitable for model selection. In this talk, I'll describe
(i) broad conditions under which the model selection performance of
approximate CV nearly matches that of CV, (ii) examples of prediction
problems where approximate CV selection fails to mimic CV selection, and
(iii) an extension of these results and the approximate CV framework more
broadly to non-smooth prediction problems like L1-regularized empirical
risk minimization.
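To make the computational burden concrete, consider the simplest possible learner, the mean predictor under squared loss: naive leave-one-out CV refits n times, while an algebraic shortcut recovers the identical assessments from a single fit. (This toy example is mine and is illustrative only; approximate CV extends this kind of idea to general smooth learners.)

```python
def loo_naive(xs):
    """Leave-one-out squared errors for the mean predictor: n separate refits."""
    n = len(xs)
    errs = []
    for i in range(n):
        rest = xs[:i] + xs[i + 1:]
        mu = sum(rest) / (n - 1)          # refit on the n-1 remaining points
        errs.append((xs[i] - mu) ** 2)
    return errs

def loo_fast(xs):
    """Same errors from a single pass: mu_{-i} = (S - x_i) / (n - 1)."""
    n, S = len(xs), sum(xs)
    return [(x - (S - x) / (n - 1)) ** 2 for x in xs]

xs = [1.0, 2.0, 4.0, 8.0]
assert loo_naive(xs) == loo_fast(xs)      # identical assessments, one fit
```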
Bio
Ashia is an Assistant Professor in EECS at MIT. Her research focuses on the
methodological foundations and theory of various topics in machine
learning. She is interested in developing frameworks for algorithmic
assessment and providing rigorous guarantees for algorithmic performance.
She received her BA from Harvard University with a concentration in applied
mathematics and a minor in philosophy, and a PhD from UC Berkeley in
statistics. She most recently held a postdoctoral position in the machine
learning group at Microsoft Research, New England.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list isl-colloq at lists.stanford.edu.*
------------------------------
Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk: http://isl.stanford.edu/talks/talks/2021q1/ashia-wilson/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From gblanc at stanford.edu Fri Mar 12 21:49:37 2021
From: gblanc at stanford.edu (Guy Blanc)
Date: Fri, 12 Mar 2021 21:49:37 -0800
Subject: [theory-seminar] CS admit weekend: Theory game night
Message-ID:
Hi Stanford theory students,
This *Saturday at 6pm* PST, we'll have the main Theory social event of
admit weekend: A board game night! It will be in the theory lunch
gathertown, though with a different password. Current PhD students, please
come and show the admits a good time :)
Here is the link,
and the password is "Welcome2StanfordTheory"
When you join, make your way to the new board game room. The door to it is
located in the top right of our main gather space (see the "Game room"
label in the below screenshot). Inside the room, there are various tables
you can walk up to and interact with by pressing "x" to play the
corresponding game.
Cheers,
Guy and Brian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 790780 bytes
Desc: not available
URL:
From wajc at stanford.edu Mon Mar 15 09:22:23 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 15 Mar 2021 09:22:23 -0700
Subject: [theory-seminar] Theory Lunch 03/18: Margalit Glasgow
Message-ID:
Hi all,
This week's theory lunch will take place Thursday at noon (PDT), at our
gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Margalit will tell us about: *Invertibility of the 3-core of Erdos-Renyi
Graphs*
*Abstract**: *
In this talk, I'll show that in ER graphs with average degree d = omega(1),
with probability 1 - o(1), the adjacency matrix of the 3-core of the graph
has full rank. The key idea is a tight characterization of the
combinatorial structures that cause linear dependencies in sparse random
matrices. This proves a weakened version of a conjecture of Vu (2014),
which speculates that with high probability, the 3-core of an ER random
graph is invertible for any average degree d > 1.
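The statement is easy to probe empirically. Below is a small pure-Python sketch (my own illustration, not from the talk; the graph size and degree are hypothetical, and rank is computed exactly over the rationals):

```python
import random
from fractions import Fraction

def erdos_renyi(n, p, rng):
    """Random graph G(n, p) as a list of adjacency sets."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def three_core(adj):
    """Iteratively peel vertices of degree < 3; return surviving vertices."""
    alive = set(range(len(adj)))
    changed = True
    while changed:
        changed = False
        for v in list(alive):
            if len(adj[v] & alive) < 3:
                alive.discard(v)
                changed = True
    return sorted(alive)

def rank(M):
    """Matrix rank over the rationals via exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c]:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

rng = random.Random(0)
n, d = 80, 10                       # hypothetical; the theorem needs d = omega(1)
adj = erdos_renyi(n, d / n, rng)
core = three_core(adj)
A = [[int(u in adj[v]) for u in core] for v in core]
print(len(core), rank(A))           # full rank means rank == core size
```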
Cheers,
David
PS
*Pro tip:* To join the talk (at 12:30):
(1) go to the lecture hall,
(2) grab a seat, and
(3)* press X to join the zoom lecture*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From huypham at stanford.edu Mon Mar 15 14:45:07 2021
From: huypham at stanford.edu (Huy Tuan Pham)
Date: Mon, 15 Mar 2021 14:45:07 -0700
Subject: [theory-seminar] Interesting seminar this Friday
Message-ID:
Hi everyone,
The Faculty Area of Research Seminar in the Math Department this Friday
(03/19) may be of interest to some of you.
=====================
*Title:* *Combinatorial optimization, submodular functions, and Nash social
welfare*
*Speaker: Jan Vondrak*
*Abstract:* I will talk about the problem of allocating indivisible goods
to agents in order to optimize a certain welfare objective. Various
objectives can be considered, the most natural being the summation of
"valuation functions" of the participating agents. The "Nash social
welfare" is an alternative objective which goes back to John Nash's work in
the 1950s; it is the *geometric average* rather than a sum of valuation
functions, which has several desirable properties such as balancing total
welfare with fairness. On the technical side, it presents a significantly
different problem, with connections to areas such as matching theory,
computation of the permanent, and stable polynomials.
Our main new result is that one can find an allocation within a constant
factor of the optimal Nash social welfare, whenever the valuation functions
are submodular.
This is joint work with Wenzheng Li.
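As a toy illustration (mine, not from the talk): with additive valuations, tiny instances can be brute-forced to see how the geometric-mean objective balances total welfare with fairness.

```python
from itertools import product
from math import prod

def nash_welfare(values, alloc):
    """Geometric mean of agent utilities under an allocation.
    values[a][g] = value of good g to agent a; alloc[g] = receiving agent."""
    k = len(values)
    util = [sum(values[a][g] for g in range(len(alloc)) if alloc[g] == a)
            for a in range(k)]
    return prod(util) ** (1 / k)

def best_allocation(values, num_goods):
    """Brute-force the NSW-maximizing allocation of indivisible goods."""
    k = len(values)
    return max(product(range(k), repeat=num_goods),
               key=lambda alloc: nash_welfare(values, alloc))

# Two agents, three goods: agent 0 loves good 0, agent 1 loves goods 1 and 2.
values = [[10, 1, 1],
          [1, 5, 5]]
alloc = best_allocation(values, 3)
print(alloc, nash_welfare(values, alloc))  # (0, 1, 1) 10.0
```

Note that the NSW optimum gives each agent the goods they value most (utilities 10 and 10), whereas lopsided allocations that zero out an agent score a Nash welfare of 0, however large the total.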
=====================
The seminar runs from 11:30AM-12:30PM PST this Friday (03/19). This is the
Zoom link for the event.
Join from PC, Mac, Linux, iOS or Android:
https://stanford.zoom.us/j/94515934688?pwd=UG1rVzg0RVZrNWw4bWtVdFFMSGpXUT09
Password: 986571
Best wishes,
Huy
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Tue Mar 16 11:50:59 2021
From: reingold at stanford.edu (Omer Reingold)
Date: Tue, 16 Mar 2021 11:50:59 -0700
Subject: [theory-seminar] HALG 2021
Message-ID:
A good forum to present your research.
Omer
-------------------------------------------------------------------
6th Highlights of Algorithms conference (HALG 2021)
London, May 31-June 3, 2021
https://highlightsofalgorithms.org/
The Highlights of Algorithms conference is a forum for presenting the
highlights of recent developments in algorithms and for discussing
potential further advances in this area. The conference will
provide a broad picture of the latest research in algorithms through a
series of invited talks, as well as the possibility for all
researchers and students to present their recent results through a
series of short talks and poster presentations. Attending the
Highlights of Algorithms conference will also be an opportunity for
networking and meeting leading researchers in algorithms.
-------------------------------------------------------------------
Call For Submissions of Contributed Presentations
The HALG 2021 conference seeks submissions for contributed
presentations. Each presentation is expected to consist of a short
talk followed by a question and answer session. There will be no
conference proceedings, hence presenting work already published or
accepted at a different venue or journal (or to be submitted there) is
welcome.
If you would like to present your results at HALG 2021, please submit
the details (the abstract, the paper and the speaker of the talk) via
EasyChair:
https://easychair.org/conferences/?conf=halg21
The abstract should include (when relevant) information where the
results have been published/accepted (e.g., conference), and where
they are publicly available (e.g., ArXiv). All submissions will be
reviewed by the program committee, giving priority to work accepted or
published in 2020 or later.
Submissions deadline: April 15th, 2021, 23:00 GMT.
Acceptance/rejection notifications for contributed presentations:
April 30th, 2021.
_______________________________________________
From reingold at stanford.edu Wed Mar 17 07:11:37 2021
From: reingold at stanford.edu (Omer Reingold)
Date: Wed, 17 Mar 2021 07:11:37 -0700
Subject: [theory-seminar] The Abel Prize was awarded earlier today to
 László Lovász and Avi Wigderson
Message-ID:
This St. Patrick's Day is even happier than usual
https://gilkalai.wordpress.com/2021/03/17/cheerful-news-in-difficult-times-the-abel-prize-is-awarded-to-laszlo-lovasz-and-avi-wigderson/
This is a wonderful recognition of two brilliant and influential
researchers but also to the TOC community.
Cheers,
Omer
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Wed Mar 17 09:11:55 2021
From: reingold at stanford.edu (Omer Reingold)
Date: Wed, 17 Mar 2021 09:11:55 -0700
Subject: [theory-seminar] The Abel Prize was awarded earlier today to
 László Lovász and Avi Wigderson
In-Reply-To:
References:
Message-ID:
Apologies for the spam, but wanted to point out the prize announcement
which is an enthusiastic praise to the impact of CS
theory within modern mathematics:
https://www.youtube.com/watch?v=0_NK_OkpmUY and to these two giants.
Omer
On Wed, Mar 17, 2021 at 7:11 AM Omer Reingold wrote:
>
> This St. Patrick's Day is even happier than usual
> https://gilkalai.wordpress.com/2021/03/17/cheerful-news-in-difficult-times-the-abel-prize-is-awarded-to-laszlo-lovasz-and-avi-wigderson/
>
> This is a wonderful recognition of two brilliant and influential researchers but also to the TOC community.
>
> Cheers,
> Omer
From wajc at stanford.edu Thu Mar 18 09:00:17 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 18 Mar 2021 09:00:17 -0700
Subject: [theory-seminar] Theory Lunch 03/18: Margalit Glasgow
In-Reply-To:
References:
Message-ID:
Reminder: (the socializing part of) theory lunch will start in three hours.
Cheers,
David
On Mon, 15 Mar 2021 at 09:22, David Wajc wrote:
> Hi all,
>
> This week's theory lunch will take place Thursday at noon (PDT), at our
> gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Margalit will tell us about: *Invertibility of the 3-core of Erdos-Renyi
> Graphs*
>
> *Abstract**: *
> In this talk, I'll show that in ER graphs with average degree d =
> omega(1), with probability 1 - o(1), the adjacency matrix of the 3-core of
> the graph has full rank. The key idea is a tight characterization of the
> combinatorial structures that cause linear dependencies in sparse random
> matrices. This proves a weakened version of a conjecture of Vu (2014),
> which speculates that with high probability, the 3-core of an ER random
> graph is invertible for any average degree d > 1.
>
> Cheers,
> David
>
> PS
> *Pro tip:* To join the talk (at 12:30):
> (1) go to the lecture hall,
> (2) grab a seat, and
> (3)* press X to join the zoom lecture*
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tavorb at stanford.edu Thu Mar 18 10:36:43 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 18 Mar 2021 10:36:43 -0700
Subject: [theory-seminar] "Approximating cross-validation: guarantees for
 model assessment and selection" - Ashia Wilson (Thu, 18-Mar @ 4:30pm)
In-Reply-To:
References:
Message-ID:
Reminder: this talk is today at 4:30pm.
On Fri, Mar 12, 2021 at 4:48 PM Tavor Baharav wrote:
> Approximating cross-validation: guarantees for model assessment and
> selection
> Ashia Wilson - Professor, MIT
>
> Thu, 18-Mar / 4:30pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> *To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
> receive the Zoom meeting details by email.*
> Abstract
>
> Cross-validation (CV) is the de facto standard for selecting accurate
> predictive models and assessing model performance. However, CV suffers from
> a need to repeatedly refit a learning procedure on a large number of
> training datasets. To reduce the computational burden, a number of works
> have introduced approximate CV procedures that simultaneously reduce
> runtime and provide model assessments comparable to CV when the prediction
> problem is sufficiently smooth. An open question however is whether these
> procedures are suitable for model selection. In this talk, I'll describe
> (i) broad conditions under which the model selection performance of
> approximate CV nearly matches that of CV, (ii) examples of prediction
> problems where approximate CV selection fails to mimic CV selection, and
> (iii) an extension of these results and the approximate CV framework more
> broadly to non-smooth prediction problems like L1-regularized empirical
> risk minimization.
> Bio
>
> Ashia is an Assistant Professor in EECS at MIT. Her research focuses on
> the methodological foundations and theory of various topics in machine
> learning. She is interested in developing frameworks for algorithmic
> assessment and providing rigorous guarantees for algorithmic performance.
> She received her BA from Harvard University with a concentration in applied
> mathematics and a minor in philosophy, and a PhD from UC Berkeley in
> statistics. She most recently held a postdoctoral position in the machine
> learning group at Microsoft Research, New England.
>
> *This talk is hosted by the ISL Colloquium. To receive talk announcements,
> subscribe to the mailing list isl-colloq at lists.stanford.edu.*
> ------------------------------
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
> This talk: http://isl.stanford.edu/talks/talks/2021q1/ashia-wilson/
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wajc at stanford.edu Thu Mar 18 12:26:11 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 18 Mar 2021 12:26:11 -0700
Subject: [theory-seminar] Theory Lunch 03/18: Margalit Glasgow
In-Reply-To:
References:
Message-ID:
Reminder: the talk part of theory lunch will start in a few minutes.
Cheers,
David
On Thu, 18 Mar 2021 at 09:00, David Wajc wrote:
> Reminder: (the socializing part of) theory lunch will start in three hours.
>
> Cheers,
> David
>
> On Mon, 15 Mar 2021 at 09:22, David Wajc wrote:
>
>> Hi all,
>>
>> This week's theory lunch will take place Thursday at noon (PDT), at our
>> gather space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> SongComplexity).
>> Margalit will tell us about: *Invertibility of the 3-core of Erdos-Renyi
>> Graphs*
>>
>> *Abstract**: *
>> In this talk, I'll show that in ER graphs with average degree d =
>> omega(1), with probability 1 - o(1), the adjacency matrix of the 3-core of
>> the graph has full rank. The key idea is a tight characterization of the
>> combinatorial structures that cause linear dependencies in sparse random
>> matrices. This proves a weakened version of a conjecture of Vu (2014),
>> which speculates that with high probability, the 3-core of an ER random
>> graph is invertible for any average degree d > 1.
>>
>> Cheers,
>> David
>>
>> PS
>> *Pro tip:* To join the talk (at 12:30):
>> (1) go to the lecture hall,
>> (2) grab a seat, and
>> (3)* press X to join the zoom lecture*
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From marykw at stanford.edu Fri Mar 19 11:09:37 2021
From: marykw at stanford.edu (Mary Wootters)
Date: Fri, 19 Mar 2021 11:09:37 -0700
Subject: [theory-seminar] Course Announcement: CS351,
Open Problems in Coding Theory
Message-ID:
Hi all,
I'm teaching a new class in the spring, CS351, named "Open Problems in
Coding Theory", that might be of interest to folks on this list. The goals
of the course are to (a) explore the research frontier in coding theory,
(b) get practice with research skills like reading papers,
identifying/formulating open problems, and breaking down research problems
into manageable steps and toy problems; and (c) have some fun actually
thinking about open problems!
Here's a course blurb:
*Coding theory is the study of how to encode data to protect it from noise.
Coding theory touches CS, EE, math, and many other areas, and there are
exciting open problems at all of these frontiers. In this class, we will
explore these open problems by reading recent research papers and thinking
about some open problems together. Required work will involve reading and
presenting research papers, as well as working in small groups to formulate
and work on open problems and presenting progress. (Solving an open problem
is not required!) Topics will depend on student interest and may include
locality, list decoding, index coding, interactive communication, and group
testing.*
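For readers new to the area, the one-line definition above can be made concrete with the classic Hamming(7,4) code, which encodes 4 data bits with 3 parity bits and corrects any single flipped bit (a standalone sketch of mine, not course material):

```python
# Hamming(7,4): parity bits sit at positions 1, 2, 4 (1-indexed); the
# 3-bit syndrome spells out the position of a single error, 0 if clean.

def encode(d):
    """Encode 4 data bits d = [d1, d2, d3, d4] into 7 codeword bits."""
    p1 = d[0] ^ d[1] ^ d[3]     # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]     # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]     # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3  # 1-indexed error position, 0 if no error
    if pos:
        c = c.copy()
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
code = encode(data)
code[3] ^= 1                    # the channel flips one bit
assert decode(code) == data     # the syndrome locates and repairs it
```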
I hope that the course will be accessible to anyone who knows a bit about
coding theory -- e.g., if you've taken CS250/EE387 or a similar course --
even if you haven't done research before. However, I hope that the course
will also be fun and interesting if you are a seasoned researcher looking
to learn more about coding theory, or to work on some coding theory
problems with a group.
The course will meet once a week on Fridays in a marathon session from
1-3:50pm. We will break up the time with a variety of activities,
including student presentations, brainstorming in small groups, and invited
speakers.
Please let me know if you have any questions about the class!
Best,
Mary
--
Mary Wootters (she/her)
Assistant Professor of Computer Science and Electrical Engineering
Stanford University
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ilo at stanford.edu Mon Mar 22 13:50:09 2021
From: ilo at stanford.edu (Irene Yuan Lo)
Date: Mon, 22 Mar 2021 20:50:09 +0000
Subject: [theory-seminar] The 1st ACM Conference on Equity and Access in
Algorithms, Mechanisms, and Optimization (EAAMO '21) Call for Participation
Message-ID:
Hi everyone,
We are extremely excited to announce the inaugural ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO '21), which will take place virtually on 5-9 October 2021!
The conference stems from the Mechanism Design for Social Good (MD4SG) initiative, and aims to highlight work where techniques from algorithms, optimization, and mechanism design, along with insights from the social sciences and humanistic studies, can help improve equity and access to opportunity for historically disadvantaged and underserved communities. EAAMO '21 will provide an international forum for presenting research papers, problem pitches, survey and position papers, new datasets, and software demonstrations towards the goal of bridging research and practice. Read more about us on our website.
We would greatly appreciate it if you could forward the information to other colleagues who may be interested in submitting proposals for participation, especially to those who are working on the use of quantitative methods to improve the wellbeing of marginalized communities.
EAAMO '21 is organized by the Mechanism Design for Social Good (MD4SG) initiative, and builds on the MD4SG technical workshop series and tutorials at conferences including ACM EC, ACM COMPASS, ACM FAccT and WINE. The conference will feature keynote talks, panels, and contributed presentations across numerous fields. In line with the MD4SG core values of bridging research and practice, the conference will bring together researchers, policy-makers, and practitioners in various government and non-government organizations, community organizations, and industry to build multi-disciplinary pipelines. EAAMO '21 is proudly supported by ACM SIGecom and SIGAI.
EAAMO '21 is soliciting submissions of research papers, position and policy papers, as well as special problem- and practice-driven submissions, to be presented at the conference. Submissions can fall into one of two tracks, the research track or the policy & practice track, and can be archival or non-archival. Note that archival submissions will be published in the conference proceedings and must follow ACM guidelines. Non-archival submissions will not be published in the proceedings, and can be already-published work or work that will be published in the future in a different conference or journal. The deadline for submissions is June 3, 2021, 5pm ET. For more information on submission guidelines and topics, please visit our website.
Important Information:
* Paper Submission Deadline: 3 June 2021 at 5 PM ET / 9 PM GMT
* Paper Submission Page: EasyChair
* Financial Assistance Application Deadline: 3 June 2021 at 5 PM ET / 9 PM GMT
* Notification: 15 July 2021
* Event Date: 5-9 October 2021
Organizing Committee:
Program Co-Chairs
* Rediet Abebe, University of California Berkeley & Harvard Society of Fellows
* Irene Lo, Stanford University
* Ana-Andreea Stoica, Columbia University
Executive Committee:
* Rediet Abebe, University of California Berkeley & Harvard Society of Fellows
* Kira Goldner, Columbia University
* Maximilian Kasy, University of Oxford
* Jon Kleinberg, Cornell University
* Illenin Kondo, Federal Reserve Bank of Minneapolis
* Sera Linardi, University of Pittsburgh
* Irene Lo, Stanford University
* Ana-Andreea Stoica, Columbia University
General Chair:
* Francisco Marmolejo-Cossio, University of Oxford
For any questions, comments, or inquiries, email us at pc at eaamo.org.
--
Irene Lo
Assistant Professor in Management Science & Engineering
Stanford University
ilo at stanford.edu | 909-859-4183
Website: https://sites.google.com/view/irene-lo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wajc at stanford.edu Mon Mar 22 16:28:43 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 22 Mar 2021 16:28:43 -0700
Subject: [theory-seminar] Spring break + call for speakers for next quarter
Message-ID:
Hi all,
This week we won't be holding our regular theory lunch, due to spring break.
This is a good time to solicit speakers for next quarter! A few slots are
already booked, but we have free slots starting April 15. Get 'em while
supplies last!
Cheers,
David
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Thu Mar 25 00:58:19 2021
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 25 Mar 2021 00:58:19 -0700
Subject: [theory-seminar] STATS 318: Modern Markov Chains
Message-ID:
Hi folks,
Persi Diaconis is teaching a course on Modern Markov Chains in the Spring
that some of you will find interesting. See the attached flyer for info.
Cheers,
Moses
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Spring2021_STATS318.pdf
Type: application/pdf
Size: 92244 bytes
Desc: not available
URL:
From yjhan at stanford.edu Fri Mar 26 15:26:16 2021
From: yjhan at stanford.edu (Yanjun Han)
Date: Fri, 26 Mar 2021 22:26:16 +0000
Subject: [theory-seminar] Course announcement: EE378C (Information-theoretic
Lower Bounds in Data Science)
Message-ID:
EE378C: Information-theoretic Lower Bounds in Data Science
"Grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference," Reinhold Niebuhr (1892-1971)
We are teaching a new class this spring, on the science and the art of the impossible. In this course, we will provide a diverse but unified set of ideas and tools to establish impossibility results (science), and use a rich set of interdisciplinary examples to show how to choose from and apply these ideas (art). Establishing impossibility results is a common task in various fields of data science; for example, what is the smallest error one could achieve in a classification problem? How many iterations do we need to approach the optimal solution in optimization? How much computation/communication/memory do we need to learn the distribution of the data? What is the optimal data querying/collection scheme in online and reinforcement learning? What is the best representation/coding of a message in a given communication channel?
This course aims to explore the use of information-theoretic lower bounds in data science. From the theory side, we will provide an extensive set of ideas and tools for establishing such bounds, providing a unified exposition of seemingly disparate ideas from different fields and problems. From the applied side, we will illustrate the efficacy of these ideas in a wide range of problems - spanning machine learning, statistics, information theory, theoretical computer science, optimization, online learning and bandits, operations research, and more - and provide guidelines for choosing and using the appropriate set of tools for a given problem.
Tools and examples, both in breadth and depth, are the core elements of this course, and they will be considered in a multitude of data scientific contexts.
Course website: https://web.stanford.edu/class/ee378c/
Content:
Two big ideas for establishing lower bounds, namely reduction and testing, plus several special topics.
Reduction: statistical decision theory, deficiency, Le Cam's distance, asymptotic equivalence, limit of experiments, local asymptotic normality, Hajek-Le Cam classical asymptotics, parametric submodel, statistical-computational tradeoff
Testing: f-divergence, joint range, Le Cam's two-point method, simple against composite hypotheses, Ingster-Suslina method, fuzzy hypothesis testing, orthogonal polynomials, moment matching, testing multiple hypotheses, Fano, Assouad, Global Fano, packing and covering bounds
Special topics: communication/privacy constrained estimation/testing, adaptation lower bounds, sequential experimental design, min-max and max-min formulation, compression-based arguments, geometric arguments, strong converses, multi-terminal information theory
Examples/applications: communication complexity, nonparametric density estimation and regression, Poissonization, asymptotic efficiency, planted clique, sparse PCA, bandits and online learning, (generalized) uniformity and identity testing, functional estimation, Gaussian mixture model, method of moments, theory of aggregation, optimality of VC dimension, oracle complexity of (stochastic) optimization, isotonic and convex regression, log-concave density estimation, sparse linear model, dynamic pricing, learning with limited rounds of adaptivity, optimality of Johnson-Lindenstrauss, universal source coding, density estimation under TV/Hellinger/KL divergence
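As a one-formula taste of the "Testing" toolbox above (the classical statement of Le Cam's two-point method, recalled here for flavor rather than quoted from the course notes): if two parameters satisfy $d(\theta_0, \theta_1) \ge 2\delta$, then for any estimator $\hat{\theta}$ built from data $X$,
$$\inf_{\hat{\theta}} \max_{i \in \{0,1\}} \mathbb{E}_{P_i}\, d(\hat{\theta}(X), \theta_i) \;\ge\; \frac{\delta}{2}\left(1 - \mathrm{TV}(P_0, P_1)\right),$$
so estimation is impossible whenever the two data distributions are statistically close: the lower bound reduces estimation to an information-theoretically hard testing problem.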
Prerequisites:
EE 278, or CS 229T, or STATS 300A, or equivalent, or Instructor's permission. This course will be mostly self-contained, but requires mathematical maturity at a graduate level. Background in statistics, information theory, machine learning, and/or optimization is recommended.
Logistics:
The course will cover a wide range of ideas, tools, and examples of proving impossibility results. Grading will be based primarily on 3-4 homework sets and a final literature review. A reading list is offered on the course website, but students are welcome to propose their own choice of papers or research projects.
Instructors: Yanjun Han, Ayfer Ozgur, and Tsachy Weissman
Time: Mon, Wed 10:00 AM - 11:20 AM
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From liyang at cs.stanford.edu Fri Mar 26 21:02:32 2021
From: liyang at cs.stanford.edu (Li-Yang Tan)
Date: Fri, 26 Mar 2021 21:02:32 -0700
Subject: [theory-seminar] Complexity theory courses this quarter
Message-ID:
Hi all,
Hope everyone had a great spring break!
I'm teaching two courses on complexity theory this quarter:
- CS254B,
a graduate-level introduction to the field. The course is structured
chronologically: we'll start with select gems from the 70s and work our way
up to the present. Among other topics, we'll take a deep dive into the
Hardness vs. Randomness paradigm, one of the major achievements (and
surprises!) of complexity theory.
- CS359A,
a research seminar on concrete complexity. A main focus of the seminar
will be on mapping out the research frontier, developing fruitful research
directions, and tackling research problems together.
Let me know if you have any questions, and I look forward to seeing you in
class next week.
All the best,
Li-Yang
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jacobfox at stanford.edu Mon Mar 29 22:16:00 2021
From: jacobfox at stanford.edu (Jacob Fox)
Date: Tue, 30 Mar 2021 05:16:00 +0000
Subject: [theory-seminar] Course announcement: Math 233C Topics in
Combinatorics, Advances in Extremal and Probabilistic Combinatorics
Message-ID:
This spring quarter on Tuesdays and Thursdays from 12:30pm to 1:50pm I will be teaching the graduate course
Math 233C Topics in Combinatorics, Advances in Extremal and Probabilistic Combinatorics
It assumes Math 159 (the undergraduate course on probabilistic methods in combinatorics).
A brief description: Math 233C is an advanced graduate class on extremal and probabilistic combinatorics. Important methods, results, and open problems will be highlighted. These methods include dependent random choice, the container method, the entropy method, and randomized algebraic-geometric constructions.
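As a flavor of the prerequisite material (a classical first-moment argument from the probabilistic method, not taken from the syllabus): color the edges of $K_n$ red/blue uniformly at random. The expected number of monochromatic copies of $K_k$ is
$$\binom{n}{k}\, 2^{1-\binom{k}{2}},$$
and whenever this is below 1 some coloring has no monochromatic $K_k$, so $R(k,k) > n$. Taking $n = \lfloor 2^{k/2} \rfloor$ bounds the expectation by $2^{1+k/2}/k! < 1$ for $k \ge 3$, giving the exponential Ramsey lower bound $R(k,k) > 2^{k/2}$.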
The syllabus is attached to this email. The lectures will be over Zoom.
Cheers,
Jacob Fox
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Math 233C Syllabus Spring 2021.pdf
Type: application/pdf
Size: 87327 bytes
Desc: Math 233C Syllabus Spring 2021.pdf
URL:
From wajc at stanford.edu Tue Mar 30 14:25:41 2021
From: wajc at stanford.edu (David Wajc)
Date: Tue, 30 Mar 2021 14:25:41 -0700
Subject: [theory-seminar] Theory Lunch: 04/01 Yeganeh Ali Mohammadi
Message-ID:
Hi all,
Welcome back from Spring break!
As a reminder, there are still open slots for talks this quarter.
Let me know if you want to give a talk!
The first theory lunch of the quarter will take place Thursday at noon
(PDT), at our gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Yeganeh will tell us about: *Raising Supply vs Improving Matching: A
Random Walk Down Spatial Markets*
*Abstract:*
We study dynamic matching in a spatial setting: there are $n$ riders and
$m$ drivers placed uniformly at random in the unit square $[0,1]^2$. The
location of the drivers is known. The riders arrive in some (possibly
adversarial) order and they have to be matched irrevocably to a driver at
the time of arrival. The cost of matching a driver to a rider is equal to
the $l_1$ distance between their locations. The question we consider
is which strategy is better: to boost supply by attracting more drivers to
the platform, or to have a perfect forecast and design an optimal matching
technology?
We prove that if $m\geq (1+\epsilon)n$ for some $\epsilon>0$, the cost of
matching returned by a simple greedy algorithm that pairs each arriving
rider to the closest available driver is $\tilde{O}(1)$. On the other hand,
when $n=m$, even an omniscient algorithm with perfect knowledge about the
positions of riders cannot find a matching with cost better than
$C\sqrt{n}$. Our results shed light on the important role of supply in
spatial matching markets: No level of sophistication in the matching
algorithm and no amount of data to predict times and locations of future
demand in a balanced market can beat a myopic greedy algorithm with a small
excess supply.
*This is joint work with Mohammad Akbarpour, Shengwu Li and Amin Saberi.*
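The phenomenon in the abstract is easy to see empirically. The sketch below is illustrative only, not the authors' code, and the market sizes are arbitrary choices: it compares the greedy algorithm's total cost in a balanced market ($m = n$) against one with modest excess supply.

```python
import numpy as np

def greedy_matching_cost(n_riders, n_drivers, seed=0):
    """Total l1 cost when each arriving rider greedily takes the
    closest still-free driver; riders and drivers are uniform in the
    unit square, as in the abstract."""
    rng = np.random.default_rng(seed)
    riders = rng.random((n_riders, 2))
    drivers = rng.random((n_drivers, 2))
    free = np.ones(n_drivers, dtype=bool)
    total = 0.0
    for r in riders:
        dists = np.abs(drivers - r).sum(axis=1)  # l1 distance to each driver
        dists[~free] = np.inf                    # exclude matched drivers
        j = int(np.argmin(dists))
        free[j] = False                          # irrevocable match
        total += float(dists[j])
    return total

balanced = greedy_matching_cost(400, 400)   # n = m: balanced market
surplus = greedy_matching_cost(400, 480)    # 20% excess supply
print(balanced, surplus)
```

On typical runs the balanced market's total cost is markedly larger, in line with the $\tilde{O}(1)$ versus $C\sqrt{n}$ separation the abstract describes.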
Cheers,
David
PS
*Pro tip:* To join the talk (at 12:30):
(1) go to the lecture hall,
(2) grab a seat, and
(3) *press X to join the Zoom lecture*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Tue Mar 30 17:01:47 2021
From: moses at cs.stanford.edu (Moses Charikar)
Date: Tue, 30 Mar 2021 17:01:47 -0700
Subject: [theory-seminar] summer internships at AWS Analytics
Message-ID:
Hi folks,
Forwarding a message from Nina Mishra about summer internships at AWS
Analytics.
Cheers,
Moses
-------
The AWS Analytics team at Amazon is looking for Applied Science interns for
the Summer of 2021. We seek candidates who are keen to advance the state
of the art in ML/algorithms on numeric, multidimensional, time-series
data. Ideal candidates are PhD students with a few papers under their
belt. The desired output of an internship is a publication in a reputable
forum. Interested candidates can send their CV to nmishra at amazon.com.
Applications must be received by April 15, 2021.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From yjhan at stanford.edu Tue Mar 30 18:43:48 2021
From: yjhan at stanford.edu (Yanjun Han)
Date: Wed, 31 Mar 2021 01:43:48 +0000
Subject: [theory-seminar] [info_theory_forum] Course announcement:
EE378C (Information-theoretic Lower Bounds in Data Science)
In-Reply-To:
References:
Message-ID:
Update: an enrollment cap of 30 was mistakenly in place, which prevented new enrollment in the system. We have lifted the cap, and enrollment should work now.
On Mar 26, 2021, at 3:26 PM, Yanjun Han wrote:
EE378C: Information-theoretic Lower Bounds in Data Science
[...]
_______________________________________________
information_theory_forum mailing list
information_theory_forum at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/information_theory_forum
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ajmann at stanford.edu Wed Mar 31 09:41:20 2021
From: ajmann at stanford.edu (Ariana Joy Mann)
Date: Wed, 31 Mar 2021 16:41:20 +0000
Subject: [theory-seminar] "Modeling the Heterogeneity in COVID-19's
Reproductive Number and Its Impact on Predictive Scenarios" - Claire Donnat
(Mon Apr-5 @ 12pm)
Message-ID:
Modeling the Heterogeneity in COVID-19's Reproductive Number and Its Impact on Predictive Scenarios
Claire Donnat, Professor, University of Chicago
Hosted by the Algorithms and Friends Seminar
Mon, 5-Apr / 12pm / Zoom: https://stanford.zoom.us/meeting/register/tJEpcOyopzwjGdFFJD1G5LooJcdMIDdD86Qm
To avoid Zoom-bombing, we ask attendees to sign in via the above URL to receive the Zoom meeting details by email.
Abstract
The correct evaluation of the reproductive number R for COVID-19, which characterizes the average number of secondary cases generated by each typical primary case, is central to quantifying the potential scope of the pandemic and selecting an appropriate course of action. In most models, R is modelled as a universal constant for the virus across outbreak clusters and individuals, effectively averaging out the inherent variability of the transmission process due to varying individual contact rates, population densities, demographics, or temporal factors, among many others. Yet, due to the exponential nature of epidemic growth, the error due to this simplification can be rapidly amplified and lead to inaccurate predictions and/or risk evaluation. From the statistical modeling perspective, the magnitude of the impact of this averaging remains an open question: how can this intrinsic variability be percolated into epidemic models, and how can its impact on uncertainty quantification and predictive scenarios be better quantified? In this talk, we discuss a Bayesian perspective on this question, creating a bridge between the agent-based and compartmental approaches commonly used in the literature. After deriving a Bayesian model that captures at scale the heterogeneity of a population and environmental conditions, we simulate the spread of the epidemic as well as the impact of different social distancing strategies, and highlight the strong impact of this added variability on the reported results. We base our discussion on both synthetic experiments, thereby quantifying the reliability and the magnitude of the effects, and real COVID-19 data.
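The effect the abstract describes shows up even in a toy branching process. The sketch below is a generic illustration of heterogeneity in R (a standard Gamma-Poisson superspreading model with parameters of my choosing), not the talk's Bayesian model: two epidemics with the same mean reproductive number R = 2 but different individual-level variability behave very differently.

```python
import numpy as np

def extinction_fraction(R, k, n_sims=2000, seed=0):
    """Fraction of simulated outbreaks that die out.

    Each case infects Poisson(r) others, where r is that individual's
    own reproductive number: fixed at R when k is infinite, or drawn
    from Gamma(k, R/k) (mean R, much more variable for small k)."""
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_sims):
        infected, total = 1, 1
        for _ in range(12):                   # cap the generations
            if infected == 0 or total > 500:  # died out / clearly large
                break
            if np.isinf(k):
                offspring = rng.poisson(R, infected).sum()
            else:
                offspring = rng.poisson(rng.gamma(k, R / k, infected)).sum()
            infected = offspring
            total += offspring
        if infected == 0 and total <= 500:
            extinct += 1
    return extinct / n_sims

homogeneous = extinction_fraction(2.0, np.inf)  # every case has r = R
heterogeneous = extinction_fraction(2.0, 0.1)   # heavy-tailed individual r
print(homogeneous, heterogeneous)
```

With the same average R, the heterogeneous process dies out far more often, because most transmission is concentrated in rare superspreading events; this is exactly the kind of behavior that modelling R as a universal constant averages away.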
Bio
Claire Donnat is an Assistant Professor in the Statistics Department at the University of Chicago. After completing her undergraduate and graduate studies at Ecole Polytechnique (France), she pursued her PhD in Statistics at Stanford University under the supervision of Professor Susan Holmes, and graduated in Spring 2020. Her research interests lie in devising statistical methods for inference on graphs and heterogeneous datasets, in particular with applications to biomedical data.
________________________________
Mailing list: https://mailman.stanford.edu/mailman/listinfo/algorithms-and-friends
Algorithm & Friends Seminar: http://theory.stanford.edu/algofriends/possible_dates.html
-------------- next part --------------
An HTML attachment was scrubbed...
URL: