From wajc at stanford.edu Thu Apr 1 12:03:36 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 1 Apr 2021 12:03:36 -0700
Subject: [theoryseminar] Theory Lunch: 04/01 Yeganeh Ali Mohammadi
In-Reply-To:
References:
Message-ID:
Hi all,
Yeganeh's talk will start in half an hour! See you there.
Cheers,
David
On Tue, 30 Mar 2021 at 14:25, David Wajc wrote:
> Hi all,
>
> Welcome back from Spring break!
> As a reminder, there are still open slots for talks this quarter.
> Let me know if you want to give a talk!
>
> The first theory lunch of the quarter will take place Thursday at noon
> (PDT), at our gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Yeganeh will tell us about: *Raising Supply vs Improving Matching: A
> Random Walk Down Spatial Markets*
>
> *Abstract:*
> We study dynamic matching in a spatial setting: there are $n$ riders and
> $m$ drivers placed uniformly at random on the unit square $[0,1]^2$. The
> locations of the drivers are known. The riders arrive in some (possibly
> adversarial) order and must be matched irrevocably to a driver at the
> time of arrival. The cost of matching a driver to a rider is equal to
> the $\ell_1$-norm of the distance between them. The question we consider
> is which strategy is better: to boost supply by attracting more drivers to
> the platform, or to have a perfect forecast and design an optimal matching
> technology?
>
> We prove that if $m \geq (1+\epsilon)n$ for some $\epsilon > 0$, the cost
> of the matching returned by a simple greedy algorithm that pairs each
> arriving rider to the closest available driver is $\tilde{O}(1)$. On the
> other hand,
> when $n=m$, even an omniscient algorithm with perfect knowledge about the
> positions of riders cannot find a matching with cost better than
> $C\sqrt{n}$. Our results shed light on the important role of supply in
> spatial matching markets: No level of sophistication in the matching
> algorithm and no amount of data to predict times and locations of future
> demand in a balanced market can beat a myopic greedy algorithm with a small
> excess supply.
>
> *This is joint work with Mohammad Akbarpour, Shengwu Li and Amin Saberi.*
>
>
> Cheers,
> David
>
> PS
> *Pro tip:* To join the talk (at 12:30):
> (1) go to the lecture hall,
> (2) grab a seat, and
> (3) press X to join the zoom lecture
>
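[Archive note: the greedy rule in the abstract above is short enough to sketch directly. This is a toy illustration under my own naming (`greedy_match`), not the authors' code:]

```python
import random

def greedy_match(drivers, riders):
    """Match each arriving rider to the closest available driver under the
    l1 distance; return the total cost of the resulting matching."""
    available = list(drivers)
    total = 0.0
    for rx, ry in riders:  # riders arrive online, one at a time
        # pick the nearest free driver under the l1 norm
        best = min(available, key=lambda d: abs(d[0] - rx) + abs(d[1] - ry))
        available.remove(best)
        total += abs(best[0] - rx) + abs(best[1] - ry)
    return total

# n riders and m = (1 + eps) * n drivers, uniform on the unit square
rng = random.Random(0)
n, eps = 200, 0.25
drivers = [(rng.random(), rng.random()) for _ in range(int((1 + eps) * n))]
riders = [(rng.random(), rng.random()) for _ in range(n)]
print(greedy_match(drivers, riders))
```

The theorem in the abstract says this total stays polylogarithmic once there is a constant fraction of excess supply, while any algorithm needs cost on the order of sqrt(n) in the balanced case.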
 next part 
An HTML attachment was scrubbed...
URL:
From ajmann at stanford.edu Mon Apr 5 10:20:48 2021
From: ajmann at stanford.edu (Ariana Joy Mann)
Date: Mon, 5 Apr 2021 17:20:48 +0000
Subject: [theoryseminar] Today: "Modeling the Heterogeneity in COVID-19's
 Reproductive Number and Its Impact on Predictive Scenarios" - Claire Donnat
 (Mon Apr 5 @ 12pm)
Message-ID:
Reminder: This talk is happening today, Monday at 12pm PT
________________________________
Modeling the Heterogeneity in COVID-19's Reproductive Number and Its Impact on Predictive Scenarios
Claire Donnat - Professor, University of Chicago
Hosted by the Algorithms and Friends Seminar
Mon, 5-Apr / 12pm / Zoom: https://stanford.zoom.us/meeting/register/tJEpcOyopzwjGdFFJD1G5LooJcdMIDdD86Qm
To avoid Zoom-bombing, we ask attendees to sign in via the above URL to receive the Zoom meeting details by email.
Abstract
The correct evaluation of the reproductive number R for COVID-19 - which characterizes the average number of secondary cases generated by each typical primary case - is central to quantifying the potential scope of the pandemic and selecting an appropriate course of action. In most models, R is modelled as a universal constant for the virus across outbreak clusters and individuals, effectively averaging out the inherent variability of the transmission process due to varying individual contact rates, population densities, demographics, or temporal factors, among many others. Yet, due to the exponential nature of epidemic growth, the error introduced by this simplification can be rapidly amplified and lead to inaccurate predictions and/or risk evaluations. From the statistical modeling perspective, the magnitude of the impact of this averaging remains an open question: how can this intrinsic variability be percolated into epidemic models, and how can its impact on uncertainty quantification and predictive scenarios be better quantified? In this talk, we discuss a Bayesian perspective on this question, creating a bridge between the agent-based and compartmental approaches commonly used in the literature. After deriving a Bayesian model that captures at scale the heterogeneity of a population and environmental conditions, we simulate the spread of the epidemic as well as the impact of different social-distancing strategies, and highlight the strong impact of this added variability on the reported results. We base our discussion on both synthetic experiments - thereby quantifying the reliability and the magnitude of the effects - and real COVID-19 data.
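[Archive note: the modeling point - that per-case variability in R changes outcomes even when the mean is fixed - can be illustrated with a toy branching process. This sketch is mine, not the talk's model; the Gamma-Poisson mixture and all parameter values are illustrative assumptions:]

```python
import math, random

def outbreak_size(R, k, generations, seed):
    """Toy branching process. Each case draws its own reproduction number r
    from a Gamma with mean R and shape k (constant r = R when k is None),
    then infects Poisson(r) new cases. Returns the total number of cases."""
    rng = random.Random(seed)
    cases, total = 1, 1
    for _ in range(generations):
        new = 0
        for _ in range(cases):
            r = R if k is None else rng.gammavariate(k, R / k)
            # Poisson(r) via Knuth's multiplication method (fine for small r)
            limit, draws, p = math.exp(-r), 0, rng.random()
            while p > limit:
                draws += 1
                p *= rng.random()
            new += draws
        cases, total = new, total + new
        if cases == 0:
            break
    return total

# Same mean R = 2, but a very different spread of outcomes once r varies per case
homogeneous = [outbreak_size(2.0, None, 6, s) for s in range(100)]
heterogeneous = [outbreak_size(2.0, 0.1, 6, s) for s in range(100)]
print(sum(t == 1 for t in heterogeneous), "of 100 heterogeneous runs die out at once")
```

With a small shape parameter, most runs die out while a few explode, which is exactly the kind of tail behavior a constant-R model averages away.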
Bio
Claire Donnat is an Assistant Professor in the Statistics Department at the University of Chicago. After completing her undergraduate and graduate studies at Ecole Polytechnique (France), she pursued her PhD in Statistics at Stanford University under the supervision of Professor Susan Holmes, graduating in Spring 2020. Her research focuses on devising statistical methods for inference on graphs and heterogeneous datasets, in particular with applications to biomedical data.
________________________________
Mailing list: https://mailman.stanford.edu/mailman/listinfo/algorithmsandfriends
Algorithm & Friends Seminar: http://theory.stanford.edu/algofriends/possible_dates.html
From tavorb at stanford.edu Mon Apr 5 11:13:23 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 5 Apr 2021 14:13:23 -0400
Subject: [theoryseminar] "Scalable semidefinite programming" - Joel Tropp
 (Thu, 8-Apr @ 4:30pm)
Message-ID:
Scalable semidefinite programming
Joel Tropp - Professor, Caltech
Thu, 8-Apr / 4:30pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
*To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
receive the Zoom meeting details by email.*
Abstract
Semidefinite programming (SDP) is a powerful framework from convex
optimization that has striking potential for data science applications.
This talk describes a provably correct randomized algorithm for solving
large, weakly constrained SDP problems by economizing on the storage and
arithmetic costs. Numerical evidence shows that the method is effective for
a range of applications, including relaxations of MaxCut, abstract phase
retrieval, and quadratic assignment problems. Running on a laptop
equivalent, the algorithm can handle SDP instances where the matrix
variable has over 10^14 entries.
This talk will highlight the ideas behind the algorithm in a streamlined
setting. The insights include a careful problem formulation, design of a
bespoke optimization method, use of randomized eigenvalue computations, and
use of randomized sketching methods.
Joint work with Alp Yurtsever, Olivier Fercoq, Madeleine Udell, and Volkan
Cevher. Based on arXiv 1912.02949 (Scalable SDP, SIMODS 2021) and other
papers (SketchyCGM in AISTATS 2017, Nyström sketch in NeurIPS 2017).
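[Archive note: the "randomized sketching" ingredient can be illustrated with a Nyström-type reconstruction of a PSD matrix from a single thin product. This is my own NumPy toy, not the paper's algorithm, which works at far larger scale with implicit matrices:]

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, k = 50, 5, 10                  # dimension, true rank, sketch size

G = rng.standard_normal((n, r))
A = G @ G.T                          # a PSD matrix of rank r

Omega = rng.standard_normal((n, k))  # random test matrix
Y = A @ Omega                        # the only product we ever need from A

# Nystrom reconstruction: A_hat = Y (Omega^T Y)^+ Y^T, storing only n x k data
A_hat = Y @ np.linalg.pinv(Omega.T @ Y) @ Y.T

err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
print(err)  # small, since the sketch size k exceeds the rank
```

The point of the sketch is storage: the algorithm never holds a second dense copy of the matrix variable, only the thin factor Y.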
Bio
Joel A. Tropp is Steele Family Professor of Applied and Computational
Mathematics at Caltech. His research centers on data science, applied
mathematics, numerical algorithms, and random matrix theory. He attained
the Ph.D. degree in Computational Applied Mathematics at the University of
Texas at Austin in 2004, and he joined Caltech in 2007. Prof. Tropp won the
PECASE in 2008, and he was recognized as a Highly Cited Researcher in
Computer Science each year from 2014-2018. He is co-founder and Section
Editor of the SIAM Journal on Mathematics of Data Science (SIMODS), and he
was co-chair of the inaugural 2020 SIAM Conference on the Mathematics of
Data Science. Prof. Tropp was elected SIAM Fellow in 2019 and IEEE Fellow
in 2020.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list islcolloq at lists.stanford.edu.*

Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2021q2/joeltropp/
From wajc at stanford.edu Mon Apr 5 11:33:40 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 5 Apr 2021 11:33:40 -0700
Subject: [theoryseminar] Theory lunch 04/08: Guy Blanc
Message-ID:
Hi all,
Theory lunch will take place Thursday at noon (PDT), at our gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Guy will tell us about: *Properly learning decision trees in almost
polynomial time*
*Abstract*: We give an n^{O(log log n)}-time membership query algorithm for
properly learning polynomial-size decision trees under the uniform
distribution. The previous best running time was n^{O(log n)}, a
consequence of a classic algorithm of Ehrenfeucht and Haussler from 1989.
Our algorithm is built on a new structural result for decision trees that
strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the
OSSS theorem says that every decision tree has an influential variable, we
show that every decision tree can be "pruned" so that *every* variable in
the resulting tree is influential.
*Joint work with Jane Lange, Mingda Qiao, and Li-Yang Tan.*
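[Archive note: for readers unfamiliar with "influential variables": for a Boolean function f on {0,1}^n, the influence of variable i is the probability, over a uniform x, that flipping bit i changes f(x). A brute-force check of this definition (my own toy, not part of the algorithm):]

```python
from itertools import product

def influence(f, n, i):
    """Inf_i(f) = Pr_x[f(x) != f(x with bit i flipped)] under the uniform
    distribution on {0,1}^n, computed by exhaustive enumeration."""
    flips = 0
    for x in product((0, 1), repeat=n):
        y = list(x)
        y[i] ^= 1
        flips += f(x) != f(tuple(y))
    return flips / 2 ** n

# A depth-2 decision tree: query x0, then answer x1 (if x0 = 1) or x2
tree = lambda x: x[1] if x[0] else x[2]
print([influence(tree, 3, i) for i in range(3)])  # [0.5, 0.5, 0.5]
```

In this toy tree every queried variable already has influence 1/2; the structural result above says any decision tree can be pruned so that no variable in it is left with negligible influence.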
Cheers,
David
PS
*Pro tip:* To join the talk (at 12:30):
(1) go to the lecture hall,
(2) grab a seat, and
(3) press X to join the zoom lecture
From tavorb at stanford.edu Thu Apr 8 08:21:58 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 8 Apr 2021 11:21:58 -0400
Subject: [theoryseminar] "Scalable semidefinite programming" - Joel Tropp
 (Thu, 8-Apr @ 4:30pm)
In-Reply-To:
References:
Message-ID:
Reminder: this talk is today at 4:30pm.
On Mon, Apr 5, 2021 at 2:13 PM Tavor Baharav wrote:
> Scalable semidefinite programming
> Joel Tropp - Professor, Caltech
>
> Thu, 8-Apr / 4:30pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> *To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
> receive the Zoom meeting details by email.*
> Abstract
>
> Semidefinite programming (SDP) is a powerful framework from convex
> optimization that has striking potential for data science applications.
> This talk describes a provably correct randomized algorithm for solving
> large, weakly constrained SDP problems by economizing on the storage and
> arithmetic costs. Numerical evidence shows that the method is effective for
> a range of applications, including relaxations of MaxCut, abstract phase
> retrieval, and quadratic assignment problems. Running on a laptop
> equivalent, the algorithm can handle SDP instances where the matrix
> variable has over 10^14 entries.
>
> This talk will highlight the ideas behind the algorithm in a streamlined
> setting. The insights include a careful problem formulation, design of a
> bespoke optimization method, use of randomized eigenvalue computations, and
> use of randomized sketching methods.
>
> Joint work with Alp Yurtsever, Olivier Fercoq, Madeleine Udell, and Volkan
> Cevher. Based on arXiv 1912.02949 (Scalable SDP, SIMODS 2021) and other
> papers (SketchyCGM in AISTATS 2017, Nyström sketch in NeurIPS 2017).
> Bio
>
> Joel A. Tropp is Steele Family Professor of Applied and Computational
> Mathematics at Caltech. His research centers on data science, applied
> mathematics, numerical algorithms, and random matrix theory. He attained
> the Ph.D. degree in Computational Applied Mathematics at the University of
> Texas at Austin in 2004, and he joined Caltech in 2007. Prof. Tropp won the
> PECASE in 2008, and he was recognized as a Highly Cited Researcher in
> Computer Science each year from 2014-2018. He is co-founder and Section
> Editor of the SIAM Journal on Mathematics of Data Science (SIMODS), and he
> was co-chair of the inaugural 2020 SIAM Conference on the Mathematics of
> Data Science. Prof. Tropp was elected SIAM Fellow in 2019 and IEEE Fellow
> in 2020.
>
> *This talk is hosted by the ISL Colloquium. To receive talk announcements,
> subscribe to the mailing list islcolloq at lists.stanford.edu.*
> 
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2021q2/joeltropp/
>
From wajc at stanford.edu Thu Apr 8 08:45:55 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 8 Apr 2021 08:45:55 -0700
Subject: [theoryseminar] Theory lunch 04/08: Guy Blanc
In-Reply-To:
References:
Message-ID:
Hi all,
Reminder: theory lunch socializing, followed by Guy's talk, will start at
noon.
Cheers,
David
On Mon, 5 Apr 2021 at 11:33, David Wajc wrote:
> Hi all,
>
> Theory lunch will take place Thursday at noon (PDT), at our gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Guy will tell us about: *Properly learning decision trees in almost
> polynomial time*
>
> *Abstract*: We give an n^{O(log log n)}-time membership query algorithm
> for properly learning polynomial-size decision trees under the uniform
> distribution. The previous best running time was n^{O(log n)}, a
> consequence of a classic algorithm of Ehrenfeucht and Haussler from 1989.
> Our algorithm is built on a new structural result for decision trees that
> strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the
> OSSS theorem says that every decision tree has an influential variable, we
> show that every decision tree can be "pruned" so that *every* variable in
> the resulting tree is influential.
> *Joint work with Jane Lange, Mingda Qiao, and Li-Yang Tan.*
>
> Cheers,
> David
>
> PS
> *Pro tip:* To join the talk (at 12:30):
> (1) go to the lecture hall,
> (2) grab a seat, and
> (3) press X to join the zoom lecture
>
From wajc at stanford.edu Thu Apr 8 12:26:23 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 8 Apr 2021 12:26:23 -0700
Subject: [theoryseminar] Theory lunch 04/08: Guy Blanc
In-Reply-To:
References:
Message-ID:
Reminder: Guy's talk will start in a few minutes.
On Thu, 8 Apr 2021 at 08:45, David Wajc wrote:
> Hi all,
>
> Reminder: theory lunch socializing, followed by Guy's talk, will start at
> noon.
>
> Cheers,
> David
>
> On Mon, 5 Apr 2021 at 11:33, David Wajc wrote:
>
>> Hi all,
>>
>> Theory lunch will take place Thursday at noon (PDT), at our gather
>> space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> SongComplexity).
>> Guy will tell us about: *Properly learning decision trees in almost
>> polynomial time*
>>
>> *Abstract*: We give an n^{O(log log n)}-time membership query algorithm
>> for properly learning polynomial-size decision trees under the uniform
>> distribution. The previous best running time was n^{O(log n)}, a
>> consequence of a classic algorithm of Ehrenfeucht and Haussler from 1989.
>> Our algorithm is built on a new structural result for decision trees that
>> strengthens a theorem of O'Donnell, Saks, Schramm, and Servedio. While the
>> OSSS theorem says that every decision tree has an influential variable, we
>> show that every decision tree can be "pruned" so that *every* variable
>> in the resulting tree is influential.
>> *Joint work with Jane Lange, Mingda Qiao, and Li-Yang Tan.*
>>
>> Cheers,
>> David
>>
>> PS
>> *Pro tip:* To join the talk (at 12:30):
>> (1) go to the lecture hall,
>> (2) grab a seat, and
>> (3) press X to join the zoom lecture
>>
>
From wajc at stanford.edu Mon Apr 12 16:17:01 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 12 Apr 2021 16:17:01 -0700
Subject: [theoryseminar] Theory lunch: No talk this week
Message-ID:
Hi all,
There will be no theory lunch talk this week. We will reconvene next
week to hear Nathan Zixia Hu talk about Sampling Arborescences in Parallel.
Cheers,
David
From tselil at stanford.edu Mon Apr 12 18:27:51 2021
From: tselil at stanford.edu (Tselil Schramm)
Date: Mon, 12 Apr 2021 18:27:51 -0700
Subject: [theoryseminar] Wednesday TCS+ talk by Andrea Lincoln (UC Berkeley)
Message-ID:
Hello theorists,
See the talk announcement below for a talk that may be of interest to many.
Tselil

Dear TCS+ followers,
Our next talk will take place this coming Wednesday, April 14th at 1:00 PM
Eastern Time (10:00 AM Pacific Time, 19:00 Central European Time, 17:00
UTC). Andrea Lincoln from UC Berkeley will speak about
"New Techniques for Proving Fine-Grained Average-Case Hardness" (abstract
below).
Please sign up on the online form at
https://sites.google.com/view/tcsplus/welcome/nexttcstalk if you wish to
join the talk as an individual or a group. Due to security concerns, note
that registration is required to attend the interactive Zoom talk, and a
(free) Zoom account is required to attend. (The link to the recording will
also be posted on our website afterwards.)
Hoping to see you all there,
The organizers

Speaker: Andrea Lincoln (UC Berkeley)
Title: New Techniques for Proving Fine-Grained Average-Case Hardness
Abstract: In this talk I will cover a new technique for worst-case to
average-case reductions. There are two primary concepts introduced in this
talk: "factored" problems and a framework for worst-case to average-case
fine-grained (WC-to-ACFG) self reductions.
We will define new versions of OV, k-SUM and zero-k-clique that are both
worst-case and average-case fine-grained hard assuming the core hypotheses
of fine-grained complexity. We then use these as a basis for fine-grained
hardness and average-case hardness of other problems. Our hard factored
problems are also simple enough that we can reduce them to many other
problems, e.g. to edit distance, k-LCS and versions of Max-Flow. We further
consider counting variants of the factored problems and give WC-to-ACFG
reductions for them for a natural distribution.
To show hardness for these factored problems we formalize the framework of
[Boix-Adsera et al. 2019] that was used to give a WC-to-ACFG reduction for
counting k-cliques. We define an explicit property of problems such that if
a problem has that property, one can use the framework on the problem to get
a WC-to-ACFG self reduction.
In total, these factored problems and the framework together give tight
fine-grained average-case hardness for various problems, including the
counting variant of regular expression matching.
Based on joint work with Mina Dalirrooyfard and Virginia Vassilevska
Williams.
From reingold at stanford.edu Thu Apr 15 16:55:00 2021
From: reingold at stanford.edu (Omer Reingold)
Date: Thu, 15 Apr 2021 13:55:00 -1000
Subject: [theoryseminar] Self-confidence in TOC
Message-ID:
I am not in the habit of advertising my posts on this list, but I feel this
one carries an important message for the more junior members:
https://theorydish.blog/2021/04/15/tocapersonalperspective2021/
On a related note, please remember that theory dish is a medium for our
entire Stanford TOC community. Please let me know if you want to contribute
content or even to take charge of the blog for a while.
Best,
Omer
From kabirc at stanford.edu Mon Apr 19 08:44:07 2021
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Mon, 19 Apr 2021 08:44:07 -0700
Subject: [theoryseminar] "Recent results in planted assignment problems" -
 Yihong Wu (Thu, 22-Apr @ 4:30pm)
Message-ID:
Recent results in planted assignment problems
Yihong Wu - Professor, Yale
Thu, 22-Apr / 4:30pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
*To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
receive the Zoom meeting details by email.*
*If you would like to meet with the speaker, please sign up here.*
Abstract
Motivated by applications such as particle tracking, network
de-anonymization, and computer vision, a recent thread of research is
devoted to statistical models of assignment problems, in which the data are
randomly weighted graphs correlated with a latent permutation. In contrast
to problems such as planted clique or the stochastic block model, the major
difference here is the lack of low-rank structure, which brings forth new
challenges in both statistical analysis and algorithm design.
In the first half of the talk, we discuss the linear assignment problem,
where the goal is to reconstruct a perfect matching planted in a randomly
weighted bipartite graph, whose planted and unplanted edge weights are
independently drawn from two different distributions. We determine the
sharp threshold at which the optimal reconstruction error (fraction of
misclassified edges) exhibits a phase transition from imperfect to perfect.
Furthermore, for exponential weight distributions, this phase transition is
shown to be of infinite order, confirming the conjecture in [Semerjian et
al. 2020]. The negative result is shown by proving that, below the
threshold, the posterior distribution is concentrated away from the hidden
matching, by constructing exponentially many long augmenting cycles.
In the second half of the talk, we discuss the quadratic assignment problem
(graph matching), where the goal is to recover the hidden vertex
correspondence between two edge-correlated Erdos-Renyi graphs. We prove
that there exists a sharp threshold, above which one can correctly match
all but a vanishing fraction of the vertices and below which matching any
positive fraction is impossible, a phenomenon known as the "all-or-nothing"
phase transition. The proof builds upon a tight characterization of the
mutual information via the truncated second-moment method and an
appropriate "area theorem". Achieving these thresholds with efficient
algorithms remains open.
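[Archive note: to make the planted-matching setup concrete, here is a toy instance of my own, with made-up parameters; a global greedy scan over edges stands in for the true minimum-weight matching:]

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Planted model: the hidden matching is the identity permutation. Planted
# edge weights are drawn from a much smaller distribution than unplanted ones.
W = rng.exponential(1.0, size=(n, n))                          # unplanted ~ Exp(mean 1)
W[np.arange(n), np.arange(n)] = rng.exponential(1e-4, size=n)  # planted, tiny

# Greedy stand-in for min-weight matching: scan edges in increasing weight,
# keeping each edge whose two endpoints are still free.
match, rows, cols = {}, set(), set()
for e in np.argsort(W, axis=None):
    i, j = divmod(int(e), n)
    if i not in rows and j not in cols:
        rows.add(i); cols.add(j); match[i] = j

overlap = sum(match[i] == i for i in range(n)) / n
print(overlap)  # typically close to 1: most planted edges are recovered
```

The talk's results concern the regime where the two weight distributions are close enough that this kind of recovery transitions sharply from possible to impossible.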
This talk is based on joint work with Jian Ding, Jiaming Xu, Dana Yang and
Sophie Yu. Preprints available at: https://arxiv.org/abs/2103.09383,
https://arxiv.org/abs/2008.10097, https://arxiv.org/abs/2102.00082.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list islcolloq at lists.stanford.edu.*
From wajc at stanford.edu Mon Apr 19 10:14:15 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 19 Apr 2021 10:14:15 -0700
Subject: [theoryseminar] Theory lunch 04/22: Nathan Zixia Hu
Message-ID:
Hi all,
Theory lunch will take place Thursday at noon (PDT), at our gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
Nathan will tell us about: *Sampling Arborescences in Parallel*
*Abstract:* We study the problem of sampling a uniformly random directed
rooted spanning tree, also known as an arborescence, from a possibly
weighted directed graph. Classically, this problem has long been known to
be polynomial-time solvable; the exact number of arborescences can be
computed by a determinant [Tut48], and sampling can be reduced to counting
[JVV86; JS96]. However, the classic reduction from sampling to counting
seems to be inherently sequential. This raises the question of designing
efficient parallel algorithms for sampling. We show that sampling
arborescences can be done in RNC.
For several well-studied combinatorial structures, counting can be reduced
to the computation of a determinant, which is known to be in NC [Csa75].
These include arborescences, planar graph perfect matchings, Eulerian tours
in digraphs, and determinantal point processes. However, not much is known
about efficient parallel sampling of these structures. Our work is a step
towards resolving this mystery.
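[Archive note: the determinant-based counting referenced above ([Tut48]) fits in a few lines. This is my own toy sketch of the classical count, not the talk's parallel sampler:]

```python
import numpy as np

def count_arborescences(A, root):
    """Number of spanning arborescences rooted at `root` (edges oriented away
    from the root) in the digraph with adjacency matrix A, where A[i][j] is
    the weight of edge i -> j. By the directed matrix-tree theorem, this is
    the determinant of the in-degree Laplacian with the root's row and
    column deleted."""
    A = np.asarray(A, dtype=float)
    L = np.diag(A.sum(axis=0)) - A          # D_in - A
    keep = [v for v in range(len(A)) if v != root]
    return round(np.linalg.det(L[np.ix_(keep, keep)]))

# Complete digraph on 4 vertices: 4^{4-2} = 16 arborescences from each root,
# matching Cayley's formula for labeled trees.
K4 = 1 - np.eye(4)
print([count_arborescences(K4, r) for r in range(4)])  # [16, 16, 16, 16]
```

Since determinants are computable in NC, the counting step parallelizes; the hard part, as the abstract says, is making the sampling side parallel as well.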
Cheers,
David
PS
*Pro tip:* To join the talk (at 12:30):
(1) go to the lecture hall,
(2) grab a seat, and
(3) press X to join the zoom lecture
From nehgupta at stanford.edu Mon Apr 19 16:30:40 2021
From: nehgupta at stanford.edu (Neha Gupta)
Date: Mon, 19 Apr 2021 23:30:40 +0000
Subject: [theoryseminar] Quals talk Tuesday, 20 April at 2pm
Message-ID:
Hi everyone,
I will be giving my qualifying exam talk tomorrow (Tuesday, 20 April) at 2pm on "Differentially Private PAC learning". Please join if you are interested.
Zoom link: https://stanford.zoom.us/j/96818814923?pwd=TWUvdEwwRXhBZGRORTJ1ejFOZCsxQT09
Meeting ID: 968 1881 4923
Password: 727362
Thanks,
Neha
From lunjia at stanford.edu Wed Apr 21 15:50:21 2021
From: lunjia at stanford.edu (Lunjia Hu)
Date: Wed, 21 Apr 2021 22:50:21 +0000
Subject: [theoryseminar] Quals Talk: the Sunflower Lemma
Message-ID:
Hi everyone,
Next week, I will be giving my quals talk on the sunflower lemma. The talk will be on Tuesday, April 27, at 2 PM PDT. Details are below.
Cheers,
Lunjia
Abstract: The sunflower lemma is a combinatorial tool with broad applications in theoretical computer science. The lemma guarantees that any large family of small sets must contain a large sunflower: a subfamily of sets with a shared "core" and disjoint "petals". The original bound in the sunflower lemma, proved by Erdős and Rado in 1960, had remained essentially the best for nearly 60 years. In 2019, the bound was significantly improved in a breakthrough work by Alweiss-Lovett-Wu-Zhang, leading to improved monotone circuit lower bounds. This talk will include 1) an introduction to the sunflower lemma, 2) an overview of the many applications of the lemma with a focus on monotone circuit complexity, and 3) an interesting "entropic" proof of the current best sunflower lemma.
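For readers new to the definition, the sunflower condition can be checked mechanically: the petals (each set minus the common core) must be pairwise disjoint, which forces every pairwise intersection to equal the core. A small illustrative check in Python (mine, not from the talk):

```python
def is_sunflower(sets):
    """Check whether a family of sets forms a sunflower: the petals
    (sets minus the common core) must be pairwise disjoint, so every
    pairwise intersection equals the intersection of the whole family."""
    sets = [frozenset(s) for s in sets]
    if len(sets) < 2:
        return True  # zero or one set is trivially a sunflower
    core = frozenset.intersection(*sets)
    petals = [s - core for s in sets]
    # Petals are pairwise disjoint iff their sizes add up to the size of their union.
    return sum(len(p) for p in petals) == len(frozenset().union(*petals))

print(is_sunflower([{1, 2, 3}, {1, 4, 5}, {1, 6}]))  # core {1}, disjoint petals
print(is_sunflower([{1, 2}, {2, 3}, {1, 3}]))        # pairwise intersections differ
```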
Zoom link: https://stanford.zoom.us/j/98064347535?pwd=Y0tkTGFJUys5b3JpWmhOUVo1Rlg2QT09
Password: 000669
From kabirc at stanford.edu Thu Apr 22 08:45:06 2021
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Thu, 22 Apr 2021 08:45:06 -0700
Subject: [theoryseminar] "Recent results in planted assignment problems" – Yihong Wu (Thu, 22-Apr @ 4:30pm)
In-Reply-To:
References:
Message-ID:
Reminder that this talk will take place today, Thu. 22-Apr, at 4:30pm.
On Mon, Apr 19, 2021 at 8:44 AM Kabir Chandrasekher
wrote:
> Recent results in planted assignment problems
> Yihong Wu – Professor, Yale
>
> Thu, 22-Apr / 4:30pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> *To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
> receive the Zoom meeting details by email.*
>
>
> *If you would like to meet with the speaker, please sign up here
> .*
> Abstract
>
> Motivated by applications such as particle tracking, network
> de-anonymization, and computer vision, a recent thread of research is
> devoted to statistical models of assignment problems, in which the data are
> random weight graphs correlated with the latent permutation. In contrast to
> problems such as planted clique or stochastic block model, the major
> difference here is the lack of low-rank structures, which brings forth new
> challenges in both statistical analysis and algorithm design.
>
> In the first half of the talk, we discuss the linear assignment problem,
> where the goal is to reconstruct a perfect matching planted in a randomly
> weighted bipartite graph, whose planted and unplanted edge weights are
> independently drawn from two different distributions. We determine the
> sharp threshold at which the optimal reconstruction error (fraction of
> misclassified edges) exhibits a phase transition from imperfect to perfect.
> Furthermore, for exponential weight distributions, this phase transition is
> shown to be of infinite-order, confirming the conjecture in [Semerjian et
> al. 2020]. The negative result is shown by proving that, below the
> threshold, the posterior distribution is concentrated away from the hidden
> matching by constructing exponentially many long augmenting cycles.
>
> In the second half of the talk, we discuss the quadratic assignment
> problem (graph matching), where the goal is to recover the hidden vertex
> correspondence between two edge-correlated Erdős-Rényi graphs. We prove
> that there exists a sharp threshold, above which one can correctly match
> all but a vanishing fraction of the vertices and below which matching any
> positive fraction is impossible, a phenomenon known as the "all-or-nothing"
> phase transition. The proof builds upon a tight characterization of the
> mutual information via the truncated second-moment method and an
> appropriate "area theorem". Achieving these thresholds with efficient
> algorithms remains open.
>
> This talk is based on joint work with Jian Ding, Jiaming Xu, Dana Yang and
> Sophie Yu. Preprints available at: https://arxiv.org/abs/2103.09383,
> https://arxiv.org/abs/2008.10097, https://arxiv.org/abs/2102.00082.
>
> *This talk is hosted by the ISL Colloquium
> . To receive talk announcements, subscribe
> to the mailing list isl-colloq at lists.stanford.edu
> .*
>
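The planted linear assignment model in the abstract above is easy to simulate. The sketch below is an illustrative toy (not the authors' code, and the parameter names and exponential means are my own choices): it builds an n x n weight matrix hiding a planted perfect matching, with planted and unplanted edge weights drawn from two different distributions.

```python
import random

def planted_instance(n, planted_mean=1.0, noise_mean=2.0, seed=0):
    """Generate an n x n weight matrix with a planted perfect matching.
    Weights on planted edges (i, pi[i]) and on all other edges are drawn
    from exponential distributions with different means, mimicking the
    planted linear assignment model. Parameter names are illustrative."""
    rng = random.Random(seed)
    pi = list(range(n))
    rng.shuffle(pi)  # the hidden matching: left vertex i <-> right vertex pi[i]
    # Unplanted (noise) weights everywhere first ...
    W = [[rng.expovariate(1.0 / noise_mean) for _ in range(n)] for _ in range(n)]
    # ... then overwrite the planted edges with draws from the planted distribution.
    for i in range(n):
        W[i][pi[i]] = rng.expovariate(1.0 / planted_mean)
    return W, pi

W, pi = planted_instance(5)
print(len(W), sorted(pi))  # -> 5 [0, 1, 2, 3, 4]
```

Reconstruction then means recovering `pi` from `W` alone; the abstract's phase transition concerns how well that can be done as the two distributions separate.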
From wajc at stanford.edu Thu Apr 22 08:52:34 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 22 Apr 2021 08:52:34 -0700
Subject: [theoryseminar] Theory lunch 04/22: Nathan Zixia Hu
In-Reply-To:
References:
Message-ID:
Hi all,
Reminder: theory lunch is happening later today.
Cheers,
David
On Mon, 19 Apr 2021 at 10:14, David Wajc wrote:
> Hi all,
>
> Theory lunch will take place Thursday at noon (PDT), at our gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> Nathan will tell us about: *Sampling Arborescences in Parallel*
>
> *Abstract:* We study the problem of sampling a uniformly random directed
> rooted spanning tree, also known as an arborescence, from a possibly
> weighted directed graph. Classically, this problem has long been known to
> be polynomial-time solvable; the exact number of arborescences can be
> computed by a determinant [Tut48], and sampling can be reduced to counting
> [JVV86; JS96]. However, the classic reduction from sampling to counting
> seems to be inherently sequential. This raises the question of designing
> efficient parallel algorithms for sampling. We show that sampling
> arborescences can be done in RNC.
>
> For several well-studied combinatorial structures, counting can be reduced
> to the computation of a determinant, which is known to be in NC [Csa75].
> These include arborescences, planar graph perfect matchings, Eulerian tours
> in digraphs, and determinantal point processes. However, not much is known
> about efficient parallel sampling of these structures. Our work is a step
> towards resolving this mystery.
> Cheers,
> David
>
> PS
> *Pro tip:* To join the talk (at 12:30):
> (1) go to the lecture hall,
> (2) grab a seat, and
> (3) *press X to join the zoom lecture*
>
From jneu at stanford.edu Sun Apr 25 12:34:47 2021
From: jneu at stanford.edu (Joachim Neu)
Date: Sun, 25 Apr 2021 12:34:47 -0700
Subject: [theoryseminar] Fwd: "Private Stochastic Convex Optimization" – Kunal Talwar (Thu, 29-Apr @ 4:30pm)
References: <54448aa0e5cd3b8d19b7220db5a1352af0be14bd.camel@stanford.edu>
Message-ID: <5ceb8abfbcf8699e02fb3032d296256ef8c079fb.camel@stanford.edu>
This talk might be of interest to some on this list.
 Forwarded Message 
Private Stochastic Convex Optimization
Kunal Talwar – Research Scientist, Apple
Thu, 29-Apr / 4:30pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
To avoid Zoom-bombing, we ask attendees to sign in via the above URL to
receive the Zoom meeting details by email.
Abstract
I will summarize some recent works on differentially private (DP)
algorithms for stochastic convex optimization: the problem of
minimizing the population loss given i.i.d. samples from a distribution
over convex loss functions. In the standard l2/l2 setting, we will see
two approaches to getting optimal rates for this problem. We show that
for a wide range of parameters, privacy causes no additional overhead
in accuracy or run time. In the process, we will develop techniques for
private stochastic optimization that work for other geometries. For the
LASSO setting when optimizing over the l1 ball, we will see private
algorithms that achieve optimal rates. Based on joint works with
various subsets of Hilal Asi, Raef Bassily, Vitaly Feldman, Tomer Koren
and Abhradeep Thakurta.
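The core primitive behind private optimization results like these is gradient descent on clipped, noise-perturbed gradients. The toy sketch below is my illustration, not the speaker's algorithm: the noise scale `sigma` and the other parameters are placeholders, and in any real analysis `sigma` would be calibrated to a target (epsilon, delta) privacy budget.

```python
import random

def noisy_gd(grad, x0, steps=200, lr=0.05, clip=1.0, sigma=0.1, seed=0):
    """One-dimensional gradient descent with per-step gradient clipping and
    Gaussian noise, the basic building block of DP stochastic convex
    optimization. `sigma` is an illustrative constant, not a calibrated
    privacy parameter."""
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        g = grad(x)
        g = max(-clip, min(clip, g))           # clip to bound sensitivity
        x -= lr * (g + rng.gauss(0.0, sigma))  # perturb with Gaussian noise, step
    return x

# Minimize f(x) = (x - 3)^2; the noisy iterate should land near x* = 3.
x = noisy_gd(lambda x: 2 * (x - 3), x0=0.0)
print(abs(x - 3.0) < 0.5)
```

The talk's point is that, with the right analysis, this kind of noisy procedure loses essentially nothing in accuracy or run time over a wide parameter range.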
Bio
Kunal Talwar is a Research Scientist at Apple, leading a research group
on foundations of ML and Private Data Analysis. His research interests
span various aspects of Computer Science including Differential
Privacy, Machine Learning, Algorithms and Data Structures. He got his
B.Tech. from IIT Delhi (2000) and his PhD from UC Berkeley (2004).
Prior to joining Apple, he worked at Microsoft Research in Silicon
Valley from 2005 to 2014, and at Google Brain from 2014 to 2019. He has
made major contributions to Differential Privacy, Metric Embeddings and
Discrepancy Theory. His work has been recognized by the Privacy
Enhancing Technologies award in 2009 and the ICLR Best Paper award in
2017.
This talk is hosted by the ISL Colloquium. To receive talk
announcements, subscribe to the mailing list
isl-colloq at lists.stanford.edu.
Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk: http://isl.stanford.edu/talks/talks/2021q2/kunaltalwar/
From wajc at stanford.edu Mon Apr 26 13:01:06 2021
From: wajc at stanford.edu (David Wajc)
Date: Mon, 26 Apr 2021 13:01:06 -0700
Subject: [theoryseminar] Theory Lunch 04/29: John Wright (UT Austin)
Message-ID:
Hi all,
Theory lunch will take place Thursday at noon (PDT), at our gather space:
https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
SongComplexity).
John will tell us about: MIP* = RE
*Abstract:* Quantum complexity theory gives us a powerful lens to study
basic resources and properties that arise in quantum computing. One of the
most beguiling of these is the mysterious phenomenon of quantum
entanglement, in which two farapart quantum systems can affect each other
faster than the speed of light. To study this, researchers in 2004
introduced the complexity class MIP*, which connected entanglement to a
classic notion in complexity theory known as multi-prover interactive
proofs. Since then, determining the power of MIP* has remained a major open
problem in the field of quantum complexity theory.
In this talk, I will describe recent work giving a solution to this
problem: MIP* = RE, the complexity class containing the halting problem and
those languages which reduce to it. This shows that entanglement is a
resource of almost unimaginable power, as it can be used to solve problems
which are undecidable. The proof involves new techniques that allow a
classical verifier to use entanglement to delegate increasingly complex
computations to two quantum provers. I will also describe the deep and
surprising connections that MIP* has to two other major open problems,
Tsirelson's problem from entanglement theory and Connes' embedding problem
from operator algebras, and show how this result leads to a resolution of
both problems in the negative.
This is joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and
Henry Yuen.
Cheers,
David
PS
*Pro tip:* To join the talk (at 12:30):
(1) go to the lecture hall,
(2) grab a seat, and
(3) *press X to join the zoom lecture*
From lunjia at stanford.edu Mon Apr 26 15:20:20 2021
From: lunjia at stanford.edu (Lunjia Hu)
Date: Mon, 26 Apr 2021 22:20:20 +0000
Subject: [theoryseminar] Quals Talk: the Sunflower Lemma
In-Reply-To:
References:
Message-ID:
Hi everyone!
This is just a reminder that the talk is happening tomorrow (Tue, Apr 27) at 2 pm PDT. Hope to see you there!
Zoom link: https://stanford.zoom.us/j/98064347535?pwd=Y0tkTGFJUys5b3JpWmhOUVo1Rlg2QT09
Password: 000669
Cheers,
Lunjia
________________________________
From: Lunjia Hu
Sent: Wednesday, April 21, 2021 3:50 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: Quals Talk: the Sunflower Lemma
Hi everyone,
Next week, I will be giving my quals talk on the sunflower lemma. The talk will be on Tuesday, April 27, at 2 PM PDT. Details are below.
Cheers,
Lunjia
Abstract: The sunflower lemma is a combinatorial tool with broad applications in theoretical computer science. The lemma guarantees that any large family of small sets must contain a large sunflower: a subfamily of sets with a shared "core" and disjoint "petals". The original bound in the sunflower lemma, proved by Erdős and Rado in 1960, had remained essentially the best for nearly 60 years. In 2019, the bound was significantly improved in a breakthrough work by Alweiss-Lovett-Wu-Zhang, leading to improved monotone circuit lower bounds. This talk will include 1) an introduction to the sunflower lemma, 2) an overview of the many applications of the lemma with a focus on monotone circuit complexity, and 3) an interesting "entropic" proof of the current best sunflower lemma.
Zoom link: https://stanford.zoom.us/j/98064347535?pwd=Y0tkTGFJUys5b3JpWmhOUVo1Rlg2QT09
Password: 000669
From marykw at stanford.edu Tue Apr 27 15:25:22 2021
From: marykw at stanford.edu (Mary Wootters)
Date: Tue, 27 Apr 2021 15:25:22 -0700
Subject: [theoryseminar] Fwd: NASIT 2021
In-Reply-To:
References:
Message-ID:
Hi all,
See below for an announcement for the 2021 North American School of
Information Theory.
Best,
Mary
 Forwarded message 
From: Ian F. Blake
Date: Sun, Apr 25, 2021 at 1:17 PM
Subject: NASIT 2021
To:
Dear Professor Wootters,
We hope this email finds you well!
The virtual 2021 North American School of Information Theory (NASIT 2021) is
approaching. It features top researchers presenting tutorials on important
topics for our students:

- Prof. Michelle Effros: Network Information Theory
- Prof. Negar Kiyavash: Causal Inference
- Prof. Douglas Stebila: Post-quantum Cryptography From the Learning with Errors Problem
- Prof. David Tse: Operating Blockchains at Physical Limits
- Prof. Wei Yu: Massive Random Access and Massive MIMO
- Prof. Lizhong Zheng: Understanding Deep Learning With an Information Geometric Method
This is in addition to two poster sessions for our students, to provide them
an opportunity to interact with other students and faculty and get
experience in presenting and discussing their ideas in an informal and
relaxed environment. And there is a poster award!
More details can be found here:
http://conferences.ece.ubc.ca/nasit2021/index.html. Please help us by
sharing this with your students and colleagues, and encourage your students
to submit a poster.
Note that students should register their interest to submit a poster before
May 20, and should submit their final poster before June 13. More details
can be found in the call for posters
http://conferences.ece.ubc.ca/nasit2021/poster.html.
We look forward to seeing you and your students virtually at NASIT 2021.
Lutz, Lele, Anas and Ian

Mary Wootters (she/her)
Assistant Professor of Computer Science and Electrical Engineering
Stanford University
 next part 
A non-text attachment was scrubbed...
Name: CFP 2021 IEEE NASIT.pdf
Type: application/pdf
Size: 1402139 bytes
Desc: not available
URL:
From wajc at stanford.edu Thu Apr 29 09:03:17 2021
From: wajc at stanford.edu (David Wajc)
Date: Thu, 29 Apr 2021 09:03:17 -0700
Subject: [theoryseminar] Theory Lunch 04/29: John Wright (UT Austin)
In-Reply-To:
References:
Message-ID:
Reminder: John's talk starts at 12:30, with pre-talk gather.town
socializing at noon. See you there!
Cheers,
David
On Mon, 26 Apr 2021 at 13:01, David Wajc wrote:
> Hi all,
>
> Theory lunch will take place Thursday at noon (PDT), at our gather space:
> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
> SongComplexity).
> John will tell us about: MIP* = RE
>
> *Abstract:* Quantum complexity theory gives us a powerful lens to study
> basic resources and properties that arise in quantum computing. One of the
> most beguiling of these is the mysterious phenomenon of quantum
> entanglement, in which two far-apart quantum systems can affect each other
> faster than the speed of light. To study this, researchers in 2004
> introduced the complexity class MIP*, which connected entanglement to a
> classic notion in complexity theory known as multi-prover interactive
> proofs. Since then, determining the power of MIP* has remained a major open
> problem in the field of quantum complexity theory.
>
> In this talk, I will describe recent work giving a solution to this
> problem: MIP* = RE, the complexity class containing the halting problem and
> those languages which reduce to it. This shows that entanglement is a
> resource of almost unimaginable power, as it can be used to solve problems
> which are undecidable. The proof involves new techniques that allow a
> classical verifier to use entanglement to delegate increasingly complex
> computations to two quantum provers. I will also describe the deep and
> surprising connections that MIP* has to two other major open problems,
> Tsirelson's problem from entanglement theory and Connes' embedding problem
> from operator algebras, and show how this result leads to a resolution of
> both problems in the negative.
>
> This is joint work with Zhengfeng Ji, Anand Natarajan, Thomas Vidick, and
> Henry Yuen.
>
> Cheers,
> David
>
> PS
> *Pro tip:* To join the talk (at 12:30):
> (1) go to the lecture hall,
> (2) grab a seat, and
> (3) *press X to join the zoom lecture*
>
From mqiao at stanford.edu Fri Apr 30 13:42:18 2021
From: mqiao at stanford.edu (Mingda Qiao)
Date: Fri, 30 Apr 2021 20:42:18 +0000
Subject: [theoryseminar] Quals Talk: Trace Reconstruction
Message-ID: <0D5039E3E46C41D5B273D72E8F9F2302@stanford.edu>
Hi all,
I will give my quals talk on the trace reconstruction problem on May 7 (next Friday) at 2pm PT. Details are below.
Cheers,
Mingda
Abstract: In the trace reconstruction problem, we observe "traces" of an unknown binary string. Each trace is a random subsequence obtained by independently deleting each bit with a probability of 1/2. The goal is to recover the string using as few traces as possible.
In this talk, I will survey some recent results on this problem, including (1) an exp(O(n^{1/3})) upper bound on the number of traces, obtained by Nazarov and Peres (2017) and De, O'Donnell, and Servedio (2017) simultaneously; (2) a recent exp(\tilde O(n^{1/5})) upper bound proved by Chase (2021). Both results follow from the analysis of a specific type of polynomials in the complex plane.
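The deletion-channel model in the abstract is simple to simulate. The helper below is illustrative only (not from the talk): it produces a trace by deleting each bit of the string independently with probability 1/2, keeping the survivors in order.

```python
import random

def trace(s, delete_prob=0.5, rng=random):
    """Produce one trace of the binary string `s` by deleting each bit
    independently with probability `delete_prob`, keeping the surviving
    bits in their original order (the deletion-channel model)."""
    return "".join(b for b in s if rng.random() >= delete_prob)

random.seed(0)
x = "10110100"
print([trace(x) for _ in range(3)])  # three random subsequences of x
```

Trace reconstruction asks how many such samples are needed to pin down `x` itself.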
Zoom link: https://stanford.zoom.us/j/95286429990?pwd=b3lqNlpJTHBKU0hjRkhaTXRNSlFmdz09
Password: 142857