From junyaoz at stanford.edu Sun May 1 23:35:54 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 2 May 2022 06:35:54 +0000
Subject: [theoryseminar] Theory Lunch 5/5: Frederic Koehler
Message-ID:
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Fred will tell us about: A Sampling Generalization of Dense MAXCUT
Abstract: A classic result from the 90's says MAXCUT is polynomial-time approximable within (1 - \epsilon) multiplicative error on dense graphs, even though it is hard on sparse graphs. Sampling from a dense/low-threshold-rank Ising model is a version of this problem that essentially (1) replaces 'max' by 'softmax' and (2) requires an approximation with small additive error. While interesting structural results were known for such models, it remained unclear whether the sampling problem was polynomial-time tractable. We discuss a new algorithm which can indeed sample and (as a special case) recovers the approximability of dense MAXCUT.
Based on a joint work with Holden Lee and Andrej Risteski.
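For readers who want a concrete picture, sampling an Ising model is classically done with Glauber dynamics; below is a minimal, illustrative Python sketch (the couplings, inverse temperature, and step count are invented for the example, and this is not the algorithm from the talk, which targets the dense/low-threshold-rank regime):

```python
import math
import random

def glauber_sample(J, beta=1.0, steps=20000, seed=0):
    """Draw one approximate Ising-model sample via Glauber dynamics.
    J is a symmetric coupling matrix with zero diagonal."""
    rng = random.Random(seed)
    n = len(J)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        # local field felt by spin i from all other spins
        h = sum(J[i][j] * spins[j] for j in range(n) if j != i)
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
        spins[i] = 1 if rng.random() < p_up else -1
    return spins

# toy dense ferromagnet on 10 vertices: aligned configurations are favored
n = 10
J = [[0.5 if i != j else 0.0 for j in range(n)] for i in range(n)]
s = glauber_sample(J)
print(sum(s))  # total magnetization of the sampled configuration
```

With these toy ferromagnetic couplings the chain tends toward mostly-aligned spins; on dense models like this, the talk's question is how fast such sampling can be made provably accurate.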
Cheers,
Junyao
From tavorb at stanford.edu Mon May 2 11:49:38 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 2 May 2022 11:49:38 -0700
Subject: [theoryseminar] "The online convex optimization approach to control" – Elad Hazan (Thu, 5-May @ 4:00pm)
Message-ID:
The online convex optimization approach to control
Elad Hazan – Professor, Princeton
Thu, 5-May / 4:00pm / Packard 101 (in person)
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on Zoom
for those unable to attend in
person: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
In this talk we will discuss an emerging paradigm in differentiable
reinforcement learning called "nonstochastic control". The new approach
applies techniques from online convex optimization and convex relaxations
to obtain new methods with provable guarantees for classical settings in
optimal and robust control. We will discuss recent extensions to nonlinear
adaptive control and planning.
No background is required for this talk, and relevant materials can be
found here.
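As background for the online convex optimization framing, the basic primitive is projected online gradient descent; here is a hedged toy sketch in Python (the quadratic losses, drifting centers, and step size are made up for illustration, and the talk's control methods are far more elaborate):

```python
import random

def online_gradient_descent(grad_fns, x0, eta):
    """Projected online gradient descent over the unit ball, the basic
    OCO primitive. grad_fns[t] returns the gradient of round t's loss."""
    x = list(x0)
    iterates = []
    for grad in grad_fns:
        iterates.append(list(x))
        g = grad(x)
        x = [xi - eta * gi for xi, gi in zip(x, g)]
        norm = sum(xi * xi for xi in x) ** 0.5
        if norm > 1.0:  # project back onto the feasible set (unit ball)
            x = [xi / norm for xi in x]
    return iterates

# toy losses f_t(x) = ||x - c_t||^2 with small, drifting centers c_t
rng = random.Random(0)
centers = [(0.1 * rng.gauss(0, 1), 0.1 * rng.gauss(0, 1)) for _ in range(200)]
grad_fns = [(lambda x, c=c: [2.0 * (x[0] - c[0]), 2.0 * (x[1] - c[1])])
            for c in centers]
xs = online_gradient_descent(grad_fns, [1.0, 1.0], eta=0.05)
print(xs[-1])  # the iterate drifts from (1, 1) toward the centers near 0
```

The nonstochastic control viewpoint runs a learner like this against adversarially chosen losses derived from the control costs, and measures performance by regret rather than by stochastic optimality.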
Bio
Elad Hazan is a professor of computer science at Princeton University. His
research focuses on the design and analysis of algorithms for basic
problems in machine learning and optimization. Amongst his contributions
are the co-invention of the AdaGrad algorithm for deep learning, and the
first sublinear-time algorithms for convex optimization. He is the
recipient of the Bell Labs Prize, the IBM Goldberg Best Paper Award twice
(in 2012 and 2008), a European Research Council grant, a Marie Curie
fellowship, the Google Research Award twice, and an ACM fellowship. He served
on the steering committee of the Association for Computational Learning and
has been program chair for COLT 2015. He is the co-founder and director of
Google AI Princeton.
This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list isl-colloq at lists.stanford.edu.

Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2022q2/eladhazan/
From tavorb at stanford.edu Wed May 4 20:04:49 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Wed, 4 May 2022 20:04:49 -0700
Subject: [theoryseminar] "The online convex optimization approach to control" – Elad Hazan (Thu, 5-May @ 4:00pm)
In-Reply-To:
References:
Message-ID:
Reminder: this talk will be tomorrow (Thursday) at 4pm in Packard 101. If
you would like to meet with the speaker, please sign up here:
https://docs.google.com/spreadsheets/d/1AHVKmaby3AGHTMgADqQeoAFsZWXYV8gYkcpuXxlQeQ/edit#gid=76119083
Please join us for snacks at 3:30pm in the Grove outside Packard before the
talk.
From yeganeh at stanford.edu Thu May 5 10:22:39 2022
From: yeganeh at stanford.edu (Yeganeh Ali Mohammadi)
Date: Thu, 5 May 2022 17:22:39 +0000
Subject: [theoryseminar] Fw: OR student seminar 5/5 - Improved Online
Contention Resolution for Matchings and Applications to the Gig Economy
In-Reply-To:
References:
Message-ID:
Hi all,
Tristan and Mohammad are giving a talk today at the OR student seminar on online matching.
It might be of interest to some of you.
Location: Y2E2 101
Time: 5-6pm
________________________________
From: orseminars on behalf of Bryce McLaughlin
Sent: Thursday, May 5, 2022 10:11 AM
To: orseminars at lists.stanford.edu ; studentorseminar at lists.stanford.edu
Subject: Re: OR student seminar 5/5 - Improved Online Contention Resolution for Matchings and Applications to the Gig Economy
Hi all!
A reminder that Tristan and Mohammad will be presenting their work at 5pm in Y2E2 101.
Hope to see you there.
Best,
Bryce
On Thu, Apr 28, 2022 at 6:53 PM Bryce McLaughlin wrote:
Hi all,
Next week we will be continuing the OR student seminar on Thursday at 5pm, where Tristan and Mohammad will be presenting their work, "Improved Online Contention Resolution for Matchings and Applications to the Gig Economy," in person (location TBD).
Please note the new time.
Title: Improved Online Contention Resolution for Matchings and Applications to the Gig Economy
Abstract:
Motivated by applications in the gig economy, we study approximation algorithms for a sequential pricing problem. The input is a bipartite graph G=(I,J,E) between individuals I and jobs J. The platform has a value of v_j for matching job j to an individual worker. To find a matching, the platform can consider the edges (i, j) in any order and make worker i a one-time take-it-or-leave-it offer to complete the job j at a price of the platform's choosing, after which the worker accepts with a known probability. What is the best way to make offers to maximize revenue and/or social welfare?
The optimal algorithm is known to be NP-hard to compute (even if there is only a single job). With this in mind, we design efficient approximations to the optimal policy via a new Random-Order Online Contention Resolution Scheme (RO-OCRS) for matching. Our main result is a 0.456-balanced RO-OCRS in bipartite graphs and a 0.45-balanced RO-OCRS in general graphs. These algorithms improve on the recent bound of 0.432 of Brubach et al., and improve on the best-known lower bounds for the correlation gap of matching, despite applying to a significantly more restrictive setting. From this we obtain a 0.456-approximate algorithm for the sequential pricing problem.
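To make the setup concrete, here is a toy Python simulation of the sequential offer process under a naive random edge order (the instance, job values, and acceptance probabilities are invented, and this baseline is not the contention resolution scheme from the talk):

```python
import random

def sequential_pricing(edges, values, accept_prob, seed=0):
    """Toy simulation of the sequential offer process: scan edges in a
    random order, make each still-free worker a one-time offer for the
    still-open job, and match on acceptance. Revenue here is just the
    matched jobs' values; a naive baseline, not the talk's scheme."""
    rng = random.Random(seed)
    order = list(edges)
    rng.shuffle(order)
    matched_workers, matched_jobs = set(), set()
    revenue = 0.0
    for (i, j) in order:
        if i in matched_workers or j in matched_jobs:
            continue  # offers only go to free workers for open jobs
        if rng.random() < accept_prob[(i, j)]:
            matched_workers.add(i)
            matched_jobs.add(j)
            revenue += values[j]
    return revenue

edges = [(0, 0), (0, 1), (1, 0), (1, 1)]   # complete 2x2 bipartite graph
values = {0: 3.0, 1: 1.0}                  # platform's value per job
accept = {e: 0.5 for e in edges}           # known acceptance probabilities
print(sequential_pricing(edges, values, accept))
```

A contention resolution scheme refines exactly this kind of process: it decides which realized acceptances to keep so that every edge is matched with a guaranteed fraction of its fractional-relaxation probability.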
As always, if you plan on attending in person, please fill in this brief form before Tuesday; we need headcounts before ordering, for funding purposes.
Finally, we would like to thank the generosity of the Management Science and Engineering department, the Operations, Information, and Technology group at the Business School, Infanger Investment Technology, and the DienerVeinott family for supporting this event.
Hope to see you at this seminar and in the many soon to follow.
Best,
Bryce
From junyaoz at stanford.edu Wed May 4 23:04:03 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 5 May 2022 06:04:03 +0000
Subject: [theoryseminar] Theory Lunch 5/5: Frederic Koehler
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes.
From moses at cs.stanford.edu Fri May 6 11:02:16 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 6 May 2022 11:02:16 -0700
Subject: [theoryseminar] Kahn-Kalai conjecture proof: Wed May 11, 2:30pm
Message-ID:
Theory friends,
For those of you who missed Huy Pham's combinatorics seminar talk last week
about the breakthrough result on the Kahn-Kalai conjecture, as well as those
who attended and would like to hear more details ... we've arranged for him
to give a two-hour theory seminar next week so he can go over the short and
elegant proof. The talk will be self-contained. Abstract below.
When: Wed May 11, 2:30-4:30pm
Where: Gates 415
You can read about the result in the Math dept news blurb
and the Quanta magazine article.
Cheers,
Moses
Title: A proof of the Kahn-Kalai conjecture
Abstract: In this session, I will describe in detail the proof of the
Kahn-Kalai conjecture in the recent joint work with Jinyoung Park. This
conjecture concerns the threshold of an increasing boolean function (or of
an increasing graph property), which is the density at which a random set
(or a random graph) transitions from unlikely to likely satisfying the
function (or property). Kahn and Kalai conjectured that for any nontrivial
increasing property on a finite set, its threshold is never far from its
"expectation-threshold," which is a natural (and often easy to calculate)
lower bound on the threshold. The Kahn-Kalai conjecture directly implies a
number of difficult results in probabilistic combinatorics and random graph
theory, such as Shamir's problem on hypergraph matchings, or the threshold
for containing a bounded-degree spanning tree.
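As a concrete illustration of what a threshold is (not of the proof itself), the following toy Python experiment estimates how the probability that G(n, p) is connected jumps as p crosses the classical ln(n)/n connectivity threshold; the parameters and trial count are chosen only for the demo:

```python
import math
import random

def connected(n, p, rng):
    """Sample G(n, p) and check connectivity with a depth-first search."""
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

def prob_connected(n, p, trials=200, seed=0):
    rng = random.Random(seed)
    return sum(connected(n, p, rng) for _ in range(trials)) / trials

n = 60  # connectivity threshold for G(n, p) sits at p = ln(n)/n
lo = prob_connected(n, 0.3 * math.log(n) / n)  # well below threshold
hi = prob_connected(n, 3.0 * math.log(n) / n)  # well above threshold
print(lo, hi)  # the probability jumps sharply across the threshold
```

The Kahn-Kalai conjecture says that, up to a logarithmic factor, such thresholds are pinned down by the much easier-to-compute expectation-threshold.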
From jneu at stanford.edu Sun May 8 20:57:02 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Sun, 08 May 2022 20:57:02 -0700
Subject: [theoryseminar] "Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via Obfuscation" – Amit Sahai (Thu, 12-May @ 4:00pm)
Message-ID: <8d10110fa61c5b90038db21fccad9bb8b250482c.camel@stanford.edu>
Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via Obfuscation
Amit Sahai – Professor, UCLA
Thu, 12-May / 4:00pm / Packard 101 (in person)
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on
Zoom for those unable to attend in person:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables Alice
to reliably communicate a message m to an honest Bob by sending an
encoding c over a noisy channel chB, while at the same time hiding m
from Eve who receives c over another noisy channel chE.
Wiretap coding is clearly impossible when chB is a degraded version of
chE, in the sense that the output of chB can be simulated using only
the output of chE. A classic work of Csiszár and Körner (IEEE Trans.
Inf. Theory, 1978) shows that the converse does not hold. This follows
from their full characterization of the channel pairs (chB, chE) that
enable informationtheoretic wiretap coding.
In this work, we show that in fact the converse does hold when
considering computational security; that is, wiretap coding against a
computationally bounded Eve is possible if and only if chB is not a
degraded version of chE. Our construction assumes the existence of
virtual black-box (VBB) obfuscation of specific classes of "evasive"
functions that generalize fuzzy point functions, and can be
heuristically instantiated using indistinguishability obfuscation.
Finally, our solution has the appealing feature of being universal in
the sense that Alice's algorithm depends only on chB and not on chE.
Joint work with Yuval Ishai, Alexis Korb, and Paul Lou.
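For intuition about degradation, here is a toy Python check (parameters invented; this only illustrates the degraded-channel notion, not the paper's construction): composing two binary symmetric channels gives BSC(p) then BSC(q) = BSC(p + q - 2pq), so a noisier BSC can be simulated from a less noisy one's output, which is exactly the degraded case in which wiretap coding is impossible.

```python
import random

def bsc(bit, p, rng):
    """Binary symmetric channel: flip the input bit with probability p."""
    return bit ^ (1 if rng.random() < p else 0)

rng = random.Random(0)
p_eve, extra = 0.10, 0.15   # Eve's flip rate; extra noise defining Bob
trials = 100_000
flips = 0
for _ in range(trials):
    eve_out = bsc(0, p_eve, rng)        # Eve's observation of the bit 0
    bob_out = bsc(eve_out, extra, rng)  # Bob simulated FROM Eve's output
    flips += bob_out
empirical = flips / trials
predicted = p_eve + extra - 2 * p_eve * extra  # BSC(p) then BSC(q) = BSC(p+q-2pq)
print(empirical, predicted)  # the two flip rates agree closely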
Bio
Amit Sahai is a Simons Investigator (2021), Fellow of the ACM (2018)
and a Fellow of the IACR (2019). He is also a Fellow of the Royal
Society of Arts (2021), and Advisor to the Prison Mathematics Project.
He is the incumbent of the Symantec Endowed Chair in Computer Science.
He received his Ph.D. in Computer Science from MIT in 2000. From 2000
to 2004, he was on the faculty at Princeton University; in 2004 he
joined the UCLA Samueli School of Engineering, where he currently holds
the position of Professor of Computer Science. He serves as an editor
of J. Cryptology (SpringerNature). His research interests are in
security and cryptography, and theoretical computer science more
broadly. He is the co-inventor of Attribute-Based Encryption,
Functional Encryption, and Indistinguishability Obfuscation. He has
published more than 150 original technical research papers at venues
such as the ACM Symposium on Theory of Computing (STOC), CRYPTO, and
the Journal of the ACM. He has given a number of invited talks at
institutions such as MIT, Stanford, and Berkeley, including the 2004
Distinguished Cryptographer Lecture Series at NTT Labs, Japan.
Professor Sahai is the recipient of numerous honors; he was named an
Alfred P. Sloan Foundation Research Fellow in 2002, received an Okawa
Research Grant Award in 2007, a Xerox Foundation Faculty Award in 2010,
a Google Faculty Research Award in 2010, a 2012 Pazy Memorial Award, a
2016 ACM CCS Test of Time Award, a 2019 AWS Machine Learning Research
Award, a 2020 IACR Test of Time Award (Eurocrypt), and a STOC 2021 Best
Paper Award. For his contributions to the conception and development of
indistinguishability obfuscation, he was awarded the 2022 Held Prize by
the National Academy of Sciences. For his teaching, he was given the
2016 Lockheed Martin Excellence in Teaching Award from the Samueli
School of Engineering at UCLA. His research has been covered by several
news agencies including the BBC World Service, Quanta Magazine, Wired,
and IEEE Spectrum.
This talk is hosted by the ISL Colloquium. To receive talk
announcements, subscribe to the mailing list isl-colloq at lists.stanford.edu.
Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2022q2/amitsahai/
From junyaoz at stanford.edu Sun May 8 20:30:19 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 9 May 2022 03:30:19 +0000
Subject: [theoryseminar] Theory Lunch 5/12: Max Hopkins (UCSD)
Message-ID:
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Max will tell us about: Realizable Learning is All You Need
Abstract: The equivalence of realizable and agnostic learning in Valiant's and Vapnik and Chervonenkis' Probably Approximately Correct (PAC) learning model is one of the most classical results in learning theory, dating all the way back to the latter pair's 1974 book on the theory of pattern recognition. Roughly speaking, this surprising equivalence states that given a set X and a family of binary classifiers H, the ability to learn a classifier h ∈ H from labeled examples of the form (x, h(x)) is in fact sufficient for a (seemingly) much harder task: given samples from any distribution D over X × {0, 1}, find the best approximation to D in H.
Traditionally, the proof of this fact is complicated and brittle, relying on a third-party equivalence with a strong property called uniform convergence. In this talk, we review the basic definitions of realizable and agnostic PAC learning and give an elementary, 'model-independent' proof of their equivalence via a direct black-box reduction. Time permitting, we'll also discuss some new implications of this framework beyond the original PAC model.
Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.
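For a concrete feel of the realizable setting, here is a toy Python sketch of empirical risk minimization over 1-D threshold classifiers (the hypothesis class, data distribution, and sample size are invented for illustration; the talk's reduction treats a learner like this as a black box):

```python
import random

def erm_threshold(samples):
    """Empirical risk minimization over 1-D threshold classifiers
    h_t(x) = 1[x >= t]. Toy learner; not the reduction from the talk."""
    candidates = sorted({x for x, _ in samples}) + [float("inf")]

    def err(t):
        return sum(((1 if x >= t else 0) != y) for x, y in samples)

    return min(candidates, key=err)

# realizable data: labels come exactly from the true threshold t* = 0.5
rng = random.Random(0)
data = [(x, 1 if x >= 0.5 else 0) for x in (rng.random() for _ in range(200))]
t = erm_threshold(data)
print(t)  # the learned threshold lands near 0.5
```

In the agnostic setting the labels need not come from any threshold at all, and the equivalence says that a learner succeeding on realizable data like this can still be converted into one competing with the best threshold.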
Cheers,
Junyao
From abouland at stanford.edu Mon May 9 16:54:32 2022
From: abouland at stanford.edu (Adam Bouland)
Date: Mon, 9 May 2022 16:54:32 -0700
Subject: [theoryseminar] Simons Quantum Colloquium tomorrow (Tues,
May 10) by Leonard Susskind, followed by panel featuring Scott Aaronson,
Geoffrey Penington, and Edward Witten
Message-ID:
FYI Lenny Susskind is giving a Simons Colloquium tomorrow on complexity
theory and black holes, which may be of interest!
Best,
Adam

From: Simons Institute Events
Date: May 9, 2022 at 2:18:08 PM PDT
To: undisclosed-recipients:;
Subject: [theoryfaculty] [theoryannounce] Quantum Colloquium tomorrow
(Tues, May 10) by Leonard Susskind, followed by panel featuring Scott
Aaronson, Geoffrey Penington, and Edward Witten
Reply-To: Simons Institute Events
Hello,
The Simons Institute would like to invite the members of your listserv to
join us on Zoom for our final Quantum Colloquium of the semester as we
believe the topic may be of interest. Our final Quantum Colloquium of the
semester is taking place tomorrow, Tuesday, May 10. Leonard Susskind
(Stanford University) will present a talk on "Black Holes and the
Quantum-Extended Church-Turing Thesis" starting at 11 a.m. Pacific
Time. Please see below for the full title and abstract details.
Following the colloquium at 12 p.m. Pacific Time, we are pleased to present
a panel discussion featuring Scott Aaronson (UT Austin), Geoffrey Penington
(UC Berkeley), and Edward Witten (IAS). Moderated by Umesh Vazirani (UC
Berkeley).
Further details about the colloquium can be viewed here:
https://simons.berkeley.edu/events/quantumcolloquium
Public Zoom webinar link: https://berkeley.zoom.us/j/95040632440
We hope to see you there!
Umesh Vazirani
Title: Black Holes and the Quantum-Extended Church-Turing Thesis
Speaker: Leonard Susskind (Stanford University)
Abstract: A few years ago three computer scientists named Adam Bouland,
Bill Fefferman, and Umesh Vazirani wrote a paper that promises to
radically change the way we think about the interiors of black holes.
Inspired by their paper, I will explain how black holes threaten the QECTT,
how the properties of horizons rescue the thesis, and how this eventually
makes predictions for the complexity of extracting information from behind
the black hole horizon. I'll try my best to explain enough about black
holes to keep the lecture self-contained.
Best regards,

Events Team
Simons Institute for the Theory of Computing
Melvin Calvin Lab, UC Berkeley
From moses at cs.stanford.edu Wed May 11 08:27:56 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Wed, 11 May 2022 08:27:56 -0700
Subject: [theoryseminar] Kahn-Kalai conjecture proof: today, 2:30pm
In-Reply-To:
References:
Message-ID:
Theory friends,
Quick reminder that we have a special theory seminar by Huy Pham today,
2:30-4:30pm in Gates 415, on the proof of the Kahn-Kalai conjecture. See
details below.
A whiteboard talk is much better in person, but for those who would like
to follow along remotely, here is a zoom link:
https://stanford.zoom.us/j/98564827425?pwd=QjZkS2RCclBLekpLZmczeGxKeTF2dz09
Cheers,
Moses
From jneu at stanford.edu Thu May 12 08:56:15 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Thu, 12 May 2022 08:56:15 -0700
Subject: [theoryseminar] "Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via Obfuscation" – Amit Sahai (Thu, 12-May @ 4:00pm)
In-Reply-To: <8d10110fa61c5b90038db21fccad9bb8b250482c.camel@stanford.edu>
References: <8d10110fa61c5b90038db21fccad9bb8b250482c.camel@stanford.edu>
Message-ID: <2203a14bce73b6649dd25665045ee0370f64a197.camel@stanford.edu>
Reminder: This talk is today at 4pm in Packard 101. (Also streamed on
Zoom, see link below.)
On Sun, 20220508 at 20:57 0700, Joachim Neu wrote:
> Beyond the Csisz?rK?rner Bound: BestPossible Wiretap Coding via Obfuscation
> Amit Sahai ? Professor, UCLA
> Thu, 12May / 4:00pm / Packard 101 (in person)
> Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). The talk will be streamed on
> Zoom for those unable to attend in person:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> Abstract
> A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables
> Alice to reliably communicate a message m to an honest Bob by sending
> an encoding c over a noisy channel chB, while at the same time hiding
> m from Eve who receives c over another noisy channel chE.
> Wiretap coding is clearly impossible when chB is a degraded version
> of chE, in the sense that the output of chB can be simulated using
> only the output of chE. A classic work of Csisz?r and K?rner (IEEE
> Trans. Inf. Theory, 1978) shows that the converse does not hold. This
> follows from their full characterization of the channel pairs (chB,
> chE) that enable informationtheoretic wiretap coding.
> In this work, we show that in fact the converse does hold when
> considering computational security; that is, wiretap coding against a
> computationally bounded Eve is possible if and only if chB is not a
> degraded version of chE. Our construction assumes the existence of
> virtual blackbox (VBB) obfuscation of specific classes of ``evasive?
> functions that generalize fuzzy point functions, and can be
> heuristically instantiated using indistinguishability obfuscation.
> Finally, our solution has the appealing feature of being universal in
> the sense that Alice?s algorithm depends only on chB and not on chE.
> Joint work with Yuval Ishai, Alexis Korb, and Paul Lou.
> Bio
> Amit Sahai is a Simons Investigator (2021), Fellow of the ACM (2018)
> and a Fellow of the IACR (2019). He is also a Fellow of the Royal
> Society of Arts (2021), and Advisor to the Prison Mathematics
> Project. He is the incumbent of the Symantec Endowed Chair in
> Computer Science. He received his Ph.D. in Computer Science from MIT
> in 2000. From 2000 to 2004, he was on the faculty at Princeton
> University; in 2004 he joined the UCLA Samueli School of Engineering,
> where he currently holds the position of Professor of Computer
> Science. He serves as an editor of J. Cryptology (SpringerNature).
> His research interests are in security and cryptography, and
> theoretical computer science more broadly. He is the coinventor of
> AttributeBased Encryption, Functional Encryption, and
> Indistinguishability Obfuscation. He has published more than 150
> original technical research papers at venues such as the ACM
> Symposium on Theory of Computing (STOC), CRYPTO, and the Journal of
> the ACM. He has given a number of invited talks at institutions such
> as MIT, Stanford, and Berkeley, including the 2004 Distinguished
> Cryptographer Lecture Series at NTT Labs, Japan. Professor Sahai is
> the recipient of numerous honors; he was named an Alfred P. Sloan
> Foundation Research Fellow in 2002, received an Okawa Research Grant
> Award in 2007, a Xerox Foundation Faculty Award in 2010, a Google
> Faculty Research Award in 2010, a 2012 Pazy Memorial Award, a 2016
> ACM CCS Test of Time Award, a 2019 AWS Machine Learning Research
> Award, a 2020 IACR Test of Time Award (Eurocrypt), and a STOC 2021
> Best Paper Award. For his contributions to the conception and
> development of indistinguishability obfuscation, he was awarded the
> 2022 Held Prize by the National Academy of Sciences. For his
> teaching, he was given the 2016 Lockheed Martin Excellence in
> Teaching Award from the Samueli School of Engineering at UCLA. His
> research has been covered by several news agencies including the BBC
> World Service, Quanta Magazine, Wired, and IEEE Spectrum.
> This talk is hosted by the ISL Colloquium. To receive talk
> announcements, subscribe to the mailing list
> islcolloq at lists.stanford.edu.
>
> Mailing list:
> https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2022q2/amitsahai/
From junyaoz at stanford.edu Wed May 11 22:21:04 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 12 May 2022 05:21:04 +0000
Subject: [theoryseminar] Theory Lunch 5/12: Max Hopkins (UCSD)
InReplyTo:
References:
MessageID:
A gentle reminder: This is happening in 10 minutes.
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Sunday, May 8, 2022 8:30 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 5/12: Max Hopkins (UCSD)
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Max will tell us about: Realizable Learning is All You Need
Abstract: The equivalence of realizable and agnostic learning in Valiant and Vapnik and Chervonenkis' Probably Approximately Correct (PAC)-learning model is one of the most classical results in learning theory, dating all the way back to the latters' 1974 book on the theory of pattern recognition. Roughly speaking, this surprising equivalence states that given a set X and a family of binary classifiers H, the ability to learn a classifier h ∈ H from labeled examples of the form (x, h(x)) is in fact sufficient for a (seemingly) much harder task: given samples from any distribution D over X × {0, 1}, find the best approximation to D in H.
Traditionally, the proof of this fact is complicated and brittle, relying on a third-party equivalence with a strong property called uniform convergence. In this talk, we review the basic definitions of realizable and agnostic PAC-learning and give an elementary, 'model-independent' proof of their equivalence via a direct black-box reduction. Time permitting, we'll also discuss some new implications of this framework beyond the original PAC model.
Based on joint work with Daniel Kane, Shachar Lovett, and Gaurav Mahajan.
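As a toy illustration of the two settings in the abstract (a minimal sketch; this is not the black-box reduction from the talk), take a finite class of threshold classifiers and run empirical risk minimization. In the realizable case ERM achieves zero sample error; in the agnostic case it still finds (approximately) the best classifier in the class:

```python
import random

# Hypothesis class H of threshold classifiers h_t(x) = [x >= t] over X = {0, ..., 99}.
H = list(range(0, 101))

def h(t, x):
    return int(x >= t)

def erm(samples):
    """Empirical risk minimization: pick the threshold with fewest mistakes."""
    return min(H, key=lambda t: sum(h(t, x) != y for x, y in samples))

rng = random.Random(1)

# Realizable setting: labels come from some h_t* in H, so ERM achieves zero error.
t_star = 40
realizable = [(x, h(t_star, x)) for x in (rng.randrange(100) for _ in range(2000))]
assert all(h(erm(realizable), x) == y for x, y in realizable)

# Agnostic setting: ~10% of labels are flipped; no hypothesis is perfect, but ERM
# still gets close to the best achievable error in H (the noise floor).
noisy = [(x, y ^ (rng.random() < 0.1)) for x, y in realizable]
err = sum(h(erm(noisy), x) != y for x, y in noisy) / len(noisy)
assert err < 0.15  # near the ~10% noise level
```

The class, domain, and noise rate are toy choices for the sketch; the talk's point is that realizable learnability alone implies the agnostic guarantee, for general classes.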
Cheers,
Junyao
From mdharris at stanford.edu Thu May 12 11:58:14 2022
From: mdharris at stanford.edu (Megan D. Harris)
Date: Thu, 12 May 2022 18:58:14 +0000
Subject: [theoryseminar] Theory Lunch 5/12: Max Hopkins (UCSD)
InReplyTo:
References:
MessageID:
Also, no vegetarian option; I made a mistake on the order. There is a salad, but it has chicken and bacon on it; dressing is on the side (you can remove the meat).
Best,
Megan Denise Harris | Faculty Administrator | Computer Science (Gates Building) |
353 Serra Mall, Rm 187, Stanford, CA 94305 |
Office Phone 650.723.1658 | Cell 206.313.1390 |
Campus Days: Monday and Thursday 7AM-3:30PM
From aviad.rubinstein at gmail.com Thu May 12 13:28:05 2022
From: aviad.rubinstein at gmail.com (Aviad Rubinstein)
Date: Thu, 12 May 2022 13:28:05 -0700
Subject: [theoryseminar] Fwd: Economics Seminar on Wednesday, May 18,
2022 with Professor Yannai Gonczarowski
InReplyTo:
References:
MessageID:
This talk may be of interest to many on this list.
Cheers,
Aviad
 Forwarded message 
From: Rochelle Bagalso
Date: Thu, 12 May 2022 at 10:48
Subject: Economics Seminar on Wednesday, May 18, 2022 with Professor Yannai
Gonczarowski
To:
Cc: Giselle Alvarez, Gabriel Lozano <glozano at stanford.edu>, Rochelle Bagalso
The Economics seminar guest speaker on Wednesday, May 18th is Professor
Yannai Gonczarowski from Harvard University. The title of the paper
presentation is *Self-Explanatory Strategyproof Mechanisms*. The abstract
of the paper is attached.
The hybrid seminar is scheduled from 3:45-5:00 p.m. in C105. A Zoom link
is provided here for those joining remotely, or see full details below.
If you would like to meet with Professor Gonczarowski, please click here
to select an available meeting time.
For those attending the seminar in C105, please review the guidelines
below. Please note that a Google form will be sent out in the morning of
the day of the seminar. All who will attend in person will need to sign
in prior to the seminar.

- Staff and student attendees must complete the HealthCheck daily health
attestation and COVID-19 testing requirements, where applicable.
- Face coverings are no longer required at Stanford but strongly
recommended. See Health Alerts for the most up-to-date policies.
- Visitors: Non-Stanford affiliates (i.e. lecturers, guest speakers,
alumni) must follow the university Visitor policy.
Thank you,
Rochelle

Join from PC, Mac, Linux, iOS or Android:
https://stanford.zoom.us/j/96553591190?pwd=blBLRUVUc1RaUFdlSGtYNWZ6YWVtZz09
Password: 254675
*Rochelle Bagalso*
Program Coordinator, Faculty Support Team
*Stanford Graduate School of Business*
Knight Management Center
Stanford University
655 Knight Way, W308B
Stanford, CA 94305-7298
Phone (650) 736-2237
*Change Lives. Change Organizations. Change the World.*
From vaggos at stanford.edu Fri May 13 07:59:26 2022
From: vaggos at stanford.edu (Vaggos Chatziafratis)
Date: Fri, 13 May 2022 14:59:26 +0000
Subject: [theoryseminar] TOCA-SV is back (May 20 at Stanford)!
InReplyTo:
References:
MessageID:
Hello Soheil,
Do you happen to know whether the TOCA-SV talks will be broadcast over Zoom as well?
I will be there, but I know several people from MIT and UC Irvine who expressed interest in the event and cannot come, so I thought I'd ask.
Thanks for organizing,
Vaggos
Get Outlook for iOS
________________________________
From: theoryseminar on behalf of Soheil Behnezhad
Sent: Thursday, April 21, 2022 2:09:01 PM
To: theoryseminar at lists.stanford.edu
Subject: Re: [theoryseminar] TOCASV is back (May 20 at Stanford)!
Hi everyone,
This is a quick reminder to please register for TOCASV (happening May 20th) if you are planning to attend.
Best,
Soheil
________________________________
From: Soheil Behnezhad
Sent: Saturday, April 16, 2022 5:28 PM
To: theoryseminar at lists.stanford.edu
Subject: TOCASV is back (May 20 at Stanford)!
Hi,
For the past several years, we had a (bi)annual event, held alternately at Stanford and Google, called TOCA-SV, for Bay Area theoreticians to meet. This was on pause for the last two years during the pandemic, but we will revive the tradition this year at Stanford on May 20th. It is going to be an in-person event with talks, posters, and food. Registration is free but required. Please register here by April 22nd. We will announce the program soon, but if you haven't attended TOCA-SV before, see the links on the TOCA-SV'22 website to the previous events to get an idea of what to expect.
Everyone, especially students, is welcome and encouraged to present posters of their work. If you are planning to present a poster, indicate so in the registration form so we can make sure to have a poster stand secured for you.
Mark your calendars and stay tuned for more info. We look forward to seeing everyone on May 20 at Stanford!
Best,
Soheil Behnezhad
From moses at cs.stanford.edu Sat May 14 12:07:58 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Sat, 14 May 2022 12:07:58 0700
Subject: [theoryseminar] Kahn-Kalai conjecture proof: today, 2:30pm
InReplyTo:
References:
MessageID:
Folks, for those who could not make it, a recording of this talk is
available here:
https://youtu.be/ElaaV3OLD9I
Cheers,
Moses
On Wed, May 11, 2022 at 8:27 AM Moses Charikar
wrote:
> Theory friends,
>
> Quick reminder that we have a special theory seminar by Huy Pham today,
> 2:30-4:30pm in Gates 415, on the proof of the Kahn-Kalai conjecture. See
> details below.
>
> A whiteboard talk is much better in person, but for those who would like
> to follow along remotely, here is a zoom link:
> https://stanford.zoom.us/j/98564827425?pwd=QjZkS2RCclBLekpLZmczeGxKeTF2dz09
>
> Cheers,
> Moses
>
> On Fri, May 6, 2022 at 11:02 AM Moses Charikar
> wrote:
>
>> Theory friends,
>>
>> For those of you who missed Huy Pham's combinatorics seminar talk last
>> week about the breakthrough result on the Kahn-Kalai conjecture, as well as
>> those who attended and would like to hear more details ... we've arranged
>> for him to give a 2-hour theory seminar next week so he can go over the
>> short and elegant proof. The talk will be self-contained. Abstract below.
>>
>> When: Wed May 11, 2:30-4:30pm
>> Where: Gates 415
>>
>> You can read about the result in the Math dept news blurb and the
>> Quanta magazine article.
>>
>> Cheers,
>> Moses
>>
>> Title: A proof of the Kahn-Kalai conjecture
>>
>> Abstract: In this session, I will describe in detail the proof of the
>> Kahn-Kalai conjecture in the recent joint work with Jinyoung Park. This
>> conjecture concerns the threshold of an increasing boolean function (or of
>> an increasing graph property), which is the density at which a random set
>> (or a random graph) transitions from unlikely to likely satisfying the
>> function (or property). Kahn and Kalai conjectured that for any
>> nontrivial increasing property on a finite set, its threshold is never
>> far from its "expectation-threshold," which is a natural (and often easy to
>> calculate) lower bound on the threshold. The Kahn-Kalai conjecture directly
>> implies a number of difficult results in probabilistic combinatorics and
>> random graph theory, such as Shamir's problem on hypergraph matchings, or
>> the threshold for containing a bounded-degree spanning tree.
>>
>>
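To make the gap between the threshold and the expectation-threshold concrete, here is a standard worked instance (added for illustration; it is not taken from the announcement):

```latex
% F = "G(n,p) contains a perfect matching", n even.
\[
  q(F) = \Theta(1/n)
  \quad\text{(expectation-threshold: the expected count $(n-1)!!\,p^{n/2}$
  is already large at $p = C/n$)},
\]
\[
  p_c(F) = \Theta\!\left(\frac{\log n}{n}\right)
  \quad\text{(true threshold: isolated vertices persist below it)}.
\]
% The Kahn-Kalai conjecture (now a theorem of Park and Pham) says this
% logarithmic gap is the worst case for every nontrivial increasing F:
\[
  p_c(F) \;\le\; K\, q(F)\, \log \ell(F),
\]
% where l(F) denotes the size of a largest minimal element of F
% (here, the n/2 edges of a matching).
```

So for perfect matchings the log factor is genuinely needed, and the theorem says no increasing property ever needs more than that.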
From junyaoz at stanford.edu Sun May 15 20:51:19 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 16 May 2022 03:51:19 +0000
Subject: [theoryseminar] Theory Lunch 5/19: Mark Zhandry (Princeton)
MessageID:
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Mark will tell us about: Verifiable Quantum Advantage without Structure
Abstract: "Structure" has long played a central role in proposals for superpolynomial quantum advantage. This is especially true for problems whose solutions can be efficiently verified, where all prior results require algebraic computational conjectures or oracles with very specific features. In this talk, I will discuss a new approach for verifiable quantum advantage which, for a reasonable complexity-theoretic notion of "structure", requires no structure at all.
* Joint work with Takashi Yamakawa
Cheers,
Junyao
From kabirc at stanford.edu Tue May 17 09:09:28 2022
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Tue, 17 May 2022 09:09:28 0700
Subject: [theoryseminar] "Algorithms and computational limits for
infinite-horizon general-sum stochastic games" - Vidya Muthukumar
(Thu, 19-May @ 4:00pm)
MessageID:
Algorithms and computational limits for infinite-horizon general-sum
stochastic games - Vidya Muthukumar, Georgia Tech
Thu, 19-May / 4:00pm / Packard 101 (in person)
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on Zoom
for those unable to attend in person:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
Stochastic games, first introduced by Lloyd Shapley in 1953, are a natural
game-theoretic generalization of Markov decision processes. The planning
problem for stochastic games, that is, efficiently computing a policy that
is a Nash equilibrium for all players in the game, is a fundamental
building block for multi-agent reinforcement learning.
In this talk, we provide algorithms and computational limits for finding
Nash equilibria (NE) in infinite-horizon, general-sum stochastic games of
two types: simultaneous play (SimSG) and turn-based play (TBSG). We prove
that computing an approximate stationary NE is PPAD*-complete in the number
of states (S) for both SimSG and TBSG. This intractability result for TBSG
in particular highlights a surprising separation between the complexity of
the planning problem for non-stationary NE (which can be shown to be
computable in polynomial time for TBSG) and stationary NE. Despite the
worst-case intractability, we also identify some special cases of
general-sum TBSGs for which pure stationary NE always exist and are
computable in polynomial time.
This talk will not assume any prior algorithmic game theory or complexity
theory knowledge.
*a complexity class for certain types of problems for which a solution is
guaranteed to exist. Introduced by Christos Papadimitriou in '94; believed
to be intractable, but easier than NP.
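The claim that some turn-based games admit pure stationary equilibria computable in polynomial time can be made concrete in the classical zero-sum setting, where value iteration converges to a fixed point whose greedy policies form a pure stationary equilibrium. A minimal sketch with toy numbers (Shapley's zero-sum setting, not the general-sum games or the algorithms from the talk):

```python
# Tiny turn-based zero-sum stochastic game: state 0 is controlled by the
# maximizer, state 1 by the minimizer. Each action is (reward, next_state);
# rewards are paid to the maximizer and discounted by GAMMA.
GAMMA = 0.9
ACTIONS = {
    0: [(1.0, 1), (0.0, 0)],   # maximizer's options in state 0
    1: [(-1.0, 0), (0.5, 1)],  # minimizer's options in state 1
}

def value_iteration(n_iters=500):
    v = [0.0, 0.0]
    for _ in range(n_iters):
        new0 = max(r + GAMMA * v[s] for r, s in ACTIONS[0])  # maximizer plays greedily
        new1 = min(r + GAMMA * v[s] for r, s in ACTIONS[1])  # minimizer plays greedily
        v = [new0, new1]
    return v

v = value_iteration()
# At the fixed point, each player's greedy stationary action is a best
# response to the other's, i.e. a pure stationary equilibrium of this game.
assert abs(v[0] - max(r + GAMMA * v[s] for r, s in ACTIONS[0])) < 1e-9
assert abs(v[1] - min(r + GAMMA * v[s] for r, s in ACTIONS[1])) < 1e-9
```

The discounted Bellman operator here is a contraction, which is why the iteration converges; the hardness result in the talk shows this tractability does not extend to general-sum stationary equilibria.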
Bio
Vidya Muthukumar is an Assistant Professor in the Schools of Electrical and
Computer Engineering and Industrial and Systems Engineering at Georgia
Institute of Technology. Her broad interests are in game theory, online and
statistical learning. She is particularly interested in designing learning
algorithms that provably adapt in strategic environments, fundamental
properties of overparameterized models, and algorithmic foundations of
multi-agent reinforcement learning.
Vidya received the PhD degree in Electrical Engineering and Computer
Sciences from the University of California, Berkeley. She is the recipient
of the Adobe Data Science Research Award, a Simons-Berkeley-Google Research
Fellowship (for the Fall 2020 program on "Theory of Reinforcement
Learning"), an IBM Science for Social Good Fellowship, and a Georgia Tech
Class of 1969 Teaching Fellowship for the academic year 2021-2022.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list islcolloq at lists.stanford.edu.*
From kabirc at stanford.edu Thu May 19 09:30:19 2022
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Thu, 19 May 2022 09:30:19 0700
Subject: [theoryseminar] "Algorithms and computational limits for
infinite-horizon general-sum stochastic games" - Vidya Muthukumar
(Thu, 19-May @ 4:00pm)
InReplyTo:
References:
MessageID:
Hi All,
Unfortunately this talk has been postponed due to unforeseen
circumstances.
Thanks,
Kabir
From junyaoz at stanford.edu Wed May 18 23:42:54 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 19 May 2022 06:42:54 +0000
Subject: [theoryseminar] Theory Lunch 5/19: Mark Zhandry (Princeton)
InReplyTo:
References:
MessageID:
A gentle reminder: This is happening in 10 minutes.
From soheil.behnezhad at gmail.com Thu May 19 15:42:11 2022
From: soheil.behnezhad at gmail.com (Soheil Behnezhad)
Date: Thu, 19 May 2022 15:42:11 0700
Subject: [theoryseminar] TOCA-SV Starts Tomorrow at 10:30am
MessageID:
Hello everyone,
I wanted to remind you that TOCA-SV will start *tomorrow* at the
Tresidder Oak Lounge at Stanford. The program starts with Jelani Nelson's
talk at *10:30am* and ends at *5:30pm*. You can find the full schedule
attached and also on the TOCA-SV website.
We will have a (limited) number of parking spots reserved for the
attendees. If you're planning to use them, make sure to check out the
parking instructions to register your car and park in the right place to
avoid citations (as Stanford won't reimburse citations).
If you'd like to explore the campus after the event, visit this link. The
Stanford COVID policy can also be found here.
Looking forward to seeing you tomorrow,
Soheil
From junyaoz at stanford.edu Sun May 22 20:04:43 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 23 May 2022 03:04:43 +0000
Subject: [theoryseminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri
(Berkeley)
MessageID:
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Yeshwanth will tell us about: Uniform Approximations for Randomized Hadamard Transforms
Abstract: In this talk, I will present some recent work establishing concentration properties for a class of structured random linear transformations based on Hadamard matrices. This class of matrices has been adopted as a computationally efficient alternative to "fully" random linear transformations (for instance, a matrix of i.i.d. Gaussians) in applications ranging from dimensionality reduction and compressed sensing to various high-dimensional machine learning scenarios. However, previous theoretical results only apply to the "low-dimensional" setting where a small number of rows are sampled from a full transformation matrix. I will present a full proof of our "high-dimensional" result, where we show that, as far as the distribution of the entries of the output is concerned, these structured transformations behave much the same as a fully random transformation. I will then describe an application of our inequality to the practically relevant setting of kernel approximation, where we obtain guarantees competitive with those for fully random matrices by Rahimi and Recht.
Based on joint work with Jelani Nelson. Link to paper: https://arxiv.org/abs/2203.01599
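As a rough sketch of the kind of transform the abstract refers to (assumed form: a diagonal of random signs followed by a normalized Hadamard matrix; this only illustrates the flattening effect on a single vector, not the paper's uniform concentration bound):

```python
import math
import random

def fwht(x):
    """In-place fast Walsh-Hadamard transform (unnormalized); len(x) must be a power of 2."""
    h, n = 1, len(x)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def randomized_hadamard(x, rng):
    """Apply (1/sqrt(n)) * H * D to x, where D is a diagonal of random signs."""
    n = len(x)
    y = [v * rng.choice((-1.0, 1.0)) for v in x]  # D: random sign flips
    fwht(y)                                       # H: Hadamard transform
    scale = 1.0 / math.sqrt(n)
    return [v * scale for v in y]                 # normalize so the map is orthogonal

rng = random.Random(0)
n = 1024
x = [0.0] * n
x[0] = 1.0                      # a maximally "spiky" input vector

y = randomized_hadamard(x, rng)

# The map is orthogonal, so the norm is preserved exactly...
assert abs(sum(v * v for v in y) - 1.0) < 1e-9
# ...while the mass is spread flat across coordinates (|y_i| = 1/sqrt(n) here).
assert max(abs(v) for v in y) < 10 * math.sqrt(math.log(n) / n)
```

This flattening of arbitrary inputs is what makes such transforms a cheap stand-in for fully random matrices; the paper's contribution is showing the whole output distribution, not just individual coordinates, matches the fully random case.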
Cheers,
Junyao
From junyaoz at stanford.edu Wed May 25 16:32:47 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Wed, 25 May 2022 23:32:47 +0000
Subject: [theoryseminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri
(Berkeley)
InReplyTo:
References:
MessageID:
Hi everyone,
Yeshwanth's talk is postponed due to unforeseen circumstances. Tomorrow we will have an hour of socializing instead.
Best,
Junyao
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Sunday, May 22, 2022 8:04 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Yeshwanth will tell us about: Uniform Approximations for Randomized Hadamard Transforms
Abstract: In this talk, I will present some recent work establishing concentration properties for a class of structured random linear transformations based on Hadamard matrices. This class of matrices has been adopted as a computationally efficient alternative to "fully" random linear transformations (for instance, a matrix of i.i.d. Gaussians) in applications ranging from dimensionality reduction and compressed sensing to various high-dimensional machine learning scenarios. However, previous theoretical results only apply to the "low-dimensional" setting, where a small number of rows are sampled from the full transformation matrix. I will present a full proof of our "high-dimensional" result, where we show that, as far as the distribution of the entries of the output is concerned, these structured transformations behave much the same as a fully random transformation. I will then describe an application of our inequality to the practically relevant setting of kernel approximation, where we obtain guarantees competitive with those of Rahimi and Recht for fully random matrices.
Based on joint work with Jelani Nelson. Link to paper: https://arxiv.org/abs/2203.01599
Cheers,
Junyao
From junyaoz at stanford.edu Thu May 26 10:07:07 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 26 May 2022 17:07:07 +0000
Subject: [theoryseminar] Theory Lunch 5/26: Yeshwanth Cherapanamjeri
(Berkeley)
InReplyTo:
References:
MessageID:
A gentle reminder: This is happening in 10 minutes (the talk is canceled, and we will have an hour of socializing instead).