From junyaoz at stanford.edu Tue Mar 1 10:45:55 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Tue, 1 Mar 2022 18:45:55 +0000
Subject: [theoryseminar] Theory Lunch 3/3: Noah Shutty
Message-ID:
Hi everyone,
This week's theory lunch will take place Thursday at 11:50am. We'll start with socializing and having lunch in the grass area near Gates (see the map at the end), followed by a remote talk at 12:40pm in Fujitsu. Noah will give his talk remotely, and he'll tell us about: Computing Efficiently on Encoded Data
Abstract: Error correcting codes can protect data from catastrophic loss. But could certain codes also make some computations more efficient?
In this work, we consider a distributed computing scenario in which several nodes collectively store bits of data, encoded in a Reed-Solomon code. We show that arbitrary linear functions of this data can be computed by downloading far fewer bits from each node than the naïve approach requires.
This talk presents recent work with Mary Wootters, available at [arXiv:2107.11847].
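For intuition, here is a toy sketch of the naive baseline the abstract contrasts against: a message is stored as evaluations of a polynomial (a Reed-Solomon code), and a linear function of it is computed by downloading k full symbols and decoding via Lagrange interpolation. This illustrates only the setting, not the low-bandwidth scheme from the talk; the field size and all parameters below are made up for the demo.

```python
# Toy Reed-Solomon over GF(p): the naive way to compute a linear
# function a.x of the stored message x is to download k full
# symbols, decode by Lagrange interpolation, then evaluate.
p = 101  # small prime field, chosen arbitrarily for the demo

def poly_mul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % p
    return out

def encode(x, points):
    # node j stores f(points[j]) where f has coefficient vector x
    return [sum(xi * pow(a, i, p) for i, xi in enumerate(x)) % p
            for a in points]

def decode(symbols, points, k):
    # Lagrange interpolation from any k symbols recovers x
    coeffs = [0] * k
    for j in range(k):
        basis, denom = [1], 1
        for m in range(k):
            if m != j:
                basis = poly_mul(basis, [-points[m] % p, 1])
                denom = denom * (points[j] - points[m]) % p
        scale = symbols[j] * pow(denom, -1, p) % p
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % p
    return coeffs

k, n = 3, 6
x = [5, 17, 42]                  # the message
points = list(range(1, n + 1))   # one evaluation point per node
c = encode(x, points)            # what the n nodes store
a = [2, 3, 4]                    # some linear functional a.x
naive = sum(ai * xi for ai, xi in zip(a, decode(c[:k], points[:k], k))) % p
assert naive == 27  # (2*5 + 3*17 + 4*42) mod 101
```

The paper's point is that this full-symbol download is wasteful; their scheme gets the same answer with far less communication per node.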
Note the new location and the slight change of time for this week. We want to spare people the extra walk and give people enough time to have lunch before the talk.
If you want to attend the talk but cannot come in person, please use this zoom link: https://stanford.zoom.us/j/98932206471?pwd=YXdubytLVGNTbXhGeXFxNmJaVnhrUT09.
Cheers,
Junyao
Theory lunch location for this week: We'll have lunch in the grass area near Gates (red point in the map below), and we'll watch Noah's talk together in Fujitsu (Gates 403).
[attached map image]
From marykw at stanford.edu Wed Mar 2 18:55:14 2022
From: marykw at stanford.edu (Mary Wootters)
Date: Wed, 2 Mar 2022 18:55:14 -0800
Subject: [theoryseminar] Learning Theory Mentoring Workshop
Message-ID:
Hi all,
See below for an announcement about a Learning Theory Mentoring Workshop
(to be held at ALT) on March 14-15! The application deadline is March 10.
Best,
Mary
=========
Hi all,
We are pleased to invite you to a Learning Theory Mentorship Workshop in
collaboration with the Conference on Algorithmic Learning Theory (ALT) 2022,
to be held virtually on March 14-15, 2022. The workshop is free.
The workshop is intended for upper-level undergraduate students, graduate
students at all levels, and postdoctoral researchers. No prior research
experience in the field is expected. We have several planned events
including:

- Two "how-to" talks:
  - How to read a paper (e.g., questions to keep in mind while reading a
    paper and how to get the most out of a paper in limited time)
  - How to navigate peer review (e.g., how to write reviews and rebuttals)
- Short technical tutorials on favorite learning theory concepts and
  techniques (e.g., concentration inequalities, complexity measures, and
  optimization tricks)
- An "Ask Me Anything" (AMA) with a member of the learning theory community
- A social hour with mentoring tables
Our lineup includes: Shipra Agrawal, Clément Canonne, Rachel Cummings, Sam
Hopkins, Nicole Immorlica, Akshay Krishnamurthy, Jamie Morgenstern, Aaditya
Ramdas, and Csaba Szepesvári.
A short application form is required to participate; the application
deadline is Thursday, March 10, 2022.
Students with backgrounds that are underrepresented or underserved in
related fields are especially encouraged to apply. We are trying our best
to accommodate all time zones. More information (including the schedule)
can be found on the event's website: http://letall.com/alt22.html.
This workshop is part of our broader community-building initiative called
the Learning Theory Alliance (founded by Surbhi Goel, Nika Haghtalab, and
Ellen Vitercik; advised by Peter Bartlett, Avrim Blum, Stefanie Jegelka,
Po-Ling Loh, and Jenn Wortman Vaughan). Check out http://letall.com/ for
more details.
Best,
Surbhi Goel, Thodoris Lykouris, Vidya Muthukumar, and Ellen Vitercik
From tpulkit at stanford.edu Wed Mar 2 21:55:21 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Thu, 3 Mar 2022 05:55:21 +0000
Subject: [theoryseminar] "Information Lattice Learning" - Haizi Yu (Friday, March 4th, 2pm PT)
MessageID: <321A537767B94986B07ADB9977C219D7@stanford.edu>
Hi everyone,
We continue with the Information Theory Forum (IT Forum) talks this week, Fri, March 4th at 2pm PT, with Dr. Haizi Yu. The talks are hosted and accessible via Zoom.
If you want to receive reminder emails, please join the IT Forum mailing list.
Details for this week's talk are below:
Information Lattice Learning
Haizi Yu, University of Chicago
Fri, 4th March, 2pm PT
Zoom Link
pwd: 032264
Abstract:
Can AI learn rules and abstractions from raw data? How few priors and how little data are needed to do so? How interpretable can the learned rules and the rule-learning process be? Whereas humans are extremely flexible in abstracting raw patterns and distilling rules, current AI systems are mostly good at either applying human-distilled rules (rule-based AI) or capturing patterns in a task-driven fashion (pattern recognition), but not at learning patterns in a human-interpretable way similar to human-induced theory and knowledge. We develop a generic white-box paradigm called Information Lattice Learning (ILL) to distill human-interpretable understanding of data in a human-like manner. We build the ILL framework on the core idea of computational abstraction and project statistical learning and inference onto lattices of abstractions. The resulting framework generalizes Shannon's information lattice and further brings learning into the picture. ILL targets applications where it can augment human intelligence and human creativity. We first deploy ILL in knowledge discovery, where we implement automatic theorists to learn music theory from scores, chemical laws from molecules, and rules on neurogenesis and team formation from single-cell RNA-seq and social behavioral datasets, respectively. For instance, ILL is capable of reconstructing, in explicit form, about 70% of a standard music theory curriculum from only 370 of Bach's chorales, while also discovering new rules that interest music researchers. We also demonstrate ILL's near-human performance on truly-few-shot learning, where we achieve state-of-the-art results in character classification from few, and only few, training examples, with no extra pretraining or validation data. For instance, we can reach 80%/90% MNIST test accuracy by using only the first/first four training images per class. We further demonstrate ILL's use cases in human-AI co-creation systems for crowd artistic creations.
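As a toy illustration of the lattice-of-abstractions idea (an illustrative sketch only, not the ILL algorithm; the note data and two-cell "abstraction" are made up): an abstraction can be modeled as a partition of the raw symbols, and projecting a dataset's distribution through a coarser partition never increases its Shannon entropy.

```python
from collections import Counter
from math import log2

def dist(seq):
    n = len(seq)
    return {k: v / n for k, v in Counter(seq).items()}

def entropy(p):
    return -sum(q * log2(q) for q in p.values() if q > 0)

def coarsen(seq, partition):
    # an "abstraction" here is just a partition: map each raw
    # symbol to the cell (abstract class) that contains it
    return [partition[x] for x in seq]

# made-up note data and a made-up two-cell abstraction
notes = ["C", "E", "G", "C", "E", "G", "B", "D"]
chord = {n: ("tone" if n in {"C", "E", "G"} else "other") for n in set(notes)}

h_raw = entropy(dist(notes))                  # entropy of the raw symbols
h_abs = entropy(dist(coarsen(notes, chord)))  # entropy after coarsening
assert h_abs <= h_raw  # coarsening along the lattice loses entropy
```

Shannon's information lattice orders all such partitions by refinement; ILL (per the abstract) searches over that lattice rather than a single fixed coarsening.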
Bio:
Haizi Yu is a postdoctoral scholar in the Knowledge Lab at the University of Chicago. He completed his Ph.D. degree in Computer Science at the University of Illinois at Urbana-Champaign, his M.S. degree in Computer Science at Stanford University, and his B.S. degree in Automation at Tsinghua University. His research interests include general and explainable artificial intelligence, interpretable machine learning, automatic concept learning and knowledge discovery, as well as music intelligence.
Best
Pulkit
From reingold at stanford.edu Thu Mar 3 11:16:03 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Thu, 3 Mar 2022 11:16:03 -0800
Subject: [theoryseminar] The Practice of Theory Research - Final Talks
Message-ID:
Hello everybody,
CS 163 is a "research methods in theory of CS" course. There were 5 groups
doing research throughout the quarter, and they will be presenting their
progress next week. The talk schedule and abstracts are below. The
students will really appreciate an external audience, so please let me know
if you can make it to any of the talks (even attending a single talk would
be great).
Thanks
Omer
*Tuesday, March 8th*
11:30-12: *Low-Bandwidth Evaluation of High-Degree Polynomials Using
High-Rate Error-Correcting Codes, *Connor Meany, Agustin Otero, Abraham
Ryzhik
12-12:30: *Weak Group Isomorphism in Almost-Quadratic Time, *Hanson Hao,
Ben Heller, Wenqi Li
12:30-1pm: *Card Guessing Game with Riffle Shuffles and an Adversarial
Perspective, *Suzy Lou, Josh Nkoy, Joey Rivkin, Colin Sullivan
*Thursday, March 10th*
11:30-12: *More Robust Distributed Merkle's Puzzles, *Christie Di and Jamie
Song
12-12:30: *Neural Networks Generalize from Self-Averaging Subclassifiers
in the Same Way As Adaptive Boosting, *Peter Chatain and Michael Sun
* Abstracts:*
Title: *Low-Bandwidth Evaluation of High-Degree Polynomials Using High-Rate
Error-Correcting Codes*
Speakers: Connor Meany, Agustin Otero, Abraham Ryzhik
Abstract: The problem of computing functions F : F^k → F on encoded data
c = C(x) ∈ F^n is the most natural generalization of asking for a decoding
of it: F(x) = x. In [SW22] the problem is solved using less encoded
information than is needed to trivially decode first and then evaluate:
c̃ → (via C^{-1}) x → F(x), for F linear over F. We show how to maintain
encoding bandwidth gains below the trivial amount while computing nonlinear
functions of the form F(x) = Σ f_i(x_i), where each f_i ∈ F[X] is a
polynomial of degree at most p ∈ N for all i ∈ [k], without sustaining
factor-of-p losses in the encoding rate k/n, where C : F^k → F^n. To
achieve this, we show that changing the encoding function to f(x)^p makes
it not much harder to interpolate than the original f(x), while tolerating
errors as in [SW22]. This stands to improve the potency of distributed
storage and parallel computation, broadening the class of functions that
can be computed on desirable-rate encoded data with low download bandwidth
costs.
Title: *Weak Group Isomorphism in Almost-Quadratic Time*
Speakers: Hanson Hao, Ben Heller, Wenqi Li
Abstract: In DW2020, the group isomorphism problem is solved in nearly
linear time for a dense set of group orders. Thus, the barrier to improving
group isomorphism lies in the group orders not contained in this dense set.
Specifically, the hardest case is thought to be the case of p-groups.
Although there are not many reductions formalizing that this is the hardest
case, GQ2021 makes some progress on this path with a reduction among
p-groups. The group isomorphism problem is of independent interest, but it
is also motivated by the existence of a reduction from group isomorphism to
graph isomorphism (Miller 1979), the complexity of which is a major unsolved
problem in complexity theory. Furthermore, the hard case of group
isomorphism with p-groups is thought to be a hard case of graph isomorphism
under the reduction from groups to graphs; however, there is no reduction
making this provably true. At the moment, nobody really knows how to handle
the case of p-groups, and for most group orders that do not contain
p-groups, there is a nearly linear time algorithm to check for isomorphism.
In order to find new angles on this sort of problem, we investigate a
weaker notion than isomorphism.
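For a sense of what the near-linear algorithms above improve on: the trivial baseline checks every bijection between the two groups' Cayley tables, which is n!-time. A minimal sketch on two made-up groups of order 4:

```python
from itertools import permutations

def is_isomorphic(t1, t2):
    # brute force: try every bijection phi and check the
    # homomorphism condition phi(a*b) == phi(a)*phi(b)
    n = len(t1)
    if n != len(t2):
        return False
    for phi in permutations(range(n)):
        if all(phi[t1[a][b]] == t2[phi[a]][phi[b]]
               for a in range(n) for b in range(n)):
            return True
    return False

# two groups of order 4: cyclic Z4 vs the Klein four-group (XOR)
z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
z2z2 = [[a ^ b for b in range(4)] for a in range(4)]
assert is_isomorphic(z4, z4)
assert not is_isomorphic(z4, z2z2)
```

Already at order 4 this tries up to 24 permutations; the gap between n! and nearly linear is the whole story of the abstract above.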
Title: *Card Guessing Game with Riffle Shuffles and an Adversarial
Perspective*
Speakers: Suzy Lou, Josh Nkoy, Joey Rivkin, Colin Sullivan
Abstract: This paper explores the card guessing game on a riffle-shuffled
deck. In this game, a dealer orders a deck of n cards numbered 1, . . . ,
n, and a guesser guesses the cards one at a time, attempting to maximize
their number of correct guesses in expectation. The paper extends previous
work on a variant of the card guessing game where the dealer is limited to
a single riffle shuffle. We show an optimal guesser strategy against
arbitrarily many riffle shuffles and compute a rough bound on the expected
number of correct guesses under this strategy. We also introduce an
adversarial analogue of a riffle shuffle dealer, in which the dealer can
order the cards in any way consistent with some fixed number of riffle
shuffles. We give an optimal adversarial dealer strategy in this model and
compute an upper bound on the number of correct guesses against this
dealer.
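To get a feel for the setup, here is a minimal simulator of the standard Gilbert-Shannon-Reeds riffle shuffle together with the naive "guess the deck in its original order" strategy. This is an illustration of the model only; the paper's optimal guesser is more involved, and the seed, deck size, and trial count are arbitrary.

```python
import random

rng = random.Random(1)  # fixed seed so the demo is reproducible

def riffle(deck):
    # Gilbert-Shannon-Reeds shuffle: binomial cut, then drop cards
    # with probability proportional to the remaining packet sizes
    n = len(deck)
    cut = sum(rng.random() < 0.5 for _ in range(n))
    left, right = list(deck[:cut]), list(deck[cut:])
    out = []
    while left or right:
        if rng.random() * (len(left) + len(right)) < len(left):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def naive_score(shuffled):
    # the naive guesser calls card i at position i (identity order)
    return sum(guess == card for guess, card in enumerate(shuffled))

deck = list(range(52))
scores = [naive_score(riffle(deck)) for _ in range(200)]
avg = sum(scores) / len(scores)  # Monte Carlo estimate, one shuffle
```

An optimal guesser conditions each guess on everything seen so far (the deck is an interleaving of two increasing runs after one shuffle), which is exactly what the naive strategy ignores.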
Title: *More Robust Distributed Merkle's Puzzles*
Speakers: Christie Di and Jamie Song
Abstract: In 1974, Merkle proposed the first key-exchange scheme based on
symmetric primitives, called "Merkle's Puzzles", which allows two players to
build a secure channel against an eavesdropper. In 2021, Dinur and Hassen
proposed a distributed Merkle's puzzles protocol, but it fails to work under
a constant fraction of semi-honest or malicious players. We will first
present a basic protocol which should be robust against any number of
semi-honest or malicious players, then proceed with two other protocols
with better query complexity. Our final goal is to prove optimality in a
model with a constant proportion of semi-honest and malicious players, or
to give an impossibility result showing the limits of distributed Merkle's
puzzles in that setting. Our work could potentially extend to other highly
connected network models.
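For readers unfamiliar with the primitive, the classic (non-distributed) Merkle's puzzles can be sketched in a few lines. The encoding, header, and parameters here are invented for the demo; real instantiations tune N and M so that the eavesdropper's expected N*M/2 work dwarfs Bob's M.

```python
import hashlib, os, random

random.seed(0)    # reproducible puzzle keys
HEADER = b"PUZL"  # known plaintext so a solver can recognize success
M = 2 ** 12       # per-puzzle brute-force work (tiny key space)
N = 100           # number of puzzles Alice publishes

def keystream(k, length):
    return hashlib.sha256(k.to_bytes(4, "big")).digest()[:length]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_puzzles():
    # Alice: each puzzle hides (id, session_key) under a weak key
    table, puzzles = {}, []
    for pid in range(N):
        session_key = os.urandom(8)
        table[pid] = session_key
        plain = HEADER + pid.to_bytes(2, "big") + session_key
        puzzles.append(xor(plain, keystream(random.randrange(M), len(plain))))
    return table, puzzles

def solve_one(puzzles):
    # Bob: pick one puzzle and brute-force its small key space
    ct = random.choice(puzzles)
    for k in range(M):
        plain = xor(ct, keystream(k, len(ct)))
        if plain.startswith(HEADER):
            return int.from_bytes(plain[4:6], "big"), plain[6:]
    raise RuntimeError("no key found")

table, puzzles = make_puzzles()
pid, key = solve_one(puzzles)
assert table[pid] == key  # Bob announces pid; both now share `key`
```

Bob does O(M) work on one puzzle; an eavesdropper who only sees `pid` must solve puzzles until it finds that id, costing about N*M/2 on average. The distributed setting in the abstract spreads the puzzles across players, which is where the robustness questions arise.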
Title: *Neural Networks Generalize from SelfAveraging Subclassifiers in
the Same Way As Adaptive Boosting*
Speakers: Peter Chatain and Michael Sun
Abstract: In recent years, neural networks (NNs) have made giant leaps in
a wide variety of domains. NNs are often referred to as "black box"
algorithms due to how little we can explain their empirical success. Our
foundational research seeks to explain why neural networks generalize. A
recent advancement derived a mutual information measure for explaining the
performance of deep NNs through a sequence of increasingly complex
functions. We show that deep NNs learn a series of boosted classifiers whose
generalization is popularly attributed to self-averaging over an increasing
number of interpolating subclassifiers. To our knowledge, we are the first
work to establish the connection between generalization in boosted
classifiers and generalization in deep NNs. Our experimental evidence and
theoretical analysis suggest that NNs trained with dropout exhibit
self-averaging behavior over interpolating subclassifiers similar to the
behavior cited in popular explanations for the post-interpolation
generalization phenomenon in boosting.
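Since the abstract leans on boosting, here is a minimal AdaBoost over decision stumps showing the weighted ensemble being built. This is a generic textbook sketch on made-up 1-D data, not the authors' setup.

```python
import math

def stump(x, t, s):
    # threshold weak learner: predict s above t, -s at or below
    return s if x > t else -s

def best_stump(X, y, w):
    # exhaustive search over thresholds and orientations
    candidates = [min(X) - 1] + [xi + 0.5 for xi in sorted(set(X))]
    best = None
    for t in candidates:
        for s in (1, -1):
            err = sum(wi for wi, xi, yi in zip(w, X, y)
                      if stump(xi, t, s) != yi)
            if best is None or err < best[0]:
                best = (err, t, s)
    return best

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n
    model = []  # list of (alpha, threshold, sign)
    for _ in range(rounds):
        err, t, s = best_stump(X, y, w)
        err = max(err, 1e-12)  # guard against perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, t, s))
        # reweight: misclassified points gain weight, then normalize
        w = [wi * math.exp(-alpha * yi * stump(xi, t, s))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    return 1 if sum(a * stump(x, t, s) for a, t, s in model) >= 0 else -1

# a toy separable dataset: one stump suffices, and boosting keeps it
X, y = [1, 2, 3, 4, 5, 6], [1, 1, 1, -1, -1, -1]
model = adaboost(X, y, rounds=3)
assert all(predict(model, xi) == yi for xi, yi in zip(X, y))
```

The claimed analogy is that the growing weighted sum of weak learners plays the role of the increasingly complex function sequence inside a deep network.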
From tavorb at stanford.edu Thu Mar 3 11:39:19 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 3 Mar 2022 11:39:19 -0800
Subject: [theoryseminar] "Adaptivity and Confounding in Multi-Armed Bandit Experiments" - Daniel Russo (Thu, 3-Mar @ 4:00pm)
In-Reply-To:
References:
Message-ID:
Reminder: this talk (hosted jointly with the Stanford RL Forum) will be
today at 4pm via Zoom (link here).
Please join us for snacks at 3:30pm in the Grove outside Packard.
On Mon, Feb 28, 2022 at 11:53 AM Tavor Baharav wrote:
> Adaptivity and Confounding in Multi-Armed Bandit Experiments
> Daniel Russo - Professor, Columbia Business School
>
> Thu, 3-Mar / 4:00pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> (Zoom only)
>
> *Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). The talk will be held on
> Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> *
> Abstract
>
> Multi-armed bandit algorithms minimize the experimentation costs required to
> converge on optimal behavior. They do so by rapidly adapting
> experimentation effort away from poorly performing actions as feedback is
> observed. But this desirable feature makes them sensitive to confounding,
> which is the primary concern underlying classical randomized controlled
> trials. We highlight, for instance, that popular bandit algorithms cannot
> address the problem of identifying the best action when day-of-week effects
> may influence reward observations. In response, this paper proposes
> deconfounded Thompson sampling, which makes simple, but critical,
> modifications to the way Thompson sampling is usually applied. Theoretical
> guarantees suggest the algorithm strikes a delicate balance between
> adaptivity and robustness to confounding. It attains asymptotic lower
> bounds on the number of samples required to confidently identify the best
> action (suggesting optimal adaptivity), but also satisfies strong
> performance guarantees in the presence of day-of-week effects and delayed
> observations (suggesting unusual robustness). At the core of the paper is a
> new model of contextual bandit experiments in which issues of delayed
> learning and distribution shift arise organically.
> Bio
>
> Daniel Russo is an Associate Professor in the Decision, Risk, and
> Operations division of Columbia Business School. His research focuses on
> problems at the intersection of sequential decision-making and statistical
> machine learning. He completed his PhD at Stanford under the supervision of
> Ben Van Roy.
>
> *This talk is hosted by the ISL Colloquium
> . To receive talk announcements, subscribe
> to the mailing list islcolloq at lists.stanford.edu
> .*
> 
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2022q1/danrusso/
>
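For context, vanilla Thompson sampling for Bernoulli bandits, the algorithm the talk modifies, fits in a few lines. This generic Beta-Bernoulli sketch is not Russo's deconfounded variant, and the arm probabilities, horizon, and seed are made up.

```python
import random

def thompson_bernoulli(success_probs, horizon, rng=random.Random(0)):
    # Beta(1,1) priors per arm: sample a mean from each posterior,
    # play the argmax, then update with the observed 0/1 reward
    k = len(success_probs)
    a, b, pulls = [1] * k, [1] * k, [0] * k
    for _ in range(horizon):
        draws = [rng.betavariate(a[i], b[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        reward = 1 if rng.random() < success_probs[arm] else 0
        a[arm] += reward
        b[arm] += 1 - reward
        pulls[arm] += 1
    return pulls

# two made-up arms; sampling should concentrate on the 0.8 arm
pulls = thompson_bernoulli([0.2, 0.8], horizon=2000)
assert sum(pulls) == 2000
```

The confounding problem in the abstract is visible here: the posterior updates assume rewards are exchangeable over time, so a day-of-week effect silently biases the comparison between arms sampled on different days.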
From junyaoz at stanford.edu Thu Mar 3 10:19:28 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 3 Mar 2022 18:19:28 +0000
Subject: [theoryseminar] Theory Lunch 3/3: Noah Shutty
In-Reply-To:
References:
Message-ID:
A kind reminder: This is happening at 11:50am. Noah's talk will start at 12:40pm in Fujitsu (Gates 403).
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Tuesday, March 1, 2022 10:45 AM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 3/3: Noah Shutty
Hi everyone,
This week's theory lunch will take place Thursday at 11:50am. We'll start with socializing and having lunch in the grass area near Gates (see the map at the end), followed by a remote talk at 12:40pm in Fujitsu. Noah will give his talk remotely, and he'll tell us about: Computing Efficiently on Encoded Data
Abstract: Error correcting codes can protect data from catastrophic loss. But could certain codes also make some computations more efficient?
In this work, we consider a distributed computing scenario in which several nodes collectively store bits of data, encoded in a Reed-Solomon code. We show that arbitrary linear functions of this data can be computed by downloading far fewer bits from each node than the naïve approach requires.
This talk presents recent work with Mary Wootters, available at [arXiv:2107.11847].
Note the new location and the slight change of time for this week. We want to spare people the extra walk and give people enough time to have lunch before the talk.
If you want to attend the talk but cannot come in person, please use this zoom link: https://stanford.zoom.us/j/98932206471?pwd=YXdubytLVGNTbXhGeXFxNmJaVnhrUT09.
Cheers,
Junyao
Theory lunch location for this week: We'll have lunch in the grass area near Gates (red point in the map below), and we'll watch Noah's talk together in Fujitsu (Gates 403).
[attached map image]
From reingold at stanford.edu Thu Mar 3 16:34:16 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Thu, 3 Mar 2022 16:34:16 -0800
Subject: [theoryseminar] The Practice of Theory Research - Final Talks
In-Reply-To:
References:
Message-ID:
And the location of the talks is Sequoia Hall 200, very close to Gates.
No Zoom.
Omer
On Thu, Mar 3, 2022 at 11:16 AM Omer Reingold wrote:
> [...]
From moses at cs.stanford.edu Sun Mar 6 16:53:13 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Sun, 6 Mar 2022 16:53:13 -0800
Subject: [theoryseminar] max flow in near-linear time: Fri Mar 18, 2-4pm
Message-ID:
Theory folks,
Mark your calendars: our very own Yang Liu and Li Chen (visiting from
Georgia Tech) will tell us about their new breakthrough result on max flow
and min-cost flow on Friday, Mar 18, 2-4pm in Gates 415.
Title and abstract are below. Hope to see you there!
Cheers,
Moses
Title: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
Abstract: We give an algorithm that computes exact maximum flows and
minimum-cost flows on directed graphs with $m$ edges and polynomially
bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our
algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate
undirected minimum-ratio cycles, each of which is computed and processed in
amortized $m^{o(1)}$ time using a dynamic data structure.
Our framework extends to an algorithm running in $m^{1+o(1)}$ time for
computing flows that minimize general edge-separable convex functions to
high accuracy. This gives an almost-linear time algorithm for several
problems including entropy-regularized optimal transport, matrix scaling,
$p$-norm flows, and isotonic regression.
Joint work with Rasmus Kyng, Richard Peng, Maximilian Probst Gutenberg, and
Sushant Sachdeva.
https://arxiv.org/abs/2203.00671
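The paper's $m^{1+o(1)}$ algorithm is far beyond a short sketch, but as a point of contrast for the "almost-linear" claim, here is a minimal classical baseline (Edmonds-Karp augmenting paths; my own illustration, not the speakers' method), which runs in $O(V E^2)$ time:

```python
from collections import deque

def max_flow(n, edges, s, t):
    # Classical Edmonds-Karp: repeatedly augment along a shortest s-t path
    # in the residual graph. edges is a list of (u, v, capacity) triples.
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
    flow = 0
    while True:
        # BFS for a shortest augmenting path
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path remains
        # find the bottleneck capacity along the path
        b, v = float("inf"), t
        while v != s:
            b = min(b, cap[parent[v]][v])
            v = parent[v]
        # push b units of flow, updating residual capacities
        v = t
        while v != s:
            u = parent[v]
            cap[u][v] -= b
            cap[v][u] += b
            v = u
        flow += b
```

For example, max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)], 0, 3) returns 5. The dense adjacency matrix keeps the sketch short; the whole point of the paper is that far more sophisticated machinery brings the exact problem down to almost-linear time.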
From rayyli at stanford.edu Mon Mar 7 09:16:03 2022
From: rayyli at stanford.edu (Ray Li)
Date: Mon, 7 Mar 2022 17:16:03 +0000
Subject: [theoryseminar] Thesis defense
Message-ID:
Hi All,
I'm defending my thesis tomorrow Tuesday 3/8 at 2:15pm, and I would be delighted to see you there!
Best,
Ray
Title: New Combinatorial Bounds for Error-Correcting Codes
Ray Li
Computer Science Department
Stanford University
Advisors: Jacob Fox and Mary Wootters
Time: Tuesday, March 8th, 2:15pm
Location: Gates 415
Abstract:
Error-correcting codes protect information from noise. In the standard setup, a sender, Alice, wants to send a message through a noisy channel to a receiver, Bob. To do so, Alice encodes the message with an error-correcting code so that Bob can recover the message, even in the presence of noise. This thesis addresses fundamental challenges in two basic coding theory contexts: deletion codes and list-decoding.
In deletion codes (Levenshtein '66, Ullman '67), the noisy channel transmits a subsequence of Alice's encoded message. This setup is motivated by applications such as DNA storage, magnetic recording, and internet transmission. Though deletion codes are an old topic, our understanding of them was poor compared to other errors like substitutions and erasures, and many basic questions remained open until recently. We contribute to this recent progress, answering one extremely basic question: can positive-rate binary codes correct a worst-case deletion fraction approaching the natural limit of 1/2?
In list decoding (Elias '57, Wozencraft '58), Bob only needs to output a small list of messages containing the correct message. This relaxation allows Alice and Bob to tolerate more noise (approximately twice as much). For this reason (and others), list-decoding finds various applications such as group testing, compressed sensing, algorithm design, pseudorandomness, complexity, and cryptography. All applications require explicit list-decodable codes, but our best list-decodable codes are often non-explicit random codes. Towards finding optimal explicit list-decodable codes, we show stronger list-decoding results for more-structured ensembles of codes, such as random linear codes and random Reed-Solomon codes.
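As a concrete illustration of the deletion channel in the abstract above (my own toy code, not from the thesis): a received word is a possible channel output exactly when it is a subsequence of the transmitted codeword, and a brute-force "list decoder" keeps every codeword consistent with what Bob received:

```python
def is_subsequence(received, sent):
    # True iff `received` can be obtained from `sent` by deletions only,
    # i.e. iff `received` is a subsequence of `sent`.
    it = iter(sent)
    return all(bit in it for bit in received)  # `in` advances the iterator

def list_decode_deletions(received, code):
    # Every codeword that could have produced `received` over the deletion
    # channel is a candidate. This is exponential in general; real deletion
    # codes are designed so that this candidate list stays small.
    return [c for c in code if is_subsequence(received, c)]
```

For example, list_decode_deletions("00", ["0000", "0101", "1111"]) returns ["0000", "0101"]: only the third codeword could not have produced "00" by deletions.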
From junyaoz at stanford.edu Mon Mar 7 09:28:19 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 7 Mar 2022 17:28:19 +0000
Subject: [theoryseminar] Theory Lunch 3/10: Siqi Liu (Berkeley)
Message-ID:
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Siqi will tell us about: On statistical inference when fixed points of belief propagation are unstable
Abstract: Many statistical inference problems correspond to recovering the values of a set of hidden variables from sparse observations on them. Inspired by ideas from statistical physics, the presence of a stable fixed point for belief propagation has been widely conjectured to characterize the computational tractability of these problems. For community detection in stochastic block models, many of these predictions have been rigorously confirmed. We consider a general model of statistical inference problems that includes both community detection in stochastic block models, and all planted constraint satisfaction problems as special cases. We carry out the cavity method calculations from statistical physics to compute the regime of parameters where recovery should be algorithmically tractable. At precisely the predicted tractable regime, we give a general polynomial-time algorithm for the problem of recovery.
In this talk, we will discuss:
How to apply the cavity method to this general model of statistical inference problems
Intuitions behind the recovery algorithm
This talk is based on joint work with Sidhanth Mohanty and Prasad Raghavendra.
Cheers,
Junyao
From tavorb at stanford.edu Mon Mar 7 18:07:33 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 7 Mar 2022 18:07:33 -0800
Subject: [theoryseminar] "Towards instance-optimal compression for distributed mean estimation" – Ananda Theertha Suresh (Thu, 10-Mar @ 4:00pm)
Message-ID:
Towards instance-optimal compression for distributed mean estimation
Ananda Theertha Suresh – Research Scientist, Google Research
Thu, 10-Mar / 4:00pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
(in person)
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on
Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
Distributed mean estimation is a commonly used subroutine in many
distributed learning and optimization algorithms. In several distributed
scenarios, communication cost is a bottleneck and quantization techniques
have been proposed to improve communication efficiency. However, existing
techniques often suffer a quantization error scaling with the range of data
points. We propose a new non-interactive correlated quantization protocol
whose error guarantee depends on the deviation of data points instead of
their absolute range. Furthermore, our algorithm and analysis do not make
any distributional assumptions or require any prior knowledge of the
concentration property of the data. We prove the optimality of our protocol
under mild assumptions and also show that applying it as a subroutine in
distributed optimization leads to better convergence rates.
Based on joint work with Jae Ro, Ziteng Sun, and Felix Yu.
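To make the range-dependence concrete (my own illustration, not the protocol from the talk): the standard unbiased stochastic quantizer below has per-coordinate error on the order of the grid step (hi - lo)/(levels - 1), i.e., it scales with the data's absolute range. That is exactly the dependence the proposed correlated protocol replaces with the deviation of the points.

```python
import random

def stochastic_round(x, lo, hi, levels):
    # Unbiased stochastic quantization of x in [lo, hi] onto `levels`
    # equally spaced grid points. The output is always one of the two
    # grid points adjacent to x, so the error is at most one grid step,
    # which grows with the range (hi - lo).
    step = (hi - lo) / (levels - 1)
    k = (x - lo) / step              # fractional grid index of x
    low = int(k)
    p = k - low                      # rounding up w.p. p keeps E[output] = x
    q = low + (1 if random.random() < p else 0)
    return lo + min(q, levels - 1) * step
```

Averaging many independent quantizations of x = 0.37 on an 11-level grid over [0, 1] recovers 0.37 (unbiasedness), while each individual output is off by up to the step size 0.1.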
Bio
Ananda Theertha Suresh is a research scientist at Google Research, New
York. He received his PhD from University of California San Diego, where he
was advised by Prof. Alon Orlitsky. His research focuses on theoretical and
algorithmic aspects of machine learning, information theory, differential
privacy, and statistics. He is a recipient of the 2017 Paul Baran Marconi
Young Scholar award and a co-recipient of best paper awards at NeurIPS
2015, ALT 2020, CCS 2021, and a best paper honorable mention award at ICML
2017.
*This talk is hosted by the ISL Colloquium
. To receive talk announcements, subscribe
to the mailing list islcolloq at lists.stanford.edu
.*

Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk:
http://isl.stanford.edu/talks/talks/2022q1/anandatheerthasuresh/
From kjtian at stanford.edu Tue Mar 8 16:28:16 2022
From: kjtian at stanford.edu (Kevin Tian)
Date: Tue, 8 Mar 2022 16:28:16 -0800
Subject: [theoryseminar] Practice job talk
Message-ID:
Hi all,
I will be giving a practice job talk on 3/15 at 11-12 AM PST; you are all
invited, and I would really appreciate it if you can make it. The talk will
be virtual (since I'm not on campus right now), and closer to the date of
the talk, I will send out an anonymous feedback form and a Zoom link.
Thanks a lot in advance everyone! Here is the talk description.
Title: Iterative Methods and High-Dimensional Statistics
Abstract: Algorithmic primitives such as stochastic gradient descent,
discretized Langevin dynamics, regression, and clustering have emerged as
powerful workhorses enabling many recent advances in data science and
machine learning. These ubiquitous methods have well-understood analyses in
classical regimes; however, in prominent modern applications going beyond
these regimes, the theoretical guarantees of basic algorithms and their
analyses may often leave much on the table. In this talk, I examine how
tools originally developed in the fields of iterative methods and
high-dimensional statistics can be combined and reimagined to build a
modern theory of reliable, scalable, and accurate algorithm design. As case
studies, I will mainly focus on two lines of research I have pushed forward
in my Ph.D. work, spanning the modern algorithmic theories of robust
statistical estimation and highdimensional sampling.
Cheers,
Kevin
From reingold at stanford.edu Tue Mar 8 18:57:43 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Tue, 8 Mar 2022 18:57:43 -0800
Subject: [theoryseminar] Friday March 25th,
1pm: quantitative models in a fully integrated healthcare system
Message-ID:
Noa and Noam (bios below) will give a talk on Friday March 25th at 1pm in
the Fujitsu conference room (on the 4th floor in Gates). Details on the
talk are below and (based on their past talks) I highly recommend
attending. The speakers are experts in public health care with a very wide
and deep education and a tremendous openness to adopting advanced CS
research in reallife health care systems (through one of the largest
healthcare providers in the world). They have been involved in some of the
highest profile COVID research publications (e.g., on the effectiveness of
vaccines) and I was lucky to have them apply some of our
algorithmic-fairness work. Feel free to disseminate this invitation to
anybody who may be interested.
Best wishes
Omer
*Title*: Opportunities for the application of quantitative models in a
fully integrated healthcare system
*Abstract*
Health insurance in Israel is mandatory, comprehensive in its list of
services, and provided by four
integrated payer-provider organizations. Clalit Health Services is the
largest of these organizations,
responsible for the care of over half of the Israeli population. Most of
this care (outpatient and
inpatient) is directly provided by Clalit, and the rest is purchased by
Clalit. All services provided or
purchased are stored in a single comprehensive analytic data warehouse. Our
talk will focus on the
opportunities that such an integrated system and its data offers in using
quantitative models for
stateoftheart research and digital healthcare interventions.
We will discuss the two main quantitative tools used for digital
healthcare: causal inference and
prediction models. We will show how the depth and immediacy of the data
allowed the conduct of
causal research that provided necessary and timely information regarding
the effectiveness and
safety of mRNA Covid-19 vaccines. We will also show how such unique data
enabled us to study an
often-overlooked aspect of the vaccination: indirect protective effects.
We will also demonstrate
how this data can be used for promoting predictive, proactive, and
personalized care. We will
demonstrate how prediction models are created and how they are integrated
into the point of care.
Noam Barda holds an MD from Tel Aviv University, a PhD co-advised in public
health and computer
science from Ben-Gurion University, and a BSc in computer science from the
Open University. He
completed his postdoctorate in the Department of Biomedical Informatics
(DBMI) at Harvard
Medical School. He is the head of the RealWorld Evidence Research and
Innovation Lab at Tel
HaShomer medical center, Israel's largest hospital, and co-heads the
Digital Healthcare Laboratory in
the department of Software and Information Systems Engineering at
Ben-Gurion University.
Noa Dagan holds an MD and an MPH from the Hebrew University, and a PhD in
Computer Science
from Ben-Gurion University. She completed her postdoctorate in the
Department of Biomedical
Informatics (DBMI) at Harvard Medical School. Dr. Dagan is currently the
director of the AI-driven
Medicine Department in Clalit Innovation and the Clalit Research Institute,
and co-heads the Digital
Healthcare Laboratory in the department of Software and Information Systems
Engineering at Ben-Gurion University.
From tpulkit at stanford.edu Wed Mar 9 13:34:50 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 9 Mar 2022 21:34:50 +0000
Subject: [theoryseminar] "Semantic vs. Effective Communications" – Deniz Gündüz (Friday, March 11th, 2pm PT)
Message-ID: <2A3BD8A570454A12B37088AE4DDED95B@stanford.edu>
Hi everyone,
We continue with the Information Theory Forum (IT Forum) talks this week @ Fri, March 11th, 2pm PT with Prof. Deniz Gündüz. The talks are hosted and accessible via Zoom.
If you want to receive reminder emails, please join the IT Forum mailing list.
Details for this week's talk are below:
Semantic vs. Effective Communications
Deniz Gündüz, Imperial College London
Fri, 11th March, 2pm PT
Zoom Link
pwd: 032264
Abstract
I will start this talk by motivating more mathematical definitions for Weaver's three levels of communication problems in his famous article that introduced Shannon's work in the book The Mathematical Theory of Communication (1949). I will then focus on the classes of semantic and effective communication problems. First, I will present some of our recent results on semantic communications involving wireless video transmission and the communication of deep neural networks over noisy channels. Then, I will provide an example of an effective communication problem, called remote Markov decision process, present some initial results, and discuss various open problems for future research.
Bio
Deniz Gündüz received his B.S. degree from Middle East Technical University, Turkey in 2002, and M.S. and Ph.D. degrees from NYU Tandon School of Engineering in 2004 and 2007, respectively. He is a Professor of Information Processing in the Electrical and Electronic Engineering Department of Imperial College London, UK. He is also a part-time faculty member at the University of Modena and Reggio Emilia, Italy, and has held visiting positions at the University of Padova (2018-2020) and Princeton University (2009-2012). His research interests lie in the areas of wireless communications, information theory, machine learning, and privacy. Dr. Gündüz is a Fellow of the IEEE, and a Distinguished Lecturer for the IEEE Information Theory Society (2020-22). He serves in various editorial roles for the IEEE Transactions on Information Theory, IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, and the IEEE Journal on Selected Areas in Communications (JSAC).
Best
Pulkit
From junyaoz at stanford.edu Thu Mar 10 09:56:47 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 10 Mar 2022 17:56:47 +0000
Subject: [theoryseminar] Theory Lunch 3/10: Siqi Liu (Berkeley)
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes.
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Monday, March 7, 2022 9:28 AM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 3/10: Siqi Liu (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Siqi will tell us about: On statistical inference when fixed points of belief propagation are unstable
Abstract: Many statistical inference problems correspond to recovering the values of a set of hidden variables from sparse observations on them. Inspired by ideas from statistical physics, the presence of a stable fixed point for belief propagation has been widely conjectured to characterize the computational tractability of these problems. For community detection in stochastic block models, many of these predictions have been rigorously confirmed. We consider a general model of statistical inference problems that includes both community detection in stochastic block models, and all planted constraint satisfaction problems as special cases. We carry out the cavity method calculations from statistical physics to compute the regime of parameters where recovery should be algorithmically tractable. At precisely the predicted tractable regime, we give a general polynomial-time algorithm for the problem of recovery.
In this talk, we will discuss:
How to apply the cavity method to this general model of statistical inference problems
Intuitions behind the recovery algorithm
This talk is based on joint work with Sidhanth Mohanty and Prasad Raghavendra.
Cheers,
Junyao
From tavorb at stanford.edu Thu Mar 10 14:15:01 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 10 Mar 2022 14:15:01 -0800
Subject: [theoryseminar] "Towards instance-optimal compression for distributed mean estimation" – Ananda Theertha Suresh (Thu, 10-Mar @ 4:00pm)
In-Reply-To:
References:
Message-ID:
Reminder: this talk will be today at 4pm via Zoom (link below).
Please join us for snacks at 3:30pm in the Grove outside Packard.
On Mon, Mar 7, 2022 at 6:07 PM Tavor Baharav wrote:
> Towards instance-optimal compression for distributed mean estimation
> Ananda Theertha Suresh – Research Scientist, Google Research
>
> Thu, 10-Mar / 4:00pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> (in person)
>
> *Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). The talk will be streamed on
> Zoom: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> *
> Abstract
>
> Distributed mean estimation is a commonly used subroutine in many
> distributed learning and optimization algorithms. In several distributed
> scenarios, communication cost is a bottleneck and quantization techniques
> have been proposed to improve communication efficiency. However, existing
> techniques often suffer a quantization error scaling with the range of data
> points. We propose a new non-interactive correlated quantization protocol
> whose error guarantee depends on the deviation of data points instead of
> their absolute range. Furthermore, our algorithm and analysis do not make
> any distributional assumptions or require any prior knowledge of the
> concentration property of the data. We prove the optimality of our protocol
> under mild assumptions and also show that applying it as a subroutine in
> distributed optimization leads to better convergence rates.
>
> Based on joint work with Jae Ro, Ziteng Sun, and Felix Yu.
> Bio
>
> Ananda Theertha Suresh is a research scientist at Google Research, New
> York. He received his PhD from University of California San Diego, where he
> was advised by Prof. Alon Orlitsky. His research focuses on theoretical and
> algorithmic aspects of machine learning, information theory, differential
> privacy, and statistics. He is a recipient of the 2017 Paul Baran Marconi
> Young Scholar award and a co-recipient of best paper awards at NeurIPS
> 2015, ALT 2020, CCS 2021, and a best paper honorable mention award at ICML
> 2017.
>
> *This talk is hosted by the ISL Colloquium
> . To receive talk announcements, subscribe
> to the mailing list islcolloq at lists.stanford.edu
> .*
> 
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk:
> http://isl.stanford.edu/talks/talks/2022q1/anandatheerthasuresh/
>
From moses at cs.stanford.edu Sun Mar 13 16:28:34 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Sun, 13 Mar 2022 16:28:34 -0700
Subject: [theoryseminar] topics class suggestions
Message-ID:
Hello theory and ML folks,
I plan to teach a topics class next year and I wanted to get some input on
what sorts of topics courses students might be interested in. I can't
promise I will offer exactly what you want, but this will certainly
influence my thinking, so it will be helpful to get some sense of student
interest. I have a couple of weeks to converge on a topic.
For a class to be viable, we need at least 8 students to register for it
and do the work.
For a topics class, this could involve some combination of reading papers
and writing a report/making a presentation, possibly doing a small number
of homework problems and scribing notes (less work if you are taking
the class CR/NC). I hope there will be more than the bare minimum of 8
students; it is a lot more fun to teach a class when there are more
people in the room (up to a limit, of course; I will have my fair share of
very large classes next year as well).
To contribute your suggestions, please use this shared doc:
https://docs.google.com/document/d/1gakhPCR81L1aINY5ayjHhJJ1RXHQzSnQzhzj1SqLtIA/edit?usp=sharing
Feel free to invent your own conventions to support previous suggestions
(it will be helpful to get a sense for how many students would be willing
to actually register for such a class).
FYI, I did a similar poll with the theory students back in 2018 and these
are some of the suggestions that came from there:
https://docs.google.com/document/d/1P7a1J8VWMGXTXjpcpDKXKW6sbvkIG7blOJhoDi42ak/edit?usp=sharing
Here is what the StatsML students came up with back then:
https://docs.google.com/document/d/15p3Qa5QCc48ABtZPA6w0NdjZ4DbDP5RwElwnpWPQ/edit?usp=sharing
I look forward to your feedback.
Cheers,
Moses
From jneu at stanford.edu Mon Mar 14 09:47:46 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Mon, 14 Mar 2022 09:47:46 -0700
Subject: [theoryseminar] "Palo Alto, We Have a Problem. There Is No Oracle!" – Amin Karbasi (Thu, 17-Mar @ 4:00pm)
Message-ID: <93567ea244c16c86e36dcba676a1e78308f1ac3d.camel@stanford.edu>
Palo Alto, We Have a Problem. There Is No Oracle!
Amin Karbasi – Professor, Yale University
Thu, 17-Mar / 4:00pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating).
Abstract
Artificial intelligence is fundamentally about making decisions under
uncertainty from a massive pool of possibilities, where combinatorial
techniques have long been central tools. Indeed, many scientific and
engineering models feature inherently discrete decision variables: from
phrases in a corpus to objects in an image. Similarly, nearly all
aspects of the machine learning pipeline involve discrete tasks from
data summarization and sketching to feature selection and model
explanation.
Classically, in order to design optimization methods, we usually assume
that the objective function is either fully known or accessible via an
oracle. In many modern applications, however, the objectives we aim to
optimize should be rather learned, estimated, or simulated from data, a
process that is subject to stochastic fluctuations. Moreover, it has
long been known that solutions obtained from combinatorial optimization
methods can demonstrate striking sensitivity to changes in the
parameters of the underlying problem. So, what are the guarantees of
the combinatorial algorithms we develop (and teach) when the perfect
oracle does not exist? In this talk, we will address this challenge and
build a fundamentally new connection between discrete and (non-convex)
continuous optimization that aims to lift the current provable methods
out of the sterile lab environment and scale them into the real world.
Bio
Amin Karbasi is currently an (untenured) associate professor of
Electrical Engineering, Computer Science, and Statistics & Data Science
at Yale University. He is also a research scientist at Google NY. He
has been the recipient of the National Science Foundation (NSF) Career
Award, Office of Naval Research (ONR) Young Investigator Award, Air
Force Office of Scientific Research (AFOSR) Young Investigator Award,
DARPA Young Faculty Award, National Academy of Engineering (NAE)
Grainger Award, Nokia Bell Labs Prize, Amazon Research Award, Google
Faculty Research Award, Microsoft Azure Research Award, Simons Research
Fellowship, and ETH Research Fellowship. His work on machine learning,
statistics, and signal processing has received awards in a number of
premier conferences and journals, including Medical Image Computing and
Computer-Assisted Interventions Conference (MICCAI), Facebook-MAIN
award from the AI-Neuroscience symposium, International Conference on
Artificial Intelligence and Statistics (AISTATS), IEEE Communications
Society Data Storage, International Conference on Acoustics, Speech,
and Signal Processing (ICASSP), ACM SIGMETRICS, and IEEE International
Symposium on Information Theory (ISIT). His Ph.D. work received the
Patrick Denantes Memorial Prize for the best doctoral thesis from the
School of Computer and Communication Sciences at EPFL, Switzerland.
This talk is hosted by the ISL Colloquium. To receive talk
announcements, subscribe to the mailing list islcolloq at lists.stanford.edu.
Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2022q1/aminkarbasi/
From junyaoz at stanford.edu Sun Mar 13 22:55:06 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 14 Mar 2022 05:55:06 +0000
Subject: [theoryseminar] Theory Lunch 3/17: Shivam Nadimpalli (Columbia
University)
Message-ID:
Hello everyone,
The last theory lunch of this quarter will take place this Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Shivam will tell us about: Approximating Sumset Size
Abstract: Given a subset [A] of the [n] -dimensional Boolean hypercube [\mathbb{F}_2^n] , the sumset [A+A] is the set [\{a+a': a, a' \in A\}] where addition is in [\mathbb{F}_2^n] . Sumsets play an important role in additive combinatorics, where they feature in many central results of the field.
The main result I will talk about is an algorithm for the problem of sumset size estimation. In more detail, the algorithm is given oracle access to (the indicator function of) an arbitrary [A \subseteq \mathbb{F}_2^n] and an accuracy parameter [\varepsilon > 0] , and with high probability it outputs a value [0 \leq v \leq 1] that is [\pm \varepsilon] -close to [\mathrm{Vol}(A' + A')] for some perturbation [A' \subseteq A] of [A] satisfying [\mathrm{Vol}(A \setminus A') \leq \varepsilon.] It is easy to see that without the relaxation of dealing with [A'] rather than [A] , any algorithm for estimating [\mathrm{Vol}(A+A)] to any nontrivial accuracy must make [2^{\Omega(n)}] queries. In contrast, we will (at a high level) describe how to obtain an algorithm whose query complexity depends only on [\varepsilon] and is completely independent of the ambient dimension [n] .
Based on joint work with Anindya De and Rocco Servedio.
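A brute-force check of the definitions above (my own toy code, not the talk's estimator): representing subsets of F_2^n as sets of n-bit integers, addition in F_2^n is bitwise XOR, and the exact sumset size takes |A|^2 work; with membership queries alone, exactness costs 2^{Omega(n)} queries, which is what the dimension-free approximation algorithm avoids.

```python
def sumset_size(A):
    # Exact |A + A| over F_2^n, with vectors encoded as integers and
    # addition given by XOR. Note a ^ a = 0, so for nonempty A the zero
    # vector always lies in A + A.
    return len({a ^ b for a in A for b in A})
```

For example, for A = {001, 010, 100} in F_2^3 the sumset is {000, 011, 101, 110}, so sumset_size({0b001, 0b010, 0b100}) returns 4.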
Cheers,
Junyao
From ktian6 at gmail.com Mon Mar 14 14:02:15 2022
From: ktian6 at gmail.com (Kevin Tian)
Date: Mon, 14 Mar 2022 14:02:15 0700
Subject: [theoryseminar] Practice job talk
InReplyTo:
References:
MessageID:
Hi,
Just wanted to send a reminder that this is happening tomorrow at 11 AM PST.
Zoom link:
https://stanford.zoom.us/j/92359543849?pwd=MjdCSFl1bkRVNzBYYzZEajRQK3NDZz09
Password: 124075
Anonymous feedback form:
https://docs.google.com/forms/d/e/1FAIpQLSeTCnGeGMRsh_xPiW4Cg_vk96feR3DeBfTbamz6KO9Q0yA88Q/viewform
Thanks,
Kevin
On Tue, Mar 8, 2022 at 4:28 PM Kevin Tian wrote:
> Hi all,
>
> I will be giving a practice job talk on 3/15 at 11-12 AM PST; you are
> all invited, and I would really appreciate it if you can make it. The talk
> will be virtual (since I'm not on campus right now), and closer to the date
> of the talk, I will send out an anonymous feedback form and a Zoom link.
>
> Thanks a lot in advance everyone! Here is the talk description.
>
> Title: Iterative Methods and HighDimensional Statistics
> Abstract: Algorithmic primitives such as stochastic gradient descent,
> discretized Langevin dynamics, regression, and clustering have emerged as
> powerful workhorses enabling many recent advances in data science and
> machine learning. These ubiquitous methods have wellunderstood analyses in
> classical regimes; however, in prominent modern applications going beyond
> these regimes, the theoretical guarantees of basic algorithms and their
> analyses may often leave much on the table. In this talk, I examine how
> tools originally developed in the fields of iterative methods and
> highdimensional statistics can be combined and reimagined to build a
> modern theory of reliable, scalable, and accurate algorithm design. As case
> studies, I will mainly focus on two lines of research I have pushed forward
> in my Ph.D. work, spanning the modern algorithmic theories of robust
> statistical estimation and highdimensional sampling.
>
> Cheers,
> Kevin
>
 next part 
An HTML attachment was scrubbed...
URL:
From haozewu at stanford.edu Mon Mar 14 16:42:15 2022
From: haozewu at stanford.edu (Haoze Wu)
Date: Mon, 14 Mar 2022 23:42:15 +0000
Subject: [theoryseminar] Student meeting with theory faculty candidate Alex
Lombardi
MessageID:
Hello everyone,
Alex Lombardi is another theory faculty candidate in the CS department this year. The student meeting with Alex will be tomorrow between 3:00-3:45 pm on Zoom. Feel free to come chat with Alex and ask him any questions! This should be interesting to folks working on cryptography.
Zoom link: https://stanford.zoom.us/my/hwu94305?pwd=TzM2RkVsNjV3RXptemZKU0RhUVRCZz09
Password: 35394305
Best wishes,
Andrew
 next part 
An HTML attachment was scrubbed...
URL:
From jneu at stanford.edu Thu Mar 17 11:05:55 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Thu, 17 Mar 2022 11:05:55 0700
Subject: [theoryseminar] "Palo Alto, We Have a Problem. There Is No
 Oracle!" - Amin Karbasi (Thu, 17-Mar @ 4:00pm)
InReplyTo: <93567ea244c16c86e36dcba676a1e78308f1ac3d.camel@stanford.edu>
References: <93567ea244c16c86e36dcba676a1e78308f1ac3d.camel@stanford.edu>
MessageID: <6939ae446f923d434b614f68ea39528859c2fec1.camel@stanford.edu>
Reminder: this talk will be today at 4pm via Zoom (link below).
Please join us for snacks at 3:30pm in the Grove outside Packard.
On Mon, 20220314 at 09:47 0700, Joachim Neu wrote:
> Palo Alto, We Have a Problem. There Is No Oracle!
> Amin Karbasi - Professor, Yale University
> Thu, 17-Mar / 4:00pm / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating).
> Abstract
> Artificial intelligence is fundamentally about making decisions under
> uncertainty from a massive pool of possibilities, where combinatorial
> techniques have long been central tools. Indeed, many scientific and
> engineering models feature inherently discrete decision
> variables, from phrases in a corpus to objects in an image. Similarly,
> nearly all aspects of the machine learning pipeline involve discrete
> tasks, from data summarization and sketching to feature selection and
> model explanation.
> Classically, in order to design optimization methods, we usually
> assume that the objective function is either fully known or
> accessible via an oracle. In many modern applications, however, the
> objectives we aim to optimize should be rather learned, estimated, or
> simulated from data, a process that is subject to stochastic
> fluctuations. Moreover, it has long been known that solutions
> obtained from combinatorial optimization methods can demonstrate
> striking sensitivity to changes in the parameters of the underlying
> problem. So, what are the guarantees of the combinatorial algorithms
> we develop (and teach) when the perfect oracle does not exist? In
> this talk, we will address this challenge and build a fundamentally
> new connection between discrete and (nonconvex) continuous
> optimizations that aim to lift the current provable methods out of
> the sterile lab environment and scale them into the real world.
> Bio
> Amin Karbasi is currently an (untenured) associate professor of
> Electrical Engineering, Computer Science, and Statistics & Data
> Science at Yale University. He is also a research scientist at Google
> NY. He has been the recipient of the National Science Foundation
> (NSF) Career Award, Office of Naval Research (ONR) Young Investigator
> Award, Air Force Office of Scientific Research (AFOSR) Young
> Investigator Award, DARPA Young Faculty Award, National Academy of
> Engineering (NAE) Grainger Award, Nokia Bell Labs Prize, Amazon
> Research Award, Google Faculty Research Award, Microsoft Azure
> Research Award, Simons Research Fellowship, and ETH Research
> Fellowship. His work on machine learning, statistics, and signal
> processing has received awards in a number of premier conferences and
> journals, including Medical Image Computing and ComputerAssisted
> Interventions Conference (MICCAI), FacebookMAIN award from AI
> Neuroscience symposium, International Conference on Artificial
> Intelligence and Statistics (AISTATS), IEEE Communications Society
> Data Storage, International Conference on Acoustics, Speech, and
> Signal Processing (ICASSP), ACM SIGMETRICS, and IEEE International
> Symposium on Information Theory (ISIT). His Ph.D. work received the
> Patrick Denantes Memorial Prize for the best doctoral thesis from the
> School of Computer and Communication Sciences at EPFL, Switzerland.
> This talk is hosted by the ISL Colloquium. To receive talk
> announcements, subscribe to the mailing list isl
> colloq at lists.stanford.edu.
>
> Mailing list:
> https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2022q1/aminkarbasi/
>
 next part 
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Wed Mar 16 21:54:24 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 17 Mar 2022 04:54:24 +0000
Subject: [theoryseminar] Theory Lunch 3/17: Shivam Nadimpalli (Columbia
University)
InReplyTo:
References:
MessageID:
A gentle reminder: This is happening in 10 minutes.
________________________________
From: Junyao Zhao
Sent: Sunday, March 13, 2022 10:55 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 3/17: Shivam Nadimpalli (Columbia University)
Hello everyone,
The last theory lunch of this quarter will take place this Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Shivam will tell us about: Approximating Sumset Size
Abstract: Given a subset [A] of the [n]-dimensional Boolean hypercube [\mathbb{F}_2^n], the sumset [A+A] is the set [\{a+a': a, a' \in A\}] where addition is in [\mathbb{F}_2^n]. Sumsets play an important role in additive combinatorics, where they feature in many central results of the field.
The main result I will talk about is an algorithm for the problem of sumset size estimation. In more detail, the algorithm is given oracle access to (the indicator function of) an arbitrary [A \subseteq \mathbb{F}_2^n] and an accuracy parameter [\epsilon > 0], and with high probability it outputs a value [0 \leq v \leq 1] that is [\pm \varepsilon]-close to [\mathrm{Vol}(A' + A')] for some perturbation [A' \subseteq A] of [A] satisfying [\mathrm{Vol}(A \setminus A') \leq \varepsilon.] It is easy to see that without the relaxation of dealing with [A'] rather than [A], any algorithm for estimating [\mathrm{Vol}(A+A)] to any nontrivial accuracy must make [2^{\Omega(n)}] queries. In contrast, we will (at a high level) describe how to obtain an algorithm whose query complexity depends only on [\epsilon] and is completely independent of the ambient dimension [n].
Based on joint work with Anindya De and Rocco Servedio.
Cheers,
Junyao
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Thu Mar 17 12:03:08 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 17 Mar 2022 12:03:08 0700
Subject: [theoryseminar] [New Location] max flow in near linear time:
Tomorrow, 2-4pm
InReplyTo:
References:
MessageID:
Theory friends,
Quick reminder that Yang Liu and Li Chen will tell us about their new near
linear time max flow result *tomorrow (Fri Mar 18), 2-4pm.*
Please note the new location: *Gates 259*
Li and Yang plan to have about 30 min of slides first; the rest of the talk
will be on the whiteboard.
The room is a little smaller than our pre-pandemic theory seminar location,
with about 20 chairs and room for 30 people, so come on time!
Cheers,
Moses
On Sun, Mar 6, 2022 at 4:53 PM Moses Charikar wrote:
> Theory folks,
>
> Mark your calendars: Our very own Yang Liu and Li Chen (visiting from
> Georgia Tech) will tell us about their new breakthrough result on max flow
> and min-cost flows on Friday, Mar 18, 2-4pm in Gates 415.
>
> Title and abstract are below. Hope to see you there!
>
> Cheers,
> Moses
>
> Title: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
>
> Abstract: We give an algorithm that computes exact maximum flows and
> minimum-cost flows on directed graphs with $m$ edges and polynomially
> bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our
> algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate
> undirected minimum-ratio cycles, each of which is computed and processed in
> amortized $m^{o(1)}$ time using a dynamic data structure.
>
> Our framework extends to an algorithm running in $m^{1+o(1)}$ time for
> computing flows that minimize general edge-separable convex functions to
> high accuracy. This gives an almost-linear time algorithm for several
> problems including entropy-regularized optimal transport, matrix scaling,
> $p$-norm flows, and isotonic regression.
>
> Joint work with Rasmus Kyng, Richard Peng, Maximilian Probst Gutenberg,
> and Sushant Sachdeva.
>
> https://arxiv.org/abs/2203.00671
>
 next part 
An HTML attachment was scrubbed...
URL:
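(An aside for readers following along: the $m^{1+o(1)}$ algorithm in the abstract above is far from a few lines of code. As a runnable point of comparison, here is the classical Edmonds-Karp method, a textbook baseline that the new result improves on asymptotically; this is not the authors' algorithm.)

```python
from collections import deque, defaultdict

def max_flow(edges, s, t):
    """Classical Edmonds-Karp: repeatedly augment along shortest residual paths.

    edges: list of (u, v, capacity) for a directed graph with integer capacities.
    Returns the value of a maximum s-t flow.
    """
    cap = defaultdict(int)   # residual capacities, keyed by (u, v)
    adj = defaultdict(set)   # adjacency including reverse (residual) arcs
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Recover the path, find its bottleneck, and push flow along it.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

# Two disjoint 0 -> 3 paths of capacities 1 and 2:
print(max_flow([(0, 1, 1), (1, 3, 1), (0, 2, 2), (2, 3, 2)], 0, 3))  # 3
```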
From vc.lecomte at gmail.com Thu Mar 17 15:49:48 2022
From: vc.lecomte at gmail.com (Victor Lecomte)
Date: Thu, 17 Mar 2022 15:49:48 0700
Subject: [theoryseminar] Theory Lunch 3/17: Shivam Nadimpalli (Columbia
University)
InReplyTo:
References:
MessageID:
If you liked Shivam's talk, you should consider attending his talk *tomorrow
12-1pm in Gates 287*. It will be a whiteboard talk with no prerequisites,
where he'll tell us about "convex influences" (a deep link between monotone
Boolean functions and convex bodies over Gaussian space).
Hope to see you there!
Victor
On Thu, Mar 17, 2022 at 11:50 AM, Junyao Zhao wrote:
> A gentle reminder: This is happening in 10 minutes.
>
> 
> *From:* Junyao Zhao
> *Sent:* Sunday, March 13, 2022 10:55 PM
> *To:* theoryseminar at lists.stanford.edu ;
> thseminar at cs.stanford.edu
> *Subject:* [theoryseminar] Theory Lunch 3/17: Shivam Nadimpalli
> (Columbia University)
>
> Hello everyone,
>
> The last theory lunch of this quarter will take place this Thursday at
> noon in the Engineering Quad.
> We'll start with some socializing, followed by a talk at 12:30pm. Shivam will
> tell us about: *Approximating Sumset Size*
>
> *Abstract:* Given a subset [A] of the [n]-dimensional Boolean hypercube
> [\mathbb{F}_2^n], the sumset [A+A] is the set [\{a+a': a, a' \in A\}]
> where addition is in [\mathbb{F}_2^n]. Sumsets play an important role in
> additive combinatorics, where they feature in many central results of the
> field.
>
> The main result I will talk about is an algorithm for the problem of
> sumset size estimation. In more detail, the algorithm is given oracle
> access to (the indicator function of) an arbitrary
> [A \subseteq \mathbb{F}_2^n] and an accuracy parameter [\epsilon > 0], and
> with high probability it outputs a value [0 \leq v \leq 1] that is
> [\pm \varepsilon]-close to [\mathrm{Vol}(A' + A')] for some perturbation
> [A' \subseteq A] of [A] satisfying
> [\mathrm{Vol}(A \setminus A') \leq \varepsilon.] It is easy to see that
> without the relaxation of dealing with [A'] rather than [A], any
> algorithm for estimating [\mathrm{Vol}(A+A)] to any nontrivial accuracy
> must make [2^{\Omega(n)}] queries. In contrast, we will (at a high level)
> describe how to obtain an algorithm whose query complexity depends only
> on [\epsilon] and is completely independent of the ambient dimension [n].
>
> Based on joint work with Anindya De and Rocco Servedio.
>
> Cheers,
> Junyao
>
> _______________________________________________
> theoryseminar mailing list
> theoryseminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theoryseminar
>
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Fri Mar 18 12:13:22 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 18 Mar 2022 12:13:22 0700
Subject: [theoryseminar] [New Location] max flow in near linear time:
Tomorrow, 2-4pm
InReplyTo:
References:
MessageID:
Theory folks,
Quick reminder that we will have a special seminar at 2pm today (details
below).
If you are unable to join in person, but would like to follow along, you
can join virtually via zoom:
https://stanford.zoom.us/j/93456697960?pwd=TXJqU1hpV09tL2E1NDViMHVsblBGUT09
Cheers,
Moses
On Thu, Mar 17, 2022 at 12:03 PM Moses Charikar
wrote:
> Theory friends,
>
> Quick reminder that Yang Liu and Li Chen will tell us about their new near
> linear time max flow result *tomorrow (Fri Mar 18), 2-4pm.*
>
> Please note the new location: *Gates 259*
>
> Li and Yang plan to have about 30 min of slides first; the rest of the
> talk will be on the whiteboard.
>
> The room is a little smaller than our pre-pandemic theory seminar
> location, with about 20 chairs and room for 30 people, so come on time!
>
> Cheers,
> Moses
>
>
> On Sun, Mar 6, 2022 at 4:53 PM Moses Charikar
> wrote:
>
>> Theory folks,
>>
>> Mark your calendars: Our very own Yang Liu and Li Chen (visiting from
>> Georgia Tech) will tell us about their new breakthrough result on max flow
>> and min-cost flows on Friday, Mar 18, 2-4pm in Gates 415.
>>
>> Title and abstract are below. Hope to see you there!
>>
>> Cheers,
>> Moses
>>
>> Title: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
>>
>> Abstract: We give an algorithm that computes exact maximum flows and
>> minimum-cost flows on directed graphs with $m$ edges and polynomially
>> bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our
>> algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate
>> undirected minimum-ratio cycles, each of which is computed and processed in
>> amortized $m^{o(1)}$ time using a dynamic data structure.
>>
>> Our framework extends to an algorithm running in $m^{1+o(1)}$ time for
>> computing flows that minimize general edge-separable convex functions to
>> high accuracy. This gives an almost-linear time algorithm for several
>> problems including entropy-regularized optimal transport, matrix scaling,
>> $p$-norm flows, and isotonic regression.
>>
>> Joint work with Rasmus Kyng, Richard Peng, Maximilian Probst Gutenberg,
>> and Sushant Sachdeva.
>>
>> https://arxiv.org/abs/2203.00671
>>
>
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Sun Mar 20 14:15:53 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Sun, 20 Mar 2022 14:15:53 0700
Subject: [theoryseminar] [New Location] max flow in near linear time:
Tomorrow, 2-4pm
InReplyTo:
References:
MessageID:
Theory friends,
For those of you who couldn't make it, a recording of the talk is now
available here:
https://youtu.be/KsMtVthpkzI
Cheers,
Moses
On Fri, Mar 18, 2022 at 12:13 PM Moses Charikar
wrote:
> Theory folks,
>
> Quick reminder that we will have a special seminar at 2pm today (details
> below).
> If you are unable to join in person, but would like to follow along, you
> can join virtually via zoom:
> https://stanford.zoom.us/j/93456697960?pwd=TXJqU1hpV09tL2E1NDViMHVsblBGUT09
>
> Cheers,
> Moses
>
> On Thu, Mar 17, 2022 at 12:03 PM Moses Charikar
> wrote:
>
>> Theory friends,
>>
>> Quick reminder that Yang Liu and Li Chen will tell us about their new
>> near linear time max flow result *tomorrow (Fri Mar 18), 2-4pm.*
>>
>> Please note the new location: *Gates 259*
>>
>> Li and Yang plan to have about 30 min of slides first; the rest of the
>> talk will be on the whiteboard.
>>
>> The room is a little smaller than our pre-pandemic theory seminar
>> location, with about 20 chairs and room for 30 people, so come on time!
>>
>> Cheers,
>> Moses
>>
>>
>> On Sun, Mar 6, 2022 at 4:53 PM Moses Charikar
>> wrote:
>>
>>> Theory folks,
>>>
>>> Mark your calendars: Our very own Yang Liu and Li Chen (visiting from
>>> Georgia Tech) will tell us about their new breakthrough result on max flow
>>> and min-cost flows on Friday, Mar 18, 2-4pm in Gates 415.
>>>
>>> Title and abstract are below. Hope to see you there!
>>>
>>> Cheers,
>>> Moses
>>>
>>> Title: Maximum Flow and Minimum-Cost Flow in Almost-Linear Time.
>>>
>>> Abstract: We give an algorithm that computes exact maximum flows and
>>> minimum-cost flows on directed graphs with $m$ edges and polynomially
>>> bounded integral demands, costs, and capacities in $m^{1+o(1)}$ time. Our
>>> algorithm builds the flow through a sequence of $m^{1+o(1)}$ approximate
>>> undirected minimum-ratio cycles, each of which is computed and processed in
>>> amortized $m^{o(1)}$ time using a dynamic data structure.
>>>
>>> Our framework extends to an algorithm running in $m^{1+o(1)}$ time for
>>> computing flows that minimize general edge-separable convex functions to
>>> high accuracy. This gives an almost-linear time algorithm for several
>>> problems including entropy-regularized optimal transport, matrix scaling,
>>> $p$-norm flows, and isotonic regression.
>>>
>>> Joint work with Rasmus Kyng, Richard Peng, Maximilian Probst Gutenberg,
>>> and Sushant Sachdeva.
>>>
>>> https://arxiv.org/abs/2203.00671
>>>
>>
 next part 
An HTML attachment was scrubbed...
URL:
From tselil at stanford.edu Tue Mar 22 13:02:06 2022
From: tselil at stanford.edu (Tselil Schramm)
Date: Tue, 22 Mar 2022 13:02:06 0700
Subject: [theoryseminar] Course announcement: The SumofSquares
Algorithmic Paradigm in Statistics
MessageID:
Hello theorists & friends,
I am teaching a graduate course this coming quarter (STATS 314A), "the
SumofSquares Algorithmic Paradigm in Statistics." I am advertising it
here as I hope it will be of interest to many on this list. Here is the
course website + syllabus.
To some of you, the theme may be familiar from my STATS 319 offering from
Winter 2021. Even though there will be overlap in content, I expect this
edition will be quite different: the format will be lectures rather than
student presentations, the pace will be more leisurely, and the scope will
be broader (these seemingly contradictory assertions are possible because
we'll have 3x more time).
Hope you are having a lovely Spring Break!
Tselil
 next part 
An HTML attachment was scrubbed...
URL:
From saberi at stanford.edu Wed Mar 23 15:13:16 2022
From: saberi at stanford.edu (Amin Saberi)
Date: Wed, 23 Mar 2022 22:13:16 +0000
Subject: [theoryseminar] A course on Matching Theory
MessageID:
Dear friends,
I am teaching the following course in the Spring quarter:
Amin
MS&E 319: Matching Theory
Amin Saberi
T Th 01:30p03:00p
Shriram Center for BioChemE, Rm 108
The theory of matching, with its roots in the work of mathematical giants like Euler and Kirchhoff, has played a central and catalytic role in combinatorial optimization for decades. More recently, the growth of online marketplaces for allocating advertisements, rides, or other goods and services has led to new interest and progress in this area.
The course starts with classic results characterizing matchings in bipartite and general graphs and explores connections with algebraic graph theory and discrete probability. Those results are complemented with models and algorithms developed for modern applications in market design, online advertising, and ridesharing.
Topics include:
Matching, determinant, and Pfaffian
Matching and polynomial identity testing
Isolating lemma and matrix inversion, matching in RNC
Combinatorial and polyhedral characterizations
The assignment problem and its dual, primaldual, and auction algorithms
Tutte's theorem, Edmonds' LP, and the Blossom algorithm
The Gallai-Edmonds decomposition, Berge-Tutte formula, and applications in Nash bargaining
The stable marriage problem
Gale-Shapley theorem, incentive and fairness issues
LP characterization, counting stable matchings
Matching in dynamic environments
Online matching under various arrival models
Applications in ridesharing and online advertising
Computation of matching
Combinatorial vs continuous algorithms, near-linear time algorithms
Matchings in sublinear time, streaming computational models
Sparsifiers and stochastic matching
Counting matchings
The Van der Waerden conjecture, the Bregman-Minc inequality
Deterministic approximations, counting matchings in planar graphs
Markov chain Monte Carlo algorithms, sequential importance sampling
The Ising model, applications, and basic properties
The matching polynomial and its roots
Heilmann-Lieb theorem, the maximum root of the matching polynomial
Tree-like walks, 2-lifts, and the Bilu-Linial conjecture
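(An aside for readers new to the area: the deferred-acceptance algorithm behind the Gale-Shapley theorem on the syllabus above fits in a few lines. This is a textbook sketch, not course material.)

```python
def gale_shapley(prop_prefs, recv_prefs):
    """Proposer-optimal stable matching via deferred acceptance.

    prop_prefs[p] and recv_prefs[r] are preference lists (most preferred first)
    over the other side, both sides indexed 0..n-1.
    Returns a dict mapping each proposer to their matched receiver.
    """
    n = len(prop_prefs)
    # rank[r][p] = position of proposer p in receiver r's preference list.
    rank = [{p: i for i, p in enumerate(prefs)} for prefs in recv_prefs]
    next_choice = [0] * n   # next receiver each proposer will propose to
    current = {}            # receiver -> proposer currently engaged
    free = list(range(n))   # stack of unmatched proposers
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in current:
            current[r] = p
        elif rank[r][p] < rank[r][current[r]]:   # r prefers p to its partner
            free.append(current[r])
            current[r] = p
        else:
            free.append(p)                       # r rejects p
    return {p: r for r, p in current.items()}

# A small 3x3 instance (indices are hypothetical, for illustration only):
print(gale_shapley(
    [[0, 1, 2], [1, 0, 2], [0, 2, 1]],
    [[1, 0, 2], [0, 1, 2], [2, 1, 0]],
))  # {0: 0, 1: 1, 2: 2}
```

The returned matching is stable: no proposer-receiver pair both prefer each other to their assigned partners, which is exactly the guarantee the Gale-Shapley theorem provides.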
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Thu Mar 24 12:19:25 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 24 Mar 2022 12:19:25 0700
Subject: [theoryseminar] Reading group on Graph ML and GNNs
MessageID:
Dear theory friends,
Amin, Jure and I, together with a few students, are organizing a reading
group on Graph Machine Learning, focusing on Graph Neural Networks, and we
would love for you to join us!
The time we set for the Spring quarter is 4:30-6 pm on Tuesdays, starting
next week. We have tentatively reserved Gates 104 (with a capacity of 17).
Please let Amin and me know if you would like to join and if you have any
questions.
All the best,
Moses (and Amin and Jure)
 next part 
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Fri Mar 25 09:17:18 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Fri, 25 Mar 2022 09:17:18 0700
Subject: [theoryseminar] Friday March 25th,
1pm: quantitative models in a fully integrated healthcare system
InReplyTo:
References:
MessageID:
This is happening today at 1pm. It is a unique perspective on the
challenges and opportunities of causality and predictive models in
healthcare. Many of the examples are from some of the highest impact
scientific research on public health (e.g., effectiveness of vaccination)
during the COVID pandemic. (A tiny but personally exciting part describes
the way multicalibration (born at Stanford) was deployed in this context.)
Highly recommended,
Omer
On Tue, Mar 8, 2022 at 6:57 PM Omer Reingold wrote:
> Noa and Noam (bios below) will give a talk on Friday March 25th at 1pm in
> the Fujitsu conference room (on the 4th floor in Gates). Details on the
> talk are below and (based on their past talks) I highly recommend
> attending. The speakers are experts in public health care with a very
> wide and deep education and a tremendous openness to adopting advanced CS
> research in reallife health care systems (through one of the largest
> healthcare providers in the world). They have been involved in some of the
> highest profile COVID research publications (e.g., on the effectiveness of
> vaccines) and I was lucky to have them apply some of our
> algorithmic-fairness work. Feel free to disseminate this invitation to
> anybody who may be interested.
>
> Best wishes
> Omer
>
> *Title*: Opportunities for the application of quantitative models in a
> fully integrated healthcare system
>
> *Abstract*
>
> Health insurance in Israel is mandatory, comprehensive in its list of
> services, and provided by four
> integrated payer-provider organizations. Clalit Health Services is the
> largest of these organizations,
> responsible for the care of over half of the Israeli population. Most of
> this care (outpatient and
> inpatient) is directly provided by Clalit, and the rest is purchased by
> Clalit. All services provided or
> purchased are stored in a single comprehensive analytic data warehouse.
> Our talk will focus on the
> opportunities that such an integrated system and its data offers in using
> quantitative models for
> state-of-the-art research and digital healthcare interventions.
> We will discuss the two main quantitative tools used for digital
> healthcare: causal inference and
> prediction models. We will show how the depth and immediacy of the data
> allowed the conduct of
> causal research that provided necessary and timely information regarding
> the effectiveness and
> safety of mRNA Covid-19 vaccines. We will also show how such unique data
> enabled us to study an
> often-overlooked aspect of the vaccination: indirect protective effects.
> We will also demonstrate
> how this data can be used for promoting predictive, proactive, and
> personalized care. We will
> demonstrate how prediction models are created and how they are integrated
> into the point of care.
>
> Noam Barda holds an MD from Tel-Aviv University, a PhD co-advised in
> public health and computer
> science from Ben-Gurion University and a BSc in computer science from the
> Open University. He
> completed his postdoctorate in the Department of Biomedical Informatics
> (DBMI) at Harvard
> Medical School. He is the head of the Real-World Evidence Research and
> Innovation Lab at Tel HaShomer medical center, Israel's largest hospital,
> and co-heads the
> Digital Healthcare Laboratory in
> the department of Software and Information Systems Engineering at
> Ben-Gurion University.
> Noa Dagan holds an MD and an MPH from the Hebrew University, and a PhD in
> Computer Science
> from Ben-Gurion University. She completed her postdoctorate in the
> Department of Biomedical
> Informatics (DBMI) at Harvard Medical School. Dr. Dagan is currently the
> director of the AI-driven
> Medicine Department in Clalit Innovation and the Clalit Research
> Institute, and co-heads the Digital
> Healthcare Laboratory in the department of Software and Information
> Systems Engineering at Ben-Gurion University.
>
 next part 
An HTML attachment was scrubbed...
URL:
From jneu at stanford.edu Fri Mar 25 13:01:27 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Fri, 25 Mar 2022 13:01:27 0700
Subject: [theoryseminar] "Beyond the Csiszár-Körner Bound: Best-Possible
 Wiretap Coding via Obfuscation" - Amit Sahai (Thu, 31-Mar @ 4:00pm)
MessageID:
Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via Obfuscation
Amit Sahai - Professor, UCLA
Thu, 31-Mar / 4:00pm / Packard 101 (in person)
Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on
Zoom for those unable to attend in person:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables Alice
to reliably communicate a message m to an honest Bob by sending an
encoding c over a noisy channel chB, while at the same time hiding m
from Eve who receives c over another noisy channel chE.
Wiretap coding is clearly impossible when chB is a degraded version of
chE, in the sense that the output of chB can be simulated using only
the output of chE. A classic work of Csiszár and Körner (IEEE Trans.
Inf. Theory, 1978) shows that the converse does not hold. This follows
from their full characterization of the channel pairs (chB, chE) that
enable informationtheoretic wiretap coding.
In this work, we show that in fact the converse does hold when
considering computational security; that is, wiretap coding against a
computationally bounded Eve is possible if and only if chB is not a
degraded version of chE. Our construction assumes the existence of
virtual black-box (VBB) obfuscation of specific classes of ``evasive''
functions that generalize fuzzy point functions, and can be
heuristically instantiated using indistinguishability obfuscation.
Finally, our solution has the appealing feature of being universal in
the sense that Alice's algorithm depends only on chB and not on chE.
Joint work with Yuval Ishai, Alexis Korb, and Paul Lou.
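(An aside to make the setting concrete: the simplest instantiation models both channels as binary symmetric channels, where BSC(p) flips each bit independently with probability p. If Bob's flip probability is smaller than Eve's, chB is not a degraded version of chE. The toy simulation below illustrates only the channels, not the obfuscation-based construction; the 0.05 and 0.20 noise rates are arbitrary.)

```python
import random

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each bit independently with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def bit_error_rate(sent, received):
    """Fraction of positions where the received word differs from the sent word."""
    return sum(a != b for a, b in zip(sent, received)) / len(sent)

rng = random.Random(0)
codeword = [rng.randrange(2) for _ in range(100_000)]  # Alice's encoding c
bob = bsc(codeword, 0.05, rng)   # Bob receives c over chB = BSC(0.05)
eve = bsc(codeword, 0.20, rng)   # Eve receives c over chE = BSC(0.20)
print(bit_error_rate(codeword, bob))  # ~0.05
print(bit_error_rate(codeword, eve))  # ~0.20
```

A wiretap coding scheme exploits exactly this gap: reliability over the less noisy chB while Eve's noisier view of c leaks nothing about m.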
Bio
Amit Sahai is a Simons Investigator (2021), Fellow of the ACM (2018)
and a Fellow of the IACR (2019). He is also a Fellow of the Royal
Society of Arts (2021), and Advisor to the Prison Mathematics Project.
He is the incumbent of the Symantec Endowed Chair in Computer Science.
He received his Ph.D. in Computer Science from MIT in 2000. From 2000
to 2004, he was on the faculty at Princeton University; in 2004 he
joined the UCLA Samueli School of Engineering, where he currently holds
the position of Professor of Computer Science. He serves as an editor
of J. Cryptology (SpringerNature). His research interests are in
security and cryptography, and theoretical computer science more
broadly. He is the co-inventor of Attribute-Based Encryption,
Functional Encryption, and Indistinguishability Obfuscation. He has
published more than 150 original technical research papers at venues
such as the ACM Symposium on Theory of Computing (STOC), CRYPTO, and
the Journal of the ACM. He has given a number of invited talks at
institutions such as MIT, Stanford, and Berkeley, including the 2004
Distinguished Cryptographer Lecture Series at NTT Labs, Japan.
Professor Sahai is the recipient of numerous honors; he was named an
Alfred P. Sloan Foundation Research Fellow in 2002, received an Okawa
Research Grant Award in 2007, a Xerox Foundation Faculty Award in 2010,
a Google Faculty Research Award in 2010, a 2012 Pazy Memorial Award, a
2016 ACM CCS Test of Time Award, a 2019 AWS Machine Learning Research
Award, a 2020 IACR Test of Time Award (Eurocrypt), and a STOC 2021 Best
Paper Award. For his contributions to the conception and development of
indistinguishability obfuscation, he was awarded the 2022 Held Prize by
the National Academy of Sciences. For his teaching, he was given the
2016 Lockheed Martin Excellence in Teaching Award from the Samueli
School of Engineering at UCLA. His research has been covered by several
news agencies including the BBC World Service, Quanta Magazine, Wired,
and IEEE Spectrum.
This talk is hosted by the ISL Colloquium. To receive talk
announcements, subscribe to the mailing list islcolloq at lists.stanford.edu.
Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2022q2/amitsahai/
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Mon Mar 28 08:26:36 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Mon, 28 Mar 2022 08:26:36 -0700
Subject: [theoryseminar] course announcement: Algorithmic Techniques for
Big Data
MessageID:
Dear theory friends,
If you would like to learn cool theory tools for big data, you might be
interested in the course I'm teaching this quarter (details below).
Cheers,
Moses
PS Apologies if you got duplicate copies of this course announcement

CS 368: Algorithmic Techniques for Big Data
Instructor: Moses Charikar
Spring 2022
Mon-Wed 9:45-11:15
Mitchell Earth Sciences B67
Description:
Designing algorithms for efficient processing of large data sets poses
unique challenges. This course will discuss algorithmic paradigms that have
been developed to efficiently process data sets that are much larger than
available memory. We will cover streaming algorithms and sketching methods
that produce compact data structures, dimension reduction methods that
preserve geometric structure, efficient algorithms for numerical linear
algebra, graph sparsification methods, as well as impossibility results for
these techniques.
Tentative Syllabus:
1. Sketching and Streaming algorithms for basic statistics:
- Distinct elements, heavy hitters, frequency moments, p-stable sketches
2. Dimension Reduction
- Johnson-Lindenstrauss lemma, lower bounds and impossibility results
3. Graph stream algorithms
- connectivity, cut/spectral sparsifiers, spanners, matching, graph sketching
4. Lower bounds for Sketching and Streaming:
- communication complexity: Equality, Index and Set-Disjointness
5. Locality Sensitive Hashing
- similarity estimation, approximate nearest neighbor search, data-dependent hashing
6. Fast Approximate Numerical Linear Algebra
- matrix multiplication, low-rank approximation, subspace embeddings, least squares regression
7. Massively Parallel Computing Models
- MST, connected components, matching, and submodular optimization in the MapReduce model
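As a small taste of the first syllabus topic, here is a minimal sketch (my own illustration, not course material) of a classic streaming estimator for the number of distinct elements in a stream, the k-minimum-values (KMV) sketch, which stores only k hash values regardless of stream length:

```python
# KMV distinct-elements sketch (illustrative, not course material):
# hash each item to [0,1), keep the k smallest hash values; if the k-th
# smallest is v, then ~k uniform points fell in [0,v], so the number of
# distinct items is roughly (k - 1) / v.
import hashlib
import heapq

def h(x):
    """Hash an item to a float in [0, 1)."""
    d = hashlib.sha256(str(x).encode()).digest()
    return int.from_bytes(d[:8], "big") / 2**64

def kmv_estimate(stream, k=256):
    smallest = []  # max-heap (negated values) of the k smallest hashes
    seen = set()   # hash values currently stored, to skip duplicates
    for x in stream:
        v = h(x)
        if v in seen:
            continue
        if len(smallest) < k:
            heapq.heappush(smallest, -v)
            seen.add(v)
        elif v < -smallest[0]:
            evicted = -heapq.heapreplace(smallest, -v)
            seen.discard(evicted)
            seen.add(v)
    if len(smallest) < k:
        return len(smallest)  # fewer than k distinct items seen: exact count
    return (k - 1) / (-smallest[0])

stream = [i % 10_000 for i in range(100_000)]  # 10,000 distinct items
print(round(kmv_estimate(stream)))  # estimate should land near 10,000
```

The relative error is about 1/sqrt(k), so memory trades off directly against accuracy; duplicates cost nothing because identical items hash identically.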
Course website: https://cs368stanford.github.io/spring2022/
Units: 3
Assignments:
3 problem sets (1 can be replaced by a programming assignment) and a research
project.
Prerequisites:
Students will be expected to have a strong background in algorithms and
probability.
 next part 
An HTML attachment was scrubbed...
URL:
From jacobfox at stanford.edu Tue Mar 29 00:11:24 2022
From: jacobfox at stanford.edu (Jacob Fox)
Date: Tue, 29 Mar 2022 07:11:24 +0000
Subject: [theoryseminar] Course announcement: Math 233C Topics in
Combinatorics
MessageID:
This spring quarter on Tuesdays and Thursdays from 9:45am to 11:15am I will be teaching the graduate course Math 233C Topics in Combinatorics in building 540, room 540-108.
A brief description: Math 233C is a graduate class on Extremal Combinatorics and Ramsey theory. Important methods, results and open problems will be highlighted.
The attached syllabus contains more detailed information on the course.
Cheers,
Jacob Fox
 next part 
An HTML attachment was scrubbed...
URL:
 next part 
A nontext attachment was scrubbed...
Name: Math 233C Syllabus Spring 2022.pdf
Type: application/pdf
Size: 112328 bytes
Desc: Math 233C Syllabus Spring 2022.pdf
URL:
From junyaoz at stanford.edu Tue Mar 29 10:07:43 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Tue, 29 Mar 2022 17:07:43 +0000
Subject: [theoryseminar] Theory Lunch Spring Edition 3/31: Esty Kelman
MessageID:
Hi everyone,
Hope you had a nice spring break!
The spring edition of theory lunch will begin this Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Esty will tell us about: Analysis of Boolean functions: KKL Theorem via Random Restrictions and Log-Sobolev Inequality
Abstract: A Boolean function over the n-dimensional binary hypercube is a function f of the form f:{0,1}^n -> {0,1}. The KKL Theorem [STOC'88], a fundamental result in the field named after Kahn, Kalai, and Linial, states the following:
Every balanced Boolean function f on n variables has a variable with influence at least Omega(log n / n) on the value of f. The theorem was proved using Fourier analysis and other novel techniques that are still widely used today. A key ingredient in the proof is the hypercontractive inequality, which has become a central tool in many proofs in this field. As the hypercontractive inequality is sometimes considered a bit counterintuitive, researchers have tried to avoid it and to develop alternative tools that yield the same, and even stronger, results.
We establish an alternative proof technique for the KKL Theorem, as well as for other related results, that avoids the hypercontractive inequality. Instead, we apply a suitable random restriction to the function, i.e., we choose a random set of variables and fix them to random values, and then apply the Log-Sobolev inequality. (A particular case of the Log-Sobolev inequality is the edge isoperimetric inequality, which states that every small subset of vertices of the n-dimensional binary hypercube has many outgoing edges connecting it to the rest of the graph.)
Our proofs might serve as an alternative, uniform exposition of these theorems, and the techniques might be useful for obtaining further results.
In this talk, we will prove a special case of the KKL Theorem using our new approach and demonstrate how to apply the random restriction and the edge isoperimetric inequality.
Based on a joint work with Subhash Khot, Guy Kindler, Dor Minzer and Muli Safra.
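To make the notion of influence concrete, here is a small brute-force illustration (my own, not part of the talk): Inf_i(f) is the fraction of inputs on which flipping coordinate i changes f, and KKL guarantees that some coordinate of a balanced f has influence Omega(log n / n).

```python
# Brute-force computation of variable influences (illustrative only):
# Inf_i(f) = Pr_x[ f(x) != f(x with bit i flipped) ].
from itertools import product
from math import log

def influences(f, n):
    inf = [0] * n
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != fx:
                inf[i] += 1
    return [c / 2**n for c in inf]

n = 9
majority = lambda x: int(sum(x) > n // 2)
parity   = lambda x: sum(x) % 2

for name, f in [("majority", majority), ("parity", parity)]:
    inf = influences(f, n)
    # Total influence equals the edge boundary of f^{-1}(1), normalized by
    # 2^{n-1}; the edge isoperimetric inequality lower-bounds it.
    print(f"{name}: max influence {max(inf):.3f}, "
          f"total {sum(inf):.3f}, log n / n = {log(n)/n:.3f}")
```

For majority on 9 bits every influence equals C(8,4)/2^8 ~ 0.273 (a coordinate matters exactly when the other eight split 4-4), comfortably above log n / n ~ 0.244, while parity has every influence equal to 1.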
By the way, we have some slots available for talks (4/21, 5/5, 5/19, and 6/2). If you want to share some cool math with the group, please let me know.
Wishing you a great start of the spring quarter!
Cheers,
Junyao
 next part 
An HTML attachment was scrubbed...
URL:
From saberi at stanford.edu Wed Mar 30 08:46:54 2022
From: saberi at stanford.edu (Amin Saberi)
Date: Wed, 30 Mar 2022 15:46:54 +0000
Subject: [theoryseminar] Christos Papadimitriou speaking tomorrow at 4:30 pm
MessageID:
Hello everyone,
Christos Papadimitriou is speaking in a special RAIN seminar tomorrow at 4:30. I expect it to be a very interesting talk on a fascinating topic.
See below.
Amin
4:30 PM, Thursday, March 31st
Gates 403 Fujitsu Conference Room
Title: How does the brain beget the mind?
Speaker: Christos Papadimitriou, Columbia University
Abstract: There is no doubt that cognition and intelligence are the results of neural activity - but how, exactly? How do molecules, neurons, and synapses give rise to reasoning, language, plans, stories, art, math? Despite dazzling progress in experimental neuroscience, as well as in cognitive science, we do not seem to be making progress on the overarching question. As Richard Axel recently put it in an interview: "We don't have a logic for the transformation of neuronal activity to thought and action. I view discerning [this] logic as the most important future direction of neuroscience". What kind of formal system would qualify as this "logic"?
I will introduce a computational system whose basic data structure is the assembly of neurons - assemblies are large populations of neurons representing concepts, words, ideas, episodes, etc. The Assembly Calculus is biologically plausible in the sense that its primitives are properties of assemblies observed in experiments, or useful for explaining other experiments, and can be provably (through both mathematical proof and simulations in biologically realistic platforms) "compiled down" to the activity of neurons and synapses. Experiments show that this programming framework can simulate - exclusively through the spiking of neurons - high-level cognitive functions, such as parsing natural language and planning in the blocks world. We believe that this formalism is well-positioned to help in bridging the gap between the brain and the mind.
Bio: One of the world's leading computer science theorists, Christos Papadimitriou is best known for his work in computational complexity, helping to expand its methodology and reach. He has also explored other fields through what he calls the algorithmic lens, having contributed to biology and the theory of evolution, economics, and game theory (where he helped found the field of algorithmic game theory), artificial intelligence, robotics, networks and the Internet, and more recently the study of the brain.
He authored the widely used textbook Computational Complexity, as well as four others, and has written three novels, including the bestselling Logicomix and his latest, Independence. Papadimitriou has been awarded the Knuth Prize, IEEE's John von Neumann Medal, the EATCS Award, the IEEE Computer Society Charles Babbage Award, and the Gödel Prize. He is a fellow of the Association for Computing Machinery and the National Academy of Engineering, and a member of the National Academy of Sciences.
 next part 
An HTML attachment was scrubbed...
URL:
From jneu at stanford.edu Wed Mar 30 17:29:50 2022
From: jneu at stanford.edu (Joachim Neu)
Date: Wed, 30 Mar 2022 17:29:50 -0700
Subject: [theoryseminar]
"Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via
Obfuscation" - Amit Sahai (Thu, 31-Mar @ 4:00pm)
InReplyTo:
References:
MessageID:
Unfortunately, this talk was postponed to Thu, 12 May.
On Fri, 2022-03-25 at 13:01 -0700, Joachim Neu wrote:
> Beyond the Csiszár-Körner Bound: Best-Possible Wiretap Coding via
> Obfuscation
>
> Amit Sahai - Professor, UCLA
> Thu, 31-Mar / 4:00pm / Packard 101 (in person)
> Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). The talk will be streamed on
> Zoom for those unable to attend in person:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
>
> Abstract
> A wiretap coding scheme (Wyner, Bell Syst. Tech. J. 1975) enables
> Alice to reliably communicate a message m to an honest Bob by sending
> an encoding c over a noisy channel chB, while at the same time hiding
> m from Eve who receives c over another noisy channel chE.
> Wiretap coding is clearly impossible when chB is a degraded version
> of chE, in the sense that the output of chB can be simulated using
> only the output of chE. A classic work of Csiszár and Körner (IEEE
> Trans. Inf. Theory, 1978) shows that the converse does not hold. This
> follows from their full characterization of the channel pairs (chB,
> chE) that enable information-theoretic wiretap coding.
> In this work, we show that in fact the converse does hold when
> considering computational security; that is, wiretap coding against a
> computationally bounded Eve is possible if and only if chB is not a
> degraded version of chE. Our construction assumes the existence of
> virtual black-box (VBB) obfuscation of specific classes of "evasive"
> functions that generalize fuzzy point functions, and can be
> heuristically instantiated using indistinguishability obfuscation.
> Finally, our solution has the appealing feature of being universal in
> the sense that Alice's algorithm depends only on chB and not on chE.
> Joint work with Yuval Ishai, Alexis Korb, and Paul Lou.
> Bio
> Amit Sahai is a Simons Investigator (2021), Fellow of the ACM (2018)
> and a Fellow of the IACR (2019). He is also a Fellow of the Royal
> Society of Arts (2021), and Advisor to the Prison Mathematics
> Project. He is the incumbent of the Symantec Endowed Chair in
> Computer Science. He received his Ph.D. in Computer Science from MIT
> in 2000. From 2000 to 2004, he was on the faculty at Princeton
> University; in 2004 he joined the UCLA Samueli School of Engineering,
> where he currently holds the position of Professor of Computer
> Science. He serves as an editor of J. Cryptology (SpringerNature).
> His research interests are in security and cryptography, and
> theoretical computer science more broadly. He is the co-inventor of
> Attribute-Based Encryption, Functional Encryption, and
> Indistinguishability Obfuscation. He has published more than 150
> original technical research papers at venues such as the ACM
> Symposium on Theory of Computing (STOC), CRYPTO, and the Journal of
> the ACM. He has given a number of invited talks at institutions such
> as MIT, Stanford, and Berkeley, including the 2004 Distinguished
> Cryptographer Lecture Series at NTT Labs, Japan. Professor Sahai is
> the recipient of numerous honors; he was named an Alfred P. Sloan
> Foundation Research Fellow in 2002, received an Okawa Research Grant
> Award in 2007, a Xerox Foundation Faculty Award in 2010, a Google
> Faculty Research Award in 2010, a 2012 Pazy Memorial Award, a 2016
> ACM CCS Test of Time Award, a 2019 AWS Machine Learning Research
> Award, a 2020 IACR Test of Time Award (Eurocrypt), and a STOC 2021
> Best Paper Award. For his contributions to the conception and
> development of indistinguishability obfuscation, he was awarded the
> 2022 Held Prize by the National Academy of Sciences. For his
> teaching, he was given the 2016 Lockheed Martin Excellence in
> Teaching Award from the Samueli School of Engineering at UCLA. His
> research has been covered by several news agencies including the BBC
> World Service, Quanta Magazine, Wired, and IEEE Spectrum.
> This talk is hosted by the ISL Colloquium. To receive talk
> announcements, subscribe to the mailing list
> islcolloq at lists.stanford.edu.
>
> Mailing list:
> https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2022q2/amitsahai/
>
From junyaoz at stanford.edu Wed Mar 30 21:52:02 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 31 Mar 2022 04:52:02 +0000
Subject: [theoryseminar] Theory Lunch Spring Edition 3/31: Esty Kelman
InReplyTo:
References:
MessageID:
A friendly reminder: This is happening in 10 minutes.
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Tuesday, March 29, 2022 10:07 AM
To: thseminar at cs.stanford.edu ; theoryseminar at lists.stanford.edu
Subject: [theoryseminar] Theory Lunch Spring Edition 3/31: Esty Kelman
Hi everyone,
Hope you had a nice spring break!
The spring edition of theory lunch will begin this Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Esty will tell us about: Analysis of Boolean functions: KKL Theorem via Random Restrictions and Log-Sobolev Inequality
Abstract: A Boolean function over the n-dimensional binary hypercube is a function f of the form f:{0,1}^n -> {0,1}. The KKL Theorem [STOC'88], a fundamental result in the field named after Kahn, Kalai, and Linial, states the following:
Every balanced Boolean function f on n variables has a variable with influence at least Omega(log n / n) on the value of f. The theorem was proved using Fourier analysis and other novel techniques that are still widely used today. A key ingredient in the proof is the hypercontractive inequality, which has become a central tool in many proofs in this field. As the hypercontractive inequality is sometimes considered a bit counterintuitive, researchers have tried to avoid it and to develop alternative tools that yield the same, and even stronger, results.
We establish an alternative proof technique for the KKL Theorem, as well as for other related results, that avoids the hypercontractive inequality. Instead, we apply a suitable random restriction to the function, i.e., we choose a random set of variables and fix them to random values, and then apply the Log-Sobolev inequality. (A particular case of the Log-Sobolev inequality is the edge isoperimetric inequality, which states that every small subset of vertices of the n-dimensional binary hypercube has many outgoing edges connecting it to the rest of the graph.)
Our proofs might serve as an alternative, uniform exposition of these theorems, and the techniques might be useful for obtaining further results.
In this talk, we will prove a special case of the KKL Theorem using our new approach and demonstrate how to apply the random restriction and the edge isoperimetric inequality.
Based on a joint work with Subhash Khot, Guy Kindler, Dor Minzer and Muli Safra.
By the way, we have some slots available for talks (4/21, 5/5, 5/19, and 6/2). If you want to share some cool math with the group, please let me know.
Wishing you a great start of the spring quarter!
Cheers,
Junyao
 next part 
An HTML attachment was scrubbed...
URL:
From moses at cs.stanford.edu Thu Mar 31 15:44:10 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 31 Mar 2022 15:44:10 -0700
Subject: [theoryseminar] Christos Papadimitriou speaking *today* at 4:30 pm
InReplyTo:
References:
MessageID:
Folks,
Quick reminder about Christos Papadimitriou's talk today in less than an
hour.
By popular demand, we will have the talk available on zoom:
https://stanford.zoom.us/j/91992444410?pwd=WlRiRGViNjlscjhjY3g2eURjRzJUZz09
Cheers,
Moses
On Wed, Mar 30, 2022 at 8:47 AM Amin Saberi wrote:
>
> Hello everyone,
>
> Christos Papadimitriou is speaking in a special RAIN seminar tomorrow at
> 4:30. I expect it to be a very interesting talk on a fascinating topic.
>
> See below.
>
> Amin
>
> *4:30 PM, Thursday, March 31st *
> *Gates 403 Fujitsu Conference Room*
>
> *Title:* How does the brain beget the mind?
> *Speaker:* Christos Papadimitriou, Columbia University
>
> *Abstract: * There is no doubt that cognition and intelligence are the
> results of neural activity - but how, exactly? How do molecules,
> neurons, and synapses give rise to reasoning, language, plans, stories,
> art, math? Despite dazzling progress in experimental neuroscience, as
> well as in cognitive science, we do not seem to be making progress on the
> overarching question. As Richard Axel recently put it in an interview: "We
> don't have a logic for the transformation of neuronal activity to thought
> and action. I view discerning [this] logic as the most important future
> direction of neuroscience". What kind of formal system would qualify as
> this "logic"?
>
> I will introduce a computational system whose basic data structure is the
> assembly of neurons - assemblies are large populations of neurons
> representing concepts, words, ideas, episodes, etc. The Assembly Calculus
> is biologically plausible in the sense that its primitives are properties
> of assemblies observed in experiments, or useful for explaining other
> experiments, and can be provably (through both mathematical proof and
> simulations in biologically realistic platforms) "compiled down" to the
> activity of neurons and synapses. Experiments show that this programming
> framework can simulate - exclusively through the spiking of neurons -
> high-level cognitive functions, such as parsing natural language and
> planning in the blocks world. We believe that this formalism is
> well-positioned to help in bridging the gap between the brain and the mind.
>
>
> *Bio: * One of the world's leading computer science theorists, Christos
> Papadimitriou is best known for his work in computational complexity,
> helping to expand its methodology and reach. He has also explored other
> fields through what he calls the algorithmic lens, having contributed to
> biology and the theory of evolution, economics, and game theory (where he
> helped found the field of algorithmic game theory), artificial
> intelligence, robotics, networks and the Internet, and more recently the
> study of the brain.
>
> He authored the widely used textbook *Computational Complexity,* as well
> as four others, and has written three novels, including the bestselling
> *Logicomix* and his latest, *Independence. *Papadimitriou has been
> awarded the Knuth Prize, IEEE's John von Neumann Medal, the EATCS Award,
> the IEEE Computer Society Charles Babbage Award, and the Gödel Prize. He is
> a fellow of the Association for Computing Machinery and the National Academy
> of Engineering, and a member of the National Academy of Sciences.
>
> _______________________________________________
> theoryseminar mailing list
> theoryseminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theoryseminar
>
 next part 
An HTML attachment was scrubbed...
URL: