From moses at cs.stanford.edu Fri Apr 1 06:50:45 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 1 Apr 2022 06:50:45 -0700
Subject: [theoryseminar] Christos Papadimitriou speaking *today* at
4:30 pm
In-Reply-To:
References:
Message-ID:
A recording of Christos Papadimitriou's talk is now available here:
https://youtu.be/86bcml5ug4U
Cheers,
Moses
On Thu, Mar 31, 2022 at 3:44 PM Moses Charikar
wrote:
> Folks,
>
> Quick reminder about Christos Papadimitriou's talk today in less than an
> hour.
>
> By popular demand, we will have the talk available on zoom:
>
> https://stanford.zoom.us/j/91992444410?pwd=WlRiRGViNjlscjhjY3g2eURjRzJUZz09
>
>
> Cheers,
> Moses
>
> On Wed, Mar 30, 2022 at 8:47 AM Amin Saberi wrote:
>
>>
>> Hello everyone,
>>
>> Christos Papadimitriou is speaking in a special RAIN seminar tomorrow at
>> 4:30. I expect it to be a very interesting talk on a fascinating topic.
>>
>> See below.
>>
>> Amin
>>
>> *4:30 PM, Thursday, March 31st *
>> *Gates 403 Fujitsu Conference Room*
>>
>> *Title:* How does the brain beget the mind?
>> *Speaker:* Christos Papadimitriou, Columbia University
>>
>> *Abstract:* There is no doubt that cognition and intelligence are the
>> results of neural activity, but how, exactly? How do molecules,
>> neurons, and synapses give rise to reasoning, language, plans, stories,
>> art, math? Despite dazzling progress in experimental neuroscience, as
>> well as in cognitive science, we do not seem to be making progress on the
>> overarching question. As Richard Axel recently put it in an interview: "We
>> don't have a logic for the transformation of neuronal activity to thought
>> and action. I view discerning [this] logic as the most important future
>> direction of neuroscience". What kind of formal system would qualify as
>> this "logic"?
>>
>> I will introduce a computational system whose basic data structure is the
>> assembly of neurons: assemblies are large populations of neurons
>> representing concepts, words, ideas, episodes, etc. The Assembly Calculus
>> is biologically plausible in the sense that its primitives are properties
>> of assemblies observed in experiments, or useful for explaining other
>> experiments, and can be provably (through both mathematical proof and
>> simulations in biologically realistic platforms) "compiled down" to the
>> activity of neurons and synapses. Experiments show that this programming
>> framework can simulate, exclusively through the spiking of neurons,
>> high-level cognitive functions, such as parsing natural language and
>> planning in the blocks world. We believe that this formalism is
>> well-positioned to help in bridging the gap between the brain and the mind.
>>
>>
>> *Bio:* One of the world's leading computer science theorists, Christos
>> Papadimitriou is best known for his work in computational complexity,
>> helping to expand its methodology and reach. He has also explored other
>> fields through what he calls the algorithmic lens, having contributed to
>> biology and the theory of evolution, economics, and game theory (where he
>> helped found the field of algorithmic game theory), artificial
>> intelligence, robotics, networks and the Internet, and more recently the
>> study of the brain.
>>
>> He authored the widely used textbook *Computational Complexity,* as well
>> as four others, and has written three novels, including the bestselling
>> *Logicomix* and his latest, *Independence. *Papadimitriou has been
>> awarded the Knuth Prize, IEEE's John von Neumann Medal, the EATCS Award,
>> the IEEE Computer Society Charles Babbage Award, and the Gödel Prize. He is
>> a fellow of the Association for Computing Machinery and the National Academy
>> of Engineering, and a member of the National Academy of Sciences.
>>
>
From moses at cs.stanford.edu Sun Apr 3 17:12:13 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Sun, 3 Apr 2022 17:12:13 -0700
Subject: [theoryseminar] Jelani Nelson talk tomorrow (Mon Apr 4), 1:30pm
Message-ID:
Theory friends,
Some of you might be interested in Jelani Nelson's guest lecture for CS 114
tomorrow (Mon Apr 4), 1:30-3:00pm in Packard 101:
Optimality of the Johnson-Lindenstrauss Lemma
Jelani Nelson (UC Berkeley)
I encourage you to attend.
Cheers,
Moses
From junyaoz at stanford.edu Tue Apr 5 07:32:12 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Tue, 5 Apr 2022 14:32:12 +0000
Subject: [theoryseminar] Theory Lunch 4/7: Nathan Harms (UWaterloo)
Message-ID:
Hi everyone,
This week's theory lunch will have a hybrid format. We'll start with socializing and having lunch in the grass area outside Gates (red point in the map at the end) at 11:50am. Then we'll have an online talk at 12:30pm. Nathan will give his talk remotely, and he'll tell us about: VC Dimension and DistributionFree SampleBased Testing
Abstract: We consider the problem of determining which classes of functions can be tested more efficiently than they can be learned, in the distribution-free sample-based model that corresponds to the standard PAC learning setting. Our main result shows that while VC dimension by itself does not always provide tight bounds on the number of samples required to test a class of functions in this model, it can be combined with a closely-related variant that we call "lower VC" (or LVC) dimension to obtain strong lower bounds on this sample complexity. We use this result to obtain strong and in many cases nearly optimal bounds on the sample complexity for testing unions of intervals, halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees.
This talk will be on zoom. You can watch the talk at Gates 105 (this room has about 30 chairs) or watch it using this link: https://stanford.zoom.us/j/98932206471?pwd=YXdubytLVGNTbXhGeXFxNmJaVnhrUT09.
Cheers,
Junyao
From moses at cs.stanford.edu Tue Apr 5 10:00:46 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Tue, 5 Apr 2022 10:00:46 -0700
Subject: [theoryseminar] Chandra Chekuri talk: Fri Apr 8, 2pm
Message-ID:
Theory friends,
Chandra Chekuri will be giving a talk this Friday Apr 8, 2-3pm in Gates
259, on efficient algorithms for densest subgraph. See details below.
Hope to see many of you there.
Cheers,
Moses
Title: Densest Subgraph: Supermodularity, Iterative Peeling, and Flow
Speaker: Chandra Chekuri (UIUC)
Abstract: The densest subgraph problem in a graph (DSG), in the simplest
form, is the following. Given an undirected graph G = (V, E), find a subset
S of vertices that maximizes the ratio |E(S)|/|S|, where E(S) is the set of
edges with both endpoints in S. DSG and several of its variants are
well-studied in theory and practice and have many applications in data
mining and network analysis. We study fast algorithms and structural
aspects of DSG via the lens of supermodularity. For this we consider the
densest supermodular subset problem (DSS): given a non-negative
supermodular function f over a ground set V, maximize f(S)/|S|.
For DSG we describe a simple flow-based algorithm that outputs a
(1-\epsilon)-approximation in deterministic O(m polylog(n)/\epsilon) time,
where m is the number of edges.
Greedy peeling algorithms have been very popular for DSG and several
variants due to their efficiency, empirical performance, and worst-case
approximation guarantees. We describe a simple peeling algorithm for DSS
and analyze its approximation guarantee in a fashion that unifies several
existing results. Boob et al. developed an iterative peeling algorithm for
DSG which appears to work very well in practice, and made a conjecture
about its convergence to optimality. We affirmatively answer their
conjecture, and in fact prove that a natural generalization of their
algorithm converges for any supermodular function f; the key to our proof
is to consider an LP formulation derived via the Lovász extension
of a supermodular function, which is inspired by the LP relaxation of
Charikar that has played an important role in several developments.
This is joint work with Kent Quanrud and Manuel Torres.
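As an illustration of the greedy peeling heuristic mentioned in the abstract, here is a minimal sketch (my own illustration, not the speaker's code; the example graph is made up) of the classic peel-by-minimum-degree algorithm for DSG, which is known to give a 1/2-approximation:

```python
from collections import defaultdict

def peel_densest_subgraph(edges):
    """Greedy peeling for densest subgraph: repeatedly delete a
    minimum-degree vertex and remember the densest prefix seen.
    Returns (best_density, best_vertex_set); density = |E(S)| / |S|."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    vertices = set(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_density, best_set = 0.0, set(vertices)
    while vertices:
        density = m / len(vertices)
        if density > best_density:
            best_density, best_set = density, set(vertices)
        # Peel a vertex of minimum degree in the current subgraph.
        v = min(vertices, key=lambda x: len(adj[x]))
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        vertices.discard(v)
    return best_density, best_set

# Example: a 4-clique plus one pendant vertex; peeling removes the
# pendant first and finds the clique.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
print(peel_densest_subgraph(edges))  # density 1.5 on {1, 2, 3, 4}
```

The O(m polylog(n)) flow-based algorithm and the iterative peeling of Boob et al. from the abstract refine this basic template.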
From junyaoz at stanford.edu Wed Apr 6 22:48:47 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 7 Apr 2022 05:48:47 +0000
Subject: [theoryseminar] Theory Lunch 4/7: Nathan Harms (UWaterloo)
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 5 minutes (note the unusual location).
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Tuesday, April 5, 2022 7:32 AM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 4/7: Nathan Harms (UWaterloo)
Hi everyone,
This week's theory lunch will have a hybrid format. We'll start with socializing and lunch in the grass area outside Gates (red point on the map at the end) at 11:50am. Then we'll have an online talk at 12:30pm. Nathan will give his talk remotely, and he'll tell us about: VC Dimension and Distribution-Free Sample-Based Testing
Abstract: We consider the problem of determining which classes of functions can be tested more efficiently than they can be learned, in the distribution-free sample-based model that corresponds to the standard PAC learning setting. Our main result shows that while VC dimension by itself does not always provide tight bounds on the number of samples required to test a class of functions in this model, it can be combined with a closely-related variant that we call "lower VC" (or LVC) dimension to obtain strong lower bounds on this sample complexity. We use this result to obtain strong and in many cases nearly optimal bounds on the sample complexity for testing unions of intervals, halfspaces, intersections of halfspaces, polynomial threshold functions, and decision trees.
This talk will be on zoom. You can watch the talk at Gates 105 (this room has about 30 chairs) or watch it using this link: https://stanford.zoom.us/j/98932206471?pwd=YXdubytLVGNTbXhGeXFxNmJaVnhrUT09.
Cheers,
Junyao
From yeganeh at stanford.edu Thu Apr 7 19:13:42 2022
From: yeganeh at stanford.edu (Yeganeh Ali Mohammadi)
Date: Fri, 8 Apr 2022 02:13:42 +0000
Subject: [theoryseminar] Fw: OR Student Seminar moves to Fridays 4-5 pm -
 Starting this week!
In-Reply-To:
References:
Message-ID:
I'm giving a talk tomorrow in the OR student seminar.
The topic might be of interest to some of you (I already gave a version of the talk in the Women in Theory Forum).
Location: Y2E2-101
Time: 4-5 pm
________________________________
From: Yeganeh Ali Mohammadi
Sent: Monday, April 4, 2022 5:28 PM
To: orseminars at lists.stanford.edu ; msandephd at lists.stanford.edu
Cc: Bryce McLaughlin
Subject: OR Student Seminar moves to Fridays 4-5 pm - Starting this week!
Hi all!
We hope you had a great spring break and a solid start to the spring quarter!
Because of a conflict with the new seminar series on Thursday evenings (Learning and Reasoning in Carbon and Silicons), we need to move our meetings to Fridays 4-5 pm, starting this week on April 8th. Sorry for the inconvenience. Hopefully, you can still attend the seminars before heading off to have fun on the weekend!
This week I will be giving a talk on "Algorithms using Local Graph Features to Predict Epidemics".
Abstract. We study a simple model of epidemics where an infected node transmits the infection to its neighbors independently with probability p. The size of an outbreak in this model is closely related to that of the giant connected component in 'edge percolation', studied for a large class of networks including the configuration model and preferential attachment. Even though these models capture the role of superspreaders in the spread of an epidemic, they only consider graphs that are locally tree-like, i.e., have few short cycles. Some generalizations of the configuration model were suggested to capture local communities, known as household models or hierarchical configuration models.
Here, we ask a different question: What information is needed for general networks to predict the size of an outbreak? Is it possible to make predictions by accessing the distribution of small subgraphs (or motifs)? We answer the question in the affirmative for large-set expanders with Benjamini-Schramm limits. In particular, we show that there is an algorithm that gives a (1-\epsilon) approximation of the probability and the final size of an outbreak by accessing a constant-size neighborhood of a constant number of nodes chosen uniformly at random. We also present corollaries of the theorem for the preferential attachment model and study generalizations with household (or motif) structure.
This is joint work with Christian Borgs and Amin Saberi.
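To make the edge-percolation connection concrete, here is a minimal Monte Carlo sketch (my own illustration, not the speaker's algorithm; the graph and parameters are made up): keep each edge independently with probability p and measure the component reached from a random seed.

```python
import random

def outbreak_size(edges, n, p, seed_node, trials=2000, rng=None):
    """Estimate the expected outbreak size when each contact (edge)
    transmits independently with probability p: this equals the
    expected size of seed_node's component under edge percolation."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        # Sample the percolated graph: keep each edge with probability p.
        adj = {v: [] for v in range(n)}
        for u, v in edges:
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
        # Traverse from the seed to find everyone eventually infected.
        seen, stack = {seed_node}, [seed_node]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += len(seen)
    return total / trials

# Sanity check on a path 0-1-2-3-4: p=1 infects everyone, p=0 only the seed.
path = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(outbreak_size(path, 5, 1.0, 0))  # 5.0
print(outbreak_size(path, 5, 0.0, 0))  # 1.0
```

The talk's result is about replacing this brute-force simulation of the whole graph with estimates computed from constant-size local neighborhoods.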
Location: Y2E2-101
Zoom:
https://stanford.zoom.us/j/96485302552?pwd=MWcybXNjT2d1VGNUOGUrMmU1d082UT09
We will order food to enjoy outside after the seminar. If you plan on attending in person please fill in this brief form before Wednesday noon (4/6). See the future spring quarter talks here!
Please wear a mask at all times while you are inside. Attendance will be taken to allow for contact tracing at the seminar.
If you know of anyone or any group who may be interested in coming please forward them this email! If you were forwarded this email (instead of bcc'd) and would like to sign up to be on the mailing list directly please sign up using this form
Finally, we would like to thank the Management Science and Engineering department, the Operations, Information, and Technology group at the Business School, Infanger Investment Technology, and the Diener-Veinott family for their generosity in supporting this event.
Hope to see you at this seminar and in the many soon to follow.
Best,
Bryce and Yeganeh
From moses at cs.stanford.edu Fri Apr 8 11:48:15 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 8 Apr 2022 11:48:15 -0700
Subject: [theoryseminar] reminder: densest subgraph talk *today*, 2pm
In-Reply-To:
References:
Message-ID:
Theory folks,
If you've ever wondered how to find dense subgraphs of a large graph
efficiently, Chandra Chekuri will tell you how to do it and why some
heuristics work so well. 2pm today in Gates 259.
Cheers,
Moses
On Tue, Apr 5, 2022 at 10:00 AM Moses Charikar
wrote:
>
> Title: Densest Subgraph: Supermodularity, Iterative Peeling, and Flow
> Speaker: Chandra Chekuri (UIUC)
>
> Abstract: The densest subgraph problem in a graph (DSG), in the simplest
> form, is the following. Given an undirected graph G = (V, E), find a subset
> S of vertices that maximizes the ratio |E(S)|/|S|, where E(S) is the set of
> edges with both endpoints in S. DSG and several of its variants are
> well-studied in theory and practice and have many applications in data
> mining and network analysis. We study fast algorithms and structural
> aspects of DSG via the lens of supermodularity. For this we consider the
> densest supermodular subset problem (DSS): given a non-negative
> supermodular function f over a ground set V, maximize f(S)/|S|.
>
> For DSG we describe a simple flow-based algorithm that outputs a
> (1-\epsilon)-approximation in deterministic O(m polylog(n)/\epsilon) time,
> where m is the number of edges.
>
> Greedy peeling algorithms have been very popular for DSG and several
> variants due to their efficiency, empirical performance, and worst-case
> approximation guarantees. We describe a simple peeling algorithm for DSS
> and analyze its approximation guarantee in a fashion that unifies several
> existing results. Boob et al. developed an iterative peeling algorithm for
> DSG which appears to work very well in practice, and made a conjecture
> about its convergence to optimality. We affirmatively answer their
> conjecture, and in fact prove that a natural generalization of their
> algorithm converges for any supermodular function f; the key to our proof
> is to consider an LP formulation derived via the Lovász extension
> of a supermodular function, which is inspired by the LP relaxation of
> Charikar that has played an important role in several developments.
>
> This is joint work with Kent Quanrud and Manuel Torres.
>
From junyaoz at stanford.edu Sun Apr 10 22:16:36 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 11 Apr 2022 05:16:36 +0000
Subject: [theoryseminar] Theory Lunch 4/14: Jason Li (Berkeley)
Message-ID:
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Jason will tell us about: Breaking the Cubic Barrier for All-Pairs Max-Flow: Gomory-Hu Tree in Nearly Quadratic Time
Abstract: In 1961, Gomory and Hu showed that the All-Pairs Max-Flow problem of computing the max-flow between all ${n\choose 2}$ pairs of vertices in an undirected graph can be solved using only $n-1$ calls to any (single-pair) max-flow algorithm. Even assuming a linear-time max-flow algorithm, this yields a running time of $O(mn)$, which is $O(n^3)$ when $m = \Theta(n^2)$. While subsequent work has improved this bound for various special graph classes, no subcubic-time algorithm has been obtained in the last 60 years for general graphs. We break this long-standing barrier by giving an $\tilde{O}(n^{2})$-time algorithm on general, weighted graphs. Combined with a popular complexity assumption, we establish a counterintuitive separation: all-pairs max-flows are strictly \emph{easier} to compute than all-pairs shortest-paths. Our algorithm produces a cut-equivalent tree, known as the Gomory-Hu tree, from which the max-flow value for any pair can be retrieved in near-constant time.
In this talk, I will highlight some of the recently developed techniques that proved critical to obtaining this result: the isolating cuts lemma, the single-source minimum cuts problem, and the use of a random pivot vertex.
Joint work with Amir Abboud, Robert Krauthgamer, Debmalya Panigrahi, Thatchaphol Saranurak, and Ohad Trabelsi.
Cheers,
Junyao
From tavorb at stanford.edu Mon Apr 11 11:49:35 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 11 Apr 2022 11:49:35 -0700
Subject: [theoryseminar] "What is the Statistical Complexity of
 Reinforcement Learning?" - Sham Kakade (Thu, 14-Apr @ 1:00pm)
Message-ID:
What is the Statistical Complexity of Reinforcement Learning?
Sham Kakade, Professor, Harvard
Thu, 14-Apr / 1:00pm PT (note the alternate time) / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
Abstract
A fundamental question in the theory of reinforcement learning is what
(representational or structural) conditions govern our ability to
generalize and avoid the curse of dimensionality. With regards to
supervised learning, these questions are well understood theoretically:
practically, we have overwhelming evidence on the value of representational
learning (say through modern deep networks) as a means for sample efficient
learning, and, theoretically, there are well-known complexity measures
(e.g. the VC dimension and Rademacher complexity) that govern the
statistical complexity of learning. Providing an analogous theory for
reinforcement learning is far more challenging, where even characterizing
any structural conditions which support sample efficient generalization is
far less well understood.
This talk will highlight recent advances towards characterizing when
generalization is possible in reinforcement learning (both in online and
offline settings), focusing on both necessary and sufficient conditions. In
particular, we will introduce a new complexity measure, the
Decision-Estimation Coefficient, that is proven to be necessary (and,
essentially, sufficient) for sample-efficient interactive learning.
Bio
Sham Kakade is a Gordon McKay Professor of Computer Science & Statistics at
Harvard University. He works on the mathematical foundations of machine
learning and AI. Sham's thesis helped in laying the statistical foundations
of reinforcement learning. With his collaborators, his additional
contributions include: one of the first provably efficient policy search
methods, Conservative Policy Iteration, for reinforcement learning;
developing the mathematical foundations for the widely used linear bandit
models and the Gaussian process bandit models; the tensor and spectral
methodologies for provable estimation of latent variable models; the first
sharp analysis of the perturbed gradient descent algorithm, along with the
design and analysis of numerous other convex and non-convex algorithms. He
is the recipient of the ICML Test of Time Award (2020), the IBM Pat
Goldberg Best Paper Award (2007), and the INFORMS Revenue Management and
Pricing Prize (2014). He was program chair for COLT 2011.
Sham was an undergraduate at Caltech, where he studied physics and worked
under the guidance of John Preskill in quantum computing. He then completed
his Ph.D. in computational neuroscience at the Gatsby Unit at University
College London, under the supervision of Peter Dayan. He was a postdoc at
the Dept. of Computer Science, University of Pennsylvania, where he
broadened his studies to include computational game theory and economics
from the guidance of Michael Kearns. Sham has been a Principal Research
Scientist at Microsoft Research, New England, an associate professor at the
Department of Statistics, Wharton, UPenn, and an assistant professor at the
Toyota Technological Institute at Chicago.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list islcolloq at lists.stanford.edu.*

Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
This talk: http://isl.stanford.edu/talks/talks/2022q2/shamkakade/
From tpulkit at stanford.edu Wed Apr 13 15:05:52 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 13 Apr 2022 22:05:52 +0000
Subject: [theoryseminar] "Balanced group testing" - David Hong (Friday,
 April 15th, 2pm)
Message-ID: <220E9BF324A74413A3E98D5D2DFAC5B9@stanford.edu>
Hi everyone,
We return with the Information Theory Forum (IT Forum) talks this week @Fri, April 15th, 2pm PT with Dr. David Hong. The talks are hosted and accessible via Zoom.
If you want to receive reminder emails, please join the IT Forum mailing list.
Details for this week's talk are below:
Balanced group testing via hypergraph factorization for COVID-19
David Hong, Univ. of Pennsylvania
Fri, 15th April, 2pm PT
Zoom Link
pwd: 032264
Abstract:
Consider the following challenge. Given a large population, we need to identify infected individuals but with a limited number of tests. We faced this challenge at a large scale during the last few years with COVID-19. One way to tackle this challenge (tracing back to Dorfman) is to test groups (i.e., subsets) of individuals together. Individuals in groups that test negative are declared negative without further testing; everyone else is individually tested. Doing so can save numerous tests when relatively few people are infected (since most groups will test negative). Even greater savings are possible by assigning each individual to more than one group.
A crucial question is: how to form the groups? There has been tremendous research and progress on this topic. Here, we focus on the problem of forming groups that are "balanced" in the sense that: (a) each group has the same number of individuals, (b) each individual is in the same number (say q) of groups, and (c) the intersection of every q groups has the same number of individuals. The need for balance comes from real-life considerations when implementing group testing in the lab. Balanced groups create more consistency in the grouping procedure (which makes it less error-prone), and they can help make the performance more even across the individuals.
In this talk, we propose a method (we call HYPER) for producing balanced groups. It produces balanced designs by using hypergraph factorization. We evaluate HYPER under a realistic COVID-19 simulation, and find that it matches or outperforms state-of-the-art alternatives across a broad range of testing-constrained settings. We also evaluate HYPER under a common theoretical model, and find that its performance for noiseless tests appears to be close in some settings to a recently discovered information-theoretic bound.
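As a quick illustration of the Dorfman scheme described above (my own sketch, unrelated to the HYPER method; the population size, prevalence, and group size are made up): pool the population into fixed-size groups, then retest individuals only in positive pools.

```python
import random

def dorfman_test_count(infected, n, group_size):
    """Two-stage Dorfman group testing with noiseless tests: one pooled
    test per group, then one individual test per member of each positive
    group. Returns (total tests used, set of individuals found positive)."""
    tests = 0
    positives = set()
    for start in range(0, n, group_size):
        group = range(start, min(start + group_size, n))
        tests += 1  # one pooled test for this group
        if any(i in infected for i in group):  # pool tests positive
            tests += len(group)  # retest each member individually
            positives.update(i for i in group if i in infected)
    return tests, positives

# Example: 1000 people, 2% infected, groups of 10.
rng = random.Random(0)
infected = set(rng.sample(range(1000), 20))
tests, found = dorfman_test_count(infected, 1000, 10)
print(tests, found == infected)  # far fewer than 1000 tests, all found
```

With 100 pools and at most 20 positive pools of size 10, this uses at most 300 tests instead of 1000; assigning each individual to q > 1 pools, as in the balanced designs of the talk, can save even more.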
The contents of this talk are primarily based on the following paper:
> Group testing via hypergraph factorization applied to COVID-19.
> David Hong, Rounak Dey, Xihong Lin, Brian Cleary, Edgar Dobriban.
> Nature Communications, 2022.
> https://doi.org/10.1038/s41467-022-29389-z
Bio:
David Hong completed his B.S. in ECE and mathematics at Duke University under the Benjamin N. Duke full scholarship. He completed his Ph.D. in EECS at the University of Michigan under an NSF Graduate Research Fellowship. He is now an NSF Mathematical Sciences Postdoctoral Fellow in the Department of Statistics and Data Science at the University of Pennsylvania.
His broad interests lie in the foundations of data science and include group testing as well as low-rank matrix and tensor methods for high-dimensional and heterogeneous data.
From junyaoz at stanford.edu Thu Apr 14 11:03:15 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 14 Apr 2022 18:03:15 +0000
Subject: [theoryseminar] Theory Lunch 4/14: Jason Li (Berkeley)
In-Reply-To:
References:
Message-ID:
Hi everyone,
Because of rainy weather, today we will have theory lunch in Fujitsu (Gates 403).
Best,
Junyao
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Sunday, April 10, 2022 10:16 PM
To: thseminar at cs.stanford.edu ; theoryseminar at lists.stanford.edu
Subject: [theoryseminar] Theory Lunch 4/14: Jason Li (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Jason will tell us about "Breaking the Cubic Barrier for All-Pairs Max-Flow: Gomory-Hu Tree in Nearly Quadratic Time".
Abstract: In 1961, Gomory and Hu showed that the All-Pairs Max-Flow problem of computing the max-flow between all ${n\choose 2}$ pairs of vertices in an undirected graph can be solved using only $n-1$ calls to any (single-pair) max-flow algorithm. Even assuming a linear-time max-flow algorithm, this yields a running time of $O(mn)$, which is $O(n^3)$ when $m = \Theta(n^2)$. While subsequent work has improved this bound for various special graph classes, no subcubic-time algorithm has been obtained in the last 60 years for general graphs. We break this longstanding barrier by giving an $\tilde{O}(n^{2})$-time algorithm on general, weighted graphs. Combined with a popular complexity assumption, we establish a counterintuitive separation: all-pairs max-flows are strictly \emph{easier} to compute than all-pairs shortest-paths. Our algorithm produces a cut-equivalent tree, known as the Gomory-Hu tree, from which the max-flow value for any pair can be retrieved in near-constant time.
In this talk, I will highlight some of the recently developed techniques that proved critical to obtaining this result: the isolating cuts lemma, the single-source minimum cuts problem, and the use of a random pivot vertex.
Joint work with Amir Abboud, Robert Krauthgamer, Debmalya Panigrahi, Thatchaphol Saranurak, and Ohad Trabelsi.
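As background for how a Gomory-Hu tree is classically built and queried, here is a compact sketch of Gusfield's simplification of the Gomory-Hu construction: $n-1$ max-flow calls build a tree, after which any pairwise max-flow is the minimum edge weight on the tree path. The Edmonds-Karp routine below is a toy stand-in for a fast max-flow solver; nothing here reflects the new near-quadratic algorithm itself.

```python
from collections import deque

def max_flow_min_cut(cap, s, t):
    """Edmonds-Karp max flow on a symmetric capacity matrix; returns
    (flow value, vertex set on the source side of a min s-t cut)."""
    n = len(cap)
    res = [row[:] for row in cap]          # residual capacities
    flow = 0
    while True:
        parent = [-1] * n                  # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            break                          # no augmenting path left
        bottleneck, v = float('inf'), t
        while v != s:
            bottleneck = min(bottleneck, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            res[parent[v]][v] -= bottleneck
            res[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
    return flow, {v for v in range(n) if parent[v] != -1}

def gomory_hu(cap):
    """Gusfield's variant: n-1 max-flow calls yield a cut-equivalent tree,
    encoded as parent[] / weight[] with vertex 0 as the root."""
    n = len(cap)
    parent, weight = [0] * n, [0] * n
    for i in range(1, n):
        f, side = max_flow_min_cut(cap, i, parent[i])
        weight[i] = f
        for j in range(i + 1, n):
            if parent[j] == parent[i] and j in side:
                parent[j] = i
    return parent, weight

def tree_max_flow(parent, weight, u, v):
    """Max-flow(u, v) = minimum edge weight on the u-v path in the tree."""
    def path(x):                 # edges (keyed by child vertex) up to the root
        seen = {}
        while x != 0:
            seen[x] = weight[x]
            x = parent[x]
        return seen
    pu, pv = path(u), path(v)
    # edges above the meeting point appear in both dicts and cancel out
    return min([w for x, w in pu.items() if x not in pv] +
               [w for x, w in pv.items() if x not in pu])
```

For any symmetric capacity matrix, `tree_max_flow` agrees with a direct max-flow computation for every pair, which is exactly the cut-equivalence property the abstract mentions.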
Cheers,
Junyao
From tavorb at stanford.edu Thu Apr 14 11:10:04 2022
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 14 Apr 2022 11:10:04 -0700
Subject: [theoryseminar] ISL talk at 1pm today: "What is the Statistical
Complexity of Reinforcement Learning?" - Sham Kakade
In-Reply-To:
References:
Message-ID:
Reminder: this talk will be today at 1pm via Zoom (link below).
On Mon, Apr 11, 2022 at 11:49 AM Tavor Baharav wrote:
> What is the Statistical Complexity of Reinforcement Learning?
> Sham Kakade, Professor, Harvard
>
> Thu, 14-Apr / 1:00pm PT (note the alternate time) / Zoom:
> https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> Abstract
>
> A fundamental question in the theory of reinforcement learning is what
> (representational or structural) conditions govern our ability to
> generalize and avoid the curse of dimensionality. With regards to
> supervised learning, these questions are well understood theoretically:
> practically, we have overwhelming evidence on the value of representational
> learning (say through modern deep networks) as a means for sample efficient
> learning, and, theoretically, there are well-known complexity measures
> (e.g. the VC dimension and Rademacher complexity) that govern the
> statistical complexity of learning. Providing an analogous theory for
> reinforcement learning is far more challenging, where even characterizing
> any structural conditions which support sample efficient generalization is
> far less well understood.
>
> This talk will highlight recent advances towards characterizing when
> generalization is possible in reinforcement learning (both in online and
> offline settings), focusing on both necessary and sufficient conditions. In
> particular, we will introduce a new complexity measure, the
> Decision-Estimation Coefficient, that is proven to be necessary (and,
> essentially, sufficient) for sample-efficient interactive learning.
> Bio
>
> Sham Kakade is a Gordon McKay Professor of Computer Science & Statistics
> at Harvard University. He works on the mathematical foundations of machine
> learning and AI. Sham's thesis helped in laying the statistical foundations
> of reinforcement learning. With his collaborators, his additional
> contributions include: one of the first provably efficient policy search
> methods, Conservative Policy Iteration, for reinforcement learning;
> developing the mathematical foundations for the widely used linear bandit
> models and the Gaussian process bandit models; the tensor and spectral
> methodologies for provable estimation of latent variable models; the first
> sharp analysis of the perturbed gradient descent algorithm, along with the
> design and analysis of numerous other convex and non-convex algorithms. He
> is the recipient of the ICML Test of Time Award (2020), the IBM Pat
> Goldberg Best Paper Award (2007), and the INFORMS Revenue Management and
> Pricing Prize (2014). He has been program chair for COLT 2011.
>
> Sham was an undergraduate at Caltech, where he studied physics and worked
> under the guidance of John Preskill in quantum computing. He then completed
> his Ph.D. in computational neuroscience at the Gatsby Unit at University
> College London, under the supervision of Peter Dayan. He was a postdoc at
> the Dept. of Computer Science, University of Pennsylvania, where he
> broadened his studies to include computational game theory and economics
> under the guidance of Michael Kearns. Sham has been a Principal Research
> Scientist at Microsoft Research, New England, an associate professor at the
> Department of Statistics, Wharton, UPenn, and an assistant professor at the
> Toyota Technological Institute at Chicago.
>
> *This talk is hosted by the ISL Colloquium. To receive talk announcements,
> subscribe to the mailing list islcolloq at lists.stanford.edu.*
> 
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/islcolloq
> This talk: http://isl.stanford.edu/talks/talks/2022q2/shamkakade/
>
From junyaoz at stanford.edu Thu Apr 14 11:04:32 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 14 Apr 2022 18:04:32 +0000
Subject: [theoryseminar] Theory Lunch 4/14: Jason Li (Berkeley)
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes in Fujitsu.
From saberi at stanford.edu Thu Apr 14 13:16:57 2022
From: saberi at stanford.edu (Amin Saberi)
Date: Thu, 14 Apr 2022 20:16:57 +0000
Subject: [theoryseminar] RL, dopamine, and how you make decisions
Message-ID:
Hi,
Today, we will have another speaker in our seminar series on carbon-based vs. silicon-based learning. Josh Berke will speak on the role of dopamine in learning, its connections to ML, and the study of economic decision-making in people and other animals.
Amin
Dopamine and the multiscale learning of reward predictions
Josh Berke (UCSF Departments of Neurology and Psychiatry & Behavioral Sciences)
4:30 PM, Thursday, April 14th
Gates 403
Abstract: The field of Reinforcement Learning (RL) has seen a tremendously successful dialog between brain science and computer science. This includes the discovery that reward prediction errors, the key learning signals in RL, can be encoded by pulses of dopamine. RL thus provides a formal framework for understanding the mechanisms by which we make decisions based on predictions of future rewards, and update those predictions. However, recent observations of more diverse dopamine signals across cells and brain regions are more challenging to reconcile with RL. I will present new results and simulations arguing that this diversity reflects the need for the brain to discount future rewards at multiple rates, depending on the behavioral context. This has close parallels in recent machine learning advances and has broad implications for economic decision-making in people and other animals.
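As a rough illustration of the multiple-discount-rate idea (my own toy sketch, not Berke's model): in temporal-difference learning the value update is driven by a reward prediction error, the quantity dopamine pulses are thought to encode, and learning the same reward stream at several discount rates gamma yields value estimates at multiple time scales:

```python
def td_values(rewards, gammas, alpha=0.1):
    """TD(0) in a one-state world with a self-loop; the value estimate for
    each discount rate gamma converges to r / (1 - gamma)."""
    values = {g: 0.0 for g in gammas}
    for r in rewards:
        for g in gammas:
            v = values[g]
            delta = r + g * v - v            # reward prediction error
            values[g] = v + alpha * delta    # update driven by the error
    return values

# a steady reward of 1 per step, learned at three time scales
learned = td_values([1.0] * 5000, gammas=(0.0, 0.5, 0.9))
```

The short-horizon (gamma = 0) estimate settles near 1 while the gamma = 0.9 estimate approaches 10, so a population of learners with different discount rates carries information about rewards over multiple horizons.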
Josh Berke is the Schmid Distinguished Professor in the UCSF Departments of Neurology and Psychiatry & Behavioral Sciences, and Director of UCSF's Wheeler Center for the Neurobiology of Addiction.
From omer.reingold at cs.stanford.edu Fri Apr 15 10:06:54 2022
From: omer.reingold at cs.stanford.edu (Omer Reingold)
Date: Fri, 15 Apr 2022 10:06:54 -0700
Subject: [theoryseminar] David Harel - On Odor Reproduction
Message-ID:
David Harel (bio below) is giving a talk that may be of wide interest.
Please note the unusual venue and feel free to forward to anybody that may
be interested.
Best wishes
Omer
Venue: Allen 101X Auditorium
Time: Wednesday 2pm, April 27
Title: On Odor Reproduction, and How to Test For It
Abstract:
For years there has been interest in the possibility of building a reliable
odor reproduction system (ORS), with its vast spectrum of applications:
from e-commerce, games and video, via the food and cosmetics industry, to
medical diagnosis. Such a system would enable an output device, the
"whiffer", to release an imitation of an odor read in by an input device,
the "sniffer", upon command. To realize this scheme, one must carry
out deep and complex research that combines computer science and
mathematics with chemistry, physics and biochemistry, and brain science
with psychophysical and physiological experimentation. In the process, we
expect a deep understanding of this least understood of our senses to
emerge. I will discuss the feasibility of an ORS, and will also address the
question (not unlike Turing's 1950 question regarding artificial
intelligence) of how to test the validity of a candidate ORS.
Bio:
Prof. David Harel is President of the Israel Academy of Sciences and
Humanities, and has served as its Vice President since 2015. He has been at
the Weizmann Institute of Science since 1980, serving in the past as Dean
of the Faculty of Mathematics and Computer Science. He has worked in logic
and computability, software and systems engineering, modeling biological
systems, odor reproduction, and more. He invented Statecharts and
co-invented Live Sequence Charts. Among his books are "*Algorithmics: The
Spirit of Computing*", "*Computers Ltd.: What They Really Can't Do*" and
"*Come, Let's Play: Scenario-Based Programming Using LSCs and the
Play-Engine*". His
awards include the ACM Karlstrom Outstanding Educator Award, the Israel
Prize, the ACM Software System Award, the EMET Prize, and five honorary
degrees. He is a Fellow of the ACM, the IEEE, the AAAS and the EATCS, a
member of the Academia Europaea and the Israel Academy of Sciences and
Humanities, and an international member of the US National Academy of
Sciences, the American Academy of Arts and Sciences, the US National
Academy of Engineering and the Chinese Academy of Sciences. He is also a
Fellow of the Royal Society (FRS). He is an ardent peace and human rights
activist, and his main hobbies are photography and choir singing.
From lunjia at stanford.edu Fri Apr 15 11:53:51 2022
From: lunjia at stanford.edu (Lunjia Hu)
Date: Fri, 15 Apr 2022 18:53:51 +0000
Subject: [theoryseminar] Fw: IDEAL Workshop on Clustering
In-Reply-To:
References:
Message-ID:
Hi everyone,
The IDEAL Workshop on Clustering will take place next Friday and Saturday (Apr 22, 23). It will be at Northwestern University but you can also join remotely. Below is an invitation from the organizers.
Best,
Lunjia
________________________________
From: Konstantin Makarychev
Sent: Friday, April 15, 2022 11:21 AM
To: Vaggos Chatziafratis ; Eden Chlamtac ; Vincent Cohen-Addad ; Sanjoy Dasgupta ; Jafar Jafarov ; Shi Li ; Lunjia Hu ; Liren Shan ; Ola Svensson ; Ali Vakilian ; moses at cs.stanford.edu
Cc: Yury Makarychev
Subject: IDEAL Workshop on Clustering
Dear Colleagues,
We are inviting you to attend the IDEAL Workshop on Clustering. The workshop will take place at Northwestern University on Friday, April 22, and Saturday, April 23. It will be in a hybrid format. If you are interested in participating in the workshop (inperson or remotely), please register here: https://bit.ly/3uGF9Hr . You can find more information about the workshop at the workshop webpage.
Logistics
* Dates: Friday, April 22 and Saturday, April 23
* Location: Northwestern University, Evanston, IL
* Rooms: Mudd Library 3514 (subject to change)
* Streaming: Panopto and Zoom
Confirmed Speakers
Vaggos Chatziafratis (UC Santa Cruz, Northwestern, MIT/Northeastern), Eden Chlamtáč (Ben-Gurion University, visiting TTIC), Vincent Cohen-Addad (Google Research), Sanjoy Dasgupta (UC San Diego), Jafar Jafarov (U Chicago), Shi Li (University at Buffalo), Lunjia Hu (Stanford), Liren Shan (Northwestern), Ola Svensson (EPFL), Ali Vakilian (TTIC)
Tentative Schedule for Friday, April 22
All times are in the Central Time Zone (CST).
* 8:40-9:10: Breakfast
* 9:10-9:15: Opening Remarks
* 9:15: Ola Svensson, Nearly-Tight and Oblivious Algorithms for Explainable Clustering
* 10:15: Vincent Cohen-Addad, Recent Progress on Correlation Clustering: Theory and Practice
* 11:15: Lunch
* 12:30: Sanjoy Dasgupta, Statistical consistency in clustering
* 1:30: Shi Li, Clustering with Outliers: Approximation and Distributed Algorithms
* 2:30: Coffee Break
* 3:00: Lunjia Hu, Near-Optimal Explainable k-Means for All Dimensions
Tentative Schedule for Saturday, April 23
All times are in the Central Time Zone (CST).
* 9:00: Breakfast
* 9:15: Liren Shan, Explainable k-means. Don't be greedy, plant bigger trees!
* 10:15: Vaggos Chatziafratis, Hierarchical Clustering: Upper Bounds, Lower Bounds and Some Open Questions
* 11:15: Lunch
* 12:30: Jafar Jafarov, Correlation Clustering with Local and Global Objectives
* 1:30: Eden Chlamtáč, Cascaded Norms in Clustering
* 2:30: Coffee Break
* 2:45: Ali Vakilian, Individual Fairness for k-Clustering
About the Series
The IDEAL workshop series brings in experts on topics related to the foundations of data science to present their perspective and research on a common theme. This workshop is part of the Spring 2022 Special Quarter on HighDimensional Data Analysis. This program is organized by Konstantin Makarychev (NU) and Yury Makarychev (TTIC).
Hope to see you all at the workshop!
From moses at cs.stanford.edu Fri Apr 15 13:37:05 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 15 Apr 2022 13:37:05 -0700
Subject: [theoryseminar] densest subgraph talk video
In-Reply-To:
References:
Message-ID:
In case you missed Chandra Chekuri's talk last week on finding dense
subgraphs,
you can view the recording here:
https://youtu.be/IGLd6Il7Sqs
Cheers,
Moses
On Fri, Apr 8, 2022 at 11:48 AM Moses Charikar
wrote:
> Theory folks,
>
> If you've ever wondered how to find dense subgraphs of a large graph
> efficiently, Chandra Chekuri will tell you how to do it and why some
> heuristics work so well. 2pm today in Gates 259.
>
> Cheers,
> Moses
>
> On Tue, Apr 5, 2022 at 10:00 AM Moses Charikar
> wrote:
>
>>
>> Title: Densest Subgraph: Supermodularity, Iterative Peeling, and Flow
>> Speaker: Chandra Chekuri (UIUC)
>>
>> Abstract: The densest subgraph problem in a graph (DSG), in the simplest
>> form, is the following. Given an undirected graph G = (V, E) find a subset
>> S of vertices that maximizes the ratio |E(S)|/|S| where E(S) is the set of
>> edges with both endpoints in S. DSG and several of its variants are
>> wellstudied in theory and practice and have many applications in data
>> mining and network analysis. We study fast algorithms and structural
>> aspects of DSG via the lens of supermodularity. For this we consider the
>> densest supermodular subset problem (DSS): given a nonnegative
>> supermodular function f over a ground set V, maximize f(S)/|S|.
>>
>> For DSG we describe a simple flow-based algorithm that outputs a
>> (1-\epsilon)-approximation in deterministic O(m polylog(n)/\epsilon) time
>> where m is the number of edges.
>>
>> Greedy peeling algorithms have been very popular for DSG and several
>> variants due to their efficiency, empirical performance, and worst-case
>> approximation guarantees. We describe a simple peeling algorithm for DSS
>> and analyze its approximation guarantee in a fashion that unifies several
>> existing results. Boob et al. developed an iterative peeling algorithm for
>> DSG which appears to work very well in practice, and made a conjecture
>> about its convergence to optimality. We affirmatively answer their
>> conjecture, and in fact prove that a natural generalization of their
>> algorithm converges for any supermodular function f; the key to our proof
>> is to consider an LP formulation that is derived via the Lovász extension
>> of a supermodular function which is inspired by the LP relaxation of
>> Charikar that has played an important role in several developments.
>>
>> This is joint work with Kent Quanrud and Manuel Torres.
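For readers new to the peeling idea mentioned in the abstract, here is the classic greedy peeling heuristic for DSG in minimal form: repeatedly delete a minimum-degree vertex and remember the densest intermediate subgraph. This sketch is the standard 1/2-approximation for maximizing |E(S)|/|S|, not the iterative peeling of Boob et al. or the supermodular generalization discussed in the talk.

```python
import heapq

def peel_densest(n, edges):
    """Greedy peeling for densest subgraph: returns (best density |E(S)|/|S|
    seen during peeling, size of that subgraph)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    m = len(edges)
    alive = set(range(n))
    heap = [(len(adj[v]), v) for v in range(n)]
    heapq.heapify(heap)
    best_density, best_size = m / n, n
    while len(alive) > 1:
        # pop a vertex of currently minimum degree; skip stale heap entries
        while True:
            d, v = heapq.heappop(heap)
            if v in alive and d == len(adj[v]):
                break
        alive.discard(v)
        m -= len(adj[v])                  # edges incident to v disappear
        for u in adj[v]:
            adj[u].discard(v)
            heapq.heappush(heap, (len(adj[u]), u))
        adj[v].clear()
        density = m / len(alive)
        if density > best_density:
            best_density, best_size = density, len(alive)
    return best_density, best_size
```

On a 4-clique with a pendant path attached, peeling strips the path first and reports the clique's density of 6/4 = 1.5.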
>>
>
From jmardia at stanford.edu Fri Apr 15 13:47:21 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Fri, 15 Apr 2022 13:47:21 -0700
Subject: [theoryseminar] "Balanced group testing" - David Hong (Friday,
April 15th, 2pm)
In-Reply-To: <220E9BF324A74413A3E98D5D2DFAC5B9@stanford.edu>
References: <220E9BF324A74413A3E98D5D2DFAC5B9@stanford.edu>
Message-ID:
Reminder: this is happening in 13 minutes.
On Wed, Apr 13, 2022 at 3:05 PM Pulkit Tandon wrote:
>
> Hi everyone,
>
> We return with the Information Theory Forum (IT Forum) talks this week,
> Fri, April 15th, 2pm PT, with Dr. David Hong. The talks are hosted and
> accessible via Zoom. If you want to receive reminder emails, please join
> the IT Forum mailing list.
>
> Details for this week's talk are below:
>
> Balanced group testing via hypergraph factorization for COVID-19
> David Hong, Univ. of Pennsylvania
> Fri, 15th April, 2pm PT
> Zoom Link (pwd: 032264)
>
> *Abstract:*
> Consider the following challenge. Given a large population, we need to
> identify infected individuals but with a limited number of tests. We faced
> this challenge at a large scale during the last few years with COVID-19.
> One way to tackle this challenge (tracing back to Dorfman) is to test
> groups (i.e., subsets) of individuals together. Individuals in groups that
> test negative are declared negative without further testing; everyone else
> is individually tested. Doing so can save numerous tests when relatively
> few people are infected (since most groups will test negative). Even
> greater savings are possible by assigning each individual to more than one
> group.
>
> A crucial question is: how to form the groups? There has been tremendous
> research and progress on this topic. Here, we focus on the problem of
> forming groups that are "balanced" in the sense that: (a) each group has
> the same number of individuals, (b) each individual is in the same number
> (say q) of groups, and (c) the intersection of every q groups has the same
> number of individuals. The need for balance comes from real-life
> considerations when implementing group testing in the lab. Balanced groups
> create more consistency in the grouping procedure (which makes it less
> error-prone) and they can help make the performance more even across the
> individuals.
>
> In this talk, we propose a method (we call HYPER) for producing balanced
> groups. It produces balanced designs by using hypergraph factorization. We
> evaluate HYPER under a realistic COVID-19 simulation, and find that it
> matches or outperforms state-of-the-art alternatives across a broad range
> of testing-constrained settings. We also evaluate HYPER under a common
> theoretical model, and find that its performance for noiseless tests
> appears to be close in some settings to a recently discovered
> information-theoretic bound.
>
> The contents of this talk are primarily based on the following paper:
> > Group testing via hypergraph factorization applied to COVID-19.
> > David Hong, Rounak Dey, Xihong Lin, Brian Cleary, Edgar Dobriban.
> > Nature Communications, 2022.
> > https://doi.org/10.1038/s41467-022-29389-z
>
> *Bio:*
> David Hong completed his B.S. in ECE and mathematics at Duke University
> under the Benjamin N. Duke full scholarship. He completed his Ph.D. in EECS
> at the University of Michigan under the NSF Graduate Research Fellowship.
> He is now an NSF Mathematical Sciences Postdoctoral Fellow in the
> Department of Statistics and Data Science at the University of Pennsylvania.
>
> His broad interests lie in the foundations of data science and include
> group testing as well as low-rank matrix and tensor methods for
> high-dimensional and heterogeneous data.
>
From tselil at stanford.edu Fri Apr 15 19:06:31 2022
From: tselil at stanford.edu (Tselil Schramm)
Date: Fri, 15 Apr 2022 19:06:31 -0700
Subject: [theoryseminar] Probability Seminar of possible interest
Message-ID:
Hi friends,
Next week's probability seminar speaker is Ben Morris, and his talk may be
of interest to many on this list (see the announcement below).
Best,
Tselil

Title: Big key encryption and the Thorp shuffle
Speaker: Ben Morris, UC Davis
Date: Mon April 18th 2022, 4:00pm
Location: Sequoia 200
Abstract:
In the bounded retrieval model, the adversary has malware on the message
sender's computer that can leak a certain amount of information (e.g., 10
percent of the size of the hard drive). In this setting, Bellare, Kane and
Rogaway gave an efficient symmetric encryption scheme.
Their scheme uses a giant key (a key so large that only a fraction of it
can be leaked) and can be described as follows:
The message sender looks at the giant key at some random locations to make
a key of conventional length.
Then she uses some standard symmetric encryption scheme, with the smaller
key, to encrypt the message.
Then she transmits the encrypted message, along with the list of random
locations where the giant key was examined.
One property of their scheme is that the encrypted message (including the
list of random locations) is larger than the original message. Rogaway
asked if an efficient scheme exists that does not increase the size of the
message. In this talk we present such a scheme.
The proof of security relies on an analysis of a card shuffle invented by
Ed Thorp in 1973.
No knowledge of cryptography will be assumed.
This is joint work with Hans Oberschelp and Hamilton Santhakumar.
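The three steps above can be sketched in a few lines of Python. This is a toy illustration, not the actual Bellare-Kane-Rogaway construction: the hash-based key derivation and XOR keystream below are simplified stand-ins for real cryptographic primitives.

```python
import hashlib
import secrets

def _keystream(small_key: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(small_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def bkr_encrypt(giant_key: bytes, message: bytes, num_probes: int = 16):
    # Step 1: look at the giant key at random locations to derive a
    # conventional-length key.
    locations = [secrets.randbelow(len(giant_key)) for _ in range(num_probes)]
    small_key = hashlib.sha256(bytes(giant_key[i] for i in locations)).digest()
    # Step 2: encrypt with a standard-style symmetric scheme (here a toy
    # XOR keystream stands in for a real cipher).
    stream = _keystream(small_key, len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, stream))
    # Step 3: transmit the ciphertext together with the probe locations.
    return ciphertext, locations

def bkr_decrypt(giant_key: bytes, ciphertext: bytes, locations) -> bytes:
    small_key = hashlib.sha256(bytes(giant_key[i] for i in locations)).digest()
    stream = _keystream(small_key, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))
```

Note that the total transmission (ciphertext plus the list of locations) is longer than the message, which is exactly the overhead Rogaway asked to eliminate.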
From beh at stanford.edu Sat Apr 16 14:28:13 2022
From: beh at stanford.edu (Soheil Behnezhad)
Date: Sat, 16 Apr 2022 21:28:13 +0000
Subject: [theoryseminar] TOCA-SV is back (May 20 @ Stanford)!
Message-ID:
Hi,
For the past several years, we had a (bi)annual event, held alternately at Stanford and Google, called TOCA-SV, for Bay Area theoreticians to meet. This was on pause for the last two years during the pandemic, but we will revive the tradition this year at Stanford on May 20th. It is going to be an in-person event with talks, posters, and food. Registration is free but required. Please register here by April 22nd. We will announce the program soon, but if you haven't attended TOCA-SV before, see the links to the previous events on the TOCA-SV'22 website to get an idea of what to expect.
Everyone, especially students, is welcome and encouraged to present posters of their work. If you are planning to present a poster, please indicate so in the registration form so we can make sure to have a poster stand secured for you.
Mark your calendars and stay tuned for more info. We look forward to seeing everyone on May 20 at Stanford!
Best,
Soheil Behnezhad
From junyaoz at stanford.edu Sun Apr 17 19:47:35 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 18 Apr 2022 02:47:35 +0000
Subject: [theoryseminar] Theory Lunch 4/21: Kevin Tian
Message-ID:
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Kevin will tell us about: Semi-Random Sparse Recovery in Nearly-Linear Time
Abstract: Sparse recovery is one of the most fundamental and well-studied inverse problems. Standard statistical formulations of the problem are provably solved by general convex programming techniques and more practical, fast (nearly-linear time) iterative methods. However, these latter "fast algorithms" have previously been observed to be brittle in various real-world settings.
We investigate the brittleness of fast sparse recovery algorithms to generative model changes through the lens of studying their robustness to a "helpful" semi-random adversary, a framework which tests whether an algorithm overfits to input assumptions. We consider the following basic model: let A be an n x d measurement matrix which contains an unknown m x d subset of rows G which are bounded and satisfy the restricted isometry property (RIP), but is otherwise arbitrary. Letting x* in R^d be s-sparse, and given either exact measurements b = Ax* or noisy measurements b = Ax* + xi, we design algorithms recovering x* information-theoretically optimally in nearly-linear time. We extend our algorithm to hold for weaker generative models relaxing our planted RIP row subset assumption to a natural weighted variant, and show that our method's guarantees naturally interpolate the quality of the measurement matrix to, in some parameter regimes, run in sublinear time.
Our approach differs from that of prior fast iterative methods with provable guarantees under semi-random generative models (Cheng and Ge '18, Li et al. '20), which typically separate the problem of learning the planted instance from the estimation problem, i.e. they attempt to first learn the planted "good" instance (in our case, the matrix G). However, natural conditions on a submatrix which make sparse recovery tractable, such as RIP, are NP-hard to verify and hence first learning a sufficient row reweighting appears challenging. We eschew this approach and design a new iterative method, tailored to the geometry of sparse recovery, which is provably robust to our semi-random model. Our hope is that our approach opens the door to new robust, efficient algorithms for other natural statistical inverse problems.
Based on joint work with Jonathan Kelner, Jerry Li, Allen Liu, and Aaron Sidford.
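For readers who want a concrete picture of the "fast iterative methods" baseline the abstract contrasts with, here is a minimal iterative hard thresholding (IHT) sketch. This is the classic RIP-dependent algorithm, not the semi-random-robust method of the talk, and the parameters are illustrative.

```python
import numpy as np

def iht(A, b, s, iters=500, step=None):
    """Iterative hard thresholding: recover an s-sparse x from b = A @ x.
    A classic fast baseline whose guarantees rely on A satisfying RIP."""
    n, d = A.shape
    if step is None:
        # A conservative step size: 1 / ||A||_2^2.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(d)
    for _ in range(iters):
        # Gradient step on the least-squares objective ...
        x = x + step * A.T @ (b - A @ x)
        # ... then keep only the s largest-magnitude coordinates.
        small = np.argsort(np.abs(x))[:-s]
        x[small] = 0.0
    return x
```

On a well-conditioned (e.g. Gaussian) measurement matrix this converges quickly; the point of the talk is that such methods can break when only an unknown subset of rows is well-behaved.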
Cheers,
Junyao
From moses at cs.stanford.edu Mon Apr 18 22:41:38 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Mon, 18 Apr 2022 22:41:38 -0700
Subject: [theoryseminar] Fwd: Economics Seminar on Wednesday,
April 20 with Professor Constantino Daskalakis
In-Reply-To:
References:
Message-ID:
Theory friends,
Costis Daskalakis will be speaking at the Economics seminar in GSB this
Wednesday, April 20, 3:45-5:00pm, on Equilibrium Computation and Machine
Learning.
See below for the abstract and more details about attending the talk via
zoom or in person.
Cheers,
Moses
 Forwarded message 
From: Rochelle Bagalso
Date: Thu, Apr 14, 2022 at 6:52 PM
Subject: Economics Seminar on Wednesday, April 20 with Professor
Constantino Daskalakis
To:
CC: Giselle Alvarez , Gabriel Lozano <
glozano at stanford.edu>, Rochelle Bagalso
The Economics seminar guest speaker on Wednesday, April 20th is Professor
Constantino Daskalakis from *Massachusetts Institute of Technology*. The
title of the presentation is *Equilibrium Computation and Machine Learning
(see abstract below).*
The hybrid seminar is scheduled from 3:45 to 5:00 p.m. in C105. A Zoom link
is provided here for those joining remotely, or see full details below.
For those attending the seminar in C105, please review the guidelines
below. Please note that a Google form will be sent out on the morning of the
seminar. All who will attend in person will need to sign in prior to the
seminar.
- Staff and student attendees must complete the HealthCheck daily health
attestation and COVID-19 testing requirements, where applicable.
- Face coverings are no longer required at Stanford but strongly
recommended. See Health Alerts for the most up-to-date policies.
- Visitors: Non-Stanford affiliates (i.e. lecturers, guest speakers,
alumni) must follow the university Visitor policy.
Join from PC, Mac, Linux, iOS or Android:
https://stanford.zoom.us/j/96553591190?pwd=blBLRUVUc1RaUFdlSGtYNWZ6YWVtZz09
Password: 254675
Thank you,
Rochelle

*Title: Equilibrium Computation and Machine Learning*
*Abstract*: Machine Learning has recently made significant advances in
challenges such as speech and image recognition, automatic translation, and
text generation, much of that progress being fueled by the success of
gradient-descent-based optimization methods in computing local optima of
non-convex objectives. From robustifying machine learning models against
adversarial attacks to causal inference, training generative models, and
learning in strategic environments, many outstanding challenges in Machine
Learning lie at its interface with Game Theory. On this front, however,
gradient-descent-based optimization methods have been less successful.
Here, the role of single-objective optimization is played by equilibrium
computation, but gradient-descent-based methods commonly fail to find
equilibria, and even computing local approximate equilibria has remained
daunting. We shed light on these challenges, presenting obstacles and
opportunities for Machine Learning and Game Theory going forward.
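A tiny illustration of that failure mode (a standard toy example, not from the talk): simultaneous gradient descent-ascent on the bilinear game min_x max_y x*y spirals away from the unique equilibrium at the origin instead of converging to it.

```python
# Simultaneous gradient descent-ascent on f(x, y) = x * y.
# The unique equilibrium is (0, 0), yet each step multiplies the distance
# to the origin by sqrt(1 + eta^2) > 1, so the iterates spiral outward.
eta = 0.1
x, y = 1.0, 1.0
r0 = (x * x + y * y) ** 0.5
for _ in range(100):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy  # descend in x, ascend in y
r100 = (x * x + y * y) ** 0.5          # radius has grown, not shrunk
```

The update is an expanding rotation, which is why plain gradient dynamics can fail even on the simplest convex-concave games.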
*Bio*: Constantinos (aka "Costis") Daskalakis is a Professor of Electrical
Engineering and Computer Science at MIT. He holds a Diploma in Electrical
and Computer Engineering from the National Technical University of Athens,
and a PhD in Electrical Engineering and Computer Science from UC Berkeley.
He works on Computation Theory and its interface with Game Theory,
Economics, Probability Theory, Machine Learning and Statistics. He has
resolved long-standing open problems about the computational complexity of
Nash equilibrium, and the mathematical structure and computational
complexity of multi-item auctions. His current work focuses on
high-dimensional statistics and learning from biased, dependent, or
strategic data. He has been honored with the ACM Doctoral Dissertation
Award, the Kalai Prize from the Game Theory Society, the Sloan Fellowship
in Computer Science, the SIAM Outstanding Paper Prize, the Microsoft
Research Faculty Fellowship, the Simons Investigator Award, the Rolf
Nevanlinna Prize from the International Mathematical Union, the ACM Grace
Murray Hopper Award, and the Bodossaki Foundation Distinguished Young
Scientists Award.

*Rochelle Bagalso*
Program Coordinator, Faculty Support Team
*Stanford Graduate School of Business*
Knight Management Center
Stanford University
655 Knight Way, W308B
Stanford, CA 94305-7298
Phone (650) 736-2237
*Change Lives. Change Organizations. Change the World.*
From moses at cs.stanford.edu Tue Apr 19 23:33:47 2022
From: moses at cs.stanford.edu (Moses Charikar)
Date: Tue, 19 Apr 2022 23:33:47 -0700
Subject: [theoryseminar] RAIN Seminar on Weds April 20 with Ravi Jagadeesan
In-Reply-To:
References:
Message-ID:
Folks,
Ravi Jagadeesan's talk at the RAIN seminar tomorrow (Wed) on Matching and
Prices could be of interest to some of you. Ravi is a current postdoc and
soon-to-be professor of Economics at Stanford.
See details below. Note, in particular, the non-standard location if you
plan to join in person.
Cheers,
Moses
 Forwarded message 
From: Grace Guan
Date: Thu, Apr 14, 2022 at 1:27 PM
Subject: [RAIN] in-person RAIN Seminar on Weds April 20 with Ravi Jagadeesan
To:
Hello everyone,
We will have our next RAIN seminar Wednesday April 20, at noon PT. We will
have Ravi Jagadeesan from Stanford talk about "Matching and Prices".
Details are below.
If you would like to meet with the speaker, please sign up for a time slot
at this link:
https://docs.google.com/spreadsheets/d/1bc4FQ1nwLN5lUBDMU4RbvCztuNrockhqRDNebxw0wQ/edit?usp=sharing
Time: April 20 at noon PT
Zoom Info:
https://stanford.zoom.us/j/93711053291?pwd=SzNHZmFUSWFzdEVUZGdxZkt1cXpHQT09
Password: 199142
In-person room: Thornton 110
Please note this room is a little far from Gates and Huang so please allow
for additional travel time.
Hope to see you there!
Grace

Title: Matching and Prices
Abstract: Indivisibilities and budget constraints are pervasive features of
many matching markets. But when taken together, these features typically
cause failures of gross substitutability, a standard condition on
preferences imposed in most matching models. To accommodate budget
constraints and other income effects, we analyze matching markets under a
weaker condition: net substitutability. Although competitive equilibria do
not generally exist in our setting, we show that stable outcomes always
exist and are efficient. However, standard auctions and matching
procedures, such as the Deferred Acceptance algorithm and the Cumulative
Offer process, do not generally yield stable outcomes. We illustrate how
the flexibility of prices is critical for our results. We also discuss how
budget constraints and other income effects affect classic properties of
stable outcomes.
Joint work with Alex Teytelboym
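For reference, here is a minimal sketch of the classic Deferred Acceptance (Gale-Shapley) algorithm mentioned in the abstract, in its standard transferless form; the talk's point is precisely that such procedures can fail to yield stable outcomes once budget constraints and income effects enter. The preference lists in the usage example are made up for illustration.

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Proposer-optimal stable matching via deferred acceptance.
    prefs map each agent to a list of acceptable partners, best first."""
    # Lower rank index means more preferred by the receiver.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}
    match = {}                      # receiver -> currently held proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        if next_choice[p] >= len(proposer_prefs[p]):
            continue                # p has exhausted their list; stays unmatched
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        cur = match.get(r)
        if cur is None:
            match[r] = p            # r tentatively holds p
        elif rank[r][p] < rank[r][cur]:
            match[r] = p            # r upgrades to p, rejecting cur
            free.append(cur)
        else:
            free.append(p)          # r rejects p
    return match
```

For example, with proposers {'a': ['X', 'Y'], 'b': ['X', 'Y']} and receivers {'X': ['a', 'b'], 'Y': ['a', 'b']}, both propose to X, X holds a and rejects b, and b settles for Y.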
Bio: Ravi Jagadeesan is a Postdoctoral Fellow at Stanford. Starting in
July, he will be an Assistant Professor in the Department of Economics at
Stanford. He completed his PhD in Business Economics at Harvard in Spring
2020. Before that, he graduated from Harvard with an A.B. in Mathematics
and an A.M. in Statistics in Spring 2018.
++**==++**==++**==++**==++**==++**==++**==
internetalgs mailing list
internetalgs at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/internetalgs
From junyaoz at stanford.edu Thu Apr 21 11:20:13 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 21 Apr 2022 18:20:13 +0000
Subject: [theoryseminar] Theory Lunch 4/21: Kevin Tian
In-Reply-To:
References:
Message-ID:
Hi everyone,
Today's theory lunch will happen in Fujitsu (Gates 403), because of the light rain (and it seems our whiteboards in the engineering quad are missing).
Best,
Junyao
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Sunday, April 17, 2022 7:47 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 4/21: Kevin Tian
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Kevin will tell us about: Semi-Random Sparse Recovery in Nearly-Linear Time
Abstract: Sparse recovery is one of the most fundamental and well-studied inverse problems. Standard statistical formulations of the problem are provably solved by general convex programming techniques and more practical, fast (nearly-linear time) iterative methods. However, these latter "fast algorithms" have previously been observed to be brittle in various real-world settings.
We investigate the brittleness of fast sparse recovery algorithms to generative model changes through the lens of studying their robustness to a "helpful" semi-random adversary, a framework which tests whether an algorithm overfits to input assumptions. We consider the following basic model: let A be an n x d measurement matrix which contains an unknown m x d subset of rows G which are bounded and satisfy the restricted isometry property (RIP), but is otherwise arbitrary. Letting x* in R^d be s-sparse, and given either exact measurements b = Ax* or noisy measurements b = Ax* + xi, we design algorithms recovering x* information-theoretically optimally in nearly-linear time. We extend our algorithm to hold for weaker generative models relaxing our planted RIP row subset assumption to a natural weighted variant, and show that our method's guarantees naturally interpolate the quality of the measurement matrix to, in some parameter regimes, run in sublinear time.
Our approach differs from that of prior fast iterative methods with provable guarantees under semi-random generative models (Cheng and Ge '18, Li et al. '20), which typically separate the problem of learning the planted instance from the estimation problem, i.e. they attempt to first learn the planted "good" instance (in our case, the matrix G). However, natural conditions on a submatrix which make sparse recovery tractable, such as RIP, are NP-hard to verify and hence first learning a sufficient row reweighting appears challenging. We eschew this approach and design a new iterative method, tailored to the geometry of sparse recovery, which is provably robust to our semi-random model. Our hope is that our approach opens the door to new robust, efficient algorithms for other natural statistical inverse problems.
Based on joint work with Jonathan Kelner, Jerry Li, Allen Liu, and Aaron Sidford.
Cheers,
Junyao
From junyaoz at stanford.edu Thu Apr 21 11:23:50 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 21 Apr 2022 18:23:50 +0000
Subject: [theoryseminar] Theory Lunch 4/21: Kevin Tian
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in Fujitsu (Gates 403) in 10 minutes.
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Thursday, April 21, 2022 11:20 AM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: Re: [theoryseminar] Theory Lunch 4/21: Kevin Tian
Hi everyone,
Today's theory lunch will happen in Fujitsu (Gates 403), because of the light rain (and it seems our whiteboards in the engineering quad are missing).
Best,
Junyao
________________________________
From: theoryseminar on behalf of Junyao Zhao
Sent: Sunday, April 17, 2022 7:47 PM
To: theoryseminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theoryseminar] Theory Lunch 4/21: Kevin Tian
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Kevin will tell us about: Semi-Random Sparse Recovery in Nearly-Linear Time
Abstract: Sparse recovery is one of the most fundamental and well-studied inverse problems. Standard statistical formulations of the problem are provably solved by general convex programming techniques and more practical, fast (nearly-linear time) iterative methods. However, these latter "fast algorithms" have previously been observed to be brittle in various real-world settings.
We investigate the brittleness of fast sparse recovery algorithms to generative model changes through the lens of studying their robustness to a "helpful" semi-random adversary, a framework which tests whether an algorithm overfits to input assumptions. We consider the following basic model: let A be an n x d measurement matrix which contains an unknown m x d subset of rows G which are bounded and satisfy the restricted isometry property (RIP), but is otherwise arbitrary. Letting x* in R^d be s-sparse, and given either exact measurements b = Ax* or noisy measurements b = Ax* + xi, we design algorithms recovering x* information-theoretically optimally in nearly-linear time. We extend our algorithm to hold for weaker generative models relaxing our planted RIP row subset assumption to a natural weighted variant, and show that our method's guarantees naturally interpolate the quality of the measurement matrix to, in some parameter regimes, run in sublinear time.
Our approach differs from that of prior fast iterative methods with provable guarantees under semi-random generative models (Cheng and Ge '18, Li et al. '20), which typically separate the problem of learning the planted instance from the estimation problem, i.e. they attempt to first learn the planted "good" instance (in our case, the matrix G). However, natural conditions on a submatrix which make sparse recovery tractable, such as RIP, are NP-hard to verify and hence first learning a sufficient row reweighting appears challenging. We eschew this approach and design a new iterative method, tailored to the geometry of sparse recovery, which is provably robust to our semi-random model. Our hope is that our approach opens the door to new robust, efficient algorithms for other natural statistical inverse problems.
Based on joint work with Jonathan Kelner, Jerry Li, Allen Liu, and Aaron Sidford.
Cheers,
Junyao
From beh at stanford.edu Thu Apr 21 14:09:01 2022
From: beh at stanford.edu (Soheil Behnezhad)
Date: Thu, 21 Apr 2022 21:09:01 +0000
Subject: [theoryseminar] TOCA-SV is back (May 20 @ Stanford)!
In-Reply-To: