From tavorb at stanford.edu Mon Nov 1 11:18:38 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 1 Nov 2021 11:18:38 -0700
Subject: [theory-seminar] "Distributed Algorithms for Optimization in Networks" – Angelia Nedich (Thu, 4-Nov @ 4:00pm)
Message-ID:
Distributed Algorithms for Optimization in Networks
Angelia Nedich – Professor, Arizona State University
Thu, 4-Nov / 4:00pm / Packard 101 (in person)
*Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). This week's speaker will join via
Zoom; the talk will be screened in Packard 101 and streamed for those
unable to attend in
person: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
*
Abstract
We will give an overview of distributed optimization algorithms, starting
with the basic underlying idea illustrated on a prototype problem in
machine learning. In particular, we will focus on a convex minimization
problem in which the objective function is given as the sum of convex
functions, each of which is known by one agent in a network. The agents
communicate over the network with the task of jointly determining a
minimizer of the sum of their objective functions. The communication
network can vary over time, which is modeled through a sequence of graphs
over a static set of nodes (representing the agents in the system). In this
setting, we will discuss distributed first-order methods that make use of
an agreement protocol, a mechanism that replaces the role of a central
coordinator. We will discuss some refinements of the basic method and
conclude with more recent developments of fast methods that can match the
performance of centralized methods.
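To make the setup concrete, here is a minimal numerical sketch (my illustration, not code from the talk) of consensus-based distributed gradient descent on quadratic local objectives, where the agreement step is a weighted average over a fixed ring network:

```python
import numpy as np

def distributed_gradient(a, W, steps=2000, eta=0.01):
    """Consensus-based distributed gradient descent.

    Agent i privately holds f_i(x) = (x - a[i])^2 and exchanges its
    current iterate only with its neighbors (the nonzero entries in
    row i of the doubly stochastic mixing matrix W).  The network
    jointly minimizes sum_i f_i, whose minimizer is mean(a).
    """
    x = np.zeros(len(a))            # one scalar iterate per agent
    for _ in range(steps):
        grad = 2.0 * (x - a)        # local gradients, computed privately
        x = W @ x - eta * grad      # agreement (mixing) step + gradient step
    return x

# Four agents on a ring; W averages each agent with its two neighbors.
a = np.array([1.0, 2.0, 3.0, 6.0])
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = distributed_gradient(a, W)      # every iterate approaches mean(a) = 3.0
```

With a constant step size the agents agree only up to a small residual disagreement; diminishing step sizes, and the faster methods mentioned in the abstract, remove this gap.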
Bio
Angelia Nedich holds a Ph.D. from Moscow State University, Moscow, Russia,
in Computational Mathematics and Mathematical Physics (1994), and a Ph.D.
from the Massachusetts Institute of Technology, Cambridge, USA, in
Electrical Engineering and Computer Science (2002). She worked as a senior
engineer at BAE Systems North America, Advanced Information Technology
Division, in Burlington, MA. Currently, she is a faculty member of the
School of Electrical, Computer, and Energy Engineering at Arizona State
University in Tempe. Prior to joining Arizona State University, she was a
Willard Scholar faculty member at the University of Illinois at
Urbana-Champaign. She is a recipient (jointly with her co-authors) of the
Best Paper Award at the Winter Simulation Conference 2013 and the Best
Paper Award at the International Symposium on Modeling and Optimization in
Mobile, Ad Hoc, and Wireless Networks (WiOpt) 2015. Her general research
interest is in optimization, large-scale complex systems dynamics,
variational inequalities, and games.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list isl-colloq at lists.stanford.edu.*
------------------------------
Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk: http://isl.stanford.edu/talks/talks/2021q4/angelia-nedich/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tpulkit at stanford.edu Tue Nov 2 14:14:50 2021
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Tue, 2 Nov 2021 21:14:50 +0000
Subject: [theory-seminar] "Estimating and optimizing Information
Measures..." - Prof. Haim Permuter (Friday, Nov. 5th, 1pm)
Message-ID: <83F2627A-A316-4FD1-A529-4F266AE62D6C@stanford.edu>
Hi everyone,
The IT (Information Theory) Forum continues this week on Friday, 5th November, at 1pm, with Prof. Haim Permuter speaking in person. We will also try to make the talk accessible via Zoom for remote attendees. Details below:
Estimating and optimizing Information Measures using neural networks and its application in communication.
Prof. Haim Permuter, Ben-Gurion University
Fri, 5th November, 1pm
In-person Talk: Linvil/Allen conference room 101
Zoom Link: https://stanford.zoom.us/j/92716427348?pwd=TlV6VHNscGxsTEdlOC8rWkMwaElldz09
pwd: 032264
Abstract:
In this talk we will develop a principled framework for neural estimation and optimization of information measures, which is then leveraged to estimate the feedforward and feedback capacities of general channels. To that end we propose a novel Directed Information Neural Estimator (DINE) that complements the Mutual Information Neural Estimation (MINE), and then develop methods for optimizing DINE and MINE over the channel input distributions. More specifically, two optimization methods are proposed, one for continuous channel input spaces and the other for discrete. While capacity estimation is the main application considered in this talk, we will discuss how the developed estimation and optimization techniques are applicable in additional scenarios where (maximized) Directed Information is of interest such as probability density estimation for processes with memory, causality identification and machine learning in general.
The talk is based on a joint work with Dor Tzur, Ziv Aharoni and Ziv Goldfeld.
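As a concrete (and much simplified) illustration of the variational principle behind MINE, the following sketch evaluates the Donsker-Varadhan lower bound with a fixed, hand-picked critic on correlated Gaussians. The actual estimators parameterize the critic with a neural network and maximize the bound; the closed-form critic below is my choice, valid only for this Gaussian example:

```python
import numpy as np

def dv_bound(T, x_joint, y_joint, x_marg, y_marg):
    """Donsker-Varadhan lower bound on I(X;Y):
    I(X;Y) >= E_joint[T(X,Y)] - log E_indep[exp(T(X,Y))], for ANY critic T.
    MINE/DINE maximize this bound over neural-network critics; here T is fixed."""
    joint_term = T(x_joint, y_joint).mean()
    indep_term = np.log(np.exp(T(x_marg, y_marg)).mean())
    return joint_term - indep_term

rng = np.random.default_rng(0)
rho, n = 0.8, 200_000
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
y_shuf = rng.permutation(y)     # shuffling y breaks its dependence on x

# For jointly Gaussian (X, Y) the optimal critic is the log density ratio,
# which is quadratic; plugging it in recovers the true
# MI = -0.5 * log(1 - rho^2) ~ 0.511 nats.
T = lambda a, b: (rho / (1 - rho**2)) * (a * b - rho * (a**2 + b**2) / 2)
mi_estimate = dv_bound(T, x, y, x, y_shuf)
```

Any critic gives a valid lower bound in expectation; a poor critic simply yields a loose one, which is why the neural optimization over critics matters.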
Bio:
Haim Permuter received his B.Sc. (summa cum laude) from Ben-Gurion University (BGU) and his Ph.D. from Stanford University, both in Electrical Engineering, in 1997 and 2008, respectively. Between 1997 and 2004, he served as a scientific research officer in an R&D unit of the Israeli Defense Forces. In the summer of 2002 he worked at the IBM Almaden Research Center. He is a recipient of several awards, including the Eshkol Fellowship, Wolf Award, Fulbright Fellowship, Stanford Graduate Fellowship, U.S.-Israel Binational Science Foundation Bergmann Memorial Award, and Allon Fellowship. Haim joined the faculty of the Electrical Engineering Department at BGU in October 2008 as a tenure-track faculty member, and is now a Professor and holder of the Luck-Hille Chair in Electrical Engineering. He serves as head of the communication, cyber, and information track in his department, and served on the editorial board of the IEEE Transactions on Information Theory from 2013 to 2016.
Best
Pulkit
From gvaliant at cs.stanford.edu Wed Nov 3 20:27:42 2021
From: gvaliant at cs.stanford.edu (Gregory Valiant)
Date: Wed, 3 Nov 2021 20:27:42 -0700
Subject: [theory-seminar] post-deadline happy hour tomorrow (Thursday) @4pm
In-Reply-To:
References:
Message-ID:
Hi Friends,
As usual, we will be having a post STOC deadline happy hour tomorrow
(Thursday) starting around 4pm by the whiteboards. I'll bring some drinks
and snacks. [And, of course, everyone is welcome even if you didn't
submit a paper.]
I hope to see you there,
-Greg
On Thu, Oct 28, 2021 at 11:50 AM Junyao Zhao wrote:
> A gentle reminder: This is happening in 10 minutes.
>
> Hi everyone,
>
> This week's theory lunch will take place Thursday at noon in the Engineering Quad.
> As usual, we'll start with some socializing, followed by a talk at 12:30pm. Our
> speaker this week is Guy Bresler. Guy will tell us about: *The
> Algorithmic Phase Transition of Random k-SAT for Low Degree Polynomials*
>
> *Abstract:* We study the algorithmic task of finding a satisfying
> assignment of a uniformly random k-SAT formula F with n variables and m
> clauses. It is known that a satisfying assignment exists with high
> probability at clause density m/n < 2^k log 2 - (1/2) (log 2 + 1) + o_k(1),
> while the best polynomial-time algorithm known, the Fix algorithm of
> Coja-Oghlan, finds a satisfying assignment at the much lower clause density
> (1 - o_k(1)) 2^k log k / k. This prompts the question: is it possible to
> efficiently find a satisfying assignment at higher clause densities?
>
> To understand the algorithmic threshold of random k-SAT, we study the
> limits of low degree polynomial algorithms, which are a powerful class of
> algorithms including Fix, Survey Propagation guided decimation (with
> bounded or mildly growing number of message passing rounds), and paradigms
> such as message passing and local graph algorithms. We show that low degree
> polynomial algorithms can find a satisfying assignment at clause density (1
> - o_k(1)) 2^k log k / k, matching Fix, and not at clause density (1 +
> o_k(1)) c* 2^k log k / k, where c* ~ 4.911. This shows the first sharp (up
> to constant factor) computational phase transition of random k-SAT for a
> class of algorithms. Our proof establishes and leverages a new many-way
> overlap gap property tailored to random k-SAT, which rigorously rules out
> efficient algorithms via clustering of the solution space. Joint work with
> Brice Huang.
>
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
>
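As a small illustration of the objects in the quoted abstract (my sketch, unrelated to the Fix algorithm), the following samples a uniformly random k-SAT formula at a given clause density and searches for a satisfying assignment by brute force, which is feasible only for tiny n:

```python
import itertools
import random

def random_ksat(n, m, k, rng):
    """Uniformly random k-SAT: m clauses, each on k distinct variables
    (out of n) with independent random negations."""
    return [[(v, rng.random() < 0.5) for v in rng.sample(range(n), k)]
            for _ in range(m)]

def satisfies(assignment, formula):
    # A clause holds if some literal is true: variable value != negation flag.
    return all(any(assignment[v] != neg for v, neg in clause) for clause in formula)

def brute_force_sat(n, formula):
    """Exhaustive search over all 2^n assignments -- feasible only for
    tiny n, which is exactly why polynomial-time algorithms like Fix
    matter at scale."""
    for bits in itertools.product([False, True], repeat=n):
        if satisfies(bits, formula):
            return bits
    return None

n, k, density = 15, 3, 2.0          # density 2.0 is well below the 3-SAT
solutions = []                      # satisfiability threshold (~4.27)
for seed in range(3):
    f = random_ksat(n, int(density * n), k, random.Random(seed))
    sol = brute_force_sat(n, f)
    if sol is not None:
        solutions.append((f, sol))
```

At this low clause density a satisfying assignment exists with high probability; the abstract concerns the much harder regime near (2^k log k)/k times higher densities, where brute force is hopeless and the choice of polynomial-time algorithm is what matters.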
From junyaoz at stanford.edu Thu Nov 4 09:07:11 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 4 Nov 2021 16:07:11 +0000
Subject: [theory-seminar] No Theory Lunch today
Message-ID:
We won't have theory lunch today, but there will be a post STOC deadline happy hour.
Good luck with STOC submissions!
Cheers,
Junyao
Hi Friends,
As usual, we will be having a post STOC deadline happy hour tomorrow (Thursday) starting around 4pm by the whiteboards. I'll bring some drinks and snacks. [And, of course, everyone is welcome even if you didn't submit a paper.]
I hope to see you there,
-Greg
From tavorb at stanford.edu Thu Nov 4 10:51:55 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Thu, 4 Nov 2021 10:51:55 -0700
Subject: [theory-seminar] "Distributed Algorithms for Optimization in Networks" – Angelia Nedich (Thu, 4-Nov @ 4:00pm)
In-Reply-To:
References:
Message-ID:
Reminder that this talk will be today at 4pm, streamed in Packard 101 (and
online at:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ).
Please join us for coffee and snacks at 3:30pm in the Grove outside Packard
(near Bytes' outdoor seating).
On Mon, Nov 1, 2021 at 11:18 AM Tavor Baharav wrote:
> Distributed Algorithms for Optimization in Networks
> Angelia Nedich – Professor, Arizona State University
>
> Thu, 4-Nov / 4:00pm / Packard 101 (in person)
>
> *Please join us for coffee and snacks at 3:30pm in the Grove outside
> Packard (near Bytes' outdoor seating). This week's speaker will join us via
> Zoom, which will be screened in Packard 101, and streamed for those unable
> to attend in
> person: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
> *
> Abstract
>
> We will overview the distributed optimization algorithms starting with the
> basic underlying idea illustrated on a prototype problem in machine
> learning. In particular, we will focus on convex minimization problem where
> the objective function is given as the sum of convex functions, each of
> which is known by an agent in a network. The agents communicate over the
> network with a task to jointly determine a minimum of the sum of their
> objective functions. The communication network can vary over time, which is
> modeled through a sequence of graphs over a static set of nodes
> (representing the agents in a system). In this setting, the distributed
> first-order methods will be discussed that make use of an agreement
> protocol, which is a mechanism replacing the role of a coordinator. We will
> discuss some refinements of the basic method and conclude with more recent
> developments of fast methods that can match the performance of centralized
> methods.
> Bio
>
> Angelia Nedich has a Ph.D. from Moscow State University, Moscow, Russia,
> in Computational Mathematics and Mathematical Physics (1994), and a Ph.D.
> from Massachusetts Institute of Technology, Cambridge, USA, in Electrical
> and Computer Science Engineering (2002). She has worked as a senior
> engineer in BAE Systems North America, Advanced Information Technology
> Division at Burlington, MA. Currently, she is a faculty member of the
> school of Electrical, Computer, and Energy Engineering at Arizona State
> University at Tempe. Prior to joining Arizona State University, she has
> been a Willard Scholar faculty member at the University of Illinois at
> Urbana-Champaign. She is a recipient (jointly with her co-authors) of the
> Best Paper Award at the Winter Simulation Conference 2013 and the Best
> Paper Award at the International Symposium on Modeling and Optimization in
> Mobile, Ad Hoc, and Wireless Networks (WiOpt) 2015. Her general research
> interest is in optimization, large-scale complex systems dynamics,
> variational inequalities, and games.
>
> *This talk is hosted by the ISL Colloquium. To receive talk announcements,
> subscribe to the mailing list isl-colloq at lists.stanford.edu.*
> ------------------------------
>
> Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
> This talk: http://isl.stanford.edu/talks/talks/2021q4/angelia-nedich/
>
From divgarg at stanford.edu Thu Nov 4 14:44:44 2021
From: divgarg at stanford.edu (Div Garg)
Date: Thu, 4 Nov 2021 21:44:44 +0000
Subject: [theory-seminar] CS 25: Transformers Seminar, Nov 8 10am,
Prof. Geoffrey Hinton
Message-ID:
Hi everyone,
We will be hosting a very special guest for CS 25: Transformers United: Prof. Geoffrey Hinton from the University of Toronto, who will speak about
GLOM: Representing part-whole hierarchies in a neural network
The talk will take place on Monday, Nov 8, 10 am - 11:20 am PT.
We are opening the talk to everyone at Stanford, not just those enrolled in the class.
Feel free to invite your friends within Stanford, but please do not share the Zoom link outside Stanford!
Join the Zoom webinar using this link: https://stanford.zoom.us/j/92992933998?pwd=UEIranlBREpGQUJwWjdSVHEzZ0xnUT09
Password: 393452
You can find the abstract & bio for the talk below.
Looking forward to seeing you!
- Div, Chetanya and Advay
--
Div Garg
divyanshgarg.com
____________________
Prof. Geoffrey Hinton, University of Toronto
Title:
GLOM: Representing part-whole hierarchies in a neural network
Abstract:
I will present a single idea about representation which allows advances made by several different groups to be combined into an imaginary
system called GLOM. The advances include transformers, neural fields, contrastive representation learning, distillation and capsules. GLOM answers
the question: How can a neural network with a fixed architecture parse an image into a part-whole hierarchy which has a different structure for each image?
The idea is simply to use islands of identical vectors to represent the nodes in the parse tree. The talk will discuss the many ramifications of this idea.
If GLOM can be made to work, it should significantly improve the interpretability of the representations produced by transformer-like systems when applied
to vision or language.
Bio:
Geoffrey Hinton is known by many as the godfather of deep learning and was the recipient of the Turing Award in 2018. Aside from his seminal 1986 paper on backpropagation, Hinton has invented several foundational deep learning techniques throughout his decades-long career. Hinton currently splits his time between the University of Toronto and Google Brain. He is a Fellow of the UK Royal Society and has received numerous awards in recognition of his work.
Zoom invitation:
Topic: CS 25: Transformers United!
You are invited to a Zoom webinar.
When: Nov 8, 2021 10:00 AM Pacific Time (US and Canada)
Please click the link below to join the webinar:
https://stanford.zoom.us/j/92992933998?pwd=UEIranlBREpGQUJwWjdSVHEzZ0xnUT09
Passcode: 393452
Or One tap mobile :
US: +16507249799,,92992933998#,,,,*393452# or +18333021536,,92992933998#,,,,*393452# (Toll Free)
Or Telephone:
Dial(for higher quality, dial a number based on your current location):
US: +1 650 724 9799 or +1 833 302 1536 (Toll Free)
Webinar ID: 929 9293 3998
Passcode: 393452
International numbers available: https://stanford.zoom.us/u/acKlgk0vbQ
Or an H.323/SIP room system:
H.323:
162.255.37.11 (US West)
162.255.36.11 (US East)
115.114.131.7 (India Mumbai)
115.114.115.7 (India Hyderabad)
213.19.144.110 (Amsterdam Netherlands)
213.244.140.110 (Germany)
103.122.166.55 (Australia Sydney)
103.122.167.55 (Australia Melbourne)
149.137.40.110 (Singapore)
64.211.144.160 (Brazil)
149.137.68.253 (Mexico)
69.174.57.160 (Canada Toronto)
65.39.152.160 (Canada Vancouver)
207.226.132.110 (Japan Tokyo)
149.137.24.110 (Japan Osaka)
Meeting ID: 929 9293 3998
Passcode: 393452
SIP: 92992933998 at zoomcrc.com
Passcode: 393452
From junyaoz at stanford.edu Sun Nov 7 22:44:48 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 8 Nov 2021 06:44:48 +0000
Subject: [theory-seminar] Theory Lunch 11/11: Jan van den Brand (Berkeley)
Message-ID:
Hi everyone,
Theory lunch will resume this Thursday at noon in the Engineering Quad. As usual, we'll start with some socializing, followed by a talk at 12:30pm. Our speaker this week is Jan van den Brand. Jan will tell us about: Unifying Matrix Data Structures - Simplifying and Speeding up Iterative Algorithms
Abstract: Many algorithms use data structures that maintain properties of matrices undergoing changes. The applications are wide-ranging and include, for example, matchings, shortest paths, linear programming, semi-definite programming, convex hull and volume computation. Given the wide range of applications, the exact property these data structures must maintain varies from one application to another, forcing algorithm designers to invent them from scratch or modify existing ones. Thus it is not surprising that these data structures and their proofs are usually tailor-made for their specific application and that maintaining more complicated properties results in more complicated proofs. In this talk, I present a unifying framework that captures a wide range of these data structures. The simplicity of this framework allows us to give short proofs for many existing data structures, regardless of how complicated the maintained property is. We also show how the framework can be used to speed up existing iterative algorithms.
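One classical building block of such matrix data structures (my illustration; the talk's framework is more general) is maintaining a matrix inverse under rank-1 updates via the Sherman-Morrison identity, at cost O(n^2) per update instead of O(n^3) for recomputation:

```python
import numpy as np

class DynamicInverse:
    """Maintain A^{-1} while A undergoes rank-1 updates A += u v^T.

    By the Sherman-Morrison identity,
        (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u),
    so each update costs O(n^2) instead of the O(n^3) of re-inversion.
    """
    def __init__(self, A):
        self.inv = np.linalg.inv(A)

    def rank_one_update(self, u, v):
        Ainv_u = self.inv @ u
        v_Ainv = v @ self.inv
        denom = 1.0 + v @ Ainv_u
        if abs(denom) < 1e-12:
            raise ValueError("update would make the matrix singular")
        self.inv -= np.outer(Ainv_u, v_Ainv) / denom

A = np.diag([2.0, 3.0, 4.0, 5.0, 6.0])
d = DynamicInverse(A)
u = np.array([1.0, 0.0, 2.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
d.rank_one_update(u, v)             # d.inv is now (A + u v^T)^{-1}
```

Iterative algorithms such as interior-point methods perform exactly this kind of low-rank change per iteration, which is why amortizing the inverse maintenance drives their overall running time.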
Cheers,
Junyao
From tpulkit at stanford.edu Tue Nov 9 22:59:02 2021
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 10 Nov 2021 06:59:02 +0000
Subject: [theory-seminar] "Differentially Private Covariance-adaptive Mean
Estimation" - Lydia Zakynthinou (Friday, Nov. 12th, 1pm)
Message-ID:
Hi everyone,
The IT (Information Theory) Forum continues this week on Friday, 12th November, at 1pm, with Lydia Zakynthinou. Details below:
Differentially Private Covariance-adaptive Mean Estimation
Lydia Zakynthinou, Northeastern University
Fri, 12th November, 1pm
Zoom Link: https://stanford.zoom.us/j/92716427348?pwd=TlV6VHNscGxsTEdlOC8rWkMwaElldz09
pwd: 032264
Abstract:
Mean estimation in Mahalanobis distance is a fundamental problem in statistics: given i.i.d. samples from a high-dimensional distribution with unknown mean and covariance, the goal is to find an estimator with small Mahalanobis distance from the true mean. To protect the privacy of the individuals who participate in the dataset, we study statistical estimators which satisfy differential privacy, a condition that has become a standard criterion for individual privacy in statistics and machine learning.
We present two differentially private mean estimators for multivariate (sub)Gaussian distributions with unknown covariance. All previous estimators with the same accuracy guarantee in Mahalanobis loss either require strong a priori bounds on the covariance matrix or require that the number of samples grows superlinearly with the dimension of the data, which is suboptimal. Our algorithms achieve nearly optimal sample complexity (matching that of the known-covariance case) by adapting the noise added due to privacy to the distribution's covariance matrix, without explicitly estimating it.
Joint work with Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman.
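For contrast with the covariance-adaptive approach, here is a sketch of the standard non-adaptive baseline (my illustration, not the paper's estimator): the Gaussian mechanism applied to a clipped empirical mean, with noise calibrated to the worst-case L2 sensitivity rather than to the data's covariance:

```python
import numpy as np

def dp_mean(samples, clip_norm, eps, delta, rng):
    """(eps, delta)-DP mean estimate via the Gaussian mechanism.

    Each sample is clipped to L2 norm <= clip_norm, so swapping one of
    the n samples moves the empirical mean by at most 2*clip_norm/n
    (its L2 sensitivity).  Adding Gaussian noise with
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / eps then gives
    (eps, delta)-differential privacy.  Note the noise is isotropic:
    it ignores the distribution's covariance, which is the slack that
    covariance-adaptive estimators remove.
    """
    n, dim = samples.shape
    norms = np.linalg.norm(samples, axis=1, keepdims=True)
    clipped = samples * np.minimum(1.0, clip_norm / norms)
    sensitivity = 2.0 * clip_norm / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return clipped.mean(axis=0) + rng.normal(0.0, sigma, size=dim)

rng = np.random.default_rng(0)
data = rng.normal(loc=[1.0, -2.0], scale=0.5, size=(100_000, 2))
est = dp_mean(data, clip_norm=5.0, eps=1.0, delta=1e-6, rng=rng)
```

The clipping bound here plays the role of the "strong a priori bounds" the abstract mentions: pick it too small and the mean is biased, too large and the noise swamps the signal.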
Bio:
Lydia Zakynthinou is a PhD student in the Khoury College of Computer Sciences at Northeastern University. She is interested in the theoretical foundations of machine learning and data privacy and their connections to statistics and information theory. She earned her ECE diploma from the National Technical University of Athens in 2015 and her MSc on Logic, Algorithms, and Theory of Computation from the University of Athens in 2017. Since Fall 2020, her research has been supported by a Facebook Fellowship.
Best
Pulkit
From junyaoz at stanford.edu Thu Nov 11 10:54:36 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 11 Nov 2021 18:54:36 +0000
Subject: [theory-seminar] Theory Lunch 11/11: Jan van den Brand
(Berkeley)
In-Reply-To:
References:
Message-ID:
Gentle reminder: This is happening in 10 minutes.
________________________________
From: theory-seminar on behalf of Junyao Zhao
Sent: Sunday, November 7, 2021 10:44 PM
To: thseminar at cs.stanford.edu ; theory-seminar at lists.stanford.edu
Subject: [theory-seminar] Theory Lunch 11/11: Jan van den Brand (Berkeley)
Hi everyone,
Theory lunch will resume this Thursday at noon in the Engineering Quad. As usual, we'll start with some socializing, followed by a talk at 12:30pm. Our speaker this week is Jan van den Brand. Jan will tell us about: Unifying Matrix Data Structures - Simplifying and Speeding up Iterative Algorithms
Abstract: Many algorithms use data structures that maintain properties of matrices undergoing changes. The applications are wide-ranging and include, for example, matchings, shortest paths, linear programming, semi-definite programming, convex hull and volume computation. Given the wide range of applications, the exact property these data structures must maintain varies from one application to another, forcing algorithm designers to invent them from scratch or modify existing ones. Thus it is not surprising that these data structures and their proofs are usually tailor-made for their specific application and that maintaining more complicated properties results in more complicated proofs. In this talk, I present a unifying framework that captures a wide range of these data structures. The simplicity of this framework allows us to give short proofs for many existing data structures, regardless of how complicated the maintained property is. We also show how the framework can be used to speed up existing iterative algorithms.
Cheers,
Junyao
From junyaoz at stanford.edu Sun Nov 14 14:52:23 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Sun, 14 Nov 2021 22:52:23 +0000
Subject: [theory-seminar] Theory Lunch 11/18: Yang P. Liu
Message-ID:
Hi everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. As usual, we'll start with some socializing, followed by a talk at 12:30pm. Our speaker this week is Yang P. Liu. Yang will tell us about: Online Edge Coloring via Tree Recurrences and Correlation Decay
Abstract: We give an online algorithm that with high probability computes an edge coloring using (e/(e-1) + o(1))*Delta colors on a graph with maximum degree Delta = omega(log n) under online edge arrivals against oblivious adversaries, making first progress on the conjecture of Bar-Noy, Motwani, and Naor in this general setting. Our algorithm is based on reducing to a matching problem on locally treelike graphs, and then applying a tree recurrences based approach for arguing correlation decay.
This is joint work with Janardhan Kulkarni, Yang P. Liu, Ashwin Sah, and Mehtaab Sawhney.
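For context, the trivial online baseline that this line of work improves on is greedy coloring with 2*Delta - 1 colors (a sketch of mine, not the algorithm from the talk; it assumes Delta is known in advance, as is standard in this literature):

```python
import random
from collections import defaultdict

def greedy_online_edge_coloring(edges, max_degree):
    """Color edges in arrival order with the smallest color unused at
    both endpoints.  When edge (u, v) arrives, at most max_degree - 1
    colors are blocked at u and at v, so a palette of 2*max_degree - 1
    colors always suffices -- the trivial bound that the
    (e/(e-1) + o(1))*Delta algorithm improves on.
    """
    used = defaultdict(set)          # vertex -> colors on incident edges
    coloring = {}
    for u, v in edges:
        c = next(i for i in range(2 * max_degree - 1)
                 if i not in used[u] and i not in used[v])
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# A random graph, edges revealed in random order.
rng = random.Random(0)
edges = [(u, v) for u in range(30) for v in range(u + 1, 30)
         if rng.random() < 0.2]
rng.shuffle(edges)
deg = defaultdict(int)
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
Delta = max(deg.values())
coloring = greedy_online_edge_coloring(edges, Delta)
```

The conjecture of Bar-Noy, Motwani, and Naor asks for (1 + o(1))*Delta colors online; the talk's result closes part of the gap between that and this 2*Delta - 1 greedy bound.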
Cheers,
Junyao
From kabirc at stanford.edu Mon Nov 15 09:44:00 2021
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Mon, 15 Nov 2021 09:44:00 -0800
Subject: [theory-seminar] "Optimization in Theory and Practice" – Stephen Wright (Thu, 18-Nov @ 4:00pm)
Message-ID:
Optimization in Theory and Practice
Stephen Wright – Professor, University of Wisconsin-Madison
Thu, 18-Nov / 4:00pm / Packard 101 (in person)
*Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). This week's speaker will join us via
Zoom, which will be screened in Packard 101, and streamed for those unable
to attend in person:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ*
*To avoid Zoom-bombing, we ask virtual attendees to sign in via the above
URL to receive the Zoom meeting details by email.*
Abstract
Complexity analysis in optimization seeks upper bounds on the amount of
work required to find approximate solutions of problems in a given class
with a given algorithm, and also lower bounds, usually in the form of a
worst-case example from a given problem class. The relationship between
theoretical complexity bounds and practical performance of algorithms on
"typical" problems varies widely across problem and algorithm classes, and
relative interest among researchers between the theoretical and practical
aspects of algorithm design and analysis has waxed and waned over the
years. This talk surveys complexity analysis and its relationship to
practical algorithms in optimization, with an emphasis on linear
programming and convex and nonconvex nonlinear optimization, providing
historical (and cultural) perspectives on research in these areas.
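As a toy illustration of the kind of guarantee complexity analysis provides, the sketch below runs gradient descent on a random strongly convex quadratic and checks the textbook linear-rate bound numerically. This is an illustrative example of a complexity bound, not material from the talk.

```python
import numpy as np

# Gradient descent on f(x) = 0.5 * x^T A x with step size 1/L, where L is
# the largest eigenvalue of A. For an L-smooth, mu-strongly-convex f, theory
# predicts linear convergence: f(x_k) <= (1 - mu/L)^k * f(x_0).
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
A = Q.T @ Q + np.eye(5)              # symmetric positive definite
eigs = np.linalg.eigvalsh(A)         # ascending eigenvalues
mu, L = eigs[0], eigs[-1]

f = lambda x: 0.5 * x @ A @ x
x = rng.standard_normal(5)
f0 = f(x)
for k in range(1, 201):
    x = x - (1.0 / L) * (A @ x)      # gradient step: grad f(x) = A x
    # observed progress matches the theoretical worst-case bound
    assert f(x) <= (1 - mu / L) ** k * f0 + 1e-12
```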
Bio
Stephen J. Wright holds the George B. Dantzig Professorship, the Sheldon
Lubar Chair, and the Amar and Balinder Sohi Professorship of Computer
Sciences at the University of Wisconsin-Madison. His research is in
computational optimization and its applications to data science and many
other areas of science and engineering. Prior to joining UW-Madison in
2001, Wright held positions at North Carolina State University (1986-1990)
and Argonne National Laboratory (1990-2001). He has served as Chair of the
Mathematical Optimization Society (2007-2010) and as a Trustee of SIAM for
the maximum three terms (2005-2014). He is a Fellow of SIAM. In 2014, he
won the W.R.G. Baker Award from IEEE for best paper in an IEEE archival
publication during 2009-2011. He was awarded the Khachiyan Prize by the
INFORMS Optimization Society in 2020 for lifetime achievements in
optimization, and received the NeurIPS Test of Time Award in 2020 for a
paper presented at that conference in 2011.
Prof. Wright is the author / coauthor of widely used text and reference
books in optimization including "Primal-Dual Interior-Point Methods" and
"Numerical Optimization". He has published widely on optimization theory,
algorithms, software, and applications.
Prof. Wright served from 2014-2019 as Editor-in-Chief of the SIAM Journal
on Optimization and previously served as Editor-in-Chief of Mathematical
Programming Series B. He has also served as Associate Editor of
Mathematical Programming Series A, SIAM Review, SIAM Journal on Scientific
Computing, and several other journals and book series.
*This talk is hosted by the ISL Colloquium. To receive talk announcements,
subscribe to the mailing list isl-colloq at lists.stanford.edu.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tpulkit at stanford.edu Mon Nov 15 16:24:30 2021
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Tue, 16 Nov 2021 00:24:30 +0000
Subject: [theory-seminar] "Convertible Codes: Enabling Redundancy Tuning in
Large-scale Storage Systems" - Prof. Rashmi Vinayak (Friday, Nov. 19th, 1pm)
Message-ID: <01C27CA3-C25B-4063-A66A-72D00170B7F6@stanford.edu>
Hi everyone
The IT (Information Theory) Forum continues this week on Friday, 19th November, at 1pm with Prof. Rashmi Vinayak, CMU. We will also have Zoom meetups set up with Prof. Vinayak, 9am-12pm PT, in 25-minute slots. You can sign up in the Google sheet; feel free to sign up for more than one slot, first-come, first-served. Details below:
Convertible Codes: Enabling Redundancy Tuning in Large-scale Storage Systems
Prof. Rashmi Vinayak, CMU
Fri, 19th November, 1pm
Zoom Link: https://stanford.zoom.us/j/91355009595?pwd=VHRWT0t2RG83dUQ3ODZsYnJMMjRJQT09
pwd: 585251
Abstract
In large-scale data storage systems, erasure codes are employed to store data in a redundant fashion to protect against data loss. In this setting, a set of k data blocks to be stored is encoded using an [n, k] code to generate n blocks that are then stored on distinct storage devices. In contrast to the static way redundancy is configured in current systems, we show that the failure rates of devices vary significantly over time. Dynamically tuning the redundancy to match the observed failure rates provides more than 15% cost and energy savings (translating to a savings of millions of dollars). However, traditional codes suffer from prohibitively high resource overheads in changing the code parameters on already encoded data.
In this talk, we:
1. Introduce the concept of redundancy tuning and its benefits using real-world production data from large-scale cluster storage systems,
2. Present a new theoretical framework to formalize the notion of "code conversion"---the process of converting data encoded using an [n, k] code into data encoded using a code with different parameters [n', k'], while maintaining desired decodability properties,
3. Introduce "convertible codes", a new class of codes that enable resource-efficient conversion,
4. Prove tight bounds on resource requirements of convertible codes and present optimal explicit constructions.
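As a toy illustration of why conversion can be far cheaper than re-encoding, the sketch below merges two single-parity [3, 2] stripes into one [5, 4] stripe: for this simple code the new parity is just the XOR of the old parities, so no data blocks need to be read. This is a hypothetical toy, not the paper's construction, which handles general [n, k] codes.

```python
from functools import reduce
from operator import xor

def encode(data):
    """[k+1, k] single-parity code: the data blocks plus their XOR."""
    return data + [reduce(xor, data)]

# Two [3, 2] stripes already stored on devices.
a, b = [5, 9], [7, 3]
pa, pb = encode(a)[-1], encode(b)[-1]

# Naive conversion to one [5, 4] stripe: read back all data, re-encode.
naive = encode(a + b)

# Cheaper conversion: the merged parity is the XOR of the old parities,
# so the conversion touches only the two parity blocks.
merged_parity = pa ^ pb
assert naive[-1] == merged_parity
```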
Bio
Rashmi Vinayak is an assistant professor in the Computer Science department at Carnegie Mellon University. Rashmi is a recipient of NSF CAREER Award 2020-25, Tata Institute of Fundamental Research Memorial Lecture Award 2020, Facebook Distributed Systems Research Award 2019, Google Faculty Research Award 2018, and Facebook Communications and Networking Research Award 2017. Her work has received USENIX NSDI 2021 Community (Best Paper) Award, UC Berkeley Eli Jury Dissertation Award 2016, and IEEE Data Storage Best Paper and Best Student Paper Awards for 2011/2012. During her Ph.D. studies, Rashmi was a recipient of Facebook Fellowship 2012-13, the Microsoft Research PhD Fellowship 2013-15, and the Google Anita Borg Memorial Scholarship 2015-16. Rashmi received her Ph.D. from UC Berkeley in 2016, and was a postdoctoral scholar at UC Berkeley's AMPLab/RISELab from 2016-17.
Webpage: http://www.cs.cmu.edu/~rvinayak/
Best
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From kabirc at stanford.edu Thu Nov 18 09:31:15 2021
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Thu, 18 Nov 2021 09:31:15 -0800
Subject: [theory-seminar] "Optimization in Theory and Practice" – Stephen Wright (Thu, 18-Nov @ 4:00pm)
In-Reply-To:
References:
Message-ID:
Reminder that this talk will be today at 4pm. Note: due to audio issues,
this talk will no longer be streamed from Packard 101, please join
virtually via Zoom:
*https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
.*
For those on campus, coffee and snacks will still be served at 3:30pm in
the Grove outside Packard.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Wed Nov 17 16:58:53 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 18 Nov 2021 00:58:53 +0000
Subject: [theory-seminar] Theory Lunch 11/18: Yang P. Liu
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Sun Nov 21 22:13:14 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 22 Nov 2021 06:13:14 +0000
Subject: [theory-seminar] Theory Lunch 11/25: No Talk (Thanksgiving)
Message-ID:
Hi everyone,
This week we won't have theory lunch due to Thanksgiving. Talks will resume next week. Wish you a happy Thanksgiving!
Cheers,
Junyao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From aviad at cs.stanford.edu Tue Nov 23 10:49:12 2021
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Tue, 23 Nov 2021 10:49:12 -0800
Subject: [theory-seminar] Tuesday 11/30 3PM: Super special CS Theory/RAIN
joint seminar with Vijay Vazirani: "Online Bipartite Matching and Adwords"
Message-ID:
Hi Theorists,
Here is a good reason to come back after the break :)
Happy Thanksgiving!
Aviad
*********************************************
If you would like to meet with Vijay, please sign up for a time slot at
this link:
https://docs.google.com/spreadsheets/d/1wbnw-V28r37H3RPboH0JpWocAqowRz0fepjsHacWTE8/edit?usp=sharing
In-person room: Y2E2 101
Time: Tuesday 11/30 at 3 pm PT
*********************************************
Title: Online Bipartite Matching and Adwords
Abstract: Over the last three decades, the online bipartite matching (OBM)
problem has emerged as a central problem in the area of Online Algorithms.
Perhaps even more important is its role in the area of Matching-Based
Market Design. The resurgence of this area, with the revolutions of the
Internet and mobile computing, has opened up novel, path-breaking
applications, and OBM has emerged as its paradigmatic algorithmic problem.
In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an
optimal algorithm, called RANKING, for OBM, achieving a competitive ratio
of (1 − 1/e); however, its analysis was difficult to comprehend. Over the
years, several researchers simplified the analysis.
We will start by presenting a "textbook quality" proof of RANKING. Its
simplicity raises the possibility of extending RANKING all the way to a
generalization of OBM called the adwords problem. This problem is both
notoriously difficult and very significant, the latter because of its role
in the AdWords marketplace of Google. We will show how far this endeavor
has gone and what remains. We will also provide a broad overview of the
area of Matching-Based Market Design and pinpoint the role of OBM.
Based on: https://arxiv.org/pdf/2107.10777.pdf
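A minimal sketch of RANKING as described above: fix a uniformly random priority order on the offline side once, up front, then match each arriving online vertex to its highest-priority free neighbor. This is illustrative only; the talk's Adwords generalization is far more involved.

```python
import random

def ranking(offline, arrivals, seed=0):
    """RANKING (Karp-Vazirani-Vazirani 1990) for online bipartite matching."""
    rng = random.Random(seed)
    order = rng.sample(offline, len(offline))    # uniformly random permutation
    rank = {v: r for r, v in enumerate(order)}   # smaller rank = higher priority
    matching, taken = {}, set()
    for u, nbrs in arrivals:                     # online vertices arrive one by one,
        free = [v for v in nbrs if v not in taken]   # revealing their neighbors
        if free:
            v = min(free, key=rank.__getitem__)  # highest-priority free neighbor
            matching[u] = v
            taken.add(v)
    return matching

# Online vertices x, y arrive; each reveals its offline neighbors on arrival.
m = ranking(["a", "b"], [("x", ["a", "b"]), ("y", ["a"])])
```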
Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his
PhD from the University of California, Berkeley in 1983. He is currently a
Distinguished Professor at the University of California, Irvine.
Vazirani has made fundamental contributions to several areas of the theory
of algorithms, including algorithmic matching theory, approximation
algorithms and algorithmic game theory, as well as to complexity theory, in
which he established, with Les Valiant, the hardness of unique solution
instances of NP-complete problems. Over the last four years, he has been
working on algorithms for matching markets. He is one of the founders of
algorithmic game theory.
In 2001 he published Approximation Algorithms, which is widely regarded as
the definitive book on the topic. In 2007, he published the co-edited book
Algorithmic Game Theory. Another co-edited book, Online and Matching-Based
Market Design, will be published by Cambridge University Press in early
2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Mon Nov 29 09:55:55 2021
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 29 Nov 2021 17:55:55 +0000
Subject: [theory-seminar] Theory Lunch 12/2: Caleb Koch
Message-ID:
Hi everyone,
Hope you had a great Thanksgiving break!
This week's theory lunch will take place Thursday at noon at the Shriram sunken terrace (note the small change of location: this area is between the Shriram Center and the Engineering Quad). As usual, we'll start with some socializing, followed by a talk at 12:30pm. Caleb will tell us about: The Query Complexity of Certification
Abstract: We study the problem of certification: given queries to a Boolean function f with certificate complexity at most k and an input x, output a size-k certificate of f on x. Our main result is a new algorithm for certifying monotone functions with O(poly(k)*log n) queries which almost matches the information-theoretic lower bound Omega(k*log n). The design and analysis of our algorithm are based on a new connection to threshold phenomena in monotone functions. We also have an exponential-in-k lower bound when f is non-monotone which shows that our assumption of the structure of f is necessary for the polynomial dependence on k that we achieve. Joint work with Guy Blanc, Jane Lange, and Li-Yang Tan.
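To make the problem concrete, the sketch below finds a smallest certificate by brute force over all coordinate subsets: a certificate for f on x is a set S of coordinates such that every input agreeing with x on S has the same f-value. This brute force is exponential in n and is purely illustrative; the talk's algorithm instead uses only O(poly(k)*log n) queries.

```python
from itertools import combinations, product

def find_certificate(f, x, n):
    """Smallest set S of coordinates such that every n-bit y agreeing
    with x on S satisfies f(y) == f(x). Exponential-time brute force."""
    target = f(x)
    for size in range(n + 1):                    # try subsets by increasing size
        for S in combinations(range(n), size):
            fixed = set(S)
            if all(f(tuple(x[i] if i in fixed else b[i] for i in range(n)))
                   == target
                   for b in product((0, 1), repeat=n)):
                return S
    return tuple(range(n))

# OR on 3 bits: any single coordinate set to 1 certifies f(x) = 1.
f_or = lambda bits: int(any(bits))
cert = find_certificate(f_or, (0, 1, 0), 3)
```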
Also notice that this week we have a special Theory/RAIN seminar with Vijay Vazirani on Tuesday 3pm at Y2E2 101.
Cheers,
Junyao
An HTML attachment was scrubbed...
URL:
From mdharris at stanford.edu Mon Nov 29 12:03:06 2021
From: mdharris at stanford.edu (Megan D. Harris)
Date: Mon, 29 Nov 2021 20:03:06 +0000
Subject: [theory-seminar] Theory Lunch 12/2: Caleb Koch
In-Reply-To:
References:
Message-ID:
Hello, Fellow Theory Lunch Goers!
Please note the location change for this Thursday's lunch meeting.
The Shriram Courtyard (which is where you were going to move; thank you!) is on the ground/street level and has a sunken oak tree with amphitheater-style seating.
I hope this helps!
Best,
Megan Denise Harris
Faculty Administrator
Computer Science (Gates Building)
353 Jane Stanford Way, Rm 479
Stanford, CA 94305
206.313.1390
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mdharris at stanford.edu Mon Nov 29 12:03:06 2021
From: mdharris at stanford.edu (Megan D. Harris)
Date: Mon, 29 Nov 2021 20:03:06 +0000
Subject: [theory-seminar] Theory Lunch 12/2: Caleb Koch
In-Reply-To:
References:
Message-ID:
Hello, Fellow Theory Lunch Goers!
Please note the location change for this Thursday's lunch meeting.
The Shriram Courtyard (which is where you were going to move-thank you!) is on the ground/street level and has a Sunken Oak Tree with amphitheater-style seating.
I hope this helps!
Best,
Megan Denise Harris
Faculty Administrator
Computer Science (Gates Building)
353 Jane Stanford Way, Rm 479
Stanford, CA 94305
206.313.1390
________________________________
From: theory-seminar on behalf of Junyao Zhao
Sent: Monday, November 29, 2021 9:55 AM
To: thseminar at cs.stanford.edu ; theory-seminar at lists.stanford.edu
Subject: [theory-seminar] Theory Lunch 12/2: Caleb Koch
Hi everyone,
Hope you had a great Thanksgiving break!
This week's theory lunch will take place Thursday at noon at the Shriram sunken terrace (note the small change of location: the terrace is between the Shriram Center and the Engineering Quad). As usual, we'll start with some socializing, followed by a talk at 12:30pm. Caleb will tell us about: The Query Complexity of Certification
Abstract: We study the problem of certification: given queries to a Boolean function f with certificate complexity at most k and an input x, output a size-k certificate of f on x. Our main result is a new algorithm for certifying monotone functions with O(poly(k)*log n) queries, which almost matches the information-theoretic lower bound Omega(k*log n). The design and analysis of our algorithm are based on a new connection to threshold phenomena in monotone functions. We also prove an exponential-in-k lower bound when f is non-monotone, which shows that our structural assumption on f is necessary for the polynomial dependence on k that we achieve. Joint work with Guy Blanc, Jane Lange, and Li-Yang Tan.
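For intuition, a size-k certificate of f on x is a set S of at most k coordinates such that every input agreeing with x on S gets the same value under f. Here is a minimal brute-force checker of that definition (illustrative only, not the query algorithm from the talk, and exponential in n):

```python
from itertools import product

def is_certificate(f, x, S, n):
    """Return True iff the coordinate set S certifies f's value on x,
    i.e. every n-bit input y that agrees with x on S has f(y) == f(x).
    Brute force over all 2^n inputs, so only usable for tiny n."""
    target = f(x)
    for y in product((0, 1), repeat=n):
        if all(y[i] == x[i] for i in S) and f(y) != target:
            return False
    return True
```

For example, for f = AND on two bits and x = (0, 1), the single coordinate {0} already certifies f(x) = 0, while the empty set does not.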
Also notice that this week we have a special Theory/RAIN seminar with Vijay Vazirani on Tuesday 3pm at Y2E2 101.
Cheers,
Junyao
*********************************************
If you would like to meet with Vijay, please sign up for a time slot at this link: https://docs.google.com/spreadsheets/d/1wbnw-V28r37H3RPboH0JpWocAqowRz0fepjsHacWTE8/edit?usp=sharing
In-person room: Y2E2 101
Time: Tuesday 11/30 at 3 pm PT
*********************************************
Title: Online Bipartite Matching and Adwords
Abstract: Over the last three decades, the online bipartite matching (OBM) problem has emerged as a central problem in the area of Online Algorithms. Perhaps even more important is its role in the area of Matching-Based Market Design. The resurgence of this area, with the revolutions of the Internet and mobile computing, has opened up novel, path-breaking applications, and OBM has emerged as its paradigmatic algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving a competitive ratio of (1 − 1/e); however, its analysis was difficult to comprehend. Over the years, several researchers simplified the analysis.
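For readers unfamiliar with RANKING, the algorithm itself is simple to state: fix a uniformly random ranking of the offline vertices once, up front, and match each arriving online vertex to its highest-ranked still-unmatched neighbor. A minimal sketch (variable names are illustrative, not from the paper):

```python
import random

def ranking(offline, arrivals, neighbors):
    """RANKING (Karp, U. Vazirani, V. Vazirani, 1990) for online
    bipartite matching. `neighbors` maps each online vertex to the
    list of offline vertices it is adjacent to."""
    # Rank the offline side uniformly at random, once, before any arrivals.
    rank = {v: r for r, v in enumerate(random.sample(offline, len(offline)))}
    matched = set()
    matching = {}
    for u in arrivals:
        # Match u to its highest-ranked (smallest rank value) free neighbor.
        free = [v for v in neighbors.get(u, []) if v not in matched]
        if free:
            v = min(free, key=rank.__getitem__)
            matched.add(v)
            matching[u] = v
    return matching
```

The entire subtlety of the (1 − 1/e) guarantee lives in the analysis, not in the code above.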
We will start by presenting a "textbook quality" proof of RANKING. Its simplicity raises the possibility of extending RANKING all the way to a generalization of OBM called the adwords problem. This problem is both notoriously difficult and very significant, the latter because of its role in the AdWords marketplace of Google. We will show how far this endeavor has gone and what remains. We will also provide a broad overview of the area of Matching-Based Market Design and pinpoint the role of OBM.
Based on: https://arxiv.org/pdf/2107.10777.pdf
Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his PhD from the University of California, Berkeley in 1983. He is currently a Distinguished Professor at the University of California, Irvine.
Vazirani has made fundamental contributions to several areas of the theory of algorithms, including algorithmic matching theory, approximation algorithms and algorithmic game theory, as well as to complexity theory, in which he established, with Les Valiant, the hardness of unique solution instances of NP-complete problems. Over the last four years, he has been working on algorithms for matching markets. He is one of the founders of algorithmic game theory.
In 2001 he published Approximation Algorithms, which is widely regarded as the definitive book on the topic. In 2007, he published the co-edited book Algorithmic Game Theory. Another co-edited book, Online and Matching-Based Market Design, will be published by Cambridge University Press in early 2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf
From tavorb at stanford.edu Mon Nov 29 14:37:23 2021
From: tavorb at stanford.edu (Tavor Baharav)
Date: Mon, 29 Nov 2021 17:37:23 -0500
Subject: [theory-seminar] =?utf-8?q?=22Connecting_Statistical_Problems_wit?=
=?utf-8?q?h_Different_Structures=22_=E2=80=93_Guy_Bresler_=28Thu?=
=?utf-8?b?LCAyLURlYyBAIDQ6MDBwbSk=?=
Message-ID:
Connecting Statistical Problems with Different Structures
Guy Bresler, Professor, MIT
Thu, 2-Dec / 4:00pm / Packard 101 (in person)
*Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating). The talk will be streamed on Zoom
for those unable to attend in
person: https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ
*
Abstract
In this talk I will describe some simple average-case reduction techniques
and use these techniques to connect several statistical problems with
widely varying structures. These reduction techniques transform the
probability distribution defining a problem in an algorithmically efficient
way while preserving the strength of the underlying signal, thereby
transferring the computational phase transitions from one problem to
another. We'll focus especially on the mixtures of linear regressions
problem, and show how this supervised learning problem over continuous
variables can be precisely related to the planted clique problem, a
combinatorial unsupervised learning problem. Along the way we'll see a
method for turning positive correlations into negative correlations amongst
a subset of variables without knowing which subset is the correlated one,
as well as a clean but non-obvious way to turn a supervised learning
problem into an unsupervised one. The talk is based on joint work with
Matthew Brennan (https://arxiv.org/abs/2005.08099).
Bio
Guy Bresler is an associate professor in the Department of Electrical
Engineering and Computer Science at MIT, and a member of LIDS and IDSS.
Previously, he was a postdoc at MIT and before that completed his PhD in
the Department of EECS at UC Berkeley. His undergraduate degree is from the
University of Illinois at Urbana-Champaign. A major focus of his research
is on the computational complexity of statistical inference, a direction
that combines his interests in information theory, probability, and
computation.
*This talk is hosted by the ISL Colloquium. To receive talk announcements, subscribe to the mailing list isl-colloq at lists.stanford.edu.*
------------------------------
Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk: http://isl.stanford.edu/talks/talks/2021q4/guy-bresler/
From aviad at cs.stanford.edu Mon Nov 29 15:29:03 2021
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Mon, 29 Nov 2021 15:29:03 -0800
Subject: [theory-seminar] Reminder: Tomorrow (11/30) 3PM: Super special CS
Theory/RAIN joint seminar with Vijay Vazirani: "Online Bipartite Matching
and Adwords"
In-Reply-To:
References:
Message-ID:
See you tomorrow!
On Tue, Nov 23, 2021, 10:49 AM Aviad Rubinstein
wrote:
> Hi Theorists,
>
> Here is a good reason to come back after the break :)
>
> Happy Thanksgiving!
>
> Aviad
>
>
> *********************************************
>
>
> If you would like to meet with Vijay, please sign up for a time slot at
> this link:
> https://docs.google.com/spreadsheets/d/1wbnw-V28r37H3RPboH0JpWocAqowRz0fepjsHacWTE8/edit?usp=sharing
>
>
> In-person room: Y2E2 101
>
> Time: Tuesday 11/30 at 3 pm PT
>
>
> *********************************************
>
> Title: Online Bipartite Matching and Adwords
>
>
> Abstract: Over the last three decades, the online bipartite matching
> (OBM) problem has emerged as a central problem in the area of Online
> Algorithms. Perhaps even more important is its role in the area of
> Matching-Based Market Design. The resurgence of this area, with the
> revolutions of the Internet and mobile computing, has opened up novel,
> path-breaking applications, and OBM has emerged as its paradigmatic
> algorithmic problem. In a 1990 joint paper with Richard Karp and Umesh
> Vazirani, we gave an optimal algorithm, called RANKING, for OBM, achieving
> a competitive ratio of (1 − 1/e); however, its analysis was difficult to
> comprehend. Over the years, several researchers simplified the analysis.
>
>
> We will start by presenting a "textbook quality" proof of RANKING. Its
> simplicity raises the possibility of extending RANKING all the way to a
> generalization of OBM called the adwords problem. This problem is both
> notoriously difficult and very significant, the latter because of its role
> in the AdWords marketplace of Google. We will show how far this endeavor
> has gone and what remains. We will also provide a broad overview of the
> area of Matching-Based Market Design and pinpoint the role of OBM.
>
>
> Based on: https://arxiv.org/pdf/2107.10777.pdf
>
>
> Bio: Vijay Vazirani got his undergraduate degree from MIT in 1979 and his
> PhD from the University of California, Berkeley in 1983. He is currently a
> Distinguished Professor at the University of California, Irvine.
>
>
> Vazirani has made fundamental contributions to several areas of the theory
> of algorithms, including algorithmic matching theory, approximation
> algorithms and algorithmic game theory, as well as to complexity theory, in
> which he established, with Les Valiant, the hardness of unique solution
> instances of NP-complete problems. Over the last four years, he has been
> working on algorithms for matching markets. He is one of the founders of
> algorithmic game theory.
>
>
> In 2001 he published Approximation Algorithms, which is widely regarded as
> the definitive book on the topic. In 2007, he published the co-edited book
> Algorithmic Game Theory. Another co-edited book, Online and Matching-Based
> Market Design, will be published by Cambridge University Press in early
> 2022; see its flyer: https://www.ics.uci.edu/~vazirani/flyer.pdf
>
>
From moses at cs.stanford.edu Mon Nov 29 22:56:05 2021
From: moses at cs.stanford.edu (Moses Charikar)
Date: Mon, 29 Nov 2021 22:56:05 -0800
Subject: [theory-seminar] Simons Institute Research Fellowship Applications
2022-2023
In-Reply-To:
References:
Message-ID:
Hi theory folks,
Peter Bartlett asked me to relay this message about Simons Institute
Research Fellowships for the next academic year.
The official announcement is below. The call for applications is at
https://simons.berkeley.edu/simons-berkeley-research-fellowship-call-applications
.
Cheers,
Moses
----------------------------------------------------------------------------------------------------
The Simons Institute for the Theory of Computing at UC Berkeley invites
applications for Research Fellowships for the academic year 2022-23.
Simons-Berkeley Research Fellowships are an opportunity for outstanding
junior scientists to spend one or both semesters at the Institute in
connection with one or more of its programs. The programs for 2022-23 are
as follows:
* Data-Driven Decision Processes (Fall 2022)
* Graph Limits and Processes on Networks: From Epidemics to Misinformation
(Fall 2022)
* Meta-Complexity (Spring 2023)
Applicants who already hold junior faculty or postdoctoral positions are
welcome to apply. In particular, applicants who hold, or expect to hold,
postdoctoral appointments at other institutions are encouraged to apply to
spend one semester as a Simons-Berkeley Fellow subject to the approval of
the postdoctoral institution. Further details and application instructions
can be found at
https://simons.berkeley.edu/simons-berkeley-research-fellowship-call-applications.
Information about the Institute and the above programs can be found at
http://simons.berkeley.edu.
*Deadline for applications: December 15, 2021.*