From wyma at stanford.edu Thu Jan 3 06:37:14 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Thu, 3 Jan 2019 14:37:14 +0000
Subject: [theory-seminar] Theory Lunch Sign-up
Message-ID:
Hi Everyone,
Happy New Year!
Theory lunch resumes next Thursday (Jan 10). If you'd like to give a talk this quarter, you can sign up with me in person or here:
https://docs.google.com/document/d/1S0QcDMTn-JRaP1cRFRihZyeNfHUmpYLkyORKMcGIBgY/edit
See you all there!
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From liyang at cs.stanford.edu Fri Jan 4 18:13:36 2019
From: liyang at cs.stanford.edu (Li-Yang Tan)
Date: Fri, 4 Jan 2019 18:13:36 -0800
Subject: [theory-seminar] Course announcement: Complexity Theory (CS254)
Message-ID:
Hi all,
Hope everyone had a good winter break.
This quarter I will be teaching CS254 Computational Complexity. The course
webpage is now accessible at:
http://theory.stanford.edu/~liyang/teaching/complexity19.html
Please let me know if you have any questions. Hope to see you in class on
Monday!
Best,
Li-Yang
From bplaut at stanford.edu Mon Jan 7 13:46:50 2019
From: bplaut at stanford.edu (Benjamin Plaut)
Date: Mon, 7 Jan 2019 16:46:50 -0500
Subject: [theory-seminar] Postdoc announcement
Message-ID:
Hi all,
Happy new year! Attached is an announcement for a postdoc opening at UC
Santa Cruz. Ashish Goel asked me to pass this along to the theory group.
Best,
Ben Plaut
-------------- next part --------------
A non-text attachment was scrubbed...
Name: postdoc_announcement_TRIPODS.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 143852 bytes
Desc: not available
URL:
From ofirgeri at stanford.edu Mon Jan 7 15:46:23 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Mon, 7 Jan 2019 23:46:23 +0000
Subject: [theory-seminar] Theory Seminar (1/11): William Kuszmaul
Message-ID:
Hi all,
In our first theory seminar this quarter, William Kuszmaul (MIT) will talk about "Efficiently Approximating Edit Distance Between Pseudorandom Strings" (see abstract below). The talk will be as usual on Friday (1/11), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
Efficiently Approximating Edit Distance Between Pseudorandom Strings
Speaker: William Kuszmaul (MIT)
We present an algorithm for approximating the edit distance ed(x,y) between two strings x and y in time parameterized by the degree to which one of the strings x satisfies a natural pseudorandomness property. The pseudorandomness model is asymmetric in that no requirements are placed on the second string y, which may be constructed by an adversary with full knowledge of x.
We say that x is (p,B)-pseudorandom if all pairs a and b of disjoint B-letter substrings of x satisfy ed(a,b) >= pB. Given parameters p and B, our algorithm computes the edit distance between a (p,B)-pseudorandom string x and an arbitrary string y within a factor of O(1/p) in time \tilde{O}(nB), with high probability. If x is generated at random, then with high probability it will be (\Omega(1), O(log n))-pseudorandom, allowing us to compute ed(x,y) within a constant factor in near linear time.
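To make the definition concrete, here is a brute-force Python sketch that checks (p,B)-pseudorandomness using the textbook edit-distance dynamic program. This only illustrates the definition; it is not the \tilde{O}(nB) algorithm from the talk, and the function names are my own.

```python
def edit_distance(a, b):
    # Classic O(|a||b|) Wagner-Fischer dynamic program for ed(a, b).
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                            # deletion
                         cur[j - 1] + 1,                         # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))   # substitution
        prev = cur
    return prev[n]

def is_pseudorandom(x, p, B):
    # x is (p, B)-pseudorandom if every pair of disjoint B-letter
    # substrings a, b of x satisfies ed(a, b) >= p * B.
    n = len(x)
    for i in range(n - B + 1):
        for j in range(i + B, n - B + 1):  # second substring starts after the first ends
            if edit_distance(x[i:i + B], x[j:j + B]) < p * B:
                return False
    return True
```

For example, a string of repeated letters such as "aaaa" fails the check for any p > 0, since its disjoint B-letter substrings are identical.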
From lunjia at stanford.edu Mon Jan 7 16:17:47 2019
From: lunjia at stanford.edu (Lunjia Hu)
Date: Tue, 8 Jan 2019 00:17:47 +0000
Subject: [theory-seminar] Theory Seminar (1/11): William Kuszmaul
Message-ID:
A non-text attachment was scrubbed...
Name: not available
Type: text/calendar
Size: 1574 bytes
Desc: not available
URL:
From lunjia at stanford.edu Mon Jan 7 16:38:08 2019
From: lunjia at stanford.edu (Lunjia Hu)
Date: Tue, 8 Jan 2019 00:38:08 +0000
Subject: [theory-seminar] Theory Seminar (1/11): William Kuszmaul
Message-ID:
A non-text attachment was scrubbed...
Name: not available
Type: text/calendar
Size: 2020 bytes
Desc: not available
URL:
From wyma at stanford.edu Wed Jan 9 12:53:08 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Wed, 9 Jan 2019 20:53:08 +0000
Subject: [theory-seminar] First Theory Lunch of the Quarter
Message-ID:
Hi All,
Just a reminder that theory lunch is starting this Thursday (tomorrow)! As always, we will meet from noon to 1pm at Gates 463A. This week, we will do introductions.
We are still looking for speakers for this quarter! If you want to talk about your research in front of a friendly (and happily full) audience, you can sign up in person on Thursday or here:
https://docs.google.com/document/d/1S0QcDMTn-JRaP1cRFRihZyeNfHUmpYLkyORKMcGIBgY/edit
If you know someone who is not on this mailing list and is interested in attending/speaking at theory lunch, please share this email with them. They should be able to subscribe to our mailing list through the following link:
https://mailman.stanford.edu/mailman/suoptions/theory-seminar/
Cheers,
Anna
From ofirgeri at stanford.edu Fri Jan 11 12:40:38 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 11 Jan 2019 20:40:38 +0000
Subject: [theory-seminar] Theory Seminar (1/11): William Kuszmaul
In-Reply-To:
References:
Message-ID:
Reminder: Bill's talk is today at 3pm.
________________________________
From: Ofir Geri
Sent: Monday, January 7, 2019 3:46:23 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (1/11): William Kuszmaul
Hi all,
In our first theory seminar this quarter, William Kuszmaul (MIT) will talk about "Efficiently Approximating Edit Distance Between Pseudorandom Strings" (see abstract below). The talk will be as usual on Friday (1/11), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
Efficiently Approximating Edit Distance Between Pseudorandom Strings
Speaker: William Kuszmaul (MIT)
We present an algorithm for approximating the edit distance ed(x,y) between two strings x and y in time parameterized by the degree to which one of the strings x satisfies a natural pseudorandomness property. The pseudorandomness model is asymmetric in that no requirements are placed on the second string y, which may be constructed by an adversary with full knowledge of x.
We say that x is (p,B)-pseudorandom if all pairs a and b of disjoint B-letter substrings of x satisfy ed(a,b) >= pB. Given parameters p and B, our algorithm computes the edit distance between a (p,B)-pseudorandom string x and an arbitrary string y within a factor of O(1/p) in time \tilde{O}(nB), with high probability. If x is generated at random, then with high probability it will be (\Omega(1), O(log n))-pseudorandom, allowing us to compute ed(x,y) within a constant factor in near linear time.
From reingold at stanford.edu Mon Jan 14 09:50:30 2019
From: reingold at stanford.edu (Omer Reingold)
Date: Mon, 14 Jan 2019 09:50:30 -0800
Subject: [theory-seminar] STOCA Tuesday February 5th
Message-ID:
Google will be hosting a day of talks by the STOC PC. Details and a
tentative schedule are here: https://sites.google.com/view/stoca19/home Looks
great!
Omer
From wyma at stanford.edu Mon Jan 14 15:16:27 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Mon, 14 Jan 2019 23:16:27 +0000
Subject: [theory-seminar] Theory Lunch 1/17 -- Don Knuth
Message-ID:
Hi All,
This Thursday, we are fortunate to have Don Knuth tell us about le problème des ménages. As usual, we meet from noon to 1pm at 463A.
Best,
Anna
From anilesh at stanford.edu Tue Jan 15 11:56:53 2019
From: anilesh at stanford.edu (Anilesh K. Krishnaswamy)
Date: Tue, 15 Jan 2019 11:56:53 -0800
Subject: [theory-seminar] Talk of potential interest: RAIN Seminar Wednesday
Jan 16
In-Reply-To:
References:
Message-ID:
See below for details. There's a spreadsheet if you want to sign up to meet
the speaker as well.
---------- Forwarded message ---------
From: Hannah Qiuhan Li
Date: Mon, Jan 14, 2019 at 11:59 AM
Subject: [RAIN] RAIN Seminar Wednesday Jan 16
To: internetalgs at lists.stanford.edu
Hi everyone,
Welcome back from break! We will have our first RAIN seminar this Wednesday.
Speaker: Vasilis Gkatzelis
Location: Y2E2 101
Time: 12pm - 1pm
Details of the talk are attached. If you would like to meet with him,
please sign up using this sheet.
https://docs.google.com/spreadsheets/d/1D40Q2-yninu2oneugzmSK_d5_v151x7NpA4FC7tFLMg/edit?usp=sharing
*Title*: Cost-Sharing Methods for Scheduling Games under Uncertainty
*Abstract*: We study the performance of cost-sharing protocols in a selfish
scheduling setting with load-dependent cost functions. Previous work on
selfish scheduling protocols has focused on two extreme models: omnipotent
protocols that are aware of every machine and every job that is active at
any given time, and oblivious protocols that are aware of nothing beyond
the machine they control. The main focus of this talk is on a
well-motivated middle-ground model of "resource-aware" protocols, which are
aware of the set of machines that the system comprises, but unaware of what
jobs are active at any given time. Apart from considering budget-balanced
protocols, to which previous work was restricted, we augment the design
space by also studying the extent to which overcharging can lead to
improved performance.
*Bio*: Vasilis Gkatzelis is an assistant professor in computer science at
Drexel University. He previously held positions as a postdoctoral scholar
at the computer science departments of UC Berkeley and Stanford University,
and as a research fellow at the Simons Institute for the Theory of
Computing. He received his PhD from the Courant Institute of New York
University and his research focuses on problems in algorithmic game theory
and approximation algorithms.
Best,
Hannah Li
--++**==--++**==--++**==--++**==--++**==--++**==--++**==
internetalgs mailing list
internetalgs at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/internetalgs
--
----------------------------------------------------------------------------
Anilesh K. Krishnaswamy,
Ph.D. candidate,
Society & Algorithms Lab,
Stanford University.
Webpage: https://web.stanford.edu/~anilesh/
Cell: 650-387-7272
From ofirgeri at stanford.edu Tue Jan 15 14:00:49 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 15 Jan 2019 22:00:49 +0000
Subject: [theory-seminar] Theory Seminar (1/18): Ilias Zadik
Message-ID:
Hi all,
This week in the theory seminar, Ilias Zadik from MIT will give a talk on Algorithms and Algorithmic Intractability in High Dimensional Linear Regression (see abstract below). The talk will be as usual on Friday (1/18), 3:00pm in Gates 463A.
Ilias will be here on Thursday afternoon and Friday. If you'd like to meet with him, please sign up using the following link:
https://docs.google.com/spreadsheets/d/1Y26DZ8xAhwz81Y1YwoC1bpvaKSNZ2RF4kPM0GrTe8rM/edit?usp=sharing
Hope to see you there!
Ofir
Algorithms and Algorithmic Intractability in High Dimensional Linear Regression
Speaker: Ilias Zadik (MIT)
In this talk we will focus on the high dimensional linear regression problem. The goal is to recover a hidden k-sparse binary vector \beta under n noisy linear observations Y=X\beta+W where X is an n \times p matrix with iid N(0,1) entries and W is an n-dimensional vector with iid N(0,\sigma^2) entries. In the literature of the problem, an apparent asymptotic gap is observed between the optimal sample size for information-theoretic recovery, call it n*, and for computationally efficient recovery, call it n_alg.
We will discuss several new contributions on studying this gap. We first identify tightly the information limit of the problem using a novel analysis of the Maximum Likelihood Estimator (MLE) performance. Furthermore, we establish that the algorithmic barrier n_alg coincides with the phase transition point for the appearance of a certain Overlap Gap Property (OGP) over the space of k-sparse binary vectors. The presence of such an Overlap Gap Property phase transition, which originates in spin glass theory, is known to provide evidence of an algorithmic hardness. Finally, we show that in the extreme case where the noise level is zero, i.e. \sigma=0, the computational-statistical gap closes by proposing an optimal polynomial-time algorithm using the Lenstra-Lenstra-Lov\'asz lattice basis reduction algorithm.
This is joint work with David Gamarnik.
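For intuition, the observation model and the (exponential-time) maximum-likelihood search over k-sparse binary vectors can be sketched in a few lines of NumPy. The dimensions below are tiny and chosen only for illustration; the function name `mle_support` is my own.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative dimensions (the regime in the talk is high-dimensional).
n, p, k, sigma = 50, 10, 2, 0.1

# Hidden k-sparse binary vector beta.
support = rng.choice(p, size=k, replace=False)
beta = np.zeros(p)
beta[support] = 1.0

# Observations Y = X beta + W with X_ij ~ N(0, 1) and W_i ~ N(0, sigma^2).
X = rng.standard_normal((n, p))
Y = X @ beta + sigma * rng.standard_normal(n)

def mle_support(X, Y, k):
    # Maximum-likelihood estimate under Gaussian noise: the k-subset S
    # minimizing ||Y - X b||_2, where b is the binary indicator of S.
    # Brute force over all (p choose k) subsets, so exponential in k.
    p = X.shape[1]
    return min(itertools.combinations(range(p), k),
               key=lambda S: np.linalg.norm(Y - X[:, list(S)].sum(axis=1)))

recovered = set(mle_support(X, Y, k))
```

With n well above the information limit, as here, the brute-force MLE recovers the hidden support; the talk concerns the regime where this search is believed to have no efficient counterpart.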
From farnia at stanford.edu Wed Jan 16 14:08:07 2019
From: farnia at stanford.edu (Farzan Farnia)
Date: Wed, 16 Jan 2019 22:08:07 +0000
Subject: [theory-seminar] ISL colloquium tomorrow 4:15-5:15,
prof. Babak Hassibi (Caltech)
In-Reply-To: <1CB2EE43-FA11-4B7D-A706-F90E1C3CD79E@stanford.edu>
References: <1CB2EE43-FA11-4B7D-A706-F90E1C3CD79E@stanford.edu>
Message-ID:
A gentle reminder that this seminar is happening tomorrow 4:15-5:15 pm in Packard 101.
Title: Stochastic Descent Algorithms: Minimax Optimality, Implicit Regularization, and Deep Networks
Speaker: Professor Babak Hassibi, California Institute of Technology
Location & time: Packard 101, 4:15-5:15pm, Thursday, Jan 17th
Abstract: Stochastic descent methods have had a long history in optimization, adaptive filtering, and online learning and have recently gained tremendous popularity as the workhorse for deep learning. So much so that it is now widely recognized that the success of deep networks is not only due to their special deep architecture, but also due to the behavior of the stochastic descent methods used, which plays a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD)---originally developed for quadratic loss and linear models in the context of H-infinity control in the 1990's---and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. These minimax properties can be used to explain the convergence and implicit-regularization of the algorithms when the linear regression problem is over-parametrized (in what is now being called the "interpolating regime"). In the nonlinear setting, exemplified by training a deep neural network, we show that when the setup is "highly over-parametrized", stochastic descent methods enjoy similar convergence and implicit-regularization properties. This observation gives some insight into why deep networks exhibit such powerful generalization abilities. It is also a further example of what is increasingly referred to as the "blessing of dimensionality".
Biography: Babak Hassibi is the inaugural Mose and Lillian S. Bohn Professor of Electrical Engineering at the California Institute of Technology, where he has been since 2001. From 2011 to 2016 he was the Gordon M Binder/Amgen Professor of Electrical Engineering and during 2008-2015 he was Executive Officer of Electrical Engineering, as well as Associate Director of Information Science and Technology. Prior to Caltech, he was a Member of the Technical Staff in the Mathematical Sciences Research Center at Bell Laboratories, Murray Hill, NJ. He obtained his PhD degree from Stanford University in 1996 and his BS degree from the University of Tehran in 1989. His research interests span various aspects of information theory, communications, signal processing, control, and machine learning. He is an ISI highly cited author in Computer Science and, among other awards, is the recipient of the US Presidential Early Career Award for Scientists and Engineers (PECASE) and the David and Lucile Packard Fellowship in Science and Engineering. He is General co-Chair of the 2020 IEEE International Symposium on Information Theory (ISIT 2020).
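The stochastic mirror descent (SMD) update mentioned in the abstract can be sketched generically as follows: SGD's Euclidean step is replaced by a step in the "mirror" coordinates given by the gradient of a potential psi, and with psi(w) = ||w||^2/2 it reduces to plain SGD. This is an illustrative sketch under those standard definitions, not the talk's specific analysis, and the function names are my own.

```python
import numpy as np

def smd(X, y, grad_psi, grad_psi_inv, lr=0.05, epochs=100, seed=0):
    """Stochastic mirror descent on the squared loss of a linear model.

    Update: grad_psi(w_new) = grad_psi(w_old) - lr * per-sample gradient.
    With psi(w) = ||w||^2 / 2, grad_psi is the identity and this is plain SGD.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            g = (X[i] @ w - y[i]) * X[i]  # gradient of 0.5 * (x_i . w - y_i)^2
            w = grad_psi_inv(grad_psi(w) - lr * g)
    return w

# Interpolating (noise-free) linear data: SMD with the identity mirror map
# (i.e., SGD) converges to the weights that fit all samples exactly.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((40, 3))
y = X @ w_true
w_hat = smd(X, y, grad_psi=lambda w: w, grad_psi_inv=lambda z: z)
```

In the over-parametrized case the abstract discusses, which solution among the many interpolating ones SMD converges to is determined implicitly by the choice of potential psi.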
From farnia at stanford.edu Thu Jan 17 10:58:26 2019
From: farnia at stanford.edu (Farzan Farnia)
Date: Thu, 17 Jan 2019 18:58:26 +0000
Subject: [theory-seminar] Reminder: ISL seminar today 4:15-5:15 at Packard
101, prof. Babak Hassibi (Caltech)
Message-ID: <2F5C6551-F6EE-41DC-9AB1-865141007BFC@stanford.edu>
A gentle reminder:
Title: Stochastic Descent Algorithms: Minimax Optimality, Implicit Regularization, and Deep Networks
Speaker: Professor Babak Hassibi, California Institute of Technology
Location & time: Packard 101, 4:15-5:15pm, Thursday, Jan 17th
Abstract: Stochastic descent methods have had a long history in optimization, adaptive filtering, and online learning and have recently gained tremendous popularity as the workhorse for deep learning. So much so that it is now widely recognized that the success of deep networks is not only due to their special deep architecture, but also due to the behavior of the stochastic descent methods used, which plays a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD)---originally developed for quadratic loss and linear models in the context of H-infinity control in the 1990's---and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. These minimax properties can be used to explain the convergence and implicit-regularization of the algorithms when the linear regression problem is over-parametrized (in what is now being called the "interpolating regime"). In the nonlinear setting, exemplified by training a deep neural network, we show that when the setup is "highly over-parametrized", stochastic descent methods enjoy similar convergence and implicit-regularization properties. This observation gives some insight into why deep networks exhibit such powerful generalization abilities. It is also a further example of what is increasingly referred to as the "blessing of dimensionality".
Biography: Babak Hassibi is the inaugural Mose and Lillian S. Bohn Professor of Electrical Engineering at the California Institute of Technology, where he has been since 2001. From 2011 to 2016 he was the Gordon M Binder/Amgen Professor of Electrical Engineering and during 2008-2015 he was Executive Officer of Electrical Engineering, as well as Associate Director of Information Science and Technology. Prior to Caltech, he was a Member of the Technical Staff in the Mathematical Sciences Research Center at Bell Laboratories, Murray Hill, NJ. He obtained his PhD degree from Stanford University in 1996 and his BS degree from the University of Tehran in 1989. His research interests span various aspects of information theory, communications, signal processing, control, and machine learning. He is an ISI highly cited author in Computer Science and, among other awards, is the recipient of the US Presidential Early Career Award for Scientists and Engineers (PECASE) and the David and Lucile Packard Fellowship in Science and Engineering. He is General co-Chair of the 2020 IEEE International Symposium on Information Theory (ISIT 2020).
From wyma at stanford.edu Thu Jan 17 13:53:40 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Thu, 17 Jan 2019 21:53:40 +0000
Subject: [theory-seminar] Theory Lunch 1/17 -- Don Knuth
In-Reply-To:
References:
Message-ID:
Someone left a black jacket in 463A. If it is yours, come pick it up!
________________________________
From: theory-seminar on behalf of Weiyun Ma
Sent: Monday, January 14, 2019 3:16:27 PM
To: theory-seminar at lists.stanford.edu; thseminar at cs.stanford.edu
Subject: [theory-seminar] Theory Lunch 1/17 -- Don Knuth
Hi All,
This Thursday, we are fortunate to have Don Knuth tell us about le problème des ménages. As usual, we meet from noon to 1pm at 463A.
Best,
Anna
From ofirgeri at stanford.edu Fri Jan 18 12:29:45 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 18 Jan 2019 20:29:45 +0000
Subject: [theory-seminar] Theory Seminar (1/18): Ilias Zadik
In-Reply-To:
References:
Message-ID:
Reminder: Ilias' talk is today at 3pm in Gates 463A.
________________________________
From: Ofir Geri
Sent: Tuesday, January 15, 2019 2:00:49 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (1/18): Ilias Zadik
Hi all,
This week in the theory seminar, Ilias Zadik from MIT will give a talk on Algorithms and Algorithmic Intractability in High Dimensional Linear Regression (see abstract below). The talk will be as usual on Friday (1/18), 3:00pm in Gates 463A.
Ilias will be here on Thursday afternoon and Friday. If you'd like to meet with him, please sign up using the following link:
https://docs.google.com/spreadsheets/d/1Y26DZ8xAhwz81Y1YwoC1bpvaKSNZ2RF4kPM0GrTe8rM/edit?usp=sharing
Hope to see you there!
Ofir
Algorithms and Algorithmic Intractability in High Dimensional Linear Regression
Speaker: Ilias Zadik (MIT)
In this talk we will focus on the high dimensional linear regression problem. The goal is to recover a hidden k-sparse binary vector \beta under n noisy linear observations Y=X\beta+W where X is an n \times p matrix with iid N(0,1) entries and W is an n-dimensional vector with iid N(0,\sigma^2) entries. In the literature of the problem, an apparent asymptotic gap is observed between the optimal sample size for information-theoretic recovery, call it n*, and for computationally efficient recovery, call it n_alg.
We will discuss several new contributions on studying this gap. We first identify tightly the information limit of the problem using a novel analysis of the Maximum Likelihood Estimator (MLE) performance. Furthermore, we establish that the algorithmic barrier n_alg coincides with the phase transition point for the appearance of a certain Overlap Gap Property (OGP) over the space of k-sparse binary vectors. The presence of such an Overlap Gap Property phase transition, which originates in spin glass theory, is known to provide evidence of an algorithmic hardness. Finally, we show that in the extreme case where the noise level is zero, i.e. \sigma=0, the computational-statistical gap closes by proposing an optimal polynomial-time algorithm using the Lenstra-Lenstra-Lov\'asz lattice basis reduction algorithm.
This is joint work with David Gamarnik.
From rad at cs.stanford.edu Mon Jan 21 07:46:55 2019
From: rad at cs.stanford.edu (Rad Niazadeh)
Date: Mon, 21 Jan 2019 07:46:55 -0800
Subject: [theory-seminar] Fwd: [sigecom-talk] postdoc positions at Game
Theory Lab in St.Petersburg, Russia (deadline: February 15)
In-Reply-To: <91819e54-1766-4a53-8a7c-e04896e564d2@googlegroups.com>
References: <91819e54-1766-4a53-8a7c-e04896e564d2@googlegroups.com>
Message-ID:
Hi all!
This might be interesting to some of you.
Cheers,
Rad
---------- Forwarded message ---------
From: Fedor Sandomirskiy
Date: Mon, Jan 21, 2019 at 2:03 AM
Subject: [sigecom-talk] postdoc positions at Game Theory Lab in
St.Petersburg, Russia (deadline: February 15)
To: sigecom-talk
Dear Colleagues,
The International Laboratory of Game Theory and Decision Making in
St.Petersburg, Russia, opens a fully-funded postdoc position in Game theory
and Mechanism Design (http://scem.spb.hse.ru/en/ilgt/postdoc2019).
The detailed description of the vacancy is below.
If you know somebody who could be interested, please encourage them to
apply. Note that the deadline is rather soon (February 15).
In case of any questions do not hesitate to contact the following members
of the Lab:
--Herve Moulin (Herve.Moulin at glasgow.ac.uk) and Anna Bogomolnaia (
Anna.Bogomolnaia at glasgow.ac.uk), the academic supervisors
--Alexander Nesterov (nesterovu at gmail.com), the laboratory head
--or Xenia Adaeva (xadaeva at hse.ru), the manager.
Best regards,
Fedor Sandomirskiy (Technion & HSE St.Petersburg).
The Higher School of Economics International Laboratory of Game Theory and
Decision Making http://scem.spb.hse.ru/en/ilgt/
in St.Petersburg, Russia invites applications for postdoctoral research
positions in the fields of Game Theory, Computational Economics, and
Discrete Mathematics.
The job involves:
--- working under the direct supervision of Prof. Herve Moulin, the lab's
academic supervisor,
--- participants are encouraged to pursue their own research in parallel
with working on research projects of Game Theory and Decision Making lab
in two broad areas:
o market design & mechanism design
o game theory: strategic, algorithmic, and experimental
--- writing research papers for international peer-reviewed journals in
co-authorship with the members of the Lab,
--- participation in the organization of events and other contributions to
the Lab's development,
--- public presentations of the candidate's own research to researchers in
the field and the broader academic community,
--- some teaching is encouraged, though not required.
Requirements:
--- a PhD from an international research university in such fields as:
Economics, Theoretical Computer Science or Mathematics,
--- a strong background in Game Theory, Microeconomics, Discrete
Mathematics and Theoretical computer science,
--- heavy emphasis on high-quality research,
--- ability to work in a team,
--- fluent English,
--- relevant experience is an asset although not required.
General conditions for Post-Doctoral Research positions can be found here
https://iri.hse.ru/faq_pd
Appointments will normally be made for one year. They assume
internationally competitive compensation and other benefits including
medical insurance.
A CV, research statement, and two letters of recommendation should be
submitted directly via the online application form
https://www.hse.ru/expresspolls/poll/229374241.html *by February 15, 2019.*
Please note that direct applications to the hiring laboratory may not be
reviewed.
About HSE
HSE is a young, dynamic, fast-growing Russian research university providing
unique research opportunities.
For more information about HSE, please visit our university portal
http://hse.ru/en, and the webpage of the Lab: International Laboratory of
Game Theory and Decision Making http://scem.spb.hse.ru/en/ilgt/, or contact
the following members of the Lab:
--Herve Moulin (Herve.Moulin at glasgow.ac.uk) and Anna Bogomolnaia (
Anna.Bogomolnaia at glasgow.ac.uk), the academic supervisors
--Alexander Nesterov (nesterovu at gmail.com), the laboratory head
--Xenia Adaeva (xadaeva at hse.ru), the manager.
To apply, please follow the International Faculty Recruitment website link:
https://www.hse.ru/expresspolls/poll/229374241.html
--
You received this message because you are subscribed to the Google Groups
"sigecom-talk" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to sigecom-talk+unsubscribe at googlegroups.com.
To post to this group, send email to sigecom-talk at googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/sigecom-talk/91819e54-1766-4a53-8a7c-e04896e564d2%40googlegroups.com
.
For more options, visit https://groups.google.com/d/optout.
--
Rad Niazadeh,
Postdoctoral Scholar,
Computer Science Department, Stanford University,
484 Gates, 353 Serra Mall, Stanford, CA 94035.
From ofirgeri at stanford.edu Mon Jan 21 14:16:16 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Mon, 21 Jan 2019 22:16:16 +0000
Subject: [theory-seminar] Theory Seminar (1/25): Ronald de Haan
Message-ID:
Hi all,
This week's theory seminar talk will be by Ronald de Haan (University of Amsterdam): On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice (see abstract below). The talk will be as usual on Friday (1/25), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice
Speaker: Ronald de Haan (University of Amsterdam)
In the field of knowledge compilation, a class of Boolean circuits has been studied that strikes a balance between compactness and good algorithmic properties: circuits in Decomposable Negation Normal Form (DNNF). In this talk, I will visit some results about the possibilities and limits of using DNNF circuits in a social choice setting. This includes explaining how one can encode certain voting domains using DNNF circuits in polynomial time, and how this enables efficient use of some voting rules for these domains. It also includes showing lower bounds on the size of DNNF circuits for encoding other voting domains.
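For readers new to DNNF, the defining "decomposable" property is that every AND gate's children mention pairwise disjoint sets of variables (with negations only at the leaves, as in any NNF). A minimal sketch of that check, with circuits as nested tuples and function names of my own choosing:

```python
# A circuit node is ("var", name), ("not", name),
# ("and", children) or ("or", children).

def variables(node):
    # Set of variable names appearing under this node.
    kind = node[0]
    if kind in ("var", "not"):
        return {node[1]}
    return set().union(*(variables(c) for c in node[1]))

def is_decomposable(node):
    # DNNF: every AND gate's children range over pairwise
    # disjoint sets of variables.
    kind = node[0]
    if kind in ("var", "not"):
        return True
    if not all(is_decomposable(c) for c in node[1]):
        return False
    if kind == "and":
        seen = set()
        for c in node[1]:
            vs = variables(c)
            if seen & vs:
                return False
            seen |= vs
    return True
```

Decomposability is what makes DNNF algorithmically friendly: for instance, satisfiability and model counting reduce to independent subproblems at each AND gate.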
From ofirgeri at stanford.edu Mon Jan 21 17:33:00 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 22 Jan 2019 01:33:00 +0000
Subject: [theory-seminar] Theory Seminar (1/25): Ronald de Haan
In-Reply-To:
References:
Message-ID:
Change of plans: Ronald's talk will be next week (on Friday 2/1). This Friday we will have a talk by Christos Papadimitriou (an announcement will follow).
________________________________
From: Ofir Geri
Sent: Monday, January 21, 2019 2:16:16 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (1/25): Ronald de Haan
Hi all,
This week's theory seminar talk will be by Ronald de Haan (University of Amsterdam): On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice (see abstract below). The talk will be as usual on Friday (1/25), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice
Speaker: Ronald de Haan (University of Amsterdam)
In the field of knowledge compilation, a class of Boolean circuits has been studied that strikes a balance between compactness and good algorithmic properties: circuits in Decomposable Negation Normal Form (DNNF). In this talk, I will visit some results about the possibilities and limits of using DNNF circuits in a social choice setting. This includes explaining how one can encode certain voting domains using DNNF circuits in polynomial time, and how this enables efficient use of some voting rules for these domains. It also includes showing lower bounds on the size of DNNF circuits for encoding other voting domains.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ofirgeri at stanford.edu Tue Jan 22 00:13:28 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 22 Jan 2019 08:13:28 +0000
Subject: [theory-seminar] Theory Seminar (1/25): Christos Papadimitriou
Message-ID:
Hi all,
This week in the theory seminar, we will have a talk by Christos Papadimitriou (Columbia) on Computation in the Brain (see abstract below). The talk will be as usual on Friday (1/25), 3:00pm in Gates 463A. Ronald de Haan's talk (which was originally scheduled for this week) will take place next week on Friday 2/1.
Hope to see you there!
Ofir
Computation in the Brain
Speaker: Christos Papadimitriou (Columbia)
How does the brain beget the mind? How do molecules, cells and synapses effect intelligence, reasoning, and language? Despite dazzling progress in experimental neuroscience we don't seem to be making progress in the overarching question -- the gap is huge and a completely new approach seems to be required. As Richard Axel recently put it: "We don't have a logic for the transformation of neural activity into thought."
What kind of formalism would qualify as this "logic"? I will sketch a possible answer.
(Joint work with Santosh Vempala, Dan Mitropolsky, and Mike Collins.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Tue Jan 22 12:58:34 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Tue, 22 Jan 2019 20:58:34 +0000
Subject: [theory-seminar] Theory Lunch 1/24 -- Ray Li
Message-ID:
Hi everyone,
This Thursday at theory lunch, Ray will tell us about "Beating a log n approximation with the greedy algorithm on an active learning problem." See abstract below.
As always, please join us from noon to 1pm at 463A.
---------------------------------------------------------
Abstract:
The Decision Tree Problem is a classic formulation of active learning: given n hypotheses and a set of tests that partition the hypotheses, construct a decision tree where each root-to-leaf path is a sequence of tests that uniquely identifies a hypothesis (leaf), and the (weighted) depth of the leaves is minimized. Previous work showed that greedy algorithms achieve O(log n)-approximations for this problem, and that beating an O(log n)-approximation is NP-hard. However, this threshold is not tight for the Uniform Decision Tree problem, where the weights of all hypotheses are equal. For the Uniform Decision Tree problem, the best known lower bound for greedy algorithms is Omega(log n/loglog n), and [Kosaraju et al. '99] conjectured that O(log n/loglog n) is the correct approximation ratio for the greedy algorithm on this problem, but the conjecture has remained open.
In this talk, we discuss recent work that proves the conjecture, showing that the greedy algorithm achieves an O(log n/loglog n)-approximation on the Uniform Decision Tree problem. The proof technique in fact yields a more general theorem with many other interesting consequences. One consequence is a subexponential-time algorithm for the Uniform Decision Tree problem. As a corollary, assuming the Exponential Time Hypothesis, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard. Time permitting, we will discuss some of the proof techniques.
This is joint work with Percy Liang and Steve Mussmann.
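[Editorial note: a toy sketch of the greedy strategy the abstract describes, written for this archive and not taken from the paper; the function and test names are mine. At each node, the greedy rule picks the test whose partition of the surviving hypotheses has the smallest largest part, then recurses.]

```python
def greedy_tree_depth(hypotheses, tests, depth=0):
    """Depth of the greedy decision tree under uniform hypothesis weights.

    `tests` maps a test name to a function assigning each hypothesis an
    outcome; a leaf is reached once one hypothesis remains.
    """
    if len(hypotheses) <= 1:
        return depth

    def partition(test):
        parts = {}
        for h in hypotheses:
            parts.setdefault(tests[test](h), []).append(h)
        return parts

    # Greedy choice: minimize the size of the largest outcome class.
    best = min(tests, key=lambda t: max(len(p) for p in partition(t).values()))
    parts = partition(best)
    if len(parts) == 1:   # no test separates the remaining hypotheses
        return depth
    return max(greedy_tree_depth(p, tests, depth + 1) for p in parts.values())

# Identifying an integer in [0, 16) with bit tests takes log2(16) = 4 levels.
hypotheses = list(range(16))
tests = {f"bit{i}": (lambda i: lambda h: (h >> i) & 1)(i) for i in range(4)}
print(greedy_tree_depth(hypotheses, tests))  # 4
```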
---------------------------------------------------------
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccanonne at cs.stanford.edu Wed Jan 23 03:26:22 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Wed, 23 Jan 2019 03:26:22 -0800
Subject: [theory-seminar] AWIS PALO ALTO: How do we advance women in the
STEM fields? - January, 29th, 7-9pm
In-Reply-To:
References:
Message-ID:
Hi everyone,
I'm forwarding below an event that could be of interest to many. (In
spite of the opening line, based on snooping around it does not seem to
be specifically for postdocs).
Best,
-- Clément
-------- Forwarded Message --------
Subject: [MISC] AWIS PALO ALTO: How do we advance women in the STEM
fields? - January, 29th, 7-9pm
Date: Tue, 22 Jan 2019 18:22:25 +0000
From: Noga Or-Geva
To: postdoc-exchange at mailman.stanford.edu
> Dear Postdocs,
>
> Join our January event at AWIS Palo Alto to learn more about
> advancing women in the STEM fields!
>
> We will be hosting an exciting panel of influential AWIS volunteers.
>
> You can expect to gain insights on some of the problems that women
> in STEM face today, learn about options that you can use for promoting
> yourself in these fields, and find ways that you can become engaged in
> promoting others.
>
> As always, dinner and great company will be provided!
>
> Please sign up here - tickets are limited:
> https://www.brownpapertickets.com/event/4023553
>
> January 29, 2019
> 7-9 PM
> Stanford University MSOB X303
> 1265 Welch Road, 3rd Floor,
> Stanford, CA 94305
>
> How do we advance women in the STEM fields?
> AWIS PANEL - Engage yourself!
>
> The AWIS Palo Alto chapter invites you to learn about our non-profit
> and how we (and YOU!) can support women in STEM. Current members will
> share their experiences at this unique panel.
>
> 7:00-7:30: Networking dinner
> 7:30-7:45: Announcements
> 7:45-9:00: Panel discussion
>
> AWIS Palo Alto members: $5
> Pre-registered non-members: $15
> Fee at door: $20
> Tickets are limited.
>
> About AWIS: The Association for Women in Science (AWIS) is a global
> non-profit organization that inspires bold leadership, research, and
> solutions that advance women in STEM and drive systemic change. AWIS
> Palo Alto is the local chapter of AWIS, providing career support and
> building community among women in STEM in the Peninsula.
>
> Copyright © 2015, All rights reserved.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
--+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
* postdoc-exchange info and options: https://mailman.stanford.edu/mailman/listinfo/postdoc-exchange
* SURPAS website: http://www.surpas.org
* OPA Website: http://postdocs.stanford.edu
* To unsubscribe: Send e-mail to postdoc-exchange-unsubscribe at lists.stanford.edu (no subject or content required)
All e-mails must contain one of the following tags in the subject line: [ADVICE] [BIO] [CHEM] [CLIPSS] [COMP] [EVENT] [HOUSING] [JOB] [MISC] [PETS] [SALE] [TALK] [VOLUNTEER]
From gvaliant at cs.stanford.edu Wed Jan 23 10:12:45 2019
From: gvaliant at cs.stanford.edu (Gregory Valiant)
Date: Wed, 23 Jan 2019 10:12:45 -0800
Subject: [theory-seminar] chalk talk/discussion with Urmila Mahadev,
4-5pm Packard 2014
Message-ID:
Hi Friends,
Urmila Mahadev will be around today, 4-5pm (in Packard, room 204), for an
informal conversation/"chalk talk". If you're free, I strongly suggest you
attend---whether you are keen to hear about some of her recent
breakthroughs in the theory of quantum computing, or just want to learn
more about the bridges between quantum computing and cryptography, or are just
curious about the big challenges in the field.
I hope to see you there,
-Greg
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From marykw at stanford.edu Wed Jan 23 13:52:47 2019
From: marykw at stanford.edu (Mary Wootters)
Date: Wed, 23 Jan 2019 13:52:47 -0800
Subject: [theory-seminar] Summer opportunity for grad students at Canada/USA
Mathcamp 2019
Message-ID:
Hi grad students,
Forwarding this job ad for Mathcamp 2019 -- if you or anyone you know is
interested in teaching over the summer at Mathcamp, see below.
Best,
Mary
**********************************************************************
Canada/USA Mathcamp is looking for graduate students as leaders for its
summer 2019 session.
When: June 19 to August 1, 2019
Where: Lewis & Clark College, Portland, OR
Compensation: $4,000 stipend plus room and board for six weeks, and travel
expenses
Application Deadline: February 15, 2019
Details and online application: http://www.mathcamp.org/mentor/
This summer, we invite you to:
Be a leader in a vibrant community of talented and enthusiastic high-school
students and energetic faculty. Teach and learn what most interests you, in
an atmosphere of freedom and excitement. Be a friend and mentor to 120
marvelous kids. Be an architect of an experience that those 120 kids will
cherish for years.
Canada/USA Mathcamp (www.mathcamp.org) is a summer program for talented
high school students from all over the United States, Canada, and the rest
of the world. At Mathcamp, students interact with world-class
mathematicians, explore advanced topics in mathematics, and find a true
intellectual peer group.
The mentor job is a hybrid between a teaching position and a camp counselor
role. Your primary responsibility is to teach great classes, and you'll be
doing this in the context of a residential summer program: you live, eat,
and play with the campers. It's a lot of work and a lot of fun.
As a mentor at Mathcamp, you get an amazing teaching experience: there is
no set curriculum, so you create your own classes and teach the math you're
interested in. From group theory to projective geometry, from complex
analysis to cryptography, from fractals to voting theory -- there is an
abundance of mathematics that can be taught (with a little imagination) at
camp level. You'll have support (in both curriculum design and pedagogy)
from master teachers, and you'll work with students who are exceptionally
smart and engaged.
Mentors are also the camp's primary leaders and organizers, and cultivate
the rich life of the camp by planning activities, setting camp policy, and
serving as residential counselors -- essentially, running the camp, and
bringing it to life with creative ideas, inside and outside the classroom.
Initiative, flexibility, and tolerance for a certain degree of chaos are a
must -- that is part of what makes Mathcamp an exciting place to work!
Since women and minority students often face a shortage of role models in
mathematics, we are especially eager to recruit mentors from these groups.
For more information on the position and how to apply, visit
http://www.mathcamp.org/mentor/.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ofirgeri at stanford.edu Wed Jan 23 16:51:07 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Thu, 24 Jan 2019 00:51:07 +0000
Subject: [theory-seminar] Theory Seminar (1/25): Christos Papadimitriou on
Computation in the Brain - New Location: Packard 101
In-Reply-To:
References:
Message-ID:
Hi all,
This Friday at 3:00pm we will have a talk by Christos Papadimitriou (Columbia) on Computation in the Brain (see abstract below).
Please note the change of location: the talk will be in Packard 101.
After the talk, we will have a student meeting with Christos in Gates 463A - everyone who is interested is welcome to join!
Hope to see you there!
Ofir
Computation in the Brain
Speaker: Christos Papadimitriou (Columbia)
How does the brain beget the mind? How do molecules, cells and synapses effect intelligence, reasoning, and language? Despite dazzling progress in experimental neuroscience we don't seem to be making progress in the overarching question -- the gap is huge and a completely new approach seems to be required. As Richard Axel recently put it: "We don't have a logic for the transformation of neural activity into thought."
What kind of formalism would qualify as this "logic"? I will sketch a possible answer.
(Joint work with Santosh Vempala, Dan Mitropolsky, and Mike Collins.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Thu Jan 24 13:20:22 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Thu, 24 Jan 2019 21:20:22 +0000
Subject: [theory-seminar] Theory Lunch 1/24 -- Ray Li
In-Reply-To:
References:
Message-ID:
The food is here! Sorry for the wait!
Hi everyone,
This Thursday at theory lunch, Ray will tell us about "Beating a log n approximation with the greedy algorithm on an active learning problem." See abstract below.
As always, please join us from noon to 1pm at 463A.
---------------------------------------------------------
Abstract:
The Decision Tree Problem is a classic formulation of active learning: given n hypotheses and a set of tests that partition the hypotheses, construct a decision tree where each root-to-leaf path is a sequence of tests that uniquely identifies a hypothesis (leaf), and the (weighted) depth of the leaves is minimized. Previous work showed that greedy algorithms achieve O(log n)-approximations for this problem, and that beating an O(log n)-approximation is NP-hard. However, this threshold is not tight for the Uniform Decision Tree problem, where the weights of all hypotheses are equal. For the Uniform Decision Tree problem, the best known lower bound for greedy algorithms is Omega(log n/loglog n), and [Kosaraju et al. '99] conjectured that O(log n/loglog n) is the correct approximation ratio for the greedy algorithm on this problem, but the conjecture has remained open.
In this talk, we discuss recent work that proves the conjecture, showing that the greedy algorithm achieves an O(log n/loglog n)-approximation on the Uniform Decision Tree problem. The proof technique in fact yields a more general theorem with many other interesting consequences. One consequence is a subexponential-time algorithm for the Uniform Decision Tree problem. As a corollary, assuming the Exponential Time Hypothesis, achieving any super-constant approximation ratio on Uniform Decision Tree is not NP-hard. Time permitting, we will discuss some of the proof techniques.
This is joint work with Percy Liang and Steve Mussmann.
---------------------------------------------------------
Best,
Anna
_______________________________________________
theory-seminar mailing list
theory-seminar at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/theory-seminar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ofirgeri at stanford.edu Fri Jan 25 10:06:10 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 25 Jan 2019 18:06:10 +0000
Subject: [theory-seminar] Theory Seminar (1/25): Christos Papadimitriou
on Computation in the Brain - New Location: Packard 101
In-Reply-To:
References: ,
Message-ID:
Reminder: Christos' talk is today at 3pm in Packard 101.
________________________________
From: Ofir Geri
Sent: Wednesday, January 23, 2019 4:51:07 PM
To: thseminar at cs.stanford.edu; cs-seminars at lists.stanford.edu; nlp-seminar at lists.stanford.edu
Subject: Theory Seminar (1/25): Christos Papadimitriou on Computation in the Brain - New Location: Packard 101
Hi all,
This Friday at 3:00pm we will have a talk by Christos Papadimitriou (Columbia) on Computation in the Brain (see abstract below).
Please note the change of location: the talk will be in Packard 101.
After the talk, we will have a student meeting with Christos in Gates 463A - everyone who is interested is welcome to join!
Hope to see you there!
Ofir
Computation in the Brain
Speaker: Christos Papadimitriou (Columbia)
How does the brain beget the mind? How do molecules, cells and synapses effect intelligence, reasoning, and language? Despite dazzling progress in experimental neuroscience we don't seem to be making progress in the overarching question -- the gap is huge and a completely new approach seems to be required. As Richard Axel recently put it: "We don't have a logic for the transformation of neural activity into thought."
What kind of formalism would qualify as this "logic"? I will sketch a possible answer.
(Joint work with Santosh Vempala, Dan Mitropolsky, and Mike Collins.)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From shivamgarg at stanford.edu Fri Jan 25 13:16:20 2019
From: shivamgarg at stanford.edu (Shivam Garg)
Date: Fri, 25 Jan 2019 21:16:20 +0000
Subject: [theory-seminar] Algorithms and Friends Lunch,
Monday 1/28 in Gates 463A
Message-ID:
Hi everyone,
We will have the first Algorithms and Friends lunch coming Monday (Jan 28), at noon, in Gates 463A. Tom Dean will tell us about Blind, Joint MIMO Decoding and Estimation.
The idea of Algorithms and Friends lunch is to engage in discussions with people from varied backgrounds, who have ideas and problems that may be of interest to a theory audience. We are looking for more speakers, so if you have suggestions, please let me know!
Title: Blind, Joint MIMO Decoding and Estimation
Abstract: Modern wireless communications systems rely on multiple antenna transmission schemes (MIMO) to achieve reliable, high-throughput communications. Such systems often require the receiver to have accurate knowledge of the channel which is obtained through the transmission of pilot symbols. Obtaining such knowledge will become increasingly challenging in future-generation wireless systems which will operate in conditions where the channel will vary more rapidly in time. In this talk we will discuss the problem of blind decoding, or attempting decoding when the receiver has no knowledge of the channel. Specifically, we formulate the problem of blind MIMO decoding as a non-convex optimization problem. We propose two algorithms to solve this problem. The first, based on gradient descent, is provably correct but inefficient. The second, inspired by the simplex algorithm, has not been proven to be correct but empirically performs well and is highly efficient.
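[Editorial note: a drastically simplified scalar illustration of the blind setting in the abstract, written for this archive and not the authors' algorithm; all names are mine. Observations are y_t = h * x_t + noise with symbols x_t in {+1, -1} and an unknown channel gain h; with no pilot symbols the receiver can estimate |h| and decode the symbols, but only up to a global sign flip, the inherent ambiguity that makes blind decoding hard.]

```python
import random

def blind_decode(ys):
    """Blindly estimate |h| and decode symbols (up to a global sign)."""
    gain = sum(abs(y) for y in ys) / len(ys)      # blind channel estimate
    symbols = [1 if y >= 0 else -1 for y in ys]   # nearest-symbol decision
    return gain, symbols

random.seed(0)
h = -1.7                                          # unknown to the receiver
xs = [random.choice([-1, 1]) for _ in range(200)]
ys = [h * x + random.gauss(0, 0.1) for x in xs]

gain, decoded = blind_decode(ys)
agree = sum(d == x for d, x in zip(decoded, xs))
# Symbol errors after resolving the global sign ambiguity:
print(min(agree, len(xs) - agree))  # 0
```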
Thanks,
Shivam
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccanonne at cs.stanford.edu Fri Jan 25 22:27:29 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Fri, 25 Jan 2019 22:27:29 -0800
Subject: [theory-seminar] Theory Happy Hour: S02E02 ("Continuing Resolution")
Message-ID: <98dcc57e-44e9-e20e-aee3-07a4ee0c1fed@cs.stanford.edu>
Hi everyone,
It has been a while! A new year, a new quarter, a government shutdown
(and reopening), and no happy hour yet... to remedy that, Bruce and I
are suggesting you block your next Thursday evening, and join us on the
last day of January for a theory happy hour:
Thursday, January 31st
Gates 463
(moving to the AT&T patio if, as it should, the weather allows)
5:30pm
There will be snacks, there will be drinks, and if you are so inclined
you can even pester Ofir to follow all that with a jam/music session.
Just don't tell him I suggested it.
See you on Thursday!
PS: as usual, if you have preferences or restrictions on the
food/drinks, please send me an email.
-- Clément
From ccanonne at cs.stanford.edu Mon Jan 28 08:30:58 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Mon, 28 Jan 2019 08:30:58 -0800
Subject: [theory-seminar] TCS+ talk: Wednesday, February 6, Ran Canetti,
BU and TAU
Message-ID:
Hi everyone,
TCS+, the seminar series where the speaker is very flat on the wall but
we still get to ask them questions, is about to resume for the Spring
season. The first talk is by Ran Canetti (BU/TAU), on solving a
long-standing open problem in cryptography: deniable encryption (see
details below; as far as I understood, this is a work whose cool side
effect is to prove xkcd wrong: https://xkcd.com/538/)
So next week, Wednesday 6th, at 9:55am, come to Gates 463A to hear all
about it -- with breakfast.
Best,
-- Clément
-------------------------------
Speaker: Ran Canetti (BU and TAU)
Title: Fully Bideniable Interactive Encryption
Abstract: While standard encryption guarantees secrecy of the encrypted
plaintext only against an attacker that has no knowledge of the
communicating parties' keys and randomness of encryption, deniable
encryption [Canetti et al., Crypto '96] provides the additional guarantee
that the plaintext remains secret even in the face of entities that attempt
to coerce (or bribe) the communicating parties to expose their internal
states, including the plaintexts, keys and randomness. To achieve this
guarantee, deniable encryption equips the parties with faking
algorithms which allow them to generate fake keys and randomness that
make the ciphertext appear consistent with any plaintext of the parties'
choice. To date, however, only partial results were known: either
deniability against coercing only the sender, or against coercing only
the receiver [Sahai-Waters, STOC '14], or schemes satisfying weaker
notions of deniability [O'Neill et al., Crypto '11].
In this paper we present the first fully bideniable interactive
encryption scheme, thus resolving the 20-year-old open problem. Our
scheme also provides an additional and new guarantee: even if the sender
claims that one plaintext was used and the receiver claims a different
one, the adversary has no way of figuring out who is lying - the sender,
the receiver, or both. This property, which we call off-the-record
deniability, is useful when the parties don't have means to agree on
what fake plaintext to claim, or when one party defects against the
other. Our protocol has three messages, which is optimal [Bendlin et
al., Asiacrypt '11], and needs a globally available reference string. We
assume subexponential indistinguishability obfuscation (IO) and one-way
functions.
Joint work with Sunoo Park and Oxana Poburinnaya.
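[Editorial note: the classic one-time pad gives a toy taste of the "fake randomness" idea behind sender-side deniability; it is nothing like the interactive scheme above, just a standard illustration with names of my choosing. Since every key is equally likely, the sender can open a ciphertext to ANY same-length plaintext by presenting the fake key k' = c XOR m'.]

```python
import secrets

def encrypt(key, msg):
    # One-time pad: XOR each message byte with the matching key byte.
    return bytes(k ^ m for k, m in zip(key, msg))

real_msg = b"attack at dawn"
key = secrets.token_bytes(len(real_msg))
ct = encrypt(key, real_msg)

# Under coercion, claim an innocuous same-length message instead.
fake_msg = b"retreat at ten"
fake_key = bytes(c ^ m for c, m in zip(ct, fake_msg))
print(encrypt(fake_key, fake_msg) == ct)  # True: the fake opening checks out
```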
From ofirgeri at stanford.edu Mon Jan 28 09:51:25 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Mon, 28 Jan 2019 17:51:25 +0000
Subject: [theory-seminar] Theory Seminar (2/1): Ronald de Haan
Message-ID:
Hi all,
This week's theory seminar talk will be by Ronald de Haan (University of Amsterdam): On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice (see abstract below). The talk will be as usual on Friday (2/1), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice
Speaker: Ronald de Haan (University of Amsterdam)
In the field of knowledge compilation, a class of Boolean circuits has been studied that strikes a balance between compactness and good algorithmic properties: circuits in Decomposable Negation Normal Form (DNNF). In this talk, I will visit some results about the possibilities and limits of using DNNF circuits in a social choice setting. This includes explaining how one can encode certain voting domains using DNNF circuits in polynomial time, and how this enables efficient use of some voting rules for these domains. It also includes showing lower bounds on the size of DNNF circuits for encoding other voting domains.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From shivamgarg at stanford.edu Mon Jan 28 11:21:54 2019
From: shivamgarg at stanford.edu (Shivam Garg)
Date: Mon, 28 Jan 2019 19:21:54 +0000
Subject: [theory-seminar] Algorithms and Friends Lunch,
Monday 1/28 in Gates 463A
In-Reply-To:
References:
Message-ID:
Reminder, this is starting in 40 mins. Food will be there.
________________________________
From: Shivam Garg
Sent: Friday, January 25, 2019 1:16:20 PM
To: theory-seminar at lists.stanford.edu; algorithms-and-friends at lists.stanford.edu
Cc: Tom Dean; Megan D. Harris
Subject: Algorithms and Friends Lunch, Monday 1/28 in Gates 463A
Hi everyone,
We will have the first Algorithms and Friends lunch coming Monday (Jan 28), at noon, in Gates 463A. Tom Dean will tell us about Blind, Joint MIMO Decoding and Estimation.
The idea of Algorithms and Friends lunch is to engage in discussions with people from varied backgrounds, who have ideas and problems that may be of interest to a theory audience. We are looking for more speakers, so if you have suggestions, please let me know!
Title: Blind, Joint MIMO Decoding and Estimation
Abstract: Modern wireless communications systems rely on multiple antenna transmission schemes (MIMO) to achieve reliable, high-throughput communications. Such systems often require the receiver to have accurate knowledge of the channel which is obtained through the transmission of pilot symbols. Obtaining such knowledge will become increasingly challenging in future-generation wireless systems which will operate in conditions where the channel will vary more rapidly in time. In this talk we will discuss the problem of blind decoding, or attempting decoding when the receiver has no knowledge of the channel. Specifically, we formulate the problem of blind MIMO decoding as a non-convex optimization problem. We propose two algorithms to solve this problem. The first, based on gradient descent, is provably correct but inefficient. The second, inspired by the simplex algorithm, has not been proven to be correct but empirically performs well and is highly efficient.
Thanks,
Shivam
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Mon Jan 28 16:14:41 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Tue, 29 Jan 2019 00:14:41 +0000
Subject: [theory-seminar] Theory Lunch 1/31 -- Joshua Brakensiek
Message-ID:
Hi all,
This Thursday at theory lunch, Josh will tell us about "The Polymorphic Gateway Between Structure and Algorithms--CSPs and Beyond." See abstract below.
As always, please join us from noon to 1pm at 463A.
---------------------------------------------------------
The Polymorphic Gateway Between Structure and Algorithms--CSPs and Beyond
Speaker: Joshua Brakensiek
What underlying mathematical structure (or lack thereof) in a computational problem governs its efficient solvability (or dictates its hardness)? In the realm of constraint satisfaction problems (CSPs), the algebraic dichotomy theorem gives a definitive answer: a polynomial time algorithm exists precisely when the problem admits non-trivial operations called polymorphisms that combine solutions to the problem to yield new solutions. Inspired and emboldened by this, one might speculate a broader polymorphic principle: if there are interesting ways to combine solutions to get more solutions, then the problem ought to be tractable (with the meanings of "interesting" and "tractable" being heavily context dependent).
Beginning with some background on the polymorphic approach to understanding the complexity of constraint satisfaction, the talk will discuss some extensions beyond CSPs where the polymorphic principle seems highly promising (yet far from understood). Specifically, we will discuss Promise CSPs where one is allowed to satisfy a relaxed version of the constraints (a framework that includes important problems like approximate graph coloring and discrepancy minimization), and the potential and challenges in applying the polymorphic framework to them. Our inquiries into these directions also reveal some interesting connections to optimization, such as algorithms to solve LPs over different rings (like integers adjoined with sqrt{2}).
Based on joint work with Venkat Guruswami.
---------------------------------------------------------
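A classic concrete instance of a polymorphism (a standard textbook fact, not specific to this talk) is that 2-SAT is closed under the ternary majority operation: the coordinatewise majority of any three satisfying assignments is again satisfying. The small Python check below verifies this closure on a hypothetical 3-variable instance chosen for illustration.

```python
from itertools import product

# A 2-SAT instance as clauses of signed literals:
# (v, True) means variable v, (v, False) means NOT v.
# Clauses: (x0 or x1), (not x0 or x2), (not x1 or not x2).
clauses = [((0, True), (1, True)),
           ((0, False), (2, True)),
           ((1, False), (2, False))]
n_vars = 3

def satisfies(assign):
    """True iff every clause has at least one literal set correctly."""
    return all(any(assign[v] == pos for v, pos in clause) for clause in clauses)

def majority(a, b, c):
    """Coordinatewise majority of three Boolean assignments."""
    return tuple((x + y + z) >= 2 for x, y, z in zip(a, b, c))

solutions = [a for a in product((False, True), repeat=n_vars) if satisfies(a)]

# Polymorphism check: majority of any three solutions is again a solution.
assert all(satisfies(majority(a, b, c))
           for a in solutions for b in solutions for c in solutions)
print(len(solutions), "solutions; majority closure verified")
```

This is the shape of argument behind the dichotomy theorem: the existence of such a non-trivial solution-combining operation is exactly what separates the tractable CSPs from the NP-hard ones.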
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Mon Jan 28 20:49:53 2019
From: reingold at stanford.edu (Omer Reingold)
Date: Mon, 28 Jan 2019 20:49:53 -0800
Subject: [theory-seminar] STOCA Tuesday February 5th
In-Reply-To:
References:
Message-ID:
Organizers asked me to mention that for badging & catering purposes,
registration would be very helpful - the webpage has the link to register:
https://sites.google.com/view/stoca19/home
Omer
On Mon, Jan 14, 2019 at 9:50 AM Omer Reingold wrote:
> Google will be hosting a day of talks by the STOC PC. Details and a
> tentative schedule are here: https://sites.google.com/view/stoca19/home Looks
> great!
>
> Omer
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccanonne at cs.stanford.edu Wed Jan 30 17:06:51 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Wed, 30 Jan 2019 17:06:51 -0800
Subject: [theory-seminar] Theory Happy Hour: S02E02 ("Continuing
Resolution")
In-Reply-To: <98dcc57e-44e9-e20e-aee3-07a4ee0c1fed@cs.stanford.edu>
References: <98dcc57e-44e9-e20e-aee3-07a4ee0c1fed@cs.stanford.edu>
Message-ID: <16850276-6f1c-8dd3-2c84-2f691e93ca57@cs.stanford.edu>
Reminder: this is tomorrow!
-- Clément
On 1/25/19 10:27 PM, Clément Canonne wrote:
> Hi everyone,
>
> It has been a while! A new year, a new quarter, a government shutdown
> (and reopening), and no happy hour yet... to remedy that, Bruce and I
> are suggesting you block your next Thursday evening, and join us on the
> last day of January to a theory happy hour:
>
> Thursday, January 31st
> Gates 463
> (moving to the AT&T patio if (as it should) weather allows)
> 5:30pm
>
> There will be snacks, there will be drinks, and if you are so inclined
> you can even pester Ofir to follow all that with a jam/music session.
> Just don't tell him I suggested it.
>
> See you on Thursday!
>
> PS: as usual, if you have preferences or restrictions on the
> food/drinks, please send me an email.
>
> -- Clément
>
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
From ccanonne at stanford.edu Thu Jan 31 17:36:19 2019
From: ccanonne at stanford.edu (Clement Louis Arthur Canonne)
Date: Fri, 1 Feb 2019 01:36:19 +0000
Subject: [theory-seminar] Theory Happy Hour: S02E02 ("Continuing
Resolution")
In-Reply-To: <16850276-6f1c-8dd3-2c84-2f691e93ca57@cs.stanford.edu>
References: <98dcc57e-44e9-e20e-aee3-07a4ee0c1fed@cs.stanford.edu>,
<16850276-6f1c-8dd3-2c84-2f691e93ca57@cs.stanford.edu>
Message-ID: <8a18c475-c7f9-4eed-a07c-a81ccb94dd8c@email.android.com>
Reminder: this was tomorrow yesterday! It's now, now (in 463A).
-- Clément
On Jan 30, 2019 5:08 PM, Clément Canonne wrote:
Reminder: this is tomorrow!
-- Clément
On 1/25/19 10:27 PM, Clément Canonne wrote:
> Hi everyone,
>
> It has been a while! A new year, a new quarter, a government shutdown
> (and reopening), and no happy hour yet... to remedy that, Bruce and I
> are suggesting you block your next Thursday evening, and join us on the
> last day of January to a theory happy hour:
>
> Thursday, January 31st
> Gates 463
> (moving to the AT&T patio if (as it should) weather allows)
> 5:30pm
>
> There will be snacks, there will be drinks, and if you are so inclined
> you can even pester Ofir to follow all that with a jam/music session.
> Just don't tell him I suggested it.
>
> See you on Thursday!
>
> PS: as usual, if you have preferences or restrictions on the
> food/drinks, please send me an email.
>
> -- Clément
>
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
_______________________________________________
theory-seminar mailing list
theory-seminar at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/theory-seminar
-------------- next part --------------
An HTML attachment was scrubbed...
URL: