From bspang at stanford.edu Mon Nov 4 10:25:00 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Mon, 4 Nov 2019 18:25:00 +0000
Subject: [theory-seminar] Two theory seminars this week!
Message-ID:
Hi all
This week we have two theory seminars:
* On Tuesday 11/5 at 2pm in Gates 463A, John Wright will give a talk on "NEEXP in MIP*"
* On Wednesday 11/6 at 3pm in Gates 463A, Roy Frostig will give a talk on "Overfitting the Test Set: Limits and Algorithms for Multiclass Problems"
The abstracts are below. Hope to see you there!
Bruce
NEEXP in MIP*
John Wright
Tuesday, November 5, 2:00 - 3:00pm
A long-standing puzzle in quantum complexity theory is to understand the power of the class MIP* of multiprover interactive proofs with shared entanglement. This question is closely related to the study of entanglement through non-local games, which dates back to the pioneering work of Bell. In this work we show that MIP* contains NEEXP (non-deterministic doubly-exponential time), exponentially improving the prior lower bound of NEXP due to Ito and Vidick. Our result shows that shared entanglement exponentially increases the power of these proof systems, as the class MIP of multiprover interactive proofs without shared entanglement is known to be equal to NEXP.
This is joint work with Anand Natarajan.
Overfitting the Test Set: Limits and Algorithms for Multiclass Problems
Roy Frostig
Wednesday, November 6, 3:00 - 4:00pm
Reusing a held-out dataset in machine learning can lead to overfitting, and can invalidate the dataset. Still, recent studies reveal no evidence of significant overfitting on popular machine learning benchmarks. What prevents a misleading bias from creeping into these results?
Focusing on classification, we show that having many class labels, as in these benchmarks, mitigates the worst-case rate of overfitting. Namely, making k adaptive queries to a dataset of n points with m classes, no algorithm can always achieve a bias beyond O(sqrt(k / (nm))) up to log factors. As for what's achievable, we construct an algorithm to "attack" a dataset that overfits at nearly this rate, and which others later improved to match the upper bound up to log factors.
The algorithm is practically usable, incidentally, and we show the actual bias that it can achieve on the popular ImageNet dataset, starting from a recent state-of-the-art classifier.
Based on work with Vitaly Feldman (Google Research) and Moritz Hardt (UC Berkeley).
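As a back-of-the-envelope illustration of the sqrt(1/(nm)) scale in the abstract (a toy simulation, not the attack algorithm from the talk): the accuracy of a single random labeling of n points with m classes fluctuates around 1/m with standard deviation about sqrt(1/(nm)), so even a naive best-of-k query strategy overfits by an amount of roughly that order. All parameter values below are illustrative.

```python
import random

def best_of_k_bias(n=10000, m=100, k=50, seed=0):
    # Toy simulation: each "query" is a uniformly random labeling of the
    # held-out set; we keep the best observed accuracy. The gap above the
    # chance rate 1/m illustrates the sqrt(1/(n*m)) fluctuation scale.
    rng = random.Random(seed)
    labels = [rng.randrange(m) for _ in range(n)]
    best = 0.0
    for _ in range(k):
        guess = [rng.randrange(m) for _ in range(n)]
        acc = sum(g == y for g, y in zip(guess, labels)) / n
        best = max(best, acc)
    return best - 1.0 / m  # overfitting bias above chance

bias = best_of_k_bias()
```

With these toy parameters the bias is tiny (a fraction of a percent), consistent with the claim that many classes suppress worst-case overfitting.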
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From bspang at stanford.edu Tue Nov 5 13:52:53 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Tue, 5 Nov 2019 21:52:53 +0000
Subject: [theory-seminar] Two theory seminars this week!
In-Reply-To:
References:
Message-ID: <6D241655-785A-4C45-B400-FE086FA0B6FD@stanford.edu>
John's talk will start in ten minutes. Also, due to the unusual time, we will be in Gates 463 (not 463A!)
Bruce
From reingold at stanford.edu Tue Nov 5 18:33:07 2019
From: reingold at stanford.edu (Omer Reingold)
Date: Tue, 5 Nov 2019 18:33:07 -0800
Subject: [theory-seminar] poster session @ next TOCA-SV
In-Reply-To:
References:
Message-ID:
Hello everybody,
This is a reminder that TOCA-SV and the Motwani Colloquium are happening in
10 days (Friday the 15th). Details can be found here:
theorydish.blog/2019/11/05/next-week-toca-sv-motwani-colloquium/
Looking forward to seeing you all there!
Omer (for the organizing committee)
On Mon, Oct 7, 2019 at 6:10 PM Omer Reingold wrote:
> Hello everybody,
>
> The next TOCA-SV meeting at Stanford will be collocated with the next
> Motwani colloquium given by Ronitt Rubinfeld. Please hold November 15th
> for this special meeting.
>
> This year, the meeting will feature a poster session for students from all
> universities from the area (as well as visiting students). The idea is to
> give students and industry theoreticians an opportunity to get to know each
> other (partly with an eye for possible internships). If you would like to
> present such a poster, please send Greg Valiant (gvaliant at cs.stanford.edu) an
> email with a short abstract of the poster you propose as well as a few
> details about yourself by October 25.
>
> Looking forward to seeing as many of you there!
> Omer (for the organizing committee)
>
From nehgupta at stanford.edu Wed Nov 6 09:52:32 2019
From: nehgupta at stanford.edu (Neha Gupta)
Date: Wed, 6 Nov 2019 17:52:32 +0000
Subject: [theory-seminar] Theory Lunch 11/7 - Michael Kim
Message-ID:
Hi everyone,
This Thursday at theory lunch, Michael will tell us about "Evidence-Based Rankings" (abstract below). As usual, please join us from noon - 1pm in Gates 463A.
-----------------------------
Title: Evidence-Based Rankings
Abstract: Many selection procedures involve ordering candidates according to their qualifications. For example, a university might order applicants according to a perceived probability of graduation within four years, and then select the top 1000 applicants. In this work, we address the problem of ranking members of a population according to their "probability" of a positive outcome, based on a training set of historical binary outcome data (e.g., graduated in four years or not). We show how to obtain rankings that satisfy a number of desirable accuracy and fairness criteria, despite the coarseness of the training data. As the task of ranking is global (the rank of every individual depends not only on their own qualifications, but also on every other individual's qualifications), ranking is more subtle and vulnerable to manipulation than standard prediction tasks.
As a step towards mitigating unfair discrimination caused by inaccuracies in rankings, we develop two parallel definitions of *evidence-based rankings*. The first definition relies on a semantic notion of *domination-compatibility*: if the training data suggest that members of a set S are more qualified (on average) than the members of T, then a ranking that favors T over S (where T *dominates* S) is blatantly inconsistent with the evidence, and likely to be discriminatory. The definition asks for domination-compatibility, not just for a single pair of sets, but rather for every pair from a rich collection C of possibly-intersecting subpopulations. The second definition aims at precluding even more general forms of discrimination; this notion of *evidence-consistency* requires that the ranking must be justified on the basis of consistency with the expectations for every set in the collection C. Somewhat surprisingly, while evidence-consistency is a strictly stronger notion than domination-compatibility when the collection C is predefined, the two notions are equivalent when the collection C may depend on the ranking in question.
Joint work with Cynthia Dwork, Omer Reingold, Guy Rothblum, and Gal Yona.
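The domination-compatibility condition can be made concrete with a small sketch. This is my own toy formalization of the semantic notion described above, not the paper's formal definition; the margin parameter and all names are illustrative.

```python
from statistics import mean

def t_dominates_s(ranking, S, T):
    # True if the ranking places every member of T above every member of S
    # (lower rank index = more favored).
    return max(ranking[t] for t in T) < min(ranking[s] for s in S)

def domination_compatible(ranking, outcomes, C, margin=0.0):
    # Toy check: for every pair of sets in the collection C, if the
    # training outcomes suggest S is more qualified on average than T,
    # the ranking must not let T dominate S.
    for S in C:
        for T in C:
            if S is T:
                continue
            if mean(outcomes[s] for s in S) > mean(outcomes[t] for t in T) + margin:
                if t_dominates_s(ranking, S, T):
                    return False
    return True

# Tiny example: individuals 0-3 with binary outcomes; rank 0 is most favored.
outcomes = {0: 1, 1: 1, 2: 0, 3: 0}
C = [{0, 1}, {2, 3}]
good = {0: 0, 1: 1, 2: 2, 3: 3}  # favors the higher-outcome set
bad = {0: 2, 1: 3, 2: 0, 3: 1}   # lets the weaker set dominate
```

Here `good` is domination-compatible with the evidence while `bad` is blatantly inconsistent with it.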
-------------------------------
Thanks,
Neha
From bspang at stanford.edu Wed Nov 6 14:44:39 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Wed, 6 Nov 2019 22:44:39 +0000
Subject: [theory-seminar] Two theory seminars this week!
In-Reply-To:
References:
Message-ID:
Just a reminder that Roy's talk will be starting in 15 minutes!
Bruce
From kabirc at stanford.edu Mon Nov 11 07:58:44 2019
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Mon, 11 Nov 2019 07:58:44 -0800
Subject: [theory-seminar] Fwd: [ee-students-forum] Talk "Towards an
Average-case Complexity of High-dimensional Statistics" by Guy Bresler (Fr,
15-Nov @ 1:15pm)
In-Reply-To:
References:
Message-ID:
Hi All,
This Friday, Guy Bresler will be giving a talk that may be of interest to
some on this list.
Thanks,
Kabir
---------- Forwarded message ---------
From: Joachim Neu
Date: Sun, Nov 10, 2019 at 11:23 PM
Subject: [ee-students-forum] Talk "Towards an Average-case Complexity of
High-dimensional Statistics" by Guy Bresler (Fr, 15-Nov @ 1:15pm)
To: , isl-colloq <isl-colloq at lists.stanford.edu>, ee-students-forum <ee-students-forum at lists.stanford.edu>, cs-students-announce <cs-students-announce at lists.stanford.edu>
JOINT SEMINAR of ISL COLLOQUIUM and IT FORUM
Speaker: Guy Bresler -- Professor, MIT
Title: Towards an Average-case Complexity of High-dimensional
Statistics
When: Friday, 15-Nov-2019, 1:15pm to 2:15pm
Where: Packard 202
Abstract:
The prototypical high-dimensional statistical estimation problem
entails finding a structured signal in noise. These problems have
traditionally been studied in isolation, with researchers aiming to
develop statistically and computationally efficient algorithms, as well
as to try to understand the fundamental limits governing the interplay
between statistical and computational cost. In this talk I will
describe a line of work that yields average-case reductions directly
between a number of central high-dimensional statistics problems,
relating two problems by transforming one into the other. It turns out
that several problems described by robust formulations can be addressed
by one set of techniques, and we will focus on these in the talk. In
this direction, we obtain the following average-case lower bounds based
on the planted clique conjecture: a statistical-computational gap in
robust sparse mean estimation, a detection-recovery gap in community
detection, and a universality principle for computational-statistical
gaps in sparse mixture estimation. In addition to showing strong
computational lower bounds tight against what is achievable by
efficient algorithms, the methodology gives insight into the common
features shared by different high-dimensional statistics problems with
similar computational behavior. Joint work with Matthew Brennan.
Bio:
Guy Bresler is an associate professor in the Department of Electrical
Engineering and Computer Science at MIT, and a member of LIDS and IDSS.
Previously, he was a postdoc at MIT and before that received his PhD
from the Department of EECS at UC Berkeley. His undergraduate degree is
from the University of Illinois at Urbana-Champaign. In the last
several years his research has focused on the interface between
computation and statistics with the aim of understanding the
relationship between combinatorial structure and computational
tractability of high-dimensional inference.
_______________________________________________
ee-students-forum mailing list
ee-students-forum at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/ee-students-forum
From kabirc at stanford.edu Mon Nov 11 08:00:04 2019
From: kabirc at stanford.edu (Kabir Chandrasekher)
Date: Mon, 11 Nov 2019 08:00:04 -0800
Subject: [theory-seminar] Fwd: ISL Colloquium: "Spectral graph matching and
regularized quadratic relaxations", Jiaming Xu (Duke), Thurs. Nov. 14,
4:30pm Packard 101
In-Reply-To:
References:
Message-ID:
Hi All,
This Thursday, Jiaming Xu will be visiting and giving a talk which may be
of interest to some on this list.
Thanks,
Kabir
---------- Forwarded message ---------
From: Kabir Chandrasekher
Date: Mon, Nov 11, 2019 at 7:57 AM
Subject: ISL Colloquium: "Spectral graph matching and regularized quadratic
relaxations", Jiaming Xu (Duke), Thurs. Nov. 14, 4:30pm Packard 101
To: , <information_theory_forum at lists.stanford.edu>, <ee-students-forum at lists.stanford.edu>, <cs-students-announce at lists.stanford.edu>
Cc: jiaming xu
*Title:* Spectral graph matching and regularized quadratic relaxations
*Speaker:* Jiaming Xu (Duke)
*Time & location:* Thursday November 14, 4:30-5:30 pm, Packard 101
Coffee and snacks will be served before the talk at 4pm in the Packard
second floor kitchen
*Abstract:* Given two unlabeled, edge-correlated graphs on the same set of
vertices, we study the "graph matching" problem of identifying the unknown
mapping from vertices of the first graph to those of the second. This
amounts to solving a computationally intractable quadratic assignment
problem. We propose a new spectral method, which computes the
eigendecomposition of the two graph adjacency matrices and returns a
matching based on the pairwise alignments between all eigenvectors of the
first graph with all eigenvectors of the second. Each alignment is
inversely weighted by the distance between the corresponding eigenvalues.
This spectral method can be equivalently viewed as solving a regularized
quadratic programming relaxation of the quadratic assignment problem. We
show that for a correlated Erdos-Renyi model, this method can return the
exact matching with high probability if the two graphs differ by at most a
1/polylog(n) fraction of edges, both for dense graphs and for sparse graphs
with at least polylog(n) average degree. Our analysis exploits local laws
for the resolvents of sparse Wigner matrices. Based on joint work with Zhou
Fan, Cheng Mao, Yihong Wu, all at Yale.
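The pipeline in the abstract (eigendecompose both adjacency matrices, align all pairs of eigenvectors with weights inversely related to the eigenvalue distance, round to a matching) can be sketched as follows. This is an illustrative reimplementation of the idea as described, not the authors' exact algorithm; the bandwidth `eta`, the Cauchy-kernel weighting, and the greedy rounding are my own simplifications.

```python
import numpy as np

def spectral_match(A1, A2, eta=0.2):
    # Eigendecompositions of the two (symmetric) adjacency matrices.
    lam, U = np.linalg.eigh(A1)
    mu, V = np.linalg.eigh(A2)
    n = A1.shape[0]
    ones = np.ones(n)
    # Weight for eigenpair (i, j): decays with the eigenvalue gap, so
    # eigenvectors with close eigenvalues contribute more.
    W = eta / ((lam[:, None] - mu[None, :]) ** 2 + eta ** 2)
    # Combine weighted outer products of aligned eigenvectors into one
    # n x n vertex-similarity matrix.
    coef = W * np.outer(U.T @ ones, V.T @ ones)
    X = U @ coef @ V.T
    # Greedy rounding (a simple stand-in for an optimal assignment solver):
    # repeatedly take the largest remaining similarity entry.
    match = -np.ones(n, dtype=int)
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(X), X.shape)
        match[i] = j
        X[i, :] = -np.inf
        X[:, j] = -np.inf
    return match

# Demo: match a small random graph against a relabeled copy of itself.
rng = np.random.default_rng(0)
n = 12
A = np.triu((rng.random((n, n)) < 0.5).astype(float), 1)
A = A + A.T
perm = rng.permutation(n)
A2 = A[np.ix_(perm, perm)]  # vertex i of A2 is vertex perm[i] of A
match = spectral_match(A, A2)
```

The output is a vertex matching (a permutation of the vertex set); the theory in the abstract concerns when such a matching exactly recovers the hidden correspondence.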
*Bio: *Jiaming Xu is an assistant professor in the Fuqua School of Business
at Duke University which he joined in July 2018. Before that, he was an
assistant professor in the Krannert School of Management at Purdue
University from August 2016 to June 2018, a research fellow with the Simons
Institute for the Theory of Computing, UC Berkeley from January 2016 to
June 2016, and a postdoctoral fellow with the Statistics Department, The
Wharton School, University of Pennsylvania from January 2015 to December
2015. He received the Ph.D. degree from UIUC in 2014 under the supervision
of Prof. Bruce Hajek, the M.S. degree from UT-Austin in 2011, and the B.E.
degree from Tsinghua University in 2009, all in Electrical and Computer
Engineering. His research interests include random graphs, high-dimensional
statistical inference, information theory, convex and non-convex
optimization, and queueing theory.
From bspang at stanford.edu Mon Nov 11 13:33:21 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Mon, 11 Nov 2019 21:33:21 +0000
Subject: [theory-seminar] This week's theory seminar
Message-ID:
Hi all!
This week's theory seminar will be given by Jason Li, talking about "The connectivity threshold for dense graphs." The talk will be on Wednesday, November 13, from 3-4pm.
The abstract is below. Hope to see you there!
Bruce
The connectivity threshold for dense graphs
Jason Li
Consider a random graph model where there is an underlying simple graph $G = (V, E)$, and each edge is sampled independently with probability $p \in [0, 1]$. What is the smallest value of $p$ such that the resulting graph $G_p$ is connected with high probability? This is a well-studied question for special classes of graphs, such as complete graphs and hypercubes. For instance, when $G$ is the complete graph, we want the connectivity threshold for the Erd\H{o}s-R\'enyi $G(n,p)$ model: here the answer is known to be $\frac{\ln n + O(1)}{n}$. However, the problem is not well-understood for more general graph classes.
We first investigate this connectivity threshold problem for ``somewhat dense'' graphs. We show that for any $\delta\ge\widetilde O(\sqrt n)$, any $\delta$-regular, $\delta$-edge-connected graph has connectivity threshold $\frac{\ln n + O(1)}{\delta}$, generalizing the case when $G$ is the complete graph. Our proof also bounds the number of approximate mincuts in such a dense graph, which may be of independent interest.
Next, for a general graph $G$, we define an explicit parameter $\beta_G \in (0,2\ln n]$, based on the number of approximate mincuts, and show that there is a sharp transition in the connectivity of $G_p$ at $p = \beta_G/\lambda$, where $\lambda$ is the size of a minimum cut of $G$. Moreover, we show that the width of this transition is an additive $O(\ln \lambda/\lambda)$ term; this improves upon Margulis' classical result bounding the width of the threshold by $O(1/\sqrt{\lambda})$.
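The complete-graph baseline in the abstract, a connectivity threshold at $p = \frac{\ln n + O(1)}{n}$ for $G(n,p)$, is easy to observe empirically. A minimal simulation sketch (toy parameters of my choosing, not from the talk):

```python
import math
import random

def sample_gnp_connected(n, p, rng):
    # Sample an Erdos-Renyi graph G(n, p) and report whether it is
    # connected, via a depth-first search from vertex 0.
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    seen = {0}
    stack = [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) == n

# Slightly above the ln(n)/n threshold the graph is connected w.h.p.;
# well below it, isolated vertices appear and it disconnects.
rng = random.Random(1)
n = 300
above = sample_gnp_connected(n, 3.0 * math.log(n) / n, rng)
below = sample_gnp_connected(n, 0.2 * math.log(n) / n, rng)
```

At $p = 0.2 \ln n / n$ one expects on the order of $n^{0.8}$ isolated vertices, so the disconnection below the threshold is dramatic, not marginal.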
From moses at cs.stanford.edu Mon Nov 11 17:39:34 2019
From: moses at cs.stanford.edu (Moses Charikar)
Date: Mon, 11 Nov 2019 17:39:34 -0800
Subject: [theory-seminar] Motwani CS Theory Colloquium: Ronitt Rubinfeld
(Nov 15)
Message-ID:
The Motwani Distinguished Lectures are a series of theory colloquia aimed
at a broad audience. The next lecture in the series will be given this
Friday, Nov 15, by Ronitt Rubinfeld (MIT and Tel Aviv University), on Local
Computation Algorithms (abstract below). You should definitely attend if
you can!
The talk is at 4:15pm on Nov 15 in Tresidder Oak Lounge (2nd floor of
Tresidder Union).
There will be a light reception immediately following the talk. This Motwani
Colloquium is part of TOCA-SV, our biannual gathering of CS theoreticians
in academia and industry in the Silicon Valley area. See the full schedule
here:
https://theorydish.blog/2019/11/05/next-week-toca-sv-motwani-colloquium/
Hope to see you there!
Cheers,
Moses
Title: *Local Computation Algorithms*
Ronitt Rubinfeld, MIT and Tel Aviv University
Abstract
Consider a setting in which inputs to and outputs from a computational
problem are so large, that there is not time to read them in their
entirety. However, if one is only interested in small parts of the output
at any given time, is it really necessary to solve the entire computational
problem? Is it even necessary to view the whole input? We survey recent
work in the model of ?local computation algorithms? which for a given
input, supports queries by a user to values of specified bits of a legal
output. The goal is to design local computation algorithms in such a way
that very little of the input needs to be seen in order to determine the
value of any single bit of the output. Though this model describes
sequential computations, techniques from local distributed algorithms have
been extremely important in designing efficient local computation
algorithms. In this talk, we describe results on a variety of problems for
which sublinear time and space local computation algorithms have been
developed; we will give special focus to finding maximal independent sets
and sparse spanning graphs.
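A tiny illustration of the query model (my own toy sketch, not a sublinear-time algorithm from the talk): membership of a single vertex in the greedy maximal independent set can be decided by recursing only on lower-numbered neighbors, so answering one query explores a small part of the graph rather than computing the whole output.

```python
def in_mis(adj, v):
    # Local computation query: is vertex v in the greedy (lowest-ID-first)
    # maximal independent set? v is in iff none of its lower-numbered
    # neighbors is in, so the recursion only ever moves to smaller IDs.
    return all(not in_mis(adj, u) for u in adj[v] if u < v)

# 5-cycle on vertices 0..4: the greedy MIS is {0, 2}
# (1 and 3 are blocked by a smaller neighbor; 4 is blocked by 0).
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
mis = [v for v in adj if in_mis(adj, v)]
```

This naive recursion can recompute answers exponentially often; the talk's point is that careful local algorithms answer such queries while reading only a sublinear portion of the input.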
From nehgupta at stanford.edu Wed Nov 13 10:07:09 2019
From: nehgupta at stanford.edu (Neha Gupta)
Date: Wed, 13 Nov 2019 18:07:09 +0000
Subject: [theory-seminar] Theory Lunch 11/14 - Alexandra Kolla
Message-ID:
Hi everyone,
This Thursday at theory lunch, Alexandra will tell us about "Statistical physics algorithms for Unique Games" (abstract below). As usual, please join us from noon - 1pm in Gates 463A.
-------------------------------
Title: Statistical physics algorithms for Unique Games.
Abstract: In this talk we will discuss how two key techniques stemming from statistical physics can be applied to solve a variant of the notorious Unique Games problem, potentially opening new avenues towards the Unique Games Conjecture. The variant, which we call Count-Unique Games, is a promise problem where in the "yes" case we are guaranteed a certain (relatively small) number of highly satisfiable assignments to the Unique Games instance. We note that in the standard Unique Games problem, the "yes" case only guarantees at least one such good assignment. We exhibit efficient algorithms that approximate a suitable partition function for the Unique Games instance via (i) a zero-free region and polynomial interpolation, and (ii) the cluster expansion, and use them to solve Count-UGC for certain interesting ranges of parameters. We also pose the question of when Count-UGC is equivalent to UGC.
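To make the "Count" promise concrete, here is a brute-force sketch (my own toy formalization, not from the talk) that counts highly satisfiable assignments of a small Unique Games instance. The talk's algorithms approximate such counts via partition functions precisely because this enumeration is exponential in the number of variables.

```python
from itertools import product

def ug_value(assignment, constraints):
    # Fraction of constraints satisfied. A constraint (u, v, pi) requires
    # assignment[u] == pi[assignment[v]], where pi is a permutation of the
    # label set {0, ..., k-1}, given as a tuple.
    sat = sum(assignment[u] == pi[assignment[v]] for u, v, pi in constraints)
    return sat / len(constraints)

def count_good_assignments(n, k, constraints, eps):
    # Count assignments with value >= 1 - eps: the quantity whose promised
    # size distinguishes the Count-Unique Games "yes" case.
    return sum(ug_value(a, constraints) >= 1 - eps
               for a in product(range(k), repeat=n))

# A consistent 3-cycle of identity constraints over k = 2 labels: exactly
# the two constant assignments satisfy every constraint.
ident = (0, 1)
constraints = [(0, 1, ident), (1, 2, ident), (2, 0, ident)]
n_good = count_good_assignments(3, 2, constraints, eps=0.0)
```

On this instance the fully-satisfying count is 2 (all-zeros and all-ones), a tiny fraction of the 8 possible assignments.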
-------------------------------
Thanks,
Neha
From jacobfox at stanford.edu Wed Nov 13 20:36:56 2019
From: jacobfox at stanford.edu (Jacob Fox)
Date: Thu, 14 Nov 2019 04:36:56 +0000
Subject: [theory-seminar] Combinatorics seminar talk tomorrow (Thursday) at
2pm by Omer Reingold in 384-H
In-Reply-To:
References:
Message-ID:
Tomorrow (Thursday) we have a combinatorics seminar talk by Omer Reingold on pseudorandomness.
When: Thursday November 14, 2pm-3pm
Room: 384-H
Speaker: Omer Reingold (Stanford)
Title: Recent Progress on Pseudorandomness for Small Memory
Abstract: One of the most important complexity-theoretic challenges is showing that randomized algorithms are not much more powerful than deterministic algorithms. Specifically, the two main challenges are to show that randomness "cannot save time" and that randomness "cannot save memory".
Our focus here is on the latter. A major tool in this study is pseudorandom distributions that fool small-memory computations, meaning that small-memory computations behave the same when fed these distributions and when fed the uniform distribution. Pseudorandomness for small memory also generalizes many of the most useful pseudorandom objects, such as epsilon-bias distributions and bounded independence.
In this talk I will focus on a recent kind of pseudorandom distribution that has been at the center of major progress in this area. The distributions are based on mild pseudorandom restrictions, meaning that the bits of the distributions are fixed in stages using much more basic pseudorandomness. Since part of the audience is not fluent in the research on pseudorandomness, I will aim for the talk to be as self-contained as possible.
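As a reminder of one of the basic objects the abstract mentions (bounded independence), here is the classic parity construction, standard textbook material rather than anything specific to the talk: from k uniform seed bits one derives 2^k - 1 bits, one per nonempty subset of seed positions, that are uniform and pairwise independent.

```python
from itertools import product

def pairwise_independent_bits(seed_bits):
    # For each nonempty subset T of seed positions (encoded as a bitmask),
    # output the parity of the seed restricted to T. Any two distinct
    # nonzero masks are linearly independent over GF(2), which is what
    # makes the derived bits pairwise independent.
    k = len(seed_bits)
    out = []
    for mask in range(1, 2 ** k):
        parity = 0
        for i in range(k):
            if mask >> i & 1:
                parity ^= seed_bits[i]
        out.append(parity)
    return out

# Enumerate all 2^3 seeds so the distribution can be verified exhaustively.
k = 3
samples = [pairwise_independent_bits(list(s)) for s in product((0, 1), repeat=k)]
n_out = 2 ** k - 1
```

Exhausting the 8 seeds confirms that each of the 7 output bits is unbiased and that every pair of output bits takes each of the four value combinations equally often.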
From moses at cs.stanford.edu Thu Nov 14 10:59:22 2019
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 14 Nov 2019 10:59:22 -0800
Subject: [theory-seminar] Fwd: Simons Institute Research Fellowship
Applications
In-Reply-To:
References:
Message-ID:
For those of you applying for postdoc positions next year, see Peter
Bartlett's message below about opportunities at the Simons Institute.
(deadline for applications is Dec 15)
Cheers,
Moses
---------- Forwarded message ---------
From: Peter Bartlett
Date: Thu, Nov 14, 2019 at 10:54 AM
Subject: Simons Institute Research Fellowship Applications
To:
Hi Moses,
I hope all is well with you.
I'm writing to ask for your help in spreading the word about the Simons
Institute Research Fellowships for the next academic year. The original
call went out a while ago, and the deadline is December 15, 2019, for Fall
2020 and Spring 2021 programs. It would be great if you could make sure
that folks at Stanford are aware of this, as well as other outstanding
people you know. The fellowships are often compatible with other postdocs
or junior faculty positions; the only requirement is that the candidate be
at most six years from PhD by Fall 2020.
Here's the official announcement. The call for applications is at
https://simons.berkeley.edu/fellowship-application-2020-2021.
Thanks,
Peter
==============================================
The Simons Institute for the Theory of Computing at UC Berkeley invites
applications for Research Fellowships for academic year 2020-21.
Simons-Berkeley Research Fellowships are an opportunity for outstanding
junior scientists (at most 6 years from PhD by Fall 2020) to spend one or
both semesters at the Institute in connection with one or more of its
programs. The programs hosting fellows for 2020-21 are as follows:
* Probability, Geometry, and Computation in High Dimensions (Fall 2020)
* Theory of Reinforcement Learning (Fall 2020)
* Satisfiability: Theory, Practice, and Beyond (Spring 2021)
* Theoretical Foundations of Computer Systems (Spring 2021)
Applicants who already hold junior faculty or postdoctoral positions are
welcome to apply. In particular, applicants who hold, or expect to hold,
postdoctoral appointments at other institutions are encouraged to apply to
spend one semester as a Simons-Berkeley Fellow subject to the approval of
the postdoctoral institution.
Further details and application instructions can be found at
https://simons.berkeley.edu/fellows2020. Information about the Institute
and the above programs can be found at http://simons.berkeley.edu.
Deadline for Fall 2020 and/or Spring 2021 applications: December 15, 2019.
--
Peter Bartlett
Associate Director, Simons Institute for the Theory of Computing
Professor, Computer Science and Statistics
From moses at cs.stanford.edu Thu Nov 14 12:07:28 2019
From: moses at cs.stanford.edu (Moses Charikar)
Date: Thu, 14 Nov 2019 12:07:28 -0800
Subject: [theory-seminar] Fwd: Combinatorics seminar talk tomorrow
(Thursday) at 2pm by Omer Reingold in 384-H
In-Reply-To:
References:
Message-ID:
Not sure this was sent to the theory-seminar list. Omer will be speaking at
the Math colloquium on "Pseudorandomness for small memory" this afternoon
at 2pm. Details below.
Cheers,
Moses
---------- Forwarded message ---------
From: Jacob Fox
Date: Wed, Nov 13, 2019 at 8:37 PM
Subject: Combinatorics seminar talk tomorrow (Thursday) at 2pm by Omer
Reingold in 384-H
To: mathcolloq at lists.stanford.edu
Tomorrow (Thursday) we have a combinatorics seminar talk by Omer Reingold
on pseudorandomness.
When: Thursday November 14, 2pm-3pm
Room: 384-H
Speaker: Omer Reingold (Stanford)
Title: Recent Progress on Pseudorandomness for Small Memory
Abstract: One of the most important complexity-theoretic challenges is
showing that randomized algorithms are not much more powerful than
deterministic algorithms. Specifically, the two main challenges are to show
that randomness "cannot save time" and that randomness "cannot save
memory".
Our focus here is on the latter. A major tool in this study is
pseudorandom distributions that fool small-memory computations, meaning
that small-memory computations behave the same when fed these distributions
as when fed the uniform distribution. Pseudorandomness for small memory
also generalizes many of the most useful pseudorandom objects, such as
epsilon-bias distributions and bounded independence.
In this talk I will focus on a recent kind of pseudorandom distribution
that has been at the center of major progress in this area. The
distributions are based on mild pseudorandom restrictions, meaning that the
bits of the distributions are fixed in stages using much more basic
pseudorandomness. Since part of the audience is not fluent in the research
on pseudorandomness, I will aim for the talk to be as self-contained as
possible.
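As background for the abstract (not part of the talk itself), bounded independence is one of the objects these distributions generalize. A minimal sketch of the classical construction of a k-wise independent distribution, evaluating a random polynomial of degree below k over a prime field; the field size and parameters below are purely illustrative:

```python
import random

def kwise_sample(k, p=101, n=10, rng=None):
    """One draw from a k-wise independent distribution over F_p^n.

    Classic construction: pick a uniformly random polynomial of degree < k
    over F_p and evaluate it at n fixed points; any k output coordinates
    are then jointly uniform and independent.
    """
    rng = rng or random.Random()
    coeffs = [rng.randrange(p) for _ in range(k)]  # degree < k

    def poly(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod p
            acc = (acc * x + c) % p
        return acc

    return [poly(i) for i in range(1, n + 1)]

sample = kwise_sample(k=3, p=101, n=10, rng=random.Random(0))
print(len(sample))  # 10 field elements; any 3 coordinates are independent
```

Note this uses n random field elements' worth of output from only k of them of seed, which is exactly the kind of randomness saving the abstract is about.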
From reingold at stanford.edu Thu Nov 14 12:50:17 2019
From: reingold at stanford.edu (Omer Reingold)
Date: Thu, 14 Nov 2019 12:50:17 -0800
Subject: [theory-seminar] poster session @ next TOCA-SV
In-Reply-To:
References:
Message-ID:
This is happening tomorrow. For the details please see
theorydish.blog/2019/11/05/next-week-toca-sv-motwani-colloquium/
Omer
On Tue, Nov 5, 2019 at 6:33 PM Omer Reingold wrote:
> Hello everybody,
>
> This is a reminder that TOCA-SV and the Motwani Colloquium are happening
> in 10 days (Friday the 15th). Details can be found here:
> theorydish.blog/2019/11/05/next-week-toca-sv-motwani-colloquium/
>
> Looking forward to seeing you all there!
> Omer (for the organizing committee)
>
> On Mon, Oct 7, 2019 at 6:10 PM Omer Reingold
> wrote:
>
>> Hello everybody,
>>
>> The next TOCA-SV meeting at Stanford will be collocated with the next
>> Motwani colloquium given by Ronitt Rubinfeld. Please hold November 15th
>> for this special meeting.
>>
>> This year, the meeting will feature a poster session for students from
>> all universities from the area (as well as visiting students). The idea is
>> to give students and industry theoreticians an opportunity to get to know
>> each other (partly with an eye for possible internships). If you would like
>> to give such a talk, please send Greg Valiant (gvaliant at cs.stanford.edu)
>> an email with a short abstract of the poster you propose as well as a few
>> details about yourself by October 25.
>>
>> Looking forward to seeing as many of you there!
>> Omer (for the organizing committee)
>>
>
From moses at cs.stanford.edu Fri Nov 15 10:50:17 2019
From: moses at cs.stanford.edu (Moses Charikar)
Date: Fri, 15 Nov 2019 10:50:17 -0800
Subject: [theory-seminar] theory talks at TOCA-SV today!
Message-ID:
Theory folks,
We have an exciting program for TOCA-SV today -- we just got started. Come
over to Tressider to meet theory friends from the area, listen to the
talks, get lunch, look at our student posters and attend the Motwani
Colloquium later this afternoon. The schedule is here:
https://theorydish.blog/2019/11/05/next-week-toca-sv-motwani-colloquium/
Cheers,
Moses
From dkogan at stanford.edu Mon Nov 18 23:09:42 2019
From: dkogan at stanford.edu (Dima Kogan)
Date: Mon, 18 Nov 2019 23:09:42 -0800
Subject: [theory-seminar] Fwd: 11/20 Hart Montgomery on Building Public Key
Cryptography from Minicrypt Primitives with Structure
In-Reply-To:
References:
Message-ID:
This week's security lunch may be of interest to the wider theory audience,
and especially to people interested in complexity theory.
Best,
Dima
---------- Forwarded message ---------
From: Dima Kogan
Date: Mon, Nov 18, 2019 at 11:06 PM
Subject: 11/20 Hart Montgomery on Building Public Key Cryptography from
Minicrypt Primitives with Structure
To:
This Wednesday at Security Lunch, Hart Montgomery from Fujitsu Labs will
give a talk about "Building Public Key Cryptography from Minicrypt
Primitives with Structure." (See abstract below.)
We will meet, as usual, at Gates 463A, with lunch at noon, and the talk at
12:15pm.
See you there,
Dima
==============================
Algebraic structure lies at the heart of cryptomania, and it has long been
assumed by many in the cryptographic research community that there is some
special relationship between mathematical structure and public key
cryptography. For instance, Barak commented [Bar17] that "... it seems
that you can't throw a rock without hitting a one-way function" but
public-key cryptography is somehow "special." In the same work, Barak
implicitly argues that there is some mathematical structure inherent in
public-key cryptography: "One way to phrase the question we are asking is
to understand what type of structure is needed for public-key
cryptography." The natural question is the following: can we formalize
the relationship between mathematical structure and public key cryptography?
It turns out we can! In this talk, I'll explain how we can build many of
the most common primitives in cryptomania from simple minicrypt primitives
with structure. As an example, suppose we start with a standard weak PRF
F. If we make F input-homomorphic, that is, F(k, x) + F(k, y) = F(k, x +
y), then it turns out we can use F to build many exciting cryptomania
constructions, like identity-based encryption, lossy trapdoor functions,
and more. We can generalize this approach further, both to simpler
primitives like one-way functions and unpredictable functions and to
other structural relations, like ring homomorphisms and key homomorphisms.
This generalization enables us to construct virtually all of the most
common public key cryptosystems from structured minicrypt primitives,
including everything from simple key exchange to indistinguishability
obfuscation.
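As a toy illustration of the input-homomorphic property from the abstract, and only that property: the linear map below is trivially not a secure weak PRF, and any real construction would look quite different. The modulus and vector length are invented for the sketch.

```python
import random

Q = 2_147_483_647  # an illustrative prime modulus

def F(k, x):
    """Toy *insecure* keyed function with the input homomorphism from the
    abstract: F(k, x) + F(k, y) = F(k, x + y) (mod Q). It is just an inner
    product, hence linear in the input; that linearity is exactly the
    "structure" the talk is about, but it also destroys pseudorandomness."""
    return sum(ki * xi for ki, xi in zip(k, x)) % Q

rng = random.Random(1)
k = [rng.randrange(Q) for _ in range(8)]   # key vector
x = [rng.randrange(Q) for _ in range(8)]   # two inputs
y = [rng.randrange(Q) for _ in range(8)]
x_plus_y = [(a + b) % Q for a, b in zip(x, y)]

# The homomorphism F(k, x) + F(k, y) = F(k, x + y) holds identically.
assert (F(k, x) + F(k, y)) % Q == F(k, x_plus_y)
print("input homomorphism holds")
```

The point of the talk is that if one instead assumes a function with this interface that *is* a secure weak PRF, the homomorphism alone already suffices to build the listed cryptomania primitives.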
Our results provide some interesting new ways to look at theoretical
cryptographic research. Building new cryptosystems from structured
primitives allows us to instantiate the cryptosystem from many different
assumptions "for free," and showing that new assumptions imply certain
structured primitives typically provides many cryptographic primitives from
the assumption, also "for free," perhaps streamlining some areas of
research. In addition, we can argue for the existence of a "cryptoplexity
hierarchy" that classifies cryptosystems based on increasing mathematical
structure, effectively modeling much of what we know about the black-box
power of cryptosystems.
This talk is based on three recent papers with a subset of Navid Alamati,
Sikhar Patranabis, and Arnab Roy.
From aviad at cs.stanford.edu Tue Nov 19 10:48:22 2019
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Tue, 19 Nov 2019 10:48:22 -0800
Subject: [theory-seminar] Tomorrow's theory seminar + meet with Kshitij
Message-ID:
Hi theorists,
Kshitij Gajjar (cc'ed) will be presenting in tomorrow's theory seminar
(3-4, see below for title+abstract).
He will also be on campus before the seminar and will be happy to meet
(please email Kshitij directly to coordinate).
BTW, this week we'll also have a special seminar on Friday (Greg Bodwin).
Cheers,
Aviad
*Title:* Parametric Shortest Paths in Planar Graphs
*Abstract:* Suppose you have a road network with n traffic signals such
that the amount of traffic on each road varies as a linear function of
time. Then the shortest path from a source s to a destination t might be
different at different points of time. It is well known (Gusfield, 1980)
that the number of different shortest s-t paths is at most n^O(log n).
In this talk, we will show that there exists a planar road network for
which the number of different shortest s-t paths is at least n^Ω(log n),
refuting a conjecture of Nikolova (2009). We may examine some interesting
generalizations of this result, if time permits. This is based on joint
work with Jaikumar Radhakrishnan.
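To make the setup concrete, here is a toy sketch (not from the talk; the four-node graph and its weights are invented for illustration) that brute-forces how the shortest s-t path changes as edge weights vary linearly with time:

```python
# Tiny directed graph; each edge has weight a + b*t, linear in time t.
edges = {
    ('s', 'u'): (0.0, 1.0),  # weight t
    ('u', 't'): (0.0, 1.0),  # weight t      -> path s-u-t costs 2t
    ('s', 'v'): (1.0, 0.0),  # weight 1
    ('v', 't'): (1.0, 0.0),  # weight 1      -> path s-v-t costs 2
    ('u', 'v'): (0.5, 0.0),  # weight 0.5    -> path s-u-v-t costs t + 1.5
}

adj = {}
for (a, b) in edges:
    adj.setdefault(a, []).append(b)

def simple_paths(src, dst, adj, path=None):
    """Enumerate simple src->dst paths by DFS (fine for 4 nodes)."""
    path = path or [src]
    if src == dst:
        yield tuple(path)
        return
    for nxt in adj.get(src, []):
        if nxt not in path:
            yield from simple_paths(nxt, dst, adj, path + [nxt])

def weight(path, t):
    return sum(edges[(path[i], path[i + 1])][0]
               + edges[(path[i], path[i + 1])][1] * t
               for i in range(len(path) - 1))

paths = list(simple_paths('s', 't', adj))
# Sweep t and record which path is shortest at each sampled time.
winners = {min(paths, key=lambda p: weight(p, t))
           for t in [i * 0.4 for i in range(8)]}
print(len(winners))  # 2: s-u-t wins for small t, s-v-t for large t
```

Here only 2 of the 3 s-t paths are ever shortest; the result in the talk constructs planar networks where the count of such "winners" grows like n^Ω(log n).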
From bspang at stanford.edu Wed Nov 20 13:48:53 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Wed, 20 Nov 2019 21:48:53 +0000
Subject: [theory-seminar] Tomorrow's theory seminar + meet with Kshitij
In-Reply-To:
References:
Message-ID:
A reminder that this is happening in about an hour. Hope to see you there!
On Nov 19, 2019, at 10:48 AM, Aviad Rubinstein > wrote:
Hi theorists,
Kshitij Gajjar (cc'ed) will be presenting in tomorrow's theory seminar (3-4, see below for title+abstract).
He will also be on campus before the seminar and will be happy to meet (please email Kshitij directly to coordinate).
BTW, this week we'll also have a special seminar on Friday (Greg Bodwin).
Cheers,
Aviad
Title: Parametric Shortest Paths in Planar Graphs
Abstract: Suppose you have a road network with n traffic signals such that the amount of traffic on each road varies as a linear function of time. Then the shortest path from a source s to a destination t might be different at different points of time. It is well known (Gusfield, 1980) that the number of different shortest s-t paths is at most n^O(log n).
In this talk, we will show that there exists a planar road network for which the number of different shortest s-t paths is at least n^Ω(log n), refuting a conjecture of Nikolova (2009). We may examine some interesting generalizations of this result, if time permits. This is based on joint work with Jaikumar Radhakrishnan.
_______________________________________________
theory-seminar mailing list
theory-seminar at lists.stanford.edu
https://mailman.stanford.edu/mailman/listinfo/theory-seminar
From bspang at stanford.edu Wed Nov 20 14:12:43 2019
From: bspang at stanford.edu (Bruce Spang)
Date: Wed, 20 Nov 2019 22:12:43 +0000
Subject: [theory-seminar] This Friday's Theory Seminar: Greg Bodwin on
"Matrix Decompositions and Sparse Graph Regularity"
Message-ID: <2E0CBFF5-F0E2-4161-B293-595D6C3ABFCE@stanford.edu>
Hi all!
This Friday, November 22nd, we will have a special bonus theory seminar, featuring Greg Bodwin presenting his work "Matrix Decompositions and Sparse Graph Regularity." It will be from 3-4pm in Gates 463A.
The title and abstract are below. Hope to see you there!
Bruce
Matrix Decompositions and Sparse Graph Regularity
Greg Bodwin
A common task in computer science and math is to approximate a complicated matrix with a simple one. Examples include low-rank approximation, cut approximation, CUR approximation, and others. We will survey some of these methods, and then we will give a new generalized matrix approximation theorem that captures all of these as special cases. Underlying our approximation theorem is a new matrix decomposition that we call the "Projection Value Decomposition (PVD)," which extends many important properties of the SVD into an arbitrary nonlinear domain.
In the second part, we will survey the area of sparse graph regularity, which tries to extend results like the famous Szemeredi Regularity Lemma to graphs on a subquadratic number of edges. We will then discuss an application of the PVD, which gives a generalized sparse regularity lemma that captures and strengthens several important sparse regularity lemmas in prior work.
Joint work with Santosh Vempala
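The PVD itself is new and not detailed in the abstract, but the low-rank approximation it generalizes is classical. As background, a minimal NumPy sketch of rank-k approximation via the truncated SVD (the Eckart-Young theorem); the matrix dimensions and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))  # a "complicated" matrix to approximate

# Best rank-k approximation in Frobenius norm: truncate the SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

assert np.linalg.matrix_rank(A_k) == k
# Eckart-Young: the Frobenius error equals the root-sum-of-squares of the
# discarded singular values.
err = np.linalg.norm(A - A_k)
assert np.isclose(err, np.sqrt((s[k:] ** 2).sum()))
print(round(err, 3))
```

Cut approximation and CUR fit a similar template (approximate A by a structurally simple matrix under some norm), which is the family of problems the talk's generalized approximation theorem is meant to capture.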
From nehgupta at stanford.edu Wed Nov 20 15:32:23 2019
From: nehgupta at stanford.edu (Neha Gupta)
Date: Wed, 20 Nov 2019 23:32:23 +0000
Subject: [theory-seminar] Theory Lunch 11/21 - Kenji Kawaguchi
Message-ID:
Hi everyone,
This Thursday at theory lunch, Kenji will tell us about "Optimization Landscapes in Deep Learning" (abstract below). As usual, please join us from noon - 1pm in Gates 463A.
-------------------------------
Title: Optimization Landscapes in Deep Learning
Abstract: Deep learning has provided high-impact data-driven methods in various applications. However, theoretical guarantees in deep learning tend to provide too pessimistic insights with a gap from practical observations, often because of hidden special properties. Identifying such special properties can provide novel theoretical insights, and is potentially helpful for understanding and designing practical methods. In this talk, I will discuss special properties on nonconvex optimization landscapes of deep neural networks, as well as their implications on gradient descent methods and a few results on real-world applications.
Bio: Kenji Kawaguchi is a Ph.D. candidate at Massachusetts Institute of Technology (MIT), advised by Prof. Leslie Pack Kaelbling. He received his M.S. in Electrical Engineering and Computer Science from MIT. His research interests span deep learning, machine learning, artificial intelligence, nonconvex optimization and Bayesian optimization. His research has been cited widely in academia and used in classes at various universities. He was invited to speak at the 2019 International Congress on Industrial and Applied Mathematics Minisymposium on "Theoretical Foundations of Deep Learning". In 2018, he was invited for a summer research visit at Microsoft Research in Redmond. He was awarded the Funai Overseas Scholarship in 2014 and was selected for the Nakajima Foundation Fellowship in 2013.
-------------------------------
Thanks,
Neha
From aviad at cs.stanford.edu Wed Nov 27 08:55:24 2019
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Wed, 27 Nov 2019 08:55:24 -0800
Subject: [theory-seminar] Fwd: Highlights of Algorithms 2020 --- Call for
Nominations
In-Reply-To:
References:
Message-ID:
---------- Forwarded message ---------
From: Mohsen Ghaffari
Date: Wed, Nov 27, 2019, 6:58 AM
Subject: Highlights of Algorithms 2020 --- Call for Nominations
To: Ghaffari Mohsen
Cc: Yossi Azar
Dear friends,
The 5th iteration of HALG (Highlights of Algorithms 2020) will take
place on June 3-5 at ETH Zurich and is now seeking nominations for
invited talks. I would appreciate it if you can forward this call to
your internal TCS mailing lists.
Best,
- Mohsen
---
Call for Invited Talk Nominations
5th Highlights of Algorithms conference (HALG 2020)
ETH Zurich, June 3-5, 2020
http://2020.highlightsofalgorithms.org/
The HALG 2020 conference seeks high-quality nominations for invited
talks that will highlight recent advances in algorithmic research.
Similarly to previous years, there are two categories of invited
talks:
A. survey (60 minutes): a survey of an algorithmic topic that has seen
exciting developments in the last couple of years.
B. paper (30 minutes): a significant algorithmic result appearing in a
paper in 2019 or later.
To nominate, please email halg2020.nominations at gmail.com the following
information:
Basic details: speaker name + topic (for a survey talk), or paper's
title, authors, conference/arXiv + preferred speaker (for a paper
talk).
Brief justification: Focus on the benefits to the audience, e.g.,
quality of results, importance/relevance of topic, clarity of talk,
speaker's presentation skills.
All nominations will be reviewed by the Program Committee (PC) to
select speakers that will be invited to the conference.
Nominations deadline: December 20, 2019 (for full consideration).