From ofirgeri at stanford.edu Fri Mar 1 10:58:34 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 1 Mar 2019 18:58:34 +0000
Subject: [theory-seminar] Theory Seminar (3/1): Rachel Cummings
(Georgia Tech)
In-Reply-To:
References:
Message-ID:
Reminder: Rachel's talk is today at 3pm.
________________________________
From: theory-seminar on behalf of Ofir Geri
Sent: Monday, February 25, 2019 11:14:23 PM
To: thseminar at cs.stanford.edu
Subject: [theory-seminar] Theory Seminar (3/1): Rachel Cummings (Georgia Tech)
Hi all,
This Friday, Rachel Cummings (Georgia Tech) will give a theory seminar talk on Algorithmic Price Discrimination (see abstract below). The talk will be, as usual, at 3:00 PM in Gates 463A.
If you'd like to meet with the speaker, please email Mary at marykw at stanford.edu
Hope to see you there!
Ofir
Algorithmic Price Discrimination
Speaker: Rachel Cummings (Georgia Tech)
We consider a generalization of the third-degree price discrimination problem studied in Bergemann et al. 2015, where an intermediary between the buyer and the seller can design market segments to maximize any linear combination of consumer surplus and seller revenue. Unlike in Bergemann et al. 2015, we assume that the intermediary has only partial information about the buyer's value. We consider three models of information, in increasing order of difficulty. In the first model, we assume that the intermediary's information allows him to construct a probability distribution over the buyer's value. Next, we consider the sample complexity model, where the intermediary only sees samples from this distribution. Finally, we consider a bandit online learning model, where the intermediary can only observe the buyer's past purchasing decisions, rather than her exact value. For each of these models, we present algorithms that compute optimal or near-optimal market segmentations.
(joint work with Nikhil Devanur, Zhiyi Huang, and Xiangning Wang)
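For a concrete feel for the full-information model, here is a toy calculation (function names and numbers are illustrative, not from the paper): a seller facing a known discrete value distribution posts the revenue-maximizing price, and an intermediary who segments the market perfectly shifts the surplus/revenue split.

```python
def monopoly_price(values, probs):
    # Seller's revenue-maximizing posted price: a buyer with value v
    # purchases iff v >= price, so revenue(p) = p * Pr[v >= p].
    return max(values,
               key=lambda p: p * sum(q for v, q in zip(values, probs) if v >= p))

def surplus_and_revenue(values, probs, price):
    # Consumer surplus and seller revenue at a given posted price.
    cs = sum((v - price) * q for v, q in zip(values, probs) if v >= price)
    rev = price * sum(q for v, q in zip(values, probs) if v >= price)
    return cs, rev

# Two buyer types, values 1 and 2, each with probability 1/2.
values, probs = [1, 2], [0.5, 0.5]
p = monopoly_price(values, probs)
pooled = surplus_and_revenue(values, probs, p)   # price 1: CS 0.5, revenue 1.0

# An intermediary that segments each type separately lets the seller
# charge each type its exact value: revenue 1.5, consumer surplus 0.
segmented_rev = sum(v * q for v, q in zip(values, probs))
```

Different linear combinations of the two objectives are maximized by different segmentations, which is exactly the design space the talk studies.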
From rad at cs.stanford.edu Fri Mar 1 12:00:56 2019
From: rad at cs.stanford.edu (Rad Niazadeh)
Date: Fri, 1 Mar 2019 12:00:56 -0800
Subject: [theory-seminar] PhD student meeting with Rachel Cummings
Message-ID:
Hi everyone (especially PhD students),
We have Rachel Cummings (from Georgia Tech ISyE) visiting us today and
giving the theory seminar at 3pm. I was wondering if we could arrange a
group meeting between Rachel and our PhD students. Rachel works on machine
learning, privacy, and algorithmic game theory; she is amazing, and I bet
you will find plenty of common ground to talk about (you might even find
opportunities for future collaboration). It would be fantastic if you could
write your name in this spreadsheet. The time slot for this meeting is
today, 4:30-5:00 pm (I imagine it could extend to 5:30 pm).
https://docs.google.com/document/d/1mJJ8fMfy0VLrEJ2K89d5Q-vdqVi1Tz2KKeNOXu-xdGA/edit
If you have any questions or concerns, please let me know.
Cheers,
Rad
--
Rad Niazadeh,
Postdoctoral Scholar,
Computer Science Department, Stanford University,
484 Gates, 353 Serra Mall, Stanford, CA 94305.
From ofirgeri at stanford.edu Fri Mar 1 17:22:53 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Sat, 2 Mar 2019 01:22:53 +0000
Subject: [theory-seminar] Theory Seminar (3/1): Rachel Cummings
(Georgia Tech)
In-Reply-To:
References: ,
Message-ID:
Someone forgot a black coat in the seminar room - it's still there in case you are looking for it.
From ccanonne at cs.stanford.edu Tue Mar 5 09:30:17 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Tue, 5 Mar 2019 09:30:17 -0800
Subject: [theory-seminar] TCS+ talk: Wednesday, March 6,
Shayan Oveis Gharan, U Washington
In-Reply-To: <2126002d-242d-0aac-8bb0-2bbfb3e212dd@cs.stanford.edu>
References: <2126002d-242d-0aac-8bb0-2bbfb3e212dd@cs.stanford.edu>
Message-ID:
Reminder: this is tomorrow morning, 10am!
-- Clément
On 2/28/19 8:24 PM, Clément Canonne wrote:
> Hi Everyone,
>
> As mentioned at the theory lunch today, there will be a TCS+ talk (A
> projected speaker! A talk from afar!) this coming
>
>     Wednesday, at *10am*
>
> (as usual: with breakfast at 9:55am). Come and listen to Shayan Oveis
> Gharan talk about his recent work with (among others) our very own Nima
> Anari.
>
> See you next Wednesday,
>
> -- Clément
>
> -------------------------------
> Speaker: Shayan Oveis Gharan, University of Washington
> Title: Strongly log concave polynomials, high dimensional simplicial
> complexes, and an FPRAS for counting Bases of Matroids
>
> Abstract: A matroid is an abstract combinatorial object which
> generalizes the notions of spanning trees, and linearly independent sets
> of vectors. I will talk about an efficient algorithm based on the Markov
> Chain Monte Carlo technique to approximately count the number of bases
> of any given matroid.
>
> The proof is based on a new connection between high dimensional
> simplicial complexes and a new class of multivariate polynomials called
> completely log-concave polynomials. In particular, we exploit a
> fundamental fact from our previous work that the bases generating
> polynomial of any given matroid is a log-concave function over the
> positive orthant.
>
> Based on joint works with Nima Anari, Kuikui Liu, and Cynthia Vinzant.
>
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
From wyma at stanford.edu Tue Mar 5 11:21:06 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Tue, 5 Mar 2019 19:21:06 +0000
Subject: [theory-seminar] Theory Lunch 3/7 -- Shweta Jain
Message-ID:
Hi everyone,
This Thursday at theory lunch, Shweta Jain from UCSC will tell us about "Approximating the degree distribution using sublinear graph samples." (See abstract below.)
As usual, please join us from noon to 1pm at 463A.
----------------------------------------------------------
Approximating the degree distribution using sublinear graph samples
Speaker: Shweta Jain (UCSC)
The degree distribution is one of the most fundamental properties used in the analysis of massive graphs. There is a large literature on graph sampling, where the goal is to estimate properties (especially the degree distribution) of a large graph through a small, random sample. Estimating the degree distribution of real-world graphs poses a significant challenge due to the distribution's heavy-tailed nature and the large variance in degrees.
We present a new algorithm, SADDLES, for this problem, using recent mathematical techniques from the field of sublinear algorithms. The SADDLES algorithm gives provably accurate outputs for all values of the degree distribution. For the analysis, we define two fatness measures of the degree distribution, called the h-index and the z-index. We prove that SADDLES is sublinear in the graph size when these indices are large. A corollary of this result is a provably sublinear algorithm for any degree distribution bounded below by a power law.
We deploy our new algorithm on a variety of real datasets and demonstrate its excellent empirical behavior. In all instances, we get extremely accurate approximations for all values in the degree distribution by observing at most 1% of the vertices. This is a major improvement over the state-of-the-art sampling algorithms, which typically sample more than 10% of the vertices to give comparable results. We also observe that the h and z-indices of real graphs are large, validating our theoretical analysis.
Joint work with Talya Eden, Ali Pinar, Dana Ron and C. Seshadhri.
Paper can be found at: https://arxiv.org/pdf/1710.08607.pdf
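SADDLES itself is more sophisticated, but the baseline it improves on, estimating the complementary CDF of the degree distribution from a uniform vertex sample, is easy to sketch. Everything below (names, toy data) is illustrative and not from the paper:

```python
import random

def sampled_ccdf(degrees, sample_size, thresholds, seed=0):
    # Estimate N(d) = #{v : deg(v) >= d} from a uniform sample of
    # vertices. The heavy tail makes rare high-degree values the hard
    # part: a small sample may miss them entirely.
    rng = random.Random(seed)
    n = len(degrees)
    sample = [degrees[rng.randrange(n)] for _ in range(sample_size)]
    return {d: n * sum(1 for s in sample if s >= d) / sample_size
            for d in thresholds}

# Toy heavy-tailed degree sequence: 900 low, 90 medium, 10 high.
degrees = [1] * 900 + [10] * 90 + [100] * 10
est = sampled_ccdf(degrees, sample_size=200, thresholds=[1, 10, 100])
```

With only 200 of 1000 vertices sampled, the estimate at threshold 1 is exact, while the 10 high-degree vertices may be badly under- or over-counted, which is precisely the regime the talk's algorithm targets.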
----------------------------------------------------------
Best,
Anna
From ccanonne at cs.stanford.edu Sun Mar 10 19:11:03 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Sun, 10 Mar 2019 19:11:03 -0700
Subject: [theory-seminar] Theory Happy Hour: S02E03 ("Hap-Pi Hour")
Message-ID:
Hi everyone,
As chrono-entomologists say, "time flies!" It's already been more than a
month since our last happy hour, and that feels way too long.
This is why, next Thursday, we will hold a *happy hour*: a chance to
socialize with the theory group while (i) acknowledging, I reckon, Pi Day;
(ii) commemorating the birth of Alexey Pajitnov, the father of Tetris;
(iii) realizing that Saint Patrick's Day is almost upon us.
Thursday, March 14th
Gates 463
(moving to the AT&T patio if (as one hopes) weather allows)
6:00pm
There will be snacks (and pies), there will be drinks (including Irish
beers), so come along with your favorite green hat or
rotating-brick-based contraption!
See you on Thursday!
PS: as usual, if you have preferences or restrictions on the
food/drinks, please send me an email.
-- Clément
From moses at cs.stanford.edu Mon Mar 11 13:25:30 2019
From: moses at cs.stanford.edu (Moses Charikar)
Date: Mon, 11 Mar 2019 13:25:30 -0700
Subject: [theory-seminar] Fwd: TODAY - CS Seminar - Nima Anari,
Simons Institute and Stanford University, Monday, March 11, 3-4pm, Gates 104
In-Reply-To: <9AB7670B-7710-4749-A154-194F8BCB73D4@stanford.edu>
References: <9AB7670B-7710-4749-A154-194F8BCB73D4@stanford.edu>
Message-ID:
Hi folks,
Nima's talk today 3-4pm might be of interest to many of you. Details below.
Cheers,
Moses
---------- Forwarded message ---------
From: Laura Kenny-Carlson
Date: Mon, Mar 11, 2019 at 8:43 AM
Subject: TODAY - CS Seminar - Nima Anari, Simons Institute and Stanford
University, Monday, March 11, 3-4pm, Gates 104
CS Seminar
Nima Anari
Simons Institute and Stanford University
Monday, March 11, 3-4pm, Gates 104
Algorithmic Convexity in the Discrete World
Abstract:
A central question in randomized algorithm design is: which distributions
can be efficiently sampled? On the continuous side, uniform distributions
over convex sets and, more generally, log-concave distributions constitute
the main tractable class. We will build a parallel theory on the discrete
side that yields tractability for a large class of discrete distributions.
We will use this theory to resolve long-standing open problems: sampling
from matroids, generalized determinantal point processes, and the random
cluster model in the negative dependence regime.
The hammer enabling these algorithmic advances is the introduction and
study of a class of polynomials that we call completely log-concave.
Sampling from discrete distributions becomes equivalent to approximately
evaluating associated multivariate polynomials, and we will see how very
simple and easy-to-implement random walks can perform both tasks.
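As an illustration of how simple these random walks are, here is a sketch of the down-up (basis-exchange) walk specialized to spanning trees, the most familiar matroid; the code and the toy graph are my own, not Anari et al.'s implementation:

```python
import random

def is_spanning_tree(n, edges):
    # n-1 edges that never close a cycle, checked via union-find.
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def down_up_step(n, all_edges, tree, rng):
    # "Down": drop a uniformly random edge of the current tree.
    # "Up": add a uniformly random edge that restores a spanning tree
    # (the dropped edge is itself a candidate, making the walk lazy).
    smaller = set(tree) - {rng.choice(sorted(tree))}
    options = [e for e in all_edges if is_spanning_tree(n, smaller | {e})]
    return frozenset(smaller | {rng.choice(options)})

# Walk over the 16 spanning trees of the complete graph K4.
n = 4
all_edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
rng = random.Random(0)
tree = frozenset([(0, 1), (0, 2), (0, 3)])
seen = set()
for _ in range(500):
    tree = down_up_step(n, all_edges, tree, rng)
    seen.add(tree)
```

The talk's contribution is showing (via completely log-concave polynomials) that this kind of walk mixes rapidly on the bases of any matroid, not just graphic ones.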
Bio:
Nima Anari is a Microsoft Research Fellow at the Simons Institute for the
Theory of Computing and a research engineer in the Computer Science
Department at Stanford University. He obtained his Ph.D. in Computer
Science from UC Berkeley, advised by Satish Rao, and was subsequently a
postdoctoral scholar in the MS&E department at Stanford University.
From ofirgeri at stanford.edu Mon Mar 11 14:50:38 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Mon, 11 Mar 2019 21:50:38 +0000
Subject: [theory-seminar] Two Theory Seminars This Week: Nic Resch (3/14)
and Seth Neel (3/15)
Message-ID:
Hi all,
This week we will have two theory seminars: Nic Resch (CMU) on Thursday and Seth Neel (UPenn) on Friday (see abstracts below). Both talks will be in Gates 463A at 3:00 PM.
Hope to see you there!
Ofir
Lossless dimension expanders via linearized polynomials and subspace designs (Thursday 3/14)
Speaker: Nic Resch (CMU)
For a vector space F^n over a field F, an (η, β)-dimension expander of degree d is a collection of d linear maps Γ_j : F^n → F^n such that for every subspace U of F^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d = O(1) maps Γ_j offers excellent "lossless" expansion with high probability: β ≈ d for d ≥ Ω(1/η). When it comes to a family of explicit constructions (for growing n), however, achieving even expansion factor β = 1 + ε with constant degree is a non-trivial goal.
We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list decoding in the rank metric. Our approach yields the following: (i) lossless expansion over large fields; more precisely, β ≥ (1-ε)d and η ≥ (1-ε)/d with d = O_ε(1), when |F| ≥ Ω(n); (ii) expansion that is optimal up to constant factors over fields of arbitrarily small polynomial size; more precisely, β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |F| ≥ n^δ.
Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1+Ω(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with β ≥ Ω(√d) over large finite fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
Based on joint work with Venkatesan Guruswami and Chaoping Xing.
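To make the definition concrete, here is a small numerical check of the "random maps expand" phenomenon over F = F_2, with vectors packed into Python ints; this is an illustrative sketch of the definition only, not the paper's explicit construction:

```python
import random

def rank_gf2(rows):
    # Gaussian elimination over GF(2); each row is a bitmask int.
    pivots = {}          # highest set bit -> reduced row
    for v in rows:
        while v:
            h = v.bit_length() - 1
            if h in pivots:
                v ^= pivots[h]
            else:
                pivots[h] = v
                break
    return len(pivots)

def apply_map(M, v):
    # Image of bitmask vector v under the linear map sending e_i to M[i].
    out, i = 0, 0
    while v:
        if v & 1:
            out ^= M[i]
        v >>= 1
        i += 1
    return out

def expansion_ratio(maps, U):
    # dim(span of all Gamma_j(U)) / dim(span of U), over GF(2).
    images = [apply_map(M, u) for M in maps for u in U]
    return rank_gf2(images) / rank_gf2(U)

# d random maps on F_2^n applied to a low-dimensional subspace U:
# with high probability their images together have dimension ~ d*dim(U).
rng = random.Random(1)
n, d, k = 24, 4, 3
maps = [[rng.getrandbits(n) for _ in range(n)] for _ in range(d)]
U = [rng.getrandbits(n) | 1 for _ in range(k)]   # force nonzero vectors
beta = expansion_ratio(maps, U)                  # at most d, typically ~ d
```

The hard part, which the talk addresses, is matching this random-maps behavior with an explicit construction.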
Oracle Efficiency in Differential Privacy (Friday 3/15)
Speaker: Seth Neel (UPenn)
We develop theory for using heuristics to solve computationally hard problems in differential privacy. Heuristic approaches have enjoyed tremendous success in machine learning, where performance can be evaluated empirically. However, privacy guarantees cannot be evaluated empirically, and must be proven without making heuristic assumptions. We show that learning problems over broad classes of functions can be solved privately and efficiently, assuming the existence of a non-private oracle for solving the same problem. Our first algorithm yields a privacy guarantee that is contingent on the correctness of the oracle. We then give a reduction that applies to a class of heuristics which we call certifiable, allowing us to convert oracle-dependent privacy guarantees into worst-case privacy guarantees that hold even when the heuristic standing in for the oracle might fail in adversarial ways. Finally, we consider a broad class of functions that includes most classes of simple Boolean functions studied in the PAC learning literature, including conjunctions, disjunctions, parities, and discrete halfspaces. We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle. This in particular gives the first oracle-efficient algorithm for privately generating synthetic data for contingency tables. The most intriguing question left open by our work is whether every problem that can be solved differentially privately can be privately solved with an oracle-efficient algorithm. While we do not resolve this, we give a barrier result suggesting that any generic oracle-efficient reduction must fall outside a natural class of algorithms (which includes the algorithms given in this paper).
From aviad at cs.stanford.edu Mon Mar 11 22:23:36 2019
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Mon, 11 Mar 2019 22:23:36 -0700
Subject: [theory-seminar] Wanted: theory webmaster
Message-ID:
Dear theory students,
Our theory website needs your
help!
(And this means that we all need your help...)
We are looking for 1-2 courageous student volunteers to take over.
Thanks!
Aviad
From ofirgeri at stanford.edu Tue Mar 12 10:14:43 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 12 Mar 2019 17:14:43 +0000
Subject: [theory-seminar] Two Theory Seminars This Week: Nic Resch
(3/14) and Seth Neel (3/15)
In-Reply-To:
References:
Message-ID:
There's a change of time: Nic's talk on Thursday will be at 4:00 PM. (Seth's talk on Friday remains at 3.)
Hope to see you there!
Ofir
From wyma at stanford.edu Tue Mar 12 12:02:19 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Tue, 12 Mar 2019 19:02:19 +0000
Subject: [theory-seminar] Theory Lunch 3/14 -- Fereshte Khani
Message-ID:
Hi everyone,
We will have our last theory lunch of the quarter this Thursday. Fereshte will tell us about "Fairness via Loss Variance Regularization." (See abstract below.) As usual, please join us from noon to 1pm at 463A.
As the weather gets warmer, it is also time to start thinking about theory lunch in spring! You are encouraged to sign up at the following link: https://docs.google.com/document/d/1S0QcDMTn-JRaP1cRFRihZyeNfHUmpYLkyORKMcGIBgY/edit?usp=sharing
----------------------------------------------------------
Fairness via Loss Variance Regularization
Speaker: Fereshte Khani
Statistical notions of fairness such as equalized odds control the discrepancy between the average loss (e.g., false positive rate) of particular groups based on sensitive attributes (e.g., race and gender) and that of the entire population. In this paper, we consider the setting where the sensitive attributes are unavailable and we want to protect all possible groups in the population. Unfortunately, it is impossible to directly control the discrepancy of the loss of all groups; we instead control a weighted discrepancy. We show that loss variance, a quantity typically used as a regularizer to improve test performance, is a tight approximation to the weighted loss discrepancy when the discrepancy of the losses is weighted by the square root of the group size. We propose loss variance regularization as a training procedure to obtain a fair classifier that has a small weighted loss discrepancy across all groups. On four common fairness datasets, we show that loss variance regularization can halve the loss variance of a classifier and hence reduce the weighted loss discrepancy, without a significant drop in accuracy.
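The regularized objective itself is one line; here is a minimal sketch (my own toy numbers, not the paper's experiments) of penalizing the variance of per-example losses:

```python
def loss_variance_objective(losses, lam):
    # Average loss plus lam times the (population) variance of the
    # per-example losses: concentrating loss on a few examples --
    # and hence on any group containing them -- is penalized.
    mean = sum(losses) / len(losses)
    var = sum((l - mean) ** 2 for l in losses) / len(losses)
    return mean + lam * var

# Two classifiers with the same average loss: the one that concentrates
# its errors on one example pays a higher regularized objective.
uneven = [0.1, 0.1, 0.9]   # errors concentrated on a single example
even = [0.4, 0.3, 0.4]     # similar total loss, spread out
```

With lam > 0, a training procedure minimizing this objective prefers the evenly spread losses, which is the mechanism connecting variance to the weighted loss discrepancy in the abstract.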
----------------------------------------------------------
Best,
Anna
From ccanonne at cs.stanford.edu Thu Mar 14 09:33:02 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Thu, 14 Mar 2019 09:33:02 -0700
Subject: [theory-seminar] Theory Happy Hour: S02E03 ("Hap-Pi Hour")
In-Reply-To:
References:
Message-ID: <5153810d-bf77-4139-8c36-2862877e4b43@cs.stanford.edu>
Reminder: this is today!
-- Clément
From ofirgeri at stanford.edu Thu Mar 14 11:10:54 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Thu, 14 Mar 2019 18:10:54 +0000
Subject: [theory-seminar] Two Theory Seminars This Week: Nic Resch
(3/14) and Seth Neel (3/15)
In-Reply-To:
References: ,
Message-ID:
Reminder: Nic's talk is today at 4pm.
________________________________
From: Ofir Geri
Sent: Tuesday, March 12, 2019 10:14 AM
To: thseminar at cs.stanford.edu
Subject: Re: Two Theory Seminars This Week: Nic Resch (3/14) and Seth Neel (3/15)
There's a change of time: Nic's talk on Thursday will be at 4:00 PM. (Seth's talk on Friday remains at 3.)
Hope to see you there!
Ofir
________________________________
From: Ofir Geri
Sent: Monday, March 11, 2019 2:50 PM
To: thseminar at cs.stanford.edu
Subject: Two Theory Seminars This Week: Nic Resch (3/14) and Seth Neel (3/15)
Hi all,
This week we will have two theory seminars: Nic Resch (CMU) on Thursday and Seth Neel (UPenn) on Friday (see abstracts below). Both talks will in Gates 463A at 3:00 PM.
Hope to see you there!
Ofir
Lossless dimension expanders via linearized polynomials and subspace designs (Thursday 3/14)
Speaker: Nic Resch (CMU)
For a vector space F^n over a field F, an (η, β)-dimension expander of degree d is a collection of d linear maps Γ_j : F^n → F^n such that for every subspace U of F^n of dimension at most ηn, the image of U under all the maps, ∑_{j=1}^d Γ_j(U), has dimension at least β·dim(U). Over a finite field, a random collection of d = O(1) maps Γ_j offers excellent "lossless" expansion with high probability: β ≈ d for d ≥ Ω(1/η). When it comes to a family of explicit constructions (for growing n), however, achieving even expansion factor β = 1 + ε with constant degree is a non-trivial goal.
We present an explicit construction of dimension expanders over finite fields based on linearized polynomials and subspace designs, drawing inspiration from recent progress on list decoding in the rank metric. Our approach yields the following: (i) Lossless expansion over large fields; more precisely, β ≥ (1−ε)d and η ≥ (1−ε)/d with d = O_ε(1), when |F| ≥ Ω(n). (ii) Optimal-up-to-constant-factors expansion over fields of arbitrarily small polynomial size; more precisely, β ≥ Ω(δd) and η ≥ Ω(1/(δd)) with d = O_δ(1), when |F| ≥ n^δ.
Previously, an approach reducing to monotone expanders (a form of vertex expansion that is highly non-trivial to establish) gave (Ω(1), 1 + Ω(1))-dimension expanders of constant degree over all fields. An approach based on "rank condensing via subspace designs" led to dimension expanders with β ≥ Ω(√d) over large finite fields. Ours is the first construction to achieve lossless dimension expansion, or even expansion proportional to the degree.
Based on joint work with Venkatesan Guruswami and Chaoping Xing.
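As a concrete illustration of the definition above (a toy numerical check, not the construction from the talk), the expansion of a subspace under d random linear maps over F_2 can be measured directly; all helper names below are illustrative:

```python
import random

random.seed(0)
n, d = 8, 3  # toy ambient dimension n and expander degree d

def matvec(M, v):
    """Matrix-vector product over F_2 (M: list of 0/1 rows, v: 0/1 list)."""
    return [sum(m * x for m, x in zip(row, v)) % 2 for row in M]

def rank_gf2(vectors):
    """Rank over F_2: row-reduce, keeping one basis vector per leading bit."""
    basis = {}  # leading-bit position -> basis vector encoded as an int
    for v in vectors:
        x = int("".join(map(str, v)), 2)
        while x:
            lead = x.bit_length() - 1
            if lead not in basis:
                basis[lead] = x
                break
            x ^= basis[lead]  # cancel the leading bit and keep reducing
    return len(basis)

# d random linear maps Gamma_j : F_2^n -> F_2^n, as random 0/1 matrices
maps = [[[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
        for _ in range(d)]

# A small subspace U, spanned by two random generators
U_gens = [[random.randint(0, 1) for _ in range(n)] for _ in range(2)]
dim_U = rank_gf2(U_gens)

# sum_j Gamma_j(U) is spanned by the images of U's generators under every map
images = [matvec(M, u) for M in maps for u in U_gens]
dim_image = rank_gf2(images)
print(dim_U, dim_image)  # lossless expansion means dim_image close to d * dim_U
```

Running this for many random subspaces (rather than one) is how one would empirically observe the "random maps expand losslessly with high probability" claim.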
Oracle Efficiency in Differential Privacy (Friday 3/15)
Speaker: Seth Neel (UPenn)
We develop theory for using heuristics to solve computationally hard problems in differential privacy. Heuristic approaches have enjoyed tremendous success in machine learning, where performance can be evaluated empirically. Privacy guarantees, however, cannot be evaluated empirically; they must be proven, without making heuristic assumptions. We show that learning problems over broad classes of functions can be solved privately and efficiently, assuming the existence of a non-private oracle for solving the same problem. Our first algorithm yields a privacy guarantee that is contingent on the correctness of the oracle. We then give a reduction that applies to a class of heuristics we call certifiable, which allows us to convert oracle-dependent privacy guarantees into worst-case privacy guarantees that hold even when the heuristic standing in for the oracle might fail in adversarial ways. Finally, we consider a broad class of functions that includes most classes of simple Boolean functions studied in the PAC learning literature, including conjunctions, disjunctions, parities, and discrete halfspaces. We show that there is an efficient algorithm for privately constructing synthetic data for any such class, given a non-private learning oracle. In particular, this gives the first oracle-efficient algorithm for privately generating synthetic data for contingency tables. The most intriguing question left open by our work is whether every problem that can be solved differentially privately can also be solved privately with an oracle-efficient algorithm. While we do not resolve this, we give a barrier result suggesting that any generic oracle-efficient reduction must fall outside a natural class of algorithms (which includes the algorithms given in this paper).
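For readers less familiar with the formal guarantees referenced above, here is a minimal stdlib-only sketch of the textbook Laplace mechanism for an ε-differentially-private count (background illustration only; this is not one of the oracle-efficient algorithms of the talk, and the function names are made up):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via the inverse-CDF transform."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_count(records, predicate, epsilon, rng=random):
    """epsilon-DP counting query: one record changes the count by at most 1
    (sensitivity 1), so adding Laplace(1/epsilon) noise suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(42)
records = [0, 1, 1, 0, 1, 1, 1]
noisy = private_count(records, lambda r: r == 1, epsilon=1.0, rng=rng)
print(noisy)  # roughly 5, perturbed by Laplace noise of scale 1
```

The contingency-table result in the abstract concerns the much harder task of releasing *synthetic data* consistent with many such counts at once, which is where the non-private learning oracle comes in.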
From ccanonne at cs.stanford.edu Thu Mar 14 13:15:41 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Thu, 14 Mar 2019 13:15:41 -0700
Subject: [theory-seminar] TCS+ talk: Wednesday, March 20th, Sasho Nikolov,
U Toronto
Message-ID: <76ae3046-1cde-9d2d-8062-927b84552a47@cs.stanford.edu>
Hi everyone,
Next Wednesday morning (March 20th), at 10am we will have a projection
of Aleksandar Nikolov, from University of Toronto, on the wall, telling
us live about his recent work on "Sticky Brownian Rounding and its
Applications to CSPs" (cf. abstract below).
As usual, there will be breakfast at 9:55am, and the interactive talk at
10am.
See you then!
-- Clément
-------------------------------
Speaker: Aleksandar Nikolov (University of Toronto)
Title: Sticky Brownian Rounding and its Applications to Constraint
Satisfaction Problems
Abstract: Semidefinite programming is a powerful tool in the design and
analysis of approximation algorithms for combinatorial optimization
problems. In particular, the random hyperplane rounding method of
Goemans and Williamson has been extensively studied for more than two
decades, resulting in various extensions to the original technique and
beautiful algorithms for a wide range of applications. Despite the fact
that this approach yields tight approximation guarantees for some
problems, like Max Cut, for many others, like Max Sat, Max DiCut, and
constraint satisfaction problems with global constraints, the tight
approximation ratio is still unknown. One of the main reasons for this
is the fact that very few techniques for rounding semi-definite
relaxations are known.
In this work, we present a new general and simple method for rounding
semi-definite programs, based on Brownian motion. Our approach is
inspired by recent results in algorithmic discrepancy theory. We
develop and present tools for analyzing our new rounding algorithms,
utilizing mathematical machinery from the theory of Brownian motion,
complex analysis, and partial differential equations. Focusing on
constraint satisfaction problems, we apply our method to several
classical problems, including Max Cut, Max 2-Sat, and Max DiCut, and
derive new algorithms that are competitive with the best known results.
We further show that our algorithms can be used, together with the Sum
of Squares hierarchy, to approximate constraint satisfaction problems
subject to multiple global cardinality constraints.
Joint work with Sepehr Abbasi-Zadeh, Nikhil Bansal, Guru Guruganesh, Roy
Schwartz, and Mohit Singh.
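For context, the classical Goemans-Williamson random-hyperplane rounding that the talk's method generalizes can be sketched in a few lines (the SDP solve itself is skipped; the unit vectors below are a hand-made stand-in for an SDP solution, and all names are illustrative):

```python
import random

def hyperplane_round(vectors, rng=random):
    """Assign each unit vector (one per vertex) to a side of a random hyperplane."""
    dim = len(vectors[0])
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]  # random normal direction
    return [1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else 0
            for v in vectors]

def cut_value(edges, sides):
    """Number of edges crossing the cut."""
    return sum(1 for u, w in edges if sides[u] != sides[w])

# Toy Max Cut instance: a 4-cycle. Antipodal vectors stand in for the SDP
# optimum, in which adjacent vertices get opposite embeddings.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
vecs = [(1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]
sides = hyperplane_round(vecs, random.Random(3))
print(cut_value(edges, sides))  # antipodal vectors always split: all 4 edges cut
```

The sticky-Brownian approach of the talk replaces the single random hyperplane with a Brownian motion that is "stuck" once a coordinate commits to a side, which is what opens the door to the discrepancy-theoretic analysis mentioned in the abstract.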
From ccanonne at cs.stanford.edu Thu Mar 14 17:08:31 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Thu, 14 Mar 2019 17:08:31 -0700
Subject: [theory-seminar] Theory Happy Hour: S02E03 ("Hap-Pi Hour")
In-Reply-To:
References:
Message-ID: <97edb58c-9a17-45d5-7242-65f7109439a5@cs.stanford.edu>
The sun is still around, and will be until 7:15pm at least. Let's do
this on the AT&T patio, at 6pm!
-- Clément
On 3/10/19 7:11 PM, Clément Canonne wrote:
> Hi everyone,
>
> As chrono-entomologists say, "time flies!" It's already been more than a
> month since our last happy hour, and that feels way too long.
>
> This is why a *happy hour* will be held next Thursday, to socialize
> with the theory group while (i) acknowledging, I reckon, Pi Day; (ii)
> commemorating the birth of Alexey Pajitnov, the father of Tetris; and
> (iii) realizing that Saint Patrick's day is almost upon us.
>
> Thursday, March 14th
> Gates 463
> (moving to the AT&T patio if (as one hopes) weather allows)
> 6:00pm
>
> There will be snacks (and pies), there will be drinks (including Irish
> beers), so come along with your favorite green hat or
> rotating-brick-based contraption!
>
> See you on Thursday!
>
> PS: as usual, if you have preferences or restrictions on the
> food/drinks, please send me an email.
>
> -- Clément
>
From ofirgeri at stanford.edu Fri Mar 15 11:52:48 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 15 Mar 2019 18:52:48 +0000
Subject: [theory-seminar] Two Theory Seminars This Week: Nic Resch
(3/14) and Seth Neel (3/15)
In-Reply-To:
References:
Message-ID:
Reminder: Seth's talk is today at 3pm.
From ccanonne at cs.stanford.edu Tue Mar 19 19:59:31 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Tue, 19 Mar 2019 19:59:31 -0700
Subject: [theory-seminar] TCS+ talk: Wednesday, March 20th, Sasho Nikolov,
U Toronto
In-Reply-To: <76ae3046-1cde-9d2d-8062-927b84552a47@cs.stanford.edu>
References: <76ae3046-1cde-9d2d-8062-927b84552a47@cs.stanford.edu>
Message-ID:
Reminder: this is tomorrow morning!
In view of the talk title, breakfast will (among other pastries) include
sticky brownies.
-- Clément
From gvaliant at cs.stanford.edu Wed Mar 27 13:58:39 2019
From: gvaliant at cs.stanford.edu (Gregory Valiant)
Date: Wed, 27 Mar 2019 13:58:39 -0700
Subject: [theory-seminar] Course announcement for Amin Saberi's "Matching
Theory"
Message-ID:
Hi Friends,
Amin Saberi will be teaching a graduate seminar on matching theory. Those
of you looking for cool graduate theory seminar courses, this might be the
one for you!
-g
-----
*MS&E 319: Matching Theory*
Amin Saberi
Monday 10:30 AM - 1:00 PM
200-219
The theory of matching with its roots in the work of mathematical giants
like Euler and Kirchhoff has played a central and catalytic role in
combinatorial optimization for decades. More recently, the growth of online
marketplaces for allocating advertisements, rides, or other goods and
services has led to new interest and progress in this area.
The course starts with classic results characterizing matchings in
bipartite and general graphs and explores connections with algebraic graph
theory and discrete probability. Those results are complemented with
models and algorithms developed for modern applications in market design,
online advertising, and ride sharing.
*Topics include*

*Matching, determinant, and Pfaffian*
Matching and polynomial identity testing
Isolating lemma and matrix inversion, matching in RNC

*Combinatorial and polyhedral characterizations I & II*
The assignment problem and its dual, primal-dual and auction algorithms
Tutte's theorem, Edmonds' LP and the Blossom algorithm

*The Gallai-Edmonds decomposition*
Berge-Tutte formula, application in Nash bargaining

*The stable marriage problem*
Gale-Shapley theorem, incentive and fairness issues
LP characterization, counting stable matchings

*Online matching I & II*
Online bipartite matching
Online stochastic matching
Variations on weighted graphs
Applications in ride sharing and online advertising

*Counting matchings I*
Van der Waerden conjecture, Bregman-Minc inequality
Deterministic approximations
Counting matchings in planar graphs

*Counting matchings II*
Self-reducibility and equivalence of counting and sampling
Markov chain Monte Carlo algorithms
Ising model, applications, and basic properties

*Matching in quasi-NC*
Bipartite and general graphs

*The matching polynomial and its roots*
Heilmann-Lieb theorem, maximum root of the matching polynomial, tree-like walks
2-lifts, the Bilu-Linial conjecture, and interlacing families
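As a small taste of one listed topic, here is a sketch (not course material; names and the toy instance are made up) of Gale-Shapley deferred acceptance, which always terminates in a stable matching that is optimal for the proposing side:

```python
def gale_shapley(prop_prefs, recv_prefs):
    """Deferred acceptance for n proposers and n receivers.

    prop_prefs[p] / recv_prefs[r] are preference lists, most preferred first.
    Returns a proposer -> receiver matching.
    """
    n = len(prop_prefs)
    # rank[r][p]: how much receiver r likes proposer p (lower is better)
    rank = [{p: i for i, p in enumerate(recv_prefs[r])} for r in range(n)]
    next_choice = [0] * n      # next receiver each proposer will try
    engaged_to = [None] * n    # receiver -> current proposer (or None)
    free = list(range(n))
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        cur = engaged_to[r]
        if cur is None:
            engaged_to[r] = p
        elif rank[r][p] < rank[r][cur]:
            engaged_to[r] = p
            free.append(cur)   # displaced proposer becomes free again
        else:
            free.append(p)     # rejected; will try the next preference
    return {p: r for r, p in enumerate(engaged_to)}

# Toy instance with 3 proposers and 3 receivers
match = gale_shapley(
    [[0, 1, 2], [1, 0, 2], [0, 2, 1]],
    [[1, 0, 2], [0, 1, 2], [2, 1, 0]],
)
print(match)  # {0: 0, 1: 1, 2: 2} on this instance: everyone gets a top-two pick
```

The incentive and counting questions in the outline start from exactly this algorithm: its proposer-optimality is what makes truthful reporting subtle for the receiving side.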
From ccanonne at cs.stanford.edu Sat Mar 30 10:40:08 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Sat, 30 Mar 2019 10:40:08 -0700
Subject: [theory-seminar] FOCS paper swap
Message-ID: <25b76032-3dc2-4d5c-db92-da820723917f@cs.stanford.edu>
Hi everyone,
As some may have heard (I hope it doesn't come as a surprise to those
concerned), the FOCS deadline is next week, on Friday.
In keeping with the somewhat short-lived tradition of providing
not-quite-last-minute feedback on each other's submissions, I am
thinking of organizing a paper swap.
The details:
- you have a submission, or two, or k (or zero, but want to provide
feedback for others')
- you sign up on this link by Sunday evening, 8pm ET:
https://doodle.com/poll/xfh2r596aduxi2wf
- by Monday at noon, 12pm PT, you receive an email from me telling you
whom to send your paper to
- by Monday evening, 8pm PT, you send said current draft
main9-final7-almostthere-forreal3.pdf to that person
- on Wednesday, we gather for a (free) lunch at 12:30pm, and give each
other feedback on our respective submissions
How does that sound?
Best,
-- Clément