From ofirgeri at stanford.edu Fri Feb 1 10:49:24 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 1 Feb 2019 18:49:24 +0000
Subject: [theory-seminar] Theory Seminar (2/1): Ronald de Haan
In-Reply-To:
References:
Message-ID:
Reminder: Ronald's talk is today at 3pm.
________________________________
From: Ofir Geri
Sent: Monday, January 28, 2019 9:51:25 AM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (2/1): Ronald de Haan
Hi all,
This week's theory seminar talk will be by Ronald de Haan (University of Amsterdam): On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice (see abstract below). The talk will be as usual on Friday (2/1), 3:00pm in Gates 463A.
Hope to see you there!
Ofir
On the use of Boolean circuits in Decomposable Negation Normal Form for Social Choice
Speaker: Ronald de Haan (University of Amsterdam)
In the field of knowledge compilation, a class of Boolean circuits has been studied that strikes a balance between compactness and good algorithmic properties: circuits in Decomposable Negation Normal Form (DNNF). In this talk, I will visit some results about the possibilities and limits of using DNNF circuits in a social choice setting. This includes explaining how one can encode certain voting domains using DNNF circuits in polynomial time, and how this enables efficient use of some voting rules for these domains. It also includes showing lower bounds on the size of DNNF circuits for encoding other voting domains.
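For readers unfamiliar with DNNF, the "good algorithmic properties" can be made concrete with a small Python sketch (the circuit encoding below is an illustrative assumption, not notation from the talk): decomposability means the children of every AND gate mention disjoint variables, which makes satisfiability checkable in one linear-time traversal.

```python
# Sketch: consistency (satisfiability) checking for a circuit in Decomposable
# Negation Normal Form. Decomposability: the children of every AND gate range
# over disjoint sets of variables, so an AND is satisfiable iff each child is
# satisfiable independently -- giving a linear-time check.

def dnnf_sat(node):
    """Return True iff the DNNF circuit rooted at `node` is satisfiable.

    A node is one of (illustrative encoding):
      ("lit", v)        -- a literal; on its own, always satisfiable
      ("and", children) -- conjunction over disjoint variable sets
      ("or", children)  -- any disjunction
    """
    kind = node[0]
    if kind == "lit":
        return True
    children = node[1]
    if kind == "and":
        # Decomposability lets us combine the children's satisfying
        # assignments without any variable conflicts.
        return all(dnnf_sat(c) for c in children)
    if kind == "or":
        return any(dnnf_sat(c) for c in children)
    raise ValueError("unknown node kind: %r" % (kind,))

# Example: x1 AND (x2 OR x3); the AND's children mention {x1} and {x2, x3}.
circuit = ("and", [("lit", "x1"), ("or", [("lit", "x2"), ("lit", "x3")])])
```

The same traversal pattern extends to other queries (model enumeration, and, with determinism, model counting), which is what makes the compilation target attractive.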
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Mon Feb 4 14:11:42 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Mon, 4 Feb 2019 22:11:42 +0000
Subject: [theory-seminar] Theory Lunch 2/7 -- Hongyang Zhang
Message-ID:
Hi everyone,
This Thursday at theory lunch, Hongyang will tell us about "Recovery Guarantees for Quadratic Tensors with Limited Observations." (See abstract below.)
As usual, please join us from noon to 1pm at 463A.
---------------------------------------------------------------------
Recovery Guarantees for Quadratic Tensors with Limited Observations
Speaker: Hongyang Zhang
We consider the tensor completion problem of predicting the missing entries of a tensor. The commonly used CP model has a triple product form, but an alternative family of quadratic models, which are sums of pairwise products instead of a triple product, has emerged from applications such as recommendation systems. Non-convex methods are the method of choice for learning quadratic models, and this work examines their sample complexity and error guarantees.
Our main result is that with a number of samples only linear in the dimension, all local minima of the mean squared error objective are global minima and recover the original tensor accurately. The techniques lead to simple proofs showing that convex relaxation can recover quadratic tensors given a linear number of samples. We substantiate our theoretical results with experiments on synthetic and real-world data, showing that quadratic models outperform CP models in scenarios where only a limited number of observations is available.
Joint work with Vatsal Sharan, Moses Charikar and Yingyu Liang.
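A minimal Python/NumPy sketch of the two model families mentioned in the abstract (the exact quadratic parameterization below is an illustrative guess, not necessarily the one used in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 5, 2                          # dimension and rank, made-up values
U, V, W = (rng.standard_normal((n, r)) for _ in range(3))

def cp_entry(i, j, k):
    # CP model: each entry is a triple product summed over rank components.
    return float(np.sum(U[i] * V[j] * W[k]))

def quadratic_entry(i, j, k):
    # Quadratic model: a sum of pairwise products instead of a triple
    # product (one plausible parameterization among several).
    return float(np.sum(U[i] * V[j] + U[i] * W[k] + V[j] * W[k]))
```

Fitting either model from a subset of observed entries by minimizing mean squared error with a non-convex method (e.g. gradient descent on U, V, W) is the setting the sample-complexity results speak to.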
---------------------------------------------------------------------
Best,
Anna
From ofirgeri at stanford.edu Mon Feb 4 22:22:01 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 5 Feb 2019 06:22:01 +0000
Subject: [theory-seminar] Theory Seminar (2/8): Sofya Raskhodnikova
Message-ID:
Hi all,
This week in the theory seminar, Sofya Raskhodnikova (Boston University) will give a talk on Erasures vs. Errors in Property Testing and Local Decoding (see abstract below). The talk will be as usual on Friday (2/8), 3:00 PM in Gates 463A.
Hope to see you there!
Ofir
Erasures vs. Errors in Property Testing and Local Decoding
Speaker: Sofya Raskhodnikova (Boston University)
Corruption in data can be in the form of erasures (missing data) or errors (wrong data). Erasure-resilient property testing (Dixit, Raskhodnikova, Thakurta, Varma '16) and tolerant property testing (Parnas, Ron, Rubinfeld '06) are two models of sublinear algorithms that account for the presence of erasures and errors in input data, respectively. We discuss the erasure-resilient model and what can and cannot be computed in it. We separate this model from tolerant testing by showing that some properties can be tested in the erasure-resilient model with query complexity independent of the input size n, but require n^{Omega(1)} queries to be tested tolerantly.
To prove the separation, we initiate the study of the role of erasures in local decoding. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures. Motivated by the application in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of the famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. The main tool used in proving the separation in property testing is an approximate variant of a locally list decodable code that works against erasures. We conclude with some open questions on the general relationship between local decoding in the presence of errors and in the presence of erasures.
Joint work with Noga Ron-Zewi (Haifa University) and Nithin Varma (Boston University)
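To make the erasure model concrete, here is a small Python sketch of the Hadamard code with erased positions. The talk's contribution is a local list decoder that reads only a few positions; this brute-force version reads everything and is for illustration only.

```python
from itertools import product

def hadamard_encode(x):
    """Hadamard codeword of x in {0,1}^n: one bit <x, a> mod 2 for every
    a in {0,1}^n (so the codeword has length 2^n)."""
    n = len(x)
    return [sum(xi * ai for xi, ai in zip(x, a)) % 2
            for a in product([0, 1], repeat=n)]

def list_decode_erasures(word, n):
    """All messages consistent with the unerased positions (None = erased).
    Brute force over all 2^n candidate messages -- illustration only."""
    return [cand for cand in product([0, 1], repeat=n)
            if all(w is None or w == c
                   for w, c in zip(word, hadamard_encode(cand)))]

x = (1, 0, 1)
word = hadamard_encode(x)
# Erase every other position of the received word.
erased = [b if i % 2 == 0 else None for i, b in enumerate(word)]
```

With half the positions erased the decoder returns a short list containing the true message; with no erasures the list is the unique message.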
From ofirgeri at stanford.edu Mon Feb 4 22:57:30 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 5 Feb 2019 06:57:30 +0000
Subject: [theory-seminar] Theory Seminar (2/8): Sofya Raskhodnikova
In-Reply-To:
References:
Message-ID:
Also, if you'd like to meet with the speaker on Friday, please sign up:
https://docs.google.com/document/d/1RyyGnbNSD4y5oy1XwavvY2M8Yz2a0zP5Do9dukyFByY/edit#heading=h.ktj6o4s5g1cs
________________________________
From: Ofir Geri
Sent: Monday, February 4, 2019 10:22:01 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (2/8): Sofya Raskhodnikova
Hi all,
This week in the theory seminar, Sofya Raskhodnikova (Boston University) will give a talk on Erasures vs. Errors in Property Testing and Local Decoding (see abstract below). The talk will be as usual on Friday (2/8), 3:00 PM in Gates 463A.
Hope to see you there!
Ofir
Erasures vs. Errors in Property Testing and Local Decoding
Speaker: Sofya Raskhodnikova (Boston University)
Corruption in data can be in the form of erasures (missing data) or errors (wrong data). Erasure-resilient property testing (Dixit, Raskhodnikova, Thakurta, Varma '16) and tolerant property testing (Parnas, Ron, Rubinfeld '06) are two models of sublinear algorithms that account for the presence of erasures and errors in input data, respectively. We discuss the erasure-resilient model and what can and cannot be computed in it. We separate this model from tolerant testing by showing that some properties can be tested in the erasure-resilient model with query complexity independent of the input size n, but require n^{Omega(1)} queries to be tested tolerantly.
To prove the separation, we initiate the study of the role of erasures in local decoding. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures. Motivated by the application in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of the famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. The main tool used in proving the separation in property testing is an approximate variant of a locally list decodable code that works against erasures. We conclude with some open questions on the general relationship between local decoding in the presence of errors and in the presence of erasures.
Joint work with Noga Ron-Zewi (Haifa University) and Nithin Varma (Boston University)
From saberi at stanford.edu Tue Feb 5 20:48:56 2019
From: saberi at stanford.edu (Amin Saberi)
Date: Wed, 6 Feb 2019 04:48:56 +0000
Subject: [theory-seminar] Benny Sudakov speaks in the Combinatorics Seminar
this Thursday
Message-ID:
The following talk may be of interest to some of you:
When: Thursday, February 7, 3-4pm
Room: 384-H
Speaker: Benny Sudakov (ETH Zurich)
Title: Subgraph Statistics
Abstract: Given integers k, l and a graph G, how large can the probability be that a random k-vertex subset of G spans exactly l edges? The systematic study of this very natural question was recently initiated by Alon, Hefetz, Krivelevich and Tyomkyn, who also proposed several interesting conjectures on this topic. In this talk we discuss a theorem which proves one of their conjectures and implies an asymptotic version of another. We also make some first steps towards the analogous question for hypergraphs. Our proofs involve some Ramsey-type arguments and a number of different probabilistic tools, such as polynomial anticoncentration inequalities and hypercontractivity.
Joint work with M. Kwan and T. Tran.
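The quantity in the abstract is easy to play with numerically; a small Python sketch (the graph and parameters are made-up examples, not from the talk):

```python
import random
from itertools import combinations

def spans_exactly(edges, subset, l):
    # Does this set of vertices span exactly l edges of the graph?
    s = set(subset)
    return sum(1 for u, v in edges if u in s and v in s) == l

def exact_prob(nodes, edges, k, l):
    # Exact probability by enumerating all k-vertex subsets (small graphs).
    subsets = list(combinations(nodes, k))
    return sum(spans_exactly(edges, s, l) for s in subsets) / len(subsets)

def estimate_prob(nodes, edges, k, l, trials=20000, seed=1):
    # Monte Carlo estimate of the same probability.
    random.seed(seed)
    hits = sum(spans_exactly(edges, random.sample(nodes, k), l)
               for _ in range(trials))
    return hits / trials

# A 4-cycle: a random pair of vertices spans exactly 1 edge w.p. 4/6.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
```

The extremal question in the talk asks how large this probability can be over all graphs G, which enumeration can only hint at for tiny cases.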
The seminar webpage is:
http://mathematics.stanford.edu/combinatorics-seminar/
Future combinatorics seminars: Nima Anari (Stanford) February 21, Lisa Sauermann (Stanford) February 28.
From ccanonne at cs.stanford.edu Wed Feb 6 07:10:31 2019
From: ccanonne at cs.stanford.edu (=?UTF-8?Q?Cl=c3=a9ment_Canonne?=)
Date: Wed, 6 Feb 2019 07:10:31 -0800
Subject: [theory-seminar] TCS+ talk: Wednesday, February 6, Ran Canetti,
BU and TAU
In-Reply-To:
References:
Message-ID:
Reminder: this is in less than 3 hours!
-- Clément
On 1/28/19 8:30 AM, Clément Canonne wrote:
> Hi everyone,
>
> TCS+, the seminar series where the speaker is very flat on the wall but
> we still get to ask them questions, is about to resume for the Spring
> season. The first talk is by Ran Canetti (BU/TAU), on solving a
> long-standing open problem in cryptography: deniable encryption (see
> details below; as far as I understood, this is a work whose cool side
> effect is to prove xkcd wrong: https://xkcd.com/538/)
>
> So next week, Wednesday 6th, at 9:55am, come to Gates 463A to hear all
> about it -- with breakfast.
>
> Best,
>
> -- Clément
>
>
> -------------------------------
> Speaker: Ran Canetti (BU and TAU)
> Title: Fully Bideniable Interactive Encryption
>
> Abstract: While standard encryption guarantees secrecy of the encrypted
> plaintext only against an attacker that has no knowledge of the
> communicating parties' keys and randomness of encryption, deniable
> encryption [Canetti et al., Crypto '96] provides the additional guarantee
> that the plaintext remains secret even in face of entities that attempt
> to coerce (or bribe) the communicating parties to expose their internal
> states, including the plaintexts, keys and randomness. To achieve this
> guarantee, deniable encryption equips the parties with faking
> algorithms which allow them to generate fake keys and randomness that
> make the ciphertext appear consistent with any plaintext of the parties'
> choice. To date, however, only partial results were known: either
> deniability against coercing only the sender, or against coercing only
> the receiver [Sahai-Waters, STOC '14], or schemes satisfying weaker
> notions of deniability [O'Neill et al., Crypto '11].
>
> In this paper we present the first fully bideniable interactive
> encryption scheme, thus resolving the 20-year-old open problem. Our
> scheme also provides an additional and new guarantee: even if the sender
> claims that one plaintext was used and the receiver claims a different
> one, the adversary has no way of figuring out who is lying - the sender,
> the receiver, or both. This property, which we call off-the-record
> deniability, is useful when the parties don't have means to agree on
> what fake plaintext to claim, or when one party defects against the
> other. Our protocol has three messages, which is optimal [Bendlin et
> al., Asiacrypt '11], and needs a globally available reference string. We
> assume subexponential indistinguishability obfuscation (IO) and one-way
> functions.
>
> Joint work with Sunoo Park and Oxana Poburinnaya.
>
> _______________________________________________
> theory-seminar mailing list
> theory-seminar at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/theory-seminar
From ofirgeri at stanford.edu Fri Feb 8 10:50:32 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Fri, 8 Feb 2019 18:50:32 +0000
Subject: [theory-seminar] Theory Seminar (2/8): Sofya Raskhodnikova
In-Reply-To:
References: ,
Message-ID:
Reminder: Sofya's talk is today at 3pm.
________________________________
From: Ofir Geri
Sent: Monday, February 4, 2019 10:57:30 PM
To: thseminar at cs.stanford.edu
Subject: Re: Theory Seminar (2/8): Sofya Raskhodnikova
Also, if you'd like to meet with the speaker on Friday, please sign up:
https://docs.google.com/document/d/1RyyGnbNSD4y5oy1XwavvY2M8Yz2a0zP5Do9dukyFByY/edit#heading=h.ktj6o4s5g1cs
________________________________
From: Ofir Geri
Sent: Monday, February 4, 2019 10:22:01 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (2/8): Sofya Raskhodnikova
Hi all,
This week in the theory seminar, Sofya Raskhodnikova (Boston University) will give a talk on Erasures vs. Errors in Property Testing and Local Decoding (see abstract below). The talk will be as usual on Friday (2/8), 3:00 PM in Gates 463A.
Hope to see you there!
Ofir
Erasures vs. Errors in Property Testing and Local Decoding
Speaker: Sofya Raskhodnikova (Boston University)
Corruption in data can be in the form of erasures (missing data) or errors (wrong data). Erasure-resilient property testing (Dixit, Raskhodnikova, Thakurta, Varma '16) and tolerant property testing (Parnas, Ron, Rubinfeld '06) are two models of sublinear algorithms that account for the presence of erasures and errors in input data, respectively. We discuss the erasure-resilient model and what can and cannot be computed in it. We separate this model from tolerant testing by showing that some properties can be tested in the erasure-resilient model with query complexity independent of the input size n, but require n^{Omega(1)} queries to be tested tolerantly.
To prove the separation, we initiate the study of the role of erasures in local decoding. Local decoding in the presence of errors has been extensively studied, but has not been considered explicitly in the presence of erasures. Motivated by the application in property testing, we begin our investigation with local list decoding in the presence of erasures. We prove an analog of the famous result of Goldreich and Levin on local list decodability of the Hadamard code. Specifically, we show that the Hadamard code is locally list decodable in the presence of a constant fraction of erasures, arbitrarily close to 1, with list sizes and query complexity better than in the Goldreich-Levin theorem. The main tool used in proving the separation in property testing is an approximate variant of a locally list decodable code that works against erasures. We conclude with some open questions on the general relationship between local decoding in the presence of errors and in the presence of erasures.
Joint work with Noga Ron-Zewi (Haifa University) and Nithin Varma (Boston University)
From shivamgarg at stanford.edu Fri Feb 8 10:53:13 2019
From: shivamgarg at stanford.edu (Shivam Garg)
Date: Fri, 8 Feb 2019 18:53:13 +0000
Subject: [theory-seminar] Algorithms and Friends Lunch,
Monday 2/11 in Gates 463A
Message-ID:
Hi everyone,
We will have the second Algorithms and Friends lunch this coming Monday (Feb 11), at noon, in Gates 463A. Manikanta Kotaru will tell us about expanding machine vision to use radios and the unique algorithmic challenges associated with it.
Title: Radios and cameras: Expanding machine vision to use radios
Abstract: Emerging applications like virtual reality and autonomous vehicles demand high-resolution machine vision for accurate localization and for making inferences about the world around them. In recent years, we have become painfully aware of the costly consequences that arise when these machine vision systems fail. In this talk, I will make a case for expanding machine vision applications to be able to use information from radios like WiFi, cellular and Bluetooth systems, and provide a comparison between radios and cameras. I will formulate radio-based machine vision as an inverse problem within a super-resolution framework, and describe the unique algorithmic and computational challenges it poses.
Speaker Bio: Manikanta is a sixth year PhD student with Sachin Katti. His interests are in RF sensing and machine vision.
Thanks,
Shivam
From shivamgarg at stanford.edu Mon Feb 11 11:01:37 2019
From: shivamgarg at stanford.edu (Shivam Garg)
Date: Mon, 11 Feb 2019 19:01:37 +0000
Subject: [theory-seminar] Algorithms and Friends Lunch,
Monday 2/11 in Gates 463A
In-Reply-To:
References:
Message-ID:
Reminder, this is starting in an hour!
________________________________
From: Shivam Garg
Sent: Saturday, February 9, 2019 12:23 AM
To: algorithms-and-friends at lists.stanford.edu; theory-seminar at lists.stanford.edu
Cc: Manikanta Kotaru; Megan D. Harris
Subject: Algorithms and Friends Lunch, Monday 2/11 in Gates 463A
Hi everyone,
We will have the second Algorithms and Friends lunch this coming Monday (Feb 11), at noon, in Gates 463A. Manikanta Kotaru will tell us about expanding machine vision to use radios and the unique algorithmic challenges associated with it.
Title: Radios and cameras: Expanding machine vision to use radios
Abstract: Emerging applications like virtual reality and autonomous vehicles demand high-resolution machine vision for accurate localization and for making inferences about the world around them. In recent years, we have become painfully aware of the costly consequences that arise when these machine vision systems fail. In this talk, I will make a case for expanding machine vision applications to be able to use information from radios like WiFi, cellular and Bluetooth systems, and provide a comparison between radios and cameras. I will formulate radio-based machine vision as an inverse problem within a super-resolution framework, and describe the unique algorithmic and computational challenges it poses.
Speaker Bio: Manikanta is a sixth year PhD student with Sachin Katti. His interests are in RF sensing and machine vision.
Thanks,
Shivam
From wyma at stanford.edu Mon Feb 11 13:15:15 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Mon, 11 Feb 2019 21:15:15 +0000
Subject: [theory-seminar] Theory Lunch 2/14 -- Vaggos Chatziafratis
Message-ID:
Hi everyone,
This Thursday at theory lunch, Vaggos will present "On the Computational Power of Online Gradient Descent." (See abstract below.)
As usual, please join us from noon to 1pm at 463A.
---------------------------------------------------------------------
On the Computational Power of Online Gradient Descent
Speaker: Vaggos Chatziafratis
How efficiently can we compute the weight vector of online gradient descent after T steps? We prove that the evolution of weight vectors in online gradient descent can encode arbitrary polynomial-space computations, even in very simple learning settings. Our results imply that, under weak complexity-theoretic assumptions, it is impossible to reason efficiently about the fine-grained behavior of online gradient descent. Specifically, during the talk, we will see how even in the extremely simple learning setting of soft-margin SVMs (support vector machines), the weight updates can encode an instance of the PSPACE-complete C-Path problem. Our reduction and our results also extend to simple ReLU neural networks.
(Full paper: https://arxiv.org/pdf/1807.01280.pdf)
Based on joint work with Tim Roughgarden (Columbia) and Josh Wang (Google).
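For concreteness, here is a small Python/NumPy sketch of the online gradient descent dynamics the abstract reasons about, instantiated for a soft-margin SVM hinge loss (the data, step size, and regularization below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def ogd_svm(examples, eta=0.1, lam=0.01):
    """Online gradient descent on the soft-margin SVM loss
    l_t(w) = max(0, 1 - y_t <w, x_t>) + (lam / 2) ||w||^2.
    Returns the whole weight trajectory; the talk asks how hard it is to
    compute (or reason about) where this trajectory ends up."""
    d = len(examples[0][0])
    w = np.zeros(d)
    trajectory = [w.copy()]
    for x, y in examples:
        x = np.asarray(x, dtype=float)
        grad = lam * w                    # gradient of the regularizer
        if y * np.dot(w, x) < 1:          # hinge term is active
            grad = grad - y * x           # subgradient of the hinge term
        w = w - eta * grad
        trajectory.append(w.copy())
    return trajectory

# Toy separable stream: the first weight coordinate should grow positive.
data = [((1.0, 0.0), +1), ((-1.0, 0.0), -1)] * 50
traj = ogd_svm(data)
```

The PSPACE-hardness result says that, in general, predicting fine-grained properties of `traj` after T steps is as hard as arbitrary polynomial-space computation, even for updates this simple.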
---------------------------------------------------------------------
Best,
Anna
From jmardia at stanford.edu Mon Feb 11 23:09:07 2019
From: jmardia at stanford.edu (Jay Sushil Mardia)
Date: Tue, 12 Feb 2019 07:09:07 +0000
Subject: [theory-seminar] Stanford Compression Workshop 2019 : 15 Feb 2019
Message-ID:
Hello everyone,
We invite you to the Stanford Compression Workshop 2019 to be held this Friday!
Workshop Date: 15 February 2019
Workshop Venue: Bechtel Conference Center, Stanford
Registration and more details: https://compression.stanford.edu/2019-stanford-compression-workshop
Program
8:30 am - 9:00 am Registration + Breakfast
9:00 am - 9:15 am Opening remarks
9:15 am - 9:45 am Anne Aaron, Netflix
9:45 am - 10:15 am Jim Bankoski and Jan Skoglund, Google
Low rate speech coding using deep generative models
10:15 am - 10:30 am Elaina Chai, Stanford University
Quantization Error in Neural Networks
10:30 am - 10:45 am Break
10:45 am - 11:15 am Mikel Hernaez, University of Illinois Urbana-Champaign
An overview of the ISO-based MPEG-G standard for genomic information representation
11:15 am - 11:45 am Hanlee Ji, Stanford University
Advances in High Density Molecular Data Storage
11:45 am - 12:15 pm Victoria Popic, Illumina
12:15 pm - 1:30 pm Lunch
1:30 pm - 1:45 pm Sadjad Fouladi, Stanford University
Salsify: Low-Latency Network Video Through Tighter Integration Between a Video Codec and a Transport Protocol
1:45 pm - 2:00 pm Leighton Barnes, Stanford University
Learning Distributions from their Samples under Communication Constraints
2:00 pm - 2:15 pm Kristy Choi, Stanford University
Neural joint-source channel coding
2:15 pm - 2:30 pm Shubham Chandak, Stanford University
SPRING: a next-generation compressor for FASTQ data
2:30 pm - 2:45 pm Break
2:45 pm - 3:15 pm Song Han, Massachusetts Institute of Technology
3:15 pm - 4:00 pm Panel: Compression via and for machine learning
Hyeji Kim, Samsung AI Center Cambridge, UK
Dmitri Pavlichin, Stanford University
Oren Rippel, WaveOne
George Toderici, Google
Moderator:
Kedar Tatwawadi, Stanford University
4:00 pm - 4:15 pm Break
4:15 pm - 4:30 pm Ashutosh Bhown, Palo Alto High School
Soham Mukherjee, Monta Vista High School
Sean Yang, Saint Francis High School
Humans are still the best lossy image compressors
4:30 pm - 5:00 pm Jonathan Dotan, HBO, Stanford University
Compression, Self-Sovereign Data and a Decentralized Internet
5:00 pm - 6:00 pm Closing remarks + Poster session
From ofirgeri at stanford.edu Wed Feb 13 23:52:59 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Thu, 14 Feb 2019 07:52:59 +0000
Subject: [theory-seminar] Theory Seminar (2/20): Venkatesan Guruswami
Message-ID:
Hi all,
Next week on Wednesday (2/20), Venkatesan Guruswami (CMU) will give a talk on Sub-packetization of Minimum Storage Regenerating Codes: A lower bound and a work-around (see abstract below).
The talk will be on Wednesday at 2:00 PM in Gates 463A. Please note the non-standard day and time. There will be no theory seminar this Friday or next Friday.
Hope to see you there!
Ofir
Sub-packetization of Minimum Storage Regenerating codes: A lower bound and a work-around
Speaker: Venkatesan Guruswami (CMU)
Modern cloud storage systems need to store vast amounts of data in a fault tolerant manner, while also preserving data reliability and accessibility in the wake of frequent server failures. Traditional MDS (Maximum Distance Separable) codes provide the optimal trade-off between redundancy and number of worst-case erasures tolerated. Minimum storage regenerating (MSR) codes are a special sub-class of MDS codes that provide mechanisms for exact regeneration of a single code-block by downloading the minimum amount of information from the remaining code-blocks. As a result, MSR codes are attractive for use in distributed storage systems to ensure node repairs with optimal repair-bandwidth. However, all known constructions of MSR codes require large sub-packetization levels (a measure of how finely a single vector codeword symbol must be subdivided to enable efficient repair). This restricts the applicability of MSR codes in practice.
This talk will present a lower bound showing that exponentially large sub-packetization is inherent for MSR codes. We will also propose a natural relaxation of MSR codes that allows one to circumvent this lower bound, and present a general approach to constructing MDS codes that significantly reduces the required sub-packetization level while incurring slightly higher repair bandwidth compared to MSR codes.
The lower bound is joint work with Omar Alrabiah, and the constructions are joint work with Ankit Rawat, Itzhak Tamo, and Klim Efremenko.
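For background, here is a small Python sketch of the MDS baseline the abstract contrasts with: a Reed-Solomon-style code via polynomial evaluation, where repairing even one lost block requires downloading k full blocks. The field size and parameters are illustrative; MSR codes improve on exactly this repair bandwidth.

```python
P = 257  # a small prime field, chosen for illustration

def encode(message, n):
    """(n, k) MDS encoding: evaluate the degree-(k-1) polynomial whose
    coefficients are `message` at the points 1..n. Any k of the n output
    symbols determine the message."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P
            for x in range(1, n + 1)]

def reconstruct(points):
    """Recover the coefficient list from k (x, y) pairs by Lagrange
    interpolation mod P. Note the repair cost: rebuilding even ONE lost
    block needs k full blocks -- the bandwidth that MSR codes reduce."""
    k = len(points)
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(points):
        basis = [1]   # coefficients of prod_{m != j} (x - x_m)
        denom = 1
        for m, (xm, _) in enumerate(points):
            if m == j:
                continue
            denom = denom * (xj - xm) % P
            new = [0] * (len(basis) + 1)
            for i, b in enumerate(basis):
                new[i] = (new[i] - xm * b) % P      # constant-term part
                new[i + 1] = (new[i + 1] + b) % P   # x * b part
            basis = new
        scale = yj * pow(denom, P - 2, P) % P       # divide by denom mod P
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % P
    return coeffs

message = [42, 7, 19]            # k = 3 data symbols
blocks = encode(message, 5)      # stored across n = 5 nodes
survivors = [(x, blocks[x - 1]) for x in (1, 3, 5)]  # nodes 2 and 4 fail
```

Any three surviving nodes recover the whole message, which is the MDS property; the sub-packetization question arises when each stored symbol is itself a long vector.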
From mrmillr at stanford.edu Thu Feb 14 14:11:34 2019
From: mrmillr at stanford.edu (Mark Roman Miller)
Date: Thu, 14 Feb 2019 22:11:34 +0000
Subject: [theory-seminar] CS PhD admit weekend volunteers
Message-ID:
Hello theory department students and faculty,
In a few weeks, the new CS PhD admits will visit Stanford for admit weekend (March 8-9). There are a few roles that still need to be filled by students in the theory department:
1) Theory Group Dinner contact - The admits, faculty, and some students attend a dinner Saturday night (March 9th). The organizers would like two attendees they can contact if something needs to be communicated. The commitment is low - just attend the dinner with a charged phone and call the staff to handle problems if they come up.
2) Theory Demo Session organizer - We're trying something new this year and asking the departments to each have a demo going on in some common space. The content and tone of the demos can be whatever you'd like. For example, last year robotics showed off their robots running around, while HCI had a lot of fun with a button press and came up with silly buttons.
If you're interested in either of these, send me an email!
Thanks,
Mark Miller
Admit Weekend Student Co-Chair
From wyma at stanford.edu Tue Feb 19 11:34:48 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Tue, 19 Feb 2019 19:34:48 +0000
Subject: [theory-seminar] Theory Lunch 2/21 -- Farzan Farnia
Message-ID:
Hi everyone,
This Thursday at theory lunch, Farzan will tell us about "A Convex Duality Framework for GANs." (See abstract below.)
As usual, please join us from noon to 1pm at 463A.
---------------------------------------------------------------------
A Convex Duality Framework for GANs
Abstract:
A generative adversarial network (GAN) is a minimax game between a generator mimicking the true model and a discriminator distinguishing the samples produced by the generator from the real training samples. Given a discriminator trained over the entire space of functions, this game reduces to finding the generative model which minimizes a divergence score, e.g. the Jensen-Shannon (JS) divergence, to the data distribution. However, in practice the discriminator is trained over smaller function classes such as convolutional neural networks. Then, a natural question is how the divergence minimization interpretation changes as we constrain the discriminator. In this talk, we address this question by developing a convex duality framework for analyzing GANs. We show GANs in general can be interpreted as minimizing a divergence between two sets of probability distributions: generative models and discriminator moment matching models. We prove that this interpretation applies to a wide class of existing GAN formulations, including vanilla GAN, f-GAN, Wasserstein GAN, Energy-based GAN, and MMD-GAN. We then use the convex duality framework to explain why regularizing the discriminator's Lipschitz constant can dramatically improve models learned by GANs. We empirically demonstrate the power of different Lipschitz regularization methods for improving the training performance in standard GAN settings.
About the speaker:
Farzan Farnia is a final-year PhD candidate in the electrical engineering department at Stanford University, where he is advised by David Tse. Farzan received his master's degree in electrical engineering from Stanford University in 2015 and, prior to that, two bachelor's degrees in electrical engineering and mathematics from Sharif University of Technology in 2013. His research interests include information theory, statistical learning theory, and convex optimization. He was the recipient of the Stanford Graduate Fellowship (Sequoia Capital fellowship) from 2013 to 2016 and the Numerical Technology Founders Prize as the second top performer in the Stanford electrical engineering PhD qualifying exams in 2014.
---------------------------------------------------------------------
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ofirgeri at stanford.edu Wed Feb 20 10:44:19 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Wed, 20 Feb 2019 18:44:19 +0000
Subject: [theory-seminar] Theory Seminar (2/20): Venkatesan Guruswami
In-Reply-To:
References:
Message-ID:
Reminder: Venkat's talk is today at 2pm.
________________________________
From: Ofir Geri
Sent: Wednesday, February 13, 2019 11:52:59 PM
To: thseminar at cs.stanford.edu
Subject: Theory Seminar (2/20): Venkatesan Guruswami
Hi all,
Next week on Wednesday (2/20), Venkatesan Guruswami (CMU) will give a talk on Sub-packetization of Minimum Storage Regenerating Codes: A lower bound and a work-around (see abstract below).
The talk will be on Wednesday at 2:00 PM in Gates 463A. Please note the non-standard day and time. There will be no theory seminar this Friday or next Friday.
Hope to see you there!
Ofir
Sub-packetization of Minimum Storage Regenerating codes: A lower bound and a work-around
Speaker: Venkatesan Guruswami (CMU)
Modern cloud storage systems need to store vast amounts of data in a fault-tolerant manner, while also preserving data reliability and accessibility in the wake of frequent server failures. Traditional MDS (Maximum Distance Separable) codes provide the optimal trade-off between redundancy and the number of worst-case erasures tolerated. Minimum storage regenerating (MSR) codes are a special subclass of MDS codes that provide mechanisms for exact regeneration of a single code-block by downloading the minimum amount of information from the remaining code-blocks. As a result, MSR codes are attractive for use in distributed storage systems to ensure node repairs with optimal repair bandwidth. However, all known constructions of MSR codes require large sub-packetization levels (a measure of how finely each vector codeword symbol must be subdivided to enable efficient repair). This restricts the applicability of MSR codes in practice.
This talk will present a lower bound showing that exponentially large sub-packetization is inherent for MSR codes. We will also propose a natural relaxation of MSR codes that allows one to circumvent this lower bound, and present a general approach to construct MDS codes that significantly reduces the required sub-packetization level by incurring slightly higher repair bandwidth compared to MSR codes.
The lower bound is joint work with Omar Alrabiah, and the constructions are joint work with Ankit Rawat, Itzhak Tamo, and Klim Efremenko.
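As a toy illustration of the repair problem (not the codes from the talk), here is a minimal Python sketch of a single-parity (k+1, k) MDS code over byte blocks. Notice that repairing one lost block requires downloading every surviving block in full; this is exactly the repair bandwidth that MSR codes reduce, at the price of the sub-packetization discussed above.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def encode(data_blocks):
    """(k+1, k) single-parity MDS code: append an XOR parity block,
    so any single erased block can be rebuilt."""
    return data_blocks + [xor_blocks(data_blocks)]

def repair(blocks, lost):
    """Rebuild the erased block by XOR-ing every surviving block.
    This downloads all surviving data in full; MSR codes cut exactly
    this repair bandwidth by splitting each block into many
    sub-packets (the sub-packetization level)."""
    return xor_blocks([b for i, b in enumerate(blocks) if i != lost])

data = [b"ab", b"cd", b"ef"]
coded = encode(data)
assert repair(coded, 1) == b"cd"     # repair a lost data block
assert repair(coded, 3) == coded[3]  # repair the lost parity block
```
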
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From gvaliant at cs.stanford.edu Wed Feb 20 17:48:36 2019
From: gvaliant at cs.stanford.edu (Gregory Valiant)
Date: Wed, 20 Feb 2019 17:48:36 -0800
Subject: [theory-seminar] Nima giving the Combinatorics Seminar (3pm Thurs)
Message-ID:
Hi Friends,
The combinatorics seminar tomorrow might be of interest to many of us.
Nima Anari will be talking about some new work on random walks, sampling,
and expansion of graphs associated to polytopes and matroids. The
talk will be at 3pm in 384-H. The full abstract/info is below.
Cheers,
-g
--
When: Thursday February 21, 3pm-4pm
Room: 384-H
Speaker: Nima Anari (Stanford)
Title: Matroids Expand
Abstract: It was conjectured by Mihail and Vazirani in 1989 that if we take
the graph formed by vertices and edges of a polytope whose vertices have
binary (0/1) coordinates, then the edge expansion of this graph is at least
1. The original motivation for this conjecture was to study Markov Chain
Monte Carlo methods that estimate the size of set families called matroids.
I will give a proof of the expansion conjecture for arbitrary matroids and show efficient algorithms to sample random bases of a matroid or to approximately count them.
The key to the proof lies in a connection between matroids and the multivariate polynomials associated to them on one side, and high-dimensional expanders and random walks on them on the other. The property of these polynomials that we call complete log-concavity provides a bridge connecting notions of convexity in the continuous and discrete worlds, and acts as a hub connecting a diverse set of areas.
Based on joint work with Shayan Oveis Gharan, Kuikui Liu, and Cynthia
Vinzant.
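To make the random-walk object concrete, here is a small illustrative Python sketch (my own example, not from the talk) of the bases-exchange "down-up" walk on the bases of a graphic matroid, namely the 16 spanning trees of the complete graph K4: drop a random element of the current basis, then re-add a uniformly random element that yields a basis again.

```python
import itertools
import random
from collections import Counter

def is_spanning_tree(edges, n):
    """Acyclic and spanning on n vertices, checked via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # edge closes a cycle
        parent[ru] = rv
    return len(edges) == n - 1

n = 4
all_edges = list(itertools.combinations(range(n), 2))  # edges of K4
bases = [set(t) for t in itertools.combinations(all_edges, n - 1)
         if is_spanning_tree(t, n)]
assert len(bases) == 16  # Cayley's formula: 4^(4-2) spanning trees

def down_up_step(base, rng):
    """One step of the bases-exchange (down-up) walk."""
    b = set(base)
    b.remove(rng.choice(sorted(b)))          # "down": drop an element
    options = [e for e in all_edges           # "up": re-complete a basis
               if e not in b and is_spanning_tree(tuple(b | {e}), n)]
    return b | {rng.choice(options)}

rng = random.Random(0)
state = bases[0]
counts = Counter()
for _ in range(5000):
    state = down_up_step(state, rng)
    counts[frozenset(state)] += 1
# Every visited state is a basis; with enough steps the empirical
# distribution approaches uniform over the 16 spanning trees.
assert all(is_spanning_tree(tuple(s), n) for s in counts)
```
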
The seminar webpage is:
http://mathematics.stanford.edu/combinatorics-seminar/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Thu Feb 21 14:01:09 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Thu, 21 Feb 2019 22:01:09 +0000
Subject: [theory-seminar] Theory Lunch 2/21 -- Farzan Farnia
In-Reply-To:
References:
Message-ID:
Someone left a grey hat in 463A. If this is yours, come get it!
________________________________
From: theory-seminar on behalf of Weiyun Ma
Sent: Tuesday, February 19, 2019 11:34:48 AM
To: theory-seminar at lists.stanford.edu; thseminar at cs.stanford.edu
Subject: [theory-seminar] Theory Lunch 2/21 -- Farzan Farnia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Mon Feb 25 10:55:36 2019
From: reingold at stanford.edu (Omer Reingold)
Date: Mon, 25 Feb 2019 10:55:36 -0800
Subject: [theory-seminar] Fwd: Complexity School in Paris, June 17-21
In-Reply-To:
References:
Message-ID:
FYI
---------- Forwarded message ---------
From: Alexander Knop
Date: Fri, Feb 22, 2019 at 1:39 PM
Subject: Complexity School in Paris, June 17-21
To:
Dear Prof. Reingold,
I hope you are well. I am helping to organize a Complexity School in Paris,
France this June: https://caleidoscope.sciencesconf.org/
As you can see, there will be quite a diverse program! Our main lecturers
will be Rahul Santhanam (Circuits and lower bounds), Peter Bürgisser
(Algebraic and geometric complexity), Sam Buss (Proof complexity and
bounded arithmetic), and Anuj Dawar & Ugo Dal Lago (Machine-free
complexity).
We are going to apply for funding from the NSF for US-based students to
attend. Would any of your students (or those in your department) be
interested in attending, with or without financial support? If so please
let us know and we can include a request to fund their participation.
Please also let us know if you have access to funding to support their
participation (this is necessary information for the NSF funding
application).
Please let me know if you have any other questions.
Kind regards,
Alexander Knop
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Mon Feb 25 12:47:38 2019
From: wyma at stanford.edu (Weiyun Ma)
Date: Mon, 25 Feb 2019 20:47:38 +0000
Subject: [theory-seminar] Theory Lunch 2/28 -- Fotis Iliopoulos
Message-ID:
Hi all,
This Thursday at theory lunch, Fotis Iliopoulos from UC Berkeley will tell us about "Efficiently list-edge coloring multigraphs asymptotically optimally." (See abstract below.)
As usual, please join us from noon to 1pm at 463A.
---------------------------------------------------------------------
Efficiently list-edge coloring multigraphs asymptotically optimally
Speaker: Fotis Iliopoulos (UC Berkeley)
We give polynomial time algorithms for the seminal results of Kahn [18, 19], who showed that the Goldberg-Seymour and List-Coloring conjectures for (list-)edge coloring multigraphs hold asymptotically. Kahn's arguments are based on the probabilistic method and are non-constructive. Our key insight is to show that the main result of Achlioptas, Iliopoulos and Kolmogorov [2] for analyzing local search algorithms can be used to make constructive applications of a powerful version of the so-called Lopsided Lovász Local Lemma. In particular, we use it to design algorithms that exploit the fact that correlations in the probability spaces on matchings used by Kahn decay with distance.
Joint work with Alistair Sinclair.
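The constructive use of the Local Lemma mentioned above is in the spirit of Moser-Tardos resampling. The following Python sketch is a generic illustration of that resampling loop on a tiny SAT instance, not the paper's algorithm: while some "bad event" (violated clause) occurs, resample just the variables it depends on.

```python
import random

def moser_tardos(clauses, n_vars, rng):
    """Moser-Tardos resampling for SAT. Each clause is a list of
    (variable, wanted_value) literals; a clause is violated when no
    literal holds. While a violated clause exists, resample that
    clause's variables uniformly at random. Under Lovasz Local Lemma
    conditions (each clause shares variables with few others), this
    terminates in expected polynomial time."""
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    def bad(c):
        return all(assign[v] != want for v, want in c)
    while True:
        violated = [c for c in clauses if bad(c)]
        if not violated:
            return assign
        for v, _ in rng.choice(violated):
            assign[v] = rng.random() < 0.5  # resample the bad event

# A small 3-SAT instance where clauses overlap in few variables:
clauses = [[(0, True), (1, True), (2, True)],
           [(0, False), (3, True), (4, True)],
           [(2, False), (4, False), (5, True)]]
assign = moser_tardos(clauses, 6, random.Random(1))
assert all(any(assign[v] == want for v, want in c) for c in clauses)
```
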
---------------------------------------------------------------------
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From aviad at cs.stanford.edu Mon Feb 25 14:09:17 2019
From: aviad at cs.stanford.edu (Aviad Rubinstein)
Date: Mon, 25 Feb 2019 14:09:17 -0800
Subject: [theory-seminar] CA+students wanted for spring quarter CS354:
Unfulfilled Algorithmic Fantasies
Message-ID:
Dear theory students,
In the spring I'll be teaching a new advanced course titled "Topics in
intractability: Unfulfilled Algorithmic Fantasies".
It should be a lot of fun (see below for course description; talk to me if
you have further questions/suggestions).
Action items:
1. You should obviously take it!
2. I'm looking for a CA...
(I expect that the workload will be relatively light.)
Cheers,
Aviad
Over the past 45 years, understanding NP-hardness has been an amazingly
useful tool for algorithm designers. This course will expose students to
additional ways to reason about obstacles for designing efficient
algorithms. Topics will include unconditional lower bounds (query- and
communication-complexity), total problems, Unique Games, average-case
complexity, and fine-grained complexity.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ofirgeri at stanford.edu Mon Feb 25 23:14:23 2019
From: ofirgeri at stanford.edu (Ofir Geri)
Date: Tue, 26 Feb 2019 07:14:23 +0000
Subject: [theory-seminar] Theory Seminar (3/1): Rachel Cummings (Georgia
Tech)
Message-ID:
Hi all,
This Friday, Rachel Cummings (Georgia Tech) will give a theory seminar talk on Algorithmic Price Discrimination (see abstract below). The talk will be as usual at 3:00 PM in Gates 463A.
If you'd like to meet with the speaker, please email Mary at marykw at stanford.edu
Hope to see you there!
Ofir
Algorithmic Price Discrimination
Speaker: Rachel Cummings (Georgia Tech)
We consider a generalization of the third degree price discrimination problem studied in Bergemann et al. 2015, where an intermediary between the buyer and the seller can design market segments to maximize any linear combination of consumer surplus and seller revenue. Unlike in Bergemann et al. 2015, we assume that the intermediary only has partial information about the buyer's value. We consider three different models of information, in increasing order of difficulty. In the first model, we assume that the intermediary's information allows him to construct a probability distribution over the buyer's value. Next we consider the sample complexity model, where we assume that the intermediary only sees samples from this distribution. Finally, we consider a bandit online learning model, where the intermediary can only observe past purchasing decisions of the buyer, rather than her exact value. For each of these models, we present algorithms to compute optimal or near-optimal market segmentations.
(joint work with Nikhil Devanur, Zhiyi Huang, and Xiangning Wang)
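As background for the first information model (a known value distribution), here is a minimal Python sketch of the seller's side of the problem: the revenue-maximizing posted price against a known discrete distribution. The distribution and numbers are illustrative, not from the paper.

```python
def optimal_price(values_probs):
    """Revenue-maximizing posted price for a known discrete value
    distribution: price p earns p * Pr[value >= p], and an optimal
    price can always be taken to be one of the support values."""
    prices = sorted(v for v, _ in values_probs)
    def revenue(p):
        return p * sum(q for v, q in values_probs if v >= p)
    return max(prices, key=revenue)

# Buyer's value uniform over {1, 2, 3}:
dist = [(1, 1/3), (2, 1/3), (3, 1/3)]
p = optimal_price(dist)
assert p == 2  # revenues: 1*1 = 1, 2*(2/3) ~ 1.33, 3*(1/3) = 1
```

Segmentation changes this picture: if the intermediary splits buyers into one segment per value, the seller can price each segment at its value, moving all surplus to the seller. The talk's question is how to design segments that optimize a chosen combination of consumer surplus and revenue when the intermediary's information is weaker than a full distribution.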
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From ccanonne at cs.stanford.edu Thu Feb 28 20:24:49 2019
From: ccanonne at cs.stanford.edu (Clément Canonne)
Date: Thu, 28 Feb 2019 20:24:49 -0800
Subject: [theory-seminar] TCS+ talk: Wednesday, March 6, Shayan Oveis Gharan,
U Washington
Message-ID: <2126002d-242d-0aac-8bb0-2bbfb3e212dd@cs.stanford.edu>
Hi Everyone,
As mentioned at the theory lunch today, there will be a TCS+ talk (A
projected speaker! A talk from afar!) this coming
Wednesday, at *10am*
(as usual: with breakfast at 9:55am). Come and listen to Shayan Oveis
Gharan talk about his recent work with (among others) our very own Nima
Anari.
See you next Wednesday,
-- Clément
-------------------------------
Speaker: Shayan Oveis Gharan, University of Washington
Title: Strongly log concave polynomials, high dimensional simplicial
complexes, and an FPRAS for counting Bases of Matroids
Abstract: A matroid is an abstract combinatorial object which
generalizes the notions of spanning trees and linearly independent sets
of vectors. I will talk about an efficient algorithm based on the Markov
Chain Monte Carlo technique to approximately count the number of bases
of any given matroid.
The proof is based on a new connection between high-dimensional
simplicial complexes and a new class of multivariate polynomials called
completely log-concave polynomials. In particular, we exploit a
fundamental fact from our previous work: the bases-generating
polynomial of any given matroid is a log-concave function over the
positive orthant.
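For tiny graphic matroids the count in question can be computed exactly, which makes a useful baseline when experimenting with approximate counters. The Python sketch below (my illustration, not from the talk) counts the bases of a graphic matroid, i.e. spanning trees, via Kirchhoff's matrix-tree theorem using exact rational arithmetic.

```python
import itertools
from fractions import Fraction

def count_spanning_trees(n, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees
    equals any cofactor of the graph Laplacian."""
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # Determinant of the Laplacian with row/column 0 deleted,
    # via fraction-exact Gaussian elimination:
    M = [row[1:] for row in L[1:]]
    det = Fraction(1)
    for i in range(n - 1):
        pivot = next((r for r in range(i, n - 1) if M[r][i]), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det
        det *= M[i][i]
        for r in range(i + 1, n - 1):
            factor = M[r][i] / M[i][i]
            M[r] = [a - factor * b for a, b in zip(M[r], M[i])]
    return int(det)

# K4: Cayley's formula gives 4^(4-2) = 16 spanning trees.
k4 = list(itertools.combinations(range(4), 2))
assert count_spanning_trees(4, k4) == 16
```

This exact method scales poorly compared to the MCMC approach in the talk (and only covers graphic matroids), but it pins down ground truth for small sanity checks.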
Based on joint works with Nima Anari, Kuikui Liu, and Cynthia Vinzant.