From tpulkit at stanford.edu Wed Jun 1 03:03:02 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 1 Jun 2022 10:03:02 +0000
Subject: [theory-seminar] "Role of Channel Capacity in Learning GMMs" - Elad
Romanov (Friday, June 3rd, 2pm)
Message-ID: <76F2E85C-C5E8-4467-9A32-4ED1E691AD66@stanford.edu>
Hi everyone,
We will have an Information Theory Forum (IT Forum) talk this week on Friday, June 3rd at 2pm PT with Elad Romanov. The talks are hosted and accessible via Zoom.
If you want to receive reminder emails, please join the IT Forum mailing list.
Details for this week's talk are below:
On the Role of Channel Capacity in Learning Gaussian Mixture Models
Elad Romanov, Stanford
Fri, 3rd June, 2pm PT
Zoom Link
pwd: 032264
Abstract:
We study the sample complexity of learning the unknown centers of a balanced spherical Gaussian mixture model (GMM) in R^d. In particular, we are interested in the following question: what is the maximal noise level (variance) for which the sample complexity is essentially the same as when estimating the centers from labeled measurements? To that end, we restrict attention to a Bayesian formulation of the problem, where the centers are uniformly distributed on the sphere. Our main results characterize the exact noise threshold, as a function of the number of centers k and dimension d, below which the GMM learning problem is essentially as easy as learning from labeled observations, and above which it is substantially harder. The exact location of the threshold occurs at (log k)/d = (1/2)log(1+1/sigma^2), which is the capacity of the additive white Gaussian noise (AWGN) channel. Thinking of the set of k centers as a code, this noise threshold can be interpreted as the largest noise level for which the error probability of the code over the AWGN channel is small. Previous works on the GMM learning problem have identified the minimum distance between the centers as a key parameter in determining the statistical difficulty of learning the corresponding GMM. While our results are only proved for GMMs whose centers are uniformly distributed over the sphere, they hint that perhaps it is the decoding error probability associated with the center constellation as a channel code that determines the statistical difficulty of learning the corresponding GMM, rather than just the minimum distance.
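For concreteness, the threshold (log k)/d = (1/2)log(1+1/sigma^2) can be solved for the critical noise variance, sigma^2 = 1/(k^(2/d) - 1). A minimal numerical sketch (the values of k and d below are illustrative choices, not parameters from the talk):

```python
import math

def critical_noise_variance(k, d):
    """Noise variance at the threshold (log k)/d = (1/2)*log(1 + 1/sigma^2),
    solved for sigma^2 (logs in nats). Illustrative only."""
    return 1.0 / (k ** (2.0 / d) - 1.0)

# Example: k = 1024 hypothetical centers in d = 100 dimensions.
k, d = 1024, 100
s2 = critical_noise_variance(k, d)

# Sanity check: plugging sigma^2 back in recovers (log k)/d.
assert abs(0.5 * math.log(1 + 1 / s2) - math.log(k) / d) < 1e-12
print(round(s2, 3))
```

Below this noise level the talk's result says learning is essentially as easy as the labeled problem; above it, substantially harder.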
Joint work with Or Ordentlich (Hebrew University) and Tamir Bendory (Tel Aviv University).
Bio:
Elad Romanov is currently a postdoctoral researcher in the Department of Statistics at Stanford. His research interests broadly span signal processing, information theory, high-dimensional statistics, and everything [(math)+(data science)].
Best,
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Thu Jun 2 09:11:16 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 2 Jun 2022 16:11:16 +0000
Subject: [theory-seminar] Theory Lunch 6/2: Seri Khoury (Berkeley)
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes.
________________________________
From: theory-seminar on behalf of Junyao Zhao
Sent: Sunday, May 29, 2022 9:08 PM
To: theory-seminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theory-seminar] Theory Lunch 6/2: Seri Khoury (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Seri will tell us about: Hardness of Approximation in P via Short Cycle Removal: Cycle Detection, Distance Oracles, and Beyond
Abstract: Triangle finding is at the base of many conditional lower bounds in P, mainly for distance computation problems, and the existence of many $4$- or $5$-cycles in a worst-case instance has been the obstacle towards resolving major open questions.
We present a new technique for efficiently removing almost all short cycles in a graph without unintentionally removing its triangles. Consequently, triangle finding problems do not become easy even in almost $k$-cycle-free graphs, for any constant $k \geq 4$. This allows us to establish new hardness-of-approximation results for distance-related problems, and new hardness results for $k$-cycle finding problems.
In this talk, I will explain the short cycle removal technique and its consequences for approximate distance oracles, k-cycle detection, and more.
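As a toy illustration of the objects involved (the triangles one wants to keep and the short cycles one wants to remove; this is not the short cycle removal technique itself), here is a brute-force count of triangles and 4-cycles in a small graph. The 4-cycle identity counts each cycle once per diagonal, hence the division by two:

```python
from itertools import combinations
from math import comb

def count_triangles(n, edges):
    # Naive O(n^3) triangle count via the adjacency matrix.
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    return sum(1 for a, b, c in combinations(range(n), 3)
               if adj[a][b] and adj[b][c] and adj[a][c])

def count_four_cycles(n, edges):
    # #C4 = (1/2) * sum over vertex pairs of C(codegree, 2):
    # each 4-cycle is counted once by each of its two diagonals.
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = sum(comb(len(adj[u] & adj[v]), 2)
                for u, v in combinations(range(n), 2))
    return total // 2

# K4 has 4 triangles and 3 four-cycles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(count_triangles(4, k4), count_four_cycles(4, k4))  # 4 3
```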
Based on a joint work with Amir Abboud, Karl Bringmann, and Or Zamir (to appear in STOC 2022).
BTW, I'm looking for volunteers to take over organizing theory lunch (starting from summer quarter). If you are interested, please let me know.
Cheers,
Junyao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Fri Jun 3 13:52:58 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Fri, 3 Jun 2022 13:52:58 -0700
Subject: [theory-seminar] [isl-students] "Role of Channel Capacity in
Learning GMMs" - Elad Romanov (Friday, June 3rd, 2pm)
In-Reply-To: <76F2E85C-C5E8-4467-9A32-4ED1E691AD66@stanford.edu>
References: <76F2E85C-C5E8-4467-9A32-4ED1E691AD66@stanford.edu>
Message-ID:
Reminder: This is happening in 8 mins
On Wed, Jun 1, 2022 at 3:03 AM Pulkit Tandon wrote:
> Hi everyone,
>
> We will have an Information Theory Forum (IT Forum
> ) talk this week @Fri,
> June 3rd, 2pm PT with Elad Romanov. The talks are hosted and accessible via
> Zoom.
>
> *If you want to receive reminder emails, please join the IT Forum mailing
> list
> .*
>
> Details for this week's talk are below:
>
> On the Role of Channel Capacity in Learning Gaussian Mixture Models
> Elad Romanov, Stanford
> *Fri, 3rd June, 2pm PT*
> Zoom Link
>
> pwd: 032264
>
> *Abstract:*
>
> We study the sample complexity of learning the unknown centers of a
> balanced spherical Gaussian mixture model (GMM) in R^d. In particular, we
> are interested in the following question: what is the maximal noise level
> (variance) for which the sample complexity is essentially the same as when
> estimating the centers from labeled measurements? To that end, we restrict
> attention to a Bayesian formulation of the problem, where the centers are
> uniformly distributed on the sphere. Our main results characterize the
> exact noise threshold, as a function of the number of centers k and
> dimension d, below which the GMM learning problem is essentially as easy as
> learning from labeled observations, and above which it is substantially
> harder. The exact location of the threshold occurs at (log k)/d = (1/2)log(1+1/sigma^2),
> which is the capacity of the additive white Gaussian noise (AWGN) channel.
> Thinking of the set of k centers as a code, this noise threshold can be
> interpreted as the largest noise level for which the error probability of
> the code over the AWGN channel is small. Previous works on the GMM learning
> problem have identified the minimum distance between the centers as a key
> parameter in determining the statistical difficulty of learning the
> corresponding GMM. While our results are only proved for GMMs whose centers
> are uniformly distributed over the sphere, they hint that perhaps it is the
> decoding error probability associated with the center constellation as a
> channel code that determines the statistical difficulty of learning the
> corresponding GMM, rather than just the minimum distance.
>
> Joint work with Or Ordentlich (Hebrew University) and Tamir Bendory (Tel
> Aviv University).
> *Bio:*
>
> Elad Romanov is currently a postdoctoral researcher in the department of
> statistics, Stanford. His research interests span broadly signal
> processing, information theory, high-dimensional statistics, and everything
> [(math)+(data science)].
> Best
> Pulkit
>
> _______________________________________________
> isl-students mailing list
> isl-students at lists.stanford.edu
> https://mailman.stanford.edu/mailman/listinfo/isl-students
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Mon Jun 6 00:19:38 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Mon, 6 Jun 2022 07:19:38 +0000
Subject: [theory-seminar] Theory Lunch 6/9: Yassine Hamoudi (Berkeley)
Message-ID:
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Yassine will tell us about: Near-Optimal Quantum Algorithms for Multivariate Mean Estimation
Abstract: We describe a quantum algorithm for estimating the mean of a vector-valued random variable. Our result provides better approximation guarantees than any classical sub-Gaussian estimator when the sample complexity is larger than the dimension. In the low-sample regime, we show that no such quantum estimator exists. Our analysis uses tail inequalities for multivariate truncated statistics and a variety of quantum algorithmic techniques such as the Bernstein-Vazirani algorithm and linear amplitude amplification.
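For contrast, a standard classical baseline in this setting is a median-of-means estimator. The sketch below uses a simple coordinate-wise variant and is purely illustrative; it is not the quantum algorithm from the talk, and the data distribution and group count are assumptions:

```python
import numpy as np

def median_of_means(samples, n_groups=10):
    """Coordinate-wise median-of-means mean estimator (classical baseline).

    Split the samples into groups, average within each group, then take
    the coordinate-wise median of the group means. Robust to heavy tails.
    """
    groups = np.array_split(samples, n_groups)
    group_means = np.stack([g.mean(axis=0) for g in groups])
    return np.median(group_means, axis=0)

# Usage: heavy-tailed data (Student-t) around a known mean vector.
rng = np.random.default_rng(0)
true_mean = np.array([1.0, -2.0, 0.5])
samples = true_mean + rng.standard_t(df=3, size=(5000, 3))
print(np.round(median_of_means(samples), 1))
```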
This is the last theory lunch of this quarter. For the summer quarter, we need your help to make theory lunch a success:
Speakers: If you have a result that you're excited to share with the group, please let us know.
Organizers: Because I won't be on campus every Thursday in summer, I'm looking for volunteers to take over organizing theory lunch. If you want to do some service and have more opportunities to interact with the speakers, please let me know.
Cheers,
Junyao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From junyaoz at stanford.edu Wed Jun 8 23:24:16 2022
From: junyaoz at stanford.edu (Junyao Zhao)
Date: Thu, 9 Jun 2022 06:24:16 +0000
Subject: [theory-seminar] Theory Lunch 6/9: Yassine Hamoudi (Berkeley)
In-Reply-To:
References:
Message-ID:
A gentle reminder: This is happening in 10 minutes.
________________________________
From: theory-seminar on behalf of Junyao Zhao
Sent: Monday, June 6, 2022 12:19 AM
To: theory-seminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: [theory-seminar] Theory Lunch 6/9: Yassine Hamoudi (Berkeley)
Hello everyone,
This week's theory lunch will take place Thursday at noon in the Engineering Quad. We'll start with some socializing, followed by a talk at 12:30pm. Yassine will tell us about: Near-Optimal Quantum Algorithms for Multivariate Mean Estimation
Abstract: We describe a quantum algorithm for estimating the mean of a vector-valued random variable. Our result provides better approximation guarantees than any classical sub-Gaussian estimator when the sample complexity is larger than the dimension. In the low-sample regime, we show that no such quantum estimator exists. Our analysis uses tail inequalities for multivariate truncated statistics and a variety of quantum algorithmic techniques such as the Bernstein-Vazirani algorithm and linear amplitude amplification.
This is the last theory lunch of this quarter. For the summer quarter, we need your help to make theory lunch a success:
Speakers: If you have a result that you're excited to share with the group, please let us know.
Organizers: Because I won't be on campus every Thursday in summer, I'm looking for volunteers to take over organizing theory lunch. If you want to do some service and have more opportunities to interact with the speakers, please let me know.
Cheers,
Junyao
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wzli at stanford.edu Thu Jun 16 10:23:20 2022
From: wzli at stanford.edu (Wenzheng Li)
Date: Thu, 16 Jun 2022 17:23:20 +0000
Subject: [theory-seminar] Quals talk: maximizing determinants under matroid
constraints
Message-ID:
Hi all,
I will be giving my quals talk on maximizing determinants under matroid constraints tomorrow, starting at 10 am in Gates 100. I will talk about several related papers but will mainly focus on the following one: https://arxiv.org/pdf/1707.02757.pdf. Feel free to come if you are interested.
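For those unfamiliar with the problem, here is a toy greedy sketch of the cardinality-constrained special case (a uniform matroid): pick k rows of a matrix to maximize the determinant of their Gram matrix, i.e. the squared volume they span. This is only a heuristic illustration, not an algorithm from the papers above:

```python
import numpy as np

def greedy_subdet(X, k):
    """Greedy heuristic for max-determinant subset selection under a
    cardinality constraint (assumes k >= 1). X has one candidate row vector
    per row; returns the selected row indices and the Gram determinant."""
    selected = []
    for _ in range(k):
        best_i, best_det = None, -np.inf
        for i in range(len(X)):
            if i in selected:
                continue
            S = X[selected + [i]]
            det = np.linalg.det(S @ S.T)  # Gram determinant = squared volume
            if det > best_det:
                best_i, best_det = i, det
        selected.append(best_i)
    return selected, best_det

X = np.array([[2.0, 0, 0],
              [1.0, 1, 0],
              [0.0, 1, 0],
              [0.0, 0, 1]])
sel, val = greedy_subdet(X, 2)
print(sel, round(val, 1))
```

The greedy first grabs the longest vector, then whichever remaining row maximizes the spanned area; the papers study how far such local choices can be from the true optimum under general matroid constraints.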
Best,
Wenzheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: