From tpulkit at stanford.edu Mon Oct 3 11:17:37 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Mon, 3 Oct 2022 18:17:37 +0000
Subject: [theory-seminar] "Extreme Lossless Text Compression" - Byron Knoll
(Fri, 10/07, 2-3pm, Packard 202)
Message-ID: <3C32E5C8-700B-4B9E-8675-CA9DC847889C@stanford.edu>
Hi everyone,
Welcome back to a new quarter. We will continue with the Information Theory Forum (IT Forum) talks this week on Friday, 10/07, 2-3pm, with Byron Knoll. The talk will be in person at Packard 202 and will also be hosted via Zoom for remote participants.
If you want to receive reminder emails, please join the IT Forum mailing list.
Details for this week's talk are below:
Extreme Lossless Text Compression
Byron Knoll, Google
Fri, 7th October, 2-3pm PT
Zoom Link
pwd: 032264
Abstract:
I will be discussing the current state of the art approaches for lossless text compression, at the extreme end of maximizing compression at the cost of high resource usage. I will discuss the Hutter Prize and the architecture of the following compression programs: cmix, starlit, nncp, and tensorflow-compress.
Bio:
In 2011 I received my master's degree in computer science from the University of British Columbia. My supervisor was Nando de Freitas. My thesis was related to machine learning and data compression. Since then I have been working on the search ranking team at Google. Outside of work I continue to experiment with data compression as a hobby.
Best
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Tue Oct 4 10:45:31 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Tue, 4 Oct 2022 10:45:31 -0700
Subject: [theory-seminar] Theory Lunch 6 Oct: Don Knuth
Message-ID:
Hi all,
We continue Theory Lunch this Thursday, 6 Oct, at noon in the usual location
(engineering quad treewell).
We'll have lunch and socializing from 12 to 12:30, and then a talk (details
below) from 12:30 to 1.
Cheers,
Jay
---------
*Title: *Ambidextrous Numbers
*Speaker: *Don Knuth
*Abstract:* Rossi and Thuswaldner recently introduced a fascinating new
mathematical object K = R × Q2, where R is the field of real numbers and Q2
is the field of 2-adic numbers. It's a metric space that's closed under the
operations of addition, negation, halving, and taking limits. It's somewhat
analogous to the complex numbers, because each member of K has a "real
part" and a "2-adic part." We will learn more about these "ambidextrous
numbers" (so named because their properties involve both right-hand and
left-hand rules).
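For readers who have not met Q2 before, the 2-adic absolute value can be computed mechanically. A minimal illustrative sketch (not from the talk; function names are ours) shows why halving makes a number 2-adically *large*, even as it shrinks in R:

```python
from fractions import Fraction

def v2(x: Fraction) -> int:
    """2-adic valuation: the exponent of 2 dividing x
    (negative if 2 divides the denominator)."""
    if x == 0:
        raise ValueError("the valuation of 0 is +infinity")
    n, d = x.numerator, x.denominator
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    while d % 2 == 0:
        d //= 2
        v -= 1
    return v

def abs2(x: Fraction) -> Fraction:
    """2-adic absolute value |x|_2 = 2**(-v2(x))."""
    return Fraction(1, 2) ** v2(x)

# Halving grows the 2-adic size: |1/2|_2 = 2, |1/1024|_2 = 1024,
# while powers of 2 tend to 0 in Q2: |1024|_2 = 1/1024.
```

Under this metric on the Q2 factor, the limit operations in the abstract behave very differently from their real counterparts.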
From jmardia at stanford.edu Thu Oct 6 09:29:55 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Thu, 6 Oct 2022 09:29:55 -0700
Subject: [theory-seminar] Theory Lunch 6 Oct: Don Knuth
In-Reply-To:
References:
Message-ID:
Reminder: This is happening in approximately 2.5 hours; see you there.
On Tue, Oct 4, 2022 at 10:45 AM Jay Mardia wrote:
> Hi all,
>
> We continue Theory Lunch this Thursday, 6 Oct, at noon in the usual
> location (engineering quad treewell).
> We'll have lunch and socializing from 12 to 12:30, and then a talk
> (details below) from 12:30 to 1.
>
> Cheers,
> Jay
>
> ---------
> *Title: *Ambidextrous Numbers
> *Speaker: *Don Knuth
>
> *Abstract:* Rossi and Thuswaldner recently introduced a fascinating new
> mathematical object K = R × Q2, where R is the field of real numbers and Q2
> is the field of 2-adic numbers. It's a metric space that's closed under the
> operations of addition, negation, halving, and taking limits. It's somewhat
> analogous to the complex numbers, because each member of K has a "real
> part" and a "2-adic part." We will learn more about these "ambidextrous
> numbers" (so named because their properties involve both right-hand and
> left-hand rules).
>
From iashlagi at stanford.edu Thu Oct 6 17:45:44 2022
From: iashlagi at stanford.edu (Itai Ashlagi)
Date: Fri, 7 Oct 2022 00:45:44 +0000
Subject: [theory-seminar] RAIN Seminar
Message-ID:
Dear colleagues/students,
This is our annual note regarding the RAIN seminar. If you find this seminar interesting, you can subscribe at https://mailman.stanford.edu/mailman/listinfo/internetalgs and see more information at http://rain.stanford.edu/schedule/ . We would welcome your participation, and please feel free to forward this note to anyone else you find appropriate. The talks typically focus on problems at the interface of social and economic sciences on one hand, and computational science and algorithms on the other.
The seminar is hosted by the Stanford Society and Algorithms Lab (SOAL) and is held roughly every other Wednesday, usually from 12-1pm in Y2E2 101. After the seminar we have a social hour (with snacks and drinks) that you are welcome to attend if you attend the seminar or feel an affinity with SOAL.
The first RAIN talk of the year is by Diyi Yang from Stanford on Wednesday, 10/12, at 12pm in Y2E2 101; details below. In case you are wondering, RAIN stands for Research on Algorithms and Incentives in Networks.
Thanks,
Itai
Speaker: Diyi Yang
Title:
More Civility and Positivity for Socially Responsible Language Understanding
Abstract:
Natural language processing (NLP) has had increasing success and produced extensive industrial applications. Despite being sufficient to enable these applications, current NLP systems often ignore the social part of language, e.g., who says it, in what context, for what goals, which severely limits the functionality of these applications and the growth of the field. Our research focuses on the social part of language, towards building more socially responsible language technologies. In this talk, I will take a closer look at social factors in language and share two recent works for promoting more civility and positivity in language use. The first one studies hate speech by introducing a benchmark corpus on implicit hate speech and computational models to detect and explain latent hatred in language. The second examines positive reframing by neutralizing a negative point of view and generating a more positive perspective without contradicting the original meaning.
Bio:
Diyi Yang is an assistant professor in the Computer Science Department at Stanford University. Her research interests are computational social science and natural language processing. Her research goal is to understand the social aspects of language and to build socially aware NLP systems to better support human-human and human-computer interaction. Her work has received multiple paper awards or nominations at ACL, ICWSM, EMNLP, SIGCHI, and CSCW. She is a recipient of Forbes 30 under 30 in Science (2020), IEEE "AI 10 to Watch" (2020), the Intel Rising Star Faculty Award (2021), Microsoft Research Faculty Fellowship (2021), and NSF CAREER Award (2022).
Schedule for the quarter:
Oct. 12
Diyi Yang
More Civility and Positivity for Socially Responsible Language Understanding
Oct. 26
Jamie Morgenstern
Shifts in Distributions and Preferences in Response to Learning
Nov. 9
Shahar Dobzinski
TBD
Nov. 16
Nynke Niezink
TBD
Nov. 30
Michael Jordan
On Dynamics-Informed, Learning-Aware Mechanism Design
Dec. 12
Thodoris Lykouris
TBD
----------------------------------------------------------------------------------------------------------
Itai Ashlagi
Professor, Departments of Management Science and Engineering
Stanford University
From jmardia at stanford.edu Tue Oct 11 09:09:34 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Tue, 11 Oct 2022 09:09:34 -0700
Subject: [theory-seminar] Theory Lunch 13 Oct: Yeshwanth Cherapanamjeri
Message-ID:
Hi all,
We continue Theory Lunch this Thursday, 13 Oct, at noon in the usual
location (engineering quad treewell).
We'll have lunch and socializing from 12 to 12:30, and then a talk (details
below) from 12:30 to 1.
Cheers,
Jay
*Speaker:* Yeshwanth Cherapanamjeri (UC Berkeley)
*Title*: Uniform Approximations for Randomized Hadamard Transforms
*Abstract*: In this talk, I will present some recent work establishing
concentration properties for a class of structured random linear
transformations based on Hadamard matrices. This class of matrices has been
adopted as a computationally efficient alternative to "fully" random linear
transformations (for instance, a matrix of iid Gaussians) in applications
ranging from dimensionality reduction and compressed sensing to various
high dimensional machine learning tasks. However, previous theoretical results
only apply to the "low-dimensional" setting where a small number of rows
are sampled from a full transformation matrix. I will present our
"high-dimensional" result where we show that as far as the distribution of
the entries of the output is concerned, these structured transformations
behave much the same as a fully random transformation. I will then describe
an application of our inequality to the practically relevant setting of
kernel approximation where we obtain guarantees competitive with those for
fully random matrices by Rahimi and Recht.
Based on joint work with Jelani Nelson. Link to paper:
https://arxiv.org/abs/2203.01599
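As context for the talk, here is a minimal sketch of the transformation family in question, a subsampled randomized Hadamard transform (function names and the normalization are illustrative choices, not taken from the paper):

```python
import numpy as np

def fwht(x):
    """Unnormalized fast Walsh-Hadamard transform; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=float).copy()
    n, h = len(x), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b      # butterfly: sums
            x[i + h:i + 2 * h] = a - b  # butterfly: differences
        h *= 2
    return x

def srht(x, m, rng):
    """Sample m coordinates of (1/sqrt(n)) * H * D * x, where D is a random
    +-1 diagonal; rescaled so the squared norm is preserved in expectation."""
    n = len(x)
    d = rng.choice([-1.0, 1.0], size=n)
    y = fwht(d * x) / np.sqrt(n)  # orthonormal transform of the sign-flipped input
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * y[rows]
```

Because H/sqrt(n) is orthogonal, the full transform preserves norms exactly; the subsampled rows preserve them only approximately, which is where concentration results of the kind discussed in the talk come in.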
From marykw at stanford.edu Tue Oct 11 11:54:00 2022
From: marykw at stanford.edu (Mary Wootters)
Date: Tue, 11 Oct 2022 11:54:00 -0700
Subject: [theory-seminar] Women in Theory Forum: first talk October 19!
Message-ID:
Hi all,
The Women in Theory Forum (WTF) returns for this academic year, starting
Wednesday October 19! We meet monthly-ish for socializing, snacks, and a
research talk.
If you are receiving this on the theory seminar mailing list and are
interested in receiving future WTF announcements, please sign up for the
WTF mailing list here:
https://mailman.stanford.edu/mailman/listinfo/womens-theory-forum (or email
Tselil Schramm (tselil-at-stanford-dot-edu) to get added).
Our first talk of the quarter will be given by our very own Ellen
Vitercik!
*When:* Wednesday October 19, 3:30pm
*Where:* Outside at the tree pit with the whiteboards, unless there's bad
weather (join the mailing list for location updates!)
*Speaker:* Ellen Vitercik
*Title: *Theoretical Foundations of Machine Learning for Cutting Plane
Selection
*Abstract: *Cutting-plane methods have enabled remarkable successes in
integer programming over the last few decades. State-of-the-art solvers
integrate a myriad of cutting-plane techniques to speed up the underlying
tree search algorithm used to find optimal solutions. In this talk, we
provide sample complexity guarantees for learning high-performing
cut-selection policies tailored to the instance distribution at hand. This
talk is based on joint work with Nina Balcan, Siddharth Prasad, and Tuomas
Sandholm from NeurIPS'21.
Hope to see you there!
Best,
Mary and Tselil
--
Mary Wootters (she/her)
Assistant Professor of Computer Science and Electrical Engineering
Stanford University
From tpulkit at stanford.edu Wed Oct 12 22:35:56 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Thu, 13 Oct 2022 05:35:56 +0000
Subject: [theory-seminar] "A deep dive into the craft of building data
compressors" - Dmitri Pavlichin (Fri, 10/14, 2-3pm, Packard 202)
Message-ID:
Hi everyone,
We will continue with the Information Theory Forum (IT Forum) talk this week on Friday, 10/14, 2-3pm, with Dmitri Pavlichin. The talk will be in person at Packard 202 and will also be hosted via Zoom for remote participants.
If you want to receive reminder emails, please join the IT Forum mailing list.
Talk Details:
A deep dive into the craft of building data compressors
Dmitri Pavlichin, Amazon
Fri, 14th October, 2-3pm PT
Zoom Link
pwd: 032264
Abstract:
General purpose compressors like Gzip and Zstandard perform well on many kinds of data and are standard tools of the computing trade. We can often do better with algorithms specialized for particular data domains, like genomic or numeric data, but the process of building new compressors is often ad hoc and requires domain expertise and compression familiarity. This talk dives into the craft of building new data compressors, focusing on examples from tabular and genomic datasets.
Bio:
Dmitri Pavlichin is an applied scientist at Amazon. In an earlier life he did research in information theory and bioinformatics as a postdoc with Tsachy Weissman at Stanford and co-founded a data compression startup.
Best
Pulkit
From jmardia at stanford.edu Thu Oct 13 09:32:47 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Thu, 13 Oct 2022 09:32:47 -0700
Subject: [theory-seminar] Theory Lunch 13 Oct: Yeshwanth Cherapanamjeri
In-Reply-To:
References:
Message-ID:
Reminder: This is happening in approximately 2.5 hours.
On Tue, Oct 11, 2022 at 9:09 AM Jay Mardia wrote:
> Hi all,
>
> We continue Theory Lunch this Thursday, 13 Oct, at noon in the usual
> location (engineering quad treewell).
> We'll have lunch and socializing from 12 to 12:30, and then a talk
> (details below) from 12:30 to 1.
>
> Cheers,
> Jay
>
> *Speaker:* Yeshwanth Cherapanamjeri (UC Berkeley)
> *Title*: Uniform Approximations for Randomized Hadamard Transforms
> *Abstract*: In this talk, I will present some recent work establishing
> concentration properties for a class of structured random linear
> transformations based on Hadamard matrices. This class of matrices has been
> adopted as a computationally efficient alternative to "fully" random linear
> transformations (for instance, a matrix of iid Gaussians) in applications
> ranging from dimensionality reduction and compressed sensing to various
> high dimensional machine learning tasks. However, previous theoretical results
> only apply to the "low-dimensional" setting where a small number of rows
> are sampled from a full transformation matrix. I will present our
> "high-dimensional" result where we show that as far as the distribution of
> the entries of the output is concerned, these structured transformations
> behave much the same as a fully random transformation. I will then describe
> an application of our inequality to the practically relevant setting of
> kernel approximation where we obtain guarantees competitive with those for
> fully random matrices by Rahimi and Recht.
>
> Based on joint work with Jelani Nelson. Link to paper:
> https://arxiv.org/abs/2203.01599
>
From tpulkit at stanford.edu Sun Oct 16 22:29:13 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Mon, 17 Oct 2022 05:29:13 +0000
Subject: [theory-seminar] "Burrows–Wheeler transform and Compression via
Substring Enumeration" - Ilya Grebnov (Fri, 10/21, 1-2pm, Zoom)
Message-ID: <50E63E32-076D-4C55-A49A-101499418F5B@stanford.edu>
Hi everyone,
We will continue with the Information Theory Forum (IT Forum) talk this week on Friday, 10/21, 1-2pm (note the change in time from the usual 2-3pm), with Ilya Grebnov. This talk will be hosted only via Zoom, for everyone.
If you want to receive reminder emails, please join the IT Forum mailing list.
Talk Details:
Burrows–Wheeler transform and Compression via Substring Enumeration (CSE)
Ilya Grebnov, Microsoft
Fri, 21st October, 1-2pm PT
Zoom Link
pwd: 032264
Abstract:
We will discuss the Burrows–Wheeler transform (BWT), with a focus on lossless data compression, and its connection to Compression via Substring Enumeration (CSE), including details of a practical implementation for non-binary alphabets known as the M03 context-aware compression algorithm, which achieves the highest compression ratio among BWT-based compressors.
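For those new to the transform, here is a minimal sketch of the BWT and its inverse (a naive O(n^2 log n) illustration with invented function names, not the M03 implementation discussed in the talk):

```python
def bwt(s: str) -> str:
    """Burrows-Wheeler transform: the last column of the sorted
    rotations of s plus a unique sentinel."""
    s += "\x00"  # sentinel, assumed absent from s
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

def inverse_bwt(last: str) -> str:
    """Invert by repeatedly prepending the last column and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    original = next(row for row in table if row.endswith("\x00"))
    return original.rstrip("\x00")
```

The point of the transform for compression is that the output clusters characters that share a right context, which context-modeling compressors of the kind covered in the talk then exploit.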
Bio:
Ilya Grebnov is a Software Architect in the Business Applications and Power Platform Group at Microsoft, with nearly 15 years of experience as a technical resource for both Business Applications and Microsoft Azure. Outside of work, Ilya is a data compression enthusiast focusing on block-sorting lossless data compression algorithms.
Best
Pulkit
From jmardia at stanford.edu Tue Oct 18 08:48:25 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Tue, 18 Oct 2022 08:48:25 -0700
Subject: [theory-seminar] Theory Lunch 20 Oct: Pravesh Kothari
Message-ID:
Hi all,
We continue Theory Lunch this Thursday, 20 Oct, at noon in the usual
location (engineering quad treewell).
We'll have lunch and socializing from 12 to 12:30, and then a talk (details
below) from 12:30 to 1.
Cheers,
Jay
*Speaker:* Pravesh Kothari, CMU
*Title*: The Kikuchi Matrix Method
*Abstract*: In this talk, I will present a new method that reduces
understanding an appropriate notion of girth of hypergraphs to bounding the
spectrum of an associated "Kikuchi" matrix.
I will discuss three applications of this technique:
1) Finding a refutation algorithm for smoothed instances of constraint
satisfaction problems (obtained by randomly perturbing the literal patterns
in a worst-case instance with a small probability) that matches the best
running-time vs constraint-density trade-offs for the significantly special
and easier case of random CSPs,
2) Confirming Feige's 2008 Conjecture that postulated an extremal girth vs
density trade-off (a.k.a. Moore bounds) for k-uniform hypergraphs that
generalizes the Alon-Hoory-Linial Moore bound for graphs,
3) Proving a cubic lower bound on the block length of 3-query locally
decodable codes improving on the prior best quadratic lower bound from the
early 2000s.
Based on joint works with Omar Alrabiyah (Berkeley), Venkat Guruswami
(Berkeley), Tim Hsieh (CMU), Peter Manohar (CMU), Sidhanth Mohanty
(Berkeley).
From jvondrak at stanford.edu Tue Oct 18 11:58:06 2022
From: jvondrak at stanford.edu (Jan Vondrak)
Date: Tue, 18 Oct 2022 18:58:06 +0000
Subject: [theory-seminar] CS jobs at Baruch College
Message-ID:
Dear All,
Baruch College in New York is opening a new CS department in 2023 and will be hiring for multiple positions.
Please check this if you're interested:
https://geometrynyc.wixsite.com/csjobs
Best regards,
-- Jan
From jmardia at stanford.edu Thu Oct 20 10:02:00 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Thu, 20 Oct 2022 10:02:00 -0700
Subject: [theory-seminar] Theory Lunch 20 Oct: Pravesh Kothari
In-Reply-To:
References:
Message-ID:
Gentle reminder: This is happening in about 2 hours.
On Tue, Oct 18, 2022 at 8:48 AM Jay Mardia wrote:
> Hi all,
>
> We continue Theory Lunch this Thursday, 20 Oct, at noon in the usual
> location (engineering quad treewell).
> We'll have lunch and socializing from 12 to 12:30, and then a talk (details
> below) from 12:30 to 1.
>
> Cheers,
> Jay
>
> *Speaker:* Pravesh Kothari, CMU
> *Title*: The Kikuchi Matrix Method
>
> *Abstract*: In this talk, I will present a new method that reduces
> understanding an appropriate notion of girth of hypergraphs to bounding the
> spectrum of an associated "Kikuchi" matrix.
>
> I will discuss three applications of this technique:
>
> 1) Finding a refutation algorithm for smoothed instances of constraint
> satisfaction problems (obtained by randomly perturbing the literal patterns
> in a worst-case instance with a small probability) that matches the best
> running-time vs constraint-density trade-offs for the significantly special
> and easier case of random CSPs,
>
> 2) Confirming Feige's 2008 Conjecture that postulated an extremal girth vs
> density trade-off (a.k.a. Moore bounds) for k-uniform hypergraphs that
> generalizes the Alon-Hoory-Linial Moore bound for graphs,
>
> 3) Proving a cubic lower bound on the block length of 3-query locally
> decodable codes improving on the prior best quadratic lower bound from the
> early 2000s.
>
> Based on joint works with Omar Alrabiyah (Berkeley), Venkat Guruswami
> (Berkeley), Tim Hsieh (CMU), Peter Manohar (CMU), Sidhanth Mohanty
> (Berkeley).
>
From mdharris at stanford.edu Thu Oct 20 11:06:15 2022
From: mdharris at stanford.edu (Megan D. Harris)
Date: Thu, 20 Oct 2022 18:06:15 +0000
Subject: [theory-seminar] Theory Lunch 20 Oct: Pravesh Kothari
In-Reply-To:
References:
Message-ID:
Hello All,
I want to apologize: unfortunately, there will be no lunch today. I ordered it for next week by mistake. I am so sorry!
Kind Regards,
Megan Denise Harris | Faculty Administrator | Computer Science (Gates Building) |
353 Jane Stanford Way, Rm 187, Stanford, CA 94305 |
Office Phone | 650.723.1658 | Cell 206-313-1390 |
Campus Days: Monday and Thursday 7AM-3:30PM
________________________________
From: theory-seminar on behalf of Jay Mardia
Sent: Thursday, October 20, 2022 10:02 AM
To: praveshk at cs.cmu.edu ; theory-seminar at lists.stanford.edu ; thseminar at cs.stanford.edu
Subject: Re: [theory-seminar] Theory Lunch 20 Oct: Pravesh Kothari
Gentle reminder: This is happening in about 2 hours.
On Tue, Oct 18, 2022 at 8:48 AM Jay Mardia wrote:
Hi all,
We continue Theory lunch this Thursday 20 Oct, at noon in the usual location (engineering quad treewell). We'll have lunch and socializing from 12 to 12:30, and then a talk (details below) from 12:30 to 1.
Cheers,
Jay
Speaker: Pravesh Kothari, CMU
Title: The Kikuchi Matrix Method
Abstract: In this talk, I will present a new method that reduces understanding an appropriate notion of girth of hypergraphs to bounding the spectrum of an associated "Kikuchi" matrix.
I will discuss three applications of this technique:
1) Finding a refutation algorithm for smoothed instances of constraint satisfaction problems (obtained by randomly perturbing the literal patterns in a worst-case instance with a small probability) that matches the best running-time vs constraint-density trade-offs for the significantly special and easier case of random CSPs,
2) Confirming Feige's 2008 Conjecture that postulated an extremal girth vs density trade-off (a.k.a. Moore bounds) for k-uniform hypergraphs that generalizes the Alon-Hoory-Linial Moore bound for graphs,
3) Proving a cubic lower bound on the block length of 3-query locally decodable codes, improving on the prior best quadratic lower bound from the early 2000s.
Based on joint works with Omar Alrabiyah (Berkeley), Venkat Guruswami (Berkeley), Tim Hsieh (CMU), Peter Manohar (CMU), Sidhanth Mohanty (Berkeley).
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From vitercik at stanford.edu Thu Oct 20 17:44:25 2022
From: vitercik at stanford.edu (Ellen Vitercik)
Date: Thu, 20 Oct 2022 17:44:25 -0700
Subject: [theory-seminar] Simons workshop on Societal Applications of
Decision Making
In-Reply-To: <32617af6728d4836a9c4223063ec9be8@SJ0PR02MB7424.namprd02.prod.outlook.com>
References: <32617af6728d4836a9c4223063ec9be8@SJ0PR02MB7424.namprd02.prod.outlook.com>
Message-ID:
Hi all,
MS&E faculty member Irene Lo is co-organizing a workshop at Simons in early
November that you may be interested in attending!
----
Hi everyone,
We are organizing a workshop on the *Societal Applications of Decision
Making* at the Simons Institute in Berkeley, CA on Nov 7-10, as part of a
semester-long program on Data-Driven Decision Processes.
The workshop will bring together researchers from multiple disciplines --
TCS, ML, Econ, and OR -- and will feature work on topics such as privacy,
fairness, civic participation, robustness and interpretability. More
details and a link for registration can be found at
https://simons.berkeley.edu/workshops/datadriven-workshop3. Registration is
free.
The workshop will also feature a poster session. We invite applications
from students, postdocs, and junior researchers. The application process is
simple: just fill out this Google form with the title and abstract for your poster.
*The deadline for poster applications is Friday October 21 EOD.*
Please apply and encourage your friends, colleagues, and students
to participate. Feel free to spread the word.
Thanks,
Shuchi Chawla, Rachel Cummings, and Irene Lo
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lytan at stanford.edu Mon Oct 24 12:30:15 2022
From: lytan at stanford.edu (Li-Yang Tan)
Date: Mon, 24 Oct 2022 12:30:15 -0700
Subject: [theory-seminar] Talk by Bernardo Subercaseaux this Wednesday 1:30pm
Message-ID:
Hi all,
Bernardo Subercaseaux is visiting us
this Wednesday (10/26) and will be giving a talk at 1:30pm in Gates 104.
Everyone's welcome to attend.
*Title:* More power to computers, more power to the people: from
computer-aided mathematics to formal explainability.
*Abstract: *
This talk aims to introduce two CS topics that appear completely disjoint
yet have exciting commonalities:
Firstly, computers took a fundamental role in combinatorial proofs for the
first time in 1976, with the ground-breaking Four Color Theorem, and since
then have been essential for a number of results, including the computation
of the packing-chromatic number of the infinite square lattice, which I
finished with Marijn Heule this year and will explain in some detail.
Secondly, given the ubiquitous nature of ML classifiers affecting
our lives, the problem of explaining the decisions that algorithms make and
affect us all has become of paramount importance. I advocate for the area
of explainability/interpretability to take a formal direction, in which
"explaining decisions" takes clear semantics and has surprisingly become a
source of fascinating TCS problems. In particular, I will talk about
decision trees, which appear to be at an exciting frontier between the
interpretable and the non-interpretable: What forms of complexity can
decision trees encode? Are they easily learnable? Can we improve our
understanding of the sub-classes of CNF formulas that can be succinctly
represented as decision trees? etc. Both in the areas of mathematics and
explainability of AI, I believe that delegating power to computers can make
us advance in exciting directions, and provide nice computational problems
along the way.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From pulkit1495 at gmail.com Mon Oct 24 19:09:54 2022
From: pulkit1495 at gmail.com (Pulkit Tandon)
Date: Mon, 24 Oct 2022 19:09:54 -0700
Subject: [theory-seminar] "Latent Variables and Lossless Compression" -
James Townsend (Fri, 10/28, 9-10am, Zoom)
Message-ID: <9C708B29-D63E-4ABD-9C1E-BFAA7C495095@gmail.com>
Hi everyone
We will continue with the Information Theory Forum (IT Forum) talk this week @Fri, 10/28, 9-10am (note the change in time from the usual 2-3pm to 9-10am, as our speaker will be joining from Europe) with James Townsend. This talk will be hosted (only) via Zoom for everyone.
If you want to receive reminder emails, please join the IT Forum mailing list.
Talk Details:
Latent Variables and Lossless Compression
James Townsend, University of Amsterdam
Fri, 28th October, 9-10am PT
Zoom Link
pwd: 032264
Abstract:
I will give some background/history of "latent variable models", and explain how a last-in-first-out compression technique such as asymmetric numeral systems (ANS) allows you to introduce latent random variables during lossless compression. I will then discuss known examples where this is useful. These examples include (going from simple to more elaborate) rANS itself; ANS with the "alias method"; and a method for compressing images using variational auto-encoders (VAEs). Some prior familiarity with ANS will be useful for understanding the talk.
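For readers new to ANS, the last-in-first-out behavior the abstract refers to can be seen in a toy rANS coder. This is a minimal sketch, without the renormalization a real coder needs to keep the state bounded, and the frequency table is a made-up example:

```python
# Toy range-ANS (rANS): the exact-arithmetic encode/decode step only.

def make_tables(freqs):
    """freqs: dict symbol -> integer frequency.
    Returns (freqs, cumulative frequencies, total)."""
    cum, c = {}, 0
    for s in sorted(freqs):
        cum[s] = c
        c += freqs[s]
    return freqs, cum, c

def encode(symbols, freqs):
    f, c, M = make_tables(freqs)
    x = 1  # initial state
    for s in reversed(symbols):       # push in reverse: last in, first out
        x = (x // f[s]) * M + c[s] + (x % f[s])
    return x

def decode(x, n, freqs):
    f, c, M = make_tables(freqs)
    out = []
    for _ in range(n):                # pop n symbols in forward order
        r = x % M
        s = next(t for t in c if c[t] <= r < c[t] + f[t])
        x = f[s] * (x // M) + (r - c[s])
        out.append(s)
    return out

freqs = {"a": 3, "b": 1}              # "a" three times as likely as "b"
msg = list("aababaaa")
assert decode(encode(msg, freqs), len(msg), freqs) == msg
```

Each encode step is exactly invertible, so symbols come back out in LIFO order, which is the stack-like property the talk exploits when interleaving latent variables with data.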
Bio:
James Townsend is a post-doc machine learning researcher, based at the Amsterdam Machine Learning Lab (AMLab) at the University of Amsterdam. He completed his PhD, on lossless compression with latent variable models, in 2020, supervised by Professor David Barber at the UCL AI Centre in London. Most of his research to date has been on deep generative models and lossless compression. He is also interested in unsupervised learning more generally, approximate inference, Monte Carlo methods, optimization and the design of machine learning software systems.
Best
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Mon Oct 24 21:36:55 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Mon, 24 Oct 2022 21:36:55 -0700
Subject: [theory-seminar] 9th TOCA-SV - 11/18
Message-ID:
The 9th TOCA-SV day is coming on Friday 11/18/22, on the Google campus in
Mountain View. It is free, but you need to register here,
where you can also see an up-to-date list of talks and abstracts.
*Schedule (tentative):*
0930-1000: Breakfast
1000-1015: Welcome
1015-1100: Gagan Aggarwal (Google)
1100-1145: Li-Yang Tan (Stanford)
1145-1245: Short talks I
1245-1400: Lunch (provided) and campus tour
1400-1445: Sandy Irani (Simons/UC Berkeley)
1445-1530: Kunal Talwar (Apple)
1530-1600: Coffee Break
1600-1730: Short talks II
*Looking forward to seeing you all there! *
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From reingold at stanford.edu Mon Oct 24 21:49:31 2022
From: reingold at stanford.edu (Omer Reingold)
Date: Mon, 24 Oct 2022 21:49:31 -0700
Subject: [theory-seminar] 9th TOCA-SV - 11/18
In-Reply-To:
References:
Message-ID:
PS. Please register as early as possible (even if you are not 100% sure you
can make it) so that the food order will be sufficient.
On Mon, Oct 24, 2022 at 9:36 PM Omer Reingold wrote:
> The 9th TOCA-SV day is coming on Friday 11/18/22, on the Google campus in
> Mountain View. It is free, but you need to register here,
> where you can also see an up-to-date list of talks and abstracts.
> *Schedule (tentative):*
>
> 0930-1000: Breakfast
>
> 1000-1015: Welcome
>
> 1015-1100: Gagan Aggarwal (Google)
>
> 1100-1145: Li-Yang Tan (Stanford)
>
> 1145-1245: Short talks I
>
> 1245-1400: Lunch (provided) and campus tour
>
> 1400-1445: Sandy Irani (Simons/UC Berkeley)
>
> 1445-1530: Kunal Talwar (Apple)
>
> 1530-1600: Coffee Break
>
> 1600-1730: Short talks II
>
> *Looking forward to seeing you all there! *
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Tue Oct 25 08:16:00 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Tue, 25 Oct 2022 08:16:00 -0700
Subject: [theory-seminar] Theory Lunch 27 Oct: Victor Lecomte
Message-ID:
Hi all,
We continue Theory lunch this Thursday 27 Oct, at noon in the usual
location (engineering quad
treewell).
We'll have lunch and socializing from 12 to 12:30, and then a talk (details
below) from 12:30 to 1.
Cheers,
Jay
*Speaker:* Victor Lecomte, Stanford
*Title:* The composition complexity of majority
*Abstract:* In this talk, we'll look at computing majority as a composition
of local functions: Maj_n = h(g_1, ..., g_m), where each g_j: {0,1}^n →
{0,1} is an arbitrary function that queries only k << n variables, and h:
{0,1}^m → {0,1} is an arbitrary combining function. It turns out we need m
≥ Ω(n/k * log k) inner functions, instead of the ideal m = n/k. This
recovers as a corollary (and via an entirely different proof) the best
known lower bound for bounded-width branching programs for majority. It is
also the first step in a plan that we propose for breaking a longstanding
barrier in lower bounds for small-depth Boolean circuits.
Novel aspects of our proof include sharp bounds on the information lost as
computation flows through the inner functions g_j, and the bootstrapping of
lower bounds for a multi-output function (Hamming weight) into lower bounds
for a single-output one (majority).
Based on a joint work with Prasanna Ramakrishnan and Li-Yang Tan here at
Stanford.
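To see why roughly (n/k) * log k inner functions suffice on the upper-bound side: each k-bit block can emit its Hamming weight as ceil(log2(k+1)) one-bit functions, and the combining function h sums the decoded weights and thresholds. A hypothetical sketch of that matching upper bound (an illustration only, not the paper's lower-bound construction):

```python
import math

def compose_majority(bits, k):
    """Evaluate Maj_n as h(g_1, ..., g_m), where each inner function g_j
    queries only one k-bit block and outputs a single bit (one binary digit
    of the block's Hamming weight), and h sees only the g-outputs."""
    n = len(bits)
    w = math.ceil(math.log2(k + 1))   # bits needed for a weight in 0..k

    # Inner layer: about (n/k) * log k one-bit functions in total.
    g_outputs = []
    for i in range(0, n, k):
        weight = sum(bits[i:i + k])
        g_outputs.extend((weight >> j) & 1 for j in range(w))

    # Combining function h: decode each block's weight, sum, and threshold.
    total = 0
    for i in range(0, len(g_outputs), w):
        total += sum(b << j for j, b in enumerate(g_outputs[i:i + w]))
    return int(total > n / 2)         # strict majority

assert compose_majority([1, 1, 1, 0, 0, 0, 1, 1], k=4) == 1
assert compose_majority([0, 1, 0, 0, 1, 0, 0, 0], k=4) == 0
```

The result in the talk shows this log k overhead per block is unavoidable: no cleverer choice of one-bit inner functions can get down to the ideal m = n/k.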
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tpulkit at stanford.edu Tue Oct 25 23:56:12 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 26 Oct 2022 06:56:12 +0000
Subject: [theory-seminar] Talk by Yann Collet
Message-ID:
Hi all
We will be having a guest talk by Yann Collet in the EE274 Data Compression class on Thursday, 10/27, 4:30-6pm at STLC 118. Yann is a data compression expert from Meta and the author of many widely used compressors such as zstd, FSE, and lz4. He will be talking about "Designing Data Compression solutions for deployment in large cloud infrastructures". Here is a brief abstract:
Congratulations! You have developed a great new compression algorithm!
What will it take to get it deployed in a Data Center?
The talk will be open to members of the Stanford community. Feel free to join!
Best regards
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lytan at stanford.edu Wed Oct 26 13:25:01 2022
From: lytan at stanford.edu (Li-Yang Tan)
Date: Wed, 26 Oct 2022 13:25:01 -0700
Subject: [theory-seminar] Talk by Bernardo Subercaseaux this Wednesday
1:30pm
In-Reply-To:
References:
Message-ID:
Just a reminder that Bernardo's talk starts in about 10 mins. Also, we've
relocated to Gates 105.
On Monday, October 24, 2022, Li-Yang Tan wrote:
> Hi all,
>
> Bernardo Subercaseaux is visiting us
> this Wednesday (10/26) and will be giving a talk at 1:30pm in Gates 104.
> Everyone's welcome to attend.
>
> *Title:* More power to computers, more power to the people: from
> computer-aided mathematics to formal explainability.
>
> *Abstract: *
> This talk aims to introduce two CS topics that appear completely disjoint
> yet have exciting commonalities:
> Firstly, computers took a fundamental role in combinatorial proofs for the
> first time in 1976, with the ground-breaking Four Color Theorem, and since
> then have been essential for a number of results, including the computation
> of the packing-chromatic number of the infinite square lattice, which I
> finished with Marijn Heule this year and will explain in some detail.
> Secondly, given the ubiquitous nature of ML classifiers affecting
> our lives, the problem of explaining the decisions that algorithms make and
> affect us all has become of paramount importance. I advocate for the area
> of explainability/interpretability to take a formal direction, in which
> "explaining decisions" takes clear semantics and has surprisingly become a
> source of fascinating TCS problems. In particular, I will talk about
> decision trees, which appear to be at an exciting frontier between the
> interpretable and the non-interpretable: What forms of complexity can
> decision trees encode? Are they easily learnable? Can we improve our
> understanding of the sub-classes of CNF formulas that can be succinctly
> represented as decision trees? etc. Both in the areas of mathematics and
> explainability of AI, I believe that delegating power to computers can make
> us advance in exciting directions, and provide nice computational problems
> along the way.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Thu Oct 27 09:37:06 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Thu, 27 Oct 2022 09:37:06 -0700
Subject: [theory-seminar] Theory Lunch 27 Oct: Victor Lecomte
In-Reply-To:
References:
Message-ID:
Reminder: This is happening at noon today.
On Tue, Oct 25, 2022 at 8:16 AM Jay Mardia wrote:
> Hi all,
>
> We continue Theory lunch this Thursday 27 Oct, at noon in the usual
> location (engineering quad
> treewell).
> We'll have lunch and socializing from 12 to 12:30, and then a talk
> (details below) from 12:30 to 1.
>
> Cheers,
> Jay
>
> *Speaker:* Victor Lecomte, Stanford
> *Title:* The composition complexity of majority
>
> *Abstract:* In this talk, we'll look at computing majority as a
> composition of local functions: Maj_n = h(g_1, ..., g_m) where each g_j:
> {0,1}^n → {0,1} is an arbitrary function that queries only k << n
> variables, and h: {0,1}^m → {0,1} is an arbitrary combining function. It
> turns out we need m ≥ Ω(n/k * log k) inner functions, instead of the ideal
> m = n/k. This recovers as a corollary (and via an entirely different proof)
> the best known lower bound for bounded-width branching programs for
> majority. It is also the first step in a plan that we propose for breaking
> a longstanding barrier in lower bounds for small-depth Boolean circuits.
>
> Novel aspects of our proof include sharp bounds on the information lost as
> computation flows through the inner functions g_j, and the bootstrapping of
> lower bounds for a multi-output function (Hamming weight) into lower bounds
> for a single-output one (majority).
>
> Based on a joint work with Prasanna Ramakrishnan and Li-Yang Tan here at
> Stanford.
>
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tpulkit at stanford.edu Mon Oct 31 14:22:13 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Mon, 31 Oct 2022 21:22:13 +0000
Subject: [theory-seminar] "Speech and audio compression in the neural era" -
Jan Skoglund (Fri, 11/04, 2-3pm, Packard 202)
Message-ID: <35C4ED56-B535-4B29-8F72-476D570D05A9@stanford.edu>
Hi everyone
We will continue with the Information Theory Forum (IT Forum) talk this week @Fri, 11/04, 2-3pm (back to the usual timing :)) with Jan Skoglund. This talk will be in-person at Packard 202 and will also be hosted via Zoom. Please join in-person for some coffee and snacks before the talk.
If you want to receive reminder emails, please join the IT Forum mailing list.
Talk Details:
Speech and audio compression in the neural era
Jan Skoglund, Google
Fri, 4th Nov, 2-3pm PT
Packard 202 and Zoom Link
pwd: 032264
Abstract:
In this talk we'll discuss data compression of digital audio signals such as speech and music. After a general introduction to the area of audio compression, we'll focus on speech coding - the compression of speech. Modern advances in AI and deep learning methods have proven remarkably successful in speech processing applications such as speech recognition and synthesis, and the talk will present some recent progress in low-rate speech coding using neural modeling techniques.
Bio:
Jan Skoglund leads a team at Google in San Francisco, CA, developing speech and audio signal processing components for capture, real-time communication, storage, and rendering. These components have been deployed in Google software products such as Meet and hardware products such as Chromebooks. After receiving his Ph.D. degree at Chalmers University of Technology in Sweden, 1998, he worked on low bit rate speech coding at AT&T Labs-Research, Florham Park, NJ. He was with Global IP Solutions (GIPS), San Francisco, CA, from 2000 to 2011 working on speech and audio processing, such as compression, enhancement, and echo cancellation, tailored for packet-switched networks. GIPS' audio and video technology was found in many deployments by, e.g., IBM, Google, Yahoo, WebEx, Skype, and Samsung, and was open-sourced as WebRTC after a 2011 acquisition by Google. Since then he has been in the Open Codecs team of Chrome at Google.
Best
Pulkit
-------------- next part --------------
An HTML attachment was scrubbed...
URL: