From wyma at stanford.edu Tue Sep 6 17:14:38 2022
From: wyma at stanford.edu (Weiyun Ma)
Date: Wed, 7 Sep 2022 00:14:38 +0000
Subject: [theory-seminar] Quals talk: clustering with hierarchies
Message-ID:
Hi all,
I will be giving my quals talk on clustering problems with hierarchical structures tomorrow (Wed 9/7) at 3pm in Gates 100. I will talk about several papers and in particular focus on this paper on fitting distances by tree metrics. Hope to see you there!
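For a flavor of what "fitting distances by tree metrics" involves (a toy sketch of my own, not from the paper the talk will cover): a metric is realizable as the path metric of an edge-weighted tree exactly when it satisfies the four-point condition, which is cheap to check.

```python
from itertools import combinations

def is_tree_metric(d, points, tol=1e-9):
    """Four-point condition: d is a tree metric iff for every quadruple,
    the two largest of the three pairwise-sum combinations are equal."""
    for w, x, y, z in combinations(points, 4):
        sums = sorted([d[w][x] + d[y][z], d[w][y] + d[x][z], d[w][z] + d[x][y]])
        if abs(sums[2] - sums[1]) > tol:
            return False
    return True

# Path metric of a star tree: center c, leaves a, b, e, unit edge lengths.
pts = ["a", "b", "e", "c"]
d = {p: {q: 0.0 for q in pts} for p in pts}
for leaf in "abe":
    d[leaf]["c"] = d["c"][leaf] = 1.0
for p, q in combinations("abe", 2):
    d[p][q] = d[q][p] = 2.0
assert is_tree_metric(d, pts)
```

The fitting problem the paper studies is harder: given a metric that fails this test, find the closest tree metric.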
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From wyma at stanford.edu Wed Sep 7 14:50:32 2022
From: wyma at stanford.edu (Weiyun Ma)
Date: Wed, 7 Sep 2022 21:50:32 +0000
Subject: [theory-seminar] Quals talk: clustering with hierarchies
In-Reply-To:
References:
Message-ID:
Just a note that the talk will start at 3:10pm. There is also a Zoom link:
https://stanford.zoom.us/j/99327624581?pwd=K0duN3JheXlJQVBHaDMvU1JpR1V3QT09
________________________________
From: Weiyun Ma
Sent: Tuesday, September 6, 2022 5:14 PM
To: theory-seminar at lists.stanford.edu
Subject: Quals talk: clustering with hierarchies
Hi all,
I will be giving my quals talk on clustering problems with hierarchical structures tomorrow (Wed 9/7) at 3pm in Gates 100. I will talk about several papers and in particular focus on this paper on fitting distances by tree metrics. Hope to see you there!
Best,
Anna
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tpulkit at stanford.edu Wed Sep 21 13:31:14 2022
From: tpulkit at stanford.edu (Pulkit Tandon)
Date: Wed, 21 Sep 2022 20:31:14 +0000
Subject: [theory-seminar] New Course (Aut 22-23): EE274: Data Compression,
Theory and Applications
Message-ID:
New course announcement: EE274 (Autumn 2022-23)
Course title: Data Compression, Theory and Applications; Website
Lecturers: Kedar Tatwawadi, Shubham Chandak, Tsachy Weissman
TA: Pulkit Tandon
Description: The amount of data being generated, stored and communicated by humanity is growing at unprecedented rates, currently in the dozens of zettabytes (1 zettabyte = 1 trillion gigabytes) per year by the most conservative of estimates. Data compression, the field dedicated to representing information succinctly, is playing an increasingly critical role in enabling this growth. Progress in storage and communication technologies has led to enhanced capabilities, with a perpetual cat and mouse chase between growing the ability to handle more data and the amounts of it required by new technologies. We are all painfully aware of this conundrum as we run out of space on our phones due to the selfies, boomerang videos and documents we collect.
The goal of this course is to provide an understanding of how data compression enables representing all of this information in a succinct manner. Both theoretical and practical aspects of compression will be covered. A major component of the course is learning through doing - the students will work on a pedagogical data compression library and implement specific compression techniques.
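To give a flavor of the kind of exercise such a library might contain (this toy example is my own, not taken from the course materials), here is a minimal run-length encoder, one of the simplest lossless compression techniques:

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Run-length encode: collapse repeated symbols into (symbol, count) pairs."""
    pairs: list[tuple[str, int]] = []
    for ch in data:
        if pairs and pairs[-1][0] == ch:
            pairs[-1] = (ch, pairs[-1][1] + 1)
        else:
            pairs.append((ch, 1))
    return pairs

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Invert rle_encode by repeating each symbol count times."""
    return "".join(ch * n for ch, n in pairs)

assert rle_decode(rle_encode("aaabbbbcc")) == "aaabbbbcc"
```

Run-length encoding only wins on highly repetitive data; the course's theory component explains when and why more sophisticated schemes (entropy coding, dictionary methods) do better.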
Who is the course for: The course is suitable for both undergraduate and graduate students with basic probability and programming background. Please contact >, >, > or > in case of questions!
More details: https://stanforddatacompressionclass.github.io/Fall22/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: IMG-20220920-WA0001.jpg
Type: image/jpeg
Size: 97213 bytes
Desc: IMG-20220920-WA0001.jpg
URL:
From wxliang at stanford.edu Thu Sep 22 14:57:40 2022
From: wxliang at stanford.edu (Weixin Liang)
Date: Thu, 22 Sep 2022 21:57:40 +0000
Subject: [theory-seminar] [Statistics Seminar] When will you become the best
reviewer of your own papers?
Message-ID:
Exciting talk from Weijie Su (UPenn Wharton) on improving peer review at 4:30pm, September 27th in Sloan 380Y.
Title: When will you become the best reviewer of your own papers? A truthful owner-assisted scoring mechanism
Speaker: Weijie Su, University of Pennsylvania
Abstract: In 2014, NeurIPS received 1,678 paper submissions, while this number increased to 10,411 in 2022, putting a tremendous strain on the peer review process. In this talk, we attempt to address this challenge starting by considering the following scenario: Alice submits a large number of papers to a machine learning conference and knows about the ground-truth quality of her papers. Given noisy ratings provided by independent reviewers, can Bob obtain accurate estimates of the ground-truth quality of the papers by asking Alice a question about the ground truth? First, if Alice would truthfully answer the question because by doing so her payoff as additive convex utility over all her papers is maximized, we show that the questions must be formulated as pairwise comparisons between her papers. Moreover, if Alice is required to provide a ranking of her papers, which is the most fine-grained question via pairwise comparisons, we prove that she would be truth-telling. By incorporating the ground-truth ranking, we show that Bob can obtain an estimator with the optimal squared error in certain regimes based on any possible ways of truthful information elicitation. Moreover, the estimated ratings are substantially more accurate than the raw ratings when the number of papers is large and the raw ratings are very noisy. Finally, we conclude the talk with several extensions and some refinements for practical considerations.
This presentation is based on arXiv:2206.08149 and arXiv:2110.14802.
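As I read the abstract, the core adjustment is a least-squares projection of the noisy review scores onto orderings consistent with the author's claimed ranking, i.e. isotonic regression; the sketch below (my own illustration, not the papers' exact estimator) uses the classic pool-adjacent-violators algorithm.

```python
def isotonic_regression(y):
    """Pool Adjacent Violators: least-squares projection of y onto
    nondecreasing sequences. Blocks store running (sum, count)."""
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while block means decrease (a violation).
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# Raw review scores listed in the author's claimed worst-to-best order:
raw = [2.0, 5.0, 4.0, 6.0]
assert isotonic_regression(raw) == [2.0, 4.5, 4.5, 6.0]
```

The projection smooths out ratings that disagree with the claimed ranking, which is where the accuracy gain over raw scores comes from when ratings are noisy.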
--
Best Regards,
Weixin Liang
Department of Computer Science, Stanford University
https://ai.stanford.edu/~wxliang/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From sidford at stanford.edu Fri Sep 23 09:46:42 2022
From: sidford at stanford.edu (Aaron Sidford)
Date: Fri, 23 Sep 2022 09:46:42 -0700
Subject: [theory-seminar] Optimization Algorithms - New Course Announcement
Message-ID:
Hi all,
This quarter I am very excited to be teaching a new course on "Optimization
Algorithms." The course is going to cover fundamental optimization methods
and their provable rates for solving canonical continuous optimization
problems. If you want to know more about the foundational theory and tools
for designing and analyzing continuous optimization methods (e.g. gradient
descent, stochastic mirror descent, Newton's method, momentum, etc.) then
this course might be for you!
The course description is below and the course syllabus is attached. Let me
know if you have any questions, I hope that you all had a great summer, and
I look forward to seeing you soon!
All the best,
Aaron
CME 334 / CS 369O / MS&E 312: Optimization Algorithms
Mon, Wed 1:30 PM - 2:50 PM at 200-203
Instructor: Aaron Sidford (sidford at stanford.edu)
Fundamental theory for solving continuous optimization problems with
provable efficiency guarantees. Coverage of both canonical optimization
methods and techniques (e.g. gradient descent, mirror descent, stochastic
methods, acceleration, higher-order methods) and canonical optimization
problems (critical point computation for non-convex functions, smooth
convex function minimization, regression, linear programming, etc.).
Focus on provable rates for solving broad classes of prevalent problems,
including both classic problems and those motivated by large-scale
computational concerns. Discussion of computational ramifications,
fundamental information-theoretic limits, and problem structure.
Prerequisites: linear algebra, multivariable calculus, probability, and
proofs.
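As a taste of the first method on the list (a toy sketch of my own, not course material), plain gradient descent on a smooth one-dimensional objective already exhibits the linear convergence rates the course analyzes in general:

```python
def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 100) -> float:
    """Plain gradient descent: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3);
# each step shrinks the error by a factor of (1 - 2*lr) = 0.8.
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
assert abs(x_star - 3.0) < 1e-6
```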
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: sidford_2022fa_cs369O_syllabus_v1.pdf
Type: application/pdf
Size: 88377 bytes
Desc: not available
URL:
From jmardia at stanford.edu Fri Sep 23 15:35:31 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Fri, 23 Sep 2022 15:35:31 -0700
Subject: [theory-seminar] Volunteer speakers for Theory Lunch
Message-ID:
Hi all,
The new quarter (and new year) begins soon, and we will resume weekly
Theory Lunches on Thursdays at noon from next week (29 Sept). Based on
responses to a survey, we will continue with Theory Lunch outdoors in the
treewell in the engineering quad.
This would be a great time for anyone who wants to give a talk (half hour
whiteboard talk) to shoot me an email. We have several open spots starting
the week of October 27.
Cheers,
Jay Mardia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Tue Sep 27 09:10:22 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Tue, 27 Sep 2022 09:10:22 -0700
Subject: [theory-seminar] Theory Lunch 29 Sep: Guy Rothblum
Message-ID:
Hi all,
Theory lunch restarts this Thursday 29 Sep, at noon in the usual
location (engineering quad treewell). We'll have lunch and socializing
from 12 to 12:30, and then a talk (details below) from 12:30 to 1.
Cheers,
Jay
---------
Title: Verifying The Unseen: Interactive Proofs for Distribution
Properties
Speaker: Guy Rothblum, Apple
Abstract: Given i.i.d. samples drawn from an unknown distribution over a
large domain [N], approximating several basic quantities, such as the
distribution's support size and its Shannon Entropy, requires at least
roughly (N / \log N) samples [Valiant and Valiant, STOC 2011].
Suppose, however, that we can interact with a powerful but untrusted
prover, who knows the entire distribution. Can we use such a prover to
approximately *verify* such statistical quantities more efficiently?
We show that this is indeed the case: a distribution's support size, its
entropy, and its distance from the uniform distribution, can all be
approximately verified via a 2-message interactive proof, where the
verifier's running time, the sample complexity, and the communication
complexity are all close to \sqrt{N}.
More generally, we give a tolerant interactive proof system with similar
parameters for verifying a distribution's proximity to any label-invariant
property (any property that is invariant to re-labeling of the elements in
the distribution's support).
Joint work with Tal Herman.
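To see why verification is attractive here (a toy sketch of my own, not from the paper): the naive plug-in estimator, which simply counts distinct observed symbols, badly undercounts the support until the sample size approaches the domain size N.

```python
import random

def plugin_support_size(samples) -> int:
    # Naive plug-in estimate: count distinct observed symbols.
    return len(set(samples))

random.seed(0)
N = 10_000  # domain size; true distribution is uniform on {0, ..., N-1}
few = [random.randrange(N) for _ in range(1_000)]     # far fewer samples than N
many = [random.randrange(N) for _ in range(100_000)]  # far more samples than N

assert plugin_support_size(few) < N        # severe undercount
assert plugin_support_size(many) > 0.95 * N
```

The talk's result says a verifier interacting with an untrusted prover can certify such quantities with resources near sqrt(N), well below the unconditional N / log N sample lower bound.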
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From cpeale at stanford.edu Wed Sep 28 15:27:35 2022
From: cpeale at stanford.edu (Charlotte Gates Peale)
Date: Wed, 28 Sep 2022 22:27:35 +0000
Subject: [theory-seminar] Algorithmic Fairness Seminar 10/3: Roshni Sahoo,
"Learning from a Biased Sample"
Message-ID:
Hi everyone,
This year's first algorithmic fairness seminar will be happening this coming Monday (10/3) at 2pm in Fujitsu. The seminar is open to anyone interested in recent research developments in algorithmic fairness and adjacent research areas. If this sounds exciting, we encourage you to attend as well as share with other relevant groups.
The seminar will be happening every Monday at 2pm in Fujitsu (4th floor of Gates). Coffee, tea, and various snacks will be provided.
Talk details are below, and check the website for more info! If you would like to join the mailing list, just let me (cpeale at stanford.edu) and/or Chris (csj93 at stanford.edu) know.
Speaker: Roshni Sahoo
Talk Title: Learning from a Biased Sample
Abstract: The empirical risk minimization approach to data-driven decision making assumes that we can learn a decision rule from training data drawn under the same conditions as the ones we want to deploy it under. However, in a number of settings, we may be concerned that our training sample is biased, and that some groups (characterized by either observable or unobservable attributes) may be under- or over-represented relative to the general population; and in this setting empirical risk minimization over the training set may fail to yield rules that perform well at deployment. Building on concepts from distributionally robust optimization and sensitivity analysis, we propose a method for learning a decision rule that minimizes the worst-case risk incurred under a family of test distributions whose conditional distributions of outcomes Y given covariates X differ from the conditional training distribution by at most a constant factor, and whose covariate distributions are absolutely continuous with respect to the covariate distribution of the training data. We apply a result of Rockafellar and Uryasev to show that this problem is equivalent to an augmented convex risk minimization problem. We give statistical guarantees for learning a robust model using the method of sieves and propose a deep learning algorithm whose loss function captures our robustness target. We empirically validate our proposed method in simulations and a case study with the MIMIC-III dataset. (https://arxiv.org/abs/2209.01754)
Hope to see you there!
Charlotte Peale and Chris Jung (seminar coordinators)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From jmardia at stanford.edu Thu Sep 29 09:03:53 2022
From: jmardia at stanford.edu (Jay Mardia)
Date: Thu, 29 Sep 2022 09:03:53 -0700
Subject: [theory-seminar] Theory Lunch 29 Sep: Guy Rothblum
In-Reply-To:
References:
Message-ID:
Reminder: This is happening in approximately 3 hours. See you there.
On Tue, Sep 27, 2022 at 9:10 AM Jay Mardia wrote:
> Hi all,
>
> Theory lunch restarts this Thursday 29 Sep, at noon in the usual
> location (engineering quad treewell). We'll have lunch and socializing
> from 12 to 12:30, and then a talk (details below) from 12:30 to 1.
>
> Cheers,
> Jay
>
> ---------
> Title: Verifying The Unseen: Interactive Proofs for Distribution
> Properties
> Speaker: Guy Rothblum, Apple
>
> Abstract: Given i.i.d. samples drawn from an unknown distribution over
> a large domain [N], approximating several basic quantities, such as the
> distribution's support size and its Shannon Entropy, requires at least
> roughly (N / \log N) samples [Valiant and Valiant, STOC 2011].
>
> Suppose, however, that we can interact with a powerful but untrusted
> prover, who knows the entire distribution. Can we use such a prover to
> approximately *verify* such statistical quantities more efficiently?
>
> We show that this is indeed the case: a distribution's support size, its
> entropy, and its distance from the uniform distribution, can all be
> approximately verified via a 2-message interactive proof, where the
> verifier's running time, the sample complexity, and the communication
> complexity are all close to \sqrt{N}.
>
> More generally, we give a tolerant interactive proof system with similar
> parameters for verifying a distribution's proximity to any label-invariant
> property (any property that is invariant to re-labeling of the elements in
> the distribution's support).
>
> Joint work with Tal Herman.
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From gvaliant at cs.stanford.edu Thu Sep 29 11:04:54 2022
From: gvaliant at cs.stanford.edu (Gregory Valiant)
Date: Thu, 29 Sep 2022 11:04:54 -0700
Subject: [theory-seminar] Don Knuth talking today at Combinatorics seminar
@3pm
Message-ID:
This Thursday, September 29 at 3pm we have a combinatorics seminar talk by
Don Knuth on Sierpinski Simplex graphs.
Details for the talk are below.
What: Stanford Combinatorics Seminar
When: Thursday, September 29, 3pm-4pm
Room: 384-H (Building 380, Fourth Floor, Room H)
Speaker: Don Knuth (Stanford)
Title: Sierpinski Simplex graphs
Abstract: The "Sierpinski triangle graph" (based on a fractal that
Mandelbrot liked to call the "Sierpinski gasket") and the analogous
"Sierpinski tetrahedron graph" are well known. They have a natural
generalization to simplexes of any dimension. Several elementary properties
are easily proved, and some intriguing open problems also arise.
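For the curious, a closely related and easily computed family is the Sierpinski graph S(n, 3) (this brute-force sketch is my own illustration, not from the talk; the gasket graph arises from these by edge contractions). A standard word-based adjacency rule makes the construction a few lines:

```python
from itertools import product

def sierpinski_edges(n: int, k: int = 3) -> set:
    """Edge set of the Sierpinski graph S(n, k): vertices are words of
    length n over {0..k-1}; u ~ v iff they share a prefix, differ at
    one position h, and beyond h each repeats the other's h-th letter."""
    verts = list(product(range(k), repeat=n))
    edges = set()
    for u in verts:
        for v in verts:
            if u >= v:
                continue
            for h in range(n):
                if (u[:h] == v[:h] and u[h] != v[h]
                        and all(c == v[h] for c in u[h + 1:])
                        and all(c == u[h] for c in v[h + 1:])):
                    edges.add((u, v))
    return edges

assert len(sierpinski_edges(1)) == 3   # S(1,3) is the triangle K3
assert len(sierpinski_edges(2)) == 12  # three triangles plus 3 bridges
```

The edge count 3(3^n - 1)/2 follows from the recursion: three copies of S(n-1, 3) plus three connecting edges.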
The combinatorics seminar will be weekly at the same time and place. The
seminar page is here:
https://mathematics.stanford.edu/events/combinatorics
Next week's speaker is Liana Yepremyan (Emory), speaking on partitioning
cubic graphs into two isomorphic linear forests.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From gblanc at stanford.edu Fri Sep 30 16:37:55 2022
From: gblanc at stanford.edu (Guy Blanc)
Date: Fri, 30 Sep 2022 16:37:55 -0700
Subject: [theory-seminar] Join the Algorithms and Friends Mailing List
Message-ID:
Hi everyone,
We are bringing back Algorithms and Friends! For those of you not familiar,
Algorithms and Friends is an initiative to increase interaction between the
theory group and other Stanford researchers. As part of this initiative, we
try to help researchers from other areas with their algorithmic problems.
Whenever someone submits an algorithmic question, it is circulated on a mailing
list for members of the theory group to help address. If you would like to
be part of this mailing list, you can sign up here:
https://mailman.stanford.edu/mailman/listinfo/algorithms-and-friends
We also organize a seminar series where researchers (professors, PhD
students, etc.) from varied backgrounds give a talk. In the past, we have
had interactions with researchers from diverse areas, including Economics,
the Medical School, Electrical Engineering, and Political Science. We
are currently looking for speakers for Fall quarter. If you have
suggestions for speakers or any general ideas on how to improve, please let
us know!
Cheers,
Guy, Megha, Pras, Shivam
-------------- next part --------------
An HTML attachment was scrubbed...
URL: