[theory-seminar] The Practice of Theory Research - Final Talks

Omer Reingold reingold at stanford.edu
Thu Mar 3 11:16:03 PST 2022


Hello everybody,

CS 163 is a "research methods in theory of CS" course. Five groups have been
doing research throughout the quarter, and they will present their progress
next week. I append the talk schedule as well as the abstracts. The students
would really appreciate an external audience, so please let me know if you
can make it to any of the talks (even attending a single talk would be
great).

Thanks,
Omer

*Tuesday, March 8th*



11:30-12: *Low-Bandwidth Evaluation of High-Degree Polynomials Using
High-Rate Error-Correcting Codes*, Connor Meany, Agustin Otero, Abraham
Ryzhik

12-12:30: *Weak Group Isomorphism in Almost-Quadratic Time*, Hanson Hao,
Ben Heller, Wenqi Li

12:30-1: *Card Guessing Game with Riffle Shuffles and an Adversarial
Perspective*, Suzy Lou, Josh Nkoy, Joey Rivkin, Colin Sullivan



*Thursday, March 10th*



11:30-12: *More Robust Distributed Merkle's Puzzles*, Christie Di and Jamie
Song

12-12:30: *Neural Networks Generalize from Self-Averaging Sub-classifiers
in the Same Way As Adaptive Boosting*, Peter Chatain and Michael Sun



*Abstracts:*


Title: *Low-Bandwidth Evaluation of High-Degree Polynomials Using High-Rate
Error-Correcting Codes*

Speakers: Connor Meany, Agustin Otero, Abraham Ryzhik



Abstract: The problem of computing a function F: F^k → F on encoded data
c = C(x) ∈ F^n is the most natural generalization of asking for a decoding,
i.e., recovering x itself. In [SW22] the problem is solved, for F linear over
F, using less encoded information than is needed to trivially decode first
and then evaluate: \tilde{c} --C^{-1}--> x --F--> F(x). We show how to
maintain bandwidth gains over this trivial approach while computing nonlinear
functions of the form F(x) = \sum_i f_i(x_i), where each f_i ∈ F[X] is a
polynomial of degree at most p ∈ N for all i ∈ [k], without sustaining a
factor-of-p loss in the encoding rate k/n of C: F^k → F^n. To achieve this,
we show that the p-th power f(x)^p of the encoding polynomial is not much
harder to interpolate than f(x) itself, while tolerating errors as in [SW22].
This stands to improve the potency of distributed storage and parallel
computation by broadening the class of functions that can be computed on
encoded data of desirable rate at low download bandwidth cost.
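
For intuition, here is a minimal Python sketch of the linear baseline the
abstract starts from; it is a toy over a small prime field with made-up
parameters, not the [SW22] scheme or the new nonlinear extension. With a
linear code c = xG and a linear target F(x) = <a, x>, any w solving Gw = a
yields F(x) = <w, c> directly from the codeword, with no decoding step.

    import numpy as np

    p = 101                                  # a small prime field GF(101)
    k, n = 3, 6                              # message length, code length
    rng = np.random.default_rng(0)
    G = rng.integers(0, p, size=(k, n))      # generator matrix (assumed full rank)
    x = rng.integers(0, p, size=k)           # the message
    c = (x @ G) % p                          # the stored codeword
    a = rng.integers(0, p, size=k)           # linear target F(x) = <a, x> mod p

    def solve_mod(G, a, p):
        """Find w with G @ w == a (mod p) by Gaussian elimination over GF(p)."""
        k, n = G.shape
        M = np.concatenate([G % p, a.reshape(k, 1) % p], axis=1)
        row, pivots = 0, []
        for col in range(n):
            piv = next((r for r in range(row, k) if M[r, col] % p), None)
            if piv is None:
                continue
            M[[row, piv]] = M[[piv, row]]
            M[row] = (M[row] * pow(int(M[row, col]), p - 2, p)) % p  # Fermat inverse
            for r in range(k):
                if r != row:
                    M[r] = (M[r] - M[r, col] * M[row]) % p
            pivots.append(col)
            row += 1
        w = np.zeros(n, dtype=np.int64)      # free variables set to zero
        for r, col in enumerate(pivots):
            w[col] = M[r, n]
        return w

    w = solve_mod(G, a, p)
    assert (c @ w) % p == (x @ a) % p        # F(x) read off the codeword alone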



Title: *Weak Group Isomorphism in Almost-Quadratic Time*

Speakers: Hanson Hao, Ben Heller, Wenqi Li



Abstract: In [DW2020], the group isomorphism problem is solved in nearly
linear time for a dense set of group orders. Thus, the barrier to improving
group isomorphism lies in the group orders not contained in this dense set;
specifically, the hardest case is thought to be that of p-groups. Although
there are few reductions formalizing that this is the hardest case, [GQ2021]
makes some progress on this path with a reduction among p-groups. The group
isomorphism problem is of independent interest, but it is also motivated by
the existence of a reduction from group isomorphism to graph isomorphism
[Miller 1979], the complexity of which is a major open problem in complexity
theory. Furthermore, the hard case of group isomorphism, p-groups, is thought
to yield a hard case of graph isomorphism under the reduction from groups to
graphs, although no known reduction makes this provably true. At the moment,
nobody really knows how to handle the case of p-groups, while for most group
orders outside this case there is a nearly-linear-time algorithm to check for
isomorphism. In order to find new angles on this problem, we investigate a
weaker notion than isomorphism.
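
To make the input format concrete, here is a deliberately naive Python
baseline, nothing like the nearly-linear-time algorithm of [DW2020] and not
the weak notion studied here: groups are given as Cayley tables over
{0, ..., m-1}, and isomorphism is checked by brute force over all bijections.

    from itertools import permutations

    def isomorphic(T1, T2):
        """T1, T2: m x m Cayley tables over elements 0..m-1."""
        m = len(T1)
        if m != len(T2):
            return False
        return any(all(phi[T1[a][b]] == T2[phi[a]][phi[b]]
                       for a in range(m) for b in range(m))
                   for phi in permutations(range(m)))

    # Z_4 versus the Klein four-group (XOR on {0,1,2,3}): same order, not isomorphic.
    Z4 = [[(a + b) % 4 for b in range(4)] for a in range(4)]
    V4 = [[a ^ b for b in range(4)] for a in range(4)]
    print(isomorphic(Z4, V4), isomorphic(Z4, Z4))   # False True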



Title: *Card Guessing Game with Riffle Shuffles and an Adversarial
Perspective*

Speakers: Suzy Lou, Josh Nkoy, Joey Rivkin, Colin Sullivan



Abstract: This paper explores the card guessing game on a riffle-shuffled
deck. In this game, a dealer orders a deck of n cards numbered 1, . . . ,
n, and a guesser guesses the cards one at a time, attempting to maximize
their number of correct guesses in expectation. The paper extends previous
work on a variant of the card guessing game where the dealer is limited to
a single riffle shuffle. We give an optimal guesser strategy against
arbitrarily many riffle shuffles and compute a rough bound on the expected
number of correct guesses under this strategy. We also introduce an
adversarial analogue of a riffle-shuffle dealer, in which the dealer can
order the cards in any way consistent with some fixed number of riffle
shuffles. We give an optimal adversarial dealer strategy in this model and
compute an upper bound on the number of correct guesses against this dealer.
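
For intuition, a small Monte Carlo sketch of the game under a single
Gilbert-Shannon-Reeds (GSR) riffle shuffle, with a deliberately naive guesser
that ignores feedback; this is a baseline for comparison only, not the
optimal strategy from the talk.

    import random

    def riffle(deck):
        """One Gilbert-Shannon-Reeds riffle shuffle."""
        n = len(deck)
        cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2) cut
        left, right = deck[:cut], deck[cut:]
        out = []
        while left or right:
            # drop from a pile with probability proportional to its remaining size
            if random.random() < len(left) / (len(left) + len(right)):
                out.append(left.pop(0))
            else:
                out.append(right.pop(0))
        return out

    def naive_game(n=52):
        """Guess 1, 2, ..., n in order against a once-riffled sorted deck."""
        deck = riffle(list(range(1, n + 1)))
        return sum(guess == card for guess, card in zip(range(1, n + 1), deck))

    trials = 10_000
    print(sum(naive_game() for _ in range(trials)) / trials)  # avg correct guesses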



Title: *More Robust Distributed Merkle's Puzzles*

Speakers: Christie Di and Jamie Song



Abstract: In 1974, Merkle proposed the first key-exchange scheme based on
symmetric primitives, called “Merkle’s Puzzles”, which allows two players to
build a secure channel against an eavesdropper. In 2021, Dinur and Hassen
proposed a distributed Merkle’s Puzzles protocol, but it fails to work when a
constant fraction of the players are semi-honest or malicious. We will first
present a basic protocol that should be robust against any number of
semi-honest or malicious players, and then proceed with two other protocols
with better query complexity. Our final goal is to prove optimality in a
model with a constant proportion of semi-honest and malicious players, or to
give an impossibility result showing the limits of distributed Merkle's
Puzzles in that setting. Our work could potentially extend to other highly
connected network models.
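
For background, a toy Python rendering of the classical two-party Merkle's
Puzzles that the talk builds on; the keyspace is made artificially small, and
this is the 1974 scheme, not the distributed protocol or the new robust
variants.

    import hashlib, random, secrets

    N_PUZZLES, KEYSPACE = 64, 2**12   # eavesdropper expects ~N_PUZZLES*KEYSPACE/4 work
    MAGIC = b"PUZL"                   # recognizable plaintext prefix

    def weak_encrypt(msg: bytes, key: int) -> bytes:
        """XOR with a hash-derived pad; applying it twice decrypts."""
        pad = hashlib.sha256(key.to_bytes(4, "big")).digest()
        return bytes(a ^ b for a, b in zip(msg, pad))

    # Alice publishes puzzles, each hiding (id, session key) under a weak key.
    table = {i: secrets.token_bytes(16) for i in range(N_PUZZLES)}
    puzzles = [weak_encrypt(MAGIC + i.to_bytes(4, "big") + table[i],
                            random.randrange(KEYSPACE))
               for i in range(N_PUZZLES)]

    # Bob picks one puzzle and brute-forces its weak key.
    target = random.choice(puzzles)
    for key in range(KEYSPACE):
        plain = weak_encrypt(target, key)
        if plain.startswith(MAGIC):
            chosen = int.from_bytes(plain[4:8], "big")
            session_key = plain[8:]
            break

    # Bob announces `chosen` in the clear; both sides now share table[chosen].
    assert table[chosen] == session_key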



Title: *Neural Networks Generalize from Self-Averaging Sub-classifiers in
the Same Way As Adaptive Boosting*

Speakers: Peter Chatain and Michael Sun



Abstract: In recent years, neural networks (NNs) have made giant leaps in a
wide variety of domains. NNs are often referred to as “black box” algorithms
because of how little we can explain their empirical success. Our
foundational research seeks to explain why neural networks generalize. A
recent advance derived a mutual-information measure that explains the
performance of deep NNs through a sequence of increasingly complex functions.
We show that deep NNs learn a series of boosted classifiers whose
generalization is popularly attributed to self-averaging over an increasing
number of interpolating sub-classifiers. To our knowledge, ours is the first
work to establish the connection between generalization in boosted
classifiers and generalization in deep NNs. Our experimental evidence and
theoretical analysis suggest that NNs trained with dropout exhibit the same
kind of self-averaging over interpolating sub-classifiers that is popularly
cited to explain the post-interpolation generalization phenomenon in
boosting.
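
As a toy illustration of the self-averaging intuition, not the paper's
analysis: under dropout, each random mask defines a sub-classifier, and
averaging many sampled sub-classifiers' outputs approaches the usual
weight-scaled deterministic forward pass; this is exact in expectation for
the one-linear-layer toy below, and only approximate for deep networks.

    import numpy as np

    rng = np.random.default_rng(1)
    d, keep = 256, 0.5                        # input dimension, dropout keep rate
    w = rng.normal(size=d)                    # a "trained" linear scorer (toy weights)
    x = rng.normal(size=d)                    # one input

    masks = rng.random((20_000, d)) < keep    # each mask defines one sub-classifier
    sub_outputs = (masks * w) @ x             # every sub-classifier's output on x
    print(sub_outputs.mean())                 # self-averaged ensemble output ...
    print(keep * (w @ x))                     # ... matches test-time dropout scaling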