[theory-seminar] "Palo Alto, We Have a Problem. There Is No Oracle!" – Amin Karbasi (Thu, 17-Mar @ 4:00pm)

Joachim Neu jneu at stanford.edu
Mon Mar 14 09:47:46 PDT 2022


Palo Alto, We Have a Problem. There Is No Oracle!
Amin Karbasi – Professor, Yale University
Thu, 17-Mar / 4:00pm / Zoom:
https://stanford.zoom.us/meeting/register/tJckfuCurzkvEtKKOBvDCrPv3McapgP6HygJ

Please join us for coffee and snacks at 3:30pm in the Grove outside
Packard (near Bytes' outdoor seating).

Abstract
Artificial intelligence is fundamentally about making decisions under
uncertainty from a massive pool of possibilities, where combinatorial
techniques have long been central tools. Indeed, many scientific and
engineering models feature inherently discrete decision variables—from
phrases in a corpus to objects in an image. Similarly, nearly all
aspects of the machine learning pipeline involve discrete tasks from
data summarization and sketching to feature selection and model
explanation.
Classically, in order to design optimization methods, we usually assume
that the objective function is either fully known or accessible via an
oracle. In many modern applications, however, the objectives we aim to
optimize must instead be learned, estimated, or simulated from data, a
process that is subject to stochastic fluctuations. Moreover, it has
long been known that solutions obtained from combinatorial optimization
methods can be strikingly sensitive to changes in the parameters of the
underlying problem. So, what guarantees do the combinatorial algorithms
we develop (and teach) still provide when the perfect oracle does not
exist? In this talk, we will address this challenge and build a
fundamentally new connection between discrete and (non-convex)
continuous optimization that aims to lift the current provable methods
out of the sterile lab environment and scale them to the real world.
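
(As a toy illustration, and not part of the talk itself: the short
Python sketch below runs standard greedy set selection once with an
exact value oracle and once with a noisy one. The coverage instance,
the noise level, and all names in it are made up for illustration; it
only demonstrates the kind of oracle-noise sensitivity the abstract
alludes to.)

# Illustrative sketch only: greedy maximization of a small coverage
# objective with an exact oracle vs. a noisy (stochastic) oracle.
import random

# Each candidate item covers a subset of a 12-element universe
# (hypothetical data, chosen only for this example).
ITEMS = {
    "a": {0, 1, 2, 3},
    "b": {3, 4, 5},
    "c": {5, 6, 7, 8},
    "d": {8, 9},
    "e": {9, 10, 11},
}

def coverage(selected):
    """Exact objective: number of universe elements covered."""
    covered = set()
    for item in selected:
        covered |= ITEMS[item]
    return len(covered)

def noisy_coverage(selected, sigma=1.5, rng=random):
    """Stochastic oracle: exact value plus Gaussian noise."""
    return coverage(selected) + rng.gauss(0.0, sigma)

def greedy(objective, k=3):
    """Standard greedy: repeatedly add the item with the largest estimated gain."""
    selected = []
    for _ in range(k):
        remaining = [x for x in ITEMS if x not in selected]
        best = max(remaining, key=lambda x: objective(selected + [x]))
        selected.append(best)
    return selected

random.seed(0)
exact_pick = greedy(coverage)
noisy_pick = greedy(noisy_coverage)
# With noisy evaluations, greedy can end up with a different (and
# possibly worse) set; the gap varies with the seed and noise level.
print("exact oracle:", exact_pick, "-> coverage", coverage(exact_pick))
print("noisy oracle:", noisy_pick, "-> coverage", coverage(noisy_pick))
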
Bio
Amin Karbasi is currently an (untenured) associate professor of
Electrical Engineering, Computer Science, and Statistics & Data Science
at Yale University. He is also a research scientist at Google NY. He
has been the recipient of the National Science Foundation (NSF) Career
Award, Office of Naval Research (ONR) Young Investigator Award, Air
Force Office of Scientific Research (AFOSR) Young Investigator Award,
DARPA Young Faculty Award, National Academy of Engineering (NAE)
Grainger Award, Nokia Bell-Labs Prize, Amazon Research Award, Google
Faculty Research Award, Microsoft Azure Research Award, Simons Research
Fellowship, and ETH Research Fellowship. His work on machine learning,
statistics, and signal processing has received awards at a number of
premier conferences and journals, including the Medical Image Computing
and Computer-Assisted Intervention Conference (MICCAI), the Facebook-MAIN
award from the AI-Neuroscience Symposium, International Conference on
Artificial Intelligence and Statistics (AISTATS), IEEE Communications
Society Data Storage, International Conference on Acoustics, Speech,
and Signal Processing (ICASSP), ACM SIGMETRICS, and IEEE International
Symposium on Information Theory (ISIT). His Ph.D. work received the
Patrick Denantes Memorial Prize for the best doctoral thesis from the
School of Computer and Communication Sciences at EPFL, Switzerland.
This talk is hosted by the ISL Colloquium. To receive talk
announcements, subscribe to the mailing list
isl-colloq at lists.stanford.edu.

Mailing list: https://mailman.stanford.edu/mailman/listinfo/isl-colloq
This talk: http://isl.stanford.edu/talks/talks/2022q1/amin-karbasi/


