[theory-seminar] Theory Lunch 12/10: Ellen Vitercik
wajc at stanford.edu
Thu Dec 10 12:21:00 PST 2020
Reminder: the talk is starting in 10 minutes.
On Thu, 10 Dec 2020 at 10:01, David Wajc <wajc at stanford.edu> wrote:
> Reminder: this is happening in two hours.
> On Wed, 9 Dec 2020 at 09:01, David Wajc <wajc at stanford.edu> wrote:
>> Hi all,
>> The last Theory lunch of the quarter will be happening tomorrow at noon
>> (PST), at our gather space:
>> https://gather.town/app/lR6jRBPK44nZ7V68/StanfordTheory (*password:*
>> Ellen will tell us about *How much data is sufficient to learn
>> high-performing algorithms?*
>> Algorithms often have tunable parameters that impact runtime and solution
>> quality. For many algorithms used in practice, no parameter setting admits
>> a meaningful worst-case guarantee, so the parameters are explicitly made
>> available for the user to tune. Alternatively, the parameters may be tuned
>> implicitly within the proof of a worst-case approximation ratio or runtime
>> bound. Worst-case instances, however, may be rare or nonexistent in
>> practice. A growing body of research has demonstrated that data-driven
>> algorithm design can lead to significant gains in runtime and solution
>> quality. Data-driven algorithm design uses a training set of problem
>> instances sampled from an unknown, application-specific distribution and
>> returns a parameter setting with strong average performance on the training set.
>> We provide a broadly applicable theory for deriving generalization
>> guarantees for data-driven algorithm design, which bound the difference
>> between the algorithm's expected performance and its average performance
>> over the training set. The challenge is that for many combinatorial
>> algorithms, performance is a volatile function of the parameters: slightly
>> perturbing the parameters can cause a cascade of changes in the algorithm’s
>> behavior. Prior research has proved generalization bounds by employing
>> case-by-case analyses of parameterized greedy algorithms, clustering
>> algorithms, integer programming algorithms, and selling mechanisms. We
>> uncover a unifying structure which we use to prove extremely general
>> guarantees, while still recovering the bounds from prior research. Our guarantees
>> apply whenever an algorithm's performance is a piecewise-constant, -linear,
>> or—more generally—piecewise-structured function of its parameters. As we
>> demonstrate, our theory also implies novel bounds for dynamic programming
>> algorithms used in computational biology.
>> This talk is based on joint work with Nina Balcan, Dan DeBlasio, Travis
>> Dick, Carl Kingsford, and Tuomas Sandholm.
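>> The data-driven pipeline the abstract describes can be sketched with one classic parameterized family: greedy knapsack with items scored by value / weight^rho, whose achieved value is a piecewise-constant function of rho. This is an illustrative sketch, not code from the talk; the instance distribution below is synthetic and hypothetical.

```python
import random

def greedy_knapsack(values, weights, capacity, rho):
    """Greedy knapsack parameterized by rho: score items by value / weight**rho
    and add them greedily while they fit. The total value achieved is a
    piecewise-constant function of rho (the piece boundaries are where two
    items swap order), matching the piecewise structure in the abstract."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i] ** rho,
                   reverse=True)
    total_value, total_weight = 0.0, 0.0
    for i in order:
        if total_weight + weights[i] <= capacity:
            total_value += values[i]
            total_weight += weights[i]
    return total_value

def tune_rho(training_instances, rho_grid):
    """Empirical risk minimization over a parameter grid: return the rho with
    the best average performance on the training set. Generalization bounds of
    the kind in the talk control how far this training-set average can drift
    from expected performance on the underlying distribution."""
    def avg_value(rho):
        return sum(greedy_knapsack(v, w, c, rho)
                   for v, w, c in training_instances) / len(training_instances)
    return max(rho_grid, key=avg_value)

# Training set sampled from a (here, purely synthetic) application distribution.
random.seed(0)
train = [([random.uniform(1, 10) for _ in range(20)],   # values
          [random.uniform(1, 10) for _ in range(20)],   # weights
          25.0)                                         # capacity
         for _ in range(50)]
best_rho = tune_rho(train, [k / 10 for k in range(0, 31)])
print(best_rho)
```

>> A coarse grid suffices here precisely because performance is piecewise constant in rho: within each piece, every grid point yields the same training-set behavior.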
>> *Pro tip:* To join the zoom talk:
>> (1) go to the lecture hall,
>> (2) grab a seat, and
>> (3) *press X to join the zoom lecture*.