[theory-seminar] Theory Seminars on Tuesday and Thursday

Shashwat Silas silas at stanford.edu
Mon Mar 5 11:35:25 PST 2018


Hi all,


This week we will have two theory seminars: tomorrow (2PM in Gates 415) and Thursday (4:15PM in Gates 463A). Details below. If you'd like to meet with the speakers, email Mary at marykw at stanford.edu.


Tuesday, March 6, 2018
Gates 415, 2PM (note the unusual room, day, and time!)
Li-Yang Tan (TTIC)
Fooling Polytopes

We give an explicit pseudorandom generator with seed length poly(log m, 1/δ) · log n that δ-fools the class of all m-facet polytopes over {0,1}^n. The previous best seed length had linear dependence on m. As a corollary, we obtain a deterministic quasipolynomial-time algorithm for approximately counting the number of feasible solutions of general {0,1}-integer programs.

Joint work with Ryan O'Donnell (CMU) and Rocco Servedio (Columbia).
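
For readers unfamiliar with the terminology: a generator G δ-fools a class of sets if, for every set in the class, the probability that G's output lands in the set is within δ of the probability that a uniformly random point does. The Python sketch below is my own illustration of that definition for an m-facet polytope {x : Ax <= b} over {0,1}^n; it is not code from the paper, and the paper's actual generator construction is not reproduced here:

    import itertools
    import numpy as np

    def in_polytope(A, b, x):
        # Membership test for the m-facet polytope {x : Ax <= b}.
        return np.all(A @ x <= b)

    def acceptance_probability(A, b, points):
        # Fraction of the given points that land inside the polytope.
        return sum(in_polytope(A, b, x) for x in points) / len(points)

    def fooling_error(A, b, prg_outputs, n):
        # A generator delta-fools the polytope if this quantity is <= delta.
        # The uniform probability is computed by brute-force enumeration of
        # {0,1}^n, feasible only for tiny n; the point of the result is that
        # a short seed replaces this exponential enumeration.
        cube = [np.array(x) for x in itertools.product([0, 1], repeat=n)]
        return abs(acceptance_probability(A, b, cube)
                   - acceptance_probability(A, b, prg_outputs))

The counting corollary follows by enumerating all seeds of the generator instead of all 2^n points of the cube: with seed length poly(log m, 1/δ) · log n, that enumeration takes quasipolynomial time.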


Thursday, March 8, 2018
Gates 463A, 4:15PM
Kunal Talwar (Google Brain)
Two approaches to (Deep) learning with Differential Privacy

Machine learning techniques based on neural networks are achieving remarkable results in a wide variety of domains. Often, the training of models requires large, representative datasets, which may be crowd-sourced and contain sensitive information. The models should not expose private information in these datasets. Differential Privacy is a standard privacy definition that implies a strong and concrete guarantee on protecting such information. In this talk, I'll outline two recent approaches to training deep neural networks while providing a differential privacy guarantee, and some new analysis tools we developed in the process. Our implementation and experiments demonstrate that we can train deep neural networks with non-convex objectives, under a modest privacy budget, and at a manageable cost in software complexity, training efficiency, and model quality. Based on joint work with Martin Abadi, Andy Chu, Úlfar Erlingsson, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Nicolas Papernot, Ananth Raghunathan, Daniel Ramage, Shuang Song and Li Zhang.
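
As background (and only as a guess on my part about the material): one widely known technique from several of these authors is DP-SGD, which bounds each training example's influence by clipping its gradient and then adds Gaussian noise before the parameter update. The NumPy sketch below shows that clip-and-noise step; the function name and hyperparameter values are illustrative, not taken from the talk:

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
                    noise_multiplier=1.1, rng=np.random.default_rng(0)):
        # Illustrative sketch of a DP-SGD-style update; not code from the talk.
        # Clip each example's gradient to L2 norm <= clip_norm, bounding the
        # influence any single record can have (the sensitivity that the
        # privacy accounting relies on).
        clipped = [g / max(1.0, np.linalg.norm(g) / clip_norm)
                   for g in per_example_grads]
        # Average the clipped gradients and add Gaussian noise scaled to the
        # clipping norm; the noise multiplier governs the privacy budget.
        noisy_grad = (np.mean(clipped, axis=0)
                      + rng.normal(0.0,
                                   noise_multiplier * clip_norm
                                   / len(per_example_grads),
                                   size=np.shape(params)))
        return params - lr * noisy_grad

The "modest privacy budget" mentioned in the abstract corresponds to the cumulative privacy loss of many such noisy steps, which is what the new analysis tools are needed to track tightly.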


Thanks

Shashwat

