[theory-seminar] Theory seminar on Thursday
silas at stanford.edu
Tue Dec 5 11:47:35 PST 2017
As usual, theory seminar will be on Thursday in Gates 463A at 4:15pm. Hope you can make it. Additionally, if you would like to meet with the speaker sometime on Thursday, please contact Mary at marykw at stanford.edu.
Aleksandar Nikolov (Toronto)
Proportional Volume Sampling and Approximation Algorithms for A-Optimal Design
We study the A-optimal design problem, in which we are given n rank-1 d-by-d positive semidefinite (PSD) matrices and an integer k, and our goal is to find a set S of k matrices so that the trace of the inverse of the sum of the matrices in S is minimized. This problem is one variant of the optimal experimental design problem in statistics, where each rank-1 matrix corresponds to a noisy linear measurement of an unknown vector, and the goal is to pick k measurements to take so as to minimize the average variance of the error of the maximum likelihood estimator of the unknown vector. The problem also finds applications in sensor placement in wireless networks, sparse least squares regression, feature selection for k-means clustering, and low-rank matrix approximation. We introduce proportional volume sampling to obtain improved approximation algorithms for A-optimal design.
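To make the objective concrete, here is a minimal sketch (not from the talk) of the A-optimality criterion: for vectors v_i whose outer products are the given rank-1 matrices, a subset S is scored by trace((sum_{i in S} v_i v_i^T)^{-1}). The function names are illustrative, and the brute-force search is only a reference for tiny instances.

```python
import numpy as np
from itertools import combinations

def a_objective(vectors, S):
    # Sum the rank-1 matrices v v^T over the chosen index set S,
    # then return the trace of the inverse (assumes the sum is invertible,
    # i.e. the chosen vectors span R^d).
    M = sum(np.outer(vectors[i], vectors[i]) for i in S)
    return np.trace(np.linalg.inv(M))

def best_subset(vectors, k):
    # Brute-force reference: the A-optimal size-k subset minimizes the trace.
    # Exponential in n; real algorithms (like those in the talk) avoid this.
    n = len(vectors)
    return min(combinations(range(n), k), key=lambda S: a_objective(vectors, S))
```

The objective is the average variance of the maximum likelihood estimate under unit-variance noise, which is why minimizing it corresponds to picking the most informative k measurements.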
In traditional volume sampling we pick a subset of k (rank-1 PSD) matrices with probability proportional to the determinant of their sum. In proportional volume sampling this distribution is modified by multiplying the probability of each size-k subset S by the probability mu(S) assigned to S by another probability distribution mu. In order to obtain improved approximation algorithms for the A-optimal design problem, we appeal to hard-core distributions mu based on the solution of a convex relaxation of the problem. Our results include a d-approximation when k=d, and a (1+epsilon)-approximation when k is on the order of (d/epsilon) + log(1/epsilon)/epsilon^2. When we are allowed repetitions of the selected rank-1 matrices, we achieve a k/(k-d+1)-approximation, which matches the integrality gap of the natural convex relaxation of the problem. We also show that our results cannot be extended to the related E-optimal design problem, in which we maximize the minimum eigenvalue of the sum of selected matrices. In particular, we show that for E-optimal design the integrality gap of the natural convex relaxation is at least 1+epsilon for k as large as d/epsilon^2. We also derive generalizations of the problem to selecting fewer than d matrices, and consider an application to restricted invertibility principles.
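The sampling rule above can be illustrated with a toy enumeration-based sketch (my own, not the speaker's algorithm): for tiny instances, compute Pr[S] proportional to mu(S) times det(sum_{i in S} v_i v_i^T) over all size-k subsets. The efficient samplers in the work handle hard-core mu without this exponential enumeration.

```python
import numpy as np
from itertools import combinations

def proportional_volume_sample(vectors, k, mu, rng):
    # Toy sampler: enumerate every size-k subset and weight it by
    # mu(S) * det(sum of the rank-1 matrices in S). mu maps an index
    # tuple to a nonnegative weight; mu = lambda S: 1.0 recovers
    # ordinary volume sampling.
    subsets = list(combinations(range(len(vectors)), k))
    weights = np.array([
        mu(S) * np.linalg.det(sum(np.outer(vectors[i], vectors[i]) for i in S))
        for S in subsets
    ])
    probs = weights / weights.sum()
    return subsets[rng.choice(len(subsets), p=probs)]
```

In the algorithms of the talk, mu is a hard-core distribution derived from the convex-relaxation solution, which biases the sample toward subsets the relaxation considers important while the determinant factor rewards well-spread (high-volume) choices.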
Joint work with Mohit Singh and Uthaipon Tao Tantipongpipat.