Active Information Acquisition for Linear Optimization
Abstract: We consider partially-specified optimization problems where the goal is to actively, but efficiently, acquire missing information about the problem in order to solve it. An algorithm designer wishes to solve a linear program (LP), max cᵀx s.t. Ax ≤ b, x ≥ 0, but does not initially know some of the parameters. The algorithm can iteratively choose an unknown parameter and gather information in the form of a noisy sample centered at the parameter’s (unknown) value. The goal is to find an approximately feasible and optimal solution to the underlying LP with high probability while drawing a small number of samples. We focus on two cases. (1) When the parameters b of the constraints are initially unknown, we propose an efficient algorithm combining techniques from the ellipsoid method for LP and confidence-bound approaches from bandit algorithms. The algorithm adaptively gathers information about constraints only as needed in order to make progress. We give sample complexity bounds for the algorithm and demonstrate its improvement over a naive approach via simulation. (2) When the parameters c of the objective are initially unknown, we take an information-theoretic approach and give roughly matching upper and lower sample complexity bounds, with an (inefficient) successive-elimination algorithm.
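The abstract only sketches the unknown-b algorithm, so the following is a minimal, hypothetical illustration of its confidence-bound ingredient: sampling one unknown constraint bound bᵢ until a Hoeffding-style interval either certifies that a candidate point approximately satisfies the constraint or certifies a violation (the kind of test a cutting-plane/ellipsoid step could rely on). This is not the paper's algorithm; it assumes sub-Gaussian noise with known scale sigma, and the names (check_constraint, sample_b) and the stopping rule are illustrative assumptions.

```python
import numpy as np

def check_constraint(x, a_i, sample_b, sigma, eps, delta, max_samples=100_000):
    """Sample the unknown bound b_i until a confidence interval around its
    running mean either certifies a_i @ x <= b_i (approximately satisfied)
    or a_i @ x >= b_i (violated, which would supply a separating cut).

    sample_b() is a hypothetical oracle returning one noisy, unbiased sample
    of b_i with sub-Gaussian noise of scale sigma.
    Returns (feasible, samples_used)."""
    lhs = float(a_i @ x)
    total = 0.0
    for n in range(1, max_samples + 1):
        total += sample_b()
        mean = total / n
        # (1 - delta) Hoeffding-style radius; a careful analysis would also
        # union-bound over the random stopping time (e.g. an extra log n term)
        rad = sigma * np.sqrt(2.0 * np.log(2.0 / delta) / n)
        if lhs <= mean - rad:
            return True, n          # lhs <= b_i with high probability
        if lhs >= mean + rad:
            return False, n         # lhs >= b_i with high probability: cut here
        if 2.0 * rad <= eps:
            # b_i is pinned down to within eps; decide by the point estimate,
            # incurring at most eps infeasibility
            return lhs <= mean, n
    return lhs <= total / max_samples, max_samples

# Example usage with simulated noise (true b_i = 3.0, sigma = 1.0)
rng = np.random.default_rng(0)
ok, n = check_constraint(x=np.array([1.0, 1.0]), a_i=np.array([1.0, 0.5]),
                         sample_b=lambda: 3.0 + rng.standard_normal(),
                         sigma=1.0, eps=0.1, delta=0.01)
```

The point of such an adaptive test, as opposed to a naive approach that estimates every bᵢ to accuracy eps up front, is that constraints far from the current candidate point can be resolved after very few samples.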
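For the unknown-objective case, the abstract mentions an (inefficient) successive-elimination algorithm but gives no details. The sketch below shows generic successive elimination over an explicitly enumerated candidate set (e.g., vertices of the feasible region), which is one plausible reading rather than the paper's construction; the sampling oracle sample_c, the noise scale sigma, and the L1-based error propagation are assumptions.

```python
import numpy as np

def successive_elimination(candidates, sample_c, sigma, delta, eps, max_rounds=100_000):
    """Keep sampling the unknown objective vector c and eliminate candidate
    solutions whose optimistic value falls below the best pessimistic value.

    candidates: (k, d) array of feasible points (e.g. vertices of {Ax <= b, x >= 0})
    sample_c(): hypothetical oracle returning one noisy, unbiased sample of c
                with per-coordinate sub-Gaussian noise of scale sigma
    Returns (best_candidate, rounds_used)."""
    candidates = np.asarray(candidates, dtype=float)
    k, d = candidates.shape
    active = np.ones(k, dtype=bool)
    c_sum = np.zeros(d)
    for n in range(1, max_rounds + 1):
        c_sum += sample_c()
        c_hat = c_sum / n
        # per-coordinate confidence radius; the log term crudely union-bounds
        # over coordinates, candidates, and rounds
        per_coord = sigma * np.sqrt(2.0 * np.log(4.0 * k * d * n * n / delta) / n)
        # propagate per-coordinate error to the value estimate c_hat @ x
        rad = per_coord * np.abs(candidates).sum(axis=1)
        values = candidates @ c_hat
        best_lower = (values - rad)[active].max()
        # a candidate survives only if its upper bound still reaches the best lower bound
        active &= (values + rad) >= best_lower
        # stop once every survivor is within ~eps of the best with high probability
        if 2.0 * rad[active].max() <= eps:
            break
    best = int(np.argmax(np.where(active, values, -np.inf)))
    return candidates[best], n
```

The candidate set is treated as given here; enumerating the vertices of a feasible region is exponential in general, consistent with the abstract's caveat that this approach is inefficient.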