Abstract: For this paper, we reviewed the baseline algorithms and run times used to benchmark the Primal-Dual Block Frank-Wolfe algorithm. The proposed algorithm seeks to reduce per-iteration cost and thereby improve overall convergence; when combined with elastic net regularization, its per-iteration cost depends only on the sparsity of the solution. We re-implemented the five baseline algorithms that were compared against the proposed algorithm, along with a sixth new baseline, and ran reproduction tests on four of the six data sets used by the original paper. On two of these data sets, Duke Breast Cancer and RCV1, we additionally tuned the algorithms' hyperparameters. When replicating the baselines, we were largely able to confirm the relative convergence rates of the algorithms. Under hyperparameter tuning, however, we sped up Accelerated Projected Gradient Descent significantly, making it faster than the paper's proposed Primal-Dual Block Frank-Wolfe algorithm on certain data sets.
Track: Baseline
NeurIPS Paper Id: https://openreview.net/forum?id=B1lx2BBlLr