TL;DR: We extend results for minimizing the difference of submodular (DS) functions, from set functions to general functions over both discrete and continuous domains.
Abstract: Submodular functions, defined on continuous or discrete domains, arise in numerous applications. We study the minimization of the difference of submodular (DS) functions, over both domains, extending prior work restricted to set functions.
We show that all functions on discrete domains and all smooth functions on continuous domains are DS.
For discrete domains, we observe that DS minimization is equivalent to minimizing the difference of two convex (DC) functions, as in the set function case. We propose a novel variant of the DC Algorithm (DCA) and apply it to the resulting DC program, obtaining theoretical guarantees comparable to those in the set function case. The algorithm can be applied to continuous domains via discretization. Experiments demonstrate that our method outperforms baselines in integer compressive sensing and integer least squares.
Lay Summary: Many real-world problems, such as wireless communications, image processing, recommendation systems, and materials property prediction, involve choosing the best combination of variables to optimize a given objective, where the variables take discrete values (e.g., whole numbers). We study a broad class of such problems where the objective has a special mathematical structure: it can be written as the difference between two submodular functions. Submodular functions exhibit a diminishing-returns property: the value of adding something decreases as more is already included. They occur naturally in various applications.
We show that this structure is surprisingly general: any discrete function can be written in this form. Such problems are very hard to solve even approximately. We develop an efficient algorithm to tackle them, which is guaranteed to return locally optimal solutions, in the sense that the solution can't be improved by changing just one variable slightly, e.g., by adding or subtracting one. To do this, we transform the problem to another well-studied type of problem, where variables are continuous (real numbers), and adapt a classical method called the Difference of Convex Algorithm (DCA), commonly used for this kind of problem, to our setting. Our approach can also be used when the variables in the original problem are continuous by approximating the range of values with a grid. We tested our method on two challenging tasks and found that it consistently outperformed existing methods. This work opens up new possibilities for solving challenging real-world problems involving both discrete and continuous variables.
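To make the transformation concrete, here is a minimal sketch of the classical DCA update on a toy one-dimensional DC decomposition f = g - h, with g(x) = x**4 and h(x) = x**2 (both convex). This illustrates only the generic DCA iteration (linearize h, minimize the convex surrogate), not the paper's DS-specific variant or its discretization scheme.

```python
# Classical DCA on the toy DC decomposition f(x) = g(x) - h(x),
# with g(x) = x**4 and h(x) = x**2 (a hypothetical 1-D example,
# not the paper's algorithm).

def dca(x, iters=50):
    for _ in range(iters):
        # Linearize h at the current iterate: h(y) ~ h(x) + h'(x)*(y - x),
        # with h'(x) = 2*x.
        grad_h = 2 * x
        # Minimize the convex surrogate g(y) - grad_h * y = y**4 - grad_h * y.
        # First-order condition: 4*y**3 = grad_h  =>  y = (grad_h / 4)**(1/3),
        # handling the sign so the cube root is real.
        mag = (abs(grad_h) / 4) ** (1.0 / 3.0)
        x = mag if grad_h >= 0 else -mag
    return x

x_star = dca(1.0)
# Converges to 1/sqrt(2) ~ 0.7071, a local minimizer of f(x) = x**4 - x**2.
```

Each iteration solves a convex subproblem (here in closed form), so the objective f is monotonically non-increasing along the iterates, which is the standard DCA guarantee of convergence to a critical point.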
Link To Code: https://github.com/SamsungSAILMontreal/cont-diffsubmin
Primary Area: Optimization->Discrete and Combinatorial Optimization
Keywords: continuous and discrete submodular functions, difference of submodular minimization, non-convex optimization, DC programming, DCA, integer least squares, integer compressive sensing
Submission Number: 6719