Proximal DC Algorithm for Sparse Optimization
Many applications in signal processing, machine learning, and operations research seek sparse solutions by imposing a cardinality constraint or a rank constraint. We formulate such problems as DC (difference of two convex functions) optimization problems and apply the DC algorithm (DCA) to them. While the DCA has been widely used for this type of problem, it often requires substantial computation time to solve the resulting sequence of convex subproblems. Our algorithm, which we call the Proximal DC Algorithm (PDCA), overcomes this drawback of the ordinary DCA by employing a special DC decomposition of the objective function. Under this decomposition, the convex subproblems admit closed-form solutions, which leads to efficient performance in numerical experiments. We also discuss theoretical aspects: PDCA can be viewed as a nonconvex variant of the proximal gradient method (PGM), which provides insight into the relation between the PGM and DCAs.
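To make the idea concrete, the following is a minimal sketch of a proximal-DC iteration for a cardinality-regularized least-squares problem. It assumes the common DC penalty ρ(‖x‖₁ − |||x|||₍ₛ₎), where |||x|||₍ₛ₎ is the sum of the s largest absolute entries; this difference vanishes exactly on s-sparse vectors. The function names, the choice of least-squares loss, and the parameter values are illustrative assumptions, not the paper's exact formulation. Each iteration linearizes the concave part via a subgradient of the largest-s norm, so the convex subproblem reduces to a single closed-form soft-thresholding step, as the abstract describes.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (closed form).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def pdca_sparse_ls(A, b, s, rho=1.0, max_iter=200):
    """Illustrative proximal-DC sketch (assumed setup, not the paper's code) for

        min_x  0.5 * ||A x - b||^2 + rho * ( ||x||_1 - |||x|||_(s) ),

    where |||x|||_(s) is the sum of the s largest absolute entries of x.
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(n)
    for _ in range(max_iter):
        # Subgradient of the largest-s norm: signs on the s largest |x_i|.
        xi = np.zeros(n)
        top = np.argsort(-np.abs(x))[:s]
        xi[top] = np.sign(x[top])
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        # Convex subproblem solved in closed form: one soft-thresholding step.
        x = soft_threshold(x - (grad - rho * xi) / L, rho / L)
    return x
```

With step size 1/L, each iteration costs one gradient evaluation plus a componentwise thresholding, which is what makes this variant cheaper per iteration than solving each DCA subproblem with a generic convex solver.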