Optimal algorithms for smooth and strongly convex distributed optimization in networks

In this work, we determine the optimal convergence rates for strongly convex and smooth distributed optimization in two settings: centralized and decentralized communications over a network. For centralized (i.e. master/slave) algorithms, we show that distributing Nesterov's accelerated gradient descent is optimal and achieves a precision in a time that depends on the condition number of the (global) function to optimize, the diameter of the network, and the time needed to communicate values between two neighbors and to perform local computations. For decentralized algorithms based on gossip, we provide the first optimal algorithm, called the multi-step dual accelerated (MSDA) method, which achieves a precision in a time that depends on the condition number of the local functions and the (normalized) eigengap of the gossip matrix used for communication between nodes. We then verify the efficiency of MSDA against state-of-the-art methods for two problems: least-squares regression and classification by logistic regression. (joint work with Kevin Scaman, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié)
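To make the quantity governing the decentralized rate concrete, here is a minimal sketch of how the (normalized) eigengap of a gossip matrix can be computed. It assumes the standard setting for gossip matrices: a symmetric positive semi-definite matrix whose kernel is spanned by the all-ones vector (e.g. the Laplacian of a connected communication graph); the ring-graph example is purely illustrative, not taken from the work above.

```python
import numpy as np

def normalized_eigengap(W):
    """Ratio of the smallest nonzero eigenvalue to the largest eigenvalue
    of a gossip matrix W (symmetric PSD, kernel spanned by the ones vector).
    This ratio is the quantity the decentralized convergence time depends on."""
    eig = np.linalg.eigvalsh(W)        # eigenvalues in ascending order
    lam_max = eig[-1]
    lam_min_nonzero = eig[1]           # eig[0] is ~0 for a connected graph
    return lam_min_nonzero / lam_max

# Illustrative example: the graph Laplacian of a ring of n nodes,
# a simple valid gossip matrix for a ring communication network.
n = 10
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A         # Laplacian = degree matrix - adjacency
gamma = normalized_eigengap(L)          # small gamma -> slower gossip mixing
```

A poorly connected network (like this ring) has a small eigengap, while a well-connected one has an eigengap close to 1, which is why the eigengap, rather than the network diameter, appears in the decentralized rate.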