Distributed Subgradient Methods for Non-Convex Non-Smooth Constrained Optimization Problems
Abstract
In this study, we consider a networked system composed of an operator and a finite number of users. Our goal is to propose two methods for minimizing the sum of the non-smooth, non-convex objective functions of the operator and all users, subject to the constraint that the solution lies in the intersection of the fixed point sets of the operator and all users. The first method is a parallel subgradient approach, which assumes that each user can communicate with the other users in the network. The second method employs an acceleration technique that divides the network into a finite number of subnetworks, each containing a subset of users. When the users in each subnetwork can interact with one another, they can apply an incremental optimization technique that exploits the information provided by their neighbors. This, in turn, allows us to study a distributed broadcast algorithm and to develop an algorithm that combines the ideas of incremental and broadcast optimization. Furthermore, we analyze the convergence of the proposed methods under suitable assumptions and provide instances of practical problems that satisfy these assumptions. Finally, numerical examples demonstrate the efficacy of the proposed methods.
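The abstract does not state the update rules of either method, so the following is only a minimal Python sketch of one standard parallel subgradient scheme over fixed point sets. It assumes, purely for illustration, that each mapping T_i is the metric projection onto a closed ball (whose fixed point set is the ball itself), that each f_i is a simple non-smooth l1 objective, and that the operator averages the users' estimates; all names and numerical values are hypothetical.

```python
import numpy as np

# Hypothetical instance: user i minimizes f_i(x) = ||x - a_i||_1 subject to
# x lying in Fix(T_i), where T_i is the metric projection onto a ball B(c_i, r).
# Projections onto closed convex sets are standard examples of mappings whose
# fixed point sets are the sets themselves.

rng = np.random.default_rng(0)
dim, n_users = 5, 4
a = rng.normal(size=(n_users, dim))             # anchors for the l1 objectives
c = rng.normal(scale=0.1, size=(n_users, dim))  # ball centers, chosen so the balls intersect
r = 2.0                                         # common ball radius

def subgrad(i, x):
    """A subgradient of f_i(x) = ||x - a_i||_1 (the sign vector is a valid choice at kinks)."""
    return np.sign(x - a[i])

def T(i, x):
    """Metric projection onto the ball B(c_i, r), so Fix(T_i) = B(c_i, r)."""
    d = x - c[i]
    nrm = np.linalg.norm(d)
    return x if nrm <= r else c[i] + r * d / nrm

# Parallel subgradient iteration: every user takes a subgradient step and applies
# its own fixed point mapping; the operator then averages the users' estimates.
x = np.zeros(dim)
for k in range(1, 2001):
    lam = 1.0 / k                               # diminishing step size
    x = np.mean([T(i, x - lam * subgrad(i, x)) for i in range(n_users)], axis=0)

print("final iterate:", x)
print("objective value:", sum(np.abs(x - a[i]).sum() for i in range(n_users)))
```

The diminishing step size 1/k is the usual choice for subgradient schemes of this kind; the incremental variant described in the abstract would instead pass the current estimate sequentially through the users of each subnetwork rather than updating them in parallel.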

