Optimization Methods and Software (10294937)
We develop an algorithm based on the idea of the bundle trust-region method to solve nonsmooth nonconvex constrained optimization problems. The resulting algorithm inherits attractive features from both bundle and trust-region methods: it allows effective control of the size of trust-region subproblems via the compression and aggregation techniques of bundle methods, while the trust-region strategy is used to manage the search region and to decide whether a candidate point is accepted as a new successful iterate. Global convergence of the developed algorithm is studied under mild assumptions, and encouraging preliminary computational results are reported. © 2025 Informa UK Limited, trading as Taylor & Francis Group.
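As a rough illustration (not the paper's exact formulation), trust-region subproblems in bundle trust-region methods typically take the form
\[
\min_{d \in \mathbb{R}^n} \; \max_{j \in J_k} \bigl\{ f(x_k) - \alpha_j^k + \langle \xi_j, d \rangle \bigr\} \quad \text{s.t.} \quad \|d\| \le \Delta_k,
\]
where the $\xi_j \in \partial f(y_j)$ are bundle subgradients, the $\alpha_j^k$ are linearization errors, and $\Delta_k$ is the trust-region radius; aggregation summarizes discarded cuts in a single aggregate pair so that $|J_k|$ stays bounded.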
Journal of Global Optimization (09255001) 92(1), pp. 87-109
Bundle algorithms are currently considered among the most efficient methods for nonsmooth optimization. In most existing bundle methods (proximal, level, and trust-region versions), it is necessary to solve at least one quadratic subproblem at each iteration. In this paper, a new bundle trust-region algorithm with linear programming subproblems is proposed for solving nonsmooth nonconvex optimization problems. At each iteration, a piecewise linear model is defined and, using the infinity norm and the trust-region technique, a linear programming subproblem is obtained. The algorithm is studied from both theoretical and practical points of view. Under the locally Lipschitz assumption on the objective function, its global convergence to stationary points is verified. Finally, some encouraging numerical results with a MATLAB implementation are reported. Computational results show that the developed method is efficient and robust for solving nonsmooth problems. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
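A minimal sketch of how such an infinity-norm trust-region subproblem reduces to a linear program; the function and variable names here are illustrative, and the paper's MATLAB implementation is not reproduced:

```python
# Sketch: LP subproblem of a bundle trust-region step with the infinity norm.
#   min v  s.t.  <g_j, d> - v <= alpha_j  (one cut per bundle element),
#                -Delta <= d_i <= Delta   (i.e. ||d||_inf <= Delta).
import numpy as np
from scipy.optimize import linprog

def lp_subproblem(G, alpha, Delta):
    """G: (m, n) array of bundle subgradients; alpha: (m,) linearization errors."""
    m, n = G.shape
    c = np.zeros(n + 1)                               # variables z = (d, v)
    c[-1] = 1.0                                       # objective: minimize v
    A_ub = np.hstack([G, -np.ones((m, 1))])           # <g_j, d> - v <= alpha_j
    bounds = [(-Delta, Delta)] * n + [(None, None)]   # box = infinity-norm ball
    res = linprog(c, A_ub=A_ub, b_ub=alpha, bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]                       # trial direction d, model value v
```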
We propose a method for nonsmooth nonconvex multiobjective optimization problems that is based on a bundle-type approach. The proposed method employs an extension of the redistributed proximal bundle algorithm and uses an augmented improvement function to handle the different objectives: at each iteration, a common piecewise linear model is used to approximate the augmented improvement function. Contrary to many existing multiobjective optimization methods, this algorithm works directly with the objective functions, without using any a priori chosen parameters or employing any scalarization. Under appropriate assumptions, we discuss convergence to points which satisfy a necessary condition for Pareto optimality. We provide numerical results for a set of nonsmooth convex and nonconvex multiobjective optimization problems in the form of tables, figures and performance profiles. The numerical results confirm that, on the considered test problems, the proposed algorithm outperforms other multiobjective solvers in the computational effort needed to compute weakly Pareto stationary points. © 2024 Informa UK Limited, trading as Taylor & Francis Group.
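For orientation, a common (unaugmented) form of the improvement function in multiobjective bundle methods is
\[
H(x; y) = \max_{i = 1, \dots, p} \bigl\{ f_i(x) - f_i(y) \bigr\},
\]
so that $H(x; x_k) < 0$ exactly when $x$ improves every objective at the current iterate $x_k$; the paper works with an augmented variant of such a function, modeled by a single piecewise linear approximation instead of a scalarization.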
Computational Optimization and Applications (15732894) 88(3), pp. 871-902
This paper develops an iterative algorithm to solve nonsmooth nonconvex optimization problems on complete Riemannian manifolds. The algorithm is based on a combination of the well-known trust-region and bundle methods. As in most bundle methods, the objective function is approximated by a piecewise linear working model which is updated by adding cutting planes at unsuccessful trial steps. Then, at each iteration, a candidate descent direction is obtained by solving a subproblem that employs the working model in the objective function subject to the trust region. We study the algorithm from both theoretical and practical points of view and verify its global convergence to stationary points for locally Lipschitz functions. Moreover, in order to demonstrate its reliability and efficiency, a MATLAB implementation of the proposed algorithm is prepared and results of numerical experiments are reported. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
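Schematically (and without reproducing the paper's precise Riemannian machinery), the subproblem is posed in the tangent space and the update is mapped back to the manifold:
\[
d_k \in \operatorname*{argmin}_{d \in T_{x_k}M, \; \|d\|_{x_k} \le \Delta_k} \varphi_k(d), \qquad x_{k+1} = R_{x_k}(d_k),
\]
where $\varphi_k$ is the piecewise linear working model assembled from cutting planes in $T_{x_k}M$ and $R_{x_k}$ denotes a retraction (for example, the exponential map).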
SIAM Journal on Optimization (10526234) 33(1), pp. 319-337
We propose a new algorithm for minimizing locally Lipschitz functions that combines both the bundle and trust-region techniques. Following the bundle approach, the objective function is approximated by a piecewise linear working model which is updated by adding cutting planes at unsuccessful trial steps. At each iteration, the algorithm defines a new trial point by solving a subproblem that employs the working model in the objective function subject to a trust region. The algorithm is studied from both theoretical and practical points of view. Under a lower-C1 assumption on the objective function, its global convergence to stationary points is verified. In order to demonstrate the reliability and efficiency of the proposed algorithm, a MATLAB implementation of it is prepared and numerical experiments are carried out on academic nonsmooth test problems. Computational results show that the developed method is efficient for solving nonsmooth and nonconvex optimization problems. © 2023 Society for Industrial and Applied Mathematics.
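A schematic acceptance test of this kind of method might look as follows; the threshold eta, the update factors, and the function step_control are illustrative, not taken from the paper:

```python
# Schematic serious/null-step test of a bundle trust-region iteration: compare
# the actual decrease of f with the decrease predicted by the piecewise
# linear working model.
import numpy as np

def step_control(f, x, d, model_decrease, Delta, eta=0.1):
    rho = (f(x) - f(x + d)) / max(model_decrease, 1e-12)
    if rho >= eta:                                      # serious step: accept x + d
        return x + d, max(Delta, 2.0 * np.linalg.norm(d, np.inf)), True
    return x, 0.5 * Delta, False                        # null step: add a cut, shrink
```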
Computers & Operations Research (03050548) 152
The objective functions in optimization models of the sum-of-squares clustering problem reflect intra-cluster similarity and inter-cluster dissimilarity, and in general, optimal values of these functions can be considered as appropriate measures of the compactness of clusters. However, the use of the objective function alone may not lead to well-separated clusters. To address this shortcoming of existing clustering models, we develop a new optimization model whose objective function is represented as a sum of two terms reflecting the compactness and the separability of clusters. Based on this model we develop a two-phase incremental clustering algorithm. In the first phase, the clustering function is minimized to find compact clusters, and in the second phase, a new model is applied to improve the separability of clusters. The Davies-Bouldin cluster validity index is applied as an additional measure to compare the compactness of clusters, and silhouette coefficients are used to estimate the separability of clusters. The performance of the proposed algorithm is demonstrated and compared with that of four other algorithms using synthetic and real-world data sets. Numerical results clearly show that, in comparison with the other algorithms, the new algorithm is able to find clusters with better separability and similar compactness.
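A minimal sketch of the two validity measures used above, computed with scikit-learn on placeholder data; the k-means call merely produces labels to score, and the paper's two-phase incremental algorithm is not reproduced:

```python
# Sketch: Davies-Bouldin index (compactness) and silhouette coefficient
# (separability) for a given clustering of a data set X.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score, silhouette_score

X = np.random.default_rng(0).normal(size=(300, 2))        # placeholder data
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("Davies-Bouldin:", davies_bouldin_score(X, labels))  # lower is better
print("Silhouette:", silhouette_score(X, labels))          # higher is better
```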
Numerical Algorithms (10171398) 94(2), pp. 765-787
In this paper, we focus on a descent algorithm for solving nonsmooth nonconvex optimization problems. The proposed method is based on the proximal bundle algorithm and the gradient sampling method and exploits the advantages of both. In addition, the algorithm is able to handle inexact information, which creates additional challenges. Global convergence is proved with probability one. More precisely, every accumulation point of the sequence of serious iterates is either a stationary point, if exact gradient values are provided, or an approximate stationary point, if only inexact information on function and gradient values is available. The performance of the proposed algorithm is demonstrated using some academic test problems. We further compare the new method with a general nonlinear solver and two other methods specifically designed for nonconvex nonsmooth optimization problems. © 2023, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
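Schematically, the gradient sampling ingredient approximates a stabilized subdifferential from gradients at nearby randomly sampled points,
\[
G_\epsilon(x) = \operatorname{conv}\{\nabla f(y_1), \dots, \nabla f(y_m)\}, \qquad y_i \in B(x, \epsilon) \ \text{(sampled where } f \text{ is differentiable)},
\]
and uses the negative of the minimum-norm element of $G_\epsilon(x)$ as a search direction; how this is combined with the proximal bundle machinery and with inexact data is the subject of the paper.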
Numerical Algorithms (10171398) 89(2), pp. 637-674
In this paper, a proximal bundle-based method for solving nonsmooth nonconvex constrained multiobjective optimization problems with inexact information is proposed and analyzed. In this method, each objective function is treated individually, without employing any scalarization. Using the improvement function, we transform the problem into an unconstrained one. At each iteration, a piecewise linear model is built by the proximal bundle method and a new candidate iterate is obtained by solving a convex subproblem. For locally Lipschitz objective and constraint functions, we study the problem of computing an approximate substationary point (a substationary point) when only inexact (exact) information about the functions and subgradient values is accessible. Finally, some numerical experiments are provided to illustrate the effectiveness of the method and certify the theoretical results. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
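A common form of the improvement function used to fold constraints into an unconstrained problem (the paper's exact definition, which also accounts for inexactness, may differ) is
\[
H(x; y) = \max \bigl\{ f_i(x) - f_i(y), \; g_l(x) : i = 1, \dots, p, \; l = 1, \dots, q \bigr\},
\]
so that driving $H(\cdot\,; x_k)$ below zero simultaneously improves every objective and enforces feasibility of the constraints $g_l(x) \le 0$.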
Annals of Operations Research (02545330) 311(2), pp. 1123-1154
For a class of nonsmooth nonconvex multiobjective problems, we develop an inexact multiple proximal bundle method. In our approach, instead of scalarization, we find a descent direction for every objective function separately by utilizing the inexact proximal bundle method, and then attempt to find a common descent direction for all objective functions. We study the effect of the inexactness of the objective and subgradient values on the proposed method and obtain reasonable convergence properties. We further consider a class of difficult nonsmooth nonconvex problems, made even more difficult by inserting inexactness into the available information. Finally, to demonstrate the efficiency of the proposed algorithm, some encouraging numerical experiments are provided. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
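For context, a standard way to obtain a common descent direction (the paper first builds the per-objective information via inexact proximal bundle steps) is to take the minimum-norm element of the convex hull of the per-objective subgradient data:
\[
d = -\bar{\xi}, \qquad \bar{\xi} = \operatorname*{argmin} \Bigl\{ \|\xi\|^2 : \xi \in \operatorname{conv} \bigcup_{i=1}^{p} \partial f_i(x) \Bigr\},
\]
since then $f_i'(x; d) \le -\|\bar{\xi}\|^2$ for every objective whenever $\bar{\xi} \neq 0$.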
Optimization Letters (18624472) 16(5), pp. 1495-1511
A proximal bundle algorithm is proposed for solving unconstrained nonsmooth nonconvex optimization problems. At each iteration, using already generated information, the algorithm defines a convex model of the augmented objective function. Then, by solving a quadratic subproblem, a new candidate iterate is obtained and the process is repeated. The novelty of our approach is that the objective function can be any arbitrary locally Lipschitz function, without any additional assumptions. Global convergence, starting from any point, is also studied. Finally, some encouraging numerical results with a MATLAB implementation are reported. © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
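In generic proximal bundle form (a sketch, not the paper's exact augmented construction), the quadratic subproblem reads
\[
d_k = \operatorname*{argmin}_{d \in \mathbb{R}^n} \; \check{f}_k(x_k + d) + \frac{u_k}{2} \|d\|^2,
\]
where $\check{f}_k$ is the piecewise linear cutting-plane model and $u_k > 0$ is the proximal parameter; working with an augmented objective is what keeps this model convex even when $f$ itself is nonconvex.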
Journal of Global Optimization (09255001) 79(1), pp. 1-37
A filter proximal bundle algorithm is presented for nonsmooth nonconvex constrained optimization problems. The new algorithm is based on the proximal bundle method and utilizes the improvement function to regularize the constraint. At every iteration, a trial point is obtained by solving a convex piecewise-linear subproblem. The filter technique is then employed either to accept the trial point as a serious iterate or to reject it as a null iterate. Under some mild and standard assumptions, and for every possible choice of a starting point, it is shown that every accumulation point of the sequence of serious iterates is feasible. In addition, there exists at least one accumulation point which is stationary for the improvement function. Finally, some encouraging numerical results show that the proposed algorithm is effective. © 2020, Springer Science+Business Media, LLC, part of Springer Nature.
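For illustration, a classical Fletcher-Leyffer-style filter keeps pairs of infeasibility h and objective value f and accepts a trial point only if no stored pair dominates it; the margins beta and gamma and the function name below are illustrative, and the paper's filter rules may differ:

```python
# Sketch: a trial point with infeasibility h and objective value f is acceptable
# to the filter if, against every stored pair (h_j, f_j), it sufficiently
# reduces either the infeasibility or the objective.
def acceptable(h, f, filter_pairs, beta=0.99, gamma=1e-5):
    return all(h <= beta * h_j or f <= f_j - gamma * h
               for (h_j, f_j) in filter_pairs)
```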
Computational Optimization and Applications (15732894) 80(2), pp. 411-438
A method, called an augmented subgradient method, is developed to solve unconstrained nonsmooth difference of convex (DC) optimization problems. At each iteration of this method, search directions are found by using several subgradients of the first DC component and one subgradient of the second DC component of the objective function. The developed method applies an Armijo-type line search procedure to find the next iteration point. It is proved that the sequence of points generated by the method converges to a critical point of the unconstrained DC optimization problem. The performance of the method is demonstrated using academic test problems with nonsmooth DC objective functions and compared with that of two general nonsmooth optimization solvers and five solvers specifically designed for unconstrained DC optimization. Computational results show that the developed method is efficient and robust for solving nonsmooth DC optimization problems. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
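For reference, the standard setting is $f = f_1 - f_2$ with $f_1, f_2$ convex, and the criticality notion typically used for such problems is
\[
x^* \ \text{is critical} \iff \partial f_1(x^*) \cap \partial f_2(x^*) \neq \emptyset,
\]
a condition weaker than Clarke stationarity in general; the augmented subgradient construction draws several subgradients of $f_1$ and one of $f_2$ to build search directions toward such points.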
Computational Optimization and Applications (15732894) 74(2), pp. 443-480
The proximal bundle method has usually been presented for unconstrained convex optimization problems. In this paper, we develop an infeasible proximal bundle method for nonsmooth nonconvex constrained optimization problems. Using the improvement function, we transform the problem into an unconstrained one and then build a cutting-plane model. The resulting algorithm allows effective control of the size of the quadratic programming subproblems via aggregation techniques. The novelty of our approach is that the objective and constraint functions can be arbitrary (regular) locally Lipschitz functions. In addition, global convergence, starting from any point, is proved in the sense that every accumulation point of the iterative sequence is stationary for the improvement function. Finally, some encouraging numerical results with a MATLAB implementation are reported. © 2019, Springer Science+Business Media, LLC, part of Springer Nature.
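Schematically, the aggregation technique referred to above compresses the bundle using the optimal multipliers $\lambda_j^k$ of the quadratic programming subproblem,
\[
\tilde{\xi}_k = \sum_{j \in J_k} \lambda_j^k \xi_j, \qquad \tilde{\alpha}_k = \sum_{j \in J_k} \lambda_j^k \alpha_j^k,
\]
and the single aggregate pair $(\tilde{\xi}_k, \tilde{\alpha}_k)$ may replace all the cuts it summarizes, so the subproblem size stays bounded regardless of the iteration count.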