We study a new trust region affine scaling method for general bound constrained optimization problems. At each iteration, we compute two trial steps. The first is obtained by solving an appropriate quadratic model over an ellipsoidal region defined by an affine scaling technique, which depends on both the distances from the current iterate to the boundary and the trust region radius. To guarantee convergence and to prevent the iterates from being trapped near nonstationary points, an auxiliary step is defined along a newly defined approximate projected gradient. The trial step used to generate the next iterate is whichever of these two steps achieves the greater reduction of the quadratic model. We prove that the iterates generated by the new algorithm are not bounded away from stationary points, and, assuming that the second-order sufficient condition holds at some nondegenerate stationary point, we prove Q-linear convergence of the objective function values. Preliminary numerical experience on bound constrained problems from the CUTEr collection is also reported.
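To make the ellipsoidal geometry concrete, here is a minimal sketch of one scaled trial step, assuming a Coleman-Li-type distance-to-bound scaling as a stand-in for the paper's scaling (which also involves the trust region radius); the auxiliary approximate projected gradient step and the model-reduction comparison between the two steps are not shown, and all names are illustrative.

```python
import numpy as np

def scaled_trial_step(x, g, B, lo, hi, delta):
    """One ellipsoidal trust-region trial step for min g.s + 0.5 s.B.s
    subject to bounds lo <= x + s <= hi.  The distance-to-bound scaling
    below is a Coleman-Li-flavoured stand-in; the paper's scaling also
    incorporates the trust-region radius."""
    d = np.where(g < 0, hi - x, x - lo)     # distance to the bound -g points toward
    d = np.minimum(d, 1.0)                  # keep entries finite near infinite bounds
    p = -d * g                              # scaled steepest-descent direction
    if np.linalg.norm(p) < 1e-14:
        return np.zeros_like(x)             # scaled gradient vanishes: no step
    # Minimize m(t) = t*(g.p) + 0.5*t^2*(p.B.p) over the ellipsoid
    # ||s/d|| <= delta; along s = t*p this means t*||g|| <= delta.
    curv = p @ B @ p
    t_max = delta / np.linalg.norm(g)
    t = t_max if curv <= 0 else min(-(g @ p) / curv, t_max)
    return np.clip(x + t * p, lo, hi) - x   # step projected back into the box
```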
In this paper, we consider the positive semidefinite space tensor cone constrained convex program, its structure and algorithms. We study defining functions, defining sequences and polyhedral outer approximations for this positive semidefinite space tensor cone, give an error bound for the polyhedral outer approximation approach, and thus establish convergence of three polyhedral outer approximation algorithms for solving this problem. We then study some other approaches for solving this structured convex program. These include the conic linear programming approach, the nonsmooth convex program approach and the bi-level program approach. Some numerical examples are presented.
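As a rough illustration of the polyhedral outer approximation idea, the sketch below tests an even-order (here order-4) dimension-3 symmetric tensor against the cone condition A x^4 >= 0 for all x by sampling the unit sphere; the random sampling and the function names are assumptions standing in for the paper's defining sequences.

```python
import numpy as np

def tensor_value(A, x):
    """Quartic form A x^4 of an order-4, dimension-3 symmetric tensor A
    (shape (3,3,3,3)); A belongs to the PSD cone when this is
    nonnegative for every x."""
    return np.einsum('ijkl,i,j,k,l->', A, x, x, x, x)

def refine_outer_approximation(A, cuts, n_samples=2000, tol=1e-8, seed=None):
    """One refinement round: search the unit sphere for a direction with
    A x^4 < 0 and, if one is found, record it as a new linear cut
    (A x^4 >= 0 is linear in the entries of A).  Random sampling is a
    crude stand-in for the paper's defining sequences."""
    rng = np.random.default_rng(seed)
    xs = rng.normal(size=(n_samples, 3))
    xs /= np.linalg.norm(xs, axis=1, keepdims=True)
    vals = np.array([tensor_value(A, x) for x in xs])
    i = int(np.argmin(vals))
    if vals[i] < -tol:
        cuts.append(xs[i])   # A violates the cone: tighten the polyhedron
        return False
    return True              # no violation detected at this sampling resolution
```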
The augmented Lagrangian method is a classical method for solving constrained optimization problems. Recently, it has attracted much attention due to its applications to sparse optimization in compressive sensing and to low rank matrix optimization problems. However, most augmented Lagrangian methods use first order information to update the Lagrange multipliers, which leads to only linear convergence. In this paper, we study an update technique based on second order information and prove that superlinear convergence can be obtained. Theoretical properties of the update formula are given, and some implementation issues regarding the new update are also discussed.
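For context, here is a minimal sketch of the classical augmented Lagrangian loop with the standard first-order multiplier update, the baseline the paper improves on; the paper's second-order update formula is not reproduced, and all names here are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, c, x0, lam0, rho=10.0, iters=20, tol=1e-10):
    """Classical augmented Lagrangian loop for min f(x) s.t. c(x) = 0.
    The multiplier update is the standard first-order rule
    lam <- lam + rho * c(x); the paper replaces this line with a
    second-order formula (not reproduced here) to obtain superlinear
    convergence."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for _ in range(iters):
        L = lambda z, lam=lam: f(z) + lam @ c(z) + 0.5 * rho * (c(z) @ c(z))
        x = minimize(L, x, method='BFGS').x   # inner unconstrained minimization
        if np.linalg.norm(c(x)) < tol:
            break
        lam = lam + rho * c(x)                # first-order multiplier update
    return x, lam

# Example: min x0^2 + x1^2 s.t. x0 + x1 = 1, whose solution is (0.5, 0.5):
# x, lam = augmented_lagrangian(lambda z: z @ z,
#                               lambda z: np.array([z[0] + z[1] - 1.0]),
#                               [0.0, 0.0], [0.0])
```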
Conjugate gradient methods have played a special role in solving large-scale nonlinear problems. Recently, the author and Dai proposed an efficient nonlinear conjugate gradient method, called CGOPT, by seeking the conjugate gradient direction closest to the direction of the scaled memoryless BFGS method. In this paper, we make use of two types of modified secant equations to improve the CGOPT method. Under some assumptions, the improved methods are shown to be globally convergent. Numerical results are also reported.
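Modified secant equations of this kind are standard in the literature; the sketch below implements one common variant (the Zhang-Deng-Chen form with u = s) and, purely for illustration, plugs it into a Hestenes-Stiefel style direction, since CGOPT's actual beta comes from fitting the scaled memoryless BFGS direction and is not reproduced here.

```python
import numpy as np

def modified_y(f_k, f_k1, g_k, g_k1, s):
    """Zhang-Deng-Chen type modified secant vector (with u = s):
    y* = y + (theta / s.s) * s,
    theta = 6(f_k - f_{k+1}) + 3(g_k + g_{k+1}).s,
    so the secant condition B s = y* also carries function-value
    information, not just gradient differences."""
    y = g_k1 - g_k
    theta = 6.0 * (f_k - f_k1) + 3.0 * ((g_k + g_k1) @ s)
    return y + (theta / (s @ s)) * s

def cg_direction(g_k1, d, y):
    """Hestenes-Stiefel style direction using the (possibly modified) y.
    Illustrative only: CGOPT's beta is derived from the scaled memoryless
    BFGS fit and differs from this choice."""
    beta = (g_k1 @ y) / (d @ y)
    return -g_k1 + beta * d
```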