Lagrange Multiplier Method for Convex Programs
AUTHOR(S)
Duffin, R. J.
ABSTRACT
The problem of minimizing a convex function subject to the constraint that a number of other convex functions be nonpositive can be treated by the Lagrange multiplier method. Such a treatment was revived by Kuhn and Tucker and further studied by many other scientists. These studies led to an associated maximizing problem on the Lagrange function. The aim of this note is to give a short elementary proof that the infimum of the first problem equals the supremum of the second problem. To carry this out it is necessary to relax the constraints of the first (or the second) problem so that the constraints are enforced only in the limit. This relaxation of constraints is unnecessary if prescribing upper bounds on all the convex functions defines a bounded set of points in the domain of the functions. The domain of the functions can be n-dimensional space or a reflexive Banach space.
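The equality of the two optimal values can be seen in a small worked case. The sketch below is not from the paper; it uses a one-dimensional convex program of our own choosing (minimize x² subject to 1 − x ≤ 0) and checks numerically that the primal infimum matches the supremum of the Lagrangian dual.

```python
# Illustrative sketch (assumed example, not from the paper): the primal
# infimum of a convex program equals the supremum of its Lagrangian dual.
# Primal: minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.

def lagrangian(x, lam):
    """L(x, lam) = f(x) + lam * g(x) with f(x) = x^2, g(x) = 1 - x."""
    return x * x + lam * (1.0 - x)

def dual(lam):
    """Dual function: inf over x of L(x, lam); the minimizer is x = lam / 2."""
    x_star = lam / 2.0
    return lagrangian(x_star, lam)

# Primal infimum: the constraint is active at x = 1, giving value 1.
primal_value = 1.0 ** 2

# Dual supremum over lam >= 0: g(lam) = lam - lam^2 / 4 is maximized at lam = 2.
dual_value = dual(2.0)

print(primal_value, dual_value)  # both 1.0: infimum equals supremum
```

Here the feasible set {x : 1 − x ≤ 0} is unbounded but the level sets of the objective restricted to it are bounded, so no limiting relaxation of the constraints is needed and the two values coincide exactly.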
ARTICLE ACCESS
http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=432629
RELATED DOCUMENTS
- Clark's Theorem on linear programs holds for convex programs
- Convex programs having some linear constraints
- A CONVEX APPROXIMANT METHOD FOR NONCONVEX EXTENSIONS OF GEOMETRIC PROGRAMMING*
- A new primal-dual path-following method for convex quadratic programming
- On the convergence properties of the projected gradient method for convex optimization