Duality Gap in Nonlinear Programming

DOI: 10.17577/IJERTV1IS6514


Santosh Kumar Shrivastav

Department of Mathematics, Jaypee University of Engineering and Technology, Guna (M.P), India

Duality plays an important role in nonlinear programming: it provides a numerical and theoretical foundation for many optimization algorithms. Duality methods can be used to solve an NLP directly or indirectly, and they are also useful for finding an upper or lower bound on the objective function over its constraints. Duality theory has limitations, however. It is best suited to convex problems; when it is applied to nonconvex problems, including discrete and mixed-integer problems, it is not always easy to prove the weak duality, strong duality, strict converse duality, and converse duality theorems, or to obtain a zero duality gap. In this paper, I propose an enhanced duality theory for nonlinear optimization that overcomes some of the limitations of previous dual methods. The enhanced theory leads to a zero duality gap for nonlinear programming problems where this was not possible with previously available methods. Finally, I discuss an example showing that the proposed enhanced duality theory is more efficient and effective than other available methods.

  1. The nonlinear programming problem (NLP) has the following form:

     (NLP): min f(x, y)
     subject to h(x, y) = 0, g(x, y) ≤ 0,   (1)

     where the variable x ∈ X is the continuous part and y ∈ Y is the discrete part; Y is a finite discrete set of k elements. We assume that the objective function f is lower bounded and is continuous and differentiable with respect to x, and that h = (h1, …, hm)^T and g = (g1, …, gt)^T are continuous in the continuous subspace X for any y ∈ Y. The NLP form covers all types of nonlinear optimization problems.

     Duality is an important development in mathematical programming. A central issue of duality theory is the existence of a duality gap: the difference between the optimal solution of the original problem and the lower bound obtained by solving the dual problem. The duality gap is generally zero for convex optimization problems, but for nonconvex problems this is not always true: the gap is generally nonzero and may be large for some problems, in which case the dual approach is not useful. Moreover, for discrete and mixed-integer problems, the duality gap may be nonzero even if the functions are convex. Nonzero gaps make direct dual methods fail and make global optimization algorithms such as branch and bound less effective. A number of research papers [3, 17] give efficient methods for removing the duality gap for nonconvex problems, and the limitations of these previous works motivate a more extensive study of this topic.

     Some previous dual functions require large penalty values to ensure zero duality gaps for nonconvex problems. Large penalty values give ill-conditioned optimization problems and increase the difficulty of obtaining an optimal solution. Moreover, some previous zero-gap results are developed only for continuous and differentiable problems; the parameters for achieving zero duality gap are often related to the Lagrange multipliers [19, 23], which do not exist for discrete problems, so such results cannot be applied to discrete or mixed-integer problems, or to problems with nondifferentiable constraints. In this paper, I develop a duality theory that is efficient and effective at closing duality gaps.

  2. Duality in nonlinear programming, or in mathematical programming generally, is the statement of a relationship of a certain kind between two mathematical programming problems. The relationship commonly has three aspects:

     One problem, the primal, is a constrained minimization problem, while the other, the dual, is a constrained maximization problem;

     The existence of a solution to one of these problems ensures the existence of a solution to the other, in which case their respective extreme values are equal; and

     If the constraints of one problem are consistent while those of the other are not, there is a sequence of points satisfying the constraints of the first on which its objective function tends to infinity.

     Consider the NLP:

     (NLP): min f(x) subject to g(x) ≤ 0, x ∈ X,

     where X is an open subset of R^n, and f : X → R and g : X → R^t are differentiable.

     The Wolfe type dual of this problem is:

     (WD): max f(u) + y^T g(u)
     subject to ∇f(u) + ∇(y^T g)(u) = 0, y ≥ 0.

     Wolfe established weak duality under the assumption that f and g are convex at u.
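     For completeness, the standard weak-duality argument (a textbook derivation, sketched here rather than quoted from this paper) runs as follows. Let x be feasible for (NLP) and (u, y) be feasible for (WD). Convexity of f and g at u gives

     f(x) ≥ f(u) + ∇f(u)^T (x − u), and, since y ≥ 0, y^T g(x) ≥ y^T g(u) + ∇(y^T g)(u)^T (x − u).

     Adding the two inequalities and using the (WD) constraint ∇f(u) + ∇(y^T g)(u) = 0 yields f(x) + y^T g(x) ≥ f(u) + y^T g(u); since g(x) ≤ 0 and y ≥ 0 imply y^T g(x) ≤ 0, we conclude f(x) ≥ f(u) + y^T g(u).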

     In relation to (NLP), Mond and Weir [18] established the corresponding dual as:

     (MWD): max f(u)
     subject to ∇f(u) + ∇(y^T g)(u) = 0, y^T g(u) ≥ 0, y ≥ 0.

     Mond and Weir established the weak duality theorem under the assumption that f is pseudoconvex and y^T g is quasiconvex on the feasible region.

     Duality is an important notion throughout mathematical programming. Consider the following continuous nonlinear programming (CNLP) problem:

     (CP): min f(x)
     subject to h(x) = (h1(x), …, hm(x))^T = 0,   (2)
     g(x) = (g1(x), …, gt(x))^T ≤ 0, x ∈ X,

     where X is a compact subset of R^n, and f, g, and h are lower bounded and continuous, but not necessarily differentiable.

     The Lagrangian function then has the form:

     L(x, λ, μ) = f(x) + λ^T h(x) + μ^T g(x),   (3)

     where λ ∈ R^m and μ ∈ R^t, μ ≥ 0, are multiplier vectors.

     Dual methods transform the original problem into a dual problem defined as follows:

     q* = max q(λ, μ) over λ ∈ R^m and μ ≥ 0,   (4)

     where the dual function is defined as:

     q(λ, μ) = min over x ∈ X of L(x, λ, μ).   (5)

     The main results of duality theory are the following. First, the value q* obtained by solving the dual problem (4) is a lower bound on the optimal objective value f* of the original problem, i.e., q* ≤ f*. There have been a number of studies on eliminating duality gaps, and several previous works have shown that the gap can be reduced when a problem is decomposed or has certain special structures [1, 2, 21].
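     As a minimal numerical illustration of (3)-(5) (my own sketch, not from the paper), the following Python script evaluates the dual function of the convex toy problem min x^2 subject to 1 − x ≤ 0 by grid minimization over a discretized X, and confirms weak duality, q(μ) ≤ f*, for every multiplier:

         import numpy as np

         # Toy convex problem: min f(x) = x^2  subject to  g(x) = 1 - x <= 0.
         # The optimum is x* = 1 with f* = 1.
         X = np.linspace(-5.0, 5.0, 100001)   # discretized compact set X

         def q(mu):
             # Dual function (5): q(mu) = min over x in X of f(x) + mu * g(x).
             return np.min(X**2 + mu * (1.0 - X))

         f_star = 1.0
         q_vals = [q(mu) for mu in np.linspace(0.0, 4.0, 401)]
         assert all(qv <= f_star + 1e-9 for qv in q_vals)   # weak duality holds
         print(max(q_vals))   # ~1.0: zero duality gap, as expected for a convex problem

     Because this toy problem is convex, the maximum of q equals f*; the nonconvex cases discussed below are exactly those where this maximum stays strictly below f*.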

     To remove the duality gap for nonconvex problems, augmented Lagrangian functions [3, 17] were introduced for continuous NLPs. Rubinov et al. [19, 23] have extended the penalty function to a class of nonlinear penalty functions with zero duality gap, where the function takes the following form:

     L_γ(x, c) = [ f(x)^γ + Σ_{i=1..m} c_i |h_i(x)|^γ + Σ_{j=1..t} c_{m+j} (g_j^+(x))^γ ]^{1/γ},   (6)

     where γ > 0 is a parameter and c ≥ 0 is a vector of penalty multipliers.

     Luo et al. [14] have proposed a nonconvex and non-smooth penalty function with zero duality gap based on the following formulation, where c > 0 is a parameter:

    (7)

     An exact penalty function with zero duality gap under certain assumptions was proposed by Pang [16] as follows:

     L(x, c) = f(x) + c · max{ |h1(x)|, …, |hm(x)|, g1^+(x), …, gt^+(x) }.   (8)

     For the continuous problem given in (2), most of the existing augmented Lagrangian functions and exact penalty functions with zero duality gap for nonconvex continuous optimization [5-7, 14-15, 18-20, 23] take the general form:

     L(x, λ, μ, c) = f(x) + P(h(x), g(x), λ, μ) + c σ(h(x), g(x)),   (9)

     where λ and μ are Lagrange-multiplier vectors, P is a nonlinear Lagrangian term, c ≥ 0 is a penalty parameter, and σ is an augmenting function. A general framework that provides a unified treatment of a family of Lagrange-type functions, together with conditions for achieving zero duality gap for constrained optimization problems under some convexity assumptions, is given by Burachik and Rubinov [1, 5]. A recent work by Nedic and Ozdaglar [15] develops necessary and sufficient conditions for a zero duality gap based on a geometric analysis, which considers the geometric primal problem of finding the minimum intercept. Most previous methods for removing duality gaps use a single penalty multiplier c on the augmenting term. However, a good c is hard to locate and control; in practice, the single c is often too large, which makes the unconstrained optimization difficult. In this paper, I propose multiple penalty multipliers, which effectively reduce the penalty values needed to ensure zero duality gaps.

  3. I now develop the results for the continuous nonlinear programming problem (CP) given in (2).

     Definition 3.1 (Constrained global minimum):

     A point x* ∈ X is a constrained global minimum if x* is feasible and f(x*) ≤ f(x) for all feasible x ∈ X.

     Definition 3.2: The penalty function for (CP) in (2) is defined as:

     L(x, α, β) = f(x) + α^T |h(x)| + β^T g^+(x),   (10)

     where |h(x)| = (|h1(x)|, …, |hm(x)|)^T and g^+(x) = (g1^+(x), …, gt^+(x))^T, with a^+ = max(0, a) for a ∈ R, and where α ∈ R^m, α ≥ 0, and β ∈ R^t, β ≥ 0, are penalty multipliers. The corresponding enhanced dual problem is q* = max q(α, β) over α ≥ 0 and β ≥ 0, where q(α, β) = min over x ∈ X of L(x, α, β).
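     Equation (10) translates directly into code. The following Python sketch (the function names are mine) builds the penalty function from per-constraint multipliers, which is the feature distinguishing (10) from single-multiplier penalties:

         import numpy as np

         def make_penalty(f, h, g, alpha, beta):
             # Enhanced penalty (10): L(x) = f(x) + alpha^T |h(x)| + beta^T g+(x),
             # with one nonnegative multiplier per constraint instead of a single c.
             def L(x):
                 return (f(x)
                         + np.dot(alpha, np.abs(h(x)))
                         + np.dot(beta, np.maximum(0.0, g(x))))
             return L

         # Example: f(x) = x1 + x2 with one equality constraint h(x) = x1*x2 - 1.
         L = make_penalty(lambda x: x[0] + x[1],
                          lambda x: np.array([x[0] * x[1] - 1.0]),
                          lambda x: np.array([]),        # no inequality constraints
                          alpha=np.array([2.0]), beta=np.array([]))
         print(L(np.array([1.0, 1.0])))   # at a feasible point L = f = 2.0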

     Definition 3.3:

     The directional derivative of a function f at a point x ∈ X along a direction p is:

     f′(x; p) = lim (t → 0+) [f(x + t p) − f(x)] / t.   (11)
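     For example (my illustration, not the paper's), for the nondifferentiable function f(x) = |x| at x = 0 the one-sided limit in (11) exists along every direction, which a quick Python check confirms:

         f = abs
         for p in (1.0, -1.0):
             for t in (1e-1, 1e-3, 1e-6):
                 print(p, (f(0.0 + t * p) - f(0.0)) / t)   # quotient -> 1.0
         # f'(0; 1) = f'(0; -1) = 1, even though f is not differentiable at 0,
         # which is why (11) is the right derivative notion for nonsmooth (CP).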

     Definition 3.4 (Constraint qualification condition):

     A point x ∈ X of (CP) satisfies the constraint qualification if there exists no direction p ∈ R^n along which the directional derivative of the objective is nonzero while the directional derivatives of the continuous equality and continuous active inequality constraints are all zero. That is, there exists no p ∈ R^n such that

     f′(x; p) ≠ 0, h_i′(x; p) = 0 for all i ∈ C_h, and g_j′(x; p) = 0 for all j ∈ C_g,

     where C_h and C_g are, respectively, the sets of indices of continuous equality and continuous active inequality constraints. The constraint qualification is trivially satisfied if both C_h and C_g are empty. Intuitively, constraint qualification at x ensures the existence of finite α and β that lead to a local minimum of (10) at x. Consider a neighboring point x + p infinitesimally close to x, where the objective function f decreases along p while all active constraints at x have zero directional derivative along p. In this case, all the active constraints at x + p remain close to zero, and it would be impossible to find finite α and β that establish a local minimum of (10) at x with respect to x + p. To ensure a local minimum of (10) at x, this scenario must not occur for any p at x.

     We compare this constraint-qualification condition with the well-known regularity condition in the KKT conditions, which requires linear independence of the gradients of the active constraint functions. The regularity condition applies only to differentiable problems, whereas the constraint-qualification condition here does not require differentiability.

     Definition 3.5 (Feasible set and ε-extension):

     Let the set of all feasible points of (CP) be:

     F = {x | x ∈ X, h(x) = 0, g(x) ≤ 0}.   (12)

     The ε-extension of F, where ε > 0 is a scalar value, is:

     F_ε = {x | x ∈ X, min over y ∈ F of ‖y − x‖ ≤ ε}.

     Namely, F_ε includes the points in F and all those points whose projection distance to F is within ε. Here ‖·‖ denotes the Euclidean norm.

     Proposition 3.1: If f(x), h(x), and g(x) in (CP) are differentiable and a constrained global minimum x ∈ X of (CP) is regular, then x satisfies the constraint qualification.

     Theorem 3.1 (Enhanced duality theorem for continuous programming): Suppose x* ∈ X is a constrained global minimum of (CP) and x* satisfies the constraint qualification. Then there is no duality gap for the enhanced dual problem defined through the penalty function (10), i.e., q* = f(x*).

     Proof: First, q* ≤ f(x*), since for any α ≥ 0 and β ≥ 0,

     q(α, β) = min over x ∈ X of L(x, α, β) ≤ L(x*, α, β) = f(x*).   (13)

     Conversely, because x* satisfies the constraint qualification, there exist finite α* ≥ 0 and β* ≥ 0 such that

     q(α*, β*) = min over x ∈ X of L(x, α*, β*) = L(x*, α*, β*).   (14)

     Since L(x*, α*, β*) = f(x*), we obtain q* = f(x*).

     Now consider the following discrete nonlinear programming problem (DNLP):

     (Pd): min f(y), where y = (y1, …, yw)^T ∈ Y,
     subject to h(y) = 0 and g(y) ≤ 0,   (15)

     where f is lower bounded, Y is a finite discrete set, and f, g, and h are not necessarily continuous or differentiable with respect to y.

     Definition 3.6 (Constrained global minimum of Pd):

     A point y* ∈ Y is a CGMd, a constrained global minimum of (Pd), if y* is feasible and f(y*) ≤ f(y) for all feasible y ∈ Y.

     Theorem 3.2: Let y* ∈ Y be a constrained global minimum of (Pd). Then there exist finite α* ≥ 0 and β* ≥ 0 such that

     q(α*, β*) = min over y ∈ Y of L(y, α*, β*) = f(y*).   (16)

     Proof:

     Since L(y*, α, β) = f(y*) for any α ≥ 0 and β ≥ 0, we need to prove that there exist finite α* ≥ 0 and β* ≥ 0 such that

     L(y, α*, β*) ≥ f(y*) for all y ∈ Y.   (17)

     We take α* and β* such that:

     α_i* ≥ max over {y ∈ Y : |hi(y)| > 0} of (f(y*) − f(y)) / |hi(y)|, with α_i* ≥ 0, i = 1, 2, …, m,   (18)

     β_j* ≥ max over {y ∈ Y : gj(y) > 0} of (f(y*) − f(y)) / gj(y), with β_j* ≥ 0, j = 1, 2, …, t.   (19)

     These maxima are finite because Y is a finite set. Next, we show that L(y, α*, β*) ≥ f(y*) for all y ∈ Y.

     For a feasible point y ∈ Y, since h(y) = 0 and g(y) ≤ 0, we have:

     L(y, α*, β*) = f(y) ≥ f(y*).   (20)

     For an infeasible point y ∈ Y, if there is at least one equality constraint hi(y) that is not satisfied (|hi(y)| > 0), we have:

     L(y, α*, β*) ≥ f(y) + α_i* |hi(y)| ≥ f(y) + (f(y*) − f(y)) = f(y*).   (21)

     If there is at least one inequality constraint gj(y) that is not satisfied (gj(y) > 0), we have:

     L(y, α*, β*) ≥ f(y) + β_j* gj(y) ≥ f(y) + (f(y*) − f(y)) = f(y*).   (22)

     Equation (16) is proved by combining (20), (21), and (22).

     The extended dual problem for (Pd) is the same as the one in Definition 3.2 defined for (CP), except that the variable space is Y instead of X. Based on Theorem 3.2, we obtain the corresponding result for discrete-space extended duality, which can be proved in the same way as Theorem 3.1.
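     To make the multiplier construction in (18)-(19) concrete, here is a small Python check on a toy discrete instance of my own (Y = {0, …, 3}^2, f(y) = y1 + y2, h(y) = y1 y2 − 2, g(y) = 1 − y1); it computes the threshold multipliers and verifies the zero-gap property (16):

         import itertools

         Y = list(itertools.product(range(4), repeat=2))   # finite discrete space
         f = lambda y: y[0] + y[1]
         h = lambda y: y[0] * y[1] - 2.0                   # equality constraint
         g = lambda y: 1.0 - y[0]                          # inequality constraint

         feasible = [y for y in Y if h(y) == 0 and g(y) <= 0]
         y_star = min(feasible, key=f)                     # constrained global minimum
         f_star = f(y_star)

         # Threshold multipliers from (18) and (19).
         alpha = max([(f_star - f(y)) / abs(h(y)) for y in Y if abs(h(y)) > 0] + [0.0])
         beta = max([(f_star - f(y)) / g(y) for y in Y if g(y) > 0] + [0.0])

         L = lambda y: f(y) + alpha * abs(h(y)) + beta * max(0.0, g(y))
         assert min(L(y) for y in Y) == f_star             # (16): zero duality gap
         print(y_star, f_star, alpha, beta)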

     Example:

     I illustrate a continuous problem for which there is a duality gap under the original duality theory but not under the proposed enhanced duality theory. Consider the following CNLP:

     min f(x) = x1 + x2 subject to h(x) = x1 x2 − 1 = 0.

     It is obvious that f* = 2 at (x1*, x2*) = (1, 1). For the original duality, the dual function is:

     q(λ) = min over x ∈ X of [x1 + x2 + λ(x1 x2 − 1)],

     and the resulting dual optimum satisfies q* < f*, so there is a nonzero duality gap. In contrast, using the proposed duality theory, the extended dual function is:

     q(α) = min over x ∈ X of [x1 + x2 + α |x1 x2 − 1|].

     It is easy to validate that q(α) = 2 for every sufficiently large finite α ≥ α*. Therefore, we have q* = f* = 2, and there is no duality gap for the proposed duality approach.
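     The example can also be checked numerically. In the following Python sketch I take X to be the compact box [0, 10]^2 (the paper leaves X implicit) and replace the exact minimizations by grid search; with this choice of X, α = 2 already closes the gap:

         import numpy as np

         ts = np.linspace(0.0, 10.0, 1001)
         X1, X2 = np.meshgrid(ts, ts)                 # assumed compact X = [0,10]^2
         f = X1 + X2
         h = X1 * X2 - 1.0

         # Original dual: q(lam) = min_x [f + lam*h]; its maximum stays below f* = 2.
         q_star = max(np.min(f + lam * h) for lam in np.linspace(-1.0, 1.0, 401))
         print("original dual optimum ~", q_star)     # nonzero duality gap

         # Extended dual: q(alpha) = min_x [f + alpha*|h|] with alpha = 2.
         print("extended dual value   ~", np.min(f + 2.0 * np.abs(h)))   # ~2.0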

  4. Conclusions

In this paper, we have proposed an enhanced theory of duality for nonlinear optimization. The theory overcomes the limitations of conventional duality theory by providing a duality condition that leads to a zero duality gap for general nonconvex optimization problems in discrete, continuous, and mixed spaces. Based on the proposed penalty function, the theory requires smaller penalty values to achieve a zero duality gap than previous approaches for removing the duality gap, thus alleviating the ill-conditioning of dual functions.

References:

  1. Aubin, J.P., Ekeland, I.: Estimates of the duality gap in nonconvex optimization. Math. Oper. Res. 1, 225–245 (1976)

  2. Ben-Tal, A., Eiger, G., Gershovitz, V.: Global optimization by reducing the duality gap. Math. Program. 63, 193–212 (1994)

  3. Bertsekas, D.P.: Distributed dynamic programming. IEEE Trans. Autom. Control AC-27(3), 610–616 (1982)

  4. Bertsekas, D.P.: Nonlinear Programming. Athena Scientific, Belmont (1999)

  5. Burachik, R.S., Rubinov, A.: On the absence of duality gap for Lagrange-type functions. J. Ind. Manag. Optim. 1(1), 33–38 (2005)

  6. Burke, J.V.: Calmness and exact penalization. SIAM J. Control Optim. 29, 493–497 (1991)

  7. Burke, J.V.: An exact penalization viewpoint of constrained optimization. SIAM J. Control Optim. 29, 968–998 (1991)

  8. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: LANCELOT: A Fortran Package for Large-Scale Nonlinear Optimization. Springer, Heidelberg (1992)

  9. Floudas, C.A., Pardalos, P.M.: A Collection of Test Problems for Constrained Global Optimization Algorithms. Lecture Notes in Computer Science, vol. 455. Springer, Berlin (1990)

  10. Gill, P.E., Murray, W., Saunders, M.: SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM J. Optim. 12, 979–1006 (2002)

  11. Gould, N.I.M., Orban, D., Toint, P.L.: An interior-point l1-penalty method for nonlinear optimization. Technical Report RAL-TR-2003-022, Rutherford Appleton Laboratory, Chilton, Oxfordshire, UK, November (2003)

  12. Huang, X.X., Yang, X.Q.: A unified augmented Lagrangian approach to duality and exact penalization. Math. Oper. Res. 28(3), 533–552 (2003)

  13. Koziel, S., Michalewicz, Z.: Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. Evolut. Comput. 7(1), 19–44 (1999)

  14. Luo, Z.Q., Pang, J.S.: Error bounds in mathematical programming. Math. Program. Ser. B 88(2) (2000)

  15. Nedić, A., Ozdaglar, A.: A geometric framework for nonconvex optimization duality using augmented Lagrangian functions. J. Glob. Optim. 40(4), 545–573 (2008)

  16. Pang, J.S.: Error bounds in mathematical programming. Math. Program. 79, 299–332 (1997)

  17. Rockafellar, R.T.: Augmented Lagrangian multiplier functions and duality in nonconvex programming. SIAM J. Control Optim. 12, 268–285 (1974)

  18. Rockafellar, R.T., Wets, R.J.-B.: Variational Analysis. Springer, Berlin (1998)

  19. Rubinov, A.M., Glover, B.M., Yang, X.Q.: Decreasing functions with applications to penalization. SIAM J. Optim. 10, 289–313 (1999)

  20. Rubinov, A.M., Glover, B.M., Yang, X.Q.: Modified Lagrangian and penalty functions in continuous optimization. Optimization 46, 327–351 (1999)

  21. Tuy, H.: On solving nonconvex optimization problems by reducing the duality gap. J. Glob. Optim. 32, 349–365 (2005)

  22. Ferrier, G.D., Goffe, W.L., Rogers, J.: Global optimization of statistical functions with simulated annealing. J. Econom. 60(1), 65–99 (1994)

  23. Wang, X., Xing, G., Zhang, Y., Lu, C., Pless, R., Gill, C.: Integrated coverage and connectivity configuration.
