Particle Swarm Optimization Approach For Solving Complex Variable Fractional Programming Problems

DOI : 10.17577/IJERTV2IS4845


Ibrahim M. Hezam (1)* Osama Abdel Raouf (2) (1)Department of Mathematics& computer, Faculty of Education, Ibb University, Yemen.

(2)Department of Operations Research and Decision Support, Faculty of Computers & Information, Minufiya University, Egypt.

Abstract

This paper uses particle swarm optimization (PSO) to solve complex variable fractional programming problems (CVFPP), in which the objective function has two parts (real and imaginary): the input is complex while the output is always real. Particle swarm optimization can be regarded as an effective technique for solving linear or nonlinear, non-analytic complex fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can also be solved using the proposed technique. Numerical examples are given to show the feasibility, effectiveness, and robustness of the proposed algorithm.

Keywords: Particle Swarm Optimization, Fractional Programming, complex function.

  1. Introduction

    Mathematical programming in complex space has many applications, such as control theory, signal processing, and electrical networks. Alternating currents and voltages are modelled using complex variables z ∈ C^n to stand for elements of the network, where z = x + iy with x, y ∈ R^n. The theory of complex programming is employed in varied fields of electrical engineering, such as blind deconvolution, blind equalization, minimal entropy, and optimal receiver design. Several applications of fractional programming in complex spaces show the potential of this field; for examples that minimize the real part of a complex function (blind deconvolution/equalization) or maximize kurtosis, see Lai and Liu

    [12] and the references therein. Many authors have studied complex programming problems. Complex space was first studied by Levinson [18] in 1966 for linear programming (LP). Later, Swarup and Sharma [25] in 1970 studied linear fractional programming (LFP). Thereafter, nonlinear complex programming, fractional or non-fractional, was treated by numerous authors [8-9, 12-13, 15-16, 21-23], who also studied complex programming for nonlinear minimax problems, fractional or non-fractional, from different viewpoints. Recently, Chen, Lai, and Schaible [3] introduced a generalized Charnes-Cooper variable transformation to change fractional complex programming into non-fractional programming, and proved that an optimal solution of the complex fractional program corresponds to an optimal solution of the equivalent non-fractional program and vice versa. Many authors have also considered dual problems corresponding to primal complex programming problems, with various different dual forms [10-11, 14, 19].

    Youness and Mekawy [27] in 2012 introduced a study on fuzzy complex linear programming problems and proved a Kuhn-Tucker stationary-point necessary optimality theorem.

    Sorber et al. [24] in 2012 used complex Taylor series expansions to generalize existing optimization algorithms for both general nonlinear optimization problems and nonlinear least squares problems. They applied these methods to two case studies, demonstrating that complex derivatives can lead to greater insight into the structure of the problem, and that this structure can often be exploited to reduce computational complexity and storage cost.

    Lai and Huang [10] in 2012 studied a duality programming problem for complex non-differentiable minimax fractional programming with complex variables. They established the duality theorem and proved that the optimal values of the dual problem and the primal problem are equal under some reasonable assumptions; that is, there is no duality gap between the primal problem and its dual.

    Clerc and Kennedy [4] in 2002 proposed a particle swarm optimization algorithm for finding optimal regions of complex search spaces through the interaction of individuals in a population of particles, but did not deal with FPP with complex variables.

    However, all the studies above discussed necessary and sufficient optimality conditions for various models of complex programming under generalized convexity. Most functions arising from applications are non-analytic and difficult to handle with classical mathematical methods, so it is necessary to use an effective technique, such as PSO, that can deal with such functions. The proposed PSO technique can handle any type of FPP regardless of the nature of the solution space.

    The paper aims to investigate the direct solution of the complex variable fractional programming problem using particle swarm optimization. Section 2 introduces the formulation of complex fractional programming. The particle swarm optimization algorithm is reviewed in Section 3. Illustrative examples and a discussion of the results are presented in Section 4. Finally, conclusions are presented in Section 5.

  2. Complex Fractional Programming

    The paper considers the following general mathematical model of the complex fractional programming problem (CVFPP):

    min / max  Re f(z, z̄) / Re g(z, z̄)
    subject to  h(z, z̄) ∈ S ⊆ C^m        (1)

    where f, g : C^n × C^n → C and h : C^n × C^n → C^m are supposed to be analytic functions and S is compact. Complex programming problems extend optimization theory from real vector spaces: C^n is isomorphic to R^2n under the isomorphism z = x + iy, x, y ∈ R^n, with complex conjugate z̄ = x − iy, so a function of n complex variables can be regarded as a function of 2n real variables.

    The following definition of generalized convexity of a complex function follows Lai and Huang [10].

    Definition. The real part Re f of an analytic function f on Q ⊆ C^2n is called, respectively, for points ζ, ζ⁰ ∈ Q,

    1. convex (strictly) at ζ⁰ ∈ Q ⊆ C^2n if Re[f(ζ) − f(ζ⁰)] ≥ (>) Re[⟨∇f(ζ⁰), ζ − ζ⁰⟩];

    2. pseudoconvex (strictly) at ζ⁰ ∈ Q ⊆ C^2n if Re[⟨∇f(ζ⁰), ζ − ζ⁰⟩] ≥ 0 implies Re[f(ζ) − f(ζ⁰)] ≥ 0 (> 0 for ζ ≠ ζ⁰);

    3. quasiconvex at ζ⁰ ∈ Q ⊆ C^2n if Re[f(ζ) − f(ζ⁰)] ≤ 0 implies Re[⟨∇f(ζ⁰), ζ − ζ⁰⟩] ≤ 0.

    Since the complex numbers are unordered, a function with complex outputs cannot be convex, but it is entirely possible for a function that accepts complex inputs to be convex: the inputs are complex while the outputs are real. For this reason, the objective function can be written in the form (1).
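Treating a complex fractional objective with real output as a real-valued function of 2n real variables can be sketched in code. The following Python snippet is illustrative only (the paper's experiments were run in MATLAB), and the example functions f and g are hypothetical stand-ins for the f and g of model (1):

```python
import numpy as np

def to_complex(v):
    """Map a real vector (x1..xn, y1..yn) in R^2n to z = x + iy in C^n."""
    n = len(v) // 2
    return v[:n] + 1j * v[n:]

def real_objective(v, f, g):
    """Evaluate Re f(z, conj(z)) / Re g(z, conj(z)) as a function on R^2n."""
    z = to_complex(np.asarray(v, dtype=float))
    return np.real(f(z, z.conj())) / np.real(g(z, z.conj()))

# Hypothetical example in the form of model (1): f = z^2 + 1, g = 1 (non-fractional).
f = lambda z, zbar: z[0] ** 2 + 1
g = lambda z, zbar: np.asarray(1.0 + 0j)
print(real_objective([1.0, 0.0], f, g))  # z = 1 gives Re(z^2 + 1) = 2.0
```

Any real-variable optimizer, PSO included, can then be applied directly to `real_objective`.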

  3. Particle Swarm Optimization (PSO)

    Particle swarm optimization is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling [1, 6, 17].

    The characteristics of PSO can be represented as follows:

    x_i(k)  the current position of particle i at iteration k;
    v_i(k)  the current velocity of particle i at iteration k;
    y_i(k)  the personal best position of particle i at iteration k;
    ŷ(k)    the neighbourhood best position of the particles.

    The velocity update step is specified for each dimension j = 1, ..., N_d, so that v_{i,j} represents the jth element of the velocity vector of the ith particle. The velocity of particle i is updated using the following equation:

    v_{i,j}(t+1) = w v_{i,j}(t) + c_1 r_1(t) [y_{i,j}(t) − x_{i,j}(t)] + c_2 r_2(t) [ŷ_j(t) − x_{i,j}(t)]        (2)

    where w is the inertia weighting function, c_1 and c_2 are weighting coefficients, and r_1(t), r_2(t) are random numbers between 0 and 1.

    The current position (the searching point in the solution space) is then modified by the following equation:

    x_i(k+1) = x_i(k) + v_i(k+1)        (3)

    Penalty functions. In the penalty function method, the constrained optimization problem is solved by an unconstrained optimization method: the constraints are incorporated into the objective function, transforming the problem into an unconstrained one with

    fitness = f(x) + PenaltyFactor × Error

    where f(x) is the objective function subject to maximization and Error measures the violation of the constraints.

    The detailed operation of particle swarm optimization is given below:

    Step 1: Initialize parameters and population.
    Step 2: Initialization. Randomly set the position and velocity of all particles within pre-defined ranges and on D dimensions in the feasible space (i.e. satisfying all the constraints).
    Step 3: Velocity updating. At each iteration, the velocities of all particles are updated according to equation (2). After updating, v_i(k) should be checked and maintained within a pre-specified range to avoid aggressive random walking.
    Step 4: Position updating. Assuming a unit time interval between successive iterations, the positions of all particles are updated according to equation (3). After updating, x_i(k) should be checked and limited to the allowed range.
    Step 5: Memory updating. Update the personal best y_i(k) and the neighbourhood best ŷ(k) when the following condition is met (for a minimization problem):

    y_i(t+1) = y_i(t)      if f(x_i(t+1)) ≥ f(y_i(t)),
    y_i(t+1) = x_i(t+1)    if f(x_i(t+1)) < f(y_i(t)).

    Step 6: Termination checking. Repeat Steps 3 to 5 until definite termination conditions are met, such as a pre-defined number of iterations or a failure to make progress for a fixed number of iterations.
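The update equations (2) and (3), together with the penalty-based fitness, can be combined into a compact global-best PSO. The following Python sketch is illustrative, not the paper's MATLAB implementation; the parameter values (w = 0.7, c1 = c2 = 1.5, penalty factor 10^3) and the quadratic penalty on a single inequality constraint g(x) ≤ 0 are assumptions:

```python
import numpy as np

def pso_minimize(obj, constraint, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, penalty_factor=1e3, seed=0):
    """Global-best PSO minimizing obj(x) subject to constraint(x) <= 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # Step 2: random positions
    v = np.zeros((n_particles, dim))              # and zero initial velocities

    def fitness(p):
        # fitness = f(x) + PenaltyFactor * Error (quadratic constraint violation)
        return obj(p) + penalty_factor * max(constraint(p), 0.0) ** 2

    pbest = x.copy()                              # personal bests y_i
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()    # neighbourhood (global) best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Step 3: velocity update, equation (2)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        # Step 4: position update, equation (3), clipped to the allowed range
        x = np.clip(x + v, lo, hi)
        # Step 5: memory update
        vals = np.array([fitness(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, obj(gbest)

# Usage: minimize x^2 - y^2 subject to x^2 + y^2 - 1 <= 0; optimum is -1 at (0, +-1).
sol, val = pso_minimize(lambda p: p[0] ** 2 - p[1] ** 2,
                        lambda p: p[0] ** 2 + p[1] ** 2 - 1.0,
                        dim=2, bounds=(-2.0, 2.0))
```

With the quadratic penalty, the swarm may settle marginally outside the feasible set; a larger penalty factor tightens this at the cost of a harsher fitness landscape.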

  4. Illustrative Examples with Discussion and Results

    Five benchmark examples were collected from the literature to demonstrate the efficiency and robustness of swarm intelligence in solving programming problems with complex variables. The numerical results, compared with those of other methods, are shown in Table 1. The algorithms were implemented in MATLAB R2011.

    Table (1): Comparison results of the (PSO) with other methods.

    Fun. | Case | Optimal solution              | Optimal value
    f1   | Max  | (x*, y*) = (1, 0), z* = 1     |  2
    f2   | Min  | (x*, y*) = (0, 0), z* = 0     |  0
    f2   | Max  | (x*, y*) = (2, 0), z* = 2     |  7
    f3   | Min  | (x*, y*) = (1, 1), z* = 1+i   | -0.5
    f3   | Max  | (x*, y*) = (0, 0), z* = 0     |  0
    f4   | Min  | (x*, y*) = (0, 1), z* = i     | -0.4
    f4   | Max  | (x*, y*) = (2, 0), z* = 2     |  0.15
    f5   | Min  | (x*, y*) = (1, 2), z* = 1+2i  | -0.6
    f5   | Max  | (x*, y*) = (2, 1), z* = 2+i   |  0.6

    The functions for the examples listed in the table above are as follows.

    f1 : max f(z) = Re(z² + 1)
    subject to z ∈ C, |z| ≤ 1.

    This function serves as a clear illustration of how PSO can reach almost the same optimized solution. It is not a fractional programming problem, but it is still good evidence of the strength of PSO in solving complex nonlinear programming optimization problems. The simple solution: since |z² + 1| ≤ |z|² + 1 ≤ 2 for |z| ≤ 1, the maximum must be at most 2. On the other hand, the value 2 is attained at the point z = 1 in the closed unit disc, so the maximum is indeed 2.

    Mejibro [20], applying the known real methods, obtained the best solution at (x, y) = (1, 0), corresponding to z = x = 1, with function value 2. Applying the PSO algorithm, we get a near-exact optimal solution at (1, 0), with a corresponding function value of 2.

    f2 : Re(z³ + 1) / Re(z + 1)
    on the set A, where A is the closed triangle with corners z = 0, z = 2, and z = i.

    Applying the PSO algorithm in the minimum case, we get the optimal basic feasible solution at (x, y) = (0, 0), corresponding to z = x = 0, with function value 0. In the maximum case, we get the optimal basic feasible solution at (x, y) = (2, 0), corresponding to z = x = 2, with function value 7.

    f3 : Re(z² − 3z + 2) / Re(z + 1)

    Applying the PSO algorithm in the minimum case, we get the optimal basic feasible solution at (x, y) = (1, 1), corresponding to z = x + iy = 1 + i, with function value −0.5. In the maximum case, we get the optimal basic feasible solution at (x, y) = (0, 0), with function value 2.

    f4 : Re(z² + 1) / Re(z² + 2z + 3)
    on the set A, where A is the closed triangle with corners z = 0, z = 2, and z = i.

    Applying the PSO algorithm in the minimum case, we get the optimal basic feasible solution at (x, y) = (0, 1), corresponding to z = x + iy = 0 + i = i, with function value −0.4. In the maximum case, we get the optimal basic feasible solution at (x, y) = (2, 0), corresponding to z = x = 2, with function value 0.15.

    f5 : Re(z²) / Re(z z̄)
    on the set A, where A is the closed square with corners z = 0, z = 1, z = i, and z = 2i.

    Applying the PSO algorithm in the minimum case, we get the optimal basic feasible solution at (x, y) = (0, 1), corresponding to z = x + iy = 0 + i = i, with function value −1. In the maximum case, we get the optimal basic feasible solution at (x, y) = (1, 0), corresponding to z = x = 1, with function value 1.

    Overall, we observe that the particle swarm optimization technique can solve (CVFPP) easily. PSO is effective on nonlinear and non-analytic fractional objective functions, can be used efficiently for large datasets and in a multi-processor environment, and does not require the problem's defining function to be continuous. It can find optimal or near-optimal solutions and may be suitable for discrete and combinatorial problems.

    In spite of these advantages, PSO has some drawbacks. Global PSO tends to be trapped in a local optimum under some initialization conditions, and the selection of the PSO parameters and of the penalty function used to handle the constraints may affect the optimal solution. This can be checked by executing the PSO algorithm more than once with different parameters.
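As an independent check on the first benchmark (maximizing Re(z² + 1) over the closed unit disc, with reported optimum 2 at z = 1), even a crude random search over the disc reproduces the optimum. The sketch below is illustrative only and is not the paper's MATLAB code:

```python
import numpy as np

# Maximize Re(z^2 + 1) over the closed unit disc |z| <= 1 (example f1).
rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, (200_000, 2))
pts = pts[np.einsum('ij,ij->i', pts, pts) <= 1.0]  # keep samples with x^2 + y^2 <= 1
z = pts[:, 0] + 1j * pts[:, 1]
vals = np.real(z ** 2 + 1)
best = np.argmax(vals)
print(vals[best], z[best])  # value close to 2, near z = 1 or z = -1
```

The search lands near z = ±1, consistent with the analytic bound |z² + 1| ≤ 2 (note Re(z² + 1) = 2 is also attained at z = −1, so either endpoint of the real diameter is optimal).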

  5. Conclusions

This paper used particle swarm optimization (PSO) to solve complex variable fractional programming problems (CVFPP), in which the objective function has two parts (real and imaginary): the input is complex while the output is always real. Particle swarm optimization can be regarded as an effective technique for solving linear or nonlinear, non-analytic complex fractional objective functions. Problems with an optimal solution at a finite point and an unbounded constraint set can also be solved using the proposed technique. A set of five benchmark numerical examples was given to show the feasibility, effectiveness, competence, and robustness of the proposed algorithm in solving CVFPP. The proposed technique does not require any sophisticated mathematical computation and offers superior computation time.

References

[1] Bratton, D., & Kennedy, J. (2007). Defining a standard for particle swarm optimization. In Swarm Intelligence Symposium, 2007 (SIS 2007), IEEE, pp. 120-127.

[2] Charnes, A., & Cooper, W. W. (1973). An explicit general solution in linear fractional programming. Naval Research Logistics Quarterly, 20(3), 449-467.

[3] Chen, J. C., Lai, H. C., & Schaible, S. (2005). Complex fractional programming and the Charnes-Cooper transformation. Journal of Optimization Theory and Applications, 126(1), 203-213.

[4] Clerc, M., & Kennedy, J. (2002). The particle swarm: explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58-73.

[5] Jaberipour, M., & Khorram, E. (2010). Solving the sum-of-ratios problems by a harmony search algorithm. Journal of Computational and Applied Mathematics, 234(3), 733-742.

[6] Jones, K. O., & Boizanté, G. (2011). Comparison of Firefly algorithm optimisation, particle swarm optimisation and differential evolution. In Proceedings of the 12th International Conference on Computer Systems and Technologies, ACM, pp. 191-197.

[7] Lai, H. C., & Huang, T. Y. (2010). Complex analysis methods related to an optimization problem with complex variables. European Journal of Pure and Applied Mathematics, 3(6), 989-1005.

[8] Lai, H. C., & Huang, T. Y. (2009). Optimality conditions for a nondifferentiable minimax programming in complex spaces. Nonlinear Analysis: Theory, Methods & Applications, 71(3), 1205-1212.

[9] Lai, H. C., & Huang, T. Y. (2009). Optimality conditions for nondifferentiable minimax fractional programming with complex variables. Journal of Mathematical Analysis and Applications, 359(1), 229-239.

[10] Lai, H. C., & Huang, T. Y. (2012). Nondifferentiable minimax fractional programming in complex spaces with parametric duality. Journal of Global Optimization, 1-12.

[11] Lai, H. C., & Lee, J. C. (2002). On duality theorems for a nondifferentiable minimax fractional programming. Journal of Computational and Applied Mathematics, 146(1), 115-126.

[12] Lai, H. C., & Liu, J. C. (2002). Complex fractional programming involving generalized quasi/pseudo convex functions. ZAMM - Journal of Applied Mathematics and Mechanics, 82(3), 159-166.

[13] Lai, H. C., & Liu, J. C. (2002). Nondifferentiable fractional programming in complex spaces involving (_, _, _)-convex analytic functions. Indian J. Pure Appl. Math., 33, 917-932.

[14] Lai, H. C., & Liu, J. C. (2009). Duality for nondifferentiable minimax programming in complex spaces. Nonlinear Analysis: Theory, Methods & Applications, 71(12), e224-e233.

[15] Lai, H. C., Lee, J. C., & Ho, S. C. (2006). Parametric duality on minimax programming involving generalized convexity in complex space. Journal of Mathematical Analysis and Applications, 323(2), 1104-1115.

[16] Lai, H. C., Liu, J. C., & Schaible, S. (2008). Complex minimax fractional programming of analytic functions. Journal of Optimization Theory and Applications, 137(1), 171-184.

[17] Lee, K. Y., & El-Sharkawi, M. A. (Eds.). (2008). Modern Heuristic Optimization Techniques: Theory and Applications to Power Systems (Vol. 39). Wiley-IEEE Press, pp. 71-88.

[18] Levinson, N. (1966). Linear programming in complex space. Journal of Mathematical Analysis and Applications, 14(1), 44-62.

[19] Liu, J. C., Lin, C. C., & Sheu, R. L. (1997). Optimality and duality for complex nondifferentiable fractional programming. Journal of Mathematical Analysis and Applications, 210(2), 804-824.

[20] Mejibro, L. (2008). Complex Functions c-1: Examples Concerning Complex Numbers. ISBN 978-87-7681-385-7.

[21] Mishra, S. K. (2005). Complex minimax programming under generalized type-I functions. Computers & Mathematics with Applications, 50(1), 1-11.

[22] Mishra, S. K., Wang, S. Y., & Lai, K. K. (2004). Complex minimax programming under generalized convexity. Journal of Computational and Applied Mathematics, 167(1), 59-71.

[23] Mond, B. (1973). Nonlinear complex programming. Journal of Mathematical Analysis and Applications, 43(3), 633-641.

[24] Sorber, L., Van Barel, M., & De Lathauwer, L. (2012). Unconstrained optimization of real functions in complex variables. SIAM Journal on Optimization, 22(3), 879-898.

[25] Swarup, K., & Sharma, J. C. (1970). Programming with linear fractional functionals in complex spaces. Cahiers Centre d'Etudes Rech. Oper., 12, 103-109.

[26] Farag, T. B. (2012). A Parametric Analysis on Multicriteria Integer Fractional Decision-Making Problems. Ph.D. Thesis, Faculty of Science, Helwan University, pp. 9-16.

[27] Youness, E. A., & Mekawy, I. M. (2012). A study on fuzzy complex linear programming problems. Int. J. Contemp. Math. Sciences, 7(19), 897-908.
