 Authors : Razan A. Jamous, Assem A. Tharwat, Essam El. Seidy, Bayoumi Ibrahim Bayoumi
 Paper ID : IJERTV4IS050254
 Volume & Issue : Volume 04, Issue 05 (May 2015)
Published (First Online): 15-05-2015
ISSN (Online) : 2278-0181
 Publisher Name : IJERT
 License: This work is licensed under a Creative Commons Attribution 4.0 International License
A New Particle Swarm with Center of Mass Optimization
Razan A. Jamous
Department of Mathematics, Faculty of Science, Ain Shams University,
Cairo, Egypt
Essam El. Seidy
Assistant Professor of Pure Mathematics, Department of Mathematics, Faculty of Science, Ain Shams University,
Cairo, Egypt
Assem A. Tharwat
Professor of Operations Research, Department of Operations Research and Decision Support,
Faculty of Computer and Information, Cairo University, Cairo, Egypt
Bayoumi Ibrahim Bayoumi
Professor of Pure Mathematics, Department of Mathematics, Faculty of Science,
Ain Shams University,
Cairo, Egypt
Abstract— This work presents a new modification of particle swarm optimization based on the center of mass, which we denote PSOCM. The modification yields a new, efficient search technique: it exploits the physical principle of the center of mass to move the particles toward the new best predicted position, and thereby improves the performance of the current PSO technique. To evaluate the proposed PSOCM algorithm, we compare it with two existing PSO variants, Center Particle Swarm Optimization (Center PSO) and Linear Decreasing Weight particle swarm optimization (LDWPSO). The experimental results show that PSOCM outperforms Center PSO and LDWPSO in terms of convergence rate, complexity, and scalability.
Keywords— Computational intelligence; particle swarm optimization; local best; global best; center of mass.
I. INTRODUCTION
Kennedy and Eberhart introduced Particle Swarm Optimization (PSO) in 1995 as a stochastic optimization algorithm based on a social simulation model [1].

Research in PSO has produced a large number of new PSO algorithms that improve the performance of the original PSO and enable its application to different optimization problem types (e.g., unconstrained optimization, optimization in dynamic environments, constrained optimization, multi-objective optimization, and finding multiple solutions). Elaborate theoretical studies of PSO dynamics have been done, and PSO parameter sensitivity analyses have resulted in a better understanding of the influence of the PSO control parameters. PSO applications vary in complexity and cover a wide range of application areas.

The PSO algorithm simulates the behavior of bird flocking: the flight of a bird flock can be simulated with relative accuracy simply by maintaining a target distance between each bird and its immediate neighbors, where this distance may depend on its size and desired behavior. In PSO, therefore, each candidate solution (a bird in the search space) is called a particle. Each particle has a fitness value, evaluated by the objective function to be optimized, and a velocity that directs its flight. All particles fly through the problem space by following the current optimum particle.

Many research papers have appeared in the literature using particle swarm optimization (PSO). A number of modifications to the basic PSO have been developed to improve the speed of convergence and the quality of the solutions found by the PSO. These modifications include the introduction of an inertia weight, velocity clamping, and velocity constriction.

The following description of the PSO algorithm is adapted from [2]. In PSO, a swarm consists of N particles moving around in a D-dimensional search space. Let X_i(t) = (x_i1, x_i2, ..., x_iD) denote the position of particle i in the search space at time step t, V_i(t) = (v_i1, v_i2, ..., v_iD) the velocity of particle i at time step t, P_i = (p_i1, p_i2, ..., p_iD) the best solution achieved so far by the particle itself, and P_g = (p_g1, p_g2, ..., p_gD) the best solution achieved so far by the whole swarm. The new position of a particle is obtained by adding a velocity to its current position, as follows:

X_id(t+1) = X_id(t) + V_id(t+1)    (1)

V_id(t+1) = w·V_id(t) + c1·r1·(P_id - X_id(t)) + c2·r2·(P_gd - X_id(t))    (2)

Where c1 and c2 are two positive constants, r1 and r2 are two random numbers in the range [0, 1], and w is the inertia weight. The velocity vector drives the optimization process and reflects both the experiential knowledge of the particle and socially exchanged information from the particle's neighborhood. The experiential knowledge of a particle is generally referred to as the cognitive component, which is proportional to the distance of the particle from its own best position (referred to as pbest_i). The socially exchanged information is referred to as the social component of the velocity equation (2), which is proportional to the distance of the particle from the best position found by the swarm (referred to as gbest).
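As a concrete illustration, equations (1) and (2) can be sketched in a few lines of Python (a minimal sketch, not the authors' code; the swarm shapes, the random seed, and the parameter values w = 0.7, c1 = c2 = 2 are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def pso_step(X, V, P, Pg, w=0.7, c1=2.0, c2=2.0):
    """One PSO update: equation (2) for velocity, equation (1) for position.

    X, V : (N, D) current positions and velocities
    P    : (N, D) personal best positions (pbest)
    Pg   : (D,)   global best position (gbest)
    """
    N, D = X.shape
    r1 = rng.random((N, D))   # fresh random numbers per component
    r2 = rng.random((N, D))
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # eq. (2)
    X_new = X + V_new                                        # eq. (1)
    return X_new, V_new

# toy usage: 5 particles in a 2-dimensional search space
X = rng.uniform(-1.0, 1.0, (5, 2))
V = np.zeros((5, 2))
X1, V1 = pso_step(X, V, P=X.copy(), Pg=X[0])
```

Note that r1 and r2 are drawn per particle and per dimension, which is the usual reading of equation (2).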
The Linear Decreasing Weight particle swarm optimization (LDWPSO) algorithm was presented by Shi and Eberhart [3]. The inertia weight w is decreased linearly over the search iterations, from an initial value to a final value, as follows:

w = (w_max - w_min) × (Max.Iter - Iter) / Max.Iter + w_min    (3)

Where w is the inertia weight that controls the velocity of the particles, w_max is the initial inertia weight, w_min is the final inertia weight, Max.Iter is the maximum number of iterations, and Iter is the current iteration. LDWPSO uses equation (1) to update the position, equation (2) to update the velocity, and equation (3) to update the inertia weight.

Center particle swarm optimization (Center PSO) was presented by [2]; it incorporates a center particle into the LDWPSO algorithm. Unlike the other, ordinary particles in LDWPSO, the center particle has no explicit velocity and is set to the center of the swarm at every iteration. It is, however, involved in all the operations of an ordinary particle, such as fitness evaluation and competition for the best particle, except for the velocity calculation, so the center particle has opportunities to become the gbest of the swarm. After N-1 particles update their positions as in the usual PSO algorithms at every iteration, the center particle is updated according to the following formula:

X_cd(t+1) = (1/(N-1)) × Σ_{i=1}^{N-1} X_id(t+1)    (4)

We will compare our proposed algorithm with these methods.

Section II of this work gives the proposed algorithm. The evaluation of the proposed algorithm is then presented, and Section III concludes the paper with a summary of the main points.

II. THE PROPOSED TECHNIQUE

A. Definitions
A system of point masses is a system consisting of N individual point masses 1, ..., N. Their motion is described by specifying their position vectors r_1, ..., r_N as functions of time t: r_i(t), where i = 1, ..., N. The Center of Gravity, or Center of Mass, is the point in a system of point masses whose position vector R is calculated from the masses m_i and position vectors r_i as follows:

R = (1/M) × Σ_{i=1}^{N} m_i × r_i    (5)

M = Σ_{i=1}^{N} m_i    (6)

Where M is the summation of the masses and N is the number of masses. The terms Center of Mass and Center of Gravity are used equivalently in a uniform gravity field to represent a unique point in an object or system, which can be used to describe the system's response to external forces and torques. The Center of Mass can also be defined as an average of the masses factored by their distances from a reference point [4].

B. Assumptions
Returning to the particle swarm optimization algorithm, and in particular to the velocity equation, which controls the movement of the particles through the main parameters (gbest, lbest, acceleration coefficients, and inertia weight), a new, effective center-of-mass particle X_cm is proposed. This particle contributes to accelerating the convergence of the algorithm, reaching convergence in considerably fewer iterations. It can also help to enhance the quality of the solution (by finding a solution closer to the optimum). Unlike the center particle in the Center PSO algorithm [2], the center-of-mass particle is a virtual particle that represents the swarm at every iteration and is weighted by the fitness values of the particles that form the swarm; it has no velocity and is not involved in the operations of an ordinary particle, such as fitness evaluation and competition for the best particle. By considering the particle swarm as a system of point masses, and letting the value of the objective function of each particle play the role of its mass, the weighted center of the swarm can be calculated as follows:

X_cm(t) = (1/F_cm) × Σ_{i=1}^{N} F(x_i) × x_i(t)    (7)

Where F(x_i) is the fitness value at position x_i(t) of particle i, and F_cm is the summation of the particles' fitness values. By analogy with the summation of masses M in equation (6), F_cm is calculated using equation (8). The fitness value F(x_i) is taken to be the same as the objective function value f(x_i) when the algorithm searches for a maximum, and to be the inverse of the objective function value when searching for a minimum, as in equations (9) and (10):

F_cm = Σ_{i=1}^{N} F(x_i)    (8)

F(x_i) = f(x_i), when maximizing an objective function    (9)

F(x_i) = 1/(f(x_i) + ε), ε > 0, when minimizing an objective function    (10)

The range of the objective function is assumed to be non-negative in this case.

C. Velocity Update Function
Having calculated the center of the swarm X_cm, the best position over all the swarm P_g (gbest), and the best position discovered by each particle P_i (pbest), the new velocity of each particle is produced using the following formula:

v_id(t+1) = w·v_id(t) + c1·r1·((P_id(t) + X_cm(t))/2 - X_id(t)) + c2·r2·((P_gd(t) - X_cm(t))/2 - X_id(t))    (11)

X_id(t+1) = v_id(t+1) + X_id(t)    (12)

Consequently, the new position of each particle is updated using equation (12), and the search process continues until a stopping criterion is met. It must be noted that if the inertia weight w in equation (11) decreases linearly, the resulting technique is PSOCM, and if the inertia weight is fixed, the resulting technique is PSOCM1. A rough comparative visualization of the movement of particles in the PSOCM technique and the SPSO algorithm is shown in Figure 1.
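Equations (7)-(12) can be sketched as follows (a minimal sketch under the minimization transform of equation (10); the function names, the random seed, and ε = 1e-12 are illustrative assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def center_of_mass(X, f, eps=1e-12):
    """Equations (7), (8), (10): fitness-weighted center of the swarm.

    For minimization the fitness is F(x) = 1 / (f(x) + eps), so better
    (smaller) objective values get larger weights.
    """
    F = 1.0 / (np.array([f(x) for x in X]) + eps)   # eq. (10)
    Fcm = F.sum()                                    # eq. (8)
    return (F[:, None] * X).sum(axis=0) / Fcm        # eq. (7)

def psocm_step(X, V, P, Pg, f, w, c1=2.0, c2=2.0):
    """Equations (11) and (12): the velocity is pulled toward the points
    (pbest + Xcm)/2 and (gbest - Xcm)/2 instead of pbest and gbest."""
    Xcm = center_of_mass(X, f)
    N, D = X.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = (w * V
         + c1 * r1 * ((P + Xcm) / 2.0 - X)    # cognitive term of eq. (11)
         + c2 * r2 * ((Pg - Xcm) / 2.0 - X))  # social term of eq. (11)
    return X + V, V                            # eq. (12)
```

Note how better (smaller) objective values receive larger weights, so X_cm leans toward the most promising region visited by the swarm.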
D. Description of the Particle Swarm with Center of Mass Technique
As said previously, the main difference between our proposed PSOCM technique and the Center PSO algorithm is that Center PSO uses the plain average of the particles' positions to calculate the center particle, and all particles except the center update their velocities and positions using the equations of the standard PSO. In our PSOCM, on the other hand, the center-of-mass particle is calculated as a weighted average of the particles' positions, as given in equation (7), and is not involved in the operations of an ordinary particle. The philosophy behind this is to make the center-of-mass particle X_cm influence the attraction of particles towards the global optimum and help them avoid local optima. This can be explained by clarifying the roles of the second and third terms in equation (11). The second term, ((P_id(t) + X_cm(t))/2 - X_id(t)), is responsible for the attraction of the particle's current position towards the mean of the positive direction of its own best position (pbest) and the positive direction of the center-of-mass particle's position (X_cm), which helps the cognitive behavior component to avoid local optima. The third term, ((P_gd(t) - X_cm(t))/2 - X_id(t)), is responsible for the attraction of the particle's current position towards the mean of the positive direction of the global best position (gbest) and the negative direction of the center-of-mass particle's position (X_cm), which helps maintain the diversity of the swarm during the search. This increases the opportunity of fast convergence to the global (or a near-global) optimum, since the center-of-mass particle attracts particles to the region of the best found solutions, giving them the best chance to occupy the position of the global best found solution during the search. All of these movements are supported by the linearly decreasing weight, which balances exploration and exploitation during the search process.

E. Pseudo Code of the PSO with Center of Mass Algorithm
The following pseudocode explains the whole process of PSOCM:

Algorithm: The proposed PSOCM algorithm
01: begin
02: Randomly initialize the particle swarm
03: while (stopping criterion is not met)
04:   for i = 1 to the swarm size
05:     Compute the fitness of particle i
06:     Find the local best Pi and the global best Pg
07:     Calculate the center of the swarm Xcm by equation (7)
08:     for d = 1 to the problem dimensionality
09:       Update the particle velocity using equation (11)
10:       Update the particle position using equation (12)
11:     end
12:   end
13:   Update the inertia weight value by equation (3)
14: endwhile
15: endalgorithm

Fig 1. Comparative movement of a particle in SPSO and PSOCM.

F. Evaluation of the Performance of the PSOCM Technique
The benchmark test functions are problems with varying difficulty levels and problem sizes. These problems are solved by the proposed PSOCM technique and by other versions of particle swarm optimization, namely SPSO, LDWPSO, Center PSO and Mean PSO. The problem set consists of four scalable problems, namely the Rosenbrock, Rastrigin, Griewank and Ackley functions; the dimension of these problems can be increased or decreased, so their complexity increases as the problem size is increased [5].

Rosenbrock: a unimodal function with significant interaction between the variables. Its global minimum, equal to zero, is located at (1, 1, ..., 1); there are n design variables with lower and upper limits of [-100, 100].

f1(x) = Σ_{i=1}^{n-1} (100·(x_{i+1} - x_i²)² + (x_i - 1)²)    (13)

Rastrigin: a multimodal version of the Spherical function, characterized by deep local minima arranged as sinusoidal bumps. There are n design variables with lower and upper limits of [-10, 10]; its global minimum, equal to zero, is at (0, 0, ..., 0).

f2(x) = Σ_{i=1}^{n} (x_i² - 10·cos(2πx_i) + 10)    (14)

Griewank: a multimodal function with significant interaction between its variables, caused by the product term. There are n design variables with lower and upper limits of [-600, 600]; its global minimum, equal to zero, is at (0, 0, ..., 0).

f3(x) = (1/4000)·Σ_{i=1}^{n} x_i² - Π_{i=1}^{n} cos(x_i/√i) + 1    (15)

Ackley: a multimodal function with deep local minima. There are n design variables with lower and upper limits of [-30, 30]; its global minimum, equal to zero, is at (0, 0, ..., 0).

f4(x) = -20·exp(-0.02·√((1/n)·Σ_{i=1}^{n} x_i²)) - exp((1/n)·Σ_{i=1}^{n} cos(2πx_i)) + 20 + e    (16)
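The four benchmark functions (13)-(16) translate directly to Python (the 0.02 coefficient in the Ackley exponent follows equation (16) as printed above; the more common form of Ackley uses 0.2):

```python
import numpy as np

def rosenbrock(x):   # eq. (13), minimum 0 at (1, ..., 1)
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):    # eq. (14), minimum 0 at the origin
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):     # eq. (15), minimum 0 at the origin
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

def ackley(x):       # eq. (16), minimum 0 at the origin
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.02 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)
```

Each function evaluates to zero at its stated global minimum, which is a convenient sanity check for any implementation.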
G. Parameter Settings
The performance of the proposed PSOCM was evaluated against the PSO variants in three comparisons. Firstly, the same set of parameters was applied to the three tested algorithms, namely LDWPSO, Center PSO and the proposed PSOCM, to solve the four benchmark test functions. The inertia weight w was linearly decreased from 0.9 to 0.4; the acceleration coefficients were set to c1 = c2 = 2; the maximum velocity was set to Vmax = Xmax. The complexity was investigated for three cases of problem dimensionality (D = 10, 20, 30), with the maximum number of iterations correspondingly set to Itermax = 1000, 1500, 2000. The initialization range for the particles' positions was set to 15 ≤ xi ≤ 30, 2.56 ≤ xi ≤ 5.12, 300 ≤ xi ≤ 600 and 15 ≤ xi ≤ 30 for f1(x), f2(x), f3(x) and f4(x) respectively. The scalability of the algorithms was investigated using four population sizes (N = 20, 40, 80, 160) for each function with different dimensionalities; all solutions were averaged over 100 runs.

Standard deviation (SD) is a widely used measure of variability or diversity in statistics and probability theory. A low standard deviation indicates that the data points tend to be very close to the mean, whereas a high standard deviation indicates that the data are spread out over a large range of values. Technically, the standard deviation of a statistical population, data set, or probability distribution is the square root of its variance, calculated as follows:

SD = σ = √( Σ_{i=1}^{N} (x_i - μ)² / N )    (17)

Where σ² is the variance, N is the number of data points x_i, and μ is the mean (average) of the data points x_i.

Table I. Mean fitness value for the Rosenbrock function.

| Size | D. | Max. I. | LDWPSO | Center PSO | PSOCM |
|------|----|---------|--------|------------|-------|
| 20 | 10 | 1000 | 9.1437±4.9662 | 9.0801±4.6473 | 0.0000±0.0000 |
| | 20 | 1500 | 47.7768±17.6409 | 47.4496±15.9583 | 0.0000±0.0000 |
| | 30 | 2000 | 87.9939±25.6677 | 126.5994±36.3899 | 0.0000±0.0000 |
| 40 | 10 | 1000 | 5.5618±3.2518 | 5.8603±3.3086 | 0.0000±0.0000 |
| | 20 | 1500 | 26.9677±9.1263 | 30.9730±9.8311 | 0.0000±0.0000 |
| | 30 | 2000 | 52.0860±16.7072 | 80.9298±24.1957 | 0.0000±0.0000 |
| 80 | 10 | 1000 | 3.7311±1.9312 | 3.8704±2.3116 | 0.0000±0.0000 |
| | 20 | 1500 | 20.0285±7.7322 | 22.1876±8.4571 | 0.0000±0.0000 |
| | 30 | 2000 | 33.2141±9.1025 | 61.0605±19.7476 | 0.0000±0.0000 |
| 160 | 10 | 1000 | 1.9369±1.1581 | 2.3282±1.3219 | 0.0000±0.0000 |
| | 20 | 1500 | 10.5770±3.9736 | 14.6657±4.8183 | 0.0000±0.0000 |
| | 30 | 2000 | 36.7956±11.5786 | 39.7013±10.0061 | 0.0000±0.0000 |

Table II. Mean fitness value for the Rastrigin function.

| Size | D. | Max. I. | LDWPSO | Center PSO | PSOCM |
|------|----|---------|--------|------------|-------|
| 20 | 10 | 1000 | 67.3562±118.3036 | 37.0061±73.9258 | 8.1585±0.1818 |
| | 20 | 1500 | 79.3558±100.6168 | 66.2748±74.3958 | 18.1835±0.2072 |
| | 30 | 2000 | 175.8825±322.3206 | 82.3127±98.1343 | 28.2233±0.2581 |
| 40 | 10 | 1000 | 34.0945±76.6419 | 23.6052±48.5820 | 8.1023±0.0279 |
| | 20 | 1500 | 51.9005±124.0668 | 43.7853±73.6872 | 18.1208±0.1053 |
| | 30 | 2000 | 87.4123±145.7247 | 66.5193±73.1078 | 28.1548±0.1729 |
| 80 | 10 | 1000 | 14.3379±36.3526 | 10.5008±27.3195 | 8.0956±0.0352 |
| | 20 | 1500 | 47.6202±66.7143 | 29.4490±42.4971 | 18.0989±0.0136 |
| | 30 | 2000 | 59.5880±69.6433 | 57.5455±67.5122 | 28.1076±0.0632 |
| 160 | 10 | 1000 | 12.3286±35.5062 | 10.0699±23.3492 | 8.0911±0.0466 |
| | 20 | 1500 | 33.5459±51.3636 | 20.8180±30.9508 | 18.0928±0.0483 |
| | 30 | 2000 | 57.2036±83.5046 | 50.2963±50.2122 | 28.0932±0.0497 |
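Note that equation (17) is the population form of the standard deviation (dividing by N rather than N - 1), which matches Python's `statistics.pstdev`; a quick check with illustrative data:

```python
import math
import statistics

runs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # illustrative values, not paper data
mu = sum(runs) / len(runs)
sd = math.sqrt(sum((x - mu) ** 2 for x in runs) / len(runs))   # eq. (17)
assert math.isclose(sd, statistics.pstdev(runs))   # population SD, dividing by N
print(f"{mu:.4f}±{sd:.4f}")   # the mean±SD format used in Tables I-IV; prints 5.0000±2.0000
```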
H. Analysis of the Results
To evaluate the performance of the proposed technique, the proposed PSOCM was compared with LDWPSO and Center PSO in this experiment. All experiments were performed on an Intel Core i3 1.8 GHz laptop with 4 GB of RAM under Windows 7.

Comparison of PSOCM to LDWPSO and Center PSO
It should be noted that, according to the precision of the numeric data types used, a numeric value is considered equal to zero if it is less than 5.0×10^(-324), and it is displayed as 0.0000±0.0000 [6]. Tables I, II, III and IV list the mean fitness value and standard deviation of the solutions averaged over 100 runs for the Rosenbrock, Rastrigin, Griewank and Ackley functions respectively. From these tables, it is observed that the proposed PSOCM outperforms LDWPSO and Center PSO on all four benchmark problems with respect to swarm-size scalability, dimensional complexity and convergence rate (speed); it finds the optimal solution accurately for the Rastrigin and Griewank functions, and comes closer to the optimum for the Ackley function. Regarding scalability, it can be seen that as the swarm size increases, the average minimum values found by LDWPSO and Center PSO decrease and approach the optimum, with Center PSO tending to outperform LDWPSO, but both remain larger than those of PSOCM. Noticeably, the average found by PSOCM is the smallest and remains fixed for the test functions, except for Rosenbrock, where it decreases slowly.
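The comparison above can be reproduced in outline as follows (a rough sketch, not the authors' implementation; the problem dimension, iteration count, run count and the Rastrigin objective are illustrative, while the inertia schedule 0.9 to 0.4, c1 = c2 = 2, Vmax = Xmax and the Rastrigin initialization range 2.56 ≤ xi ≤ 5.12 follow the parameter settings above):

```python
import numpy as np

rng = np.random.default_rng(seed=3)

def rastrigin(x):
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def psocm(f, dim=5, swarm=20, iters=200, wmax=0.9, wmin=0.4,
          c1=2.0, c2=2.0, lo=2.56, hi=5.12, eps=1e-12):
    """Minimize f with the PSOCM scheme: fitness-weighted center of mass
    (eq. 7) inside the modified velocity update (eq. 11), inertia weight
    decreased linearly from wmax to wmin (eq. 3), velocity clamped to
    Vmax = Xmax as in the parameter settings."""
    X = rng.uniform(lo, hi, (swarm, dim))
    V = np.zeros((swarm, dim))
    P = X.copy()
    Pval = np.array([f(x) for x in X])            # personal best values
    g = int(np.argmin(Pval))
    Pg, gval = P[g].copy(), Pval[g]               # global best
    for it in range(iters):
        w = (wmax - wmin) * (iters - it) / iters + wmin      # eq. (3)
        F = 1.0 / (np.array([f(x) for x in X]) + eps)        # eq. (10)
        Xcm = (F[:, None] * X).sum(axis=0) / F.sum()         # eqs (7), (8)
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = (w * V + c1 * r1 * ((P + Xcm) / 2 - X)
                   + c2 * r2 * ((Pg - Xcm) / 2 - X))         # eq. (11)
        V = np.clip(V, -hi, hi)                              # Vmax = Xmax
        X = X + V                                            # eq. (12)
        vals = np.array([f(x) for x in X])
        better = vals < Pval
        P[better], Pval[better] = X[better], vals[better]
        g = int(np.argmin(Pval))
        if Pval[g] < gval:
            Pg, gval = P[g].copy(), Pval[g]
    return gval

# mean ± SD of the best fitness over a handful of runs, as in the tables
results = [psocm(rastrigin) for _ in range(5)]
print(f"{np.mean(results):.4f}±{np.std(results):.4f}")
```

The reported tables average 100 runs per setting; the five runs here only keep the sketch fast.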
Table IV. Mean fitness value for the Ackley function.

| Size | D. | Max. I. | LDWPSO | Center PSO | PSOCM |
|------|----|---------|--------|------------|-------|
| 20 | 10 | 1000 | 0.5064±0.1804 | 0.4576±0.1552 | 4.44E-16±0.0000 |
| | 20 | 1500 | 0.9905±0.2373 | 0.8677±0.1954 | 2.04E-15±1.77E-15 |
| | 30 | 2000 | 1.3294±0.2671 | 1.0353±0.1547 | 3.71E-15±9.64E-16 |
| 40 | 10 | 1000 | 0.3759±0.1253 | 0.3470±0.1114 | 4.44E-16±0.0000 |
| | 20 | 1500 | 0.7634±0.2029 | 0.6615±0.1379 | 2.26E-15±1.78E-15 |
| | 30 | 2000 | 1.0762±0.1997 | 0.7688±0.1189 | 3.46E-15±1.27E-15 |
| 80 | 10 | 1000 | 0.2797±0.0956 | 0.2513±0.1039 | 4.44E-16±0.0000 |
| | 20 | 1500 | 0.5798±0.1479 | 0.4374±0.0887 | 1.79E-15±1.72E-15 |
| | 30 | 2000 | 0.8852±0.1720 | 0.5764±0.1095 | 3.36E-15±1.36E-15 |
| 160 | 10 | 1000 | 0.1855±0.0849 | 0.1645±0.0857 | 4.44E-16±0.0000 |
| | 20 | 1500 | 0.4945±0.1349 | 0.3227±0.0551 | 1.47E-15±1.61E-15 |
| | 30 | 2000 | 0.7022±0.0727 | 0.4369±0.0727 | 3.22E-15±1.47E-15 |
Table III. Mean fitness value for the Griewank function.
Fig 2. Rosenbrock function convergence.
Fig 3. Rastrigin function convergence.
| Size | D. | Max. I. | LDWPSO | Center PSO | PSOCM |
|------|----|---------|--------|------------|-------|
| 20 | 10 | 1000 | 0.0891±0.0419 | 0.0815±0.0493 | 0.0000±0.0000 |
| | 20 | 1500 | 0.0663±0.2332 | 0.0588±0.1314 | 0.0000±0.0000 |
| | 30 | 2000 | 0.0331±0.0668 | 0.0111±0.1959 | 0.0000±0.0000 |
| 40 | 10 | 1000 | 0.0867±0.0423 | 0.0776±0.0395 | 0.0000±0.0000 |
| | 20 | 1500 | 0.0222±0.0223 | 0.0226±0.0410 | 0.0000±0.0000 |
| | 30 | 2000 | 0.0158±0.0175 | 0.0248±0.0355 | 0.0000±0.0000 |
| 80 | 10 | 1000 | 0.0688±0.0281 | 0.0659±0.0279 | 0.0000±0.0000 |
| | 20 | 1500 | 0.0221±0.0205 | 0.0215±0.0211 | 0.0000±0.0000 |
| | 30 | 2000 | 0.0119±0.0146 | 0.0192±0.0233 | 0.0000±0.0000 |
| 160 | 10 | 1000 | 0.0577±0.0236 | 0.0552±0.0221 | 0.0000±0.0000 |
| | 20 | 1500 | 0.0233±0.0196 | 0.0217±0.0195 | 0.0000±0.0000 |
| | 30 | 2000 | 0.0099±0.0115 | 0.0108±0.0109 | 0.0000±0.0000 |
Fig 4. Griewank function convergence.
Fig 5. Ackley function convergence.
Regarding complexity, when the problem dimension increases (higher complexity), the average minimum values obtained by LDWPSO, Center PSO and PSOCM all increase, but the average obtained by PSOCM remains the smallest; it increases slowly as the dimension rises for all tests except Rastrigin and Griewank (where the optimum is reached). As for the convergence rate, PSOCM converges faster than both LDWPSO and Center PSO.
III. CONCLUSION AND FUTURE WORK
In this paper, a new variation of PSO, called Particle Swarm Optimization with Center of Mass (PSOCM), has been put forward. A virtual particle, the center of mass, is inserted into the velocity formula to help the cognitive behavior component avoid local optima and to help maintain the diversity of the swarm during the search. This increases the opportunity of fast convergence to the global (or a near-global) optimum, since the center-of-mass particle attracts particles to the region of the best found solutions, which gives particles the best chance to occupy the position of the global best found solution during the search. Two versions of particle swarm optimization, namely LDWPSO and Center PSO, were compared with the proposed PSOCM. A set of well-known optimization benchmark test problems with varying difficulty levels and problem sizes was used to evaluate the compared algorithms. This set consists of four scalable problems, namely the Rosenbrock, Rastrigin, Griewank and Ackley functions; the dimension of these problems can be increased or decreased at will, so their complexity increases as the problem size increases.
In the future, more theoretical work on PSOCM will be performed, and real examples from industry and other fields will be applied to PSOCM to evaluate its performance. More physical rules may be used to enhance the performance of particle swarm optimization, which remains a hot topic for researchers to explore and examine. New enhanced particle swarm optimization algorithms may also be investigated for use in PSO-based forecasting models.
REFERENCES
[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of the 1995 IEEE International Conference on Neural Networks, Australia, 1995, pp. 1942-1948.
[2] Y. Liu, Z. Qin, Z. Shi and J. Lu, "Center particle swarm optimization," Neurocomputing, Vol. 70, 2007, pp. 672-679.
[3] Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," Proceedings of the 7th International Conference on Evolutionary Programming VII, March 25-27, 1998, pp. 591-600.
[4] W. Benenson, J. W. Harris, H. Stocker and H. Lutz, Handbook of Physics. New York: Springer-Verlag, p. 82, 2002.
[5] K. Deep and J. C. Bansal, "Mean particle swarm optimization for function optimization," Int. J. Computational Intelligence Studies, Vol. 1, No. 1, 2009, pp. 72-92.
[6] J. Sharp, Microsoft Visual C# 2008, Microsoft Press, p. 33, 2008.
[7] D. Alrijadjis, K. Tanaka and S. Mu, "A modified particle swarm optimization with nonlinear decreasing inertia weight based PID controller for ultrasonic motor," International Journal of Innovation, Management and Technology, Vol. 3, No. 3, June 2012.
[8] N. Azadani, S. Hosseinian and B. Moradzadeh, "Generation and reserve dispatch in a competitive market using constrained particle swarm optimization," Int. J. Electr. Power Energ. Syst., Vol. 32, 2010, pp. 79-86.