
A Brief Review on Binary-Coded Genetic Algorithm for Optimization

DOI : https://doi.org/10.5281/zenodo.18901454

Rohit Goyal

Department of Mathematics, Maharaj Singh College, Saharanpur (UP), India.

Sanjay Kumar

Professor, Department of Mathematics, Maharaj Singh College, Saharanpur (UP), India.

Abstract – This paper presents a comprehensive review of an optimization technique in Operations Research (OR) known as the Binary-Coded Genetic Algorithm. It emphasizes its importance, past and recent developments, applications, challenges encountered, and prospective future directions. The paper starts with an introduction to the Genetic Algorithm, outlining its definition, scope, and applications. Subsequently, it investigates specialized techniques such as Binary-Coded Genetic Algorithms and their historical evolution. Four benchmark functions (Rosenbrock, Himmelblau, Rastrigin and Ackley) are used to understand the selection, crossover and mutation operators and survivor strategies, and to visualize their behaviour graphically while approaching the optimal solution of a given problem. Recent innovations in optimization techniques, including hybrid approaches, multi-objective optimization strategies for big-data scenarios, and integration with machine learning and artificial intelligence, are scrutinized as well. Additionally, the paper addresses the problems faced in optimization research while identifying emerging trends that may shape future endeavours in the field.

Keywords: Operations Research; Optimization Techniques; Linear Programming; Genetic Algorithm; Binary-Coded Genetic Algorithms; Multi-Objective Optimization; Machine Learning; Artificial Intelligence.

1. INTRODUCTION

    1.1 Overview of Genetic Algorithms

      Genetic Algorithms (GAs) are optimization methodologies inspired by the concepts of natural selection and genetics. They function by simulating evolutionary processes, where potential solutions to a problem, represented as individuals or chromosomes, progressively evolve through generations via mechanisms such as selection, crossover, and mutation.

      Mechanisms Involved in GAs

      GAs utilize several key genetic operators:

      • Selection involves identifying individuals from the current population for reproduction based on their fitness levels.

      • Crossover refers to merging genetic information from two parent entities to produce new offspring.

      • Mutation introduces random alterations within an individual's genetic material to ensure diversity across the population.

        Applications of GAs

        GAs have demonstrated effectiveness across various optimization challenges including scheduling, routing problems, machine learning tasks, and engineering design projects. They excel particularly in scenarios characterized by extensive search spaces, non-linear constraints, and multiple objectives.

        Various optimization techniques (along with the genetic algorithm) and their applications are shown in Table 1 to understand how different optimization techniques contribute to different sectors of today's world.

        [Figure: Taxonomy of computational intelligence techniques — neural networks, fuzzy logic, and evolutionary algorithms; the latter comprise genetic algorithms, evolution strategies, genetic programming, evolutionary programming, differential evolution, particle swarm optimization, and evolvable hardware.]

        Table 1: Comparison of Optimization Techniques in Operations Research

        | Optimization Technique | Problem Types | Convergence Speed | Scalability | Applications |
        |---|---|---|---|---|
        | Linear Programming (LP) | Linear | High | High | Production planning, transportation, network optimization |
        | Integer Programming (IP) | Discrete | Moderate | Moderate | Project scheduling, facility location, resource allocation |
        | Nonlinear Programming (NLP) | Nonlinear | Moderate | Moderate | Engineering design, financial modelling, data analysis |
        | Genetic Algorithms (GA) | Any | Moderate | High | Scheduling, optimization problems with complex search spaces |
        | Simulated Annealing (SA) | Any | Low to Moderate | Moderate | Combinatorial optimization, parameter estimation |
        | Tabu Search (TS) | Any | Moderate | High | Travelling salesman problem, job scheduling |
        | Particle Swarm Optimization (PSO) | Any | Moderate | High | Function optimization, neural network training |
        | Ant Colony Optimization (ACO) | Combinatorial | Moderate | High | Vehicle routing, scheduling, network optimization |

    1.2 BINARY-CODED GENETIC ALGORITHM

Overview

Introduction to Binary-Coded Genetic Algorithms: A Binary-Coded Genetic Algorithm (BCGA) is a type of Genetic Algorithm in which candidate solutions (chromosomes) are represented as binary strings consisting of 0s and 1s. It uses evolutionary operators such as selection, crossover, and mutation to search for the optimal solution.

Genetic Operators: BCGAs employ several genetic operators including selection, crossover, and mutation. Selection involves choosing individuals from the current population for reproduction based on their fitness. Crossover involves combining genetic information from two parent individuals to create new offspring. Mutation introduces random changes to the genetic material of individuals to maintain diversity in the population.

Applications of BCGA: Binary-Coded Genetic Algorithm (BCGA) is widely used in solving discrete and combinatorial optimization problems. It is commonly applied in knapsack problems, feature selection in machine learning, and scheduling tasks where decisions are represented as 0 or 1. BCGA is also useful in logical rule discovery and network design problems. It is particularly suitable when decision variables are binary in nature.

2. LITERATURE REVIEW

[1] John H. Holland (1975) was the first to introduce genetic algorithms and binary string encoding, with genetic operators acting on binary chromosomes; his book laid the foundation for binary-coded GAs. [2] N. Srinivas & Kalyanmoy Deb (1994) introduced NSGA (while not exclusively binary, it uses genetic encoding mechanisms fundamental to GAs), showing early multi-objective GA structures that often assume binary representation.

[3] Jiangming Mao, Junichi Murata, Kotaro Hirasawa & Jinglu Hu (2003) analyse crossover, mutation, and robustness specifically for binary-coded GAs, investigate how operators affect bit distributions in populations, and propose improved strategies. [4] A comparison of real/binary-like coded versus binary-coded GAs for fuzzy knowledge base generation (2004) contrasts binary coding with variant encodings in genetic search for fuzzy systems. [5] Yaohua He & Chi-Wai Hui (2010) apply binary encoding in GA to complex scheduling problems, with crossover tailored to binary strings. [6] Dongli Jia, Xintao Duan & Muhammad Khurram Khan (2013) work on binary representations and compare performance with other binary approaches. [7] Simon M. Lucas et al. (2017) present a compact GA using probability distributions over the binary string space (no explicit crossover), enhancing performance in noisy environments. [8] Avijit Basak (2020) proposes a memory-efficient representation for binary chromosomes, improving allele retention and computational performance. [9] Mariana A. Londe et al. (2023) perform a comprehensive survey of random-key encoding (a generalization of binary representation), including binary-like structures and biased operators, which provides useful context for binary-coded GAs. [10] Fanggidae A. et al. (2024) propose adaptive simulated binary crossover variants for optimization.

Table 2.1: Classification of Contributions in the Field of Binary-Coded Genetic Algorithm

| Year | Research Paper / Book | Author(s) | Field of Use | Application Area |
|---|---|---|---|---|
| 1975 | Adaptation in Natural and Artificial Systems | John Henry Holland | Evolutionary Computation Theory | Foundation of binary encoding in GA |
| 1994 | Multi-objective Optimization using Non-dominated Sorting | Kalyanmoy Deb | Multi-Objective Optimization | Engineering design, resource allocation |
| 2003 | Increasing Robustness of Binary-Coded Genetic Algorithm | Mao et al. | Evolutionary Algorithm Design | Operator robustness analysis |
| 2004 | Binary vs Real Coding GA for Fuzzy Knowledge Base Generation | Various Authors | Artificial Intelligence / Control Systems | Optimization of fuzzy rule systems |
| 2010 | Binary Coding GA for Multi-Purpose Process Scheduling | He & Hui | Chemical & Process Engineering | Production scheduling |
| 2013 | Efficient Binary Differential Evolution with Parameter Adaptation | Jia et al. | Discrete & Combinatorial Optimization | Feature selection, binary decision problems |
| 2017 | Sliding Window Compact Genetic Algorithm | Lucas et al. | Machine Learning / Noisy Optimization | Binary probability model optimization |
| 2021 | Memory Optimized Data Structure for Binary Chromosomes in GA | Basak | Computer Science (Algorithm Implementation) | Memory-efficient chromosome storage |
| 2023 | Biased Random-Key Genetic Algorithms: A Review | Londe et al. | Operations Research / Metaheuristics | Encoding strategy comparison |
| 2024 | Self-Adaptive Simulated Binary Crossover-Elitism in GA | Fanggidae et al. | Optimization Algorithm Development | Adaptive crossover strategies |

Table 2.2: Google Scholar Citations (Binary-Coded Genetic Algorithm)

| Paper / Work (Title & Author) | Year | Approx. Citation Count | Impact Category |
|---|---|---|---|
| Binary versus real coding for genetic algorithms: A false dichotomy? — J. Gaffney, C. Pearce & D. Green | 2010 | ~2225 citations | Medium |
| Robust encodings in genetic algorithms: a survey of encoding issues (includes binary encodings) — Ronald (survey) | 2003 | ~73 citations | High (survey) |
| Real/binary like vs binary coded GA for fuzzy knowledge bases — Sofiane Achiche | 2004 | Not widely cited on Google Scholar | Medium |
| Increasing Robustness of Binary-coded Genetic Algorithm — Mao, Murata, Hirasawa, Hu | 2003 | Very low (~0 citations) | Low |
| Molecular recognition using a binary genetic search algorithm — A. W. Payne & R. C. Glen | 1993 | — | — |
| Binary-Real Coded Genetic Algorithm applications (Hybrid schemes) — Omar A. Abdul-Rahman, M. Munetomo, K. Akama | 2011 | Few (~1 citation) | Low |

Table 2.3: Limitations of Binary-Coded Genetic Algorithm (BCGA)

| S. No. | Limitation | Description | Effect on Performance |
|---|---|---|---|
| 1 | Precision Limitation | Continuous variables must be converted into fixed-length binary strings, causing discretization. | Approximation errors and reduced solution accuracy |
| 2 | Hamming Cliff Problem | A small change in numeric value may require flipping many bits in binary form. | Slow convergence and unstable search behaviour |
| 3 | Premature Convergence | Population diversity reduces quickly in binary representation. | Trapping in a local optimum |
| 4 | Poor for Continuous Optimization | Requires encoding and decoding of real values. | Less efficient than real-coded GA for continuous problems |
| 5 | Long Chromosome Length | Higher precision requires more bits per variable. | Increased memory usage and computation time |
| 6 | Scalability Issues | As the number of variables increases, chromosome size grows significantly. | Reduced efficiency in large-scale problems |
| 7 | Weak Local Search Capability | Bit-flip mutation does not always produce small meaningful changes. | Poor fine-tuning near the optimal solution |
| 8 | Encoding/Decoding Overhead | Conversion between binary and real values adds extra computation. | Increased processing time |
| 9 | Generic Representation | Binary encoding does not exploit problem-specific structure. | May perform worse than specialized encodings |

3. UNDERSTANDING THE BINARY-CODED GENETIC ALGORITHM

    3.1 Properties of Practical Optimization Problems

      • Non-differentiable functions and constraints
      • Discontinuous functions
      • Discrete/discontinuous search spaces
      • Mixed variables
      • Multimodal functions
      • Robust solutions
      • Reliable solutions

    3.2 Generalized Framework of GA Techniques

  1. Solution representation (% Genetics)
  2. Input: t := 1 (generation counter); maximum allowed generations = T
  3. Initialize random population P(t); (% Parent population)
  4. Evaluate(P(t)); (% Evaluate objective, constraints and assign fitness)
  5. while t <= T do
  6.   M(t) := Selection(P(t)); (% Survival of the fittest)
  7.   Q(t) := Variation(M(t)); (% Crossover and mutation)
  8.   Evaluate(Q(t)); (% Offspring population)
  9.   P(t+1) := Survivor(P(t), Q(t)); (% Survival of the fittest)
  10.  t := t + 1;
  11. end while
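The framework above can be condensed into a minimal Python sketch. The fitness function, parameter defaults, and the toy problem (minimizing the number of 1-bits) are illustrative assumptions, not taken from the paper:

```python
import random

def genetic_algorithm(fitness, n_bits, N=8, T=50, pc=0.9, pm=0.1):
    """Generalized GA framework for minimization: initialize, then loop
    selection -> crossover -> mutation -> evaluation -> survivor."""
    P = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(N)]  # P(t)
    for t in range(1, T + 1):
        # binary tournament selection: lower fitness wins (minimization)
        M = [min(random.sample(P, 2), key=fitness) for _ in range(N)]
        Q = []
        for a, b in zip(M[::2], M[1::2]):       # single-point crossover with probability pc
            if random.random() < pc:
                k = random.randrange(1, n_bits)
                a, b = a[:k] + b[k:], b[:k] + a[k:]
            Q += [a[:], b[:]]
        for q in Q:                             # bit-wise mutation with probability pm
            if random.random() < pm:
                i = random.randrange(n_bits)
                q[i] = 1 - q[i]
        P = sorted(P + Q, key=fitness)[:N]      # (mu + lambda) survivor strategy
    return min(P, key=fitness)

# usage: minimize the number of 1-bits in a 10-bit string (toy problem)
best = genetic_algorithm(sum, n_bits=10)
```

Each step of the worked Rosenbrock example in Section 3.3 instantiates one line of this loop.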

    [Figure: Flowchart of the generalized GA framework — initialize random population P(t) and evaluate it; while t <= T, compute M(t) := Selection(P(t)), Q(t) := Variation(M(t)), evaluate Q(t), form P(t+1) := Survivor(P(t), Q(t)), and increment t; terminate when t > T.]

    3.3 Understanding the Genetic Algorithm Using the Rosenbrock Function

    ROSENBROCK FUNCTION:

    Minimize f(x1, x2) = 100(x2 - x1^2)^2 + (1 - x1)^2, with bounds -5 <= x1 <= 5 and -4 <= x2 <= 4. The optimum solution is x* = (1, 1) with f(x*) = 0.
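For reference, the objective can be written directly in Python (a small illustrative helper, not part of the original paper):

```python
def rosenbrock(x1, x2):
    """Rosenbrock function: f(x1, x2) = 100*(x2 - x1**2)**2 + (1 - x1)**2."""
    return 100.0 * (x2 - x1**2) ** 2 + (1.0 - x1) ** 2

# the known optimum x* = (1, 1) gives f(x*) = 0
print(rosenbrock(1.0, 1.0))   # → 0.0
```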

    Solution Representation

    In a binary-coded genetic algorithm, the decision variables of an optimization problem are represented using Boolean (binary) strings:

    • Binary variables: (0, 1)
    • Real variables
    • Discrete and integer variables

      Real Variable Representation

    • Suppose a string length l = 5 is chosen for xi.
    • The maximum decoded value of a binary string of length l = 5 is DV(11111) = b = 31 and the minimum is a = 0.
    • Suppose the lower bound is xi(L) = 3 and the upper bound is xi(U) = 10.
    • The value of the decision variable is

      xi = xi(L) + ((xi(U) - xi(L)) / (2^l - 1)) * DV(s),

      where DV(s) is the decoded value of the binary string s.
    • xi = 3 + (7/31) * DV(s)
    • For the string s = {11011}:
    • Decoded value DV(s) = 1*2^4 + 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 27
    • xi = 3 + (7/31) * 27 = 9.096
    • Precision = 7/31 = 0.226
      • The neighbour of 9.096 is 9.096 + 0.226 = 9.322.
      • We cannot obtain any value of xi between 9.096 and 9.322.
    • If we want a precision of 0.01, then 7/(2^l - 1) = 0.01,
    • so l = log2(1 + 7/0.01) = 9.453, rounded up to l = 10.
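The decoding rule above can be sketched in Python (the helper name is illustrative):

```python
def decode(bits, lower, upper):
    """Map a binary string to a real value in [lower, upper] using
    x = lower + (upper - lower) / (2**l - 1) * DV(s)."""
    l = len(bits)
    dv = int(bits, 2)                       # decoded value DV(s)
    return lower + (upper - lower) / (2 ** l - 1) * dv

# the worked example: s = {11011}, DV(s) = 27, bounds [3, 10]
x = decode("11011", 3, 10)
print(round(x, 3))   # → 9.097
```

The precision (upper - lower) / (2**l - 1) is the spacing between neighbouring representable values, which is why a target precision of 0.01 forces the string length up to l = 10.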

      Step 1: Solution Representation

    • Let the chromosome string length be l = 10.
    • The first five bits represent x1 and the remaining five represent x2.

      Step 2: Generate Initial Random Population

    • Let the population size be N = 8.
    • Let the first string be 01100 11010. The first five bits (01100) represent x1 and the remaining bits (11010) represent x2.

      | Index | Chromosome | DV(x1) | DV(x2) |
      |---|---|---|---|
      | 1 | 01100 11010 | 12 | 26 |
      | 2 | 11000 01011 | 24 | 11 |
      | 3 | 00110 00110 | 6 | 6 |
      | 4 | 01000 10111 | 8 | 23 |
      | 5 | 10100 11101 | 20 | 29 |
      | 6 | 01101 01000 | 13 | 8 |
      | 7 | 00101 11011 | 5 | 27 |
      | 8 | 11100 11000 | 28 | 24 |

      The scaling formula for x1 is

      x1 = x1(L) + ((x1(U) - x1(L)) / (2^l - 1)) * DV(s) = -5 + (10/(2^5 - 1)) * DV(s) = -5 + (10/31) * DV(s)

      The scaling formula for x2 is

      x2 = x2(L) + ((x2(U) - x2(L)) / (2^l - 1)) * DV(s) = -4 + (8/31) * DV(s)

      The decoded value of (01100) is DV(s) = 12 and that of (11010) is DV(s) = 26, so

      x1 = -5 + (10/31) * 12 = -1.129
      x2 = -4 + (8/31) * 26 = 2.710

      Step 3: Evaluate Population

      Solution 1:

    • Solution 1 is represented as x(1) = (-1.129, 2.710).
    • Objective function value:

      f(-1.129, 2.710) = 100(2.710 - (-1.129)^2)^2 + (1 - (-1.129))^2 = 210.445

    • Let us choose the fitness value to be the same as the function value.

      Parent population

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 1 | 01100 11010 | 12 | 26 | -1.129 | 2.710 | 210.445 |
      | 2 | 11000 01011 | 24 | 11 | 2.742 | -1.161 | 7536.407 |
      | 3 | 00110 00110 | 6 | 6 | -3.065 | -2.452 | 14041.882 |
      | 4 | 01000 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
      | 5 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | 6 | 01101 01000 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | 7 | 00101 11011 | 5 | 27 | -3.387 | 2.968 | 7252.209 |
      | 8 | 11100 11000 | 28 | 24 | 4.032 | 2.194 | 19793.183 |

      Step 4: Termination Condition

    • BGA terminates when the generation counter exceeds the allowed maximum number of generations, that is, t > T.
    • Other criteria can also be used for terminating the algorithm.
    • Since this is the first generation, we continue and move to the selection operator.

      Step 5:- Selection

    • The purpose is to identify good (usually above average) solutions in the population.

    • In this process, we eliminate bad solutions in the population.

    • We make multiple copies of good solutions.

    • Selection can be applied either before or after the search/variation operators.

      • When selection is used before the search/variation operators, it is called reproduction or simply selection.

      • When it is used after the search/variation operators, the process of choosing the next generation from the parent and offspring populations is referred to as survivor selection or elimination; it is also called environmental selection.

    • Common selection operators are

      • Fitness proportionate selection

      • Tournament selection

      • Ranking selection, etc.

        Binary Tournament Selection Operator and Its Working Principle:

      • It is similar to playing a tournament between teams; here, "binary" stands for a tournament between two solutions.

      • The outcome of a binary tournament is a win, a loss, or a tie.

      • It is performed by picking two solutions at random and choosing the one with the better fitness value.

        | Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
        |---|---|---|---|---|---|---|---|---|
        | Fitness | 210.445 | 7536.407 | 14041.882 | 1546.603 | 189.732 | 671.924 | 7252.209 | 19793.183 |

        | Competitor 1 (Index: f) | Competitor 2 (Index: f) | Winner |
        |---|---|---|
        | 2: 7536.407 | 4: 1546.603 | Index 4 |
        | 7: 7252.209 | 3: 14041.882 | Index 7 |
        | 8: 19793.183 | 6: 671.924 | Index 6 |
        | 1: 210.445 | 5: 189.732 | Index 5 |
        | 5: 189.732 | 7: 7252.209 | Index 5 |
        | 4: 1546.603 | 2: 7536.407 | Index 4 |
        | 3: 14041.882 | 6: 671.924 | Index 6 |
        | 8: 19793.183 | 1: 210.445 | Index 1 |
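The tournament procedure above can be sketched in Python. The function name is illustrative; ties are broken in favour of the first competitor, and lower fitness wins because this is a minimization problem:

```python
import random

def binary_tournament(population, fitness, n_picks):
    """Binary tournament selection for minimization: pick two solutions
    at random and keep the one with the better (lower) fitness."""
    mating_pool = []
    for _ in range(n_picks):
        a, b = random.sample(range(len(population)), 2)
        winner = a if fitness[a] <= fitness[b] else b
        mating_pool.append(population[winner])
    return mating_pool

# fitness values of the eight parent solutions from the example
# (list position 0 corresponds to solution index 1, and so on)
f = [210.445, 7536.407, 14041.882, 1546.603,
     189.732, 671.924, 7252.209, 19793.183]
pool = binary_tournament(list(range(8)), f, n_picks=8)
```

With the pairings shown in the table, this procedure reproduces the winners 4, 7, 6, 5, 5, 4, 6 and 1, i.e. the mating pool used in Step 6(a).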

      Step 6(a): Crossover Operator

    • The crossover operator is responsible for creating new solutions, which explore the search space.

    • Crossover is performed with probability pc. Generally, pc is kept high, which supports exploration of the search space.

    • Types of crossover operators:

      • Single-point crossover operator

      • n-point crossover operator

      • Uniform crossover operator

        Single-point crossover operator

    • For performing single-point crossover, two solutions are picked randomly from the pool at a time.

    • The mating pool is created after performing the selection operator.

      | Old Index | New Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|---|
      | 4 | 1 | 01000 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
      | 7 | 2 | 00101 11011 | 5 | 27 | -3.387 | 2.968 | 7252.209 |
      | 6 | 3 | 01101 01000 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | 5 | 4 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | 5 | 5 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | 4 | 6 | 01000 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
      | 6 | 7 | 01101 01000 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | 1 | 8 | 01100 11010 | 12 | 26 | -1.129 | 2.710 | 210.445 |

    • Let solutions with the following new indexes be picked for performing crossover:

      • Pairs {4,7}, {8,2}, {5,1} and {6,3}

    • A random number is generated for each pair of solutions and compared with pc to decide whether crossover is performed.

    • Let pc = 0.9

      | Pair | Random number (r) |
      |---|---|
      | {4,7} | 0.75 |
      | {8,2} | 0.23 |
      | {5,1} | 0.93 |
      | {6,3} | 0.68 |

    • For the first pair, since r = 0.75 < pc = 0.9, we perform crossover.

    • Randomly, we choose the 8th site to perform crossover (the slash below marks the crossover site).

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | P4 | 10100111/01 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | P7 | 01101010/00 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | O4 | 10100111/00 | 20 | 28 | 1.452 | 3.226 | 125.336 |
      | O7 | 01101010/01 | 13 | 9 | -0.806 | -1.677 | 545.121 |

    • For the second pair, since r = 0.23 < pc = 0.9, we perform crossover.

    • Randomly, we choose the 3rd site to perform crossover.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | P8 | 011/0011010 | 12 | 26 | -1.129 | 2.710 | 210.445 |
      | P2 | 001/0111011 | 5 | 27 | -3.387 | 2.968 | 7252.209 |
      | O8 | 011/0111011 | 13 | 27 | -0.806 | 2.968 | 540.287 |
      | O2 | 001/0011010 | 4 | 26 | -3.710 | 2.710 | 12236.916 |

    • For the third pair, since r = 0.93 > pc = 0.9, we do not perform crossover. These solutions are copied directly.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | O5 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | O1 | 01000 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |

    • For the fourth pair, since r = 0.68 < pc = 0.9, we perform crossover.

    • Randomly, we choose the 6th site to perform crossover.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | P6 | 010001/0111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
      | P3 | 011010/1000 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | O6 | 010001/1000 | 8 | 24 | -2.419 | 2.194 | 1351.054 |
      | O3 | 011010/0111 | 13 | 7 | -0.806 | -2.194 | 812.047 |

    • Crossover can create solutions that are better or worse than their parent solutions.

    • If a bad solution is created, it will be eliminated by selection in later generations.

    • If a good solution is created, it will obtain multiple copies in later generations.

    • Since crossover is performed on two parent solutions that survived tournament selection, the offspring are likely to preserve the good traits of their parents and evolve into better solutions than their parents.

        | New Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
        |---|---|---|---|---|---|---|
        | 4 | 10100 11100 | 20 | 28 | 1.452 | 3.226 | 125.336 |
        | 7 | 01101 01001 | 13 | 9 | -0.806 | -1.677 | 545.121 |
        | 8 | 01101 11011 | 13 | 27 | -0.806 | 2.968 | 540.287 |
        | 2 | 00100 11010 | 4 | 26 | -3.710 | 2.710 | 12236.916 |
        | 5 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
        | 1 | 01000 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
        | 6 | 01000 11000 | 8 | 24 | -2.419 | 2.194 | 1351.054 |
        | 3 | 01101 00111 | 13 | 7 | -0.806 | -2.194 | 812.047 |
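The single-point crossover performed above can be sketched in Python (function name illustrative; strings are used for chromosomes):

```python
import random

def single_point_crossover(p1, p2, pc=0.9):
    """Single-point crossover: with probability pc, cut both parent
    strings at a random site and swap the tails."""
    if random.random() >= pc:
        return p1, p2                       # pair copied unchanged
    site = random.randrange(1, len(p1))     # cut site between 1 and l-1
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

# the first pair from the example, cut after the 8th bit
p4, p7 = "1010011101", "0110101000"
o4, o7 = p4[:8] + p7[8:], p7[:8] + p4[8:]
print(o4, o7)   # → 1010011100 0110101001
```

The printed offspring match O4 and O7 in the table for the first pair.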

        Step 6(b): Mutation Operator

    • The mutation operator is also responsible for the search aspect of GA.

    • The purpose of mutation is to maintain diversity in the population.

    • Mutation is generally performed with a small probability pm.

      Bit-wise mutation operator:

    • A solution is chosen with probability pm = 0.10 and a random bit of it is chosen for mutation.

    • The following random numbers are used for performing mutation.

      | Index | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
      |---|---|---|---|---|---|---|---|---|
      | Random number (r) | 0.05 | 0.32 | 0.15 | 0.01 | 0.06 | 0.24 | 0.50 | 0.54 |

    • For solution 1, since r = 0.05 < pm = 0.1, we perform mutation.

    • Randomly, we pick the 4th bit position and flip it (0 to 1, or 1 to 0); the mutated bit is shown in brackets.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 1 (before) | 010[0]0 10111 | 8 | 23 | -2.419 | 1.935 | 1546.603 |
      | 1 (after) | 010[1]0 10111 | 10 | 23 | -1.774 | 1.935 | 154.658 |

    • For solution 2, since r = 0.32 > pm = 0.1, we do not perform mutation; the solution is copied.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 2 | 00100 11010 | 4 | 26 | -3.710 | 2.710 | 12236.916 |

    • For solution 3, since r = 0.15 > pm = 0.1, we do not perform mutation; the solution is copied.

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 3 | 01101 00111 | 13 | 7 | -0.806 | -2.194 | 812.047 |

    • For solution 4, since r = 0.01 < pm = 0.1, we perform mutation.

    • Randomly, we pick the 5th bit position and flip it (0 to 1, or 1 to 0).

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 4 (before) | 1010[0] 11100 | 20 | 28 | 1.452 | 3.226 | 125.336 |
      | 4 (after) | 1010[1] 11100 | 21 | 28 | 1.774 | 3.226 | 1.208 |

    • For solution 5, since r = 0.06 < pm = 0.1, we perform mutation.

    • Randomly, we pick the 1st bit position and flip it (0 to 1, or 1 to 0).

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 5 (before) | [1]0100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | 5 (after) | [0]0100 11101 | 4 | 29 | -3.710 | 3.484 | 10585.571 |

    • For the other solutions, since r > pm = 0.1, we do not perform mutation; they are copied.

      | New Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 6 | 01000 11000 | 8 | 24 | -2.419 | 2.194 | 1351.054 |
      | 7 | 01101 01001 | 13 | 9 | -0.806 | -1.677 | 545.121 |
      | 8 | 01101 11011 | 13 | 27 | -0.806 | 2.968 | 540.287 |

      Offspring population after mutation

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 1 | 01010 10111 | 10 | 23 | -1.774 | 1.935 | 154.658 |
      | 2 | 00100 11010 | 4 | 26 | -3.710 | 2.710 | 12236.916 |
      | 3 | 01101 00111 | 13 | 7 | -0.806 | -2.194 | 812.047 |
      | 4 | 10101 11100 | 21 | 28 | 1.774 | 3.226 | 1.208 |
      | 5 | 00100 11101 | 4 | 29 | -3.710 | 3.484 | 10585.571 |
      | 6 | 01000 11000 | 8 | 24 | -2.419 | 2.194 | 1351.054 |
      | 7 | 01101 01001 | 13 | 9 | -0.806 | -1.677 | 545.121 |
      | 8 | 01101 11011 | 13 | 27 | -0.806 | 2.968 | 540.287 |

    • Similar to the crossover operator, the mutation operator can create a solution that is better or worse than the parent solution.

    • However, good solutions will be emphasized and bad solutions may be deleted by the selection operator in later generations.
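The bit-wise mutation used above can be sketched in Python (function name illustrative; chromosomes as strings):

```python
import random

def bitwise_mutation(chromosome, pm=0.1):
    """With probability pm, flip one randomly chosen bit (0 -> 1 or 1 -> 0)."""
    if random.random() >= pm:
        return chromosome                   # solution copied unchanged
    pos = random.randrange(len(chromosome))
    flipped = "1" if chromosome[pos] == "0" else "0"
    return chromosome[:pos] + flipped + chromosome[pos + 1:]

# solution 1 from the example: flipping the 4th bit of 0100010111
s = "0100010111"
mutated = s[:3] + ("1" if s[3] == "0" else "0") + s[4:]
print(mutated)   # → 0101010111
```

The printed string matches the mutated solution 1, whose fitness drops from 1546.603 to 154.658.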

      Step 7: Survivor

    • It is used to preserve good solutions for the next generation.

    • The survivor or elimination stage is also referred to as environmental selection.

      (mu + lambda)-strategy

    • mu stands for the parent population and lambda for the offspring population.

    • We combine both populations and choose the best mu solutions for the next generation population.

      | Parent Index | f(x1, x2) | Offspring Index | f(x1, x2) |
      |---|---|---|---|
      | P1 | 210.445 | O1 | 154.658 |
      | P2 | 7536.407 | O2 | 12236.916 |
      | P3 | 14041.882 | O3 | 812.047 |
      | P4 | 1546.603 | O4 | 1.208 |
      | P5 | 189.732 | O5 | 10585.571 |
      | P6 | 671.924 | O6 | 1351.054 |
      | P7 | 7252.209 | O7 | 545.121 |
      | P8 | 19793.183 | O8 | 540.287 |

    • Since it is a minimization problem, we select solutions in ascending order of their fitness values.

    • Select O4, O1, P5, P1, O8, O7, P6 and O3.

      Next Generation Population

      | Index | Chromosome | DV(x1) | DV(x2) | x1 | x2 | f(x1, x2) |
      |---|---|---|---|---|---|---|
      | 1 | 10101 11100 | 21 | 28 | 1.774 | 3.226 | 1.208 |
      | 2 | 01010 10111 | 10 | 23 | -1.774 | 1.935 | 154.658 |
      | 3 | 10100 11101 | 20 | 29 | 1.452 | 3.484 | 189.732 |
      | 4 | 01100 11010 | 12 | 26 | -1.129 | 2.710 | 210.445 |
      | 5 | 01101 11011 | 13 | 27 | -0.806 | 2.968 | 540.287 |
      | 6 | 01101 01001 | 13 | 9 | -0.806 | -1.677 | 545.121 |
      | 7 | 01101 01000 | 13 | 8 | -0.806 | -1.935 | 671.924 |
      | 8 | 01101 00111 | 13 | 7 | -0.806 | -2.194 | 812.047 |
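The (mu + lambda) survivor step above can be sketched in Python using the fitness values from the example (function name illustrative):

```python
def mu_plus_lambda_survivor(parents, offspring, fitness, mu):
    """(mu + lambda) survivor strategy: merge parent and offspring
    populations and keep the mu best (lowest fitness, since we minimize)."""
    combined = parents + offspring
    combined.sort(key=fitness)
    return combined[:mu]

# parent and offspring fitness values from the example
parent_f = [210.445, 7536.407, 14041.882, 1546.603,
            189.732, 671.924, 7252.209, 19793.183]
offspring_f = [154.658, 12236.916, 812.047, 1.208,
               10585.571, 1351.054, 545.121, 540.287]
survivors = mu_plus_lambda_survivor(parent_f, offspring_f,
                                    fitness=lambda f: f, mu=8)
print(survivors[:2])   # → [1.208, 154.658]
```

The sorted result reproduces the selection O4, O1, P5, P1, O8, O7, P6, O3 made in Step 7.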

        3.3.1 Selection Operators for Binary Variables

          Common selection methods are:-

          • Fitness proportionate selection

          • Tournament selection

          • Ranking selection

            (A) Fitness Proportionate Selection

    • Let us calculate the probability of solution 1.

      • Sum of all fitness values = sum of f_i over i = 1..8 = 51242.386

      • Probability, p_1 = 210.445 / 51242.386 = 0.0041

    • Similarly, we can calculate the probability of the other solutions.

    • P_i is the cumulative probability.

      | Index | f(x1, x2) | p_i | P_i (cumulative) |
      |---|---|---|---|
      | 1 | 210.445 | 0.0041 | 0.0041 |
      | 2 | 7536.407 | 0.1471 | 0.1512 |
      | 3 | 14041.882 | 0.2740 | 0.4252 |
      | 4 | 1546.603 | 0.0302 | 0.4554 |
      | 5 | 189.732 | 0.0037 | 0.4591 |
      | 6 | 671.924 | 0.0131 | 0.4722 |
      | 7 | 7252.209 | 0.1415 | 0.6137 |
      | 8 | 19793.183 | 0.3863 | 1.0000 |
      | SUM | 51242.386 | | |

      Selected population

      | Index | f(x1, x2) |
      |---|---|
      | 2 | 7536.407 |
      | 3 | 14041.882 |
      | 7 | 7252.209 |
      | 7 | 7252.209 |
      | 7 | 7252.209 |
      | 8 | 19793.183 |
      | 8 | 19793.183 |
      | 8 | 19793.183 |

    • The roulette wheel figure is drawn using the cumulative probabilities (Table 1).

    • ri is a random number between 0 and 1.

    • Let the first random number be r1 = 0.07218. By the cumulative probabilities, solution 2 is selected.

    • The second random number is r2 = 0.68799, so solution 8 is selected.

    • The remaining random numbers are r3 = 0.49976, r4 = 0.31172, r5 = 0.51961, r6 = 0.48610, r7 = 0.87648 and r8 = 0.99177.

    • They select solutions 7, 3, 7, 7, 8 and 8, respectively.

    • Overall, solution 2 gets 1 copy, solution 3 gets 1 copy, solution 7 gets 3 copies and solution 8 gets 3 copies (Table 2).

    • Two solutions (7 and 8) become super-solutions in the population.
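The walkthrough above can be reproduced with a short sketch (the function name and structure are ours; the fitness values are the eight entries of Table 1):

```python
import random

def roulette_wheel(fitness, rng=random.random):
    """One spin of the wheel: return the 1-based index of the chosen solution."""
    total = sum(fitness)
    running = 0.0
    r = rng()
    for i, f in enumerate(fitness):
        running += f / total  # cumulative probability P_i
        if r <= running:
            return i + 1
    return len(fitness)  # guard against floating-point round-off

fitness = [210.445, 7536.407, 14041.882, 1546.603,
           189.732, 671.924, 7252.209, 19793.183]
# r1 = 0.07218 falls between 0.0041 and 0.1512, so solution 2 is selected
first = roulette_wheel(fitness, rng=lambda: 0.07218)
```

Replacing the fixed `rng` with the default `random.random` gives the usual stochastic behaviour.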

      (B) Stochastic Remainder Roulette Wheel Selection Operator

    • We calculate probability of each solution using the formula used with fitness proportionate selection operator.

    • We then multiply the population size (N) with the probability of each solution.(Table 1)

| Index | f(x1, x2) | p_i | N·p_i (N = 8) | Decimal part | Cumulative |
|---|---|---|---|---|---|
| 1 | 210.445 | 0.0041 | 0.0329 | 0.0329 | 0.0329 |
| 2 | 7536.407 | 0.1471 | 1.1766 | 0.1766 | 0.2094 |
| 3 | 14041.882 | 0.2740 | 2.1922 | 0.1922 | 0.4017 |
| 4 | 1546.603 | 0.0302 | 0.2415 | 0.2415 | 0.6431 |
| 5 | 189.732 | 0.0037 | 0.0296 | 0.0296 | 0.6728 |
| 6 | 671.924 | 0.0131 | 0.1049 | 0.1049 | 0.7777 |
| 7 | 7252.209 | 0.1415 | 1.1322 | 0.1322 | 0.9099 |
| 8 | 19793.183 | 0.3863 | 3.0901 | 0.0901 | 1.0000 |

(The decimal parts sum to 1.)

    • We assign copies to each solution based on the integer part of the product N·p_i.

    • For example, N·p1 = 0.0329 for solution 1, so it receives no copy.

    • For solution 2, N·p2 = 1.1766, so it receives 1 copy.

    • Similarly, solution 3 gets 2 copies, solution 7 gets 1 copy and solution 8 gets 3 copies.

    • The total number of solutions selected so far is 7.

    • One more solution must be chosen to keep the population size fixed at N = 8.

    • We then apply the fitness proportionate selection operator using only the decimal parts.

    • Let the random number be r1 = 0.76335; this selects solution 6.

    • The first part of the stochastic remainder roulette wheel selection operator is deterministic, because the number of copies is decided by the integer part.

    • It is considered less noisy than the fitness proportionate selection operator, in the sense of introducing less variance in its realization.
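The two-stage procedure can be sketched as follows (a minimal illustration with our own function name; the fitness values are those of the table, and the fixed random number 0.76335 reproduces the selection of solution 6):

```python
def stochastic_remainder(fitness, N, rng):
    """Integer parts assign copies deterministically; decimals fill the rest."""
    total = sum(fitness)
    expected = [N * f / total for f in fitness]
    selected = []
    for i, e in enumerate(expected):
        selected += [i + 1] * int(e)  # deterministic copies (1-based index)
    decimals = [e - int(e) for e in expected]
    while len(selected) < N:  # roulette wheel over the decimal parts
        r = rng() * sum(decimals)
        running = 0.0
        for i, d in enumerate(decimals):
            running += d
            if r <= running:
                selected.append(i + 1)
                break
    return selected

fitness = [210.445, 7536.407, 14041.882, 1546.603,
           189.732, 671.924, 7252.209, 19793.183]
picked = stochastic_remainder(fitness, 8, rng=lambda: 0.76335)
```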

      (C) Stochastic Universal Sampling (SUS)

| Index | f(x1, x2) | p_i | Cumulative P_i |
|---|---|---|---|
| P1 | 210.445 | 0.0041 | 0.0041 |
| P2 | 7536.407 | 0.1471 | 0.1512 |
| P3 | 14041.882 | 0.2740 | 0.4252 |
| P4 | 1546.603 | 0.0302 | 0.4554 |
| P5 | 189.732 | 0.0037 | 0.4591 |
| P6 | 671.924 | 0.0131 | 0.4722 |
| P7 | 7252.209 | 0.1415 | 0.6137 |
| P8 | 19793.183 | 0.3863 | 1.0000 |

    • Only one random number r is required for selecting all solutions.

    • Since N different solutions have to be chosen, a set of N equi-spaced numbers is created:

    • R = {r, r + 1/N, r + 2/N, ..., r + (N−1)/N} mod 1.

    • Let r = 0.40416 and 1/N = 0.125; here, the population size is N = 8.

    • Solution 3 is selected first, using r.

    • The second solution is selected using r + 1/N = 0.40416 + 0.125 = 0.52916, which picks solution 7.

    • The full series of pointers is 0.40416, 0.52916, 0.65416, 0.77916, 0.90416, 0.02916, 0.15416, 0.27916 (subtract 1 whenever a term exceeds 1).

    • The remaining pointers select solutions 8, 8, 8, 2, 3 and 3.
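SUS can be sketched directly from the description above (our function name; one spin r = 0.40416 reproduces the whole selection):

```python
def sus(fitness, N, r):
    """Stochastic universal sampling: N equally spaced pointers from one spin r."""
    total = sum(fitness)
    cumulative, running = [], 0.0
    for f in fitness:
        running += f / total
        cumulative.append(running)
    pointers = [(r + k / N) % 1.0 for k in range(N)]
    chosen = []
    for p in pointers:
        for i, c in enumerate(cumulative):
            if p <= c:
                chosen.append(i + 1)  # report 1-based solution index
                break
    return chosen

fitness = [210.445, 7536.407, 14041.882, 1546.603,
           189.732, 671.924, 7252.209, 19793.183]
picks = sus(fitness, 8, r=0.40416)  # [3, 7, 8, 8, 8, 2, 3, 3]
```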

      Limitations

    • Domination of a super-solution in early generations:

      • Suppose the fitness values (maximization) of five solutions in the population are f1 = 10, f2 = 5, f3 = 70, f4 = 7, f5 = 8.

      • Under fitness proportionate selection, solution 3 will receive far more copies and will become a super-solution.

    • Slower convergence in later generations:

      • Suppose the fitness values of five solutions are f1 = 19, f2 = 21, f3 = 22, f4 = 18, f5 = 20.

      • Every solution will receive about one copy, so the selection operator no longer discriminates between solutions and effectively becomes a dummy.

        (D) Tournament Selection Operator

        Tournament selection with tournament size k: randomly sample a subset P′ of k solutions from the population P, then select the solution in P′ with the best fitness.

    • Suppose k = 4 and four solutions are chosen at random, say {2, 6, 3, 9}, with fitness values f2 = 10, f6 = 5.9, f3 = 16.7, f9 = 9.4.

    • For a maximization problem, solution 3 wins the tournament.

    • For a minimization problem, solution 6 wins the tournament.

    • Often a tournament size of k = 2 is used; this is known as the binary tournament selection operator.
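The tournament step on the sampled subset can be sketched as (function name ours; the fitness values are those of the example):

```python
def tournament_winner(subset, fitness, maximize=True):
    """Among the k sampled solutions, return the index with the best fitness."""
    best = max if maximize else min
    return best(subset, key=lambda i: fitness[i])

# Fitness values of the sampled solutions {2, 6, 3, 9} from the text
fitness = {2: 10.0, 6: 5.9, 3: 16.7, 9: 9.4}
subset = [2, 6, 3, 9]
winner_max = tournament_winner(subset, fitness)                  # solution 3
winner_min = tournament_winner(subset, fitness, maximize=False)  # solution 6
```

In a full GA the subset itself would be drawn with `random.sample`; here it is fixed to match the worked example.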

        2. Operators for Survivor Selection or Elimination

          (μ + λ) strategy

    • Let the size of the parent population be μ.

    • Let λ offspring solutions be generated by the variation operators.

    • The next population is filled by choosing the best μ solutions from the combined pool of parent and offspring solutions.

      (μ, λ) strategy (where λ > μ)

    • Let the size of the parent population be μ.

    • Let λ offspring solutions be generated by the variation operators.

    • The next population is filled by choosing the best μ solutions from the offspring population only.
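Both survivor schemes reduce to a sort-and-truncate (a minimal minimization sketch with our own names; the toy numbers are hypothetical objective values):

```python
def survivors_plus(parents, offspring, fitness, mu):
    """(mu + lambda): keep the best mu from parents and offspring combined."""
    return sorted(parents + offspring, key=fitness)[:mu]

def survivors_comma(offspring, fitness, mu):
    """(mu, lambda): keep the best mu from the offspring only (lambda > mu)."""
    return sorted(offspring, key=fitness)[:mu]

# Toy minimization example: lower objective value is better
parents, offspring = [1.0, 3.0], [4.0, 2.0, 5.0]
plus = survivors_plus(parents, offspring, fitness=lambda x: x, mu=2)   # keeps 1.0, 2.0
comma = survivors_comma(offspring, fitness=lambda x: x, mu=2)          # keeps 2.0, 4.0
```

Note how the comma strategy discards the good parent 1.0, which is exactly the difference between the two schemes.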

        3. Crossover Operators for Binary Variables:

    • The crossover operator creates offspring (new solutions) from more than one parent.

    • Crossover probability (pc):

      • 100·pc % of the strings take part in the crossover operator.

      • 100·(1 − pc) % of the population is simply copied to the new population.

    • (A) Single-point crossover operator

      • Choose a random site on the two parents.

      • Split the parents at this crossover site.

      • Create children by exchanging tails.

        Parent 1: 1 0 1 0 1 1 | 1 0 1 0

        Parent 2: 0 1 0 1 0 0 | 1 1 1 0

        Offspring 1: 1 0 1 0 1 1 | 1 1 1 0

        Offspring 2: 0 1 0 1 0 0 | 1 0 1 0

    • (B) n-point crossover:

      • Choose n random crossover sites.

      • Split along those sites.

      • Glue the parts together, alternating between parents.

    • Example: 2-point crossover:

      Parent 1: 1 0 1 | 0 1 1 | 1 0 1 0

      Parent 2: 0 1 0 | 1 0 0 | 1 1 1 0

      Offspring 1: 1 0 1 | 1 0 0 | 1 0 1 0

      Offspring 2: 0 1 0 | 0 1 1 | 1 1 1 0

    • (C) Uniform crossover

      • Select a bit-string z of length n uniformly at random.

      • For each i from 1 to n:

        • If zi = 1, bit i of offspring 1 is taken from parent 1; otherwise it is taken from parent 2.

        • If zi = 1, bit i of offspring 2 is taken from parent 2; otherwise it is taken from parent 1.

      Parent 1: 1 0 1 0 1 1 1 0 1 0

      Parent 2: 0 1 0 1 0 0 1 1 1 0

      z = 1 0 1 0 0 0 1 1 1 0

      Offspring 1: 1 1 1 1 0 0 1 0 1 0

      Offspring 2: 0 0 0 0 1 1 1 1 1 0
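The single-point and uniform operators above can be sketched on bit-strings (function names ours; the parents, crossover site and mask are exactly those of the worked examples):

```python
def single_point(p1: str, p2: str, site: int):
    """Split both parents at the crossover site and exchange tails."""
    return p1[:site] + p2[site:], p2[:site] + p1[site:]

def uniform(p1: str, p2: str, z: str):
    """z[i] == '1': offspring 1 takes bit i from p1 (offspring 2 from p2)."""
    o1 = "".join(a if m == "1" else b for a, b, m in zip(p1, p2, z))
    o2 = "".join(b if m == "1" else a for a, b, m in zip(p1, p2, z))
    return o1, o2

p1, p2 = "1010111010", "0101001110"
s1, s2 = single_point(p1, p2, 6)        # "1010111110", "0101001010"
u1, u2 = uniform(p1, p2, "1010001110")  # "1111001010", "0000111110"
```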

        4. Mutation Operator for Binary Strings

      The mutation operator introduces small, random changes to a solution's chromosome.

      (A) Local Mutation

    • One randomly chosen bit is flipped.

    • Before: 1 0 1 0 1 1 1 0 1 0

    • After: 1 0 1 1 1 1 1 0 1 0

      (B) Global Mutation

    • Each bit is flipped independently with a given probability pm.

    • pm is also called the per-bit mutation rate; it is often set to pm = 1/n, where n is the chromosome length.

    • Before: 1 0 1 0 1 1 1 0 1 0

    • After: 1 0 0 0 1 1 1 1 1 0

    • Pr[k bits flipped] = C(n, k) · pm^k · (1 − pm)^(n − k)
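Both mutation variants can be sketched on bit-strings (function names ours; the local-mutation example flips the fourth bit, as in the text):

```python
import random

def local_mutation(chrom: str, pos: int) -> str:
    """Flip the single bit at a chosen position."""
    flipped = "1" if chrom[pos] == "0" else "0"
    return chrom[:pos] + flipped + chrom[pos + 1:]

def global_mutation(chrom: str, pm: float, rng=random.random) -> str:
    """Flip each bit independently with per-bit probability pm (often 1/n)."""
    return "".join(
        ("1" if b == "0" else "0") if rng() < pm else b for b in chrom
    )

before = "1010111010"
after_local = local_mutation(before, 3)  # "1011111010" (4th bit flipped)
```

With `pm = 1/len(chrom)`, `global_mutation` flips one bit on average, matching the common setting described above.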

      APPLICATION OF BINARY-CODED GENETIC ALGORITHM

        1. Rosenbrock Function:

          Minimize f(x1, x2, ..., xN) = Σ_{i=1}^{N−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²],

          bounds −5 ≤ xi ≤ 5, i = 1, 2, ..., N.

    • The optimal solution is x* = (1, 1, ..., 1) with f(x*) = 0.

      Example 1 (Number of variables = 2; total string length = 10)

      BGA Parameters

      Number of variables: N = 2

      Population size: 40

      Number of generations: 200

      Binary string length of x1: l1 = 5

      Binary string length of x2: l2 = 5

      Total binary string length: l = l1 + l2 = 10

      Probability of crossover: pc = 1.0

      Probability of mutation: pm = 1/l = 0.1

      Binary tournament selection operator

      Single-point crossover operator

      Bit-wise mutation operator

      (μ + λ)-strategy

      Example 2 (Number of variables = 2; total string length = 20)

      BGA Parameters

      Number of variables: N = 2

      Population size: 40

      Number of generations: 200

      Binary string length of x1: l1 = 10

      Binary string length of x2: l2 = 10

      Total binary string length: l = l1 + l2 = 20

      Probability of crossover: pc = 1.0

      Probability of mutation: pm = 1/l = 0.05

      Binary tournament selection operator

      Single-point crossover operator

      Bit-wise mutation operator

      (μ + λ)-strategy

      Example 3 (Number of variables = 2; total string length = 40)

      BGA Parameters

      Number of variables: N = 2

      Population size: 40

      Number of generations: 200

      Binary string length of x1: l1 = 20

      Binary string length of x2: l2 = 20

      Total binary string length: l = l1 + l2 = 40

      Probability of crossover: pc = 1.0

      Probability of mutation: pm = 1/l = 0.025

      Binary tournament selection operator

      Single-point crossover operator

      Bit-wise mutation operator

      (μ + λ)-strategy

      Example 4 (Number of variables = 4; total string length = 40)

      BGA Parameters

      Number of variables: N = 4

      Population size: 40

      Number of generations: 200

      Binary string length of each xi: li = 10 (i = 1, ..., 4)

      Total binary string length: l = 4 × 10 = 40

      Probability of crossover: pc = 1.0

      Probability of mutation: pm = 1/l = 0.025

      Binary tournament selection operator

      Single-point crossover operator

      Bit-wise mutation operator

      (μ + λ)-strategy

        2. Himmelblau Function:

          Minimize f(x1, x2) = (x1² + x2 − 11)² + (x1 + x2² − 7)²,

          bounds −5 ≤ x1 ≤ 5 and −5 ≤ x2 ≤ 5.

    • Multi-modal function: it has 4 minimum points.

    • First optimal solution: x* = (3, 2) with f(x*) = 0.

    • Second optimal solution: x* = (−2.805, 3.131) with f(x*) = 0.

    • Third optimal solution: x* = (−3.779, −3.283) with f(x*) = 0.

    • Fourth optimal solution: x* = (3.584, −1.848) with f(x*) = 0.

      BGA Parameters

      Number of variables: N = 2

      Population size: 60

      Number of generations: 200

      Binary string length of x1: l1 = 20

      Binary string length of x2: l2 = 20

      Total binary string length: l = l1 + l2 = 40

      Probability of crossover: pc = 1.0

      Probability of mutation: pm = 1/l = 0.025

      Binary tournament selection operator

      Single-point crossover operator

      Bit-wise mutation operator

      (μ + λ)-strategy

        3. Rastrigin Function:

          Minimize f(x1, x2, ..., xN) = 10N + Σ_{i=1}^{N} (x_i² − 10 cos(2π x_i)),

          bounds −5.12 ≤ xi ≤ 5.12, i = 1, 2, ..., N.

    • The optimal solution is x* = (0, 0, ..., 0) with f(x*) = 0.

Example 1 (Number of variables = 2; total string length = 40)

BGA Parameters

Number of variables: N = 2

Population size: 60

Number of generations: 200

Binary string length of x1: l1 = 20

Binary string length of x2: l2 = 20

Total binary string length: l = l1 + l2 = 40

Probability of crossover: pc = 1.0

Probability of mutation: pm = 1/l = 0.025

Binary tournament selection operator

Single-point crossover operator

Bit-wise mutation operator

(μ + λ)-strategy

Example 2 (Number of variables = 4; total string length = 40)

BGA Parameters

Number of variables: N = 4

Population size: 60

Number of generations: 200

Binary string length of each xi: li = 10 (i = 1, ..., 4)

Total binary string length: l = 4 × 10 = 40

Probability of crossover: pc = 1.0

Probability of mutation: pm = 1/l = 0.025

Binary tournament selection operator

Single-point crossover operator

Bit-wise mutation operator

(μ + λ)-strategy

    4. Ackley Function:

Minimize f(x1, x2) = −20 exp(−0.2 √(0.5(x1² + x2²))) − exp(0.5(cos(2πx1) + cos(2πx2))) + e + 20,

bounds −5 ≤ x1 ≤ 5 and −5 ≤ x2 ≤ 5.

  • The optimal solution is x* = (0, 0) with f(x*) = 0.
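The four benchmark functions can be written directly from their definitions (a plain Python sketch; the N-variable Rosenbrock and Rastrigin take a list of variables, while Himmelblau and Ackley use the 2-variable forms given in this paper):

```python
import math

def rosenbrock(x):
    """Sum over i of 100*(x[i+1] - x[i]^2)^2 + (1 - x[i])^2; minimum at (1,...,1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (1 - x[i]) ** 2
               for i in range(len(x) - 1))

def himmelblau(x1, x2):
    """Four global minima, e.g. (3, 2); all with value 0."""
    return (x1 ** 2 + x2 - 11) ** 2 + (x1 + x2 ** 2 - 7) ** 2

def rastrigin(x):
    """Highly multi-modal; global minimum at the origin."""
    return 10 * len(x) + sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x1, x2):
    """Nearly flat outer region with a deep hole at the origin."""
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x1 ** 2 + x2 ** 2)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x1) + math.cos(2 * math.pi * x2)))
            + math.e + 20)
```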

BGA Parameters

Number of variables: N = 2

Population size: 60

Number of generations: 200

Binary string length of x1: l1 = 20

Binary string length of x2: l2 = 20

Total binary string length: l = l1 + l2 = 40

Probability of crossover: pc = 1.0

Probability of mutation: pm = 1/l = 0.025

Binary tournament selection operator

Single-point crossover operator

Bit-wise mutation operator

(μ + λ)-strategy

RECENT ADVANCES IN OPTIMIZATION TECHNIQUES

    1. Hybrid Optimization Methods Introduction to Hybrid Optimization Methods

      Hybrid optimization methods integrate multiple optimization techniques, capitalizing on their strengths while mitigating inherent weaknesses. Typically, these approaches combine traditional algorithms with metaheuristic or machine learning strategies to enhance both the quality of solutions and the speed of convergence.

      Examples of Hybrid Optimization Methods

      Notable examples include hybrids such as Genetic Algorithm-Tabu Search, Simulated Annealing-Particle Swarm Optimization, and Genetic Algorithm-Gradient Descent combinations. These methodologies have demonstrated promising outcomes across a variety of optimization challenges.

    2. Multi-Objective Optimization Introduction to Multi-Objective Optimization

      Multi-objective optimization addresses scenarios where several conflicting objectives must be optimized simultaneously. Unlike single-objective methods that strive for one optimal solution, multi-objective strategies seek to identify a range of solutions reflecting trade-offs among competing goals.

      Methods for Multi-Objective Optimization

      Commonly utilized techniques encompass Pareto-based approaches, weighted sum methodologies, and evolutionary algorithms like NSGA-II (Non-dominated Sorting Genetic Algorithm II). Such methods empower decision makers by providing insights into trade-offs between different objectives thereby facilitating informed choices.

    3. Big Data and Optimization Integration of Big Data and Optimization

      As big data continues its rapid expansion, optimization techniques are increasingly applied to analyse vast datasets and extract meaningful insights. This area involves optimizing processes such as data storage, retrieval, and processing, which enhances overall efficiency and performance.

      Applications of Big Data Optimization

      The implications are extensive, spanning sectors such as finance, healthcare, marketing, and transportation, and enabling organizations to refine resource allocation, customer segmentation, risk management, and supply chain logistics through large-scale analytics.

    4. Optimization in Machine Learning and AI Role of Optimization in Machine Learning and AI

In machine learning, optimization is vital for training models effectively and tuning their parameters. Popular methods include gradient descent, stochastic gradient descent, and variants such as Adam and RMSprop, which are extensively employed within algorithmic frameworks.

Advancements in Optimization for AI

Recent strides in AI-driven optimization focus on more efficient algorithms, automatic hyper-parameter tuning mechanisms, and specialized optimizations tailored to distinct deep learning architectures, resulting in notable improvements in model scalability and efficacy.

Table 5.1: Applications of Optimization Techniques in Various Industries

| Industry | Optimization Techniques Used | Examples of Optimization Problems |
|---|---|---|
| Logistics | Linear Programming (LP), Integer Programming (IP), Genetic Algorithms (GA) | Vehicle routing, inventory management, warehouse layout optimization |
| Manufacturing | Linear Programming (LP), Nonlinear Programming (NLP), Simulated Annealing (SA) | Production scheduling, facility layout design, supply chain optimization |
| Healthcare | Integer Programming (IP), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) | Staff scheduling, resource allocation, patient flow optimization |
| Finance | Nonlinear Programming (NLP), Genetic Algorithms (GA), Simulated Annealing (SA) | Portfolio optimization, risk management, option pricing |
| Telecommunications | Tabu Search (TS), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO) | Network optimization, routing, spectrum allocation |

CHALLENGES AND FUTURE DIRECTIONS

    1. Challenges in Optimization

      Complexity of Optimization Problems: Many real-world optimization problems are highly complex, with nonlinearities, high dimensionality, and non-convexities that pose challenges for traditional optimization techniques.

      Scalability and Efficiency: As optimization problems become larger and more complex, scalability and efficiency become critical concerns. Traditional optimization algorithms may struggle to handle large-scale problems efficiently.

      Incorporating Uncertainty: Dealing with uncertainty and variability in optimization problems, such as uncertain parameters or dynamic environments, remains a significant challenge.

    2. Emerging Trends in Optimization Research

      Hybrid and Adaptive Optimization Methods: The development of hybrid and adaptive optimization methods that combine multiple techniques and dynamically adjust their parameters to adapt to changing problem characteristics.

      Meta-learning and Transfer Learning: Leveraging meta-learning and transfer learning techniques to transfer knowledge from one optimization problem to another, improving optimization performance and generalization.

      Integration of Optimization with AI: The integration of optimization techniques with artificial intelligence (AI) methods such as reinforcement learning and deep learning to develop more powerful and adaptive optimization algorithms.

    3. Future Prospects of Optimization in OR

Interdisciplinary Applications: Optimization techniques will continue to find applications in diverse fields such as healthcare, energy, finance, and smart cities, leading to interdisciplinary collaborations and novel optimization approaches.

Advancements in Optimization Algorithms: Ongoing research efforts will focus on developing more efficient, scalable, and robust optimization algorithms capable of handling complex real-world problems.

Integration with Emerging Technologies: Optimization will be increasingly integrated with emerging technologies such as quantum computing, edge computing, and blockchain to address new challenges and opportunities in optimization.

CONCLUSION

    1. Summary of Key Points: This paper has provided an overview of optimization techniques, focusing on the Binary-Coded Genetic Algorithm and its application to four benchmark functions. It has also discussed challenges, emerging trends, and future prospects in optimization research.

    2. Importance of Optimization Techniques in OR: Optimization techniques play a crucial role in Operations Research by enabling analysts to solve complex decision-making problems efficiently and effectively. They offer powerful tools for optimizing processes, resource allocation, and decision strategies across various domains.

    3. Potential Impact on Future Research and Practice: The ongoing advancements in optimization techniques hold the potential to revolutionize research and practice in Operations Research and related fields. By addressing current challenges, embracing emerging trends, and fostering interdisciplinary collaborations, optimization techniques can drive innovation and create new opportunities for solving complex real-world problems.

REFERENCES

[1]. John H. Holland (1975) – Adaptation in Natural and Artificial Systems

[2]. N. Srinivas & Kalyanmoy Deb (1994) Multiobjective optimization using nondominated sorting in GAs.

[3]. Jiangming Mao, Junichi Murata, Kotaro Hirasawa & Jinglu Hu (2003) Increasing Robustness of Binary-coded Genetic Algorithm. IEEJ Transactions on Electronics, Information and Systems.

[4]. Real/binary-like coded vs binary coded GAs for fuzzy knowledge base generation (2004), Engineering Applications of AI.

[5]. Yaohua He & Chi-Wai Hui (2010) A Binary Coding Genetic Algorithm for Multi-Purpose Process Scheduling,Chemical Engineering Science.

[6]. Dongli Jia, Xintao Duan & Muhammad Khurram Khan (2013). Efficient Binary Differential Evolution with Parameter Adaptation. Int. J. Comput. Intell. Syst.

[7]. Simon M. Lucas et al. (2017). Efficient Noisy Optimization with the Sliding Window Compact Genetic Algorithm. arXiv, 2017.

[8]. Avijit Basak (2021). A Memory Optimized Data Structure for Binary Chromosomes in Genetic Algorithm. arXiv, 2021.

[9]. Mariana A. Londe et al.(2023) Biased Random-Key Genetic Algorithms: A Review arXiv, 2023

[10]. Fanggidae A. et al.(2024) Self-Adaptive Simulated Binary Crossover-Elitism in GAs Int. Journal of Intelligent Systems and Applications in Engineering, 2024

[11]. A Review of Genetic Algorithm: Operations and Applications (2024), Semarak Ilmu.

[12]. Bertsimas, D., & Freund, R. M. (2016). Data, models, and decisions: The fundamentals of management science. Dynamic Ideas.

[13]. Chvátal, V. (2013). Linear programming. Courier Corporation; Hillier, F. S., & Lieberman, G. J. (2012). Introduction to Operations Research. McGraw-Hill Education.

[14]. Taha, H. A. (2016). Operations research: An introduction. Pearson Education.

[15]. Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (2013). Nonlinear programming: Theory and algorithms. John Wiley & Sons.

[16]. Goldberg, D. E. (1989). Genetic algorithms in search, optimization, and machine learning. Addison-Wesley.

[17]. Deb, K., Pratap, A., Agarwal, S., & Meyarivan, T. (2002). A fast and elitist multi-objective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2), 182-197.

[18]. Ma, Z., & Zhang, J. (2019). A survey of big data optimization: Opportunities, challenges, and methodologies. IEEE Access, 7, 135673- 135689.

[19]. Badri Vishal Padamwar, Hemant Pandey (2019). Optimization Techniques in Operations Research: A Review.Vol.10 No.1 (2019), 746- 752.