Multi-Objective Optimization Methods as a Decision Making Strategy

DOI: 10.17577/IJERTV9IS030480




Muhammad Nagy Department of Architecture Ain Shams University Cairo, Egypt.

Yasser Mansour Department of Architecture Ain Shams University Cairo, Egypt.

Sherif Abdelmohsen Department of Architecture Ain Shams University Cairo, Egypt.

Abstract—One of the most basic concepts in humankind's life is the search for an optimal state. As long as life continues, seeking perfection in decision making in many areas is fundamental. The concept of optimization is considered a main goal for a decision maker (DM): to achieve the best solution, or the most favorable set of solutions, for one or more given criteria. In fact, most real-world problems involve many correlated and often conflicting objectives that should be maximized or minimized to solve the problem; such a setting makes it harder for a DM to define all those contradicting objectives in terms of one single objective following the single-objective optimization (SOO) approach. To overcome this limitation, multi-objective optimization (MOO) has become one of the recent optimization approaches for formulating decision-making problems in a more realistic manner.

As the ultimate goal of solving MOO problems is to find the optimal set of non-dominated solutions, called the Pareto-optimal set of Pareto-optimal solutions, using classical exact, heuristic, or metaheuristic methods becomes more complicated and cannot guarantee that those optimal solutions will be found. Therefore, many methods have been developed to facilitate the process of solving MOO problems with respect to the role of the DM, owing to his authority to give further preference information concerning the Pareto-optimal solutions.

To this end, this study introduces a brief definition of MOO problem formulation, representation, and solution, including an analytical comparison of the most common MOO problem-solving methods in the literature. It is concluded that the tree of MOO methods can be classified into four main branches based on DM preference articulation: no-preference, a-priori, interactive, and a-posteriori preference articulation.

Keywords—Multi-objective optimization; no preference; a-priori; a-posteriori; interactive; Pareto-optimal; evolutionary algorithms.

  1. INTRODUCTION

    Multi-objective optimization (MOO), also called many-objective, multi-criteria, multi-attribute, multi-performance, Pareto, or vector optimization, is defined as the procedure of optimizing two or more conflicting objectives simultaneously, subject to certain constraints. Mathematically, a MOO problem is the problem of finding a vector of decision variables that satisfies a number of constraints and optimizes a vector function whose elements represent the objective functions, which are to be minimized or maximized simultaneously. These functions form a mathematical description of performance criteria that are usually in conflict with each other. Hence, the term "optimize" means finding a solution that gives values of all the objective functions acceptable to the DM.

  2. MULTI-OBJECTIVE OPTIMIZATION PROBLEM FORMULATION

    The first step in performing MOO is to formulate the problem appropriately. A MOO problem is defined by four parts: a set of decision variables, objective functions, bounds on the decision variables, and constraints. The objectives can be either minimized or maximized to find a set of optimal solutions that satisfy the constraints involved. Hence, the MOO problem is stated in its general form as follows.

    Find the design variable vector

    $\mathbf{x} = (x_1, x_2, \dots, x_n)^T$,

    which minimizes/maximizes the objective functions

    $f_m(\mathbf{x})$, $m = 1, 2, \dots, M$,

    subject to

    $n$ bound(s): $x_i^{(L)} \le x_i \le x_i^{(U)}$, $i = 1, 2, \dots, n$;

    $J$ inequality constraint(s): $g_j(\mathbf{x}) \ge 0$, $j = 1, 2, \dots, J$;

    $K$ equality constraint(s): $h_k(\mathbf{x}) = 0$, $k = 1, 2, \dots, K$.

    Referring to this statement of the MOO problem, the objective functions, or their objective values $f_m(\mathbf{x})$, $m = 1, 2, \dots, M$, form an objective vector $\mathbf{f}(\mathbf{x})$, which can be written as

    $\mathbf{f}(\mathbf{x}) = \{ f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_M(\mathbf{x}) \}$,

    or, more conveniently, as

    $\mathbf{f}(\mathbf{x}) = [ f_1(\mathbf{x}), f_2(\mathbf{x}), \dots, f_M(\mathbf{x}) ]^T$,

    where $T$ indicates the transposition of the column vector into the row vector.

    Two Euclidean spaces are considered in MOO problems: the design (also decision, state, or search) space, the $n$-dimensional real coordinate space $\mathbb{R}^n$ of the design variables, in which each coordinate axis corresponds to a component of the vector $\mathbf{x}$; and the objective (also solution or response) space, the $M$-dimensional real coordinate space $\mathbb{R}^M$ formed by the objective function values, in which each coordinate axis corresponds to a component of the vector $\mathbf{f}(\mathbf{x})$ [1].

    Each solution $\mathbf{x}$ in the first space maps to a point $\mathbf{z}$ in the second space, denoted by $\mathbf{z} = \mathbf{f}(\mathbf{x}) = (z_1, z_2, \dots, z_M)^T$, which determines the quality of this solution in terms of the objective function values. We refer to a solution as a design variable vector $\mathbf{x}$ and to a point as the corresponding objective vector $\mathbf{z}$.

    The solutions satisfying both the constraints and the variable bounds constitute the feasible decision variable space $S \subset \mathbb{R}^n$ and its corresponding feasible objective space $Z \subset \mathbb{R}^M$.
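    The general formulation above can be written down directly as plain functions. The following is a minimal Python sketch, assuming a hypothetical two-variable, two-objective problem with one inequality constraint; the names (f1, f2, g1, bounds) are illustrative and not taken from the paper.

```python
# A minimal sketch of a MOO problem definition in Python, assuming a
# hypothetical two-variable, two-objective problem (names are illustrative).

def f1(x):
    # first objective, to be minimized
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    # second objective, to be minimized
    return (x[0] - 2) ** 2 + x[1] ** 2

def g1(x):
    # inequality constraint g_1(x) >= 0
    return x[0] + x[1] - 0.5

bounds = [(-2.0, 4.0), (-2.0, 2.0)]   # x_i^(L) <= x_i <= x_i^(U)

def is_feasible(x):
    in_bounds = all(lo <= v <= hi for v, (lo, hi) in zip(x, bounds))
    return in_bounds and g1(x) >= 0

def objective_vector(x):
    # maps a design vector x to its point z = f(x) in the objective space
    return (f1(x), f2(x))

print(is_feasible([1.0, 0.0]), objective_vector([1.0, 0.0]))
```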

  3. MULTI-OBJECTIVE OPTIMIZATION PROBLEM REPRESENTATION

    In MOO, where there is more than one objective, each objective has its own fitness landscape, as variations in objective values with changes in design variable values are likely to be different for different objectives (see Fig.1). For many problems, objectives compete with each other, so that solutions that improve values of one objective might worsen values in another. Therefore, it is less clear which solutions are better than others, as the solution that results in the highest peak in the fitness landscape for one objective might result in the lowest peak in the fitness landscape for the other objective and vice versa. In such cases, the optimality of a solution is determined using the concept of dominance [2].

  4. MULTI-OBJECTIVE OPTIMIZATION PROBLEM SOLUTION

    In real life, it is rare to find a single solution that simultaneously optimizes all objective functions of a MOO problem. Therefore, the definition of optimality has a different meaning in this case. Solving a MOO problem means finding good compromises, or trade-offs, among the conflicting objective functions in an optimal manner, rather than finding a single solution.

Fig. 1. Illustration of the relationship between (a) the fitness landscape of objective 1, (b) the fitness landscape of objective 2, and (c) the Pareto frontier

Fig. 2. Graphical depiction (mapping) of (a) a decision space onto (b) an objective space, where both objectives are to be minimized

From this front, the DM (be it a human or an algorithm) can finally choose the configurations that, in his opinion, suit the problem best.

Mathematically, the most commonly adopted definition of optimality for MOO problems is the one originally proposed by Francis Ysidro Edgeworth and later generalized by Vilfredo Pareto, under the term Pareto optimality [3]. Unlike SOO problems, where the superiority (optimality) of a solution over other solutions is easily determined by comparing their objective function values, in MOO problems the goodness (optimality) of a solution is determined by the concept of dominance. For a given set of solutions in the design space, where the set of all possible combinations of design variables exists, the non-dominated solution set is the set of all solutions that are not dominated by any other member of the solution set. The non-dominated set of the entire feasible design space is called the Pareto-optimal set, which represents the complete set of Pareto-optimal solutions. Pareto-optimal solutions are trade-off solutions for which any improvement in one objective results in the worsening of at least one other objective. The boundary defined by the set of all points (Pareto-optimal points) mapped from the Pareto-optimal set into the feasible objective space is called the Pareto-optimal front, or Pareto frontier. These terms are detailed in the following subsections.

    1. Pareto Dominance

      The concept of domination is illustrated in Fig. 3. A solution $\mathbf{x}^{(1)}$ is said to dominate another solution $\mathbf{x}^{(2)}$ (denoted by $\mathbf{x}^{(1)} \preceq \mathbf{x}^{(2)}$) if both of the following conditions are true:

      • The solution $\mathbf{x}^{(1)}$ is no worse than $\mathbf{x}^{(2)}$ in all objectives. Thus, the solutions are compared based on their objective function values (or the locations of the corresponding points $\mathbf{z}^{(1)}$ and $\mathbf{z}^{(2)}$ in the objective space).

      • The solution $\mathbf{x}^{(1)}$ is strictly better than $\mathbf{x}^{(2)}$ in at least one objective.

        For a minimization problem, this definition can be represented mathematically as follows: $\mathbf{x}^{(1)} \preceq \mathbf{x}^{(2)}$ if and only if $f_m(\mathbf{x}^{(1)}) \le f_m(\mathbf{x}^{(2)})$ for all $m = 1, 2, \dots, M$, and $f_m(\mathbf{x}^{(1)}) < f_m(\mathbf{x}^{(2)})$ for at least one $m$.

        Fig. 3. The concept of domination illustrated in a minimization problem of two objective functions
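        The dominance test above translates directly into a few lines of code. Below is a minimal Python sketch for the minimization case; the function name and example vectors are illustrative, not from the paper.

```python
# A minimal sketch of the Pareto-dominance test described above, assuming
# minimization of all objectives.

def dominates(f1, f2):
    """Return True if objective vector f1 dominates f2 (minimization)."""
    no_worse_everywhere = all(a <= b for a, b in zip(f1, f2))
    strictly_better_somewhere = any(a < b for a, b in zip(f1, f2))
    return no_worse_everywhere and strictly_better_somewhere

# Example: (1.0, 4.0) dominates (2.0, 5.0) but not (0.5, 6.0).
print(dominates([1.0, 4.0], [2.0, 5.0]))  # True
print(dominates([1.0, 4.0], [0.5, 6.0]))  # False
```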

    2. Pareto Optimal Solution

      The phrase "Pareto-optimal solution" is taken to mean optimality with respect to the entire feasible design variable space unless otherwise specified. In other words, $\mathbf{x}^*$ is called a Pareto-optimal solution if there exists no feasible vector $\mathbf{x}$ that would improve some objective without causing a simultaneous worsening of at least one other objective.

      Formally, a solution $\mathbf{x}^* \in S$ is a Pareto-optimal solution with respect to $S$ if and only if there is no $\mathbf{x} \in S$ for which $\mathbf{f}(\mathbf{x})$ dominates $\mathbf{f}(\mathbf{x}^*)$.

    3. Pareto Optimal Set

      Pareto-optimal solutions are those solutions within the design space whose corresponding objective vector components cannot all be simultaneously improved. These solutions are also termed non-inferior, admissible, or efficient solutions, and the entire set of them is represented by $P^*$. Their corresponding objective vectors are termed non-dominated; selecting a vector (or vectors) from this set of non-dominated vectors (the Pareto front set $PF^*$) implicitly indicates an acceptable Pareto-optimal solution. These solutions may have no apparent relationship besides their membership in the Pareto-optimal set; they are classified as Pareto-optimal purely on the basis of their evaluated objective function values. Mathematically, for a given MOO problem with objective vector $\mathbf{f}(\mathbf{x})$, the Pareto-optimal set $P^*$ is defined as $P^* = \{\, \mathbf{x} \in S \mid \nexists\, \mathbf{x}' \in S : \mathbf{f}(\mathbf{x}') \preceq \mathbf{f}(\mathbf{x}) \,\}$.

    4. Pareto Frontier

      The non-dominated vectors of the entire Pareto-optimal set are collectively known as the Pareto-optimal front, or Pareto frontier, when they are plotted in the objective space. In general, it is not easy to find an analytical expression for the line or surface that contains these Pareto frontier points, and in most cases it turns out to be impossible.

      The normal procedure for generating the Pareto frontier is therefore to compute many solutions in $S$ and their corresponding points in $Z$. Once there is a sufficient number of these, it is possible to determine the non-dominated points and to produce the Pareto frontier.

      Mathematically, for a given MOO problem with objective vector $\mathbf{f}(\mathbf{x})$ and Pareto-optimal set $P^*$, the Pareto frontier $PF^*$ is defined as $PF^* = \{\, \mathbf{f}(\mathbf{x}) \mid \mathbf{x} \in P^* \,\}$.
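      The sampling procedure just described can be sketched in a few lines of Python. The two objective functions and the variable bounds below are illustrative assumptions, not from the paper.

```python
# Sample many candidate solutions, evaluate the objectives, and keep only the
# non-dominated points as an approximation of the Pareto frontier.
import random

def objectives(x1, x2):
    return (x1 ** 2 + x2 ** 2, (x1 - 2) ** 2 + x2 ** 2)

def dominates(a, b):
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

# Sample the design space within simple variable bounds.
points = [objectives(random.uniform(-2, 4), random.uniform(-2, 2)) for _ in range(500)]

# A point belongs to the approximate front if no sampled point dominates it.
front = [p for p in points if not any(dominates(q, p) for q in points)]
print(len(front), "non-dominated points approximate the Pareto frontier")
```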

    5. Weak Pareto Optimality

      A solution $\mathbf{x}^* \in S$ is weakly Pareto-optimal if there is no $\mathbf{x} \in S$ such that $f_m(\mathbf{x}) < f_m(\mathbf{x}^*)$ for all $m = 1, 2, \dots, M$.

    6. Strict Pareto Optimality

      A solution $\mathbf{x}^* \in S$ is strictly Pareto-optimal if there is no $\mathbf{x} \in S$, $\mathbf{x} \neq \mathbf{x}^*$, such that $f_m(\mathbf{x}) \le f_m(\mathbf{x}^*)$ for all $m = 1, 2, \dots, M$.

    7. Special Solutions and Points

    There are some related definitions of special solutions and points which are often used in MOO algorithms. The following subsections briefly discuss them.

    1. Ideal Design Vector & Ideal Objective Vector

      For each of the M conflicting objective functions there exists a different individual optimal solution. An objective vector constructed from these individual optimal objective values constitutes the ideal objective vector $\mathbf{z}^*$.

      Let $\mathbf{x}^{*(m)}$ be the vector of design variables that optimizes (either minimizes or maximizes) the m-th objective function $f_m(\mathbf{x})$; this vector is called the ideal solution or ideal design vector for that objective. To determine it, the minimum (or maximum) attainable value of each objective function must be found separately. Assuming minimization, the minimum of each m-th objective function is found as $z_m^* = f_m(\mathbf{x}^{*(m)}) = \min_{\mathbf{x} \in S} f_m(\mathbf{x})$. Then the vector $\mathbf{z}^* = (z_1^*, z_2^*, \dots, z_M^*)^T$ is the ideal objective vector for the MOO problem.

      In general, $\mathbf{z}^*$ corresponds to a non-existent solution, because the minimum of each objective function need not occur at the same solution. It is used as a reference point: solutions closer to it are considered better. Moreover, many algorithms require knowledge of the lower bound of each objective function in order to normalize objective values to a common range.

      The only case in which $\mathbf{z}^*$ corresponds to a feasible solution is when the minimal solutions of all objective functions are identical. In that case the objectives do not conflict with each other, and the minimum solution of any one objective function is the single optimal solution of the MOO problem [4].
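      Computing the ideal objective vector therefore amounts to M independent single-objective optimizations. Below is a minimal sketch using SciPy; the two objectives, bounds, and starting point are illustrative assumptions, not from the paper.

```python
# Compute the ideal objective vector z* by minimizing each objective separately.
from scipy.optimize import minimize

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2) ** 2 + x[1] ** 2

bounds = [(-2.0, 4.0), (-2.0, 2.0)]   # variable bounds x_i^(L) <= x_i <= x_i^(U)
x0 = [0.5, 0.5]                        # arbitrary starting point

# Minimize each objective on its own; the collected minima form z*.
ideal = [minimize(f, x0, bounds=bounds).fun for f in (f1, f2)]
print("ideal objective vector z* ~", ideal)   # approximately [0.0, 0.0]
```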

    2. Utopian Objective Vector

      The ideal objective vector denotes an array of the lower bounds of all objective functions. This means that for every objective function there exists at least one solution in the feasible search space sharing an identical value with the corresponding element of the ideal objective vector. Some algorithms, however, may require a reference point with objective values strictly better than (and not equal to) those of any solution in the design space. For this purpose, the utopian objective vector $\mathbf{z}^{**}$ is defined as follows.

      A utopian objective vector $\mathbf{z}^{**}$ has each of its components marginally smaller than that of the ideal objective vector, i.e., $z_i^{**} = z_i^* - \epsilon_i$ with $\epsilon_i > 0$ for all $i = 1, 2, \dots, M$.

    3. Nadir Objective Vector

    Unlike the ideal objective vector, which represents the lower bound of each objective over the entire feasible search space, the nadir objective vector $\mathbf{z}^{nad}$ represents the upper bound of each objective over the Pareto-optimal set only, not over the entire search space. The nadir objective vector must not be confused with the worst objective vector (marked as W in the figure), which is found by using the worst feasible function values in the entire search space.

    Although the ideal objective vector is easy to compute (except for complex multimodal objective problems), the nadir objective vector is difficult to compute in practice. However, for well-behaved problems it can be estimated from the individual optima by the payoff table method. For two objectives, if $(f_1(\mathbf{x}^{*(1)}), f_2(\mathbf{x}^{*(1)}))$ and $(f_1(\mathbf{x}^{*(2)}), f_2(\mathbf{x}^{*(2)}))$ are the coordinates in the objective space of the minimum solutions of $f_1$ and $f_2$, respectively, then the nadir vector can be estimated as $\mathbf{z}^{nad} = (f_1(\mathbf{x}^{*(2)}),\, f_2(\mathbf{x}^{*(1)}))^T$.

    The nadir objective vector may correspond to an existent or a non-existent solution, depending on the convexity and continuity of the Pareto-optimal set. In order to normalize each objective over the entire range of the Pareto-optimal region, the knowledge of the nadir and ideal objective vectors can be used as follows: $f_m^{\mathrm{norm}}(\mathbf{x}) = \dfrac{f_m(\mathbf{x}) - z_m^*}{z_m^{nad} - z_m^*}$.

    Fig. 4. Multi-objective optimization methods


    Fig. 5. Multi-objective optimization methods classified based on the existence of DM preferences

  5. MULTI-OBJECTIVE OPTIMIZATION METHODS

    The ultimate goal of solving a MOO problem is to identify Pareto-optimal solutions. To achieve this goal, exact, heuristic, or metaheuristic methods become more complicated to use and cannot guarantee that those optimal solutions will be found. As the DM is the person who can give further preference information concerning the Pareto-optimal solutions, many approaches have been developed around his role to facilitate the process of solving MOO problems. Fig. 5 divides them into four main categories [5] [6] [7].

    1. No-preference Articulation Methods

      Methods with no articulation of preference information are used only when the DM is not available, such as in online optimization, where tasks need to be solved quickly, within a time span of ten milliseconds to a few minutes [8]. These methods compute only one solution.

      An example of these methods is the so-called global criterion method that aims to minimize the distance to the ideal objective vector.

      Different metrics can be used in this evaluation, such as the $L_p$-metric. The $L_p$-metric, $1 \le p \le \infty$, is a norm metric on $\mathbb{R}^M$, defined by $d_p(\mathbf{z}, \mathbf{z}^*) = \lVert \mathbf{z} - \mathbf{z}^* \rVert_p$, where the $p$-norm is defined by $\lVert \mathbf{u} \rVert_p = \big( \sum_{m=1}^{M} |u_m|^p \big)^{1/p}$. The method thus solves the single-objective problem

      Minimize $\Big( \sum_{m=1}^{M} \big| f_m(\mathbf{x}) - z_m^* \big|^p \Big)^{1/p}$, subject to $\mathbf{x} \in S$,

      where:

      • When $p = \infty$, it is a maximum (Tchebycheff) metric and the problem becomes a non-smooth optimization problem.

      • When $1 \le p < \infty$, the solution obtained is a Pareto-optimal solution.

      • When $p = \infty$, the solution obtained is guaranteed only to be weakly Pareto-optimal.
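      The global criterion idea reduces to a single scalar minimization, as in the short Python sketch below. The objectives, bounds, ideal point values, and choice of $p = 2$ are illustrative assumptions, not from the paper.

```python
# Global criterion method: minimize the L_p distance to the ideal vector z*.
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2) ** 2 + x[1] ** 2

z_star = np.array([0.0, 0.0])          # ideal objective vector (computed beforehand)
bounds = [(-2.0, 4.0), (-2.0, 2.0)]

def global_criterion(x, p=2):
    f = np.array([f1(x), f2(x)])
    return np.sum(np.abs(f - z_star) ** p) ** (1.0 / p)

res = minimize(global_criterion, x0=[0.5, 0.5], bounds=bounds)
print("compromise solution x:", res.x, "objectives:", f1(res.x), f2(res.x))
```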

    2. A-Priori Preference Articulation Method

      This approach is based on a prior articulation of preferences. It requires the DM to settle on the relative importance (weight) of each objective. The preferences of the DM are then used in a technique called scalarization, i.e., aggregation of the multiple objectives, which converts the MOO problem into a SOO problem that can be solved using a variety of solution approaches.

      However, in many applications it is difficult for the DM to devise a satisfactory weighting scheme for scalarization, because some features of a problem are not fully understood during the early stages of decision making. Moreover, some objectives may be formulated in complex mathematical forms that are difficult for non-analysts to understand. Indeed, some objectives are qualitative, such as aesthetic, cultural, or other criteria that are expensive to quantify.

      The weighted sum method (shown in Fig. 6) is one example of this approach. It is based on scalarizing the set of objectives into a single objective by adding each objective pre-multiplied by a weight $w_m$ supplied by the DM; the weight of an objective is chosen in proportion to the relative importance of that objective. The weights $w_m$ represent the relative importance of the objectives as seen by the DM: if $w_i > w_j$, the DM values improvement in objective $f_i$ more than in objective $f_j$. A short code sketch of this scalarization is given after the problem statement below.

      Fig. 6. Weighted sum method for solving a minimization problem of two functions

      Minimize

      $F(\mathbf{x}) = \sum_{m=1}^{M} w_m f_m(\mathbf{x})$,

      where

      $w_m \ge 0$, $\sum_{m=1}^{M} w_m = 1$,

      subject to

      $n$ bound(s): $x_i^{(L)} \le x_i \le x_i^{(U)}$, $i = 1, 2, \dots, n$;

      $J$ inequality constraint(s): $g_j(\mathbf{x}) \ge 0$, $j = 1, 2, \dots, J$;

      $K$ equality constraint(s): $h_k(\mathbf{x}) = 0$, $k = 1, 2, \dots, K$.
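      As referenced above, the following is a minimal Python sketch of the weighted sum scalarization; the objectives, bounds, and weight values are illustrative assumptions, not from the paper.

```python
# Weighted sum scalarization solved as a single-objective problem with SciPy.
from scipy.optimize import minimize

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2) ** 2 + x[1] ** 2

bounds = [(-2.0, 4.0), (-2.0, 2.0)]
w1, w2 = 0.7, 0.3                      # DM-supplied weights, w1 + w2 = 1

def weighted_sum(x):
    return w1 * f1(x) + w2 * f2(x)

res = minimize(weighted_sum, x0=[0.5, 0.5], bounds=bounds)
print("Pareto-optimal (for these weights) x:", res.x)

# Sweeping the weights (e.g. w1 from 0 to 1) traces out different points on a
# convex Pareto front; concave regions cannot be reached this way.
```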

    3. Interactive (Progressive) Preference Articulation Method

      This approach is based on an interactive (or progressive) articulation of preferences, in which the DM's preferences are refined and incorporated into the search process. During each iteration, the DM is presented with a (typically small) subset of non-dominated solutions and, based on these solutions, provides local information about his preferences for the objectives. A SOO problem is then formulated and solved. The solutions to this problem are used by the DM to improve his understanding of the problem and to adjust his preferences, which are then used to form a new problem. This process repeats until the DM is satisfied.

      Although these methods can be used to encourage the participation of the DM, there is no guarantee that an acceptable solution will be reached, and even if such a solution is identified, it cannot be guaranteed to be Pareto-optimal.

      Interactive decision making has become popular during the past several decades, and a large number of methods have been developed in the literature. This is partly due to the more intensive involvement of the DM in the process of searching for alternative solutions, which can lead to more satisfactory final decisions (compared to prior approaches). However, interactive methods are often based on the generation of a small number of alternatives and may therefore overlook important non-dominated solutions. Popular examples of this approach are the satisficing trade-off method and the method of Steuer.
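      The iterative loop described above can be sketched very simply. The following Python sketch uses a plain weight-based scalarization as a stand-in for whatever single-objective subproblem an interactive method would formulate; it is an illustrative assumption, not the specific methods named in the text.

```python
# A minimal interactive loop: solve a scalarized subproblem with the DM's
# current weights, show the result, and let the DM accept or adjust.
from scipy.optimize import minimize

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 2) ** 2 + x[1] ** 2

bounds = [(-2.0, 4.0), (-2.0, 2.0)]
weights = [0.5, 0.5]                         # initial preferences

while True:
    res = minimize(lambda x: weights[0] * f1(x) + weights[1] * f2(x),
                   x0=[0.5, 0.5], bounds=bounds)
    print("candidate x:", res.x, "objectives:", (f1(res.x), f2(res.x)))
    if input("Satisfied? (y/n) ").strip().lower() == "y":
        break                                # most preferred solution found
    w1 = float(input("New weight for objective 1 (0..1): "))
    weights = [w1, 1.0 - w1]                 # DM adjusts preferences
```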

    4. A-Posteriori Preference Articulation Method

      This approach is based on a posterior articulation of preferences. It does not require the intensive participation of the DM during the generation of alternatives. Instead, it relies on methods that can generate a diverse set of Pareto-optimal solutions evenly distributed along the Pareto frontier; these solutions are subsequently presented to the DM, who makes a final decision about the problem by examining the alternatives and negotiating their merits.

    The major difficulties in applying this approach are twofold. First, it is not easy to develop solution methods that can effectively generate the Pareto frontier. Traditionally, prior articulation methods have been used to generate Pareto-optimal solutions by systematically adjusting the associated parameters (e.g., preference weights) in order to yield different solutions. The problem with this approach, however, is that it may overlook important solutions, especially when the Pareto front contains concave and/or discontinuous sections. Second, the existence of a large number of non-dominated solutions imposes a substantial cognitive burden on the DM, who must somehow select a solution from the multitude of alternatives.

    An important advantage of the posterior approach is also clear: a full representation of the Pareto front can present the true multi-objective structure of the problem, which may lead to a better decision. If an intuitive and user-friendly decision support tool is available, stakeholders can concentrate on examining and negotiating tradeoffs among interesting solutions. This approach may also foster a wider participation from stakeholders because they may be able to find niche solutions that are beneficial to their view of a problem and its resolution.

    Multi-objective evolutionary algorithms (MOEAs) are among the most widely used examples of a-posteriori preference articulation. MOEAs are often classified into three main groups: aggregation-based, dominance-based, and performance-indicator-based algorithms. Aggregation-based algorithms decompose the MOO problem into a number of single-objective sub-problems and solve them simultaneously. One of the commonly used aggregation-based MOEAs is MOEA/D.

    Dominance-based algorithms are the most common type of MOEA in the literature and are based on Pareto dominance-based evaluation of individuals. A commonly used dominance-based MOEA is NSGA-II. Finally, indicator-based algorithms use an indicator function (a measure of the area dominated by the obtained front) to gauge the quality of the population in an MOEA.

    Over the past few decades, MOEAs have been increasingly used in practice because of their advantages: a set of representative Pareto-optimal solutions can be obtained in a single run, and multiple local, discrete, and non-convex Pareto-optimal fronts as well as different types of variables, objective functions, and constraints can be handled easily. However, they are equally criticized for their lack of a convergence proof, for being computationally expensive, and for requiring a number of algorithm parameters to be set, such as the population size.
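    As an illustration of an a-posteriori workflow, the sketch below runs NSGA-II on a standard benchmark problem using the third-party pymoo library. This is a hedged example: it assumes pymoo is installed and follows pymoo 0.6-style module paths, which may differ in other versions, and "zdt1" is simply a common benchmark, not a problem used in this paper.

```python
# Approximate a Pareto front with the dominance-based MOEA NSGA-II (pymoo).
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")          # standard bi-objective benchmark
algorithm = NSGA2(pop_size=100)        # dominance-based MOEA

res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)

# res.X holds the non-dominated design vectors, res.F their objective values;
# the DM inspects this approximated front and picks the preferred trade-off.
print(res.F.shape)
```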

  6. CONCLUSION

    Through this brief overview of the methods available in the MOO literature, this research identifies four main classifications of those methods, each presented with a brief explanation of its definition, idea, benefits, drawbacks, and examples. Table I provides a summary of the four classifications. This can give researchers and practitioners an overview of the field of MOO as a whole and thereby promote its utilization in industry.

    TABLE I. SUMMARY OF MOO METHODS CLASSIFIED BASED ON DM PREFERENCE ARTICULATION

    No-preference Methods
    Definition: Methods used when the DM is not available.
    Idea: Compute some Pareto-optimal (PO) solution.
    Benefits: Fast, as one PO solution is enough and no communication with the DM is needed.
    Drawbacks: Do not take into account which problem is being solved.
    Example: Global criterion method.

    A-priori Methods
    Definition: Methods where the DM articulates preferences before optimization.
    Idea: Ask first for the preferences of the DM, then optimize using those preferences; only PO solutions that are of interest to the DM are produced.
    Benefits: The computed PO solutions are based on the preferences of the DM (no unnecessary solutions).
    Drawbacks: It may be difficult for the DM to express preferences before having seen any solutions.
    Example: Weighted sum method.

    Interactive (Progressive) Methods
    Definition: Methods that allow the DM to guide the search by alternating optimization and preference articulation iteratively.
    Idea: The solution process is iterative: (1) initialization: compute some PO solution(s); (2) show the PO solution(s) to the DM; (3) if the DM is satisfied, stop (a most preferred solution has been found), otherwise ask the DM for new preferences; (4) compute new PO solution(s) taking the new preferences into account, and go to step 2. The process ends when the DM is satisfied with the PO solution obtained.
    Benefits: Only solutions of interest to the DM are computed; the DM is able to steer the solution process with his/her preferences; the DM can learn about the interdependences between the conflicting objectives through the solutions obtained, which helps in adjusting the preferences.
    Drawbacks: The DM has to invest a lot of time in the solution process; a high degree of DM involvement is needed; if computing PO solutions takes time, the DM may not remember what happened in the early phases.
    Example: Satisficing trade-off method.

    A-posteriori Methods
    Definition: Methods where the DM articulates preferences after optimization.
    Idea: Compute different PO solutions, then let the DM select the most preferred one.
    Benefits: An approximation of the PO set (or part of it) is provided; well suited to problems with two objectives, since the PO solutions can easily be visualized for the DM; gives an understanding of the whole PO set.
    Drawbacks: Approximating the PO set is often time consuming; the DM has to choose the most preferred solution from among a large number of solutions; visualizing the solutions is difficult for a high number of objectives.
    Example: MOEAs.

  7. REFERENCES

  1. K. Deb, "Multi-objective optimization using evolutionary algorithms: an introduction," KanGAL, Indian Institute of Technology Kanpur, February 2011.

  2. H. R. Maier et al., "Introductory overview: optimization using evolutionary algorithms and other metaheuristics," Environmental Modelling & Software, vol. 114, pp. 195-213, April 2019.

  3. A. Abraham and L. Jain, Evolutionary Multiobjective Optimization, Advanced Information and Knowledge Processing (AI&KP), Springer, London, 2005.

  4. K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley-Interscience Series in Systems and Optimization, John Wiley & Sons, Ltd., England, 2001.

  5. M. Annelies De, O. Catheline, O. Jos Van, and C. Dirk, "Towards sustainable biomass-for-bioenergy supply chains by trading off between multiple objectives," Young Researchers Conference - Biomass, World Sustainable Energy Days, Wels, Austria, 2005.

  6. D. W. O. Ladislas, "Multiobjective optimization: history and promise," Third China-Japan-Korea Joint Symposium on Optimization of Structural and Mechanical Systems, vol. 2, January 2004.

  7. C. Judyta and B. W. Neil, "Multicriteria optimization in architectural design: goal-oriented methods and computational morphogenesis," in Shapes of Logic: Everything That Surrounds Us Can Be Described, Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław, Poland, 2016.

  8. T. Weise, Global Optimization Algorithms: Theory and Application, self-published, 2009.
