<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.1 20151215//EN" "http://jats.nlm.nih.gov/publishing/1.1/JATS-journalpublishing1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xml:lang="en" article-type="review-article" dtd-version="1.1">
<front>
<journal-meta>
<journal-id journal-id-type="pmc">CMC</journal-id>
<journal-id journal-id-type="nlm-ta">CMC</journal-id>
<journal-id journal-id-type="publisher-id">CMC</journal-id>
<journal-title-group>
<journal-title>Computers, Materials &#x0026; Continua</journal-title>
</journal-title-group>
<issn pub-type="epub">1546-2226</issn>
<issn pub-type="ppub">1546-2218</issn>
<publisher>
<publisher-name>Tech Science Press</publisher-name>
<publisher-loc>USA</publisher-loc>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">68087</article-id>
<article-id pub-id-type="doi">10.32604/cmc.2025.068087</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>A Review of the Evolution of Multi-Objective Evolutionary Algorithms</article-title>
<alt-title alt-title-type="left-running-head">A Review of the Evolution of Multi-Objective Evolutionary Algorithms</alt-title>
<alt-title alt-title-type="right-running-head">A Review of the Evolution of Multi-Objective Evolutionary Algorithms</alt-title>
</title-group>
<contrib-group>
<contrib id="author-1" contrib-type="author" corresp="yes">
<name name-style="western"><surname>Hanne</surname><given-names>Thomas</given-names></name><xref ref-type="aff" rid="aff-1">1</xref><email>thomas.hanne@fhnw.ch</email></contrib>
<contrib id="author-2" contrib-type="author">
<name name-style="western"><surname>Moghaddam</surname><given-names>Mohammad Jahani</given-names></name><xref ref-type="aff" rid="aff-2">2</xref></contrib>
<aff id="aff-1"><label>1</label><institution>Institute for Information Systems, University of Applied Sciences and Arts Northwestern Switzerland</institution>, <addr-line>Olten, 4600</addr-line>, <country>Switzerland</country></aff>
<aff id="aff-2"><label>2</label><institution>Department of Electrical Engineering, Lan.C., Islamic Azad University</institution>, <addr-line>Langarud, 4471311127</addr-line>, <country>Iran</country></aff>
</contrib-group>
<author-notes>
<corresp id="cor1"><label>&#x002A;</label>Corresponding Author: Thomas Hanne. Email: <email>thomas.hanne@fhnw.ch</email></corresp>
</author-notes>
<pub-date date-type="collection" publication-format="electronic">
<year>2025</year>
</pub-date>
<pub-date date-type="pub" publication-format="electronic">
<day>23</day><month>10</month><year>2025</year>
</pub-date>
<volume>85</volume>
<issue>3</issue>
<fpage>4203</fpage>
<lpage>4236</lpage>
<history>
<date date-type="received">
<day>20</day>
<month>5</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>01</day>
<month>9</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>&#x00A9; 2025 The Authors.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Published by Tech Science Press.</copyright-holder>
<license xlink:href="https://creativecommons.org/licenses/by/4.0/">
<license-p>This work is licensed under a <ext-link ext-link-type="uri" xlink:type="simple" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</ext-link>, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.</license-p>
</license>
</permissions>
<self-uri content-type="pdf" xlink:href="TSP_CMC_68087.pdf"></self-uri>
<abstract>
<p>Multi-Objective Evolutionary Algorithms (MOEAs) have significantly advanced the domain of Multi-Objective Optimization (MOO), facilitating solutions for complex problems with multiple conflicting objectives. This review explores the historical development of MOEAs, beginning with foundational concepts in multi-objective optimization, basic types of MOEAs, and the evolution of Pareto-based selection and niching methods. Further advancements, including decomposition-based approaches and hybrid algorithms, are discussed. Applications are analyzed in established domains such as engineering and economics, as well as in emerging fields like advanced analytics and machine learning. The significance of MOEAs in addressing real-world problems is emphasized, highlighting their role in facilitating informed decision-making. Finally, the development trajectory of MOEAs is compared with evolutionary processes, offering insights into their progress and future potential.</p>
</abstract>
<kwd-group kwd-group-type="author">
<kwd>Multi-objective optimization</kwd>
<kwd>evolutionary algorithms</kwd>
<kwd>Pareto-based selection</kwd>
<kwd>decomposition-based methods</kwd>
<kwd>advanced analytics</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<label>1</label>
<title>Introduction</title>
<sec id="s1_1">
<label>1.1</label>
<title>The Significance of Multi-Objective Optimization (MOO)</title>
<p>Real-world problems rarely involve optimizing a single objective. Zeleny&#x2019;s statement (1982): &#x201C;Multiple objectives are all around us&#x201D; [<xref ref-type="bibr" rid="ref-1">1</xref>] reflects the ubiquity and importance of considering multiple criteria, goals, or objectives in our daily lives. Decision-making often must account for conflicting objectives, necessitating trade-offs. For instance, in automotive design, reducing fuel consumption and emissions often conflicts with cost and performance. Similarly, in supply chain optimization, minimizing costs can conflict with maximizing customer satisfaction. These complexities have led to the emergence of MOO as a crucial area of research. It wasn&#x2019;t until the early 1970s that formal approaches to considering multiple objectives in decision-making processes were widely studied. A review of the historic development of this field is provided by [<xref ref-type="bibr" rid="ref-2">2</xref>]. Since the 1970s, specific conferences, newsletters, journals, and academic societies have been established for this new research area.</p>
<p>The foundation of MOO is the concept of Pareto-optimality, introduced by Vilfredo Pareto in the early 20th century. A solution is Pareto-optimal if improving one objective necessitates the degradation of at least one other. However, formal methods for MOO gained prominence only in the 1970s with the development of structured approaches to multi-criteria decision-making (MCDM) or multi-criteria decision analysis (MCDA) [<xref ref-type="bibr" rid="ref-1">1</xref>,<xref ref-type="bibr" rid="ref-2">2</xref>]. These methods have since evolved to address diverse applications in engineering, economics, and beyond.</p>
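<p>The dominance relation underlying Pareto-optimality can be made concrete with a short sketch (illustrative code, not part of the original literature; function names are ours). For maximization, a solution dominates another if it is at least as good in every objective and strictly better in at least one; the Pareto-optimal solutions are exactly those dominated by no other:</p>

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    a is at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated (Pareto-optimal) subset of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Example: (2, 3) is dominated by (2, 4); the other three points are mutually
# non-dominated and together form the Pareto front of this small set.
front = pareto_front([(1, 5), (2, 4), (3, 1), (2, 3)])
```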
<p>In general, two primary research directions can be identified: The first focuses on determining viable solutions to a MOO problem. This inquiry led to the development of Pareto-optimal solutions (also known as Pareto-efficient or simply efficient) and methods for identifying such solutions across various problem types. Given that there is typically no single, definitive Pareto-optimal solution, a second question emerges: How can we assist a decision-maker in choosing an alternative from those that are mathematically sound (i.e., the Pareto-optimal solutions)? This question spawned numerous approaches that incorporate additional information, particularly regarding a decision maker&#x2019;s preferences, into the solution process. While this second question involves considering the availability and suitability of relevant information, aspects of rational decision-making, psychological factors, and the user-friendliness of methods, the first question deals with optimization in potentially complex problems. Methods addressing the first question are referred to as type 1 approaches, while those addressing the second are called type 2 approaches. It is also possible to address both questions simultaneously in a single approach that utilizes additional (preference-based) information during the optimization process. This information can be provided beforehand or progressively throughout the optimization process (interactive methods). These combined approaches are classified as type 3 approaches. This typology not only guides algorithm design but also frames how decision-making and optimization are intertwined in practical applications.</p>
<p>Complex optimization problems have found promising solutions in Evolutionary Algorithms (EAs), particularly when conventional optimization and operations research methods fall short due to assumptions about problem properties like linearity, convexity, or differentiability. This makes EAs attractive for tackling intricate MOO challenges as well. However, the widespread exploration of EAs for multi-objective problems didn&#x2019;t gain traction until the late 1980s. The field&#x2019;s growth culminated in the organization of specialized conferences, beginning with the inaugural International Conference on Evolutionary Multi-Criterion Optimization in 2001 [<xref ref-type="bibr" rid="ref-3">3</xref>]. This event gave rise to the commonly used acronym for the field, EMO.</p>
</sec>
<sec id="s1_2">
<label>1.2</label>
<title>Challenges in Multi-Objective Optimization</title>
<p>Multi-Objective Optimization Problems (MOOPs) present unique challenges compared to single-objective problems. These challenges can be broadly categorized as follows:
<list list-type="simple">
<list-item><label>&#x2022;</label>
<p>Solution Representation:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Identifying the Pareto front, a set of Pareto-optimal solutions, is computationally intensive, particularly for high-dimensional problems. Recent advancements, such as the use of deep learning techniques to approximate the Pareto front, have shown promise in alleviating some computational burdens.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Ensuring diversity in solutions along the Pareto front is critical to provide decision-makers with meaningful trade-offs. To address this, several algorithms and techniques have been developed:
<list list-type="simple">
<list-item><label>&#x25CB;</label>
<p>Crowding Distance: Used in algorithms like NSGA-II, this technique maintains diversity by measuring the density of solutions around a given solution, helping to ensure that the population covers the Pareto front effectively [<xref ref-type="bibr" rid="ref-4">4</xref>].</p></list-item>
<list-item><label>&#x25CB;</label>
<p>Fitness Sharing: This approach modifies the fitness of solutions based on their proximity to others in the objective space, encouraging a spread of solutions across different regions of the Pareto front.</p></list-item>
<list-item><label>&#x25CB;</label>
<p>Multi-Objective Evolutionary Strategies: These strategies often incorporate mechanisms to maintain diversity, such as adaptive mutation rates and selection pressure that favor diverse populations.</p></list-item>
</list></p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label> 
<p>Decision Support [<xref ref-type="bibr" rid="ref-5">5</xref>]:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Decision-making involves selecting a single solution from the Pareto front based on preferences. This task requires robust frameworks that can accommodate user preferences effectively.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>The implications of user preferences are significant as they can greatly influence the optimization process and the final decision. Preferences can guide the selection of solutions that not only meet technical criteria but also align with the decision-maker&#x2019;s values and priorities. Incorporating user preferences can be done through various methods, such as a priori, interactive, or a posteriori approaches.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>A priori methods allow users to specify preferences before the optimization process begins, which can streamline the search towards preferred regions of the solution space. Interactive methods enable users to provide feedback during the optimization process, allowing for a more adaptive approach that can adjust based on real-time insights. A posteriori methods involve analyzing the results after the optimization to understand how well the solutions meet user preferences.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Ultimately, effectively incorporating user preferences enhances satisfaction with the selected solutions and improves the overall decision-making process, ensuring that the results are not only mathematically sound but also contextually relevant.</p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label>
<p>Problem Complexity:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Real-world MOOPs often involve nonlinear, nonconvex, or stochastic systems that are difficult to model using traditional optimization techniques [<xref ref-type="bibr" rid="ref-6">6</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>The high computational cost of solving such problems limits the scalability of classical methods. Researchers have begun exploring the integration of surrogate models to approximate objective functions, significantly reducing computational load in complex environments. Additionally, adaptive sampling techniques that focus computational resources on promising regions of the solution space have been introduced to improve efficiency [<xref ref-type="bibr" rid="ref-7">7</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>MOOPs can also involve a large number of objectives (many-objective optimization) and high-dimensional decision spaces, which further increases complexity and computational burden. The challenge of handling high-dimensional search spaces and the sensitivity to parameter settings are also significant limitations [<xref ref-type="bibr" rid="ref-8">8</xref>&#x2013;<xref ref-type="bibr" rid="ref-10">10</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>The presence of irregular Pareto fronts (discontinuous, degenerate, or inverted) in real-world MOOPs poses additional challenges for MOEAs, as it makes designing efficient optimization algorithms particularly tricky [<xref ref-type="bibr" rid="ref-11">11</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Balancing convergence (finding solutions close to the true Pareto front) and diversity (maintaining a wide spread of solutions along the Pareto front) is a persistent challenge in MOEAs, especially as the number of objectives increases [<xref ref-type="bibr" rid="ref-12">12</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>The interpretability of MOEA results, particularly for high-dimensional problems, can also be a limitation [<xref ref-type="bibr" rid="ref-7">7</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Some MOEAs may suffer from premature convergence and insufficient population diversity, especially in high-dimensional data scenarios [<xref ref-type="bibr" rid="ref-13">13</xref>].</p></list-item>
</list></p></list-item>
</list></p>
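<p>The crowding-distance idea mentioned above can be sketched in a few lines (an illustrative approximation of the NSGA-II mechanism [<xref ref-type="bibr" rid="ref-4">4</xref>], not a reference implementation): for each objective, the front is sorted, boundary solutions receive infinite distance so they are always preserved, and interior solutions accumulate the normalized gap between their two neighbors. Solutions with larger crowding distance lie in sparser regions and are preferred when diversity must be maintained:</p>

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors (NSGA-II style)."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    for k in range(len(front[0])):
        # Indices of the front sorted by objective k.
        order = sorted(range(n), key=lambda i: front[i][k])
        # Boundary solutions get infinite distance so they always survive.
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][k] - front[order[0]][k]
        if span == 0.0:
            continue  # all solutions identical in this objective
        # Interior solutions: normalized gap between their two neighbors.
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / span
    return dist
```

For a three-point front, the two extreme solutions receive infinite distance and the middle one a finite value reflecting how isolated it is.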
</sec>
<sec id="s1_3">
<label>1.3</label>
<title>Multi-Objective vs. Many-Objective Optimization: A Critical Distinction</title>
<p>The distinction between &#x201C;multi-objective&#x201D; and &#x201C;many-objective&#x201D; optimization is primarily a matter of the number of objectives involved, which in turn significantly impacts the complexity of the problem and the effectiveness of traditional Multi-Objective Evolutionary Algorithms (MOEAs).</p>
<p>Multi-objective optimization problems (MOOPs) involve optimizing two or three conflicting objectives simultaneously. In this context, the concept of Pareto dominance is well-defined and relatively straightforward to apply. Such problems can be characterized as follows:
<list list-type="bullet">
<list-item>
<p>Number of Objectives: Typically, 2 or 3 objectives.</p></list-item>
<list-item>
<p>Pareto Front Visualization: The Pareto front (or Pareto set) can often be visualized and understood relatively easily in 2D or 3D space.</p></list-item>
<list-item>
<p>Algorithm Performance: Many traditional MOEAs, such as NSGA-II and SPEA2, perform well in this setting, effectively balancing convergence and diversity.</p></list-item>
<list-item>
<p>Decision-Making: Decision-makers can often analyze the trade-offs among a small number of objectives more intuitively.</p></list-item>
</list></p>
<p>Many-objective Optimization: Many-objective optimization problems (MaOPs) refer to problems with a large number of objectives, typically four or more, and often extending to tens or even hundreds of objectives. The increase in the number of objectives introduces significant challenges that differentiate MaOPs from traditional MOOPs. MaOPs can be characterized as follows:
<list list-type="simple">
<list-item><label>&#x2022;</label>
<p>Number of Objectives: Generally, 4 or more objectives, often extending to a much larger number.</p></list-item>
<list-item><label>&#x2022;</label>
<p>Curse of Dimensionality: As the number of objectives increases, the objective space becomes high-dimensional. This leads to several issues:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Dominance Resistance: In high-dimensional objective spaces, almost all solutions tend to be non-dominated with respect to each other. This phenomenon, known as &#x201C;dominance resistance&#x201D; or &#x201C;curse of dimensionality in objective space,&#x201D; makes Pareto dominance-based selection mechanisms less effective in guiding the search towards the true Pareto front. The selection pressure towards the Pareto front diminishes, making it difficult for MOEAs to converge.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Increased Computational Cost: The computational cost of maintaining diversity and calculating Pareto dominance relationships increases significantly with the number of objectives.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Visualization Difficulty: Visualizing and understanding the Pareto front becomes extremely challenging, if not impossible, in high-dimensional spaces.</p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label>
<p>Algorithm Adaptation: Traditional MOEAs often struggle with MaOPs due to the dominance resistance problem. This has led to the development of new types of algorithms specifically designed for many-objective problems, such as:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Decomposition-based MOEAs (e.g., MOEA/D): These methods are particularly well-suited for MaOPs as they transform the multi-objective problem into a set of single-objective subproblems, which helps overcome the dominance resistance issue.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Indicator-based MOEAs: These algorithms use performance indicators (e.g., Hypervolume, Inverted Generational Distance) to guide the search, providing a scalar measure of solution quality that can maintain selection pressure in high-dimensional spaces.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Reference-point based MOEAs: These methods incorporate user preferences or reference points to guide the search towards specific regions of interest in the high-dimensional objective space.</p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label>
<p>Decision-Making Complexity: Analyzing and selecting a preferred solution from a vast set of non-dominated solutions in a high-dimensional objective space is a significant challenge for decision-makers.</p></list-item>
</list></p>
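<p>The decomposition idea behind MOEA/D can be illustrated with the Tchebycheff scalarization it commonly employs (a minimal sketch under the usual minimization convention, with an ideal point <inline-formula id="ieqn-sketch-1"><mml:math id="mml-ieqn-sketch-1"><mml:msup><mml:mi>z</mml:mi><mml:mo>&#x2217;</mml:mo></mml:msup></mml:math></inline-formula> and one weight vector per subproblem; variable names are ours). Each subproblem scores a solution by its largest weighted deviation from the ideal point, giving a scalar fitness that keeps selection pressure even when Pareto dominance alone cannot discriminate:</p>

```python
def tchebycheff(fx, weights, z_star):
    """Tchebycheff scalarization for one subproblem (minimization):
    the largest weighted distance of objective vector fx from the
    ideal point z_star. Smaller values are better for this subproblem."""
    return max(w * abs(f - z) for w, f, z in zip(weights, fx, z_star))

# With equal weights and ideal point (0, 0), the balanced solution (3, 3)
# scores better (1.5) than the unbalanced (2, 4) (2.0) on this subproblem;
# a different weight vector would favor a different region of the front.
g_a = tchebycheff((2.0, 4.0), (0.5, 0.5), (0.0, 0.0))
g_b = tchebycheff((3.0, 3.0), (0.5, 0.5), (0.0, 0.0))
```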
<p><xref ref-type="table" rid="table-1">Table 1</xref> summarizes the key differences between multi-objective and many-objective optimization.</p>
<table-wrap id="table-1">
<label>Table 1</label>
<caption>
<title>Multi-objective vs. many-objective optimization key differences</title>
</caption>
<table>
<colgroup>
<col/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th>Feature</th>
<th align="center">MOOPs</th>
<th align="center">MaOPs</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>Number of objectives</bold></td>
<td>2 or 3 objectives.</td>
<td>4 or more objectives (can be tens or hundreds).</td>
</tr>
<tr>
<td><bold>Dominance behavior</bold></td>
<td>Pareto dominance is effective in guiding the search.</td>
<td>Dominance resistance (most solutions are non-dominated), reducing selection pressure.</td>
</tr>
<tr>
<td><bold>Computational cost</bold></td>
<td>Manageable computational cost for dominance comparisons.</td>
<td>High computational cost for dominance comparisons and diversity maintenance.</td>
</tr>
<tr>
<td><bold>Visualization</bold></td>
<td>Pareto front can be easily visualized in 2D/3D.</td>
<td>Visualization of the Pareto front is extremely difficult or impossible.</td>
</tr>
<tr>
<td><bold>Algorithm suitability</bold></td>
<td>Traditional MOEAs (e.g., NSGA-II, SPEA2) perform well.</td>
<td>Requires specialized algorithms (e.g., decomposition-based, indicator-based) to handle dominance resistance.</td>
</tr>
<tr>
<td><bold>Decision-making</bold></td>
<td>Easier to analyze trade-offs and select solutions.</td>
<td>Very challenging to analyze trade-offs and select solutions due to the large number of non-dominated solutions.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In summary, while both multi-objective and many-objective optimization deal with multiple conflicting objectives, the sheer number of objectives in many-objective problems introduces a &#x201C;curse of dimensionality&#x201D; that fundamentally changes the behavior of Pareto dominance and necessitates different algorithmic approaches and decision-making strategies.</p>
</sec>
<sec id="s1_4">
<label>1.4</label>
<title>Applications of MOEAs</title>
<p>MOEAs have proven invaluable in diverse domains, including:
<list list-type="simple">
<list-item><label>&#x2022;</label>
<p>Engineering:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Structural Design Optimization: MOEAs are employed to optimize designs under multiple constraints. For instance, Reference [<xref ref-type="bibr" rid="ref-14">14</xref>] demonstrated the use of the NSGA-II algorithm for optimizing truss structures, achieving significant improvements in weight reduction while maintaining strength standards. MOEAs are frequently used to optimize designs under multiple constraints like a study by Kalyanmoy Deb and his colleagues that explored various applications of MOEAs in structural optimization, showcasing their effectiveness in minimizing material usage while satisfying performance criteria [<xref ref-type="bibr" rid="ref-15">15</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Energy Systems Management: Gong et al. [<xref ref-type="bibr" rid="ref-16">16</xref>] applied MOEAs to optimize the operation of power systems, focusing on minimizing costs while maximizing reliability and sustainability. Their results indicated enhanced performance in managing renewable energy sources. In a study by [<xref ref-type="bibr" rid="ref-17">17</xref>], MOEAs were demonstrated for optimizing smart grid operations, balancing cost, reliability, and environmental impact in energy distribution systems, which is crucial for integrating renewable energy sources.</p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label> 
<p>Economics:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Portfolio Optimization: A study by Mohagheghi et al. [<xref ref-type="bibr" rid="ref-18">18</xref>] used MOEAs to develop optimal investment portfolios that balance risk and return, demonstrating how these algorithms can effectively navigate the trade-offs inherent in financial decision-making. Furthermore, a recent study by Bradshaw et al. [<xref ref-type="bibr" rid="ref-19">19</xref>] utilized the MOEA framework to optimize investment portfolios, effectively balancing risk and return in volatile markets, illustrating the robustness of MOEAs in financial decision-making.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Resource Allocation: MOEAs have been effectively utilized in optimizing resource distribution among competing projects. For instance, a study by Zopounidis and Doumpos [<xref ref-type="bibr" rid="ref-20">20</xref>] illustrated the use of MOEAs in resource allocation, which allowed for more equitable and efficient distribution strategies in various economic scenarios. Recent work by Fern&#x00E1;ndez et al. [<xref ref-type="bibr" rid="ref-21">21</xref>] applied MOEAs for resource allocation across competing projects, demonstrating improved efficiency in distribution strategies.</p></list-item>
</list></p></list-item>
<list-item><label>&#x2022;</label> 
<p>Emerging Fields:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Advanced Analytics and Machine Learning: MOEAs have been successfully applied in hyperparameter tuning for machine learning models. For example, a study by Bader and Zitzler [<xref ref-type="bibr" rid="ref-22">22</xref>] employed MOEAs to optimize model parameters, achieving improved predictive performance in complex datasets. Their work highlighted the synergy between MOEAs and data analytics, demonstrating enhanced decision-making processes in rapidly changing environments. Additionally, a recent study by Rom et al. [<xref ref-type="bibr" rid="ref-23">23</xref>] explored the use of MOEAs for hyperparameter tuning in deep learning models, showcasing how these algorithms can enhance model performance on large datasets.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Healthcare: A recent application in healthcare optimization used MOEAs to allocate medical resources in emergency departments, balancing patient wait times and resource utilization effectively, showcasing the versatility and adaptability of MOEAs in critical real-world applications. A study by Eriskin et al. [<xref ref-type="bibr" rid="ref-24">24</xref>] highlighted the adaptability of MOEAs in balancing patient care quality and resource management during crisis situations, such as the COVID-19 pandemic.</p></list-item>
</list></p></list-item>
</list></p>
<p>These case studies illustrate the continued relevance of MOEAs across diverse fields and underscore their effectiveness in solving complex optimization problems involving multiple conflicting objectives, as well as their adaptability to contemporary issues and technological advances.</p>
</sec>
<sec id="s1_5">
<label>1.5</label>
<title>Historical Milestones of Multi-Objective Evolutionary Algorithms (MOEAs)</title>
<p>The evolution of MOEAs has been shaped by several landmark contributions, ranging from the conceptual foundation of evolutionary algorithms to the development of sophisticated hybrid techniques. These milestones represent significant progress in algorithmic strategies, problem formulation, and application scope.</p>
<p><xref ref-type="table" rid="table-2">Table 2</xref> provides a chronological overview of key milestones in MOEA development. The conceptual groundwork for intelligent behavior in machines can be traced back to Alan Turing&#x2019;s early ideas in the 1950s, which laid the foundation for machine learning and computational intelligence. Although not directly proposing evolutionary algorithms, Turing&#x2019;s vision inspired subsequent developments in artificial intelligence. The foundation of genetic algorithms (GAs) was laid in the early 1960s by John Holland, who sought methods to automatically generate adaptive finite-state automata. His initial publications on this concept predate his widely cited book Adaptation in Natural and Artificial Systems (1975), which formalized and popularized GA methodology. A more comprehensive historical account of GA development can be found in Goldberg&#x2019;s book Genetic Algorithms in Search, Optimization, and Machine Learning (1989), which outlines the foundational ideas and their early evolution [<xref ref-type="bibr" rid="ref-25">25</xref>]. The first formal evolutionary algorithms, however, were introduced later by Ingo Rechenberg and John Holland in the 1970s [<xref ref-type="bibr" rid="ref-26">26</xref>&#x2013;<xref ref-type="bibr" rid="ref-28">28</xref>]. Starting from the foundational theories, the field has witnessed the emergence of influential algorithms like NSGA, SPEA, NSGA-II, and MOEA/D. These developments reflect a continuous effort to improve convergence behavior, maintain diversity, and tackle increasingly complex multi-objective problems.</p>
<table-wrap id="table-2">
<label>Table 2</label>
<caption>
<title>Timeline of key milestones of MOEA</title>
</caption>
<table>
<colgroup>
<col align="center"/>
<col/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th align="center">Period</th>
<th>Year</th>
<th align="center">Milestone</th>
<th align="center">Significance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Early foundations</td>
<td>1950s</td>
<td>Turing&#x2019;s early ideas on machine intelligence</td>
<td>Conceptual foundation for evolutionary computation</td>
</tr>
<tr>
<td></td>
<td>1960s</td>
<td>Genetic Algorithms (GA) introduced by John Holland</td>
<td>Early use of natural selection principles to create adaptive systems (later consolidated in Adaptation in Natural and Artificial Systems, 1975; see also Goldberg, 1989)</td>
</tr>
<tr>
<td></td>
<td>1980s</td>
<td>Formal integration of Pareto optimality</td>
<td>Enabled multi-objective formulations in evolutionary algorithms (MOGAs)</td>
</tr>
<tr>
<td>Key algorithms</td>
<td>1994</td>
<td>NSGA by Srinivas &#x0026; Deb</td>
<td>Introduced Pareto-based sorting and diversity preservation</td>
</tr>
<tr>
<td></td>
<td>1998</td>
<td>SPEA by Zitzler &#x0026; Thiele [<xref ref-type="bibr" rid="ref-29">29</xref>]</td>
<td>Fitness based on domination strength; enhanced convergence and diversity</td>
</tr>
<tr>
<td></td>
<td>1999</td>
<td>PAES by Knowles &#x0026; Corne [<xref ref-type="bibr" rid="ref-30">30</xref>]</td>
<td>Archive-based diversity strategy for small populations</td>
</tr>
<tr>
<td>Hybrid and modern</td>
<td>2007</td>
<td>MOEA/D by Zhang &#x0026; Li</td>
<td>Problem decomposition into scalar subproblems for parallel optimization</td>
</tr>
<tr>
<td></td>
<td>2015</td>
<td>MOEA/D-H (Hybrid)</td>
<td>Combines MOEA/D with local search for better convergence in complex landscapes</td>
</tr>
<tr>
<td></td>
<td>2020s</td>
<td>Machine Learning-integrated MOEAs</td>
<td>Enhances adaptability and performance in dynamic, real-world environments</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>Recent years have seen a shift toward integrating MOEAs with advanced techniques, such as local search, decomposition strategies, and machine learning models. This trend demonstrates the field&#x2019;s dynamic nature and its responsiveness to the challenges posed by real-world, high-dimensional, and dynamic optimization scenarios.</p>
<p>The historical development of MOEAs is characterized by foundational theories, key algorithmic breakthroughs, and hybrid advancements. <xref ref-type="table" rid="table-2">Table 2</xref> presents a condensed timeline of these significant milestones, highlighting their contributions to the evolution of the field.</p>

</sec>
<sec id="s1_6">
<label>1.6</label>
<title>Objectives of This Review</title>
<p>This review aims to:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Provide a comprehensive historical perspective on the evolution of MOEAs.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Highlight key methodologies, including Pareto-based, decomposition-based, and hybrid approaches.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Analyze significant applications and identify gaps in the literature.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Offer insights into future research directions, emphasizing scalability, real-time performance, and integration with emerging technologies.</p></list-item>
</list></p>
<p>By synthesizing these aspects, this article seeks to serve as a resource for researchers and practitioners, advancing both theoretical understanding and practical application of MOEAs.</p>
</sec>
</sec>
<sec id="s2">
<label>2</label>
<title>Foundational Concepts and Traditional MOEAs</title>
<sec id="s2_1">
<label>2.1</label>
<title>Basic Approaches</title>
<p>MOOPs are characterized by the need to optimize two or more conflicting objectives simultaneously. This complexity arises in various fields, including engineering, economics, and environmental management, where decision-makers must balance trade-offs among competing goals. The formulation of an MOOP can be expressed mathematically as follows:
<disp-formula id="eqn-1"><label>(1)</label><mml:math id="mml-eqn-1" display="block"><mml:mi>M</mml:mi><mml:mi>a</mml:mi><mml:mi>x</mml:mi><mml:mi>i</mml:mi><mml:mi>m</mml:mi><mml:mi>i</mml:mi><mml:mi>z</mml:mi><mml:mi>e</mml:mi><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>}</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>X</mml:mi><mml:mo>&#x2286;</mml:mo><mml:msup><mml:mi>R</mml:mi><mml:mrow><mml:mi>n</mml:mi></mml:mrow></mml:msup></mml:math></disp-formula>where <inline-formula id="ieqn-1"><mml:math id="mml-ieqn-1"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> represents a vector of <inline-formula id="ieqn-2"><mml:math id="mml-ieqn-2"><mml:mi>q</mml:mi></mml:math></inline-formula> objective functions to be maximized, <italic>X</italic> is the solution or search space, and <inline-formula id="ieqn-3"><mml:math id="mml-ieqn-3"><mml:mi>f</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>X</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the objective space. 
A solution <inline-formula id="ieqn-4"><mml:math id="mml-ieqn-4"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> is said to dominate <inline-formula id="ieqn-5"><mml:math id="mml-ieqn-5"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula> (denoted <inline-formula id="ieqn-6"><mml:math id="mml-ieqn-6"><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>&#x227A;</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub></mml:math></inline-formula>) if <inline-formula id="ieqn-7"><mml:math id="mml-ieqn-7"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow><mml:mo>&#x2265;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> for all <inline-formula id="ieqn-8"><mml:math id="mml-ieqn-8"><mml:mi>i</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:mi>q</mml:mi></mml:math></inline-formula> and <inline-formula id="ieqn-9"><mml:math id="mml-ieqn-9"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo><mml:mo>&gt;</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub><mml:mo stretchy="false">(</mml:mo><mml:msub><mml:mi>x</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mo stretchy="false">)</mml:mo></mml:math></inline-formula> for at least one 
<italic>i</italic>.</p>
<p>The <bold>Pareto-optimal set</bold> is defined as [<xref ref-type="bibr" rid="ref-28">28</xref>]:
<disp-formula id="eqn-2"><label>(2)</label><mml:math id="mml-eqn-2" display="block"><mml:msup><mml:mi>P</mml:mi><mml:mrow><mml:mo>&#x2217;</mml:mo></mml:mrow></mml:msup><mml:mo>=</mml:mo><mml:mrow><mml:mo>{</mml:mo><mml:mi>x</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>X</mml:mi><mml:mrow><mml:mo stretchy="false">|</mml:mo></mml:mrow><mml:mo>&#x2204;</mml:mo><mml:mi>y</mml:mi><mml:mo>&#x2208;</mml:mo><mml:mi>X</mml:mi><mml:mspace width="thinmathspace" /><mml:mspace width="thinmathspace" /><mml:mrow><mml:mtext>such that&#xA0;</mml:mtext></mml:mrow><mml:mi>y</mml:mi><mml:mo>&#x227A;</mml:mo><mml:mi>x</mml:mi><mml:mo>}</mml:mo></mml:mrow></mml:math></disp-formula></p>
<p>This set represents solutions where no objective can be improved without degrading another, and the corresponding Pareto front is the image of <inline-formula id="ieqn-10"><mml:math id="mml-ieqn-10"><mml:msup><mml:mi>P</mml:mi><mml:mrow><mml:mo>&#x2217;</mml:mo></mml:mrow></mml:msup></mml:math></inline-formula> in the objective space. Identifying the Pareto front is central to solving MOOPs, as it provides a comprehensive view of the trade-offs involved in the decision-making process [<xref ref-type="bibr" rid="ref-31">31</xref>].</p>
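<p>As an illustration of these definitions, the dominance check and a naive Pareto filter can be sketched as follows (a maximization setting as in Eq. (1); the function names are illustrative only, not part of any standard library):</p>

```python
# Pareto dominance (maximization) and a naive Pareto-optimal filter.

def dominates(f1, f2):
    """True if objective vector f1 dominates f2: all components >=, at least one >."""
    return all(a >= b for a, b in zip(f1, f2)) and any(a > b for a, b in zip(f1, f2))

def pareto_set(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_set([(1, 5), (2, 4), (3, 3), (2, 2), (0, 6)])
# (2, 2) is dominated by (3, 3); the remaining vectors are mutually incomparable.
```

<p>This quadratic-time filter only serves to make the definitions concrete; practical MOEAs use faster non-dominated sorting procedures.</p>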
</sec>
<sec id="s2_2">
<label>2.2</label>
<title>Early MOEAs and Selection Mechanisms</title>
<p>In the context of EAs, the multi-objective nature of an optimization problem often appears irrelevant during certain steps. Operations such as random initialization of solutions, mutation, crossover, and recombination generally operate solely within the solution space, without directly considering the objective functions. However, the selection process&#x2014;a critical step in EAs&#x2014;explicitly involves the objective function(s) via the fitness function. Frequently, the fitness function mirrors the objective function(s), making selection dependent on relative fitness values.</p>
<p>Selection mechanisms play a crucial role in guiding the search process of MOEAs. They determine which individuals from the population are chosen for reproduction based on their fitness relative to multiple objectives. Effective selection strategies not only enhance the convergence towards the Pareto front but also ensure a diverse set of solutions, which is essential for exploring the trade-offs among conflicting objectives. As the field of MOEAs evolves, various innovative selection techniques have emerged, each aiming to improve the efficiency and effectiveness of the optimization process [<xref ref-type="bibr" rid="ref-26">26</xref>]. Common selection strategies include:
<list list-type="simple">
<list-item><label>1)</label><p><bold>Tournament Selection:</bold> Tournament Selection is a popular method in evolutionary algorithms where a subset of individuals is randomly chosen from the population, and the best individual among them is selected for reproduction. This method introduces a competitive aspect to selection, allowing for a balance between exploration and exploitation. The size of the tournament can be adjusted to control selection pressure; larger tournaments favor stronger individuals, while smaller tournaments maintain diversity by giving weaker individuals a chance to be selected [<xref ref-type="bibr" rid="ref-32">32</xref>].</p></list-item>
<list-item><label>2)</label><p><bold>Roulette Wheel Selection:</bold> Roulette Wheel Selection, also known as fitness proportionate selection, assigns each individual a probability of being selected for reproduction that is proportional to its fitness relative to the entire population. Each individual is represented by a slice of a wheel whose size corresponds to its fitness, and the wheel is spun to select individuals for reproduction. The main advantages of this method are its simplicity and efficiency. However, it can lead to premature convergence, where a few highly fit individuals dominate the selection process, potentially reducing genetic diversity in the population [<xref ref-type="bibr" rid="ref-33">33</xref>,<xref ref-type="bibr" rid="ref-34">34</xref>].</p></list-item>
<list-item><label>3)</label><p><bold>Rank-Based Selection:</bold> Rank-Based Selection addresses some limitations of fitness proportionate selection by ranking individuals based on their fitness rather than using their absolute fitness values. In this method, individuals are assigned selection probabilities based on their ranks, which helps to maintain diversity in the population and prevents the dominance of highly fit individuals. This approach is particularly useful in MOO, where maintaining a diverse set of solutions is crucial [<xref ref-type="bibr" rid="ref-6">6</xref>,<xref ref-type="bibr" rid="ref-9">9</xref>].</p></list-item>
<list-item><label>4)</label><p><bold>NSGA-II Selection:</bold> Non-dominated Sorting Genetic Algorithm II (NSGA-II) employs a fast non-dominated sorting approach to rank individuals based on their Pareto dominance. In this method, individuals are sorted into different fronts based on their level of dominance, and a crowding distance metric is used to maintain diversity within each front. This ensures that both convergence towards the Pareto front and diversity in the population are preserved [<xref ref-type="bibr" rid="ref-30">30</xref>].</p></list-item>
<list-item><label>5)</label><p><bold>SPEA2 Selection:</bold> The Strength Pareto Evolutionary Algorithm 2 (SPEA2) enhances its predecessor by maintaining an external archive of non-dominated solutions. This algorithm uses both the fitness of individuals and a density estimation to select individuals for reproduction, improving the convergence and diversity of solutions in the search space [<xref ref-type="bibr" rid="ref-35">35</xref>].</p></list-item>
<list-item><label>6)</label><p><bold>Crowding Distance:</bold> Crowding Distance is a technique used in multi-objective optimization to ensure diversity among solutions in the population. It measures how close an individual is to its neighbors in the objective space. By favoring individuals with a larger crowding distance during selection, this method helps maintain a diverse set of solutions and prevents the algorithm from converging prematurely to a small region of the solution space [<xref ref-type="bibr" rid="ref-4">4</xref>,<xref ref-type="bibr" rid="ref-36">36</xref>].</p></list-item>
<list-item><label>7)</label><p><bold>Elitist Selection:</bold> Elitist Selection is a strategy that ensures that a subset of the best-performing individuals in the population is preserved and carried over to the next generation. This method focuses on retaining high-quality solutions, which can help accelerate convergence towards optimal solutions. By guaranteeing that the best individuals survive, elitist selection can improve the overall performance of the evolutionary algorithm. However, it is essential to balance elitism with diversity to avoid premature convergence to suboptimal solutions [<xref ref-type="bibr" rid="ref-33">33</xref>,<xref ref-type="bibr" rid="ref-37">37</xref>].</p></list-item>
</list></p>
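<p>To make one of the diversity mechanisms above concrete, the crowding distance used in NSGA-II can be sketched as follows (a simplified illustration under the usual convention that boundary solutions receive infinite distance; variable names are ours):</p>

```python
import math

def crowding_distance(front):
    """NSGA-II-style crowding distance for a list of objective vectors."""
    n = len(front)
    dist = [0.0] * n
    if n == 0:
        return dist
    m = len(front[0])
    for k in range(m):
        # Sort solution indices by the k-th objective.
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        # Boundary solutions are always preferred (infinite distance).
        dist[order[0]] = dist[order[-1]] = math.inf
        if fmax == fmin:
            continue
        # Interior solutions accumulate the normalized gap between neighbours.
        for pos in range(1, n - 1):
            dist[order[pos]] += (front[order[pos + 1]][k]
                                 - front[order[pos - 1]][k]) / (fmax - fmin)
    return dist
```

<p>Favoring individuals with larger crowding distance during selection preserves the spread of solutions along the front.</p>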
<p><xref ref-type="table" rid="table-3">Table 3</xref> provides a clear overview of different selection strategies used in MOEAs, highlighting their respective strengths and weaknesses. This helps in understanding the trade-offs involved in choosing a selection strategy for a specific optimization problem.</p>
<table-wrap id="table-3">
<label>Table 3</label>
<caption>
<title>Common selection strategies in multi-objective evolutionary algorithms</title>
</caption>
<table>
<colgroup>
<col/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th>Selection strategy</th>
<th align="center">Strengths</th>
<th align="center">Weaknesses</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tournament selection</td>
<td>Simple to implement<break/> Maintains diversity well</td>
<td>Can lead to premature convergence if the tournament size is large</td>
</tr>
<tr>
<td>Roulette wheel selection</td>
<td>Probabilistic approach allows for diverse selection<break/>Easy to understand</td>
<td>May favor overly fit individuals, leading to loss of diversity</td>
</tr>
<tr>
<td>Rank-based selection</td>
<td>Reduces the impact of fitness scaling<break/>Encourages exploration of less fit individuals</td>
<td>May ignore significant differences between individuals</td>
</tr>
<tr>
<td>NSGA-II selection</td>
<td>Balances convergence and diversity well<break/>Efficient in maintaining a good spread of solutions</td>
<td>More complex to implement compared to simpler methods</td>
</tr>
<tr>
<td>SPEA2 selection</td>
<td>Maintains a fine balance between convergence and diversity<break/>Uses an external archive for elite solutions</td>
<td>Computationally intensive due to the need for fitness assignment</td>
</tr>
<tr>
<td>Crowding distance</td>
<td>Helps maintain diversity by favoring solutions far from others<break/>Effective in multi-modal problems</td>
<td>Complexity increases with dimensionality of the objective space</td>
</tr>
<tr>
<td>Elitist selection</td>
<td>Ensures the best solutions are preserved across generations</td>
<td>Can lead to loss of diversity if not balanced with exploration strategies</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>EAs adapt well to MOOPs due to their population-based nature, which enables simultaneous exploration of multiple solutions. However, traditional EA selection mechanisms are not directly applicable in multi-objective settings because they rely on the ability to clearly determine whether one solution is superior to another. In MOO, this comparison is complicated by the possibility of incomparability: for two solutions <italic>x</italic><sub>1</sub> and <italic>x</italic><sub>2</sub>, neither <italic>x</italic><sub>1</sub> &#x227A; <italic>x</italic><sub>2</sub> nor <italic>x</italic><sub>2</sub> &#x227A; <italic>x</italic><sub>1</sub> may hold, in which case the two solutions are incomparable.</p>
<p>To address these challenges, early researchers (e.g., Fonseca and Fleming, 1995) explored scalar-valued fitness functions that allowed standard selection techniques to be applied. Two primary approaches emerged, as discussed in the following subsections.</p>
<sec id="s2_2_1">
<label>2.2.1</label>
<title>Aggregation of Objectives (Aggregated Objective Functions)</title>
<p>Aggregation methods combine multiple objectives into a single objective function using additional information, simplifying the optimization process. This approach allows for the application of traditional optimization techniques but may overlook the trade-offs between objectives. The effectiveness of aggregation depends on the choice of weights assigned to each objective, which can significantly influence the results [<xref ref-type="bibr" rid="ref-33">33</xref>]. From the individual objective functions, a single aggregated objective can be derived through:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Simple Additive Weighting (SAW): Weighted sums of objectives, where the weights reflect the relative importance of each objective.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Reference-point Approaches: A reference point (e.g., ideal or utopia point) is defined, and the distance to this point is minimized (e.g., goal programming).</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Utility Functions: A utility (or value) function is constructed based on the objectives.</p></list-item>
</list></p>
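<p>The first two aggregation schemes above can be sketched as follows (an illustrative sketch; the function names are ours and not drawn from the cited literature):</p>

```python
def weighted_sum(f, w):
    """Simple additive weighting: F(x) = sum_i w_i * f_i(x)."""
    return sum(wi * fi for wi, fi in zip(w, f))

def distance_to_reference(f, ref):
    """Reference-point approach: Euclidean distance to an ideal point (to be minimized)."""
    return sum((fi - ri) ** 2 for fi, ri in zip(f, ref)) ** 0.5

# Aggregating the objective vector (3.0, 1.0) with equal weights (0.5, 0.5):
assert weighted_sum((3.0, 1.0), (0.5, 0.5)) == 2.0
```

<p>Either scalar value can then be used directly as a fitness function in a standard single-objective EA, which is precisely what makes these methods attractive despite their drawbacks.</p>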
<p>While these methods facilitate the use of traditional EAs, they have significant drawbacks:
<list list-type="simple">
<list-item>
<label>&#x2717;</label><p>Dependence on Decision Maker Input: Requires predefined weights or reference points.</p></list-item>
<list-item>
<label>&#x2717;</label><p>Concentration Bias: Solutions tend to cluster around regions optimal for the aggregated objective, often failing to represent the Pareto set comprehensively.</p></list-item>
</list></p>
<p>To address the challenges posed by high-dimensional objective spaces, researchers have explored methods for aggregating groups of objectives. Recent studies have shown that aggregating objectives can effectively reduce the dimensionality of the problem while preserving essential trade-offs among objectives. This approach allows for a more manageable optimization process without sacrificing solution quality [<xref ref-type="bibr" rid="ref-38">38</xref>]. Techniques such as weighted sums, Pareto front approximations, and reference-point methods are notable examples of effective aggregation strategies, which have been successfully applied in various engineering problems [<xref ref-type="bibr" rid="ref-39">39</xref>].</p>
</sec>
<sec id="s2_2_2">
<label>2.2.2</label>
<title>Scalarization without Explicit Input (Population-Based Scalarization)</title>
<p>Population-based scalarization techniques aim to derive a single objective function from a population of solutions without requiring explicit input from the decision-maker. This method allows for a more dynamic adaptation to the search space, facilitating the exploration of diverse solutions while still guiding the search towards the Pareto front. For example:
<list list-type="simple">
<list-item><label>&#x2022;</label>
<p>Vector Evaluated Genetic Algorithm (VEGA): Proposed by Schaffer and Grefenstette [<xref ref-type="bibr" rid="ref-39">39</xref>], VEGA divides the population into subgroups. Each subgroup is selected based on one of the objectives. The total reproduction probability of a solution is proportional to the weighted sum of its fitness values. This method, however, tends to bias solutions toward convex regions of the Pareto front, making it less effective for detecting solutions in concave regions.</p></list-item>
<list-item><label>&#x2022;</label>
<p>Objective-based Random Selection: In this method, a single objective is randomly selected during each selection step. Early examples include:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Fourman (1985) [<xref ref-type="bibr" rid="ref-40">40</xref>]: Tournament selection using randomly selected objectives.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Kursawe (1991) [<xref ref-type="bibr" rid="ref-41">41</xref>]: Partitioning the population based on randomly chosen objectives within the framework of evolution strategies. Kursawe also introduced a diploid encoding scheme to promote population diversity.</p></list-item>
</list></p></list-item>
</list></p>
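<p>A minimal sketch of VEGA-style selection may clarify the subgroup idea described above (assuming fitness-proportionate selection within each subgroup and non-negative objective values; names and details are illustrative, not taken from the original publication):</p>

```python
import random

def vega_selection(population, rng=random):
    """VEGA-style mating pool: selection effort is split among the objectives.

    `population` is a list of objective vectors (higher is better here); each
    of the q equally sized subgroups is filled by fitness-proportionate
    selection driven by a single objective.
    """
    q = len(population[0])
    group_size = len(population) // q
    pool = []
    for k in range(q):
        weights = [ind[k] for ind in population]  # fitness = k-th objective
        pool.extend(rng.choices(population, weights=weights, k=group_size))
    return pool
```

<p>Because each subgroup rewards only one objective, solutions that are good on average but best at nothing are rarely selected, which is one way to see the bias toward the extremes (and convex regions) of the Pareto front.</p>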
<p>Despite these innovations, these approaches share a tendency to concentrate solutions in specific areas of the Pareto set, limiting their ability to fully explore the objective space. Addressing this challenge remains a key focus in the evolution of multi-objective evolutionary algorithms. While early scalarization methods like VEGA and random objective selection introduced foundational mechanisms for handling multiple objectives, they often struggled with issues such as biased solution distributions and limited diversity across the Pareto front. These limitations highlighted the need for more adaptive, robust, and scalable approaches. In response, modern developments in MOEAs have emerged, offering enhanced strategies to overcome these challenges and better support complex real-world optimization problems.</p>
</sec>
</sec>
<sec id="s2_3">
<label>2.3</label>
<title>Strategies for Handling Multiple Objectives: Aggregation vs. Decomposition Developments</title>
<p>Both decomposition-based methods and aggregation methods are strategies used in multi-objective optimization to simplify the problem by transforming multiple objectives into a single, or a set of single, objective problems. However, they differ fundamentally in their approach and how they handle the objectives.</p>
<p>Aggregation Methods (Scalarization): Aggregation methods, also known as scalarization techniques, combine multiple objectives into a single objective function. This is typically achieved by assigning weights to each objective and summing them up, or by using other mathematical formulations. The goal is to transform the multi-objective problem into a single-objective problem that can then be solved using traditional optimization techniques. The basic concepts of how they work are as follows:</p>
<p>Simple Additive Weighting (SAW): This is the most common form, where objectives are multiplied by predefined weights and then summed. For example, for objectives <inline-formula id="ieqn-11"><mml:math id="mml-ieqn-11"><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>,</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>,</mml:mo><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, the aggregated objective would be <inline-formula id="ieqn-12"><mml:math id="mml-ieqn-12"><mml:mi>F</mml:mi><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>=</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>1</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mn>2</mml:mn></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow><mml:mo>+</mml:mo><mml:mo>&#x2026;</mml:mo><mml:mo>+</mml:mo><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:msub><mml:mi>f</mml:mi><mml:mrow><mml:mi>q</mml:mi></mml:mrow></mml:msub><mml:mrow><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula>, where <inline-formula id="ieqn-13"><mml:math id="mml-ieqn-13"><mml:msub><mml:mi>w</mml:mi><mml:mrow><mml:mi>i</mml:mi></mml:mrow></mml:msub></mml:math></inline-formula> are the weights.</p>
<p>Reference-point Approaches: These methods define a &#x201C;reference point&#x201D; (e.g., an ideal or utopia point) and minimize the distance to this point.</p>
<p>Utility Functions: A utility function is constructed to represent the decision-maker&#x2019;s preferences over different objective values.</p>
<p>Such approaches can be characterized as follows:</p>
<p>Simplicity: They simplify the optimization process by reducing it to a single-objective problem.</p>
<p>Dependence on Decision Maker Input: They often require the decision-maker to specify preferences (e.g., weights or reference points) before the optimization begins.</p>
<p>Concentration Bias: A significant drawback is their tendency to bias solutions towards specific regions of the Pareto front, especially convex ones, and they may fail to find solutions in concave regions of the Pareto front. This means they might not be able to generate a diverse set of Pareto-optimal solutions, potentially limiting the exploration of the entire Pareto front.</p>
<p>Decomposition-based Methods: Decomposition-based methods, exemplified by MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition), approach the multi-objective problem by decomposing it into a set of scalar optimization subproblems. These subproblems are then optimized simultaneously and collaboratively. Decomposition-based methods work as follows:
<list list-type="bullet">
<list-item>
<p>Subproblem Creation: The multi-objective problem is transformed into a number of single-objective subproblems, often using a set of weight vectors or reference points. Each subproblem is designed to optimize a specific aspect of the overall multi-objective problem.</p></list-item>
<list-item>
<p>Collaborative Optimization: Instead of solving each subproblem independently, decomposition-based methods optimize them in a collaborative manner. Information is shared between neighboring subproblems, allowing for a more efficient exploration of the Pareto front.</p></list-item>
<list-item>
<p>Example MOEA/D: MOEA/D decomposes a multi-objective problem into a number of scalar subproblems using the Tchebycheff approach or other aggregation functions. Each subproblem is associated with a weight vector, and solutions to these subproblems are optimized by leveraging information from their neighbors.</p></list-item>
</list></p>
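<p>The Tchebycheff decomposition at the core of MOEA/D can be sketched as follows (minimization form; the uniform bi-objective weight generator is a simplifying assumption for illustration):</p>

```python
def tchebycheff(f, weight, z_star):
    """Tchebycheff scalarization used by MOEA/D (minimization form):
    g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|, where z* is the ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weight, f, z_star))

def uniform_weights(n):
    """n evenly spread weight vectors for a bi-objective problem; each vector
    defines one scalar subproblem (the bi-objective case keeps the sketch simple)."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

# Each weight vector defines one subproblem; in MOEA/D, subproblems with
# similar weight vectors are neighbours and exchange solutions during search.
```

<p>Unlike the weighted sum, the Tchebycheff function can reach solutions in concave regions of the Pareto front, which is one reason decomposition-based methods cover the front more evenly.</p>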
<p>The methods can be characterized as follows:
<list list-type="bullet">
<list-item>
<p>Scalability: They are particularly effective in handling many-objective problems (problems with a large number of objectives) because they transform the problem into a set of simpler subproblems, reducing the computational complexity associated with high-dimensional objective spaces.</p></list-item>
<list-item>
<p>Diversity and Convergence: By optimizing multiple subproblems simultaneously and collaboratively, decomposition-based methods can achieve both good convergence towards the Pareto front and maintain diversity among the solutions.</p></list-item>
<list-item>
<p>Less Prone to Concentration Bias: Unlike simple aggregation methods, decomposition-based approaches are generally better at finding solutions across the entire Pareto front, including concave regions, due to their collaborative optimization strategy.</p></list-item>
<list-item>
<p>Adaptability: They can be adapted to various problem types and can incorporate adaptive weight strategies to dynamically adjust to the problem landscape.</p></list-item>
</list></p>
<p><xref ref-type="table" rid="table-4">Table 4</xref> summarizes the key differences between aggregation methods (scalarization) and decomposition-based methods.</p>
<table-wrap id="table-4">
<label>Table 4</label>
<caption>
<title>Key differences between aggregation methods (scalarization) and decomposition-based methods</title>
</caption>
<table>
<colgroup>
<col/>
<col align="center"/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th>Feature</th>
<th align="center">Aggregation methods (Scalarization)</th>
<th align="center">Decomposition-based methods</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>Approach</bold></td>
<td>Combine all objectives into a single objective function.</td>
<td>Decompose the multi-objective problem into multiple single-objective subproblems.</td>
</tr>
<tr>
<td><bold>Optimization</bold></td>
<td>Solve a single, transformed problem.</td>
<td>Solve multiple subproblems simultaneously and collaboratively.</td>
</tr>
<tr>
<td><bold>Preference input</bold></td>
<td>Often require explicit preference input (e.g., weights) beforehand.</td>
<td>Can incorporate preferences, but the decomposition itself aids in exploration.</td>
</tr>
<tr>
<td><bold>Pareto front coverage</bold></td>
<td>May struggle with concave regions and can exhibit concentration bias.</td>
<td>Generally better at covering the entire Pareto front, including concave regions.</td>
</tr>
<tr>
<td><bold>Scalability</bold></td>
<td>Less scalable to many-objective problems due to the difficulty of setting weights and covering the entire Pareto front.</td>
<td>Highly scalable to many-objective problems by breaking them down into manageable subproblems.</td>
</tr>
<tr>
<td><bold>Examples</bold></td>
<td>Weighted Sum, Epsilon-Constraint, Goal Programming.</td>
<td>MOEA/D</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>In essence, while both methods aim to simplify multi-objective problems, aggregation methods directly combine objectives into one, often requiring prior knowledge of preferences and potentially limiting the exploration of the Pareto front. Decomposition-based methods, on the other hand, break the problem into smaller, interconnected subproblems, allowing for more effective exploration of the Pareto front, especially in high-dimensional and complex scenarios.</p>
</sec>
</sec>
<sec id="s3">
<label>3</label>
<title>Classification of Multi-Objective Evolutionary Algorithms (MOEAs)</title>
<p>Traditional optimization techniques, such as gradient-based methods, rely on assumptions like linearity, convexity, or differentiability, which limit their applicability for complex MOOPs. EAs, by contrast, do not impose such rigid constraints. Inspired by natural selection and biological evolution, they offer a flexible, population-based framework that explores many diverse solutions simultaneously, making them particularly well-suited for tackling MOOPs.</p>
<p>The application of EAs to multi-objective problems, known as MOEAs, began in the 1980s. The earliest attempt to identify Pareto-optimal fronts using evolutionary principles was the Vector Evaluated Genetic Algorithm (VEGA) [<xref ref-type="bibr" rid="ref-39">39</xref>], introduced in 1984. Prior to VEGA, genetic algorithms focused solely on single-objective optimization. VEGA handled multiple objectives by dividing the population into subgroups and selecting the individuals of each subgroup according to a different objective. While VEGA introduced the notion of evaluating solutions across multiple objectives, it suffered from poor diversity and biased convergence; its tendency to concentrate solutions in specific regions of the Pareto front (especially convex ones) revealed the need for more balanced selection strategies. In 1993, the Niched Pareto Genetic Algorithm (NPGA) was proposed by Horn et al. as an early attempt to maintain diversity among non-dominated solutions using a tournament selection mechanism and fitness sharing. NPGA further advanced the application of Pareto dominance in MOEAs and highlighted the importance of maintaining a well-spread Pareto front [<xref ref-type="bibr" rid="ref-42">42</xref>]. The Multi-Objective Genetic Algorithm (MOGA) [<xref ref-type="bibr" rid="ref-43">43</xref>], also introduced in 1993, enhanced the selection mechanism through fitness sharing and ranking, resulting in a better spread of solutions. Building on these ideas, and addressing the limitations of VEGA (its lack of diversity control and its inefficiency in identifying well-distributed Pareto fronts), the Non-dominated Sorting Genetic Algorithm (NSGA) (1994) [<xref ref-type="bibr" rid="ref-44">44</xref>] proposed non-dominated sorting combined with fitness sharing, offering improved convergence toward well-distributed Pareto fronts. 
This enabled the algorithm to classify and evolve solutions based on Pareto dominance rather than scalarized fitness values. While NSGA improved dominance-based ranking, it still struggled with convergence speed and with maintaining diverse solutions. The Strength Pareto Evolutionary Algorithm (SPEA) [<xref ref-type="bibr" rid="ref-36">36</xref>] advanced the field by introducing a strength-based fitness assignment, in which each individual&#x2019;s fitness is derived from the number of other individuals it dominates. This approach differed from earlier methods (like VEGA, MOGA, and NSGA), which emphasized non-dominated sorting but lacked a quantitative dominance metric for selection pressure. This innovation enabled better selection pressure toward optimal solutions while preserving diversity.</p>
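<p>To make the strength-based idea concrete, a simplified sketch of SPEA-style fitness assignment (for minimization) might look as follows; the function names and the normalization constant are illustrative simplifications, not the exact published scheme.</p>

```python
# Illustrative sketch of SPEA-style strength-based fitness (minimization).
# Names and the (n + 1) normalization are simplifications for illustration.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all <=, one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def strength_fitness(archive, population):
    """Strength of an archive member = fraction of the population it dominates;
    fitness of a population member = 1 + sum of the strengths of its
    dominators (lower is better)."""
    n = len(population)
    strengths = [sum(dominates(a, p) for p in population) / (n + 1)
                 for a in archive]
    fitness = [1.0 + sum(s for a, s in zip(archive, strengths)
                         if dominates(a, p))
               for p in population]
    return strengths, fitness
```

A population member dominated by many strong archive members thus receives a high (poor) fitness, which creates the quantitative selection pressure described above.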
<p>Despite their advancements, early MOEAs had notable limitations. They often struggled to effectively handle large-scale problems due to computational inefficiencies, particularly when the number of objectives increased. Many early algorithms operated under specific assumptions about problem structure, which restricted their applicability to certain problem types. For instance, they frequently relied on a fixed population size and pre-defined selection mechanisms that did not adapt well to the dynamic nature of some optimization landscapes. These limitations called for the development of more sophisticated approaches that could better accommodate the complexities of real-world MOO scenarios.</p>
<p>The field gained formal recognition with the First International Conference on Evolutionary Multi-Criterion Optimization (EMO) in 2001 [<xref ref-type="bibr" rid="ref-3">3</xref>], establishing EMO as a distinct research area. Recent advancements in MOEAs have introduced innovative techniques that enhance their performance and applicability. The following subsections survey these modern developments, highlighting how they address the challenges of multi-objective optimization and how advanced computational techniques are being integrated to tackle increasingly complex optimization problems across various domains.</p>
<sec id="s3_1">
<label>3.1</label>
<title>Pareto-Based MOEAs</title>
<p>Pareto-based methods, such as Non-dominated Sorting Genetic Algorithm (NSGA)-II [<xref ref-type="bibr" rid="ref-36">36</xref>,<xref ref-type="bibr" rid="ref-44">44</xref>], are known for their efficiency and widespread application. NSGA, despite its conceptual strengths, suffered from high computational cost due to nested loops in its sorting procedure and lacked an explicit mechanism to maintain diversity. NSGA-II was proposed to address both of these issues: it introduced a fast non-dominated sorting algorithm and the crowding distance concept for diversity preservation, significantly improving scalability and efficiency in practice.</p>
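<p>As a concrete illustration, the crowding-distance computation introduced by NSGA-II can be sketched as follows. This is a simplified sketch for a single front of objective vectors; the function and variable names are ours for illustration and do not come from any particular library.</p>

```python
# Illustrative sketch of NSGA-II's crowding-distance computation.
# Boundary solutions receive infinite distance so they are always retained.

def crowding_distance(front):
    """Assign a crowding distance to each objective vector in `front`
    (a list of tuples, one objective vector per solution)."""
    n = len(front)
    if n == 0:
        return []
    m = len(front[0])                      # number of objectives
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue                       # degenerate objective: no spread
        for j in range(1, n - 1):
            prev_v = front[order[j - 1]][k]
            next_v = front[order[j + 1]][k]
            dist[order[j]] += (next_v - prev_v) / (hi - lo)
    return dist
```

Solutions with larger crowding distance lie in less populated regions of the objective space and are preferred during selection, which is how NSGA-II preserves diversity without a sharing parameter.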
<p>Pareto-based ranking [<xref ref-type="bibr" rid="ref-45">45</xref>,<xref ref-type="bibr" rid="ref-46">46</xref>] is a fundamental concept in MOEAs that prioritizes solutions based on Pareto dominance. Instead of relying solely on fitness values, solutions are ranked according to their ability to dominate others in terms of multiple objectives. This approach promotes a diverse set of solutions that represent different trade-offs among objectives. In engineering design, Pareto-based ranking is widely used to identify optimal designs that balance competing performance metrics. For instance, in structural optimization, engineers can use Pareto-based ranking to select designs that minimize weight while maximizing strength.</p>
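<p>The notion of Pareto-based ranking can be illustrated with a short sketch (minimization is assumed; this uses a naive ranking loop rather than NSGA-II's fast non-dominated sorting, and the names are ours):</p>

```python
# A minimal sketch of Pareto-dominance ranking for minimization problems.
# Real MOEA implementations use faster sorting procedures.

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all <=, one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_ranks(points):
    """Rank 0 = non-dominated front; rank k = front after removing ranks < k."""
    ranks = {}
    remaining = list(range(len(points)))
    rank = 0
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining)]
        for i in front:
            ranks[i] = rank
        remaining = [i for i in remaining if i not in front]
        rank += 1
    return [ranks[i] for i in range(len(points))]
```
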
</sec>
<sec id="s3_2">
<label>3.2</label>
<title>Decomposition-Based MOEAs</title>
<p>Decomposition-based methods, such as MOEA/D [<xref ref-type="bibr" rid="ref-47">47</xref>], decompose the multi-objective problem into a series of scalar optimization subproblems, which are then optimized in a collaborative manner. Traditional Pareto-based algorithms like NSGA-II and SPEA rely on dominance comparisons, which become computationally expensive and less effective in many-objective scenarios. MOEA/D was introduced to address this by decomposing a multi-objective problem into a set of scalar subproblems and optimizing them simultaneously. This allowed for better scalability and a structured search, particularly useful in high-dimensional problems. This approach allows for more efficient exploration of the Pareto front and better coverage of the solution space. MOEA/D has been successfully applied in complex engineering problems, such as multi-objective scheduling in manufacturing, where different objectives (e.g., minimizing completion time and maximizing resource utilization) need to be optimized simultaneously [<xref ref-type="bibr" rid="ref-48">48</xref>,<xref ref-type="bibr" rid="ref-49">49</xref>].</p>
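<p>As an illustration of the decomposition idea, MOEA/D commonly scalarizes each subproblem with, for example, the Tchebycheff approach. The following is a minimal sketch under that assumption; the names are illustrative and do not correspond to the API of any particular MOEA/D implementation.</p>

```python
# Sketch of the Tchebycheff aggregation commonly used in MOEA/D (minimization).
# `weight` is one subproblem's weight vector; `ideal` holds the best value
# observed so far on each objective.

def tchebycheff(objs, weight, ideal, eps=1e-6):
    """Scalarize an objective vector for one MOEA/D subproblem."""
    return max((w if w > eps else eps) * abs(f - z)
               for f, w, z in zip(objs, weight, ideal))

def maybe_replace(current_objs, child_objs, weight, ideal):
    """A subproblem keeps a neighbor's offspring only if it scalarizes better."""
    return (child_objs
            if tchebycheff(child_objs, weight, ideal)
            < tchebycheff(current_objs, weight, ideal)
            else current_objs)
```

Each weight vector thus defines one scalar subproblem, and neighboring subproblems collaborate by exchanging offspring, which is the structured search the paragraph above describes.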
<p>Most existing MOEAs used population-based approaches, which could be memory- or computation-intensive. Pareto Archived Evolution Strategy (PAES) provided a minimalist alternative using a single-solution evolution strategy and a Pareto archive to retain diversity. It targeted simplicity and efficiency while still achieving good convergence and coverage in smaller or resource-constrained environments.</p>
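<p>The archive mechanism at the heart of PAES can be sketched as follows. This simplified version keeps a bounded set of mutually non-dominated objective vectors and omits the adaptive grid that the original PAES uses for crowding control; all names are illustrative.</p>

```python
# Minimal sketch of a bounded Pareto archive in the spirit of PAES
# (the real algorithm also uses an adaptive grid for crowding; omitted here).

def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_add(archive, candidate, max_size=100):
    """Insert `candidate` (an objective vector) if it is non-dominated,
    dropping any archive members it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive                       # rejected: candidate is dominated
    kept = [a for a in archive if not dominates(candidate, a)]
    kept.append(candidate)
    return kept[:max_size]                   # crude stand-in for grid-based truncation
```
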
<sec id="s3_2_1">
<label>3.2.1</label>
<title>Modern Developments Using Diversity Preservation Mechanisms [<xref ref-type="bibr" rid="ref-50">50</xref>,<xref ref-type="bibr" rid="ref-51">51</xref>]</title>
<p>Diversity preservation mechanisms ensure that the solutions in the population are well-distributed across the objective space. Techniques such as crowding distance, used in NSGA-II, maintain diversity by favoring solutions that are farther apart in the objective space, thus preventing premature convergence. In environmental management, for example, diversity preservation mechanisms can be employed in land-use optimization problems, where it is essential to maintain a diverse set of land-use options that balance ecological, economic, and social objectives.</p>
</sec>
<sec id="s3_2_2">
<label>3.2.2</label>
<title>Addressing Scalability Challenges in MOEAs Using Decomposition Approaches [<xref ref-type="bibr" rid="ref-48">48</xref>,<xref ref-type="bibr" rid="ref-49">49</xref>]</title>
<p>Decomposition methods have gained traction as effective strategies for tackling many-objective problems. The use of decomposition-based methods in MOEAs continues to evolve, with new algorithms being proposed that leverage reference points and adaptive mechanisms to enhance performance in high-dimensional objective spaces [<xref ref-type="bibr" rid="ref-52">52</xref>]. The MOEA/D framework, proposed by Zhang and Li [<xref ref-type="bibr" rid="ref-47">47</xref>], exemplifies this approach by transforming a multi-objective optimization problem into a set of scalar subproblems. Each subproblem is solved in parallel using different weight vectors, which helps manage complexity and improve coverage of the Pareto front. Recent advancements in MOEA/D have included adaptive weight strategies and enhanced neighborhood exploration mechanisms to further improve performance in high-dimensional spaces [<xref ref-type="bibr" rid="ref-53">53</xref>]. Specific examples include adaptive algorithms that dynamically adjust weight vectors based on the current population distribution, as seen in recent studies [<xref ref-type="bibr" rid="ref-54">54</xref>].</p>
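<p>A standard way to generate the well-spread weight vectors that decomposition-based MOEAs rely on is the simplex-lattice (Das-Dennis) design. A minimal sketch, with illustrative names:</p>

```python
# Sketch of the simplex-lattice (Das-Dennis) weight vectors often used to
# seed decomposition-based MOEAs. With `h` divisions over `m` objectives,
# this yields C(h + m - 1, m - 1) vectors whose components sum to 1.

def compositions(total, parts):
    """All tuples of `parts` non-negative integers summing to `total`."""
    if parts == 1:
        return [(total,)]
    out = []
    for i in range(total + 1):
        for tail in compositions(total - i, parts - 1):
            out.append((i,) + tail)
    return out

def simplex_lattice(m, h):
    """Weight vectors with components in {0, 1/h, ..., 1} summing to 1."""
    return [tuple(c / h for c in comp) for comp in compositions(h, m)]
```

Adaptive variants mentioned above start from such a lattice and then relocate vectors toward under-covered regions of the front.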
</sec>
</sec>
<sec id="s3_3">
<label>3.3</label>
<title>Indicator-Based MOEAs</title>
<p>Indicator-based methods utilize performance metrics to guide the selection process in MOEAs. Recent works have proposed using indicators such as hypervolume and generational distance to evaluate the quality of the approximated Pareto front. These metrics serve as alternative criteria for selection, enabling algorithms to focus on improving the overall quality of solutions rather than merely exploring the search space [<xref ref-type="bibr" rid="ref-55">55</xref>]. Indicators like the hypervolume indicator have been particularly effective in guiding the optimization process toward better-converged solutions, as demonstrated in recent applications in resource allocation [<xref ref-type="bibr" rid="ref-54">54</xref>].</p>
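<p>To illustrate how an indicator can quantify the quality of an approximated front, the hypervolume of a two-objective minimization front can be computed with a simple sweep; exact computation in higher dimensions requires specialized algorithms. The sketch below uses illustrative names.</p>

```python
# Sketch of the hypervolume indicator for a 2-objective minimization front.
# `ref` is a reference point dominated by every useful front member.

def hypervolume_2d(front, ref):
    """Area dominated by `front` (a list of (f1, f2) points) up to `ref`."""
    # Keep only points that actually dominate the reference point.
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:                 # sweep left to right in f1
        if f2 < prev_f2:               # skip dominated points
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

A selection scheme that keeps the solutions contributing most to this area pushes the population toward both convergence and spread, which is why hypervolume-based selection is effective.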
<p>Recent advancements have integrated performance metrics directly into the selection process of MOEAs. Techniques such as hypervolume maximization and Pareto spread minimization have been employed to enhance the effectiveness of the search process. These methods allow for a more nuanced evaluation of solutions, leading to improved convergence and diversity [<xref ref-type="bibr" rid="ref-56">56</xref>,<xref ref-type="bibr" rid="ref-57">57</xref>]. Practical applications in engineering design often leverage these techniques to achieve optimal trade-offs among competing objectives, showcasing their effectiveness in real-world scenarios [<xref ref-type="bibr" rid="ref-58">58</xref>].</p>
</sec>
<sec id="s3_4">
<label>3.4</label>
<title>Preference-Based MOEAs</title>
<p>Preference-based evolutionary optimization has emerged as a significant advancement in MOEA research. These methods incorporate user-defined preferences during the optimization process, allowing decision-makers to influence the search towards more desirable solutions. Recent studies have highlighted the potential of artificial preference relations, such as reference directions or preferred areas of solutions, to enhance exploration efficiency and user satisfaction [<xref ref-type="bibr" rid="ref-59">59</xref>]. Examples include applications in resource allocation problems where user preferences significantly impact solution selection, demonstrating the practical utility of these methods [<xref ref-type="bibr" rid="ref-54">54</xref>].</p>
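<p>One common way to encode such user preferences is a reference (aspiration) point combined with an achievement scalarizing function (ASF). The following is a hedged sketch of that idea; the names and the augmentation term are illustrative, not a specific published formulation.</p>

```python
# Sketch of an achievement scalarizing function (ASF), a common device for
# steering the search toward a decision-maker's reference point in
# preference-based EMO (minimization assumed).

def asf(objs, ref_point, weights, rho=1e-4):
    """Smaller is better: solutions near or beyond the reference point score low."""
    terms = [(f - r) / w for f, r, w in zip(objs, ref_point, weights)]
    return max(terms) + rho * sum(terms)   # small augmentation term breaks ties
```

Ranking candidates by this scalar value biases the population toward the region of the Pareto front the decision-maker cares about, without requiring exact trade-off weights.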
</sec>
<sec id="s3_5">
<label>3.5</label>
<title>Hybrid MOEAs</title>
<p>Hybrid methods that combine MOEAs with other optimization techniques, such as swarm intelligence and local search algorithms, are becoming more prevalent. These combinations can enhance the search capabilities of MOEAs, enabling them to tackle more complex and high-dimensional optimization problems effectively [<xref ref-type="bibr" rid="ref-54">54</xref>,<xref ref-type="bibr" rid="ref-60">60</xref>&#x2013;<xref ref-type="bibr" rid="ref-62">62</xref>]. A recent study proposed a Hybrid Selection based MOEA (HS-MOEA) that combines dominance, decomposition, and indicator-based strategies to enhance the balance between diversity and convergence in multi-objective optimization problems. This approach demonstrated superior performance on various test suites, including DTLZ and WFG [<xref ref-type="bibr" rid="ref-54">54</xref>]. A notable study by Singh &#x0026; Chaturvedi [<xref ref-type="bibr" rid="ref-60">60</xref>] integrated Particle Swarm Optimization with MOEAs to enhance convergence speed while maintaining solution diversity.</p>
<p>Combining evolutionary methods with machine learning or metaheuristics can enhance performance, improve convergence, and raise solution quality. For example, combining MOEAs with reinforcement learning has demonstrated promising results in dynamic optimization environments. While MOEA/D offered a powerful decomposition strategy, it lacked fine-tuning capabilities in local regions. MOEA/D-H hybridized MOEA/D with local search methods to combine global exploration with local exploitation. This hybrid approach improved solution quality in complex real-world problems with rugged or discontinuous landscapes. As optimization problems became more dynamic and data-driven, classical MOEAs showed limitations in adaptability and computational efficiency. The integration of machine learning techniques (e.g., neural networks, surrogate models, reinforcement learning) into MOEAs aims to guide the search process more intelligently, improve convergence in real-time applications, and handle expensive objective evaluations through prediction and adaptation mechanisms.</p>
<sec id="s3_5_1">
<label>3.5.1</label>
<title>Modern Developments: Incorporating Machine Learning [<xref ref-type="bibr" rid="ref-63">63</xref>]</title>
<p>The integration of machine learning techniques, such as surrogate models and reinforcement learning, enhances decision-making and exploration in multi-objective optimization. Surrogate models approximate the objective functions, reducing computation time, while reinforcement learning helps in adaptively selecting solutions. Surrogate-based optimization is widely used in engineering design problems, such as aerodynamic shape optimization, where evaluating the objective functions can be computationally expensive. Machine learning techniques help in efficiently finding optimal designs by predicting performance based on limited evaluations.</p>
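<p>A toy illustration of surrogate-assisted pre-screening follows: a deliberately simple 1-nearest-neighbor model predicts the objective of each candidate from already-evaluated points, and only the most promising candidate is sent to the expensive true evaluation. Real surrogate-assisted MOEAs use far richer models (e.g., Kriging or neural networks); all names here are ours.</p>

```python
# Toy sketch of surrogate-assisted pre-screening with a 1-nearest-neighbor
# surrogate. `evaluated` is a list of (decision_vector, objective) pairs.

def predict_1nn(x, evaluated):
    """Predict f(x) as the objective value of the closest evaluated point."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(evaluated, key=lambda pair: dist2(pair[0], x))[1]

def prescreen(candidates, evaluated):
    """Return the candidate with the best (lowest) predicted objective,
    i.e., the only one forwarded to the expensive evaluation."""
    return min(candidates, key=lambda x: predict_1nn(x, evaluated))
```
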
</sec>
<sec id="s3_5_2">
<label>3.5.2</label>
<title>Modern Developments: Integration of Deep Learning Techniques [<xref ref-type="bibr" rid="ref-64">64</xref>,<xref ref-type="bibr" rid="ref-65">65</xref>]</title>
<p>Deep learning techniques have been integrated into MOEAs to enhance their ability to model complex relationships between objectives. This integration allows for more effective exploration of the solution space and improved convergence towards optimal solutions. For instance, Li et al. [<xref ref-type="bibr" rid="ref-64">64</xref>] explored the use of deep reinforcement learning to guide the search process in MOEAs, improving convergence speed and solution quality. This hybrid approach leverages the strengths of deep learning in recognizing patterns and optimizing search strategies.</p>
</sec>
<sec id="s3_5_3">
<label>3.5.3</label>
<title>Ensemble Methods [<xref ref-type="bibr" rid="ref-66">66</xref>,<xref ref-type="bibr" rid="ref-67">67</xref>]</title>
<p>Ensemble methods combine multiple MOEAs to leverage their strengths and mitigate weaknesses. They are increasingly being applied in MOEAs to enhance solution diversity and robustness. By combining multiple MOEA strategies, researchers have demonstrated improved performance and robustness in complex optimization problems. For example, a study by Chen et al. [<xref ref-type="bibr" rid="ref-49">49</xref>] presented an ensemble approach that integrates different MOEAs, leading to better exploration of the objective space and enhanced Pareto front approximation.</p>
</sec>
<sec id="s3_5_4">
<label>3.5.4</label>
<title>Emerging Trends in Hybrid Approaches</title>
<p>The integration of MOEAs with other optimization techniques, such as metaheuristics and machine learning, is becoming increasingly common. These hybrid approaches aim to balance exploration and exploitation more effectively. Recent applications include multi-agent systems and adaptive control, demonstrating their versatility and effectiveness in complex optimization scenarios [<xref ref-type="bibr" rid="ref-68">68</xref>]. Specific case studies have shown that combining MOEAs with swarm intelligence techniques leads to improved exploration of the solution space, particularly in dynamic environments [<xref ref-type="bibr" rid="ref-58">58</xref>].</p>
<p>Recent developments have also focused on the integration of advanced machine learning paradigms&#x2014;including neural networks, surrogate models, and large language models (LLMs)&#x2014;into the design of MOEAs. These integrations aim to improve convergence speed, model expensive objective functions, and enhance decision support in complex, high-dimensional, and dynamic optimization tasks.</p>
<p>A comprehensive and up-to-date account of such developments is presented in the recent book by Saxena et al. (2024), Machine Learning Assisted Evolutionary Multi- and Many-Objective Optimization [<xref ref-type="bibr" rid="ref-69">69</xref>]. The book discusses various ML-assisted strategies such as preference learning, model-based selection, performance prediction, and automated knowledge transfer. Moreover, it addresses the integration of ML in many-objective problems (MaOPs), a growing challenge in the EMO field, and outlines emerging trends such as deep learning and LLM-based hybridizations that guide or accelerate the evolutionary search process.</p>
<p><xref ref-type="table" rid="table-5">Table 5</xref> presents the motivation behind the evolution of various MOEA methods.</p>
<table-wrap id="table-5">
<label>Table 5</label>
<caption>
<title>Motivation behind the evolution of various MOEA methods</title>
</caption>
<table>
<colgroup>
<col/>
<col/>
<col align="center"/>
</colgroup>
<thead>
<tr>
<th>Algorithm</th>
<th>Year</th>
<th align="center">Motivation behind its development</th>
</tr>
</thead>
<tbody>
<tr>
<td><bold>VEGA</bold></td>
<td>1984</td>
<td>First MOEA; introduced multi-objective selection by dividing the population per objective; biased toward convex regions; lacked diversity mechanisms.</td>
</tr>
<tr>
<td><bold>NPGA</bold></td>
<td>1993</td>
<td>Introduced niching and fitness sharing among non-dominated solutions</td>
</tr>
<tr>
<td><bold>MOGA</bold></td>
<td>1993</td>
<td>Introduced ranking and fitness sharing to improve Pareto spread</td>
</tr>
<tr>
<td><bold>NSGA</bold></td>
<td>1994</td>
<td>Addressed VEGA&#x2019;s limitations by introducing non-dominated sorting; improved selection of diverse Pareto-optimal solutions, but had high computational cost.</td>
</tr>
<tr>
<td><bold>SPEA</bold></td>
<td>1998</td>
<td>Introduced fitness assignment based on the strength of domination; improved convergence and diversity control in the solution population.</td>
</tr>
<tr>
<td><bold>PAES</bold></td>
<td>1999</td>
<td>Proposed a minimalist, single-solution strategy using a Pareto archive for diversity preservation; suited for small-scale or low-resource scenarios.</td>
</tr>
<tr>
<td><bold>NSGA-II</bold></td>
<td>2000</td>
<td>Solved NSGA&#x2019;s efficiency issues with fast sorting and introduced crowding distance for explicit diversity maintenance; became a benchmark in MOEAs.</td>
</tr>
<tr>
<td><bold>MOEA/D</bold></td>
<td>2007</td>
<td>Introduced problem decomposition into scalar subproblems; improved scalability and performance in many-objective problems.</td>
</tr>
<tr>
<td><bold>MOEA/D-H</bold></td>
<td>2015</td>
<td>Combined MOEA/D with local search to enhance local exploitation; improved convergence in complex or rugged optimization landscapes.</td>
</tr>
<tr>
<td><bold>ML-integrated MOEAs</bold></td>
<td>2020s</td>
<td>Leveraged machine learning (e.g., surrogate models, reinforcement learning) to improve adaptability, convergence speed, and handle expensive objective evaluations.</td>
</tr>
</tbody>
</table>
</table-wrap>
<p>The landscape of MOEAs is rapidly evolving, with ongoing research addressing the complexities posed by many-objective optimization and high-dimensional spaces. The integration of innovative techniques, such as decomposition methods, preference-based optimization, and hybrid approaches, is paving the way for more effective solutions to diverse and complex real-world problems.</p>
<p>To provide a comprehensive overview of the diverse landscape of MOEAs, <xref ref-type="fig" rid="fig-1">Fig. 1</xref> categorizes the main methods and techniques, including traditional, decomposition-based, AI-based, and hybrid approaches.</p>
<fig id="fig-1">
<label>Figure 1</label>
<caption>
<title>Classification of MOEAs</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-1.tif"/>
</fig>
</sec>
</sec>
</sec>
<sec id="s4">
<label>4</label>
<title>Advanced Topics and Emerging Trends</title>
<sec id="s4_1">
<label>4.1</label>
<title>Many-Objective Optimization (MaO)</title>
<p>MOOPs with more than three objectives introduce significant challenges due to the exponential growth in the number of Pareto-optimal solutions. This phenomenon complicates the search for optimal solutions and necessitates the development of specialized techniques. The term &#x201C;many-objective optimization&#x201D; has been introduced to characterize these complex problems, emphasizing the need for innovative approaches to maintain solution diversity and convergence [<xref ref-type="bibr" rid="ref-70">70</xref>,<xref ref-type="bibr" rid="ref-71">71</xref>]. The implications of many-objective optimization extend beyond mere solution count; they affect computational resources and the effectiveness of algorithms in converging to a diverse set of optimal solutions.</p>
<p>As the complexity of real-world problems increases, many applications now involve more than three or four conflicting objectives. This shift has given rise to the field of MaO, typically referring to optimization problems with four or more objectives. The conventional MOEA frameworks, especially Pareto-based methods like NSGA-II and SPEA2, often face scalability issues in such contexts due to challenges like loss of selection pressure, insufficient diversity, and difficulties in Pareto ranking when most solutions tend to become non-dominated.</p>
<p>This has led to the development of EMaO algorithms, which extend MOEAs with specialized techniques for high-dimensional objective spaces. Examples include:
<list list-type="bullet">
<list-item>
<p>NSGA-III, which introduces reference points to maintain diversity in many-objective problems.</p></list-item>
<list-item>
<p>MOEA/D with adaptive weights, which improves decomposition-based scalability.</p></list-item>
<list-item>
<p>Objective reduction and dimensionality reduction techniques to mitigate the curse of dimensionality.</p></list-item>
<list-item>
<p>Indicator-based methods, such as hypervolume-based selection or IGD (Inverted Generational Distance), optimized for many-objective landscapes.</p></list-item>
</list></p>
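<p>The reference-point idea behind NSGA-III can be sketched as follows: each (normalized) objective vector is associated with the reference direction to which it has the smallest perpendicular distance, and selection then balances the number of solutions per direction. The function names below are illustrative.</p>

```python
# Sketch of NSGA-III's association step: assign each normalized objective
# vector to the nearest reference direction by perpendicular distance.

import math

def perpendicular_distance(point, direction):
    """Distance from `point` to the ray spanned by `direction`."""
    norm = math.sqrt(sum(d * d for d in direction))
    proj = sum(p * d for p, d in zip(point, direction)) / norm
    return math.sqrt(max(sum(p * p for p in point) - proj * proj, 0.0))

def associate(point, directions):
    """Index of the reference direction nearest to `point`."""
    return min(range(len(directions)),
               key=lambda i: perpendicular_distance(point, directions[i]))
```

Because every direction attracts roughly equal numbers of solutions, diversity is maintained even when nearly all solutions are mutually non-dominated, which is exactly the failure mode of plain Pareto ranking in many-objective problems.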
<p>Additionally, machine learning techniques and large language models (LLMs) have been recently incorporated into EMaO frameworks to guide the search process intelligently, predict performance, or reduce computational complexity. The growing importance of EMaO is reflected in the increasing number of benchmark suites (e.g., MaF test problems), dedicated workshops, and focused research on real-world, large-scale optimization tasks in fields such as energy systems, supply chains, and bioinformatics.</p>
</sec>
<sec id="s4_2">
<label>4.2</label>
<title>Real-Time Decision Support</title>
<p>The application of MOEAs in real-time decision-making scenarios is on the rise. Researchers are exploring how MOEAs can be utilized in systems requiring immediate responses, such as autonomous systems and smart cities. This trend emphasizes the need for algorithms that can deliver quick and reliable solutions in dynamic and uncertain environments [<xref ref-type="bibr" rid="ref-72">72</xref>&#x2013;<xref ref-type="bibr" rid="ref-74">74</xref>].</p>
<p>Large-Scale Decision Variable Analysis [<xref ref-type="bibr" rid="ref-75">75</xref>,<xref ref-type="bibr" rid="ref-76">76</xref>]: Research on an algorithm denoted as LMEA-DVQA highlights the importance of decision variable analysis in improving the performance of multi-objective evolutionary algorithms. This study compares several algorithms and emphasizes the effectiveness of the proposed method in various test scenarios [<xref ref-type="bibr" rid="ref-20">20</xref>].</p>
</sec>
<sec id="s4_3">
<label>4.3</label>
<title>Ethical Considerations</title>
<p>Fairness, transparency, and ethical implications in algorithmic decisions are becoming key concerns. With the growing importance of ethical considerations in decision-making, future MOEA research will likely incorporate ethical criteria directly into fitness functions or develop mechanisms to ensure the fairness, equity, and sustainability of the generated solutions. Ensuring that optimization processes consider social and ethical implications will be vital in areas such as resource allocation and environmental sustainability [<xref ref-type="bibr" rid="ref-73">73</xref>,<xref ref-type="bibr" rid="ref-77">77</xref>&#x2013;<xref ref-type="bibr" rid="ref-79">79</xref>].</p>
</sec>
<sec id="s4_4">
<label>4.4</label>
<title>Parallel and Distributed MOEAs</title>
<p>The utilization of parallel computing resources, including GPUs and distributed architectures, has significantly improved the scalability of MOEAs. Recent advancements emphasize the importance of leveraging parallelization to enhance computational efficiency, particularly in high-dimensional optimization problems [<xref ref-type="bibr" rid="ref-58">58</xref>]. This trend is expected to continue as computational resources become more accessible and powerful.</p>
</sec>
<sec id="s4_5">
<label>4.5</label>
<title>Integration of Decision-Making in EMO</title>
<p>While Evolutionary Multi-Objective Optimization (EMO) techniques are adept at generating a diverse set of trade-off solutions (Pareto fronts), the ultimate selection of a single solution or a subset of solutions often involves a human decision-maker. This process bridges the gap between optimization and real-world application. As such, the integration of decision-making mechanisms into EMO algorithms has become a key area of interest.</p>
<p>There are three main paradigms for incorporating decision-making in EMO:
<list list-type="bullet">
<list-item>
<p>A priori approaches, where decision-makers specify preferences (e.g., weights, aspiration levels) before the optimization begins.</p></list-item>
<list-item>
<p>A posteriori approaches, where the algorithm generates the Pareto front, and the decision-maker selects a preferred solution afterward.</p></list-item>
<list-item>
<p>Interactive approaches, where preferences are iteratively updated during the optimization process, allowing the search to dynamically adjust to evolving decision criteria.</p></list-item>
</list></p>
<p>Recent advancements include preference-based EMO algorithms (e.g., reference-point-based NSGA-III) and interactive EMO systems, which employ machine learning or surrogate models to predict and incorporate user preferences on the fly. These approaches enhance the practical usability of EMO methods, especially in domains such as environmental policy planning, engineering design, healthcare, and financial portfolio selection.</p>
<p>Incorporating decision-making into EMO frameworks also facilitates the resolution of many-objective problems, where visualizing or interpreting large Pareto sets becomes increasingly difficult. Decision-making support systems, visualization tools, and dimensionality reduction techniques now play a pivotal role in making EMO results actionable.</p>
</sec>
</sec>
<sec id="s5">
<label>5</label>
<title>Bibliometric Analysis and MOEA Visibility</title>
<sec id="s5_1">
<label>5.1</label>
<title>Bibliometric Analysis and Its Relevance to the Evolution of MOEAs</title>
<p>Bibliometric analysis provides valuable insights into the historical trajectory and scholarly attention surrounding the development of MOEAs. While previous sections have discussed the algorithmic innovations in MOEAs from a technical standpoint, this section complements that perspective by revealing how such developments are reflected in the research landscape through publication trends, journal focus, and geographic distribution.</p>
<p>The annual volume of publications on MOEAs across major academic databases (ScienceDirect, IEEE Xplore, Web of Science) demonstrates a significant growth trend over the past two decades. This increase correlates with key algorithmic breakthroughs such as the introduction of NSGA-II, MOEA/D, and recent hybrid and deep-learning-based MOEAs, showing a tangible link between algorithmic milestones and scholarly output.</p>
<p>Moreover, the distribution of MOEA-related research across journals reveals the interdisciplinary nature of these algorithms. Journals focusing on evolutionary computation, soft computing, applied mathematics, operations research, and domain-specific applications (e.g., energy, healthcare, logistics) have all contributed to shaping the algorithmic evolution of MOEAs. This trend underlines how methodological advancements have been driven both by theoretical interest and practical demand.</p>
<p>Highly cited authors and institutions identified through citation analysis also mirror the timeline of algorithmic development. For instance, the rise in citations of foundational works by Deb (e.g., NSGA-II) and Zhang (e.g., MOEA/D) corresponds to the widespread adoption of these algorithms in subsequent research. Similarly, countries with significant contributions&#x2014;such as India, China, Germany, and the USA&#x2014;have played central roles in both theoretical and application-oriented advancements.</p>
<p>Thus, rather than merely presenting descriptive bibliometric statistics, this section contextualizes the scholarly evolution of MOEAs in parallel with their technical advancements. The bibliometric patterns observed serve as a macro-level indicator of how the field has matured, diversified, and responded to new computational and societal challenges.</p>
</sec>
<sec id="s5_2">
<label>5.2</label>
<title>MOEAs in ScienceDirect</title>
<p>In this section, the visibility of MOEAs on the ScienceDirect website is explored. Searching this database for the phrase (&#x201C;Multi-objective Evolutionary Algorithm&#x201D;) yielded the following results. The number of journal papers in each year is shown in <xref ref-type="fig" rid="fig-2">Fig. 2</xref>:</p>
<fig id="fig-2">
<label>Figure 2</label>
<caption>
<title>Number of MOEA papers in each year in ScienceDirect</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-2.tif"/>
</fig>
<p>In total, 6955 journal papers were found in this field; the quotas of different journals are as follows and illustrated in <xref ref-type="fig" rid="fig-3">Fig. 3</xref>: Applied Soft Computing (699), Expert Systems with Applications (473), Swarm and Evolutionary Computation (449), Information Sciences (383), Computers &#x0026; Industrial Engineering (238), Engineering Applications of Artificial Intelligence (202), Energy (195), Knowledge-Based Systems (170), Computers &#x0026; Operations Research (124), Neurocomputing (123), Applied Energy (120), Journal of Cleaner Production (112), Energy Conversion and Management (107), European Journal of Operational Research (104), International Journal of Electrical Power &#x0026; Energy Systems (90), Journal of Hydrology (86), Renewable Energy (65), Applied Thermal Engineering (64), Procedia Computer Science (61), Renewable and Sustainable Energy Reviews (60), IFAC-PapersOnLine (56), Environmental Modelling &#x0026; Software (55), Future Generation Computer Systems (54), Ocean Engineering (52), Reliability Engineering &#x0026; System Safety (49).</p>
<fig id="fig-3">
<label>Figure 3</label>
<caption>
<title>The quotas of different journals in ScienceDirect (MOEA)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-3.tif"/>
</fig>
<p>The increasing number of publications per year suggests that the field of MOEAs has reached a stage of maturity since the earliest papers appeared in 2001. Nevertheless, many basic issues remain to be resolved, and an active and vibrant worldwide community of researchers continues to work on them.</p>
<p>We repeated the search on ScienceDirect, this time with the combined keywords (&#x201C;Multi-objective&#x201D; and &#x201C;Evolutionary Algorithm&#x201D;). This search returned 20,814 journal papers; the number of journal papers in each year is shown in <xref ref-type="fig" rid="fig-4">Fig. 4</xref>, and the shares of the different journals in this case are illustrated in <xref ref-type="fig" rid="fig-5">Fig. 5</xref>.</p>
<fig id="fig-4">
<label>Figure 4</label>
<caption>
<title>Number of MO&#x0026;EA papers in each year in ScienceDirect</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-4.tif"/>
</fig><fig id="fig-5">
<label>Figure 5</label>
<caption>
<title>The quotas of different journals in ScienceDirect (MO&#x0026;EA)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-5.tif"/>
</fig>
</sec>
<sec id="s5_3">
<label>5.3</label>
<title>MOEAs in IEEE Publications</title>
<p>A similar search for the (&#x201C;Multi-objective Evolutionary Algorithm&#x201D;) keywords on the IEEE website yielded the following results from 2003 up to 2025: Conference Publications (980); Journals &#x0026; Magazines, Early Access Articles (234). The distribution of IEEE publications over the time periods is as follows (<xref ref-type="fig" rid="fig-6">Fig. 6</xref>):</p>
<fig id="fig-6">
<label>Figure 6</label>
<caption>
<title>Number of MOEA papers in each time period in IEEE publications</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-6.tif"/>
</fig>
<p>2003&#x2013;2010: Conference Publications (245), Journals &#x0026; Magazines, Early Access Articles (9).</p>
<p>2011&#x2013;2020: Conference Publications (481), Journals &#x0026; Magazines, Early Access Articles (98).</p>
<p>2021&#x2013;2024: Conference Publications (239), Journals &#x0026; Magazines, Early Access Articles (127).</p>
<p>In total, 234 journal papers were found in this field; the shares of the different journals are illustrated in <xref ref-type="fig" rid="fig-7">Fig. 7</xref>.</p>
<fig id="fig-7">
<label>Figure 7</label>
<caption>
<title>The quotas of different journals in IEEE publications (MOEA)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-7.tif"/>
</fig>
<p>We repeated the search on IEEE, this time with the combined keywords (&#x201C;Multi-objective&#x201D; and &#x201C;Evolutionary Algorithm&#x201D;). This search returned 1195 journal papers; the number of journal papers in each time period is shown in <xref ref-type="fig" rid="fig-8">Fig. 8</xref>, and the shares of the different journals in this case are illustrated in <xref ref-type="fig" rid="fig-9">Fig. 9</xref>.</p>
<fig id="fig-8">
<label>Figure 8</label>
<caption>
<title>Number of MO&#x0026;EA papers in each time period in IEEE publications</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-8.tif"/>
</fig><fig id="fig-9">
<label>Figure 9</label>
<caption>
<title>The quotas of different journals in IEEE publications (MO&#x0026;EA)</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-9.tif"/>
</fig>
<p>As can be seen in <xref ref-type="fig" rid="fig-6">Figs. 6</xref> and <xref ref-type="fig" rid="fig-8">8</xref>, although the final time period covers a shorter range (only 2021 to 2025), the number of journal articles in this field has grown compared to the previous, longer periods, indicating the growing attention of the scientific community to this research field.</p>
<p>The authors with the most publications in IEEE are the following: Yaochu Jin (62), Qingfu Zhang (44), Xingyi Zhang (39), Kay Chen Tan (39), Ye Tian (35), Gary G. Yen (33), Jun Zhang (26), Xin Yao (24), Maoguo Gong (23), Ling Wang (21), Shengxiang Yang (20), Qiuzhen Lin (19), Hisao Ishibuchi (19), Hai-Lin Liu (19), Lei Zhang (19), Dunwei Gong (18), Ran Cheng (17), Yong Wang (16), Aimin Zhou (16), Jing Liu (15), Licheng Jiao (15), Fan Cheng (14), Ke Tang (13), Zexuan Zhu (13), Carlos A. Coello Coello (13).</p>
<p>In these years, the numbers of IEEE conference papers on this topic in different countries were as follows: China (350), Canada (80), Australia (67), USA (60), Japan (55), Singapore (50), Spain (49), UK (41), New Zealand (38), Mexico (28), Poland (28), Norway (27), Brazil (24).</p>
</sec>
<sec id="s5_4">
<label>5.4</label>
<title>MOEAs in the ISI Web of Science (WOS)</title>
<p>A search for the (&#x201C;Multi-objective Evolutionary Algorithm&#x201D;) keywords in the ISI WOS returned 3241 documents, comprising 2062 articles, 1220 proceedings papers, 26 early-access articles, 23 review articles, 7 book chapters, 4 retracted publications, 2 editorial materials, 2 meeting abstracts, 1 correction, and 1 letter. The number of papers published in each year in this database is shown in <xref ref-type="fig" rid="fig-10">Fig. 10</xref>.</p>
<fig id="fig-10">
<label>Figure 10</label>
<caption>
<title>Number of MOEA papers in each year in ISI WOS</title>
</caption>
<graphic mimetype="image" mime-subtype="tif" xlink:href="CMC_68087-fig-10.tif"/>
</fig>
<p>In this database, the authors with the most published papers are: Coello, Carlos A Coello (62), Kim, Kwang-Yong (29), Koziel, Slawomir (23), Tan, Kay Chen (22), Gao, Liang (21), Jin, Yaochu (20), Wang, Ling (19), Bekasiewicz, Adrian (18), London, Joao Bosco Augusto (17), Delbem, Alexandre C B (17), Yao, Xin (17), While, L. (16), Zhang, Hui Jie (16), Jiao, Licheng (16), Rui, Wang (16), Cheng, Fan (15), Husain, Afzal (13), Guo, Xiwang (13), Liu, Jing (13), Konstantinidis, Andreas (13).</p>
</sec>
<sec id="s5_5">
<label>5.5</label>
<title>The Most Cited Papers at the ISI WOS</title>
<p>The search on the ISI Web of Science allows us to identify the most cited papers, which provide a picture of the important contributions on the topic and represent approaches from different categorization areas. One notable point about the most cited articles is that 4 of the 6 most cited articles were published in the journal Evolutionary Computation.</p>
<p>Regarding the search methodology, let us mention that the citation data reported in this subsection is based on a keyword-specific bibliometric search in the ISI Web of Science database using the exact term &#x201C;Multi-objective Evolutionary Algorithm&#x201D;. This keyword was chosen to ensure consistency and specificity in identifying publications that explicitly fall under the MOEA category. However, it is important to note that some of the most impactful publications in the broader EMO field&#x2014;such as the NSGA-II paper by Deb et al. (2002)&#x2014;may not contain this exact phrase in their title or abstract and thus may not appear at the top of this particular search. To address this limitation and better reflect the true academic impact, we have included a dedicated subentry for the NSGA-II paper, which, as of 2024, has accumulated over 33,000 citations, making it the most cited paper in the field of evolutionary computation.</p>
<p>NSGA-II (33,478 Citations) [<xref ref-type="bibr" rid="ref-36">36</xref>]: This seminal paper by Deb et al. (2002) introduced the NSGA-II algorithm, which significantly improved upon earlier multi-objective genetic algorithms in terms of computational efficiency and diversity preservation. Although the paper may not explicitly use the phrase &#x201C;Multi-objective Evolutionary Algorithm&#x201D; in its title or keywords, it is universally recognized as one of the foundational and most impactful works in the EMO domain, and as such, it is included here outside of the strict keyword-based filtering for the sake of completeness and clarity.</p>
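To make the Pareto-dominance relation underlying NSGA-II&#x2019;s non-dominated sorting concrete, the following minimal Python sketch extracts the first non-dominated front of a small set of bi-objective points, assuming minimization. This is our own illustration of the dominance concept, not code from the cited paper, and the function names are ours:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

front = non_dominated_front([(1, 5), (2, 3), (3, 4), (4, 1)])
# (3, 4) is dominated by (2, 3); the remaining three points are
# mutually non-dominated and form the first front
```

NSGA-II repeats this peeling of fronts (with a faster bookkeeping scheme than the quadratic check above) and breaks ties within a front by crowding distance to preserve diversity.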
<p>HypE: An Algorithm for Fast Hypervolume-Based Many-Objective Optimization (1532 Citations) [<xref ref-type="bibr" rid="ref-45">45</xref>]: This paper addresses the high computational effort required to calculate the hypervolume (the only single-set quality measure known to be strictly monotonic with regard to Pareto dominance), which has prevented the full exploitation of the potential of this indicator. It proposes a fast search algorithm that uses Monte Carlo simulation to approximate the exact hypervolume values. The main idea is to rely on the ranking of solutions induced by the hypervolume indicator rather than on the actual indicator values. As a result, HypE is presented as a hypervolume-estimation algorithm for MOO that trades off the accuracy of the estimates against the available computational resources and execution time.</p>
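The Monte Carlo principle behind HypE can be sketched in a few lines of Python: sample points uniformly in the box spanned by the front&#x2019;s ideal corner and a reference point, and estimate the hypervolume from the fraction of samples dominated by the front. This illustrates only the sampling idea, not HypE&#x2019;s fitness-assignment scheme; the function name and reference-point convention are our own assumptions:

```python
import random

def mc_hypervolume(front, ref, n_samples=100_000, seed=0):
    """Monte Carlo hypervolume estimate for a minimization problem:
    the volume dominated by `front` up to the reference point `ref`."""
    rng = random.Random(seed)
    lo = [min(p[i] for p in front) for i in range(len(ref))]
    box = 1.0
    for l, r in zip(lo, ref):
        box *= r - l                       # volume of the sampling box
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(l, r) for l, r in zip(lo, ref)]
        # the sample counts if some front member weakly dominates it
        if any(all(p[i] <= s[i] for i in range(len(s))) for p in front):
            hits += 1
    return box * hits / n_samples
```

For the front {(1, 3), (3, 1)} with reference point (4, 4), the exact hypervolume is 5, and the estimate converges to it as the sample count grows; the accuracy/runtime trade-off is governed directly by `n_samples`.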
<p>Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization (1137 Citations) [<xref ref-type="bibr" rid="ref-80">80</xref>]: In this paper, for the first time, the solution of MOOPs using the Grey Wolf Optimizer (GWO) algorithm is investigated. For this purpose, the Multi-Objective Grey Wolf Optimizer (MOGWO) is introduced by integrating a fixed-size external archive for saving and retrieving Pareto-optimal solutions into GWO. The archive is used to select the leaders of the social hierarchy, which guide the simulated hunting behavior of the wolves. MOGWO shows improved accuracy compared to several earlier MOEAs.</p>
<p>Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework (536 Citations) [<xref ref-type="bibr" rid="ref-81">81</xref>]: This paper introduces a method for many-objective, multimodal optimization that combines epsilon dominance, epsilon-progress as a measure of convergence speed, randomized restarts, and auto-adaptive multi-operator recombination in a unified optimization framework called the Borg MOEA. Rather than a single algorithm, the Borg MOEA represents a class of algorithms whose operators are adaptively selected based on the problem at hand.</p>
<p>MOEA/D with Adaptive Weight Adjustment (523 Citations) [<xref ref-type="bibr" rid="ref-82">82</xref>]: This paper introduces a MOEA based on decomposition (MOEA/D) with adaptive weight vector adjustment (MOEA/D-AWA) to deal with complex Pareto fronts in the target MOOP. The MOOP is decomposed into a set of scalar subproblems using uniformly distributed aggregation weight vectors. Then, by analyzing the geometric relationship between weight vectors and optimal solutions under the Chebyshev decomposition scheme, a novel weight vector initialization method and an adaptive weight vector adjustment strategy with periodic weight adjustment capability are proposed.</p>
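The Chebyshev (Tchebycheff) decomposition at the heart of MOEA/D can be illustrated with a one-line scalarization: each weight vector turns the MOOP into one scalar subproblem. The following is a generic sketch under a minimization assumption, with our own function names, not code from the cited paper:

```python
def tchebycheff(f, weights, ideal):
    """Chebyshev scalarization: the subproblem value is the largest
    weighted deviation of the objective vector f from the ideal point."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, ideal))

# the same objective vector scored under two different weight vectors
z = (0.0, 0.0)                                  # ideal point
g1 = tchebycheff((2.0, 1.0), (0.75, 0.25), z)   # emphasizes objective 1
g2 = tchebycheff((2.0, 1.0), (0.25, 0.75), z)   # emphasizes objective 2
```

Minimizing such subproblems for a spread set of weight vectors yields solutions spread along the Pareto front; since the achievable spread depends on the geometry of the weight vectors relative to the front, adaptively adjusting the weights, as MOEA/D-AWA does, can improve coverage of complex fronts.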
<p>Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO) (499 Citations) [<xref ref-type="bibr" rid="ref-83">83</xref>]: This paper introduces the Sigma method, a new way of finding the best local guide (the global-best particle) for each particle from an archive of Pareto-optimal solutions in Multi-Objective Particle Swarm Optimization (MOPSO). This selection method has a significant impact on the convergence and diversity of the solutions, especially when optimizing problems with a large number of objectives.</p>
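For the bi-objective case, the Sigma method assigns each point a scalar sigma value that is constant along lines through the origin of the objective space; each particle then takes as its guide the archive member with the closest sigma value. The sketch below is our own illustrative rendering of that idea, with hypothetical function names:

```python
def sigma_2d(f1, f2):
    """Sigma value of a bi-objective point: points lying on the same
    line through the origin share the same sigma value."""
    return (f1**2 - f2**2) / (f1**2 + f2**2)

def nearest_guide(particle_f, archive):
    """Select the archive member whose sigma value is closest to the
    particle's; it serves as that particle's local (global-best) guide."""
    s = sigma_2d(*particle_f)
    return min(archive, key=lambda a: abs(sigma_2d(*a) - s))
```

For example, a particle at (3, 3) has sigma 0 and is therefore guided by the archive member closest to the diagonal of the objective space, which steers each particle toward its own region of the front and promotes diversity.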
<p>Evaluating the &#x03B5;-domination based multi-objective evolutionary algorithm for a quick computation of pareto-optimal solutions (498 Citations) [<xref ref-type="bibr" rid="ref-84">84</xref>]: This paper presents a computational approach for finding Pareto-optimal solutions to MOOPs based on the concept of epsilon dominance, which offers a good compromise in terms of convergence close to the Pareto-optimal front, solution diversity, and computational time. The method allows decision makers to control the achievable accuracy of the obtained Pareto-optimal solutions.</p>
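The additive epsilon-dominance relation and the archive update it induces can be sketched as follows (minimization assumed; the function names and the simplified acceptance rule are ours, as the cited paper uses a box-based variant with a similar effect):

```python
def eps_dominates(a, b, eps=0.1):
    """Additive epsilon-dominance for minimization: a eps-dominates b
    if shifting a by eps in every objective still weakly dominates b."""
    return all(x - eps <= y for x, y in zip(a, b))

def update_archive(archive, candidate, eps=0.1):
    """Reject candidates eps-dominated by an archive member; otherwise
    admit the candidate and drop members it eps-dominates."""
    if any(eps_dominates(m, candidate, eps) for m in archive):
        return archive
    return [m for m in archive if not eps_dominates(candidate, m, eps)] + [candidate]
```

Because nearby points epsilon-dominate each other, the archive keeps at most one representative per epsilon-sized region of the objective space; this bounds the archive size and lets the decision maker control the achievable precision directly through the choice of epsilon.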
</sec>
<sec id="s5_6">
<label>5.6</label>
<title>Notable Researchers and Contributions</title>
<p>While the bibliometric analysis in this section was conducted using the specific keyword &#x201C;Multi-objective Evolutionary Algorithm,&#x201D; we acknowledge that this approach may not have captured all seminal publications and contributors in the broader EMO and MCDM communities. To address this, we highlight below the contributions of several prominent researchers whose work has significantly shaped the field:
<list list-type="bullet">
<list-item>
<p>Kalyanmoy Deb: Widely regarded as a foundational figure in EMO, he introduced several key algorithms including NSGA and NSGA-II [<xref ref-type="bibr" rid="ref-36">36</xref>,<xref ref-type="bibr" rid="ref-44">44</xref>], which are central to the evolution of MOEAs. His work is extensively cited and referenced throughout this paper.</p></list-item>
<list-item>
<p>Carlos Fonseca &#x0026; Peter Fleming: Their work on fitness assignment strategies for multi-objective genetic algorithms (1993, 1995) laid important groundwork for dominance-based approaches and is already referenced in <xref ref-type="sec" rid="s2">Section 2</xref> and the timeline table.</p></list-item>
<list-item>
<p>Joshua Knowles: Co-developer of the Pareto Archived Evolution Strategy (PAES), he made early contributions to minimalist, archive-based MOEAs, which we discussed in <xref ref-type="sec" rid="s1_4">Section 1.4</xref>.</p></list-item>
<list-item>
<p>Michael Emmerich: Known for his work on surrogate-assisted and performance indicator-based EMO methods, contributing to methods like SMS-EMOA and hypervolume approximations.</p></list-item>
<list-item>
<p>Kaisa Miettinen: A key contributor to interactive multi-objective optimization and decision-making, particularly within the MCDM community.</p></list-item>
<list-item>
<p>Sanaz Mostaghim: Her research includes swarm intelligence approaches and hybridization strategies in MOEAs, including Particle Swarm and multi-modal optimization.</p></list-item>
</list></p>
<p>We recognize that a more comprehensive representation of the field may require extending the keyword strategy to include alternative terms like &#x201C;evolutionary multi-objective optimization,&#x201D; &#x201C;multi-criterion decision-making,&#x201D; or &#x201C;interactive EMO.&#x201D; This limitation is acknowledged and will be addressed in future extensions of this study.</p>
</sec>
</sec>
<sec id="s6">
<label>6</label>
<title>Future Directions and Conclusions</title>
<p>As the field of MOEAs continues to evolve, several future directions and research trends are emerging. These directions reflect both ongoing challenges and the expanding potential of MOEAs in diverse applications:
<list list-type="simple">
<list-item><label>&#x2B9A;</label> 
<p>Hybrid Approaches: There is a growing trend towards hybridizing MOEAs with other optimization techniques, such as machine learning algorithms and swarm intelligence methods. These hybrid approaches aim to enhance the exploration and exploitation capabilities of MOEAs, leading to improved performance in complex optimization landscapes and in uncertain and dynamic environments. Recent studies have demonstrated the effectiveness of combining MOEAs with machine learning techniques to adaptively refine search strategies [<xref ref-type="bibr" rid="ref-61">61</xref>,<xref ref-type="bibr" rid="ref-62">62</xref>,<xref ref-type="bibr" rid="ref-72">72</xref>,<xref ref-type="bibr" rid="ref-84">84</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Adaptive Mechanisms [<xref ref-type="bibr" rid="ref-85">85</xref>,<xref ref-type="bibr" rid="ref-86">86</xref>]: The development of adaptive mechanisms within MOEAs is becoming increasingly important. These mechanisms allow algorithms to adjust their parameters dynamically based on the characteristics of the optimization problem at hand. This adaptability allows for more efficient exploration and exploitation of the solution space, leading to improved optimization outcomes. Recent work by Qiao et al. [<xref ref-type="bibr" rid="ref-85">85</xref>] implemented adaptive mutation and crossover rates in MOEAs, resulting in improved performance across various benchmark problems. Research indicates that adaptive MOEAs can significantly enhance convergence rates and solution quality, particularly in dynamic environments where problem characteristics may change over time [<xref ref-type="bibr" rid="ref-64">64</xref>&#x2013;<xref ref-type="bibr" rid="ref-66">66</xref>,<xref ref-type="bibr" rid="ref-75">75</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Many-Objective Optimization: As real-world problems often involve more than three objectives, many-objective optimization is gaining traction. New algorithms are being developed to effectively handle the challenges posed by high-dimensional objective spaces, focusing on maintaining diversity while ensuring convergence to optimal solutions. Recent advancements in many-objective evolutionary algorithms (MaOEAs) highlight their potential in fields such as drug design and engineering [<xref ref-type="bibr" rid="ref-52">52</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Scalability Enhancements: MOEAs must increasingly address high-dimensional and many-objective optimization problems. Research is focusing on scalable designs, such as decomposition-based and surrogate-assisted evolutionary algorithms (SAEAs), which reduce computational burden while maintaining solution quality. Further development of robust surrogate models and adaptive parameter tuning strategies is expected to enhance the scalability and effectiveness of MOEAs in solving complex real-world problems.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Cross-Disciplinary Applications: The scope of MOEAs is expanding to emerging fields such as personalized medicine, smart cities, autonomous vehicles, and sustainability science. Tailoring algorithms to the complex, data-intensive nature of these domains presents new challenges and opportunities and researchers should focus on tailoring MOEAs to meet these challenges presented by complex, dynamic environments.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Integration with Machine Learning: The synergy between MOEAs and machine learning techniques is becoming a pivotal research direction. Approaches involving reinforcement learning, neural networks, and ensemble methods are being employed to guide search strategies, adaptively refine solution spaces, and improve convergence behavior. This integration can also lead to intelligent and adaptive MOEAs capable of learning from historical evaluations.</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Integration with Big Data and AI: With the increasing volume and complexity of data, MOEAs are being combined with big data analytics and artificial intelligence (AI) techniques. The convergence of big data analytics and AI with MOEAs is expected to enhance their capabilities. By leveraging large datasets and predictive models, MOEAs can improve their search processes and adapt to changing conditions, leading to better optimization results. This integration aims to boost the adaptability and accuracy of MOEAs, especially in sectors like healthcare, finance, logistics, and energy systems, where data-driven decision-making is essential [<xref ref-type="bibr" rid="ref-63">63</xref>,<xref ref-type="bibr" rid="ref-84">84</xref>,<xref ref-type="bibr" rid="ref-87">87</xref>&#x2013;<xref ref-type="bibr" rid="ref-89">89</xref>].</p></list-item>
<list-item><label>&#x2B9A;</label> 
<p>Feature Selection Framework [<xref ref-type="bibr" rid="ref-52">52</xref>,<xref ref-type="bibr" rid="ref-89">89</xref>&#x2013;<xref ref-type="bibr" rid="ref-91">91</xref>]: An evolutionary framework that integrates dominance and decomposition for feature selection in classification tasks has been developed. This framework addresses the balance between convergence and diversity, showcasing improved performance over existing methods [<xref ref-type="bibr" rid="ref-75">75</xref>].</p></list-item>
</list></p>
<p>Recent comparative studies have evaluated various MOEAs, including NSGA-II, MOEA/D, and others, across multiple benchmark test sets. These studies provide insights into the strengths and weaknesses of different selection methods in handling complex multi-objective problems [<xref ref-type="bibr" rid="ref-75">75</xref>,<xref ref-type="bibr" rid="ref-92">92</xref>,<xref ref-type="bibr" rid="ref-93">93</xref>].</p>
<p>In summary, future developments in MOEAs are anticipated to emphasize hybridization, adaptability, many-objective optimization, real-time applications, ethical considerations, scalability, and integration with emerging computational paradigms. These trends reflect a shift towards more intelligent, robust, and context-aware optimization frameworks capable of addressing real-world complexities across disciplines.</p>
</sec>
</body>
<back>
<ack>
<p>Not applicable.</p>
</ack>
<sec>
<title>Funding Statement</title>
<p>The authors received no specific funding for this study.</p>
</sec>
<sec>
<title>Author Contributions</title>
<p>The authors confirm contribution to the paper as follows: Conceptualization, Thomas Hanne; methodology, Thomas Hanne, Mohammad Jahani Moghaddam; investigation, Mohammad Jahani Moghaddam; writing&#x2014;original draft preparation, Thomas Hanne, Mohammad Jahani Moghaddam; writing&#x2014;review and editing, Thomas Hanne, Mohammad Jahani Moghaddam; visualization, Mohammad Jahani Moghaddam. All authors reviewed the results and approved the final version of the manuscript.</p>
</sec>
<sec sec-type="data-availability">
<title>Availability of Data and Materials</title>
<p>Not applicable.</p>
</sec>
<sec>
<title>Ethics Approval</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement">
<title>Conflicts of Interest</title>
<p>The authors declare no conflicts of interest to report regarding the present study.</p>
</sec>
<ref-list content-type="authoryear">
<title>References</title>
<ref id="ref-1"><label>[1]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Zeleny</surname> <given-names>M</given-names></string-name></person-group>. <source>Multiple criteria decision making</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>McGraw Hill</publisher-name>; <year>1982</year>.</mixed-citation></ref>
<ref id="ref-2"><label>[2]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Gal</surname> <given-names>T</given-names></string-name>, <string-name><surname>Hanne</surname> <given-names>T</given-names></string-name></person-group>. <chapter-title>On the development and future aspects of vector optimization and MCDM</chapter-title>. In: <source>Multicriteria analysis</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>1997</year>. p. <fpage>130</fpage>&#x2013;<lpage>45</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-642-60667-0_14</pub-id>.</mixed-citation></ref>
<ref id="ref-3"><label>[3]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name>, <string-name><surname>Thiele</surname> <given-names>L</given-names></string-name>, <string-name><surname>Deb</surname> <given-names>K</given-names></string-name>, <string-name><surname>Coello Coello</surname> <given-names>CA</given-names></string-name>, <string-name><surname>Corne</surname> <given-names>D</given-names></string-name></person-group>. <article-title>Evolutionary multi-criterion optimization</article-title>. In: <conf-name>First International Conference, EMO, 2001 Zurich</conf-name>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2001</year>. doi:<pub-id pub-id-type="doi">10.1007/3-540-44719-9</pub-id>.</mixed-citation></ref>
<ref id="ref-4"><label>[4]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sharma</surname> <given-names>S</given-names></string-name>, <string-name><surname>Kumar</surname> <given-names>V</given-names></string-name></person-group>. <article-title>A comprehensive review on multi-objective optimization techniques: past, present and future</article-title>. <source>Arch Comput Meth Eng</source>. <year>2022</year>;<volume>29</volume>(<issue>7</issue>):<fpage>5605</fpage>&#x2013;<lpage>33</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s11831-022-09778-9</pub-id>.</mixed-citation></ref>
<ref id="ref-5"><label>[5]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Marler</surname> <given-names>RT</given-names></string-name>, <string-name><surname>Arora</surname> <given-names>JS</given-names></string-name></person-group>. <article-title>Survey of multi-objective optimization methods for engineering</article-title>. <source>Struct Multidiscip Optim</source>. <year>2004</year>;<volume>26</volume>(<issue>6</issue>):<fpage>369</fpage>&#x2013;<lpage>95</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00158-003-0368-6</pub-id>.</mixed-citation></ref>
<ref id="ref-6"><label>[6]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cerda-Flores</surname> <given-names>SC</given-names></string-name>, <string-name><surname>Rojas-Punzo</surname> <given-names>AA</given-names></string-name>, <string-name><surname>N&#x00E1;poles-Rivera</surname> <given-names>F</given-names></string-name></person-group>. <article-title>Applications of multi-objective optimization to industrial processes: a literature review</article-title>. <source>Processes</source>. <year>2022</year>;<volume>10</volume>(<issue>1</issue>):<fpage>133</fpage>. doi:<pub-id pub-id-type="doi">10.3390/pr10010133</pub-id>.</mixed-citation></ref>
<ref id="ref-7"><label>[7]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Osaba</surname> <given-names>E</given-names></string-name>, <string-name><surname>Villar-Rodriguez</surname> <given-names>E</given-names></string-name>, <string-name><surname>Del Ser</surname> <given-names>J</given-names></string-name>, <string-name><surname>Nebro</surname> <given-names>AJ</given-names></string-name>, <string-name><surname>Molina</surname> <given-names>D</given-names></string-name>, <string-name><surname>LaTorre</surname> <given-names>A</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>A tutorial on the design, experimentation and application of metaheuristic algorithms to real-world optimization problems</article-title>. <source>Swarm Evol Comput</source>. <year>2021</year>;<volume>64</volume>(<issue>4</issue>):<fpage>100888</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2021.100888</pub-id>.</mixed-citation></ref>
<ref id="ref-8"><label>[8]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Farzane</surname> <given-names>K</given-names></string-name>, <string-name><surname>Alireza</surname> <given-names>BD</given-names></string-name></person-group>. <article-title>A review and evaluation of multi and many-objective optimization: methods and algorithms</article-title>. <source>Glob J Ecol</source>. <year>2022</year>;<volume>7</volume>(<issue>2</issue>):<fpage>104</fpage>&#x2013;<lpage>19</lpage>. doi:<pub-id pub-id-type="doi">10.17352/gje.000070</pub-id>.</mixed-citation></ref>
<ref id="ref-9"><label>[9]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Cho</surname> <given-names>JH</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>IR</given-names></string-name>, <string-name><surname>Chan</surname> <given-names>KS</given-names></string-name>, <string-name><surname>Swami</surname> <given-names>A</given-names></string-name></person-group>. <article-title>A survey on modeling and optimizing multi-objective systems</article-title>. <source>IEEE Commun Surv Tutor</source>. <year>2017</year>;<volume>19</volume>(<issue>3</issue>):<fpage>1867</fpage>&#x2013;<lpage>901</lpage>. doi:<pub-id pub-id-type="doi">10.1109/COMST.2017.2698366</pub-id>.</mixed-citation></ref>
<ref id="ref-10"><label>[10]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Masri</surname> <given-names>H</given-names></string-name>, <string-name><surname>Talbi</surname> <given-names>EG</given-names></string-name></person-group>. <article-title>Recent advances in multiobjective optimization</article-title>. <source>Ann Oper Res</source>. <year>2022</year>;<volume>311</volume>(<issue>2</issue>):<fpage>547</fpage>&#x2013;<lpage>50</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10479-022-04589-4</pub-id>.</mixed-citation></ref>
<ref id="ref-11"><label>[11]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hua</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Hao</surname> <given-names>K</given-names></string-name>, <string-name><surname>Jin</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>A survey of evolutionary algorithms for multi-objective optimization problems with irregular Pareto fronts</article-title>. <source>IEEE/CAA J Autom Sin</source>. <year>2021</year>;<volume>8</volume>(<issue>2</issue>):<fpage>303</fpage>&#x2013;<lpage>18</lpage>. doi:<pub-id pub-id-type="doi">10.1109/JAS.2021.1003817</pub-id>.</mixed-citation></ref>
<ref id="ref-12"><label>[12]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Pei</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Li</surname> <given-names>J</given-names></string-name></person-group>. <article-title>A survey on search strategy of evolutionary multi-objective optimization algorithms</article-title>. <source>Appl Sci</source>. <year>2023</year>;<volume>13</volume>(<issue>7</issue>):<fpage>4643</fpage>. doi:<pub-id pub-id-type="doi">10.3390/app13074643</pub-id>.</mixed-citation></ref>
<ref id="ref-13"><label>[13]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Qian</surname> <given-names>W</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>H</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>H</given-names></string-name>, <string-name><surname>Yang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>R</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>A synergistic MOEA algorithm with GANs for complex data analysis</article-title>. <source>Mathematics</source>. <year>2024</year>;<volume>12</volume>(<issue>2</issue>):<fpage>175</fpage>. doi:<pub-id pub-id-type="doi">10.3390/math12020175</pub-id>.</mixed-citation></ref>
<ref id="ref-14"><label>[14]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Coello</surname> <given-names>CAC</given-names></string-name>, <string-name><surname>Lamont</surname> <given-names>GB</given-names></string-name></person-group>. <source>Applications of multi-objective evolutionary algorithms</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>World Scientific</publisher-name>; <year>2004</year>. doi:<pub-id pub-id-type="doi">10.1142/5712</pub-id>.</mixed-citation></ref>
<ref id="ref-15"><label>[15]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Deb</surname> <given-names>K</given-names></string-name>, <string-name><surname>Jain</surname> <given-names>H</given-names></string-name></person-group>. <article-title>An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints</article-title>. <source>IEEE Trans Evol Comput</source>. <year>2014</year>;<volume>18</volume>(<issue>4</issue>):<fpage>577</fpage>&#x2013;<lpage>601</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TEVC.2013.2281535</pub-id>.</mixed-citation></ref>
<ref id="ref-16"><label>[16]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Gong</surname> <given-names>DW</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Qi</surname> <given-names>CL</given-names></string-name></person-group>. <article-title>Environmental/economic power dispatch using a hybrid multi-objective optimization algorithm</article-title>. <source>Int J Electr Power Energy Syst</source>. <year>2010</year>;<volume>32</volume>(<issue>6</issue>):<fpage>607</fpage>&#x2013;<lpage>14</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.ijepes.2009.11.017</pub-id>.</mixed-citation></ref>
<ref id="ref-17"><label>[17]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Revathi</surname> <given-names>R</given-names></string-name>, <string-name><surname>Senthilnathan</surname> <given-names>N</given-names></string-name>, <string-name><surname>Kumar Chinnaiyan</surname> <given-names>V</given-names></string-name>, <string-name><surname>Sevugan Rajesh</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Multi-objective optimization framework for enhancing efficiency and sustainability in smart grids</article-title>. <source>Energy Convers Manag</source>. <year>2025</year>;<volume>341</volume>(<issue>13</issue>):<fpage>120079</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.enconman.2025.120079</pub-id>.</mixed-citation></ref>
<ref id="ref-18"><label>[18]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mohagheghi</surname> <given-names>V</given-names></string-name>, <string-name><surname>Mousavi</surname> <given-names>SM</given-names></string-name>, <string-name><surname>Vahdani</surname> <given-names>B</given-names></string-name></person-group>. <article-title>A new multi-objective optimization approach for sustainable project portfolio selection: a real-world application under interval-valued fuzzy environment</article-title>. <source>Iranian J Fuzzy Syst</source>. <year>2016</year>;<volume>13</volume>(<issue>6</issue>):<fpage>41</fpage>&#x2013;<lpage>68</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s13369-015-1779-6</pub-id>.</mixed-citation></ref>
<ref id="ref-19"><label>[19]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Bradshaw</surname> <given-names>NA</given-names></string-name>, <string-name><surname>Walshaw</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ierotheou</surname> <given-names>C</given-names></string-name>, <string-name><surname>Parrott</surname> <given-names>AK</given-names></string-name></person-group>. <article-title>A multi-objective evolutionary algorithm for portfolio optimisation</article-title>. In: <conf-name>Proceedings from Artificial Intelligence and Simulation of Behaviour Symposium 2009 on Evolutionary Systems</conf-name>. <publisher-loc>London, UK</publisher-loc>: <publisher-name>The Society for the Study of Artificial Intelligence and Simulation of Behaviour</publisher-name>; <year>2009</year>. p. <fpage>27</fpage>&#x2013;<lpage>32</lpage>.</mixed-citation></ref>
<ref id="ref-20"><label>[20]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zopounidis</surname> <given-names>C</given-names></string-name>, <string-name><surname>Doumpos</surname> <given-names>M</given-names></string-name></person-group>. <article-title>Multi-criteria decision aid in financial decision making: methodologies and literature review</article-title>. <source>Multi Criteria Decision Anal</source>. <year>2002</year>;<volume>11</volume>(<issue>4&#x2013;5</issue>):<fpage>167</fpage>&#x2013;<lpage>86</lpage>. doi:<pub-id pub-id-type="doi">10.1002/mcda.333</pub-id>.</mixed-citation></ref>
<ref id="ref-21"><label>[21]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Fern&#x00E1;ndez</surname> <given-names>E</given-names></string-name>, <string-name><surname>Rangel-Valdez</surname> <given-names>N</given-names></string-name>, <string-name><surname>Cruz-Reyes</surname> <given-names>L</given-names></string-name>, <string-name><surname>Gomez-Santillan</surname> <given-names>C</given-names></string-name></person-group>. <article-title>A new approach to group multi-objective optimization under imperfect information and its application to project portfolio optimization</article-title>. <source>Appl Sci</source>. <year>2021</year>;<volume>11</volume>(<issue>10</issue>):<fpage>4575</fpage>. doi:<pub-id pub-id-type="doi">10.3390/app11104575</pub-id>.</mixed-citation></ref>
<ref id="ref-22"><label>[22]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bader</surname> <given-names>J</given-names></string-name>, <string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name></person-group>. <article-title>HypE: an algorithm for fast hypervolume-based many-objective optimization</article-title>. <source>Evol Comput</source>. <year>2011</year>;<volume>19</volume>(<issue>1</issue>):<fpage>45</fpage>&#x2013;<lpage>76</lpage>. doi:<pub-id pub-id-type="doi">10.1162/EVCO_a_00009</pub-id>; <pub-id pub-id-type="pmid">20649424</pub-id></mixed-citation></ref>
<ref id="ref-23"><label>[23]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Rom</surname> <given-names>ARM</given-names></string-name>, <string-name><surname>Jamil</surname> <given-names>N</given-names></string-name>, <string-name><surname>Ibrahim</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Multi objective hyperparameter tuning via random search on deep learning models</article-title>. <source>TELKOMNIKA Telecommun Comput Electron Control</source>. <year>2024</year>;<volume>22</volume>(<issue>4</issue>):<fpage>956</fpage>. doi:<pub-id pub-id-type="doi">10.12928/telkomnika.v22i4.25847</pub-id>.</mixed-citation></ref>
<ref id="ref-24"><label>[24]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Eriskin</surname> <given-names>L</given-names></string-name>, <string-name><surname>Karatas</surname> <given-names>M</given-names></string-name>, <string-name><surname>Zheng</surname> <given-names>YJ</given-names></string-name></person-group>. <article-title>A robust multi-objective model for healthcare resource management and location planning during pandemics</article-title>. <source>Ann Oper Res</source>. <year>2024</year>;<volume>335</volume>(<issue>3</issue>):<fpage>1471</fpage>&#x2013;<lpage>518</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10479-022-04760-x</pub-id>; <pub-id pub-id-type="pmid">35645446</pub-id></mixed-citation></ref>
<ref id="ref-25"><label>[25]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Goldberg</surname> <given-names>DE</given-names></string-name></person-group>. <source>Genetic algorithms in search, optimization and machine learning</source>. <publisher-loc>Reading, MA, USA</publisher-loc>: <publisher-name>Addison Wesley Publishing Company</publisher-name>; <year>1989</year>.</mixed-citation></ref>
<ref id="ref-26"><label>[26]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Deb</surname> <given-names>K</given-names></string-name></person-group>. <chapter-title>Introduction to evolutionary multiobjective optimization</chapter-title>. In: <source>Multiobjective optimization: interactive and evolutionary approaches</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2008</year>. p. <fpage>59</fpage>&#x2013;<lpage>96</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-540-88908-3</pub-id>.</mixed-citation></ref>
<ref id="ref-27"><label>[27]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Holland</surname> <given-names>JH</given-names></string-name></person-group>. <source>Adaptation in natural and artificial systems</source>. <publisher-loc>Ann Arbor, MI, USA</publisher-loc>: <publisher-name>University of Michigan Press</publisher-name>; <year>1975</year>.</mixed-citation></ref>
<ref id="ref-28"><label>[28]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Rechenberg</surname> <given-names>I</given-names></string-name></person-group>. <chapter-title>Evolutionsstrategien</chapter-title>. In: <source>Simulationsmethoden in der Medizin und Biologie</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>1978</year>. p. <fpage>83</fpage>&#x2013;<lpage>114</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-642-81283-5</pub-id>.</mixed-citation></ref>
<ref id="ref-29"><label>[29]</label><mixed-citation publication-type="other"><person-group person-group-type="author"><string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name>, <string-name><surname>Thiele</surname> <given-names>L</given-names></string-name></person-group>. <article-title>An evolutionary algorithm for multiobjective optimization: the strength Pareto approach. TIK report No.: 43</article-title>; <year>1998</year>.</mixed-citation></ref>
<ref id="ref-30"><label>[30]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Knowles</surname> <given-names>J</given-names></string-name>, <string-name><surname>Corne</surname> <given-names>D</given-names></string-name></person-group>. <article-title>The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation</article-title>. In: <conf-name>Proceedings of the 1999 Congress on Evolutionary Computation-CEC99; 1999 Jul 6&#x2013;9</conf-name>; <publisher-loc>Washington, DC, USA</publisher-loc>. p. <fpage>98</fpage>&#x2013;<lpage>105</lpage>. doi:<pub-id pub-id-type="doi">10.1109/CEC.1999.781913</pub-id>.</mixed-citation></ref>
<ref id="ref-31"><label>[31]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Smolinski</surname> <given-names>TG</given-names></string-name></person-group>. <chapter-title>Multi-objective evolutionary algorithms</chapter-title>. In: <source>Encyclopedia of computational neuroscience</source>. <publisher-loc>New York, NY, USA</publisher-loc>: <publisher-name>Springer New York</publisher-name>; <year>2022</year>. p. <fpage>2103</fpage>&#x2013;<lpage>5</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-1-0716-1006-0_16</pub-id>.</mixed-citation></ref>
<ref id="ref-32"><label>[32]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Deb</surname> <given-names>K</given-names></string-name></person-group>. <source>Multi-objective optimization using evolutionary algorithms</source>. <publisher-loc>Chichester, UK</publisher-loc>: <publisher-name>Wiley</publisher-name>; <year>2001</year>.</mixed-citation></ref>
<ref id="ref-33"><label>[33]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name>, <string-name><surname>Thiele</surname> <given-names>L</given-names></string-name></person-group>. <article-title>Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach</article-title>. <source>IEEE Trans Evol Comput</source>. <year>1999</year>;<volume>3</volume>(<issue>4</issue>):<fpage>257</fpage>&#x2013;<lpage>71</lpage>. doi:<pub-id pub-id-type="doi">10.1109/4235.797969</pub-id>.</mixed-citation></ref>
<ref id="ref-34"><label>[34]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Kang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Li</surname> <given-names>K</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>R</given-names></string-name></person-group>. <article-title>A survey on Pareto front learning for multi-objective optimization</article-title>. <source>J Membr Comput</source>. <year>2025</year>;<volume>7</volume>(<issue>2</issue>):<fpage>128</fpage>&#x2013;<lpage>34</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s41965-024-00170-z</pub-id>.</mixed-citation></ref>
<ref id="ref-35"><label>[35]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name>, <string-name><surname>Laumanns</surname> <given-names>M</given-names></string-name>, <string-name><surname>Thiele</surname> <given-names>L</given-names></string-name></person-group>. <chapter-title>SPEA2: improving the strength Pareto evolutionary algorithm</chapter-title>. In: <source>Evolutionary multi-criterion optimization</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2001</year>. p. <fpage>95</fpage>&#x2013;<lpage>100</lpage>.</mixed-citation></ref>
<ref id="ref-36"><label>[36]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Petchrompo</surname> <given-names>S</given-names></string-name>, <string-name><surname>Coit</surname> <given-names>DW</given-names></string-name>, <string-name><surname>Brintrup</surname> <given-names>A</given-names></string-name>, <string-name><surname>Wannakrairot</surname> <given-names>A</given-names></string-name>, <string-name><surname>Parlikad</surname> <given-names>AK</given-names></string-name></person-group>. <article-title>A review of Pareto pruning methods for multi-objective optimization</article-title>. <source>Comput Ind Eng</source>. <year>2022</year>;<volume>167</volume>(<issue>1</issue>):<fpage>108022</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cie.2022.108022</pub-id>.</mixed-citation></ref>
<ref id="ref-37"><label>[37]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Coello Coello</surname> <given-names>CA</given-names></string-name>, <string-name><surname>Gonz&#x00E1;lez Brambila</surname> <given-names>S</given-names></string-name>, <string-name><surname>Figueroa Gamboa</surname> <given-names>J</given-names></string-name>, <string-name><surname>Castillo Tapia</surname> <given-names>MG</given-names></string-name>, <string-name><surname>Hern&#x00E1;ndez G&#x00F3;mez</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Evolutionary multiobjective optimization: open research areas and some challenges lying ahead</article-title>. <source>Complex Intell Syst</source>. <year>2020</year>;<volume>6</volume>(<issue>2</issue>):<fpage>221</fpage>&#x2013;<lpage>36</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s40747-019-0113-4</pub-id>.</mixed-citation></ref>
<ref id="ref-38"><label>[38]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Kruisselbrink</surname> <given-names>JW</given-names></string-name>, <string-name><surname>Emmerich</surname> <given-names>MTM</given-names></string-name>, <string-name><surname>B&#x00E4;ck</surname> <given-names>T</given-names></string-name>, <string-name><surname>Bender</surname> <given-names>A</given-names></string-name>, <string-name><surname>IJzerman</surname> <given-names>AP</given-names></string-name>, <string-name><surname>van der Horst</surname> <given-names>E</given-names></string-name></person-group>. <chapter-title>Combining aggregation with Pareto optimization: a case study in evolutionary molecular design</chapter-title>. In: <source>Evolutionary multi-criterion optimization</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2009</year>. p. <fpage>453</fpage>&#x2013;<lpage>67</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-642-01020-0_36</pub-id>.</mixed-citation></ref>
<ref id="ref-39"><label>[39]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Schaffer</surname> <given-names>JD</given-names></string-name>, <string-name><surname>Grefenstette</surname> <given-names>JJ</given-names></string-name></person-group>. <chapter-title>Multi-objective learning via genetic algorithms</chapter-title>. In: <person-group person-group-type="editor"><string-name><surname>Joshi</surname> <given-names>A</given-names></string-name></person-group>, editor. <source>Proceedings of the Ninth International Joint Conference on Artificial Intelligence</source>. Vol. <volume>85</volume>. <publisher-loc>San Mateo, CA, USA</publisher-loc>: <publisher-name>Morgan Kaufmann</publisher-name>; <year>1985</year>. p. <fpage>593</fpage>&#x2013;<lpage>5</lpage>.</mixed-citation></ref>
<ref id="ref-40"><label>[40]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Fourman</surname> <given-names>MP</given-names></string-name></person-group>. <article-title>Compaction of symbolic layout using genetic algorithms</article-title>. In: <conf-name>Proceedings of the 1st International Conference on Genetic Algorithms</conf-name>. <publisher-loc>Mahwah, NJ, USA</publisher-loc>: <publisher-name>L. Erlbaum Associates Inc.</publisher-name>; <year>1985</year>. p. <fpage>141</fpage>&#x2013;<lpage>53</lpage>.</mixed-citation></ref>
<ref id="ref-41"><label>[41]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Kursawe</surname> <given-names>F</given-names></string-name></person-group>. <chapter-title>A variant of evolution strategies for vector optimization</chapter-title>. In: <source>Parallel problem solving from nature</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>; <year>1991</year>. p. <fpage>193</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1007/bfb0029752</pub-id>.</mixed-citation></ref>
<ref id="ref-42"><label>[42]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Horn</surname> <given-names>J</given-names></string-name>, <string-name><surname>Nafpliotis</surname> <given-names>N</given-names></string-name>, <string-name><surname>Goldberg</surname> <given-names>DE</given-names></string-name></person-group>. <article-title>A niched Pareto genetic algorithm for multiobjective optimization</article-title>. In: <conf-name>Proceedings of the First IEEE Conference on Evolutionary Computation. IEEE World Congress on Computational Intelligence; 1994 Jun 27&#x2013;29</conf-name>; <publisher-loc>Orlando, FL, USA</publisher-loc>. p. <fpage>82</fpage>&#x2013;<lpage>7</lpage>. doi:<pub-id pub-id-type="doi">10.1109/ICEC.1994.350037</pub-id>.</mixed-citation></ref>
<ref id="ref-43"><label>[43]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Fonseca</surname> <given-names>CM</given-names></string-name>, <string-name><surname>Fleming</surname> <given-names>PJ</given-names></string-name></person-group>. <article-title>Genetic algorithms for multi-objective optimization: formulation, discussion and generalization</article-title>. In: <conf-name>Genetic Algorithms: Proceedings of the Fifth International Conference. Vol. 93</conf-name>. <publisher-loc>San Mateo, CA, USA</publisher-loc>: <publisher-name>Morgan Kaufmann</publisher-name>; <year>1993</year>. p. <fpage>416</fpage>&#x2013;<lpage>23</lpage>.</mixed-citation></ref>
<ref id="ref-44"><label>[44]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Srinivas</surname> <given-names>N</given-names></string-name>, <string-name><surname>Deb</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Muiltiobjective optimization using nondominated sorting in genetic algorithms</article-title>. <source>Evol Comput</source>. <year>1994</year>;<volume>2</volume>(<issue>3</issue>):<fpage>221</fpage>&#x2013;<lpage>48</lpage>. doi:<pub-id pub-id-type="doi">10.1162/evco.1994.2.3.221</pub-id>.</mixed-citation></ref>
<ref id="ref-45"><label>[45]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Deb</surname> <given-names>K</given-names></string-name>, <string-name><surname>Pratap</surname> <given-names>A</given-names></string-name>, <string-name><surname>Agarwal</surname> <given-names>S</given-names></string-name>, <string-name><surname>Meyarivan</surname> <given-names>T</given-names></string-name></person-group>. <article-title>A fast and elitist multiobjective genetic algorithm: NSGA-II</article-title>. <source>IEEE Trans Evol Comput</source>. <year>2002</year>;<volume>6</volume>(<issue>2</issue>):<fpage>182</fpage>&#x2013;<lpage>97</lpage>. doi:<pub-id pub-id-type="doi">10.1109/4235.996017</pub-id>.</mixed-citation></ref>
<ref id="ref-46"><label>[46]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wei</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>F</given-names></string-name>, <string-name><surname>Zhu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>J</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>A Pareto dominance relation based on reference vectors for evolutionary many-objective optimization</article-title>. <source>Appl Soft Comput</source>. <year>2024</year>;<volume>157</volume>(<issue>1</issue>):<fpage>111505</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.asoc.2024.111505</pub-id>.</mixed-citation></ref>
<ref id="ref-47"><label>[47]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Li</surname> <given-names>H</given-names></string-name></person-group>. <article-title>MOEA/D: a multiobjective evolutionary algorithm based on decomposition</article-title>. <source>IEEE Trans Evol Comput</source>. <year>2007</year>;<volume>11</volume>(<issue>6</issue>):<fpage>712</fpage>&#x2013;<lpage>31</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TEVC.2007.892759</pub-id>.</mixed-citation></ref>
<ref id="ref-48"><label>[48]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>K</given-names></string-name></person-group>. <article-title>A survey of multi-objective evolutionary algorithm based on decomposition: past and future</article-title>. <source>IEEE Trans Evol Comput</source>. <year>2024</year>. doi:<pub-id pub-id-type="doi">10.1109/TEVC.2024.3496507</pub-id>.</mixed-citation></ref>
<ref id="ref-49"><label>[49]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yin</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>D</given-names></string-name>, <string-name><surname>Fan</surname> <given-names>X</given-names></string-name></person-group>. <article-title>A decomposition-based many-objective evolutionary algorithm with adaptive weight vector strategy</article-title>. <source>Appl Soft Comput</source>. <year>2022</year>;<volume>128</volume>(<issue>4</issue>):<fpage>109412</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.asoc.2022.109412</pub-id>.</mixed-citation></ref>
<ref id="ref-50"><label>[50]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yue</surname> <given-names>C</given-names></string-name>, <string-name><surname>Song</surname> <given-names>J</given-names></string-name>, <string-name><surname>Liang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>M</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>K</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>H</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>A multimodal multiobjective evolutionary algorithm based on neighborhood and enhanced special crowding distance</article-title>. <source>Knowl Based Syst</source>. <year>2025</year>;<volume>315</volume>(<issue>2</issue>):<fpage>113340</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2025.113340</pub-id>.</mixed-citation></ref>
<ref id="ref-51"><label>[51]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Lv</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Li</surname> <given-names>S</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>H</given-names></string-name></person-group>. <article-title>A multimodal multi-objective evolutionary algorithm with two-stage dual-indicator selection strategy</article-title>. <source>Swarm Evol Comput</source>. <year>2023</year>;<volume>82</volume>(<issue>1</issue>):<fpage>101319</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2023.101319</pub-id>.</mixed-citation></ref>
<ref id="ref-52"><label>[52]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>K</given-names></string-name>, <string-name><surname>Qu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>K</given-names></string-name>, <string-name><surname>Yue</surname> <given-names>C</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>An evolutionary multiobjective method based on dominance and decomposition for feature selection in classification</article-title>. <source>Sci China Inf Sci</source>. <year>2024</year>;<volume>67</volume>(<issue>2</issue>):<fpage>120101</fpage>. doi:<pub-id pub-id-type="doi">10.1007/s11432-023-3864-6</pub-id>.</mixed-citation></ref>
<ref id="ref-53"><label>[53]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhou</surname> <given-names>A</given-names></string-name>, <string-name><surname>Qu</surname> <given-names>BY</given-names></string-name>, <string-name><surname>Li</surname> <given-names>H</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>SZ</given-names></string-name>, <string-name><surname>Suganthan</surname> <given-names>PN</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>Q</given-names></string-name></person-group>. <article-title>Multiobjective evolutionary algorithms: a survey of the state of the art</article-title>. <source>Swarm Evol Comput</source>. <year>2011</year>;<volume>1</volume>(<issue>1</issue>):<fpage>32</fpage>&#x2013;<lpage>49</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2011.03.001</pub-id>.</mixed-citation></ref>
<ref id="ref-54"><label>[54]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Dutta</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mallipeddi</surname> <given-names>R</given-names></string-name>, <string-name><surname>Das</surname> <given-names>KN</given-names></string-name></person-group>. <article-title>Hybrid selection based multi/many-objective evolutionary algorithm</article-title>. <source>Sci Rep</source>. <year>2022</year>;<volume>12</volume>(<issue>1</issue>):<fpage>6861</fpage>. doi:<pub-id pub-id-type="doi">10.1038/s41598-022-10997-0</pub-id>; <pub-id pub-id-type="pmid">35478221</pub-id></mixed-citation></ref>
<ref id="ref-55"><label>[55]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Zitzler</surname> <given-names>E</given-names></string-name>, <string-name><surname>K&#x00FC;nzli</surname> <given-names>S</given-names></string-name></person-group>. <chapter-title>Indicator-based selection in multiobjective search</chapter-title>. In: <source>Parallel problem solving from nature&#x2014;PPSN VIII</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2004</year>. p. <fpage>832</fpage>&#x2013;<lpage>42</lpage>. doi:<pub-id pub-id-type="doi">10.1007/978-3-540-30217-9_84</pub-id>.</mixed-citation></ref>
<ref id="ref-56"><label>[56]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Fleischer</surname> <given-names>M</given-names></string-name></person-group>. <chapter-title>The measure of Pareto optima applications to multi-objective metaheuristics</chapter-title>. In: <source>Evolutionary multi-criterion optimization</source>. <publisher-loc>Berlin/Heidelberg, Germany</publisher-loc>: <publisher-name>Springer</publisher-name>; <year>2003</year>. p. <fpage>519</fpage>&#x2013;<lpage>33</lpage>. doi:<pub-id pub-id-type="doi">10.1007/3-540-36970-8_37</pub-id>.</mixed-citation></ref>
<ref id="ref-57"><label>[57]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Huband</surname> <given-names>S</given-names></string-name>, <string-name><surname>Hingston</surname> <given-names>P</given-names></string-name>, <string-name><surname>While</surname> <given-names>L</given-names></string-name>, <string-name><surname>Barone</surname> <given-names>L</given-names></string-name></person-group>. <article-title>An evolution strategy with probabilistic mutation for multi-objective optimisation</article-title>. In: <conf-name>The 2003 Congress on Evolutionary Computation, CEC&#x2019;03; 2003 Dec 8&#x2013;12</conf-name>; <publisher-loc>Canberra, ACT, Australia</publisher-loc>. p. <fpage>2284</fpage>&#x2013;<lpage>91</lpage>. doi:<pub-id pub-id-type="doi">10.1109/CEC.2003.1299373</pub-id>.</mixed-citation></ref>
<ref id="ref-58"><label>[58]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hua</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Hao</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Adaptive normal vector guided evolutionary multi- and many-objective optimization</article-title>. <source>Complex Intell Syst</source>. <year>2024</year>;<volume>10</volume>(<issue>3</issue>):<fpage>3709</fpage>&#x2013;<lpage>26</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s40747-024-01353-y</pub-id>.</mixed-citation></ref>
<ref id="ref-59"><label>[59]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yang</surname> <given-names>XS</given-names></string-name>, <string-name><surname>Deb</surname> <given-names>S</given-names></string-name>, <string-name><surname>Hanne</surname> <given-names>T</given-names></string-name>, <string-name><surname>He</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Attraction and diffusion in nature-inspired optimization algorithms</article-title>. <source>Neural Comput Appl</source>. <year>2019</year>;<volume>31</volume>(<issue>7</issue>):<fpage>1987</fpage>&#x2013;<lpage>94</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s00521-015-1925-9</pub-id>.</mixed-citation></ref>
<ref id="ref-60"><label>[60]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Singh</surname> <given-names>G</given-names></string-name>, <string-name><surname>Chaturvedi</surname> <given-names>AK</given-names></string-name></person-group>. <article-title>Hybrid modified particle swarm optimization with genetic algorithm (GA) based workflow scheduling in cloud-fog environment for multi-objective optimization</article-title>. <source>Clust Comput</source>. <year>2024</year>;<volume>27</volume>(<issue>2</issue>):<fpage>1947</fpage>&#x2013;<lpage>64</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10586-023-04071-1</pub-id>.</mixed-citation></ref>
<ref id="ref-61"><label>[61]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Bai</surname> <given-names>Z</given-names></string-name>, <string-name><surname>Zhou</surname> <given-names>H</given-names></string-name>, <string-name><surname>Shi</surname> <given-names>J</given-names></string-name>, <string-name><surname>Xing</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>J</given-names></string-name></person-group>. <article-title>A hybrid multi-objective evolutionary algorithm with high solving efficiency for UAV defense programming</article-title>. <source>Swarm Evol Comput</source>. <year>2024</year>;<volume>87</volume>:<fpage>101572</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2024.101572</pub-id>.</mixed-citation></ref>
<ref id="ref-62"><label>[62]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>An</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>X</given-names></string-name>, <string-name><surname>Gao</surname> <given-names>K</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>Z</given-names></string-name></person-group>. <article-title>A hybrid multi-objective evolutionary algorithm for solving an adaptive flexible job-shop rescheduling problem with real-time order acceptance and condition-based preventive maintenance</article-title>. <source>Expert Syst Appl</source>. <year>2023</year>;<volume>212</volume>(<issue>2</issue>):<fpage>118711</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.eswa.2022.118711</pub-id>.</mixed-citation></ref>
<ref id="ref-63"><label>[63]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>W</given-names></string-name>, <string-name><surname>Xiao</surname> <given-names>G</given-names></string-name>, <string-name><surname>Gen</surname> <given-names>M</given-names></string-name>, <string-name><surname>Geng</surname> <given-names>H</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Deng</surname> <given-names>M</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Enhancing multi-objective evolutionary algorithms with machine learning for scheduling problems: recent advances and survey</article-title>. <source>Front Ind Eng</source>. <year>2024</year>;<volume>2</volume>:<fpage>1337174</fpage>. doi:<pub-id pub-id-type="doi">10.3389/fieng.2024.1337174</pub-id>.</mixed-citation></ref>
<ref id="ref-64"><label>[64]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>S</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>F</given-names></string-name>, <string-name><surname>He</surname> <given-names>Q</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Deep reinforcement learning for multi-objective combinatorial optimization: a case study on multi-objective traveling salesman problem</article-title>. <source>Swarm Evol Comput</source>. <year>2023</year>;<volume>83</volume>(<issue>4</issue>):<fpage>101398</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2023.101398</pub-id>.</mixed-citation></ref>
<ref id="ref-65"><label>[65]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hou</surname> <given-names>M</given-names></string-name>, <string-name><surname>Jin</surname> <given-names>S</given-names></string-name>, <string-name><surname>Cui</surname> <given-names>X</given-names></string-name>, <string-name><surname>Peng</surname> <given-names>C</given-names></string-name>, <string-name><surname>Zhao</surname> <given-names>K</given-names></string-name>, <string-name><surname>Song</surname> <given-names>L</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Protein multiple conformation prediction using multi-objective evolution algorithm</article-title>. <source>Interdiscip Sci</source>. <year>2024</year>;<volume>16</volume>(<issue>3</issue>):<fpage>519</fpage>&#x2013;<lpage>31</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s12539-023-00597-5</pub-id>; <pub-id pub-id-type="pmid">38190097</pub-id></mixed-citation></ref>
<ref id="ref-66"><label>[66]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Wang</surname> <given-names>F</given-names></string-name>, <string-name><surname>Liao</surname> <given-names>F</given-names></string-name>, <string-name><surname>Li</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Yan</surname> <given-names>X</given-names></string-name>, <string-name><surname>Chen</surname> <given-names>X</given-names></string-name></person-group>. <article-title>An ensemble learning based multi-objective evolutionary algorithm for the dynamic vehicle routing problem with time windows</article-title>. <source>Comput Ind Eng</source>. <year>2021</year>;<volume>154</volume>(<issue>3</issue>):<fpage>107131</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.cie.2021.107131</pub-id>.</mixed-citation></ref>
<ref id="ref-67"><label>[67]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Zhang</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>K</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>L</given-names></string-name>, <string-name><surname>Sheng</surname> <given-names>W</given-names></string-name>, <string-name><surname>Kang</surname> <given-names>Q</given-names></string-name></person-group>. <article-title>Evolving ensembles using multi-objective genetic programming for imbalanced classification</article-title>. <source>Knowl Based Syst</source>. <year>2022</year>;<volume>255</volume>(<issue>2</issue>):<fpage>109611</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2022.109611</pub-id>.</mixed-citation></ref>
<ref id="ref-68"><label>[68]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Bechikh</surname> <given-names>S</given-names></string-name>, <string-name><surname>Kessentini</surname> <given-names>M</given-names></string-name>, <string-name><surname>Ben Said</surname> <given-names>L</given-names></string-name>, <string-name><surname>Gh&#x00E9;dira</surname> <given-names>K</given-names></string-name></person-group>. <chapter-title>Preference incorporation in evolutionary multiobjective optimization</chapter-title>. In: <source>Advances in computers</source>. <publisher-loc>Amsterdam, Netherlands</publisher-loc>: <publisher-name>Elsevier</publisher-name>; <year>2015</year>. p. <fpage>141</fpage>&#x2013;<lpage>207</lpage>. doi:<pub-id pub-id-type="doi">10.1016/bs.adcom.2015.03.001</pub-id>.</mixed-citation></ref>
<ref id="ref-69"><label>[69]</label><mixed-citation publication-type="book"><person-group person-group-type="author"><string-name><surname>Saxena</surname> <given-names>DK</given-names></string-name>, <string-name><surname>Mittal</surname> <given-names>S</given-names></string-name>, <string-name><surname>Deb</surname> <given-names>K</given-names></string-name>, <string-name><surname>Goodman</surname> <given-names>ED</given-names></string-name></person-group>. <source>Machine learning assisted evolutionary multi- and many-objective optimization</source>. <publisher-loc>Singapore</publisher-loc>: <publisher-name>Springer Nature Singapore</publisher-name>; <year>2024</year>. doi:<pub-id pub-id-type="doi">10.1007/978-981-99-2096-9</pub-id>.</mixed-citation></ref>
<ref id="ref-70"><label>[70]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>B</given-names></string-name>, <string-name><surname>Li</surname> <given-names>J</given-names></string-name>, <string-name><surname>Tang</surname> <given-names>K</given-names></string-name>, <string-name><surname>Yao</surname> <given-names>X</given-names></string-name></person-group>. <article-title>Many-objective evolutionary algorithms: a survey</article-title>. <source>ACM Comput Surv</source>. <year>2015</year>;<volume>48</volume>(<issue>1</issue>):<fpage>1</fpage>&#x2013;<lpage>35</lpage>. doi:<pub-id pub-id-type="doi">10.1145/2792984</pub-id>.</mixed-citation></ref>
<ref id="ref-71"><label>[71]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Sato</surname> <given-names>H</given-names></string-name>, <string-name><surname>Ishibuchi</surname> <given-names>H</given-names></string-name></person-group>. <article-title>Evolutionary many-objective optimization: difficulties, approaches, and discussions</article-title>. <source>IEEJ Trans Electr Electron Eng</source>. <year>2023</year>;<volume>18</volume>(<issue>7</issue>):<fpage>1048</fpage>&#x2013;<lpage>58</lpage>. doi:<pub-id pub-id-type="doi">10.1002/tee.23796</pub-id>.</mixed-citation></ref>
<ref id="ref-72"><label>[72]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Angelo</surname> <given-names>JS</given-names></string-name>, <string-name><surname>Guedes</surname> <given-names>IA</given-names></string-name>, <string-name><surname>Barbosa</surname> <given-names>HJC</given-names></string-name>, <string-name><surname>Dardenne</surname> <given-names>LE</given-names></string-name></person-group>. <article-title>Multi-and many-objective optimization: present and future in <italic>de novo</italic> drug design</article-title>. <source>Front Chem</source>. <year>2023</year>;<volume>11</volume>:<fpage>1288626</fpage>. doi:<pub-id pub-id-type="doi">10.3389/fchem.2023.1288626</pub-id>; <pub-id pub-id-type="pmid">38192501</pub-id></mixed-citation></ref>
<ref id="ref-73"><label>[73]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Azevedo</surname> <given-names>BF</given-names></string-name>, <string-name><surname>Rocha</surname> <given-names>AMAC</given-names></string-name>, <string-name><surname>Pereira</surname> <given-names>AI</given-names></string-name></person-group>. <article-title>Hybrid approaches to optimization and machine learning methods: a systematic literature review</article-title>. <source>Mach Learn</source>. <year>2024</year>;<volume>113</volume>(<issue>7</issue>):<fpage>4055</fpage>&#x2013;<lpage>97</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10994-023-06467-x</pub-id>.</mixed-citation></ref>
<ref id="ref-74"><label>[74]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Alao</surname> <given-names>O</given-names></string-name></person-group>. <article-title>Data driven financial decision-making for minority enterprises: capital access, investment strategies, and creditworthiness optimization</article-title>. <source>Int Res J Moderniz Eng Technol Sci</source>. <year>2025</year>;<volume>7</volume>(<issue>3</issue>):<fpage>9845</fpage>&#x2013;<lpage>64</lpage>.</mixed-citation></ref>
<ref id="ref-75"><label>[75]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Li</surname> <given-names>J</given-names></string-name>, <string-name><surname>Xu</surname> <given-names>S</given-names></string-name>, <string-name><surname>Zheng</surname> <given-names>J</given-names></string-name>, <string-name><surname>Jiang</surname> <given-names>G</given-names></string-name>, <string-name><surname>Ding</surname> <given-names>W</given-names></string-name></person-group>. <article-title>Research on multi-objective evolutionary algorithms based on large-scale decision variable analysis</article-title>. <source>Appl Sci</source>. <year>2024</year>;<volume>14</volume>(<issue>22</issue>):<fpage>10309</fpage>. doi:<pub-id pub-id-type="doi">10.3390/app142210309</pub-id>.</mixed-citation></ref>
<ref id="ref-76"><label>[76]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Madani</surname> <given-names>A</given-names></string-name>, <string-name><surname>Engelbrecht</surname> <given-names>A</given-names></string-name>, <string-name><surname>Ombuki-Berman</surname> <given-names>B</given-names></string-name></person-group>. <article-title>Cooperative coevolutionary multi-guide particle swarm optimization algorithm for large-scale multi-objective optimization problems</article-title>. <source>Swarm Evol Comput</source>. <year>2023</year>;<volume>78</volume>(<issue>1</issue>):<fpage>101262</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.swevo.2023.101262</pub-id>.</mixed-citation></ref>
<ref id="ref-77"><label>[77]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Lin</surname> <given-names>H</given-names></string-name>, <string-name><surname>Yue</surname> <given-names>C</given-names></string-name>, <string-name><surname>Ban</surname> <given-names>X</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Evolutionary constrained multi-objective optimization: a review</article-title>. <source>Vicinagearth</source>. <year>2024</year>;<volume>1</volume>(<issue>1</issue>):<fpage>5</fpage>. doi:<pub-id pub-id-type="doi">10.1007/s44336-024-00006-5</pub-id>.</mixed-citation></ref>
<ref id="ref-78"><label>[78]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Salimian</surname> <given-names>M</given-names></string-name>, <string-name><surname>Ghobaei-Arani</surname> <given-names>M</given-names></string-name>, <string-name><surname>Shahidinejad</surname> <given-names>A</given-names></string-name></person-group>. <article-title>An evolutionary multi-objective optimization technique to deploy the IoT services in fog-enabled networks: an autonomous approach</article-title>. <source>Appl Artif Intell</source>. <year>2022</year>;<volume>36</volume>(<issue>1</issue>):<fpage>2008149</fpage>. doi:<pub-id pub-id-type="doi">10.1080/08839514.2021.2008149</pub-id>.</mixed-citation></ref>
<ref id="ref-79"><label>[79]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Idowu</surname> <given-names>E</given-names></string-name></person-group>. <article-title>Data-driven multi-objective optimization with fairness constraints: balancing efficiency with equity in algorithmic decision-making</article-title>. <source>Preprints</source>. <year>2024 [cited 2025 Aug 16]</year>. Available from: <ext-link ext-link-type="uri" xlink:href="https://www.preprints.org/manuscript/202407.0924/v1">https://www.preprints.org/manuscript/202407.0924/v1</ext-link>.</mixed-citation></ref>
<ref id="ref-80"><label>[80]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Mirjalili</surname> <given-names>S</given-names></string-name>, <string-name><surname>Saremi</surname> <given-names>S</given-names></string-name>, <string-name><surname>Mirjalili</surname> <given-names>SM</given-names></string-name>, <string-name><surname>dos S Coelho</surname> <given-names>L</given-names></string-name></person-group>. <article-title>Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization</article-title>. <source>Expert Syst Appl</source>. <year>2016</year>;<volume>47</volume>(<issue>6</issue>):<fpage>106</fpage>&#x2013;<lpage>19</lpage>. doi:<pub-id pub-id-type="doi">10.1016/j.eswa.2015.10.039</pub-id>.</mixed-citation></ref>
<ref id="ref-81"><label>[81]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Hadka</surname> <given-names>D</given-names></string-name>, <string-name><surname>Reed</surname> <given-names>P</given-names></string-name></person-group>. <article-title>Borg: an auto-adaptive many-objective evolutionary computing framework</article-title>. <source>Evol Comput</source>. <year>2013</year>;<volume>21</volume>(<issue>2</issue>):<fpage>231</fpage>&#x2013;<lpage>59</lpage>. doi:<pub-id pub-id-type="doi">10.1162/EVCO_a_00075</pub-id>; <pub-id pub-id-type="pmid">22385134</pub-id></mixed-citation></ref>
<ref id="ref-82"><label>[82]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Qi</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Ma</surname> <given-names>X</given-names></string-name>, <string-name><surname>Liu</surname> <given-names>F</given-names></string-name>, <string-name><surname>Jiao</surname> <given-names>L</given-names></string-name>, <string-name><surname>Sun</surname> <given-names>J</given-names></string-name>, <string-name><surname>Wu</surname> <given-names>J</given-names></string-name></person-group>. <article-title>MOEA/D with adaptive weight adjustment</article-title>. <source>Evol Comput</source>. <year>2014</year>;<volume>22</volume>(<issue>2</issue>):<fpage>231</fpage>&#x2013;<lpage>64</lpage>. doi:<pub-id pub-id-type="doi">10.1162/EVCO_a_00109</pub-id>; <pub-id pub-id-type="pmid">23777254</pub-id></mixed-citation></ref>
<ref id="ref-83"><label>[83]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Mostaghim</surname> <given-names>S</given-names></string-name>, <string-name><surname>Teich</surname> <given-names>J</given-names></string-name></person-group>. <article-title>Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO)</article-title>. In: <conf-name>Proceedings of the 2003 IEEE Swarm Intelligence Symposium, SIS&#x2019;03; 2003 Apr 26</conf-name>; <publisher-loc>Indianapolis, IN, USA</publisher-loc>. p. <fpage>26</fpage>&#x2013;<lpage>33</lpage>. doi:<pub-id pub-id-type="doi">10.1109/SIS.2003.1202243</pub-id>.</mixed-citation></ref>
<ref id="ref-84"><label>[84]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Deb</surname> <given-names>K</given-names></string-name>, <string-name><surname>Mohan</surname> <given-names>M</given-names></string-name>, <string-name><surname>Mishra</surname> <given-names>S</given-names></string-name></person-group>. <article-title>Evaluating the <italic>Epsilon</italic>-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions</article-title>. <source>Evol Comput</source>. <year>2005</year>;<volume>13</volume>(<issue>4</issue>):<fpage>501</fpage>&#x2013;<lpage>25</lpage>. doi:<pub-id pub-id-type="doi">10.1162/106365605774666895</pub-id>; <pub-id pub-id-type="pmid">16297281</pub-id></mixed-citation></ref>
<ref id="ref-85"><label>[85]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Qiao</surname> <given-names>K</given-names></string-name>, <string-name><surname>Liang</surname> <given-names>J</given-names></string-name>, <string-name><surname>Yu</surname> <given-names>K</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>M</given-names></string-name>, <string-name><surname>Qu</surname> <given-names>B</given-names></string-name>, <string-name><surname>Yue</surname> <given-names>C</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>A self-adaptive evolutionary multi-task based constrained multi-objective evolutionary algorithm</article-title>. <source>IEEE Trans Emerg Top Comput Intell</source>. <year>2023</year>;<volume>7</volume>(<issue>4</issue>):<fpage>1098</fpage>&#x2013;<lpage>112</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TETCI.2023.3236633</pub-id>.</mixed-citation></ref>
<ref id="ref-86"><label>[86]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Chen</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Pan</surname> <given-names>D</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>H</given-names></string-name>, <string-name><surname>Gan</surname> <given-names>W</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>D</given-names></string-name>, <etal>et al</etal></person-group>. <article-title>Dynamic multiobjective evolutionary algorithm with adaptive response mechanism selection strategy</article-title>. <source>Knowl Based Syst</source>. <year>2022</year>;<volume>246</volume>:<fpage>108691</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.knosys.2022.108691</pub-id>.</mixed-citation></ref>
<ref id="ref-87"><label>[87]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Yu</surname> <given-names>G</given-names></string-name>, <string-name><surname>Ma</surname> <given-names>L</given-names></string-name>, <string-name><surname>Wang</surname> <given-names>X</given-names></string-name>, <string-name><surname>Du</surname> <given-names>W</given-names></string-name>, <string-name><surname>Du</surname> <given-names>W</given-names></string-name>, <string-name><surname>Jin</surname> <given-names>Y</given-names></string-name></person-group>. <article-title>Towards fairness-aware multi-objective optimization</article-title>. <source>Complex Intell Syst</source>. <year>2024</year>;<volume>11</volume>(<issue>1</issue>):<fpage>50</fpage>. doi:<pub-id pub-id-type="doi">10.1007/s40747-024-01668-w</pub-id>.</mixed-citation></ref>
<ref id="ref-88"><label>[88]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Liu</surname> <given-names>S</given-names></string-name>, <string-name><surname>Vicente</surname> <given-names>LN</given-names></string-name></person-group>. <article-title>Accuracy and fairness trade-offs in machine learning: a stochastic multi-objective approach</article-title>. <source>Comput Manag Sci</source>. <year>2022</year>;<volume>19</volume>(<issue>3</issue>):<fpage>513</fpage>&#x2013;<lpage>37</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10287-022-00425-z</pub-id>.</mixed-citation></ref>
<ref id="ref-89"><label>[89]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Jiao</surname> <given-names>R</given-names></string-name>, <string-name><surname>Nguyen</surname> <given-names>BH</given-names></string-name>, <string-name><surname>Xue</surname> <given-names>B</given-names></string-name>, <string-name><surname>Zhang</surname> <given-names>M</given-names></string-name></person-group>. <article-title>A survey on evolutionary multiobjective feature selection in classification: approaches, applications, and challenges</article-title>. <source>IEEE Trans Evol Comput</source>. <year>2024</year>;<volume>28</volume>(<issue>4</issue>):<fpage>1156</fpage>&#x2013;<lpage>76</lpage>. doi:<pub-id pub-id-type="doi">10.1109/TEVC.2023.3292527</pub-id>.</mixed-citation></ref>
<ref id="ref-90"><label>[90]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Xue</surname> <given-names>Y</given-names></string-name>, <string-name><surname>Cai</surname> <given-names>X</given-names></string-name>, <string-name><surname>Neri</surname> <given-names>F</given-names></string-name></person-group>. <article-title>A multi-objective evolutionary algorithm with interval based initialization and self-adaptive crossover operator for large-scale feature selection in classification</article-title>. <source>Appl Soft Comput</source>. <year>2022</year>;<volume>127</volume>(<issue>3</issue>):<fpage>109420</fpage>. doi:<pub-id pub-id-type="doi">10.1016/j.asoc.2022.109420</pub-id>.</mixed-citation></ref>
<ref id="ref-91"><label>[91]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Coelho</surname> <given-names>D</given-names></string-name>, <string-name><surname>Madureira</surname> <given-names>A</given-names></string-name>, <string-name><surname>Pereira</surname> <given-names>I</given-names></string-name>, <string-name><surname>Gonc</surname> <given-names>R</given-names></string-name></person-group>. <article-title>Multi-objective evolutionary algorithms and metaheuristics for feature selection: a review</article-title>. <source>Int J Comput Inform Syst Indust Manag Applicat</source>. <year>2022</year>;<volume>14</volume>:<fpage>285</fpage>&#x2013;<lpage>96</lpage>.</mixed-citation></ref>
<ref id="ref-92"><label>[92]</label><mixed-citation publication-type="conf-proc"><person-group person-group-type="author"><string-name><surname>Ishibuchi</surname> <given-names>H</given-names></string-name>, <string-name><surname>Pang</surname> <given-names>LM</given-names></string-name>, <string-name><surname>Shang</surname> <given-names>K</given-names></string-name></person-group>. <article-title>Difficulties in fair performance comparison of multiobjective evolutionary algorithms</article-title>. In: <conf-name>Proceedings of the Genetic and Evolutionary Computation Conference Companion</conf-name>. <publisher-loc>Boston, MA, USA</publisher-loc>: <publisher-name>ACM</publisher-name>; <year>2022</year>. p. <fpage>937</fpage>&#x2013;<lpage>57</lpage>. doi:<pub-id pub-id-type="doi">10.1145/3520304.3533634</pub-id>.</mixed-citation></ref>
<ref id="ref-93"><label>[93]</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><string-name><surname>Morales-Hern&#x00E1;ndez</surname> <given-names>A</given-names></string-name>, <string-name><surname>Van Nieuwenhuyse</surname> <given-names>I</given-names></string-name>, <string-name><surname>Rojas Gonzalez</surname> <given-names>S</given-names></string-name></person-group>. <article-title>A survey on multi-objective hyperparameter optimization algorithms for machine learning</article-title>. <source>Artif Intell Rev</source>. <year>2023</year>;<volume>56</volume>(<issue>8</issue>):<fpage>8043</fpage>&#x2013;<lpage>93</lpage>. doi:<pub-id pub-id-type="doi">10.1007/s10462-022-10359-2</pub-id>.</mixed-citation></ref>
</ref-list>
</back></article>