Pareto Improvement of Pareto-Based Multi-Objective Evolutionary Algorithms

This week’s BEACON Researchers at Work blog post is by Prof. Lihong Xu and Prof. Erik D. Goodman

Prof. Lihong Xu and Prof. Erik D. Goodman

The Greenhouse research team of BEACON, led by Prof. Lihong Xu (Tongji University, China) and Prof. Erik D. Goodman, has been working on BEACON’s international collaboration projects to model, optimize, and control the microclimate inside greenhouses with the aid of evolutionary techniques. We explore the capabilities of evolutionary algorithms for solving problems with multiple conflicting objectives, beginning by adapting several Multi-Objective Evolutionary Algorithms (MOEAs), including NSGA-II, developed by BEACON’s Prof. Kalyanmoy Deb, to explicitly trade off greenhouse production against the associated energy cost. This strategy worked well until we tried to take more objectives (such as control precision) into consideration; one major setback was the undesired deterioration of MOEA performance as the problem dimension increased.

NSGA-II and many other multi-objective optimization techniques use the concept of Pareto domination. One solution dominates another if it is not worse than the other solution with respect to ALL objectives and is better with respect to at least one. We typically seek a set of solutions that are not dominated by others in the set and that are well distributed throughout the solution space. But it is well known that as the number of objectives increases in a multi-objective optimization problem, the fraction of points in the space that are non-dominated rises rapidly: with many objectives, almost every pair of points is better in some objectives and worse in others, so neither one dominates the other. (Here we ignore the effects of constraints, which make solutions feasible or infeasible, and under which any feasible solution dominates any infeasible one.) As shown in Fig. 1, Pareto optimality orthogonally partitions the objective space into three subspaces: “better”, “worse”, and “indifferent” (that is, better in some dimensions and worse in others). It is easy to calculate that the fraction of “better” (or “worse”) solutions decreases exponentially with the problem dimensionality; that is to say, more and more solutions are mutually non-dominating as the number of objectives grows.
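This loss of discriminability is easy to verify numerically. Below is a minimal Python sketch (illustrative, not code from the paper) that implements the dominance test for a minimization problem and uses Monte Carlo sampling to estimate how often two random points in the unit hypercube are mutually non-dominated; for independent uniform points the theoretical value is 1 − 2^(1−m), which approaches 1 rapidly as the number of objectives m grows.

```python
import random

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimization): a is no
    worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_pair_fraction(m, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that two random points
    in the m-dimensional unit hypercube are mutually non-dominated."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.random() for _ in range(m)]
        b = [rng.random() for _ in range(m)]
        if not dominates(a, b) and not dominates(b, a):
            hits += 1
    return hits / trials

# The estimate climbs toward 1 as m grows (theory: 1 - 2**(1 - m)).
for m in (2, 3, 5, 10):
    print(m, round(nondominated_pair_fraction(m), 3))
```

Already at ten objectives, essentially every pair of random points is incomparable, which is exactly why plain Pareto ranking loses its selection pressure.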

Fig. 1. Illustration of the Pareto optimality criterion widely adopted in conventional MOEAs (a solution is Pareto optimal if there is no other solution that is better in at least one dimension and not worse in all others), and a probabilistic analysis of its deficiency (its discriminability drops exponentially with the problem dimensionality).

To counter this Pareto homogenization tendency, Prof. Xu’s former graduate student and former BEACON Visiting Scholar Chenwen Zhu generalized the Pareto optimality notion by controlling the size proportions of these subspaces, essentially by relaxing the orientations of the subspace boundaries from being strictly orthogonal. Notably, his generalization is symmetric, a property not shared by previous generalizations. The main idea of the new optimality criterion was to offer MOEAs consistent solution discriminability, so that objective spaces of any dimension could be treated as “pseudo-two-dimensional” spaces, thereby lessening the aforementioned performance deterioration. To quantitatively compare the selection pressure of different optimality criteria, we proposed the concept of the dominating ratio and adopted the Monte Carlo method to estimate it. Experiments on the greenhouse problem and several benchmark functions showed that this relaxed Pareto optimality did greatly improve the convergence capabilities of Pareto-based MOEAs. However, as David Wheeler observed, though problems in computer science can usually be solved by adding another layer of abstraction, doing so tends to create another problem. Here, the symmetric generalization of Pareto optimality cost us an increase in runtime and a possible loss of solution diversity.
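The exact form of the symmetric generalization is given in the paper [1]; as a purely illustrative stand-in, the sketch below widens the dominance cone symmetrically by blending each objective with the sum of the others (the parameter `alpha` and all function names are our own, not the paper’s), and estimates the resulting dominating ratio by Monte Carlo sampling. Setting `alpha = 0` recovers standard Pareto dominance; any `alpha > 0` strictly enlarges the “better” and “worse” subspaces at the expense of the “indifferent” one.

```python
import random

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def expand(v, alpha):
    """Symmetric relaxation (illustrative only): blend each objective
    with the sum of the others, widening the dominance cone equally in
    all dimensions.  alpha = 0 leaves the objectives unchanged."""
    s = sum(v)
    return [x + alpha * (s - x) for x in v]

def dominating_ratio(m, alpha, trials=20000, seed=1):
    """Monte Carlo estimate of the probability that one random point
    dominates another under the relaxed criterion -- a stand-in for
    the paper's dominating-ratio measure of selection pressure."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.random() for _ in range(m)]
        b = [rng.random() for _ in range(m)]
        if dominates(expand(a, alpha), expand(b, alpha)):
            hits += 1
    return hits / trials
```

Because the relaxed “better” region contains the standard one, the estimated dominating ratio can only grow as `alpha` increases, which is the sense in which the relaxation restores selection pressure in many-objective spaces.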

To address these difficulties, and given the qualitative (ordinal) nature of Pareto optimality, our solution to the runtime issue was to add another abstraction layer (which we called the ordinal space) that presorts all the solutions in each dimension before the dominance calculation procedure. Instead of repeatedly conducting pairwise comparisons of objective values, we exploited semi-ideal solutions and the concept of joint order to complete the same solution-evaluation task with only simple arithmetic on the ordinal information. By taking advantage of this trading-space-for-time technique, three fast algorithms were developed and then tested on both artificial and evolving solution datasets. According to the results collected, the speedup of Pareto-based MOEAs was quite significant, and the runtime overhead of the generalization process could easily be recovered.
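The paper’s semi-ideal-solution and joint-order machinery is more elaborate, but the trading-space-for-time idea can be hinted at with a small sketch: presort each objective once to obtain ordinal ranks, after which some non-dominated solutions can be certified by rank arithmetic alone, with no pairwise objective comparisons. Function names are illustrative and distinct objective values are assumed.

```python
def per_objective_ranks(points):
    """Presort each objective once and record every solution's ordinal
    position in that dimension; O(m * n log n) total work."""
    n, m = len(points), len(points[0])
    ranks = [[0] * m for _ in range(n)]
    for j in range(m):
        order = sorted(range(n), key=lambda i: points[i][j])
        for r, i in enumerate(order):
            ranks[i][j] = r
    return ranks

def certainly_nondominated(points):
    """Indices of solutions holding the best (rank-0) value in some
    objective: assuming distinct values, no other solution can be
    no-worse there, so these are non-dominated -- certified purely
    from ordinal information.  (A quick certificate for a subset of
    the non-dominated set, not a full non-dominated sort.)"""
    ranks = per_objective_ranks(points)
    return [i for i, r in enumerate(ranks) if min(r) == 0]
```

For example, in a minimization problem the solution with the smallest value of any single objective is immediately known to be non-dominated, without ever comparing it pairwise against the rest of the population.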

As for the issue of diversity loss, a distributed evolution architecture with adaptive parameter settings was proposed to implement complementary diversity compensation. In this case, Pareto optimality was further generalized to an asymmetric form, giving MOEAs the flexibility to modify their search directions. Multiple slave processors with independent generalization parameters were first clustered to achieve maximum coverage of the original Pareto optima, and a master processor was then introduced to adaptively tune each parameter so as to minimize the overlap between different slave processors. The scheme was validated on benchmark functions with different types of known Pareto optima (e.g., concave, convex, and disconnected), and for each problem a relatively complete set of solutions was always obtained.
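As a loose illustration of the master/slave idea (not the paper’s actual adaptive scheme), the two-objective sketch below assigns solutions to worker processes by their nearest reference direction in the objective plane; a master could then shift those directions to reduce overlap between workers. The angular clustering rule and all names are our own simplifications.

```python
import math

def reference_angles(k):
    """k evenly spaced search directions across the first quadrant of
    a two-objective plane, one per worker (slave) process."""
    return [(i + 0.5) * (math.pi / 2) / k for i in range(k)]

def assign_to_workers(points, k):
    """Cluster solutions by nearest reference direction so that each
    worker concentrates on one region of the Pareto front.  A master
    process could adapt the angles to minimize overlap between the
    regions the workers cover."""
    angles = reference_angles(k)
    buckets = [[] for _ in range(k)]
    for p in points:
        theta = math.atan2(p[1], p[0])  # angle of the point in objective space
        j = min(range(k), key=lambda i: abs(angles[i] - theta))
        buckets[j].append(p)
    return buckets
```

The point of such a partition is that each worker only has to maintain diversity locally, while the union of the workers’ outputs covers the whole front.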

Fig. 2. Pictorial demonstration of the algorithm framework that improves Pareto-based MOEAs in a Pareto sense. (a) Conventional MOEAs; (b) Symmetric generalization of the Pareto optimality to increase the convergence probability; (c) Ordinal optimization to reduce the runtime; (d) (e) Adaptive distributed evolution to maintain the solution diversity.

Here a natural question arose: since the common performance indices of MOEAs (convergence, runtime, and diversity) could each be taken care of separately, how would MOEAs perform if we put all the proposed techniques together? To answer this question, the Pareto optimality generalization, the ordinal optimization, and the adaptive distributed evolution were synthesized into a unified but modularized framework, GOOD-MOEA (see Fig. 2). Theoretically, this is a versatile algorithm framework that applies to most Pareto-based MOEAs. And just as expected, experimental results on the greenhouse problem and several benchmark functions all suggested that the framework improves the convergence, the runtime, and the solution diversity of Pareto-based MOEAs at the same time; that is, it is an algorithm framework capable of making a Pareto improvement of Pareto-based MOEAs.


The main results above have been published in IEEE Transactions on Evolutionary Computation [1].

For more information about this work, you can contact Prof. Lihong Xu (the corresponding author of the paper cited below) by email:


[1] C. Zhu, L. Xu, and E. D. Goodman, “Generalization of Pareto Optimality for Many-Objective Evolutionary Optimization,” IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 299–315, 2016.

This entry was posted in BEACON Researchers at Work.
