For example, for the optimization problem ProblemTruss one can set the weight of the truss to be minimized, while constraints are set on the maximum stress and the maximum displacement of the truss.
This module aggregates objectives and constraints into a single output.
| Property | Description |
|---|---|
| Algorithm | deterministic (as no gradient handling is implemented) |
| Design Variables | continuous variables; discrete or mixed variables are possible |
| Initial Search Region | not affected |
| Typical X | not affected |
| Starting at this module | One connection of type |
| Ending at this module | One connection of type |
The options are currently described in the pop-up help.
The result of an evaluation of a solution may contain objective values f and constraint values g: while objectives are to be minimized (min(f)), constraints are to be fulfilled (g <= 0).
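As an illustration, such an evaluation result could be represented as follows. The dictionary layout and the truss numbers are hypothetical, not the module's actual data format:

```python
# Hypothetical evaluation result for the truss example:
# one objective (weight, to be minimized) and two constraints
# (max stress and max displacement, fulfilled when g_j <= 0).
result = {
    "f": [1250.0],      # truss weight (objective value)
    "g": [0.8, -0.2],   # stress constraint violated (0.8 > 0), displacement ok
}

# A solution is feasible only if all constraints are fulfilled.
feasible = all(gj <= 0 for gj in result["g"])
```

Here the stress constraint is violated, so the solution is infeasible even though the displacement constraint is satisfied.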
0. No Constraint Handling
This option turns off the constraint handling. Objective values and constraints are not changed.
1. Delete Constraints
Deletes all constraints. This can be used if the goal is to minimize the objective function(s) only. The objective functions are not changed.
2. Penalty Method
A penalty (greater than or equal to zero) is added to each objective function: all violated constraints (i.e. constraints with g_j > 0) are summed and multiplied by a penalty factor p:

penalized objective_i = f_i + p * sum_j max(g_j, 0)

Increasing the penalty factor p puts more weight on the constraints compared to the objectives.
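The penalty formula above can be sketched as a small helper (the function name is illustrative, not part of the module's API):

```python
def penalized_objectives(f, g, p=1.0):
    """Penalty Method sketch: add the summed constraint violations,
    scaled by the penalty factor p, to each objective value:
        penalized_i = f_i + p * sum_j max(g_j, 0)
    Only violated constraints (g_j > 0) contribute to the penalty.
    """
    violation = sum(max(gj, 0.0) for gj in g)
    return [fi + p * violation for fi in f]
```

For example, with f = [10.0], g = [0.5, -1.0] and p = 2.0, only the first constraint is violated, giving a penalized objective of 10.0 + 2.0 * 0.5 = 11.0.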
3. Stochastic Ranking
- only for single-objective problems
- requires population-based search methods such as CMA-ES

Rough Description of the Main Steps of Stochastic Ranking (1)
- For each solution, the penalty is computed as the sum of all violated constraints (i.e. constraints with g_j > 0), p = sum_j max(g_j, 0), as in the Penalty Method.
- The solutions of the population are randomly ranked.
- Two adjacent solutions of the population are compared, either based on their objective value or based on their penalty. Typically the comparison according to the penalty should have the higher probability, about 60%. The winner of the comparison obtains the better ranking position.
- The comparison is repeated several times over all adjacent pairs.
(1) T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, pp. 284-294, Sep. 2000.
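The ranking procedure described above amounts to a stochastic bubble sort, which can be sketched as follows. This is a simplified illustration of the idea in (1), not the module's implementation; function and parameter names are hypothetical:

```python
import random

def penalty(g):
    """Sum of violated constraints (g_j > 0), as in the Penalty Method."""
    return sum(max(gj, 0.0) for gj in g)

def stochastic_ranking(solutions, p_obj=0.4, sweeps=None):
    """Rank a population of (f, g) pairs via stochastic bubble sort.

    solutions: list of (f, g) with a scalar objective f and a list of
               constraint values g.
    p_obj:     probability of comparing by objective when a penalty is
               involved; the penalty comparison then has the higher
               probability 1 - p_obj (about 60%, as suggested above).
    Returns solution indices ordered from best to worst rank.
    """
    n = len(solutions)
    idx = list(range(n))
    random.shuffle(idx)          # start from a random ranking
    sweeps = sweeps or n
    for _ in range(sweeps):      # repeat the pairwise comparisons
        swapped = False
        for i in range(n - 1):   # compare adjacent solutions
            (fa, ga), (fb, gb) = solutions[idx[i]], solutions[idx[i + 1]]
            pa, pb = penalty(ga), penalty(gb)
            # Both feasible: compare by objective; otherwise compare by
            # objective only with probability p_obj, else by penalty.
            if (pa == 0 and pb == 0) or random.random() < p_obj:
                better_first = fa <= fb
            else:
                better_first = pa <= pb
            if not better_first:
                idx[i], idx[i + 1] = idx[i + 1], idx[i]
                swapped = True
        if not swapped:          # ranking is stable, stop early
            break
    return idx
```

When the whole population is feasible, all comparisons fall back to the objective values, so the ranking reduces to an ordinary sort by f.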