Documentation/Modules/ConstraintHandler
Summary
Some optimization problems define objectives as well as constraints.
For example, one can set for the optimization problem ProblemTruss
the weight of the truss to be minimized, while a constraint is set on the maximum stress and displacement of the truss.
This module aggregates objectives and constraints into a single output.
Properties
General
Algorithm: deterministic (no gradient handling is implemented)
Design Variables: continuous; discrete or mixed variables are also possible
Objectives: any number
Constraints: any number
Boundaries: not affected
Initial Search Region: not affected
Typical X: not affected
Initialization: not required
Connections
Starting at this module: one connection of type optimization
Ending at this module: one connection of type optimization
Actions
No actions are defined for this module.
Options
The options are currently documented only as pop-up help.
Module Description
The result of evaluating a solution may contain objective values f and constraint values g. Objectives are to be minimized (min f), while constraints are to be satisfied (g <= 0).
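As a minimal illustration of this convention, an evaluated truss design such as the one in the Summary could be represented as follows. All variable names and numbers here are illustrative only, not the module's actual data structures:

```python
# Hypothetical evaluation result for one truss design; names and numbers
# are made up for this sketch, not part of the module's API.
weight = 120.5                       # objective value f: to be minimized
stress, stress_limit = 235.0, 250.0
disp, disp_limit = 12.3, 10.0

f = [weight]
# Constraints normalized so that g <= 0 means "satisfied":
g = [stress / stress_limit - 1.0,    # max-stress constraint
     disp / disp_limit - 1.0]        # max-displacement constraint

feasible = all(gj <= 0 for gj in g)  # here False: displacement exceeded
```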
0. No Constraint Handling
This option disables constraint handling; objective and constraint values are passed through unchanged.
1. Delete Constraints
Deletes all constraints. This can be used if the goal is to minimize the objective function(s) only. The objective functions are not changed.
2. Penalty Method
A penalty (greater than or equal to zero) is added to each objective function. All constraint violations (i.e. constraint values g_j > 0) are summed and multiplied by a penalty factor p:
penalized objective i = f_{i} + p*sum_{j}(max(g_{j},0))
Increasing the penalty factor p puts more weight on the constraints relative to the objectives.
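The formula above can be sketched in a few lines. This is a hedged illustration of the penalty method as described, not the module's implementation; the function name is chosen for this sketch:

```python
def penalized_objectives(f, g, p):
    """Penalty method: add p * sum_j max(g_j, 0) to every objective f_i."""
    penalty = p * sum(max(gj, 0.0) for gj in g)
    return [fi + penalty for fi in f]

# Two objectives, three constraints; only g = 0.5 and g = 2.0 are violated,
# so the penalty is 10 * (0.5 + 2.0) = 25 on each objective:
penalized_objectives([1.0, 2.0], [-1.0, 0.5, 2.0], p=10.0)  # [26.0, 27.0]
```

Note that a feasible solution (all g_j <= 0) receives zero penalty and keeps its original objective values.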
3. Stochastic Ranking
Limitations:
- only for single-objective problems
- requires population-based search methods such as CMA-ES
Rough Description of the Main Steps of Stochastic Ranking (1)
The procedure consists of three main steps:
- For each solution, the penalty is computed as the sum of all constraint violations, p = sum_{j}(max(g_{j},0)), as in the Penalty Method.
- The solutions of the population are ranked in random order.
- Adjacent pairs of solutions are compared, either by objective value or by penalty; the comparison by penalty should typically have the higher probability, about 60%. The winner of each comparison takes the better ranking position. These comparison sweeps are repeated several times.
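The steps above can be sketched as a bubble-sort-like procedure. This is a simplified reading of stochastic ranking; the function names, the default number of sweeps, and the early-stopping check are choices of this sketch, not the module's implementation:

```python
import random

def stochastic_rank(f, G, p_pen=0.6, sweeps=None, rng=None):
    """Rank a population best-first by stochastic ranking (sketch).

    f:     objective values (single objective, to be minimized)
    G:     one list of constraint values per solution (g <= 0 is satisfied)
    p_pen: probability of comparing by penalty (~60%, as suggested above)
    """
    rng = rng or random.Random(0)
    n = len(f)
    sweeps = sweeps if sweeps is not None else n
    # Step 1: penalty of each solution, as in the Penalty Method
    pen = [sum(max(g, 0.0) for g in gs) for gs in G]
    # Step 2: start from a random ranking
    order = list(range(n))
    rng.shuffle(order)
    # Step 3: repeated sweeps comparing adjacent pairs
    for _ in range(sweeps):
        swapped = False
        for i in range(n - 1):
            a, b = order[i], order[i + 1]
            # If both solutions are feasible, compare objectives; otherwise
            # compare by penalty with probability p_pen, else by objective.
            if (pen[a] == 0 and pen[b] == 0) or rng.random() >= p_pen:
                worse_first = f[a] > f[b]
            else:
                worse_first = pen[a] > pen[b]
            if worse_first:
                order[i], order[i + 1] = b, a  # winner moves up
                swapped = True
        if not swapped:
            break  # ranking is stable, stop early
    return order
```

With `p_pen=1.0` the comparison becomes deterministic: infeasible solutions sink to the end regardless of their objective value, which recovers a strict feasibility-first ranking.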
Usage

Source Code
References
(1) T. P. Runarsson and X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, Vol. 4, No. 3, pp. 284-294, Sep. 2000.